Channel: KitPloit - PenTest Tools!

SwiftyInsta - Instagram Unofficial Private API Swift



Instagram offers two kinds of APIs to developers. The Instagram API Platform (extremely limited in functionality and close to being discontinued), and the Instagram Graph API for Business and Creator accounts only.

However, Instagram apps rely on a third type of API, the so-called Private API or Unofficial API, and SwiftyInsta is an iOS, macOS, tvOS and watchOS client for them, written entirely in Swift. You can try and create a better Instagram experience for your users, or write bots for automating different tasks.

These Private APIs require no token or app registration, but they're not authorized by Instagram for external use. Use them at your own risk.


Installation

Swift Package Manager (Xcode 11 and above)
  1. Select File/Swift Packages/Add Package Dependency… from the menu.
  2. Paste https://github.com/TheM4hd1/SwiftyInsta.git.
  3. Follow the steps.

CocoaPods

CocoaPods is a dependency manager for Cocoa projects. You can install it with the following command:

$ gem install cocoapods  

To integrate SwiftyInsta into your Xcode project using CocoaPods, specify it in your Podfile:

use_frameworks!

target '<Your Target Name>' do
    pod 'SwiftyInsta', '~> 2.0'
end

Then, run the following command:

$ pod install  

SwiftyInsta depends on CryptoSwift and keychain-swift.


Login

Credentials
Authenticate with the user's credentials as shown in the example under Authentication.Response below. Once the user has typed the two factor authentication code or challenge code, you simply do

self.credentials.code = /* the code */

And the completionHandler in the previous authenticate(with:completionHandler:) will automatically catch the response.


LoginWebViewController (>= iOS 12 only)

Present the library's LoginWebViewController to let the user sign in through Instagram's web login flow, or implement your own custom UIViewController using LoginWebView, and pass it to an APIHandler authenticate method using .webView(/* your login web view */).


Authentication.Response

The credentials flow below shows how to log in and persist a user's Authentication.Response for future sessions:

// these need to be strong references.
self.credentials = Credentials(username: /* username */, password: /* password */, verifyBy: .text)
self.handler = APIHandler()
handler.authenticate(with: .user(credentials)) {
    switch $0 {
    case .success(let response, _):
        print("Login successful.")
        // persist cache safely in the keychain for logging in again in the future.
        guard let key = response.persist() else { return print("`Authentication.Response` could not be persisted.") }
        // store the `key` wherever you want, so you can access the `Authentication.Response` later.
        // `UserDefaults` is just an example.
        UserDefaults.standard.set(key, forKey: "current.account")
        UserDefaults.standard.synchronize()
    case .failure(let error):
        if error.requiresInstagramCode {
            /* update interface to ask for code */
        } else {
            /* notify the user */
        }
    }
}

Usage

All endpoints are easily accessible from your APIHandler instance.

Furthermore, responses now expose every single value contained in the JSON returned by the API: just access any ParsedResponse's rawResponse and start browsing, or stick with the suggested accessors (e.g. User's username, name, etc. and Media's aspectRatio, takenAt, content, etc.).




Kraken - Cross-platform Yara Scanner Written In Go



Kraken is a simple cross-platform Yara scanner that can be built for Windows, Mac, FreeBSD and Linux. It is primarily intended for incident response, research and ad-hoc detections (not for endpoint protection). Following are the core features:

  • Scan running executables and memory of running processes with provided Yara rules (leveraging go-yara).
  • Scan executables installed for autorun (leveraging go-autoruns).
  • Scan the filesystem with the provided Yara rules.
  • Report any detection to a remote server provided with a Django-based web interface.
  • Run continuously and periodically check for new autoruns and scan any newly-executed processes. Kraken will store events in a local SQLite3 database and will keep copies of autorun and detected executables.

Some features are still under work or almost completed:

  • Installer and launcher to automatically start Kraken at startup.
  • Download updated Yara rules from the server.

Screenshots




How to use

Launch Kraken with any of the available options:

Usage of kraken:
--backend string Specify a particular hostname to the backend to connect to (overrides the default)
--daemon Enable daemon mode (this will also enable the report flag)
--debug Enable debug logs
--folder string Specify a particular folder to be scanned (overrides the default full filesystem)
--no-autoruns Disable scanning of autoruns
--no-filesystem Disable scanning of filesystem
--no-process Disable scanning of running processes
--report Enable reporting of events to the backend
--rules Specify a particular path to a file or folder containing the Yara rules to use
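
For example, a one-off scan of a specific folder with your own rule set, or a daemonized run reporting to a remote backend, could be launched roughly as follows (the rule path and backend hostname are placeholders, not values shipped with Kraken):

kraken --rules /path/to/rules --folder /opt/software --debug
kraken --daemon --backend backend.example.org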

User Guide

For details on how to install, use and build Kraken you should refer to the User Guide. The original source files for the documentation are available here, please open any issue or pull request pertinent to documentation there.



Tempomail - Generate A Custom Email Address In 1 Second And Receive Emails



tempomail is a standalone binary that allows you to create a temporary email address in one second and receive emails. It uses 1secmail's API. No dependencies required!


Installation

From Binary

Download the pre-built binaries for different platforms from the releases page. Extract the archive using tar, move the binary to your $PATH, and you're ready to go.

▶ # download release from https://github.com/kavishgr/tempomail/releases/
▶ tar -xzvf linux-amd64-tempomail.tgz
▶ mv tempomail /usr/local/bin/
▶ tempomail -h

From Github
git clone https://github.com/kavishgr/tempomail.git
cd tempomail
go build .
mv tempomail /usr/local/bin/ #OR $HOME/go/bin
tempomail -h

Usage

By default, all emails are saved in /tmp/1secmails/. It has only one flag, --path, to specify a directory in which to store your emails:

Usage of tempomail:
-path string
specify directory to store emails (default "/tmp/1secmails/")

Press CTRL+C (or send SIGTERM) to quit and delete all received emails.
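
For instance, to keep received emails in a directory of your choice rather than the default (the path below is just an example):

tempomail -path ~/tempomail-inbox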


Does it need improvement?

Open an issue.


TODO
  • Download Attachments


GWTMap - Tool to help map the attack surface of Google Web Toolkit


GWTMap is a tool to help map the attack surface of Google Web Toolkit (GWT) based applications. The purpose of this tool is to facilitate the extraction of any service method endpoints buried within a modern GWT application's obfuscated client-side code, and attempt to generate example GWT-RPC request payloads to interact with them.


More information can be found here: https://labs.f-secure.com/blog/gwtmap-reverse-engineering-google-web-toolkit-applications.


Requirements

The script requires Python3, argparse, and requests to run. They can be installed using the following command:

python -m pip install -r requirements.txt

Usage

Help
$ ./gwtmap.py -h
usage: gwtmap.py [-h] [--version] [-u <TARGET_URL>] -F <FILE> [-b <BASE_URL>] [-p <PROXY>] [-c <COOKIES>] [-f <FILTER>] [--basic] [--rpc] [--probe] [--svc] [--code] [--color] [--backup [DIR]] [-q]

Enumerates GWT-RPC methods from {hex}.cache.js permutation files

Arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-u <TARGET_URL>, --url <TARGET_URL>
URL of the target GWT {name}.nocache.js bootstrap or {hex}.cache.js file
-F <FILE>, --file <FILE>
path to the local copy of a {hex}.cache.js GWT permutation file
-b <BASE_URL>, --base <BASE_URL>
specifies the base URL for a given permutation file in -F/--file mode
-p <PROXY>, --proxy <PROXY>
URL for an optional HTTP proxy (e.g. -p http://127.0.0.1:8080)
-c <COOKIES>, --cookies <COOKIES>
any cookies required to access the remote resource in -u/--url mode (e.g. 'JSESSIONID=ABCDEF; OTHER=XYZABC')
-f <FILTER>, --filter <FILTER>
case-sensitive method filter for output (e.g. -f AuthSvc.checkSession)
--basic enables HTTP Basic authentication if required. Prompts for credentials
--rpc attempts to generate a serialized RPC request for each method
--probe sends an HTTP probe request to test each method returned in --rpc mode
--svc displays enumerated service information, in addition to methods
--code skips all and dumps the 're-formatted' state of the provided resource
--color enables console output colors
--backup [DIR] creates a local backup of retrieved code in -u/--url mode
-q, --quiet enables quiet mode (minimal output)

Example: ./gwtmap.py -u "http://127.0.0.1/example/example.nocache.js" -p "http://127.0.0.1:8080" --rpc

Usage

Enumerate the methods of a remote application via its bootstrap file and create a local backup of the code (selects a permutation at random):

./gwtmap.py -u http://192.168.22.120/olympian/olympian.nocache.js --backup

Enumerate the methods of a remote application via a specific code permutation

./gwtmap.py -u http://192.168.22.120/olympian/C39AB19B83398A76A21E0CD04EC9B14C.cache.js

Enumerate the methods whilst routing traffic through an HTTP proxy:

./gwtmap.py -u http://192.168.22.120/olympian/olympian.nocache.js --backup -p http://127.0.0.1:8080

Enumerate the methods of a local copy (a file) of any given permutation:

./gwtmap.py -F test_data/olympian/C39AB19B83398A76A21E0CD04EC9B14C.cache.js

Filter output to a specific service or method:

./gwtmap.py -u http://192.168.22.120/olympian/olympian.nocache.js --filter AuthenticationService.login

Generate RPC payloads for all methods of the filtered service, with coloured output

./gwtmap.py -u http://192.168.22.120/olympian/olympian.nocache.js --filter AuthenticationService --rpc --color

Automatically test (probe) the generated RPC request for the filtered service method

./gwtmap.py -u http://192.168.22.120/olympian/olympian.nocache.js --filter AuthenticationService.login --rpc --probe

Complete Examples

Generate an RPC request for the method "testDetails", and automatically probe the service

$ ./gwtmap.py -u http://192.168.22.120/olympian/olympian.nocache.js --filter TestService.testDetails --rpc --probe   

___| \ / __ __| \ | \ _ \
| \ \ / | |\/ | _ \ | |
| | \ \ / | | | ___ \ ___/
\____| _/\_/ _| _| _| _/ _\ _|
version 0.1

[+] Analysing
====================
http://192.168.22.120/olympian/olympian.nocache.js
Permutation: http://192.168.22.120/olympian/4DE825BB25A8D7B3950D45A81EA7CD84.cache.js
+ fragment : http://192.168.22.120/olympian/deferredjs/4DE825BB25A8D7B3950D45A81EA7CD84/1.cache.js
+ fragment : http://192.168.22.120/olympian/deferredjs/4DE825BB25A8D7B3950D45A81EA7CD84/2.cache.js


[+] Module Info
====================
GWT Version: 2.9.0
Content-Type: text/x-gwt-rpc; charset=utf-8
X-GWT-Module-Base: http://192.168.22.120/olympian/
X-GWT-Permutation: 4DE825BB25A8D7B3950D45A81EA7CD84
RPC Version: 7
RPC Flags: 0


[+] Methods Found
====================

----- TestService -----

TestService.testDetails( java.lang.String/2004016611, java.lang.String/2004016611, I, D, java.lang.String/2004016611 )
POST /olympian/testService HTTP/1.1
Host: 192.168.22.120
Content-Type: text/x-gwt-rpc; charset=utf-8
X-GWT-Permutation: 4DE825BB25A8D7B3950D45A81EA7CD84
X-GWT-Module-Base: http://192.168.22.120/olympian/
Content-Length: 262

7|0|10|http://192.168.22.120/olympian/|67E3923F861223EE4967653A96E43846|com.ecorp.olympian.client.asyncService.TestService|testDetails|java.lang.String/2004016611|D|I|§param_Bob§|§param_Smith§|§param_"Im_a_test"§|1|2|3|4|5|5|5|7|6|5|8|9|§32§|§76.6§|10|

HTTP/1.1 200
//OK[1,["Name: param_Bob param_Smith\nAge: 32\nWeight: 76.6\nBio: param_\"Im_a_test\"\n"],0,7]


[+] Summary
====================
Showing 1/5 Services
Showing 1/25 Methods


Threagile - Agile Threat Modeling Toolkit



Threagile (see https://threagile.io for more details) is an open-source toolkit for agile threat modeling:

It lets you model an architecture with its assets in an agile fashion as a YAML file directly inside the IDE. Upon execution of the Threagile toolkit, all standard risk rules (as well as individual custom rules, if present) are checked against the architecture model.


Execution via Docker Container

The easiest way to execute Threagile on the commandline is via its Docker container:

docker run --rm -it threagile/threagile


_____ _ _ _
|_ _| |__ _ __ ___ __ _ __ _(_) | ___
| | | '_ \| '__/ _ \/ _` |/ _` | | |/ _ \
| | | | | | | | __/ (_| | (_| | | | __/
|_| |_| |_|_| \___|\__,_|\__, |_|_|\___|
|___/
Threagile - Agile Threat Modeling


Documentation: https://threagile.io
Docker Images: https://hub.docker.com/r/threagile
Sourcecode: https://github.com/threagile
License: Open-Source (MIT License)

Usage: threagile [options]


Options:

-background string
background pdf file (default "background.pdf")
-create-editing-support
just create some editing support stuff in the output directory
-create-example-model
just create an example model named threagile-example-model.yaml in the output directory
-create-stub-model
just create a minimal stub model named threagile-stub-model.yaml in the output directory
-custom-risk-rules-plugins string
comma-separated list of plugins (.so shared object) file names with custom risk rules to load
-diagram-dpi int
DPI used to render: maximum is 240 (default 120)
-execute-model-macro string
Execute model macro (by ID)
-generate-data-asset-diagram
generate data asset diagram (default true)
-generate-data-flow-diagram
generate data-flow diagram (default true)
-generate-report-pdf
generate report pdf, including diagrams (default true)
-generate-risks-excel
generate risks excel (default true)
-generate-risks-json
generate risks json (default true)
-generate-stats-json
generate stats json (default true)
-generate-tags-excel
generate tags excel (default true)
-generate-technical-assets-json
generate technical assets json (default true)
-ignore-orphaned-risk-tracking
ignore orphaned risk tracking (just log them) not matching a concrete risk
-list-model-macros
print model macros
-list-risk-rules
print risk rules
-list-types
print type information (enum values to be used in models)
-model string
input model yaml file (default "threagile.yaml")
-output string
output directory (default ".")
-print-3rd-party-licenses
print 3rd-party license information
-print-license
print license information
-raa-plugin string
RAA calculation plugin (.so shared object) file name (default "raa.so")
-server int
start a server (instead of commandline execution) on the given port
-skip-risk-rules string
comma-separated list of risk rules (by their ID) to skip
-verbose
verbose output
-version
print version


Examples:

If you want to create an example model (via docker) as a starting point to learn about Threagile just run:
docker run --rm -it -v "$(pwd)":/app/work threagile/threagile -create-example-model -output /app/work

If you want to create a minimal stub model (via docker) as a starting point for your own model just run:
docker run --rm -it -v "$(pwd)":/app/work threagile/threagile -create-stub-model -output /app/work

If you want to execute Threagile on a model yaml file (via docker):
docker run --rm -it -v "$(pwd)":/app/work threagile/threagile -verbose -model /app/work/threagile.yaml -output /app/work

If you want to run Threagile as a server (REST API) on some port (here 8080):
docker run --rm -it --shm-size=256m -p 8080:8080 --name threagile-server --mount 'type=volume,src=threagile-storage,dst=/data,readonly=false' threagile/threagile -server 8080

If you want to find out about the different enum values usable in the model yaml file:
docker run --rm -it threagile/threagile -list-types

If you want to use some nice editing help (syntax validation, autocompletion, and live templates) in your favourite IDE:
docker run --rm -it -v "$(pwd)":/app/work threagile/threagile -create-editing-support -output /app/work

If you want to list all available model macros (which are macros capable of reading a model yaml file, asking you questions in a wizard-style and then update the model yaml file accordingly):
docker run --rm -it threagile/threagile -list-model-macros

If you want to execute a certain model macro on the model yaml file (here the macro add-build-pipeline):
docker run --rm -it -v "$(pw d)":/app/work threagile/threagile -model /app/work/threagile.yaml -output /app/work -execute-model-macro add-build-pipeline


JSMon - JavaScript Change Monitor for BugBounty



Using this script, you can configure a number of JavaScript files on websites that you want to monitor. Every time you run this script, these files are fetched and compared to the previously fetched version. If they have changed, you will be notified via Telegram with a message containing a link to the script, the changed file sizes, and a diff file to inspect the changes easily.



Installation

To install JSMon:

git clone https://github.com/robre/jsmon.git 
cd jsmon
python setup.py install

You need to set up your Slack or Telegram token in the environment, e.g. by creating a .env file (touch .env) with the contents:

JSMON_NOTIFY_TELEGRAM=True
JSMON_TELEGRAM_TOKEN=YOUR TELEGRAM TOKEN
JSMON_TELEGRAM_CHAT_ID=YOUR TELEGRAM CHAT ID
#JSMON_NOTIFY_SLACK=True
#JSMON_SLACK_TOKEN=sometoken
#JSMON_SLACK_CHANNEL_ID=somechannel

To enable Slack, uncomment the Slack lines in the .env file and add your token.

To create a cron script to run JSMon regularly:

crontab -e

create an entry like this:

@daily /path/to/jsmon.sh

Note that you should run the .sh file, because otherwise the environment will be messed up.

This will run JSMon once a day, at midnight. You can change @daily to whatever schedule suits you.
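
For example, a crontab entry that runs the wrapper script every six hours instead would look like this (the path is whatever location you placed jsmon.sh in):

0 */6 * * * /path/to/jsmon.sh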

To configure Telegram notifications, you need to add your Telegram API key and chat_id to the code, at the start of jsmon.py. You can read how to get these values here.

Note: for Slack support, you need to set up your Slack app correctly and use the Slack OAuth token. The app needs to have file upload rights and needs to be in the channel that you want it in. Lastly, you need to get started with some targets that you want to monitor. Let's create an example:

echo "https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.js" >> targets/cdnjs-example

All done! Now you can run python jsmon.py to download the specified files for the first time.


Features
  • Keep Track of endpoints - check them in a configurable interval (using cron)
  • when endpoints change - send a notification via Telegram or Slack

Usage
  • Provide endpoints via files in the targets/ directory (line-separated endpoints)

    • any number of files, with one endpoint per line
    • e.g. one file per website, or one file per program, etc.
  • Every endpoint gets downloaded and stored in downloads/ with its hash as file name (first 10 chars of md5 hash)

    • if it already exists nothing changes
    • if it is changed, user gets notified
  • jsmon.json keeps track of which endpoints are associated with which filehashes

  • jsmon is designed to keep track of JavaScript files on websites, but endpoints of any filetype can be added


Contributors

@r0bre - Core

@Yassineaboukir - Slack Notifications



Hetty - An HTTP Toolkit For Security Research


Hetty is an HTTP toolkit for security research. It aims to become an open source alternative to commercial software like Burp Suite Pro, with powerful features tailored to the needs of the infosec and bug bounty community.


Features
  • Man-in-the-middle (MITM) HTTP/1.1 proxy with logs
  • Project based database storage (SQLite)
  • Scope support
  • Headless management API using GraphQL
  • Embedded web interface (Next.js)

Hetty is in early development. Additional features are planned for a v1.0 release. Please see the backlog for details. 

Installation

Hetty compiles to a self-contained binary, with an embedded SQLite database and web based admin interface.


Install pre-built release (recommended)

Downloads for Linux, macOS and Windows are available on the releases page.
Build from source

Prerequisites

Hetty depends on SQLite (via mattn/go-sqlite3) and needs cgo to compile. Additionally, the static resources for the admin interface (Next.js) need to be generated via Yarn and embedded in a .go file with go.rice beforehand.

Clone the repository and use the build make target to create a binary:

$ git clone git@github.com:dstotijn/hetty.git
$ cd hetty
$ make build

Docker

A Docker image is available on Docker Hub: dstotijn/hetty. For persistent storage of CA certificates and project databases, mount a volume:

$ mkdir -p $HOME/.hetty
$ docker run -v $HOME/.hetty:/root/.hetty -p 8080:8080 dstotijn/hetty

Usage

When Hetty is run, by default it listens on :8080 and is accessible via http://localhost:8080. Depending on incoming HTTP requests, it either acts as a MITM proxy, or it serves the API and web interface.

By default, project database files and CA certificates are stored in a .hetty directory under the user's home directory ($HOME on Linux/macOS, %USERPROFILE% on Windows).

To start, ensure hetty (downloaded from a release, or manually built) is in your $PATH and run:

$ hetty

An overview of configuration flags:

$ hetty -h
Usage of ./hetty:
-addr string
TCP address to listen on, in the form "host:port" (default ":8080")
-adminPath string
File path to admin build
-cert string
CA certificate filepath. Creates a new CA certificate if file doesn't exist (default "~/.hetty/hetty_cert.pem")
-key string
CA private key filepath. Creates a new CA private key if file doesn't exist (default "~/.hetty/hetty_key.pem")
-projects string
Projects directory path (default "~/.hetty/projects")

You should see:

2020/11/01 14:47:10 [INFO] Running server on :8080 ...

Then, visit http://localhost:8080 to get started.
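
As an illustration of the flags listed above, a run that uses an existing CA key pair and a custom projects directory might look like this (all paths are placeholders):

$ hetty -cert ~/certs/hetty_cert.pem -key ~/certs/hetty_key.pem -projects ~/hetty-projects -addr :8080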

Detailed documentation is under development and will be available soon. 

Certificate Setup and Installation

In order for Hetty to proxy requests going to HTTPS endpoints, a root CA certificate for Hetty will need to be set up. Furthermore, the CA certificate may need to be installed on the host for it to be trusted by your browser. The following steps cover how you can generate your certificate, provide it to Hetty, and install it in your local CA store.

This process was done on a Linux machine but should provide guidance for Windows and macOS as well.
Generating CA certificates

You can generate a CA keypair in two different ways. The first is bundled directly with Hetty and simplifies the process immensely. The alternative is using OpenSSL to generate it, which provides more control over expiration time and the cryptography used, but requires you to install the OpenSSL tooling. The first is suggested for beginners trying to get started.


Generating CA certificates with hetty

Hetty will generate the default key and certificate on its own if none are supplied or found in ~/.hetty/ when first running the CLI. To generate a default key and certificate with hetty, simply run the command with no arguments:

hetty

You should now have a key and certificate located at ~/.hetty/hetty_key.pem and ~/.hetty/hetty_cert.pem respectively.


Generating CA certificates with OpenSSL

You can start off by generating a new key and CA certificate which will both expire after a month.

mkdir ~/.hetty
openssl req -newkey rsa:2048 -new -nodes -x509 -days 31 -keyout ~/.hetty/hetty_key.pem -out ~/.hetty/hetty_cert.pem

The default location which hetty will check for the key and CA certificate is under ~/.hetty/, at hetty_key.pem and hetty_cert.pem respectively. You can move them here and hetty will detect them automatically. Otherwise, you can specify the location of these as arguments to hetty.

hetty -key key.pem -cert cert.pem

Trusting the CA certificate

In order for your browser to allow traffic to the local Hetty proxy, you may need to install these certificates to your local CA store.

On Ubuntu, you can update your local CA store with the certificate by running the following commands:

sudo cp ~/.hetty/hetty_cert.pem /usr/local/share/ca-certificates/hetty.crt
sudo update-ca-certificates

On Windows, you would add your certificate by using the Certificate Manager. You can launch that by running the command:

certmgr.msc

On macOS, you can add your certificate by using the Keychain Access program. This can be found under Application/Utilities/Keychain Access.app. After opening this, drag the certificate into the app. Next, open the certificate in the app, enter the Trust section, and under When using this certificate select Always Trust.

Note: Various Linux distributions may require other steps or commands for updating their certificate authority. See the documentation relevant to your distribution for more information on how to update the system to trust your self-signed certificate.


Vision and roadmap
  • Fast core/engine, built with Go, with a minimal memory footprint.
  • Easy to use admin interface, built with Next.js and Material UI.
  • Headless management, via GraphQL API.
  • Extensibility is top of mind. All modules are written as Go packages, to be used by Hetty, but also as libraries by other software.
  • Pluggable architecture for MITM proxy, projects, scope. It should be possible to build a plugin system in the (near) future.
  • Based on feedback and real-world usage of pentesters and bug bounty hunters.
  • Aim for a relatively small core feature set that the majority of security researchers need.

Support

Use issues for bug reports and feature requests, and discussions for questions and troubleshooting.


Community
Join the Hetty Discord server
 
Contributing

Want to contribute? Great! Please check the Contribution Guidelines for details.


Acknowledgements

2020 David Stotijn — Twitter, Email



ShowStopper - Anti-Debug tricks exploration tool



The ShowStopper project is a tool to help malware researchers explore and test anti-debug techniques or verify debugger plugins or other solutions that clash with standard anti-debug methods.
With this tool, you can attach a debugger to its process and research the debugger’s behavior for the techniques you need (the virtual addresses of functions that apply to anti-debug techniques are printed to console) and compare them with their implementation. The tool includes a varied set of different techniques from multiple sources, including real-world malware and published documents and articles. The implemented techniques work for the latest Windows releases and for different modern debuggers.


Documentation

How to install and use the tool, and how to contribute your findings, is covered in the project's documentation.


System Requirements
  • Windows 7, 8, 8.1, 10 (x86/x86-64)
  • 32-Bit debuggers (OllyDbg, x32dbg, WinDbg, etc.)

References

Contributed by Check Point Software Technologies LTD.
Programmed by Yaraslau Harakhavik



PCWT - A Web Application That Makes It Easy To Run Your Pentest And Bug Bounty Projects



A web application that makes it easy to run your pentest and bug bounty projects.


Description

The app provides a convenient web interface for working with the various types of files that are used during a pentest, and automates port scanning and subdomain search.


Main page


 

Project settings



Domains dashboard


 

Port scan

You can scan ports using nmap or masscan. nmap is started with the following arguments:

nmap --top-ports 10000 -sV -Pn --min-rate 300 --max-retries 2 [ip]

The masscan is started with the following arguments:

masscan -p 1-65535 --rate 2000

Subdomain search

Amass and findomain are used to find subdomains.


Features
  • Leave notes to host, port or domain.
  • Mark host or domain with tags.
  • Search by any field related with host, port or domain (tags and notes are included). Regexp is available.
  • Different types of sorting are available on almost all dashboards.
  • Run port scan for all hosts, hosts without port scan or custom list.
  • Create tasks for subdomains search (every 2 hours, every 5 hours, every day or every week). You can also disable and enable them on demand using Subdomain tasks dashboard.
  • Different types of export are available.
  • Notifications about the start and end of the scan, as well as about new found domains can be sent to Telegram. Update the config.py with your chat id and token.

Install from sources

NOTE 1: Change the paths for amass, findomain, nmap and masscan in config.py before running commands.
NOTE 2: The app must be started as root if you want masscan to work.

apt install python3 python-venv python3-pip
git clone https://github.com/ascr0b/PCWT
cd PCWT

python3 -m venv env
source env/bin/activate
pip3 install -r requirements.txt

flask init-db
flask crontab add

export FLASK_APP=app
flask run

The app is available at http://127.0.0.1:5000



ReconNote - Web Application Security Automation Framework Which Recons The Target For Various Assets To Maximize The Attack Surface For Security Professionals & Bug-Hunters



Web Application Security Recon Automation Framework

It takes a domain name as user input and maximizes the attack surface area by listing the assets of the domain, such as:

  • Subdomains from Amass, findomain, subfinder & resolvable subdomains using shuffledns
  • Screenshots
  • Port Scan
  • JS files
  • Httpx Status codes of subdomains
  • Dirsearch file/dir paths by fuzzing

Installation

1 - Install Docker & docker-compose according to your OS from here - https://docs.docker.com/get-docker/
2 - git clone https://github.com/0xdekster/ReconNote.git
3 - Open docker-compose.yml & change the volumes directory path to the output folder

example -

volumes:
  - /root/reconnote/output/:/var/www/html

4 - Change the API_HOST parameter value to your server/host ip or domain name.
5 - Run docker-compose build OR docker-compose build --no-cache
6 - Run docker-compose up -d
7 - Reconnote framework will be up at - {your-server}:3000


Set Amass Config File to set API Keys

1- cd /ReconNote
2- docker exec -it reconnote_dekster_1 bash
3- cd /deksterrecon
4- nano amass-config.ini
5- Set your API keys and save, exit.


Usage

1 - Just enter domain/target name in Add Target & choose scan type
2 - Everything will be done by ReconNote and in a few minutes you will get the scan results



Scan Result



Demo Video


 Contributions

This is an open source project, so contributions are welcome. You can submit a PR for any changes that enhance the ReconNote framework, be it UI enhancements, tool adjustments, features, etc.


Acknowledgements

The ReconNote framework has been created using open source security tools made by the amazing security community -

1- Eduard Tolosa
2- Tomnomnom
3- Michenriksen
4- Project Discovery
5- Corben Leo



paradoxiaRAT - Native Windows Remote Access Tool

$
0
0


Paradoxia Remote Access Tool. 

Features

Paradoxia Console

Feature                     Description
Easy to use                 Paradoxia is extremely easy to use, so far the easiest RAT!
Root Shell                  -
Automatic Client build      Build Paradoxia Client easily with or without the icon of your choice.
Multithreaded               Multithreaded Console server, you can get multiple sessions.
Toast Notifications         Desktop notification on new session.
Configurable Settings       Configurable values in paradoxia.ini.
Kill Sessions               Kill sessions without getting in session.
View Session information    View session information without getting in session.

Paradoxia Client

Feature                     Description
Stealth                     Runs in background.
Full File Access            Full access to the entire file system.
Persistence                 Installs inside APPDATA and has startup persistence via Registry key.
Upload / Download Files     Upload and download files.
Screenshot                  Take screenshot.
Mic Recording               Record Microphone.
Chrome Password Recovery    Dump Chrome Passwords using Reflective DLL (does not work on latest version).
Keylogger                   Log Keystrokes and save to file via Reflective DLL.
Geolocate                   Geolocate Paradoxia Client.
Process Info                Get Process information.
DLL Injection               Reflective DLL Injection over Socket, load your own Reflective DLL, or use ones available here.
Power off                   Power off the Client system.
Reboot                      Reboot the client system.
MSVC + MINGW Support        Visual Studio project is also included.
Reverse Shell               Stable Reverse Shell.
Small Client                Maximum size is 30kb without icon.

Installation (via APT)
$ git clone https://github.com/quantumcored/paradoxia
$ cd paradoxia
$ sudo ./install.sh

Example Usage :
  • Run Paradoxia
sudo python3 paradoxia.py
  • Once in the Paradoxia console, the first step is to build the client, preferably with an icon.


 

  • After it's built, as you can see below, it is detected by Windows Defender as severe malware, which is expected since it IS malware.


 

  • I'm going to transfer the client to a Windows 10 virtual machine and execute it. After executing it, it appears under Startup programs in Task Manager.


 

  • It has also copied itself into the AppData directory and installed itself under the name we specified during build.


 

  • At the same time, I get a session on the server side.


  • First thing I'd do is get in the session and view information.


 

  • There are plenty of things we can do right now, but as an example I will demonstrate keylogging.


 

You can see in the image above that it says it successfully injected the DLL, and in the file listing there is a file named log.log, which contains the logged keystrokes.

  • Let's view the captured keystrokes.


Changelogs
  • This repository was previously home to 3 tools: Iris, Thawne and a previous version of Paradoxia. These can be found here.
  • Everything has changed entirely: the client has been rewritten, Infodb has been removed, many new features have been added, and stability has improved.

Developer

Hi, my name's Fahad. You may contact me on Discord or via my website.



Py3Webfuzz - A Python3 Module To Assist In Fuzzing Web Applications



Based on pywebfuzz, Py3webfuzz is a Python3 module to assist in the identification of vulnerabilities in web applications and web services through brute force, fuzzing and analysis. The module does this by providing common testing values, generators and other utilities that are helpful when fuzzing web applications and API endpoints and when developing web exploits.

py3webfuzz has the fuzzdb and some other miscellaneous sources implemented in Python classes, methods and functions for ease of use. The fuzzdb project is just a collection of values for testing. The point is to provide a pretty good selection of values from the fuzzdb project and some other sources, cleaned up and available through Python3 classes, methods and namespaces. This makes it easier and handier when the time comes to use these values in your own exploits and PoCs.

Effort was made to match the names up similarly to the folders and values from the latest fuzzdb project. This effort can sometimes make for some ugly looking namespaces. This balance was struck so that familiarity with the fuzzdb project would cross over into the Python code. The exceptions come in with the replacement of hyphens with underscores.


INSTALLATION

Installation can be done in a couple of ways. If you want, use a virtual environment.


Using Python setuptools

http://pypi.python.org/pypi/setuptools

$ git clone https://github.com/jangelesg/py3webfuzz.git
$ cd py3webfuzz/

You can run the supplied setup.py with the install command

 $  python setup.py install

You can also use easy_install if that's what you do to manage your installed packages

 $ easy_install py3webfuzz_VERSION.tar.gz

You can also point to the location where the tar.gz lives on the web

 $ easy_install URL_package

You should be good to go.


Use in your Code
  • Some test cases can be found within the info subfolder

# Accessing SQLi values and encoding them for further use
# Import library
from py3webfuzz import fuzzdb
from py3webfuzz import utils, encoderFuncs

# Instantiate a class object that gives you access to a set of SQLi values
sqli_detect_payload = fuzzdb.Attack.AttackPayloads.SQLi.Detect()

# Getting access to those values through a list
for index, payload in enumerate(sqli_detect_payload.Generic_SQLI):
    print(f"Payload: {index} Value: {payload}")
    # Using encoderFuncs you can get different handy encodings to develop exploits
    print(f"SQLi Char Encode: {encoderFuncs.sqlchar_encode(payload)}")

# Send an HTTP request to your target
# Import library
from py3webfuzz import utils

# Customize your target and headers
location = "http://127.0.0.1:8080/WebGoat/start.mvc#lesson/WebGoatIntroduction.lesson"
headers = {"Host": "ssl.scroogle.org",
           "User-Agent": "Mozilla/4.0 (compatible; MSIE 4.01; AOL 4.0; Mac_68K)",
           "Content-Type": "application/x-www-form-urlencoded"}

# At this point you have a dict object with all the elements for your pentest:
# "headers": response.headers, "content": response.content, "status_code": response.status_code,
# 'json': response.json, "text": response.text, "time": f"Total in seconds: {time}"
res = utils.make_request(location, headers=headers, method="get")

# Print the response
print(res)

Demo




FUTURE

  • Uploading this module to the Python Package Index.
  • Integrate features, classes, methods and values for mobile pentesting
  • Enhance the XSS and XXE techniques with some new features (any ideas are welcome)
  • Feature for Server-Side Template Injection

Author

Contributors
  • Nathan Hamiel @nathanhamiel


NFCGate - An NFC Research Toolkit Application For Android



NFCGate is an Android application meant to capture, analyze, or modify NFC traffic. It can be used as a researching tool to reverse engineer protocols or assess the security of protocols against traffic modifications.


Notice

This application was developed for security research purposes by students of the Secure Mobile Networking Lab at TU Darmstadt. Please do not use this application for malicious purposes.


Features
  • On-device capture: Captures NFC traffic sent and received by other applications running on the device.
  • Relay: Relays NFC traffic between two devices using a server. One device operates as a "reader" reading an NFC tag, the other device emulates an NFC tag using the Host Card Emulation (HCE).
  • Replay: Replays previously captured NFC traffic in either "reader" or "tag" mode.
  • Clone: Clones the initial tag information (e.g. ID).
  • pcapng export of captured NFC traffic, readable by Wireshark.

Requirements for specific modes
  • NFC support
  • Android 4.4+ (API level 19+)
  • EdXposed or Xposed: On-device capture, relay tag mode, replay tag mode, clone mode.
  • ARMv8-A, ARMv7: Relay tag mode, replay tag mode, clone mode.
  • HCE: Relay tag mode, replay tag mode, clone mode.

Usage

Building
  1. Initialize submodules: git submodule update --init
  2. Build using Android Studio or Gradle

Operating Modes

As instructions differ per mode, each mode is described in detail in its own document in doc/mode/:


Pcapng Export

Captured traffic can be exported in or imported from the pcapng file format. For example, Wireshark can be used to further analyze NFC traffic. A detailed description of the import and export functionality is documented in doc/pcapng.md.


Compatibility

NFCGate provides an in-app status check. For further notes on compatibility see the compatibility document.


Known Issues and Caveats

Please consider the following issues and caveats before using the application (and especially before filing a bug report).


NFC Stack

When using modes that utilize HCE, the phone has to implement the NFC Controller Interface (NCI) specification. Most phones should implement this specification when offering HCE support.


Confidentiality of Data Channel (relay)

Right now, all data in relay mode is sent unencrypted over the network. We may or may not get around to implementing cryptographic protection, but for now, consider everything you send over the network to be readable by anyone interested, unless you use extra protection like VPNs. Keep that in mind while performing your own tests.


Compatibility with Cards (relay, replay, clone)

We can only proxy tags supported by Android. For example, Android no longer offers support for MiFare classic chips, so these cards are not supported. When in doubt, use an application like NFC Tag info to find out if your tag is compatible. Also, at the moment, every tag technology supported by Android's HCE is supported (A, B, F), however NFC-B and NFC-F remain untested. NFC-A tags are the most common tags (for example, both the MiFare DESFire and specialized chips like the ones in electronic passports use NFC-A), but you may experience problems if you use other tags.


Compatibility with readers (relay)

This application only works with readers which do not implement additional security measures. One security measure which will prevent our application from working in relay mode is when the reader checks the time it takes the card to respond (or, to use the more general case, if the reader implements "distance bounding"). The network transmission adds a noticeable delay to any transaction, so any secure reader will not accept our proxied replies.
This does not affect other operating modes.


Android NFC limitations (relay, replay)

Some features of NFC are not supported by Android and thus cannot be used with our application. We have experienced cases where the NFC field generated by the phone was not strong enough to properly power more advanced features of some NFC chips (e.g. cryptographic operations). Keep this in mind if you are testing chips we have not experimented with.


Publications and Media

This application was presented at the 14th USENIX Workshop on Offensive Technologies (WOOT '20). An arXiv preprint can be found here.

An early version of this application was presented at WiSec 2015. The extended Abstract and poster can be found on the website of one of the authors. It was also presented in a brief Lightning Talk at the Chaos Communication Camp 2015.


Reference our Project

Any use of this project which results in an academic publication or other publication which includes a bibliography should include a citation to NFCGate:

@inproceedings {257188,
author = {Steffen Klee and Alexandros Roussos and Max Maass and Matthias Hollick},
title = {NFCGate: Opening the Door for {NFC} Security Research with a Smartphone-Based Toolkit},
booktitle = {14th {USENIX} Workshop on Offensive Technologies ({WOOT} 20)},
year = {2020},
url = {https://www.usenix.org/conference/woot20/presentation/klee},
publisher = {{USENIX} Association},
month = aug,
}

License
   Copyright 2015-2020 NFCGate Team

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Contact

Used Libraries

Credits
  • ADBI: ARM and THUMB inline hooking


Octopus WAF - Web Application Firewall Made In C Language And Use Libevent



OctopusWAF is an open source web application firewall written in C; it uses libevent to handle multiple connections.


First step

Install libpcre: if you use an RPM-based distro, look for the pcre-devel package; on BSD-based systems, search in ports or brew. You also need libevent-dev (libevent-devel on RPM distros) and openssl-dev (openssl-devel on RPM distros).

To compile and run OctopusWAF, follow these commands:

$ git clone https://github.com/CoolerVoid/OctopusWAF
$ cd OctopusWAF; make
$ bin/OctopusWAF

Example tested against DVWA over a plain HTTP channel:

$ bin/OctopusWAF -h 127.0.0.1:2006 -r 127.0.0.1:80 -m horspool --debug

Open your browser at http://127.0.0.1:2006

  • Notes: Don't execute with "cd bin; ./OctopusWAF"; use the full path "bin/OctopusWAF", because the binary needs to load content from the config directory. Use HTTP only for WAF usage: this version 0.1 can run TLS but has no resource to load a certificate and read TLS requests/responses, so if you use TLS the service can lose its WAF function and work like a plain reverse proxy.

Tested on Linux but can run in FreeBSD.


Code overview
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
C                               12            324            138            997
C/C++ Header                    11             63             70            212
make                             1              1              0             30
Markdown                         1              6              0              3
-------------------------------------------------------------------------------
SUM:                            25            394            208           1242
-------------------------------------------------------------------------------


TODO:

Resource to load modsec rules https://github.com/SpiderLabs/owasp-modsecurity-crs/

Insert rules to detect XSS

Insert rules to detect SQLi

Insert rules to detect RCE

Insert rules to detect RFI/LFI

Insert rules to detect XXE

Insert rules to detect Anomalies...

Channel for TLS

Cert Load


Reference:

https://libevent.org/

https://owasp.org/www-community/Web_Application_Firewall



Leonidas - Automated Attack Simulation In The Cloud, Complete With Detection Use Cases



Leonidas is a framework for executing attacker actions in the cloud. It provides a YAML-based format for defining cloud attacker tactics, techniques and procedures (TTPs) and their associated detection properties. These definitions can then be compiled into a deployable attack API, Sigma detection rules, and documentation, as described in the sections below.


Deploying the API

The API is deployed via an AWS-native CI/CD pipeline. Instructions for this can be found at Deploying Leonidas.


Using the API

The API is invoked via web requests secured by an API key. Details on using the API can be found at Using Leonidas


Installing the Generator Locally

To build documentation or Sigma rules, you'll need to install the generator locally. You can do this by:

  • cd generator
  • poetry install

Generating Sigma Rules

Sigma rules can be generated as follows:

  • poetry run ./generator.py sigma

The rules will then appear in ./output/sigma


Generating Documentation

The documentation is generated as follows:

  • poetry run ./generator.py docs

This will produce markdown versions of the documentation available at output/docs. This can be uploaded to an existing markdown-based documentation system, or the following can be used to create a prettified HTML version of the docs:

  • cd output
  • mkdocs build

This will create a output/site folder containing the HTML site. It is also possible to view this locally by running mkdocs serve in the same folder.


Writing Definitions

The definitions are written in a YAML-based format, for which an example is provided below. Documentation on how to write these can be found in Writing Definitions

---
name: Enumerate Cloudtrails for a Given Region
author: Nick Jones
description: |
  An adversary may attempt to enumerate the configured trails, to identify what actions will be logged and where they will be logged to. In AWS, this may start with a single call to enumerate the trails applicable to the default region.
category: Discovery
mitre_ids:
  - T1526
platform: aws
permissions:
  - cloudtrail:DescribeTrails
input_arguments:
executors:
  sh:
    code: |
      aws cloudtrail describe-trails
  leonidas_aws:
    implemented: True
    clients:
      - cloudtrail
    code: |
      result = clients["cloudtrail"].describe_trails()
detection:
  sigma_id: 48653a63-085a-4a3b-88be-9680e9adb449
  status: experimental
  level: low
  sources:
    - name: "cloudtrail"
      attributes:
        eventName: "DescribeTrails"
        eventSource: "*.cloudtrail.amazonaws.com"

Credits

Project built and maintained by Nick Jones ( NJonesUK / @nojonesuk).

This project drew ideas and inspiration from a range of sources, including:




FAMA - Forensic Analysis For Mobile Apps



LabCIF - Forensic Analysis for Mobile Apps

Getting Started

Android extraction and analysis framework with an integrated Autopsy Module. Dump easily user data from a device and generate powerful reports for Autopsy or external applications.


Functionalities
  • Extract user application data from an Android device with ADB (root and ADB required).
  • Dump user data from an android image or mounted path.
  • Easily build modules for a specific Android application.
  • Generate clean and readable JSON reports.
  • Complete integrated Autopsy compatibility (datasource processor module, ingest module, report module, geolocation, communication and timeline support).
  • Export HTML report based on the current case.

Report Screenshots




Prerequisites

How to use

The script can be used directly in terminal or as Autopsy module.


Running from Terminal
usage: start.py [-h] [-d DUMP [DUMP ...]] [-p PATH] [-o OUTPUT] [-a] app

Forensics Artefacts Analyzer

positional arguments:
app Application or package to be analyzed <tiktok> or <com.zhiliaoapp.musically>

optional arguments:
-h, --help show this help message and exit
-d DUMP [DUMP ...], --dump DUMP [DUMP ...] Analyze specific(s) dump(s) <20200307_215555 ...>
-p PATH, --path PATH Dump app data in path (mount or folder structure)
-o OUTPUT, --output OUTPUT Report output path folder
-a, --adb Dump app data directly from device with ADB
-H, --html Generate HTML report
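
For example, a run that dumps TikTok data directly from an ADB-connected device and writes an HTML report to a custom folder might look like this (the output path is a placeholder):

python start.py tiktok -a -o reports/ -H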

Running from Autopsy
  1. Download repository contents (zip).
  2. Open Autopsy -> Tools -> Python Plugins
  3. Unzip previously downloaded zip in python_modules folder.
  4. Restart Autopsy, create a case and select the module.
  5. Select your module options in the Ingest Module window selector.
  6. Click "Generate Report" to generate an HTML report of the case.

Build an application module

Do you need a forensics module for a specific Android application? Follow the instructions here and build a module by yourself.


Authors

Mentors

Project developed as the final project for the Computer Engineering course at Escola Superior de Tecnologia e Gestão de Leiria.


Environments Tested
  • Windows (primary)
  • Linux
  • Mac OS

License

This project is licensed under the terms of the GNU GPL v3 License.



    Scripthunter - Tool To Find JavaScript Files On Websites



    Scripthunter is a tool that finds javascript files for a given website. To scan Google, simply run ./scripthunter.sh https://google.com. Note that it may take a while, which is why scripthunter also implements a notification mechanism to inform you when a scan is finished via Telegram API. Blogpost


    Setup

    To install scripthunter, clone this repository. Scripthunter relies on a couple of tools being installed, so make sure you have them:

    Since most of these tools are written in Go, please make sure that you have Go installed and configured properly, and that when you type any of the above commands in the terminal, they are recognized and work.

    Furthermore, scripthunter uses Telegram to send you a notification once a scan is finished. To enable this feature, you need to create a Telegram Bot and paste your Bot API key and chatid in the scripthunter script. You can follow this guide to get these values.



    Features
    • Extract public javascript files from website using Gau and Hakrawler
    • Parse directories containing js files from found public files
    • Scan js directories for hidden js files using ffuf and a custom wordlist
    • check all found files for connectivity
    • notify user once scans are finished
    • aggregate all seen js filenames into one global wordlist

    Example

    I ran this on some random verizon subdomain:

    ➜  scripthunter-dev ./scripthunter.sh https://developer.verizonmedia.com/
    _ __ __ __
    ___ ________(_)__ / /_/ / __ _____ / /____ ____
    (_-</ __/ __/ / _ \/ __/ _ \/ // / _ \/ __/ -_) __/
    /___/\__/_/ /_/ .__/\__/_//_/\_,_/_//_/\__/\__/_/
    /_/
    by @r0bre
    [*] Running GAU
    [+] GAU found 7 scripts!
    [*] Running hakrawler
    [+] HAKRAWLER found 5 scripts!
    [*] Found 2 directories containing .js files.
    [*] Running FFUF on https://developer.verizonmedia.com/./

    [+] FFUF found 0 scripts in https://developer.verizonmedia.com/./ !
    [*] Running FFUF on https://developer.verizonmedia.com/assets/

    [+] FFUF found 0 scripts in https://developer.verizonmedia.com/assets/ !
    [*] Running FFUF on https://developer.verizonmedia.com/assets/js/

    [+] FFUF found 0 scripts in https://developer.verizonmedia.com/assets/js/ !
    [*] Running FFUF on https://developer.verizonmedia.com/js/

    [+] FFUF found 0 scripts in https://developer.verizonmedia.com/js/ !
    [*] Running FFUF on https://developer.verizonmedia.com/static/

    [+] FFUF found 0 scripts in https://developer.verizonmedia.com/static/ !
    [*] Running FFUF on https://developer.verizonmedia.com/static/js/

    [+] FFUF found 7 scripts in https://developer.verizonmedia.com/static/js/ !
    [*] Running FFUF on https://developer.verizonmedia.com/static/js/vendor/

    [+] FFUF found 3 scripts in https://developer.verizonmedia.com/static/js/vendor/ !
    [+] Checking Script Responsiveness of 13 scripts..
    https://developer.verizonmedia.com/static/js/vendor/js-cookie.js
    https://developer.verizonmedia.com/static/js/jquery-3.3.1.min.js
    https://developer.verizonmedia.com/static/js/autotrack.js
    https://developer.verizonmedia.com/static/js/utils.js
    https://developer.verizonmedia.com/static/js/navigation.js
    https://developer.verizonmedia.com/static/js/vendor/rapidworker-1.2.js
    https://developer.verizonmedia.com/static/js/vmdn.js
    https://developer.verizonmedia.com/static/js/right-nav.js
    [+] All Done!
    [+] Found total of 13 (8 responsive) scripts!

    If you like this tool, consider following me on Twitter @r0bre! ;)



    Tfsec - Security Scanner For Your Terraform Code



    tfsec uses static analysis of your terraform templates to spot potential security issues. Now with terraform v0.12+ support.


    Example Output



    Installation

    Install with brew/linuxbrew:

    brew install tfsec

    Install with Chocolatey:

    choco install tfsec

    You can also grab the binary for your system from the releases page.

    Alternatively, install with Go:

    go get -u github.com/tfsec/tfsec/cmd/tfsec

    Usage

    tfsec will scan the specified directory. If no directory is specified, the current working directory will be used.

    The exit status will be non-zero if tfsec finds problems, otherwise the exit status will be zero.

    tfsec .

    Use with Docker

    As an alternative to installing and running tfsec on your system, you may run tfsec in a Docker container.

    To run:

    docker run --rm -it -v "$(pwd):/src" liamg/tfsec /src

    Use as GitHub Action

    If you want to run tfsec on your repository as a GitHub Action, you can use https://github.com/triat/terraform-security-scan.


    Features
    • Checks for sensitive data inclusion across all providers
    • Checks for violations of AWS, Azure and GCP security best practice recommendations
    • Scans modules (currently only local modules are supported)
    • Evaluates expressions as well as literal values
    • Evaluates Terraform functions e.g. concat()

    Ignoring Warnings

    You may wish to ignore some warnings. If you'd like to do so, you can simply add a comment containing tfsec:ignore:<RULE> to the offending line in your templates. If the problem refers to a block of code, such as a multiline string, you can add the comment on the line above the block, by itself.

    For example, to ignore an open security group rule:

    resource "aws_security_group_rule" "my-rule" {
    type = "ingress"
    cidr_blocks = ["0.0.0.0/0"] #tfsec:ignore:AWS006
    }

    ...or...

    resource "aws_security_group_rule" "my-rule" {
    type = "ingress"
    #tfsec:ignore:AWS006
    cidr_blocks = ["0.0.0.0/0"]
    }

    If you're not sure which line to add the comment on, just check the tfsec output for the line number of the discovered problem.


    Disable checks

    You may wish to exclude some checks from running. If you'd like to do so, you can simply add the argument -e CHECK1,CHECK2,... to your command:

    tfsec . -e GEN001,GCP001,GCP002

    Including values from .tfvars

    You can include values from a tfvars file in the scan, using, for example: --tfvars-file terraform.tfvars.
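
    For instance, scanning the current directory while feeding in variable values might look like this (the tfvars file name is whatever your project uses):

    tfsec . --tfvars-file terraform.tfvars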


    Included Checks

    Checks are currently limited to AWS/Azure/GCP resources, but there are also checks which are provider agnostic.

    Checks
    AWS Checks
    Azure Checks
    GCP Checks
    General Checks

    Running in CI

    tfsec is designed for running in a CI pipeline. For this reason it will exit with a non-zero exit code if a potential problem is detected. You may wish to run tfsec as part of your build without coloured output. You can do this using --no-colour (or --no-color for our American friends).


    Output options

    You can output tfsec results as JSON, CSV, Checkstyle, Sarif, JUnit or just plain old human readable format. Use the --format flag to specify your desired format.


    Github Security Alerts

If you want to integrate with Github Security alerts and include the output of your tfsec checks, you can use the tfsec-sarif-action Github action to run the static analysis and then upload the results to the security alerts tab.

    The alerts generated for tfsec-example-project look like this.


     

    When you click through the alerts for the branch, you get more information about the actual issue.


     

For more information about adding security alerts, check the GitHub documentation.


    Support for older terraform versions

    If you need to support versions of terraform which use HCL v1 (terraform <0.12), you can use v0.1.3 of tfsec, though support is very limited and has fewer checks.



    Linux-Evil-Toolkit - A Framework That Aims To Centralize, Standardize And Simplify The Use Of Various Security Tools For Pentest Professionals



    Linux evil toolkit is a framework that aims to centralize, standardize 
    and simplify the use of various security tools for pentest professionals.
    LETK (Linux evil toolkit) has few simple commands, one of which is the
    INIT that allows you to define a target, and thus use all the tools
    without typing anything else.

Is LETK better than setoolkit? Yes and no: the two tools serve the same purpose
in different ways. The Linux Evil Toolkit is an automated attack-information
gathering script.

Considerations
§ 1 About use

This script was made to automate the steps of gathering information about web
targets; misuse is the responsibility of the user. To report bugs or make
suggestions, open an issue on GitHub.

§ 2 About simple_scan

Automap was replaced by simple_scan, which is lighter, faster and less
detectable. It now has different execution modes, ranging from a quick and
simple scan to more complex ones.

§ 3 About Console

The output of the script can be extremely long, so check that your console
(gnome-terminal, cmd, konsole) is configured to display at least 1,000 lines
(I particularly recommend 10,000 lines). For professional purposes this helps
with documentation, since it records the commands and output and formats the text.

    Usage

    NOTE: When you start a pentest, type the INIT command and define the target, or write values in linux-evil-toolkit/config/letk.rb

    Basics
|exit            | Close this script
|clear           | Clear terminal
|update          | Update Linux evil toolkit
|train           | Show train in terminal, tuutuu
|INIT            | Setup global variables
|reset           | Clear terminal and reset global variables
|cover           | Cover your tracks on your computer
|simple_map      | This command executes automap (auto nmap)
|search          | Search email, whois and banner grep
|status          | Show machine status
|dnsscanner      | Scan for 'A', 'AAAA', 'CNAME', 'MX', 'NS', 'PTR', 'SOA'
|dirscanner      | Scan files and folders
|banner          | Show Linux evil Toolkit banner in terminal
|webdns          | Show web sites for dns scanner
|linuxfiles      | Show important linux files
|linuxfolders    | Show important linux folders
|windowsfolders  | Show important windows folders
|linuxutil       | Show useful commands in linux
|test            | For development only

    simple_scan options
    alone

    "-sL" --> "List Scan - simply list targets to scan"
    "-sP" --> "Ping Scan - go no further than determining if host is online"

    default

    "-sS -sV" --> "TCP SYN"
    "-sU -sV" --> "UDP Scan"

    icmp_echo

    "-sS -sV -PE" --> "TCP SYN + ICMP echo discovery probes"
    "-sU -sV -PE" --> "UDP Scan + ICMP echo discovery probes"
    "-sA -sV -PE" --> "ACK + ICMP echo discovery probes"

    port_list

    "-sS" --> "TCP SYN + [portlist]: TCP SYN discovery probes to given ports"
    "-sA" --> "ACK + [portlist]: TCP ACK discovery probes to given ports"
    "-sU" --> "UDP Scan + [portlist]: TCP UDP discovery probes to given ports"

    special

    "-sT -sV" --> "Connect()"
    "-sW -sV" --> "Window"
    "-sM -sV" --> "Maimon scans"
    "-sN -sV" --> "TCP Null"
    "-sF -sV" --> "FIN"
    "-sX -sV" --> "Xmas scans"

    DeepLink
DeepLink is a deep web (Tor onion domain) database for your tests, and a way to explore the "deep web" for fun

usage: type deeplink and then an option
--site                | list the best sites to learn about the deep web
--darklinks           | show dark-net links
--onionlinks          | show more than 500 deep web links
--onionlinks-active   | show more links, but active links only
--searchlinks         | show Tor search engines (Google-like)
--toralt              | show Tor alternatives (i2p, freenet, etc.)

    Backend Functions

    From engine module
Engine.INIT()               | Setup variables
Engine.sys("ls")            | Test function
Engine.R()                  | Reset variables
Engine.cover()              | Cover bash history
Engine.compress()           | Compress files
Engine.port_scanner()       | Replaced by automap
Engine.search()             | Search whois, emails, banner grep
Engine.status()             | Show machine status
Engine.dns_scanner()        | Scan for 'A', 'AAAA', 'CNAME', 'MX', 'NS', 'PTR', 'SOA'
Engine.dir_scanner()        | Brute force search for files and folders
Engine.simple_scan()        | Execute automap
Engine.assembly()           | Backend function
Engine.exec()               | Backend function

    From Visual module
Visual.banner()             | Function for showing text
Visual.web_dns()            | Function for showing text
Visual.linux_files()        | Function for showing text
Visual.linux_folders()      | Function for showing text
Visual.linux_util()         | Function for showing text

    From Interpreter Module
Interpreter.interpreter()   | Backend function
Interpreter.main()          | Backend function

    ERROR CODES & COLORS
prGreen()                   | Successful
prRed()                     | Error
Other [Cyan, yellow]        | Execution error


    Herpaderping - Process Herpaderping Bypasses Security Products By Obscuring The Intentions Of A Process



    Process Herpaderping is a method of obscuring the intentions of a process by modifying the content on disk after the image has been mapped. This results in curious behavior by security products and the OS itself.



    Summary

    Generally, a security product takes action on process creation by registering a callback in the Windows Kernel (PsSetCreateProcessNotifyRoutineEx). At this point, a security product may inspect the file that was used to map the executable and determine if this process should be allowed to execute. This kernel callback is invoked when the initial thread is inserted, not when the process object is created.

Because of this, an actor can create and map a process, modify the content of the file, then create the initial thread. A product that does inspection at the creation callback would see the modified content. Additionally, some products use an on-write scanning approach which consists of monitoring for file writes. A familiar optimization here is recording that the file has been written to and deferring the actual inspection until IRP_MJ_CLEANUP occurs (e.g. the file handle is closed). Thus, an actor using a write -> map -> modify -> execute -> close workflow will subvert on-write scanning that solely relies on inspection at IRP_MJ_CLEANUP.

    To abuse this convention, we first write a binary to a target file on disk. Then, we map an image of the target file and provide it to the OS to use for process creation. The OS kindly maps the original binary for us. Using the existing file handle, and before creating the initial thread, we modify the target file content to obscure or fake the file backing the image. Some time later, we create the initial thread to begin execution of the original binary. Finally, we will close the target file handle. Let's walk through this step-by-step:

    1. Write target binary to disk, keeping the handle open. This is what will execute in memory.
    2. Map the file as an image section (NtCreateSection, SEC_IMAGE).
    3. Create the process object using the section handle (NtCreateProcessEx).
    4. Using the same target file handle, obscure the file on disk.
    5. Create the initial thread in the process (NtCreateThreadEx).
      • At this point the process creation callback in the kernel will fire. The contents on disk do not match what was mapped. Inspection of the file at this point will result in incorrect attribution.
    6. Close the handle. IRP_MJ_CLEANUP will occur here.
      • Since we've hidden the contents of what is executing, inspection at this point will result in incorrect attribution.



    plantuml

    @startuml
    hide empty description

    [*] --> CreateFile
    CreateFile --> FileHandle
    FileHandle --> Write
    FileHandle --> NtCreateSection
    Write -[hidden]-> NtCreateSection
    NtCreateSection --> SectionHandle
    SectionHandle --> NtCreateProcessEx
    FileHandle --> Modify
    NtCreateProcessEx -[hidden]-> Modify
    NtCreateProcessEx --> NtCreateThreadEx
    Modify -[hidden]-> NtCreateThreadEx
    NtCreateThreadEx --> [*]
    FileHandle --> CloseFile
    NtCreateThreadEx -[hidden]-> CloseFile
    NtCreateThreadEx --> PspCallProcessNotifyRoutines
    PspCallProcessNotifyRoutines -[hidden]-> [*]
    CloseFile --> IRP_MJ_CLEANUP
    IRP_MJ_CLEANUP -[hidden]-> [*]
    PspCallProcessNotifyRoutines --> Inspect
    PspCallProcessNotifyRoutines -[hidden]-> CloseFile
    IRP_MJ_CLEANUP --> Inspect
    Inspect -[hidden]-> [*]

    CreateFile : Create target file, keep handle open.
    Write : Write source payload into target file.
    Modify : Obscure the file on disk.
    NtCreateSection : Create section using file handle.
    NtCreateProcessEx : Image section for process is mapped and cached in file object.
    NtCreateThreadEx : The cached section is used.
    NtCreateThreadEx : Process notify routines fire in kernel.
    Inspect : The contents on disk do not match what was executed.
    Inspect : Inspection of the file at this point will result in incorrect attribution.
    @enduml
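
To make the file/section mismatch concrete, here is a minimal C++ sketch of just the write -> map -> modify portion. This is an illustration under assumptions, not the tool's implementation: it uses documented Win32 APIs (CreateFileMappingW with SEC_IMAGE) instead of the NT system calls above, it does not create a process, and the target path is simply taken from the command line.

// Simplified illustration only: shows that an image section created from an
// open handle is cached, and that the file can still be modified through that
// same handle afterwards. Uses documented Win32 APIs, not the NT calls above.
#include <windows.h>
#include <cstdio>

int wmain(int argc, wchar_t** argv)
{
    if (argc < 2)
    {
        wprintf(L"usage: %ls <target PE file>\n", argv[0]);
        return 1;
    }

    // Open the target with read/write/execute access and keep the handle open.
    HANDLE file = CreateFileW(argv[1],
                              GENERIC_READ | GENERIC_WRITE | GENERIC_EXECUTE,
                              FILE_SHARE_READ,
                              nullptr,
                              OPEN_EXISTING,
                              FILE_ATTRIBUTE_NORMAL,
                              nullptr);
    if (file == INVALID_HANDLE_VALUE)
    {
        wprintf(L"CreateFileW failed: %lu\n", GetLastError());
        return 1;
    }

    // Create an image section from the same handle (the Win32 analogue of
    // NtCreateSection with SEC_IMAGE).
    HANDLE section = CreateFileMappingW(file, nullptr,
                                        PAGE_READONLY | SEC_IMAGE, 0, 0, nullptr);
    if (section == nullptr)
    {
        wprintf(L"CreateFileMappingW failed: %lu\n", GetLastError());
        CloseHandle(file);
        return 1;
    }

    // Map a view so we can compare the cached image against the file later.
    unsigned char* view = static_cast<unsigned char*>(
        MapViewOfFile(section, FILE_MAP_READ, 0, 0, 0));

    // Overwrite the start of the file on disk through the original handle.
    // In the real technique this happens between NtCreateProcessEx and
    // NtCreateThreadEx.
    const char junk[] = "HERPADERP";
    DWORD written = 0;
    SetFilePointer(file, 0, nullptr, FILE_BEGIN);
    BOOL ok = WriteFile(file, junk, sizeof(junk) - 1, &written, nullptr);
    wprintf(L"Write after mapping: %ls (%lu bytes)\n",
            ok ? L"succeeded" : L"failed", written);

    // The mapped view should still begin with the original 'MZ' header even
    // though the bytes on disk no longer do.
    if (view != nullptr)
    {
        wprintf(L"Mapped view begins with: %hc%hc\n", view[0], view[1]);
        UnmapViewOfFile(view);
    }

    CloseHandle(section);
    CloseHandle(file); // IRP_MJ_CLEANUP fires once the last handle goes away.
    return 0;
}

Run against a copy of any executable, the write should succeed while the mapped view keeps the original header, which is exactly the disk/memory mismatch the kernel callbacks later observe.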


    Behavior

You'll see in the demo below that CMD.exe is used as the execution target. The first run overwrites the bytes on disk with a pattern. The second run overwrites CMD.exe with ProcessHacker.exe. The Herpaderping tool fixes up the binary to look as close to ProcessHacker.exe as possible, even retaining the original signature. Note the multiple executions of the same binary and how the process looks to the user compared to what is in the file on disk.




    Diving Deeper

We've observed the behavior, and some of it may be surprising. Let's try to explain it.

    Technical Deep Dive


    Background and Motivation

When designing products for securing Windows platforms, many engineers in this field (myself included) have fallen back on preconceived notions with respect to how the OS will handle data. In this scenario, some might expect the file on disk to remain "locked" when the process is created. You can't delete the file. You can't write to it. But you can rename it. As seen here, under the right conditions, you can in fact write to it. Remain vigilant about your assumptions, always question them, and do your research.

The motivation for this research came about while working out how to do analysis when a file is written. With prior background researching process Hollowing and Doppelganging, I had theorized this might be possible. The goal is to provide better security. You cannot create a better lock without first understanding how to break the old one.


    Similar Techniques

    Herpaderping is similar to Hollowing and Doppelganging however there are some key differences:


    Process Hollowing

Process Hollowing involves modifying the mapped section before execution begins; abstractly, this looks like: map -> modify section -> execute. This workflow results in the intended execution flow of the Hollowed process diverging into unintended code. Doppelganging might be considered a form of Hollowing. However, Hollowing, in my opinion, is closer to injection in that Hollowing usually involves an explicit write to the already mapped code. This differs from Herpaderping, where there are no modified sections.


    Process Doppelganging

    Process Doppelganging is closer to Herpaderping. Doppelganging abuses transacted file operations and generally involves these steps: transact -> write -> map -> rollback -> execute. In this workflow, the OS will create the image section and account for transactions, so the cached image section ends up being what you wrote to the transaction. The OS has patched this technique. Well, they patched the crash it caused. Maybe they consider this a "legal" use of a transaction. Thankfully, Windows Defender does catch the Doppelganging technique. Doppelganging differs from Herpaderping in that Herpaderping does not rely on transacted file operations. And Defender doesn't catch Herpaderping.


    Comparison

    For reference, the generalized techniques:

Type          | Technique
Hollowing     | map -> modify section -> execute
Doppelganging | transact -> write -> map -> rollback -> execute
Herpaderping  | write -> map -> modify -> execute -> close

    We can see the differences laid out here. While Herpaderping is arguably noisier than Doppelganging, in that the malicious bits do hit the disk, we've seen that security products are still incapable of detecting Herpaderping.


    Possible Solution

    There is not a clear fix here. It seems reasonable that preventing an image section from being mapped/cached when there is write access to the file should close the hole. However, that may or may not be a practical solution.

    Another option might be to flush the changes to the file through to the cached image section if it hasn't yet been mapped into a process. However, since the map into the new process occurs at NtCreateProcess that is probably not a viable solution.

From a detection standpoint, there is not a great way to identify the actual bits that got mapped. Inspection at IRP_MJ_CLEANUP, or in a callback registered with PsSetCreateProcessNotifyRoutineEx, results in incorrect attribution since the bits on disk have been changed; you would have to rebuild the file from the section that got created. It's worth pointing out that there is a new callback in Windows 10 you may register for, PsSetCreateProcessNotifyRoutineEx2; however, this suffers from the same problem as the previous callback: it is invoked when the initial thread is executed, not when the process object is created. Microsoft did add PsSetCreateThreadNotifyRoutineEx, which, if registered with PsCreateThreadNotifyNonSystem, is invoked when the initial thread is inserted, as opposed to when it is about to begin execution (as the old callback did). Extending PSCREATEPROCESSNOTIFYTYPE to be called out when the process object is created won't help either; we've seen in the Diving Deeper section that the image section object is cached on the NtCreateSection call, not NtCreateProcess.
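
For reference, below is a minimal sketch of how a driver might register the callbacks mentioned above. This assumes a standard kernel driver project; the callback names are placeholders and the detection logic itself is intentionally omitted.

// Illustrative only: registers the newer process- and thread-notify callbacks
// discussed above. Detection logic and callback removal on unload are omitted.
// Note: process-notify registration requires the driver to be linked with
// /INTEGRITYCHECK.
#include <ntddk.h>

static VOID ProcessNotifyEx2(
    PEPROCESS Process, HANDLE ProcessId, PPS_CREATE_NOTIFY_INFO CreateInfo)
{
    UNREFERENCED_PARAMETER(Process);
    UNREFERENCED_PARAMETER(ProcessId);

    // CreateInfo is non-NULL on creation. This still fires when the initial
    // thread is inserted, so the backing file may already have been modified.
    if (CreateInfo != NULL)
    {
        DbgPrint("Process create notify - file may no longer match the mapped image\n");
    }
}

static VOID ThreadNotify(HANDLE ProcessId, HANDLE ThreadId, BOOLEAN Create)
{
    UNREFERENCED_PARAMETER(ProcessId);
    UNREFERENCED_PARAMETER(ThreadId);

    // With PsCreateThreadNotifyNonSystem this fires when the initial thread is
    // inserted, rather than when it is about to begin execution.
    if (Create)
    {
        DbgPrint("Thread insert notify\n");
    }
}

extern "C" NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(DriverObject);
    UNREFERENCED_PARAMETER(RegistryPath);

    NTSTATUS status = PsSetCreateProcessNotifyRoutineEx2(
        PsCreateProcessNotifySubsystems, (PVOID)ProcessNotifyEx2, FALSE);
    if (!NT_SUCCESS(status))
    {
        return status;
    }

    return PsSetCreateThreadNotifyRoutineEx(
        PsCreateThreadNotifyNonSystem, (PVOID)ThreadNotify);
}

Even with these registrations in place, the attribution problem described above remains: the callbacks tell you when something ran, not what is actually backing the mapped image.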

We can't easily identify what got executed. We're left with trying to detect the exploitive behavior by the actor; I'll leave discovery of the behavior indicators as an exercise for the reader.


    Known Affected Platforms

Below is a list of products and Windows OSes that have been tested as of 8/31/2020. Tests were carried out with a known malicious binary.

Operating System                     | Version          | Vulnerable
Windows 7 Enterprise x86             | 6.1.7601         | Yes
Windows 10 Pro x64                   | 10.0.18363.900   | Yes
Windows 10 Pro Insider Preview x64   | 10.0.20170.1000  | Yes
Windows 10 Pro Insider Preview x64   | 10.0.20201.1000  | Yes

Security Product                     | Version          | Vulnerable
Windows Defender AntiMalware Client  | 4.18.2006.10     | Yes
Windows Defender Engine              | 1.1.17200.2      | Yes
Windows Defender Antivirus           | 1.319.1127.0     | Yes
Windows Defender Antispyware         | 1.319.1127.0     | Yes
Windows Defender AntiMalware Client  | 4.18.2007.6      | Yes
Windows Defender Engine              | 1.1.17300.2      | Yes
Windows Defender Antivirus           | 1.319.1676.0     | Yes
Windows Defender Antispyware         | 1.319.1676.0     | Yes
Windows Defender AntiMalware Client  | 4.18.2007.8      | Yes
Windows Defender Engine              | 1.1.17400.5      | Yes
Windows Defender Antivirus           | 1.323.267.0      | Yes
Windows Defender Antispyware         | 1.323.267.0      | Yes

    Responsible Disclosure

    This vulnerability was disclosed to the Microsoft Security Response Center (MSRC) on 7/17/2020 and a case was opened by MSRC on 7/22/2020. MSRC concluded their investigation on 8/25/2020 and determined the findings are valid but do not meet their bar for immediate servicing. At this time their case is closed, without resolution, and is marked for future review, with no timeline.

    We disagree on the severity of this bug; this was communicated to MSRC on 8/27/2020.

    1. There are similar vulnerabilities in this class (Hollowing and Doppelganging).
    2. The vulnerability is shown to defeat security features inherent to the OS (Windows Defender).
    3. The vulnerability allows an actor to gain execution of arbitrary code.
    4. The user is not notified of the execution of unintended code.
    5. The process information presented to the user does not accurately reflect what is executing.
    6. Facilities to accurately identify the process are not intuitive or incorrect, even from the kernel.

    Source

    This repo contains a tool for exercising the Herpaderping method of process obfuscation. Usage is as follows:

    Process Herpaderping Tool - Copyright (c) Johnny Shaw
    ProcessHerpaderping.exe SourceFile TargetFile [ReplacedWith] [Options...]
    Usage:
    SourceFile Source file to execute.
    TargetFile Target file to execute the source from.
    ReplacedWith File to replace the target with. Optional,
    default overwrites the binary with a pattern.
    -h,--help Prints tool usage.
    -d,--do-not-wait Does not wait for spawned process to exit,
    default waits.
    -l,--logging-mask number Specifies the logging mask, defaults to full
    logging.
    0x1 Successes
    0x2 Informational
    0x4 Warnings
    0x8 Errors
    0x10 Contextual
    -q,--quiet Runs quietly, overrides logging mask, no title.
    -r,--random-obfuscation Uses random bytes rather than a pattern for
    file obfuscation.
    -e,--exclusive Target file is created with exclusive access and
    the handle is held open as long as possible.
    Without this option the handle has full share
    access and is closed as soon as possible.
    -u,--do-not-flush-file Does not flush file after overwrite.
    -c,--close-file-early Closes file before thread creation (before the
    process notify callback fires in the kernel).
    Not valid with "--exclusive" option.
    -k,--kill Terminates the spawned process regardless of
    success or failure, this is useful in some
automation environments. Forces the
"--do-not-wait" option.

    Cloning and Building

    The repo uses submodules, after cloning be sure to init and update the submodules. Projects files are targeted to Visual Studio 2019.

    git clone https://github.com/jxy-s/herpaderping.git
    cd .\herpaderping\
    git submodule update --init --recursive
    MSBuild .\herpaderping.sln

    Credits

    The following are used without modification. Credits to their authors.


