
Parrot Security 4.5 - Security GNU/Linux Distribution Designed with Cloud Pentesting and IoT Security in Mind


Parrot 4.5 is officially released, and there are some major changes under the hood, powered by the long-term supported Linux 4.19 kernel series and preparing the project for the upcoming Parrot 5.0 LTS release. For future releases, Parrot Security plans to support two kernels: a stable kernel and a testing kernel.

Parrot 4.5 also comes with the latest Metasploit 5.0 penetration testing framework, which introduces major features like new evasion modules, a new search engine, a JSON-RPC daemon, integrated web services, and support for writing shellcode in C.

This release improves the metapackages for developers, making it a lot easier to set up an advanced development environment for multiple frameworks and programming languages. These include parrot-devel, parrot-devel-tools, and parrot-devel-extra.

Parrot 4.5 drops support for 32-bit computers

Parrot 4.5 is also the first release of the ethical hacking operating system to no longer ship installation or live images for older, 32-bit-only computers. With this, Parrot joins the growing trend of GNU/Linux distributions dropping 32-bit images. However, the developers noted that they will continue to support the 32-bit architecture with updates through the official software repositories for existing users.

Better Dev Tools

There are updates in metapackages for developers, and setting up an advanced development environment for several programming languages and frameworks is now easier than ever:

parrot-devel

It is pre-installed in Parrot 4.5 and provides the following tools:
  • vscodium - an advanced and extensible text editor.
  • zeal - an offline documentation downloader and browser.
  • git-cola - a graphical Git client.
  • meld - a graphical patch and diff inspector.
  • tora - a graphical database frontend compatible with several database backends.
These packages are included in the metapackage by using the “Recommends” apt directive, and they can be removed individually without triggering the removal of the whole parrot-devel metapackage.
The metapackage also recommends the installation of parrot-devel-tools.
sudo apt update
sudo apt install parrot-devel

parrot-devel-tools

It is recommended by parrot-devel and pre-installed in Parrot Security. It provides useful compilers and interpreters for the most commonly used languages, including the following packages:
  • GCC/G++ - a compiler collection for C, C++ and other languages.
  • python3 - the CPython interpreter for Python 3.6 and 3.7.
  • ruby - the official Ruby interpreter and basic toolkit (includes irb and ri as well).
The package also recommends the following packages, which can be safely removed without triggering the removal of the entire parrot-devel-tools metapackage:
  • default-jdk - the latest Java OpenJDK distribution for Java 11 (both JDK and JRE).
  • cython3 - a compiler for the Cython language, a strongly-typed dialect of Python for efficient code.
  • rust/cargo - the Rust compiler and development tools, plus its package management system.
  • valac - the Vala compiler.
  • mono-devel - the development tools for the Mono framework, an open source implementation of .NET.
  • mono-runtime - the runtime of the Mono framework, compatible and interoperable with the latest .NET runtime.
  • php-cli - the PHP 7.3 language plus its command-line interface and some useful core libraries.
  • perl6 - the Perl 6 interpreter and core libraries.
    sudo apt update
    sudo apt install parrot-devel-tools

parrot-devel-extra

The parrot-devel-extra metapackage is a quick way to install many additional development utilities like advanced IDEs, additional languages, debuggers and extra tools.
  • golang - go language compiler and runtime
  • nodejs - node.js framework
  • npm - node.js package manager
  • atom - advanced and extensible editor by GitHub
  • qtcreator - powerful C, C++ and Qt/QML IDE and debugger.
  • kdevelop - advanced general purpose IDE by KDE.
  • edb-debugger - graphical debugger.
  • jad - Java decompiler.
  • nasm - powerful general purpose x86 assembler.
  • radare2 - advanced command line hexadecimal editor.
  • cmake - cross-platform, open-source make system.
  • valgrind - instrumentation framework for building dynamic analysis tools.
  • devscripts/build-essential - useful development utilities for debian developers/maintainers.

sudo apt update
sudo apt install parrot-devel-extra



ProcDump - A Linux Version Of The ProcDump Sysinternals Tool


ProcDump is a Linux reimagining of the classic ProcDump tool from the Sysinternals suite of tools for Windows. ProcDump provides a convenient way for Linux developers to create core dumps of their application based on performance triggers.

Installation & Usage

Requirements
  • Minimum OS:
    • Red Hat Enterprise Linux / CentOS 7
    • Fedora 26
    • Mageia 6
    • Ubuntu 14.04 LTS
    • We are actively testing against other Linux distributions. If you have requests for specific distros, please let us know (or create a pull request with the necessary changes).
  • gdb >= 7.6.1
  • zlib (build-time only)

Install ProcDump

Via Package Manager [preferred method]

1. Add the Microsoft Product feed
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg

Then register the Microsoft Product feed for your distribution:

Ubuntu 16.04
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/microsoft.list'

Ubuntu 14.04
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-trusty-prod trusty main" > /etc/apt/sources.list.d/microsoft.list'

2. Install Procdump
sudo apt-get update
sudo apt-get install procdump

Via .deb Package
Pre-Depends: dpkg(>=1.17.5)

1. Download .deb Package

Ubuntu 16.04
wget https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod/pool/main/p/procdump/procdump_1.0.1_amd64.deb

Ubuntu 14.04
wget https://packages.microsoft.com/repos/microsoft-ubuntu-trusty-prod/pool/main/p/procdump/procdump_1.0.1_amd64.deb

2. Install Procdump
sudo dpkg -i procdump_1.0.1_amd64.deb
sudo apt-get -f install

Uninstall

Ubuntu 14.04+
sudo apt-get purge procdump

Usage
Usage: procdump [OPTIONS...] TARGET
OPTIONS
-C CPU threshold at which to create a dump of the process from 0 to 100 * nCPU
-c CPU threshold below which to create a dump of the process from 0 to 100 * nCPU
-M Memory commit threshold in MB at which to create a dump
-m Trigger when memory commit drops below specified MB value.
-n Number of dumps to write before exiting
-s Consecutive seconds before dump is written (default is 10)
TARGET must be exactly one of these:
-p pid of the process
-w Name of the process executable

Examples
The following examples all target a process with pid == 1234
The following will create a core dump immediately.
sudo procdump -p 1234
The following will create 3 core dumps 10 seconds apart.
sudo procdump -n 3 -p 1234
The following will create 3 core dumps 5 seconds apart.
sudo procdump -n 3 -s 5 -p 1234
The following will create a core dump each time the process has CPU usage >= 65%, up to 3 times, with at least 10 seconds between each dump.
sudo procdump -C 65 -n 3 -p 1234
The following will create a core dump each time the process has CPU usage >= 65%, up to 3 times, with at least 5 seconds between each dump.
sudo procdump -C 65 -n 3 -s 5 -p 1234
The following will create a core dump when CPU usage is outside the range [10,65].
sudo procdump -c 10 -C 65 -p 1234
The following will create a core dump when CPU usage is >= 65% or memory usage is >= 100 MB.
sudo procdump -C 65 -M 100 -p 1234
All options can also be used with -w instead of -p. -w will wait for a process with the given name.
The following waits for a process named my_application and creates a core dump immediately when it is found.
sudo procdump -w my_application
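To try the CPU triggers without a real workload, a small load generator helps. The following is a hypothetical Python helper (not part of ProcDump) that busy-loops one core so a threshold like -C 90 can be tripped against its PID:

import os

# Busy loop: drives one core toward 100% so "sudo procdump -C 90 -n 1 -p <pid>"
# has something to trigger on. Stop it with Ctrl+C when done.
print("Burning CPU; target me with: sudo procdump -C 90 -n 1 -p", os.getpid())
while True:
    pass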

Current Limitations
  • Currently will only run on Linux Kernels version 3.5+
  • Does not have full feature parity with the Windows version of ProcDump; specifically, the stay-alive functionality and custom performance counters are missing


SecureTea Project - The Purpose Of This Application Is To Warn The User (Via Various Communication Mechanisms) Whenever Their Laptop Is Accessed

A small IoT (Internet of Things) application to notify users via Twitter whenever someone accesses their laptop. It uses the touchpad/mouse/wireless mouse to determine activity, and is developed in Python and tested on Linux.
The purpose of this application is to warn the user (via various communication mechanisms) whenever their laptop is accessed. It was developed and tested in Python on a Linux machine and is likely to work well on the Raspberry Pi as well.

Target User:
It was written for anyone who is interested in IoT (Internet of Things) security, and it still needs further development.
How it functions:
  • Keeps track of the movement of the mouse/touchpad (a minimal detection sketch follows this list)
  • Detects who accesses the laptop on which the mouse/touchpad is installed
  • Sends warning messages on Twitter
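The detection idea can be sketched in a few lines of Python. This is an illustrative sketch using the pynput library (an assumption; SecureTea's own implementation may differ):

from pynput import mouse  # assumption: SecureTea may read input events differently

def on_move(x, y):
    # Any pointer movement counts as someone using the laptop.
    print("Activity detected at X=%d, Y=%d" % (x, y))

with mouse.Listener(on_move=on_move) as listener:
    listener.join()  # block forever, reporting movements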

Objective:
To alert the user via Twitter whenever his/her laptop has been accessed by someone. It can also be used to monitor your system.

Pre-requisites:
I. Hardware:
  • Linux OS / Raspberry Pi - with sudo access on the terminal/console
  • Mouse / wireless mouse / the laptop's built-in touchpad
  • The Twitter application installed on a mobile phone (optional)
II. Software:

Installation procedure:
  1. Python and python-setuptools must be installed. (If not already installed: sudo apt-get install python python-setuptools)
  2. Download/clone the repository from https://github.com/OWASP/SecureTea-Project.git:
  • git clone https://github.com/OWASP/SecureTea-Project.git
  3. Install the SecureTea package:
  • cd SecureTea-Project
  • python setup.py install
  4. Install the Python dependencies/requirements:
  • pip install -r requirements.txt
  5. Open "securetea.conf" in your home directory (~/.securetea/securetea.conf) with a text editor and edit the following variables, copy/pasting the API key and tokens from your Twitter apps (a usage sketch follows this list):
"api_key": "XXXX",
"api_secret_key": "XXXX",
"access_token": "XXXX",
"access_token_secret": "XXXX",
"username": "XXXX"
  6. Optionally, in "securetea.conf" you can set debug = true to enable the console log (default: enabled), or debug = false to disable logging to the console.
  7. Reinstall the mouse / wireless mouse / touchpad if it is not functioning properly (Linux / macOS / Raspberry Pi machine).
  8. Run the program: sudo SecureTea.py, or for help: SecureTea.py -h
  9. You should see a welcome message like this: [Core] [ 2018-08-30 16:50 ] Info : Welcome to SecureTea..!! Initializing System
  10. Access the laptop by moving the mouse/touchpad to see the cumulative X and Y coordinates on the console. If you have the Twitter app installed on your phone, you can get updates on the "message" from your Twitter account.
  11. Check the alert message on the console and in your Twitter inbox: [Core] [ 2018-08-30 16:50 ] Warn : (3) : Someone has access your laptop when
  12. If you want to monitor your system from a webapp:
  • cd gui
  • npm install
  • ng serve
  13. Open a new terminal tab and type: sudo python monitor.py
  14. Go to http://localhost:4200 to view your project. For END-POINT, type http://localhost:5000 and click SIGN IN.
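For step 5, the tokens end up driving a Twitter client. A minimal sketch of what that looks like, assuming the flat key layout shown above and the tweepy library (SecureTea's own Twitter code may differ):

import json
import os
import tweepy  # assumption: any Twitter client library works similarly

conf_path = os.path.expanduser("~/.securetea/securetea.conf")
with open(conf_path) as f:
    conf = json.load(f)  # assumption: the file is plain JSON with the keys above

auth = tweepy.OAuthHandler(conf["api_key"], conf["api_secret_key"])
auth.set_access_token(conf["access_token"], conf["access_token_secret"])
tweepy.API(auth).update_status("SecureTea alert: someone accessed your laptop")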



Roadmap:
  1. Notify by Twitter (done)
  2. Securetea Dashboard / Gui (done)
  3. Securetea Protection /firewall
  4. Securetea Antivirus
  5. Notify by Whatsapp
  6. Notify by SMS Alerts
  7. Notify by Line
  8. Notify by Telegram


LeakLooker - Find Open Databases With Shodan


Find open databases with Shodan
Background:
https://medium.com/@woj_ciech/leaklooker-find-open-databases-in-a-second-9da4249c8472

Requirements:
Python 3
Shodan paid plan, except for the Kibana search
Put your Shodan API key in line 65
pip3 install shodan
pip3 install colorama
pip3 install hurry.filesize
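Under the hood, the searches are ordinary Shodan API queries. A minimal sketch of that kind of query with the shodan library (the query string here is an assumption; see the script itself for the real ones):

import shodan

api = shodan.Shodan("YOUR_API_KEY")  # the script reads the key from line 65
results = api.search('product:"MongoDB"', page=12)
print("Found %s results" % results["total"])
for match in results["matches"]:
    # Each match carries the banner plus metadata such as IP and geolocation.
    print(match["ip_str"], match["location"]["country_name"])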

Usage
root@kali:~/# python leaklooker.py -h
[ASCII-art snake banner]
LeakLooker - Find open databases
https://medium.com/@woj_ciech https://github.com/woj-ciech/

usage: leaklooker.py [-h] [--elastic] [--couchdb] [--mongodb] [--kibana]
[--first FIRST] [--last LAST]

LeakLooker

optional arguments:
-h, --help show this help message and exit
--elastic Elasticsearch (default: False)
--couchdb CouchDB (default: False)
--mongodb MongoDB (default: False)
--kibana Kibana (default: False)

Pages:
--first FIRST First page (default: None)
--last LAST Last page (default: None)
You need to specify first and last page

Example
root@kali:~/# python leaklooker.py --mongodb --couchdb --kibana --elastic --first 12 --last 14
[...]
----------------------------------Elastic - Page 12--------------------------------
Found 25069 results
IP: http://xxx.xxx.xxx.xxx:9200/_cat/indices?v
Size: 1G
Country: France
Indices:
.monitoring-kibana-6-2019.01.08
[...]
----------------------------
IP: http://yyy.yyy.yyy.yyy:9200/_cat/indices?v
Size: 144G
Country: China
Indices:
zhuanli
hx_person
[...]
----------------------------------CouchDB - Page 12--------------------------------
Found 5932 results
-----------------------------
IP: http://xxx.xxx.xxx:5984/_utils
Country: Austria
new_fron_db
test_db
-----------------------------
IP: http://yyy.yyy.yyy.yyy:5984/_utils
Country: United States
_replicator
_users
backup_20180917
backup_db
eio_local
tfa_pos
----------------------------------MongoDB - Page 12--------------------------------
Found 66680 results
IP: xxx.xxx.xxx.xxx
Size: 6G
Country: France
Database name: Warn
Size: 80M
Collections:
Warn
system.indexes
Database name: xhprofprod
Size: 5G
Collections:
results
system.indexes
-----------------------------
IP: yyy.yyy.yyy.yyy
Size: 544M
Country: Ukraine
Database name: local
Size: 32M
Collections:
startup_log
Database name: ace_stat
Size: 256M
Collections:
stat_minute
system.indexes
stat_hourly
stat_daily
[...]
Database name: ace
Size: 256M
Collections:
usergroup
system.indexes
scheduletask
dpigroup
portforward
wlangroup
[...]
----------------------------------Kibana - Page 12--------------------------------
Found 10464 results
IP: http://xxx.xxx.xxx.xxx:5601/app/kibana#/discover?_g=()
Country: Germany
---
IP: http://yyy.yyy.yyy.yyy:5601/app/kibana#/discover?_g=()
Country: United States
---
IP: http://zzz.zzz.zzz.zzz:5601/app/kibana#/discover?_g=()
Country: United Kingdom

Screenshots




WiGLE - Wifi Wardriving (Nethugging Client For Android)



Open source network observation, positioning, and display client from the world's largest queryable database of wireless networks. Can be used for site-survey, security analysis, and competition with your friends. Collect networks for personal research or upload to https://wigle.net. WiGLE has been collecting and mapping network data since 2001, and currently has over 350m networks. WiGLE is *not* a list of networks you can use.
  • Uses GPS to estimate locations of observed networks.
  • Observations logged to local database to track your networks found.
  • Upload and compete on the global WiGLE.net leaderboard.
  • Real-time map of networks found, with overlays from entire WiGLE dataset.
  • Free, open source, no ads (pull requests welcome at https://github.com/wiglenet/wigle-wifi-wardriving ).
  • Export to CSV files on SD card (comma separated values).
  • Export to KML files on SD card (to import into Google Maps/Earth).
  • Bluetooth GPS support through mock locations.
  • Audio and Text-to-Speech alerting and "Mute" option to shut off all sound/speech.


Sh00T - A Testing Environment for Manual Security Testers


A Testing Environment for Manual Security Testers.

Sh00t
  • is a task manager to let you focus on performing security testing
  • provides To Do checklists of test cases
  • helps to create bug reports with customizable bug templates

Features:
  • Dynamic Task Manager to replace simple editors or task management tools that are NOT meant for Security
  • Automated, customizable Security test-cases Checklist to replace Evernote, OneNote or other tools which are NOT meant for Security
  • Manage custom bug templates for different purposes and automatically generate bug report
  • Support multiple Assessments & Projects to logically separate your different needs
  • Use it like paper - everything is saved automatically
  • Export auto generated bug report into Markdown & submit blindly on HackerOne! (WIP)
  • Integration with JIRA, ServiceNow - Coming soon
  • Export bug report into Markdown - Coming soon
  • Customize everything under-the-hood

Installation:
Sh00t requires Python 3 and a few more packages. The simplest way to set up Sh00t is using Conda Environments. However, Anaconda is optional if you have Python 3 and pip installed - you can jump to step 4 below.
Pre-requisite - One time setup:
  1. Install the minimal version of Anaconda: Miniconda, and follow the installation instructions. Remember to reload your bash profile or restart your terminal application to make the conda command available. For Windows, launch Anaconda Prompt and run all the commands below in that window only.
  2. Create a new Python 3 environment: conda create -n sh00t python=3.6
  3. Activate sh00t environment: conda activate sh00t. If you see an error message like CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'., you have to manually enable conda command. Follow the instructions shown with the error message. You may have to reload your bash profile or restart your terminal. Try activating sh00t again: conda activate sh00t. You should be seeing (sh00t) XXXX$ in your terminal.
  4. Clone or download the latest project into a location of your choice: https://github.com/pavanw3b/sh00t. git clone requires installation of Git.
  5. Navigate to the folder where sh00t is cloned or downloaded & extracted: cd sh00t. Note that this is the outer-most sh00t directory in project files. Not sh00t/sh00t.
  6. Install Sh00t dependency packages: pip install -r requirements.txt
  7. Setup database: python manage.py migrate
  8. Create a user account: python manage.py createsuperuser and follow the prompts to create an account.
  9. Optional but recommended: load 174 security test cases from the OWASP Testing Guide (OTG) and the Web Application Hacker's Handbook (WAHH): python reset.py.
That's all for the first time. Follow the next steps whenever you want to start Sh00t.
Starting Sh00t:
If you have Python 3 installed on your machine, you can jump to Step 3.
  1. For Linux/Mac, Open Terminal. For Windows, open Anaconda Prompt.
  2. Activate sh00t environment if not on yet: conda activate sh00t
  3. Navigate to sh00t directory if not in already: cd sh00t
  4. Start Sh00t server: python manage.py runserver
  5. Access http://127.0.0.1:8000/ on your favorite browser. Login with the user credentials created in the one-time setup above.
  6. Welcome to Sh00t!
  7. Once you are done, stop the server: Ctrl + C
  8. [Optional] Deactivate sh00t environment to continue with your other work: conda deactivate.

Upgrade:
  • Navigate to the folder where sh00t was cloned: cd sh00t
  • Stop the server if it's running: Ctrl + C
  • Pull the latest code base via git: git pull or download the source from github and replace the files.
  • Activate sh00t environment if not on yet: conda activate sh00t
  • Setup any additional dependencies: pip install -r requirements.txt
  • Make the latest database changes: python manage.py migrate
  • Start the server: python manage.py runserver

Troubleshoot:
Sh00t is written in Python and powered by the Django Web Framework. If you are stuck with any errors, Googling the error message should help most of the time. If you are not sure, please file a new issue on GitHub.

Glossary:
  • Flag: A Flag is a target that is sh00ted at. It's a test case that needs to be tested. Flags are generated automatically based on the testing methodology chosen. The bug might or might not be found - but the goal is to aim and sh00t at it. Flag contains detailed steps for testing. If the bug is confirmed, then it's called a sh0t.
  • Sh0t: Sh0ts are bugs. Typically a Sh0t contains a technical description of the bug, affected files/URLs, steps to reproduce, and a fix recommendation. Most of the content of a Sh0t is one-click generated, and only the dynamic content like affected parameters and steps has to be changed. Sh0ts can belong to an Assessment.
  • Assessment: An Assessment is a testing engagement. It can be an assessment of an application or a program - it is up to the user how they want to manage it. It is part of a Project.
  • Project: A Project contains Assessments. A Project can be a logical separation of what you do: a different job, a bug bounty - it is up to you to decide.

How does it work?
Begin with creating a new Assessment. Choose what methodology you want to test with. Today there are 330 test cases, grouped into 86 Flags, belonging to 13 Modules, created with reference to the "Web Application Hacker's Handbook" testing methodology. Modules & Flags can be handpicked & customized. Once an Assessment is created with its Flags, the tester has to test them, either manually or semi-automatically with the help of scanners and other tools as required, and mark them "Done" on completion. While performing an assessment we often come up with custom test cases specific to a certain scenario in the application. A new Flag can easily be created at any point of time.
Whenever a Flag is confirmed to be a valid bug, a Sh0t can be created. One can choose a bug template that matches best, and sh00t will auto fill the bug report based on the template chosen.

Screenshots:

Dashboard:


Working on a Flag:


Choosing Methodology and Test Cases while creating a new Assessment:


Filing a bug pre-filled with a template:


Who can use Sh00t?
  • Application Security Engineers: Pentesting & Vulnerability Assessments
  • Bug bounty hunters
  • Independent Security Researchers
  • Blue team, developers who fix
  • Anybody who wants to hack

Implementation details:
  • Language: Python 3
  • Framework: Django Web Framework
  • Dependencies: Django REST Framework, django-tables2: managed by /requirements.txt
  • UI: Bootstrap - Responsive
Contribution:
Credits:
  • Hari Valugonda
  • Mohd Aqeel Ahmed
  • Ajeeth Rakkappan


identYwaf - Blind WAF Identification Tool


identYwaf is an identification tool that can recognize web protection type (i.e. WAF) based on blind inference. Blind inference is being done by inspecting responses provoked by a set of predefined offensive (non-destructive) payloads, where those are used only to trigger the web protection system in between (e.g. http://<host>?aeD0oowi=1 AND 2>1). Currently it supports more than 60 different protection products (e.g. aeSecure, Airlock, CleanTalk, CrawlProtect, Imunify360, MalCare, ModSecurity, Palo Alto, SiteGuard, UrlScan, Wallarm, WatchGuard, Wordfence, etc.), while the knowledge-base is constantly growing.
Also, as part of this project, screenshots of characteristic responses for different web protection systems are being gathered (manually) for future reference.
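The blind inference idea can be illustrated with a short sketch: send a benign request and a payload-carrying one, then compare what comes back. This is illustrative only, not identYwaf's actual logic, and the host is hypothetical:

import requests

target = "http://target.example"  # hypothetical host
benign = requests.get(target, params={"aeD0oowi": "1"})
probe = requests.get(target, params={"aeD0oowi": "1 AND 2>1"})

if probe.status_code != benign.status_code:
    # A WAF in between typically answers the probe differently (e.g. 403);
    # the shape of that difference is what helps identify the product.
    print("Protection detected: %d vs %d" % (benign.status_code, probe.status_code))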

Screenshots








Installation
You can download the latest zipball by clicking here.
Preferably, you can download identYwaf by cloning the Git repository:
git clone --depth 1 https://github.com/stamparm/identYwaf.git
identYwaf works out of the box with Python version 2.6.x and 2.7.x on any platform.

Usage
$ python identYwaf.py 
[ASCII-art identYwaf banner] (1.0.X)

Usage: python identYwaf.py [options] <host|url>

Options:
--version Show program's version number and exit
-h, --help Show this help message and exit
--delay=DELAY Delay (sec) between tests (default: 0)
--timeout=TIMEOUT Response timeout (sec) (default: 10)
--proxy=PROXY HTTP proxy address (e.g. "http://127.0.0.1:8080")


FTW - Framework For Testing WAFs


This project was created by researchers from ModSecurity and Fastly to help provide rigorous tests for WAF rules. It uses the OWASP Core Ruleset V3 as a baseline to test rules on a WAF. Each rule from the ruleset is loaded into a YAML file that issues HTTP requests that will trigger these rules. Users can verify the execution of the rule after the tests are issued to make sure the expected response is received from an attack.

Goals / Use cases include:
  • Find regressions in WAF deployments by using continuous integration and issuing repeatable attacks to a WAF
  • Provide a testing framework for new rules into ModSecurity, if a rule is submitted it MUST have corresponding positive & negative tests
  • Evaluate WAFs against a common, agreeable baseline ruleset (OWASP)
  • Test and verify custom rules for WAFs that are not part of the core rule set
For our 1.0 release announcement, check out the OWASP CRS Blog

Installation
  • git clone https://github.com/CRS-support/ftw.git
  • cd ftw
  • virtualenv env && source ./env/bin/activate
  • pip install -r requirements.txt
  • py.test -s -v test/test_default.py --ruledir=test/yaml

Writing your first tests
The core of FTW is its extensible YAML-based tests. This section lists a few resources on how they are formatted, how to write them, and how you can use them.
OWASP CRS wrote a great blog post describing how FTW tests are written and executed.
YAMLFormat.md is the ground truth for all YAML fields that are currently understood by FTW.
After reading these two resources, you should be able to get started writing tests. You will most likely be checking against status-code responses, or against web responses using the log_contains directive. For integrating FTW to test regexes within your WAF logs, refer to ExtendingFTW.md
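At its core, each YAML test boils down to "send this request, expect that response". Here is a sketch of the equivalent check in Python, with a hypothetical target and an assumed blocking status of 403:

import requests

# A reflected XSS probe that a CRS rule should catch when the WAF is active.
resp = requests.get("http://waf.test.example/",
                    params={"q": "<script>alert(1)</script>"})
assert resp.status_code == 403, "expected the WAF to block this request"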

Provisioning Apache+Modsecurity+OWASP CRS
If you require an environment for testing WAF rules, one has been created with Apache, ModSecurity and version 3.0.0 of the OWASP Core Ruleset. It can be deployed by:
  • Checking out the repository: git clone https://github.com/fastly/waf_testbed.git
  • Typing vagrant up



Sn0Int - Semi-automatic OSINT Framework And Package Manager

$
0
0
sn0int is a semi-automatic OSINT framework and package manager. It was built for IT security professionals and bug hunters to gather intelligence about a given target or about yourself. sn0int enumerates attack surface by semi-automatically processing public information and mapping the results in a unified format for follow-up investigations.
Among other things, sn0int is currently able to:
  • Harvest subdomains from certificate transparency logs
  • Harvest subdomains from various passive dns logs
  • Sift through subdomain results for publicly accessible websites
  • Harvest emails from pgp keyservers
  • Enrich ip addresses with ASN and geoip info
  • Harvest subdomains from the wayback machine
  • Gather information about phonenumbers
  • Bruteforce interesting urls
sn0int is heavily inspired by recon-ng and maltego, but remains more flexible and is fully open source. None of the investigations listed above are hardcoded in the source; instead, they are provided by modules that are executed in a sandbox. You can easily extend sn0int by writing your own modules and sharing them with other users by publishing them to the sn0int registry. This allows you to ship updates for your modules on your own, since you don't need to send a pull request.
Join on IRC: irc.hackint.org:6697/#sn0int


Getting started


Scanner-Cli - A Project Security/Vulnerability/Risk Scanning Tool

$
0
0

The Hawkeye scanner-cli is a project security, vulnerability and general risk highlighting tool. It is meant to be integrated into your pre-commit hooks and your pipelines.

Running and configuring the scanner
The Hawkeye scanner-cli assumes that your directory structure is such that it keeps the toolchain's files on top level. Roughly, this is what it boils down to:
  • Node.js projects have a package.json on top level
  • Ruby projects will have a Gemfile on top level
  • Python projects will have a requirements.txt on top level
  • PHP projects will have a composer.lock on top level
  • Java projects will have a build (gradle) or target (maven) folder, and include .java and .jar files
This is not exhaustive as sometimes tools require further files to exist. To understand how the modules decide whether they can handle a project, please check the How it works section and the modules folder.

Docker (recommended)
The docker image is hands-down the easiest way to run the scanner. Please note that your project root (e.g. $PWD) needs to be mounted to /target.
docker run --rm -v $PWD:/target hawkeyesec/scanner-cli
The docker build is also the recommended way to run the scanner in your CI pipelines. This is an example of running Hawkeye against one of your projects in GoCD:
<pipeline name="security-scan">
  <stage name="Hawkeye" cleanWorkingDir="true">
    <jobs>
      <job name="scan">
        <tasks>
          <exec command="docker">
            <arg>pull</arg>
            <arg>hawkeyesec/scanner-cli</arg>
            <runif status="passed" />
          </exec>
          <exec command="bash">
            <arg>-c</arg>
            <arg>docker run --rm -v $PWD:/target hawkeyesec/scanner-cli</arg>
            <runif status="passed" />
          </exec>
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>

npm
You can install and run hawkeye in a Node.js project via
npm install --save-dev @hawkeyesec/scanner-cli
npx hawkeye scan
This method is recommended in a Node.js project, where the other toolchains (e.g. python, ruby) are not required.
With this method, it is also recommended to invoke the scanner in a git pre-commit hook (e.g. via the pre-commit package) to fail the commit if issues are found.

Configuration Files (recommended)
You can configure the scanner via .hawkeyerc and .hawkeyeignore files in your project root.
The .hawkeyerc file is a JSON file that allows you to configure ...
  • the modules to run,
  • the writers to use, and
  • the failure threshold
{
  "all": true|false,
  "staged": true|false,
  "modules": ["files-ccnumber", "java-owasp", "java-find-secbugs"],
  "sumo": "http://your.sumologic.foobar/collector",
  "http": "http://your.logger.foobar/collector",
  "json": "log/results.json",
  "failOn": "low"|"medium"|"high"|"critical",
  "showCode": true|false
}
The .hawkeyeignore file is a collection of regular expressions matching paths and module error codes to exclude from the scan, and is equivalent to using the --exclude flag. Lines starting with # are regarded as comments.
Please note that any special characters reserved in regular expressions (-[]{}()*+?.,^$|#\s) need to be escaped when used as a literal!
Please also note that the module error codes are usually not shown, since they are not primarily relevant for the user. If you want to exclude a certain false positive, you can display the module error codes with the flag --show-code or the showCode property in the .hawkeyerc.
^test/

# this is a comment

^README.md
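Because every entry is a regular expression, a literal path containing reserved characters has to be escaped. Python's re.escape shows the rule (a quick illustration, independent of the scanner itself):

import re

# Escape regex metacharacters so a path matches literally in .hawkeyeignore.
print(re.escape("lib/c++/v1"))        # lib/c\+\+/v1
print(re.escape("notes[draft].md"))   # notes\[draft\]\.md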

The CLI
Use hawkeye modules to list the available modules and their status.
> npx hawkeye modules
[info] Version: v1.4.0
[info] Module Status
[info] Enabled: files-ccnumber
[info] Scans for suspicious file contents that are likely to contain credit card numbers
[info] Enabled: files-contents
[info] Scans for suspicious file contents that are likely to contain secrets
[info] Disabled: files-entropy
[info] Scans files for strings with high entropy that are likely to contain passwords
[info] Enabled: files-secrets
[info] Scans for suspicious filenames that are likely to contain secrets
[info] Enabled: java-find-secbugs
[info] Finds common security issues in Java code with findsecbugs
[info] Enabled: java-owasp
[info] Scans Java projects for gradle/maven dependencies with known vulnerabilities with the OWASP dependency checker
[info] Enabled: node-crossenv
[info] Scans node projects for known malicious crossenv dependencies
[info] Enabled: node-npmaudit
[info] Checks node projects for dependencies with known vulnerabilities
[info] Enabled: node-npmoutdated
[info] Checks node projects for outdated npm modules
[info] Enabled: node-yarnaudit
[info] Checks yarn projects for dependencies with known vulnerabilities
[info] Enabled: node-yarnoutdated
[info] Checks node projects for outdated yarn modules
[info] Enabled: php-security-checker
[info] Checks whether the composer.lock contains dependencies with known vulnerabilities using security-checker
[info] Enabled: python-bandit
[info] Scans for common security issues in Python code with bandit.
[info] Enabled: python-piprot
[info] Scans python dependencies for out of date packages
[info] Enabled: python-safety
[info] Checks python dependencies for known security vulnerabilities with the safety tool.
[info] Enabled: ruby-brakeman
[info] Statically analyzes Rails code for security issues with Brakeman.
[info] Enabled: ruby-bundler-scan
[info] Scan for Ruby gems with known vulnerabilities using bundler

Use hawkeye scan to kick off a scan:
npx hawkeye scan --help
[info] Version: v1.3.0
Usage: hawkeye-scan [options]

Options:
-a, --all Scan all files, regardless if a git repo is found. Defaults to tracked files in git repositories.
-t, --target [/path/to/project] The location to scan. Defaults to $PWD.
-f, --fail-on [low|medium|high|critical] Set the level at which hawkeye returns non-zero status codes. Defaults to low.
-m, --module [module name] Run specific module. Defaults to all applicable modules.
-e, --exclude [pattern] Specify one or more exclusion patterns (eg. test/*). Can be specified multiple times.
-j, --json [/path/to/file.json] Write findings to file.
-s, --sumo [https://sumologic-http-connector] Write findings to SumoLogic.
-H, --http [https://your-site.com/api/results] Write findings to a given url.
--show-code Shows the code the module uses for reporting, useful for ignoring certain false positives
-g, --staged Scan only git-staged files.
-h, --help output usage information

Results

Exit Codes

The scanner-cli responds with the following exit codes:
  • Exit code 0 indicates no findings above or equal to the minimum threshold were found.
  • Exit code 1 indicates that issues were found above or equal to the minimum threshold.
  • Exit code 42 indicates that an unexpected error happened somewhere in the program. This is likely a bug and should not happen. Please check the log output and report a bug.

Redirecting the console output

If you wish to redirect the console logger output, the recommended method is latching onto stdout. In this example, we're making use of both JSON and stdout results:

docker run --rm -v $PWD:/target hawkeyesec/scanner-cli -j hawkeye-results.json -f critical 2>&1 | tee hawkeye-results.txt

Console output
By default, the scanner outputs its results to the console in tabular form.

Sumologic
The results can be sent to a SumoLogic collector of your choice. In this example, we have a collector with a single HTTP source.
hawkeye scan --sumo https://collectors.us2.sumologic.com/receiver/v1/http/your-http-collector-url
In SumoLogic, search for _collector="hawkeye" | json auto:


Any HTTP endpoint
Similar to the SumoLogic example, the scanner can send the results to any given HTTP endpoint that accepts POST messages.
hawkeye scan --http http://your.logging.foobar/endpoint
The results will be sent with User-Agent: hawkeye. Similar to the console output, the following JSON will be POSTed for each finding:
{
  "module": "files-contents",
  "level": "critical",
  "offender": "testfile3.yml",
  "description": "Private key in file",
  "mitigation": "Check line number: 3"
}
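On the receiving side, any endpoint that accepts a JSON POST will do. A minimal sketch of such a collector with Flask (the path and handling are illustrative, not part of Hawkeye):

from flask import Flask, request

app = Flask(__name__)

@app.route("/endpoint", methods=["POST"])
def collect():
    # Each POST carries one finding in the shape shown above.
    finding = request.get_json(force=True)
    print(finding["level"], finding["offender"], finding["description"])
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)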

How it works
Hawkeye is designed to be extensible by adding modules and writers.

Modules
Modules are basically little bits of code that either implement their own logic, or wrap a third party tool and standardise the output. They only run if the required criteria are met. For example: The npm outdated module would only run if a package.json is detected in the scan target - as a result, you don't need to tell Hawkeye what type of project you are scanning.

Generic Modules
  • files-ccnumber: Scans for suspicious file contents that are likely to contain credit card numbers
  • files-contents: Scans for suspicious file contents that are likely to contain secrets
  • files-entropy: Scans files for strings with high entropy that are likely to contain passwords. Entropy scanning is disabled by default because of the high number of false positives. It is useful to scan codebases for keys every now and then, in which case please run it using the -m files-entropy switch.
  • files-secrets: Scans for suspicious filenames that are likely to contain secrets

Java
  • java-find-secbugs: Finds common security issues in Java code with findsecbugs
  • java-owasp: Scans Java projects for gradle/maven dependencies with known vulnerabilities with the OWASP dependency checker

Node.js
  • node-crossenv: Scans node projects for known malicious crossenv dependencies
  • node-npmaudit: Checks node projects for dependencies with known vulnerabilities with npm audit
  • node-npmoutdated: Checks node projects for outdated npm modules with npm outdated
  • node-yarnaudit: Checks yarn projects for dependencies with known vulnerabilities with yarn audit
  • node-yarnoutdated: Checks node projects for outdated yarn modules with yarn outdated

PHP
  • php-security-checker: Checks whether the composer.lock contains dependencies with known vulnerabilities using security-checker

Python
  • python-bandit: Scans for common security issues in Python code with bandit.
  • python-piprot: Scans python dependencies for out of date packages with piprot
  • python-safety: Checks python dependencies for known security vulnerabilities with the safety tool.

Ruby
  • ruby-brakeman: Statically analyzes Rails code for security issues with Brakeman.
  • ruby-bundler-scan: Scan for Ruby gems with known vulnerabilities using bundler

Adding a module
If you have an idea for a module, please feel free to open a feature request in the issues section. If you have a bit of time left, please consider sending us a pull request. To see how modules work, please head over to the modules folder.


ADAPT - Tool That Performs Automated Penetration Testing For WebApps


ADAPT is a tool that performs Automated Dynamic Application Penetration Testing for web applications. It is designed to increase accuracy, speed, and confidence in penetration testing efforts. ADAPT automatically tests for multiple industry standard OWASP Top 10 vulnerabilities, and outputs categorized findings based on these potential vulnerabilities. ADAPT also uses the functionality from OWASP ZAP to perform automated active and passive scans, and auto-spidering. Due to the flexible nature of the ADAPT tool, all of these features and tests can be enabled or disabled from the configuration file. For more information on tests and configuration, please visit the ADAPT wiki.

How it Works
ADAPT uses Python to create an automated framework that drives industry standard tools, such as OWASP ZAP and Nmap, to perform repeatable, well-designed procedures with anticipated results, creating an easily understandable report listing vulnerabilities detected within the web application.
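The ZAP side of that workflow can be sketched with the python-owasp-zap-v2.4 client. This is a hypothetical outline of spider-then-active-scan, not ADAPT's actual wrapper code, and the target and API key are assumptions:

import time
from zapv2 import ZAPv2

target = "http://testapp.example"   # hypothetical target
zap = ZAPv2(apikey="changeme")      # assumes a local ZAP daemon with this key

zap.urlopen(target)                 # seed the site tree
spider_id = zap.spider.scan(target) # auto-spidering
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(2)

scan_id = zap.ascan.scan(target)    # active scan
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

print(zap.core.alerts(baseurl=target))  # the raw findings ADAPT would categorize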

Automated Tests:
* OTG-IDENT-004 – Account Enumeration
* OTG-AUTHN-001 - Testing for Credentials Transported over an Encrypted Channel
* OTG-AUTHN-002 – Default Credentials
* OTG-AUTHN-003 - Testing for Weak lock out mechanism
* OTG-AUTHZ-001 – Directory Traversal
* OTG-CONFIG-002 - Test Application Platform Configuration
* OTG-CONFIG-006 – Test HTTP Methods
* OTG-CRYPST-001 - Testing for Weak SSL/TLS Ciphers, Insufficient Transport Layer Protection
* OTG-CRYPST-002 - Testing for Padding Oracle
* OTG-ERR-001 - Testing for Error Code
* OTG-ERR-002 – Testing for Stack Traces
* OTG-INFO-002 – Fingerprinting the Webserver
* OTG-INPVAL-001 - Testing for Reflected Cross site scripting
* OTG-INPVAL-002 - Testing for Stored Cross site scripting
* OTG-INPVAL-003 – HTTP Verb Tampering
* OTG-SESS-001 - Testing for Session Management Schema
* OTG-SESS-002 – Cookie Attributes

Installing the Plugin
  1. Detailed install instructions.


CIRTKit - Tools For The Computer Incident Response Team


One DFIR console to rule them all. Built on top of the Viper Framework

Documentation
  • Please see the wiki for more information about CIRTKit and documentation

Roadmap

Future integrations
  • Bit9
  • Palo Alto Networks
  • EnCase/FTK

Future modules
  • Packet Analysis (possibly Dshell)
  • Javascript Unpacking/Deobfuscation
  • Volatility Memory Analysis Framework
  • Hex Viewer/Editor

Scripting Framework
  • Automation is key. Scripting is key to DFIR, and thus needs to be available in CIRTKit


Uncle Spufus - A Tool That Automates Mac Address Spoofing


A tool that automates Mac address spoofing

What is Uncle Spufus
Uncle Spufus is a tool that automates MAC address spoofing. To do so it tries various techniques and checks if the MAC is successfully spoofed.
It makes use of:
  • macchanger
  • bash

Installing Uncle Spufus
  1. Download the zip and extract it, OR clone the repository.
  2. Navigate to uncle-spufus:
     cd uncle-spufus
  3. Make uspufus.sh executable:
     chmod +x uspufus.sh
  4. Execute it:
     ./uspufus.sh
  5. Have fun


Pown Recon - A Powerful Target Reconnaissance Framework Powered By Graph Theory


Pown Recon is a target reconnaissance framework powered by graph theory. The benefit of using graph theory instead of a flat table representation is that it is easier to find the relationships between different types of information, which comes in quite handy in many situations. Graph theory algorithms also help with diffing, searching, like finding the shortest path, and many more interesting tasks.

Quickstart
This tool is meant to be used as part of Pown.js but it can be invoked separately as an independent tool.
If installed globally as part of Pown invoke like this:
$ pown recon
Otherwise, install this module from the root of your project:
$ npm install @pown/recon --save
Once done, invoke pown recon like this:
$ ./node_modules/.bin/pown-cli recon
You can also use Pown to invoke it locally:
$ POWN_ROOT=. pown recon

Usage
WARNING: This pown command is currently under development and as a result will be subject to breaking changes.
pown recon [options] <command>

Target recon

Commands:
pown recon transform <transform> Perform inline transformation [aliases: t]
pown recon select <expression> Perform a selection [aliases: s]
pown recon diff <fileA> <fileB> Perform a diff between two recon files [aliases: d]

Options:
--version Show version number [boolean]
--debug Debug mode [boolean]
--help Show help [boolean]

Transform
pown recon transform <transform>

Perform inline transformation

Commands:
pown recon transform archiveindex [options] <nodes...> Obtain a commoncraw index for specific URL. [aliases: archive_index, arci]
pown recon transform awsiamendpoints [options] <nodes...> Enumerate AWS IAM Endpoints [aliases: aws_iam_endpoints, awsie]
pown recon transform builtwithscraperelationships [options] <nodes...> Performs scrape of builtwith relationships [aliases: builtwith_scrape_relationships, bwsr]
pown recon transform cloudflarednsquery [options] <nodes...> Query CloudFlare DNS API [aliases: cloudflare_dns_query, cfdq]
pown recon transform commoncrawlindex [options] <nodes...> Obtain a Common Crawl index for a specific URL. [aliases: commoncrawl_index, cci]
pown recon transform crtshdomainreport [options] <nodes...> Obtain crt.sh domain report which helps enumerating potential target subdomains. [aliases: crtsh_domain_report, crtshdr]
pown recon transform dockerhublistrepos [options] <nodes...> List the first 100 DockerHub repositories [aliases: dockerhub_list_repos, dhlr]
pown recon transform githublistrepos [options] <nodes...> List the first 100 GitHub repositories [aliases: github_list_repos, ghlr]
pown recon transform githublistmembers [options] <nodes...> List the first 100 GitHub members in org [aliases: github_list_members, ghlm]
pown recon transform gravatar [options] <nodes...> Get gravatar
pown recon transform hackertargetreverseiplookup [options] <nodes...> Obtain reverse IP information from hackertarget.com. [aliases: hackertarget_reverse_ip_lookup, htril]
pown recon transform hibpreport [options] <nodes...> Obtain haveibeenpwned.com breach report. [aliases: hibp_report, hibpr]
pown recon transform pkslookupkeys [options] <nodes...> Look up the PKS database at pool.sks-keyservers.net, which pgp.mit.edu is part of. [aliases: pks_lookup_keys, pkslk]
pown recon transform riddleripsearch [options] <nodes...> Searches for IP references using F-Secure riddler.io. [aliases: riddler_ip_search, ris]
pown recon transform riddlerdomainsearch [options] <nodes...> Searches for Domain references using F-Secure riddler.io. [aliases: riddler_domain_search, rds]
pown recon transform threatcrowddomainreport [options] <nodes...> Obtain threatcrowd domain report which helps enumerating potential target subdomains and email addresses. [aliases: threatcrowd_domain_report, tcdr]
pown recon transform threatcrowdipreport [options] <nodes...> Obtain threatcrowd ip report which helps enumerating virtual hosts. [aliases: threatcrowd_ip_report, tcir]
pown recon transform urlscanliveshot [options] <nodes...> Generates a liveshot of any public site via urlscan. [aliases: usls]
pown recon transform wappalyzerprofile [options] <nodes...> Enumerate technologies with api.wappalyzer.com [aliases: wappalyzer_profile, wzp]
pown recon transform whatsmynamereport [options] <nodes...> Find social accounts with whatsmyname database. [aliases: wmnr]
pown recon transform zoomeyescrapesearchresults [options] <nodes...> Performs first page scrape on ZoomEye search results [aliases: zoomeye_scrape_search_results, zyssr]

Options:
--version Show version number [boolean]
--debug Debug mode [boolean]
--help Show help [boolean]
--read, -r Read file [string]
--write, -w Write file [string]

Select
pown recon select <expression>

Perform a selection

Options:
--version Show version number [boolean]
--debug Debug mode [boolean]
--help Show help [boolean]
--read, -r Read file [string]
--write, -w Write file [string]
--output-format, -o Output format [string] [choices: "table", "csv", "json"] [default: "table"]
--output-fields Output fields [string] [default: ""]
--output-with-ids Output ids [boolean] [default: false]

Diff
pown recon diff <fileA> <fileB>

Perform a diff between two recon files

Options:
--version Show version number [boolean]
--debug Debug mode [boolean]
--help Show help [boolean]
--subset, -s The subset to select [choices: "left", "right", "both"] [default: "left"]
--write, -w Write file [string]
--output-format, -o Output format [string] [choices: "table", "csv", "json"] [default: "table"]
--output-fields Output fields [string] [default: ""]
--output-with-ids Output ids [boolean] [default: false]

Transforms
  • GitHub Search of Repos and Members
  • CloudFlare 1.1.1.1 DNS API
  • CRTSH
  • DockerHub Repo Search
  • Gravatar URLs
  • Hacker Target Reverse IP Lookup
  • Have I Been Pwned Lookup
  • PKS Lookup
  • Urlscan Live Shot
  • Threatcrowd Lookup
  • ZoomEye Scraper
  • Wappalyzer
  • AWS Landing Pages
  • Builtwith
  • Riddler
  • Common Crawl
  • Archive.org
  • WhatsMyName

Tutorial
To demonstrate the power of Pown Recon and graph-based OSINT (Open Source Intelligence), let's have a look at the following trivial example.
Let's start by querying everyone who is a member of Google's engineering team and contributes to their GitHub account.
pown recon t -w google.network ghlm google
This command will generate a table similar to this:
┌─────────┬─────────────────┬────────────────────────────────────────────┬─────────────────────────┬─────────────────────────────────────────────────────────┐
│ (index) │ type │ uri │ login │ avatar │
├─────────┼─────────────────┼────────────────────────────────────────────┼─────────────────────────┼─────────────────────────────────────────────────────────┤
│ 0 │ 'github:member' │ 'https://github.com/3rf' │ '3rf' │ 'https://avatars1.githubusercontent.com/u/1242478?v=4' │
│ 1 │ 'github:member' │ 'https://github.com/aaroey' │ 'aaroey' │ 'https://avatars0.githubusercontent.com/u/31743510?v=4' │
│ 2 │ 'github:member' │ 'https://github.com/aarongable' │ 'aarongable' │ 'https://avatars3.githubusercontent.com/u/2474926?v=4' │
...
...
...
│ 97 │ 'github:member' │ 'https://github.com/alexv' │ 'alexv' │ 'https://avatars0.githubusercontent.com/u/30807372?v=4' │
│ 98 │ 'github:member' │ 'https://github.com/alexwhouse' │ 'alexwhouse' │ 'https://avatars3.githubusercontent.com/u/1448490?v=4' │
│ 99 │ 'github:member' │ 'https://github.com/alexwoz' │ 'alexwoz' │ 'https://avatars3.githubusercontent.com/u/501863?v=4' │
└─────────┴─────────────────┴────────────────────────────────────────────┴─────────────────────────┴─────────────────────────────────────────────────────────┘
You just created your first network!
The representation is tabular for convenience but underneath we've got a model which consists of nodes connected by edges.
If you are wondering what that looks like you can use SecApps Recon. The command line does not have the necessary level of interactivity to present the complexity of graphs.
The -w google.network command line option exported the network to a file. You can load the file directly into SecApps Recon with the file open feature. The result will look like this:


Now imagine that we want to query what repositories these Google engineers are working on. This is easy. First, we need to select the nodes in the graph and then transform them with the "GitHub List Repositories" transformation. This is how we do it from the command line:
pown recon t ghlr -r google.network -w google2.network -s 'node[type="github:member"]'
If you don't hit GitHub API rate limits, you will be presented with this:
┌─────────┬───────────────┬──────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────────────────────────────┐
│ (index) │ type │ uri │ fullName │
├─────────┼───────────────┼──────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────────────────────────────┤
│ 0 │ 'github:repo' │ 'https://github.com/3rf/2015-talks' │ '3rf/2015-talks' │
│ 1 │ 'github:repo' │ 'https://github.com/3rf/codecoroner' │ '3rf/codecoroner' │
│ 2 │ 'github:repo' │ 'https://github.com/3rf/DefinitelyTyped' │ '3rf/DefinitelyTyped' │
...
...
...
│ 1348 │ 'github:repo' │ 'https://github.com/agau4779/ultimate-tic-tac-toe' │ 'agau4779/ultimate-tic-tac-toe' │
│ 1349 │ 'github:repo' │ 'https://github.com/agau4779/worm_scraper' │ 'agau4779/worm_scraper' │
│ 1350 │ 'github:repo' │ 'https://github.com/agau4779/zsearch' │ 'agau4779/zsearch' │
└─────────┴───────────────┴──────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────────────────────────────┘
Since we now have two files, google.network and google2.network, you might be wondering what the difference between them is. Well, we have a tool for doing just that. This is how we use it:
pown recon diff google.network google2.network
Now we know! This feature is quite useful if you are building large recon maps and want to know the key differences. Imagine your cron job performs the same recon every day and you would like to know if something new appeared that might be worth exploring further. Hello, bug bounty hunters!
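Conceptually, the diff reduces to comparing the node sets of the two networks. A rough Python sketch of that idea (the .network file layout here is an assumption; Pown Recon's own diff works on the full graph):

import json

def node_uris(path):
    with open(path) as f:
        network = json.load(f)  # assumption: the exported network is JSON
    return {node["uri"] for node in network.get("nodes", [])}

old, new = node_uris("google.network"), node_uris("google2.network")
print("added:", sorted(new - old))      # nodes only in the newer network
print("removed:", sorted(old - new))    # nodes that disappeared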


Pwndb - Search For Credentials Leaked On Pwndb


A data leak differs from a data breach in that the former usually happens through omission or faulty practices rather than overt action, and may be so slight that it is never detected. While a data breach usually means that sensitive data has been harvested by someone who should not have accessed it, a data leak is a situation where such sensitive information might have been inadvertently exposed. pwndb is an onion service where leaked accounts are searchable using a simple form.

After a breach occurs, the data obtained is often put on sale. Sometimes people try to blackmail the affected company, asking for money in exchange for not posting the data online. The second option is selling the data to a competitor, a rival or even an enemy. This data is used in so many different ways by companies and countries... but when the people responsible for obtaining the data fail to sell it, the bundle becomes worthless and ends up being placed on sites like Pastebin or pwndb.


pwndb is a tool to search for leaked credentials on pwndb using the command line.
[ASCII-art pwndb banner]


pwndb.py -u <username> -d <domain>
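Since pwndb lives behind Tor, queries have to go through a local SOCKS proxy. Here is a hedged sketch of that plumbing with requests (the onion address is elided on purpose, and the form field names are assumptions for illustration):

import requests  # needs the requests[socks] extra for the socks5h scheme

proxies = {
    "http": "socks5h://127.0.0.1:9050",   # default Tor SOCKS port
    "https": "socks5h://127.0.0.1:9050",
}
resp = requests.post(
    "http://<pwndb-onion-address>.onion/",  # elided; the tool knows the real one
    data={"luser": "someuser", "domain": "example.com"},  # hypothetical fields
    proxies=proxies,
)
print(resp.text)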

Tutorial
Go to https://davidtavarez.github.io/osint/2019/01/25/pwndb-command-line-tool-python.html



Bolt - CSRF Scanning Suite


Bolt is in the beta phase of development, which means there can be bugs. Any production use of this tool is discouraged. Pull requests and issues are welcome. I also suggest you put this repo on watch if you are interested in it.

Workflow

Crawling
Bolt crawls the target website to the specified depth and stores all the HTML forms found in a database for further processing.

Evaluating
In this phase, Bolt finds the tokens which aren't strong enough and the forms which aren't protected.

Comparing
This phase focuses on the detection of replay attack scenarios, and hence checks whether a token has been issued more than once. It also calculates the average Levenshtein distance between all the tokens to see if they are similar (sketched below).
Tokens are also compared against a database of 250+ hash patterns.
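The similarity metric is plain average pairwise Levenshtein distance. A from-scratch sketch (Bolt's own implementation may differ, and the tokens here are hypothetical):

from itertools import combinations

def levenshtein(a, b):
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

tokens = ["a3f9c2", "a3f9c8", "b71e00"]  # hypothetical CSRF tokens
pairs = list(combinations(tokens, 2))
print(sum(levenshtein(x, y) for x, y in pairs) / len(pairs))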

Observing
In this phase, 100 simultaneous requests are made to a single webpage to see if the same tokens are generated for the requests.

Testing
This phase is dedicated to active testing of the CSRF protection mechanism. It includes, but is not limited to, checking whether protection exists for mobile browsers, submitting requests with a self-generated token, and testing whether the token is only checked up to a certain length.

Analysing
Various statistical checks are performed in this phase to see if the token is really random. The following tests are performed during this phase (the monobit frequency test is sketched after this list):
  • Monobit frequency test
  • Block frequency test
  • Runs test
  • Spectral test
  • Non-overlapping template matching test
  • Overlapping template matching test
  • Serial test
  • Cumulative sums test
  • Approximate entropy test
  • Random excursions variant test
  • Linear complexity test
  • Longest runs test
  • Maurer's universal statistic test
  • Random excursions test
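As an example of what these checks do, here is a sketch of the monobit frequency test for a token's bit string, per the NIST SP 800-22 formulation (a small p-value, e.g. below 0.01, suggests the bits are not random; Bolt's implementation may differ, and the token is hypothetical):

import math

def monobit_p_value(bits):
    # Map 0/1 to -1/+1, sum, normalize by sqrt(n), and feed the
    # complementary error function to get a p-value.
    n = len(bits)
    s = sum(1 if b == "1" else -1 for b in bits)
    return math.erfc((abs(s) / math.sqrt(n)) / math.sqrt(2))

token_bits = bin(int("a3f9c2d114", 16))[2:]  # hypothetical token as a bit string
print(monobit_p_value(token_bits))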

Usage
Scanning a website for CSRF using Bolt is as easy as doing
python3 bolt.py -u https://github.com -l 2
Where -u is used to supply the URL and -l is used to specify the depth of crawling.
Other options and switches:
  • -t number of threads
  • --delay delay between requests
  • --timeout http request timeout
  • --headers supply http headers

Credits
Regular Expressions for detecting hashes are taken from hashID.
Bit level entropy tests are taken from highfestiva's python implementation of statistical tests.


Fierce - Semi-Lightweight Scanner That Helps Locate Non-Contiguous IP Space And Hostnames Against Specified Domains


Fierce is a semi-lightweight scanner that helps locate non-contiguous IP space and hostnames against specified domains.
It's really meant as a precursor to nmap, unicornscan, nessus, nikto, etc., since all of those require that you already know what IP space you are looking for.
This does not perform exploitation and does not scan the whole internet indiscriminately. It is meant specifically to locate likely targets both inside and outside a corporate network.
Because it primarily uses DNS, you will often find misconfigured networks that leak internal address space. That's especially useful in targeted malware.

Options:
-connect    Attempt to make HTTP connections to any non-RFC1918
(public) addresses. This will output the return headers, but
be warned: this could take a long time against a company with
many targets, depending on network/machine lag. I wouldn't
recommend doing this unless it's a small company or you have a
lot of free time on your hands (it could take hours or days).
Inside the file specified, the text "Host:\n" will be replaced
by the host specified. Usage:

perl fierce.pl -dns example.com -connect headers.txt

-delay The number of seconds to wait between lookups.
-dns The domain you would like scanned.
-dnsfile Use DNS servers provided by a file (one per line) for
reverse lookups (brute force).
-dnsserver Use a particular DNS server for reverse lookups
(probably should be the DNS server of the target). Fierce
uses your DNS server for the initial SOA query and then uses
the target's DNS server for all additional queries by default.
-file A file you would like to output to be logged to.
-fulloutput When combined with -connect this will output everything
the webserver sends back, not just the HTTP headers.
-help This screen.
-nopattern Don't use a search pattern when looking for nearby
hosts. Instead dump everything. This is really noisy but
is useful for finding other domains that spammers might be
using. It will also give you lots of false positives,
especially on large domains.
-range Scan an internal IP range (must be combined with
-dnsserver). Note that this does not support a pattern
and will simply output anything it finds. Usage:

perl fierce.pl -range 111.222.333.0-255 -dnsserver ns1.example.co

-search Search list. When fierce attempts to traverse up and
down IP space it may encounter other servers within other
domains that may belong to the same company. If you supply a
comma-delimited list to fierce it will report anything found.
This is especially useful if the corporate servers are named
differently from the public-facing website. Usage:

perl fierce.pl -dns examplecompany.com -search corpcompany,blahcompany

Note that using search could also greatly expand the number of
hosts found, as it will continue to traverse once it locates
servers that you specified in your search list. The more the
better.
-suppress Suppress all TTY output (when combined with -file).
-tcptimeout Specify a different timeout (default 10 seconds). You
may want to increase this if the DNS server you are querying
is slow or has a lot of network lag.
-threads Specify how many threads to use while scanning (default
is single threaded).
-traverse Specify a number of IPs above and below whatever IP you
have found to look for nearby IPs. Default is 5 above and
below. Traverse will not move into other C blocks (see the
sketch after this option list).
-version Output the version number.
-wide Scan the entire class C after finding any matching
hostnames in that class C. This generates a lot more traffic
but can uncover a lot more information.
-wordlist Use a separate wordlist (one word per line). Usage:

perl fierce.pl -dns examplecompany.com -wordlist dictionary.txt
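
To illustrate the -traverse behaviour described above, here is a minimal Python sketch (illustrative only; Fierce itself is written in Perl) that expands a found IP into its neighbours without leaving the /24:

import ipaddress

def traverse(ip, spread=5):
    # return up to `spread` IPs above and below `ip`, clamped to its /24
    addr = ipaddress.IPv4Address(ip)
    net = ipaddress.ip_network(f"{ip}/24", strict=False)
    lo = max(int(addr) - spread, int(net.network_address))
    hi = min(int(addr) + spread, int(net.broadcast_address))
    return [str(ipaddress.IPv4Address(i))
            for i in range(lo, hi + 1) if i != int(addr)]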

fierce Usage Example
root@kali:~# fierce -dns example.com
DNS Servers for example.com:
b.iana-servers.net
a.iana-servers.net

Trying zone transfer first...
Testing b.iana-servers.net
Request timed out or transfer not allowed.
Testing a.iana-servers.net
Request timed out or transfer not allowed.

Unsuccessful in zone transfer (it was worth a shot)
Okay, trying the good old fashioned way... brute force

Checking for wildcard DNS...
Nope. Good.
Now performing 2280 test(s)...


XIP - Tool To Generate A List Of IP Addresses By Applying A Set Of Transformations Used To Bypass Security Measures E.G. Blacklist Filtering, WAF, Etc.

XIP generates a list of IP addresses by applying a set of transformations used to bypass security measures, e.g. blacklist filtering, WAF, etc.
A further explanation is available in our blog post article.
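As a rough illustration of the idea (not necessarily XIP's exact transform set), the same IPv4 address can be rewritten in several equivalent notations that naive filters often miss:

import ipaddress

def alternate_forms(ip):
    # equivalent notations for the same IPv4 address
    n = int(ipaddress.IPv4Address(ip))
    octets = ip.split(".")
    return [
        ip,                                        # dotted decimal: 127.0.0.1
        str(n),                                    # integer: 2130706433
        hex(n),                                    # hex: 0x7f000001
        ".".join("0%o" % int(o) for o in octets),  # dotted octal: 0177.00.00.01
        ".".join(hex(int(o)) for o in octets),     # dotted hex: 0x7f.0x0.0x0.0x1
    ]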


Usage
python3 xip.py --help

Docker alternative

Official image
You can pull the official XIP image from the Docker Hub registry using the following command:
docker pull immunit/XIP

Build
To build the container, just use this command:
docker build -t xip .
Docker will download the Alpine image and then execute the installation steps.
Be patient; the process can take quite a while the first time.

Run
Once the build process is over, get and enjoy your new tool.
docker run --rm -it xip --help

Logging
The output generated is stored in the /tmp/ folder. When using Docker, run your container with the following option:
-v YOUR_PATH_FOLDER:/tmp/


Stenographer - A Packet Capture Solution Which Aims To Quickly Spool All Packets To Disk, Then Provide Simple, Fast Access To Subsets Of Those Packets


Stenographer is a full-packet-capture utility for buffering packets to disk for intrusion detection and incident response purposes. It provides a high-performance implementation of NIC-to-disk packet writing, handles deleting those files as disk fills up, and provides methods for reading back specific sets of packets quickly and easily.

It is designed to:
  • Write packets to disk, very quickly (~10Gbps on multi-core, multi-disk machines)
  • Store as much history as it can (managing disk usage, storing longer durations when traffic slows, then deleting the oldest packets when it hits disk limits)
  • Read a very small percentage (<1%) of packets from disk based on analyst needs
It is NOT designed for:
  • Complex packet processing (TCP stream reassembly, etc)
  • It’s fast because it doesn’t do this.  Even with the very minimal, single-pass processing of packets we do, processing ~1Gbps for indexing alone can take >75% of a single core.
  • Processing the data by reading it back from disk also doesn’t work:  see next bullet point.
  • Reading back large amounts of packets (> 1% of packets written)
  • The key concept here is that disk reads compete with disk writes… you can write at 90% of disk speed, but that only gives you 10% of your disk’s time for reading.  Also, we’re writing highly sequential data, which disks are very good at doing quickly, and generally reading back sparse data with lots of seeks, which disks do slowly.
For further reading, check out DESIGN.md for a discussion of stenographer's design, or read INSTALL.md for how to install stenographer on a machine.

Querying

Query Language
A user requests packets from stenographer by specifying them with a very simple query language. This language is a simple subset of BPF, and includes the primitives:
host 8.8.8.8                     # Single IP address (hostnames not allowed)
net 1.0.0.0/8                    # Network with CIDR
net 1.0.0.0 mask 255.255.255.0   # Network with mask
port 80                          # Port number (UDP or TCP)
ip proto 6                       # IP protocol number 6
icmp                             # equivalent to 'ip proto 1'
tcp                              # equivalent to 'ip proto 6'
udp                              # equivalent to 'ip proto 17'

# Stenographer-specific time additions:
before 2012-11-03T11:05:00Z      # Packets before a specific time (UTC)
after 2012-11-03T11:05:00-07:00  # Packets after a specific time (with TZ)
before 45m ago                   # Packets before a relative time
after 3h ago                     # Packets after a relative time
NOTE: Relative times must be measured in integer values of hours or minutes as demonstrated above.
Primitives can be combined with and/&& and with or/||, which have equal precedence and evaluate left-to-right. Parens can also be used to group.
(udp and port 514) or (tcp and port 8080)

Stenoread CLI
The stenoread command-line script automates pulling packets from Stenographer and presenting them in a usable format to analysts. It requests raw packets from stenographer, then runs them through tcpdump to provide a more full-featured formatting/filtering experience. The first argument to stenoread is a stenographer query (see 'Query Language' above). All other arguments are passed to tcpdump. For example:
# Request all packets from IP 1.2.3.4 port 6543, then do extra filtering by
# TCP flag, which typical stenographer does not support.
$ stenoread 'host 1.2.3.4 and port 6543' 'tcp[tcpflags] & tcp-push != 0'

# Request packets on port 8765, disabling IP resolution (-n) and showing
# link-level headers (-e) when printing them out.
$ stenoread 'port 8765' -n -e

# Request packets for any IPs in the range 1.1.1.0-1.1.1.255, writing them
# out to a local PCAP file so they can be opened in Wireshark.
$ stenoread 'net 1.1.1.0/24' -w /tmp/output_for_wireshark.pcap

Downloading
To download the source code, install Go locally, then run:
$ go get github.com/google/stenographer
Go will handle downloading and installing all Go libraries that stenographer depends on. To build stenotype, go into the stenotype directory and run make. You may need to install the following Ubuntu packages (or their equivalents on other Linux distros):
  • libaio-dev
  • libleveldb-dev
  • libsnappy-dev
  • g++
  • libcap2-bin
  • libseccomp-dev

Obligatory Fine Print
This is not an official Google product (experimental or otherwise), it is just code that happens to be owned by Google.
This code is not intended (or used) to watch Google's users. Its purpose is to increase security on our networks by augmenting our internal monitoring capabilities.


LOLBAS - Living Off The Land Binaries And Scripts (LOLBins And LOLScripts)


The goal of the LOLBAS project is to document every binary, script, and library that can be used for Living Off The Land techniques.

All the different files can be found behind a fancy frontend here: https://lolbas-project.github.io (thanks @ConsciousHacker for this bit of eyecandy and the team over at https://gtfobins.github.io/). This repo serves as a place where we maintain the YML files that are used by the fancy frontend.

Criteria
A LOLBin/Lib/Script must:
  • Be a Microsoft-signed file, either native to the OS or downloaded from Microsoft.
  • Have extra "unexpected" functionality. It is not interesting to document intended use cases.
    • Exceptions are application whitelisting bypasses
  • Have functionality that would be useful to an APT or red team
Interesting functionality can include:
  • Executing code
    • Arbitrary code execution
    • Pass-through execution of other programs (unsigned) or scripts (via a LOLBin)
  • Compiling code
  • File operations
    • Downloading
    • Upload
    • Copy
  • Persistence
    • Pass-through persistence utilizing existing LOLBin
    • Persistence (e.g. hide data in ADS, execute at logon)
  • UAC bypass
  • Credential theft
  • Dumping process memory
  • Surveillance (e.g. keylogger, network trace)
  • Log evasion/modification
  • DLL side-loading/hijacking without being relocated elsewhere in the filesystem.

The History of the LOLBin
The phrase "Living off the land" was coined by Christopher Campbell (@obscuresec) & Matt Graeber (@mattifestation) at DerbyCon 3.
The term LOLBins came from a Twitter discussion on what to call binaries that can be used by an attacker to perform actions beyond their original purpose. Philip Goh (@MathCasualty) proposed LOLBins. A highly scientific internet poll ensued, and after a general consensus (69%) was reached, the name was made official. Jimmy (@bohops) followed up with LOLScripts. No poll was taken.
Common hashtags for these files are:
  • #LOLBin
  • #LOLBins
  • #LOLScript
  • #LOLScripts
  • #LOLLib
  • #LOLLibs

