Channel: KitPloit - PenTest Tools!

BADministration - Tool Which Interfaces with Management or Administration Applications from an Offensive Standpoint


BADministration is a tool which interfaces with management or administration applications from an offensive standpoint. It aims to give offsec personnel the ability to identify and leverage these non-technical vulnerabilities. As always: use for good, promote security, and fight application propagation.
Sorry for using Python 2.7; I found that a lot of the vendor APIs would only run on 2.7, and I'm not experienced enough to mix and match Python versions.

Application Propagation
In my opinion, we often do a fantastic job of network segmentation, and we're starting to catch on with domain segmentation; however, one area where I often see us fall down is application segmentation. Application segmentation is similar to network segmentation in that we're trying to reduce the exposure of a critical zone to a less trusted zone in case the latter is exploited. Administration applications often have privileged access to all of their clients, so if an attacker lands on an administration application, there is a good chance all of its clients can be exploited as well. Application segmentation tries to ensure that server-to-client relationships don't cross any trust boundaries. For example, if your admin network is trust level 100 and it's administered by your NMS server, your NMS server should also be considered trust level 100.

Installation
The project is a collection of Python scripts, exes, and who knows what; for the central Python module, installation is simple:
pip install -r requirements.txt

Current Modules

Solarwinds Orion
  • solarwinds-enum - Module used to enumerate clients of Orion
  • solarwinds-listalerts - Lists Orion alerts and draws attention to malicious BADministration alerts
  • solarwinds-alertremove - Removes the malicious alert
  • solarwinds-syscmd - Executes a system command on the Orion server via malicious alert
  • Standalone x64 .NET 4.5 BADministration_SWDump.exe - Scrapes memory for WMI credentials used by Orion.
    • Can consume large amounts of memory; use at your own risk
    • Compile me as x64
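
Conceptually, a memory-credential scraper like SWDump sweeps a process-memory dump for credential-shaped byte patterns. The sketch below is a generic Python illustration only; the `username=`/`password=` markers are purely hypothetical and are not what the real tool matches against.

```python
import re

# Hypothetical marker pattern: printable runs following "username=" or
# "password=" in a raw memory dump. The real tool targets Orion's
# in-memory WMI credential structures, not this toy pattern.
CRED_PATTERN = re.compile(rb"(?:username|password)=([\x20-\x7e]{1,64})", re.I)

def scan_dump(blob: bytes):
    """Return all credential-like strings found in a raw dump."""
    return [m.group(1).decode() for m in CRED_PATTERN.finditer(blob)]

sample = b"\x00garbage\x00username=ORION\\svc_wmi\x00more\x00password=Hunter2!\x00"
print(scan_dump(sample))
```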


WAES - Auto Enums Websites And Dumps Files As Result



When doing HTB or other CTFs, enumeration against HTTP(S) targets can become trivial: it gets tiresome to run the same scripts and tests (nmap, nikto, dirb, and so on) on every box. A one-click scan of the target with automatic reports solves the issue, and a script lets the enumeration process be optimized while saving the hacker time. This is what CPH:SEC WAES, or Web Auto Enum & Scanner, was created for. WAES runs four steps of scanning against the target (see more below) to optimize the time spent scanning. While multi-core or multi-threaded scanning could be implemented, it would almost surely get boxes to hang and so is undesirable.
  • From the current version forward, WAES includes an install script (see below) as the project moves from alpha to beta.
  • WAES could have been developed in Python, but good bash projects are needed to learn bash.
  • WAES is currently made for CTF boxes but is moving towards online use (see the To Do section)

To install:
1. $> git clone https://github.com/Shiva108/WAES.git
2. $> cd WAES
3. $> sudo ./install.sh
Make sure directories are set correctly in supergobuster.sh. This should be automatic on Kali & Parrot Linux.
  • Standard directories for lists : SecLists/Discovery/Web-Content & SecLists/Discovery/Web-Content/CMS
  • Kali / Parrot directory list : /usr/share/wordlists/dirbuster/

To run WAES
Web Auto Enum & Scanner - Auto enums website(s) and dumps files as result.
##############################################################################
    Web Auto Enum & Scanner

Auto enums website(s) and dumps files as result
##############################################################################
Usage: waes.sh -u {IP} | waes.sh -h
   -h shows this help
   -u IP to test eg. 10.10.10.123
   -p port number (default=80)

Example: ./waes.sh -u 10.10.10.130 -p 8080

Enumeration Process / Method
WAES runs ..
Step 0 - Passive scan - (disabled in the current version)
  • whatweb - aggressive mode
  • OSIRA (same author) - looks for subdomains
Step 1 - Fast scan
  • wafw00f - firewall detection
  • nmap with http-enum
Step 2 - Scan - in-depth
  • nmap - with NSE scripts: http-date,http-title,http-server-header,http-headers,http-enum,http-devframework,http-dombased-xss,http-stored-xss,http-xssed,http-cookie-flags,http-errors,http-grep,http-traceroute
  • nmap with vulscan (CVSS 5.0+)
  • nikto - with evasion A and all CGI dirs
  • uniscan - all tests except stress test (qweds)
Step 3 - Fuzzing
  • super gobuster
    • gobuster with multiple lists
    • dirb with multiple lists
  • xss scan (to come)
.. against the target while dumping result files in the report/ folder.
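
As a sketch, the "super gobuster" step amounts to running both brute-forcers once per wordlist. A minimal Python illustration; the wordlist paths are assumptions based on the directories listed above, not WAES's exact configuration:

```python
import shlex

# Assumed wordlist locations (see the Kali/Parrot and SecLists paths above).
WORDLISTS = [
    "/usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt",
    "SecLists/Discovery/Web-Content/common.txt",
]

def build_commands(url):
    """Return one gobuster and one dirb invocation per wordlist."""
    cmds = []
    for wl in WORDLISTS:
        cmds.append(f"gobuster dir -u {url} -w {shlex.quote(wl)}")
        cmds.append(f"dirb {url} {shlex.quote(wl)}")
    return cmds

# Print the commands that would be executed sequentially (not in parallel,
# for the hang-avoidance reason given above).
for cmd in build_commands("http://10.10.10.130:8080"):
    print(cmd)
```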

To Do
  • Implement domain as input
  • Add XSS scan
  • Add SSL/TLS scanning
  • Add domain scans
  • Add golismero
  • Add dirble
  • Add progressbar
  • Add CMS detection
  • Add CMS specific scans


Osmedeus v1.5 - Fully Automated Offensive Security Framework For Reconnaissance And Vulnerability Scanning


Osmedeus allows you to automatically run a collection of awesome tools for reconnaissance and vulnerability scanning against the target.

Installation
git clone https://github.com/j3ssie/Osmedeus
cd Osmedeus
./install.sh
This installer only targets Kali Linux; check the Wiki page for other installation options.

How to use
If you have no idea what you are doing, just type the command below, or check out the Advanced Usage
./osmedeus.py -t example.com

Using Docker
Check out docker-osmedeus by mabnavarrete for Docker installation, and this wiki for more details.

Features
  • Subdomain Scan.
  • Subdomain TakeOver Scan.
  • Screenshot the target.
  • Basic recon like Whois, Dig info.
  • Web Technology detection.
  • IP Discovery.
  • CORS Scan.
  • SSL Scan.
  • Wayback Machine Discovery.
  • URL Discovery.
  • Headers Scan.
  • Port Scan.
  • Vulnerable Scan.
  • Separate workspaces to store all scan output and detailed logging.
  • REST API.
  • React Web UI.
  • Support Continuous Scan.
  • Slack notifications.
  • Easily view reports from the command line.
Check this Wiki page for more details about each module.


Demo








Example Commands
# normal routine
./osmedeus.py -t example.com

# normal routine but slow speed on subdomain module
./osmedeus.py -t example.com --slow 'subdomain'

# direct mode examples
./osmedeus.py -m portscan -i "1.2.3.4/24"

./osmedeus.py -m portscan -I list_of_targets.txt -t result_folder

./osmedeus.py -m "portscan,vulnscan" -i "1.2.3.4/24" -t result_folder

./osmedeus.py -m "assets" -i "example.com"
./osmedeus.py -m "assets,dirb" -i "example.com"

# report mode

./osmedeus.py -t example.com --report list
./osmedeus.py -t example.com --report sum
./osmedeus.py -t example.com -m subdomain --report short
./osmedeus.py -t example.com -m "subdomain, portscan" --report full


More options
Basic Usage
===========
python3 osmedeus.py -t <your_target>
python3 osmedeus.py -T <list_of_targets>
python3 osmedeus.py -m <module> [-i <input>|-I <input_file>] [-t workspace_name]
python3 osmedeus.py --report <mode> -t <workspace> [-m <module>]

Advanced Usage
==============
[*] List all modules
python3 osmedeus.py -M

[*] List all report modes
python3 osmedeus.py --report help

[*] Run a specific module
python3 osmedeus.py -t <result_folder> -m <module_name> -i <your_target>

[*] Example commands
python3 osmedeus.py -m subdomain -t example.com
python3 osmedeus.py -t example.com --slow "subdomain"
python3 osmedeus.py -t sample2 -m vuln -i hosts.txt
python3 osmedeus.py -t sample2 -m dirb -i /tmp/list_of_hosts.txt

Remote Options
==============
--remote REMOTE Remote address for API (default: https://127.0.0.1:5000)
--auth AUTH Specify authentication e.g: --auth="username:password"
See your config file for more detail (default: core/config.conf)

--client Just run the client side, in case you already started the Flask server

More options
==============
--update Update to the latest version from git

-c CONFIG, --config CONFIG
Specify config file (default: core/config.conf)

-w WORKSPACE, --workspace WORKSPACE
Custom workspace folder

-f, --force force the module to run again if output exists
-s, --slow "all"
All modules run in slow mode
-s, --slow "subdomain"
Run slow mode only in the subdomain module

--debug Just for debugging purposes


Disclaimer
Most of this tool's work is done by the authors of the tools listed in CREDITS.md. I just put all the pieces together, plus some extra magic.
This tool is for educational purposes only. You are responsible for your own actions. If you mess something up or break any laws while using this software, it's your fault, and your fault only.

Contribute
Please take a look at CONTRIBUTING.md

Changelog
Please take a look at CHANGELOG.md

CREDITS
Please take a look at CREDITS.md

Contact
@j3ssiejjj


AbsoluteZero - Python APT Backdoor


This project is a Python APT backdoor, designed as a Red Team post-exploitation tool. It can generate a binary payload or pure Python source. The final stub uses polymorphic encryption to give itself a first obfuscation layer.
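
To illustrate the polymorphic idea with a generic XOR sketch (not AbsoluteZero's actual scheme): each build encrypts the identical payload under a fresh random key, so the stored stub bytes differ every time while the decrypted result stays the same.

```python
import os

def encrypt(payload: bytes, key: bytes) -> bytes:
    """Repeating-key XOR; applying it twice with the same key decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

payload = b"print('hello from the stub')"
key = os.urandom(16)               # fresh key -> new ciphertext every build
stub_blob = encrypt(payload, key)  # what would be embedded in the stub
assert encrypt(stub_blob, key) == payload  # XOR is its own inverse
```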

Deployment
AbsoluteZero is written entirely in Python 2.7 and works on both Windows and Linux. To get it working, you need Python 2.7 installed, then use 'pip' to install the requirements.txt file. Remember that to compile binaries for Windows you have to run the entire software on a Microsoft platform, since PyInstaller doesn't allow cross-platform compiling without using Wine.
Make sure the Python installation folder is set to 'C:/Python27' to avoid binary-compiling troubles.


Seccomp Tools - Provide Powerful Tools For Seccomp Analysis


Provide powerful tools for seccomp analysis.
This project is targeted at (but not limited to) analyzing seccomp sandboxes in CTF pwn challenges. Some features might be CTF-specific, but they are still useful for analyzing seccomp in real-world cases.

Features
  • Dump - Automatically dumps seccomp-bpf from execution file(s).
  • Disasm - Converts bpf to human readable format.
    • Simple decompile.
    • Display syscall names and arguments when possible.
    • Colorful!
  • Asm - Makes writing seccomp rules easy!
  • Emu - Emulates seccomp rules.
  • Supports multi-architectures.

Installation
Available on RubyGems.org!
$ gem install seccomp-tools
If compilation fails, try:
sudo apt install gcc ruby-dev
and install seccomp-tools again.

Command Line Interface

seccomp-tools
$ seccomp-tools --help
# Usage: seccomp-tools [--version] [--help] <command> [<options>]
#
# List of commands:
#
# asm Seccomp bpf assembler.
# disasm Disassemble seccomp bpf.
# dump Automatically dump seccomp bpf from execution file(s).
# emu Emulate seccomp rules.
#
# See 'seccomp-tools <command> --help' to read about a specific subcommand.

$ seccomp-tools dump --help
# dump - Automatically dump seccomp bpf from execution file(s).
#
# Usage: seccomp-tools dump [exec] [options]
# -c, --sh-exec <command> Executes the given command (via sh).
# Use this option if you want to pass arguments or pipe things to the execution file.
# e.g. use `-c "./bin > /dev/null"` to dump seccomp without being mixed with stdout.
# -f, --format FORMAT Output format. FORMAT can only be one of <disasm|raw|inspect>.
# Default: disasm
# -l, --limit LIMIT Limit the number of calling "prctl(PR_SET_SECCOMP)".
# The target process will be killed whenever its calling times reaches LIMIT.
# Default: 1
# -o, --output FILE Output result into FILE instead of stdout.
# If multiple seccomp syscalls have been invoked (see --limit),
# results will be written to FILE, FILE_1, FILE_2.. etc.
# For example, "--output out.bpf" and the output files are out.bpf, out_1.bpf, ...

dump
Dumps the seccomp bpf from an execution file. This work is done via the ptrace syscall.
NOTICE: beware that the execution file will actually be executed.
$ file spec/binary/twctf-2016-diary
# spec/binary/twctf-2016-diary: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/l, for GNU/Linux 2.6.24, BuildID[sha1]=3648e29153ac0259a0b7c3e25537a5334f50107f, not stripped

$ seccomp-tools dump spec/binary/twctf-2016-diary
# line CODE JT JF K
# =================================
# 0000: 0x20 0x00 0x00 0x00000000 A = sys_number
# 0001: 0x15 0x00 0x01 0x00000002 if (A != open) goto 0003
# 0002: 0x06 0x00 0x00 0x00000000 return KILL
# 0003: 0x15 0x00 0x01 0x00000101 if (A != openat) goto 0005
# 0004: 0x06 0x00 0x00 0x00000000 return KILL
# 0005: 0x15 0x00 0x01 0x0000003b if (A != execve) goto 0007
# 0006: 0x06 0x00 0x00 0x00000000 return KILL
# 0007: 0x15 0x00 0x01 0x00000038 if (A != clone) goto 0009
# 0008: 0x06 0x00 0x00 0x00000000 return KILL
# 0009: 0x15 0x00 0x01 0x00000039 if (A != fork) goto 0011
# 0010: 0x06 0x00 0x00 0x00000000 return KILL
# 0011: 0x15 0x00 0x01 0x0000003a if (A != vfork) goto 0013
# 0012: 0x06 0x00 0x00 0x00000000 return KILL
# 0013: 0x15 0x00 0x01 0x00000055 if (A != creat) goto 0015
# 0014: 0x06 0x00 0x00 0x00000000 return KILL
# 0015: 0x15 0x00 0x01 0x00000142 if (A != execveat) goto 0017
# 0016: 0x06 0x00 0x00 0x00000000 return KILL
# 0017: 0x06 0x00 0x00 0x7fff0000 return ALLOW

$ seccomp-tools dump spec/binary/twctf-2016-diary -f inspect
# "\x20\x00\x00\x00\x00\x00\x00\x00\x15\x00\x00\x01\x02\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x15\x00\x00\x01\x01\x01\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x15\x00\x00\x01\x3B\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x15\x00\x00\x01\x38\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x15\x00\x00\x01\x39\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x15\x00\x00\x01\x3A\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x15\x00\x00\x01\x55\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x15\x00\x00\x01\x42\x01\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\xFF\x7F"

$ seccomp-tools dump spec/binary/twctf-2016-diary -f raw | xxd
# 00000000: 2000 0000 0000 0000 1500 0001 0200 0000 ...............
# 00000010: 0600 0000 0000 0000 1500 0001 0101 0000 ................
# 00000020: 0600 0000 0000 0000 1500 0001 3b00 0000 ............;...
# 00000030: 0600 0000 0000 0000 1500 0001 3800 0000 ............8...
# 00000040: 0600 0000 0000 0000 1500 0001 3900 0000 ............9...
# 00000050: 0600 0000 0000 0000 1500 0001 3a00 0000 ............:...
# 00000060: 0600 0000 0000 0000 1500 0001 5500 0000 ............U...
# 00000070: 0600 0000 0000 0000 1500 0001 4201 0000 ............B...
# 00000080: 0600 0000 0000 0000 0600 0000 0000 ff7f ................
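
For reference, the raw format dumped above can be decoded by hand: each seccomp-bpf instruction is an 8-byte little-endian `struct sock_filter` (u16 code, u8 jt, u8 jf, u32 k). A minimal Python sketch:

```python
import struct

def parse_bpf(raw):
    """Split raw cBPF bytes into (code, jt, jf, k) tuples, 8 bytes each."""
    return [struct.unpack_from("<HBBI", raw, off) for off in range(0, len(raw), 8)]

# First two instructions of the twctf-2016-diary filter shown above.
raw = bytes.fromhex("2000000000000000" "1500000102000000")
for code, jt, jf, k in parse_bpf(raw):
    print(f"code=0x{code:02x} jt={jt} jf={jf} k=0x{k:08x}")
# The first instruction is "A = sys_number" (BPF_LD|BPF_W|BPF_ABS, k=0);
# the second is "if (A != open) goto ..." (BPF_JMP|BPF_JEQ|BPF_K, k=2).
```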

disasm
Disassembles the seccomp from raw bpf.
$ xxd spec/data/twctf-2016-diary.bpf | head -n 3
# 00000000: 2000 0000 0000 0000 1500 0001 0200 0000 ...............
# 00000010: 0600 0000 0000 0000 1500 0001 0101 0000 ................
# 00000020: 0600 0000 0000 0000 1500 0001 3b00 0000 ............;...

$ seccomp-tools disasm spec/data/twctf-2016-diary.bpf
# line CODE JT JF K
# =================================
# 0000: 0x20 0x00 0x00 0x00000000 A = sys_number
# 0001: 0x15 0x00 0x01 0x00000002 if (A != open) goto 0003
# 0002: 0x06 0x00 0x00 0x00000000 return KILL
# 0003: 0x15 0x00 0x01 0x00000101 if (A != openat) goto 0005
# 0004: 0x06 0x00 0x00 0x00000000 return KILL
# 0005: 0x15 0x00 0x01 0x0000003b if (A != execve) goto 0007
# 0006: 0x06 0x00 0x00 0x00000000 return KILL
# 0007: 0x15 0x00 0x01 0x00000038 if (A != clone) goto 0009
# 0008: 0x06 0x00 0x00 0x00000000 return KILL
# 0009: 0x15 0x00 0x01 0x00000039 if (A != fork) goto 0011
# 0010: 0x06 0x00 0x00 0x00000000 return KILL
# 0011: 0x15 0x00 0x01 0x0000003a if (A != vfork) goto 0013
# 0012: 0x06 0x00 0x00 0x00000000 return KILL
# 0013: 0x15 0x00 0x01 0x00000055 if (A != creat) goto 0015
# 0014: 0x06 0x00 0x00 0x00000000 return KILL
# 0015: 0x15 0x00 0x01 0x00000142 if (A != execveat) goto 0017
# 0016: 0x06 0x00 0x00 0x00000000 return KILL
# 0017: 0x06 0x00 0x00 0x7fff0000 return ALLOW

asm
Assembles seccomp rules into raw bytes. It's very useful when one wants to write custom seccomp rules.
It supports labels for jumping and uses syscall names directly. See the examples below.
$ seccomp-tools asm
# asm - Seccomp bpf assembler.
#
# Usage: seccomp-tools asm IN_FILE [options]
# -o, --output FILE Output result into FILE instead of stdout.
# -f, --format FORMAT Output format. FORMAT can only be one of <inspect|raw|c_array|c_source|assembly>.
# Default: inspect
# -a, --arch ARCH Specify architecture.
# Supported architectures are <amd64|i386>.

# Input file for asm
$ cat spec/data/libseccomp.asm
# # check if arch is X86_64
# A = arch
# A == ARCH_X86_64 ? next : dead
# A = sys_number
# A >= 0x40000000 ? dead : next
# A == write ? ok : next
# A == close ? ok : next
# A == dup ? ok : next
# A == exit ? ok : next
# return ERRNO(5)
# ok:
# return ALLOW
# dead:
# return KILL

$ seccomp-tools asm spec/data/libseccomp.asm
# " \x00\x00\x00\x04\x00\x00\x00\x15\x00\x00\b>\x00\x00\xC0 \x00\x00\x00\x00\x00\x00\x005\x00\x06\x00\x00\x00\x00@\x15\x00\x04\x00\x01\x00\x00\x00\x15\x00\x03\x00\x03\x00\x00\x00\x15\x00\x02\x00 \x00\x00\x00\x15\x00\x01\x00<\x00\x00\x00\x06\x00\x00\x00\x05\x00\x05\x00\x06\x00\x00\x00\x00\x00\xFF\x7F\x06\x00\x00\x00\x00\x00\x00\x00"

$ seccomp-tools asm spec/data/libseccomp.asm -f c_source
# #include <linux/seccomp.h>
# #include <stdio.h>
# #include <stdlib.h>
# #include <sys/prctl.h>
#
# static void install_seccomp() {
# static unsigned char filter[] = {32,0,0,0,4,0,0,0,21,0,0,8,62,0,0,192,32,0,0,0,0,0,0,0,53,0,6,0,0,0,0,64,21,0,4,0,1,0,0,0,21,0,3,0,3,0,0,0,21,0,2,0,32,0,0,0,21,0,1,0,60,0,0,0,6,0,0,0,5,0,5,0,6,0,0,0,0,0,255,127,6,0,0,0,0,0,0,0};
# struct prog {
# unsigned short len;
# unsigned char *filter;
# } rule = {
# .len = sizeof(filter) >> 3,
# .filter = filter
# };
# if(prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) < 0) { perror("prctl(PR_SET_NO_NEW_PRIVS)"); exit(2); }
# if(prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &rule) < 0) { perror("prctl(PR_SET_SECCOMP)"); exit(2); }
# }

$ seccomp-tools asm spec/data/libseccomp.asm -f assembly
# install_seccomp:
# push rbp
# mov rbp, rsp
# push 38
# pop rdi
# push 0x1
# pop rsi
# xor eax, eax
# mov al, 0x9d
# syscall
# push 22
# pop rdi
# lea rdx, [rip + _filter]
# push rdx /* .filter */
# push _filter_end - _filter >> 3 /* .len */
# mov rdx, rsp
# push 0x2
# pop rsi
# xor eax, eax
# mov al, 0x9d
# syscall
# leave
# ret
# _filter:
# .ascii "\040\000\000\000\004\000\000\000\025\000\000\010\076\000\000\300\040\000\000\000\000\000\000\000\065\000\006\000\000\000\000\100\025\000\004\000\001\000\000\000\025\000\003\000\003\000\000\000\025\000\002\000\040\000\000\000\025\000\001\000\074\000\000\000\006\000\000\000\005\000\005\000\006\000\000\000\000\000\377\177\006\000\000\000\000\000\000\000"
# _filter_end:


# let's asm then disasm!
$ seccomp-tools asm spec/data/libseccomp.asm -f raw | seccomp-tools disasm -
# line CODE JT JF K
# =================================
# 0000: 0x20 0x00 0x00 0x00000004 A = arch
# 0001: 0x15 0x00 0x08 0xc000003e if (A != ARCH_X86_64) goto 0010
# 0002: 0x20 0x00 0x00 0x00000000 A = sys_number
# 0003: 0x35 0x06 0x00 0x40000000 if (A >= 0x40000000) goto 0010
# 0004: 0x15 0x04 0x00 0x00000001 if (A == write) goto 0009
# 0005: 0x15 0x03 0x00 0x00000003 if (A == close) goto 0009
# 0006: 0x15 0x02 0x00 0x00000020 if (A == dup) goto 0009
# 0007: 0x15 0x01 0x00 0x0000003c if (A == exit) goto 0009
# 0008: 0x06 0x00 0x00 0x00050005 return ERRNO(5)
# 0009: 0x06 0x00 0x00 0x7fff0000 return ALLOW
# 0010: 0x06 0x00 0x00 0x00000000 return KILL

Emu
Emulates seccomp given sys_nr, arg0, arg1, etc.
$ seccomp-tools emu --help
# emu - Emulate seccomp rules.
#
# Usage: seccomp-tools emu [options] BPF_FILE [sys_nr [arg0 [arg1 ... arg5]]]
# -a, --arch ARCH Specify architecture.
# Supported architectures are <amd64|i386>.
# -q, --[no-]quiet Run quietly, only show emulation result.

$ seccomp-tools emu spec/data/libseccomp.bpf write 0x3
# line CODE JT JF K
# =================================
# 0000: 0x20 0x00 0x00 0x00000004 A = arch
# 0001: 0x15 0x00 0x08 0xc000003e if (A != ARCH_X86_64) goto 0010
# 0002: 0x20 0x00 0x00 0x00000000 A = sys_number
# 0003: 0x35 0x06 0x00 0x40000000 if (A >= 0x40000000) goto 0010
# 0004: 0x15 0x04 0x00 0x00000001 if (A == write) goto 0009
# 0005: 0x15 0x03 0x00 0x00000003 if (A == close) goto 0009
# 0006: 0x15 0x02 0x00 0x00000020 if (A == dup) goto 0009
# 0007: 0x15 0x01 0x00 0x0000003c if (A == exit) goto 0009
# 0008: 0x06 0x00 0x00 0x00050005 return ERRNO(5)
# 0009: 0x06 0x00 0x00 0x7fff0000 return ALLOW
# 0010: 0x06 0x00 0x00 0x00000000 return KILL
#
# return ALLOW at line 0009
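
To make the emulation semantics concrete, here is a toy Python re-implementation of just the three opcodes the diary filter uses (load-absolute, jump-if-equal, return); the real `emu` command handles the full cBPF instruction set.

```python
# Opcode constants for the subset we emulate.
BPF_LD_W_ABS, BPF_JEQ_K, BPF_RET_K = 0x20, 0x15, 0x06

def emulate(prog, sys_number):
    """Run a list of (code, jt, jf, k) instructions for a given syscall nr."""
    A, pc = 0, 0
    while True:
        code, jt, jf, k = prog[pc]
        pc += 1
        if code == BPF_LD_W_ABS:    # A = seccomp_data[k]; offset 0 = sys_number
            A = sys_number
        elif code == BPF_JEQ_K:     # skip jt insns if A == k, else jf insns
            pc += jt if A == k else jf
        elif code == BPF_RET_K:
            return k

# A shortened version of the twctf-2016-diary filter from the dump section.
prog = [
    (0x20, 0, 0, 0x00),                    # A = sys_number
    (0x15, 0, 1, 0x02), (0x06, 0, 0, 0),   # open   -> KILL
    (0x15, 0, 1, 0x101), (0x06, 0, 0, 0),  # openat -> KILL
    (0x15, 0, 1, 0x3b), (0x06, 0, 0, 0),   # execve -> KILL
    (0x06, 0, 0, 0x7fff0000),              # return ALLOW
]
print(hex(emulate(prog, 1)))   # write(1) -> 0x7fff0000 (ALLOW)
print(hex(emulate(prog, 2)))   # open(2)  -> 0x0 (KILL)
```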

Screenshots

Dump


Emu



Development
I recommend using rbenv for your Ruby environment.

Setup
  • Install bundler
    • $ gem install bundler
  • Clone the source
    • $ git clone https://github.com/david942j/seccomp-tools && cd seccomp-tools
  • Install dependencies
    • $ bundle install

Run tests
$ bundle exec rake

I Need You
Any suggestion or feature request is welcome! Feel free to file an issue or send a pull request. And if you like this work, I'll be happy to be starred.

HackerTarget ToolKit v2.0 - Tools And Network Intelligence To Help Organizations With Attack Surface Discovery




Use open source tools and network intelligence to help organizations with attack surface discovery and identification of security vulnerabilities. Identifying an organization's vulnerabilities is an impossible task without tactical intelligence on its network footprint. By combining open source intelligence with the world's best open source security scanning tools, we enable your attack surface discovery. With Internet assets deployable in seconds, the attack surface is dynamic and ever-growing, which makes mapping your external network footprint a hard problem. We aim to provide solutions to this problem: start with our tools for domain and IP address data, then pivot to mapping the exposure with hosted open source scanners. We have developed a Linux terminal tool in Python around an API provided to us.

How do you run it?

Clone with HTTPS
git clone https://github.com/ismailtasdelen/hackertarget.git
cd hackertarget/

Run pip3 install to set up this script
pip3 install .

Run the hackertarget CLI script with the following command
python hackertarget.py

View :
root@ismailtasdelen:~# python hackertarget.py

_ _ _ _
| |_ __ _ __ | |__ ___ _ _ | |_ __ _ _ _ __ _ ___ | |_
| ' \ / _` |/ _|| / // -_)| '_|| _|/ _` || '_|/ _` |/ -_)| _|
|_||_|\__,_|\__||_\_\___||_| \__|\__,_||_| \__, |\___| \__|
|___/
Ismail Tasdelen
| github.com/ismailtasdelen | linkedin.com/in/ismailtasdelen |


[1] Traceroute
[2] Ping Test
[3] DNS Lookup
[4] Reverse DNS
[5] Find DNS Host
[6] Find Shared DNS
[7] Zone Transfer
[8] Whois Lookup
[9] IP Location Lookup
[10] Reverse IP Lookup
[11] TCP Port Scan
[12] Subnet Lookup
[13] HTTP Header Check
[14] Extract Page Links
[15] Version
[16] Exit

Which option number :

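
Each menu option maps to a hackertarget.com web API endpoint. The endpoint names below are assumptions based on the public API at api.hackertarget.com, not taken from this script's source:

```python
from urllib.parse import urlencode

API = "https://api.hackertarget.com"
# Assumed endpoint names for a few of the menu options above.
ENDPOINTS = {"Traceroute": "mtr", "DNS Lookup": "dnslookup", "Whois Lookup": "whois"}

def api_url(option, target):
    """Build the GET URL the CLI would fetch for a given menu option."""
    return f"{API}/{ENDPOINTS[option]}/?" + urlencode({"q": target})

print(api_url("DNS Lookup", "example.com"))
# -> https://api.hackertarget.com/dnslookup/?q=example.com
```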

Cloning an Existing Repository ( Clone with HTTPS )
root@ismailtasdelen:~# git clone https://github.com/ismailtasdelen/hackertarget.git

Cloning an Existing Repository ( Clone with SSH )
root@ismailtasdelen:~# git clone git@github.com:ismailtasdelen/hackertarget.git

Changelog v2.0:

  • Support at least Python 3 by using the print(...) function.
  • Split hackertarget_api out of the hackertarget.py file.
  • Added hackertarget_api tests.
  • Added a setup file.
  • Added a .travis.yml file and achieved CI integration.
  • The raw_input function is undefined in Python 3.x; using the input function instead.
  • Using mock tests, because we don't need to test the external API service; we can assume the external API calls succeed.
  • Added a tool-version information module.




ThreatHunting - A Splunk App Mapped To MITRE ATT&CK To Guide Your Threat Hunts


This is a Splunk application containing several dashboards and over 120 reports that will facilitate hunting by surfacing initial indicators to investigate.
You obviously need to be ingesting Sysmon data into Splunk; a good configuration can be found here
Note: This application is not a magic bullet, it will require tuning and real investigative work to be truly effective in your environment. Try to become best friends with your system administrators. They will be able to explain a lot of the initially discovered indicators.
Big credit goes out to MITRE for creating the ATT&CK framework!
Pull requests / issue tickets and new additions will be greatly appreciated!

Mitre ATT&CK
I strive to map all searches to the ATT&CK framework. A current ATT&CK navigator export of all linked configurations is found here and can be viewed here


App Prerequisites
Install the following apps to your SearchHead:

Required actions after deployment
  • Make sure the threathunting index is present on your indexers
  • Edit the macros to suit your environment > https://YOURSPLUNK/en-US/manager/ThreatHunting/admin/macros (make sure the sourcetype is correct)
  • The app ships without whitelist lookup files; you'll need to create them yourself so you won't accidentally overwrite them when upgrading the app.
  • Install the lookup CSVs or create them yourself; empty CSVs are here
A step by step guide kindly written by Kirtar Oza can be found here

Usage
A more detailed explanation of all functions can be found here or in this blog post


Goop - Google Search Scraper (Bypass CAPTCHA)


goop can perform google searches without being blocked by the CAPTCHA or hitting any rate limits.

How it works?
Facebook provides a debugger tool for its scraper. Interestingly, Google doesn't limit the requests made by this debugger (whitelisted?), so it can be used to scrape Google search results without being blocked by the CAPTCHA.
Since Facebook is involved, a Facebook session cookie must be supplied to the library with each request.

Usage

Installation
pip install goop

Example
from goop import goop

page_1 = goop.search('red shoes', '<your facebook cookie>')
page_2 = goop.search('red_shoes', '<your facebook cookie>', page='1')
include_omitted_results = goop.search('red_shoes', '<your facebook cookie>', page='8', full=True)
The return value is a dict with the following structure:
{
    "0": {
        "url": "https://example.com",
        "text": "Example webpage",
        "summary": "This is an example webpage whose aim is to demonstrate the usage of ..."
    },
    "1": {
        ...
cli.py demonstrates the usage by performing Google searches from the terminal with the following command:
python cli.py <query> <number_of_pages>
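
For illustration, the returned dict can be walked like this; the sample entries below are made up, mirroring the structure shown above:

```python
# Keys are stringified result ranks ("0", "1", ...), values hold
# url/text/summary for each hit, as in the structure shown above.
results = {
    "0": {"url": "https://example.com", "text": "Example webpage",
          "summary": "This is an example webpage whose aim is to demonstrate ..."},
    "1": {"url": "https://example.org", "text": "Another page",
          "summary": "..."},
}

for index in sorted(results, key=int):  # sort numerically, not lexically
    hit = results[index]
    print(f"{index}: {hit['text']} -> {hit['url']}")
```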


Legal & Disclaimer
Scraping Google search results likely violates Google's Terms of Service. This library is merely a proof of concept of the bypass. The author isn't responsible for the actions of end users.



Findomain v0.2.1 - The Fastest And Cross-Platform Subdomain Enumerator


The fastest and cross-platform subdomain enumerator.

Comparison
This comparison gives you an idea of why you should use findomain instead of other tools. The domain used for the test was microsoft.com, in the following BlackArch virtual machine:
Host: KVM/QEMU (Standard PC (i440FX + PIIX, 1996) pc-i440fx-3.1)
Kernel: 5.2.6-arch1-1-ARCH
CPU: Intel (Skylake, IBRS) (4) @ 2.904GHz
Memory: 139MiB / 3943MiB
The tool used to measure the time is the Linux time command. You can see all the details of the tests at this link.
Enumeration Tool   Search Time       Total Subdomains Found   CPU Usage   RAM Usage
Findomain          real 0m38.701s    5622                     Very Low    Very Low
assetfinder        real 6m1.117s     4630                     Very Low    Very Low
Subl1st3r          real 7m14.996s    996                      Low         Low
Amass*             real 29m20.301s   332                      Very High   Very High
  • I couldn't wait for the Amass test to finish; it looked like it would never end, and additionally its resource usage was very high.
Note: the benchmark was made on 10/08/2019; since that point, other tools may have improved and you may get different results.

Features
  • Discover subdomains without brute force; this tool uses Certificate Transparency logs.
  • Discover subdomains with or without IP address according to user arguments.
  • Read target from user argument (-t).
  • Read a list of targets from file and discover their subdomains with or without IP and also write to output files per-domain if specified by the user, recursively.
  • Write output to TXT file.
  • Write output to CSV file.
  • Write output to JSON file.
  • Cross platform support: Any platform.
  • Optional multiple API support.
  • Proxy support.
Note: the proxy support only proxifies API requests; the actual IP-address discovery for subdomains doesn't support proxying and uses the host network even if you use the -p option.

How it works?
This tool doesn't use the common methods for (sub)domain discovery; it uses Certificate Transparency logs to find subdomains, which makes it very fast and reliable. The tool makes use of multiple publicly available APIs to perform the search. If you want to know more about Certificate Transparency logs, read https://www.certificate-transparency.org/
APIs that we are using at the moment:
If you know of others that should be added, open an issue.
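
To make the Certificate Transparency approach concrete, here is a sketch of a CT lookup. crt.sh is one public CT log search service; whether findomain queries crt.sh specifically is an assumption here, but the idea is the same: ask the logs for every certificate issued under a domain and read the names out of the results.

```python
from urllib.parse import urlencode

def ct_query_url(domain):
    """Build a crt.sh JSON query URL covering all names under `domain`."""
    return "https://crt.sh/?" + urlencode({"q": f"%.{domain}", "output": "json"})

print(ct_query_url("example.com"))
# Fetching this URL (e.g. with urllib or requests) returns JSON records
# whose "name_value" fields contain the certificate names, i.e. the
# discovered subdomains.
```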

Supported platforms in our binary releases
All binaries that we provide are 64-bit only, and we don't plan to add 32-bit binary releases; if you want 32-bit support, follow the documentation below.

Build for 32 bits or another platform
If you want to build the tool for a 32-bit system or another platform, follow these steps:
Note: You need to have rust, make and perl installed on your system first.
Using the crate:
  1. cargo install findomain
  2. Execute the tool from $HOME/.cargo/bin. See the cargo-install documentation.
Using the Github source code:
  1. Clone the repository or download the release source code.
  2. Extract the release source code (only needed if you downloaded the compressed file).
  3. Go to the folder where the source code is.
  4. Execute cargo build --release
  5. Now your binary is in target/release/findomain and you can use it.

Installation Android (Termux)
Install the Termux package, open it, and run the following commands:
$ pkg install rust make perl
$ cargo install findomain
$ cd $HOME/.cargo/bin
$ ./findomain

Installation in Linux using source code
If you want to install it, you can compile the source manually or use the precompiled binary.
Manually: You need to have Rust installed on your computer first.
$ git clone https://github.com/Edu4rdSHL/findomain.git
$ cd findomain
$ cargo build --release
$ sudo cp target/release/findomain /usr/bin/
$ findomain

Installation in Linux using compiled artifacts
$ wget https://github.com/Edu4rdSHL/findomain/releases/latest/download/findomain-linux
$ chmod +x findomain-linux
$ ./findomain-linux
If you are using the BlackArch Linux distribution, you just need to use:
$ sudo pacman -S findomain

Installation ARM
$ wget https://github.com/Edu4rdSHL/findomain/releases/latest/download/findomain-arm
$ chmod +x findomain-arm
$ ./findomain-arm

Installation Aarch64 (Raspberry Pi)
$ wget https://github.com/Edu4rdSHL/findomain/releases/latest/download/findomain-aarch64
$ chmod +x findomain-aarch64
$ ./findomain-aarch64

Installation Windows
Download the binary from https://github.com/Edu4rdSHL/findomain/releases/latest/download/findomain-windows.exe
Open a CMD shell and go to the dir where findomain-windows.exe was downloaded.
Exec: findomain-windows in the CMD shell.

Installation MacOS
$ wget https://github.com/Edu4rdSHL/findomain/releases/latest/download/findomain-osx
$ chmod +x findomain-osx
$ ./findomain-osx

Usage
You can use the tool in two ways: discovering only the domain names, or discovering the domains along with their IP addresses.
findomain 0.2.0
Eduard Tolosa <tolosaeduard@gmail.com>
A tool that uses Certificate Transparency logs to find subdomains.

USAGE:
findomain [FLAGS] [OPTIONS]

FLAGS:
    -a, --all-apis    Use all the available APIs to perform the search. It takes more time, but you will get many more results.
    -h, --help        Prints help information
    -i, --get-ip      Return the subdomain list with IP addresses if resolved.
    -V, --version     Prints version information

OPTIONS:
    -f, --file <file>        Sets the input file to use.
    -o, --output <output>    Write data to an output file in the specified format. [possible values: txt, csv, json]
    -p, --proxy <proxy>      Use a proxy to make the requests to the APIs.
    -t, --target <target>    Target host

Examples
  1. Make a simple search of subdomains and print the info on the screen:
findomain -t example.com
  2. Make a simple search of subdomains using all the APIs and print the info on the screen:
findomain -t example.com -a
  3. Make a search of subdomains and export the data to a CSV file:
findomain -t example.com -o csv
  4. Make a search of subdomains using all the APIs and export the data to a CSV file:
findomain -t example.com -a -o csv
  5. Make a search of subdomains and resolve the IP addresses of the subdomains (if possible):
findomain -t example.com -i
  6. Make a search of subdomains with all the APIs and resolve the IP addresses of the subdomains (if possible):
findomain -t example.com -i -a
  7. Make a search of subdomains with all the APIs, resolve the IP addresses of the subdomains (if possible), and export the data to a CSV file:
findomain -t example.com -i -a -o csv
  8. Make a search of subdomains using a proxy (http://127.0.0.1:8080 in this case; the rest of the arguments keep working the same way, you just need to add the -p flag to the previous commands):
findomain -t example.com -p http://127.0.0.1:8080

Sampler - A Tool For Shell Commands Execution, Visualization And Alerting (Configured With A Simple YAML File)


Sampler is a tool for shell commands execution, visualization and alerting. Configured with a simple YAML file.

Installation

macOS
brew cask install sampler
or
curl -Lo /usr/local/bin/sampler https://github.com/sqshq/sampler/releases/download/v1.0.1/sampler-1.0.1-darwin-amd64
chmod +x /usr/local/bin/sampler

Linux
wget https://github.com/sqshq/sampler/releases/download/v1.0.1/sampler-1.0.1-linux-amd64 -O /usr/local/bin/sampler
chmod +x /usr/local/bin/sampler
Note: libasound2-dev system library is required to be installed for Sampler to play a trigger sound tone. Usually the library is in place, but if not - you can do it with your favorite package manager, e.g apt install libasound2-dev

Windows (experimental)
Recommended to use with advanced console emulators, e.g. Cmder
Download .exe

Usage
You specify shell commands, and Sampler executes them at the specified rate. The output is used for visualization.
You can sample any dynamic process right from the terminal: observe changes in a database, monitor MQ in-flight messages, trigger a deployment process and get a notification when it's done.
Using Sampler is basically a 3-step process:
  • Define your configuration in a YAML file
  • Run sampler -c config.yml
  • Adjust components size and location on UI
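As a sketch of step 1, a minimal config file could look like the one below (the file name config.yml is just illustrative; the CPU command is borrowed from the Sparkline section):

```yaml
# config.yml -- a minimal illustrative configuration with one component
sparklines:
  - title: CPU usage
    rate-ms: 200
    sample: ps -A -o %cpu | awk '{s+=$1} END {print s}'
```

Then run sampler -c config.yml and arrange the component in the UI.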

Components
The following is a list of configuration examples for each component type, with macOS compatible sampling scripts.

Runchart


runcharts:
  - title: Search engine response time
    rate-ms: 500        # sampling rate, default = 1000
    scale: 2            # number of digits after sample decimal point, default = 1
    legend:
      enabled: true     # enables item labels, default = true
      details: false    # enables item statistics: cur/min/max/dlt values, default = true
    items:
      - label: GOOGLE
        sample: curl -o /dev/null -s -w '%{time_total}' https://www.google.com
        color: 178      # 8-bit color number, default one is chosen from a pre-defined palette
      - label: YAHOO
        sample: curl -o /dev/null -s -w '%{time_total}' https://search.yahoo.com
      - label: BING
        sample: curl -o /dev/null -s -w '%{time_total}' https://www.bing.com

Sparkline


sparklines:
  - title: CPU usage
    rate-ms: 200
    scale: 0
    sample: ps -A -o %cpu | awk '{s+=$1} END {print s}'
  - title: Free memory pages
    rate-ms: 200
    scale: 0
    sample: memory_pressure | grep 'Pages free' | awk '{print $3}'

Barchart


barcharts:
  - title: Local network activity
    rate-ms: 500        # sampling rate, default = 1000
    scale: 0            # number of digits after sample decimal point, default = 1
    items:
      - label: UDP bytes in
        sample: nettop -J bytes_in -l 1 -m udp | awk '{sum += $4} END {print sum}'
      - label: UDP bytes out
        sample: nettop -J bytes_out -l 1 -m udp | awk '{sum += $4} END {print sum}'
      - label: TCP bytes in
        sample: nettop -J bytes_in -l 1 -m tcp | awk '{sum += $4} END {print sum}'
      - label: TCP bytes out
        sample: nettop -J bytes_out -l 1 -m tcp | awk '{sum += $4} END {print sum}'

Gauge


gauges:
  - title: Minute progress
    rate-ms: 500        # sampling rate, default = 1000
    scale: 2            # number of digits after sample decimal point, default = 1
    percent-only: false # toggle display of the current value, default = false
    color: 178          # 8-bit color number, default one is chosen from a pre-defined palette
    cur:
      sample: date +%S  # sample script for current value
    max:
      sample: echo 60   # sample script for max value
    min:
      sample: echo 0    # sample script for min value
  - title: Year progress
    cur:
      sample: date +%j
    max:
      sample: echo 365
    min:
      sample: echo 0

Textbox


textboxes:
  - title: Local weather
    rate-ms: 10000      # sampling rate, default = 1000
    sample: curl wttr.in?0ATQF
    border: false       # border around the item, default = true
    color: 178          # 8-bit color number, default is white
  - title: Docker containers stats
    rate-ms: 500
    sample: docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.PIDs}}"

Asciibox


asciiboxes:
  - title: UTC time
    rate-ms: 500        # sampling rate, default = 1000
    font: 3d            # font type, default = 2d
    border: false       # border around the item, default = true
    color: 43           # 8-bit color number, default is white
    sample: env TZ=UTC date +%r

Bells and whistles

Triggers
Triggers allow you to perform conditional actions, like visual/sound alerts or an arbitrary shell command. The following examples illustrate the concept.

Clock gauge, which shows minute progress and announces current time at the beginning of each minute
gauges:
  - title: MINUTE PROGRESS
    position: [[0, 18], [80, 0]]
    cur:
      sample: date +%S
    max:
      sample: echo 60
    min:
      sample: echo 0
    triggers:
      - title: CLOCK BELL EVERY MINUTE
        condition: '[ $label == "cur" ] && [ $cur -eq 0 ] && echo 1 || echo 0'  # expects "1" as TRUE indicator
        actions:
          terminal-bell: true  # standard terminal bell, default = false
          sound: true          # NASA quindar tone, default = false
          visual: false        # notification with current value on top of the component area, default = false
          script: say -v samantha `date +%I:%M%p`  # an arbitrary script, which can use $cur, $prev and $label variables

Search engine latency chart, which alerts user when latency exceeds a threshold
runcharts:
  - title: SEARCH ENGINE RESPONSE TIME (sec)
    rate-ms: 200
    items:
      - label: GOOGLE
        sample: curl -o /dev/null -s -w '%{time_total}' https://www.google.com
      - label: YAHOO
        sample: curl -o /dev/null -s -w '%{time_total}' https://search.yahoo.com
    triggers:
      - title: Latency threshold exceeded
        condition: echo "$prev < 0.3 && $cur > 0.3" | bc -l  # expects "1" as TRUE indicator
        actions:
          terminal-bell: true  # standard terminal bell, default = false
          sound: true          # NASA quindar tone, default = false
          visual: true         # visual notification on top of the component area, default = false
          script: 'say alert: ${label} latency exceeded ${cur} second'  # an arbitrary script, which can use $cur, $prev and $label variables

Interactive shell support
In addition to the sample command, one can specify an init command (executed only once before sampling) and a transform command (to post-process the sample command output). That covers the interactive shell use case, e.g. establishing a connection to a database only once and then polling within the interactive shell session.

Basic mode
textboxes:
  - title: MongoDB polling
    rate-ms: 500
    init: mongo --quiet --host=localhost test  # executes only once to start the interactive session
    sample: Date.now();                        # executes with a required rate, in scope of the interactive session
    transform: echo result = $sample           # executes in scope of local session, $sample variable is available for transformation

PTY mode
In some cases an interactive shell won't work, because its stdin is not a terminal. We can fool it using PTY mode:
textboxes:
  - title: Neo4j polling
    pty: true  # enables pseudo-terminal mode, default = false
    init: cypher-shell -u neo4j -p pwd --format plain
    sample: RETURN rand();
    transform: echo "$sample" | tail -n 1
  - title: Top on a remote server
    pty: true  # enables pseudo-terminal mode, default = false
    init: ssh -i ~/user.pem ec2-user@1.2.3.4
    sample: top

Multistep init
It is also possible to execute multiple init commands one after another, before you start sampling.
textboxes:
  - title: Java application uptime
    multistep-init:
      - java -jar jmxterm-1.0.0-uber.jar
      - open host:port # or local PID
      - bean java.lang:type=Runtime
    sample: get Uptime

Variables
If the configuration file contains repeated patterns, they can be extracted into the variables section. Variables can also be specified using the -v/--variable flag on startup, and any system environment variables are available in the scripts as well.
variables:
  mongoconnection: mongo --quiet --host=localhost test
barcharts:
  - title: MongoDB documents by status
    items:
      - label: IN_PROGRESS
        init: $mongoconnection
        sample: db.getCollection('events').find({status:'IN_PROGRESS'}).count()
      - label: SUCCESS
        init: $mongoconnection
        sample: db.getCollection('events').find({status:'SUCCESS'}).count()
      - label: FAIL
        init: $mongoconnection
        sample: db.getCollection('events').find({status:'FAIL'}).count()

Color theme


theme: light # default = dark
sparklines:
  - title: CPU usage
    sample: ps -A -o %cpu | awk '{s+=$1} END {print s}'

Real-world recipes

Databases
The following are examples of different database connections. Using an interactive shell (init script) is recommended, to establish the connection only once and then reuse it during sampling.

MySQL
# prerequisite: installed mysql shell

variables:
  mysql_connection: mysql -u root -s --database mysql --skip-column-names
sparklines:
  - title: MySQL (random number example)
    pty: true
    init: $mysql_connection
    sample: select rand();

PostgreSQL
# prerequisite: installed psql shell

variables:
  PGPASSWORD: pwd
  postgres_connection: psql -h localhost -U postgres --no-align --tuples-only
sparklines:
  - title: PostgreSQL (random number example)
    init: $postgres_connection
    sample: select random();

MongoDB
# prerequisite: installed mongo shell

variables:
  mongo_connection: mongo --quiet --host=localhost test
sparklines:
  - title: MongoDB (random number example)
    init: $mongo_connection
    sample: Math.random();

Neo4j
# prerequisite: installed cypher shell

variables:
  neo4j_connection: cypher-shell -u neo4j -p pwd --format plain
sparklines:
  - title: Neo4j (random number example)
    pty: true
    init: $neo4j_connection
    sample: RETURN rand();
    transform: echo "$sample" | tail -n 1

Kafka lag per consumer group
variables:
  kafka_connection: $KAFKA_HOME/bin/kafka-consumer-groups --bootstrap-server localhost:9092
runcharts:
  - title: Kafka lag per consumer group
    rate-ms: 5000
    scale: 0
    items:
      - label: A->B
        sample: $kafka_connection --group group_a --describe | awk 'NR>1 {sum += $5} END {print sum}'
      - label: B->C
        sample: $kafka_connection --group group_b --describe | awk 'NR>1 {sum += $5} END {print sum}'
      - label: C->D
        sample: $kafka_connection --group group_c --describe | awk 'NR>1 {sum += $5} END {print sum}'

Docker containers stats (CPU, MEM, O/I)
textboxes:
  - title: Docker containers stats
    sample: docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemPerc}}\t{{.MemUsage}}\t{{.NetIO}}\t{{.BlockIO}}\t{{.PIDs}}"

SSH

TOP command on a remote server
variables:
  sshconnection: ssh -i ~/my-key-pair.pem ec2-user@1.2.3.4
textboxes:
  - title: SSH
    pty: true
    init: $sshconnection
    sample: top

JMX

Java application uptime example
# prerequisite: download [jmxterm jar file](https://docs.cyclopsgroup.org/jmxterm)

textboxes:
  - title: Java application uptime
    multistep-init:
      - java -jar jmxterm-1.0.0-uber.jar
      - open host:port # or local PID
      - bean java.lang:type=Runtime
    sample: get Uptime
    transform: echo $sample | tr -dc '0-9' | awk '{printf "%.1f min", $1/1000/60}'


DrMITM - Program Designed To Globally Log All Traffic Of A Website


DrMITM is a program designed to globally log all traffic of a website.

How it works
DrMITM sends a request to the website and returns the website's IP, in case the website's server relies on that IP for requests. The request that goes to the website also ends up being sent to the server, which logs the message the website sends; DrMITM then returns the same message and sends it directly to the server, where the server may see it as the website, and it also directs our request to the website once the program changes IPs. Once it sends our request to the website, the program pauses our traffic and waits for incoming traffic. When a new user tries to log in (or similar) and the website sends a request to the server, DrMITM receives it, and it gets the data back to us by writing the same data to a file.

How do i get started
For the Nim version: install Nim 0.19 (using choosenim or git clone), clone the repo, cd into the directory, and run nim DrMITM.nim.
For the Python version: install Python, clone the repo, cd into the directory, and run python DrMITM.py.

Commands
e (live logging)
b (traffic blocking)
r (redirect users)

Issue Reporting
If you have an issue, please submit it with the following details:
Your issue
Your Nim or Python version
Your operating system
What you were doing before the issue occurred

Q&A:
Q: How does live logging work?
A: It just sends the logged data to a file and outputs it on screen.
Q: How does the traffic block work?
A: A unicode gets sent to the website from the server and overflows the traffic towards incoming traffic.
Q: How does the redirection feature work?
A: It sends a fake error message + redirection status code from the server with a modified location.


DockerSecurityPlayground - A Microservices-based Framework For The Study Of Network Security And Penetration Test Techniques


Docker Security Playground is an application that allows you to:
  • Create network and network security scenarios, in order to understand network protocols,
    rules, and security issues by installing DSP in your PC.
  • Learn penetration testing techniques by simulating vulnerability labs scenarios
  • Manage a set of docker-compose projects. The main goal of DSP is to teach penetration testing and network security, but its flexibility allows the creation, graphic editing and management (run/stop) of all your docker-compose labs. For more information look at the Labs Management page.

DSP Features
  • Graphic Editor of docker-compose
  • Docker Image Management
  • GIT Integration
  • DSP Repository with a set of network security scenarios

How can I share my labs with the world ?
During the installation you can create a local environment that has no link with git, or you can associate a personal repository with the application. This is very useful if you want to share your work with other people.
A DSP repository must meet several requirements, so I have created a base DSP Repo Template that you can use to create your personal repository.
So, the easiest way to share labs is the following:
  1. Fork the DSP_Repo project: https://github.com/giper45/DSP_Repo.git
  2. During the installation set github directory param to your forked repository.
  3. Now create your labs and share them!
It is important that all images you use are available to other users, so:
  • You can publish on docker hub so other users can pull your images in order to use your labs.
  • You can provide dockerfiles inside the .docker-images directory, so users can use build.sh to build your images and use your repo.
If you need a "private way" to share labs, you should share the repository by other means; at the current time there is no support for sharing private repositories.
In DSP you can manage multiple user repositories (Repositories tab)

Prerequisites
  • Nodejs (v 7 or later)
  • git
  • docker
  • docker-compose
  • compiler tools (g++, c, c++)

Installation
Install prerequisites and run:
npm install

Troubleshooting during installation
If you have errors regarding the node-pty module, try to:
  • Install build-essential (in Ubuntu: apt install -y build-essential)
  • Use the Node.js LTS (node-pty has some issues, as shown here)

Update the application:
When you update the application it is important to update the npm packages (the application uses mydockerjs, an npm Docker API that I am developing alongside DSP: https://www.npmjs.com/package/mydockerjs)
npm run update

Start
Run
npm start  
to start the application. This will launch a server listening on port 8080 of your localhost (or another port, if you have set the ENV variable in the index.js file).
Open your favourite browser and type localhost:8080. You'll be redirected to the installation page; set the parameters and click install.

Documentation
For documentation about DSP usage go to Wiki page:
It is a little outdated; I will update it as soon as possible!

Docker Wrapper Image
DSP implements a label convention called DockerWrapperImage that allows you to create images that expose actions to execute when a lab is running. Look at the doc.

Error Debug
MacOS ECONNRESET error:
events.js:183
throw er; // Unhandled 'error' event
^

Error: read ECONNRESET
at _errnoException (util.js:992:11)
at TCP.onread (net.js:618:25)
On Mac it seems that there is some problem with some node package, so in order to solve this run:
MacBook-Pro:DockerSecurityPlayground gaetanoperrone$ npm install ws@3.3.2 --save-dev --save-exact
Other info here: http://gitlab.comics.unina.it/NS-Thesis/DockerSecurityPlayground_1/wikis/docker-operation-errors

Contributing
  1. Fork it!
  2. Create your feature branch: git checkout -b my-new-feature
  3. Commit your changes: git commit -am 'Add some feature'
  4. Push to the branch: git push origin my-new-feature
  5. Submit a pull request and we'll review it

Any Questions?
Use the Issues tracker to ask anything you want!

Links

Relevant DSP Repositories

Contributors
  • Technical support: Gaetano Perrone, Francesco Caturano
  • Documentation support Gaetano Perrone, Francesco Caturano
  • Application design: Gaetano Perrone, Simon Pietro Romano
  • Application development: Gaetano Perrone, Francesco Caturano
  • Docker wrapper image development: Gaetano Perrone, Francesco Caturano
Thanks to Giuseppe Criscuolo for the logo design

Changelog
Go to CHANGELOG.md to see all the version changes.


Airflowscan - Checklist And Tools For Increasing Security Of Apache Airflow


Checklist and tools for increasing security of Apache Airflow.

DISCLAIMER
This project is NOT affiliated with the Apache Foundation or the Airflow project, and is not endorsed by them.

Contents
The purpose of this project is to provide tools to increase the security of Apache Airflow installations. This project provides the following tools:

Information for the Static Analysis Tool (airflowscan)
The static analysis tool can check an Airflow configuration file for settings related to security. The tool converts the config file to JSON, and then uses a JSON Schema to do the validation.
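As a rough illustration of that approach (this is not airflowscan's actual code; the section, option, and finding message below are hypothetical), an INI-style config can be converted to a dict and checked against a security rule:

```python
import configparser

# Hypothetical Airflow-style config snippet used for illustration.
SAMPLE_CFG = """
[webserver]
expose_config = True
"""

def cfg_to_dict(text):
    """Convert INI-style config text into a plain nested dict."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return {section: dict(parser.items(section)) for section in parser.sections()}

def check(config):
    """Return a list of findings for security-relevant settings."""
    findings = []
    # Exposing the full config in the web UI can leak credentials.
    if config.get("webserver", {}).get("expose_config", "False").lower() == "true":
        findings.append("webserver.expose_config should be False")
    return findings

if __name__ == "__main__":
    print(check(cfg_to_dict(SAMPLE_CFG)))
```

The real tool validates the whole converted config against a JSON Schema instead of hand-written rules like the one above.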

Requirements
Python 3 is required and you can find all required modules in the requirements.txt file. Only tested on Python 3.7, but it should work on other 3.x releases. No plans for 2.x support at this time.

Installation
You can install this via PIP as follows:
pip install airflowscan
airflowscan
To download and run manually, do the following:
git clone https://github.com/nightwatchcybersecurity/airflowscan.git
cd airflowscan
pip install -r requirements.txt
python -m airflowscan.cli

How to use
To scan a configuration file, run the following command:
airflowscan scan some_airflow.cfg

Reporting bugs and feature requests
Please use the GitHub issue tracker to report issues or suggest features: https://github.com/nightwatchcybersecurity/airflowscan
You can also send email to research /at/ nightwatchcybersecurity [dot] com


Diaphora - The Most Advanced Free And Open Source Program Diffing Tool


Diaphora (διαφορά, Greek for 'difference') is a program diffing plugin for IDA, similar to Zynamics Bindiff or other FOSS counterparts like YaDiff, DarunGrim, TurboDiff, etc... It was released during SyScan 2015.
It works with IDA 6.9 to 7.3. Support for Ghidra is in development. Support for Binary Ninja is also planned but will come after Ghidra's port. If you are looking for Radare2 support you can check this very old fork.
For more details, please check the tutorial in the "doc" directory.
NOTE: If you're looking for a tool for diffing or matching functions between binaries and source code, you might want to take a look at Pigaios.

Getting help and asking for features
You can join the mailing list https://groups.google.com/forum/?hl=es#!forum/diaphora to ask for help, new features, report issues, etc... For reporting bugs, however, I recommend using the issues tracker: https://github.com/joxeankoret/diaphora/issues
Please note that only the last 3 versions of IDA are officially supported. As of today, it means that only IDA 7.1, 7.2 and 7.3 are supported. Versions 6.8, 6.9, 6.95 and 7.0 do work (with all the last patches that were supplied to customers), but no official support is offered for them. However, if you run into any problem with these versions, ping me and I will do my best.

Documentation
You can check the tutorial https://github.com/joxeankoret/diaphora/blob/master/doc/diaphora_help.pdf

Screenshots
This is a screenshot of Diaphora diffing the PEGASUS iOS kernel Vulnerability fixed in iOS 9.3.5:


And this is an old screenshot of Diaphora diffing the Microsoft bulletin MS15-034:


These are some screenshots of Diaphora diffing the Microsoft bulletin MS15-050, extracted from the blog post Analyzing MS15-050 With Diaphora from Alex Ionescu.





Here is a screenshot of Diaphora diffing iBoot from iOS 10.3.3 against iOS 11.0:



Iris - WinDbg Extension To Perform Basic Detection Of Common Windows Exploit Mitigations


Iris WinDbg extension performs basic detection of common Windows exploit mitigations (32 and 64 bits).


The checks implemented, as can be seen in the screenshot above, are (for the loaded modules):
  • DynamicBase
  • ASLR
  • DEP
  • SEH
  • SafeSEH
  • CFG
  • RFG
  • GS
  • AppContainer
If you don't know the meaning of some of the keywords above, use Google; you'll find better explanations than the ones I could give you.

Setup
To "install", copy iris.dll into the winext folder for WinDbg (for x86 and x64).

WinDbg 10.0.xxxxx
Unless you installed the debug tools in a non standard path you'll find the winext folder at:
C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\winext
Or, for 32 bits:
C:\Program Files (x86)\Windows Kits\10\Debuggers\x86\winext

WinDbg Preview
Unless you installed/copied the WinDbg Preview install folder into a non-standard location, you'll find it in a folder with a name close to the one below (depending on the installed version):
C:\Program Files\WindowsApps\Microsoft.WinDbg_1.1906.12001.0_neutral__9wekib2d8acwe
For 64 bits copy iris.dll into amd64\winext or into x86\winext for 32 bits.

Load the extension
After the steps above, just load the extension with .load iris and run !iris.help to see the available command(s).
0:002> .load iris
[+] Iris WinDbg Extension Loaded
0:002> !iris.help

IRIS WinDbg Extension (rui@deniable.org). Available commands:
help = Shows this help
modules = Display exploit mitigations for all loaded modules.

Running
As shown in the screenshot above, just run: !iris.modules or simply !modules.

Warning
Don't blindly trust the results, some might not be accurate. I pretty much used PE-bear's parser, winchecksec, Process Hacker, and narly as references. Thank you to all of them.
I put this together in a day to save some time during a specific assignment. It worked for me, but it hasn't been thoroughly tested. You have been warned, use at your own risk.
I'll be updating and maintaining this, so if you find any issues please let me know. I plan to add a few more mitigations later.

References
Besides the references mentioned before, if you want to write your own extension (or contribute to this one) the Advanced Windows Debugging book and the WinDbg SDK are your friends.



Firmware Slap - Discovering Vulnerabilities In Firmware Through Concolic Analysis And Function Clustering


Firmware Slap combines concolic analysis with function clustering for vulnerability discovery and function similarity in firmware. Firmware Slap is built as a series of libraries and exports most information as either pickles or JSON for integration with other tools.

Slides from the talk can be found here

Setup
Firmware Slap should be run in a virtual environment. It has been tested on Python 3.6.
python setup.py install
You will need rabbitmq and (radare2 or Ghidra)
# Ubuntu
sudo apt install rabbitmq-server
# OSX
brew install rabbitmq

# Radare2
git clone https://github.com/radare/radare2.git
sudo ./radare2/sys/install.sh
# Ghidra
wget https://ghidra-sre.org/ghidra_9.0.4_PUBLIC_20190516.zip
unzip ghidra_9.0.4_PUBLIC_20190516.zip -d ghidra
echo "export PATH=\$PATH:$PWD/ghidra/ghidra_9.0.4/support" >> ~/.bashrc
If you want to use the Elasticsearch features, run the Elasticsearch_and_kibana.sh script

Quickstart
Ensure rabbitmq-server is running.
# In a Separate terminal
celery -A firmware_slap.celery_tasks worker --loglevel=info
# Basic buffer overflow
Discover_And_Dump.py examples/iwconfig
# Command injection
tar -xvf examples/Almond_libs.tar.gz
Vuln_Discover_Celery.py examples/upload.cgi -L Almond_Root/lib/

Usage
# Get the firmware used for examples
wget https://firmware.securifi.com/AL3_64MB/AL3-R024-64MB
binwalk -Mre AL3-R024-64MB
Start a celery worker from the project root directory:
# In a separate terminal
celery -A firmware_slap.celery_tasks worker --loglevel=info
In a different terminal window, run a vulnerability discovery job.
$ Vuln_Discover_Celery.py Almond_Root/etc_ro/lighttpd/www/cgi-bin/upload_bootloader.cgi -L Almond_Root/lib/
[+] Getting argument functions
[+] Analyzing 1 functions
0%| | 0/1 [00:01<?, ?it/s]
{ 'Injected_Location': { 'base': '0x7ffefde8',
........................ SNIP ......................
'type': 'Command Injection'}
Python 3.5.2 (default, Nov 12 2018, 13:43:14)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.3.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]:

The returned vulnerability object
The above command will return an object in the result variable. This is a dictionary with all sorts of awesome information about the vulnerability. There are three major keys in the object: the function arguments, the memory, and the injected location.
In [3]: result.keys()                                                                                 
Out[3]: dict_keys(['args', 'file_name', 'type', 'mem', 'Injected_Location'])

args
The args key details information about the recovered arguments and what the argument values must be to recreate the vulnerability. In the example below, one argument is recovered; to trigger the command injection, that argument must be a char* that contains "`reboot`" to trigger a reboot.
In [1]: result['args']                                                           
Out[1]:
[{'base': 'a1',
'type': 'int',
'value': "0x0 -> b'`reboot`\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x00'"}]

Memory
The memory component of the object keeps track of the required memory values set to trigger the vulnerability. It also offers stack addresses and .text addresses with the offending commands for setting the required memory constraints. The first memory event required is at mtd_write_firmware+0x0 and the second is at mtd_write_firmware+0x38. Assembly is provided to help prettify future display work.
In [2]: result['mem']                                                                   
Out[2]:
[{'BBL_ADDR': '0x401138',
'BBL_DESC': {'DESCRIPTION': 'mtd_write_firmware+0x0 in upload_bootloader.cgi (0x401138)',
'DISASSEMBLY': ['0x401138:\tlui\t$gp, 0x42',
'0x40113c:\taddiu\t$sp, $sp, -0x228',
'0x401140:\taddiu\t$gp, $gp, -0x5e90',
'0x401144:\tlw\t$t9, -0x7f84($gp)',
'0x401148:\tsw\t$a2, 0x10($sp)',
'0x40114c:\tlui\t$a2, 0x40',
'0x401150:\tmove\t$a3, $a1',
'0x401154:\tsw\t$ra, 0x224($sp)',
'0x401158:\tsw\t$gp, 0x18($sp)',
'0x40115c:\tsw\t$a0, 0x14($sp)',
'0x401160:\taddiu\t$a1, $zero, 0x200',
'0x401164:\taddiu\t$a0, $sp, 0x20',
'0x401168:\tjalr\t$t9',
'0x40116c:\taddiu\t$a2, $a2, 0x196c']},
'DATA': "b'`reboot`\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01 \\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\ x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'",
'DATA_ADDRS': ['0x0']},
{'BBL_ADDR': '0x401170',
'BBL_DESC': {'DESCRIPTION': 'mtd_write_firmware+0x38 in upload_bootloader.cgi (0x401170)',
'DISASSEMBLY': ['0x401170:\tlw\t$gp, 0x18($sp)',
'0x401174:\tnop\t',
'0x401178:\tlw\t$t9, -0x7f68($gp)',
'0x40117c:\tnop\t',
'0x401180:\tjalr\t$t9',
'0x401184:\taddiu\t$a0, $sp, 0x20']},
'DATA': "b'/bin/mtd_write -o 0 -l 0 write `reboot`'",
'DATA_ADDRS': ['0x7ffefe07']}]

Command Injection Specific
Since command injections are the easiest to demo, I've created a convenience dictionary key to demonstrate the location of the command injection easily.
In [4]: result['Injected_Location']                                                                      
Out[4]: {'base': '0x7ffefde8', 'type': 'char *', 'value': '/bin/mtd_write -o 0 -l 0 write `reboot`'}

Sample Vulnerability Cluster Script
The vulnerability cluster script will attempt to discover vulnerabilities using the method in the Sample Vulnerability Discovery script, and then build k-means clusters of a set of given functions across an extracted firmware to find functions similar to vulnerable ones.
$ Vuln_Cluster_Celery.py -h
usage: Vuln_Cluster_Celery.py [-h] [-L LD_PATH] [-F FUNCTION] [-V VULN_PICKLE]
Directory

positional arguments:
Directory

optional arguments:
-h, --help show this help message and exit
-L LD_PATH, --LD_PATH LD_PATH
Path to libraries to load
-F FUNCTION, --Function FUNCTION
-V VULN_PICKLE, --Vuln_Pickle VULN_PICKLE
The command below takes -F as a known vulnerable function, -V as a pickle dumped from a previous run (so previously discovered vulnerabilities do not need to be rediscovered), and -L as the library path. A sample usage:
$ python Vuln_Cluster_Celery.py -F mtd_write_firmware -L Almond_Root/lib/ Almond_Root/etc_ro/lighttpd/www/cgi-bin/
[+] Reading Files
100%|██████████| 1/1 [00:00<00:00, 2.80it/s]
Getting functions from executables
Starting main
... Snip ...
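The clustering step above can be sketched in miniature: give each function a numeric feature vector, then let k-means group similar vectors together. The following is a toy pure-Python sketch; the feature choices and function names are hypothetical, not the tool's actual implementation.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means over fixed-length feature vectors."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each point to the nearest centroid (squared Euclidean distance).
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Move each centroid to the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

# Hypothetical per-function features, e.g. (basic block count, callee count):
features = [
    (40, 12),  # mtd_write_firmware (known vulnerable)
    (42, 11),  # a similar write routine -> should land in the same cluster
    (3, 1),    # a usage/help routine -> clearly different
]
labels = kmeans(features, k=2)
```

With real firmware, the feature vectors would come from the binary analysis pass rather than being hand-written, but the grouping logic is the same: functions that cluster with a known-vulnerable one are worth a closer look.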


Dow Jones Hammer - Protect The Cloud With The Power Of The Cloud (AWS)


Dow Jones Hammer is a multi-account cloud security tool for AWS. It identifies misconfigurations and insecure data exposures within the most popular AWS resources, across all regions and accounts. It has near real-time reporting capabilities (e.g. JIRA, Slack) to provide quick feedback to engineers and can perform auto-remediation of some misconfigurations. This helps to protect products deployed in the cloud by creating secure guardrails.

Documentation
Dow Jones Hammer documentation is available via GitHub Pages at https://dowjones.github.io/hammer/.

Security features

Technologies
  • Python 3.6
  • AWS (Lambda, Dynamodb, EC2, SNS, CloudWatch, CloudFormation)
  • Terraform
  • JIRA
  • Slack

Contributing
You are welcome to contribute!

Issues:
You can use GitHub Issues to report issues. Describe what is going wrong and what you expect the correct behaviour to be.

Patches:
We currently use the dev branch for ongoing development. Please open PRs against this branch.

Run tests:
Run tests with this command:
tox

Contact Us
Feel free to create an issue report or pull request, or just email us at hammer@dowjones.com with any other questions or concerns you have.


"Can I Take Over XYZ?" - A List Of Services And How To Claim (Sub)Domains With Dangling DNS Records.


What is a subdomain takeover?
Subdomain takeover vulnerabilities occur when a subdomain (subdomain.example.com) is pointing to a service (e.g. GitHub pages, Heroku, etc.) that has been removed or deleted. This allows an attacker to set up a page on the service that was being used and point their page to that subdomain. For example, if subdomain.example.com was pointing to a GitHub page and the user decided to delete their GitHub page, an attacker can now create a GitHub page, add a CNAME file containing subdomain.example.com, and claim subdomain.example.com.
You can read up more about subdomain takeovers here:

Safely demonstrating a subdomain takeover
Based on personal experience, claiming the subdomain discreetly and serving a harmless file on a hidden page is usually enough to demonstrate the security vulnerability. Do not serve content on the index page. A good proof of concept could consist of an HTML comment served via a random path:
$ cat aelfjj1or81uegj9ea8z31zro.html
<!-- PoC by username -->
Please be advised that this depends on what bug bounty program you are targeting. When in doubt, please refer to the bug bounty program's security policy and/or request clarifications from the team behind the program.

How to contribute
You can submit new services here: https://github.com/EdOverflow/can-i-take-over-xyz/issues/new?template=new-entry.md.
A list of services that can be checked (although check for duplicates against this list first) can be found here: https://github.com/EdOverflow/can-i-take-over-xyz/issues/26.
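Programmatically, a takeover check boils down to: resolve the subdomain's CNAME, fetch the response body, and compare it against the service's fingerprint from the table of entries. A minimal sketch follows; the fingerprint dictionary here is a small hand-picked subset for illustration, and the actual DNS resolution and HTTP fetching (e.g. with dnspython and requests) are omitted.

```python
# Hypothetical subset of service fingerprints; the maintained list lives in
# the can-i-take-over-xyz repository.
FINGERPRINTS = {
    "github.io": "There isn't a Github Pages site here.",
    "s3.amazonaws.com": "The specified bucket does not exist",
    "ghost.io": "The thing you were looking for is no longer here",
}

def possible_takeover(cname_target: str, response_body: str) -> bool:
    """Flag a subdomain whose CNAME points at a known service and whose
    response body matches that service's 'unclaimed' fingerprint."""
    for service, fingerprint in FINGERPRINTS.items():
        if cname_target.endswith(service) and fingerprint in response_body:
            return True
    return False

hit = possible_takeover("docs.example.github.io",
                        "There isn't a Github Pages site here.")
```

A fingerprint match is only a signal, not proof: as the table notes, several services are edge cases, so always verify by actually claiming the resource as described above.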

All entries
Engine | Status | Fingerprint | Discussion | Documentation
------ | ------ | ----------- | ---------- | -------------
Akamai | Not vulnerable | | Issue #13 |
AWS/S3 | Vulnerable | The specified bucket does not exist | Issue #36 |
Bitbucket | Vulnerable | Repository not found | |
Campaign Monitor | Vulnerable | 'Trying to access your account?' | | Support Page
Cargo Collective | Vulnerable | 404 Not Found | | Cargo Support Page
Cloudfront | Not vulnerable | ViewerCertificateException | Issue #29 | Domain Security on Amazon CloudFront
Desk | Not vulnerable | Please try again or try Desk.com free for 14 days. | Issue #9 |
Fastly | Edge case | Fastly error: unknown domain: | Issue #22 |
Feedpress | Vulnerable | The feed has not been found. | HackerOne #195350 |
Fly.io | Vulnerable | 404 Not Found | Issue #101 |
Freshdesk | Not vulnerable | | | Freshdesk Support Page
Ghost | Vulnerable | The thing you were looking for is no longer here, or never was | |
Github | Vulnerable | There isn't a Github Pages site here. | Issue #37, Issue #68 |
Gitlab | Not vulnerable | | HackerOne #312118 |
Google Cloud Storage | Not vulnerable | | |
HatenaBlog | Vulnerable | 404 Blog is not found | |
Help Juice | Vulnerable | We could not find what you're looking for. | | Help Juice Support Page
Help Scout | Vulnerable | No settings were found for this company: | | HelpScout Docs
Heroku | Edge case | No such app | Issue #38 |
Intercom | Vulnerable | Uh oh. That page doesn't exist. | Issue #69 | Help center
JetBrains | Vulnerable | is not a registered InCloud YouTrack | | YouTrack InCloud Help Page
Kinsta | Vulnerable | No Site For Domain | Issue #48 | kinsta-add-domain
LaunchRock | Vulnerable | It looks like you may have taken a wrong turn somewhere. Don't worry...it happens to all of us. | Issue #74 |
Mashery | Edge case | Unrecognized domain | HackerOne #275714, Issue #14 |
Microsoft Azure | Vulnerable | | Issue #35 |
Netlify | Edge case | | Issue #40 |
Pantheon | Vulnerable | 404 error unknown site! | Issue #24 | Pantheon-Sub-takeover
Readme.io | Vulnerable | Project doesnt exist... yet! | Issue #41 |
Sendgrid | Not vulnerable | | |
Shopify | Edge case | Sorry, this shop is currently unavailable. | Issue #32, Issue #46 | Medium Article
Squarespace | Not vulnerable | | |
Statuspage | Vulnerable | Visiting the subdomain will redirect users to https://www.statuspage.io. | PR #105 | Statuspage documentation
Strikingly | Vulnerable | page not found | Issue #58 | Strikingly-Sub-takeover
Surge.sh | Vulnerable | project not found | | Surge Documentation
Tumblr | Vulnerable | Whatever you were looking for doesn't currently exist at this address | |
Tilda | Edge case | Please renew your subscription | PR #20 |
Unbounce | Not vulnerable | The requested URL was not found on this server. | Issue #11 |
Uptimerobot | Vulnerable | page not found | Issue #45 | Uptimerobot-Sub-takeover
UserVoice | Vulnerable | This UserVoice subdomain is currently available! | |
Webflow | Not vulnerable | | Issue #44 | forum webflow
Wordpress | Vulnerable | Do you want to register *.wordpress.com? | |
WP Engine | Not vulnerable | | |
Zendesk | Not vulnerable | Help Center Closed | Issue #23 | Zendesk Support


Eyeballer - Convolutional Neural Network For Analyzing Pentest Screenshots

Give those screenshots of yours a quick eyeballing.
Eyeballer is meant for large-scope network penetration tests where you need to find "interesting" targets from a huge set of web-based hosts. Go ahead and use your favorite screenshotting tool like normal (EyeWitness or GoWitness) and then run the screenshots through Eyeballer to tell you what's likely to contain vulnerabilities and what isn't.

Example Labels

Old-Looking Sites

Login Pages

Homepages


Custom 404's


Eyeballer uses TF.keras on Tensorflow 2.0. This is (as of this moment) still in "beta". So the pip requirement for it looks a bit weird. It'll also probably conflict with an existing TensorFlow installation if you've got the regular 1.0 version installed. So, heads-up there. But 2.0 should be out of beta and official "soon" according to Google, so this problem ought to solve itself in short order.
Setup

Download required packages on pip:
sudo pip3 install -r requirements.txt
Or if you want GPU support:
sudo pip3 install -r requirements-gpu.txt
NOTE: Setting up a GPU for use with TensorFlow is way beyond the scope of this README. There's hardware compatibility to consider, drivers to install... There's a lot. So you're just going to have to figure this part out on your own if you want a GPU. But at least from a Python package perspective, the above requirements file has you covered.
Training Data
You can find our training data here:
https://www.dropbox.com/sh/7aouywaid7xptpq/AAD_-I4hAHrDeiosDAQksnBma?dl=1
Pretty soon, we're going to add this as a TensorFlow DataSet, so you don't need to download it separately like this. It'll also let us version the data a bit better. But for now, just deal with it. There are three things you need from the training data:
  1. images/ folder, containing all the screenshots (resized down to 224x140; we'll have the full-size images up soon)
  2. labels.csv that has all the labels
  3. bishop-fox-pretrained-v1.h5 A pretrained weights file you can use right out of the box without training.
Copy all three into the root of the Eyeballer code tree.

Predicting Labels
To eyeball some screenshots, just run the "predict" mode:
eyeballer.py --weights YOUR_WEIGHTS.h5 predict YOUR_FILE.png
Or for a whole directory of files:
eyeballer.py --weights YOUR_WEIGHTS.h5 predict PATH_TO/YOUR_FILES/
Eyeballer will spit the results back to you in human readable format (a results.html file so you can browse it easily) and machine readable format (a results.csv file).
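Because the CSV output is machine readable, it's easy to post-process, for example to pull out only the screenshots most likely to be login pages. A small sketch, assuming results.csv has one confidence column per label; the column names here are guesses for illustration, not Eyeballer's documented schema.

```python
import csv
import io

# Stand-in for a results.csv produced by predict mode; column names are assumed.
sample_csv = """filename,custom404,login_page,homepage,old_looking
a.png,0.02,0.91,0.10,0.05
b.png,0.88,0.03,0.20,0.70
"""

def screenshots_over(csv_text, label, threshold=0.5):
    """Return filenames whose confidence for `label` exceeds `threshold`."""
    return [
        row["filename"]
        for row in csv.DictReader(io.StringIO(csv_text))
        if float(row[label]) > threshold
    ]

login_pages = screenshots_over(sample_csv, "login_page")
```

In a real run you would read the file from disk (`open("results.csv")`) instead of the inline sample, then feed the filtered filenames back into your triage workflow.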

Training
To train a new model, run:
eyeballer.py train
You'll want a machine with a good GPU for this to run in a reasonable amount of time. Setting that up is outside the scope of this readme, however.
This will output a new model file (weights.h5 by default).

Evaluation
You just trained a new model, cool! Let's see how well it performs against some images it's never seen before, across a variety of metrics:
eyeballer.py --weights YOUR_WEIGHTS.h5 evaluate
The output will describe the model's accuracy in both recall and precision for each of the program's labels, including "none of the above" as a pseudo-label.


pwnedOrNot v1.2.6 - OSINT Tool to Find Passwords for Compromised Email Addresses


OSINT Tool to Find Passwords for Compromised Email Accounts
pwnedOrNot uses the haveibeenpwned v2 API to test email accounts and tries to find their passwords in Pastebin dumps.

Featured

Get In Touch

Changelog

Features
haveibeenpwned offers a lot of information about the compromised email, some useful information is displayed by this script:
  • Name of Breach
  • Domain Name
  • Date of Breach
  • Fabrication status
  • Verification Status
  • Retirement status
  • Spam Status
And with all this information, pwnedOrNot can easily find passwords for compromised emails if the dump is accessible and it contains the password.
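For illustration, a breach record in the haveibeenpwned v2 breach model is a JSON object with fields such as Name, Domain, BreachDate, and IsVerified. The sketch below filters such a response; the sample payload is made up, and the real script's logic differs.

```python
import json

def summarize_breaches(payload, domain=None):
    """Summarize breach entries from a v2-style JSON response,
    optionally filtering by breach domain."""
    return [
        {"name": b["Name"], "domain": b["Domain"],
         "date": b["BreachDate"], "verified": b["IsVerified"]}
        for b in json.loads(payload)
        if domain is None or b["Domain"] == domain
    ]

# Sample response shaped like the v2 breach model (not real breach data):
sample = json.dumps([
    {"Name": "Adobe", "Domain": "adobe.com",
     "BreachDate": "2013-10-04", "IsVerified": True},
])
summary = summarize_breaches(sample, domain="adobe.com")
```

This mirrors what the -d/--domain flag does conceptually: narrow the breach list to a single domain before hunting for the corresponding dump.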

Tested on
  • Kali Linux 2019.1
  • BlackArch Linux
  • Ubuntu 18.04
  • Kali Nethunter
  • Termux

Installation
Ubuntu / Kali Linux / Nethunter / Termux
git clone https://github.com/thewhiteh4t/pwnedOrNot.git
cd pwnedOrNot
pip3 install requests
BlackArch Linux
pacman -S pwnedornot

Updates
cd pwnedOrNot
git pull

Usage
python3 pwnedornot.py -h

usage: pwnedornot.py [-h] [-e EMAIL] [-f FILE] [-d DOMAIN] [-n] [-l]
[-c CHECK]

optional arguments:
-h, --help show this help message and exit
-e EMAIL, --email EMAIL Email Address You Want to Test
-f FILE, --file FILE Load a File with Multiple Email Addresses
-d DOMAIN, --domain DOMAIN Filter Results by Domain Name
-n, --nodumps Only Check Breach Info and Skip Password Dumps
-l, --list Get List of all pwned Domains
-c CHECK, --check CHECK Check if your Domain is pwned

# Examples

# Check Single Email
python3 pwnedornot.py -e <email>
#OR
python3 pwnedornot.py --email <email>

# Check Multiple Emails from File
python3 pwnedornot.py -f <file name>
#OR
python3 pwnedornot.py --file <file name>

# Filter Results for a Domain Name [Ex: adobe.com]
python3 pwnedornot.py -e <email> -d <domain name>
#OR
python3 pwnedornot.py -f <file name> --domain <domain name>

# Get only Breach Info, Skip Password Dumps
python3 pwnedornot.py -e <email> -n
#OR
python3 pwnedornot.py -f <file name> --nodumps

# Get List of all Breached Domains
python3 pwnedornot.py -l
#OR
python3 pwnedornot.py --list

# Check if a Domain is Pwned
python3 pwnedornot.py -c <domain name>
#OR
python3 pwnedornot.py --check <domain name>

Demo


