
Xssizer - The Best Tool To Find And Prove XSS Flaws


According to Wikipedia:
Cross-site scripting is a type of computer security vulnerability typically found in web applications. XSS enables attackers to inject client-side scripts into web pages viewed by other users. A cross-site scripting vulnerability may be used by attackers to bypass access controls such as the same-origin policy.
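For readers new to the bug class, here is a minimal, hypothetical illustration (not related to xssizer's code) of how a reflected XSS flaw arises and how it is fixed: a handler that echoes a query parameter into HTML without escaping.

# Hypothetical example: a vulnerable echo endpoint and its fix (not xssizer code).
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
import html

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        name = query.get("name", [""])[0]

        # VULNERABLE: user input reflected into the page unescaped, so
        # /?name=<script>alert(1)</script> executes in the victim's browser.
        # body = f"<h1>Hello {name}</h1>"

        # FIXED: HTML-encode the input before reflecting it.
        body = f"<h1>Hello {html.escape(name)}</h1>"

        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), EchoHandler).serve_forever()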

xssizer helps penetration testers, bug hunters and other security professionals to easily detect such vulnerabilities and produces a ready-to-use PoC exploit for demonstration.

Installation
git clone https://github.com/noLogicXD/xssizer.git
cp -r xssizer /var/www/html/xssizer
service apache2 start
Then open localhost/xssizer/pro.php in your browser.

User interface
xssizer has a user-friendly and straightforward interface.


Testimonies
xssizer's private beta version received a tremendous amount of appreciation from the beta testers. Here are some of the best compliments:
Mahmoud Osama "I have to say that Brute Logic's KNOXSS is the best XSS tool I have ever seen! I have just got rewarded with bounty on YesWeHack for DOM XSS."
Hussain Adnan "You buy KNOXSS for ~$100 and by it [you] win $5000!"
Emad Shanab "I would like to thank KNOXSS for bypassing Akamai WAF and getting the magic alert box in famous credit card company."

Words from Author
Thank you for using xssizer. Please follow me on Twitter: @SecurityJoker.



Buster - Find Emails Of A Person And Return Info Associated With Them


Buster is a simple OSINT tool used to:
  • Get social accounts from various sources (gravatar, about.me, myspace, skype, github, linkedin, avast)
  • Get links to where the email was found using google, twitter, darksearch and paste sites
  • Get domains registered with an email (reverse whois)
  • Generate possible emails and usernames of a person
  • Find the email of a social media account
  • Find emails from a username
  • Find the work email of a person using hunter.io

Installation
clone the repository:
$ git clone git://github.com/sham00n/buster
Once you have a copy of the source, you can install it with:
$ cd buster/
$ python3 setup.py install
$ buster -h

API keys
This project uses hunter.io to get information from company emails. The first couple of "company email" searches don't require a key; if you are interested in company emails, I recommend that you sign up for an account on hunter.io.
Once you get an API key, add it to the file "api-keys.yaml" and rerun the command:
$ python setup.py install

Usage
usage: buster [-h] [-e EMAIL] [-f FIRST] [-m MIDDLE] [-l LAST] [-b BIRTHDATE]
[-a ADDINFO [ADDINFO ...]] [-u USERNAME] [-c COMPANY]
[-p PROVIDERS [PROVIDERS ...]] [-o OUTPUT] [-v] [--list LIST]

Buster is an OSINT tool used to generate and verify emails and return
information associated with them

optional arguments:
-h, --help show this help message and exit
-e EMAIL, --email EMAIL
email adress or email pattern
-f FIRST, --first FIRST
first name
-m MIDDLE, --middle MIDDLE
middle name
-l LAST, --last LAST last name
-b BIRTHDATE, --birthdate BIRTHDATE
birthdate in ddmmyyyy format,type * if you dont
know(ex:****1967,3104****)
-a ADDINFO [ADDINFO ...], --addinfo ADDINFO [ADDINFO ...]
additional info to help guessing the
email(ex:king,345981)
-u USERNAME, --username USERNAME
checks 100+ email providers for the availability of
username@provider.com
-c COMPANY, --company COMPANY
company domain
-p PROVIDERS [PROVIDERS ...], --providers PROVIDERS [PROVIDERS ...]
email provider domains
-o OUTPUT, --output OUTPUT
output to a file
-v, --validate check which emails are valid and returns information
of each one
--list LIST file containing list of emails

Usage examples

Get info on a single email (whether it exists, social media where the email was used, data breaches, pastes, and links to where it was found):
$ buster -e target@example.com

Query a list of emails:
$ buster --list emails.txt

Generate emails that match the pattern and check whether they exist (use the -a argument if you have more info to add, ex: -a nickname fav_color phone #). A rough sketch of the pattern-expansion idea follows these examples.
$ buster -e j********9@g****.com -f john -l doe -b ****1989

Generate usernames (use with -o option and input the file to recon-ng's profiler module)
$ buster -f john -m james -l doe -b 13071989 

Generate emails (use -v if you want to validate and get info of each email)
$ buster -f john -m james -l doe -b 13071989 -p gmail.com yahoo.com

Generate 100+ emails in the format username@provider.com and return the valid ones (use -p if you don't want all 100+):
$ buster -u johndoe

Generate a company email and return info associated with it:
$ buster -f john -l doe -c company.com
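For intuition, here is a rough Python sketch of the pattern-expansion idea used in the masked-email example above. It is an illustration only, not Buster's actual code, and it assumes each '*' in the mask hides exactly one character.

# Rough sketch of pattern-based email generation (not Buster's code).
# Assumption for this illustration: each '*' in the mask hides exactly one character.
import fnmatch

def candidates(first, last, birth_year, providers):
    """Yield plausible email candidates built from name parts and a birth year."""
    f, l = first.lower(), last.lower()
    year2 = birth_year[-2:]
    local_parts = [f + l, f + "." + l, f[0] + l, f + l + year2, f + "." + l + year2]
    for local in local_parts:
        for provider in providers:
            yield f"{local}@{provider}"

def match_pattern(pattern, emails):
    """Keep only candidates that fit a masked pattern like 'j********9@g****.com'."""
    glob = pattern.replace("*", "?")   # one '?' per masked character
    return [e for e in emails if fnmatch.fnmatch(e, glob)]

emails = list(candidates("John", "Doe", "1989", ["gmail.com", "yahoo.com"]))
print(match_pattern("j********9@g****.com", emails))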

Tips
  • You get 200 email validations/day, use them wisely!
  • When using the -a option, avoid using short words (ex: j, 3, 66); the shorter the words are, the bigger the email list is and therefore more validations are needed
  • When adding an email pattern, make sure the service providing the pattern displays it with the right size (facebook, twitter and instagram do... others might not)
  • I don't recommend using it with Tor, as haveibeenpwned.com, hunter.io and google won't function properly

Thanks
  • emailrep.io for being developer friendly
  • khast3x, developer of h8mail, which was used as a reference for this README file
  • The OSINT community for being awesome!

Notes
  • My code is ugly, I know... if you know how to do things better, let me know!
  • If you have any suggestions or improvements, email me at sham00n at protonmail dot com


Slurp - S3 Bucket Enumerator


Blackbox/whitebox S3 bucket enumerator

Overview
  • Credit to all the vendor packages that made this tool possible.
  • This is a security tool; it's meant for pen-testers and security professionals to perform audits of s3 buckets.

Features
  • Scan via domain(s); you can target a single domain or a list of domains
  • Scan via keyword(s); you can target a single keyword or a list of keywords
  • Scan via AWS credentials; you can target your own AWS account to see which buckets have been exposed
  • Colorized output for visual grep
  • Currently generates over 28,000 permutations per domain and keyword (thanks to @jakewarren and @random-robbie)
  • Punycode support for internationalized domains
  • Strong copyleft license (GPLv3)

Modes
There are two modes that this tool operates in: blackbox and whitebox. Whitebox (or internal) mode is significantly faster than blackbox (external) mode.

Blackbox (external)
In this mode, you are using the permutations list to conduct scans. It will return false positives, and there is no way to link the buckets to an actual AWS account! Do not open issues asking how to do this.
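As a rough idea of how such a permutation list is built and tested, here is a minimal Python sketch; slurp itself is written in Go and ships a much larger permutation set, so treat this only as an illustration of the concept.

# Minimal sketch of blackbox S3 bucket-name permutation and probing (not slurp itself).
import itertools
import urllib.request

AFFIXES = ["backup", "dev", "prod", "static", "assets", "logs", "files"]
SEPARATORS = ["", "-", "."]

def permutations(name):
    """Yield candidate bucket names for a keyword or domain label."""
    yield name
    for affix, sep in itertools.product(AFFIXES, SEPARATORS):
        yield f"{name}{sep}{affix}"
        yield f"{affix}{sep}{name}"

def probe(bucket):
    """Return the HTTP status for the bucket URL: 404 = no such bucket, 200/403 = it exists."""
    url = f"https://{bucket}.s3.amazonaws.com"
    try:
        return urllib.request.urlopen(url, timeout=5).status
    except urllib.error.HTTPError as e:
        return e.code
    except Exception:
        return None

for candidate in permutations("example"):
    status = probe(candidate)
    if status in (200, 403):
        print(candidate, status)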

Domain


Keywords


Whitebox (internal)
In this mode, you are using the AWS API with credentials on a specific account that you own to see what is open. This method pulls all S3 buckets and checks Policy/ACL permissions. Note that I will not provide support on how to use the AWS API. Your credentials should be in ~/.aws/credentials.
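Conceptually, the whitebox check looks something like the following Python/boto3 sketch (slurp itself is Go and uses the AWS SDK; this is only an illustration of the idea):

# Illustrative whitebox check in Python/boto3 (slurp itself is Go): list your own
# buckets and flag ACL grants to the "AllUsers" / "AuthenticatedUsers" groups.
import boto3

PUBLIC_GROUPS = (
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
)

s3 = boto3.client("s3")  # credentials taken from ~/.aws/credentials

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public = [
        grant["Permission"]
        for grant in acl["Grants"]
        if grant["Grantee"].get("URI") in PUBLIC_GROUPS
    ]
    if public:
        print(f"[!] {name} is exposed via ACL: {', '.join(public)}")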

internal


Usage
  • slurp domain <-t|--target> example.com will enumerate the S3 domains for a specific target.
  • slurp keyword <-t|--target> linux,golang,python will enumerate S3 buckets based on those 3 key words.
  • slurp internal performs an internal scan using the AWS API.

Installation
This project uses vgo; you can clone and go build, or download from the Releases section. Please do not open issues about why you cannot build the project; this project builds like any other Go project. If you cannot build it, I strongly suggest you read the Go spec.
Also, the only binaries I'm including are linux/amd64; if you want mac/windows binaries, build them yourself.


Parrot Security 4.7 - Security GNU/Linux Distribution Designed with Cloud Pentesting and IoT Security in Mind


Parrot is a GNU/Linux distribution based on Debian Testing and designed with Security, Development and Privacy in mind.


It includes a full portable laboratory for security and digital forensics experts, but it also includes all you need to develop your own software or protect your privacy while surfing the net.

Documentation

User Guide

Infrastructure Zone

Developer zone

Side projects

                

XSpear - Powerful XSS Scanning And Parameter Analysis Tool



XSpear is an XSS scanner available as a Ruby gem.

Key features
  • Pattern matching based XSS scanning
  • Detect alert, confirm and prompt events on a headless browser (with Selenium)
  • Testing request/response for XSS protection bypass and reflected params
    • Reflected Params
    • Filtered test: event handler / HTML tag / special char
  • Testing Blind XSS (with XSS Hunter, ezXSS, HBXSS, etc.; any URL-based blind test service)
  • Dynamic/Static Analysis
    • Find SQL Error patterns
    • Analysis of security headers (CSP, HSTS, X-Frame-Options, X-XSS-Protection, etc.)
    • Analysis of other headers (Server version, Content-Type, etc.)
  • Scanning from raw files (Burp Suite, ZAP requests)
  • Running XSpear from Ruby code (with the gem library)
  • Table-based CLI reports with filtered rules and tested raw queries (URLs)
  • Testing at selected parameters
  • Supported output formats: cli, json
    • cli: summary, filtered rules (params), raw queries
  • Verbose levels (quiet / normal / raw data)
  • Support for custom callback code to test various attack vectors

Installation
Install it yourself as:
$ gem install XSpear
Or install it yourself as (local file):
$ gem install XSpear-{version}.gem
Add this line to your application's Gemfile:
gem 'XSpear'
And then execute:
$ bundle

Dependency gems
colorize, selenium-webdriver, terminal-table
If these gems were not installed automatically along with XSpear, or the tool behaves abnormally, install them with the following commands:
$ gem install colorize
$ gem install selenium-webdriver
$ gem install terminal-table

Usage on cli
Usage: xspear -u [target] -[options] [value]
[ e.g ]
$ ruby a.rb -u 'https://www.hahwul.com/?q=123' --cookie='role=admin'

[ Options ]
-u, --url=target_URL [required] Target Url
-d, --data=POST Body [optional] POST Method Body data
--headers=HEADERS [optional] Add HTTP Headers
--cookie=COOKIE [optional] Add Cookie
--raw=FILENAME [optional] Load raw file(e.g raw_sample.txt)
-p, --param=PARAM [optional] Test parameters
-b, --BLIND=URL [optional] Add vector of Blind XSS
+ with XSS Hunter, ezXSS, HBXSS, etc...
+ e.g : -b https://hahwul.xss.ht
-t, --threads=NUMBER [optional] threads, default: 10
-o, --output=FILENAME [optional] Save JSON Result
-v, --verbose=1~3 [optional] Show log depth
+ Default value: 2
+ v=1 : quiet mode
+ v=2 : show scanning log
+ v=3 : show detail log(req/res)
-h, --help Prints this help
--version Show XSpear version
--update Update with online

Result types
  • (I)NFO: Get information (e.g. SQL error, filtered rule, reflected params, etc.)
  • (V)ULN: Vulnerable XSS, checked alert/prompt/confirm with Selenium
  • (L)OW: Low level issue
  • (M)EDIUM: medium level issue
  • (H)IGH: high level issue

Case by Case
Scanning XSS
$ xspear -u "http://testphp.vulnweb.com/search.php?test=query" -d "searchFor=yy"
json output
$ xspear -u "http://testphp.vulnweb.com/search.php?test=query" -d "searchFor=yy" -o json -v 1
detail log
$ xspear -u "http://testphp.vulnweb.com/search.php?test=query" -d "searchFor=yy" -v 3
set thread
$ xspear -u "http://testphp.vulnweb.com/search.php?test=query" -t 30
testing at selected parameters
$ xspear -u "http://testphp.vulnweb.com/search.php?test=query&cat=123&ppl=1fhhahwul" -p cat,test
testing blind xss
$ xspear -u "http://testphp.vulnweb.com/search.php?test=query" -b "https://hahwul.xss.ht"
etc...

Sample log
Scanning XSS
xspear -u "http://testphp.vulnweb.com/listproducts.php?cat=z"
[ XSpear ASCII-art banner (by HAHWUL), v1.0.7 ]
[*] creating a test query.
[*] test query generation is complete. [149 query]
[*] starting test and analysis. [10 threads]
[I] [00:37:34] reflected 'XsPeaR
[-] [00:37:34] 'cat' Not reflected |XsPeaR
[I] [00:37:34] [param: cat][Found SQL Error Pattern]
[-] [00:37:34] 'STATIC' not reflected
[I] [00:37:34] reflected "XsPeaR
[-] [00:37:34] 'cat' Not reflected ;XsPeaR
[I] [00:37:34] reflected `XsPeaR
...snip...
[H] [00:37:44] reflected "><iframe/src=JavaScriPt:alert(45)>[param: cat][reflected XSS Code]
[-] [00:37:44] 'cat' not reflected <img/src onerror=alert(45)>
[-] [00:37:44] 'cat' not reflected <svg/onload=alert(45)>
[-] [00:37:49] 'cat' not found alert/prompt/confirm event '"><svg/onload=alert(45)>
[-] [00:37:49] 'cat' not found alert/prompt/confirm event '"><svg/onload=alert(45)>
[-] [00:37:50] 'cat' not found alert/prompt/confirm event <xmp><p title="</xmp><svg/onload=alert(45)>">
[-] [00:37:51] 'cat' not found alert/prompt/confirm event '"><svg/onload=alert(45)>
[V] [00:37:51] found alert/prompt/confirm (45) in selenium!! <script>alert(45)</script>
=> [param: cat][triggered <script>alert(45)</script>]
[V] [00:37:51] found alert/prompt/confirm (45) in selenium!! '"><svg/onload=alert(45)>
=> [param: cat][triggered <svg/onload=alert(45)>]
[*] finish scan. the report is being generated..
+----+-------+------------------+--------+-------+-------------------------------------+--------------------------------------------+
| [ XSpear report ] |
| http://testphp.vulnweb.com/listproducts.php?cat=z |
| 2019-07-24 00:37:33 +0900 ~ 2019-07-24 00:37:51 +0900 Found 12 issues. |
+----+-------+------------------+--------+-------+-------------------------------------+--------------------------------------------+
| NO | TYPE | ISSUE | METHOD | PARAM | PAYLOAD | DESCRIPTION |
+----+-------+------------------+--------+-------+-------------------------------------+--------------------------------------------+
| 0 | INFO | DYNAMIC ANALYSIS | GET | cat | XsPeaR" | Found SQL Error Pattern |
| 1 | INFO | STATIC ANALYSIS | GET | - | original query | Found Server: nginx/1.4.1 |
| 2 | INFO | STATIC ANALYSIS | GET | - | original query | Not set HSTS |
| 3 | INFO | STATIC ANALYSIS | GET | - | original query | Content-Type: text/html |
| 4 | LOW | STATIC ANALYSIS | GET | - | original query | Not Set X-Frame-Options |
| 5 | MIDUM | STATIC ANALYSIS | GET | - | original query | Not Set CSP |
| 6 | INFO | REFLECTED | GET | cat | rEfe6 | reflected parameter |
| 7 | INFO | FILERD RULE | GET | cat | onhwul=64 | not filtered event handler on{any} pattern |
| 8 | HIGH | XSS | GET | cat | <script>alert(45)</script> | reflected XSS Code |
| 9 | HIGH | XSS | GET | cat | "><iframe/src=JavaScriPt:alert(45)> | reflected XSS Code |
| 10 | VULN | XSS | GET | cat | <script>alert(45)</script> | triggered <script>alert(45)</script> |
| 11 | VULN | XSS | GET | cat | '"><svg/onload=alert(45)> | triggered <svg/onload=alert(45)> |
+----+-------+------------------+--------+-------+-------------------------------------+--------------------------------------------+
< Available Objects >
[cat] param
+ Available Special Char: ' \ ` ) [ } : . { ] $
+ Available Event Handler: "onActivate","onBeforeActivate","onAfterUpdate","onAbort","onAfterPrint","onBeforeCopy","onBeforeCut","onBeforePaste","onBlur","onBeforePrint","onBeforeDeactivate","onBeforeUpdate","onBeforeEditFocus","onBegin","onBeforeUnload","onBounce","onDataSetChanged","onCellChange","onClick","onDataAvailable","onChange","onContextMenu","onCopy","onControlSelect","onDataSetComplete","onCut","onDragStart","onDragEnter","onDragOver","onDblClick","onDragEnd","onDrop","onDeactivate","onDragLeave","onDrag","onDragDrop","onHashChange","onFocusOut","onFilterChange","onEnd","onFocus","onHelp","onErrorUpdate","onFocusIn","onFinish","onError","onLayoutComplete","onKeyDown","onKeyUp","onMediaError","onLoad","onMediaComplete","onInput","onKeyPress","onloadstart","onLoseCapture","onMouseOut","onMouseDown","onMouseWheel","onMove","onMouseLeave","onMessage","onMouseEnter","onMouseMove","onMouseOver","onMouseUp","onPropertyChange ","onMoveStart","onProgress","onPopState","onPaste","onOnline","onMoveEnd","onPause","onOutOfSync","onOffline","onReverse","onResize","onRedo","onRowsEnter","onRepeat","onReset","onResizeEnd","onResizeStart","onReadyStateChange","onResume","onRowInserted","onStart","onScroll","onRowExit","onSelectionChange","onSeek","onStop","onRowDelete","onSelectStart","onSelect","ontouchstart","ontouchend","onTrackChange","onSyncRestored","onTimeError","onUndo","onURLFlip","onStorage","onUnload","onSubmit","ontouchmove"
+ Available HTML Tag: "meta","video","iframe","embed","script","audio","svg","object","img","frameset","applet","style","frame"
+ Available Useful Code: "document.cookie","document.location","window.location"
< Raw Query >
[0] http://testphp.vulnweb.com/listproducts.php?cat=z?cat=zXsPeaR%22
[1] http://testphp.vulnweb.com/listproducts.php?cat=z?-
[2] http://testphp.vulnweb.com/listproducts.php?cat=z?-
[3] http://testphp.vulnweb.com/listproducts.php?cat=z?-
[4] http://testphp.vulnweb.com/listproducts.php?cat=z?-
[5] http://testphp.vulnweb.com/listproducts.php?cat=z?-
[6] http://testphp.vulnweb.com/listproducts.php?cat=z?cat=zrEfe6
[7] http://testphp.vulnweb.com/listproducts.php?cat=z?cat=z%5C%22%3E%3Cxspear+onhwul%3D64%3E
[8] http://testphp.vulnweb.com/listproducts.php?cat=z?cat=z%22%3E%3Cscript%3Ealert%2845%29%3C%2Fscript%3E
[9] http://testphp.vulnweb.com/listproducts.php?cat=z?cat=z%22%3E%3Ciframe%2Fsrc%3DJavaScriPt%3Aalert%2845%29%3E
[10] http://testphp.vulnweb.com/listproducts.php?cat=z?cat=z%22%3E%3Cscript%3Ealert%2845%29%3C%2Fscript%3E
[11] http://testphp.vulnweb.com/listproducts.php?cat=z?cat=z%27%22%3E%3Csvg%2Fonload%3Dalert%2845%29%3E
to JSON
$ xspear -u "http://testphp.vulnweb.com/search.php?test=query" -d "searchFor=yy" -o json -v 1
{"starttime":"2019-07-17 01:02:13 +0900","endtime":"2019-07-17 01:02:59 +0900","issue_count":24,"issue_list":[{"id":0,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yy%3CXsPeaR","description":"not filtered \u001b[0;34;49m<\u001b[0m"},{"id":1,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR%27","description":"not filtered \u001b[0;34;49m'\u001b[0m"},{"id":2,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR%3E","description":"not filtered \u001b[0;34;49m>\u001b[0m"},{"id":3,"type":"INFO","issue":"REFLECTED","payload":"searchFor=yyrEfe6","description":"reflected parameter"},{"id":4,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR%22","description":"not filtered \u001b[0;34;49m\"\u001b[0m"},{"id":5,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR%60","description":"not filtered \u001b[0;34;49m`\u001 b[0m"},{"id":6,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR%3B","description":"not filtered \u001b[0;34;49m;\u001b[0m"},{"id":7,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR%28","description":"not filtered \u001b[0;34;49m(\u001b[0m"},{"id":8,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR%7C","description":"not filtered \u001b[0;34;49m|\u001b[0m"},{"id":9,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR%29","description":"not filtered \u001b[0;34;49m)\u001b[0m"},{"id":10,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR%7B","description":"not filtered \u001b[0;34;49m{\u001b[0m"},{"id":11,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR%5B","description":"not filtered \u001b[0;34;49m[\u001b[0m"},{"id":12,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR%5D","description":"not filtered \u001b[0;34;49m]\u001b[0m"},{"id":13,"type":"INFO","issue":"FILERD RULE","pay load":"searchFor=yyXsPeaR%7D","description":"not filtered \u001b[0;34;49m}\u001b[0m"},{"id":14,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR%3A","description":"not filtered \u001b[0;34;49m:\u001b[0m"},{"id":15,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR%2B","description":"not filtered \u001b[0;34;49m+\u001b[0m"},{"id":16,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR.","description":"not filtered \u001b[0;34;49m.\u001b[0m"},{"id":17,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR-","description":"not filtered \u001b[0;34;49m-\u001b[0m"},{"id":18,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR%2C","description":"not filtered \u001b[0;34;49m,\u001b[0m"},{"id":19,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR%3D","description":"not filtered \u001b[0;34;49m=\u001b[0m"},{"id":20,"type":"HIGH","issue":"XSS","payload":"searchFor=yy%3Cimg%2Fsrc+onerror%3Dalert%2845%29%3E","des cription":"reflected \u001b[0;31;49mXSS Code\u001b[0m"},{"id":21,"type":"HIGH","issue":"XSS","payload":"searchFor=yy%3Csvg%2Fonload%3Dalert%2845%29%3E","description":"reflected \u001b[0;31;49mXSS Code\u001b[0m"},{"id":22,"type":"HIGH","issue":"XSS","payload":"searchFor=yy%22%3E%3Cscript%3Ealert%2845%29%3C%2Fscript%3E","description":"reflected \u001b[0;31;49mXSS Code\u001b[0m"},{"id":23,"type":"INFO","issue":"FILERD RULE","payload":"searchFor=yyXsPeaR%24","description":"not filtered \u001b[0;34;49m$\u001b[0m"}]}

Usage on ruby code (gem library)
require 'XSpear'

# Set options
options = {}
options['thread'] = 30
options['cookie'] = "data=123"
options['blind'] = "https://hahwul.xss.ht"
options['output'] = 'json'

# Create XSpear object with url, options
s = XspearScan.new "https://www.hahwul.com?target_url", options

# Scanning
s.run
result = s.report.to_json
r = JSON.parse result

Add Scanning Module
1) Add makeQueryPattern
makeQueryPattern('type', 'query,', 'pattern', 'category', "description", "callback function")
# type: f(iltered?) r(eflected?) x(ss?)
# category i(nfo) v(uln) l(ow) m(edium) h(igh)

# e.g
# makeQueryPattern('f', 'XsPeaR,', 'XsPeaR,', 'i', "not filtered "+",".blue, CallbackStringMatch)
2) If you need another callback, write a callback class that overrides ScanCallbackFunc, e.g.
class CallbackStringMatch < ScanCallbackFunc
  def run
    if @response.body.include? @query
      [true, "reflected #{@query}"]
    else
      [false, "not reflected #{@query}"]
    end
  end
end
Parent class (ScanCallbackFunc):
class ScanCallbackFunc
  def initialize(url, method, query, response)
    @url = url
    @method = method
    @query = query
    @response = response
    # self.run
  end

  def run
    # override
  end
end
Common Callback Class
  • CallbackXSSSelenium
  • CallbackErrorPatternMatch
  • CallbackCheckHeaders
  • CallbackStringMatch
  • CallbackNotAdded etc...

Update
For normal users:
$ gem update XSpear
For developers (soft update):
$ git pull -v
For developers (hard update):
$ git reset --hard HEAD; git pull -v

Development
After checking out the repo, run bin/setup to install dependencies. Then, run rake spec to run the tests. You can also run bin/console for an interactive prompt that will allow you to experiment.
To install this gem onto your local machine, run bundle exec rake install. To release a new version, update the version number in version.rb, and then run bundle exec rake release, which will create a git tag for the version, push git commits and tags, and push the .gem file to rubygems.org.

Contributing
Bug reports and pull requests are welcome on GitHub at https://github.com/hahwul/XSpear. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant code of conduct.

Code of Conduct
Everyone interacting in the XSpear project’s codebases, issue trackers, chat rooms and mailing lists is expected to follow the code of conduct.

ScreenShot




W13Scan - Passive Security Scanner


W13scan is a proxy-based web scanner that runs on Linux/Windows/Mac systems.

Begin
Demo


Pure Python, requires Python version >= 3.
If you find it useful, please star the project to encourage the author.

Install
pip3 install w13scan

Usage
## help
w13scan -h

## running
w13scan -s 127.0.0.1:7778

HTTPS Support
If you want w13scan to support HTTPS, similar to Burp Suite, you first need to set up the proxy server (default 127.0.0.1:7778), then go to http://w13scan.ca to download the root certificate and trust it.

Development
from W13SCAN.api import Scanner

scanner = Scanner(threads=20)
scanner.put("http://example.com/?post=1")
scanner.run()
By importing the w13scan package, you can quickly create a scanner.
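Building on that, a batch of URLs (for example, exported from a crawler) can be queued before starting the scan. The sketch below only uses the Scanner methods shown above; urls.txt is a placeholder file with one URL per line.

from W13SCAN.api import Scanner

scanner = Scanner(threads=20)

# Queue several targets before running; put() is the same entry point shown above.
with open("urls.txt") as f:          # urls.txt is a hypothetical file of URLs, one per line
    for line in f:
        url = line.strip()
        if url:
            scanner.put(url)

scanner.run()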


MSNM-S - Multivariate Statistical Network Monitoring-Sensor

MSNM-S (Multivariate Statistical Network Monitoring-Sensor) shows the practical suitability of the approaches found in the PCA-MSNM and Hierarchical PCA-MSNM works. The first presents the MSNM approach, a multivariate statistical methodology for network anomaly detection, while the second applies it to hierarchical and structured network systems. The main idea behind these works is the use of multivariate statistical techniques to generate useful information in the form of two statistics. This lightweight information flows from lower to higher levels in a network hierarchy. This way, the root sensor (for example, a border router) receives all the statistical information and is able to compute its own statistics (Q, D). By inspecting these statistics, a security analyst can determine whether anomalous events are happening when some of the statistic values are above certain control limits.
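For intuition, D and Q are the two classical PCA monitoring statistics: D is Hotelling's T² (distance within the model subspace) and Q is the squared prediction error (distance from the subspace). A minimal sketch of how they can be computed follows; it is an illustration with scikit-learn, not MSNM-S's own implementation.

# Illustrative computation of the D (T^2) and Q (SPE) statistics with PCA
# (a sketch only; MSNM-S has its own implementation).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                  # stand-in for feature-as-a-counter observations

pca = PCA(n_components=3).fit(X)
scores = pca.transform(X)                       # projections onto the retained components
residuals = X - pca.inverse_transform(scores)   # part of X not explained by the model

# D statistic: Hotelling's T^2 over the retained components (scores scaled by their variance).
D = np.sum(scores ** 2 / pca.explained_variance_, axis=1)

# Q statistic: squared prediction error, i.e. distance from the PCA subspace.
Q = np.sum(residuals ** 2, axis=1)

# Observations whose D or Q values exceed their control limits would be flagged as anomalous.
print(D[:5], Q[:5])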

[A pre-print release of the work is available at https://arxiv.org/abs/1907.13612]


MSNM-S is conceived to be extremely scalable and aseptic because just two parameters are sent between levels or devices in the monitored network or system. Additionally, the MSNM-S is able to manage multiple, heterogeneous data sources at each monitored device thanks to the FCParser (Feature as a Counter Parser) feature engineering approach.


Installation

Requirements
MSNM-S runs with Python 2.7 and has been successfully tested on Ubuntu 16.04 and above. The following dependencies also have to be installed.

How to install
Creating a Python execution environment is probably the best way to run the application, so I recommend you create one before installing the requirements. An Anaconda environment can help here; if you decide to use it, run the following commands:
$ conda create -n py27 python=2.7
$ conda activate py27
Then, running the following command will install everything needed:
(py27) $ pip install -r requirements.txt

How to run an example
Please see the instructions in examples or download the pre-configured VM at MSNM-S-UBUNTU. We recommend you use the VM. Remember to pull the repository to get the MSNM-S project updated. The following are the necessary steps to run the pre-configured experiment in the VM:
Running the MSNM-Ss (backend)
Open a terminal window and activate the netflow daemon and collector.
$ cd ~/msnm-sensor/scripts/netflow/
$ sudo ./activateNetflow.sh (pass: msnm1234)
Wait 5 minutes to gather netflow records, then run and deploy the MSNM-Ss in the examples/scenario_4 example:
$ cd ~/msnm-sensor/scripts/
$ conda activate py27
$ ./start_experiment.sh ../examples/scenario_4/
$ ps -ef | grep msnmsensor (just to check if all the four MSNM-Ss are running)
$ tail -500f ~/msnm-sensor/examples/scenario_4/borderRouter/logs/msnm.log (another way to see how the MSNM-S is working. Replace the name of the MSNM-S if you want to see the others.)
Running the dashboard (frontend):
Open a new terminal window.
$ cd ~/msnm-sensor/dashboard/
$ conda activate msnm-dashboard
$ ln -s ../examples examples
$ python manage.py runserver
Browse to http://localhost:8000

Authors and license
MSNM Sensor - GNU GPL - Roberto Magán-Carrión, José Camacho and Gabriel Maciá-Fernández

Usbrip - Simple Command Line Forensics Tool For Tracking USB Device Artifacts (History Of USB Events) On GNU/Linux


usbrip (derived from "USB Ripper", not "USB R.I.P.") is an open source forensics tool with CLI interface that lets you keep track of USB device artifacts (aka USB event history, "Connected" and "Disconnected" events) on Linux machines.

Description
usbrip is a small piece of software written in pure Python 3 (using some external modules though, see Dependencies/PIP) which parses Linux log files (/var/log/syslog* or /var/log/messages* depending on the distro) for constructing USB event history tables. Such tables may contain the following columns: "Connected" (date & time), "User", "VID" (vendor ID), "PID" (product ID), "Product", "Manufacturer", "Serial Number", "Port" and "Disconnected" (date & time).
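In essence, building those tables means pairing the kernel's "New USB device found" and "USB disconnect" messages found in the logs. Below is a simplified Python sketch of that idea, for illustration only; usbrip's real parser extracts many more fields and handles rotated/compressed logs.

# Simplified sketch of the parsing idea (not usbrip's actual parser).
import re

CONNECT = re.compile(
    r"^(?P<date>\w{3}\s+\d+\s[\d:]{8}).*usb\s(?P<port>[\d.-]+): New USB device found, "
    r"idVendor=(?P<vid>\w{4}), idProduct=(?P<pid>\w{4})"
)
DISCONNECT = re.compile(
    r"^(?P<date>\w{3}\s+\d+\s[\d:]{8}).*usb\s(?P<port>[\d.-]+): USB disconnect"
)

events = []
with open("/var/log/syslog") as log:
    for line in log:
        if m := CONNECT.search(line):
            events.append({"connected": m["date"], "port": m["port"],
                           "vid": m["vid"], "pid": m["pid"], "disconnected": None})
        elif m := DISCONNECT.search(line):
            for e in reversed(events):           # close the most recent open event on this port
                if e["port"] == m["port"] and e["disconnected"] is None:
                    e["disconnected"] = m["date"]
                    break

for e in events:
    print(e)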
Besides, it also can:
  • export gathered information as a JSON dump (and open such dumps, of course);
  • generate a list of authorized (trusted) USB devices as a JSON (call it auth.json);
  • search for "violation events" based on the auth.json: show (or generate another JSON with) USB devices that do appear in history and do NOT appear in the auth.json;
  • *when installed with -s flag* create crypted storages (7zip archives) to automatically backup and accumulate USB events with the help of crontab scheduler;
  • search additional details about a specific USB device based on its VID and/or PID.
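The "violation events" check mentioned above essentially boils down to a membership test of observed devices against the trusted list, per chosen attribute. A rough sketch follows; the auth.json layout shown here is an assumption for illustration, not usbrip's exact schema.

# Rough sketch of a violations check (illustration only; the auth.json schema and
# event shape below are assumed, not copied from usbrip).
import json

def violations(events, auth_file, attributes=("vid", "pid")):
    """Return events whose chosen attributes are not in the trusted list."""
    with open(auth_file) as f:
        auth = json.load(f)                      # assumed form: {"vid": [...], "pid": [...]}
    return [
        e for e in events
        if any(e.get(attr) not in auth.get(attr, []) for attr in attributes)
    ]

history = [
    {"vid": "0781", "pid": "5580", "prod": "Ultra"},
    {"vid": "dead", "pid": "beef", "prod": "Unknown stick"},
]
print(violations(history, "auth.json"))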

Quick Start
usbrip is available for download and installation at PyPI:
$ pip3 install usbrip

Screenshots



Git Clone
For simplicity, let's agree that all the commands prefixed with ~/usbrip$ are executed in the ~/usbrip directory, which is created as a result of git clone:
~$ git clone https://github.com/snovvcrash/usbrip.git usbrip && cd usbrip
~/usbrip$

Dependencies
usbrip works only with the unmodified structure of system log files, so, unfortunately, it won't be able to parse USB history if you change the format of the syslogs (with syslog-ng or rsyslog, for example). That's also why the timestamps of the "Connected" and "Disconnected" fields don't include the year. Keep that in mind.

DEB Packages
  • python3.6 (or newer) interpreter
  • python3-venv
  • p7zip-full (used by storages module)
~$ sudo apt install -y python3-venv p7zip-full

PIP Packages
usbrip makes use of the following external modules:

Portable
To resolve Python dependencies manually (it's not actually necessary because pip or setup.py can automate the process, see Installation), create a virtual environment (optional) and run pip from within:
~/usbrip$ python3 -m venv venv && source venv/bin/activate
(venv) ~/usbrip$ pip install -r requirements.txt
Or let the pipenv one-liner do all the dirty work for you:
~/usbrip$ pipenv install && pipenv shell
After that you can run usbrip portably:
(venv) ~/usbrip$ python -m usbrip -h
Or
(venv) ~/usbrip$ python __main__.py -h

Installation
There are two ways to install usbrip into the system: pip or setup.py.

pip or setup.py
First of all, usbrip is pip installable. This means that after git cloning the repo you can simply fire up the pip installation process and after that run usbrip from anywhere in your terminal like so:
~/usbrip$ python3 -m venv venv && source venv/bin/activate
(venv) ~/usbrip$ pip install .

(venv) ~/usbrip$ usbrip -h
Or if you want to resolve Python dependencies locally (without bothering PyPI), use setup.py:
~/usbrip$ python3 -m venv venv && source venv/bin/activate
(venv) ~/usbrip$ python setup.py install

(venv) ~/usbrip$ usbrip -h
Note: you'd likely want to run the installation process while the Python virtual environment is active (like it is shown above).

install.sh
Secondly, usbrip can also be installed into the system with the ./installers/install.sh script.
When using the ./installers/install.sh some extra features become available:
  • the virtual environment is created automatically;
  • the storage module becomes available: you can set a crontab job to backup USB events on a schedule (the example of crontab jobs can be found in usbrip/cron/usbrip.cron).
Warning: if you are using the crontab scheduling, you want to configure the cron job with sudo crontab -e in order to force the storage update submodule run as root as well as protect the passwords of the USB event storages. The storage passwords are kept in /var/opt/usbrip/usbrip.ini.
The ./installers/uninstall.sh script removes all the installation artifacts from your system.
To install usbrip use:
~/usbrip$ chmod +x ./installers/install.sh
~/usbrip$ sudo -H ./installers/install.sh [-l/--local] [-s/--storages]
~/usbrip$ cd

~$ usbrip -h
  • When -l switch is enabled, Python dependencies are resolved from local .tar packages (./3rdPartyTools/) instead of PyPI.
  • When -s switch is enabled, not only the usbrip project is installed, but also the list of trusted USB devices, history and violations storages are created.
Note: when using -s option during installation, make sure that system logs do contain at least one external USB device entry. It is a necessary condition for usbrip to successfully create the list of trusted devices (and as a result, successfully create the violations storage).
After the installation completes, feel free to remove the usbrip folder.

Paths
When installed, usbrip uses the following paths:
  • /opt/usbrip/ — project's main directory;
  • /var/opt/usbrip/usbrip.ini — usbrip configuration file: keeps passwords for 7zip storages;
  • /var/opt/usbrip/storage/ — USB event storages: history.7z and violations.7z (created during the installation process);
  • /var/opt/usbrip/log/ — usbrip logs (recommended to log usbrip activity when using crontab, see usbrip/cron/usbrip.cron);
  • /var/opt/usbrip/trusted/ — list of trusted USB devices (created during the installation process);
  • /usr/local/bin/usbrip — symlink to the /opt/usbrip/venv/bin/usbrip script.

cron
Cron jobs can be set as follows:
~/usbrip$ sudo crontab -l > tmpcron && echo "" >> tmpcron
~/usbrip$ cat usbrip/cron/usbrip.cron | tee -a tmpcron
~/usbrip$ sudo crontab tmpcron
~/usbrip$ rm tmpcron

uninstall.sh
To uninstall usbrip use:
~/usbrip$ chmod +x ./installers/uninstall.sh
~/usbrip$ sudo ./installers/uninstall.sh [-a/--all]
  • When -a switch is enabled, not only the usbrip project directory is deleted, but also all the storages and usbrip logs are deleted too.
And don't forget to remove the cron job.

Usage

Synopsis
# ---------- BANNER ----------

$ usbrip banner
Get usbrip banner.

# ---------- EVENTS ----------

$ usbrip events history [-t | -l] [-e] [-n <NUMBER_OF_EVENTS>] [-d <DATE> [<DATE> ...]] [--user <USER> [<USER> ...]] [--vid <VID> [<VID> ...]] [--pid <PID> [<PID> ...]] [--prod <PROD> [<PROD> ...]] [--manufact <MANUFACT> [<MANUFACT> ...]] [--serial <SERIAL> [<SERIAL> ...]] [--port <PORT> [<PORT> ...]] [-c <COLUMN> [<COLUMN> ...]] [-f <FILE> [<FILE> ...]] [-q] [--debug]
Get USB event history.

$ usbrip events open <DUMP.JSON> [-t | -l] [-e] [-n <NUMBER_OF_EVENTS>] [-d <DATE> [<DATE> ...]] [--user <USER> [<USER> ...]] [--vid <VID> [<VID> ...]] [--pid <PID> [<PID> ...]] [--prod <PROD> [<PROD> ...]] [--manufact <MANUFACT> [<MANUFACT> ...]] [--serial <SERIAL> [<SERIAL> ...]] [--port <PORT> [<PORT> ...]] [-c <COLUMN> [<COLUMN> ...]] [-f <FILE> [<FILE> ...]] [-q] [--debug]
Open USB event dump.

$ usbrip events gen_auth <OUT_AUTH.JSON> [-a <ATTRIBUTE> [<ATTRIBUTE> ...]] [-e] [-n <NUMBER_OF_EVENTS>] [-d <DATE> [<DATE> ...]] [--user <USER> [<USER> ...]] [--vid <VID> [<VID> ...]] [--pid <PID> [<PID> ...]] [--prod <PROD> [<PROD> ...]] [--manufact <MANUFACT> [<MANUFACT> ...]] [--serial <SERIAL> [<SERIAL> ...]] [--port <PORT> [<PORT> ...]] [-f <FILE> [<FILE> ...]] [-q] [--debug]
Generate a list of trusted (authorized) USB devices.

$ usbrip events violations <IN_AUTH.JSON> [-a <ATTRIBUTE> [<ATTRIBUTE> ...]] [-t | -l] [-e] [-n <NUMBER_OF_EVENTS>] [-d <DATE> [<DATE> ...]] [--user <USER> [<USER> ...]] [--vid <VID> [<VID> ...]] [--pid <PID> [<PID> ...]] [--prod <PROD> [<PROD> ...]] [--manufact <MANUFACT> [<MANUFACT> ...]] [--serial <SERIAL> [<SERIAL> ...]] [--port <PORT> [<PORT> ...]] [-c <COLUMN> [<COLUMN> ...]] [-f <FILE> [<FILE> ...]] [-q] [--debug]
Get USB violation events based on the list of trusted devices.

# ---------- STORAGE ----------

$ usbrip storage list <STORAGE_TYPE> [-q] [--debug]
List contents of the selected storage (7zip archive). STORAGE_TYPE is "history" or "violations".

$ usbrip storage open <STORAGE_TYPE> [-t | -l] [-e] [-n <NUMBER_OF_EVENTS>] [-d <DATE> [<DATE> ...]] [--user <USER> [<USER> ...]] [--vid <VID> [<VID> ...]] [--pid <PID> [<PID> ...]] [--prod <PROD> [<PROD> ...]] [--manufact <MANUFACT> [<MANUFACT> ...]] [--serial <SERIAL> [<SERIAL> ...]] [--port <PORT> [<PORT> ...]] [-c <COLUMN> [<COLUMN> ...]] [-q] [--debug]
Open the selected storage (7zip archive). Behaves similarly to the EVENTS OPEN submodule.

$ usbrip storage update <STORAGE_TYPE> [-a <ATTRIBUTE> [<ATTRIBUTE> ...]] [-e] [-n <NUMBER_OF_EVENTS>] [-d <DATE> [<DATE> ...]] [--user <USER> [<USER> ...]] [--vid <VID> [<VID> ...]] [--pid <PID> [<PID> ...]] [--prod <PROD> [<PROD> ...]] [--manufact <MANUFACT> [<MANUFACT> ...]] [--serial <SERIAL> [<SERIAL> ...]] [--port <PORT> [<PORT> ...]] [--lvl <COMPRESSION_LEVEL>] [-q] [--debug]
Update storage — add USB events to the existing storage (7zip archive). COMPRESSION_LEVEL is a number in [0..9].

$ usbrip storage create <STORAGE_TYPE> [-a <ATTRIBUTE> [<ATTRIBUTE> ...]] [-e] [-n <NUMBER_OF_EVENTS>] [-d <DATE> [<DATE> ...]] [--user <USER> [<USER> ...]] [--vid <VID> [<VID> ...]] [--pid <PID> [<PID> ...]] [--prod <PROD> [<PROD> ...]] [--manufact <MANUFACT> [<MANUFACT> ...]] [--serial <SERIAL> [<SERIAL> ...]] [--port <PORT> [<PORT> ...]] [--lvl <COMPRESSION_LEVEL>] [-q] [--debug]
Create storage — create 7zip archive and add USB events to it according to the selected options.

$ usbrip storage passwd <STORAGE_TYPE> [--lvl <COMPRESSION_LEVEL>] [-q] [--debug]
Change password of the existing storage.

# ---------- IDs ----------

$ usbrip ids search [--vid <VID>] [--pid <PID>] [--offline] [-q] [--debug]
Get extra details about a specific USB device by its <VID> and/or <PID> from the USB ID database.

$ usbrip ids download [-q] [--debug]
Update (download) the USB ID database.

Help
To get a list of module names use:
$ usbrip -h
To get a list of submodule names for a specific module use:
$ usbrip <module> -h
To get a list of all switches for a specific submodule use:
$ usbrip <module> <submodule> -h

Examples
  • Show the event history of all USB devices, suppressing banner output, info messages and user interaction (-q, --quiet), represented as a list (-l, --list) with the latest 100 entries (-n NUMBER, --number NUMBER):
    $ usbrip events history -ql -n 100
  • Show the event history of the external USB devices (-e, --external, which were actually disconnected) represented as a table (-t, --table) containing "Connected", "VID", "PID", "Disconnected" and "Serial Number" columns (-c COLUMN [COLUMN], --column COLUMN [COLUMN]) filtered by date (-d DATE [DATE ...], --date DATE [DATE ...]) with logs taken from the outer files (-f FILE [FILE ...], --file FILE [FILE ...]):
    $ usbrip events history -et -c conn vid pid disconn serial -d "Dec  9" "Dec 10" -f /var/log/syslog.1 /var/log/syslog.2.gz
  • Build the event history of all USB devices and redirect the output to a file for further analysis. When the output stream is NOT a terminal stdout (| or > for example) there will be no ANSI escape characters (color) in the output, so feel free to use it that way. Also notice that usbrip uses some Unicode symbols, so it would be nice to convert the resulting file to UTF-8 encoding (with enconv, for example) as well as change newline characters to Windows style for portability (with awk, for example):
    usbrip events history -t | awk '{ sub("$", "\r"); print }' > usbrip.out && enconv -x UTF8 usbrip.out
    Remark: you can always get rid of the escape characters by yourself even if you have already got the output to stdout. To do that just copy the output data to usbrip.out and add one more awk instruction:
    awk '{ sub("$", "\r"); gsub("\\x1B\\[[0-?]*[ -/]*[@-~]", ""); print }' usbrip.out && enconv -x UTF8 usbrip.out
  • Generate a list of trusted USB devices as a JSON-file (trusted/auth.json) with "VID" and "PID" attributes containing the first three devices connected on September 26:
    $ usbrip events gen_auth trusted/auth.json -a vid pid -n 3 -d "Sep 26"
  • Search the event history of the external USB devices for violations based on the list of trusted USB devices (trusted/auth.json) by "PID" attribute, restrict resulting events to those which have "Bob" as a user, "EvilUSBManufacturer" as a manufacturer, "1234567890" as a serial number and represent the output as a table with "Connected", "VID" and "PID" columns:
    $ usbrip events violations trusted/auth.json -a pid -et --user Bob --manufact EvilUSBManufacturer --serial 1234567890 -c conn vid pid
  • Search for details about a specific USB device by its VID (--vid VID) and PID (--pid PID):
    $ usbrip ids search --vid 0781 --pid 5580
  • Download the latest version of usb_ids/usb.ids database (the source is here):
    $ usbrip ids download

Credits & References



MemGuard - Secure Software Enclave For Storage Of Sensitive Information In Memory


Secure software enclave for storage of sensitive information in memory.

This package attempts to reduce the likelihood of sensitive data being exposed. It supports all major operating systems and is written in pure Go.

Features
  • Sensitive data is encrypted and authenticated in memory using xSalsa20 and Poly1305 respectively. The scheme also defends against cold-boot attacks.
  • Memory allocation bypasses the language runtime by using system calls to query the kernel for resources directly. This avoids interference from the garbage-collector.
  • Buffers that store plaintext data are fortified with guard pages and canary values to detect spurious accesses and overflows.
  • Effort is taken to prevent sensitive data from touching the disk. This includes locking memory to prevent swapping and handling core dumps.
  • Kernel-level immutability is implemented so that attempted modification of protected regions results in an access violation.
  • Multiple endpoints provide session purging and safe termination capabilities as well as signal handling to prevent remnant data being left behind.
  • Side-channel attacks are mitigated against by making sure that the copying and comparison of data is done in constant-time.
  • Accidental memory leaks are mitigated against by harnessing the garbage-collector to automatically destroy containers that have become unreachable.
Some features were inspired by libsodium, so credits to them.
Full documentation and a complete overview of the API can be found here. Interesting and useful code samples can be found within the examples subpackage.

Installation
$ go get github.com/awnumar/memguard
We strongly encourage you to pin a specific version for a clean and reliable build. This can be accomplished using modules.

Contributing
  • Using the package and identifying points of friction.
  • Reading the source code and looking for improvements.
  • Adding interesting and useful program samples to ./examples.
  • Developing Proof-of-Concept attacks and mitigations.
  • Improving compatibility with more kernels and architectures.
  • Implementing kernel-specific and cpu-specific protections.
  • Writing useful security and crypto libraries that utilise memguard.
  • Submitting performance improvements or benchmarking code.
Issues are for reporting bugs and for discussion on proposals. Pull requests should be made against master.

Future goals
  • Ability to stream data to and from encrypted enclave objects.
  • Catch segmentation faults to wipe memory before crashing.
  • Evaluate and improve the strategies in place, particularly for Coffer objects.
  • Formalise a threat model and evaluate our performance in regards to it.
  • Use lessons learned to apply patches upstream to the Go language and runtime.


HELK - The Hunting ELK


The Hunting ELK or simply the HELK is one of the first open source hunt platforms with advanced analytics capabilities such as SQL declarative language, graphing, structured streaming, and even machine learning via Jupyter notebooks and Apache Spark over an ELK stack. This project was developed primarily for research, but due to its flexible design and core components, it can be deployed in larger environments with the right configurations and scalable infrastructure.
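For example, once the stack is up, a Jupyter notebook can pull events from Elasticsearch into a Spark DataFrame through the ES-Hadoop connector and hunt over them with SQL. The sketch below is a hedged illustration: the index pattern, node address and field name are placeholders, and the elasticsearch-hadoop jar must be on the Spark classpath.

# Hedged sketch: read an Elasticsearch index into Spark via ES-Hadoop and query it with SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("helk-hunt").getOrCreate()

df = (spark.read
      .format("org.elasticsearch.spark.sql")
      .option("es.nodes", "helk-elasticsearch:9200")     # placeholder node address
      .load("logs-endpoint-winevent-sysmon-*"))          # placeholder index pattern

df.createOrReplaceTempView("sysmon")
spark.sql("""
    SELECT process_name, COUNT(*) AS executions
    FROM sysmon
    GROUP BY process_name
    ORDER BY executions DESC
    LIMIT 10
""").show()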

Goals
  • Provide an open source hunting platform to the community and share the basics of Threat Hunting.
  • Expedite the time it takes to deploy a hunt platform.
  • Improve the testing and development of hunting use cases in an easier and more affordable way.
  • Enable Data Science capabilities while analyzing data via Apache Spark, GraphFrames & Jupyter Notebooks.

Current Status: Alpha
The project is currently in an alpha stage, which means that the code and the functionality are still changing. We haven't yet tested the system with large data sources and in many scenarios. We invite you to try it and welcome any feedback.

HELK Features
  • Kafka: A distributed publish-subscribe messaging system that is designed to be fast, scalable, fault-tolerant, and durable.
  • Elasticsearch: A highly scalable open-source full-text search and analytics engine.
  • Logstash: A data collection engine with real-time pipelining capabilities.
  • Kibana: An open source analytics and visualization platform designed to work with Elasticsearch.
  • ES-Hadoop: An open-source, stand-alone, self-contained, small library that allows Hadoop jobs (whether using Map/Reduce or libraries built upon it such as Hive, Pig or Cascading or new upcoming libraries like Apache Spark ) to interact with Elasticsearch.
  • Spark: A fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs.
  • GraphFrames: A package for Apache Spark which provides DataFrame-based Graphs.
  • Jupyter Notebook: An open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text.
  • KSQL: Confluent KSQL is the open source, streaming SQL engine that enables real-time data processing against Apache Kafka®. It provides an easy-to-use, yet powerful interactive SQL interface for stream processing on Kafka, without the need to write code in a programming language such as Java or Python
  • Elastalert: ElastAlert is a simple framework for alerting on anomalies, spikes, or other patterns of interest from data in Elasticsearch
    • Sigma: Sigma is a generic and open signature format that allows you to describe relevant log events in a straightforward manner.

Getting Started

WIKI

(Docker) Accessing the HELK's Images
By default, the HELK's containers are run in the background (Detached). You can see all your docker containers by running the following command:
sudo docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a97bd895a2b3 cyb3rward0g/helk-spark-worker:2.3.0 "./spark-worker-entr…" About an hour ago Up About an hour 0.0.0.0:8082->8082/tcp helk-spark-worker2
cbb31f688e0a cyb3rward0g/helk-spark-worker:2.3.0 "./spark-worker-entr…" About an hour ago Up About an hour 0.0.0.0:8081->8081/tcp helk-spark-worker
5d58068aa7e3 cyb3rward0g/helk-kafka-broker:1.1.0 "./kafka-entrypoint.…" About an hour ago Up About an hour 0.0.0.0:9092->9092/tcp helk-kafka-broker
bdb303b09878 cyb3rward0g/helk-kafka-broker:1.1.0 "./kafka-entrypoint.…" About an hour ago Up About an hour 0.0.0.0:9093->9093/tcp helk-kafka-broker2
7761d1e43d37 cyb3rward0g/helk-nginx:0.0.2 "./nginx-entrypoint.…" About an hour ago Up About an hour 0.0.0.0:80->80/tcp helk-nginx
ede2a2503030 cyb3rward0g/helk-jupyter:0.32.1 "./jupyter-entrypoin…" About an hour ago Up About an hour 0.0.0.0:4040->4040/tcp, 0.0.0.0:8880->8880/tcp helk-jupyter
ede19510e959 cyb3rward0g/helk-logstash:6.2.4 "/usr/local/bin/dock…" About an hour ago Up About an hour 5044/tcp, 9600/tcp helk-logstash
e92823b24b2d cyb3rward0g/helk-spark-master:2.3.0 "./spark-master-entr…" About an hour ago Up About an hour 0.0.0.0:7077->7077/tcp, 0.0.0.0:8080->8080/tcp helk-spark-master
6125921b310d cyb3rward0g/helk-kibana:6.2.4 "./kibana-entrypoint…" About an hour ago Up About an hour 5601/tcp helk-kibana
4321d609ae07 cyb3rward0g/helk-zookeeper:3.4.10 "./zookeeper-entrypo…" About an hour ago Up About an hour 2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp helk-zookeeper
9cbca145fb3e cyb3rward0g/helk-elasticsearch:6.2.4 "/usr/local/bin/dock…" About an hour ago Up About an hour 9200/tcp, 9300/tcp helk-elasticsearch
Then, you will just have to pick which container you want to access and run the following commands:
sudo docker exec -ti <image-name> bash
root@ede2a2503030:/opt/helk/scripts#

Resources

Author

Contributors

Contributing
There are a few things that I would like to accomplish with the HELK, as shown in the To-Do list below. I would love to make the HELK a stable build for everyone in the community. If you are interested in making this build more robust and adding some cool features to it, PLEASE feel free to submit a pull request. #SharingIsCaring


WiFiBroot - A WiFi Pentest Cracking Tool For WPA/WPA2 (Handshake, PMKID, Cracking, EAPOL, Deauthentication)


WiFiBroot is built to provide clients an all-in-one facility for cracking WiFi (WPA/WPA2) networks. It heavily depends on scapy, a well-featured packet manipulation library in Python. Almost every process within is somehow dependent on scapy layers and other functions, except for operating the wireless interface on a different channel, which is done via the native Linux command iwconfig, for which you may need sudo privileges. It currently provides four independent working modes to deal with the target networks. Two of them are online cracking methods, one runs in offline mode and is provided to crack saved hashes from the first two modes, and one is for deauthentication attacks on wireless networks and can also be used as a jamming handler. It can be run on a variety of Linux platforms and requires at least a TP-Link WN727N to operate properly.
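To give a flavour of the scapy layers involved, the sketch below counts EAPOL (4-way handshake) frames per BSSID in a saved capture. It is an illustration of the concept only, not WiFiBroot's code, and capture.pcap is a placeholder filename.

# Illustration of the scapy layers involved: count EAPOL (4-way handshake) frames
# per BSSID in a saved capture. Not WiFiBroot's code; capture.pcap is a placeholder.
from collections import Counter
from scapy.all import rdpcap, Dot11, EAPOL

handshakes = Counter()
for pkt in rdpcap("capture.pcap"):
    if pkt.haslayer(EAPOL) and pkt.haslayer(Dot11):
        bssid = pkt[Dot11].addr3              # BSSID for typical infrastructure data frames
        handshakes[bssid] += 1

for bssid, count in handshakes.items():
    print(f"{bssid}: {count} EAPOL frame(s)")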

Installation:
WiFiBroot heavily depends on scapy, so you will need scapy installed. Almost every other library it needs will likely already be installed on your system. Make sure the scapy version you install is <=2.4.0. Newer versions are likely to throw some unknown errors.
$ sudo pip install scapy==2.4.0
The script is supposed to be run under sudo, but it will still work even if not run as root. The basic necessary arguments are:
$ sudo python wifibroot.py -i [interface] -d /path/to/dictionary -m [mode]

Documentation :
WiFiBroot uses modes to identify which attack you want to perform on your target. Currently, there are four available modes. The usage of each mode can be seen by supplying the --help/-h option right after the -m/--mode option. Here's a list of available modes and what they do:

Modes:
Syntax:
$ python wifibroot.py [--mode [modes]] [--options]
$ python wifibroot.py --mode 2 -i wlan1mon --verbose -d /path/to/list -w pmkid.txt

Modes:
# Description Value
01 Capture 4-way handshake and crack MIC code 1
02 Captures and Crack PMKID (PMKID Attack) 2
03 Perform Manual cracking on available
capture types. See --list-types 3
04 Deauthentication. Disconnect two stations
and jam the traffic. 4

Use -h, --help after -m, --mode to get help on modes.
Each mode has a specific purpose and has its own options:

HANDSHAKE:
Mode: 
01 Capture 4-way handshake and crack MIC code 1

Options:
Args Description Required
-h, --help Show this help manual NO
-i, --interface Monitor Interface to use YES
-v, --verbose Turn off Verbose mode. NO
-t, --timeout Time Delay between two deauth
requests. NO
-d, --dictionary Dictionary for Cracking YES
-w, --write Write Captured handshake to
a separate file NO
--deauth Number of Deauthentication
frames to send NO

Filters:
-e, --essid ESSID of listening network
-b, --bssid BSSID of target network .
-c, --channel Channel interface should be listening
on. Default: ALL

PMKID ATTACK
Mode: 
02 Captures and Crack PMKID (PMKID Attack) 2

Options:
Args Description Required
-h, --help Show this help manual NO
-i, --interface Monitor Interface to use YES
-v, --verbose Turn off Verbose mode. NO
-d, --dictionary Dictionary for Cracking YES
-w, --write Write Captured handshake to
a separate file NO

Filters:
-e, --essid ESSID of listening network
-b, --bssid BSSID of target network.
-c, --channel Channel interface should be listening
on. Default: ALL

Offline Cracking
Mode: 
03 Perform Manual cracking on available capture
types. See --list-types 3

Options:
Args Description Required
-h, --help Show this help manual NO
--list-types List available cracking types NO
--type Type of capture to crack YES
-v, --verbose Turn off Verbose mode. NO
-d, --dictionary Dictionary for Cracking YES
-e, --essid ESSID of target network.
Only for HANDSHAKE Type YES
-r, --read Captured file to crack YES

DEAUTHENTICATION ATTACK (Stress Testing)
Mode:
04 Deauthentication. Disconnect two stations
and jam the traffic. 4

Options:
Args Description Required
-h, --help Show this help manual NO
-i, --interface Monitor Mode Interface to use YES
-0, --count Number of Deauthentication
frames to send. '0' specifies
unlimited frames YES
--ap Access Point MAC Address NO
--client STA (Station) MAC Address NO

Examples
To Capture 4-way handshake and crack MIC code:
$ python wifibroot.py --mode 1 -i wlan1mon --verbose -d dicts/list.txt -w output.cap 
To Capture and Crack PMKID:
$ python wifibroot.py --mode 2 -i wlan1mon --verbose -d dicts/list.txt -w output.txt
Offline Crack Handshake and PMKID:
$ python wifibroot.py --mode 3 --type handshake --essid "TARGET ESSID" --verbose -d dicts/list.txt --read output.cap
$ python wifibroot.py --mode 3 --type pmkid --verbose -d dicts/list.txt --read output.txt
Deauthentication attack in various form:
# Ultimate Deauthentication attack: 
$ python wifibroot.py --mode 4 -i wlan1mon -00 --verbose
# Disconnect All Clients from Access Point:
$ python wifibroot.py --mode 4 -i wlan1mon --ap [AP MAC] --verbose
# Disconnect a Specific Client:
$ python wifibroot.py --mode 4 -i wlan1mon --ap [AP MAC] --client [STA MAC] --verbose

Support
Website: https://www.shelvoide.com
Twitter: @hash3liZer
Email: admin@shellvoide.com


AutoRecon - Multi-Threaded Network Reconnaissance Tool Which Performs Automated Enumeration Of Services


AutoRecon is a multi-threaded network reconnaissance tool which performs automated enumeration of services. It is intended as a time-saving tool for use in CTFs and other penetration testing environments (e.g. OSCP). It may also be useful in real-world engagements.
The tool works by firstly performing port scans/service detection scans. From those initial results, the tool will launch further enumeration scans of those services using a number of different tools. For example, if HTTP is found, nikto will be launched (as well as many others).
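The flow, an initial port/service scan followed by per-service follow-up commands run concurrently, can be pictured with a short sketch. This is a conceptual illustration only; AutoRecon's real profiles and commands live in its TOML configuration, and the command mapping below is hypothetical.

# Conceptual sketch of the scan-then-enumerate flow (not AutoRecon's actual code).
import asyncio

# Hypothetical mapping of detected service -> follow-up commands; {address}/{port} get filled in.
FOLLOW_UPS = {
    "http": ["nikto -host {address} -port {port}", "whatweb http://{address}:{port}"],
    "ssh":  ["nmap -vv -p {port} --script ssh2-enum-algos {address}"],
}

async def run(cmd):
    # Run one enumeration command and capture its output.
    proc = await asyncio.create_subprocess_shell(
        cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE)
    out, _ = await proc.communicate()
    print(f"[*] finished: {cmd}")
    return out

async def enumerate_target(address, services):
    # services: (name, port) pairs produced by the initial port/service scan.
    tasks = [run(cmd.format(address=address, port=port))
             for name, port in services
             for cmd in FOLLOW_UPS.get(name, [])]
    await asyncio.gather(*tasks)   # follow-up scans run concurrently

asyncio.run(enumerate_target("127.0.0.1", [("http", 80), ("ssh", 22)]))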
Everything in the tool is highly configurable. The default configuration performs no automated exploitation to keep the tool in line with OSCP exam rules. If you wish to add automatic exploit tools to the configuration, you do so at your own risk. The author will not be held responsible for negative actions that result from the misuse of this tool.

Origin
AutoRecon was inspired by three tools which the author used during the OSCP labs: Reconnoitre, ReconScan, and bscan. While all three tools were useful, none of the three alone had the functionality desired. AutoRecon combines the best features of the aforementioned tools while also implementing many new features to help testers with enumeration of multiple targets.

Features

  • Supports multiple targets in the form of IP addresses, IP ranges (CIDR notation), and resolvable hostnames.
  • Can scan targets concurrently, utilizing multiple processors if they are available.
  • Customizable port scanning profiles for flexibility in your initial scans.
  • Customizable service enumeration commands and suggested manual follow-up commands.
  • An intuitive directory structure for results gathering.
  • Full logging of commands that were run, along with errors if they fail.
  • Global and per-scan pattern matching so you can highlight/extract important information from the noise.

Requirements
  • Python 3
  • colorama
  • toml
Once Python 3 is installed, pip3 can be used to install the other requirements:
$ pip3 install -r requirements.txt
Several commands used in AutoRecon reference the SecLists project, in the directory /usr/share/seclists/. You can either manually download the SecLists project to this directory (https://github.com/danielmiessler/SecLists), or if you are using Kali Linux (highly recommended) you can run the following:
$ sudo apt install seclists
AutoRecon will still run if you do not install SecLists, though several commands may fail, and some manual commands may not run either.
Additionally the following commands may need to be installed, depending on your OS:
curl
enum4linux
gobuster
nbtscan
nikto
nmap
onesixtyone
oscanner
smbclient
smbmap
smtp-user-enum
snmpwalk
sslscan
svwar
tnscmd10g
whatweb
wkhtmltoimage

Usage
AutoRecon uses Python 3 specific functionality and does not support Python 2.
usage: autorecon.py [-h] [-ct <number>] [-cs <number>] [--profile PROFILE]
[-o OUTPUT] [--nmap NMAP | --nmap-append NMAP_APPEND] [-v]
[--disable-sanity-checks]
targets [targets ...]

Network reconnaissance tool to port scan and automatically enumerate services
found on multiple targets.

positional arguments:
targets IP addresses (e.g. 10.0.0.1), CIDR notation (e.g.
10.0.0.1/24), or resolvable hostnames (e.g. foo.bar)
to scan.

optional arguments:
-h, --help show this help message and exit
-ct <number>, --concurrent-targets <number>
The maximum number of target hosts to scan
concurrently. Default: 5
-cs <number>, --concurrent-scans <number>
The maximum number of scans to perform per target
host. Default: 10
--profile PROFILE The port scanning profile to use (defined in port-
scan-profiles.toml). Default: default
-o OUTPUT, --output OUTPUT
The output directory for results. Default: results
--nmap NMAP Override the {nmap_extra} variable in scans. Default:
-vv --reason -Pn
--nmap-append NMAP_APPEND
Append to the default {nmap_extra} variable in scans.
-v, --verbose Enable verbose output. Repeat for more verbosity.
--disable-sanity-checks
Disable sanity checks that would otherwise prevent the
scans from running.

Examples
Scanning a single target:
python3 autorecon.py 127.0.0.1
[*] Scanning target 127.0.0.1
[*] Running service detection nmap-full-tcp on 127.0.0.1
[*] Running service detection nmap-top-20-udp on 127.0.0.1
[*] Running service detection nmap-quick on 127.0.0.1
[*] Service detection nmap-quick on 127.0.0.1 finished successfully
[*] [127.0.0.1] ssh found on tcp/22
[*] [127.0.0.1] http found on tcp/80
[*] [127.0.0.1] rpcbind found on tcp/111
[*] [127.0.0.1] postgresql found on tcp/5432
[*] Running task tcp/22/nmap-ssh on 127.0.0.1
[*] Running task tcp/80/nmap-http on 127.0.0.1
[*] Running task tcp/80/curl-index on 127.0.0.1
[*] Running task tcp/80/curl-robots on 127.0.0.1
[*] Running task tcp/80/whatweb on 127.0.0.1
[*] Running task tcp/80/nikto on 127.0.0.1
[*] Running task tcp/111/nmap-nfs on 127.0.0.1
[*] Task tcp/80/curl-index on 127.0.0.1 finished successfully
[*] Task tcp/80/curl-robots on 127.0.0.1 finished successfully
[*] Task tcp/22/nmap-ssh on 127.0.0.1 finished successfully
[*] Task tcp/80/whatweb on 127.0.0.1 finished successfully
[*] Task tcp/111/nmap-nfs on 127.0.0.1 finished successfully
[*] Task tcp/80/nmap-http on 127.0.0.1 finished successfully
[*] Task tcp/80/nikto on 127.0.0.1 finished successfully
[*] Service detection nmap-top-20-udp on 127.0.0.1 finished successfully
[*] Service detection nmap-full-tcp on 127.0.0.1 finished successfully
[*] [127.0.0.1] http found on tcp/5984
[*] [127.0.0.1] rtsp found on tcp/5985
[*] Running task tcp/5984/nmap-http on 127.0.0.1
[*] Running task tcp/5984/curl-index on 127.0.0.1
[*] Running task tcp/5984/curl-robots on 127.0.0.1
[*] Running task tcp/5984/whatweb on 127.0.0.1
[*] Running task tcp/5984/nikto on 127.0.0.1
[*] Task tcp/5984/curl-index on 127.0.0.1 finished successfully
[*] Task tcp/5984/curl-robots on 127.0.0.1 finished successfully
[*] Task tcp/5984/whatweb on 127.0.0.1 finished successfully
[*] Task tcp/5984/nikto on 127.0.0.1 finished successfully
[*] Task tcp/5984/nmap-http on 127.0.0.1 finished successfully
[*] Finished scanning target 127.0.0.1
The default port scan profile first performs a full TCP port scan, a top 20 UDP port scan, and a top 1000 TCP port scan. You may ask why AutoRecon scans the top 1000 TCP ports at the same time as a full TCP port scan (which also scans those ports). The reason is simple: most open ports will generally be in the top 1000, and we want to start enumerating services quickly, rather than wait for Nmap to scan every single port. As you can see, all the service enumeration scans actually finish before the full TCP port scan is done. While there is a slight duplication of efforts, it pays off by getting actual enumeration results back to the tester quicker.
Note that the actual command line output will be colorized if your terminal supports it.
Scanning multiple targets
python3 autorecon.py 192.168.1.100 192.168.1.1/30 localhost
[*] Scanning target 192.168.1.100
[*] Scanning target 192.168.1.1
[*] Scanning target 192.168.1.2
[*] Scanning target localhost
[*] Running service detection nmap-quick on 192.168.1.100
[*] Running service detection nmap-quick on localhost
[*] Running service detection nmap-top-20-udp on 192.168.1.100
[*] Running service detection nmap-quick on 192.168.1.1
[*] Running service detection nmap-quick on 192.168.1.2
[*] Running service detection nmap-top-20-udp on 192.168.1.1
[*] Running service detection nmap-full-tcp on 192.168.1.100
[*] Running service detection nmap-top-20-udp on localhost
[*] Running service detection nmap-top-20-udp on 192.168.1.2
[*] Running service detection nmap-full-tcp on localhost
[*] Running service detection nmap-full-tcp on 192.168.1.1
[*] Running service detection nmap-full-tcp on 192.168.1.2
...
AutoRecon supports multiple targets per scan, and will expand IP ranges provided in CIDR notation. By default, only 5 targets will be scanned at a time, with 10 scans per target.
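To make the CIDR expansion more concrete, here is a minimal sketch (using Python's standard ipaddress module, not AutoRecon's own code) showing how a target such as 192.168.1.1/30 expands to the individual hosts seen in the output above. The expand_target helper is hypothetical and exists purely for this example.
import ipaddress

def expand_target(target):
    # Treat the argument as a network if possible; strict=False allows host bits
    # to be set (e.g. 192.168.1.1/30). Anything unparsable is kept as a hostname.
    try:
        network = ipaddress.ip_network(target, strict=False)
    except ValueError:
        return [target]
    hosts = [str(ip) for ip in network.hosts()]
    return hosts or [str(network.network_address)]

print(expand_target('192.168.1.1/30'))   # ['192.168.1.1', '192.168.1.2']
print(expand_target('localhost'))        # ['localhost']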
Scanning multiple targets with advanced options
python3 autorecon.py -ct 2 -cs 2 -vv -o outputdir 192.168.1.100 192.168.1.1/30 localhost
[*] Scanning target 192.168.1.100
[*] Scanning target 192.168.1.1
[*] Running service detection nmap-quick on 192.168.1.100 with nmap -vv --reason -Pn -sV -sC --version-all -oN "/root/outputdir/192.168.1.100/scans/_quick_tcp_nmap.txt" -oX "/root/outputdir/192.168.1.100/scans/_quick_tcp_nmap.xml" 192.168.1.100
[*] Running service detection nmap-quick on 192.168.1.1 with nmap -vv --reason -Pn -sV -sC --version-all -oN "/root/outputdir/192.168.1.1/scans/_quick_tcp_nmap.txt" -oX "/root/outputdir/192.168.1.1/scans/_quick_tcp_nmap.xml" 192.168.1.1
[*] Running service detection nmap-top-20-udp on 192.168.1.100 with nmap -vv --reason -Pn -sU -A --top-ports=20 --version-all -oN "/root/outputdir/192.168.1.100/scans/_top_20_udp_nmap.txt" -oX "/root/outputdir/192.168.1.100/scans/_top_20_udp_nmap.xml" 192.168.1.100
[*] Running service detection nmap-top-20-udp on 192.168.1.1 with nmap -vv --reason -Pn -sU -A --top-ports=20 --version-all -oN "/root/outputdir/192.168.1.1/scans/_top_20_udp_nmap.txt" -oX "/root/outputdir/192.168.1.1/scans/_top_20_udp_nmap.xml" 192.168.1.1
[-] [192.168.1.1 nmap-quick] Starting Nmap 7.70 ( https://nmap.org ) at 2019-03-01 17:25 EST
[-] [192.168.1.100 nmap-quick] Starting Nmap 7.70 ( https://nmap.org ) at 2019-03-01 17:25 EST
[-] [192.168.1.100 nmap-top-20-udp] Starting Nmap 7.70 ( https://nmap.org ) at 2019-03-01 17:25 EST
[-] [192.168.1.1 nmap-top-20-udp] Starting Nmap 7.70 ( https://nmap.org ) at 2019-03-01 17:25 EST
[-] [192.168.1.1 nmap-quick] NSE: Loaded 148 scripts for scanning.
[-] [192.168.1.1 nmap-quick] NSE: Script Pre-scanning.
[-] [192.168.1.1 nmap-quick] NSE: Starting runlevel 1 (of 2) scan.
[-] [192.168.1.1 nmap-quick] Initiating NSE at 17:25
[-] [192.168.1.1 nmap-quick] Completed NSE at 17:25, 0.00s elapsed
[-] [192.168.1.1 nmap-quick] NSE: Starting runlevel 2 (of 2) scan.
[-] [192.168.1.1 nmap-quick] Initiating NSE at 17:25
[-] [192.168.1.1 nmap-quick] Completed NSE at 17:25, 0.00s elapsed
[-] [192.168.1.1 nmap-quick] Initiating ARP Ping Scan at 17:25
[-] [192.168.1.100 nmap-quick] NSE: Loaded 148 scripts for scanning.
[-] [192.168.1.100 nmap-quick] NSE: Script Pre-scanning.
[-] [192.168.1.100 nmap-quick] NSE: Starting runlevel 1 (of 2) scan.
[-] [192.168.1.100 nmap-quick] Initiating NSE at 17:25
[-] [192.168.1.100 nmap-quick] Completed NSE at 17:25, 0.00s elapsed
[-] [192.168.1.100 nmap-quick] NSE: Starting runlevel 2 (of 2) scan.
[-] [192.168.1.100 nmap-quick] Initiating NSE at 17:25
[-] [192.168.1.100 nmap-quick] Completed NSE at 17:25, 0.00s elapsed
[-] [192.168.1.100 nmap-quick] Initiating ARP Ping Scan at 17:25
...
In this example, the -ct option limits the number of concurrent targets to 2, and the -cs option limits the number of concurrent scans per target to 2. The -vv option makes the output very verbose, showing the output of every scan being run. The -o option sets a custom output directory for scan results to be saved.

Verbosity
AutoRecon supports three levels of verbosity:
  • (none) Minimal output. AutoRecon will announce when target scans start and finish, as well as which services were identified.
  • (-v) Verbose output. AutoRecon will additionally specify the exact commands which are being run, as well as highlighting any patterns which are matched in command output.
  • (-vv) Very verbose output. AutoRecon will output everything. Literally every line from all commands which are currently running. When scanning multiple targets concurrently, this can lead to a ridiculous amount of output. It is not advised to use -vv unless you absolutely need to see live output from commands.

Results
By default, results will be stored in the ./results directory. A new sub directory is created for every target. The structure of this sub directory is:
.
├── exploit/
├── loot/
├── report/
│   ├── local.txt
│   ├── notes.txt
│   ├── proof.txt
│   └── screenshots/
└── scans/
├── _commands.log
├── _manual_commands.txt
└── xml/
The exploit directory is intended to contain any exploit code you download / write for the target.
The loot directory is intended to contain any loot (e.g. hashes, interesting files) you find on the target.
The report directory contains some auto-generated files and directories that are useful for reporting:
  • local.txt can be used to store the local.txt flag found on targets.
  • notes.txt should contain a basic template where you can write notes for each service discovered.
  • proof.txt can be used to store the proof.txt flag found on targets.
  • The screenshots directory is intended to contain the screenshots you use to document the exploitation of the target.
The scans directory is where all results from scans performed by AutoRecon will go. This includes port scans / service detection scans, as well as any service enumeration scans. It also contains two other files:
  • _commands.log contains a list of every command AutoRecon ran against the target. This is useful if one of the commands fails and you want to run it again with modifications.
  • _manual_commands.txt contains any commands that are deemed "too dangerous" to run automatically, either because they are too intrusive, require modification based on human analysis, or just work better when there is a human monitoring them.
If a scan results in an error, a file called _errors.log will also appear in the scans directory with some details to alert the user.
If output matches a defined pattern, a file called _patterns.log will also appear in the scans directory with details about the matched output.
The scans/xml directory stores any XML output (e.g. from Nmap scans) separately from the main scan outputs, so that the scans directory itself does not get too cluttered.

Port Scan profiles
The port-scan-profiles.toml file is where you can define the initial port scans / service detection commands. The configuration file uses the TOML format, which is explained here: https://github.com/toml-lang/toml
Here is an example profile called "quick":
[quick]

[quick.nmap-quick]

[quick.nmap-quick.service-detection]
command = 'nmap {nmap_extra} -sV --version-all -oN "{scandir}/_quick_tcp_nmap.txt" -oX "{scandir}/xml/_quick_tcp_nmap.xml" {address}'
pattern = '^(?P<port>\d+)\/(?P<protocol>(tcp|udp))(.*)open(\s*)(?P<service>[\w\-\/]+)(\s*)(.*)$'

[quick.nmap-top-20-udp]

[quick.nmap-top-20-udp.service-detection]
command = 'nmap {nmap_extra} -sU -A --top-ports=20 --version-all -oN "{scandir}/_top_20_udp_nmap.txt" -oX "{scandir}/xml/_top_20_udp_nmap.xml" {address}'
pattern = '^(?P<port>\d+)\/(?P<protocol>(tcp|udp))(.*)open(\s*)(?P<service>[\w\-\/]+)(\s*)(.*)$'
Note that indentation is optional; it is used here purely for aesthetics. The "quick" profile defines a scan called "nmap-quick". This scan has a service-detection command which uses nmap to scan the top 1000 TCP ports. The command uses two references: {scandir} is the location of the scans directory for the target, and {address} is the address of the target.
A regex pattern is defined which matches three named groups (port, protocol, and service) in the output. Every service-detection command must have a corresponding pattern that matches all three of those groups. AutoRecon will attempt to do some checks and refuse to scan if any of these groups are missing.
An almost identical scan called "nmap-top-20-udp" is also defined. This scans the top 20 UDP ports.
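To make the pattern matching more concrete, the following sketch (an illustration only, not code taken from AutoRecon) shows how the service-detection regex above pulls the port, protocol, and service named groups out of a single line of Nmap output. The sample line is hypothetical.
import re

pattern = re.compile(r'^(?P<port>\d+)\/(?P<protocol>(tcp|udp))(.*)open(\s*)(?P<service>[\w\-\/]+)(\s*)(.*)$')

# Hypothetical line from an Nmap -oN report:
sample_line = '80/tcp   open  http    syn-ack ttl 64 Apache httpd 2.4.29'

match = pattern.match(sample_line)
if match:
    print(match.group('port'), match.group('protocol'), match.group('service'))
    # -> 80 tcp http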
Here is a more complicated example:
[udp]

[udp.udp-top-20]

[udp.udp-top-20.port-scan]
command = 'unicornscan -mU -p 631,161,137,123,138,1434,445,135,67,53,139,500,68,520,1900,4500,514,49152,162,69 {address} 2>&1 | tee "{scandir}/_top_20_udp_unicornscan.txt"'
pattern = '^UDP open\s*[\w-]+\[\s*(?P<port>\d+)\].*$'

[udp.udp-top-20.service-detection]
command = 'nmap {nmap_extra} -sU -A -p {ports} --version-all -oN "{scandir}/_top_20_udp_nmap.txt" -oX "{scandir}/xml/_top_20_udp_nmap.xml" {address}'
pattern = '^(?P<port>\d+)\/(?P<protocol>(udp))(.*)open(\s*)(?P<service>[\w\-\/]+)(\s*)(.*)$'
In this example, a profile called "udp" defines a scan called "udp-top-20". This scan has two commands, one is a port-scan and the other is a service-detection. When a port-scan command is defined, it will always be run first. The corresponding pattern must match a named group "port" which extracts the port number from the output.
The service-detection will be run after the port-scan command has finished, and uses a new reference: {ports}. This reference is a comma-separated string of all the ports extracted by the port-scan command. Note that the same three named groups (port, protocol, and service) are defined in the service-detection pattern.
Both the port-scan and the service-detection commands use the {scandir} and {address} references.
Note that if a port-scan command is defined without a corresponding service-detection command, AutoRecon will refuse to scan.
This more complicated example is only really useful if you want to use unicornscan's speed in conjunction with nmap's service detection abilities. If you are content with using Nmap for both port scanning and service detection, you do not need to use this setup.
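As a rough sketch of the {ports} substitution described above (assumed behaviour, not AutoRecon's actual source), the following shows how ports captured by the port-scan pattern could be joined together and fed into the service-detection command. The output lines are hypothetical unicornscan results.
import re

port_pattern = re.compile(r'^UDP open\s*[\w-]+\[\s*(?P<port>\d+)\].*$')

# Hypothetical unicornscan output lines:
output_lines = [
    'UDP open                  ntp[  123]         from 10.0.0.1  ttl 64',
    'UDP open                 snmp[  161]         from 10.0.0.1  ttl 64',
]

ports = [m.group('port') for m in map(port_pattern.match, output_lines) if m]

service_detection = 'nmap {nmap_extra} -sU -A -p {ports} --version-all {address}'
print(service_detection.format(nmap_extra='-vv --reason -Pn', ports=','.join(ports), address='10.0.0.1'))
# -> nmap -vv --reason -Pn -sU -A -p 123,161 --version-all 10.0.0.1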

Service Scans
The service-scans.toml file is where you can define service enumeration scans and other manual commands associated with certain services.
Here is an example of a simple configuration:
[ftp]

service-names = [
'^ftp',
'^ftp\-data'
]

[[ftp.scan]]
name = 'nmap-ftp'
command = 'nmap {nmap_extra} -sV -p {port} --script="(ftp* or ssl*) and not (brute or broadcast or dos or external or fuzzer)" -oN "{scandir}/{protocol}_{port}_ftp_nmap.txt" -oX "{scandir}/xml/{protocol}_{port}_ftp_nmap.xml" {address}'

[[ftp.scan.pattern]]
description = 'Anonymous FTP Enabled!'
pattern = 'Anonymous FTP login allowed'

[[ftp.manual]]
description = 'Bruteforce logins:'
commands = [
'hydra -L "{username_wordlist}" -P "{password_wordlist}" -e nsr -s {port} -o "{scandir}/{protocol}_{port}_ftp_hydra.txt" ftp://{address}',
'medusa -U "{username_wordlist}" -P "{password_wordlist}" -e ns -n {port} -O "{scandir}/{protocol}_{port}_ftp_medusa.txt" -M ftp -h {address}'
]
Note that indentation is optional; it is used here purely for aesthetics. The service "ftp" is defined here. The service-names array contains regex strings which should match the service name from the service-detection scans. Regex is used to be as flexible as possible. The service-names array works on a whitelist basis; as long as one of the regex strings matches, the service will get scanned.
An optional ignore-service-names array can also be defined, if you want to blacklist certain regex strings from matching.
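The following is a simplified sketch of that whitelist/blacklist matching (an assumption about the behaviour, not AutoRecon's own logic); the should_scan helper is hypothetical.
import re

service_names = [r'^ftp', r'^ftp\-data']   # whitelist from the [ftp] section
ignore_service_names = []                  # optional blacklist

def should_scan(detected_service):
    # A service is scanned if it matches a whitelist regex and no blacklist regex.
    if any(re.search(regex, detected_service) for regex in ignore_service_names):
        return False
    return any(re.search(regex, detected_service) for regex in service_names)

print(should_scan('ftp'))        # True
print(should_scan('ftp-data'))   # True
print(should_scan('http'))       # False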
The ftp.scan section defines a single scan, named nmap-ftp. This scan defines a command which runs nmap with several ftp-related scripts. Several references are used here:
  • {nmap_extra} by default is set to "-vv --reason -Pn" but this can be overridden or appended to using the --nmap or --nmap-append command line options respectively. If the protocol is UDP, "-sU" will also be appended.
  • {port} is the port that the service is running on.
  • {scandir} is the location of the scans directory for the target.
  • {protocol} is the protocol being used (either tcp or udp).
  • {address} is the address of the target.
A pattern is defined for the nmap-ftp scan, which matches the simple pattern "Anonymous FTP login allowed". In the event that this pattern matches output of the nmap-ftp command, the pattern description ("Anonymous FTP Enabled!") will be saved to the _patterns.log file in the scans directory. A special reference {match} can be used in the description to reference the entire match, or the first capturing group.
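As a loose sketch of that behaviour (assumed for illustration, not taken from AutoRecon's source), a matched pattern and its description could be recorded along the following lines; the sample output line and the use of {match} here are hypothetical.
import re

pattern = 'Anonymous FTP login allowed'
description = 'Anonymous FTP Enabled! ({match})'

# Hypothetical line of nmap-ftp output:
line = '| ftp-anon: Anonymous FTP login allowed (FTP code 230)'

match = re.search(pattern, line)
if match:
    # Use the first capturing group if one exists, otherwise the entire match.
    value = match.group(1) if match.groups() else match.group(0)
    with open('_patterns.log', 'a') as log:
        log.write(description.replace('{match}', value) + '\n')
        # -> Anonymous FTP Enabled! (Anonymous FTP login allowed)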
The ftp.manual section defines a group of manual commands. This group contains a description for the user, and a commands array which contains the commands that a user can run. Two new references are defined here: {username_wordlist} and {password_wordlist} which are configured at the very top of the service-scans.toml file, and default to a username and password wordlist provided by SecLists.
Here is a more complicated configuration:
[smb]

service-names = [
'^smb',
'^microsoft\-ds',
'^netbios'
]

[[smb.scan]]
name = 'nmap-smb'
command = 'nmap {nmap_extra} -sV -p {port} --script="(nbstat or smb* or ssl*) and not (brute or broadcast or dos or external or fuzzer)" --script-args="unsafe=1" -oN "{scandir}/{protocol}_{port}_smb_nmap.txt" -oX "{scandir}/xml/{protocol}_{port}_smb_nmap.xml" {address}'

[[smb.scan]]
name = 'enum4linux'
command = 'enum4linux -a -M -l -d {address} 2>&1 | tee "{scandir}/enum4linux.txt"'
run_once = true
ports.tcp = [139, 389, 445]
ports.udp = [137]

[[smb.scan]]
name = 'nbtscan'
command = 'nbtscan -rvh {address} 2>&1 | tee "{scandir}/nbtscan.txt"'
run_once = true
ports.udp = [137]

[[smb.scan]]
name = 'smbclient'
command = 'smbclient -L\\ -N -I {address} 2>&1 | tee "{scandir}/smbclient.txt"'
run_once = true
ports.tcp = [139, 445]

[[smb.scan]]
name = 'smbmap-share-permissions'
command = 'smbmap -H {address} -P {port} 2>&1 | tee -a "{scandir}/smbmap-share-permissions.txt"; smbmap -u null -p "" -H {address} -P {port} 2>&1 | tee -a "{scandir}/smbmap-share-permissions.txt"'

[[smb.scan]]
name = 'smbmap-list-contents'
command = 'smbmap -H {address} -P {port} -R 2>&1 | tee -a "{scandir}/smbmap-list-contents.txt"; smbmap -u null -p "" -H {address} -P {port} -R 2>&1 | tee -a "{scandir}/smbmap-list-contents.txt"'

[[smb.scan]]
name = 'smbmap-execute-command'
command = 'smbmap -H {address} -P {port} -x "ipconfig /all" 2>&1 | tee -a "{scandir}/smbmap-execute-command.txt"; smbmap -u null -p "" -H {address} -P {port} -x "ipconfig /all" 2>&1 | tee -a "{scandir}/smbmap-execute-command.txt"'

[[smb.manual]]
description = 'Nmap scans for SMB vulnerabilities that could potentially cause a DoS if scanned (according to Nmap). Be careful:'
commands = [
'nmap {nmap_extra} -sV -p {port} --script="smb-vuln-ms06-025" --script-args="unsafe=1" -oN "{scandir}/{protocol}_{port}_smb_ms06-025.txt" -oX "{scandir}/xml/{protocol}_{port}_smb_ms06-025.xml" {address}',
'nmap {nmap_extra} -sV -p {port} --script="smb-vuln-ms07-029" --script-args="unsafe=1" -oN "{scandir}/{protocol}_{port}_smb_ms07-029.txt" -oX "{scandir}/xml/{protocol}_{port}_smb_ms07-029.xml" {address}',
'nmap {nmap_extra} -sV -p {port} --script="smb-vuln-ms08-067" --script-args="unsafe=1" -oN "{scandir}/{protocol}_{port}_smb_ms08-067.txt" -oX "{scandir}/xml/{protocol}_{port}_smb_ms08-067.xml" {address}'
]
The main difference here is that several scans have some new settings:
  • The ports.tcp array defines a whitelist of TCP ports which the command can be run against. If the service is detected on a port that is not in the whitelist, the command will not be run against it.
  • The ports.udp array defines a whitelist of UDP ports which the command can be run against. It operates in the same way as the ports.tcp array.
Why do these settings even exist? Well, some commands will only run against specific ports, and can't be told to run against any other ports. enum4linux for example, will only run against TCP ports 139, 389, and 445, and UDP port 137.
In fact, enum4linux will always try these ports when it is run. So if the SMB service is found on TCP ports 139 and 445, AutoRecon may attempt to run enum4linux twice for no reason. This is why the third setting exists:
  • If run_once is set to true, the command will only ever run once for that target, even if the SMB service is found on multiple ports (as illustrated in the sketch below).
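To illustrate how these settings could fit together, here is a minimal sketch (assumed logic written for this description, not AutoRecon's implementation) of deciding whether a scan such as enum4linux should run against a newly detected service; the should_run helper is hypothetical.
def should_run(scan, protocol, port, already_ran):
    # run_once: never run the same scan twice for one target.
    if scan.get('run_once') and scan['name'] in already_ran:
        return False
    # ports.tcp / ports.udp: only run against whitelisted ports, if a whitelist exists.
    allowed = scan.get('ports', {}).get(protocol)
    if allowed is not None and port not in allowed:
        return False
    return True

enum4linux = {'name': 'enum4linux', 'run_once': True,
              'ports': {'tcp': [139, 389, 445], 'udp': [137]}}

already_ran = set()
for protocol, port in [('tcp', 139), ('tcp', 445), ('tcp', 80)]:
    if should_run(enum4linux, protocol, port, already_ran):
        print('running enum4linux against %s/%d' % (protocol, port))
        already_ran.add(enum4linux['name'])
# Only tcp/139 triggers a run: tcp/445 is skipped by run_once, tcp/80 is not whitelisted.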

Testimonials
AutoRecon was invaluable during my OSCP exam, in that it saved me from the tedium of executing my active information gathering commands myself. I was able to start on a target with all of the information I needed clearly laid in front of me. I would strongly recommend this utility for anyone in the PWK labs, the OSCP exam, or other environments such as VulnHub or HTB. It is a great tool for both people just starting down their journey into OffSec and seasoned veterans alike. Just make sure that somewhere between those two points you take the time to learn what's going on "under the hood" and how / why it scans what it does.
- b0ats (rooted 5/5 exam hosts)
Wow, what a great find! Before using AutoRecon, ReconScan was my goto enumeration script for targets because it automatically ran the enumeration commands after it finds open ports. The only thing missing was the automatic creation of key directories a pentester might need during an engagement (exploit, loot, report, scans). Reconnoitre did this but didn't automatically run those commands for you. I thought ReconScan was the bee's knees until I gave AutoRecon a try. It's awesome! It combines the best features of Reconnoitre (auto directory creation) and ReconScan (automatically executing the enumeration commands). All I have to do is run it on a target or a set of targets and start going over the information it has already collected while it continues the rest of the scan. The proof is in the pudding :) Passed the OSCP exam! Kudos to Tib3rius!
- werk0ut
A friend told me about AutoRecon, so I gave it a try in the PWK labs. AutoRecon launches the common tools we all always use, whether it be nmap or nikto, and also creates a nice subfolder system based on the targets you are attacking. The strongest feature of AutoRecon is the speed; on the OSCP exam I left the tool running in the background while I started with another target, and in a matter of minutes I had all of the AutoRecon output waiting for me. AutoRecon creates a file full of commands that you should try manually, some of which may require tweaking (for example, hydra bruteforcing commands). It's good to have that extra checklist.
- tr3mb0 (rooted 4/5 exam hosts)
Being introduced to AutoRecon was a complete game changer for me while taking the OSCP and establishing my penetration testing methodology. AutoRecon is a multi-threaded reconnaissance tool that combines and automates popular enumeration tools to do most of the hard work for you. You can't get much better than that! After running AutoRecon on my OSCP exam hosts, I was given a treasure chest full of information that helped me to start on each host and pass on my first try. The best part of the tool is that it automatically launches further enumeration scans based on the initial port scans (e.g. run enum4linux if SMB is detected). The only bad part is that I did not use this tool sooner! Thanks Tib3rius.
- rufy (rooted 4/5 exam hosts)
AutoRecon allows a security researcher to iteratively scan hosts and identify potential attack vectors. Its true power comes in the form of performing scans in the background while the attacker is working on another host. I was able to start my scans and finish a specific host I was working on - and then return to find all relevant scans completed. I was then able to immediately begin trying to gain initial access instead of manually performing the active scanning process. I will continue to use AutoRecon in future penetration tests and CTFs, and highly recommend you do the same.
- waar (rooted 4.99/5 exam hosts)
"If you have to do a task more than twice a day, you need to automate it." That's a piece of advice that an old boss gave to me. AutoRecon takes that lesson to heart. Whether you're sitting in the exam, or in the PWK labs, you can fire off AutoRecon and let it work its magic. I had it running during my last exam while I worked on the buffer overflow. By the time I finished, all the enum data I needed was there for me to go through. 10/10 would recommend for anyone getting into CTF, and anyone who has been at this a long time.
- whoisflynn
I love this tool so much I wrote it.
- Tib3rius (rooted 5/5 exam hosts)
I highly recommend anyone going for their OSCP, doing CTFs or on HTB to checkout this tool. Been using AutoRecon on HTB for a month before using it over on the PWK labs and it helped me pass my OSCP exam. If you're having a hard time getting settled with an enumeration methodology I encourage you to follow the flow and techniques this script uses. It takes out a lot of the tedious work that you're probably used to while at the same time provide well-organized subdirectories to quickly look over so you don't lose your head. The manual commands it provides are great for those specific situations that need it when you have run out of options. It's a very valuable tool, cannot recommend enough.
- d0hnuts (rooted 5/5 exam hosts)
AutoRecon is not just any other tool; it is a recon correlation framework for engagements. This helped me fire a whole bunch of scans while I was working on other targets. This can help a lot in time management. This assisted me to own 4/5 boxes in pwk exam! Result: Passed!
- Wh0ami (rooted 4/5 exam hosts)


Malcolm - A Powerful, Easily Deployable Network Traffic Analysis Tool Suite For Full Packet Capture Artifacts (PCAP Files) And Zeek Logs


Malcolm is a powerful network traffic analysis tool suite designed with the following goals in mind:
  • Easy to use– Malcolm accepts network traffic data in the form of full packet capture (PCAP) files and Zeek (formerly Bro) logs. These artifacts can be uploaded via a simple browser-based interface or captured live and forwarded to Malcolm using lightweight forwarders. In either case, the data is automatically normalized, enriched, and correlated for analysis.
  • Powerful traffic analysis– Visibility into network communications is provided through two intuitive interfaces: Kibana, a flexible data visualization plugin with dozens of prebuilt dashboards providing an at-a-glance overview of network protocols; and Moloch, a powerful tool for finding and identifying the network sessions comprising suspected security incidents.
  • Streamlined deployment– Malcolm operates as a cluster of Docker containers, isolated sandboxes which each serve a dedicated function of the system. This Docker-based deployment model, combined with a few simple scripts for setup and run-time management, makes Malcolm suitable to be deployed quickly across a variety of platforms and use cases, whether it be for long-term deployment on a Linux server in a security operations center (SOC) or for incident response on a Macbook for an individual engagement.
  • Secure communications– All communications with Malcolm, both from the user interface and from remote log forwarders, are secured with industry standard encryption protocols.
  • Permissive license– Malcolm is comprised of several widely used open source tools, making it an attractive alternative to security solutions requiring paid licenses.
  • Expanding control systems visibility– While Malcolm is great for general-purpose network traffic analysis, its creators see a particular need in the community for tools providing insight into protocols used in industrial control systems (ICS) environments. Ongoing Malcolm development will aim to provide additional parsers for common ICS protocols.
Although all of the open source tools which make up Malcolm are already available and in general use, Malcolm provides a framework of interconnectivity which makes it greater than the sum of its parts. And while there are many other network traffic analysis solutions out there, ranging from complete Linux distributions like Security Onion to licensed products like Splunk Enterprise Security, the creators of Malcolm feel its easy deployment and robust combination of tools fill a void in the network security space that will make network traffic analysis accessible to many in both the public and private sectors as well as individual enthusiasts.
In short, Malcolm provides an easily deployable network analysis tool suite for full packet capture artifacts (PCAP files) and Zeek logs. While Internet access is required to build it, it is not required at runtime.

Quick start

Getting Malcolm
For a TL;DR example of downloading, configuring, and running Malcolm on a Linux platform, see Installation example using Ubuntu 18.04 LTS.

Source code
The files required to build and run Malcolm are available on the Idaho National Lab's GitHub page. Malcolm's source code is released under the terms of a permissive open source software license (see License.txt for the terms of its release).

Building Malcolm from scratch
The build.sh script can build Malcolm's Docker images from scratch. See Building from source for more information.

Pull Malcolm's Docker images
Malcolm's Docker images are periodically built and hosted on Docker Hub. If you already have Docker and Docker Compose, these prebuilt images can be pulled by navigating into the Malcolm directory (containing the docker-compose.yml file) and running docker-compose pull like this:
$ docker-compose pull
Pulling elasticsearch ... done
Pulling kibana ... done
Pulling elastalert ... done
Pulling curator ... done
Pulling logstash ... done
Pulling filebeat ... done
Pulling moloch ... done
Pulling file-monitor ... done
Pulling pcap-capture ... done
Pulling upload ... done
Pulling htadmin ... done
Pulling nginx-proxy ... done
You can then observe that the images have been retrieved by running docker images:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
malcolmnetsec/moloch 1.4.0 xxxxxxxxxxxx 27 minutes ago 517MB
malcolmnetsec/htadmin 1.4.0 xxxxxxxxxxxx 2 hours ago 180MB
malcolmnetsec/nginx-proxy 1.4.0 xxxxxxxxxxxx 4 hours ago 53MB
malcolmnetsec/file-upload 1.4.0 xxxxxxxxxxxx 24 hours ago 198MB
malcolmnetsec/pcap-capture 1.4.0 xxxxxxxxxxxx 24 hours ago 111MB
malcolmnetsec/file-monitor 1.4.0 xxxxxxxxxxxx 24 hours ago 355MB
malcolmnetsec/logstash-oss 1.4.0 xxxxxxxxxxxx 25 hours ago 1.24GB
malcolmnetsec/curator 1.4.0 xxxxxxxxxxxx 25 hours ago 303MB
malcolmnetsec/kibana-oss 1.4.0 xxxxxxxxxxxx 33 hours ago 944MB
malcolmnetsec/filebeat-oss 1.4.0 xxxxxxxxxxxx 11 days ago 459MB
malcolmnetsec/elastalert 1.4.0 xxxxxxxxxxxx 11 days ago 276MB
docker.elastic.co/elasticsearch/elasticsearch-oss 6.8.1 xxxxxxxxxxxx 5 weeks ago 769MB
You must run auth_setup.sh prior to running docker-compose pull. You should also ensure your system configuration and docker-compose.yml settings are tuned by running ./scripts/install.py or ./scripts/install.py --configure (see System configuration and tuning).

Import from pre-packaged tarballs
Once built, the malcolm_appliance_packager.sh script can be used to create pre-packaged Malcolm tarballs for import on another machine. See Pre-Packaged Installation Files for more information.

Starting and stopping Malcolm
Use the scripts in the scripts/ directory to start and stop Malcolm, view debug logs of a currently running instance, wipe the database and restore Malcolm to a fresh state, etc.

User interface
A few minutes after starting Malcolm (probably 5 to 10 minutes for Logstash to be completely up, depending on the system), the following services will be accessible:

Overview


Malcolm processes network traffic data in the form of packet capture (PCAP) files or Zeek logs. A packet capture appliance ("sensor") monitors network traffic mirrored to it over a SPAN port on a network switch or router, or using a network TAP device. Zeek logs are generated containing important session metadata from the traffic observed, which are then securely forwarded to a Malcolm instance. Full PCAP files are optionally stored locally on the sensor device for examination later.
Malcolm parses the network session data and enriches it with additional lookups and mappings including GeoIP mapping, hardware manufacturer lookups from organizationally unique identifiers (OUI) in MAC addresses, assigning names to network segments and hosts based on user-defined IP address and MAC mappings, performing TLS fingerprinting, and many others.
The enriched data is stored in an Elasticsearch document store in a format suitable for analysis through two intuitive interfaces: Kibana, a flexible data visualization plugin with dozens of prebuilt dashboards providing an at-a-glance overview of network protocols; and Moloch, a powerful tool for finding and identifying the network sessions comprising suspected security incidents. These tools can be accessed through a web browser from analyst workstations or for display in a security operations center (SOC). Logs can also optionally be forwarded on to another instance of Malcolm.
For smaller networks, use at home by network security enthusiasts, or in the field for incident response engagements, Malcolm can also easily be deployed locally on an ordinary consumer workstation or laptop. Malcolm can process local artifacts such as locally-generated Zeek logs, locally-captured PCAP files, and PCAP files collected offline without the use of a dedicated sensor appliance.
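As a purely conceptual sketch of one of the enrichment steps mentioned above (not Malcolm's actual code), an OUI lookup simply maps the first three octets of a MAC address to a hardware manufacturer; the tiny table below is a hypothetical stand-in for a real OUI database.
# Hypothetical example entries; a real OUI database contains many thousands.
OUI_TABLE = {
    '00:50:56': 'VMware, Inc.',
    'F4:CE:46': 'Hewlett Packard',
}

def lookup_vendor(mac_address):
    prefix = mac_address.upper()[:8]   # first three octets, e.g. '00:50:56'
    return OUI_TABLE.get(prefix, 'unknown')

print(lookup_vendor('00:50:56:ab:cd:ef'))   # -> VMware, Inc.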

Components
Malcolm leverages the following excellent open source tools, among others.
  • Moloch - for PCAP file processing, browsing, searching, analysis, and carving/exporting; Moloch itself consists of two parts:
    • moloch-capture - a tool for traffic capture, as well as offline PCAP parsing and metadata insertion into Elasticsearch
    • viewer - a browser-based interface for data visualization
  • Elasticsearch - a search and analytics engine for indexing and querying network traffic session metadata
  • Logstash and Filebeat - for ingesting and parsing Zeek log files and storing them in Elasticsearch in a format that Moloch understands in the same way it natively understands PCAP data
  • Kibana - for creating additional ad-hoc visualizations and dashboards beyond that which is provided by Moloch Viewer
  • Zeek - a network analysis framework and IDS
  • ClamAV - an antivirus engine for scanning files extracted by Zeek
  • CyberChef - a "swiss-army knife" data conversion tool
  • jQuery File Upload - for uploading PCAP files and Zeek logs for processing
  • Docker and Docker Compose - for simple, reproducible deployment of the Malcolm appliance across environments and to coordinate communication between its various components
  • nginx - for HTTPS and reverse proxying Malcolm components
  • ElastAlert - an alerting framework for Elasticsearch. Specifically, the BitSensor fork of ElastAlert, its Docker configuration and its corresponding Kibana plugin are used.

Development
Checking out the Malcolm source code results in the following subdirectories in your malcolm/ working copy:
  • curator - code and configuration for the curator container which define rules for closing and/or deleting old Elasticsearch indices
  • Dockerfiles - a directory containing build instructions for Malcolm's docker images
  • docs - a directory containing instructions and documentation
  • elastalert - code and configuration for the elastalert container which provides an alerting framework for Elasticsearch
  • elasticsearch - an initially empty directory where the Elasticsearch database instance will reside
  • elasticsearch-backup - an initially empty directory for storing Elasticsearch index snapshots
  • filebeat - code and configuration for the filebeat container which ingests Zeek logs and forwards them to the logstash container
  • file-monitor - code and configuration for the file-monitor container which can scan files extracted by Zeek
  • file-upload - code and configuration for the upload container which serves a web browser-based upload form for uploading PCAP files and Zeek logs, and which serves an SFTP share as an alternate method for upload
  • htadmin - configuration for the htadmin user account management container
  • iso-build - code and configuration for building an installer ISO for a minimal Debian-based Linux installation for running Malcolm
  • kibana - code and configuration for the kibana container for creating additional ad-hoc visualizations and dashboards beyond that which is provided by Moloch Viewer
  • logstash - code and configuration for the logstash container which parses Zeek logs and forwards them to the elasticsearch container
  • moloch - code and configuration for the moloch container which handles PCAP processing and which serves the Viewer application
  • moloch-logs - an initially empty directory to which the moloch container will write some debug log files
  • moloch-raw - an initially empty directory to which the moloch container will write captured PCAP files; as Moloch, as employed by Malcolm, is currently only used for processing previously-captured PCAP files, this directory is currently unused
  • nginx - configuration for the nginx reverse proxy container
  • pcap - an initially empty directory for PCAP files to be uploaded, processed, and stored
  • pcap-capture - code and configuration for the pcap-capture container which can capture network traffic
  • scripts - control scripts for starting, stopping, restarting, etc. Malcolm
  • shared - miscellaneous code used by various Malcolm components
  • zeek-logs - an initially empty directory for Zeek logs to be uploaded, processed, and stored
and the following files of special note:
  • auth.env - the script ./scripts/auth_setup.sh prompts the user for the administrator credentials used by the Malcolm appliance, and auth.env is the environment file where those values are stored
  • cidr-map.txt - specify custom IP address to network segment mapping
  • host-map.txt - specify custom IP and/or MAC address to host mapping
  • docker-compose.yml - the configuration file used by docker-compose to build, start, and stop an instance of the Malcolm appliance
  • docker-compose-standalone.yml - similar to docker-compose.yml, only used for the "packaged" installation of Malcolm
  • docker-compose-standalone-zeek-live.yml - identical to docker-compose-standalone.yml, only Filebeat is configured to monitor live Zeek logs (ie., being actively written to)

Building from source
Building the Malcolm docker images from scratch requires internet access to pull source files for its components. Once internet access is available, execute the following command to build all of the Docker images used by the Malcolm appliance:
$ ./scripts/build.sh
Then, go take a walk or something since it will be a while. When you're done, you can run docker images and see you have fresh images for:
  • malcolmnetsec/curator (based on debian:buster-slim)
  • malcolmnetsec/elastalert (based on bitsensor/elastalert)
  • malcolmnetsec/file-monitor (based on debian:buster-slim)
  • malcolmnetsec/file-upload (based on debian:buster-slim)
  • malcolmnetsec/filebeat-oss (based on docker.elastic.co/beats/filebeat-oss)
  • malcolmnetsec/htadmin (based on debian:buster-slim)
  • malcolmnetsec/kibana-oss (based on docker.elastic.co/kibana/kibana-oss)
  • malcolmnetsec/logstash-oss (based on centos:7)
  • malcolmnetsec/moloch (based on debian:stretch-slim)
  • malcolmnetsec/nginx-proxy (based on jwilder/nginx-proxy:alpine)
  • malcolmnetsec/pcap-capture (based on debian:buster-slim)
Additionally, the command will pull from Docker Hub:
  • docker.elastic.co/elasticsearch/elasticsearch-oss

Pre-Packaged installation files

Creating pre-packaged installation files
scripts/malcolm_appliance_packager.sh can be run to package up the configuration files (and, if necessary, the Docker images) which can be copied to a network share or USB drive for distribution to non-networked machines. For example:
$ ./scripts/malcolm_appliance_packager.sh 
You must set a username and password for Malcolm, and self-signed X.509 certificates will be generated
Administrator username: analyst
analyst password:
analyst password (again):

(Re)generate self-signed certificates for HTTPS access [Y/n]?

(Re)generate self-signed certificates for a remote log forwarder [Y/n]?

Store username/password for forwarding Logstash events to a secondary, external Elasticsearch instance [y/N]?
Packaged Malcolm to "/home/user/tmp/malcolm_20190513_101117_f0d052c.tar.gz"


Do you need to package docker images also [y/N]? y
This might take a few minutes...

Packaged Malcolm docker images to "/home/user/tmp/malcolm_20190513_101117_f0d052c_images.tar.gz"


To install Malcolm:
1. Run install.py
2. Follow the prompts

To start, stop, restart, etc. Malcolm:
Use the control scripts in the "scripts/" directory:
- start.sh (start Malcolm)
- stop.sh (stop Malcolm)
- restart.sh (restart Malcolm)
- logs.sh (monitor Malcolm logs)
- wipe.sh (stop Malcolm and clear its database)
- auth_setup.sh (change authentication-related settings)

A minute or so after starting Malcolm, the following services will be accessible:
- Moloch: https://localhost/
- Kibana: https://localhost:5601/
- PCAP Upload (web): https://localhost:8443/
- PCAP Upload (sftp): sftp://USERNAME@127.0.0.1:8022/files/
- Account management: https://localhost:488/
The above example will result in the following artifacts for distribution as explained in the script's output:
$ ls -lh
total 2.0G
-rwxr-xr-x 1 user user 61k May 13 11:32 install.py
-rw-r--r-- 1 user user 2.0G May 13 11:37 malcolm_20190513_101117_f0d052c_images.tar.gz
-rw-r--r-- 1 user user 683 May 13 11:37 malcolm_20190513_101117_f0d052c.README.txt
-rw-r--r-- 1 user user 183k May 13 11:32 malcolm_20190513_101117_f0d052c.tar.gz

Installing from pre-packaged installation files
If you have obtained pre-packaged installation files to install Malcolm on a non-networked machine via an internal network share or on a USB key, you likely have the following files:
  • malcolm_YYYYMMDD_HHNNSS_xxxxxxx.README.txt - This readme file contains minimal setup instructions for extracting the contents of the other tarballs and running the Malcolm appliance.
  • malcolm_YYYYMMDD_HHNNSS_xxxxxxx.tar.gz - This tarball contains the configuration files and directory configuration used by an instance of Malcolm. It can be extracted via tar -xf malcolm_YYYYMMDD_HHNNSS_xxxxxxx.tar.gz upon which a directory will be created (named similarly to the tarball) containing the directories and configuration files. Alternately, install.py can accept this filename as an argument and handle its extraction and initial configuration for you.
  • malcolm_YYYYMMDD_HHNNSS_xxxxxxx_images.tar.gz - This tarball contains the Docker images used by Malcolm. It can be imported manually via docker load -i malcolm_YYYYMMDD_HHNNSS_xxxxxxx_images.tar.gz
  • install.py - This install script can load the Docker images and extract Malcolm configuration files from the aforementioned tarballs and do some initial configuration for you.
Run install.py malcolm_XXXXXXXX_XXXXXX_XXXXXXX.tar.gz and follow the prompts. If you do not already have Docker and Docker Compose installed, the install.py script will help you install them.

Preparing your system

Recommended system requirements
Malcolm needs a reasonably up-to-date version of Docker and Docker Compose. In theory this should be possible on Linux, macOS, and recent Windows 10 releases, although so far it's only been tested on Linux and macOS hosts.
To quote the Elasticsearch documentation, "If there is one resource that you will run out of first, it will likely be memory." The same is true for Malcolm: you will want at least 16 gigabytes of RAM to run Malcolm comfortably. For processing large volumes of traffic, I'd recommend at a bare minimum a dedicated server with 16 cores and 16 gigabytes of RAM. Malcolm can run on less, but more is better. You're going to want as much hard drive space as possible, of course, as the amount of PCAP data you're able to analyze and store will be limited by your hard drive.
Moloch's wiki has a couple of documents (here and here and here and a calculator here) which may be helpful, although not everything in those documents will apply to a Docker-based setup like Malcolm.

System configuration and tuning
If you already have Docker and Docker Compose installed, the install.py script can still help you tune system configuration and docker-compose.yml parameters for Malcolm. To run it in "configuration only" mode, bypassing the steps to install Docker and Docker Compose, run it like this:
sudo ./scripts/install.py --configure
Although install.py will attempt to automate many of the following configuration and tuning parameters, they are nonetheless listed in the following sections for reference:

docker-compose.yml parameters
Edit docker-compose.yml and search for the ES_JAVA_OPTS key. Edit the -Xms4g -Xmx4g values, replacing 4g with a number that is half of your total system memory, or just under 32 gigabytes, whichever is less. So, for example, if I had 64 gigabytes of memory I would edit those values to be -Xms31g -Xmx31g. This indicates how much memory can be allocated to the Elasticsearch heaps. For a pleasant experience, I would suggest not using a value under 10 gigabytes. Similar values can be modified for Logstash with LS_JAVA_OPTS, where using 3 or 4 gigabytes is recommended.
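As a quick sanity check of that guidance (a sketch only, not an official Malcolm script), the heap value can be derived as half of total system memory, capped just under 32 gigabytes:
def elasticsearch_heap_gigabytes(total_memory_gb):
    # Half of system memory, but never 32g or more.
    return min(total_memory_gb // 2, 31)

for total in (16, 64, 128):
    heap = elasticsearch_heap_gigabytes(total)
    print('%d GB RAM -> ES_JAVA_OPTS: -Xms%dg -Xmx%dg' % (total, heap, heap))
# 16 GB RAM -> -Xms8g -Xmx8g; 64 GB and 128 GB RAM -> -Xms31g -Xmx31g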
Various other environment variables inside of docker-compose.yml can be tweaked to control aspects of how Malcolm behaves, particularly with regards to processing PCAP files and Zeek logs. The environment variables of particular interest are located near the top of that file under Commonly tweaked configuration options, which include:
  • INITIALIZEDB– indicates to Malcolm to create (or recreate) Moloch’s internal settings database on startup; this setting is managed by the wipe.sh and start.sh scripts and does not generally need to be changed manually
  • MANAGE_PCAP_FILES– if set to true, all PCAP files imported into Malcolm will be marked as available for deletion by Moloch if available storage space becomes too low (default false)
  • ZEEK_AUTO_ANALYZE_PCAP_FILES– if set to true, all PCAP files imported into Malcolm will automatically be analyzed by Zeek, and the resulting logs will also be imported (default false)
  • MOLOCH_ANALYZE_PCAP_THREADS– the number of threads available to Moloch for analyzing PCAP files (default 1)
  • ZEEK_AUTO_ANALYZE_PCAP_THREADS– the number of threads available to Malcolm for analyzing Zeek logs (default 1)
  • LOGSTASH_JAVA_EXECUTION_ENGINE– if set to true, Logstash will use the new Logstash Java Execution Engine which may significantly speed up Logstash startup and processing (default false, as it is currently considered experimental)
  • LOGSTASH_OUI_LOOKUP– if set to true, Logstash will map MAC addresses to vendors for all source and destination MAC addresses when analyzing Zeek logs (default true)
  • LOGSTASH_REVERSE_DNS– if set to true, Logstash will perform a reverse DNS lookup for all external source and destination IP address values when analyzing Zeek logs (default false)
  • ES_EXTERNAL_HOSTS– if specified (in the format '10.0.0.123:9200'), logs received by Logstash will be forwarded on to another external Elasticsearch instance in addition to the one maintained locally by Malcolm
  • ES_EXTERNAL_SSL– if set to true, Logstash will use HTTPS for the connection to external Elasticsearch instances specified in ES_EXTERNAL_HOSTS
  • ES_EXTERNAL_SSL_CERTIFICATE_VERIFICATION– if set to true, Logstash will require full SSL certificate validation; this may fail if using self-signed certificates (default false)
  • KIBANA_OFFLINE_REGION_MAPS– if set to true, a small internal server will be surfaced to Kibana to provide the ability to view region map visualizations even when an Internet connection is not available (default true)
  • CURATOR_CLOSE_COUNT and CURATOR_CLOSE_UNITS - determine behavior for automatically closing older Elasticsearch indices to conserve memory; see Elasticsearch index curation
  • CURATOR_DELETE_COUNT and CURATOR_DELETE_UNITS - determine behavior for automatically deleting older Elasticsearch indices to reduce disk usage; see Elasticsearch index curation
  • CURATOR_DELETE_GIGS - if the Elasticsearch indices representing the log data exceed this size, in gigabytes, older indices will be deleted to bring the total size back under this threshold; see Elasticsearch index curation
  • CURATOR_SNAPSHOT_DISABLED - if set to False, daily snapshots (backups) will be made of the previous day's Elasticsearch log index; see Elasticsearch index curation
  • AUTO_TAG– if set to true, Malcolm will automatically create Moloch sessions and Zeek logs with tags based on the filename, as described in Tagging (default true)
  • BEATS_SSL– if set to true, Logstash will require encrypted communications for any external Beats-based forwarders from which it will accept logs; if Malcolm is being used as a standalone tool then this can safely be set to false, but if external log feeds are to be accepted then setting it to true is recommended (default false)
  • ZEEK_EXTRACTOR_MODE– determines the file extraction behavior for file transfers detected by Zeek; see Automatic file extraction and scanning for more details
  • EXTRACTED_FILE_IGNORE_EXISTING– if set to true, files extant in ./zeek-logs/extract_files/ directory will be ignored on startup rather than scanned
  • EXTRACTED_FILE_PRESERVATION– determines behavior for preservation of Zeek-extracted files
  • VTOT_API2_KEY– used to specify a VirusTotal Public API v.20 key, which, if specified, will be used to submit hashes of Zeek-extracted files to VirusTotal
  • EXTRACTED_FILE_ENABLE_CLAMAV– if set to true (and VTOT_API2_KEY is unspecified), Zeek-extracted files will be scanned with ClamAV
  • EXTRACTED_FILE_ENABLE_FRESHCLAM– if set to true, ClamAV will periodically update virus databases
  • PCAP_ENABLE_NETSNIFF– if set to true, Malcolm will capture network traffic on the local network interface(s) indicated in PCAP_IFACE using netsniff-ng
  • PCAP_ENABLE_TCPDUMP– if set to true, Malcolm will capture network traffic on the local network interface(s) indicated in PCAP_IFACE using tcpdump; there is no reason to enable both PCAP_ENABLE_NETSNIFF and PCAP_ENABLE_TCPDUMP
  • PCAP_IFACE– used to specify the network interface(s) for local packet capture if PCAP_ENABLE_NETSNIFF or PCAP_ENABLE_TCPDUMP are enabled; for multiple interfaces, separate the interface names with a comma (eg., 'enp0s25' or 'enp10s0,enp11s0')
  • PCAP_ROTATE_MEGABYTES– used to specify how large a locally-captured PCAP file can become (in megabytes) before it is closed for processing and a new PCAP file created
  • PCAP_ROTATE_MINUTES– used to specify a time interval (in minutes) after which a locally-captured PCAP file will be closed for processing and a new PCAP file created
  • PCAP_FILTER– specifies a tcpdump-style filter expression for local packet capture; leave blank to capture all traffic

Linux host system configuration

Installing Docker
Docker installation instructions vary slightly by distribution. Please follow the links below to docker.com to find the instructions specific to your distribution:
After installing Docker, because Malcolm should be run as a non-root user, add your user to the docker group with something like:
$ sudo usermod -aG docker yourusername
Following this, either reboot or log out then log back in.
Docker starts automatically on DEB-based distributions. On RPM-based distributions, you need to start it manually or enable it using the appropriate systemctl or service command(s).
You can test docker by running docker info, or (assuming you have internet access), docker run --rm hello-world.

Installing docker-compose
Please follow this link on docker.com for instructions on installing docker-compose.

Operating system configuration
The host system (ie., the one running Docker) will need to be configured for the best possible Elasticsearch performance. Here are a few suggestions for Linux hosts (these may vary from distribution to distribution):
  • Append the following lines to /etc/sysctl.conf:
# the maximum number of open file handles
fs.file-max=65536

# the maximum number of user inotify watches
fs.inotify.max_user_watches=131072

# the maximum number of memory map areas a process may have
vm.max_map_count=262144

# decrease "swappiness" (swapping out runtime memory vs. dropping pages)
vm.swappiness=1

# the maximum number of incoming connections
net.core.somaxconn=65535

# the % of system memory fillable with "dirty" pages before flushing
vm.dirty_background_ratio=40

# maximum % of dirty system memory before committing everything
vm.dirty_ratio=80
  • Depending on your distribution, create either the file /etc/security/limits.d/limits.conf containing:
# the maximum number of open file handles
* soft nofile 65535
* hard nofile 65535
# do not limit the size of memory that can be locked
* soft memlock unlimited
* hard memlock unlimited
OR the file /etc/systemd/system.conf.d/limits.conf containing:
[Manager]
# the maximum number of open file handles
DefaultLimitNOFILE=65535:65535
# do not limit the size of memory that can be locked
DefaultLimitMEMLOCK=infinity
  • Change the readahead value for the disk where the Elasticsearch data will be stored. There are a few ways to do this. For example, you could add this line to /etc/rc.local (replacing /dev/sda with your disk block descriptor):
# change disk read-ahead value (# of blocks)
blockdev --setra 512 /dev/sda
  • Change the I/O scheduler to deadline or noop. Again, this can be done in a variety of ways. The simplest is to add elevator=deadline to the arguments in GRUB_CMDLINE_LINUX in /etc/default/grub, then running sudo update-grub2
  • If you are planning on using very large data sets, consider formatting the drive containing the elasticsearch volume as XFS.
After making all of these changes, do a reboot for good measure!

macOS host system configuration

Automatic installation using install.py
The install.py script will attempt to guide you through the installation of Docker and Docker Compose if they are not present. If that works for you, you can skip ahead to Configure docker daemon option in this section.

Install Homebrew
The easiest way to install and maintain docker on Mac is using the Homebrew cask. Execute the following in a terminal.
$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
$ brew install cask
$ brew tap caskroom/versions

Install docker-edge
$ brew cask install docker-edge
This will install the latest version of docker and docker-compose. It can be upgraded later using brew as well:
$ brew cask upgrade --no-quarantine docker-edge
You can now run docker from the Applications folder.

Configure docker daemon option
Some changes should be made for performance (this link gives a good succinct overview).
  • Resource allocation - For a good experience, you likely need at least a quad-core MacBook Pro with 16GB RAM and an SSD. I have run Malcolm on an older 2013 MacBook Pro with 8GB of RAM, but the more the better. Go in your system tray and select Docker → Preferences → Advanced. Set the resources available to docker to at least 4 CPUs and 8GB of RAM (>= 16GB is preferable).
  • Volume mount performance - You can speed up performance of volume mounts by removing unused paths from Docker → Preferences → File Sharing. For example, if you’re only going to be mounting volumes under your home directory, you could share /Users but remove other paths.
After making these changes, right click on the Docker icon in the system tray and select Restart.

Windows host system configuration
There are several ways of installing and running docker with Windows, and they vary depending on the version of Windows you are running, whether or not Hyper-V must be enabled (which is a requirement for VMWare, but is precluded by the recent non-virtual machine release of Docker).
As the author assumes that the target audience of this document is more likely to be running macOS or Linux, detailed instructions for Docker setup under Windows are not included here; instead, refer to Docker's official documentation for Windows.

Running Malcolm

Configure authentication
Run ./scripts/auth_setup.sh before starting Malcolm for the first time in order to:
  • define the administrator account username and password
  • specify whether or not to (re)generate the self-signed certificates used for HTTPS access
    • key and certificate files are located in the nginx/certs/ directory
  • specify whether or not to (re)generate the self-signed certificates used by a remote log forwarder (see the BEATS_SSL environment variable above)
    • certificate authority, certificate, and key files for Malcolm’s Logstash instance are located in the logstash/certs/ directory
    • certificate authority, certificate, and key files to be copied to and used by the remote log forwarder are located in the filebeat/certs/ directory
  • specify whether or not to store the username/password for forwarding Logstash events to a secondary, external Elasticsearch instance (see the ES_EXTERNAL_HOSTS, ES_EXTERNAL_SSL, and ES_EXTERNAL_SSL_CERTIFICATE_VERIFICATION environment variables above)
    • these parameters are stored securely in the Logstash keystore file logstash/certs/logstash.keystore

Account management
auth_setup.sh is used to define the username and password for the administrator account. Once Malcolm is running, the administrator account can be used to manage other user accounts via a Malcolm User Management page served over HTTPS on port 488 (eg., https://localhost:488 if you are connecting locally).
Malcolm user accounts can be used to access the interfaces of all of its components, including Moloch. Moloch uses its own internal database of user accounts, so when a Malcolm user account logs in to Moloch for the first time Malcolm creates a corresponding Moloch user account automatically. This being the case, it is not recommended to use the Moloch Users settings page or change the password via the Password form under the Moloch Settings page, as those settings would not be consistently used across Malcolm.
Users may change their passwords via the Malcolm User Management page by clicking User Self Service. A forgotten password can also be reset via an emailed link, though this requires SMTP server settings to be specified in htadmin/config.ini in the Malcolm installation directory.

Starting Malcolm
Docker compose is used to coordinate running the Docker containers. To start Malcolm, navigate to the directory containing docker-compose.yml and run:
$ ./scripts/start.sh
This will create the containers' virtual network and instantiate them, then leave them running in the background. The Malcolm containers may take several minutes to start up completely. To follow the debug output for an already-running Malcolm instance, run:
$ ./scripts/logs.sh
You can also use docker stats to monitor the resource utilization of running containers.
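For example, to print a one-time snapshot of container resource usage rather than a continuously-updating stream, you can pass docker stats its standard --no-stream flag:
$ docker stats --no-stream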

Stopping and restarting Malcolm
You can run ./scripts/stop.sh to stop the docker containers and remove their virtual network. Alternately, ./scripts/restart.sh will restart an instance of Malcolm. Because the data on disk is stored on the host in docker volumes, doing these operations will not result in loss of data.
Malcolm can be configured to be automatically restarted when the Docker system daemon restarts (for example, on system reboot). This behavior depends on the value of the restart: setting for each service in the docker-compose.yml file. This value can be set by running ./scripts/install.py --configure and answering "yes" to "Restart Malcolm upon system or Docker daemon restart?."
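As a minimal sketch, the resulting setting for one of Malcolm's services in docker-compose.yml looks something like the following (the service name is only an example; install.py applies the same restart: value to every service):
  elasticsearch:
    restart: unless-stopped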

Clearing Malcolm’s data
Run ./scripts/wipe.sh to stop the Malcolm instance and wipe its Elasticsearch database (including index snapshots).

Capture file and log archive upload
Malcolm serves a web browser-based upload form for uploading PCAP files and Zeek logs over HTTPS on port 8443 (eg., https://localhost:8443 if you are connecting locally).


Additionally, there is a writable files directory on an SFTP server served on port 8022 (eg., sftp://USERNAME@localhost:8022/files/ if you are connecting locally).
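A hypothetical SFTP upload session might look like the following (the username and file name are placeholders):
$ sftp -P 8022 analyst@localhost
sftp> cd files
sftp> put suspicious_traffic.pcap
sftp> bye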
The types of files supported are:
  • PCAP files (of mime type application/vnd.tcpdump.pcap or application/x-pcapng)
    • PCAPNG files are partially supported: Zeek is able to process PCAPNG files, but not all of Moloch's packet examination features work correctly
  • Zeek logs in archive files (application/gzip, application/x-gzip, application/x-7z-compressed, application/x-bzip2, application/x-cpio, application/x-lzip, application/x-lzma, application/x-rar-compressed, application/x-tar, application/x-xz, or application/zip)
    • where the Zeek logs are found in the internal directory structure in the archive file does not matter
Files uploaded via these methods are monitored and moved automatically to other directories for processing to begin, generally within one minute of completion of the upload.

Tagging
In addition to being processed for upload, Malcolm events will be tagged according to the components of the filenames of the PCAP files or Zeek log archive files from which the events were parsed. For example, records created from a PCAP file named ACME_Scada_VLAN10.pcap would be tagged with ACME, Scada, and VLAN10. Tags are extracted from filenames by splitting on the characters "," (comma), "-" (dash), and "_" (underscore). These tags are viewable and searchable (via the tags field) in Moloch and Kibana. This behavior can be changed by modifying the AUTO_TAG environment variable in docker-compose.yml.
Tags may also be specified manually with the browser-based upload form.

Processing uploaded PCAPs with Zeek
The browser-based upload interface also provides the ability to specify tags for events extracted from the files uploaded. Additionally, an Analyze with Zeek checkbox may be used when uploading PCAP files to cause them to be analyzed by Zeek, similarly to the ZEEK_AUTO_ANALYZE_PCAP_FILES environment variable described above, only on a per-upload basis. Zeek can also automatically carve out files from file transfers; see Automatic file extraction and scanning for more details.

Live analysis

Capturing traffic on local network interfaces
Malcolm's pcap-capture container can capture traffic on one or more local network interfaces and periodically rotate these files for processing with Moloch and Zeek. The pcap-capture Docker container is started with additional privileges (IPC_LOCK, NET_ADMIN, NET_RAW, and SYS_ADMIN) in order for it to be able to open network interfaces in promiscuous mode for capture.
The environment variables prefixed with PCAP_ in the docker-compose.yml file determine local packet capture behavior. Local capture can also be configured by running ./scripts/install.py --configure and answering "yes" to "Should Malcolm capture network traffic to PCAP files?."
Note that currently Microsoft Windows and Apple macOS platforms run Docker inside of a virtualized environment. This would require additional configuration of virtual interfaces and port forwarding in Docker, the process for which is outside of the scope of this document.

Zeek logs from an external source
Malcolm’s Logstash instance can also be configured to accept Zeek logs from a remote forwarder by running ./scripts/install.py --configure and answering "yes" to "Expose Logstash port to external hosts?." Enabling encrypted transport of these logs files is discussed in Configure authentication and the description of the BEATS_SSL environment variable in the docker-compose.yml file.
Configuring Filebeat to forward Zeek logs to Malcolm might look something like this example filebeat.yml:
filebeat.inputs:
- type: log
  paths:
    - /var/zeek/*.log
  fields_under_root: true
  fields:
    type: "session"
  compression_level: 0
  exclude_lines: ['^\s*#']
  scan_frequency: 10s
  clean_inactive: 180m
  ignore_older: 120m
  close_inactive: 90m
  close_renamed: true
  close_removed: true
  close_eof: false
  clean_renamed: true
  clean_removed: true

output.logstash:
  hosts: ["192.0.2.123:5044"]
  ssl.enabled: true
  ssl.certificate_authorities: ["/foo/bar/ca.crt"]
  ssl.certificate: "/foo/bar/client.crt"
  ssl.key: "/foo/bar/client.key"
  ssl.supported_protocols: "TLSv1.2"
  ssl.verification_mode: "none"
A future release of Malcolm is planned which will include a customized Linux-based network sensor appliance OS installation image to help automate this setup.

Monitoring a local Zeek instance
Another option for analyzing live network data is to run an external copy of Zeek (ie., not within Malcolm) so that the log files it creates are seen by Malcolm and automatically processed as they are written.
To do this, you'll need to configure Malcolm's local Filebeat log forwarder so that it will continue to look for changes to Zeek logs that are actively being written to even once it reaches the end of the file. You can do this by replacing docker-compose.yml with docker-compose-zeek-live.yml before starting Malcolm:
$ mv -f ./docker-compose-zeek-live.yml ./docker-compose.yml
Alternately, you can run the start.sh script (and the other control scripts) like this, without modifying your original docker-compose.yml file:
$ ./scripts/start.sh ./docker-compose-zeek-live.yml
Once Malcolm has been started, cd into ./zeek-logs/current/ and run bro from inside that directory.
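For example, a rough sketch of starting an external Bro/Zeek instance so that its logs are written where Malcolm's Filebeat is watching (the capture interface and the local policy script are placeholders for whatever your sensor normally uses):
$ cd ./zeek-logs/current/
$ sudo bro -i eth0 local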

Moloch
The Moloch interface will be accessible over HTTPS on port 443 at the docker host's IP address (eg., https://localhost if you are connecting locally).

Zeek log integration
A stock installation of Moloch extracts all of its network connection ("session") metadata ("SPI" or "Session Profile Information") from full packet capture artifacts (PCAP files). Zeek (formerly Bro) generates similar session metadata, linking network events to sessions via a connection UID. Malcolm aims to facilitate analysis of Zeek logs by mapping values from Zeek logs to the Moloch session database schema for equivalent fields, and by creating new "native" Moloch database fields for all the other Zeek log values for which there is not currently an equivalent in Moloch:


In this way, when full packet capture is an option, analysis of PCAP files can be enhanced by the additional information Zeek provides. When full packet capture is not an option, similar analysis can still be performed using the same interfaces and processes using the Zeek logs alone.
One value of particular mention is Zeek Log Type (zeek.logType in Elasticsearch). This value corresponds to the kind of Zeek .log file from which the record was created. In other words, a search could be restricted to records from conn.log by searching zeek.logType == conn, or restricted to records from weird.log by searching zeek.logType == weird. In this same way, to view only records from Zeek logs (excluding any from PCAP files), use the special Moloch EXISTS filter, as in zeek.logType == EXISTS!. On the other hand, to exclude Zeek logs and only view records from PCAP files, use zeek.logType != EXISTS!.
Click the icon of the owl in the upper-left-hand corner of the page to access the Moloch usage documentation (accessible at https://localhost/help if you are connecting locally), click the Fields label in the navigation pane, then search for zeek to see a list of the other Zeek log types and fields available to Malcolm.


The values of records created from Zeek logs can be expanded and viewed like any native Moloch session by clicking the plus icon to the left of the record in the Sessions view. Note, however, that the full packet contents are not available for these Zeek records, so buttons dealing with viewing and exporting PCAP information will not behave as they would for records from PCAP files. Clicking the Source Raw or Destination Raw buttons will, however, allow you to view the original Zeek log (formatted as JSON) from which the record was created. Other than that, Zeek records and their values are usable in Malcolm just like native PCAP session records.


Help
Click the icon of the owl in the upper-left-hand corner of the page to access the Moloch usage documentation (accessible at https://localhost/help if you are connecting locally), which includes such topics as search syntax, the Sessions view, SPIView, SPIGraph, and the Connections graph.

Sessions
The Sessions view provides low-level details of the sessions being investigated, whether they be Moloch sessions created from PCAP files or Zeek logs mapped to the Moloch session database schema.


The Sessions view contains many controls for filtering the sessions displayed from all sessions down to sessions of interest:
  • search bar: Indicated by the magnifying glass icon, the search bar allows defining filters on session/log metadata
  • time bounding controls: The start icon, Start, End, Bounding, and Interval fields, and the date histogram can be used to visually zoom and pan the time range being examined.
  • search button: The Search button re-runs the sessions query with the filters currently specified.
  • views button: Indicated by the eyeball icon, views allow overlaying additional previously-specified filters onto the current sessions filters. For convenience, Malcolm provides several Moloch preconfigured views including several on the zeek.logType field.

  • map: A global map can be expanded by clicking the globe icon. This allows filtering sessions by IP-based geolocation when possible.
Some of these filter controls are also available on other Moloch pages (such as SPIView, SPIGraph, Connections, and Hunt).
The number of sessions displayed per page, as well as the page currently displayed, can be specified using the paging controls underneath the time bounding controls.
The sessions table is displayed below the filter controls. This table contains the sessions/logs matching the specified filters.
To the left of the column headers are two buttons. The Toggle visible columns button, indicated by a grid icon, allows toggling which columns are displayed in the sessions table. The Save or load custom column configuration button, indicated by a columns icon, allows saving the current displayed columns or loading previously-saved configurations. This is useful for customizing which columns are displayed when investigating different types of traffic. Column headers can also be clicked to sort the results in the table, and column widths may be adjusted by dragging the separators between column headers.
Details for individual sessions/logs can be expanded by clicking the plus icon on the left of each row. Each row may contain multiple sections and controls, depending on whether the row represents a Moloch session or a Zeek log. Clicking the field names and values in the details sections allows additional filters to be specified or summary lists of unique values to be exported.
When viewing Moloch session details (ie., a session generated from a PCAP file), an additional packets section will be visible underneath the metadata sections. When the details of a session of this type are expanded, Moloch will read the packet(s) comprising the session for display here. Various controls can be used to adjust how the packet is displayed (enabling natural decoding and enabling Show Images & Files may produce visually pleasing results), and other options (including PCAP download, carving images and files, applying decoding filters, and examining payloads in CyberChef) are available.
See also Moloch's usage documentation for more information on the Sessions view.

PCAP Export
Clicking the down arrow icon to the far right of the search bar presents a list of actions including PCAP Export (see Moloch's sessions help for information on the other actions). When full PCAP sessions are displayed, the PCAP Export feature allows you to create a new PCAP file from the matching Moloch sessions, including controls for which sessions are included (open items, visible items, or all matching items) and whether or not to include linked segments. Click the Export PCAP button to generate the PCAP, after which you'll be presented with a browser download dialog to save or open the file. Note that depending on the scope of the filters specified this might take a long time (or possibly even time out).


See the issues section of this document for an error that can occur using this feature when Zeek log sessions are displayed.

SPIView
Moloch's SPI (Session Profile Information) View provides a quick and easy-to-use interface for exploring session/log metrics. The SPIView page lists categories for general session metrics (eg., protocol, source and destination IP addresses, source and destination ports, etc.) as well as for all of the various types of network traffic understood by Moloch and Zeek. These categories can be expanded and the top n values displayed, along with each value's cardinality, for the fields of interest they contain.


Click the plus icon to the right of a category to expand it. The values for specific fields are displayed by clicking the field description in the field list underneath the category name. The list of field names can be filtered by typing part of the field name in the Search for fields to display in this category text input. The Load All and Unload All buttons can be used to toggle display of all of the fields belonging to that category. Once displayed, a field's name or one of its values may be clicked to provide further actions for filtering or displaying that field or its values. Of particular interest may be the Open [fieldname] SPI Graph option when clicking on a field's name. This will open a new tab with the SPI Graph (see below) populated with the field's top values.
Note that because the SPIView page can potentially run many queries, SPIView limits the search domain to seven days (in other words, seven indices, as each index represents one day's worth of data). When using SPIView, you will have best results if you limit your search time frame to less than or equal to seven days. This limit can be adjusted by editing the spiDataMaxIndices setting in config.ini and rebuilding the malcolmnetsec/moloch docker container.
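As a point of reference, the relevant line in Moloch's config.ini is a simple key/value entry; the value below just mirrors the seven-day default described above:
spiDataMaxIndices=7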
See also Moloch's usage documentation for more information on SPIView.

SPIGraph
Moloch's SPI (Session Profile Information) Graph visualizes the occurrence of some field's top n values over time, and (optionally) geographically. This is particularly useful for identifying trends in a particular type of communication over time: traffic using a particular protocol when seen sparsely at regular intervals on that protocol's date histogram in the SPIGraph may indicate a connection check, polling, or beaconing (for example, see the llmnr protocol in the screenshot below).


Controls can be found underneath the time bounding controls for selecting the field of interest, the number of elements to be displayed, the sort order, and a periodic refresh of the data.
See also Moloch's usage documentation for more information on SPIGraph.

Connections
The Connections page presents network communications via a force-directed graph, making it easy to visualize logical relationships between network hosts.


Controls are available for specifying the query size (where smaller values will execute more quickly but may only contain an incomplete representation of the top n sessions, and larger values may take longer to execute but will be more complete), which fields to use as the source and destination for node values, a minimum connections threshold, and the method for determining the "weight" of the link between two nodes. As is the case with most other visualizations in Moloch, the graph is interactive: clicking on a node or the link between two nodes can be used to modify query filters, and the nodes themselves may be repositioned by dragging and dropping them. A node's color indicates whether it communicated as a source/originator, a destination/responder, or both.
While the default source and destination fields are Src IP and Dst IP:Dst Port, the Connections view is able to use any combination of any of the fields populated by Moloch and Zeek. For example:
  • Src OUI and Dst OUI (hardware manufacturers)
  • Src IP and Protocols
  • Originating Network Segment and Responding Network Segment (see CIDR subnet to network segment name mapping)
  • Originating GeoIP City and Responding GeoIP City
or any other combination of these or other fields.
See also Moloch's usage documentation for more information on the Connections graph.

Hunt
Moloch's Hunt feature allows an analyst to search within the packets themselves (including payload data) rather than simply searching the session metadata. The search string may be specified using ASCII (with or without case sensitivity), hex codes, or regular expressions. Once a hunt job is complete, matching sessions can be viewed in the Sessions view.
Clicking the Create a packet search job button on the Hunt page will allow you to specify the following parameters for a new hunt job:
  • a packet search job name
  • a maximum number of packets to examine per session
  • the search string and its format (ascii, ascii (case sensitive), hex, regex, or hex regex)
  • whether to search source packets, destination packets, or both
  • whether to search raw or reassembled packets
Click the Create button to begin the search. Moloch will scan the source PCAP files from which the sessions were created according to the search criteria. Note that whatever filters were specified when the hunt job is executed will apply to the hunt job as well; the number of sessions matching the current filters will be displayed above the hunt job parameters with text like "ⓘ Creating a new packet search job will search the packets of # sessions."


Once a hunt job is submitted, it will be assigned a unique hunt ID (a long unique string of characters like yuBHAGsBdljYmwGkbEMm) and its progress will be updated periodically in the Hunt Job Queue with the execution percent complete, the number of matches found so far, and the other parameters with which the job was submitted. More details for the hunt job can be viewed by expanding its row with the plus icon on the left.


Once the hunt job is complete (and a minute or so has passed, as the huntId must be added to the matching session records in the database), click the folder icon on the right side of the hunt job row to open a new Sessions tab with the search bar prepopulated to filter to sessions with packets matching the search criteria.


From this list of filtered sessions you can expand session details and explore packet payloads which matched the hunt search criteria.
The hunt feature is available only for sessions created from full packet capture data, not Zeek logs. This being the case, it is a good idea to click the eyeball icon and select the PCAP Files view to exclude Zeek logs from candidate sessions prior to using the hunt feature.
See also Moloch's usage documentation for more information on the hunt feature.

Statistics
Moloch provides several other reports which show information about the state of Moloch and the underlying Elasticsearch database.
The Files list displays a list of PCAP files processed by Moloch, the date and time of the earliest packet in each file, and the file size:


The ES Indices list (available under the Stats page) lists the Elasticsearch indices within which log data is contained:


The History view provides a historical list of queries issued to Moloch and the details of those queries:


See also Moloch's usage documentation for more information on the Files list, statistics, and history.

Settings

General settings
The Settings page can be used to tweak Moloch preferences, define additional custom views and column configurations, adjust the color theme, and more.
See Moloch's usage documentation for more information on settings.



Kibana
While Moloch provides very nice visualizations, especially for network traffic, Kibana (an open source general-purpose data visualization tool for Elasticsearch) can be used to create custom visualizations (tables, charts, graphs, dashboards, etc.) using the same data.
The Kibana container can be accessed over HTTPS on port 5601 (eg., https://localhost:5601 if you are connecting locally). Several preconfigured dashboards for Zeek logs are included in Malcolm's Kibana configuration.
The official Kibana User Guide has excellent tutorials for a variety of topics.
Kibana has several components for data searching and visualization:

Discover
The Discover view enables you to view events on a record-by-record basis (similar to a session record in Moloch or an individual line from a Zeek log). See the official Kibana User Guide for information on using the Discover view:

Screenshots






Visualizations and dashboards

Prebuilt visualizations and dashboards
Malcolm comes with dozens of prebuilt visualizations and dashboards for the network traffic represented by each of the Zeek log types. Click Dashboard to see a list of these dashboards. As is the case with all Kibana's visualizations, all of the charts, graphs, maps, and tables are interactive and can be clicked on to narrow or expand the scope of the data you are investigating. Similarly, click Visualize to explore the prebuilt visualizations used to build the dashboards.
Many of Malcolm's prebuilt visualizations for Zeek logs are heavily inspired by the excellent Kibana Dashboards that are part of Security Onion.

Screenshots














Building your own visualizations and dashboards
See the official Kibana User Guide for information on creating your own visualizations and dashboards:

Screenshots




Other Malcolm features

Automatic file extraction and scanning
Malcolm can leverage Zeek's knowledge of network protocols to automatically detect file transfers and extract those files from PCAPs as Zeek processes them. This behavior can be enabled globally by modifying the ZEEK_EXTRACTOR_MODE environment variable in docker-compose.yml, or on a per-upload basis for PCAP files uploaded via the browser-based upload form when Analyze with Zeek is selected.
To specify which files should be extracted, the following values are acceptable in ZEEK_EXTRACTOR_MODE:
  • none: no file extraction
  • interesting: extraction of files with mime types of common attack vectors
  • mapped: extraction of files with recognized mime types
  • known: extraction of files for which any mime type can be determined
  • all: extract all files
Extracted files can be examined through either (but not both) of two supported scanning methods (one of which, scanning with ClamAV, is shown being enabled in the installation example later in this document).
Files which are flagged as potentially malicious via either of these methods will be logged as Zeek signatures.log entries, and can be viewed in the Signatures dashboard in Kibana.
The EXTRACTED_FILE_PRESERVATION environment variable in docker-compose.yml determines the behavior for preservation of Zeek-extracted files (a combined example of these extraction settings appears after this list):
  • quarantined: preserve only flagged files in ./zeek-logs/extract_files/quarantine
  • all: preserve flagged files in ./zeek-logs/extract_files/quarantine and all other extracted files in ./zeek-logs/extract_files/preserved
  • none: preserve no extracted files
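Putting the two settings together, a rough sketch of these environment variables in docker-compose.yml might look like the following (the service name and the exact stanza layout are assumptions; the values shown simply match the choices made in the installation example later in this document):
  # hypothetical excerpt; the service name is a placeholder
  zeek-extraction:
    environment:
      ZEEK_EXTRACTOR_MODE: "interesting"
      EXTRACTED_FILE_PRESERVATION: "quarantined"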

Automatic host and subnet name assignment

IP/MAC address to hostname mapping via host-map.txt
The host-map.txt file in the Malcolm installation directory can be used to define names for network hosts based on IP and/or MAC addresses in Zeek logs. The default empty configuration looks like this:
# IP or MAC address to host name map:
# address|host name|required tag
#
# where:
# address: comma-separated list of IPv4, IPv6, or MAC addresses
# eg., 172.16.10.41, 02:42:45:dc:a2:96, 2001:0db8:85a3:0000:0000:8a2e:0370:7334
#
# host name: host name to be assigned when event address(es) match
#
# required tag (optional): only check match and apply host name if the event
# contains this tag
#
Each non-comment line (not beginning with a #) defines an address-to-name mapping for a network host. For example:
127.0.0.1,127.0.1.1,::1|localhost|
192.168.10.10|office-laptop.intranet.lan|
06:46:0b:a6:16:bf|serial-host.intranet.lan|testbed
Each line consists of three |-separated fields: address(es), hostname, and, optionally, a tag which, if specified, must belong to a log for the matching to occur.
As Zeek logs are processed into Malcolm's Elasticsearch instance, the log's source and destination IP and MAC address fields (zeek.orig_h, zeek.resp_h, zeek.orig_l2_addr, and zeek.resp_l2_addr, respectively) are compared against the lists of addresses in host-map.txt. When a match is found, a new field is added to the log: zeek.orig_hostname or zeek.resp_hostname, depending on whether the matching address belongs to the originating or responding host. If the third field (the "required tag" field) is specified, a log must also contain that value in its tags field in addition to matching the IP or MAC address specified in order for the corresponding _hostname field to be added.
zeek.orig_hostname and zeek.resp_hostname may each contain multiple values. For example, if both a host's source IP address and source MAC address were matched by two different lines, zeek.orig_hostname would contain the hostname values from both matching lines.

CIDR subnet to network segment name mapping via cidr-map.txt
The cidr-map.txt file in the Malcolm installation directory can be used to define names for network segments based on IP addresses in Zeek logs. The default empty configuration looks like this:
# CIDR to network segment format:
# IP(s)|segment name|required tag
#
# where:
# IP(s): comma-separated list of CIDR-formatted network IP addresses
# eg., 10.0.0.0/8, 169.254.0.0/16, 172.16.10.41
#
# segment name: segment name to be assigned when event IP address(es) match
#
# required tag (optional): only check match and apply segment name if the event
# contains this tag
#
Each non-comment line (not beginning with a #) defines a subnet-to-name mapping for a network segment. For example:
192.168.50.0/24,192.168.40.0/24,10.0.0.0/8|corporate|
192.168.100.0/24|control|
192.168.200.0/24|dmz|
172.16.0.0/12|virtualized|testbed
Each line consists of three |-separated fields: CIDR-formatted subnet IP range(s), subnet name, and, optionally, a tag which, if specified, must belong to a log for the matching to occur.
As Zeek logs are processed into Malcolm's Elasticsearch instance, the log's source and destination IP address fields (zeek.orig_h and zeek.resp_h, respectively) are compared against the lists of addresses in cidr-map.txt. When a match is found, a new field is added to the log: zeek.orig_segment or zeek.resp_segment, depending on whether the matching address belongs to the originating or responding host. If the third field (the "required tag" field) is specified, a log must also contain that value in its tags field in addition to its IP address falling within the subnet specified in order for the corresponding _segment field to be added.
zeek.orig_segment and zeek.resp_segment may each contain multiple values. For example, if cidr-map.txt specifies multiple overlapping subnets on different lines, zeek.orig_segment would contain the segment names from both matching lines if zeek.orig_h belonged to both subnets.
If both zeek.orig_segment and zeek.resp_segment are added to a log, and if they contain different values, the tag cross_segment will be added to the log's tags field for convenient identification of cross-segment traffic. This traffic could be easily visualized using Moloch's Connections graph, by setting the Src: value to Originating Network Segment and the Dst: value to Responding Network Segment:


Applying mapping changes
When changes are made to either cidr-map.txt or host-map.txt, Malcolm's Logstash container must be restarted. The easiest way to do this is to restart malcolm via restart.sh (see Stopping and restarting Malcolm).

Elasticsearch index curation
Malcolm uses Elasticsearch Curator to periodically examine indices representing the log data and perform actions on indices meeting criteria for age or disk usage. The environment variables prefixed with CURATOR_ in the docker-compose.yml file determine the criteria for these actions, which include periodically closing old indices, deleting indices older than a given threshold, and deleting the oldest indices when the database exceeds a given size (see the corresponding prompts in the installation example later in this document).
This behavior can also be modified by running ./scripts/install.py --configure.
Other custom filters and actions may be defined by the user by manually modifying the action_file.yml file used by the curator container and ensuring that it is mounted into the container as a volume in the curator: section of your docker-compose.yml file:
  curator:
    volumes:
      - ./curator/config/action_file.yml:/config/action_file.yml
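For reference only, a custom action in a Curator action_file.yml generally follows the shape below. This is a minimal sketch of Curator's own file format rather than Malcolm's shipped configuration; the index prefix is assumed from the sessions2-* index pattern mentioned elsewhere in this document, and the 30-day threshold is arbitrary:
actions:
  1:
    action: close
    description: "Close indices older than 30 days"
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: sessions2-
    - filtertype: age
      source: creation_date
      direction: older
      unit: days
      unit_count: 30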
The settings governing index curation can affect Malcolm's performance in both log ingestion and queries, and there are caveats that should be taken into consideration when configuring this feature. Please read the Elasticsearch documentation linked in this section with regards to index curation.
Index curation only deals with disk space consumed by Elasticsearch indices: it does not have anything to do with PCAP file storage. The MANAGE_PCAP_FILES environment variable in the docker-compose.yml file can be used to allow Moloch to prune old PCAP files based on available disk space.

Known issues

PCAP file export error when Zeek logs are in Moloch search results
Moloch has a nice feature that allows you to export PCAP files matching the filters currently populating the search field. However, Moloch viewer will raise an exception if records created from Zeek logs are found among the search results to be exported. For this reason, if you are using the export PCAP feature it is recommended that you apply the PCAP Files view to filter your search results prior to doing the export.

Manual Kibana index pattern refresh
Because some fields are created in Elasticsearch dynamically when Zeek logs are ingested by Logstash, they may not have been present when Kibana configures its index pattern field mapping during initialization. As such, those fields will not show up in Kibana visualizations until Kibana’s copy of the field list is refreshed. Malcolm periodically refreshes this list, but if fields are missing from your visualizations you may wish to do it manually.
After Malcolm ingests your data (or, more specifically, after it has ingested a new log type it has not seen before) you may manually refresh Kibana’s field list by clicking Management → Index Patterns, then selecting the sessions2-* index pattern and clicking the reload button near the upper-right of the window.


Installation example using Ubuntu 18.04 LTS
Here's a step-by-step example of getting Malcolm from GitHub, configuring your system and your Malcolm instance, and running it on a system running Ubuntu Linux. Your mileage may vary depending on your individual system configuration, but this should be a good starting point.
You can use git to clone Malcolm into a local working copy, or you can download and extract the artifacts from the latest release.
To install Malcolm from the latest Malcolm release, browse to the Malcolm releases page on GitHub and download at a minimum install.py and the malcolm_YYYYMMDD_HHNNSS_xxxxxxx.tar.gz file, then navigate to your downloads directory:
user@host:~$ cd Downloads/
user@host:~/Downloads$ ls
install.py malcolm_20190611_095410_ce2d8de.tar.gz
If you are obtaining Malcolm using git instead, run the following command to clone Malcolm into a local working copy:
user@host:~$ git clone https://github.com/idaholab/Malcolm
Cloning into 'Malcolm'...
remote: Enumerating objects: 443, done.
remote: Counting objects: 100% (443/443), done.
remote: Compressing objects: 100% (310/310), done.
remote: Total 443 (delta 81), reused 441 (delta 79), pack-reused 0
Receiving objects: 100% (443/443), 6.87 MiB | 18.86 MiB/s, done.
Resolving deltas: 100% (81/81), done.

user@host:~$ cd Malcolm/
Next, run the install.py script to configure your system. Replace user in this example with your local account username, and follow the prompts. Most questions have an acceptable default you can accept by pressing the Enter key. Depending on whether you are installing Malcolm from the release tarball or inside of a git working copy, the questions below will be slightly different, but for the most part are the same.
user@host:~/Downloads$ sudo python3 install.py
Installing required packages: ['apache2-utils', 'make', 'openssl']

"docker info" failed, attempt to install Docker? (Y/n): y

Attempt to install Docker using official repositories? (Y/n): y
Installing required packages: ['apt-transport-https', 'ca-certificates', 'curl', 'gnupg-agent', 'software-properties-common']
Installing docker packages: ['docker-ce', 'docker-ce-cli', 'containerd.io']
Installation of docker packages apparently succeeded

Add a non-root user to the "docker" group? (y/n): y

Enter user account: user

Add another non-root user to the "docker" group? (y/n): n

"docker-compose version" failed, attempt to install docker-compose? (Y/n): y

Install docker-compose directly from docker github? (Y/n): y
Download and installation of docker-compose apparently succeeded


fs.file-max increases allowed maximum for file handles
fs.file-max= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y

fs.inotify.max_user_watches increases allowed maximum for monitored files
fs.inotify.max_user_watches= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y


vm.max_map_count increases allowed maximum for memory segments
vm.max_map_count= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y


net.core.somaxconn increases allowed maximum for socket connections
net.core.somaxconn= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y


vm.swappiness adjusts the preference of the system to swap vs. drop runtime memory pages
vm.swappiness= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y


vm.dirty_background_ratio defines the percentage of system memory fillable with "dirty" pages before flushing
vm.dirty_background_ratio= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y


vm.dirty_ratio defines the maximum percentage of dirty system memory before committing everything
vm.dirty_ratio= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y


/etc/security/limits.d/limits.conf increases the allowed maximums for file handles and memlocked segments
/etc/security/limits.d/limits.conf does not exist, create it? (Y/n): y

The "haveged" utility may help improve Malcolm startup times by providing entropy for the Linux kernel.
Install haveged? (y/N): y
Installing haveged packages: ['haveged']
Installation of haveged packages apparently succeeded
At this point, if you are installing from a release tarball you will be asked if you would like to extract the contents of the tarball and to specify the installation directory:
Extract Malcolm runtime files from /home/user/Downloads/malcolm_20190611_095410_ce2d8de.tar.gz (Y/n): y

Enter installation path for Malcolm [/home/user/Downloads/malcolm]: /home/user/Malcolm
Malcolm runtime files extracted to /home/user/Malcolm
Alternately, if you are configuring Malcolm from within a git working copy, install.py will now exit. Run install.py again like you did at the beginning of the example, only remove the sudo and add --configure to run install.py in "configuration only" mode.
user@host:~/Malcolm$ python3 scripts/install.py --configure
Now that any necessary system configuration changes have been made, the local Malcolm instance will be configured:
Setting 10g for Elasticsearch and 3g for Logstash. Is this OK? (Y/n): y

Restart Malcolm upon system or Docker daemon restart? (y/N): y

Select Malcolm restart behavior ('no', 'on-failure', 'always', 'unless-stopped'): unless-stopped

Periodically close old Elasticsearch indices? (Y/n): y

Indices older than 5 years will be periodically closed. Is this OK? (Y/n): n

Enter index close threshold (eg., 90 days, 2 years, etc.): 1 years

Indices older than 1 years will be periodically closed. Is this OK? (Y/n): y

Periodically delete old Elasticsearch indices? (Y/n): y

Indices older than 10 years will be periodically deleted. Is this OK? (Y/n): n

Enter index delete threshold (eg., 90 days, 2 years, etc.): 5 years

Indices older than 5 years will be periodically deleted. Is this OK? (Y/n): y

Periodically delete the oldest Elasticsearch indices when the database exceeds a certain size? (Y/n ): y

Indices will be deleted when the database exceeds 10000 gigabytes. Is this OK? (Y/n): n

Enter index threshold in gigabytes: 100

Indices will be deleted when the database exceeds 100 gigabytes. Is this OK? (Y/n): y

Automatically analyze all PCAP files with Zeek? (y/N): y

Perform reverse DNS lookup locally for source and destination IP addresses in Zeek logs? (y/N): n

Perform hardware vendor OUI lookups for MAC addresses? (Y/n): y

Expose Logstash port to external hosts? (y/N): n

Forward Logstash logs to external Elasticstack instance? (y/N): n

Enable file extraction with Zeek? (y/N): y

Select file extraction behavior ('none', 'known', 'mapped', 'all', 'interesting'): interesting

Select file preservation behavior ('quarantined', 'all', 'none'): quarantined

Scan extracted files with ClamAV? (y/N): y

Download updated ClamAV virus signatures periodically? (Y/n): y

Should Malcolm capture network traffic to PCAP files? (y/N): y

Specify capture interface(s) (comma-separated): eth0

Capture packets using netsniff-ng? (Y/n): y

Capture packets using tcpdump? (y/N): n

Malcolm has been installed to /home/user/Malcolm. See README.md for more information.
Scripts for starting and stopping Malcolm and changing authentication-related settings can be found
in /home/user/Malcolm/scripts.
At this point you should reboot your computer so that the new system settings can be applied. After rebooting, log back in and return to the directory to which Malcolm was installed (or to which the git working copy was cloned).
Now we need to set up authentication and generate some unique self-signed SSL certificates. You can replace analyst in this example with whatever username you wish to use to log in to the Malcolm web interface.
user@host:~/Malcolm$ ./scripts/auth_setup.sh
Username: analyst
analyst password:
analyst password (again):

(Re)generate self-signed certificates for HTTPS access [Y/n]? y

(Re)generate self-signed certificates for a remote log forwarder [Y/n]? y

Store username/password for forwarding Logstash events to a secondary, external Elasticsearch instance [y/N]? n
For now, rather than build Malcolm from scratch, we'll pull images from Docker Hub:
user@host:~/Malcolm$ docker-compose pull
Pulling elasticsearch ... done
Pulling kibana ... done
Pulling elastalert ... done
Pulling curator ... done
Pulling logstash ... done
Pulling filebeat ... done
Pulling moloch ... done
Pulling file-monitor ... done
Pulling pcap-capture ... done
Pulling upload ... done
Pulling htadmin ... done
Pulling nginx-proxy ... done

user@host:~/Malcolm$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
malcolmnetsec/moloch 1.4.0 xxxxxxxxxxxx 27 minutes ago 517MB
malcolmnetsec/htadmin 1.4.0 xxxxxxxxxxxx 2 hours ago 180MB
malcolmnetsec/nginx-proxy 1.4.0 xxxxxxxxxxxx 4 hours ago 53MB
malcolmnetsec/file-upload 1.4.0 xxxxxxxxxxxx 24 hours ago 198MB
malcolmnetsec/pcap-capture 1.4.0 xxxxxxxxxxxx 24 hours ago 111MB
malcolmnetsec/file-monitor 1.4.0 xxxxxxxxxxxx 24 hours ago 355MB
malcolmnetsec/logstash-oss 1.4.0 xxxxxxxxxxxx 25 hours ago 1.24GB
malcolmnetsec/curator 1.4.0 xxxxxxxxxxxx 25 hours ago 303MB
malcolmnetsec/kibana-oss 1.4.0 xxxxxxxxxxxx 33 hours ago 944MB
malcolmnetsec/filebeat-oss 1.4.0 xxxxxxxxxxxx 11 days ago 459MB
malcolmnetsec/elastalert 1.4.0 xxxxxxxxxxxx 11 days ago 276MB
docker.elastic.co/elasticsearch/elasticsearch-oss 6.8.1 xxxxxxxxxxxx 5 weeks ago 769MB
Finally, we can start Malcolm. When Malcolm starts it will stream informational and debug messages to the console. If you wish, you can safely close the console or use Ctrl+C to stop these messages; Malcolm will continue running in the background.
user@host:~/Malcolm$ ./scripts/start.sh
Creating network "malcolm_default" with the default driver
Creating malcolm_file-monitor_1 ... done
Creating malcolm_htadmin_1 ... done
Creating malcolm_elasticsearch_1 ... done
Creating malcolm_pcap-capture_1 ... done
Creating malcolm_curator_1 ... done
Creating malcolm_logstash_1 ... done
Creating malcolm_elastalert_1 ... done
Creating malcolm_kibana_1 ... done
Creating malcolm_moloch_1 ... done
Creating malcolm_filebeat_1 ... done
Creating malcolm_upload_1 ... done
Creating malcolm_nginx-proxy_1 ... done

Malcolm started, setting "INITIALIZEDB=false" in "docker-compose.yml" for subsequent runs.

In a few minutes, Malcolm services will be accessible via the following URLs:
------------------------------------------------------------------------------
- Moloch: https://localhost:443/
- Kibana: https://localhost:5601/
- PCAP Upload (web): https://localhost:8443/
- PCAP Upload (sftp): sftp://username@127.0.0.1:8022/files/
- Account management: https://localhost:488/

Name Command State Ports
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
malcolm_curator_1 /usr/local/bin/cron_env_deb.sh Up
malcolm_elastalert_1 /usr/local/bin/elastalert- ... Up (health: starting) 3030/tcp, 3333/tcp
malcolm_elasticsearch_1 /usr/local/bin/docker-entr ... Up (health: starting) 9200/tcp, 9300/tcp
malcolm_file-monitor_1 /usr/local/bin/supervisord ... Up 3310/tcp
malcolm_filebeat_1 /usr/local/bin/docker-entr ... Up
malcolm_htadmin_1 /usr/bin/supervisord -c /s ... Up 80/tcp
malcolm_kibana_1 /usr/bin/supervisord -c /e ... Up (health: starting) 28991/tcp, 5601/tcp
malcolm_logstash_1 /usr/local/bin/logstash-st ... Up (health: starting) 5000/tcp, 5044/tcp, 9600/tcp
malcolm_moloch_1 /usr/bin/supervisord -c /e ... Up 8000/tcp, 8005/tcp, 8081/tcp
malcolm_nginx-proxy_1 /app/docker-entrypoint.sh ... Up 0.0.0.0:28991->28991/tcp, 0.0.0.0:3030->3030/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:488->488/tcp, 0.0.0.0:5601->5601/tcp, 80/tcp,
0.0.0.0:8443->8443/tcp, 0.0.0.0:9200->9200/tcp, 0.0.0.0:9600->9600/tcp
malcolm_pcap-capture_1 /usr/local/bin/supervisor.sh Up
malcolm_upload_1 /docker-entrypoint.sh /usr ... Up 127.0.0.1:8022->22/tcp, 80/tcp

Attaching to malcolm_nginx-proxy_1, malcolm_upload_1, malcolm_filebeat_1, malcolm_kibana_1, malcolm_moloch_1, malcolm_elastalert_1, malcolm_logstash_1, malcolm_curator_1, malcolm_elasticsearch_1, malcolm_htadmin_1, malcolm_pcap-capture_1, malcolm_file-monitor_1
It will take several minutes for all of Malcolm's components to start up. Logstash will take the longest, probably 5 to 10 minutes. You'll know Logstash is fully ready when you see Logstash spit out a bunch of starting up messages, ending with this:

logstash_1 | [2019-06-11T15:45:41,938][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x7a5910 sleep>"}
logstash_1 | [2019-06-11T15:45:42,009][INFO ][logstash.agent ] Pipelines running {:count=>3, :running_pipelines=>[:input, :main, :output], :non_running_pipelines=>[]}
logstash_1 | [2019-06-11T15:45:42,599][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
You can now open a web browser and navigate to one of the Malcolm user interfaces.

Copyright
Malcolm is Copyright 2019 Battelle Energy Alliance, LLC, and is developed and released through the cooperation of the Cybersecurity and Infrastructure Security Agency of the U.S. Department of Homeland Security.
See License.txt for the terms of its release.

Contact information of author(s):
Seth Grover

Other Software
Idaho National Laboratory is a cutting-edge research facility that is constantly producing high-quality research and software. Feel free to take a look at our other software and scientific offerings at:
Primary Technology Offerings Page
Supported Open Source Software
Raw Experiment Open Source Software
Unsupported Open Source Software


Theo - Ethereum Recon And Exploitation Tool


Theo aims to be an exploitation framework and a blockchain recon and interaction tool.

Features:
  • Automatic smart contract scanning which generates a list of possible exploits.
  • Sending transactions to exploit a smart contract.
  • Transaction pool monitor.
  • Web3 console
  • Frontrunning and backrunning transactions.
  • Waiting for a list of transactions and sending out others.
  • Estimating gas for transactions means only successful transactions are sent.
  • Disabling gas estimation will send transactions with a fixed gas quantity.
He knows Karl from work.
Theo's purpose is to fight script kiddies that try to be leet hackers. He can listen to them trying to exploit his honeypots and make them lose their funds, for his own gain.
"You didn't bring me along for my charming personality."

Install
Theo is available as a PyPI package:
$ pip install theo
$ theo --help
usage: theo [-h] [--rpc-http RPC_HTTP] [--rpc-ws RPC_WS] [--rpc-ipc RPC_IPC]
[--account-pk ACCOUNT_PK] [--contract ADDRESS]
[--skip-mythril SKIP_MYTHRIL] [--load-file LOAD_FILE] [--version]

Monitor contracts for balance changes or tx pool.

optional arguments:
-h, --help show this help message and exit
--rpc-http RPC_HTTP Connect to this HTTP RPC (default:
http://127.0.0.1:8545)
--account-pk ACCOUNT_PK
The account's private key (default: None)
--contract ADDRESS Contract to monitor (default: None)
--skip-mythril SKIP_MYTHRIL
Don't try to find exploits with Mythril (default:
False)
--load-file LOAD_FILE
Load exploit from file (default: )
--version show program's version number and exit

RPC connections:
--rpc-ws RPC_WS Connect to this WebSockets RPC (default: None)
--rpc-ipc RPC_IPC Connect to this IPC RPC (default: None)
Install from sources
$ git clone https://github.com/cleanunicorn/theo
$ cd theo
$ virtualenv ./venv
$ . ./venv/bin/activate
$ pip install -r requirements.txt
$ pip install -e .
$ theo --help
Requirements:
  • Python 3.5 or higher.
  • An Ethereum node with RPC available. Ganache works really well for testing or for validating exploits.

Demos

Find exploit and execute it
Scan a smart contract, find exploits, exploit it:
  • Start Ganache as our local Ethereum node
  • Deploy the vulnerable contract (happens in a different window)
  • Scan for exploits
  • Run exploit


Frontrun victim
Setup a honeypot, deploy honeypot, wait for attacker, frontrun:
  • Start geth as our local Ethereum node
  • Start mining
  • Deploy the honeypot
  • Start Theo and scan the mem pool for transactions
  • Frontrun the attacker and steal his ether


Usage

Help screen
It's a good idea to check the help screen first.
$ theo --help
usage: theo [-h] [--rpc-http RPC_HTTP] [--rpc-ws RPC_WS] [--rpc-ipc RPC_IPC]
[--account-pk ACCOUNT_PK] [--contract ADDRESS] [--skip-mythril]
[--load-file LOAD_FILE] [--version]

Monitor contracts for balance changes or tx pool.

optional arguments:
-h, --help show this help message and exit
--rpc-http RPC_HTTP Connect to this HTTP RPC (default:
http://127.0.0.1:8545)
--account-pk ACCOUNT_PK
The account's private key (default: None)
--contract ADDRESS Contract to interact with (default: None)
--skip-mythril Skip scanning the contract with Mythril (default:
False)
--load-file LOAD_FILE
Load exploit from file (default: )
--version show program's version number and exit

RPC connections:
--rpc-ws RPC_WS Connect to this WebSockets RPC (default: None)
--rpc-ipc RPC_IPC Connect to this IPC RPC (default: None)

Symbolic execution
A list of exploits is automatically identified using mythril.
Start a session by running:
$ theo --contract=<scanned contract> --account-pk=<your private key>
Scanning for exploits in contract: 0xa586074fa4fe3e546a132a16238abe37951d41fe
Connecting to HTTP: http://127.0.0.1:8545.
Found exploits(s):
[Exploit: (txs=[Transaction {Data: 0xcf7a8965, Value: 1000000000000000000}])]

A few objects are available in the console:
- `exploits` is an array of loaded exploits found by Mythril or read from a file
- `w3` an initialized instance of web3py for the provided HTTP RPC endpoint

Check the readme for more info:
https://github.com/cleanunicorn/theo

>>>
It will analyze the contract and will find a list of available exploits.
You can see the available exploits found. In this case one exploit was found. Each exploit is an Exploit object.
>>> exploits[0]
Exploit: (txs=[Transaction: {'input': '0xcf7a8965', 'value': '0xde0b6b3a7640000'}])

Running exploits
The exploit steps can be run by calling .execute() on the exploit object. The transactions will be signed and sent to the node you're connected to.
>>> exploits[0].execute()
2019-07-22 11:26:12,196 - Sending tx: {'to': '0xA586074FA4Fe3E546A132a16238abe37951D41fE', 'gasPrice': 1, 'gas': 30521, 'value': 1000000000000000000, 'data': '0xcf7a8965', 'nonce': 47}
2019-07-22 11:26:12,200 - Waiting for 0x41b489c78f654cab0b0451fc573010ddb20ee6437cdbf5098b6b03ee1936c33c to be mined...
2019-07-22 11:26:16,337 - Mined
2019-07-22 11:26:16,341 - Initial balance: 1155999450759997797167 (1156.00 ether)
2019-07-22 11:26:16,342 - Final balance: 1156999450759997768901 (1157.00 ether)

Frontrunning
You can start the frontrunning monitor to listen for other hackers trying to exploit the honeypot.
Use .frontrun() to start listening for the exploit and when found, send a transaction with a higher gas price.
>>> exploits[0].frontrun()
2019-07-22 11:22:26,285 - Scanning the mem pool for transactions...
2019-07-22 11:22:45,369 - Found tx: 0xf6041abe6e547cea93e80a451fdf53e6bdae67820244246fde44098f91ce1c20
2019-07-22 11:22:45,375 - Sending tx: {'to': '0xA586074FA4Fe3E546A132a16238abe37951D41fE', 'gasPrice': '0x2', 'data': '0xcf7a8965', 'gas': 30522, 'value': 1000000000000000000, 'nonce': 45}
2019-07-22 11:22:45,380 - Waiting for 0xa73316daf806e7eef83d09e467c32ce5faa239c6eda3a270a8ce7a7aae48fb7e to be mined...
2019-07-22 11:22:56,852 - Mined
"Oh, my God! The quarterback is toast!"
This works very well for some specially crafted contracts or some other vulnerable contracts, as long as you make sure frontrunning is in your favor.

Load transactions from file
Instead of identifying the exploits with mythril, you can specify the list of exploits yourself.
Create a file, for example exploits.json, that looks like this:
[
  [
    {
      "name": "claimOwnership()",
      "input": "0x4e71e0c8",
      "value": "0xde0b6b3a7640000"
    },
    {
      "name": "retrieve()",
      "input": "0x2e64cec1",
      "value": "0x0"
    }
  ],
  [
    {
      "name": "claimOwnership()",
      "input": "0x4e71e0c8",
      "value": "0xde0b6b3a7640000"
    }
  ]
]
This one defines 2 exploits: the first one has 2 transactions and the second one has only 1 transaction.
You can load it with:
$ theo --load-file=./exploits.json

Troubleshooting

openssl/aes.h: No such file or directory
If you get this error, you need the libssl source libraries:
    scrypt-1.2.1/libcperciva/crypto/crypto_aes.c:6:10: fatal error: openssl/aes.h: No such file or directory
#include <openssl/aes.h>
^~~~~~~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

----------------------------------------
Command "/usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-5rl4ep94/scrypt/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-mnbzx9qe-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-5rl4ep94/scrypt/
On Ubuntu you can install them with:
$ sudo apt install libssl-dev


Project iKy v2.1.0 - Tool That Collects Information From An Email And Shows Results In A Nice Visual Interface


Project iKy is a tool that collects information from an email and shows results in a nice visual interface.

Visit the Gitlab Page of the Project

Project

First of all, we want to advise you that we have changed the frontend from AngularJS to Angular 7. For this reason, we left the AngularJS version of the frontend in the iKy-v1 branch, together with the documentation for its installation here.
The reason for changing the frontend was to update the technology and make installation easier.

Video


Installation

Clone repository

git clone https://gitlab.com/kennbroorg/iKy.git

Install Backend


Redis
You must install Redis
wget http://download.redis.io/redis-stable.tar.gz
tar xvzf redis-stable.tar.gz
cd redis-stable
make
sudo make install
And turn on the server in a terminal
redis-server

Python stuff and Celery
You must install the libraries inside requirements.txt
pip install -r requirements.txt
And turn on Celery in another terminal, within the directory backend
./celery.sh
Finally, in yet another terminal, turn on the backend app from the backend directory
python app.py
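Before starting Celery and app.py, it can help to confirm that the Redis server is actually reachable. A minimal sketch using the redis Python package (an assumption: install it with pip install redis if requirements.txt does not already pull it in):

import redis

# Ping the default local Redis instance started with `redis-server`.
r = redis.Redis(host="localhost", port=6379)
try:
    r.ping()
    print("Redis is up; you can start Celery and app.py")
except redis.exceptions.ConnectionError as exc:
    print(f"Redis is not reachable: {exc}")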

Install Frontend


Node
First of all, install nodejs.

Dependencies
Inside the frontend directory, install the dependencies
npm install

Turn on Frontend Server
Finally, to run the frontend server, execute:
npm start

Browser

Open the browser at this URL

Config API Keys

Once the application is loaded in the browser, go to the API Keys option and enter the API key values that are needed.

  • Fullcontact: Generate the APIs from here
  • Twitter: Generate the APIs from here
  • Linkedin: Only the user and password of your account must be loaded



SET v8.0.1 - The Social-Engineer Toolkit


Copyright 2019 The Social-Engineer Toolkit (SET)
Written by: David Kennedy (ReL1K)
Company: TrustedSec
DISCLAIMER: This is only for testing purposes and can only be used where strict consent has been given. Do not use this for illegal purposes, period.
Please read the LICENSE under readme/LICENSE for the licensing of SET.

SET Tutorial
For a full document on how to use SET, visit the SET user manual.

Features
The Social-Engineer Toolkit is an open-source penetration testing framework designed for social engineering. SET has a number of custom attack vectors that allow you to make a believable attack quickly. SET is a product of TrustedSec, LLC – an information security consulting firm located in Cleveland, Ohio.

Bugs and enhancements
For bug reports or enhancements, please open an issue here.

Supported platforms
  • Linux
  • Mac OS X (experimental)

Installation

Install via requirements.txt
$ pip install -r requirements.txt

Install SET
All OSs
$ git clone https://github.com/trustedsec/social-engineer-toolkit/ set/
$ cd set
$ pip install -r requirements.txt


KRF - A Kernelspace Randomized Faulter


KRF is a Kernelspace Randomized Faulter.
It currently supports the Linux and FreeBSD kernels.

What?
Fault injection is a software testing technique that involves inducing failures ("faults") in the functions called by a program. If the callee has failed to perform proper error checking and handling, these faults can result in unreliable application behavior or exploitable vulnerabilities.
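To make that concrete, here is a small, purely illustrative Python sketch (not part of KRF) of the kind of error handling fault injection is meant to exercise: a read that retries on EINTR and reports other failures instead of silently ignoring them.

import errno
import os

def careful_read(fd, size):
    """Read from fd, handling errors a syscall-level faulter might inject."""
    while True:
        try:
            return os.read(fd, size)
        except OSError as exc:
            if exc.errno == errno.EINTR:   # interrupted: safe to retry
                continue
            # EIO, EBADF, ...: surface the failure instead of ignoring it
            raise RuntimeError(
                "read failed with %s" % errno.errorcode.get(exc.errno, exc.errno)
            ) from exc

A faulter such as KRF makes these error branches fire far more often than normal operation would, exposing callers that never check for them.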

Unlike the many userspace fault injection systems out there, KRF runs in kernelspace via a loaded module. This has several advantages:
  • It works on static binaries, as it does not rely on LD_PRELOAD for injection.
  • Because it intercepts raw syscalls and not their libc wrappers, it can inject faults into calls made by syscall(3) or inline assembly.
  • It's probably faster and less error-prone than futzing with dlsym.
There are also several disadvantages:
  • You'll probably need to build it yourself.
  • It probably only works on x86(_64), since it twiddles cr0 manually. There is probably an architecture-independent way to do that in Linux, somewhere.
  • It's essentially a rootkit. You should definitely never, ever run it on a non-testing system.
  • It probably doesn't cover everything that the Linux kernel expects of syscalls, and may destabilize its host in weird and difficult to reproduce ways.

How does it work?
KRF rewrites the Linux or FreeBSD system call table: when configured via krfctl, KRF replaces faultable syscalls with thin wrappers.
Each wrapper then performs a check to see whether the call should be faulted using a configurable targeting system capable of targeting a specific personality(2), PID, UID, and/or GID. If the process shouldn't be faulted, the original syscall is invoked.
Finally, the targeted call is faulted via a random failure function. For example, a read(2) call might receive one of EBADF, EINTR, EIO, and so on.
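The wrappers themselves live in the kernel and are written in C, but the decision they make can be modelled in a few lines. A purely illustrative Python sketch of that flow (the personality value, reciprocal probability and errno list mirror the documentation below; this is not the actual kernel code):

import errno
import random

FAULT_ERRNOS = [errno.EBADF, errno.EINTR, errno.EIO]  # example failures for read(2)
TARGET_PERSONALITY = 28   # the value krfexec sets (see Configuration below)
PROBABILITY = 1000        # reciprocal: fault roughly 0.1% of targeted calls

def wrapped_read(proc_personality, real_read, *args):
    """Model of a KRF wrapper: targeting check, probability check, fault or pass through."""
    if proc_personality != TARGET_PERSONALITY:
        return real_read(*args)              # not targeted: invoke the original syscall
    if random.randrange(PROBABILITY) != 0:
        return real_read(*args)              # targeted, but not faulted this time
    return -random.choice(FAULT_ERRNOS)      # faulted: return a random -errno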

Setup

Compatibility
NOTE: If you have Vagrant, just use the Vagrantfile and jump to the build steps.
KRF should work on any recent-ish (4.15+) Linux kernel with CONFIG_KALLSYMS=1.
This includes the default kernel on Ubuntu 18.04 and probably many other recent distros.

Dependencies
NOTE: Ignore this if you're using Vagrant.
Apart from a C toolchain (GCC is probably necessary for Linux), KRF's only dependencies should be libelf, the kernel headers, and Ruby (for code generation).
GNU Make is required on all platforms; FreeBSD additionally requires BSD Make.
For systems with apt:
sudo apt install libelf-dev ruby linux-headers-$(uname -r)

Building
git clone https://github.com/trailofbits/krf && cd krf
make -j$(nproc)
or, if you're using Vagrant:
git clone https://github.com/trailofbits/krf && cd krf
vagrant up linux && vagrant ssh linux
# inside the VM
cd /vagrant
make -j$(nproc)
or, for FreeBSD:
git clone https://github.com/trailofbits/krf && cd krf
vagrant up freebsd && vagrant ssh freebsd
# inside the VM
cd /vagrant
gmake # NOT make!

Usage
KRF has three components:
  • A kernel module (krfx)
  • An execution utility (krfexec)
  • A control utility (krfctl)
To load the kernel module, run make insmod. To unload it, run make rmmod.
KRF begins in a neutral state: no syscalls will be intercepted or faulted until the user specifies some behavior via krfctl:
# no induced faults, even with KRF loaded
ls

# tell krf to fault read(2) and write(2) calls
# note that krfctl requires root privileges
sudo ./src/krfctl/krfctl -F 'read,write'

# tell krf to fault any program with a
# personality of 28 (the value set by krfexec)
sudo ./src/krfctl/krfctl -T personality=28

# may fault!
./src/krfexec/krfexec ls

# krfexec will pass options correctly as well
./src/krfexec/krfexec echo -n 'no newline'

# clear the fault specification
sudo ./src/krfctl/krfctl -c

# clear the targeting specification
sudo ./src/krfctl/krfctl -C

# no induced faults, since no syscalls are being faulted
./src/krfexec/krfexec firefox
On FreeBSD, krfexec requires root privileges. By default, it will attempt to use SUDO_UID and the username returned by getlogin_r to return to a non-root user before executing the target. To force a particular UID, export REAL_UID, e.g.:
REAL_UID=1000 sudo ./src/krfexec/krfexec ls

Configuration
NOTE: Most users should use krfctl instead of manipulating these files by hand. In FreeBSD, these same values are accessible through sysctl krf.whatever instead of procfs.

/proc/krf/rng_state
This file allows a user to read and modify the internal state of KRF's PRNG.
For example, each of the following will correctly update the state:
echo "1234" | sudo tee /proc/krf/rng_state
echo "0777" | sudo tee /proc/krf/rng_state
echo "0xFF" | sudo tee /proc/krf/rng_state
The state is a 32-bit unsigned integer; attempting to change it beyond that will fail.

/proc/krf/targeting
This file allows a user to set the values used by KRF for syscall targeting.
NOTE: By default, KRF uses a personality that is not currently used by the Linux kernel. If you change this, you should be careful to avoid making it something that Linux cares about. man 2 personality has the details.
echo "0 28" | sudo tee /proc/krf/targeting
A personality of 28 is hardcoded into krfexec.
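If you want a process to be targeted without going through krfexec, you can set the same personality yourself before making syscalls. A minimal sketch using ctypes and glibc's personality(2) wrapper (assumes Linux, the KRF module loaded, and the default targeting value of 28 described above):

import ctypes
import os

libc = ctypes.CDLL("libc.so.6", use_errno=True)

# personality(2) returns the previous persona, or -1 on error.
if libc.personality(28) == -1:
    raise OSError(ctypes.get_errno(), "personality(2) failed")

# Syscalls made from here on (including by exec'd children) carry personality 28,
# so KRF will consider them for fault injection.
os.system("ls")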

/proc/krf/probability
This file allows a user to read and write the probability of inducing a fault for a given (faultable) syscall.
The probability is represented as a reciprocal, e.g. 1000 means that, on average, 0.1% of faultable syscalls will be faulted.
echo "100000" | sudo tee /proc/krf/probability

/proc/krf/control
This file controls the syscalls that KRF faults.
NOTE: Most users should use krfctl instead of interacting with this file directly — the former will perform syscall name-to-number translation automatically and will provide clearer error messages when things go wrong.
# replace the syscall in slot 0 (usually SYS_read) with its faulty wrapper
echo "0" | sudo tee /proc/krf/control
Passing any number greater than KRF_NR_SYSCALLS will cause KRF to flush the entire syscall table, returning it to the neutral state. Since KRF_NR_SYSCALLS isn't necessarily predictable for arbitrary versions of the Linux kernel, choosing a large number (like 65535) is fine.
Passing a valid syscall number that lacks a fault injection wrapper will cause the write(2) to the file to fail with EOPNOTSUPP.

/proc/krf/log_faults
This file controls whether or not KRF emits kernel logs on faulty syscalls. By default, no logging messages are emitted.
NOTE: Most users should use krfctl instead of interacting with this file directly.
# enable fault logging
echo "1" | sudo tee /proc/krf/log_faults
# disable fault logging
echo "0" | sudo tee /proc/krf/log_faults
# read the logging state
cat /proc/krf/log_faults



Skadi - Collect, Process, And Hunt With Host Based Data From MacOS, Windows, And Linux


Skadi (pronounced “SKAH-Dee”: similar to Scotty but with a d sound) is a giantess and goddess of hunting in Norse mythology.

Purpose
Skadi is a free, open source collection of tools that enables the collection, processing and advanced analysis of forensic artifacts and images. It works on MacOS, Windows, and Linux machines. It scales to work effectively on laptops, desktops, servers, the cloud, and can be installed on top of hardened / gold disk images.

How to Get Started and Support

Download Latest Release
Available in OVA, Vagrant and Signed Installer formats
Download the Latest Release

Installation Instructions
Starting Skadi on Docker Instructions
Vagrant Installation Instructions
OVA Installation Instructions
Signed Installer Instructions

Skadi Portal
This portal allows easy access to Skadi tools. By default it is available at the IP address of the Skadi Server.
The default credentials are:
  • Username: skadi
  • Password: skadi
Access the portal through a web browser at the IP address of the server. In this example the server is 192.168.1.2, while Vagrant and Docker installs will create a link to localhost.

Included Tools


The tools are combined into one platform where they all work together, providing the ability to collect data, convert the bits and bytes to words and numbers, and analyze the results quickly and easily. This enables rapid and accurate hunting for host-based evidence of malicious activity.
  • CDQR
  • CyberChef
  • CyLR
  • Docker
  • ElasticSearch
  • Glances
  • Grafana
  • Portainer
  • Kibana
  • Yeti
  • Plaso
  • TimeSketch

Yeti (Threat Intelligence Tool)


Kibana and TimeSketch Included


11 Kibana Dashboards



TimeSketch



Videos and Media

Skadi Wiki Page
The answers to common questions and information about how to get started with Skadi are stored in the Skadi Wiki Pages.

Skadi Community
There is a Slack community set up for developers and users of the Skadi ecosystem. It is a safe place to ask questions and share information.
Join the Skadi Community Slack

Skadi Add-on Packs
Skadi add-on packs are installed on top of the base Skadi VM to provide extra functionality

Thank you to everyone who has helped, and continues to help, make this project a reality.

Special Thanks to:
  • The team from Komand for their advice and support on all things Automation
  • Jackie & Jason from @SpyglassSec for their guidance
  • Every single one of the contributors whose efforts made the automation Add-on Pack possible

CREATOR


Commando VM v2.0 - The First Full Windows-based Penetration Testing Virtual Machine Distribution


Welcome to CommandoVM - a fully customizable, Windows-based security distribution for penetration testing and red teaming.
For detailed install instructions or more information, please see our blog.

Installation (Install Script)

Requirements
  • Windows 7 Service Pack 1 or Windows 10
  • 60 GB Hard Drive
  • 2 GB RAM

Recommended
  • Windows 10
  • 80+ GB Hard Drive
  • 4+ GB RAM
  • 2 network adapters
  • Enable Virtualization support for VM
    • REQUIRED FOR KALI OR DOCKER

Instructions

Standard install
  1. Create and configure a new Windows Virtual Machine
  • Ensure VM is updated completely. You may have to check for updates, reboot, and check again until no more remain
  • Take a snapshot of your machine!
  • Download and copy install.ps1 on your newly configured machine.
  • Open PowerShell as an Administrator
  • Enable script execution by running the following command:
    • Set-ExecutionPolicy Unrestricted
  • Finally, execute the installer script as follows:
    • .\install.ps1
    • You can also pass your password as an argument: .\install.ps1 -password <password>
The script will set up the Boxstarter environment and proceed to download and install the Commando VM environment. You will be prompted for the administrator password in order to automate host restarts during installation. If you do not have a password set, hitting enter when prompted will also work.

Custom install
  1. Download the zip from https://github.com/fireeye/commando-vm into your Downloads folder.
  2. Decompress the zip and edit the ${Env:UserProfile}\Downloads\commando-vm-master\commando-vm-master\profile.json file by removing tools or adding tools in the “packages” section. Tools are available from our package list or from the chocolatey repository. A quick way to sanity-check the edited file is shown in the sketch after these steps.
  3. Open an administrative PowerShell window and enable script execution. Set-ExecutionPolicy Unrestricted -f
  4. Change to the unzipped project directory. cd ${Env:UserProfile}\Downloads\commando-vm-master\commando-vm-master\
  5. Execute the install with the -profile_file argument. .\install.ps1 -profile_file .\profile.json
For more detailed instructions about custom installations, see our blog
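As a sanity check before running the installer, you can confirm that the edited profile still parses and see how many packages it selects. A minimal Python sketch, assuming only that profile.json contains a "packages" section as described above (the exact shape of each entry may differ between releases):

import json
from pathlib import Path

profile = Path.home() / "Downloads" / "commando-vm-master" / "commando-vm-master" / "profile.json"
data = json.loads(profile.read_text())

packages = data["packages"]        # the "packages" section edited in step 2
print(len(packages), "packages selected")
for pkg in packages[:5]:           # peek at the first few entries
    print(pkg)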

Installing a new package
Commando VM uses the Chocolatey Windows package manager. It is easy to install a new package. For example, enter the following command as Administrator to deploy Github Desktop on your system:
cinst github

Staying up to date
Type the following command to update all of the packages to the most recent version:
cup all

Installed Tools

Active Directory Tools
  • Remote Server Administration Tools (RSAT)
  • SQL Server Command Line Utilities
  • Sysinternals

Command & Control
  • Covenant
  • PoshC2
  • WMImplant
  • WMIOps

Developer Tools
  • Dep
  • Git
  • Go
  • Java
  • Python 2
  • Python 3 (default)
  • Ruby
  • Ruby Devkit
  • Visual Studio 2017 Build Tools (Windows 10)
  • Visual Studio Code

Docker
  • Amass
  • SpiderFoot

Evasion
  • CheckPlease
  • Demiguise
  • DefenderCheck
  • DotNetToJScript
  • Invoke-CradleCrafter
  • Invoke-DOSfuscation
  • Invoke-Obfuscation
  • Invoke-Phant0m
  • Not PowerShell (nps)
  • PS>Attack
  • PSAmsi
  • Pafishmacro
  • PowerLessShell
  • PowerShdll
  • StarFighters

Exploitation
  • ADAPE-Script
  • API Monitor
  • CrackMapExec
  • CrackMapExecWin
  • DAMP
  • EvilClippy
  • Exchange-AD-Privesc
  • FuzzySec's PowerShell-Suite
  • FuzzySec's Sharp-Suite
  • Generate-Macro
  • GhostPack
    • Rubeus
    • SafetyKatz
    • Seatbelt
    • SharpDPAPI
    • SharpDump
    • SharpRoast
    • SharpUp
    • SharpWMI
  • GoFetch
  • Impacket
  • Invoke-ACLPwn
  • Invoke-DCOM
  • Invoke-PSImage
  • Invoke-PowerThIEf
  • Juicy Potato
  • Kali Binaries for Windows
  • LuckyStrike
  • MetaTwin
  • Metasploit
  • Mr. Unikod3r's RedTeamPowershellScripts
  • NetshHelperBeacon
  • Nishang
  • Orca
  • PSReflect
  • PowerLurk
  • PowerPriv
  • PowerSploit
  • PowerUpSQL
  • PrivExchange
  • RottenPotatoNG
  • Ruler
  • SharpClipHistory
  • SharpExchangePriv
  • SharpExec
  • SpoolSample
  • SharpSploit
  • UACME
  • impacket-examples-windows
  • vssown
  • Vulcan

Information Gathering
  • ADACLScanner
  • ADExplorer
  • ADOffline
  • ADRecon
  • BloodHound
  • dnsrecon
  • FOCA
  • Get-ReconInfo
  • GoBuster
  • GoWitness
  • NetRipper
  • Nmap
  • PowerView
    • Dev branch included
  • SharpHound
  • SharpView
  • SpoolerScanner
  • Watson

Kali Linux
  • kali-linux-default
  • kali-linux-xfce
  • VcXsrv

Networking Tools
  • Citrix Receiver
  • OpenVPN
  • Proxycap
  • PuTTY
  • Telnet
  • VMWare Horizon Client
  • VMWare vSphere Client
  • VNC-Viewer
  • WinSCP
  • Windump
  • Wireshark

Password Attacks
  • ASREPRoast
  • CredNinja
  • DomainPasswordSpray
  • DSInternals
  • Get-LAPSPasswords
  • Hashcat
  • Internal-Monologue
  • Inveigh
  • Invoke-TheHash
  • KeeFarce
  • KeeThief
  • LAPSToolkit
  • MailSniper
  • Mimikatz
  • Mimikittenz
  • RiskySPN
  • SessionGopher

Reverse Engineering
  • DNSpy
  • Flare-Floss
  • ILSpy
  • PEview
  • Windbg
  • x64dbg

Utilities
  • 7zip
  • Adobe Reader
  • AutoIT
  • Cmder
  • CyberChef
  • Explorer Suite
  • Gimp
  • Greenshot
  • Hashcheck
  • Hexchat
  • HxD
  • Keepass
  • MobaXterm
  • Mozilla Thunderbird
  • Neo4j Community Edition
  • Notepad++
  • Pidgin
  • Process Hacker 2
  • SQLite DB Browser
  • Screentogif
  • Shellcode Launcher
  • Sublime Text 3
  • TortoiseSVN
  • VLC Media Player
  • Winrar
  • yEd Graph Tool

Vulnerability Analysis
  • AD Control Paths
  • Egress-Assess
  • Grouper2
  • NtdsAudit
  • PwndPasswordsNTLM
  • zBang

Web Applications
  • Burp Suite
  • Fiddler
  • Firefox
  • OWASP Zap
  • Subdomain-Bruteforce
  • Wfuzz

Wordlists
  • FuzzDB
  • PayloadsAllTheThings
  • SecLists
  • Probable-Wordlists
  • RobotsDisallowed

Legal Notice
This download configuration script is provided to assist penetration testers
in creating handy and versatile toolboxes for offensive engagements. It provides
a convenient interface for them to obtain a useful set of pentesting Tools directly
from their original sources. Installation and use of this script is subject to the
Apache 2.0 License.

You as a user of this script must review, accept and comply with the license
terms of each downloaded/installed package listed below. By proceeding with the
installation, you are accepting the license terms of each package, and
acknowledging that your use of each package will be subject to its respective
license terms.

List of package licenses:

http://technet.microsoft.com/en-us/sysinternals/bb469936
https://github.com/stufus/ADOffline/blob/master/LICENCE.md
https://github.com/HarmJ0y/ASREPRoast/blob/master/LICENSE
https://github.com/BloodHoundAD/BloodHound/blob/master/LICENSE.md
https://github.com/Arvanaghi/CheckPlease/blob/master/LICENSE
https://github.com/cobbr/Covenant/blob/master/LICENSE
https://github.com/byt3bl33d3r/CrackMapExec/blob/master/LICENSE
https://github.com/Raikia/CredNinja/blob/master/LICENSE
https://github.com/MichaelGrafnetter/DSInternals/blob/master/LICENSE.md
https://github.com/tyranid/DotNetToJScript/blob/master/LICENSE
https://github.com/FortyNorthSecurity/Egress-Assess/blob/master/LICENSE
https://github.com/cobbr/Elite/blob/master/LICENSE
https://github.com/GoFetchAD/GoFetch/blob/master/LICENSE.md
http://www.gnu.org/licenses/gpl.html
https://github.com/Kevin-Robertson/Inveigh/blob/master/LICENSE.md
https://github.com/danielbohannon/Invoke-CradleCrafter/blob/master/LICENSE
https://github.com/rvrsh3ll/Misc-Powershell-Scripts/blob/master/LICENSE
https://github.com/danielbohannon/Invoke-Obfuscation/blob/master/LICENSE
https://github.com/Kevin-Robertson/Invoke-TheHash/blob/master/LICENSE.md
https://github.com/denandz/KeeFarce/blob/master/LICENSE
https://github.com/HarmJ0y/KeeThief/blob/master/LICENSE
https://github.com/gentilkiwi/mimikatz
https://github.com/nettitude/PoshC2/blob/master/LICENSE
https://github.com/Mr-Un1k0d3r/PowerLessShell/blob/master/LICENSE.md
https://github.com/G0ldenGunSec/PowerPriv/blob/master/LICENSE
https://github.com/p3nt4/PowerShdll/blob/master/LICENSE.md
https://github.com/FuzzySecurity/PowerShell-Suite/blob/master/LICENSE
https://github.com/PowerShellMafia/PowerSploit/blob/master/LICENSE
https://github.com/PowerShellMafia/PowerSploit/blob/master/LICENSE
https://github.com/dirkjanm/PrivExchange/blob/master/LICENSE
https://github.com/Mr-Un1k0d3r/RedTeamPowershellScripts/blob/master/LICENSE.md
https://github.com/cyberark/RiskySPN/blob/master/LICENSE.md
https://github.com/GhostPack/Rubeus/blob/master/LICENSE
https://github.com/GhostPack/SafetyKatz/blob/master/LICENSE
https://github.com/NickeManarin/ScreenToGif/blob/master/LICENSE.txt
https://github.com/GhostPack/Seatbelt
https://github.com/danielmiessler/SecLists/blob/master/LICENSE
https://github.com/Arvanaghi/SessionGopher
https://github.com/GhostPack/SharpDPAPI/blob/master/LICENSE
https://github.com/GhostPack/SharpDump/blob/master/LICENSE
https://github.com/tevora-threat/SharpView/blob/master/LICENSE
https://github.com/GhostPack/SharpRoast/blob/master/LICENSE
https://github.com/GhostPack/SharpUp/blob/master/LICENSE
https://github.com/GhostPack/SharpWMI/blob/master/LICENSE
https://github.com/leechristensen/SpoolSample/blob/master/LICENSE
https://github.com/vletoux/SpoolerScanner/blob/master/LICENSE
http://www.sublimetext.com/eula
https://github.com/HarmJ0y/TrustVisualizer/blob/master/LICENSE
https://github.com/hfiref0x/UACME/blob/master/LICENSE.md
https://github.com/FortyNorthSecurity/WMIOps/blob/master/LICENSE
https://github.com/FortyNorthSecurity/WMImplant/blob/master/LICENSE
http://www.adobe.com/products/eulas/pdfs/Reader10_combined-20100625_1419.pdf
http://www.rohitab.com/apimonitor
http://www.autoitscript.com/autoit3/docs/license.htm
https://portswigger.net/burp
http://www.citrix.com/buy/licensing/agreements.html
https://github.com/cmderdev/cmder/blob/master/LICENSE
https://github.com/nccgroup/demiguise/blob/master/LICENSE.txt
http://www.telerik.com/purchase/license-agreement/fiddler
https://www.mozilla.org/en-US/MPL/2.0/
https://github.com/fireeye/flare-floss
https://github.com/fuzzdb-project/fuzzdb/blob/master/_copyright.txt
https://www.gimp.org/about/
https://www.google.it/intl/en/chrome/browser/privacy/eula_text.html
https://github.com/sensepost/gowitness/blob/master/LICENSE.txt
https://github.com/hashcat/hashcat/blob/master/docs/license.txt
https://www.gnu.org/licenses/gpl-2.0.html
https://mh-nexus.de/en/hxd/license.php
https://github.com/SecureAuthCorp/impacket/blob/master/LICENSE
https://github.com/SecureAuthCorp/impacket/blob/master/LICENSE
https://www.kali.org/about-us/
http://keepass.info/help/v2/license.html
https://github.com/putterpanda/mimikittenz
http://mobaxterm.mobatek.net/license.html
http://neo4j.com/open-source-project/
https://github.com/samratashok/nishang/blob/master/LICENSE
https://svn.nmap.org/nmap/COPYING
https://github.com/Ben0xA/nps/blob/master/LICENSE
https://openvpn.net/index.php/license.html
https://www.microsoft.com/en-us/servicesagreement/
https://github.com/joesecurity/pafishmacro/blob/master/LICENSE
https://hg.pidgin.im/pidgin/main/file/f02ebb71b5e3/COPYING
http://www.proxycap.com/eula.pdf
http://www.chiark.greenend.org.uk/~sgtatham/putty/licence.html
https://support.microsoft.com/en-us/gp/mats_eula
https://raw.githubusercontent.com/sqlitebrowser/sqlitebrowser/master/LICENSE
http://technet.microsoft.com/en-us/sysinternals/bb469936
http://www.mozilla.org/en-US/legal/eula/thunderbird.html
http://www.videolan.org/legal.html
http://www.vmware.com/download/eula/universal_eula.html
https://www.vmware.com/help/legal.html
https://www.realvnc.com/legal/
https://code.visualstudio.com/License
http://go.microsoft.com/fwlink/?LinkID=251960
http://opensource.org/licenses/BSD-3-Clause
https://winscp.net/docs/license
http://www.gnu.org/copyleft/gpl.html
https://github.com/x64dbg/x64dbg/blob/development/LICENSE
https://www.yworks.com/products/yed/license.html
http://www.apache.org/licenses/LICENSE-2.0
https://github.com/Dionach/NtdsAudit/blob/master/LICENSE
https://github.com/ANSSI-FR/AD-control-paths/blob/master/LICENSE.txt
https://github.com/OJ/gobuster/blob/master/LICENSE
https://github.com/xmendez/wfuzz/blob/master/LICENSE
https://github.com/dafthack/DomainPasswordSpray/blob/master/LICENSE
https://github.com/nettitude/PoshC2_Python/blob/master/LICENSE
https://github.com/ElevenPaths/FOCA/blob/master/LICENSE.txt
https://github.com/ohpe/juicy-potato/blob/master/LICENSE
https://github.com/NytroRST/NetRipper/blob/master/LICENSE.TXT
https://github.com/unixrox/prebellico/blob/master/LICENSE.md
https://github.com/rasta-mouse/Watson/blob/master/LICENSE.txt
https://github.com/berzerk0/Probable-Wordlists/blob/master/License.txt
https://github.com/cobbr/SharpSploit/blob/master/LICENSE


SQLMap v1.3.8 - Automatic SQL Injection And Database Takeover Tool


SQLMap is an open source penetration testing tool that automates the process of detecting and exploiting SQL injection flaws and taking over of database servers. It comes with a powerful detection engine, many niche features for the ultimate penetration tester and a broad range of switches lasting from database fingerprinting, over data fetching from the database, to accessing the underlying file system and executing commands on the operating system via out-of-band connections.

Features
  • Full support for MySQL, Oracle, PostgreSQL, Microsoft SQL Server, Microsoft Access, IBM DB2, SQLite, Firebird, Sybase, SAP MaxDB, HSQLDB and Informix database management systems.
  • Full support for six SQL injection techniques: boolean-based blind, time-based blind, error-based, UNION query-based, stacked queries and out-of-band.
  • Support to directly connect to the database without passing via a SQL injection, by providing DBMS credentials, IP address, port and database name.
  • Support to enumerate users, password hashes, privileges, roles, databases, tables and columns.
  • Automatic recognition of password hash formats and support for cracking them using a dictionary-based attack.
  • Support to dump database tables entirely, a range of entries or specific columns as per user's choice. The user can also choose to dump only a range of characters from each column's entry.
  • Support to search for specific database names, specific tables across all databases or specific columns across all databases' tables. This is useful, for instance, to identify tables containing custom application credentials where relevant columns' names contain strings like name and pass.
  • Support to download and upload any file from the database server underlying file system when the database software is MySQL, PostgreSQL or Microsoft SQL Server.
  • Support to execute arbitrary commands and retrieve their standard output on the database server underlying operating system when the database software is MySQL, PostgreSQL or Microsoft SQL Server.
  • Support to establish an out-of-band stateful TCP connection between the attacker machine and the database server underlying operating system. This channel can be an interactive command prompt, a Meterpreter session or a graphical user interface (VNC) session as per user's choice.
  • Support for database process' user privilege escalation via Metasploit's Meterpreter getsystem command.

Installation
You can download the latest tarball by clicking here or latest zipball by clicking here.
Preferably, you can download sqlmap by cloning the Git repository:
git clone --depth 1 https://github.com/sqlmapproject/sqlmap.git sqlmap-dev
sqlmap works out of the box with Python version 2.6.x and 2.7.x on any platform.

Usage
To get a list of basic options and switches use:
python sqlmap.py -h
To get a list of all options and switches use:
python sqlmap.py -hh
You can find a sample run here. To get an overview of sqlmap capabilities, list of supported features and description of all options and switches, along with examples, you are advised to consult the user's manual.
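As a concrete starting point, a typical unattended run tests a single parameterised URL and enumerates the available databases. A minimal sketch that wraps the command line from Python (the target URL is a placeholder for a system you are authorized to test; -u, --batch and --dbs are standard sqlmap options):

import subprocess

# --batch answers interactive prompts with defaults; --dbs enumerates database names.
target = "http://testsite.example/vuln.php?id=1"   # placeholder target
subprocess.run(
    ["python", "sqlmap.py", "-u", target, "--batch", "--dbs"],
    check=True,
)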

Demo

Links

Translations

