Best Practices: How to Leverage Domain and DNS Intelligence for OEMs
https://www.domaintools.com/resources/white-papers/best-practices-how-to-leverage-domain-and-dns-intelligence-for-oems/ | Mon, 06 Nov 2023

See What a Partnership with DomainTools Can Do For Your Business

Just like security teams, security products and services looking to better protect organizations require the best domain intelligence possible. For more than 20 years, DomainTools has been building its domain and DNS infrastructure database, which now covers 97% of the Internet. OEMs that partner with DomainTools see faster time-to-market, increased revenue streams, and improved product quality. 

DomainTools Intelligence Feeds, Monitors, APIs, and Farsight DNSDB query capabilities can be licensed and integrated into OEM partners' products and services far more quickly and cost-effectively than building the functionality in-house, resulting in higher quality products and better-protected customers.

Download this eBook to learn how: 

  • DomainTools lets OEMs maximize offerings and grow market share
  • Earlier threat detection and enhanced coverage reduce OEM risk
  • An integration with DomainTools can improve OEM product quality and customer satisfaction

See how DomainTools can quickly advance and differentiate cyber product and service companies’ solutions.

The Use Cases and Benefits of SVCB and HTTPS DNS Record Types
https://www.domaintools.com/resources/blog/the-use-cases-and-benefits-of-svcb-and-https-dns-record-types/ | Thu, 14 Nov 2024

Executive Summary

Already used in production but not yet a finalized standard, the SVCB (Service Binding) and HTTPS (Hypertext Transfer Protocol Secure) DNS resource record types offer solutions for issues around service discovery, privacy, and performance. This blog post provides further clarity on the use cases and benefits of these new record types.

Introduction: What Are the SVCB and HTTPS DNS Record Types?

DNS records are analogous to entries in an address book and store information on how to reach a particular contact or, in our case, a system in a computer network. While personal address books are often messy and free form, DNS records have well-defined types of entries. For example, record type “A” contains the internet address for a specific computer (hence, “A” for “address”) and type “CNAME” (or Canonical Name) indicates that a system is known by a different name and so on.

Key Features of SVCB and HTTPS Records

The list of record types continues to evolve, and in late 2018 the “service binding” type was proposed to “facilitate the lookup of information needed to make connections for origin resources”. This new record type provides the option to specify alternative endpoints for a name and operates in either of two modes:

  1. AliasMode, where it behaves similarly to a CNAME record
  2. ServiceMode, where it allows the advertisement of a number of different parameters for service discovery.

The proposal also introduces the HTTPS resource record type, a variant of service binding that is specific to HTTPS. The goal of this second record type is to reduce the number of DNS queries for that particular use case, a recurring theme throughout the proposal.
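To see these records in the wild, you can query them directly. Below is a minimal sketch in Python using the dnspython library (assuming version 2.1 or later, which added SVCB/HTTPS support); the domain queried is purely illustrative.

import dns.resolver

# Query HTTPS (type 65) records; requires dnspython >= 2.1.
answers = dns.resolver.resolve("cloudflare.com", "HTTPS")

for rr in answers:
    # SvcPriority 0 means AliasMode; 1 or greater means ServiceMode.
    mode = "AliasMode" if rr.priority == 0 else "ServiceMode"
    print(f"{mode}: priority={rr.priority} target={rr.target} -> {rr.to_text()}")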

AliasMode: CNAME and Aliasing

The first advantage of adopting SVCB is that it addresses a shortcoming of CNAME records. As mentioned earlier, CNAME creates an alias between two names. Here’s an example record indicating that queries for www.example[.]net should look up www.example[.]com instead:

www.example[.]net. 3600 IN CNAME www.example[.]com

This aliasing is useful in many situations, but CNAME records cannot be used on what’s called a zone apex or the root domain (the example[.]com portion of the name). This is a historical limitation in how the CNAME type was originally proposed and implemented, and something that can’t be modified at this point. This constraint has led to a few different workarounds, each with its own shortcomings. The SVCB proposal offers a new solution through its AliasMode, allowing us to create a zone apex record and specify a target (or an alternative endpoint) that will further resolve an A, AAAA or SVCB record.

Staying with our previous example, the following SVCB record creates an alias for example[.]net (a zone apex):

example[.]net. 3600 IN SVCB 0 www.example[.]com.

The “0” value informs us that this SVCB record is in AliasMode and, in this particular case, the alternative endpoint is the name we’re redirecting to.

ServiceMode: Privacy and Performance

Arguably the main part of the proposal, ServiceMode offers solutions to privacy and performance issues inherent to how service discovery works on the internet.

Practical Use Cases for SVCB and HTTPS Records

Front and center is the ability to indicate that a service supports HTTPS directly from the result of a DNS query. Currently, web clients have to first perform a DNS lookup for a service’s internet address and access it “in the clear” with HTTP in the hopes of being redirected to that service’s HTTPS endpoint. Service binding records allow us to specify a parameter called Application-Layer Protocol Negotiation, or ALPN for short, and provide a list of protocols supported by the service. Let’s look at an example:

www.example[.]com. 3600 IN SVCB 1 . alpn=h2

Enhancing HTTPS Support with SVCB Records

This SVCB record informs us that www.example[.]com supports the HTTP/2 protocol. Notice that we’re not specifying an alternative endpoint in this case; the period in the target field tells us that the record applies to the record owner itself. It is also possible to specify a different target, for example:

www.example[.]com.	3600 IN HTTPS 1	svc.example[.]com.	alpn=h2

Both inform clients of the support for HTTP/2. The second example redirects queries to svc.example[.]com. This allows us to connect directly to an HTTPS endpoint, reducing the need for multiple requests and improving privacy by reducing the number of unencrypted messages.

The same mechanism can be used to indicate support for HTTP/3, which offers some very interesting performance benefits over previous versions. The proposal includes an optional “port” parameter that can be used to indicate on which network port the specified services can be reached.
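As an illustration (the values are hypothetical), a single record could advertise both protocol versions on a non-standard port:

www.example[.]com. 3600 IN HTTPS 1 . alpn=h2,h3 port=8443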

Addressing Privacy Concerns with Encrypted Client Hello (ECH)

The Encrypted Client Hello (or ECH) parameter, another part of the new specification, helps to close an important privacy gap in how a secure connection is established. With Transport Layer Security (TLS), the client and server must exchange a few pieces of vital information as part of a handshake process, including the service that the user’s trying to connect to. Unfortunately, even the latest TLS standard doesn’t provide for complete encryption and some parts of that conversation can still be intercepted.

Although ECH was already in use before service binding was proposed, it required a separate TXT record. By bundling it with additional parameters in the service binding registry, the cost of additional queries and their processing is reduced. In our previous example, we would have:

www.example[.]com. 3600 IN HTTPS 1 svc.example[.]com. alpn=h2 ech=abcdefgh

ipv4hint and ipv6hint: Reducing DNS Query Times

In addition to ALPN and ECH, SVCB proposes the ipv4hint and ipv6hint parameters. Similarly to ECH, their intended purpose is to bundle all the information required to speed up access into a single DNS record and avoid multiple queries. The addresses provided here may be used to reach the given service; if A or AAAA records for the target are locally available, the client should ignore these hints.
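Extending our earlier example with hypothetical hint values (drawn from documentation address ranges), such a record might look like:

www.example[.]com. 3600 IN HTTPS 1 svc.example[.]com. alpn=h2 ipv4hint=192.0.2.1 ipv6hint=2001:db8::1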

All of the parameters we covered in this post are part of a Service Binding Parameter Registry, which also contains the mandatory and no-default-alpn parameters and a range for private use, and allows for arbitrary key-value pairs using the keyNNNNN=value format. Unfortunately, the proposal doesn’t offer any additional information on how the private range is or will be used.

Future of SVCB and HTTPS Record Types in DNS Standards

The service binding proposal offers a method to streamline the retrieval of DNS data by condensing all the necessary information into a single record. It also narrows the gap through which privacy can be infringed upon during service discovery. The additional flexibility of redirecting requests through alternative endpoints is a welcome one and should be valuable for multi-hosting scenarios.

Conclusion: Why SVCB and HTTPS Matter for Modern DNS Infrastructure

Multiple DNS providers and web clients have already deployed support for SVCB and HTTPS records, from Google and Cloudflare to iOS, Chrome, and Firefox. This push for adoption before the proposal reaches the standard phase isn’t uncommon in the networking world, and with big names behind it, SVCB and HTTPS are quickly becoming a de facto standard.

Author: Rafael Vanoni

Caught in the Act: A Phishing Expedition
https://www.domaintools.com/resources/blog/caught-in-the-act-a-phishing-expedition/ | Thu, 11 Mar 2021

Introduction

Cybercrime is everywhere. The news is filled each day with a constant drumbeat of reports detailing some kind of sophisticated attack that was carried out by unseen hackers who are located thousands of miles away. Cyber attacks are so ubiquitous that by some estimates they occur every 39 seconds. We hear about a random company’s massive data breach, spend a moment wondering if we or someone we know is a customer of the breached company, and then go on about our day. In many ways, we have become numb to the status quo. However, there are some types of attacks that shake us out of our collective stupor. These types of attacks feel personal, because they are.

Getting phished is one such attack. It causes one to second guess one’s choices with questions such as “How did we fail to recognize a fake login?,” “Have we used this username and password elsewhere?,” and “Why haven’t we been using a password manager?” Add the spectre of financial loss to these questions, and a sense of recrimination emerges that feels very personal. Even though we usually experience these feelings in isolation, research indicates that we do not suffer in silence. An American study conducted last year showed 24% of subjects admitted to having fallen for a phishing attack, and six in ten were unable to distinguish a fake Amazon login from the real one.

Who are the malicious actors that carry out phishing attacks? What do they want? What does their infrastructure look like? Where do they get their tools, and what tactics do they employ? DomainTools security researchers were able to provide some answers by analyzing the malicious infrastructure, commodity malware, and tool usage of a nascent phishing campaign that was set to target a large banking institution. Our efforts were aided in small part by the timeliness of the discovery and in great measure by the poor OPSEC practices of the scammers.

Fake Banking Login

Tesco Bank is a major financial services provider in the UK. Wholly owned by the British multinational groceries and general merchandise retailer Tesco PLC, the bank is home to 5.3 million customer accounts and holds over 15 billion GBP in customer assets. Tesco Bank also offers home and automobile insurance, further expanding its presence in the public sphere.

On November 18th, DomainTools security researchers noted the registration of a suspicious domain, www.tesco-banklogin[.]com. This domain piqued our interest not only because of the dubious naming convention used by the registrant, but also because we caught the registration on the same day the domain was created. As such, we felt this might be a prime opportunity to analyze the rollout of a phishing site in near real-time.

The domain registrar is Eranet International LTD, based in Hong Kong. The site was hosted by LLC Server v arendy, based in Moscow, RU. It utilized a free 90-day SSL certificate provided by ZeroSSL based in London, UK and Vienna, AT. Many certificate authorities now offer SSL certificates free for 90 days in an effort to compete with Let’s Encrypt’s cost-free offerings. It is important to note that DomainTools initially assigned a relatively high risk profile score of 81 (out of 99) within hours of the domain’s registration. This underscores the accuracy, timeliness, and expertise of DomainTools Machine Learning systems in carrying out threat identification. Customers who utilize DomainTools integrations with their SIEM or SOAR platform would have benefited from this timely recognition.  

What did this site look like shortly after creation? DomainTools researchers were able to monitor and analyze the website as it took shape. Fortuitously, the malicious actors failed to secure the webpage stub, leaving the site open for view as it was being constructed. As denoted by the creation date of the directories, the attackers wasted no time in getting set up for their upcoming campaign.

Code Analysis

We wanted to understand the attackers’ objectives by analyzing the tools they were employing. Therefore, we securely mirrored all of the content from the site for offline analysis. A look at tescolive[.]zip revealed that it is the front-end software for a phishkit: commodity malware coded with Bootstrap, PHP, Angular, JavaScript, and jQuery, purposefully modified for this group of bank scammers. The PHP code was packed, minified, and then base64 encoded in an effort to obfuscate its true purpose.
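While we won’t reproduce the malicious code itself, the unwrapping approach is straightforward. Below is a minimal sketch for peeling back such layers (the base64_decode wrapper pattern is an assumption based on common phishkit packing, not this kit’s exact layout; home.php is a file name the kit’s own documentation references):

import base64
import re

def unwrap_base64_layers(payload: str, max_layers: int = 10) -> str:
    """Repeatedly decode base64_decode("...") wrappers until none remain."""
    for _ in range(max_layers):
        match = re.search(r'base64_decode\(["\']([A-Za-z0-9+/=]+)["\']\)', payload)
        if not match:
            break
        payload = base64.b64decode(match.group(1)).decode("utf-8", errors="replace")
    return payload

with open("home.php", encoding="utf-8", errors="replace") as handle:
    print(unwrap_base64_layers(handle.read()))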

Index of the phishing kit

Hierarchy of files for Tescolive.zip

It is worth noting our luck in making this discovery. Due to exceptionally poor opsec, we found artifacts in the code revealing that it was edited on November 9th, nine days before this domain’s registration, specifically for use against Tesco. The code was purchased on the dark web from a known .onion site. It was authored by a person or persons known as KTS Team, and thoughtfully signed for the customer, DennisBergkamp[at]jabbr.org, who appears to be a football/soccer fan and who failed to remove this particular email artifact prior to posting the code to the www.tesco-banklogin[.]com domain.

<?php /* 
   
Page made by KTS team;
http://xxxxxxxxxxxxxxxx.onion/shop/;
Edited for  DennisBergkamp@jabbr.org  on  Mon 9 Nov 2020 14:21:22 EET 
*/ ?>

KTS Team provided a man.txt file to its customer that served as a useful guide for our code and function analysis. The errors in spelling and grammar on this page may indicate that the author is someone whose native language is not English.

Key words;
 zip           =  zip file you have been recived from me;
 fake          =  cloned web pages
 parent folder =  folder which contain all files, after unzip ;
 cfg           =  file "config.json" located on 'parent folder';
 uadmin link   =  link to uAdmin panel, for reciving logs from 'fake';
 step          =  single step with html form and inputs/fields collection, used for request information from client;
 777           =  permissions form specific folder/files wich mean xrw-xrw-xrw for all posable users;
 uadmin        =  Universal admin panel
 token page.   =  Can be known as (Live page|Intercept page). It is type of the page which can be controlled over admin panel in real time.

-After unzip 'zip' you will see 'parent folder'. Name for this folder can be changed to anything you like but it can not be removed and 'fake' MUST stay inside 'patent folder'.  After all in url you can only have unique folders name.
   good!:
      domen.com/parentfolder/..../login 
      domen.com/anyname/..../login 
      domen.com/sgi/..../login

   not good!:
      domen.com/name/name/..../login
      domen.com/domen/..../login
      domen.com/com/..../login
      domen.com/dome/..../login


-To connect 'fake' to 'uadmin' you have to edit 'home.php' file ,
        OR download new 'home.php' file from already live uAdmin (no edit needed). 
 If you choose to edit existing 'home.php' then open 'home.php' and find line with "http(s)://{domain|ip}/uadmin/gate.php" and change {domain|ip} to your domain name or ip, where uAdmin located.

-Set '777' for 'parent folder' -R

-To access page over browser you have to open 'parent folder' 
    https://fakedomain.com/*/{parent_folder_name}/       where {parent_folder_name} is name of your 'parent folder' 

-If page is 'token page' than after connect to 'uadmin' 
       -go to token page on uadmin
       -find connected page with green dot
       -press O-Panel on its row
       -press Operations settings
       -press Advanced 
       -empty pop up text box 
       -insert json-formated commands in the bottom of this text file
       -press "save" on pop up box
       -press "save" on current page 
       -go back to o-panel
  after this you will be able to see commands set for current page

Analyzing the man.txt file, we can see that the customer receives their order in the form of a .zip file and that the file contains code that creates a single-step front-end login page that emulates a legitimate institution’s front end. The scammer is meant to utilize a subdirectory naming convention when setting up the fake front-end login screen on the “token page.” In our case, the token page URL is www.tesco-banklogin[.]com/veri/. The author of the code provides a way to connect the fake token page to a customer-controlled administrative control panel, known as “uadmin,” or the “Universal Admin Panel.” Harvested credentials or tokens are retained in a SQLite database. The admin panel allows users to customize messages that can be presented to victims on the fake login page through the use of JSON formatted commands. The following image shows the Tesco login page that is presented to a potential victim.

Phishing login page (imitating Tesco)

The fake page is virtually indistinguishable from Tesco’s true page located at https://identity.tescobank[.]com. Unsuspecting victims may be sent the fake domain URL via a phishing email or SMS text message that masquerades as a legitimate communication from Tesco Bank. Usually, these emails contain a message to the effect that the victim’s account may have an issue that can only be rectified by logging in via the link provided.

Further code analysis shows that the author included an automated notification module that notifies an attacker via the Jabber messaging app whenever a victim has entered their credentials. This is important because this type of attack can be time-sensitive. Amplifying this, DomainTools tested the site’s web server response codes via a Burp Suite proxy and found that the site did not yet redirect us to Tesco’s legitimate site or any other site. It is unknown whether this feature was to be added at a later time. The lack of a redirect (after credential theft) and failure to provide a legitimate log-in experience can tip off a victim that the site is fake, raising figurative alarm bells. This underscores the need for a timely notification via Jabber. Upon notification, the attackers can attempt to leverage the purloined credentials to log into the victim’s true account via Tesco Bank’s online portal, set up a bank transfer, and drain the account of its funds. It is useful to note that two-factor authentication may help to mitigate this form of credential reuse.

Credential theft is not the only threat posed by this phishkit. We discovered a code artifact that seeks to install a custom APK that functions as a permanent backdoor on the victim’s device.

var php_js = {"device":{"isMobile":false,"isTablet":false,"isiOS":false,"isAndroid":false},"gets":[],"lng":"en","bb_link":"https://identity.tescobank.com/pf/adapter2adapter.ping","link":  "tesco.uk","apk_file":"http://test.com/file.apk","encryption":0,"texts":"{}",    "query":"","home":"../../../home.php","relative_root":"../../../","parent_folders":"a1b2c3/266c5260bde130cbd5f7f5ac3e469769/login/","fake_base":"login/"}   

According to the phishkit’s author, the APK is capable of performing screen grabs, keyboard logging, and data exfiltration. The APK is presented to the victim as a “Security Application Update” or “Banking Security Certificate.” Links to the APK’s source are defined in a separate configuration file. Unfortunately, true source locations were never defined by the attacker before the site was blocklisted. bb_link is the URL of the real banking site. apk_file is the location of the APK source. 

{
    "lng": "en",
    "bb_link": "https://identity.tescobank.com/pf/adapter2adapter.ping",
    "link": "tesco.uk",
    "apk_file": "http://test.com/file.apk",
    "encryption":0
}

It is useful to call attention to a couple of other features designed to mitigate anti-phishing efforts. We identified a .php file used for fast flux functionality.

//get real ip for Fat flux
function get_ip_address(){
    foreach (array('HTTP_CLIENT_IP', 'HTTP_X_FORWARDED_FOR', 'HTTP_X_FORWARDED', 'HTTP_X_CLUSTER_CLIENT_IP', 'HTTP_FORWARDED_FOR', 'HTTP_FORWARDED', 'REMOTE_ADDR') as $key)
    {
        if (array_key_exists($key, $_SERVER) === true){
            foreach (explode(',', $_SERVER[$key]) as $ip){
                $ip = trim($ip); // just to be safe
                if (filter_var($ip, FILTER_VALIDATE_IP, FILTER_FLAG_NO_PRIV_RANGE | FILTER_FLAG_NO_RES_RANGE) !== false)
                {
                    return $ip;
                }
            }
        }
    }
}
$_SERVER['REMOTE_ADDR'] = get_ip_address();

Roughly speaking, fast flux works by rapidly cycling through a database of IP addresses used by scammers for hosting the phishing website. This is useful for malicious actors because this tactic can circumvent defenses that use dynamic IP address blocking as a primary means of protection. Unluckily for us, this .db file was never populated with an array of IP addresses. 

We detected other anti-phishing countermeasures in addition to fast flux. This came in the form of a blocklist file used to block search engine crawlers from cataloguing the content of the fake site. Apart from search crawlers, the code employed an evasion mechanism in an effort to hide from scanners used by security research companies that specialize in finding and detecting phishing sites. This blocklist code, updated for this campaign on November 9th and aptly named antibot.php, used a mix of detection methods. These included an array of wildcarded IPv4 addresses totalling three million individual addresses, in-URL string detection, and user agent identification to detect a potential threat. Remote addresses, strings, or user agents that matched the banned criteria caused the session to die, with a standard out message providing a 404 response code. The following code uses a predefined array of IP addresses to identify those that it deemed a threat as well as keyword string identification.

foreach($bannedIP as $ip) {
    if(preg_match('/' . $ip . '/', $_SERVER['REMOTE_ADDR'])){
        header('HTTP/1.0 404 Not Found');
        die("<h1>404 Not Found</h1>The page that you have requested could not be found.");
    }
}

$hostname = gethostbyaddr($_SERVER['REMOTE_ADDR']);
$blocked_words = array("above","google","softlayer","netcraft","amazonaws","cyveillance","phishtank","dreamhost","netpilot","calyxinstitute","tor-exit",);
foreach($blocked_words as $word) {
    if (substr_count($hostname, $word) > 0) {
        header("HTTP/1.0 404 Not Found");
        die("<h1>404 Not Found</h1>The page that you have requested could not be found.");
Fittingly, we should call attention to one prominent artifact found in the code: kaktys_encode. This function was used here in part to obfuscate HTML.

document.write(_kaktys_encode("<?php echo base64_encode($html) ?>")); </script>

We discovered the unique term, kaktys, throughout the code base. This isn’t anything novel as far as phishing kits go, but we did want to see how far we could pull on this lead since we had a unique opportunity owing to poor opsec by uadmin’s author.

Dark Web Shop

Turning our attention to the phish kit author’s site on the dark web, we were immediately presented with this prominent pop-up. 

Big red warning message

The handle kaktys1010 belongs to a commodity malware creator whose claim to fame is the creation of the aforementioned “uadmin” control panel for phishing scammers. Kaktys1010 was first seen in business in February of 2017, and now operates an extensive “shop” on the dark web advertised as KTS Team. A look at the KTS Team page provides a clearer picture of the functionality of the malware.

KTS Team’s offerings include web and Android injects targeting 90 financial institutions whose operations are primarily based in Europe and South America. This includes desktop browser-based web injects, Android-focused mobile application web injects (some of which will drop an APK file), and one uncategorized web inject targeting a bank based in Poland. Customers can sort KTS Team products by “Tags,” “Categoties” [sic], and “Countries.”

Three columns of Tags, Categories, & Countries with folders and the number files in each folder listed below.

One can gather which banking institutions are of greatest interest by using the sorting function conveniently offered by the site’s designer. The following page depicts products sorted by popularity.

KTS catalogue website screenshot

ING Group, currently occupying the number two position in popularity, was listed as the most recent addition to KTS Team’s catalogue. However, we do expect the catalogue to be updated with the Tesco Bank web inject at a later date.

Web injects ranged in price from 60 to 500 dollars; the most expensive web inject targets RBC Royal Bank. We initially thought it might be possible to match phishkit complexity with asking price; however, adding one to the shopping cart and checking out prompts the visitor to log in to exploit[.]in, a semi-exclusive underground forum that caters mostly to Russian speakers, where the visitor would presumably enter into negotiations with KTS Team.

An extensive level of effort was put into designing the phishkits. The following images show the mobile screen presented to a victim along with the backend uadmin control panel. The mobile prompt asks the victim to download a “Security Application Update,” which is in reality a malicious Android APK. When paired with the uadmin control panel, a scammer is offered a complete picture of the victim’s session, credentials, and user agent.

Post Finance Security Application Update Screen on Mobile Device

Admin side for the Phishing exploit. Here the admin can see logins, passwords, info, and more details of dynamic/live data

Mapping Infrastructure to Commodity Offerings

Cataloguing the financial institutions targeted by KTS Team’s phishkits assisted us in our quest to understand the extent of the current Tesco-related phishing campaign. Starting with the www.tesco-banklogin[.]com domain, we were able to match domains that we discovered through DomainTools Iris guided pivots with some of the financial institutions listed in KTS Team’s catalogue.

DomainTools Iris Screenshot of tesco-banklogin[.]com

The guided pivot using the hosting IP address showed 32 other domains whose naming conventions were highly suspicious. It is notable that all of the affiliated domains have been assigned a risk rating of 100, which indicates that they have made their way onto a blocklist. Of note, halifax-login[.]com, halifaxpersonal[.]com, halifax-online-login[.]com, tsb-banking[.]com, hsbc-heldesk[.]com, and hsbc-net[.]com are all convincing banking phishing domain names.

Iris Map of connected domains

Turning our focus to HSBC bank, we noted that the KTS Team mobile inject login is a close facsimile of the one used on HSBC’s actual site. 

When we pair this with one of the HSBC-related domain names discovered via our guided pivot, such as hsbc-heldesk[.]com or hsbc-net[.]com, it is easy to see how unsuspecting users can fall for a scam.

Conclusion

We can see by the use of extensive infrastructure, sophisticated platforms, anti-security tools, and savvy employment of social engineering that phishing scams will not be going away. While this phishkit and other mobile injects mentioned are clearly commodity malware that require a low level of sophistication to implement and exploit, we still find value in chasing them down to their fullest extent, as we have in this blog. At DomainTools, we will continue to pull and analyze these interesting phish kits when found. We will chart the trajectory of their features and the capabilities of their authors in order to further inform threat landscape topography and improve defenses for security teams worldwide. Using these code analysis and infrastructure mapping steps can help teams to understand the fullest extent of phishing campaigns targeting their corporate infrastructure.

This work would not have been possible without the generous assistance of Matthew Pahl, former DomainTools Security Researcher and Tarik Saleh, former DomainTools Senior Security Engineer, whose expertise was crucial in analyzing this campaign.

Top Sources of Domain and DNS Logging
https://www.domaintools.com/resources/blog/useful-sources-of-domain-and-dns-logging/ | Thu, 13 Feb 2025

Domain and DNS Logging

This final post is about other log sources for DNS and domain logs not otherwise covered in the previous two posts. It ends with a note on challenges to anticipate, as well as ideas for next steps beyond logging.

If you have not done so already, have a look at the previous posts in this series.

Log Sources Containing Relevant Log Data

There are additional event log sources that contain valuable metadata. From IDS/IPS tools to firewall and mail exchange logs, extract IP addresses, hostnames, and other metadata to further inform IR and threat hunting work. IP addresses can be used to trace back and investigate infrastructure, or to conduct a reverse IP-to-hostname check to pivot towards associated domains and find the Threat Profile and Risk Score. The first half of this section surveys these sources; the second half walks through in-depth examples.

Logging Data with Managed DNS Providers

Amazon Route 53 DNS Query Logging and CloudWatch

Configure Amazon Route 53 to log information about the public DNS queries that Route 53 receives (see the AWS Developer Guide). The available metadata is similar to other sources of DNS query logging: the domain or subdomain requested, date and timestamp, DNS record type, DNS response code, and the Route 53 edge location that responded to the query. If you use Amazon CloudWatch Logs to monitor, store, and access the DNS query log files, you can also stream these logs to your LM or SIEM instance.
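As a rough sketch of enabling this programmatically with boto3 (the hosted zone ID and log group ARN are placeholders, and the CloudWatch Logs resource policy setup that Route 53 requires is omitted):

import boto3

route53 = boto3.client("route53")

# Associate a hosted zone with a CloudWatch Logs log group so that
# Route 53 publishes DNS query logs there.
route53.create_query_logging_config(
    HostedZoneId="Z0000000EXAMPLE",
    CloudWatchLogsLogGroupArn="arn:aws:logs:us-east-1:123456789012:log-group:/aws/route53/example.com",
)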

Google Cloud DNS

Google Cloud DNS logging tracks queries that name servers resolve for VPC (Virtual Private Cloud) networks. 

Other managed DNS providers are available; check their documentation to see how to set up query and response logging.

Proxy Server Logs 

Proxy server logs are a common source of information for domain metadata. These logs contain requests that are made within the network from both users and applications. 

DNS Packet Logs

In addition to capturing and listening to packets with network analysis tools, it’s also possible to set up packet logging and configure it to collect only certain protocols, such as DNS. However, packet capture and packet logging provide only the DNS-related metadata. It is not possible, for example, to obtain more details about the host on which the event originated, or which specific user or user action triggered it.

Logs Generated from IDS/IPS Tools

Logs generated from IDS/IPS tools, such as rules and alerts, can be collected and forwarded on to your LM/SIEM. Examples are:

  • Zeek (formerly Bro) DNS query and response logging to collect and obtain DNS queries and responses, also used in conjunction with Domain Hotlist (a parsing sketch follows this list)
  • Snort DNS rules that inspect DNS query responses and take action based on the response back. 
  • Suricata DNS rules to log and collect related events, create event-based actions such as matching DNS queries to a blocklist (i.e., Domain Hotlist), or writing log events to collect DNS query and response logging.
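As referenced above, here is a minimal sketch for pulling queried domains out of a Zeek dns.log for comparison against a blocklist such as Domain Hotlist (assuming Zeek's default TSV field layout; the file path is illustrative):

# Extract unique queried domains from a Zeek dns.log (TSV with a #fields header).
def extract_queries(path: str) -> set:
    queries = set()
    fields = []
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            if line.startswith("#fields"):
                fields = line.rstrip("\n").split("\t")[1:]
            elif not line.startswith("#") and fields:
                record = dict(zip(fields, line.rstrip("\n").split("\t")))
                if record.get("query", "-") != "-":
                    queries.add(record["query"])
    return queries

print(sorted(extract_queries("dns.log")))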

Mail Exchange Server Events

There are a few use cases to leverage logs generated by your mail exchange server:

  • Detect known phishing/spam email infrastructure.
    • Sent by known phishing or bad infrastructure.
    • Example: “Call me at 1-888-382-1222 to set up your VPN – Matthew (from Tech Support).”
  • Detect known phishing/spam links in the email body.
    • Messages containing phishing links, where a user action has been performed based on subsequent event logs (i.e., the user clicks on the link and the queried domain appears in the logs).
      • Example: “Click here to download your VPN cert – Matthew (from Tech Support).”
  • Further investigate yet-unknown phishing indications.

Event Sources for Exchange DNS Logs

  • MSExchange Management channel on EventLog (called “MSExchange Management”)
  • Message Tracking Logs, where the default location is %ExchangeInstallPath%\TransportRoles\Logs\MessageTracking

The MSExchange Message tracking log has fields that are populated with metadata for threat hunting and DomainTools investigations. They include:

  • client-ip: The IP of the messaging server/client that submitted the message.
  • client-hostname: The hostname/FQDN of the messaging server/client that submitted the message.
  • server-ip: The IP of the source or destination server.
  • server-hostname: The hostname/FQDN of the destination server.
  • A source value field includes DNS as a known source.

The client IP and client hostname answer the question of which infrastructure sent the message, which the analyst can use to seek out more information about that infrastructure.

The server IP and server hostname identify the target address, answering the question of where the message was aimed.

Proactive Defense Note: Use DomainTools API products or the Iris investigation platform to further investigate the metadata captured in these fields. An example workflow with DomainTools Iris (a parsing sketch of the first steps follows this list):

  • Hostnames are extracted from the client-hostname field out of MSExchange logs.
  • The domains are extracted, and the resulting domain list is then imported into Iris.
  • The Domain Risk Score associated with each domain indicates the risk level for that domain.
  • Further decisions:
    • Correlate between the client-hostname and the server IP.
    • Isolate that target server IP, or even set up more logging to find other IOCs (EventIDs and other events that could be generated to indicate further compromise or lateral movements).
  • Add to blocklist, in addition to using Domain Hotlist. 
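As referenced above, a minimal sketch of the first steps (the file name is a placeholder, and the field layout assumes the standard message tracking CSV format with its #Fields: header):

import csv

# Extract client-hostname values from an Exchange message tracking log.
hostnames = set()
with open("MSGTRK.LOG", encoding="utf-8") as handle:
    header = []
    for line in handle:
        if line.startswith("#Fields:"):
            header = line[len("#Fields:"):].strip().split(",")
        elif not line.startswith("#") and header:
            row = dict(zip(header, next(csv.reader([line]))))
            if row.get("client-hostname"):
                hostnames.add(row["client-hostname"].lower())

# The deduplicated list can be reduced to registrable domains and imported
# into Iris for Domain Risk Score lookups.
print(sorted(hostnames))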

Logging IP Data with Windows Firewall Event Log Channel

Other sources of log data holding valuable IP data include firewall logs from the Windows Firewall EventLog Channel. 

Proactive Defense Note: Use the Domain Hotlist as a blocklist on the firewall network perimeter. 

“Finding Advanced Attacks and Malware With Only 6 Windows EventIDs” by Michael Gough (aka Malware Archaeologist) provides additional information on the lifecycle of the EventIDs that signal an attack. Event ID 5156 fires when the Windows Filtering Platform allows a program to connect to another process (on the same or a remote computer) on a TCP or UDP port. The valuable metadata of this event source indicates the command and control or origin of the attack and which application was used to communicate with the external or internal IP address.

Proactive Defense Note: Use DomainTools Iris Investigate API, Classic Tools APIs, or the Iris investigation platform to further investigate this IP. Results include – finding IP infrastructure information, revealing connected infrastructure, finding connections of this IP address, reverse lookup of the IP address.

Windows IIS Server Logging

<13>Sep 9 17:38:25 IIS-SERVER 2020-09-09 17:38:25 MALICIOUS_CLIENT_IP_HERE GET /welcome.png - 80 - ::1 Mozilla/5.0+(Windows+NT+10.0;+Win64;+x64;+rv:69.0)+Gecko/20100101+Firefox/69.0 http://localhost/ 200 0 0 11

The above example features a Syslog-formatted log event generated by a locally-hosted (test) IIS server.

For example, if there is an unusual number of requests from a client IP (indicated by MALICIOUS_CLIENT_IP_HERE), an analyst may want to verify if the associated IP is malicious.

Proactive Defense Note: Conduct a reverse DNS check (i.e., find the hostname(s) associated with a client IP). From the hostname, check the infrastructure and its associated Risk Score or Threat Profile either in Iris or with the API.
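A minimal sketch of that check in Python (203.0.113.10 is a documentation address standing in for MALICIOUS_CLIENT_IP_HERE):

import socket

# Reverse DNS: find the hostname associated with a suspicious client IP.
try:
    hostname, _aliases, _addresses = socket.gethostbyaddr("203.0.113.10")
    print(f"Resolved hostname: {hostname}")
except socket.herror:
    print("No PTR record found for this IP")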

Challenges to Domain and DNS Logging

Increasing SIEM and LM Storage Requirements and Costs

Storage and licensing, which are tied to infrastructure, may require increased capacity due to the additional logging requirements. However, there are workarounds, including targeted logging (selectively collecting only certain EventIDs or logging channels), deduplicating logs, dropping unnecessary metadata fields from a log event, batch compression logging, and more.

Other related costs that may also increase include:

  1. Subscription costs. These are instances where the log agent or the LM/SIEM incorporate data ingestion-centered subscription costs. There may be users or customers that are using Community Edition or free agents that have event ingestion limitations.
  2. Personnel costs. Administrators or other specialized personnel may be required to deploy on instances or deployment scenarios that require agent-based logging. Certain endpoints also require additional work, for example, an endpoint that requires additional settings to be configured to deploy logging.

Logging-Related Infrastructure Issues

Logging queries and requests can be problematic due to issues like server performance degradation from the extra processing work, storage limitations, and other related challenges. Each time a query or response event occurs, the DNS server must not only interpret the telemetry event as a log source to collect, but also write the event to a log file (in a specified format) and then send it to an external destination. Additional parsing or other enrichment may impose further performance costs on the resource.

Structuring the Log Process in a SIEM

Event logs need to be ingested by the SIEM suite, which has its own fields and schemas. Additional specialist modules and add-ons may be required (beyond what is offered by the log collection agent) in order to properly ingest and enrich the logs. For example, the Message field in the Windows Event Log may hold valuable information about the event itself, which needs to be extracted. Linux DNS logs can be written in a number of formats, which also need to be normalized for ingestion into a SIEM. This normalization process can add an additional performance burden.

Configuring Domain and DNS Logs

The next steps are to build the configurations to collect, parse and enrich events from these sources.

While it is out of scope for this post to cover the maze of event source configurations for the myriad available platforms, it is worthwhile to reiterate a few ways to leverage them.

Craft Sigma and Other Detection Rules

There are Sigma rules already crafted for scenarios such as detecting C2 servers or detecting a high number of queries to a single domain. Sigma rules drive consistency in threat detections: after a Sigma rule is crafted, it is shared (or converted) so that whichever endpoint needs to consume the rule, such as a particular SIEM platform, can do so.

Other detection rules are also crafted and shared online.

Event Source Coverage Using DomainTools API Products

Effective event source coverage means investigators can make the most of what DomainTools integrations have to offer. For example, go beyond relying on proxy logs when you can also extract valuable domain and other metadata from other event sources. In addition, DomainTools API products can be used to develop your own integrations. Get started with the API documentation, which includes no-charge sample queries.
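As a rough illustration of such an integration—the endpoint path, parameter names, and response fields below are assumptions, so treat the official API documentation as authoritative:

import requests

# Hypothetical Iris Investigate lookup for a single domain.
response = requests.get(
    "https://api.domaintools.com/v1/iris-investigate/",
    params={"domain": "example.com", "api_username": "USER", "api_key": "KEY"},
    timeout=30,
)
response.raise_for_status()
for result in response.json().get("response", {}).get("results", []):
    print(result.get("domain"), result.get("domain_risk", {}).get("risk_score"))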

Use DomainTools Integrations with your SIEM

In addition to APIs, use DomainTools integrations to find threat intelligence in your domain metadata. 

Increase the Visibility of Your Linux DNS Servers with Log Collection
https://www.domaintools.com/resources/blog/increase-the-visibility-of-your-linux-dns-servers-with-log-collection/ | Tue, 11 Feb 2025

Log Collection on Linux and other Unix Platforms

Welcome to the fourth part of this series on DNS and domain logging. The aim of this post is to focus on log collection on Linux and other Unix-like platforms. While enterprise environments nowadays are hybrid rather than completely homogeneous, we decided to use this post to focus on another telemetry event standard: the syslog daemon (syslogd) and Syslog message logging.

This post will start with a sample log collection deployment featuring a Linux DNS server, followed by a brief overview of the meaning of these log source samples, and finish off with Linux auditing. If you are working with Windows Event Log data and Microsoft deployments, please see part 3 of this series.

Linux Log Deployment Scenario

List of DNS Servers that support Linux/Unix-like platforms:

  • BIND 9 ISC DNS Server.
  • NLnet Labs NSD DNS Server and Unbound (NLnet Labs also created one of the first DNSSEC implementations).
  • PowerDNS Authoritative Nameserver and Recursive Server (includes a NOD, or Newly Observed Domains, feature).
  • Knot DNS Auth Name Server.
  • CoreDNS DNS Server.
  • Dnsmasq DNS Forwarder.

From Source to SIEM

The following is a Linux DNS server example deployment, featuring one infrastructure source: the BIND 9 DNS Server.

From Source (BIND 9 DNS Server and Audited Events)

Starting with a BIND 9 DNS server, two main sources of telemetry are defined: audit logging rules and the DNS server configuration file, which is used to define a variety of logging rules.

Example Source 1: Audit Logging Rules 

These define the rules for collecting audit logs, which can reveal potential infrastructure attacks such as unauthorized DNS server configuration changes. Attacks that change file or folder permissions on the configuration file, delete logs, or disable logging are other forms of infrastructure attack. A lack of visibility into these events forms the breeding ground for lateral movement attacks originating from the DNS server and other associated infrastructure.

Example Source 2: DNS Server Configuration File

A portion of the DNS server configuration file covers the server logging criteria, such as:

  1. The types of DNS events to log (query only, query and response)
  2. How detailed the events are (such as verbose)
  3. The types of events to collect (such as warning only events or all events ranging from informational to warning)

The actual logging criteria and configuration will differ for each DNS server implementation. For BIND 9, here is one such example of the logging statement grammar from its user manual:

logging {
  category string { string; ... };
  channel string {
    buffered boolean;
    file quoted_string [ versions ( unlimited | integer ) ]
         [ size size ] [ suffix ( increment | timestamp ) ];
    null;
    print-category boolean;
    print-severity boolean;
    print-time ( iso8601 | iso8601-utc | local | boolean );
    severity log_severity;
    stderr;
    syslog [ syslog_facility ];
  };
};

To SIEM

Diagrams of Data Lake & Log enrichment combining into SIEM examples

Within the BIND DNS configuration file, one can specify certain events to be written to disk, or instead sent via a TCP/UDP stream directly to a log management server. In the BIND 9 DNS server, this is defined in the channel portion of the configuration file, which sets the destination of the messages selected for the channel—either to a file, to a particular syslog facility, to the standard error stream (stderr), or to null (discarded). The events in this deployment sample, just the security events, are sent to the Syslog stream and written to disk (/var/log/security_events.log).

The events should be written in a standard format. There are various ways to write and transport event data. For Syslog, these include BSD Syslog, IETF Syslog (the newer standardized format), CEF (Common Event Format, an ArcSight implementation), LEEF (Log Event Extended Format, a proprietary format related to IBM QRadar), and Snare Syslog. There are also other accepted formats, like JSON.

The appropriate way to enrich and write events is detailed in your SIEM and log collector documentation. Regardless, enrichment happens as part of the log collection process on the DNS server as well as on the log centralization server, depending on the deployment requirements. On the SIEM, events may be consumed and presented as “raw” events if they have not been correctly enriched. There may also be additional rules to route these logs, as not all logging has immediate security value—logging for debugging purposes, for example.

From the SIEM side, further work is to be done to enrich the logs for analysis, automation rules, and more. For example, it needs to be clear that a key-value pair for domain metadata exists, as it forms the basis for the DomainTools App for Splunk, and the metadata source should be relevant regardless of the platform source.

BIND DNS Server Event Sample

About BIND 9 DNS Server

BIND is widely used open source DNS software; BIND 9 is currently maintained by the ISC (Internet Systems Consortium). BIND software includes a Domain Name Resolver, which resolves queries about names by communicating them to appropriate servers and responding to the servers’ replies. The BIND Domain Name Authority server answers requests from resolvers. Below is an example from the Resolver.

BIND 9 named.conf Example

The configuration files for BIND 9 logging cover the how, what, and where. Below is an example to log queries into a file.

logging {
        channel query {
                file "/var/log/queries.log";
                print-severity yes;
                print-time yes;
        };
        category queries { query; };
};

BIND 9 DNS Resolver Snippet

As a semi-structured log message, below is a sample log from a test environment for an example.com query:

30-May-2020 11:11:11.553613 queries: info: client @0x7f39604b8660 127.0.0.1#50587 (example.com): query: example.com IN A +E(0)K (127.0.0.1)

While the above is human-readable, it is not necessarily consumable by your SIEM and may be presented as a raw log, unusable for further processing. Once parsing and enrichment take place, the unstructured message is split into key-value pairs like these:

EventTime: 2020-05-30T11:11:11.553613+01:00
Category: queries
Severity: info
Client: 127.0.0.1#50587
Address: 127.0.0.1
Query: example.com
Type: A
Class: IN
Flags: +E(0)K

The metadata residing in important fields, such as the Query field, therefore has value for additional processing. For example, one of the DomainTools integrations revolves around the ability to process domains that appear within the network perimeter; you will want each Query to be processed as part of this work since, after all, it is part of an event that has taken place in the perimeter. Also notice that the event is already logged with high-precision timestamps: the EventTime is in the ISO 8601 standard, with microseconds and a timezone offset (UTC+01:00) at the end, ensuring consistency and accuracy.
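As a rough illustration of that parsing step, the following sketch extracts the key-value pairs from the sample line above (the regular expression is tailored to this exact format and would need hardening for production use):

import re

BIND_QUERY_RE = re.compile(
    r"(?P<EventTime>\S+ \S+) (?P<Category>\w+): (?P<Severity>\w+): "
    r"client (?:@0x[0-9a-f]+ )?(?P<Client>[\d.]+#\d+) \([^)]+\): "
    r"query: (?P<Query>\S+) (?P<Class>\S+) (?P<Type>\S+) (?P<Flags>\S+) "
    r"\((?P<Address>[\d.]+)\)"
)

line = ("30-May-2020 11:11:11.553613 queries: info: client @0x7f39604b8660 "
        "127.0.0.1#50587 (example.com): query: example.com IN A +E(0)K (127.0.0.1)")
match = BIND_QUERY_RE.match(line)
if match:
    for key, value in match.groupdict().items():
        print(f"{key}: {value}")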

Linux DNS Audit Logging

Apply audit logging to your DNS server in order to track security-relevant events. Applying audit logging rules allows more targeted security events to be tracked. Knowing more about the auditing system on your platform is useful as you set up audit logging rules and read the events; below is an example.

BIND 9 Linux Audit Logging Sample:

Please see your platform documentation for the auditing rules (if you don’t have it at hand, you can refer to the RHEL documentation on Linux audit logs).

To watch /etc/bind/named.conf for modifications and add a tag:

-w /etc/bind/named.conf -p wa -k conf-change-bind9

Result snippet in key value pairs (not raw log) format:

type: CONFIG_CHANGE
UID: 0
comm: nano
exe: /bin/nano
Key: conf-change-bind9
EventTime: 2020-05-30T12:19:20.055718+01:00

A rule is added to create logs on any write access to, or attribute change of, the file named.conf on the BIND 9 server. When this is observed, a log is recorded with the tag conf-change-bind9.

The result snippet should be human-readable. CONFIG_CHANGE is the audit record type observed. The uid, which is 0, indicates that a superuser account changed the configuration file with the nano editor (invoking the nano command).

With audit logging set up for these and other events, the administrator can refer to their platform documentation to read the audit rules. One can also follow the trail of audit logs; for example, to see which utility was used to change the configuration file.
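On platforms using auditd, the recorded events for this rule can be retrieved by their key; for example:

ausearch -k conf-change-bind9 -i

The -i flag interprets numeric values (such as uid 0) into human-readable names.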

Final Questions

With this post, you have been introduced to a sample BIND 9 DNS log collection deployment on a Linux/Unix-like platform, with an event sample of both a resolver event and an auditing event of a server configuration change. Feel free to go back to the beginning and check whether your deployment has covered the following:

  1. What monitoring information on DNS and domain activity can be obtained from your existing sources?
  2. Which areas need to be expanded in order to improve your logging visibility, in particular for metadata containing domain and DNS logs?
  3. What other opportunities are there to increase your log collection scope overall?
  4. Which of these sources lead to better intelligence for your own use cases? 
  5. Is your current log deployment meeting your final-phase needs in terms of intel for threat hunting, incident response, orchestration, automation, etc.?

While it is out of scope to dive deep into actual deployment details (such as more configuration samples, DNSSEC logging, and more), you can check your platform and DNS server documentation, as well as your collector and SIEM documentation, to see what configurations need to be applied and what options are available.

Use DomainTools integrations and APIs to further enrich relevant events with DNS and domain intelligence—Domain Risk Score—as well as use domain indicators of compromise (IOCs) with the Iris platform. Use DomainTools as part of your proactive and reactive defense, in addition to targeted logging.  

Strengthen Your Defense with Targeted Windows DNS Logging Techniques
https://www.domaintools.com/resources/blog/strengthen-your-defense-with-windows-dns-logging/ | Tue, 12 Nov 2024

Improve Defense via Targeted DNS Logging

The aim of this post is to introduce you to log collection on the Microsoft Windows platform. We will start with an illustration of a Windows source-only log deployment, followed by a collection of chosen fields from log samples and a brief description of these sources. The last part will be on audit logging, as it holds an important role in ensuring infrastructure defense.

Windows Log Deployment Scenario

In order to begin the investigations covered in the second part of this post, analysts and incident responders need to be armed with the relevant DNS and domain logs, thus giving them visibility over relevant events occurring on the network. Your organization may already be deploying a form of centralized logging at some sort of scale similar to the one shown below. If not, there is always room for improvement!

From Source to SIEM

On the left-hand side are possible log sources. I defined a small portion of a sample log deployment, even though the actual number of unique log source endpoints is much greater than what is illustrated here. Moving to the right of the diagram, we see how logs may be forwarded to a log management server or data lake, then on to a SIEM for further actions.

From These Event Sources…

The source side of the diagram illustrates a few potential sources. A client machine is shown to represent the client-side events that need to be collected. I have added labels to show a few samples of the types of events to collect—such as events generated by Windows Sysinternals System Monitor (Sysmon), subscriptions to high-value EventIDs, and channels. There are also log sources not using Windows Event Log telemetry, such as file-based logging, and other log sources grouped in the “other predefined collection processes” area (represented by icons): Windows Firewall, PowerShell logging, and ingress logging.

There are three servers that have been defined in this example:

  • An on-premises mail server, the Exchange server. It is set up to collect Exchange Message Tracking Logs, whose metadata contains IP and hostname information.
  • The Windows DNS server. Log collection is set up on the DNSServer Windows EventLog Analytic channel, as well as audit logging. Collection may also be manually enabled and set up to collect DNS Debug log events.
  • The Active Directory server. This server is a high-value target for many reasons. Log collection is set up to collect GPO or Group Policy Object logs, as well as Audit logs.

There are many other log sources that provide valuable intelligence for DomainTools investigations and integrations, such as firewall events, Windows IIS server logs, ingress authentication attempts, and more. You may even deploy a baseline collection with an option to enable expanded, more verbose collection on a suspected target machine.

…to the SIEM for Integrations, Automation, and Analysis


Windows subscriptions may be deployed to pull only Windows EventLog events (via WEF, Windows Event Forwarding).  From these client/server sources, such events are forwarded to a Windows Event Collector.

Another collector, either provisioned by the SIEM or a third party, will also be deployed to collect more events, expand visibility, and ensure that other types of logs—such as file-based logging—are not left behind.

These events may eventually be forwarded to a data lake or a log management server. Not all logs make it to a SIEM, for reasons such as storage, infrastructure, and licensing costs, and because not all events have a SIEM-supported use case.

The process of log normalization is important, since entries are not written in a standard format. In addition to normalizing the logs, parsing and enrichment are applied to these events. These processes are done in transit by the collector and/or on disk, either on the source itself or on the log management server.
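To make that concrete, here is a minimal Python sketch of normalization in transit: differently named source fields are mapped onto one target schema before indexing. The mapping table and target field names are illustrative assumptions (loosely modeled on Elastic Common Schema), not a DomainTools or SIEM-specific schema.

import json

# A minimal normalization sketch. FIELD_MAP and the target field names are
# illustrative assumptions, loosely modeled on Elastic Common Schema.
FIELD_MAP = {
    "sysmon": {
        "QueryName": "dns.question.name",
        "UtcTime": "@timestamp",
        "Image": "process.executable",
    },
    "dns_debug": {
        "ip": "source.ip",
        "proto": "network.transport",
        "date": "@timestamp",
    },
}

def normalize(source, raw):
    """Map source-specific field names onto one common schema."""
    mapping = FIELD_MAP[source]
    return {mapping[key]: value for key, value in raw.items() if key in mapping}

print(json.dumps(normalize("sysmon", {
    "QueryName": "example.com",
    "UtcTime": "2020-10-29 11:32:43.274",
    "Image": r"C:\Windows\System32\PING.EXE",
}), indent=2))

A collector or log agent performs essentially this translation (plus enrichment) on every event, which is why field mappings are usually maintained centrally rather than per source.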

Once the logs reach the SIEM, they are used for additional integrations, to furnish any other actions to be done (triaging), enrich analysis, trigger alerts, and so on.

Windows Event Log

So far we have covered a log deployment example. The following are example fields from selected Windows DNS logs to give you an idea of the types of information provided. Note that these have already been enriched and written to another format, and they are only snippets, not full logs.

Windows DNS Client Sources

Windows DNS client event sources specific to DNS events include:

  • Windows Event Log channels (DNS Client Events – Operational)
  • Sysmon Event ID 22 DNSQuery
  • Event Tracing for Windows (Microsoft-Windows-DNS-Client ETW Provider)

Sysmon Event ID 22 DNSQuery Sample

"Hostname": "sld.tld."
"Severity": "INFO"
"EventID": 22
"Source": "Microsoft-Windows-Sysmon"
"ProviderGuid": "{5770385F-C22A-43E0-BF4C-06F5698FFBD9}"
"Channel": "Microsoft-Windows-Sysmon/Operational"
"Domain": "NTAUTHORITY"
"AccountName": "SYSTEM"
"UserID": "S-1-5-18"
"AccountType": "User"

In the above example, we can see that this event is from an instance called “sld.tld.” Since we know the originator of this event, it is useful for finding the source should an incident response investigation be needed. The Domain, AccountName, and security identifier (SID) fields (here, UserID S-1-5-18) may also be useful.

"Message":
Dns query:
RuleName:
UtcTime: 2020-10-29 11:32:43.274
ProcessGuid: {b3c285a4-5f1e-5db8-0000-0010c24d1d00}
ProcessId: 5696
QueryName: example.com
QueryStatus: 0
QueryResults: ::ffff:93.184.216.34;
Image: C:\Windows\System32\PING.EXE

Windows events include a Message field. From the Message field (which has been parsed here), we can see that the Sysmon Event ID 22 DNSQuery event was emitted because the ping command was used with example.com.

What is a sample application of Sysmon Event ID 22 logging?

Let’s say that example.com is a domain of high interest. One of the app’s features is enriching results with a Risk Score, and in this case, example.com obtained a high risk score of 90. With the logs that have been collected, we can trace back to the date, time, hostname, account, utility program, and so on. Further incident response actions, such as finding the originating source, can help with isolation and containment measures for an instance, as well as provide a use case to begin more extensive logging of a targeted machine. Assuming that 600 million events per day are collected, isolating such incidents to a specific timeframe may also help reduce MTTD/MTTR when finding other IOC EventIDs.
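As a rough illustration of that workflow, the Python sketch below flags Sysmon Event ID 22 records whose queried domain scores above a risk threshold. The field names come from the parsed sample above; the risk_lookup callable and the threshold of 70 are stand-ins, not the DomainTools API.

import json

HIGH_RISK = 70  # assumed threshold, not a DomainTools default

def flag_dns_query(event, risk_lookup):
    """Return an alert record when a Sysmon EID 22 query hits a risky domain."""
    if event.get("EventID") != 22:
        return None
    score = risk_lookup(event["QueryName"])
    if score < HIGH_RISK:
        return None
    return {
        "host": event["Hostname"],
        "image": event["Image"],
        "time": event["UtcTime"],
        "domain": event["QueryName"],
        "risk_score": score,
    }

event = {
    "EventID": 22,
    "Hostname": "sld.tld.",
    "UtcTime": "2020-10-29 11:32:43.274",
    "QueryName": "example.com",
    "Image": r"C:\Windows\System32\PING.EXE",
}
# Stand-in lookup; a real deployment would call an enrichment service here.
print(json.dumps(flag_dns_query(event, lambda domain: 90), indent=2))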

Further reading:

There is interesting research available in Evading Sysmon DNS Monitoring, and in Using Sysmon And ETW For So Much More from Binary Defense, which contains a section on leveraging DNS and Sysmon events.

Windows DNS Server Sources

The sources for DNS server events are:

  • Event Tracing for Windows (Microsoft-Windows-DNS-Server-Service, Microsoft-Windows-DNSServer ETW Providers)
  • DNS Debug log file
  • Windows Event Log channels

ETW / Windows DNS Service Provider Source

The following is a snippet from the Microsoft-Windows-DNSServer/Analytical channel for Event ID 260. It is only a portion of an event tracing example in key-value pairs.

"Source": "Microsoft-Windows-DNSServer"
"ProviderGuid": "{EB79061A-A566-4698-9119-3ED2807060E7}"
"EventId": 260
"Severity": "INFO"
"Domain": "NT AUTHORITY"
"AccountName": "SYSTEM"
"UserID": "S-1-5-18"
"AccountType": "User" 
"TCP":"0"
"Destination": "destination.IP"
"InterfaceIP": "interface.IP"
"RD": "0"
"QNAME": "subdomain.sld.tld"
"QTYPE": "28"
"XID": "17271"
"Port": "0"
"Flags": "0"
"RecursionScope": "."
"CacheScope": "Default"
"PolicyName": "NULL"
"BufferSize": "76"
"PacketData": "0x437700000001000000000001096C6F63616C686F73740975732D656173742D320D6563322D7574696C697469657309616D617A6F6E61777303636F6D00001C00010000290FA0000080000000”
  • QTYPE: 28 indicates an AAAA record—a 128-bit IPv6 address record, most commonly used to map a hostname to an IPv6 address of the host.
  • QNAME: is from my own instance.

If you compare Event ID 260 with the Wireshark DNS request packet for the same event, you will note that the same fields captured in the DNS payload appear in the full Event Text (e.g., PacketData, QNAME, Destination IP).
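As a rough demonstration, the Python sketch below decodes the header and question section straight out of the PacketData hex shown above, using standard DNS wire format (a 12-byte header, length-prefixed name labels, then a 2-byte QTYPE). It recovers the same XID (17271) and QTYPE (28) as the parsed event fields; this is a minimal sketch, not a full DNS parser (it ignores compression pointers, answer sections, and malformed input).

def parse_packet_data(hex_blob):
    """Decode the DNS header XID and the question name/type from PacketData."""
    data = bytes.fromhex(hex_blob[2:] if hex_blob.startswith("0x") else hex_blob)
    xid = int.from_bytes(data[0:2], "big")   # transaction ID
    pos, labels = 12, []                     # question section follows the header
    while data[pos] != 0:                    # walk the length-prefixed labels
        length = data[pos]
        labels.append(data[pos + 1 : pos + 1 + length].decode("ascii"))
        pos += 1 + length
    qtype = int.from_bytes(data[pos + 1 : pos + 3], "big")
    return {"XID": xid, "QNAME": ".".join(labels), "QTYPE": qtype}

print(parse_packet_data(
    "0x437700000001000000000001096C6F63616C686F73740975732D656173742D32"
    "0D6563322D7574696C697469657309616D617A6F6E61777303636F6D00001C0001"
    "0000290FA0000080000000"
))
# {'XID': 17271, 'QNAME': 'localhost.us-east-2.ec2-utilities.amazonaws.com', 'QTYPE': 28}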

There are also other ETW providers that log relevant events. F-Secure Consulting’s labs note another ETW Provider, Microsoft-Windows-WebIO, which provides analysts with visibility into web requests made by some system processes.

About ETW / Windows DNS Service Provider

In brief, ETW has four main components, which are:

  • Provider—a supplier of information to Event Tracing for Windows sessions.
  • Session—a collection of in-memory buffers that accept events through the Windows ETW Provider API.
  • Controller—starts and stops ETW sessions.
  • Consumer—receives events from an ETW session or from a log file.

ETW is a valuable source of Windows telemetry. A deeper treatment is out of scope for this post, but you can learn more about Windows DNS ETW in the Microsoft documentation portal.

Windows DNS Server Debug

The following is an example test snippet from the Windows DNS Debug log file. This type of logging should only be run for a temporary timeframe, due to its verbose nature, which affects the performance of the server, among other potential issues. Logs must be parsed to extract the relevant metadata (such as the IP address or the protocol) as part of log collection. This sample has been included to show that raw logs on their own are not ready for use and require further enrichment to be usable.

10/30/2020 8:16:54 PM 0FC0 PACKET 000001E1A286DE90 UDP Rcv 172.31.21.66 16df Q [0001 D NOERROR] NS (0)
10/30/2020 8:16:54 PM 0FC0 PACKET 000001E1A34544B0 UDP Rcv 199.7.83.42 de78 R Q [0084 A NOERROR] NS (0)
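As a rough sketch of the parsing step, the regular expression below pulls the timestamp, protocol, direction, and remote IP out of debug lines like the two above. The field layout is assumed from these samples only; a production parser would need to handle the full set of layouts the debug log can emit.

import re

# Field layout assumed from the two sample lines above; not exhaustive.
DEBUG_LINE = re.compile(
    r"^(?P<date>\S+) (?P<time>\S+ [AP]M) (?P<thread>\S+) PACKET (?P<packet>\S+) "
    r"(?P<proto>UDP|TCP) (?P<direction>Snd|Rcv) (?P<remote_ip>\S+) (?P<xid>\S+)"
)

line = ("10/30/2020 8:16:54 PM 0FC0 PACKET 000001E1A286DE90 "
        "UDP Rcv 172.31.21.66 16df Q [0001 D NOERROR] NS (0)")
match = DEBUG_LINE.match(line)
if match:
    print(match.groupdict())
# {'date': '10/30/2020', 'time': '8:16:54 PM', 'thread': '0FC0',
#  'packet': '000001E1A286DE90', 'proto': 'UDP', 'direction': 'Rcv',
#  'remote_ip': '172.31.21.66', 'xid': '16df'}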

DNS Server Shutdown Event Snippet in XML

The following is a snippet of a Windows DNS shutdown event in XML view from the Windows Event Viewer. While this is structured data, you can see how it deviates from a data format that can be consumed by another system, such as a SIEM. An event source connector—such as a direct connector, an input module, or a log agent—is needed to normalize the XML-formatted log.

<Provider Name="Microsoft-Windows-DNS-Server-Service" Guid="{71a551f5-c893-4849-886b-b5ec8502641e}" />
  <TimeCreated SystemTime="2020-09-23T21:33:21.512036500Z" />
  <Channel>DNS Server</Channel>
  <Computer>ec2-xx-xxx-xxx-xx.us-east-2.compute.amazonaws.com</Computer>
  <Security UserID="S-1-5-18" />
</System>
<EventData Name="DNS_EVENT_SHUTDOWN" />
</Event>
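To illustrate what such a connector does, here is a minimal Python sketch that flattens a Windows Event XML fragment into key-value pairs a SIEM can index. The fragment is a simplified, well-formed reconstruction of the shutdown event above (the original snippet omits the opening Event and System tags).

import xml.etree.ElementTree as ET

# Simplified, well-formed reconstruction of the shutdown event shown above.
XML_EVENT = """
<Event>
  <System>
    <Provider Name="Microsoft-Windows-DNS-Server-Service"
              Guid="{71a551f5-c893-4849-886b-b5ec8502641e}" />
    <TimeCreated SystemTime="2020-09-23T21:33:21.512036500Z" />
    <Channel>DNS Server</Channel>
    <Security UserID="S-1-5-18" />
  </System>
  <EventData Name="DNS_EVENT_SHUTDOWN" />
</Event>
"""

def flatten(event_xml):
    """Flatten element text and attributes into SIEM-friendly key-value pairs."""
    record = {}
    for elem in ET.fromstring(event_xml).iter():
        if elem.text and elem.text.strip():
            record[elem.tag] = elem.text.strip()         # e.g. Channel
        for attr, value in elem.attrib.items():
            record[f"{elem.tag}.{attr}"] = value         # e.g. Provider.Name
    return record

print(flatten(XML_EVENT))
# {'Provider.Name': 'Microsoft-Windows-DNS-Server-Service', ...,
#  'Channel': 'DNS Server', 'Security.UserID': 'S-1-5-18',
#  'EventData.Name': 'DNS_EVENT_SHUTDOWN'}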

A discussion of DNS log collection is not complete without audit logging on DNS infrastructure. Audit logging not only helps meet auditing and compliance objectives, it also provides the event data defenders and incident responders need to learn more about an attack on DNS infrastructure. Logging on the server also supports any other operational and troubleshooting needs involving the infrastructure.

Enhanced Windows DNS Event Logging Options

The sources for these events include the Microsoft-Windows-DNSServer/Audit Event Log channel and the Event Tracing for Windows provider. The types of events covered by audit logging on the DNS server include:

  • Cache operations—purging the cache.
  • DNSSEC operations—key rollover events, DNSSEC export/import, trust-point-related events.
  • Policy operations—a client subnet record, zone-level policy, or forwarding policy was created, deleted, or updated.
  • Server operations—restarting the server, clearing debug logs, clearing statistics, changes to listen addresses.
  • Resource record updates—a resource record (RR) was created, deleted, or modified.
  • Zone operations—a zone was deleted or updated, a zone scope was deleted or updated.

Final Questions

Note: One item that you may notice from the Source to SIEM diagram is that I included log sources that are not covered in this post—for example, the Exchange logs as well as logs from other collection processes. We will be covering these in the final post of this series in a bit more detail. 

With this post, you have been introduced to a sample Windows log collection deployment and the types of events and logs that can be collected on Windows DNS servers and clients. Feel free to go back to the beginning and check whether your deployment has covered the following:

  1. What monitoring information on DNS and domain activity can be obtained from your existing sources?
  2. Which areas need to be expanded in order to improve your logging visibility, in particular for metadata containing domain and DNS logs?
  3. What other opportunities are there to increase your log collection scope overall?
  4. Which of these sources lead to better intelligence for your own use cases?
  5. Is your current log deployment meeting your final phase needs in terms of intel for threat hunting, incident response, orchestration, automation, etc.?

While it is out of scope to dive deep into actual deployment details (such as configuration samples, or instructions on how to enable audit logging), you can check the Microsoft documentation as well as your own log agent and SIEM documentation to see what configurations need to be applied and what options are available.

Use DomainTools integrations and APIs to further enrich relevant events with DNS and domain intelligence, such as Domain Risk Score, and investigate domain IOCs with the Iris platform. Use DomainTools as part of your proactive and reactive defense, in addition to targeted logging.

The post Strengthen Your Defense with Targeted Windows DNS Logging Techniques appeared first on DomainTools | Start Here. Know Now..

]]>
DomainTools Employee Spotlight - Tim Helming https://www.domaintools.com/resources/blog/domaintools-employee-spotlight-tim-helming/ Tue, 15 Dec 2020 00:00:00 +0000 https://domaintools.wpengine.com/domaintools-employee-spotlight-tim-helming/ Discover our Employee Spotlight blog! In this series, we like to celebrate our employees by sharing their stories. This quarter’s feature: DomainTools Security Evangelist

The post DomainTools Employee Spotlight - Tim Helming appeared first on DomainTools | Start Here. Know Now..

]]>
It’s been a while since our last employee spotlight, a blog dedicated to celebrating the hardworking employees of DomainTools by sharing interesting, fun stories about the wonderful people who work here. We’re always grateful for the outstanding individuals we get to work with and decided it’s time to once again showcase the many talented artists, athletes, and aficionados that make up our DomainTools family. So, with the re-introduction of this quarterly blog series, I’m thrilled to formally introduce you to DomainTools Security Evangelist, Tim Helming. It should be no surprise if Tim’s name sounds familiar to you. Not only does he have over 20 years of experience in information security, but as our company evangelist extraordinaire, he authors many pieces of content, presents for webinars, and is part of the Breaking Badness podcast trio.

One of my favorite things about Tim is his love of telling stories. From his early life experience as a school teacher to his present-day role here at DomainTools, his passions for performing and creating a narrative have been a common thread throughout his career. So while Tim spends his days crafting infosec tales that benefit our community, I’m both happy and honored to be able to give a short glimpse into a chapter of what one day may be part of his bestselling biography.

According to Tim, his work in infosec has never been deeply technical. He entered the industry just at the peak of the dot-com boom with a tech support position at WatchGuard Technologies. As he worked his way up to leading the Product Management team there, Tim realized how much he enjoyed speaking opportunities and getting to articulate the company’s vision. In fact, he would even joke about his true role at the company being “Evangelist in Chief” (now that’s what I call foreshadowing). Tim ultimately navigated his way to DomainTools in 2013, and other than a brief yearlong venture into industrial security, he’s been here ever since. In his initial role as Director of Product Management, Tim was able to participate in a handful of significant DomainTools moments. He considers his involvement in the Iris product launch and his work with DomainTools Risk Score as a few of his favorite contributions in the “early years.”

Tim’s abilities to communicate and entertain don’t stop when the clock hits 5 p.m.—they’re interests that carry into his hobbies outside of work too. Before his career in the infosec industry, Tim both taught and played music professionally as a classical percussionist, although now he calls it his “self-funding hobby.” If you ever get the chance to ask Tim about his music career, I highly suggest doing so. He’s had some incredible opportunities worth hearing about, from doing a show with Ray Charles and playing with the Seattle Symphony to providing soundtracks and sound effects for movies, video games, and even a theme park. If you’re curious about what blockbuster hit you can hear him on, Tim says his “funny, proud moment” from that period of his life was playing on the soundtrack for the 2000 film, “Battlefield Earth.” The film may only score 3% on Rotten Tomatoes, but I personally think it’s worthy of DomainTools movie night featuring a commentary track by our very own Tim Helming.


If you can’t find Tim out evangelizin’, building his own drums, or playing music, chances are you’ll find him behind a microphone. From our conversation, I learned that Tim’s love for imparting information goes hand-in-hand with his other lifelong fascination: radio. He’s had his ham radio license for close to 30 years and uses it to volunteer for the City of Seattle’s emergency communications. If you want to experience Tim’s soothing radio voice for yourself aside from those crisis situations, check out the Breaking Badness podcast, where he joins fellow DomainTools employees, Kelsey LaBelle and Chad Anderson, to talk about all things infosec (and even crack a few jokes here and there). Shameless plug aside, it is clear that Tim brings his enthusiasm for teaching, connecting with people, and entertaining into everything he does in his life. All of us feel very lucky to have Tim on the DomainTools team! Make sure to follow Tim on Twitter (@timhelming) to keep an eye out for all the excellent content he’s involved in!


I’m looking forward to sharing more fun DomainTools stories as our series continues. If you are interested in joining our team, check out our job listings.

The post DomainTools Employee Spotlight - Tim Helming appeared first on DomainTools | Start Here. Know Now..

]]>
Strengthening Your Client and Network Defenses with Targeted Log Collection https://www.domaintools.com/resources/blog/how-targeted-log-collection-strengthens-your-client-and-network-defenses/ Thu, 06 Feb 2025 00:00:00 +0000 https://domaintools.wpengine.com/how-targeted-log-collection-strengthens-your-client-and-network-defenses/ Make sure to check out part 2 of our 5-part series on log collection. This blog delves into how log sources, the MITRE ATT&CK framework, and metadata can elevate your thr

The post Strengthening Your Client and Network Defenses with Targeted Log Collection appeared first on DomainTools | Start Here. Know Now..

]]>
Introduction to Targeted Log Collection

In the first post of the series, we mentioned domain and DNS log collection guidance in the industry, including guidance from the NSA. These configurations and publications are usually born out of defense research, completing the life cycle from offensive research to defense and back to the implementers of SIEM and telemetry systems. The NSA published its Spotting the Adversary report, for example, and JPCERT/CC (the Japan Computer Emergency Response Team Coordination Center) analyzed which log events are generated by known malware tools—such as Windows EventID 5156, which indicates a new network connection. We also covered a log deployment sample, covering only the numbers, to give an idea of the scale of telemetry collection in an enterprise environment.

In this part of the series, we focus on the relationship and role of logging metadata in defensive security (blue/purple teaming). It also helps to consider previous research conducted in this area.

Log Sources and MITRE ATT&CK

A lack of a targeted approach to security logging can lead to performance issues on the log agents, excessive SIEM and agent licensing and subscription costs, excessive infrastructure requirements, noise and alert fatigue, insufficient log collection, and more. OWASP has even advised not to log too much (as well as too little) as a security logging best practice. In addition to following the guidance and resources from the inaugural post of this series, another blue team and defensive practice is to map log sources to frameworks like MITRE ATT&CK, to ensure that log data elevates threat hunting and that security operations can configure better alerts and triage rules.

MITRE ATT&CK Framework

What is the MITRE ATT&CK Framework?

MITRE ATT&CK® is a globally-accessible knowledge base of adversary tactics and techniques based on real-world observations. Above is a screenshot of the MITRE ATT&CK Navigator v4.0 for Enterprise, also available on GitHub.

Specific MITRE ATT&CK Log Sources

The following are log sources that relate to a MITRE ATT&CK Tactic. These logs are specific to use cases that DomainTools covers with Iris Detect, Iris Investigate, and APIs, such as:

  • Finding domain IOCs via Risk Score, Threat Profile, or investigations in the Iris Investigate UI.
  • Finding valuable metadata, such as email addresses, as a seed term for investigations on the Iris web platform.
  • Using the APIs, such as the Reverse IP API, to find domains associated with an IP address.

Which Sources Do Defenders Need to Add for Their Log Collection Scope?

Tactic: Command and Control (C2); Tactic: Exfiltration (Exfiltration Over C2 Channel, Exfiltration Over Alternative Protocol)
DNS requests show attempts to resolve known malicious domains or to contact domains with a poor reputation, while network perimeter logs show exfiltration toward an external IP address.
Related log sources: Linux DNS query logs, Windows Sysmon DNSQuery logs, DNS protocol packet capture logs, Windows Firewall logs, etc.

Tactic: Defense Evasion; Technique: Impair Defenses
A DNS server is used maliciously (generating fake domain names, squatting, infrastructure attacks), or the server has been modified—such as changing permissions on the Syslog output stream or output file to disallow logging, or changing the DNS server configuration itself to disable logging. This also covers tampering with the system to disable PowerShell logging (by disabling the PowerShell event tracing mechanism in Windows).
Related log sources: Group Policy Modification (Windows GPO) logs, Windows PowerShell logs for suspicious PowerShell commands (such as disabling the PowerShell event trace), Linux auditing logs (configuration file changes).

Tactic: Persistence
Setting up persistence on a DNS server for later malware backdoor activity, such as connecting to a C&C server.
Related log sources: Similar to the Exfiltration and C2 tactics.

Tactic: Credential Access (T1171)
Ingress authentication successes and failures, including access attempts by a known insider threat in the company.
Related log sources: Ingress authentications and access attempts at the network perimeter.
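One lightweight way to operationalize this mapping is to encode it and audit collection gaps in code. The Python sketch below does exactly that; the source labels are illustrative, not a DomainTools or MITRE schema.

# A sketch of encoding the tactic-to-log-source mapping above so collection
# gaps can be audited programmatically. Source names are illustrative labels.
ATTACK_LOG_MAP = {
    "command-and-control": ["linux_dns_query", "sysmon_dns_query", "dns_pcap", "windows_firewall"],
    "exfiltration":        ["linux_dns_query", "sysmon_dns_query", "dns_pcap", "windows_firewall"],
    "defense-evasion":     ["windows_gpo", "windows_powershell", "linux_audit"],
    "persistence":         ["linux_dns_query", "sysmon_dns_query", "windows_firewall"],
    "credential-access":   ["perimeter_auth"],
}

def coverage_gaps(deployed_sources):
    """Return, per tactic, the mapped log sources that are not yet collected."""
    return {
        tactic: [source for source in sources if source not in deployed_sources]
        for tactic, sources in ATTACK_LOG_MAP.items()
    }

print(coverage_gaps({"sysmon_dns_query", "windows_firewall", "perimeter_auth"}))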

Post-Exploitation Hunting with Logs: Mimikatz, a Credential Stealing Tool

The following are example log sources* for detecting a Mimikatz attack, focusing on logs that show DNS, IP, and domain-related metadata:

  • Windows Sysmon Logs
    • Sysmon is a utility tool that offers a robust and extended log collection capacity. For example, one can configure Sysmon to help log the trail of utility tools that attackers use with Sysmon process logging.
    • You can also use Sysmon to collect DNSQuery events, FileDelete events, WMI (Windows Management Instrumentation) related events, and more. 
  • Windows PowerShell Event Log and Logging via GPO
    • PowerShell is a common utility tool that is used to automate tasks in a Windows environment. The PowerShell event source itself can be targeted, such as attackers disabling event tracing of the Windows PowerShell ETW Event Provider.
    • It is possible not only to log PowerShell Windows Events (via Windows Event Log or Event Tracing for Windows), but also to enable PowerShell logging by editing GPO settings.
    • The Invoke-Mimikatz script is recorded via the Microsoft-Windows-PowerShell/Operational channel (from JPCERT CC) as well as the Destination IP address/Host name/Port number upon success.
  • Windows DNS Debug Logging
    • A very verbose source of DNS logs that contains detailed data on the DNS information sent and received by the DNS server. Available for Windows Server DNS (2012 R2 onwards with this hotfix).
  • Windows DNS Analytical Logging
    • “Informational” type of events available from Windows Server DNS (2012R2 onwards with the above hotfix). The types of events include RESPONSE_SUCCESS events which contain the destination IP address, QNAME (or, the Query Name like example.com). 

*These examples are from the “Hunting for Post-Exploitation Stage Attacks with Elastic Stack and the MITRE ATT&CK Framework” research. We offer the Elastic App integration to provide a better understanding of threats in your network, utilizing logs collected from it, such as proxy logs.

The Challenges of Processing Metadata

Even after log source collection, there is still the challenge of processing (structuring, extracting, enriching) the valuable metadata within these logs. Parsing the metadata benefits security operations because it leads to more targeted rules (for dashboards, search indexes, or investigation filters), better alerts, and data for additional analysis, such as investigating domain IOCs within your SIEM/SOAR.

Which Log Sources Reveal IP/Hostname Exfiltration?

Let’s take a look at a couple of log sources mentioned previously in the MITRE ATT&CK Tactics section. In this case, we look at the Exfiltration MITRE Tactic and see which log sources are relevant for domain-, hostname-, and IP-related IOC investigations.

Detection: Collect PowerShell logs from the Windows PowerShell ETW Provider

These logs contain metadata of commands that attackers run in PowerShell. New network connections made or attempted will be logged, including the IP (internal or external) as the destination. These are signs of exfiltration (uploading and sending data to a malicious endpoint).

Even if the payload is base64-obfuscated commands or data—making it difficult to parse and write pattern-based rules for—collecting such logs can still help find outliers. There are additional analysis tools, like ee-outliers by NVISO-BE, for these types of tasks.

Detection: Finding DNS Tunneling via Logging and Monitoring of DNS Queries and Responses

DNS tunneling (a method of Exfiltration Over Alternative Protocol) is where “adversaries may steal data by exfiltrating it over a different protocol than that of the existing command and control channel.” Attackers can use the DNS subdomain field to exfiltrate data, in addition to other ways of misusing the DNS protocol for attacks.

DNS tunneling to a command and control server uses the DNS protocol as a way to obfuscate attacks. One reason for this technique is that defenders have an easier time monitoring for bad activity on more common channels such as RDP; DNS tunneling abuses the DNS protocol to sneak in commands and data, including exfiltration. Finding DNS tunneling attacks involves logging DNS queries and responses.
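As a rough heuristic sketch (not a detection product), the Python below flags queries whose subdomain labels are unusually long or high in Shannon entropy—two common traits of tunneled payloads. The thresholds are illustrative assumptions and would need tuning against your own query baseline.

import math
from collections import Counter

def shannon_entropy(label):
    """Bits of entropy per character in a DNS label."""
    counts = Counter(label)
    return -sum(n / len(label) * math.log2(n / len(label)) for n in counts.values())

def looks_like_tunneling(qname, zone, max_label_len=40, max_entropy=4.0):
    """Flag queries with long or high-entropy labels under the given zone.

    Thresholds are assumed starting points, not validated detection values.
    """
    subdomain = qname[: -(len(zone) + 1)] if qname.endswith("." + zone) else qname
    return any(
        len(label) > max_label_len or shannon_entropy(label) > max_entropy
        for label in subdomain.split(".")
        if label
    )

print(looks_like_tunneling("www.example.com", "example.com"))  # False
print(looks_like_tunneling(
    "dGhpcyBpcyBleGZpbHRyYXRlZCBkYXRhIGluIGJhc2U2NA.example.com", "example.com"
))  # True (46-character label)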

Note: If you are interested to learn more about these types of logs and metadata, upcoming parts of this series will focus more on log deployments and log samples.

The Value of Log Collection

In this part of the series, we looked further into the security value gleaned from targeted log collection. Targeted log collection means recognizing that not all log sources are equal in terms of log event reliability and quality. Ensuring these log sources are part of your coverage helps surface indicators of compromise (IOCs) to move forward with an Iris investigation pathway, or with investigations via the API or a DomainTools integration.

Every so often, we receive requests for (possible) post-exploitation help and to dive into domain data to find post-exploitation clues. While the series does not delve into the operational parts of threat hunting, it is important that deployments see the value of increasing log exposure, despite the challenges of managing log sources (and their log types, metadata, any missing fields, and so on). In the next parts of the series, we get into the specifics, covering the opportunities and challenges on the Windows and Linux platforms respectively.

To learn how to identify and track adversary operations in DomainTools Iris Investigate visit our product page.

The post Strengthening Your Client and Network Defenses with Targeted Log Collection appeared first on DomainTools | Start Here. Know Now..

]]>
DNS and Domain Logging: A Bird's Eye View https://www.domaintools.com/resources/blog/dns-and-domain-logging-a-birds-eye-view/ Tue, 04 Feb 2025 00:00:00 +0000 https://domaintools.wpengine.com/dns-and-domain-logging-a-birds-eye-view/ Discover everything you need to know about log collection. In the first blog of this five-part series, we’ll give an industry overview on logging and explore what it mean

The post DNS and Domain Logging: A Bird's Eye View appeared first on DomainTools | Start Here. Know Now..

]]>
Domain and DNS Logging Overview

Logs are the essential building blocks of security. One of the most common use cases we see is DNS log collection and integrations. Therefore, as part one of a multi-part series on log collection, we decided to share what we know about logging—with a focus on logs sourced from DNS (server and client) and other log sources holding valuable IP, hostname, and domain metadata.

In this inaugural post of the series, we cover a bird’s-eye view of logging—reasons for log collection, industry guidance, and logging statistics. Subsequent posts will cover the role of targeted log collection in defense, with pointers toward research and use cases, keeping in mind those involved in integrating log sources and SIEMs. We will be publishing posts on Linux and Windows DNS server and client log collection and deployment, as well as other log sources (cloud, auditing, mail logs, network defense logs, and more).

Reasons for Domain and DNS Log Collection 

What are the reasons to collect logs? An overview of the main reasons includes:

  • Troubleshooting operational issues
  • Maintaining event integrity
  • Auditing and compliance purposes 
  • Building a centralized log management server
  • Following best practices and security guidelines. For example, being able to answer these questions:
    • What are the important security events to monitor?
    • What are the notable domain and DNS events to monitor?
  • Informing analysts with the data they need for investigations, alerts, reporting, and more.
  • Finding MITM (Man-in-the-Middle) spoofing or hijacking of DNS responses
  • Finding evidence of communication with CnC (Command and Control)
  • Finding evidence of Social Engineering/Phishing (such as active use of deceptive domains)

Reasons and rationale may not be enough, especially if you plan to convince your company to begin or improve logging. Luckily there is already guidance available in the industry, as covered in the next section.

Logging Guidance from the Industry

We recommend using the industry guidance as a starting point for log deployments while ensuring there is scope to capture not only DNS logs, but also other log sources that hold valuable metadata, such as auditing events and log metadata containing IP addresses and hostnames.

Palantir

Palantir’s WEF (Windows Event Forwarding) guidance includes both DNS client and server events in their GitHub repository for WEF.

NSA

NSA published the “Windows Event Monitoring Guidance” which includes Windows Event Log channels such as “Microsoft-Windows-DNS-Client” and “Microsoft-Windows-DNSServer” in addition to the names of relevant Event Tracing for Windows (ETW) Providers.*

Example configuration from the NSA Event Monitoring Guidance:

<Query Id="0" Path="Microsoft-Windows-DNS-Client/Operational">
       <!-- 3008: DNS Client events Query Completed -->
       <Select Path="Microsoft-Windows-DNS-Client/Operational">*[System[(EventID=3008)]]</Select>
       <!-- Suppresses local machine name resolution events -->
       <Suppress Path="Microsoft-Windows-DNS-Client/Operational">*[EventData[Data[@Name="QueryOptions"]="140737488355328"]]</Suppress>
       <!-- Suppresses empty name resolution events -->
       <Suppress Path="Microsoft-Windows-DNS-Client/Operational">*[EventData[Data[@Name="QueryResults"]=""]]</Suppress>
     </Query>
     <Query Id="1" Path="DNS Server">
       <!-- 150: DNS Server could not load or initialize the plug-in DLL -->
       <!-- 770: DNS Server plugin DLL has been loaded -->
       <Select Path="DNS Server">*[System[(EventID=150 or EventID=770)]]</Select>
       <!-- NOTE: The ACL for Microsoft-Windows-DNSServer/Audit may need to be updated to allow read access by Event Log Readers -->
       <!-- 541: The setting serverlevelplugindll on scope . has been set to $dll_path -->
       <Select Path="Microsoft-Windows-DNSServer/Audit">*[System[(EventID=541)]]</Select>
 

OWASP (The Open Web Application Security Project)

The OWASP Top 10 Security Vulnerabilities for 2024 includes insufficient logging and monitoring; OWASP included this type of vulnerability in the 2017 report as well.

From OWASP:

Insufficient logging and monitoring, coupled with missing or ineffective integration with incident response, allows attackers to attack systems further, maintain persistence, pivot to more systems to tamper with, extract, or destroy data. Most breach studies demonstrate the time to detect a breach is over 200 days, typically detected by external parties rather than internal processes or monitoring.

SANS Critical Controls

SANS Critical Controls (CIS Controls 7.1) has Sub Control 8.7 (Network), which includes the recommendation: “Enable domain name system (DNS) query logging to detect hostname lookup for known malicious C2 domains.”

Open Source

SwiftOnSecurity’s GitHub repo for Sysmon: https://github.com/SwiftOnSecurity/sysmon-config

Other guidance: general log collection guidance

Log Deployment Numbers: A Scenario

To fulfill the needs of analyzing IOCs (Indicators of Compromise) and other threat-hunting activities, as well as the operational requirements to fulfill SOC work (such as alerting and triage), companies require log data. And mountains of it. DNS and domain-related telemetry events work with other log data sources, such as authentication or auditing logs.

More information and data are required—once you know the exfiltration point (a malicious domain) via the logs, what entry point did the attackers use? The answer needs to be specific as well. That’s why, for example, the Equifax 2017 post-mortem indicated that one of the early signals of the attack was the whoami command being run on a target server.

Splunk Instance Deployment Example 

The following is a real example of a deployment on a Splunk instance. Below are the details:

  • Central Splunk Instance with the capacity of 12TB
  • Up to 20TB effective capacity by the end of 2020
  • 50+ Splunk Universal Forwarders on Windows
  • 300+ Splunk Universal Forwarders expected by the end of 2020
  • Actual Volume of Data: 200 GB/Day
  • 300+ GB/Day expected by the end of 2020
  • 100 Source types
  • 650 independent log sources

One interesting item to note is the upward trend toward widening log source exposure and, in turn, increasing log collection.

A Deployment Example – Potential Logging Stats

Based on the figure of 300+ GB/day from the earlier example, the following is a potential set of logging statistics:

  • 7400 EPS (Events per Second)
  • 639,360,000 EPD (Events Per Day)
  • 90 Days Raw Log Retention
  • 90 Days SIEM Log Retention
  • 1:1 SIEM Storage Compression
  • 298 Total Raw Log Data (GB/day) 
  • 893 Total Normalized Log Data (GB/day) 
  • 26,820 GB Raw Storage Requirement
  • 80,370 GB Total SIEM Storage Requirement

Where:

  • Total Raw Log Data (GB/day) value assumes 500 bytes log data.
  • Total Normalized Log Data (GB/day) value assumes 1500 bytes per stored record.

The value of 300+ GB/day was used to reverse-calculate how many log events this may represent. A SIEM storage calculator (the BuzzCircuit SIEM Storage Calculator*) was used to arrive at roughly 7,400 log events per second, or 639 million log events per day. Remember that these are log events, not the full telemetry data, as not all sources get logged. There may also be additional parsing rules for these logs, such as rules to deduplicate events or drop unimportant metadata.

*These calculators are a useful utility for figuring out how much budget and infrastructure are needed to meet log collection capacity, in addition to baseline tests.
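As a sanity check on those figures, the arithmetic can be reproduced in a few lines of Python, using the stated assumptions (500 bytes per raw event, 1,500 bytes per normalized record, 90-day retention, and GB computed as GiB):

SECONDS_PER_DAY = 86_400
EPS = 7_400                                   # events per second
RETENTION_DAYS = 90

events_per_day = EPS * SECONDS_PER_DAY                     # 639,360,000 EPD
raw_gb_per_day = round(events_per_day * 500 / 1024**3)     # ~298 GB/day raw
norm_gb_per_day = round(events_per_day * 1500 / 1024**3)   # ~893 GB/day normalized

print(events_per_day)                         # 639360000
print(raw_gb_per_day, norm_gb_per_day)        # 298 893
print(raw_gb_per_day * RETENTION_DAYS)        # 26820 GB raw storage
print(norm_gb_per_day * RETENTION_DAYS)       # 80370 GB SIEM storage

Applying the same arithmetic to the 100 EPS DNS server example in the next section reproduces its 4 GB/day, 12 GB/day, 360 GB, and 1,080 GB figures.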

By the time a user reaches the stage where they are considering additional help—such as DomainTools integrations or Iris Investigate—to aid in their threat detection and analysis of domains, they may have scope of this magnitude to process.

What About DNS Server Logging Stats?

You may also want to get an idea of log generation numbers for DNS server events. The DNS server sample from a SolarWinds paper on estimating log generation states that, for them, “2 Windows DNS Servers” for 1,000 employees can generate 100 EPS, or Events per Second (peak, average peak)*. By our calculation, that is up to 8,640,000 EPD, or Events per Day (based on peak).

Based on my calculations: 

  • 4 GB/day Total Raw Log Data
  • 12 GB/day Total Normalized Log Data
  • 360 GB Raw Storage Requirement
  • 1,080 GB Total SIEM Storage Requirement

We have only included the SolarWinds figure for events per second; the rest are approximate numbers we calculated based on the known peak EPS.

One thing to keep in mind is that the estimations excluded other log sources. The additional sources are also of interest and would have also provided valuable metadata such as:

  • DNS Client events
  • Network connection logs, such as from Windows Firewall
  • FQDN metadata from proxy logs
  • Hostname (source and destination) from message tracking logs
  • DNS Query events

More information about these log sources, including log samples, will be covered in a future blog post.

Leverage Your Logs from Source to SIEM (and more)

Defenders should pay special attention to the internals of their own DNS servers, DNS query logs from clients, network perimeter activity, mail logs, proxy logs, and more. Industry guidance illustrates the importance of collecting these DNS, IP, hostname, and domain logs. Deploying such collection is no small feat for an enterprise, as shown by the log deployment statistics.

In upcoming parts of the series, we show how log sources provide the backbone for defense, give deployment examples, and demonstrate how relevant metadata from these logs can be used for threat hunting and analysis. When relevant logs are collected, analysts can build a far more comprehensive depiction of IOCs—from a potential unauthorized intrusion via a deceptive phishing portal to malicious exfiltration beyond the network to an external IP, and more.


Make the most of relevant logging sources in your environments and networks. Armed with the right metadata from your server, client, and network endpoints, you can improve and strengthen defenses using DomainTools Iris Detect, Iris Investigate, and numerous APIs.

Additional Resources

Use APIs to enrich your log data programmatically. The Iris Investigate API and Iris Enrich API, which process the same data sources as the Iris Investigate UI, provide Domain Risk Scores from Proximity and Threat Profile algorithms. See the API documentation to learn more. In addition to APIs, use DomainTools integrations to find threat intelligence in your domain metadata.


*Event Forwarding Guidance on Github.

The post DNS and Domain Logging: A Bird's Eye View appeared first on DomainTools | Start Here. Know Now..

]]>
How to Leverage Threat Intelligence in Incident Response to Move from Reactive Tactics to a Proactive Strategy https://www.domaintools.com/resources/blog/how-to-leverage-threat-intelligence-in-incident-response/ Thu, 29 Oct 2020 00:00:00 +0000 https://domaintools.wpengine.com/how-to-leverage-threat-intelligence-in-incident-response-to-move-from-reactive-tactics-to-a-proactive-strategy/ Learn how to better protect your organization and combat threats to your network. See how integrating threat intelligence and automation levels up your incident response

The post How to Leverage Threat Intelligence in Incident Response to Move from Reactive Tactics to a Proactive Strategy appeared first on DomainTools | Start Here. Know Now..

]]>
The “Organized Crime” of Cyber Threat Campaigns in 2020

While bad actors have become more organized and sophisticated by refining their craft, they are not the only attackers a security professional needs to be concerned with in 2020. There are still opportunistic, less-skilled hackers that utilize commoditized exploits. These attack strategies are made possible by leveraging resources such as simple phishing kits or even ransomware-as-a-service (RaaS) offerings that are highly profitable and easy to use. Such resources assist even the most junior bad actors in leveraging advanced, malicious code so they can move rapidly to create campaigns and execute them with ease.

The threat landscape continues to grow, and security professionals need additional skills, tools, and advanced strategies in their arsenal to combat those who wish to do harm. This blog will outline how to better protect your organization by integrating threat intelligence into your incident response processes.

Incident Response Challenges—The Struggle is Real 

Though security teams comprise several roles and are responsible for various functions, incident responders have the heavy burden of identifying anomalies quickly, gathering data to triage the possible incident, and then moving rapidly to neutralize the situation. With the barrage of alert notifications bombarding security analysts, having a process in place to categorize the incident type and then mitigate each threat accordingly is essential to keeping an incident from becoming a breach.

Organizations seek security analysts that have advanced knowledge and experience, especially around incident response best practices. They need to have skills such as the ability to perform deep malware analysis, forensic investigation, and reverse engineering of threats. These senior security analysts can use their knowledge of these complicated processes to create playbooks, guiding their teams to appropriate responses for each threat scenario.

Security teams are constantly hit with a barrage of low-quality alerts from data feeds and a multitude of systems. This creates vast amounts of data to sift through in order to determine which alerts indicate true threats and which are false positives, leaving less time for deeper investigations once true threats are identified. Analysts need context-enriched alerts that help them make more informed decisions based on cleanly presented information. By leveraging threat intelligence and automation, security teams can filter out false positives, prioritize the riskiest alerts, and help less experienced analysts make quick escalation decisions—ensuring highly skilled analysts spend their time on more skill-specific work.

The Early Bird Gets the Worm by Marrying Threat Intelligence with Automation 

The traditional reactive approach to an IOC just isn’t enough anymore. Security teams need to shift to think more proactively by applying tactics such as automating certain aspects of incident response. Leveling-up incident response efforts by proactively evaluating threats in the wild and blocking them before initial interaction can reduce the number of threats that need to be addressed on the network. The external context of threat intelligence can also support quicker decision making during incident response procedures. 

At the end of a threat is a person who has a plan to exploit a vulnerability. These people or groups often reuse infrastructure, attack mechanisms, or code to reduce the amount of effort it takes to exploit a target. Threat intelligence provides a window into the tactics, techniques, and procedures (TTPs) associated with bad actors. That intelligence supports the incident response process by identifying when indicators might be associated with a particular group or type of threat—for example, enriching alerts from a Security Information and Event Management (SIEM) system and comparing corresponding internal network data with external threats in the wild, employing threat profiles, risk scores, connections, and TTP analysis to identify when IOCs are likely associated with already-identified actor groups. This additional context enables defenders to make faster decisions or to build rules around specific indicators or attributes that can help categorize, prioritize, or automate alert response.

The Incident Response Battle Strategy

Automation plays an important role in helping the security professional with prioritizing. Organizations are adding new tools to their stack such as SIEM and Security Orchestration, Automation, and Response (SOAR) solutions. A SIEM solution improves efficiency by providing a system to view all the security log data from many hosts. A SOAR solution assists security teams in investigating and responding to threats faster by way of automating detection, investigation, and response capabilities and carrying out response actions at machine speed—reducing threat dwell time and decreasing their mean time to remediation (MTTR).

Security professionals often leverage various frameworks that are considered best practices, such as NIST, ISO, PCI DSS, SOX, HITRUST, etc. If we take a framework-agnostic approach, there are really four phases that are the meat and potatoes of incident response processes: preparation, triage, containment, and remediation. Adopting threat intelligence enhances incident response in triaging and containing threats when combined with SIEM and SOAR solutions.

There should always be preparation to lay the groundwork for the defense. And you should never stop preparing. Preparation should be constant and should leverage the ongoing experiences of your incident response and network defense teams to ensure that you’re instituting a cycle of constant improvement. The preparation phase sets the incident response lifecycle up for success by conducting risk assessments, closing gaps, and implementing controls. Planning ahead to identify what threats are likely to target your organization based on its systems or leveraging IOCs from other stages in previously detected attacks to map attacker infrastructure and institute proactive blocking can help incident responders stay ahead of the game.

In the triage phase, assessing the situation and prioritizing—in the way a proverbial battlefield medic would—is key to an effective response. To extend the metaphor, the battlefield medic needs to quickly assess the situation based on initial, observable attributes to discover what the issue is and what needs to be done in the early stages. Our triager can leverage threat intelligence, risk scoring technology, and threat profile information to help with their assessment within their SIEM or SOAR solutions. The more information someone has up front, the faster and more effectively they can make a decision about next steps.

In the containment phase, a security analyst can combine threat intelligence and automation technologies, and automated playbooks can kick off certain aspects of this process. A good example: if our metaphorical triager identifies a malware outbreak on the network, they can apply strategies to quarantine or isolate infected machines or systems. In this scenario, our battlefield medic needs to stop the bleeding.

At the remediation stage—or now that the patient is stabilized—our soldier has to switch from a triage mindset to investigate the issue further, addressing the root cause of the incident by getting the attacker out of the network and then closing holes that allowed the attackers access in the first place (e.g., reimaging workstations or devices, applying security patches, etc.).

Solutions in Action: Alerting with threat intelligence in a SIEM

By combining threat intelligence with automation via SIEM platforms, incident responders can enrich indicators from their log files and fire off alerts and notable events when high-risk domains are detected or when activity patterns reach certain thresholds. Here’s a view of the DomainTools App for Splunk that highlights log file events related to domains with high risk scores or that fit the profile of domains used for Phishing, Malware, or Spam attacks. Consider how implementing tools like these can help your incident response team contextualize attacks and make more accurate decisions faster.

Solutions in Action: Threat intelligence in SOAR automation

Automating incident response elements improves efficiency. By using threat intelligence to structure rules and using risk indicators or the presence of connected infrastructure as decision points for executing automated actions and pivots, organizations can better support analyst teams by automating as much of the investigation as possible—leaving their senior security specialists to focus on the work that definitely requires human expertise.

Here’s an example of an incident response process mapped out in a SOAR tool (Splunk Phantom) that leverages DomainTools’ threat intelligence, including risk scores and connected infrastructure counts, to help automatically investigate and identify threat-delivering infrastructure and make determinations on next steps.
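To make the decision-point idea concrete, here is an illustrative Python sketch of the kind of branching a playbook encodes. The threshold, field names, and actions are hypothetical—this is not the Splunk Phantom API or a DomainTools integration, just the shape of the logic.

RISK_THRESHOLD = 70          # hypothetical cutoff, tuned per organization

def triage(indicator):
    """Choose an automated next step from enrichment attached to an alert."""
    if indicator["risk_score"] >= RISK_THRESHOLD:
        return f"block {indicator['domain']} and open a containment ticket"
    if indicator["connected_infrastructure"] > 10:
        return f"pivot: enumerate infrastructure connected to {indicator['domain']}"
    return "close as low priority and keep for baselining"

alert = {"domain": "example.com", "risk_score": 90, "connected_infrastructure": 3}
print(triage(alert))  # block example.com and open a containment ticket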

Know More. Thwart More.

Intelligence is key to enabling security teams to anticipate threats, respond to attacks more quickly, and make smarter, more informed decisions on how to reduce risk. Threat intelligence can be applied to incident response procedures and enable an organization to have a more proactive strategy that magnifies the effectiveness of security teams and security solutions by predicting unknown threats and streamlining processes to make better use of human resources.

Learn more about DomainTools Threat Intelligence for SIEM solutions

 

The post How to Leverage Threat Intelligence in Incident Response to Move from Reactive Tactics to a Proactive Strategy appeared first on DomainTools | Start Here. Know Now..

]]>