Episode 5 – Security Investigation Series – DNS Reflection Attacks

One of the most popular attacks on the Internet today is the DNS Reflection Attack, which results in a Distributed DoS. Prolexic, one of the major DDoS mitigation vendors, released a report for 2013 stating that Distributed DoS attacks have increased by over 20% and that attack bandwidth has reached levels never seen before. Spamhaus, Network Solutions and several other companies have been hit by DNS Reflection attacks this year, with attackers specifically targeting organizations in order to hurt and humiliate them. Distributed DoS protection service providers are slowly gaining prominence. At this time, we at infosecnirvana.com feel it is important to understand the mechanics of such attacks and how they can be detected and responded to from an Enterprise Security standpoint. In this Security Investigation Series post, we talk about the usual suspects – What is a DNS Reflection attack? How do we detect it? And how do we prevent it?

Understanding DNS Reflection Attacks: As we all know, DNS is one of the components of the Internet that serves as a directory assistance service, akin to the Yellow Pages. The only difference is that DNS returns IP addresses when asked about domain names. People who understand how DNS works will have heard of “Recursive DNS Querying”. Essentially, a recursive query is like following a trail of bread crumbs until you solve the puzzle: every DNS server pushes the query up recursively until it gets a response to the DNS query. This also means that the original DNS query is very small in size, whereas the recursive responses can be huge. Since DNS uses UDP, volume-based DoS attacks are possible using this recursive querying capability. This is the basic premise of a DNS Reflection attack: by making several thousand spoofed DNS queries that trigger recursion, an amplified DNS response can be directed at the spoofed address. To understand how a Distributed DoS attacker typically operates, let's list down the attack pattern:

  1. Attacker first compromises an Authoritative Name Server.
  2. Attacker then creates a large TXT RR (a large-sized Resource Record).
  3. Attacker spoofs the target IP range.
  4. Attacker sends DNS queries (with the target IP range as the client IP) to a number of Open DNS Servers (close to 5 million open DNS servers allow recursive querying) in such a way that recursive querying happens and they retrieve the TXT RR.
  5. In order to achieve amplification, the attacker then uses several compromised zombies to send out DNS requests for the large resource record (RR).
  6. All the responses go to the spoofed IP – typically the organisation the attacker wants to flood with DNS responses, thereby causing a potential Distributed DoS scenario because of bandwidth consumption.
  7. Typical amplification rates achieved: for every 100 Mb/s of request traffic, reply traffic can be up to 10 Gb/s.

Imagine several gigabits per second of DNS packets hitting your perimeter and choking the bandwidth. That is what a DNS Amplification attack, or DNS Reflection attack, can do.
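
To put the 100 Mb/s to 10 Gb/s figure into perspective, here is a minimal back-of-the-envelope calculation. The packet sizes are illustrative assumptions (a roughly 60-byte query eliciting a roughly 6,000-byte amplified TXT response), not measurements from a real attack:

    # Rough amplification estimate -- packet sizes below are assumed, illustrative values
    QUERY_SIZE_BYTES = 60        # small spoofed DNS query (assumption)
    RESPONSE_SIZE_BYTES = 6000   # large TXT RR response after recursion (assumption)

    amplification_factor = RESPONSE_SIZE_BYTES / QUERY_SIZE_BYTES   # ~100x

    request_rate_mbps = 100.0
    reply_rate_gbps = request_rate_mbps * amplification_factor / 1000

    print(f"Amplification factor: ~{amplification_factor:.0f}x")
    print(f"{request_rate_mbps:.0f} Mb/s of spoofed queries -> ~{reply_rate_gbps:.0f} Gb/s of responses at the target")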

Detection of DNS Reflection Attacks: Now that we understand the anatomy of the attack, let's see how we can detect it. DNS Reflection attacks rely on IP spoofing to redirect DNS query responses to a target site. If the target site were able to detect that it never sent a DNS query to elicit a given DNS response, these attacks could be mitigated or stopped right away. However, detecting this is easier said than done. It can be done using a combination of network traffic monitoring, IDS/IPS and SIEM technologies. Using a network monitoring tool like IPTraf, Netwatch or Netramet, we can gather DNS statistics. Using a custom script or a custom parser (in SIEM parlance), we can normalize the statistics into a simpler state table (similar to a firewall state table but mainly for UDP, called a pseudo-state table); a minimal sketch of such a table follows the list of fields below. The table should contain, at a minimum, the following parameters:

  • Transaction ID (the unique DNS query ID; the response received from the authoritative DNS server will carry the same transaction ID)
  • Source IP address of the DNS query initiator (client)
  • Source port of the initiator client
  • Destination address to which the query is directed
  • Destination port to which the query is directed
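
To make this concrete, here is a minimal sketch in Python of how such a pseudo-state table could be keyed and populated from normalized DNS records. The record format (a dict carrying the transaction ID, addresses, ports and a query/response flag) is an assumption about what your traffic-monitor parser emits, not the output of any particular tool:

    from collections import defaultdict

    def build_pseudo_state_table(dns_records):
        """Pair DNS queries and responses on (transaction id, client ip/port, server ip/port).

        `dns_records` is assumed to be an iterable of dicts such as:
          {"txn_id": "0xcefd", "type": "query",    # or "response"
           "src_ip": "10.1.1.5", "src_port": 53211,
           "dst_ip": "192.0.2.53", "dst_port": 53}
        """
        table = defaultdict(lambda: {"query": 0, "response": 0})
        for rec in dns_records:
            if rec["type"] == "query":
                # Key from the client's perspective: client -> server
                key = (rec["txn_id"], rec["src_ip"], rec["src_port"], rec["dst_ip"], rec["dst_port"])
                table[key]["query"] += 1
            else:
                # Responses travel server -> client, so flip the tuple to match the query key
                key = (rec["txn_id"], rec["dst_ip"], rec["dst_port"], rec["src_ip"], rec["src_port"])
                table[key]["response"] += 1
        return table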

If there is a query with a transaction ID (say 0xcefd) and a corresponding response with the same transaction ID, we can safely say that the query/response pair is legitimate. However, if there is only a query or only a response, the entry is categorized as an “orphan entry”. Orphan entries can be of two types: 1. only a query packet is seen and no response, and 2. only a response packet is seen and no query. Only a query and no response is, in my opinion, less harmful and can be safely ignored; but keep in mind that if this number is too high, something is wrong with your organization's DNS client or server, and it could even potentially indicate a compromised asset. Only a response and no query is the most likely candidate for DNS Reflection attacks. However, keep in mind that at smaller volumes these can be false positives as well; some of the reasons are infrastructure logging fidelity (since DNS is UDP), bad routing of outbound traffic, etc. In essence, our focus is on DNS responses without preceding queries. Now that we know what to look for, we need to filter out the remaining noise. This is where range thresholds are important: a small volume may not warrant attention, but an exponential increase in volume demands action aimed at mitigation. This is where a SIEM system comes in handy, giving you trend analysis based on data collected over a period of time.

Let me show you how this can be done in ArcSight SIEM. Data from the traffic monitors can be parsed using a file-reader custom parser (an ArcSight FlexConnector). This parser extracts the required data fields mentioned above from the network logs and maps them to the native event field schema. We then use a rule to populate all queries into a first Active List called DNS Query List, and another rule to populate all responses into a second Active List called DNS Response List. A third rule can populate a third Active List for orphan entries; this is a count-based list showing how many unmatched entries exist (in particular, responses with no corresponding query). A monitoring dashboard can then be built to detect trend patterns: a sustained increase in the orphan entry list would typically indicate a DNS Reflection attack. I am sure there are a couple of other ways to get this done in ArcSight, but I am not going to go into those details in this post; you should get the idea of how this can be done. As far as other SIEM vendors are concerned, QRadar SIEM has some capabilities to do this; however, I don't think McAfee Nitro or Symantec SIM have this capability (readers, let me know what you think about these SIEM tools and whether this logic can be implemented in them).
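
Outside of ArcSight, the same Active List and threshold logic can be approximated in a few lines of script. The sketch below classifies entries from the pseudo-state table shown earlier and raises a flag only when orphan responses exceed a baseline multiple; the threshold values are arbitrary placeholders you would tune to your own traffic:

    def classify_orphans(table):
        """Split pseudo-state table entries into paired, query-only and response-only."""
        paired, query_only, response_only = 0, 0, 0
        for counts in table.values():
            if counts["query"] and counts["response"]:
                paired += 1
            elif counts["query"]:
                query_only += 1          # noisy, but usually benign
            else:
                response_only += 1       # candidate reflection traffic
        return paired, query_only, response_only

    def reflection_suspected(response_only, baseline, multiplier=10, minimum=1000):
        """Flag only when orphan responses are both large in absolute terms and
        well above the historical baseline (placeholder thresholds)."""
        return response_only >= minimum and response_only >= multiplier * baseline

    # Example: 50,000 orphan responses in an interval against a baseline of 200
    print(reflection_suspected(50000, baseline=200))   # True -> investigate / trigger mitigation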

Mitigation Methods for DNS Reflection Attacks: Once the suspicious list shows an increased percentage of orphaned response entries, there is a high likelihood that the organization is being targeted. Unless you have partnered with a Distributed DoS protection service provider, fending off such attacks is a challenging prospect. However, there are ways and means to keep the attack from getting bigger. To start with, let's look at the perimeter defences and how we can leverage them to mitigate the attack. Organizations generally have a core router, an IPS/IDS, an authoritative name server and a firewall.

On the core router, we can enable uRPF (Unicast Reverse Path Forwarding) to ensure that spoofing-based attacks are controlled.

On the IDS side, we can enable rate-limiting signatures for DNS packets to detect and possibly drop packets, thereby limiting the success of DNS amplification attacks. We can also enable geography-based filtering to ensure that the attack remains contained within a region.

On the DNS server side, we can limit recursion so that our DNS servers don't become part of the amplification attack. There are also experimental features for rate limiting DNS responses (DNS RRL); however, they are not yet widely commercialized and not many people have tested them. There is a great paper on the technical details of this feature. Please visit http://ss.vix.su/~vixie/isc-tn-2012-1.txt to take a look at it.
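
The core idea described in that paper is, roughly, "do not send the same answer to the same client prefix more than N times per second". The following is a toy Python illustration of that principle only; it is not the ISC implementation, and the window and limit values are assumptions:

    import time
    from collections import defaultdict

    class ResponseRateLimiter:
        """Toy illustration of DNS response rate limiting (not the ISC RRL code).

        Tracks (client /24 prefix, answer) pairs and drops responses once a
        per-second limit is exceeded. Limits are illustrative assumptions.
        """
        def __init__(self, limit_per_second=5):
            self.limit = limit_per_second
            self.buckets = defaultdict(lambda: [0.0, 0])   # key -> [window_start, count]

        def allow(self, client_ip, answer):
            prefix = ".".join(client_ip.split(".")[:3])     # crude /24 aggregation
            key = (prefix, answer)
            now = time.time()
            window_start, count = self.buckets[key]
            if now - window_start >= 1.0:                   # start a new one-second window
                self.buckets[key] = [now, 1]
                return True
            if count < self.limit:
                self.buckets[key][1] += 1
                return True
            return False                                    # drop (or truncate) the response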

Finally, if you have the money, spend on Distributed DoS protection services from Cloudflare, Imperva, Akamai, Prolexic, etc., who can provide you with rate-limiting and geo-based filtering protection.

A combination of all these controls will ensure that attacks are mitigated to a great extent. However, if your bandwidth is choked, you would still face service disruption and slow website loading; but considering the defences, this would be a good start.

What do you think your strategy to combat DNS Reflection attacks would be? Would you do it yourself, or would you play with the big boys and shell out big bucks? Chime in.

Until then….Detect & Respond.

Big Data – What you need to know?

Big Data is the buzzword in IT circles nowadays. The major reason for this is the exploding “Netizen” base. Today everything happens online, and online data is estimated in zettabytes. The wealth of information one can carve out of online data is undeniably attractive to many organizations for marketing and sales. Organizations like Google, Yahoo, Facebook and Amazon process several petabytes of data on a daily basis. Many more organizations are moving towards being able to collect, store and make sense of data on the Internet to further their interests. That is where “Big Data” has caught the imagination of people around the world. But what is Big Data, and how can you jump on this bandwagon? Fret not, for in this blog post you are going to find out all about it. The structure of this post will be typical of a “What you need to know?” series posted at Infosecnirvana.com. So let's get started!!!

What is Data?
Data is anything that provides value in a structured or unstructured format. It is the lowest level of abstraction in computing terms, because below this there are only binary digits. Data is typically stored in file systems.

Introducing File Systems
File systems are the basis of storing and accessing data on a hardware device. A file system is nothing but an abstraction layer of software/firmware that gives you the capability to store data in a structured format, remember that structure and, when queried, help retrieve the data as quickly as possible. There are two major and common types of file systems – disk based (local access) and network based (remote access). To give a simple example, FAT is a Windows disk-based file system, whereas NFS is a network-based file system.

Even though both types of file systems continue to dominate the IT space, more and more relevance is given to network-based file systems for obvious reasons like distributed data storage, redundancy, fault-tolerance capabilities, etc. This is the basis of “Big Data” tools and technologies.

Introducing DFS
Distributed File Systems are network-based file systems that allow data to be shared across multiple machines and multiple networks. This makes it possible for multiple users on multiple machines to share files and storage resources. The client machines don't have direct access to the storage disk itself (as in a disk-based file system), but interact with the data using a file-system protocol. One classic example of a DFS is Microsoft SMB, where all Windows machines are SMB clients and access a common SMB share on the file server. But SMB suffers from issues with scalability and fault tolerance. This is where systems like the Google File System – GFS (Google uses this in its search engine) and the Hadoop Distributed File System – HDFS (used by Yahoo and others) come into prominence. What these file systems do is provide a mechanism to effectively manage big-data collection, storage and processing across multiple machine nodes.

Introducing HDFS:

The Hadoop Distributed File System, or HDFS for short, is similar to the other DFS implementations discussed above, yet significantly different as well. HDFS can be deployed on commodity hardware, is highly fault tolerant and is very capable of handling large data sets. HDFS was originally developed as part of the Apache Nutch project, an alternative search engine akin to Google. Some of the most prominent software players around HDFS are Apache Hadoop, Greenplum, Cloudera, etc.

In this post, we will be looking at Log Collection and Management using the Hadoop Platform.

APACHE Hadoop: The Apache Hadoop architecture in a Nutshell consists of the following components:

  • HDFS has a master–slave architecture
  • The master server is called the NameNode
  • The slave servers are called DataNodes
  • Underlying data replication across nodes
  • Interface language – Java

Installing Hadoop: Installing Apache Hadoop is not a very easy task, but at the same time it is not too complex either. An understanding of the hardware requirements, operating system requirements and the Java programming language will help you install Apache Hadoop without any issues. Hadoop can be installed as either a single-node installation or a cluster installation. For this post, we will look only at the single-node installation steps:

  1. Install Oracle Java on your machine (Ubuntu).
  2. Install OpenSSH Server.
  3. Create a Hadoop group and a Hadoop user, and set up key-based login for SSH.
  4. Download the latest distribution of Hadoop from http://www.apache.org/dyn/closer.cgi
  5. Installation is just extracting the Hadoop files into a folder and editing some property files.
  6. Provide the location of the Java home in the following file: hadoop/conf/hadoop-env.sh
  7. Create a working folder in the Hadoop user's home directory: /home//tmp
  8. Add the relevant details about the host and the working directory to the following configuration elements in hadoop/conf/core-site.xml:
    <!-- conf/core-site.xml -->

    <property>
      <name>hadoop.tmp.dir</name>
      <value>/home//tmp</value>
      <description>A base for other temporary directories.</description>
    </property>

    <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:54310</value>
      <description>The name of the default file system. A URI whose
      scheme and authority determine the FileSystem implementation. The
      uri's scheme determines the config property (fs.SCHEME.impl) naming
      the FileSystem implementation class. The uri's authority is used to
      determine the host, port, etc. for a filesystem.</description>
    </property>
  9. Then edit hadoop/conf/mapred-site.xml using a text editor and add the following configuration values (just like core-site.xml):
    <!-- conf/mapred-site.xml -->

    <property>
      <name>mapred.job.tracker</name>
      <value>localhost:54311</value>
      <description>The host and port that the MapReduce job tracker runs
      at. If “local”, then jobs are run in-process as a single map
      and reduce task.</description>
    </property>
  10. Open hadoop/conf/hdfs-site.xml using a text editor and add the following configurations:
    <!-- conf/hdfs-site.xml -->

    <property>
      <name>dfs.replication</name>
      <value>1</value>
      <description>Default block replication.
      The actual number of replications can be specified when the file is created.
      The default is used if replication is not specified in create time.</description>
    </property>
  11. Before running the Hadoop installation, the most important step is to format the NameNode, or the master server. This is critical because without the NameNode, the DataNodes cannot be set up. In a single-node installation, the NameNode and DataNode reside on the same host, whereas in a cluster installation NameNodes and DataNodes reside on different hosts. To format the NameNode using Hadoop commands, run the following command: /hadoop/bin/hadoop namenode -format
  12. To start the Hadoop instance, run ./start-all.sh from hadoop/bin (or ./start-dfs.sh followed by ./start-mapred.sh to start HDFS and MapReduce separately). Once the daemons are up, querying the Java processes (jps) should show the following components of Hadoop running:
    NameNode
    DataNode
    SecondaryNameNode
    JobTracker
    TaskTracker
  13. If you have successfully completed everything up to this point, you now have a Hadoop single-node instance running on your machine.

Getting Data in/out of Hadoop:

Once the installation is complete, the next thing we need to worry about is getting data into and out of the Hadoop file system. Typically, to get data into the system, we need an API interface into HDFS, usually a Java or HTTP API. Tools like Fluentd and Flume help in getting data in and out of Hadoop. Both tools have plugins for receiving HTTP data, streaming data and syslog data.
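
As an illustration of the HTTP route, the sketch below pushes a local log file into HDFS over the WebHDFS REST API using Python and the requests library. This assumes WebHDFS is enabled on the cluster; the host, port and destination path are placeholder assumptions for a default single-node setup, and Flume or Fluentd would normally handle this for you in production:

    import requests

    NAMENODE = "http://localhost:50070"           # assumed single-node NameNode web port
    HDFS_PATH = "/logs/firewall/2013-06-01.log"   # hypothetical destination path

    def put_file_webhdfs(local_path, hdfs_path):
        """Two-step WebHDFS upload: ask the NameNode where to write, then send the data."""
        # Step 1: the NameNode answers with a 307 redirect pointing at a DataNode
        url = f"{NAMENODE}/webhdfs/v1{hdfs_path}?op=CREATE&overwrite=true"
        r = requests.put(url, allow_redirects=False)
        datanode_url = r.headers["Location"]

        # Step 2: stream the file body to the DataNode location we were given
        with open(local_path, "rb") as f:
            r2 = requests.put(datanode_url, data=f)
        r2.raise_for_status()

    put_file_webhdfs("/var/log/messages", HDFS_PATH)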

MapReduce: Hadoop and Big Data discussions are incomplete without talking about MapReduce. MapReduce is a software framework that processes input data in two phases and emits its output as key-value pairs. These are two different jobs when it comes to actual processing: the Map task splits the input into smaller chunks and produces intermediate key-value pairs for each chunk, and the Reduce task aggregates the values for each key across those chunks. This framework is the powerhouse of Hadoop because it is built with parallelism in mind: map tasks and reduce tasks can both run in parallel on several machines without compromising on speed, CPU or memory resources. The JobTracker on the master node tracks the map and reduce jobs, whereas the DataNodes (running TaskTrackers) simply provide the processing resources.
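
To make the two phases concrete, here is a minimal pure-Python imitation of the pattern: counting events per source IP in a pile of log lines. It mirrors what a Hadoop Streaming mapper and reducer would do, but it runs locally and is purely illustrative; the log format is an assumption:

    from itertools import groupby
    from operator import itemgetter

    def map_phase(log_lines):
        """Map: emit an intermediate (key, value) pair for every record."""
        for line in log_lines:
            src_ip = line.split()[0]          # assume the source IP is the first field
            yield (src_ip, 1)

    def reduce_phase(pairs):
        """Reduce: aggregate all values that share the same key."""
        for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
            yield (key, sum(v for _, v in group))

    logs = ["10.1.1.5 GET /index.html", "10.1.1.9 GET /login", "10.1.1.5 POST /login"]
    print(dict(reduce_phase(map_phase(logs))))   # {'10.1.1.5': 2, '10.1.1.9': 1}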

Finally, Using Hadoop: Now that we know what drives Hadoop and how to get it installed, the easiest thing would be to start using it. Several examples of MapReduce jobs written in Java are available to aid learning. There are also several related projects that make the Hadoop ecosystem more scalable and mature. Some of them are:

  • HBase, a Bigtable-like structured storage system for Hadoop HDFS
  • Apache Pig is a high-level data-flow language and execution framework for parallel computation. It is built on top of Hadoop Core.
  • Hive, a data warehouse infrastructure that allows SQL-like ad-hoc querying of data (in any format) stored in Hadoop
  • ZooKeeper is a high-performance coordination service for distributed applications.
  • Hama, a distributed computing framework similar to Google's Pregel, based on BSP (Bulk Synchronous Parallel) computing techniques for massive scientific computations
  • Mahout, scalable Machine Learning algorithms using Hadoop

Conclusion: I hope this post helped you understand the basic concepts of Big Data and set up a Hadoop single-node installation to play with. Please do post your thoughts on how Big Data is playing a major role in your organisation.

ArcSight CORR 6.0 – Install and Migration

ArcSight (now HP) Enterprise Security Manager (ESM) is the premier security event manager that analyzes and correlates events in order to support the security team or analysts in every aspect of security event monitoring, from compliance and risk management to security intelligence and operations. Several versions of ArcSight ESM have been released over time; the latest is ArcSight CORR 6.0. At InfoSecNirvana.com we have got a copy of the latest version, and we will be writing a multi-part post covering how to install, how to migrate from older versions to 6.0 and a basic walk-around.

In this Part 1 post, we cover the installation of ArcSight CORR (Correlation Optimized Retention and Retrieval), the latest ArcSight ESM from HP; it is a proprietary data storage and retrieval framework that receives and processes events at high rates and performs high-speed searches. With ArcSight CORR, the Oracle database is now eliminated.

CORR components:

  • ArcSight Manager
  • CORR Engine
  • ArcSight Console
  • ArcSight Web
  • Management Console
  • Smart Connectors

Requirements:

System: This depends entirely on the EPS that you expect to receive. InfoSecNirvana has been working on a PoC for this, and the configuration below was used:
A VMware box with 8 cores, 32 GB RAM, a 256 GB SSD and a 2 TB WD 7200 RPM SATA HDD. (Note: for production, a higher configuration is likely recommended; check the ArcSight manuals.)

OS: Red Hat Enterprise Linux Server release 6.2 x64, installed with the xfsprogs-3.1.1-6.el6.x86_64 RPM; this is required to convert some of the ext4 file systems to XFS. An XFS partition is the most appropriate format to fully utilize the performance enhancements coming with CORR. Typically, I would recommend formatting /opt/ with XFS and allocating the maximum storage to this partition. This is crucial because the very first step of the installation verifies whether the entire /opt/ directory is on XFS. When using VMware with LVM, we faced some issues during the installation, and ArcSight Support could not help us with them; however, when raw devices were mounted as /opt/ we did not face any issues.

Storage: Please allocate the required storage (calculated based on the number of devices, events per second, average event size and retention period). Remember, CORR is like an ESM with a built-in Logger. You can still use a Logger for long-term retention if that is what you prefer, so that the ESM stays lean and mean.
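
As a quick aid for that calculation, here is a small sketch. The EPS, event size, retention and compression figures are placeholder assumptions; use ArcSight's own sizing guidance for anything production-bound:

    def corr_storage_estimate_gb(eps, avg_event_size_bytes, retention_days, compression_ratio=0.25):
        """Very rough storage estimate: events/day * size * retention, scaled by an assumed compression ratio."""
        events_per_day = eps * 86400
        raw_bytes = events_per_day * avg_event_size_bytes * retention_days
        return raw_bytes * compression_ratio / (1024 ** 3)

    # Example: 1,000 EPS, 500-byte events, 90-day retention (all assumed values)
    print(f"~{corr_storage_estimate_gb(1000, 500, 90):.0f} GB")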

Permissions: The installation has to be done using a non-root account. This can be a service account named “arcsight”. This account should have RWX permissions on the /opt/ directory. Make sure this is satisfied.

Misc: The /tmp/ partition should have at least 3 GB of free space, and /home/arcsight should have a minimum of 5 GB free. This is crucial because the install-directory log files are written to these locations, and if sufficient space is not allocated, the installation fails.

The CORR package: Get the CORR installation package and the license from HP ArcSight. These can be obtained from your HP/ArcSight sales representative.

CORR Installation:
The installation is pretty straightforward and is just a series of clicks. I have included most of the screenshots below as a reference. Obviously, if you have installed ArcSight software before, you will not even need them. Once done, you will be able to install the Console to access CORR and play around.



Once the installation has finished, we want to verify the following before we call the install complete:

  1. Validate the log files in the Manager install logs and find out if there are any warnings or errors. This is generally a best practice to ensure a valid installation.
  2. Install the Console and try to connect to the ESM with the default user name and password (mentioned in the install guide). The first time you connect, a certificate import of the Manager happens. If you use a self-signed certificate, make sure you note down the parameters used to create it, because this will help in future migrations, troubleshooting or recovery.
  3. After connecting to the console, you are ready to go.

Migrations from Existing Installs – Migrating from earlier versions to this CORR instance is tricky, because you are migrating from a DB back end to a non-DB back end. I will be posting a follow-up in Part 2 that will detail the migration procedure from 4.x/5.x installations.

Stay Tuned to InfoSecNirvana.com for more!!!
