Category Archives: Security Learning

SIEM Use Cases – What you need to know?

My previous post “Adopting SIEM – What you need to know” is a better starting point if you are new to SIEM and want to implement it in your organization. If you already use, manage or implement a SIEM, read on.
To start with, SIEM tools take a lot of effort to implement. Once implemented, they need to be cared for like babies. If that care is not given, within a few months you will be staring at a million-dollar museum artifact. There are two parts to this care:

  1. Making sure that the systems are updated regularly, not only for patches and configurations but also for the content put into them.
  2. Second, and most important, keeping the SIEM relevant to the current Threat Landscape.

Anyone who has worked on SIEM for some time would agree with me that administration is generally easier than keeping the system relevant to the Threat Landscape. Before people hit me with “Administration is also a pain”, I would offer the defense that most SIEM products come with documentation that gives a fair amount of information on how to install, update, upgrade and operate them. Translating the Threat Landscape into nuts and bolts for the SIEM, however, is the biggest challenge, and there are no guides that can help you do that.

In this blog post, my attempt is to make this translation as easy as possible. In SIEM parlance, we call this translation a Use Case. If Use Cases are well defined, implementing, responding to and managing them becomes easier. Such Use Cases eventually become the cornerstone on which a SOC (Security Operations Center) is built. As usual, I would like to start by defining a Use Case, run through its stages and finally wrap up with an example. So here we go.

Use Case Definition: A Use Case is nothing but a Logical, Actionable and Reportable component of an Event Management system (SIEM). It can be a Rule, Report, Alert or Dashboard that solves a set of needs or requirements.

A Use Case is actually “developed”, and this development is a complete process, not just a simple task. Like a mini project, it has several stages. The stages involved in Use Case Development are as follows:

  • The first stage is “Requirements Definition”. The requirement can be any of the following high-level types and is unique to every company:
    1. Business
    2. Compliance
    3. Regulatory
    4. Security
  • Once the requirements are finalized, the next stage would be to “Define the scope” of the requirement. This would typically mean the IT Infrastructure that needs to be protected and is a high priority for the specific requirement.
  • Once the scope is finalized, we can sit down and list the “Event Sources” required to implement the Use Case. These would be Log Data, Configuration Data, Alert Data, etc. coming out of the IT Systems within the scope defined above.
  • The next stage is to ensure that the Event Sources go through a “Validation” phase before use. Many times we have an Event Source, but the data required to trigger an Event is not available. This needs to be fixed before we proceed with Use Case development.
  • Post validation, we need to “Define the Logic”. This is where we define exactly what data, and how much of it, is needed to alert, along with the Attack Vector we would like to detect.
  • Use Case “Implementation and Testing” is the next stage. This is where we actually configure the SIEM to do what it does best – Correlation and Alerting. During Implementation, the desired output is also defined. The output can be one of the following:
    1. Report
    2. Real Time Notification
    3. Historical Notification
  • Once implementation is done, we need to “Define Use Case Response” procedures. These procedures help make the Use Case operational.
  • Finally, Use Case “Maintenance” is an ongoing process to keep the Use Case relevant by appropriate tuning.

Now that we have defined the Use Case Development methodology in detail, it is time to take an example and see how it looks in real-life implementation terms.

The Requirement: Outbound Spam Detection.
The Scope: Mail Infrastructure, End User Machine, Security Detection Infrastructure
The Event Source:
  • IDS/IPS at Network and Host – Signature Based Detection
  • Mail Hygiene or Mail Filtering Tools – Signature Based Detection
  • Events from Network Devices – Traffic Anomaly Based Detection
  • Events from End User Detection tools – Signature and Traffic Anomaly Based Detection

The Event Validation: The logs from the devices feeding the SIEM should be normalized and parsed properly. Typically, SIEM products allow content development based on their native field mappings (through parsing). If the fields are not mapped, the SIEM does a poor job of event triggering and alerting. The required fields for the above Use Case would typically be Source IP, Source User ID, Email Addresses, Target IP, host information for Source and Target, Event Names for SPAM detection, and Port and Protocol for SMTP traffic detection.
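
To make the field-mapping requirement concrete, here is a minimal sketch (in Python, purely for illustration) of what a normalized outbound-SPAM event might look like after parsing, along with a trivial check that the fields the Use Case depends on actually got mapped. The field names are assumptions; every SIEM product has its own schema.

```python
# Illustrative only: a "normalized" SPAM event represented as a plain dict.
# Real SIEM products use their own field names and storage formats.
spam_event = {
    "event_name":   "Outbound SPAM detected",
    "source_ip":    "10.20.30.40",
    "source_user":  "jdoe",
    "source_host":  "WKSTN-1234",
    "email_sender": "jdoe@example.com",
    "target_ip":    "203.0.113.25",
    "target_host":  "mx.example.net",
    "port":         25,
    "protocol":     "smtp",
}

# Validation idea: the Use Case cannot trigger reliably if any of the
# fields it depends on failed to parse at the collector.
required = ["event_name", "source_ip", "target_ip", "port", "protocol"]
missing = [f for f in required if not spam_event.get(f)]
if missing:
    print("Event fails validation, unmapped fields:", missing)
```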

Use Case Logic Flow: The logic definition is unique to the environment and needs to be defined accordingly. The logic can be either signature based or behavior based. You can restrict it to a certain subset of data (based on the Event Sources above) or expand it to be more generic. Some samples are given below, followed by a rough sketch of how the first one could be expressed in code:
  • One machine making outbound Port 25 connections at a rate of 10 per minute
  • SPAM signatures from IDS/IPS, Mail Filter, etc. originating from the same source and targeting the same public destination domain
  • Constant SYN scans on port 25 from a single source, etc.
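
As a concrete illustration of the first sample above, here is a minimal, product-agnostic sketch in Python of the “one machine making 10 outbound port 25 connections in a minute” logic. The event format and the threshold are assumptions; in a real deployment this correlation would be built inside the SIEM rather than in a script.

```python
from collections import defaultdict, deque

THRESHOLD = 10   # connections that count as "too many"
WINDOW = 60      # sliding window in seconds

# source_ip -> timestamps of its recent outbound TCP/25 connections
recent = defaultdict(deque)

def on_connection_event(source_ip, dest_port, timestamp):
    """Feed one parsed firewall/flow event (timestamp in epoch seconds).
    Returns True when the source breaches the outbound-SMTP threshold."""
    if dest_port != 25:
        return False
    q = recent[source_ip]
    q.append(timestamp)
    # Expire connections that have fallen out of the sliding window
    while q and timestamp - q[0] > WINDOW:
        q.popleft()
    return len(q) >= THRESHOLD
```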

Implementation and Testing: Once the logic is defined, the next phase is configuring the SIEM and tuning the implementation to trigger more accurately. After implementation of the Use Case, we need several iterations of incident analysis along with data collection to ensure that the Use Case is doing what it is intended to do. This is done at the SIEM level and may involve aggregation, threshold adjustments, logic tightening, etc.

Use Case Response: After implementation, the Use Case needs to be turned into a valuable resource by defining a Use Case Response. This is the stage where you define “what action needs to be taken and how it needs to be taken”. You can look at Episode 4 of my Security Investigation Series to get an idea of how to investigate SPAM cases. The other Security Investigation Series articles are located here – Security Investigation Series.

SIEM Use Cases are really the starting point for good incident detection. If you want to run a SOC, having well-defined SIEM Use Cases eases management and increases the efficiency of operations. This post is my humble attempt to simplify and standardize Use Case development for SIEM implementations.

As always, I would love to hear comments and thoughts on this topic.

Adopting SIEM – What you need to know?

SIEM stands for “Security Information and Event Management”.

Oh wait, I have heard of SIM, I have heard of SEM, but what is this SIEM?
Originally, Security Information Management (SIM) and Security Event Management (SEM) were two different technologies performing similar but distinct functions. In 2005, Gartner coined the term SIEM to encompass both. As the name suggests, it is nothing but a collection of tools and technologies to manage Incidents and Events pertaining to Security. Some of the tell-tale capabilities of a typical SIEM platform are:

  1. Collect Logs from various Log Sources/Devices
  2. Store these logs for a decent amount of time
  3. Provide Fast Search/Retrieval capabilities
  4. Provide meaningful interpretation of the logs received
  5. Provide capabilities to correlate between logs of different devices
  6. Basic Ticketing/Alerting capabilities.

The first 4 points are typical of a SIM and the remaining 2 are typical of a SEM.
Any tool that does all of these is a SIEM. There are more than 50 different products in the SIEM space. Just like any other product category, they serve various market segments at various price points.
If you Google for SIEM reviews, you will get a lot of information on various products. I have worked with at least 4 SIEM vendors, and each of them has its own pros and cons. Evaluating a product in a demo and evaluating it after use are two different things. So, in this blog post, I am going to highlight a few things you need to know when you are planning to adopt SIEM technology:

  1. Have a defined logging process in your environment. This is crucial because a SIEM is useless without a good Logging Program. This not only makes the SIEM implementation easier but also gives you a measure of the log volume you are dealing with. In my experience, I have often seen an industry-leading SIEM made to look pedestrian, and a waste of money, by poor log management.
  2. Every SIEM vendor has something called a Collector/Connector/Receiver/Agent that collects logs from devices and converts them to a proprietary format. This conversion, or parsing as we call it, is how the product stores data in a format it can understand and process quickly. Most vendors offer Custom Collector/Parser development for their “unsupported” log sources, but this costs money, requires in-house skills and may need regular maintenance. Native parsing support for your log sources is therefore better; establish this before you move ahead with the SIEM implementation. Either assign an in-house resource to build and manage such customizations or spend more money to get the vendor to do it. (A toy example of a custom parser is sketched after this list.)
  3. Identify primary focus areas from an organizational perspective. This will help you configure your SIEM solution appropriately. These focus areas should be broadly classified and then expanded down to ground level. For example, if your requirement is compliance, start with the control requirements, see what logs need to be collected to fulfill them, how the integration needs to be done, and what needs to be reported, alerted on, retained, etc.
  4. Get a dedicated SIEM administrator, or train someone in-house to be that person. This is very important because, in my experience, a SIEM is only as good as its administrator. Without proper maintenance and care, it will decay over time. If you really want to generate value out of it, manage it well, and by managing a SIEM I mean not only the system itself but also the ecosystem it resides in.
  5. Understand that a SIEM alone cannot solve all your security problems. It is NOT A MAGIC WAND. If set up and configured correctly, a SIEM can at best point you in the right direction, one where you can identify and fix several security issues in your enterprise and thereby strengthen it. So, be prepared to have a Response/Remediation team that will investigate the alerts generated and take appropriate action.
  6. Correlation is a vital part of SIEM offerings. Before adopting SIEM, make sure you understand, and possibly catalog, the various Attack Vectors and Threat Scenarios you want correlated in your organization. This gives fair direction for the basic rules you will put in place to start with. Once you are comfortable and start seeing the alerts generated, you can play around and experiment more. In my experience, it is best to start with built-in rules, understand them, investigate them, tune them and then slowly start building your own content. For more details on the various rules available in a SIEM, look at Rules Rule in SIEM Kingdom.
  7. Architecture-wise, make sure your SIEM solution works in tandem with your logging solution. Also, build your SIEM to be as modular as possible, making upgrades, technology refreshes, etc. seamless.
  8. Don’t forget the filtering aspect. Correlation engines perform faster and give you better results if they work on a smaller set of “known bad” logs rather than everything. This is crucial in large enterprises, as the log volume can easily overwhelm the SIEM. Note: many SIEM tools have limitations on the number of events they can process, denoted in Events Per Second (EPS). Even though vendors advertise several thousand, an effective correlation system can handle only around 2000 – 5000 EPS at most; anything more will make your system painfully slow, so understand and work through this. Look at my posts What and How much to Collect and High Log Volume – What to Filter and What to Keep? to get more information on how to log, what to log and what to filter. (A rough pre-filter example is also sketched after this list.)
  9. Remember: the more processing a layer does, the lower its EPS. The Log Collection layer will have more EPS processing capability than the Correlation engine, and so on. Visualize it as a pyramid with Log Collection at the base and Correlation at the top.
  10. Last but not least, “Stay Alert and Eager. The Logs Don’t Lie.”
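
To make points 2 and 8 above a little more tangible, here is a rough Python sketch of a custom parser plus a pre-filter sitting at the collection layer. The log format, field names and filter criteria are all assumptions for illustration; real collectors and filters are configured within the SIEM product itself.

```python
import re

# Hypothetical "unsupported" log line:
#   2016-01-12T10:22:31 fw01 DENY 10.1.2.3 -> 203.0.113.9:25 tcp
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\S+)\s+(?P<device>\S+)\s+(?P<action>\S+)\s+"
    r"(?P<source_ip>\S+)\s+->\s+(?P<target_ip>[^:]+):(?P<port>\d+)\s+(?P<protocol>\S+)"
)

def parse(line: str):
    """Point 2: turn a raw log line into normalized fields (or None)."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

NOISY_ACTIONS = {"ALLOW", "HEALTHCHECK"}   # assumed high-volume, low-value events

def should_forward(event: dict) -> bool:
    """Point 8: drop routine noise before it reaches the correlation engine."""
    return event["action"] not in NOISY_ACTIONS

raw_lines = [
    "2016-01-12T10:22:31 fw01 DENY 10.1.2.3 -> 203.0.113.9:25 tcp",
    "2016-01-12T10:22:32 fw01 ALLOW 10.1.2.4 -> 192.0.2.10:443 tcp",
]
parsed = [e for e in map(parse, raw_lines) if e]
forwarded = [e for e in parsed if should_forward(e)]
print(f"{len(forwarded)} of {len(parsed)} events forwarded to correlation")
```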

I hope this post helped you get a fair idea of SIEM technologies. I have worked on HP ArcSight, Symantec SSIM and Novell E-Sentinel. If you need details about them in terms of practical setup, configuration, architecture, etc., shout out and I will help as much as possible.

Research to Detection – Identify Fast Flux in your environment

So what is Fast Flux?

Fast Flux is a camouflage technique used by modern-day bots to evade detection and IP-based blacklisting. The technique involves rapidly changing the DNS Address Records (A records) for a single FQDN, which means that every time you visit www.site.com, you may be connecting to a different IP address.
Detecting Fast Flux in any environment is a very difficult task. Let me explain why:

  1. Fast Flux is of two types – Single Flux and Double Flux.
  2. If Single Flux is employed, the only thing changing is the IP address behind a static domain name. A typical Fast Flux service network would have several thousand A records for the same domain name over time. The TTL value for every A record is very low, prompting DNS resolvers to query again in short succession. (A quick way to observe this behavior is sketched after this list.)
  3. If Double Flux is employed, nothing is static anymore. Both the NS records and the A records change rapidly. The NS servers are compromised machines with back-end control channels to the attacker. Detecting Double Flux is even harder than detecting Single Flux already is.
  4. If you think “Oh, it’s easy to identify these domains by analyzing rapidly changing DNS records”, YOU ARE WRONG. Many hosting providers employ the same pattern for web traffic load balancing, to ensure that client requests are served quickly. So, if you were to analyze the DNS records alone, you would be lost trying to separate the milk from the water.
  5. There is no single right way of identifying Fast Flux networks, and research is still ongoing to find a solid solution.
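
To get a hands-on feel for what “rapidly changing A records with a very low TTL” actually looks like (point 2 above), the small sketch below resolves a domain a few times and prints the TTL and address set each time; a fast-flux domain will typically show a tiny TTL and a different set of addresses on nearly every query. It assumes the third-party dnspython package (2.x API) and a hypothetical domain name.

```python
import time
import dns.resolver  # third-party "dnspython" package (2.x API)

def watch_domain(name: str, rounds: int = 3, pause: int = 60) -> None:
    """Resolve a domain repeatedly and show how its A records rotate."""
    for _ in range(rounds):
        answer = dns.resolver.resolve(name, "A")
        addresses = sorted(rdata.address for rdata in answer)
        print(f"{name}: TTL={answer.rrset.ttl} A={addresses}")
        time.sleep(pause)

# watch_domain("suspicious-domain.example")  # hypothetical domain name
```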

But the havoc that bots cause today is real. How can we bring research-based approaches to the enterprise? How can we achieve Fast Flux detection? How can we increase the effectiveness of detection with existing tools?
In this post, I want to discuss a research-to-detection approach for Fast Flux DNS in an enterprise network. I have used Snort, ArcSight, custom scripting, etc. to elucidate my thoughts and ideas. This may not be a perfect solution, but it will do its primary job.

  1. First, we need to start logging the DNS queries happening in the network. We are interested in logging and analyzing only the outbound queries made by our enterprise DNS servers; this is less noisy than the internal requests the DNS servers receive from client machines. Remember to have a Log Management/Detection program in place.
  2. Among the queries sent from the DNS servers, we need to detect all responses that return A records with a TTL value of less than 1800 seconds. This data collection should contain the Domain Name, the A records and the NS records.
  3. If possible, we can also collect ASN data for the IP addresses returned in the DNS responses.
  4. The above data collection can be done with a three-step customization:
    1. The first step would be to create a Snort rule to identify DNS responses with a low TTL value. Generally, the DNS response contains the A records, the corresponding NS records and the TTL values.
    2. The second step would be to parse the Snort output data to correctly identify the domain, the IP records and the NS records. This would mostly require a custom collector, or we can “shim” an existing File Reader collector to parse the Snort data into the respective fields.
    3. The third step would be to do an IP-to-ASN mapping for all the IP records returned. This can be done by running a script or a tool after collection.
  5. We can then put the parsed data into two Active Lists (ArcSight terminology for watch lists). One Active List would hold Domain/A Record pairings and the other Domain/NS Record pairings.
  6. Then rule logic can be created to do the following (a rough standalone sketch of this logic is given after this list):
    1. For Single Flux, the logic would be one domain mapping to a large number of distinct A records in a day.
    2. For Double Flux, the logic would be one domain mapping to a large number of distinct A records and a large number of distinct NS records in a day.
    3. Correlation with the ASN data collected gives a clearer picture of whether a Fast Flux trigger is a false positive. I would personally want to investigate this data set against the ASN data set manually to begin with, so that I can determine what needs to be tightened in the rules.
    4. We can also add some tuning for DynDNS scenarios. Such a domain whitelist reduces the set of event triggers.
    5. Progressive cross-validation with Internet blacklists, spam lists, abuse lists, etc. gives the identification more muscle.
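
To show what the rule logic in step 6 could look like outside the SIEM, here is a minimal standalone Python sketch that keeps per-domain sets of distinct A and NS records seen in a day and flags possible single or double flux once they cross a threshold. The thresholds, the whitelisting approach and the idea of doing this in a script (rather than in ArcSight Active Lists and rules) are all assumptions for illustration.

```python
from collections import defaultdict

# Assumed thresholds for "a large number of distinct records in a day"
A_THRESHOLD = 20
NS_THRESHOLD = 5

a_seen = defaultdict(set)    # domain -> distinct A records observed today
ns_seen = defaultdict(set)   # domain -> distinct NS records observed today

def observe(domain, a_records, ns_records, whitelist=frozenset()):
    """Feed one parsed low-TTL DNS response; return a verdict or None."""
    if domain in whitelist:                 # e.g. known DynDNS / load-balanced domains
        return None
    a_seen[domain].update(a_records)
    ns_seen[domain].update(ns_records)
    many_a = len(a_seen[domain]) >= A_THRESHOLD
    many_ns = len(ns_seen[domain]) >= NS_THRESHOLD
    if many_a and many_ns:
        return "possible double flux"       # step 6.2
    if many_a:
        return "possible single flux"       # step 6.1
    return None
```

Verdicts from a sketch like this would still need to be checked against the ASN data, blacklists and abuse lists mentioned above before anyone treats them as confirmed Fast Flux.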

Remember that there are several practical pitfalls in terms of performance. Snort preprocessors can quickly become resource intensive, so the best idea is to put some network zoning in place (along with whitelisted DynDNS sites), thereby reducing Snort processing cycles. Similarly, ArcSight Active Lists and rule triggers can quickly get out of control, so it is important to manage them closely. The custom scripts/data collectors can also put some load on the servers. Once detection is in place, suitable response mechanisms can be put in place for Fast Flux networks.

Since this approach is a work in progress, I will add a few more notes as and when I identify something new. If you have inputs to enhance this idea, I would love to hear from you.
