
How good is our current Security Strategy?

A few years ago, "Hacktivist Groups" barely existed, and the ones that did lurked in the underworld. Today they have the guts to come out in public and declare war on the Internet, and they have been very successful at causing big corporations losses in terms of data and money. How much would you wager that this is just the beginning? This brings us to the very question – how good is our current Security Strategy?

Traditionally, we have built our security regime using one or both of these approaches in tandem: Known Bad Security (Blacklisting) and Known Good Security (Whitelisting). To all the signature- and behavior-based thinkers out there, don't fret: these two categories are a superset of the signature- and behavior-based approaches.

First let us look at Known Bad Security or Blacklisting:
One of the things we are very good at is "KNOWN BAD" detection and response. By this I mean we are good at identifying vulnerabilities based on vendor releases, patching them once the vendor releases the patches, and updating AV/AS, IDS/IPS, content filtering and the like to protect against exploitation. That is what "KNOWN BAD" security is all about: you know it is bad and you defend against it. Yet a recent Verizon survey shows that only 1% of all data breaches are identified by IDS/IPS or AV solutions. This is a clear indicator that signature-based detection and blacklist-based response are not giving us results. So even though we are very good at Known Bad Security, we are being compromised day in and day out!

Known Good Security or Whitelisting is just the opposite of Known Bad:
By this I mean we identify and maintain a list of KNOWN GOOD items in our IT infrastructure: which connections are good, which users are good, which files are good, what is allowed, what is unauthorized and so on, as the data points for Known Good Security. Based on this data, we identify security abnormalities and anomalous patterns that do not conform to the whitelist and go after them as rogues/attackers. We investigate them; if found bad, we follow the remediation process, and if found good, we add them to the whitelist. Once we know what is bad, we automate it by feeding it into blacklist detection and response. While effective, this is a slow and tedious process that gives an attacker gracious amounts of time to wreak havoc.

Some Good, Some Bad:
Most enterprises today use a combination of blacklisting and whitelisting to meet their information security needs. But based on the threats being propagated today, we can say with confidence that this approach is failing. The main reason is that the actual good and the actual bad are far larger than the known good and the known bad. Since we are unable to quantify these numbers scientifically, we end up doing a good job of neither.

What do we lack?
Our current security strategy has some gaping holes. Some of them are listed below:

  • Over-relying on external sources: We still rely on vendor input, community input and other public disclosures to define our blacklists. One vendor's threat detection efficiency differs from another's; one vendor might rate a piece of malicious code as High severity while another rates it Low. This kind of disparity does not help in determining what is actually bad.
  • Poor knowledge of our environment: How many times have you identified a security incident and, while investigating, found out something new about the environment? I would bet it is nearly every time. Without knowing the exact nature of our environment, we cannot do any effective whitelisting, and without effective whitelisting, blacklisting suffers as well.
  • One cure for all diseases: We assume that if one organization is compromised by a specific exploit, the same exploit applies to everyone. We seldom stop to evaluate whether the controls we have differ significantly from the controls other organizations have. Security should be tailored to the environment, not the other way around.
  • Once we whitelist something, we never re-evaluate it. We perceive that "it is clean" and pay little attention until hell breaks loose. This is more about human nature than anything else, I guess: once we "move past" something, we never look back. This hurts us because something whitelisted today might turn bad tomorrow due to IT dynamics, leading to an exploit.
  • We live and die by "more tools – more security, latest signatures – more protection, more resources – more coverage, more training – more knowledge". Most organizations just buy security tools or technologies to tick a check box for their audit/compliance needs. If the company execs catch wind of a security attack at some other company, they become paranoid that it will happen to them as well – hence the "gimme more security" approach.
  • The more the hoopla, the more serious the threat: We treat the amount of publicity a threat receives as directly proportional to its severity. We may have several other threats in our environment and several gaps to fix, but we still go looking for the famous Conficker, Flame, Stuxnet, Aurora, Zeus and so on. Even though some of these were big in terms of spread, every organization had different infection rates.
  • We still think of security as an operations function. We still go by the number of alerts worked, the number of incidents raised, time to solve, time to respond and so on. Security is more of an analytical, investigative field. Looking beyond the noise, finding the needle in the haystack and attacker attribution all sound cool on paper, but the current strategy does not help bring them to reality.
  • Security is not a culture. In everyday life you lock the front door, keep important things in a safe, put on safety gear and wear a seat belt, yet we do not approach IT systems development, implementation and management with a security mindset. Bad products are developed, bad implementations happen, bad administration and monitoring happen, and people make mistakes – all leading to security breaches, data theft and loss.

I am sure there are more flaws in our current strategy than the list above. What do you think? Please comment!

 

SIEM Use Cases – What you need to know?

My previous post, "Adopting SIEM – What you need to know", is a better starting point if you are new to SIEM and want to implement it in your organization. If you already use, manage or implement a SIEM, then read on.
To start with, SIEM tools take a lot of effort to implement. Once implemented, they need to be cared for like babies; if that care is not given, within a few months you will be staring at a million-dollar museum artifact. There are two parts to that care:

  1. Making sure the systems are updated regularly – not only patches and configurations but also the content put into them.
  2. Making the SIEM relevant to the current threat landscape – the second and most important part.

Anyone who has worked on SIEM for some time would agree that administration is generally easier than making the system relevant to the threat landscape. Before people hit me with "administration is also a pain", I will offer a defense: most SIEM products ship with documentation that gives a fair amount of information on how to install, update, upgrade and operate them. Translating the threat landscape into nuts and bolts for the SIEM, however, is the biggest challenge, and there are no guides that can help you do that.

In this blog post, my attempt is to make this translation as easy as possible. In SIEM parlance, we call the translation a Use Case. If Use Cases are well defined, implementing, responding to and managing them becomes easier, and they eventually become the cornerstone on which a SOC (Security Operations Center) is built. As usual, I would like to start by defining a Use Case, run through its stages and finally wrap up with an example. So here we go.

Use Case definition: A Use Case is nothing but a logical, actionable and reportable component of an event management system (SIEM). It can be a rule, report, alert or dashboard that solves a specific set of needs or requirements.

A Use Case is actually "developed", and this development is a complete process, not just a simple task. Like a mini project, it has several stages. The stages involved in Use Case development are as follows (a minimal data-structure sketch of these stages is given after the list):

  • The first stage is "Requirements" definition. The requirement can be any of the following high-level categories and is unique to every company:
    1. Business
    2. Compliance
    3. Regulatory
    4. Security
  • Once the requirements are finalized, the next stage is to "define the scope" of the requirement. This typically means the IT infrastructure that needs to be protected and is a high priority for that specific requirement.
  • Once the scope is finalized, we can sit down and list the "Event Sources" required to implement the Use Case. These would be log data, configuration data, alert data etc. coming out of the IT systems within the scope defined above.
  • The next stage is to ensure that the Event Sources go through a "validation phase" before use. Many times we have an event source, but the data required to trigger an event is not available. This needs to be fixed before we proceed with Use Case development.
  • Post validation, we need to "define the logic". This is where we define exactly what data, and how much of it, is needed to alert, along with the attack vector we would like to detect.
  • Use Case "implementation and testing" is the next stage. This is where we actually configure the SIEM to do what it does best – correlation and alerting. During implementation, the desired output is also defined. The output can be one of the following:
    1. Report
    2. Real Time Notification
    3. Historical Notification
  • Once implementation is done, we need to "define Use Case response" procedures. These procedures make the Use Case operational.
  • Finally, Use Case "maintenance" is an ongoing process of appropriate tuning to keep the Use Case relevant.
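
One way to keep track of a Use Case as it moves through these stages is to treat it as a simple record. The sketch below is purely illustrative – the class and property names are my own, not any SIEM product's – and simply maps each stage to a field:

using System;
using System.Collections.Generic;

// Illustrative only: a record of a single Use Case as it moves through the stages above.
public enum RequirementType { Business, Compliance, Regulatory, Security }
public enum OutputType { Report, RealTimeNotification, HistoricalNotification }

public class UseCase
{
    public string Name { get; set; }                                 // e.g. "Outbound Spam Detection"
    public RequirementType Requirement { get; set; }                 // Stage 1: requirement definition
    public List<string> Scope { get; } = new List<string>();         // Stage 2: infrastructure in scope
    public List<string> EventSources { get; } = new List<string>();  // Stage 3: log/alert/config feeds
    public bool SourcesValidated { get; set; }                       // Stage 4: parsing and field mapping verified
    public string DetectionLogic { get; set; }                       // Stage 5: the correlation/alerting logic
    public OutputType Output { get; set; }                           // Stage 6: implementation output
    public string ResponseProcedure { get; set; }                    // Stage 7: documented response
    public DateTime LastTuned { get; set; }                          // Stage 8: maintenance and tuning history
}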

Now that we have defined the Use Case development methodology in detail, it is time to take an example and see how this looks in real-life implementation terms.

The Requirement: Outbound Spam Detection.
The Scope: Mail Infrastructure, End User Machine, Security Detection Infrastructure
The Event Sources:
  • IDS/IPS at Network and Host – Signature Based Detection
  • Mail Hygiene or Mail Filtering Tools – Signature Based Detection
  • Events from Network Devices – Traffic Anomaly Based Detection
  • Events from End User Detection tools – Signature and Traffic Anomaly Based Detection

The Event Validation: Events from the devices logging to the SIEM should be normalized and parsed properly. SIEM products typically allow content development based on their native field mappings (through parsing); if the fields are not mapped, the SIEM does a poor job of event triggering and alerting. The required fields for the above Use Case would typically be source IP, source user ID, email addresses, target IP, host information for source and target, event names for SPAM detection, and port and protocol for SMTP-based traffic detection. A minimal validation sketch is given below.
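
As a rough illustration of this validation step, the check can be as simple as confirming that every required field is actually populated after parsing. The field names below are hypothetical, not a particular vendor's schema:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical normalized event: the field names are illustrative, not a vendor schema.
public class NormalizedEvent
{
    public Dictionary<string, string> Fields { get; } = new Dictionary<string, string>();
}

public static class EventValidator
{
    // Fields the outbound-spam Use Case needs to see populated after parsing.
    static readonly string[] RequiredFields =
        { "SourceIP", "SourceUserID", "EmailAddress", "TargetIP", "EventName", "Port", "Protocol" };

    // Returns the fields that are missing or empty, so parsing gaps can be fixed before go-live.
    public static IEnumerable<string> MissingFields(NormalizedEvent e) =>
        RequiredFields.Where(f => !e.Fields.TryGetValue(f, out var v) || string.IsNullOrWhiteSpace(v));
}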

Use Case Logic Flow: The logic definition is unique to each environment and needs to be defined accordingly. The logic can be either signature based or behavior based, and it can be restricted to a certain subset of data (based on the Event Sources above) or expanded to be more generic. Some samples are given below, followed by a code sketch of the first one:
  • One machine making outbound connections on port 25 at a rate of 10 per minute
  • SPAM signatures from the same source (reported by IDS/IPS, mail filter etc.) with the same destination public domain
  • Constant SYN scans on port 25 from a single source
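
To make the first sample concrete, here is a minimal sketch of the counting logic – it is not any SIEM's rule syntax, and the class and field names are my own. It flags any source IP that opens 10 or more outbound port 25 connections within a one-minute window:

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative only: a stripped-down event with just the fields this rule needs.
public class ConnectionEvent
{
    public string SourceIP { get; set; }
    public int DestinationPort { get; set; }
    public DateTime Timestamp { get; set; }
}

public static class OutboundSpamRule
{
    // Returns the source IPs that opened `threshold` or more outbound port 25 connections
    // within any single one-minute window of the supplied events.
    public static IEnumerable<string> Evaluate(IEnumerable<ConnectionEvent> events, int threshold = 10)
    {
        return events
            .Where(e => e.DestinationPort == 25)
            .GroupBy(e => new
            {
                e.SourceIP,
                // Bucket timestamps into fixed one-minute windows.
                Minute = new DateTime(e.Timestamp.Year, e.Timestamp.Month, e.Timestamp.Day,
                                      e.Timestamp.Hour, e.Timestamp.Minute, 0)
            })
            .Where(g => g.Count() >= threshold)
            .Select(g => g.Key.SourceIP)
            .Distinct();
    }
}

In a real SIEM this would be expressed as a correlation rule with aggregation on the source IP and a sliding time window, but the underlying counting is the same.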

Implementation and Testing: Once the logic is defined, the next phase is configuring the SIEM and tuning the implementation so that it triggers more accurately. After the Use Case is implemented, we need several iterations of incident analysis along with data collection to ensure that the Use Case is doing what it is intended to do. This is done at the SIEM level and may involve aggregation, threshold adjustments, logic tightening and so on.

Use Case Response: After implementation, the Use Case needs to be turned into a valuable resource by defining a Use Case response. This is the stage where you define what action needs to be taken and how. You can look at Episode 4 of my Security Investigation Series to get an idea of how to investigate SPAM cases; the other Security Investigation Series articles are located here – Security Investigation Series.

SIEM Use Cases are really the starting point for good incident detection. If you want to run a SOC, well-defined SIEM Use Cases ease management and increase the efficiency of operations. This post is my humble attempt to simplify and standardize Use Case development for SIEM implementations.

As always, I would love to hear comments and thoughts on this topic.

Automating Security Investigations – Torrent Investigation

Across any company you will find several users running BitTorrent, uTorrent or other P2P clients and downloading copyrighted material, which is a huge risk to the company. Beyond that, it brings other risks such as downloading trojans, bots and malware that could cause data leaks. Although many organizations categorize this as a policy violation, users ignore such policies and continue to download. Security Investigation Series – Episode 1 documented in detail how to investigate such incidents; here I have decided to take it to the next level. During my routine investigations I identified several tasks done by an operator that beg to be automated, so I decided to build a custom script to do it. If routine tasks are automated, the operations analyst can concentrate on more complex analysis. Before starting on an automation script, we need to make sure all the required details are collected. For torrent investigation cases, let us see what is required:

  • An alert in the SIEM tool indicating the presence of torrent traffic in the network
  • Verify whether the alert is genuine or a false positive. Verification typically involves checking the user machine.
  • Once verified, the required artifacts need to be collected for further analysis. Further analysis validates whether the machine has been compromised because of torrent use. This is vital because attackers often use torrent software as an entry point into the victim machine and plan their subsequent attacks from there.
  • If the machine is compromised, trigger a forensic process
  • If the machine is not compromised, trigger a remediation process
  • The process trigger can either be a Ticket Created Automatically or an Email/Report/Alert.
  • Any other custom actions specific to the enterprise

Now that we have the tasks to be automated, let us work out what the script/tool input and output will be.

Input for the Script/Tool:

  • The IP address of the machine suspected of torrent behavior. This can actually be a list of IP addresses to save time. Gathering the IP addresses can be done in the SIEM using Active Lists (ArcSight), Lookup Tables (Symantec SSIM), query tables in other SIEM solutions etc. that track only the IP addresses of client machines doing torrenting.
  • Additional port, protocol and destination IP information will also help in gathering detailed data.
  • Credentials to connect to the machine. Typically, security teams have administrator-level or some other privileged access to perform security investigations in the enterprise.

Output for the Script/Tool: This tool can be made to collect different types of artifacts to ascertain that the IP in question is violating policy. These artifacts can be any of the following:

  • NETSTAT information for the relevant destination IP and port numbers. Based on this information it is easy to identify the service, the PID etc. for the connection itself.
  • Task list information, keyed on the PID, to find the process name. This data, along with NETSTAT, points out exactly which torrent program the client machine is using.
  • The tool can also inventory the entire hard drive to retrieve a list of saved torrent file names along with their attributes.
  • The tool can then zip all the data collected and place it in a shared folder or server location.
  • Additionally, the tool can be made to take an "ACTION" once the investigation is complete. The ACTION can be any one of these:
    1. Just a user email sent to warn the user of the policy violation and remediation steps.
    2. Create a Support Ticket in a Ticketing system so that the respective Security Operations team can take appropriate action on the case
    3. Trigger a remediation on the machine automatically. I know this is the most intrusive but “Hey, it’s an option too right??”
  • Some intelligence can also be added to the tool. For example, the tool can store each IP it has processed in its own small DB to make sure it does not contact that IP again for a limited period (say 3 days, depending on how strictly you want the remediation process to kick in). This removes the noise generated by repeated IPs. A minimal sketch of this check follows the list.
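
As a rough sketch of that last point – the file name, line format and 3-day window here are my own assumptions, not part of any product – the tool could keep a small text file of processed IPs with timestamps and skip any IP seen within the cooling-off period:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Illustrative only: tracks which IPs were already processed so the tool
// does not contact them again within a cooling-off window (3 days here).
public class ProcessedIpStore
{
    private readonly string _path;                                   // e.g. "processed_ips.txt" (assumed name)
    private readonly TimeSpan _coolOff = TimeSpan.FromDays(3);
    private readonly Dictionary<string, DateTime> _seen = new Dictionary<string, DateTime>();

    public ProcessedIpStore(string path)
    {
        _path = path;
        if (File.Exists(_path))
        {
            // Each line is "<ip>|<timestamp>".
            foreach (var line in File.ReadAllLines(_path))
            {
                var parts = line.Split('|');
                if (parts.Length == 2 && DateTime.TryParse(parts[1], out var when))
                    _seen[parts[0]] = when;
            }
        }
    }

    // True if the IP was already handled within the cooling-off window.
    public bool RecentlyProcessed(string ip) =>
        _seen.TryGetValue(ip, out var when) && DateTime.UtcNow - when < _coolOff;

    // Record the IP and persist the store.
    public void MarkProcessed(string ip)
    {
        _seen[ip] = DateTime.UtcNow;
        File.WriteAllLines(_path, _seen.Select(kv => $"{kv.Key}|{kv.Value:o}"));
    }
}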

How is the tool constructed? The tool can be constructed in any programming language; typically the language is chosen based on the in-house skill set so that supporting and updating the code is easier. As mentioned earlier, the function definitions are formed from the requirements. Below, I show samples of how this tool can be written in C#. I will use remote command execution tools like PsTools, XCMD etc. for the backend jobs.

  • The most important part of the tool is the "data collection" part; in my code I call this the "Worker Function". The worker function collects all the artifacts from the remote IP address. Threading is enabled so that multiple IP addresses can be tackled in a single go. The multiple-IP input can come from a static IP list, a dynamic IP list populated natively by a SIEM, or custom population using scripts or a web API.
  • The worker function begins with the variable declarations it needs (the target IP, output locations and the collected artifacts); a sketch of the worker function, including such declarations, is given after this list.
  • Before running the worker function, we need to check whether we have already processed the given IP address. As mentioned earlier, this is important to reduce noise.
  • If the IP is not already in the processed list, we go ahead and retrieve the remote hostname, the logged-in username and the logged-in domain name.
  • With these details we can cross-verify that not only the given IP address but also the user logged into it has not already been processed. Since your environment may use dynamic addressing, a user could appear on different IP addresses at different times, so cross-verification is always best to make sure we don't dig the same hole twice.
  • Once we know the given IP is new, we start collecting the required artifacts. Here we collect:
    1. Tasklist – to get the torrent process name and path along with its PID. In C# I call:
      System.Diagnostics.Process.GetProcesses(remoteIP)
    2. Netstat – to get all the active connections, open ports and the communication protocols used. This is done by executing a local shell command like the following:
      xcmd \\{IP} "netstat -anob"
    3. Registry entries – firewall registry entries, Winlogon and Run registry entries etc. can also be collected by executing a local shell command like the following:
      reg query "\\{IP}\HKLM\SYSTEM\ControlSet001\Services\SharedAccess\Parameters\FirewallPolicy\StandardProfile\AuthorizedApplications\List"
    4. Torrent files – collect the names of all torrent files on the hard drive (for forensic purposes)
    5. Define actions – once done, we can make the tool take any of the ACTIONS discussed earlier in the post.
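
To tie these steps together, here is a minimal sketch of what such a worker function could look like. It is not the original tool: the xcmd invocation, the registry path, the output locations and the helper names are assumptions for illustration (and it reuses the ProcessedIpStore sketched earlier); production use would need proper error handling plus the credentials discussed above.

using System;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

public class TorrentInvestigator
{
    private readonly string _outputRoot;          // Shared folder for collected artifacts (assumed)
    private readonly ProcessedIpStore _store;     // Cooling-off store sketched earlier

    public TorrentInvestigator(string outputRoot, ProcessedIpStore store)
    {
        _outputRoot = outputRoot;
        _store = store;
    }

    // Worker function: collects artifacts from one remote IP.
    public void Investigate(string remoteIp)
    {
        if (_store.RecentlyProcessed(remoteIp))
            return;                               // Skip IPs handled within the cooling-off window

        string caseDir = Path.Combine(_outputRoot, remoteIp);
        Directory.CreateDirectory(caseDir);

        // 1. Task list: remote process names and PIDs via the .NET API.
        var processes = Process.GetProcesses(remoteIp)
                               .Select(p => $"{p.Id}\t{p.ProcessName}");
        File.WriteAllLines(Path.Combine(caseDir, "tasklist.txt"), processes);

        // 2. Netstat: active connections via a remote execution helper (xcmd assumed to be on PATH).
        File.WriteAllText(Path.Combine(caseDir, "netstat.txt"),
                          RunLocal("xcmd", $@"\\{remoteIp} ""netstat -anob"""));

        // 3. Registry: authorized-applications firewall entries.
        File.WriteAllText(Path.Combine(caseDir, "firewall_reg.txt"),
                          RunLocal("reg",
                              $@"query ""\\{remoteIp}\HKLM\SYSTEM\ControlSet001\Services\SharedAccess\Parameters\FirewallPolicy\StandardProfile\AuthorizedApplications\List"""));

        _store.MarkProcessed(remoteIp);
        // 4/5. Torrent-file inventory and the chosen ACTION (email, ticket, remediation) would follow here.
    }

    // Runs a local command and captures its standard output.
    private static string RunLocal(string fileName, string arguments)
    {
        var psi = new ProcessStartInfo(fileName, arguments)
        {
            RedirectStandardOutput = true,
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var p = Process.Start(psi))
        {
            string output = p.StandardOutput.ReadToEnd();
            p.WaitForExit();
            return output;
        }
    }

    // Multiple IPs can be handled in parallel, as described in the first bullet above.
    public void InvestigateAll(string[] ips) =>
        Parallel.ForEach(ips, Investigate);
}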

I hope this post helps SOC analysts, operators and managers automate the SOC tasks and processes that are routine and time consuming. I have seen over the years that automation can greatly help in a high-volume security investigations environment, freeing up valuable time for qualitative threat detection.
