SIEM Product Comparison – 2016

We at Infosecnirvana.com have done several posts on SIEM. After the Dummies Guide on SIEM, we followed it up with a SIEM Product Comparison – 101 deck. That comparison was done in 2014; two years on, we are taking a fresh look at the SIEM market and comparing the leading products side by side. According to Gartner, the leaders in this space are still the following products (in no particular order):

1. HP ArcSight – Review

2. Intel Security – Review

3. IBM QRadar – Review 

4. Splunk SIEM – Review

5. LogRhythm

In the post below, we provide detailed explanations of the strengths and weaknesses of these SIEM products as evaluated in 2016. Finally, we provide a scorecard for the products based on various capabilities.

HP ArcSight: Since 2014, ArcSight has come a long way. They have added quite a few features that add to their strengths; for example, Connector load balancing was a welcome addition after several years of being requested. However, the weakness list is still the same. One thing that continues to frustrate users is that the web architecture for administration and management is not as mature as the thick client.

[Figure: HP ArcSight – strengths and weaknesses]

IBM QRadar: Since 2014, QRadar has continued to hold its pole position in product ratings and evaluations. There have not been major product announcements after QVM and Incident Forensics, other than IBM App Exchange (a Splunk-style app store approach to extensions and plugins). While the strong points of IBM QRadar still hold, weaknesses have started to crop up in the areas of operational efficiency and reliability.

[Figure: IBM QRadar – strengths and weaknesses]

Intel Security: This is one product that underwhelms when it comes to realizing its true potential. It checks all the boxes required for monitoring, with ADM, DAM, DPI, ATD and so on. However, the real problem with the erstwhile Nitro has always been stability and management overhead. Two years later, the strengths have no doubt increased, but the weaknesses still centre around reliability.

[Figure: Intel Security – strengths and weaknesses]

Splunk: This is one of the products that has gone through several changes in the past two years. Splunk has expanded its capabilities significantly in the “App for Enterprise” space with predefined security indicators, dashboards and visualizations. It has also improved support for packet capture and analysis. With the purchase of Caspida, behaviour analytics capabilities will come into Splunk. While the strengths column has grown, the weaknesses column remains the same.

[Figure: Splunk – strengths and weaknesses]

LogRhythm: The new and upcoming unified SIEM player LogRhythm has come a long way from its humble beginnings. In the past two years, LogRhythm has added several new features to its product, including but not limited to incident response and case management workflow, a centralized evidence locker, collaboration tools, risk-based profiling and behavioural analytics to identify statistical anomalies in network, user and device activity. This, combined with ease of deployment and competitive pricing, has definitely opened up the Leaders quadrant to some exciting shake-up. Let’s take a look at the strengths and weaknesses of LogRhythm.

[Figure: LogRhythm – strengths and weaknesses]

Overall Scorecard: 

Any evaluation is incomplete without a scorecard. So we have consolidated feedback from various sources and provided a weighted score for the five SIEM products reviewed above.

[Figure: Overall scorecard – weighted scores for the five SIEM products]
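
For readers curious how such a weighted score can be derived, here is a minimal Python sketch. The capability names, weights and ratings are illustrative assumptions, not the actual figures behind our scorecard.

```python
# Illustrative sketch of a weighted scorecard. The capabilities, weights
# and ratings below are made-up examples, not our actual evaluation data.
WEIGHTS = {
    "log_collection": 0.20,
    "correlation": 0.25,
    "analytics": 0.20,
    "ease_of_deployment": 0.15,
    "cost_of_ownership": 0.20,
}

# Hypothetical 1-5 ratings for two fictional products.
RATINGS = {
    "Product A": {"log_collection": 4, "correlation": 5, "analytics": 3,
                  "ease_of_deployment": 3, "cost_of_ownership": 2},
    "Product B": {"log_collection": 5, "correlation": 4, "analytics": 4,
                  "ease_of_deployment": 4, "cost_of_ownership": 3},
}

def weighted_score(ratings, weights):
    """Return a 0-5 weighted score for a single product."""
    return sum(ratings[capability] * weight for capability, weight in weights.items())

for product, ratings in RATINGS.items():
    print(f"{product}: {weighted_score(ratings, WEIGHTS):.2f}")
```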

Conclusion:

Based on the review of SIEM products done this year, we feel that innovation in the SIEM space has plateaued. Next-generation Security Analytics and Big Data technologies are slowly becoming mainstream, thereby relegating SIEM purchases to a more compliance-driven initiative.

Please share your thoughts on how you would rate the various SIEM products discussed here.

Enterprise SIEM Implementation – Building Blocks

Introduction

SIEM technology has been around for more than a decade now. We at Infosecnirvana.com have posted quite a lot on this topic over the last few years. However, as we talk to industry folks, one area that lacks clarity is the success mantra for a good SIEM implementation. Most discussions around SIEM revolve around the product space (as the vendors would have it), and seldom do people talk about “on-the-ground” approaches to a successful, product-agnostic implementation and operation.

This post aims at giving an “on-the-ground” approach to Enterprise SIEM implementation. In short, we are going to elucidate the various Enterprise SIEM Building Blocks needed for a successful implementation.

First things First

  • This is not product specific and can be applied to any SIEM product you choose to implement
  • This takes into consideration that a SIEM implementation has an enterprise-wide value proposition
  • This takes into consideration that a SIEM program is a multi-year journey that requires patience and care.

Our Mantra – “The higher you need to go, the stronger your footing should be”. 

The methodology which we would like to discuss in this post is shown diagrammatically below:

[Figure: Enterprise SIEM – Building Blocks]

As you can see, there are several components or building blocks to an Enterprise SIEM implementation. The colour coding should give you an idea of how the various aspects of SIEM implementation are grouped together. In this post, we would like to de-construct this model in detail.

Enterprise Log Management

The most important building block, the foundation of a successful SIEM implementation program, is Enterprise Log Management. While you may think that this is simple, let us assure you that it is not. Enterprise Log Management is the conduit through which you get visibility into your network. If it is not done properly, chances are you will be blind when a breach happens. There are several steps to a successful Log Management program, namely:

  • Define Assets in Scope – IT infrastructure that matters to your enterprise is in scope. The pragmatic way to scope is to start from the crown jewels (application servers, databases, etc.) and work your way out to the ingress and egress touch points (network devices). This should give you a list of infrastructure assets in scope.
  • Define and Implement Enterprise Logging Policy – Once the assets are scoped, it is important to standardize logging levels enterprise wide. Typically, a logging policy is defined keeping practicality and usability in mind: a balance between logging levels and system/storage performance ensures that security does not impact productivity and business availability (a minimal sketch of such a policy appears after this list).
  • Centralized or De-centralized Log Repository – Building a logging architecture requires foresight into how the organization is going to evolve over time. A central log repository may make sense for most organizations; however, a de-centralized, controlled log repository may make sense for some. The approach you choose determines the course that the other building blocks in our methodology take.
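
To make the logging policy idea concrete, below is a minimal Python sketch of a policy captured as data. The asset classes, log levels, retention periods and field names are assumptions for illustration, not recommendations.

```python
# Minimal sketch of an enterprise logging policy captured as data.
# Asset classes, log levels and retention periods are illustrative assumptions.
LOGGING_POLICY = {
    "crown_jewels":    {"level": "INFO",    "retention_days": 365, "forward_to_siem": True},
    "network_devices": {"level": "NOTICE",  "retention_days": 180, "forward_to_siem": True},
    "workstations":    {"level": "WARNING", "retention_days": 90,  "forward_to_siem": False},
}

def policy_for(asset_class):
    """Return the logging requirements for an asset class, defaulting to the strictest tier."""
    return LOGGING_POLICY.get(asset_class, LOGGING_POLICY["crown_jewels"])

if __name__ == "__main__":
    for asset_class in ("crown_jewels", "network_devices", "printers"):
        print(asset_class, "->", policy_for(asset_class))
```

Capturing the policy as data like this makes it easy to review with asset owners and to check automatically against what the log collectors actually receive.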

At Infosecnirvana, we talk about Enterprise Log Management in two detailed posts:

  1. What and How much to Collect – Enterprise Security Logging
  2. What to store and What to filter – Log Filtering

These posts should give you enough detail on how Enterprise Log Management should be approached and managed.

Event Correlation

Once the Enterprise Log Management foundation is laid strong, event correlation becomes simpler and more meaningful. This is where SIEM comes into the picture. SIEM, as we all know, needs data to perform correlation and event monitoring; Enterprise Log Management provides that data. There are several SIEM products out there, and we at Infosecnirvana have written several posts on the various SIEM products and how they are similar to or differ from one another.

  1. Adopting SIEM – What you need to know?
  2. A dummies guide to SIEM
  3. SIEM Product Comparison – 101
  4. Evaluating SIEM – What you need to know

The above posts should give enough information on SIEM from a technology and product angle.

Use Cases:

Once the logs are collected into a SIEM solution, putting its “correlation capabilities” to good use is the next step. The best way to do this is with use cases. Use cases, as you can see from the image above, comprise two building blocks, namely:

  1. Threat Detection Use Cases – These are the basic use cases that can be created and implemented once all the logs are collected into the SIEM. These use cases are “Rule-Based” and detect threats flagged by the infrastructure point products themselves. Correlation happens on these internal data sets. Typical examples are IDS alerts correlated with web server logs, malware alerts correlated with firewall logs, SPAM alerts correlated with endpoint logs, etc. (a minimal sketch of one such rule appears after this list).
  2. Advanced Use Cases – The next stage in the evolution of threat detection use cases is to make use of Threat Intelligence and Analytics capabilities to detect security threats and incidents, including the so-called APT-style threats. This class of use cases sits at the very top of the SIEM use case food chain and potentially crosses into “Research to Detection” territory, where innovative detection techniques need to be created and applied in the SIEM. Correlation here happens on internal and external data sets, combined with machine learning, trending and the like.
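
As referenced in the list above, here is a minimal Python sketch of what a rule-based threat detection use case boils down to: joining two event feeds on a common attribute within a time window. The event field names and the five-minute window are assumptions for the example; a real SIEM would express this in its own rule or correlation language.

```python
from datetime import datetime, timedelta

# Sketch of a rule-based use case: flag an IDS alert when the same source IP
# also shows up in web server logs within a short time window.
# Field names and the 5-minute window are illustrative assumptions.
WINDOW = timedelta(minutes=5)

ids_alerts = [
    {"src_ip": "203.0.113.10", "signature": "SQL injection attempt",
     "time": datetime(2016, 5, 1, 10, 0)},
]
web_logs = [
    {"client_ip": "203.0.113.10", "uri": "/login?id=1%27%20OR%20%271%27=%271",
     "time": datetime(2016, 5, 1, 10, 2)},
]

def correlate(alerts, logs, window=WINDOW):
    """Yield (alert, log) pairs where the same IP appears in both feeds within the window."""
    for alert in alerts:
        for log in logs:
            if (alert["src_ip"] == log["client_ip"]
                    and abs(alert["time"] - log["time"]) <= window):
                yield alert, log

for alert, log in correlate(ids_alerts, web_logs):
    print(f"Correlated incident: {alert['signature']} from {alert['src_ip']} against {log['uri']}")
```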

We at Infosecnirvana.com have done a few posts on SIEM use cases and how they can be developed. SIEM Use Cases – What you need to know? is a very popular post that has been referenced in Gartner Blogs and McAfee SOC Whitepaper.

Cyber Intelligence:

Once the use cases are in place, it is imperative that we start incorporating some of the intelligence feeds that are available. What Cyber Intelligence does for use cases is akin to what a compass does for a sailor. While most SIEM tools today offer some sort of intelligence capability, rationalizing it and making it part of daily operations is the biggest challenge. In our opinion, gathering and using Cyber Intelligence is an iterative process, as summarized below:

[Figure: Cyber Intelligence – iterative process]

It consists of the following steps:

  • Data Gathering – Data can come from open source, community, commercial or raw human intelligence sources, and gathering it requires a bit of technology integration, data management and hunting.
  • Foraging Loop – The foraging loop is essentially searching for “treasure” in the heaps of data gathered. This is a critical step in the realm of cyber intelligence. Foraging is best done by analysts who understand the organization intimately in terms of its infrastructure and software footprint.
  • Advanced Analysis – Once we have gathered the data, normalized it and filtered out all the irrelevant items, what remains at the end of the “Foraging Loop” is applicable intelligence. Analysing this using traditional techniques like reversing, sinkhole analysis, pattern recognition, etc. will yield a list of valuable Indicators of Compromise (IOCs).
  • Action & Reporting – Once the IOCs are available, we can use them to create content in our defence systems: perimeter systems, SIEM tools, administrative take-downs and so on. This is where we make sense of the gathered and analysed intelligence; the “Sensemaking Loop” is all about this.

Once we convert intelligence into actionable IOC data, it is ready to be used in SIEM use cases. Most mature organizations constantly update their monitoring infrastructure with actionable intelligence, because static IP and domain blacklists alone are no longer sufficient to detect threats.
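
As a simple illustration of using IOC data in SIEM content, here is a minimal Python sketch that matches events against an indicator list. The indicator values and event field names are made up for the example; a real deployment would feed indicators into the SIEM's own watchlists or reference sets.

```python
# Minimal sketch of matching events against Indicators of Compromise (IOCs).
# Indicator values and event field names are illustrative assumptions.
IOCS = {
    "domains": {"c2.example.net", "bad-domain.example"},
    "ips": {"198.51.100.7"},
    "hashes": {"44d88612fea8a8f36de82e1278abb02f"},
}

events = [
    {"type": "dns",   "src_ip": "10.0.0.12", "query": "c2.example.net"},
    {"type": "proxy", "src_ip": "10.0.0.15", "dest_ip": "203.0.113.50"},
]

def match_iocs(event, iocs):
    """Return the list of IOC types that the event matched."""
    hits = []
    if event.get("query") in iocs["domains"]:
        hits.append("domain")
    if event.get("dest_ip") in iocs["ips"]:
        hits.append("ip")
    if event.get("file_hash") in iocs["hashes"]:
        hits.append("hash")
    return hits

for event in events:
    hits = match_iocs(event, IOCS)
    if hits:
        print(f"IOC match ({', '.join(hits)}) for host {event['src_ip']}: {event}")
```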

Risk Analytics:

The last, and in our opinion the most mature, capability is “Risk Analytics”. While the term is generic, in SIEM parlance it takes on a very specific meaning. A SIEM with advanced use cases and cyber intelligence capability provides the most visibility into an organization’s network and assets. However, this visibility is “point-in-time” visibility; it does not provide “retrospection”. With an analytics capability, organizations can go back in time, analyse things in retrospect, identify common risk patterns over a longer period, identify outliers and so on. This is, in our opinion, the fastest-growing function in the cyber space today. When you hear terms like “Security Analytics” and “Behaviour Analytics”, they are all subsets of the larger Risk Analytics capability. Companies like Securonix, Caspida (Splunk) and Exabeam are some of the frontrunners in this space.
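
To illustrate the kind of retrospective analysis described above, here is a minimal Python sketch that flags outliers in a user's daily login counts using a simple z-score. The data and the 2.5-sigma threshold are illustrative assumptions; commercial behaviour-analytics products use far richer models.

```python
import statistics

# Sketch of retrospective outlier detection: flag days where a user's login
# volume deviates sharply from their own historical baseline.
# The counts and the 2.5-sigma threshold are illustrative assumptions.
daily_logins = [12, 14, 11, 13, 15, 12, 14, 13, 96, 12]  # day 9 is a suspicious spike

mean = statistics.mean(daily_logins)
stdev = statistics.stdev(daily_logins)

for day, count in enumerate(daily_logins, start=1):
    z = (count - mean) / stdev if stdev else 0.0
    if abs(z) > 2.5:
        print(f"Day {day}: {count} logins (z-score {z:.1f}) - flag for retrospective review")
```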

Conclusion:

As you can see throughout the post, the approach is layered and stage-wise. It aims at providing a structured, organic growth path towards generating complete value from your SIEM implementation. Please feel free to share your thoughts on this.

Until Next time….Ciao!!!

CSIRT Series – Introduction

Incident Response is a key component of any organization serious about Cyber Security. However, many organizations are faced with the challenge of building and maintaining an “efficient” IR function or CSIRT. In our definition, an IR function is a perfect amalgamation of three major things – well-defined processes, qualified people and appropriate tools & technologies. At InfosecNirvana, we have posted several things related to SIEM, Security Investigation and Log Management; however, we have not spent much time on the IR side of things. This blog post aims to introduce our take on how an IR function should be built.

Our IR Framework:

There are several IR frameworks on the internet; the most popular ones are the NIST framework and the SANS framework. Though the approaches are similar, they differ in practice. Hence, we have tried to build a very generic framework that can be used by any organization that wants to set up an IR function. The framework is as below:

The IR framework depicted here consists of 6 major functions. They are as follows:

  1. Incident Detection – You can only respond to what you can see.
  2. Incident Classification – Know where you are going, what you are dealing with.
  3. Incident Handling – Handle with care
  4. Incident Containment – Stop the bleeding
  5. Incident Recovery – Get it back up and running
  6. Continuous Improvement – Never stop learning and improving

Each of the functions listed above has a heady mix of Process, People and Technology. Several organizations have varied definitions for each of these functions, but in this post, we are trying to make them as generic and all-encompassing as possible. Since a single post would not do justice to the readers, we have decided to split this into several sections for easy access and readability. Below are the links that explore each of these functions in detail. Feel free to comment in these individual sections so that discussions stay on topic.

Until next time… CIAO!!!!