SIEM technology has been around for more than a decade now. We at Infosecnirvana.com have posted quite a lot on this topic over the last few years. However, in our conversations with industry folks, one area that still lacks clarity is the success mantra for a good SIEM implementation. Most discussions around SIEM revolve around the product space (as the vendors will rightly steer them), but people seldom talk about “on-the-ground” approaches to a successful, product-agnostic implementation and operations.
This post aims to give an “on-the-ground” approach to Enterprise SIEM implementation. In short, we are going to elucidate the various Enterprise SIEM building blocks needed for a successful implementation.
First things First
- This approach is not product specific and can be applied to any SIEM product you choose to implement
- It assumes that a SIEM implementation has an enterprise-wide value proposition
- It assumes that a SIEM program is a multi-year journey that requires patience and care.
Our Mantra – “The higher you need to go, the stronger your footing should be”.
The methodology which we would like to discuss in this post is shown diagrammatically below:
As you can see, there are several components or building blocks to an Enterprise SIEM implementation. The colour coding should give you an idea of how the various aspects of SIEM implementation are grouped together. In this post, we would like to de-construct this model in detail.
Enterprise Log Management
The most important building block, and the foundation of a successful SIEM implementation program, is Enterprise Log Management. While you may think this is simple, let us assure you that it is not. Enterprise Log Management is the conduit through which you gain visibility into your network. If it is not done properly, you risk being blind when a breach happens. There are several steps to a successful Log Management program, namely:
- Define Assets in Scope – IT infrastructure that matters to your enterprise is in scope. The pragmatic way to scope is to start from the Crown jewels (application servers, databases etc.) and work your way to the Ingress and Egress touch points (network devices). This should give you a list of infrastructure assets in scope.
- Define and Implement Enterprise Logging Policy – Once the assets are scoped, it is important to standardize logging levels enterprise-wide. Typically, a logging policy is defined keeping practicality and usability in mind. What we mean is that striking a balance between logging levels and system/storage performance ensures that security does not impact productivity and business availability.
- Centralized or De-centralized Log Repository – Building a logging architecture requires foresight into how the organization is going to evolve over time. A central log repository may make sense for most organizations; however, a de-centralized, controlled log repository may make sense for some. The approach you choose determines the course which the other building blocks in our methodology take.
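One way to make a logging policy concrete and auditable is to express it as data rather than prose. The sketch below is a minimal, hypothetical example: the asset classes, log levels, and retention periods are illustrative assumptions, not a recommendation for any specific environment.

```python
# A minimal sketch of an enterprise logging policy expressed as data.
# Asset classes, levels, and retention values are hypothetical examples.
LOGGING_POLICY = {
    # asset class: (minimum log level, retention in days, forward to SIEM?)
    "crown_jewel_db":   ("INFO",    365, True),
    "app_server":       ("INFO",    180, True),
    "perimeter_device": ("WARNING", 365, True),
    "workstation":      ("WARNING",  30, False),
}

def policy_for(asset_class: str):
    """Return the policy tuple for an asset class, falling back to the
    most restrictive (workstation) policy when the class is unknown."""
    return LOGGING_POLICY.get(asset_class, LOGGING_POLICY["workstation"])
```

Keeping the policy in one reviewable structure like this makes it easy to verify that every in-scope asset class has an agreed level and retention, and to feed the same values into collector configurations.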
At Infosecnirvana, we talk about Enterprise Log Management in two detailed posts:
- What and How much to Collect – Enterprise Security Logging
- What to store and What to filter – Log Filtering
These posts should give you enough details on how Enterprise Log management should be approached and how they should be managed.
Once the Enterprise Log Management foundation is laid strong, event correlation becomes simpler and more meaningful. This is where SIEM comes into the picture. SIEM, as we all know, needs data to perform correlation and event monitoring, and Enterprise Log Management provides that data. There are several SIEM products out there, and we at Infosecnirvana have written several posts on the various SIEM products and how they are similar to or differ from one another.
- Adopting SIEM – What you need to know?
- A dummies guide to SIEM
- SIEM Product Comparison – 101
- Evaluating SIEM – What you need to know
The above posts should give enough information on SIEM from a technology and product angle.
Once the logs are collected and correlated in a SIEM solution, putting the “correlation capabilities” to good use is the next step. The best way to do this is with use cases. Use cases, as you can see from the image above, comprise two building blocks, namely:
- Threat Detection Use Cases – These are the basic use cases that can be created and implemented once all the logs are collected into the SIEM. These use cases are “rule-based” and detect threats reported by the infrastructure point products themselves. Correlation happens based on these internal data sets. Typical examples are IDS alerts correlated with web server logs, malware alerts correlated with firewall logs, SPAM alerts correlated with endpoint logs, etc.
- Advanced Use Cases – The next stage in the evolution of threat detection use cases is to make use of Threat Intelligence and analytics capability in detecting security threats and incidents, including so-called APT-style threats. This class of use cases sits at the very top of the SIEM use case food chain and potentially crosses into “research to detection” territory, where innovative detection techniques need to be created and utilized in SIEM for threat detection. Correlation here happens based on internal and external data sets combined with machine learning, trending, etc.
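To make the rule-based idea tangible, here is a deliberately simplified sketch of one such correlation: flag a source IP when an IDS alert and an allowed firewall connection from the same IP fall within a short time window. The field names, events, and the five-minute window are illustrative assumptions, not any product's actual rule syntax.

```python
from datetime import datetime, timedelta

# Illustrative time window for pairing an IDS alert with a firewall event.
WINDOW = timedelta(minutes=5)

def correlate(ids_alerts, fw_logs, window=WINDOW):
    """Return source IPs that triggered an IDS alert AND were allowed
    through the firewall within the given window (simplified rule)."""
    hits = []
    for alert in ids_alerts:
        for fw in fw_logs:
            if (fw["src_ip"] == alert["src_ip"]
                    and fw["action"] == "allow"
                    and abs(fw["ts"] - alert["ts"]) <= window):
                hits.append(alert["src_ip"])
                break
    return hits

# Hypothetical sample events for illustration only.
ids = [{"src_ip": "203.0.113.7", "ts": datetime(2015, 6, 1, 10, 0)}]
fw = [{"src_ip": "203.0.113.7", "action": "allow",
       "ts": datetime(2015, 6, 1, 10, 3)}]
```

A real SIEM rule would express the same join-on-IP-within-window logic in the product's correlation language, but the underlying pattern is the same.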
We at Infosecnirvana.com have done a few posts on SIEM use cases and how they can be developed. SIEM Use Cases – What you need to know? is a very popular post that has been referenced in Gartner Blogs and McAfee SOC Whitepaper.
Once the use cases are in place, it is imperative that we start incorporating some of the intelligence feeds that are available. What cyber intelligence does for use cases is akin to what a compass does for a sailor. While most SIEM tools today offer some sort of intelligence capability, rationalizing it and making it part of daily operations is the biggest challenge. In our opinion, gathering and using cyber intelligence is an iterative process, as summarized below:
It has four specific steps, namely:
- Data Gathering – Data can be open source, community, commercial or raw human intelligence and gathering this requires a bit of technology integration, data management and hunting.
- Foraging Loop – Basically, the foraging loop is nothing but searching for “treasure” in the heaps of data gathered. This is a critical step in the realm of cyber intelligence. Foraging is best done by analysts who understand the organization intimately in terms of its infrastructure and software spread.
- Advanced Analysis – Once we have gathered the data, normalized it and filtered out all the irrelevant items, what remains at the end of the “Foraging Loop” is applicable intelligence. Analysing this using traditional techniques like reversing, sinkhole analysis, pattern recognition, etc. will yield a list of valuable Indicators of Compromise (IOCs).
- Action & Reporting – Once IOCs are available, we can use them to create content in our defence systems. These can be perimeter systems, SIEM tools, administrative take-downs, etc. This is where we make sense of the gathered and analysed intelligence; the “Sensemaking Loop” is all about this.
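The “Action” step above often boils down to checking events against the IOC sets produced by analysis. The following is a toy sketch of that matching, assuming hypothetical IOC values and event field names purely for illustration.

```python
# Hypothetical IOC sets produced by the analysis phase (illustrative values).
iocs = {
    "domains": {"bad-domain.example"},
    "hashes": {"44d88612fea8a8f36de82e1278abb02f"},
}

def match_iocs(event, iocs):
    """Return which IOC types a single event (e.g. a proxy or endpoint
    record) matches, so the SIEM rule can raise a correlated alert."""
    matched = []
    if event.get("domain") in iocs["domains"]:
        matched.append("domain")
    if event.get("file_hash") in iocs["hashes"]:
        matched.append("hash")
    return matched
```

In practice the IOC sets would be refreshed automatically from your feeds, and the matching would run inside the SIEM's lookup or active-list mechanism rather than in standalone code.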
Once we convert intelligence into actionable IOC data, it is ready to be used in SIEM use cases. Most mature organizations constantly update their monitoring infrastructure with actionable intelligence, because IP and domain blacklists alone are no longer sufficient to detect threats.
The last, and in our opinion the most mature, capability is “Risk Analytics”. While the term is generic, in SIEM parlance it takes on a very specific meaning. A SIEM with advanced use cases and cyber intelligence capability provides the most visibility into an organization's network and assets. However, this is “point-in-time” visibility; it does not provide retrospection. With analytics capability, organizations can go back in time, analyse things in retrospect, identify common risk patterns over a longer period, identify outliers, etc. This is, in our opinion, the fastest growing function in the cyber space today. When you hear terms like “Security Analytics” or “Behaviour Analytics”, they are all subsets of the larger Risk Analytics capability. Companies like Securonix, Caspida (Splunk), Exabeam, etc. are some of the frontrunners in this space.
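To give a feel for the retrospective idea, here is a toy sketch that flags days whose event volume deviates strongly from a historical baseline using a simple z-score. Real analytics products use far richer behavioural models; the data and threshold here are illustrative assumptions only.

```python
import statistics

def outlier_days(daily_counts, threshold=3.0):
    """Flag days whose event count deviates from the historical mean
    by more than `threshold` population standard deviations."""
    counts = list(daily_counts.values())
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # perfectly flat history: nothing stands out
    return [day for day, count in daily_counts.items()
            if abs(count - mean) / stdev > threshold]

# Hypothetical 30-day history: steady volume, then a sudden spike
# that retrospective analysis would surface for review.
history = {"day%02d" % i: 100 for i in range(1, 30)}
history["day30"] = 900
```

The value of retrospection is exactly this: the spike may look unremarkable on the day it happens, but stands out sharply once compared against weeks of baseline.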
As you can see throughout this post, the approach is layered and stage-wise. It aims at providing a structured, organic growth path towards generating complete value from your SIEM implementation. Please feel free to share your thoughts on this.
Until Next time….Ciao!!!