
Adopting SIEM – What You Need to Know

SIEM stands for “Security Information and Event Management”.

Oh wait, I have heard of SIM, I have heard of SEM, but what is this SIEM?
Originally, the Security Information Management (SIM) and Security Event Management (SEM) systems were two different technologies performing similar but distinct functions. Gartner coined the term SIEM in 2005 to encompass both. As the name suggests, it is simply a collection of tools and technologies to manage security incidents and events. Some of the tell-tale capabilities of a typical SIEM platform are:

  1. Collect Logs from various Log Sources/Devices
  2. Store these logs for a decent amount of time
  3. Provide Fast Search/Retrieval capabilities
  4. Provide meaningful interpretation of the logs received
  5. Provide capabilities to correlate between logs of different devices
  6. Basic Ticketing/Alerting capabilities.

The first 4 points are typical of a SIM and the remaining 2 are typical of a SEM.
Any tool that does all of these is a SIEM. There are more than 50 different products that cater to the SIEM space, and just like in any other product category, they target various market segments at various price points.
If you Google for SIEM reviews, you will find a lot of information on the various products. I have worked with at least 4 SIEM vendors, and each one of them has its own pros and cons. Comparing a product in a demo and comparing it after use are two different things. So, in this blog post, I am going to highlight a few things you need to know when you are planning to adopt SIEM technology.

  1. Have a defined logging process in your environment. This is crucial because a SIEM is useless without a good logging program. A good program not only makes the SIEM implementation easier, but also gives you a measure of the log volume you are dealing with. In my experience, oftentimes, despite having an industry-leading SIEM, a poor logging program made it look pedestrian and a waste of money.
  2. Every SIEM vendor has something called a Collector/Connector/Receiver/Agent that collects logs from the devices and converts them to the vendor’s proprietary format. This conversion, or parsing as we call it, is what lets the product store data in a format it can understand and process quickly (a minimal parser sketch follows this list). Most vendors offer custom Collector/Parser development for their “unsupported” log sources, but this costs money, requires in-house skills, and may need regular maintenance. Hence, native parsing support for your log sources is better. Establish this before you move ahead with the SIEM implementation: either assign an in-house resource to build and manage such customizations, or spend more money to get the vendor to do it.
  3. Identify primary focus areas from an organizational perspective. This will help you configure your SIEM solution appropriately. These focus areas should be broadly classified and then expanded down to the ground level. For example, if your requirement is compliance, start with the control requirements, work out what logs need to be collected to fulfill them, how the integration needs to be done, and what needs to be reported, alerted on, retained, etc.
  4. Get a dedicated SIEM administrator, or train someone in-house to be that person. This is very important because, in my experience, a SIEM is only as good as its administrator. Without proper maintenance and care, it will decay over time. If you really want to generate value out of it, manage it well. By managing a SIEM I mean managing not only the system itself but also the ecosystem it resides in.
  5. Understand that a SIEM alone cannot solve all your security problems. It is NOT A MAGIC WAND. If set up and configured correctly, a SIEM can at best point you in the right direction, one where you can identify and fix several security issues in your enterprise and thereby strengthen it. So, be prepared to have a response/remediation team that will investigate the alerts generated and take appropriate action.
  6. Correlation is a vital part of SIEM offerings. Before adopting SIEM, make sure you understand, and possibly catalog, the various attack vectors and threat scenarios you want correlation to cover in your organization. This will give fair direction for the basic rules you put in place to start with. Once you are comfortable and start seeing the various alerts generated, you can experiment more. In my experience, it is best to start with the built-in rules: understand them, investigate them, tune them, and then slowly start building your own content. For more details on the various rules available in a SIEM, look at Rules Rule in SIEM Kingdom.
  7. Architecture-wise, make sure your SIEM solution works in tandem with your logging solution. Also, build your SIEM to be as modular as possible, thereby making upgrades, technology refreshes, etc. seamless.
  8. Don’t forget the filtering aspect (a filtering sketch follows this list). Correlation engines perform faster and give better results if they attack a smaller set of “known bad” logs rather than everything. This is crucial in large enterprises, as the log volume can easily overwhelm the SIEM. Note: many SIEM tools are limited in the number of events they can process, denoted in Events Per Second (EPS). Even though vendors advertise several thousands, in my experience an effective correlation system can handle only around 2000 – 5000 EPS tops; anything more will make your system painstakingly slow. So understand and work through this. Look at my posts What and How much to Collect and High Log Volume – What to Filter and What to Keep? for more on how to log, what to log, and what to filter.
  9. Remember: more processing layers means less EPS. The log collection layer will have more EPS processing capability than the correlation engine, and so on. Visualize it as a pyramid with log collection at the base and correlation at the top.
  10. Last but not least, “Stay Alert and Eager. The Logs Don’t Lie”.
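
To make point 2 above concrete, here is a minimal sketch of what a parser does, in Python and against a completely made-up log format (the field names and regex are illustrative assumptions, not any vendor’s actual schema): take a raw line from an “unsupported” source and normalize it into named fields the SIEM can store and query.

```python
import re

# Hypothetical raw line from an "unsupported" device (format is illustrative only):
# 2010-07-01 10:22:31 fw01 DENY tcp 10.1.2.3:4431 -> 203.0.113.7:80
LINE_RE = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<device>\S+) (?P<action>\S+) (?P<proto>\S+) "
    r"(?P<src_ip>[\d.]+):(?P<src_port>\d+) -> "
    r"(?P<dst_ip>[\d.]+):(?P<dst_port>\d+)"
)

def parse_line(line):
    """Normalize one raw log line into a dict of SIEM-style fields."""
    match = LINE_RE.match(line.strip())
    return match.groupdict() if match else None  # None = unparsed, flag for review

print(parse_line("2010-07-01 10:22:31 fw01 DENY tcp 10.1.2.3:4431 -> 203.0.113.7:80"))
```

A real collector does far more (timestamps, encodings, event categorization), but every custom parser boils down to this mapping, and it is this mapping you end up maintaining whenever the device’s log format changes.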
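
Similarly, for point 8, the filtering layer can be as simple as dropping events that match a known-good baseline before they ever reach the correlation engine. A minimal sketch, with made-up filter criteria:

```python
# Events not worth spending correlation-engine EPS on (criteria are illustrative).
NOISY_ACTIONS = {"ALLOW", "HEARTBEAT"}
TRUSTED_SOURCES = {"10.0.0.5"}  # e.g., an internal vulnerability scanner

def keep_event(event):
    """Return True only if the event should be forwarded to correlation."""
    if event.get("action") in NOISY_ACTIONS:
        return False
    if event.get("src_ip") in TRUSTED_SOURCES:
        return False
    return True

events = [
    {"action": "DENY", "src_ip": "10.1.2.3"},   # kept
    {"action": "ALLOW", "src_ip": "10.1.2.4"},  # dropped: noisy action
    {"action": "DENY", "src_ip": "10.0.0.5"},   # dropped: trusted scanner
]
print([e for e in events if keep_event(e)])
```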

Hope this post helped you get a fair idea of SIEM technologies. I have worked on HP ArcSight, Symantec SSIM, and Novell E-Sentinel. If you need details about them in terms of practical setup, configuration, architecture, etc., shout out and I will help as much as possible.

Research to Detection – Identify Fast Flux in your environment

So what is Fast Flux?

Fast Flux is a camouflage technique used by modern-day botnets to evade detection and IP-based blacklisting. The technique involves rapidly changing the DNS address records (A records) for a single FQDN, which means that every time you visit www.site.com, you may be connecting to a different IP address.
Detecting Fast Flux in any environment is a very difficult task. Let me explain why:

  1. Fast Flux is of two types – Single Flux and Double Flux.
  2. If Single Flux is employed, the only thing to worry about is IP address change for static domain names. A typical Fast Flux service network has several thousand A records for the same domain name, and the TTL value on every A record is very low, prompting DNS resolvers to re-query in short succession (a small resolution-sampling sketch follows this list).
  3. If Double Flux is employed, nothing is static anymore. Both the NS records and the A records change rapidly, and the NS servers are themselves compromised machines with a back-end channel to the attacker. Detecting Double Flux is twice as hard as Single Flux already is.
  4. If you think “Oh, it’s easy to identify these domains by analyzing rapidly changing DNS records”, YOU ARE WRONG. Several hosting providers employ the same pattern for web traffic load balancing, to make sure they can serve client requests quickly. So, if you were to analyze DNS records alone, you would be lost trying to separate the milk from the water.
  5. There is no right or wrong way of identifying Fast Flux networks, and research is still ongoing to find a solid solution.
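
As a quick illustration of point 2, here is a minimal sketch that samples a domain’s A records across one TTL expiry. It assumes the third-party dnspython library (2.x), and the domain is a placeholder; a stable answer proves nothing, but a changed answer on a low-TTL domain is exactly the single-flux signal described above.

```python
import time

import dns.resolver  # third-party: pip install dnspython

def snapshot(domain):
    """Resolve the domain once; return (ttl, sorted A records)."""
    answer = dns.resolver.resolve(domain, "A")
    return answer.rrset.ttl, sorted(rdata.address for rdata in answer)

domain = "www.example.com"  # placeholder: substitute a domain you suspect
ttl, first = snapshot(domain)
time.sleep(ttl + 5)  # wait out the (low) TTL so fresh records are fetched
_, second = snapshot(domain)

if set(first) != set(second):
    print(f"{domain}: A records changed across one TTL expiry ({ttl}s) -- flux-like")
else:
    print(f"{domain}: A records stable -- no single-flux signal from this sample")
```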

But the havoc several bots cause today is real. How can we bring research-based approaches to the enterprise? How can we achieve Fast Flux detection? How can we increase the effectiveness of detection with already existing tools?
In this post, I want to discuss a research-to-detection approach for identifying Fast Flux in DNS on an enterprise network. I have used Snort, ArcSight, custom scripting, etc. to elucidate my thoughts and ideas. This may not be a perfect solution, but it will do its primary job.

  1. First, we need to start logging the DNS queries happening in the network. We are interested in logging and analyzing only the outbound queries made by our enterprise DNS servers; this is less noisy than the internal requests the DNS servers receive from client machines. Remember to have a Log Management/Detection program in place.
  2. Among the queries sent by the DNS servers, we need to detect all the queries that return A records with a TTL value of < 1800 seconds. This data collection should capture the domain name, the A records, and the NS records.
  3. If possible, we can also collect the ASN for each IP address returned in the DNS response.
  4. The data collection above can be done with a three-step customization.
    1. The first step is to create a Snort rule to identify DNS queries/responses with a low TTL value. Generally, the DNS response will contain the A records, the corresponding NS records, and the TTL value.
    2. The second step is to parse the Snort output data to correctly identify the domain, the A records, and the NS records. This will mostly require a custom collector, or we can “shim” an existing File Reader collector to parse the Snort data into the respective fields (a parsing sketch follows this list).
    3. The third step is to do a recursive IP-to-ASN mapping for all the IP records returned. This can be done by running a script or a tool post-collection (an ASN lookup sketch follows this list).
  5. We can then put the parsed data into two Active Lists (ArcSight terminology for a watch list): one holding Domain/A record pairings and the other holding Domain/NS record pairings.
  6. Then, rule logic can be created to do the following (a counting sketch follows this list):
    1. For Single Flux, the logic is: one domain – a large number of distinct A records in a day.
    2. For Double Flux, the logic is: one domain – a large number of distinct A records – a large number of distinct NS records in a day.
    3. Correlating the triggers with the collected ASN data gives a clearer picture of whether a Fast Flux trigger is a false positive or not. I would personally want to investigate this data set against the ASN data set manually to begin with, so that I can determine what needs to be tightened in the rules.
    4. We can also add some tuning for dynamic-DNS (DynDNS) scenarios: a whitelist of such domains will reduce the subset of event triggers.
    5. Progressive cross-validation with Internet blacklists, spam lists, abuse lists, etc. will give the identification more muscle.
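
To make steps 4.1 and 4.2 concrete: Snort’s native alert output does not hand you decoded DNS answer fields, so assume for illustration that a small helper has already decoded each captured DNS response into a pipe-delimited line of domain|ttl|A records|NS records (that layout is my assumption, not Snort’s). The “shim” collector then only has to split each line into fields and keep the low-TTL entries:

```python
TTL_THRESHOLD = 1800  # seconds, per step 2 above

def parse_decoded(line):
    """Split one decoded DNS response line into the fields we track."""
    domain, ttl, a_csv, ns_csv = line.strip().split("|")
    return {
        "domain": domain,
        "ttl": int(ttl),
        "a_records": a_csv.split(","),
        "ns_records": ns_csv.split(","),
    }

# Hypothetical decoded input (the format is an assumption for illustration):
lines = [
    "example.com|280|192.0.2.10,192.0.2.44|ns1.example.com,ns2.example.com",
    "static-site.org|86400|198.51.100.9|ns1.static-site.org",
]
low_ttl = [r for r in map(parse_decoded, lines) if r["ttl"] < TTL_THRESHOLD]
print(low_ttl)  # only example.com survives the TTL filter
```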
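
For step 4.3, one lightweight way to do the IP-to-ASN mapping is Team Cymru’s public DNS-based lookup service: reverse the IP’s octets and query a TXT record under origin.asn.cymru.com. The sketch below assumes that service and, again, the dnspython library:

```python
import dns.exception
import dns.resolver  # third-party: pip install dnspython

def ip_to_asn(ip):
    """Map an IPv4 address to its origin ASN via Team Cymru's DNS interface."""
    reversed_ip = ".".join(reversed(ip.split(".")))
    try:
        answer = dns.resolver.resolve(f"{reversed_ip}.origin.asn.cymru.com", "TXT")
    except dns.exception.DNSException:
        return None  # unannounced, unroutable, or the lookup failed
    # TXT payload looks like: "15169 | 8.8.8.0/24 | US | arin | 1992-12-01"
    return answer[0].to_text().strip('"').split("|")[0].strip()

for ip in ["192.0.2.10", "192.0.2.44"]:
    asn = ip_to_asn(ip)
    print(ip, "->", f"AS{asn}" if asn else "no ASN found")
```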
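
And for the rule logic in step 6, the Active List bookkeeping reduces to counting distinct A and NS records per domain per day. A minimal in-memory sketch; the thresholds are pure assumptions and must be tuned against your own traffic:

```python
from collections import defaultdict

MAX_A_PER_DAY = 30   # assumed threshold -- tune before trusting
MAX_NS_PER_DAY = 10  # assumed threshold -- tune before trusting

a_seen = defaultdict(set)   # domain -> distinct A records seen today
ns_seen = defaultdict(set)  # domain -> distinct NS records seen today

def observe(record):
    """Fold one parsed low-TTL DNS record into the per-domain watch lists."""
    a_seen[record["domain"]].update(record["a_records"])
    ns_seen[record["domain"]].update(record["ns_records"])

def classify(domain):
    """Apply the Single Flux / Double Flux logic from step 6."""
    many_a = len(a_seen[domain]) > MAX_A_PER_DAY
    many_ns = len(ns_seen[domain]) > MAX_NS_PER_DAY
    if many_a and many_ns:
        return "possible Double Flux"
    if many_a:
        return "possible Single Flux"
    return "no flux signal"

observe({"domain": "example.com",
         "a_records": [f"192.0.2.{i}" for i in range(40)],
         "ns_records": ["ns1.example.com"]})
print("example.com:", classify("example.com"))  # possible Single Flux
```

Domains on the DynDNS whitelist from step 6.4 would simply be skipped before observe() is ever called.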

Remember that there are several practical pitfalls in terms of performance. Snort preprocessors can quickly become resource intensive, hence the best idea is to put some network zoning in place (with whitelisted DynDNS sites as well), thereby reducing Snort’s processing cycles. Similarly, ArcSight Active Lists and rule triggers can quickly get out of control, so it is important to manage them closely. The custom scripts/data collectors can also put some load on the servers. Once detection is in place, suitable response mechanisms can be put in place for Fast Flux networks.

Since this approach is a work in progress, I would be adding a few more notes as and when I identify something new. If you have inputs to enhance this idea, I would love to hear from you as well.


Sec-Intel with Twitter Trends

I recently came across an interesting topic on the Splunk blog – visit http://blogs.splunk.com/2010/06/23/track-twitter-world-cup-sentiment-with-splunk/. To summarize the post, there is a Twitter App available for Splunk that streams the popular topics on Twitter: it collects Twitter Trends and presents a dashboard on the Splunk front end. All it requires is Splunk with the Twitter App plugin and a Twitter account to track streams. For people without Splunk, the answer lies in customizing your Log Management solution to do something similar. The big question in my mind is: what if we used this cool feature of Splunk for Security Intelligence gathering, and from there used that contextual data to drive security investigation and analysis?

Investigation is, in most cases, reactive: driven by what is seen in the logs or what is going on in the network at a given time. Would it be possible for investigation to be driven by Security Intelligence instead? In a country’s intelligence bureau, this is exactly what happens: data from the “chatter” is used to gauge the mood and the probability of a national threat. Scaling this down to the enterprise level should, I think, be very much doable. Several organizations depend on their public face, and one rotten apple leaking material from the inside on some social media network can cause significant damage to a company’s reputation. This is where such intelligence gathering from the Internet helps. There are several implementation difficulties, but it is something worth considering on a case-by-case basis.

So what do you think? Is gleaning Security Intelligence from social media promising? What are the possible concerns regarding privacy, legislation, etc.? Could Security Intelligence with Twitter Trends be the next thing to do? Comment on!