
Episode 4: Security Investigation Series – Tackling SPAM Attacks

One of the age-old attacks on the Internet is the SPAM attack. Many organizations have been blacklisted for having acted as a SPAM relay or a SPAM source. Even though technologies have improved vastly over the past decade, SPAM is still real and users are still being enticed by it. The result is machine compromise and potential data breach. As of 2011, more than 7 trillion SPAM messages had been sent. Organizations combat SPAM in many different ways. In this Security Investigation Series episode, I am going to lay out a workflow for SPAM detection, cleanup and prevention.

Understanding SPAM: First, let us understand SPAM. SPAM is nothing but a mass of unsolicited messages sent anonymously or using fake identities. It is often the precursor of an attack and hence is one of the classic attack vectors. The two major sources of SPAM in an enterprise are

  • Email-based SPAM and
  • Instant messaging SPAM

Let us break this down even further. Email-based SPAM uses the SMTP protocol as its transport, whereas instant messaging SPAM uses a gamut of protocols, from HTTP, SIP and IMPP to XMPP. Several tools and technologies for SPAM detection and filtering work at this protocol level to identify SPAM and filter it as needed. Still, intelligent spammers can circumvent the detection and make their way to the user's mailbox. In such cases, a clear incident detection and response process is needed.

Let us take one sample scenario so that I can lay out the process flow for similar scenarios.

A Real-Life Scenario: A mail containing a password-stealer link is received in the user's inbox. It is flagged as suspected SPAM by the security devices in your enterprise. These devices can be any one or a combination of intrusion detection systems, gateway filters, SPAM filters, etc. The alerts are logged to a SIEM solution, which then correlates the various messages received and triggers an incident. If there are no security devices that do this, the SIEM can help you identify SPAM through network traffic monitoring.

Logic for good SPAM detection: In signature-based detection, it is good enough to pick up the triggers from the individual product vendors and then correlate among them. But if a SPAM message is fresh and does not match a signature pattern, only behavior-based detection will be effective. Several tools today do some behavior-based detection. Enterprises that don't have behavior-based systems can look at making the SIEM be that system. A SIEM is a powerful tool and can do trending, correlation and pattern matching. A simple rule can be written for network log correlation for protocols like SMTP/IMPP, etc. Typically, 25 SPAM messages going to different destinations within a minute is a good indication of SPAM. This value combination can be throttled (throttling is a great SIEM topic, and a classic ArcSight rule-writing and throttling example can be found at wymanstocks.com) to get a more accurate SPAM detection rate. This detection is crucial for the response to be triggered.
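To make the behavior-based idea concrete, here is a minimal Python sketch of that "25 messages to different destinations within a minute" rule. The field names and threshold are illustrative assumptions; in a real SIEM this would be a correlation rule with throttling, not a standalone script.

```python
# Sketch only: flag any internal sender that mails 25+ distinct destinations
# within a one-minute window, based on outbound SMTP log events.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
DISTINCT_DEST_THRESHOLD = 25          # the "25 messages to different destinations" value

recent = defaultdict(deque)           # sender -> deque of (timestamp, destination)

def process_event(timestamp, sender, destination):
    """Feed one outbound SMTP log event; return True if the sender looks like a spam source."""
    q = recent[sender]
    q.append((timestamp, destination))
    # Drop events that have fallen out of the one-minute window.
    while q and timestamp - q[0][0] > WINDOW_SECONDS:
        q.popleft()
    distinct_destinations = {dst for _, dst in q}
    return len(distinct_destinations) >= DISTINCT_DEST_THRESHOLD
```

The counting logic is the whole trick; throttling in the SIEM then suppresses repeat firings for the same sender so analysts see one incident instead of hundreds.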

Responding to SPAM Attacks: Once SPAM detection is done through signature- or behavior-based logic, it is important to take a series of response actions.

  • Before responding, validation obviously needs to be done to ensure we don't falsely respond to a legitimate email. This is typically done at the SIEM level itself and is the job of a Level 1 analyst.
  • Then, we need to ensure that the SPAM domain is blocked at the gateway level. This is to ensure that the SPAM does not spread from internal to external. Some SPAM mails have carefully constructed callbacks to the domain itself, and hence it is important to block it at the gateway.
  • Next, we need to ensure that SPAM spreading is controlled in the internal mail infrastructure. This can be done by putting filtering rules in Exchange to move SPAM messages to the Deleted Items folder. Similar rules can be put in the messaging server for IM as well.
  • Finally, the SPAM has to be cleaned from the individual machines. This is where it gets interesting.
  • There are three major types of users (a simple triage sketch follows this list):
    1. Many users are aware of the SPAM and hence they would not have clicked on any of the links available in the mail or the message. These user machines can be remediated by just deleting the SPAM messages.
    2. Some users who are curious would have clicked on the link and then closed it after seeing how suspicious it looks. The majority of these cases are benign and are remediated by just deleting the SPAM message. However, some targeted SPAM attacks only need the user to click the link and get redirected to a malware-dropper site. In these cases, even though the SPAM mail has been deleted, the users are at risk. Hence these user machines have to be validated for possible compromise as well.
    3. Time and again we also see users clicking on the link, keying all their data into the site and then believing that nothing wrong has happened to them. These user machines are no longer clean and have to be re-imaged straight away.
  • Once the remediation is done, the appropriate documentation needs to be carried out. As I have said before, documentation is vital in a security investigation process. Without documentation, the process will not be repeatable or efficient.
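As a rough illustration of the triage above, here is a tiny Python decision table mapping the three user types to a remediation action. The interaction labels are hypothetical names invented for the sketch, not fields any particular tool emits.

```python
# Illustrative triage mapping for the three user types described above.
REMEDIATION = {
    "did_not_click":     "delete the SPAM message",
    "clicked_link_only": "delete the SPAM message and validate the host for possible compromise",
    "submitted_data":    "re-image the host straight away",
}

def remediation_for(interaction_level: str) -> str:
    """Return the response action for a user's level of interaction with the SPAM."""
    return REMEDIATION.get(interaction_level, "escalate for manual investigation")
```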

Preventing SPAM Attacks: Preventing SPAM is the ultimate goal for every enterprise. Day in and day out, enterprise defenses are being improved to combat SPAM. However, SPAM is mostly an initialization vector, and it is ultimately in the hands of the end user to be aware of the risks involved. So, more than the tools and technologies, I would say user training is the best way to prevent SPAM. What do you think?

How do you combat SPAM in your enterprise? Sound off in the comments below.


Adopting SIEM – What you need to know?

SIEM stands for "Security Information and Event Management".

Oh wait, I have heard of SIM, I have heard of SEM. What is this SIEM?
Originally, Security Information Management (SIM) and Security Event Management (SEM) systems were two different technologies performing similar but distinct functions. Gartner coined the term SIEM in 2005 to encompass both. As the name suggests, it is nothing but a collection of tools and technologies to manage incidents and events pertaining to security alone. Some of the tell-tale capabilities of a typical SIEM platform are:

  1. Collect Logs from various Log Sources/Devices
  2. Store these logs for a decent amount of time
  3. Provide Fast Search/Retrieval capabilities
  4. Provide meaningful interpretation of the logs received
  5. Provide capabilities to correlate between logs of different devices
  6. Provide basic ticketing/alerting capabilities.

The first 4 points are typical of a SIM and the remaining 2 are typical of a SEM.
Any tool that does all of these is a SIEM. There are more than 50 different products that cater to the SIEM space. Just like any other product, they cater to various market segments at various price points.
If you Google for SIEM reviews you will get a lot of information on various products. In my experience, I have worked with at least 4 SIEM vendors. Each one of them has its own pros and cons. Evaluating a product in a demo and evaluating it after real use are two different things. So, in this blog post, I am going to highlight a few things as "what you need to know" when you are planning to adopt SIEM technology.

  1. Have a defined logging process in your environment. This is crucial because a SIEM is useless without a good logging program. This not only makes the SIEM implementation easier, but also helps in getting a measure of the log volume you are dealing with. In my experience, oftentimes, despite an organization having an industry-leading SIEM, poor log management made it look pedestrian and a waste of money.
  2. Every SIEM vendor has something called a Collector/Connector/Receiver/Agent that collects logs from the devices and converts them to the vendor's proprietary format. This conversion, or parsing as we call it, is important for the product developers to store data in a format they can understand and process quickly. Most vendors offer some form of custom collector/parser development for their "unsupported" log sources. This costs money and in-house skills, and may require regular maintenance, so native parsing support for your log sources is better. Establish this before you move ahead with SIEM implementations: either source an in-house resource to help build and manage such customizations, or spend more money to get the vendor to do it. (A minimal parser sketch appears after this list.)
  3. Identify primary focus areas from an Organizational perspective. This will help you configure your SIEM solutions appropriately. These focus areas should be broadly classified and then expanded to the ground level. For example, if your requirement is compliance, start with control requirements, see what logs need to be collected to fulfill them, see how integration needs to be done, see what needs to be reported, alerted, retained, etc.
  4. Get a dedicated SIEM administrator, or rather train someone in-house to be that person. This is very important because, in my experience, a SIEM is only as good as its administrator. Without proper maintenance and care, it will decay over time. If you really need to generate value out of it, manage it well. By managing a SIEM I mean not only the system itself but also the ecosystem it resides in.
  5. Understand that a SIEM alone cannot solve all your security problems. It is NOT A MAGIC WAND. If set up and configured correctly, a SIEM can at best point you in the right direction, a direction where you can identify and fix several security issues in your enterprise, thereby strengthening it. So be prepared to have a response/remediation team that will investigate the alerts generated and take appropriate action.
  6. Correlation is a vital part of SIEM offerings. Before adopting a SIEM, make sure you understand, and possibly catalog, the various attack vectors and threat scenarios you would want correlation to cover in your organization. This will give fair direction for the basic rules you put in place to start with. Once you are comfortable and start seeing the various alerts generated, you can experiment more. In my experience, start with built-in rules, understand them, investigate them, tune them, and then slowly start building your own content. For more details on the various rules available in a SIEM, look at Rules Rule in SIEM Kingdom.
  7. Architecture-wise, make sure your SIEM solutions are in tandem with your logging solutions. Also, build your SIEM as modularly as possible, thereby making upgrades, technology refreshes, etc., seamless.
  8. Don't forget the filtering aspect. Correlation engines will perform faster and get you better results if they are attacking a smaller set of "known bad" logs rather than everything. This is crucial in large enterprises, as the log volume can easily overwhelm the SIEM systems. Note: many SIEM tools have limitations in the number of events they can process, denoted in Events Per Second (EPS). Even though vendors advertise many thousands, an effective correlation system can handle only around 2000 – 5000 EPS tops. Anything more will make your system painstakingly slow, so understand and work through this. Look at my posts What and How much to Collect and High Log Volume – What to Filter and What to Keep? for more information on how to log, what to log and what to filter. (The parser sketch after this list also shows a simple pre-filter.)
  9. Remember: the more processing layers, the less EPS. This means that the log collection layer will have more EPS processing capability than the correlation engine, and so on. Visualize it as a pyramid with log collection at the base and correlation at the top.
  10. Last but not least, "Stay alert and eager. The logs don't lie."
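As a rough illustration of points 2 and 8, here is a minimal Python sketch of what a collector does: parse a raw log line into named fields and pre-filter before forwarding to the correlation tier. The log format, field names, and filter values are assumptions made up for the example, not any vendor's actual schema.

```python
# Sketch of a "collector": parse a syslog-style line such as
#   "Oct 12 10:15:01 fw01 DROP src=10.1.1.5 dst=8.8.8.8 proto=TCP dpt=25"
# into fields, then keep only the interesting subset.
import re
from typing import Optional

LINE = re.compile(
    r"^(?P<ts>\w{3}\s+\d+\s[\d:]+)\s(?P<host>\S+)\s(?P<action>\S+)\s(?P<rest>.*)$"
)

def parse(line: str) -> Optional[dict]:
    """Turn one raw line into a normalized event dict, or None if unsupported."""
    m = LINE.match(line)
    if not m:
        return None                      # unsupported format -> custom parser territory
    event = {"timestamp": m["ts"], "device": m["host"], "action": m["action"]}
    for kv in m["rest"].split():         # pull out key=value pairs
        if "=" in kv:
            key, value = kv.split("=", 1)
            event[key] = value
    return event

def keep(event: Optional[dict]) -> bool:
    """Pre-filter (point 8): forward only 'known bad' actions to correlation."""
    return event is not None and event.get("action") in {"DROP", "DENY", "ALERT"}
```

When a line doesn't match, that is exactly the "unsupported log source" situation described in point 2, where custom parser development (and its ongoing maintenance cost) comes in.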

Hope this post helped you get a fair idea of SIEM technologies. I have worked on HP ArcSight, Symantec SSIM and Novell E-Sentinel. If you need details about them in terms of practical setup, configuration, architecture, etc., shout out and I will help as much as possible.

Research to Detection – Identify Fast Flux in your environment

So what is Fast Flux?

Fast Flux is a camouflage technique used by modern-day bots to evade detection and IP-based blacklisting. The technique involves rapidly changing the DNS address records (A records) for a single FQDN, which means that every time you visit www.site.com, you may be connecting to a different IP address.
Detecting Fast Flux in any environment is a very difficult task. Let me explain why!

  1. Fast Flux is of two types – Single Flux and Double Flux.
  2. If Single Flux is employed, the only thing to worry about is the IP addresses changing behind static domain names. A typical Fast Flux service network would have several thousand A records for the same domain name. The TTL value for every A record is very low, thereby prompting DNS resolvers to re-query in short succession. (A small probing sketch follows this list.)
  3. If Double Flux is employed, nothing is static anymore. Both the NS records and the A records change rapidly. The NS servers are a list of compromised machines with back-end control by the attacker. Detecting Double Flux is twice as hard as Single Flux already is.
  4. If you think "oh, it's easy to identify these domains from analysis of rapidly changing DNS records," YOU ARE WRONG. For web traffic load balancing, several hosting providers employ the same behavior to ensure they can serve client requests quickly. So if you were to analyze the DNS records alone, you would be lost trying to separate the milk from the water.
  5. There is no right or wrong way of identifying Fast Flux networks, and research is still ongoing to identify a solid solution.
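Before moving to the enterprise workflow, here is a minimal sketch of the observation itself, assuming the dnspython library is installed (pip install dnspython). The domain, probe count, and TTL threshold are placeholders for illustration.

```python
# Probe a domain repeatedly and watch how its A records and TTLs behave.
import time
import dns.resolver

DOMAIN = "www.site.com"        # placeholder domain from the text above
LOW_TTL = 1800                 # seconds; same threshold used later in the post

seen_ips = set()
for _ in range(10):
    answer = dns.resolver.resolve(DOMAIN, "A")
    ips = {rdata.address for rdata in answer}
    seen_ips |= ips
    flag = "LOW TTL" if answer.rrset.ttl < LOW_TTL else ""
    print(f"TTL={answer.rrset.ttl:5d}  A records this probe={sorted(ips)}  {flag}")
    time.sleep(answer.rrset.ttl + 1)    # wait for the record to expire, then re-ask

print(f"Distinct A records observed: {len(seen_ips)}")
# A legitimate site usually converges to a small, stable set of addresses; a fast
# flux domain keeps producing new ones with very low TTLs. Load-balanced sites
# blur this line, which is exactly the difficulty described in point 4 above.
```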

But the havoc several bots cause today is real. How can we bring research-based approaches to the enterprise? How can we achieve Fast Flux detection? How can we increase the effectiveness of detection with already existing tools?
In this post, I want to discuss a research-to-detection approach for Fast Flux DNS in an enterprise network. I have used Snort, ArcSight, custom scripting, etc., to elucidate my thoughts and ideas. This may not be a perfect solution, but it would do its primary job.

  1. First, we need to start logging the DNS queries happening in the network. We are interested in logging and analyzing only the outbound queries made by our enterprise DNS servers; this is less noisy than the requests received internally from client machines. Remember to have a log management/detection program in place.
  2. Among the queries sent from the DNS servers, we need to detect all the responses that return A records with a TTL value of less than 1800 seconds. This data collection should contain the domain name, A records and NS records.
  3. If possible, we can also collect the ASN for each IP address returned in the DNS response.
  4. The data collection of the above can be done by a three-step customization.
    1. First step would be to create a Snort Rule to identify DNS queries/responses with a low TTL value. Generally, the DNS Response would have the A Records, the corresponding NS records and the TTL value.
    2. The second step of the collection would be to parse the Snort output data to correctly identify the domain, the A records and the NS records. This would mostly require a custom collector, or we can "shim" an existing file-reader collector to parse the Snort data into the respective fields.
    3. Third Step would be to do a recursive IP to ASN mapping for all the IP records returned. This can be done by running a script or a tool post collection.
  5. We can then put the parsed data into two Active lists (ArcSight Terminology for a watch list). One Active List would be a Domain/A Record pairing and the other would be a Domain/NS Record pairing.
  6. Then rule logic can be created to do the following (a standalone sketch of this logic follows the list):
    1. For Single Flux, the logic would be: one domain mapping to a large number of distinct A records in a day.
    2. For Double Flux, the logic would be: one domain mapping to a large number of distinct A records and a large number of distinct NS records in a day.
    3. Correlation with ASN data collected would give a clear picture of whether the Fast Flux trigger is False Positive or not. I would personally want to investigate this data set against ASN data set manually to begin with so that I can make a determination on what needs to be tightened for the Rules.
    4. Now we can add some tuning for DynDNS scenarios as well. Whitelisting such domains would then reduce the subset of event triggers.
    5. Progressive cross-validation with Internet blacklists, spam lists, abuse lists, etc., will give the identification more muscle.
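Here is a standalone Python sketch of the rule logic in step 6, operating on tuples already extracted by the Snort/collector pipeline. The thresholds and whitelist entries are illustrative assumptions; an ArcSight rule with Active Lists would express the same counting natively.

```python
# Track per-domain, per-day A and NS record churn and flag Single/Double Flux candidates.
from collections import defaultdict

A_RECORD_THRESHOLD = 50     # "large number of distinct A records in a day" (assumed value)
NS_RECORD_THRESHOLD = 10    # "large number of distinct NS records in a day" (assumed value)
WHITELIST = {"dyndns.org"}  # tuning for known DynDNS / load-balanced domains (example entry)

a_seen = defaultdict(set)   # (day, domain) -> distinct A records seen
ns_seen = defaultdict(set)  # (day, domain) -> distinct NS records seen

def observe(day, domain, a_records, ns_records):
    """Feed one parsed low-TTL DNS response; return 'double', 'single', or None."""
    if domain in WHITELIST:
        return None
    a_seen[(day, domain)].update(a_records)
    ns_seen[(day, domain)].update(ns_records)
    single = len(a_seen[(day, domain)]) >= A_RECORD_THRESHOLD
    double = single and len(ns_seen[(day, domain)]) >= NS_RECORD_THRESHOLD
    if double:
        return "double"     # candidate Double Flux: A and NS records both churn
    if single:
        return "single"     # candidate Single Flux: only the A records churn
    return None

# Cross-checking the flagged domains' A records against collected ASN data (step 6.3)
# is left as a manual follow-up, as described above.
```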

Remember that there are several practical pitfalls in terms of performance. Snort preprocessors can quickly become resource-intensive, so the best idea is to put some network zoning in place (along with whitelisted DynDNS sites), thereby reducing the Snort processing cycles. Similarly, ArcSight Active Lists and rule triggers can quickly go out of control, so it is important to manage them closely. The custom scripts/data collectors can also put some load on the servers. Once the detection is done, suitable response mechanisms can be put in place for Fast Flux networks.

Since this approach is a work in progress, I would be adding a few more notes as and when I identify something new. If you have inputs to enhance this idea, I would love to hear from you as well.
