Research to Detection – Identify Fast Flux in your environment

So what is Fast Flux?

Fast Flux is a camouflage technique used by modern-day bots to evade detection and IP-based blacklisting. The technique involves rapidly changing the DNS address records (A records) for a single FQDN, which means that every time you visit the domain, you connect to a different IP address.
Detecting Fast Flux in any environment is a very difficult task. Let me explain why!!!

  1. Fast Flux is of two types – Single Flux and Double Flux.
  2. If Single Flux is employed, the only thing to worry about is the IP address changing for static domain names. A typical Fast Flux service network will have several thousand A records for the same domain name. The TTL value for every A record is very low, prompting DNS resolvers to re-query in short succession.
  3. If Double Flux is employed, nothing is static anymore. Both the NS records and the A records change rapidly. The NS servers are a set of compromised machines with a back-end channel to the attacker. Detecting Double Flux is twice as hard as Single Flux already is.
  4. If you think, “Oh, it's easy to identify these domains by analyzing the rapidly changing DNS records”, YOU ARE WRONG. Several hosting providers employ the same pattern for web traffic load balancing, to ensure they can serve client requests quickly. So, if you were to analyze the DNS records alone, you would be lost trying to separate the milk from the water.
  5. There is no right or wrong way of identifying Fast Flux networks, and research is still ongoing to find a solid solution.
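To make the single/double flux distinction concrete, here is a minimal sketch in Python of the counting idea, assuming a simplified, hypothetical log format of (domain, A record, NS record, TTL) tuples; the domain names, the 1800-second cutoff, and the `flux_score` helper are all illustrative, not part of any real tool:

```python
from collections import defaultdict

# Hypothetical DNS observations: (domain, a_record, ns_record, ttl_seconds).
# In practice these would come from your DNS server logs or a sensor feed.
dns_log = [
    ("evil.example",   "10.0.0.1",   "ns1.bad.example",  120),
    ("evil.example",   "10.0.0.2",   "ns2.bad.example",  120),
    ("evil.example",   "10.0.0.3",   "ns3.bad.example",  120),
    ("static.example", "192.0.2.10", "ns.static.example", 86400),
]

def flux_score(log, ttl_limit=1800):
    """Count unique low-TTL A and NS records seen per domain."""
    ips = defaultdict(set)
    nss = defaultdict(set)
    for domain, a_rec, ns_rec, ttl in log:
        if ttl < ttl_limit:          # only short-lived records are suspicious
            ips[domain].add(a_rec)
            nss[domain].add(ns_rec)
    return {d: (len(ips[d]), len(nss[d])) for d in ips}

# Many unique A records suggests Single Flux; many unique NS records
# on top of that suggests Double Flux.
print(flux_score(dns_log))   # -> {'evil.example': (3, 3)}
```

A real feed would be far noisier than this, which is exactly why the detection approach below correlates with ASN data and whitelists.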

But the havoc several bots cause today is real. How can we bring research-based approaches to the enterprise? How can we achieve Fast Flux detection? How can we increase the effectiveness of detection with already existing tools?
In this post, I want to discuss a research-to-detection approach for Fast Flux in DNS in an enterprise network. I have used Snort, ArcSight, custom scripting, etc. to elucidate my thoughts and ideas. This may not be a perfect solution, but it will do its primary job.

  1. Firstly, we need to start logging the DNS queries happening in the network. We are interested in logging and analyzing only the outward queries made by our enterprise DNS servers; this is less noisy than the requests the DNS servers receive internally from client machines. Remember to have a log management/detection program in place.
  2. Among the responses to the queries sent by the DNS servers, we need to detect all A records with a TTL value of less than 1800 seconds. This data collection should contain the domain name, A records, and NS records.
  3. If possible, we can also collect the ASN for each IP address in the A records returned by the DNS response.
  4. The data collection of the above can be done by a three-step customization.
    1. The first step is to create a Snort rule to identify DNS queries/responses with a low TTL value. Generally, the DNS response will contain the A records, the corresponding NS records, and the TTL value.
    2. The second step of the collection is to parse the Snort output data to correctly identify the domain, IP records, and NS records. This will mostly require a custom collector, or we can “shim” an existing file reader collector to parse the Snort data into the respective fields.
    3. The third step is a recursive IP-to-ASN mapping for all the IP records returned. This can be done by running a script or a tool post-collection.
  5. We can then put the parsed data into two Active lists (ArcSight Terminology for a watch list). One Active List would be a Domain/A Record pairing and the other would be a Domain/NS Record pairing.
  6. Then a rule logic can be created to do the following:
    1. For Single Flux, the logic would be: one domain, a large number of A records in a day.
    2. For Double Flux, the logic would be: one domain, a large number of A records, and a large number of NS records in a day.
    3. Correlation with the collected ASN data will give a clearer picture of whether a Fast Flux trigger is a false positive or not. I would personally want to investigate this data set against the ASN data set manually to begin with, so that I can determine what needs to be tightened in the rules.
    4. Now we can add some tuning for DynDNS scenarios as well. This domain whitelist will then reduce the subset of event triggers.
    5. Progressive cross-validation with Internet blacklists, spam lists, abuse lists, etc. will give the identification more muscle.
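To illustrate the ASN correlation step, here is a hedged sketch of one possible heuristic: legitimately load-balanced content tends to rotate IPs inside one provider's network, while fast-flux agents (compromised home machines) scatter across many unrelated ASNs. The domain names, ASN numbers, and cutoffs below are all hypothetical, chosen only to show the idea:

```python
# Hypothetical data: IPs observed for a domain, mapped to the ASN that
# announces them (the IP-to-ASN mapping step above would produce this).
observed = {
    "cdn.example":  {"198.51.100.1": 13335, "198.51.100.2": 13335,
                     "198.51.100.3": 13335},
    "flux.example": {"10.1.2.3": 7018, "10.4.5.6": 3320,
                     "10.7.8.9": 4134, "10.2.2.2": 9121},
}

def asn_diversity(ip_to_asn):
    """Fraction of observed IPs that sit in distinct ASNs (0 to 1]."""
    return len(set(ip_to_asn.values())) / len(ip_to_asn)

def likely_fast_flux(ip_to_asn, min_ips=3, diversity_cutoff=0.5):
    # Many IPs spread over many unrelated networks -> probable fast flux;
    # many IPs inside one provider's ASN -> probable load balancing/CDN.
    return len(ip_to_asn) >= min_ips and asn_diversity(ip_to_asn) >= diversity_cutoff

for domain, mapping in observed.items():
    print(domain, likely_fast_flux(mapping))
```

In this toy data, `cdn.example` rotates three IPs within a single ASN and is not flagged, while `flux.example` spans four ASNs and is. Real cutoffs would need manual tuning against your own traffic, as noted above.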

Remember that there are several practical pitfalls in terms of performance. Snort preprocessors can quickly become resource intensive, so the best idea is to put some network zoning in place (along with the whitelisted DynDNS sites), thereby reducing the Snort processing cycles. Similarly, ArcSight Active Lists and rule triggers can quickly get out of control, so it is important to manage them closely. The custom scripts/data collectors can also put some load on the servers. Once detection is done, suitable response mechanisms can be put in place for the Fast Flux networks.
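As one example of the whitelisting mentioned above, a small suffix-match filter can drop known DynDNS/CDN zones before the heavier analysis runs; the zone names here are placeholders, not real recommendations:

```python
# Hypothetical whitelist of DynDNS / CDN zones to exclude up front.
WHITELIST = {"dyndns.example", "cdn.example"}

def is_whitelisted(fqdn, whitelist=WHITELIST):
    """True if fqdn equals, or is a subdomain of, a whitelisted zone."""
    labels = fqdn.lower().rstrip(".").split(".")
    # Check every suffix: host.a.b -> a.b -> b
    return any(".".join(labels[i:]) in whitelist for i in range(len(labels)))

queries = ["host1.dyndns.example", "www.cdn.example", "flux.bad.example"]
filtered = [q for q in queries if not is_whitelisted(q)]
print(filtered)  # -> ['flux.bad.example']
```

Running this filter as early as possible (ideally at the sensor) is what buys back the Snort processing cycles.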

Since this approach is a work in progress, I would be adding a few more notes as and when I identify something new. If you have inputs to enhance this idea, I would love to hear from you as well.


Sec-Intel with Twitter Trends

I recently came across an interesting topic on the Splunk Blog. To summarize the post: there is a Twitter app available for Splunk that lets you stream the popular topics on Twitter. It collects Twitter Trends and presents a dashboard on the Splunk front end. All it requires is Splunk with the Twitter app plugin and a Twitter account to track the streams. For people without Splunk, the answer lies in customizing your log management solution to do something similar. The big question in my mind is: “How would it be if we used this cool feature of Splunk to gather security intelligence, and from there on used this contextual data to perform security investigation and analysis?”

Investigation in most cases is very reactive, based on what is seen in the logs or what is going on in the network at a given time. Would it be possible for investigation to be driven by security intelligence? In a country's intelligence bureau, this is exactly what happens: using data from the “chatter” to gauge the mood and the probability of a national threat. Scaling it down to the enterprise level should be easy and very much doable, I guess. Several organizations depend on their “public face”. One rotten apple leaking stuff from the inside on some social media network could cause significant damage to the company's reputation. This is where such intelligence gathering from the Internet helps. There are several implementation difficulties, but it is something worth considering on a case-by-case basis.

So what do you think? Is gleaning security intelligence from social media promising? What are the possible concerns regarding privacy, legislation, etc.? Can security intelligence with Twitter Trends be the next thing to do? Comment on!!!



Rules Rule In SIEM Kingdom!!!

Are you new to SIEM?
Are you trying to write a Correlation Rule in SIEM and don’t know where to start?
Are you stuck with all the jargon in the SIEM administrator guide?
If your answer is “YES” to any one of the above questions, please continue.

Rules are the staple ingredient of any SIEM tool or technology. A rule is nothing but a set of logical instructions for a system to follow before it determines what to do and what not to do. As we all know, a SIEM is a passive system. All it does is pattern-match the logs it receives and follow the instructions on what to do (trigger) and what not to do (not trigger). This pattern matching is also called Correlation Rules or Real-time Rules. These correlation rules are nothing but “your visualization of how an attack would look in an IT infrastructure”.

Generally, Rule Writing or Rule Development is a process similar to SDLC.
It all starts with the Requirements Phase.
Requirement Phase: In this phase, the rule author should collect the exact requirements for putting a rule in place. These requirements should also state the “intent” of the rule. They should also capture the response such a rule trigger would elicit. It is at this requirements phase that the “visualization” actually starts. Remember, without a goal your rules will not mean anything.

Once you gather the requirements, you enter into the Design phase.
Design Phase: In the design phase, you do the rough skeletal layout of the rule itself. Things like,

  • What logs to use for creating this specific rule?
  • What log attribute is more suitable for rule trigger?
  • What are the various attributes to collect/represent?
  • What type of rule to write?
  • What type of alerting to be configured? (Read Email, SNMP Traps, Dashboard Trigger, Response Action etc)

are laid out in this phase. This is the most crucial phase. When it comes to selecting rule types, you need to know all the features available in your SIEM tool or technology. As a general, all-purpose guide, I would broadly classify the rules for any SIEM tool or technology into the following:

1. Single Event Rule – If conditions 1, 2, 3 up to N match, trigger. The most commonly used rule type, as it is straightforward pattern matching of Event Type, Event ID, IP, etc.
2. Many-to-One or One-to-Many Rules – If a condition matches and either one source hits several targets or several sources hit one target.
3. Cause-and-Effect Rules, “Followed by” Rules, or Sequential Rules – If Condition A matches and leads to Condition B. This typically covers “scan followed by exploit” scenarios, “password-guessing failures followed by a successful login” scenarios, etc.
4. Transitive Rules or Tracking Rules – If Condition A matches for an attacker hitting Machine A and, within some time, Machine A becomes the attacker for Machine B (again matching Condition A). This is typically used in worm/malware outbreak scenarios. Here, the target in the first event becomes the source in the second event.
5. Trending Rules – These rules track several conditions over a time period, based on thresholds. This typically applies in DoS or DDoS scenarios.
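As an illustration of rule type 3, here is a minimal sketch of a “failures followed by success” sequential rule in Python. The event format, window, and thresholds are invented for the example; any real SIEM would express this logic in its own rule language rather than in code:

```python
from collections import deque, defaultdict

# Hypothetical normalized events: (timestamp_sec, event_type, source_ip).
events = [
    (0,  "login_failure", "10.0.0.5"),
    (5,  "login_failure", "10.0.0.5"),
    (9,  "login_failure", "10.0.0.5"),
    (12, "login_success", "10.0.0.5"),
    (15, "login_success", "10.0.0.9"),
]

def followed_by_rule(events, failures=3, window=60):
    """Trigger when N login failures from one source are followed by a
    success from the same source inside the time window."""
    recent = defaultdict(deque)   # source -> timestamps of recent failures
    alerts = []
    for ts, etype, src in events:
        fails = recent[src]
        # Expire failures that fell outside the correlation window.
        while fails and ts - fails[0] > window:
            fails.popleft()
        if etype == "login_failure":
            fails.append(ts)
        elif etype == "login_success" and len(fails) >= failures:
            alerts.append((ts, src))
            fails.clear()         # reset the state after a trigger
    return alerts

print(followed_by_rule(events))  # -> [(12, '10.0.0.5')]
```

Note how the rule keeps per-source state and expires it over time; this is the same bookkeeping that makes Active Lists and rule triggers expensive at scale, which is why thresholds and windows deserve careful tuning.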

The selection among the above happens in the design phase. If the rule type selection is messed up here, you will have to revisit the requirements phase again, hence it is important to understand the options and choose wisely.

Now that we have got the requirements and the design out, we can move to the Development phase.
Development Phase: This is where we actually write the rule. Remember, once you have a logical understanding of the conditions that must match (generally combined using Boolean operators), writing a rule is very simple and straightforward (of course, you need to know the SIEM tool's menus to do so).

Next, the Testing and Deployment phases follow. Testing is critical to validate the logic involved in the rule. Simulating the rule conditions in a development/testing environment will help to iron out the chinks in the rule.

Then the Post-Implementation Phase kicks in.
Post-Implementation Phase: Once the rule is implemented, we need to manage it. By manage, I mean ensure the rule is tightened based on feedback from security analysts. This may involve adding additional conditions to the rule, whitelisting, threshold adjustments, etc. This is what makes the rule better and more efficient at achieving the “INTENT”. It typically works as an iterative cycle, where you keep going back to the rule again and again to tune it to your exact needs.

Finally, the Rule Refresh Phase is one more phase I would like to add to the mix. This is the stage where the rules you put in place may no longer be applicable or may have become obsolete, and have to be replaced by better rules. Periodic cleanup of old/obsolete rules is one of the best practices in the world of SIEM rules.

Indeed, Rules Rule, innit??

