Tag Archives: Security Investigation

Automating Security Investigations – Torrent Investigation

Across the company, you will find several users running BitTorrent, uTorrent or other P2P clients and downloading copyrighted material, which is a huge risk to the company. Beyond the copyright issue, P2P use also brings other risks like trojans, bots and malware, which could cause data leaks. Although several organizations categorize this as a policy violation, users ignore such policies and continue to download. In a previous post, Security Investigation Series – Episode 1, the details of investigating such incidents were documented. I have decided to take this to the next level. During my routine investigations I have identified several tasks done by an Operator that beg to be automated, so I decided to build a custom script to do this. If the routine tasks are automated, the Operations Analyst can concentrate on more complex analysis and tasks. Before beginning a script for automation, we need to ensure that all the required details are collected. For Torrent Investigation cases, let us see what is required:

  • An alert in the SIEM tool indicating the presence of torrent traffic in the network
  • Verify whether the alert is genuine or a false positive. Verification typically involves checking the user machine.
  • Once verified, the required artifacts need to be collected for further analysis, which validates whether the machine is compromised because of torrent use. This is vital because attackers often use torrent software as an entry point into the victim machine and from there plan their subsequent attacks.
  • If the machine is compromised, trigger a forensic process
  • If the machine is not compromised, trigger a remediation process
  • The process trigger can be either an automatically created ticket or an email/report/alert.
  • Any other custom actions specific to the enterprise

Now that we have the tasks to be automated, let us work out what the script/tool's input and output will be.

Input for the Script/Tool:

  • The IP address of the machine suspected of torrent activity. This can actually be a list of IP addresses, which saves time. Gathering IP addresses can be done in the SIEM using Active Lists (ArcSight), Lookup Tables (Symantec SSIM), query tables (other SIEM solutions), etc. that track only the IP addresses of client machines doing torrenting.
  • Additional port, protocol and destination IP information will also help to gather detailed information.
  • Credentials to connect to the machine. Typically, security teams have administrator-level access or some privileged level of access to perform security investigations in the enterprise.

Output for the Script/Tool: This tool can be made to collect different types of artifacts to ascertain that the IP in question is violating policy. These artifacts can be any one or more of the following:

  • NETSTAT information pertaining to the destination IPs and port numbers. Based on this information it is easy to identify the service, the PID, etc. for the connection itself.
  • Task list information based on the PID to find out the process name. This data, along with NETSTAT, will pinpoint exactly which torrent program is being used on the client machine.
  • The tool can also inventory the entire hard drive to retrieve a list of saved torrent filenames along with their attributes.
  • The tool can then zip all the data collected and place it in a shared folder or server location.
  • Additionally, the tool can also be made to take an “ACTION” once the investigation is complete. The ACTION can be any one of these:
    1. Just a user email sent to warn the user of the policy violation and remediation steps.
    2. Create a Support Ticket in a Ticketing system so that the respective Security Operations team can take appropriate action on the case
    3. Trigger a remediation on the machine automatically. I know this is the most intrusive but “Hey, it’s an option too right??”
  • Some intelligence can also be added to the tool. For example, the tool can store each IP it processes in its own small DB to make sure it doesn't contact that IP again for a limited period (say 3 days, depending on how strict you want the remediation process to be). This way, the noise generated by repeated IPs is eliminated. A minimal sketch of this dedupe check follows this list.
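
Here is a minimal sketch of that dedupe cache in C#, assuming a simple file-backed store; the file path, class name and the 3-day window are illustrative choices, not part of any product:

    // Skip IPs already processed within the last N days to cut repeat noise.
    using System;
    using System.IO;

    class ProcessedIpCache
    {
        static readonly string DbPath = @"C:\torrent-tool\processed-ips.txt"; // assumed location
        static readonly TimeSpan Window = TimeSpan.FromDays(3);               // tune to taste

        // Returns true if this IP was already handled within the window.
        public static bool SeenRecently(string ip)
        {
            if (!File.Exists(DbPath)) return false;
            foreach (string line in File.ReadAllLines(DbPath))
            {
                string[] parts = line.Split('|'); // stored as: ip|timestamp
                if (parts.Length == 2 && parts[0] == ip &&
                    DateTime.TryParse(parts[1], out DateTime seen) &&
                    DateTime.UtcNow - seen < Window)
                    return true;
            }
            return false;
        }

        // Record an IP after a successful collection run.
        public static void Record(string ip)
        {
            File.AppendAllText(DbPath, ip + "|" + DateTime.UtcNow.ToString("o") + Environment.NewLine);
        }
    }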

How is the tool constructed? The tool can be constructed in any programming language. Typically, the language is chosen based on the in-house skill set, so that supporting and updating the code becomes easier. As mentioned earlier, the function definitions are shaped by the requirements. Below, I would like to show samples of how this tool can be written in C#. I will be using remote command execution tools like PsTools and XCMD to handle the backend jobs.

  • The most important part of the tool is the “Data Collection” part. In my code, I call this the “Worker Function”. This worker function is the one that collects all the artifacts from the remote IP address. Threading is enabled so that multiple IP addresses can be tackled in a single go. Multiple IPs can come from a static IP list, a dynamic IP list populated natively by a SIEM, or custom population using scripts or a Web API.
  • The worker function first declares the variables it needs: the remote IP, credentials, output locations and handles for the collection commands (a consolidated sketch of the worker function follows this list).
  • Before running the worker function, we need to check whether we have already processed the given IP address. As mentioned earlier, this is important to reduce the noise.
  • If the IP is not already recorded, we go ahead and retrieve the remote host-name, the logged-in username and the logged-in domain name.
  • With the above details, we can now cross-verify that not just the given IP address but also the user logged into that IP has not already been processed. Since your environment is likely to use dynamic addressing, a user could appear at different IP addresses at different times, so cross-verification is always best to make sure we don't dig the same hole twice.
  • Once we know the given IP is new, start to collect the required artifacts. Here we are collecting:
    1. Tasklist – to get the torrent name & path along with PID. In C# I execute the function:
      System.Diagnostics.Process.GetProcesses(remoteIP)
    2. Netstat – get all the active connections, open ports and communication protocols in use. This is done by executing a local shell command like the following:
      xcmd \\{IP} "netstat -anob"
    3. Registry Entries – firewall registry entries/winlogon & Run registry entries, etc can also be collected by Executing a local shell command like the following:
      reg query "\\{ip}\HKLM\SYSTEM\ControlSet001\Services\SharedAccess\Parameters\FirewallPolicy\StandardProfile\AuthorizedApplications\List"
    4. Torrent files – collect the names of all .torrent files on the hard drive (for forensic purposes)
    5. Define Actions – once done, we can make the tool perform any of the ACTIONS discussed earlier in the post.
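
Putting the collection steps together, here is a hedged sketch of what the worker function could look like in C#. The XCMD path, the output share and the helper names are my own illustrative assumptions; error handling, the final zip step and the ACTION stage are omitted for brevity:

    // Worker function sketch: collects tasklist, netstat and firewall registry
    // data from each suspect IP. Paths and share names are assumptions.
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.IO;
    using System.Threading.Tasks;

    class TorrentWorker
    {
        const string XcmdPath = @"C:\tools\xcmd.exe";               // assumed tool location
        const string OutputShare = @"\\secops-share\torrent-cases"; // assumed evidence share

        // One worker per suspect IP; Parallel.ForEach supplies the threading
        // so multiple IPs are tackled in a single go.
        public static void Run(IEnumerable<string> suspectIps)
        {
            Parallel.ForEach(suspectIps, Collect);
        }

        static void Collect(string ip)
        {
            if (ProcessedIpCache.SeenRecently(ip)) return; // dedupe check from the earlier sketch

            string caseDir = Path.Combine(OutputShare, ip);
            Directory.CreateDirectory(caseDir);

            // 1. Tasklist: remote process list (needs admin rights on the target).
            Process[] procs = Process.GetProcesses(ip);
            File.WriteAllLines(Path.Combine(caseDir, "tasklist.txt"),
                Array.ConvertAll(procs, p => p.Id + "\t" + p.ProcessName));

            // 2. Netstat: active connections via a remote shell command.
            File.WriteAllText(Path.Combine(caseDir, "netstat.txt"),
                RunLocal(XcmdPath, @"\\" + ip + @" ""netstat -anob"""));

            // 3. Registry: the firewall authorized-applications list.
            File.WriteAllText(Path.Combine(caseDir, "firewall-reg.txt"),
                RunLocal("reg.exe", @"query ""\\" + ip +
                    @"\HKLM\SYSTEM\ControlSet001\Services\SharedAccess\Parameters\FirewallPolicy\StandardProfile\AuthorizedApplications\List"""));

            ProcessedIpCache.Record(ip);
        }

        // Runs a local command and captures its standard output.
        static string RunLocal(string exe, string args)
        {
            var psi = new ProcessStartInfo(exe, args)
            {
                RedirectStandardOutput = true,
                UseShellExecute = false
            };
            using (Process p = Process.Start(psi))
            {
                string output = p.StandardOutput.ReadToEnd();
                p.WaitForExit();
                return output;
            }
        }
    }

From here, the zip-and-ship step and the ACTION triggers described above can be bolted on.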

Hope this post helps SOC analysts, operators and managers in automating SOC tasks and processes that are routine and time-consuming. I have seen over the years that automation can greatly help in a high-volume security investigation environment, so that more valuable time can be spent on qualitative threat detection.


Episode 4: Security Investigation Series – Tackling SPAM Attacks

One of the age-old attacks seen on the Internet is the SPAM attack. Many organizations have been blacklisted for acting as a SPAM relay or a SPAM source. Even though technologies have improved vastly over the decade, SPAM is still real and users are still being enticed by it. The result is machine compromise and potential data breach. As of 2011, more than 7 trillion SPAM messages had been sent. Organizations combat SPAM in many different ways. In this Security Investigation Series episode, I am going to lay out a workflow for SPAM detection, cleanup and prevention.

Understanding SPAM: Firstly, let us understand SPAM. SPAM is nothing but a mass of unsolicited messages sent anonymously or under fake identities. It is often the precursor of an attack and hence is one of the attack vectors. The two major sources of SPAM in an enterprise are

  • Email based SPAM and
  • Instant Messaging SPAM

Let us break this down even further. Email-based SPAM uses the SMTP protocol as its transport, whereas Instant Messaging SPAM uses a gamut of protocols from HTTP and SIP to IMPP and XMPP. Several tools and technologies for SPAM detection and filtering work at this protocol level, identifying SPAM and filtering it as needed. Still, intelligent spammers can circumvent the detection and make their way to the user's mailbox. In such cases, a clear incident detection and response process is needed.

Let us take one sample scenario so that I can layout the process flow for similar scenarios.

A Real-Life Scenario: A mail containing a password-stealer link is received in the user's inbox. It is flagged as suspected SPAM by the security devices in your enterprise; these can be any one or a combination of intrusion detection systems, gateway filters, SPAM filters, etc. The alerts are logged to a SIEM solution, which then correlates the various messages received and triggers an incident. If there are no security devices that do this, the SIEM can help you identify SPAM through network traffic monitoring.

Logic for good SPAM detection: In signature-based detection, it is good enough to pick up the triggers from the individual product vendors and then correlate among them. But if a SPAM message is fresh and has no signature pattern yet, only behavior-based detection will be effective. Several tools today do some behavior-based detection; enterprises that don't have such systems can look at making the SIEM be that system. A SIEM is a powerful tool and can do trending, correlation and pattern matching. A simple rule can be written for network log correlation for protocols like SMTP, IMPP, etc. Typically, 25 SPAM messages going to different destinations within a minute is a good indication of SPAM. This value combination can be throttled (throttling is a great SIEM topic, and a classic ArcSight rule-writing and throttling example can be found at wymanstocks.com) to get a more accurate SPAM detection rate. This detection is crucial for the response to be triggered; a standalone sketch of the rule logic follows.
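
As a standalone illustration of that rule logic outside any SIEM, here is a hedged C# sketch. The 25-message/one-minute values mirror the numbers above; the event fields and class names are my own assumptions:

    // Flags a source that sends 25+ suspected SPAM messages to distinct
    // destinations within a one-minute sliding window.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class SpamBurstDetector
    {
        static readonly TimeSpan Window = TimeSpan.FromMinutes(1);
        const int Threshold = 25;

        // source IP -> recent (timestamp, destination) observations
        readonly Dictionary<string, List<(DateTime Time, string Dest)>> seen =
            new Dictionary<string, List<(DateTime Time, string Dest)>>();

        // Feed one SMTP/IM log event; returns true when the source trips the rule.
        public bool Observe(string sourceIp, string destination, DateTime time)
        {
            if (!seen.TryGetValue(sourceIp, out var events))
                seen[sourceIp] = events = new List<(DateTime Time, string Dest)>();

            events.Add((time, destination));
            events.RemoveAll(e => time - e.Time > Window); // slide the window forward

            // "Different destinations" means we count distinct recipients only.
            return events.Select(e => e.Dest).Distinct().Count() >= Threshold;
        }
    }

Throttling, as mentioned above, would then suppress repeat triggers from the same source for a cool-down period.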

Responding to SPAM Attacks: Once the SPAM detection is done through signature or behavior based logs, it is important to take a series of responsive actions.

  • Before responding, validation obviously needs to be done to ensure we don't falsely respond to a legitimate email. This is typically done at the SIEM level itself and is the job of a Level 1 Analyst.
  • Then we need to ensure that the SPAM domain is blocked at the gateway level, so that the SPAM does not spread from internal to external. Some SPAM mails have carefully constructed callbacks to the domain itself, and hence it is important to block it at the gateway.
  • Next, we need to ensure that the spread of SPAM is controlled in the internal mail infrastructure. This can be done by putting filtering rules in Exchange to move SPAM messages to the Deleted Items folder (a programmatic sketch of such a sweep follows this list). Similar rules can be put in the messaging server for IM.
  • Finally, the SPAM has to be cleaned from the individual machines. This is where it gets interesting.
  • There are three major types of users:
    1. Many users are aware of the SPAM and hence they would not have clicked on any of the links available in the mail or the message. These user machines can be remediated by just deleting the SPAM messages.
    2. Some users who are curious would have clicked on the link and then closed it after noticing something suspicious. The majority of these cases are benign and get remediated by just deleting the SPAM message. But some targeted SPAM attacks only need the user to click the link, which redirects to a malware dropper site. In those cases, even though the SPAM mail has been deleted, the users are at risk, and these machines have to be validated for possible compromise as well.
    3. Time and again we also see users clicking on the link, keying all their data into the site and then feeling that nothing wrong has happened. These machines are no longer clean and have to be re-imaged straight away.
  • Once the remediation is done, the appropriate documentation needs to be carried out. Again, as I said, documentation is vital in a Security Investigation process. Without documentation, the process is neither repeatable nor efficient.
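
On the Exchange point above: the filtering itself is normally configured as transport rules, but purely to illustrate the mailbox-sweep idea, here is a hedged sketch using the EWS Managed API. The service URL, the credentials and the campaign subject are placeholder assumptions:

    // Illustrative sweep: move messages matching a known SPAM subject from a
    // mailbox's Inbox to Deleted Items via Exchange Web Services.
    using System;
    using Microsoft.Exchange.WebServices.Data;

    class SpamSweep
    {
        static void Main()
        {
            var service = new ExchangeService(ExchangeVersion.Exchange2010_SP2)
            {
                Credentials = new WebCredentials("secops-svc", "password"),  // assumed account
                Url = new Uri("https://mail.example.com/EWS/Exchange.asmx")  // assumed endpoint
            };

            // Find inbox items whose subject matches the known SPAM campaign.
            var filter = new SearchFilter.ContainsSubstring(
                ItemSchema.Subject, "Your account needs verification");      // assumed subject
            FindItemsResults<Item> results =
                service.FindItems(WellKnownFolderName.Inbox, filter, new ItemView(100));

            foreach (Item item in results)
                item.Delete(DeleteMode.MoveToDeletedItems); // soft delete, recoverable by the user
        }
    }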

Preventing SPAM Attacks: Preventing SPAM is the ultimate goal for every enterprise. Day in and day out, enterprise defenses are being improved to combat SPAM. However, SPAM is mostly an initial attack vector, and hence it is in the hands of the end user to be aware of the risks involved. So, more than the tools and technologies, I would say user training is the best way to prevent SPAM. What do you think?

How do you combat SPAM in your enterprise? Sound off in the comments below


Research to Detection – Identify Fast Flux in your environment

So what is Fast Flux?

Fast Flux is a camouflage technique used by modern-day bots to evade detection and IP-based blacklisting. The technique involves rapidly changing the DNS address records (A records) for a single FQDN, which means that every time you visit www.site.com, you may be connecting to a different IP address.
Detecting Fast Flux in any environment is a very difficult task. Let me explain why!

  1. Fast Flux is of two types – Single Flux and Double Flux.
  2. If Single Flux is employed, the only thing to worry about is the IP address change for static domain names. A typical Fast Flux service network would have several thousand A records for the same domain name. The TTL value for every A record is very low, prompting DNS resolvers to re-query in short succession.
  3. If Double Flux is employed, nothing is static anymore. Both the NS records and the A records change rapidly. The NS servers are a list of compromised machines with a back-end channel to the attacker. Detecting Double Flux is twice as hard as Single Flux already is.
  4. If you think “Oh, it's easy to identify these domains by analyzing rapidly changing DNS records”, YOU ARE WRONG. Several hosting providers employ the same pattern for web traffic load balancing, to ensure they can serve client requests quickly. So if you were to analyze the DNS records alone, you would be lost trying to separate milk from water.
  5. There is no right or wrong way of identifying Fast Flux networks, and research is still ongoing to identify a solid solution.
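
To make point 2 concrete, a quick hedged check in C#: resolve the same name repeatedly and watch the returned A records rotate. The domain and interval are placeholders, and your resolver's caching can mask changes until the TTL expires:

    // Resolve a suspect domain a few times and print the A records returned.
    using System;
    using System.Net;
    using System.Threading;

    class FluxPeek
    {
        static void Main()
        {
            for (int i = 0; i < 5; i++)
            {
                IPAddress[] addrs = Dns.GetHostAddresses("www.site.com"); // placeholder domain
                Console.Write(DateTime.UtcNow.ToString("T") + ":");
                foreach (IPAddress a in addrs) Console.Write(" " + a);
                Console.WriteLine();
                Thread.Sleep(TimeSpan.FromMinutes(5)); // wait out the short TTL before re-querying
            }
        }
    }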

But the havoc several bots cause today is real. How can we bring research-based approaches to the enterprise? How can we achieve Fast Flux detection? How can we increase the effectiveness of detection with already existing tools?
In this post, I want to discuss a Research to Detection approach for Fast Flux DNS activity in an enterprise network. I have used Snort, ArcSight, custom scripting, etc. to elucidate my thoughts and ideas. This may not be a perfect solution, but it does its primary job.

  1. Firstly, we need to start logging the DNS queries happening in the network. We are interested in logging and analyzing only the outbound queries from our enterprise DNS servers; this is less noisy than the internal requests received by the DNS servers from client machines. Remember to have a Log Management/Detection program in place.
  2. From the responses to those queries, we need to detect all A records returned with a TTL value of < 1800 seconds. This data collection should contain the domain name, A records and NS records.
  3. If possible, we can also collect the ASN records for the A-record IPs returned in the DNS responses.
  4. The data collection of the above can be done by a three-step customization.
    1. The first step would be to create a Snort rule to identify DNS queries/responses with a low TTL value. Generally, the DNS response would have the A records, the corresponding NS records and the TTL value.
    2. The second step of the collection would be to parse the Snort output data to correctly identify the domain, the IP records and the NS records. This would mostly require a custom collector, or we can “shim” an existing File Reader collector to parse the Snort data into the respective fields.
    3. The third step would be to do a recursive IP-to-ASN mapping for all the IP records returned. This can be done by running a script or a tool post-collection (a small lookup sketch follows this list).
  5. We can then put the parsed data into two Active Lists (ArcSight terminology for a watch list). One Active List would be a Domain/A Record pairing and the other a Domain/NS Record pairing.
  6. Then a rule logic can be created to do the following (a standalone sketch of this logic follows the list):
    1. For Single Flux, the logic would be: one domain – a large number of A records in a day.
    2. For Double Flux, the logic would be: one domain – a large number of A records – a large number of NS records in a day.
    3. Correlation with the collected ASN data would give a clear picture of whether a Fast Flux trigger is a false positive or not. I would personally want to investigate this data set against the ASN data set manually to begin with, so that I can determine what needs to be tightened in the rules.
    4. We can also add some tuning for DynDNS scenarios. This whitelisted domain list would then reduce the subset of event triggers.
    5. Progressive cross-validation with Internet blacklists, SPAM lists, abuse lists, etc. will give the identification more muscle.
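
For the IP-to-ASN mapping in step 3 of the collection above, here is a hedged sketch querying Team Cymru's public IP-to-ASN whois service (whois.cymru.com, TCP port 43); the " -v" verbose query form is theirs, and error handling is omitted:

    // Maps an IP address to its ASN via Team Cymru's whois service.
    using System;
    using System.IO;
    using System.Net.Sockets;

    class AsnLookup
    {
        public static string Lookup(string ip)
        {
            using (var client = new TcpClient("whois.cymru.com", 43))
            using (NetworkStream stream = client.GetStream())
            using (var writer = new StreamWriter(stream) { AutoFlush = true })
            using (var reader = new StreamReader(stream))
            {
                writer.WriteLine(" -v " + ip); // verbose: ASN, prefix, country, registry, AS name
                return reader.ReadToEnd();     // pipe-delimited table; parse fields as needed
            }
        }
    }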
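
And for the Active List pairings and rule logic in steps 5 and 6, here is the same idea sketched outside ArcSight. The daily thresholds are assumptions to be tuned against your ASN and whitelist data:

    // Tracks distinct A records and NS records per domain over a day and
    // flags domains that exceed the (assumed) flux thresholds.
    using System;
    using System.Collections.Generic;

    class FluxTracker
    {
        const int IpThreshold = 50; // assumed: "large number of A records in a day"
        const int NsThreshold = 10; // assumed: "large number of NS records in a day"

        // domain -> distinct A records / NS records seen today
        readonly Dictionary<string, HashSet<string>> aRecords = new Dictionary<string, HashSet<string>>();
        readonly Dictionary<string, HashSet<string>> nsRecords = new Dictionary<string, HashSet<string>>();

        public void Observe(string domain, string aRecord, string nsRecord)
        {
            Track(aRecords, domain, aRecord);
            Track(nsRecords, domain, nsRecord);

            int ips = aRecords[domain].Count;
            int nss = nsRecords[domain].Count;

            if (ips > IpThreshold && nss > NsThreshold)
                Console.WriteLine("[DOUBLE FLUX?] " + domain + ": " + ips + " IPs, " + nss + " NS servers");
            else if (ips > IpThreshold)
                Console.WriteLine("[SINGLE FLUX?] " + domain + ": " + ips + " IPs");
            // Per steps 6.3 and 6.4: cross-check ASN diversity and the DynDNS/CDN
            // whitelist before treating a trigger as a true positive.
        }

        static void Track(Dictionary<string, HashSet<string>> map, string key, string value)
        {
            if (!map.TryGetValue(key, out HashSet<string> set))
                map[key] = set = new HashSet<string>();
            set.Add(value);
        }
    }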

Remember that there are several practical pitfalls in terms of performance. Snort preprocessors can quickly become resource-intensive, so the best idea is to put some network zoning in place (along with whitelisted DynDNS sites), thereby reducing Snort processing cycles. Similarly, ArcSight Active Lists and rule triggers can quickly get out of control, so it is important to manage them closely. The custom scripts/data collectors can also put some load on the servers. Once detection is in place, suitable response mechanisms can be put in place for Fast Flux networks.

Since this approach is a work in progress, I will add a few more notes as and when I identify something new. If you have inputs to enhance this idea, I would love to hear from you as well.
