
Security in a Developer-First Organization

Security in an Agile, developer-first organization is a constant "tug-of-war" between "more security controls" and "more developer usability". The reason for this is primarily the need of security professionals to maintain control over developers, who are "hacky" by nature. I am a big fan of Mordac, and the comic strip below shows exactly what this tug-of-war looks like:

Developers typically use a lot of tools on their machines, such as:

  1. Build tools (iOS and Android emulators, IDEs, VMs, containers, etc.)
  2. Browsers
  3. Repository tools that update constantly
  4. Testing tools

On such machines, there is always a risk of developers being targeted for theft of source code or sensitive information. To protect these developers, security teams become overzealous and implement a raft of security tools: endpoint security solutions (AV, EDR, HIDS, etc.) and compliance tools (DLP, web proxy, patching, logging, etc.).

A typical machine with all these security tools installed and running looks something like this. The pattern holds across most enterprise organizations.

The question always is: where is the balance between security controls and developer productivity?

I gave a talk at a conference on this topic, and the slides are available here. There are various strategies we can employ to secure developer machines, or any enterprise computing asset in general. Some of the innovative approaches we can use (they are real and implementable) are described with examples. I thoroughly enjoyed working on this talk and all the associated research that went with it.


Please leave your comments below. Until next time!!!

Reverse Engineering Malware – What you need to know?

Every now and then, a nasty piece of malware raises its ugly head and wreaks havoc on enterprise infrastructure. It is often necessary to analyze the malware and understand how it works so that:

  • the impact of the malware on IT systems can be ascertained, and
  • preventative controls can be put in place so that the threat does not spread further.

In such scenarios, reverse engineering the malware becomes a requirement. Reverse engineering a piece of malware, or any unknown binary, is the process of analyzing and understanding its characteristics and behavior. Different people take different approaches, but the goal of this blog post is to give a quick little guide to malware reversing so that anyone with an inclination to pick it up can do so easily. This is, in my mind, an essential tool in the hands of a security analyst. The basic skills needed are listed below:

  • Some programming skills, or at the very least the ability to read and understand source code
  • A logical mindset capable of analyzing and interpreting the vectors used by the malware code
  • Lots and lots of patience and interest

Let us now get into the basics. We need to understand that malware programs go through several stages before they infect and compromise a machine. Typically, a malware sample does the following (a small sketch after this list shows how to check for the first stage, persistence):

  • Make itself persistent by adding its executable path to the registry, autorun locations, etc. – Exploit
  • Quietly migrate into another process so that it cannot be easily found – Masquerade
  • Deter analysis by rapidly changing its code signature – Polymorphism
  • Make connections to remote servers, sometimes to update itself and sometimes to report back to its master – Callback
  • Perform the necessary tasks on the affected system – Data Exfiltration or Zombie
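To make the persistence stage concrete, here is a minimal Python sketch that enumerates the usual Run keys where malware likes to add itself. It uses only the standard library `winreg` module, so it is Windows-specific, and the two keys listed are just the most common autorun locations, not an exhaustive set.

```python
# Minimal sketch: list entries in the common Windows "Run" autorun keys,
# where malware frequently registers itself for persistence.
import winreg

# The two most common autorun locations; real persistence hunting
# covers many more (services, scheduled tasks, RunOnce, etc.).
RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def list_autoruns():
    """Yield (value_name, command_line) pairs from the Run keys."""
    for hive, path in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue  # key absent on this machine
        with key:
            index = 0
            while True:
                try:
                    name, value, _type = winreg.EnumValue(key, index)
                except OSError:
                    break  # no more values under this key
                yield name, value
                index += 1

for name, cmd in list_autoruns():
    print(f"{name}: {cmd}")
```

Comparing this listing before and after detonating a sample is a quick way to spot a new persistence entry.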

All these malicious programs have one goal or another, but eventually they end up handing control of your machine to strangers, and potentially to outright criminals. Some common intentions of malware are listed below:

  • Steal sensitive information: keylogging, identity theft, usernames and passwords, banking information, company patents, source code, etc. (including personal data that may be on the system)
  • Access private networks
  • Perform DDoS attacks
  • Send spam
  • Hijack browsers and plant adware to perform fraud
  • Ransomware: deny users access to their own data and demand money to restore it – in other words, extortion
  • Exfiltrate data

Reverse Engineering Methodology:
This effort involves determining not only what the malware can do specifically, but also how to identify the presence of such programs on affected systems. There are many (right) ways to do this, but for something quick we shall follow the steps laid out here.

Why quick? Because in an enterprise we might not always have the time to perform a truly in-depth analysis; time is a major factor when responding to incidents of this kind.

The Setup:
To perform effective malware analysis, we need a toolkit and an environment for analysis. Some key things to take care of while setting up the environment:

  • The environment should be isolated, with no connections to the sensitive enterprise data network.
  • The environment should have its own proxy service so that the malware has no scope to spread. The proxy can be a sinkhole that simply logs the connections made.
  • Set up two sandboxes, one physical and one VM, as some malware programs are VM-aware and will only run on a physical box.
  • Make sure these sandboxes are standard images, with bare-minimum corporate patching done. This should theoretically be equivalent to the weakest link in the organization.
  • Install the tools listed below, as required for the types of analysis you plan to do.
  • Tools required: strings, IDA Pro, pmdump, Volatility Framework, UPX, PackerID, pescanner, PE Explorer, md5hash, OllyDbg, Deep Freeze, Winalysis, lp (the sketch after this list gives a taste of two of these)
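As that taste, here is a minimal Python sketch, standard library only, that reproduces two staples of the toolkit: computing the MD5 and SHA-256 of a sample (for banning and lookups) and pulling printable ASCII strings out of it, a poor man's `strings`. The path is whatever suspect binary you pass on the command line.

```python
# Minimal sketch: hash a suspect file and extract its printable strings.
import hashlib
import re
import sys

def file_hashes(path):
    """Return (md5, sha256) hex digests of the file at `path`."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

def strings(path, min_len=6):
    """Yield runs of printable ASCII at least `min_len` bytes long."""
    with open(path, "rb") as f:
        data = f.read()
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

if __name__ == "__main__":
    sample = sys.argv[1]  # path to the suspect binary
    md5, sha256 = file_hashes(sample)
    print("MD5:   ", md5)
    print("SHA256:", sha256)
    for s in strings(sample):
        print(s)
```

Embedded URLs, mutex names, and registry paths surfaced this way often feed directly into the theories discussed below.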

The Analysis: Malware analysis usually takes a two-phased approach – behavioral analysis and code analysis. Together, these two methods yield so much information that detection and response become far easier.

  • Behavioral analysis: observing the malware's interactions with its environment – network connections made, files dropped, evasive measures taken, etc. This can be identified by installing the malware, "getting infected" as you may call it.
    • Once infected, you can capture the network packets to look at the domains and IP addresses the software tries to connect to. This will help with perimeter filtering and endpoint firewalling.
    • If the malware drops files via C2, that can also be observed as part of the getting-infected process. This helps in gathering SHA and MD5 values for the dropped files and banning them from execution in endpoint solutions.
  • Code analysis: examining the code that comprises the program to infer exactly what the malware is capable of doing when executed. This does not help much with immediate response, but it is very important for forensics. Code analysis can help determine the extent of the loss, the vulnerability in the system being exploited, and so on.
  • Code analysis can be done as follows (a small static-analysis sketch follows this list):
    • First, identify whether the unknown file is protected, obfuscated, encrypted (armoring) and/or packed (the original code compressed, encrypted, or both). To do this, we can use PackerID or PE Explorer. Armoring and packing are applied in an attempt to evade signature-based malware detection and to deter static analysis. Identifying the specific packer tells you exactly what your perimeter tools are missing in terms of detection.
    • Then, with basic analysis – enumerating exports, imports, function use, syscalls, Windows APIs, mutexes, DLL dependencies, and strings, plus some grepping, using Winalysis or whichever similar tool you are comfortable with – you can come up with several theories about the file. These theories give an understanding of the attack vectors the file employs, which can help lock a system down against similar malware attempts.
    • Drilling down further into the specific attack functions and looking at the code itself can help you understand the vulnerability being exploited. This is very useful for developers fixing the holes in the software, enabling a sort of retroactive patching methodology.
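For illustration, here is a minimal sketch of that first pass of static analysis using the third-party `pefile` Python library (pip install pefile) rather than the GUI tools named above: it dumps imported DLLs and functions, and uses section names and entropy as a crude packing hint. The `sample.exe` path is a placeholder.

```python
# Minimal sketch: enumerate imports and look for packing hints in a PE file.
import pefile

def static_summary(path):
    pe = pefile.PE(path)

    # Imports: which DLLs and API calls the sample leans on.
    if hasattr(pe, "DIRECTORY_ENTRY_IMPORT"):
        for entry in pe.DIRECTORY_ENTRY_IMPORT:
            dll = entry.dll.decode(errors="replace")
            funcs = [imp.name.decode(errors="replace")
                     for imp in entry.imports if imp.name]
            print(f"{dll}: {', '.join(funcs)}")

    # Crude packing hints: telltale section names and high entropy.
    for section in pe.sections:
        name = section.Name.rstrip(b"\x00").decode(errors="replace")
        entropy = section.get_entropy()  # near 8 suggests compressed/encrypted
        hint = "  <-- possibly packed" if entropy > 7.0 or "UPX" in name else ""
        print(f"section {name}: entropy {entropy:.2f}{hint}")

static_summary("sample.exe")  # placeholder path to the suspect binary
```

Imports of functions like CreateRemoteThread or WriteProcessMemory, for example, are an immediate hint at the process-migration stage described earlier.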
Post-Analysis Steps:
  1. Once the analysis of the behavior and code aspects of the malware is done, you have lots of data at hand. Documenting the analysis is key, because future variants may use the same attack vector, the same exploit code, etc. to gain access to a machine or application.
  2. Use the documentation prepared above to compare against subsequent analyses – for example by encoding the indicators as detection rules, as in the sketch below. This will save a great deal of time in detecting and responding to future threats.
  3. A snapshot of the VM can also be retained for future reference.
  4. Destroy the analysis VM and start over again!!!
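One common way to make step 2 mechanical is to encode the indicators you documented as a YARA rule and sweep new samples with it. Here is a minimal sketch using the third-party `yara-python` package (pip install yara-python); the rule strings are placeholders standing in for indicators you actually extracted.

```python
# Minimal sketch: encode documented indicators as a YARA rule and
# match new samples against it.
import yara

RULE = r"""
rule suspected_variant
{
    strings:
        $c2  = "evil-callback.example.com"  // placeholder C2 domain
        $mtx = "Global\\EvilMutexName"      // placeholder mutex name
    condition:
        any of them
}
"""

rules = yara.compile(source=RULE)
for match in rules.match("new_sample.exe"):  # placeholder path
    print("matched rule:", match.rule)
```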
Practical Example
There will be a follow-up post with a hands-on tutorial of how it's done!!!! Keep following this blog and happy reversing!!!!
Additional Resources:


Website to get malware samples for analysis:
http://oc.gtisc.gatech.edu:8080/

Websites to assist you in malware analysis:
REMnux (Linux distribution for malware analysis) – http://zeltser.com/remnux/
ISEC Labs Anubis Tool – http://anubis.iseclab.org/
GFI Sandbox – http://www.gfi.com/malware-analysis-tool
Hex to Binary/ASCII – http://home.paulschou.net/tools/xlate/
Hex to ASCII – http://www.dolcevie.com/js/converter.html
Jsunpack – http://jsunpack.jeek.org/


APT – What you need to know?

APT – Advanced Persistent Threat – is the latest buzzword in the industry. Everyone in the security industry, professionals and businesses alike, wants to jump on the bandwagon called APT. Security product vendors are all gearing up to cater to "APT", and all their current product lines or future releases address APT in some form or other. Now the fever has spread to IT management as well, and they want their security teams to detect and prevent APT. But even though the infosec public has caught up with the term, how much thought have we put into understanding the magnitude of the problem at hand? Is it enough to just jump onto something without understanding it fully, or do we need more educated and intelligent decision-making?

Let us find out more in this post!!!!

As always, I would like to start by defining APT. This is key because once the definition is clear, all we need to do is align our thinking to it. Then I will list the flaws in our current approach to security. Finally, I will try to list as many possible solutions to the problem at hand as I can.

Defining APT:
Simply put, an APT is a security threat to the enterprise (or even the end user, for that matter) that is advanced enough in execution that traditional security filters cannot catch it outright, and persistent enough that it keeps moving from one compromised target to another, evading detection.

Is it a technology of the future? No, it is not. APT is nothing but a threat we are not trained to see. One of the main reasons APT has been so successful in many organizations is the fact that we have an outdated security strategy. For example, say we are keen on catching data exfiltration from a compromised machine. How do we do it today?

  • To start with, we look at Data Loss Prevention solutions and see which vendor is the market leader
  • Then we implement the DLP solution with basic policies for generic data loss (PDF, Word documents, XLS, source code, credit card numbers, PAN, PII, etc.)
  • We fine-tune the DLP policies for our enterprise specifically and implement detection and prevention capabilities
  • We feed the data from the DLP solution into a SIEM and alert when something of interest happens
  • In addition, or instead, IDS/IPS rules are implemented to identify data-loss traffic based on regexes, file names, etc.
  • In some cases we also look at traffic going to blacklisted domains and IPs

I am sure the majority of organizations, if not all of them, do this to identify data exfiltration. But can all those organizations say that they are safe against APT? The answer is a sad NO. The reason: the known (a policy or signature of what is bad) is a drop; the unknown (where APT operates) is an ocean. The sketch below shows just how literal this kind of "known bad" matching is. The threat landscape has evolved to exploit the unknown, but we have not evolved to detect and respond to it. What is the solution to this problem?
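For concreteness, here is a minimal sketch of what signature-style data-loss matching amounts to: a regex for card-shaped numbers plus a Luhn checksum to cut false positives. Anything an attacker encrypts, encodes, or chunks sails straight past logic like this.

```python
# Minimal sketch: regex-plus-Luhn detection of credit card numbers in
# an outbound payload -- the classic "known bad" DLP/IDS pattern.
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum used by payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_cards(payload: str):
    """Yield card-shaped digit runs that pass the Luhn check."""
    for m in CARD_RE.finditer(payload):
        digits = re.sub(r"[ -]", "", m.group())
        if luhn_ok(digits):
            yield digits

# A well-known test number is caught; a base64-encoded copy would not be.
print(list(find_cards("order ref 4111 1111 1111 1111 shipped")))
```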
There are several solutions being proposed by several people in the industry. In my opinion, one of the most important is behavior profiling and anomaly detection.

Now, what is behavior profiling?

Behavior profiling – every network, and every segment of the network, has a behavior profile that is deemed normal. Today, how many of us know what our network segments look like in terms of the connections they accept and deny, the traffic flowing within the segment, the most and least used protocols, the sizes of packets that flow, the outbound and inbound communications that happen, and who is supposed to have access in and out and who is not? I seriously doubt many do. We are more concerned with getting the system up and providing the service it is meant to provide; we seldom think about the security profile of the segment. Once we profile, we can identify several anomalies.

Let us now take the same example of data exfiltration and see how behavior profiling would help (a small anomaly-detection sketch follows the list):

  1. We would have complete details about where the sensitive data resides: the VLAN, the server, the folder, the file, the DB tables, etc.
  2. For the sensitive machine/network/data, we would know who has access and who does not.
  3. We would also track who has a copy of that data – on what machine, and where it resides (desktop, laptop, mobile), etc.
  4. Data usage is profiled by team and by individual, giving us the subset of people handling that sensitive data.
  5. Any theft of that data would be through one of the above actors/entities.
  6. Tracking the activity of each of their machines over time gives us a normal behavior profile.
  7. Corporations can also place digital markers on such sensitive data to track its use and flow.
  8. Through digital markers we can also track the periodicity of data access, the time of access, changes to the data, etc.
  9. Any deviation from the normal behavior profile is a potential data exfiltration and needs to be investigated.
  10. Behavior profiles created this way can be used in addition to signature-based detection.
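As a small illustration of point 9, here is a minimal Python sketch that treats one slice of such a profile – outbound bytes per hour for a host – as a baseline and flags hours that deviate beyond three standard deviations. The traffic history here is fabricated; in practice it would come from netflow or proxy logs.

```python
# Minimal sketch: flag hours whose outbound volume deviates wildly
# from a host's historical baseline.
from statistics import mean, stdev

def flag_anomalies(hourly_bytes_out, sigma=3.0):
    """Return indices of hours exceeding mean + sigma * stdev."""
    mu, sd = mean(hourly_bytes_out), stdev(hourly_bytes_out)
    return [i for i, v in enumerate(hourly_bytes_out)
            if v > mu + sigma * sd]

# Fabricated history: ~2 MB/hour of normal traffic, with a 40 MB
# burst at hour 20 -- the shape a bulk exfiltration might have.
history = [2_000_000 + (i % 5) * 50_000 for i in range(24)]
history[20] = 40_000_000

print(flag_anomalies(history))  # -> [20]
```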

This requires intimate coordination with various teams, and also a great understanding of what your network does and what it is supposed to do. While this is the most logical approach, it is also the most challenging to implement, and thus the most rewarding as well. Behavior profiling has been used in the intelligence community for a long time, but the technology community has yet to embrace it. Enterprise data is becoming critical, and with threats like APT, our fundamentals are being questioned.

This approach can help after the fact, but to prevent occurrences in the first place, a long-term solution is needed. From a long-term perspective, the only solution is to build networks and applications (operating systems as well as apps) from the ground up to treat security as an embedded characteristic and not an add-on feature.

What are your thoughts on APT? How do you think we should change our security thought process, technology and all, to combat it? Sound off below!!!