Category Archives: Security Learning

ArcSight CORR 6.0 – Install and Migration

ArcSight (now HP) Enterprise Security Manager (ESM) is the premier security event manager, analyzing and correlating every event in order to support the security team or analysts in every aspect of security event monitoring, from compliance and risk management to security intelligence and operations. Several versions of ArcSight ESM have been released over the years; the latest is ArcSight CORR 6.0. At InfoSecNirvana.com we have obtained a copy of the latest version and will be writing a multi-part post covering installation, migration from older versions to 6.0, and a basic walkthrough.

In this Part 1 post, we shall cover the installation of ArcSight CORR (Correlation Optimized Retention and Retrieval), the latest ArcSight ESM release from HP. CORR is a proprietary data storage and retrieval framework that receives and processes events at high rates and performs high-speed searches. With ArcSight CORR, the Oracle database is now eliminated.

CORR components:

  • ArcSight Manager
  • CORR Engine
  • ArcSight Console
  • ArcSight Web
  • Management Console
  • Smart Connectors

Requirements:

System: This completely depends on the EPS that you expect to receive. InfoSecNirvana has been working on a PoC for this, and the configuration below was used:
A VMware box with 8 cores, 32GB RAM, a 256GB SSD and a 2TB WD 7200 RPM SATA HDD (Note: for production, a higher configuration is recommended; check the ArcSight manuals for sizing guidance).

OS: Red Hat Enterprise Linux Server release 6.2 x64, installed with the xfsprogs-3.1.1-6.el6.x86_64 RPM; this is required to convert some of the ext4 file systems to XFS. XFS is the most suitable format for fully utilizing the performance enhancements that come with CORR. Typically, I would recommend formatting /opt/ with XFS and allocating the maximum storage to this partition. This is crucial because the very first step of the installation verifies that the entire /opt/ directory is on XFS. When using VMware with LVM, we faced some issues during the installation and ArcSight Support could not help us with this. However, when raw devices were mounted as /opt/ we did not face any issues.
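
Since the installer checks this up front, a quick pre-check of /opt/ can save a failed run. Below is a minimal Python sketch (assuming a standard /proc/mounts layout; it is not part of the ArcSight tooling) that reports the filesystem backing /opt/:

```python
#!/usr/bin/env python
# Quick pre-install check: report the filesystem type backing /opt/
# (a minimal sketch; the official installer performs its own validation).
import os

def fs_type(path):
    """Walk /proc/mounts and return the mount point and filesystem type of the
    longest mount point that is a prefix of the given path."""
    best_mount, best_type = "", "unknown"
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mount_point, fstype = line.split()[:3]
            if path.startswith(mount_point) and len(mount_point) > len(best_mount):
                best_mount, best_type = mount_point, fstype
    return best_mount, best_type

if __name__ == "__main__":
    mount, fstype = fs_type(os.path.realpath("/opt"))
    print("/opt is on %s (%s)" % (mount, fstype))
    if fstype != "xfs":
        print("WARNING: the CORR installer expects /opt to be on XFS")
```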

Storage: Please allocate the required storage (calculated based on number of devices, events per second, average event size and retention period). Remember, CORR is like an ESM with a built-in Logger. You can still use a Logger for long-term retention if that is what you prefer, so that the ESM stays lean and mean.
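
As a rough illustration of that sizing exercise, the sketch below multiplies events per second, average event size and retention days to get a raw storage estimate. The figures and the 10:1 compression ratio are hypothetical; use your own numbers and the vendor's sizing guidance:

```python
# Back-of-the-envelope storage estimate (all figures hypothetical)
EPS = 5000              # sustained events per second across all devices
AVG_EVENT_BYTES = 500   # average normalized event size in bytes
RETENTION_DAYS = 90     # online retention period

raw_bytes = EPS * AVG_EVENT_BYTES * 86400 * RETENTION_DAYS
print("Raw storage needed:      %.1f TB" % (raw_bytes / 1024.0 ** 4))
# Assuming a (hypothetical) 10:1 compression ratio for the on-disk footprint
print("With 10:1 compression:   %.1f TB" % (raw_bytes / 10 / 1024.0 ** 4))
```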

Permissions: The installation has to be done using a non-root account. This can be a service account named “arcsight”. This account should have RWX permissions on the /opt/ directory. Make sure this is satisfied.

Misc: The /tmp/ partition should have at least 3GB of free space, and /home/arcsight should have a minimum of 5GB free. This is crucial because the installer log files are written to these locations, and if sufficient space is not available the installation fails.
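
For convenience, here is a minimal pre-install check covering the permission and free-space points above. It is only a sketch: run it as the “arcsight” account, and adjust the paths and thresholds if your layout differs:

```python
#!/usr/bin/env python
# Pre-install sanity checks for the permission and free-space requirements above
# (a minimal sketch; run as the "arcsight" service account).
import os

GB = 1024 ** 3

def free_gb(path):
    """Free space in GB available to the current (unprivileged) user."""
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize / float(GB)

checks = [
    ("/opt has RWX for the current user", os.access("/opt", os.R_OK | os.W_OK | os.X_OK)),
    ("/tmp has at least 3 GB free",       free_gb("/tmp") >= 3),
    ("/home/arcsight has at least 5 GB free", free_gb("/home/arcsight") >= 5),
]

for description, ok in checks:
    print("[%s] %s" % ("OK  " if ok else "FAIL", description))
```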

The CORR package: Obtain the CORR installation package and the license from your HP/ArcSight sales representative.

CORR Installation:
The installation is pretty straightforward and is just a series of clicks. I have given most of the screenshots below just as a reference; obviously, if you have installed ArcSight software before, you would not even need this. Once done, you will be able to install the Console to access CORR and play around.



Once the installation is completed, we would want to test the following before we call the install complete:

  1. Validate the log files in the Manager install logs and find out if there are any warnings or errors. This is generally a best practice to ensure a valid installation (a quick log-scanning sketch follows this list).
  2. Install the Console and try to connect to the ESM with the default user name and password (mentioned in the install guide). The first time you connect, a certificate import of the Manager happens. If you use a self-signed certificate, make sure you note down the parameters used to create it, because this will help in future migrations, troubleshooting or recovery.
  3. After connecting to the console, you are ready to go.
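
For step 1, something as simple as the sketch below can surface warnings and errors quickly. The log directory used here is an assumption; point it at wherever your Manager install logs actually live:

```python
#!/usr/bin/env python
# Scan the Manager install logs for warnings and errors (step 1 above).
# The log directory below is an assumption; adjust it to your install path.
import glob

LOG_GLOB = "/opt/arcsight/manager/logs/default/*.log"   # hypothetical location

for log_file in sorted(glob.glob(LOG_GLOB)):
    with open(log_file, errors="ignore") as handle:
        for line_number, line in enumerate(handle, 1):
            if "ERROR" in line or "WARN" in line:
                print("%s:%d: %s" % (log_file, line_number, line.rstrip()))
```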

Migrations from Existing Installs – Migrating from earlier versions to this CORR instance is tricky, because you are migrating from a DB back end to a non-DB back end. I will be posting a follow-up to this post in Part 2 that will detail the migration procedure from 4.x/5.x to CORR 6.0.

Stay Tuned to InfoSecNirvana.com for more!!!

SIEM – The Good, The Bad and The Ugly – Part 2

In Part 1 of this post, we discussed the several shortcomings of SIEM that have arisen over the years. These problems need to be addressed if we are to progress further in our maturity with SIEM technology. Let me start with the main capability required for a SIEM to function – Log Management. This is where the majority of the problems are.

Log Management: The problems plaguing Log Management in a client/server model are way too many to comprehend. Centralized Log Management is the solution; however, we don’t have a sound Log Management solution today that addresses these needs. Let us see what problems exist in Log Management and how we could solve them. Log Management is broadly divided into four parts:

  1. Log Collection
  2. Log Categorization
  3. Log Storage
  4. Log Management (Making it easily searchable across large sets)

Log Collection – Client-less and Standardized: In my view, the ideal solution would be that source devices should not have any clients installed on them, should not need special formatting, and should not be taxed in terms of processing. A standard method of data collection should be set as the norm. From a higher order, an RFC or a standard should be floated that standardizes all the log data from every IT device. This standardization would help in two things – one, to improve the overall logging and auditing capabilities of devices (client-less, out of the box) in a standard format, and the other, to improve the security consciousness of application development teams. Think in terms of logging being a part of the standard protocol suite. What this would do is help interoperability between devices, be it log sources, log collectors, log managers etc., along with bringing together all the fragmented parts of existing log management under one umbrella. For example, every IDS vendor today supports SNORT formatting. Similarly, all SIEM vendors should support standard log collection and processing so that interoperability, migration between vendors etc. becomes easier. I know CEF is one such standard, but I am not sure everyone adopts it today.
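
To make the interoperability point concrete, here is a minimal sketch that splits a CEF-style line into its header fields and key=value extensions. The sample event is invented, and real CEF parsing also has to handle escaped pipes, equals signs and values containing spaces:

```python
# Minimal CEF header/extension split (illustrative only; real CEF parsing must
# also handle escaped "|" and "=" characters and values containing spaces).
def parse_cef(line):
    """Split a CEF line into its seven header fields and a dict of extensions."""
    header_names = ["version", "device_vendor", "device_product",
                    "device_version", "signature_id", "name", "severity"]
    parts = line.strip().split("|", 7)
    header = dict(zip(header_names, parts[:7]))
    header["version"] = header["version"].replace("CEF:", "")
    extensions = {}
    if len(parts) > 7:
        for token in parts[7].split(" "):
            if "=" in token:
                key, _, value = token.partition("=")
                extensions[key] = value
    return header, extensions

# Made-up sample event
sample = "CEF:0|ExampleVendor|ExampleIDS|1.0|100|Port scan detected|5|src=10.0.0.5 dst=10.0.0.9 spt=4242"
print(parse_cef(sample))
```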

Log Categorization – Security and Non-Security: When large volumes of log data come in, there is a need to separate security events from normal non-security events. The log standard should also specify categorization of events clearly as security and non-security. For every device class, the security events should be listed out and only those logs should be collected by the SIEM for correlation and incident management. Many organizations collect way too much log data (up to 100K) but effectively use only 10% of it for security purposes. That means the remaining 90% is non-security-related logs, and this junk is what eats up terabytes of space. There is a huge disparity between the Log Management tier and the SIEM tier: data collection keeps growing, whereas SIEM processing is more focused. This is where categorization helps, so that the SIEM receives only security events to process.
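
As a toy illustration of that split, the sketch below routes events whose category is security-relevant to the SIEM tier and everything else to cheaper archive storage. The category names are invented; a real deployment would take them from the log standard discussed above:

```python
# Toy illustration of the categorization idea: route security-relevant event
# categories to the SIEM tier and everything else to cheap long-term storage.
# The category names and event schema are invented for illustration.
SECURITY_CATEGORIES = {"authentication", "authorization", "malware", "ids", "firewall-deny"}

def route(event):
    """Return the destination tier for a single event dict."""
    return "SIEM" if event.get("category") in SECURITY_CATEGORIES else "archive"

events = [
    {"category": "authentication", "msg": "failed login for root"},
    {"category": "performance", "msg": "disk latency high"},
]
for event in events:
    print(route(event), "<-", event["msg"])
```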

Log Storage and Log Management – Streamlined and consistent data sets: The moment we start collecting standard data and properly categorizing it, storage, indexing and retrieval become easier. This is a space we are already good at and should continue to improve. Relying on Oracle, SQL Server or MySQL as the backend limits our capabilities when handling big data sets; Big Data storage technologies should be used instead. Storage has to be streamlined, and new indexing and searching capabilities should be thrown into the future development of Log Management tools and solutions.

Security Information and Event Management (SIEM) – Once the Log Management problems are sorted out, the SIEM problems become easier to solve. One of the major pain points in using SIEM has been the client-server architecture. When log collection becomes client-less, SIEM solutions need not focus on building log collection clients and can instead focus their energies on better correlation and intelligent data mining. Some sweeping changes that can be brought into the SIEM world are as below:

  • SIEM should be an inference engine, a correlation engine and a data mining engine only. This will bring more value in the intelligence piece of log data mining rather than just parsing and doing some basic alerting (a toy correlation rule is sketched after this list). Remember, as I always say, Security is an Intelligence Function and not an Operations Function.
  • SIEM should be able to focus only on security events for alerting, reporting and investigation. This is where log standardization and categorization play a big role. If SIEM were to process only security data, we would never be hitting more than 5K EPS in most enterprises.
  • SIEM should be a fast and agile product and not rely on backend DB queries, reports and stats usually driven using Oracle or SQL. This is something that some vendors are starting to explore. I know Novell e-Sentinel has had this capability for a long time, and HP is now trying to do a similar thing with CORR. This is, in my opinion, the right way forward.
  • The SIEM should also become a more active tool instead of the passive tool it is today. What I mean by this is that a SIEM should be able to respond to threats in a comprehensive way. It should be able to alert, do basic ITIL Service Management integration out of the box and also, if need be, execute pre-defined responses for alerts. This is helpful because many times the rules written in SIEM are basic and over a period of time become repetitive. Such repeatable alert responses can be automated into a workflow, and the system should be capable of becoming self-sufficient.
  • SIEM should get better at large data set mining. Today, SIEM management consoles are “code heavy” on the front end and “CPU heavy” on the backend. This reduces the efficiency of SIEM technologies in performing real-time correlation. A radical change is needed in the way SIEM applications are developed to make them lightning fast.
  • SIEM should also provide flexibility in terms of customizing incident detection and response templates, writing visualization rules (where you are able to chart attack vectors, vulnerable points, network maps etc.), and trending historical data in a way where the system automatically detects patterns of similar issues from the past, and so on. In short, add “Intelligence as well as Learning Capabilities” to SIEM.
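
As a toy example of the kind of correlation rule referred to in the first bullet, the sketch below alerts when one source IP produces five or more failed logins within a minute followed by a success. The event schema is invented for illustration:

```python
# A toy correlation rule: alert when one source IP generates 5+ failed logins
# within 60 seconds and then a successful login. The event schema is illustrative.
from collections import defaultdict, deque

WINDOW_SECONDS, THRESHOLD = 60, 5
failures = defaultdict(deque)   # source IP -> timestamps of recent failed logins

def process(event):
    """event: dict with 'ts' (epoch seconds), 'src' and 'outcome' fields."""
    src, ts = event["src"], event["ts"]
    window = failures[src]
    if event["outcome"] == "failure":
        window.append(ts)
        while window and ts - window[0] > WINDOW_SECONDS:
            window.popleft()                      # drop failures outside the window
    elif event["outcome"] == "success" and len(window) >= THRESHOLD:
        print("ALERT: possible brute force from %s (%d failures then success)"
              % (src, len(window)))
        window.clear()

# Minimal usage example
for i in range(5):
    process({"ts": 100 + i, "src": "10.0.0.5", "outcome": "failure"})
process({"ts": 110, "src": "10.0.0.5", "outcome": "success"})
```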

Again, I am not sure I have covered everything in terms of possible improvements to the existing problems, but I have at least grazed a few.

What else do you think can help improve SIEM capabilities? Comment below.

Reverse Engineering Malware – What you need to know?

Every now and then, a nasty piece of malware rears its ugly head and wreaks havoc on the enterprise infrastructure. It is often necessary to analyze the malware and understand its workings so that

  • The impact of the Malware on IT Systems can be ascertained AND
  • The nature of preventative controls that can be put in place so that this threat does not spread further.

In such scenarios, reverse engineering of the malware becomes a requirement. Reverse engineering of a malware sample or an unknown binary file is the process of analyzing and understanding its characteristics and behavior. There are several approaches that different people use, but in this blog post the goal is to give a quick little guide to malware reversing so that anyone with an inclination to pick it up can do so easily. This, in my mind, is an essential tool in the hands of a security analyst. The basic skills needed are listed below:

  • Some programming skills, or at the very least be able to understand and read source code
  • Logical Mindset capable of analyzing and interpreting the Vectors used by the Malware Code
  • Lots and Lots of Patience and Interest

Let us now get into the basics. We need to understand that malware programs go through several stages before they infect and compromise a machine. Typically a malware sample performs the following:

  • Makes itself persistent by adding its executable path to the registry, autorun locations, etc. – Exploit
  • Disguises itself as, or hides within, another process so that it cannot be easily found – Masquerade
  • Deters analysis by rapidly changing its code signature – Polymorphism
  • Makes connections to remote servers, sometimes to update itself or to report back to its master – Callback
  • Performs its intended tasks on the affected system – Data Exfiltration or Zombie

All these malicious programs have one goal or another, but eventually they end up handing over control of your machine to strangers and potentially bad guys as well. Some intentions of malware are listed below:

  • Steal sensitive information / key-logging / identity theft / usernames and passwords / banking information / company patents / source code / etc. (including personal data that may have been part of the system)
  • Access private networks
  • Perform DDoS Attacks
  • Spamming
  • Browser hijacking, ad-wares to perform fraud
  • Ransom-ware: deny access to the users’ own data and demand money to give access back – in other words extortion
  • Data exfiltration

Reverse Engineering Methodology:
This effort involves determining not only what the malware can do specifically, but also establishing how to identify the presence of such programs on affected systems. There might be so many (right) ways to do this, but for something quick we shall follow the steps laid out here.

Why quick? Because in an enterprise we sometimes do not have the time to perform a really in-depth analysis, as time is a major factor when responding to incidents of this kind.

The Setup:
In order to perform an effective malware analysis, we need to have a tool kit and an environment for analysis. Some of the key things to take care of while setting up the environment are:

  • The environment should be isolated, with no connections to the enterprise network or its sensitive data.
  • The environment should have its own proxy service so that the malware does not have scope to spread. The proxy can be a sinkhole that just logs the connections made.
  • Set up two sandboxes, one physical and one VM, as some malware programs are VM-aware and will only run on a physical box.
  • Make sure these sandboxes are standard images, with bare minimum corporate patching done. This should theoretically be equivalent to the weakest link in the organization.
  • Install all the required tools listed below to do certain types of analysis.
  • Tools required: strings, IDA Pro, pmdump, Volatility Framework, UPX, packerid, pescanner, PE Explorer, md5hash, OllyDbg, Deep Freeze, Winalysis, lp

The Analysis: The analysis of malware is usually a two-phased approach – behavioral analysis and code analysis. These two analysis methods yield so much information that detection and response become much easier.

  • Behavioral analysis: Observing the malware's interactions with its environment, such as network connections, files dropped, evasive measures taken, etc. This can be done by installing the malware – “getting infected”, as you may call it.
    • Once infected, you can capture the network packets to look at the potential domains and IP addresses the software tries to connect to. This will help in perimeter filtering and endpoint ‘firewall’ing.
    • If the malware drops files via C2, that can also be observed as part of the getting-infected process. This helps in gathering SHA and MD5 values for the dropped files and banning them from execution in endpoint solutions (a small hashing sketch follows this list).
  • Code analysis: Examining the code that comprises the program to infer what exactly the malware is capable of doing when executed. This does not help in response schemes, but is very important from a forensics perspective. Code analysis can help in determining the extent of loss, the extent of the vulnerability in the system that is being exploited, etc.
  • Code Analysis can be done as follows:
    • First, identify whether the unknown file is protected, obfuscated, encrypted (armoring) and/or packed (the original code is compressed, encrypted or both). To do this, we can use packerid or PE Explorer. This technique is applied in an attempt to evade signature-based malware detection and to deter static analysis. Identifying the specific packer can tell you exactly what you are missing in terms of detection using perimeter tools.
    • Then, with basic analysis like enumerating exports, imports, function use, syscalls, WinAPI calls, mutexes, DLL dependencies, strings and some grepping – using Winalysis or other similar tools that you might be comfortable with – you can come up with several theories about the file. These theories will give an understanding of the various attack vectors employed by the file and can help lock down a system against these kinds of malware attempts.
    • Drilling down further into the specific attack functions and looking at the code itself can help you understand the vulnerability being exploited. This is very useful for developers in fixing the holes in the software and enables a sort of retroactive patching methodology.
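
To support the dropped-file step in the behavioral analysis above, here is a small sketch that computes MD5 and SHA-256 values for everything in a (hypothetical) directory of collected samples, ready to feed into endpoint blocking lists:

```python
#!/usr/bin/env python
# Hash dropped samples so the values can be fed to endpoint blocking lists.
# The directory below is hypothetical; point it at wherever your sandbox
# collects the files dropped during the "getting infected" run.
import hashlib
import os

DROPPED_DIR = "/analysis/dropped_files"

def hash_file(path):
    """Return (MD5, SHA-256) hex digests of a file, read in chunks."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

for name in sorted(os.listdir(DROPPED_DIR)):
    path = os.path.join(DROPPED_DIR, name)
    if os.path.isfile(path):
        md5_hex, sha256_hex = hash_file(path)
        print("%s\n  MD5:    %s\n  SHA256: %s" % (name, md5_hex, sha256_hex))
```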
Post Analysis Steps: 
  1. Once the analysis is done on the behavior and code aspects of the malware, you have lots of data at hand. Documenting the analysis is key, because future variants may use the same attack vector, same exploit code etc. to gain access to a machine/application.
  2. Use the documentation prepared as above to compare against subsequent analysis. This will save a great deal of time in detecting and responding to future threats posed.
  3. A snapshot of the VM can also be retained for future reference.
  4. Destroy the Analysis VM and start over again!!!
Practical Example
There will be a follow-up post to this with a hands-on tutorial of how it's done!!!! Keep following this blog and Happy Reversing!!!!
Additional Resources:


Website to get malware samples for analysis:
http://oc.gtisc.gatech.edu:8080/

Websites to assist you in malware analysis:
REMnux (Linux distribution for malware analysis) – http://zeltser.com/remnux/
ISEC Labs Anubis Tool – http://anubis.iseclab.org/
GFI Sandbox – http://www.gfi.com/malware-analysis-tool
Hex to Binary/ASCII – http://home.paulschou.net/tools/xlate/
Hex to ASCII – http://www.dolcevie.com/js/converter.html
Jsunpack – http://jsunpack.jeek.org/