ArcSight CORR 6.0 – Install and Migration

ArcSight (now HP) Enterprise Security Manager (ESM) is the premier security event manager. It analyzes and correlates events to support security teams and analysts in every aspect of security event monitoring, from compliance and risk management to security intelligence and operations. Several versions of ArcSight ESM have been released over time; the latest is ArcSight CORR 6.0. At InfoSecNirvana we have obtained a copy of the latest version and will be writing a multi-part post covering how to install it, how to migrate from older versions to 6.0, and a basic walkthrough.

In this Part 1 post, we cover the installation of ArcSight CORR (Correlation Optimized Retention and Retrieval), the latest ArcSight ESM release from HP. CORR is a proprietary data storage and retrieval framework that receives and processes events at high rates and performs high-speed searches. With ArcSight CORR, the Oracle database is eliminated.

CORR components:

  • ArcSight Manager
  • CORR Engine
  • ArcSight Console
  • ArcSight Web
  • Management Console
  • Smart Connectors


System: Hardware sizing depends entirely on the EPS you expect to receive. InfoSecNirvana has been working on a PoC for this, and the configuration below was used:
A VMware box with 8 cores, 32GB RAM, a 256GB SSD, and a 2TB WD 7200 RPM SATA HDD. (Note: a higher configuration is recommended for production; check the ArcSight manuals.)

OS: Red Hat Enterprise Linux Server release 6.2 x64, with the xfsprogs-3.1.1-6.el6.x86_64 RPM installed; this is required to convert some of the ext4 file systems to XFS. XFS is the most suitable format for fully exploiting the performance enhancements that come with CORR. I recommend formatting /opt/ with XFS and allocating the maximum storage to that partition. This is crucial because the very first step of the installation verifies that the entire /opt/ directory is on XFS. When using VMware with LVM, we hit issues during installation that ArcSight Support could not resolve; with raw devices mounted as /opt/, we had no problems.
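Before launching the installer, it is worth confirming programmatically that /opt/ really is on XFS. Below is a minimal sketch of such a check; the /opt path and the XFS requirement come from the text above, while the /proc/mounts parsing helper is our own illustration.

```python
# Sketch: verify /opt is mounted on an XFS filesystem before running
# the CORR installer. Parses /proc/mounts and picks the longest mount
# point that covers the given path.

def fs_type(mounts_text, path):
    """Return the filesystem type of the mount that covers `path`."""
    best, best_type = "", None
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) < 3:
            continue
        mount_point, fstype = fields[1], fields[2]
        # A mount covers the path if it equals it or is a parent directory.
        if path == mount_point or path.startswith(mount_point.rstrip("/") + "/"):
            if len(mount_point) > len(best):
                best, best_type = mount_point, fstype
    return best_type

if __name__ == "__main__":
    with open("/proc/mounts") as f:
        print(fs_type(f.read(), "/opt"))  # expect "xfs" on a prepared box
```

Run this as any user on the target box; if it prints anything other than `xfs`, reformat the partition before installing.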

Storage: Allocate the required storage (calculated from the number of devices, events per second, average event size, and retention period). Remember, CORR is like an ESM with a built-in Logger. You can still use a Logger for long-term retention if you prefer, keeping the ESM lean and mean.
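The sizing inputs above (EPS, average event size, retention) reduce to simple arithmetic. The sketch below shows the back-of-the-envelope calculation; the compression ratio and example figures are assumptions for illustration, not ArcSight's numbers — consult the ArcSight sizing guide for real planning.

```python
# Rough storage sizing sketch: raw bytes = EPS x avg event size x retention,
# divided by an assumed compression ratio. All inputs are illustrative.

def storage_gb(eps, avg_event_bytes, retention_days, compression_ratio=1.0):
    """Estimated on-disk storage in GiB for the given ingest profile."""
    seconds = retention_days * 86400
    raw_bytes = eps * avg_event_bytes * seconds
    return raw_bytes / compression_ratio / (1024 ** 3)

# e.g. 1000 EPS, 500-byte events, 90-day retention, assumed 10:1 compression
print(round(storage_gb(1000, 500, 90, 10.0)))  # ~362 GiB
```

Double the result for headroom and peaks before you carve the /opt/ partition.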

Permissions: The installation must be done with a non-root account, for example a service account named "arcsight". This account needs RWX permissions on the /opt/ directory; make sure this is in place.
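A quick way to sanity-check the RWX requirement is to decode the mode bits on /opt/. The account name and directory come from the text above; the helper below is our own small illustration of checking the owner's permission bits.

```python
# Sketch: confirm the installing account has rwx on a directory by
# decoding POSIX owner permission bits from a stat mode.
import stat

def owner_rwx(mode):
    """True if the owner bits of a stat mode grant read, write and execute."""
    need = stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR
    return (mode & need) == need

# Run as the "arcsight" user on the target box:
# import os
# st = os.stat("/opt")
# print(owner_rwx(st.st_mode) and st.st_uid == os.getuid())
```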

Misc: The /tmp/ partition should have at least 3GB of free space, and /home/arcsight at least 5GB. This is crucial because the installer writes its log files to these locations, and the installation fails if sufficient space is not allocated.
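The two free-space minimums above are easy to verify up front. This sketch uses the standard library's `shutil.disk_usage`; the 3GB and 5GB thresholds come from the text, while the helper itself is our own illustration.

```python
# Sketch: pre-install free-space check for /tmp (3GB) and
# /home/arcsight (5GB), per the requirements above.
import shutil

def has_free_gb(free_bytes, required_gb):
    """True if free_bytes covers required_gb gibibytes."""
    return free_bytes >= required_gb * 1024 ** 3

def check(path, required_gb):
    return has_free_gb(shutil.disk_usage(path).free, required_gb)

# On the target box:
# print(check("/tmp", 3), check("/home/arcsight", 5))
```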

The CORR package: Obtain the CORR installation package and the license from your HP ArcSight sales representative.

CORR Installation:
The installation is pretty straightforward, just a series of clicks. I have included most of the screenshots below for reference; if you have installed ArcSight software before, you will hardly need them. Once done, you can install the Console to access CORR and play around.

Once the installation is completed, test the following before calling the install complete:

  1. Validate the manager install logs and check for warnings and errors. This is a best practice to confirm a valid installation.
  2. Install the Console and connect to the ESM with the default user name and password (listed in the install guide). On first connection, the Manager's certificate is imported. If you use a self-signed certificate, note down the parameters used to create it, because this will help with future migrations, troubleshooting, or recovery.
  3. After connecting to the console, you are ready to go.
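Step 1 above can be partly automated by scanning the install logs for warning and error lines. The sketch below does that; the log directory path is an assumption — adjust it to wherever your installer actually wrote its logs.

```python
# Sketch: scan manager install logs for warnings/errors (step 1 above).
# The glob pattern is a hypothetical default install location.
import glob

def flag_lines(lines):
    """Return lines mentioning an error or warning, case-insensitively."""
    return [l for l in lines if "error" in l.lower() or "warn" in l.lower()]

def scan_logs(pattern="/opt/arcsight/manager/logs/*.log"):  # assumed path
    hits = {}
    for path in glob.glob(pattern):
        with open(path, errors="replace") as f:
            found = flag_lines(f.readlines())
        if found:
            hits[path] = found
    return hits
```

An empty result from `scan_logs()` is a good (though not sufficient) sign the install went cleanly.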

Migrations from Existing Installs – Migrating from earlier versions to this CORR instance is tricky, because you are migrating from a DB back end to a non-DB back end. I will be posting a follow-up in Part 2 detailing the migration procedure from 4.x and 5.x installations to CORR 6.0.

Stay tuned to InfoSecNirvana for more!

12 thoughts on “ArcSight CORR 6.0 – Install and Migration”

  1. Why not support Oracle? For large systems that need to scale and keep years of data online: compression, performance, offloading, etc. Exadata scales out and sustains 500k-1M EPS with no indexes, and all queries offload to its storage. It's been proven already.

    1. I think the reason for moving away from Oracle is not a technical one. It's the pathological dislike HP has for Oracle products. I remember the days when they were planning to buy ArcSight; the very first thing they did was put Oracle removal on their agenda.

    2. Nice article! We’ve subsequently tested with Exadata and gotten query speeds under 2 mins for 140 days (500+TB) of data. The best CORR could do was 50 days to respond to the same query. Not to mention ingest of 3B rows per day and a cost <1/20th of the HW we replaced (and that wasn't nearly as good). I heard the ArcSight division at HP might be going back to supporting Oracle based on these numbers. And something about a 7TB shard limitation in CORR…

      1. Agreed. HP and Oracle don’t get along well and that was the reason for the change. Technical considerations were not prime focus

    1. Actually we did not post the follow-up series, as the migrations are manual at this time. For automated scripts we need to engage PS. Alternatively, we have had experience creating content packages and using them to export onto CORR.

  2. Update: ArcSight 5.5 stacking & Exadata using Oracle IMDB and ADO on a 1.2-year production system (600TB). We migrated from 5.2 to 5.5 and Oracle 11.2 to 12.1.2.

    Most of the issues we overcame stem from ArcSight design flaws, so now we have a two-ESM configuration: ESM1 does all the ingest "stuff", ESM2 does all console workloads. We had to turn off a lot of parameters to support this, but the system is REALLY stable and fast.

    Full HA and analytical queries that were never achievable in the past: 1000+ rules, 75k sustained/120k peak EPS, 280 analysts/80 concurrent, and no limitation on queries or how long they run. Example: show me the top bandwidth users, by protocol and application, for the last year across my enterprise. Comes back in seconds!!! Active channels have a terrible design flaw: the console likes to break a single query into many window queries, so we run report viewers instead because they do one fetch window and return in seconds.

    It's an 8-node RAC DB cluster, with two nodes doing ingest from the ESM1 server over InfiniBand and four nodes handling queries from the ESM2 server, also InfiniBand-connected. These nodes have the new Oracle IMDB turned on, so 800GB of RAM is used for the columnar hot cache data. This allows for no indexes and ms response times for data less than a few days old. If the data is beyond the IMDB cache (say a query scans 6 months of data), it gets pushed to the storage nodes for queries. All the data in the storage nodes (columnar) is in Flash Cache cards or smart-scanned on disk.

    Backing up the 1 year of data is not an issue because we use Exadata compression, not ArcSight compression: it backs up only 33TB of the 600TB actual (18x-22x compression). Furthermore, we mark partitions read-only so we don't have to back them all up. Everything runs over the 80Gb InfiniBand network to the ZFSA storage as well.

    So if someone wanted a smaller system to achieve these results, it's very simple: buy two servers (~$20K) for ESM and connect them to a 1/8 Exadata for around $200K. In fact, we were told the newer Exadata systems take out the flash vendors with a new architecture. So buy storage cells of all-flash (not SSD) disk, another rack of just normal cells, and lastly a rack of ZFSA, and have the ILM manager move the data to the respective ASM disk groups based on policy.

    Regarding the use of Oracle, they also have BigSQL, which I expect them to leverage in their own LogAnalytics play soon.
