
Is the SIEM dead?


It has been several years since SIEM emerged in the security industry as a “Security Alerting” system. Since then, it has grown into a huge market, with vendors like ArcSight, QRadar, Splunk, and Nitro becoming successful as first movers. Today, in 2020, they are becoming more and more obsolete. While people still buy SIEM for compliance needs, the real question to ask is “Is the SIEM dead?”

What does a SIEM do?

SIEM by definition does the following:

  1. Log Collection from multiple devices/vendors
  2. Log Parsing and normalization into proprietary formats
  3. Log Aggregation
  4. Log Storage across a variety of data stores
  5. Rules Engine
  6. Management workflows
  7. API Driven Integration

Every vendor does all of the above in different ways, with varying degrees of efficiency. However, all vendors share some common issues:

  • Poor Scalability: SIEM products scale very well in the Log Collection and Aggregation departments. When it comes to Correlation, however, the SIEM falls short. Use-case-based correlation rules, log filtering, tuning, etc. need to be performed with ruthless focus on minimizing the need to scale the correlation engine. Vendors claim “on paper” that their correlation engines can be scaled, but in practical use they fail.
  • Poor Intelligence: Contrary to popular belief, the rules engine has no intelligence at all. All out-of-the-box rules are driven by stock compliance or basic security best practices. The pattern-recognition rules are so archaic that organizations seldom detect any malicious attacker in the network using them.
  • Limited Scope: Outside of Security Operations Centers (SOCs), there is not much scope for SIEM to be relevant and valuable. This is primarily because, while they are good at alarm-based event management, they are miserable at “Event Searches” and “Reporting”, making them less and less attractive outside the SOC realm.
  • Steep Learning Curve: Every SIEM product suffers from a steep learning curve. The curve for “General Use” is comparatively easy; however, “Power Use” and “Admin” are steep. Organizations often rely on a very small set of people (often one person) who are experts on the product to keep this beast of a system up and running.
  • Maintenance Nightmare: The more you scale the SIEM, the more maintenance is needed to keep it up and running. This is an especially huge problem when mission-critical functions depend on alerting and monitoring from the SIEM. The amount of engineering resources burnt on this is significant.
  • Poor Analytics: While SIEM vendors claim to be “Analytics” providers, what they provide is sub-par. Analytics requires the ability to analyse large volumes of data and run meaningful queries; however, because of their poor scalability, most SIEM vendors can’t process, store, and provide analytical output on large volumes of data. Splunk is certainly an exception on this point, as they are an analytics platform first and a SIEM (poorly) later.
  • Poor ITSM Capabilities: SIEM vendors don’t do case management as well as ITSM tools do, relying instead on third-party integrations, which are often “one-way”, meaning cases can be created but updates cannot be tracked in return.
  • Poor Automation: Reliance on third-party automation and response tools makes the SIEM ecosystem even more complex.

There are probably a few more issues depending on the SIEM vendor, but in general the above are the most common ones.

So the question remains: “Is the SIEM dead?”

Let me know in the comments section

AccelOps – An innovative take on Monitoring


We at Infosecnirvana.com have done several posts on SIEM. One of the most common requests from readers of those posts is to review AccelOps. This post is our answer to those repeated questions.

Introduction: 

How many of you remember Cisco MARS? If you don’t, let me remind you that it was one of the earliest SIEM products around, having stemmed from the infrastructure monitoring space. MARS was geared more towards monitoring and reviewing network infrastructure, including utilisation, performance, availability, and logs. After a brief run in enterprises that were Cisco-heavy, the product died a natural death. People who were involved in the product left Cisco and started AccelOps (Accelerate Operations). As a product, they took the fundamentals of data collection and integrated infrastructure log and event monitoring into a data analytics platform. The result is a promising product called AccelOps.

They have since been acquired by Fortinet, marking Fortinet’s foray into the larger enterprise SIEM market dominated by the likes of HP, IBM, Splunk, etc.

AccelOps:

As you can guess, by virtue of collecting data from various sources like Network devices and servers, AccelOps is a product that provides fully integrated SIEM, file integrity monitoring (FIM), configuration management database (CMDB), and availability and performance monitoring (APM) capabilities in a single platform.

  • APM Capability: This is their strong suit, and it is MARS on steroids. AccelOps excels at capturing statistics to provide insights into system health. This is valuable in an MSSP/NOC/SOC setup, as there is no need for an additional monitoring platform. Again, Syslog and SNMP are your best bets for APM.
  • File Integrity Monitoring: Very few SIEM products (think AlienVault) offer native FIM capabilities, so seeing it in AccelOps is refreshing. The way they do it is no surprise, as FIM can only be done effectively using an agent-based approach, and AccelOps does the same.
  • CMDB: AccelOps can keep track of all the elements in an organisation’s network infrastructure, such as network devices, UPS units, servers, storage, hypervisors, and applications. Using this data, a Configuration Management Database (CMDB) is available in AccelOps. This again is very unique; even AlienVault, with all its Unified SIEM branding, does not shine as much as AccelOps does here.
  • SIEM: With all the data from the network infrastructure available in AccelOps along with the CMDB, cross-correlating in real time becomes easy, and AccelOps does that using its own patented correlation engine. The SIEM capability comes with all the bells and whistles one would expect – rules, dashboards, alerting, analytics, intelligence, etc.

Now let us look at the strengths and weaknesses of AccelOps as a product.

The Good:

  • AccelOps’ combination of SIEM, FIM, and APM capabilities in a single box helps centralise operations as well as security monitoring.
  • AccelOps serves as a centralised data aggregation platform for system health data, network flow data, and event log data.
  • AccelOps has mature integration capabilities with traditional incident management and workflow tools like ServiceNow, ConnectWise, LANDesk, and RemedyForce.
  • From a deployment flexibility standpoint, AccelOps excels in virtualisation environments. However, it is also available in traditional form factors. If customers prefer cloud, it is also available for deployment in public, private, or hybrid clouds.
  • From an architecture perspective, there are three tiers:
    1. The Collector tier does exactly what the name suggests – collects data from end log sources.
    2. The Analytics tier receives data from the collector tier. This analytics tier is built on big data architecture fundamentals supporting a master/slave setup. In AccelOps terms, it is Supervisor/Worker setup.
    3. The Storage tier then serves as the data sink housing the CMDB and the big data file system.
  • Because of this architecture, scalability is not an issue for AccelOps. It scales well with clustering at the Analytics and Storage tiers.

The Not So Good:

  • The most obvious issue is that AccelOps as a product has relatively low visibility in the market. However, this is bound to change with the Fortinet buy; they will hopefully be seen in more competitive bids and evaluations.
  • While AccelOps tries to be a “Jack of all trades”, it unfortunately is a master of none. The product has poor support for some third-party security technologies, such as data loss prevention (DLP), application security testing, network forensics, and deep packet inspection (DPI). This hinders the product’s versatility in large environments.
  • Parsing is a key aspect of SIEM, and in this area too AccelOps lacks the extensive coverage seen among competitors. While most of the popular log sources are parsed out of the box, the others require custom parser development, which unfortunately involves a steep learning curve or help from product support.
  • While the interface makes sense to network engineers and analysts, from a SIEM viewpoint the usability could definitely be improved. This is evident in the dashboards, report engines, alerts, etc., which seem to be afflicted with information overload.
  • Deployment is easy; however, configuration takes a lot of time, considering the several tool integrations to be done before the product can generate value. Some of the configurations are really complex and may spook the user or admin. We were reminded of the MARS days time and again while evaluating this product.
  • The UI, while it presents data in a very informative way, suffers from too much clutter, hindering usability. While this is a personal opinion, when compared against the likes of IBM, Splunk, and even LogRhythm, the AccelOps UI does not excite. We hope that Fortinet brings its UI maturity to AccelOps.
  • Correlation capabilities are very good when it comes to data visibility, compliance, and infrastructure monitoring use cases. However, when it comes to threat hunting, trend analysis, and behaviour profiling, AccelOps has a lot of ground to cover.
  • Without infrastructure data, AccelOps loses its edge. As a traditional SIEM collecting only event logs, it looks pretty basic. This can be quite an issue in organisations where infrastructure monitoring is already handled by other tools; unless customers duplicate data sets across the tools, the value is poor.

Conclusion:

All in all, the product is a well-rounded performer when it comes to combined infrastructure and security monitoring; however, in traditional SIEM bake-offs, it needs a lot more flavour to be exciting. Hopefully the Fortinet buy will do just that. We will continue to watch this product and its roadmap in the coming months.

Until next time – Ciao!!!

Oracle: You have the sight now Neo!!!

The objective of this post (the Matrix quote in the title notwithstanding) is to help the Database and Security teams enable log collection, processing, and monitoring of security events of interest for a business-critical application database. I have posted some articles on overall Log Collection/Log Management here and here.
The key thing to note about DB log management is that the log is not syslog-standard, and a DB connector/collector/agent/parser is needed to collect, format, and process the data in the DB for auditing and security correlation purposes. Several SIEM solutions can pull the audit data collected by an Oracle DB regarding “exactly at what time, what user was responsible for executing what commands on the database”.
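As an illustration of what such a connector pulls, the standard Oracle dictionary view DBA_AUDIT_TRAIL exposes exactly this who/what/when record. A sketch of the kind of poll query a collector might issue (the exact query depends on the SIEM connector):

```sql
-- Illustrative poll query against the standard audit trail view.
-- SQL_TEXT (the executed command) is populated only when the
-- AUDIT_TRAIL parameter is set to DB,EXTENDED (see below).
SELECT os_username,
       username,
       userhost,
       timestamp,
       action_name,
       sql_text
  FROM dba_audit_trail
 WHERE timestamp > SYSDATE - 1/24   -- records from the last hour
 ORDER BY timestamp;
```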

ORACLE AUDITING:
There are many different Oracle Audit facilities available for configuration:

  • Oracle audit
  • System triggers
  • Update, delete, and insert triggers
  • Fine-grained audit
  • System logs

The best method is the Oracle Audit Facility with the “db_extended” option to capture command history.
This helps track the exact commands executed by an attacker, which in turn aids forensic investigation of the attacks. Events like privilege misuse, privilege escalation attacks, SQL injection, and brute force can be prevented with proactive monitoring, alerting, and response capabilities.
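Concretely, the “db_extended” option corresponds to the AUDIT_TRAIL initialization parameter. It is a static parameter, so the change is written to the SPFILE and takes effect after a restart:

```sql
-- Write audit records to the database audit trail (SYS.AUD$),
-- including the full SQL text and bind values of each statement.
ALTER SYSTEM SET audit_trail = DB,EXTENDED SCOPE = SPFILE;
-- A SHUTDOWN IMMEDIATE / STARTUP is required to pick up the change.
```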

REQUIREMENTS:

  1. Create a Unique Tablespace for the Audit Table: The first step is to create a separate tablespace just for auditing. The Oracle audit table is stored in the SYSTEM tablespace by default. Because Oracle generates a lot of audit messages, the audit table fills up, which can cause the database to crash on overflow. To avoid this problem, we must move the Oracle audit table into its own tablespace, with its own data files, separate from the core Oracle tables.
  2. Enable Auditing on the Oracle Database: Auditing can be enabled either for the entire DB or for specific tables. Please refer to the diagram below showing the typical auditing options recommended. For business-critical applications, we recommend auditing SELECTs, UPDATEs, and INSERTs to critical tables, such as salary info, credit card info, patient info, financial data, intellectual property, and so on.

For high-performance databases where auditing cannot be enabled on all tables, we can configure “user.table_name” with the name of the table for which we want to enable auditing for that action (as shown highlighted in yellow in the figure). We can also configure “user_name” with the names of users whose specific actions we want to audit (as shown highlighted in yellow in the figure).
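The two steps above can be sketched as follows. The data file path, tablespace name, table name (hr.payroll), and user name (app_user) are placeholders for your environment; the DBMS_AUDIT_MGMT relocation call is available from Oracle 11g Release 2 onwards:

```sql
-- Step 1: dedicated tablespace for the audit trail.
CREATE TABLESPACE audit_ts
  DATAFILE '/u01/oradata/ORCL/audit_ts01.dbf'
  SIZE 500M AUTOEXTEND ON;

-- Relocate SYS.AUD$ out of the SYSTEM tablespace (11gR2+).
BEGIN
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    audit_trail_location_value => 'AUDIT_TS');
END;
/

-- Step 2: audit DML on a critical table and a specific user's logins.
-- "hr.payroll" and "app_user" are hypothetical names.
AUDIT SELECT, UPDATE, INSERT ON hr.payroll BY ACCESS;
AUDIT CREATE SESSION BY app_user BY ACCESS;
```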

AUDIT REQUIREMENTS:
Below is a table showing some sample requirements for database auditing. This is only indicative and may vary from environment to environment depending on the business needs of your organization.

AUDIT_OPTION            SUCCESS     FAILURE
GRANT PROCEDURE         BY ACCESS   BY ACCESS
SYSTEM GRANT            BY ACCESS   BY ACCESS
ALTER SEQUENCE          BY ACCESS   BY ACCESS
GRANT TABLE             BY ACCESS   BY ACCESS
ALTER TABLE             BY ACCESS   BY ACCESS
TRIGGER                 BY ACCESS   BY ACCESS
PROCEDURE               BY ACCESS   BY ACCESS
ROLE                    BY ACCESS   BY ACCESS
PUBLIC DATABASE LINK    BY ACCESS   BY ACCESS
DATABASE LINK           BY ACCESS   BY ACCESS
SEQUENCE                BY ACCESS   BY ACCESS
VIEW                    BY ACCESS   BY ACCESS
PUBLIC SYNONYM          BY ACCESS   BY ACCESS
SYNONYM                 BY ACCESS   BY ACCESS
ALTER USER              BY ACCESS   BY ACCESS
TYPE                    BY ACCESS   BY ACCESS
USER                    BY ACCESS   BY ACCESS
TABLE                   BY ACCESS   BY ACCESS
CREATE SESSION          BY ACCESS   BY ACCESS
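Each row in the table maps to an AUDIT statement (for example, AUDIT ALTER TABLE BY ACCESS;). Once set, the active statement-level options can be verified against a standard dictionary view:

```sql
-- Confirm which statement audit options are in force and how
-- (BY ACCESS vs BY SESSION, on success vs on failure).
SELECT audit_option, success, failure
  FROM dba_stmt_audit_opts
 ORDER BY audit_option;
```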

HOUSEKEEPING:

Audit logging in the Oracle database costs disk/database space. This data can be purged on a regular schedule to keep the audit table clutter-free and performing fast. Since the audit data is being collected into a SIEM solution, retention should be handled at the SIEM rather than in the DB.

In order to perform the housekeeping, the recommendation is as follows:
1. Create a Truncate Procedure

Create or replace procedure clean_audit is
Begin
-- select sysdate from dual;
EXECUTE IMMEDIATE 'truncate table aud$';
commit;
End;
/

2. Schedule the Truncate Procedure

delete from dba_jobs where substr(what,1,6)='CLEAN_';
commit;
-- this only needs to run one time
DECLARE
jobno number;
BEGIN
DBMS_JOB.SUBMIT (
job => jobno,
what => 'CLEAN_AUDIT;',
next_date => trunc(SYSDATE+1)+2/24,
interval => '/*2:00AM*/ trunc(SYSDATE+1)+2/24'
);
COMMIT;
DBMS_OUTPUT.PUT_LINE('Job number '||jobno||' set up.');
END;
/

Now that you have covered auditing for all user connections to the database, you would think you have all your bases covered. But Oracle always comes up with surprises. The surprise is that you are still vulnerable to insider attacks, because SYS and SYSDBA activities are not tracked by the database auditing covered above.
Gotcha!!!
I know, I know. No system is perfect, and every system has a gap that can be exploited. Monitoring SYS and SYSDBA is very important because a disgruntled user can tamper with your DB, and even though you are collecting DB audit logs, you would never come to know of it. So what do we do now?
Auditing SYS and SYSDBA activity is not as straightforward as user activity auditing, because SYS and SYSDBA events are logged only at the OS level. This is where the problems multiply. Let us look at them in detail:
Problem 1: The files are stored in the Oracle installation directory, making them easily accessible; permission-wise they are owned by the Oracle install user group, so they can easily be tampered with by any user in that group.
Problem 2: We also have to monitor the client machine to track which user logged into the OS in order to log in as SYSDBA or SYS.
Problem 3: If you have disabled direct client login as SYS and SYSDBA, the only way in would be to log in to the DB machine itself and then log in as SYS or SYSDBA. In this case, you will have to track the machine login as well as the DB login.
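For completeness, OS-level auditing of SYS/SYSDBA activity is switched on with a static parameter, and moving the audit file destination to a directory outside the Oracle home with tighter permissions mitigates Problem 1 above. A sketch, with the destination path being a placeholder you would adjust for your environment:

```sql
-- Capture all statements issued by SYS or users connecting AS SYSDBA.
ALTER SYSTEM SET audit_sys_operations = TRUE SCOPE = SPFILE;

-- Write the OS audit files outside the Oracle installation directory,
-- to a location owned by a separate group (path is a placeholder).
ALTER SYSTEM SET audit_file_dest = '/var/log/oracle_audit' SCOPE = SPFILE;

-- Both parameters are static: a restart is required to take effect.
```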
So, unless and until you address the problems identified above, DB audit tracking for log investigation and incident detection will not be complete. Once again, go through the articles on overall Log Collection/Log Management here and here to get an idea of how you can solve these problems.
