
SIEM – The Good, The Bad and The Ugly – Part 2

In Part 1 of this post, we discussed the several shortcomings of SIEM that have arisen over the years. These problems need to be addressed if we are to progress further in our maturity with SIEM technology. Let me start with the main capability required for a SIEM to function – Log Management. This is where the majority of the problems are.

Log Management: The problems plaguing Log Management in a Client/Server model are too many to enumerate. Centralized Log Management is the solution; however, we don't have a sound Log Management solution today that addresses these needs. Let us see what problems exist in Log Management and how we could solve them. Log Management is broadly divided into four parts:

  1. Log Collection
  2. Log Categorization
  3. Log Storage
  4. Log Management (Making it easily searchable across large sets)

Log Collection – Client-less and Standardized: In my view, the ideal solution is that source devices should not have any clients installed on them, should not need special formatting, and should not be taxed in terms of processing. A standard method of data collection should be set as the norm. At a higher level, an RFC or a Standard should be floated that standardizes Log Data from every IT device. This standardization would help in two ways – one, improve the overall Logging and Auditing capabilities of devices (Client-less, Out of the Box) in a standard format, and two, improve the Security consciousness of Application development teams. Think in terms of Logging being a part of the standard protocol suite. This would help interoperability between devices – be it Log Sources, Log Collectors, Log Managers etc. – and bring together all the fragmented parts of existing log management under one umbrella. For example, every IDS vendor today supports SNORT formatting. Similarly, all SIEM vendors should support standard Log Collection and processing so that interoperability, migration between vendors etc. become easier. I know CEF is one such standard, but I am not sure everyone adopts it today.
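
For a flavor of what such a standard looks like, below is a sample event in CEF (Common Event Format), which prefixes a syslog message with a pipe-delimited header (version, vendor, product, device version, event class ID, name, severity) followed by key=value extensions. The host name and field values are illustrative only:

  Sep 29 08:26:10 loghost CEF:0|Security|threatmanager|1.0|100|worm successfully stopped|10|src=10.0.0.1 dst=2.1.2.2 spt=1232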

Log Categorization – Security and Non-Security: When large volumes of Log data come in, there is a need to separate Security events from normal Non-Security events. The Log Standard should also specify categorization of Events clearly as Security and Non-Security. For every device class, the Security Events should be listed out, and only those logs should be collected by the SIEM for correlation and incident management. Many organizations collect way too much Log data (up to 100K events per second) but effectively use only 10-20% of it for Security purposes. That means the remaining 80-90% is Non-Security related Logs, and this junk is what eats up terabytes of space. There is a huge disparity between the Log Management tier and the SIEM tier: Data Collection keeps growing, whereas SIEM processing is more focused. This is where categorization helps, so that the SIEM receives only Security events to process.

Log Storage and Log Management – Streamlined and consistent data sets: The moment we start collecting standard data and properly categorizing it, storage, indexing and retrieval become easier. This is the space we are good at and should continue to improve. Relying on Oracle, SQL Server or MySQL as the backend limits our capabilities when handling big data sets; Big Data storage technologies should be used instead. Storage has to be streamlined, and new indexing and searching capabilities should be thrown into the future development of Log Management tools and solutions.

Security Information and Event Management (SIEM) – Once the Log Management problems are sorted out, the SIEM problems become easier to solve. One of the major pain points in using SIEM has been the client-server architecture. When Log Collection becomes Client-less, SIEM solutions need not focus on building Log Collection clients and can instead focus their energies on better correlation and intelligent data mining. Some sweeping changes that can be brought into the SIEM world are as below:

  • SIEM should be an inference engine, a correlation engine and a Data Mining engine only. This will bring more value in the intelligence piece of Log Data Mining, rather than just parsing and some basic predefined alerting. Remember, as I always say, Security is an Intelligence Function and not an Operations Function.
  • SIEM should be able to focus only on Security events for Alerting, reporting and investigation. This is where Log Standardization and Categorization play a big role. If SIEM were to process only Security Data, we would never be hitting more than 5K events per second in most enterprises.
  • SIEM should be a fast and agile product and not rely on backend DB queries, reports and stats usually driven using Oracle or SQL. This is something that some vendors are starting to explore. I know Novell e-Sentinel has had this capability for a long time, and HP is now trying to do a similar thing with CORR. This, in my opinion, is the right way forward.
  • The SIEM should also become a more Active tool instead of the Passive tool it is today. What I mean by this is that a SIEM should be able to respond to threats in a comprehensive way. It should be able to alert, do basic ITIL Service Management integration out of the box and, if need be, execute pre-packaged responses for alerts. This is helpful because, many a time, the rules written in SIEM are basic and over a period of time become repetitive. Such repeatable alert responses can be automated into a Workflow, and the system should be capable of becoming self-sufficient.
  • SIEM should get better at Large Data Set Mining. Today, SIEM Management consoles are “code heavy” on the front end and “CPU heavy” on the backend. This reduces the efficiency of SIEM technologies in performing real-time correlation. A radical change is needed in the way SIEM applications are developed to make them lightning fast.
  • SIEM should also provide flexibility in customizing Incident Detection and Response templates, writing visualization rules (where you are able to chart Attack Vectors, Vulnerable Points, Network Maps etc.), trending Historical Data in a way where the system automatically detects a pattern of similar issues seen in the past, and so on and so forth. In short, add “Intelligence and Learning Capabilities” to SIEM.

Again, I am not sure I have covered everything in terms of possible improvements to existing problems, but I have at least grazed a few.

What else do you think can help improve SIEM capabilities? Comment below.

SIEM – The Good, The Bad and The Ugly – Part 1

SIEM Technology – The Good, The Bad and The Ugly

SIEM is one of those technologies most organizations adopt in the wake of Security Log Analysis/Incident/Event Reporting requirements. If you already know what SIEM technology is and want to get into the domain, these are the things to know (SIEM – What you need to know). If you don't know what SIEM is, read it nevertheless!!! This blog post talks about SIEM technology by analyzing it critically (even though I am a big fan of SIEM, I believe that maturity comes from review and feedback). Almost a decade ago, SIEM started gaining traction, and it has come a long way since. Now, I think, is a good time to review the technology from a critical viewpoint. So here is my blog on The Good, The Bad and The Ugly!!! This will be a 2-part post, with Part 1 concentrating on introducing SIEM and then highlighting what it has and has not achieved. Part 2 will concentrate on a proposal/vision for how SIEM should move ahead in the coming years.

Introduction:
SIEM is data driven. Data in the form of logs from IT Infrastructure is the key driver for SIEM tools to perform their so-called “magic”. Logs have been around in IT for a long time and have long been one of the main tools to troubleshoot programs, operating systems etc. Gradually, Security gained importance, and because an established logging platform was available across the IT landscape, Security Events also slowly started to trickle into Logs. With time came several compliance and Audit requirements that drove the Security Log Management domain. Then gradually there arose a need to analyze Log Data and, based on the analysis, perform an action. This is where SIM tools gained prominence. These later focussed on Security related incidents and diverged as SIEM. If you look at the genesis of SIEM, it all has to do with Data. That is why in today's world, where Data is exploding on the Internet, it is of utmost importance to understand a technology like SIEM and improve it with time.

What has SIEM accomplished?
For more than a decade, SIEM has done a lot of things for IT folks. When there was no capability to analyze lines and lines of Log files, SIEM was our savior. SIEM gave us the following capabilities right off the bat:

  1. Process Log Streams from various products and standardize them into a single application data set.
  2. Provide capabilities to work with several thousands of events per second and still give us what we need in terms of searching and querying Log data.
  3. Provide capabilities to correlate data from different entities so that we can trace the provenance of an issue.
  4. Provide nice Alerts/Reports/Dashboards/Summaries for the IT Log data.
  5. Finally, an Incident and Event Management Workflow to make it all operational.

Several SIEM vendors exist (SIM/SEM are also used interchangeably, but SIEM is becoming the standard term), and a Google search will give you more than 20 of them. The SIEM market today has grown into a multi-billion dollar market, and companies and people alike are embracing the change.

SIEM Shortcomings: 
While SIEM is a lucrative segment to be in, the problem is that the technology is not mature and has some gaping holes. Instead of solving the problem for good, the technology fixed some issues and introduced several collateral ones. Let us look at some of them below:

  • Log Management as a technology and as a solution was never mature. We never had good enterprise-wide Log Management technologies and tools around before SIEM arrived.
  • Several Log Management issues still exist. These are around Big Data Sets, Standard Log Format Specifications, Integration of Log Sources, Standardization of Application logging with respect to Security etc. Instead of focussing on fixing these issues, we jumped into SIEM solutions (Log Management + Event Management).
  • SIEM came packaged with Log Management solutions as well, but they were not as efficient as they should be. SIEM came packaged with Event Management solutions too, but what good is Event Management when Log Management is not efficient?
    • Sample this: Windows Logs are resident files in a proprietary format. All Network devices send Syslog messages using the same RFC, but the content varies. Database Audit logs are a mix of Table Data and File Audit Data. When we have such a variety of logs from vendors, there is no way we can effectively perform Log Management and, subsequently, Event Management.
  • One of the best and easiest solutions for Log Management was for SIEM vendors to package a client that collects and normalizes the data into the vendor's proprietary format. The processed data was then sent to a Central Manager where all the Event Management capabilities existed.
  • The problem with the above approach is that different data sources need different processing, and hence a different client for every data source. Though this seems a simple solution at the outset, it adds a layer of complexity in terms of managing the Clients themselves. Imagine this problem for a huge enterprise and you know what a pain point this is for SIEM solutions.
  • Client management is a decentralized approach and hence a failure. Monitoring the health of the clients is one of the management headaches one has to bear. Patching them, updating properties, remote management etc. are all points of failure, not to mention keeping them up and running with constant care and feeding, like a newborn.
  • Since log standardization in SIEM is in a proprietary format, migrating from one system to another, or one vendor to another, is a pain point. It requires client re-installation and data re-processing. This is a problem where you are stuck with a product for life. Interoperability between systems has always been a problem for vendors in the IT space. While this protects their business, it limits the capability of end users to get what they want. The solution cannot be more and more new products and new projects to replace existing SIEM solutions; it has to be more robust than that.
  • Searching across terabytes of data is the most important problem every organization faces. How do our SIEM solutions solve this? By using some sort of databasing and indexing. All the databases today (read Oracle/SQL/MySQL/PGSQL) are limited in terms of handling such randomly formatted, high volume feeds, thereby rendering long term searches, trend analysis etc. a slow, frustrating and time consuming job.
  • Client-Server models implemented by SIEM do not scale for BIG DATA!!! Let me tell you how:
    1. Most of the SIEM solutions I have worked with have a 3-layer architecture – a Data Collector Layer (Event Collectors), a Data Storage Layer (Event Indexers/Storage) and a Data Processing Layer (Event Management/Administration/Web Console/Server).
    2. In this architecture, Data Collection and Data Storage are high volume, ranging up to 100K events per second. The Data Processing Layer or Manager Layer, however, has a limit on how much it can process (typically 1/10th to 1/20th of the collected data).
    3. If the effective use of Log data is only going to be 10-20%, what about the rest?
    4. People say aggregation and filtering are done to consolidate the data to within the 10-20% range. Filtering and Aggregation have their own pros and cons, but the end result is that what you collect is not used entirely.
  • Managing SIEM solutions (from architecting, implementing, integrating, customization, event management and content development to maintenance) is not a simple task and usually requires huge investments in people and training. The vendors make money with this, I know, but honestly, as a User, you know that “if it is complicated, adoption will be difficult”.
  • Most SIEM solutions are not integrated with the ITIL process of Incident and Event Management (a rather standard IT framework used across the industry), thereby preventing deployments from being a seamless transition.

I have to be honest that the above list is not comprehensive, and there are several points you as readers would like to point out as far as the positives and negatives of SIEM go. Please comment, and I will update the post with your views. Part 2 will discuss the various options for SIEM to learn and improve based on Industry and User feedback.

Oracle: You have the sight now Neo!!!

The objective of this post (the Matrix quote in the title notwithstanding) is to help the Database and Security Teams enable log collection, processing and monitoring of Security Events of Interest for a Business Critical Application Database. I have posted some articles on overall Log Collection/Log Management here and here.
The key thing to note about DB Log Management is that the log is not Syslog-standard: a DB connector/collector/agent/parser has to collect, format and process the data in the DB for Auditing and Security Correlation purposes. Several SIEM solutions have the ability to pull the audit data collected by the Oracle DB regarding “exactly at what time, what user was responsible for executing what commands on the database”.
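
As a minimal sketch of what that pulled audit data looks like, the query below reads the standard audit trail view (assuming database auditing is enabled as described below; SQL_TEXT is only populated with the extended audit option):

  SELECT timestamp, os_username, username, userhost, action_name, sql_text
    FROM dba_audit_trail
   ORDER BY timestamp DESC;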

ORACLE AUDITING:
There are many different Oracle Audit facilities available for configuration:

  • Oracle audit
  • System triggers
  • Update, delete, and insert triggers
  • Fine-grained audit
  • System logs

The best method to use is the Oracle Audit Facility with the “db_extended” option to capture Command History (the full text of each audited SQL statement).
This helps in tracking the exact commands executed by an attacker, which in turn aids the Forensic Investigation of attacks. Events like privilege misuse, elevation of privilege attacks, SQL Injection, Brute Force etc. can be detected and responded to with proactive monitoring, alerting and response capabilities.
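
A minimal sketch of turning this on (assuming Oracle 10g/11g; AUDIT_TRAIL is a static parameter, so an instance restart is required):

  -- as SYSDBA; 10g value shown (on 11g the equivalent is audit_trail=db,extended)
  ALTER SYSTEM SET audit_trail=db_extended SCOPE=SPFILE;
  SHUTDOWN IMMEDIATE;
  STARTUP;
  -- verify after the restart
  SHOW PARAMETER audit_trail;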

REQUIREMENTS:

  1. Create a Unique Tablespace for the Audit Table: The first step is to create a separate tablespace just for auditing. The Oracle audit table (SYS.AUD$) lives in the SYSTEM tablespace by default. Because Oracle generates a lot of audit records, the audit table fills up, which can bring the database down when the tablespace overflows. To avoid this problem, we must move the Oracle audit table into its own tablespace, with its own data files, separate from the core Oracle tables (see the sketch after this list).
  2. Enable Auditing on the Oracle Database: Auditing can be enabled either for the entire DB or for specific tables. Please refer to the diagram below showing the typical auditing options recommended. For business critical applications, we recommend auditing SELECTs, UPDATEs and INSERTs on critical tables, such as salary info, credit card info, patient info, financial data, intellectual property and so on.
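
A minimal sketch of requirement 1 (assuming 10g/11g; the tablespace name, file path and size are made up for illustration):

  -- as SYSDBA: create a dedicated tablespace for audit data
  CREATE TABLESPACE audit_ts
    DATAFILE '/u01/oradata/ORCL/audit_ts01.dbf' SIZE 2G AUTOEXTEND ON;
  -- move the audit table out of SYSTEM
  ALTER TABLE sys.aud$ MOVE TABLESPACE audit_ts;
  -- (on 11gR2 and later, DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION is the
  --  supported way to perform this move)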

For high performance databases where auditing cannot be enabled on all tables, we can configure “user.table_name” with the name of the table for which we want to enable auditing of that action (as shown highlighted in yellow in the figure). We can also configure “user_name” with the names of the users whose specific actions we want to audit (as shown highlighted in yellow in the figure). A sketch of both follows.
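
A minimal sketch of the corresponding AUDIT statements; the schema, table and user names here are hypothetical placeholders:

  -- audit DML on one critical table (the "user.table_name" case)
  AUDIT SELECT, INSERT, UPDATE ON appowner.salary_info BY ACCESS;
  -- audit statement classes for one user (the "user_name" case)
  AUDIT SELECT TABLE, INSERT TABLE, UPDATE TABLE BY app_batch BY ACCESS;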

AUDIT REQUIREMENTS:
Below is a table showing some sample requirements for Database Auditing. This is just indicative and may vary from environment to environment depending on the Business Needs of your organization.

AUDIT_OPTION              SUCCESS      FAILURE
GRANT PROCEDURE           BY ACCESS    BY ACCESS
SYSTEM GRANT              BY ACCESS    BY ACCESS
ALTER SEQUENCE            BY ACCESS    BY ACCESS
GRANT TABLE               BY ACCESS    BY ACCESS
ALTER TABLE               BY ACCESS    BY ACCESS
TRIGGER                   BY ACCESS    BY ACCESS
PROCEDURE                 BY ACCESS    BY ACCESS
ROLE                      BY ACCESS    BY ACCESS
PUBLIC DATABASE LINK      BY ACCESS    BY ACCESS
DATABASE LINK             BY ACCESS    BY ACCESS
SEQUENCE                  BY ACCESS    BY ACCESS
VIEW                      BY ACCESS    BY ACCESS
PUBLIC SYNONYM            BY ACCESS    BY ACCESS
SYNONYM                   BY ACCESS    BY ACCESS
ALTER USER                BY ACCESS    BY ACCESS
TYPE                      BY ACCESS    BY ACCESS
USER                      BY ACCESS    BY ACCESS
TABLE                     BY ACCESS    BY ACCESS
CREATE SESSION            BY ACCESS    BY ACCESS
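
A minimal sketch of applying a few rows from this table and verifying the result (an AUDIT statement without a WHENEVER clause covers both success and failure):

  AUDIT GRANT PROCEDURE BY ACCESS;
  AUDIT ALTER TABLE BY ACCESS;
  AUDIT CREATE SESSION BY ACCESS;
  -- verify: the SUCCESS and FAILURE columns should both read BY ACCESS
  SELECT audit_option, success, failure FROM dba_stmt_audit_opts;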

HOUSEKEEPING:

Audit logging in the Oracle Database costs disk/database space. This data can be purged on a regular schedule to keep the audit table clutter free and performing fast. Since the audit data is being collected into a SIEM solution, retention should be done at the SIEM rather than at the DB.

In order to perform the housekeeping, the recommendation is as follows:
1. Create a Truncate Procedure

Create or replace procedure clean_audit is
Begin
  -- must be created by SYS (or a user with privileges on sys.aud$);
  -- TRUNCATE is DDL and commits implicitly
  EXECUTE IMMEDIATE 'truncate table sys.aud$';
End;
/

2. Schedule the Truncate Procedure

-- remove any previously scheduled CLEAN_AUDIT jobs (this only needs to run once;
-- DBA_JOBS is a view, so jobs are removed via DBMS_JOB, not DELETE)
BEGIN
  FOR j IN (SELECT job FROM user_jobs WHERE what LIKE 'CLEAN_AUDIT%') LOOP
    DBMS_JOB.REMOVE(j.job);
  END LOOP;
  COMMIT;
END;
/

DECLARE
  jobno NUMBER;
BEGIN
  DBMS_JOB.SUBMIT(
    job       => jobno,
    what      => 'CLEAN_AUDIT;',
    next_date => trunc(SYSDATE + 1) + 2/24,   -- first run: tomorrow at 2:00 AM
    interval  => 'trunc(SYSDATE + 1) + 2/24'  -- repeat daily at 2:00 AM
  );
  COMMIT;
  DBMS_OUTPUT.PUT_LINE('Job number ' || jobno || ' set up.');
END;
/

Now that you have covered Auditing for all User connections to the Database, you would think you have all your bases covered. But Oracle always comes up with surprises, you see. The surprise is that you are still vulnerable to Insider Attacks, because SYS and SYSDBA activities are not tracked by the Database Auditing covered above.
Gotcha!!!
I know, I know. No system is perfect and every system has a gap that can be exploited. Monitoring SYS and SYSDBA is very important because a disgruntled user can go ahead and tamper with your DB, and even though you are collecting DB Audit logs, you would never come to know of it. So what do we do now?
Auditing SYS and SYSDBA activity is not as straightforward as User Activity Auditing, because SYS and SYSDBA events are logged only at the OS level (a sketch of how to switch this on follows the list below). This is where the problems increase manifold. Let us look at them in detail –
Problem 1: The files are stored in the Oracle installation directory, where they are easily accessible; permission-wise they are owned by the Oracle install user group, so they can easily be tampered with by any user in that group.
Problem 2: We will also have to monitor the client machine, to track which user logged into the OS before logging in as SYS or SYSDBA.
Problem 3: If you have disabled direct client logins as SYS and SYSDBA, the only way in is to log in to the DB machine itself and then log in as SYS or SYSDBA. In this case, you will have to track the machine login as well as the DB login.
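
For completeness, a minimal sketch of switching on this OS-level SYS auditing (assuming Oracle 10gR2/11g; both parameters are static, so a restart is needed, and the facility.level value is a made-up example). Routing the records to syslog also mitigates Problem 1, since the Oracle install group cannot rewrite syslog-owned files:

  ALTER SYSTEM SET audit_sys_operations=TRUE SCOPE=SPFILE;
  -- optional: send SYS audit records to syslog instead of plain files
  ALTER SYSTEM SET audit_syslog_level='LOCAL1.WARNING' SCOPE=SPFILE;
  SHUTDOWN IMMEDIATE;
  STARTUP;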
So, unless and until you address the problems identified above, DB Audit Tracking for Log Investigation and Incident Detection will not be complete. Once again, go through the articles on overall Log Collection/Log Management here and here to get an idea of how you can solve these problems.
