Category Archives: Security Learning

Rules Rule In SIEM Kingdom!!!

Are you new to SIEM?
Are you trying to write a Correlation Rule in SIEM and don’t know where to start?
Are you stuck on all the jargon in the SIEM Administrator guide?
If your answer is “YES” to any one of the above questions, please continue.

Rules are the staple ingredient of any SIEM tool or technology. A Rule is nothing but a set of logical instructions for a system to follow before it determines what to do and what not to do. As we all know, SIEM is a passive system. All it does is pattern matching on the logs received, following instructions on what to do (Trigger) and what not to do (Not Trigger). These pattern matchers are also called Correlation Rules or Real-time Rules. Correlation Rules are nothing but “your visualization of how an attack would look in an IT infrastructure”.

Generally, Rule Writing or Rule Development is a process similar to SDLC.
It all starts with the Requirements Phase.
Requirement Phase: In this phase, the Rule Author should collect the exact requirements for putting a rule in place. These requirements should also state the “Intent” of the rule. They should also capture the Response such a rule trigger would elicit. It is in this Requirements Phase that the “visualization” actually starts. Remember, without a goal your rules will not mean anything.

Once you gather the requirements, you enter into the Design phase.
Design Phase: In the design phase, you do the rough skeletal layout of the rule itself. Things like,

  • What logs to use for creating this specific rule?
  • What log attribute is more suitable for rule trigger?
  • What are the various attributes to collect/represent?
  • What type of rule to write?
  • What type of alerting is to be configured? (Read: Email, SNMP Traps, Dashboard Trigger, Response Action, etc.)

are laid out in this phase. This is the most crucial phase. When it comes to selecting Rule Types, you need to know all the features available in your SIEM tool or technology. As an all-purpose guide, I would broadly classify the rules for any SIEM tool or technology into the following:

1. Single Event Rule – If Conditions 1, 2, 3 up to N match, trigger. The most commonly used rule type, as it is straightforward pattern matching of Event Type, Event ID, IP, etc.
2. Many to One or One to Many Rules – If a condition matches and one-source-to-several-targets or one-target-to-several-sources scenarios are in play.
3. Cause and Effect Rules, “Followed by” Rules or Sequential Rules – If Condition A matches and leads to Condition B. These are typically “Scan followed by an Exploit” scenarios, “Password Guessing Failure followed by Successful Login” scenarios, etc.
4. Transitive Rules or Tracking Rules – If Condition A matches for Attacker to Machine A and, within some time window, Machine A becomes the Attacker for Machine B (again matching Condition A). This is typically used in Worm/Malware Outbreak scenarios. Here, the Target in the first event becomes the Source in the second event.
5. Trending Rules – These rules track several conditions over a time period against thresholds, as in DoS or DDoS scenarios.
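Many SIEM products expose a SQL-like query layer over their event store. As a rough sketch only (the events table and its column names are hypothetical, and real rule editors are usually graphical), a “Followed by” rule of type 3 can be thought of as a self-join with a time window:

```sql
-- Hypothetical schema: events(event_time, event_type, src_ip, dst_ip)
-- "Password Guessing Failure followed by Successful Login" from the
-- same source to the same target within 10 minutes:
SELECT f.src_ip, f.dst_ip, s.event_time AS success_time
FROM   events f
JOIN   events s
  ON   s.src_ip     = f.src_ip
 AND   s.dst_ip     = f.dst_ip
 AND   s.event_type = 'LOGIN_SUCCESS'
 AND   s.event_time BETWEEN f.event_time
                        AND f.event_time + INTERVAL '10' MINUTE
WHERE  f.event_type = 'LOGIN_FAILURE';
```

The same join pattern, with the target of the first event matched against the source of the second, gives you the Transitive rule of type 4.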

The selection among the above happens in the design phase. If the Rule Type selection is wrong here, you will have to revisit the Requirements phase again. Hence it is important to understand the options and choose wisely.

Now that we have got the requirements and the design out, we can move to the Development phase.
Development Phase: This is where we actually write the rule. Remember, once you have a logical understanding of the Conditions that must match (generally combined using Boolean Operators), writing a rule is very simple and straightforward (of course, you need to know your SIEM tool’s menus to do so).
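To illustrate how simple the Boolean logic usually is, a Single Event rule often reduces to nothing more than a filter. Again assuming a hypothetical events table (field names vary per SIEM):

```sql
-- Trigger on Windows failed logons (Event ID 4625) whose source
-- is outside the internal 10.0.0.0/8 range:
SELECT *
FROM   events
WHERE  event_id = 4625
  AND  src_ip NOT LIKE '10.%';
```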

Next, the Testing and Deployment phases follow. Testing is critical to validate the logic of the rule. Simulating the rule conditions in a Development/Testing environment will help iron out the chinks in the rule.

Then the Post Implementation Phase kicks in.
Post Implementation Phase: Once the rule is implemented, we need to manage it. By manage, I mean ensure that the rule is tightened based on feedback from Security Analysts. This may involve adding conditions to the rule, whitelisting, threshold adjustments, etc. This is what makes the rule better and more efficient in achieving the “INTENT”. Unlike the waterfall-style flow of the earlier phases, this is an iterative loop where you keep going back to the rule again and again to tune it to your exact needs.

Finally, the Rule Refresh Phase is one more phase I would like to add to the mix. This is the stage where the rules you put in place may no longer be applicable, or may have become obsolete and have to be replaced by better rules. Periodic clean-up of old/obsolete rules is also one of the best practices in the world of SIEM rules.

Indeed, Rules Rule, innit??



Oracle: You have the sight now Neo!!!

The objective of this post (the Matrix quote in the title notwithstanding) is to help the Database and Security teams enable log collection, processing and monitoring of Security Events of Interest for a business-critical application database. I have posted some articles on overall Log Collection/Log Management here and here.
The key thing to note about DB Log Management is that the log is not syslog-standard: a DB connector/collector/agent/parser is needed to collect, format and process the data in the DB for auditing and security correlation purposes. Several SIEM solutions can pull the audit data collected by Oracle DB showing “exactly at what time, what user was responsible for executing what commands on the database”.

There are many different Oracle Audit facilities available for configuration:

  • Oracle audit
  • System triggers
  • Update, delete, and insert triggers
  • Fine-grained audit
  • System logs

The best method to use is the Oracle Audit Facility with the “db_extended” option to capture command history.
This helps in tracking the exact commands executed by an attacker, which in turn aids the forensic investigation of attacks. Events like privilege misuse, privilege-escalation attacks, SQL Injection, Brute Force, etc. can be caught early with proactive monitoring, alerting and response capabilities.
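On Oracle 10gR2 and later, the extended database audit trail is enabled with a static initialization parameter, so a restart is required for it to take effect:

```sql
-- AUDIT_TRAIL is static: set it in the SPFILE, then restart the instance.
-- DB, EXTENDED records the SQL text and bind values in SYS.AUD$.
ALTER SYSTEM SET AUDIT_TRAIL=DB, EXTENDED SCOPE=SPFILE;
```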


  1. Create a Unique Tablespace for the Audit Table: The first step is to create a separate tablespace just for auditing. The Oracle audit table is stored in the SYSTEM tablespace, owned by SYS. Because Oracle generates a lot of audit messages, the audit table fills up, which can bring the database down when the tablespace overflows. To avoid this problem, we must move the Oracle audit table into its own tablespace with its own data files, separate from the core Oracle tables.
  2. Enable Auditing on the Oracle Database: Auditing can be enabled either for the entire DB or for specific tables. Please refer to the diagram below showing the typical auditing options recommended. For business-critical applications, we recommend auditing SELECTs, UPDATEs, and INSERTs on critical tables, such as salary info, credit card info, patient info, financial data, intellectual property, and so on.
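A sketch of step 1 follows; the path and sizes are illustrative only. On 11gR2 and later, DBMS_AUDIT_MGMT can relocate the audit trail for you (on older releases, moving AUD$ is a manual operation that should be done with care):

```sql
-- Dedicated tablespace for the audit trail:
CREATE TABLESPACE audit_ts
  DATAFILE '/u01/oradata/ORCL/audit_ts01.dbf'
  SIZE 500M AUTOEXTEND ON NEXT 100M;

-- 11gR2+: move the standard audit trail (SYS.AUD$) into it:
BEGIN
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    audit_trail_location_value => 'AUDIT_TS');
END;
/
```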

For high-performance databases where auditing cannot be enabled on all tables, we can configure “user.table_name” with the name of the specific table whose actions we want to audit (as shown highlighted in yellow in the figure). We can also configure “user_name” with the names of the users whose specific actions we want to audit (as shown highlighted in yellow in the figure).
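In SQL terms, the per-table and per-user options above look like this (the schema, table, and user names here are examples only):

```sql
-- Audit DML on a specific business-critical table:
AUDIT SELECT, INSERT, UPDATE ON hr.salaries BY ACCESS;

-- Audit every SELECT issued by a specific user:
AUDIT SELECT TABLE BY app_user BY ACCESS;

-- Logon/logoff auditing for all users:
AUDIT SESSION;
```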

Below is a table showing some sample requirements for Database Auditing. This is just indicative and may vary from environment to environment depending on the business needs of your organization.

AUDIT_OPTION | SUCCESS | FAILURE

Audit logging in the Oracle Database consumes disk/database space. This data can be purged on a regular schedule to keep the audit table clutter-free and fast. Since the audit data is being collected into a SIEM solution, retention should be handled at the SIEM rather than in the DB.

In order to perform the housekeeping, the recommendation is as follows:
1. Create a Truncate Procedure

create or replace procedure clean_audit is
begin
  -- select sysdate from dual;
  execute immediate 'truncate table sys.aud$';
end;
/

2. Schedule the Truncate Procedure

declare
  jobno number;
begin
  -- clear earlier CLEAN_ jobs (dba_jobs is a view; use dbms_job.remove)
  for j in (select job from user_jobs where substr(what, 1, 6) = 'CLEAN_') loop
    dbms_job.remove(j.job);
  end loop;
  dbms_job.submit(
    job       => jobno,
    what      => 'CLEAN_AUDIT;',
    next_date => trunc(SYSDATE + 1) + 2/24,
    interval  => '/*2:00AM*/ trunc(SYSDATE+1)+2/24');
  commit;
  dbms_output.put_line('Job number ' || jobno || ' set up.');
end;
/

Now that you have covered auditing for all user connections to the database, you would think that you have all your bases covered. But Oracle always has a surprise in store. The surprise is that you are still vulnerable to insider attacks, because SYS and SYSDBA activities are not tracked by the database auditing covered above.
I know, I know. No system is perfect and every system has a gap that can be exploited. Monitoring SYS and SYSDBA is very important because a disgruntled user can go ahead and tamper with your DB, and even though you are collecting DB audit logs, you would never come to know of it. So what do we do now?
Auditing SYS and SYSDBA activity is not as straightforward as user-activity auditing, because SYS and SYSDBA events are logged only at the OS level. This is where the problems multiply. Let us look at them in detail:
Problem 1: The files are stored in the Oracle installation directory, making them easily accessible, and permission-wise they are owned by the Oracle install user group, so any user in that group can easily tamper with them.
Problem 2: We also have to monitor the client machine to track which OS user logged in before connecting as SYSDBA or SYS.
Problem 3: If you have disabled direct client login as SYS and SYSDBA, the only way in is to log in to the DB machine itself and then log in as SYS or SYSDBA. In this case, you will have to track the machine login as well as the DB login.
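For reference, SYS/SYSDBA auditing itself is switched on with two static parameters. Note that the directory in Problem 1 is exactly what AUDIT_FILE_DEST controls; the path below is only an example:

```sql
-- Both parameters are static: set them in the SPFILE and restart.
ALTER SYSTEM SET audit_sys_operations = TRUE SCOPE = SPFILE;
-- Optionally move the OS audit files away from the Oracle-owned default:
ALTER SYSTEM SET audit_file_dest = '/var/log/oracle_audit' SCOPE = SPFILE;
```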
So, unless and until you address the problems identified above, DB audit tracking for log investigation and incident detection will not be complete. Once again, go through the articles on overall Log Collection/Log Management here and here to get an idea of how you can solve these problems.
Oracle: You have the sight now Neo!!!


What is TOR? How can I Un-TOR?

TOR, or The Onion Router, is one of the most widely used anonymity networks on the World Wide Web. Before we look at what TOR networks are, it’s important to understand the basic technology involved. This is key to defining a detection/prevention/control strategy in enterprise networks. The basic technology behind TOR is Onion Routing.
Onion Routing is a technique for anonymous communication over a network. Data packets are repeatedly encrypted and then sent through several onion routers. Like someone peeling an onion, each onion router removes one layer of encryption to uncover routing instructions, and sends the message to the next router, where this is repeated. With this approach, each node in the chain is ideally aware of only two other nodes:

  1. The preceding node from which the onion was transmitted.
  2. The following node to which the onion should next be transmitted.

Now, how is this achieved? TOR directory servers maintain the list of TOR nodes and their public keys (remember, TOR uses asymmetric crypto). Whenever a request is made, the client’s TOR software picks a chain of nodes and wraps the data packet in layers of encryption. The outermost layer of encryption is opened by the first onion router and the innermost by the last onion router. The peeling away of each layer of the onion makes it difficult or impossible to track the onion, and hence the name Onion Routing.

Beyond the basic technology, it is worth understanding how TOR operates in practice and how hard it makes life for security professionals trying to identify and control it.

  • Firstly, TOR relay nodes have to be public; their IPs cannot be hidden. Here is a sample list of TOR IP addresses. This could potentially serve as a blacklisting source for enterprise Firewalls/IPS/IDS/Proxies.
  • TOR clients can use Bridges to connect to the TOR network. Bridges are nothing but relay IP addresses that help a client reach the TOR network. Bridge/relay IP addresses make it difficult to identify TOR entry and exit nodes. Any user can install “Vidalia” and set up a bridge relay to help TOR users whose access to the public TOR nodes is blocked by an ISP or enterprise. These bridge IP addresses are handed out randomly per request received. There are several such relays, and they are hidden from the public IP pool.
  • TOR traffic is encrypted, so detection using IDS/IPS will be difficult.
  • TOR clients can use SOCKS to set up connections, so differentiating SOCKS traffic that is carrying TOR from SOCKS traffic that is not is a great challenge.
  • Several torrent applications can communicate natively over TOR. Identifying such machines will be a challenge in an enterprise with a distributed setup, remote access setup, etc.

Now that we know what TOR is, don’t we need to know how to control it in the network?
Analysis and control of TOR can be done as follows:

1. Blacklist all known TOR IP addresses – This is not fool-proof, for the bridging reasons mentioned above.
2. Write a custom script to pool bridge IPs and keep adding them to the IP blacklist. It can regularly query the bridge mail ID to get the random list of bridge IP addresses.
3. If your enterprise uses HTTP proxies only, then the SOCKS protocol should not be available in your network. Identifying a user doing SOCKS can help identify possible TOR clients.
4. P2P traffic should be blocked in the enterprise, as P2P and TOR go hand in hand. The key things to look for are browser plugins for P2P that mask themselves behind HTTP and HTTPS requests. This is quite an interesting development as far as identifying P2P users in your network is concerned.
5. If traffic analysis or flow analysis is available in your network, you can profile your network segments for all the application protocols in use. Unless you are using TLS/IPsec throughout your network, chances are that very little encrypted traffic is present. Filtering the chaff of known encrypted traffic should narrow you down to a list of machines that generate encrypted traffic and either are not supposed to or are not normal.
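If your flow data lands in a queryable store, point 5 can be sketched as a simple hunt query. The table and column names here are hypothetical; 9001 and 9030 are TOR’s default ORPort and DirPort:

```sql
-- Hypothetical schema: flows(src_ip, dst_ip, dst_port, bytes)
-- Hosts generating TLS-like or TOR-port traffic that are not on an
-- approved list of known encrypted-traffic sources:
SELECT src_ip, SUM(bytes) AS enc_bytes
FROM   flows
WHERE  dst_port IN (443, 9001, 9030)
  AND  src_ip NOT IN (SELECT ip FROM approved_tls_hosts)
GROUP  BY src_ip
ORDER  BY enc_bytes DESC;
```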

I welcome readers to share their experiences working with TOR and to let me know of any other methods of identifying and analyzing TOR in enterprise networks.