Oracle: You have the sight now Neo!!!

The objective of this post (the Matrix quote in the title notwithstanding) is to help the Database and Security teams enable log collection, processing and monitoring of Security Events of Interest for a Business Critical Application Database. I have posted some articles on Overall Log Collection/Log Management here and here.
The key thing to note about DB Log Management is that the database audit log does not follow the syslog standard, so a DB connector/collector/agent/parser is needed to collect, format and process the data in the DB for Auditing and Security Correlation purposes. Several SIEM solutions have the ability to pull the audit data collected by the Oracle DB, answering “exactly at what time, which user executed which commands on the database”.
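
As a rough illustration (not any specific vendor's connector), the query below shows the kind of record such a collector typically pulls from Oracle's standard DBA_AUDIT_TRAIL dictionary view; the SQL_TEXT column is populated only when extended auditing is enabled, which is covered next.

-- Illustrative collector-style query against the standard audit trail view.
-- SQL_TEXT (and SQL_BIND) are populated only with extended auditing enabled.
SELECT timestamp, os_username, username, userhost,
       action_name, obj_name, returncode, sql_text
  FROM dba_audit_trail
 WHERE timestamp > SYSDATE - 1/24      -- last hour of audit records
 ORDER BY timestamp;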

ORACLE AUDITING:
There are many different Oracle Audit facilities available for configuration:

  • Oracle audit
  • System triggers
  • Update, delete, and insert triggers
  • Fine-grained audit
  • System logs

The best method to use is the Oracle Audit Facility with the audit trail set to the “db,extended” option (called “db_extended” in older releases) so that the full Command History, including the SQL text, is captured.
This helps in tracking the exact commands executed by an attacker and supports the Forensic Investigation of the attacks. Events like privilege misuse, elevation-of-privilege attacks, SQL Injection, Brute Force etc. can be caught early and contained with proactive monitoring, alerting and response capabilities.
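
As a minimal sketch (using the standard AUDIT_TRAIL initialization parameter; verify the exact value syntax against your release), enabling the extended database audit trail looks like this:

-- Enable the extended DB audit trail so SQL text and bind values are recorded.
-- AUDIT_TRAIL is a static parameter, so a restart is required for it to take effect.
ALTER SYSTEM SET audit_trail = DB, EXTENDED SCOPE = SPFILE;
-- On older releases the equivalent single value is DB_EXTENDED.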

REQUIREMENTS:

  1. Create a Unique Tablespace for the Audit Table: The first step is to create a separate tablespace just for auditing. The Oracle audit table (SYS.AUD$) is stored in the SYSTEM tablespace by default. Because Oracle generates a lot of audit records, the audit table grows quickly and can fill that tablespace, which can bring the database down. To avoid this problem, we must move the Oracle audit table into its own tablespace with its own data files, separate from the core Oracle tables (a sketch follows after this list).
  2. Enable Auditing on the Oracle Database: Auditing can be enabled either for the entire DB or for specific tables. Please refer to the audit requirements table below showing the typical auditing options recommended. For business critical applications, we recommend auditing SELECTs, UPDATEs, and INSERTs to critical tables, such as salary info, credit card info, patient info, financial data, intellectual property, and so on.

For high performance databases where auditing cannot be enabled on all the tables, we can configure the audit statement with “user.table_name”, the specific table for which we want to audit that action. We can also scope auditing to “user_name”, the names of the users whose specific actions we want to audit. Both variants are sketched below.
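
A minimal sketch of step 1 follows, assuming a hypothetical tablespace name and datafile path; on 11g and later the supported way to relocate SYS.AUD$ is the DBMS_AUDIT_MGMT package (older releases typically used ALTER TABLE ... MOVE):

-- Hypothetical tablespace name and datafile path; size it for your audit volume.
CREATE TABLESPACE audit_tbs
  DATAFILE '/u02/oradata/ORCL/audit_tbs01.dbf' SIZE 2G AUTOEXTEND ON;

-- Relocate the standard audit trail (SYS.AUD$) into the new tablespace.
BEGIN
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    audit_trail_location_value => 'AUDIT_TBS');
END;
/

And a sketch of step 2, the table- and user-scoped auditing described above (the table and user names are hypothetical placeholders):

-- Object-level auditing on a business-critical table, one record per statement.
AUDIT SELECT, INSERT, UPDATE, DELETE ON appowner.salary_info BY ACCESS;

-- Statement-level auditing scoped to a specific user.
AUDIT SELECT TABLE, INSERT TABLE, UPDATE TABLE BY app_batch_user BY ACCESS;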

AUDIT REQUIREMENTS:
Below is a table showing some sample requirements for Database Auditing. This is just indicative and may vary from environment to environment depending on the business needs of your organization.

AUDIT_OPTION           SUCCESS     FAILURE
GRANT PROCEDURE        BY ACCESS   BY ACCESS
SYSTEM GRANT           BY ACCESS   BY ACCESS
ALTER SEQUENCE         BY ACCESS   BY ACCESS
GRANT TABLE            BY ACCESS   BY ACCESS
ALTER TABLE            BY ACCESS   BY ACCESS
TRIGGER                BY ACCESS   BY ACCESS
PROCEDURE              BY ACCESS   BY ACCESS
ROLE                   BY ACCESS   BY ACCESS
PUBLIC DATABASE LINK   BY ACCESS   BY ACCESS
DATABASE LINK          BY ACCESS   BY ACCESS
SEQUENCE               BY ACCESS   BY ACCESS
VIEW                   BY ACCESS   BY ACCESS
PUBLIC SYNONYM         BY ACCESS   BY ACCESS
SYNONYM                BY ACCESS   BY ACCESS
ALTER USER             BY ACCESS   BY ACCESS
TYPE                   BY ACCESS   BY ACCESS
USER                   BY ACCESS   BY ACCESS
TABLE                  BY ACCESS   BY ACCESS
CREATE SESSION         BY ACCESS   BY ACCESS
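
As a rough mapping (standard AUDIT statement syntax; leaving out the WHENEVER clause audits both successful and unsuccessful attempts), a few of the options above translate into statements like these:

-- A few of the options above expressed as AUDIT statements; with no WHENEVER
-- clause, both successful and unsuccessful attempts are audited.
AUDIT GRANT PROCEDURE BY ACCESS;
AUDIT SYSTEM GRANT BY ACCESS;
AUDIT ALTER TABLE BY ACCESS;
AUDIT DATABASE LINK BY ACCESS;
AUDIT CREATE SESSION BY ACCESS;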

HOUSEKEEPING:

Audit logging in the Oracle Database consumes disk and database space. This data can be purged on a regular schedule to keep the audit table clutter-free and performing well. Since the audit data is being collected into a SIEM solution, long-term retention should be handled at the SIEM rather than in the database.

In order to perform the housekeeping, the recommendation is as follows:
1. Create a Truncate Procedure

CREATE OR REPLACE PROCEDURE clean_audit IS
BEGIN
  -- Truncate the audit trail table; run as a suitably privileged user.
  -- TRUNCATE is DDL, so no explicit COMMIT is needed.
  EXECUTE IMMEDIATE 'TRUNCATE TABLE aud$';
END;
/

2. Schedule the Truncate Procedure

-- Remove any previously submitted CLEAN_AUDIT jobs, then submit the job once.
-- (DBA_JOBS/USER_JOBS are views, so jobs are removed with DBMS_JOB.REMOVE
-- rather than DELETE. Enable SERVEROUTPUT to see the confirmation message.)
DECLARE
  jobno NUMBER;
BEGIN
  FOR j IN (SELECT job FROM user_jobs WHERE UPPER(what) LIKE 'CLEAN_AUDIT%') LOOP
    DBMS_JOB.REMOVE(j.job);
  END LOOP;

  -- Run CLEAN_AUDIT daily at 2:00 AM.
  DBMS_JOB.SUBMIT(
    job       => jobno,
    what      => 'CLEAN_AUDIT;',
    next_date => TRUNC(SYSDATE + 1) + 2/24,
    interval  => 'TRUNC(SYSDATE + 1) + 2/24  /* 2:00 AM */'
  );
  COMMIT;

  DBMS_OUTPUT.PUT_LINE('Job number ' || jobno || ' set up.');
END;
/

Now that you have covered auditing for all user connections to the database, you would think that you have all your bases covered. But Oracle always has a surprise in store: you are still vulnerable to insider attacks, because SYS and SYSDBA activities are not tracked by the database auditing covered above.
Gotcha!!!
I know, I know. No system is perfect and every system has a gap that can be exploited. Monitoring SYS and SYSDBA is very important because a disgruntled user with those privileges can tamper with your DB, and even though you are collecting DB audit logs, you would never come to know of it. So what do we do now?
Auditing SYS and SYSDBA activity is not as straightforward as user activity auditing, because SYS and SYSDBA events are logged only at the OS level. This is where the problems increase manifold. Let us look at them in detail:
Problem 1: The audit files are stored under the Oracle installation directory, making them easily accessible, and they are owned by the Oracle install user/group, so they can be tampered with by any user in that group.
Problem 2: We also have to monitor the client machine to track which OS user logged in before connecting as SYS or SYSDBA.
Problem 3: If you have disabled direct client logins as SYS and SYSDBA, the only way in is to log in to the DB machine itself and then connect as SYS or SYSDBA. In this case, you have to track the machine login as well as the DB login.
So, until you address the problems identified above, DB audit tracking for log investigation and incident detection will not be complete. Once again, go through the articles on Overall Log Collection/Log Management here and here to get an idea of how you can solve these problems.
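
One hedged starting point (using the standard AUDIT_SYS_OPERATIONS and AUDIT_SYSLOG_LEVEL initialization parameters; syslog support depends on platform and release) is to turn on SYS auditing and push those records into syslog, so they leave the box and land in the same pipeline your SIEM already collects:

-- Audit actions performed as SYS / SYSDBA / SYSOPER (static parameter, restart needed).
ALTER SYSTEM SET audit_sys_operations = TRUE SCOPE = SPFILE;

-- On Unix-like platforms, send the OS audit records to syslog so they can be
-- forwarded off the host, out of reach of the Oracle install group (Problem 1).
ALTER SYSTEM SET audit_syslog_level = 'LOCAL1.WARNING' SCOPE = SPFILE;

That tackles Problem 1; Problems 2 and 3 still require OS-level login monitoring on the client and on the DB server, which is exactly where the log management articles above come in.
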
Oracle: You have the sight now Neo!!!


Episode 3 – Security Investigation Series – Should I press the panic button?

 

Scenario:

  • A User has a machine installed with an Application Client and connects to the Web Application Server which in turn connects to the Database servers on the back-end.
  • The client machine has been reported to be slow.
  • Some basic SIEM-based analysis of network logs for the client machine shows network connections resembling a port scan from the client machine within its own subnet.
  • The machine is also making seemingly random Internet connections to public IP addresses.
  • On checking the anti-virus software on the machine, it shows as being outdated.
  • Suspicion is that the machine is infected with some kind of malware.
  • Since this client machine has connections to the critical application and database servers, this event is critical.

Should I press the panic button?

Before we press the panic button (raise an official Security Incident with an SLA tagged to it), there are several questions we need answers for. This is something I learnt from one of my mentors (he blogs here). What I learnt was that before beginning a Security Incident investigation and pressing the panic button, we need to sit down and collect as much corroborating data as possible. In order to do that, we can follow a series of logical steps:

  1. Collect the Public IP Addresses it connects to and find out more about them. Sometimes a simple Google search can help to determine the authenticity and reputation of the website/domain or public IP.
  2. Establish Ground Zero for the infection. This can be determined by historical log data. Some SIEM tools do this natively for you. Else, you can use a white board or pencil/paper to draw this visualization yourself.
  3. Gather as many infected machine IPs as possible and sort them geographically, by network zone, and if possible by VLAN as well.
  4. Identify any other transitive infections.
  5. Analyze one of the infected machines to gather malware analysis data.

Next Steps – Remediation, Control etc

  • Quarantine all the machines infected.
  • In case of the server, ensure that all instances (if any) of the infected file are cleaned.
  • In case of a database, if we exactly know the infection source and possible file names, try purging them from the database contents.
  • Submit the samples of the Malware to the Security Vendor and ask them to start working on a Signature or definition.
  • Reverse engineer the malware to identify specific network connections/behaviors so that they can go into our prevention systems in the network (read Network IPS, Host IPS, Proxy etc). Reverse engineering is a good skill set to have because, for zero-day scenarios, vendors are slow to react. If a reverse engineer is available, the job of getting a control measure put in place (although a crude one) will be quicker.
  • Document the characteristics of the malware, the response taken and the findings during the course of this entire incident.
And Finally, if you did all of the above after following the Security Investigation Series blog, say thanks!!!

What is TOR? How can I Un-TOR?

TOR, or The Onion Router, is one of the most widely used anonymity networks on the World Wide Web. Before we understand what TOR networks are, it is important to understand the basic technology involved. This is key to defining a detection/prevention/control strategy in enterprise networks. The basic technology behind TOR is onion routing.
Onion routing is a technique for anonymous communication over a network. Data packets are repeatedly encrypted and then sent through several onion routers. Like someone peeling an onion, each onion router removes a layer of encryption to uncover routing instructions and sends the message to the next router, where this is repeated. Using this approach means each node in the chain is ideally aware of only 2 other nodes:

  1. The preceding node from which the onion was transmitted.
  2. The succeeding node to which the onion should next be transmitted.

Now how is this achieved? Just as a routed network has routing tables, the onion network has directory nodes that maintain the list of TOR nodes and their public keys (remember, TOR uses asymmetric crypto). Whenever a request is made, the client uses these keys to craft the data packet in layers. The outermost layer of encryption will be opened by the first onion router and the innermost encryption will be opened by the last onion router. The peeling away of each layer of the onion makes it difficult or impossible to trace the path, and hence the name onion routing.

Beyond the basic technology, it is worth understanding how TOR works in practice and how hard it makes life for security professionals who need to identify and control it.

  • Firstly, TOR Nodes have to be public. Their IPs cannot be hidden. Here is a sample list of TOR IP Addresses. This could potentially serve as a blacklisting source in Enterprise Firewalls/IPS/IDS/Proxy.
  • TOR clients can use bridges to reach the TOR network. Bridges are nothing but relay IP addresses that help a client connect to the TOR network without touching the publicly listed nodes, which makes it difficult to identify TOR entry and exit points. Any user can install “Vidalia” and set up a bridge relay to help TOR users whose access to the public TOR nodes is blocked by an ISP or enterprise. Bridge addresses are handed out a few at a time per request, and these relays are kept out of the public IP pool.
  • TOR traffic is encrypted, and hence detection using IDS/IPS will be difficult.
  • TOR clients can use SOCKS to set up connections, so differentiating SOCKS traffic that carries TOR from SOCKS traffic that does not is a great challenge.
  • Several torrent applications can natively communicate over TOR. Identifying machines running such software will be a challenge in an enterprise with a distributed setup, remote access setup, etc.

Now that we know what TOR is, don't we need to know how to control it in the network?
Analysis and Control of TOR can be done as follows:

1. Blacklist all known TOR IP addresses. This is essentially not fool-proof, for the bridging reasons mentioned above.
2. Use a custom script to poll bridge IPs and keep adding them to the IP blacklist. The script can regularly query the bridge distribution email address to get the current random list of bridge IP addresses.
3. If your enterprise uses HTTP proxies only, then the SOCKS protocol should not be available in your network. Identifying a user doing SOCKS can help identify possible TOR clients.
4. P2P traffic should be blocked in the enterprise, as P2P and TOR go hand in hand. The key thing to look for is browser plugins for P2P that mask themselves behind HTTP and HTTPS requests. This is quite an interesting development as far as identifying P2P users in your network is concerned.
5. If traffic analysis or flow analysis is available in your network, you can profile your network segments for all the application protocols in use. Unless you are using TLS/IPsec throughout your network, chances are that relatively little encrypted traffic will be found. Filtering out the chaff of known, expected encrypted traffic should narrow you down to a list of machines generating encrypted traffic that are not supposed to, or that are not behaving normally.

I welcome readers to share their experiences of working with TOR and to let me know of any other methods for identifying and analyzing TOR in enterprise networks.

