June 6, 2014
According to http://en.wikipedia.org/wiki/Perfect_storm:
A “perfect storm” is an expression that describes an event where a rare combination of circumstances will aggravate a situation drastically.
The other day we had a bit of a panic when the space used by SYS.AUD$ grew by 45GB in 15 minutes and set off some alerts.
The audit_trail parameter was set to DB, EXTENDED.
Behaves the same as AUDIT_TRAIL=DB, but also populates the SQL bind and SQL text CLOB-type columns of the SYS.AUD$ table, when available.
So, we’re logging SQL statements.
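If you want to confirm what your instance is currently set to, a quick check against the standard parameter view:

```
select value
  from v$parameter
 where name = 'audit_trail';
```

(`SHOW PARAMETER audit_trail` from SQL*Plus does the same job.)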
If in DBA_STMT_AUDIT_OPTS we have this:
USER_NAME  PROXY_NAME  AUDIT_OPTION  SUCCESS  FAILURE
---------  ----------  ------------  -------  ---------
                       SELECT TABLE  NOT SET  BY ACCESS
BY ACCESS means:
Oracle Database records separately each execution of a SQL statement, the use of a privilege, and access to the audited object. Given that the values for the return code, timestamp, SQL text recorded are accurate for each execution, this can help you find how many times the action was performed.
Now, given developer read-only access to production, if we then add this sequence of events:
1. A generated SQL statement with 17,000 UNIONed SELECTs, each accessing the same table with a different id, e.g.
select * from t1 where id = 1 union select * from t1 where id = 2 union ... select * from t1 where id = 17000;
*no need to comment on whether this is a sensible query to be writing🙂
2. The (foolish) developer then setting this statement off in prod.
3. After a few minutes, the developer sensibly realising that running this in prod might not be a good idea and cancelling.
4. Unluckily (for the developer), the sensible cancel operation on the foolish SQL statement causing an ORA-01013 failure, which will then be audited.
5. The audit settings above meaning that 17,000 entries are written into AUD$, one for each of the 17,000 references to T1 in the SQL statement above, and each entry containing the full 17,000-line SQL statement.
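A rough back-of-envelope check makes the 45GB less surprising. Assuming each UNION line is around 40 characters (my estimate, not a measurement), and remembering that CLOB data is stored in the two-byte AL16UTF16 character set, 17,000 copies of a 17,000-line statement come to roughly:

```
-- back-of-envelope: lines x copies x chars-per-line x 2 bytes per char
select round(17000 * 17000 * 40 * 2 / power(1024, 3), 1) as approx_gb
  from dual;
-- approx 21.5 GB before LOB chunk rounding
```

That's around 21.5GB of raw character data before LOB chunk rounding and the other AUD$ columns are counted, so 45GB of actual segment space is well within reach.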
The additional 45GB was all taken up by the AUD$.SQLTEXT LOB.
To me, this was new information – I didn’t expect AUDIT to write down each and every reference to the table from a single execution – a list of DISTINCT objects in the statement would have been my expectation. And it is a concern that given just SELECT access, it’s so easy to generate significant volumes of data.
Should you wish to play around with this scenario, steps are:
1. If you need to change the audit_trail parameter, note that it's not dynamic, so you'll need to restart:
ALTER SYSTEM SET audit_trail=DB,EXTENDED scope=spfile;
2. Setup SELECT auditing BY ACCESS for failures (for testing, you can set it up for success as well of course):
AUDIT SELECT TABLE BY ACCESS WHENEVER NOT SUCCESSFUL;
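You can confirm the option has been set by querying DBA_STMT_AUDIT_OPTS, which should show the same row as earlier:

```
select user_name, proxy_name, audit_option, success, failure
  from dba_stmt_audit_opts
 where audit_option = 'SELECT TABLE';
```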
3. As a non-SYS user, create a table to select from:
CREATE TABLE t1 AS SELECT * FROM dba_objects;
4. Generate a large SQL statement with lots of table references:
set serveroutput on size unlimited
begin
  dbms_output.put_line('select * from t1 where object_id = 1');
  for i in 2 .. 10000 loop
    dbms_output.put_line(' union select * from t1 where object_id = '||i);
  end loop;
end;
/
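DBMS_OUTPUT only prints the statement, so you'll need to capture it into a script to run. One way (the file name here is just an example) is to spool it from SQL*Plus:

```
set serveroutput on size unlimited
set feedback off termout off
spool gen_query.sql
-- run the generation block from step 4 here
spool off
set termout on feedback on
```

Then `@gen_query.sql` runs the generated statement.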
5. Kick it off and then cancel the execution.
6. Wait a significant amount of time for the background session to write the audit data to AUD$. For this smaller/simpler SQL above, it took my underpowered VM several minutes to write down 4GB of audit data.
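While you wait, you can watch the failed executions arrive in the audit trail; a cancelled statement shows up with return code 1013 (T1 being the test table from step 3):

```
select returncode, count(*)
  from dba_audit_trail
 where obj_name = 'T1'
 group by returncode;
```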
7. Check out the growth of your AUD$ table (or rather the associated LOB):
select segment_name, sum(bytes)
  from dba_segments
 where segment_name = 'AUD$'
    or segment_name in (select segment_name
                          from dba_lobs
                         where table_name = 'AUD$')
 group by segment_name;
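Once you're done, it's worth switching the auditing back off and reclaiming the space. DBMS_AUDIT_MGMT is the supported route for audit trail cleanup from 11g onwards; for a throwaway test system, a simple DELETE as SYS also works:

```
noaudit select table whenever not successful;

-- test-system cleanup only; use DBMS_AUDIT_MGMT on anything real
delete from sys.aud$ where obj$name = 'T1';
commit;
```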