Conditional uniqueness

A quick fly-through of the options for conditional uniqueness.

Requirement #1: I want uniqueness on a column but only under certain conditions.

For example, I have an active flag and I want to make sure there is only one active record for a particular attribute, but there can be many inactive rows.

Initial setup:

create table t1
(col1      number       not null
,col2      varchar2(24) not null
,is_active number(1)    not null
,constraint pk_t1 primary key (col1)
,constraint ck_t1_is_active check (is_active in (1,0)));

Solution #1: A unique index on an expression which evaluates to null when the condition is not met.

create unique index i_t1 on t1 (case when is_active = 1 then col2 end);

unique index I_T1 created.

insert into t1 values(1,'SHAGGY',1);

1 rows inserted.

insert into t1 values(2,'SHAGGY',1);

SQL Error: ORA-00001: unique constraint (I_T1) violated
00001. 00000 -  "unique constraint (%s.%s) violated"
*Cause:    An UPDATE or INSERT statement attempted to insert a duplicate key.
           For Trusted Oracle configured in DBMS MAC mode, you may see
           this message if a duplicate entry exists at a different level.
*Action:   Either remove the unique restriction or do not insert the key.

Only one active SHAGGY allowed.
But multiple inactives allowed:

insert into t1 values(2,'SHAGGY',0);

1 rows inserted.

insert into t1 values(3,'SHAGGY',0);

1 rows inserted.

Solution #2: A virtual column with a unique constraint

drop index i_t1;

index I_T1 dropped.

alter table t1 add (vc_col2 varchar2(24) generated always as (case when is_active = 1 then col2 end));

table T1 altered.

alter table t1 add constraint uk_t1 unique (vc_col2);

table T1 altered.

Note that now we have a virtual column, we have to be very aware of insert statements with no explicit column list:

insert into t1 values(4,'SCOOBY',1);

SQL Error: ORA-00947: not enough values
00947. 00000 -  "not enough values"

Unless we’re lucky enough to be on 12c and use the INVISIBLE syntax:

alter table t1 add (vc_col2 varchar2(24) invisible generated always as (case when is_active = 1 then col2 end));

But as this example is on 11.2.0.3:

insert into t1 (col1, col2, is_active) values(4,'SCOOBY',1);

1 rows inserted.

insert into t1 (col1, col2, is_active) values(5,'SCOOBY',1);

SQL Error: ORA-00001: unique constraint (UK_T1) violated
00001. 00000 -  "unique constraint (%s.%s) violated"
*Cause:    An UPDATE or INSERT statement attempted to insert a duplicate key.
           For Trusted Oracle configured in DBMS MAC mode, you may see
           this message if a duplicate entry exists at a different level.
*Action:   Either remove the unique restriction or do not insert the key.

insert into t1 (col1, col2, is_active) values(5,'SCOOBY',0);

1 rows inserted.

insert into t1 (col1, col2, is_active) values(6,'SCOOBY',0);

1 rows inserted.

Requirement #2: Sorry, we forgot to tell you that we insert the new row first and then update the old one to be inactive, so we need a deferred constraint (hmmm!)

In which case, you can’t have deferred uniqueness enforced via a unique index, so the only option is a deferrable constraint on the virtual column.

alter table t1 drop constraint uk_t1;

table T1 altered.

alter table t1 add constraint uk_t1 unique (vc_col2) deferrable initially deferred;

table T1 altered.
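As an aside (a hedged check of my own, not part of the original script): a deferrable unique constraint has to be policed by a non-unique index, which you can confirm from USER_INDEXES:

-- UNIQUENESS should show NONUNIQUE for the index backing UK_T1
select index_name, uniqueness
from   user_indexes
where  table_name = 'T1';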

insert into t1 (col1, col2, is_active) values(7,'FRED',1);

1 rows inserted.

insert into t1 (col1, col2, is_active) values(8,'FRED',1);

1 rows inserted.

commit;

SQL Error: ORA-02091: transaction rolled back
ORA-00001: unique constraint (.UK_T1) violated
02091. 00000 -  "transaction rolled back"
*Cause:    Also see error 2092. If the transaction is aborted at a remote
           site then you will only see 2091; if aborted at host then you will
           see 2092 and 2091.
*Action:   Add rollback segment and retry the transaction.

insert into t1 (col1, col2, is_active) values(7,'FRED',1);

1 rows inserted.

insert into t1 (col1, col2, is_active) values(8,'FRED',1);

1 rows inserted.

update t1 set is_active = 0 where col1 = 7;

1 rows updated.

commit;

committed.
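As a final sanity check (my own hedged addition): vc_col2 mirrors col2 only when is_active = 1 and is NULL otherwise, which is why the inactive duplicates never collide:

-- expect vc_col2 to be NULL for every inactive row
select col1, col2, is_active, vc_col2
from   t1
order  by col1;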

See a previous post on a similar approach for a conditional foreign key.

Audit space threat – a perfect storm

According to http://en.wikipedia.org/wiki/Perfect_storm:

A “perfect storm” is an expression that describes an event where a rare combination of circumstances will aggravate a situation drastically.

The other day we had a bit of a panic when AUD$ space grew by 45GB in 15 minutes and set off some alerts.

The audit_trail parameter was set to DB, EXTENDED.

which means:

Behaves the same as AUDIT_TRAIL=DB, but also populates the SQL bind and SQL text CLOB-type columns of the SYS.AUD$ table, when available.

So, we’re logging SQL statements.
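A quick way to check what a given database is set to (my aside, hedged):

-- confirm the current audit_trail setting
select name, value
from   v$parameter
where  name = 'audit_trail';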

If in DBA_STMT_AUDIT_OPTS we have this:

USER_NAME PROXY_NAME AUDIT_OPTION  SUCCESS  FAILURE
--------- ---------- ------------  -------  ---------
                     SELECT TABLE  NOT SET  BY ACCESS

BY ACCESS means:

Oracle Database records separately each execution of a SQL statement, the use of a privilege, and access to the audited object. Given that the values for the return code, timestamp, SQL text recorded are accurate for each execution, this can help you find how many times the action was performed.

Now, given developer read-only access to production, add this sequence of events:

1. A generated SQL statement with 17,000 UNIONed SELECTs, each accessing the same table with a different id, e.g.

select * from t1 where id = 1 union
select * from t1 where id = 2 union
...
select * from t1 where id = 17000;

*no need to comment on whether this is a sensible query to be writing :)

2. The (foolish) developer then setting this statement off in prod.

3. After a few minutes, the developer sensibly realising that running this in prod might not be a good idea and cancelling.

4. Unluckily (for the developer), the sensible cancel operation on the foolish SQL statement causing an ORA-01013 failure, which is then audited.

5. The audit settings above meaning that 17,000 entries are written into AUD$, one for each of the 17,000 references to T1 in the statement, and each of those entries containing the full 17,000-line SQL text.

The additional 45GB was all taken up by the AUD$.SQLTXT LOB.

To me, this was new information – I didn’t expect AUDIT to write down each and every reference to the table from a single execution – a list of DISTINCT objects in the statement would have been my expectation. And it is a concern that given just SELECT access, it’s so easy to generate significant volumes of data.
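You can see the per-reference behaviour directly (a hedged check of my own, assuming the DB,EXTENDED settings above):

-- one audit row per table reference, not per distinct object:
-- expect a single OBJ_NAME of T1 with a very large count
-- (ORA-01013, user requested cancel, surfaces as RETURNCODE = 1013)
select obj_name, count(*) audit_entries
from   dba_audit_trail
where  returncode = 1013
group  by obj_name;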

Should you wish to play around with this scenario, the steps are:

1. If you need to change the audit_trail parameter, it's not dynamic, so you'll need to restart:

ALTER SYSTEM SET audit_trail=DB,EXTENDED scope=spfile;
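Then bounce the instance (standard commands, included for completeness):

-- as SYSDBA
shutdown immediate
startup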

2. Setup SELECT auditing BY ACCESS for failures (for testing, you can set it up for success as well of course):

AUDIT SELECT TABLE BY ACCESS WHENEVER NOT SUCCESSFUL;

3. As a non-SYS user, create a table to select from:

CREATE TABLE t1 AS SELECT * FROM dba_objects;

4. Generate a large SQL statement with lots of table references:

set serveroutput on
-- generate the big UNION query; spool or copy the output and run it as the test statement
begin
 dbms_output.put_line('select * from t1 where object_id = 1');
 for i in 2 .. 10000
 loop
     dbms_output.put_line(' union select * from t1 where object_id = '||i);
 end loop;
end;
/

5. Kick it off and then cancel the execution.

6. Wait a significant amount of time for the background session to write the audit data to AUD$. For this smaller/simpler SQL above, it took my underpowered VM several minutes to write down 4GB of audit data.
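While you wait, you can watch the rows trickle in (a hedged sketch):

-- repeat to watch the audit writer catch up
select count(*) from sys.aud$;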

7. Check out the growth of your AUD$ table (or rather the associated LOB):

select segment_name, sum(bytes) 
from dba_segments
where segment_name in ('AUD$')
or    segment_name in (select segment_name from dba_lobs where table_name = 'AUD$')
group by segment_name;
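8. When you're finished, clear down the test audit data (my sketch, sandbox only; on a real system use the DBMS_AUDIT_MGMT purge infrastructure instead):

-- remove just the rows generated by the cancelled test statement
delete from sys.aud$ where returncode = 1013;
commit;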

Oracle Support – Grrr

Is this an appropriate response?

Others have raised this before – in fact I think I’ve raised the question before – but my initial search on Oracle Support did not turn up an answer so I thought I would raise a question again.

Problem Summary
—————————————————
Question only: Null predicates in DBA_HIST_SQL_PLAN

Problem Description
—————————————————
Everyone knows that the FILTER_PREDICATES and ACCESS_PREDICATES columns in DBA_HIST_SQL_PLAN are not populated.
If I remember correctly, this was originally a bug in 9i and statspack and has been carried forward into AWR.
As a result, any execution plan fetched using DBMS_XPLAN.DISPLAY_AWR has no predicate information.
The statement which populates the underlying data has hardcoded NULLS.
Why do these continue to be null through all versions?
Is there any plan to address this?
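The claim is trivial to verify on any database with some AWR history (a hedged check of my own, not part of the SR text):

-- COUNT(column) ignores NULLs, so both predicate counts come back as zero
select count(*)                 total_rows
,      count(access_predicates) access_preds
,      count(filter_predicates) filter_preds
from   dba_hist_sql_plan;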

I even provided the AWR statement with the hardcoded NULLS:

INSERT INTO wrh$_sql_plan sp
  (snap_id, dbid, sql_id, plan_hash_value, id, operation, options, object_node,
   object#, object_owner, object_name, object_alias, object_type, optimizer,
   parent_id, depth, position, search_columns, cost, cardinality, bytes, other_tag,
   partition_start, partition_stop, partition_id, other, distribution, cpu_cost,
   io_cost, temp_space, access_predicates, filter_predicates, projection, time,
   qblock_name, remarks, timestamp, other_xml)
SELECT /*+ ordered use_nl(p) PARAM('_flush_plan_in_awr_sql', 1)
           PARAM('_cursor_plan_unparse_enabled', 'FALSE') */
       :lah_snap_id lsnid, :dbid dbid, p.sql_id, p.plan_hash_value, p.id,
       p.operation, p.options, p.object_node, p.object#, p.object_owner,
       p.object_name, p.object_alias, p.object_type, p.optimizer, p.parent_id,
       p.depth, p.position, p.search_columns, p.cost, p.cardinality, p.bytes,
       p.other_tag, p.partition_start, p.partition_stop, p.partition_id, p.other,
       p.distribution, p.cpu_cost, p.io_cost, p.temp_space,
       NULL access_predicates, NULL filter_predicates, NULL projection,
       p.time, p.qblock_name, p.remarks, p.timestamp, p.other_xml
FROM   x$kewrattrnew atn, x$kewrtsqlplan spn, v$sql_plan p
WHERE  atn.str1_kewrattr = spn.sqlid_kewrspe
AND    atn.num1_kewrattr = spn.planhash_kewrspe
AND    spn.sqlid_kewrspe = p.sql_id
AND    spn.planhash_kewrspe = p.plan_hash_value
AND    spn.childaddr_kewrspe = p.child_address

Response:

Please provide 10046 showing the issue

blah, blah, blah
instructions for doing a trace
blah, blah, blah

Might as well have been an automated response or “did you try turning it off and on again?”
RTFQ…

Unusable unique constraint

Another annoying thing about unusable indexes

I’m surprised that I can’t remember coming across this before.

I want to archive some data from a bunch of partitions.

Unfortunately, I can’t follow the virtuous circle of data lifecycling and partition the table in such a way that I can archive all the data from a particular partition (or subpartition) via exchange partition.

Without going into too much detail and skipping a bunch of other steps, roughly what I wanted to do was:
1. Mark indexes as unusable (ignore whether they’re global/local, it doesn’t matter).
2. Delete from the existing partition.
3. Rebuild the indexes.

But if you mark a unique index as unusable, you can’t then do DML on the table.

This is expected behaviour, regardless of the setting of skip_unusable_indexes. As the documentation says:

If an index is used to enforce a UNIQUE constraint on a table, then allowing insert and update operations on the table might violate the constraint.
Therefore, this setting does not disable error reporting for unusable indexes that are unique.

Well that seems to compromise one of the main benefits of marking a unique index as unusable.

SQL> get tmp
  1  drop table t1;
  2  create table t1
  3  (pt    varchar2(24)
  4  ,col1  number)
  5  PARTITION BY LIST (pt)
  6  (PARTITION PX VALUES('X')
  7  ,PARTITION PY VALUES('Y'));
  8  create unique index i1 on t1 (col1) global;
  9  insert into t1 values ('X',1);
 10  alter index i1 unusable;
 11* delete from t1 partition (px) where col1 = 1;
 12  .
SQL> @tmp

Table dropped.

Table created.

Index created.

1 row created.

Index altered.

delete from t1 partition (px) where col1 = 1
*
ERROR at line 1:
ORA-01502: index 'E668983_DBA.I1' or partition of such index is in unusable state

To get around this, I can use a unique constraint backed by a non-unique index but that can come with some extra baggage.

SQL> get tmp
  1  drop table t1;
  2  create table t1
  3  (pt    varchar2(24)
  4  ,col1  number)
  5  PARTITION BY LIST (pt)
  6  (PARTITION PX VALUES('X')
  7  ,PARTITION PY VALUES('Y'));
  8  create  index i1 on t1 (col1) global;
  9  alter table t1 add constraint uk_t1 unique (col1) using index i1;
 10  insert into t1 values ('X',1);
 11  alter table t1 disable constraint uk_t1;
 12  alter index i1 unusable;
 13* delete from t1 partition (px) where col1 = 1;
 14  .
SQL> @tmp

Table dropped.

Table created.

Index created.

Table altered.

1 row created.

Table altered.

Index altered.

1 row deleted.
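For completeness (my addition, hedged): once the archive delete is done, the cycle finishes with a rebuild and re-enable:

-- rebuild the non-unique index, then re-enable the constraint it polices
alter index i1 rebuild;
alter table t1 enable constraint uk_t1;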

Remote Surprise

Here is an example of surprising behaviour from a remote DB, via an OTN forum thread.

Set up a link to a remote DB (I've used an actual remote DB and not tested a loopback):

Remote DB:

create table t1
(col1  varchar2(1));

Local DB:

create or replace view v1 
as 
select count(*) c1 from t1@l1;

Then alternate variations on this sequence of events:

1. On local DB execute SELECT:

SELECT * FROM v1;

2. On remote DB execute

begin
insert into t1 values ('a');
commit;
end;
/

3. On local DB execute SELECT:

SELECT * FROM v1;

4. On local DB execute SELECT:

SELECT * FROM v1;

OR

1. On local DB execute SELECT:

SELECT * FROM v1;

2. On remote DB execute

begin
delete from t1; 
commit;
end;
/

3. On local DB execute:

SELECT * FROM v1;

4. On local DB execute:

SELECT * FROM v1;

You may have to execute the sequence several times to catch the “inconsistency”, but between the remote INSERT or DELETE and the local SELECT you should see the odd “old” result, which is corrected on the subsequent execution.

For example, on the local DB, with nothing happening on the remote DB between the two SELECTs:

SQL> select * from v1;

        C1
----------
         6

SQL> select * from v1;

        C1
----------
         7

The behaviour is documented.


no practical way exists to keep SCNs in a distributed system absolutely synchronized: a window always exists in which one node may have an SCN that is somewhat in the past with respect to the SCN of another node.

Because of the SCN gap, you can execute a query that uses a slightly old snapshot, so that the most recent changes to the remote database are not seen. In accordance with read consistency, a query can therefore retrieve consistent, but out-of-date data.

And two workarounds documented:

You can use the following techniques to ensure that the SCNs of the two systems are synchronized just before a query:

Because SCNs are synchronized at the end of a remote query, precede each remote query with a dummy remote query to the same site, for example, SELECT * FROM DUAL@REMOTE.

Because SCNs are synchronized at the start of every remote transaction, commit or roll back the current transaction before issuing the remote query.
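Applied to the example above (a hedged sketch, using the l1 link name from the setup):

-- Workaround 1: a throwaway remote query synchronises the SCNs first
select * from dual@l1;
select * from v1;

-- Workaround 2: end the current transaction before querying
commit;
select * from v1;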

An example of where the optimizer should ignore a hint

Here’s an example of where the optimizer really should ignore a hint.

I’m testing an implementation of deliberately unusable index partitions – i.e. some unusable index partitions, some usable.

This is 11.2.0.3 so whilst there are enhancements in 12c, they’re no use here.

SQL> create table t1
  2  (col1 number)
  3  partition by list(col1)
  4  (partition p0 values(0),
  5   partition pdef values(default));

Table created.

SQL> create index i1 on t1 (col1) local unusable;

Index created.

SQL> alter index i1 rebuild partition pdef;

Index altered.

SQL> select * from t1;

no rows selected

SQL> select * from t1 partition (p0);

no rows selected

SQL> select /*+ index(t1 i1) */ * from t1 partition (p0);

no rows selected

SQL> select * from t1 where col1 = 0;

no rows selected

SQL> select /*+ index(t1 i1) */ * from t1 where col1 = 0;
select /*+ index(t1 i1) */ * from t1 where col1 = 0
*
ERROR at line 1:
ORA-01502: index 'I1' or partition of such index is in unusable state

SQL>
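For reference (my own hedged check), the mixed usability is visible in USER_IND_PARTITIONS:

-- expect P0 UNUSABLE and PDEF USABLE after the selective rebuild
select partition_name, status
from   user_ind_partitions
where  index_name = 'I1';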

Oracle 12c VirtualBox VM

Just a quick note to highlight that the Oracle pre-built Developer VMs have been updated for Oracle 12c.

Oracle pre-built Developer VMs
