OLTP Compression, Drop Column, Partition Exchange

With Basic Compression, you cannot drop a column.

create table t1
(col1 number
,col2 number)
compress;
 
table T1 created.

alter table t1 drop column col2;

SQL Error: ORA-39726: unsupported add/drop column operation on compressed tables
39726. 00000 -  "unsupported add/drop column operation on compressed tables"
*Cause:    An unsupported add/drop column operation for compressed table
           was attempted.
*Action:   When adding a column, do not specify a default value.
           DROP column is only supported in the form of SET UNUSED column
           (meta-data drop column).

But with Basic Compression, you can set as unused.

alter table t1 set unused column col2;  

table T1 altered
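
If you want to double-check what SET UNUSED did, a quick suggestion is USER_UNUSED_COL_TABS, which records a count of unused columns per table:

-- after the SET UNUSED above, T1 should appear here with a count of 1
select * 
from   user_unused_col_tabs 
where  table_name = 'T1';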

With OLTP Compression, on the face of it, you can drop a column.

drop table t1;

create table t1
(col1 number
,col2 number)
compress for oltp;

select table_name, column_name, hidden_column 
from   user_tab_cols 
where  table_name = 'T1' order by column_id;

TABLE_NAME COLUMN_NAME HIDDEN_COLUMN
---------- ----------- -------------
T1         COL1        NO
T1         COL2        NO

alter table t1 drop column col2;

table T1 altered

But this is misdirection.
In reality, the "dropped" column is just renamed.
We can see that the column is not dropped but hidden:

select table_name, column_name, hidden_column 
from   user_tab_cols 
where  table_name = 'T1' order by column_id;

TABLE_NAME COLUMN_NAME                HIDDEN_COLUMN
---------- -------------------------- -------------
T1         COL1                       NO
T1         SYS_C00002_14111103:30:14$ YES

Note
1. The hidden column name is system-generated and suffixed with the timestamp of the operation in YYMMDDHH24:MI:SS format plus a trailing $.
2. Hidden columns are not exposed via USER_TAB_COLUMNS, hence the use of USER_TAB_COLS above (illustrated below).
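
To illustrate note 2, the same sort of query against USER_TAB_COLUMNS should return only COL1, because that view filters out hidden columns:

select table_name, column_name 
from   user_tab_columns 
where  table_name = 'T1' order by column_id;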

I’m not a fan of this behaviour.
It is intended and documented.

It is not mentioned in the OLTP Compression master note, Doc ID 1223705.1:

https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1223705.1

But it is covered in Doc ID 1068820.1:

https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1068820.1

Only advanced compression, which requires the Advanced Compression license, supports dropping columns on compressed tables, but even in this situation there is no real drop: internally the database sets the column UNUSED to avoid long-running decompression and recompression operations.
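
One way to see that for yourself, assuming the internal SET UNUSED is recorded in the usual dictionary view, is to check USER_UNUSED_COL_TABS after the drop above:

-- if the drop really is a meta-data SET UNUSED, T1 should be listed here
select * 
from   user_unused_col_tabs 
where  table_name = 'T1';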

Why might this matter?

I can think of at least one time when this might matter – partition exchange.
Ok, the circumstances are probably niche and more relevant for a DEV environment than PROD.
But imagine you add a column to one side of the partition exchange operation and then think better of it and immediately drop it.

drop table t1;
drop table t2;

create table t1
(col1 number
,col2 number)
compress for oltp;
 
create table t2
(col1 number
,col2 number)
partition by range (col1) interval(1)
(partition p0 values less than (0))
compress for oltp;

Show that partition exchange works initially:

lock table t2 partition for (1) in exclusive mode;

lock succeeded.

alter table t2 exchange partition for (1) with table t1;

table T2 altered.

But if I add and drop a column

alter table t1 add col3 number;

table T1 altered.

alter table t1 drop column col3;

table T1 altered.

Then my partition exchange no longer works:

alter table t2 exchange partition for (1) with table t1;

SQL Error: ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
14097. 00000 -  "column type or size mismatch in ALTER TABLE EXCHANGE PARTITION"
*Cause:    The corresponding columns in the tables specified in the
           ALTER TABLE EXCHANGE PARTITION are of different type or size
*Action:   Ensure that the two tables have the same number of columns
           with the same type and size.
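
A quick way to see why is to compare the full column lists of the two tables, hidden columns included, using the same USER_TAB_COLS view as before. T1 should now have an extra hidden SYS_... column which T2 does not:

select table_name, column_name, hidden_column 
from   user_tab_cols 
where  table_name in ('T1','T2') 
order  by table_name, column_id;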

Until I recreate the table or use DBMS_REDEFINITION.
Or until I do the same to the other side of the exchange operation:

alter table t2 add col345 number;

table T2 altered.

alter table t2 drop column col345;

table T2 altered.

Then

alter table t2 exchange partition for (1) with table t1;

table T2 altered.

Oracle 12c Multitenant – Inmemory admin basics

A couple of very, very basic observations on getting going with 12c Inmemory in a multitenant database.

1. When trying to set inmemory_size within a PDB while inmemory_size is 0 in the CDB, you get:
ORA-02096: specified initialization parameter is not modifiable with this option

SQL> alter session set container = cdb$root;

Session altered.

SQL> select value from v$parameter where name = 'inmemory_size';

VALUE
--------------------------------------------------------------------------------
0

SQL> alter session set container = orcl;

Session altered.

SQL> alter session set inmemory_size=100M;

alter session set inmemory_size=100M
                  *
ERROR at line 1:
ORA-02096: specified initialization parameter is not modifiable with this option

2. You have to use scope=spfile when setting inmemory_size in the CDB, and it requires a restart to take effect

SQL> alter session set container = cdb$root;

Session altered.

SQL> alter system set inmemory_size = 500M;

alter system set inmemory_size = 500M
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-02095: specified initialization parameter cannot be modified

SQL> alter system set inmemory_size = 500M scope=spfile;

System altered.

SQL> select * from v$sga;

NAME                 VALUE      CON_ID
-------------------- ---------- ----------
Fixed Size           2926472    0
Variable Size        1224738936 0
Database Buffers     905969664  0
Redo Buffers         13848576   0

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 2147483648 bytes
Fixed Size               2926472 bytes
Variable Size            1275070584 bytes
Database Buffers         318767104 bytes
Redo Buffers             13848576 bytes
In-Memory Area           536870912 bytes
Database mounted.
Database opened.

3. You don’t use scope=spfile when setting inmemory_size in a PDB, otherwise you get:
ORA-02096: specified initialization parameter is not modifiable with this option

SQL> alter session set container = orcl;

Session altered.

SQL> alter system set inmemory_size=400M scope=spfile;

alter system set inmemory_size=400M scope=spfile
                                               *
ERROR at line 1:
ORA-02096: specified initialization parameter is not modifiable with this option

SQL> alter system set inmemory_size=400M;

System altered.

4. If you try to set the PDB inmemory_size larger than the CDB’s, you get:
ORA-02097: parameter cannot be modified because specified value is invalid

SQL> alter session set container = orcl;

Session altered.

SQL> alter system set inmemory_size=600M;

alter system set inmemory_size=600M
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-02095: specified initialization parameter cannot be modified

Added:

5. Similar to point (2), if we change inmemory_size in the CDB, we have to restart for the change to take effect

SQL> select sys_context('USERENV','CON_NAME') from dual;

SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
CDB$ROOT

SQL> alter system set inmemory_size=500M scope=spfile;

System altered.

SQL> select value from v$parameter where name = 'inmemory_size';

VALUE
--------------------------------------------------------------------------------
218103808

SQL> shutdown immediate  
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 2147483648 bytes
Fixed Size               2926472 bytes
Variable Size            1157630072 bytes
Database Buffers         436207616 bytes
Redo Buffers             13848576 bytes
In-Memory Area           536870912 bytes
Database mounted.
Database opened.
SQL> 

6. If we modify inmemory_size for a PDB, it takes effect immediately (presumably because it is really just a maximum quota on the actual memory area established in the CDB?):

SQL> alter session set container = orcl;

Session altered.

SQL> alter system set inmemory_size=200M;

System altered.

SQL> select value from v$parameter where name = 'inmemory_size';

VALUE
--------------------------------------------------------------------------------
209715200

SQL> alter system set inmemory_size=400M;

System altered.

SQL> select value from v$parameter where name = 'inmemory_size';

VALUE
--------------------------------------------------------------------------------
419430400

7. While we showed in point (4) that a single PDB’s inmemory_size cannot exceed the CDB’s, we can overallocate such “maximum quotas” across PDBs, i.e. the sum of inmemory_size across all PDBs can exceed the CDB size (which prompts future investigation):

SQL> alter session set container=cdb$root;

Session altered.

SQL> select value from v$parameter where name = 'inmemory_size';

VALUE
--------------------------------------------------------------------------------
536870912

SQL> alter session set container=orcl;

Session altered.

SQL> select value from v$parameter where name = 'inmemory_size';

VALUE
--------------------------------------------------------------------------------
419430400

SQL> alter session set container=cdb$root;

Session altered.

SQL> create pluggable database dom from orcl file_name_convert=
 2  ('/home/oracle/app/oracle/oradata/cdb1/orcl',
 3  '/home/oracle/app/oracle/oradata/cdb1/dom');

Pluggable database created.

SQL> alter pluggable database dom open;

Pluggable database altered.

SQL> alter session set container = dom;

Session altered.

SQL> alter system set inmemory_size = 400M;

System altered.

SQL> select value from v$parameter where name = 'inmemory_size';

VALUE
--------------------------------------------------------------------------------
419430400
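
One way to see those per-container settings side by side, assuming (something to verify) that V$SYSTEM_PARAMETER queried from CDB$ROOT lists container-specific values with their CON_ID, would be:

-- assumption: run from cdb$root; container-specific values appear with their con_id
select con_id, name, value 
from   v$system_parameter 
where  name = 'inmemory_size' 
order  by con_id;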

Plan Instability

There seems to me to be a relatively simple choice.

Either you accept that the Oracle Optimizer has a wealth of complicated strategies and, in this complex effort to get the best executions it can, will inevitably get it wrong sometimes (or take some extra executions to realise it’s wrong).

Or you stick your head in the sand and raise a bug for every unexpected poor execution or plan flip.

But let’s say that, above all else, you wanted Plan Stability.

This is a widespread desire.

What would be the best strategy?
And to what lengths would you take it?

SQL Plan Management features are designed to give you some degree of stability.

You could baseline statements which have given you a problem in the past.

What if that is not deemed adequate?

So, on the one hand, you could baseline everything you could find.

Taking a long history of AWR, plus regular snapshots of V$SQL, you could load everything you find into SQL Plan Baselines.
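
As a rough sketch of the V$SQL/cursor cache half of that (the AWR half would go via a SQL Tuning Set and DBMS_SPM.LOAD_PLANS_FROM_SQLSET), assuming you have a SQL_ID to hand:

-- '&sql_id' is a placeholder; this loads all plans currently in the cursor
-- cache for that statement into SQL plan baselines
declare
 l_plans pls_integer;
begin
 l_plans := dbms_spm.load_plans_from_cursor_cache(sql_id => '&sql_id');
 dbms_output.put_line(l_plans || ' plan(s) loaded');
end;
/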

But, depending on your application, you might regularly get new SQL appearing unprotected by any baseline.
In one Hibernate application I know, a change in the application domain model often results in the table aliases changing, meaning that there is lots of brand new, but very familiar, SQL.
So you then become dependent on having a reliable testing infrastructure which will generate nearly all the SQL you’re likely to get, in advance of production.

In addition, you might want multiple plans in your baseline – for ACS, for example – and then, once that extra bindset awareness is flushed out of memory, you need a couple of executions to rediscover it (unless you use a SQL Patch to inject BIND_AWARE into the specific SQL statements).
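
For the record, on 11.2/12.1 that sort of SQL Patch is normally created via the internal, undocumented DBMS_SQLDIAG_INTERNAL interface (from 12.2 there is a documented DBMS_SQLDIAG.CREATE_SQL_PATCH). Treat the sketch below as something to verify on your own version:

-- sketch only: sql_text is a placeholder for the exact statement text
begin
 sys.dbms_sqldiag_internal.i_create_patch
 (sql_text  => 'select /* your problem statement, verbatim */ dummy from dual',
  hint_text => 'BIND_AWARE',
  name      => 'my_bind_aware_patch');
end;
/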

It’s certainly no guarantee of stability.

What is the alternative?

I like to argue that most problematic executions stem from two sources:

  1. Badly written SQL
  2. Using poor combinations of features – e.g. bind variable peeking + histograms

The other day I made a comment to someone that “if you’re heavily reliant on ACS, perhaps you should rethink whether you should be using literals rather than bind variables”.

So, you might then take the position that plan instability stems from increasing complexity in the optimizer.

In which case, maybe a viable strategy is to turn off much of that complexity (see the parameter sketch after this list):

  • Bind Variable Peeking
  • Cardinality feedback
  • Adaptive Cursor Sharing
  • Dynamic Sampling
  • Adaptive direct path reads
  • 12c adaptive execution plans
  • Anything with the word “adaptive” in it?
  • Default Stats job and/or default method_opt histogram behaviour
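
To make that concrete, the list above roughly translates to settings such as the following. This is an illustration only: most are hidden parameters, the names vary by version, and none of it should be done without Oracle Support’s blessing.

-- illustrative 11.2-style settings; verify names and support status per version
alter system set "_optim_peek_user_binds" = false;             -- bind variable peeking
alter system set "_optimizer_use_feedback" = false;            -- cardinality feedback
alter system set "_optimizer_adaptive_cursor_sharing" = false; -- adaptive cursor sharing
alter system set optimizer_dynamic_sampling = 0;               -- dynamic sampling
alter system set "_serial_direct_read" = never;                -- adaptive direct path reads
alter system set optimizer_adaptive_features = false;          -- 12c adaptive features (12.1 only)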

This seems quite radical to many. Perhaps justifiably.

Perhaps it would be nice if there were new optimizer strategies – OPTIMIZER_MODE? – perhaps “AGGRESSIVE_PERFORMANCE” (maybe “ADAPTIVE_PERFORMANCE_WITH_THE_ODD_HICCUP” ?) vs “PERFORMANCE_STABILITY” which were an umbrella of many of the features above.

Thoughts?

To what extent are you concerned with performance stability, above all else?

How many of the optimizer features in each release are actually aligned with that widespread concern?

Conditional uniqueness

A quick fly-through of the options for conditional uniqueness.

Requirement #1: I want uniqueness on a column but only under certain conditions.

For example, I have an active flag and I want to make sure there is only one active record for a particular attribute but there can be many inactive rows.

Initial setup:

create table t1
(col1      number       not null
,col2      varchar2(24) not null
,is_active number(1)    not null
,constraint pk_t1 primary key (col1)
,constraint ck_t1_is_active check (is_active in (1,0)));

Solution #1: A unique index on an expression which evaluates to null when the condition is not met.

create unique index i_t1 on t1 (case when is_active = 1 then col2 end);

unique index I_T1 created.

insert into t1 values(1,'SHAGGY',1);

1 rows inserted.

insert into t1 values(2,'SHAGGY',1);

SQL Error: ORA-00001: unique constraint (I_T1) violated
00001. 00000 -  "unique constraint (%s.%s) violated"
*Cause:    An UPDATE or INSERT statement attempted to insert a duplicate key.
           For Trusted Oracle configured in DBMS MAC mode, you may see
           this message if a duplicate entry exists at a different level.
*Action:   Either remove the unique restriction or do not insert the key.

Only one active SHAGGY allowed.
But multiple inactives allowed:

insert into t1 values(2,'SHAGGY',0);

1 rows inserted.

insert into t1 values(3,'SHAGGY',0);

1 rows inserted.

Solution #2: A virtual column with a unique constraint

drop index i_t1;

index I_T1 dropped.

alter table t1 add (vc_col2 varchar2(24) generated always as (case when is_active = 1 then col2 end));

table T1 altered.

alter table t1 add constraint uk_t1 unique (vc_col2);

table T1 altered.

Note that, now we have a virtual column, we have to be very aware of insert statements with no explicit column list:

insert into t1 values(4,'SCOOBY',1);

SQL Error: ORA-00947: not enough values
00947. 00000 -  "not enough values"

Unless we’re lucky enough to be on 12c and use the INVISIBLE syntax:

alter table t1 add (vc_col2 varchar2(24) invisible generated always as (case when is_active = 1 then col2 end));

But as this example is on 11.2.0.3:

insert into t1 (col1, col2, is_active) values(4,'SCOOBY',1);

1 rows inserted.

insert into t1 (col1, col2, is_active) values(5,'SCOOBY',1);

SQL Error: ORA-00001: unique constraint (UK_T1) violated
00001. 00000 -  "unique constraint (%s.%s) violated"
*Cause:    An UPDATE or INSERT statement attempted to insert a duplicate key.
           For Trusted Oracle configured in DBMS MAC mode, you may see
           this message if a duplicate entry exists at a different level.
*Action:   Either remove the unique restriction or do not insert the key.

insert into t1 (col1, col2, is_active) values(5,'SCOOBY',0);

1 rows inserted.

insert into t1 (col1, col2, is_active) values(6,'SCOOBY',0);

1 rows inserted.

Requirement #2: Sorry, we forgot to tell you that we insert the new row first and then update the old one to be inactive, so we need a deferred constraint (hmmm!)

In which case, you can’t have deferred uniqueness enforced by a unique index, so the only option is the deferrable constraint on the virtual column.

alter table t1 drop constraint uk_t1;

table T1 altered.

alter table t1 add constraint uk_t1 unique (vc_col2) deferrable initially deferred;

table T1 altered.

insert into t1 (col1, col2, is_active) values(7,'FRED',1);

1 rows inserted.

insert into t1 (col1, col2, is_active) values(8,'FRED',1);

1 rows inserted.

commit;

SQL Error: ORA-02091: transaction rolled back
ORA-00001: unique constraint (.UK_T1) violated
02091. 00000 -  "transaction rolled back"
*Cause:    Also see error 2092. If the transaction is aborted at a remote
           site then you will only see 2091; if aborted at host then you will
           see 2092 and 2091.
*Action:   Add rollback segment and retry the transaction.

insert into t1 (col1, col2, is_active) values(7,'FRED',1);

1 rows inserted.

insert into t1 (col1, col2, is_active) values(8,'FRED',1);

1 rows inserted.

update t1 set is_active = 0 where col1 = 7;

1 rows updated.

commit;

committed.

See a previous post on a similar approach for a conditional foreign key.

Oracle 12c VirtualBox VM

Just a quick note to highlight that the Oracle pre-built Developer VMs have been updated for Oracle 12c.

Oracle pre-built Developer VMs
