SQL Developer: Viewing Trace Files

Just a quick plug for looking at raw sql trace files via SQL Developer.

There is a nice Tree View:
[image: sqldev_trace]

Which can be expanded:
[image: sqldev_trace_expand]

There is also a summary view of statistics, filterable:
[image: sqldev_trace_stats]

And a list view, filterable and orderable:

[image: sqldev_trace_list]

Some sort of right click summary for binds/waits might be a nice addition.

Real time sql monitoring – bulk bound statement

Moving through some observations of an ongoing investigation tackling the easiest ones first.

Previously I observed the slow execution of a recursive delete of statistics history, coming not from MMON but from a number of application processes which should only have been calculating stats for their own partitions.

Statement was sql id 9v9n97qj8z1dg:

delete /*+ dynamic_sampling(4) */
from sys.wri$_optstat_histhead_history
where savtime < :1
and rownum <= NVL(:2, rownum);

From the real time sql monitoring output of a single execution (the EXEC_ID parameter of DBMS_SQLTUNE.REPORT_SQL_MONITOR), I observed that EXECS was greater than 1 and so A-Rows was much larger than I expected for a statement which deliberately restricts the number of rows affected with a ROWNUM predicate.

Reminder:

SQL Plan Monitoring Details (Plan Hash Value=2348801730)
=======================================================================================================================================================================================================
| Id   |            Operation             |             Name              |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity |            Activity Detail            |
|      |                                  |                               | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |              (# samples)              |
=======================================================================================================================================================================================================
|    0 | DELETE STATEMENT                 |                               |         |       |     50262 |   +101 |  1423 |        1 |      |       |          |                                       |
|    1 |   DELETE                         | WRI$_OPTSTAT_HISTHEAD_HISTORY |         |       |     50361 |     +2 |  1423 |        1 |      |       |    29.74 | enq: TX - row lock contention (14630) |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | buffer busy waits (231)               |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (82)                              |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | resmgr:cpu quantum (5)                |
|    2 |    COUNT                         |                               |         |       |     50262 |   +101 |  1423 |      14M |      |       |     0.00 | Cpu (1)                               |
|    3 |     FILTER                       |                               |         |       |     50266 |    +97 |  1423 |      14M |      |       |    30.14 | Cpu (15146)                           |
|    4 |      TABLE ACCESS BY INDEX ROWID | WRI$_OPTSTAT_HISTHEAD_HISTORY |      6M | 23218 |     50253 |   +100 |  1423 |       4G |   22 | 176KB |    23.17 | buffer busy waits (18)                |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (11627)                           |
| -> 5 |       INDEX RANGE SCAN           | I_WRI$_OPTSTAT_HH_ST          |    990K |  5827 |     50264 |   +101 |  1423 |       4G |   41 | 328KB |    16.94 | buffer busy waits (3)                 |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (8332)                            |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | resmgr:cpu quantum (180)              |
=======================================================================================================================================================================================================
   3 - filter(ROWNUM<=NVL(:2,ROWNUM))
   5 - access("WRI$_OPTSTAT_HISTHEAD_HISTORY"."SYS_NC00024$"<SYS_EXTRACT_UTC(:1))

Initially I was stumped by the fact that A-Rows in steps 2-3 was greater than the ROWNUM predicate restriction (10000). But that was before I noticed the number of EXECS: 1423 EXECS * 10000 = 14M. So was this a report of a bulk/batched statement?

So, this is just an illustration that such a bulk bound statement produces output like the above.

Create a table with enough rows to slow things down:

drop table t3;

create table t3
as
with x as
(select rownum col1
 from   dual
 connect by level <= 1000)
select x1.col1
,      rpad('X',100,'X') col2
from   x x1, x x2;

Then run an update with FORALL to bulk bind:

declare
 v1 sys.odcinumberlist := sys.odcinumberlist();
begin
 for i in 1 .. 1000
 loop
     v1.extend();
     v1(i) := i;
 end loop;
 forall i in 1 .. v1.count
   update /*+ monitor domtest */ t3
   set    col2 = rpad('Y',100,'Y')
   where  col1 = v1(i);
 commit;
end;
/

Hunt down the SQL execution and get the RTSM output of the statement.

V$SQL reports this as one execution updating 1M rows.

select executions, rows_processed, elapsed_time/1000/1000, sql_text from v$sql where sql_id = '3tkqtzjyaa02g';

EXECUTIONS ROWS_PROCESSED ELAPSED_TIME/1000/1000 SQL_TEXT
---------- -------------- ---------------------- ------------------------------------------------------------------------------
         1        1000000             109.596429 UPDATE /*+ monitor domtest */ T3 SET COL2 = RPAD('Y',100,'Y') WHERE COL1 = :B1

RTSM reports it as 1044 (!?) executions and doesn’t seem to report the number of rows updated.
We only get the first bind value – not unexpected.

select dbms_sqltune.report_sql_monitor('3tkqtzjyaa02g',sql_exec_id => '16777216') from dual;
SQL Monitoring Report

SQL Text
------------------------------
UPDATE /*+ monitor domtest */ T3 SET COL2 = RPAD('Y',100,'Y') WHERE COL1 = :B1

Global Information
------------------------------
 Status              :  DONE                 
 Instance ID         :  1                    
 SQL ID              :  3tkqtzjyaa02g        
 SQL Execution ID    :  16777216             
 Execution Started   :  02/26/2015 13:25:29  
 First Refresh Time  :  02/26/2015 13:25:29  
 Last Refresh Time   :  02/26/2015 13:27:19  
 Duration            :  110s                 
 Module/Action       :  SQL Developer/-      
 Service             :  SYS$USERS            
 Program             :  SQL Developer        

Binds
========================================================================================================================
| Name | Position |  Type  |                                           Value                                           |
========================================================================================================================
| :B1  |        1 | NUMBER | 1                                                                                         |
========================================================================================================================

Global Stats
=========================================
| Elapsed |   Cpu   |  Other   | Buffer |
| Time(s) | Time(s) | Waits(s) |  Gets  |
=========================================
|     110 |     109 |     0.16 |    16M |
=========================================

SQL Plan Monitoring Details (Plan Hash Value=669554690)
==========================================================================================================================
| Id |      Operation       | Name |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Activity | Activity Detail |
|    |                      |      | (Estim) |      | Active(s) | Active |       | (Actual) |   (%)    |   (# samples)   |
==========================================================================================================================
|  0 | UPDATE STATEMENT     |      |         |      |           |        |  1044 |          |          |                 |
|  1 |   UPDATE             | T3   |         |      |       109 |     +2 |  1044 |        0 |     0.92 | Cpu (1)         |
|  2 |    TABLE ACCESS FULL | T3   |     171 | 3409 |       110 |     +1 |  1044 |       1M |    99.08 | Cpu (108)       |
==========================================================================================================================

By contrast, a non-bulk bound FOR LOOP statement would register as 1000 executions in V$SQL, each with a separate SQL_EXEC_ID.
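For comparison, a minimal sketch of that row-by-row version (hypothetical, reusing the T3 table above, with a "domtest2" hint comment to distinguish it):

```sql
-- Hypothetical row-by-row contrast to the FORALL version above.
-- Each loop iteration is a separate execution of the UPDATE, so
-- V$SQL would show EXECUTIONS = 1000 and real time SQL monitoring
-- would assign each execution its own SQL_EXEC_ID.
begin
  for i in 1 .. 1000
  loop
    update /*+ monitor domtest2 */ t3
    set    col2 = rpad('Z',100,'Z')
    where  col1 = i;
  end loop;
  commit;
end;
/
```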

However this does nothing to address the concern about what the heck is going on such that application processes executing gather_table_stats for a particular partition would each end up running this bulk indiscriminate cleanup script… heading that way next.

COUNT STOPKEY – NVL

Yesterday I mentioned issues with a recursive delete operation on statistics history.

This is a quick illustration of the last points I made on that post regarding the lack of a COUNT STOPKEY optimisation because of the use of NVL.

COUNT STOPKEY is an optimisation which allows processing to stop once the target number of rows has been reached.

For example:

create table t1 as select * from dba_objects;

alter session set statistics_level = all;

var rn number
exec :rn := 10;

select *
from   t1
where  rownum <= :rn
and    mod(object_id,5) = 0;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

Plan hash value: 3836375644

-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |     10 |00:00:00.01 |       4 | 
|*  1 |  COUNT STOPKEY     |      |      1 |        |     10 |00:00:00.01 |       4 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |  26148 |     10 |00:00:00.01 |       4 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=:RN)
   2 - filter(MOD("OBJECT_ID",5)=0) 

COUNT STOPKEY kicks in once we have fetched the relevant rows and stops any unnecessary further execution – note in particular A-Rows & Buffers for step 2.

However, if we use NVL around our ROWNUM limitation, then this optimisation is prevented.

var rn number
exec :rn := 10;

select *
from   t1
where  rownum <= nvl(:rn,rownum)
and    mod(object_id,5) = 0;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

Plan hash value: 624922415

--------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |     10 |00:00:00.09 |    2310 |
|   1 |  COUNT              |      |      1 |        |     10 |00:00:00.09 |    2310 |
|*  2 |   FILTER            |      |      1 |        |     10 |00:00:00.09 |    2310 |
|*  3 |    TABLE ACCESS FULL| T1   |      1 |  26148 |  29630 |00:00:00.08 |    2310 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(ROWNUM<=NVL(:RN,ROWNUM))
   3 - filter(MOD("OBJECT_ID",5)=0)
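If the limit bind can be guaranteed non-null, one workaround (a sketch, not necessarily the fix applied to the recursive statement) is to keep the plain ROWNUM comparison and handle the NULL case in the calling code:

```sql
-- Plain ROWNUM comparison: gets the COUNT STOPKEY optimisation.
select *
from   t1
where  rownum <= :rn
and    mod(object_id,5) = 0;

-- If :rn might be NULL, branch in the caller instead of using NVL
-- in the predicate, e.g. issue an unrestricted statement in that case.
```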

Understanding vs Resolution – Statistics History Cleanup

Today I helped resolve a problem quickly but to the detriment of my understanding of exactly what was going on and why.

And as part of the quicker resolution, I had to destroy the evidence which would have helped understand better.

So… now need to go back and figure it out if I have time to model it properly, etc.

Here’s what little I know so far.

What happened was that there were a number of application sessions experiencing slowness and contention when executing a recursive stats history cleanup statement.

I verified via ASH (the TOP_LEVEL_SQL_ID, PLSQL_ENTRY_OBJECT_ID & USER_ID columns) that this recursive delete was somehow being called by app ETL code: four sessions were each running the slow statement and, given the nature of the statement below, were being blocked by mode 6 TX locks from the leading execution.

Version is 11.2.0.3

Statement was sql id 9v9n97qj8z1dg:

delete /*+ dynamic_sampling(4) */ 
from sys.wri$_optstat_histhead_history 
where savtime < :1 
and rownum <= NVL(:2, rownum);

First up, the resolution was quick and easy thanks to, by a quirk of coincidence, my last post:

https://orastory.wordpress.com/2015/02/16/dbms_stats-purge_stats/

We just got rid of all the statistics history using the magic PURGE_ALL truncate flag.
The history is of limited usefulness day-to-day anyway.
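For reference, the purge itself was along these lines (see the linked post for the detail – PURGE_ALL truncates the underlying history tables rather than deleting row by row):

```sql
-- Truncate-based purge of all statistics history.
exec dbms_stats.purge_stats(dbms_stats.purge_all);
```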

The slowness was holding up ETL jobs which were just trying to calculate stats for their own partition.

I was brought in towards the end, but here’s an illustration of the slowness:

SQL Text
------------------------------
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_histhead_history where savtime < :1 and rownum <= NVL(:2, rownum)

Global Information
------------------------------
 Status              :  EXECUTING               
 Instance ID         :  1                       
 SQL ID              :  9v9n97qj8z1dg           
 SQL Execution ID    :  16777249                
 Execution Started   :  02/24/2015 19:11:25     
 First Refresh Time  :  02/24/2015 19:13:06     
 Last Refresh Time   :  02/25/2015 09:10:35     
 Duration            :  50351s                  
 Module/Action       :  JDBC Thin Client/-      
 Program             :  JDBC Thin Client        

Binds
========================================================================================================================
| Name | Position |  Type  |                                           Value                                           |
========================================================================================================================
| :2   |        2 | NUMBER | 10000                                                                                     |
========================================================================================================================

Global Stats
===============================================================================================
| Elapsed |   Cpu   |    IO    | Application | Concurrency |  Other   | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   |  Waits(s)   | Waits(s) |  Gets  | Reqs | Bytes |
===============================================================================================
|   50359 |   35199 |     0.16 |       14669 |         254 |      237 |     1G |   63 | 504KB |
===============================================================================================

SQL Plan Monitoring Details (Plan Hash Value=2348801730)
=======================================================================================================================================================================================================
| Id   |            Operation             |             Name              |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity |            Activity Detail            |
|      |                                  |                               | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |              (# samples)              |
=======================================================================================================================================================================================================
|    0 | DELETE STATEMENT                 |                               |         |       |     50262 |   +101 |  1423 |        1 |      |       |          |                                       |
|    1 |   DELETE                         | WRI$_OPTSTAT_HISTHEAD_HISTORY |         |       |     50361 |     +2 |  1423 |        1 |      |       |    29.74 | enq: TX - row lock contention (14630) |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | buffer busy waits (231)               |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (82)                              |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | resmgr:cpu quantum (5)                |
|    2 |    COUNT                         |                               |         |       |     50262 |   +101 |  1423 |      14M |      |       |     0.00 | Cpu (1)                               |
|    3 |     FILTER                       |                               |         |       |     50266 |    +97 |  1423 |      14M |      |       |    30.14 | Cpu (15146)                           |
|    4 |      TABLE ACCESS BY INDEX ROWID | WRI$_OPTSTAT_HISTHEAD_HISTORY |      6M | 23218 |     50253 |   +100 |  1423 |       4G |   22 | 176KB |    23.17 | buffer busy waits (18)                |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (11627)                           |
| -> 5 |       INDEX RANGE SCAN           | I_WRI$_OPTSTAT_HH_ST          |    990K |  5827 |     50264 |   +101 |  1423 |       4G |   41 | 328KB |    16.94 | buffer busy waits (3)                 |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (8332)                            |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | resmgr:cpu quantum (180)              |
=======================================================================================================================================================================================================
   3 - filter(ROWNUM<=NVL(:2,ROWNUM))
   5 - access("WRI$_OPTSTAT_HISTHEAD_HISTORY"."SYS_NC00024$"<SYS_EXTRACT_UTC(:1))

So, initial thoughts:

1. I thought that MMON was responsible for purging old stats… clearly not in this case, as there were multiple concurrent application connections purging old data as part of their ETL process.

2. The DELETE removes any history older than a parameterised date, the first 10000 rows thereof. There is no connection to the objects whose statistics the application ETL jobs were working on.

3. I would expect a COUNT STOPKEY operation but then the NVL predicate avoids this optimisation.

4. UPDATE: The real time sql monitoring output is also reporting the metrics for 1423 executions of this statement. The clue is in the 14M rows reported in the COUNT + FILTER operations: 1423 * 10000 rows = 14M. But I requested the output for what I thought was a single execution id – 16777249 – strange… a bulk/batched delete?

More questions than answers…

Call vs Exec

Just a reference to a really simple difference between CALL and EXEC.
I thought I had mentioned this before but couldn’t find it so…

EXEC/EXECUTE is a SQL*Plus command which wraps the proc call in an anonymous BEGIN … END; block.

CALL is a SQL command hence it is limited to SQL data types and there are other restrictions which the documentation sums up pretty well.

Because CALL is SQL, there is one key behavioural difference which caused a bug on a project a few years ago, when Java code was calling a stored proc not with BEGIN … END; but with CALL and ended up swallowing certain exceptions:

SQL> l
  1  create or replace procedure p1
  2  as
  3  begin
  4    raise no_data_found;
  5* end;
SQL> /

Procedure created.

SQL> exec p1;
BEGIN p1; END;

*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at "PGPS_UAT1.P1", line 4
ORA-06512: at line 1


SQL> call p1();

Call completed.

SQL>

SQL expects and handles certain exceptions – NO_DATA_FOUND, for example, is not treated as an error by the SQL layer, so the CALL completes successfully and the exception is swallowed.

So always use BEGIN and END; blocks in application code rather than CALL.
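i.e. the safe pattern for the proc above is an anonymous block:

```sql
-- Wrapping the call in BEGIN ... END; means NO_DATA_FOUND
-- propagates to the caller instead of being swallowed by CALL.
begin
  p1;
end;
/
```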

Partition gotchas

Two minor partition gotchas: one on partition extent sizing and one on the number of partitions in an interval partitioned table.

First – old news – note that initial extent size for partitioned tables changed in 11.2.0.2.
Somehow this passed me by and I was late to the large extent party – duh!
Influenced by the parameter _partition_large_extents, the default for autoallocated ASSM extent sizing changed from 64K to 8MB.

Pre 11.2.0.2 (non-Exadata):

alter session set "_partition_large_extents" = false;

create table t1
(col1   number not null)
partition by range (col1)    interval ( 1 )
(partition p0 values less than (1));

insert into t1 (col1) values (1);

See initial extent size of 64k:

select table_name
,      partition_name
,      high_value
,      initial_extent
,      next_extent 
from user_tab_partitions where table_name = 'T1';

TABLE_NAME PARTITION_NAME HIGH_VALUE INITIAL_EXTENT NEXT_EXTENT
---------- -------------- ---------- -------------- -----------
T1         P0             1          
T1         SYS_P133592    2          65536          1048576 

Onwards from 11.2.0.2, the following is the default behaviour:

alter session set "_partition_large_extents" = true;

drop table t1;

create table t1
(col1   number not null)
partition by range (col1)    interval ( 1 )
(partition p0 values less than (1));

insert into t1 (col1) values (1);

Back to default initial extent of 8MB:

select table_name
,      partition_name
,      high_value
,      initial_extent
,      next_extent 
from   user_tab_partitions where table_name = 'T1';

TABLE_NAME PARTITION_NAME HIGH_VALUE INITIAL_EXTENT NEXT_EXTENT
---------- -------------- ---------- -------------- -----------
T1         P0             1          
T1         SYS_P133593    2          8388608        1048576 

A friend highlighted this to me on his DB where a lot of space had been wasted on small or empty partitions.
If empty, why was space wasted? Did deferred segment creation not help?
Deferred segment creation was off on this DB because of the historic occurrence of bug 12535346 with deferred constraints combined with other features with “deferred” behaviour (deferred segment creation or interval partitioning).

Second gotcha – if you are considering INTERVAL PARTITIONING, particularly with a RANGE of 1, think about the impact of the maximum number of interval partitions.

Following on from example T1 table above:

select table_name
,      partition_name
,      high_value 
from   user_tab_partitions where table_name = 'T1';

TABLE_NAME PARTITION_NAME HIGH_VALUE 
---------- -------------- -----------
T1         P0             1                                                                                
T1         SYS_P133593    2        

insert into t1 (col1) values (1048574);

1 row inserted

insert into t1 (col1) values (1048575);

ORA-14300: partitioning key maps to a partition outside maximum permitted number of partitions

The maximum number of partitions in an interval partition is documented as 1024K -1 = 1048575.

select table_name
,      partition_name
,      high_value 
from   user_tab_partitions where table_name = 'T1';

TABLE_NAME PARTITION_NAME HIGH_VALUE 
---------- -------------- -----------
T1         P0             1                                                                                
T1         SYS_P133593    2                                                                                
T1         SYS_P133594    1048575   

Although we only have two materialised partitions, the possible future partition boundaries in between are effectively reserved for numbers between the lower and upper boundaries.

There’s not a lot we can do about this unless you know that you’re not going to need those reserved partitions, or you’re happy for other numbers to go into existing partitions, effectively having ranges much larger than the original 1.

At the moment, if we go higher than our reserved boundaries it fails, but, for example, if the key “1048573” comes along, which is within our reserved boundaries, it will create a new partition:

insert into t1 (col1) values (1048573);

1 row inserted

select table_name
,      partition_name
,      high_value 
,      interval
from   user_tab_partitions where table_name = 'T1';

TABLE_NAME PARTITION_NAME HIGH_VALUE INTERVAL
---------- -------------- ---------- --------
T1         P0             1          YES                                                              
T1         SYS_P133593    2          YES
T1         SYS_P133594    1048575    YES
T1         SYS_P133595    1048574    YES

But what we can do is convert our existing partitions to normal RANGE partitions:

alter table t1 set interval ();

table altered

select table_name
,      partition_name
,      high_value 
,      interval
from   user_tab_partitions where table_name = 'T1';

TABLE_NAME PARTITION_NAME HIGH_VALUE INTERVAL
---------- -------------- ---------- --------
T1         P0             1          NO                                                              
T1         SYS_P133593    2          NO
T1         SYS_P133594    1048575    NO
T1         SYS_P133595    1048574    NO

And then we can set the table back to interval partitioning without affecting those existing partitions:

alter table t1 set interval (1);

Now if a new value comes along and it falls within an existing range, it will use the existing range partition:

insert into t1 (col1) values (1048572);

1 row inserted

select table_name
,      partition_name
,      high_value 
,      interval
from   user_tab_partitions where table_name = 'T1';

TABLE_NAME PARTITION_NAME HIGH_VALUE INTERVAL
---------- -------------- ---------- --------
T1         P0             1          NO                                                              
T1         SYS_P133593    2          NO
T1         SYS_P133594    1048575    NO
T1         SYS_P133595    1048574    NO

select col1
, (select subobject_name from dba_objects where data_object_id = rd) pname 
from (
select col1
,      dbms_rowid.rowid_object(rowid) rd
from t1 where col1 IN (1048572,1048573));

      COL1 PNAME                        
---------- ------------------------------
   1048573 SYS_P133595                    
   1048572 SYS_P133595      

But if a new high comes along, we can now create our new interval partition:

insert into t1 (col1) values (1048575);

1 row inserted

select table_name
,      partition_name
,      high_value 
,      interval
from   user_tab_partitions where table_name = 'T1';

TABLE_NAME PARTITION_NAME HIGH_VALUE INTERVAL
---------- -------------- ---------- --------
T1         P0             1          NO                                                              
T1         SYS_P133593    2          NO
T1         SYS_P133594    1048575    NO
T1         SYS_P133595    1048574    NO
T1         SYS_P133596    1048576    YES

And our limit of 1048575 partitions still applies, but the reserved future interval partitions can move out:

insert into t1 (col1) values (2097145);

1 row inserted

insert into t1 (col1) values (2097146);

ORA-14300: partitioning key maps to a partition outside maximum permitted number of partitions

This workaround has very limited usefulness as mentioned.
Bottom line – if using interval partitioning, pay careful consideration to this limit.

RELY DISABLE

Learning, relearning or unforgetting…

What value is there in a DISABLEd constraint?

This was a question on the OTN forums this week and a) my first reply was wrong and b) I couldn’t find a clear demonstration elsewhere.

The key is RELY.

The documentation is clear.

RELY Constraints

The ETL process commonly verifies that certain constraints are true. For example, it can validate all of the foreign keys in the data coming into the fact table. This means that you can trust it to provide clean data, instead of implementing constraints in the data warehouse. You create a RELY constraint as follows:

ALTER TABLE sales ADD CONSTRAINT sales_time_fk FOREIGN KEY (time_id) REFERENCES times (time_id) RELY DISABLE NOVALIDATE;

This statement assumes that the primary key is in the RELY state. RELY constraints, even though they are not used for data validation, can:

– Enable more sophisticated query rewrites for materialized views. See Chapter 18, “Basic Query Rewrite” for further details.
– Enable other data warehousing tools to retrieve information regarding constraints directly from the Oracle data dictionary.

Creating a RELY constraint is inexpensive and does not impose any overhead during DML or load. Because the constraint is not being validated, no data processing is necessary to create it.

We can prove the value of a RELY DISABLEd CONSTRAINT by playing with Tom Kyte’s illustrations on the value of ENABLEd constraints.

EMP/DEPT Table:

drop table emp;
drop table dept;

create table dept 
(deptno number(2)     not null,
 dname  varchar2(15),
 loc    varchar2(15));

insert into dept values (10, 'ACCOUNTING', 'NEW YORK');
insert into dept values (20, 'RESEARCH', 'DALLAS');
insert into dept values (30, 'SALES', 'CHICAGO');
insert into dept values (40, 'OPERATIONS', 'BOSTON');

create table emp
(empno    number(4) not null
,ename    varchar2(10)
,job      varchar2(9)
,mgr      number(4)
,hiredate date
,sal      number(7, 2)
,comm     number(7, 2)
,deptno   number(2) not null);

insert into emp values (7369, 'SMITH', 'CLERK',    7902, to_date('17-DEC-1980', 'DD-MON-YYYY'), 800, null, 20);
insert into emp values (7499, 'ALLEN', 'SALESMAN', 7698, to_date('20-FEB-1981', 'DD-MON-YYYY'), 1600, 300, 30);
insert into emp values (7521, 'WARD',  'SALESMAN', 7698, to_date('22-FEB-1981', 'DD-MON-YYYY'), 1250, 500, 30);
insert into emp values (7566, 'JONES', 'MANAGER',  7839, to_date('2-APR-1981',  'DD-MON-YYYY'), 2975, null, 20);
insert into emp values (7654, 'MARTIN', 'SALESMAN', 7698, to_date('28-SEP-1981', 'DD-MON-YYYY'), 1250, 1400, 30);
insert into emp values (7698, 'BLAKE', 'MANAGER',   7839, to_date('1-MAY-1981',  'DD-MON-YYYY'), 2850, null, 30);
insert into emp values (7782, 'CLARK', 'MANAGER',   7839, to_date('9-JUN-1981',  'DD-MON-YYYY'), 2450, null, 10);
insert into emp values (7788, 'SCOTT', 'ANALYST',   7566, to_date('09-DEC-1982', 'DD-MON-YYYY'), 3000, null, 20);
insert into emp values (7839, 'KING', 'PRESIDENT',  null, to_date('17-NOV-1981', 'DD-MON-YYYY'), 5000, null, 10);
insert into emp values (7844, 'TURNER', 'SALESMAN', 7698, to_date('8-SEP-1981',  'DD-MON-YYYY'), 1500, 0, 30);
insert into emp values (7876, 'ADAMS', 'CLERK',     7788, to_date('12-JAN-1983', 'DD-MON-YYYY'), 1100, null, 20);
insert into emp values (7900, 'JAMES', 'CLERK',     7698, to_date('3-DEC-1981',  'DD-MON-YYYY'), 950, null, 30);
insert into emp values (7902, 'FORD', 'ANALYST',    7566, to_date('3-DEC-1981',  'DD-MON-YYYY'), 3000, null, 20);
insert into emp values (7934, 'MILLER', 'CLERK',    7782, to_date('23-JAN-1982', 'DD-MON-YYYY'), 1300, null, 10);

begin
  dbms_stats.set_table_stats
  ( user, 'EMP', numrows=>1000000, numblks=>100000 );
  dbms_stats.set_table_stats
  ( user, 'DEPT', numrows=>100000, numblks=>10000 );
end; 
/

First, there’s nearly always an oddity to observe or tangent to follow:

alter table dept add constraint dept_pk primary key(deptno);
alter table emp add constraint emp_fk_dept foreign key(deptno) references dept(deptno) rely disable novalidate;

Results in:

SQL Error: ORA-25158: Cannot specify RELY for foreign key if the associated primary key is NORELY
25158. 00000 -  "Cannot specify RELY for foreign key if the associated primary key is NORELY"
*Cause:    RELY is specified for the foreign key contraint, when the
           associated primary key constraint is NORELY.
*Action:   Change the option of the primary key also to RELY.

But doing the same thing in two steps is fine:

alter table emp add constraint emp_fk_dept foreign key(deptno) references dept(deptno) disable novalidate;
alter table emp modify constraint emp_fk_dept rely;

Odd!
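
The error’s suggested action points at a third option: mark the primary key RELY first, after which the one-step FK creation should presumably work. A sketch (not run as part of this demo):

```sql
-- Per the ORA-25158 action text, make the PK RELY first...
alter table dept modify constraint dept_pk rely;

-- ...then the FK can be created with RELY in a single statement.
alter table emp add constraint emp_fk_dept
  foreign key (deptno) references dept (deptno)
  rely disable novalidate;
```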

Anyway, back to the main point: first, we can show a clear demonstration of JOIN ELIMINATION.

No FK constraint:

alter table emp drop constraint emp_fk_dept;

create or replace view emp_dept
as
select emp.ename, dept.dname
from   emp, dept
where  emp.deptno = dept.deptno; 

select ename from emp_dept;
select * from table(dbms_xplan.display_cursor);

Gives plan:

Plan hash value: 4269077325
------------------------------------------------------------------------------
| Id  | Operation          | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |         |       |       | 21974 (100)|          |
|   1 |  NESTED LOOPS      |         |  1000K|    31M| 21974   (1)| 00:04:24 |
|   2 |   TABLE ACCESS FULL| EMP     |  1000K|    19M| 21924   (1)| 00:04:24 |
|*  3 |   INDEX UNIQUE SCAN| DEPT_PK |     1 |    13 |     0   (0)|          |
------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")

Now with added constraint, RELY DISABLE:

alter table emp add constraint emp_fk_dept foreign key(deptno) references dept(deptno) disable novalidate;
alter table emp modify constraint emp_fk_dept rely;

select ename from emp_dept;
select * from table(dbms_xplan.display_cursor);

And we get:

Plan hash value: 3956160932
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       | 21925 (100)|          |
|*  1 |  TABLE ACCESS FULL| EMP  | 50000 |   976K| 21925   (1)| 00:04:24 |
--------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
    1 - filter("EMP"."DEPTNO" IS NOT NULL)

And just to confirm our constraint state:

select constraint_name, status, validated, rely from user_constraints where constraint_name = 'EMP_FK_DEPT';
CONSTRAINT_NAME                STATUS   VALIDATED     RELY
------------------------------ -------- ------------- ----
EMP_FK_DEPT                    DISABLED NOT VALIDATED RELY 

Now we can also see benefit in MV query_rewrite:

create materialized view mv_emp_dept
enable query rewrite
as
select dept.deptno, dept.dname, count (*) 
from   emp, dept
where  emp.deptno = dept.deptno
group by dept.deptno, dept.dname;

begin
  dbms_stats.set_table_stats
  ( user, 'MV_EMP_DEPT', numrows=>100000, numblks=>10000 );
end; 
/

alter session set query_rewrite_enabled = false;
select count(*) from emp;
select * from table(dbms_xplan.display_cursor);
Plan hash value: 2083865914
-------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       | 21917 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |            |          |
|   2 |   TABLE ACCESS FULL| EMP  |  1000K| 21917   (1)| 00:04:24 |
-------------------------------------------------------------------

Enable query rewrite with TRUSTED integrity (so the optimizer trusts the RELY constraint) and the MV can be used instead:

alter session set query_rewrite_enabled = true;
alter session set query_rewrite_integrity = trusted;
select count(*) from emp;
select * from table(dbms_xplan.display_cursor);
Plan hash value: 632580757
---------------------------------------------------------------------------------------------
| Id  | Operation                     | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |             |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE               |             |     1 |    13 |            |          |
|   2 |   MAT_VIEW REWRITE ACCESS FULL| MV_EMP_DEPT |     3 |    39 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------
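
If the rewrite doesn’t kick in, DBMS_MVIEW.EXPLAIN_REWRITE can report why. A sketch, assuming the REWRITE_TABLE output table has already been created in the current schema (via $ORACLE_HOME/rdbms/admin/utlxrw.sql):

```sql
-- Ask the optimizer whether/why the query rewrites against MV_EMP_DEPT.
begin
  dbms_mview.explain_rewrite(
    query => 'select count(*) from emp',
    mv    => 'MV_EMP_DEPT');
end;
/

-- Messages explain eligibility, or the reason rewrite was rejected.
select message from rewrite_table;
```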