create table domtest
(col1 number
,col2 varchar2(200))
partition by range (col1) interval (1)
(partition p0 values less than (1))
compress for oltp;
> Table created.
insert into domtest
with x as (select level lvl from dual connect by level <= 10)
select lvl, rpad(lvl,200,'X') from x;
> 10 row(s) inserted.
commit;
> Statement processed.
select table_name, def_compression, def_compress_for
from user_part_tables
where table_name = 'DOMTEST';
TABLE_NAME DEF_COMPRESSION DEF_COMPRESS_FOR
DOMTEST ENABLED ADVANCED
select table_name, partition_name, high_value, compression, compress_for
from user_tab_partitions
where table_name = 'DOMTEST';
TABLE_NAME PARTITION_NAME HIGH_VALUE COMPRESSION COMPRESS_FOR
DOMTEST P0 1 ENABLED ADVANCED
DOMTEST SYS_P487584 2 ENABLED ADVANCED
DOMTEST SYS_P487585 3 ENABLED ADVANCED
DOMTEST SYS_P487586 4 ENABLED ADVANCED
DOMTEST SYS_P487587 5 ENABLED ADVANCED
DOMTEST SYS_P487588 6 ENABLED ADVANCED
DOMTEST SYS_P487589 7 ENABLED ADVANCED
DOMTEST SYS_P487590 8 ENABLED ADVANCED
DOMTEST SYS_P487591 9 ENABLED ADVANCED
DOMTEST SYS_P487592 10 ENABLED ADVANCED
DOMTEST SYS_P487593 11 ENABLED ADVANCED
If I want to change the default compression at table level so that new partitions are no longer compressed, then those three words make a big difference.
alter table domtest nocompress;
> Table altered.
This does what we want:
select table_name, def_compression, def_compress_for
from user_part_tables
where table_name = 'DOMTEST';
TABLE_NAME DEF_COMPRESSION DEF_COMPRESS_FOR
DOMTEST DISABLED -
But it does a whole lot we don’t. It doesn’t move the existing data but it does change the partition attributes for new data.
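For contrast, a sketch of the statement with those three words – assuming the three words in question are MODIFY DEFAULT ATTRIBUTES – which should change only the table-level default and leave the existing partition attributes alone:
alter table domtest modify default attributes nocompress;
select table_name, def_compression, def_compress_for
from user_part_tables
where table_name = 'DOMTEST';
select partition_name, compression, compress_for
from user_tab_partitions
where table_name = 'DOMTEST'
order by partition_position;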
We hit a serious problem about 5 months ago when we upgraded one of the development databases from 19.12 to 19.13. Going back to 19.12 solved the problem, so for us it was definitely something introduced in 19.13.
There was nothing out there at the time on Oracle Support or from other reliable sources.
Our problem happened under row cache load, which in particular affected our commit builds when there was heavy dropping and creating of objects across a number of schemas.
The best indication so far is that this is related to the cloning feature for row cache objects introduced in 19.13 – a new feature which aims to automatically create clones of a row cache object if it detects that the object is accessed frequently.
Apparently many bugs related to this feature have already been addressed, but the suggestion is that disabling the feature might prevent the issue, via the snappily named parameter _bug33046179_kqr_hot_copy_sleep_limit = 0.
I will follow up with a comment on its effectiveness.
Addendum: Good news – parameter seems effective so far but you need to restart the database. Although you can change the parameter without restarting the database, in our experience you do need to restart if the circumstances around the bug might already have taken effect before you changed the parameter.
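For reference, a sketch of how that parameter gets set – an underscore parameter, so Oracle Support's blessing is assumed, as is an spfile:
alter system set "_bug33046179_kqr_hot_copy_sleep_limit" = 0 scope=spfile;
-- then restart, e.g. from SQL*Plus:
shutdown immediate
startup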
I’m triple posting this on oracle-l, my blog and on Oracle community to get some more help. Apologies if you’ve tried to help before and it didn’t come to anything.
When the problem originally surfaced it was after an upgrade from 11.2.0.4 to 19.6. Now version is 19.12.
Now I have one example which gets executed four or five times a day. There are two sql ids involved in the process but both are almost identical and have identical sql plans.
The queries in question get executed after specific data loads. For each data load where it gets executed, it will be doing pretty much the same workload day-on-day. The query gets supplied a collection which tends to have one or two rows, and each row in the collection provides the values to prune to a specific subpartition.
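For context, a minimal sketch of the shape of such a query – the collection and key column names here are hypothetical stand-ins, only the table name is real:
select pb.*
from table(:b_coll) c -- bound collection, typically one or two rows (hypothetical)
join position_balance pb
on pb.part_key = c.part_key -- assumed partition key column
and pb.subpart_key = c.subpart_key -- assumed subpartition key column
and pb.id = c.id;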
The issue is that about once a week, an execution will go slow. Exactly the same sql plan each execution. When it works, it runs in a couple of seconds max. When it goes wrong, the partition list iterator decides to read every partition/subpartition.
There was some speculation about possible causes in the OTN thread. The best guess was that there is some timeout or invalidation or event or contention which means Oracle decides that it cannot reliably prune. There might be other data loads ongoing. There could be a partition operation (add new or exchange) which coincides with the degraded iterator performance. There are > 70k partitions / subpartitions.
select count(*) from dba_tab_partitions where table_name = 'POSITION_BALANCE';
COUNT(*)
----------
60941
select count(*), sum(case when segment_created = 'YES' then 1 else 0 end) from dba_tab_subpartitions where table_name = 'POSITION_BALANCE';
COUNT(*) SUM(CASEWHENSEGMENT_CREATED='YES'THEN1ELSE0END)
---------- -----------------------------------------------
78154 33705
I’m reasonably comfortable that what I’ve described is roughly what’s happening. ASH confirms that it’s reading lots of subpartitions. I’ve just got no idea why, or how to figure that out. The roughly once-a-week occurrence makes it trickier.
Even when it runs in a couple of seconds, I do have some of the ASH samples that show this.
select /*+ parallel(4) */ instance_number, sql_id, sql_plan_hash_value, sql_full_plan_hash_value
, sql_exec_id, sql_exec_start
, min(sample_time), max(sample_time)
, max(sample_time) - min(sample_time) duration
, count(*) cnt_samples
, sum(case when in_sql_execution = 'Y' then 1 else 0 end) cnt_in_sql_execution
, sum(case when in_parse = 'Y' then 1 else 0 end) cnt_in_parse
, sum(case when in_hard_parse = 'Y' then 1 else 0 end) cnt_in_hard_parse
from dba_hist_active_sess_history h
where 1=1
and h.user_id IN (select user_id from dba_users where username like '%TODS%')
and sql_id in ('fw985yayy1mfz','0gk5hj9wnp2vs')
--and sql_id = 'ft2prc1xx2hmd'
and sql_exec_id is not null
group by instance_number, sql_id, sql_plan_hash_value, sql_full_plan_hash_value, sql_exec_id, sql_exec_start
order by h.sql_exec_start;
INSTANCE_NUMBER SQL_ID SQL_PLAN_HASH_VALUE SQL_FULL_PLAN_HASH_VALUE SQL_EXEC_ID SQL_EXEC_START MIN(SAMPLE_TIME) MAX(SAMPLE_TIME) DURATION CNT_SAMPLES CNT_IN_SQL_EXECUTION CNT_IN_PARSE CNT_IN_HARD_PARSE
--------------- ------------- ------------------- ------------------------ ----------- -------------------- ------------------------------- ------------------------------- ------------------- ----------- -------------------- ------------ -----------------
4 0gk5hj9wnp2vs 2221895109 2867134094 67108864 23-MAY-2022 09:18:24 23-MAY-22 09.18.27.378000000 AM 23-MAY-22 09.18.27.378000000 AM +00 00:00:00.000000 1 1 0 0
3 0gk5hj9wnp2vs 2221895109 2867134094 50331648 24-MAY-2022 07:59:49 24-MAY-22 07.59.50.659000000 AM 24-MAY-22 07.59.50.659000000 AM +00 00:00:00.000000 1 1 0 0
2 0gk5hj9wnp2vs 2221895109 2867134094 33554432 25-MAY-2022 07:09:07 25-MAY-22 07.09.08.709000000 AM 25-MAY-22 07.09.08.709000000 AM +00 00:00:00.000000 1 1 0 0
2 0gk5hj9wnp2vs 2221895109 2867134094 33554434 25-MAY-2022 09:41:53 25-MAY-22 09.41.59.718000000 AM 25-MAY-22 12.41.49.605000000 PM +00 02:59:49.887000 1054 1054 0 0
2 fw985yayy1mfz 2221895109 2867134094 33554434 25-MAY-2022 10:57:41 25-MAY-22 10.57.49.221000000 AM 25-MAY-22 01.56.37.861000000 PM +00 02:58:48.640000 1048 1048 0 0
3 0gk5hj9wnp2vs 2221895109 2867134094 50331648 25-MAY-2022 11:41:04 25-MAY-22 11.41.05.539000000 AM 25-MAY-22 11.41.05.539000000 AM +00 00:00:00.000000 1 1 0 0
3 0gk5hj9wnp2vs 2221895109 2867134094 50331648 27-MAY-2022 08:01:28 27-MAY-22 08.01.30.371000000 AM 27-MAY-22 08.01.30.371000000 AM +00 00:00:00.000000 1 1 0 0
3 fw985yayy1mfz 2221895109 2867134094 50331648 27-MAY-2022 08:01:38 27-MAY-22 08.01.40.611000000 AM 27-MAY-22 08.01.40.611000000 AM +00 00:00:00.000000 1 1 0 0
1 0gk5hj9wnp2vs 2221895109 2867134094 16777218 01-JUN-2022 06:48:24 01-JUN-22 06.48.27.979000000 AM 01-JUN-22 09.48.17.547000000 AM +00 02:59:49.568000 1054 1054 0 0
3 0gk5hj9wnp2vs 2221895109 2867134094 50331649 01-JUN-2022 09:27:03 01-JUN-22 09.27.10.915000000 AM 01-JUN-22 09.27.10.915000000 AM +00 00:00:00.000000 1 1 0 0
3 fw985yayy1mfz 2221895109 2867134094 50331648 02-JUN-2022 07:11:55 02-JUN-22 07.11.57.315000000 AM 02-JUN-22 07.11.57.315000000 AM +00 00:00:00.000000 1 1 0 0
3 0gk5hj9wnp2vs 2221895109 2867134094 50331648 07-JUN-2022 07:04:13 07-JUN-22 07.04.17.155000000 AM 07-JUN-22 07.04.17.155000000 AM +00 00:00:00.000000 1 1 0 0
2 0gk5hj9wnp2vs 2221895109 2867134094 33554434 07-JUN-2022 08:17:56 07-JUN-22 08.17.58.693000000 AM 07-JUN-22 11.17.49.285000000 AM +00 02:59:50.592000 1054 1054 0 0
1 0gk5hj9wnp2vs 2221895109 2867134094 16777216 08-JUN-2022 07:09:22 08-JUN-22 07.09.24.427000000 AM 08-JUN-22 07.09.24.427000000 AM +00 00:00:00.000000 1 1 0 0
2 0gk5hj9wnp2vs 2221895109 2867134094 33554432 08-JUN-2022 08:14:25 08-JUN-22 08.14.26.278000000 AM 08-JUN-22 08.14.26.278000000 AM +00 00:00:00.000000 1 1 0 0
3 fw985yayy1mfz 2221895109 2867134094 50331649 09-JUN-2022 06:51:27 09-JUN-22 06.51.29.219000000 AM 09-JUN-22 06.51.29.219000000 AM +00 00:00:00.000000 1 1 0 0
3 fw985yayy1mfz 2221895109 2867134094 50331648 10-JUN-2022 11:10:13 10-JUN-22 11.10.14.595000000 AM 10-JUN-22 11.10.14.595000000 AM +00 00:00:00.000000 1 1 0 0
2 0gk5hj9wnp2vs 2221895109 2867134094 33554433 13-JUN-2022 08:45:43 13-JUN-22 08.45.45.509000000 AM 13-JUN-22 08.45.45.509000000 AM +00 00:00:00.000000 1 1 0 0
2 fw985yayy1mfz 2221895109 2867134094 33554433 13-JUN-2022 08:45:55 13-JUN-22 08.45.55.749000000 AM 13-JUN-22 08.45.55.749000000 AM +00 00:00:00.000000 1 1 0 0
3 0gk5hj9wnp2vs 2221895109 2867134094 50331648 15-JUN-2022 07:08:19 15-JUN-22 07.08.22.275000000 AM 15-JUN-22 07.08.22.275000000 AM +00 00:00:00.000000 1 1 0 0
3 0gk5hj9wnp2vs 2221895109 2867134094 50331649 15-JUN-2022 09:08:54 15-JUN-22 09.08.56.387000000 AM 15-JUN-22 12.08.46.083000000 PM +00 02:59:49.696000 1054 1054 0 0
1 fw985yayy1mfz 2221895109 2867134094 16777216 16-JUN-2022 06:54:04 16-JUN-22 06.54.05.259000000 AM 16-JUN-22 06.54.05.259000000 AM +00 00:00:00.000000 1 1 0 0
1 0gk5hj9wnp2vs 2221895109 2867134094 16777217 16-JUN-2022 08:54:13 16-JUN-22 08.54.18.891000000 AM 16-JUN-22 11.54.08.844000000 AM +00 02:59:49.953000 1054 1054 0 0
3 fw985yayy1mfz 2221895109 2867134094 50331649 16-JUN-2022 11:16:29 16-JUN-22 11.16.31.491000000 AM 16-JUN-22 11.16.31.491000000 AM +00 00:00:00.000000 1 1 0 0
4 0gk5hj9wnp2vs 2221895109 2867134094 67108865 17-JUN-2022 07:33:10 17-JUN-22 07.33.11.282000000 AM 17-JUN-22 07.33.11.282000000 AM +00 00:00:00.000000 1 1 0 0
4 fw985yayy1mfz 2221895109 2867134094 67108865 17-JUN-2022 07:33:20 17-JUN-22 07.33.21.522000000 AM 17-JUN-22 07.33.21.522000000 AM +00 00:00:00.000000 1 1 0 0
2 0gk5hj9wnp2vs 2221895109 2867134094 33554432 21-JUN-2022 06:59:24 21-JUN-22 06.59.25.728000000 AM 21-JUN-22 06.59.25.728000000 AM +00 00:00:00.000000 1 1 0 0
2 0gk5hj9wnp2vs 2221895109 2867134094 33554433 21-JUN-2022 09:14:05 21-JUN-22 09.14.10.464000000 AM 21-JUN-22 12.14.00.096000000 PM +00 02:59:49.632000 1054 1054 0 0
2 0gk5hj9wnp2vs 2221895109 2867134094 33554434 21-JUN-2022 11:07:36 21-JUN-22 11.07.44.480000000 AM 21-JUN-22 02.07.23.680000000 PM +00 02:59:39.200000 1053 1053 0 0
4 fw985yayy1mfz 2221895109 2867134094 67108865 21-JUN-2022 11:43:04 21-JUN-22 11.43.13.010000000 AM 21-JUN-22 02.42.33.651000000 PM +00 02:59:20.641000 1051 1051 0 0
4 fw985yayy1mfz 2221895109 2867134094 67108866 21-JUN-2022 12:06:49 21-JUN-22 12.06.57.586000000 PM 21-JUN-22 03.06.38.514000000 PM +00 02:59:40.928000 1053 1053 0 0
3 fw985yayy1mfz 2221895109 2867134094 50331649 23-JUN-2022 10:05:02 23-JUN-22 10.05.05.667000000 AM 23-JUN-22 01.04.45.251000000 PM +00 02:59:39.584000 1053 1053 0 0
1 0gk5hj9wnp2vs 2221895109 2867134094 16777216 23-JUN-2022 10:57:04 23-JUN-22 10.57.04.857000000 AM 23-JUN-22 10.57.04.857000000 AM +00 00:00:00.000000 1 1 0 0
This is a section of an RTSM report from an execution when it was ok:
Potentially I could set system-level trace for this sql id and wait for it to reoccur, but what events would I be setting to try to capture what causes this runtime decision? I presume it wouldn’t be an optimizer trace?
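One hedged option, rather than an optimizer trace: target SQL trace at the known sql ids so that tracing only fires when they next execute, using the documented sql_trace event filter:
alter system set events 'sql_trace[sql:fw985yayy1mfz|0gk5hj9wnp2vs] wait=true, bind=true';
-- ...wait for the slow execution to reoccur, then:
alter system set events 'sql_trace[sql:fw985yayy1mfz|0gk5hj9wnp2vs] off';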
I have a couple of performance issues at the moment which link back to queries against large tables choosing to do a FULL segment scan rather than using a non-prefixed local index.
The problems are not currently reproducible, in or outside of production.
And when I look at a reproduction of a problem at a specific point in time, the system state is not necessarily as it was when the problem was occurring.
My main theory is that somewhere on these large partitioned tables there is intermittently an unusable partition of the index, perhaps due to some aspect of a data load, data movement or other maintenance operation on a different subpartition (where each partition represents a different day of a specific feed and each subpartition represents a version of that daily load).
However I have no evidence currently to support that theory.
If this theory was in the right ballpark, what would it mean?
That a query using bind variables (tick) would not use the index by default, as the optimizer would not be able to guarantee that the shareable plan would never access the unusable subpartition.
A query using literals would not be subject to the same restriction, because the optimizer would know, even if an unusable subpartition existed somewhere, whether it was relevant to this specific query (and any DDL which might make a relevant partition unusable would invalidate the shared cursor).
Whilst I was musing over this yesterday, I had a vague sense of familiarity, and I found these previous posts to jog my memory:
create table t1
(pkey varchar2(24) not null
,spkey number not null
,id varchar2(24) not null
,version number not null
,status varchar2(24) not null
,filler varchar2(255))
partition by list (pkey)
subpartition by list (spkey)
subpartition template
(subpartition sp_1 values (1))
(partition p_0 values ('X'));
alter table t1 add partition p_abc values ('ABC');
alter table t1 add partition p_def values ('DEF');
alter table t1 add partition p_ghi values ('GHI');
create index i1 on t1 (id) local;
create sequence s1 start with 10000000;
create type o1 is object
(pkey varchar2(24)
,spkey number
,id varchar2(24));
/
create type c1 is table of o1;
/
select object_name, subobject_name, object_type from user_objects where object_name = 'T1' order by object_name, subobject_name nulls first;
OBJECT_NAM SUBOBJECT_ OBJECT_TYPE
---------- ---------- -----------------------
T1 TABLE
T1 P_0 TABLE PARTITION
T1 P_0_SP_1 TABLE SUBPARTITION
T1 P_ABC TABLE PARTITION
T1 P_ABC_SP_1 TABLE SUBPARTITION
T1 P_DEF TABLE PARTITION
T1 P_DEF_SP_1 TABLE SUBPARTITION
T1 P_GHI TABLE PARTITION
T1 P_GHI_SP_1 TABLE SUBPARTITION
select object_name, subobject_name, object_type from user_objects where object_name = 'I1' order by object_name, subobject_name nulls first;
OBJECT_NAM SUBOBJECT_ OBJECT_TYPE
---------- ---------- -----------------------
I1 INDEX
I1 P_0 INDEX PARTITION
I1 P_0_SP_1 INDEX SUBPARTITION
I1 P_ABC INDEX PARTITION
I1 P_ABC_SP_1 INDEX SUBPARTITION
I1 P_DEF INDEX PARTITION
I1 P_DEF_SP_1 INDEX SUBPARTITION
I1 P_GHI INDEX PARTITION
I1 P_GHI_SP_1 INDEX SUBPARTITION
And I’m going to seed some data simply:
declare
p sys.odcivarchar2list := sys.odcivarchar2list('ABC','DEF','GHI');
begin
for i in 1 .. p.count
loop
for j in 1 .. 10000
loop
insert into t1
values (p(i), 1, 'PB:'||s1.nextval, 1, 'LATEST', rpad('X',255,'X'));
end loop;
end loop;
commit;
end;
/
select pkey, spkey, count(*), min(id), max(id) from t1 group by pkey, spkey;
PKEY SPKEY COUNT(*) MIN(ID) MAX(ID)
---- ---------- ---------- ------------------------ ------------------------
ABC 1 10000 PB:10000000 PB:10009999
DEF 1 10000 PB:10010000 PB:10019999
GHI 1 10000 PB:10020000 PB:10029999
And then, just to keep it representative of my real-world problem, I’m going to run a bulk update to set the status of some of the rows to SUPERSEDED (in the real world this is then followed by the INSERT of LATEST versions of those rows supplied by the client, but there’s no need to do that here):
declare
v1 c1 := c1(o1('DEF',1,'PB:10010001'),
o1('DEF',1,'PB:10010002'),
o1('DEF',1,'PB:10010003'),
o1('DEF',1,'PB:10010004'),
o1('DEF',1,'PB:10010005'));
begin
forall i in 1 .. v1.count
update /*+ find_me_dom */
t1
set status = 'SUPERSEDED'
where pkey = v1(i).pkey
and spkey = v1(i).spkey
and id = v1(i).id;
end;
/
And I’m going to look up my sql id from v$sql and plug it into DBMS_XPLAN to see how my UPDATE performed:
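(The lookup itself is the usual trick of searching for the hint text:)
select sql_id, child_number
from v$sql
where sql_text like '%find_me_dom%'
and sql_text not like '%v$sql%';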
select * from table(dbms_xplan.display_cursor('bjddz8c4jrk3y',0));
-------------------------------------------------------------
| Id | Operation | Name |
-------------------------------------------------------------
| 0 | UPDATE STATEMENT | |
| 1 | UPDATE | T1 |
| 2 | PARTITION LIST SINGLE | |
| 3 | PARTITION LIST SINGLE | |
| 4 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1 |
| 5 | INDEX RANGE SCAN | I1 |
-------------------------------------------------------------
Range Scan of I1, as desired.
Now I will mark one subpartition of the index unusable (a different subpartition from the one affected by the update) and repeat the update, the sql lookup and the fetching of the plan (note the different child number):
alter index i1 modify subpartition P_ABC_SP_1 unusable;
--Repeat dml
--lookup sql
select * from table(dbms_xplan.display_cursor('bjddz8c4jrk3y',1));
-- ^ different child cursor number because marking the index subpartition as unusable
-- has invalidated the previous child cursor 0
---------------------------------------------
| Id | Operation | Name |
---------------------------------------------
| 0 | UPDATE STATEMENT | |
| 1 | UPDATE | T1 |
| 2 | PARTITION LIST SINGLE | |
| 3 | PARTITION LIST SINGLE | |
| 4 | TABLE ACCESS STORAGE FULL| T1 |
---------------------------------------------
The access method is no longer an index range scan.
As I covered initially, as part of the parse process the execution plan has to be good for all possible ranges of supplied values, in the event that the SQL child cursor is shared by executions with different binds from those it was initially parsed with. Because an update (or select, etc.) against rows in table subpartition (ABC,1) cannot use the index – that index subpartition is unusable – the optimizer has to discount it.
We can force the index:
declare
v1 c1 := c1(o1('DEF',1,'PB:10010001'),
o1('DEF',1,'PB:10010002'),
o1('DEF',1,'PB:10010003'),
o1('DEF',1,'PB:10010004'),
o1('DEF',1,'PB:10010005'));
begin
forall i in 1 .. v1.count
update /*+ find_me_dom index(t1 (id))*/
t1
set status = 'SUPERSEDED'
where pkey = v1(i).pkey
and spkey = v1(i).spkey
and id = v1(i).id;
end;
/
select * from table(dbms_xplan.display_cursor('3m2xyxgruxkpr',0));
-------------------------------------------------------------
| Id | Operation | Name |
-------------------------------------------------------------
| 0 | UPDATE STATEMENT | |
| 1 | UPDATE | T1 |
| 2 | PARTITION LIST SINGLE | |
| 3 | PARTITION LIST SINGLE | |
| 4 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1 |
| 5 | INDEX RANGE SCAN | I1 |
-------------------------------------------------------------
But note, as covered in my earlier link, if we force the index in a situation where it can’t be used, it will error in current versions:
alter index i1 modify subpartition P_DEF_SP_1 unusable;
declare
v1 c1 := c1(o1('DEF',1,'PB:10010001'),
o1('DEF',1,'PB:10010002'),
o1('DEF',1,'PB:10010003'),
o1('DEF',1,'PB:10010004'),
o1('DEF',1,'PB:10010005'));
begin
forall i in 1 .. v1.count
update /*+ find_me_dom index(t1 (id))*/
t1
set status = 'SUPERSEDED'
where pkey = v1(i).pkey
and spkey = v1(i).spkey
and id = v1(i).id;
end;
/
ORA-01502: index 'DOM.I1' or partition of such index is in unusable state
ORA-06512: at line 8
01502. 00000 - "index '%s.%s' or partition of such index is in unusable state"
*Cause: An attempt has been made to access an index or index partition
that has been marked unusable by a direct load or by a DDL
operation
*Action: DROP the specified index, or REBUILD the specified index, or
REBUILD the unusable index partition
If I rebuild that second index subpartition, then I can look at my second option – using literals rather than binds, at least for the partition keys – but we then lose the option of FORALL.
alter index i1 rebuild subpartition P_DEF_SP_1;
declare
v1 c1 := c1(o1('DEF',1,'PB:10010001'),
o1('DEF',1,'PB:10010002'),
o1('DEF',1,'PB:10010003'),
o1('DEF',1,'PB:10010004'),
o1('DEF',1,'PB:10010005'));
begin
for i in 1 .. v1.count
loop
execute immediate
'update /*+ find_me_dom */
t1
set status = ''SUPERSEDED''
where pkey = '''||v1(i).pkey||''''||'
and spkey = '||v1(i).spkey||'
and id = :1' using v1(i).id;
end loop;
end;
/
select * from table(dbms_xplan.display_cursor('5a3vcac58x32q',0));
----------------------------------------
| Id | Operation | Name |
----------------------------------------
| 0 | UPDATE STATEMENT | |
| 1 | UPDATE | T1 |
| 2 | PARTITION LIST SINGLE | |
| 3 | PARTITION LIST SINGLE| |
| 4 | INDEX RANGE SCAN | I1 |
----------------------------------------
At least we have options whilst playing a waiting game to see if we can observe a problem state which might cause such an issue…. or wait for other possibilities to make themselves known…
I’m currently an Oracle Ace (for a few more weeks at least!). I love writing code, digging out significant performance gains in existing processes and troubleshooting problems. Simples.
If you’re interested, please drop me a mail, add a comment, send me a linkedin message, anything.
This is the distillation of a performance problem in some regulatory reporting where a mistake in the optimizer cardinality estimates causes all sorts of knock-on performance issues. This post does not look at those knock-on effects but examines what is happening with the optimizer in this case. Most of the digging into the issue was done by Valerii Satybaldyev, and thanks to Sayan Malakshinov, notably for the contributions on the oracle-l thread.
Script to reproduce (currently on 19.6):
drop table t1;
create table t1
(id number not null
,version number not null
,create_ts timestamp not null
,modify_ts timestamp
,status varchar2(24) generated always as (NVL2("MODIFY_TS",'SUPERSEDED','LATEST'))
,id2 number not null
,yn varchar2(1) not null
,business_date date not null);
insert into t1
(id, version, create_ts, id2, yn, business_date)
select rownum
, 1
, systimestamp
, rownum
, case when mod(rownum,2) = 1 then 'Y' else 'N' end
, trunc(sysdate,'MON') + mod(rownum,10)
from dual
connect by level <= 1000;
exec dbms_stats.gather_table_stats(USER,'T1');
explain plan for
with x as
(select * from t1
union all
select * from t1)
select *
from x
where yn = 'Y';
select * from table(dbms_xplan.display);
The problem comes with the estimate of 1 row in the outermost SELECT; for subsequent joins in particular, this cardinality estimate causes significant performance problems.
If we look at the 10053 trace, we can see the root cause of the problem. Compare the BASE STATISTICAL INFORMATION snippets for T1 and for the inline view X. Note in particular the numbers for column YN, which is the predicate column, as well as those for ID2 and BUSINESS_DATE. These are the columns which appear after the virtual column STATUS. (In the original trace file, the entry under T1 for column #8 comes before column #7; I have reordered just to make the correlation between T1 and X more obvious.)
For inline view X, column STATUS inherits the stats of ID2, ID2 gets those of YN, YN those of BUSINESS_DATE, and BUSINESS_DATE gets NO STATISTICS. So for our predicate on YN we get completely the wrong base statistics. Then, in addition, because the supplied value is out of range according to those wrong statistics, the estimates get further pro-rated.
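The offset is easy to see in the column dictionary – a sketch using USER_TAB_COLS, showing the virtual column sitting in the middle of the column order:
select column_id, internal_column_id, column_name, virtual_column
from user_tab_cols
where table_name = 'T1'
order by internal_column_id;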
A potential workaround is not risk-free, as it changes the implicit order of the selected columns, which could affect certain code:
alter table t1 modify status invisible;
alter table t1 modify status visible;
explain plan for
with x as
(select * from t1
union all
select * from t1)
select *
from x
where yn = 'Y';
select * from table(dbms_xplan.display);
Plan hash value: 3505968351
------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1000 | 90000 | 548 (0)| 00:00:01 |
| 1 | VIEW | | 1000 | 90000 | 548 (0)| 00:00:01 |
| 2 | UNION-ALL | | | | | |
|* 3 | TABLE ACCESS STORAGE FULL| T1 | 500 | 20000 | 274 (0)| 00:00:01 |
|* 4 | TABLE ACCESS STORAGE FULL| T1 | 500 | 20000 | 274 (0)| 00:00:01 |
------------------------------------------------------------------------------------
The reason it works is that the virtual column is now logically at the end of the column definitions.
If we compare the trace once “fixed”, then we see the following in the X section:
Table: X Alias: X (NOT ANALYZED)
...
Column (#8): STATUS(VARCHAR2) NO STATISTICS (using defaults)
As a recap: when you do a direct path load into an empty segment, Oracle will take advantage and gather statistics on the data whilst it is loading it, i.e. there is no need to go back and re-read the data for an explicit stats gathering.
Note that, per Maria’s note, only base table and column statistics are gathered, not histograms – but those can be gathered without re-gathering the base column statistics via the GATHER AUTO option.
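A sketch of that histogram top-up via the documented OPTIONS parameter:
exec dbms_stats.gather_table_stats(USER,'T1',options=>'GATHER AUTO');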
Old news.
We have only recently upgraded a complex application from 11.2.0.4 and hit an issue which is really obvious in hindsight but worth pointing out, particularly in combination with another feature.
I’m sure the behaviour below will come as no surprise in 11.2.0.4:
drop table t1;
create global temporary table t1
on commit preserve rows
as
select * from user_objects where rownum <= 100;
explain plan for
select * from t1;
select * from table(dbms_xplan.display);
select num_rows from user_tables where table_name = 'T1';
select column_name, num_nulls, num_distinct from user_tab_columns where table_name = 'T1';
The output of note:
Plan hash value: 3617692013
----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 100 | 19000 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS STORAGE FULL | T1 | 100 | 19000 | 2 (0)| 00:00:01 |
----------------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement (level=2)
NUM_ROWS
----------
COLUMN_NAME NUM_NULLS NUM_DISTINCT
-------------------- ---------- ------------
GENERATED
SECONDARY
NAMESPACE
EDITION_NAME
OBJECT_NAME
SUBOBJECT_NAME
OBJECT_ID
DATA_OBJECT_ID
OBJECT_TYPE
CREATED
LAST_DDL_TIME
TIMESTAMP
STATUS
TEMPORARY
We load 100 rows, there are no stats, dynamic sampling kicks in, bingo the optimizer expects 100 rows.
Then for versions after 11.2.0.4 (in this case 19.6), with the addition of the SCOPE column in the script for 19c, we see that we now have table and column statistics after the load despite no explicit stats gathering call. Which, given the feature we’re discussing, should be completely obvious and expected.
I’ve used a temporary table here deliberately, purely to facilitate a comment at the end. Clearly this could be a normal heap table – but the SCOPE in the output is specific to the new feature of temporary table private statistics.
Again, we’ve loaded 100 rows and, as part of the direct path load, we have some base table & column statistics; the optimizer expects 100 rows. No dynamic sampling necessary.
But what if our data is loaded in multiple steps, might this new online statistics gathering cause us a bit of a problem?
It only gathers the stats when the segment is empty.
I’m not going to repeat the output above for the table & column statistics, just the execution plan detail.
drop table t1;
create global temporary table t1
on commit preserve rows
as
select * from user_objects where rownum <= 100;
insert /*+ append */ into t1
select * from user_objects;
commit;
insert /*+ append */ into t1
select * from user_objects;
commit;
explain plan for
select * from t1;
Which in 11.2.0.4 gives:
Plan hash value: 3617692013
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 32376 | 6007K | 93 (0)| 00:00:01 |
| 1 | TABLE ACCESS STORAGE FULL | T1 | 32376 | 6007K | 93 (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement (level=2)
select count(*) from t1;
COUNT(*)
----------
34392
And in 19.6
Plan hash value: 3617692013
----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 100 | 11300 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS STORAGE FULL | T1 | 100 | 11300 | 2 (0)| 00:00:01 |
----------------------------------------------------------------------------------
Note
-----
- Global temporary table session private statistics used
select count(*) from t1;
COUNT(*)
----------
59712*
*With an obvious difference in actual rows, because these are different environments on different versions.
The problem now is that our subsequent processing code – which drives off a table populated with different volumes of data in multiple stages – used to work fine in 11.2.0.4 but is now at risk of some serious consequences from cardinality mis-estimates, thanks to inaccurate statistics from our new feature, which is behaving exactly as it should.
In our real-world example, it was not always the case that there was a big skew between the first load and the subsequent loads, and this consequence, obvious in hindsight, took a few weeks to reveal itself – until a first load into an empty segment happened to be very small and was followed by some much larger loads.
Furthermore, this also involved the private temporary table aspect. So when you’re looking at solutions, you might consider whether some explicit stats gathering at the end of the load is the answer – and in that case whether it should be private stats or shared stats – or whether it even matters, when quite possibly the easiest solution is to just use dynamic sampling in the subsequent select/s. A sketch of those options follows.
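All of these are sketches rather than recommendations:
-- Option 1: explicit stats gathering after the final load
-- (session-private stats for a GTT by default in 19c)
exec dbms_stats.gather_table_stats(USER,'T1');
-- Option 2: prefer shared statistics for this GTT via the documented preference
exec dbms_stats.set_table_prefs(USER,'T1','GLOBAL_TEMP_TABLE_STATS','SHARED');
-- Option 3: just force dynamic sampling in the subsequent selects
select /*+ dynamic_sampling(t1 2) */ * from t1;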
I was reminded of this issue when writing this other article earlier on a more serious problem that online statistics gathering was causing:
Unfortunately this is just an anecdotal, long after-the-fact note on issues observed on upgrade from 11.2.0.4 to 19.6 with the new real-time statistics feature. I thought I had jotted this down previously, but there doesn’t appear to be another article published or in draft. The issue was unfortunately not reproducible outside of the application, and was also intermittent.
Feature was causing several issues in one particular area of the application.
First off were ORA-00600 errors in the processing, with immediate disconnection of the processing session (apart from trace file generation).
Second, it sometimes left evidence of integer overflow issues in the column statistics.
Third, those "corrupt" statistics then led to subsequent bad execution plans for complicated queries, evidenced by MERGE JOIN / MERGE JOIN CARTESIAN operations and row estimates of max 18E.
Examples of the latter complex queries with cardinality issues leading to bad plans and severe performance problems, when distilled to their simplest form, could be seen as follows:
explain plan for
SELECT *
FROM XXXXX
WHERE business_date = :busines_date
AND id = :id
AND processed_flag = 'Y';
select * from table(dbms_xplan.display);
----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 18E| 15E| 647K (1)| 00:00:26 |
|* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| XXXXXXXXXXXX | 18E| 15E| 647K (1)| 00:00:26 |
|* 2 | INDEX RANGE SCAN | XXXXXXXXXXXX | 733K| | 1538 (1)| 00:00:01 |
----------------------------------------------------------------------------------------------------
Note
-----
- dynamic statistics used: statistics for conventional DML
explain plan for
SELECT /*+ opt_param('optimizer_features_enable','18.1.0') */ *
FROM XXXXX
WHERE business_date = :busines_date
AND id = :id
AND processed_flag = 'Y';
select * from table(dbms_xplan.display);
----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 5 | 1405 | 13 (0)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| XXXXXXXXXXXX | 5 | 1405 | 13 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | XXXXXXXXXXXX | 10 | | 3 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------
It should be obvious that NUM_NULLS cannot be < 0; there were also some strange, unsuitable values in parts of LOW_VALUE which I’ve omitted from the output above.
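A sketch for hunting that sort of corruption – negative NUM_NULLS should be impossible, and in 19c the NOTES column of the stats views flags statistics which came from conventional DML:
select owner, table_name, column_name, num_nulls, notes, last_analyzed
from dba_tab_col_statistics
where num_nulls < 0;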
Traction with Oracle Support was disappointing – unfortunately, once they knew we didn’t have a problem if we just turned the feature off, that was good enough for them, and to be honest we just needed to move on in our testing. This was back in August 2020.
So the real-time statistics feature is turned off across the board via the parameters below which, being underscore parameters, we had Oracle Support’s blessing to set.
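For reference – the commonly cited pair for disabling real-time statistics, though verify against Oracle Support for your own case before setting any underscore parameter:
alter system set "_optimizer_gather_stats_on_conventional_dml" = false scope=both sid='*';
alter system set "_optimizer_use_stats_on_conventional_dml" = false scope=both sid='*';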
Previously I had blogged about an intermittent problem which was affecting one of our feeds where a parallel query would just hang as the QC would wait forever on a parallel slave which never joined the party.
The theory was that the parallel slave was affected by another issue – possibly the slave process died and, until PMON cleaned it up, the QC assumed it was still alive (ref doc id 250960.1).
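When it was hanging, the sanity check is whether every slave the QC thinks it has still exists as a session – a sketch, assuming you have the QC’s SID (use the gv$ equivalents on RAC):
select px.qcsid, px.sid, px.server_group, px.server_set, s.event, s.state
from v$px_session px
left join v$session s on s.sid = px.sid and s.serial# = px.serial#
where px.qcsid = :qc_sid;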
Intermittent problems can be a right pain. This issue mainly affected one particular process most days, kicking off between 1 and 2 am – the same process runs 30-40 times per day but only tended to hit the issue once, so it seemed somewhat data-dependent – but it would also affect some other processes in different schemas very occasionally.
Over time, evidence built up from trace files suggested that the issue was mostly related to:
If the QC received the error, the connection would terminate instantly (apart from trace file generation); if the slave received the error, then the QC would hang.
One patch – 31045929 – suggested by Oracle Support was applied but was not effective.
It’s disappointing to have to turn off yet another “new” feature – this one related to stats gathering for direct path loads into an empty segment – when we have already had to turn off real-time statistics for conventional DML (link to future post – I thought I had blogged about this but it seems not), but hopefully we will still be able to pursue a patch fix to success.
Just an illustration of a SQL / XML utility to count rows in a partition.
In my case, my requirement was to count rows in all subpartitions where there is data which might not belong in that subpartition (due to NOVALIDATE partition exchange). My thoughts were that I wanted the equivalent of a NESTED LOOP – for each partition, do XYZ – and I was happy for this to run for a while in the background without consuming significant parallel resources.
Partition keys here are not nullable.
with s as
(select /*+ */
sp.table_owner, sp.table_name, sp.subpartition_name
, sn.snapshot_id, sn.version
from dba_tab_subpartitions sp
join my_snapshot_table sn on (sn.subpartition_name = sp.subpartition_name)
where sp.table_owner = :owner
and sp.table_name = :table_name)
select s.*, x.cnt
from s
, xmltable('for $i in /ROWSET/ROW/CNT
return $i'
passing xmltype(
dbms_xmlgen.getxml
('select /*+ parallel(2) */ count(*) cnt '
||'from '||s.table_owner||'.'||s.table_name||' SUBPARTITION ('||s.subpartition_name||') '
||'where (snapshot_id, snapshot_version) != (('''||s.snapshot_id||''', '||s.version||'))'
))
columns cnt number path '/') x
where x.cnt > 0;