July 28, 2014
There seems to me to be a relatively simple choice.
Either you accept that the Oracle Optimizer has a wealth of complicated strategies and that, in this complex effort to get the best execution it can, it will inevitably get it wrong sometimes, or take some extra executions to realise it’s wrong.
Or you stick your head in the sand and raise a bug for every unexpected poor execution or plan flip.
But let’s say that above all else you wanted Plan Stability.
This is a widespread desire.
What would be the best strategy?
And to what lengths would you take it?
SQL Plan Management features are designed to give you some degree of stability.
You could baseline statements which have given you a problem in the past.
What if that is not deemed adequate?
So, on the one hand, you could baseline everything you could find.
Taking a long history of AWR and regular snapshots of V$SQL, you could put it all in a SQL Plan Baseline.
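That “baseline everything” sweep might be sketched roughly as follows (the SQL Tuning Set name and snapshot range are invented for illustration; the calls are the standard DBMS_SQLTUNE / DBMS_SPM ones):

```sql
-- Sketch only: sqlset name and snap range are illustrative.
-- 1. Capture a long history of statements from AWR into a SQL Tuning Set
BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'my_awr_history');
END;
/

DECLARE
  cur DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  OPEN cur FOR
    SELECT VALUE(p)
    FROM   TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
                   begin_snap => 1000,
                   end_snap   => 2000)) p;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name     => 'my_awr_history',
                           populate_cursor => cur);
END;
/

-- 2. Load those plans into SQL Plan Baselines
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(sqlset_name => 'my_awr_history');
END;
/

-- 3. Supplemented by regular sweeps of the cursor cache
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
         sql_id          => '&sql_id',
         plan_hash_value => &plan_hash_value);
END;
/
```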
But, depending on your application, you might regularly get new SQL appearing unprotected by a baseline.
In one Hibernate application I know, a change in the application domain model often results in the table aliases changing, meaning that there is lots of brand new, but very familiar, SQL.
So, you then become dependent on having a reliable testing infrastructure which will generate nearly all the SQL you’re likely to get, in advance of production.
In addition, you might want multiple plans in your baseline – for Adaptive Cursor Sharing (ACS), for example – and then, once that extra bind awareness is flushed out of memory, you need a couple of executions to rediscover it (unless you use a SQL Patch to inject BIND_AWARE into the specific SQL statements).
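That SQL Patch route might look something like this (the statement text and patch name are invented; this is the 11g/12.1 internal call, and from 12.2 onwards the supported equivalent is DBMS_SQLDIAG.CREATE_SQL_PATCH):

```sql
-- Hypothetical sketch: patch one statement so it is bind-aware
-- from its first execution, rather than after a few mis-steps.
BEGIN
  DBMS_SQLDIAG_INTERNAL.I_CREATE_PATCH(
    sql_text  => 'select * from orders where status = :b1',
    hint_text => 'BIND_AWARE',
    name      => 'orders_bind_aware_patch');
END;
/
```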
It’s certainly no guarantee of stability.
What is the alternative?
I like to argue that most problematic executions stem from two sources:
- Badly written SQL
- Using poor combinations of features – e.g. bind variable peeking + histograms
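The second of those can be shown with a toy example (table, data and bind values all invented): with a histogram on a skewed column, whichever bind value happens to be peeked at hard parse time dictates the plan that every subsequent execution then shares.

```sql
-- Toy illustration of bind variable peeking + histogram instability.
CREATE TABLE t (skew NUMBER, pad VARCHAR2(100));

INSERT INTO t
SELECT CASE WHEN ROWNUM = 1 THEN 0 ELSE 1 END, RPAD('x', 100)
FROM   dual CONNECT BY LEVEL <= 10000;

CREATE INDEX t_i ON t (skew);

EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'T', method_opt => 'FOR COLUMNS skew SIZE 2')

VARIABLE b NUMBER
EXEC :b := 0
-- Peeked value 0 is rare: likely an index plan, now shared by all executions.
SELECT COUNT(pad) FROM t WHERE skew = :b;
-- After the cursor ages out, a first execution with :b := 1 would likely
-- peek the popular value and fix a full scan for all later binds instead.
```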
The other day I made a comment to someone that “if you’re heavily reliant on ACS, perhaps you should rethink whether you should be using literals rather than bind variables”.
So, you might then take the position that plan instability stems from increasing complexity in the optimizer.
In which case, maybe a viable strategy might be to turn off much of the complexity:
- Bind Variable Peeking
- Cardinality feedback
- Adaptive Cursor Sharing
- Dynamic Sampling
- Adaptive direct path reads
- 12c adaptive execution plans
- Anything with the word “adaptive” in it?
- Default Stats job and/or default method_opt histogram behaviour
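Loosely, turning much of that off might look like the sketch below. The underscore parameters are real but unsupported without Oracle Support’s blessing, their names vary by version, and this is emphatically an illustration of the strategy rather than a recommendation:

```sql
-- Illustrative only: hidden ("_") parameters should be set
-- only under Oracle Support guidance; names are version-specific.
ALTER SYSTEM SET "_optim_peek_user_binds" = FALSE;              -- bind variable peeking
ALTER SYSTEM SET "_optimizer_use_feedback" = FALSE;             -- cardinality feedback
ALTER SYSTEM SET "_optimizer_adaptive_cursor_sharing" = FALSE;  -- ACS
ALTER SYSTEM SET "_optimizer_extended_cursor_sharing_rel" = 'NONE';
ALTER SYSTEM SET optimizer_dynamic_sampling = 0;                -- dynamic sampling
ALTER SYSTEM SET "_serial_direct_read" = NEVER;                 -- adaptive direct path reads
ALTER SYSTEM SET optimizer_adaptive_features = FALSE;           -- 12.1 adaptive plans etc.

-- Default stats job and default histogram behaviour
BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);
  DBMS_STATS.SET_GLOBAL_PREFS('METHOD_OPT', 'FOR ALL COLUMNS SIZE 1');
END;
/
```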
This seems quite radical to many. Perhaps justifiably.
Perhaps it would be nice if there were new optimizer strategies – OPTIMIZER_MODE? – perhaps “AGGRESSIVE_PERFORMANCE” (maybe “ADAPTIVE_PERFORMANCE_WITH_THE_ODD_HICCUP” ?) vs “PERFORMANCE_STABILITY” which were an umbrella of many of the features above.
To what extent are you concerned with performance stability, above all else?
How many of the optimizer features in each release are actually aligned with that widespread concern?