I’ve been reading up on Oracle 12c to get certified and to help advise my company on potential uses for the new version. I’ve been looking forward to researching the new Adaptive Optimization features because it makes so much sense that the database should change its plans when it finds differences between the expected number of rows each part of a plan sees and the actual number of rows.
I’ve written blog posts in the past about limitations of the optimizer related to its ability to determine the number of rows (cardinality) that steps in a plan would see. I took the scripts from some of these and ran them on a 12c instance to see if the new features would cause any of the inefficient plans to change to the obvious efficient plans.
Sadly, none of my examples ran differently on 12c. I don’t doubt that there are examples that run better because of the new features but the ones I constructed earlier didn’t see any improvement. So, I thought I would blog about one such example.
Here is the original blog post with an example run on 11.2 Oracle: url
Here is the same script run on 12c: zip
Here is the query with the bad plan:
select B.DIVNUM
from DIVISION A, SALES B
where A.DIVNUM=B.DIVNUM
and A.DIVNAME='Mayberry'

Plan hash value: 480645376

--------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)|
--------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   421 (100)|
|*  1 |  HASH JOIN         |          |   500K|  8300K|   421   (2)|
|*  2 |   TABLE ACCESS FULL| DIVISION |     1 |    14 |     2   (0)|
|   3 |   TABLE ACCESS FULL| SALES    |  1000K|  2929K|   417   (1)|
--------------------------------------------------------------------
There is only 1 SALES row that has the DIVNUM associated with DIVNAME='Mayberry'. There are 1,000,001 SALES rows in total, and there is an index on SALES.DIVNUM, so an index scan would be the most efficient access method and a nested loops join the most efficient join method. But the 12c optimizer chooses a hash join and full table scans instead.
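For comparison, the plan I would expect can be forced with hints, along these lines (just a sketch; I'm assuming here that the index on SALES.DIVNUM is named SALESINDEX):

select /*+ use_nl(B) index(B SALESINDEX) */ B.DIVNUM
from DIVISION A, SALES B
where A.DIVNUM = B.DIVNUM
and A.DIVNAME = 'Mayberry';

With the hints in place the optimizer does the efficient nested loops join with an index range scan on SALES, which shows the inefficiency is a cardinality estimation problem, not a missing access path.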
According to the 12c SQL Tuning manual there are two types of Adaptive Optimization that might help in this case: Adaptive Plans and Adaptive Statistics. I tried to tweak my test script to get Adaptive Statistics to kick in by commenting out the dbms_stats calls but it didn’t help. Also, I tried running the query several times in a row but it never changed plan.
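To double-check that the query was at least eligible for the adaptive features, something like this can be run (a sketch based on my reading of the 12c documentation; the sql_text filter is just an illustration for this test query):

-- Confirm the adaptive features are enabled (TRUE is the 12.1 default)
show parameter optimizer_adaptive_features

-- After running the query, see whether its plan was resolved adaptively
select sql_id, child_number, is_resolved_adaptive_plan
from v$sql
where sql_text like 'select B.DIVNUM from DIVISION%';

-- The +adaptive format flag shows any inactive adaptive plan steps
select * from table(dbms_xplan.display_cursor(format => '+adaptive'));

In my runs IS_RESOLVED_ADAPTIVE_PLAN stayed null for this statement, consistent with the plan never changing.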
I can see why Adaptive Plans wouldn't work here. How long should the optimizer let the full scan of SALES run before deciding to switch to a nested loops join with an index scan? If it gets halfway through the table, it is already too late. So I'm not sure how Adaptive Plans could change the plan on the fly when the optimizer expects a lot of rows but only finds a few.
As for Adaptive Statistics, I guess this is just a case they still can't handle. It is something like needing a histogram that works across a join, which would be pretty complex to solve in general.
Anyway, this all reminds me of when I first learned about histograms. I got all excited that histograms would solve all my query performance problems and then came crashing down to earth when I realized it wasn’t the case. I think the analogy fits. Histograms improved cardinality estimates and can help in certain cases. I think the new adaptive features will help improve plans by using real cardinality figures where it can, but they aren’t a cure-all.
I’m not sure that getting cardinality right is a solvable problem in the general case. You have to have a fast optimizer so there are limits to how much it can do.
I ran all this as the user SYSTEM on a base 12c 64-bit Linux install with all the default parameters unchanged on the standard default database.
– Bobby
Hi Bobby,
If you convert the index SALESINDEX from a B-tree index to a bitmap index, the adaptive plan kicks in this time.
Best regards
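The suggested conversion would look something like this (a sketch; note that bitmap indexes are generally only appropriate for low-DML environments such as data warehouses):

drop index SALESINDEX;
create bitmap index SALESINDEX on SALES (DIVNUM);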
Thanks for your comment. It looks like the optimizer will choose a bitmap index even though it has a wrong cardinality estimate of 500,000 rows so I don’t think the adaptive optimization is really at work in your example. I ran my example script on an 11.2 instance with a bitmap index and it chose the index even with the wrong cardinality. Also, in your plan card=500001 means the optimizer thinks you are pulling 500,001 rows instead of 1 row as it really is. So, I think the optimizer will choose bitmap indexes on equal conditions even when it thinks the cardinality is high. The point of my post is that the optimizer doesn’t choose the btree index even though it is obviously the better plan. 12c still doesn’t resolve this type of issue with its new adaptive optimization features.
11.2 bitmap index plan: