Adaptive Optimization Limitation Example

I’ve been reading up on Oracle 12c to get certified and to help advise my company on potential uses for the new version.  I’ve been looking forward to researching the new Adaptive Optimization features because it makes so much sense that the database should change its plans when it finds differences between the expected number of rows each part of a plan sees and the actual number of rows.

I’ve written blog posts in the past about limitations of the optimizer related to its ability to determine the number of rows (cardinality) that steps in a plan would see.  I took the scripts from some of these and ran them on a 12c instance to see if the new features would cause any of the inefficient plans to change to the obvious efficient plans.

Sadly, none of my examples ran differently on 12c.  I don’t doubt that there are examples that run better because of the new features but the ones I constructed earlier didn’t see any improvement.  So, I thought I would blog about one such example.

Here is the original blog post with an example run on 11.2 Oracle: url

Here is the same script run on 12c: zip

Here is the query with the bad plan:

select B.DIVNUM from DIVISION A,SALES B
where a.DIVNUM=B.DIVNUM and

Plan hash value: 480645376

------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)|
------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   421 (100)|
|*  1 |  HASH JOIN         |          |   500K|  8300K|   421   (2)|
|*  2 |   TABLE ACCESS FULL| DIVISION |     1 |    14 |     2   (0)|
|   3 |   TABLE ACCESS FULL| SALES    |  1000K|  2929K|   417   (1)|
------------------------------------------------------------------

There is only 1 SALES row that has the DIVNUM associated with DIVNAME=’Mayberry’. There are 1,000,001 SALES rows and there is an index on SALES.DIVNUM, so an index scan would be the most efficient access method and a nested loops join the most efficient join method. But the 12c optimizer chooses a hash join and full table scan instead.
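For comparison, here is a sketch of how the efficient plan could be forced with hints. The index name SALESINDEX comes from the comment thread; the rest of the WHERE clause is elided just as in the original query:

```sql
-- Force the nested loops join and b-tree index access that the
-- optimizer should have chosen on its own given the real cardinality.
select /*+ leading(A) use_nl(B) index(B SALESINDEX) */ B.DIVNUM
from DIVISION A, SALES B
where A.DIVNUM = B.DIVNUM and ...  -- original predicate elided
```

With the single matching DIVISION row driving the join, this plan does a handful of index block reads instead of a 1,000,001-row full scan of SALES.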

According to the 12c SQL Tuning manual there are two types of Adaptive Optimization that might help in this case: Adaptive Plans and Adaptive Statistics.  I tried to tweak my test script to get Adaptive Statistics to kick in by commenting out the dbms_stats calls but it didn’t help.  Also, I tried running the query several times in a row but it never changed plan.
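One way to check whether the adaptive features even engaged is to query the cursor directly. This is a sketch using standard 12c V$SQL columns and DBMS_XPLAN options; the sql_text filter and the &sql_id substitution are placeholders for the actual test query:

```sql
-- Both columns were added to V$SQL in 12c: was an adaptive plan
-- resolved for this cursor, and is it marked for re-optimization?
select sql_id, child_number, is_resolved_adaptive_plan, is_reoptimizable
from v$sql
where sql_text like 'select B.DIVNUM%';

-- Display the full adaptive plan, including the inactive branches.
select *
from table(dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL +ADAPTIVE'));
```

In my runs these columns never showed the plan as adaptive or reoptimizable for this query.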

I can see why Adaptive Plans wouldn’t work here.  How long will it let the full scan of SALES go before it decides to switch to a nested loops join with an index scan?  If it gets halfway through the table, it is too late.  So, I’m not sure how Adaptive Plans could change the plan on the fly when it expects a lot of rows and only finds a few.

On the Adaptive Statistics side, I guess this is just a case that it still can’t handle.  It is like needing a histogram across a join, which would be pretty complex to solve in general.
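For anyone who wants to push on the statistics side, one knob worth trying is the dynamic sampling level. This is a sketch; level 11 is the 12c setting that lets the optimizer gather dynamic statistics automatically whenever it thinks they might help:

```sql
-- Level 11 (new meaning in 12c) enables automatic dynamic statistics,
-- which can improve join cardinality estimates at parse time.
alter session set optimizer_dynamic_sampling = 11;
```

It did not change the plan in my test, but it is the most direct way to ask 12c to compensate for a bad join cardinality estimate with real sampled numbers.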

Anyway, this all reminds me of when I first learned about histograms.  I got all excited that histograms would solve all my query performance problems and then came crashing down to earth when I realized it wasn’t the case.  I think the analogy fits.  Histograms improved cardinality estimates and can help in certain cases.  I think the new adaptive features will help improve plans by using real cardinality figures where it can, but they aren’t a cure-all.

I’m not sure that getting cardinality right is a solvable problem in the general case.  You have to have a fast optimizer so there are limits to how much it can do.

I ran all this as the user SYSTEM on the base 12c 64 bit Linux install with all the default parameters unchanged on the standard default database.

– Bobby

About Bobby

I live in Chandler, Arizona with my wife and three daughters. I work for US Foods, the second largest food distribution company in the United States. I have worked in the Information Technology field since 1989. I have a passion for Oracle database performance tuning because I enjoy challenging technical problems that require an understanding of computer science. I enjoy communicating with people about my work.

2 Responses to Adaptive Optimization Limitation Example

  1. Anonymous says:

    Hi Bobby,

    If you transform the index SALESINDEX from BTREE to BITMAP, the adaptive plan kicks in this time.

    Best regards

    select B.DIVNUM
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        7      0.00       0.00          0          2          0           0
    Execute      7      0.00       0.00          0          0          0           0
    Fetch       14      0.00       0.00          0        168          0           7
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total       28      0.00       0.00          0        170          0           7
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 103
    Number of plan statistics captured: 3
    Rows (1st) Rows (avg) Rows (max)  Row Source Operation
    ---------- ---------- ----------  ---------------------------------------------------
             1          1          1  HASH JOIN  (cr=24 pr=0 pw=0 time=393 us cost=28 size=8500017 card=500001)
             1          1          1   NESTED LOOPS  (cr=24 pr=0 pw=0 time=134 us cost=28 size=8500017 card=500001)
             1          1          1    STATISTICS COLLECTOR  (cr=22 pr=0 pw=0 time=108 us)
             1          1          1     TABLE ACCESS FULL DIVISION (cr=22 pr=0 pw=0 time=100 us cost=18 size=14 card=1)
             1          1          1    BITMAP CONVERSION TO ROWIDS (cr=2 pr=0 pw=0 time=21 us cost=28 size=1500003 card=500001)
             1          1          1     BITMAP INDEX SINGLE VALUE SALESINDEX (cr=2 pr=0 pw=0 time=15 us)(object id 93325)
             0          0          0   BITMAP CONVERSION TO ROWIDS (cr=0 pr=0 pw=0 time=0 us cost=28 size=1500003 card=500001)
             0          0          0    BITMAP INDEX FAST FULL SCAN SALESINDEX (cr=0 pr=0 pw=0 time=0 us)(object id 93325)
    • Bobby says:

      Thanks for your comment. It looks like the optimizer will choose a bitmap index even though it has a wrong cardinality estimate of 500,000 rows so I don’t think the adaptive optimization is really at work in your example. I ran my example script on an 11.2 instance with a bitmap index and it chose the index even with the wrong cardinality. Also, in your plan card=500001 means the optimizer thinks you are pulling 500,001 rows instead of 1 row as it really is. So, I think the optimizer will choose bitmap indexes on equal conditions even when it thinks the cardinality is high. The point of my post is that the optimizer doesn’t choose the btree index even though it is obviously the better plan. 12c still doesn’t resolve this type of issue with its new adaptive optimization features.

      11.2 bitmap index plan:

      select B.DIVNUM from DIVISION A,SALES B where a.DIVNUM=B.DIVNUM and
      Plan hash value: 419279617
      | Id  | Operation                    | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
      |   0 | SELECT STATEMENT             |            |       |       |    15 (100)|          |
      |   1 |  NESTED LOOPS                |            |   500K|  8300K|    15   (0)| 00:00:01 |
      |*  2 |   TABLE ACCESS FULL          | DIVISION   |     1 |    14 |     5   (0)| 00:00:01 |
      |   3 |   BITMAP CONVERSION TO ROWIDS|            |   500K|  1464K|    15   (0)| 00:00:01 |
      |*  4 |    BITMAP INDEX SINGLE VALUE | SALESINDEX |       |       |            |          |
