Submitted two abstracts for Oracle OpenWorld 2017

I submitted two abstracts for Oracle OpenWorld 2017, covering two talks that I have been thinking about putting together:

  • Python for the Oracle DBA
  • Toastmasters for the Oracle DBA

I want to do these talks because they describe two things that I have spent time on and that have been valuable to me.

I have given several recent talks about Delphix. Kyle Hailey let me use his slot at Oaktable World in 2015 which was at the same time as Oracle OpenWorld 2015. Right after that I got to speak at Delphix Sync which was a Delphix user event. More recently I did a Delphix user panel webinar.

So, I have done a lot of Delphix talks lately, and that is because I have done a lot with Delphix in my work. But I have also done a lot with Python and Toastmasters, which is why I'm planning to put together presentations on those two topics.

I probably go to one conference every two years, so I'm not a frequent speaker, but I have a list of conferences that I am thinking about submitting these two talks to, hoping to speak at one. These conferences are competitive, and I've seen speakers better than me have trouble getting slots at them. But here is my rough idea of where I want to submit the talks:

I've never been to RMOUG, but it is in Denver, so that is a short flight, and I have heard good things about it.

Also, we have our own local AZORA group in Phoenix. Recently we have had some really good ACE Director/Oak Table type speakers, but I think they might like to have some local speakers as well, so we will see if that works out.

If all else fails I can give the talks at work. I need to start working on the five speeches in my Toastmasters “Technical Presentations” manual, which is part of the Advanced Communication Series. I haven't even cracked the book open, so I don't know if it applies, but it seems likely that I can use these two talks for a couple of the speech projects.

Anyway, I’ve taken the first steps towards giving my Python and Toastmasters speeches. Time will tell when these will actually be presented, but I know the value that I have received from Python and Toastmasters and I’m happy to try to put this information out there for others.

Bobby


How to find the object that caused ORA-08103 error

A developer told me that two package executions died with ORA-08103 errors and he didn’t know which object caused the errors.

I found two trace files that had the following contents:

*** SESSION ID:(865.1201) 2017-04-17 10:17:09.476
OBJD MISMATCH typ=6, seg.obj=21058339, diskobj=21058934, dsflg=100000, dsobj=21058339, tid=21058339, cls=1

*** SESSION ID:(595.1611) 2017-04-17 10:17:35.395
OBJD MISMATCH typ=6, seg.obj=21058340, diskobj=21058935, dsflg=100000, dsobj=21058340, tid=21058340, cls=1

Bug 13844883 on Oracle’s support site gave me the idea of looking up the diskobj value from the trace as the object’s current data object id. So, I needed to look up 21058934 and 21058935. I used this query to find the objects:

select * from dba_objects
where data_object_id in (21058934, 21058935);

This pointed to two index partitions that had been rebuilt while the package was running. I’m pretty sure this caused the ORA-08103 error. So, if you get an ORA-08103 error find diskobj in the trace file and look it up as DATA_OBJECT_ID in dba_objects.
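
To make the affected partitions easier to spot, you can narrow the same lookup down to just the identifying columns:

select owner, object_name, subobject_name, object_type
from dba_objects
where data_object_id in (21058934, 21058935);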

Bobby


Optimizer bug fix makes a query run more than 3 times slower

I’m working on an 11.1.0.7 to 11.2.0.4 upgrade and found a handful of queries that take more than 3 times longer on 11.2.0.4 than on 11.1.0.7. The data and optimizer statistics are very similar on the two test databases, so I’m pretty sure that an optimizer bug fix caused this difference. The irony is that an optimizer fix that we get with the upgrade to the very stable 11.2.0.4 release is causing a 3x slowdown in the query.

For my testing I’m using the gather_plan_statistics hint and this query to dump out the plan after executing the query:

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS'));
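
For example, the whole round trip looks like this, with my_table and my_flag standing in for the real table and column:

select /*+ gather_plan_statistics */ count(*)
from my_table
where my_flag in ('M','Y');

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS'));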

I used an outline hint to force the 11.2.0.4 plan to run under 11.1.0.7 and then I looked at the estimated and actual row counts to find a discrepancy. I found one table with estimated row counts that did not look correct on 11.1.0.7 but made sense on 11.2.0.4.

11.1.0.7

---------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name                      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem |  O/1/M   |
---------------------------------------------------------------------------------------------------------------------------------------------------------
|* 29 |           TABLE ACCESS BY INDEX ROWID  | MY_TABLE                  |     17 |      1 |      0 |00:00:07.34 |   96306 |       |       |          |
|* 30 |            INDEX RANGE SCAN            | MY_TABLE_PK               |     17 |     16 |    102 |00:00:01.20 |   96255 |       |       |          |

11.2.0.4

---------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name                      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem |  O/1/M   |
---------------------------------------------------------------------------------------------------------------------------------------------------------
|* 29 |           TABLE ACCESS BY INDEX ROWID  | MY_TABLE                  |     17 |      8 |      0 |00:00:07.44 |   96306 |       |       |          |
|* 30 |            INDEX RANGE SCAN            | MY_TABLE_PK               |     17 |     17 |    102 |00:00:01.22 |   96255 |       |       |          |

After the index range scan in step 30, the 11.1.0.7 plan estimates 16 rows, but the table access in step 29 estimates only 1. In 11.2.0.4 the estimate for step 29 is 8 rows. Given the optimizer statistics, the optimizer should have estimated 8 rows for step 29 in 11.1.0.7 as well. It appears that 11.2.0.4 fixed a bug in this area.

Here are the predicates for step 29 in the plan:

29 - filter(("PCP"."MY_FLAG"='M' OR "PCP"."MY_FLAG"='Y'))

So the column could have the value M or Y. The optimizer statistics show 4 distinct values for the column and no histogram, so the optimizer should assume that 1/4 of the rows match each value. Adding the two, the optimizer should have estimated that 1/4 + 1/4 = 1/2 of the rows from step 30 meet the criteria in step 29: 17/2 = 8, rounding down. But in 11.1.0.7 it looks like the optimizer multiplied by 1/4 twice, giving 16 * 1/4 * 1/4 = 1, instead of adding the two selectivities and then multiplying. There is a known bug related to OR conditions in where clauses:

Bug 10623119 – wrong cardinality with ORs and columns with mostly nulls (Doc ID 10623119.8)

Our 11.2.0.4 database includes this bug fix but I don’t know if this fix caused the difference in behavior that I saw. It seems possible that it did.

The interesting thing is that the real row count for step 29 is 0. So, the pre-bug fix plan in 11.1.0.7 actually estimated the row count more accurately by accident. It estimated 1 and the real count was 0. The correct estimate should have been 8, but that is not as close to 0 as 1 is. I think we just happened to have a few queries where the bug resulted in a more accurate estimate than a properly functioning optimizer. But, I’m only looking at the queries whose performance is worse after the upgrade. There may have been other queries that performed better because of this bug fix.

I ended up passing this and a similar query back to a senior SQL developer and he took one look at the query and described it as “ugly”. He fixed both of them in no time so that both queries now run just as fast or faster on 11.2.0.4 than they did on 11.1.0.7.

So, the original query ran faster when the optimizer was not working properly. A human developer simplified the query, and then it ran faster with the properly working optimizer. Maybe the moral of the story is to build simpler and cleaner SQL queries to begin with, and if you find a query whose performance declines with better optimizer information, consider improving the query so that it works well with the better-functioning optimizer.

Bobby

Update: I messed around with cardinality hints some more. The problem with blog posts is that once I’ve written one I start second-guessing myself. Did I cover all of my bases? I finally found a cardinality hint on the problem table that forced the plan back to the 11.1.0.7 plan. But the weird thing is that I had to hint that the number of rows in the table was larger than reality. My hint was like this: cardinality(pcp,1000000). I expected that a smaller cardinality would change the plan!

The good thing about this test is that it brought me back to why I focused on this table in the first place. Most of the time in the query execution centered around this one table and its index range scan. Now that I know that messing with the cardinality hint on this table changes the plan back, I feel good about the idea that this table’s row count has something to do with the plan change in 11.2.0.4. But I’m not sure how to tie this change back to a specific bug fix.

Yet another update:

I ran the query on 11.2.0.4 with the parameter optimizer_features_enable='11.1.0.7' and the plan reverted to the 11.1.0.7 plan. At least I know that the slight differences in statistics between the two databases aren’t causing this issue. It is something about 11.2.0.4. I also forced the 11.1.0.7 plan under 11.2.0.4, and its cost was less than the cost of the plan that 11.2.0.4 was choosing, which suggests that the 11.2.0.4 optimizer is not even trying the better plan; it would have chosen the lower cost plan if it had. I tried setting optimizer_max_permutations=1000000000000 to get it to try more plans, but it still didn’t choose the lower cost 11.1.0.7 plan. So, I guess that my observations about the difference in the row counts on the table do not explain the slower plan. I am not sure how to diagnose why 11.2.0.4 is not trying the lower cost plan. It could be a bug that 11.2.0.4 introduced.
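
For reference, the parameter can be set at the session level like this:

alter session set optimizer_features_enable='11.1.0.7';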

Still, I guess all of this research reinforces the main point. I’m tearing my hair out trying to apply all of this arcane Oracle SQL tuning information to understand why the queries ran slower on 11.2.0.4, but a good SQL developer rewrote the queries in minutes, so maybe I do not need to spend more time on it?

I guess if we run into a query that changes to a worse plan in the 11.2.0.4 upgrade we can always try setting optimizer_features_enable='11.1.0.7' in a hint like this:

/*+ OPTIMIZER_FEATURES_ENABLE('11.1.0.7') */
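
Dropped into a query, the hint looks something like this (again with stand-in names):

select /*+ OPTIMIZER_FEATURES_ENABLE('11.1.0.7') */ count(*)
from my_table
where my_flag in ('M','Y');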


_small_table_threshold=1000000 results in > 5x query speedup

Today I sped a query up by over 5 times by setting _small_table_threshold=1000000.

Here is the query elapsed time and a piece of the plan showing its behavior before setting the parameter:

Elapsed: 00:28:41.67

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                              | Name                   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  69 |              PARTITION RANGE ITERATOR                  |                        |   9125 |      1 |   9122 |00:13:02.42 |    3071K|   3050K|       |       |          |
|* 70 |               TABLE ACCESS FULL                        | SIS_INV_DTL            |   9125 |      1 |   9122 |00:13:02.25 |    3071K|   3050K|       |       |          |

I think this part of the plan means that the query scanned a range of partitions 9125 times, resulting in over three million physical reads. These reads took about 13 minutes. If you do the math, it works out to between 200 and 300 microseconds per read. I have seen similar times from repeated reads from a storage server that has cached the data in memory. I have seen this with a SAN and with Delphix.

Here is my math for fun:

>>> 1000000*((60*13)+2.25)/3050000
256.4754098360656

About 256 microseconds per read.

I ran this query again and watched the wait events in Toad’s session browser to verify that the query was doing a bunch of direct path reads. Even though the query was doing full scans on the partition range 9125 times, the database just kept on doing direct path reads for 13 minutes.

So, I got the idea of trying to increase _small_table_threshold. In 11g, serial full scans of segments larger than a threshold based on _small_table_threshold (a number of blocks) use direct path reads, which bypass the buffer cache, so every scan has to go back to storage; raising the parameter pushes those scans back to ordinary buffered reads that can be cached. I was not sure if it would work with parallel queries. By the way, this is 11.2.0.4 on the HP-UX Itanium platform. So, I tried:

alter session set "_small_table_threshold"=1000000;
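
If you want to check the parameter’s value before and after changing it, a query like this against the x$ tables works; you have to be connected as SYS to see them:

select a.ksppinm name, b.ksppstvl value
from x$ksppi a, x$ksppcv b
where a.indx = b.indx
and a.ksppinm = '_small_table_threshold';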

I ran the query again and it ran in under 5 minutes. I had to add a comment to the query text so it would parse as a new statement and the plan would come back cleanly. Then I reran it, and I guess because of caching it came back in under 2 minutes:

First run:

Elapsed: 00:04:28.83

Second run:

Elapsed: 00:01:39.69

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                              | Name                   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  69 |              PARTITION RANGE ITERATOR                  |                        |   9125 |      1 |   9122 |00:00:45.33 |    3103K|      0 |       |       |          |
|* 70 |               TABLE ACCESS FULL                        | SIS_INV_DTL            |   9125 |      1 |   9122 |00:00:45.27 |    3103K|      0 |       |       |          |

The second execution did zero physical reads on these partitions instead of the 3 million that we had without the parameter!

So, it seems that if you have a query that keeps doing full scans of the same partitions over and over, it can run a lot faster if you disable direct path reads by raising _small_table_threshold.

Bobby


Jetpack 4.8 outage

I just got my bobbydurrettdba.com blog back online. It seems that an automatic update of Jetpack to version 4.8 took it down last night. I thought I would post a quick comment on how I resolved it because it is slightly different from what the Jetpack support site says.

When I tried to get into my blog today I just got a white screen saying there was a problem, and I could not get into the wp-admin screen to do anything. I found out that Jetpack 4.8 had been pushed out and had brought down a lot of WordPress sites, and that Jetpack 4.8.1 had just come out today to fix it.

Here is the url for the 4.8.1 fix to this issue: fix

The fix points to the manual plugin install url, which talks about deleting your /wp-content/plugins/jetpack folder and then manually installing by downloading the 4.8.1 Jetpack zip, unzipping it, and ftping it up to /wp-content/plugins.

But all I had to do, after backing up /wp-content/plugins/jetpack, was rename it to /wp-content/plugins/jetpack.old. Then I was able to get into my site and update the plugin to 4.8.1 through the normal web-based process.

Strangely enough the Jetpack plugin update removed my /wp-content/plugins/jetpack.old directory and replaced it with /wp-content/plugins/jetpack.

Here is what the directory looked like after I ran the update from my blog’s admin page:

I never did ftp the 4.8.1 jetpack directory over, though I had it unzipped and ready to go on my laptop.

Anyway, I didn’t delete my jetpack directory. I just renamed it to jetpack.old and then ran the normal plugin update process.

Bobby

p.s. My site stats for today are not looking so good with the blog down all day:


Information Technology Jobs Posted

My company, US Foods, has posted a number of information technology jobs in Chicago (Rosemont) and Phoenix (Tempe). Here is the web site:

https://usfood.taleo.net/careersection/usf_external/jobsearch.ftl

Enter Information Technology as the Job Field to see all the posted IT jobs.

Bobby


Using Delphix to support Oracle upgrade

I’m working on upgrading a very buggy unpatched 11.1.0.7 Oracle database to a fully patched 11.2.0.4 version. I’m using Delphix to support the upgrade and it has been a big help so far. This is on the HP-UX 11.31 Itanium platform.

The great thing about using Delphix to support an upgrade is that my very first pass through the upgrade scripts was with a full-sized clone of production. In the past I probably started with a tiny subset or even an out-of-the-box demo database for my first upgrade pass, and even when I got to QA it wasn’t a full test of a production upgrade. This time, my first test was with all the data, and that was very cool.

The main example of how this helped is that we had a lot of data in the SYS.WRI$_OPTSTAT_HISTGRM_HISTORY and SYS.WRI$_OPTSTAT_HISTHEAD_HISTORY tables in production and this made the first upgrade of its clone take a long time. After two or three attempts at other ways to speed things up, I ended up applying patch 12683802 on an 11.1.0.7 Oracle home and this allowed me to truncate these two tables.
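
For reference, a query like this shows how much space the two history tables are taking up (not counting their indexes):

select segment_name, round(bytes/1024/1024) mb
from dba_segments
where owner = 'SYS'
and segment_name in
('WRI$_OPTSTAT_HISTGRM_HISTORY',
'WRI$_OPTSTAT_HISTHEAD_HISTORY');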

Delphix helped me here because I had an unused 11.1.0.7 Oracle home on a different host from the one I was doing the upgrade on. I didn’t want to apply a patch on the upgrade host because there were three other databases using the home and I didn’t want to bring them down or patch them. Delphix let me move the VDB that I was upgrading over to the host that had the unused home. Then I applied the patch there and ran the truncate using the dbms_stats.purge_stats(dbms_stats.purge_all) procedure call that the patch enabled.
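
The call itself is a one-liner once the patch is in place:

exec dbms_stats.purge_stats(dbms_stats.purge_all);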

Then I moved the VDB back to the host where I intended to do the upgrade, which already had the fully patched 11.2.0.4 binaries installed, and ran the upgrade there. Pretty cool. I think I did the most recent upgrade in about 3.5 hours with the empty OPTSTAT tables.

By the way, doing an upgrade within Delphix is easy. You just bring the VDB up on the old Oracle home, do the upgrade as you normally would, including all the shutdown and startup commands, and then within the Delphix GUI you let Delphix know the VDB is now on a new Oracle home by shutting it down, choosing the new home, and bringing it back up. Piece of cake.

Bobby


Simple Python for Oracle database tuning example

I ran across a Stack Overflow question that gave me an idea for a simpler use of Python to graph some Oracle database performance information. I looked at my PythonDBAGraphs scripts and I’m not sure it is worth modifying them, since I like what they do, but they may make people think that Python scripts to graph Oracle performance data are difficult to write. If someone just wants to put together some graphs using Python, Matplotlib, and cx_Oracle, they can do it more simply than I have in PythonDBAGraphs, and it can still be useful.

Here is an example that looks at db file sequential read waits and graphs the number of waits per interval and the average wait time in microseconds:

import cx_Oracle
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

connect_string = "MYUSER/MYPASSWORD@MYDATABASE"
con = cx_Oracle.connect(connect_string)
cur = con.cursor()

query="""
select sn.END_INTERVAL_TIME,
(after.total_waits-before.total_waits) "number of waits",
(after.time_waited_micro-before.time_waited_micro)/
(after.total_waits-before.total_waits) "ave microseconds"
from 
DBA_HIST_SYSTEM_EVENT before, 
DBA_HIST_SYSTEM_EVENT after,
DBA_HIST_SNAPSHOT sn
where before.event_name='db file sequential read' and
after.event_name=before.event_name and
after.snap_id=before.snap_id+1 and
after.instance_number=1 and
before.instance_number=after.instance_number and
after.snap_id=sn.snap_id and
after.instance_number=sn.instance_number and
(after.total_waits-before.total_waits) > 0
order by after.snap_id
"""

cur.execute(query)

datetimes = []
numwaits = []
avgmicros = []
for result in cur:
    datetimes.append(result[0])
    numwaits.append(result[1])
    avgmicros.append(result[2])
        
cur.close()
con.close()

title="db file sequential read waits"

fig = plt.figure(title)
ax = plt.axes()

plt.plot(datetimes,numwaits,'r')
plt.plot(datetimes,avgmicros,'b')

# Format X axis dates

fig.autofmt_xdate()
ax.fmt_xdata = mdates.DateFormatter('%m/%d/%Y %H:%M')
datetimefmt = mdates.DateFormatter("%m/%d/%Y")
ax.xaxis.set_major_formatter(datetimefmt)

# Title and axes labels

plt.title(title)
plt.xlabel("Date and time")
plt.ylabel("num waits and average wait time")

# Legend

plt.legend(["Number of waits","Average wait time in microseconds"],
loc='upper left')

plt.show()

The graph it produces is usable without spending a lot of time on custom formatting:

It is a short 68-line script, and you just need matplotlib and cx_Oracle to run it. I’ve tested it with Python 2.

Bobby


Rosetta Code

I have experimented with the Rosetta Code website as a way to learn more about computer technology. I have mentioned my exploration of the edX and OCW web sites in earlier posts, so I thought I would mention Rosetta Code, or RC, as another way to learn.

The RC site displays code in a variety of programming languages so that you can compare them. The site defines certain tasks and people post code that performs them. For example, the task could be to write a simple Hello World program that writes Hello World to the screen. The site shows how to do this in Java, Python, C, and others; at this moment there are 354 implementations of Hello World on the site. I added this simple one for Oracle’s version of SQL: My Hello World
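
For Oracle SQL, the entry amounts to something like this:

select 'Hello world!' from dual;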

But I can’t help wondering if this is just a fun game or whether it is really educational. I just finished a task this morning that was interesting to me. It is a draft task, which means that people have not yet written enough example programs for the site managers to publish it as an official task, but anyone can see it. I’ve written the Python and PL/SQL examples. This 4 squares problem is just a puzzle or game, but it was interesting to think about solving it in a declarative way with SQL rather than a procedural way with Python or any other regular language. Maybe there is value in looking at these tasks and thinking about how SQL and relational database thinking could be applied?
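
To give a flavor of the declarative approach, here is a sketch along those lines, not necessarily my exact RC entry. As I recall the draft task, it asks for values a through g, each from 1 to 7 with no repeats, such that a+b = b+c+d = d+e+f = f+g:

with digits as (select level n from dual connect by level <= 7)
select a.n a, b.n b, c.n c, d.n d, e.n e, f.n f, g.n g
from digits a, digits b, digits c, digits d,
     digits e, digits f, digits g
where a.n + b.n = b.n + c.n + d.n
and b.n + c.n + d.n = d.n + e.n + f.n
and d.n + e.n + f.n = f.n + g.n
and a.n not in (b.n, c.n, d.n, e.n, f.n, g.n)
and b.n not in (c.n, d.n, e.n, f.n, g.n)
and c.n not in (d.n, e.n, f.n, g.n)
and d.n not in (e.n, f.n, g.n)
and e.n not in (f.n, g.n)
and f.n <> g.n;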

Anyway, I just thought I would document what I have done with the RC site. Be careful, because once you start working up example code for the site it can become addictive! I am still thinking about what the real value of the site is, but it has helped me exercise my programming muscles and think a bit about SQL. Check it out if you think it has value.

Bobby


AZORA Meeting January 26th

The next Arizona Oracle User Group meeting is January 26th in Scottsdale. Sign up via this Meetup link.

Copied from invitation:

When:  January 26, 2017 (Thursday)
12:30 pm – 4:00 pm

Where:  Republic Services
1st Floor Training Rooms
14400 N 87th St (AZ101 & Raintree)
Scottsdale, AZ

Agenda:

12:30 – 1:00 Registration and Pizza

1:00-1:10  Welcome

1:10-2:10  Presentations

Room 1    Biju Thomas – OneNeck IT Solutions
(Oracle ACE Director)
“Oracle Database 12c New Features for 11gR2 DBA”

Room 2   Charles Kim – Viscosity North America
(Oracle ACE Director)
“Bullet Proof Your Data Guard Environment”

2:10-2:25  Break – Coffee & Cookies

2:25-3:25  Presentations

Room 1   Biju Thomas – OneNeck IT Solutions
(Oracle ACE Director)
“Introduction to Oracle Databases in the Cloud”

Room 2  Jerry Ward – Viscosity North America
“Building Faceted Search Navigation in APEX
with Oracle JET and PL/SQL Pipelines”

3:25-3:30    Wrap Up and Closing
