Arizona User Group Meeting tomorrow

Here is the link: url

It looks like our meeting tomorrow will be on Oracle 12c new features.  I’m looking forward to participating. :)

– Bobby


Cary Millsap

This is my third of four posts about people who have made a major impact on my Oracle database performance tuning journey.  This post is about Cary Millsap.  The previous two were about Craig Shallahamer and Don Burleson.

I am working through these four people in chronological order.  The biggest impact Cary Millsap had on me came through the book Optimizing Oracle Performance, which he co-authored with Jeff Holt.  I have also heard Cary speak at conferences, and we once had him in for a product demo where I work.

I have delayed writing this post because I struggled to put into words why Cary’s book was so useful to me without repeating a long explanation of the book’s contents.  Just before reading the book I had worked on a system with high CPU usage and queuing for the CPU.  I had also just read the paper “Microstate Response-time Performance Profiling” by Danisment Gazi Unal, which explains why Oracle’s CPU measurements do not include time spent queued for the CPU.  Then I read Cary Millsap’s book and found it very enlightening.  For one thing, it is well written and makes its case convincingly.  But the key concept was Cary Millsap’s idea of looking at the waits and CPU time that Oracle reports at the session level and comparing them to the real elapsed time.  This performance profile of waits, CPU, and elapsed time formed the basis of my first conference talk, which I gave at Collaborate 06: PowerPoint, Word, Zip

Here is an example of a session profile from my presentation:

TIMESOURCE                  ELAPSED_SECONDS
--------------------------- ---------------
REALELAPSED                             141
CPU                                   44.81
SQL*Net message from client            9.27
db file sequential read                 .16

This is a profile of a session that spent roughly two-thirds of its time queued for the CPU: Oracle accounts for only about 54 of the 141 elapsed seconds, so the missing 87 or so seconds were presumably spent waiting in the CPU run queue.

Since reading Optimizing Oracle Performance I have resolved many performance problems by creatively applying the concepts in the book.  The book focuses on using traces to build profiles.  I have made my own scripts against V$ views and I have also used Precise.  I have used traces as the book suggests but only with TKPROF.  I have not had a chance to use the tool that the book describes, the Method R Profiler.
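
Here is a minimal sketch of the kind of V$ query I mean, assuming you already know the SID of the session you want to profile (the &&SID substitution variable and the rounding are just my choices for this example); the real elapsed time still has to come from somewhere else, such as a timer in the application:

-- CPU time plus wait events for one session, in seconds
select 'CPU' timesource,
       round(value/1000000,2) elapsed_seconds
from v$sess_time_model
where sid = &&SID
and stat_name = 'DB CPU'
union all
select event,
       round(time_waited_micro/1000000,2)
from v$session_event
where sid = &&SID
order by 2 desc;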

However I do it, the focus is on the waits, the CPU as reported by Oracle, and the real elapsed time, all for a single session.  It is a powerful way to approach performance tuning and the main thing I learned from Cary Millsap.  I highly recommend Cary Millsap and Jeff Holt’s book to anyone who wants to learn more about Oracle database performance tuning; it made a profound impact on my career.

– Bobby




Career Day – Oracle Database Administrator

I will be talking at my daughter’s high school for career day on Monday, explaining my job as an Oracle Database Administrator.  Wish me luck!

The funny thing is that no one understands what Oracle DBAs do, unless they are one or work closely with one.  I have a feeling that my talk is going to fall flat, but if it helps one of the students in any way it will be worth it.

To me the best thing about being an Oracle DBA is that you can do a pretty interesting and technically challenging job, and companies that are not technology centric will still hire you to do it.  I’ve always been interested in computer technology but have worked in non-technical companies my entire career, mainly a non-profit ministry and a food distribution company.  Neither company makes computers or sells software!

My other thought is how available computer technology is to students today.  Oracle, in one of the company’s more brilliant moves, made all of its software available for download so students can try out the very expensive software for free.  Plus all the manuals are available online.  What is it like to grow up as a student interested in computer technology in the age of the internet?  I can’t begin to compare it to my days in the 1980s when I was in high school and college.  Did we even have email?  I guess we must have but I can’t remember using it much.  Today a student who owns a laptop and has an internet connection has a world of technology at their fingertips far beyond what I had at their age.

Hopefully I won’t bore the students to tears talking about being an Oracle DBA.  They probably still won’t know what it really is after I’m done.  But at least they will know that such a job exists, and maybe that will be helpful to them.

– Bobby

P.S.  There were over 100 students there.  They were pretty polite, with only a little talking.  Here is a picture with me on the left, my daughter in the center, and a coworker who also spoke at the career day on the right.

[Photo: career day]






AZORA – Arizona Oracle User Group meeting January 20th

AZORA is planning a meeting January 20th.  Here is the link to RSVP: url

Hope to see you there. :)

– Bobby


10128 trace to see partition pruning

I am working on an SR with Oracle support and they asked me to do a 10128 trace to look at how the optimizer is doing partition pruning.  I did some quick research on this trace and wanted to pass it along.

Here are the names of the two Oracle support documents that I found most helpful:

How to see Partition Pruning Occurred? (Doc ID 166118.1)

Partition Pruning Min/Max Optimization Fails when Parallel Query Run in Serial (Doc ID 1941770.1)

The first is the document Oracle support recommended.  But the SR said to run both a level 2 and a level 7 trace, and the first document does not mention level 7.  The second document has an example of a level 7 trace and more details on how to set it up.
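
For reference, here is the basic way to turn the event on and off for your own session, based on the examples in these notes (treat it as a sketch; the level you need, and the helper table that some levels write to, are spelled out in the Oracle documents above):

alter session set events '10128 trace name context forever, level 2';

-- run the query against the partitioned table here

alter session set events '10128 trace name context off';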

I also found these two non-Oracle sites or blog posts:

http://cbohl.blogspot.com/2006/10/verify-that-partition-pruning-works.html

http://www.juliandyke.com/Diagnostics/Events/EventReference.html#10128

I do not have time to delve into this further right now, but if you are trying to understand partition pruning then a 10128 trace may help you see how it works.

– Bobby



Parsing blocks stats blocks parsing

I had a five-minute conversation with Oracle development Friday that rocked my world.  I found out that parsing blocks stats which blocks parsing.

We have a system with queries that are taking minutes to parse.  These queries include the main tables on our database, one of which is interval partitioned and has 20,000 sub-partitions.  We have seen a situation where Oracle’s delivered statistics gathering job hangs on a library cache lock waiting for a query to finish parsing.  But, much worse than that, we find most queries on our system hanging on library cache lock waits blocked by the statistics job.

We have an SR open on this situation because it seemed like a bug, but on Friday someone from Oracle development explained over the phone that this parse blocks stats blocks parse situation is normal.  After I got off the phone I built a simple test case proving that what he said was true.  I took a query that takes a long time to parse in production and ran it on our development database, where it took 16 seconds to parse.  I chose the smallest table in the query and gathered stats on it.  The stats gathering ran in a fraction of a second by itself, but if I started the long parsing query in one window and ran the stats in another window, the stats hung on a library cache lock wait for 15 seconds.  Then I created a trivial query against the same small table I had gathered stats on.  The query ran instantly by itself.  But if I ran the long parsing query first, kicked off the stats which hung on the lock, and then kicked off the short query against the table whose stats were being gathered, the short query hung on a library cache lock as well.  This example convinced me that the parse blocks stats blocks parse chain was real.

This morning I built a standalone test case that others can run to prove this out on their own databases: zip of testcase.  To run the test case you need three windows where you can run three sqlplus scripts in rapid succession.  In one window first run tables.sql to create the test tables.  Then run these three scripts one after the other, one per window, to create the three-link chain: chain1.sql, chain2.sql, chain3.sql.  Chain1.sql runs the explain plan of a query that takes a long time to parse.  Chain2.sql gathers stats on one table.  Chain3.sql runs a simple query against the table whose stats are being gathered.  Chain1 spends all of its time on the CPU doing the parse.  Chains 2 and 3 spend all of their time on library cache lock waits.

First I created two tables:

create table t1 as select * from dba_tables;
create table t2 as select * from dba_tables;

Next I kicked off the explain plan that takes a long time to run.  It joined 100 tables together:

explain plan into plan_table for 
select 
count(*)
from
     t1,
     t2,
     t2 t3,
...
     t2 t100
where
  t1.owner=t2.owner and
...
  t1.owner=t100.owner
/

This explain plan ran for 26 seconds, almost all of which was CPU:

Elapsed: 00:00:26.90

...

CPU in seconds
--------------
         26.81

Right after I kicked off the explain plan I kicked off this statement which gathered stats on the first table in the from clause:

execute dbms_stats.gather_table_stats(NULL,'T1');

This ran for 25 seconds and almost all of the time was spent on a library cache lock wait:

Elapsed: 00:00:25.77

...

Library cache lock in seconds
-----------------------------
                        25.55

Right after I kicked off the gather table stats command I ran this simple query making sure that it was unique and required a hard parse:

select /* comment to force hard parse */ count(*) from T1;

This ran for 24 seconds and almost all of the time was spent on a library cache lock wait:

Elapsed: 00:00:24.48

...

Library cache lock in seconds
-----------------------------
                        24.48

Evidently when a session parses a query it needs to obtain a shared lock on every table that the query includes.  When you gather statistics on a table you need to obtain an exclusive lock on the table, even if you are gathering statistics on one partition or sub-partition of the table.  While the statistics gathering session waits to acquire an exclusive lock any new parses that include the same table will hang.
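
While the test case runs you can watch the chain form from a fourth window.  Here is a quick query against V$SESSION that I am adding for illustration; it lists the sessions stuck on library cache lock waits and the session blocking each one:

select sid,
       serial#,
       blocking_session,
       event,
       seconds_in_wait,
       sql_id
from v$session
where event = 'library cache lock'
order by seconds_in_wait desc;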

Prior to Friday I did not think that there was any non-bug situation where gathering optimizer statistics would lock up sessions.  I thought that the only negative to gathering statistics at the same time as other application processing was that statistics gathering would compete for system resources such as CPU and I/O and possibly slow down application code.  But, now I know that gathering statistics can hang all queries that use the given table if stats gathering gets hung up waiting for a query that takes a long time to parse.

– Bobby

P.S. After reviewing the SR I wanted to understand what this parse blocks stats blocks parse looked like in a state dump.  The Oracle support analyst explained how the locks looked in a state dump that we uploaded but I didn’t get a chance to look at it closely until today.  I found the most important information in lines with the string “LibraryObjectLock” at the front of the line after some spaces or tabs.  There were three types of lines – the holding share lock, the waiting exclusive lock, and the many waiting share locks:

LibraryObjectLock:  Address=... Handle=0x5196c8908 Mode=S ...
LibraryObjectLock:  Address=... Handle=0x5196c8908 RequestMode=X ...
LibraryObjectLock:  Address=... Handle=0x5196c8908 RequestMode=S ...

The “…” indicates places I edited out other details.  The handle 0x5196c8908 identifies the table being locked.  The “Mode=S” string indicates a successful share lock of that table by the session with the long parse time.  The “RequestMode=X” was from the stats job trying to get exclusive access to the table.  The “RequestMode=S” was all the other sessions trying to get shared access to the table waiting for stats to first get exclusive access.  Anyway, I just wanted to translate what Oracle support told me into something that might be useful to others.  Plus I want to remember it myself!





Alerting on plans that change for the worse

I’ve uploaded a monitoring script that I have worked on: zip.

The script alerts you when the optimizer runs a SQL statement with a potentially new and inefficient plan so you can intervene.  It improves upon my earlier script, which only alerted you to SQL statements running with new plans.  The new script compares the average elapsed time of the current plan with that of the most often executed plan.  If the new plan averages more than ten times the most often executed plan, the script alerts you to a possible new slow plan.  The elapsed time for the current plan comes from V$SQL and the time for the most often executed plan comes from DBA_HIST_SQLSTAT.
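
To give a rough idea of the comparison, here is a simplified sketch of the logic (an illustration I am including here, not the actual query in the zip): it picks each statement's most often executed plan from DBA_HIST_SQLSTAT and flags any current plan in V$SQL whose average elapsed time is more than ten times the historical average.

with history as (
select sql_id,
       plan_hash_value,
       sum(elapsed_time_delta)/sum(executions_delta) avg_elapsed,
       row_number() over
         (partition by sql_id order by sum(executions_delta) desc) rn
from dba_hist_sqlstat
where executions_delta > 0
group by sql_id, plan_hash_value
)
select c.sql_id, c.plan_hash_value
from v$sql c, history h
where h.sql_id = c.sql_id
and h.rn = 1
and h.plan_hash_value <> c.plan_hash_value
and c.executions > 0
and c.elapsed_time/c.executions > 10*h.avg_elapsed;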

Here is an example output from the first test case:

SQL_ID        PLAN_HASH_VALUE
------------- ---------------
a1fw5xjcwkjqx      1357081020

There are two test cases included in the script.  The first test case has a query which uses an index and the plan changes when I drop the index.
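
If you want to reproduce the plan flip without downloading the zip, here is a rough sketch of the same idea (my own simplified version, not the exact test case in the script, so the row counts and costs will differ):

create table test as select * from dba_tables;
create index testi on test(owner,table_name);

-- first run: the optimizer can use the index
select /* 9 */ owner,table_name from test where owner='XYZ' and table_name='XYZ';

drop index testi;

-- second run: same statement, but the plan changes to a full table scan
select /* 9 */ owner,table_name from test where owner='XYZ' and table_name='XYZ';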

If you look at the output for the first test case you see the good plan:

SQL_ID  a1fw5xjcwkjqx, child number 0
-------------------------------------
select /* 9 */ owner,table_name from test where owner='XYZ' and
table_name='XYZ'

Plan hash value: 268773832

---------------------------------------------------------------
| Id  | Operation        | Name  | Rows  | Bytes | Cost (%CPU)|
---------------------------------------------------------------
|   0 | SELECT STATEMENT |       |       |       |    58 (100)|
|*  1 |  INDEX RANGE SCAN| TESTI | 16222 |   221K|    58   (0)|
---------------------------------------------------------------

The good plan uses the index.  Here is the bad plan.  Note how the sql_id and plan_hash_value correspond to the output of the monitor script.

SQL_ID  a1fw5xjcwkjqx, child number 0
-------------------------------------
select /* 9 */ owner,table_name from test where owner='XYZ' and
table_name='XYZ'

Plan hash value: 1357081020

---------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)|
---------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       | 15708 (100)|
|*  1 |  TABLE ACCESS FULL| TEST |  8111 |   110K| 15708   (1)|
---------------------------------------------------------------

I have this running in a cron job on a development server.  I plan to put the script on our production server next week and set it up to alert me by email when the optimizer runs SQL statements with potentially new and inefficient plans.

– Bobby




Check out the new Oracle help web site

I found a broken link to an Oracle help document in one of my posts.  When I went to the Oracle 12c database documentation to find the new URL to put in the post, I found that Oracle had totally revamped its online manuals.

Here is a link to the new high level Oracle help site: url

I’ve only looked at it for a few minutes and can’t say whether I like it or not.  You can still download the manuals in PDF format so that remains familiar.  It looks like the new site integrates documentation across most or all of Oracle’s products in a similar format and that’s pretty cool.

Anyway, I just saw this and thought I would pass it along.

– Bobby



Oracle DBA job in Tempe, Arizona

We still have a position open on our Oracle database team here in Tempe, Arizona.  Here is the link with an updated job description: url

We have a great team and would love to have a new member join us.

– Bobby


Useful Carlos Sierra post about queries changing performance

I saw an interesting post related to work I am doing on locking in plans and finding plans that have changed: Carlos Sierra post on queries with changing performance.

Our DBA team uses versions of my sqlstat.sql script to find queries whose elapsed time has changed, and we use a script from Carlos Sierra’s SQLT collection to force the execution plan back to an efficient plan that the optimizer chose in the past.  Carlos Sierra’s new post includes scripts that resemble my sqlstat script but with more functionality, including an interesting way of helping people figure out which queries are now running slower.  At this point we just look at the top queries that are running and run sqlstat.sql to see if they have run with less elapsed time in the past.  We eyeball it, and usually it is obvious because the bad plan takes 100 times longer than the good one.  But the newly posted Carlos Sierra script seems to use some math to help figure out whether the elapsed time is really bad enough for us to consider it a problem.

I want to set up a monitoring script that pages us when a plan changes for the worse so we can decide whether to intervene and force an older plan to improve performance.  I have a simple script running on one of our databases but, as I expected, it is returning a lot of false positives.  So I’m hoping to have some time this week to improve that script so that we get fewer false alarms, and I’m going to take a close look at Carlos Sierra’s newly posted script for ideas.

– Bobby




