Simple tuning example but uncertain test results

Last week I came across what seemed like a simple query tuning problem.  A PeopleSoft batch job ran for many hours, and when I ran an AWR report I found that the top query was doing a full table scan where an index looked like it would help.
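
If you just want the top SQL by elapsed time for a window without generating the full report, a query against the AWR views works too; this is a sketch, with hypothetical snapshot IDs that you would look up in dba_hist_snapshot:

select * from
(select sql_id,
        sum(elapsed_time_delta)/1000000 elapsed_seconds,
        sum(executions_delta) executions
 from dba_hist_sqlstat
 where snap_id between 12345 and 12360  -- hypothetical begin/end snapshots
 group by sql_id
 order by elapsed_seconds desc)
where rownum <= 10;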

Here is the query and its bad plan:

SQL_ID 1jx5w9ybgb51g
--------------------
UPDATE PS_JGEN_ACCT_ENTRY 
SET 
JOURNAL_ID = :1, 
JOURNAL_DATE = TO_DATE(:2,'YYYY-MM-DD'), 
FISCAL_YEAR = :3,
ACCOUNTING_PERIOD = :4,
GL_DISTRIB_STATUS = 'D', 
JOURNAL_LINE = :5 
WHERE 
PROCESS_INSTANCE = 6692638 AND
GL_DISTRIB_STATUS = 'J'  AND 
ACCOUNT=:6 AND 
DEPTID=:7 AND 
CURRENCY_CD=:8 AND
FOREIGN_CURRENCY=:9

Plan hash value: 1919611120

-----------------------------------------------------------------------------------------
| Id  | Operation          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT   |                    |       |       | 21649 (100)|          |
|   1 |  UPDATE            | PS_JGEN_ACCT_ENTRY |       |       |            |          |
|   2 |   TABLE ACCESS FULL| PS_JGEN_ACCT_ENTRY |     1 |    58 | 21649   (5)| 00:01:27 |
-----------------------------------------------------------------------------------------

The problematic batch job ran variations of this query with a different literal value for PROCESS_INSTANCE for each flat file being loaded.  Three updates of this type were in the AWR report for the 16-hour period that covered the run of the batch job:

  Elapsed      CPU                  Elap per  % Total
  Time (s)   Time (s)  Executions   Exec (s)  DB Time    SQL Id
---------- ---------- ------------ ---------- ------- -------------
    16,899      5,836        3,811        4.4     3.5 4h54qqmbkynaj

UPDATE PS_JGEN_ACCT_ENTRY SET JOURNAL_ID = :1, JOURNAL_DATE = TO_DATE(:2,'YYYY-M
M-DD'), FISCAL_YEAR = :3, ACCOUNTING_PERIOD = :4, GL_DISTRIB_STATUS = 'D', JOURN
AL_LINE = :5 WHERE PROCESS_INSTANCE = 6692549 AND GL_DISTRIB_STATUS = 'J' AND A
CCOUNT=:6 AND DEPTID=:7 AND CURRENCY_CD=:8 AND FOREIGN_CURRENCY=:9

     6,170      2,190        1,480        4.2     1.3 a5rd6vx6sm8p3

UPDATE PS_JGEN_ACCT_ENTRY SET JOURNAL_ID = :1, JOURNAL_DATE = TO_DATE(:2,'YYYY-M
M-DD'), FISCAL_YEAR = :3, ACCOUNTING_PERIOD = :4, GL_DISTRIB_STATUS = 'D', JOURN
AL_LINE = :5 WHERE PROCESS_INSTANCE = 6692572 AND GL_DISTRIB_STATUS = 'J' AND A
CCOUNT=:6 AND DEPTID=:7 AND CURRENCY_CD=:8 AND FOREIGN_CURRENCY=:9

     6,141      1,983        1,288        4.8     1.3 1jx5w9ybgb51g

UPDATE PS_JGEN_ACCT_ENTRY SET JOURNAL_ID = :1, JOURNAL_DATE = TO_DATE(:2,'YYYY-M
M-DD'), FISCAL_YEAR = :3, ACCOUNTING_PERIOD = :4, GL_DISTRIB_STATUS = 'D', JOURN
AL_LINE = :5 WHERE PROCESS_INSTANCE = 6692638 AND GL_DISTRIB_STATUS = 'J' AND A
CCOUNT=:6 AND DEPTID=:7 AND CURRENCY_CD=:8 AND FOREIGN_CURRENCY=:9

The batch job ran about 15 and a half hours, so these three updates, plus others like them, surely combined to make up the bulk of the run time.

It made sense to me to just add an index on all the columns in the where clause: PROCESS_INSTANCE, GL_DISTRIB_STATUS, ACCOUNT, DEPTID, CURRENCY_CD, FOREIGN_CURRENCY.
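
In DDL terms, something like this; a sketch in which the column order is my guess, though the index that shows up in the test plans below is named PSAJGEN_ACCT_ENTRY:

create index sysadm.psajgen_acct_entry on sysadm.ps_jgen_acct_entry
(process_instance, gl_distrib_status, account, deptid,
 currency_cd, foreign_currency);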

Just to check how selective this combination of columns might be, I did a count on each distinct grouping of these columns and came up with about 50 rows per combination:

>select max(cnt),avg(cnt),min(cnt)
  2  from
  3  (select
  4  PROCESS_INSTANCE,
  5  GL_DISTRIB_STATUS,
  6  ACCOUNT,
  7  DEPTID,
  8  CURRENCY_CD,
  9  FOREIGN_CURRENCY,
 10  count(*) cnt
 11  from sysadm.PS_JGEN_ACCT_ENTRY
 12  group by
 13  PROCESS_INSTANCE,
 14  GL_DISTRIB_STATUS,
 15  ACCOUNT,
 16  DEPTID,
 17  CURRENCY_CD,
 18  FOREIGN_CURRENCY);

  MAX(CNT)   AVG(CNT)   MIN(CNT)
---------- ---------- ----------
      9404  50.167041          1

The table itself has about 3 million rows in roughly 83,000 blocks, so 50 rows per combination is pretty selective; an index range scan should touch a handful of blocks instead of scanning the whole table:

OWNER                TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN SAMPLE_SIZE LAST_ANALYZED       DEGREE     INSTANCES
-------------------- ------------------------------ ---------- ---------- ----------- ----------- ------------------- ---------- ----------
SYSADM               PS_JGEN_ACCT_ENTRY                3145253      82857         204     3145253 2014-04-21 21:07:02          1          1

But, the strange thing was that when we added the index on our test system we didn’t see any performance improvement!  We ran the largest production file on test and it ran in ten minutes with or without the index.  Yack!

So, I tried my own test in sqlplus with the select equivalent of the update, hardcoding values instead of using bind variables – quick and dirty.  I thought I had extracted some valid values, although I later realized they weren’t valid at all.  Here is what I ran; notice that the full scan ran just as fast as the indexed access:

>select * from
  2  sysadm.PS_JGEN_ACCT_ENTRY
  3  WHERE PROCESS_INSTANCE = 6138803 AND
  4  GL_DISTRIB_STATUS = 'J'  AND ACCOUNT=1234567 AND DEPTID=567 AND CURRENCY_CD='USD' AND
  5  FOREIGN_CURRENCY = NULL;

no rows selected

Elapsed: 00:00:00.30

Execution Plan
----------------------------------------------------------
Plan hash value: 1762298626

---------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                    |     1 |   203 |     0   (0)|          |
|*  1 |  FILTER                      |                    |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| PS_JGEN_ACCT_ENTRY |     1 |   203 |     5   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | PSAJGEN_ACCT_ENTRY |     1 |       |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(NULL IS NOT NULL)
   3 - access("PROCESS_INSTANCE"=6138803 AND "GL_DISTRIB_STATUS"='J' AND
              "CURRENCY_CD"='USD')
       filter(TO_NUMBER("ACCOUNT")=1234567 AND TO_NUMBER("DEPTID")=567 AND
              "CURRENCY_CD"='USD')


Statistics
----------------------------------------------------------
       1761  recursive calls
          0  db block gets
        557  consistent gets
         14  physical reads
          0  redo size
       1866  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
          0  rows processed

>
>select /*+full(PS_JGEN_ACCT_ENTRY) */ * from
  2  sysadm.PS_JGEN_ACCT_ENTRY
  3  WHERE PROCESS_INSTANCE = 6138803 AND
  4  GL_DISTRIB_STATUS = 'J'  AND ACCOUNT=1234567 AND DEPTID=567 AND CURRENCY_CD='USD' AND
  5  FOREIGN_CURRENCY = NULL;

no rows selected

Elapsed: 00:00:00.17

Execution Plan
----------------------------------------------------------
Plan hash value: 3728573827

-----------------------------------------------------------------------------------------
| Id  | Operation          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                    |     1 |   203 |     0   (0)|          |
|*  1 |  FILTER            |                    |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| PS_JGEN_ACCT_ENTRY |     1 |   203 | 12185   (2)| 00:02:27 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(NULL IS NOT NULL)
   2 - filter("PROCESS_INSTANCE"=6138803 AND "GL_DISTRIB_STATUS"='J' AND
              TO_NUMBER("ACCOUNT")=1234567 AND TO_NUMBER("DEPTID")=567 AND "CURRENCY_CD"='USD')


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
       1873  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

It looks like I compared FOREIGN_CURRENCY, a column with a NOT NULL constraint, to NULL using the = operator.  An equality comparison with NULL is never true in SQL, so the optimizer added the always-false filter(NULL IS NOT NULL) condition you see at the top of both plans.  With both plans the database evaluated that filter once, before touching the table, and realized immediately that no rows could match this bogus collection of constants.
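
You can see the same short circuit without any application table at all; a minimal sketch, not from the test system:

-- An equality comparison with NULL is never true, so this returns no rows
-- even though DUAL always has exactly one row:
select * from dual where dummy = NULL;

-- The plan for a query like this typically shows the same always-false
-- predicate: filter(NULL IS NOT NULL)

-- IS NULL is the correct null test (still no rows here, since DUMMY = 'X'):
select * from dual where dummy is null;

So, then I replaced the NULL with a zero and finally we had proof of the performance improvement from the index: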

>select * from
  2  sysadm.PS_JGEN_ACCT_ENTRY
  3  WHERE PROCESS_INSTANCE = 6138803 AND
  4  GL_DISTRIB_STATUS = 'J'  AND ACCOUNT=1234567 AND DEPTID=567 AND CURRENCY_CD='USD' AND
  5  FOREIGN_CURRENCY = 0;

no rows selected

Elapsed: 00:00:00.02

Execution Plan
----------------------------------------------------------
Plan hash value: 2047014499

--------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                    |     1 |   203 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| PS_JGEN_ACCT_ENTRY |     1 |   203 |     5   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | PSAJGEN_ACCT_ENTRY |     1 |       |     4   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PROCESS_INSTANCE"=6138803 AND "GL_DISTRIB_STATUS"='J' AND
              "CURRENCY_CD"='USD')
       filter(TO_NUMBER("ACCOUNT")=1234567 AND TO_NUMBER("DEPTID")=567 AND
              TO_NUMBER("FOREIGN_CURRENCY")=0 AND "CURRENCY_CD"='USD')


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
       1866  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

>
>select /*+full(PS_JGEN_ACCT_ENTRY) */ * from
  2  sysadm.PS_JGEN_ACCT_ENTRY
  3  WHERE PROCESS_INSTANCE = 6138803 AND
  4  GL_DISTRIB_STATUS = 'J'  AND ACCOUNT=1234567 AND DEPTID=567 AND CURRENCY_CD='USD' AND
  5  FOREIGN_CURRENCY = 0;

no rows selected

Elapsed: 00:00:37.11

Execution Plan
----------------------------------------------------------
Plan hash value: 1758291200

----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |     1 |   203 | 12185   (2)| 00:02:27 |
|*  1 |  TABLE ACCESS FULL| PS_JGEN_ACCT_ENTRY |     1 |   203 | 12185   (2)| 00:02:27 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("PROCESS_INSTANCE"=6138803 AND "GL_DISTRIB_STATUS"='J' AND
              TO_NUMBER("ACCOUNT")=1234567 AND TO_NUMBER("DEPTID")=567 AND
              TO_NUMBER("FOREIGN_CURRENCY")=0 AND "CURRENCY_CD"='USD')


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
      56110  consistent gets
      55409  physical reads
          0  redo size
       1873  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

So, I can’t tell you exactly what happened in test, but I suspect we were passing NULL into one of the bind variables, probably because data referenced by the production file was missing on our out-of-date test system, and we got a similarly efficient always-false filter.  But once I forced the equivalent of the production full scan by supplying non-NULL values for all the constants, the value of the index became clear.  It went into production last week, and this weekend’s run took a few minutes instead of 15 hours.

– Bobby

Epic fail: Flunked my 12c OCP upgrade test

Well, I took the Oracle 12c OCP upgrade exam this morning and didn’t pass it.  I’ve spent many hours over the past weeks and months reading up on new 12c features, and it wasn’t enough.  I also installed 12c databases and tested out a variety of features, to no avail.

Ultimately my downfall was that I tried to prepare for the test from the manuals and the published descriptions of what the test covered.  Also, I purchased the Kaplan TestPrep software, which only covered the first part of the 12c upgrade exam and only included 50 questions.

I should have waited until an upgrade exam book came out, and I should have gotten the more comprehensive Transcender software so I would have a more complete set of questions and a better idea of what would be covered.  I can’t study everything.

If you don’t know, the 12c OCP upgrade exam has a new “Key DBA Skills” section that wasn’t present in earlier exams, and you have to pass both sections.  The first section covers the new 12c features and corresponds to the previous exams.  I passed this section, though barely, even though I felt confident going in that I would get a high percentage.  The breadth of topics in the second section worried me because I can’t study everything, and it definitely covered things I didn’t study, including some features I’ve never used.

Both parts of the exam were challenging; it seems like a pretty tough test.  I’ve got my Oracle 7, 9, 10, and 11 certifications, and I passed all of those tests on the first try, so this is my first failure.  Now I’m trying to regroup and think about where to go from here.

Ideally, I’d like to get Sam Alapati’s book after it comes out on Amazon and get the Transcender software as well, but that costs some money.  Also, I’m thinking I need to take some time and write up some study sheets for myself instead of trying to commit everything to memory and hoping I remember it during the test.

Anyway, I thought I would share my failure with the community and hope it helps someone else prepare.  The truth is that even though it is embarrassing to fail the test, I learned things in the process that I can use at my job.  It would be great to get the piece of paper, and I hope to do so by the end of the year, but I’ve already learned a ton through what I’ve done so far.

– Bobby


Useful Oracle 12c OCP exam blog post

I found this blog post about the Oracle 12c OCP exam useful: NO LONGER EXISTS

In particular, it explained why my Kaplan SelfTest software only covers the new 12c features and not the general DBA skills section of the OCP exam.

The Kaplan software I purchased has 50 questions, and they are only about new features.  The software showed me the gaps in my 12c new features knowledge and gave me practice taking a multiple-choice computerized test, and I believe the value of these benefits exceeds the $99 cost of the software.  But the software surprised me when I discovered that it didn’t cover all the areas that will be on the OCP 12c upgrade exam.  The blog post I’ve referenced explains that in the near future Transcender will produce software that includes both sections of the OCP 12c upgrade exam.

– Bobby


Quick documentation for new PeopleSoft DBAs

I did a quick survey of the latest PeopleSoft manuals to find a set of links to pass on to a couple of coworkers of mine who are interested in doing PeopleSoft DBA work, so I thought I’d include the links in a post.  This might give a new PeopleSoft DBA some highlights without having to read the entire manual set.

This page has a nice picture of how the environments connect:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/tgst/concept_UnderstandingthePeopleSoftTopology-827f9d.html

This is the top level URL for the PeopleTools 8.53 documentation:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/index.html

Another nice architecture diagram:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/tgst/task_WebServer-827f33.html

Nice overview of application development using app designer:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/tapd/task_PeopleSoftApplicationDesignerImplementation-0776f7.html

Yet another architecture diagram:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/tprt/concept_PeopleSoftPureInternetArchitectureFundamentals-c071ce.html

More in depth view of app server and its processes:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/tprt/concept_ApplicationServers-c071d0.html

Web server with discussion of servlets and jolt:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/tprt/concept_WebServer-c071dc.html

Nice overview of datamover commands:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/tadm/concept_UnderstandingDataMoverScripts-077b05.html

Datamover basics:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/tadm/task_CreatingandRunningPeopleSoftDataMoverScripts-077af9.html

Nice explanation of Oracle connections from PeopleSoft:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/tadm/task_MonitoringPeopleSoftDatabaseConnections-077989.html

Good to know but not very clear explanation:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/tsec/concept_PeopleSoftAuthorizationIDs-c07669.html

Important to know but not very clear:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/tsec/concept_PeopleSoftSignIn-c0766f.html

PS_HOME versus PS_CFG_HOME:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/tsvt/concept_UnderstandingPS_HOMEandPS_CFG_HOME-eb7ece.html

Starting psadmin:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/tsvt/task_StartingPSADMIN-c07e6b.html

Nice run down of config files:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/tsvt/task_UsingPSADMINConfigurationFiles-c07e7a.html

App server menu:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/tsvt/task_UsingtheApplicationServerAdministrationMenu-c07e84.html

Process scheduler menu:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/tsvt/task_UsingtheProcessSchedulerMenu-c07ea3.html

Web server menu – I don’t think I’ve ever used this:

http://docs.oracle.com/cd/E41633_01/pt853pbh1/eng/pt/tsvt/task_UsingtheWebPIAServerMenu-1773ed.html

– Bobby

4/27/2021

These links are all out of date. Here is the current PeopleTools manual page:

https://docs.oracle.com/en/applications/peoplesoft/peopletools/index.html

You might be able to find the current version of some of the things I mentioned in this older post.


SQL*Loader Express bug – not!

I’m still studying for my Oracle 12c OCP exam, and I was trying to run a simple example of using SQL*Loader Express when the first thing I did blew up, and I thought I had found a bug.  When I load a table with one or two columns it works fine, but when I load a table with three or four columns the last column is not loaded.  Tell me this isn’t a special feature! 🙂

First I create the table with four columns:

create table test
(a varchar2(20),
 b varchar2(20),
 c varchar2(20),
 d varchar2(20));

Then I create a comma-separated values file named test.dat with four values per line:

[oracle@ora12c dpsl]$ cat test.dat
a,b,c,d
f,g,h,i
j,k,l,m

Then I run SQL*Loader in express mode:

[oracle@ora12c dpsl]$ sqlldr system/xxxxxx table=test

SQL*Loader: Release 12.1.0.1.0 - Production on Mon Apr 21 07:32:43 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Express Mode Load, Table: TEST
Path used:      External Table, DEGREE_OF_PARALLELISM=AUTO

Table TEST:
  3 Rows successfully loaded.

Check the log files:
  test.log
  test_%p.log_xt
for more information about the load.

Then I query the newly loaded table:

ORCL:CDB$ROOT:SYSTEM>select * from test;

A                    B                    D
-------------------- -------------------- --------------------
a                    b                    d
f                    g                    i
j                    k                    m

Cue the mysterious music.  Actually, now that I look at it, it is really the third column, C, that is missing.  Maybe it doesn’t work with a column named C.

Sure enough, here it is with column C replaced with column X:

A                    B                    X                    D
-------------------- -------------------- -------------------- --------------------
a                    b                    c                    d
f                    g                    h                    i
j                    k                    l                    m

So, I guess SQL*Loader Express doesn’t work with columns named C?  Odd.

– Bobby

Update on 05/16/2014:

As you probably expected, this was user error on my part.  My standard header for sqlplus scripts has this code:

column u new_value us noprint;
column n new_value ns noprint;
column c new_value cs noprint;
 
select name n from v$database;
select user u from dual;
SELECT SYS_CONTEXT('USERENV', 'CON_NAME') c FROM DUAL;

I use this code to build a prompt that tells me which container I’m in, like this:

set sqlprompt &ns:&cs:&us>

But this means that any query column named N, U, or C gets suppressed in my sqlplus sessions, because those COLUMN ... NOPRINT settings apply to every later query returning a column with one of those names, and in my SQL*Loader test I was using a column named C.  The data was loaded all along; it just wasn’t displayed.  So, not a bug, just a user error!
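
A quick way to confirm that, sketched from memory rather than copied from the original session:

-- Column C only looks missing because of the NOPRINT setting in my header:
select c from test;   -- output suppressed by "column c new_value cs noprint"

column c clear        -- drop all the formatting attributes for column C

select c from test;   -- now the loaded values display normally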

– Bobby


ORA-00600 [3631] recovering pluggable database after flashback database in Oracle 12c

I was trying to recreate the scenario where a 12c container database is flashed back to an SCN earlier than the point in time that I had recovered a pluggable database to with point-in-time recovery.
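
The flashback step that set up the failing recover looked something like this; a sketch with a hypothetical SCN, not the exact commands from my test:

-- With the CDB mounted, flash back to an SCN before the point
-- the pluggable database had been recovered to:
shutdown immediate
startup mount
flashback database to scn 4100000;  -- hypothetical SCN
alter database open resetlogs;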

I got this ugly ORA-00600:

RMAN> recover pluggable database pdborcl;

Starting recover at 16-APR-14
using channel ORA_DISK_1

starting media recovery
media recovery failed
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 04/16/2014 06:07:40
ORA-00283: recovery session canceled due to errors
RMAN-11003: failure during parse/execution of SQL statement: alter database recover if needed
 datafile 32 , 33 , 34 , 35
ORA-00283: recovery session canceled due to errors
ORA-00600: internal error code, arguments: [3631], [32], [4096], [4210689], [], [], [], [], [], [], [], []

I think the above error message stems from this bug:

Bug 14536110  ORA-600 [ktfaput: wrong pdb] / crash using PDB and FDA

There may have been some clever way to recover from this, but I ended up just deleting and recreating the CDB through DBCA, which was good experience playing with DBCA in Oracle 12c.  I’m trying to learn 12c, but I have a feeling I have hit a bug that keeps me from testing this combination of flashback database and point-in-time recovery of a pluggable database.  I wonder if I should patch?  I think Oracle has included a fix for this bug in a patch set, and applying a patch set could be good 12c experience too.

– Bobby


Using test prep software to prepare for 12c OCP upgrade exam

I got the newly available Kaplan test prep software for the Oracle 12c OCP upgrade exam.

I took the test in certification mode when I was tired at the end of the day some day last week and got 44% right – fail!  I usually wait until I can get all the practice questions right before taking the real test, so I have a ways to go.

The practice test software has been useful in showing me things I didn’t study very well or at all.  I’m expecting to significantly improve my percentage of correct answers on my next pass.

I’m a little nervous, though, because it seems that the real test involves some generic database questions, and I don’t think the test prep software includes that section.  If you look at the list of topics, there is a section called “Key DBA Skills”.  I’d hope that after 19 years as an Oracle DBA I’d have some skills, but there are plenty of things I don’t do every day, such as setting up ASM.  I guess I’ll just have to bone up on the key pre-12c areas that I don’t use all the time and hope I’m not surprised.

Anyway, I’m at 44% but hoping to make some strides in the next few weeks.

– Bobby


Two Adaptive Plans Join Methods Examples

Here is a zip of two examples I built as I’m learning about the new adaptive plans features of Oracle 12c: zip

The first example has the optimizer underestimate the number of rows and the adaptive plans feature switches the plan on the fly from nested loops to hash join.

In the second example the optimizer overestimates the number of rows and the adaptive plans feature switches the plan from merge join to nested loops.

I ran the same scripts on 12c and 11.2.0.3 for comparison.
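
The actual scripts are in the zip, but the shape of example 1 is roughly this; a sketch reconstructed to match the general shape of the plans below, with the table and index names taken from the plans and everything else my guess:

-- Gather stats while t1 is nearly empty so the optimizer underestimates it:
create table t1 (a number, b number);
create table t2 (a number, c number);
create index t2i on t2(a);

insert into t1 values (1, 1);
insert into t2 select mod(rownum, 8) + 1, rownum
from dual connect by level <= 16;
commit;

exec dbms_stats.gather_table_stats(user, 'T1')
exec dbms_stats.gather_table_stats(user, 'T2')

-- Now grow t1 so the real row count no longer matches the statistics:
insert into t1 select mod(rownum, 8) + 1, rownum
from dual connect by level <= 7;
commit;

select /*+ gather_plan_statistics */ sum(t2.c)
from t1, t2
where t1.a = t2.a;

select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));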

Example 1 11g:

Plan hash value: 2697562628

------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |      1 |        |      1 |00:00:00.01 |      18 |
|   1 |  SORT AGGREGATE               |      |      1 |      1 |      1 |00:00:00.01 |      18 |
|   2 |   NESTED LOOPS                |      |      1 |        |      8 |00:00:00.01 |      18 |
|   3 |    NESTED LOOPS               |      |      1 |      1 |      8 |00:00:00.01 |      17 |
|*  4 |     TABLE ACCESS FULL         | T1   |      1 |      1 |      8 |00:00:00.01 |      14 |
|*  5 |     INDEX RANGE SCAN          | T2I  |      8 |      1 |      8 |00:00:00.01 |       3 |
|   6 |    TABLE ACCESS BY INDEX ROWID| T2   |      8 |      1 |      8 |00:00:00.01 |       1 |
------------------------------------------------------------------------------------------------

Example 1 12c:

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem |  O/1/M   |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.01 |       6 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.01 |       6 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |      1 |      8 |00:00:00.01 |       6 |  2168K|  2168K|     1/0/0|
|*  3 |    TABLE ACCESS FULL| T1   |      1 |      1 |      8 |00:00:00.01 |       3 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |      1 |     16 |00:00:00.01 |       3 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Example 2 11g

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem |  O/1/M   |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |      1 |        |      1 |00:00:00.01 |      16 |       |       |          |
|   1 |  SORT AGGREGATE               |      |      1 |      1 |      1 |00:00:00.01 |      16 |       |       |          |
|   2 |   MERGE JOIN                  |      |      1 |      4 |      1 |00:00:00.01 |      16 |       |       |          |
|   3 |    TABLE ACCESS BY INDEX ROWID| T2   |      1 |     16 |      2 |00:00:00.01 |       2 |       |       |          |
|   4 |     INDEX FULL SCAN           | T2I  |      1 |     16 |      2 |00:00:00.01 |       1 |       |       |          |
|*  5 |    SORT JOIN                  |      |      2 |      4 |      1 |00:00:00.01 |      14 | 73728 | 73728 |          |
|*  6 |     TABLE ACCESS FULL         | T1   |      1 |      4 |      1 |00:00:00.01 |      14 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------

Example 2 12c

------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |      1 |        |      1 |00:00:00.01 |       5 |
|   1 |  SORT AGGREGATE               |      |      1 |      1 |      1 |00:00:00.01 |       5 |
|   2 |   NESTED LOOPS                |      |      1 |        |      1 |00:00:00.01 |       5 |
|   3 |    NESTED LOOPS               |      |      1 |      4 |      1 |00:00:00.01 |       4 |
|*  4 |     TABLE ACCESS FULL         | T1   |      1 |      4 |      1 |00:00:00.01 |       3 |
|*  5 |     INDEX RANGE SCAN          | T2I  |      1 |        |      1 |00:00:00.01 |       1 |
|   6 |    TABLE ACCESS BY INDEX ROWID| T2   |      1 |      1 |      1 |00:00:00.01 |       1 |
------------------------------------------------------------------------------------------------

The output of the plans for the 12c examples ends with this note:

Note
-----
   - this is an adaptive plan

So, that tells me it is the adaptive plans feature that is changing the plan despite the wrong row estimates.
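
If you want to see the plan steps that were considered but not used, dbms_xplan in 12c accepts an ADAPTIVE format option; a quick sketch:

-- Inactive operations show up with a dash in front of their Id:
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST +ADAPTIVE'));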

– Bobby


Oracle 12c Auditing Chapters

Spent a good amount of time yesterday and today reading about auditing in Oracle 12c.  Can’t say I read every word, but I think it was worth reading the three chapters in the Security manual related to auditing:

Chapter 21 Introduction to Auditing
Chapter 22 Configuring Audit Policies
Chapter 23 Administering the Audit Trail

I haven’t used these features yet, but the Unified Audit Trail seems like a major new piece of code.

I also read this chapter of the VLDB guide because it seemed to have a lot of things that were either new in 12c or new to me:

Chapter 5 Managing and Maintaining Time-Based Information

This chapter describes features that cause data to age out and get moved on to less expensive storage automatically over time.

Anyway, just wanted to pass on some full chapters that I’ve read and am pondering as I try to comprehend the new 12c features.

– Bobby


Learned a couple of things from RMAN restore

An RMAN restore and recovery that I completed today answered a couple of questions left over from the recovery that was the topic of my post from June.  Both today’s recovery and June’s involved restoring a production database on another host and recovering that database to a particular point in time.

Question 1: How do I rename the redo logs?

When doing a restore and recovery to a point in time, RMAN does not restore the redo logs, so the production redo log directories do not have to exist on your target host.  All you have to do is rename the redo logs after the RMAN restore and recover commands and before the alter database open resetlogs command.

Oracle document 1338193.1, in step 8, “Relocate all the online redo logs”, documents the needed command and when to run it.

For each production redo log, you run a command like this against the restored and recovered database while it is mounted but not yet opened:

alter database rename file 
'old redo log path and name' to 
'new redo log path and name';
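
For example, with hypothetical paths:

alter database rename file
'/u01/oradata/PROD/redo01a.log' to
'/oradata/TEST/redo01a.log';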

Question 2: How do I prevent the restored archive logs from filling up the archive log filesystem?

It turns out that the recover command has an option that limits the amount of space the restored archive logs will take up, and another option that causes the recover command to delete each archive log after applying it:

recover database delete archivelog maxsize 50 G;

Otherwise this was the same case as in the earlier blog post.  But at least this time I didn’t have to worry about archive logs filling up the filesystem, and I was able to put the redo logs where I wanted them.

– Bobby
