Two New ASH FORCE_MATCHING_SIGNATURE scripts

I just put two new scripts on my OracleDatabaseTuningSQL GitHub repository. They came out of a PeopleSoft Financials performance problem that involved many similar SQL statements using constants instead of bind variables, so no single statement showed up at the top of the AWR report. I had to look at ASH data to find them, grouping by force matching signature to pull the similar statements together. These are the two scripts:

ashfmscount.sql – Looks at a single Application Engine session and groups all of its time by force matching signature to find the queries that consumed the most time. I used my simple ashdump.sql script to dump out a few ASH rows while I knew the Application Engine was running and got the SESSION_ID and SESSION_SERIAL# values from there.
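
The idea behind it is roughly this (a minimal sketch, not the actual script from GitHub; the bind variables are placeholders for the SESSION_ID and SESSION_SERIAL# values from ashdump.sql and the time range of the run):

-- Sketch: estimated DB time per FORCE_MATCHING_SIGNATURE for one session.
-- Each DBA_HIST_ACTIVE_SESS_HISTORY sample represents roughly 10 seconds.
select
    force_matching_signature,
    count(*) * 10 est_seconds,
    count(distinct sql_id) num_sql_ids
from dba_hist_active_sess_history
where
    session_id = :session_id and
    session_serial# = :session_serial and
    sample_time between :start_time and :end_time
group by force_matching_signature
order by est_seconds desc;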

ashtopelapsed.sql – This is meant to look like the SQL ordered by elapsed time report in an AWR report, except that it groups SQL by force matching signature and shows an example sql_id with its text to give you an idea of what each signature represents. It might be good to run this the next time an AWR report does not have any long running SQL statements in its top SQL section.
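
The basic idea here is similar (again just a sketch, not the real script; the 10-seconds-per-sample estimate and the bind variable names are my assumptions):

-- Sketch: rank force matching signatures by sampled DB time across all
-- sessions in a time range and show one example sql_id per signature.
select
    force_matching_signature,
    count(*) * 10 est_seconds,
    min(sql_id) example_sql_id
from dba_hist_active_sess_history
where sample_time between :start_time and :end_time
group by force_matching_signature
order by est_seconds desc;

The text for the example sql_id can then be pulled from DBA_HIST_SQLTEXT or V$SQL.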

The first one, along with various other scripts, is what I actually used to resolve the issue. I created the second one just now as a possible future companion to an AWR report. I didn’t check whether the standard ASH report already duplicates it, but both new scripts work well.

Bobby


Plan Change Due to Index Partition Names Mismatch

I want to document a bug I ran across in Oracle 11.2.0.3 on HP-UX Itanium and how it caused unexpected plan changes that would bring the database to its knees with many long running queries running the bad plan. I found that the problem table had a local index whose partition names did not match the partition names of the table and that recreating the index with partition names that matched the table resolved the issue.

We have a busy and important Oracle database that had queries that ran fine for weeks and months and then suddenly changed to a bad plan. They all had a similar pattern. The good plan did a simple range scan on a certain partition’s index. The bad plan did a fast full index scan.

Good:

PARTITION RANGE SINGLE | 
 INDEX RANGE SCAN | PROBLEM_INDEX

Bad:

PARTITION RANGE SINGLE | 
 INDEX FAST FULL SCAN | PROBLEM_INDEX

The query accessed the table using the first two columns of PROBLEM_INDEX, and the first column of the index is the partitioning column for the range partitioned table. So, calling the first two columns of the index PARTITIONING_C and OTHER_C, the query looked something like this:

select
...
from problem_table
where
PARTITIONING_C = :bindvar1 and
OTHER_C = :bindvar2
...

I have known for a long time that some choice of bind variable values was causing the plan to flip to the fast full scan on the index partition. But I did not know that it had to do with the names of the index’s partitions. The table and index creation scripts must have looked something like this:

create table problem_table
(
PARTITIONING_C NUMBER,
OTHER_C NUMBER,
...
)
PARTITION BY RANGE (PARTITIONING_C)
(
PARTITION P1000 VALUES LESS THAN (1000),
PARTITION P2000 VALUES LESS THAN (2000),
PARTITION P3000 VALUES LESS THAN (3000),
PARTITION P4000 VALUES LESS THAN (4000),
PARTITION P5000 VALUES LESS THAN (5000));

CREATE UNIQUE INDEX problem_index ON problem_table
(PARTITIONING_C, OTHER_C , ...)
LOCAL (  
  PARTITION P2000,
  PARTITION P3000,
  PARTITION P4000,
  PARTITION P5000,
  PARTITION P6000);

The index has the same number of partitions as the table and a lot of them have the same names, but they do not line up. I found this bug in the version of Oracle that we are on:

Bug 14013094 – DBMS_STATS places statistics in the wrong index partition

I am not sure that I am hitting that exact bug, but I am hitting something similar, because 14013094 relates to index partition names that do not match the table partition names. For one partition of the problem index the statistics were set to 0 even though its corresponding table partition had millions of rows. It was as if partition P3000 on problem_table had 20,000,000 rows in its stats while the corresponding partition P4000 on problem_index had 0 rows. If I gathered index partition statistics on P4000, it correctly set the stats to 20,000,000. If I gathered table partition statistics on P3000, it cleared the index partition stats on P4000, setting them back to 0! Yikes! How weird is that? It seems obvious to me that it is a bug, though maybe not exactly 14013094. I tried dropping and recreating the index leaving the partition names as they were, but that did not resolve the issue. Then I just created the index letting it default to matching partition names, like this:

CREATE UNIQUE INDEX problem_index ON problem_table
(PARTITIONING_C, OTHER_C , ...)
LOCAL;

I’m not sure how the partition names got mismatched, but it is a simple fix. It took me a while of staring at the partition statistics to realize what was going on, and then it took a while to prove out the fix. We do not yet have this in production, but I believe we have nailed down the cause of the plan changes. In the past I had been using SQL Profiles to lock in the plans of any new query that used the problem table, but I plan to run without them after putting in the new index. I kick myself for not researching this earlier, but it was not obvious, so maybe it was not crazy to use SQL Profiles for a while. Still, it became a burden to keep using them, and it left the system vulnerable to a plan change on any query against the problem table that did not already have a SQL Profile.

In a lot of ways this is a simple query tuning problem with bad zero stats on a partition with millions of rows. Any time the statistics are wildly inaccurate bad plans are likely. But tracking down the reason for the zero statistics was not simple. I should mention that some of the table partitions are empty and have 0 rows. In fact, to take our example, imagine that table partition P4000 has 0 rows just like index partition P4000. No one would suspect that the statistics are wrong unless you realize that P4000 on the index corresponds not to P4000 but to P3000 on the table and P3000 has 20,000,000 rows!
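
A query along these lines lines up the table and index partitions by position and makes that kind of mismatch obvious (just a sketch; the owner and object names are placeholders, and I am assuming the index owner matches the table owner):

-- Sketch: for a local index, partition N of the index corresponds to
-- partition N of the table, so join on PARTITION_POSITION and compare
-- names and statistics side by side.
select
    tp.partition_position,
    tp.partition_name  table_partition,
    tp.num_rows        table_num_rows,
    ip.partition_name  index_partition,
    ip.num_rows        index_num_rows
from dba_tab_partitions tp,
     dba_ind_partitions ip
where
    tp.table_owner = 'MYOWNER' and
    tp.table_name = 'PROBLEM_TABLE' and
    ip.index_owner = tp.table_owner and
    ip.index_name = 'PROBLEM_INDEX' and
    ip.partition_position = tp.partition_position
order by tp.partition_position;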

Bind variables lead to these unexpected plan changes when different values passed into the variables produce different plans. It is a fundamental tradeoff of using bind variables to reduce the parsing that constants would cause. If one set of bind variable values produces a bad plan, that plan gets locked into memory and is used for all the other values passed to the query until the plan is flushed out of memory. There are some cases where the optimizer will look at the values passed into the bind variables and choose between multiple plans, but that did not happen in this problem.
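
For reference, one quick way to see whether the optimizer is doing that kind of bind-aware plan selection for a statement is to look at V$SQL (a sketch; :sql_id is a placeholder):

-- Sketch: adaptive cursor sharing flags for one statement's child cursors.
select sql_id, child_number, plan_hash_value,
       is_bind_sensitive, is_bind_aware, executions
from v$sql
where sql_id = :sql_id;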

So, what can someone get from this post if they are not on 11.2.0.3 and do not have locally partitioned indexes whose partition names are similar to the table’s but offset and mismatched? For me the point is to dig deeper into the underlying reason for a bad plan after using a SQL Profile to resolve it. I expect bad plans because the optimizer is limited in its ability to pick the right plan even with the best possible statistics. Also, since I support many databases, I do not have time to dig deeply into the underlying reason for every bad plan. But I need to dig deeper when similar queries keep getting the same kind of bad plans. In many cases bad statistics lead to bad plans, and it is a lot better to fix the statistics once than to keep applying SQL Profiles and hints to fix similar queries with the same sort of switches to bad plans. In this case the fix was the bizarre step of recreating the index with partition names that match the table, which ensured that the statistics on the index partitions were accurate, and I no longer need SQL Profiles to lock in the good plan.

Bobby


MongoDB test server on Oracle Linux

I use Oracle Enterprise Linux on VirtualBox running on my Windows 10 laptop for test servers (virtual machines) of various types of software. I just set up a MongoDB VM yesterday and thought I would document some of the things I did that are not in the standard documentation.

I followed this URL for the install:

https://docs.mongodb.com/manual/tutorial/install-mongodb-on-red-hat/

I created the yum repository file mongodb-org-4.4.repo as documented:

[root@mongodb yum.repos.d]# cat mongodb-org-4.4.repo
[mongodb-org-4.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.4.asc

I installed the MongoDB yum package:

[root@mongodb yum.repos.d]# sudo yum install -y mongodb-org
Loaded plugins: langpacks, ulninfo
mongodb-org-4.4                                                                                                           | 2.5 kB  00:00:00
ol7_UEKR4                                                                                                                 | 2.5 kB  00:00:00
ol7_latest                                                                                                                | 2.7 kB  00:00:00
mongodb-org-4.4/7Server/primary_db                                                                                        |  47 kB  00:00:02
Resolving Dependencies
--> Running transaction check
---> Package mongodb-org.x86_64 0:4.4.6-1.el7 will be installed
--> Processing Dependency: mongodb-org-shell = 4.4.6 for package: mongodb-org-4.4.6-1.el7.x86_64
...

I didn’t need sudo since I was root, but it worked. I don’t know if this was really needed but I set ulimit with these commands:

ulimit -f unlimited
ulimit -t unlimited
ulimit -v unlimited
ulimit -l unlimited
ulimit -n 64000
ulimit -m unlimited
ulimit -u 64000

I am not sure if these commands stick when you run them as root. They seem to but for now I’ve been running them manually after I reboot. These were documented here:

https://docs.mongodb.com/manual/reference/ulimit/

Based on this document I also created the file /etc/security/limits.d/99-mongodb-nproc.conf:

[root@mongodb ~]# cat /etc/security/limits.d/99-mongodb-nproc.conf
*          soft    nproc     64000
*          hard    nproc     64000
root       soft    nproc     unlimited
[root@mongodb ~]#

I don’t know for sure if this was needed, but it did not cause any problems.

I edited /etc/selinux/config to prevent SELinux from interfering:

[root@mongodb selinux]# diff config.07012021 config
7c7
< SELINUX=enforcing
---
> SELINUX=disabled

I also disabled the firewall just in case it would cause problems:

[root@mongodb ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@mongodb ~]# systemctl stop firewalld
[root@mongodb ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
...

Lastly, in order to be able to access MongoDB from outside the VM I had to edit /etc/mongod.conf to allow access from all IP addresses:

[root@mongodb etc]# diff  mongod.conf mongod.conf.07012021
29c29
<   bindIp: 0.0.0.0 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
---
>   bindIp: 127.0.0.1  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

Of course, in a production system you would want to make this more secure, but this is just a quick and dirty test VM.

Finally, this command brings up the database:

systemctl start mongod

I ran this in a new root PuTTY window to pick up the ulimit settings. Not sure if that was necessary, but it did work.

I have a NAT network and port forwarding set up in VirtualBox: MongoDB listens by default on port 27017 on localhost inside the VM, and VirtualBox forwards port 61029 on 127.0.0.1 on my laptop to it.

Since the programming language I am most familiar with is Python (I have not learned any JavaScript), I set up a test connection to my new MongoDB database using the pymongo module.

I installed it like this:

pip install pymongo[srv]

A simple test program looks like this:

from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

client = MongoClient('127.0.0.1', 61029)
try:
    # The ismaster command is cheap and does not require auth.
    client.admin.command('ismaster')
except ConnectionFailure:
    print("Server not available")

I got that from Stack Overflow. I was also following the pymongo tutorial:

https://pymongo.readthedocs.io/en/stable/tutorial.html

One test program:

from pymongo import MongoClient

client = MongoClient('127.0.0.1', 61029)

db = client['test-database']

collection = db['test-collection']

import datetime
post = {"author": "Mike",
          "text": "My first blog post!",
          "tags": ["mongodb", "python", "pymongo"],
          "date": datetime.datetime.utcnow()}
          
posts = db.posts
post_id = posts.insert_one(post).inserted_id
print(type(post_id))
print(post_id)

print(db.list_collection_names())

import pprint
pprint.pprint(posts.find_one())

pprint.pprint(posts.find_one({"author": "Mike"}))

pprint.pprint(posts.find_one({"author": "Eliot"}))

pprint.pprint(posts.find_one({"_id": post_id}))

post_id_as_str = str(post_id)
pprint.pprint(posts.find_one({"_id": post_id_as_str}))

from bson.objectid import ObjectId

pprint.pprint(posts.find_one({"_id": ObjectId(post_id_as_str)}))

Its output:

<class 'bson.objectid.ObjectId'>
60df400f53f43d1c2703265c
['test-collection', 'posts']
{'_id': ObjectId('60de4f187d04f268e5b54786'),
 'author': 'Mike',
 'date': datetime.datetime(2021, 7, 1, 23, 26, 16, 538000),
 'tags': ['mongodb', 'python', 'pymongo'],
 'text': 'My first blog post!'}
{'_id': ObjectId('60de4f187d04f268e5b54786'),
 'author': 'Mike',
 'date': datetime.datetime(2021, 7, 1, 23, 26, 16, 538000),
 'tags': ['mongodb', 'python', 'pymongo'],
 'text': 'My first blog post!'}
None
{'_id': ObjectId('60df400f53f43d1c2703265c'),
 'author': 'Mike',
 'date': datetime.datetime(2021, 7, 2, 16, 34, 23, 410000),
 'tags': ['mongodb', 'python', 'pymongo'],
 'text': 'My first blog post!'}
None
{'_id': ObjectId('60df400f53f43d1c2703265c'),
 'author': 'Mike',
 'date': datetime.datetime(2021, 7, 2, 16, 34, 23, 410000),
 'tags': ['mongodb', 'python', 'pymongo'],
 'text': 'My first blog post!'}

I ran this on Python 3.9.6, so strings print as ‘Mike’ rather than u’Mike’. The output in the tutorial looks like it came from some version of Python 2, where Unicode strings print as u’Mike’; on Python 3 strings are Unicode by default, so you just get ‘Mike’.

Anyway, I didn’t get any further than installing MongoDB and starting to run through the tutorial, but it is up and running. This might be helpful to someone else (or to me) when setting up a test VM like this.

Bobby


Real World Index Example

This is a follow-up to my previous post, with a real-world example of how much difference indexes can make on tables that do not have them.

I looked at an AWR report of a peak hour on a busy database and noticed sql_id 028pmgvcryk5n:

Peak Hour No Indexes

The top SQL is a PL/SQL call, so it does not have a plan, but 028pmgvcryk5n stood out because it was taking 11.24% of the total elapsed time and running over 2 seconds (2.23) per execution. Most queries on this system run in hundredths of a second, like .02 seconds.

I looked at this query’s plan and it included two tables with full scans on both:

I put indexes on each table and the new plan looks like this:

With indexes it ran the query in about 1 millisecond:

Without indexes it ran from 600 to 2000 milliseconds:

I guess I have a simple point. Indexes can make a big difference. A query that runs in 600 milliseconds may be good enough in some cases, but if it can run in 1 millisecond, it is worth putting the right indexes on the tables.

Bobby

p.s. I used my sqlstat.sql and vsqlarea.sql scripts to get the execution history with and without the indexes. I used getplans.sql to get the plan without indexes from the AWR and extractplansqlid.sql to get the plan with the indexes from the V$ views. The query ran too fast to show up on the AWR once the indexes were in place so that is why I used the V$ queries to get information about the query after adding the indexes.
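
For what it is worth, the core idea behind sqlstat.sql is something like this (a simplified sketch, not the actual script from GitHub):

-- Sketch: executions and average elapsed milliseconds per AWR snapshot
-- for one sql_id. ELAPSED_TIME_DELTA is in microseconds.
select
    ss.snap_id,
    sn.end_interval_time,
    ss.executions_delta executions,
    ss.elapsed_time_delta / 1000 /
        decode(ss.executions_delta, 0, 1, ss.executions_delta) avg_elapsed_ms
from dba_hist_sqlstat ss,
     dba_hist_snapshot sn
where
    ss.sql_id = '028pmgvcryk5n' and
    ss.snap_id = sn.snap_id and
    ss.dbid = sn.dbid and
    ss.instance_number = sn.instance_number
order by ss.snap_id;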


Simple Index Example

There is a lot to be said about when to add indexes on Oracle tables, but I wanted to show a simple example. In this case we have a table with no indexes and a query with two equality conditions in its where clause that identify a single row out of many. Here is a zip of the SQL*Plus script and log for this post: zip

The table TEST is a clone of DBA_TABLES. I load it up with a bunch of copies of DBA_TABLES and with a single row that is unique. Then I run the query without any indexes on the table:

SQL> select TABLESPACE_NAME
  2  from test
  3  where
  4  owner='TESTOWNER' and
  5  table_name = 'TESTTABLE';

TABLESPACE_NAME
------------------------------
TESTTABLESPACE

Elapsed: 00:00:00.15

I add an index on owner and table_name and run it again:

SQL> select TABLESPACE_NAME
  2  from test
  3  where
  4  owner='TESTOWNER' and
  5  table_name = 'TESTTABLE';

TABLESPACE_NAME
------------------------------
TESTTABLESPACE

Elapsed: 00:00:00.00

Going from .15 seconds to .00 seconds (less than .01 seconds) may not seem like a big deal. But if you start running a query like this hundreds of thousands of times per hour, the .15 seconds of CPU per execution can slow your system down.
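
To put rough numbers on it: at 200,000 executions per hour, .15 seconds per execution works out to 30,000 seconds of database time per hour, which is more than eight CPU cores kept completely busy, while at under .01 seconds per execution the same workload needs less than one core.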

See the zip for the details. The create index command looked like this:

SQL> create index testi on test (owner, table_name);

Index created.
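
This is not in the zip, but if you want to confirm that the new index is actually being used, something like this shows the plan of the query you just ran:

-- Sketch: run the query, then display the plan of the last statement
-- executed by this session. In SQL*Plus make sure serveroutput is off
-- or DISPLAY_CURSOR will not find the last statement.
select TABLESPACE_NAME
from test
where
owner='TESTOWNER' and
table_name = 'TESTTABLE';

select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL'));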

Bobby


Fixed Broken Links

I had a couple of new comments on older blog pages and I noticed that some links pointed to things which no longer exist. I fixed a few things that I knew were wrong. Finally, this week I decided to review every post back to the beginning 9 years ago and click on every link.

For dead links I removed the link and added a comment like (DOES NOT EXIST). In some cases, the link still existed but was different. I changed several links to Oracle’s documentation for example. In many cases I put (updated) or something like that to show that there was a new link. I synced up a lot of the old links to the current version of my scripts on GitHub. Hopefully these changes won’t make the older posts unreadable, but at least the links will point to useful and up to date versions of things.

I did not carefully read every post because I was in a hurry, but I did look at every post, and it gave me the chance to see how things changed over the past 9 years. It was kind of a quick review of what I was thinking. Some of the posts seemed kind of dumb. (What was I thinking?) Others are genuinely useful. It was interesting to see which topics I talked about and how that changed over time. It makes me wonder where things will go in the future. I guess I cannot expect to fully predict the future, but it is good to think about what I should be learning and what skills I should be developing as things go forward.

Anyway, hopefully the updated links will make the posts a little more helpful to myself and others.

Bobby


SymPy Tutorial Repository

I have been playing with the Python SymPy package and created a repository with my test scripts and notes:

https://github.com/bobbydurrett/sympytutorial

Might be helpful to someone. I just got started.

I had used Maxima before. SymPy and Maxima are both what Wikipedia calls “Computer Algebra Systems.” They have a nice list here:

https://en.wikipedia.org/wiki/List_of_computer_algebra_systems

I got a lot of use out of Maxima, but I think it makes sense to switch to SymPy because it is written in Python and works well with other mainstream Python packages that I use, like Matplotlib. SymPy and Matplotlib both fall under the SciPy umbrella of related tools, so if I need some computer algebra I probably should stick with SymPy.

Maxima and SymPy are both free.

Bobby


ORA-14767 when day of month > 28 with interval partitioning month interval

SQL> CREATE TABLE test(
  2  RUN_DATE DATE,
  3  MY_NBR NUMBER(4)
  4  )
  5  PARTITION BY RANGE (RUN_DATE)
  6  INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
  7  (
  8    PARTITION data_p1 VALUES LESS THAN (TO_DATE('01/29/2017', 'MM/DD/YYYY'))
  9  );
CREATE TABLE test(
*
ERROR at line 1:
ORA-14767: Cannot specify this interval with existing high bounds


SQL> 
SQL> CREATE TABLE test(
  2  RUN_DATE DATE,
  3  MY_NBR NUMBER(4)
  4  )
  5  PARTITION BY RANGE (RUN_DATE)
  6  INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
  7  (
  8    PARTITION data_p1 VALUES LESS THAN (TO_DATE('01/28/2017', 'MM/DD/YYYY'))
  9  );

Table created.

When creating a range partitioned table with a date type partitioning column and a one month interval, the starting partition’s upper bound must have a day of the month less than 29, or the create fails with an ORA-14767 error.

The error message “Cannot specify this interval with existing high bounds” is not helpful. How about something like “Need a day of the month that exists in every month”? February only has 28 days most years so 28 is the max.

Bobby


Free Machine Learning Class from MIT

I noticed this new class from MIT:

https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-036-introduction-to-machine-learning-fall-2020/

It is about machine learning and is free. I think it has some built in exercises with automatic grading but no instructor to interact with.

Since ML is such a hot topic I thought I would share it. I have not taken the class.

Bobby


60,000 sessions in 64 gigabyte VM using shared servers

Just a quick note. I have an application that is generating thousands of inactive sessions, and with the default dedicated server configuration we are having to add more and more memory to our virtual host to support the connections. We estimate that the application may need 45,000 mostly inactive sessions once it is fully rolled out. So, I thought about how much memory would be required to support 45,000 sessions using shared servers. In an earlier post I mentioned how I got the session count up to about 11,000, so I just took the Java program from that post and adjusted memory parameters to support over 45,000. I got it up to over 60,000, so the test was essentially successful. I don’t think I would want to run a system with 60,000 sessions on a single node, but it is nice to see that it is to some degree possible.

I used a 64 gigabyte Linux VM and set these parameters:

sga_max_size=52G
sga_target=52G
shared_pool_size=36G
dispatchers='(PROTOCOL=TCP)(DISPATCHERS=64)'
max_shared_servers=16
shared_servers=16
large_pool_size=512M

Pretty sure that the large pool grew dynamically to fill the SGA space not taken up by the shared pool: 52 - 36 = 16 gigabytes of large pool.
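
While running a test like this, a few queries against the standard V$ views are handy for watching the session counts and the shared server pool (just a sketch):

-- How many sessions are connected, and how many came in through shared
-- servers? Idle shared server sessions show SERVER = 'NONE'.
select count(*) total_sessions,
       sum(case when server in ('SHARED', 'NONE') then 1 else 0 end) shared_sessions
from v$session;

-- Are the shared servers or dispatchers becoming a bottleneck?
select name, status, requests, busy, idle from v$shared_server;

select name, status, busy, idle from v$dispatcher;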

Anyway, I don’t have time to write this up carefully now, but I wanted to publish the parameters.

Here is the previous post with the Java program I used to open 1000 connections:

https://www.bobbydurrettdba.com/2013/06/26/testing-maximum-number-of-oracle-sessions-supported-by-shared-servers/

I ended up running 30 copies of it on each of 3 servers, for a total of 90,000 potential logins, and got up to over 63,000 sessions.

Bobby
