Tweaked script to show top SQL by force matching signature

I just finished another production tuning exercise that is like two recent posts:


Tuning Example With Outline Hints and SQL Profile

The biggest new thing was that I ran the ashtopelapsed.sql script that I mentioned in the first post above but it missed the top SQL statement for the job I was trying to tune. I am struggling to get the logic right. I am trying to write an ASH-based top SQL report for a given time range. It is like an AWR report where you see the top elapsed time SQL statement for the given time frame. But I am grouping by FORCE_MATCHING_SIGNATURE and not SQL_ID because I want to see the top SQL that all have the same signature but different SQL_IDs. In this case there was a PeopleSoft update and select statement that did not use bind variables, so each execution had its own SQL_ID. Here is the select:

A.ACCOUNTING_DT <= TO_DATE('2021-09-04','YYYY-MM-DD');

I guess this runs for different values of BUSINESS_UNIT_GL, ACCOUNTING_DT, etc.

I am trying to get ashtopelapsed.sql to show the top SQL by force matching signature:

create table topsigs as
select FORCE_MATCHING_SIGNATURE, count(*) active
from DBA_HIST_ACTIVE_SESS_HISTORY
where sample_time between
to_date('13-SEP-2021 01:00:19','DD-MON-YYYY HH24:MI:SS')
and to_date('13-SEP-2021 06:52:25','DD-MON-YYYY HH24:MI:SS')
group by FORCE_MATCHING_SIGNATURE;

This covers the time frame of the problem job, from about 1 to 7 am yesterday.

But then I want to find at least one SQL_ID for the given value of FORCE_MATCHING_SIGNATURE so I can then get its text as an example of a real SQL statement that has the signature:

create table sigtoid as
select t.FORCE_MATCHING_SIGNATURE, min(a.sql_id) a_sql_id
from topsigs t, DBA_HIST_ACTIVE_SESS_HISTORY a, DBA_HIST_SQLTEXT s
where t.FORCE_MATCHING_SIGNATURE = a.FORCE_MATCHING_SIGNATURE
and a.sql_id = s.sql_id
group by t.FORCE_MATCHING_SIGNATURE;

There has to be a row in DBA_HIST_SQLTEXT for the signature or else I will not have an example SQL statement and the signature will get dropped from the report. I may need to put some sort of outer join somewhere so I can see the FORCE_MATCHING_SIGNATURE values that do not have an example SQL_ID in DBA_HIST_SQLTEXT. But fortunately, with the last revision (commit) to ashtopelapsed.sql I was able to see the text of an example SQL statement for each of the top signatures during the time when the problem job ran.
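One way to keep the signatures that have no example SQL_ID would be the outer join mentioned above. This is only a sketch of the idea, built on the topsigs table from earlier; the ANSI join layout is my assumption, not the script's actual code:

```sql
-- Sketch only: left outer joins keep every signature from topsigs even
-- when DBA_HIST_SQLTEXT has no matching row; a_sql_id comes back NULL
-- for signatures with no saved example text.
select t.force_matching_signature,
       min(s.sql_id) a_sql_id
from topsigs t
left outer join dba_hist_active_sess_history a
  on a.force_matching_signature = t.force_matching_signature
left outer join dba_hist_sqltext s
  on s.sql_id = a.sql_id
group by t.force_matching_signature;
```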

The other funny thing is that I forgot I had ashfmscount.sql so I ended up rewriting it. I need to read my own blog posts. 🙂 Hopefully this more detailed post will serve as a model for future performance tuning efforts and so I will not forget the scripts that I used this time.

As for hacking a SQL Profile together, the subject of the second post above, I edited one in vi as described in the post and the other I just extracted using coe_xfr_sql_profile.sql. I also edited both SQL Profiles to use FORCE = TRUE to make the SQL profile apply to all SQL statements with the same signature. I described FORCE = TRUE in earlier posts:


Example of coe_xfr_sql_profile force_match TRUE

I think there is some overlap in all these posts, but I wanted to document today’s tuning exercise. The biggest new thing is the improvement of ashtopelapsed.sql.
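Under the covers, the scripts generated by coe_xfr_sql_profile.sql create the profile with DBMS_SQLTUNE.IMPORT_SQL_PROFILE. A minimal sketch of the FORCE_MATCH idea follows; the SQL text, hint list, and profile name here are placeholders, not what the generated script contains:

```sql
-- Minimal sketch, not the generated coe script: force_match => TRUE makes
-- the profile apply to every literal-only variant of the statement that
-- shares the same force matching signature.
declare
  sql_txt clob := 'UPDATE PS_HR_ACCTG_LINE SET ...'; -- placeholder text
begin
  dbms_sqltune.import_sql_profile(
    sql_text    => sql_txt,
    profile     => sqlprof_attr('FULL(@"SEL$1" "A"@"SEL$1")'), -- placeholder hint
    name        => 'EXAMPLE_PROFILE',
    force_match => TRUE);
end;
/
```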

Here are a couple of images of the AWR and ASH top SQL for the time of this long running job:


Neither of these captured the top SQL by signature so that is why ashtopelapsed.sql was helpful. There may be some Oracle supplied way to do the same thing that I do not know about but the standard awrrpt.sql and ashrpt.sql do not handle this situation well. I could not find any useful information about my long running job in the top SQL portions of the AWR and ASH reports.

Here is the top output from ashtopelapsed.sql:

    ACTIVE FORCE_MATCHING_SIGNATURE SQL_ID        SQL_TEXT
---------- ------------------------ ------------- --------------------------
     10389                        0 02101wfuxvwtd INSERT INTO PS_AERUNCONTRO
      1394      2407690495325880429 003t0wk45taqs UPDATE PS_HR_ACCTG_LINE SE
       858      9373188292577351646 04hvza4k3u72x INSERT INTO PS_PROJ_RES_TM
                                                  ID, VOUCHER_ID, VOUCHER_LI
                                                  D_DETAIL, CST_DISTRIB_STAT
                                                  _TYPE, JL.RESOURCE_TYPE, J
                                                  JRNL_HEADER JH , PS_JRNL_L

       718       810675553005238195 b8fakhtc5g3rx select * from (select a.os
       646     12856152818202055532 00mmbs0c6bh1a SELECT DISTINCT TO_CHAR(A.

I created FORCE=TRUE SQL Profiles for FORCE_MATCHING_SIGNATURE values 2407690495325880429 and 12856152818202055532. Here was the output from ashfmscount.sql:

FORCE_MATCHING_SIGNATURE     ACTIVE
------------------------ ----------
     2407690495325880429       1394
    12856152818202055532        646
    12319028094632613940         43
    12084330158653099691          2
    10829167302201730459          2
     4612906341224490548          1

Almost all the 6-hour runtime was taken up by 2407690495325880429 and 12856152818202055532. Each ASH sample represents 10 seconds, so converted to hours it is:

((1394 + 646)*10)/3600.0 = 5.666666666666667

Almost all the job run time was spent on SQL statements that match these two signatures.

These two had full scans and needed index skip scans.

Bad plan:

Plan hash value: 3305693634
| Id  | Operation          | Name             |
|   0 | UPDATE STATEMENT   |                  |
|   1 |  UPDATE            | PS_HR_ACCTG_LINE |

Good plan:

Plan hash value: 1538925713
| Id  | Operation                    | Name             |
|   0 | UPDATE STATEMENT             |                  |
|   1 |  UPDATE                      | PS_HR_ACCTG_LINE |
|*  3 |    INDEX SKIP SCAN           | PSAHR_ACCTG_LINE |

Not to beat a dead horse, but my main point is that I updated ashtopelapsed.sql to be useful in this situation. It gave me information that the AWR and ASH reports did not.

Here are the scripts that I used for this tuning example:

ashdump.sql – dump out a single 10 second point in time to find the session that is running the batch job

I also had to run this script which is not in my git repository (it is based on ashcount.sql which is my generic script for grouping by certain ash columns).

select
SESSION_ID,count(*) active
from DBA_HIST_ACTIVE_SESS_HISTORY
where
sample_time
between
to_date('13-SEP-2021 01:00:19','DD-MON-YYYY HH24:MI:SS')
and
to_date('13-SEP-2021 06:52:25','DD-MON-YYYY HH24:MI:SS')
and program like 'PSAESRV%'
group by SESSION_ID
order by active desc;

SESSION_ID     ACTIVE
---------- ----------
      1039       2088
      1005        957
       858        804
      1006        195
       993        150
       942        143
      1060        114
       940        109
       927         69
      1026          1

I was trying to figure out which PSAESRV session was the one that ran all 6 hours. Session 1039 was the one, so I used it in ashfmscount.sql.

ashfmscount.sql – find the top signatures for the session that is the batch job

ashtopelapsed.sql – get the text of the top SQL statements grouped by signature in the time frame that the job ran

plan.sql – get a plan and output an outline hint to match it

test2.sql – test query with and without outline hint. also get plan hash value in memory so it can be extracted by sqlt.

optimizerstatistics – get index columns and column statistics

fmsstat2.sql – show execution history of 2407690495325880429 from AWR – here is before the profile:

FORCE_MATCHING_SIGNATURE SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME
------------------------ ------------- --------------- -------------------------
     2407690495325880429 03jh5xnnsts7q      3305693634 06-AUG-21 AM
     2407690495325880429 17wshhdq89azx      3305693634 06-AUG-21 AM
     2407690495325880429 1k4a28bdg7r8h      3305693634 06-AUG-21 AM
     2407690495325880429 3zhumhfujfwj5      3305693634 06-AUG-21 AM
     2407690495325880429 59xs2q8215bby      3305693634 06-AUG-21 AM
     2407690495325880429 666s10cy2k3fh      3305693634 06-AUG-21 AM
     2407690495325880429 6bdybc28zn3fg      3305693634 06-AUG-21 AM
     2407690495325880429 7050x70hjg5mj      3305693634 06-AUG-21 AM
     2407690495325880429 7wvsquchacp2p      3305693634 06-AUG-21 AM
     2407690495325880429 8yjhydta27wsn      3305693634 06-AUG-21 AM
     2407690495325880429 ahj6g61c8hzcq      3305693634 06-AUG-21 AM
     2407690495325880429 ba76xa0dpua3n      3305693634 06-AUG-21 AM
     2407690495325880429 bfzkxy486cfu5      3305693634 06-AUG-21 AM
     2407690495325880429 cg65kxmcytw00      3305693634 06-AUG-21 AM
     2407690495325880429 d63wz9wyhp10y      3305693634 06-AUG-21 AM
     2407690495325880429 dk7annutx6n4z      3305693634 06-AUG-21 AM
     2407690495325880429 dpw0m2t5dmcs8      3305693634 06-AUG-21 AM
     2407690495325880429 fx4q3cpn84zbf      3305693634 06-AUG-21 AM
     2407690495325880429 gbc4ksykvucqk      3305693634 06-AUG-21 AM
     2407690495325880429 gn81xu18vsctn      3305693634 06-AUG-21 AM
     2407690495325880429 gxvwq870b82tg      3305693634 06-AUG-21 AM

vsqlarea.sql – show current execution details such as plan for 2407690495325880429 after the profile

LAST_ACTIVE_TIME    FORCE_MATCHING_SIGNATURE SQL_ID        PLAN_HASH_VALUE
------------------- ------------------------ ------------- ---------------
2021-09-14 17:54:36      2407690495325880429 5vczbrua1hnyk      1538925713
2021-09-14 17:54:36      2407690495325880429 gb7awt1g0spj3      1538925713
2021-09-14 17:54:34      2407690495325880429 54tz4ks94ard2      1538925713
2021-09-14 17:49:13      2407690495325880429 g9r370krad681      1538925713
2021-09-14 17:49:12      2407690495325880429 g4mqhstqmb616      1538925713
2021-09-14 17:43:19      2407690495325880429 apfc4uuksxzt2      1538925713
2021-09-14 17:43:18      2407690495325880429 g1k89u3xqb0nt      1538925713
2021-09-14 17:43:17      2407690495325880429 1rshyrb28xppz      1538925713
2021-09-14 17:36:52      2407690495325880429 934a88k6ct5ct      1538925713

The job ran last night in about 36 minutes instead of almost 6 hours, so it was a big improvement.

I was not able to improve the performance of this long running job using AWR or ASH reports but once I fixed an issue with my ashtopelapsed.sql script and used it along with my other ASH and non-ASH scripts I was able to resolve the performance issue. The job had many update and select statements that were too short to show up on the AWR and ASH reports but when summarized by their force matching signatures they showed up at the top of my ashtopelapsed.sql and ashfmscount.sql scripts’ outputs. Going forward I hope to use the collection of scripts described in this post when faced with tuning situations where an AWR report does not point to a top SQL statement. I may start using these on a regular basis even in cases where the AWR report does provide helpful information just to have another source of information.


Posted in Uncategorized | 2 Comments

Outline Hint from dbms_xplan does not contain PARALLEL hint

I was tuning a parallel query and was a little surprised to find that an outline hint from EXPLAIN PLAN and dbms_xplan.display did not contain a PARALLEL hint. So, a query I was testing did not run in parallel with the outline hint. I had to add the PARALLEL hint. It is surprising because I think of an outline hint as a full set of hints.

I built a testcase to demonstrate this. Here is a zip of its script and output on a 19c database:

I created a small test table and got the outline for the plan of a select statement with a parallel hint:

drop table test;

create table test noparallel as select * from dba_tables;

explain plan into plan_table for 
select /*+ parallel(test,8) */ sum(blocks) from test;

set markup html preformat on

select * from table(dbms_xplan.display('PLAN_TABLE',NULL,'ADVANCED'));

The outline hint looks like this:

      FULL(@"SEL$1" "TEST"@"SEL$1")

Notice that it just has a FULL hint and nothing about running the query in parallel.

Next, I ran the query with the outline hint but without any parallel hint like this:

select   /*+
      FULL(@"SEL$1" "TEST"@"SEL$1")
  */ sum(blocks) from test;

It ran with a serial (non-parallel) plan:

| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |       |       |    27 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |     3 |            |          |
|   2 |   TABLE ACCESS FULL| TEST |  1785 |  5355 |    27   (0)| 00:00:01 |

Then I ran the query with both a parallel hint and the same outline hint:

select   /*+ parallel(test,8)
      FULL(@"SEL$1" "TEST"@"SEL$1")
  */ sum(blocks) from test;

It ran the desired parallel plan:

| Id  | Operation              | Name     |...|IN-OUT| PQ Distrib |
|   0 | SELECT STATEMENT       |          |...|      |            |
|   1 |  SORT AGGREGATE        |          |...|      |            |
|   2 |   PX COORDINATOR       |          |...|      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000 |...| P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE     |          |...| PCWP |            |
|   5 |      PX BLOCK ITERATOR |          |...| PCWC |            |
|*  6 |       TABLE ACCESS FULL| TEST     |...| PCWP |            |

In other tuning situations with plans that did not execute in parallel the outline hints from dbms_xplan.display_cursor did a great job of capturing all the details of the plan. But when extracting an outline hint from a query that has a parallel hint in it, I needed both the outline hint and the parallel hint to get the same plan.
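A variation that should behave the same way, assuming the statement-level form of the PARALLEL hint (this was not part of my test case zip), avoids naming the table at all:

```sql
-- Statement-level PARALLEL hint plus the outline hint: the outline still
-- fixes the access path (full scan of TEST) while parallel(8) requests
-- the degree for the whole statement.
select /*+ parallel(8)
      FULL(@"SEL$1" "TEST"@"SEL$1")
  */ sum(blocks) from test;
```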


Posted in Uncategorized | Leave a comment

Tuning Example With Outline Hints and SQL Profile

I tuned a production query this week using outline hints and a SQL profile. This is like other examples I have posted but may have enough differences to be worth documenting here.

A query in a batch job was taking over an hour to run. I got it to run in under 30 minutes.

Old plan

898524 rows selected.

Elapsed: 01:13:34.35

New plan

899018 rows selected.

Elapsed: 00:28:00.28

The new plan ran about 45 minutes faster.

The main point here is that I was able to improve the plan in a more selective way using an outline hint than I could have with regular hints.

I ran the problem query using my test2.sql script to show the expected and actual number of rows in each step in the plan and to show the amount of time spent on each step. The output looked like this:

This is the typical example of a bad plan using an index in a case where a full scan is more efficient. Most of the time is spent in step 4 doing 898 thousand table lookups. So, to fix this query I tried a FULL hint to get rid of the SKIP SCAN, but it did not help. I tried various combinations of FULL hints on the four tables in the plan and they all resulted in inefficient plans with different types of joins than the original plan.

Full hints:



| Id  | Operation              | Name         |
|   0 | SELECT STATEMENT       |              |
|*  1 |  HASH JOIN RIGHT OUTER |              |
|*  3 |   HASH JOIN RIGHT OUTER|              |
|*  5 |    HASH JOIN           |              |
|*  6 |     TABLE ACCESS FULL  | S_ORG_EXT    |

Then I got the idea of using an outline hint. The plan.sql script that I use to run EXPLAIN PLAN on a query outputs an outline hint. In this case the outline hint looks like this:

The main idea that I wanted to write this post about is to take this outline hint and leave everything else the same but to replace the skip scan with a full table scan. Adding a FULL hint was changing a lot of things about the plan. But if I start with the outline hint, I can force the plan to remain the same except for the one thing I change. I ran plan.sql with the query with a full hint to get the outline with a full scan and then pulled out just the full scan line to put into the original plan’s outline. Here are the old and new lines that I used to edit to outline hints for the original query’s plan, an index skip scan and a full table scan.




FULL(@"SEL$1" "DD"@"SEL$1")

Here is the edited version of the outline hint:

Then I ran my test2.sql script with the edited outline hint, and the query ran in 28 minutes. All we had to do was put the outline hint into the existing SQL script.
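As a shape-only illustration of the outline-hint edit described above (the real outline contains many more hints, omitted here), the wrapper stays intact and only the access-path hint for "DD" changes from an INDEX_SS hint to this FULL hint:

```sql
/*+
  BEGIN_OUTLINE_DATA
  FULL(@"SEL$1" "DD"@"SEL$1")
  END_OUTLINE_DATA
*/
```

Every other line between BEGIN_OUTLINE_DATA and END_OUTLINE_DATA stays exactly as plan.sql generated it, which is what keeps the rest of the plan from changing.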

But then I realized that I could finish off the performance improvement using a SQL Profile and not have to edit the script. The output for the run of my test2.sql script with the hint had this sql_id and plan_hash_value: 3m80wvb2v6vdq 42036438.

My slow production query had this sql id and plan hash value: dzfr3vz7z5p66 2238264355.

So, I passed these two sets of parameters to coe_xfr_sql_profile.sql in the sqlt/utl subdirectory creating these two scripts:




So, I hacked the two scripts together to make coe_xfr_sql_profile_dzfr3vz7z5p66_42036438.sql which has the SQL text of the original query and the plan that my test script with the outline hint used. The scripts output by coe_xfr_sql_profile.sql save the text of the SQL statement and its plan when creating a new SQL Profile. You can get a little creative and connect production SQL text with a plan you have generated through hints.

 11  -- SQL Text pieces below do not have to be of same length.
 12  -- So if you edit SQL Text (i.e. removing temporary Hints),
 13  -- there is no need to edit or re-align unmodified pieces.
 14  wa(q'[SELECT DD.ROW_ID|| '||' ||
 15  DD.PAR_ROW_ID|| '||' ||

text of original query starts line 14

 56  wa(q'[ERY_WINDOW  (+) = '2'
 58  DBMS_LOB.CLOSE(sql_txt);

text of query ends line 57. outline hint with full scan starts line 60

 80  q'[USE_MERGE(@"SEL$1" "ORG"@"SEL$1")]',
 81  q'[END_OUTLINE_DATA]');
 82  :signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
 83  :signaturef := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt, TRUE);

ends line 81

So, I ran the edited coe_xfr_sql_profile_dzfr3vz7z5p66_42036438.sql to put in the SQL Profile which related the production query to the plan generated by the edited outline hint and the production job ran in 28 minutes last night without any change to its SQL code. The outline hint allowed me to leave the original plan intact except for the one step that needed a full scan instead of an index skip scan. Then the SQL Profile trick allowed me to apply the better plan without any change to the application.


P.S. It looks like I didn’t have to hack together the sql profile script the way I did. See this post:

I could have just extracted the SQL Profile script like this:

coe_xfr_sql_profile.sql dzfr3vz7z5p66 42036438

That would have generated coe_xfr_sql_profile_dzfr3vz7z5p66_42036438.sql automatically, combining the production SQL text with the plan with the outline hint and full scan instead of me editing it together manually.

Posted in Uncategorized | Leave a comment


I just put two new scripts on my OracleDatabaseTuningSQL GitHub repository. These came out of a PeopleSoft Financials performance problem I was working on that involved a lot of similar SQL statements that used constants instead of bind variables, so they did not show up at the top of the AWR report. I had to look at ASH data to find them and had to group by their force matching signature to group the similar statements together. These are the two scripts:

ashfmscount.sql – Looks at a single application engine session and groups all the time spent by force matching signature to find the queries that consumed the most time. I used my simple ashdump.sql script to dump out a few rows when I knew the app engine was running and I found the SESSION_ID and SESSION_SERIAL# values there.

ashtopelapsed.sql – This is meant to look like the SQL by elapsed time report on an AWR report except that it groups SQL by force matching signature but gives an example sql id with its text to give you an idea of what the signature represents. Might be good to run this the next time my AWR report does not have any long running SQL statement on the top SQL report.

I really used the first one to resolve the issue along with various other scripts to get to that point. I created the second one just now as a possible future script to use in addition to an AWR report. I didn’t check the ASH report to see if this is a duplicate of it, but these two new scripts work well.


Posted in Uncategorized | 1 Comment

Plan Change Due to Index Partition Names Mismatch

I want to document a bug I ran across in Oracle on HP-UX Itanium and how it caused unexpected plan changes that would bring the database to its knees with many long running queries running the bad plan. I found that the problem table had a local index whose partition names did not match the partition names of the table and that recreating the index with partition names that matched the table resolved the issue.

We have a busy and important Oracle database that had queries that ran fine for weeks and months and then suddenly changed to a bad plan. They all had a similar pattern. The good plan did a simple range scan on a certain partition’s index. The bad plan did a fast full index scan.





The query accessed the table using the first two columns of PROBLEM_INDEX, and the first column of the index is the partitioning column for the range partitioned table. So, calling the first two columns of the index PARTITIONING_C and OTHER_C, the query was like:

from problem_table
where
PARTITIONING_C = :bindvar1 and
OTHER_C = :bindvar2

I have known for a long time that some choice of bind variable values was causing the plan to flip to the fast full scan on the index partition. But I did not know that it had to do with the names of the index’s partitions. The table and index creation scripts must have looked something like this:

create table problem_table

CREATE UNIQUE INDEX problem_index ON problem_table

The index has the same number of partitions as the table and a lot of them have the same names, but they do not line up. I found this bug in the version of Oracle that we are on:

Bug 14013094 – DBMS_STATS places statistics in the wrong index partition

I am not sure that I am hitting that bug, but I am hitting some similar bug because 14013094 relates to index partition names that do not match table partition names. For one partition of the problem index the statistics were set to 0 but its corresponding table partition had millions of rows. It would be as if partition P3000 on problem_table had 20,000,000 rows in stats and corresponding partition P4000 on problem_index had 0 rows. If I gathered index partition statistics on P4000 it correctly set stats to 20,000,000. If I gathered table partition statistics on P3000 it cleared the index partition stats on P4000, setting them to 0! Yikes! How weird is that? Seems obvious to me it is a bug, but maybe not exactly 14013094. I tried dropping and recreating the index leaving the partition names as they are, but it did not resolve the issue. Then I just created the index letting it default to matching partition names like this:

CREATE UNIQUE INDEX problem_index ON problem_table

I’m not sure how the partition names got mismatched but it is a simple fix. It took me a while staring at the partition statistics to realize what was going on and then it took a while to prove out the fix. We do not yet have this in production, but I believe we have nailed down the cause of the plan changes. In the past I have been using SQL Profiles to lock in the plans of any new query that uses the problem table, but I plan to run without them after putting in the index. I kick myself for not researching this earlier, but it was not obvious so maybe it was not crazy to use SQL Profiles for a while. But it became a burden to keep using them and it left the system vulnerable to a plan change on any query on the problem table that did not already have a SQL Profile.

In a lot of ways this is a simple query tuning problem with bad zero stats on a partition with millions of rows. Any time the statistics are wildly inaccurate bad plans are likely. But tracking down the reason for the zero statistics was not simple. I should mention that some of the table partitions are empty and have 0 rows. In fact, to take our example, imagine that table partition P4000 has 0 rows just like index partition P4000. No one would suspect that the statistics are wrong unless you realize that P4000 on the index corresponds not to P4000 but to P3000 on the table and P3000 has 20,000,000 rows!
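A data dictionary query along these lines could catch the mismatch directly. This is a sketch, not a script from my repository; the index name is a placeholder and the join keys are the standard DBA_INDEXES / DBA_TAB_PARTITIONS / DBA_IND_PARTITIONS columns:

```sql
-- Sketch: list local index partitions whose name differs from the table
-- partition at the same position, showing the row counts from statistics
-- so a 0-versus-millions mismatch stands out.
select tp.partition_position,
       tp.partition_name table_part, tp.num_rows table_rows,
       ip.partition_name index_part,  ip.num_rows index_rows
from dba_indexes i
join dba_tab_partitions tp
  on tp.table_owner = i.table_owner
 and tp.table_name  = i.table_name
join dba_ind_partitions ip
  on ip.index_owner = i.owner
 and ip.index_name  = i.index_name
 and ip.partition_position = tp.partition_position
where i.index_name = 'PROBLEM_INDEX'   -- placeholder name
  and tp.partition_name <> ip.partition_name
order by tp.partition_position;
```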

Bind variables lead to these unexpected plan changes when the plan is different when different values are passed into the variables. It is a fundamental tradeoff of using bind variables to reduce the parsing that constants would cause. If one set of bind variable values causes a bad plan that plan gets locked into memory and it is used for all the other values passed to the query until the plan is flushed out of memory. There are some cases where the optimizer will look at the values passed into the bind variables and choose between multiple plans but that did not occur in this problem.

So, what can someone get from this post if they are not on the same Oracle version and do not have locally partitioned indexes with similar partition names to the table but offset and mismatched? I think for me the point is after using a SQL Profile to resolve a problem query plan to dig deeper into the underlying reason for the bad plan. I expect bad plans because I think that the optimizer is limited in its ability to correctly run a query even if it has the best possible statistics. Also, since I support many databases, I do not have time to dig deeply into the underlying reason for every bad plan. But I need to dig deeper when similar queries keep getting the same kind of bad plans. In many cases bad statistics lead to bad plans and it is a lot better to fix the statistics once than to keep using SQL Profiles and hints repeatedly to fix similar queries with the same sort of switches to bad plans. In this case I had the bizarre fix of recreating the index with partition names that match the table and that ensured that the statistics on the index partitions were accurate and that I no longer need SQL Profiles to lock in the good plan.


Posted in Uncategorized | Leave a comment

MongoDB test server on Oracle Linux

I use Oracle Enterprise Linux on VirtualBox running on my Windows 10 laptop for test servers (virtual machines) of various types of software. I just setup a MongoDB VM yesterday and thought I would document some of the things I did which are not in the standard documentation.

I followed this URL for the install:

I created the yum repository file mongodb-org-4.4.repo as documented:

[root@mongodb yum.repos.d]# cat mongodb-org-4.4.repo
name=MongoDB Repository

I installed the MongoDB yum package:

[root@mongodb yum.repos.d]# sudo yum install -y mongodb-org
Loaded plugins: langpacks, ulninfo
mongodb-org-4.4                                                                                                           | 2.5 kB  00:00:00
ol7_UEKR4                                                                                                                 | 2.5 kB  00:00:00
ol7_latest                                                                                                                | 2.7 kB  00:00:00
mongodb-org-4.4/7Server/primary_db                                                                                        |  47 kB  00:00:02
Resolving Dependencies
--> Running transaction check
---> Package mongodb-org.x86_64 0:4.4.6-1.el7 will be installed
--> Processing Dependency: mongodb-org-shell = 4.4.6 for package: mongodb-org-4.4.6-1.el7.x86_64

I didn’t need sudo since I was root, but it worked. I don’t know if this was really needed but I set ulimit with these commands:

ulimit -f unlimited
ulimit -t unlimited
ulimit -v unlimited
ulimit -l unlimited
ulimit -n 64000
ulimit -m unlimited
ulimit -u 64000

I am not sure if these commands stick when you run them as root. They seem to but for now I’ve been running them manually after I reboot. These were documented here:

Based on this document I also created the file /etc/security/limits.d/99-mongodb-nproc.conf:

[root@mongodb ~]# cat /etc/security/limits.d/99-mongodb-nproc.conf
*          soft    nproc     64000
*          hard    nproc     64000
root       soft    nproc     unlimited
[root@mongodb ~]#

I don’t know for sure if this was needed, but it did not cause any problems.

I edited /etc/selinux/config to prevent SELinux from interfering:

[root@mongodb selinux]# diff config.07012021 config
< SELINUX=enforcing
> SELINUX=disabled

I also disabled the firewall just in case it would cause problems:

[root@mongodb ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/
[root@mongodb ~]# systemctl stop firewalld
[root@mongodb ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

Lastly, in order to be able to access MongoDB from outside the VM I had to edit /etc/mongod.conf to allow access from all IP addresses:

[root@mongodb etc]# diff  mongod.conf mongod.conf.07012021
<   bindIp:  # Enter,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
>   bindIp:  # Enter,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

Of course, in a production system you would want to make this more secure, but this is just a quick and dirty test VM.

Finally, this command brings up the database:

systemctl start mongod

I ran this in a new root Putty window to get the ulimit settings. Not sure if that was necessary, but it did work.

I have a NAT network and port forwarding set up so that while MongoDB listens by default on port 27017 on localhost in the VM, VirtualBox forwards port 61029 on my laptop to it.

Since the programming language that I am most familiar with is Python (I have not learned any JavaScript) I setup a test connection to my new MongoDB database using the pymongo module.

I installed it like this:

pip install pymongo[srv]

Simple test program looks like this:

from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

client = MongoClient('localhost', 61029)
try:
    # The ismaster command is cheap and does not require auth.
    client.admin.command('ismaster')
except ConnectionFailure:
    print("Server not available")

I got that from stackoverflow. I was also following the pymongo tutorial:

One test program:

from pymongo import MongoClient

client = MongoClient('localhost', 61029)

db = client['test-database']

collection = db['test-collection']

import datetime
post = {"author": "Mike",
          "text": "My first blog post!",
          "tags": ["mongodb", "python", "pymongo"],
          "date": datetime.datetime.utcnow()}
posts = db.posts
post_id = posts.insert_one(post).inserted_id
print(type(post_id))
print(db.list_collection_names())


import pprint

pprint.pprint(posts.find_one({"author": "Mike"}))

pprint.pprint(posts.find_one({"author": "Eliot"}))

pprint.pprint(posts.find_one({"_id": post_id}))

post_id_as_str = str(post_id)
pprint.pprint(posts.find_one({"_id": post_id_as_str}))

from bson.objectid import ObjectId

pprint.pprint(posts.find_one({"_id": ObjectId(post_id_as_str)}))

Its output:

<class 'bson.objectid.ObjectId'>
['test-collection', 'posts']
{'_id': ObjectId('60de4f187d04f268e5b54786'),
 'author': 'Mike',
 'date': datetime.datetime(2021, 7, 1, 23, 26, 16, 538000),
 'tags': ['mongodb', 'python', 'pymongo'],
 'text': 'My first blog post!'}
{'_id': ObjectId('60de4f187d04f268e5b54786'),
 'author': 'Mike',
 'date': datetime.datetime(2021, 7, 1, 23, 26, 16, 538000),
 'tags': ['mongodb', 'python', 'pymongo'],
 'text': 'My first blog post!'}
{'_id': ObjectId('60df400f53f43d1c2703265c'),
 'author': 'Mike',
 'date': datetime.datetime(2021, 7, 2, 16, 34, 23, 410000),
 'tags': ['mongodb', 'python', 'pymongo'],
 'text': 'My first blog post!'}
{'_id': ObjectId('60df400f53f43d1c2703265c'),
 'author': 'Mike',
 'date': datetime.datetime(2021, 7, 2, 16, 34, 23, 410000),
 'tags': ['mongodb', 'python', 'pymongo'],
 'text': 'My first blog post!'}

I ran this on Python 3.9.6 so the strings like ‘Mike’ are not u’Mike’. It looks like the output on the tutorial is from some version of Python 2, so you get Unicode strings like u’Mike’ but on Python 3 strings are Unicode by default so you get ‘Mike’.

Anyway, I didn’t get any further than installing MongoDB and starting to run through the tutorial, but it is up and running. This might be helpful to someone else (or to me) who is setting up a test VM and working through the same steps.


Posted in Uncategorized | Leave a comment

Real World Index Example

This is a follow-up to my previous post, with a real-world example of how much difference indexes can make on tables that do not have them.

I looked at an AWR report of a peak hour on a busy database and noticed sql_id 028pmgvcryk5n:

Peak Hour No Indexes

The top SQL is a PL/SQL call, so it does not have a plan, but 028pmgvcryk5n stood out because it was taking 11.24% of the total elapsed time and running over 2 seconds (2.23) per execution. Most queries on this system run in hundredths of a second, like .02 seconds.

I looked at this query’s plan and it included two tables with full scans on both:

I put indexes on each table and the new plan looks like this:

With indexes it ran the query in about 1 millisecond:

Without indexes it ran from 600 to 2000 milliseconds:

I guess I have a simple point. Indexes can make a big difference. A query that runs in 600 milliseconds may be good enough in some cases, but if it can run in 1 millisecond, it is worth putting the right indexes on the tables.
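I cannot reproduce the Oracle plans here, but the same before-and-after effect shows up in any optimizer. Here is a sketch using Python’s standard library sqlite3 module (the table and column names are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table orders (cust_id integer, status text)")
con.executemany("insert into orders values (?, ?)",
                [(i, "CLOSED") for i in range(10000)])

q = "select * from orders where cust_id = 4242"

# Without an index the optimizer can only full scan the table.
before = con.execute("explain query plan " + q).fetchone()[3]
print(before)   # a SCAN of the table (exact wording varies by SQLite version)

# After adding an index, the plan switches to an index search.
con.execute("create index orders_i on orders (cust_id)")
after = con.execute("explain query plan " + q).fetchone()[3]
print(after)    # a SEARCH using index orders_i
```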


p.s. I used my sqlstat.sql and vsqlarea.sql scripts to get the execution history with and without the indexes. I used getplans.sql to get the plan without indexes from the AWR and extractplansqlid.sql to get the plan with the indexes from the V$ views. Once the indexes were in place the query ran too fast to show up in the AWR, which is why I used the V$ queries to get information about it after adding the indexes.


Simple Index Example

There is a lot to be said about when to add indexes on Oracle tables, but I wanted to show a simple example. In this case we have a table with no indexes and a query with two equality conditions in the where clause which identify a single row out of many. Here is a zip of the SQL*Plus script and log for this post: zip

The table TEST is a clone of DBA_TABLES. I load it up with a bunch of copies of DBA_TABLES and with a single row that is unique. Then I run the query without any indexes on the table:

  2  from test
  3  where
  4  owner='TESTOWNER' and
  5  table_name = 'TESTTABLE';


Elapsed: 00:00:00.15

I add an index on owner and table_name and run it again:

  2  from test
  3  where
  4  owner='TESTOWNER' and
  5  table_name = 'TESTTABLE';


Elapsed: 00:00:00.00

This may not seem like a big deal, going from .15 seconds to .00 seconds (less than .01 seconds). But if you start running a query like this hundreds of thousands of times per hour, the .15 seconds of CPU per execution can slow your system down.
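To put rough numbers on that (the 200,000 executions per hour rate is a made-up figure for illustration):

```python
# Back-of-the-envelope: CPU burned by a .15 second query at a high execution rate.
execs_per_hour = 200_000          # hypothetical rate
cpu_sec_per_exec = 0.15
cpu_sec_per_hour = execs_per_hour * cpu_sec_per_exec
print(cpu_sec_per_hour)           # 30000.0 CPU-seconds per hour
print(cpu_sec_per_hour / 3600)    # ~8.3 CPU cores kept busy by this one query
```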

See the zip for the details. The create index command looked like this:

SQL> create index testi on test (owner, table_name);

Index created.


Posted in Uncategorized | 1 Comment

Fixed Broken Links

I had a couple of new comments on older blog pages and I noticed that some links pointed to things which no longer exist. I fixed a few things that I knew were wrong. Finally, this week I decided to review every post back to the beginning 9 years ago and click on every link.

For dead links I removed the link and added a comment like (DOES NOT EXIST). In some cases, the link still existed but was different. I changed several links to Oracle’s documentation for example. In many cases I put (updated) or something like that to show that there was a new link. I synced up a lot of the old links to the current version of my scripts on GitHub. Hopefully these changes won’t make the older posts unreadable, but at least the links will point to useful and up to date versions of things.

I did not carefully read every post because I was in a hurry, but I did look at each one, and it gave me the chance to see how things have changed over the past 9 years. It was a quick review of what I was thinking. Some of the posts seem kind of dumb now. (What was I thinking?) But others are genuinely useful. It was also interesting to see which topics I talked about and how that changed over time. It makes me wonder where things will go in the future. I cannot expect to fully predict it, but it is good to think about what I should be learning and what skills I should be developing going forward.

Anyway, hopefully the updated links will make the posts a little more helpful to myself and others.



SymPy Tutorial Repository

I have been playing with the Python SymPy package and created a repository with my test scripts and notes:

Might be helpful to someone. I just got started.

I had used Maxima before. SymPy and Maxima are both what Wikipedia calls “Computer Algebra Systems.” They have a nice list here:

I got a lot of use out of Maxima, but I think it makes sense to switch to SymPy because it is written in Python and works well with other mainstream Python packages that I use, like Matplotlib. They both fall under the SciPy umbrella of related tools, so if I need some computer algebra I should probably stick with SymPy.
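For anyone who has not seen it, here is the flavor of SymPy. This is a minimal sketch, assuming the sympy package is installed:

```python
import sympy

x = sympy.symbols('x')

# exact symbolic solutions, not floating point approximations
print(sympy.solve(x**2 - 2, x))     # [-sqrt(2), sqrt(2)]

# symbolic algebra: expand and factor polynomials
print(sympy.expand((x + 1)**2))     # x**2 + 2*x + 1
print(sympy.factor(x**2 - 1))       # (x - 1)*(x + 1)
```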

Maxima and SymPy are both free.

