I am trying to learn about Docker by installing it on an Oracle Linux 7 VM on top of VirtualBox on my work laptop. My work laptop uses Zscaler. I had a bunch of certificate issues and ended up learning a lot about Docker by working around them. I tried to do the Sample Application – really the simplest first step in the Docker documentation – and had all kinds of trouble getting it to work. Ultimately, I ended up with a Dockerfile that looked like this:
[root@docker ~]# cat Dockerfile
FROM oraclelinux:7
COPY z.pem /etc/pki/ca-trust/source/anchors/z.pem
RUN echo sslverify=false >> /etc/yum.conf
RUN yum install -y oracle-nodejs-release-el7 oracle-release-el7
RUN yum install -y nodejs
RUN npm install -g npm
RUN npm install -g yarn
COPY . .
RUN yarn config set "strict-ssl" false -g
RUN yarn install --production
CMD ["node", "src/index.js"]
By contrast, the Dockerfile from the tutorial that was supposed to work starts from a node image and looks like this:
FROM node:12-alpine
RUN apk add --no-cache python2 g++ make
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
I ended up using the oraclelinux:7 image because it had more tools preinstalled, such as update-ca-trust. Because I could not get anything to work through Zscaler, I had to start with an image that did not require me to pull more packages down with yum. Then, after playing with it, I still ended up disabling SSL verification on yum and yarn. I also had to install Node.js myself since I was starting with a plain Linux image and not a node image.
I had these instructions for getting Zscaler to work on my Oracle Linux 7 VirtualBox VMs on my company computer:
Had to extract the Zscaler root CA from the Chrome browser as z.cer.
Moved it to Linux and ran:
openssl x509 -inform der -in z.cer -outform pem -out z.pem
Copied z.pem to /etc/pki/ca-trust/source/anchors/
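I believe the copied certificate only takes effect after the trust store is rebuilt, so a step like this is probably also needed:
update-ca-trust extract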
I do not know if this is really doing anything. It affects curl so that I can use curl without the -k option to disable SSL verification. Maybe things that use curl under the covers are affected by adding z.pem to the trusted certificates.
Anyway, I just wanted to document this for myself. Maybe someone out there will benefit also.
Some applications or API requests, such as synchronous API calls to databases, have long periods of inactivity. GWLB has a fixed idle timeout of 350 seconds for TCP flows and 120 seconds for non-TCP flows. Once the idle timeout is reached for a flow, it is removed from GWLB’s connection state table. As a result, the subsequent packets for that flow are treated as a new flow and may be sent to a different healthy firewall instance. This can result in the flow timing out on the client side. Some firewalls have a default timeout of 3600 seconds (1 hour). In this case, GWLB’s idle timeout is lower than the timeout value on the firewall, which causes GWLB to remove the flow without the firewall or client being aware it was dropped.
Best practices for deploying Gateway Load Balancer
This means that my JDBC call using the thin driver will work fine if I sleep for 349 seconds but will hang forever if I try to sleep for 350 seconds. The solution is to update a Linux operating system parameter and to update the JDBC connect string.
sysctl -w net.ipv4.tcp_keepalive_time=60
To make the change persistent across reboots, add this line to /etc/sysctl.conf:
net.ipv4.tcp_keepalive_time = 60
Evidently our default tcp_keepalive_time value was 7200 seconds, which is longer than the 350 second timeout, so we had to lower it to 60 seconds so that the Gateway Load Balancer would know that our JDBC call was actually doing something.
You have to add (ENABLE=broken) to the jdbc connect string like this:
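Something like this, where the host and service name are placeholders for the real connection details:
jdbc:oracle:thin:@(DESCRIPTION=(ENABLE=BROKEN)(ADDRESS=(PROTOCOL=TCP)(HOST=myhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=myservice)))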
Once I did this my Java test program worked fine. It ran for about 350 seconds and finished cleanly.
If you are working in AWS and connecting to an on-premises database using JDBC, and you have a SQL statement that should run for 350 seconds or more but instead hangs forever, you might check whether you are being affected by this timeout.
p.s. I forgot to mention that the Oracle database session goes away after 350 seconds. It is just the client side JDBC call that hangs apparently forever.
p.p.s. We have a related issue with Putty sessions connecting to Amazon EC2 Linux VMs timing out after 350 seconds. A coworker offered this article as a solution:
Another coworker of mine was saying that certain types of firewalls work this way with timeouts. The problem is that the GWLB times out our on-premises side but not our AWS side. So, in the case of using Putty to ssh into an EC2 that does not have keepalives configured my Putty session, which also does not have keepalives configured, times out after 350 seconds of idle time. When I hit enter, I get “Network error: Software caused connection abort” but if I check my BASH shell process id, I see that my shell process was never terminated. So, old processes hang around forever on my EC2 if the ssh connection times out due to the GWLB 350 second timeout.
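If you control the EC2 side, I think the simplest fix is to have sshd send keepalives more often than every 350 seconds by adding something like this to /etc/ssh/sshd_config:
ClientAliveInterval 60
You can do the same thing from the client side by setting "Seconds between keepalives" under Connection in Putty.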
Maybe it is normal for connections on one side of a firewall to time out and the other side to hang forever? I am not sure.
This is all old stuff, but I want to record a simple thing I found. I was following Oracle’s support document for setting up audit table cleanup using the DBMS_AUDIT_MGMT package. I used this document:
SCRIPT: Basic example to manage AUD$ table with dbms_audit_mgmt (Doc ID 1362997.1)
This is a very helpful document, but the example script runs DBMS_AUDIT_MGMT.INIT_CLEANUP before it runs DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION and it moves the audit tables SYS.AUD$ first to the SYSAUX tablespace and then to a newly created AUDIT_DATA tablespace. My simple thought is to run SET_AUDIT_TRAIL_LOCATION first to move SYS.AUD$ to AUDIT_DATA and then run INIT_CLEANUP which leaves SYS.AUD$ in AUDIT_DATA. Nothing monumental, but it seems more efficient to move the audit table once.
I did a couple of quick tests on an 18c database to demonstrate that SYS.AUD$ only moves once with SET_AUDIT_TRAIL_LOCATION first.
Test1: Follow the order in the Oracle document:
select
tablespace_name
from dba_tables
where
owner='SYS' and
table_name='AUD$';
Table is in AUDIT_DATA (moved twice SYSTEM->SYSAUX->AUDIT_DATA):
select
tablespace_name
from dba_tables
where
owner='SYS' and
table_name='AUD$';
Test2: Reverse the order in the Oracle document:
First, I restored my database to its original condition:
select
tablespace_name
from dba_tables
where
owner='SYS' and
table_name='AUD$';
After creating the tablespace again, I set the audit trail location and the table is now in AUDIT_DATA:
select
tablespace_name
from dba_tables
where
owner='SYS' and
table_name='AUD$';
Next, I do the init and the table does not move:
select
tablespace_name
from dba_tables
where
owner='SYS' and
table_name='AUD$';
So, I am not sure why Oracle’s document has you do INIT_CLEANUP before SET_AUDIT_TRAIL_LOCATION but it seems more efficient to do them in the reverse order and move SYS.AUD$ once, from SYSTEM to AUDIT_DATA.
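In script form, the order I am suggesting looks like this (a sketch; the 24 hour cleanup interval is just an example value):
begin
  -- Move SYS.AUD$ straight from SYSTEM to AUDIT_DATA.
  dbms_audit_mgmt.set_audit_trail_location(
    audit_trail_type           => dbms_audit_mgmt.audit_trail_aud_std,
    audit_trail_location_value => 'AUDIT_DATA');
  -- Initialize cleanup afterward; SYS.AUD$ stays in AUDIT_DATA.
  dbms_audit_mgmt.init_cleanup(
    audit_trail_type         => dbms_audit_mgmt.audit_trail_aud_std,
    default_cleanup_interval => 24);
end;
/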
Finally got around to installing Oracle 21c on my laptop. I installed it on Oracle’s Linux version 7 running in VirtualBox. There are other posts out there, so I won’t get too detailed. I noticed this post:
unzip -q /home/oracle/LINUX.X64_213000_db_home.zip
I was annoyed by the -q option of unzip. If I did it again, I would leave it off so that I could see the list of files as they were unzipped.
From the console I ran this:
I got a warning about the clock source being kvm-clock and not tsc. I tried Oracle's instructions to fix this warning, but they did not work for me. It was just a warning, apparently coming from the cluster verification utility. Since I am not using RAC, I didn't see how this could matter, so I ignored it. The install was very simple. I was forced to use a CDB this time, as I already knew I would be.
The only interesting part for me was the read only Oracle home. Evidently the files that are updatable are stored under ORACLE_BASE_HOME instead of ORACLE_HOME.
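For example, I believe the orabasehome utility prints this writable location, and files such as sqlnet.ora and tnsnames.ora now go under the path it returns instead of under $ORACLE_HOME/network/admin:
$ORACLE_HOME/bin/orabasehome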
We had an application that created 500,000 tables and 500,000 sequences and the vendor sent us a cleanup script that we thought would drop the tables but not the sequences. It took us a few attempts to get a cleanup script that looked like it would work with the tables, but the sequences seemed totally wrong. The tables had dashes in their names and a bunch of random characters between the dashes and the cleanup script looked for that pattern. But the sequence names all started with ISEQ$$ and they were trying to drop sequences whose names were like the tables’ names. Confusing. Finally, they convinced us to run the script whether it looks like it would work or not. After tweaking it a bit it did run and dropped both the tables and the sequences. What in the world? Then I had a vague memory of something called “identity columns” probably from my 12.1 certification. I do not think I have seen them in a real system, so I checked the tables and sure enough they all had a single identity column and each sequence matched up with each table. So, when we dropped the tables, the sequences went with them. I do not know why I did not clue into the fact that the sequences have dollar signs $$ in their name which means they were probably system generated. Duh!
I thought about not posting anything about this because I was sure there were several good posts out there about this and there are. So, I will link some of them below and not try to recreate them. The funny thing is that the DBAs at the vendor kept talking about flushing the recycle bin to drop the sequence and that also made no sense at the time. Why would flushing the dropped tables out of the recycle bin have anything to do with the sequences? But as you will see in the posts it does.
One thing that probably is not in the posts is that if you turn off the recycle bin with a parameter then dropping the table drops the sequence that is associated with its identity column without having to clear the recycle bin or do a drop table purge. We have this parameter set:
SQL> show parameter recyclebin
NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------------
recyclebin                           string      OFF
Covers a lot of ground but talks about the recycle bin.
Anyway, I wanted to note these posts/links as well as my experience. If you want to drop sequences with names starting with ISEQ$$ you need to drop the associated table with an identity column and be sure to purge the table from the recycle bin if you have it enabled.
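Here is a quick sketch of the behavior (the table name is made up for the example):
create table t1 (id number generated always as identity, name varchar2(20));
-- Oracle creates a system-generated sequence with a name like ISEQ$$_12345
select sequence_name from user_sequences;
drop table t1;
-- with recyclebin on, the ISEQ$$ sequence survives until the table is purged:
purge recyclebin;
-- with recyclebin off, or with "drop table t1 purge", the sequence
-- disappears as soon as the table is dropped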
I have been working hard on an issue that happened after we patched an Oracle database to the 19.13 patch set which just came out in October. The application experienced a ton of latch: shared pool waits, and the patch set had to be backed out. Currently the database is running on the 19.5 patch set and running fine. I have been trying to figure this out on my own and I have been working with Oracle support but so far, I have not come up with a proven resolution. But I thought I would document what I know here before I go off on vacation for two weeks around the Christmas and New Year holidays.
It all started on October 26th when we were still on 19.5 and our application hung up with alert log messages consistent with this bug:
Bug 30417732 – Instance Crash After Hitting ORA-00600 [kqrHashTableRemove: X lock] (Doc ID 30417732.8)
Here were the error messages:
One of these:
ORA-00600: internal error code, arguments: [kqrHashTableRemove: X lock], [0x11AA5F3A0]
I found that the fix for bug 30417732 is in the 19.9 and later patch sets. So, I, perhaps foolishly, decided to apply the latest patch set (19.13) after testing it in our non-production databases. In retrospect I probably should have found a one-off patch for 30417732 but I wanted to catch up to the latest patch set and be on the current and hopefully most stable version of Oracle.
So, after applying 19.13 in our lower environments and testing it there, we applied it to production on November 13th. We had poor performance in the application Monday through Thursday and finally backed out the patch set Thursday night. It has been fine ever since and the original problem, bug 30417732, has fortunately not happened again.
I generated an AWR report during an hour when the problem was occurring, and it showed latch: shared pool as the top foreground event. Like this:
I was able to use my onewait.py and sessioncounts.py to show a correlation between the number of connected sessions and the number of latch waits.
Number of connected sessions:
This database uses shared servers, which use memory in the shared pool and large pool, so it makes sense that a bunch of logins might cause some contention for the shared pool. I tried to recreate the high latch: shared pool waits on a test database by running a Java program that just creates 1000 new sessions:
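I did not include the source in the post, but the idea is roughly this sketch, with a placeholder connect string (the real URL pointed at a shared server service):
import java.sql.Connection;
import java.sql.DriverManager;

public class MaxSessions {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; (SERVER=SHARED) requests a shared server session.
        String url = "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)" +
                     "(HOST=myhost)(PORT=1521))(CONNECT_DATA=" +
                     "(SERVICE_NAME=myservice)(SERVER=SHARED)))";
        Connection[] conns = new Connection[1000];
        for (int i = 0; i < conns.length; i++)
            conns[i] = DriverManager.getConnection(url, "test", "test");
        System.out.println("1000 sessions connected");
        Thread.sleep(600000); // hold the sessions open while watching the waits
        for (Connection c : conns)
            c.close();
    }
}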
This did result in similar or even worse latch: shared pool waits while the program was running:
So, I have been able to recreate the problem in a test database but now I am stuck. This is not a minimal reproducible example. The test database is actively used for development and testing. I have tried to create my own load to apply to a small database but so far, I cannot reproduce the exact results. If I run 10 concurrent copies of MaxSessions.java I can get latch: shared pool waits on a small database, but I get the same behavior on 19.5 as I do on 19.13. So, something is missing. Here is a spreadsheet of my test results:
19.5, 19.12, and 19.13 have the same results but 18.14 is a lot better. I was trying to come up with a test that was a lot better on 19.5 than 19.13 but I have not succeeded.
I have looked at a boatload of other things not listed here but I am running out of steam. Friday is my last day working until January 4th, so I hope to put this out of my mind until then. But I thought it was worth using this blog post to document the journey. Maybe at some point I will be able to post a positive update.
Made some progress working with Oracle support on this yesterday. I did not mention in this post that the latch behind all the latch: shared pool waits was
kghfrunp: alloc: wait
Here is the top latch from the AWR report:
An Oracle analyst noted that this bug also involves the same latch:
Bug 33406872 : LATCH SHARED POOL CONTENTION AFTER UPGRADE TO 19.12
I had noticed bug 33406872 but I did not know that it was on the kghfrunp: alloc: wait latch because that information is not visible to Oracle customers.
The same analyst had me rerun the Java program listed above but with a dedicated server connection. I did, and it did not have the latch waits. So, this is a shared server bug.
Oracle development is working on it so there is no fix yet but here is the situation as it seems to be now: If you are using shared servers on 19.12 or 19.13 and have high latch: shared pool waits on the kghfrunp: alloc: wait latch you may be hitting a new bug that Oracle is working on.
After getting back from 2 weeks of vacation I took another crack at this and I think I have come up with a useful test case for the SR. Time will tell, but it looks promising. I added a second Java program to the MaxSessions one listed above. Here it is, as a sketch:
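(Again the connect string is a placeholder; based on the note further down, this is the GenLoad.java script.)
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class GenLoad {
    public static void main(String[] args) throws Exception {
        // Same placeholder shared server URL as MaxSessions.
        String url = "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)" +
                     "(HOST=myhost)(PORT=1521))(CONNECT_DATA=" +
                     "(SERVICE_NAME=myservice)(SERVER=SHARED)))";
        Connection[] conns = new Connection[1000];
        for (int i = 0; i < conns.length; i++)
            conns[i] = DriverManager.getConnection(url, "test", "test");
        int n = 0;
        for (int loop = 0; loop < 1000; loop++) {
            for (Connection c : conns) {
                // A unique literal makes each statement a brand new SQL text,
                // forcing a hard parse and new shared pool allocations.
                try (Statement s = c.createStatement();
                     ResultSet r = s.executeQuery("select " + (n++) + " from dual")) {
                    r.next();
                }
            }
        }
    }
}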
This opens 1000 shared server sessions and then runs unique select statements on each of the 1000 sessions one at a time looping up to 1000 times. After about 60 loops I get the latch: shared pool waits. I tried this on both 19.13 and 19.5 and I get the waits in both but with differences.
19.13 after 100 loops:
19.5 after 100 loops:
Avg Wait, % DB time, and sleeps for the kghfrunp: alloc: wait latch were all higher on 19.13.
Note that I had to keep the GenLoad.java script running while I ran MaxSessions.java and I took AWR snapshots around the MaxSessions run to get these AWR report outputs.
Maybe this will be enough to help Oracle recreate this behavior internally.
The biggest new thing was that I ran the ashtopelapsed.sql script that I mentioned in the first post above but it missed the top SQL statement for the job I was trying to tune. I am struggling to get the logic right. I am trying to write an ASH-based top SQL report for a given time range. It is like an AWR report where you see the top elapsed time SQL statement for the given time frame. But I am grouping by FORCE_MATCHING_SIGNATURE and not SQL_ID because I want to see the top SQL that all have the same signature but different SQL_IDs. In this case there was a PeopleSoft update and select statement that did not use bind variables, so each execution had its own SQL_ID. Here is the select:
SELECT ...
FROM PS_HR_ACCTG_LINE A
WHERE
A.APPL_JRNL_ID = 'ABCDE' AND
A.BUSINESS_UNIT_GL = 'FG' AND
(A.LEDGER_GROUP = ' ' OR A.LEDGER_GROUP = 'HIJK') AND
A.GL_DISTRIB_STATUS = 'N' AND
A.ACCOUNTING_DT >= TO_DATE('2006-09-03','YYYY-MM-DD') AND
A.ACCOUNTING_DT <= TO_DATE('2021-09-04','YYYY-MM-DD');
I guess this runs for different values of BUSINESS_UNIT_GL, ACCOUNTING_DT, etc.
I am trying to get ashtopelapsed.sql to show the top SQL by force matching signature:
create table topsigs as
select
FORCE_MATCHING_SIGNATURE,
count(*) active
from DBA_HIST_ACTIVE_SESS_HISTORY a
where
sample_time between
to_date('13-SEP-2021 01:00:19','DD-MON-YYYY HH24:MI:SS')
and
to_date('13-SEP-2021 06:52:25','DD-MON-YYYY HH24:MI:SS')
group by FORCE_MATCHING_SIGNATURE;
The date range covers the time frame of the problem job, from about 1 to 7 am yesterday.
But then I want to find at least one SQL_ID for the given value of FORCE_MATCHING_SIGNATURE so I can then get its text as an example of a real SQL statement that has the signature:
create table sigtoid as
select
t.FORCE_MATCHING_SIGNATURE,
min(a.sql_id) sql_id
from topsigs t, DBA_HIST_ACTIVE_SESS_HISTORY a, DBA_HIST_SQLTEXT s
where
t.FORCE_MATCHING_SIGNATURE = a.FORCE_MATCHING_SIGNATURE and
a.sql_id = s.sql_id
group by t.FORCE_MATCHING_SIGNATURE;
There has to be a row in DBA_HIST_SQLTEXT for the signature or else I will not have an example SQL statement and the signature will get dropped from the report. I may need to put some sort of outer join somewhere so I can see the FORCE_MATCHING_SIGNATURE values that do not have an example SQL_ID in DBA_HIST_SQLTEXT. But fortunately, with the last revision (commit) to ashtopelapsed.sql I was able to see the text of an example SQL statement for each of the top signatures during the time when the problem job ran.
The other funny thing is that I forgot I had ashfmscount.sql so I ended up rewriting it. I need to read my own blog posts. 🙂 Hopefully this more detailed post will serve as a model for future performance tuning efforts and so I will not forget the scripts that I used this time.
As for hacking a SQL Profile together, the subject of the second post above, I edited one in vi as described in the post and the other I just extracted using coe_xfr_sql_profile.sql. I also edited both SQL Profiles to use FORCE = TRUE to make the SQL profile apply to all SQL statements with the same signature. I described FORCE = TRUE in earlier posts:
I think there is some overlap in all these posts, but I wanted to document today’s tuning exercise. The biggest new thing is the improvement of ashtopelapsed.sql.
Couple of images of the AWR and ASH top SQL for the time of this long running job:
Neither of these captured the top SQL by signature so that is why ashtopelapsed.sql was helpful. There may be some Oracle supplied way to do the same thing that I do not know about but the standard awrrpt.sql and ashrpt.sql do not handle this situation well. I could not find any useful information about my long running job in the top SQL portions of the AWR and ASH reports.
Plan hash value: 1538925713

-------------------------------------------------------------
| Id  | Operation                     | Name              |
-------------------------------------------------------------
|   0 | UPDATE STATEMENT              |                   |
|   1 |  UPDATE                       | PS_HR_ACCTG_LINE  |
|*  2 |   TABLE ACCESS BY INDEX ROWID | PS_HR_ACCTG_LINE  |
|*  3 |    INDEX SKIP SCAN            | PSAHR_ACCTG_LINE  |
-------------------------------------------------------------
Not to beat a dead horse, but my main point is that I updated ashtopelapsed.sql to be useful in this situation. It gave me information that the AWR and ASH reports did not.
Here are the scripts that I used for this tuning example:
ashdump.sql – dump out a single 10 second point in time to find the session that is running the batch job
I also had to run this script which is not in my git repository (it is based on ashcount.sql which is my generic script for grouping by certain ash columns).
select
SESSION_ID,count(*) active
from DBA_HIST_ACTIVE_SESS_HISTORY a
where
sample_time between
to_date('13-SEP-2021 01:00:19','DD-MON-YYYY HH24:MI:SS')
and
to_date('13-SEP-2021 06:52:25','DD-MON-YYYY HH24:MI:SS')
and program like 'PSAESRV%'
group by SESSION_ID
order by active desc;
I was trying to figure out which PSAESRV session was the one that ran all 6 hours. Session 1039 was the one, so I used it in ashfmscount.sql.
ashfmscount.sql – find the top signatures for the session that is the batch job
ashtopelapsed.sql – get the text of the top SQL statements grouped by signature in the time frame that the job ran
plan.sql – get a plan and output an outline hint to match it
test2.sql – test query with and without outline hint. also get plan hash value in memory so it can be extracted by sqlt.
The job ran last night in about 36 minutes instead of almost 6 hours, so it was a big improvement.
I was not able to improve the performance of this long running job using AWR or ASH reports but once I fixed an issue with my ashtopelapsed.sql script and used it along with my other ASH and non-ASH scripts I was able to resolve the performance issue. The job had many update and select statements that were too short to show up on the AWR and ASH reports but when summarized by their force matching signatures they showed up at the top of my ashtopelapsed.sql and ashfmscount.sql scripts’ outputs. Going forward I hope to use the collection of scripts described in this post when faced with tuning situations where an AWR report does not point to a top SQL statement. I may start using these on a regular basis even in cases where the AWR report does provide helpful information just to have another source of information.
I was tuning a parallel query and was a little surprised to find that an outline hint from EXPLAIN PLAN and dbms_xplan.display did not contain a PARALLEL hint. So, a query I was testing did not run in parallel with the outline hint. I had to add the PARALLEL hint. It is surprising because I think of an outline hint as a full set of hints.
I created a small test table and got the outline for the plan of a select statement with a parallel hint:
drop table test;
create table test noparallel as select * from dba_tables;
explain plan into plan_table for
select /*+ parallel(test,8) */ sum(blocks) from test;
set markup html preformat on
select * from table(dbms_xplan.display('PLAN_TABLE',NULL,'ADVANCED'));
In other tuning situations with plans that did not execute in parallel the outline hints from dbms_xplan.display_cursor did a great job of capturing all the details of the plan. But when extracting an outline hint from a query that has a parallel hint in it, I needed both the outline hint and the parallel hint to get the same plan.
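Now when I want the parallel version of a plan, I keep the outline intact and add the parallel hint in the same comment, something like this (the outline lines here are illustrative of what dbms_xplan.display produced for this simple query):
select /*+ parallel(test,8)
           BEGIN_OUTLINE_DATA
           FULL(@"SEL$1" "TEST"@"SEL$1")
           OUTLINE_LEAF(@"SEL$1")
           ALL_ROWS
           DB_VERSION('19.1.0')
           OPTIMIZER_FEATURES_ENABLE('19.1.0')
           IGNORE_OPTIM_EMBEDDED_HINTS
           END_OUTLINE_DATA */
sum(blocks) from test;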
I tuned a production query this week using outline hints and a SQL profile. This is like other examples I have posted but may have enough differences to be worth documenting here.
A query in a batch job was taking over an hour to run. I got it to run in under 30 minutes.
898524 rows selected.
899018 rows selected.
The new plan ran about 45 minutes faster.
The main point here is that I was able to improve the plan in a more selective way using an outline hint than I could have with regular hints.
I ran the problem query using my test2.sql script to show the expected and actual number of rows in each step in the plan and to show the amount of time spent on each step. The output looked like this:
This is the typical example of a bad plan using an index in a case where a full scan is more efficient. Most of the time is spent in step 4 doing 898 thousand table lookups. So, to fix this query I tried a FULL hint to get rid of the SKIP SCAN, but it did not help. I tried various combinations of FULL hints on the four tables in the plan and they all resulted in inefficient plans with different types of joins than the original plan.
SELECT /*+ FULL(DD) FULL(ORG) FULL(WINDOW1) FULL(WINDOW2) */
----------------------------------------------------
| Id  | Operation               | Name          |
----------------------------------------------------
|   0 | SELECT STATEMENT        |               |
|*  1 |  HASH JOIN RIGHT OUTER  |               |
|*  2 |   TABLE ACCESS FULL     | S_ORG_EXT_XM  |
|*  3 |   HASH JOIN RIGHT OUTER |               |
|*  4 |    TABLE ACCESS FULL    | S_ORG_EXT_XM  |
|*  5 |    HASH JOIN            |               |
|*  6 |     TABLE ACCESS FULL   | S_ORG_EXT     |
|*  7 |     TABLE ACCESS FULL   | S_ORG_EXT_XM  |
----------------------------------------------------
Then I got the idea of using an outline hint. The plan.sql script that I use to run EXPLAIN PLAN on a query outputs an outline hint. In this case the outline hint looks like this:
The main idea that I wanted to write this post about is to take this outline hint and leave everything else the same but replace the skip scan with a full table scan. Adding a FULL hint was changing a lot of things about the plan. But if I start with the outline hint, I can force the plan to remain the same except for the one thing I change. I ran plan.sql on the query with a full hint to get the outline with a full scan and then pulled out just the full scan line to put into the original plan's outline. Here are the old and new lines that I used to edit the outline hint for the original query's plan: an index skip scan and a full table scan.
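The real lines came from the production query, but they had this shape (query block and alias names here are illustrative, and the index column list is omitted):
INDEX_SS(@"SEL$1" "DD"@"SEL$1" (...))   -- old line: index skip scan
FULL(@"SEL$1" "DD"@"SEL$1")             -- new line: full table scan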
So, then I ran my test2.sql script but with the edited outline hint and the query ran in 28 minutes. So, all we had to do was put the outline hint into the existing SQL script.
But then I realized that I could finish off the performance improvement using a SQL Profile and not have to edit the script. The output for the run of my test2.sql script with the hint had this sql_id and plan_hash_value: 3m80wvb2v6vdq 42036438.
My slow production query had this sql id and plan hash value: dzfr3vz7z5p66 2238264355.
So, I passed these two sets of parameters to coe_xfr_sql_profile.sql in the sqlt/utl subdirectory, creating these two scripts (the names follow its coe_xfr_sql_profile_<sql_id>_<plan_hash_value>.sql convention):
coe_xfr_sql_profile_dzfr3vz7z5p66_2238264355.sql
coe_xfr_sql_profile_3m80wvb2v6vdq_42036438.sql
So, I hacked the two scripts together to make coe_xfr_sql_profile_dzfr3vz7z5p66_42036438.sql which has the SQL text of the original query and the plan that my test script with the outline hint used. The scripts output by coe_xfr_sql_profile.sql save the text of the SQL statement and its plan when creating a new SQL Profile. You can get a little creative and connect production SQL text with a plan you have generated through hints.
9 DBMS_LOB.CREATETEMPORARY(sql_txt, TRUE);
10 DBMS_LOB.OPEN(sql_txt, DBMS_LOB.LOB_READWRITE);
11 -- SQL Text pieces below do not have to be of same length.
12 -- So if you edit SQL Text (i.e. removing temporary Hints),
13 -- there is no need to edit or re-align unmodified pieces.
14 wa(q'[SELECT DD.ROW_ID|| '||' ||
15 DD.PAR_ROW_ID|| '||' ||
text of original query starts line 14
55 AND WINDOW2.X_DELIV]');
56 wa(q'[ERY_WINDOW (+) = '2'
57 AND WINDOW2.TYPE (+)= 'DELIVERY TIMES']');
59 h := SYS.SQLPROF_ATTR(
text of query ends line 57. outline hint with full scan starts line 60
80 q'[USE_MERGE(@"SEL$1" "ORG"@"SEL$1")]',
82 :signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
83 :signaturef := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt, TRUE);
ends line 81
So, I ran the edited coe_xfr_sql_profile_dzfr3vz7z5p66_42036438.sql to put in the SQL Profile, which related the production query to the plan generated by the edited outline hint, and the production job ran in 28 minutes last night without any change to its SQL code. The outline hint allowed me to leave the original plan intact except for the one step that needed a full scan instead of an index skip scan. Then the SQL Profile trick allowed me to apply the better plan without any change to the application.
I could have just extracted the SQL Profile script like this:
coe_xfr_sql_profile.sql dzfr3vz7z5p66 42036438
That would have generated coe_xfr_sql_profile_dzfr3vz7z5p66_42036438.sql automatically, combining the production SQL text with the plan with the outline hint and full scan, instead of me editing it together manually.
I just put two new scripts on my OracleDatabaseTuningSQL GitHub repository. These came out of a PeopleSoft Financials performance problem I was working on that involved a lot of similar SQL statements that used constants instead of bind variables, so they did not show up at the top of the AWR report. I had to look at ASH data to find them and had to group by their force matching signature to group the similar statements together. These are the two scripts:
ashfmscount.sql – Looks at a single application engine session and groups all the time spent by force matching signature to find the queries that consumed the most time. I used my simple ashdump.sql script to dump out a few rows when I knew the app engine was running and I found the SESSION_ID and SESSION_SERIAL# values there.
ashtopelapsed.sql – This is meant to look like the SQL by elapsed time report on an AWR report except that it groups SQL by force matching signature but gives an example sql id with its text to give you an idea of what the signature represents. Might be good to run this the next time my AWR report does not have any long running SQL statement on the top SQL report.
I really used the first one to resolve the issue along with various other scripts to get to that point. I created the second one just now as a possible future script to use in addition to an AWR report. I didn’t check the ASH report to see if this is a duplicate of it, but these two new scripts work well.