# SQLT Data Explosion

Well, I’ve been running SQLT all day.  I’ve got a case open with Oracle support on the issue described in my previous post.  Oracle support wants me to use SQLT to run one of the queries with the long parse time and so I picked one out of the AWR.  Just to be safe I decided to download the latest SQLT zip and install it on our development database first.  Then I ran the problem query knowing that it wouldn’t hit the same issue in development.

Well, when I installed SQLT the first time in development I put all the objects in the USERS tablespace, which promptly filled up.  So, then I created a 10 gig tablespace just for SQLT and reinstalled it using that tablespace.  I ran through a full XECUTE and it took several hours and produced a 100 megabyte zip file.  So, then I promptly deleted all the output and started over on production.
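If you're installing SQLT yourself, learn from my mistake and give it its own tablespace up front so a runaway collection can't fill up USERS.  A minimal sketch — the datafile path and size here are placeholders, not my actual settings:

```sql
-- Hypothetical example: a dedicated tablespace for SQLT's repository
-- objects.  Path and size are placeholders; adjust for your system.
CREATE TABLESPACE sqlt_ts
  DATAFILE '/u01/oradata/MYDB/sqlt_ts01.dbf' SIZE 10G
  AUTOEXTEND OFF;
```

As I recall, the SQLT install script prompts for which tablespace to use, so you can point it at the dedicated one during installation.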

On production SQLT is still running hours later and it filled up my 10 gig tablespace.  So, now it is a 20 gig tablespace.  But, I’m left to wonder if I should start over and let this run all night.  But then maybe it will fill up something overnight or cause some issue with the batch processes.  Yuck!  The good thing is that the /u01 filesystem I’m using has 80 gig free so hopefully when the 20 gig of data is extracted it won’t fill my 80 gig filesystem with the resulting zip.  But, I have dinner plans tonight so do I let it run or not?  Not to mention that now it appears hung trying to extract our VPD functions.  Fun.
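For the record, growing the SQLT tablespace mid-run is just a datafile resize or an added datafile.  Something like this — again, the paths and sizes are placeholders:

```sql
-- Hypothetical example: two ways to grow the SQLT tablespace when it
-- fills up mid-run.  Datafile paths and sizes are placeholders.
ALTER DATABASE DATAFILE '/u01/oradata/MYDB/sqlt_ts01.dbf' RESIZE 20G;

-- Or add a second datafile instead of resizing the first:
ALTER TABLESPACE sqlt_ts
  ADD DATAFILE '/u01/oradata/MYDB/sqlt_ts02.dbf' SIZE 10G;
```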

So, bottom line is that SQLT seems like a pretty cool tool, but if you cut it loose on a system like ours with tables that have thousands of subpartitions and many other complexities, you may spend all day generating gigabytes of information to upload to Oracle support that they probably can’t use anyway.

– Bobby

P.S. The query I’m testing ran for less than 15 minutes.  It is all the data capture that is killing us.

P.P.S.  Well, it blew out my 20 gig SQLT tablespace so I upped it to 80 gig.  Looks like it is up to 28 gig used.  The bad thing is on this run the query ran in seconds so it probably didn’t even get a trace of the issue.  SQLT is not my friend.

Also, Oracle support had me try these options to speed up SQLT:

EXEC sqltxplain.sqlt$a.set_param('test_case_builder', 'N');
EXEC sqltxplain.sqlt$a.set_param('sta_time_limit_secs', '30');

But, it has still been running since this morning.  Fun.

Thursday update:

Latest recommendation is to run the XPREXC version instead.  This looks like it may work.  It only used about 8 gig of my SQLT tablespace.  But, it has been running for about 16 hours.  Still, it has only used about 1 gig of my /u01 filesystem and I have 78 gig free, so it may well finish.

2:15 pm MST Thursday.  I spoke too soon.  SQLT has been stuck for about 4 hours with no new output.  Here is the last file updated:

-rw-r--r--  1 oracle oinstall 610406400 Sep 26 11:54
sqlt_s20527_cell_state_begin_and_end.txt

The server timezone is CST.  Currently running this query:

SELECT    'INST_ID="'
       || inst_id
       || '" CELL_NAME="'
       || cell_name
       || '" OBJECT_NAME="'
       || object_name
       || '" STATISTICS_TYPE="'
       || statistics_type
       || '"'
       || CHR (10)
          id,
          'BEGIN:'
       || CHR (10)
       || REPLACE (statistics_value_b, '><', '>' || CHR (10) || '<')
          begin_value,
          'END:'
       || CHR (10)
       || REPLACE (statistics_value_e, '><', '>' || CHR (10) || '<')
          end_value
  FROM SQLTXADMIN.sqlt$_gv$cell_state_v
 WHERE statement_id = :v_statement_id
 ORDER BY inst_id,
          cell_name,
          object_name,
          statistics_type

I’m stuck in SQLT purgatory!

3:08 pm MST:  Victory!  I guess I just had to be patient.  Here are the files left after the final run:

-rw-r--r--  1 oracle oinstall     16342 Sep 25 19:12
65mvg950s0q7z_output_s20527.txt

-rw-r--r--  1 oracle oinstall 506419884 Sep 26 17:03
sqlt_s20527_xprexc.zip



A mere 22 hours and a 500 meg zip which now I have to upload to Oracle’s support site.  Sweet!

– Bobby