Finding query with high temp space usage using ASH views

We are having challenges with queries that spend most of their time on these waits:

enq: TS - contention

This wait indicates that a query on one instance of a RAC system is causing its instance to add temp space to its sort segment and remove space from another instance’s sort segment.  For some reason this process is extremely slow and totally bogs down any query waiting on this type of wait.  I worked an SR with Oracle on this issue and Oracle development claims this is not a bug.  I’ve duplicated the problem on both 10.2 and 11.2 RAC.
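To confirm which sessions are piling up on this wait, ASH itself can be queried. This is only a sketch, not from the original post, using standard ASH columns; the one-hour window is an arbitrary choice:

```sql
-- Sketch: count recent ASH samples per session stuck on the TS enqueue.
-- The sessions and SQL with the most samples spent the most time waiting.
select inst_id, session_id, sql_id, count(*) samples
from gv$active_session_history
where event = 'enq: TS - contention'
and sample_time > sysdate - 1/24   -- last hour; adjust as needed
group by inst_id, session_id, sql_id
order by samples desc;
```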

Anyway, so now we periodically see queries spending most of their time on this wait and this occurs when some other query is eating up all the temp space or has done so recently.  So how do I go back in time and figure out what query was using temp space?  If I wanted to see the session ids of the current users of temp space on a RAC system I would just join gv$tempseg_usage to gv$session like this:

select se.inst_id,se.sid,sum(tu.blocks)*8192
from
gv$tempseg_usage tu,
gv$session se
where
tu.inst_id=se.inst_id and
tu.session_addr=se.saddr
group by se.inst_id,se.sid
order by se.inst_id,se.sid;

This assumes an 8K blocksize.  But to go back in time you can use the TEMP_SPACE_ALLOCATED column of V$ACTIVE_SESSION_HISTORY or DBA_HIST_ACTIVE_SESS_HISTORY to identify a query that consumed a lot of temp space.
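If the temporary tablespace does not use an 8K block size, the hardcoded 8192 can be replaced by joining to dba_tablespaces. This variant of the query above is just a sketch along those lines:

```sql
-- Same per-session temp usage query, but reading the temp tablespace's
-- actual block size from dba_tablespaces instead of assuming 8K.
select se.inst_id, se.sid, sum(tu.blocks * ts.block_size) bytes
from gv$tempseg_usage tu,
     gv$session se,
     dba_tablespaces ts
where tu.inst_id = se.inst_id
  and tu.session_addr = se.saddr
  and tu.tablespace = ts.tablespace_name
group by se.inst_id, se.sid
order by se.inst_id, se.sid;
```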

Here is an example:

select sql_id,max(TEMP_SPACE_ALLOCATED)/(1024*1024*1024) gig
from DBA_HIST_ACTIVE_SESS_HISTORY
where
sample_time > sysdate-2 and
TEMP_SPACE_ALLOCATED > (50*1024*1024*1024)
group by sql_id order by sql_id;

This gives the sql_id and maximum allocated temp space of any queries that ran in the past two days and exceeded 50 gigabytes of temp space.
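Once a heavy sql_id turns up, its text can be pulled from the AWR text view, assuming AWR captured the statement:

```sql
-- Look up the text of a SQL statement identified by the ASH query above.
select sql_text
from dba_hist_sqltext
where sql_id = 'abcd1234efgh5';  -- hypothetical placeholder sql_id
```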

This is a great feature of these ASH views.  Now we can go back in time and find the queries that were using all the temp space and tune them.

– Bobby

About Bobby

I live in Chandler, Arizona with my wife and three daughters. I work for US Foods, the second largest food distribution company in the United States. I have worked in the Information Technology field since 1989. I have a passion for Oracle database performance tuning because I enjoy challenging technical problems that require an understanding of computer science. I enjoy communicating with people about my work.

22 Responses to Finding query with high temp space usage using ASH views

  1. Raj says:

    Hi Bobby,

    First of all, thanks for posting this 🙂

    We are facing the temp issue frequently; we only see it in the alert log after temp has filled up.

    We couldn't catch which SID and SQL text caused the issue.

    Note: for currently active sessions we can see who the culprit is, but if it happened three hours ago, how do we find the culprit?

    We tried the above query but couldn't get it to work. If you know the exact query, please share it:

    Many thanks in advance for your help.


  2. Chuck Davis says:

    Thanks for the post. This helped us track down the session and statement that gobbled up all of temp during a particular window of time.

  3. Venkat says:

    Thanks for the post Bobby. I have a question: how do we find out how much temp space is taken by a query? For example, if I write a new query and want to test how much temp space it is going to use…

    I know we can use your query above with a time window, but is there any other way, like joining your query with V$SQL where sql_text like 'my new query'? Please suggest

    • Bobby says:


      Thank you for your comment, but I don't really have a query like that. I know that with a problematic query I've monitored the available space in the sort segment, but I can't find a script for that. The thing is that depending on where the query is in the execution of its plan, temp space usage can go up or down. I did a quick Google search and this page seems to have some scripts that you can use to monitor the temp space usage of an active session:

      – Bobby

    • Don says:

      This comes 3 years late I know, but it may help someone …
      After running your query, you can wait till the end of the current snap interval and then run a query similar to the following, modifying the interval begin and end times in the WHERE clause to match your case:

      select round(temp_space_gb,1) temp_space_gb,
      to_char(begin_interval_time,'dd/mm/yyyy hh24:mi') begin_interval_time,
      to_char(end_interval_time,'dd/mm/yyyy hh24:mi') end_interval_time,
      b.sql_id, c.sql_text
      from dba_hist_snapshot a
      left join dba_hist_sqlstat b
      on b.snap_id = a.snap_id
      left join dba_hist_sqltext c
      on c.sql_id = b.sql_id
      inner join (
      select snap_id, sql_id, max(temp_space_allocated) / (1024*1024*1024) temp_space_gb
      from dba_hist_active_sess_history
      group by snap_id, sql_id) d
      on d.snap_id = a.snap_id
      and d.sql_id = b.sql_id
      where begin_interval_time >= to_date('02/05/2017 12:00','dd/mm/yyyy hh24:mi')
      and end_interval_time <= to_date('02/05/2017 13:01','dd/mm/yyyy hh24:mi')
      and temp_space_gb is not null
      order by temp_space_gb desc;

      This will show you all the SQL queries that ran during the chosen snap interval/s, ranked by their maximum sampled temp space allocation per snap interval.

      You do have to wait until the end of the snap interval in which your business query ran, but that is normally a small price to pay.

      Thanks Bobby for the post which pointed the way to incorporating temp space into an existing query.


      • Bobby says:

        Thanks for your reply. I haven't had a chance to review it yet, but I appreciate your contribution.


  4. Jyothish says:


    Thank you very much for your query.

  5. Prash_DBA says:

    — Query to check TEMP USAGE

    SELECT a.tablespace_name,sum(ROUND((c.total_blocks*b.block_size)/1024/1024/1024,2))
    "Total Size [GB]",sum(ROUND((a.used_blocks*b.block_size)/1024/1024/1024,2)) "Used_size[GB]",
    sum(ROUND(((c.total_blocks-a.used_blocks)*b.block_size)/1024/1024/1024,2)) "Free_size[GB]",
    sum(ROUND((a.max_blocks*b.block_size)/1024/1024/1024,2)) "Max_Size_Ever_Used[GB]",
    sum(ROUND((a.max_used_blocks*b.block_size)/1024/1024/1024,2)) "MaxSize_ever_Used_by_Sorts[GB]",
    sum(ROUND((a.used_blocks/c.total_blocks)*100,2)) "Used Percentage"
    FROM gv$sort_segment a,dba_tablespaces b,(SELECT tablespace_name,SUM(blocks)
    total_blocks FROM dba_temp_files GROUP by tablespace_name) c
    WHERE a.tablespace_name=b.tablespace_name AND a.tablespace_name=c.tablespace_name group by a.tablespace_name;

    Top 10 temp usage

    select * from
    (SELECT d.tablespace_name,a.inst_id,a.sid,a.serial#,a.username "DB Username",a.osuser,nvl(a.module,a.program) proginfo,floor(last_call_et/3600)||':'||floor(mod(last_call_et,3600)/60)||':'||mod(mod(last_call_et,3600),60) lastcallet,ROUND((b.blocks*d.block_size)/1024/1024,2) "Used MB",c.sql_id,c.sql_text
    FROM gv$session a, gv$tempseg_usage b, gv$sqlarea c,dba_tablespaces d
    WHERE a.saddr = b.session_addr AND c.address= a.sql_address AND c.hash_value = a.sql_hash_value AND d.tablespace_name=b.tablespace ORDER BY "Used MB" DESC)
    where rownum < 11;

    • Anonymous says:

      History :-

      The following query gives the sql_id and maximum allocated temp space of any queries that ran in the past 2 hours and exceeded 10 GB of temp space.

      select sql_id,session_id,session_serial#,max(TEMP_SPACE_ALLOCATED)/(1024*1024*1024) gig
      from gv$active_session_history
      where sample_time > sysdate-2/24 and
      TEMP_SPACE_ALLOCATED > (10*1024*1024*1024)
      group by sql_id,session_id,session_serial# order by GIG desc;

  6. sandhya says:

    Thanks for sharing this info. It helped me a lot.

  7. Anonymous says:

    Thank you Bobby. Very good insights to do a better job on SQL tuning.


  8. uttam Rawat says:

    Thanks for your support.
