Memory added based on buffer pool advisory did not give desired result

Development asked me to look at ways to cut the run time of a series of PeopleSoft payroll batch jobs, so I took an AWR report covering the entire 4-hour period. Based on the waits, the percentage of the elapsed time spent on I/O, and the buffer pool advisory, I chose to double the size of the buffer cache. But the added memory did not improve the run time of the batch jobs. Maybe the affected blocks are only read into memory once, so they would not get cached no matter how big the buffer pool was.
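The post does not show the exact commands used to double the cache, but on an instance with manually sized SGA components it might look something like this (the sizes here are illustrative, not the exact values I used):

```sql
-- Sketch only; sizes are illustrative.
-- With manually sized SGA components, grow the default pool directly:
alter system set db_cache_size = 6848M scope=both;

-- Under automatic shared memory management, raise the overall target
-- instead and let Oracle redistribute memory among the components:
alter system set sga_target = 12G scope=both;
```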

Here is the original run on June 22 with the original memory settings:

Cache Sizes

                   Begin    End
Buffer Cache:      3,328M   3,424M   Std Block Size: 8K
Shared Pool Size:  1,600M   1,520M   Log Buffer:     7,208K

Top 5 Timed Foreground Events

Event                     Waits       Time(s)  Avg wait (ms)  % DB time  Wait Class
db file sequential read   1,910,393    10,251              5      72.03  User I/O
DB CPU                                  2,812                    19.76
log file sync                35,308       398             11       2.80  Commit
resmgr:cpu quantum           31,551        62              2       0.43  Scheduler
db file scattered read        7,499        60              8       0.42  User I/O

Buffer Pool Advisory

  • Only rows with estimated physical reads >0 are displayed
  • ordered by Block Size, Buffers For Estimate
P Size for Est (M) Size Factor Buffers (thousands) Est Phys Read Factor Estimated Phys Reads (thousands) Est Phys Read Time Est %DBtime for Rds
D 336 0.10 41 2.71 6,513,502 1 9842530.00
D 672 0.20 83 2.42 5,831,130 1 8737799.00
D 1,008 0.29 124 2.18 5,241,763 1 7783636.00
D 1,344 0.39 166 1.96 4,720,053 1 6939010.00
D 1,680 0.49 207 1.77 4,250,981 1 6179603.00
D 2,016 0.59 248 1.59 3,825,904 1 5491420.00
D 2,352 0.69 290 1.43 3,438,372 1 4864023.00
D 2,688 0.79 331 1.28 3,083,734 1 4289879.00
D 3,024 0.88 373 1.15 2,758,459 1 3763273.00
D 3,360 0.98 414 1.02 2,459,644 1 3279504.00
D 3,424 1.00 422 1.00 2,405,118 1 3191229.00
D 3,696 1.08 455 0.91 2,184,668 1 2834329.00
D 4,032 1.18 497 0.80 1,931,082 1 2423784.00
D 4,368 1.28 538 0.71 1,696,756 1 2044421.00
D 4,704 1.37 579 0.62 1,479,805 1 1693185.00
D 5,040 1.47 621 0.53 1,278,370 1 1367070.00
D 5,376 1.57 662 0.45 1,090,505 1 1062925.00
D 5,712 1.67 704 0.38 914,112 1 777352.00
D 6,048 1.77 745 0.31 746,434 1 505888.00
D 6,384 1.86 786 0.24 580,310 1 236941.00
D 6,720 1.96 828 0.17 414,233 1 149325.00
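The advisory data that AWR snapshots comes from the v$db_cache_advice view, so the current figures can also be queried directly. A query along these lines should reproduce the columns above (this is a sketch of my own, not something from the AWR report itself):

```sql
-- Current buffer pool advisory for the default pool, 8K block size
select size_for_estimate,           -- estimated cache size in MB
       size_factor,                 -- ratio to the current cache size
       buffers_for_estimate,        -- number of buffers at that size
       estd_physical_read_factor,   -- est. reads relative to current
       estd_physical_reads          -- est. physical reads at that size
from   v$db_cache_advice
where  name = 'DEFAULT'
and    block_size = 8192
and    advice_status = 'ON'
order  by size_for_estimate;
```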

In the SQL ordered by Elapsed Time section of the report, the top batch job SQL statement was 99.14% I/O.

Based on this report, it seems the number of physical reads could be reduced to about 20% of what they were on June 22 by doubling the size of the buffer cache. But adding the memory did not cut the number of physical reads in any major way.

Here is yesterday’s run:

Cache Sizes

                   Begin    End
Buffer Cache:      6,848M   6,816M   Std Block Size: 8K
Shared Pool Size:  3,136M   3,136M   Log Buffer:     16,572K

Top 5 Timed Foreground Events

Event                     Waits       Time(s)  Avg wait (ms)  % DB time  Wait Class
db file sequential read   1,789,852    10,173              6      72.15  User I/O
DB CPU                                  2,970                    21.06
log file sync                37,562       200              5       1.42  Commit
resmgr:cpu quantum           24,996        59              2       0.42  Scheduler
db file scattered read        5,409        54             10       0.38  User I/O

Buffer Pool Advisory

  • Only rows with estimated physical reads >0 are displayed
  • ordered by Block Size, Buffers For Estimate
P Size for Est (M) Size Factor Buffers (thousands) Est Phys Read Factor Estimated Phys Reads (thousands) Est Phys Read Time Est %DBtime for Rds
D 672 0.10 83 11.25 516,440 1 1309098.00
D 1,344 0.20 166 5.98 274,660 1 683610.00
D 2,016 0.29 248 4.02 184,712 1 450915.00
D 2,688 0.39 331 2.90 133,104 1 317404.00
D 3,360 0.49 414 2.20 100,860 1 233990.00
D 4,032 0.59 497 1.80 82,768 1 187185.00
D 4,704 0.69 580 1.53 70,445 1 155305.00
D 5,376 0.79 663 1.31 60,345 1 129176.00
D 6,048 0.88 745 1.14 52,208 1 108127.00
D 6,720 0.98 828 1.01 46,477 1 93301.00
D 6,848 1.00 844 1.00 45,921 1 91862.00
D 7,392 1.08 911 0.95 43,572 1 85785.00
D 8,064 1.18 994 0.89 40,789 1 78585.00
D 8,736 1.28 1,077 0.85 38,889 1 73671.00
D 9,408 1.37 1,160 0.81 37,112 1 69073.00
D 10,080 1.47 1,242 0.77 35,490 1 64876.00
D 10,752 1.57 1,325 0.75 34,439 1 62158.00
D 11,424 1.67 1,408 0.73 33,353 1 59347.00
D 12,096 1.77 1,491 0.71 32,524 1 57204.00
D 12,768 1.86 1,574 0.69 31,909 1 55613.00
D 13,440 1.96 1,656 0.68 31,361 1 54194.00

After the memory add, the same top batch job SQL statement was 98.80% I/O. That is some improvement, but not nearly as much as I expected based on the buffer pool advisory.

I guess the moral of the story is that the buffer pool advisory does not apply to specific workloads and is only a general guideline. Maybe this is the same kind of fallacy that you see with buffer cache hit ratios, where certain workloads make the ratio irrelevant. Here were the hit ratios: 98.59% before and 98.82% after. Basically these are the same.
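For reference, the classic hit ratio comes from three v$sysstat counters. AWR computes it over the snapshot interval; the query below, a sketch of the same formula, is cumulative since instance startup:

```sql
-- Buffer cache hit ratio since instance startup; AWR applies the
-- same formula to the deltas between two snapshots instead.
select round(100 * (1 - phy.value / (db.value + con.value)), 2)
         as hit_ratio_pct
from   v$sysstat phy, v$sysstat db, v$sysstat con
where  phy.name = 'physical reads'
and    db.name  = 'db block gets'
and    con.name = 'consistent gets';
```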

I just thought I would share this to document a real case of using the buffer pool advisory and having it not produce the expected results.

Bobby

About Bobby

I live in Chandler, Arizona with my wife and three daughters. I work for US Foods, the second largest food distribution company in the United States. I have worked in the Information Technology field since 1989. I have a passion for Oracle database performance tuning because I enjoy challenging technical problems that require an understanding of computer science. I enjoy communicating with people about my work.

2 Responses to Memory added based on buffer pool advisory did not give desired result

  1. “the moral of the story is that the buffer pool advisory does not apply to specific workloads and is only a general guideline. ”

    That’s the way that I’ve always felt about it.
    Also, if your process does not consume the block again, then it won’t enjoy the larger cache size. And if the blocks aren’t already in the cache, you won’t see a reduction in physical I/O.

    • Bobby says:

      Thank you for your comment Brian. I think you are right when you say “if the blocks aren’t already in the cache, you won’t see a reduction in physical I/O”. I think that is the case here. It is a payroll run and probably hitting many blocks once instead of reusing them. I guess the buffer pool advisory can not predict this kind of behavior.
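One way to test that theory (my own sketch, not something I ran for the post) would be to watch which objects' blocks actually stay in the cache while the payroll jobs run, for example by counting cached buffers per object in v$bh. If the payroll tables' blocks are read once and aged out, their buffer counts would stay low even with the larger cache:

```sql
-- Rough sketch: count cached buffers per object to see whether the
-- payroll tables' blocks are retained or read once and aged out.
select o.owner, o.object_name, count(*) as cached_buffers
from   v$bh b
join   dba_objects o on o.data_object_id = b.objd
where  b.status != 'free'
group  by o.owner, o.object_name
order  by cached_buffers desc;
```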
