Full scan performance on tables with more than 255 columns

Production issue

We have some production tables with more than 255 columns and are experiencing performance degradation on queries that access certain columns. I was able to show that a full table scan with a where clause condition on one column ran in 6 minutes, but the same scan filtering on a different column ran for over 20 minutes. I also noticed that the slower full scans were seeing "db file sequential read" waits and that the statistic "table fetch continued rows" was steadily growing.

A coworker suggested that the difference in performance was caused by the table having more than 255 columns and asked whether we should reorder the columns to improve performance, since the faster column was near the beginning of the table.

I picked a recent partition of the table and tested a full scan on each of its more than 300 columns, and found that the first 15 columns performed much better than all the rest.
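A per-column test like this is easy to script. Here is a hedged sketch; the table and column names (mytable, c1 … c300) are placeholders, not the real production objects:

```sql
-- Sketch of the per-column full scan test; mytable and c1..c300 are
-- placeholder names.  The full() hint forces a full table scan, and
-- "set timing on" reports elapsed time for each query in SQL*Plus.

set timing on

select /*+ full(t) */ count(*) from mytable t where c1 = 'X';
select /*+ full(t) */ count(*) from mytable t where c2 = 'X';
-- ... one query per column, through the last ...
select /*+ full(t) */ count(*) from mytable t where c300 = 'X';
```

Comparing elapsed times across the runs shows which columns trigger the extra "db file sequential read" waits.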

Then I dumped out a block from a partition and found that each row was split into a 255-column piece plus smaller pieces. But, the interesting thing is that the 255-column piece held the last 255 columns, not the first 255.

All this led me to build a test case to figure out how this works. Here is the zip.

Takeaways

Before I get into a long description of all the steps I took in my test case let me first describe what I believe is the take away:

  1. My testing assumes your table has more than 255 columns and fewer than 510 (2 × 255).
  2. A full scan with a where clause condition on one of the last 255 columns will experience row chaining.
  3. If you add new columns to the table and populate them with data, the chaining experienced on the last 255 columns will be greatly increased.
  4. If you reorganize the table by copying the data to a new table, then the first N-255 columns will not experience row chaining (N = total number of columns).

So, if you have a table with N columns where 255 < N < 510, the best performance comes from reorganizing the rows after you add and populate columns. Also, ideal performance is achieved when the columns in your where clause are among the first N-255 columns.
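One way to do the reorganization is a simple copy to a new table, which is what my test case simulates. A sketch, assuming placeholder names and ignoring indexes, constraints, grants, and the outage needed for the rename:

```sql
-- Hypothetical reorganization by copy; mytable is a placeholder name.
-- Copying the rows rebuilds each row's pieces in the current column order.

create table mytable_reorg as select * from mytable;

-- Recreate indexes, constraints, and grants on mytable_reorg here, then
-- swap the tables during a maintenance window:

rename mytable to mytable_old;
rename mytable_reorg to mytable;
```

For a partitioned production table you would more likely reorganize partition by partition, but the idea is the same: rewriting the rows removes the chaining introduced by the column adds.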

Test Case

Tables

test255c – 255 columns
test256c – 256 columns
test256cplus1 – 256 columns initially, then one added and populated with data

Each of these tables is loaded with a million rows of data.

Test scripts – run in this order

test255c.sql – 255 columns, builds table, tests last column
test256c.sql – 256 columns, builds table, tests last column
test256cplus1.sql – 257 columns, builds table, adds column, tests last column
testfirstcolumns.sql – retest the first column in all three tables
testsecondcolumns.sql – retest the second column in all three tables
testthirdcolumns.sql – retest the third column in all three tables
nexttolastcolumns.sql – retest the next-to-last column in all three tables
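Each test script can capture the "table fetch continued rows" statistic for its own session before and after the scan; the delta is the number of chained-row fetches the scan caused. A sketch of that measurement (the scanned query is illustrative):

```sql
-- Capture the session-level "table fetch continued rows" statistic
-- around a full scan; the delta between the two values is the number
-- of continued-row fetches caused by the scan.

-- value before the scan
select s.value
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'table fetch continued rows';

-- the full scan under test (filter column is illustrative)
select /*+ full(t) */ count(*) from test256c t where c256 = 1;

-- value after the scan
select s.value
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'table fetch continued rows';
```

Using v$mystat rather than v$sysstat keeps the numbers specific to the test session instead of the whole instance.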

blockdumps255c.trc – one data block from test255c
blockdumps256c.trc – one data block from test256c
blockdumps256cplus1.trc – one data block from test256cplus1
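Trace files like these can be produced by dumping a single data block. A sketch of one way to do it: find the file and block for a row with dbms_rowid, then dump that block to the session's trace file (the file and block numbers below are placeholders to be replaced with the values the first query returns):

```sql
-- Find the relative file number and block number for one row of test256c.

select dbms_rowid.rowid_relative_fno(rowid) file_no,
       dbms_rowid.rowid_block_number(rowid) block_no
  from test256c
 where rownum = 1;

-- Dump that block to a trace file in the diagnostic trace directory.
-- Substitute the file_no and block_no returned above for 5 and 1234.

alter system dump datafile 5 block 1234;
```

The resulting trace file shows each row piece in the block, including how many columns it holds, which is how the 1-column and 255-column pieces described below can be seen.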

The key test results relate to the statistic "table fetch continued rows". I've summarized the values of that statistic from a full scan of each table, filtering on the listed column, in this chart:

Table           Column 1  Column 2  Column 3  Next-to-last  Last column
TEST255C               0         0         0             0            0
TEST256C               0     83333     83333         83333        83333
TEST256CPLUS1          0     83333   1999999       1999999      1999999

So, in the 255-column example there was no row chaining. In the 256-column example there was some row chaining, but it was small compared to the total number of rows. In the example where a column had been added to make 257 columns, the last 255 columns experienced heavy row chaining relative to the number of rows. Unfortunately for us, our production situation is represented by the last example. Queries against our first 15 columns don't chain, just like the first column in the example, but many other columns are chaining like crazy.

The block dumps weren't strictly necessary to confirm this, but they do. What I see is that on table test255c the row pieces are consistently 255 columns. On test256c they are either 1 column or 255 columns, and the one-column pieces contain the first column. On test256cplus1 there are one-column pieces and 255-column pieces, and the one-column pieces contain either the first or the second column. So, adding a column seems to split the first column off the front of the 255-column piece and append the new column to the end, making a new 255-column piece. The minimal row chaining on column 2 of my test against test256cplus1 confirms what the block dumps show: column two hasn't moved much after the column add, but the last 255 columns must have moved a lot.

So, my suggestion that we reorganize the rows of a table with more than 255 columns that has had columns added and populated is based on the results for table test256c. I believe those results represent what a reorganized table's performance would be: modest row chaining, and none at all on the first columns. Of course, if you can keep your tables under 256 columns, that is even better!

– Bobby

P.S. This is on Oracle 11.2.0.3 on HP-UX 11.31, 64-bit Itanium.  16K block size.

About Bobby

I live in Chandler, Arizona with my wife and three daughters. I work for US Foods, the second largest food distribution company in the United States. I have worked in the Information Technology field since 1989. I have a passion for Oracle database performance tuning because I enjoy challenging technical problems that require an understanding of computer science. I enjoy communicating with people about my work.

