In an earlier post I showed that query high compression eliminated row chaining in tables with more than 255 columns. That prompted a question: do updates to a table with more than 255 columns run slower with query high compression than on an uncompressed table?
The answer is that, at least in the tests I ran, updates of a table with more than 255 columns took much longer with query high compression than without it.
Here is a zip of my test scripts and their logs: zip
I took a table with 256 columns and populated it with 1,000,000 rows filling all columns with data. Then I updated the first column, the last column, and then all columns in three separate transactions.
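The setup can be sketched like this. The table and column names here are placeholders and the DDL and insert are generated dynamically to avoid typing 256 column names; this is the shape of the test, not necessarily my exact scripts (for those, see the zip). The uncompressed baseline is the same table created without the compress clause.

```sql
-- Build a 256-column table compressed for query high and load
-- 1,000,000 fully populated rows. The direct-path (append) insert is
-- what gets the rows HCC compressed on load.
declare
  ddl varchar2(32767) := 'create table test256 (c1 number';
  ins varchar2(32767) := 'insert /*+ append */ into test256 select n';
begin
  for i in 2 .. 256 loop
    ddl := ddl || ', c' || i || ' number';
    ins := ins || ', n';
  end loop;
  ddl := ddl || ') compress for query high';
  ins := ins || ' from (select level n from dual connect by level <= 1000000)';
  execute immediate ddl;
  execute immediate ins;
  commit;  -- must commit the direct-path insert before further DML
end;
/

-- The three timed updates, each in its own transaction:
set timing on
update test256 set c1 = c1 + 1;
commit;
update test256 set c256 = c256 + 1;
commit;
-- the all-columns update sets c1 through c256 in one statement and can
-- be generated dynamically the same way as the DDL above
```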
Here are the results:
                     Column 1     Column 256   All Columns
Uncompressed run 1   00:00:11.81  00:00:24.20  00:00:57.89
Uncompressed run 2   00:00:12.25  00:00:23.94  00:00:59.28
Compressed run 1     00:02:27.20  00:02:27.65  00:02:50.08
Compressed run 2     00:02:26.83  00:02:26.33  00:02:47.88
I don’t have the time or motivation to dig any deeper into the causes of this, but here is what I think is going on based on these results.
- Row chaining in the uncompressed table with more than 255 columns causes the update of the last column or all columns to be slower than updating just the first column.
- Updating any column, or all columns, of a query high compressed table uncompresses the updated rows (I think they remain compressed for OLTP, but I did not check).
- The time it takes to uncompress the table during the update is much more than the time taken to update the uncompressed table, even in situations where we have row chaining.
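One way to check my guess about the compression state of the updated rows (which I did not do) is DBMS_COMPRESSION.GET_COMPRESSION_TYPE, which reports the compression type of an individual row. Here is a sketch, assuming a test table named TEST256 in the current schema; the constant names are the 11.2 ones and vary by version:

```sql
set serveroutput on

declare
  v_type number;
begin
  -- sample a few rows after the update and report each one's
  -- compression type
  for r in (select rowid rid from test256 where rownum <= 5) loop
    v_type := dbms_compression.get_compression_type(
                ownname => user,
                tabname => 'TEST256',
                row_id  => r.rid);
    dbms_output.put_line(
      r.rid || ' => ' ||
      case v_type
        when dbms_compression.comp_nocompress     then 'no compression'
        when dbms_compression.comp_for_oltp       then 'OLTP'
        when dbms_compression.comp_for_query_high then 'query high'
        else 'other (' || v_type || ')'
      end);
  end loop;
end;
/
```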
Maybe someone could construct a case where the time lost to row chaining exceeds the time taken to uncompress the rows for the update, but in my test it isn’t close: the updates of the compressed table take more than twice as long as the updates that experience row chaining.
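For the row chaining side of the comparison, the "table fetch continued row" session statistic counts fetches that had to follow an additional row piece. A sketch of how you could measure it, again assuming a test table named TEST256 (I did not run this as part of these tests, and depending on the Oracle version, intra-block row pieces may or may not be counted here):

```sql
-- snapshot the statistic, touch the last column, then diff
select s.value
from   v$mystat  s
join   v$statname n on n.statistic# = s.statistic#
where  n.name = 'table fetch continued row';

-- forces a fetch of the last row piece for every row
select count(c256) from test256;

-- rerun the v$mystat query; the increase is the number of
-- continued-row fetches caused by the chained rows
```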