I’ve been doing performance testing against a database that uses Delphix for storage and ran across an unexpected side effect of using Delphix to clone a production database for testing. Because Delphix uses NFS to present the filesystems for the datafiles, you are required to use direct I/O, at least on HP-UX, which is our platform. But our production source database doesn’t have direct I/O turned on, so I’m seeing differences in runtimes for some queries that can be confusing if you don’t understand the reason.
If you have a query that uses direct path reads on a system with a Unix filesystem and no direct I/O, those reads can be cached in the filesystem’s buffer cache. Query the same table enough times and you eventually get direct path read times far lower than are possible on a system with direct I/O. So, in our case, a developer running a test query on a non-Delphix, non-direct I/O system can see it run many times faster than is possible on any system, Delphix or otherwise, that uses direct I/O.
This Oracle document spells out the requirements for using NFS for Oracle filesystems on various platforms:
Mount Options for Oracle files when used with NFS on NAS devices (Doc ID 359515.1)
For HP-UX datafiles you have to use:
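The MOS note is the authoritative source for the exact option list; it looks something like the following for HP-UX datafiles (illustrative only — the server name and paths are placeholders, and you should verify the options against Doc ID 359515.1):

```shell
# Illustrative HP-UX NFS mount for Oracle datafiles -- verify the option list
# against MOS Doc ID 359515.1; the server and paths here are placeholders.
mount -F nfs \
  -o rw,bg,hard,nointr,rsize=32768,wsize=32768,timeo=600,vers=3,suid,forcedirectio \
  delphix-server:/export/oradata /u01/oradata
```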
Note the inclusion of “forcedirectio”. This does just what it sounds like. It forces the NFS filesystem to be accessed using direct I/O which bypasses the Unix filesystem buffer cache. Generally direct I/O is the recommended option but when you implement Delphix you have a special situation. The main point of Delphix is to be able to easily spin up multiple clones of your production database for development and testing. This is great because it allows you to have a very realistic set of data to test against. But, if your source system doesn’t use direct I/O you have introduced a difference from production that will affect testing.
I’ve put together a simple test case to show that direct path reads can be cached at the Unix filesystem level if you don’t have direct I/O and that with direct I/O – whether on a Delphix based NFS filesystem or a SAN based filesystem that uses direct I/O – you don’t see the effect of the Unix filesystem caching. Here is the zip of the test case: zip.
I built a table big enough that 11g Oracle would do direct path reads on a full scan:
create table test as select * from dba_objects where rownum < 1001;

insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;

execute dbms_stats.gather_table_stats(NULL,'TEST');

-- do an alter move to get blocks out of the buffer cache
alter table test move;
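The zip has the exact test script; any query that full scans the table will do the job, for example (illustrative, not necessarily the exact query from the zip):

```sql
-- Any full table scan large enough to trigger serial direct path reads
-- will do; this is just an illustrative example.
select count(*) from test;
```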
Then I ran a select against this table over and over again until I got the best results possible due to caching. Here is what I got on a system without direct I/O:
EVENT              AVERAGE_WAIT
------------------ ------------
direct path read              0
This is from v$session_event, so the units are centiseconds, rounded to the nearest hundredth of a centisecond. An AVERAGE_WAIT of 0 means the direct path reads were averaging less than 100 microseconds, so the blocks must have been cached at the database server host level.
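The AVERAGE_WAIT figures in these listings come from v$session_event; a query along these lines retrieves them (illustrative — the SID predicate is an assumption, and TIME_WAITED_MICRO is there if you want microsecond granularity):

```sql
-- Illustrative query against v$session_event for one session's direct path
-- read times; AVERAGE_WAIT is in centiseconds, TIME_WAITED_MICRO in microseconds.
select event, average_wait, time_waited_micro, total_waits
from v$session_event
where sid = 123                     -- the test session's SID (placeholder)
and event = 'direct path read';
```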
Here is about the best I could get on our Delphix system:
EVENT             AVERAGE_WAIT
----------------- ------------
direct path read           .11
This is 1.1 milliseconds, which means the data was probably cached within Delphix but not in the database server’s filesystem cache.
Just for comparison here is a SAN based filesystem with direct I/O and the same test:
EVENT              AVERAGE_WAIT
------------------ ------------
direct path read            .07
So, in every case with direct I/O we were stuck at around 1 millisecond or just under, no matter how many times I ran the query to warm the caches. Without direct I/O it was ten or more times faster.
Of course, not everything will be cached like this, especially large tables that you are full scanning with direct path reads. But certain applications may run significantly differently on your source system than on your target if your source is not using direct I/O, your target is using direct I/O, and your queries are doing direct path reads.
Nice writeup on some of the surprises going from non-Direct I/O to Direct I/O.
Direct I/O, if configured with enough cache to replace the filesystem caching it circumvents, will run faster and with less CPU. But actually configuring Oracle to cache things in a way comparable to the Unix filesystem cache can be less than obvious.
AFAIK the forcedirectio requirement is for 11g on HP-UX only. I’m pretty sure I tested HP-UX, Sparc, Linux and AIX on 10g and all ran without direct I/O over NFS.
For a lot of folks, just running without Direct I/O in order to avoid any configuration changes is easier but ultimately it’s better to configure Direct I/O with adjustments for the bypassed UNIX file system cache.
Thanks for the writeup.
Oracle document 359515.1 says that Solaris requires forcedirectio also, but it looks like that is the only other one besides HP-UX.
I tried mounting the filesystem without forcedirectio and got an ORA-27054 error. It looks like you can disable the check of the filesystem options with event 10298. I tried this briefly but didn’t get very far with it.
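For reference, the form of the event setting usually cited for bypassing the mount option check is something like this (an assumption on my part — verify the level and the consequences on a test system before relying on it):

```sql
-- Commonly cited syntax for disabling the NFS mount option check (ORA-27054);
-- level 32 is the value usually quoted, but verify before use.
alter system set events '10298 trace name context forever, level 32';
```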
Pingback: Reviewing Delphix blog posts before OpenWorld | Bobby Durrett's DBA Blog