Recently I used DBMS_RESOURCE_MANAGER.calibrate_io to measure disk I/O performance using a call like this:
SET SERVEROUTPUT ON

DECLARE
  l_latency PLS_INTEGER;
  l_iops    PLS_INTEGER;
  l_mbps    PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.calibrate_io (
    num_physical_disks => 10,
    max_latency        => 20,
    max_iops           => l_iops,
    max_mbps           => l_mbps,
    actual_latency     => l_latency);

  DBMS_OUTPUT.put_line('Max IOPS = ' || l_iops);
  DBMS_OUTPUT.put_line('Max MBPS = ' || l_mbps);
  DBMS_OUTPUT.put_line('Latency  = ' || l_latency);
END;
/
This call produced a surprising result: it returns very different numbers when run against two different databases on the same Linux virtual machine, using the same filesystem.
First database:
Max IOPS = 7459
Max MBPS = 863
Latency  = 18

Second database:
Max IOPS = 39921
Max MBPS = 1105
Latency  = 0
Both databases use direct and asynchronous I/O, but they differ in size and configuration. Since they share the same filesystem on the same machine, something about the databases themselves must be affecting the results.
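To verify that both instances are really configured the same way for I/O, the relevant init parameters can be checked on each one. This is a sketch using standard Oracle parameters; on a filesystem, FILESYSTEMIO_OPTIONS = SETALL enables both direct and asynchronous I/O where the platform supports them:

```sql
-- Run on each instance and compare the output.
SHOW PARAMETER filesystemio_options
SHOW PARAMETER disk_asynch_io
```

If these match on both databases, the difference in calibrate_io results must come from something else in their configuration, such as datafile sizes or layout.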
In the end, calibrate_io did not give me useful numbers, and I wasted a lot of time trying to interpret its results. You may want to focus on other tools for measuring disk I/O performance.
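One such alternative is fio, which measures the filesystem directly and so is unaffected by database configuration. A sketch of a random-read test roughly matching Oracle's default 8 KB block size (the file path, size, and job counts are assumptions to adapt to your environment):

```shell
# Random 8 KB reads with direct, asynchronous I/O, similar to what
# calibrate_io exercises. Writes a 1 GB test file in the current directory.
fio --name=randread --rw=randread --bs=8k --size=1g \
    --ioengine=libaio --direct=1 --iodepth=16 --numjobs=4 \
    --runtime=60 --time_based --group_reporting
```

Running this against the same filesystem from outside the database gives a baseline IOPS and throughput figure that the two databases' calibrate_io results can be compared against.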