If all else fails and you can't get the optimizer to join the tables together in an efficient order, you can break the query into multiple queries, saving the intermediate results in a global temporary table. Here is how to break our three-table join into two joins: first sales and products, and then the result of that join with customers:
SQL> create global temporary table sales_product_results
     (
       sale_date       date,
       customer_number number,
       amount          number,
       product_type    varchar2(12),
       product_name    varchar2(12)
     ) on commit preserve rows;

SQL> insert /*+ append */
     into sales_product_results
     select
       sale_date,
       customer_number,
       amount,
       product_type,
       product_name
     from sales, products
     where
       sales.product_number = products.product_number and
       sale_date between
         to_date('01/01/2012','MM/DD/YYYY') and
         to_date('01/31/2012','MM/DD/YYYY') and
       product_type = 'Cheese';

SQL> commit;

SQL> select
       sale_date, product_name, customer_name, amount
     from sales_product_results spr, customers c
     where
       spr.customer_number = c.customer_number and
       c.customer_state = 'FL';

SALE_DATE PRODUCT_NAME CUSTOMER_NAME         AMOUNT
--------- ------------ ----------------- ----------
02-JAN-12 Chedder      Sunshine State Co        100
03-JAN-12 Chedder      Green Valley Inc         200
04-JAN-12 Feta         Sunshine State Co        300
05-JAN-12 Feta         Green Valley Inc         400
Breaking a query up like this is a very powerful tuning method. If you have the ability to modify an application in this way, you have total control over how the query is run, because you decide which joins are done and in what order. I've seen dramatic run-time improvements from this simple technique.
Good start, but in many cases I guess you would like to add indexes and gather statistics(!) on the temp tables.
You are right. You can gather stats on each temp table, add indexes, and put hints on the statement.
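For instance, continuing with the temp table from the example above, the index could be built once up front and statistics gathered after each load. This is only a sketch: the index name and the hint are illustrative, and it assumes Oracle's DBMS_STATS package (note that an index on a global temporary table must be created while no session is using the table).

```sql
-- Index the join column of the temp table (illustrative index name)
create index sales_product_results_ix1
  on sales_product_results (customer_number);

-- After loading the temp table, gather optimizer statistics on it
begin
  dbms_stats.gather_table_stats(
    ownname => user,
    tabname => 'SALES_PRODUCT_RESULTS');
end;
/

-- Optionally hint the final query toward the new index (illustrative)
select /*+ index(spr sales_product_results_ix1) */
  sale_date, product_name, customer_name, amount
from sales_product_results spr, customers c
where
  spr.customer_number = c.customer_number and
  c.customer_state = 'FL';
```

With fresh statistics on the temp table, the optimizer can usually pick a sensible plan for the second query on its own, so the hint is often unnecessary.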