JL Computer Consultancy

The 10g Plan Table

Dec 2005

Here’s a little detail that may add some extra insight to your execution plans when you use autotrace, or any variation of explain plan, in 10g. It comes courtesy of Siim Kask, one of the people attending my trouble-shooting tutorial in Estonia this month.

In 10g there are two scripts that create plan tables. One is $ORACLE_HOME/rdbms/admin/utlxplan.sql, which creates a table called plan_table in your current schema. The other is $ORACLE_HOME/rdbms/admin/catplan.sql, which is usually called during database installation from catproc.sql; this script creates a global temporary table (GTT) called plan_table$ in the sys schema with the “on commit preserve rows” option, grants access to public, and creates a public synonym plan_table for it.  (If you are not running 10g it is a smart idea to copy this approach, although you may feel happier creating the table in the system schema.)

Both scripts create various columns as type numeric, e.g.

        cost        numeric,
        cardinality numeric,
        bytes       numeric,

        cpu_cost    numeric,
        io_cost     numeric,
        temp_space  numeric,

When you look at the definitions using the SQL*Plus describe command, numeric turns into number(38), which means no decimal places. But an interesting thing happens if you change the cost and cpu_cost columns to allow higher precision, number(38,10) say: you discover that the code that generates the plan will actually populate these columns with high-precision values (although 10 d.p. on the cpu_cost is probably pointless). For example:


rem   This example starts with a one-off change to
rem   the global temporary table owned by SYS.

alter table plan_table$ modify cpu_cost number(38,10);
alter table plan_table$ modify cost     number(38,10);


explain plan for
select  *
from    source$
where   obj# = 33
;

select
      lpad(' ',depth*2) || operation "Operation",
      object_name,
      cost, io_cost, cpu_cost
from  plan_table
order by id
;




Operation            OBJECT_NAME                COST    IO_COST   CPU_COST
-------------------- -------------------- ---------- ---------- ----------
SELECT STATEMENT                          20.0820252         20   479388.8
  TABLE ACCESS       SOURCE$              20.0820252         20   479388.8
    INDEX            I_SOURCE1            5.03565924          5   208407.2

3 rows selected.


If you are wondering where the final cost of 20.0820252 came from, the first thing to check is the setting for the CPU speed and single block read time (see below), which happen to be 671 (million operations per second) and 8.71 (milliseconds) in this case; then:

            cost    =   io_cost + (cpu_cost / 671,000,000) * (1 / 0.00871)
                    =   20 + (cpu time in seconds) * (single block reads per second)
                    =   20 + 0.08202518308150985300483 ...

Having this extra precision (perhaps to fewer decimal places) may be helpful in some cases, because the typical output from autotrace and dbms_xplan otherwise has to allow for rounding errors on a line-by-line basis. The code carries the high precision into subsequent multiplications, so when each line is rounded separately the reported numbers can look contradictory.
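As a quick sanity check, the arithmetic can be restated in a few lines of Python (a sketch of the formula only, not Oracle code; the function name is my own). The inputs are just the io_cost and cpu_cost from the plan table, plus the CPUSPEED and SREADTIM figures from aux_stats$:

```python
def full_cost(io_cost, cpu_cost, cpuspeed, sreadtim):
    """Restate the cost formula from the note above.

    cpuspeed is in millions of operations per second (CPUSPEED),
    sreadtim is the single block read time in milliseconds (SREADTIM).
    """
    cpu_seconds = cpu_cost / (cpuspeed * 1_000_000)   # CPU time for the operation
    reads_per_second = 1000.0 / sreadtim              # single block reads per second
    return io_cost + cpu_seconds * reads_per_second

print(full_cost(20, 479388.8, 671, 8.71))
```

Feeding in the values from the example above (io_cost 20, cpu_cost 479388.8, CPUSPEED 671, SREADTIM 8.71) reproduces the 20.0820252 shown in the plan output.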


select      pname, pval1
from        aux_stats$
where       pname in ('CPUSPEED', 'SREADTIM')
;



PNAME               PVAL1
-------------- ----------
SREADTIM             8.71
CPUSPEED              671

2 rows selected.


Addenda (17th Dec 2005)

This note produced a flurry of interest and email.

The first point was highlighted by Connor McDonald. If you adopt the global temporary table approach, you must make sure it uses the ‘on commit preserve rows’ option (which I have added to the notes above). This is necessary for two reasons.

First, there are various examples of queries (using subquery factoring, grouping sets, or star transformations, for example) which can create a form of global temporary table to hold intermediate results. The execution plan for the DML statement populating these temporary objects is inserted into the plan table by a recursive operation using an autonomous transaction, which means you get a recursive commit; that commit would immediately lose the recursive plan if the plan table had been created with ‘on commit delete rows’.

Secondly, you don’t just lose the recursive plan; you also get unexpected Oracle errors as the explain plan call runs:

ORA-00604: error occurred at recursive SQL level 1
ORA-14450: attempt to access a transactional temp table already in use

Another little detail was revealed by Melanie Caffrey in a comment (http://tkyte.blogspot.com/2005/12/every-day.html#comments) on Tom Kyte’s blog, where he had supplied a URL to this article. It looks as if this feature isn’t really a feature; it’s a bug which has been fixed in 10g release 2, so you no longer get fractional costs reported in the cost column. That’s a nuisance, but not a problem, because you can always take the arithmetic I’ve demonstrated above using the cpuspeed and sreadtim and include it in your own scripts to report on the plan table.

Finally, a little accidental discovery – you can get fractional costs in 8i, 9i, and 10g, even with CPU costing disabled. I was re-running some tests of the way that index joins work, and discovered that (amongst other oddities of the costing) the value reported in my modified plan_table was showing a small fractional component in the cost column. Following this up, I re-ran some of my tests for bitmap index costing, and found that the fractional parts of the cost that I had previously been digging out of 10053 trace files were, indeed, being reported in the plan_table. Too bad that they disappear again in 10gR2.

Addendum (30th Dec 2005)

Clearly one should never say “finally” about Oracle – there’s always something more to discover. Here’s a 10g execution plan from dbms_xplan.display(). Note how the cost column includes a (%CPU) figure in parentheses:


---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |       |   329   (1)| 00:00:04 |
|   1 |  SORT AGGREGATE    |      |     1 |       |            |          |
|   2 |   TABLE ACCESS FULL| T2   | 48000 |       |   329   (1)| 00:00:04 |
---------------------------------------------------------------------------


When I redefined the cost column to be number(38,10) instead of the default number(38), I got a string of decimal places after the 329, and the (1) was pushed out of existence. Since that CPU percentage can be a useful clue to where time could be spent, you may not want to lose it.
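For what it’s worth, that (%CPU) figure appears to be the CPU component’s share of the total cost, rounded up to a whole percentage. The ceil-based formula in this sketch is my own reading of the dbms_xplan output, not something documented in this note:

```python
import math

def cpu_percentage(cost, io_cost):
    # Hypothetical reconstruction of the (%CPU) column:
    # the CPU component's share of the total cost, rounded up
    # to the next whole percent.
    if cost == 0:
        return 0
    return math.ceil(100.0 * (cost - io_cost) / cost)

# Using the fractional cost from the earlier source$ example:
print(cpu_percentage(20.0820252, 20))    # prints 1
```

With the full-precision cost of 20.0820252 and an io_cost of 20, the CPU share is about 0.41%, which rounds up to the (1) you would see in the Cost (%CPU) column.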
