JL Computer Consultancy

Folk Lore and Fairy Tales

January 2001

This note is a list of some of the more widely held beliefs about Oracle databases which aren't (necessarily) true. The basic format for articles on this page is : Belief, Commentary, Examples. Initially, however, items may start life with just the belief statement.

A very important point to remember when reading these pages is that there are ALWAYS special cases. Many of the beliefs listed below are almost always incorrect, but sometimes they happen to be true (and in the special cases when they are true it usually doesn't matter anyway unless you're really keen to score brownie points for being a smart bunny).

A higher buffer hit ratio is a good thing. (17-Jan-2001)

The article after this one has been on the site for a long time, and points out the silliness of assuming that a high buffer hit ratio is a good thing. I recently discovered, whilst investigating a problem with an 8.1.5 system, an amusing way of increasing the buffer hit ratio that is guaranteed to increase the stress on the system. Start by logging in to SQL*Plus, creating and indexing a very small table, and running a simple select statement:

        create table t1 as
        select rownum id, rpad('x',200) padding
        from   all_objects
        where  rownum <= 100;

        create unique index i1 on t1(id);

        select /*+ index(t1) */
               substr(padding,1,10)
        from   t1
        where  id > 0;

However, before running the select statement, check the number of consistent gets done by your session (check v$sess_io, v$sesstat, or the view v$my_stats), then check again afterwards.

On the NT version 8.1.5 database with a 2K block size that I tested, the query required 27 consistent gets.

Now repeat the select statement, but before you do so change the SQL*Plus arraysize to 1 (the default on 8.1.5 seems to be 15). In my case the figure for consistent gets climbed to 114 - even though the number of logical blocks accessed remained unchanged.

The moral of the story appears to be simple - if you want to increase the buffer hit ratio on your system, all you have to do is avoid array fetches and go for single-row fetches. The fact that systems tend to go much faster whilst using array fetches is clearly an illusion.
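As a rough illustration of why the arraysize matters, the arithmetic below simply counts fetch calls. This is a deliberately crude model of my own, not Oracle's actual consistent-get algorithm; the row count and arraysizes are the ones from the test above.

```python
import math

# Crude illustrative model only - NOT Oracle's actual algorithm. Each fetch
# call has to re-establish its position in the query (re-visiting buffered
# blocks), so the number of fetch calls gives a feel for how the consistent
# gets figure can climb as the arraysize shrinks.
def fetch_calls(rows, arraysize):
    return math.ceil(rows / arraysize)

print(fetch_calls(100, 15))   # 7 fetch calls with the default arraysize
print(fetch_calls(100, 1))    # 100 fetch calls with arraysize 1
```

Every extra fetch call does some repeated work, which is why the 27 consistent gets grew to 114 without a single extra block being needed.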

I will be publishing at a later date an explanation of why array fetches decrease the consistent gets figure. But just to make things quite clear - array fetches are a good idea, single-row fetches are a bad idea, and focusing intently on a high buffer hit ratio is a bad idea.

My database must be okay, the buffer hit ratio is 98%

The buffer hit ratio has been defined in three or four slightly different ways that express logical I/O as a percentage of total I/O. The formulae usually involve the v$sysstat values for db block gets, consistent gets, and physical reads, e.g.:

        1 - ((physical reads) / (db block gets + consistent gets))
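For concreteness, the formula can be tried out on some sample statistic values (the numbers below are invented for illustration, not taken from a real v$sysstat snapshot):

```python
def buffer_hit_ratio(physical_reads, db_block_gets, consistent_gets):
    # one common form of the calculation: 1 - (physical I/O / logical I/O)
    logical_io = db_block_gets + consistent_gets
    return 1 - physical_reads / logical_io

# 2,000 physical reads against 100,000 logical I/Os gives the famous 98%
print(buffer_hit_ratio(2000, 30000, 70000))   # 0.98
```

Note that the denominator counts logical I/O, which is exactly why the ratio can be driven up by doing *more* logical work, as the examples below show.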

It is true that if a large fraction of the Oracle block gets turns into physical I/O requests then your system may become I/O bound, and a hit ratio of 90% or less can be an indication of this problem. The converse, however, is not true - a high buffer hit ratio does not mean that your database is performing well; it may simply mean that you are running some appallingly inefficient SQL that happens not to generate physical read requests.

As a demonstration of this, the following little SQL*Plus script (specially crafted for Oracle 8i, but easily modified for any other version of Oracle) will allow you to set your buffer hit ratio to any value you care to choose:

        create table jpl1 (n primary key)
        organization index     --  Eliminate this line for v7, and early v8
        as
        select rownum n
        from   all_objects
        where  rownum <= 21;   --  Adjust number (slightly) to suit

        set pagesize 0
        set linesize 20
        set trimspool on
        set termout off
        set arraysize 2000

        /* create table jpl2 unrecoverable as select count(*) X from ..... */

        select n
        from   jpl1
        connect by n > prior n
        start with n = 1;

For very small values of the rownum limit this select statement produces some delightfully entertaining arithmetic if you care to put the output to a file and include the level pseudo-column and indents. Rabbit breeders and mathematicians will see the Fibonacci sequence dropping out very early on, and deduce that the total number of rows returned is always 2^N, where N is one less than the number of rows in table JPL1. The example as given will produce 1,048,576 rows.
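The row count is easy to confirm without a database. The sketch below is a hypothetical model of the CONNECT BY clause: each output row corresponds to one strictly increasing path through the values 1..N that starts at 1, and counting those paths reproduces the 2^(N-1) figure.

```python
from functools import lru_cache

def connect_by_rows(n):
    """Count the rows produced by 'connect by n > prior n start with n = 1'
    over the values 1..n: every strictly increasing path starting at 1."""
    @lru_cache(maxsize=None)
    def paths_from(v):
        # the path ending at v is itself one row, plus every possible extension
        return 1 + sum(paths_from(w) for w in range(v + 1, n + 1))
    return paths_from(1)

# always 2 ** (n - 1): for the 21-row table that is 2 ** 20 rows
print(connect_by_rows(4), connect_by_rows(21))   # 8 1048576
```

Each value above 1 is either in a given path or not, independently, which is where the power of two comes from.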

If you have been able to use an index organized table, the number of logical I/Os will be very close to the number of rows returned, and none of the other system statistics will give the DBA a clue as to what is going on - if you include the 'create table' part of the code then the DBA won't even be able to see a suspect statement in v$sql. The only giveaway will come from the work reported by v$sess_io and other dynamic performance views.

If you have used a normal table, the logical I/O will be very much larger, and the table scan system stats will show very large numbers of short tablescans and rows scanned unless you include an /*+ index() */ hint in the SQL.

A small table is one that is 4 blocks or less.

In general, when a table is scanned the blocks are loaded into the 'least recently used' end of the db block buffer (technically, the blocks have the 'forget bit' set in the buffer flags). This ensures that when a large table is scanned it flushes itself cyclically through the buffer a few blocks at a time, rather than clearing out everything else that had been buffered up to that point.

A 'small' table is handled differently. Because it is small it does not pose a threat to the db block buffer if it is loaded into the high (MRU) end of the buffer and aged out like any other object. The system statistics 'table scans (short tables)' and 'table scans (long tables)' separate the two types of activity. Until version 7 of Oracle, a 'small' table was defined internally as a table of 4 blocks or less, so a good DBA was sometimes able to identify which table was the target of 'long tablescans' by using these figures, in combination with 'table scan rows gotten' and 'table scan blocks gotten', to estimate the rough size of the guilty table.

In version 7 the strategy changed - and in fact it has changed several times. At one stage you could choose your own block count to define a small table by setting the parameter 'small_table_threshold', but after a while this parameter went into the great list of hidden (a.k.a. underscore) parameters, with a default value of 2% of db_block_buffers.

On top of that, you could ignore the problem of counting blocks and simply declare a table to be a cached table; however you had to remember to set the cache_size_threshold if you did this, as a cached table which exceeded the cache_size_threshold was not cached at all. This was not usually a problem, as the default value for cache_size_threshold is 10% of the db_block_buffers.

Of course, Oracle 8 introduced the concept of the buffer_pool_recycle - a section of the db_block_buffers reserved for objects that you expect to use and discard rapidly - and a buffer_pool_keep for objects you want to hold onto as long as possible. To encourage you to use these extra buffer pools properly, Oracle has introduced a bug into the system which has a severe impact on performance if you use the soon-to-be-obsolete cache feature.

So, small tables are much more complicated than they used to be - the most important thing to remember though is that, at 2% of db_block_buffers, a small table can be VERY BIG.
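To put that 2% in perspective, here is some back-of-envelope arithmetic (the buffer count and block size below are invented example figures, not recommendations):

```python
# Invented example figures - adjust to match your own system
db_block_buffers = 100000
block_size = 8192                      # 8K blocks

small_table_threshold = db_block_buffers * 2 // 100    # default: 2% of buffers
print(small_table_threshold)                           # 2000 blocks
print(small_table_threshold * block_size // (1024 * 1024))   # 15 (MB)
```

On a cache of this size, a table of roughly 15 MB still counts as 'small' and gets loaded at the MRU end of the buffer on a tablescan.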

Tables of only 1 or 2 blocks do not need indexing

Some time during the life of Oracle 5, it was stated in the manuals that tables of one or two blocks did not need to be indexed, as it was cheaper on CPU to scan them than to use an indexed access to select a single row - after all, an indexed access would require at least one block access to get to the index, and a second block access to get to the table.

This argument certainly has a ring of truth to it, and I have to say that I never questioned it when I first started using Oracle 5; however it didn't take me long to notice with Oracle 6 that sometimes SQL running against unindexed two-block tables was running very slowly and consuming huge amounts of CPU - even when the access path seemed to be the ideal path. Adding the appropriate index to these very small tables miraculously cured the problem!

So is there any truth in the statement that very small tables don't need indexes? Yes there is - if the small table contains only a small number of rows, and if the column used to join to the table is very short, then the difference in performance between a tablescan nested loop and an index access nested loop might be quite small. (On the other hand, I would rather waste a little space than give the optimiser an extra reason for taking an undesirable execution path.)

If you need to see the effect of failing to index a small table, consider this - a single 16K block could hold 200 reference rows of 80 bytes each, so each probe of a nested loop join onto that single-block table could require Oracle to work its way through all 200 rows, doing up to 200 comparisons.
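The arithmetic behind that worst case is simple enough to check (the sizes below are the illustrative ones from the text, not measurements):

```python
block_size = 16 * 1024      # a 16K block
row_size = 80               # bytes per reference row (illustrative)

rows_per_block = block_size // row_size
print(rows_per_block)       # 204 - so 200 rows is a conservative figure

driving_rows = 10047        # e.g. the row count of a larger driving table
# without an index, each probe may compare against every row in the block:
print(driving_rows * 200)   # 2009400 - roughly two million comparisons
```

Two million comparisons to join to a table that fits in one block is the sort of CPU cost that a one-block index access per probe avoids.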

The following code (which I ran on a 2K block database) is a slightly artificial example of how bad things can get if the optimiser finds that it has to add in a column from a small, unindexed, reference table.

        create table jpl1      -- 80 rows, just fitting into a single block.
        pctfree 0
        as
        select trunc(sysdate)+rownum id,
               rpad('x',10) padding
        from   all_objects
        where  rownum <= 80;

        create table jpl2      -- 10,047 rows
        pctfree 0
        as
        select trunc(sysdate)+mod(rownum,80)+1   id_fk,
               rownum id,
               rpad('x',40) padding
        from   all_objects;

        select /*+ ordered use_nl(jpl1) */
               count(jpl1.padding)
        from   jpl2, jpl1
        where  jpl1.id = jpl2.id_fk;

CPU time without an index on jpl1 - 6.27 seconds, or 6.35 seconds if id/id_fk are changed to varchar2() in the form YYMMDD.

CPU time with a primary key index on jpl1 - 1.2 seconds, unchanged by changing the type of id/id_fk.

On even a relatively insignificant query, an index on a single block makes quite a difference unless the number of rows in the block is really very small.

Rewrite IN subqueries as EXISTS subqueries, they work faster

Read the tuning manual


Recursive SQL is a bad thing

But only if you are still running version 5


You get optimum performance if a table or index fits inside a single extent

This belief may have been the cause of more wasted time and effort than any other misconception about Oracle.


It's a good idea to set the default PCTINCREASE to 1 on your tablespaces to get automatic coalescing

And it is another one of those little things that helps to keep your DBA busy.


Back to Main Index of Topics