JL Computer Consultancy

Cached tables in Oracle 7

April 1999


Over the years the internal handling of tablescans has changed dramatically, but many people have not yet caught up with the many different ways in which Oracle decides whether or not to cache tables that are subject to scans.

This article describes the init.ora parameters which can affect table-scanning, the system statistics which are affected by tablescans, and a strategy for keeping scanned tables cached in the db_block_buffers. The article covers Oracle 7.3-ish, because Oracle 8 has been re-engineered to give better control of table-scanning and caching; Oracle 8.1 even makes several of the current init.ora parameters obsolete. The critical features of Oracle 8 are covered in a separate note.

Before going into the details, let's start with the simple question: 'I have a fairly large table that I would like to keep in the Oracle buffer permanently - how do I do it ?' The absolute answer is that there is no guaranteed way of doing it, but you can get pretty close with the following four steps:

        1) Declare the table as a cached table.
        2) Set the init.ora parameter cache_size_threshold to be comfortably larger than the table.
        3) Set the init.ora parameter _small_table_threshold to your own idea of 'small'.
        4) Scan the table.

What does this do ?

First, declaring the table as cached tells Oracle that the table should be loaded into the 'most recently used' (MRU) end of the block buffer when the table is scanned - IF the table is not over a critical size limit.
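
This first step is just a dictionary change - a minimal example (the owner and table name are invented for illustration):

        alter table scott.big_table cache;

        -- and to reverse the declaration:
        alter table scott.big_table nocache;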

If a table is larger than cache_size_threshold it will always be scanned into the LRU end of the buffer, even if it has been declared as a cached table. This is why, as our second step, we set cache_size_threshold to be comfortably larger than the table in question. By the way, most versions of the manuals say that this parameter is relevant only to Oracle Parallel Server - the manuals are wrong. The parameter defaults to 10% of db_block_buffers.
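
A quick sanity check, assuming you can query dba_segments and v$parameter as a suitably privileged user (the owner and table name are again invented):

        select blocks
        from   dba_segments
        where  owner = 'SCOTT'
        and    segment_name = 'BIG_TABLE';

        select value
        from   v$parameter
        where  name = 'cache_size_threshold';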

The third step is a safety precaution - 'small' tables are always scanned into the MRU end of the buffer, but 'small' is defined by the value of the (now-hidden but once visible) parameter _small_table_threshold. This parameter defaults to 2% of the db_block_buffers value, so it can be rather big ! To reduce the risk of a rather large 'small' table being scanned into the MRU end of the buffer and knocking out a chunk of our special table, we have the option of setting our own idea of 'small'.
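
In the init.ora the second and third steps might look like this (the figures are purely illustrative, and remember that underscore parameters are hidden for a reason - use them with care):

        # init.ora - illustrative figures only
        cache_size_threshold   = 12000    # comfortably larger than our table (in blocks)
        _small_table_threshold = 50       # our own, more modest, idea of 'small' (in blocks)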

Finally, to load the table into the buffer, we have to do a tablescan.
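
The scan itself can be as simple as a hinted count - the FULL hint makes sure the optimizer does a tablescan rather than using an index (table name invented as before):

        select /*+ full(t) */ count(*)
        from   scott.big_table t;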

Complications:

Naturally nothing is easy in Oracle, and there is much more to the behaviour of tablescans than the few details listed in the strategy above. An obvious indication of this comes from the rows in V$SYSSTAT which describe tablescans; these are:

        table scans (short tables)
        table scans (long tables)
        table scans (rowid ranges)
        table scans (direct read)
        table scans (cache partitions)
        table scan rows gotten
        table scan blocks gotten
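
You can watch these statistics with a simple query:

        select name, value
        from   v$sysstat
        where  name like 'table scan%';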

A short table is a table which is (a) less than _small_table_threshold blocks long, or (b) a cached table which is less than cache_size_threshold blocks long. A short table will be scanned into the MRU end of the buffer. This means, by the way, that if you rig the system to cache a very large table, that table will fall into the category of 'short tables' when you scan it.

A long table is a table which is not a short table (thank goodness one bit is easy)

But -- (fooled you, nothing is ever easy), if you are running with the parallel query option you may define some of your long tables to be parallel tables so that they get scanned in parallel. In this case the table is sliced up into chunks by the query co-ordinator and ranges of rowids are handed out to the parallel query slaves - each slave records a partial scan of this type under table scans (rowid ranges).
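
Declaring a parallel table is another dictionary change, or you can force a parallel scan for a single query with hints - degree 4 is an arbitrary choice for illustration:

        alter table scott.big_table parallel (degree 4);

        select /*+ full(t) parallel(t, 4) */ count(*)
        from   scott.big_table t;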

But -- (gotcha again), if direct reads are enabled (a feature introduced around 7.1.6, and invoked automatically if you have the init.ora parameter compatible set to something above 7.1.6) then the parallel query slaves will not use the db_block_buffers at all; instead they will read the table into local memory. In this case the activity will be recorded under the heading of table scans (direct read). An important side-effect of this strategy is that for read-consistency reasons Oracle has to write any dirty blocks that the query could address back to disc before the query starts - depending on the version it will do this through extent-based checkpoints, object (segment) based checkpoints, or global checkpoints.
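
The relevant init.ora line would be something like the following (match the value to your own release):

        compatible = 7.3.3.0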

Of course you may have declared the table to be a cached parallel table. In this case the scans by the parallel query slaves will be recorded under table scans (cache partitions), even if the table was far larger than cache_size_threshold and not cached at all. I assume that Oracle believes it will find a large percentage of the table in the buffer and therefore does NOT use direct reads.

The only really good thing about all these stats is that the number of rows and blocks scanned is correct (in most versions of Oracle), and no individual scan appears in more than one of the five 'table scans' statistics.

Finally I have to say that I can't guarantee that you will be able to reproduce my observations exactly - unless you are running Oracle 7.3.3.5.1 on HP-UX 10.20 with a list of about 10 patches to both Oracle and the operating system. The people at Oracle Corp. seem to spend a lot of time working on better ways of crunching through large volumes of data, and the behaviour of tablescans and the parallel query option seem to be particularly prone to change as a consequence.

By the way - if you want to see how many blocks in your buffer have come in at the LRU end of the cache through a tablescan then try the following SQL (as SYS):

        select dbafil /* use file# in v8 */, dbablk
        from   X$BH
        where  bitand(flag, 524288) = 524288;
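
If you just want a summary rather than the full list of blocks, a variation on the same query (again as SYS) will do:

        select dbafil, count(*)
        from   X$BH
        where  bitand(flag, 524288) = 524288
        group  by dbafil;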
