Author's name: Jonathan Lewis
Author's Email: Jonathan@jlcomp.demon.co.uk
Date written: 9th August 2000
Oracle version(s): 7.3 - 220.127.116.11
I am using Oracle SEQUENCES, but keep losing sequence numbers, why?
Loosely speaking, the answer is 'because you are supposed to'. See Further Reading below for an article that covers many of the common details of sequences, but the key point to note is that a sequence is supplied only as a non-blocking mechanism for generating unique numbers which are handed out in sequential order by an instance. To do this, a sequence is typically cached in memory as a last-used value and a high-water-mark value. It is only when the last-used value reaches the high water mark that Oracle writes a change to the database and moves the high water mark; consequently it is always possible (after an instance crash or shutdown abort, for example) to lose all the values between the last-used value and the high water mark.
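As a hypothetical demonstration of the gap between the cached value and the on-disc high water mark (the sequence name seq_demo is my own invention):

```sql
rem LAST_NUMBER in user_sequences reflects the on-disc high water mark,
rem which runs ahead of the value most recently handed out.

create sequence seq_demo cache 20;

select seq_demo.nextval from dual;        -- returns 1

select last_number
from   user_sequences
where  sequence_name = 'SEQ_DEMO';        -- shows 21: values 2 to 20 exist
                                          -- only in memory, and a crash at
                                          -- this point would lose them
```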
You can minimise the loss in a couple of ways. In the first case you can define the sequence to be non-caching, using SQL like:
alter sequence slow_sequence nocache;
The drawback to this is that every call for a new value will result in an update to the SYS.SEQ$ table - an overhead that is unlikely to be acceptable in a high-speed OLTP system. In fact I often advise DBAs to check their database for sequences which are high-usage but defined with the default cache size of 20 - the performance benefit of increasing the cache size of such a sequence can be noticeable, and a cache size of 1,000 is usually sufficient. (NOTE: increasing the cache size of a sequence does not waste space; the cache is still defined by just two numbers, the last-used value and the high water mark - it is just that the high water mark jumps by a much larger value every time it is reached.)
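As a sketch of that check (the owner and sequence names are illustrative), you could identify candidates and increase their cache size like this:

```sql
rem Find sequences still on the default cache size of 20.
rem (Query dba_sequences as a DBA, or user_sequences for your own schema.)

select sequence_owner, sequence_name
from   dba_sequences
where  cache_size = 20;

rem Increase the cache on a busy sequence - no space is wasted by this.
alter sequence app_owner.orders_seq cache 1000;
```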
Another method which may help you to avoid wasting values, though it applies only to versions of Oracle prior to 8.1, is to increase the init.ora parameter SEQUENCE_CACHE_ENTRIES. This defaults to 20 and is often too low - prior to about version 7.3, if a sequence definition was pushed out of memory to make room for a '21st' sequence, its latest value was not written back to disc. Newer versions of Oracle do a proper write-back, and the very latest ignore the sequence_cache_entries limit anyway. When this change in architecture appeared it also became possible to use the dbms_shared_pool package to keep sequences in the shared pool, and this is probably the most up-to-date option you can choose if you have problems with losing sequence numbers.
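A sketch of keeping a sequence in the shared pool (the schema and sequence names are illustrative; the dbms_shared_pool package may need to be installed by running the dbmspool.sql script as SYS):

```sql
rem The flag 'Q' tells dbms_shared_pool.keep that the object is a sequence.
execute dbms_shared_pool.keep('APP_OWNER.ORDERS_SEQ', 'Q')
```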
In some versions of Oracle you could also lose values in sequences by issuing alter system flush shared_pool. To get around this problem (and the loss due to a shutdown abort) you could alter the sequence to nocache before the command, which has the side effect of writing the current value back to disc, and then remember to set the cache back to a suitable size afterwards.
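Sketched for a hypothetical sequence, the order of commands would be:

```sql
rem Dropping to nocache forces the current value to be written to SYS.SEQ$.
alter sequence app_owner.orders_seq nocache;

alter system flush shared_pool;

rem Restore a sensible cache size afterwards.
alter sequence app_owner.orders_seq cache 1000;
```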
Apart from flushing the shared pool, you should also remember that a user may request a sequence number and then fail to use it (possibly because they rolled back the transaction that had asked for it). There is nothing you can do about this: Oracle issues sequence numbers on a fire-and-forget basis. You can, of course, redesign your application to use alternative mechanisms, but these tend to cause bottlenecks because of serialisation problems.
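A hypothetical session (the table and sequence names are my own) showing a value vanishing with a rollback:

```sql
insert into orders(id) values (orders_seq.nextval);   -- uses, say, value 42
rollback;                                             -- value 42 is gone for good
insert into orders(id) values (orders_seq.nextval);   -- gets 43, leaving a gap
```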
Remember - sequences exist to help you generate unique, roughly ordered, numbers quickly; not to guarantee an unbroken series of values. Don't expect more from them than they are supposed to supply.
Further reading: All about Sequences, a modestly titled article by Jonathan Lewis