JL Computer Consultancy

The pre-conceptual scientist

June 2005


I’ve just been reading some comments that Don Burleson has written about a presentation I gave at the IOUG-A conference this year. The presentation was one of seven listed by Burleson as “The best of IOUG Live! 2005”, and was the one on “Rebuilding Indexes – Why, When, How?”

Unfortunately, apart from the extracts cut and pasted from the whitepaper that went with the presentation, I barely recognized it.

After an enthusiastically exaggerated opening paragraph (which taught me that ‘standing room only’ probably means ‘the empty seats are near the front’), the article makes the following comment:

There has been some debate regarding rebuilding indexes, with experts claiming that Oracle indexes rarely need rebuilding, while acknowledging that high-DML (insert, update, delete) indexes can become sub-optimal very quickly and require rebuilding.

This could almost be a good introductory paragraph – after all, people who understand how Oracle’s B-tree indexes work have been saying for many years that there is generally little benefit to be had by rebuilding indexes regularly. Unfortunately the paragraph is spoilt by the second comment about ‘high-DML indexes’ (I think we have to assume that this garbled phrase is supposed to mean indexes on tables that are subject to high rates of change). The people who understand Oracle’s B-tree indexes have also been saying for many years that there are types of activity that can make B-tree indexes become inefficient – but it’s not the rate of change that is the problem, it’s the nature of the activity – so it’s sloppy thinking and very misleading to suggest that ‘high-DML’ is the issue, and clearly not a position that would be held by an expert.

The next paragraph starts

Lewis' theme was that index rebuilding can be risky when performed improperly, using questionable tactics such as re-using index tablespace. 

No, that wasn’t my theme. My theme was that too many people waste time and resources that they can’t afford rebuilding B-tree indexes that do not need to be rebuilt. Moreover, they waste the time because they have heard and believed sweeping generalizations such as the “high-DML” invention that Don Burleson has put forward.

I happened to raise as one critical issue the fact that when you run an automatic sweep to rebuild indexes, random sessions may crash with Oracle error ORA-01410 as a side effect. But any index rebuild has the potential to crash other sessions – unless you happen to have stopped all other database activity that might cause space allocation to occur in the tablespace where the selected index happens to be.

In passing, it is interesting to note that Don Burleson has been a great proponent of rebuilding indexes regularly, so it’s not surprising that he should attempt to re-direct your attention by suggesting that the problem lies in ‘questionable tactics’ rather than the whole concept of pointless rebuilds. Moreover, I don’t think he has ever suggested in the past that you should avoid “re-using index tablespaces” when rebuilding indexes, or made any suggestions about what might constitute “questionable tactics”.

After an extract from the whitepaper, the review says:

The presentation then went-on to discuss how to detect when an index would benefit from rebuilding.  Lewis successfully argued that the index height is not a factor to consider when determining index fragmentation and Lewis argued that "index packing" might be a primary factor. 

Three significant errors in two sentences is quite a feat – but Don Burleson has managed it.

        I did not discuss how to detect when an index would benefit from rebuilding, I discussed how to detect when an index might benefit from rebuilding – and stressed more than once that the numbers you get from (sensible) scripts could give you a clue about this, but I argued that it was knowledge of the application that allowed you to decide whether that clue was relevant or misleading.

        In the presentation I gave a live demonstration of how the combination of an unfortunate index definition and an unlucky data-handling process could make an index suffer such catastrophic block splitting that it reached the maximum legal height for an Oracle B-tree index after just 25 row insertions, crashing the session as it did so. I fail to understand how this could be interpreted as a successful argument that “height is not a factor”. Height can be an important factor – and I suspect that most people in the audience probably picked up on that point. I did, however, point out that rebuilding an index was in most cases unlikely to change the index height. In passing, Don Burleson is well known for his excessive zeal for moving indexes into tablespaces with the largest possible block size because this ‘flattens the structure of the index’ – so where does that leave him if he now thinks that I’ve successfully argued that height isn’t a factor – will he recant on indexes and big blocks?

        I did not use the term “Index Fragmentation” – and you will note that Don Burleson offers no definition for the term – I spoke about the efficiency of the index, and explained what I meant by efficiency. It is very important to be precise with your use of words – sloppy use of words leads to misunderstanding. To demonstrate this point, I started the presentation with a little quiz about indexes and one question I asked was whether the rule-based optimizer could use a function-based index. Most of the audience thought not. But the name “function-based indexes” is wrong! Call them “indexes with virtual columns” and you might immediately wonder what happens if you have a two-column index where only the second column is a virtual column. The rule-based optimizer will be able to use such an index. Precision with words is important.
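
To make that last example concrete (this is just an illustrative sketch, and the object names t1 and t1_i1 are invented for the purpose), such an index might be created like this:

 create table t1 (col1 number, col2 varchar2(30));

 -- The data dictionary will report this as a function-based index, but it
 -- is really an index with one real column and one virtual column.
 create index t1_i1 on t1(col1, upper(col2));

A predicate on col1 alone gives the rule-based optimizer a perfectly legal path into the index – which is much easier to see once you stop calling it a “function-based index” and start calling it an “index with a virtual column”.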

After a lengthy extract from the whitepaper, the review then comments:

Of course, there were some areas where other experts disagreed with some of his conclusions, most notably:

        [quoted from whitepaper] "If an index needs constant care and attention, is this a clue that you really need to be fixing a design error."

That’s not a conclusion; it’s a rhetorical question – easy to miss, of course, because I forgot to put a question mark at the end. However, Don Burleson then says that “Other experts have noted that high-DML environments require regular index maintenance and there is no work-around for having an index on highly-volatile tables”. I assume that “there is no work-around for having an index on highly-volatile tables” is trying to say “there is no alternative to rebuilding indexes if they have been created on highly-volatile tables”. Yes there is – think very carefully about whether you are rebuilding the index unnecessarily; but if the rebuild really does seem to be necessary think very carefully about whether you have a design error that could be fixed. I get to see quite a lot of environments with high rates of DML and highly-volatile tables – I don’t get to see lots of environments where regular index maintenance needs to be done on anything but the odd special case index. If you use “high-DML” and “highly-volatile tables” as the basis for your decision to keep rebuilding, then you may have rebuilt those indexes which you ought not to have rebuilt, and you may not have rebuilt those indexes that you ought to have rebuilt.

        [quoted from whitepaper] “Rebuilding indexes can be expensive, intrusive and risky.”

I don’t think anyone who wants to claim the title of expert would want to contradict this statement – at least, not if they noticed the phrase “can be”. To disagree with this statement, you would have to hold the position that rebuilding indexes is never expensive, intrusive and risky. Don Burleson, however, thinks he has refuted the statement with the argument that index rebuilding is very low-risk when performed according to (unspecified) “DBA best-practices”, especially if done on databases with weekly downtime windows. That falls a long way short of “never”, and fails to address the issues of “expensive” and “intrusive”. (Down-time is pretty intrusive to most people these days, and I’m sure I’ve seen Don Burleson pontificating about how all the databases he sees are so important that an hour’s downtime costs hundreds of thousands of dollars).

Maybe downtime doesn’t count as intrusive if it’s planned. But is an environment that is “high-DML” (one where, apparently, there is “no workaround for having an index”) the sort of environment that is likely to have weekly downtime?

Funnily enough one of the comments I made early on in the presentation was that if you have hours every night when nothing else is happening, then rebuilding all your indexes probably won’t cause much harm – so it seems that I am one of the experts contradicting me, according to the Don Burleson school of argument.

The next point in the review said:

Lewis also helped to dispel the assertion that index height should be a factor to consider when deciding to rebuild an index:

Of course, Don Burleson has already made this claim once in the article – stating it again doesn’t make it more accurate. Moreover, the following paragraph is quoted as the source of his idiosyncratic interpretation:

        [quoted from whitepaper] "You have to be quite lucky (or unlucky) to be in the position where an index rebuild reduces the height of a B-tree index and *** a visible performance benefit. But that's why you should predict, then monitor, the effect.”

[*** ed: the original whitepaper omitted the word “produces” at this point]. To me, that paragraph seems to be saying that height can be affected, and height can have a performance impact – but it’s not very likely. (And I can repeat things too, look: the live demonstration proved that height can make a massive difference.)
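
As a simple illustration of “predict, then monitor” (a generic sketch only – it is not a quote from the whitepaper, and the index name is invented), you could compare the height-related figures reported by the data dictionary before and after a rebuild:

 -- blevel is one less than the index height, and is only as current
 -- as the most recent statistics collection on the index.
 select index_name, blevel, leaf_blocks
 from   user_indexes
 where  index_name = 'T1_I1';

 -- Alternatively, at the cost of locking the underlying table:
 -- analyze index t1_i1 validate structure;
 -- select height, lf_blks, pct_used from index_stats;

In most cases, of course, the two sets of figures will show exactly the same height – which is rather the point.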

The next comment from the reviewer was:

Lewis concluded with some warnings about flippant index rebuilding strategies:

Bear in mind that my translation for the expression “flippant index rebuilding strategies” would be: “every strategy I have ever seen that describes a mechanism for automatically rebuilding indexes on a regular basis”. The only “non-flippant” index rebuilding strategy I know is the one that goes: “Don’t do it unless you’re pretty sure that it’s a good idea – and if you do do it, check whether it really was a good idea”.

The closing shot of the review was a quick rave about the sys_op_lbid() function because it:

offers a method for bypassing the expensive "alter index validate structure" commands to gather index density information.

My script certainly bypasses the validate command, and is much friendlier because it doesn’t lock the underlying table, and it produces better quality information because it doesn’t crunch all the detail down into a single meaningless number – but as far as expense goes, it reads index leaf blocks, so it’s going to be pretty expensive unless you take a small sample. Of course, indexes being the ordered, dense structures that they are, a small sample is more likely to give misleading results. I can’t be 100% certain, but I am fairly confident that in the presentation I said something like “bear in mind this is an expensive bit of SQL, so don’t run it casually against every index in the system”.
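
For anyone who has not seen the whitepaper, the general shape of such a query is sketched below. This is not the script itself: the names t1, t1_i1, col1 and col2 are place-holders, the literal 12345 stands for the object_id of the index (from user_objects), and the hint and predicates exist only so that the query can legally be driven through the index.

 select
        keys_per_leaf, count(*)  as leaf_blocks
 from   (
        select  /*+ index(t t1_i1) */
                sys_op_lbid(12345, 'L', t.rowid)  as leaf_block,
                count(*)                          as keys_per_leaf
        from    t1 t
        where   col1 is not null
        and     col2 is not null
        group by
                sys_op_lbid(12345, 'L', t.rowid)
        )
 group by keys_per_leaf
 order by keys_per_leaf
 ;

The inner query counts the index entries that map to each leaf block; the outer query turns that into a frequency distribution – which is where the better quality information comes from, and also why the query has to visit so many leaf blocks.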


The Saga Continues

In a related article, ostensibly about my suggested use of the sys_op_lbid() function, Don Burleson has this to say:

The current debate over when to rebuild indexes has reached some consensus with most Oracle experts agreeing:

        Oracle 10g will soon automate index rebuilds

        Sparse blocks matter

        Height is not an issue

        Big blocks help

The first statement is very strange. How can “experts agree” that Oracle 10g will soon automate index rebuilds? It’s a marketing claim which may or may not become fact. If Oracle 10g does automate index rebuilds, the experts can consider whether or not the implementation works well. The statement is simply Don Burleson’s standard hand-waving argument: “Oracle 10g is going to do automatic index rebuilding, so whatever I’ve said about automatically rebuilding indexes must be right.” (Identification of the classic logical inconsistency is left as an exercise for the reader – but here’s a clue: Oracle 10g might do some sensible analysis first).

The supporting comment for the second statement is interesting; it says: “The most likely candidate indexes are those that experience massive delete operations, leaving "sparse" index blocks.” (No definition of “sparse”, of course.) But in the article about the presentation, published at the same time, the reviewer said that it was “high-DML” and “highly-volatile tables”, and made no comment about massive delete operations. So which experts are agreeing with which generalizations? Unsurprisingly, the “massive delete operations” claim is yet another sweeping generalization which could leave you rebuilding indexes that do not need to be rebuilt, and ignoring indexes with exactly that same “sparse” block condition (assuming that “sparse block” means “small percentage of space used in each leaf block”) that do need some maintenance.

The third statement is, of course, managing to misquote my presentation again – fortunately without an attribution – but it’s still wrong, even though it’s been said three times.

The supporting comment for the fourth statement is at least partly true (“larger index block size can reduce logical I/O for …”) but trite and dangerously misleading; and its reference to throughput is dubious in the extreme (although perhaps an explanation from the author of what he means by throughput might clarify the issue).

The entire article seems to exist to re-iterate the script from my presentation – but unfortunately Don Burleson has been unable to resist the temptation to attach my name to an article that also re-iterates some of his pet theories. (“Proof by irrelevant association” is a phrase that springs to mind).

Don Burleson finally gets to the question “So, how do we measure sparse index blocks?”, and quotes my scripted use of the sys_op_lbid() function. (In passing, he says, yet again, and still incorrectly, that I’ve said that the index height doesn’t matter – four times, but don’t worry, it’s still not true.) But the crowning glory of the article is this comment:

You simply encapsulate the above SQL into a procedure (i.e. rows_per_block) and call it, passing the index name as an argument:

 select 
    "exec rows_per_block("||index_name||");"
 from
    all_indexes;

First problem – this seems to be a SQL script to write a SQL script that you then have to run. This is bad practice for a production system, and should never be encouraged. Consider what might happen if you run a job that creates a script, but someone else manages to run the same job just before you manage to start the script – you will be running a script generated by someone else, which is at best going to produce unpredictable results. Remember – most Oracle systems are multi-user systems.

Second problem – see the double-quotes – " – that’s not valid SQL syntax. If you’ve managed to write a ‘simple encapsulating procedure’ then this SQL statement will produce Oracle error ORA-00904: invalid identifier.
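
Just to illustrate the syntax point – and emphatically not as an endorsement of the generate-a-script-and-run-it approach, or of the hypothetical rows_per_block procedure – string literals in Oracle SQL use single quotes, with embedded quotes doubled, so the statement would have to look more like this:

 select
    'exec rows_per_block(''' || index_name || ''');'
 from
    all_indexes;

Which still leaves all the other problems in place.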

Third problem – when you get around to “simply encapsulating” the SQL, you may find that it’s not so simple. If you check the original SQL, it queries a table, not an index, so your first task will be to find the table that the index belongs to. Then you have to identify the columns in that index, because for some reason my SQL happens to use a not null predicate on a couple of columns (they are there to ensure that the SQL can legally be hinted to use the index). Moreover, my whitepaper only tells you how to use the function on simple B-tree indexes, not on IOT’s, secondary indexes on IOT’s, or Partitioned Indexes – and the suggested SQL (see second problem above) hasn’t excluded any of these (let alone bitmap indexes).
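
If you did decide to write such a procedure, the table name, the column list and the awkward index types would all have to come from the data dictionary – something like the following sketch (the owner and index name are invented, and this is not a complete solution):

 select  table_owner, table_name, index_type, partitioned
 from    all_indexes
 where   owner      = 'SCOTT'
 and     index_name = 'T1_I1';

 select  column_name, column_position
 from    all_ind_columns
 where   index_owner = 'SCOTT'
 and     index_name  = 'T1_I1'
 order by
         column_position;

The index_type and partitioned columns are the ones that would let you exclude the bitmap, IOT and partitioned cases that the whitepaper doesn’t address.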

Fourth problem – one of the points I made in the presentation was that this function is labour-intensive, and should not be used casually against every index in your system. In fact, the entire presentation was about not wasting time and resources. Attempting to hammer the system to death by smashing through every index with the sys_op_lbid() function is not a good way to avoid wasting resources.


Conclusion

Don Burleson may have rated my presentation as one of the best at IOUG Live! 2005. But if I believed that the rest of the audience had managed to misunderstand it as badly as he did, then I’d have to rate it as a complete disaster.

Footnote

When I wrote this article, there was a very witty cartoon at this site which featured a young child explaining that they were planning to be a pre-conceptual scientist when they grew up. A pre-conceptual scientist decides what the answer is before looking at the evidence, and when the evidence highlights the error in the answer the pre-conceptual scientist ignores the evidence.

