By Roger Ford-Oracle on May 16, 2012
Before we look into them, a quick aside on typing accented characters if you don't have them natively on your keyboard. I'm not going to go too deep into character sets and code pages, but if you're on a European Windows machine, you're probably using the WIN1252 character set.
If I want to type one of the accented characters in that set, in most Windows applications I can hold down the ALT key, then enter the four-digit decimal code for that character on my numeric keypad. So to enter a lower-case o-umlaut character ("o" with two dots above it: "ö"), I would hold down ALT and type 0246 on the numeric keypad.
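If you want to check which character a given ALT code produces, you can do it programmatically: codes entered with a leading zero are interpreted in the Windows ANSI code page (cp1252 on most Western European systems). Here's a small Python sketch of that lookup (the function name is mine, just for illustration):

```python
# Map a Windows ALT+0xxx decimal code to the character it produces.
# Codes with a leading zero use the ANSI code page, which is cp1252
# on most Western European Windows systems.
def alt_code_char(decimal_code: int, codepage: str = "cp1252") -> str:
    return bytes([decimal_code]).decode(codepage)

print(alt_code_char(246))  # ö (o-umlaut, ALT+0246)
print(alt_code_char(232))  # è (e-grave, ALT+0232)
```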
Let's start with perhaps the simplest - the BASE_LETTER attribute. If I set this to "true" then characters with accents in the text will effectively be indexed as their base form.
So for example if I index the German word "schön" (meaning beautiful in English) then the actual indexed token will be "schon". That means if the user searches for "schön" or "schon" then the word will be found (since the query term is processed in the same way).
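The effect of base-letter folding can be sketched in Python using Unicode decomposition. This is just an illustration of the concept, not Oracle's actual implementation:

```python
import unicodedata

def base_letter(word: str) -> str:
    # NFD decomposition splits "ö" into "o" plus a combining diaeresis;
    # dropping the combining marks leaves only the base letters.
    decomposed = unicodedata.normalize("NFD", word)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(base_letter("sch\u00f6n"))  # schon
```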
That makes life much easier for people who don't have the oh-umlaut character on their keyboard, and don't know the ALT trick mentioned above (or don't want to have to look up the necessary character code).
So that's simple - might as well set that option on always, right? Well, not necessarily. Because in German, the word "schon" (meaning "already") actually has quite a different meaning from "schön", and German users wouldn't want to find the unaccented word if they specifically meant to look for the accented one.
So the simple rule of thumb is this: If you think that most of your users will be searching for words for which they have the correct keyboard (for example German users searching German text) then it's best to set BASE_LETTER to "false". But if you think users might want to search foreign words for which they do not have the correct keyboard (for example English, or multi-national users searching German or mixed text) then it's best to set BASE_LETTER to "true".
Now it starts to get a little more complex. Some languages - notably German again - allow you to avoid the use of accented characters by spelling the words differently. In German, an accented character is replaced by the un-accented form of the same letter, followed by an "e". So "schön" could equally validly be spelled "schoen", and every German user would recognise it as the same word. Equally "Muenchen" (the city English speakers call Munich) would be recognized as the same as "München".
So that we can treat these alternate spelling forms as equivalent Oracle Text has the ALTERNATE_SPELLING attribute. When set to "german", Oracle Text will look for the "alternate form" of "vowel + e" and index both that and the accented form of the word. When processing a query, it will equally translate "oe" into "ö", etc, thus ensuring that the word can always be found, regardless of which of the alternate forms is searched for, and which is in the indexed text.
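The query-time side of this can be sketched as a simple digraph-to-umlaut folding. This is a deliberately simplified model (Oracle's actual German rules are more sophisticated); the function name and mapping table are mine:

```python
# Much-simplified sketch of German alternate-spelling handling at query
# time: "ae"/"oe"/"ue" are folded to the umlaut form, so either spelling
# of a word resolves to the same token.
DIGRAPH_TO_UMLAUT = {"ae": "\u00e4", "oe": "\u00f6", "ue": "\u00fc"}

def fold_alternate_spelling(word: str) -> str:
    for digraph, umlaut in DIGRAPH_TO_UMLAUT.items():
        word = word.replace(digraph, umlaut)
    return word

print(fold_alternate_spelling("muenchen"))  # münchen
print(fold_alternate_spelling("schoen"))    # schön
```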
Aside: anyone following closely might ask "why does it index both the alternate form "schoen" and the accented form "schön"? Surely it would be sufficient to just index the "schön" form?" Well, mostly it would. But what happens if the word in question was actually an English word in the middle of the German text, such as "poet"? OK, it would be indexed as "pöt" and anybody searching for "poet" would still find it (since the transformation is applied to the search term as well). But what if they used a wildcard and searched for "po%"? They would still expect to find the term, but if only the accented form was indexed, they wouldn't. Hence the reason we index both forms, just in case.
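The wildcard argument can be checked with a tiny simulation: if only the folded form "pöt" were indexed, a "po%" wildcard would find nothing, whereas indexing both forms keeps the wildcard working. (Here fnmatch's "po*" stands in for the SQL "po%" pattern.)

```python
from fnmatch import fnmatch

# Token sets for the two indexing strategies for the word "poet".
folded_only = {"p\u00f6t"}            # only the oe->ö folded form
both_forms = {"poet", "p\u00f6t"}     # original form indexed as well

# Wildcard expansion scans the token list for matches.
print(any(fnmatch(t, "po*") for t in folded_only))  # False - no hit
print(any(fnmatch(t, "po*") for t in both_forms))   # True  - "poet" matches
```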
Combining ALTERNATE_SPELLING and BASE_LETTER
OK, so we want ALTERNATE_SPELLING set to "german" for our German text. But we also know that people with non-German keyboards are often going to search it. So we turn BASE_LETTER on as well. What happens now?
If the indexer comes across "schön", ALTERNATE_SPELLING would normally index that without any change. But BASE_LETTER forces it to the unaccented form, and "schon" is actually indexed. If the indexer comes across "schoen", then ALTERNATE_SPELLING decides it should be indexed as both "schoen" and "schön". But before the tokens are written to the index, BASE_LETTER is applied, so the tokens "schoen" and "schon" are indexed.
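This two-stage pipeline - alternate-spelling expansion first, then base-letter folding on every resulting token - can be modeled with a short sketch (helper names are mine; the digraph folding is simplified as before):

```python
import unicodedata

def base_letter(word: str) -> str:
    decomposed = unicodedata.normalize("NFD", word)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

def index_tokens(word: str) -> set:
    # Stage 1, ALTERNATE_SPELLING: a word in alternate form is expanded
    # to both spellings (words without digraphs are left as-is).
    tokens = {word}
    folded = (word.replace("oe", "\u00f6")
                  .replace("ae", "\u00e4")
                  .replace("ue", "\u00fc"))
    tokens.add(folded)
    # Stage 2, BASE_LETTER: accents are stripped from every token.
    return {base_letter(t) for t in tokens}

print(index_tokens("sch\u00f6n"))  # {'schon'}
print(index_tokens("schoen"))      # {'schoen', 'schon'}
```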
That all works fine, and we can find either term by searching for "schon", "schön" or "schoen". Great!
But (there's always a but) what happens if we index the French word "Rouède" (the name of a town near the Spanish border)? "uè" is not a candidate for ALTERNATE_SPELLING, so it is left alone. Then BASE_LETTER is applied, and the word "Rouede" is written to the index. If the user searches for "Rouède" then the query-time processing works the same, the search is converted to "Rouede" and the query works fine. However, if the user searches for the base letter form "Rouede", things don't go so well. This time ALTERNATE_SPELLING does get applied to the query term (since the query processor has no way of knowing that the "e" character should be accented) and the search term is converted to "Roüde". BASE_LETTER is then applied, and it looks for "Roude" in the index. But the indexed term is "Rouede", so nothing is found.
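The failing case can be traced step by step with the same simplified model (helper names mine, digraph folding simplified):

```python
import unicodedata

def base_letter(word: str) -> str:
    d = unicodedata.normalize("NFD", word)
    return "".join(c for c in d if not unicodedata.combining(c))

def fold_alternate_spelling(word: str) -> str:
    for digraph, umlaut in (("ae", "\u00e4"), ("oe", "\u00f6"), ("ue", "\u00fc")):
        word = word.replace(digraph, umlaut)
    return word

# Indexing "rouède": "uè" is not an alternate-spelling candidate, so
# only BASE_LETTER applies and "rouede" goes into the index.
indexed = base_letter("rou\u00e8de")

# Querying the unaccented form "rouede": ALTERNATE_SPELLING sees "ue"
# and folds it to "ü" ("roüde"), then BASE_LETTER strips the umlaut,
# producing "roude" - which does not match the indexed token.
query = base_letter(fold_alternate_spelling("rouede"))

print(indexed, query, indexed == query)  # rouede roude False
```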
To solve this problem, the OVERRIDE_BASE_LETTER attribute was introduced. If you set OVERRIDE_BASE_LETTER to "true", then ALTERNATE_SPELLING will "mask" BASE_LETTER. That means that if we meet accented characters in the text which have an alternate form (such as "ö"), we will index them in their original, accented form and also in their alternate form. If we meet them in their alternate form (e.g. "Muenchen") we will index ONLY the alternate form and not transform them. Accented characters which do not have an alternate form (such as "è") have BASE_LETTER processing applied to them to transform them to their equivalent unaccented character. Then at query time, we apply only ALTERNATE_SPELLING to any appropriate accented search terms, and BASE_LETTER to all others. This has the positive effect that our previous example "Rouède" can be found, whether it is searched for with or without the accent on the "e" character.
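The masking rule can be sketched per character: accents that participate in alternate spelling are exempt from base-letter folding, while all others are still folded. (Again, a simplified illustration, not Oracle's implementation; the set of masked characters here covers only the German umlauts.)

```python
import unicodedata

# Characters that have a German alternate form; with OVERRIDE_BASE_LETTER
# these are exempt from base-letter folding.
HAS_ALTERNATE_FORM = set("\u00e4\u00f6\u00fc\u00c4\u00d6\u00dc")

def fold_with_override(word: str) -> str:
    out = []
    for ch in word:
        if ch in HAS_ALTERNATE_FORM:
            out.append(ch)  # masked: keep the umlaut intact
        else:
            # everything else still gets BASE_LETTER treatment
            d = unicodedata.normalize("NFD", ch)
            out.append("".join(c for c in d if not unicodedata.combining(c)))
    return "".join(out)

print(fold_with_override("rou\u00e8de"))  # rouede ("è" is still base-lettered)
print(fold_with_override("sch\u00f6n"))   # schön  ("ö" is preserved)
```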
It does have the negative effect that base letter searches no longer work on German words - we can't search for "schon" anymore, only "schön" or "schoen" will work. So OVERRIDE_BASE_LETTER makes sense if we want to perform ALTERNATE_SPELLING on German (or other specified language) words, and BASE_LETTER on all other languages.
Appendix: Test Script
This is the script I used to test the effects of the various options. To avoid any issues with character set translation, I used the UNISTR() function to create Unicode characters for my accented characters. Note the German words are prefixed by two-letter codes: "wa" - with accent, "na" - no accent and "af" - alternate form. That allowed me to distinguish in the index between the index terms derived from the accented word and those derived from the unaccented one.
exec ctx_ddl.drop_preference ( 'my_lexer' )
exec ctx_ddl.create_preference ( 'my_lexer', 'BASIC_LEXER' )
exec ctx_ddl.set_attribute ( 'my_lexer', 'BASE_LETTER', 'true' )
exec ctx_ddl.set_attribute ( 'my_lexer', 'OVERRIDE_BASE_LETTER', 'true' )
exec ctx_ddl.set_attribute ( 'my_lexer', 'ALTERNATE_SPELLING', 'german' )
drop table tt;
create table tt(a1 number primary key,text varchar2(45));
-- town name "Rouède", accent on the e
insert into tt values (1,'rou'||unistr('\00E8')||'de');
-- shön with accent
insert into tt values (2,'wash'||unistr('\00F6')||'n');
-- shon no accent
insert into tt values (3,'nashon');
-- muenchen alternate
insert into tt values (4,'afmuenchen');
select * from tt;
create index tta on tt(text) indextype is ctxsys.context
parameters ( 'lexer my_lexer' );
set feedback 2
select token_text, token_type from dr$tta$i;
PROMPT searching for the base letter form, without accent on the first e
select * from tt where contains(text,'Rouede')>0;
PROMPT and with the accent
select * from tt where contains(text,'Rou'||unistr('\00E8')||'de') > 0;
--select * from tt where contains(text,'muenchen') > 0;
set echo on
--select * from tt where contains(text,'nashoen') > 0;
--select * from tt where contains(text,'nashon') > 0;
--select * from tt where contains(text,'nash'||unistr('\00F6')||'n') > 0;
select * from tt where contains(text,'washon') > 0;
select * from tt where contains(text,'washoen') > 0;
select * from tt where contains(text,'wash'||unistr('\00F6')||'n') > 0;
set echo off
-- The following section shows how to see how the query has been transformed -
-- it shows the actual words looked for in the index.
drop table test_explain;

create table test_explain(
  explain_id  varchar2(30),
  id          number,
  parent_id   number,
  operation   varchar2(30),
  options     varchar2(30),
  object_name varchar2(80),
  position    number,
  cardinality number);

begin
  ctx_query.explain(
    index_name    => 'tta',
    text_query    => 'wasch'||unistr('\00F6')||'n',
    explain_table => 'test_explain',
    sharelevel    => 0,
    explain_id    => 'Test');
end;
/
col explain_id for a10
col id for 99
col parent_id for 99
col operation for a10
col options for a10
col object_name for a20
col position for 99
select explain_id, id, parent_id, operation, options, object_name, position
from test_explain order by id;