Another day, another slow query. This time customers are complaining that queries of the form:
select * from some_table where text_column = 'some value';
are taking forever to execute.
So you try and slap an index on the column. Only to be met with:
create index text_index on some_table ( text_column );

ORA-01450: maximum key length (6398) exceeded
You what now?!
The maximum size of the string is 4,000 characters. How can that not fit in 6,398 bytes? And more importantly, how do you avoid the error?
In this post you’ll find out:
- Why you get ORA-01450 errors
- Solutions
But first, why would a 4,000 character string complain that it needs 6,398 bytes?
What Causes ORA-01450 Errors
The smallest unit of data storage in Oracle Database is the block. This defaults to 8k (8,192 bytes).
Each index entry must fit within one block. So the maximum size of an indexed value must be less than the block size. Minus some storage overheads. In a default installation this means the largest indexable value is around 6,400 bytes.
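If you're unsure which block size your database uses, you can check the db_block_size parameter (assuming you have access to v$parameter):

select value
from v$parameter
where name = 'db_block_size';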
But… why does this mean you can’t index a varchar2 (4000)?!
In older releases of Oracle Database, the maximum size of a varchar2 was 4,000 bytes. So you could only hit this problem when creating an index spanning multiple of these columns:
create index multi_col_index
on some_table (
vc_4000_byte_1, vc_4000_byte_2
);
ORA-01450: maximum key length (6398) exceeded
This changed in Oracle Database 12c with the introduction of extended data types. This increased the maximum size of a varchar2 to 32,767 bytes.
Critically, if you’re using the character set AL32UTF8 for your database (the default from 12.2), each character could consume up to 4 bytes of storage. So if you declare a varchar2 with character semantics, the maximum size string could be 16,000 bytes!
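For reference, the examples in this post assume a table along these lines (a sketch; the 16,000-byte figure also needs max_string_size = extended, otherwise the byte limit is capped at 4,000):

create table some_table (
  vc_4000_byte_1 varchar2(4000 byte),
  vc_4000_byte_2 varchar2(4000 byte),
  text_column    varchar2(4000 char)
);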
You can verify this by checking data_length in the *_tab_cols views:
select data_type, data_length, char_length
from user_tab_cols
where table_name = 'SOME_TABLE'
and column_name = 'TEXT_COLUMN';

DATA_TYPE   DATA_LENGTH   CHAR_LENGTH
VARCHAR2          16000          4000
Clearly this can’t fit in 8k. So what can you do about it?
Upgrade to Oracle AI Database 26ai
In Oracle AI Database 26ai you can create indexes where the largest possible value is too big to fit in a block:
create index multi_col_index
on some_table (
vc_4000_byte_1, vc_4000_byte_2
);
Index MULTI_COL_INDEX created
Provided the total length of the values you’re adding fits in a block, you can insert them:
insert into some_table ( vc_4000_byte_1, vc_4000_byte_2 )
values ( 'short', 'short' );

1 row inserted.
But beware. The maximum key length still exists. It’s now a DML check instead of a DDL check.
This insert adds values totalling 8,000 bytes. Clearly this is greater than the 6,397-byte limit, so it raises an error:
insert into some_table ( vc_4000_byte_1, vc_4000_byte_2 )
values ( rpad ( 'long', 4000, 'g' ), rpad ( 'long', 4000, 'g' ) );

ORA-01450: maximum key length (6397) exceeded
Many columns are declared with a large varchar2 length "just in case" and are unlikely to bump into ORA-01450 issues. In these cases, it's better to stop people storing large blocks of text in the first place: decrease the column's length as discussed next.
If customers genuinely need to store long strings, this will be frustrating for them. Start investigating a larger block size and the other solutions in this article to avoid it.
Decrease the Largest String Length

The simplest solution is to decrease the maximum length of strings you can store. Of course, to do this you need to check that you can use a smaller limit.
A good first pass for this is to find the longest strings stored in the column. Both in characters and bytes:
select max ( length ( text_column ) ) mx_char_length,
max ( lengthb ( text_column ) ) mx_byte_length
from some_table;
Assuming these are well short of the current upper limit, it’s time for good old-fashioned business analysis. Speak with the data consumers to find out what a reasonable maximum length is.
Provided you find a smaller limit is acceptable, you can alter the table to shrink the size:
alter table some_table modify text_column varchar2(1000 char);
But many free-form text fields genuinely can store large bodies of text. In which case you can’t lower the upper limit. You need a workaround.
As the problem is the indexed values may not fit in one block, one solution is:
Use a larger block size for the indexes!
Store the Index in a Tablespace with a Larger Block Size
The least impact to application code is to store the index in a tablespace with a 16k or 32k block size.
Assuming the database uses an 8k block size this means you’ll have multiple block sizes. So to do this, first you need to set the appropriate db_Nk_cache_size parameter. Then create a tablespace with the desired block size and store the index in it:
create tablespace tblsp_32k_blocks
datafile 'tblsp_32k_blocks' size 1m
blocksize 32768;

create index text_index
on some_table ( text_column )
tablespace tblsp_32k_blocks;
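Note that the create tablespace above only works once a buffer cache exists for the 32k block size. Setting it could look something like this (a sketch; the 64M value is purely illustrative, so size it for your system):

alter system set db_32k_cache_size = 64M;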
While this is simple from a SQL perspective, it makes the database harder to manage. DBAs may need to start micro-managing buffer cache sizes; a poor use of their time.
If this is a no-go, there are some workarounds using function-based indexes.
Create a STANDARD_HASH Index

Hashing the text returns a fixed-length value. The exact length depends on the hash function, but even a SHA512 hash value is 128 characters. Well within the limit for an 8k block.
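You can see this for yourself by hashing a value directly (a quick check; standard_hash defaults to SHA1, and the second argument picks another algorithm):

select standard_hash ( 'a potentially very long string', 'SHA512' ) sha512_hash
from dual;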
So you can create an index using standard_hash:
create index text_hash_index
on some_table (
standard_hash ( text_column )
);
Those of you familiar with function-based indexes will know: normally you have to change your query to use them. The function in the index must appear in your where clause.
But the optimizer has some tricks up its sleeve. It can automatically use this index in equality or IN-list comparisons:
select * from some_table
where text_column = 'this';

----------------------------------------------------------------
| Id  | Operation                            | Name            |
----------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                 |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED | SOME_TABLE      |
|*  2 |   INDEX RANGE SCAN                   | TEXT_HASH_INDEX |
----------------------------------------------------------------
But range comparisons can’t use this index. Which also means it’s unusable for like queries:
select * from some_table
where text_column >= 't'
and text_column < 'u';

-----------------------------------------
| Id  | Operation          | Name       |
-----------------------------------------
|   0 | SELECT STATEMENT   |            |
|*  1 |  TABLE ACCESS FULL | SOME_TABLE |
-----------------------------------------

select * from some_table
where text_column like 'this%';

-----------------------------------------
| Id  | Operation          | Name       |
-----------------------------------------
|   0 | SELECT STATEMENT   |            |
|*  1 |  TABLE ACCESS FULL | SOME_TABLE |
-----------------------------------------
So what if you do these kinds of queries?
The optimizer can also do the same magic to use a substr-based index.
Create a SUBSTR Index
The substr function returns the first N characters of a string. When searching for a string, often these are the most important for finding a particular row.
create index text_substr_index
on some_table (
substr ( text_column, 1, 10 )
);
Again, the optimizer can use this index without you having to change the SQL. This time for range comparisons too, but sadly like still won’t use the index:
select * from some_table
where text_column = 'this';
------------------------------------------------------------------
| Id  | Operation                            | Name              |
------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                   |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED | SOME_TABLE        |
|*  2 |   INDEX RANGE SCAN                   | TEXT_SUBSTR_INDEX |
------------------------------------------------------------------
select * from some_table
where text_column >= 't'
and text_column < 'u';
------------------------------------------------------------------
| Id  | Operation                            | Name              |
------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                   |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED | SOME_TABLE        |
|   2 |   INDEX RANGE SCAN                   | TEXT_SUBSTR_INDEX |
------------------------------------------------------------------
select * from some_table
where text_column like 'this%';
-----------------------------------------
| Id  | Operation          | Name       |
-----------------------------------------
|   0 | SELECT STATEMENT   |            |
|   1 |  TABLE ACCESS FULL | SOME_TABLE |
-----------------------------------------
So which of these two methods should you use?
If you need range comparisons on the text, you’ll have to use substr. But it’s likely to be less efficient for equality searches.
So if you only have equality searches on the column, standard_hash is the way to go. You can also use this in a unique index if you want to prevent duplicates in the text.
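If you want that, here's a sketch of such a unique, hash-based index (in theory two different strings could collide on the same hash value, but the chance is vanishingly small):

create unique index text_hash_unique_index
on some_table (
standard_hash ( text_column )
);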
But provided you can work within their restrictions, either of the function-based index methods work well.
That said, both of these methods are kind-of hidden from developers and DBAs. Which can increase the surprise-factor for the application.
There is a way to make these clear: use virtual columns.
Create a Virtual Column
A virtual column returns the result of evaluating an expression on the values of other columns in the row. The expression may contain deterministic SQL functions. The database only computes the virtual column’s values at runtime. The values are not stored in the table.
Or – starting in patch 23.7 of Oracle Database 23ai – you can create materialized columns. Like virtual columns, they are derived from expressions, but the database computes and stores their values when inserting or updating rows. At query time, it fetches the precalculated values.
So, instead of creating a function-based index on standard_hash or substr, you can use these functions in a virtual or materialized column. Then, create the index on that:
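-- standard_hash defaults to SHA1, which displays as a 40-character hex string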
alter table some_table
add text_hash varchar2(40 char) as (
standard_hash ( text_column )
);
create index vc_text_hash_index
on some_table ( text_hash );
Technically, creating a standard index on a virtual column is the same as creating a function-based index. When building a function-based index, the database will create a hidden virtual column to support the index.
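You can see these hidden columns by querying the *_tab_cols views (unlike *_tab_columns, these include hidden columns):

select column_name, hidden_column, virtual_column
from user_tab_cols
where table_name = 'SOME_TABLE';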
But there are a couple of advantages to making a virtual column first over a function-based index:
- The manually created column is visible by default, so it appears in generic access like select *. This makes it more obvious to others querying the data, compared to the generated index column, which is hidden.
- You can give the column a meaningful name, making its purpose clear. The system-generated virtual index column will have an obscure name like SYS_NC00002$.
Function-based indexes and indexes on virtual columns can also be useful if you want to extract the middle of a character column value for some reason and access it in SQL statements. For example, you may want to remove a common prefix or get certain fields from delimited or fixed-width data. The optimizer may use such indexes to speed up the evaluation of such queries.
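Here's a sketch of the delimited-data case (a hypothetical layout with colon-separated fields; adjust the instr positions to match your data):

alter table some_table
add delimited_field_2 varchar2(100 char) as (
  substr (
    text_column,
    instr ( text_column, ':', 1, 1 ) + 1,
    instr ( text_column, ':', 1, 2 ) - instr ( text_column, ':', 1, 1 ) - 1
  )
);

create index delimited_field_2_index
on some_table ( delimited_field_2 );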
But this highlights a general problem with indexing free-text fields. Often you want to search for specific values within a string. So you need wildcards at the start and end of the search string:
select * from some_table where text_column like '%middle text%';
Standard indexes are practically useless for these types of query. Which raises the question of why you’re bothering to index these columns at all!
Luckily there is a way to index large text strings in Oracle Database, allowing you to do ad-hoc searches of them:
Create an Oracle Text Index
Oracle Text is a full text indexing engine with huge functionality. Most relevant to the ORA-01450 problem is that you can create these indexes on large varchar2 columns. Without needing any of the fancy tricks described above.
To use it, state which kind of text index you want with the indextype clause. The most common type is a context index:
create index oracle_text_index
on some_table ( text_column )
indextype is ctxsys.context;
Although there’s nothing special you need to do to create these indexes, to use them, your query must use the contains operator:
select * from some_table where contains ( text_column, 'value' ) > 0;
This may mean rewriting lots of SQL so it can use the index. But it also opens new possibilities, such as enabling fuzzy-text search. If you go down this route, this is a good chance to speak with your customers to see if they want any of these extra searching capabilities.
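For example, fuzzy matching finds words spelled similarly to the search term. A sketch using the Oracle Text fuzzy operator:

select * from some_table
where contains ( text_column, 'fuzzy(valu)' ) > 0;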
Summary
Perhaps the best solution to ORA-01450 errors is to avoid them in the first place: ensure you set appropriate limits for character columns!
Remember that the size of a varchar2 is a constraint. If the maximum size of a string should be ten characters, make it a varchar2(10 char). Besides avoiding the indexing issues above, this improves data quality. Avoid choosing varchar2(4000) "just in case" you need to store large blocks of text.
But there will always be cases where you need to store large pieces of text. It’s worth remembering you can use two or more of the solutions above on the same column. So you could create a function-based index to make equality searches fast. While also adding an Oracle Text index to improve the performance of more free-form queries.
Ultimately all the solutions above have their pros and cons. It’s worth testing a couple of them in your application to see which trade-off gives you the most upside with the least downside. This could well be something you need to decide on a case-by-case basis!
UPDATED: 5 Jan 2021 – Changing UTF8 -> AL32UTF8 in character set discussion following comment from Manfred
UPDATED: 7 Nov 2024 – Adding behaviour change in Oracle Database 23ai
UPDATED: 7 May 2025 – Adding materialized columns in 23.7. Reworking benefits of virtual columns: removing incorrect mention of stats. Thanks to Sergiusz Wolicki for pointing out my mistake and reviewing the edits
UPDATED: 20 Oct 2025 – Oracle AI Database 26ai replaces Oracle Database 23ai
