To group rows by part of a datetime value with Oracle SQL, you can use the trunc function. This rounds down datetime values. The first argument is the datetime and the second is the unit to round down to. For example, this groups the rows by hour:
select trunc ( datetime, 'hh' ) hr, count(*) from time_data group by trunc ( datetime, 'hh' ) order by hr;
The trunc function supports many formats; common ones you may wish to use are:
- yyyy = year
- mm = month
- dy = day (the default format)
- hh = hour
- mi = minute
But what if you want to group rows into a multiple of a unit? For example, five minutes, two hours, or three days?
To do this you need a formula or – from RU 23.7 – you can use the time_bucket function. Read on to see how to:
- Place rows in N-minute time slices
- Use time_bucket to group rows into intervals
- Show all time buckets in a period, including any missing buckets
- Combine rows into equal-sized buckets with variable gaps between them
- Combine rows into variable-sized buckets with variable gaps between them
For a summary of this, grab the cheat sheet at the end.
To start we’ll see how to place rows into N-minute time intervals.

Count rows in 5-minute intervals
The first step in counting the rows in five-minute groups is to find the interval they fall in. To do this, first assign the values into numbered buckets. This needs two key items:
- The origin datetime: the starting point for the time groups
- The stride: the size of the time interval you want to group rows into
The start of the first interval is a natural choice for the origin, but it can be any datetime which is a valid interval start time. For five-minute intervals that start on the hour, the origin can be the start of any day.
Place these values in this formula to find the bucket number:
bucket# = floor ( ( datetime – origin ) / stride_interval )
Then map the bucket numbers to interval times with these formulas:
start time = origin + ( bucket# * stride_interval )
end time = origin + ( ( bucket# + 1 ) * stride_interval )
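As a quick sanity check, here's the same arithmetic sketched in Python (this is only an illustration of the formulas, not the Oracle SQL; the origin and stride names are mine):

```python
# Sketch of the bucket formulas above using Python's datetime module.
from datetime import datetime, timedelta
from math import floor

origin = datetime(2025, 2, 1)    # the starting point for the time groups
stride = timedelta(minutes=5)    # the size of each time interval

def bucket_number(dt):
    # bucket# = floor ( ( datetime - origin ) / stride_interval )
    return floor((dt - origin) / stride)

def bucket_bounds(dt):
    n = bucket_number(dt)
    # start = origin + bucket# * stride; end = origin + ( bucket# + 1 ) * stride
    return origin + n * stride, origin + (n + 1) * stride
```

For example, 01:13 AM on 1st Feb is 73 minutes past the origin, so bucket_number returns 14 and the interval is 01:10 – 01:15.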
To convert these formulas to Oracle SQL, bear in mind that when you calculate the difference between datetime values, the result is:
- An interval if either value is a timestamp
- The number of days if both values are dates
There are no built-in functions to convert intervals directly into one unit such as hours or minutes. This makes calculating bucket numbers from intervals fiddly. Simplify this so you’re working with numbers by casting timestamps to dates first.
With the difference between the datetimes and the origin in days, the next step is to convert them to your desired units. For example, if you want five-minute intervals, multiply up by 1,440 (the number of minutes in a day) after finding the difference:
bucket# = floor ( ( datetime – origin ) * 1440 / stride_interval )
But doing this brings a subtle issue: floating-point number imprecision.
For example, the time 01:15 AM is 75 minutes after midnight. Using a five-minute stride, this goes in bucket 15 (75 / 5 = 15).
When you do this calculation in the database, however, the result is:
( 01:15 - 00:00 ) * 1440 / 5 = 14.99999...95
Taking the floor of this places 01:15 in bucket 14!
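You can see the effect outside the database with Python's decimal module (the long literal below illustrates the kind of value limited-precision decimal arithmetic produces; Oracle's internal representation differs):

```python
# Illustration of the precision trap: a decimal value fractionally
# below 75 lands in the wrong bucket unless you round first.
from decimal import Decimal, getcontext
from math import floor

getcontext().prec = 50   # enough precision that the division is exact

# 01:15 is 75 minutes after midnight, but limited-precision decimal
# arithmetic can produce a value just under 75 (literal is illustrative)
minutes = Decimal("74.99999999999999999999999999999999999995")

wrong = floor(minutes / 5)            # 14 - the wrong bucket!
right = floor(round(minutes, 9) / 5)  # 15 - rounding first fixes it
```

Rounding to 9 decimal places snaps the value back to 75 before the division, restoring the correct bucket.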
To overcome this, round after converting to your desired units. This gives a bucket number formula like:
bucket# = floor ( round ( ( datetime – origin ) * units_in_day, 9 ) / stride_interval )
I’ve rounded this to 9 decimal digits to allow this to work with any units within a day (hours, minutes, seconds). Plugging these formulas into SQL to place rows in 5-minute intervals gives:
with rws as (
select date'2025-02-01' origin,
datetime,
1440 units_in_day,
5 stride_interval
from time_data
), buckets as (
select floor ( round ( ( datetime - origin ) * units_in_day, 9 ) / stride_interval ) bucket#,
r.*
from rws r
), intervals as (
select origin + ( bucket# * stride_interval / units_in_day ) start_datetime,
origin + ( ( bucket# + 1 ) * stride_interval / units_in_day ) end_datetime
from buckets
)
select count(*), start_datetime, end_datetime
from intervals
group by start_datetime, end_datetime
order by start_datetime;
You can use this principle to group rows into buckets of any size up to one day. Instead of hardcoding the duration to five, ten, or fifteen minutes, do the good thing and use bind variables:
with rws as (
select to_date ( :start_date, 'DD-MON-YYYY HH24:MI:SS' ) origin,
datetime,
:units_in_day units_in_day,
:stride stride_interval
from time_data
), buckets as (
select floor ( round ( ( datetime - origin ) * units_in_day, 9 ) / stride_interval ) bucket#,
rws.*
from rws
), intervals as (
select origin + ( bucket# * stride_interval / units_in_day ) start_datetime,
origin + ( ( bucket# + 1 ) * stride_interval / units_in_day ) end_datetime
from buckets
)
select count(*), start_datetime
from intervals
group by start_datetime
order by start_datetime;
While this does the job, it’s clunky. It also only works with intervals that are multiples of days or smaller units. If you want to group into months or years you need a different approach. Oracle AI Database 26ai simplified this in RU 23.7 by adding a time_bucket function.
Count rows in 2-month intervals with time_bucket
In theory, the process for grouping rows into N monthly or yearly intervals is the same as the formulas above. The months_between function looks like a good way to calculate the bucket numbers.
In practice, it’s more complex because the days in a month vary and months_between assumes a 31-day month. So it’s unsuitable for this task.
The time_bucket function in 23.7 handles this complexity for you. Its parameters are:
- The datetime to round
- The stride; this must be an interval data type
- The origin datetime
- Whether the result should be the start or end of the bucket; this is optional and defaults to start
For example, this groups rows into two-month intervals starting on 1st Feb:
select count(*),
time_bucket (
datetime, interval '2' month, date'2025-02-01'
) start_date,
time_bucket (
datetime, interval '2' month, date'2025-02-01', end
) end_date
from time_data
group by start_date, end_date
order by start_date;
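The month arithmetic time_bucket does for you can be sketched by working in whole-month indexes, which sidesteps the varying days per month. Here's an illustration in Python (the function name is mine and it assumes the origin falls on the first day of a month):

```python
# Sketch of N-month bucketing: convert dates to whole-month indexes so
# varying month lengths don't matter. Assumes the origin is the first
# day of a month; illustrative only.
from datetime import date
from math import floor

def month_bucket_start(d, stride_months, origin):
    months = (d.year - origin.year) * 12 + (d.month - origin.month)
    n = floor(months / stride_months)                # the bucket number
    total = origin.year * 12 + (origin.month - 1) + n * stride_months
    return date(total // 12, total % 12 + 1, 1)      # bucket start date
```

With a two-month stride from 1st Feb 2025, 15th May 2025 lands in the bucket starting 1st Apr 2025.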
You can also use it for N-minute intervals. This function simplifies the earlier SQL that places rows in five-minute groups to:
select count(*),
time_bucket (
datetime, interval '5' minute, date'2025-02-01'
) start_date,
time_bucket (
datetime, interval '5' minute, date'2025-02-01', end
) end_date
from time_data
group by start_date, end_date
order by start_date;
Using time_bucket has a couple more benefits over the hand-written formula:
- It supports epochs (the seconds since 1 Jan 1970)
- The stride can be an ISO 8601 format time interval string, enabling you to use one bind variable to represent any interval. Note this can’t mix-and-match years and months with days to seconds. If your interval string includes both, time_bucket uses the years and months for rounding and ignores the smaller units.
For example, this finds the start of the day for the epoch 1234567890:
select time_bucket (
1234567890, -- Feb 13 23:31:30 2009
interval '1' day,
0 -- Jan 01 00:00:00 1970
) start_epoch_day;
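The epoch version is plain integer arithmetic. The equivalent of the query above sketched in Python (illustrative only):

```python
# Bucket an epoch (seconds since 1 Jan 1970) to the start of its day
# with integer floor division.
epoch = 1234567890                       # Feb 13 23:31:30 2009
day = 86400                              # seconds in one day
start_epoch_day = (epoch // day) * day   # start of Feb 13 2009
```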
And this groups rows into three-month intervals (quarters) starting on 1st Jan:
select time_bucket (
datetime,
'P3M',
date'2025-01-01'
) quarter
from time_data
group by quarter
order by quarter;
So far we’ve assumed that either the table stores at least one row for every period or it’s OK to have gaps between them. While in busy systems it may be almost certain there is data recorded for every interval, slots could be missing due to outages or bugs. You need a different approach to be sure every period is in the output.

Group rows into 10-minute buckets with sparse data
One way to guarantee you get a row for every interval in the date range is to:
1. Generate a row for all the intervals you want
2. Outer join to these the times that are greater than or equal to the start and strictly less than the end of each interval
Which looks like:
with times as (
select to_date ( :start_time, 'DD-MON-YYYY HH24:MI' )
+ ( ( level - 1 ) * :duration_in_minutes / 1440 ) dt
from dual
connect by level <= :number_of_intervals
), intervals as (
select dt start_date,
dt + ( :duration_in_minutes / 1440 ) end_date
from times
)
select start_date, end_date, count ( datetime )
from intervals i
left join time_data t
on start_date <= datetime
and datetime < end_date
group by start_date, end_date
order by start_date;
The best thing about this solution is it generalizes to intervals of any size. For example, 12 minutes, 90 minutes, or 3 days. As before, set :duration_in_minutes to the number of minutes in the interval.
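If you want to check the logic outside the database, the generate-then-count idea looks like this in Python (illustrative only; the function name is mine):

```python
# Sketch of "generate every interval, then count matching rows": every
# interval appears in the output, with zero for any empty slots.
from datetime import datetime, timedelta
from collections import Counter

def interval_counts(start, stride, n_intervals, times):
    # bucket number for each row, via floor division of timedeltas
    counts = Counter((t - start) // stride for t in times)
    return [(start + i * stride, counts.get(i, 0)) for i in range(n_intervals)]
```

Every interval appears in the output, with a zero count for empty slots, just like the outer join.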
The query above accepts the start date as input. You may want to get the initial time from the data itself.
To do this, find the minimum datetime value first and use this as the driving point for the data generation:
with start_date as (
select min ( datetime ) start_date,
:duration_in_minutes / 1440 time_interval
from time_data
), intervals as (
select start_date + ( ( level - 1 ) * time_interval ) start_date,
start_date + ( level * time_interval ) end_date
from start_date
connect by level <= :number_of_intervals
)
select start_date, count ( datetime )
from intervals
left join time_data
on start_date <= datetime
and datetime < end_date
group by start_date
order by start_date;
Joining to a generated list of times is a good general case solution. It’s flexible and – most importantly – guaranteed to return a row for every interval in the period.
The drawback is you need to join. If you want the bucket start for a specific row or datetime this is wasted effort. It’s also more efficient to avoid the join and use the formula or time_bucket function as above if you’re certain there will be at least one row in each interval. Grouping on the source data alone is also valid if you want to omit any missing intervals!
Sometimes you may want to keep these gaps, but with a twist. Each interval should begin at the time of the first row in the group, not a fixed multiple of the origin.

Combine rows into 15-minute intervals with variable gaps
The solutions so far all assume that the start of each group should be a multiple of the initial time.
For example, if the times are 00:00, 00:07, 00:13, and 00:21, grouping into 10-minute intervals would have three groups starting at 00:00, 00:10, and 00:20. But starting each group from the next available time would produce two groups 00:00 – 00:10, and 00:13 – 00:23!
The challenge here is there could be any size gap between rows. It’s possible to solve this using recursive queries or the model clause. But I prefer to use SQL pattern matching. This looks like:
select *
from time_data
match_recognize (
order by datetime
measures
init.datetime as interval_start,
init.datetime + numtodsinterval (
:duration_in_minutes, 'minute'
) as interval_end,
count(*) as rw#
pattern ( init time_interval* )
define
time_interval as datetime < init.datetime
+ numtodsinterval (
:duration_in_minutes, 'minute'
)
);
To understand what’s going on here, look at the pattern and define clauses. The database walks through the rows to see which variables they match, working from left to right through the pattern.
- For the first row, the database checks if it matches the first variable: init. This variable is undefined (it's unlisted in the define clause) so it is always true. Thus the first row matches init.
- It then moves to the second row and second variable: time_interval. This has an asterisk after it. Patterns are regular expressions, meaning time_interval matches zero or more rows.
- Thus the database adds all rows to the group where time_interval is true. These are rows which are within :duration_in_minutes of the first row in each group (init.datetime). To aid readability, I've added a minute interval instead of converting the minutes to a fraction of a day (:duration_in_minutes / 1440). Use whichever of these methods you prefer.
- As soon as the database fetches a row :duration_in_minutes or more after the start, time_interval is false. This completes the group. The database then starts searching for the pattern again, starting from the init variable. This variable is always true, so this fetched row becomes the starting point for the next group.
This process repeats until the database has fetched all the rows in the data set.
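To check your understanding, here's the same walk mimicked in Python (illustrative; the function name is mine): a new group starts at the first unmatched row, and rows join it while they're within the duration of the group's first row.

```python
# Mimics the match_recognize logic: init starts a group; rows stay in
# it while within `duration` of the group's FIRST row; the first row
# outside that window starts the next group.
from datetime import datetime, timedelta

def group_from_first(times, duration):
    groups = []
    for t in sorted(times):
        if groups and t < groups[-1][0] + duration:
            groups[-1].append(t)   # time_interval matches: same group
        else:
            groups.append([t])     # init always matches: new group
    return groups
```

With the times 00:00, 00:07, 00:13, and 00:21 and a 10-minute duration, this returns the two groups from the earlier example.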
SQL pattern matching makes it easy to solve other non-standard time grouping needs. For example, Oren Nakdimon shows how you can use this to combine events into groups of equal frequency. Or you may want to group rows that are within N minutes of the last row.
Group all rows that fall within 20 minutes of the previous
Instead of grouping rows into fixed size intervals, you may want to combine all rows that fall within twenty minutes of the previous. This could lead to intervals of any size!
The SQL pattern matching method for this is:
select *
from time_data
match_recognize (
order by datetime
measures
init.datetime as interval_start,
init.datetime + numtodsinterval (
:duration_in_minutes, 'minute'
) as interval_end,
count(*) as rw#
pattern ( init time_interval* )
define
time_interval as datetime < prev ( datetime )
+ numtodsinterval (
:duration_in_minutes, 'minute'
)
);
This is like the previous match_recognize solution. All that’s different is a tweak to the time_interval definition. Instead of grouping rows that are within N minutes of the first timestamp in the group, it checks the value of the previous row.
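Mimicking this variant in Python (illustrative; the function name is mine), the only change is comparing each row to the previous row instead of the group's first, so a chain of closely-spaced rows can grow into one arbitrarily long group.

```python
# Mimics the prev()-based definition: a row joins the current group if
# it's within `duration` of the PREVIOUS row, so groups can grow to any
# size as long as the gaps stay small.
from datetime import datetime, timedelta

def group_within_gap(times, duration):
    groups = []
    for t in sorted(times):
        if groups and t < groups[-1][-1] + duration:
            groups[-1].append(t)   # within duration of the previous row
        else:
            groups.append([t])     # gap too big: start a new group
    return groups
```

For example, with rows at 00:00, 00:07, 00:13, and 00:21 and a 10-minute duration, this returns one group, because each row is within 10 minutes of the one before it.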
Summary
There are many ways to group rows into N-minute intervals with SQL. If you’re asked to do this it’s important to define exactly how to combine rows and deal with missing data.
Often you’ll need a row for every interval, even when there are gaps in the data. Generating a row per interval, then outer joining your data to this ensures every time is present.
The downside of joining is it adds work to the query. If performance matters and you’re certain there are no gaps in your data, basic grouping is the fastest. SQL pattern matching is also efficient, but more importantly enables you to use non-standard grouping logic.
Here’s a summary cheat sheet of how to group rows into N-unit time intervals:

If you’d like to try these examples out, you can get the scripts for this post on Live SQL.
Want to improve your SQL skills? Oracle Dev Gym has a vast library of SQL quizzes to help you learn all about Oracle AI Database.
UPDATED 17 Feb 2025: Significant rewrite to include time_bucket function
UPDATED 14 Apr 2025: Changed Live SQL link to new platform
UPDATED 20 Oct 2025: Oracle AI Database 26ai replaces Oracle Database 23ai
