Window functions
Background
Window functions are a special class of built-in functions. Like aggregate functions, they perform calculations on multiple input rows to produce a single value. The difference is that a window function processes the input rows within a specific window, rather than grouping them with GROUP BY. The rows in each window can be sorted and grouped using the OVER() clause. A window function computes a separate value for each row, rather than one value per group. This flexibility allows users to add extra columns to the SELECT clause and further filter the result set. A window function can appear only in the select list and in the outermost position of a query. It takes effect at the end of the query, that is, after the JOIN, WHERE, and GROUP BY operations are performed. Window functions are often used to analyze trends, calculate outliers, and perform bucketing analyses on large-scale data.
Usage
Syntax
function(args) OVER(partition_by_clause order_by_clause [window_clause])
partition_by_clause ::= PARTITION BY expr [, expr ...]
order_by_clause ::= ORDER BY expr [ASC | DESC] [, expr [ASC | DESC] ...]
PARTITION BY clause
The PARTITION BY clause is similar to GROUP BY. It groups the input rows by one or more specified columns. Rows with the same values in those columns are grouped together.
ORDER BY clause
The ORDER BY clause is basically the same as the outer ORDER BY. It defines the order of the input rows. If PARTITION BY is specified, ORDER BY defines the order within each partition. The only difference is that ORDER BY n (where n is a positive integer) in the OVER clause is equivalent to no operation, whereas n in the outer ORDER BY indicates sorting by the nth column.
Example:
This example shows adding an id column to the select list with values of 1, 2, 3, and so on, with rows sorted by the date_and_time column of the events table.
SELECT row_number() OVER (ORDER BY date_and_time) AS id,
c1, c2, c3, c4
FROM events;
Window clause
The window clause is used to specify the range of rows to operate on (the rows preceding and following the current row). It can be used with the following functions: AVG(), COUNT(), FIRST_VALUE(), LAST_VALUE(), and SUM(). For MAX() and MIN(), the window clause can specify a start bound of UNBOUNDED PRECEDING.
Syntax:
ROWS BETWEEN [ { m | UNBOUNDED } PRECEDING | CURRENT ROW] [ AND [CURRENT ROW | { UNBOUNDED | n } FOLLOWING] ]
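To make the frame semantics concrete, here is a minimal Python sketch (an illustration, not StarRocks code) of which rows a ROWS frame selects relative to each current row:

```python
def rows_frame(values, preceding, following):
    """Rows selected by ROWS BETWEEN <preceding> PRECEDING AND
    <following> FOLLOWING for each row; None means UNBOUNDED."""
    n = len(values)
    frames = []
    for i in range(n):
        start = 0 if preceding is None else max(0, i - preceding)
        end = n if following is None else min(n, i + following + 1)
        frames.append(values[start:end])
    return frames

# ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING:
print(rows_frame([10, 20, 30, 40], 1, 1))
# -> [[10, 20], [10, 20, 30], [20, 30, 40], [30, 40]]
# ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW:
print(rows_frame([10, 20, 30, 40], None, 0))
# -> [[10], [10, 20], [10, 20, 30], [10, 20, 30, 40]]
```

Note that near the partition edges the frame simply contains fewer rows.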
Window function sample table
This section creates a sample table named scores. You can use this table to test many of the window functions below.
CREATE TABLE `scores` (
`id` int(11) NULL,
`name` varchar(11) NULL,
`subject` varchar(11) NULL,
`score` int(11) NULL
)
DISTRIBUTED BY HASH(`score`) BUCKETS 10;
INSERT INTO `scores` VALUES
(1, "lily", "math", NULL),
(1, "lily", "english", 100),
(1, "lily", "physics", 60),
(2, "tom", "math", 80),
(2, "tom", "english", 98),
(2, "tom", "physics", NULL),
(3, "jack", "math", 95),
(3, "jack", "english", NULL),
(3, "jack", "physics", 99),
(4, "amy", "math", 80),
(4, "amy", "english", 92),
(4, "amy", "physics", 99),
(5, "mike", "math", 70),
(5, "mike", "english", 85),
(5, "mike", "physics", 85),
(6, "amber", "math", 92),
(6, "amber", NULL, 90),
(6, "amber", "physics", 100);
Function examples
This section describes the window functions supported in StarRocks.
AVG()
Calculates the average value of a field in a given window. This function ignores NULL values.
Syntax:
AVG(expr) [OVER (analytic_clause)]
Examples:
The following example uses stock ticker data.
CREATE TABLE stock_ticker (
stock_symbol STRING,
closing_price DECIMAL(8,2),
closing_date DATETIME
)
DUPLICATE KEY(stock_symbol)
COMMENT "OLAP"
DISTRIBUTED BY HASH(closing_date);
INSERT INTO stock_ticker VALUES
("JDR", 12.86, "2014-10-02 00:00:00"),
("JDR", 12.89, "2014-10-03 00:00:00"),
("JDR", 12.94, "2014-10-04 00:00:00"),
("JDR", 12.55, "2014-10-05 00:00:00"),
("JDR", 14.03, "2014-10-06 00:00:00"),
("JDR", 14.75, "2014-10-07 00:00:00"),
("JDR", 13.98, "2014-10-08 00:00:00")
;
Calculate the moving average of the closing price over the current row and the rows immediately before and after it.
select stock_symbol, closing_date, closing_price,
avg(closing_price)
over (partition by stock_symbol
order by closing_date
rows between 1 preceding and 1 following
) as moving_average
from stock_ticker;
Output:
+--------------+---------------------+---------------+----------------+
| stock_symbol | closing_date | closing_price | moving_average |
+--------------+---------------------+---------------+----------------+
| JDR | 2014-10-02 00:00:00 | 12.86 | 12.87500000 |
| JDR | 2014-10-03 00:00:00 | 12.89 | 12.89666667 |
| JDR | 2014-10-04 00:00:00 | 12.94 | 12.79333333 |
| JDR | 2014-10-05 00:00:00 | 12.55 | 13.17333333 |
| JDR | 2014-10-06 00:00:00 | 14.03 | 13.77666667 |
| JDR | 2014-10-07 00:00:00 | 14.75 | 14.25333333 |
| JDR | 2014-10-08 00:00:00 | 13.98 | 14.36500000 |
+--------------+---------------------+---------------+----------------+
For example, 12.87500000 in the first row is the average of the closing prices on "2014-10-02" (12.86) and the following day "2014-10-03" (12.89). The first row has no preceding row, so only two values are averaged.
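The moving average above can be reproduced outside the database. The following Python sketch (an illustration, not StarRocks code) applies the same 1-preceding/1-following frame:

```python
# Closing prices in date order, from the stock_ticker table above.
prices = [12.86, 12.89, 12.94, 12.55, 14.03, 14.75, 13.98]

def moving_average(vals, preceding=1, following=1):
    # At the edges the frame simply contains fewer rows, which is
    # why the first and last averages use only two values.
    out = []
    for i in range(len(vals)):
        window = vals[max(0, i - preceding): i + following + 1]
        out.append(round(sum(window) / len(window), 8))
    return out

for v in moving_average(prices):
    print(v)  # matches the moving_average column above
```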
COUNT()
Calculates the total number of rows that meet the specified conditions in a given window.
Syntax:
COUNT(expr) [OVER (analytic_clause)]
Examples:
Count the occurrences of math scores greater than 90, from the first row of the math partition to the current row. This example uses the data in the sample table scores.
select *,
count(score)
over (
partition by subject
order by score
rows between unbounded preceding and current row
) as 'score_count'
from scores where subject in ('math') and score > 90;
+------+-------+---------+-------+-------------+
| id | name | subject | score | score_count |
+------+-------+---------+-------+-------------+
| 6 | amber | math | 92 | 1 |
| 3 | jack | math | 95 | 2 |
+------+-------+---------+-------+-------------+
CUME_DIST()
The CUME_DIST() function calculates the cumulative distribution of a value within a partition or window, indicating its relative position as a percentage in the partition. It is often used to calculate the distribution of highest or lowest values in a group.
- If data is sorted in ascending order, this function calculates the percentage of values less than or equal to the value in the current row.
- If data is sorted in descending order, this function calculates the percentage of values greater than or equal to the value in the current row.
The cumulative distribution is in the range of 0 to 1. It is useful for percentile calculation and data distribution analysis.
This function is supported from v3.2.
Syntax:
CUME_DIST() OVER (partition_by_clause order_by_clause)
- partition_by_clause: optional. If this clause is not specified, the entire result set is processed as a single partition.
- order_by_clause: required. This function must be used with ORDER BY to sort partition rows into the desired order.
CUME_DIST() includes NULL values and treats them as the lowest values.
Examples:
The following example shows the cumulative distribution of each score within each subject group. This example uses the data in the sample table scores.
SELECT *,
cume_dist()
OVER (
PARTITION BY subject
ORDER BY score
) AS cume_dist
FROM scores;
+------+-------+---------+-------+---------------------+
| id | name | subject | score | cume_dist |
+------+-------+---------+-------+---------------------+
| 6 | amber | NULL | 90 | 1 |
| 3 | jack | english | NULL | 0.2 |
| 5 | mike | english | 85 | 0.4 |
| 4 | amy | english | 92 | 0.6 |
| 2 | tom | english | 98 | 0.8 |
| 1 | lily | english | 100 | 1 |
| 1 | lily | math | NULL | 0.16666666666666666 |
| 5 | mike | math | 70 | 0.3333333333333333 |
| 2 | tom | math | 80 | 0.6666666666666666 |
| 4 | amy | math | 80 | 0.6666666666666666 |
| 6 | amber | math | 92 | 0.8333333333333334 |
| 3 | jack | math | 95 | 1 |
| 2 | tom | physics | NULL | 0.16666666666666666 |
| 1 | lily | physics | 60 | 0.3333333333333333 |
| 5 | mike | physics | 85 | 0.5 |
| 4 | amy | physics | 99 | 0.8333333333333334 |
| 3 | jack | physics | 99 | 0.8333333333333334 |
| 6 | amber | physics | 100 | 1 |
+------+-------+---------+-------+---------------------+
- For cume_dist in the first row, the NULL group has only one row, and only the row itself meets the condition of "less than or equal to the current row". The cumulative distribution is 1.
- For cume_dist in the second row, the english group has five rows, and only the row itself (NULL) meets the condition of "less than or equal to the current row". The cumulative distribution is 0.2.
- For cume_dist in the third row, the english group has five rows, and two rows (85 and NULL) meet the condition of "less than or equal to the current row". The cumulative distribution is 0.4.
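The percentages above follow directly from the definition. The following Python sketch (an illustration, not StarRocks code) computes the same cumulative distribution, modeling NULL as the lowest value:

```python
def cume_dist(values):
    """Fraction of rows whose value is less than or equal to the
    current row's value; None (NULL) is treated as the lowest value."""
    key = lambda v: (v is not None, v)   # None sorts before any value
    n = len(values)
    return [sum(key(w) <= key(v) for w in values) / n for v in values]

# english group in ascending order with NULL first: jack, mike, amy, tom, lily
print(cume_dist([None, 85, 92, 98, 100]))
# -> [0.2, 0.4, 0.6, 0.8, 1.0]
```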
DENSE_RANK()
The DENSE_RANK() function is used to represent rankings. Unlike RANK(), DENSE_RANK() does not have vacant numbers. For example, if there are two 1s, the third number of DENSE_RANK() is still 2, whereas the third number of RANK() is 3.
Syntax:
DENSE_RANK() OVER(partition_by_clause order_by_clause)
Examples:
The following example shows the ranking of math scores (sorted in descending order). This example uses the data in the sample table scores.
select *,
dense_rank()
over (
partition by subject
order by score desc
) as `rank`
from scores where subject in ('math');
+------+-------+---------+-------+------+
| id | name | subject | score | rank |
+------+-------+---------+-------+------+
| 3 | jack | math | 95 | 1 |
| 6 | amber | math | 92 | 2 |
| 2 | tom | math | 80 | 3 |
| 4 | amy | math | 80 | 3 |
| 5 | mike | math | 70 | 4 |
| 1 | lily | math | NULL | 5 |
+------+-------+---------+-------+------+
The result data has two rows whose score is 80. They both rank 3. The rank for the next score, 70, is 4. This shows that DENSE_RANK() does not have vacant numbers.
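The difference between the two ranking schemes can be sketched in Python (an illustration, not StarRocks code), using the math scores above with the NULL row left out for simplicity:

```python
def rank_and_dense_rank(values):
    """RANK (with gaps) and DENSE_RANK (no gaps) for a pre-sorted
    list; equal values share the same rank."""
    ranks, dense = [], []
    for i, v in enumerate(values):
        if i and v == values[i - 1]:
            ranks.append(ranks[-1])     # tie: repeat the previous rank
            dense.append(dense[-1])
        else:
            ranks.append(i + 1)         # RANK skips numbers after ties
            dense.append(dense[-1] + 1 if dense else 1)  # DENSE_RANK does not
    return ranks, dense

print(rank_and_dense_rank([95, 92, 80, 80, 70]))
# -> ([1, 2, 3, 3, 5], [1, 2, 3, 3, 4])
```

After the tie at 80, RANK jumps to 5 while DENSE_RANK continues with 4.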
FIRST_VALUE()
FIRST_VALUE() returns the first value of the window range.
Syntax:
FIRST_VALUE(expr [IGNORE NULLS]) OVER(partition_by_clause order_by_clause [window_clause])
IGNORE NULLS is supported from v2.5.0. It determines whether NULL values of expr are eliminated from the calculation. By default, NULL values are included, which means NULL is returned if the first value in the filtered result is NULL. If you specify IGNORE NULLS, the first non-null value in the filtered result is returned. If all the values are NULL, NULL is returned even if you specify IGNORE NULLS.
Examples:
Return the first score value for each member of each group (sorted in descending order), grouping by subject. This example uses the data in the sample table scores.
select *,
first_value(score IGNORE NULLS)
over (
partition by subject
order by score desc
) as first
from scores;
+------+-------+---------+-------+-------+
| id | name | subject | score | first |
+------+-------+---------+-------+-------+
| 1 | lily | english | 100 | 100 |
| 2 | tom | english | 98 | 100 |
| 4 | amy | english | 92 | 100 |
| 5 | mike | english | 85 | 100 |
| 3 | jack | english | NULL | 100 |
| 6 | amber | physics | 100 | 100 |
| 3 | jack | physics | 99 | 100 |
| 4 | amy | physics | 99 | 100 |
| 5 | mike | physics | 85 | 100 |
| 1 | lily | physics | 60 | 100 |
| 2 | tom | physics | NULL | 100 |
| 6 | amber | NULL | 90 | 90 |
| 3 | jack | math | 95 | 95 |
| 6 | amber | math | 92 | 95 |
| 2 | tom | math | 80 | 95 |
| 4 | amy | math | 80 | 95 |
| 5 | mike | math | 70 | 95 |
| 1 | lily | math | NULL | 95 |
+------+-------+---------+-------+-------+
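The IGNORE NULLS behavior can be sketched in Python (a simplified illustration, not StarRocks code): over a frame running from the partition start to the current row, each row sees the first non-null value encountered so far.

```python
def first_value_ignore_nulls(values):
    """Simplified model of FIRST_VALUE(... IGNORE NULLS): for each
    row, the first non-None value seen from the start of the
    partition; None until one appears."""
    out, first = [], None
    for v in values:
        if first is None and v is not None:
            first = v                  # latch the first non-null value
        out.append(first)
    return out

# physics partition sorted by score descending (NULL last):
print(first_value_ignore_nulls([100, 99, 99, 85, 60, None]))
# -> [100, 100, 100, 100, 100, 100]
```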
LAST_VALUE()
LAST_VALUE() returns the last value of the window range. It is the opposite of FIRST_VALUE().
Syntax:
LAST_VALUE(expr [IGNORE NULLS]) OVER(partition_by_clause order_by_clause [window_clause])
IGNORE NULLS is supported from v2.5.0. It determines whether NULL values of expr are eliminated from the calculation. By default, NULL values are included, which means NULL is returned if the last value in the filtered result is NULL. If you specify IGNORE NULLS, the last non-null value in the filtered result is returned. If all the values are NULL, NULL is returned even if you specify IGNORE NULLS.
By default, LAST_VALUE() computes rows between unbounded preceding and current row, which compares the current row with all its preceding rows. To show only one value for each partition, use rows between unbounded preceding and unbounded following after ORDER BY.
Examples:
Return the last score for each member of each group (sorted in descending order), grouping by subject. This example uses the data in the sample table scores.
select *,
last_value(score IGNORE NULLS)
over (
partition by subject
order by score desc
rows between unbounded preceding and unbounded following
) as last
from scores;
+------+-------+---------+-------+------+
| id | name | subject | score | last |
+------+-------+---------+-------+------+
| 1 | lily | english | 100 | 85 |
| 2 | tom | english | 98 | 85 |
| 4 | amy | english | 92 | 85 |
| 5 | mike | english | 85 | 85 |
| 3 | jack | english | NULL | 85 |
| 6 | amber | physics | 100 | 60 |
| 3 | jack | physics | 99 | 60 |
| 4 | amy | physics | 99 | 60 |
| 5 | mike | physics | 85 | 60 |
| 1 | lily | physics | 60 | 60 |
| 2 | tom | physics | NULL | 60 |
| 6 | amber | NULL | 90 | 90 |
| 3 | jack | math | 95 | 70 |
| 6 | amber | math | 92 | 70 |
| 2 | tom | math | 80 | 70 |
| 4 | amy | math | 80 | 70 |
| 5 | mike | math | 70 | 70 |
| 1 | lily | math | NULL | 70 |
+------+-------+---------+-------+------+
LAG()
Returns the value of the row that lags the current row by offset rows. This function is often used to compare values between rows and to filter data.
LAG() can be used to query data of the following types:
- Numeric: TINYINT, SMALLINT, INT, BIGINT, LARGEINT, FLOAT, DOUBLE, DECIMAL
- String: CHAR, VARCHAR
- Date: DATE, DATETIME
- BITMAP and HLL are supported from StarRocks v2.5.
Syntax:
LAG(expr [IGNORE NULLS] [, offset[, default]])
OVER([<partition_by_clause>] [<order_by_clause>])
Parameters:
- expr: the field you want to compute.
- offset: the offset. It must be a positive integer. If this parameter is not specified, 1 is the default.
- default: the default value returned if no matching row is found. If this parameter is not specified, NULL is the default. default supports any expression whose type is compatible with expr.
- IGNORE NULLS is supported from v3.0. It determines whether NULL values of expr are included in the result. By default, NULL values are included when offset rows are counted, which means NULL is returned if the value of the destination row is NULL. See Example 1. If you specify IGNORE NULLS, NULL values are ignored when offset rows are counted and the system continues searching until it finds offset non-null values. If offset non-null values cannot be found, NULL or default (if specified) is returned. See Example 2.
Example 1: IGNORE NULLS is not specified
Create a table and insert values:
CREATE TABLE test_tbl (col_1 INT, col_2 INT)
DISTRIBUTED BY HASH(col_1);
INSERT INTO test_tbl VALUES
(1, NULL),
(2, 4),
(3, NULL),
(4, 2),
(5, NULL),
(6, 7),
(7, 6),
(8, 5),
(9, NULL),
(10, NULL);
Query data from this table, where offset is 2 (meaning the previous two rows are traversed) and default is 0 (meaning 0 is returned if no matching row is found).
SELECT col_1, col_2, LAG(col_2,2,0) OVER (ORDER BY col_1)
FROM test_tbl ORDER BY col_1;
Output:
+-------+-------+---------------------------------------------+
| col_1 | col_2 | lag(col_2, 2, 0) OVER (ORDER BY col_1 ASC ) |
+-------+-------+---------------------------------------------+
| 1 | NULL | 0 |
| 2 | 4 | 0 |
| 3 | NULL | NULL |
| 4 | 2 | 4 |
| 5 | NULL | NULL |
| 6 | 7 | 2 |
| 7 | 6 | NULL |
| 8 | 5 | 7 |
| 9 | NULL | 6 |
| 10 | NULL | 5 |
+-------+-------+---------------------------------------------+
For the first two rows, no previous two rows exist and the default value 0 is returned.
For NULL in row 3, the value two rows backward is NULL and NULL is returned because NULL values are allowed.
Example 2: IGNORE NULLS is specified
Use the preceding table and parameter settings.
SELECT col_1, col_2, LAG(col_2 IGNORE NULLS,2,0) OVER (ORDER BY col_1)
FROM test_tbl ORDER BY col_1;
+-------+-------+---------------------------------------------+
| col_1 | col_2 | lag(col_2, 2, 0) OVER (ORDER BY col_1 ASC ) |
+-------+-------+---------------------------------------------+
| 1 | NULL | 0 |
| 2 | 4 | 0 |
| 3 | NULL | 0 |
| 4 | 2 | 0 |
| 5 | NULL | 4 |
| 6 | 7 | 4 |
| 7 | 6 | 2 |
| 8 | 5 | 7 |
| 9 | NULL | 6 |
| 10 | NULL | 6 |
+-------+-------+---------------------------------------------+
For rows 1 to 4, the system cannot find two non-NULL values for each of them in the previous rows and the default value 0 is returned.
For value 6 in row 7, the value two rows backward is NULL and NULL is ignored because IGNORE NULLS is specified. The system continues to search for non-null values and 2 in row 4 is returned.
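Both variants can be emulated in Python (an illustration, not StarRocks code); the two printed lists reproduce the outputs of Example 1 and Example 2, with None standing in for NULL:

```python
def lag(values, offset=2, default=0, ignore_nulls=False):
    """Sketch of LAG(col, offset, default [IGNORE NULLS]): look
    offset rows back; with IGNORE NULLS, skip None values while
    counting back."""
    out = []
    for i in range(len(values)):
        if not ignore_nulls:
            # Plain LAG: take the value offset rows back, or default.
            out.append(values[i - offset] if i >= offset else default)
        else:
            # IGNORE NULLS: take the offset-th non-null value among
            # the preceding rows, or default if too few exist.
            non_null = [v for v in values[:i] if v is not None]
            out.append(non_null[-offset] if len(non_null) >= offset else default)
    return out

col_2 = [None, 4, None, 2, None, 7, 6, 5, None, None]
print(lag(col_2))                     # [0, 0, None, 4, None, 2, None, 7, 6, 5]
print(lag(col_2, ignore_nulls=True))  # [0, 0, 0, 0, 4, 4, 2, 7, 6, 6]
```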
LEAD()
Returns the value of the row that leads the current row by offset rows. This function is often used to compare values between rows and to filter data.
The data types supported by LEAD() are the same as those supported by LAG().
Syntax:
LEAD(expr [IGNORE NULLS] [, offset[, default]])
OVER([<partition_by_clause>] [<order_by_clause>])
Parameters:
- expr: the field you want to compute.
- offset: the offset. It must be a positive integer. If this parameter is not specified, 1 is the default.
- default: the default value returned if no matching row is found. If this parameter is not specified, NULL is the default. default supports any expression whose type is compatible with expr.
- IGNORE NULLS is supported from v3.0. It determines whether NULL values of expr are included in the result. By default, NULL values are included when offset rows are counted, which means NULL is returned if the value of the destination row is NULL. See Example 1. If you specify IGNORE NULLS, NULL values are ignored when offset rows are counted and the system continues searching until it finds offset non-null values. If offset non-null values cannot be found, NULL or default (if specified) is returned. See Example 2.
Example 1: IGNORE NULLS is not specified
Create a table and insert values:
CREATE TABLE test_tbl (col_1 INT, col_2 INT)
DISTRIBUTED BY HASH(col_1);
INSERT INTO test_tbl VALUES
(1, NULL),
(2, 4),
(3, NULL),
(4, 2),
(5, NULL),
(6, 7),
(7, 6),
(8, 5),
(9, NULL),
(10, NULL);
Query data from this table, where offset is 2 (meaning the subsequent two rows are traversed) and default is 0 (meaning 0 is returned if no matching row is found).
SELECT col_1, col_2, LEAD(col_2,2,0) OVER (ORDER BY col_1)
FROM test_tbl ORDER BY col_1;
Output:
+-------+-------+----------------------------------------------+
| col_1 | col_2 | lead(col_2, 2, 0) OVER (ORDER BY col_1 ASC ) |
+-------+-------+----------------------------------------------+
| 1 | NULL | NULL |
| 2 | 4 | 2 |
| 3 | NULL | NULL |
| 4 | 2 | 7 |
| 5 | NULL | 6 |
| 6 | 7 | 5 |
| 7 | 6 | NULL |
| 8 | 5 | NULL |
| 9 | NULL | 0 |
| 10 | NULL | 0 |
+-------+-------+----------------------------------------------+
For the first row, the value two rows forward is NULL and NULL is returned because NULL values are allowed.
For the last two rows, no subsequent two rows exist and the default value 0 is returned.
Example 2: IGNORE NULLS is specified
Use the preceding table and parameter settings.
SELECT col_1, col_2, LEAD(col_2 IGNORE NULLS,2,0) OVER (ORDER BY col_1)
FROM test_tbl ORDER BY col_1;
+-------+-------+----------------------------------------------+
| col_1 | col_2 | lead(col_2, 2, 0) OVER (ORDER BY col_1 ASC ) |
+-------+-------+----------------------------------------------+
| 1 | NULL | 2 |
| 2 | 4 | 7 |
| 3 | NULL | 7 |
| 4 | 2 | 6 |
| 5 | NULL | 6 |
| 6 | 7 | 5 |
| 7 | 6 | 0 |
| 8 | 5 | 0 |
| 9 | NULL | 0 |
| 10 | NULL | 0 |
+-------+-------+----------------------------------------------+
For rows 7 to 10, the system cannot find two non-null values in the subsequent rows and the default value 0 is returned.
For the first row, the value two rows forward is NULL and NULL is ignored because IGNORE NULLS is specified. The system continues to search for the second non-null value and 2 in row 4 is returned.
MAX()
Returns the maximum value of the specified rows in the current window.
Syntax:
MAX(expr) [OVER (analytic_clause)]
Examples:
Calculate the maximum value of the rows from the first row to the row after the current row. This example uses the data in the sample table scores.
select *,
max(score)
over (
partition by subject
order by score
rows between unbounded preceding and 1 following
) as max
from scores
where subject in ('math');
+------+-------+---------+-------+------+
| id | name | subject | score | max |
+------+-------+---------+-------+------+
| 1 | lily | math | NULL | 70 |
| 5 | mike | math | 70 | 80 |
| 2 | tom | math | 80 | 80 |
| 4 | amy | math | 80 | 92 |
| 6 | amber | math | 92 | 95 |
| 3 | jack | math | 95 | 95 |
+------+-------+---------+-------+------+
The following example calculates the maximum score among all rows for the math subject.
select *,
max(score)
over (
partition by subject
order by score
rows between unbounded preceding and unbounded following
) as max
from scores
where subject in ('math');
From StarRocks 2.4 onwards, you can specify the row range as rows between n preceding and n following, which means you can capture n rows before the current row and n rows after it.
Example statement:
select *,
max(score)
over (
partition by subject
order by score
rows between 3 preceding and 2 following) as max
from scores
where subject in ('math');
MIN()
Returns the minimum value of the specified rows in the current window.
Syntax:
MIN(expr) [OVER (analytic_clause)]
Examples:
Calculate the lowest score among all rows for the math subject. This example uses the data in the sample table scores.
select *,
min(score)
over (
partition by subject
order by score
rows between unbounded preceding and unbounded following)
as min
from scores
where subject in ('math');
+------+-------+---------+-------+------+
| id | name | subject | score | min |
+------+-------+---------+-------+------+
| 1 | lily | math | NULL | 70 |
| 5 | mike | math | 70 | 70 |
| 2 | tom | math | 80 | 70 |
| 4 | amy | math | 80 | 70 |
| 6 | amber | math | 92 | 70 |
| 3 | jack | math | 95 | 70 |
+------+-------+---------+-------+------+
From StarRocks 2.4 onwards, you can specify the row range as rows between n preceding and n following, which means you can capture n rows before the current row and n rows after it.
Example statement:
select *,
min(score)
over (
partition by subject
order by score
rows between 3 preceding and 2 following) as min
from scores
where subject in ('math');
NTILE()
The NTILE() function divides the sorted rows of a partition into the specified number of buckets (num_buckets) as evenly as possible, assigns the rows to their respective buckets (numbered starting from 1: [1, 2, ..., num_buckets]), and returns the bucket number of each row.
About the size of the buckets:
- If the row count is exactly divisible by num_buckets, all buckets are of the same size.
- If the row count is not exactly divisible by num_buckets, there are buckets of two sizes that differ by 1. The buckets with more rows are listed ahead of the buckets with fewer rows.
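The bucket-size rule can be sketched in Python (an illustration, not StarRocks code):

```python
def ntile_sizes(row_count, num_buckets):
    """Bucket sizes produced by NTILE(num_buckets) over row_count
    rows: the larger buckets (one extra row) come first."""
    base, extra = divmod(row_count, num_buckets)
    return [base + 1] * extra + [base] * (num_buckets - extra)

print(ntile_sizes(6, 2))   # [3, 3]        evenly divisible
print(ntile_sizes(10, 4))  # [3, 3, 2, 2]  two sizes, larger first
```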
Syntax:
NTILE (num_buckets) OVER (partition_by_clause order_by_clause)
num_buckets: the number of buckets to be created. The value must be a constant positive integer whose maximum is 2^63 - 1.
The window clause is not allowed in NTILE().
NTILE() returns data of the BIGINT type.
Examples:
The following example divides all rows in each partition into two buckets. This example uses the data in the sample table scores.
select *,
ntile(2)
over (
partition by subject
order by score
) as bucket_id
from scores;
Output:
+------+-------+---------+-------+-----------+
| id | name | subject | score | bucket_id |
+------+-------+---------+-------+-----------+
| 6 | amber | NULL | 90 | 1 |
| 1 | lily | math | NULL | 1 |
| 5 | mike | math | 70 | 1 |
| 2 | tom | math | 80 | 1 |
| 4 | amy | math | 80 | 2 |
| 6 | amber | math | 92 | 2 |
| 3 | jack | math | 95 | 2 |
| 3 | jack | english | NULL | 1 |
| 5 | mike | english | 85 | 1 |
| 4 | amy | english | 92 | 1 |
| 2 | tom | english | 98 | 2 |
| 1 | lily | english | 100 | 2 |
| 2 | tom | physics | NULL | 1 |
| 1 | lily | physics | 60 | 1 |
| 5 | mike | physics | 85 | 1 |
| 3 | jack | physics | 99 | 2 |
| 4 | amy | physics | 99 | 2 |
| 6 | amber | physics | 100 | 2 |
+------+-------+---------+-------+-----------+
As the example above shows, when num_buckets is 2:
- For the first row, its partition (the NULL subject) has only one record, which is assigned to a single bucket.
- For rows 2 to 7, the partition has six records; the first three are assigned to bucket 1 and the other three to bucket 2.
PERCENT_RANK()
Calculates the relative rank of a row within a result set as a percentage.
PERCENT_RANK() is calculated using the following formula, where Rank
represents the rank of the current row in the partition.
(Rank - 1)/(Rows in partition - 1)
The return values range from 0 to 1. This function is useful for percentile calculation and analyzing data distribution. It is supported from v3.2.
Syntax:
PERCENT_RANK() OVER (partition_by_clause order_by_clause)
This function must be used with ORDER BY to sort partition rows into the desired order.
Examples:
The following example shows the relative rank of each score within the math group. This example uses the data in the sample table scores.
SELECT *,
PERCENT_RANK()
OVER (
PARTITION BY subject
ORDER BY score
) AS `percent_rank`
FROM scores where subject in ('math');
+------+-------+---------+-------+--------------+
| id | name | subject | score | percent_rank |
+------+-------+---------+-------+--------------+
| 1 | lily | math | NULL | 0 |
| 5 | mike | math | 70 | 0.2 |
| 2 | tom | math | 80 | 0.4 |
| 4 | amy | math | 80 | 0.4 |
| 6 | amber | math | 92 | 0.8 |
| 3 | jack | math | 95 | 1 |
+------+-------+---------+-------+--------------+
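The formula can be checked in Python (an illustration, not StarRocks code) against the math scores above, with NULL modeled as the lowest value:

```python
def percent_rank(values):
    """(Rank - 1) / (rows in partition - 1), with competition ranking
    and None (NULL) treated as the lowest value (ORDER BY ascending)."""
    key = lambda v: (v is not None, v)   # None sorts before any value
    n = len(values)
    # Competition rank: 1 + number of rows strictly lower than this one.
    ranks = [1 + sum(key(w) < key(v) for w in values) for v in values]
    return [(r - 1) / (n - 1) for r in ranks]

# math group sorted ascending with NULL first:
print(percent_rank([None, 70, 80, 80, 92, 95]))
# -> [0.0, 0.2, 0.4, 0.4, 0.8, 1.0]
```

The two tied rows with score 80 share rank 3, so both get (3 - 1)/5 = 0.4, matching the percent_rank column above.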