Version: Candidate-4.1

Iceberg catalog

tip

This example uses the Local Climatological Data (LCD) dataset featured in the StarRocks Basics Quick Start. You can load the data and try the example yourself.

An Iceberg catalog is a type of external catalog that is supported by StarRocks from v2.4 onwards. With Iceberg catalogs, you can:

  • Directly query data stored in Iceberg without the need to manually create tables.
  • Use INSERT INTO or asynchronous materialized views (which are supported from v2.5 onwards) to process data stored in Iceberg and load the data into StarRocks.
  • Perform operations on StarRocks to create or drop Iceberg databases and tables, or sink data from StarRocks tables to Parquet-formatted Iceberg tables by using INSERT INTO (this feature is supported from v3.1 onwards).
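
The sink path mentioned above can be sketched as follows; all catalog, database, and table names here are hypothetical:

```sql
-- Sink query results from a StarRocks internal table into a
-- Parquet-formatted Iceberg table (supported from v3.1 onwards).
INSERT INTO iceberg_catalog.iceberg_db.sales_summary
SELECT region, sum(amount) AS total_amount
FROM default_catalog.sales_db.sales
GROUP BY region;
```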

To ensure successful SQL workloads on your Iceberg cluster, your StarRocks cluster must be able to access the storage system and metastore of your Iceberg cluster. StarRocks supports the following storage systems and metastores:

  • Distributed file system (HDFS) or object storage like AWS S3, Microsoft Azure Storage, Google GCS, or other S3-compatible storage system (for example, MinIO)

  • Metastore like Hive metastore, AWS Glue, or Tabular

note
  • If you choose AWS S3 as storage, you can use HMS or AWS Glue as metastore. If you choose any other storage system, you can only use HMS as metastore.
  • If you choose Tabular as metastore, you need to use the Iceberg REST catalog.

Usage notes​

Take note of the following points when you use StarRocks to query data from Iceberg:

  • Parquet file format
    • Supported compression formats: SNAPPY, LZ4, ZSTD, GZIP, and NO_COMPRESSION
    • v1 tables: supported.
    • v2 tables: supported from StarRocks v3.1 onwards, in which queries on these v2 tables support position deletes. In v3.1.10, v3.2.5, v3.3, and their later versions, queries on v2 tables also support equality deletes.
  • ORC file format
    • Supported compression formats: ZLIB, SNAPPY, LZO, LZ4, ZSTD, and NO_COMPRESSION
    • v1 tables: supported.
    • v2 tables: supported from StarRocks v3.0 onwards, in which queries on these v2 tables support position deletes. In v3.1.8, v3.2.3, v3.3, and their later versions, queries on v2 tables also support equality deletes.

Integration preparation​

Before you create an Iceberg catalog, make sure your StarRocks cluster can integrate with the storage system and metastore of your Iceberg cluster.


Storage​

Select the tab that matches your storage type:

If your Iceberg cluster uses AWS S3 as storage or AWS Glue as metastore, choose your suitable authentication method and make the required preparations to ensure that your StarRocks cluster can access the related AWS cloud resources.

The following authentication methods are recommended:

  • Instance profile
  • Assumed role
  • IAM user

Of the above-mentioned three authentication methods, instance profile is the most widely used.

For more information, see Preparation for authentication in AWS IAM.


Create an Iceberg catalog​

Syntax​

CREATE EXTERNAL CATALOG <catalog_name>
[COMMENT <comment>]
PROPERTIES
(
"type" = "iceberg",
[SecurityParams],
MetastoreParams,
StorageCredentialParams,
MetadataRelatedParams
)

Parameters​

catalog_name​

The name of the Iceberg catalog. The naming conventions are as follows:

  • The name can contain letters, digits (0-9), and underscores (_). It must start with a letter.
  • The name is case-sensitive and cannot exceed 1023 characters in length.

comment​

The description of the Iceberg catalog. This parameter is optional.

type​

The type of your data source. Set the value to iceberg.

SecurityParams​

Parameter(s) about how StarRocks manages data access to the catalog.

For detailed instructions on managing data access for Iceberg Catalogs, see Security Setup for Iceberg REST Catalog.

catalog.access.control​

The data access control policy. Valid values:

  • native (Default): The StarRocks built-in data access control system is used.
  • allowall: All data access checks are delegated to the Catalog itself.
  • ranger: Data access checks are delegated to Apache Ranger.

MetastoreParams​

A set of parameters about how StarRocks integrates with the metastore of your data source. Choose the tab that matches your metastore type:

Hive metastore​

If you choose Hive metastore as the metastore of your data source, configure MetastoreParams as follows:

"iceberg.catalog.type" = "hive",
"hive.metastore.uris" = "<hive_metastore_uri>"
note

Before querying Iceberg data, you must add the mapping between the host names and IP addresses of your Hive metastore nodes to the /etc/hosts path. Otherwise, StarRocks may fail to access your Hive metastore when you start a query.
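
For example, entries in /etc/hosts on each FE and BE/CN node might look like the following (host names and IP addresses are hypothetical):

```
10.1.0.12  hms-node1.example.com
10.1.0.13  hms-node2.example.com
```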

The following table describes the parameter you need to configure in MetastoreParams.

  • iceberg.catalog.type

    • Required: Yes
    • Description: The type of metastore that you use for your Iceberg cluster. Set the value to hive.
  • hive.metastore.uris

    • Required: Yes
    • Description: The URI of your Hive metastore. Format: thrift://<metastore_IP_address>:<metastore_port>.
      If high availability (HA) is enabled for your Hive metastore, you can specify multiple metastore URIs and separate them with commas (,), for example, "thrift://<metastore_IP_address_1>:<metastore_port_1>,thrift://<metastore_IP_address_2>:<metastore_port_2>,thrift://<metastore_IP_address_3>:<metastore_port_3>".
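For example, the MetastoreParams for an HA-enabled Hive metastore might look like the following sketch (the addresses are hypothetical):

```sql
"iceberg.catalog.type" = "hive",
"hive.metastore.uris" = "thrift://10.1.0.12:9083,thrift://10.1.0.13:9083"
```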

StorageCredentialParams​

A set of parameters about how StarRocks integrates with your storage system. This parameter set is optional.

Note the following points:

  • If you use HDFS as storage, you do not need to configure StorageCredentialParams and can skip this section. If you use AWS S3, other S3-compatible storage system, Microsoft Azure Storage, or Google GCS as storage, you must configure StorageCredentialParams.

  • If you use Tabular as metastore, you do not need to configure StorageCredentialParams and can skip this section. If you use HMS or AWS Glue as metastore, you must configure StorageCredentialParams.

Choose the tab that matches your storage type:

AWS S3​

If you choose AWS S3 as storage for your Iceberg cluster, take one of the following actions:

  • To choose the instance profile-based authentication method, configure StorageCredentialParams as follows:

    "aws.s3.use_instance_profile" = "true",
    "aws.s3.region" = "<aws_s3_region>"
  • To choose the assumed role-based authentication method, configure StorageCredentialParams as follows:

    "aws.s3.use_instance_profile" = "true",
    "aws.s3.iam_role_arn" = "<iam_role_arn>",
    "aws.s3.region" = "<aws_s3_region>"
  • To choose the IAM user-based authentication method, configure StorageCredentialParams as follows:

    "aws.s3.use_instance_profile" = "false",
    "aws.s3.access_key" = "<iam_user_access_key>",
    "aws.s3.secret_key" = "<iam_user_secret_key>",
    "aws.s3.region" = "<aws_s3_region>"
  • To choose vended credential (supported from v4.0 onwards) with the REST catalog, configure StorageCredentialParams as follows:

    "aws.s3.region" = "<aws_s3_region>"

StorageCredentialParams for AWS S3:

aws.s3.use_instance_profile​
  • Required: Yes
  • Description: Specifies whether to enable the instance profile-based authentication method and the assumed role-based authentication method. Valid values: true and false. Default value: false.
aws.s3.iam_role_arn​
  • Required: No
  • Description: The ARN of the IAM role that has privileges on your AWS S3 bucket. If you use the assumed role-based authentication method to access AWS S3, you must specify this parameter.
aws.s3.region​
  • Required: Yes
  • Description: The region in which your AWS S3 bucket resides. Example: us-west-1.
aws.s3.access_key​
  • Required: No
  • Description: The access key of your IAM user. If you use the IAM user-based authentication method to access AWS S3, you must specify this parameter.
aws.s3.secret_key​
  • Required: No
  • Description: The secret key of your IAM user. If you use the IAM user-based authentication method to access AWS S3, you must specify this parameter.

For information about how to choose an authentication method for accessing AWS S3 and how to configure an access control policy in AWS IAM Console, see Authentication parameters for accessing AWS S3.


MetadataRelatedParams​

A set of parameters about the caching of Iceberg metadata in StarRocks. This parameter set is optional.

From v3.3.3 onwards, StarRocks supports the periodic metadata refresh strategy. In most cases, you can ignore the parameters below and do not need to tune them, because their default values already provide good performance out of the box. You can adjust the Iceberg metadata parsing mode using the system variable plan_mode.

enable_iceberg_metadata_cache
  • Default value: true
  • Description: Whether to cache Iceberg-related metadata, including Table Cache, Partition Name Cache, and the Data File Cache and Delete Data File Cache in Manifest.
iceberg_manifest_cache_with_column_statistics
  • Default value: false
  • Description: Whether to cache the statistics of columns.
refresh_iceberg_manifest_min_length
  • Default value: 2 * 1024 * 1024
  • Description: The minimum Manifest file length that triggers a Data File Cache refresh.
iceberg_data_file_cache_memory_usage_ratio
  • Default value: 0.1
  • Description: The maximum memory usage ratio for the data file Manifest cache. Supported from v3.5.6 onwards.
iceberg_delete_file_cache_memory_usage_ratio
  • Default value: 0.1
  • Description: The maximum memory usage ratio for the delete file Manifest cache. Supported from v3.5.6 onwards.
iceberg_table_cache_refresh_interval_sec
  • Default value: 60
  • Description: The interval (in seconds) at which the asynchronous refresh of the Iceberg table cache is triggered. Supported from v3.5.7 onwards.
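
As a sketch, the metadata caching properties above can be set in the catalog definition at creation time; the metastore settings below are placeholders:

```sql
CREATE EXTERNAL CATALOG iceberg_catalog_hms
PROPERTIES
(
"type" = "iceberg",
"iceberg.catalog.type" = "hive",
"hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
"enable_iceberg_metadata_cache" = "true",
"iceberg_manifest_cache_with_column_statistics" = "false"
);
```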

Starting from v3.4, StarRocks can obtain statistics of Iceberg tables by reading Iceberg metadata when the following parameter is set, without actively triggering the collection of Iceberg table statistics.

enable_get_stats_from_external_metadata
  • Default value: false
  • Description: Whether to obtain statistics from Iceberg metadata. When this item is set to true, you can further control which type of statistics to collect through the session variable enable_get_stats_from_external_metadata.

Examples​

The following examples create an Iceberg catalog named iceberg_catalog_hms or iceberg_catalog_glue, depending on the type of metastore you use, to query data from your Iceberg cluster. Choose the tab that matches your storage type:

AWS S3​

If you choose instance profile-based credential​
  • If you use Hive metastore in your Iceberg cluster, run a command like below:

    CREATE EXTERNAL CATALOG iceberg_catalog_hms
    PROPERTIES
    (
    "type" = "iceberg",
    "iceberg.catalog.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.region" = "us-west-2"
    );
  • If you use AWS Glue in your Amazon EMR Iceberg cluster, run a command like below:

    CREATE EXTERNAL CATALOG iceberg_catalog_glue
    PROPERTIES
    (
    "type" = "iceberg",
    "iceberg.catalog.type" = "glue",
    "aws.glue.use_instance_profile" = "true",
    "aws.glue.region" = "us-west-2",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.region" = "us-west-2"
    );
If you choose assumed role-based credential​
  • If you use Hive metastore in your Iceberg cluster, run a command like below:

    CREATE EXTERNAL CATALOG iceberg_catalog_hms
    PROPERTIES
    (
    "type" = "iceberg",
    "iceberg.catalog.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.iam_role_arn" = "arn:aws:iam::081976408565:role/test_s3_role",
    "aws.s3.region" = "us-west-2"
    );
  • If you use AWS Glue in your Amazon EMR Iceberg cluster, run a command like below:

    CREATE EXTERNAL CATALOG iceberg_catalog_glue
    PROPERTIES
    (
    "type" = "iceberg",
    "iceberg.catalog.type" = "glue",
    "aws.glue.use_instance_profile" = "true",
    "aws.glue.iam_role_arn" = "arn:aws:iam::081976408565:role/test_glue_role",
    "aws.glue.region" = "us-west-2",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.iam_role_arn" = "arn:aws:iam::081976408565:role/test_s3_role",
    "aws.s3.region" = "us-west-2"
    );
If you choose IAM user-based credential​
  • If you use Hive metastore in your Iceberg cluster, run a command like below:

    CREATE EXTERNAL CATALOG iceberg_catalog_hms
    PROPERTIES
    (
    "type" = "iceberg",
    "iceberg.catalog.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "aws.s3.use_instance_profile" = "false",
    "aws.s3.access_key" = "<iam_user_access_key>",
    "aws.s3.secret_key" = "<iam_user_access_key>",
    "aws.s3.region" = "us-west-2"
    );
  • If you use AWS Glue in your Amazon EMR Iceberg cluster, run a command like below:

    CREATE EXTERNAL CATALOG iceberg_catalog_glue
    PROPERTIES
    (
    "type" = "iceberg",
    "iceberg.catalog.type" = "glue",
    "aws.glue.use_instance_profile" = "false",
    "aws.glue.access_key" = "<iam_user_access_key>",
    "aws.glue.secret_key" = "<iam_user_secret_key>",
    "aws.glue.region" = "us-west-2",
    "aws.s3.use_instance_profile" = "false",
    "aws.s3.access_key" = "<iam_user_access_key>",
    "aws.s3.secret_key" = "<iam_user_secret_key>",
    "aws.s3.region" = "us-west-2"
    );
If you choose vended credential​

If you choose REST catalog with vended credential, run a command like below:

CREATE EXTERNAL CATALOG polaris_s3
PROPERTIES
(
"type" = "iceberg",
"iceberg.catalog.uri" = "http://xxx:xxx/api/catalog",
"iceberg.catalog.type" = "rest",
"iceberg.catalog.rest.nested-namespace-enabled"="true",
"iceberg.catalog.security" = "oauth2",
"iceberg.catalog.oauth2.credential" = "xxxxx:xxxx",
"iceberg.catalog.oauth2.scope"='PRINCIPAL_ROLE:ALL',
"iceberg.catalog.warehouse" = "iceberg_catalog",
"aws.s3.region" = "us-west-2"
);

Use your catalog​

View Iceberg catalogs​

You can use SHOW CATALOGS to query all catalogs in the current StarRocks cluster:

SHOW CATALOGS;

You can also use SHOW CREATE CATALOG to query the creation statement of an external catalog. The following example queries the creation statement of an Iceberg catalog named iceberg_catalog_glue:

SHOW CREATE CATALOG iceberg_catalog_glue;

Switch to an Iceberg Catalog and a database in it​

You can use one of the following methods to switch to an Iceberg catalog and a database in it:

  • Use SET CATALOG to specify an Iceberg catalog in the current session, and then use USE to specify an active database:

    -- Switch to a specified catalog in the current session:
    SET CATALOG <catalog_name>
    -- Specify the active database in the current session:
    USE <db_name>
  • Directly use USE to switch to an Iceberg catalog and a database in it:

    USE <catalog_name>.<db_name>

Drop an Iceberg catalog​

You can use DROP CATALOG to drop an external catalog.

The following example drops an Iceberg catalog named iceberg_catalog_glue:

DROP Catalog iceberg_catalog_glue;

View the schema of an Iceberg table​

You can use one of the following syntaxes to view the schema of an Iceberg table:

  • View schema

    DESC[RIBE] <catalog_name>.<database_name>.<table_name>
  • View schema and location from the CREATE statement

    SHOW CREATE TABLE <catalog_name>.<database_name>.<table_name>

Query an Iceberg table​

  1. Use SHOW DATABASES to view the databases in your Iceberg cluster:

    SHOW DATABASES FROM <catalog_name>
  2. Switch to an Iceberg catalog and a database in it.

  3. Use SELECT to query the destination table in the specified database:

    SELECT count(*) FROM <table_name> LIMIT 10

Iceberg DDL Operations​

For DDL operations (CREATE/DROP DATABASE, CREATE/DROP TABLE, and CREATE/ALTER VIEW), see Iceberg DDL operations.


Iceberg DML Operations​

For DML operations (INSERT), see Iceberg DML operations.


Iceberg Stored Procedures​

For Iceberg stored procedures (using snapshots, performing manual Compaction), see Iceberg Stored Procedures.


Configure metadata caching​

The metadata files of your Iceberg cluster may be stored in remote storage such as AWS S3 or HDFS. By default, StarRocks caches Iceberg metadata in memory. To accelerate queries, StarRocks adopts a two-level metadata caching mechanism, with which it can cache metadata both in memory and on disk. For each initial query, StarRocks caches the computation results. If a subsequent query that is semantically equivalent to a previous query is issued, StarRocks first attempts to retrieve the requested metadata from its caches, and retrieves the metadata from the remote storage only when it cannot be hit in the caches.

StarRocks uses the Least Recently Used (LRU) algorithm to cache and evict data. The basic rules are as follows:

  • StarRocks first attempts to retrieve the requested metadata from the memory. If the metadata cannot be hit in the memory, StarRocks attempts to retrieve the metadata from the disks. The metadata that StarRocks has retrieved from the disks will be loaded into the memory. If the metadata cannot be hit in the disks either, StarRocks retrieves the metadata from the remote storage and caches the retrieved metadata in the memory.
  • StarRocks writes the metadata evicted out of the memory into the disks, but it directly discards the metadata evicted out of the disks.

From v3.3.3 onwards, StarRocks supports the periodic metadata refresh strategy. You can adjust the Iceberg metadata caching plan using the system variable plan_mode.

FE Configurations on Iceberg metadata caching​

enable_iceberg_metadata_disk_cache​
  • Unit: N/A
  • Default value: false
  • Description: Specifies whether to enable the disk cache.
iceberg_metadata_cache_disk_path​
  • Unit: N/A
  • Default value: StarRocksFE.STARROCKS_HOME_DIR + "/caches/iceberg"
  • Description: The save path of cached metadata files on disk.
iceberg_metadata_disk_cache_capacity​
  • Unit: Bytes
  • Default value: 2147483648, equivalent to 2 GB
  • Description: The maximum size of cached metadata allowed on disk.
iceberg_metadata_memory_cache_capacity​
  • Unit: Bytes
  • Default value: 536870912, equivalent to 512 MB
  • Description: The maximum size of cached metadata allowed in memory.
iceberg_metadata_memory_cache_expiration_seconds​
  • Unit: Seconds
  • Default value: 86500
  • Description: The amount of time after which a cache entry in memory expires counting from its last access.
iceberg_metadata_disk_cache_expiration_seconds​
  • Unit: Seconds
  • Default value: 604800, equivalent to one week
  • Description: The amount of time after which a cache entry on disk expires counting from its last access.
iceberg_metadata_cache_max_entry_size​
  • Unit: Bytes
  • Default value: 8388608, equivalent to 8 MB
  • Description: The maximum size of a file that can be cached. Files whose size exceeds the value of this parameter cannot be cached. If a query requests these files, StarRocks retrieves them from the remote storage.
enable_background_refresh_connector_metadata​
  • Unit: -
  • Default value: true
  • Description: Whether to enable the periodic Iceberg metadata cache refresh. After it is enabled, StarRocks polls the metastore (Hive Metastore or AWS Glue) of your Iceberg cluster, and refreshes the cached metadata of the frequently accessed Iceberg catalogs to perceive data changes. true indicates to enable the Iceberg metadata cache refresh, and false indicates to disable it.
background_refresh_metadata_interval_millis​
  • Unit: Millisecond
  • Default value: 600000
  • Description: The interval between two consecutive Iceberg metadata cache refreshes.
background_refresh_metadata_time_secs_since_last_access_sec​
  • Unit: Second
  • Default value: 86400
  • Description: The expiration time of an Iceberg metadata cache refresh task. For the Iceberg catalog that has been accessed, if it has not been accessed for more than the specified time, StarRocks stops refreshing its cached metadata. For the Iceberg catalog that has not been accessed, StarRocks will not refresh its cached metadata.
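
Mutable FE configuration items such as those above can typically be changed at runtime with ADMIN SET FRONTEND CONFIG; confirm against your version's FE configuration reference which items are mutable. A sketch:

```sql
-- Shorten the periodic metadata refresh interval to 5 minutes (300000 ms).
ADMIN SET FRONTEND CONFIG ("background_refresh_metadata_interval_millis" = "300000");
```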

Appendix A: Periodic Metadata Refresh Strategy​

Iceberg supports snapshots. With the newest snapshot, you can get the newest result. Therefore, only cached snapshots can influence data freshness. As a result, you only need to pay attention to the refresh strategy of the cache that contains snapshots.

The following flowchart shows the time intervals on a timeline.

Timeline for updating and discarding cached metadata

Appendix B: Metadata File Parsing​

  • Distributed Plan for a Large Volume of Metadata

    To handle a large volume of metadata effectively, StarRocks employs a distributed approach using multiple BE and CN nodes. This method leverages the parallel computing capabilities of modern query engines, which can distribute tasks such as reading, decompressing, and filtering manifest files across multiple nodes. By processing these manifest files in parallel, the time required for metadata retrieval is significantly reduced, leading to faster job planning. This is particularly beneficial for large queries involving numerous manifest files, as it eliminates single-point bottlenecks and enhances overall query execution efficiency.

  • Local Plan for a Small Volume of Metadata

    For smaller queries, where the repeated decompression and parsing of manifest files can introduce unnecessary delays, a different strategy is employed. StarRocks caches deserialized memory objects, especially Avro files, to address this issue. By storing these deserialized files in memory, the system can bypass the decompression and parsing stages for subsequent queries. This caching mechanism allows direct access to the required metadata, significantly reducing retrieval times. As a result, the system becomes more responsive and better suited to meet high query demands and materialized view rewriting needs.

  • Adaptive Metadata Retrieval Strategy (Default)

    StarRocks is designed to automatically select the appropriate metadata retrieval method based on various factors, including the number of FE and BE/CN nodes, their CPU core counts, and the number of manifest files required for the current query. This adaptive approach ensures that the system dynamically optimizes metadata retrieval without the need for manual adjustment of metadata-related parameters. By doing so, StarRocks provides a seamless experience, balancing between distributed and local plans to achieve optimal query performance under different conditions.

You can adjust the Iceberg metadata caching plan using the system variable plan_mode.
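
For example, assuming plan_mode accepts values such as auto, local, and distributed (verify against your version's documentation), you could set it for the current session:

```sql
SET plan_mode = 'auto';
```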
