Version: Stable-3.1

SHOW LOAD

Description

Displays the information of all load jobs or of the specified load jobs in a database. This statement can only display load jobs that are created by using Broker Load, Spark Load, or INSERT. You can also view load job information via the curl command. From v3.1 onwards, we recommend that you use the SELECT statement to query the results of Broker Load or INSERT jobs from the loads table in the information_schema database. For more information, see Loading.
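For example, a minimal query of this kind might look as follows (the label broker_load_example is hypothetical; replace it with the label of your own job):

SELECT * FROM information_schema.loads WHERE LABEL = "broker_load_example"\G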

In addition to the preceding loading methods, StarRocks supports using Stream Load and Routine Load to load data. Stream Load is a synchronous operation and directly returns the information of a Stream Load job in the response. Routine Load is an asynchronous operation, for which you can use the SHOW ROUTINE LOAD statement to display the information of Routine Load jobs.
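For instance, a Routine Load job (the job name routine_load_example below is hypothetical) can be inspected as follows:

SHOW ROUTINE LOAD FOR routine_load_example;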

Syntax

SHOW LOAD [ FROM db_name ]
[
WHERE [ LABEL { = "label_name" | LIKE "label_matcher" } ]
[ [AND] STATE = { "PENDING" | "ETL" | "LOADING" | "FINISHED" | "CANCELLED" } ]
]
[ ORDER BY field_name [ ASC | DESC ] ]
[ LIMIT { [offset, ] limit | limit OFFSET offset } ]

Note

You can add the \G option to the statement (for example, SHOW LOAD WHERE LABEL = "label1"\G) to display the output vertically rather than in the usual horizontal table format. For more information, see Example 1.

Parameters

All of the following parameters are optional.

• db_name: The database name. If this parameter is not specified, your current database is used by default.
• LABEL = "label_name": The label of a load job.
• LABEL LIKE "label_matcher": If this parameter is specified, the information of load jobs whose labels contain label_matcher is returned.
• AND:
  • If you specify only one filter condition in the WHERE clause, do not specify this keyword. Example: WHERE STATE = "PENDING".
  • If you specify two or three filter conditions in the WHERE clause, you must specify this keyword. Example: WHERE LABEL = "label_name" AND STATE = "PENDING".
• STATE: The state of load jobs. The states vary based on loading methods.
  • Broker Load
    • PENDING: The load job is created.
    • QUEUEING: The load job is in the queue waiting to be scheduled.
    • LOADING: The load job is running.
    • PREPARED: The transaction has been committed.
    • FINISHED: The load job succeeded.
    • CANCELLED: The load job failed.
  • Spark Load
    • PENDING: Your StarRocks cluster is preparing configurations related to ETL and then submits an ETL job to your Apache Spark™ cluster.
    • ETL: Your Spark cluster is executing the ETL job and then writes the data into the corresponding HDFS cluster.
    • LOADING: The data in the HDFS cluster is being loaded to your StarRocks cluster, which means the load job is running.
    • PREPARED: The transaction has been committed.
    • FINISHED: The load job succeeded.
    • CANCELLED: The load job failed.
  • INSERT
    • FINISHED: The load job succeeded.
    • CANCELLED: The load job failed.
  If the STATE parameter is not specified, the information of load jobs in all states is returned by default. If the STATE parameter is specified, only the information of load jobs in the given state is returned. For example, STATE = "PENDING" returns the information of load jobs in the PENDING state.
• ORDER BY field_name [ASC | DESC]: If this parameter is specified, the output is sorted in ascending or descending order based on a field. The following fields are supported: JobId, Label, State, Progress, Type, EtlInfo, TaskInfo, ErrorMsg, CreateTime, EtlStartTime, EtlFinishTime, LoadStartTime, LoadFinishTime, URL, and JobDetails.
  • To sort the output in ascending order, specify ORDER BY field_name ASC.
  • To sort the output in descending order, specify ORDER BY field_name DESC.
  If you do not specify the field and the sort order, the output is sorted in ascending order of JobId by default.
• LIMIT limit: The maximum number of load jobs to display. If this parameter is not specified, the information of all load jobs that match the filter conditions is displayed. If this parameter is specified, for example, LIMIT 10, only the information of 10 load jobs that match the filter conditions is returned.
• OFFSET offset: The number of load jobs to skip. For example, OFFSET 5 skips the first five load jobs and returns the rest. The default value of offset is 0.
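As a quick illustration of how the preceding parameters can be combined (the label string orders is hypothetical), the following statement returns the five most recently created load jobs in example_db whose labels contain orders and whose state is CANCELLED:

SHOW LOAD FROM example_db
WHERE LABEL LIKE "orders" AND STATE = "CANCELLED"
ORDER BY CreateTime DESC
LIMIT 5;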

Output

+-------+-------+-------+----------+------+----------+---------+----------+----------+------------+--------------+---------------+---------------+----------------+-----+------------+
| JobId | Label | State | Progress | Type | Priority | EtlInfo | TaskInfo | ErrorMsg | CreateTime | EtlStartTime | EtlFinishTime | LoadStartTime | LoadFinishTime | URL | JobDetails |
+-------+-------+-------+----------+------+----------+---------+----------+----------+------------+--------------+---------------+---------------+----------------+-----+------------+

The output of this statement varies based on loading methods.

• JobId: The unique ID assigned by StarRocks to identify the load job in your StarRocks cluster. This field has the same meaning for Broker Load, Spark Load, and INSERT jobs.
• Label: The label of the load job. The label of a load job is unique within a database but can be duplicated across different databases. This field has the same meaning for Broker Load, Spark Load, and INSERT jobs.
• State: The state of the load job.
  • Broker Load
    • PENDING: The load job is created.
    • QUEUEING: The load job is in the queue waiting to be scheduled.
    • LOADING: The load job is running.
    • PREPARED: The transaction has been committed.
    • FINISHED: The load job succeeded.
    • CANCELLED: The load job failed.
  • Spark Load
    • PENDING: Your StarRocks cluster is preparing configurations related to ETL and then submits an ETL job to your Spark cluster.
    • ETL: Your Spark cluster is executing the ETL job and then writes the data into the corresponding HDFS cluster.
    • LOADING: The data in the HDFS cluster is being loaded to your StarRocks cluster, which means the load job is running.
    • PREPARED: The transaction has been committed.
    • FINISHED: The load job succeeded.
    • CANCELLED: The load job failed.
  • INSERT
    • FINISHED: The load job succeeded.
    • CANCELLED: The load job failed.
• Progress: The stage of the load job and its progress.
  • Broker Load: A Broker Load job only has the LOAD stage, which ranges from 0% to 100% to describe the progress of the stage. When the load job enters the LOAD stage, LOADING is returned for the State field. A Broker Load job does not have the ETL stage; the ETL value is valid only for a Spark Load job.
  • Spark Load: A Spark Load job has two stages:
    • ETL: ranges from 0% to 100% to describe the progress of the ETL stage.
    • LOAD: ranges from 0% to 100% to describe the progress of the LOAD stage.
    When the load job enters the ETL stage, ETL is returned for the State field. When the load job moves to the LOAD stage, LOADING is returned for the State field.
  • INSERT: An INSERT job only has the LOAD stage, which ranges from 0% to 100% to describe the progress of the stage. When the load job enters the LOAD stage, LOADING is returned for the State field. An INSERT job does not have the ETL stage; the ETL value is valid only for a Spark Load job.
  Note
  • The formula to calculate the progress of the LOAD stage: (Number of StarRocks tables that have completed data loading / Number of StarRocks tables that you plan to load data into) * 100%.
  • When all data is loaded into StarRocks, 99% is returned for LOAD. The loaded data then starts taking effect in StarRocks. After the data takes effect, 100% is returned for LOAD.
  • The progress of the LOAD stage is not linear. Therefore, the value of LOAD may not change over a period of time even if data loading is still ongoing.
• Type: The method of the load job. The value is BROKER for a Broker Load job, SPARK for a Spark Load job, and INSERT for an INSERT job.
• Priority: The priority of the load job. Valid values: LOWEST, LOW, NORMAL, HIGH, and HIGHEST. This field is valid only for Broker Load jobs.
• EtlInfo: The metrics related to ETL.
  • Broker Load and Spark Load:
    • unselected.rows: The number of rows that are filtered out by the WHERE clause.
    • dpp.abnorm.ALL: The number of rows that are filtered out due to data quality issues, which refer to mismatches between source tables and StarRocks tables in, for example, the data type and the number of columns.
    • dpp.norm.ALL: The number of rows that are loaded into your StarRocks cluster.
    The sum of the preceding metrics is the total number of rows of raw data. You can use the following formula to calculate whether the percentage of unqualified data exceeds the value of the max_filter_ratio parameter: dpp.abnorm.ALL/(unselected.rows + dpp.abnorm.ALL + dpp.norm.ALL). See the worked example after this list.
  • INSERT: An INSERT job does not have the ETL stage. Therefore, NULL is returned.
• TaskInfo: The parameters that are specified when you create the load job.
  • resource: The resource name. This parameter is valid only for a Spark Load job.
  • timeout: The time period that the load job is allowed to run. Unit: seconds.
  • max_filter_ratio: The maximum percentage of rows that can be filtered out due to data quality issues.
  For more information, see BROKER LOAD, SPARK LOAD, or INSERT, depending on the loading method.
• ErrorMsg: The error message returned when the load job fails. When the load job has not failed (for example, its state is PENDING, LOADING, or FINISHED), NULL is returned for the ErrorMsg field. When the state of the load job is CANCELLED, the value returned for the ErrorMsg field consists of two parts: type and msg.
  • The type part can be any of the following values:
    • USER_CANCEL: The load job was manually canceled.
    • ETL_SUBMIT_FAIL: The load job failed to be submitted. For a Spark Load job, this means that StarRocks failed to submit the ETL job to Spark.
    • ETL_RUN_FAIL: The ETL job failed to run. This value is returned only for Spark Load jobs (Spark failed to execute the ETL job) and INSERT jobs.
    • ETL_QUALITY_UNSATISFIED: The load job failed because the percentage of unqualified data exceeds the value of the max_filter_ratio parameter.
    • LOAD_RUN_FAIL: The load job failed in the LOAD stage.
    • TIMEOUT: The load job failed to finish within the specified timeout period.
    • UNKNOWN: The load job failed due to an unknown error.
    • TXN_UNKNOWN: The load job failed because the state of the transaction of the load job is unknown. This value is returned only for INSERT jobs.
  • The msg part provides the detailed cause of the load failure.
• CreateTime: The time at which the load job was created. This field has the same meaning for Broker Load, Spark Load, and INSERT jobs.
• EtlStartTime: The time at which the ETL stage starts. Broker Load and INSERT jobs do not have the ETL stage, so for these jobs the value of this field is the same as the value of the LoadStartTime field.
• EtlFinishTime: The time at which the ETL stage finishes. Broker Load and INSERT jobs do not have the ETL stage, so for these jobs the value of this field is the same as the value of the LoadStartTime field.
• LoadStartTime: The time at which the LOAD stage starts. This field has the same meaning for Broker Load, Spark Load, and INSERT jobs.
• LoadFinishTime: The time at which the load job finishes. This field has the same meaning for Broker Load, Spark Load, and INSERT jobs.
• URL: The URL that you can use to access the unqualified data detected in the load job. You can use the curl or wget command to access the URL and obtain the unqualified data. If no unqualified data is detected, NULL is returned. This field has the same meaning for Broker Load, Spark Load, and INSERT jobs.
• JobDetails: Other information related to the load job. This field has the same meaning for Broker Load, Spark Load, and INSERT jobs.
  • Unfinished backends: The IDs of the BEs that have not completed data loading.
  • ScannedRows: The total number of rows that are loaded into StarRocks and the number of rows that are filtered out.
  • TaskNumber: A load job can be split into one or more tasks that run concurrently. This field indicates the number of load tasks.
  • All backends: The IDs of the BEs that are executing data loading.
  • FileNumber: The number of source data files.
  • FileSize: The data volume of the source data files. Unit: bytes.
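As a worked example of the EtlInfo formula above (the numbers are hypothetical): suppose a load job reports unselected.rows=10, dpp.abnorm.ALL=30, and dpp.norm.ALL=9960. The total number of rows of raw data is 10 + 30 + 9960 = 10,000, and the percentage of unqualified data is 30/10,000 = 0.3%. If max_filter_ratio is set to 0.01 (that is, 1%), the job is not canceled for data quality reasons; if max_filter_ratio is 0, the job fails with ETL_QUALITY_UNSATISFIED.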

Usage notes

  • The information returned by the SHOW LOAD statement is valid for 3 days from the LoadFinishTime of a load job. After 3 days, the information can no longer be displayed. You can use the label_keep_max_second parameter to modify the default validity period (see the example after this list).

    ADMIN SET FRONTEND CONFIG ("label_keep_max_second" = "value");
  • If the value of the LoadStartTime field is N/A for a long time, load jobs are heavily piling up. We recommend that you reduce the frequency of creating load jobs.

  • Total time period consumed by a load job = LoadFinishTime - CreateTime.

  • Total time consumed by a load job in the LOAD stage = LoadFinishTime - LoadStartTime.
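For example, to keep load job information for 7 days (604800 seconds) instead of the default 3 days, you could run the following statement (the value here is only an illustration):

ADMIN SET FRONTEND CONFIG ("label_keep_max_second" = "604800");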

Examples

Example 1: Vertically display all load jobs in your current database.

SHOW LOAD\G
*************************** 1. row ***************************
JobId: 976331
Label: duplicate_table_with_null
State: FINISHED
Progress: ETL:100%; LOAD:100%
Type: BROKER
Priority: NORMAL
EtlInfo: unselected.rows=0; dpp.abnorm.ALL=0; dpp.norm.ALL=65546
TaskInfo: resource:N/A; timeout(s):300; max_filter_ratio:0.0
ErrorMsg: NULL
CreateTime: 2022-10-17 19:35:00
EtlStartTime: 2022-10-17 19:35:04
EtlFinishTime: 2022-10-17 19:35:04
LoadStartTime: 2022-10-17 19:35:04
LoadFinishTime: 2022-10-17 19:35:06
URL: NULL
JobDetails: {"Unfinished backends":{"b90a703c-6e5a-4fcb-a8e1-94eca5be0b8f":[]},"ScannedRows":65546,"TaskNumber":1,"All backends":{"b90a703c-6e5a-4fcb-a8e1-94eca5be0b8f":[10004]},"FileNumber":1,"FileSize":548622}

Example 2: Display two load jobs whose labels contain the string null in your current database.

SHOW LOAD 
WHERE LABEL LIKE "null"
LIMIT 2;

+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| JobId | Label | State | Progress | Type | EtlInfo | TaskInfo | ErrorMsg | CreateTime | EtlStartTime | EtlFinishTime | LoadStartTime | LoadFinishTime | URL | JobDetails |
+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 10082 | duplicate_table_with_null | FINISHED | ETL:100%; LOAD:100% | BROKER | unselected.rows=0; dpp.abnorm.ALL=0; dpp.norm.ALL=65546 | resource:N/A; timeout(s):300; max_filter_ratio:0.0 | NULL | 2022-08-02 14:53:27 | 2022-08-02 14:53:30 | 2022-08-02 14:53:30 | 2022-08-02 14:53:30 | 2022-08-02 14:53:31 | NULL | {"Unfinished backends":{"4393c992-5da1-4e9f-8b03-895dc0c96dbc":[]},"ScannedRows":65546,"TaskNumber":1,"All backends":{"4393c992-5da1-4e9f-8b03-895dc0c96dbc":[10002]},"FileNumber":1,"FileSize":548622} |
| 10103 | unique_table_with_null | FINISHED | ETL:100%; LOAD:100% | SPARK | unselected.rows=0; dpp.abnorm.ALL=0; dpp.norm.ALL=65546 | resource:test_spark_resource_07af473a_1230_11ed_b483_00163e0e550b; timeout(s):300; max_filter_ratio:0.0 | NULL | 2022-08-02 14:56:06 | 2022-08-02 14:56:19 | 2022-08-02 14:56:41 | 2022-08-02 14:56:41 | 2022-08-02 14:56:44 | http://emr-header-1.cluster-49091:20888/proxy/application_1655710334658_26391/ | {"Unfinished backends":{"00000000-0000-0000-0000-000000000000":[]},"ScannedRows":65546,"TaskNumber":1,"All backends":{"00000000-0000-0000-0000-000000000000":[-1]},"FileNumber":1,"FileSize":8790855} |
+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Example 3: Display the load jobs whose labels contain the string table in example_db. In addition, the load jobs returned are displayed in descending order of the LoadStartTime field.

SHOW LOAD FROM example_db 
WHERE LABEL LIKE "table"
ORDER BY LoadStartTime DESC;

+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| JobId | Label | State | Progress | Type | EtlInfo | TaskInfo | ErrorMsg | CreateTime | EtlStartTime | EtlFinishTime | LoadStartTime | LoadFinishTime | URL | JobDetails |
+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 10103 | unique_table_with_null | FINISHED | ETL:100%; LOAD:100% | SPARK | unselected.rows=0; dpp.abnorm.ALL=0; dpp.norm.ALL=65546 | resource:test_spark_resource_07af473a_1230_11ed_b483_00163e0e550b; timeout(s):300; max_filter_ratio:0.0 | NULL | 2022-08-02 14:56:06 | 2022-08-02 14:56:19 | 2022-08-02 14:56:41 | 2022-08-02 14:56:41 | 2022-08-02 14:56:44 | http://emr-header-1.cluster-49091:20888/proxy/application_1655710334658_26391/ | {"Unfinished backends":{"00000000-0000-0000-0000-000000000000":[]},"ScannedRows":65546,"TaskNumber":1,"All backends":{"00000000-0000-0000-0000-000000000000":[-1]},"FileNumber":1,"FileSize":8790855} |
| 10082 | duplicate_table_with_null | FINISHED | ETL:100%; LOAD:100% | BROKER | unselected.rows=0; dpp.abnorm.ALL=0; dpp.norm.ALL=65546 | resource:N/A; timeout(s):300; max_filter_ratio:0.0 | NULL | 2022-08-02 14:53:27 | 2022-08-02 14:53:30 | 2022-08-02 14:53:30 | 2022-08-02 14:53:30 | 2022-08-02 14:53:31 | NULL | {"Unfinished backends":{"4393c992-5da1-4e9f-8b03-895dc0c96dbc":[]},"ScannedRows":65546,"TaskNumber":1,"All backends":{"4393c992-5da1-4e9f-8b03-895dc0c96dbc":[10002]},"FileNumber":1,"FileSize":548622} |
+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Example 4: Display the load job whose label is duplicate_table_with_null and state is FINISHED in example_db.

SHOW LOAD FROM example_db 
WHERE LABEL = "duplicate_table_with_null" AND STATE = "FINISHED";

+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+----------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| JobId | Label | State | Progress | Type | EtlInfo | TaskInfo | ErrorMsg | CreateTime | EtlStartTime | EtlFinishTime | LoadStartTime | LoadFinishTime | URL | JobDetails |
+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+----------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 10082 | duplicate_table_with_null | FINISHED | ETL:100%; LOAD:100% | BROKER | unselected.rows=0; dpp.abnorm.ALL=0; dpp.norm.ALL=65546 | resource:N/A; timeout(s):300; max_filter_ratio:0.0 | NULL | 2022-08-02 14:53:27 | 2022-08-02 14:53:30 | 2022-08-02 14:53:30 | 2022-08-02 14:53:30 | 2022-08-02 14:53:31 | NULL | {"Unfinished backends":{"4393c992-5da1-4e9f-8b03-895dc0c96dbc":[]},"ScannedRows":65546,"TaskNumber":1,"All backends":{"4393c992-5da1-4e9f-8b03-895dc0c96dbc":[10002]},"FileNumber":1,"FileSize":548622} |
+-------+---------------------------+----------+---------------------+--------+---------------------------------------------------------+----------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Example 5: Skip the first load job and display the next two load jobs. The load jobs are sorted in ascending order of CreateTime.

SHOW LOAD FROM example_db 
ORDER BY CreateTime ASC
LIMIT 2 OFFSET 1;

Or

SHOW LOAD FROM example_db 
ORDER BY CreateTime ASC
LIMIT 1,2;

The output of the preceding statements is as follows.

+-------+---------------------------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| JobId | Label | State | Progress | Type | EtlInfo | TaskInfo | ErrorMsg | CreateTime | EtlStartTime | EtlFinishTime | LoadStartTime | LoadFinishTime | URL | JobDetails |
+-------+---------------------------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 10103 | unique_table_with_null | FINISHED | ETL:100%; LOAD:100% | SPARK | unselected.rows=0; dpp.abnorm.ALL=0; dpp.norm.ALL=65546 | resource:test_spark_resource_07af473a_1230_11ed_b483_00163e0e550b; timeout(s):300; max_filter_ratio:0.0 | NULL | 2022-08-02 14:56:06 | 2022-08-02 14:56:19 | 2022-08-02 14:56:41 | 2022-08-02 14:56:41 | 2022-08-02 14:56:44 | http://emr-header-1.cluster-49091:20888/proxy/application_1655710334658_26391/ | {"Unfinished backends":{"00000000-0000-0000-0000-000000000000":[]},"ScannedRows":65546,"TaskNumber":1,"All backends":{"00000000-0000-0000-0000-000000000000":[-1]},"FileNumber":1,"FileSize":8790855} |
| 10120 | insert_3a57b595-1230-11ed-b075-00163e14c85e | FINISHED | ETL:100%; LOAD:100% | INSERT | NULL | resource:N/A; timeout(s):3600; max_filter_ratio:0.0 | NULL | 2022-08-02 14:56:26 | 2022-08-02 14:56:26 | 2022-08-02 14:56:26 | 2022-08-02 14:56:26 | 2022-08-02 14:56:26 | | {"Unfinished backends":{},"ScannedRows":0,"TaskNumber":0,"All backends":{},"FileNumber":0,"FileSize":0} |
+-------+---------------------------------------------+----------+---------------------+--------+---------------------------------------------------------+---------------------------------------------------------------------------------------------------------+----------+---------------------+---------------------+---------------------+---------------------+---------------------+--------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+