From 8559013c5cdf71aadc968fe6b26e2f965567c24c Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Fri, 12 Apr 2024 15:33:25 +0800
Subject: [PATCH 01/63]
v430-beta-300.experience-parallelly-importing-and-data-compression-update
---
...rallelly-importing-and-data-compression.md | 42 ++++++++++++-------
1 file changed, 28 insertions(+), 14 deletions(-)
diff --git a/en-US/200.quickstart/500.experience-advanced-features-of-oceanbase/300.experience-parallelly-importing-and-data-compression.md b/en-US/200.quickstart/500.experience-advanced-features-of-oceanbase/300.experience-parallelly-importing-and-data-compression.md
index 85f64cb6d8..f65148c881 100644
--- a/en-US/200.quickstart/500.experience-advanced-features-of-oceanbase/300.experience-parallelly-importing-and-data-compression.md
+++ b/en-US/200.quickstart/500.experience-advanced-features-of-oceanbase/300.experience-parallelly-importing-and-data-compression.md
@@ -1,14 +1,22 @@
-|description||
+| description ||
|---|---|
-|keywords||
-|dir-name||
-|dir-name-en||
-|tenant-type||
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
# Try out parallel import and data compression
This topic describes how to try out parallel import and data compression.
+Common application scenarios are described as follows:
+
+- **Data migration**: When you migrate a large amount of data from one system to another, you can use parallel import to significantly accelerate the data migration and use data compression to reduce the transmission bandwidth and storage resources required.
+
+- **Backup and restore**: When you back up data, you can use data compression to minimize the size of backup files to reduce the occupied storage space. When you restore data, you can use parallel import to accelerate the restore process.
+
+- **Columnstore table**: When you query data in a columnstore table, only the involved columns are accessed without the need to access entire rows. Compressed columns can still be scanned and processed. This reduces I/O operations, accelerates queries, and improves the overall query performance. After you import data into a columnstore table in batches, we recommend that you initiate a data compression task to improve the read performance, as the sketch after this list illustrates. Note that a major compaction of a columnstore table is slow.
+
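The following minimal sketch illustrates the columnstore workflow described in the last scenario; the table name and columns are hypothetical, `WITH COLUMN GROUP (each column)` is the columnstore clause of OceanBase Database V4.3, and the `MAJOR FREEZE` statement that initiates the compression task also appears later in this topic:

```sql
-- Create a columnstore table (illustrative schema).
CREATE TABLE orders_cs (
  order_id BIGINT PRIMARY KEY,
  amount   DECIMAL(10, 2)
) WITH COLUMN GROUP (each column);

-- After a batch import, initiate a major compaction in the sys tenant
-- to compact and compress the data.
ALTER SYSTEM MAJOR FREEZE;
```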
## Parallel import
In addition to analytical queries, another important part of operational online analytical processing (OLAP) lies in the parallel import of massive amounts of data, namely the batch data processing capability. The parallel execution framework of OceanBase Database allows you to execute data manipulation language (DML) statements in parallel, which is known as parallel DML (PDML) operations. It supports concurrent data writes to multiple servers in a multi-node database without compromising data consistency for large transactions. The asynchronous minor compaction mechanism enhances the support of the LSM-tree-based storage engine for large transactions when memory is tight.
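Before the full walkthrough below, the following minimal sketch shows what a PDML statement looks like; the hints mirror those used later in this topic, and the table names are hypothetical:

```sql
-- Request parallel DML with a DOP of 8 for this statement only.
INSERT /*+ enable_parallel_dml parallel(8) */ INTO target_table
SELECT * FROM source_table;
```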
@@ -18,7 +26,7 @@ This topic describes how to try out the PDML feature of OceanBase Database. Firs
First, copy the schema of the `lineitem` table and create an empty table named `lineitem2`. Table partitioning is used to manage large tables in OceanBase Database. In this example, 16 partitions are used. Therefore, the `lineitem2` table must also be split into 16 partitions:
```shell
-obclient [test]> SHOW CREATE TABLE lineitem\G;
+obclient [test]> SHOW CREATE TABLE lineitem\G
*************************** 1. row ***************************
Table: lineitem
@@ -91,7 +99,7 @@ obclient [test]> TRUNCATE TABLE lineitem2;
obclient [test]> INSERT /*+ parallel(16) enable_parallel_dml */ INTO lineitem2 SELECT * FROM lineitem;
```
-Execution result:
+The execution result is as follows:
```shell
obclient> TRUNCATE TABLE lineitem2;
@@ -142,7 +150,9 @@ OceanBase Database allows you to import data from CSV files in multiple ways. Th
...
```
-3. Create a table named `t_f1` in the `test` database of the `test` tenant. For more information about how to create a tenant, see "Manage tenants."
+3. Create a table named `t_f1` in the `test` database of the `test` tenant.
+
+ For more information about how to create a tenant, see "Manage tenants."
```shell
# obclient -hxxx.xxx.xxx.xxx -P2881 -uroot@test -Dtest -A -p -c
@@ -161,7 +171,7 @@ obclient [test]> GRANT FILE ON *.* to username;
Notice
- For security reasons, the SQL statement for modifying secure_file_priv can only be executed locally and cannot be executed on a remote OBClient. In other words, you must log on to an OBClient (or a MySQL client) on the server where the observer process is located to execute it. For more information, see secure_file_priv.
+
For security reasons, the SQL statement for modifying secure_file_priv can only be executed locally and cannot be executed on a remote OBClient. For more information, see secure_file_priv.
After you complete the settings, reconnect to the session to make the settings take effect. Then, set the transaction timeout period for the session to ensure that the execution does not exit due to timeout.
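The two settings just described can be sketched as follows; the path and timeout values are illustrative, and the timeouts (`ob_query_timeout` and `ob_trx_timeout`, in microseconds) are session variables of OceanBase Database:

```sql
-- Must be executed from a local connection on the OBServer host:
SET GLOBAL secure_file_priv = '/home/admin';

-- After reconnecting, lengthen the session timeouts so that the
-- import does not exit due to timeout (values are illustrative):
SET ob_query_timeout = 1000000000;
SET ob_trx_timeout = 1000000000;
```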
@@ -181,7 +191,9 @@ You can see that OceanBase Database takes about 4 minutes to import 10 GB of dat
After the data is imported, check the number of records in the table and the space occupied by the table.
-1. The table contains 50 million records.
+1. Check the number of records in the table.
+
+ The table contains 50 million records.
```shell
obclient [test]> SELECT COUNT(*) FROM t_f1;
@@ -192,16 +204,18 @@ After the data is imported, check the number of records in the table and the spa
+----------+
```
-2. Trigger a major compaction.
+2. Initiate a major compaction.
- To check the compression result of baseline data, log on as the `root` user to the `sys` tenant of the cluster and initiate a major compaction to compact and compress the incremental data and baseline data. You can execute the following SQL statement to initiate a major compaction:
+ To check the compression results of baseline data, log on as the `root` user to the `sys` tenant of the cluster and initiate a major compaction to compact and compress the incremental data and baseline data. You can execute the following SQL statement to initiate a major compaction:
```shell
# obclient -h127.0.0.1 -P2881 -uroot@sys -Doceanbase -A -p -c
obclient[oceanbase]> ALTER SYSTEM MAJOR FREEZE;
```
-3. Execute the following SQL statement to query the state of the major compaction. If the statement returns `IDLE`, the major compaction is completed.
+3. Run the following command to query the state of the major compaction.
+
+ If the result returns `IDLE`, the major compaction is completed.
```shell
obclient [oceanbase]> SELECT * FROM oceanbase.CDB_OB_MAJOR_COMPACTION;
@@ -217,7 +231,7 @@ After the data is imported, check the number of records in the table and the spa
4. Execute the following SQL statement in the `sys` tenant to query the space occupied by the imported data:
```shell
- obclient [oceanbase]> select b.table_name,a.svr_ip,data_size/1024/1024/1024 from CDB_OB_TABLET_REPLICAS a,DBA_OB_TABLE_LOCATIONS b where a.tablet_id=b.tablet_id and b.table_name='T_F1';
+ obclient [oceanbase]> select b.table_name,a.svr_ip,data_size/1024/1024/1024 from CDB_OB_TABLET_REPLICAS a,CDB_OB_TABLE_LOCATIONS b where a.tablet_id=b.tablet_id and b.table_name='T_F1';
+------------+---------------+----------------------------+
| table_name | svr_ip | a.data_size/1024/1024/1024 |
+------------+---------------+----------------------------+
From 296df095e2b780d0c0bbc04ffc2cffbe138d49b7 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Fri, 12 Apr 2024 16:32:39 +0800
Subject: [PATCH 02/63]
v430-beta-200.differences-between-enterprise-edition-and-community-edition-update
---
...nterprise-edition-and-community-edition.md | 21 +++++++++++--------
1 file changed, 12 insertions(+), 9 deletions(-)
diff --git a/en-US/100.learn-more-about-oceanbase/200.differences-between-enterprise-edition-and-community-edition.md b/en-US/100.learn-more-about-oceanbase/200.differences-between-enterprise-edition-and-community-edition.md
index 5ed08b2311..87538fa55e 100644
--- a/en-US/100.learn-more-about-oceanbase/200.differences-between-enterprise-edition-and-community-edition.md
+++ b/en-US/100.learn-more-about-oceanbase/200.differences-between-enterprise-edition-and-community-edition.md
@@ -27,6 +27,7 @@ The following table describes the features supported by these two editions.
| High availability | Five IDCs across three regions | Supported | Supported |
| High availability | Transparent horizontal scaling | Supported | Supported |
| High availability | Multi-tenant management | Supported | Supported |
+| High availability | Tenant cloning | Supported | Supported |
| High availability | Data backup and restore | Supported | Supported |
| High availability | Resource isolation | Supported | Supported |
| High availability | Physical standby database | Supported | Supported |
@@ -38,22 +39,24 @@ The following table describes the features supported by these two editions.
| Compatibility | Compatibility with Oracle syntax | Supported | Not supported |
| Compatibility | XA transactions | Supported | Not supported |
| Compatibility | Table locks | Supported | Not supported |
-| Compatibility | Function indexes | Supported | Supported |
+| Compatibility | Function-based indexes | Supported | Supported |
| High performance | Cost-based optimizer | Supported | Supported |
| High performance | Optimization and rewriting of complex queries | Supported | Supported |
| High performance | Parallel execution engine | Supported | Supported |
| High performance | Vectorized engine | Supported | Supported |
-| High performance | Advanced SQL execution plan management (SPM) | Supported | Not supported |
+| High performance | Columnar engine | Supported | Supported |
+| High performance | Advanced SQL plan management (SPM) | Supported | Not supported |
| High performance | Minimum specifications | Supported | Supported |
| High performance | Paxos-based log transmission | Supported | Supported |
| High performance | Distributed strong-consistency transactions, complete atomicity, consistency, isolation, and durability (ACID), and multi-version support | Supported | Supported |
-| High performance | Data partitioning (RANGE, HASH, or LIST) | Supported | Supported |
+| High performance | Data partitioning (RANGE, HASH, and LIST) | Supported | Supported |
| High performance | Global indexes | Supported | Supported |
| High performance | Advanced compression | Supported | Supported |
| High performance | Dynamic sampling | Supported | Supported |
-| High performance | Auto DOP | Supported | Supported |
-| Cross-data source access | Read-only foreign tables (CSV format) | Supported | Supported |
-| Cross-data source access | DBLink | Supported | Not supported |
+| High performance | Auto degree of parallelism (DOP) | Supported | Supported |
+| High performance | Materialized views | Supported | Supported |
+| Cross-data source access | Read-only external tables (in the CSV format) | Supported | Supported |
+| Cross-data source access | DBLink | Supported | Supported |
| Multimodel | TableAPI | Supported | Supported |
| Multimodel | HbaseAPI | Supported | Supported |
| Multimodel | JSON | Supported | Supported |
@@ -63,11 +66,11 @@ The following table describes the features supported by these two editions.
| Security | Privilege management | Supported | Supported |
| Security | Communication encryption | Supported | Supported |
| Security | Advanced security scaling | Supported | Not supported. OceanBase Database Community Edition does not support row-level labels or transparent data encryption (TDE) for data and logs. |
-| O&M management | Full-link diagnostics | Supported | Supported |
+| O&M management | End-to-end diagnostics | Supported | Supported |
| O&M management | O&M components (liboblog and ob_admin) | Supported | Supported |
| O&M management | OBLOADER & OBDUMPER | Supported | Supported |
-| O&M management | GUI-based development and management tools | Supported | Supported. OceanBase Database Community Edition supports GUI-based development and management tools such as OceanBase Cloud Platform (OCP), OceanBase Migration Service (OMS), and OceanBase Developer Center (ODC). You can download these tools for free. However, OceanBase Migration Assessment (OMA) is charged. |
+| O&M management | GUI-based development and management tools | Supported | Supported. OceanBase Database Community Edition supports GUI-based development and management tools such as OceanBase Cloud Platform (OCP), OceanBase Migration Service (OMS), and OceanBase Developer Center (ODC). You can download these tools for free. However, OceanBase Migration Assessment (OMA) is a paid service. |
| Support and services | Technical consultation (on products) | Supported | OceanBase Database Community Edition provides only community-based technical consultation on products. No commercial expert team is provided for technical consultation. |
-| Support and services | Service acquisition (channels for obtaining technical support) | Professional commercial support team | OceanBase Database Community Edition provides online service consultation only on its official website or in its official community and does not provide commercial expert teams. |
+| Support and services | Service acquisition (channels for obtaining technical support) | Commercial expert team | OceanBase Database Community Edition provides online service consultation only on its official website or in its official community and does not provide commercial expert teams. |
| Support and services | Expert services (planning, implementation, inspection, fault recovery, and production assurance) | On-site services by commercial experts | OceanBase Database Community Edition does not provide expert assurance services. |
| Support and services | Response to faults | 24/7 services | OceanBase Database Community Edition does not provide emergency troubleshooting services. |
From dde7e318c430827c86fea8c86f3671da4fcaf671 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Fri, 12 Apr 2024 17:44:35 +0800
Subject: [PATCH 03/63] v430-beta-500.data-migration-update
---
.../100.overview-of-bypass-import.md | 10 +-
...ad-data-statement-to-bypass-import-data.md | 79 ++++++++++---
...-select-statement-to-bypass-import-data.md | 8 +-
...ad-csv-data-files-to-oceanbase-database.md | 105 +++++++++---------
...csv-data-file-to-the-oceanbase-database.md | 56 +++++-----
5 files changed, 150 insertions(+), 108 deletions(-)
diff --git a/en-US/500.data-migration/1100.bypass-import/100.overview-of-bypass-import.md b/en-US/500.data-migration/1100.bypass-import/100.overview-of-bypass-import.md
index d5ceb22759..1405d8897c 100644
--- a/en-US/500.data-migration/1100.bypass-import/100.overview-of-bypass-import.md
+++ b/en-US/500.data-migration/1100.bypass-import/100.overview-of-bypass-import.md
@@ -29,7 +29,7 @@ At present, OceanBase Database supports the following statements for bypass impo
Note
-- Column store tables support bypass import.
+- Columnstore tables support bypass import.
- LOB columns support bypass import.
@@ -37,11 +37,9 @@ At present, OceanBase Database supports the following statements for bypass impo
* Limitations on the `LOAD DATA` statement are as follows:
- * You cannot execute two statements to write the same table at a time because a lock is added to the table during import.
-
- * The statement cannot be used in triggers.
-
- * The statement cannot be executed in a multi-row transaction.
+ * Only one `LOAD DATA` statement can be executed on a table at a time because the statement will lock the table.
+ * The `LOAD DATA` statement is not supported in triggers.
+ * The `LOAD DATA` statement is not supported in a multi-row transaction that contains multiple operations.
* Limitations on the `INSERT INTO SELECT` statement are as follows:
diff --git a/en-US/500.data-migration/1100.bypass-import/200.use-load-data-statement-to-bypass-import-data.md b/en-US/500.data-migration/1100.bypass-import/200.use-load-data-statement-to-bypass-import-data.md
index b0f74af8de..a0ce7e59ca 100644
--- a/en-US/500.data-migration/1100.bypass-import/200.use-load-data-statement-to-bypass-import-data.md
+++ b/en-US/500.data-migration/1100.bypass-import/200.use-load-data-statement-to-bypass-import-data.md
@@ -11,48 +11,89 @@ In the `LOAD DATA` statement, the `direct` keyword is used as a hint to indicate
Note
-- Column store tables support bypass import.
-- LOB columns support bypass import.
+- Columnstore tables support bypass import.
+- LOB columns support bypass import.
## Limitations
-* You cannot execute two statements to write the same table at a time because a lock is added to the table during import.
-
-* The `LOAD DATA` statement cannot be used in triggers.
-
-* The `LOAD DATA` statement cannot be executed in a multi-row transaction.
+* Only one `LOAD DATA` statement can be executed on a table at a time because the statement will lock the table.
+* The `LOAD DATA` statement is not supported in triggers.
+* The `LOAD DATA` statement is not supported in a multi-row transaction that contains multiple operations.
## Considerations
-To speed up data import, OceanBase Database adopts a parallel design for `LOAD DATA` operations. During the process, data to be imported is divided into multiple subtasks, which are executed in parallel. Each subtask is processed as an independent transaction in a random order. Therefore, observe the following considerations:
+OceanBase Database uses parallel processing to optimize the data import rate of the `LOAD DATA` statement. In parallel processing, data is split into multiple subtasks for parallel execution. Each subtask is considered an independent transaction. The execution sequence of the subtasks is not fixed. Therefore:
-* The atomicity of the overall data import operation is not guaranteed.
-* For a table without a primary key, data may be written to the table in an order different from that in the source file.
+* Global atomicity cannot be ensured during the data import.
+* For a table without a primary key, data may be written in a sequence different from that in the original file.
## Syntax
-```sql
-LOAD DATA /*+ direct(need_sort,max_error) parallel(N) */ INFILE 'file_name' ...
-```
+- The syntax for importing data from a single file is as follows:
+
+ ```sql
+ LOAD DATA /*+ direct(need_sort,max_error) parallel(N) */ [REMOTE_OSS | LOCAL] INFILE 'file_name' ...
+ ```
+
+- The syntax for importing data from multiple files is as follows:
+
+ ```sql
+ LOAD DATA /*+ direct(need_sort,max_error) parallel(N) */ [LOCAL] INFILE 'file_name1,file_name2,...' INTO TABLE tbl_name FIELDS [TERMINATED BY 'string'] ...
+ ```
For more information about the syntax of the `LOAD DATA` statement, see [LOAD DATA](../../700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/5900.load-data-of-mysql-mode.md).
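As a concrete sketch of the single-file form, with an illustrative DOP, file path, table name, and delimiter:

```sql
LOAD DATA /*+ direct(true, 0) parallel(16) */ INFILE '/home/admin/data.csv'
INTO TABLE t1 FIELDS TERMINATED BY ',';
```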
-The following table describes the parameters.
+**The parameters are described as follows:**
| Parameter | Description |
|------|------|
| direct | Specifies to use bypass import. |
+| REMOTE_OSS \| LOCAL | This parameter is optional. `REMOTE_OSS` specifies whether to read data from an OSS file system. **Notice**: If this parameter is specified, `file_name` must be an OSS address. `LOCAL` specifies whether to read data from the file system of the local client. If you do not specify the `LOCAL` parameter, data will be read from the file system of an OBServer node. |
| need_sort | Specifies whether OceanBase Database needs to sort the data. The value is of the Boolean type. `true`: Data sorting is needed. `false`: Data sorting is not needed. |
| max_error | The maximum number of erroneous rows allowed. The value is of the INT type. If this value is exceeded, the `LOAD DATA` statement returns an error. |
| parallel(N) | The degree of parallelism (DOP) for loading data. The default value of `N` is `4`. |
+## Use wildcards to import data from multiple files
+
+When you import data in bypass mode, you can specify only one file in Alibaba Cloud Object Storage Service (OSS) as the file source; to import multiple OSS files, you must execute the `LOAD DATA` statement repeatedly. For data files stored in the file system of a cluster, separate the file names with commas (,). When you import multiple data files from the file system of a cluster or from Alibaba Cloud OSS, you can also use wildcards to specify the file names. Wildcards are not supported when the file source is a client disk.
+
+**Import data from the file system of a cluster in bypass mode:**
+
+- Here are some examples:
+
+ - Use wildcards to match file names: `load data /*+ parallel(20) direct(true, 0) */ infile '/xxx/test.*.csv' replace into table t1 fields terminated by '|';`
+
+ - Use wildcards to match a directory: `load data /*+ parallel(20) direct(true, 0) */ infile '/aaa*bb/test.1.csv' replace into table t1 fields terminated by '|';`
+
+ - Use wildcards to match a directory and file names: `load data /*+ parallel(20) direct(true, 0) */ infile '/aaa*bb/test.*.csv' replace into table t1 fields terminated by '|';`
+
+- Take note of the following considerations:
+
+ - At least one file must be matched. Otherwise, the error `OB_FILE_NOT_EXIST` will be returned.
+
+ - For `load data /*+ parallel(20) direct(true, 0) */ infile '/xxx/test.1*.csv,/xxx/test.6*.csv' replace into table t1 fields terminated by '|';`, `/xxx/test.1*.csv,/xxx/test.6*.csv` is matched as a whole. If no file or directory is matched, the error `OB_FILE_NOT_EXIST` will be returned.
+
+ - Only wildcards compatible with the GLOB function in the Portable Operating System Interface (POSIX) are supported. For example, `test.6*(6|0).csv` and `test.6*({0.csv,6.csv}|.csv)` work with the `ls` command but are not supported by the GLOB function, so the error `OB_FILE_NOT_EXIST` will be returned.
+
+**Import data from Alibaba Cloud OSS in bypass mode:**
+
+- Here is an example:
+
+ - Use wildcards to match file names: `load data /*+ parallel(20) direct(true, 0) */ remote_oss infile 'oss://xxx/test.*.csv?host=xxx&access_id=xxx&access_key=xxx' replace into table t1 fields terminated by '|';`
+
+- Take note of the following considerations:
+
+ - You cannot use wildcards to match a directory. For example, if you execute the statement `load data /*+ parallel(20) direct(true, 0) */ remote_oss infile 'oss://aa*bb/test.*.csv?host=xxx&access_id=xxx&access_key=xxx' replace into table t1 fields terminated by '|';`, the error `OB_NOT_SUPPORTED` will be returned.
+
+ - Only the asterisk (`*`) and the question mark (`?`) are supported as wildcards for file names. You can enter other wildcards, but they will not match any result.
+
## Examples
-
- Note
- The following example shows how to import data from a server file. In OceanBase Database, you can also use the LOCAL INFILE clause in the LOAD DATA statement to import local files in bypass mode. For an example of the LOAD DATA LOCAL INFILE statement, see Import data by using the LOAD DATA statement.
-
+
+ Note
+ The following example shows how to import data from a server file. In OceanBase Database, you can also use the LOCAL INFILE clause in the LOAD DATA statement to import local files in bypass mode. For an example of the LOAD DATA LOCAL INFILE statement, see Import data by using the LOAD DATA statement.
+
1. Log on to the server where the target OBServer node resides and create test data in the `/home/admin` directory.
@@ -125,5 +166,7 @@ The following table describes the parameters.
## References
* For more information about how to use the `LOAD DATA` statement, see [Import data by using the LOAD DATA statement](../700.migrate-data-from-csv-file-to-oceanbase-database/200.use-the-load-command-to-load-the-csv-data-file-to-the-oceanbase-database.md).
+
* For more information about how to connect to a database, see [Overview](../../300.develop/100.application-development-of-mysql-mode/100.connect-to-oceanbase-database-of-mysql-mode/100.connection-methods-overview-of-mysql-mode.md).
+
* For more information about how to drop a table, see [Drop a table](../../700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/800.delete-a-table-of-oracle-mode.md).
\ No newline at end of file
diff --git a/en-US/500.data-migration/1100.bypass-import/300.use-insert-into-select-statement-to-bypass-import-data.md b/en-US/500.data-migration/1100.bypass-import/300.use-insert-into-select-statement-to-bypass-import-data.md
index 0eeed8f5d3..9b3d027f32 100644
--- a/en-US/500.data-migration/1100.bypass-import/300.use-insert-into-select-statement-to-bypass-import-data.md
+++ b/en-US/500.data-migration/1100.bypass-import/300.use-insert-into-select-statement-to-bypass-import-data.md
@@ -11,8 +11,8 @@ The `INSERT INTO SELECT` statement uses the `append` keyword and the `enable_par
Note
-- Column store tables support bypass import.
-- LOB columns support bypass import.
+- Columnstore tables support bypass import.
+- LOB columns support bypass import.
## Limitations
@@ -23,7 +23,7 @@ The `INSERT INTO SELECT` statement uses the `append` keyword and the `enable_par
* The `INSERT INTO SELECT` statement cannot be used in triggers.
-* The `INSERT INTO SELECT` statement cannot be executed in a multi-row transaction.
+* The `INSERT INTO SELECT` statement cannot be executed in a multi-row transaction that contains multiple operations.
## Syntax
@@ -33,7 +33,7 @@ INSERT /*+ append enable_parallel_dml parallel(N) */ INTO table_name select_sen
For more information about the syntax of the `INSERT INTO` statement, see [INSERT (MySQL mode)](../../700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/5700.insert-sql-of-mysql-mode.md) and [INSERT (Oracle mode)](../../700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/200.dml-of-oracle-mode/200.insert-of-oracle-mode.md).
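As a concrete sketch of the syntax above, with hypothetical table names and an illustrative DOP:

```sql
INSERT /*+ append enable_parallel_dml parallel(8) */ INTO t2
SELECT * FROM t1;
```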
-The following table describes the parameters.
+**The parameters are described as follows:**
| Parameter | Description |
|------|------|
diff --git a/en-US/500.data-migration/700.migrate-data-from-csv-file-to-oceanbase-database/100.use-datax-to-load-csv-data-files-to-oceanbase-database.md b/en-US/500.data-migration/700.migrate-data-from-csv-file-to-oceanbase-database/100.use-datax-to-load-csv-data-files-to-oceanbase-database.md
index b95dcae5a3..301790e72d 100644
--- a/en-US/500.data-migration/700.migrate-data-from-csv-file-to-oceanbase-database/100.use-datax-to-load-csv-data-files-to-oceanbase-database.md
+++ b/en-US/500.data-migration/700.migrate-data-from-csv-file-to-oceanbase-database/100.use-datax-to-load-csv-data-files-to-oceanbase-database.md
@@ -7,9 +7,9 @@
# Use DataX to migrate CSV files to OceanBase Database
-DataX is an open-source version of Alibaba Cloud DataWorks. It is an offline data synchronization tool widely used by Alibaba Group. It efficiently synchronizes data between heterogeneous data sources such as MySQL, Oracle, SQL Server, PostgreSQL, Hadoop Distributed File System (HDFS), Hive, ADS, HBase, Tablestore (OTS), MaxCompute (formerly known as ODPS), Hologres, Distributed Relational Database Service (DRDS), and OceanBase Database.
+DataX is the open-source version of Alibaba Cloud DataWorks. It is an offline data synchronization tool widely used in Alibaba Group. DataX efficiently synchronizes data between heterogeneous data sources such as MySQL, Oracle, SQL Server, PostgreSQL, Hadoop Distributed File System (HDFS), Hive, ADS, HBase, Tablestore (OTS), MaxCompute (formerly known as ODPS), Distributed Relational Database Service (DRDS), and OceanBase Database.
-If you use OceanBase Database Enterprise Edition, you can request the internal version of DataX (RPM package) from OceanBase Technical Support. If you use OceanBase Database Community Edition, you can download the source code from the [open-source website of DataX](https://github.com/alibaba/datax) and then compile the code. During compilation, remove unused database plug-ins from the `pom.xml` file. Otherwise, the compiled package will be very large.
+If you use OceanBase Database Enterprise Edition, you can obtain the RPM package of the internal version of DataX from OceanBase Technical Support. If you use OceanBase Database Community Edition, you can download the source code from the [DataX open-source website](https://github.com/alibaba/datax) and compile the code. During compilation, you can delete database plug-ins that you do not need from the `pom.xml` file to control the size of the compiled package.
## Framework design
@@ -21,26 +21,26 @@ DataX is an offline data synchronization framework that is designed based on the
* The writer plug-in is a data write module that retrieves data from the framework and writes the data to the destination.
-* The framework builds a data transmission channel to connect the reader and the writer and processes core technical issues such as caching, throttling, concurrency control, and data conversion.
+* The framework builds a data transmission channel to connect the reader and the writer and processes core technical issues such as caching, throttling, concurrency, and data conversion.
-DataX migrates data through tasks. Each task migrates only one table and has a configuration file in `JSON` format. The configuration file contains two sections: `reader` and `writer`. `reader` and `writer` respectively correspond to the database reader and writer plug-ins supported by DataX. For example, when you migrate table data from a MySQL database to an OceanBase database, the `txtfilereader` plug-in of MySQL and the `oceanbasev10writer` plug-in of OceanBase Database are used to respectively read data from the MySQL database and write the data to the OceanBase database. The following sections describe the `txtfilereader` and `oceanbasev10writer` plug-ins.
+DataX migrates data through tasks. Each task processes only one table and has a configuration file in the `JSON` format. The configuration file contains two parts: `reader` and `writer`, which respectively correspond to the read and write plug-ins supported by DataX. For example, when you migrate table data exported from a MySQL database to OceanBase Database as a CSV file, the data must be read from the CSV file and then written to OceanBase Database. In this case, the `txtfilereader` plug-in and the `oceanbasev10writer` plug-in of OceanBase Database are used. The following sections describe the `txtfilereader` and `oceanbasev10writer` plug-ins.
+### `txtfilereader` plug-in
-### `txtfilereader`
+The `txtfilereader` plug-in reads data from the local file system, converts the data, and transfers the data to the writer through DataX.
-The `txtfilereader` plug-in reads data from the local file system. In underlying implementation, `txtfilereader` obtains data from local files, converts the data into data that complies with the DataX transmission protocol, and then passes the data to the writer.
+
-
+### `oceanbasev10writer` plug-in
-### `oceanbasev10writer`
+The `oceanbasev10writer` plug-in writes data to the destination table in OceanBase Database.
+To be specific, the `oceanbasev10writer` plug-in connects to a remote OceanBase database from a Java client (MySQL JDBC or OBClient) by using OceanBase Database Proxy (ODP) and executes the corresponding `INSERT` statement to write the data to the remote OceanBase database. The data is committed in the OceanBase database in batches.
-The `oceanbasev10writer` plug-in writes data to the destination table in the OceanBase database. It connects to a remote OceanBase database from a Java client (MySQL JDBC or OBClient) by using ODP and executes the corresponding `INSERT` statement to write the data to the remote OceanBase database. The data is committed to the remote OceanBase database in batches.
-
-`oceanbasev10writer` uses the DataX framework to obtain the protocol data generated by the reader and then generates an insert statement. If a primary key or unique key conflict occurs when data is written, you can update all fields in the table by using the `replace` mode for a MySQL tenant of OceanBase Database, and only the insert mode for an Oracle tenant of OceanBase Database. For performance purposes, the batch write mode is used. A write request is initiated only when the number of rows reaches the specified threshold.
+`oceanbasev10writer` uses the DataX framework to obtain the protocol data generated by the reader and then generates an `INSERT` statement. If a primary key or unique key conflict occurs during data writes, you can update all fields in the table by using the `replace` mode for a MySQL tenant of OceanBase Database, and by using only the insert mode for an Oracle tenant of OceanBase Database. For performance purposes, the batch write mode is used. A write request is initiated only when the number of rows reaches the specified threshold.
### DataX configuration file
-The following is an example configuration file:
+The following example shows the content of the configuration file:
```json
{
@@ -81,12 +81,12 @@ The following is an example configuration file:
}
```
-
- Notice
- DataX migrates only the data of a table. Therefore, you must create the schema of the table in the destination database in advance.
-
+
+ Notice
+ DataX migrates only table data. Therefore, you need to create the corresponding table schema in the destination database in advance.
+
-Place the `JSON` configuration file in the `job` directory of DataX or in a custom path. Run the following command:
+Place the `JSON` configuration file in the `job` directory of DataX or in a custom path. Here is a sample command:
```shell
$bin/datax.py job/stream2stream.json
@@ -94,29 +94,29 @@ $bin/datax.py job/stream2stream.json
The output is as follows:
-```bash
+```shell
<.....>
2021-08-26 11:06:09.217 [job-0] INFO JobContainer - PerfTrace not enable!
2021-08-26 11:06:09.218 [job-0] INFO StandAloneJobContainerCommunicator - Total 20 records, 380 bytes | Speed 38B/s, 2 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2021-08-26 11:06:09.223 [job-0] INFO JobContainer -
-Task start time : 2021-08-26 11:05:59
-Task end time : 2021-08-26 11:06:09
-Time consumption : 10s
-Average task traffic : 38 B/s
-Record writing speed : 2rec/s
-Total number of read records : 20
-Total read and write failures : 0
+Task start time: 2021-08-26 11:05:59
+Task end time: 2021-08-26 11:06:09
+Time consumption: 10s
+Average task traffic: 38 B/s
+Record writing speed: 2 rec/s
+Total number of read records: 20
+Total read and write failures: 0
```
-After DataX executes a task, it generates a simple task report that covers the preceding average traffic, write speed, and total number of read/write failures.
+After a DataX task is executed, a simple task report is returned, providing information such as the average output traffic, write speed, and total number of read and write failures.
-You can specify the speed and error record limit in the job parameter `settings` of DataX.
+The `setting` parameter of a DataX job defines the speed and the error record tolerance.
```json
"setting": {
"speed": {
- "channel": 10
+ "channel": 10
},
"errorLimit": {
"record": 10,
@@ -125,14 +125,14 @@ You can specify the speed and error record limit in the job parameter `settings`
}
```
-The parameters are described as follows:
+where
-- `errorLimit`: the limit on the number of error records. When this limit is exceeded, the task is terminated.
-- `channel`: the concurrency. Technically, a higher concurrency value indicates higher migration performance. In actual operations, you must also consider the read pressure on the source database, network transmission performance, and write performance of the destination database.
+* `errorLimit` specifies the maximum number of erroneous records allowed for the task. If this limit is exceeded, the task is interrupted and exits.
+* `channel` specifies the number of concurrent channels. A larger number of channels indicates higher migration performance. However, you must also consider the read stress on the source database, network transmission performance, and write performance of the destination database. A complete `setting` sketch follows this list.
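Putting the two together, a complete `setting` section that caps concurrency at 10 channels and tolerates at most 10 erroneous records or a 2% error rate might look like the following sketch; the values are illustrative:

```json
"setting": {
  "speed": {
    "channel": 10
  },
  "errorLimit": {
    "record": 10,
    "percentage": 0.02
  }
}
```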
## Prepare the environment
-Download the .tar package from http://datax-opensource.oss-cn-hangzhou.aliyuncs.com/datax.tar.gz.
+Download the .tar package from `http://datax-opensource.oss-cn-hangzhou.aliyuncs.com/datax.tar.gz`.
Decompress the installation package:
@@ -140,7 +140,8 @@ Decompress the installation package:
tar zxvf datax.tar.gz
cd datax
```
-The directories are as follows:
+
+The directory structure is as follows:
```shell
$tree -L 1 --filelimit 30
@@ -156,19 +157,18 @@ $tree -L 1 --filelimit 30
└── tmp
```
-The following table describes the directories in the installation package.
-
-| Directory name | Description |
-| --- | --- |
-| bin | The directory where the executable file is located. The datax.py file in this directory is the startup script of DataX tasks. |
-| conf | The directory where log files are located. The DataX configuration files unrelated to tasks are stored in this directory. |
-| lib | The directory where the libraries required for running are located. The global .jar files required for the running of DataX are stored in this directory. |
-| job | The directory where the task configuration file for verifying DataX installation is located. |
-| log | The directory where log files are located. The running logs of DataX tasks are stored in this directory. By default, when DataX runs, standard logs are generated and written to the log directory. |
-| plugin | The directory where the plug-in files are located. The data source plug-ins supported by DataX are stored in this directory. |
+The following table describes some important directories in the installation package:
+| Directory | Description |
+| ------ | ------ |
+| bin | The directory where the executable files are located. The `datax.py` file in this directory is the startup script of DataX tasks. |
+| conf | The directory where the configuration files are located. This directory stores DataX configuration files that are not related to tasks. |
+| lib | The directory where the dependency libraries are located. This directory stores the global .jar files required for running DataX. |
+| job | This directory contains a task configuration file for testing and verifying the installation of DataX. |
+| log | The log directory. This directory stores the running logs of DataX tasks. During DataX runtime, logs are output to the standard output and written to the log directory by default. |
+| plugin | The plug-in directory. This directory stores various data source plug-ins supported by DataX. |
-## Use DataX to migrate a CSV file to OceanBase Database
+## Examples
Copy the CSV file exported from the source to the destination DataX server, and then import it into the destination OceanBase database.
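Assuming the source is a MySQL database, the CSV file could, for example, be produced with a standard `SELECT ... INTO OUTFILE` statement before being copied over; the database, table, and path names are hypothetical:

```sql
SELECT * FROM test.t1
INTO OUTFILE '/tmp/t1.csv'
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
```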
@@ -230,23 +230,24 @@ The `myjob.json` configuration file is as follows:
```
+The parameters are described in the following table.
| Parameter | Description |
|----------|--------------------|
-| name | The name of the reader or writer plug-in for connecting to the database. The reader plug-in of MySQL is `mysqlreader`, and the writer plug-in of OceanBase Database is `oceanbasev10writer`. For more information about the reader and writer plug-ins, see [DataX data source guide](https://github.com/alibaba/datax). |
-| jdbcUrl | The JDBC URL of the database to which you want to connect. The value is a JSON array and multiple URLs can be entered for a database. You need to enter at least one JDBC URL in the JSON array. The value must be entered in compliance with the MySQL official format. You can also specify a configuration property in the URL. For more information, see [Configuration Properties](http://dev.mysql.com/doc/connector-j/en/connector-j-reference-configuration-properties.html) in the MySQL documentation. **Notice**The JDBC URL must be included in the connection section of the code. You must connect to OceanBase Database by using ODP. The default port is 2883. The JDBC URL of the writer does not need to be enclosed with square brackets (`[]`) but the JDBC URL of the reader must be enclosed with square brackets (`[]`). Required: Yes. Default value: None. |
+| name | The name of the database plug-in corresponding to the reader or writer that connects to the database. The reader plug-in of MySQL Database is `mysqlreader`, and the writer plug-in of OceanBase Database is `oceanbasev10writer`. For more information about the reader and writer plug-ins, see [DataX data source guide](https://github.com/alibaba/datax). |
+| jdbcUrl | The JDBC URL of the database to which you want to connect. The value is a JSON array and multiple URLs can be entered for a database. You must enter at least one JDBC URL in the JSON array. The value must be entered in compliance with the MySQL official format. You can also specify a configuration property in the URL. For more information, see [Configuration Properties](http://dev.mysql.com/doc/connector-j/en/connector-j-reference-configuration-properties.html) in the MySQL documentation. **Notice**: The JDBC URL must be included in the `connection` section of the code. OceanBase Database must be connected by using an OBProxy. The port is 2883 by default. You do not need to use `[]` to enclose the connection string of the JDBC URL of the writer. However, you must use `[]` to enclose the connection string of the JDBC URL of the reader. Required: Yes. Default value: None. |
| username | The username for logging on to the database. Required: Yes. Default value: None. |
| password | The password of the specified username required to log on to the database. Required: Yes. Default value: None. |
-| table | The table to be synchronized. The value is a JSON array and multiple tables can be specified at the same time. When you specify multiple tables, make sure that they use the same schema structure. The `mysqlreader` plug-in does not verify whether the specified tables belong to the same logic table. **Notice**The table string must be included in the connection section of the code. Required: Yes. Default value: None. |
-| column | The set of names of columns to be synchronized in the configured table. The values are specified in a JSON array. We recommend that you do not set the column parameter to `['*']`, because this configuration changes when the schema changes. We recommend that you specify specific column names. Column pruning is supported. You can export only the specified columns. Column reordering is supported. You can export columns without following the column order in the table schema. You can specify constants in the MySQL SQL format: ``["id", "`table`", "1", "'bazhen.csy'", "null", "to_char(a + 1)", "2.3" , "true"]``. **Note**: `id` is a regular column name. `table` is the name of the column that includes a reserved word. `bazhen.csy` is a string constant. `1` is an integer constant. `null` is a null pointer. `to_char(a + 1)` is an expression. `2.3` is a floating point number. `true` is a Boolean value. Required: Yes. Default value: None. |
-| where | The filter condition. The `mysqlreader` plug-in assembles the specified column, table, and `WHERE` clause into an SQL statement. Then, it extracts data based on this SQL statement. To synchronize data of the current day, you can specify the `WHERE` clause as `gmt_create > $bizdate`. **Notice**: You cannot set the `WHERE` clause to `limit 10`, because `limit` is not a valid `WHERE` clause of an SQL statement. A `WHERE` clause allows you to orderly synchronize the incremental business data. If you do not specify the `WHERE` clause or do not specify the key or value of the `WHERE` clause, DataX performs full synchronization. Required: No. Default value: None. |
+| table | The table to be synchronized. The value is a JSON array and multiple tables can be specified at the same time. When you specify multiple tables, make sure that they use the same schema structure. mysqlreader does not verify whether the specified tables belong to the same logic table. **Notice**: The table string must be included in the `connection` section of the code. Required: Yes. Default value: None. |
+| column | The set of names of columns to be synchronized in the configured table. The values are specified in a JSON array. We recommend that you do not set this parameter to `['*']`, because the configuration changes with the table schema. We recommend that you specify the column names instead. Column pruning is supported: you can export only the specified columns. Column reordering is supported: you can export columns without following the column order in the schema. You can specify constants in the SQL syntax format of MySQL, for example, ``["id", "`table`", "1", "'bazhen.csy'", "null", "to_char(a + 1)", "2.3" , "true"]``. **Note**: `id` is a regular column name. `table` is the name of the column that includes a reserved word. `1` is an integer constant. `bazhen.csy` is a string constant. `null` is a null pointer. `to_char(a + 1)` is an expression. `2.3` is a floating-point number. `true` is a Boolean value. Required: Yes. Default value: None. |
+| where | The filter condition. mysqlreader assembles the specified column, table, and `WHERE` clause into an SQL statement. Then, mysqlreader extracts data based on this SQL statement. To synchronize data of the current day, you can set the condition of the `WHERE` clause to `gmt_create > $bizdate`. **Notice**: You cannot set the condition of the `WHERE` clause to `limit 10`, because `limit` is not a valid key in the `WHERE` clause of an SQL statement. A `WHERE` clause allows you to orderly synchronize the incremental business data. If you do not specify the `WHERE` clause or do not specify the key or value of the `WHERE` clause, DataX performs full synchronization. Required: No. Default value: None. |
-After the job configuration file is configured, execute this job.
+After you configure the job file, execute the job. Here is a sample command:
```shell
python datax.py ../job/myjob.json
```
-## References
+## More information
-For more information about DataX, see [DataX](https://github.com/alibaba/DataX).
\ No newline at end of file
+For more information about DataX, see [DataX](https://github.com/alibaba/DataX).
diff --git a/en-US/500.data-migration/700.migrate-data-from-csv-file-to-oceanbase-database/200.use-the-load-command-to-load-the-csv-data-file-to-the-oceanbase-database.md b/en-US/500.data-migration/700.migrate-data-from-csv-file-to-oceanbase-database/200.use-the-load-command-to-load-the-csv-data-file-to-the-oceanbase-database.md
index 81a3f714ff..5ddd3abf45 100644
--- a/en-US/500.data-migration/700.migrate-data-from-csv-file-to-oceanbase-database/200.use-the-load-command-to-load-the-csv-data-file-to-the-oceanbase-database.md
+++ b/en-US/500.data-migration/700.migrate-data-from-csv-file-to-oceanbase-database/200.use-the-load-command-to-load-the-csv-data-file-to-the-oceanbase-database.md
@@ -15,10 +15,10 @@ Do not use the `LOAD DATA` statement on tables with triggers.
## Considerations
-To speed up data import, OceanBase Database adopts a parallel design for `LOAD DATA` operations. During the process, data to be imported is divided into multiple subtasks, which are executed in parallel. Each subtask is processed as an independent transaction in a random order. Therefore, observe the following considerations:
+OceanBase Database uses parallel processing to optimize the data import rate of the `LOAD DATA` statement. In parallel processing, data is split into multiple subtasks for parallel execution. Each subtask is considered an independent transaction. The execution sequence of the subtasks is not fixed. Therefore:
-* The atomicity of the overall data import operation is not guaranteed.
-* For a table without a primary key, data may be written to the table in an order different from that in the source file.
+* Global atomicity cannot be ensured during the data import.
+* For a table without a primary key, data may be written in a sequence different from that in the original file.
## Scenarios
@@ -45,13 +45,13 @@ You can use the `LOAD DATA` statement to import a CSV file as follows:
For more information about the `LOAD DATA` statement, see [LOAD DATA (MySQL mode)](../../700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/5900.load-data-of-mysql-mode.md) or [LOAD DATA (Oracle mode)](../../700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/300.dcl-of-oracle-mode/1900.load-data-of-oracle-mode.md).
-### Obtain the privilege to execute the `LOAD DATA` statement
+### Obtain the privileges to execute the `LOAD DATA` statement
Before you execute the `LOAD DATA` statement, you must obtain the required privileges. The procedure for granting execution privileges is as follows:
1. Grant the `FILE` privilege.
- **Here is an example:**
+ Here is an example:
Use the following syntax to grant the `FILE` privilege:
@@ -59,13 +59,13 @@ Before you execute the `LOAD DATA` statement, you must obtain the required privi
GRANT FILE ON *.* TO user_name;
```
- Here, `user_name` is the user who needs to execute the `LOAD DATA` statement.
+ `user_name` is the user who needs to execute the `LOAD DATA` statement.
2. Grant other necessary privileges.
* The `INSERT` privilege on the corresponding table is required in MySQL mode.
- **Here is an example:**
+ Here is an example:
Use the following syntax to grant the `INSERT` privilege:
@@ -73,11 +73,11 @@ Before you execute the `LOAD DATA` statement, you must obtain the required privi
GRANT INSERT ON database_name.tbl_name TO user_name;
```
- Here, `database_name` specifies the database name, `tbl_name` specifies the table name, and `user_name` specifies the user who needs to execute the `LOAD DATA` statement.
+ `database_name` specifies the database name, `tbl_name` specifies the table name, and `user_name` specifies the user who needs to execute the `LOAD DATA` statement.
* The `CREATE SESSION` privilege is required in Oracle mode.
- **Here is an example:**
+ Here is an example:
Use the following syntax to grant the `CREATE SESSION` privilege:
@@ -85,7 +85,7 @@ Before you execute the `LOAD DATA` statement, you must obtain the required privi
GRANT CREATE SESSION TO user_name;
```
- Here, `user_name` is the username of the user to which the privilege is to be granted.
+ `user_name` is the username of the user to which the privilege is to be granted.
## Examples
@@ -98,7 +98,7 @@ Before you execute the `LOAD DATA` statement, you must obtain the required privi
1. Log on to the server where the OBServer node to connect to resides.
- **Here is an example:**
+ Here is an example:
```shell
ssh admin@10.10.10.1
@@ -106,7 +106,7 @@ Before you execute the `LOAD DATA` statement, you must obtain the required privi
2. Create test data in the `/home/admin/test_data` directory.
- **Here is an example:**
+ Here is an example:
Run the following command to write a script named `student.sql`:
@@ -116,7 +116,7 @@ Before you execute the `LOAD DATA` statement, you must obtain the required privi
3. Enter the editing mode and add test data.
- **Here is an example:**
+ Here is an example:
Press the i or Insert key to enter the insert mode of the `vi` editor and add the following content:
@@ -133,7 +133,7 @@ Before you execute the `LOAD DATA` statement, you must obtain the required privi
For security reasons, when you set the system variable secure_file_priv
, you can connect to the database only through a local socket to execute the SQL statement that modifies the global variable. For more information, see secure_file_priv.
- **Here is an example:**
+ Here is an example:
1. Log on to the server where the OBServer node to connect to resides.
@@ -141,9 +141,9 @@ Before you execute the `LOAD DATA` statement, you must obtain the required privi
ssh admin@10.10.10.1
```
- 2. Execute the following statement to connect to the `mysql001` tenant through a local Unix socket:
+ 2. Execute the following statement to connect to the `mysql001` tenant through a local Unix Socket:
- **Here is an example:**
+ Here is an example:
```shell
obclient -S /home/admin/oceanbase/run/sql.sock -uroot@mysql001 -p******
@@ -157,7 +157,7 @@ Before you execute the `LOAD DATA` statement, you must obtain the required privi
5. Reconnect to the database.
- **Here is an example:**
+ Here is an example:
```shell
obclient -h127.0.0.1 -P2881 -utest_user001@mysql001 -p****** -A
@@ -165,7 +165,7 @@ Before you execute the `LOAD DATA` statement, you must obtain the required privi
6. Create a test table.
- **Here is an example:**
+ Here is an example:
Execute the following statement to create a test table named `student`.
@@ -175,7 +175,7 @@ Before you execute the `LOAD DATA` statement, you must obtain the required privi
7. Use the `LOAD DATA` statement to import data.
- **Here is an example:**
+ Here is an example:
Execute the following `LOAD DATA` statement to load data from the specified file to a data table. In this statement:
@@ -195,7 +195,7 @@ Before you execute the `LOAD DATA` statement, you must obtain the required privi
(id,name,score);
```
- **The return result is as follows:**
+ The return result is as follows:
```shell
Query OK, 3 rows affected
@@ -204,7 +204,7 @@ Before you execute the `LOAD DATA` statement, you must obtain the required privi
8. View data in the destination table.
- **Here is an example:**
+ Here is an example:
```shell
obclient [test]> SELECT * FROM student;
@@ -229,7 +229,7 @@ Perform the following steps to import data from a local file to a table in Ocean
1. Create test data in the local directory `/home/admin/test_data`.
- **Here is an example:**
+ Here is an example:
Run the following command to write a script named `test_tbl1.csv`:
@@ -239,7 +239,7 @@ Perform the following steps to import data from a local file to a table in Ocean
2. Enter the editing mode and add test data.
- **Here is an example:**
+ Here is an example:
Press the i or Insert key to enter the insert mode of the `vi` editor and add the following content:
@@ -251,7 +251,7 @@ Perform the following steps to import data from a local file to a table in Ocean
3. Start the client.
- **Here is an example:**
+ Here is an example:
Run the following command to use OceanBase Client (OBClient) to connect to OceanBase Database. Add the `--local-infile` parameter in the command to enable the feature for loading data from local files.
@@ -266,7 +266,7 @@ Perform the following steps to import data from a local file to a table in Ocean
4. Create a test table.
- **Here is an example:**
+ Here is an example:
```shell
CREATE TABLE test_tbl1(col1 INT,col2 INT);
@@ -274,7 +274,7 @@ Perform the following steps to import data from a local file to a table in Ocean
5. Execute the `LOAD DATA LOCAL INFILE` statement on the client to load data from a local file.
- **Here is an example:**
+ Here is an example:
```shell
obclient [test]> LOAD DATA LOCAL INFILE '/home/admin/test_data/test_tbl1.csv' INTO TABLE test_tbl1 FIELDS TERMINATED BY ',';
@@ -289,7 +289,7 @@ Perform the following steps to import data from a local file to a table in Ocean
6. View data in the destination table.
- **Here is an example:**
+ Here is an example:
```shell
obclient [test]> SELECT * FROM test_tbl1;
@@ -332,5 +332,5 @@ Row ErrCode ErrMsg
## References
* For more information about how to use the `LOAD DATA` statement to import data in bypass mode, see [Import data in bypass mode by using the LOAD DATA statement](../1100.bypass-import/200.use-load-data-statement-to-bypass-import-data.md).
-* For more information about how to connect to OceanBase Database, see [Overview](../../300.develop/100.application-development-of-mysql-mode/100.connect-to-oceanbase-database-of-mysql-mode/100.connection-methods-overview-of-mysql-mode.md).
+* For more information about how to connect to a database, see [Overview](../../300.develop/100.application-development-of-mysql-mode/100.connect-to-oceanbase-database-of-mysql-mode/100.connection-methods-overview-of-mysql-mode.md).
* For more information about how to drop a table, see [Drop a table](../../700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/800.delete-a-table-of-oracle-mode.md).
From 3a09597b4980dd8672e16b9924f4d59945754703 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Fri, 12 Apr 2024 18:26:43 +0800
Subject: [PATCH 04/63] v430-beta-cluster-and-tenant-management
---
.../300.common-cluster-operations/300.restart-a-node.md | 4 ++--
.../300.common-cluster-operations/400.add-a-node.md | 5 +++++
.../200.modify-the-configuration-of-a-resource-unit.md | 2 +-
.../200.adjust-resource-specifications.md | 2 +-
4 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/en-US/600.manage/100.cluster-management/300.common-cluster-operations/300.restart-a-node.md b/en-US/600.manage/100.cluster-management/300.common-cluster-operations/300.restart-a-node.md
index 3bb11a3ffc..7d2734d113 100644
--- a/en-US/600.manage/100.cluster-management/300.common-cluster-operations/300.restart-a-node.md
+++ b/en-US/600.manage/100.cluster-management/300.common-cluster-operations/300.restart-a-node.md
@@ -92,10 +92,10 @@ This topic describes how to restart a single node in a cluster. To restart multi
1. Log on, as the `admin` user, to the server that hosts the node whose process you want to stop.
- 2. Access the `/home/admin/oceanbase/bin` directory from the command-line interface (CLI).
+ 2. Access the `/home/admin/oceanbase` directory from the command-line interface (CLI).
```bash
- [admin@xxx /]$ cd /home/admin/oceanbase/bin
+ [admin@xxx /]$ cd /home/admin/oceanbase
```
For more information about the installation directory of OceanBase Database, see [Structure of the OBServer installation directory](../../../700.reference/100.oceanbase-database-concepts/1200.observer-node-architecture/100.observer-installation-directory-structure.md).
diff --git a/en-US/600.manage/100.cluster-management/300.common-cluster-operations/400.add-a-node.md b/en-US/600.manage/100.cluster-management/300.common-cluster-operations/400.add-a-node.md
index 20ef6da0d7..0ccdca5489 100644
--- a/en-US/600.manage/100.cluster-management/300.common-cluster-operations/400.add-a-node.md
+++ b/en-US/600.manage/100.cluster-management/300.common-cluster-operations/400.add-a-node.md
@@ -25,6 +25,11 @@ The capability to deploy multiple replicas is also an architectural advantage of
## Procedure
+
+Note
+This section describes how to manually add nodes to a cluster by executing commands. For OceanBase Database Community Edition, if you want to add nodes to an OceanBase cluster by using OBD, see Scale out a cluster and change cluster components.
+
+
1. (Optional) If a zone must be added before nodes can be added to it, make sure that the zone has been added. For information about how to add a zone, see [Add a zone](../300.common-cluster-operations/800.add-a-zone.md).
2. SSH to an OBServer node to be added, initialize the OBServer node, and configure the clock source.
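The statement that registers the prepared node with the cluster appears in a later step outside this hunk. A minimal sketch, with a placeholder server address, might look as follows:

```sql
-- Register the new OBServer node (placeholder address) with zone1.
ALTER SYSTEM ADD SERVER 'xx.xx.xx.xx:2882' ZONE 'zone1';
```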
diff --git a/en-US/600.manage/200.tenant-management/600.common-tenant-operations/1600.resource-specification-management/200.modify-the-configuration-of-a-resource-unit.md b/en-US/600.manage/200.tenant-management/600.common-tenant-operations/1600.resource-specification-management/200.modify-the-configuration-of-a-resource-unit.md
index 8cb340c568..8c148d49f2 100644
--- a/en-US/600.manage/200.tenant-management/600.common-tenant-operations/1600.resource-specification-management/200.modify-the-configuration-of-a-resource-unit.md
+++ b/en-US/600.manage/200.tenant-management/600.common-tenant-operations/1600.resource-specification-management/200.modify-the-configuration-of-a-resource-unit.md
@@ -26,7 +26,7 @@ where
* `CPU_CAPACITY`: Indicates the total CPU capacity on a single node.
-* `resource_hard_limit`: Indicates the amount of CPU resources allocated by the system when allocating units. The value range is \[0, 10000\]. The default value is `100`, indicating that over-allocation is not allowed.
+* `resource_hard_limit`: Specifies the maximum percentage of CPU resources that the system can allocate when it allocates units. The value range is \[100, 10000\]. The default value is `100`, indicating that over-allocation is not allowed.
For more information about the `resource_hard_limit` parameter, see [resource_hard_limit](../../../../700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/16800.resource_hard_limit.md).
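For illustration only, a hedged sketch of adjusting this cluster-level configuration item in the `sys` tenant (the value `150` is a placeholder):

```sql
-- Permit allocating up to 150% of the physical CPU capacity to units.
ALTER SYSTEM SET resource_hard_limit = 150;
```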
diff --git a/en-US/600.manage/200.tenant-management/600.common-tenant-operations/800.tenant-scale-in-and-out/200.adjust-resource-specifications.md b/en-US/600.manage/200.tenant-management/600.common-tenant-operations/800.tenant-scale-in-and-out/200.adjust-resource-specifications.md
index c70638083f..c74732e537 100644
--- a/en-US/600.manage/200.tenant-management/600.common-tenant-operations/800.tenant-scale-in-and-out/200.adjust-resource-specifications.md
+++ b/en-US/600.manage/200.tenant-management/600.common-tenant-operations/800.tenant-scale-in-and-out/200.adjust-resource-specifications.md
@@ -58,7 +58,7 @@ Take note of the following parameter description:
* `resource_hard_limit`: a cluster-level configuration item.
- The system allocates CPU resources based on the value of the `resource_hard_limit` parameter. The value range is `[0, 10000]`. The default value is `100`, indicating that over-allocation is not allowed.
+ The system allocates CPU resources based on the value of the `resource_hard_limit` parameter. The value range is `[100, 10000]`. The default value is `100`, indicating that over-allocation is not allowed.
For more information about the `resource_hard_limit` parameter, see [resource_hard_limit](../../../../700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/16800.resource_hard_limit.md).
From 34c2cebbd1d184b4f0c21abe61d23e4ea4536187 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Sat, 13 Apr 2024 10:08:58 +0800
Subject: [PATCH 05/63] 430-beta-300.replica-management-update
---
.../100.view-locality.md | 10 +++++-----
.../200.modify-locality.md | 14 +++++++-------
.../300.add-replica.md | 6 +++---
.../400.reduce-replica.md | 6 +++---
.../500.adjust-replica-distribution.md | 6 +++---
.../600.view-the-location-change-record.md | 2 +-
.../700.unit-migration.md | 6 +++---
.../400.data-distribution.md | 12 ++++++------
8 files changed, 31 insertions(+), 31 deletions(-)
diff --git a/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/100.view-locality.md b/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/100.view-locality.md
index 759c7eb102..a634944b4d 100644
--- a/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/100.view-locality.md
+++ b/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/100.view-locality.md
@@ -7,9 +7,9 @@
# View locality
-This topic describes how to query for the types and distribution of replicas for tenants from the `oceanbase.DBA_OB_TENANTS` view.
+This topic describes how to query the types and distribution of replicas for tenants from the `DBA_OB_TENANTS` view.
-For more information about the `oceanbase.DBA_OB_TENANTS` view, see [oceanbase.DBA_OB_TENANTS](../../../../700.reference/700.system-views/400.system-view-of-mysql-mode/200.dictionary-view-of-mysql-mode/19300.oceanbase-dba_ob_tenants-of-mysql-mode.md).
+For more information about the `DBA_OB_TENANTS` view, see [DBA_OB_TENANTS](../../../../700.reference/700.system-views/400.system-view-of-mysql-mode/200.dictionary-view-of-mysql-mode/19300.oceanbase-dba_ob_tenants-of-mysql-mode.md).
## Procedure
@@ -25,7 +25,7 @@ For more information about the `oceanbase.DBA_OB_TENANTS` view, see [oceanbase.D
obclient>use oceanbase;
```
-3. Query for the types and distribution of replicas for tenants from the `oceanbase.DBA_OB_TENANTS` view.
+3. Query the types and distribution of replicas for tenants from the `oceanbase.DBA_OB_TENANTS` view.
```shell
obclient> SELECT TENANT_ID,TENANT_NAME,TENANT_TYPE,PRIMARY_ZONE,LOCALITY FROM oceanbase.DBA_OB_TENANTS;
@@ -46,7 +46,7 @@ For more information about the `oceanbase.DBA_OB_TENANTS` view, see [oceanbase.D
The locality in the preceding example is `FULL{1}@sa128_obv4_1, FULL{1}@sa128_obv4_2`. In practice, the majority principle must be met after you add zones.
-The `LOCALITY` column in the `DBA_OB_TENANTS` table indicates the types and distribution of replicas for tenants. It describes the types of replicas for tenants and the distribution of replicas across zones in the cluster.
+The `LOCALITY` column in the `DBA_OB_TENANTS` view indicates the types and distribution of replicas for tenants. It describes the types of replicas for tenants and the distribution of replicas across zones in the cluster.
For example, `LOCALITY = 'FULL{1}@sa128_obv4_1, FULL{1}@sa128_obv4_2'` for the `mq_t1` tenant indicates that the tenant has one full-featured replica in both `sa128_obv4_1` and `sa128_obv4_2`.
@@ -56,4 +56,4 @@ For example, `LOCALITY = 'FULL{1}@sa128_obv4_1, FULL{1}@sa128_obv4_2'` for the `
* [Modify locality](../200.locality-common-operations/200.modify-locality.md)
-* [Add replicas](../200.locality-common-operations/300.add-replica.md)
+* [Add replicas](../200.locality-common-operations/300.add-replica.md)
\ No newline at end of file
diff --git a/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/200.modify-locality.md b/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/200.modify-locality.md
index 2a4119135f..416701f941 100644
--- a/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/200.modify-locality.md
+++ b/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/200.modify-locality.md
@@ -13,7 +13,7 @@ This topic describes how to modify the locality of a tenant.
You can modify the locality of a tenant to increase or decrease the number of replicas for the tenant or adjust the distribution of replicas.
-* Increase the number of replicas for a tenant:
+* Increase the number of replicas for a tenant
You can increase the number of replicas for a tenant in the cluster. Only one replica can be added at a time. For example, if you want to increase the number of replicas for a tenant from 1 to 3, you must change the locality first from `F@z1` to `F@z1,F@z2`, and then from `F@z1,F@z2` to `F@z1,F@z2,F@z3`. For more information, see [Add replicas](../200.locality-common-operations/300.add-replica.md).
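A sketch of the two-step change described above, with placeholder tenant and zone names:

```sql
-- Step 1: go from 1 replica to 2 replicas.
ALTER TENANT tenant_name LOCALITY = 'F@z1,F@z2';
-- Step 2: go from 2 replicas to 3 replicas.
ALTER TENANT tenant_name LOCALITY = 'F@z1,F@z2,F@z3';
```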
@@ -22,7 +22,7 @@ You can modify the locality of a tenant to increase or decrease the number of re
You can directly increase the number of replicas for a tenant from 3 to 5, without the need to first change to 4 replicas and then to 5 replicas.
-* Reduce the number of replicas for a tenant:
+* Reduce the number of replicas for a tenant
You can reduce the number of replicas for a tenant within the cluster, for example, reduce the number of replicas from 3 to 2, or from 5 to 3. For more information, see [Remove replicas](../200.locality-common-operations/400.reduce-replica.md).
@@ -34,7 +34,7 @@ You can modify the locality of a tenant to increase or decrease the number of re
-* Adjust the distribution of replicas for a tenant:
+* Adjust the distribution of replicas for a tenant
You can modify the locality of a tenant in the cluster several times to adjust the distribution of replicas across zones. For example, the original locality of a tenant is `F@z1,F@z2,F@z3`. Assuming that zone z3 expires, the replica must be migrated to z4, and the locality must be changed to `F@z1,F@z2,F@z4`. Specifically, the locality must be changed first from `F@z1,F@z2,F@z3` to `F@z1,F@z2,F@z3,F@z4`, and then from `F@z1,F@z2,F@z3,F@z4` to `F@z1,F@z2,F@z4`. For more information, see [Adjust the distribution of replicas](../200.locality-common-operations/500.adjust-replica-distribution.md).
@@ -62,13 +62,13 @@ For more information about how to view the resource information of each node in
2. Assume that the primary zone is set to RANDOM when you add zone4. After zone4 is added, it also provides read and write services.
-* Modifying a tenant's locality involves changing Paxos members and copying data. You must evaluate the impact on system stability and performance. Regarding changes of Paxos members, evaluate the impact on the tolerance to OBServer node failures. Regarding changes of replica distribution, evaluate the impact on the service level agreement (SLA) design for IDC and city-wide failures in the deployment architecture. Regarding data copying, observe the impact on other tenants on the same OBServer node. We recommend that you copy data during off-peak hours and prepare a contingency plan in advance.
+* Modifying a tenant's locality involves changing Paxos members and copying data. You must evaluate the impact on system stability and performance. Regarding changes of Paxos members, evaluate the impact on the tolerance to OBServer node failures. Regarding changes of replica distribution, evaluate the impact on the service level agreement (SLA) design for IDC and region-wide failures in the deployment architecture. Regarding data copying, observe the impact on other tenants on the same OBServer node. We recommend that you copy data during off-peak hours and prepare a contingency plan in advance.
-* Locality modification must not reduce the expected tolerance to OBServer node failures. For example, if you want to replace an IDC in a cluster with three replicas, you must add a replica and then remove a replica to avoid reducing the expected tolerance to OBServer node failures during locality modification. Theoretically, a single-replica failure during locality modification does not affect service continuity. If service continuity is affected, immediately recover the availability of the database. We recommend that you try to rectify hardware faults first.
+* Locality modification must not reduce the expected tolerance to OBServer node failures. For example, if you want to replace an IDC in a cluster with three replicas, you must add a replica and then remove a replica to avoid reducing the expected tolerance to OBServer node failures during locality modification. Theoretically, a single-replica failure during locality modification does not affect service continuity. If service continuity is affected, immediately recover the availability of the database. We recommend that you try to rectify hardware faults first.
## Procedure
-The following example describes how to modify the locality of tenant `mq_t1` from `FULL{1}@sa128_obv4_1,FULL{1}@sa128_obv4_2` to `F{1}@sa128_obv4_1,F{1}@sa128_obv4_2,F{1}@sa128_obv4_3`.
+The following example describes how to modify the locality of the `mq_t1` tenant from `FULL{1}@sa128_obv4_1,FULL{1}@sa128_obv4_2` to `F{1}@sa128_obv4_1,F{1}@sa128_obv4_2,F{1}@sa128_obv4_3`.
1. Log on to the sys tenant of the cluster as the root user.
@@ -137,4 +137,4 @@ The following example describes how to modify the locality of tenant `mq_t1` fro
* [Locality](../100.locality-overview.md)
-* [View locality](../200.locality-common-operations/100.view-locality.md)
+* [View locality](../200.locality-common-operations/100.view-locality.md)
\ No newline at end of file
diff --git a/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/300.add-replica.md b/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/300.add-replica.md
index 7a129200d3..fd3b57458a 100644
--- a/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/300.add-replica.md
+++ b/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/300.add-replica.md
@@ -7,7 +7,7 @@
# Add replicas
-This topic describes how to modify the locality of a tenant in a cluster to increase the number of partition replicas in the tenant.
+This topic describes how to add log stream replicas to a tenant in a cluster by modifying the locality of the tenant.
For more information about how to modify the locality, see [Modify locality](../200.locality-common-operations/200.modify-locality.md).
@@ -31,7 +31,7 @@ For more information about how to view the resource information of each node in
* We recommend that you modify the primary zone before you modify the locality.
- Here are two examples:
+ Here are some examples:
1. Assume that zone3 is a primary zone with the highest priority. When you remove zone3, read and write services of the tenant are affected during locality modification.
@@ -110,7 +110,7 @@ The following example describes how to modify the locality of tenant `mysql001`
The query results in step 3 and step 6 show that the locality of the `mysql001` tenant is changed from `FULL{1}@zone1, FULL{1}@zone2` to `FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone3`, and the `mysql001` tenant has a full-featured replica in each of zone1, zone2, and zone3.
-After you add a replica to a tenant, the locality of the tenant does not match the primary zone of the tenant. If the new zone is involved in leader switchover, you must modify the primary zone. For more information about how to modify the primary zone, see [Modify the number of primary zones for a tenant](../../../200.tenant-management/600.common-tenant-operations/800.tenant-scale-in-and-out/400.adjust-primary-zone.md). If the new zone is not involved in leader switchover, you do not need to modify the primary zone.
+After you add a replica to a tenant, the locality of the tenant does not match the primary zone of the tenant. If the new zone is involved in leader switchover, you must modify the primary zone. For more information about how to modify the primary zone, see [Modify the primary zone](../../../200.tenant-management/600.common-tenant-operations/800.tenant-scale-in-and-out/400.adjust-primary-zone.md). If the new zone is not involved in leader switchover, you do not need to modify the primary zone.
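A minimal sketch of such a primary zone change, assuming the zone names from the example above:

```sql
-- Allow the new zone3 to take part in leader switchover.
ALTER TENANT mysql001 PRIMARY_ZONE = 'zone1,zone2,zone3';
```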
## References
diff --git a/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/400.reduce-replica.md b/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/400.reduce-replica.md
index f703f3c2b7..edd7b97744 100644
--- a/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/400.reduce-replica.md
+++ b/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/400.reduce-replica.md
@@ -7,7 +7,7 @@
# Remove replicas
-This topic describes how to modify the locality of a tenant in a cluster to reduce the number of partition replicas in the tenant.
+This topic describes how to remove log stream replicas from a tenant in a cluster by modifying the locality of the tenant.
For more information about how to modify the locality, see [Modify locality](../200.locality-common-operations/200.modify-locality.md).
@@ -35,7 +35,7 @@ For more information about how to view the resource information of each node in
1. The tenant has an independent resource pool in `z3`. For example, the tenant has two resource pools: `resource_pool1` and `resource_pool2`, and `resource_pool2` belongs to `z3`. After the replica is removed, you must remove `resource_pool2` from the resource pool list of the tenant. Then you can delete the resource pool to release the resources. For more information, see [Drop a resource pool](../../../200.tenant-management/600.common-tenant-operations/1500.resource-pool-management/600.delete-resource-pool.md).
- 2. The tenant has no independent resource pool in `z3`. For example, the resource pool `resource_pool1` of the tenant belongs to `z2` and `z3`. After you remove the replica, you must remove `z3` from the `zone` list of `resource_pool1` to release the resources. For more information about how to modify the `zone` list of a resource pool, see [Modify attributes of a resource pool](../../../200.tenant-management/600.common-tenant-operations/900.modify-resource-pool-properties.md).
+ 2. The tenant has no independent resource pool in `z3`. For example, the resource pool `resource_pool1` of the tenant belongs to `z2` and `z3`. After you remove the replica in `z3`, you must remove `z3` from the `zone` list of `resource_pool1` to release the resources. For more information about how to modify the `zone` list of a resource pool, see [Modify attributes of a resource pool](../../../200.tenant-management/600.common-tenant-operations/900.modify-resource-pool-properties.md).
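A hedged sketch of the second case, reusing the pool and zone names above:

```sql
-- Shrink the zone list of resource_pool1 to release the resources held in z3.
ALTER RESOURCE POOL resource_pool1 ZONE_LIST = ('z2');
```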
## Procedure
@@ -115,4 +115,4 @@ The query results in step 3 and step 6 show that the locality of the `mysql001`
* [View locality](../200.locality-common-operations/100.view-locality.md)
-* [Add replicas](../200.locality-common-operations/300.add-replica.md)
+* [Add replicas](../200.locality-common-operations/300.add-replica.md)
\ No newline at end of file
diff --git a/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/500.adjust-replica-distribution.md b/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/500.adjust-replica-distribution.md
index 650e9d7f93..1251d2bb81 100644
--- a/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/500.adjust-replica-distribution.md
+++ b/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/500.adjust-replica-distribution.md
@@ -31,7 +31,7 @@ For more information about how to view the resource information of each node in
## Procedure
-The following example describes how to modify the locality of the the `mysql001` tenant from `FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone3` to `FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone4`.
+The following example describes how to modify the locality of the `mysql001` tenant from `FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone3` to `FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone4`.
1. Log on to the sys tenant of the cluster as the root user.
@@ -114,7 +114,7 @@ The following example describes how to modify the locality of the the `mysql001`
The query results in step 3 and step 8 show that the locality of the `mysql001` tenant is changed from `FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone3` to `FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone4`, and the `mysql001` tenant has a full-featured replica in each of zone1, zone2, and zone4.
-After you add the replica in step 4, the locality of the tenant does not match the primary zone of the tenant. If the new zone is involved in leader switchover, you must modify the primary zone. For more information about how to modify the primary zone, see [Modify the number of primary zones for a tenant](../../../200.tenant-management/600.common-tenant-operations/800.tenant-scale-in-and-out/400.adjust-primary-zone.md). If the new zone is not involved in leader switchover, you do not need to modify the primary zone.
+After you add the replica in step 4, the locality of the tenant does not match the primary zone of the tenant. If the new zone is involved in leader switchover, you must modify the primary zone. For more information about how to modify the primary zone, see [Modify the primary zone](../../../200.tenant-management/600.common-tenant-operations/800.tenant-scale-in-and-out/400.adjust-primary-zone.md). If the new zone is not involved in leader switchover, you do not need to modify the primary zone.
## References
@@ -128,4 +128,4 @@ After you add the replica in step 4, the locality of the tenant does not match t
* [Remove replicas](../200.locality-common-operations/400.reduce-replica.md)
-* [Modify the number of primary zones for a tenant](../../../200.tenant-management/600.common-tenant-operations/800.tenant-scale-in-and-out/400.adjust-primary-zone.md)
+* [Modify the primary zone](../../../200.tenant-management/600.common-tenant-operations/800.tenant-scale-in-and-out/400.adjust-primary-zone.md)
\ No newline at end of file
diff --git a/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/600.view-the-location-change-record.md b/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/600.view-the-location-change-record.md
index 0faa319a55..3c54bc4cfc 100644
--- a/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/600.view-the-location-change-record.md
+++ b/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/600.view-the-location-change-record.md
@@ -7,7 +7,7 @@
# View the locality change history
-This topic describes how to query for the locality change history from the `DBA_OB_TENANT_JOBS` view.
+This topic describes how to query the locality change history from the `DBA_OB_TENANT_JOBS` view.
For more information about the `DBA_OB_TENANT_JOBS` view, see [DBA_OB_TENANT_JOBS](../../../../700.reference/700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/5700.oceanbase-dba_ob_tenant_jobs-of-sys-tenant.md).
diff --git a/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/700.unit-migration.md b/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/700.unit-migration.md
index 482adc16b1..679aba1979 100644
--- a/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/700.unit-migration.md
+++ b/en-US/600.manage/300.replica-management/200.replica-distribution/200.locality-common-operations/700.unit-migration.md
@@ -11,7 +11,7 @@ This topic describes how to migrate units between OBServer nodes within a single
## Procedure
-The following example describes how to migrate the unit with the ID of `1006` in the `mq_t1` tenant to another OBServer node in the same zone.
+The following example describes how to migrate the unit with the ID of `1006` in the `mq_t1` tenant to another OBServer node in the same zone.
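The step that performs the migration lies outside this hunk; a minimal sketch of that statement, with a placeholder destination address, is:

```sql
-- Migrate unit 1006 to another OBServer node in the same zone.
ALTER SYSTEM MIGRATE UNIT = 1006 DESTINATION = 'xx.xx.xx.xx:2882';
```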
1. Log on to the sys tenant of the cluster as the root user.
@@ -25,9 +25,9 @@ The following example describes how to migrate the unit with the ID of `1006` in
obclient>use oceanbase;
```
-3. Query for `tenant_id` of the `mq_t1` tenant.
+3. Query `tenant_id` of the `mq_t1` tenant.
- In this example, `tenant_id` of the `mq_t1` tenant is 1004.
+ In this example, `tenant_id` of the `mq_t1` tenant is `1004`.
```shell
obclient> SELECT * FROM oceanbase.DBA_OB_TENANTS WHERE TENANT_NAME = 'mq_t1';
diff --git a/en-US/600.manage/300.replica-management/400.data-distribution.md b/en-US/600.manage/300.replica-management/400.data-distribution.md
index ef8f2a0368..de77e1fc27 100644
--- a/en-US/600.manage/300.replica-management/400.data-distribution.md
+++ b/en-US/600.manage/300.replica-management/400.data-distribution.md
@@ -17,13 +17,13 @@ OceanBase Database supports non-partitioned tables, partitioned tables, and subp
OceanBase Database imposes additional restrictions on tenant management from V4.0, requiring that all zones of the same tenant have the same number of units. The system assigns a unique ID to each unit in a zone, and units with the same ID form a unit group. Unit groups have the following characteristics:
-* Each unit group is assigned a unique ID. The `sys` tenant can query the ID in the `UNIT_GROUP_ID` field of the `oceanbase.DBA_OB_UNITS` view.
+* Each unit group is assigned a unique ID, which can be queried from the `UNIT_GROUP_ID` column in the `oceanbase.DBA_OB_UNITS` view in the sys tenant.
* One log stream belongs only to one unit group, and is distributed only on units in the unit group. Therefore, the same data partitions are distributed on all units in a unit group based on log streams, thus defining a group of data. In this case, all zones must have equivalent service capabilities.
* In OceanBase Database V4.0 and later, you can no longer separately specify the number of units for each zone of a tenant. Instead, you can only adjust the number of units in a unit group. For example, if you want to horizontally scale out a tenant by increasing the number of units, you can only increase the number of units for all zones. Similarly, if you want to scale in a tenant, you can only decrease the number of units in the unit group. The unit group mechanism ensures homogeneous data distribution in different zones.
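As a hedged sketch only, assuming a resource pool named `resource_pool1` (a placeholder, not part of this topic), a horizontal scale-out that applies to every zone of the pool at once might look as follows:

```sql
-- Increase the unit count per zone from 1 to 2; the change applies to all zones uniformly.
ALTER RESOURCE POOL resource_pool1 UNIT_NUM = 2;
```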
-You can query for all units and corresponding unit groups from the `oceanbase.DBA_OB_UNITS` view. Here is an example:
+You can query all units and corresponding unit groups from the `oceanbase.DBA_OB_UNITS` view. Here is an example:
```shell
obclient> select UNIT_ID,TENANT_ID,UNIT_GROUP_ID,ZONE,SVR_IP,SVR_PORT from DBA_OB_UNITS where TENANT_ID = 1004;
@@ -45,7 +45,7 @@ Therefore, one log stream belongs to one log stream group, which cannot be modif
The number of log streams in a log stream group dynamically changes with the primary zone setting of the tenant. The lifecycle of a log stream group is bound to that of the corresponding unit group.
-You can query for the log streams of all tenants in a cluster and corresponding log stream groups from the `oceanbase.CDB_OB_LS` view. Here is an example:
+You can query the log streams of all tenants in a cluster and corresponding log stream groups from the `oceanbase.CDB_OB_LS` view. Here is an example:
```shell
obclient> select TENANT_ID,LS_ID,STATUS,PRIMARY_ZONE,UNIT_GROUP_ID,LS_GROUP_ID from CDB_OB_LS where TENANT_ID=1004;
@@ -64,7 +64,7 @@ This section summarizes the concepts involved in this topic.
* Units are abstracted from physical resources. Each unit occupies specific physical resources on an OBServer node, including CPU, memory, and storage resources. Resources are scheduled by unit. You can adjust the distribution of units across OBServer nodes in a zone to achieve load balancing and disaster recovery across OBServer nodes.
-* Each tenant contains several units. You can set the number of units and the primary zone for the tenant to define a series of unit sets for carrying business traffic. The number of unit sets is equal to the number of units multiplied by the number of zones contained in the primary zone. Each unit is deployed on an OBServer node to facilitate horizontal scaling of the tenant.
+* Each tenant contains several units. You can set the number of units and the primary zone for the tenant to define a series of unit sets for carrying business traffic. Each unit is deployed on an OBServer node to facilitate horizontal scaling of the tenant.
* A log stream defines a group of data, including several data partitions and ordered redo log streams. Logs are synchronized across replicas based on the Paxos protocol to ensure data consistency between replicas and high availability of data. Transactions are committed by log stream. When a transaction is modified within a single log stream, you can atomically commit the transaction in one phase. When a transaction is modified across multiple log streams, you can atomically commit the transaction based on the optimized two-phase commit protocol of OceanBase Database. Log streams are participants of distributed transactions. A log stream has a location attribute and a role attribute. All data partitions in the log stream inherit its attributes.
@@ -72,8 +72,8 @@ This section summarizes the concepts involved in this topic.
* One log stream group corresponds to one unit group. The number of log streams in a log stream group is determined by the number of zones contained in the primary zone. Therefore, each zone contained in the primary zone accommodates the leader of a log stream in the log stream group.
-For example, if a tenant is configured with `unit_num=2` and `primary_zone='Z1,Z2,Z3'`, the tenant has two unit groups, two log stream groups, and six log streams, as shown in the diagram below.
+ For example, if a tenant is configured with `unit_num=2` and `primary_zone='Z1,Z2,Z3'`, the tenant has two unit groups, two log stream groups, and six log streams, as shown in the diagram below.
-![1](https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/observer-enterprise/V4.2.1/EN_US/600.manage/300.replica-management/LogStreamGroup.png)
+ ![1](https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/observer-enterprise/V4.2.1/EN_US/600.manage/300.replica-management/LogStreamGroup.png)
In OceanBase Database, data and traffic are flexibly distributed on multiple OBServer nodes in multiple dimensions. You can migrate units between OBServer nodes in a zone to achieve load balancing and disaster recovery across OBServer nodes.
From 1a34fb4ab3b7f550477883a0e4dfc56f5d401ffe Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Sat, 13 Apr 2024 10:25:02 +0800
Subject: [PATCH 06/63] v430-beta-400.high-availability-update
---
.../600.role-switch/200.perform-switchover.md | 2 +-
.../400.modify-the-degradation-timeout.md | 2 +-
.../500.purge-the-recyclebin.md | 35 ++++++++++---------
3 files changed, 20 insertions(+), 19 deletions(-)
diff --git a/en-US/600.manage/400.high-availability/300.physical-standby-database-disaster-recovery/600.role-switch/200.perform-switchover.md b/en-US/600.manage/400.high-availability/300.physical-standby-database-disaster-recovery/600.role-switch/200.perform-switchover.md
index 1016bc6450..425259211f 100644
--- a/en-US/600.manage/400.high-availability/300.physical-standby-database-disaster-recovery/600.role-switch/200.perform-switchover.md
+++ b/en-US/600.manage/400.high-availability/300.physical-standby-database-disaster-recovery/600.role-switch/200.perform-switchover.md
@@ -309,7 +309,7 @@ In the Physical Standby Database solution based on log archiving, a switchover o
Oracle mode:
```sql
- SELECT DEST_ID, ROUND_ID, DEST_NO, STATUS, CHECKPOINT_SCN, CHECKPOINT_SCN_DISPLAY, PATH FROM oceanbase.DBA_OB_ARCHIVELOG;
+ SELECT DEST_ID, ROUND_ID, DEST_NO, STATUS, CHECKPOINT_SCN, CHECKPOINT_SCN_DISPLAY, PATH FROM SYS.DBA_OB_ARCHIVELOG;
```
The query result in MySQL mode is as follows:
diff --git a/en-US/600.manage/400.high-availability/400.arbitration-high-availability/400.modify-the-degradation-timeout.md b/en-US/600.manage/400.high-availability/400.arbitration-high-availability/400.modify-the-degradation-timeout.md
index f1dd8a501f..57a399458c 100644
--- a/en-US/600.manage/400.high-availability/400.arbitration-high-availability/400.modify-the-degradation-timeout.md
+++ b/en-US/600.manage/400.high-availability/400.arbitration-high-availability/400.modify-the-degradation-timeout.md
@@ -59,7 +59,7 @@ When you enable the arbitration service for a tenant, you must specify the log s
3. Select an appropriate statement to modify the log stream downgrade control time based on your business scenario.
- The `arbitration_timeout` parameter specifies the control time for triggering automatic downgrade of a log stream for a tenant. The default value is 5s. The value range is [3s,+∞). If you want to avoid the downgrade and achieve the effect that is similar to the Maximum Protection mode for standby clusters in primary/standby deployment in OceanBase Database V3.x, you can specify a larger value, such as `30d`, which indicates that the downgrade timeout period is 30 days. The modification of this parameter takes effect immediately without restarting the OBServer node.
+ The `arbitration_timeout` parameter specifies the control time for triggering automatic downgrade of a log stream for a tenant. The default value is 5s. The value range is [3s,+∞). If you want to avoid the downgrade, you can specify a larger value, such as `30d`, which indicates that the downgrade timeout period is 30 days. The modification of this parameter takes effect immediately without restarting the OBServer node.
* Execute the following statement in a user tenant to modify the log stream downgrade control time for the user tenant:
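The statement itself is outside this hunk; a minimal sketch that uses the `arbitration_timeout` parameter described above:

```sql
-- Raise the downgrade timeout to 30 days, effectively avoiding automatic downgrade.
ALTER SYSTEM SET arbitration_timeout = '30d';
```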
diff --git a/en-US/600.manage/400.high-availability/500.recyclebin-management/500.purge-the-recyclebin.md b/en-US/600.manage/400.high-availability/500.recyclebin-management/500.purge-the-recyclebin.md
index d95e8ef7fc..d7cd1fc10d 100644
--- a/en-US/600.manage/400.high-availability/500.recyclebin-management/500.purge-the-recyclebin.md
+++ b/en-US/600.manage/400.high-availability/500.recyclebin-management/500.purge-the-recyclebin.md
@@ -27,23 +27,24 @@ You can execute the `PURGE` statement to purge the recycle bin.
1. Log on to the database as an administrator of the `sys` tenant or a user tenant.
-
- Note
-
- - The administrator user of a MySQL user tenant is `root`, and that of an Oracle user tenant is `SYS`.
- - If you want to purge tenants from the recycle bin, log on to the database as an administrator of the `sys` tenant.
-
+
+ Note
+
+ - The administrator user of a MySQL user tenant is `root`, and that of an Oracle user tenant is `SYS`.
+ - If you want to purge tenants from the recycle bin, log on to the database as an administrator of the `sys` tenant.
+
- Note that you must specify the corresponding fields in the following sample code based on your actual database configurations.
- ```shell
- obclient -h10.xx.xx.xx -P2883 -uroot@sys -p***** -A
- ```
+ Note that you must specify the corresponding parameters in the following sample code based on your actual database configurations.
+
+ ```shell
+ obclient -h10.xx.xx.xx -P2883 -uroot@sys#obdemo -p***** -A
+ ```
For more information about how to connect to a database, see [Overview (MySQL mode)](../../../300.develop/100.application-development-of-mysql-mode/100.connect-to-oceanbase-database-of-mysql-mode/100.connection-methods-overview-of-mysql-mode.md) or [Overview (Oracle mode)](../../../300.develop/200.application-development-of-oracle-mode/100.connect-to-oceanbase-database-of-oracle-mode/100.connection-methods-overview-of-oracle-mode.md).
-2. Execute the `SHOW RECYCLEBIN` statement to query for the names of objects in the recycle bin.
+2. Execute the `SHOW RECYCLEBIN` statement to query the names of objects in the recycle bin.
```shell
obclient [(none)]> SHOW RECYCLEBIN;
@@ -116,7 +117,7 @@ You can execute the `PURGE` statement to purge the recycle bin.
obclient> PURGE INDEX object_name;
```
- Here, `object_name` specifies the name of the database in the recycle bin. You cannot set this parameter to the original name of the database.
+ Here, `object_name` specifies the name of the index in the recycle bin. You cannot set this parameter to the original name of the index.
For example:
@@ -148,15 +149,15 @@ For more information about the `recyclebin_object_expire_time` parameter, see [r
1. Log on to the `sys` tenant as the `root` user.
- Note that you must specify the corresponding fields in the following sample code based on your actual database configurations.
+ Note that you must specify the corresponding parameters in the following sample code based on your actual database configurations.
```shell
- obclient -h10.xx.xx.xx -P2883 -uroot@sys -p***** -A
+ obclient -h10.xx.xx.xx -P2883 -uroot@sys#obdemo -p***** -A
```
For more information, see [Overview](../../../300.develop/100.application-development-of-mysql-mode/100.connect-to-oceanbase-database-of-mysql-mode/100.connection-methods-overview-of-mysql-mode.md).
-2. Execute the following statement to view the automatic cleanup policy of the recycle bin.
+2. Execute the following statement to view the automatic purge strategy of the recycle bin:
```sql
obclient [(none)]> SHOW PARAMETERS LIKE 'recyclebin_object_expire_time';
@@ -197,4 +198,4 @@ For more information about the recycle bin, see the following topics:
* [View objects in the recycle bin](../500.recyclebin-management/300.view-the-recyclebin-objects.md)
-* [Restore objects from the recycle bin](../500.recyclebin-management/400.restore-the-recyclebin-objects.md)
+* [Restore objects from the recycle bin](../500.recyclebin-management/400.restore-the-recyclebin-objects.md)
\ No newline at end of file
From 067fe6aa2a3521aa07e095f38110b44875c7aa49 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Sat, 13 Apr 2024 12:01:26 +0800
Subject: [PATCH 07/63] v430-beta-600.backup-and-recovery-update
---
...verview-of-physical-backup-and-recovery.md | 6 +-
...ified-experience-of-backup-and-recovery.md | 24 ++-
.../100.overview-of-log-archive.md | 12 +-
.../200.preparation-before-log-archive.md | 146 ++++++++----------
.../300.open-the-log-archive-mode.md | 5 +-
.../320.suspend-the-archiving.md | 54 +++++++
.../400.close-the-log-archive-mode.md | 6 +-
.../700.view-log-archive-history.md | 2 +-
.../900.views-of-log-archive.md | 16 +-
.../100.preparation-before-data-backup.md | 122 +++++++++------
.../200.initiate-full-data-backup.md | 2 +-
.../300.initiate-incremental-data-backup.md | 8 +-
.../400.data-backup/400.stop-data-backup.md | 8 +-
.../200.initiate-the-tenant-restore.md | 23 ++-
.../400.view-the-restore-progress.md | 4 +-
.../500.view-the-restore-history.md | 4 +-
.../900.views-of-the-restore.md | 6 +-
17 files changed, 266 insertions(+), 182 deletions(-)
create mode 100644 en-US/600.manage/600.backup-and-recovery/300.log-archive/320.suspend-the-archiving.md
diff --git a/en-US/600.manage/600.backup-and-recovery/100.overview-of-physical-backup-and-recovery.md b/en-US/600.manage/600.backup-and-recovery/100.overview-of-physical-backup-and-recovery.md
index 8d657e4372..4680d041fd 100644
--- a/en-US/600.manage/600.backup-and-recovery/100.overview-of-physical-backup-and-recovery.md
+++ b/en-US/600.manage/600.backup-and-recovery/100.overview-of-physical-backup-and-recovery.md
@@ -152,7 +152,7 @@ Top-level data backup directories contain the following types of data:
### Log archive directories
-When you use NFS, OSS, or COS as the backup media, directories at the backup destination and file types under each directory are as follows:
+When you use NFS, OSS, or COS as the backup media, directories at the log archive destination and file types under each directory are as follows:
```javascript
log_archive_dest
@@ -174,7 +174,7 @@ log_archive_dest
├── logstream_1 // Log stream 1.
│ ├── file_info.obarc // The list of files in log stream 1.
│ ├── log
- │ │ └── 1.obarc // The archive file in log stream 1.
+ │ │ └── 1.obarc // The archive files in log stream 1.
│ └── schema_meta // The metadata of data dictionaries. This file is generated only in log stream 1.
│ └── 1677588501408765915.obarc
└── logstream_1001 // Log stream 1001.
@@ -279,7 +279,7 @@ In the preceding directory structure, `1.obarc` indicates a single archive file
| Feature | V3.x/V2.2x | V4.x |
|-------------------|------------------------------------------------------------------|------------------------------------------------------|
| Backup level | Cluster level | Tenant level |
-| Privilege | Operations such as setting a backup path, starting backup, and viewing the backup progress can be performed only in the `sys` tenant. | These operations can be performed either in the `sys` tenant or by an administrator user in a user tenant. |
+| Required privileges | Operations such as setting a backup path, starting backup, and viewing the backup progress can be performed only in the `sys` tenant. | These operations can be performed either in the `sys` tenant or by an administrator user in a user tenant. |
| Setting a backup path | You can use the `ALTER SYSTEM SET BACKUP_DEST` statement to set a backup path for a cluster. | You can use the `ALTER SYSTEM SET DATA_BACKUP_DEST` statement to set a backup path for a tenant. |
| Backing up data of a specified path | You can use the `ALTER SYSTEM BACKUP TENANT tenant_name_list TO backup_destination;` statement to initiate the backup from the `sys` tenant. | Not supported |
| BACKUP PLUS ARCHIVELOG | Not supported | Supported |
diff --git a/en-US/600.manage/600.backup-and-recovery/120.extremely-simplified-experience-of-backup-and-recovery.md b/en-US/600.manage/600.backup-and-recovery/120.extremely-simplified-experience-of-backup-and-recovery.md
index 4d43e77c0b..639db8df61 100644
--- a/en-US/600.manage/600.backup-and-recovery/120.extremely-simplified-experience-of-backup-and-recovery.md
+++ b/en-US/600.manage/600.backup-and-recovery/120.extremely-simplified-experience-of-backup-and-recovery.md
@@ -15,7 +15,7 @@ This topic provides an example of simplified deployment for you to experience th
## Considerations
-* This topic takes the backup of a single tenant as an example. To back up multiple tenants, you need to configure a separate path for the backup destination and archiving destination for each tenant. Different tenants cannot share the same path.
+* This topic takes the backup of a single tenant as an example. To back up multiple tenants, you need to configure a separate path for the backup destination and archive destination for each tenant. Different tenants cannot share the same path.
* OceanBase Database supports data restore both within a cluster and across clusters.
## Prerequisites
@@ -28,7 +28,7 @@ This topic takes the non-encrypted tenant `oracle_test` as an example to describ
Here is the key information:
-1. The log archiving path of the source tenant is `/data/nfs/backup/archive`.
+1. The log archive path of the source tenant is `/data/nfs/backup/archive`.
2. Archiving uses the default business prioritizing mode (optional), and a log piece needs to be generated every two days.
3. The data backup path of the source tenant is `/data/nfs/backup/data`.
4. The resource pool of the restore destination tenant `oracle_backup` is `restore_pool`, and the locality is `F@z1`. The tenant data needs to be restored to the latest timestamp.
@@ -41,7 +41,7 @@ Here is the key information:
1. Log on to the database as the administrator of the `oracle_test` tenant.
-2. Configure the archiving destination.
+2. Configure the archive destination.
```sql
obclient> ALTER SYSTEM SET LOG_ARCHIVE_DEST='LOCATION=file:///data/nfs/backup/archive';
@@ -53,7 +53,7 @@ Here is the key information:
obclient> ALTER SYSTEM ARCHIVELOG;
```
-4. Check whether the log archiving state is `DOING`. You can initiate data backup only when the log archiving state is `DOING`.
+4. Check whether the log archiving status is `DOING`. You can initiate data backup only when the log archiving status is `DOING`.
```sql
obclient> SELECT * FROM DBA_OB_ARCHIVELOG\G
@@ -92,13 +92,13 @@ Here is the key information:
1 row in set
```
- The query result shows that the value of the log archiving state field `STATUS` is `DOING`.
+ The query result shows that the value of the log archiving status field `STATUS` is `DOING`.
For more information about log archiving operations and instructions, see [Log archiving](300.log-archive/100.overview-of-log-archive.md).
### Step 2: Initiate data backup
-Make sure that the log archiving state is `DOING` before you initiate data backup.
+Make sure that the log archiving status is `DOING` before you initiate data backup.
1. Log on to the database as the administrator of the `oracle_test` tenant.
@@ -108,13 +108,13 @@ Make sure that the log archiving state is `DOING` before you initiate data backu
obclient> ALTER SYSTEM SET DATA_BACKUP_DEST='file:///data/nfs/backup/data';
```
-3. Initiate a full data backup task.
+3. Initiate a full data backup.
```sql
obclient> ALTER SYSTEM BACKUP DATABASE;
```
-4. Wait for the data backup task to be completed.
+4. Wait for the data backup to be completed.
You can query the `DBA_OB_BACKUP_TASKS` view to check whether the data backup task is completed. If the returned task list is empty, the data backup task is completed.
@@ -122,13 +122,13 @@ Make sure that the log archiving state is `DOING` before you initiate data backu
obclient> SELECT * FROM DBA_OB_BACKUP_TASKS;
```
-5. View the result of the data backup task.
+5. View the data backup result.
```sql
obclient> SELECT * FROM DBA_OB_BACKUP_JOB_HISTORY;
```
- In this example, the query result is as follows:
+ The query result is as follows:
```shell
*************************** 1. row ***************************
@@ -179,6 +179,4 @@ For more information about data backup operations and instructions, see [Data ba
obclient> ALTER SYSTEM RESTORE oracle_backup FROM 'file:///data/nfs/backup/data,file:///data/nfs/backup/archive' WITH 'pool_list=restore_pool&locality=F@z1';
```
-For more information about the operations and instructions on physical restore, see [Restore data](600.restore-data/100.preparation-before-recovery.md).
-
-
+For more information about the operations and instructions on physical restore, see [Restore data](600.restore-data/100.preparation-before-recovery.md).
\ No newline at end of file
diff --git a/en-US/600.manage/600.backup-and-recovery/300.log-archive/100.overview-of-log-archive.md b/en-US/600.manage/600.backup-and-recovery/300.log-archive/100.overview-of-log-archive.md
index f57a03fe47..2f747ea595 100644
--- a/en-US/600.manage/600.backup-and-recovery/300.log-archive/100.overview-of-log-archive.md
+++ b/en-US/600.manage/600.backup-and-recovery/300.log-archive/100.overview-of-log-archive.md
@@ -25,7 +25,7 @@ Two log archiving modes are supported: `ARCHIVELOG` and `NOARCHIVELOG`. You can
### Archiving status
-In log archiving, a collection of finite-state machines is used to describe the log archiving status of each tenant. You can view the archiving status of the `sys` tenant from the `STATUS` column in the `oceanbase.CDB_OB_ARCHIVELOG` view. You can view the archiving status of a user tenant from the `STATUS` column in the `oceanbase.DBA_OB_ARCHIVELOG` view in MySQL mode or from the `STATUS` column in the `sys.DBA_OB_ARCHIVELOG` view in Oracle mode.
+In log archiving, a collection of finite-state machines is used to describe the log archiving status of the archive destination. You can view the archiving status of the `sys` tenant from the `STATUS` column in the `oceanbase.CDB_OB_ARCHIVELOG` view. You can view the archiving status of a user tenant from the `STATUS` column in the `oceanbase.DBA_OB_ARCHIVELOG` view in MySQL mode or from the `STATUS` column in the `sys.DBA_OB_ARCHIVELOG` view in Oracle mode.
The following table describes the archiving status.
@@ -44,13 +44,13 @@ The following table describes the archiving status.
### Log group
-In OceanBase Database, each archive log record is a collection of logs, including several log entries. The archive log record is called a log group. Each log entry has a system change number (SCN). The log group also has an SCN, which is the largest one of the SCNs of all log entries. In log archiving, log groups are organized and managed in the archiving media.
+In OceanBase Database, each archive log is a collection of log entries and is therefore called a log group. Each log entry has a system change number (SCN). The log group also has an SCN, which is the largest SCN among all its log entries. Log archiving essentially organizes and manages log groups in the archive media.
### Piece
-OceanBase Database organizes and manages archived logs by piece. A piece is a complete collection of logs of a tenant within a consecutive period. The range of SCNs of logs in a piece is a left-closed and right-open interval. This implies the following points:
+OceanBase Database organizes and manages archive logs by piece. A piece is a complete collection of logs of a tenant within a consecutive period. The range of SCNs of logs in a piece is a left-closed and right-open interval. This implies the following points:
-* One piece contains the logs of a consecutive period, such as `1d` or `2d`. This period is specified by the `PIECE_SWITCH_INTERVAL` attribute in the `LOG_ARCHIVE_DEST` parameter.
+* One piece contains the logs within a consecutive period, such as `1d` or `2d`. This period is specified by the `PIECE_SWITCH_INTERVAL` attribute in the `LOG_ARCHIVE_DEST` parameter.
* The logs in a piece are complete. The logs generated by all log streams of this tenant within the specified period are organized in this piece.
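For illustration, a hedged example that sets a two-day piece switch interval on an NFS archive destination (the path is a placeholder):

```sql
ALTER SYSTEM SET LOG_ARCHIVE_DEST = 'LOCATION=file:///data/nfs/backup/archive PIECE_SWITCH_INTERVAL=2d';
```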
@@ -58,11 +58,11 @@ Assume that the `PIECE_SWITCH_INTERVAL` attribute in the `LOG_ARCHIVE_DEST` para
### Attributes of pieces
-A piece has the following attributes: `START_SCN`, `END_SCN`, `CHECKPOINT_SCN`, `STATUS` and `FILE_STATUS`.
+A piece has the following attributes: `START_SCN`, `END_SCN`, `CHECKPOINT_SCN`, `STATUS`, and `FILE_STATUS`.
### `START_SCN`
-`START_SCN` specifies the start SCN of consecutive logs in the piece.
+`START_SCN` indicates the start SCN of consecutive logs in the piece.
### `END_SCN`
diff --git a/en-US/600.manage/600.backup-and-recovery/300.log-archive/200.preparation-before-log-archive.md b/en-US/600.manage/600.backup-and-recovery/300.log-archive/200.preparation-before-log-archive.md
index e1ab9156ca..90cb4fcf14 100644
--- a/en-US/600.manage/600.backup-and-recovery/300.log-archive/200.preparation-before-log-archive.md
+++ b/en-US/600.manage/600.backup-and-recovery/300.log-archive/200.preparation-before-log-archive.md
@@ -21,7 +21,7 @@ Before you enable the archiving mode for a tenant, you can configure the log arc
Notice
- If the tenant is configured with two or four CPU cores, we recommend that you retain the default value of this parameter.
+ If the tenant is configured with two or four CPU cores, we recommend that you use the default value.
Methods for adjusting the log archiving concurrency are as follows:
@@ -44,10 +44,10 @@ Before you enable the archiving mode for a tenant, you can configure the log arc
ALTER SYSTEM SET log_archive_concurrency = 10 TENANT = all;
```
-
- Note
- Starting from OceanBase Database V4.2.1, `TENANT = all_user` and `TENANT = all` express the same semantics. If you want an operation to take effect on all user tenants, we recommend that you use `TENANT = all_user`. `TENANT = all` will be deprecated.
-
+
+ Note
+ Starting from OceanBase Database V4.2.1, `TENANT = all_user` and `TENANT = all` express the same semantics. If you want an operation to take effect on all user tenants, we recommend that you use `TENANT = all_user`. `TENANT = all` will be deprecated.
+
* Execute the following statement in a user tenant to adjust its log archiving concurrency:
@@ -78,13 +78,13 @@ Make sure that the archive path for each tenant is a separate empty directory. Y
2. Configure the archive destination.
- * Configure the archive destination for a specified tenant in the `sys` tenant:
+ * Configure the archive destination for a specified tenant in the `sys` tenant
```sql
ALTER SYSTEM SET LOG_ARCHIVE_DEST='LOCATION=archive_path [BINDING=archive_mode] [PIECE_SWITCH_INTERVAL=piece_switch_interval]' TENANT = tenant_name;
```
- * Configure the archive destination for a user tenant from the current tenant:
+ * Configure the archive destination for a user tenant from the current tenant
```sql
ALTER SYSTEM SET LOG_ARCHIVE_DEST='LOCATION=archive_path [BINDING=archive_mode] [PIECE_SWITCH_INTERVAL=piece_switch_interval]';
@@ -95,7 +95,7 @@ Make sure that the archive path for each tenant is a separate empty directory. Y
After you upgrade OceanBase Database from V4.0.x to V4.1.0, you must change the archive path. When you upgrade from V4.1.x to V4.2.x, you do not need to change the archive path, and logs can still be archived during the upgrade process.
- The procedure is as follows:
+ The details are as follows:
* (Required) Set `LOCATION`.
@@ -124,13 +124,21 @@ Make sure that the archive path for each tenant is a separate empty directory. Y
To set OSS as the archive destination and configure the `tagging` cleanup mode for the `mysql_tenant` tenant in the `sys` tenant, execute the following statement:
```sql
- obclient> ALTER SYSTEM SET LOG_ARCHIVE_DEST='LOCATION=oss://oceanbase-test-bucket/backup/archive?host=***.aliyun-inc.com&access_id=***&access_key=***&delete_mode=tagging' TENANT = mysql_tenant;
+ obclient> ALTER SYSTEM SET LOG_ARCHIVE_DEST='LOCATION=oss://oceanbase-test-bucket/backup/archive?
+ host=***.aliyun-inc.com
+ &access_id=***
+ &access_key=***
+ &delete_mode=tagging' TENANT = mysql_tenant;
```
To set OSS as the archive destination and configure the `tagging` cleanup mode for the current user tenant, execute the following statement:
```sql
- obclient> ALTER SYSTEM SET LOG_ARCHIVE_DEST='LOCATION=oss://oceanbase-test-bucket/backup/archive?host=***.aliyun-inc.com&access_id=***&access_key=***&delete_mode=tagging';
+ obclient> ALTER SYSTEM SET LOG_ARCHIVE_DEST='LOCATION=oss://oceanbase-test-bucket/backup/archive?
+ host=***.aliyun-inc.com
+ &access_id=***
+ &access_key=***
+ &delete_mode=tagging';
```
Here, `oss://` indicates that OSS is used as the archive destination, the storage bucket name is `oceanbase-test-bucket`, the path in the storage bucket is `/backup/archive`, and `?` is used to separate other parameters of the path. The `host` parameter specifies the host address of the storage bucket. The `access_id` and `access_key` parameters specify the access key of OSS. The cleanup mode is set to `tagging`.
@@ -167,34 +175,62 @@ Make sure that the archive path for each tenant is a separate empty directory. Y
Notice
- If you use COS as the archive destination, you must disable the list cache for buckets. Otherwise, backup data consistency errors may occur. For guidance on how to disable the list cache of a bucket, contact the technical support of COS.
+ If you use COS as the archive destination, note that:
+
+ - The list cache of the bucket must be disabled. Otherwise, a backup inconsistency error occurs. For guidance on how to disable the list cache of a bucket, contact the technical support of COS.
+ - To use the APPEND Object API for a bucket, you must disable the multi-AZ feature. If the multi-AZ feature is enabled, an error is reported during archiving.
+
- Like OSS, COS also allows you to configure the cleanup mode of archive files by using the `delete_mode` parameter in the same way as with OSS.
+ COS also allows you to configure the cleanup mode of archive files by using the `delete_mode` parameter in the same way as with OSS.
- To set COS as the archive destination and configure the `tagging` cleanup mode for the `mysql_tenant` tenant in the `sys` tenant, execute the following statement:
+ * To set COS as the archive destination and configure the `tagging` cleanup mode for the `mysql_tenant` tenant in the `sys` tenant, execute the following statement:
- ```sql
- obclient> ALTER SYSTEM SET LOG_ARCHIVE_DEST='LOCATION=cos://oceanbase-test-appid/backup/archive?host=cos.ap-xxx.myqcloud.com&access_id=***&access_key=***&appid=***&delete_mode=tagging' TENANT = mysql_tenant;
- ```
+ ```sql
+ obclient> ALTER SYSTEM SET LOG_ARCHIVE_DEST='LOCATION=cos://oceanbase-test-appid/backup/archive?
+ host=cos.ap-xxx.myqcloud.com
+ &access_id=***
+ &access_key=***&appid=***
+ &delete_mode=tagging' TENANT = mysql_tenant;
+ ```
- Here, `cos://` indicates that COS is used as the archive destination, the storage bucket name is `oceanbase-test-appid`, the path in the storage bucket is `/backup/archive`, and `?` is used to separate other parameters of the path. The `host` parameter specifies the host address of the storage bucket, that is, the endpoint (without the bucket name) of the bucket. The `access_id` and `access_key` parameters specify the access key of COS. The `appid` parameter is required, and it specifies the Tencent Cloud account. The cleanup mode is set to `tagging`.
+ * To set COS as the archive destination and configure the `tagging` cleanup mode for the current user tenant, execute the following statement:
+
+ ```sql
+ obclient> ALTER SYSTEM SET LOG_ARCHIVE_DEST='LOCATION=cos://oceanbase-test-appid/backup/archive?
+ host=cos.ap-xxx.myqcloud.com
+ &access_id=***
+ &access_key=***&appid=***
+ &delete_mode=tagging';
+ ```
+
+ Here, `cos://` indicates that COS is used as the archive destination, the storage bucket name is `oceanbase-test-appid`, the path in the storage bucket is `/backup/archive`, and `?` is used to separate other parameters of the path. The `host` parameter specifies the host address of the storage bucket, that is, the endpoint (without the bucket name) of the bucket. The `access_id` and `access_key` parameters specify the access key of COS. The `appid` parameter is required, and it specifies the APPID of the Tencent Cloud account. The cleanup mode is set to `tagging`.
tab S3
- Like OSS and COS, S3 also allows you to configure the cleanup mode of archive files by using the `delete_mode` parameter in the same way as with OSS and COS.
+ S3 also allows you to configure the cleanup mode of archive files by using the `delete_mode` parameter in the same way as with OSS and COS.
- To set S3 as the archive destination and configure the `tagging` cleanup mode for the `mysql_tenant` tenant in the `sys` tenant, execute the following statement:
+ * To set S3 as the archive destination and configure the `tagging` cleanup mode for the `mysql_tenant` tenant in the `sys` tenant, execute the following statement:
- ```sql
- obclient> ALTER SYSTEM SET LOG_ARCHIVE_DEST='LOCATION=s3://oceanbase-test-bucket/backup/archive?host=s3..amazonaws.com&access_id=******&access_key=******&s3_region=******&delete_mode=tagging' TENANT = mysql_tenant;
- ```
+ ```sql
+ obclient> ALTER SYSTEM SET LOG_ARCHIVE_DEST='LOCATION=s3://oceanbase-test-bucket/backup/archive?
+ host=s3..amazonaws.com
+ &access_id=******
+ &access_key=******
+ &s3_region=******
+ &delete_mode=tagging' TENANT = mysql_tenant;
+ ```
- To set S3 as the archive destination and configure the `tagging` cleanup mode for the current user tenant, execute the following statement:
+ * To set S3 as the archive destination and configure the `tagging` cleanup mode for the current user tenant, execute the following statement:
- ```sql
- obclient> ALTER SYSTEM SET LOG_ARCHIVE_DEST='LOCATION=s3://oceanbase-test-bucket/backup/archive?host=s3..amazonaws.com&access_id=******&access_key=******&s3_region=******&delete_mode=tagging';
- ```
+ ```sql
+ obclient> ALTER SYSTEM SET LOG_ARCHIVE_DEST='LOCATION=s3://oceanbase-test-bucket/backup/archive?
+ host=s3..amazonaws.com
+ &access_id=******
+ &access_key=******
+ &s3_region=******
+ &delete_mode=tagging';
+ ```
Here, `s3://` indicates that S3 is used as the archive destination, the storage bucket name is `oceanbase-test-bucket`, the path in the storage bucket is `/backup/archive`, and `?` is used to separate other parameters of the path. The `host` parameter specifies the domain name of the S3 service. The `access_id` and `access_key` parameters specify the access key of AWS services. The `s3_region` parameter is required, and it specifies the region where the S3 storage bucket is located. The cleanup mode is set to `tagging`.
@@ -254,72 +290,18 @@ In addition, incremental configuration is not supported after the archive destin
Execute the following statement to modify the attribute settings.
-* Modify the attribute settings for a specified tenant in the `sys` tenant:
+* Modify the attribute settings for a specified tenant in the `sys` tenant
```sql
obclient> ALTER SYSTEM SET LOG_ARCHIVE_DEST = 'LOCATION=file:///data/nfs/backup/archive BINDING=Mandatory PIECE_SWITCH_INTERVAL=2d' TENANT = mysql_tenant;
```
-* Modify the attribute settings for the current user tenant:
+* Modify the attribute settings for the current user tenant
```sql
obclient> ALTER SYSTEM SET LOG_ARCHIVE_DEST = 'LOCATION=file:///data/nfs/backup/archive BINDING=Mandatory PIECE_SWITCH_INTERVAL=2d';
```
-## Set the archiving status of the archive destination
-
-By default, the archiving status of a newly configured archive destination is `ENABLE`. You can query the `DBA_OB_ARCHIVE_DEST` view for the archiving status of the archive destination. For more information, see [View the archiving parameter settings](../300.log-archive/800.view-parameters-of-log-archive.md). If `ARCHIVELOG` has been enabled for this tenant, the system automatically triggers an archiving job to archive logs to the specified path of the destination.
-
-The following table describes the archiving status of the archive destination.
-
-| State | Description |
-|-------------|-------------------------------------------------------------------------|
-| ENABLE | Archiving is enabled at the archive destination. If no archiving job is initiated, the system will initiate an archiving job. |
-| DEFER | Archiving is stopped at the archive destination. If an archiving job is in progress, the system will stop the archiving job. |
-
-Perform the following steps to set the archiving status of the archive destination to `DEFER` or `ENABLE` based on the actual situation.
-
-1. Log on to the database as an administrator of the `sys` tenant or a user tenant.
-
-
- Note
- The administrator user is `root` for a tenant in MySQL mode and `SYS` for a tenant in Oracle mode.
-
-
-2. Enable or disable archiving at the archive destination based on the actual situation.
-
- * Enable archiving at the archive destination.
-
- * Enable archiving at the archive destination for a specified tenant in the `sys` tenant
-
- ```sql
- ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE='ENABLE' TENANT = tenant_name;
- ```
-
- * Enable archiving at the archive destination for the current user tenant
-
- ```sql
- ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE='ENABLE';
- ```
-
- After archiving is enabled at the archive destination, an archiving job will be triggered after `ARCHIVELOG` is enabled for the tenant, to archive logs to the specified path of the destination. If `ARCHIVELOG` has been enabled for the tenant, an archiving job is automatically triggered.
-
- * Disable archiving at the archive destination.
-
- * Disable archiving at the archive destination for a specified tenant in the `sys` tenant
-
- ```sql
- ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE='DEFER' TENANT = tenant_name;
- ```
-
- * Disable archiving at the archive destination for the current user tenant
-
- ```sql
- ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE='DEFER';
- ```
-
- After archiving is disabled at the archive destination, the ongoing archiving job will be suspended. You can set the archiving status of the archive destination to `ENABLE` to resume the archiving job.
-
## References
* [View the archiving parameter settings](../300.log-archive/800.view-parameters-of-log-archive.md)
diff --git a/en-US/600.manage/600.backup-and-recovery/300.log-archive/300.open-the-log-archive-mode.md b/en-US/600.manage/600.backup-and-recovery/300.log-archive/300.open-the-log-archive-mode.md
index 0213efbd9f..fe1cad7d63 100644
--- a/en-US/600.manage/600.backup-and-recovery/300.log-archive/300.open-the-log-archive-mode.md
+++ b/en-US/600.manage/600.backup-and-recovery/300.log-archive/300.open-the-log-archive-mode.md
@@ -21,7 +21,7 @@ You can enable `ARCHIVELOG` for all tenants or a specified tenant in the cluster
2. Execute the following statement to enable `ARCHIVELOG`.
- * Enable `ARCHIVELOG` for all tenants in the cluster.
+ * Enable `ARCHIVELOG` for all tenants in the cluster
To enable `ARCHIVELOG` for all tenants in the cluster, execute the following statement:
@@ -49,7 +49,7 @@ You can enable `ARCHIVELOG` for all tenants or a specified tenant in the cluster
2 rows in set
```
- * Enable `ARCHIVELOG` for a specified tenant.
+ * Enable `ARCHIVELOG` for a specified tenant
You can enable `ARCHIVELOG` for a specified tenant without affecting other tenants in the cluster.
@@ -103,6 +103,7 @@ You can enable `ARCHIVELOG` for a user tenant from the current tenant without af
## References
+* [Suspend archiving](320.suspend-the-archiving.md)
* [Disable ARCHIVELOG](../300.log-archive/400.close-the-log-archive-mode.md)
* [View the archiving progress](../300.log-archive/600.view-log-archive-progress.md)
* [View the archiving history](../300.log-archive/700.view-log-archive-history.md)
\ No newline at end of file
diff --git a/en-US/600.manage/600.backup-and-recovery/300.log-archive/320.suspend-the-archiving.md b/en-US/600.manage/600.backup-and-recovery/300.log-archive/320.suspend-the-archiving.md
new file mode 100644
index 0000000000..4abb867c44
--- /dev/null
+++ b/en-US/600.manage/600.backup-and-recovery/300.log-archive/320.suspend-the-archiving.md
@@ -0,0 +1,54 @@
+|description||
+|---|---|
+|keywords||
+|dir-name||
+|dir-name-en||
+|tenant-type||
+
+# Suspend archiving
+
+After you enable `ARCHIVELOG`, you can change the archiving status at the archive destination to `DEFER` to suspend archiving.
+
+## Background information
+
+The following table describes the archiving status at the archive destination.
+
+| State | Description |
+|-------------|-----------------------------------------------------------------------------------|
+| ENABLE | Archiving is enabled at the archive destination. By default, the archive destination is in the `ENABLE` state. |
+| DEFER | Archiving is suspended at the archive destination. If the archive destination is in this state, the system will suspend any existing archiving jobs. |
+
+## Procedure
+
+1. Log on to the database as an administrator of the `sys` tenant or a user tenant.
+
+
+ Note
+ The administrator user is `root` for a tenant in MySQL mode and `SYS` for a tenant in Oracle mode.
+
+
+2. Suspend archiving at the archive destination.
+
+ * Suspend archiving for a specified tenant from the `sys` tenant
+
+ ```sql
+ ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE='DEFER' TENANT = tenant_name;
+ ```
+
+ * Suspend archiving for the current user tenant
+
+ ```sql
+ ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE='DEFER';
+ ```
+
+3. Query the archiving status of the archive destination from a view. For more information, see [View the archiving parameter settings](../300.log-archive/800.view-parameters-of-log-archive.md).
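+
+   For reference, a minimal sketch of such a query in MySQL mode, assuming the view resides in the `oceanbase` schema as with the other views in this topic:
+
+   ```sql
+   SELECT * FROM oceanbase.DBA_OB_ARCHIVE_DEST;
+   ```
+
+   Once the suspension takes effect, the `state` attribute returned for the archive destination should show `DEFER`.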
+
+
+## What to do next
+
+After archiving is suspended, you can re-enable archiving by setting the archiving status at the archive destination to `ENABLE`.
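+
+   For example, mirroring the suspend statements above, the following statement resumes archiving for a specified tenant from the `sys` tenant:
+
+   ```sql
+   ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE='ENABLE' TENANT = tenant_name;
+   ```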
+
+## References
+
+* [View the archiving progress](../300.log-archive/600.view-log-archive-progress.md)
+* [View the archiving history](../300.log-archive/700.view-log-archive-history.md)
\ No newline at end of file
diff --git a/en-US/600.manage/600.backup-and-recovery/300.log-archive/400.close-the-log-archive-mode.md b/en-US/600.manage/600.backup-and-recovery/300.log-archive/400.close-the-log-archive-mode.md
index c7014ebce7..6e03954280 100644
--- a/en-US/600.manage/600.backup-and-recovery/300.log-archive/400.close-the-log-archive-mode.md
+++ b/en-US/600.manage/600.backup-and-recovery/300.log-archive/400.close-the-log-archive-mode.md
@@ -21,7 +21,7 @@ You can disable `ARCHIVELOG` for all tenants or a specified tenant in the cluste
2. Execute the following statement to disable `ARCHIVELOG`.
- * Disable `ARCHIVELOG` for all tenants in the cluster.
+ * Disable `ARCHIVELOG` for all tenants in the cluster
To disable `ARCHIVELOG` for all tenants in the cluster, execute the following statement:
@@ -42,7 +42,7 @@ You can disable `ARCHIVELOG` for all tenants or a specified tenant in the cluste
2 rows in set
```
- * Disable `ARCHIVELOG` for a specified tenant in the cluster.
+ * Disable `ARCHIVELOG` for a specified tenant in the cluster
To disable `ARCHIVELOG` for a specified tenant without affecting other tenants in the cluster, execute the following statement:
@@ -58,7 +58,7 @@ You can disable `ARCHIVELOG` for all tenants or a specified tenant in the cluste
In this example, after you execute this statement, only the `mysql_tenant` tenant enters the `NOARCHIVELOG` mode. You can query the `oceanbase.DBA_OB_TENANTS` view from the `sys` tenant for the archiving modes of all tenants in the cluster.
```shell
- obclient [(none)]> SELECT TENANT_NAME, LOG_MODE oceanbase.FROM DBA_OB_TENANTS WHERE TENANT_TYPE = 'USER'\G
+ obclient [(none)]> SELECT TENANT_NAME, LOG_MODE FROM oceanbase.DBA_OB_TENANTS WHERE TENANT_TYPE = 'USER'\G
*************************** 1. row ***************************
TENANT_NAME: mysql_tenant
LOG_MODE: NOARCHIVELOG
diff --git a/en-US/600.manage/600.backup-and-recovery/300.log-archive/700.view-log-archive-history.md b/en-US/600.manage/600.backup-and-recovery/300.log-archive/700.view-log-archive-history.md
index 7026dc630a..fc8bad3038 100644
--- a/en-US/600.manage/600.backup-and-recovery/300.log-archive/700.view-log-archive-history.md
+++ b/en-US/600.manage/600.backup-and-recovery/300.log-archive/700.view-log-archive-history.md
@@ -104,7 +104,7 @@ In the preceding example, the current cluster contains two tenants with the IDs
For tenant 1002, logs whose SCNs fall within the range of `'2022-06-01 06:00:00.000000'` to `'2022-06-03 12:00:00.000000'` are archived during the first round of archiving, generating three pieces with the piece IDs 1 to 3. The amount of archived log data is 90 GB.
-`round_id` of tenant `1004` is `1`, indicating that log archiving is enabled the first time. The archive media type is OSS, the storage bucket is `oceanbase-test-bucket`, and the archive path is `/backup/archive`. Two pieces with the piece IDs 1 and 2 have been generated since log archiving is enabled. Logs whose SCNs fall within the range of `'2022-06-20 01:00:00.000000'` to `'2022-06-21 08:00:00.000000'` are archived.
+`round_id` of tenant `1004` is `1`, indicating that log archiving is enabled for the first time. The archive media type is OSS, the storage bucket is `oceanbase-test-bucket`, and the archive path is `/backup/archive`. Two pieces with the piece IDs 1 and 2 have been generated since log archiving was enabled. Logs whose SCNs fall within the range of `'2022-06-20 01:00:00.000000'` to `'2022-06-21 08:00:00.000000'` are archived.
For more information about the `CDB_OB_ARCHIVELOG_SUMMARY` view, see [CDB_OB_ARCHIVELOG_SUMMARY](../../../700.reference/700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/13500.oceanbase-cdb_ob_archivelog_summary-of-sys-tenant.md).
diff --git a/en-US/600.manage/600.backup-and-recovery/300.log-archive/900.views-of-log-archive.md b/en-US/600.manage/600.backup-and-recovery/300.log-archive/900.views-of-log-archive.md
index 16efacffeb..94ebcf731f 100644
--- a/en-US/600.manage/600.backup-and-recovery/300.log-archive/900.views-of-log-archive.md
+++ b/en-US/600.manage/600.backup-and-recovery/300.log-archive/900.views-of-log-archive.md
@@ -44,7 +44,7 @@ OceanBase Database allows you to query views for information related to log arch
| STATUS | The archiving status. Valid values:<br>- `BEGINNING`: Log archiving is being started.<br>- `DOING`: Log archiving is in progress.<br>- `INTERRUPTED`: Log archiving is interrupted and requires manual intervention.<br>- `STOP`: Log archiving is stopped.<br>- `STOPPING`: Log archiving is being stopped.<br>- `SUSPENDING`: Log archiving is being suspended.<br>- `SUSPEND`: Log archiving is suspended. |
| START_SCN | The start SCN for archiving. |
| START_SCN_DISPLAY | The value of `START_SCN` after being converted into the unit of time. |
-| CHECKPOINT_SCN | The end SCN for archiving. |
+| CHECKPOINT_SCN | The SCN at which the currently latest archived logs were generated. |
| CHECKPOINT_SCN_DISPLAY | The value of `CHECKPOINT_SCN` after being converted into the unit of time. |
| COMPATIBLE | The version number for compatibility. |
| BASE_PIECE_ ID | The ID of the first piece generated during this round of archiving. |
@@ -77,14 +77,14 @@ OceanBase Database allows you to query views for information related to log arch
| STATUS | The archiving status. Valid values:<br>- `BEGINNING`: Log archiving is being started.<br>- `DOING`: Log archiving is in progress.<br>- `INTERRUPTED`: Log archiving is interrupted and requires manual intervention.<br>- `STOP`: Log archiving is stopped.<br>- `STOPPING`: Log archiving is being stopped.<br>- `SUSPENDING`: Log archiving is being suspended.<br>- `SUSPEND`: Log archiving is suspended. |
| START_SCN | The start SCN for archiving. |
| START_SCN_DISPLAY | The value of `START_SCN` after being converted into the unit of time. |
-| CHECKPOINT_SCN | The end SCN for archiving. |
+| CHECKPOINT_SCN | The SCN at which the currently latest archived logs were generated. |
| CHECKPOINT_SCN_DISPLAY | The value of `CHECKPOINT_SCN` after being converted into the unit of time. |
| COMPATIBLE | The version number for compatibility. |
| BASE_PIECE_ID | The ID of the first piece generated during this round of archiving. |
| USED_PIECE_ID | The piece ID that has been used in this round of archiving. |
| PIECE_SWITCH_INTERVAL | The switching interval of pieces. |
-| UNIT_SIZE | The size of the log block into which archive log data is compressed or encrypted. This column has not been used. |
-| COMPRESSION | The compression algorithm. This column has not been used. |
+| UNIT_SIZE | The size of the log block into which archive log data is compressed or encrypted. This field has not been used. |
+| COMPRESSION | The compression algorithm. This field has not been used. |
| INPUT_BYTES | The amount of raw log data. |
| INPUT_BYTES_DISPLAY | The amount of raw log data with a unit, such as 798.01 MB or 5.25 GB. |
| OUTPUT_BYTES | The amount of archive log data after compression or encryption. |
@@ -111,7 +111,7 @@ OceanBase Database allows you to query views for information related to log arch
| STATUS | The archiving status. |
| START_SCN | The start SCN for archiving. |
| START_SCN_DISPLAY | The value of `START_SCN` after being converted into the unit of time. |
-| CHECKPOINT_SCN | The end SCN for archiving. |
+| CHECKPOINT_SCN | The SCN at which the currently latest archived logs were generated. |
| CHECKPOINT_SCN_DISPLAY | The value of `CHECKPOINT_SCN` after being converted into the unit of time. |
| COMPATIBLE | The version number for compatibility. |
| BASE_PIECE_ID | The ID of the first piece generated during this round of archiving. |
@@ -176,10 +176,10 @@ OceanBase Database allows you to query views for information related to log arch
| PIECE_ID | The ID of the archived piece. |
| INCARNATION | The ID of the incarnation. |
| DEST_NO | The archive path. The value `0` indicates the `LOG_ARCHIVE_DEST` parameter. |
-| STATUS | The status of the piece:<br>- `ACTIVE`: The piece is active.<br>- `FREEZING`: The piece is being frozen.<br>- `FROZEN`: The piece has been frozen. After a piece is frozen, its status will no longer be modified. |
+| STATUS | The status of the piece.<br>- `ACTIVE`: The piece is active.<br>- `FREEZING`: The piece is being frozen.<br>- `FROZEN`: The piece has been frozen. After a piece is frozen, its status will no longer be modified. |
| START_SCN | The start SCN of the piece. |
| START_SCN_DISPLAY | The value of `START_SCN` after being converted into the unit of time. |
-| CHECKPOINT_SCN | The end SCN of the piece. |
+| CHECKPOINT_SCN | The SCN at which the currently latest archived logs in the piece were generated. |
| CHECKPOINT_SCN_DISPLAY | The value of `CHECKPOINT_SCN` after being converted into the unit of time. |
| MAX_SCN | The maximum SCN of all log streams in the piece. |
| END_SCN | The end SCN of the piece. |
@@ -206,7 +206,7 @@ OceanBase Database allows you to query views for information related to log arch
| PIECE_ID | The ID of the archived piece. |
| INCARNATION | The ID of the incarnation. |
| DEST_NO | The archive path. The value `0` indicates the `LOG_ARCHIVE_DEST` parameter. |
-| STATUS | The status of the piece:<br>- `ACTIVE`: The piece is active.<br>- `FREEZING`: The piece is being frozen.<br>- `FROZEN`: The piece has been frozen. After a piece is frozen, its status will no longer be modified. |
+| STATUS | The status of the piece.<br>- `ACTIVE`: The piece is active.<br>- `FREEZING`: The piece is being frozen.<br>- `FROZEN`: The piece has been frozen. After a piece is frozen, its status will no longer be modified. |
| START_SCN | The start SCN of the piece. |
| START_SCN_DISPLAY | The value of `START_SCN` after being converted into the unit of time. |
| CHECKPOINT_SCN | The end SCN of the piece. |
diff --git a/en-US/600.manage/600.backup-and-recovery/400.data-backup/100.preparation-before-data-backup.md b/en-US/600.manage/600.backup-and-recovery/400.data-backup/100.preparation-before-data-backup.md
index 369a693d3a..72546313e9 100644
--- a/en-US/600.manage/600.backup-and-recovery/400.data-backup/100.preparation-before-data-backup.md
+++ b/en-US/600.manage/600.backup-and-recovery/400.data-backup/100.preparation-before-data-backup.md
@@ -11,7 +11,7 @@ Before you back up data, you must configure the backup destination and backup ke
## Backup architecture
-Followers are preferentially used for backup. The following figure shows the backup architecture:
+Followers are preferentially used for backup. The following figure shows the backup architecture.
![Backup architecture](https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/observer-enterprise/V4.1.0/EN_US/6.manage/6.backup-and-restore/LogArchiving.png)
@@ -36,13 +36,13 @@ When you configure the backup destination, make sure that the backup path for ea
OceanBase Database allows you to use the following types of media as the backup destination: Network File System (NFS), Alibaba Cloud Object Storage Service (OSS), Tencent Cloud Object Storage (COS), and Amazon Simple Storage Service (S3).
- * Configure the backup destination for a specified tenant in the sys tenant.
+ * Configure the backup destination for a specified tenant in the `sys` tenant
```sql
ALTER SYSTEM SET DATA_BACKUP_DEST= 'data_backup_path' TENANT = mysql_tenant;
```
- * Configure the backup destination for the current tenant in a user tenant.
+ * Configure the backup destination for the current tenant in a user tenant
```sql
@@ -61,19 +61,27 @@ When you configure the backup destination, make sure that the backup path for ea
When you use OSS as the backup destination, you can set the backup destination and the `delete_mode` parameter to configure the backup file cleanup mode. The `delete_mode` parameter can be set to `delete` or `tagging`. If you do not specify `delete_mode`, the default cleanup mode `delete` takes effect. For more information about the `delete_mode` parameter, see [View data backup parameter settings](700.parameters-of-data-backup.md).
- To set OSS as the backup destination and configure the backup file cleanup mode for the `mysql_tenant` tenant in the `sys` tenant, execute the following statement:
+ * To set OSS as the backup destination and configure the backup file cleanup mode for the `mysql_tenant` tenant in the `sys` tenant, execute the following statement:
- ```sql
- obclient> ALTER SYSTEM SET DATA_BACKUP_DEST='oss://oceanbase-test-bucket/backup/?host=***.aliyun-inc.com&access_id=***&access_key=***&delete_mode=delete' TENANT = mysql_tenant;
- ```
+ ```sql
+ obclient> ALTER SYSTEM SET DATA_BACKUP_DEST='oss://oceanbase-test-bucket/backup/data?
+ host=***.aliyun-inc.com
+ &access_id=***
+ &access_key=***
+ &delete_mode=delete' TENANT = mysql_tenant;
+ ```
- To set OSS as the backup destination and configure the backup file cleanup mode for the current user tenant, execute the following statement:
+ * To set OSS as the backup destination and configure the backup file cleanup mode for the current user tenant, execute the following statement:
- ```sql
- obclient> ALTER SYSTEM SET DATA_BACKUP_DEST='oss://oceanbase-test-bucket/backup/?host=***.aliyun-inc.com&access_id=***&access_key=***&delete_mode=delete';
- ```
+ ```sql
+ obclient> ALTER SYSTEM SET DATA_BACKUP_DEST='oss://oceanbase-test-bucket/backup/data?
+ host=***.aliyun-inc.com
+ &access_id=***
+ &access_key=***
+ &delete_mode=delete';
+ ```
- Here, `oss://` indicates that OSS is used as the backup destination, the storage bucket name is `oceanbase-test-bucket`, the path in the storage bucket is `/backup`, and `?` is used to separate other parameters of the path. The `host` parameter specifies the host address of the storage bucket. The `access_id` and `access_key` parameters specify the access key of OSS. The cleanup mode is set to `delete`.
+ Here, `oss://` indicates that OSS is used as the backup destination, the storage bucket name is `oceanbase-test-bucket`, the path in the storage bucket is `/backup/data`, and `?` is used to separate other parameters of the path. The `host` parameter specifies the host address of the storage bucket. The `access_id` and `access_key` parameters specify the access key of OSS. The cleanup mode is set to `delete`.
For more information about the automatic cleanup of backup data in `delete` or `tagging` mode, see [Automatically clean up expired backup data](../500.clear-backup-data/100.cleaning-up-backed-up-data-automatically.md).
@@ -89,17 +97,17 @@ When you configure the backup destination, make sure that the backup path for ea
- To set NFS as the backup destination for the `mysql_tenant` tenant in the `sys` tenant, execute the following statement:
+ * To set NFS as the backup destination for the `mysql_tenant` tenant in the `sys` tenant, execute the following statement:
- ```sql
- obclient> ALTER SYSTEM SET DATA_BACKUP_DEST= 'file:///data/nfs/backup/data' TENANT = mysql_tenant;
- ```
+ ```sql
+ obclient> ALTER SYSTEM SET DATA_BACKUP_DEST= 'file:///data/nfs/backup/data' TENANT = mysql_tenant;
+ ```
- To set NFS as the backup destination for the current user tenant, execute the following statement:
+ * To set NFS as the backup destination for the current user tenant, execute the following statement:
- ```sql
- obclient> ALTER SYSTEM SET DATA_BACKUP_DEST='file:///data/nfs/backup/data';
- ```
+ ```sql
+ obclient> ALTER SYSTEM SET DATA_BACKUP_DEST='file:///data/nfs/backup/data';
+ ```
Here, `file://` indicates that NFS is used as the backup destination, and the backup path is `file:///data/nfs/backup/data`.
@@ -107,46 +115,68 @@ When you configure the backup destination, make sure that the backup path for ea
Notice
- If you use COS as the backup destination, you must disable the list cache for buckets. Otherwise, backup data consistency errors may occur. For guidance on how to disable the list cache of a bucket, contact the technical support of COS.
+ When you use COS as the backup destination, note that:
+
+ - The list cache of the bucket must be disabled. Otherwise, a backup inconsistency error occurs. For guidance on how to disable the list cache of a bucket, contact the technical support of COS.
+ - To use the APPEND Object API for a bucket, you must disable the multi-AZ feature. If the multi-AZ feature is enabled, an error is reported during backup.
+
- Like OSS, COS also allows you to configure the cleanup mode of backup files by using the `delete_mode` parameter in the same way as with OSS.
+ COS also allows you to configure the cleanup mode of backup files by using the `delete_mode` parameter in the same way as with OSS.
- To set COS as the backup destination for the `mysql_tenant` tenant in the `sys` tenant, execute the following statement:
+ * To set COS as the backup destination for the `mysql_tenant` tenant in the `sys` tenant, execute the following statement:
- ```sql
- obclient> ALTER SYSTEM SET DATA_BACKUP_DEST='cos://oceanbase-test-appid/backup?host=cos.ap-xxxx.myqcloud.com&access_id=***&access_key=***&appid=***' TENANT = mysql_tenant;
- ```
+ ```sql
+ obclient> ALTER SYSTEM SET DATA_BACKUP_DEST='cos://oceanbase-test-appid/backup/data?
+ host=cos.ap-xxxx.myqcloud.com
+ &access_id=***
+ &access_key=***
+ &appid=***' TENANT = mysql_tenant;
+ ```
- To set COS as the backup destination for the current user tenant, execute the following statement:
+ * To set COS as the backup destination for the current user tenant, execute the following statement:
- ```sql
- obclient> ALTER SYSTEM SET DATA_BACKUP_DEST='cos://oceanbase-test-appid/backup?host=cos.ap-xxxx.myqcloud.com&access_id=***&access_key=***&appid=***';
- ```
+ ```sql
+ obclient> ALTER SYSTEM SET DATA_BACKUP_DEST='cos://oceanbase-test-appid/backup/data?
+ host=cos.ap-xxxx.myqcloud.com
+ &access_id=***
+ &access_key=***
+ &appid=***';
+ ```
- Here, `cos://` indicates that COS is used as the backup destination, the storage bucket name is `oceanbase-test-appid`, the path in the storage bucket is `/backup`, and `?` is used to separate other parameters of the path. The `host` parameter specifies the host address of the storage bucket, that is, the endpoint (without the bucket name) of the bucket. The `access_id` and `access_key` parameters specify the access key of COS. The `appid` parameter is required, and it specifies the Tencent Cloud account.
+ Here, `cos://` indicates that COS is used as the backup destination, the storage bucket name is `oceanbase-test-appid`, the path in the storage bucket is `/backup/data`, and `?` is used to separate other parameters of the path. The `host` parameter specifies the host address of the storage bucket, that is, the endpoint (without the bucket name) of the bucket. The `access_id` and `access_key` parameters specify the access key of COS. The `appid` parameter is required, and it specifies the APPID of the Tencent Cloud account.
tab S3
- Like OSS and COS, S3 also allows you to configure the cleanup mode of backup files by using the `delete_mode` parameter in the same way as with OSS and COS.
+ S3 also allows you to configure the cleanup mode of backup files by using the `delete_mode` parameter in the same way as with OSS and COS.
- To set S3 as the backup destination and configure the `tagging` cleanup mode for the `mysql_tenant` tenant in the `sys` tenant, execute the following statement:
+ * To set S3 as the backup destination and configure the `tagging` cleanup mode for the `mysql_tenant` tenant in the `sys` tenant, execute the following statement:
- ```sql
- obclient> ALTER SYSTEM SET DATA_BACKUP_DEST='s3://oceanbase-test-bucket/backup/data?host=s3..amazonaws.com&access_id=***&access_key=***&s3_region=***&delete_mode=tagging' TENANT = mysql_tenant;
- ```
+ ```sql
+ obclient> ALTER SYSTEM SET DATA_BACKUP_DEST='s3://oceanbase-test-bucket/backup/data?
+ host=s3..amazonaws.com
+ &access_id=***
+ &access_key=***
+ &s3_region=***
+ &delete_mode=tagging' TENANT = mysql_tenant;
+ ```
- To set S3 as the backup destination and configure the `tagging` cleanup mode for the current user tenant, execute the following statement:
+ * To set S3 as the backup destination and configure the `tagging` cleanup mode for the current user tenant, execute the following statement:
- ```sql
- obclient> ALTER SYSTEM SET DATA_BACKUP_DEST='s3://oceanbase-test-bucket/backup/data?host=s3..amazonaws.com&access_id=***&access_key=***&s3_region=***&delete_mode=tagging';
- ```
+ ```sql
+ obclient> ALTER SYSTEM SET DATA_BACKUP_DEST='s3://oceanbase-test-bucket/backup/data?
+ host=s3..amazonaws.com
+ &access_id=***
+ &access_key=***
+ &s3_region=***
+ &delete_mode=tagging';
+ ```
Here, `s3://` indicates that S3 is used as the backup destination, the storage bucket name is `oceanbase-test-bucket`, the path in the storage bucket is `/backup/data`, and `?` is used to separate other parameters of the path. The `host` parameter specifies the domain name of the S3 service. The `access_id` and `access_key` parameters specify the access key of AWS services. The `s3_region` parameter is required, and it specifies the region where the S3 storage bucket is located. The cleanup mode is set to `tagging`.
:::
-3. Query the `CDB_OB_BACKUP_PARAMETER` view from the `sys` tenant for the backup paths of all tenants in the current cluster. For more information, see [View data backup parameter settings](700.parameters-of-data-backup.md).
+3. Query the `CDB_OB_BACKUP_PARAMETER` view from the `sys` tenant for the backup paths of all tenants in the current cluster. For more information, see [View data backup parameter settings](700.parameters-of-data-backup.md).
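+
+   A minimal sketch of such a query in MySQL mode, assuming the view resides in the `oceanbase` schema as with the other views in this topic:
+
+   ```sql
+   SELECT * FROM oceanbase.CDB_OB_BACKUP_PARAMETER;
+   ```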
### Considerations
@@ -171,13 +201,13 @@ Before you back up data, you must check the encryption status of the source tena
2. Back up a key.
- * You can back up the key for a specified tenant from the `sys` tenant:
+ * You can back up the key for a specified tenant from the `sys` tenant.
```sql
ALTER SYSTEM BACKUP KEY TENANT = tenant_name TO 'backup_key_path' ENCRYPTED BY 'password';
```
- * You can back up the key for the current user tenant:
+ * You can also back up the key for the current user tenant.
@@ -199,7 +229,7 @@ Before you back up data, you must check the encryption status of the source tena
3. Check the backup path of the key in views.
- * In the `sys` tenant, you can query the `CDB_OB_BACKUP_STORAGE_INFO` view for the backup path of the key.
+ * In the `sys` tenant, you can query the `CDB_OB_BACKUP_STORAGE_INFO` view for the backup path of the key.
```sql
SELECT * FROM oceanbase.CDB_OB_BACKUP_STORAGE_INFO;
@@ -210,7 +240,7 @@ Before you back up data, you must check the encryption status of the source tena
:::tab
tab MySQL mode
- The syntax is as follows:
+ The statement is as follows:
```sql
SELECT * FROM oceanbase.DBA_OB_BACKUP_STORAGE_INFO;
@@ -218,7 +248,7 @@ Before you back up data, you must check the encryption status of the source tena
tab Oracle mode
- The syntax is as follows:
+ The statement is as follows:
```sql
SELECT * FROM SYS.DBA_OB_BACKUP_STORAGE_INFO;
diff --git a/en-US/600.manage/600.backup-and-recovery/400.data-backup/200.initiate-full-data-backup.md b/en-US/600.manage/600.backup-and-recovery/400.data-backup/200.initiate-full-data-backup.md
index 3c5015cb36..45df57000d 100644
--- a/en-US/600.manage/600.backup-and-recovery/400.data-backup/200.initiate-full-data-backup.md
+++ b/en-US/600.manage/600.backup-and-recovery/400.data-backup/200.initiate-full-data-backup.md
@@ -23,7 +23,7 @@ Before you initiate a data backup job, you can set a password for the backup set
1. Log on to the `sys` tenant of the cluster as the `root` user.
-2. (Optional) Run the following command to set a backup password:
+2. (Optional) Run the following command to set a backup password.
Note
diff --git a/en-US/600.manage/600.backup-and-recovery/400.data-backup/300.initiate-incremental-data-backup.md b/en-US/600.manage/600.backup-and-recovery/400.data-backup/300.initiate-incremental-data-backup.md
index c227e603ad..236852f226 100644
--- a/en-US/600.manage/600.backup-and-recovery/400.data-backup/300.initiate-incremental-data-backup.md
+++ b/en-US/600.manage/600.backup-and-recovery/400.data-backup/300.initiate-incremental-data-backup.md
@@ -9,7 +9,7 @@
An incremental data backup job backs up all macroblocks modified since the last full or incremental data backup. For example, when the system initiates an incremental data backup job for a tenant: if the last data backup was a full backup that generated a backup file named `full_backup_set`, the system backs up all macroblocks modified since `full_backup_set` was generated; if the last data backup was an incremental backup that generated a backup file named `inc_backup_set`, the system backs up all macroblocks modified since `inc_backup_set` was generated.
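
A typical sequence is sketched below: a full backup first, then an incremental backup that picks up only the macroblocks modified after it. This is a sketch using the statement forms from this topic; `ALTER SYSTEM BACKUP DATABASE` initiates the full backup.

```sql
-- Full backup: generates a complete backup set for the tenant.
ALTER SYSTEM BACKUP DATABASE;
-- Incremental backup: backs up only the macroblocks modified since the previous backup set.
ALTER SYSTEM BACKUP INCREMENTAL DATABASE;
```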
-## Applicable scenarios and considerations
+## Limitations and considerations
* To perform an incremental data backup, make sure that a full data backup exists. If an incremental backup is initiated without a full data backup, the system will automatically convert the incremental backup to a full backup.
@@ -58,9 +58,9 @@ You can initiate an incremental data backup job for all tenants or a specified t
3. Execute the following statement to initiate an incremental data backup job.
- * Initiate an incremental data backup job for all tenants in the cluster.
+ * Initiate an incremental data backup job for all tenants in the cluster
- Execute the following statement to initiate an incremental data backup job for all tenants in the cluster:
+ Execute the following statement to initiate an incremental data backup job for all tenants in the cluster.
```shell
obclient [(none)]> ALTER SYSTEM BACKUP INCREMENTAL DATABASE;
@@ -68,7 +68,7 @@ You can initiate an incremental data backup job for all tenants or a specified t
After the statement is executed, the system initiates an incremental data backup job for the `mysql_tenant` and `oracle_tenant` tenants in the cluster.
- * Initiate an incremental data backup job for a specified tenant in the cluster.
+ * Initiate an incremental data backup job for a specified tenant in the cluster
You can initiate an incremental data backup job for a specified tenant without affecting other tenants in the cluster.
diff --git a/en-US/600.manage/600.backup-and-recovery/400.data-backup/400.stop-data-backup.md b/en-US/600.manage/600.backup-and-recovery/400.data-backup/400.stop-data-backup.md
index 1d0db674b7..e812e1390d 100644
--- a/en-US/600.manage/600.backup-and-recovery/400.data-backup/400.stop-data-backup.md
+++ b/en-US/600.manage/600.backup-and-recovery/400.data-backup/400.stop-data-backup.md
@@ -15,7 +15,7 @@ Incomplete backup data is generated after you stop an ongoing back job. We recom
## Background information
-Assume that the current cluster contains three tenants: `sys`, `mysql_tenant`, and `oracle_tenant`, and that data backup is being performed for `mysql_tenant` and `oracle_tenant`.
+Assume that the current cluster contains three tenants: `sys`, `mysql_tenant`, and `oracle_tenant`, and that data backup is being performed for the `mysql_tenant` and `oracle_tenant` tenants.
## Stop data backup from the sys tenant
@@ -25,7 +25,7 @@ You can stop data backup for all tenants or a specified tenant in the cluster fr
2. Execute the following statement to stop a data backup job:
- * Stop data backup for all tenants in the cluster.
+ * Stop data backup for all tenants in the cluster
Execute the following statement to stop the data backup jobs of all tenants in the cluster:
@@ -33,9 +33,9 @@ You can stop data backup for all tenants or a specified tenant in the cluster fr
obclient [(none)]> ALTER SYSTEM CANCEL BACKUP;
```
- * Stop data backup for a specified tenant in the cluster.
+ * Stop data backup for a specified tenant in the cluster
- Execute the following statement to stop an ongoing data backup job of a specified tenant without affecting other tenants in the cluster:
+ Execute the following statement to stop an ongoing data backup job of a specified tenant without affecting other tenants in the cluster.
```shell
obclient [(none)]> ALTER SYSTEM CANCEL BACKUP TENANT = mysql_tenant;
diff --git a/en-US/600.manage/600.backup-and-recovery/600.restore-data/200.initiate-the-tenant-restore.md b/en-US/600.manage/600.backup-and-recovery/600.restore-data/200.initiate-the-tenant-restore.md
index cc5aa268c5..b5c6364522 100644
--- a/en-US/600.manage/600.backup-and-recovery/600.restore-data/200.initiate-the-tenant-restore.md
+++ b/en-US/600.manage/600.backup-and-recovery/600.restore-data/200.initiate-the-tenant-restore.md
@@ -43,7 +43,26 @@ The preparations before restore are complete. For more information, see [Prepara
SET DECRYPTION IDENTIFIED BY '******','******';
```
-3. Execute the following statement to start a restore job:
+3. Execute the following statement to set the encryption information.
+
+
+ Note
+ If encryption is not used, or the original key management service is still accessible during the restore, skip this step.
+
+
+ ```shell
+ obclient [(none)]> SET @kms_encrypt_info = '';
+ ```
+
+ Here, the value assigned to `@kms_encrypt_info` is the value of `EXTERNAL_KMS_INFO`, which is a tenant-level parameter.
+
+
+ Note
+ `external_kms_info` is used to store key management information. For a detailed description of this parameter, see external_kms_info.
+
+
+
+4. Execute the following statement to start a restore job:
@@ -159,7 +178,7 @@ The preparations before restore are complete. For more information, see [Prepara
ALTER SYSTEM RESTORE mysql FROM 's3://oceanbase-test-bucket/backup/data?host=s3..amazonaws.com&access_id=xxx&access_key=xxx&s3_region=xxx, s3://oceanbase-test-bucket/backup/archive?host=s3..amazonaws.com&access_id=xxx&access_key=xxx&s3_region=xxx' UNTIL TIME='2024-01-15 00:00:00' WITH 'pool_list=restore_pool';
```
-4. (Optional) After you create the meta tenant corresponding to the tenant to be restored, set the `ha_low_thread_score` parameter to specify the number of worker threads for the restore job to speed up the restore.
+5. (Optional) After you create the meta tenant corresponding to the tenant to be restored, set the `ha_low_thread_score` parameter to specify the number of worker threads for the restore job to speed up the restore.
1. Query the `oceanbase.DBA_OB_TENANTS` view to check whether the meta tenant is created.
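
   The parameter setting itself can be sketched as follows. This is a hedged example: the value `8` is illustrative, and the `TENANT` clause is assumed to follow the same convention as the other `ALTER SYSTEM` statements in this topic.

   ```sql
   -- Illustrative value; executed from the sys tenant for the tenant being restored.
   ALTER SYSTEM SET ha_low_thread_score = 8 TENANT = mysql;
   ```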
diff --git a/en-US/600.manage/600.backup-and-recovery/600.restore-data/400.view-the-restore-progress.md b/en-US/600.manage/600.backup-and-recovery/600.restore-data/400.view-the-restore-progress.md
index b56509dea2..85fb1c286f 100644
--- a/en-US/600.manage/600.backup-and-recovery/600.restore-data/400.view-the-restore-progress.md
+++ b/en-US/600.manage/600.backup-and-recovery/600.restore-data/400.view-the-restore-progress.md
@@ -37,7 +37,7 @@ During data restore, you can check the physical restore progress in views.
STATUS: WAIT_TENANT_RESTORE_FINISH
START_TIMESTAMP: 2022-06-1 10:58:33.689560
BACKUP_SET_LIST: file:///data/nfs/backup/data/backup_set_1_full
- BACKUP_PIECE_LIST: file:///data/nfs/backup/archive/2_1_2,file:///data/nfs/backup/archive/2_1_3
+ BACKUP_PIECE_LIST: file:///data/nfs/backup/archive/piece_d1001r1p1,file:///data/nfs/backup/archive/piece_d1001r2p2
TOTAL_BYTES: NULL
TOTAL_BYTES_DISPLAY: NULL
FINISH_BYTES: NULL
@@ -58,7 +58,7 @@ During data restore, you can check the physical restore progress in views.
STATUS: RESTORING
START_TIMESTAMP: 2022-06-1 10:58:33.689560
BACKUP_SET_LIST: file:///data/nfs/backup/data/backup_set_1_full
- BACKUP_PIECE_LIST: file:///data/nfs/backup/archive/2_1_2,file:///data/nfs/backup/archive/2_1_3
+ BACKUP_PIECE_LIST: file:///data/nfs/backup/archive/piece_d1001r1p1,file:///data/nfs/backup/archive/piece_d1001r2p2
TOTAL_BYTES: 313158553
TOTAL_BYTES_DISPLAY: 298.65MB
FINISH_BYTES: 0
diff --git a/en-US/600.manage/600.backup-and-recovery/600.restore-data/500.view-the-restore-history.md b/en-US/600.manage/600.backup-and-recovery/600.restore-data/500.view-the-restore-history.md
index 1e83c3c09d..2d027461af 100644
--- a/en-US/600.manage/600.backup-and-recovery/600.restore-data/500.view-the-restore-history.md
+++ b/en-US/600.manage/600.backup-and-recovery/600.restore-data/500.view-the-restore-history.md
@@ -37,7 +37,7 @@ After a physical restore job is completed, you can view the restore results in v
START_TIMESTAMP: 2022-06-1 15:40:58.366601
FINISH_TIMESTAMP: 2022-06-1 15:44:16.061358
STATUS: SUCCESS
- BACKUP_PIECE_LIST: file:///data/nfs/backup/archive/2_1_2,file:///data/nfs/backup/archive/2_1_3
+ BACKUP_PIECE_LIST: file:///data/nfs/backup/archive/piece_d1001r1p1,file:///data/nfs/backup/archive/piece_d1001r2p2
BACKUP_SET_LIST: file:///data/nfs/backup/data/backup_set_1_full
BACKUP_CLUSTER_VERSION: 17179869184
LS_COUNT: 3
@@ -65,7 +65,7 @@ After a physical restore job is completed, you can view the restore results in v
START_TIMESTAMP: 2022-06-1 15:40:58.366601
FINISH_TIMESTAMP: 2022-06-1 15:44:05.304540
STATUS: SUCCESS
- BACKUP_PIECE_LIST: file:///data/nfs/backup/archive/2_1_2,file:///data/nfs/backup/archive/2_1_3
+ BACKUP_PIECE_LIST: file:///data/nfs/backup/archive/piece_d1001r1p1,file:///data/nfs/backup/archive/piece_d1001r2p2
BACKUP_SET_LIST: file:///data/nfs/backup/data/backup_set_1_full
BACKUP_CLUSTER_VERSION: 17179869184
LS_COUNT: 3
diff --git a/en-US/600.manage/600.backup-and-recovery/600.restore-data/900.views-of-the-restore.md b/en-US/600.manage/600.backup-and-recovery/600.restore-data/900.views-of-the-restore.md
index 264bbbc90c..85bb8f65d3 100644
--- a/en-US/600.manage/600.backup-and-recovery/600.restore-data/900.views-of-the-restore.md
+++ b/en-US/600.manage/600.backup-and-recovery/600.restore-data/900.views-of-the-restore.md
@@ -26,10 +26,10 @@ OceanBase Database allows you to query the restore progress and result in views.
| RESTORE_OPTION | The restore option specified when restore is initiated. |
| RESTORE_SCN | The restore system change number (SCN). |
| RESTORE_SCN_DISPLAY | The restore SCN displayed as a timestamp. |
-| STATUS | The restore status. The possible states of a restore job in the `sys` tenant are inconsistent with those in the restored tenant.<br>For the `sys` tenant:<br>- `CREATE_TENANT`: The `sys` tenant is creating the target tenant to be restored.<br>- `WAIT_TENANT_RESTORE_FINISH`: The system is waiting for the restore of the target tenant to complete.<br>- `RESTORE_SUCCESS`: The tenant is restored.<br>- `RESTORE_FAIL`: The restore of the tenant failed.<br>For the target tenant:<br>- `RESTORING`: Data of the tenant is being restored.<br>- `POST_CHECK`: The system is checking the role of the tenant and restoring the tenant as a standby database.<br>- `UPGRADE`: The tenant is being upgraded. For restore across versions, an upgrade will be performed for the tenant.<br>- `RESTORE_SUCCESS`: The restore succeeded.<br>- `RESTORE_FAIL`: The restore failed. |
+| STATUS | The restore status. The possible states of a restore job in the `sys` tenant are inconsistent with those in the restored tenant.<br>For the `sys` tenant:<br>- `CREATE_TENANT`: The `sys` tenant is creating the target tenant to be restored.<br>- `WAIT_TENANT_RESTORE_FINISH`: The system is waiting for the restore of the target tenant to complete.<br>- `RESTORE_SUCCESS`: The tenant is restored.<br>- `RESTORE_FAIL`: The restore of the tenant failed.<br>For the target tenant:<br>- `RESTORING`: Data of the tenant is being restored.<br>- `RESTORE_SUCCESS`: The restore succeeded.<br>- `RESTORE_FAIL`: The restore failed. |
| START_TIMESTAMP | The start timestamp of the restore job. |
| BACKUP_SET_LIST | The backup set paths for restore, which are separated with commas (`,`). For example, `file:///data/nfs/backup/data/backup_set_1_full,file:///data/nfs/backup/data/backup_set_2_inc`. |
-| BACKUP_PIECE_LIST | The paths of log archive pieces for restore, which are separated with commas (`,`). For example, `file:///data/nfs/backup/archive/2_1_2,file:///data/nfs/backup/archive/2_1_3`. |
+| BACKUP_PIECE_LIST | The paths of log archive pieces for restore, which are separated with commas (`,`). For example, `file:///data/nfs/backup/archive/piece_d1001r1p1,file:///data/nfs/backup/archive/piece_d1001r2p2`. |
| TOTAL_BYTES | The total number of bytes to restore. |
| TOTAL_BYTES_DISPLAY | The total number of bytes to restore, in a storage capacity unit. |
| FINISH_BYTES | The number of bytes restored. |
@@ -56,7 +56,7 @@ OceanBase Database allows you to query the restore progress and result in views.
| START_TIMESTAMP | The start timestamp of the restore job. |
| FINISH_TIMESTAMP | The end timestamp of the restore job. |
| STATUS | The restore result. Valid values:<br>- `SUCCESS`: The restore succeeded.<br>- `FAILED`: The restore failed. |
-| BACKUP_PIECE_LIST | The paths of log archive pieces for restore, which are separated with commas (`,`). For example, `file:///data/nfs/backup/archive/2_1_2,file:///data/nfs/backup/archive/2_1_3`. |
+| BACKUP_PIECE_LIST | The paths of log archive pieces for restore, which are separated with commas (`,`). For example, `file:///data/nfs/backup/archive/piece_d1001r1p1,file:///data/nfs/backup/archive/piece_d1001r2p2`. |
| BACKUP_SET_LIST | The backup set paths for restore, which are separated with commas (`,`). For example, `file:///data/nfs/backup/data/backup_set_1_full,file:///data/nfs/backup/data/backup_set_2_inc`. |
| BACKUP_CLUSTER_VERSION | The version number of the backup source cluster. |
| LS_COUNT | The total number of log streams to restore. |
From 48d0e08d744a921908fe391bddacce1dc38a65bb Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Sat, 13 Apr 2024 17:11:26 +0800
Subject: [PATCH 08/63] v430-beta-700.monitor-and-900.daily-inspection
---
.../200.majority-node-failure.md | 2 +-
.../300.sql-monitor/200.sql-audit.md | 105 +++++++++---------
.../200.check-cluster-parameters.md | 2 +-
3 files changed, 54 insertions(+), 55 deletions(-)
diff --git a/en-US/600.manage/100.cluster-management/400.common-cluster-failure/200.majority-node-failure.md b/en-US/600.manage/100.cluster-management/400.common-cluster-failure/200.majority-node-failure.md
index 953d94230e..f504a8966c 100644
--- a/en-US/600.manage/100.cluster-management/400.common-cluster-failure/200.majority-node-failure.md
+++ b/en-US/600.manage/100.cluster-management/400.common-cluster-failure/200.majority-node-failure.md
@@ -9,4 +9,4 @@
Failures of the majority of nodes affect the majority of replicas in some log streams.
-When the majority of nodes fail, some log streams will lack a leader for a long period. In this case, it is the top priority to recover services. You must identify the cause of the failure as soon as possible and try to troubleshoot the issue. For example, you can check whether the node failure is caused by a network or hardware exception. In the event of a majority-node failure caused by clog deletion or hardware failure, the cluster cannot be recovered. Recovery can only be achieved through physical backup restoration or failover to a physical backup database.
+When the majority of nodes fail, some log streams will lack a leader for a long period. In this case, it is the top priority to recover services. You must identify the cause of the failure as soon as possible and try to troubleshoot the issue. For example, you can check whether the node failure is caused by a network or hardware exception. If the majority of nodes fail due to clog deletion or hardware failures, you cannot perform ordinary O&M operations to restore the cluster. Instead, you can restore the cluster only by using a physical backup or initiating a failover of the physical standby database.
\ No newline at end of file
diff --git a/en-US/600.manage/700.monitor/200.monitor-items-introduction/300.sql-monitor/200.sql-audit.md b/en-US/600.manage/700.monitor/200.monitor-items-introduction/300.sql-monitor/200.sql-audit.md
index 942e025761..7f8462a231 100644
--- a/en-US/600.manage/700.monitor/200.monitor-items-introduction/300.sql-monitor/200.sql-audit.md
+++ b/en-US/600.manage/700.monitor/200.monitor-items-introduction/300.sql-monitor/200.sql-audit.md
@@ -47,7 +47,7 @@ This view provides information in many fields. The main fields are described as
* `DECODE_TIME`: the time spent decoding the request after it left the queue.
* `GET_PLAN_TIME`: the time spent generating the execution plan, which reflects the health of the plan cache of the current tenant.
* `EXECUTE_TIME`: the execution time of the plan.
-* `EXECUTE_TIME`: the actual execution time, which is the sum of the CPU time and the value of the `TOTAL_WAIT_TIME_MICRO` field. The value of the `TOTAL_WAIT_TIME_MICRO` field is the sum of the values of the following fields: `APPLICATION_WAIT_TIME`, `CONCURRENCY_WAIT_TIME`, `USER_IO_WAIT_TIME`, and `SCHEDULE_TIME`. The difference between the values of the `EXECUTE_TIME` and `TOTAL_WAIT_TIME_MICRO` fields is the value of the `CPU_TIME` field.
+* `EXECUTE_TIME`: the actual execution time, that is, the total time spent on execution: the CPU computation time (`CPU_TIME`) plus the wait time (`TOTAL_WAIT_TIME_MICRO`). The value of the `TOTAL_WAIT_TIME_MICRO` field is the sum of the values of the `APPLICATION_WAIT_TIME`, `CONCURRENCY_WAIT_TIME`, `USER_IO_WAIT_TIME`, and `SCHEDULE_TIME` fields, as illustrated by the sketch after this list.
* `APPLICATION_WAIT_TIME`: the total amount of time spent waiting for events of the `application` class.
* `CONCURRENCY_WAIT_TIME`: the total amount of time spent waiting for events of the `concurrency` class.
* `USER_IO_WAIT_TIME`: the total amount of time spent waiting for the events of the `user_io` class.
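
As a rough illustration of this breakdown, the following sketch derives the total wait time and the CPU time of recent requests from the fields described above; `TOTAL_WAIT_TIME_MICRO` could be selected directly instead of summing the wait fields:

```sql
select request_id,
       EXECUTE_TIME,
       (APPLICATION_WAIT_TIME + CONCURRENCY_WAIT_TIME + USER_IO_WAIT_TIME + SCHEDULE_TIME) as total_wait_time,
       EXECUTE_TIME - (APPLICATION_WAIT_TIME + CONCURRENCY_WAIT_TIME + USER_IO_WAIT_TIME + SCHEDULE_TIME) as cpu_time
from v$OB_SQL_AUDIT limit 10;
```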
@@ -77,25 +77,25 @@ The `GV$OB_SQL_AUDIT` view allows you to query the SQL execution information at
* The following example shows how to query SQL statements whose execution time exceeds 100 ms:
- ```shell
- obclient> select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql
- from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10;
- +------------+----------------------------+--------------+------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------+
- | request_id | usec_to_time(request_time) | ELAPSED_TIME | QUEUE_TIME | EXECUTE_TIME | query_sql |
- +------------+----------------------------+--------------+------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------+
- | 1538599798 | 2023-03-08 11:00:46.089711 | 335152 | 462 | 329196 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
- | 1538601580 | 2023-03-08 11:00:47.411316 | 276913 | 1420 | 275345 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
- | 1538603976 | 2023-03-08 11:00:49.258464 | 154873 | 461 | 154236 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
- | 1538613501 | 2023-03-08 11:00:56.123111 | 188973 | 688 | 188144 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
- | 1538712684 | 2023-03-08 11:02:07.504777 | 288516 | 1137 | 287180 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
- | 1538743161 | 2023-03-08 11:02:29.135127 | 289585 | 26 | 289380 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
- | 1538749786 | 2023-03-08 11:02:33.890317 | 294356 | 45 | 294180 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
- | 1538792259 | 2023-03-08 11:03:04.626596 | 192843 | 128 | 192569 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
- | 1538799117 | 2023-03-08 11:03:09.567622 | 201594 | 55 | 201388 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
- | 1538804299 | 2023-03-08 11:03:13.274090 | 235720 | 241 | 235302 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
- +------------+----------------------------+--------------+------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------+
- 10 rows in set (0.28 sec)
- ```
+ ```shell
+ obclient> select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql
+ from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10;
+ +------------+----------------------------+--------------+------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------+
+ | request_id | usec_to_time(request_time) | ELAPSED_TIME | QUEUE_TIME | EXECUTE_TIME | query_sql |
+ +------------+----------------------------+--------------+------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------+
+ | 1538599798 | 2023-03-08 11:00:46.089711 | 335152 | 462 | 329196 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
+ | 1538601580 | 2023-03-08 11:00:47.411316 | 276913 | 1420 | 275345 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
+ | 1538603976 | 2023-03-08 11:00:49.258464 | 154873 | 461 | 154236 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
+ | 1538613501 | 2023-03-08 11:00:56.123111 | 188973 | 688 | 188144 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
+ | 1538712684 | 2023-03-08 11:02:07.504777 | 288516 | 1137 | 287180 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
+ | 1538743161 | 2023-03-08 11:02:29.135127 | 289585 | 26 | 289380 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
+ | 1538749786 | 2023-03-08 11:02:33.890317 | 294356 | 45 | 294180 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
+ | 1538792259 | 2023-03-08 11:03:04.626596 | 192843 | 128 | 192569 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
+ | 1538799117 | 2023-03-08 11:03:09.567622 | 201594 | 55 | 201388 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
+ | 1538804299 | 2023-03-08 11:03:13.274090 | 235720 | 241 | 235302 | select request_id,usec_to_time(request_time),ELAPSED_TIME,QUEUE_TIME,EXECUTE_TIME,query_sql from v$OB_SQL_AUDIT where ELAPSED_TIME > 100000 limit 10 |
+ +------------+----------------------------+--------------+------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------+
+ 10 rows in set (0.28 sec)
+ ```
* The following sample statement shows how to query the average queuing time of the last 1,000 SQL statements:
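  A minimal sketch, assuming the 1,000 rows with the largest `REQUEST_TIME` values are the most recent requests:

  ```sql
  select avg(t.QUEUE_TIME) avg_queue_time
  from (select QUEUE_TIME from v$OB_SQL_AUDIT order by REQUEST_TIME desc limit 1000) t;
  ```
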
@@ -112,41 +112,40 @@ The `GV$OB_SQL_AUDIT` view allows you to query the SQL execution information at
* The following example shows how to query the SQL statements that occupy the most resources of a tenant. The SQL statements are sorted in descending order by `the execution time × the number of executions`. If the CPU resources of the tenant are fully used, you can use this statement to check whether the issue is caused by SQL statements and, if so, identify the suspicious SQL statements.
- ```shell
- obclient>
- select SQL_ID,
- avg(ELAPSED_TIME),
- avg(QUEUE_TIME),
- avg(ROW_CACHE_HIT + BLOOM_FILTER_CACHE_HIT + BLOCK_CACHE_HIT + DISK_READS) avg_logical_read,
- avg(execute_time) avg_exec_time,
- count(*) cnt,
- avg(execute_time - TOTAL_WAIT_TIME_MICRO ) avg_cpu_time,
- avg( TOTAL_WAIT_TIME_MICRO ) avg_wait_time,
- WAIT_CLASS,
- avg(retry_cnt)
- from v$OB_SQL_AUDIT
- group by 1
- order by avg_exec_time * cnt desc limit 10;
- +----------------------------------+-------------------+-----------------+------------------+---------------+--------+--------------+---------------+------------+----------------+
- | SQL_ID | avg(ELAPSED_TIME) | avg(QUEUE_TIME) | avg_logical_read | avg_exec_time | cnt | avg_cpu_time | avg_wait_time | WAIT_CLASS | avg(retry_cnt) |
- +----------------------------------+-------------------+-----------------+------------------+---------------+--------+--------------+---------------+------------+----------------+
- | 2705182A6EAB699CEC8E59DA80710B64 | 54976.9269 | 43.8605 | 17664.2727 | 54821.5828 | 11759 | 54821.5828 | 0.0000 | OTHER | 0.0000 |
- | 32AB97A0126F566064F84DDDF4936F82 | 1520.9832 | 380.7903 | 63.7847 | 789.6781 | 63632 | 789.6781 | 0.0000 | OTHER | 0.0000 |
- | A5F514E873BE9D1F9A339D0DA7481D69 | 44032.5553 | 44.5149 | 8943.7834 | 43878.1405 | 1039 | 43878.1405 | 0.0000 | OTHER | 0.0000 |
- | 31FD78420DB07C11C8E3154F1658D237 | 7769857.0000 | 35.7500 | 399020.7500 | 7769682.7500 | 4 | 7769682.7500 | 0.0000 | NETWORK | 1.0000 |
- | C48AEE941D985D8DEB66892228D5E845 | 8528.6227 | 0.0000 | 0.0000 | 8450.4047 | 1601 | 8450.4047 | 0.0000 | OTHER | 0.0000 |
- | 101B7B79DFA9AE801BEE4F1A234AD294 | 158.2296 | 41.7211 | 0.0000 | 46.0345 | 286758 | 46.0345 | 0.0000 | OTHER | 0.0000 |
- | 1D0BA376E273B9D622641124D8C59264 | 1774.5924 | 0.0049 | 0.0000 | 1737.4885 | 5081 | 1737.4885 | 0.0000 | OTHER | 0.0000 |
- | 64CF75576816DB5614F3D5B1F35B1472 | 1801.8767 | 747.0343 | 0.0000 | 827.1674 | 10340 | 827.1674 | 0.0000 | OTHER | 0.0000 |
- | 23D1C653347BA469396896AD9B20DCA1 | 5564.9419 | 0.0000 | 0.0000 | 5478.2228 | 1257 | 5478.2228 | 0.0000 | OTHER | 0.0000 |
- | FA4F493FA5CE2DCC64F51CF3754F96C6 | 2478.3956 | 378.7557 | 3.1040 | 1731.1802 | 3357 | 1731.1802 | 0.0000 | OTHER | 0.0000 |
- +----------------------------------+-------------------+-----------------+------------------+---------------+--------+--------------+---------------+------------+----------------+
- 10 rows in set (1.34 sec)
- ```
+ ```shell
+ obclient>
+ select SQL_ID,
+ avg(ELAPSED_TIME),
+ avg(QUEUE_TIME),
+ avg(ROW_CACHE_HIT + BLOOM_FILTER_CACHE_HIT + BLOCK_CACHE_HIT + DISK_READS) avg_logical_read,
+ avg(execute_time) avg_exec_time,
+ count(*) cnt,
+ avg(execute_time - TOTAL_WAIT_TIME_MICRO ) avg_cpu_time,
+ avg( TOTAL_WAIT_TIME_MICRO ) avg_wait_time,
+ WAIT_CLASS,
+ avg(retry_cnt)
+ from v$OB_SQL_AUDIT
+ group by 1
+ order by avg_exec_time * cnt desc limit 10;
+ +----------------------------------+-------------------+-----------------+------------------+---------------+--------+--------------+---------------+------------+----------------+
+ | SQL_ID | avg(ELAPSED_TIME) | avg(QUEUE_TIME) | avg_logical_read | avg_exec_time | cnt | avg_cpu_time | avg_wait_time | WAIT_CLASS | avg(retry_cnt) |
+ +----------------------------------+-------------------+-----------------+------------------+---------------+--------+--------------+---------------+------------+----------------+
+ | 2705182A6EAB699CEC8E59DA80710B64 | 54976.9269 | 43.8605 | 17664.2727 | 54821.5828 | 11759 | 54821.5828 | 0.0000 | OTHER | 0.0000 |
+ | 32AB97A0126F566064F84DDDF4936F82 | 1520.9832 | 380.7903 | 63.7847 | 789.6781 | 63632 | 789.6781 | 0.0000 | OTHER | 0.0000 |
+ | A5F514E873BE9D1F9A339D0DA7481D69 | 44032.5553 | 44.5149 | 8943.7834 | 43878.1405 | 1039 | 43878.1405 | 0.0000 | OTHER | 0.0000 |
+ | 31FD78420DB07C11C8E3154F1658D237 | 7769857.0000 | 35.7500 | 399020.7500 | 7769682.7500 | 4 | 7769682.7500 | 0.0000 | NETWORK | 1.0000 |
+ | C48AEE941D985D8DEB66892228D5E845 | 8528.6227 | 0.0000 | 0.0000 | 8450.4047 | 1601 | 8450.4047 | 0.0000 | OTHER | 0.0000 |
+ | 101B7B79DFA9AE801BEE4F1A234AD294 | 158.2296 | 41.7211 | 0.0000 | 46.0345 | 286758 | 46.0345 | 0.0000 | OTHER | 0.0000 |
+ | 1D0BA376E273B9D622641124D8C59264 | 1774.5924 | 0.0049 | 0.0000 | 1737.4885 | 5081 | 1737.4885 | 0.0000 | OTHER | 0.0000 |
+ | 64CF75576816DB5614F3D5B1F35B1472 | 1801.8767 | 747.0343 | 0.0000 | 827.1674 | 10340 | 827.1674 | 0.0000 | OTHER | 0.0000 |
+ | 23D1C653347BA469396896AD9B20DCA1 | 5564.9419 | 0.0000 | 0.0000 | 5478.2228 | 1257 | 5478.2228 | 0.0000 | OTHER | 0.0000 |
+ | FA4F493FA5CE2DCC64F51CF3754F96C6 | 2478.3956 | 378.7557 | 3.1040 | 1731.1802 | 3357 | 1731.1802 | 0.0000 | OTHER | 0.0000 |
+ +----------------------------------+-------------------+-----------------+------------------+---------------+--------+--------------+---------------+------------+----------------+
+ 10 rows in set (1.34 sec)
+ ```
  Note
  - When an SQL response time (RT) jitter occurs in a tenant, the CPU resource of the tenant is fully used and the RT of all SQL statements soars. In this case, you must first determine whether the issue is caused by the SQL statements or by other problems.
  - The SQL statement described in the preceding example is quite useful. It aggregates executed SQL statements based on `SQL_ID` and sorts them in descending order by the amount of occupied CPU resources, which is the product of `avg_exec_time` multiplied by `cnt`. This way, you can check the top SQL statements for exceptions.
-
-
+
\ No newline at end of file
diff --git a/en-US/600.manage/900.daily-inspection/200.check-cluster-parameters.md b/en-US/600.manage/900.daily-inspection/200.check-cluster-parameters.md
index c952fbd852..fac48163d0 100644
--- a/en-US/600.manage/900.daily-inspection/200.check-cluster-parameters.md
+++ b/en-US/600.manage/900.daily-inspection/200.check-cluster-parameters.md
@@ -25,7 +25,7 @@ This topic describes how to view and modify cluster parameters by using SQL stat
| [syslog_io_bandwidth_limit](../../700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/20400.syslog_io_bandwidth_limit.md) | The maximum I/O bandwidth available for system logs. If this value is reached, the remaining system logs are discarded. | 5 MB |
| [memstore_limit_percentage](../../700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/13900.memstore_limit_percentage.md) | The percentage of the memory occupied by the MemStore to the total available memory of a tenant. Value range: \[1, 99\]. | 50 |
| [system_memory](../../700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/20700.system_memory.md) | The memory size reserved by the system for the `sys500` tenant. Value range: \[0 MB, +∞). | 50 GB |
-| [resource_hard_limit](../../700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/16800.resource_hard_limit.md) | Specifies how resource units are allocated. During the allocation of resources such as CPU cores and memory, the total resource volume is the actual volume multiplied by the specified value in percentage. The proportion of the final allocated server resource volume cannot exceed the value of `resource_hard_limit`. Value range: \[1, 10000\]. | 100 |
+| [resource_hard_limit](../../700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/16800.resource_hard_limit.md) | Specifies how resource units are allocated. During the allocation of resources such as CPU cores and memory, the total resource volume is the actual volume multiplied by the specified value in percentage. The proportion of the final allocated server resource volume cannot exceed the value of `resource_hard_limit`. Value range: \[100, 10000\]. | 100 |
| [syslog_level](../../700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/20500.syslog_level.md) | The level of system logs. Valid values: DEBUG, TRACE, INFO, WARN, USER-ERR, and ERROR. | INFO |
| [data_disk_usage_limit_percentage](../../700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/5000.data_disk_usage_limit_percentage.md) | The maximum usage of the data disk. When the usage exceeds this threshold, data can no longer be migrated into the data disk. Value range: \[50, 100\]. | 95 |
| [enable_perf_event](../../700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/7700.enable_perf_event.md) | Specifies whether to enable the information collection feature for performance events. Valid values: True and False. | True |
From 2c6725c4fc74f4a339ca74f7fe9cc981f67f1f4e Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Sat, 13 Apr 2024 17:29:10 +0800
Subject: [PATCH 09/63] v430-beta-300.develop-update
---
...ata-generation-of-mysql-mode-in-develop.md | 356 ++++++++++++++++++
...ta-generation-of-oracle-mode-in-develop.md | 315 ++++++++++++++++
2 files changed, 671 insertions(+)
create mode 100644 en-US/300.develop/100.application-development-of-mysql-mode/400.write-data-of-mysql-mode/500.batch-data-generation-of-mysql-mode-in-develop.md
create mode 100644 en-US/300.develop/200.application-development-of-oracle-mode/400.write-data-of-oracle-mode/500.batch-data-generation-of-oracle-mode-in-develop.md
diff --git a/en-US/300.develop/100.application-development-of-mysql-mode/400.write-data-of-mysql-mode/500.batch-data-generation-of-mysql-mode-in-develop.md b/en-US/300.develop/100.application-development-of-mysql-mode/400.write-data-of-mysql-mode/500.batch-data-generation-of-mysql-mode-in-develop.md
new file mode 100644
index 0000000000..d05a2ebdef
--- /dev/null
+++ b/en-US/300.develop/100.application-development-of-mysql-mode/400.write-data-of-mysql-mode/500.batch-data-generation-of-mysql-mode-in-develop.md
@@ -0,0 +1,356 @@
+|description||
+|---|---|
+|keywords||
+|dir-name||
+|dir-name-en||
+|tenant-type|MySQL Mode|
+
+# Generate test data in batches
+
+This topic describes how to generate test data in batches by using a shell script, a stored procedure, and OceanBase Developer Center (ODC).
+
+## Prerequisites
+
+* You have deployed an OceanBase cluster and created a MySQL tenant. For more information about how to deploy an OceanBase cluster, see [Overview](../../../400.deploy/100.deploy-overview.md).
+* You have the `CREATE`, `INSERT`, and `SELECT` privileges. For more information about how to view the privileges of the current user, see [View user privileges](../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/200.permission-of-mysql-mode/400.view-user-permissions-of-mysql-mode.md). If you do not have the required privileges, request the administrator to grant the privileges. For more information, see [Modify user privileges](../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/200.permission-of-mysql-mode/500.modify-user-permissions-of-mysql-mode.md).
+
+## Use a shell script to generate test data in batches
+
+You can compile a shell script to generate an SQL script that inserts test data in batches. This way, you do not need to manually write complex SQL statements. This method can generate a large amount of test data as required, improving efficiency and reducing manual work.
+
+### Procedure
+
+1. Create a test database and a test table.
+2. Create a shell script.
+3. Run the SQL script.
+4. View the data.
+
+#### Step 1: Create a test database and a test table
+
+Use a database management tool, such as a command-line tool or a GUI-based tool, to create a database for storing test data and create a test table in the database.
+
+1. Connect to the prepared MySQL tenant.
+
+ Here is an example:
+
+ ```shell
+ obclient -hxxx.xxx.xxx.xxx -P2881 -uroot@mysql001 -p****** -A
+ ```
+
+2. Create a test database.
+
+ Here is an example:
+
+ Create a test database named `test_sql_file_db`.
+
+ ```sql
+ CREATE DATABASE test_sql_file_db;
+ ```
+
+ For more information about how to create a database, see [Create a database](../300.database-object-planning-of-mysql-mode/100.create-database-of-mysql-mode-in-develop.md).
+
+3. Create a test table.
+
+ Here is an example:
+
+ Create a test table named `test_sql_file_db.test_sql_file_tbl1`.
+
+ ```sql
+ CREATE TABLE test_sql_file_db.test_sql_file_tbl1 (id INT, name VARCHAR(50), email VARCHAR(50));
+ ```
+
+ For more information about how to create a table, see [Create a table](../300.database-object-planning-of-mysql-mode/300.create-table-of-mysql-mode-in-develop.md).
+
+#### Step 2: Create a shell script
+
+Use a text editor to create a shell script file with the `.sh` extension. In the shell script, use the output redirection symbol (`>` or `>>`) to write the generated test data into an SQL script file. During the loop or traversal, the generated data is written as SQL statements (`INSERT`) to the SQL script file.
+
+1. Open the terminal.
+2. Create a shell script file.
+
+ Use the `vi` or `vim` editor to create a new shell script file.
+
+ Here is an example:
+
+ Write a shell script named `generate_sql.sh`.
+
+ ```shell
+ vi generate_sql.sh
+ ```
+
+3. Enter the insert mode.
+
+ Press the i or Insert key to enter the insert mode of `vi` or `vim` to edit the content of the file.
+
+4. Write the shell script logic.
+
+ In insert mode, write the logic and commands for the shell script. These commands may be shell commands, conditional statements, loops, and functions.
+
+ Here is an example:
+
+   The content of the `generate_sql.sh` script is as follows:
+
+ ```shell
+ #!/bin/bash
+
+ # Name the SQL file
+ SQL_FILE="insert_test_sql_file_tbl1.sql"
+
+   # Create or truncate the SQL file so that reruns do not append duplicate statements
+   : > "$SQL_FILE"
+
+ # Define the SQL statement
+ INSERT_SQL="INSERT INTO test_sql_file_tbl1 (id, name, email) VALUES "
+
+ # Generate 100,000 user records in loops
+ for ((i=1; i<=100000; i++))
+ do
+ user_id=$i
+ user_name="user_$i"
+ user_email="user_$i@example.com"
+ values="($user_id, '$user_name', '$user_email')"
+ if (($i == 100000))
+ then
+ INSERT_SQL="$INSERT_SQL$values;"
+ else
+ INSERT_SQL="$INSERT_SQL$values, "
+ fi
+ done
+
+ # Write SQL statements into the SQL file
+   echo "$INSERT_SQL" >> "$SQL_FILE"
+ ```
+
+
+   Note
+   - The script generates an SQL file named `insert_test_sql_file_tbl1.sql` and inserts 100,000 user records into it. You can modify the SQL statement and the number of user records generated as needed.
+   - When you insert a large amount of data, pay attention to the resource usage of the relevant server in advance to avoid data insertion failures or performance degradation caused by insufficient resources.
+
+
+5. Save the file.
+
+ Press the Esc key to exit the insert mode. Enter the `:wq` command to save the file and exit the `vi` or `vim` editor.
+
+6. Run the shell script file.
+
+ Run the created shell script in the terminal to generate an SQL script.
+
+ Here is an example:
+
+ Run the created shell script. The following command generates an SQL script file named `insert_test_sql_file_tbl1.sql` in the current directory. The file contains 100,000 `INSERT` statements.
+
+ ```shell
+ sudo bash generate_sql.sh
+ ```
+
+#### Step 3: Run the SQL script
+
+You can run the following command in the CLI to import data from the SQL script file.
+
+
+ Note
+ For more information about running SQL scripts, see Import data from SQL files to OceanBase Database.
+
+
+```shell
+obclient -h$host -u$user_name -P$port -p$password -D$database_name < $sql_file
+```
+
+**Parameters**
+
+* `$host`: the IP address for connecting to OceanBase Database, which is the IP address of an OceanBase Database Proxy (ODP) for connection through ODP, or the IP address of an OBServer node for direct connection.
+* `$port`: the port for connecting to OceanBase Database. For connection through ODP, the default value is `2883`, which can be customized when ODP is deployed. For direct connection, the default value is `2881`, which can be customized when OceanBase Database is deployed.
+* `$database_name`: the name of the database to be accessed.
+
+
+  Notice
+  The user for connecting to a tenant must have the `CREATE`, `INSERT`, and `SELECT` privileges on the database. For more information about user privileges, see Privilege types in MySQL mode.
+
+
+* `$user_name`: the tenant account. For connection through ODP, the tenant account can be in the `username@tenant name#cluster name` or `cluster name:tenant name:username` format. For direct connection, the tenant account is in the `username@tenant name` format.
+* `$password`: the password of the account.
+* `$sql_file`: the name of the SQL script file.
+
+
+ Note
+ To run the SQL script, use the absolute path of the SQL script file.
+
+
+Here is an example:
+
+Run the following command to connect to the specified OBServer node and import all `INSERT` statements in the SQL script file to the `test_sql_file_db` database, inserting 100,000 data records into the `test_sql_file_tbl1` table.
+
+```shell
+obclient -hxxx.xxx.xxx.xxx -uroot@mysql001 -P2881 -p****** -Dtest_sql_file_db < /home/admin/test_data/insert_test_sql_file_tbl1.sql
+```
+
+#### Step 4: View the data
+
+Execute the following SQL statement to view the number of rows in the `test_sql_file_db.test_sql_file_tbl1` table.
+
+```sql
+obclient [(none)]> SELECT count(*) FROM test_sql_file_db.test_sql_file_tbl1;
+```
+
+The return result is as follows:
+
+```shell
++----------+
+| count(*) |
++----------+
+| 100000 |
++----------+
+1 row in set
+```
+
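+Optionally, you can spot-check a few of the generated rows, for example:
+
+```sql
+SELECT * FROM test_sql_file_db.test_sql_file_tbl1 LIMIT 5;
+```
+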
+## Use a stored procedure to generate test data in batches
+
+You can use a stored procedure to automatically generate test data in batches in an effective manner.
+
+### Procedure
+
+1. Create a test database and a test table.
+2. Create a stored procedure.
+3. Call the stored procedure.
+4. View the data.
+
+#### Step 1: Create a test database and a test table
+
+Use a database management tool, such as a command-line tool or a GUI-based tool, to create a database for storing test data and create a test table in the database.
+
+1. Connect to the prepared MySQL tenant.
+
+ Here is an example:
+
+ ```shell
+ obclient -hxxx.xxx.xxx.xxx -P2881 -uroot@mysql001 -p****** -A
+ ```
+
+2. Create a test database.
+
+ Here is an example:
+
+ Create a test database named `test_db`.
+
+ ```shell
+ obclient [(none)]> CREATE DATABASE test_db;
+ ```
+
+3. Execute the following SQL statement to switch to the `test_db` database.
+
+ ```shell
+ obclient [(none)]> use test_db;
+ ```
+
+ The return result is as follows:
+
+ ```shell
+ Database changed
+ obclient [test_db]>
+ ```
+
+4. Create a test table.
+
+ Here is an example:
+
+ Create a table named `test_pro_tbl1` with four fields:
+
+ * `id`: an auto-increment integer field defined as the primary key.
+ * `create_time`: a datetime field that indicates the time when data in a row was created. `DEFAULT CURRENT_TIMESTAMP` is used to set the default value of this field to the current time.
+ * `name`: a character field that supports a maximum of 50 characters.
+ * `enrollment_date`: a date field used to store date data.
+
+ ```shell
+ obclient [test_db]> CREATE TABLE test_pro_tbl1 (
+ id INT NOT NULL AUTO_INCREMENT,
+ create_time DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
+ name VARCHAR(50),
+ enrollment_date DATE,
+ PRIMARY KEY(id));
+ ```
+
+#### Step 2: Create a stored procedure
+
+
+1. Specify a custom delimiter.
+
+ Here is an example:
+
+ Use `DELIMITER` to specify the custom delimiter `//`.
+
+ ```sql
+ DELIMITER //
+ ```
+
+2. Create a stored procedure.
+
+ Here is an example:
+
+   Execute the following SQL statement to create a stored procedure named `pro_generate_data`. The input parameter is `n`, which specifies the number of data records to insert. Use loop statements and `INSERT` statements to generate and insert data. Here, `test_pro_tbl1` is the name of the table into which the data is inserted, `name` and `enrollment_date` are the fields into which the data is inserted, and `i` is the loop counter. The `CONCAT` function is used to generate a name, and the `DATE_ADD` function is used to generate a date.
+
+ ```sql
+ CREATE PROCEDURE pro_generate_data(IN n INT)
+ BEGIN
+ DECLARE i INT DEFAULT 1;
+ WHILE i <= n DO
+ INSERT INTO test_pro_tbl1 (name, enrollment_date) VALUES (CONCAT('Name', i), DATE_ADD('2022-01-01', INTERVAL i DAY));
+ SET i = i + 1;
+ END WHILE;
+ END;
+ //
+ ```
+
+ For more information about how to create a stored procedure, see [Stored procedures](../../../700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/200.storage-object-mysql/300.pl-stored-procedure-mysql.md).
+
+3. Restore the default delimiter, which is the semicolon (`;`).
+
+ ```sql
+ DELIMITER ;
+ ```
+
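+Optionally, you can verify that the stored procedure was created. A minimal check, assuming OceanBase Database's MySQL mode supports the MySQL-compatible `SHOW CREATE PROCEDURE` statement:
+
+```sql
+SHOW CREATE PROCEDURE pro_generate_data;
+```
+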
+#### Step 3: Call the stored procedure
+
+Use the `CALL` statement to call the stored procedure to execute the logic that generates the test data. You can pass a parameter to the stored procedure to specify the amount of data to be generated.
+
+Here is an example:
+
+Execute the following SQL statement to call the stored procedure `pro_generate_data` and pass the parameter value `100000` to insert 100,000 data records.
+
+```sql
+obclient [test_db]> CALL pro_generate_data(100000);
+```
+
+
+ Note
+  You can increase or decrease the value of the input parameter to control the amount of test data. When you adjust the parameter value, pay attention to the database performance and storage space to avoid database crashes or insufficient storage space.
+
+
+#### Step 4: View the data
+
+Execute the following SQL statement to view the number of rows in the `test_pro_tbl1` table.
+
+```shell
+obclient [test_db]> SELECT count(*) FROM test_pro_tbl1;
+```
+
+The return result is as follows:
+
+```shell
++----------+
+| count(*) |
++----------+
+| 100000 |
++----------+
+1 row in set
+```
+
+## Use ODC to generate test data in batches
+
+ODC is an enterprise-level database development platform tailored for OceanBase Database. For more information about ODC, see [What is ODC?](https://www.oceanbase.com/docs/enterprise-odc-doc-cn-10000000002082168)
+
+ODC provides the data mocking feature that can generate data based on field types in a table. This can meet your requirement for generating a large amount of data during database performance tests or feature verification. For more information about the data mocking feature provided by ODC, see [Data mocking](https://www.oceanbase.com/docs/enterprise-odc-doc-cn-10000000002082193).
+
+## References
+
+* For more information about how to connect to a database, see [Overview](../100.connect-to-oceanbase-database-of-mysql-mode/100.connection-methods-overview-of-mysql-mode.md).
+* For more information about how to drop a table, see [Drop a table](../../../700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/800.delete-a-table-of-mysql-mode.md).
+* For more information about how to delete data, see [Delete data](300.delete-data-of-mysql-mode-in-develop.md).
diff --git a/en-US/300.develop/200.application-development-of-oracle-mode/400.write-data-of-oracle-mode/500.batch-data-generation-of-oracle-mode-in-develop.md b/en-US/300.develop/200.application-development-of-oracle-mode/400.write-data-of-oracle-mode/500.batch-data-generation-of-oracle-mode-in-develop.md
new file mode 100644
index 0000000000..11680b1996
--- /dev/null
+++ b/en-US/300.develop/200.application-development-of-oracle-mode/400.write-data-of-oracle-mode/500.batch-data-generation-of-oracle-mode-in-develop.md
@@ -0,0 +1,315 @@
+|description||
+|---|---|
+|keywords||
+|dir-name||
+|dir-name-en||
+|tenant-type|Oracle Mode|
+
+# Generate test data in batches
+
+This topic describes how to generate test data in batches by using a shell script, a stored procedure, and OceanBase Developer Center (ODC).
+
+## Prerequisites
+
+* You have deployed an OceanBase cluster and created an Oracle tenant. For more information about how to deploy an OceanBase cluster, see [Overview](../../../400.deploy/100.deploy-overview.md).
+* You have the `CREATE TABLE`, `INSERT`, and `SELECT` privileges. For more information about how to view the privileges of the current user, see [View user privileges](../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/300.permission-of-oracle-mode/600.view-user-permissions-of-oracle-mode.md). If you do not have the required privileges, request the administrator to grant the privileges. For more information, see [Modify user privileges](../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/300.permission-of-oracle-mode/700.modify-user-permissions-of-oracle-mode.md).
+
+## Use a shell script to generate test data in batches
+
+You can compile a shell script to generate an SQL script that inserts test data in batches. This way, you do not need to manually write complex SQL statements. This method can generate a large amount of test data as required, improving efficiency and reducing manual work.
+
+### Procedure
+
+1. Create a test table.
+2. Create a shell script.
+3. Run the SQL script.
+4. View the data.
+
+#### Step 1: Create a test table
+
+Use a database management tool, such as a command-line tool or a GUI-based tool, to create a test table in the database.
+
+1. Connect to the prepared Oracle tenant.
+
+ Here is an example:
+
+ ```shell
+ obclient -hxxx.xxx.xxx.xxx -P2881 -usys@oracle001 -p****** -A
+ ```
+
+2. Create a test table.
+
+ Here is an example:
+
+ Create a test table named `test_sql_file_tbl1`.
+
+ ```sql
+ obclient [SYS]> CREATE TABLE test_sql_file_tbl1 (id NUMBER, name VARCHAR2(20), email VARCHAR2(50));
+ ```
+
+ For more information about how to create a table, see [Create a table](../300.database-object-planning-of-oracle-mode/200.create-table-of-oracle-mode-in-develop.md).
+
+#### Step 2: Create a shell script
+
+Use a text editor to create a shell script file with the `.sh` extension. In the shell script, use the output redirection symbol (`>` or `>>`) to write the generated test data into an SQL script file. During the loop or traversal, the generated data is written as SQL statements (`INSERT`) to the SQL script file.
+
+1. Open the terminal.
+2. Create a shell script file.
+
+ Use the `vi` or `vim` editor to create a new shell script file.
+
+ Here is an example:
+
+ Write a shell script named `generate_sql.sh`.
+
+ ```shell
+ vi generate_sql.sh
+ ```
+
+3. Enter the insert mode.
+
+ Press the i or Insert key to enter the insert mode of `vi` or `vim` to edit the content of the file.
+
+4. Write the shell script logic.
+
+ In insert mode, write the logic and commands for the shell script. These commands may be shell commands, conditional statements, loops, and functions.
+
+ Here is an example:
+
+   The content of the `generate_sql.sh` script is as follows:
+
+ ```shell
+ #!/bin/bash
+
+ # Name the SQL file
+ SQL_FILE="insert_test_sql_file_tbl1.sql"
+
+   # Create or truncate the SQL file so that reruns do not append duplicate statements
+   : > "$SQL_FILE"
+
+ # Define the SQL statement
+ INSERT_SQL="INSERT INTO test_sql_file_tbl1 (id, name, email) VALUES "
+
+ # Generate 100,000 user records in loops
+ for ((i=1; i<=100000; i++))
+ do
+ user_id=$i
+ user_name="user_$i"
+ user_email="user_$i@example.com"
+ values="($user_id, '$user_name', '$user_email')"
+ if (($i == 100000))
+ then
+ INSERT_SQL="$INSERT_SQL$values;"
+ else
+ INSERT_SQL="$INSERT_SQL$values, "
+ fi
+ done
+
+ # Write SQL statements into the SQL file
+   echo "$INSERT_SQL" >> "$SQL_FILE"
+ ```
+
+
+   Note
+   - The script generates an SQL file named `insert_test_sql_file_tbl1.sql` and inserts 100,000 user records into it. You can modify the SQL statement and the number of user records generated as needed.
+   - When you insert a large amount of data, pay attention to the resource usage of the relevant server in advance to avoid data insertion failures or performance degradation caused by insufficient resources.
+
+
+5. Save the file.
+
+ Press the Esc key to exit the insert mode. Enter the `:wq` command to save the file and exit the `vi` or `vim` editor.
+
+6. Run the shell script file.
+
+ Run the created shell script in the terminal to generate an SQL script.
+
+ Here is an example:
+
+ Run the created shell script. The following command generates an SQL script file named `insert_test_sql_file_tbl1.sql` in the current directory. The file contains 100,000 `INSERT` statements.
+
+ ```shell
+ sudo bash generate_sql.sh
+ ```
+
+#### Step 3: Run the SQL script
+
+Perform the following steps to import the data in the SQL script file to the database.
+
+
+ Note
+ For more information about running SQL scripts, see Import data from SQL files to OceanBase Database.
+
+
+1. Open the terminal or CLI and connect to the Oracle tenant of OceanBase Database.
+
+ Here is an example:
+
+ ```shell
+ obclient -hxxx.xxx.xxx.xxx -P2881 -usys@oracle001 -p****** -A
+ ```
+
+2. Run the `source` command to run the SQL script.
+
+ Here is an example:
+
+ ```shell
+ obclient [SYS]> source /home/admin/test_data/insert_test_sql_file_tbl1.sql
+ ```
+
+
+ Note
+ To run the SQL script, use the absolute path of the SQL script file.
+
+
+ The return result is as follows:
+
+ ```shell
+ Query OK, 100000 rows affected
+ Records: 100000 Duplicates: 0 Warnings: 0
+ ```
+
+#### Step 4: View the data
+
+Execute the following SQL statement to view the number of rows in the `test_sql_file_tbl1` table.
+
+```sql
+obclient [SYS]> SELECT count(*) FROM test_sql_file_tbl1;
+```
+
+The return result is as follows:
+
+```shell
++----------+
+| COUNT(*) |
++----------+
+| 100000 |
++----------+
+1 row in set
+```
+
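+Optionally, you can spot-check a few of the generated rows, for example by using the Oracle-compatible `ROWNUM` pseudocolumn:
+
+```sql
+SELECT * FROM test_sql_file_tbl1 WHERE ROWNUM <= 5;
+```
+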
+## Use a stored procedure to generate test data in batches
+
+You can use a stored procedure to automatically generate test data in batches in an effective manner.
+
+### Procedure
+
+1. Create a test table.
+2. Create a stored procedure.
+3. Call the stored procedure.
+4. View the data.
+
+#### Step 1: Create a test table
+
+Use a database management tool, such as a command-line tool or a GUI-based tool, to create a test table in the database.
+
+
+1. Connect to the prepared Oracle tenant.
+
+ Here is an example:
+
+ ```shell
+ obclient -hxxx.xxx.xxx.xxx -P2881 -usys@oracle001 -p****** -A
+ ```
+
+2. Create a test table.
+
+ Here is an example:
+
+ Execute the following SQL statement to create a table named `test_pro_tbl1` with three fields:
+
+ * `id`: an integer field.
+ * `name`: a character field that supports a maximum of 50 characters.
+ * `enrollment_date`: a date field used to store date data.
+
+ ```shell
+ obclient [SYS]> CREATE TABLE test_pro_tbl1 (
+ id NUMBER,
+ name VARCHAR2(50),
+ enrollment_date DATE);
+ ```
+
+#### Step 2: Create a stored procedure
+
+
+1. Specify a custom delimiter.
+
+ Here is an example:
+
+ Use `DELIMITER` to specify the custom delimiter `//`.
+
+ ```sql
+ obclient [SYS]> DELIMITER //
+ ```
+
+2. Create a stored procedure.
+
+ Here is an example:
+
+ Execute the following SQL statement to create a stored procedure named `pro_generate_data`. The input parameter is `n`, which specifies the number of data records to insert. Use loop statements and `INSERT` statements to generate and insert data.
+
+ ```sql
+ obclient [SYS]> CREATE OR REPLACE PROCEDURE pro_generate_data(n IN INT) IS
+ i INT := 1;
+ BEGIN
+ WHILE i <= n LOOP
+ INSERT INTO test_pro_tbl1 (id, name, enrollment_date) VALUES (i, 'Name' || i, TO_DATE('2022-01-01', 'YYYY-MM-DD') + i);
+ i := i + 1;
+ END LOOP;
+ END;
+ //
+ ```
+
+ For more information about how to create a stored procedure, see [Stored procedures](../../../700.reference/100.oceanbase-database-concepts/700.user-interface-and-query-language/200.PL/100.pl-concepts/100.pl-of-oracle-mode/200.stored-procedure-of-oracle-mode.md).
+
+3. Restore the default delimiter, which is the semicolon (`;`).
+
+ ```sql
+ obclient [SYS]> DELIMITER ;
+ ```
+
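+Optionally, you can verify that the procedure compiled successfully. A minimal check, assuming the Oracle-compatible `USER_OBJECTS` view is available:
+
+```sql
+SELECT OBJECT_NAME, STATUS FROM USER_OBJECTS WHERE OBJECT_TYPE = 'PROCEDURE' AND OBJECT_NAME = 'PRO_GENERATE_DATA';
+```
+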
+#### Step 3: Call the stored procedure
+
+Use the `CALL` statement to call the stored procedure to execute the logic that generates the test data. You can pass a parameter to the stored procedure to specify the amount of data to be generated.
+
+Here is an example:
+
+Execute the following SQL statement to call the stored procedure `pro_generate_data` and pass the parameter value `100000` to insert 100,000 data records.
+
+```sql
+obclient [SYS]> CALL pro_generate_data(100000);
+```
+
+
+ Note
+  You can increase or decrease the value of the input parameter to control the amount of test data. When you adjust the parameter value, pay attention to the database performance and storage space to avoid database crashes or insufficient storage space.
+
+
+#### Step 4: View the data
+
+Execute the following SQL statement to view the number of rows in the `test_pro_tbl1` table.
+
+```shell
+obclient [SYS]> SELECT count(*) FROM test_pro_tbl1;
+```
+
+The return result is as follows:
+
+```shell
++----------+
+| COUNT(*) |
++----------+
+| 100000 |
++----------+
+1 row in set
+```
+
+## Use ODC to generate test data in batches
+
+ODC is an enterprise-level database development platform tailored for OceanBase Database. For more information about ODC, see [What is ODC?](https://www.oceanbase.com/docs/enterprise-odc-doc-cn-10000000002082168)
+
+ODC provides the data mocking feature that can generate data based on field types in a table. This can meet your requirement for generating a large amount of data during database performance tests or feature verification. For more information about the data mocking feature provided by ODC, see [Data mocking](https://www.oceanbase.com/docs/enterprise-odc-doc-cn-10000000002082193).
+
+## References
+
+* For more information about how to connect to a database, see [Overview](../100.connect-to-oceanbase-database-of-oracle-mode/100.connection-methods-overview-of-oracle-mode.md).
+* For more information about how to drop a table, see [Drop a table](../../../700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/800.delete-a-table-of-oracle-mode.md).
+* For more information about how to delete data, see [Delete data](300.delete-data-of-oracle-mode-in-develop.md).
From 8708554e45aa40581218686b80e1f6f12ee08221 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Sat, 13 Apr 2024 18:27:14 +0800
Subject: [PATCH 10/63] v430-beta-rn-ee-improve
---
.../9600.V4.3/9690.ob-430_ee.md | 225 +++++++++---------
1 file changed, 113 insertions(+), 112 deletions(-)
diff --git a/en-US/900.release-notes/10100.enterprise-edition-history-release/9600.V4.3/9690.ob-430_ee.md b/en-US/900.release-notes/10100.enterprise-edition-history-release/9600.V4.3/9690.ob-430_ee.md
index c1ccd12ae2..6d6e15e90d 100644
--- a/en-US/900.release-notes/10100.enterprise-edition-history-release/9600.V4.3/9690.ob-430_ee.md
+++ b/en-US/900.release-notes/10100.enterprise-edition-history-release/9600.V4.3/9690.ob-430_ee.md
@@ -14,6 +14,7 @@
* Release date: March 22, 2024
* Version: V4.3.0 Beta
* RPM version: oceanbase-4.3.0.1-101000062024032200
+* Description: This Beta version resolves most known issues and is increasingly stable. However, some minor issues may remain to be addressed in the final stable release. We recommend that you use this version only in test environments.
### Overview
@@ -21,13 +22,13 @@ OceanBase Database V4.3.0 is released to accommodate typical analytical processi
### Key features
-#### Key AP features
+#### AP features
-* **Columnar storage engine**
+* **Columnar engine**
Columnar storage is crucial for AP databases in scenarios involving complex analytics or ad-hoc queries on a large amount of data. A columnar storage differs from a row-based storage in that it physically arranges data in tables based on columns. When data is stored in columnar storage, the engine can scan only the column data required for query evaluation without scanning entire rows in AP scenarios. This reduces the usage of I/O and memory resources and increases the evaluation speed. In addition, columnar storage naturally provides better data compression conditions to achieve a higher compression ratio, thereby reducing the storage space and network bandwidth required.
- However, common columnar engines are implemented generally based on the assumption that the data organized by column is static without massive random updates. In the case of massive random updates, system performance issues are unavoidable. The LSM-Tree architecture of OceanBase Database can resolve this problem by separately processing baseline data and incremental data. Therefore, OceanBase Database V4.3.0 supports the columnar engine based on the current architecture. It implements columnar storage and row-based storage on the same OBServer node based on a single set of code and architecture, ensuring both TP and AP query performance.
+ However, columnar engines are implemented generally based on the assumption that the data organized by column is static without massive random updates. In the case of massive random updates, system performance issues are unavoidable. The LSM-Tree architecture of OceanBase Database can resolve this problem by separately processing baseline data and incremental data. Therefore, OceanBase Database V4.3.0 supports the columnar engine based on the current architecture. It implements columnar storage and row-based storage on the same OBServer node based on a single set of code and architecture, ensuring both TP and AP query performance.
The columnar engine is optimized in terms of optimizer, executor, DDL processing, and transaction processing modules to facilitate AP business migration and improve ease of use in the new version. Specifically, a columnar storage-based new cost model and a vectorized engine are introduced, the query pushdown feature is extended and enhanced, and new features such as the Skip Index attribute, new column-based encoding algorithm, and adaptive compactions are provided.
@@ -59,13 +60,13 @@ OceanBase Database V4.3.0 is released to accommodate typical analytical processi
In earlier versions of OceanBase Database, the cost model uses constant parameters evaluated by internal servers as hardware system statistics. It uses a series of formulas and constant parameters to describe the execution overhead of each operator. In actual business scenarios, different hardware environments can provide different CPU clock frequencies, sequential/random read speeds, and NIC bandwidths. The differences may contribute to cost estimation deviations. Due to the deviations, the optimizer cannot always generate the optimal execution plan in different business environments. This version optimizes the implementation of the cost model. The cost model can use the `DBMS_STATS` package to collect or set system statistics parameters to adapt to the hardware environment. The `DBA_OB_AUX_STATISTICS` view is provided to display the system statistics parameters of the current tenant.
-* **Fixing of session variables for function indexes**
+* **Fixed session variables for function-based indexes**
- When a function index is created on a table, a hidden virtual generated column is added to the table and defined as the index key of the function index. The values of the virtual generated column are stored in the index table. The results of some built-in system functions are affected by session variables. The evaluation result of a function varies based on the values of session variables, even if the input arguments are the same. When a function index or generated column is created in this version, the dependent session variables are fixed in the schema of the index column or generated column to improve stability. When values of the index column or generated column are calculated, fixed values are used and are not affected by variable values in the current session. In OceanBase Database V4.3.0, system variables that can be fixed include `timezone_info`, `nls_format`, `nls_collation`, and `sql_mode`.
+ When a function-based index is created on a table, a hidden virtual generated column is added to the table and defined as the index key of the function-based index. The values of the virtual generated column are stored in the index table. The results of some built-in system functions are affected by session variables. The evaluation result of a function varies based on the values of session variables, even if the input arguments are the same. When a function-based index or generated column is created in this version, the dependent session variables are fixed in the schema of the index column or generated column to improve stability. When values of the index column or generated column are calculated, fixed values are used and are not affected by variable values in the current session. In OceanBase Database V4.3.0, system variables that can be fixed include `timezone_info`, `nls_format`, `nls_collation`, and `sql_mode`.
* **Online DDL extension in MySQL mode**
- OceanBase Database of this version supports online DDL operations for column type changes in more scenarios, including:
+ OceanBase Database V4.3.0 supports more online DDL scenarios for column type changes, including:
* Conversion of integer types: Online DDL operations, instead of offline DDL operations, are performed to change the data type of a primary key column, index column, generated column, column on which a generated column depends, or column with a `UNIQUE` or `CHECK` constraint to an integer type with a larger value range.
* Conversion of the `DECIMAL` data type: For columns that support the `DECIMAL` data type, online DDL operations are performed to increase the precision within any of the [1,9], [10,18], [19,38], and [39,76] ranges without changing the scale.
@@ -75,102 +76,102 @@ OceanBase Database V4.3.0 is released to accommodate typical analytical processi
* Conversion between the `TINYTEXT` and `VARCHAR` data types: For columns that support the `TINYTEXT` data type, online DDL operations are performed to change the `VARCHAR(x)` data type to the `TINYTEXT` data type if `x <= 255`, and offline DDL operations are performed if otherwise. For columns that support the `VARCHAR` data type, online DDL operations are performed to change the `TINYTEXT` data type to the `VARCHAR(x)` data type if `x >= 255`, and offline DDL operations are performed if otherwise.
* Conversion between the `TINYBLOB` and `VARBINARY` data types: For columns that support the `TINYBLOB` data type, online DDL operations are performed to change the `VARBINARY(x)` data type to the `TINYBLOB` data type if `x <= 255`, and offline DDL operations are performed if otherwise. For columns that support the `VARBINARY` data type, online DDL operations are performed to change the `TINYBLOB` data type to the `VARBINARY(x)` data type if `x >= 255`, and offline DDL operations are performed if otherwise.
-* **Globally unique client session IDs**
+* **Globally unique client session ID**
- If OceanBase Database is of a version earlier than V4.3.0 and OceanBase Database Proxy (ODP) is of a version earlier than V4.2.3, the client session ID of ODP is returned if you execute the `SHOW PROCESSLIST` statement in ODP to query the session ID, and the server session ID is returned if you query the session ID by using an expression such as `connection_id` or from a system view. One client session ID corresponds to multiple server session IDs, making it difficult to use a unique ID to identify a session on the entire link. As a result, you can be easily confused when you query session information, which causes inconveniences in user session management. This version restructures the client session ID generation and maintenance process. If OceanBase Database is of V4.3.0 or later and ODP is of V4.2.3 or later, when you query a session ID by executing the `SHOW PROCESSLIST` statement, from the `information_schema.PROCESSLIST` or `GV$OB_PROCESSLIST` view, or by using the `connection_id`, `userenv('sid')`/`userenv('sessionid')`, or `sys_context('userenv','sid')`/`sys_context('userenv','sessionid')` expression, the client session ID is returned. You can manage client sessions by using the `KILL` statement in SQL or PL. If OceanBase Database or ODP does not meet the version requirement, the handling method in earlier versions is used.
-
+ Prior to OceanBase Database V4.3.0 and OceanBase Database Proxy (ODP) V4.2.3, when the client executes `SHOW PROCESSLIST` through ODP, the client session ID in ODP is returned. However, when the client queries the session ID by using an expression such as `connection_id` or from a system view, the session ID on the server is returned. A client session ID corresponds to multiple server session IDs. This causes confusion in session information queries and makes user session management difficult. In the new version, the client session ID generation and maintenance process is reconstructed. When the version of OceanBase Database is not earlier than V4.3.0 and the version of ODP is not earlier than V4.2.3, the session IDs returned by various channels, such as the `SHOW PROCESSLIST` command, the `information_schema.PROCESSLIST` and `GV$OB_PROCESSLIST` views, and the `connection_id`, `userenv('sid')`, `userenv('sessionid')`, `sys_context('userenv','sid')`, and `sys_context('userenv','sessionid')` expressions, are all client session IDs. You can specify a client session ID in the SQL or PL command `KILL` to terminate the corresponding session. If the preceding version requirements for OceanBase Database and ODP are not met, the handling method in earlier versions is used.
+
-* **Renovation of the log stream state machine**
+* **Improvement of the log stream state machine**
- In this version, the status of a log stream is subject to the memory status and persistence status. The persistence status indicates the lifecycle of the log stream. After the log stream is restarted upon a server breakdown, the presence status and memory status of the log stream are determined based on the persistence status. The memory status is the running status of the log stream. It indicates the overall status of the log stream and the status of key submodules. Based on the explicit status and status sequence of the log stream, underlying modules can determine which operations of the log stream are safe and whether the log stream has changed from one state to another and then changed back to the original state. The working status and performance of a log stream after it is restarted upon a server breakdown are optimized for backup and restore processes and migration processes. This improves the stability of log stream features and enhances the concurrency control over log streams.
+ In OceanBase Database V4.3.0, the log stream status is split into the in-memory status and persistent status. The persistent status indicates the life cycle of a log stream. After the OBServer node where a log stream resides breaks down and then restarts, the system determines whether the log stream should exist and what the in-memory status of the log stream should be based on the persistent status of the log stream. The in-memory status indicates the runtime status of a log stream, representing the overall status of the log stream and the status of key submodules. Based on the explicit status and status sequence of the log stream, underlying modules can determine which operations are safe to the log stream and whether the log stream has gone through a status change of the ABA type. For backup and restore or migration processes, the working status of a log stream is optimized after the OBServer node where the log stream resides restarts. This feature improves the stability of log stream-related features and enhances the concurrency control on log streams.
* **Tenant cloning**
- OceanBase Database V4.3.0 introduces the tenant cloning feature. You can execute the `CREATE TENANT new_tenant_name FROM source_tenant_name WITH RESOURCE_POOL [=] resource_pool_name, UNIT [=] unit_config` statement in the sys tenant to clone the specified tenant. The cloned tenant is a standby tenant. You can execute the `ALTER SYSTEM ACTIVATE STANDBY TENANT new_tenant_name` statement to switch the cloned tenant to the PRIMARY role to provide services. The cloned tenant and original tenant share the physical macroblocks. However, new data changes and resource usage are isolated by tenant. If you want to perform temporary data analysis or other risky operations with high resource consumption on an online tenant, you can clone the tenant and perform analysis or verification on the cloned tenant to avoid affecting the online tenant. You can also clone a tenant for disaster recovery. When an unrecoverable misoperation is performed on the original tenant, you can use the cloned tenant for data rollback.
-
+ OceanBase Database V4.3.0 supports tenant cloning. You can execute `CREATE TENANT new_tenant_name FROM source_tenant_name WITH RESOURCE_POOL [=] resource_pool_name, UNIT [=] unit_config` in the sys tenant to quickly clone a specified tenant. After a tenant cloning job is completed, the created new tenant is a standby tenant. You can use `ALTER SYSTEM ACTIVATE STANDBY TENANT new_tenant_name` to convert the standby tenant into the primary tenant to provide services. The new tenant and the source tenant share physical macroblocks in the initial state, but new data changes and resource usage are isolated between the tenants. You can clone an online tenant for temporary data analysis with high resource consumption or other high-risk operations to avoid risking the online tenant. In addition, you can also clone a tenant for disaster recovery. When irrecoverable misoperations are performed in the source tenant, you can use the new tenant for data rollback.
+
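+  A minimal sketch of the workflow, with hypothetical tenant, resource pool, and unit config names:
+
+  ```sql
+  -- Clone tenant_a into tenant_a_clone; all object names here are examples only.
+  CREATE TENANT tenant_a_clone FROM tenant_a WITH RESOURCE_POOL = pool_clone, UNIT = unit_config_clone;
+  -- The clone starts as a standby tenant; activate it so that it can serve traffic.
+  ALTER SYSTEM ACTIVATE STANDBY TENANT tenant_a_clone;
+  ```
+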
#### Performance improvements
* **PDML transaction optimization**
- This version supports parallel commit and log replay at the transaction layer, and provides partition-level rollback inside transaction participants, which helps significantly improve the DML execution performance in high concurrency scenarios in contrast to earlier V4.x versions.
+ The new version implements optimizations at the transaction layer by supporting parallel commit, log replay, and partition-level rollbacks within transaction participants. Compared with earlier V4.x versions, the new version significantly improves the DML execution performance in high concurrency scenarios.
-* **Optimization of I/O usage in loading tablet metadata**
+* **I/O usage optimization for loading tablet metadata**
- OceanBase Database V4.x supports millions of partitions on a single server and on-demand loading of metadata because the metadata of millions of tablets cannot be all stored in the memory. On-demand loading is supported at the partition and subcategory levels. Metadata in a partition is divided into different subcategories for layered storage. When a background task requires metadata of a deep level, reading the data results in a high I/O overhead. A high I/O overhead is acceptable for a local solid-state disk (SSD) but may compromise the system performance in scenarios where a hard disk drive (HDD) or a cloud disk is used. This version aggregates the frequently accessed metadata for storage, reducing the number of I/O times required for accessing the metadata to 1. This greatly decreases the I/O overhead in the case of no load and prevents the I/O overheads of background tasks from affecting the query performance in the foreground. The process of loading metadata upon an OBServer node restart is also optimized. Specifically, tablet metadata is batch loaded based on macroblocks. This significantly reduces discrete read I/O operations and increases the restart speed by multiple times or even dozens of times.
+ OceanBase Database V4.x supports millions of partitions on a single machine. As the memory may fail to hold the metadata of millions of tablets, OceanBase Database V4.x supports on-demand loading of tablet metadata. OceanBase Database supports on-demand loading of metadata at the partition level and the subclass level within partitions. In a partition, metadata is split into multiple subclasses for hierarchical storage. In scenarios where background tasks require deeper metadata, the data read consumes more I/O resources. These I/O overheads are not a problem for local SSD disks, but may affect system performance when HDD disks or cloud disks are used. OceanBase Database V4.3.0 aggregates frequently accessed metadata in storage, and only one I/O operation is required to access the metadata. This greatly reduces the I/O overhead in zero load scenarios and avoids the impact on foreground query performance caused by background task I/O overhead. In addition, the metadata loading process during the restart of an OBServer node is optimized. Tablet metadata is loaded in batches at the granularity of macroblocks, greatly reducing discrete I/O reads and speeding up the restart by several or even dozens of times.
#### High availability enhancements
-* **Proactive broadcasting/refreshing of tablet locations**
+* **Proactive broadcast/refresh of tablet locations**
- OceanBase Database provides the periodic location cache refreshing mechanism to ensure that the location information of log streams is updated in real time and is consistent. However, tablet location information can only be passively refreshed. Changes in the mappings between tablets and log streams can trigger SQL retries and read/write errors with a certain probability. OceanBase Database V4.3.0 supports proactive broadcasting of tablet locations to reduce SQL retries and read/write errors caused by changes in mappings after transfer. This version also supports proactive refreshing to avoid unrecoverable read/write errors.
+ In earlier versions, OceanBase Database provides a periodic location cache refresh mechanism to keep the location information of log streams up-to-date and consistent. However, tablet location information can only be passively refreshed, so changes in the mappings between tablets and log streams can trigger SQL retries and read/write errors with a certain probability. OceanBase Database V4.3.0 supports proactive broadcast of tablet locations to reduce SQL retries and read/write errors caused by mapping changes after a transfer. It also supports proactive refresh to avoid unrecoverable read/write errors.
-* **AWS S3 supported for backup and restore**
+* **Support for S3 as the backup and restore media**
- OceanBase Database supports Network File System (NFS), Alibaba Cloud Object Storage Service (OSS), and Tencent Cloud Object Storage (COS) as the storage media for the backup and restore feature in earlier versions. OceanBase Database V4.3.0 further supports AWS S3 as the storage media for backup and restore. You can use AWS S3 as the destination for log archiving and data backup, and use the backup data on AWS S3 for physical restore.
+ Earlier versions of OceanBase Database support two types of storage media for backup and restore: file storage (NFS) and object storage (OSS and COS). The new version supports Amazon Simple Storage Service (S3) as the log archive and data backup destination. You can also use backup data on S3 for physical restore.
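+ For example, pointing data backup at an S3 bucket might look as follows (a sketch with placeholder bucket, endpoint, region, and credentials; the exact URI parameters may differ in your environment, so verify against the backup documentation):
+
+ ```sql
+ ALTER SYSTEM SET data_backup_dest = 's3://my-bucket/backup?host=s3.us-east-1.amazonaws.com&access_id=xxx&access_key=xxx&s3_region=us-east-1';
+ ```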
-* **Memory scaling limitations**
+* **Limitations on memory specification adjustment**
- This version improves the stability in memory scaling and avoids out-of-memory (OOM) errors caused by an improper `memory_limit` value. Two conditions must be met for the `memory_limit` setting to take effect on an OBServer node: the reserved memory of the sys500 tenant is not less than the occupied memory, and the value of `memory_limit` is greater than the sum of the value of `system_memory` and the memory allocated to resource units. If either condition is not met, when you set the `memory_limit` parameter, no error is returned but the parameter setting does not take effect.
+ In OceanBase Database V4.3.0, the stability of memory specification adjustment is enhanced to avoid out-of-memory (OOM) problems caused by an improper value of the `memory_limit` parameter. The `memory_limit` setting on an OBServer node takes effect only when both of the following conditions are met: first, the reserved memory for the sys500 tenant is not less than its actual occupied memory; second, the value of `memory_limit` is greater than the sum of `system_memory` and the memory allocated to resource units. If either condition is not met, the `memory_limit` setting does not take effect, even though no error is returned.
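+ A minimal sketch of adjusting the parameter and then confirming that it took effect (the value is a placeholder):
+
+ ```sql
+ ALTER SYSTEM SET memory_limit = '64G';
+ -- No error is returned even if the conditions above are violated, so verify the effective value.
+ SHOW PARAMETERS LIKE 'memory_limit';
+ ```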
-* **Active transaction transfer**
+* **Migration of active transactions during tablet transfer**
- In the log stream design in OceanBase Database V4.x, data is managed in the unit of tablet and logs are managed in the unit of log stream. Tablets are aggregated in a log stream to avoid two-phase commit of transactions in the log stream. To achieve a balance of data and traffic among different log streams, OceanBase Database allows you to flexibly transfer tablets among log streams. However, during the transfer, an active transaction may still be operating data, which may compromise the atomicity, consistency, isolation, and durability (ACID) capability of the transaction. For example, if the data of an active transaction at the source is not fully transferred to the destination, the atomicity of the transaction cannot be ensured. In versions earlier than V4.3.0, OceanBase Database will terminate active transactions during the transfer. This affects normal execution of transactions. To resolve this issue, OceanBase Database V4.3.0 supports the transfer of active transactions. This allows parallel execution of active transactions and avoids transaction rollback or inconsistency caused by the transfer.
+ In the log stream design of OceanBase Database V4.x, data is managed in the unit of tablets, while logs are managed in the unit of log streams. Multiple tablets are aggregated into one log stream so that transactions within a single log stream avoid the high cost of two-phase commit. To balance data and traffic among different log streams, tablets can be flexibly transferred between log streams. However, during a tablet transfer, active transactions may still be operating on the data, which may compromise the atomicity, consistency, isolation, and durability (ACID) of the transactions. For example, if active transaction data on the transfer source cannot be completely migrated to the transfer destination during concurrent transaction execution, the atomicity of the transactions cannot be guaranteed. In earlier versions, active transactions were killed during the transfer to avoid such problems, which affected the normal execution of transactions to some extent. To solve this problem, the new version supports the migration of active transactions during tablet transfer, which enables concurrent execution of active transactions and ensures that no abnormal rollbacks or consistency issues occur due to the transfer.
#### Resource usage optimization
* **MINIMAL mode of transaction logs**
- This version restructures the `MINIMAL` mode of transaction logs to optimize the implementation of the `MINIMAL` feature in earlier versions and improve the stability of the feature. Enabling the `MINIMAL` mode significantly decreases the volume of clogs generated for `UPDATE` and `DELETE` statements, thereby reducing the resource overheads in log storage, archiving, and transmission. This mode applies to private clouds with limited cross-city network bandwidth resources and public clouds with limited bandwidth resources for writing data to the cloud disk. In OceanBase Database V4.3.0 Beta, this feature is disabled by default because OceanBase Migration Service (OMS) has not been specifically adapted. In a scenario where no tool is required for incremental data synchronization, you can set the system variable `binlog_row_image` to `MINIMAL` to enable this feature.
+ In OceanBase Database V4.3.0, the `MINIMAL` mode of transaction logs is reconstructed to optimize the implementation of the `MINIMAL` feature, achieving better feature stability. After the `MINIMAL` mode is enabled, the volume of generated clogs drops significantly for `UPDATE` and `DELETE` operations, which can significantly reduce the resource cost for log storage, archiving, and transmission. This feature is especially useful in private cloud environments with limited cross-region network bandwidth and public cloud environments with limited cloud disk write bandwidth. In V4.3.0 Beta, this feature is disabled by default because OceanBase Migration Service (OMS) has not been adapted to this feature. In scenarios where you do not need to use OMS for incremental data synchronization, you can set the system variable `binlog_row_image` to `MINIMAL` to enable this feature.
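+ For example, in a tenant that does not rely on tool-based incremental synchronization, you can enable the mode as follows:
+
+ ```sql
+ -- Only do this when OMS-based incremental synchronization is not in use.
+ SET GLOBAL binlog_row_image = 'MINIMAL';
+ ```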
* **Memory throttling mechanism**
- In OceanBase Database of a version earlier than V4.x, only a few modules require freezes and minor compactions to release the memory, and most of the modules are MemTables. Therefore, a memory limit is set for MemTables and throttling logic is used to ensure that the memory usage smoothly approaches the upper limit, avoiding write stop in the system caused by sudden OOM errors. In OceanBase Database V4.x, more modules, such as the TxData module, require freezes and minor compactions to release the memory. More refined means are provided to control the memory usage of modules. The memory for the TxData and Multi Data Source (MDS) modules is limited. The two modules share the memory with MemTables. When the memory usage reaches the value of `Tenant memory × _tx_share_memory_limit_percentage% × writing_throttling_trigger_percentage%`, overall throttling is triggered. This version also supports triggering freezes and minor compactions for transaction data tables by time. By default, a freeze is triggered for transaction data tables every 1,800 seconds to reduce the memory usage of the TxData module.
+ Prior to OceanBase Database V4.x, only a few modules release memory based on freezes and minor compactions, and the MemTable is the largest among them. Therefore, in earlier versions, an upper limit is set for the memory usage of the MemTable, and throttling logic lets memory usage approach the limit as smoothly as possible, avoiding write failures caused by sudden memory exhaustion. In OceanBase Database V4.x, more modules that release memory based on freezes and minor compactions are introduced, such as the transaction data (TxData) module. The new version provides more granular means to control the memory usage of various modules and supports upper limits on the memory of the TxData and Multi Data Source (MDS) modules. The two modules share memory space with the MemTable. When the total memory usage of the three modules reaches `Tenant memory * _tx_share_memory_limit_percentage% * writing_throttling_trigger_percentage%`, overall memory throttling is triggered for the three modules. The new version also supports time-based freezes and minor compactions of the transaction data table to reduce the memory usage of the TxData module. By default, the transaction data table is frozen once every 1,800 seconds.
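+ For example, with 10 GB of tenant memory and illustrative values of `_tx_share_memory_limit_percentage` = 20 and `writing_throttling_trigger_percentage` = 60 (placeholders, not necessarily the defaults), overall throttling starts once the MemTable, TxData, and MDS modules jointly use 10 GB * 20% * 60% = 1.2 GB.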
-* **Optimization of the space for temporary results of DDL operations**
+* **Optimization of DDL temporary result space**
- Many DDL operations store temporary results in materialized structures. Here are two typical scenarios:
+ During DDL operations, many processes may store temporary results in materialized structures. Here are two typical scenarios:
- 1. In an index creation scenario where data is scanned from the data table and inserted into the index table, the data scanned from the data table needs to be sorted. If the memory is insufficient during the sorting, the current data in the memory will be temporarily stored in materialized structures to release the memory space for subsequent scanning. Then, the data in the materialized structures will be merged and sorted. This practice is particularly effective in the case of limited memory but requires extra disk space.
- 2. In a columnar storage bypass import scenario, the system temporarily stores the data to be inserted into column groups in materialized structures, and then reads the data from the materialized structures when inserting data into each column group. These materialized structures can be used in the `SORT` operator to store intermediate data required for external sorting. When the system inserts data into column groups, it can cache the data to avoid extra overheads caused by repeated table scanning. This practice can prevent repeated scanning from compromising the performance, but increases the disk space occupied by temporary files.
+ 1. During index creation, the system scans data in the base table, sorts it, and inserts it into the index table. If memory is insufficient during the sorting process, the data currently in memory is temporarily stored in materialized structures to release memory space for subsequent scanning. Data in the materialized structures is then merged and sorted. This approach is particularly effective when memory is limited, but requires additional disk space.
+ 2. In the columnstore bypass import scenario, the system first temporarily stores the data to be inserted into each column group in materialized structures, and then reads the data from the materialized structures for insertion. These materialized structures can be used in the `SORT` operator to store intermediate data required for external sorting. When the system inserts data into column groups, the data can be cached in materialized structures, avoiding additional overhead caused by repeated table scans. Although this method reduces the performance loss caused by repeated scanning, the temporary files occupy considerable disk space.
- To resolve these issues, this version optimizes the data flow of DDL operations. Specifically, it eliminates unnecessary redundant structures to simplify the data flow. It also encodes and compresses the temporary results before storing them in the disk. This way, the disk space occupied by temporary results during DDL operations is significantly reduced, facilitating efficient use of storage resources.
+ In the new version, the data flow in DDL operations is optimized to address these issues. First, unnecessary redundant structures are eliminated to simplify the data flow. Second, temporary results are encoded and compressed before being stored on disk. This way, the disk space consumed by temporary results during DDL operations is significantly reduced, and storage resources are used more efficiently.
-#### Improvement in ease of use
+#### Improvement in usability
* **Index monitoring**
  Indexes are usually created to improve query performance in a database. Over time, as business scenarios evolve and more people operate on the system, the number of indexes on a data table grows. Unused indexes waste storage space and increase the overhead of DML operations, so ongoing attention is required to identify and drop useless indexes and reduce the system load. However, it is difficult to identify useless indexes manually. Therefore, OceanBase Database V4.3.0 introduces the index monitoring feature. You can enable this feature for a user tenant and set sampling rules. Index usage information that meets the specified rules is recorded in memory and flushed to an internal table every 15 minutes. You can query the `DBA_INDEX_USAGE` view to check whether the indexes on a table are referenced, and drop useless indexes to release space.
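+ For illustration, after the feature is enabled, a query like the following might identify unused indexes (a sketch in MySQL mode; the non-key column names follow the Oracle-style `DBA_INDEX_USAGE` layout and are assumptions here):
+
+ ```sql
+ -- Sketch: list indexes never accessed since index monitoring began.
+ SELECT name, owner, total_access_count, last_used
+ FROM oceanbase.DBA_INDEX_USAGE
+ WHERE total_access_count = 0;
+ ```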
-* **RPC security certificate management**
+* **RPC authentication certificate management**
- After remote procedure call (RPC) authentication is enabled for a cluster, when a client, such as an arbitration service client, primary or standby database, or OceanBase Change Data Capture (CDC) client, initiates an access request to the cluster, you need to first place the root CA certificate of the client to the deployment directory of each OBServer node in the cluster and then complete related settings. This process is complex. OceanBase Database V4.3.0 supports the internal certificate management feature. You can call the `DBMS_TRUSTED_CERTIFICATE_MANAGER` system package in the sys tenant to add, delete, and modify root CA certificates trusted by a cluster. You can query the `DBA_OB_TRUSTED_ROOT_CERTIFICATE` view in the sys tenant for the list of root CA certificates added to the cluster, as well as information about the certificates, such as the expiration time.
+ When RPC authentication is enabled for a cluster, for an access request from a client, such as the arbitration service, a primary/standby database, or OBCDC, you need to place the CA root certificate of the client in the deployment directory of each OBServer node in the cluster and then perform related configurations. This process is cumbersome. OceanBase Database V4.3.0 supports the internal certificate management feature. You can use the `DBMS_TRUSTED_CERTIFICATE_MANAGER` system package provided in the sys tenant to add, delete, and modify the CA root certificates trusted by an OceanBase cluster. The `DBA_OB_TRUSTED_ROOT_CERTIFICATE` view is also provided in the sys tenant to display the list of client CA root certificates added to OBServer nodes in the cluster and their expiration times.
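+ For example, you can review the trusted certificates and their expiration times from the sys tenant:
+
+ ```sql
+ SELECT * FROM oceanbase.DBA_OB_TRUSTED_ROOT_CERTIFICATE;
+ ```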
* **Parameter resetting**
- In earlier versions, if you want to reset a modified parameter to its default value, you must first query the default value of the parameter and then manually set the parameter to the default value, delivering poor ease of use. In OceanBase Database V4.3.0, the `ALTER SYSTEM [RESET] parameter_name [SCOPE = {MEMORY | SPFILE | BOTH}] {TENANT [=] 'tenant_name'}` syntax is provided for resetting a parameter to its default value. The default value is obtained from the node that executes the statement. You can reset cluster-level parameters and the parameters of a specified user tenant from the sys tenant. You can reset the parameters of only the current tenant from a user tenant. The implementation of the `SCOPE` option is consistent across different versions of OceanBase Database. For parameters whose modifications take effect statically, the system only stores their default values in the disk but does not update their values in the memory. For parameters whose modifications take effect dynamically, the system updates their values in the memory and stores their default values in the disk.
+ In earlier versions, if you want to reset a parameter to its default value, you need to query the default value first and then manually set the parameter to that value. The new version provides the `ALTER SYSTEM [RESET] parameter_name [SCOPE = {MEMORY | SPFILE | BOTH}] {TENANT [=] 'tenant_name'}` syntax for resetting a parameter to its default value. The default value is obtained from the node that executes the statement. You can reset cluster-level parameters or the parameters of a specified tenant in the sys tenant, and reset parameters for the current tenant in a user tenant. On OBServer nodes, the implementation logic is the same regardless of whether the `SCOPE` option is specified: for a parameter that takes effect statically, the default value is only stored on disk and not updated in memory; for a parameter that takes effect dynamically, the default value is both stored on disk and updated in memory.
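+ For example, from the sys tenant (the parameter and tenant names are placeholders):
+
+ ```sql
+ -- Reset a cluster-level parameter to its default value (obtained from the node that executes the statement).
+ ALTER SYSTEM RESET large_query_threshold;
+
+ -- Reset a tenant-level parameter of a specified user tenant.
+ ALTER SYSTEM RESET default_table_store_format TENANT = 'mysql_tenant';
+ ```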
* **Detailed display of parameter data types**
- In OceanBase Database V4.3.0, data types of parameters are displayed in the `data_type` column of parameter-related views such as `[G]V$OB_PARAMETERS` and in the return result of the `SHOW PARAMETERS` statement. For example, the data type of the `log_disk_size` parameter is `CAPACITY`, that of `rpc_port` is `INT`, and that of `devname` is `STRING`.
+ In the new version, the parameter data type (`data_type`) is displayed in parameter-related views, such as `[G]V$OB_PARAMETERS`, and in the return result of the `SHOW PARAMETERS` statement. For example, the data type of `log_disk_size` is `CAPACITY`, that of `rpc_port` is `INT`, and that of `devname` is `STRING`. This enables peripheral tools to display and validate parameter values based on their data types.
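+ For example:
+
+ ```sql
+ SHOW PARAMETERS LIKE 'log_disk_size';
+ -- The data_type column in the result shows CAPACITY for this parameter.
+ ```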
-* **INROW storage threshold for LOBs**
+* **LOB INROW threshold configuration**
- A large object (LOB) less than or equal to 4 KB in size is stored in INROW (in-memory storage) mode. A LOB greater than 4 KB is stored in the LOB auxiliary table. The row-based storage feature of INROW storage provides higher performance than auxiliary table-based storage in some scenarios. Therefore, OceanBase Database V4.3.0 supports dynamic configuration of the LOB storage mode. You can dynamically adjust the INROW storage size as needed provided that the size does not exceed the maximum row size allowed.
+ By default, LOB data of a size less than or equal to 4 KB is stored in INROW mode, and LOB data of a size greater than 4 KB is stored in the LOB auxiliary table. In some scenarios, INROW storage provides higher performance than auxiliary table-based storage. Therefore, this version supports dynamic configuration of the LOB storage mode. You can adjust the INROW threshold based on your business needs, provided that the threshold does not exceed the limit for INROW storage.
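+ A minimal sketch, assuming the table-level `LOB_INROW_THRESHOLD` option is the knob that carries this setting (the option name and the `ALTER TABLE` form are assumptions; verify them against your version's documentation):
+
+ ```sql
+ -- Hypothetical: raise the INROW threshold to 8 KB for a table with mostly small LOBs.
+ ALTER TABLE t1 SET LOB_INROW_THRESHOLD = 8192;
+ ```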
* **Local import from the client**
- OceanBase Database V4.3.0 provides the local import feature (`LOAD DATA LOCAL INFILE` statement) for loading data from local files on the client in streaming mode. This way, developers can directly use local files for testing without the need to upload the files to the server or object storage service, improving the working efficiency in scenarios where a small amount of data needs to be imported.
+ OceanBase Database V4.3.0 supports the `LOAD DATA LOCAL INFILE` statement for local import from the client, which loads local files in streaming mode. With this feature, developers can import local files for testing without uploading them to the server or object storage, improving efficiency when importing a small amount of data; see the example after the following notes.
  Notice

- To use this feature, make sure that the following conditions are met:
-
- - The version of OceanBase Client (OBClient) is V2.2.4 or later.
- - The version of ODP is V3.2.4 or later, if ODP is used for connection to OceanBase Database. If you directly connect to an OBServer node, ignore this requirement.
- - The version of OceanBase Connector/J is V2.4.8 or later, if Java and OceanBase Connector/J are used.
+ To import local data from the client, make sure that:
+
+ - The OceanBase Client (OBClient) version is V2.2.4 or later.
+ - The version of ODP used to connect to OceanBase Database is V3.2.4 or later. If you directly connect to an OBServer node, ignore this requirement.
+ - The version of OceanBase Connector/J is V2.4.8 or later if you use Java and OceanBase Connector/J.

  Note

- - You can directly use a MySQL client or a native MariaDB client of any version.
- - The `SECURE_FILE_PRIV` variable specifies the privileges for accessing paths on the server. It does not affect the local import feature and therefore does not need to be specified.
+ - You can use the native MySQL or MariaDB client of any version directly.
+ - The `SECURE_FILE_PRIV` variable is used to specify the server paths that can be accessed by the OBServer node. This variable does not affect local import from a client, and therefore does not need to be set.
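+ For illustration, a minimal local import might look as follows (the file path, table name, and field delimiter are placeholders):
+
+ ```sql
+ -- Run from the client; the CSV file resides on the client host, not on the server.
+ LOAD DATA LOCAL INFILE '/tmp/lineitem2.csv'
+ INTO TABLE lineitem2
+ FIELDS TERMINATED BY ',';
+ ```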
### Performance test report
@@ -218,20 +219,20 @@ Here we use three OBServer servers, with the architecture of 1:1:1 to deploy Oce
  Notice

- The value of the system variable `parallel_servers_target` is `max_cpu * server_num * 8`.
+ Set the `parallel_servers_target` variable to `max_cpu * server_num * 8`.
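+ For example, with `max_cpu = 16` and three OBServer nodes (illustrative values), the target is 16 * 3 * 8 = 384:
+
+ ```sql
+ SET GLOBAL parallel_servers_target = 384;
+ ```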
- In OceanBase Database, set the default table storage format to columnar storage:
+ In OceanBase Database V4.3.0, change the default storage mode of tables to columnstore:
- ```sql
- ALTER SYSTEM SET default_table_store_format = 'column';
- ```
+ ```sql
+ ALTER SYSTEM SET default_table_store_format = 'column';
+ ```
-#### TPCH 1T test result
+#### TPC-H 1TB test results
Note
- Tables in OceanBase Database V4.2.2 are row-based tables, those in V4.3.0 Beta are columnar tables. Disk reads occur with 180 GB tenant memory.
+ Tables in OceanBase Database V4.2.2 are rowstore tables, and tables in OceanBase Database V4.3.0 Beta are columnstore tables. Disk reads occur in a tenant with 180 GB of memory.
| - | V4.2.2 | V4.3.0 Beta |
@@ -266,101 +267,101 @@ Here we use three OBServer servers, with the architecture of 1:1:1 to deploy Oce
| Change | Description |
|-------------------------------------------------------------|---------|
-| Client session IDs are unique in ODP in earlier versions and globally unique in the cluster since V4.3.0. | If OceanBase Database is of a version earlier than V4.3.0 and ODP is of a version earlier than V4.2.3, the client session ID of ODP is returned if you execute the `SHOW PROCESSLIST` statement in ODP to query the session ID, and the server session ID is returned if you query the session ID by using an expression such as `connection_id` or from a system view. One client session ID corresponds to multiple server session IDs, making it difficult to use a unique ID to identify a session on the entire link. As a result, you can be easily confused when you query session information, which causes inconveniences in user session management. This version restructures the client session ID generation and maintenance process. If OceanBase Database is of V4.3.0 or later and ODP is of V4.2.3 or later, when you query a session ID by executing the `SHOW PROCESSLIST` statement, from the `information_schema.PROCESSLIST` or `GV$OB_PROCESSLIST` view, or by using the `connection_id`, `userenv('sid')`/`userenv('sessionid')`, or `sys_context('userenv','sid')`/`sys_context('userenv','sessionid')` expression, the client session ID is returned. You can manage client sessions by using the `KILL` statement in SQL or PL. If OceanBase Database or ODP does not meet the version requirement, the handling method in earlier versions is used. |
-| Limitations are imposed on the `memory_limit` setting. | This version improves the stability in memory scaling and avoids OOM errors caused by an improper `memory_limit` value. Two conditions must be met for the `memory_limit` setting to take effect on an OBServer node: the reserved memory of the sys500 tenant is not less than the occupied memory, and the value of `memory_limit` is greater than the sum of the value of `system_memory` and the memory allocated to resource units. If either condition is not met, when you set the `memory_limit` parameter, no error is returned but the parameter setting does not take effect. |
-| The `zlib` compression algorithm is no longer used for storage. | In OceanBase Database V4.2.0, the `zlib` compression algorithm is no longer supported for new tables but is still supported for existing tables. In OceanBase Database V4.3.0, the storage layer no longer supports the `zlib` compression algorithm. If the `zlib` compression algorithm is used before you upgrade your database to V4.3.0, you must change the compression algorithm for data tables or choose not to compress data tables. The `zlib` compression algorithm is also prohibited for the transmission of clogs and TableAPIs. |
-| Limitations on using `archive_lag_target` are refined. | OceanBase Database V4.3.0 refines the limitations on using the `archive_lag_target` parameter.- If no archive media is specified, when you modify this parameter, a message is displayed prompting that you are not allowed to change the default value of this parameter because no archive media is specified.
- If AWS S3 is specified as the archive media, the minimum value of this parameter is 60 seconds. An error is returned when you attempt to specify a smaller value.
- If OSS, NFS, or COS is specified as the archive media, you can set this parameter to any value within the value range.
- If OSS, NFS, or COS is specified as the archive media and the current value of the parameter is smaller than 60 seconds, an error is returned when you attempt to change the archive media to AWS S3 by using a statement.
|
-| The `max_syslog_file_count` parameter specifies the total number of system logs of all types. | To reduce the risk that the log disk is used up after the end-to-end diagnostic feature is enabled, the `max_syslog_file_count` parameter specifies the total number of system logs of all types, instead of the number of system logs of each type. In this case, OceanBase Database V4.3.0 evicts log files based on the first in, first out (FIFO) strategy. |
-| Data types of parameters are displayed in the `data_type` column in the return result of the `SHOW PARAMETERS` statement. | Data types of parameters are displayed in the `data_type` column in the return result of the `SHOW PARAMETERS` statement. The default value of the `data_type` column is changed from `NULL` to `UNKNOWN`. |
-| The default values of `MAX_IOPS` and `MIN_IOPS` of resource units are changed. | In OceanBase Database of a version earlier than V4.3.0, if both `MIN_IOPS` and `MAX_IOPS` are not specified, their values are automatically calculated based on the value of `MIN_CPU`. To be specific, one CPU core corresponds to 10,000 IOPS, namely, `MAX_IOPS = MIN_IOPS = MIN_CPU × 10000`. In OceanBase Database V4.3.0, if `MIN_IOPS` and `MAX_IOPS` are not specified, the default IOPS is changed to `INT64_MAX`, which specifies not to limit IOPS resources. |
+| Client session IDs were locally unique in ODP in earlier versions but are globally unique in an OceanBase cluster in the new version. | Prior to OceanBase Database V4.3.0 and ODP V4.2.3, when the client executes `SHOW PROCESSLIST` through ODP, the client session ID in ODP is returned. However, when the client queries the session ID by using an expression such as `connection_id` or from a system view, the session ID on the server is returned. A client session ID corresponds to multiple server session IDs. This causes confusion in session information queries and makes user session management difficult. In the new version, the client session ID generation and maintenance process is reconstructed. When the version of OceanBase Database is not earlier than V4.3.0 and the version of ODP is not earlier than V4.2.3, the session IDs returned by various channels, such as the `SHOW PROCESSLIST` command, the `information_schema.PROCESSLIST` and `GV$OB_PROCESSLIST` views, and the `connection_id`, `userenv('sid')`, `userenv('sessionid')`, `sys_context('userenv','sid')`, and `sys_context('userenv','sessionid')` expressions, are all client session IDs. You can specify a client session ID in the SQL or PL command `KILL` to terminate the corresponding session. If the preceding version requirements for OceanBase Database and ODP are not met, the handling method in earlier versions is used. |
+| The setting of `memory_limit` is limited. | In OceanBase Database V4.3.0, the stability of memory specification adjustment is enhanced to avoid the OOM problem in the system caused by an improper value of the `memory_limit` parameter. The `memory_limit` parameter on an OBServer node takes effect only when both of the following conditions are met: First, the reserved memory for the sys500 tenant is not less than the actual occupied memory. Second, the value of `memory_limit` is higher than the sum of `system_memory` and the allocated memory of resource units. If either condition is not met, the `memory_limit` parameter does not take effect, even when no error is returned. |
+| The `zlib` compression algorithm is no longer used for storage. | In OceanBase Database V4.2.0, `zlib` compression is not supported for new tables but existing tables can still use `zlib` compression. In OceanBase Database V4.3.0, the `zlib` compression algorithm is no longer supported at the storage layer. If you used the `zlib` compression algorithm for a data table before you upgrade to V4.3.0, you must change the compression algorithm or choose not to compress the data table. The `zlib` compression algorithm is also prohibited for clog transmission and TableAPI transmission. |
+| Limitations on the usage of `archive_lag_target` are refined. | In OceanBase Database V4.3.0, limitations on the usage of `archive_lag_target` are refined as follows: (1) If you have not specified the archive media, when you try to modify this parameter, the system prompts that the default value of this parameter cannot be changed because no archive media is specified. (2) If you specify S3 as the archive media, the minimum value of this parameter is 60 seconds. An error is returned if you set this parameter to a smaller value. (3) If you specify OSS, NFS, or COS as the archive media, you can set this parameter to any value in the value range. (4) If you change the archive media from OSS, NFS, or COS to S3 by using a statement but the current value of this parameter is smaller than 60 seconds, an error is returned after the statement is executed. |
+| `max_syslog_file_count` is used to specify the total number of system logs of all types. | To reduce the risk of a full log disk after end-to-end diagnostics is enabled, `max_syslog_file_count` specifies the total number of system log files of all types instead of the number of log files of each specific type. In this case, the new version adopts the first-in, first-out (FIFO) strategy to evict log files. |
+| Parameter data types are displayed in the `data_type` column in the return result of the `SHOW PARAMETERS` statement. | The `data_type` column in the return result of the `SHOW PARAMETERS` statement displays parameter data types. The default value of this column is changed from `NULL` to `UNKNOWN`. |
+| The default values of `MAX_IOPS` and `MIN_IOPS` of resource units are changed. | In earlier versions, when neither `MIN_IOPS` nor `MAX_IOPS` is specified, their values are calculated based on `MIN_CPU`. The rule is that one CPU core corresponds to 10,000 IOPS, that is, `MAX_IOPS = MIN_IOPS = MIN_CPU * 10000`. In OceanBase Database V4.3.0, when neither `MIN_IOPS` nor `MAX_IOPS` is specified, the default IOPS is `INT64_MAX`. That is, IOPS resources are not restricted. |
#### View changes
| View | Change type | Description |
-|--------------------------------------|---------|---------|
-| DBA_OB_TRUSTED_ROOT_CERTIFICATE | New | Displays the list of trusted root CA certificates of the cluster, as well as information about the certificates, such as the expiration time. You can query this view in the sys tenant. |
-| CDB/DBA/ALL/USER_MVIEW_LOGS | New | Displays information about materialized view logs. You can query the `CDB` view only in the sys tenant, `ALL/USER` views in Oracle tenants, and the `DBA` view in all tenants. |
-| CDB/DBA/ALL/USER_MVIEWS | New | Displays information about materialized views. You can query the `CDB` view only in the sys tenant, `ALL/USER` views in Oracle tenants, and the `DBA` view in all tenants. |
-| CDB/DBA/USER_MVREF_STATS_SYS_DEFAULTS | New | Displays the system-level default values of refresh history statistical attributes for materialized views. You can query the `CDB` view only in the sys tenant, the `USER` view only in Oracle tenants, and the `DBA` view in all tenants. |
-| CDB/DBA/USER_MVREF_STATS_PARAMS | New | Displays the refresh statistical attributes of each materialized view. You can query the `CDB` view only in the sys tenant, the `USER` view only in Oracle tenants, and the `DBA` view in all tenants. |
-| CDB/DBA/USER_MVREF_RUN_STATS | New | Displays information about each refresh of materialized views. Each refresh is identified by a refresh ID. The information includes the timing statistics and refresh parameters of each refresh. You can query the `CDB` view only in the sys tenant, the `USER` view only in Oracle tenants,and the `DBA` view in all tenants. |
-| CDB/DBA/USER_MVREF_STATS | New | Displays the basic timing statistics on each refresh of each materialized view. You can query the `CDB` view only in the sys tenant, the `USER` view only in Oracle tenants, and the DBA view in all tenants. |
-| CDB/DBA/USER_MVREF_CHANGE_STATS | New | Displays the data changes in the base table involved in each refresh of all materialized views. You can query the `CDB` view only in the sys tenant, the `USER` view only in Oracle tenants, and the `DBA` view in all tenants. |
-| CDB/DBA/USER_MVREF_STMT_STATS | New | Displays information about refresh statements. You can query the `CDB` view only in the sys tenant, the `USER` view only in Oracle tenants, and the `DBA` view in all tenants. |
-| CDB/DBA_INDEX_USAGE | New | Displays the usage information of indexes. You can query the `CDB` view only in the sys tenant, and the `DBA` view in all tenants. |
-| DBA_OB_CLONE_PROGRESS | New | Displays information about ongoing tenant cloning jobs. You can query this view in the sys tenant. |
-| DBA_OB_CLONE_HISTORY | New | Displays information about completed cloning jobs. You can query this view from the sys tenant. |
-| CDB/DBA_OB_AUX_STATISTICS | New | Displays the auxiliary statistics of each tenant. You can query the `CDB` view only in the sys tenant, and the `DBA` view in all tenants. |
-| [G]V$OB_TABLET_COMPACTION_HISTORY | Modified | The `KEPT_SNAPSHOT` column is added to show the multi-version retention timestamps. The `MERGE_LEVEL` column is added to show information about the reuse of macroblocks and microblocks. The width of the `COMMENTS` column is adjusted. |
-| [G]V$OB_PARAMETERS | Modified | Data types of parameters are displayed in the `DATA_TYPE` column. The default value of the `DATA_TYPE` column is changed from `NULL` to `UNKNOWN`. |
-| [G]V$OB_PROCESSLIST | Modified | The `USER_CLIENT_PORT` column is added to show the port number of the client. |
-| DBA_SCHEDULER_JOB_CLASSES | New | This view can be used in Oracle and sys tenants to record information related to the creation of `job class`.|
-| CDB/DBA_OB_RECOVER_TABLE_JOBS | New | Displays information about table-level restore jobs. You can query the `CDB` view only in the sys tenant, and the `DBA` view in all tenants. |
-| CDB/DBA_OB_RECOVER_TABLE_JOB_HISTORY | New | Displays the history of table-level restore jobs. You can query the `CDB` view only in the sys tenant, and the `DBA` view in all tenants. |
-| CDB/DBA_OB_IMPORT_TABLE_JOBS | New | Displays information about cross-tenant import jobs. You can query the `CDB` view only in the sys tenant, and the `DBA` view in all tenants. |
-| CDB/DBA_OB_IMPORT_TABLE_JOB_HISTORY | New | Displays the history of cross-tenant import jobs. You can query the `CDB` view only in the sys tenant, and the `DBA` view in all tenants. |
-| CDB/DBA_OB_IMPORT_TABLE_TASKS | New | Displays information about table-level cross-tenant import tasks. You can query the `CDB` view only in the sys tenant, and the `DBA` view in all tenants. |
-| CDB/DBA_OB_IMPORT_TABLE_TASK_HISTORY | New | Displays the history of table-level cross-tenant import tasks. You can query the `CDB` view only in the sys tenant, and the `DBA` view in all tenants. |
-| [GV$OB_SQL_AUDIT] | Modified | The `PLSQL_EXEC_TIME` column is added to show the PL execution time in μs, excluding the SQL execution time. |
+|---------------------------------------|---------|---------|
+| DBA_OB_TRUSTED_ROOT_CERTIFICATE | New | Displays the list of client CA root certificates trusted by the OceanBase cluster and the certificate expiration time. This view is supported in the sys tenant. |
+| CDB/DBA/ALL/USER_MVIEW_LOGS | New | Displays information about materialized view logs. The `CDB` view is supported only in the sys tenant. The `ALL` and `USER` views are supported only in Oracle tenants. The `DBA` view is supported in all tenants. |
+| CDB/DBA/ALL/USER_MVIEWS | New | Displays information about materialized views. The `CDB` view is supported only in the sys tenant. The `ALL` and `USER` views are supported only in Oracle tenants. The `DBA` view is supported in all tenants. |
+| CDB/DBA/USER_MVREF_STATS_SYS_DEFAULTS | New | Displays the system-level default values of refresh history statistical attributes for materialized views. The `CDB` view is supported only in the sys tenant. The `USER` view is supported only in Oracle tenants. The `DBA` view is supported in all tenants. |
+| CDB/DBA/USER_MVREF_STATS_PARAMS | New | Displays the refresh statistical attributes of each materialized view. The `CDB` view is supported only in the sys tenant. The `USER` view is supported only in Oracle tenants. The `DBA` view is supported in all tenants. |
+| CDB/DBA/USER_MVREF_RUN_STATS | New | Displays information about each refresh of materialized views. Each refresh is identified by a refresh ID (`REFRESH_ID`). The information includes the timing statistics and refresh parameters of each refresh. The `CDB` view is supported only in the sys tenant. The `USER` view is supported only in Oracle tenants. The `DBA` view is supported in all tenants. |
+| CDB/DBA/USER_MVREF_STATS | New | Displays basic timing statistics on materialized view refreshes. The `CDB` view is supported only in the sys tenant. The `USER` view is supported only in Oracle tenants. The `DBA` view is supported in all tenants. |
+| CDB/DBA/USER_MVREF_CHANGE_STATS | New | Displays data changes in base tables involved in the refreshes of all materialized views and the information required for loading the data changes. The `CDB` view is supported only in the sys tenant. The `USER` view is supported only in Oracle tenants. The `DBA` view is supported in all tenants. |
+| CDB/DBA/USER_MVREF_STMT_STATS | New | Displays information about materialized view refresh statements. The `CDB` view is supported only in the sys tenant. The `USER` view is supported only in Oracle tenants. The `DBA` view is supported in all tenants. |
+| CDB/DBA_INDEX_USAGE | New | Displays information about index usage. The `CDB` view is supported only in the sys tenant. The `DBA` view is supported in all tenants. |
+| DBA_OB_CLONE_PROGRESS | New | Displays information about ongoing tenant cloning jobs. This view is supported in the sys tenant. |
+| DBA_OB_CLONE_HISTORY | New | Displays information about completed tenant cloning jobs. This view is supported in the sys tenant. |
+| CDB/DBA_OB_AUX_STATISTICS | New | Displays auxiliary statistics of each tenant. The `CDB` view is supported only in the sys tenant. The `DBA` view is supported in all tenants. |
+| [G]V$OB_TABLET_COMPACTION_HISTORY | Modified | The `KEPT_SNAPSHOT` column is added to display multi-version retention timestamps. The `MERGE_LEVEL` column is added to display macroblock and microblock reuse information. The width of the `COMMENTS` column is adjusted. |
+| [G]V$OB_PARAMETERS | Modified | The `DATA_TYPE` column displays parameter data types. The default value of this column is changed from `NULL` to `UNKNOWN`. |
+| [G]V$OB_PROCESSLIST | Modified | The `USER_CLIENT_PORT` column is added to display the client port number. |
+| DBA_SCHEDULER_JOB_CLASSES | New | Displays information about the created job classes. This view is supported in the sys tenant and Oracle tenants. |
+| CDB/DBA_OB_RECOVER_TABLE_JOBS | New | Displays information about table-level restore jobs. The `CDB` view is supported only in the sys tenant. The `DBA` view is supported in all tenants. |
+| CDB/DBA_OB_RECOVER_TABLE_JOB_HISTORY | New | Displays the history of table-level restore jobs. The `CDB` view is supported only in the sys tenant. The `DBA` view is supported in all tenants. |
+| CDB/DBA_OB_IMPORT_TABLE_JOBS | New | Displays information about cross-tenant import jobs. The `CDB` view is supported only in the sys tenant. The `DBA` view is supported in all tenants. |
+| CDB/DBA_OB_IMPORT_TABLE_JOB_HISTORY | New | Displays the history of cross-tenant import jobs. The `CDB` view is supported only in the sys tenant. The `DBA` view is supported in all tenants. |
+| CDB/DBA_OB_IMPORT_TABLE_TASKS | New | Displays information about table-level cross-tenant import tasks. The `CDB` view is supported only in the sys tenant. The `DBA` view is supported in all tenants. |
+| CDB/DBA_OB_IMPORT_TABLE_TASK_HISTORY | New | Displays the history of table-level cross-tenant import tasks. The `CDB` view is supported only in the sys tenant. The `DBA` view is supported in all tenants. |
+| [G]V$OB_SQL_AUDIT | Modified | The `PLSQL_EXEC_TIME` column is added to display the PL execution time (excluding the SQL execution time), in microseconds (us). |
| [G]V$OB_LS_SNAPSHOTS | New | Displays information about physical log stream snapshots in resource units. |
-#### Parameter changes
+#### Parameter/System variable changes
| Parameter/System variable | Change type | Description |
|----------------------------------|------------|---------|
-| enable_rpc_authentication_bypass | New | Specifies whether to allow OMS to connect to a cluster without undergoing RPC security authentication when RPC security authentication is enabled for the OBServer node. It is a cluster-level parameter. |
-| default_compress_func | Modified | The `zlib_lite_1.0` value is added, which specifies to use a `zlib` compression algorithm with higher performance in an environment that supports hardware acceleration. The `zlib_1.0` value is deleted. The `zlib_1.0` compression algorithm is prohibited for new tables. |
-| large_query_threshold | Modified | The value range is changed from \[1ms, +∞) to \[0ms, +∞). The value `0` specifies to disable the large query identification feature. |
-| default_table_store_format | New | Specifies the default format for a primary table created in a user tenant. It is a tenant-level parameter. The default value is `row`. If `with column group` is not specified during table creation, a row-based storage table is created by default. You can change the value to `column` (indicating a columnar storage table) or `compound` (indicating a compound rowstore-columstore table) as needed. |
-| server_cpu_quota_min | Modified | The effective mode is changed from effective upon a restart to effective immediately |
-| server_cpu_quota_max | Modified | The effective mode is changed from effective upon a restart to effective immediately |
+| enable_rpc_authentication_bypass | New | Specifies whether to allow OMS to bypass RPC security authentication when it connects to a cluster where RPC security authentication is enabled for OBServer nodes. This is a cluster-level parameter. |
+| default_compress_func | Modified | The value `zlib_lite_1.0` is added, which specifies to use the `zlib` compression algorithm with higher performance in an environment that supports hardware acceleration. The value `zlib_1.0` is removed. The `zlib_1.0` compression algorithm is no longer supported for table creation. |
+| large_query_threshold | Modified | The value range is changed from \[1ms, +∞) to \[0ms, +∞). The value `0` specifies to disable the large query identification feature. |
+| default_table_store_format | New | Specifies the default format of a primary table created in a user tenant. This is a tenant-level parameter. The default value is `row`, indicating that a table is created as a rowstore table when `with column group` is not specified. You can change the value to `column` (columnstore table) or `compound` (rowstore-columnstore redundant table). |
+| server_cpu_quota_min | Modified | The effective mode is changed from taking effect upon restart to taking effect immediately. |
+| server_cpu_quota_max | Modified | The effective mode is changed from taking effect upon restart to taking effect immediately. |
#### Function/PL package changes
| Function/PL package | Change type | Description |
|----------------------------------|---------|---------|
-| ob_transaction_id | New | Queries the transaction ID of the current session. If the current session is not in an active transaction, `0` is returned. It is a built-in function. |
-| DBMS_TRUSTED_CERTIFICATE_MANAGER | New | Contains three subprograms `ADD_TRUSTED_CERTIFICATE`, `DELETE_TRUSTED_CERTIFICATE`, and `UPDATE_TRUSTED_CERTIFICATE`, which are respectively used for adding, deleting, and modifying trusted root CA certificates of the cluster. You can call this package in the sys tenant. This package is supported when RPC authentication is enabled. |
+| ob_transaction_id | New | Queries the transaction ID of the current session. If the session is not in an active transaction, `0` is returned. The `ob_transaction_id()` built-in function is supported in both Oracle and MySQL modes. |
+| DBMS_TRUSTED_CERTIFICATE_MANAGER | New | Provides the `ADD_TRUSTED_CERTIFICATE`, `DELETE_TRUSTED_CERTIFICATE`, and `UPDATE_TRUSTED_CERTIFICATE` subprograms respectively for adding, deleting, and modifying client CA root certificates trusted by an OceanBase cluster. This PL package is supported in the sys tenant. You can use the package after you enable RPC authentication. |
#### Syntax changes
-* The DDL syntax for pre-aggregating column indexes in SSTables is added.
-* The syntax for cloning tenants is added.
-* The syntax for resetting parameters is added.
-* The columnar storage and columnar storage indexing syntaxes are added.
-* Syntaxes related to materialized views are added.
-* The syntax for performing partition-level major compactions is added.
+| Syntax change | Description |
+|------|---------|
+| The DDL syntax that supports the `SKIP_INDEX` attribute for column pre-aggregation in SSTables is added. | For more information, see [Column skip index attribute (MySQL mode)](../../../700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/250.identify-skip-index-properties-of-mysql-mode.md) and [Column skip index attribute (Oracle mode)](../../../700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/250.identify-skip-index-properties-of-oracle-mode). |
+| The syntax for tenant cloning is added. | For more information, see [CREATE TENANT](../../../700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/800.create-tenant.md). |
+| The syntax for parameter resetting is added. | For more information, see [PARAMETER](../../../700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/200.alter-system/2600.PARAMETER.md). |
+| The syntaxes related to column store and columnstore indexes are added. | For more information, see [CREATE TABLE (MySQL mode)](../../../700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2600.create-table-of-mysql-mode.md), [CREATE TABLE (Oracle mode)](../../../700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/2400.create-table-of-oracle-mode.md), [CREATE INDEX (MySQL mode)](../../../700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2200.create-index-of-mysql-mode.md), and [CREATE INDEX (Oracle mode)](../../../700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1600.create-index-of-oracle-mode.md). |
+| The syntaxes related to materialized views are added. | For more information, see [CREATE MATERIALIZED VIEW (MySQL mode)](../../../700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2250.create-materialized-views-of-mysql-mode-in-sql.md) and [CREATE MATERIALIZED VIEW (Oracle mode)](../../../700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1750.create-materialized-views-of-oracle-mode-in-sql.md). |
+| The syntaxes related to partition-level major compactions are added. | For more information, see [MERGE](../../../700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/200.alter-system/1300.merge.md). |
### Recommended versions of tools
-The following table lists the recommended versions of tools for OceanBase Database V4.3.0_CE.
+The following table lists the recommended versions of tools for OceanBase Database V4.3.0.
| Tool | Version | Remarks |
-|-------------|------------|------|
+|-------------|---------------|------|
| ODP | V4.2.3 BP1 | - |
| OceanBase Cloud Platform (OCP) | V4.2.2 BP1 | - |
| OceanBase Developer Center (ODC) | V4.2.3 BP1 | - |
| OceanBase CDC | V4.3.0 | - |
-| OMS | V4.2.2 | OceanBase Database V4.3.0_CE can serve only as the destination. Incremental data cannot be pulled from OceanBase Database V4.3.0_CE. |
| OMS | Public cloud version | OMS does not currently support OceanBase Database V4.3.0 as the source. OceanBase Database V4.3.0 can be used only as the destination, and the schema migration process does not take into account whether tables are stored in columnar format. |
-| OCCI | V1.0.3 | - |
-| OBCI | V2.0.8 | - |
-| ECOB | V1.1.8 | - |
-| OBClient | V2.2.4 | - |
-| LibOBClient | V2.2.4 | - |
-| OBJDBC | V2.4.8 | - |
-| OBODBC | V2.0.8 | - |
-| OBLOADER | V4.2.8.2 | - |
+| OceanBase C++ Call Interface (OCCI) | V1.0.3 | - |
+| OceanBase Call Interface (OBCI) | V2.0.8 | - |
+| Embedded SQL in C for OceanBase (ECOB) | V1.1.8 | - |
+| OBClient | V2.2.4 | - |
+| OceanBase Connector/C | V2.2.4 | - |
+| OceanBase Connector/J | V2.4.8 | - |
+| OceanBase Connector/ODBC | V2.0.8 | - |
+| OBLOADER | V4.2.8.2 | - |
### Upgrade notes
-* To use OceanBase Database V4.3.0, you need to create a cluster. Smooth upgrade from an earlier version, such as V1.x, V2.x, V3.x, V4.1, or V4.2.x, to V4.3.0 Beta is not supported. If you want to upgrade to V4.3.0, deploy an OceanBase cluster of V4.3.0 and then migrate existing data to the cluster by using OBDUMPER & OBLOADER or OMS.
-* Offline upgrade from V4.3.0 Alpha to V4.3.0 Beta, and smooth upgrade from V4.3.0 Alpha Hotfix1 to V4.3.0 Beta are supported.
+* When you use OceanBase Database V4.3.0 for the first time, you need to create a new cluster. Smooth upgrade from OceanBase Database V1.x, V2.x, V3.x, V4.1, or V4.2.x to V4.3.0 Beta is not supported. To upgrade from an earlier version, migrate your data to a V4.3.0 Beta cluster by using logical import and export.
+* Downtime upgrade from OceanBase Database V4.3.0 Alpha to V4.3.0 Beta is supported. Smooth upgrade from V4.3.0 Alpha Hotfix1 to V4.3.0 Beta is supported.
* Upgrade from V4.3.0 Beta to V4.3.x will be supported later.
-* Upgrade from V4.2.x to V4.3.x will be supported in later versions.
+* As the version evolves, upgrade paths from V4.2.x to V4.3.x will be supported.
### Considerations
-* We recommend that you set the maximum concurrency for bypass import based on the value of `Tenant memory × 0.001/2 MB`. If the concurrency exceeds this value, the memory for temporary files may be insufficient. When you use bypass import, we recommend that you split large files into multiple smaller ones to improve the import efficiency.
-
-* OMS does not support the `MINIMAL` mode for now. When you use OMS for incremental data synchronization, you are not allowed to set `binlog_row_image` to `MINIMAL`. The default value of this parameter is `FULL`.
\ No newline at end of file
+* We recommend that you set the maximum concurrency for bypass import to `Tenant memory * 0.001/2 MB`. If you set the concurrency to a larger value, memory for storing temporary files may be insufficient. We recommend that you split a large file into multiple small ones for bypass import to improve the import efficiency.
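+ For example, under one reading of this formula, a tenant with 100 GB of memory gives a maximum concurrency of roughly 100 GB * 0.001 / 2 MB ≈ 51 (an illustrative calculation, not an official sizing rule).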
+* Currently, OMS is not adapted to the `MINIMAL` mode. If you use OMS for incremental data synchronization, you are not allowed to set `binlog_row_image` to `MINIMAL`. The default value is `FULL`.
\ No newline at end of file
From db34be21fcb5e6bfae1a8da1f2c86e90a65d2d11 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Mon, 15 Apr 2024 10:06:30 +0800
Subject: [PATCH 11/63] 430-beta-fix-1
---
.../9600.V4.3/9690.ob-430_ee.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/en-US/900.release-notes/10100.enterprise-edition-history-release/9600.V4.3/9690.ob-430_ee.md b/en-US/900.release-notes/10100.enterprise-edition-history-release/9600.V4.3/9690.ob-430_ee.md
index 6d6e15e90d..25c7c1616e 100644
--- a/en-US/900.release-notes/10100.enterprise-edition-history-release/9600.V4.3/9690.ob-430_ee.md
+++ b/en-US/900.release-notes/10100.enterprise-edition-history-release/9600.V4.3/9690.ob-430_ee.md
@@ -327,7 +327,7 @@ Here we use three OBServer servers, with the architecture of 1:1:1 to deploy Oce
| Syntax change | Description |
|------|---------|
-| The DDL syntax that supports the `SKIP_INDEX` attribute for column pre-aggregation in SSTables is added. | For more information, see [Column skip index attribute (MySQL mode)](../../../700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/250.identify-skip-index-properties-of-mysql-mode.md) and [Column skip index attribute (Oracle mode)](../../../700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/250.identify-skip-index-properties-of-oracle-mode). |
+| The DDL syntax that supports the `SKIP_INDEX` attribute for column pre-aggregation in SSTables is added. | - |
| The syntax for tenant cloning is added. | For more information, see [CREATE TENANT](../../../700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/800.create-tenant.md). |
| The syntax for parameter resetting is added. | For more information, see [PARAMETER](../../../700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/200.alter-system/2600.PARAMETER.md). |
| The syntaxes related to column store and columnstore indexes are added. | For more information, see [CREATE TABLE (MySQL mode)](../../../700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2600.create-table-of-mysql-mode.md), [CREATE TABLE (Oracle mode)](../../../700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/2400.create-table-of-oracle-mode.md), [CREATE INDEX (MySQL mode)](../../../700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2200.create-index-of-mysql-mode.md), and [CREATE INDEX (Oracle mode)](../../../700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1600.create-index-of-oracle-mode.md). |
From 6e334782eca2be022ff31b0d3e6cc058a2ea51b3 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Mon, 15 Apr 2024 10:41:55 +0800
Subject: [PATCH 12/63]
v430-beta-100.connection-lost-with-error-code-2013-of-mysql-mode-update
---
...ion-lost-with-error-code-2013-of-mysql-mode.md | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/en-US/300.develop/100.application-development-of-mysql-mode/700.application-error-handling-specification-and-common-error-solutions/200.common-errors-and-solutions-of-mysql-mode/100.connection-lost-with-error-code-2013-of-mysql-mode.md b/en-US/300.develop/100.application-development-of-mysql-mode/700.application-error-handling-specification-and-common-error-solutions/200.common-errors-and-solutions-of-mysql-mode/100.connection-lost-with-error-code-2013-of-mysql-mode.md
index de6895c75d..9009baee82 100644
--- a/en-US/300.develop/100.application-development-of-mysql-mode/700.application-error-handling-specification-and-common-error-solutions/200.common-errors-and-solutions-of-mysql-mode/100.connection-lost-with-error-code-2013-of-mysql-mode.md
+++ b/en-US/300.develop/100.application-development-of-mysql-mode/700.application-error-handling-specification-and-common-error-solutions/200.common-errors-and-solutions-of-mysql-mode/100.connection-lost-with-error-code-2013-of-mysql-mode.md
@@ -61,11 +61,6 @@ Take the following steps to view and modify the values of session-related variab
3. Set the value of the `wait_timeout` variable to `28800`.
-
- Note
- The wait_timeout
variable specifies the time in seconds that the server waits for an interactive connection to become active before closing it.
-
-
```sql
obclient [test]> SET wait_timeout = 28800;
Query OK, 0 rows affected
@@ -73,10 +68,6 @@ Take the following steps to view and modify the values of session-related variab
4. Set the value of the `interactive_timeout` variable to `28800`.
-
- Note
- The interactive_timeout
variable specifies the time in seconds that the server waits for an non-interactive connection to become active before closing it.
-
```sql
obclient [test]> SET interactive_timeout = 28800;
@@ -96,3 +87,9 @@ Take the following steps to view and modify the values of session-related variab
+----+---------------------+---------------------+
3 rows in set
```
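+
+As a quick check, you can read both values in one statement. This is a hedged example that assumes the MySQL-style `WHERE` clause of `SHOW VARIABLES` is available in MySQL mode:
+
+```sql
+obclient [test]> SHOW VARIABLES WHERE Variable_name IN ('wait_timeout', 'interactive_timeout');
+```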
+
+## References
+
+* For more information about the `wait_timeout` variable, see [wait_timeout](../../../../700.reference/800.configuration-items-and-system-variables/200.system-variable/300.global-system-variable/13700.wait_timeout-global.md).
+
+* For more information about the `interactive_timeout` variable, see [interactive_timeout](../../../../700.reference/800.configuration-items-and-system-variables/200.system-variable/300.global-system-variable/3200.interactive_timeout-global.md).
\ No newline at end of file
From ac4db4d320a571b86a33a7d34954d1e8b016fffc Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Mon, 15 Apr 2024 11:36:59 +0800
Subject: [PATCH 13/63] v430-beta-400.deploy-update
---
.../100.add-server.md | 22 ++++++
...server-machine-to-the-ocp-resource-pool.md | 68 -------------------
...eployment-of-oceanbase-database-use-ocp.md | 4 +-
...uorum-high-availability-service-use-ocp.md | 4 +-
...ingle-replica-oceanbase-cluster-use-ocp.md | 4 +-
...hree-oceanbase-replica-clusters-use-ocp.md | 4 +-
.../500.configure-limits-conf-optional.md | 8 +--
.../600.deploy-by-systemd.md | 2 +-
.../9600.V4.3/9690.ob-430_ee.md | 2 +-
9 files changed, 36 insertions(+), 82 deletions(-)
delete mode 100644 en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/100.add-observer-machine-to-the-ocp-resource-pool.md
diff --git a/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/100.configuring-the-deploy-environment-through-oat/100.add-server.md b/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/100.configuring-the-deploy-environment-through-oat/100.add-server.md
index b9e9603068..30c50c888e 100644
--- a/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/100.configuring-the-deploy-environment-through-oat/100.add-server.md
+++ b/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/100.configuring-the-deploy-environment-through-oat/100.add-server.md
@@ -78,6 +78,11 @@ You have installed OAT. For more information, see [Deploy OAT](../../200.prepara
The installed OCPs refer to those available on the page that appears after you choose Product Services > Products > OCP in the OAT console. For more information about how to install OCP, see Deploy OCP.
+
+ Notice
+ - The installed OCPs refer to those available on the page that appears after you choose Product Services > Products > OCP in the OAT console. For more information about how to install OCP, see Deploy OCP.
- Before deploying an OceanBase cluster using OCP, it is necessary to add the nodes where the OceanBase cluster will be deployed to the OCP resource pool.
+ - Before deploying an OceanBase cluster using OCP, it is necessary to add the nodes where the OceanBase cluster will be deployed to the OCP resource pool.
+
+
* **Disk initialization settings**: To ensure the stability of OceanBase Database and other OceanBase services, we recommend that you configure the directories of the services on independent disks or partitions. If you select the disk initialization option, OAT automatically creates the corresponding volume group (VG) by using a logical volume manager (LVM), and mounts the logical volumes (LVs) of the corresponding size to the specified mounting directories.
* **Select a disk or partition**: The system automatically identifies unmounted disks. You can select one or more disks to mount.
@@ -95,7 +100,24 @@ Then you can view the new server in the server list. The server is in the **Init
If an error is returned for a subtask that does not affect business, you can view the logs and manually mark the subtask as success.
+## What to do next
+
+After you successfully add the servers, you can deploy OCP according to your specific requirements.
+
+* [Deploy OCP](../200.deploy-ocp-use-oat/400.deploy-ocp.md)
+
+After you successfully deploy OCP and add servers, you can deploy an OceanBase cluster according to your specific requirements.
+
+* [Deploy a standalone OceanBase database by using OCP](../300.deploy-oceanbase-cluster-use-ocp/200.stand-alone-deployment-of-oceanbase-database-use-ocp.md)
+
+* [Deploy a two-replica OceanBase cluster with the arbitration service by using OCP](../300.deploy-oceanbase-cluster-use-ocp/300.deploy-the-quorum-high-availability-service-use-ocp.md)
+
+* [Deploy a single-replica OceanBase cluster by using OCP](../300.deploy-oceanbase-cluster-use-ocp/400.deploy-single-replica-oceanbase-cluster-use-ocp.md)
+
+* [Deploy a three-replica OceanBase cluster by using OCP](../300.deploy-oceanbase-cluster-use-ocp/500.deploy-three-oceanbase-replica-clusters-use-ocp.md)
+
## References
* For more information about how to add a server, see [Add a server](https://en.oceanbase.com/docs/enterprise-oat-10000000000949578).
+
* After a server is initialized, you can perform O&M operations on it. For example, you can change the purpose of the server and configure clock synchronization and disk initialization for the server. For more information about server O&M operations, see [Perform O&M operations on a server](https://en.oceanbase.com/docs/enterprise-oat-10000000000949575).
\ No newline at end of file
diff --git a/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/100.add-observer-machine-to-the-ocp-resource-pool.md b/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/100.add-observer-machine-to-the-ocp-resource-pool.md
deleted file mode 100644
index b35ed4cfca..0000000000
--- a/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/100.add-observer-machine-to-the-ocp-resource-pool.md
+++ /dev/null
@@ -1,68 +0,0 @@
-|description||
-|---|---|
-|keywords||
-|dir-name||
-|dir-name-en||
-|tenant-type||
-
-# Add a server to the resource pool
-
-Before you deploy an OceanBase cluster by using OceanBase Cloud Platform (OCP), you must add the server where the OceanBase cluster is to be deployed to the resource pool of OCP.
-
-
- Note
- This topic uses OCP V4.1.0 as an example. The GUI varies with the OCP version.
-
-
-## Prerequisites
-
-* You have prepared the servers for deploying an OceanBase cluster, added the servers to OceanBase Admin Toolkit (OAT), and initialized the servers. For more information, see [Prepare servers](../../200.preparations-before-deploy/100.prepare-servers.md) and [Add a server](../100.configuring-the-deploy-environment-through-oat/100.add-server.md).
-* You have deployed OCP. For more information, see [Deploy OCP](../200.deploy-ocp-use-oat/400.deploy-ocp.md).
-
-## Procedure
-
-1. Enter the access URL in the format of `http://:8080` in the address bar of your browser and press **Enter**.
- Then, use the admin account to log on to the OCP console.
-
-
- Note
- Contact OceanBase Technical Support to obtain the default password of the admin account. To ensure account security, change the password upon the first logon.
-
-
-2. In the left-side navigation pane, click **Hosts**.
-
-3. In the upper-right corner of the page that appears, click **Add Host**.
-
-4. In the dialog box that appears, enter the server information.
-
- The following table describes the parameters that you need to configure.
-
- | **Parameter** | **Description** |
- |------------|----------------------|
- | **IP Address** | The NIC IP address of the OBServer node added to the resource pool. |
- | **SSH Port** | The SSH port. Default value: `22`. |
- | **Host Type** | The label that you specify for hosts with the same configurations to facilitate host management. If the required host type is unavailable, you can click **Create Host Type** to add a host type. |
- | **IDC** | Select the IDC where the server is deployed. IDC information includes **IDC** and **Region**. An IDC is one of the host attributes that you must record for an OceanBase cluster. The IDC is referenced in OceanBase load balancing and SQL statement routing strategies. Specify the actual IDC. A region specifies the geographical area where the host is located. It is one of the host attributes that you must record for an OceanBase cluster. The region affects the OceanBase load balancing and SQL statement routing strategies. Specify the actual region. Note: OCP V3.1.1 and later support the multi-zone mode. When you add an IDC, it is created in the zone where the current OCP is deployed. |
- | **Host Type** | You can select **Physical Machine**, **Container**, or **ECS**. |
- | **Credentials** | Select the credentials used to remotely log on to the physical server. You can select **Add Credential** from the drop-down list to create credentials. |
- | **Host Alias** | A host alias is a label that you specify for hosts with the same configurations. We recommend that you specify a meaningful alias to facilitate host management. This parameter is optional. |
- | **Description** | The host comments that can be used to facilitate host management. This parameter is optional. |
-
-
-
-5. Click **OK**.
-
-## What to do next
-
-After the server is added, you can use the following methods to deploy the OceanBase cluster:
-
-* [Deploy a standalone OceanBase database by using OCP](../300.deploy-oceanbase-cluster-use-ocp/200.stand-alone-deployment-of-oceanbase-database-use-ocp.md)
-* [Deploy a two-replica OceanBase cluster with the arbitration service by using OCP](../300.deploy-oceanbase-cluster-use-ocp/300.deploy-the-quorum-high-availability-service-use-ocp.md)
-* [Deploy a single-replica OceanBase cluster by using OCP](../300.deploy-oceanbase-cluster-use-ocp/400.deploy-single-replica-oceanbase-cluster-use-ocp.md)
-* [Deploy a three-replica OceanBase cluster by using OCP](../300.deploy-oceanbase-cluster-use-ocp/500.deploy-three-oceanbase-replica-clusters-use-ocp.md)
-
-## References
-
-* [Prepare servers](../../200.preparations-before-deploy/100.prepare-servers.md)
-* [Deploy OCP](../200.deploy-ocp-use-oat/400.deploy-ocp.md)
-* [Add a host](https://en.oceanbase.com/docs/enterprise-oceanbase-ocp-en-10000000000838723)
\ No newline at end of file
diff --git a/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/200.stand-alone-deployment-of-oceanbase-database-use-ocp.md b/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/200.stand-alone-deployment-of-oceanbase-database-use-ocp.md
index 2cc5aca4e8..d3c7be26cf 100644
--- a/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/200.stand-alone-deployment-of-oceanbase-database-use-ocp.md
+++ b/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/200.stand-alone-deployment-of-oceanbase-database-use-ocp.md
@@ -19,7 +19,7 @@ This topic describes how to deploy a standalone OceanBase database by using Ocea
## Prerequisites
* You have deployed OCP. For more information, see [Deploy OCP](../200.deploy-ocp-use-oat/400.deploy-ocp.md).
-* You have added the server where the OceanBase cluster is to be deployed to the resource pool of OCP. For more information, see [Add a server to the resource pool of OCP](../300.deploy-oceanbase-cluster-use-ocp/100.add-observer-machine-to-the-ocp-resource-pool.md).
+* You have added the server where the OceanBase cluster is to be deployed to the resource pool of OCP. For more information, see [Add a server](../100.configuring-the-deploy-environment-through-oat/100.add-server.md).
* You have logged on to the OCP console and have the `CREATE CLUSTER` privilege. For more information, see [Role overview](https://en.oceanbase.com/docs/enterprise-oceanbase-ocp-en-10000000000838707) and [Default OCP roles](https://en.oceanbase.com/docs/enterprise-oceanbase-ocp-en-10000000000838497).
* You have obtained the RPM package of OceanBase Database. For more information, see [Prepare installation packages](../../200.preparations-before-deploy/300.prepare-installation-packages.md).
@@ -83,7 +83,7 @@ After the cluster is created, you can create user tenants based on your business
## References
* [Deploy OCP](../200.deploy-ocp-use-oat/400.deploy-ocp.md)
-* [Add a server to the resource pool of OCP](../300.deploy-oceanbase-cluster-use-ocp/100.add-observer-machine-to-the-ocp-resource-pool.md)
+* [Add a server](../100.configuring-the-deploy-environment-through-oat/100.add-server.md)
* [Prepare installation packages](../../200.preparations-before-deploy/300.prepare-installation-packages.md)
* [Role overview](https://en.oceanbase.com/docs/enterprise-oceanbase-ocp-en-10000000000838707)
* [Default OCP roles](https://en.oceanbase.com/docs/enterprise-oceanbase-ocp-en-10000000000838497)
diff --git a/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/300.deploy-the-quorum-high-availability-service-use-ocp.md b/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/300.deploy-the-quorum-high-availability-service-use-ocp.md
index 59ff7a3c35..6e0bd3e14f 100644
--- a/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/300.deploy-the-quorum-high-availability-service-use-ocp.md
+++ b/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/300.deploy-the-quorum-high-availability-service-use-ocp.md
@@ -19,7 +19,7 @@ This topic describes how to deploy a two-replica OceanBase cluster with one arbi
## Prerequisites
* You have deployed OCP. For more information, see [Deploy OCP](../200.deploy-ocp-use-oat/400.deploy-ocp.md).
-* You have added the servers where the OceanBase cluster is to be deployed to the resource pool. For more information, see [Add a server to the resource pool](../300.deploy-oceanbase-cluster-use-ocp/100.add-observer-machine-to-the-ocp-resource-pool.md).
+* You have added the servers where the OceanBase cluster is to be deployed to the resource pool. For more information, see [Add a server](../100.configuring-the-deploy-environment-through-oat/100.add-server.md).
* You have logged on to the OCP console and have the `CREATE CLUSTER` privilege. For more information, see [Role overview](https://en.oceanbase.com/docs/enterprise-oceanbase-ocp-en-10000000000838707) and [Default OCP roles](https://en.oceanbase.com/docs/enterprise-oceanbase-ocp-en-10000000000838497).
* You have obtained the RPM package of OceanBase Database V4.1.0 or later. For more information, see [Prepare installation packages](../../200.preparations-before-deploy/300.prepare-installation-packages.md).
@@ -149,7 +149,7 @@ After the cluster is created, you can create user tenants based on your business
## References
* [Deploy OCP](../200.deploy-ocp-use-oat/400.deploy-ocp.md)
-* [Add a server to the resource pool](../300.deploy-oceanbase-cluster-use-ocp/100.add-observer-machine-to-the-ocp-resource-pool.md)
+* [Add a server](../100.configuring-the-deploy-environment-through-oat/100.add-server.md)
* [Prepare installation packages](../../200.preparations-before-deploy/300.prepare-installation-packages.md)
* [Role overview](https://en.oceanbase.com/docs/enterprise-oceanbase-ocp-en-10000000000838707)
* [Default OCP roles](https://en.oceanbase.com/docs/enterprise-oceanbase-ocp-en-10000000000838497)
diff --git a/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/400.deploy-single-replica-oceanbase-cluster-use-ocp.md b/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/400.deploy-single-replica-oceanbase-cluster-use-ocp.md
index 254a931ea7..a95b435a42 100644
--- a/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/400.deploy-single-replica-oceanbase-cluster-use-ocp.md
+++ b/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/400.deploy-single-replica-oceanbase-cluster-use-ocp.md
@@ -19,7 +19,7 @@ This topic describes how to deploy a single-replica OceanBase cluster by using O
## Prerequisites
* You have deployed OCP. For more information, see [Deploy OCP](../200.deploy-ocp-use-oat/400.deploy-ocp.md).
-* You have added the servers where the OceanBase cluster is to be deployed to the resource pool. For more information, see [Add a server to the resource pool](../300.deploy-oceanbase-cluster-use-ocp/100.add-observer-machine-to-the-ocp-resource-pool.md).
+* You have added the servers where the OceanBase cluster is to be deployed to the resource pool. For more information, see [Add a server](../100.configuring-the-deploy-environment-through-oat/100.add-server.md).
* You have logged on to the OCP console and have the `CREATE CLUSTER` privilege. For more information, see [Role overview](https://en.oceanbase.com/docs/enterprise-oceanbase-ocp-en-10000000000838707) and [Default OCP roles](https://en.oceanbase.com/docs/enterprise-oceanbase-ocp-en-10000000000838497).
* You have obtained the RPM package of OceanBase Database. For more information, see [Prepare installation packages](../../200.preparations-before-deploy/300.prepare-installation-packages.md).
@@ -103,7 +103,7 @@ After the cluster is created, you can create user tenants based on your business
## References
* [Deploy OCP](../200.deploy-ocp-use-oat/400.deploy-ocp.md)
-* [Add a server to the resource pool](../300.deploy-oceanbase-cluster-use-ocp/100.add-observer-machine-to-the-ocp-resource-pool.md)
+* [Add a server](../100.configuring-the-deploy-environment-through-oat/100.add-server.md)
* [Prepare installation packages](../../200.preparations-before-deploy/300.prepare-installation-packages.md)
* [Role overview](https://en.oceanbase.com/docs/enterprise-oceanbase-ocp-en-10000000000838707)
* [Default OCP roles](https://en.oceanbase.com/docs/enterprise-oceanbase-ocp-en-10000000000838497)
diff --git a/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/500.deploy-three-oceanbase-replica-clusters-use-ocp.md b/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/500.deploy-three-oceanbase-replica-clusters-use-ocp.md
index 9b7f8d7851..90ef9040cf 100644
--- a/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/500.deploy-three-oceanbase-replica-clusters-use-ocp.md
+++ b/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/300.deploy-oceanbase-cluster-use-ocp/500.deploy-three-oceanbase-replica-clusters-use-ocp.md
@@ -17,7 +17,7 @@ This topic describes how to deploy a three-replica OceanBase cluster by using Oc
## Prerequisites
* You have deployed OCP. For more information, see [Deploy OCP](../200.deploy-ocp-use-oat/400.deploy-ocp.md).
-* You have added the servers where the OceanBase cluster is to be deployed to the resource pool. For more information, see [Add a server to the resource pool](../300.deploy-oceanbase-cluster-use-ocp/100.add-observer-machine-to-the-ocp-resource-pool.md).
+* You have added the servers where the OceanBase cluster is to be deployed to the resource pool. For more information, see [Add a server](../100.configuring-the-deploy-environment-through-oat/100.add-server.md).
* You have logged on to the OCP console and have the `CREATE CLUSTER` privilege. For more information, see [Role overview](https://en.oceanbase.com/docs/enterprise-oceanbase-ocp-en-10000000000838707) and [Default OCP roles](https://en.oceanbase.com/docs/enterprise-oceanbase-ocp-en-10000000000838497).
* You have obtained the RPM package of OceanBase Database. For more information, see [Prepare installation packages](../../200.preparations-before-deploy/300.prepare-installation-packages.md).
@@ -101,7 +101,7 @@ After the cluster is created, you can create user tenants based on your business
## References
* [Deploy OCP](../200.deploy-ocp-use-oat/400.deploy-ocp.md)
-* [Add a server to the resource pool](../300.deploy-oceanbase-cluster-use-ocp/100.add-observer-machine-to-the-ocp-resource-pool.md)
+* [Add a server](../100.configuring-the-deploy-environment-through-oat/100.add-server.md)
* [Prepare installation packages](../../200.preparations-before-deploy/300.prepare-installation-packages.md)
* [Role overview](https://en.oceanbase.com/docs/enterprise-oceanbase-ocp-en-10000000000838707)
* [Default OCP roles](https://en.oceanbase.com/docs/enterprise-oceanbase-ocp-en-10000000000838497)
diff --git a/en-US/400.deploy/500.deploy-oceanbase-database-community-edition/200.local-deployment/200.environment-and-configuration-checks/500.configure-limits-conf-optional.md b/en-US/400.deploy/500.deploy-oceanbase-database-community-edition/200.local-deployment/200.environment-and-configuration-checks/500.configure-limits-conf-optional.md
index e3bd8b92cd..c1b512f0a4 100644
--- a/en-US/400.deploy/500.deploy-oceanbase-database-community-edition/200.local-deployment/200.environment-and-configuration-checks/500.configure-limits-conf-optional.md
+++ b/en-US/400.deploy/500.deploy-oceanbase-database-community-edition/200.local-deployment/200.environment-and-configuration-checks/500.configure-limits-conf-optional.md
@@ -34,8 +34,8 @@ root soft nofile 655350
root hard nofile 655350
* soft nofile 655350
* hard nofile 655350
-* soft stack 20480
-* hard stack 20480
+* soft stack unlimited
+* hard stack unlimited
* soft nproc 655360
* hard nproc 655360
* soft core unlimited
@@ -69,7 +69,7 @@ open files (-n) 655350
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
-stack size (kbytes, -s) 20480
+stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 655360
virtual memory (kbytes, -v) unlimited
@@ -82,6 +82,6 @@ In the previous output:
* `open files` indicates the maximum number of open file descriptors. The parameter corresponds to the `nofile` parameter in the `limits.conf` configuration file. Check whether the value is `655350`.
-* `stack size` indicates the stack size in kilo bytes. The parameter corresponds to the `stack` parameter in the `limits.conf` configuration file. Check whether the value is `20480`.
+* `stack size` indicates the stack size in kilobytes. The parameter corresponds to the `stack` parameter in the `limits.conf` configuration file. Check whether the value is `unlimited`.
* `max user processes` indicates the maximum number of user processes. The parameter corresponds to the `nproc` parameter in the `limits.conf` configuration file. Check whether the value is `655360`.
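+
+As a quick check (a hedged example), open a new shell session after editing `limits.conf` and confirm that the stack limit took effect:
+
+```shell
+$ ulimit -s
+unlimited
+```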
diff --git a/en-US/400.deploy/500.deploy-oceanbase-database-community-edition/200.local-deployment/600.deploy-by-systemd.md b/en-US/400.deploy/500.deploy-oceanbase-database-community-edition/200.local-deployment/600.deploy-by-systemd.md
index 73f9209c6d..3dcdc990cb 100644
--- a/en-US/400.deploy/500.deploy-oceanbase-database-community-edition/200.local-deployment/600.deploy-by-systemd.md
+++ b/en-US/400.deploy/500.deploy-oceanbase-database-community-edition/200.local-deployment/600.deploy-by-systemd.md
@@ -50,7 +50,7 @@ tab Online installation
2. Install OceanBase Database.
```shell
- [admin@test001 ~]$ sudo yum install oceanbase-ce
+ [admin@test001 ~]$ sudo yum install oceanbase-ce oceanbase-ce-libs obclient
```
By default, the preceding command installs the latest version of OceanBase Database. You can install a specific version by specifying the version number. For example, you can use the `yum install oceanbase-ce-4.2.2.0` command to install OceanBase Database V4.2.2. We recommend that you install the latest version.
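+
+   A hedged variant of the command above that pins all OceanBase packages to the same release (the version number is an example):
+
+   ```shell
+   [admin@test001 ~]$ sudo yum install oceanbase-ce-4.2.2.0 oceanbase-ce-libs-4.2.2.0 obclient
+   ```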
diff --git a/en-US/900.release-notes/10100.enterprise-edition-history-release/9600.V4.3/9690.ob-430_ee.md b/en-US/900.release-notes/10100.enterprise-edition-history-release/9600.V4.3/9690.ob-430_ee.md
index 25c7c1616e..792dca6a06 100644
--- a/en-US/900.release-notes/10100.enterprise-edition-history-release/9600.V4.3/9690.ob-430_ee.md
+++ b/en-US/900.release-notes/10100.enterprise-edition-history-release/9600.V4.3/9690.ob-430_ee.md
@@ -327,7 +327,7 @@ Here we use three OBServer servers, with the architecture of 1:1:1 to deploy Oce
| Syntax change | Description |
|------|---------|
-| The DDL syntax that supports the `SKIP_INDEX` attribute for column pre-aggregation in SSTables is added. | - |
+| The DDL syntax that supports the `SKIP_INDEX` attribute for column pre-aggregation in SSTables is added. | For more information, see [Column skip index attribute (MySQL mode)](../../../700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/250.identify-skip-index-properties-of-mysql-mode.md) and [Column skip index attribute (Oracle mode)](../../../700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/250.identify-skip-index-properties-of-oracle-mode). |
| The syntax for tenant cloning is added. | For more information, see [CREATE TENANT](../../../700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/800.create-tenant.md). |
| The syntax for parameter resetting is added. | For more information, see [PARAMETER](../../../700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/200.alter-system/2600.PARAMETER.md). |
| The syntaxes related to column store and columnstore indexes are added. | For more information, see [CREATE TABLE (MySQL mode)](../../../700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2600.create-table-of-mysql-mode.md), [CREATE TABLE (Oracle mode)](../../../700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/2400.create-table-of-oracle-mode.md), [CREATE INDEX (MySQL mode)](../../../700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2200.create-index-of-mysql-mode.md), and [CREATE INDEX (Oracle mode)](../../../700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1600.create-index-of-oracle-mode.md). |
From 44f5ade05e452a8c364c44f3a41d9bea15f80a32 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Mon, 15 Apr 2024 13:51:47 +0800
Subject: [PATCH 14/63] v430-beta-300.multi-tenant-architecture-update
---
.../100.tenant-resource-management.md | 28 ++++++-------------
.../200.resource-isolation-between-tenants.md | 16 +++++------
2 files changed, 16 insertions(+), 28 deletions(-)
diff --git a/en-US/700.reference/100.oceanbase-database-concepts/300.multi-tenant-architecture/500.tenants-and-resource-management/100.tenant-resource-management.md b/en-US/700.reference/100.oceanbase-database-concepts/300.multi-tenant-architecture/500.tenants-and-resource-management/100.tenant-resource-management.md
index b2cb307a3f..e18f2d750f 100644
--- a/en-US/700.reference/100.oceanbase-database-concepts/300.multi-tenant-architecture/500.tenants-and-resource-management/100.tenant-resource-management.md
+++ b/en-US/700.reference/100.oceanbase-database-concepts/300.multi-tenant-architecture/500.tenants-and-resource-management/100.tenant-resource-management.md
@@ -30,33 +30,21 @@ obclient> CREATE RESOURCE UNIT uc1 MAX_CPU 5, MIN_CPU 4, MEMORY_SIZE '36G', MAX_
Required parameters in the `CREATE RESOURCE UNIT` statement include the following ones:
-* `MAX_CPU`
+* `MAX_CPU`: the maximum CPU resources that can be provided by a resource unit that uses this resource unit configuration.
-* `MEMORY_SIZE`
+* `MEMORY_SIZE`: the memory size that can be provided by a resource unit that uses this resource unit configuration.
Take note of the following description about the optional parameters `MIN_CPU`, `MAX_IOPS`, `MIN_IOPS`, and `LOG_DISK_SIZE`:
-* By default, the value of `MIN_CPU` equals to that of `MAX_CPU`.
+* `MIN_CPU`: the minimum CPU resources that can be provided by a resource unit that uses this resource unit configuration. This parameter defaults to `MAX_CPU`.
-* By default, the value of `MIN_IOPS` equals to that of `MAX_IOPS`.
+* `MAX_IOPS` and `MIN_IOPS`: the maximum and minimum IOPS resources that can be provided by a resource unit that uses this resource unit configuration. The minimum values of the two parameters are both 1024, and the value of `MAX_IOPS` must be greater than or equal to that of `MIN_IOPS`.
-* By default, the value of `LOG_DISK_SIZE` is three times of the memory size, and is 2 GB at the minimum.
+ * If neither `MIN_IOPS` nor `MAX_IOPS` is specified, both default to `INT64_MAX`.
-* The minimum values of `MAX_IOPS` and `MIN_IOPS` are both `1024`, and the value of `MAX_IOPS` must be greater than or equal to that of `MIN_IOPS`.
+ * If only `MAX_IOPS` is specified, the `MIN_IOPS` value is the same as the `MAX_IOPS` value. If only `MIN_IOPS` is specified, the `MAX_IOPS` value is the same as the `MIN_IOPS` value.
-* If the values of `MIN_IOPS` and `MAX_IOPS` are not specified:
-
- * The default value of `MIN_IOPS` and `MAX_IOPS` is `INT64_MAX`.
-
- * If `IOPS_WEIGHT` is not specified, the value of `IOPS_WEIGHT` equals to that of `MIN_CPU`. If `IOPS_WEIGHT` is specified, the value specified for `IOPS_WEIGHT` prevails.
-
-* If only `MAX_IOPS` is specified:
-
- * The default value of `MIN_IOPS` is `INT64_MAX`, and vice versa.
-
- * If `IOPS_WEIGHT` is not specified, the default value `0` is used.
-
-Here, `MIN_CPU` indicates the minimum CPU resource that can be provided by a resource unit that uses the resource unit configuration.
+* `LOG_DISK_SIZE`: the log disk space, which is three times the memory size by default, and is at least 2 GB.
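+
+To see the defaulting rules in one place, here is a hedged sketch based on the rules above (the unit name is an example):
+
+```sql
+obclient> CREATE RESOURCE UNIT uc_demo MAX_CPU 4, MEMORY_SIZE '8G', MAX_IOPS 10000;
+-- Per the rules above: MIN_CPU defaults to MAX_CPU (4),
+-- MIN_IOPS takes the MAX_IOPS value (10000) because only MAX_IOPS is specified,
+-- and LOG_DISK_SIZE defaults to 3 × MEMORY_SIZE (24G), with a 2 GB minimum.
+```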
### Resource configuration of a meta tenant
@@ -82,7 +70,7 @@ The resources are described as follows:
The following table lists the resource specifications of a user tenant and a meta tenant.
-| Resource parameters | Tenant specifications | User tenant | Meta tenant |
+| Resource parameter | Tenant specification | User tenant | Meta tenant |
|--------------------|-------------------|---------------------|----------------------------|
| MIN_CPU/MAX_CPU | Minimum: 1 core | Shared CPU resource specifications | Shared CPU resource specifications |
| MEMORY_SIZE | Minimum: 1 GB | Minimum: 512 MB | 10% of the overall tenant memory specification. When the overall tenant memory is greater than or equal to 2 GB, the minimum memory of the meta tenant is 1 GB; when the overall tenant memory is less than 2 GB, the meta tenant memory is 512 MB. |
diff --git a/en-US/700.reference/100.oceanbase-database-concepts/300.multi-tenant-architecture/500.tenants-and-resource-management/200.resource-isolation-between-tenants.md b/en-US/700.reference/100.oceanbase-database-concepts/300.multi-tenant-architecture/500.tenants-and-resource-management/200.resource-isolation-between-tenants.md
index 465a39d4cb..eeb64269ff 100644
--- a/en-US/700.reference/100.oceanbase-database-concepts/300.multi-tenant-architecture/500.tenants-and-resource-management/200.resource-isolation-between-tenants.md
+++ b/en-US/700.reference/100.oceanbase-database-concepts/300.multi-tenant-architecture/500.tenants-and-resource-management/200.resource-isolation-between-tenants.md
@@ -19,17 +19,17 @@ Resource isolation refers to the behavior of controlling resource allocation amo
## Advantages of multi-tenant isolation in OceanBase Database
-Compared to Docker and VMs, OceanBase Database offers more lightweight tenant isolation and is easier to implement advanced features such as priority management. From the perspective of OceanBase Database requirements, Docker or VMs have the following disadvantages:
+Compared with Docker and VMs, OceanBase Database offers more lightweight tenant isolation and makes it easier to implement advanced features such as priority management. From the perspective of OceanBase Database requirements, Docker or VMs have the following disadvantages:
-* There are heavy overheads in runtime environments of both Docker and VMs, while OceanBase Database needs to support lightweight tenants.
+* There are heavy overheads in runtime environments of both Docker containers and VMs, while OceanBase Database needs to support lightweight tenants.
-* The specification change and migration of Docker or VMs are costly, while OceanBase Database is designed to help tenants change specifications and perform the migration at a faster speed.
+* Specification changes and migration are costly for Docker containers and VMs, while OceanBase Database is designed to let tenants change specifications and migrate faster.
-* Tenants cannot share resources such as object pools if Docker or VMs are used.
+* Tenants cannot share resources such as object pools if Docker containers or VMs are used.
-* Customizing resource isolation within Docker or VMs, such as priority support within tenants, is challenging.
+* Customizing resource isolation within Docker containers or VMs, such as priority support within tenants, is challenging.
-In addition, unified views cannot be exposed if Docker or VMs are used.
+In addition, unified views cannot be exposed if Docker containers or VMs are used.
## Isolation from the perspective of user tenants
@@ -173,7 +173,7 @@ Because the execution of SQL statements may involve I/O waiting and lock waiting
After cgroups are configured, worker threads of different tenants are stored in different cgroup directories, which improves CPU isolation between tenants. The isolation results are described as follows:
-* If a tenant on an OBServer node has a very high load and the rest of the tenants are relatively idle, then the CPU of the high-load tenant will also be limited by `MAX_CPY`.
+* If a tenant on an OBServer node has a very high load and the rest of the tenants are relatively idle, then the CPU of the high-load tenant will also be limited by `MAX_CPU`.
* If the load of the idle tenants increases, physical CPU resources become insufficient. In this case, cgroups allocate time slices based on weights.
@@ -281,4 +281,4 @@ min_token_cnt = 2,
max_token_cnt = 2,
ass_token_cnt = 2 , // The number of tokens allocated to the group. You can determine the number of tokens based on the token_cnt field. Typically, the values of the two fields are the same.
rpc_stat_info: pcode=0x150a:cnt=1489 pcode=0x150b:cnt=1091}) // The RPC pcodes that have been received most frequently by the tenant in a period of time. The statistical period is 10 seconds, and the top five RPC pcodes are printed.
-```
+```
\ No newline at end of file
From 0ac584d8529e7e6db716aceea043ce91f14239a7 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Mon, 15 Apr 2024 14:09:06 +0800
Subject: [PATCH 15/63] 430-beta-fix-2
---
.../9600.V4.3/9690.ob-430_ee.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/en-US/900.release-notes/10100.enterprise-edition-history-release/9600.V4.3/9690.ob-430_ee.md b/en-US/900.release-notes/10100.enterprise-edition-history-release/9600.V4.3/9690.ob-430_ee.md
index 792dca6a06..f0ec318078 100644
--- a/en-US/900.release-notes/10100.enterprise-edition-history-release/9600.V4.3/9690.ob-430_ee.md
+++ b/en-US/900.release-notes/10100.enterprise-edition-history-release/9600.V4.3/9690.ob-430_ee.md
@@ -327,7 +327,7 @@ Here we use three OBServer servers, with the architecture of 1:1:1 to deploy Oce
| Syntax change | Description |
|------|---------|
-| The DDL syntax that supports the `SKIP_INDEX` attribute for column pre-aggregation in SSTables is added. | For more information, see [Column skip index attribute (MySQL mode)](../../../700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/250.identify-skip-index-properties-of-mysql-mode.md) and [Column skip index attribute (Oracle mode)](../../../700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/250.identify-skip-index-properties-of-oracle-mode). |
+| The DDL syntax that supports the `SKIP_INDEX` attribute for column pre-aggregation in SSTables is added. | |
| The syntax for tenant cloning is added. | For more information, see [CREATE TENANT](../../../700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/800.create-tenant.md). |
| The syntax for parameter resetting is added. | For more information, see [PARAMETER](../../../700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/200.alter-system/2600.PARAMETER.md). |
| The syntaxes related to column store and columnstore indexes are added. | For more information, see [CREATE TABLE (MySQL mode)](../../../700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2600.create-table-of-mysql-mode.md), [CREATE TABLE (Oracle mode)](../../../700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/2400.create-table-of-oracle-mode.md), [CREATE INDEX (MySQL mode)](../../../700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2200.create-index-of-mysql-mode.md), and [CREATE INDEX (Oracle mode)](../../../700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1600.create-index-of-oracle-mode.md). |
From 8484f083e468c263ef51cdb02f06b05e790087bc Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Mon, 15 Apr 2024 14:36:40 +0800
Subject: [PATCH 16/63]
v430-beta-1000.high-data-reliability-and-availability-update
---
.../200.proxy-high-availability.md | 5 +-
.../200.backup-direvtory-structure.md | 234 +++++++++---------
2 files changed, 114 insertions(+), 125 deletions(-)
diff --git a/en-US/700.reference/100.oceanbase-database-concepts/1000.high-data-reliability-and-availability/100.high-availability-architecture/200.proxy-high-availability.md b/en-US/700.reference/100.oceanbase-database-concepts/1000.high-data-reliability-and-availability/100.high-availability-architecture/200.proxy-high-availability.md
index 0c04c2d6d1..509dc51ea7 100644
--- a/en-US/700.reference/100.oceanbase-database-concepts/1000.high-data-reliability-and-availability/100.high-availability-architecture/200.proxy-high-availability.md
+++ b/en-US/700.reference/100.oceanbase-database-concepts/1000.high-data-reliability-and-availability/100.high-availability-architecture/200.proxy-high-availability.md
@@ -69,9 +69,8 @@ Implement the asynchronous termination mechanism. When an OBServer node failure
* Asynchronously refresh the table location cache to detect replica switchover operations. If a replica switchover operation is performed upon a server failure, requests are routed to the new replica.
- **Limitations: The OBServer node must notify ODP of routing failures. If the OBServer node fails and does not return routing information, the table location cache is not refreshed.**
+ **Limitations: The OBServer node must notify ODP of routing failures. If the OBServer node fails and does not return routing information, the table location cache is not automatically refreshed.**
* Adopts the primary/standby cluster architecture for two IDCs in the Physical Standby Database solution. If the primary cluster is changed, ODP can switch to the new primary cluster efficiently.
- **Limitations: The RTO depends on the OBServer node switchover time and the interval at which a scheduled task is triggered to refresh the status of the OBServer node. The default interval is 20s. You can also specify a custom interval.**
-
+ **Limitations: The RTO depends on the OBServer node switchover time and the interval at which a scheduled task is triggered to refresh the status of the OBServer node. The default interval is 20s. You can also specify a custom interval.**
\ No newline at end of file
diff --git a/en-US/700.reference/100.oceanbase-database-concepts/1000.high-data-reliability-and-availability/500.backup-and-recovery/200.backup-direvtory-structure.md b/en-US/700.reference/100.oceanbase-database-concepts/1000.high-data-reliability-and-availability/500.backup-and-recovery/200.backup-direvtory-structure.md
index 1408bb4645..abcec12931 100644
--- a/en-US/700.reference/100.oceanbase-database-concepts/1000.high-data-reliability-and-availability/500.backup-and-recovery/200.backup-direvtory-structure.md
+++ b/en-US/700.reference/100.oceanbase-database-concepts/1000.high-data-reliability-and-availability/500.backup-and-recovery/200.backup-direvtory-structure.md
@@ -1,183 +1,170 @@
-# Backup directory structures
-
-## Structure of the log archive directory
-
-You can specify the time interval for splitting the log archive directory. By default, the directory is split by day. In other words, the log data directory of one day corresponds to a `backup_piece`. A `backup_piece` directory contains a `single_piece_info` file, which records the `backup_piece` information, and a `tenant_backup_piece_infos` file, which records the information about all `backup_pieces` of the tenant.
-
-`rounds` in the log archive directory is a placeholder for log archiving. Each placeholder indicates the start and end of a log round. A placeholder is in the format of `round_DSETID_ROUNDID_STATE`, where:
-
-* `DSETID` indicates the ID of the archive path.
-
-* `ROUNDID` indicates the round of log archiving.
-
-* `STATE` indicates the start or end of the log round.
-
-`pieces` in the log archive directory is a placeholder for log archiving. Each placeholder indicates the start and end of a log piece. A placeholder is in the format of `piece_DSETID_ROUNDID_PIECEID_STATE_DATE`, where:
-
-* `DSETID` indicates the ID of the archive path.
+|description||
+|---|---|
+|keywords||
+|dir-name||
+|dir-name-en||
+|tenant-type||
-* `ROUNDID` indicates the round of log archiving.
-
-* `PIECEID` indicates the ID of the log piece.
-
-* `STATE` indicates the start or end of the log round.
+# Backup directory structures
-* `DATE` indicates the backup time. The value is in the format of `yyyymmddThhmmss`. It records the start or end time.
+## Structure of log archive directories
-For backup media such as NFS, OSS, and COS, the directory structure for log archiving is as follows:
+You can specify the time interval for splitting log archive directories. By default, the directories are split by day. In other words, the log data directory of one day corresponds to a backup_piece.
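+
+For example, the split interval can be set when you configure the archive destination. This is a hedged sketch: the path is a placeholder, and it assumes the `PIECE_SWITCH_INTERVAL` option of `LOG_ARCHIVE_DEST` accepts day-granularity values such as `1d`:
+
+```sql
+obclient> ALTER SYSTEM SET LOG_ARCHIVE_DEST = 'LOCATION=file:///data/nfs/archive PIECE_SWITCH_INTERVAL=1d';
+```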
+When you use Network File System (NFS), Alibaba Cloud Object Storage Service (OSS), or Tencent Cloud Object Storage (COS) as the backup media, the structure of log archive directories is as follows:
```javascript
log_archive_dest
├── check_file
│ └── 1002_connect_file_20230111T193049.obbak // The connectivity check file.
├── format.obbak // The format information of the backup path.
- ├── rounds // The rounds placeholder directory.
- │ └── round_d1002r1_start.obarc // The placeholder indicating that a round starts.
+ ├── rounds // The round placeholder directory.
+ │ └── round_d1002r1_start.obarc // The round start placeholder.
├── pieces // The piece placeholder directory.
- │ ├── piece_d1002r1p1_start_20230111T193049.obarc // The placeholder indicating that a piece starts, in the format of piece_DESTID_ROUNDID_PIECEID_DATE_start.
- │ └── piece_d1002r1p1_end_20230111T193249.obarc // The placeholder indicating that a piece ends, in the format of piece_DESTID_ROUNDID_PIECEID_DATE_end.
- └── piece_d1002r1p1 // The piece directory, which is in the format of piece_DESTID_ROUNDID_PIECEID.
- ├── piece_d1002r1p1_20230111T193049_20230111T193249.obarc // Records the continuous interval of the piece.
+ │ ├── piece_d1002r1p1_start_20230111T193049.obarc // The piece start placeholder, in the format of piece_DESTID_ROUNDID_PIECEID_start_DATE.
+ │ └── piece_d1002r1p1_end_20230111T193249.obarc // The piece end placeholder, in the format of piece_DESTID_ROUNDID_PIECEID_end_DATE.
+ └── piece_d1002r1p1 // The piece directory, named in the format of piece_DESTID_ROUNDID_PIECEID.
+ ├── piece_d1002r1p1_20230111T193049_20230111T193249.obarc // The contiguous interval of a piece.
├── checkpoint
- │ └── checkpoint_info.0.obarc // Records the archiving progress of the piece. Only the piece being archived is recorded.
- ├── single_piece_info.obarc // Records the metadata of the piece.
- ├── tenant_archive_piece_infos.obarc // Records the metadata of all frozen pieces before this piece.
- ├── file_info.obarc // List of all log stream files.
+ │ └── checkpoint_info.0.obarc // The archiving progress of a piece that is being archived.
+ ├── single_piece_info.obarc // The metadata of the piece.
+ ├── tenant_archive_piece_infos.obarc // The metadata of all frozen pieces before this piece.
+ ├── file_info.obarc // The list of files in all log streams.
├── logstream_1 // Log stream 1.
- │ ├── file_info.obarc // File list for log stream 1.
+ │ ├── file_info.obarc // The list of files in log stream 1.
│ ├── log
- │ │ └── 1.obarc // Archive file under log stream 1.
- │ └── schema_meta // Records metadata of the data dictionary. This file is only generated in log stream 1.
+ │ │ └── 1.obarc // The archive file in log stream 1.
+ │ └── schema_meta // The metadata of data dictionaries. This file is generated only in log stream 1.
│ └── 1677588501408765915.obarc
└── logstream_1001 // Log stream 1001.
- ├── file_info.obarc // File list for log stream 1001.
+ ├── file_info.obarc // The list of files in log stream 1001.
└── log
- └── 1.obarc // Archive file for log stream 1001.
+ └── 1.obarc // The archive file in log stream 1001.
```
-In the above directory structure, the top-level directory contains the following three types of data:
+In the above directory structure, the top-level directory contains the following types of data:
-* `format.obbak` and `check_file`: Used for connectivity checks for the user log archive directory.
+* `format.obbak`: Records the metadata of the archive path, including the tenant that uses the path.
-* `rounds`: Summarizes the rounds of log archiving, recording the list of all rounds.
+* `check_file`: Checks the connectivity to the root archive directory.
-* `pieces`: Summarizes the pieces of log archiving, recording the list of all pieces.
+* `rounds`: The summary directory that records all rounds of log archiving.
-* `piece_d1002r1p1`: The directory for log archiving pieces, with a naming format of `piece_DESTID_ROUNDID_PIECEID`. Here, `DESTID` refers to the ID corresponding to `log_archive_dest`; `ROUNDID` refers to the ID of the log archiving Round, which is a monotonically increasing integer; and `PIECEID` refers to the ID of the log archiving piece, also a monotonically increasing integer.
+* `pieces`: The summary directory that records all pieces of log archiving.
+
+* `piece_d1002r1p1`: The piece directory for log archiving, which is named in the format of `piece_DESTID_ROUNDID_PIECEID`. Here, `DESTID` is the ID corresponding to `log_archive_dest`, `ROUNDID` is the ID of the log archiving round, and `PIECEID` is the ID of the log archiving piece. Both IDs are monotonically increasing integers.
In a log archiving piece directory, the following data is included:
- * `piece_d1002r1p1_20230111T193049_20230111T193249.obarc`: This file displays the current piece's ID, start and end times, and is only used for informational purposes.
+ * `piece_d1002r1p1_20230111T193049_20230111T193249.obarc`: This file displays the ID, start time, and end time of the current piece and is used only for display purposes.
+
+ * `checkpoint`: This directory is used to record the archiving timestamps of active pieces. The ObArchiveScheduler module periodically updates the timestamp information in this directory.
- * `checkpoint`: This directory is the active piece's record of the archiving checkpoint, and the ObArchiveScheduler module regularly updates the checkpoint information in this directory.
+ * `single_piece_info.obarc`: This file records the metadata of the current piece.
- * `single_piece_info.obarc`: This file records the metadata of the current piece.
+ * `tenant_archive_piece_infos.obarc`: This file records the metadata of all frozen pieces in the current tenant.
- * `tenant_archive_piece_infos.obarc`: This file records the metadata of all frozen pieces within the current tenant.
-
- * `file_info.obarc`: This file records the list of log streams within the piece.
+ * `file_info.obarc`: This file records the list of log streams within the piece.
- * `logstream_1`: This directory records the log files of log stream 1, which is the system log stream of the OceanBase tenant.
+ * `logstream_1`: This directory records the log files of log stream 1, which is the system log stream of an OceanBase Database tenant.
- * `logstream_1001`: This directory records the log files of log stream 1001, where log streams with an ID greater than 1000 are user log streams of the OceanBase tenant.
+ * `logstream_1001`: This directory records the log files of log stream 1001, where log streams with numbers greater than 1000 are user log streams of an OceanBase Database tenant.
- At the same time, each log stream backup contains three types of data:
+ Additionally, each log stream backup contains three types of data:
- * `file_info.obarc`: This file records the list of files within the log stream.
-
- * `log`: This directory contains all the archived files of the current log stream, with file names consistent with the internal log files of the source cluster.
+ * `file_info.obarc`: This file records the list of files in the log stream.
- * `schema_meta`: This directory records the metadata of the data dictionary, only present in the system log stream, and is not present in user log streams.
+ * `log`: This directory contains all the archive files of the current log stream, with file names consistent with those in the source cluster.
-For S3, the directory structure for log archiving is different from other media, as each individual archive file is composed of multiple small files and corresponding metadata files, with the specific directory structure as follows.
+ * `schema_meta`: This directory records the metadata of data dictionaries. It is only present in the system log stream but not in user log streams.
+
+When you use Amazon Simple Storage Service (S3) as the backup media, the structure of log archive directories is different from that in other media. An archive file consists of multiple small files and the corresponding metadata file. The specific directory structure is as follows.
Note
-Although the directory structure for log archiving in S3 differs from that of backup media such as NFS, OSS, and COS, backup files on S3 can still be restored after being copied to other backup media across clouds. For example, by copying archived data from S3 to OSS and using the OSS path, successful recovery is still possible.
+Despite the difference in directory structure, backup files copied from S3 to other backup media across clouds can still be used for restore. For example, if you copy archived data from S3 to OSS, you can successfully restore data from the OSS path.
-
```javascript
log_archive_dest
├── ......
- └── piece_d1002r1p1 // The piece directory, with a naming format of piece_DESTID_ROUNDID_PIECEID.
- ├── ...... // List of all log stream files.
+ └── piece_d1002r1p1 // The piece directory, named in the format of piece_DESTID_ROUNDID_PIECEID.
+ ├── ...... // The list of files in all log streams.
├── logstream_1 // Log stream 1.
- │ ├── file_info.obarc // File list for log stream 1.
+ │ ├── file_info.obarc // The list of files in log stream 1.
│ ├── log
- │ │ └── 1.obarc // The archive file under log stream 1, marked by a prefix.
- | | └── @APD_PART@0-32472973.obarc // The actual data within the archive file, recording data from bytes 0 to 32472973 of the log file.
+ │ │ └── 1.obarc // The archive file in log stream 1, which is identified by a prefix.
+ | | └── @APD_PART@0-32472973.obarc // The actual data in the archive file, including the data from byte 0 to byte 32472973 in the log file.
| | └── ......
- | | └── @APD_PART@FORMAT_META.obarc // The archive file format.
- | | └── @APD_PART@SEAL_META.obarc // The metadata information of the archive file.
- │ └── schema_meta // Records metadata of the data dictionary, only present in log stream 1.
+ | | └── @APD_PART@FORMAT_META.obarc // The format of the archive file.
+ | | └── @APD_PART@SEAL_META.obarc // The metadata of the archive file.
+ │ └── schema_meta // The metadata of the data dictionary. This file is generated only in log stream 1.
│ └── 1677588501408765915.obarc
└── logstream_1001 // Log stream 1001.
- ├── file_info.obarc // File list for log stream 1001.
+ ├── file_info.obarc // The list of files in log stream 1001.
└── log
- └── 1.obarc // Archive file for log stream 1001.
+ └── 1.obarc // The archive file in log stream 1001.
```
-In the above log archiving directory, `1.obarc` represents a single archive file, which is identified by a prefix that matches the archive file name. Each individual archive file mainly contains the following three types of data:
+In the above directory structure, `1.obarc` indicates a single archive file that is identified by a prefix. The prefix and the name of the archive file are the same. An archive file contains the following three types of data:
-* `@APD_PART@FORMAT_META.obarc`: When writing to the archive file for the first time, the `format_meta` file is written in this directory to record the format of the archive file.
+* `@APD_PART@FORMAT_META.obarc`: When data is written to the archive file for the first time, the `format_meta` file is generated in this directory to record the format of the archive file.
-* `@APD_PART@0-32472973.obarc`: The actual data within the archive file is written to a file named with this prefix, and each write operation records the start and end offsets in the file name.
+* `@APD_PART@0-32472973.obarc`: The actual data in the archive file is written to the file named with this prefix, and the start offset and the end offset of each write are recorded in the file name.
-* `@APD_PART@SEAL_META.obarc`: After the final write operation to the archive file, the `seal_meta` file is generated in this directory to record the metadata of the archive file.
+* `@APD_PART@SEAL_META.obarc`: After data is written to the archive file for the last time, the `seal_meta` file is generated in this directory to record the metadata in the archive file.
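+
+To relate the `piece_DESTID_ROUNDID_PIECEID` directory names above to live archive metadata, you can query the piece list from the sys tenant. This is a hedged sketch that assumes the `CDB_OB_ARCHIVELOG_PIECE_FILES` view is available in your version:
+
+```sql
+-- Maps each piece directory back to its destination ID, round ID, and piece ID.
+obclient> SELECT DEST_ID, ROUND_ID, PIECE_ID, STATUS, PATH FROM oceanbase.CDB_OB_ARCHIVELOG_PIECE_FILES;
+```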
-## Structure of the data backup directory
+## Structure of data backup directories
-A single data backup corresponds to a backup_set. Every time the user runs `ALTER SYSTEM BACKUP DATABASE`, a new directory for the backup_set is created, containing all the data for the current backup.
+Each data backup corresponds to a backup_set directory. Each time the `ALTER SYSTEM BACKUP DATABASE` statement is executed, a new backup_set directory is generated, containing all data that is backed up this time.
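+
+As a minimal sketch of how these directories come into being (the `ALTER SYSTEM BACKUP DATABASE` statement is described above; the incremental variant is assumed to be available in your version):
+
+```sql
+-- Initiates a full backup; a directory such as backup_set_1_full is created.
+obclient> ALTER SYSTEM BACKUP DATABASE;
+
+-- Initiates an incremental backup; a directory such as backup_set_2_inc is created.
+obclient> ALTER SYSTEM BACKUP INCREMENTAL DATABASE;
+```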
-The directory structure for data backup is as follows:
+The structure of data backup directories is as follows:
```javascript
data_backup_dest
├── format.obbak // The format information of the backup path.
├── check_file
│ └── 1002_connect_file_20230111T193020.obbak // The connectivity check file.
-├── backup_sets // The summary directory of the data backup list, recording all the data backup sets.
-│ ├── backup_set_1_full_end_success_20230111T193420.obbak // The full backup end placeholder.
-│ ├── backup_set_1_full_start.obbak // The full backup start placeholder.
-│ ├── backup_set_2_inc_start.obbak // The incremental backup start placeholder.
-│ └── backup_set_2_inc_end_success_20230111T194420.obbak // The incremental backup end placeholder.
-└── backup_set_1_full // The full backup set. Files ending with full indicate a full backup, and inc indicates an incremental backup.
- ├── backup_set_1_full_20230111T193330_20230111T193420.obbak //The placeholder that displays the start and end time of the full backup.
+├── backup_sets // The summary directory of data backup lists, which contains all data backup sets.
+│ ├── backup_set_1_full_end_success_20230111T193420.obbak // The end placeholder for a full backup.
+│ ├── backup_set_1_full_start.obbak // The start placeholder for a full backup.
+│ ├── backup_set_2_inc_start.obbak // The start placeholder for an incremental backup.
+│ └── backup_set_2_inc_end_success_20230111T194420.obbak // The end placeholder for an incremental backup.
+└── backup_set_1_full // A full backup set. A directory whose name ends with "full" is a full backup set, and a directory whose name ends with "inc" is an incremental backup set.
+ ├── backup_set_1_full_20230111T193330_20230111T193420.obbak // The placeholder that represents the start and end time of a full backup.
├── single_backup_set_info.obbak // The metadata of the current backup set.
- ├── tenant_backup_set_infos.obbak // The information of the current tenant's full backup set.
+ ├── tenant_backup_set_infos.obbak // The full backup set information of the current tenant.
├── infos
- │ ├── major_data_info_turn_1 // Tenant-level backup files under major turn 1.
- │ │ ├── tablet_log_stream_info.obbak // The mapping file for tablet and log stream.
- │ │ ├── tenant_major_data_macro_range_index.0.obbak // The major macro block index.
- │ │ ├── tenant_major_data_meta_index.0.obbak // The major meta index.
- │ │ └── tenant_major_data_sec_meta_index.0.obbak // The mapping file for major logical ID and physical ID.
- │ ├── minor_data_info_turn_1 // Tenant-level backup files under minor turn 1.
- │ │ ├── tablet_log_stream_info.obbak // The mapping file for tablet and log stream.
- │ │ ├── tenant_minor_data_macro_range_index.0.obbak // The minor macro block index.
- │ │ ├── tenant_minor_data_meta_index.0.obbak // The minor meta index.
- │ │ └── tenant_minor_data_sec_meta_index.0.obbak // The mapping for minor logical ID and physical ID.
- │ ├── diagnose_info.obbak // The backup set diagnostic information file.
- │ ├── locality_info.obbak // The locality information of the current backup set's tenant.
- │ └── meta_info // The log stream metadata file at the tenant level, containing metadata of all log streams.
- │ ├── ls_attr_info.1.obbak // The snapshot of the log stream list during backup.
- │ └── ls_meta_infos.obbak // The meta collection of all log streams.
+ │ ├── major_data_info_turn_1 // The tenant-level baseline data backup file when the turn ID is 1.
+ │ │ ├── tablet_log_stream_info.obbak // The file describing the mapping between tablets and log streams.
+ │ │ ├── tenant_major_data_macro_range_index.0.obbak // The macroblock index in the baseline data backup file.
+ │ │ ├── tenant_major_data_meta_index.0.obbak // The meta index in the baseline data backup file.
+ │ │ └── tenant_major_data_sec_meta_index.0.obbak // The mapping between the logic ID and the physical ID of the meta index in the baseline data backup file.
+ │ ├── minor_data_info_turn_1 // The tenant-level minor-compacted backup file when the turn ID is 1.
+ │ │ ├── tablet_log_stream_info.obbak // The file describing the mapping between tablets and log streams.
+ │ │ ├── tenant_minor_data_macro_range_index.0.obbak // The macroblock index in the minor-compacted backup file.
+ │ │ ├── tenant_minor_data_meta_index.0.obbak // The meta index in the minor-compacted backup file.
+ │ │ └── tenant_minor_data_sec_meta_index.0.obbak // The mapping between the logic ID and the physical ID of the meta index in the minor-compacted backup file.
+ │ ├── diagnose_info.obbak // The diagnosis information file of the backup set.
+ │ ├── locality_info.obbak // The tenant locality information of the current backup set.
+ │ └── meta_info // The tenant-level log stream metadata file, which contains the metadata of all log streams.
+ │ ├── ls_attr_info.1.obbak // The snapshot of log streams during backup.
+ │ └── ls_meta_infos.obbak // The collection of metadata of all log streams.
├── logstream_1 // Log stream 1.
- │ ├── major_data_turn_1_retry_0 // Baseline data under turn 1, retry 0.
- │ │ ├── macro_block_data.0.obbak // A data file, with a size of 512 MB to 4 GB.
+ │ ├── major_data_turn_1_retry_0 // The baseline data when the turn ID is 1 and retry ID is 0.
+ │ │ ├── macro_block_data.0.obbak // A data file sized between 512 MB and 4 GB.
│ │ ├── macro_range_index.obbak // The macro index.
│ │ ├── meta_index.obbak // The meta index.
- │ │ └── sec_meta_index.obbak // The mapping file for logical ID and physical ID.
- │ ├── meta_info_turn_1_retry_0 // The log stream metadata file under turn 1, retry 0.
+ │ │ └── sec_meta_index.obbak // The file describing the mapping between the logical ID and the physical ID.
+ │ ├── meta_info_turn_1_retry_0 // The metadata of log streams when the turn ID is 1 and retry ID is 0.
│ │ ├── ls_meta_info.obbak // The log stream metadata.
- │ │ └── tablet_info.1.obbak // The log stream tablet metadata list.
- │ ├── minor_data_turn_1_retry_0 // The dump data under turn 1, retry 0.
+ │ │ └── tablet_info.1.obbak // The metadata of log stream tablets.
+ │ ├── minor_data_turn_1_retry_0 // The minor-compacted data when the turn ID is 1 and retry ID is 0.
│ │ ├── macro_block_data.0.obbak
│ │ ├── macro_range_index.obbak
│ │ ├── meta_index.obbak
│ │ └── sec_meta_index.obbak
- │ └── sys_data_turn_1_retry_0 // The system tablet data under turn 1, retry 0.
+ │ └── sys_data_turn_1_retry_0 // The system tablet data when the turn ID is 1 and retry ID is 0.
│ ├── macro_block_data.0.obbak
│ ├── macro_range_index.obbak
│ ├── meta_index.obbak
@@ -203,31 +190,34 @@ data_backup_dest
└── sec_meta_index.obbak
```
-In the backup directory structure, the top-level directory contains the following three types of data:
+In the backup directory structure, the top-level directory contains the following types of data:
+
+* `format.obbak`: Records the metadata of the backup path.
-* `format.obbak` and `check_file`: Used for connectivity checks for the user log archive directory.
+* `check_file`: Checks the connectivity to the root backup directory.
-* `backup_sets`: The summary directory of the data backup list, recording all the data backup sets.
+* `backup_sets`: The summary directory that contains all data backup sets.
-* `backup_set_1_full`: This directory represents a data backup set. The directory name ending in `full` indicates a full backup, and `inc` indicates an incremental backup. Each data backup generates a corresponding backup set, and once the data backup is complete, the backup set will no longer be modified.
+* `backup_set_1_full`: This directory represents a data backup set, where a directory whose name ends with `full` is a full backup set, and a directory whose name ends with `inc` is an incremental backup set. Each data backup generates a corresponding backup set, and the backup set will not be modified after the data backup is completed.
In a data backup set, the following data is included:
- * `backup_set_1_full_20230111T193330_20230111T193420.obbak`: This file displays the current backup set's ID, start and end times, and is only used for informational purposes.
+ * `backup_set_1_full_20230111T193330_20230111T193420.obbak`: This file records the ID, start time, and end time of the current backup set. This file is used only for display purposes.
- * `single_backup_set_info.obbak`: This file records the metadata of the current backup set, including the backup position, and information about any dependent logs.
+ * `single_backup_set_info.obbak`: This file records the metadata of the current backup set, including the backup timestamp, dependent logs, and other information.
- * `tenant_backup_set_infos.obbak`: This file records the metadata of all existing backup sets for the current tenant.
+ * `tenant_backup_set_infos.obbak`: This file records the metadata of all existing backup sets for the current tenant.
- * `infos`: This directory records the metadata of the data backup sets.
+ * `infos`: This directory records the metadata of the data backup set.
- * `logstream_1`: This directory records all the data of log stream 1, which is the system log stream for the OceanBase database tenant.
+ * `logstream_1`: This directory records all the data of log stream 1, which is the system log stream of an OceanBase Database tenant.
- * `logstream_1001`: This directory records all the data of log stream 1001, where log streams with IDs greater than 1000 are user log streams for the OceanBase database tenant.
+ * `logstream_1001`: This directory records all the data of log stream 1001, where log streams with numbers greater than 1000 are user log streams of an OceanBase Database tenant.
-At the same time, each log stream backup also has four types of directories. Directories with `retry` indicate log stream-level retries, while those with `turn` indicate tenant-level retries:
+ In addition, each log stream backup has four types of directories. Directories whose names contain `retry` record information about log stream-level retries, and directories whose names contain `turn` record information about tenant-level retries.
- * `meta_info_xx`: This directory records log stream metadata and tablet metadata.
- * `sys_data_xx`: This directory records the data of internal system Tablets within the log stream.
- * `minor_data_xx`: This directory records the dump data of regular tablets.
- * `major_data_xx`: This directory records the baseline data of regular tablets.
\ No newline at end of file
+ * `meta_info_xx`: This directory records log stream metadata and tablet metadata.
+ * `sys_data_xx`: This directory records the data of internal system tablets in log streams.
+ * `minor_data_xx`: This directory records the minor-compacted data of regular tablets.
+ * `major_data_xx`: This directory records the baseline data of regular tablets.
\ No newline at end of file
From b9c1143a256539b60d96d6382b503e5840bf2137 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Mon, 15 Apr 2024 14:51:06 +0800
Subject: [PATCH 17/63] v430-beta-1200.observer-node-architecture-update
---
.../300.observer-thread-model/200.worker-thread.md | 4 ++--
.../1200.observer-node-architecture/400.log.md | 14 +++++++-------
2 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/en-US/700.reference/100.oceanbase-database-concepts/1200.observer-node-architecture/300.observer-thread-model/200.worker-thread.md b/en-US/700.reference/100.oceanbase-database-concepts/1200.observer-node-architecture/300.observer-thread-model/200.worker-thread.md
index 348407c0cb..9920512e64 100644
--- a/en-US/700.reference/100.oceanbase-database-concepts/1200.observer-node-architecture/300.observer-thread-model/200.worker-thread.md
+++ b/en-US/700.reference/100.oceanbase-database-concepts/1200.observer-node-architecture/300.observer-thread-model/200.worker-thread.md
@@ -23,7 +23,7 @@ The initial size of the thread pool is related to the number of CPU cores on the
The maximum number of threads on an OBServer node.
- The value range is [4096,10000], with the default value of 9999. A restart is required for the modification to take effect.
+ The value range is [0,10000], with the default value of 9999. A restart is required for the modification to take effect.
* system_cpu_quota
@@ -179,4 +179,4 @@ min_token_cnt = 2,
max_token_cnt = 2,
ass_token_cnt = 2 , // The number of tokens allocated to the group. You can determine the number of tokens based on the token_cnt field. Typically, the values of the two fields are the same.
rpc_stat_info: pcode=0x150a:cnt=1489 pcode=0x150b:cnt=1091}) // The RPC pcodes that have been received most frequently by the tenant in a period of time. The statistical period is 10 seconds, and the top five RPC pcodes are printed.
-```
+```
\ No newline at end of file
diff --git a/en-US/700.reference/100.oceanbase-database-concepts/1200.observer-node-architecture/400.log.md b/en-US/700.reference/100.oceanbase-database-concepts/1200.observer-node-architecture/400.log.md
index 2a7a01e64a..293d4259ee 100644
--- a/en-US/700.reference/100.oceanbase-database-concepts/1200.observer-node-architecture/400.log.md
+++ b/en-US/700.reference/100.oceanbase-database-concepts/1200.observer-node-architecture/400.log.md
@@ -39,9 +39,10 @@ The logs are divided into the following six levels from low to high: DEBUG, TRAC
The ERROR log level is special. It prints the stack where the log is generated. The symbol table is required to parse it.
-> **Notice**
->
-> The DEBUG log level consumes a significant amount of resources. In the latest version of OceanBase Database, the DEBUG log level is automatically removed in RELEASE compilation and does not take effect even if it is enabled.
+
+ Notice
+ The DEBUG log level consumes a significant amount of resources. In the latest version of OceanBase Database, the DEBUG log level is automatically removed in RELEASE compilation and does not take effect even if it is enabled.
+
### Modules
@@ -59,8 +60,8 @@ The ID of an internal SQL statement of OceanBase Database. The default value is
This section describes cluster-level parameters that must be used in the sys tenant. You can modify the parameters by using the following syntax:
-```sql
-alter system set enable_syslog_recycle = False;
+```shell
+obclient> ALTER SYSTEM SET enable_syslog_recycle = False;
```
* enable_syslog_recycle
@@ -105,5 +106,4 @@ alter system set enable_syslog_recycle = False;
* syslog_level
- The lowest level of logs to be printed. The default value is `WDIAG`.
-
+ The lowest level of logs to be printed. The default value is `WDIAG`.
\ No newline at end of file
From ed0f87ae9c2b73d38d80a877552b59ccfbdaf13e Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Mon, 15 Apr 2024 15:12:02 +0800
Subject: [PATCH 18/63] v430-beta-200.configuration-management-update
---
...0.configuration-management-introduction.md | 14 +-
.../200.set-parameters.md | 123 ++++++++++++++++--
.../300.set-variables.md | 4 +-
3 files changed, 122 insertions(+), 19 deletions(-)
diff --git a/en-US/700.reference/200.system-management/200.configuration-management/100.configuration-management-introduction.md b/en-US/700.reference/200.system-management/200.configuration-management/100.configuration-management-introduction.md
index 71daa2c127..28729a4900 100644
--- a/en-US/700.reference/200.system-management/200.configuration-management/100.configuration-management-introduction.md
+++ b/en-US/700.reference/200.system-management/200.configuration-management/100.configuration-management-introduction.md
@@ -22,13 +22,13 @@ You can set global and session-level system variables. After you set a global sy
For more information about system variables, see [Set variables](../200.configuration-management/300.set-variables.md).
-## Comparison between parameters and system variables
+## Comparison between parameters and variables
-| Comparison item | Parameter | System variable |
+| Comparison item | System parameter | System variable |
|------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Effective scope | Affects a cluster, tenant, zone, or server. | Affects a tenant globally or at the session level. |
-| Effective mode | - Dynamically takes effect: The value of `edit_level` is `dynamic_effective`.<br>- Effective upon restart: The value of `edit_level` is `static_effective`. | - A session-level variable takes effect only on the current session.<br>- A global variable does not take effect on the current session and takes effect only on sessions established upon re-logon. |
-| Modification | - You can modify system parameters by using SQL statements. For example:<br>`obclient> Alter SYSTEM SET schema_history_expire_time='1h';`<br>- You can modify system parameters by setting the startup parameters. For example:<br>`cd /home/admin && ./bin/observer -o "schema_history_expire_time='1h'"` | Modification can only be performed by using SQL statements. For example:<br>- MySQL mode<br>`obclient> SET ob_query_timeout = 20000000;`<br>`obclient> SET GLOBAL ob_query_timeout = 20000000;`<br>- Oracle mode<br>`obclient> SET ob_query_timeout = 20000000;`<br>`obclient> SET GLOBAL ob_query_timeout = 20000000;`<br>`obclient> ALTER SESSION SET ob_query_timeout = 20000000;`<br>`obclient> ALTER SYSTEM SET ob_query_timeout = 20000000;` |
-| Persistence | Parameters are persisted into internal tables and configuration files and can be queried from the `/home/admin/oceanbase/etc/observer.config.bin` and `/home/admin/oceanbase/etc/observer.config.bin.history` files. | Only variables at the global level are persisted, while those at the session level are not. |
-| Lifecycle | Long. A parameter remains effective for the entire duration of a process. | Short. A system variable takes effect only after the tenant schema is created. |
-| Query method | You can query a parameter by using the `SHOW PARAMETERS` statement. For example:<br>`obclient> SHOW PARAMETERS LIKE 'schema_history_expire_time';` | You can query a system variable by using the `SHOW [GLOBAL] VARIABLES` or `SELECT` statement. For example:<br>- MySQL mode<br>`obclient> SHOW VARIABLES LIKE 'ob_query_timeout';`<br>`obclient> SHOW GLOBAL VARIABLES LIKE 'ob_query_timeout';`<br>`obclient> SELECT * FROM INFORMATION_SCHEMA.SESSION_VARIABLES WHERE VARIABLE_NAME = 'ob_query_timeout';`<br>`obclient> SELECT * FROM INFORMATION_SCHEMA.GLOBAL_VARIABLES WHERE VARIABLE_NAME = 'ob_query_timeout';`<br>- Oracle mode<br>`obclient> SHOW VARIABLES LIKE 'ob_query_timeout';`<br>`obclient> SHOW GLOBAL VARIABLES LIKE 'ob_query_timeout';`<br>`obclient> SELECT * FROM SYS.TENANT_VIRTUAL_GLOBAL_VARIABLE WHERE VARIABLE_NAME = 'ob_query_timeout';`<br>`obclient> SELECT * FROM SYS.TENANT_VIRTUAL_SESSION_VARIABLE WHERE VARIABLE_NAME = 'ob_query_timeout';` |
\ No newline at end of file
+| Effective mode | - Dynamically takes effect: The value of `edit_level` is `dynamic_effective`.<br>- Takes effect upon restart: The value of `edit_level` is `static_effective`. | - A session-level variable takes effect only on the current session.<br>- A global variable does not take effect on the current session and takes effect only on sessions established upon re-logon. |
+| Modification | - Modification can be performed by using SQL statements. For example:<br>`obclient> ALTER SYSTEM SET schema_history_expire_time='1h';`<br>- Modification can be performed by setting startup parameters. For example:<br>`cd /home/admin && ./bin/observer -o "schema_history_expire_time='1h'"` | Modification can be performed only by using SQL statements. Here are some examples:<br>- MySQL mode<br>`obclient> SET ob_query_timeout = 20000000;`<br>`obclient> SET GLOBAL ob_query_timeout = 20000000;`<br>- Oracle mode<br>`obclient> SET ob_query_timeout = 20000000;`<br>`obclient> SET GLOBAL ob_query_timeout = 20000000;`<br>`obclient> ALTER SESSION SET ob_query_timeout = 20000000;`<br>`obclient> ALTER SYSTEM SET ob_query_timeout = 20000000;` |
+| Persistence | Parameters are persisted into internal tables and configuration files and can be queried from the `/home/admin/oceanbase/etc/observer.config.bin` and `/home/admin/oceanbase/etc/observer.config.bin.history` files. | Only global variables are persisted, while those at the session level are not. |
+| Lifecycle | Long. A system parameter remains effective for the entire duration of a process. | Short. A system variable takes effect only after the tenant schema is created. |
+| Query method | You can query a system parameter by using the `SHOW PARAMETERS` statement. For example:<br>`obclient> SHOW PARAMETERS LIKE 'schema_history_expire_time';` | You can query a system variable by using the `SHOW [GLOBAL] VARIABLES` or `SELECT` statement. Here are some examples:<br>- MySQL mode<br>`obclient> SHOW VARIABLES LIKE 'ob_query_timeout';`<br>`obclient> SHOW GLOBAL VARIABLES LIKE 'ob_query_timeout';`<br>`obclient> SELECT * FROM INFORMATION_SCHEMA.SESSION_VARIABLES WHERE VARIABLE_NAME = 'ob_query_timeout';`<br>`obclient> SELECT * FROM INFORMATION_SCHEMA.GLOBAL_VARIABLES WHERE VARIABLE_NAME = 'ob_query_timeout';`<br>- Oracle mode<br>`obclient> SHOW VARIABLES LIKE 'ob_query_timeout';`<br>`obclient> SHOW GLOBAL VARIABLES LIKE 'ob_query_timeout';`<br>`obclient> SELECT * FROM SYS.TENANT_VIRTUAL_GLOBAL_VARIABLE WHERE VARIABLE_NAME = 'ob_query_timeout';`<br>`obclient> SELECT * FROM SYS.TENANT_VIRTUAL_SESSION_VARIABLE WHERE VARIABLE_NAME = 'ob_query_timeout';` |
\ No newline at end of file
diff --git a/en-US/700.reference/200.system-management/200.configuration-management/200.set-parameters.md b/en-US/700.reference/200.system-management/200.configuration-management/200.set-parameters.md
index d6183a9840..4180931996 100644
--- a/en-US/700.reference/200.system-management/200.configuration-management/200.set-parameters.md
+++ b/en-US/700.reference/200.system-management/200.configuration-management/200.set-parameters.md
@@ -21,7 +21,7 @@ Parameters whose names start with an underscore (_), such as `_ob_max_thread_num
Different types of tenants have different privileges regarding querying and modifying parameters.
-| Tenant type | Parameters that can be queried | Parameters that can be set |
+| Tenant type | Parameters that can be queried | Parameters that can be set |
|------|---------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------|
| `sys` tenant | Cluster-level parameters and tenant-level parameters<br>**Note**<br>You can specify the `TENANT` keyword in the `SHOW PARAMETERS` statement to query the parameter settings of a specified tenant. | Cluster-level parameters and tenant-level parameters<br>**Note**<br>You can specify the `TENANT` keyword in the `sys` tenant to modify the tenant-level parameters of all tenants or a specified tenant. |
| User tenant | Cluster-level parameters, and tenant-level parameters of the current tenant | Tenant-level parameters of the current tenant |
@@ -37,7 +37,7 @@ The following table describes the data types of parameters in OceanBase Database
| MOMENT | The type that represents a moment in the `hh:mm` format, such as `02:00`. Special value: `disable`, which indicates that no time is specified. This data type applies only to the `major_freeze_duty_time` parameter. |
| STRING | The string type. A value of this type is entered by users. |
| STRING_LIST | The type that represents a list of strings separated with semicolons (;). |
-| TIME | The time type. The following time units are supported: `us` (microseconds), `ms` (milliseconds), `s` (seconds), `m` (minutes), `h` (hours), and `d` (days). If no suffix is added to a value of this data type, the unit s is used by default. The unit is case-insensitive. |
+| TIME | The time type. The following time units are supported: `us` (microseconds), `ms` (milliseconds), `s` (seconds), `m` (minutes), `h` (hours), and `d` (days). If no suffix is added to a value of this data type, the unit `s` is used by default. The unit is case-insensitive. |
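+
+For example, the following hedged statements both set a TIME-type parameter to seven days; in the second statement, the bare value defaults to the unit `s` as described above:
+
+```sql
+obclient> ALTER SYSTEM SET schema_history_expire_time = '7d';
+obclient> ALTER SYSTEM SET schema_history_expire_time = '604800';
+```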
## Modify parameters by using an SQL statement
@@ -47,12 +47,14 @@ The SQL syntax for modifying a parameter is as follows:
```sql
ALTER SYSTEM [SET]
-parameter_name = expression [SCOPE = {SPFILE | BOTH}] [COMMENT [=] 'text']
+parameter_name = expression [SCOPE = {MEMORY | SPFILE | BOTH}] [COMMENT [=] 'text']
[ TENANT [=] ALL|all_user|all_meta|tenant_name ] {SERVER [=] 'ip:port' | ZONE [=] 'zone'};
```
where
+* `SET` specifies the keyword used to set a new value for the parameter.
+
* `parameter_name` specifies the name of the parameter to be modified.
* `expression` specifies the value of the parameter after modification.
@@ -61,6 +63,8 @@ where
* `SCOPE` specifies the effective scope of the modification. The default value is `BOTH`. Valid values include:
+ * `MEMORY`: indicates that the parameter is modified only in the memory. The modification takes effect immediately and becomes invalid after OBServer nodes are restarted. Currently, no parameter supports this effective scope.
+
* `SPFILE`: indicates that only the parameter value in the internal table is modified. The modification takes effect after the OBServer node is restarted. This value is valid only for the parameters that take effect upon a restart.
* `BOTH`: indicates that the parameter value is modified in both the internal table and the memory. The modification takes effect immediately and remains effective after the OBServer node is restarted.
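+
+A hedged example that combines the clauses above (the parameter and comment text are illustrative):
+
+```sql
+-- SCOPE = BOTH: the change takes effect immediately and survives a restart.
+obclient> ALTER SYSTEM SET schema_history_expire_time = '7d' SCOPE = BOTH COMMENT = 'extend schema history retention';
+```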
@@ -130,11 +134,13 @@ obclient> ALTER SYSTEM SET log_disk_utilization_threshold = 20 TENANT='Oracle';
The SQL syntax for modifying a parameter is as follows:
```sql
-ALTER SYSTEM SET parameter_name = expression
+ALTER SYSTEM [SET] parameter_name = expression
```
where
+* `SET` specifies the keyword used to set a new value for the parameter.
+
* `parameter_name` specifies the name of the parameter to be modified.
* `expression` specifies the value of the parameter after modification.
@@ -156,9 +162,106 @@ Modify the `log_disk_utilization_threshold` parameter:
obclient> ALTER SYSTEM SET log_disk_utilization_threshold = 20;
```
+## Reset a parameter by using an SQL statement
+
+### MySQL mode
+
+The SQL syntax for resetting a parameter is as follows:
+
+```sql
+ALTER SYSTEM [RESET]
+parameter_name [SCOPE = {MEMORY | SPFILE | BOTH}] [TENANT [=] 'tenant_name'];
+```
+
+where
+
+* `RESET` specifies the keyword used to clear the current value of the parameter and use its default value or start value.
+
+* `SCOPE` specifies the effective scope of the modification. The default value is `BOTH`. Valid values include the following ones:
+
+ * `MEMORY`: specifies to reset only the parameter value in the memory. The modification takes effect immediately and becomes invalid after the OBServer node is restarted. At present, no parameter supports this effective scope.
+
+ * `SPFILE`: specifies to reset only the parameter value in the internal table. The modification takes effect after the OBServer node is restarted. This value is valid only for the parameters that take effect upon a restart.
+
+ * `BOTH`: specifies to reset the parameter value in both the internal table and the memory. The modification takes effect immediately and remains effective after the OBServer node is restarted.
+
+* `TENANT`: specifies to reset the tenant-level parameters of all tenants or a specified tenant from the sys tenant.
+
+ * `tenant_name`: the name of the tenant whose tenant-level parameters are to be reset.
+
+
+ Note
+
+ - Separate multiple parameters with commas (,).
+ - You cannot reset cluster-level parameters in a user tenant or reset cluster-level parameters for a specified user tenant from the sys tenant. You can reset cluster-level parameters only in the sys tenant. For example, if you attempt to execute the `ALTER SYSTEM RESET memory_limit TENANT='test_tenant'` statement, an error will be returned because `memory_limit` is a cluster-level parameter.
+ - You can modify tenant-level parameters directly in the current tenant, or in the sys tenant by specifying the `TENANT` keyword.
+ - You cannot specify a zone or OBServer node when you execute the `ALTER SYSTEM RESET` statement.
+ - The value of the `scope` column in the execution results of the `SHOW PARAMETERS` statement specifies whether a parameter is a cluster-level or tenant-level parameter.
+   - If the value of `scope` is `CLUSTER`, the parameter is a cluster-level parameter.
+   - If the value of `scope` is `TENANT`, the parameter is a tenant-level parameter.
+
+### Examples
+
+Reset the `log_disk_utilization_threshold` parameter.
+
+```sql
+obclient> ALTER SYSTEM RESET log_disk_utilization_threshold;
+```
+
+Reset a tenant-level parameter for all tenants or a specified tenant from the sys tenant.
+
+Here are two examples:
+
+```sql
+obclient> ALTER SYSTEM RESET log_disk_utilization_threshold TENANT='ALL';
+obclient> ALTER SYSTEM RESET log_disk_utilization_threshold TENANT='Oracle';
+```
+
+
+ Note
+ After the statement is executed, the parameter is reset for all tenants or the specified tenant.
+
+
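+Per the note above, multiple parameters are separated with commas. A hedged multi-parameter reset in the current tenant might look as follows:
+
+```sql
+obclient> ALTER SYSTEM RESET log_disk_utilization_threshold, freeze_trigger_percentage;
+```
+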
+### Oracle mode
+
+The SQL syntax for resetting a parameter is as follows:
+
+```sql
+ALTER SYSTEM [RESET] parameter_name = expression;
+```
+
+where
+
+* `RESET` specifies the keyword used to clear the current value of the parameter and use its default value or start value.
+
+* `parameter_name` specifies the name of the parameter to be reset.
+
+* `expression` specifies the new value of the parameter.
+
+
+ Note
+
+ - Separate multiple parameters with commas (,).
+ - Only tenant-level parameters can be reset in Oracle mode.
+ - You can reset tenant-level parameters directly in the current tenant, or in the sys tenant by specifying the `TENANT` keyword.
+
+### Example
+
+Reset the `log_disk_utilization_threshold` parameter.
+
+```sql
+obclient> ALTER SYSTEM RESET log_disk_utilization_threshold;
+```
+
## Query parameters by using an SQL statement
-The SQL syntax for querying a parameter is as follows:
+The SQL syntax is as follows:
```sql
SHOW PARAMETERS [LIKE 'pattern' | WHERE expr] [TENANT = tenant_name]
@@ -167,13 +270,13 @@ SHOW PARAMETERS [LIKE 'pattern' | WHERE expr] [TENANT = tenant_name]
Note
- - In the `sys` tenant, you can query the tenant-level and cluster-level parameters of the current tenant. You can also query parameters of all tenants or a specified tenant by specifying the `TENANT` keyword.
- - In a user tenant, you can query the tenant-level parameters of the current tenant and cluster-level parameters of the `sys` tenant.
+ - In the sys tenant, you can query the tenant-level and cluster-level parameters of the current tenant. You can also query parameters of all tenants or a specified tenant by specifying the `TENANT` keyword.
+ - In a user tenant, you can query the tenant-level parameters of the current tenant and cluster-level parameters of the sys tenant.
- A column attribute specified in the `WHERE expr` clause must be a column attribute in the execution results of the `SHOW PARAMETERS` statement.
-Here is an example of querying parameters by using the `SHOW PARAMETERS` statement:
+Here are some examples of querying parameters by using the `SHOW PARAMETERS` statement:
```sql
obclient> SHOW PARAMETERS WHERE scope = 'tenant';
@@ -202,7 +305,7 @@ The return result is as follows:
1 row in set
```
-The following table describes the column attributes in the execution results.
+The following table describes the column attributes in the return result.
| Column name | Description |
|------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -215,7 +318,7 @@ The following table describes the column attributes in the execution results.
| value | The value of the parameter.<br>**Note**<br>You can modify the parameter value for a specified zone or server. Therefore, the value of the parameter may vary with zones and servers. |
| info | The description of the parameter. |
| section | The category of the parameter. Valid values:<br>- `SSTABLE`: an SSTable-related parameter.<br>- `OBSERVER`: an OBServer node-related parameter.<br>- `ROOT_SERVICE`: a RootService-related parameter.<br>- `TENANT`: a tenant-related parameter.<br>- `TRANS`: a transaction-related parameter.<br>- `LOAD_BALANCE`: a load balancing-related parameter.<br>- `DAILY_MERGE`: a major compaction-related parameter.<br>- `CLOG`: a clog-related parameter.<br>- `LOCATION_CACHE`: a location cache-related parameter.<br>- `CACHE`: a cache-related parameter.<br>- `RPC`: an RPC-related parameter.<br>- `OBPROXY`: an ODP-related parameter. |
-| scope | The application scope of the parameter. Valid values:<br>- `TENANT`: indicates that the parameter is a tenant-level parameter.<br>- `CLUSTER`: indicates that the parameter is a cluster-level parameter. |
+| scope | The applicable scope of the parameter. Valid values:<br>- `TENANT`: indicates that the parameter is a tenant-level parameter.<br>- `CLUSTER`: indicates that the parameter is a cluster-level parameter. |
| source | The source of the current value. Valid values:<br>- `TENANT`<br>- `CLUSTER`<br>- `CMDLINE`<br>- `OBADMIN`<br>- `FILE`<br>- `DEFAULT` |
| edit_level | The modification behavior of the parameter. Valid values:<br>- `READONLY`: indicates that you cannot modify the parameter.<br>- `STATIC_EFFECTIVE`: indicates that you can modify the parameter but the modification takes effect only after the OBServer node is restarted.<br>- `DYNAMIC_EFFECTIVE`: indicates that you can modify the parameter and the modification takes effect in real time. |
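+
+Building on the column descriptions above, here is a hedged example of filtering with a `WHERE expr` clause. String case follows the existing `scope = 'tenant'` example; adjust if your version stores these values in uppercase:
+
+```sql
+-- Lists cluster-level parameters whose modifications take effect only after a restart.
+obclient> SHOW PARAMETERS WHERE scope = 'cluster' AND edit_level = 'static_effective';
+```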
diff --git a/en-US/700.reference/200.system-management/200.configuration-management/300.set-variables.md b/en-US/700.reference/200.system-management/200.configuration-management/300.set-variables.md
index 9dc07b2415..9005b6672a 100644
--- a/en-US/700.reference/200.system-management/200.configuration-management/300.set-variables.md
+++ b/en-US/700.reference/200.system-management/200.configuration-management/300.set-variables.md
@@ -89,7 +89,7 @@ opt_set_sys_var:
{SET | SET VARIABLES | VARIABLES} system_var_name = expr [,system_var_name = expr] ...
```
-When you create a tenant, set the value of read-only variable `ob_compatibility_mode` to `mysql` or `oracle` and the value of global variable `ob_tcp_invited_nodes` to `%`. Here is an example:
+When you create a tenant, set the value of the read-only variable `ob_compatibility_mode` to `mysql` or `oracle` and the value of the global variable `ob_tcp_invited_nodes` to `%`. Here is an example:
```sql
obclient> CREATE TENANT IF NOT EXISTS test_tenant
@@ -124,7 +124,7 @@ SET ob_compatibility_mode='oracle', ob_tcp_invited_nodes='%';
1 row in set (0.00 sec)
```
- The following table describes the column attributes in the execution results.
+ The following table describes the column attributes in the return result.
| Column name | Description |
|---------------|-----|
From b3b851ab548d20da65dc427ab73969359eed5386 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Mon, 15 Apr 2024 15:25:09 +0800
Subject: [PATCH 19/63] v430-beta-400.database-objects-update
---
.../300.materialized-view-opration-of-oracle-mode.md | 10 ++++------
.../300.materialized-view-opration-of-mysql-mode.md | 10 ++++------
2 files changed, 8 insertions(+), 12 deletions(-)
diff --git a/en-US/700.reference/100.oceanbase-database-concepts/400.database-objects/100.database-objects-of-oracle-mode/500.view-of-oracle-mode/200.materialized-view-of-oracle-mode/300.materialized-view-opration-of-oracle-mode.md b/en-US/700.reference/100.oceanbase-database-concepts/400.database-objects/100.database-objects-of-oracle-mode/500.view-of-oracle-mode/200.materialized-view-of-oracle-mode/300.materialized-view-opration-of-oracle-mode.md
index 861a04192a..d1f194b3f2 100644
--- a/en-US/700.reference/100.oceanbase-database-concepts/400.database-objects/100.database-objects-of-oracle-mode/500.view-of-oracle-mode/200.materialized-view-of-oracle-mode/300.materialized-view-opration-of-oracle-mode.md
+++ b/en-US/700.reference/100.oceanbase-database-concepts/400.database-objects/100.database-objects-of-oracle-mode/500.view-of-oracle-mode/200.materialized-view-of-oracle-mode/300.materialized-view-opration-of-oracle-mode.md
@@ -11,9 +11,11 @@
Materialized views are an important tool for database optimization. They store query results in physical storage to speed up data retrieval. When you create a materialized view, a schema corresponding to the query that defines the view is first generated. The schema is then used to build an SQL statement for creating a table that stores data of the materialized view. After the table is created, the query that defines the view is executed, and the result set is inserted into the table.
+Materialized views support two update strategies: full update and incremental update.
+
## Full update
-Materialized views support two update strategies: full update and incremental update. A full update clears all existing data in a materialized view and inserts the latest query result set into the view.
+A full update clears all existing data in a materialized view and inserts the latest query result set into the view.
In this process, a combination operation of `TRUNCATE TABLE` and `INSERT INTO SELECT` is performed. OceanBase Database uses the DDL framework to implement full updates.
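+
+A hedged sketch of a materialized view that uses full updates (object names are illustrative, and clause availability may vary by version and mode):
+
+```sql
+-- Creates a materialized view that is fully refreshed on demand.
+CREATE MATERIALIZED VIEW mv1 REFRESH COMPLETE ON DEMAND
+  AS SELECT c1, COUNT(*) AS cnt FROM t1 GROUP BY c1;
+
+-- Conceptually, each full update performs:
+--   TRUNCATE TABLE mv1;
+--   INSERT INTO mv1 SELECT c1, COUNT(*) FROM t1 GROUP BY c1;
+```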
@@ -46,8 +48,4 @@ The corresponding `mlog` table is `mlog$_t1`. The value `N` of the `old` field i
```sql
(SELECT COUNT(c1) FROM mlog WHERE old = 'N') - (SELECT COUNT(c1) FROM mlog WHERE old = 'Y')
-```
-
-## Pre-aggregation
-
-Some analytical processing (AP) databases provide the pre-aggregation feature. Unlike materialized views, this feature does not store data in original tables. Data is directly written into materialized view logs. Background tasks regularly refresh aggregation results to materialized views to keep the data up-to-date and accurate. Pre-aggregation provides instant data aggregation, thereby greatly improving the query performance and data processing efficiency.
\ No newline at end of file
+```
\ No newline at end of file
diff --git a/en-US/700.reference/100.oceanbase-database-concepts/400.database-objects/200.database-objects-of-mysql-mode/500.view-of-mysql-mode/200.materialized-view-of-mysql-mode/300.materialized-view-opration-of-mysql-mode.md b/en-US/700.reference/100.oceanbase-database-concepts/400.database-objects/200.database-objects-of-mysql-mode/500.view-of-mysql-mode/200.materialized-view-of-mysql-mode/300.materialized-view-opration-of-mysql-mode.md
index 4e7ad6bfac..e3320f803c 100644
--- a/en-US/700.reference/100.oceanbase-database-concepts/400.database-objects/200.database-objects-of-mysql-mode/500.view-of-mysql-mode/200.materialized-view-of-mysql-mode/300.materialized-view-opration-of-mysql-mode.md
+++ b/en-US/700.reference/100.oceanbase-database-concepts/400.database-objects/200.database-objects-of-mysql-mode/500.view-of-mysql-mode/200.materialized-view-of-mysql-mode/300.materialized-view-opration-of-mysql-mode.md
@@ -11,9 +11,11 @@
Materialized views are an important tool for database optimization. They store query results in physical storage to speed up data retrieval. When you create a materialized view, a schema corresponding to the query that defines the view is first generated. The schema is then used to build an SQL statement for creating a table that stores data of the materialized view. After the table is created, the query that defines the view is executed, and the result set is inserted into the table.
+Materialized views support two update strategies: full update and incremental update.
+
## Full update
-Materialized views support two update strategies: full update and incremental update. A full update clears all existing data in a materialized view and inserts the latest query result set into the view.
+A full update clears all existing data in a materialized view and inserts the latest query result set into the view.
In this process, a combination operation of `TRUNCATE TABLE` and `INSERT INTO SELECT` is performed. OceanBase Database uses the DDL framework to implement full updates.
@@ -46,8 +48,4 @@ The corresponding `mlog` table is `mlog$_t1`. The value `N` of the `old` field i
```sql
(SELECT COUNT(c1) FROM mlog WHERE old = 'N') - (SELECT COUNT(c1) FROM mlog WHERE old = 'Y')
-```
-
-## Pre-aggregation
-
-Some analytical processing (AP) databases provide the pre-aggregation feature. Unlike materialized views, this feature does not store data in original tables. Data is directly written into materialized view logs. Background tasks regularly refresh aggregation results to materialized views to keep the data up-to-date and accurate. Pre-aggregation provides instant data aggregation, thereby greatly improving the query performance and data processing efficiency.
\ No newline at end of file
+```
\ No newline at end of file
From 334bec6674577c6300d71d08950bc0330d8b8a5d Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Mon, 15 Apr 2024 16:22:16 +0800
Subject: [PATCH 20/63] v430-beta-500.manage-data-storage-update
---
.../100.dump-management-overview.md | 4 +-
.../200.automatically-trigger-dump.md | 4 +-
.../300.trigger-dump-manually.md | 67 ++++++++++++-------
.../400.view-dump-information.md | 6 +-
.../500.modify-dump-configuration.md | 8 +--
.../100.consolidation-management-overview.md | 6 +-
.../320.adaptive-compavtion.md | 12 ++--
.../400.manually-trigger-a-merge.md | 43 +++++++++---
.../500.manually-control-a-merge.md | 4 +-
.../500.view-merge-process.md | 24 +++----
.../700.modify-a-merge-configuration.md | 9 +--
11 files changed, 115 insertions(+), 72 deletions(-)
diff --git a/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/100.dump-management-overview.md b/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/100.dump-management-overview.md
index d5718d41de..c6db467f55 100644
--- a/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/100.dump-management-overview.md
+++ b/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/100.dump-management-overview.md
@@ -7,7 +7,7 @@
# Overview
-This topic describes how layered minor compaction works.
+This topic introduces how layered minor compaction works.
The storage engine of OceanBase Database uses the log-structured merge-tree (LSM-tree) architecture. In this architecture, data is stored in MemTables and SSTables. When the memory usage of a MemTable reaches a specified threshold, data in the MemTable is flushed to the disk to release the memory space. This process is called a minor compaction. Before a minor compaction, a minor freeze is performed to ensure that no new data is written to the MemTable. The minor freeze prevents new data from being written to the current active MemTable and generates a new active MemTable.
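+
+To observe the MemTable usage that drives minor freezes, you can run a hedged query from the sys tenant (the view and its columns may vary by version):
+
+```sql
+-- Shows per-tenant MemStore usage, limits, and freeze trigger thresholds.
+obclient> SELECT * FROM oceanbase.GV$OB_MEMSTORE;
+```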
@@ -39,4 +39,4 @@ The following figure shows how a layered minor compaction structure works.
## Trigger methods
-A minor compaction can be automatically triggered or manually initiated.
+A minor compaction can be automatically triggered or manually initiated.
\ No newline at end of file
diff --git a/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/200.automatically-trigger-dump.md b/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/200.automatically-trigger-dump.md
index 90ecc01eef..debf5c83b2 100644
--- a/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/200.automatically-trigger-dump.md
+++ b/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/200.automatically-trigger-dump.md
@@ -7,7 +7,7 @@
# Automatically trigger a minor compaction
-This topic describes how a minor compaction is automatically triggered.
+This topic introduces how a minor compaction is automatically triggered.
When you create a tenant, you can specify the memory for the tenant. The memory of a tenant consists of scalable memory and a MemTable. When the usage of the MemTable of a tenant reaches the limit specified by `memstore_limit_percentage * freeze_trigger_percentage`, a freeze (the preparation for a minor compaction) is automatically triggered. Then, the system schedules a minor compaction. A major compaction is automatically triggered when specified conditions are met during the minor compaction. For more information, see [Automatically trigger a major compaction](../200.merge-management/200.automatic-merge-triggering.md).
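+
+For example, with a hypothetical tenant memory of 10 GB, `memstore_limit_percentage = 50`, and `freeze_trigger_percentage = 20`, the MemStore limit is 10 GB × 50% = 5 GB, and a minor freeze is triggered once MemTable usage reaches 5 GB × 20% = 1 GB.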
@@ -19,4 +19,4 @@ For more information about the `memstore_limit_percentage` and `freeze_trigger_p
* [View minor compaction information](../100.dump-management/400.view-dump-information.md)
-* [Modify minor compaction settings](../100.dump-management/500.modify-dump-configuration.md)
+* [Modify minor compaction settings](../100.dump-management/500.modify-dump-configuration.md)
\ No newline at end of file
diff --git a/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/300.trigger-dump-manually.md b/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/300.trigger-dump-manually.md
index ee1d7fcb73..cd167fdd4e 100644
--- a/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/300.trigger-dump-manually.md
+++ b/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/300.trigger-dump-manually.md
@@ -7,17 +7,17 @@
# Manually initiate a minor compaction
-You can execute the `ALTER SYSTEM MINOR FREEZE` statement in the `sys` tenant to manually initiate a minor compaction at the tenant, zone, server, log stream, or partition level.
+You can execute the `ALTER SYSTEM MINOR FREEZE` statement in the sys tenant or a user tenant to manually initiate a minor compaction at the tenant, zone, server, log stream, or partition level.
## Initiate a minor compaction from the sys tenant
-You can initiate a minor compaction at the tenant, zone, server, log stream, or tablet level from the `sys` tenant.
+You can initiate a minor compaction at the tenant, zone, server, log stream, or tablet level from the sys tenant.
1. Log on to the `sys` tenant of the cluster as the `root` user.
2. Select a proper minor compaction type as needed.
- * Minor compaction at the tenant level
+ * Initiate a minor compaction at the tenant level
You can initiate a minor compaction for one or more tenants by using the following SQL syntax:
@@ -27,13 +27,13 @@ You can initiate a minor compaction at the tenant, zone, server, log stream, or
Here are some examples:
- * Initiate a minor compaction for the `sys` tenant
+ * Initiate a minor compaction for the sys tenant
```sql
obclient> ALTER SYSTEM MINOR FREEZE;
```
- * Initiate a minor compaction for all user tenants from the `sys` tenant
+ * Initiate a minor compaction for all user tenants from the sys tenant
```sql
obclient> ALTER SYSTEM MINOR FREEZE TENANT = all_user;
@@ -50,19 +50,19 @@ You can initiate a minor compaction at the tenant, zone, server, log stream, or
In OceanBase Database V4.2.1 and later, `TENANT = all_user` and `TENANT = all` express the same semantics. If you want an operation to take effect on all user tenants, we recommend that you use `TENANT = all_user`. `TENANT = all` will be deprecated.
- * Initiate a minor compaction for all meta tenants from the `sys` tenant
+ * Initiate a minor compaction for all meta tenants from the sys tenant
```sql
obclient> ALTER SYSTEM MINOR FREEZE TENANT = all_meta;
```
- * Initiate a minor compaction for a specified tenant from the `sys` tenant
+ * Initiate a minor compaction for a specified tenant from the sys tenant
```sql
obclient> ALTER SYSTEM MINOR FREEZE TENANT = tenant1;
```
- * Minor compaction at the zone level
+ * Initiate a minor compaction at the zone level
You can initiate a minor compaction for a specified zone by using the following SQL syntax:
@@ -76,7 +76,7 @@ You can initiate a minor compaction at the tenant, zone, server, log stream, or
obclient> ALTER SYSTEM MINOR FREEZE ZONE = zone1;
```
- * Minor compaction at the server level
+ * Initiate a minor compaction at the server level
You can initiate a minor compaction for one or more specified OBServer nodes by using the following SQL syntax:
@@ -90,7 +90,7 @@ You can initiate a minor compaction at the tenant, zone, server, log stream, or
obclient> ALTER SYSTEM MINOR FREEZE SERVER = ('xx.xx.xx.xx:2882','xx.xx.xx.xx:2882');
```
- * Minor compaction at the log stream level
+ * Initiate a minor compaction at the log stream level
You can initiate a minor compaction for a specified log stream of a specified tenant by using the following SQL syntax:
@@ -108,11 +108,11 @@ You can initiate a minor compaction at the tenant, zone, server, log stream, or
After you execute this statement, the system performs a minor compaction for all tablets in the specified log stream of the specified tenant.
- * Minor compaction at the partition level
+ * Initiate a minor compaction at the partition level
- In OceanBase Database, one partition corresponds to one tablet.
+ In OceanBase Database, each partition corresponds to one tablet.
- You can initiate a minor compaction for a specified tablet in a specified tenant by using the following SQL syntax:
+ You can initiate a minor compaction for a specified tablet of a tenant by using the following SQL syntax:
```sql
ALTER SYSTEM MINOR FREEZE TENANT [=] tenant_name TABLET_ID = tablet_id;
@@ -124,31 +124,52 @@ You can initiate a minor compaction at the tenant, zone, server, log stream, or
ALTER SYSTEM MINOR FREEZE TENANT [=] tenant_name LS [=] ls_id TABLET_ID = tablet_id;
```
- Here, `tenant_name` specifies the name of a tenant, `ls_id` specifies the ID of a log stream in the tenant, and `tablet_id` specifies the ID of a tablet in the log stream. You can query the `oceanbase.CDB_OB_TABLET_TO_LS` view for the log stream ID and tablet ID.
+ In the syntax, `tenant_name` specifies the name of a tenant, `ls_id` specifies the ID of a log stream in the tenant, and `tablet_id` specifies the ID of a tablet in the log stream. You can query the `CDB_OB_TABLE_LOCATIONS` view for IDs of log streams and tablets. For more information about columns in the `CDB_OB_TABLE_LOCATIONS` view, see [oceanbase.CDB_OB_TABLE_LOCATIONS](../../../700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/17700.oceanbase-cdb_ob_table_locations-of-sys-tenant.md).
+
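+ As a hedged illustration (the table name is a placeholder), you can look up the IDs used below as follows:
+
+ ```sql
+ obclient> SELECT TENANT_ID, TABLE_NAME, LS_ID, TABLET_ID FROM oceanbase.CDB_OB_TABLE_LOCATIONS WHERE TABLE_NAME = 't1';
+ ```
+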
The following example shows how to initiate a minor compaction for a specified tablet in a specified tenant:
```sql
- obclient> ALTER SYSTEM MINOR FREEZE TENANT = t1 TABLET_ID = 200001;
+ obclient> ALTER SYSTEM MINOR FREEZE TENANT = tenant1 TABLET_ID = 200001;
```
- The following example shows how to initiate a minor compaction for a specified tablet in a specified log stream of a specified tenant:
+ Here is an example of initiating a minor compaction for a specified tablet of a specified log stream in a specified tenant:
```sql
- obclient> ALTER SYSTEM MINOR FREEZE TENANT = t1 LS = 1001 TABLET_ID = 200001;
+ obclient> ALTER SYSTEM MINOR FREEZE TENANT = tenant1 LS = 1001 TABLET_ID = 200001;
```
## Initiate a minor compaction from a user tenant
-If you are logged on to a user tenant, you can initiate only tenant-level minor compactions for the current tenant.
+From a user tenant, you can initiate a minor compaction only for the current tenant, at the tenant level or the partition level.
+
+1. Log on to a MySQL tenant or an Oracle tenant of the cluster as the tenant administrator.
+
+2. Select a proper minor compaction type as needed.
+
+ * Initiate a minor compaction at the tenant level
+
+ ```sql
+ obclient> ALTER SYSTEM MINOR FREEZE;
+ ```
+
+ * Initiate a minor compaction at the partition level
-1. Log on to a MySQL or Oracle tenant of the cluster as the administrator of the tenant.
+ In OceanBase Database, each partition corresponds to one tablet.
-2. Initiate a minor compaction for the current tenant based on your business needs.
+ You can initiate a minor compaction for a specified tablet in the current tenant by using the following SQL syntax:
- ```sql
- obclient> ALTER SYSTEM MINOR FREEZE;
- ```
+ ```sql
+ ALTER SYSTEM MINOR FREEZE TABLET_ID = tablet_id;
+ ```
+
+ In the syntax, `tablet_id` can be queried from the `DBA_OB_TABLE_LOCATIONS` view. For more information about columns in the `DBA_OB_TABLE_LOCATIONS` view, see [DBA_OB_TABLE_LOCATIONS](../../../700.system-views/400.system-view-of-mysql-mode/200.dictionary-view-of-mysql-mode/17800.oceanbase-dba_ob_table_locations-of-mysql-mode.md).
+
+ Here is an example:
+
+ ```sql
+ obclient> ALTER SYSTEM MINOR FREEZE TABLET_ID = 200001;
+ ```
## What to do next
diff --git a/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/400.view-dump-information.md b/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/400.view-dump-information.md
index 15cfee00cb..e69eaaeaac 100644
--- a/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/400.view-dump-information.md
+++ b/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/400.view-dump-information.md
@@ -37,7 +37,7 @@ After you initiate a minor compaction, you can query the minor compaction progre
obclient> SELECT * FROM SYS.GV$OB_TABLET_COMPACTION_PROGRESS WHERE TYPE='MINI_MERGE'\G
```
- A sample query result is as follows:
+ The query result is as follows:
```shell
*************************** 1. row ***************************
@@ -59,7 +59,7 @@ After you initiate a minor compaction, you can query the minor compaction progre
1 row in set
```
- Fields in the query result are described as follows:
+ Some of the fields in the query result are described as follows:
* `TYPE`: the type of the compaction task. Valid values include the following ones:
@@ -116,7 +116,7 @@ After you initiate a minor compaction, you can query the minor compaction progre
obclient> SELECT * FROM SYS.GV$OB_TABLET_COMPACTION_HISTORY WHERE TYPE='MINI_MERGE'\G
```
- A sample query result is as follows:
+ The query result is as follows:
```shell
*************************** 1. row ***************************
diff --git a/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/500.modify-dump-configuration.md b/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/500.modify-dump-configuration.md
index 312b64ea4a..38918c749b 100644
--- a/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/500.modify-dump-configuration.md
+++ b/en-US/700.reference/200.system-management/500.manage-data-storage/100.dump-management/500.modify-dump-configuration.md
@@ -7,15 +7,15 @@
# Modify minor compaction settings
-This topic describes the minor compaction parameters and describes how to set these parameters.
+This topic introduces the minor compaction parameters and describes how to set these parameters.
## Minor compaction parameters
| Parameter | Description | Default value | Value range |
|-----------------------------|-------------------------------------------------------|-----|--------------|
| minor_compact_trigger | The number of SSTables for triggering a minor compaction. It is a tenant-level parameter. | 2 | \[0, 16\] |
-| freeze_trigger_percentage | The maximum percentage of memory space used by the MemStore of the tenant to trigger a minor freeze. This is a tenant-level parameter. | 20 | \[1, 99\] |
-| memstore_limit_percentage | The percentage of MemStore memory to the total memory of the tenant. It is a tenant-level parameter. | 50 | \[1, 99\] |
+| freeze_trigger_percentage | The maximum percentage of memory space used by the MemStore of the tenant to trigger a minor freeze. This is a tenant-level parameter. | 20 | (0, 100) |
+| memstore_limit_percentage | The percentage of MemStore memory to the total memory of the tenant. It is a cluster-level parameter.<br>**Note**: Starting from OceanBase Database V4.3.0, the V4.3.x series provide the tenant-level hidden parameter `_memstore_limit_percentage`, which specifies the percentage of the memory that can be occupied by the MemStore to the total available memory of a tenant. The parameter has the same feature and default value as the cluster-level parameter `memstore_limit_percentage`. Take note of the following considerations when you configure these two parameters:<br>- If you set either `_memstore_limit_percentage` or `memstore_limit_percentage` to a non-default value, the value prevails.<br>- If you set both `_memstore_limit_percentage` and `memstore_limit_percentage` to non-default values, the value of `_memstore_limit_percentage` prevails.<br>- If neither is configured or both are set to default values, the system adopts the following adaptive strategy:<br>- For a tenant with a memory of 8 GB or less, the percentage of the memory that can be occupied by MemStore is 40%.<br>- For a tenant with a memory of more than 8 GB, the percentage of the memory that can be occupied by MemStore is 50%. | 0 | \[0, 100) |
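+
+A minimal sketch of adjusting the tenant-level parameters above with `ALTER SYSTEM SET`, assuming you are logged on as the tenant administrator (the values are illustrative):
+
+```sql
+obclient> ALTER SYSTEM SET minor_compact_trigger = 2;
+obclient> ALTER SYSTEM SET freeze_trigger_percentage = 30;
+```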
## Modify minor compaction parameters by using SQL statements
@@ -40,7 +40,7 @@ This topic describes the minor compaction parameters and describes how to set th
Here are some examples:
```sql
- obclient> SHOW PARAMETERS LIKE 'minor_compact_trigger';
+ obclient> SHOW PARAMETERS LIKE 'minor_compact_trigger';
obclient> SHOW PARAMETERS LIKE 'major_compact_trigger';
diff --git a/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/100.consolidation-management-overview.md b/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/100.consolidation-management-overview.md
index 55c866ba85..4a3f24d309 100644
--- a/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/100.consolidation-management-overview.md
+++ b/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/100.consolidation-management-overview.md
@@ -7,7 +7,7 @@
# Overview
-This topic describes the classification, status, and compression algorithm of major compactions.
+This topic introduces the classification, status, and compression algorithm of major compactions.
A major compaction compacts all dynamic and static data, which is a time-consuming operation. When the incremental data generated by minor compactions reaches the specified threshold, OceanBase Database performs a major compaction on data of the same major version. The main difference between a minor compaction and a major compaction is that a major compaction compacts data in all partitions in the cluster with the global static data at a unified snapshot point. A major compaction is a global operation and generates a global snapshot.
@@ -56,10 +56,10 @@ OceanBase Database allows you to specify a compression algorithm when you create
For more information about the syntaxes for creating a table and specifying a compression algorithm, see [CREATE TABLE (MySQL mode)](../../../500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2600.create-table-of-mysql-mode.md) and [CREATE TABLE (Oracle mode)](../../../500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/2400.create-table-of-oracle-mode.md).
-## Trigger a major compaction
+## Trigger methods
OceanBase Database supports automatic, scheduled, adaptive, and manual major compactions.
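+For instance, scheduled major compactions are governed by the cluster-level `major_freeze_duty_time` parameter; a hedged sketch of adjusting the daily schedule from the `sys` tenant:
+
+```sql
+obclient> ALTER SYSTEM SET major_freeze_duty_time = '01:00';
+```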
## References
-[Major compactions](../../../100.oceanbase-database-concepts/900.storage-architecture/300.dump-and-merge/300.about-merge.md)
+[Major compactions](../../../100.oceanbase-database-concepts/900.storage-architecture/300.dump-and-merge/300.about-merge.md)
\ No newline at end of file
diff --git a/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/320.adaptive-compavtion.md b/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/320.adaptive-compavtion.md
index ef89fcef4a..fb9c26694b 100644
--- a/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/320.adaptive-compavtion.md
+++ b/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/320.adaptive-compavtion.md
@@ -48,11 +48,11 @@ With this feature enabled, related statistics collection and the execution of sc
+----------------+----------+-------+--------+-----------+-----------------------------+-----------+-------+---------------------------------------------------------------------------------+---------+-------------------+---------------+-----------+
| SVR_IP | SVR_PORT | ZONE | SCOPE | TENANT_ID | NAME | DATA_TYPE | VALUE | INFO | SECTION | EDIT_LEVEL | DEFAULT_VALUE | ISDEFAULT |
+----------------+----------+-------+--------+-----------+-----------------------------+-----------+-------+---------------------------------------------------------------------------------+---------+-------------------+---------------+-----------+
- | 172.xx.xxx.xxx | 2882 | zone1 | TENANT | 1 | _enable_adaptive_compaction | NULL | True | specifies whether allow adaptive major compaction schedule and information collection | TENANT | DYNAMIC_EFFECTIVE | True | YES |
- | 172.xx.xxx.xxx | 2882 | zone1 | TENANT | 1001 | _enable_adaptive_compaction | NULL | True | specifies whether allow adaptive major compaction schedule and information collection | TENANT | DYNAMIC_EFFECTIVE | True | YES |
- | 172.xx.xxx.xxx | 2882 | zone1 | TENANT | 1002 | _enable_adaptive_compaction | NULL | True | specifies whether allow adaptive major compaction schedule and information collection | TENANT | DYNAMIC_EFFECTIVE | True | YES |
- | 172.xx.xxx.xxx | 2882 | zone1 | TENANT | 1003 | _enable_adaptive_compaction | NULL | True | specifies whether allow adaptive major compaction schedule and information collection | TENANT | DYNAMIC_EFFECTIVE | True | YES |
- | 172.xx.xxx.xxx | 2882 | zone1 | TENANT | 1004 | _enable_adaptive_compaction | NULL | True | specifies whether allow adaptive major compaction schedule and information collection | TENANT | DYNAMIC_EFFECTIVE | True | YES |
+ | 172.xx.xxx.xxx | 2882 | zone1 | TENANT | 1 | _enable_adaptive_compaction | NULL | True | specifies whether allow adaptive compaction schedule and information collection | TENANT | DYNAMIC_EFFECTIVE | True | YES |
+ | 172.xx.xxx.xxx | 2882 | zone1 | TENANT | 1001 | _enable_adaptive_compaction | NULL | True | specifies whether allow adaptive compaction schedule and information collection | TENANT | DYNAMIC_EFFECTIVE | True | YES |
+ | 172.xx.xxx.xxx | 2882 | zone1 | TENANT | 1002 | _enable_adaptive_compaction | NULL | True | specifies whether allow adaptive compaction schedule and information collection | TENANT | DYNAMIC_EFFECTIVE | True | YES |
+ | 172.xx.xxx.xxx | 2882 | zone1 | TENANT | 1003 | _enable_adaptive_compaction | NULL | True | specifies whether allow adaptive compaction schedule and information collection | TENANT | DYNAMIC_EFFECTIVE | True | YES |
+ | 172.xx.xxx.xxx | 2882 | zone1 | TENANT | 1004 | _enable_adaptive_compaction | NULL | True | specifies whether allow adaptive compaction schedule and information collection | TENANT | DYNAMIC_EFFECTIVE | True | YES |
+----------------+----------+-------+--------+-----------+-----------------------------+-----------+-------+---------------------------------------------------------------------------------+---------+-------------------+---------------+-----------+
5 rows in set
```
@@ -75,4 +75,4 @@ With this feature enabled, related statistics collection and the execution of sc
* [Manually initiate a major compaction](../200.merge-management/400.manually-trigger-a-merge.md)
-* [Manually control a major compaction](../200.merge-management/500.manually-control-a-merge.md)
+* [Manually control a major compaction](../200.merge-management/500.manually-control-a-merge.md)
\ No newline at end of file
diff --git a/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/400.manually-trigger-a-merge.md b/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/400.manually-trigger-a-merge.md
index 02f150664d..2c920326da 100644
--- a/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/400.manually-trigger-a-merge.md
+++ b/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/400.manually-trigger-a-merge.md
@@ -11,15 +11,13 @@ You can manually initiate a major compaction. Manual major compactions include t
Adaptive major compactions are partition-level major compactions that are adaptively initiated by the system for partitions as needed. Like adaptive major compactions, partition-level major compactions are mainly used for data import and export, frequent DML operations, and scenarios with low query efficiency. For more information about adaptive major compactions, see [Adaptive major compactions](320.adaptive-compavtion.md). If queries become slow after you disable adaptive major compactions based on your business needs, you can manually initiate partition-level major compactions to solve the problem.
-You can initiate tenant-level and partition-level major compactions in the `sys` tenant, and only tenant-level major compactions in a user tenant.
-
## Considerations
One partition corresponds to one tablet. When you initiate a partition-level major compaction, take note of the following considerations:
* You cannot initiate a partition-level major compaction for a partition that is undergoing a tenant-level major compaction.
-* You cannot initiate a partition-level major compaction for a partition that is undergoing an adaptive major compaction.
+* You cannot initiate a partition-level major compaction for a partition that is undergoing an adaptively scheduled major compaction.
* You cannot initiate a partition-level major compaction for a partition if multiple replicas of the partition are inconsistent.
@@ -27,7 +25,7 @@ One partition corresponds to one tablet. When you initiate a partition-level maj
* You cannot initiate a partition-level major compaction when the major compaction task is suspended.
-* A partition-level major compaction is essentially a major compaction on multiple replicas of the same partition. It consumes CPU and disk I/O resources. Before you initiate a partition-level major compaction, check the resource usage of the tenant. After the partition-level major compaction is completed, the CPU and I/O usage will increase.
+* A partition-level major compaction is essentially a major compaction on multiple replicas of a partition. It consumes CPU and disk I/O resources. Before you initiate a partition-level major compaction, check the resource usage of the tenant. After the partition-level major compaction is completed, the CPU and I/O usage will increase.
## Prerequisites
@@ -116,7 +114,7 @@ To do so, perform the following steps:
ALTER SYSTEM MAJOR FREEZE TENANT [=] tenant_name TABLET_ID = tablet_id;
```
- You can query the `CDB_OB_TABLE_LOCATIONS` view for `tablet_id` in the `sys` tenant. For more information about columns in the `CDB_OB_TABLE_LOCATIONS` view, see [oceanbase.CDB_OB_TABLE_LOCATIONS](../../../700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/17700.oceanbase-cdb_ob_table_locations-of-sys-tenant.md).
+ In the syntax, `tablet_id` can be queried from the `CDB_OB_TABLE_LOCATIONS` view. For more information about columns in the `CDB_OB_TABLE_LOCATIONS` view, see [oceanbase.CDB_OB_TABLE_LOCATIONS](../../../700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/17700.oceanbase-cdb_ob_table_locations-of-sys-tenant.md).
The following example shows how to initiate a partition-level major compaction for `tenant1` in the `sys` tenant:
@@ -126,20 +124,43 @@ To do so, perform the following steps:
Notice
- The statement for initiating a partition-level major compaction is exclusive with that for initiating a tenant-level or adaptive major compaction. Successful execution of the statement does not mean that the partition-level major compaction is successfully initiated. You can check whether a partition-level major compaction is successfully initiated by querying the `GV$OB_MERGE_INFO` view for compaction information corresponding to `ACTION='MEDIUM_MERGE'` of the specified partition, or query the `GV$OB_TABLET_COMPACTION_HISTORY` view for compaction information corresponding to `TYPE='MEDIUM_MERGE'` of the specified partition. For more information, see View the major compaction process.
+ The statement for initiating a partition-level major compaction is mutually exclusive with that for initiating a tenant-level or adaptive major compaction. Successful execution of the statement does not mean that the partition-level major compaction is successfully initiated. You can check whether a partition-level major compaction is successfully initiated by querying the `GV$OB_MERGE_INFO` view for compaction information corresponding to `ACTION='MEDIUM_MERGE'` of the specified partition, or querying the `GV$OB_TABLET_COMPACTION_HISTORY` view for compaction information corresponding to `TYPE='MEDIUM_MERGE'` of the specified partition. For more information, see View the major compaction process.
## Manually initiate a major compaction from a user tenant
-If you are logged on to a user tenant, you can initiate only tenant-level major compactions for the current tenant.
+If you are logged on to a user tenant, you can initiate tenant-level and partition-level major compactions only for the current tenant.
1. Log on to the database as an administrator of a user tenant.
-2. Initiate a tenant-level major compaction for the current tenant.
+2. Initiate a major compaction for the current tenant.
- ```sql
- obclient> ALTER SYSTEM MAJOR FREEZE;
- ```
+ * Initiate a tenant-level major compaction
+
+ ```sql
+ obclient> ALTER SYSTEM MAJOR FREEZE;
+ ```
+
+ * Initiate a partition-level major compaction
+
+ The SQL syntax is as follows:
+
+ ```sql
+ ALTER SYSTEM MAJOR FREEZE TABLET_ID = tablet_id;
+ ```
+
+ In the syntax, `tablet_id` can be queried from the `DBA_OB_TABLE_LOCATIONS` view. For more information about columns in the `DBA_OB_TABLE_LOCATIONS` view, see [DBA_OB_TABLE_LOCATIONS](../../../700.system-views/400.system-view-of-mysql-mode/200.dictionary-view-of-mysql-mode/17800.oceanbase-dba_ob_table_locations-of-mysql-mode.md).
+
+ Here is an example:
+
+   ```sql
+ obclient> ALTER SYSTEM MAJOR FREEZE TABLET_ID = 200001;
+ ```
+
+
+ Notice
+   The statement for initiating a partition-level major compaction is mutually exclusive with that for initiating a tenant-level or adaptive major compaction. Successful execution of the statement does not mean that the partition-level major compaction is successfully initiated. You can check whether a partition-level major compaction is successfully initiated by querying the `GV$OB_TABLET_COMPACTION_HISTORY` view for compaction information corresponding to `TYPE='MEDIUM_MERGE'` of the specified partition. For more information, see View the major compaction process.
+
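+   A hypothetical verification in MySQL mode, reusing the tablet ID from the preceding example:
+
+   ```sql
+   obclient> SELECT * FROM oceanbase.GV$OB_TABLET_COMPACTION_HISTORY WHERE TABLET_ID = 200001 AND TYPE = 'MEDIUM_MERGE'\G
+   ```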
## References
diff --git a/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/500.manually-control-a-merge.md b/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/500.manually-control-a-merge.md
index f8fe2c8e2c..790068f88a 100644
--- a/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/500.manually-control-a-merge.md
+++ b/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/500.manually-control-a-merge.md
@@ -154,7 +154,7 @@ In a user tenant, you can control the major compaction of only the current tenan
After a major compaction is suspended, you can resume it.
- The syntax is as follows:
+ The statement is as follows:
```sql
obclient> ALTER SYSTEM RESUME MERGE;
@@ -164,7 +164,7 @@ In a user tenant, you can control the major compaction of only the current tenan
If a checksum verification error occurs during a major compaction and is resolved after manual intervention, you can clear this checksum error tag and resume the major compaction.
- The syntax is as follows:
+ The statement is as follows:
```sql
obclient> ALTER SYSTEM CLEAR MERGE ERROR;
diff --git a/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/500.view-merge-process.md b/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/500.view-merge-process.md
index 018fc9a108..6b4cd7a5b8 100644
--- a/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/500.view-merge-process.md
+++ b/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/500.view-merge-process.md
@@ -19,9 +19,9 @@ You can query views in the `sys` tenant for information about tenant-level and p
2. Execute the following statements to view major compaction information.
- * Query the `CDB_OB_ZONE_MAJOR_COMPACTION` or `CDB_OB_MAJOR_COMPACTION` view for tenant-level major compaction information
+ * Query the `CDB_OB_ZONE_MAJOR_COMPACTION` or `CDB_OB_MAJOR_COMPACTION` view for tenant-level major compaction information.
- The `CDB_OB_ZONE_MAJOR_COMPACTION` view displays the major compaction information of each zone of all tenants.
+ The `CDB_OB_ZONE_MAJOR_COMPACTION` view displays the major compaction information of each zone in all tenants.
```sql
obclient [oceanbase]> SELECT * FROM oceanbase.CDB_OB_ZONE_MAJOR_COMPACTION\G
@@ -114,7 +114,7 @@ You can query views in the `sys` tenant for information about tenant-level and p
| LAST_FINISH_TIME | The time when the last major compaction was completed. |
| START_TIME | The time when the major compaction started. |
| STATUS | The major compaction status. Valid values:<br>- `IDLE`: No major compaction is in progress.<br>- `COMPACTING`: A major compaction is in progress.<br>- `VERIFYING`: The checksum is being verified. |
- | IS_ERROR | Indicates whether an error occurred during the major compaction. Valid values: |
+ | IS_ERROR | Indicates whether an error occurred during the major compaction. Valid values: |
| INFO | The major compaction information. |
The `CDB_OB_MAJOR_COMPACTION` view displays the global major compaction information of all tenants.
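+   A minimal query sketch from the `sys` tenant:
+
+   ```sql
+   obclient> SELECT * FROM oceanbase.CDB_OB_MAJOR_COMPACTION\G
+   ```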
@@ -225,11 +225,11 @@ You can query views in the `sys` tenant for information about tenant-level and p
| LAST_FINISH_TIME | The time when the last major compaction was completed. |
| START_TIME | The time when the major compaction started. |
| STATUS | The major compaction status. Valid values:<br>- `IDLE`: No major compaction is in progress.<br>- `COMPACTING`: A major compaction is in progress.<br>- `VERIFYING`: The checksum is being verified. |
- | IS_ERROR | Indicates whether an error occurred during the major compaction. Valid values: |
- | IS_SUSPENDED | Indicates whether the major compaction is suspended. Valid values: |
+ | IS_ERROR | Indicates whether an error occurred during the major compaction. Valid values: |
+ | IS_SUSPENDED | Indicates whether the major compaction is suspended. Valid values: |
| INFO | The major compaction information. |
- * Query the `GV$OB_MERGE_INFO` or `GV$OB_TABLET_COMPACTION_HISTORY` view for partition-level major compaction information
+ * Query the `GV$OB_MERGE_INFO` or `GV$OB_TABLET_COMPACTION_HISTORY` view for partition-level major compaction information.
The `GV$OB_MERGE_INFO` view displays the basic statistics of tablet-level major compactions.
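+   For instance, a hedged sketch of filtering the view for partition-level (medium) compaction records, assuming tablet ID `200001`:
+
+   ```sql
+   obclient> SELECT * FROM oceanbase.GV$OB_MERGE_INFO WHERE ACTION = 'MEDIUM_MERGE' AND TABLET_ID = 200001\G
+   ```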
@@ -301,7 +301,7 @@ You can query views in a user tenant for major compaction information of the cur
2. Execute the following statements to view major compaction information.
- * Query the `DBA_OB_ZONE_MAJOR_COMPACTION` or `DBA_OB_MAJOR_COMPACTION` view for tenant-level major compaction information of the current tenant
+ * Query the `DBA_OB_ZONE_MAJOR_COMPACTION` or `DBA_OB_MAJOR_COMPACTION` view for tenant-level major compaction information of the current tenant.
The `DBA_OB_ZONE_MAJOR_COMPACTION` view displays the major compaction information of each zone of the current tenant.
@@ -324,7 +324,7 @@ You can query views in a user tenant for major compaction information of the cur
:::
- A sample query result is as follows:
+ The query result is as follows:
```shell
*************************** 1. row ***************************
@@ -349,7 +349,7 @@ You can query views in a user tenant for major compaction information of the cur
| LAST_FINISH_TIME | The time when the last major compaction was completed. |
| START_TIME | The time when the major compaction started. |
| STATUS | The major compaction status. Valid values:<br>- `IDLE`: No major compaction is in progress.<br>- `COMPACTING`: A major compaction is in progress.<br>- `VERIFYING`: The checksum is being verified. |
- | IS_ERROR | Indicates whether an error occurred during the major compaction. Valid values: |
+ | IS_ERROR | Indicates whether an error occurred during the major compaction. Valid values: |
| INFO | The major compaction information. |
The `DBA_OB_MAJOR_COMPACTION` view displays the global major compaction information of the current tenant.
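+   A minimal status check against the fields described below, run in the user tenant in MySQL mode:
+
+   ```sql
+   obclient> SELECT STATUS, IS_ERROR, IS_SUSPENDED FROM oceanbase.DBA_OB_MAJOR_COMPACTION;
+   ```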
@@ -384,8 +384,8 @@ You can query views in a user tenant for major compaction information of the cur
| LAST_FINISH_TIME | The time when the last major compaction was completed. |
| START_TIME | The time when the major compaction started. |
| STATUS | The major compaction status. Valid values:<br>- `IDLE`: No major compaction is in progress.<br>- `COMPACTING`: A major compaction is in progress.<br>- `VERIFYING`: The checksum is being verified. |
- | IS_ERROR | Indicates whether an error occurred during the major compaction. Valid values: |
- | IS_SUSPENDED | Indicates whether the major compaction is suspended. Valid values: |
+ | IS_ERROR | Indicates whether an error occurred during the major compaction. Valid values: |
+ | IS_SUSPENDED | Indicates whether the major compaction is suspended. Valid values: |
| INFO | The major compaction information. |
* Query the `GV$OB_TABLET_COMPACTION_HISTORY` view for partition-level major compaction information of the current tenant.
@@ -411,7 +411,7 @@ You can query views in a user tenant for major compaction information of the cur
:::
- A sample query result is as follows:
+ The query result is as follows:
```shell
+----------------+----------+-----------+-------+-----------+-----------------+---------------------+------------------------------+------------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+----------------------------------------+---------------+-------------------------------------------------------------------------------------+-------------+-----------+
diff --git a/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/700.modify-a-merge-configuration.md b/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/700.modify-a-merge-configuration.md
index e5d83dcc93..a850daebb9 100644
--- a/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/700.modify-a-merge-configuration.md
+++ b/en-US/700.reference/200.system-management/500.manage-data-storage/200.merge-management/700.modify-a-merge-configuration.md
@@ -7,7 +7,7 @@
# Modify major compaction settings
-This topic describes the major compaction parameters and describes how to modify the parameters.
+This topic introduces the major compaction parameters and describes how to modify the parameters.
## Major compaction parameters
@@ -17,16 +17,17 @@ The following table describes the parameters related to major compactions.
|-------------------------------|----------------------------------------------------------------------------------------------------------|-------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| enable_major_freeze | Specifies whether to enable major compaction. It is a cluster-level parameter. | True | |
| major_freeze_duty_time | The scheduled time for daily major compactions. It is a cluster-level parameter. | 02:00 | \[00:00,24:00\] |
-| major_compact_trigger | The number of minor compactions for triggering a major compaction. It is a tenant-level parameter. | 0 | \[0, 65535\] |
-| default_progressive_merge_num | The default major compaction mode specified during table creation. It is a tenant-level parameter. | 0 | \[0, +∞)<br>- `0`: indicates that the default value 100 is used.<br>- `1`: indicates that a full compaction will be force executed without executing a progressive compaction.<br>- `> 1`: indicates that the specified number of progressive compactions will be executed when a schema change occurs.<br>**Note**: You can also modify the major compaction behavior after table creation by using the `ALTER TABLE table_name SET PROGRESSIVE_MERGE_NUM=0;` statement. |
+| major_compact_trigger | The number of minor compactions for triggering a major compaction. It is a tenant-level parameter. | 0 | \[0,65535\] |
+| default_progressive_merge_num | The default major compaction mode specified during table creation. It is a tenant-level parameter. | 0 | \[0, +∞)<br>- `0`: indicates that 100 progressive compactions are performed by default.<br>- `1`: indicates that a full compaction will be forcibly executed without executing a progressive compaction.<br>- `> 1`: indicates that the specified number of progressive compactions will be executed when a schema change occurs.<br>**Note**: You can also use the `ALTER TABLE table_name SET PROGRESSIVE_MERGE_NUM=0;` statement to modify the major compaction mode for a table after it is created. |
| merger_check_interval | The interval for checking the major compaction progress of each zone. It is a tenant-level parameter. | 10m | \[10s, 60m\] |
## Modify major compaction parameters by using an SQL statement
1. Log on to the database as a tenant administrator.
-2. Execute the following sample statement to modify a major compaction parameter:
+2. Modify a major compaction parameter.
+ Here is an example:
```sql
From 31ad1ab237a2a0fbecf88f243f1fe4a9d4b5cc17 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Mon, 15 Apr 2024 16:46:17 +0800
Subject: [PATCH 21/63] v430-200.data-storage-update
---
.../200.data-storage/200.MEMTable.md | 2 +-
.../200.data-storage/300.SSTable.md | 2 +-
.../320.columnstore-engine.md | 48 +++++++++----------
.../400.compression-and-encoding.md | 10 ++--
4 files changed, 31 insertions(+), 31 deletions(-)
diff --git a/en-US/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/200.MEMTable.md b/en-US/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/200.MEMTable.md
index 4989fefa63..e7cacc3db7 100644
--- a/en-US/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/200.MEMTable.md
+++ b/en-US/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/200.MEMTable.md
@@ -11,7 +11,7 @@
The memory storage engine (MemTable) of OceanBase Database consists of B-trees and hash tables. When data is inserted, updated, or deleted, the data is written into the memory block. Hash tables and B-trees store the pointers to the corresponding data.
-![MemTable](https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/observer-enterprise/V4.2.1/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/memtable-structure.png)
+![memtable-structure.png](https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/observer-enterprise/V4.2.1/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/memtable-structure.png)
## Characteristics of the two data structures
diff --git a/en-US/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/300.SSTable.md b/en-US/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/300.SSTable.md
index a17b492f60..46377488d6 100644
--- a/en-US/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/300.SSTable.md
+++ b/en-US/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/300.SSTable.md
@@ -7,7 +7,7 @@
# SSTables
-In OceanBase Database, an SSTable is a basic unit that is used to manage data in each partition of a user table. If the size of a MemTable reaches a specific threshold, OceanBase Database freezes the MemTable and compacts data in the MemTable to the disk. The structure obtained after the minor compaction is called an SSTable or a minor SSTable. When a global compaction occurs in a cluster, a major compaction is performed on minor SSTables of all user table partitions based on the compaction snapshot to generate a major SSTable. All SSTables are constructed in similar ways. Each SSTable consists of metadata and a series of macroblocks. Each macroblock can be divided into multiple microblocks. Rows in a microblock are organized in the flat or encoding format based on user table modes.
+In OceanBase Database, an SSTable is a basic unit that is used to manage data in each partition of a user table. If the size of a MemTable reaches a specific threshold, OceanBase Database freezes the MemTable and compacts data in the MemTable to the disk. The structure obtained after the minor compaction is called a mini SSTable or a minor SSTable. When a global compaction occurs in a cluster, a major compaction is performed on minor SSTables of all user table partitions based on the compaction snapshot to generate a major SSTable. All SSTables are constructed in similar ways. Each SSTable consists of metadata and a series of macroblocks. Each macroblock can be divided into multiple microblocks. Rows in a microblock are organized in the flat or encoding format based on user table modes.
![SSTable](https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/observer-enterprise/V4.2.1/EN_US/700.reference/100.oceanbase-database-concepts/%E5%86%85%E6%A0%B828.png)
diff --git a/en-US/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/320.columnstore-engine.md b/en-US/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/320.columnstore-engine.md
index a39de2210d..5a56cc2f53 100644
--- a/en-US/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/320.columnstore-engine.md
+++ b/en-US/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/320.columnstore-engine.md
@@ -23,9 +23,9 @@ As a native distributed database, OceanBase Database stores user data in multipl
Random updates in column store scenarios are controllable. On this basis, OceanBase Database provides a set of column store implementation methods transparent to upper-layer business based on the characteristics of baseline data and incremental data:
-* Baseline data is stored by column, and incremental data is stored by row. Your DML operations are not affected, and upstream and downstream data is seamlessly synchronized. You can perform transaction operations on column store tables the same way as on row store tables.
+* Baseline data is stored by column, and incremental data is stored by row. Your DML operations are not affected, and upstream and downstream data is seamlessly synchronized. You can perform transaction operations on columnstore tables the same way as on rowstore tables.
-* In column store mode, the data of each column is stored as an independent SSTable, and the SSTables of all columns are combined into a virtual SSTable as baseline data for column store.
+* In columnstore mode, the data of each column is stored as an independent SSTable, and the SSTables of all columns are combined into a virtual SSTable as baseline data for columnstore.
@@ -45,9 +45,9 @@ In addition to implementing the column store mode in the storage engine, OceanBa
* Integrated storage
- * You can specify the column store, row store, or column- and row-redundant store mode for a table based on the type of its business load. The storage mode does not affect operations, such queries and backup and restore, on the table.
+   * You can specify the column store, row store, or column-row redundant store mode for a table based on the type of its business load. The storage mode does not affect operations, such as queries and backup and restore, on the table.
- * Column store tables support all online and offline DDL operations, data types, and secondary indexes. You can use column store the same way you use row store.
+ * Columnstore tables support all online and offline DDL operations, data types, and secondary indexes. You can use column store the same way you use row store.
* Integrated transactions
@@ -59,23 +59,23 @@ In addition to implementing the column store mode in the storage engine, OceanBa
Major compactions in column store are significantly different from those in row store. Specifically, because incremental data is stored by row, it must be merged with baseline data before it is split and saved to a separate SSTable for each column. The compaction time and resource usage will significantly increase compared with those in row store.
- To speed up major compactions of column store tables, OceanBase Database greatly optimizes the compaction process. Like row store tables, column store tables support horizontal splitting and parallel major compactions for speed-up. In addition, column store tables support vertical splitting. Major compactions for multiple columns in a column store table are combined into one major compaction task. The database can automatically increase or decrease the number of columns in a task based on system resources. This achieves balance between the major compaction speed and the memory overheads.
+ To speed up major compactions of columnstore tables, OceanBase Database greatly optimizes the compaction process. Like rowstore tables, columnstore tables support horizontal splitting and parallel major compactions for speed-up. In addition, columnstore tables support vertical splitting. Major compactions for multiple columns in a columnstore table are combined into one major compaction task. The database can automatically increase or decrease the number of columns in a task based on system resources. This achieves balance between the major compaction speed and the memory overheads.
* Column encoding algorithm
- Data stored in OceanBase Database undergoes two stages of compression: hybrid row-column encoding provide by OceanBase Database and general compression. As a built-in algorithm of the database, hybrid row-column encoding supports direct queries without decompression. It also supports speed-up of queries, especially AP queries, based on encoded information.
+ Data stored in OceanBase Database undergoes two stages of compression: hybrid row-column encoding provided by OceanBase Database and general compression. As a built-in algorithm of the database, hybrid row-column encoding supports direct queries without decompression. It also supports speed-up of queries, especially AP queries, based on encoded information.
- Hybrid row-column encoding is designed mainly for row structures. Therefore, OceanBase Database provides a new column encoding algorithm for column store tables. Compared with the original encoding algorithm, the new algorithm supports comprehensive vectorized execution of queries, supports single instruction, multiple data (SIMD) optimization for compatibility with various instruction sets, and greatly increases the compression ratio for numeric types. This way, the new algorithm makes great improvements in terms of the performance and compression ratio.
+ Hybrid row-column encoding is designed mainly for row structures. Therefore, OceanBase Database provides a new column encoding algorithm for columnstore tables. Compared with the original encoding algorithm, the new algorithm supports comprehensive vectorized execution of queries, supports single instruction, multiple data (SIMD) optimization for compatibility with various instruction sets, and greatly increases the compression ratio for numeric types. This way, the new algorithm makes great improvements in terms of the performance and compression ratio.
* Skip index
- Regular column store databases pre-aggregate column data at a specific granularity. The aggregation results are persisted along with the data. When you query or request to access column data, the database can filter data by pre-aggregated data. This significantly reduces data access overheads and I/O consumption.
+ Regular columnstore databases pre-aggregate column data at a specific granularity. The aggregation results are persisted along with the data. When you query or request to access column data, the database can filter data by pre-aggregated data. This significantly reduces data access overheads and I/O consumption.
- OceanBase Database supports the skip index feature in the column store engine. The data of each column is aggregated at the microblock granularity for calculating the maximum value, minimum value, and total number of NULLs. Then the aggregation and accumulation are performed upwards, layer by layer, to obtain values at the macroblock, SSTable, and larger granularities. When you initiate a query, the system continuously drills down to select aggregated values at the appropriate granularity based on the scan range for filtering and aggregated output.
+ OceanBase Database supports the skip index feature in the columnstore engine. The data of each column is aggregated at the microblock granularity for calculating the maximum value, minimum value, and total number of NULLs. Then the aggregation and accumulation are performed upwards, layer by layer, to obtain values at the macroblock, SSTable, and larger granularities. When you initiate a query, the system continuously drills down to select aggregated values at the appropriate granularity based on the scan range for filtering and aggregated output.
* Query pushdown
- OceanBase Database preliminarily supports simple query pushdown since V3.2.x. OceanBase Database V4.x and later fully support vectorized storage and support more complex pushdown. In the column store engine, the pushdown feature is further enhanced and expanded in the following aspects:
+ OceanBase Database preliminarily supports simple query pushdown since V3.2.x. OceanBase Database V4.x and later fully support vectorized storage and support more complex pushdown. In the columnstore engine, the pushdown feature is further enhanced and expanded in the following aspects:
* All query filters are pushed down. At the same time, the database further utilizes the skip index feature and encoded information for speed-up based on the filter type.
@@ -85,9 +85,9 @@ In addition to implementing the column store mode in the storage engine, OceanBa
## Use column store
-### Create a column store table
+### Create a columnstore table
-When you create a table, you can specify `WITH COLUMN GROUP(each column)` to create the table as a column store table.
+When you create a table, you can specify `WITH COLUMN GROUP(each column)` to create the table as a columnstore table.
```sql
obclient> CREATE TABLE tt_column_store (c1 int PRIMARY KEY, c2 int , c3 int) WITH COLUMN GROUP (each column);
@@ -115,9 +115,9 @@ The result is as follows:
1 row in set
```
-### Create a column- and row-redundant store table
+### Create a columnstore-rowstore redundant table
-If you want to achieve balance between AP business and TP business and can accept a specific degree of data redundancy, you can specify `all columns` in the `WITH COLUMN GROUP` syntax to enable redundancy of row store data.
+If you want to achieve balance between AP business and TP business and can accept a specific degree of data redundancy, you can specify `all columns` in the `WITH COLUMN GROUP` syntax to enable redundancy of rowstore data.
```sql
obclient> CREATE TABLE tt_column_row (c1 int PRIMARY KEY, c2 int , c3 int) WITH COLUMN GROUP (all columns, each column);
@@ -145,11 +145,11 @@ The result is as follows:
1 row in set
```
-### Column store scan
+### Columnstore scan
-* Query whether the execution plan contains a column store scan plan
+* Query whether the execution plan contains a columnstore scan plan
- The column store table `tt_column_store` is used as an example.
+ The columnstore table `tt_column_store` is used as an example.
```sql
obclient> EXPLAIN SELECT * FROM tt_column_store;
@@ -176,9 +176,9 @@ The result is as follows:
11 rows in set
```
- In the query result, `COLUMN TABLE FULL SCAN` in the plan indicates the range scan for the column store table.
+ In the query result, `COLUMN TABLE FULL SCAN` in the plan indicates the range scan for the columnstore table.
- `COLUMN TABLE GET` in the plan indicates the get operation with a specified primary key on the column store table. Here is an example:
+ `COLUMN TABLE GET` in the plan indicates the get operation with a specified primary key on the columnstore table. Here is an example:
```sql
obclient> EXPLAIN SELECT * FROM tt_column_store WHERE c1 = 1;
@@ -206,11 +206,11 @@ The result is as follows:
12 rows in set
```
-* Specify whether to perform column store scan for a column- and row-redundant store table by using hints
+* Specify whether to perform columnstore scan for a columnstore-rowstore redundant table by using hints
- The optimizer determines whether to perform row store scan or column store scan for a column- and row-redundant store table based on costs. For example, for full table scan in a simple scenario, the system uses row store for generating a plan by default.
+ The optimizer determines whether to perform rowstore scan or columnstore scan for a columnstore-rowstore redundant table based on costs. For example, for full table scan in a simple scenario, the system uses row store for generating a plan by default.
- The column- and row-redundant store table `tt_column_row` is used as an example.
+ The columnstore-rowstore redundant table `tt_column_row` is used as an example.
```sql
obclient> EXPLAIN SELECT * FROM tt_column_row;
@@ -237,7 +237,7 @@ The result is as follows:
11 rows in set
```
- You can also use the `USE_COLUMN_TABLE` hint to forcibly perform column store scan for the `tt_column_row` table.
+ You can also use the `USE_COLUMN_TABLE` hint to forcibly perform columnstore scan for the `tt_column_row` table.
```sql
obclient> EXPLAIN SELECT/*+ USE_COLUMN_TABLE(tt_column_row) */ * FROM tt_column_row;
@@ -264,7 +264,7 @@ The result is as follows:
11 rows in set
```
- Similarly, you can use the `NO_USE_COLUMN_TABLE` hint to forcibly forbid column store scan for the table.
+ Similarly, you can use the `NO_USE_COLUMN_TABLE` hint to forcibly forbid columnstore scan for the table.
```sql
obclient> EXPLAIN SELECT /*+ NO_USE_COLUMN_TABLE(tt_column_row) */ c2 FROM tt_column_row;
diff --git a/en-US/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/400.compression-and-encoding.md b/en-US/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/400.compression-and-encoding.md
index ec0efef534..a962388281 100644
--- a/en-US/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/400.compression-and-encoding.md
+++ b/en-US/700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/400.compression-and-encoding.md
@@ -7,25 +7,25 @@
# Compression and encoding
-In database systems, data is often compressed to different degrees to reduce storage costs. In analytical processing (AP)-oriented column store databases, data is encoded and compressed by column to improve query performance. However, in most compression algorithms, a higher compression ratio indicates more complex calculation and slower compression/decompression. In traditional B-tree-based databases, data compression during writes may increase the computing load of the CPU and affect the write performance. However, in the LSM-tree architecture of OceanBase Database, data can be compressed only during compactions, which does not affect data writes. Therefore, compression methods with higher compression ratios can be used. The superb compression capabilities of OceanBase Database are proven in application scenarios.
+In database systems, data is often compressed to different degrees to reduce storage costs. In analytical processing (AP)-oriented columnstore databases, data is encoded and compressed by column to improve query performance. However, in most compression algorithms, a higher compression ratio indicates more complex calculation and slower compression/decompression. In traditional B-tree-based databases, data compression during writes may increase the computing load of the CPU and affect the write performance. However, in the LSM-tree architecture of OceanBase Database, data can be compressed only during compactions, which does not affect data writes. Therefore, compression methods with higher compression ratios can be used. The superb compression capabilities of OceanBase Database are proven in application scenarios.
In OceanBase Database, when memory usage of a MemTable reaches a specified threshold or during daily compaction, a minor or major compaction is triggered to store data in the MemTable to the disk and compact the data into static data in an SSTable. Compared with MemTables, SSTables store more data, including more cold data. When SSTables are continuously generated during compactions, OceanBase Database compresses and encodes data in the SSTables to save storage space on the disk. This reduces I/O operations when you query data in the SSTables. In an SSTable, data is structured by block. Fixed-length macroblocks of 2 MB ease the management of storage space. Variable-length microblocks in macroblocks facilitate data compression. Microblocks are stored in the flat or encoding format. OceanBase Database compresses and encodes data by microblock. A microblock in the encoding format is constructed through row padding, encoding, general compression (optional), and encryption (optional). Then, the microblock is stored on the disk and written to a fixed-length macroblock. Encoding and general compression in this process are two data compression methods in OceanBase Database.
## General compression
-General compression is a process in which a compression algorithm is used to directly compress data blocks regardless of the data structure. This compression method encodes data blocks based on the characteristics of binary data to reduce the redundancy of stored data. Compressed data cannot be randomly accessed. Both compression and decompression are performed on data blocks. OceanBase Database supports four compression algorithms: **zlib, snappy, lz4**, and **zstd**. The compression level of the zstd and lz4 algorithms is 1. The compression level of the zlib algorithm is 6. The snappy algorithm uses the default compression level. The four algorithms are used to compress microblocks of 16 KB during compression testing in OceanBase Database. During the testing, the snappy and lz4 algorithms achieve high compression speeds with low compression ratios, and the zlib and zstd algorithms achieved high compression ratios with low compression speeds. The lz4 and snappy algorithms have similar compression ratios, but the lz4 algorithm has higher compression and decompression speeds. The zstd and zlib algorithms have similar compression ratios, but the zstd algorithm has higher compression and decompression speeds. In MySQL mode, OceanBase Database allows you to use one of the preceding compression algorithms. In Oracle mode, OceanBase Database is compatible with the compression algorithms of Oracle Database and allows you to use either the lz4 or zstd algorithm.
+General compression is a process in which a compression algorithm is used to directly compress data blocks regardless of the data structure. This compression method encodes data blocks based on the characteristics of binary data to reduce the redundancy of stored data. Compressed data cannot be randomly accessed. Both compression and decompression are performed on data blocks. OceanBase Database supports four compression algorithms: **zlib**, **snappy**, **lz4**, and **zstd**. The compression level of the zstd and lz4 algorithms is 1. The compression level of the zlib algorithm is 6. The snappy algorithm uses the default compression level. The four algorithms are used to compress microblocks of 16 KB during compression testing in OceanBase Database. During the testing, the snappy and lz4 algorithms achieved high compression speeds with low compression ratios, and the zlib and zstd algorithms achieved high compression ratios with low compression speeds. The lz4 and snappy algorithms have similar compression ratios, but the lz4 algorithm has higher compression and decompression speeds. The zstd and zlib algorithms have similar compression ratios, but the zstd algorithm has higher compression and decompression speeds. In MySQL mode, OceanBase Database allows you to use one of the preceding compression algorithms. In Oracle mode, OceanBase Database is compatible with the compression algorithms of Oracle Database and allows you to use either the lz4 or zstd algorithm.
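+
+A hedged sketch of specifying one of these algorithms for a table in MySQL mode; the algorithm-version string (here `zstd_1.3.8`) is an assumption and may differ across releases:
+
+```sql
+obclient> CREATE TABLE t_compress (c1 INT PRIMARY KEY, c2 VARCHAR(100)) COMPRESSION = 'zstd_1.3.8';
+```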
## Encoding
Based on general compression, OceanBase Database provides a hybrid row-column storage encoding method for databases. In contrast to general compression, encoding is a process in which a compression algorithm compresses data blocks based on the format and semantics of data in data blocks. OceanBase Database is a relational database in which data is organized by table. Each column of a table represents a fixed type of data. Therefore, data in the same column is similar in a logical sense. In specific scenarios, data in adjacent rows of a business table are also similar. Therefore, you can compress and store data by column to improve compression performance. To compress data by column, OceanBase Database introduces microblocks in the encoding format. Unlike microblocks in the flat format in which all data is serialized by row, microblocks in the encoding format are stored in hybrid row-column storage mode. Logically, a microblock still stores a set of row data, but the data is encoded by column. Fixed-length encoded data is stored in the column store area of the microblock, and variable-length data is stored in the variable-length area by row. Data in microblocks in encoding format can be randomly accessed. If you want to read a specific row of data in a microblock, you can decode only the row of data. This prevents specific decompression algorithms from decompressing the entire data block when you want to read only a part of data in the data block. To reduce projection overheads, you can also decode only specified columns during vectorized execution.
-OceanBase Database supports a variety of encoding formats for compression by column, such as dictionary encoding, run-length encoding (RLE), and delta encoding that are common in column store databases. If a column stores fixed-length values, such as timestamps and bigint values, and data in the microblock is distributed within one value range, delta encoding can achieve high compression performance. Delta encoding stores only the differences between the values of each row and the minimum value in the microblock, and performs bit-packing to reduce the amount of data that is stored. If data in a microblock has small cardinality, dictionary encoding and RLE can construct a dictionary in the microblock and store the references of each row to compress the data. In an extreme case, a column in a microblock may store basically the same data. In this case, OceanBase Database adopts constant encoding to store only constants and values in the microblock that are not equal to the constants. This further increases the compression ratio.
+OceanBase Database supports a variety of encoding formats for compression by column, such as dictionary encoding, run-length encoding (RLE), and delta encoding that are common in columnstore databases. If a column stores fixed-length values, such as timestamps and bigint values, and data in the microblock is distributed within one value range, delta encoding can achieve high compression performance. Delta encoding stores only the differences between the values of each row and the minimum value in the microblock, and performs bit-packing to reduce the amount of data that is stored. If data in a microblock has small cardinality, dictionary encoding and RLE can construct a dictionary in the microblock and store the references of each row to compress the data. In an extreme case, a column in a microblock may store basically the same data. In this case, OceanBase Database adopts constant encoding to store only constants and values in the microblock that are not equal to the constants. This further increases the compression ratio.
In addition to these common encoding formats, OceanBase Database provides encoding formats for strings. If a column of data has the same prefix, OceanBase Database adopts prefix encoding to store the prefix and the suffix of each row. If a column of data is fixed-length strings with multiple same bytes, OceanBase Database adopts fixed-length string diff encoding to store the differences between the pattern string and each row. If the character cardinality of a column of strings in a microblock is less than 16, OceanBase Database uses a hexadecimal number to represent every character in each string and performs hex encoding. These string-related encoding formats can achieve high compression performance for long business IDs and formatted string data.
In business tables, data in the same column is similar, and data in different columns can also be related. Therefore, OceanBase Database introduces span-column encoding. If most of the data in two columns is the same, OceanBase Database adopts column equal encoding to make one column a reference of another column. If a column stores the prefixes of another column of data, OceanBase Database adopts column prefix encoding to store only the column and the suffixes of another column of data. Span-column encoding reduces data redundancy in tables. It can be used to achieve high compression performance for duplicate timestamps and composite columns, which increases the overall compression ratio of macroblocks. However, span-column encoding and decoding are complex. During encoding, OceanBase Database must check whether data in different columns meets encoding rules. During decoding, OceanBase Database must access referenced column data based on the reference relationship, process the data, and then decode the data. Compared with other encoding formats, span-column encoding is not as suitable to CPUs. Different columns may be referenced in a cascaded manner, which requires special processing.
-In addition to encoding by column, OceanBase Database allows you to compress a column of data by using multiple encoding methods. For example, you can use hex encoding together with other string encoding methods. However, this makes encoding and decoding more complex. The column store and row store of NULLs vary with different encoding methods. However, most encoding methods use a NULL bitmap to indicate whether data of a row corresponding to a column is NULL. Encoding formats supported by OceanBase Database are related to both table schemas and data characteristics such as the value range in a microblock. Therefore, a database administrator (DBA) cannot specify column-based encoding when designing the table data model to achieve optimal compression performance. To increase the compression ratio, OceanBase Database adaptively detects a more suitable encoding method during compactions to encode data. To detect m encoding methods for n columns of data, it is assumed that OceanBase Database must encode data m × n times. Then, OceanBase Database can determine the optimal encoding method for each column. This process becomes more complex if span-column encoding is introduced. Therefore, OceanBase Database optimizes the encoding method selection algorithm to improve data encoding efficiency during compactions.
+In addition to encoding by column, OceanBase Database allows you to compress a column of data by using multiple encoding methods. For example, you can use hex encoding together with other string encoding methods. However, this makes encoding and decoding more complex. How null values are stored, by column or by row, varies with the encoding method, but most encoding methods use a null bitmap to indicate whether the value of a row in a column is null. The encoding formats supported by OceanBase Database depend on both table schemas and data characteristics, such as the value range in a microblock. Therefore, a database administrator (DBA) cannot achieve optimal compression performance by specifying column-based encoding when designing the table data model. To increase the compression ratio, OceanBase Database adaptively detects a more suitable encoding method during compactions to encode data. To evaluate m encoding methods for n columns of data, OceanBase Database would need to encode the data m × n times before it could determine the optimal encoding method for each column. This process becomes more complex if span-column encoding is introduced. Therefore, OceanBase Database optimizes the encoding method selection algorithm to improve data encoding efficiency during compactions.
In OceanBase Database V3.2 and later, encoding supports vectorized execution and filter pushdown. This allows you to filter encoded data based on encoding characteristics, which reduces overheads and improves filtering efficiency for specific encoding methods. OceanBase Database also supports Single Instruction, Multiple Data (SIMD) filtering for fixed-length data stored by column based on the Advanced Vector Extensions 2.0 (AVX2) instruction set. In addition, data stored by column in microblocks is decoded by column in vectorized execution, which facilitates caching and branch prediction.
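Although encoding formats are chosen adaptively during compactions, you can still control at the table level whether data encoding and block compression are enabled. The following is a minimal sketch (the table and column names are illustrative, not from the original document):

```sql
-- Enable data encoding (ROW_FORMAT = DYNAMIC) and zstd block compression.
-- The concrete encoding format is still picked adaptively per column and
-- per microblock during compactions; the DBA does not choose it directly.
CREATE TABLE enc_demo (
    id   BIGINT PRIMARY KEY,
    ts   TIMESTAMP,      -- fixed-length values: a candidate for delta encoding
    city VARCHAR(64)     -- low-cardinality strings: dictionary/RLE candidates
) ROW_FORMAT = DYNAMIC COMPRESSION = 'zstd_1.3.8';
```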
@@ -67,7 +67,7 @@ Valid values of the `compression` option in MySQL mode are as follows:
Note
-The zlib compression algorithm of different minor versions may generate different compression results for the same data. To avoid replica restore failures during an upgrade, the zlib_1.0 compression algorithm is no longer supported in OceanBase Database V4.3.0 and later.
+The zlib compression algorithm of different minor versions may generate different compression results for the same data. To avoid replica restore failures during an upgrade, the zlib_1.0 compression algorithm is no longer supported starting from OceanBase Database V4.3.0.
* none
From ea29c34f03ba9c4428d4423a9f28cda65d9813c2 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Mon, 15 Apr 2024 19:06:36 +0800
Subject: [PATCH 22/63] v430-beta-200.manage-tables-of-mysql-mode-update
---
...-a-table-for-mysql-tenant-of-mysql-mode.md | 45 ++-
...the-definition-of-a-table-of-mysql-mode.md | 2 +-
.../600.change-table-of-mysql-mode.md | 257 ++++++++++--------
3 files changed, 193 insertions(+), 111 deletions(-)
diff --git a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/200.create-a-table-for-mysql-tenant-of-mysql-mode.md b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/200.create-a-table-for-mysql-tenant-of-mysql-mode.md
index 6127f92385..b8a6f0e22a 100644
--- a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/200.create-a-table-for-mysql-tenant-of-mysql-mode.md
+++ b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/200.create-a-table-for-mysql-tenant-of-mysql-mode.md
@@ -138,13 +138,35 @@ Query OK, 3 rows affected
You can also execute the `CREATE TABLE LIKE` statement to copy the schema but not the data of a table.
-The syntax is as follows:
+A sample statement is as follows:
```sql
obclient> CREATE TABLE t1_like LIKE t1;
Query OK, 0 rows affected
```
+## Create a rowstore table
+
+OceanBase Database allows you to create rowstore tables and convert rowstore tables into columnstore tables.
+
+When [default_table_store_format](../../../800.configuration-items-and-system-variables/100.system-configuration-items/400.tenant-level-configuration-items/27000.default_table_store_format.md) is set to `row`, which is the default value, a rowstore table is created by default. When [default_table_store_format](../../../800.configuration-items-and-system-variables/100.system-configuration-items/400.tenant-level-configuration-items/27000.default_table_store_format.md) is not set to `row`, you can specify the `WITH COLUMN GROUP(all columns)` option to create a rowstore table.
+
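For example, here is a minimal sketch for checking and changing this tenant-level configuration item (the `column` value is illustrative):

```sql
-- Check the current default store format for newly created tables.
SHOW PARAMETERS LIKE 'default_table_store_format';

-- Make newly created tables columnstore by default.
ALTER SYSTEM SET default_table_store_format = 'column';
```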
+
+For information about how to convert a rowstore table to a columnstore table, see [Modify a table](600.change-table-of-mysql-mode.md). For information about how to create a columnstore index, see [Create an index](../500.manage-indexes-of-mysql-mode/200.create-an-index-of-mysql-mode.md).
+
+You can specify the `WITH COLUMN GROUP(all columns)` option to create a rowstore table.
+
+Here is an example:
+
+```sql
+CREATE TABLE tbl1_cg (col1 INT PRIMARY KEY, col2 VARCHAR(50)) WITH COLUMN GROUP(all columns);
+```
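To confirm the storage format of the new table, you can inspect its definition; the column group clause appears in the output (a quick sketch, output omitted):

```sql
SHOW CREATE TABLE tbl1_cg\G
```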
+
+
Note
If you choose to specify the `WITH COLUMN GROUP(all columns)` option to create a rowstore table, the table is still in the rowstore format even after you execute the `DROP COLUMN GROUP(all columns)` statement to drop the column group.
+
+
## Create a columnstore table
OceanBase Database allows you to create a columnstore table, switch a rowstore table to a columnstore table, and create a columnstore index. When you create a table in OceanBase Database, the table is a rowstore table by default. You can use the `WITH COLUMN GROUP` option to explicitly create a columnstore table or a rowstore-columnstore redundant table.
@@ -153,7 +175,7 @@ For information about how to convert a rowstore table to a columnstore table, se
You can specify the `WITH COLUMN GROUP(all columns, each column)` option to create a rowstore-columnstore redundant table.
-**Here is an example:**
+Here is an example:
```sql
CREATE TABLE tbl1_cg (col1 INT PRIMARY KEY, col2 VARCHAR(50)) WITH COLUMN GROUP(all columns, each column);
@@ -161,8 +183,23 @@ CREATE TABLE tbl1_cg (col1 INT PRIMARY KEY, col2 VARCHAR(50)) WITH COLUMN GROUP(
You can specify the `WITH COLUMN GROUP(each column)` option to create a columnstore table.
-**Here is an example:**
+Here is an example:
```sql
CREATE TABLE tbl2_cg (col1 INT PRIMARY KEY, col2 VARCHAR(50)) WITH COLUMN GROUP(each column);
-```
\ No newline at end of file
+```
+
+If you import a large amount of data to a columnstore table, you must initiate a major compaction to improve the read performance, and collect statistics to help the optimizer adjust its execution strategy.
+
+- **Major compaction**: After a batch data import, we recommend that you perform a major compaction to improve the read performance. A major compaction consolidates fragmented data into contiguous physical storage, thereby reducing the disk I/O required to read data. After a data import, initiate a major compaction in the tenant to ensure that all data is compacted to the baseline layer. For more information, see [`MAJOR and MINOR`](../../../500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/1500.alter-system-freeze-of-mysql-mode.md).
+
+
+- **Statistics collection**: After the major compaction, we recommend that you start statistics collection to help the optimizer generate an efficient query plan and execution strategy. You can execute the [`GATHER_SCHEMA_STATS`](../../../500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1800.gather-schema-stats-mysql.md) procedure to collect statistics for all tables and query the [`GV$OB_OPT_STAT_GATHER_MONITOR`](../../../700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/13800.gv_ob_opt_stat_gather_monitor-of-mysql-mode.md) view for the collection progress.
+
+Note that the major compaction may slow down as the amount of data in the columnstore table increases.
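The following sketch strings these steps together (the schema name `test` is illustrative; the statements are the ones covered by the linked topics):

```sql
-- Trigger a major compaction in the current tenant after the batch import.
ALTER SYSTEM MAJOR FREEZE;

-- After the compaction completes, collect statistics for the schema and
-- watch the collection progress through the monitoring view.
CALL dbms_stats.gather_schema_stats('test');
SELECT * FROM oceanbase.GV$OB_OPT_STAT_GATHER_MONITOR;
```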
+
+## References
+
+* [CREATE TABLE](../../../500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2600.create-table-of-mysql-mode.md)
+
+* [ALTER TABLE](../../../500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/1600.alter-table-of-mysql-mode.md)
\ No newline at end of file
diff --git a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/500.view-the-definition-of-a-table-of-mysql-mode.md b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/500.view-the-definition-of-a-table-of-mysql-mode.md
index 9b64130e53..564cd5b82a 100644
--- a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/500.view-the-definition-of-a-table-of-mysql-mode.md
+++ b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/500.view-the-definition-of-a-table-of-mysql-mode.md
@@ -28,7 +28,7 @@ where
* `DEFAULT CHARSET = utf8mb4` indicates that the default character set is `utf8mb4`.
-* `ROW_FORMAT = DYNAMIC` indicates that the table is a dynamic table. If a table contains fields of the VARCHAR type, TEXT type or its variants, and BLOB type or its variants, the table is a dynamic table. In other words, `ROW_FORMAT` of the table is `DYNAMIC`. If `ROW_FORMAT` of a table is `FIXED`, the table is a static table.
+* `ROW_FORMAT = DYNAMIC` indicates that data encoding is enabled.
* `COMPRESSION` indicates the compression method of the table.
diff --git a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/600.change-table-of-mysql-mode.md b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/600.change-table-of-mysql-mode.md
index a9a22d8859..ac17e9eef2 100644
--- a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/600.change-table-of-mysql-mode.md
+++ b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/600.change-table-of-mysql-mode.md
@@ -13,7 +13,7 @@ After a table is created, you can use the `ALTER TABLE` statement to modify it.
If you do not specify the collation or character set when you create a table, the character set and collation of the database are used by default. For more information, see [Database-level character sets and collations](../../../500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/100.basic-elements-of-mysql-mode/300.character-set-and-collation-of-mysql-mode/400.specify-character-set-and-collation-of-mysql-mode.md).
-After you create a table, you can modify the collation and character set of the table. The syntax is as follows:
+After you create a table, you can modify the collation and character set of the table. The statement is as follows:
```sql
ALTER TABLE table_name [[DEFAULT] CHARACTER SET [=] charset_name] [COLLATE [=] collation_name];
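-- A sample invocation of the syntax above (the table name tbl1 and the
-- character set and collation are illustrative):
ALTER TABLE tbl1 DEFAULT CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci;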
@@ -138,152 +138,195 @@ Assume that the schema of a table named `test` is as follows:
### Modify column attributes
-You can rename a column, and modify the data type and default value of a column.
+You can modify the name, type, default value, and Skip Index attribute of a column.
-* Rename a column
+#### Rename a column
- When you rename a column by using the `RENAME COLUMN` keyword, observe the following notes:
+When you rename a column by using the `RENAME COLUMN` keyword, observe the following notes:
- * A column that is indexed or constrained by a `FOREIGN KEY` constraint can be renamed, and the RENAME COLUMN operation is automatically cascaded to the index definition and the `FOREIGN KEY` constraint.
+* A column that is indexed or constrained by a `FOREIGN KEY` constraint can be renamed, and the RENAME COLUMN operation is automatically cascaded to the index definition and the `FOREIGN KEY` constraint.
- * A column that is referenced by a view or stored procedure can be renamed, but you need to manually modify the definition of the view or stored procedure.
+* A column that is referenced by a view or stored procedure can be renamed, but you need to manually modify the definition of the view or stored procedure.
- * You cannot rename and drop columns at the same time.
+* You cannot rename and drop columns at the same time.
- * You cannot rename columns and modify partitions, such as adding or dropping partitions, at the same time.
+* You cannot rename columns and modify partitions, such as adding or dropping partitions, at the same time.
- Assume that the schema of a table named `test` is as follows:
+Assume that the schema of a table named `test` is as follows:
- ```shell
- +-------+-------------+------+-----+---------+-------+
- | Field | Type | Null | Key | Default | Extra |
- +-------+-------------+------+-----+---------+-------+
- | c1 | int(11) | NO | PRI | NULL | |
- | c2 | varchar(50) | YES | | NULL | |
- +-------+-------------+------+-----+---------+-------+
- 2 rows in set
- ```
+```shell
++-------+-------------+------+-----+---------+-------+
+| Field | Type | Null | Key | Default | Extra |
++-------+-------------+------+-----+---------+-------+
+| c1 | int(11) | NO | PRI | NULL | |
+| c2 | varchar(50) | YES | | NULL | |
++-------+-------------+------+-----+---------+-------+
+2 rows in set
+```
- The following sample code renames the `c2` column of the `test` table to `c`.
+The following sample code renames the `c2` column of the `test` table to `c`.
- ```sql
- obclient> ALTER TABLE test RENAME COLUMN c2 TO c;
- ```
+```sql
+obclient> ALTER TABLE test RENAME COLUMN c2 TO c;
+```
- Execute the `DESCRIBE test` statement to query the table schema. The query result is as follows:
+Execute the `DESCRIBE test` statement to query the table schema. The query result is as follows:
- ```shell
- +-------+-------------+------+-----+---------+-------+
- | Field | Type | Null | Key | Default | Extra |
- +-------+-------------+------+-----+---------+-------+
- | c1 | int(11) | NO | PRI | NULL | |
- | c | varchar(50) | YES | | NULL | |
- +-------+-------------+------+-----+---------+-------+
- 2 rows in set
- ```
+```shell
++-------+-------------+------+-----+---------+-------+
+| Field | Type | Null | Key | Default | Extra |
++-------+-------------+------+-----+---------+-------+
+| c1 | int(11) | NO | PRI | NULL | |
+| c | varchar(50) | YES | | NULL | |
++-------+-------------+------+-----+---------+-------+
+2 rows in set
+```
- Your operation to rename a column fails in the following scenarios:
+Your operation to rename a column fails in the following scenarios:
- * The current table already contains a column of the specified new name.
+* The current table already contains a column of the specified new name.
- However, you can address this issue by performing a loop operation. For example, you can rename the `c1` column to `c2` and the original `c2` column to `c1` by executing the `ALTER TABLE test RENAME COLUMN c1 TO c2, rename column c2 TO c1;` statement.
+  However, you can address this issue by swapping the names in a single statement. For example, you can rename the `c1` column to `c2` and the original `c2` column to `c1` by executing the `ALTER TABLE test RENAME COLUMN c1 TO c2, RENAME COLUMN c2 TO c1;` statement.
- * The column is referenced by an expression of a generated column.
+* The column is referenced by an expression of a generated column.
- * The column is referenced by a partitioning expression.
+* The column is referenced by a partitioning expression.
- * The column is referenced by a `CHECK` constraint.
+* The column is referenced by a `CHECK` constraint.
-* Change the type of a column
+#### Change the type of a column
- For more information about the rules for changing the data types of columns in the MySQL mode of OceanBase Database, see [Column type change rules](../../../500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/900.column-type-change-rule-of-mysql-mode.md).
+For more information about the rules for changing the data types of columns in the MySQL mode of OceanBase Database, see [Column type change rules](../../../500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/900.column-type-change-rule-of-mysql-mode.md).
- Assume that the schema of a table named `test` is as follows:
+Assume that the schema of a table named `test` is as follows:
- ```shell
- +-------+-------------+------+-----+---------+-------+
- | Field | Type | Null | Key | Default | Extra |
- +-------+-------------+------+-----+---------+-------+
- | c1 | int(11) | NO | PRI | NULL | |
- | c2 | varchar(50) | YES | | NULL | |
- +-------+-------------+------+-----+---------+-------+
- 2 rows in set
- ```
+```shell
++-------+-------------+------+-----+---------+-------+
+| Field | Type | Null | Key | Default | Extra |
++-------+-------------+------+-----+---------+-------+
+| c1 | int(11) | NO | PRI | NULL | |
+| c2 | varchar(50) | YES | | NULL | |
++-------+-------------+------+-----+---------+-------+
+2 rows in set
+```
- The following sample code changes the data type of the `c2` column of the `test` table to `CHAR`.
+The following sample code changes the data type of the `c2` column of the `test` table to `CHAR`.
- ```sql
- obclient> ALTER TABLE test MODIFY c2 CHAR(60);
- ```
+```sql
+obclient> ALTER TABLE test MODIFY c2 CHAR(60);
+```
- Execute the `DESCRIBE test` statement to query the table schema. The query result is as follows:
+Execute the `DESCRIBE test` statement to query the table schema. The query result is as follows:
- ```shell
- +-------+-------------+------+-----+---------+-------+
- | Field | Type | Null | Key | Default | Extra |
- +-------+-------------+------+-----+---------+-------+
- | c1 | int(11) | NO | PRI | NULL | |
- | c2 | char(60) | YES | | NULL | |
- +-------+-------------+------+-----+---------+-------+
- 2 rows in set
- ```
+```shell
++-------+-------------+------+-----+---------+-------+
+| Field | Type | Null | Key | Default | Extra |
++-------+-------------+------+-----+---------+-------+
+| c1 | int(11) | NO | PRI | NULL | |
+| c2 | char(60) | YES | | NULL | |
++-------+-------------+------+-----+---------+-------+
+2 rows in set
+```
-* Rename a column and change its data type at the same time
+#### Rename a column and change its data type at the same time
- Assume that the schema of a table named `test` is as follows:
+Assume that the schema of a table named `test` is as follows:
- ```shell
- +-------+-------------+------+-----+---------+-------+
- | Field | Type | Null | Key | Default | Extra |
- +-------+-------------+------+-----+---------+-------+
- | c1 | int(11) | NO | PRI | NULL | |
- | c2 | varchar(50) | YES | | NULL | |
- +-------+-------------+------+-----+---------+-------+
- 2 rows in set
- ```
+```shell
++-------+-------------+------+-----+---------+-------+
+| Field | Type | Null | Key | Default | Extra |
++-------+-------------+------+-----+---------+-------+
+| c1 | int(11) | NO | PRI | NULL | |
+| c2 | varchar(50) | YES | | NULL | |
++-------+-------------+------+-----+---------+-------+
+2 rows in set
+```
+
+The following sample code renames the `c2` column of the `test` table to `c` and changes its data type to `CHAR`.
+
+```sql
+obclient> ALTER TABLE test CHANGE COLUMN c2 c CHAR(60);
+```
+
+Execute the `DESCRIBE test` statement to query the table schema. The query result is as follows:
+
+```shell
++-------+-------------+------+-----+---------+-------+
+| Field | Type | Null | Key | Default | Extra |
++-------+-------------+------+-----+---------+-------+
+| c1 | int(11) | NO | PRI | NULL | |
+| c | char(60) | YES | | NULL | |
++-------+-------------+------+-----+---------+-------+
+2 rows in set
+```
+
+#### Change the default value of a column
+
+The following sample code changes the default value of the `c2` column to `2`:
+
+```sql
+obclient> ALTER TABLE test CHANGE COLUMN c2 c2 VARCHAR(50) DEFAULT 2;
+```
+
+Execute the `DESCRIBE test` statement to query the table schema. The query result is as follows:
+
+```shell
++-------+-------------+------+-----+---------+-------+
+| Field | Type | Null | Key | Default | Extra |
++-------+-------------+------+-----+---------+-------+
+| c1 | int(11) | NO | PRI | NULL | |
+| c2 | varchar(50) | YES | | 2 | |
++-------+-------------+------+-----+---------+-------+
+2 rows in set
+```
+
+You can also use the following statement to change the default value of a column:
+
+```sql
+ALTER TABLE table_name ALTER [COLUMN] column_name {SET DEFAULT const_value | DROP DEFAULT}
+```
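For instance, the following sketch (reusing the `test` table from above) first sets and then removes the default value of the `c2` column:

```sql
ALTER TABLE test ALTER COLUMN c2 SET DEFAULT '3';
ALTER TABLE test ALTER COLUMN c2 DROP DEFAULT;
```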
- The following sample code renames the `c2` column of the `test` table to `c` and changes its data type to `CHAR`.
+#### Modify the Skip Index attribute of a column
+
+OceanBase Database allows you to use the `ALTER TABLE` statement to add, modify, and delete the Skip Index attribute for a column.
+
+For more information about the Skip Index attribute, see [Skip Index attribute of columns](250.identify-skip-index-properties-of-mysql-mode.md).
+
+Here is an example:
+
+1. Execute the following statement to create a table named `test_skidx`:
```sql
- obclient>ALTER TABLE test CHANGE COLUMN c2 c CHAR(60);
+ CREATE TABLE test_skidx(
+ col1 INT SKIP_INDEX(MIN_MAX, SUM),
+ col2 FLOAT SKIP_INDEX(MIN_MAX),
+ col3 VARCHAR(1024) SKIP_INDEX(MIN_MAX),
+ col4 CHAR(10)
+ );
```
- Execute the `DESCRIBE test` statement to query the table schema. The query result is as follows:
+2. Change the Skip Index attribute of the `col2` column in the `test_skidx` table to `SUM`.
- ```shell
- +-------+-------------+------+-----+---------+-------+
- | Field | Type | Null | Key | Default | Extra |
- +-------+-------------+------+-----+---------+-------+
- | c1 | int(11) | NO | PRI | NULL | |
- | c | char(60) | YES | | NULL | |
- +-------+-------------+------+-----+---------+-------+
- 2 rows in set
+ ```sql
+ ALTER TABLE test_skidx MODIFY COLUMN col2 FLOAT SKIP_INDEX(SUM);
```
-* Change the default value of a column.
-
- The following sample code changes the default value of the `c2` column to `2`:
+3. Add the `MIN_MAX` Skip Index attribute for the `col4` column in the `test_skidx` table.
```sql
- obclient> ALTER TABLE test CHANGE COLUMN c2 c2 varchar(50) DEFAULT 2;
+ ALTER TABLE test_skidx MODIFY COLUMN col4 CHAR(10) SKIP_INDEX(MIN_MAX);
```
- Execute the `DESCRIBE test` statement to query the table schema. The query result is as follows:
+4. Delete the Skip Index attribute for the `col1` column in the `test_skidx` table.
- ```shell
- +-------+-------------+------+-----+---------+-------+
- | Field | Type | Null | Key | Default | Extra |
- +-------+-------------+------+-----+---------+-------+
- | c1 | int(11) | NO | PRI | NULL | |
- | c2 | varchar(50) | YES | | 2 | |
- +-------+-------------+------+-----+---------+-------+
- 2 rows in set
+ ```sql
+ ALTER TABLE test_skidx MODIFY COLUMN col1 INT SKIP_INDEX();
```
- You can also use the following statement to change the default value of a column:
+ or
```sql
- ALTER TABLE table_name ALTER [COLUMN] column_name {SET DEFAULT const_value | DROP DEFAULT}
+ ALTER TABLE test_skidx MODIFY COLUMN col1 INT;
```
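To verify the resulting Skip Index attributes after these changes, you can inspect the table definition (a sketch; the `SKIP_INDEX` clauses appear in the column definitions):

```sql
SHOW CREATE TABLE test_skidx\G
```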
### Modify the collation and character set of a column
@@ -372,7 +415,9 @@ Assume that the schema of a table named `test` is as follows:
obclient> ALTER TABLE test DROP c2;
```
- Execute the `DESCRIBE test` statement to query the table schema. The result is as follows:
+ Execute the `DESCRIBE test` statement to query the table schema.
+
+ The result is as follows:
```shell
+-------+-------------+------+-----+---------+-------+
@@ -466,7 +511,7 @@ When you create a table in OceanBase Database, the table is a rowstore table by
### Convert a rowstore table to a columnstore table
-**Here is an example:**
+Here is an example:
1. Create a rowstore table named `tbl1`.
@@ -482,7 +527,7 @@ When you create a table in OceanBase Database, the table is a rowstore table by
### Convert a rowstore table to a rowstore-columnstore redundant table
-**Here is an example:**
+Here is an example:
1. Create a rowstore table named `tbl2`.
@@ -498,7 +543,7 @@ When you create a table in OceanBase Database, the table is a rowstore table by
### Convert a rowstore-columnstore redundant table to a columnstore table
-**Here is an example:**
+Here is an example:
1. Create a rowstore-columnstore redundant table named `tbl3`.
@@ -514,7 +559,7 @@ When you create a table in OceanBase Database, the table is a rowstore table by
### Convert a rowstore-columnstore redundant table to a rowstore table
-**Here is an example:**
+Here is an example:
1. Create a rowstore-columnstore redundant table named `tbl4`.
From e4499855e3c89b40bc53e39b938545571698b64e Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Mon, 15 Apr 2024 22:02:33 +0800
Subject: [PATCH 23/63] v430-beta-100.manage-object-of-mysql-mode-update-2
---
.../400.add-a-partition-of-mysql-mode.md | 6 ++++--
.../200.create-an-index-of-mysql-mode.md | 9 ++++++---
.../100.create-a-dblink-of-mysql-mode.md | 4 ----
.../200.view-a-dblink-of-mysql-mode.md | 5 -----
...0.access-a-remote-database-by-ablink-of-mysql-mode.md | 4 ----
.../500.delete-a-dblink-of-mysql-mode.md | 5 -----
6 files changed, 10 insertions(+), 23 deletions(-)
diff --git a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/300.manage-partitions-of-mysql-mode/400.add-a-partition-of-mysql-mode.md b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/300.manage-partitions-of-mysql-mode/400.add-a-partition-of-mysql-mode.md
index 7d48d4acd0..8df15a5bf5 100644
--- a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/300.manage-partitions-of-mysql-mode/400.add-a-partition-of-mysql-mode.md
+++ b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/300.manage-partitions-of-mysql-mode/400.add-a-partition-of-mysql-mode.md
@@ -21,8 +21,10 @@ The following table describes the support for adding partitions to partitioned a
| Subpartitioned table | RANGE COLUMNS-RANGE, RANGE COLUMNS-RANGE COLUMNS, RANGE COLUMNS-LIST, RANGE COLUMNS-LIST COLUMNS, RANGE COLUMNS-HASH, and RANGE COLUMNS-KEY | Supported | Not supported |
| Subpartitioned table | LIST-RANGE, LIST-RANGE COLUMNS, LIST-LIST, LIST-LIST COLUMNS, LIST-HASH, and LIST-KEY | Supported | Not supported |
| Subpartitioned table | LIST COLUMNS-RANGE, LIST COLUMNS-RANGE COLUMNS, LIST COLUMNS-LIST, LIST COLUMNS-LIST COLUMNS, LIST COLUMNS-HASH, and LIST COLUMNS-KEY | Supported | Not supported |
-| Subpartitioned table | HASH-RANGE, HASH-RANGE COLUMNS, HASH-LIST, HASH-LIST COLUMNS, HASH-HASH, and HASH-KEY | Not supported | Not supported |
-| Subpartitioned table | KEY-RANGE, KEY-RANGE COLUMNS, KEY-LIST, KEY-LIST COLUMNS, KEY-HASH, and KEY-KEY | Not supported | Not supported |
+| Subpartitioned table | HASH-RANGE, HASH-RANGE COLUMNS, HASH-LIST, and HASH-LIST COLUMNS | Not supported | Supported |
+| Subpartitioned table | HASH-HASH and HASH-KEY | Not supported | Not supported |
+| Subpartitioned table | KEY-RANGE, KEY-RANGE COLUMNS, KEY-LIST, and KEY-LIST COLUMNS | Not supported | Supported |
+| Subpartitioned table | KEY-HASH and KEY-KEY | Not supported | Not supported |
## Add a partition
diff --git a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/500.manage-indexes-of-mysql-mode/200.create-an-index-of-mysql-mode.md b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/500.manage-indexes-of-mysql-mode/200.create-an-index-of-mysql-mode.md
index 67c65a7585..d061a65beb 100644
--- a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/500.manage-indexes-of-mysql-mode/200.create-an-index-of-mysql-mode.md
+++ b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/500.manage-indexes-of-mysql-mode/200.create-an-index-of-mysql-mode.md
@@ -195,7 +195,7 @@ The query result is as follows:
The syntax is as follows:
```sql
-ALTER TABLE table_name ADD SPATIAL INDEX index_name(column_g_name);
+CREATE SPATIAL INDEX index_name ON table_name(column_g_name);
```
where
@@ -217,7 +217,7 @@ Example: Create a table named `tbl2_g` and then create a spatial index named `tb
2. Create a spatial index named `tbl2_g_idx1` on the table.
```sql
- CREATE INDEX tbl2_g_idx1 ON tbl2_g(g);
+ CREATE SPATIAL INDEX tbl2_g_idx1 ON tbl2_g(g);
```
3. View the index information.
@@ -240,7 +240,7 @@ Example: Create a table named `tbl2_g` and then create a spatial index named `tb
### Use the `ALTER TABLE` statement to create a spatial index
```sql
-ALTER TABLE table_name ADD INDEX|KEY [index_name](column_g_name);
+ALTER TABLE table_name ADD SPATIAL INDEX|KEY [index_name](column_g_name);
```
where
@@ -498,6 +498,7 @@ where
* `WITH COLUMN GROUP([all columns, ]each column)` specifies the columnstore attribute of the index.
* `WITH COLUMN GROUP(all columns, each column)`: specifies to create a rowstore-columnstore redundant index.
+ * `WITH COLUMN GROUP(all columns)`: specifies to create a rowstore index.
* `WITH COLUMN GROUP(each column)`: specifies to create a columnstore index.
**Here is an example:**
@@ -539,6 +540,7 @@ where
* `WITH COLUMN GROUP([all columns, ]each column)` specifies the columnstore attribute of the index.
* `WITH COLUMN GROUP(all columns, each column)`: specifies to create a rowstore-columnstore redundant index.
+ * `WITH COLUMN GROUP(all columns)`: specifies to create a rowstore index.
* `WITH COLUMN GROUP(each column)`: specifies to create a columnstore index.
**Here is an example:**
@@ -584,6 +586,7 @@ where
* `WITH COLUMN GROUP([all columns, ]each column)` specifies the columnstore attribute of the index.
* `WITH COLUMN GROUP(all columns, each column)`: specifies to create a rowstore-columnstore redundant index.
+ * `WITH COLUMN GROUP(all columns)`: specifies to create a rowstore index.
* `WITH COLUMN GROUP(each column)`: specifies to create a columnstore index.
**Here is an example:**
diff --git a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/100.create-a-dblink-of-mysql-mode.md b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/100.create-a-dblink-of-mysql-mode.md
index f37ae80f47..184b47e665 100644
--- a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/100.create-a-dblink-of-mysql-mode.md
+++ b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/100.create-a-dblink-of-mysql-mode.md
@@ -7,10 +7,6 @@
# Create a DBLink
-
-Applicability
-This topic applies only to OceanBase Database Enterprise Edition. OceanBase Database Community Edition does not support DBLinks.
-
OceanBase Database provides DBLinks to support cross-data source access. You can use a DBLink to access a remote database from your local database.
diff --git a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/200.view-a-dblink-of-mysql-mode.md b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/200.view-a-dblink-of-mysql-mode.md
index 43d527f623..502297479d 100644
--- a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/200.view-a-dblink-of-mysql-mode.md
+++ b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/200.view-a-dblink-of-mysql-mode.md
@@ -7,11 +7,6 @@
# Query a DBLink
-
-Applicability
-This topic applies only to OceanBase Database Enterprise Edition. OceanBase Database Community Edition does not support DBLinks.
-
-
After you create a DBLink, you can query its information from views, including the user who created the DBLink, name of the DBLink, and IP address and port number of the remote database.
## Procedure
diff --git a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/300.access-a-remote-database-by-ablink-of-mysql-mode.md b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/300.access-a-remote-database-by-ablink-of-mysql-mode.md
index 68333d4f74..7f33e6f854 100644
--- a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/300.access-a-remote-database-by-ablink-of-mysql-mode.md
+++ b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/300.access-a-remote-database-by-ablink-of-mysql-mode.md
@@ -7,10 +7,6 @@
# Use a DBLink to access data in a remote database
-
-Applicability
-This topic applies only to OceanBase Database Enterprise Edition. OceanBase Database Community Edition does not support DBLinks.
-
After you create a DBLink, you can use it to access objects, such as tables and views, in a remote database.
diff --git a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/500.delete-a-dblink-of-mysql-mode.md b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/500.delete-a-dblink-of-mysql-mode.md
index 11e5e02266..ce4a5178bd 100644
--- a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/500.delete-a-dblink-of-mysql-mode.md
+++ b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/500.delete-a-dblink-of-mysql-mode.md
@@ -7,11 +7,6 @@
# Drop a DBLink
-
-Applicability
-This topic applies only to OceanBase Database Enterprise Edition. OceanBase Database Community Edition does not support DBLinks.
-
-
You can drop a DBLink that is no longer required.
## Procedure
From cc0fc96a8d408ea9285149a8ef601b9b5944f2b7 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Mon, 15 Apr 2024 23:11:54 +0800
Subject: [PATCH 24/63] v430-beta-300.database-object-management-update
---
.../200.create-an-index-of-mysql-mode.md | 10 +-
...-table-for-oracle-tenant-of-oracle-mode.md | 107 +++++---------
.../600.change-table-of-oracle-mode.md | 135 +++++++++++++++++-
.../900.lock-a-table-of-oracle-mode.md | 2 +-
.../400.add-a-partition-of-oracle-mode.md | 3 +-
.../200.create-an-index-of-oracle-mode.md | 5 +-
6 files changed, 172 insertions(+), 90 deletions(-)
diff --git a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/500.manage-indexes-of-mysql-mode/200.create-an-index-of-mysql-mode.md b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/500.manage-indexes-of-mysql-mode/200.create-an-index-of-mysql-mode.md
index d061a65beb..0272ad2dcb 100644
--- a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/500.manage-indexes-of-mysql-mode/200.create-an-index-of-mysql-mode.md
+++ b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/500.manage-indexes-of-mysql-mode/200.create-an-index-of-mysql-mode.md
@@ -332,7 +332,7 @@ where
* `index_name` specifies the name of the function-based index to be created. This parameter is optional. If this parameter is not specified, the system automatically generates a name in the `functional_index_xx` format, in which `xx` is the index ID.
-* `expr` specifies the expression of the function-based index. It cannot be a Boolean expression, such as `c1=c1`.
+* `expr` specifies the expression of the function-based index. It can be a Boolean expression, such as `c1=c1`.
Example: Create a function-based index named `tbl1_func_idx1`.
@@ -429,7 +429,7 @@ where
* `index_name` specifies the name of the function-based index to be created. This parameter is optional. If you do not specify a value for this parameter, the system automatically generates an index name in the `functional_index_xx` format, in which xx is the index ID.
-* `expr` specifies the expression of the function-based index. It cannot be a Boolean expression, such as `c1=c1`.
+* `expr` specifies the expression of the function-based index. It can be a Boolean expression, such as `c1=c1`.
1. Create a table named `tbl3_func`.
@@ -501,7 +501,7 @@ where
* `WITH COLUMN GROUP(all columns)`: specifies to create a rowstore index.
* `WITH COLUMN GROUP(each column)`: specifies to create a columnstore index.
-**Here is an example:**
+Here is an example:
* Create a table named `tbl4` and a rowstore-columnstore redundant index named `idx1_tbl4_cg` at the same time.
@@ -543,7 +543,7 @@ where
* `WITH COLUMN GROUP(all columns)`: specifies to create a rowstore index.
* `WITH COLUMN GROUP(each column)`: specifies to create a columnstore index.
-**Here is an example:**
+Here is an example:
Create a table named `tbl6`, and then create a columnstore index named `idx1_tbl6_cg`.
@@ -589,7 +589,7 @@ where
* `WITH COLUMN GROUP(all columns)`: specifies to create a rowstore index.
* `WITH COLUMN GROUP(each column)`: specifies to create a columnstore index.
-**Here is an example:**
+Here is an example:
Create a table named `tbl7`, and then create a columnstore index named `idx1_tbl7_cg`.
diff --git a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/200.create-a-table-for-oracle-tenant-of-oracle-mode.md b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/200.create-a-table-for-oracle-tenant-of-oracle-mode.md
index 25e59ef801..6afb52b356 100644
--- a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/200.create-a-table-for-oracle-tenant-of-oracle-mode.md
+++ b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/200.create-a-table-for-oracle-tenant-of-oracle-mode.md
@@ -9,7 +9,7 @@
You can execute the `CREATE TABLE` statement to create a table.
-For information about how to create a partitioned table, see [Create a partitioned table](../200.manage-partitions-of-oracle-mode/200.create-a-partition-table-of-oracle-mode.md).
+For information about how to create partitioned tables, see [Create a partitioned table](../200.manage-partitions-of-oracle-mode/200.create-a-partition-table-of-oracle-mode.md).
## Create a non-partitioned table
@@ -72,7 +72,6 @@ Pay attention to data types when you create table columns. For more information
A replicated table is a special type of table in OceanBase Database. A replicated table can read the latest data modifications from any healthy replica. If you do not frequently write data and are more concerned about the operation latency and load balancing, a replicated table is a good choice.
-
After you create a replicated table, a replica of the replicated table is created on each OBServer node in the tenant. One of these replicas is elected as the leader to receive write requests, and the other replicas serve as followers to receive only read requests.
All followers must report their status, including the replica replay progress (data synchronization progress), to the leader. Generally, the replica replay progress on a follower lags behind that on the leader. A follower is considered by the leader as healthy only if the data latency between the follower and the leader is within the specified threshold. A healthy follower can quickly synchronize data modifications from the leader. If the leader considers a follower as healthy within a period of time, it will grant a lease period to the follower. In other words, the leader believes that the follower can keep healthy and provide strong-consistency read services within the lease period. During this lease period, the leader confirms the replay progress on the follower before each replicated table transaction is committed. The leader returns the commit result of a transaction only after the follower successfully replays the modifications in the transaction. At this time, you can read the modifications in the committed transaction from the follower.
@@ -131,102 +130,60 @@ CREATE TABLE t2_copy AS SELECT * FROM t2;
You cannot use the `CREATE TABLE LIKE` statement to copy table schemas.
-## Create a columnstore table
+## Create a rowstore table
+
+OceanBase Database allows you to create rowstore tables and convert rowstore tables into columnstore tables.
-OceanBase Database allows you to create a columnstore table, switch a rowstore table to a columnstore table, and create a columnstore index. When you create a table in OceanBase Database, the table is a rowstore table by default. You can use the `WITH COLUMN GROUP` option to explicitly specify to create a columnstore table or a rowstore-columnstore redundant table.
+When [default_table_store_format](../../../800.configuration-items-and-system-variables/100.system-configuration-items/400.tenant-level-configuration-items/27000.default_table_store_format.md) is set to `row`, which is the default value, a rowstore table is created by default. When [default_table_store_format](../../../800.configuration-items-and-system-variables/100.system-configuration-items/400.tenant-level-configuration-items/27000.default_table_store_format.md) is not set to `row`, you can specify the `WITH COLUMN GROUP(all columns)` option to create a rowstore table.
For information about how to convert a rowstore table to a columnstore table, see [Modify a table](600.change-table-of-oracle-mode.md). For information about how to create a columnstore index, see [Create an index](../400.manage-indexes-of-oracle-mode/200.create-an-index-of-oracle-mode.md).
-You can specify the `WITH COLUMN GROUP(all columns, each column)` option to create a rowstore-columnstore redundant table.
+You can specify the `WITH COLUMN GROUP(all columns)` option to create a rowstore table.
-**Here is an example:**
+Here is an example:
```sql
-CREATE TABLE tbl1_cg (col1 NUMBER PRIMARY KEY, col2 VARCHAR2(50)) WITH COLUMN GROUP(all columns, each column);
+CREATE TABLE tbl1_cg (col1 NUMBER PRIMARY KEY, col2 VARCHAR2(50)) WITH COLUMN GROUP(all columns);
```
-You can specify the `WITH COLUMN GROUP(each column)` option to create a columnstore table.
-
-**Here is an example:**
-
-```sql
-CREATE TABLE tbl2_cg (col1 NUMBER PRIMARY KEY, col2 VARCHAR2(50)) WITH COLUMN GROUP(each column);
-```
-
-## Convert a rowstore table to a columnstore table
-
-When you create a table in OceanBase Database, the table is a rowstore table by default. You can use the `WITH COLUMN GROUP` option to explicitly specify to create a columnstore table or a rowstore-columnstore redundant table. You can convert a rowstore table to a columnstore table by using the `ALTER TABLE` statement.
-
-### Convert a rowstore table to a columnstore table
-
-**Here is an example:**
-
-1. Create a rowstore table named `tbl1`.
-
- ```sql
- obclient> CREATE TABLE tbl1(col1 NUMBER, col2 VARCHAR2(30));
- ```
-
-2. Convert the rowstore table `tbl1` to columnar storage.
-
- ```sql
- obclient> ALTER TABLE tbl1 ADD COLUMN GROUP(each column);
- ```
-
-### Convert a rowstore table to a rowstore-columnstore redundant table
-
-**Here is an example:**
-
-1. Create a rowstore table named `tbl2`.
-
- ```sql
- obclient> CREATE TABLE tbl2(col1 NUMBER, col2 VARCHAR2(30));
- ```
-
-2. Convert the rowstore table `tbl2` to a rowstore-columnstore redundant table.
-
- ```sql
- obclient> ALTER TABLE tbl2 ADD COLUMN GROUP(all columns, each column);
- ```
+
+ Note
+ If you choose to specify the `WITH COLUMN GROUP(all columns)` option to create a rowstore table, the table is still in the rowstore format even after you execute the `DROP COLUMN GROUP(all columns)` statement to drop the column group.
+
-### Convert a rowstore-columnstore redundant table to a columnstore table
+## Create a columnstore table
-**Here is an example:**
+OceanBase Database allows you to create a columnstore table, switch a rowstore table to a columnstore table, and create a columnstore index. You can use the `WITH COLUMN GROUP` option to explicitly create a columnstore table or a rowstore-columnstore redundant table. You can also set the `default_table_store_format` parameter to specify columnstore or rowstore-columnstore redundant as the default store format.
-1. Create a rowstore-columnstore redundant table named `tbl3`.
+For information about how to convert a rowstore table to a columnstore table, see [Modify a table](600.change-table-of-oracle-mode.md). For information about how to create a columnstore index, see [Create an index](../400.manage-indexes-of-oracle-mode/200.create-an-index-of-oracle-mode.md).
- ```sql
- obclient> CREATE TABLE tbl3(col1 NUMBER, col2 VARCHAR2(30)) WITH COLUMN GROUP(all columns, each column);
- ```
+You can specify the `WITH COLUMN GROUP(all columns, each column)` option to create a rowstore-columnstore redundant table.
-2. Convert the rowstore-columnstore redundant table `tbl3` to a columnstore table.
+Here is an example:
- ```sql
- obclient> ALTER TABLE tbl3 DROP COLUMN GROUP(all columns);
- ```
+```sql
+CREATE TABLE tbl1_cg (col1 NUMBER PRIMARY KEY, col2 VARCHAR2(50)) WITH COLUMN GROUP(all columns, each column);
+```
-### Convert a rowstore-columnstore redundant table to a rowstore table
+You can specify the `WITH COLUMN GROUP(each column)` option to create a columnstore table.
-**Here is an example:**
+Here is an example:
-1. Create a rowstore-columnstore redundant table named `tbl4`.
+```sql
+CREATE TABLE tbl2_cg (col1 NUMBER PRIMARY KEY, col2 VARCHAR2(50)) WITH COLUMN GROUP(each column);
+```
- ```sql
- obclient> CREATE TABLE tbl4(col1 NUMBER, col2 VARCHAR2(30)) WITH COLUMN GROUP(all columns, each column);
- ```
+If you import a large amount of data to a columnstore table, you must initiate a major compaction to improve the read performance, and collect statistics to help the optimizer adjust its execution strategy.
-2. Convert the rowstore-columnstore redundant table `tbl4` to a rowstore table.
+- **Major compaction**: After a batch data import, we recommend that you perform a major compaction to improve the read performance. A major compaction consolidates fragmented data into contiguous physical storage, thereby reducing the disk I/O required to read data. After a data import, initiate a major compaction in the tenant to ensure that all data is compacted to the baseline layer. For more information, see [`MAJOR and MINOR`](../../../500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/4600.alter-system-major-freeze-of-oracle-mode.md).
- ```sql
- obclient> ALTER TABLE tbl4 DROP COLUMN GROUP(each column);
- ```
- or
+- **Statistics collection**: After the major compaction, we recommend that you start statistics collection to help the optimizer generate an efficient query plan and execution strategy. You can execute the [`GATHER_SCHEMA_STATS`](../../../500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/1800.gather-schema-stats-oracle.md) procedure to collect statistics for all tables and query the [`GV$OB_OPT_STAT_GATHER_MONITOR`](../../../700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/12000.gv$ob_opt_stat_gather_monitor-of-oracle-mode.md) view for the collection progress.
- ```sql
- obclient> ALTER TABLE tbl4 DROP COLUMN GROUP(all columns, each column);
- ```
+Note that the major compaction may slow down as the amount of data in the columnstore table increases.
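A parallel sketch for Oracle mode (the schema name `SYS` is illustrative; the statements are the ones covered by the linked topics):

```sql
-- Trigger a major compaction in the current tenant after the batch import.
ALTER SYSTEM MAJOR FREEZE;

-- After the compaction completes, collect schema statistics and check progress.
CALL DBMS_STATS.GATHER_SCHEMA_STATS('SYS');
SELECT * FROM GV$OB_OPT_STAT_GATHER_MONITOR;
```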
## References
-For more information about the `ALTER TABLE` statement, see [ALTER TABLE](../../../500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1000.alter-table-of-oracle-mode.md).
\ No newline at end of file
+* [CREATE TABLE](../../../500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/2400.create-table-of-oracle-mode.md)
+
+* [ALTER TABLE](../../../500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1000.alter-table-of-oracle-mode.md)
\ No newline at end of file
diff --git a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/600.change-table-of-oracle-mode.md b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/600.change-table-of-oracle-mode.md
index 6466c34b96..c2763d8349 100644
--- a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/600.change-table-of-oracle-mode.md
+++ b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/600.change-table-of-oracle-mode.md
@@ -19,7 +19,7 @@ OceanBase Database allows you to add a column to a table, modify a column and it
* You can add columns except for a primary key column to a table. To add a primary key column, you can add a normal column and then add a primary key to the column. For more information, see [Define column constraints](../100.manage-tables-of-oracle-mode/400.define-the-constraint-type-for-a-column-of-oracle-mode.md).
- Example: Add a normal column.
+ The following example adds a normal column.
```sql
obclient> DESCRIBE test;
@@ -80,7 +80,9 @@ OceanBase Database allows you to add a column to a table, modify a column and it
2 rows in set
```
-* You can rename a column. Here is an example:
+* You can rename a column.
+
+ Here is an example:
```sql
obclient> ALTER TABLE test RENAME COLUMN c1 TO c;
@@ -96,7 +98,9 @@ OceanBase Database allows you to add a column to a table, modify a column and it
2 rows in set
```
-* You can change the default value of a column. Here is an example:
+* You can change the default value of a column.
+
+ Here is an example:
```sql
obclient> ALTER TABLE test MODIFY c DEFAULT 1;
@@ -112,7 +116,9 @@ OceanBase Database allows you to add a column to a table, modify a column and it
2 rows in set
```
-* You can modify the `NOT NULL` constraint on a column. Here is an example:
+* You can modify the `NOT NULL` constraint on a column.
+
+ Here is an example:
```sql
obclient> DESCRIBE test;
@@ -137,7 +143,9 @@ OceanBase Database allows you to add a column to a table, modify a column and it
2 rows in set
```
-* You can delete columns except for the primary key column and indexed columns from a table. Here is an example:
+* You can delete columns except for the primary key column and indexed columns from a table.
+
+ Here is an example:
```sql
obclient> DESCRIBE test;
@@ -182,7 +190,7 @@ OceanBase Database allows you to rename a table. Here is an example:
obclient> ALTER TABLE test RENAME TO t1;
```
-Or
+or
```sql
obclient> RENAME TABLE test TO t1;
@@ -202,3 +210,118 @@ Example: Change the number of replicas of a table to `2`.
obclient> ALTER TABLE test SET REPLICA_NUM=2;
Query OK, 0 rows affected
```
+
+## Modify the Skip Index attribute of a column
+
+OceanBase Database allows you to use the `ALTER TABLE` statement to add, modify, and delete the Skip Index attribute for a column.
+
+For more information about the Skip Index attribute, see [Skip Index attribute of columns](250.identify-skip-index-properties-of-oracle-mode.md).
+
+Here is an example:
+
+1. Execute the following statement to create a table named `test_skidx`:
+
+ ```sql
+ CREATE TABLE test_skidx(
+ col1 NUMBER SKIP_INDEX(MIN_MAX, SUM),
+ col2 FLOAT SKIP_INDEX(MIN_MAX),
+ col3 VARCHAR2(1024) SKIP_INDEX(MIN_MAX),
+ col4 CHAR(10)
+ );
+ ```
+
+2. Change the Skip Index attribute of the `col2` column in the `test_skidx` table to `SUM`.
+
+ ```sql
+ ALTER TABLE test_skidx MODIFY col2 FLOAT SKIP_INDEX(SUM);
+ ```
+
+3. Add the `MIN_MAX` Skip Index attribute for the `col4` column in the `test_skidx` table.
+
+ ```sql
+ ALTER TABLE test_skidx MODIFY col4 CHAR(10) SKIP_INDEX(MIN_MAX);
+ ```
+
+4. Delete the Skip Index attribute for the `col1` column in the `test_skidx` table.
+
+ ```sql
+ ALTER TABLE test_skidx MODIFY col1 NUMBER SKIP_INDEX();
+ ```
+
+## Convert a rowstore table to a columnstore table
+
+When you create a table in OceanBase Database, the table is a rowstore table by default. You can use the `WITH COLUMN GROUP` option to explicitly create a columnstore table or a rowstore-columnstore redundant table. You can convert a rowstore table to a columnstore table by using the `ALTER TABLE` statement.
+
+### Convert a rowstore table to a columnstore table
+
+Here is an example:
+
+1. Create a rowstore table named `tbl1`.
+
+ ```sql
+ obclient> CREATE TABLE tbl1(col1 NUMBER, col2 VARCHAR2(30));
+ ```
+
+2. Convert the rowstore table `tbl1` to columnar storage.
+
+ ```sql
+ obclient> ALTER TABLE tbl1 ADD COLUMN GROUP(each column);
+ ```
+
+### Convert a rowstore table to a rowstore-columnstore redundant table
+
+Here is an example:
+
+1. Create a rowstore table named `tbl2`.
+
+ ```sql
+ obclient> CREATE TABLE tbl2(col1 NUMBER, col2 VARCHAR2(30));
+ ```
+
+2. Convert the rowstore table `tbl2` to a rowstore-columnstore redundant table.
+
+ ```sql
+ obclient> ALTER TABLE tbl2 ADD COLUMN GROUP(all columns, each column);
+ ```
+
+### Convert a rowstore-columnstore redundant table to a columnstore table
+
+Here is an example:
+
+1. Create a rowstore-columnstore redundant table named `tbl3`.
+
+ ```sql
+ obclient> CREATE TABLE tbl3(col1 NUMBER, col2 VARCHAR2(30)) WITH COLUMN GROUP(all columns, each column);
+ ```
+
+2. Convert the rowstore-columnstore redundant table `tbl3` to a columnstore table.
+
+ ```sql
+ obclient> ALTER TABLE tbl3 DROP COLUMN GROUP(all columns);
+ ```
+
+### Convert a rowstore-columnstore redundant table to a rowstore table
+
+Here is an example:
+
+1. Create a rowstore-columnstore redundant table named `tbl4`.
+
+ ```sql
+ obclient> CREATE TABLE tbl4(col1 NUMBER, col2 VARCHAR2(30)) WITH COLUMN GROUP(all columns, each column);
+ ```
+
+2. Convert the rowstore-columnstore redundant table `tbl4` to a rowstore table.
+
+ ```sql
+ obclient> ALTER TABLE tbl4 DROP COLUMN GROUP(each column);
+ ```
+
+ or
+
+ ```sql
+ obclient> ALTER TABLE tbl4 DROP COLUMN GROUP(all columns, each column);
+ ```
+
+## References
+
+For more information about the `ALTER TABLE` statement, see [ALTER TABLE](../../../500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1000.alter-table-of-oracle-mode.md).
\ No newline at end of file
diff --git a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/900.lock-a-table-of-oracle-mode.md b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/900.lock-a-table-of-oracle-mode.md
index 437ed69202..218d403bb8 100644
--- a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/900.lock-a-table-of-oracle-mode.md
+++ b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/900.lock-a-table-of-oracle-mode.md
@@ -239,7 +239,7 @@ The following table describes the fields in the query results.
| ID2 | Lock identifier 2. <ul><li>For a table lock, the value is NULL.</li><li>For a transaction lock, the value is NULL.</li><li>For a row lock, the value is in the format of +.</li></ul> |
| LMODE | The lock holding mode. Valid values: <ul><li>`NONE`: No lock is held.</li><li>`SS`: ROW SHARE</li><li>`SX`: ROW EXCLUSIVE</li><li>`S`: SHARE</li><li>`SSX`: SHARE ROW EXCLUSIVE</li><li>`X`: EXCLUSIVE</li></ul> |
| REQUEST | The lock requesting mode. Valid values: <ul><li>`NONE`: No lock is requested.</li><li>`SS`: ROW SHARE</li><li>`SX`: ROW EXCLUSIVE</li><li>`S`: SHARE</li><li>`SSX`: SHARE ROW EXCLUSIVE</li><li>`X`: EXCLUSIVE</li></ul> |
-| CTIME | The time to hold or wait for a lock, in seconds. |
+| CTIME | The time to hold or wait for a lock, in microseconds. |
| BLOCK | Indicates whether the lock blocks other processes. Valid values: <ul><li>`0`: The lock does not block any process.</li><li>`1`: The lock is blocking other processes.</li></ul> |
## More information
diff --git a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/200.manage-partitions-of-oracle-mode/400.add-a-partition-of-oracle-mode.md b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/200.manage-partitions-of-oracle-mode/400.add-a-partition-of-oracle-mode.md
index 0ee08ce4e3..8f8b2445aa 100644
--- a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/200.manage-partitions-of-oracle-mode/400.add-a-partition-of-oracle-mode.md
+++ b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/200.manage-partitions-of-oracle-mode/400.add-a-partition-of-oracle-mode.md
@@ -27,7 +27,8 @@ The following table describes the support for adding partitions to partitioned a
| Subpartitioned table | RANGE-HASH | Not supported | Not supported |
| Subpartitioned table | LIST-RANGE and LIST-LIST | Supported | Supported |
| Subpartitioned table | LIST-HASH | Not supported | Not supported |
-| Subpartitioned table | HASH-RANGE, HASH-LIST, and HASH-HASH | Not supported | Not supported |
+| Subpartitioned table | HASH-RANGE and HASH-LIST | Not supported | Supported |
+| Subpartitioned table | HASH-HASH | Not supported | Not supported |
## Add a partition
diff --git a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/400.manage-indexes-of-oracle-mode/200.create-an-index-of-oracle-mode.md b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/400.manage-indexes-of-oracle-mode/200.create-an-index-of-oracle-mode.md
index 55a995e9cd..bc578a5953 100644
--- a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/400.manage-indexes-of-oracle-mode/200.create-an-index-of-oracle-mode.md
+++ b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/400.manage-indexes-of-oracle-mode/200.create-an-index-of-oracle-mode.md
@@ -104,7 +104,7 @@ CREATE [UNIQUE] INDEX index_name ON table_name (expr);
where
-* `[UNIQUE]` indicates a unique index. This parameter is optional. It is required when you create a unique index.
+* `[UNIQUE]` indicates a unique index. This option is optional; specify it only when you create a unique index.
* `index_name` specifies the name of the function-based index to be created.
@@ -166,9 +166,10 @@ where
* `WITH COLUMN GROUP([all columns, ]each column)` specifies the columnstore attribute of the index.
* `WITH COLUMN GROUP(all columns, each column)`: specifies to create a rowstore-columnstore redundant index.
+ * `WITH COLUMN GROUP(all columns)`: specifies to create a rowstore index.
* `WITH COLUMN GROUP(each column)`: specifies to create a columnstore index.
-**Here is an example:**
+Here is an example:
Create a table named `tbl3`. Then, create a columnstore index named `idx1_tbl3_cg`.
From 2c5b719204f8caecbf8cdcbfb3adbb06a6a0ac51 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Tue, 16 Apr 2024 14:05:51 +0800
Subject: [PATCH 25/63] v430-beta-300.database-object-management-update-1
---
.../2100.create-database-of-mysql-mode.md | 39 +-
.../2200.create-index-of-mysql-mode.md | 46 +-
.../5500.grant-of-mysql-mode.md | 70 +-
.../200.index-operations-of-mysql-mode.md | 227 +++++-
...00.primary-key-operations-of-mysql-mode.md | 32 +-
.../400.column-operations-of-mysql-mode.md | 745 ++++++++++++------
.../200.index-operations-of-oracle-mode.md | 73 +-
...0.primary-key-operations-of-oracle-mode.md | 4 +-
.../400.column-operations-of-oracle-mode.md | 157 ++--
.../100.alter-index-of-oracle-mode.md | 38 +-
.../1600.create-index-of-oracle-mode.md | 37 +-
.../1700.grant-of-oracle-mode.md | 68 +-
12 files changed, 984 insertions(+), 552 deletions(-)
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2100.create-database-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2100.create-database-of-mysql-mode.md
index fe31093ee8..e444d2b049 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2100.create-database-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2100.create-database-of-mysql-mode.md
@@ -18,7 +18,7 @@ You can use this statement to create a database and specify default attributes o
## Required privileges
-To execute the `CREATE DATABASE` statement to create a user, you need to have the global `CREATE` privilege. For more information about privileges in OceanBase Database, see [Privilege types in MySQL mode](../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/200.permission-of-mysql-mode/100.permission-classification-of-mysql.md).
+To execute the `CREATE DATABASE` statement, you must have the global `CREATE` privilege. For more information about privileges in OceanBase Database, see [Privilege types in MySQL mode](../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/200.permission-of-mysql-mode/100.permission-classification-of-mysql.md).
## Syntax
@@ -34,36 +34,35 @@ database_option:
## Parameters
-| **Parameter** | **Description** |
-|---------------|-----------------|
-| IF NOT EXISTS | Indicates that the database cannot be created if it already exists. When you create a database, if the database exists and `IF NOT EXISTS` is not specified, an error will be reported. |
+| Parameter | Description |
+|-------|-----------|
+| IF NOT EXISTS | Specifies whether to check the existence of the database to be created. If the database already exists and `IF NOT EXISTS` is not specified, an error is reported. |
| database_name | The name of the database to be created. |
-| \[DEFAULT\] {CHARACTER SET \| CHARSET} charset_name | The character set of the database. |
-| \[DEFAULT\] COLLATE collate_name | The collation of the database. |
-| REPLICA_NUM int_num | The number of replicas. |
-| {READ ONLY \| READ WRITE} | Specifies the read-write properties of the database. <ul><li>`READ ONLY`: Sets the database to read-only mode, prohibiting write operations on the database.</li><li>`READ WRITE`: Sets the database to read-write mode, allowing read and write operations on the database.</li></ul> |
-| \[DEFAULT] TABLEGROUP {table_group_name \| NULL} | The default table group of the database. When you set it to `NULL`, the system disables the default table group. |
+| \[DEFAULT] {CHARACTER SET \| CHARSET} charset_name | The character set of the database. |
+| \[DEFAULT] COLLATE collate_name | The collation of the database. |
+| {READ ONLY \| READ WRITE} | The read/write property of the database. <ul><li>`READ ONLY`: specifies that the database is in read-only mode and no write operations are allowed for the database.</li><li>`READ WRITE`: specifies that the database is in read/write mode and both read and write operations are allowed for the database.</li></ul> |
+| \[DEFAULT] TABLEGROUP {table_group_name \| NULL} | The default table group of the database. You can specify a table group or cancel the default table group for the database. |
## Examples
-* Create a database named `test1` and specify the character set as `UTF-8`.
+* Create a database named `test1` and specify the `UTF-8` character set.
- ```shell
- obclient> CREATE DATABASE IF NOT EXISTS test1 DEFAULT CHARACTER SET utf8;
- ```
+ ```shell
+ obclient> CREATE DATABASE IF NOT EXISTS test1 DEFAULT CHARACTER SET utf8;
+ ```
* Create a database named `test2` that supports read and write operations.
- ```shell
- obclient> CREATE DATABASE test2 READ WRITE;
- ```
+ ```shell
+ obclient> CREATE DATABASE test2 READ WRITE;
+ ```
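+
+* Create a database and specify both a character set and a collation. This is a hypothetical combined example: the database name `test3` and the `utf8mb4`/`utf8mb4_general_ci` pair are illustrative, and the collation you choose must match the character set.
+
+    ```shell
+    obclient> CREATE DATABASE test3 DEFAULT CHARACTER SET utf8mb4 DEFAULT COLLATE utf8mb4_general_ci;
+    ```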
## References
-* For operations to grant user privileges, see [Grant privileges](../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/200.permission-of-mysql-mode/200.authority-of-mysql-mode.md).
+* For more information about how to grant user privileges, see [Grant privileges](../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/200.permission-of-mysql-mode/200.authority-of-mysql-mode.md).
-* For operations to verify the successful creation, see [View databases](../../../../../700.reference/300.database-object-management/100.manage-object-of-mysql-mode/100.manage-databases-of-mysql-mode/200.view-a-database-of-mysql-mode.md).
+* For more information about how to verify database creation, see [View databases](../../../../../700.reference/300.database-object-management/100.manage-object-of-mysql-mode/100.manage-databases-of-mysql-mode/200.view-a-database-of-mysql-mode.md).
-* For operations to modify a created database, see [Modify a database](../../../../../700.reference/300.database-object-management/100.manage-object-of-mysql-mode/100.manage-databases-of-mysql-mode/300.modify-a-database-of-mysql-mode.md).
+* For more information about how to modify a created database, see [Modify a database](../../../../../700.reference/300.database-object-management/100.manage-object-of-mysql-mode/100.manage-databases-of-mysql-mode/300.modify-a-database-of-mysql-mode.md).
-* For operations to delete a created database, see [Drop a database](../../../../../700.reference/300.database-object-management/100.manage-object-of-mysql-mode/100.manage-databases-of-mysql-mode/400.delete-a-database-of-mysql-mode.md).
+* For more information about how to drop a created database, see [Drop a database](../../../../../700.reference/300.database-object-management/100.manage-object-of-mysql-mode/100.manage-databases-of-mysql-mode/400.delete-a-database-of-mysql-mode.md).
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2200.create-index-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2200.create-index-of-mysql-mode.md
index 32042d2ed4..147d6794d1 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2200.create-index-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2200.create-index-of-mysql-mode.md
@@ -20,7 +20,7 @@ In the current version of OceanBase Database, indexes are classified into unique
## Required privileges
-You must have the INDEX privilege on the data objects to create indexes by using the `CREATE INDEX` statement. For more information about privileges in OceanBase Database, see [Privilege types in MySQL mode](../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/200.permission-of-mysql-mode/100.permission-classification-of-mysql.md).
+To execute the `CREATE INDEX` statement, you must have the INDEX privilege on the corresponding objects. For more information about privileges in OceanBase Database, see [Privilege types in MySQL mode](../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/200.permission-of-mysql-mode/100.permission-classification-of-mysql.md).
## Syntax
@@ -61,23 +61,23 @@ index_column_group_option:
| IF NOT EXISTS | Specifies whether to check the existence of the index to be created. If the index already exists and `IF NOT EXISTS` is not specified, an error is reported. |
| index_name | The name of the index to be created. |
| USING BTREE | Optional. Specifies to create a B-tree index. <main id="notice" type="explain"><h4>Note</h4><p>OceanBase Database supports only <code>USING BTREE</code>.</p></main> |
-| table_name | The table on which the index is created. You can directly specify the table name or specify the table name and the name of the database to which the table belongs in the `schema_name.table_name` format . |
-| sort_column_key | The key of a sort column. You can specify multiple sort columns for an index and separate them by commas (`,`). For more information, see [sort_column_key](#sort_column_key). |
-| index_option | The index options. You can specify multiple index options for an index and separate them by spaces. For more information, see [index_option](#index_option). |
-| partition_option | The index partitioning option. You can specify HASH partitioning, KEY partitioning, RANGE partitioning, LIST partitioning, and left-side table partitioning. |
-| index_column_group_option | The columnstore options for the index. For more information, see [index_column_group_option](#index_column_group_option). |
+| table_name | The table on which the index is created. You can directly specify the table name or specify the table name and the name of the database to which the table belongs in the `schema_name.table_name` format. |
+| sort_column_key | The column as the sort key. You can specify multiple columns and separate them with commas (`,`). For more information, see [sort_column_key](#sort_column_key). |
+| index_option | The index option. You can specify multiple index options and separate them with spaces. For more information, see [index_option](#index_option). |
+| partition_option | The index partitioning option. You can specify HASH partitioning, KEY partitioning, RANGE partitioning, LIST partitioning, or external table partitioning. |
+| index_column_group_option | The columnstore option of the index. For more information, see [index_column_group_option](#index_column_group_option). |
### sort_column_key
-* `column_name [(integer)] [ASC] [ID id]`: Specifies a column as the sort key.
+* `column_name [(integer)] [ASC] [ID id]`: specifies a column as the sort key.
- * `column_name`: The name of the column to be sorted.
+ * `column_name`: the name of the column to sort.
- * `integer`: Optional. The maximum length of the sort key.
+ * `integer`: optional. The length limit of the sort key.
- * `ASC`: Optional. Specifies the ascending order. The descending order is not supported.
+ * `ASC`: optional. The ascending order. Currently, the descending order is not supported.
- * `ID id`: Optional. The ID of the sort key.
+ * `ID id`: optional. The ID of the sort key.
The following sample statement creates an index named `index3` on the `t3` table and sorts the index by the `c1` column in ascending order.
@@ -85,13 +85,13 @@ index_column_group_option:
CREATE INDEX index3 ON t3 (c1 ASC);
```
-* `(index_expr) [ASC] [ID id]`: specifies to use an index expression as the sort key. You can define the index expression by using expressions or functions. It can contain the following options:
+* `(index_expr) [ASC] [ID id]`: specifies to use an index expression as the sort key. You can define an index expression by using expressions or functions. The index expression setting contains the following options:
- * `(index_expr)`: the index expression, which can be a Boolean expression such as `c1=c1`. Currently, you cannot create function-based indexes on generated columns in OceanBase Database. For more information about the expressions supported by function-based indexes, see [System functions supported for function-based indexes](../../../../300.database-object-management/100.manage-object-of-mysql-mode/500.manage-indexes-of-mysql-mode/500.function-index-list-of-supported-functions-of-mysql-mode.md).
+ * `(index_expr)`: the index expression, which can be a Boolean expression, such as `c1=c1`. Currently, you cannot create function-based indexes on generated columns in OceanBase Database. For more information about the expressions supported by function-based indexes, see [System functions supported for function-based indexes](../../../../300.database-object-management/100.manage-object-of-mysql-mode/500.manage-indexes-of-mysql-mode/500.function-index-list-of-supported-functions-of-mysql-mode.md).
- * `ASC`: Optional. Specifies the ascending order. The descending order is not supported.
+ * `ASC`: optional. The ascending order. Currently, the descending order is not supported.
- * `ID id`: Optional. The ID of the sort key.
+ * `ID id`: optional. The ID of the sort key.
The following example creates an index named `index4` on the `t4` table, uses `c1+c2` as the index expression, and sorts the index in ascending order.
@@ -99,7 +99,7 @@ index_column_group_option:
CREATE INDEX index4 ON t4 ((c1 + c2) ASC);
```
-When you create an index, you can specify multiple sort columns and separate them by commas (`,`). The following example creates an index named `index5` on the `t5` table and uses the `c1` column and the `c2+c3` expression as the index sort key.
+When you create an index, you can specify multiple columns as the sort key and separate them with commas (`,`). The following example creates an index named `index5` on the `t5` table and uses the `c1` column and the `c2+c3` expression as the index sort key.
```sql
CREATE INDEX index5 ON t5 (c1, (c2+c3));
@@ -111,13 +111,13 @@ CREATE INDEX index5 ON t5 (c1, (c2+c3));
* `LOCAL`: specifies to create a local index.
-* `BLOCK_SIZE integer`: the size of an index block. In other words, the number of bytes in each index block.
+* `BLOCK_SIZE integer`: the size of an index block, that is, the number of bytes in each index block.
* `COMMENT STRING_VALUE`: adds a comment to the index.
-* `STORING (column_name [, column_name...])`: the columns to be stored in the index. Separate multiple columns by commas (`,`).
+* `STORING (column_name [, column_name...])`: the columns to be stored in the index. Separate multiple columns with commas (`,`).
-* `WITH_ROWID`: specifies to create an index that contains the row ID.
+* `WITH_ROWID`: creates an index that contains the row ID.
* `WITH PARSER STRING_VALUE`: the parser required for the index.
@@ -131,7 +131,7 @@ CREATE INDEX index5 ON t5 (c1, (c2+c3));
* `VIRTUAL_COLUMN_ID virtual_column_id`: the ID of the virtual column.
-* `MAX_USED_PART_ID used_part_id`: the ID of the maximum used partition of the index.
+* `MAX_USED_PART_ID used_part_id`: the maximum partition ID allowed for the index.
### index_column_group_option
@@ -140,15 +140,15 @@ CREATE INDEX index5 ON t5 (c1, (c2+c3));
## Example
-Create a columnstore index for a table.
+Create a columnstore index on a table.
-1. Use the following SQL statement to create a table named `test_tbl1`.
+1. Create a table named `test_tbl1`.
```sql
CREATE TABLE test_tbl1 (col1 INT, col2 VARCHAR(50));
```
-2. Create a columnstore index named `idx1_test_tbl1` on the `test_tbl1` table and reference the `col1` column.
+2. On the `test_tbl1` table, create a columnstore index named `idx1_test_tbl1`, which references the `col1` column.
```sql
CREATE INDEX idx1_test_tbl1 ON test_tbl1 (col1) WITH COLUMN GROUP(each column);
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/5500.grant-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/5500.grant-of-mysql-mode.md
index 2f14b0964a..04c1b8ed3a 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/5500.grant-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/5500.grant-of-mysql-mode.md
@@ -9,7 +9,7 @@
## Purpose
-You can use this statement to grant privileges to other users as the system administrator.
+You can use a `GRANT` statement to grant privileges to other users as the system administrator.
## Required privileges
@@ -25,14 +25,14 @@ You can use this statement to grant privileges to other users as the system admi
## Syntax
```sql
-GRANT {priv_type [, priv_type...]}
- ON priv_level
- TO {user [, user...]}
+GRANT {priv_type [, priv_type...]}
+ ON priv_level
+ TO {user [, user...]}
[WITH GRANT OPTION]
user:
user_name
- | user_name IDENTIFIED [WITH auth_plugin] BY password
+ | user_name IDENTIFIED [WITH auth_plugin] BY password
| user_name IDENTIFIED [WITH auth_plugin] BY PASSWORD password
```
@@ -40,40 +40,40 @@ user:
| **Parameter** | **Description** |
|----------------------------------------|--------------------------------------|
-| priv_type | Specifies the type of privilege to be granted. Multiple privileges can be granted by separating them with a comma (`,`). For specific privilege types and their explanations, see [Privilege types in MySQL mode](../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/200.permission-of-mysql-mode/100.permission-classification-of-mysql.md). |
-| priv_level | Specifies the level at which the privileges are granted. It can be specified to take effect on all databases and tables (`*.*`), on a specified database or specified table (`db_name.*`, `*.table_name`), or on a specific table in a specific database (`db_name.table_name`). |
-| user | Specifies the user or users to whom the privileges are granted. Multiple users can be specified by separating them with a comma (`,`). If the user does not exist, the statement will create the user directly. |
-| auth_plugin | Specifies the method of user authentication. Currently, only the `mysql_native_password` authentication plugin is supported. |
-| BY password | Specifies a plaintext password for the user to be granted, which will be stored as encrypted in the `mysql.user` table on the server. If the password contains special characters ~!@#%^&*_-+=`\|(){}[]:;',.?/, it should be enclosed in English quotes (`''` or `""`). |
-| BY PASSWORD password | Specifies an encrypted password for the user to be granted, which will be directly stored in the `mysql.user` table. |
-| WITH GRANT OPTION | Specifies whether the privilege is allowed to be granted to others and cascaded when revoking the grant. |
+| priv_type | The type of the privilege to be granted. To grant multiple privileges to a user, separate the privileges with commas (`,`). For information about privilege types and their description, see [Privilege types in MySQL mode](../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/200.permission-of-mysql-mode/100.permission-classification-of-mysql.md). |
+| priv_level | The level of the privilege to be granted. You can specify that the privilege takes effect on all databases and all tables (`*.*`), a specified database or table (`db_name.*` or `*.table_name`), or a specific table in a specific database (`db_name.table_name`). |
+| user | The user to which the privilege is granted. To grant privileges to multiple users, separate the usernames with commas (`,`). If the specified user does not exist, the statement creates the user directly. |
+| auth_plugin | The user authentication method. Currently, only the `mysql_native_password` authentication plug-in is supported. |
+| BY password | The password for the user to be authorized. The password is in plaintext and is saved in ciphertext on the server after it is saved to the `mysql.user` table. Enclose special characters in the password in quotation marks (`''` or `""`). Special characters include the following ones: ~!@#%^&*_-+=`\|(){}[]:;',.?/. |
+| BY PASSWORD password | The password for the user to be authorized. The password is in ciphertext and is saved to the `mysql.user` table directly. |
+| WITH GRANT OPTION | Specifies whether to enable privilege delegation. When privilege delegation is enabled, grant revocation extends to dependent users. |
## Examples
-* Grant the `CREATE VIEW` privilege on database `db1` to the existing user `user1` and set to allow granting the same privilege to other users.
-
- ```shell
- obclient> GRANT CREATE VIEW ON db1.* TO user1 WITH GRANT OPTION;
- ```
-
-* Grant the `CREATE` privilege on database `db1` to the existing user `user1` and change the password for `user1`.
-
- ```shell
- obclient> GRANT CREATE ON db1.* TO user1 IDENTIFIED BY '********';
- ```
-
- After execution, check the password for the user `user1` in the `mysql.user` table to see that it has been updated to the newly set password.
-
-* Grant the `CREATE` privilege on database `db1` to the non-existing user `user2` and set a password for `user2`.
-
- ```shell
- obclient> GRANT CREATE ON db1.* TO user2 IDENTIFIED BY '********';
- ```
+* Grant the `CREATE VIEW` privilege on the `db1` database to the `user1` user and enable privilege delegation.
+
+ ```shell
+ obclient> GRANT CREATE VIEW ON db1.* TO user1 WITH GRANT OPTION;
+ ```
+
+* Grant the `CREATE` privilege on the `db1` database to the `user1` user and change the password for `user1`.
+
+ ```shell
+    obclient> GRANT CREATE ON db1.* TO user1 IDENTIFIED BY '********';
+ ```
+
+ After executing the statement, check the password of `user1` in the `mysql.user` table. The password is updated to the newly set one.
+
+* Grant the `CREATE` privilege on the `db1` database to a non-existing user named `user2` and set the password for `user2`.
+
+ ```shell
+    obclient> GRANT CREATE ON db1.* TO user2 IDENTIFIED BY '********';
+ ```
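+
+    You can then confirm the granted privileges by using the standard `SHOW GRANTS` statement as a sanity check:
+
+    ```shell
+    obclient> SHOW GRANTS FOR user2;
+    ```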
## References
-* For operations to grant user privileges, see [Grant privileges](../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/200.permission-of-mysql-mode/200.authority-of-mysql-mode.md).
-
-* For operations to view user permissions, see [Query user privileges](../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/200.permission-of-mysql-mode/400.view-user-permissions-of-mysql-mode.md).
+* For more information about how to grant user privileges, see [Grant privileges](../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/200.permission-of-mysql-mode/200.authority-of-mysql-mode.md).
+
+* For more information about how to view user privileges, see [View user privileges](../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/200.permission-of-mysql-mode/400.view-user-permissions-of-mysql-mode.md).
-* You can view information about the created user from the `mysql.user` table. For more information on the `mysql.user` table, see [mysql.user](../../../../700.system-views/400.system-view-of-mysql-mode/200.dictionary-view-of-mysql-mode/4000.mysql-user-of-mysql-mode.md).
\ No newline at end of file
+* You can query the `mysql.user` table to view information about the created user. For more information about the `mysql.user` table, see [mysql.user](../../../../700.system-views/400.system-view-of-mysql-mode/200.dictionary-view-of-mysql-mode/4000.mysql-user-of-mysql-mode.md).
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/200.index-operations-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/200.index-operations-of-mysql-mode.md
index 699f128ac3..7136ffcdbb 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/200.index-operations-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/200.index-operations-of-mysql-mode.md
@@ -7,81 +7,234 @@
# Index operations
-The MySQL mode of OceanBase Database allows you to create an index, add an index, drop an index, and rename an index.
+This topic describes how to perform simple index operations in the MySQL mode of OceanBase Database, including creating an index, querying indexes, adding an index, dropping an index, and renaming an index.
## Create an index
-The syntax for creating an index is as follows:
+The syntaxes for creating an index are as follows:
-```sql
-CREATE INDEX index_name ON table_name (index_col_name,...);
-```
+* CREATE INDEX
-When you create an index on a table, this table still supports read and write operations.
+ ```sql
+ CREATE [SPATIAL | UNIQUE] INDEX [IF NOT EXISTS] index_name
+ [USING BTREE] ON table_name (sort_column_key [, sort_column_key... ])
+ [index_option...] [partition_option];
+ ```
-Here is an example:
+ where
-```sql
-obclient> CREATE TABLE tbl1 (c1 INT PRIMARY KEY, c2 VARCHAR(10));
-Query OK, 0 rows affected
+ * `index_name` specifies the name of the index to be created.
+
+ * `table_name` specifies the name of the table on which the index is to be created.
+
+ * `sort_column_key` specifies the column as the sort key. You can specify multiple columns and separate them with commas (`,`).
+
+ * `index_option` specifies the index option. You can specify multiple index options and separate them with spaces.
+
+ * `partition_option` specifies the index partitioning option.
+
+ For more information about the syntax, see [CREATE INDEX](../600.sql-statement-of-mysql-mode/2200.create-index-of-mysql-mode.md).
+
+ When an index is being created on a table, this table still supports read and write operations. Here are two examples of creating an index:
+
+    * Assume that table `t1` exists in the database. Create an index named `index1` on table `t1`, sorted in ascending order by the `c1` column.
+
+ ```shell
+ obclient> CREATE INDEX index1 ON t1 (c1 ASC);
+ ```
+
+    * Assume that table `t2` exists in the database. Create an index named `index2` on table `t2` with `c1 + c2` as the index expression.
+
+ ```shell
+       obclient> CREATE INDEX IF NOT EXISTS index2 ON t2 ((c1 + c2));
+ ```
+
+* CREATE TABLE
+
+ ```sql
+ CREATE TABLE [IF NOT EXISTS] table_name
+ (column_name column_definition,[column_name column_definition,...],
+ {INDEX | KEY} [index_name] [index_type] (key_part,...)
+ [index_option...]);
+ ```
+
+ where
+
+ * `table_name` specifies the name of the table to be created.
+
+   * `column_name` specifies the name of the column in the table.
+
+ * `column_definition` specifies the data type of the column in the table.
+
+ * `INDEX | KEY` specifies that either INDEX or KEY can be used as the index keyword.
-obclient> CREATE INDEX tbl1_idx ON tbl1 (c1, c2 ASC);
-Query OK, 0 rows affected
+ * `index_name` specifies the name of the index to be created. This parameter is optional. If you do not specify a value for this parameter, the index name is the same as the column name by default.
+
+ * `index_type` specifies the index type. This parameter is optional.
+
+   * `key_part` specifies the column or expression to be indexed. An expression creates a function-based index.
+
+ * `index_option` specifies the index option. You can specify multiple index options and separate them with spaces.
+
+ For more information about the syntax, see [CREATE TABLE](../600.sql-statement-of-mysql-mode/2600.create-table-of-mysql-mode.md).
+
+ The following sample statement creates a table named `t3` and an index named `index3` in ascending order on the `id` column:
+
+ ```shell
+ obclient> CREATE TABLE t3
+ (id int,
+ name varchar(50),
+ INDEX index3 (id ASC));
+ ```
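+
+The `CREATE INDEX` syntax above also accepts the `UNIQUE` keyword. The following hypothetical sketch (the index name `uk1` is illustrative) creates a unique index on the `c1` column of the `t1` table; subsequent inserts that duplicate a `c1` value are rejected:
+
+```shell
+obclient> CREATE UNIQUE INDEX uk1 ON t1 (c1);
+```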
+
+## Query indexes
+
+You can use `SHOW INDEX` to query the indexes on a table. The following statement takes the `test` table as an example:
+
+```shell
+obclient> SHOW INDEX FROM test;
+```
+
+The query result is as follows:
+
+```shell
++-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+-----------+---------------+---------+------------+
+| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | Visible | Expression |
++-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+-----------+---------------+---------+------------+
+| test | 0 | PRIMARY | 1 | id | A | NULL | NULL | NULL | | BTREE | available | | YES | NULL |
+| test | 1 | index1 | 1 | name | A | NULL | NULL | NULL | | BTREE | available | | YES | NULL |
+| test | 1 | index2 | 1 | age | A | NULL | NULL | NULL | | BTREE | available | | YES | NULL |
++-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+-----------+---------------+---------+------------+
```
+The fields in the query result are described as follows:
+
+* `Non_unique`: If the index cannot include duplicate values, the value of the field is `0`. Otherwise, the value is `1`. In other words, the value `0` indicates a unique index.
+
+* `Key_name`: the name of the index.
+
+* `Seq_in_index`: the sequence number of the column in the composite index, such as `1` or `2`.
+
+* `Column_name`: the name of the indexed column.
+
+* `Collation`: the collation that indicates how columns are stored in the index.
+
+* `Cardinality`: the estimated number of unique values in the index.
+
+* `Sub_part`: the index prefix. If the column is only partially indexed, the value is the number of indexed characters in the column. The value is `NULL` if the entire column is indexed.
+
+* `Packed`: indicates how the key is packed. If the key is not packed, the value is `NULL`.
+
+* `Index_type`: the index type. At present, only the `BTREE` type is supported.
+
+* `Comment`: indicates whether the index is available.
+
+* `Index_comment`: the comment for the index.
+
+* `Visible`: indicates whether the index is visible.
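+
+If you need to filter or post-process this metadata, you can query the `information_schema.STATISTICS` view instead of using `SHOW INDEX`. This is a minimal sketch; it assumes that the view is available in your MySQL-mode tenant:
+
+```shell
+obclient> SELECT TABLE_NAME, INDEX_NAME, COLUMN_NAME FROM information_schema.STATISTICS WHERE TABLE_NAME = 'test';
+```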
+
## Add an index
The syntax for adding an index is as follows:
```sql
-ALTER TABLE table_name ADD INDEX index_name (index_col_name,...);
+ALTER TABLE table_name ADD {INDEX | KEY} index_name [index_type] (key_part,...) [index_option];
```
-Here is an example:
+where
-```sql
-obclient> CREATE TABLE tbl2(c1 INT PRIMARY KEY,c2 INT);
-Query OK, 0 row affected
+* `table_name` specifies the name of the table to which the index is to be added.
+
+* `INDEX | KEY` specifies that either INDEX or KEY can be used as the index keyword.
+
+* `index_name` specifies the name of the index to be added.
+
+* `index_type` specifies the index type. This parameter is optional.
+
+* `key_part` specifies the column or expression to be indexed. An expression adds a function-based index.
-obclient> ALTER TABLE tbl2 ADD INDEX ind1 (c1,c2);
-Query OK, 0 row affected
+* `index_option` specifies the index option. You can specify multiple index options and separate them with spaces.
+
+For more information about the syntax, see [ALTER TABLE](../600.sql-statement-of-mysql-mode/1600.alter-table-of-mysql-mode.md).
+
+Assume that table `t1` exists in the database. The following sample statement adds an index named `index2` on the `c2` and `c3` columns of the `t1` table, with the `c3` column in ascending order.
+
+```shell
+obclient> ALTER TABLE t1 ADD INDEX index2 (c2,c3 ASC);
```
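+
+Because `INDEX` and `KEY` are interchangeable in this statement, the following hypothetical equivalent (the index name `index4` is illustrative) uses the `KEY` keyword:
+
+```shell
+obclient> ALTER TABLE t1 ADD KEY index4 (c2);
+```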
## Drop an index
The two syntaxes for dropping an index are as follows:
-```sql
-DROP INDEX index_name ON table_name;
-```
+* DROP INDEX
-```sql
-ALTER TABLE table_name DROP INDEX index_name;
-```
+ ```sql
+ DROP INDEX index_name ON table_name;
+ ```
+
+   Here, `index_name` specifies the name of the index to be dropped, and `table_name` specifies the name of the table from which the index is to be dropped. For more information about the syntax, see [DROP INDEX](../600.sql-statement-of-mysql-mode/3700.drop-table-of-mysql-mode.md).
+
+* ALTER TABLE
+
+ ```sql
+ ALTER TABLE table_name DROP {INDEX | KEY} index_name;
+ ```
+
+ where
+
+ * `table_name` specifies the name of the table from which the index is to be dropped.
+
+ * `INDEX | KEY` specifies that either `INDEX` or `KEY` can be used as the index keyword.
+
+ * `index_name` specifies the name of the index to be dropped.
+
+   For more information about the syntax, see [ALTER TABLE](../600.sql-statement-of-mysql-mode/1600.alter-table-of-mysql-mode.md).
When you drop an index from a table, this table still supports read and write operations.
-Here is an example:
+Assume that the table `t1` exists in the database and has an index named `index1`. The following sample statements drop the index:
-```sql
-obclient> ALTER TABLE tbl2 DROP INDEX ind1;
-Query OK, 0 row affected
+* DROP INDEX
-obclient> DROP INDEX ind1 ON tbl2;
-Query OK, 0 rows affected
-```
+ ```shell
+ obclient> DROP INDEX index1 ON t1;
+ ```
+
+* ALTER TABLE
+
+ ```shell
+ obclient> ALTER TABLE t1 DROP INDEX index1;
+ ```
## Rename an index
The syntax for renaming an index is as follows:
```sql
-ALTER TABLE table_name RENAME INDEX old_index_name TO new_index_name;
+ALTER TABLE table_name RENAME {INDEX | KEY} old_index_name TO new_index_name;
```
-Here is an example:
+where
-```sql
-obclient> ALTER TABLE tbl2 RENAME INDEX ind1 TO ind2;
-Query OK, 0 rows affected
+* `table_name` specifies the name of the table where the index is to be renamed.
+
+* `INDEX | KEY` specifies that either `INDEX` or `KEY` can be used as the index keyword.
+
+* `old_index_name` specifies the original name of the index to be renamed.
+
+* `new_index_name` specifies the new name of the index.
+
+For more information about the syntax, see [ALTER TABLE](../600.sql-statement-of-mysql-mode/1600.alter-table-of-mysql-mode.md).
+
+Assume that the table `t1` exists in the database and has an index named `index2`. The following sample statement renames `index2` as `index3`:
+
+```shell
+obclient> ALTER TABLE t1 RENAME INDEX index2 TO index3;
```
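+
+You can confirm the new index name by querying the indexes on the table again, as described in the "Query indexes" section of this guide:
+
+```shell
+obclient> SHOW INDEX FROM t1;
+```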
+
+## References
+
+* [Overview](../../../../100.oceanbase-database-concepts/400.database-objects/200.database-objects-of-mysql-mode/300.index-of-oracle-mode/100.index-overview-of-mysql-mode.md)
+
+* [Create and manage indexes](../../../../300.database-object-management/100.manage-object-of-mysql-mode/500.manage-indexes-of-mysql-mode/200.create-an-index-of-mysql-mode.md)
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/300.primary-key-operations-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/300.primary-key-operations-of-mysql-mode.md
index 265f56b63d..725478a264 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/300.primary-key-operations-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/300.primary-key-operations-of-mysql-mode.md
@@ -7,7 +7,7 @@
# Primary key operations
-The MySQL mode of OceanBase Database allows you to add a primary key, modify a primary key, and drop a primary key.
+This topic describes simple primary key operations in the MySQL mode of OceanBase Database, including adding, modifying, and dropping primary keys.
## Add a primary key
@@ -17,14 +17,12 @@ The syntax for adding a primary key is as follows:
ALTER TABLE table_name ADD PRIMARY KEY (column_name);
```
-Here is an example:
+Here, `table_name` specifies the name of the table to which the primary key is to be added, and `column_name` specifies the primary key column. For more information, see [ALTER TABLE](../600.sql-statement-of-mysql-mode/1600.alter-table-of-mysql-mode.md).
-```sql
-obclient> CREATE TABLE tbl1(c1 INT,c2 VARCHAR(50));
-Query OK, 0 rows affected
+Assume that the `tbl1` table already exists in the database. The following sample statement adds a primary key on the `c1` column of the `tbl1` table:
+
+```shell
obclient> ALTER TABLE tbl1 ADD PRIMARY KEY(c1);
-Query OK, 0 rows affected
```
## Modify a primary key
@@ -32,14 +30,15 @@ Query OK, 0 rows affected
The syntax for modifying a primary key is as follows:
```sql
-ALTER TABLE table_name DROP PRIMARY KEY,ADD PRIMARY KEY (column_name_list);
+ALTER TABLE table_name DROP PRIMARY KEY, ADD PRIMARY KEY (column_name);
```
-Here is an example:
+Here, `table_name` specifies the name of the table to which the primary key belongs, and `column_name` specifies the primary key column. For more information, see [ALTER TABLE](../600.sql-statement-of-mysql-mode/1600.alter-table-of-mysql-mode.md).
-```sql
-obclient> ALTER TABLE tbl1 DROP PRIMARY KEY,ADD PRIMARY KEY(c2);
-Query OK, 0 rows affected
+Assume that the `tbl1` table already exists in the database and has a primary key. The following sample statement changes the primary key of the `tbl1` table to the `c2` column:
+
+```shell
+obclient> ALTER TABLE tbl1 DROP PRIMARY KEY, ADD PRIMARY KEY(c2);
```
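+
+As a quick check, you can view the table structure afterward; the `Key` column of the `DESCRIBE` output now shows `PRI` for the `c2` column instead of `c1` (a sanity-check sketch based on the `DESCRIBE` usage elsewhere in this guide):
+
+```shell
+obclient> DESCRIBE tbl1;
+```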
## Drop a primary key
@@ -50,9 +49,14 @@ The syntax for dropping a primary key is as follows:
ALTER TABLE table_name DROP PRIMARY KEY;
```
-Here is an example:
+Here, `table_name` specifies the name of the table to which the primary key belongs. For more information, see [ALTER TABLE](../600.sql-statement-of-mysql-mode/1600.alter-table-of-mysql-mode.md).
-```sql
+Assume that the `tbl1` table exists in the database and has a primary key. The following sample statement drops the primary key from the `tbl1` table:
+
+```shell
obclient> ALTER TABLE tbl1 DROP PRIMARY KEY;
-Query OK, 0 rows affected
```
+
+## References
+
+[Query indexes](../../../../300.database-object-management/100.manage-object-of-mysql-mode/500.manage-indexes-of-mysql-mode/300.view-indexes-of-mysql-mode.md)
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/400.column-operations-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/400.column-operations-of-mysql-mode.md
index 7dd9830a71..78a8e9df4c 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/400.column-operations-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/400.column-operations-of-mysql-mode.md
@@ -7,72 +7,153 @@
# Column operations
-The MySQL mode of OceanBase Database allows you to add a column to the end of a table, add a column to a table at a position other than the end of the table, drop a column, rename a column, reorder columns, change the data type of a column, set a default value for a column, drop the default value of a column, change the value of an auto-increment column, set a `NULL` constraint on a column, and set a `NOT NULL` constraint on a column.
+This topic describes the column operations in the MySQL mode of OceanBase Database, including adding a column to the end of a table, adding a column to a position other than the end of a table, dropping a column, renaming a column, relocating a column, changing the data type of a column, managing the default value of a column, managing constraints, and changing the value of an auto-increment column.
-## Add a column to the end of a table
+## Add a column
+
+The syntax for **adding a column to the end of a table** differs from that for **adding a column to a position other than the end of a table**.
+
+### Add a column to the end of a table
The syntax for adding a column to the end of a table is as follows:
```sql
-ALTER TABLE table_name ADD COLUMN column_name column_definition;
+ALTER TABLE table_name ADD COLUMN column_name data_type;
```
-Here is an example:
+where
-```sql
-obclient> CREATE TABLE tbl1 (c1 INT PRIMARY KEY,c2 VARCHAR(50));
-Query OK, 0 rows affected
+* `table_name` specifies the name of the table to which the column is to be added.
-obclient> DESCRIBE tbl1;
-+-------+------------+----------+--------+---------+-------+
-| Field | Type | Null | Key | Default | Extra |
-+-------+------------+----------+--------+---------+-------+
-| c1 | int(11) | NO | PRI | 1 | |
-| c2 | varchar(50)| YES | | NULL | |
-+-------+------------+----------+--------+---------+-------+
-2 rows in set
+* `column_name` specifies the name of the column to be added.
-obclient> ALTER TABLE tbl1 ADD c3 INT;
-Query OK, 0 rows affected
+* `data_type` specifies the data type of the column to be added.
-obclient> DESCRIBE tbl1;
-+-------+-------------+------+-----+---------+-------+
-| Field | Type | Null | Key | Default | Extra |
-+-------+-------------+------+-----+---------+-------+
-| c1 | int(11) | NO | PRI | 1 | |
-| c2 | varchar(50) | YES | | NULL | |
-| c3 | int(11) | YES | | NULL | |
-+-------+-------------+------+-----+---------+-------+
-3 rows in set
-```
+Assume that the `tbl1` table exists in the database. The following example adds the `c1` column to the end of the `tbl1` table.
+
+1. View the structure of the `tbl1` table.
+
+ ```shell
+ obclient> DESCRIBE tbl1;
+ ```
+
+ The result is as follows, showing that the table has three columns, `id`, `name`, and `age`:
+
+ ```shell
+ +-------+-------------+------+-----+---------+-------+
+ | Field | Type | Null | Key | Default | Extra |
+ +-------+-------------+------+-----+---------+-------+
+ | id | int(11) | NO | PRI | NULL | |
+ | name | varchar(50) | NO | | NULL | |
+ | age | int(11) | YES | | NULL | |
+ +-------+-------------+------+-----+---------+-------+
+ ```
-## Add a column to a table at a position other than the end of the table
+2. Add the `c1` column to the end of the table and specify the data type of the column as INT.
-The syntax for adding a column to a table at a position other than the end of the table is as follows:
+ ```shell
+ obclient> ALTER TABLE tbl1 ADD COLUMN c1 INT;
+ ```
+
+3. View the structure of the `tbl1` table again.
+
+ ```shell
+ obclient> DESCRIBE tbl1;
+ ```
+
+ The result is as follows, showing that the `c1` column is added to the end of the table:
+
+ ```shell
+ +-------+-------------+------+-----+---------+-------+
+ | Field | Type | Null | Key | Default | Extra |
+ +-------+-------------+------+-----+---------+-------+
+ | id | int(11) | NO | PRI | NULL | |
+ | name | varchar(50) | NO | | NULL | |
+ | age | int(11) | YES | | NULL | |
+ | c1 | int(11) | YES | | NULL | |
+ +-------+-------------+------+-----+---------+-------+
+ ```
+
+### Add a column to a position other than the end of a table
+
+The syntax for adding a column to a position other than the end of a table is as follows:
```sql
-ALTER TABLE tbl_name ADD COLUMN new_column_name column_definition
- BEFORE column_name;
+ALTER TABLE table_name
+ ADD COLUMN new_column_name data_type
+ {FIRST | BEFORE | AFTER} column_name;
```
-Here is an example:
+where
-```sql
-obclient> ALTER TABLE tbl1 ADD COLUMN c4 INT BEFORE c3;
-Query OK, 0 rows affected
+* `table_name` specifies the name of the table to which the column is to be added.
+
+* `new_column_name` specifies the name of the column to be added.
+
+* `data_type` specifies the data type of the column to be added.
+
+* `FIRST | BEFORE | AFTER` specifies the position to which the column is to be added. `FIRST` indicates the beginning of the table, and `BEFORE` or `AFTER` indicates the position before or after the specified column.
+
+
+    <main id="notice" type="notice">
+    <h4>Notice</h4>
+    <p>If you use <code>FIRST</code> to add a column to the beginning of a table, you do not need to set <code>column_name</code>. Otherwise, an error is returned.</p>
+    </main>
+
-obclient> DESCRIBE tbl1;
+* `column_name` specifies the name of the existing column relative to which the new column is positioned.
+
+Assume that the `tbl1` table exists in the database and the table structure of `tbl1` is as follows:
+
+```shell
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
-| c1 | int(11) | NO | PRI | 1 | |
-| c2 | varchar(50) | YES | | NULL | |
-| c4 | int(11) | YES | | NULL | |
-| c3 | int(11) | YES | | NULL | |
+| id | int(11) | NO | PRI | NULL | |
+| name | varchar(50) | NO | | NULL | |
+| age | int(11) | YES | | NULL | |
+-------+-------------+------+-----+---------+-------+
-4 rows in set
```
+The following example adds the `c1` column to the beginning of the `tbl1` table, the `c2` column before the `name` column, and the `c3` column after the `name` column.
+
+1. Add the `c1` column to the beginning of the `tbl1` table and specify the data type of the column as INT.
+
+ ```shell
+ obclient> ALTER TABLE tbl1 ADD COLUMN c1 INT FIRST;
+ ```
+
+2. Add the `c2` column before the `name` column in the `tbl1` table and specify the data type of the new column as VARCHAR.
+
+ ```shell
+ obclient> ALTER TABLE tbl1 ADD COLUMN c2 VARCHAR(50) BEFORE name;
+ ```
+
+3. Add the `c3` column after the `name` column in the `tbl1` table, specify the data type of the new column as VARCHAR, and add a `NOT NULL` constraint.
+
+ ```shell
+ obclient> ALTER TABLE tbl1 ADD COLUMN c3 VARCHAR(25) NOT NULL AFTER name;
+ ```
+
+4. View the structure of the `tbl1` table again.
+
+ ```shell
+ obclient> DESCRIBE tbl1;
+ ```
+
+ The result is as follows:
+
+ ```shell
+ +-------+-------------+------+-----+---------+-------+
+ | Field | Type | Null | Key | Default | Extra |
+ +-------+-------------+------+-----+---------+-------+
+ | c1 | int(11) | YES | | NULL | |
+ | id | int(11) | NO | PRI | NULL | |
+ | c2 | varchar(50) | YES | | NULL | |
+ | name | varchar(50) | NO | | NULL | |
+ | c3 | varchar(25) | NO | | NULL | |
+ | age | int(11) | YES | | NULL | |
+ +-------+-------------+------+-----+---------+-------+
+ ```
+
## Drop a column
The syntax for dropping a column is as follows:
@@ -81,347 +162,543 @@ The syntax for dropping a column is as follows:
ALTER TABLE table_name DROP COLUMN column_name;
```
-Here is an example:
+Here, `table_name` specifies the name of the table to which the column belongs, and `column_name` specifies the name of the column to be dropped.
-```sql
-obclient> ALTER TABLE tbl1 DROP COLUMN c3;
-Query OK, 0 rows affected
+Assume that the `tbl1` table exists in the database and the table structure of `tbl1` is as follows:
-obclient> DESCRIBE tbl1;
+```shell
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
-| c1 | int(11) | NO | PRI | 1 | |
-| c2 | varchar(50) | YES | | NULL | |
-| c4 | int(11) | YES | | NULL | |
+| id | int(11) | NO | PRI | NULL | |
+| name | varchar(50) | NO | | NULL | |
+| age | int(11) | YES | | NULL | |
+-------+-------------+------+-----+---------+-------+
-3 rows in set
```
-## Rename a column
-
-The syntax for renaming a column is as follows:
+The following example drops the `name` column:
-```sql
-ALTER TABLE table_name CHANGE old_col_name new_col_name data_type;
+```shell
+obclient> ALTER TABLE tbl1 DROP COLUMN name;
```
-Here is an example:
+After executing the preceding statement, execute `DESCRIBE tbl1;` again to view the table structure of `tbl1`. The result is as follows, showing that the `name` column is removed from the `tbl1` table:
-```sql
-obclient> DESCRIBE tbl1;
+```shell
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
-| c1 | int(11) | NO | PRI | 1 | |
-| c2 | varchar(50) | YES | | NULL | |
-| c4 | int(11) | YES | | NULL | |
+| id | int(11) | NO | PRI | NULL | |
+| age | int(11) | YES | | NULL | |
+-------+-------------+------+-----+---------+-------+
-3 rows in set
+```
+
+## Rename a column
+
+The two syntaxes for renaming a column are as follows:
-obclient> ALTER TABLE tbl1 CHANGE COLUMN c4 c3 VARCHAR(50);
-Query OK, 0 rows affected
+* Rename a column while changing the type of the column
-obclient> DESCRIBE tbl1;
+ ```sql
+ ALTER TABLE table_name CHANGE old_col_name new_col_name data_type;
+ ```
+
+ where
+
+ * `table_name` specifies the name of the table to which the column belongs.
+
+ * `old_col_name` specifies the original name of the column.
+
+ * `new_col_name` specifies the new name of the column.
+
+ * `data_type` specifies the new data type of the column to be renamed. You can specify the current data type or another data type. For more information, see [Change the data type of a column](#Change_the_data_type_of_a_column).
+
+* Rename a column only
+
+ ```sql
+    ALTER TABLE table_name RENAME COLUMN old_col_name TO new_col_name;
+ ```
+
+ where
+
+ * `table_name` specifies the name of the table to which the column belongs.
+
+ * `old_col_name` specifies the original name of the column.
+
+ * `new_col_name` specifies the new name of the column.
+
+Assume that the `tbl1` table exists in the database and the table structure of `tbl1` is as follows:
+
+```shell
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
-| c1 | int(11) | NO | PRI | 1 | |
-| c2 | varchar(50) | YES | | NULL | |
-| c3 | varchar(50) | YES | | NULL | |
+| id | int(11) | NO | PRI | NULL | |
+| name | varchar(50) | NO | | NULL | |
+| age | int(11) | YES | | NULL | |
+-------+-------------+------+-----+---------+-------+
-3 rows in set
```
-## Reorder columns
+The following example renames the `name` column as `c2`, renames the `age` column as `c3`, and changes the data type of that column to VARCHAR.
+
+1. Rename the `name` column as `c2`.
+
+ ```shell
+ obclient> ALTER TABLE tbl1 RENAME COLUMN name TO c2;
+ ```
+
+2. Rename the `age` column as `c3` and change the data type of the `age` column to VARCHAR.
+
+ ```shell
+ obclient> ALTER TABLE tbl1 CHANGE age c3 VARCHAR(50);
+ ```
+
+3. Execute `DESCRIBE tbl1;` again to view the table structure of `tbl1`.
-The syntax for reordering columns is as follows:
+   The output is as follows, showing that the `tbl1` table has three columns, `id`, `c2`, and `c3`, and that the data type of the `c3` column is VARCHAR.
+
+ ```shell
+ +-------+-------------+------+-----+---------+-------+
+ | Field | Type | Null | Key | Default | Extra |
+ +-------+-------------+------+-----+---------+-------+
+ | id | int(11) | NO | PRI | NULL | |
+ | c2 | varchar(50) | YES | | NULL | |
+ | c3 | varchar(50) | YES | | NULL | |
+ +-------+-------------+------+-----+---------+-------+
+ ```
+
+## Relocate a column
+
+The syntax for relocating a column is as follows:
```sql
-ALTER TABLE table_name MODIFY COLUMN column_name column_definition
- FIRST | [AFTER column_name];
+ALTER TABLE table_name
+ MODIFY [COLUMN] column_name data_type
+ {FIRST | BEFORE | AFTER} column_name;
```
-Here is an example:
+where
-```sql
-obclient> ALTER TABLE tbl1 MODIFY COLUMN c3 INT FIRST;
-Query OK, 0 rows affected
+* `table_name` specifies the name of the table for which the column is to be relocated.
+
+* `column_name` specifies the name of the column to be relocated.
-obclient> DESCRIBE tbl1;
+* `data_type` specifies the new data type of the column to be relocated. You can specify the current data type or another data type. For more information, see [Change the data type of a column](#Change_the_data_type_of_a_column).
+
+* `FIRST | BEFORE | AFTER` specifies the position to which the column is to be relocated. `FIRST` indicates the beginning of the table, and `BEFORE` or `AFTER` indicates the position before or after the specified column.
+
+
+    <main id="notice" type="notice">
+    <h4>Notice</h4>
+    <p>If you use <code>FIRST</code> to relocate a column to the beginning of a table, you do not need to set <code>column_name</code>. Otherwise, an error is returned.</p>
+    </main>
+
+
+* `column_name` specifies the name of the existing column relative to which the column is repositioned.
+
+Assume that the `tbl1` table exists in the database and the table structure of `tbl1` is as follows:
+
+```shell
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
-| c3 | int(11) | YES | | 1 | |
-| c1 | int(11) | NO | PRI | NULL | |
-| c2 | varchar(50) | YES | | NULL | |
+| id | int(11) | NO | PRI | NULL | |
+| name | varchar(50) | NO | | NULL | |
+| age | int(11) | YES | | NULL | |
+-------+-------------+------+-----+---------+-------+
-3 rows in set
+```
-obclient> ALTER TABLE tbl1 MODIFY COLUMN c3 INT AFTER c2;
-Query OK, 0 rows affected
+The following example relocates the `age` column before the `name` column and changes the data type of the `age` column to VARCHAR:
-obclient> DESCRIBE tbl1;
+```shell
+obclient> ALTER TABLE tbl1 MODIFY COLUMN age VARCHAR(50) BEFORE name;
+```
+
+Execute `DESCRIBE tbl1;` again to view the table structure of tbl1. The result is as follows, showing that the `age` column is relocated before the `name` column and that its data type is changed to VARCHAR:
+
+```shell
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
-| c1 | int(11) | NO | PRI | 1 | |
-| c2 | varchar(50) | YES | | NULL | |
-| c3 | int(11) | YES | | NULL | |
+| id | int(11) | NO | PRI | NULL | |
+| age | varchar(50) | YES | | NULL | |
+| name | varchar(50) | NO | | NULL | |
+-------+-------------+------+-----+---------+-------+
-3 rows in set
```
-## Change the type of a column
+## Change the data type of a column
OceanBase Database supports the following data type conversions:
* Conversion between the following character data types: `CHAR`, `VARCHAR`, `TINYTEXT`, `TEXT`, and `LONGTEXT`.
+
* Conversion between the following numeric data types: `TINYINT`, `SMALLINT`, `MEDIUMINT`, `INT`, and `BIGINT`.
+
* Conversion between the following binary data types: `BINARY`, `VARBINARY`, `BLOB`, `TINYBLOB`, `MEDIUMBLOB`, and `LONGBLOB`.
+
* Precision change for the following data types with a precision: `VARCHAR`, `FLOAT`, `DOUBLE`, and `DECIMAL`.
+
* Conversion between the following data types with a precision: `FLOAT`, `DOUBLE`, and `DECIMAL`.
+
* Conversion between the following data types: `INT`, `VARCHAR`, `DOUBLE`, `FLOAT`, and `DECIMAL`.
For more information about the rules for changing the data types of columns in OceanBase Database, see [Column type change rules](../700.ddl-function-of-mysql-mode/900.column-type-change-rule-of-mysql-mode.md).
-The syntax for changing the type of a column is as follows:
+The syntax for changing the data type of a column is as follows:
```sql
-ALTER TABLE table_name MODEFY column_name data_type;
+ALTER TABLE table_name MODIFY [COLUMN] column_name data_type;
```
-### Examples of column type changes
+where
-#### Conversions between character data types
+* `table_name` specifies the name of the table to which the column belongs.
-Change the data type of a character column and increase the length.
+* `column_name` specifies the name of the column whose data type is to be changed.
-```sql
+* `data_type` specifies the new data type.
+
+### Conversions between character data types
+
+Create the test01 table as follows:
+
+```shell
obclient> CREATE TABLE test01 (c1 INT PRIMARY KEY, c2 CHAR(10), c3 CHAR(10));
-Query OK, 0 rows affected
+```
-obclient> ALTER TABLE test01 MODIFY C2 VARCHAR(20);
-Query OK, 0 rows affected
+The following examples show how to change the data type and length limit for a column of a character data type.
-obclient> ALTER TABLE test01 MODIFY C2 VARCHAR(40);
-Query OK, 0 rows affected
+* Change the data type of the `c2` column in the test01 table to VARCHAR, and set the length limit to 20 characters.
-obclient> ALTER TABLE test01 MODIFY C2 TINYTEXT;
-Query OK, 0 rows affected
+ ```shell
+ obclient> ALTER TABLE test01 MODIFY c2 VARCHAR(20);
+ ```
-obclient> ALTER TABLE test01 MODIFY C2 LONGTEXT;
-Query OK, 0 rows affected
+* Change the data type of the `c2` column in the test01 table to VARCHAR, and set the length limit to 40 characters.
-obclient> ALTER TABLE test01 MODIFY C3 CHAR(20);
-Query OK, 0 rows affected
+ ```shell
+ obclient> ALTER TABLE test01 MODIFY c2 VARCHAR(40);
+ ```
-obclient> ALTER TABLE test01 MODIFY C3 VARCHAR(30);
-Query OK, 0 rows affected
-```
+* Change the data type of the `c2` column in the test01 table to TINYTEXT.
-#### Conversion between numeric data types
+ ```shell
+ obclient> ALTER TABLE test01 MODIFY c2 TINYTEXT;
+ ```
-* Change the data type of an integer type column and increase the length of the column.
+* Change the data type of the `c2` column in the test01 table to LONGTEXT.
- ```sql
- obclient> CREATE TABLE test02 (id INT PRIMARY KEY, name VARCHAR(10),age TINYINT, description VARCHAR(65525));
- Query OK, 0 rows affected
+ ```shell
+ obclient> ALTER TABLE test01 MODIFY c2 LONGTEXT;
+ ```
- obclient> ALTER TABLE test02 MODIFY age SMALLINT;
- Query OK, 0 rows affected
+* Change the length limit of the `c3` column in the test01 table to 20 characters.
- obclient> ALTER TABLE test02 MODIFY age INT;
- Query OK, 0 rows affected
+ ```shell
+ obclient> ALTER TABLE test01 MODIFY c3 CHAR(20);
+ ```
- obclient> ALTER TABLE test02 MODIFY age BIGGINT;
- Query OK, 0 rows affected
- ```
+* Change the data type of the `c3` column in the test01 table to VARCHAR, and set the length limit to 30 characters.
-* Change the data type and length for a column of a data type that has a precision.
+ ```shell
+ obclient> ALTER TABLE test01 MODIFY c3 VARCHAR(30);
+ ```
- ```sql
- obclient> CREATE TABLE test03(c1 INT, c2 FLOAT(8,2), c3 FLOAT(8,2), UNIQUE(c2, c3));
- Query OK, 0 rows affected
+### Conversion between numeric data types
- obclient> ALTER TABLE test03 MODIFY c2 FLOAT(5,2);
- Query OK, 0 rows affected
+#### Integer data types
- obclient> ALTER TABLE test03 MODIFY c2 DOUBLE(10,4);
- Query OK, 0 rows affected
+Create the test02 table as follows:
- obclient> ALTER TABLE test03 MODIFY c2 DOUBLE(5,2);
- Query OK, 0 rows affected
+```shell
+obclient> CREATE TABLE test02 (id INT PRIMARY KEY, name VARCHAR(10), age TINYINT, description VARCHAR(65525));
+```
- obclient> ALTER TABLE test03 MODIFY c2 DECIMAL(20, 4);
- Query OK, 0 rows affected
- ```
+The following examples show how to change the data type and length limit for a column of an integer data type.
-#### Conversion for binary data types
+* Change the data type of the `age` column in the test02 table to SMALLINT.
-```sql
-obclient> CREATE TABLE test04 (c1 TINYBLOB, c2 BINARY(64));
-Query OK, 0 rows affected
+ ```shell
+ obclient> ALTER TABLE test02 MODIFY age SMALLINT;
+ ```
+
+* Change the data type of the `age` column in the `test02` table to INT.
+
+ ```shell
+ obclient> ALTER TABLE test02 MODIFY age INT;
+ ```
-obclient> ALTER TABLE test04 MODIFY c1 BLOB;
-Query OK, 0 rows affected
+* Change the data type of the `age` column in the `test02` table to BIGINT.
-obclient> ALTER TABLE test04 MODIFY c1 BINARY(256);
-Query OK, 0 rows affected
+ ```shell
+ obclient> ALTER TABLE test02 MODIFY age BIGINT;
+ ```
-obclient> CREATE TABLE test05 (id INT PRIMARY KEY, name TINYTEXT,age INT, description VARCHAR(65535));
-Query OK, 0 rows affected
+#### Numeric data types with a precision
-obclient> ALTER TABLE test05 MODIFY name VARCHAR(256);
-Query OK, 0 rows affected
+Create the test03 table as follows:
+
+```shell
+obclient> CREATE TABLE test03(c1 INT, c2 FLOAT(8,2), c3 FLOAT(8,2), UNIQUE(c2, c3));
```
-#### Conversion between an integer data type and a character data type
+The following examples show how to change the data type and length limit for a column of a numeric data type with a precision.
-```sql
-obclient> CREATE TABLE test06 (c1 INT);
-Query OK, 0 rows affected
+* Change the maximum precision of the `c2` column in the `test03` table to 5 digits, including 2 digits for the fractional part.
-obclient> ALTER TABLE test06 MODIFY c1 VARCHAR(64);
-Query OK, 0 rows affected
+ ```shell
+ obclient> ALTER TABLE test03 MODIFY c2 FLOAT(5,2);
+ ```
-obclient> CREATE TABLE test07 (c1 VARCHAR(32));
-Query OK, 0 rows affected
+* Change the data type of the `c2` column in the `test03` table to DOUBLE, and change the maximum precision of the column to 10 digits, including 4 digits for the fractional part.
-obclient> ALTER TABLE test07 MODIFY c1 INT;
-Query OK, 0 rows affected
-```
+ ```shell
+ obclient> ALTER TABLE test03 MODIFY c2 DOUBLE(10,4);
+ ```
-## Set a default value for a column
+* Change the data type of the `c2` column in the `test03` table to DOUBLE, and change the maximum precision of the column to 5 digits, including 2 digits for the fractional part.
-The syntax for setting a default value for a column is as follows:
+ ```shell
+ obclient> ALTER TABLE test03 MODIFY c2 DOUBLE(5,2);
+ ```
-```sql
-ALTER TABLE table_name ALTER COLUMN column_name SET DEFAULT literal;
-```
+* Change the data type of the `c2` column in the `test03` table to DECIMAL, and change the maximum precision of the column to 20 digits, including 4 digits for the fractional part.
-Here is an example:
+ ```shell
+ obclient> ALTER TABLE test03 MODIFY c2 DECIMAL(20, 4);
+ ```
-```sql
-obclient> ALTER TABLE tbl1 ALTER COLUMN c1 SET DEFAULT 111;
-Query OK, 0 rows affected
+### Conversion for binary data types
-obclient> DESCRIBE tbl1;
-+-------+-------------+------+-----+---------+-------+
-| Field | Type | Null | Key | Default | Extra |
-+-------+-------------+------+-----+---------+-------+
-| c4 | int(11) | YES | | NULL | |
-| c1 | int(11) | NO | PRI | 111 | |
-| c2 | varchar(50) | YES | | NULL | |
-| c3 | varchar(50) | YES | | NULL | |
-+-------+-------------+------+-----+---------+-------+
-4 rows in set
+Create the test04 table as follows:
+
+```shell
+obclient> CREATE TABLE test04 (c1 TINYBLOB, c2 BINARY(64));
```
-## Drop the default value of a column
+The following examples show how to change the data type and length limit for a column of a binary data type.
-The syntax for dropping the default value of a column is as follows:
+* Change the data type of the `c1` column in the `test04` table to BLOB.
-```sql
-ALTER TABLE table_name ALTER COLUMN column_name DROP DEFAULT;
-```
+ ```shell
+ obclient> ALTER TABLE test04 MODIFY c1 BLOB;
+ ```
-Here is an example:
+* Change the data type of the `c1` column in the `test04` table to BINARY, and set the length limit to 256 bytes.
-```sql
-obclient> ALTER TABLE tbl1 ALTER COLUMN c1 DROP DEFAULT;
-Query OK, 0 rows affected
-```
+ ```shell
+ obclient> ALTER TABLE test04 MODIFY c1 BINARY(256);
+ ```
-## Change the value of an auto-increment column
+* Change the data type of the `c1` column in the `test04` table to VARCHAR and set the length limit to 256 characters.
-The syntax for changing the value of an auto-increment column is as follows:
+ ```shell
+ obclient> ALTER TABLE test04 MODIFY c1 VARCHAR(256);
+ ```
-```sql
-ALTER TABLE table_name AUTO_INCREMENT=next_value;
+### Conversion between an integer data type and a character data type
+
+Create the `test05` table as follows:
+
+```shell
+obclient> CREATE TABLE test05 (c1 INT);
```
-Here is an example:
+1. Execute the following statement to change the data type of the `c1` column in the `test05` table to VARCHAR and set the length limit to 64 characters.
+
+ ```shell
+ obclient> ALTER TABLE test05 MODIFY c1 VARCHAR(64);
+ ```
+
+2. Execute the following statement to change the data type of the `c1` column in the `test05` table to INT.
+
+ ```shell
+ obclient> ALTER TABLE test05 MODIFY c1 INT;
+ ```
+
+## Manage the default value of a column
+
+In MySQL mode of OceanBase Database, you can change or drop the default value of a column. This section describes how to manage the default value of a column.
+
+### Change the default value of a column
+
+If a column is not configured with a default value, its default value is `NULL`. The syntax for changing the default value of a column is as follows:
```sql
-obclient> ALTER TABLE tbl1 AUTO_INCREMENT=12;
-Query OK, 0 rows affected
+ALTER TABLE table_name ALTER COLUMN column_name SET DEFAULT const_value;
```
-## Change a column to an auto-increment column
+where
-The syntax for changing a column to an auto-increment column is as follows:
+* `table_name` specifies the name of the table to which the column belongs.
-```sql
-ALTER TABLE table_name MODIFY column_name data_type AUTO_INCREMENT;
+* `column_name` specifies the name of the column whose default value is to be changed.
+
+* `const_value` specifies the new default value of the column.
+
+Assume that the tbl1 table exists in the database and the table structure of tbl1 is as follows:
+
+```shell
++-------+-------------+------+-----+---------+-------+
+| Field | Type | Null | Key | Default | Extra |
++-------+-------------+------+-----+---------+-------+
+| id | int(11) | NO | PRI | NULL | |
+| name | varchar(50) | NO | | NULL | |
+| age | int(11) | YES | | NULL | |
++-------+-------------+------+-----+---------+-------+
```
-Here is an example:
+The following example changes the default value of the `age` column to 18:
-```sql
-obclient> ALTER TABLE tbl1 MODIFY c1 INT AUTO_INCREMENT;
-Query OK, 0 rows affected
+```shell
+obclient> ALTER TABLE tbl1 ALTER COLUMN age SET DEFAULT 18;
```
-## Set a `NULL` constraint on a column
+Execute `DESCRIBE tbl1;` again to view the table structure of tbl1. The result is as follows, showing that the default value of `age` is 18:
-The syntax for setting a `NULL` constraint on a column is as follows:
+```shell
++-------+-------------+------+-----+---------+-------+
+| Field | Type | Null | Key | Default | Extra |
++-------+-------------+------+-----+---------+-------+
+| id | int(11) | NO | PRI | NULL | |
+| name | varchar(50) | NO | | NULL | |
+| age | int(11) | YES | | 18 | |
++-------+-------------+------+-----+---------+-------+
+```
+
+### Drop the default value of a column
+
+The syntax for dropping the default value of a column is as follows:
```sql
-ALTER TABLE table_name MODIFY COLUMN column_name data_type NULL;
+ALTER TABLE table_name ALTER COLUMN column_name DROP DEFAULT;
```
-Here is an example:
+Here, `table_name` specifies the name of the table to which the column belongs, and `column_name` specifies the name of the column whose default value is to be dropped.
-```sql
-obclient> DESCRIBE tbl1;
+Assume that the tbl1 table exists in the database and the table structure of tbl1 is as follows:
+
+```shell
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
-| c4 | int(11) | YES | | NULL | |
-| c1 | int(11) | NO | PRI | 1 | |
-| c2 | varchar(50) | YES | | NULL | |
-| c3 | varchar(50) | YES | | NULL | |
+| id | int(11) | NO | PRI | NULL | |
+| name | varchar(50) | NO | | NULL | |
+| age | int(11) | YES | | 18 | |
+-------+-------------+------+-----+---------+-------+
-4 rows in set
+```
+
+The following example drops the default value of the `age` column:
+
+```shell
+obclient> ALTER TABLE tbl1 ALTER COLUMN age DROP DEFAULT;
+```
-obclient> ALTER TABLE tbl1 MODIFY COLUMN c1 INT NULL;
-Query OK, 0 rows affected
+Execute `DESCRIBE tbl1;` again to view the table structure of tbl1. The result is as follows, showing that the default value of `age` is `NULL`:
-obclient> DESCRIBE tbl1;
+```shell
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
-| c4 | int(11) | YES | | NULL | |
-| c1 | int(11) | NO | PRI | 1 | |
-| c2 | varchar(50) | YES | | NULL | |
-| c3 | varchar(50) | YES | | NULL | |
+| id | int(11) | NO | PRI | NULL | |
+| name | varchar(50) | NO | | NULL | |
+| age | int(11) | YES | | NULL | |
+-------+-------------+------+-----+---------+-------+
-4 rows in set
```
-## Set a `NOT NULL` constraint on a column
+## Manage constraints
+
+In the MySQL mode of OceanBase Database, you can add column constraints to tables. For example, you can change an existing column to an auto-increment column, set a column as the primary key column, set or cancel a NOT NULL constraint for a column, and set or cancel a UNIQUE constraint for a column.
-The syntax for setting a `NOT NULL` constraint on a column is as follows:
+The syntax for managing constraints is as follows:
```sql
-ALTER TABLE table_name MODIFY COLUMN column_name data_type NOT NULL;
+ALTER TABLE table_name
+ MODIFY [COLUMN] column_name data_type
+ [AUTO_INCREMENT]
+ [NULL | NOT NULL]
+ [[PRIMARY] KEY]
+ [UNIQUE [KEY]];
```
-Here is an example:
+where
-```sql
-obclient> ALTER TABLE tbl1 MODIFY COLUMN c1 INT NOT NULL;
-Query OK, 0 rows affected
+* `table_name` specifies the name of the table to which the column belongs.
+
+* `column_name` specifies the name of the column to which the constraint is to be added.
+
+* `data_type` specifies the new data type of the column. You can specify the current data type or another data type. For more information, see [Change the data type of a column](#Change_the_data_type_of_a_column).
+
+* `AUTO_INCREMENT` specifies to set the column as an auto-increment column.
-obclient> DESCRIBE tbl1;
+* `NULL | NOT NULL` specifies whether the column can contain null values (`NULL`) or cannot contain null values (`NOT NULL`).
+
+* `[PRIMARY] KEY` specifies to set the column as the primary key column.
+
+* `UNIQUE [KEY]` specifies to set the UNIQUE constraint for the column.
+
+Assume that the tal1 table exists in the database and the table structure of tal1 is as follows:
+
+```shell
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
-| c4 | int(11) | YES | | NULL | |
-| c1 | int(11) | NO | PRI | 1 | |
-| c2 | varchar(50) | YES | | NULL | |
-| c3 | varchar(50) | YES | | NULL | |
+| id | int(11) | YES | | NULL | |
+| name | varchar(50) | YES | | NULL | |
+| age | int(11) | YES | | NULL | |
+-------+-------------+------+-----+---------+-------+
-4 rows in set
```
+
+1. Set the `id` column as the primary key column.
+
+ ```shell
+ obclient> ALTER TABLE tal1 MODIFY COLUMN id INT PRIMARY KEY;
+ ```
+
+2. Set the `id` column as an auto-increment column.
+
+ ```shell
+ obclient> ALTER TABLE tal1 MODIFY COLUMN id INT AUTO_INCREMENT;
+ ```
+
+3. Set the NOT NULL constraint for the `name` column.
+
+ ```shell
+ obclient> ALTER TABLE tal1 MODIFY COLUMN name VARCHAR(50) NOT NULL;
+ ```
+
+4. Set the UNIQUE constraint for the `age` column.
+
+ ```shell
+ obclient> ALTER TABLE tal1 MODIFY COLUMN age INT UNIQUE;
+ ```
+
+5. Execute `DESCRIBE tal1;` again to view the table structure of tal1.
+
+ ```shell
+ +-------+-------------+------+-----+---------+----------------+
+ | Field | Type | Null | Key | Default | Extra |
+ +-------+-------------+------+-----+---------+----------------+
+ | id | int(11) | NO | PRI | NULL | auto_increment |
+ | name | varchar(50) | NO | | NULL | |
+ | age | int(11) | YES | UNI | NULL | |
+ +-------+-------------+------+-----+---------+----------------+
+ ```
+
+## Change the value of an auto-increment column
+
+The syntax for changing the value of an auto-increment column is as follows:
+
+```sql
+ALTER TABLE table_name [SET] AUTO_INCREMENT = next_value;
+```
+
+Assume that the table tbl1 exists in the database. The following example changes the value of the auto-increment column in tbl1 to 12:
+
+```shell
+obclient> ALTER TABLE tbl1 AUTO_INCREMENT = 12;
+```
+
+After the change, the next auto-increment value of the tbl1 table is 12. When new records are inserted, the value of the auto-increment column increments from 12.
+
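+As a quick verification sketch (assuming `tbl1` has an auto-increment primary key `id` and a `name` column, which is an assumption for illustration and not guaranteed by the preceding examples), you can insert a row without specifying the auto-increment column and read it back:
+
+```shell
+obclient> INSERT INTO tbl1(name) VALUES ('test');
+obclient> SELECT * FROM tbl1;
+```
+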
+## References
+
+[ALTER TABLE](../600.sql-statement-of-mysql-mode/1600.alter-table-of-mysql-mode.md)
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/1000.ddl-function-of-oracle-mode/200.index-operations-of-oracle-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/1000.ddl-function-of-oracle-mode/200.index-operations-of-oracle-mode.md
index 1865d86cd9..bc80364071 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/1000.ddl-function-of-oracle-mode/200.index-operations-of-oracle-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/1000.ddl-function-of-oracle-mode/200.index-operations-of-oracle-mode.md
@@ -7,55 +7,57 @@
# Index operations
-In the Oracle mode of OceanBase Database, you can create (or add), query, drop, and rename an index.
+This topic describes how to perform simple index operations in Oracle mode of OceanBase Database, including creating an index, querying indexes, dropping an index, and renaming an index.
-## Create/add an index
+## Create an index
-The syntax for creating (or adding) an index is as follows:
+The syntax for creating an index is as follows:
```sql
-CREATE [UNIQUE] INDEX index_name
- [USING BTREE] ON table_name (sort_column_key [, sort_column_key...])
+CREATE [UNIQUE] INDEX index_name
+ [USING BTREE] ON table_name (sort_column_key [, sort_column_key...])
[index_option...] [partition_option];
```
where
* `index_name` specifies the name of the index to be created.
-
-* `table_name` specifies the name of the table for which the index is to be created.
-
-* `sort_column_key` specifies the key of a sorting column. When you create an index, you can specify multiple sorting columns and separate them by a comma (`,`).
-
-* `index_option` specifies the index options. When you create an index, you can specify multiple index options and separate them by a space.
-
-* `partition_option` specifies the options for creating index partitions. For detailed syntax information, see [CREATE INDEX](../900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1600.create-index-of-oracle-mode.md).
-When you create an index for a certain table, you can still use the table for read, write, and insert operations.
+* `table_name` specifies the name of the table on which the index is to be created.
+
+* `sort_column_key` specifies the column as the sort key. You can specify multiple columns and separate them with commas (`,`).
+
+* `index_option` specifies the index option. You can specify multiple index options and separate them with spaces.
+
+* `partition_option` specifies the index partitioning option.
+
+For more information about the syntax, see [CREATE INDEX](../900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1600.create-index-of-oracle-mode.md).
+
+When an index is being created on a table, this table still supports read and write operations.
+
+Assume that the table `t1` exists in the database. The following sample statement creates an index named `index1` on the `t1` table, sorts the index by the `c1` column in ascending order, and sorts NULL values after non-NULL values:
-Assuming that a table named `t1` already exists in the database. Now you want to create an index named `index1` on the `t1` table and specify to sort based on the `c1` column in ascending order, with null values appearing after non-null values in the sorting result. Here is an execution example:
-
```shell
obclient> CREATE INDEX index1 ON t1 (c1 ASC NULLS LAST);
```
-## Query an index
+## Query indexes
-You can use system views to query the following indexes:
+You can query indexes in the following system views:
-* USER_INDEXES: Query information about indexes owned by the user for all tables. For more information, see [USER_INDEXES](../../../../700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/15400.user_indexes-of-oracle-mode.md).
+* USER_INDEXES: displays indexes on all tables owned by the current user. For more information, see [USER_INDEXES](../../../../700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/15400.user_indexes-of-oracle-mode.md).
-* ALL_INDEXES: Query all indexes for a table. For more information, see [ALL_INDEXES](../../../../700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/1100.all_indexes-of-oracle-mode.md).
+* ALL_INDEXES: displays all indexes on a table. For more information, see [ALL_INDEXES](../../../../700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/1100.all_indexes-of-oracle-mode.md).
-* DBA_INDEXES: Query information about indexes for all tables in the database. For more information, see [DBA_INDEXES](../../../../700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/6800.dba_indexes-of-oracle-mode.md).
+* DBA_INDEXES: displays indexes on all tables in the database. For more information, see [DBA_INDEXES](../../../../700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/6800.dba_indexes-of-oracle-mode.md).
-* ALL_IND_COLUMNS: Query information about index columns for all tables that the user can access. For more information, see [ALL_IND_COLUMNS](../../../../700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/1200.all_ind_columns-of-oracle-mode.md).
+* ALL_IND_COLUMNS: displays the columns of indexes on all tables accessible to the current user. For more information, see [ALL_IND_COLUMNS](../../../../700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/1200.all_ind_columns-of-oracle-mode.md).
-* DBA_IND_COLUMNS: Query information about index columns for all tables in the database. For more information, see [DBA_IND_COLUMNS](../../../../700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/6900.dba_ind_columns-of-oracle-mode.md).
+* DBA_IND_COLUMNS: displays the columns of indexes on all tables in the database. For more information, see [DBA_IND_COLUMNS](../../../../700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/6900.dba_ind_columns-of-oracle-mode.md).
-* USER_IND_COLUMNS: Query detailed information about the indexes for a table. For more information, see [USER_IND_COLUMNS](../../../../700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/15500.user_ind_columns-of-oracle-mode.md).
+* USER_IND_COLUMNS: displays the details of indexes on a table. For more information, see [USER_IND_COLUMNS](../../../../700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/15500.user_ind_columns-of-oracle-mode.md).
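+
+For example, a minimal query sketch against `USER_INDEXES` (the column names here follow the Oracle-compatible dictionary view definition and are an assumption for illustration) is as follows:
+
+```shell
+obclient> SELECT INDEX_NAME, TABLE_NAME, UNIQUENESS FROM USER_INDEXES;
+```
+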
-You can view different views based on different situations. Here is an example of how to view index details for table `t1`:
+You can query different views as needed. The following sample statement queries the details of indexes on the `t1` table:
```shell
obclient> SELECT * FROM USER_IND_COLUMNS WHERE table_name='T1';
@@ -73,19 +75,19 @@ The output is as follows:
In the output:
-* `INDEX_NAME`: The name of the index.
+* `INDEX_NAME`: the name of the index.
-* `TABLE_NAME`: The name of the table where the index is located.
+* `TABLE_NAME`: the name of the table containing the index.
-* `COLUMN_NAME`: The name of the column where the index is located.
+* `COLUMN_NAME`: the column that is indexed.
-* `COLUMN_POSITION`: The position of the column in the index.
+* `COLUMN_POSITION`: the position of the indexed column in the index.
-* `COLUMN_LENGTH`: The length of the index column.
+* `COLUMN_LENGTH`: the length of the indexed column.
-* `CHAR_LENGTH`: The character length of the index column.
+* `CHAR_LENGTH`: the character length of the indexed column.
-* `DESCEND`: The sorting order of the index column.
+* `DESCEND`: the sorting method of the indexed column.
## Drop an index
@@ -97,10 +99,9 @@ DROP INDEX [schema.]index_name;
For more information about the syntax, see [DROP INDEX](../900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/3200.drop-index-of-oracle-mode.md).
-When you drop an index from a table, you can still use the table for read and write operations.
-
-Assuming there is a table named `t1` in the database and there exists an index `index1` on table `t1`, you can drop the index as follows:
+When you drop an index from a table, this table still supports read and write operations.
+
+Assume that the table `t1` exists in the database and has an index named `index1`. The following sample statement drops the index:
```shell
obclient> DROP INDEX index1;
@@ -132,4 +133,4 @@ obclient> ALTER INDEX index1 RENAME TO index2;
* [About indexes](../../../../100.oceanbase-database-concepts/400.database-objects/100.database-objects-of-oracle-mode/300.index-of-oracle-mode/100.the-index-overview-of-oracle-mode.md)
-* [Create an index](../../../../300.database-object-management/200.manage-object-of-oracle-mode/400.manage-indexes-of-oracle-mode/200.create-an-index-of-oracle-mode.md)
+* [Create and manage indexes](../../../../300.database-object-management/200.manage-object-of-oracle-mode/400.manage-indexes-of-oracle-mode/200.create-an-index-of-oracle-mode.md)
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/1000.ddl-function-of-oracle-mode/300.primary-key-operations-of-oracle-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/1000.ddl-function-of-oracle-mode/300.primary-key-operations-of-oracle-mode.md
index c1ade33cbf..f95581764b 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/1000.ddl-function-of-oracle-mode/300.primary-key-operations-of-oracle-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/1000.ddl-function-of-oracle-mode/300.primary-key-operations-of-oracle-mode.md
@@ -7,7 +7,7 @@
# Primary key operations
-This topic introduces simple primary key operations in the Oracle mode of OceanBase Database, including adding, modifying, and deleting primary keys.
+This topic describes simple primary key operations in Oracle mode of OceanBase Database, including adding, modifying, and dropping primary keys.
## Add a primary key
@@ -67,4 +67,4 @@ obclient> ALTER TABLE tbl1 DROP PRIMARY KEY;
## References
-[Query indexes](../../../../300.database-object-management/200.manage-object-of-oracle-mode/400.manage-indexes-of-oracle-mode/300.view-indexes-of-oracle-mode.md)
+[Query indexes](../../../../300.database-object-management/200.manage-object-of-oracle-mode/400.manage-indexes-of-oracle-mode/300.view-indexes-of-oracle-mode.md)
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/1000.ddl-function-of-oracle-mode/400.column-operations-of-oracle-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/1000.ddl-function-of-oracle-mode/400.column-operations-of-oracle-mode.md
index 843a9581ed..e2549d7b09 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/1000.ddl-function-of-oracle-mode/400.column-operations-of-oracle-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/1000.ddl-function-of-oracle-mode/400.column-operations-of-oracle-mode.md
@@ -7,7 +7,7 @@
# Column operations
-Column operations in the OceanBase Database's Oracle mode include appending columns at the end, deleting columns, renaming columns, modifying column types, managing column default values, managing constraints, and setting auto-increment column values.
+This topic describes the column operations in the Oracle mode of OceanBase Database, including adding a column to the end of a table, dropping a column, renaming a column, changing the data type of a column, managing the default value of a column, managing constraints, and changing the value of an auto-increment column.
## Add a column to the end of a table
@@ -27,7 +27,7 @@ where
For more information, see [ALTER TABLE](../900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1000.alter-table-of-oracle-mode.md).
-Assuming there is a table named `tal1` in the database with the following structure:
+Assuming there is a table named `tbl1` in the database with the following structure:
```shell
+-------+--------------+------+------+---------+-------+
@@ -39,13 +39,13 @@ Assuming there is a table named `tal1` in the database with the following struct
+-------+--------------+------+------+---------+-------+
```
-Here is an example of how to add a `C4` column at the end of the `tal1` table:
+Here is an example of how to add a `C4` column at the end of the `tbl1` table:
```shell
obclient> ALTER TABLE tbl1 ADD C4 INT;
```
-Run the `DESCRIBE tal1; `command to view the table structure of `tal1`. The following output indicates that the new `C4` column has been added to the `tal1` table:
+Execute `DESCRIBE tbl1;` again to view the table structure of `tbl1`. The result is as follows, showing that the `C4` column has been added to table `tbl1`:
```shell
+-------+--------------+------+------+---------+-------+
@@ -66,10 +66,9 @@ The syntax for dropping a column is as follows:
ALTER TABLE table_name DROP COLUMN column_name
```
-Here, `table_name` specifies the name of the table from which the column is to be deleted, and `column_name` specifies the name of the column to be deleted.
+Here, `table_name` specifies the name of the table to which the column belongs, and `column_name` specifies the name of the column to be dropped.
-
-Assuming there is a table named `tal1` in the database with the following structure:
+Assuming there is a table named `tbl1` in the database with the following structure:
```shell
+-------+--------------+------+------+---------+-------+
@@ -81,13 +80,13 @@ Assuming there is a table named `tal1` in the database with the following struct
+-------+--------------+------+------+---------+-------+
```
-The following example shows how to delete the `C3` column from the `tal1` table:
+The following example shows how to drop the `C3` column from the tbl1 table:
```sql
obclient> ALTER TABLE tbl1 DROP COLUMN C3;
```
-Run the `DESCRIBE tbl1;` command to view the table structure. The following output indicates that the `C3` column has been removed from the `tal1` table:
+Execute `DESCRIBE tbl1;` again to view the table structure of tbl1. The result is as follows, showing that the tbl1 table no longer has the `C3` column:
```shell
+-------+--------------+------+------+---------+-------+
@@ -114,7 +113,7 @@ where
* `new_col_name` specifies the new name for the column after renaming.
-Assuming there is a table named `tal1` in the database with the following structure:
+Assuming there is a table named `tbl1` in the database with the following structure:
```shell
+-------+--------------+------+------+---------+-------+
@@ -126,13 +125,13 @@ Assuming there is a table named `tal1` in the database with the following struct
+-------+--------------+------+------+---------+-------+
```
-The following example shows how to rename the `C3` column to `C4` in the `tal1` table:
+The following example shows how to rename the `C3` column to `C4` in the `tbl1` table:
```shell
obclient> ALTER TABLE tbl1 RENAME COLUMN C3 TO C4;
```
-Run the `DESCRIBE tbl1;` command to view the table structure. The following output indicates that the `C3` column has been renamed to `C4` in the `tal1` table:
+Execute `DESCRIBE tbl1;` again to view the table structure of tbl1. The result is as follows, showing that the `C3` column in the tbl1 table is renamed as `C4`:
```shell
+-------+--------------+------+------+---------+-------+
@@ -144,7 +143,7 @@ Run the `DESCRIBE tbl1;` command to view the table structure. The following outp
+-------+--------------+------+------+---------+-------+
```
-## Change the type of a column
+## Change the data type of a column
OceanBase Database supports the following data type conversions:
@@ -154,9 +153,9 @@ OceanBase Database supports the following data type conversions:
* Precision change for character data types `CHAR` (only precision increase supported), `VARCHAR2`, `NVARCHAR2`, and `NCHAR`.
-For more information about the rules for column type change in the Oracle mode of OceanBase Database, see [Column type change rules](900.column-type-change-rule-of-oracle-mode.md).
+For more information about the rules for changing the data types of columns in Oracle mode of OceanBase Database, see [Column type change rules](900.column-type-change-rule-of-oracle-mode.md).
-The syntax for changing the type of a column is as follows:
+The syntax for changing the data type of a column is as follows:
```sql
ALTER TABLE table_name MODIFY column_name data_type;
@@ -170,7 +169,7 @@ where
* `data_type` specifies the new data type after modification.
-### Examples of column type changes
+### Examples of column data type changes
#### Conversions between character data types
@@ -180,37 +179,37 @@ The following example shows how to create a table named `test01`:
obclient> CREATE TABLE test01 (C1 INT PRIMARY KEY, C2 CHAR(10), C3 VARCHAR2(32));
```
-Using the `test01` table as an example, the following examples demonstrate how to modify the data type and length of character columns:
+The following examples show how to change the data type and length limit of a column of a character data type.
-* Modify the length of the `C2` column in the `test01` table to 20 characters:
-
- ```shell
- obclient> ALTER TABLE test01 MODIFY C2 CHAR(20);
- ```
+* Change the length limit of the `C2` column in the `test01` table to 20 characters.
-* Modify the data type of the `C2` column in the `test01` table to VARCHAR and specify a maximum length of 20 characters:
+ ```shell
+ obclient> ALTER TABLE test01 MODIFY C2 CHAR(20);
+ ```
- ```shell
- obclient> ALTER TABLE test01 MODIFY C2 VARCHAR(20);
- ```
+* Change the data type of the `C2` column in the `test01` table to VARCHAR, and set the length limit to 20 characters.
-* Modify the maximum length of the `C3` column in the `test01` table to 64 characters:
+ ```shell
+ obclient> ALTER TABLE test01 MODIFY C2 VARCHAR(20);
+ ```
- ```shell
- obclient> ALTER TABLE test01 MODIFY C3 VARCHAR(64);
- ```
+* Change the length limit of the `C3` column in the `test01` table to 64 characters.
-* Modify the maximum length of the `C3` column in the `test01` table to 16 characters:
-
- ```shell
- obclient> ALTER TABLE test01 MODIFY C3 VARCHAR(16);
- ```
+ ```shell
+ obclient> ALTER TABLE test01 MODIFY C3 VARCHAR(64);
+ ```
-* Modify the data type of the `C3` column in the `test01` table to CHAR and specify a length of 256 characters:
+* Change the length limit of the `C3` column in the `test01` table to 16 characters.
- ```shell
- obclient> ALTER TABLE test01 MODIFY C3 CHAR(256);
- ```
+ ```shell
+ obclient> ALTER TABLE test01 MODIFY C3 VARCHAR(16);
+ ```
+
+* Change the data type of the `C3` column in the `test01` table to CHAR, and set the length limit to 256 characters.
+
+ ```shell
+ obclient> ALTER TABLE test01 MODIFY C3 CHAR(256);
+ ```
#### Precision change for a numeric data type
@@ -220,16 +219,15 @@ The following example shows how to create a table named `test02`:
obclient> CREATE TABLE test02(C1 NUMBER(10,2));
```
-Here we use the `test02` table as an example to demonstrate how to modify the precision of a numeric data type column with precision:
+The following example shows how to change the precision for a column of a numeric data type that has a precision.
```shell
obclient> ALTER TABLE test02 MODIFY C1 NUMBER(11,3);
```
-## Manage default column values
-
-If not set, the default value for a column is `NULL`. The syntax for managing default values is as follows:
+## Manage the default value of a column
+If a column is not configured with a default value, its default value is `NULL`. The syntax for managing the default value of a column is as follows:
```sql
ALTER TABLE table_name MODIFY column_name data_type DEFAULT const_value;
@@ -245,7 +243,7 @@ where
* `const_value` specifies the new default value for the modified column.
-Assuming there is a table named `tal1` in the database with the following structure:
+Assuming there is a table named `tbl1` in the database with the following structure:
```shell
+-------+--------------+------+------+---------+-------+
@@ -257,18 +255,19 @@ Assuming there is a table named `tal1` in the database with the following struct
+-------+--------------+------+------+---------+-------+
```
-* You can execute the following command to set the default value for a column. In this case, modify the default value for the `C1` column to `111`:
-
- ```shell
- obclient> ALTER TABLE tbl1 MODIFY C1 NUMBER(10) DEFAULT 111;
- ```
+* The following example changes the default value of the `C1` column to `111`:
-* You can execute the following command to remove the default value for a column. In this case, remove the default value for the `C3` column:
-
- ```shell
- obclient> ALTER TABLE tbl1 MODIFY C3 NUMBER(3) DEFAULT NULL;
- ```
-Run the `DESCRIBE tbl1;` command again to view the table structure. The following output indicates that the default value for the `C1` column in the `tal1` table is `111` and the default value for the `C3` column is `NULL`.
+ ```shell
+ obclient> ALTER TABLE tbl1 MODIFY C1 NUMBER(10) DEFAULT 111;
+ ```
+
+* The following example drops the default value of the `C3` column:
+
+ ```shell
+ obclient> ALTER TABLE tbl1 MODIFY C3 NUMBER(3) DEFAULT NULL;
+ ```
+
+Execute `DESCRIBE tbl1;` again to view the table structure of `tbl1`. The result is as follows, showing that the default value of the `C1` column is `111`, and the default value of the `C3` column is `NULL`:
```shell
+-------+--------------+------+------+---------+-------+
@@ -282,15 +281,15 @@ Run the `DESCRIBE tbl1;` command again to view the table structure. The followin
## Manage constraints
-In the Oracle mode of OceanBase Database, you can add column constraints to a table, such as modifying existing columns to auto-increment, specifying if a column can be NULL, and specifying the uniqueness of a column. This topic introduces how to perform these operations.
+In the Oracle mode of OceanBase Database, you can add column constraints to tables. For example, you can set a column as the primary key column, set or cancel a NOT NULL constraint for a column, and set or cancel a UNIQUE constraint for a column.
The syntax for managing constraints is as follows:
```sql
-ALTER TABLE table_name
- MODIFY column_name data_type
- [NULL | NOT NULL]
- [PRIMARY KEY]
+ALTER TABLE table_name
+ MODIFY column_name data_type
+ [NULL | NOT NULL]
+ [PRIMARY KEY]
[UNIQUE];
```
@@ -308,7 +307,7 @@ where
* `UNIQUE` specifies setting the uniqueness constraint for the selected column.
-Assuming there is a table named `tal1` in the database with the following structure:
+Assuming there is a table named `tbl1` in the database with the following structure:
```shell
+-------+--------------+------+-----+---------+-------+
@@ -320,25 +319,25 @@ Assuming there is a table named `tal1` in the database with the following struct
+-------+--------------+------+-----+---------+-------+
```
-1. Set the `C1` column as the primary key:
+1. Set the `C1` column as the primary key column.
```shell
- obclient> ALTER TABLE tal1 MODIFY C1 NUMBER(10) PRIMARY KEY;
+ obclient> ALTER TABLE tbl1 MODIFY C1 NUMBER(10) PRIMARY KEY;
```
-2. Set the `C2` column as not nullable:
+2. Set the NOT NULL constraint for the `C2` column.
```shell
- obclient> ALTER TABLE tal1 MODIFY C2 VARCHAR(50) NOT NULL;
+ obclient> ALTER TABLE tbl1 MODIFY C2 VARCHAR(50) NOT NULL;
```
-3. Set the `C3` column to be unique:
+3. Set the UNIQUE constraint for the `C3` column.
```shell
- obclient> ALTER TABLE tal1 MODIFY C3 NUMBER(3) UNIQUE;
+ obclient> ALTER TABLE tbl1 MODIFY C3 NUMBER(3) UNIQUE;
```
-4. Execute the` DESCRIBE tbl1;` command again to view the table structure of the `tal1` table.
+4. Execute `DESCRIBE tbl1;` again to view the table structure of `tbl1`.
```shell
+-------+--------------+------+-----+---------+-------+
@@ -350,9 +349,9 @@ Assuming there is a table named `tal1` in the database with the following struct
+-------+--------------+------+-----+---------+-------+
```
-## Set auto-increment column values
+## Change the value of an auto-increment column
-To set auto-increment column values, you must first use `CREATE SEQUENCE` to create an auto-increment field, and then use the `nextval` function to retrieve the next value from the sequence.
+Before you can change the value of an auto-increment column, you must use `CREATE SEQUENCE` to create a sequence, and then call the `nextval` function to retrieve the next value from the sequence.
The following example shows how to manage auto-increment column values:
@@ -362,16 +361,16 @@ The following example shows how to manage auto-increment column values:
obclient> CREATE SEQUENCE seq1 MINVALUE 1 START WITH 1 INCREMENT BY 1 CACHE 10;
```
-2. Insert data into the `tal1` table:
+2. Insert data into the `tbl1` table.
```shell
- obclient> INSERT INTO tal1(C1, C2, C3) VALUES (seq1.nextval, 'zhangsan', 20), (seq1.nextval, 'lisi', 21), (seq1.nextval, 'wangwu', 22);
+ obclient> INSERT INTO tbl1(C1, C2, C3) VALUES (seq1.nextval, 'zhangsan', 20), (seq1.nextval, 'lisi', 21), (seq1.nextval, 'wangwu', 22);
```
-
-3. Execute the following command to view the data in the `tal1` table:
+
+3. View the data in the `tbl1` table.
```shell
- obclient> SELECT * FROM tal1;
+ obclient> SELECT * FROM tbl1;
```
The output below shows that the values in the `C1` column increment from 1.
@@ -402,16 +401,16 @@ The following example shows how to manage auto-increment column values:
+---------+
```
-5. Insert data into the `tal1` table again:
+5. Insert data into the `tbl1` table again:
```shell
- obclient> INSERT INTO tal1(C1, C2, C3) VALUES (seq1.nextval, 'oceanbase', 12);
+ obclient> INSERT INTO tbl1(C1, C2, C3) VALUES (seq1.nextval, 'oceanbase', 12);
```
-6. Execute the following command to view the data in the `tal1` table again:
+6. Query the data in the `tbl1` table.
```shell
- obclient> SELECT * FROM tal1;
+ obclient> SELECT * FROM tbl1;
```
The output below shows that there is no row with the value `4` in the `C1` column. Instead, the value `5` is inserted directly.
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/100.alter-index-of-oracle-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/100.alter-index-of-oracle-mode.md
index f658682484..0c2db2275c 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/100.alter-index-of-oracle-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/100.alter-index-of-oracle-mode.md
@@ -9,50 +9,48 @@
## Purpose
-You can use this statement to modify the name of an existing index, its degree of parallelism (DOP), or the tablespace in which the index is stored.
+You can use the `ALTER INDEX` statement to rename an existing index, modify the degree of parallelism (DOP) of queries on the index, or modify the index storage tablespace.
## Required privileges
-To execute the `ALTER INDEX` statement, you need to have the system privilege `ALTER ANY INDEX`. For more information about privileges in OceanBase Database, see [Privilege types in Oracle mode](../../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/300.permission-of-oracle-mode/000.permission-classification-of-oracle-mode.md).
+To execute the `ALTER INDEX` statement, you must have the `ALTER ANY INDEX` system privilege. For more information about privileges in OceanBase Database, see [Privilege types in Oracle mode](../../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/300.permission-of-oracle-mode/000.permission-classification-of-oracle-mode.md).
## Syntax
```sql
-ALTER INDEX [ schema.]index_name
- { RENAME TO new_name
- | parallel_option
+ALTER INDEX [ schema.]index_name
+ { RENAME TO new_name
+ | parallel_option
| TABLESPACE tablespace_name
- };
+ };
parallel_option:
- PARALLEL [COMP_EQ] integer
+ PARALLEL [COMP_EQ] integer
| NOPARALLEL
```
## Parameters
| Parameter | Description |
-|-----------------|-----------------|
+|-------|--------|
| schema. | The schema where the index is located. If `schema.` is omitted, the index is in your own schema by default. |
| index_name | The name of the index to be modified. |
| new_name | The new name of the index. |
-| parallel_option | The DOP of queries on the index.<br>- `NOPARALLEL`: indicates serial execution with a DOP of `1`, which is the default value.<br>- `PARALLEL [COMP_EQ] integer`: indicates parallel execution with a DOP specified by the integer parameter. The `PARALLEL` keyword indicates enabling parallel processing for the index. `COMP_EQ` is an optional keyword used to specify constraints on the DOP. `integer` is an integer greater than or equal to 1, representing the DOP level. |
-| tablespace_name | Specifies the tablespace where the index should be stored. |
+| parallel_option | The DOP of queries on the index.<br>- `NOPARALLEL`: enables serial execution, with a DOP of 1. This is the default value.<br>- `PARALLEL [COMP_EQ] integer`: the DOP, which specifies the number of parallel threads used in parallel operations. The `PARALLEL` keyword specifies parallel processing of the index. `COMP_EQ` is an optional keyword that specifies the DOP constraint. `integer` specifies the DOP and is an integer greater than or equal to 1. |
+| tablespace_name | The index storage tablespace. |
## Examples
* Assume that there is an index named `index1` in the database. The following example shows how to rename it to `index2`:
-
- ```shell
- obclient> ALTER INDEX index1 RENAME TO index2;
- ```
-* Assume that there is an index named `index3` in the database with a DOP of 3 for queries. The following example shows how to modify the DOP to 1:
-
- ```shell
- obclient> ALTER INDEX index3 NOPARALLEL;
- ```
+ ```shell
+ obclient> ALTER INDEX index1 RENAME TO index2;
+ ```
+* Assume that there is an index named `index3` in the database with a DOP of 3 for queries. The following example shows how to modify the DOP to 1:
+ ```shell
+ obclient> ALTER INDEX index3 NOPARALLEL;
+ ```
## References
@@ -60,4 +58,4 @@ parallel_option:
* [Query indexes](../../../../../300.database-object-management/200.manage-object-of-oracle-mode/400.manage-indexes-of-oracle-mode/300.view-indexes-of-oracle-mode.md)
-* [Drop an index](../../../../../300.database-object-management/200.manage-object-of-oracle-mode/400.manage-indexes-of-oracle-mode/400.delete-an-index-of-oracle-mode.md)
+* [Drop an index](../../../../../300.database-object-management/200.manage-object-of-oracle-mode/400.manage-indexes-of-oracle-mode/400.delete-an-index-of-oracle-mode.md)
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1600.create-index-of-oracle-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1600.create-index-of-oracle-mode.md
index f3131ba73b..2e972d2dbe 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1600.create-index-of-oracle-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1600.create-index-of-oracle-mode.md
@@ -20,7 +20,7 @@ In the current version of OceanBase Database, indexes are classified into unique
## Required privileges
-You must have the INDEX privilege on the data objects to create indexes by using the `CREATE INDEX` statement. For more information about privileges in OceanBase Database, see [Privilege types in Oracle mode](../../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/300.permission-of-oracle-mode/000.permission-classification-of-oracle-mode.md).
+To execute the `CREATE INDEX` statement, you must have the INDEX privilege on the corresponding objects. For more information about privileges in OceanBase Database, see [Privilege types in Oracle mode](../../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/300.permission-of-oracle-mode/000.permission-classification-of-oracle-mode.md).
## Syntax
@@ -69,17 +69,17 @@ index_column_group_option:
| \[UNIQUE] | Optional. Specifies to create a unique index. |
| index_name | The name of the index to be created. |
| USING BTREE | Optional. Specifies to create a B-tree index. <br>**Note**: OceanBase Database supports only `USING BTREE`. |
-| table_name | The table on which the index is created. You can directly specify the table name or specify the table name and the name of the database to which the table belongs in the `schema_name.table_name` format . |
-| sort_column_key | The key of a sort column. You can specify multiple sort columns for an index and separate them by commas (`,`). For more information, see [sort_column_key](#sort_column_key). |
-| index_option | The index options. You can specify multiple index options for an index and separate them by spaces. For more information, see [index_option](#index_option). |
-| partition_option | The index partitioning option. You can specify HASH partitioning, RANGE partitioning, LIST partitioning, and left-side table partitioning. |
-| index_column_group_option | The columnstore options for the index. For more information, see [index_column_group_option](#index_column_group_option). |
+| table_name | The table on which the index is created. You can directly specify the table name or specify the table name and the name of the database to which the table belongs in the `schema_name.table_name` format. |
+| sort_column_key | The column as the sort key. You can specify multiple columns and separate them with commas (`,`). For more information, see [sort_column_key](#sort_column_key). |
+| index_option | The index option. You can specify multiple index options and separate them with spaces. For more information, see [index_option](#index_option) below. |
+| partition_option | The index partitioning option. You can specify HASH partitioning, RANGE partitioning, LIST partitioning, or external table partitioning. |
+| index_column_group_option | The columnstore option of the index. For more information, see [index_column_group_option](#index_column_group_option) below. |
### sort_column_key
* `index_expr`: the sort column or expression, such as `c1=c1`. Boolean expressions are not allowed. Currently, you cannot create function-based indexes on generated columns in OceanBase Database. For more information about the expressions supported by function-based indexes, see [System functions supported for function-based indexes](../../../../../300.database-object-management/200.manage-object-of-oracle-mode/400.manage-indexes-of-oracle-mode/500.function-index-list-of-supported-functions-of-oracle-mode.md).
-* `ASC`: Optional. Specifies the ascending order. The descending order is not supported.
+* `ASC`: optional. The ascending order. Currently, the descending order is not supported.
* `opt_null_pos`: the position of NULL values after sorting. Valid values:
@@ -89,7 +89,7 @@ index_column_group_option:
* `NULLS FIRST`: specifies to sort NULL values before non-NULL values.
-* `ID id`: Optional. The ID of the sort key.
+* `ID id`: optional. The ID of the sort key.
The following sample statement creates an index named `index3` on the `t3` table, sorts the index by the `c1` column in ascending order, and specifies to sort NULL values after non-NULL values.
@@ -103,13 +103,13 @@ CREATE INDEX index3 ON t3 (c1 ASC NULLS LAST);
* `LOCAL`: specifies to create a local index.
-* `BLOCK_SIZE [=] integer`: the size of an index block. In other words, the number of bytes in each index block.
+* `BLOCK_SIZE [=] integer`: the size of an index block, that is, the number of bytes in each index block.
* `COMMENT STRING_VALUE`: adds a comment to the index.
-* `STORING (column_name_list)`: the columns to be sorted in the index.
+* `STORING (column_name_list)`: the columns to be stored in the index.
-* `WITH ROWID`: specifies to create an index that contains the row ID.
+* `WITH ROWID`: creates an index that contains the row ID.
* `WITH PARSER STRING_VALUE`: the parser required for the index.
@@ -121,32 +121,33 @@ CREATE INDEX index3 ON t3 (c1 ASC NULLS LAST);
* `INDEX_TABLE_ID [=] index_table_id`: the ID of the index table.
-* `MAX_USED_PART_ID [=] used_part_id`: the ID of the maximum used partition of the index.
+* `MAX_USED_PART_ID [=] used_part_id`: the maximum partition ID allowed for the index.
* `physical_attributes_option`: the physical attributes of the index.
-* `parallel_option`: the DOP of the index.
+* `parallel_option`: the degree of parallelism (DOP) of the index.
- * `PARALLEL [=] integer`: the degree of parallelism (DOP) of the execution, which is an integer.
+ * `PARALLEL [=] integer`: the DOP of the execution, which is an integer.
* `NOPARALLEL`: disables parallel execution.
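+
+For example, the following statement is a hedged sketch of combining index options (the `idx_opt` index name and the `c2` column are illustrative and assume the `t3` table contains such a column); it creates an index that additionally stores the `c2` column and builds the index with a DOP of 4:
+
+```sql
+CREATE INDEX idx_opt ON t3 (c1) STORING (c2) PARALLEL 4;
+```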
### index_column_group_option
* `WITH COLUMN GROUP(all columns, each column)`: specifies to create a rowstore-columnstore redundant index.
+* `WITH COLUMN GROUP(all columns)`: specifies to create a rowstore index.
* `WITH COLUMN GROUP(each column)`: specifies to create a columnstore index.
-## Example
+## Examples
-Create a columnstore index for a table.
+Create a columnstore index on a table.
-1. Use the following SQL statement to create a table named `test_tbl1`.
+1. Create a table named `test_tbl1`.
```sql
CREATE TABLE test_tbl1 (col1 NUMBER, col2 VARCHAR2(50));
```
-2. Create a columnstore index named `idx1_test_tbl1` on the `test_tbl1` table and reference the `col1` column.
+2. On the `test_tbl1` table, create a columnstore index named `idx1_test_tbl1`, which references the `col1` column.
```sql
CREATE INDEX idx1_test_tbl1 ON test_tbl1 (col1) WITH COLUMN GROUP(each column);
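+   -- A hedged variant (not part of the original example): create a
+   -- rowstore-columnstore redundant index instead of a pure columnstore index.
+   -- CREATE INDEX idx2_test_tbl1 ON test_tbl1 (col1) WITH COLUMN GROUP(all columns, each column);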
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/300.dcl-of-oracle-mode/1700.grant-of-oracle-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/300.dcl-of-oracle-mode/1700.grant-of-oracle-mode.md
index bb4c0d1cde..848f1ad4bf 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/300.dcl-of-oracle-mode/1700.grant-of-oracle-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/300.dcl-of-oracle-mode/1700.grant-of-oracle-mode.md
@@ -23,24 +23,24 @@ To execute the `GRANT` statement, you need to have the privileges being granted,
## Syntax
```sql
-/* Grant object privileges. */
-GRANT {obj_all_col_priv [, obj_all_col_priv...]}
- ON obj_clause
- TO {grant_user [, grant_user...]}
+/* Grant object privileges */
+GRANT {obj_all_col_priv [, obj_all_col_priv...]}
+ ON obj_clause
+ TO {grant_user [, grant_user...]}
[WITH GRANT OPTION]
-/* Grant system privileges or roles. */
-GRANT obj_all_col_priv [, obj_all_col_priv...]
- TO grantee_clause
+/* Grant system privileges or roles */
+GRANT obj_all_col_priv [, obj_all_col_priv...]
+ TO grantee_clause
[WITH ADMIN OPTION]
grantee_clause:
- grant_user [, grant_user...]
+ grant_user [, grant_user...]
| grant_user IDENTIFIED BY password
obj_all_col_priv:
- role
- | sys_and_obj_priv [(column_list)]
+ role
+ | sys_and_obj_priv [(column_list)]
| ALL [PRIVILEGES] [(column_list)]
```
@@ -48,40 +48,40 @@ obj_all_col_priv:
| Parameter | Description |
|-------------------|-------------------------------------------------------------------------------------|
-| obj_all_col_priv | Specifies the privilege to be granted. You can directly or indirectly grant privileges to users through granting permissions or roles. When granting multiple permissions to a user, separate the permission types with a comma (`,`). For specific permission types and their description, please refer to [Privilege types in Oracle mode](../../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/300.permission-of-oracle-mode/000.permission-classification-of-oracle-mode.md). |
-| obj_clause | Specifies the object to which the privilege is granted. There are several ways to specify the object to be granted: <br>- Specify all objects, such as all databases and all tables (`*.*`). <br>- Specify specific objects, such as a specific database (`db_name.*`), a specific table (`table_name`), or a specific table in a specific database (`db_name.table_name`). <br>- Specify a directory object (`[DIRECTORY] relation_name`). |
-| grant_user | Specifies the user or role to whom the privilege is to be granted. It can take the following values: <br>- `user [USER_VARIABLE]`: Grants the privilege to a specific user. <br>- `CONNECT`: Grants the privilege to the CONNECT role. <br>- `RESOURCE`: Grants the privilege to the RESOURCE role. <br>- `PUBLIC`: Grants the privilege to the public role. |
-| IDENTIFIED BY password | Specifies a password for the user to be granted privileges. The password provided here is in plaintext. After being stored in the `dba_users` table, it will be encrypted on the server-side. If the password contains special characters such as ~!@#%^&*_-+=\|(){}[]:;',.?/, it should be enclosed in double quotation marks (""). |
+| obj_all_col_priv | The privilege to be granted. You can grant privileges to users or roles. To grant multiple privileges to a user, separate the privileges with commas (`,`). For more information about the privilege types, see [Privilege types in Oracle mode](../../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/300.permission-of-oracle-mode/000.permission-classification-of-oracle-mode.md). |
+| obj_clause | The object to be authorized. You can specify the object to be authorized in the following ways: <br>- Specify all objects, that is, all databases and all tables (`*.*`). <br>- Specify a specific object, that is, a specific database (`db_name.*`), a specific table (`table_name`), or a specific table in a specific database (`db_name.table_name`). <br>- Specify a directory object (`[DIRECTORY] relation_name`). |
+| grant_user | The user or role to which the privilege is to be granted. Valid values: <br>- `user [USER_VARIABLE]`: a specific user. <br>- `CONNECT`: the CONNECT role. <br>- `RESOURCE`: the RESOURCE role. <br>- `PUBLIC`: a public role. |
+| IDENTIFIED BY password | The password for the user to be authorized. The password is in plaintext and is saved in ciphertext on the server after it is saved to the `dba_users` table. Enclose special characters in the password in double quotation marks (""). Special characters include the following ones: ~!@#%^&*_-+=`\|(){}[]:;',.?/. |
| WITH GRANT OPTION | Specifies whether to enable privilege delegation. When privilege delegation is enabled, grant revocation extends to dependent users. |
| WITH ADMIN OPTION | Specifies whether to enable admin privilege delegation. When admin privilege delegation is enabled, grant revocation does not extend to dependent users. |
-| role | Specifies the role to be granted, with the following possible values: <br>- `role_name`: Represents the name of a custom role. <br>- `DBA`: Represents the database administrator role, which has complete database management permissions. Users granted the DBA role can perform any database operation. <br>- `RESOURCE`: Represents the RESOURCE role. <br>- `CONNECT`: Represents the CONNECT role. <br>- `PUBLIC`: Represents the public role. |
+| role | The role to be granted. Valid values: <br>- `role_name`: the name of a custom role. <br>- `DBA`: the database administrator role, with full database administration privileges. A user with the DBA role can perform any database operation. <br>- `RESOURCE`: the RESOURCE role. <br>- `CONNECT`: the CONNECT role. <br>- `PUBLIC`: a public role. |
## Examples
-* Grant the `CREATE VIEW` privilege to user `user1` and allow the user to grant the same privilege to other users.
-
- ```shell
- obclient> GRANT CREATE VIEW TO user1 WITH ADMIN OPTION;
- ```
+* Grant the `CREATE VIEW` privilege to `user1`, and enable admin privilege delegation so that the user can grant the same privilege to other users.
-* Grant the `CONNECT` role to user `user1` and change the password for `user1`.
-
- ```shell
- obclient> GRANT CONNECT TO user1 IDENTIFIED by '********';
- ```
+ ```shell
+ obclient> GRANT CREATE VIEW TO user1 WITH ADMIN OPTION;
+ ```
- Then, check the password of `user1` in the `dba_users` table. You will find that the password has been updated.
+* Grant the `CONNECT` role to `user1`, and change the password for `user1`.
-* Grant the `COMMENT ANY TABLE` privilege to the role `role1`.
-
- ```shell
- GRANT COMMENT ANY TABLE TO role1;
- ```
+ ```shell
+  obclient> GRANT CONNECT TO user1 IDENTIFIED BY '********';
+ ```
+
+ Check the password of `user1` in the `dba_users` table. The password is updated to the new one.
+
+* Grant the `COMMENT ANY TABLE` privilege to `role1`.
+
+ ```shell
+ GRANT COMMENT ANY TABLE TO role1;
+ ```
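+
+* Grant the `SELECT` privilege on the `tbl1` table to `user1`. This is a hedged object-privilege example; `tbl1` is an illustrative table name.
+
+  ```shell
+  obclient> GRANT SELECT ON tbl1 TO user1;
+  ```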
## References
-* For operations to view user privileges, see [Viewing User Permissions](../../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/300.permission-of-oracle-mode/600.view-user-permissions-of-oracle-mode.md).
+* For more information about how to view user privileges, see [View user privileges](../../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/300.permission-of-oracle-mode/600.view-user-permissions-of-oracle-mode.md).
-* For operations to view roles and privileges within roles, see [Viewing Roles](../../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/300.permission-of-oracle-mode/400.manage-roles-of-oracle-mode/600.view-roles-of-oracle-mode.md).
+* For more information about how to view roles and role privileges, see [View roles](../../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/300.permission-of-oracle-mode/400.manage-roles-of-oracle-mode/600.view-roles-of-oracle-mode.md).
-* You can view the formation about a created user from the `dba_users` table. For detailed information on the `dba_users` table, see [DBA_USERS](../../../../../700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/11800.dba_users-of-oracle-mode.md).
\ No newline at end of file
+* You can query the information about the created user in the `dba_users` table. For more information about the `dba_users` table, see [DBA_USERS](../../../../../700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/11800.dba_users-of-oracle-mode.md).
\ No newline at end of file
From 6e8bc394820d54ecf6df4e329be2097a0464c91f Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Tue, 16 Apr 2024 16:03:59 +0800
Subject: [PATCH 26/63] v430-beta-100.sql-syntax-update-1
---
.../200.alter-system/1300.merge.md | 98 ++++++++++++---
.../200.alter-system/2600.PARAMETER.md | 64 ++++++++--
.../300.alter-resource-pool.md | 18 +--
.../1500.alter-system-freeze-of-mysql-mode.md | 33 +++--
.../1600.alter-table-of-mysql-mode.md | 113 ++++++++++++------
...200.create-external-table-of-mysql-mode.md | 6 +-
.../2600.create-table-of-mysql-mode.md | 60 +++++++---
.../5900.load-data-of-mysql-mode.md | 56 +++++++--
.../1000.alter-table-of-oracle-mode.md | 64 ++++++++--
.../1600.create-index-of-oracle-mode.md | 32 ++---
.../2400.create-table-of-oracle-mode.md | 34 ++++--
...lter-system-major-freeze-of-oracle-mode.md | 49 ++++++--
.../100.alter-function-mysql.md | 2 +-
.../1000.drop-trigger-mysql.md | 2 +-
.../200.alter-procedure-mysql.md | 2 +-
.../500.create-function-mysql.md | 5 +-
.../600.create-procedure-mysql.md | 6 +-
.../700.create-trigger-mysql.md | 2 +-
.../800.drop-function-mysql.md | 2 +-
.../900.drop-procedure-mysql.md | 2 +-
20 files changed, 485 insertions(+), 165 deletions(-)
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/200.alter-system/1300.merge.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/200.alter-system/1300.merge.md
index 2b45324db7..6835a91738 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/200.alter-system/1300.merge.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/200.alter-system/1300.merge.md
@@ -5,37 +5,32 @@
| dir-name-en | |
| tenant-type | |
-# MERGE
+# MAJOR and MINOR
## Purpose
-You can use this statement in the `sys` tenant to initiate a compaction in the storage layer.
+You can use this statement in the `sys` tenant to initiate a compaction in the storage layer. You can initiate a major compaction at the tenant or partition level. You can initiate a minor compaction at the tenant, zone, OBServer node, log stream, or partition level.
## Syntax
```sql
-alter_system_merge_stmt:
- ALTER SYSTEM merge_action;
+ALTER SYSTEM merge_action;
merge_action:
MAJOR FREEZE [tenant_list]
- | MAJOR FREEZE TENANT [=] tenant_name TABLET_ID = tablet_id
- | MINOR FREEZE [tenant_list | replica] [server_list]
+ | MAJOR FREEZE tenant_list TABLET_ID = tablet_id
+  | MINOR FREEZE [tenant_list | TABLET_ID = tablet_id] [server_list] [zone_list] [LS [=] ls_id]
| {SUSPEND | RESUME} MERGE [tenant_list]
| CLEAR MERGE ERROR [tenant_list]
tenant_list:
- TENANT [=] { all | all_user | all_meta } | tenant_name_list
-
-tenant_name_list:
- tenant_name [, tenant_name ...]
-
-replica:
- TABLET_ID [=] tablet_id
+ TENANT [=] { all | all_user | all_meta } | tenant_name [, tenant_name ...]
server_list:
SERVER [=] ('ip:port' [, 'ip:port'...])
+zone_list:
+  ZONE [=] ('zone_name' [, 'zone_name' ...])
```
## Parameters
@@ -47,9 +42,11 @@ server_list:
| {SUSPEND \| RESUME} MERGE | Suspends or resumes a major compaction. <br>You can use `TENANT=all` or `TENANT=all_user` to suspend or resume a major compaction for all user tenants. We recommend that you use `all_user`, because `all` will be deprecated in future versions. You can use `TENANT=all_meta` to suspend or resume a major compaction for all meta tenants. You can also use `TENANT=tenant_name [, tenant_name ...]` to suspend or resume a major compaction only for specified tenants. |
| CLEAR MERGE ERROR | Removes major compaction error tags. <br>You can use `TENANT=all` or `TENANT=all_user` to remove major compaction error tags for all user tenants. We recommend that you use `all_user`, because `all` will be deprecated in future versions. You can use `TENANT=all_meta` to remove major compaction error tags for all meta tenants. You can also use `TENANT=tenant_name [, tenant_name ...]` to remove major compaction error tags only for specified tenants. |
| MAJOR FREEZE TENANT [=] tenant_name TABLET_ID = tablet_id | Initiates a partition-level major compaction for a specified tablet ID. <br>**Note**: You can execute this statement only in the `sys` tenant. |
-| tenant_name | The tenant on which a minor compaction is performed. |
-| TABLET_ID | The partition on which a minor compaction is performed. |
-| SERVER | The server on which a minor compaction is performed. |
+| tenant_name | The name of the tenant for which a major or minor compaction is to be initiated. |
+| TABLET_ID | The ID of the tablet for which a major or minor compaction is to be initiated. |
+| SERVER | The server for which a minor compaction is to be initiated. |
+| ZONE | The zone for which a minor compaction is to be initiated. |
+| LS | The log stream for which a minor compaction is to be initiated. |
## Considerations
@@ -64,7 +61,7 @@ One partition corresponds to one tablet. When you initiate a partition-level maj
## Examples
-### Major compaction in the storage layer
+### Major compactions in the storage layer
* Initiate a major compaction in the `sys` tenant for the tenant itself.
@@ -94,7 +91,7 @@ One partition corresponds to one tablet. When you initiate a partition-level maj
Query OK, 0 rows affected
```
-### Minor compaction in the storage layer
+### Minor compactions in the storage layer
* Initiate a minor compaction in the `sys` tenant for the tenant itself.
@@ -139,6 +136,20 @@ One partition corresponds to one tablet. When you initiate a partition-level maj
Query OK, 0 rows affected
```
+* Initiate a minor compaction in the `sys` tenant for a specified log stream in a specified tenant.
+
+ ```sql
+    obclient> ALTER SYSTEM MINOR FREEZE TENANT = t1 LS 1;
+ Query OK, 0 rows affected
+ ```
+
+* Initiate a minor compaction in the `sys` tenant for a specified log stream and tablet in a specified tenant.
+
+ ```sql
+    obclient> ALTER SYSTEM MINOR FREEZE TENANT = t1 LS 1 TABLET_ID = 60000;
+ Query OK, 0 rows affected
+ ```
+
### Suspend or resume a major compaction
* Suspend a major compaction in the `sys` tenant for all user tenants.
@@ -254,8 +265,59 @@ One partition corresponds to one tablet. When you initiate a partition-level maj
Query OK, 0 rows affected
```
+### Initiate a partition-level minor compaction
+
+1. Query the tablet IDs of a table.
+
+ Here is an example:
+
+ ```sql
+ SELECT t1.tenant_id, t2.tenant_name, t1.database_name, t1.table_id, t1.table_name, t1.tablet_id, t1.PARTITION_NAME, t1.SUBPARTITION_NAME
+ FROM oceanbase.CDB_OB_TABLE_LOCATIONS t1, oceanbase.DBA_OB_TENANTS t2
+ WHERE t1.tenant_id=t2.tenant_id
+ AND t1.table_name = 'test_tbl1'
+ AND t2.tenant_name = 'oracle001';
+ ```
+
+ The return result is as follows:
+
+ ```shell
+ +-----------+-------------+---------------+----------+------------+-----------+----------------+-------------------+
+ | tenant_id | tenant_name | database_name | table_id | table_name | tablet_id | PARTITION_NAME | SUBPARTITION_NAME |
+ +-----------+-------------+---------------+----------+------------+-----------+----------------+-------------------+
+ | 1004 | oracle001 | SYS | 500011 | TEST_TBL1 | 200008 | P1 | SP0 |
+ | 1004 | oracle001 | SYS | 500011 | TEST_TBL1 | 200009 | P1 | SP1 |
+ | 1004 | oracle001 | SYS | 500011 | TEST_TBL1 | 200010 | P1 | SP2 |
+ | 1004 | oracle001 | SYS | 500011 | TEST_TBL1 | 200011 | P1 | SP3 |
+ | 1004 | oracle001 | SYS | 500011 | TEST_TBL1 | 200012 | P2 | SP4 |
+ | 1004 | oracle001 | SYS | 500011 | TEST_TBL1 | 200013 | P2 | SP5 |
+ | 1004 | oracle001 | SYS | 500011 | TEST_TBL1 | 200014 | P2 | SP6 |
+ | 1004 | oracle001 | SYS | 500011 | TEST_TBL1 | 200015 | P2 | SP7 |
+ +-----------+-------------+---------------+----------+------------+-----------+----------------+-------------------+
+ 8 rows in set
+ ```
+
+ For more information about columns in the view, see [oceanbase.CDB_OB_TABLE_LOCATIONS](../../../../700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/17700.oceanbase-cdb_ob_table_locations-of-sys-tenant.md).
+
+2. Initiate a minor compaction.
+
+ Here is an example:
+
+ ```sql
+ ALTER SYSTEM MINOR FREEZE TENANT = oracle001 TABLET_ID = 200008;
+ ```
+
+ The return result is as follows:
+
+ ```shell
+ Query OK, 0 rows affected
+ ```
+
## References
* [Overview](../../../../200.system-management/500.manage-data-storage/200.merge-management/100.consolidation-management-overview.md)
* [Manually initiate a major compaction](../../../../200.system-management/500.manage-data-storage/200.merge-management/400.manually-trigger-a-merge.md)
* [View the major compaction process](../../../../200.system-management/500.manage-data-storage/200.merge-management/500.view-merge-process.md)
+* [Overview](../../../../200.system-management/500.manage-data-storage/100.dump-management/100.dump-management-overview.md)
+* [Manually initiate a minor compaction](../../../../200.system-management/500.manage-data-storage/100.dump-management/300.trigger-dump-manually.md)
+* [View minor compaction information](../../../../200.system-management/500.manage-data-storage/100.dump-management/400.view-dump-information.md)
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/200.alter-system/2600.PARAMETER.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/200.alter-system/2600.PARAMETER.md
index 65377d8591..e6a2375b81 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/200.alter-system/2600.PARAMETER.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/200.alter-system/2600.PARAMETER.md
@@ -9,46 +9,67 @@
## Purpose
-You can use this statement to modify a parameter.
+- You can use the `ALTER SYSTEM [SET] parameter_name = expression` statement to modify a parameter.
+
+- You can use the `ALTER SYSTEM RESET parameter_name` statement to reset a parameter to its default value.
OceanBase Database provides cluster-level and tenant-level parameters. Cluster-level parameters apply to all OBServer nodes in a cluster. Tenant-level parameters apply to all OBServer nodes in a tenant. For more information about parameters, see [Overview](../../../../800.configuration-items-and-system-variables/000.configuration-items-and-system-variables-overview.md).
## Limitations and considerations
-* You can modify cluster-level parameters only in the `sys` tenant.
+* You can modify or reset a cluster-level parameter only from the `sys` tenant.
-* Generally, parameters take effect dynamically or upon the restart of OBServer nodes. Most parameters take effect dynamically without the need to restart OBServer nodes. Before you modify a parameter, you can execute the `SHOW PARAMETERS LIKE` statement to query its effective mode. For more information, see [Cluster parameters](../../../../../600.manage/100.cluster-management/200.cluster-configuration-items.md).
+* Generally, parameter settings take effect dynamically or upon the restart of OBServer nodes. Most parameters take effect dynamically without the need to restart OBServer nodes. Before you modify a parameter, you can execute the `SHOW PARAMETERS LIKE` statement to query its effective mode. For more information, see [Cluster parameters](../../../../../600.manage/100.cluster-management/200.cluster-configuration-items.md).
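+
+  For example, the following query is a hedged sketch that checks the effective mode of the `enable_sql_audit` parameter; the `edit_level` column in the output indicates whether a restart is required:
+
+  ```sql
+  SHOW PARAMETERS LIKE 'enable_sql_audit';
+  ```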
## Required privileges
-To execute the `ALTER SYSTEM [SET] parameter_name = expression` statement, you must have the `ALTER SYSTEM` privilege. For more information about the privileges in OceanBase Database, see [Privilege types in MySQL mode](../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/200.permission-of-mysql-mode/100.permission-classification-of-mysql.md) and [Privilege types in Oracle mode](../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/300.permission-of-oracle-mode/000.permission-classification-of-oracle-mode.md).
+To execute the `ALTER SYSTEM [SET] parameter_name = expression` or `ALTER SYSTEM RESET parameter_name` statement, you must have the `ALTER SYSTEM` privilege. For more information about the privileges in OceanBase Database, see [Privilege types in MySQL mode](../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/200.permission-of-mysql-mode/100.permission-classification-of-mysql.md) and [Privilege types in Oracle mode](../../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/300.permission-of-oracle-mode/000.permission-classification-of-oracle-mode.md).
## Syntax
-```sql
-ALTER SYSTEM [SET] alter_system_set_parameter_actions;
+### Change the value of a parameter
-alter_system_set_parameter_actions:
-alter_system_set_parameter_action [, alter_system_set_parameter_action...]
+```sql
+ALTER SYSTEM [SET] alter_system_set_parameter_action [, alter_system_set_parameter_action...]
alter_system_set_parameter_action:
parameter_name = expression [COMMENT [=] 'text'] [SCOPE = {MEMORY | SPFILE | BOTH}] [SERVER [=] 'ip:port' | ZONE [=] 'zone_name' | TENANT [=] {sys | all_user | all | all_meta | tenant_name}]
```
-## Parameters
+#### Parameters
| **Parameter** | **Description** |
|-----------------|-----------------------|
| parameter_name | The name of the parameter to be modified. |
| expression | The value of the parameter after modification. |
| COMMENT | The comment to be added for the modification. This parameter is optional. We recommend that you specify this parameter. |
-| SCOPE | The effective scope of the parameter modification. Valid values: <br>- `MEMORY`: indicates that the parameter is modified only in the memory. The modification takes effect immediately and becomes invalid after OBServer nodes are restarted. Currently, no parameter supports this effective scope. <br>- `SPFILE`: indicates that the parameter is modified only in the configuration table. The modification takes effect after OBServer nodes are restarted. <br>- `BOTH`: indicates that the parameter is modified in both the configuration table and memory. The modification takes effect immediately and remains effective after OBServer nodes are restarted. <br>The default value is `BOTH`. If you set `SCOPE` to `BOTH` or `MEMORY` for parameters that cannot take effect immediately, an error is returned. |
+| SCOPE | The effective scope of the parameter modification. Valid values: <br>- `MEMORY`: indicates that the parameter is modified only in the memory. The modification takes effect immediately and becomes invalid after OBServer nodes are restarted. Currently, no parameter supports this effective scope. <br>- `SPFILE`: indicates that the parameter is modified only in the configuration table. The modification takes effect after OBServer nodes are restarted. <br>- `BOTH`: indicates that the parameter is modified in both the configuration table and memory. The modification takes effect immediately and remains effective after OBServer nodes are restarted. <br>The default value is `BOTH`. If you set `SCOPE` to `BOTH` or `MEMORY` for parameters whose settings cannot take effect immediately, an error is returned. |
| SERVER | The OBServer node for which you want to modify the parameter. You can specify only one OBServer node. |
| ZONE | The zone for which you want to modify the parameter. You can specify only one zone. If you specify a zone, the modification takes effect on all OBServer nodes in the zone. You cannot specify both `ZONE` and `SERVER`. |
| TENANT | The tenant for which you want to modify the parameter. If no tenant is specified, the modification takes effect on the current tenant. Valid values: <br>- `sys`: specifies to modify the parameter for the `sys` tenant. <br>- `all_user`/`all`: specifies to modify the parameter for all user tenants. <br>**Note**: Starting from OceanBase Database V4.2.1, `TENANT = all_user` and `TENANT = all` express the same semantics, but we recommend that you use `TENANT = all_user`. `TENANT = all` will be deprecated. <br>- `all_meta`: specifies to modify the parameter for all meta tenants. <br>- `tenant_name`: the name of the tenant for which you want to modify the parameter. You can specify only one tenant at a time. <br>This clause is required only when you modify a tenant-level parameter for a specified tenant in the `sys` tenant. For more information about tenant-level parameters, see [Tenant-level parameters](../../../../800.configuration-items-and-system-variables/100.system-configuration-items/200.system-configuration-items-overview-list.md). |
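+
+For example, the following statement is a hedged sketch (the zone name `zone1` is illustrative); it modifies `enable_sql_audit` only in the configuration table for all OBServer nodes in one zone:
+
+```sql
+ALTER SYSTEM SET enable_sql_audit = false COMMENT = 'disabled for audit test' SCOPE = SPFILE ZONE = 'zone1';
+```
+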
+### Reset the value of a parameter
+
+```sql
+ALTER SYSTEM RESET alter_system_set_parameter_action [, alter_system_set_parameter_action...]
+
+alter_system_set_parameter_action:
+parameter_name = expression [SCOPE = {MEMORY | SPFILE | BOTH}] [TENANT [=] tenant_name]
+```
+
+#### Parameters
+
+| **Parameter** | **Description** |
+|-----------------|-----------------------|
+| parameter_name | The name of the parameter to be reset. |
+| expression | The value of the parameter after reset. |
+| SCOPE | The effective scope of the reset operation. Valid values: <br>- `MEMORY`: specifies to reset only the parameter value in the memory. The modification takes effect immediately and becomes invalid after the OBServer node is restarted. At present, no parameter supports this effective scope. <br>- `SPFILE`: specifies to reset only the parameter value in the configuration table. The modification takes effect after the OBServer node is restarted. <br>- `BOTH`: specifies to reset the parameter value both in the configuration table and memory. The modification takes effect immediately and remains effective after the OBServer node is restarted. <br>The default value is `BOTH`. If you set `SCOPE` to `BOTH` or `MEMORY` for parameters whose settings cannot take effect immediately, an error is returned. |
+| TENANT | The tenant for which the tenant-level parameter is to be reset. If no tenant is specified, the default value is the current tenant. This clause is required only when you modify a tenant-level parameter for a specified tenant in the `sys` tenant. For more information about tenant-level parameters, see [Tenant-level parameters](../../../../800.configuration-items-and-system-variables/100.system-configuration-items/200.system-configuration-items-overview-list.md). |
+
## Examples
+### Modify parameters
+
* In the `sys` tenant, modify the cluster-level parameter `enable_sql_audit`.
```sql
@@ -83,8 +104,29 @@ parameter_name = expression [COMMENT [=] 'text'] [SCOPE = {MEMORY | SPFILE | BOT
obclient [oceanbase]> ALTER SYSTEM SET major_freeze_duty_time='01:00';
```
+
+### Reset parameters
+
+* In the `sys` tenant, reset the cluster-level parameter `enable_sql_audit` to its default value. The modification takes effect immediately and is persisted to `SPFILE`.
+
+  ```sql
+  obclient [oceanbase]> ALTER SYSTEM RESET ENABLE_SQL_AUDIT SCOPE = BOTH;
+  ```
+
+* Reset the `log_disk_utilization_threshold` parameter.
+
+ ```sql
+ obclient> ALTER SYSTEM RESET log_disk_utilization_threshold;
+ ```
+
+* Reset a tenant-level parameter for all tenants or for a specified tenant from the `sys` tenant.
+
+ ```sql
+ obclient> ALTER SYSTEM RESET log_disk_utilization_threshold TENANT='ALL';
+ obclient> ALTER SYSTEM RESET log_disk_utilization_threshold TENANT='Oracle';
+ ```
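+
+  You can then run a hedged verification query to confirm that the parameter has been restored to its default value:
+
+  ```sql
+  obclient> SHOW PARAMETERS LIKE 'log_disk_utilization_threshold';
+  ```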
+
## References
* [Set parameters](../../../../200.system-management/200.configuration-management/200.set-parameters.md)
-* [Overview](../../../../800.configuration-items-and-system-variables/000.configuration-items-and-system-variables-overview.md)
+* [Overview](../../../../800.configuration-items-and-system-variables/000.configuration-items-and-system-variables-overview.md)
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/300.alter-resource-pool.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/300.alter-resource-pool.md
index 71f932c72a..59d4c2e007 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/300.alter-resource-pool.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/100.system-tenants/300.alter-resource-pool.md
@@ -13,13 +13,13 @@ You can use the `ALTER RESOURCE POOL` statement to modify the attributes of a re
## Limitations and considerations
-* When you use the `ALTER RESOURCE POOL` statement to modify the attributes of a resource pool, you can modify only one attribute at a time. If you want to modify two or more attributes, execute the statement repeatedly.
+* You can use the `ALTER RESOURCE POOL` statement to modify only one attribute of a resource pool at a time. To modify two or all of the `UNIT`, `UNIT_NUM`, and `ZONE_LIST` attributes, execute the statement repeatedly.
-* If you want to modify the `UNIT_NUM` attribute of a resource pool that has been allocated to a tenant, you must use the `ALTER RESOURCE TENANT` statement instead of the `ALTER RESOURCE POOL` statement. For more information about the `ALTER RESOURCE TENANT` statement, see [ALTER RESOURCE TENANT](300.alter-resource-tenant.md).
+* To modify the `UNIT_NUM` attribute of a resource pool that has been allocated to a tenant, you must use the `ALTER RESOURCE TENANT` statement instead of the `ALTER RESOURCE POOL` statement. For more information about the `ALTER RESOURCE TENANT` statement, see [ALTER RESOURCE TENANT](300.alter-resource-tenant.md).
## Required privileges
-You can modify the attributes of a tenant only as the `root` user of the `sys` tenant (namely `root@sys`).
+Only the `root` user of the `sys` tenant (namely the `root@sys` user) can execute this statement.
## Syntax
@@ -37,9 +37,9 @@ ALTER RESOURCE POOL pool_name
| **Parameter** | **Description** |
|-------------------------------------------|----------------------------------------------------------------------------------------------|
-| unit_name | The unit config used by the resource pool. |
-| unit_num | The number of units in a single zone after modification. When you increase the value of `unit_num`, make sure that the value is smaller than or equal to the number of nodes in the target zone. |
-| unit_id_list | When you decrease the value of `unit_num`, you can use the `DELETE UNIT` clause to specify the IDs of the units to be deleted. If you do not use the `DELETE UNIT` clause, the system randomly deletes a corresponding number of units. <br>**Notice**: When you use `DELETE UNIT` to specify the IDs of the units to be deleted, take note of the following considerations: <br>- In the `unit_id` list, the number of units to be deleted from each zone must be the same. <br>- The number of remaining units after the deletion in each zone must be consistent with the value of `unit_num`. |
+| unit_name | The name of the unit config for the resource pool. |
+| unit_num | The number of resource units in a single zone. If you want to increase the value of `unit_num`, make sure that the parameter value is less than or equal to the number of OBServer nodes in the target zone. |
+| unit_id_list | The list of IDs of resource units to be deleted. When you decrease the value of `unit_num`, you can use the `DELETE UNIT` keyword to specify the resource units to be deleted. If you do not specify `DELETE UNIT`, the system will randomly delete the corresponding number of resource units. <br>**Notice**: When you use the `DELETE UNIT` keyword to specify the resource units (`unit_id`) to be deleted, the following conditions must be met: <br>- The number of resource units to be deleted from each zone must be the same. <br>- The number of resource units in each zone after deletion must be the same as the value of `unit_num`. |
| ZONE_LIST | The list of zones for the resource pool after modification. |
## Examples
@@ -51,13 +51,13 @@ ALTER RESOURCE POOL pool_name
ERROR 1235 (0A000): alter unit_num, resource_unit, zone_list in one cmd not supported
```
- When you decrease the value of `UNIT_NUM`, you can delete specified units.
+ When you decrease the value of `UNIT_NUM`, you can specify the resource units to be deleted.
```sql
obclient [oceanbase]> ALTER RESOURCE POOL pool1 UNIT_NUM=1 DELETE UNIT = (1002);
```
-* Change the unit config of the resource pool `pool1` to `unit2`.
+* Change the unit config for the resource pool `pool1` to `unit2`.
```sql
obclient [oceanbase]> ALTER RESOURCE POOL pool1 UNIT='unit2';
@@ -67,4 +67,4 @@ ALTER RESOURCE POOL pool_name
* [View resource pools](../../../../600.manage/200.tenant-management/600.common-tenant-operations/1500.resource-pool-management/100.view-resource-pools.md)
-* [Modify attributes of a resource pool](../../../../600.manage/200.tenant-management/600.common-tenant-operations/900.modify-resource-pool-properties.md)
+* [Modify attributes of a resource pool](../../../../600.manage/200.tenant-management/600.common-tenant-operations/900.modify-resource-pool-properties.md)
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/1500.alter-system-freeze-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/1500.alter-system-freeze-of-mysql-mode.md
index d39eb8ce8f..7e47711aa1 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/1500.alter-system-freeze-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/1500.alter-system-freeze-of-mysql-mode.md
@@ -5,21 +5,20 @@
| dir-name-en | |
| tenant-type | MySQL Mode |
-# MERGE
+# MAJOR and MINOR
## Purpose
-You can use this statement to initiate a minor or major compaction in the storage layer of a user tenant. You can manually initiate a tenant-level, partition-level, or OBServer node-level minor compaction.
+You can use this statement to initiate a major or minor compaction in the storage layer for a user tenant from the current tenant. You can initiate a major or minor compaction at the tenant or partition level.
## Syntax
```sql
-alter_system_merge_stmt:
- ALTER SYSTEM merge_action;
+ALTER SYSTEM merge_action;
merge_action:
- MAJOR FREEZE
- | MINOR FREEZE
+ MAJOR FREEZE [TABLET_ID = tablet_id]
+ | MINOR FREEZE [TABLET_ID = tablet_id]
| {SUSPEND | RESUME} MERGE
| CLEAR MERGE ERROR
```
@@ -32,13 +31,11 @@ merge_action:
| MINOR FREEZE | Initiates a minor compaction. <br>**Note**: A user tenant can initiate a minor compaction only for the current tenant. |
| {SUSPEND \| RESUME} MERGE | Suspends or resumes a major compaction. <br>**Note**: A user tenant can suspend or resume a major compaction only for the current tenant. |
| CLEAR MERGE ERROR | Removes major compaction error tags. <br>**Note**: A user tenant can remove major compaction error tags only in the current tenant. |
-| tenant_name | The tenant on which a minor compaction is performed. |
-| TABLET_ID | The partition on which a minor compaction is performed. |
-| SERVER | The server on which a minor compaction is performed. |
+| TABLET_ID | The tablet for which a minor compaction is to be initiated. |
## Examples
-### Major compaction in the storage layer
+### Major compactions in the storage layer
* Initiate a major compaction for a user tenant.
@@ -47,6 +44,13 @@ merge_action:
Query OK, 0 rows affected
```
+* Initiate a major compaction at the partition level for a user tenant.
+
+ ```shell
+ obclient> ALTER SYSTEM MAJOR FREEZE TABLET_ID = 5;
+ Query OK, 0 rows affected
+ ```
+
* Suspend the major compaction of a user tenant.
```sql
@@ -68,7 +72,7 @@ merge_action:
Query OK, 0 rows affected
```
-### Minor compaction in the storage layer
+### Minor compactions in the storage layer
* Initiate a minor compaction for a user tenant.
@@ -76,3 +80,10 @@ merge_action:
obclient> ALTER SYSTEM MINOR FREEZE;
Query OK, 0 rows affected
```
+
+* Initiate a minor compaction at the partition level for a user tenant.
+
+ ```shell
+ obclient> ALTER SYSTEM MINOR FREEZE TABLET_ID = 5;
+ Query OK, 0 rows affected
+ ```
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/1600.alter-table-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/1600.alter-table-of-mysql-mode.md
index 167b7af0a4..fe03ab3027 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/1600.alter-table-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/1600.alter-table-of-mysql-mode.md
@@ -63,8 +63,14 @@ column_definition_list:
column_definition:
column_name data_type
[DEFAULT const_value] [AUTO_INCREMENT]
- [NULL | NOT NULL] [[PRIMARY] KEY] [UNIQUE [KEY]] comment
- [ opt_position_column ]
+ [NULL | NOT NULL] [[PRIMARY] KEY] [UNIQUE [KEY]] [COMMENT string_value] [SKIP_INDEX(skip_index_option_list)] [ opt_position_column ]
+
+skip_index_option_list:
+ skip_index_option [,skip_index_option ...]
+
+skip_index_option:
+ MIN_MAX
+ | SUM
opt_position_column:
FIRST | BEFORE | AFTER column_name
@@ -101,7 +107,7 @@ index_option:
| block_size
| compression
| STORING(column_name_list)
- | comment
+ | COMMENT string_value
index_column_group_option:
WITH COLUMN GROUP([all columns, ]each column)
@@ -115,7 +121,7 @@ table_option:
|lob_inrow_threshold [=] num
| compression
| AUTO_INCREMENT [=] INT_VALUE
- | comment
+ | COMMENT string_value
| parallel_clause
parallel_clause:
@@ -177,10 +183,10 @@ partition_count | subpartition_count:
| ADD FOREIGN KEY | Adds a foreign key. If you do not specify the name of the foreign key, it will be named in the format of table name + `OBFK` + time when the foreign key was created. For example, the foreign key created for table `t1` at 00:00:00 on August 1, 2021 is named as `t1_OBFK_1627747200000000`. A foreign key enables one table (child table) to reference data from another table (parent table). When an `UPDATE` or `DELETE` operation affects a key value in the parent table that has matching rows in the child table, the result depends on the referential action specified in the `ON UPDATE` or `ON DELETE` clause. Valid referential actions: <br>- `CASCADE`: deletes or updates the affected row in the parent table and automatically deletes or updates the matching rows in the child table. <br>- `SET NULL`: deletes or updates the affected row in the parent table and sets the foreign key column in the child table to `NULL`. <br>- `RESTRICT`: rejects the delete or update operation on the parent table. <br>- `NO ACTION`: defers the check. <br>The `SET DEFAULT` action is also supported. |
| ALTER INDEX | Modifies whether an index is visible. When the status of an index is `INVISIBLE`, the SQL optimizer will not select this index. |
| key_part | Creates a normal or function-based index. |
-| index_col_name | The column name of the index. You can add `ASC` (ascending order) to the end of each column name. `DESC` (descending order) is not supported. By default, the columns are sorted in ascending order. To be specific, data is first sorted by values in the first column of `index_col_name` and by values in the next column for the records with the same values in the first column, and so forth. |
+| index_col_name | The column name of the index. You can add `ASC` (ascending order) to the end of each column name. `DESC` (descending order) is not supported. By default, the columns are sorted in ascending order. Index-based sorting method: Data is first sorted by the values in the first column of `index_col_name` and by the values in the next column for the records with the same values in the first column. |
| expr | A valid function-based index expression. A Boolean expression, such as `c1=c1`, is allowed. <br>**Notice**: You cannot create a function-based index on a generated column in the current version of OceanBase Database. |
| ADD \[PARTITION\] | Adds a partition to a partitioned table. |
-| DROP {PARTITION \| SUBPARTITION} | Drops a partition. Valid values: <br>- `PARTITION`: drops the specified RANGE or LIST partitions, as well as all subpartitions that exist under these partitions. The partition definitions and partition data are also deleted, and the local indexes defined on the partitions are maintained. <br>- `SUBPARTITION`: drops the specified `*-RANGE` or `*-LIST` subpartitions, including the subpartition definitions and subpartition data, and maintains the indexes on the subpartitions. <br>Separate multiple partition names with commas (,). <br>**Notice**: Before you drop a partition, ensure that no active transactions or queries exist in this partition. Otherwise, SQL statement errors or exceptions may occur. |
+| DROP {PARTITION \| SUBPARTITION} | Drops a partition. Valid values: <br>- `PARTITION`: drops the specified RANGE or LIST partitions and all subpartitions that exist under these partitions, including partition definitions and partition data, and maintains the indexes on the partitions. <br>- `SUBPARTITION`: drops the specified `*-RANGE` or `*-LIST` subpartitions, including the subpartition definitions and subpartition data, and maintains the indexes on the subpartitions. <br>Separate multiple partition names with commas (,). <br>**Notice**: Before you drop a partition, ensure that no active transactions or queries exist in this partition. Otherwise, SQL statement errors or exceptions may occur. |
| TRUNCATE {PARTITION \| SUBPARTITION} | Truncates a partition. Valid values: <br>- `PARTITION`: deletes all data in the specified RANGE or LIST partitions, as well as data in all subpartitions that exist under these partitions, and maintains the indexes on the partitions. <br>- `SUBPARTITION`: deletes all data in the specified `*-RANGE` or `*-LIST` subpartitions, and maintains the indexes on the subpartitions. <br>Separate multiple partition names with commas (,). <br>**Notice**: Before you delete partition data, ensure that no active transactions or queries exist in this partition. Otherwise, SQL statement errors or exceptions may occur. |
| RENAME COLUMN old_col_name TO new_col_name | Renames a column. Only the column name is changed. The column definition is not changed. <br>**Notice**: <br>- If the new column name already exists in the table, an error is reported. However, you can swap names through a cycle, for example, `ALTER TABLE t1 RENAME COLUMN a to b, RENAME COLUMN b to a;`. <br>- If the renamed column has an index or `FOREIGN KEY` constraint, `RENAME COLUMN` can be executed normally, and the change cascades to the index definition and `FOREIGN KEY` constraint. <br>- An `ALTER TABLE` statement cannot contain any combination of `RENAME COLUMN`, `ADD PARTITION`, and `ALTER COLUMN`. |
| RENAME \[TO\] table_name | Renames a table. |
@@ -188,12 +194,13 @@ partition_count | subpartition_count:
| DROP \[TABLEGROUP\] | Drops a table group. |
| DROP \[FOREIGN KEY\] | Drops a foreign key. |
| DROP \[PRIMARY KEY\] | Drops a primary key. <br>**Note**: In OceanBase Database, you cannot drop a primary key from a MySQL tenant in the following conditions: <br>- The table is a parent table whose primary key is referenced by a foreign key column of a child table. <br>- The table is a child table, but its primary key contains a foreign key column. |
-| \[SET\] table_option | Sets table attributes. The following parameters are supported: <br>- `REPLICA_NUM`: sets the number of replicas of the table (not supported). <br>- `tablegroup_name`: sets the group to which the table belongs. <br>- `BLOCK_SIZE`: sets the microblock size of the table. Default value: `16384`, which is 16 KB. Value range: [1024,1048576]. <br>- `lob_inrow_threshold`: sets the `INROW` threshold. When the size of LOB data exceeds this threshold, the LOB data is stored by using the `OUTROW` method instead of the `INROW` method in the LOB meta table. The default value is 4 KB. <br>- `COMPRESSION`: sets the compression mode of the table. The default value is `None`, which means that data is not compressed. <br>- `AUTO_INCREMENT`: sets the next value of the auto-increment column in the table. <br>- `comment`: sets the comments for the table. <br>- `PROGRESSIVE_MERGE_NUM`: sets the number of progressive compaction steps. Value range: \[0,100\]. <br>- `parallel_clause`: specifies the degree of parallelism (DOP) at the table level. <br>- `NOPARALLEL`: sets the DOP to `1`, which is the default value. <br>- `PARALLEL integer`: sets the DOP to an integer greater than or equal to `1`. |
+| \[SET\] table_option | Sets table attributes. The following parameters are supported: <br>- `REPLICA_NUM`: sets the number of replicas of the table (not supported). <br>- `tablegroup_name`: sets the group to which the table belongs. <br>- `BLOCK_SIZE`: sets the microblock size of the table. Default value: `16384`, which is 16 KB. Value range: [1024,1048576]. <br>- `lob_inrow_threshold`: sets the `INROW` threshold. LOB data sized greater than this threshold is converted to `OUTROW` and stored in the LOB meta table. The default value is 4 KB. <br>- `COMPRESSION`: sets the compression mode of the table. The default value is `None`, which means that data is not compressed. <br>- `AUTO_INCREMENT`: sets the next value of the auto-increment column in the table. <br>- `comment`: sets the comments for the table. <br>- `PROGRESSIVE_MERGE_NUM`: sets the number of progressive compaction steps. Value range: \[0,100\]. <br>- `parallel_clause`: specifies the degree of parallelism (DOP) at the table level. <br>- `NOPARALLEL`: sets the DOP to `1`, which is the default value. <br>- `PARALLEL integer`: sets the DOP to an integer greater than or equal to `1`. |
| CHECK | Modifies the `CHECK` constraint. The following operations are supported: <br>- Add a new `CHECK` constraint. <br>- Drop the current `CHECK` constraint named `constraint_name`. |
| \[NOT\] ENFORCED | Specifies whether to forcibly apply the `CHECK` constraint named `constraint_name`. <br>- If you do not specify this parameter or set it to `ENFORCED`, a `CHECK` constraint is created and forcibly applied. The default value is `ENFORCED`. <br>- If you set it to `NOT ENFORCED`, a `CHECK` constraint is created but not forcibly applied. |
-| ADD COLUMN GROUP([all columns, ]each column) | Changes the table to a columnar storage table. Specifically: <br>- `ADD COLUMN GROUP(all columns, each column)`: Changes the table to a row-column redundant table. <br>- `ADD COLUMN GROUP(each column)`: Changes the table to a columnar storage table. |
-| DROP COLUMN GROUP([all columns, ]each column) | Removes the columnar storage method from the table. Specifically: <br>- `DROP COLUMN GROUP(all columns, each column)`: Removes the row-column redundant storage method from the table. <br>- `DROP COLUMN GROUP(each column)`: Removes the columnar storage method from the table. |
-| index_column_group_option | Specifies the columnar storage options for the index. Specifically: <br>- `WITH COLUMN GROUP(all columns, each column)`: Specifies adding the row-column redundant index. <br>- `WITH COLUMN GROUP(each column)`: Specifies adding a columnar index. |
+| ADD COLUMN GROUP([all columns,]each column) | Converts a table to a columnstore table. The following options are supported: <br>- `ADD COLUMN GROUP(all columns, each column)`: converts the table to a rowstore-columnstore redundant table. <br>- `ADD COLUMN GROUP(each column)`: converts the table to a columnstore table. |
+| DROP COLUMN GROUP([all columns,]each column) | Drops the storage format of the table. The following options are supported: <br>- `DROP COLUMN GROUP(all columns, each column)`: drops the rowstore-columnstore redundant format of the table. <br>- `DROP COLUMN GROUP(all columns)`: drops the rowstore format of the table. <br>- `DROP COLUMN GROUP(each column)`: drops the columnstore format of the table. |
+| index_column_group_option | The index options. The following options are supported: <br>- `WITH COLUMN GROUP(all columns, each column)`: adds a rowstore-columnstore redundant index. <br>- `WITH COLUMN GROUP(all columns)`: adds a rowstore index. <br>- `WITH COLUMN GROUP(each column)`: adds a columnstore index. |
+| SKIP_INDEX | Modifies the Skip Index attribute of a column. Valid values: <br>- `MIN_MAX`: specifies to store the maximum value, minimum value, and number of null values of the indexed column on an index node. It is the most common aggregate data type in Skip Index. It can accelerate the pushdown of filters and `MIN/MAX` functions. <br>- `SUM`: used to accelerate the pushdown of the `SUM` function on data of the numeric type. <br>**Notice**: <br>- The Skip Index attribute is not supported for a column of the JSON or spatial data type. <br>- The Skip Index attribute is not supported for a generated column. |
## Examples
@@ -415,15 +422,15 @@ obclient> DESCRIBE tbl1;
1 row in set
```
-* Create a columnar index for a table.
+* Create a columnstore index for a table.
+
+ 1. Create a table named `tbl3`.
- 1. Create table `tbl3` using the SQL statement below:
-
```sql
CREATE TABLE tbl3 (col1 INT, col2 VARCHAR(50));
```
- 2. Create a columnar index `idx1_tbl3` on table `tbl3`, referencing column `col1`.
+ 2. On the `tbl3` table, create a columnstore index named `idx1_tbl3` that references the `col1` column.
```sql
ALTER TABLE tbl3 ADD INDEX idx1_tbl3 (col1) WITH COLUMN GROUP(each column);
@@ -550,7 +557,6 @@ obclient> DESCRIBE tbl1;
Query OK, 0 rows affected
```
-
### Rename a column
* `RENAME COLUMN` only changes the column name and does not change the column definition. If the new column name already exists in the table, an error is reported when `RENAME COLUMN` is executed. However, no error is reported when the old and new column names are the same.
@@ -672,7 +678,7 @@ OceanBase Database does not support change or cascading change in the following
ERROR 3959 (HY000): Check constraint 'd_check' uses column 'd', hence column cannot be dropped or renamed.
```
-* The renamed column is referenced by a function index. In this case, an error is reported when you execute `RENAME COLUMN`.
+* The renamed column is referenced by a function-based index. In this case, an error is reported when you execute `RENAME COLUMN`.
```shell
DROP TABLE IF EXISTS tbl12;
@@ -721,36 +727,71 @@ OceanBase Database does not support change or cascading change in the following
ERROR 1054 (42S22): Unknown column 'a' in 'field list'
```
-### Modify columnar storage properties of a table
+### Modify the columnstore attribute of a table
-1. Create table `tbl1` using the SQL statement below.
+1. Create a table named `tbl1`.
- ```sql
- CREATE TABLE tbl1 (col1 INT PRIMARY KEY, col2 VARCHAR(50));
- ```
+ ```sql
+ CREATE TABLE tbl1 (col1 INT PRIMARY KEY, col2 VARCHAR(50));
+ ```
-2. Change `tbl1` to a row-column redundant table, then remove the row-column redundant property.
+2. Convert the `tbl1` table to a rowstore-columnstore redundant table, and then drop the rowstore-columnstore redundancy attribute.
- ```sql
- ALTER TABLE tbl1 ADD COLUMN GROUP(all columns, each column);
- ```
+ ```sql
+ ALTER TABLE tbl1 ADD COLUMN GROUP(all columns, each column);
+ ```
- ```sql
- ALTER TABLE tbl1 DROP COLUMN GROUP(all columns, each column);
- ```
+ ```sql
+ ALTER TABLE tbl1 DROP COLUMN GROUP(all columns, each column);
+ ```
+
+3. Convert the `tbl1` table to a columnstore table, and then drop the columnstore attribute.
-3. Change `tbl1` to a columnar storage table, then remove the columnar storage property.
+ ```sql
+ ALTER TABLE tbl1 ADD COLUMN GROUP(each column);
+ ```
+ ```sql
+ ALTER TABLE tbl1 DROP COLUMN GROUP(each column);
+ ```
- ```sql
- ALTER TABLE tbl1 ADD COLUMN GROUP(each column);
- ```
+### Modify the Skip Index attribute of a column
- ```sql
- ALTER TABLE tbl1 DROP COLUMN GROUP(each column);
- ```
+Execute the following statement to create a table named `test_skidx`:
+```sql
+CREATE TABLE test_skidx(
+ col1 INT SKIP_INDEX(MIN_MAX, SUM),
+ col2 FLOAT SKIP_INDEX(MIN_MAX),
+ col3 VARCHAR(1024) SKIP_INDEX(MIN_MAX),
+ col4 CHAR(10)
+ );
+```
+
+* Change the Skip Index attribute of the `col2` column in the `test_skidx` table to `SUM`.
+
+ ```sql
+ ALTER TABLE test_skidx MODIFY COLUMN col2 FLOAT SKIP_INDEX(SUM);
+ ```
+
+* Add the `MIN_MAX` Skip Index attribute to the `col4` column in the `test_skidx` table.
+
+ ```sql
+ ALTER TABLE test_skidx MODIFY COLUMN col4 CHAR(10) SKIP_INDEX(MIN_MAX);
+ ```
+
+* Remove the Skip Index attribute from the `col1` column in the `test_skidx` table.
+
+ ```sql
+ ALTER TABLE test_skidx MODIFY COLUMN col1 INT SKIP_INDEX();
+ ```
+
+ or
+
+ ```sql
+ ALTER TABLE test_skidx MODIFY COLUMN col1 INT;
+ ```
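+
+To verify the result, you can query the table definition; this is a hedged suggestion rather than a required step:
+
+```sql
+SHOW CREATE TABLE test_skidx;
+```
+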
## References
-[Modify a table](../../../../300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/600.change-table-of-mysql-mode.md)
+[Modify a table](../../../../300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/600.change-table-of-mysql-mode.md)
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2200.create-external-table-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2200.create-external-table-of-mysql-mode.md
index 561bb453a0..06d3c04543 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2200.create-external-table-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2200.create-external-table-of-mysql-mode.md
@@ -48,7 +48,7 @@ CREATE EXTERNAL TABLE
| column_type | The type of a column in the external table. You cannot define constraints, such as `DEFAULT`, `NOT NULL`, `UNIQUE`, `CHECK`, `PRIMARY KEY`, and `FOREIGN KEY`, for an external table. |
| AS | Used to manually specify column mappings between files and the external table. When the column order in files is different from that in the external table, you can use the pseudo column `metadata$filecol{N}` to map a column in the external table to the Nth column in files. For example, `c2 INT AS (metadata$filecol4)` maps the `c2` column in the external table to the fourth column in files. If you manually map one column, all automatic mappings become invalid, which means that you must then manually map all other columns. |
| LOCATION | The path where the files of the external table are stored. Generally, the data files of an external table are stored in a separate directory. The folder can contain subdirectories. When you create an external table, the table automatically collects all files in the specified directory. <br>- A local location is in the format of `LOCATION = '[file://] local_file_path'`, where `local_file_path` can be a relative or absolute path. If you enter a relative path, the current directory must be the installation directory of OceanBase Database. `secure_file_priv` specifies the file path that the OBServer node can access. Therefore, `local_file_path` can only be a subpath of `secure_file_priv`. <br>- A remote location is in the format of `LOCATION = '{oss\|cos}://$ACCESS_ID:$ACCESS_KEY@$HOST/remote_file_path'`, where `$ACCESS_ID`, `$ACCESS_KEY`, and `$HOST` are required for accessing Alibaba Cloud Object Storage Service (OSS) or Tencent Cloud Object Storage (COS). This sensitive access information is encrypted and stored in the system table of the database. |
-| FORMAT | The format of external files. <br>- `TYPE`: the type of the external file. Only the CSV type is supported. <br>- `LINE_DELIMITER`: the line delimiter for the CSV file. Default value: `'\n'`. <br>- `FIELD_DELIMITER`: the field delimiter for the CSV file. Default value: `'\t'`. <br>- `ESCAPE`: the escape character for the CSV file, which can be only 1 byte in length. Default value: `'\'`. <br>- `FIELD_OPTIONALLY_ENCLOSED_BY`: the characters that enclose the field values in the CSV file. The default value is none. <br>- `ENCODING`: the character set encoding used by the file. For more information about all character sets supported in MySQL mode, see [Character sets](../100.basic-elements-of-mysql-mode/300.character-set-and-collation-of-mysql-mode/200.character-set-of-mysql-mode.md). If this parameter is not specified, the default value `UTF8MB4` takes effect. <br>- `NULL_IF`: the strings to be treated as `NULL` values. If you do not set this parameter, all strings are treated as valid non-NULL values and loaded to the external table. <br>- `SKIP_HEADER`: specifies to skip the file header, and specifies the number of lines to skip. <br>- `SKIP_BLANK_LINES`: specifies whether to skip blank lines. Default value: `FALSE`, which indicates not to skip blank lines. <br>- `TRIM_SPACE`: specifies whether to remove leading and trailing spaces from fields in the file. Default value: `FALSE`, which indicates not to remove leading and trailing spaces from fields in the file. <br>- `EMPTY_FIELD_AS_NULL`: specifies whether to treat empty strings as `NULL` values. Default value: `FALSE`, which indicates not to treat empty strings as `NULL` values. |
+| FORMAT | The format of external files. <br>- `TYPE`: the type of the external file. Only the CSV type is supported. <br>- `LINE_DELIMITER`: the line delimiter for the CSV file. Default value: `'\n'`. <br>- `FIELD_DELIMITER`: the field delimiter for the CSV file. Default value: `'\t'`. <br>- `ESCAPE`: the escape character for the CSV file, which can be only 1 byte in length. Default value: `'\'`. <br>- `FIELD_OPTIONALLY_ENCLOSED_BY`: the characters that enclose the field values in the CSV file. The default value is none. <br>- `ENCODING`: the character set encoding used by the file. For more information about all character sets supported in MySQL mode, see [Character sets](../100.basic-elements-of-mysql-mode/300.character-set-and-collation-of-mysql-mode/200.character-set-of-mysql-mode.md). If this parameter is not specified, the default value `UTF8MB4` takes effect. <br>- `NULL_IF`: the strings to be treated as `NULL` values. The default value is none. <br>- `SKIP_HEADER`: specifies the number of header lines to skip. <br>- `SKIP_BLANK_LINES`: specifies whether to skip blank lines. Default value: `FALSE`, which indicates not to skip blank lines. <br>- `TRIM_SPACE`: specifies whether to remove leading and trailing spaces from fields in the file. Default value: `FALSE`, which indicates not to remove leading and trailing spaces from fields in the file. <br>- `EMPTY_FIELD_AS_NULL`: specifies whether to treat empty strings as `NULL` values. Default value: `FALSE`, which indicates not to treat empty strings as `NULL` values. |
| PATTERN | The regular pattern string for filtering files in the `LOCATION` directory. For each file in the `LOCATION` directory, if the file path matches the pattern string, the external table accesses the file. Otherwise, the external table skips the file. If this parameter is not specified, all files in the `LOCATION` directory are accessible by default. The external table stores the list of the files that match `PATTERN` in the system table of the database. During the scan, the external table accesses external files based on this list. |
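+
+The following statement sketches how `LOCATION`, `FORMAT`, and `PATTERN` work together. It is illustrative only: the table name, directory, and file-name pattern are assumptions, and the directory must be a subpath of `secure_file_priv`.
+
+```sql
+CREATE EXTERNAL TABLE ext_orders (
+  c1 INT,
+  c2 VARCHAR(50)
+)
+LOCATION = '/home/admin/csv_dir'
+FORMAT = (
+  TYPE = 'CSV'
+  FIELD_DELIMITER = ','
+  SKIP_HEADER = 1
+)
+PATTERN = 'orders_.*[.]csv';
+```
+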
## Considerations
@@ -72,7 +72,7 @@ CREATE EXTERNAL TABLE
Note
- Because `secure_file_priv` is a `GLOBAL` variable, you need to run `\q` to exit for the settings to take effect.
+ `secure_file_priv` is a `GLOBAL` variable. Therefore, you need to run `\q` to exit for the settings to take effect.
The content of the CSV file is as follows:
@@ -126,4 +126,4 @@ CREATE EXTERNAL TABLE
[Manage external files](../../../../300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/1000.manage-external-tables-of-mysql-mode/300.manage-external-files-of-mysql-mode.md)
-[Update the file list for an external table](../600.sql-statement-of-mysql-mode/1300.alter-external-table-of-mysql-mode.md)
+[Update the file list for an external table](../600.sql-statement-of-mysql-mode/1300.alter-external-table-of-mysql-mode.md)
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2600.create-table-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2600.create-table-of-mysql-mode.md
index b401d488dd..959b4ad78a 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2600.create-table-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2600.create-table-of-mysql-mode.md
@@ -24,7 +24,7 @@ table_definition_list:
table_definition [, table_definition ...]
table_definition:
- column_definition
+ column_definition_list
| [CONSTRAINT [constraint_name]] PRIMARY KEY index_desc
| [CONSTRAINT [constraint_name]] UNIQUE {INDEX | KEY}
[index_name] index_desc
@@ -42,11 +42,18 @@ column_definition_list:
column_definition:
column_name data_type
[DEFAULT const_value] [AUTO_INCREMENT]
- [NULL | NOT NULL] [[PRIMARY] KEY] [UNIQUE [KEY]] comment
+ [NULL | NOT NULL] [[PRIMARY] KEY] [UNIQUE [KEY]] [COMMENT string_value] [SKIP_INDEX(skip_index_option_list)]
| column_name data_type
[GENERATED ALWAYS] AS (expr) [VIRTUAL | STORED]
[opt_generated_column_attribute]
+skip_index_option_list:
+ skip_index_option [,skip_index_option ...]
+
+skip_index_option:
+ MIN_MAX
+ | SUM
+
index_desc:
(column_desc_list) [index_type] [index_option_list]
@@ -74,7 +81,7 @@ index_option:
| block_size
| compression
| STORING(column_name_list)
- | comment
+ | COMMENT string_value
table_option_list:
table_option [ table_option ...]
@@ -87,7 +94,7 @@ table_option:
| lob_inrow_threshold [=] num
| compression
| AUTO_INCREMENT [=] INT_VALUE
- | comment
+ | COMMENT string_value
| ROW_FORMAT [=] REDUNDANT|COMPACT|DYNAMIC|COMPRESSED|DEFAULT
| PCTFREE [=] num
| parallel_clause
@@ -142,7 +149,9 @@ partition_count | subpartition_count:
INT_VALUE
table_column_group_option/index_column_group_option:
- WITH COLUMN GROUP([all columns, ]each column)
+ WITH COLUMN GROUP(all columns)
+ | WITH COLUMN GROUP(each column)
+ | WITH COLUMN GROUP(all columns, each column)
```
## Parameters
@@ -154,10 +163,10 @@ table_column_group_option/index_column_group_option:
| FOREIGN KEY | The foreign key of the created table. If you do not specify the name of the foreign key, it will be named in the format of table name + `OBFK` + time when the foreign key was created. For example, the foreign key created for table `t1` at 00:00:00 on August 1, 2021 is named as `t1_OBFK_1627747200000000`. A foreign key enables one table (child table) to reference data from another table (parent table). When an `UPDATE` or `DELETE` operation affects a key value in the parent table that has matching rows in the child table, the result depends on the referential action specified in the `ON UPDATE` or `ON DELETE` clause. Valid referential actions: <br>- `CASCADE`: deletes or updates the affected row in the parent table and automatically deletes or updates the matching rows in the child table. <br>- `SET NULL`: deletes or updates the affected row in the parent table and sets the foreign key column in the child table to `NULL`. <br>- `RESTRICT`: rejects the delete or update operation on the parent table. <br>- `NO ACTION`: defers the check. <br>The `SET DEFAULT` action is also supported. |
| KEY \| INDEX | The key or index of the created table. If you do not specify the name of the index, the name of the first column referenced by the index is used as the index name. If duplicate index names exist, the index will be named in the format of underscore (_) + sequence number. For example, if the name of the index created based on column `c1` conflicts with an existing index name, the index will be named as `c1_2`. You can execute the `SHOW INDEX` statement to query the indexes of a table. |
| key_part | Creates a normal or function-based index. |
-| index_col_name | The column name of the index. You can add `ASC` (ascending order) to the end of each column name. `DESC` (descending order) is not supported. By default, the columns are sorted in ascending order. To be specific, data is first sorted by values in the first column of `index_col_name` and by values in the next column for the records with the same values in the first column, and so forth. |
+| index_col_name | The column name of the index. You can add `ASC` (ascending order) to the end of each column name. `DESC` (descending order) is not supported. By default, the columns are sorted in ascending order. Index-based sorting method: Data is first sorted by the values in the first column of `index_col_name` and by the values in the next column for the records with the same values in the first column, and so on. |
| expr | A valid function-based index expression. A Boolean expression, such as `c1=c1`, is allowed. **Notice**: Currently, you cannot create function-based indexes on generated columns in OceanBase Database. |
| ROW_FORMAT | Specifies whether to enable the encoding storage format. <br>- `redundant`: indicates that the encoding storage format is not enabled. <br>- `compact`: indicates that the encoding storage format is not enabled. <br>- `dynamic`: an encoding storage format. <br>- `compressed`: an encoding storage format. <br>- `default`: This value is equivalent to `dynamic`. |
-| \[GENERATED ALWAYS\] AS (expr) \[VIRTUAL \| STORED\] | Creates a generated column. `expr` specifies the expression used to calculate the column value. <br>- `VIRTUAL`: indicates that column values are not stored, but are immediately calculated after any `BEFORE` trigger fires when a row is read. Virtual columns do not occupy storage space. <br>- `STORED`: evaluates and stores column values when you insert or update a row. Stored columns occupy storage space and can be indexed. |
+| \[GENERATED ALWAYS\] AS (expr) \[VIRTUAL \| STORED\] | Creates a generated column. `expr` specifies the expression used to calculate the column value. <br>- `VIRTUAL`: indicates that column values are not stored, but are immediately calculated after any `BEFORE` trigger when a row is read. Virtual columns do not occupy storage space. <br>- `STORED`: evaluates and stores column values when you insert or update a row. Stored columns occupy storage space and can be indexed. |
| BLOCK_SIZE | The microblock size for the table. |
| lob_inrow_threshold | Sets the `INROW` threshold. LOB data sized greater than this threshold is converted to `OUTROW` and stored in the LOB meta table. The default value is 4KB. |
| COMPRESSION | The compression algorithm for the table. Valid values: <br>- `none`: indicates that no compression algorithm is used. <br>- `lz4_1.0`: indicates that the `lz4` compression algorithm is used. <br>- `zstd_1.0`: indicates that the `zstd` compression algorithm is used. <br>- `snappy_1.0`: indicates that the `snappy` compression algorithm is used. |
@@ -165,14 +174,15 @@ table_column_group_option/index_column_group_option:
| COLLATE | The default collation for columns in the table. For more information, see [Collations](../100.basic-elements-of-mysql-mode/300.character-set-and-collation-of-mysql-mode/300.collation-of-mysql-mode.md) |
| table_tablegroup | The table group to which the table belongs. |
| AUTO_INCREMENT | The start value of an auto-increment column in the table. OceanBase Database allows you to use auto-increment columns as the partitioning key. |
-| comment | The comment. The value is case-insensitive. |
+| COMMENT | The comment. The value is case-insensitive. |
| PCTFREE | The percentage of space reserved for macroblocks. |
| parallel_clause | The degree of parallelism (DOP) at the table level. <br>- `NOPARALLEL`: sets the DOP to `1`, which is the default value. <br>- `PARALLEL integer`: sets the DOP to an integer greater than or equal to `1`. |
| DUPLICATE_SCOPE | The replicated table attribute. Valid values: <br>- `none`: specifies that the table is a normal table. This is the default value. <br>- `cluster`: specifies that the table is a replicated table. The leader needs to replicate transactions to all full-featured replicas and read-only replicas of the current tenant. <br>Currently, OceanBase Database supports only cluster-level replicated tables. |
| CHECK | Specifies to restrict the range of values in the column. <br>- If you define a `CHECK` constraint on a single column, you can write this column-level constraint in the column definition and specify a name for this constraint. <br>- If you define a `CHECK` constraint on a table, this constraint is applied to multiple columns in the table and can appear before a column definition. When you drop the table, the `CHECK` constraint on the table is also dropped. <br>You can view constraint information in the following ways: <br>- Use the `SHOW CREATE TABLE` statement. <br>- Query the `information_schema.TABLE_CONSTRAINTS` view. <br>- Query the `information_schema.CHECK_CONSTRAINTS` view. |
| constraint_name | The name of the constraint, which contains at most 64 characters. <br>- Spaces are allowed at the beginning, in the middle, and at the end of a constraint name. However, the beginning and end of the constraint name must be identified with a backtick (\`). <br>- A constraint name can contain the dollar sign character ($). <br>- If a constraint name is a reserved word, it must be identified with a backtick (\`). Otherwise, an error is returned. <br>- `CHECK` constraint names must be unique in the same database. |
| expression | The expression of the constraint. <br>- `expression` cannot be empty. <br>- The result of `expression` must be of the Boolean data type. <br>- `expression` cannot contain a column that does not exist. |
-| table_column_group_option/index_column_group_option | The columnstore options for the table or index. The following options are supported: <br>- `WITH COLUMN GROUP(all columns, each column)`: specifies to create a rowstore-columnstore redundant table or index. <br>- `WITH COLUMN GROUP(each column)`: specifies to create a columnstore table or index. |
+| table_column_group_option/index_column_group_option | The columnstore options for the table or index. The following options are supported: <br>- `WITH COLUMN GROUP(all columns, each column)`: specifies to create a rowstore-columnstore redundant table or index. <br>- `WITH COLUMN GROUP(all columns)`: specifies to create a rowstore table or index. <br>- `WITH COLUMN GROUP(each column)`: specifies to create a columnstore table or index. |
+| SKIP_INDEX | The Skip Index attribute of the column. Valid values: <br>- `MIN_MAX`: specifies to store the maximum value, minimum value, and number of NULL values of the indexed column on an index node. This is the most common aggregation type in Skip Index and can accelerate the pushdown of filters and the `MIN` and `MAX` functions. <br>- `SUM`: used to accelerate the pushdown of the `SUM` function on data of numeric types. <br>**Notice**: <br>- The Skip Index attribute is not supported for a column of the JSON or spatial data type. <br>- The Skip Index attribute is not supported for a generated column. |
## Examples
@@ -341,7 +351,9 @@ table_column_group_option/index_column_group_option:
Query OK, 0 rows affected
```
- 5. (Optional) View the broadcast log stream. The replicated table is created on this log stream.
+ 5. (Optional) View the broadcast log stream.
+
+ The replicated table is created on this log stream.
```shell
obclient> SELECT * FROM oceanbase.DBA_OB_LS WHERE FLAG LIKE "%DUPLICATE%";
@@ -351,9 +363,11 @@ table_column_group_option/index_column_group_option:
| 1003 | NORMAL | z1;z2 | 0 | 0 | 1683267390195713284 | NULL | 1683337744205408139 | 1683337744205408139 | DUPLICATE |
+-------+--------+--------------+---------------+-------------+---------------------+----------+---------------------+---------------------+-----------+
1 row in set
- ```
- 6. (Optional) View the replica distribution of the replicated table in the sys tenant. The `REPLICA_TYPE` field indicates the replica type.
+
+ 6. (Optional) View the replica distribution of the replicated table in the sys tenant.
+
+ The `REPLICA_TYPE` field indicates the replica type.
```shell
obclient> SELECT * FROM oceanbase.CDB_OB_TABLE_LOCATIONS WHERE TABLE_NAME = "dup_t1";
@@ -370,7 +384,9 @@ table_column_group_option/index_column_group_option:
6 rows in set
```
- 7. Insert data into, read data from, and write data to the replicated table. If you connect to the database by using an OceanBase Database Proxy (ODP), the read request may be routed to any OBServer node. If you directly connect to an OBServer node, the read request is executed on the connected OBServer node as long as the local replica is readable.
+ 7. Insert data into, read data from, and write data to the replicated table.
+
+ If you connect to the database by using an OceanBase Database Proxy (ODP), the read request may be routed to any OBServer node. If you directly connect to an OBServer node, the read request is executed on the connected OBServer node as long as the local replica is readable.
```shell
obclient> INSERT INTO dup_t1 VALUES(1);
@@ -401,4 +417,20 @@ table_column_group_option/index_column_group_option:
```sql
CREATE TABLE tbl3_cg (col1 INT PRIMARY KEY, col2 INT, col3 INT, INDEX i1 (col2) WITH COLUMN GROUP(each column)) WITH COLUMN GROUP(each column);
- ```
\ No newline at end of file
+ ```
+
+* Specify the Skip Index attribute for columns while creating a table.
+
+ ```sql
+ CREATE TABLE test_skidx(
+ col1 INT SKIP_INDEX(MIN_MAX, SUM),
+ col2 FLOAT SKIP_INDEX(MIN_MAX),
+ col3 VARCHAR(1024) SKIP_INDEX(MIN_MAX),
+ col4 CHAR(10)
+ );
+ ```
+
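+As a rough illustration, queries like the following are the kind that the `MIN_MAX` and `SUM` attributes can accelerate, assuming the `test_skidx` table above:
+
+```sql
+SELECT MIN(col1), MAX(col1), SUM(col1) FROM test_skidx;
+```
+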
+## References
+
+* [Create a table](../../../../300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/200.create-a-table-for-mysql-tenant-of-mysql-mode.md)
+* [Modify a table](../../../../300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/600.change-table-of-mysql-mode.md)
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/5900.load-data-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/5900.load-data-of-mysql-mode.md
index 29b467c7cf..1a357b0a51 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/5900.load-data-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/5900.load-data-of-mysql-mode.md
@@ -24,7 +24,7 @@ OceanBase Database supports the following input files for the `LOAD DATA` statem
Note
- When executing `LOAD DATA LOCAL INFILE` in OceanBase Database, the system automatically adds the `IGNORE` option.
+ When you execute the `LOAD DATA LOCAL INFILE` statement in OceanBase Database, the system automatically adds the `IGNORE` option to the statement.
* Files in an OSS file system. You can execute the `LOAD DATA REMOTE_OSS INFILE` statement to load data from files in an OSS file system into database tables.
@@ -69,7 +69,7 @@ LOAD DATA
| APPEND | A hint for enabling the bypass import feature. This feature allows you to allocate space and write data to data files. By default, this hint is equivalent to `direct(false, 0)` and can collect statistics online like the `GATHER_OPTIMIZER_STATISTICS` hint does. |
| direct | A hint for enabling the bypass import feature. The `bool` parameter in `direct(bool, int)` specifies whether the given CSV file is ordered. The value `true` indicates that the file is ordered. The `int` parameter specifies the maximum number of error rows allowed. |
| REMOTE_OSS \| LOCAL | An optional parameter. <br>- `REMOTE_OSS`: specifies whether to read data from an OSS file system. **Notice**: If this parameter is specified, `file_name` must be an OSS address. <br>- `LOCAL`: specifies whether to read data from the file system of the local client. If you do not specify the `LOCAL` parameter, data will be read from the file system of an OBServer node. |
-| file_name | The path and file name of the input file. `file_name` can be in one of the following formats: <br>- For an input file on an OBServer node: `/$PATH/$FILENAME` <br>- For an input file in an OSS file system: `oss://$PATH/$FILENAME/?host=$HOST&access_id=$ACCESS_ID&access_key=$ACCESSKEY` <br>The parameters are described as follows: <br>- `$PATH`: the file path in the bucket, which represents the directory where the file is located. <br>- `$FILENAME`: the name of the file to be accessed. <br>- `$HOST`: the host name or CDN domain name of the OSS service, that is, the address of the OSS service to be accessed. <br>- `$ACCESS_ID`: the AccessKey ID required for authentication when accessing the OSS service. <br>- `$ACCESSKEY`: the AccessKey secret required for authentication when accessing the OSS service. <br>**Note**: When you import a file from OSS, make sure that: <br>- You have privileges to access the corresponding OSS bucket and file. That is, you can read data from the bucket and file. Usually, you need to set access privileges on the OSS console or through OSS APIs, and configure AccessKey information (AccessKey ID and AccessKey secret) as credentials with corresponding privileges. <br>- The database server can connect to the specified `$HOST` address to access the OSS service. The CDN is configured correctly and the network connection is normal if you want to use the CDN domain name to access the OSS service. |
+| file_name | The path and file name of the input file. `file_name` can be in one of the following formats: <br>- For an input file on an OBServer node or client: `/$PATH/$FILENAME` <br>- For an input file in an OSS file system: `oss://$PATH/$FILENAME/?host=$HOST&access_id=$ACCESS_ID&access_key=$ACCESSKEY` <br>The parameters are described as follows: <br>- `$PATH`: the file path in the bucket, which represents the directory where the file is located. <br>- `$FILENAME`: the name of the file to be accessed. <br>- `$HOST`: the host name or CDN domain name of the OSS service, that is, the address of the OSS service to be accessed. <br>- `$ACCESS_ID`: the AccessKey ID required for authentication when accessing the OSS service. <br>- `$ACCESSKEY`: the AccessKey secret required for authentication when accessing the OSS service. <br>**Note**: When you import a file from OSS, make sure that: <br>- You have privileges to access the corresponding OSS bucket and file. That is, you can read data from the bucket and file. Usually, you need to set access privileges on the OSS console or through OSS APIs, and configure AccessKey information (AccessKey ID and AccessKey secret) as credentials with corresponding privileges. <br>- The database server can connect to the specified `$HOST` address to access the OSS service. If you want to use the CDN domain name to access the OSS service, make sure that the CDN is configured correctly and the network connection is normal. |
| REPLACE \| IGNORE | If a unique key conflict occurs, `REPLACE` indicates that conflicting rows are overwritten, and `IGNORE` indicates that conflicting rows are ignored. The `LOAD DATA` statement checks whether a table contains duplicate data based on its primary key. If the table does not have a primary key, the `REPLACE` and `IGNORE` options are equivalent. If duplicate data exists, the `LOAD DATA` statement records the incorrect data to a log file by default. **Notice**: <br>- When you execute `LOAD DATA LOCAL INFILE` in the MySQL mode of OceanBase Database, the system automatically adds the `IGNORE` option. This behavior provides better compatibility with MySQL databases. <br>- If you use the `REPLACE` or `IGNORE` clause and set the DOP to a value greater than `1`, the last record inserted into the conflicting row may differ from the result of serial execution. If you need to strictly guarantee the insertion result of conflicting records, do not specify the DOP or set the DOP to `1` for the statement. |
| table_name | The name of the table from which data is imported. Partitioned and non-partitioned tables are supported. |
| FIELDS \| COLUMNS | The format of the field. <br>- `ENCLOSED BY`: specifies the modifier of the exported value. <br>- `TERMINATED BY`: specifies the end character of the exported column. <br>- `ESCAPED BY`: specifies the characters ignored for the exported value. |
@@ -78,9 +78,47 @@ LOAD DATA
| IGNORES number { LINES \| ROWS } | Specifies to ignore the first few lines. `LINES` indicates the first few lines of the file. `ROWS` indicates the first few rows of data specified by the field delimiter. By default, fields in the input file are mapped to columns in the destination table one by one. If the input file does not contain all the columns, the missing columns are filled based on the following mappings: <br>- Character data type: null string <br>- Numeric data type: 0 <br>- Date data type: `0000-00-00` |
| column_name_var | The name of the imported column. |
+### Considerations
+
+#### Use wildcards for bypass import
+
+In the `LOAD DATA` statement, the `direct` keyword is used as a hint to specify to use bypass import. When you import data in bypass mode, you can specify only one file in Alibaba Cloud Object Storage Service (OSS) as the file source. To import multiple files, you must execute the `LOAD DATA` statement repeatedly. When you import multiple data files from the file system of a cluster or from Alibaba Cloud OSS, you can use wildcards to specify the file names. This method is inapplicable when the file source is a client disk.
+
+**Import data from the file system of a cluster in bypass mode**
+
+- Here are some examples:
+
+ - Use wildcards to match file names: `load data /*+ parallel(20) direct(true, 0) */ infile '/xxx/test.*.csv' replace into table t1 fields terminated by '|';`
+
+ - Use wildcards to match a directory: `load data /*+ parallel(20) direct(true, 0) */ infile '/aaa*bb/test.1.csv' replace into table t1 fields terminated by '|';`
+
+ - Use wildcards to match a directory and file names: `load data /*+ parallel(20) direct(true, 0) */ infile '/aaa*bb/test.*.csv' replace into table t1 fields terminated by '|';`
+
+- Take note of the following considerations:
+
+ - At least one file must be matched. Otherwise, the error `OB_FILE_NOT_EXIST` will be returned.
+
+ - For `load data /*+ parallel(20) direct(true, 0) */ infile '/xxx/test.1*.csv,/xxx/test.6*.csv' replace into table t1 fields terminated by '|';`, `/xxx/test.1*.csv,/xxx/test.6*.csv` is matched as a whole. If no file or directory is matched, the error `OB_FILE_NOT_EXIST` will be returned.
+
+  - Only wildcards compatible with the GLOB function in Portable Operating System Interface (POSIX) are supported. For example, `test.6*(6|0).csv` and `test.6*({0.csv,6.csv}|.csv)` work with the `ls` command but are not supported by the GLOB function. Therefore, the error `OB_FILE_NOT_EXIST` will be returned.
+
+**Import data from Alibaba Cloud OSS in bypass mode**
+
+- Here is an example:
+
+ - Use wildcards to match file names: `load data /*+ parallel(20) direct(true, 0) */ remote_oss infile 'oss://xxx/test.*.csv?host=xxx&access_id=xxx&access_key=xxx' replace into table t1 fields terminated by '|';`
+
+- Take note of the following considerations:
+
+ - You cannot use wildcards to match a directory. For example, if you execute the statement `load data /*+ parallel(20) direct(true, 0) */ remote_oss infile 'oss://aa*bb/test.*.csv?host=xxx&access_id=xxx&access_key=xxx' replace into table t1 fields terminated by '|';`, the error `OB_NOT_SUPPORTED` will be returned.
+
+  - Only the asterisk (`*`) and question mark (`?`) are supported as wildcards for file names. You can enter other wildcards, but they will not match any files.
+
+For more information, see [Import data in bypass mode by using the LOAD DATA statement](../../../../../500.data-migration/1100.bypass-import/200.use-load-data-statement-to-bypass-import-data.md).
+
## Examples
-### Import data from a file on an OBServer node
+### Import data from a file on the server
Example 1: Import data from a file on an OBServer node.
@@ -95,7 +133,7 @@ Example 1: Import data from a file on an OBServer node.
Note
- Because `secure_file_priv` is a `GLOBAL` variable, you need to run `\q` to exit for the settings to take effect.
+ `secure_file_priv` is a `GLOBAL` variable. Therefore, you need to run `\q` to exit for the settings to take effect.
2. After you reconnect to the database, import data from an external file.
@@ -119,7 +157,7 @@ Example 3: Use the `direct(bool, int)` hint to enable the bypass import feature.
load data /*+ parallel(1) direct(false,0)*/ remote_oss infile 'oss://antsys-oceanbasebackup/backup_rd/xiaotao.ht/lineitem2.tbl?host=***.oss-cdn.***&access_id=***&access_key=***' into table lineitem fields terminated by '|' enclosed by '' lines starting by '' terminated by '\n';
```
-### Import data from a file on the local client
+### Import data from a local file on the client
Example 4: Import data from a local file to a table in OceanBase Database.
@@ -145,10 +183,10 @@ Example 4: Import data from a local file to a table in OceanBase Database.
Notice
- To execute the `LOAD DATA LOCAL INFILE` statement, you must use OBClient V2.2.4 or later. If you do not have OBClient of the desired version, you can use the MySQL client to connect to the database.
+ To use the `LOAD DATA LOCAL INFILE` feature, use OBClient V2.2.4 or later. If you do not have OBClient of the required version, you can also use a MySQL client to connect to OceanBase Database.
-2. In the client, execute the `LOAD DATA LOCAL INFILE` statement to load the local data file.
+2. Execute the `LOAD DATA LOCAL INFILE` statement on the client to load data from a local file.
```shell
obclient [test]> LOAD DATA LOCAL INFILE '/home/admin/test_data/tbl1.csv' INTO TABLE tbl1 FIELDS TERMINATED BY ',';
@@ -172,5 +210,5 @@ LOAD DATA /*+ direct(true,1024) parallel(16) */ REMOTE_OSS INFILE 'oss://antsys-
## References
* For more information about how to connect to OceanBase Database, see [Overview](../../../../../300.develop/100.application-development-of-mysql-mode/100.connect-to-oceanbase-database-of-mysql-mode/100.connection-methods-overview-of-mysql-mode.md).
-* For more examples of the `LOAD DATA` statement, see [Import data by using the LOAD DATA statement](../../../../../500.data-migration/700.migrate-data-from-csv-file-to-oceanbase-database/200.use-the-load-command-to-load-the-csv-data-file-to-the-oceanbase-database.md).
-* For more bypass import examples of the `LOAD DATA` statement, see [Import data in bypass mode by using the LOAD DATA statement](../../../../../500.data-migration/1100.bypass-import/200.use-load-data-statement-to-bypass-import-data.md).
+* For more information about how to use the `LOAD DATA` statement, see [Import data by using the LOAD DATA statement](../../../../../500.data-migration/700.migrate-data-from-csv-file-to-oceanbase-database/200.use-the-load-command-to-load-the-csv-data-file-to-the-oceanbase-database.md).
+* For more bypass import examples of the `LOAD DATA` statement, see [Import data in bypass mode by using the LOAD DATA statement](../../../../../500.data-migration/1100.bypass-import/200.use-load-data-statement-to-bypass-import-data.md).
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1000.alter-table-of-oracle-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1000.alter-table-of-oracle-mode.md
index 7f4aed14c1..88033bec7e 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1000.alter-table-of-oracle-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1000.alter-table-of-oracle-mode.md
@@ -63,8 +63,14 @@ column_definition_list:
column_definition:
column_name data_type
[DEFAULT const_value] [AUTO_INCREMENT]
- [NULL | NOT NULL] [[PRIMARY] KEY] [UNIQUE [KEY]] comment
+ [NULL | NOT NULL] [[PRIMARY] KEY] [UNIQUE [KEY]] [COMMENT string_value] [SKIP_INDEX(skip_index_option_list)]
+skip_index_option_list:
+ skip_index_option [,skip_index_option ...]
+
+skip_index_option:
+ MIN_MAX
+ | SUM
column_desc_list:
column_desc [, column_desc ...]
@@ -81,7 +87,7 @@ index_option:
| block_size
| compression
| STORING(column_name_list)
- | comment
+ | COMMENT string_value
table_option_list:
table_option [ table_option ...]
@@ -91,7 +97,7 @@ table_option:
| block_size
| compression
| AUTO_INCREMENT [=] INT_VALUE
- | comment
+ | COMMENT string_value
| parallel_clause
parallel_clause:
@@ -188,7 +194,7 @@ partition_count | subpartition_count:
INT_VALUE
```
-## Syntax description
+## Parameters
| Parameter | Description |
|------------------------------------------------|----------------------------------------------------------------------------------------|
@@ -218,7 +224,8 @@ partition_count | subpartition_count:
| {ENABLE \| DISABLE} CONSTRAINT constraint_name | Enables or disables the `FOREIGN KEY` constraint or `CHECK` constraint. |
| MODIFY PRIMARY KEY | Modifies a primary key. |
| ADD COLUMN GROUP([all columns,]each column) | Converts a table to a columnstore table. The following options are supported: <br>- `ADD COLUMN GROUP(all columns, each column)`: converts the table to a rowstore-columnstore redundant table. <br>- `ADD COLUMN GROUP(each column)`: converts the table to a columnstore table. |
-| DROP COLUMN GROUP([all columns,]each column) | Drops the columnstore attribute of a table. The following options are supported: <br>- `DROP COLUMN GROUP(all columns, each column)`: drops the rowstore-columnstore redundancy attribute of the table. <br>- `DROP COLUMN GROUP(each column)`: drops the columnstore attribute of the table. |
+| DROP COLUMN GROUP([all columns,]each column) | Drops the storage format of a table. The following options are supported: <br>- `DROP COLUMN GROUP(all columns, each column)`: drops the rowstore-columnstore redundant format of the table. <br>- `DROP COLUMN GROUP(all columns)`: drops the rowstore format of the table. <br>- `DROP COLUMN GROUP(each column)`: drops the columnstore format of the table. |
+| SKIP_INDEX | Modifies the Skip Index attribute of a column. Valid values: <br>- `MIN_MAX`: specifies to store the maximum value, minimum value, and number of NULL values of the indexed column on an index node. This is the most common aggregation type in Skip Index and can accelerate the pushdown of filters and the `MIN` and `MAX` functions. <br>- `SUM`: used to accelerate the pushdown of the `SUM` function on data of numeric types. <br>**Notice**: <br>- The Skip Index attribute is not supported for a column of the JSON or spatial data type. <br>- The Skip Index attribute is not supported for a generated column. |
## Examples
@@ -353,7 +360,9 @@ partition_count | subpartition_count:
Query OK, 0 rows affected
```
-* Add the `p4` partition to a non-template-based subpartitioned table named `tbl3`. You need to specify both the partition definition and the subpartition definition.
+* Add the `p4` partition to a non-template-based subpartitioned table named `tbl3`.
+
+ You need to specify both the partition definition and the subpartition definition.
```sql
obclient> ALTER TABLE tbl3 ADD PARTITION p4 VALUES LESS THAN (400)
@@ -365,7 +374,9 @@ partition_count | subpartition_count:
Query OK, 0 rows affected
```
-* Add the `p3` partition to a template-based subpartitioned table named `tbl4`. You need to specify only the partition definition. The subpartition definition is filled in automatically based on the template.
+* Add the `p3` partition to a template-based subpartitioned table named `tbl4`.
+
+ You need to specify only the partition definition. The subpartition definition is filled in automatically based on the template.
```sql
obclient> CREATE TABLE tbl4(col1 INT, col2 INT, PRIMARY KEY(col1,col2))
@@ -648,7 +659,7 @@ partition_count | subpartition_count:
ALTER TABLE tbl1 DROP COLUMN GROUP(all columns, each column);
```
- 3. Convert the `tbl1` table to a columnstore table, and then drop the columnstore attribute.
+ 3. Convert the `tbl1` table to a columnstore table, and then delete the columnstore attribute.
```sql
ALTER TABLE tbl1 ADD COLUMN GROUP(each column);
@@ -656,4 +667,39 @@ partition_count | subpartition_count:
```sql
ALTER TABLE tbl1 DROP COLUMN GROUP(each column);
- ```
\ No newline at end of file
+ ```
+
+* Modify the Skip Index attribute for columns in a table.
+
+ 1. Execute the following statement to create a table named `test_skidx`:
+
+ ```sql
+ CREATE TABLE test_skidx(
+ col1 NUMBER SKIP_INDEX(MIN_MAX, SUM),
+ col2 FLOAT SKIP_INDEX(MIN_MAX),
+ col3 VARCHAR2(1024) SKIP_INDEX(MIN_MAX),
+ col4 CHAR(10)
+ );
+ ```
+
+ 2. Change the Skip Index attribute of the `col2` column in the `test_skidx` table to `SUM`.
+
+ ```sql
+ ALTER TABLE test_skidx MODIFY col2 FLOAT SKIP_INDEX(SUM);
+ ```
+
+  3. Add the `MIN_MAX` Skip Index attribute to the `col4` column in the `test_skidx` table.
+
+ ```sql
+ ALTER TABLE test_skidx MODIFY col4 CHAR(10) SKIP_INDEX(MIN_MAX);
+ ```
+
+ 4. Delete the Skip Index attribute for the `col1` column in the `test_skidx` table.
+
+ ```sql
+ ALTER TABLE test_skidx MODIFY col1 NUMBER SKIP_INDEX();
+ ```
+
+## References
+
+[Modify a table](../../../../../300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/600.change-table-of-oracle-mode.md)
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1600.create-index-of-oracle-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1600.create-index-of-oracle-mode.md
index 2e972d2dbe..7055f3bc8a 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1600.create-index-of-oracle-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1600.create-index-of-oracle-mode.md
@@ -38,7 +38,7 @@ opt_null_pos:
| NULLS FIRST
index_option:
- Global
+ GLOBAL
| LOCAL
| BLOCK_SIZE [=] integer
| COMMENT STRING_VALUE
@@ -70,26 +70,26 @@ index_column_group_option:
| index_name | The name of the index to be created. |
| USING BTREE | Optional. Specifies to create a B-tree index. **Note**: OceanBase Database supports only `USING BTREE`. |
| table_name | The table on which the index is created. You can directly specify the table name or specify the table name and the name of the database to which the table belongs in the `schema_name.table_name` format. |
-| sort_column_key | The column as the sort key. You can specify multiple columns and separate them with commas (`,`). For more information, see [sort_column_key](#sort_column_key). |
-| index_option | The index option. You can specify multiple index options and separate them with spaces. For more information, see [index_option](#index_option) below. |
+| sort_column_key | The key of a sort column. You can specify multiple sort columns for an index and separate them with commas (`,`). For more information, see [sort_column_key](#sort_column_key). |
+| index_option | The index options. You can specify multiple index options for an index and separate them with spaces. For more information, see [index_option](#index_option). |
| partition_option | The index partitioning option. You can specify HASH partitioning, RANGE partitioning, LIST partitioning, or external table partitioning. |
-| index_column_group_option | The index option. For more information, see [index_column_group_option](#index_column_group_option) below. |
+| index_column_group_option | The index options. For more information, see [index_column_group_option](#index_column_group_option). |
### sort_column_key
* `index_expr`: the sort column or expression, such as `c1=c1`. Boolean expressions are not allowed. Currently, you cannot create function-based indexes on generated columns in OceanBase Database. For more information about the expressions supported by function-based indexes, see [System functions supported for function-based indexes](../../../../../300.database-object-management/200.manage-object-of-oracle-mode/400.manage-indexes-of-oracle-mode/500.function-index-list-of-supported-functions-of-oracle-mode.md).
-* `ASC`: optional. The ascending order. Currently, the descending order is not supported.
+* `ASC`: Optional. Specifies the ascending order. The descending order is not supported.
-* `opt_null_pos`: the position of NULL values after sorting. Valid values:
+* `opt_null_pos`: The position of NULL values after sorting. Valid values:
* `empty`: The position is not specified, and the default behavior of the database management system is performed.
- * `NULLS LAST`: specifies to sort NULL values after non-NULL values.
+ * `NULLS LAST`: Specifies to sort NULL values after non-NULL values.
- * `NULLS FIRST`: specifies to sort NULL values before non-NULL values.
+ * `NULLS FIRST`: Specifies to sort NULL values before non-NULL values.
-* `ID id`: optional. The ID of the sort key.
+* `ID id`: Optional. The ID of the sort key.
The following sample statement creates an index named `index3` on the `t3` table, sorts the index by the `c1` column in ascending order, and specifies to sort NULL values after non-NULL values.
@@ -103,13 +103,13 @@ CREATE INDEX index3 ON t3 (c1 ASC NULLS LAST);
* `LOCAL`: specifies to create a local index.
-* `BLOCK_SIZE [=] integer`: the size of an index block, that is, the number of bytes in each index block.
+* `BLOCK_SIZE [=] integer`: the size of an index block. In other words, the number of bytes in each index block.
* `COMMENT STRING_VALUE`: adds a comment to the index.
-* `STORING (column_name_list)`: the columns to be stored in the index.
+* `STORING (column_name_list)`: the columns to be redundantly stored in the index.
-* `WITH ROWID`: creates an index that contains the row ID.
+* `WITH ROWID`: specifies to create an index that contains the row ID.
* `WITH PARSER STRING_VALUE`: the parser required for the index.
@@ -121,7 +121,7 @@ CREATE INDEX index3 ON t3 (c1 ASC NULLS LAST);
* `INDEX_TABLE_ID [=] index_table_id`: the ID of the index table.
-* `MAX_USED_PART_ID [=] used_part_id`: the maximum partition ID allowed for the index.
+* `MAX_USED_PART_ID [=] used_part_id`: the ID of the maximum used partition of the index.
* `physical_attributes_option`: the physical attributes of the index.
@@ -139,15 +139,15 @@ CREATE INDEX index3 ON t3 (c1 ASC NULLS LAST);
## Examples
-Create a columnstore index on a table.
+Create a columnstore index for a table.
-1. Create a table named `test_tbl1`.
+1. Use the following SQL statement to create a table named `test_tbl1`.
```sql
CREATE TABLE test_tbl1 (col1 NUMBER, col2 VARCHAR2(50));
```
-2. On the `test_tbl1` table, create a columnstore index named `idx1_test_tbl1`, which references the `col1` column.
+2. Create a columnstore index named `idx1_test_tbl1` on the `test_tbl1` table and reference the `col1` column.
```sql
CREATE INDEX idx1_test_tbl1 ON test_tbl1 (col1) WITH COLUMN GROUP(each column);
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/2400.create-table-of-oracle-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/2400.create-table-of-oracle-mode.md
index d47296402f..2b1dda7d70 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/2400.create-table-of-oracle-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/2400.create-table-of-oracle-mode.md
@@ -44,9 +44,16 @@ column_definition:
[CONSTRAINT [constraint_name] references_clause]
|
[GENERATED ALWAYS] AS (expression) [VIRTUAL]
- [NULL | NOT NULL] [UNIQUE KEY] [[PRIMARY] KEY] [UNIQUE LOWER_KEY] [COMMENT string]
+ [NULL | NOT NULL] [UNIQUE KEY] [[PRIMARY] KEY] [UNIQUE LOWER_KEY] [COMMENT string] [SKIP_INDEX(skip_index_option_list)]
}
+skip_index_option_list:
+ skip_index_option [,skip_index_option ...]
+
+skip_index_option:
+ MIN_MAX
+ | SUM
+
references_clause:
REFERENCES table_name [ (column_name, column_name ...) ] [ON DELETE {SET NULL | CASCADE}]
@@ -196,7 +203,9 @@ on_commit_option:
| ON COMMIT PRESERVE ROWS
table_column_group_option:
- WITH COLUMN GROUP([all columns, ]each column)
+ WITH COLUMN GROUP(all columns)
+ | WITH COLUMN GROUP(each column)
+ | WITH COLUMN GROUP(all columns, each column)
```
## Parameters
@@ -216,9 +225,10 @@ table_column_group_option:
| ENABLE/DISABLE ROW MOVEMENT | Specifies whether to allow movement between partitions for partitioning key updates. |
| ON COMMIT DELETE ROWS | Deletes data upon commit for a transaction-level temporary table. |
| ON COMMIT PRESERVE ROWS | Deletes data upon the end of the session for a session-level temporary table. |
-| parallel_clause | The degree of parallelism (DOP) at the table level. <br>- `NOPARALLEL`: sets the DOP to 1, which is the default value. <br>- `PARALLEL integer`: sets the DOP to an integer greater than or equal to `1`. <br>**Notice**: When you specify the DOP, the following priority order applies: DOP specified by using a hint \> DOP specified in `ALTER SESSION` \> DOP at the table level. |
+| parallel_clause | The degree of parallelism (DOP) at the table level. <br>- `NOPARALLEL`: sets the DOP to 1, which is the default value. <br>- `PARALLEL integer`: sets the DOP to an integer greater than or equal to `1`. <br>**Notice**: When you specify the DOP, the following priority order applies: DOP specified by using a hint \> DOP specified in `ALTER SESSION` \> DOP at the table level. |
| DUPLICATE_SCOPE | The replicated table attribute. Valid values: <br>- `none`: specifies that the table is a normal table. This is the default value. <br>- `cluster`: specifies that the table is a replicated table. The leader needs to replicate transactions to all full-featured replicas and read-only replicas of the current tenant. <br>Currently, OceanBase Database supports only cluster-level replicated tables. |
-| table_column_group_option | The columnstore option for the table. The following options are supported: <br>- `WITH COLUMN GROUP(all columns, each column)`: specifies to create a rowstore-columnstore redundant table. <br>- `WITH COLUMN GROUP(each column)`: specifies to create a columnstore table. |
+| table_column_group_option | The columnstore option for the table. The following options are supported: <br>- `WITH COLUMN GROUP(all columns, each column)`: specifies to create a rowstore-columnstore redundant table. <br>- `WITH COLUMN GROUP(all columns)`: specifies to create a rowstore table. <br>- `WITH COLUMN GROUP(each column)`: specifies to create a columnstore table. |
+| SKIP_INDEX | The Skip Index attribute of the column. Valid values: <br>- `MIN_MAX`: specifies to store the maximum value, minimum value, and number of NULL values of the indexed column on an index node. This is the most common aggregation type in Skip Index and can accelerate the pushdown of filters and the `MIN` and `MAX` functions. <br>- `SUM`: used to accelerate the pushdown of the `SUM` function on data of numeric types. <br>**Notice**: <br>- The Skip Index attribute is not supported for a column of the JSON or spatial data type. <br>- The Skip Index attribute is not supported for a generated column. |
## Examples
@@ -240,7 +250,6 @@ table_column_group_option:
Query OK, 0 rows affected
```
-
* Create a HASH-partitioned table with eight partitions.
```sql
@@ -370,9 +379,20 @@ table_column_group_option:
CREATE TABLE tbl1_cg (col1 NUMBER PRIMARY KEY, col2 VARCHAR2(50)) WITH COLUMN GROUP(each column);
```
+* Specify the Skip Index attribute for columns while creating a table.
+
+ ```sql
+ CREATE TABLE test_skidx(
+ col1 NUMBER SKIP_INDEX(MIN_MAX, SUM),
+ col2 FLOAT SKIP_INDEX(MIN_MAX),
+ col3 VARCHAR2(1024) SKIP_INDEX(MIN_MAX),
+ col4 CHAR(10)
+ );
+ ```
+
## Limitations on global temporary tables in Oracle mode
-* With basic data correctness and functionality guarantees, temporary tables in OceanBase Database in Oracle mode are used in a lot of scenarios for migration from Oracle to OceanBase Database.
+* Temporary tables are used in many database upgrade scenarios in Oracle mode, with basic correctness and functionality ensured.
* Generally, temporary tables are used for compatibility purposes and involve little new business design. You can use temporary tables in limited business scenarios with lower performance requirements. If the business scenarios support normal tables, we recommend that you use normal tables.
### Performance and stability
@@ -473,4 +493,4 @@ This way, you have changed SQL statements involving temporary tables to dynamic
### Data is not cleaned up due to a fault
-Data may not be fully cleaned up due to a fault. Currently, OceanBase Database does not support automatic cleanup in this case. Generally, this problem does not affect the use of the database. To remove excessive residual data, you can drop the temporary table and rebuild it.
+Data may not be fully cleaned up due to a fault. Currently, OceanBase Database does not support automatic cleanup in this case. Generally, this problem does not affect the use of the database. To remove excessive residual data, you can drop the temporary table and rebuild it.
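+
+A minimal sketch of the rebuild, assuming a session-level temporary table named `tmp_t1`:
+
+```sql
+DROP TABLE tmp_t1;
+CREATE GLOBAL TEMPORARY TABLE tmp_t1 (c1 NUMBER) ON COMMIT PRESERVE ROWS;
+```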
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/4600.alter-system-major-freeze-of-oracle-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/4600.alter-system-major-freeze-of-oracle-mode.md
index 4312fbca79..d1c76de001 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/4600.alter-system-major-freeze-of-oracle-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/4600.alter-system-major-freeze-of-oracle-mode.md
@@ -5,23 +5,22 @@
| dir-name-en | |
| tenant-type | Oracle Mode |
-# ALTER SYSTEM MAJOR FREEZE
+# MAJOR and MINOR
## Purpose
-You can use this statement to initiate a major compaction in the storage layer for a user tenant.
+You can use this statement to initiate a major or minor compaction in the storage layer for the current user tenant. You can initiate the compaction at the tenant or partition level.
## Syntax
-```sql
-alter_system_merge_stmt:
- ALTER SYSTEM merge_action;
+```shell
+ALTER SYSTEM merge_action;
merge_action:
- MAJOR FREEZE
+ MAJOR FREEZE [TABLET_ID = tablet_id]
+ | MINOR FREEZE [TABLET_ID = tablet_id]
| {SUSPEND | RESUME} MERGE
| CLEAR MERGE ERROR
-
```
## Parameters
@@ -29,35 +28,63 @@ merge_action:
| **Parameter** | **Description** |
|---------------------------|------------------|
| MAJOR FREEZE | Initiates a major compaction. **Note**: A user tenant can initiate a major compaction only for the current tenant. |
+| MINOR FREEZE | Initiates a minor compaction. **Note**: A user tenant can initiate a minor compaction only for the current tenant. |
| {SUSPEND \| RESUME} MERGE | Suspends or resumes a major compaction. **Note**: A user tenant can suspend or resume a major compaction only for the current tenant. |
| CLEAR MERGE ERROR | Removes major compaction error tags. **Note**: A user tenant can remove major compaction error tags only in the current tenant. |
+| TABLET_ID | The tablet for which a major or minor compaction is to be initiated. |
+
## Examples
+### Major compactions in the storage layer
+
* Initiate a major compaction for a user tenant.
- ```sql
+ ```shell
obclient> ALTER SYSTEM MAJOR FREEZE;
Query OK, 0 rows affected
```
+* Initiate a major compaction at the partition level for a user tenant.
+
+ ```shell
+ obclient> ALTER SYSTEM MAJOR FREEZE TABLET_ID = 5;
+ Query OK, 0 rows affected
+ ```
+
* Suspend the major compaction of a user tenant.
- ```sql
+ ```shell
obclient> ALTER SYSTEM SUSPEND MERGE;
Query OK, 0 rows affected
```
* Resume the major compaction of a user tenant.
- ```sql
+ ```shell
obclient> ALTER SYSTEM RESUME MERGE;
Query OK, 0 rows affected
```
* Remove the major compaction error tags of a user tenant.
- ```sql
+ ```shell
obclient> ALTER SYSTEM CLEAR MERGE ERROR;
Query OK, 0 rows affected
```
+
+### Minor compactions in the storage layer
+
+* Initiate a minor compaction for a user tenant.
+
+ ```shell
+ obclient> ALTER SYSTEM MINOR FREEZE;
+ Query OK, 0 rows affected
+ ```
+
+* Initiate a minor compaction at the partition level for a user tenant.
+
+ ```shell
+ obclient> ALTER SYSTEM MINOR FREEZE TABLET_ID = 5;
+ Query OK, 0 rows affected
+ ```
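+
+To find the ID of the tablet to freeze, you can first query the tablet metadata. A sketch, assuming the `DBA_OB_TABLE_LOCATIONS` view is available in the current Oracle-mode tenant and the table is named `T1`:
+
+```shell
+obclient> SELECT TABLE_NAME, TABLET_ID FROM SYS.DBA_OB_TABLE_LOCATIONS WHERE TABLE_NAME = 'T1';
+```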
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/100.alter-function-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/100.alter-function-mysql.md
index 0432652370..f093c5af63 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/100.alter-function-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/100.alter-function-mysql.md
@@ -47,4 +47,4 @@ Here is an example:
```sql
obclient> ALTER FUNCTION my_func LANGUAGE SQL READS SQL DATA COMMENT 'Example';
Query OK, 0 rows affected
-```
+```
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/1000.drop-trigger-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/1000.drop-trigger-mysql.md
index 8a10404b6f..b2b8a682a7 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/1000.drop-trigger-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/1000.drop-trigger-mysql.md
@@ -26,4 +26,4 @@ Here is an example:
```sql
obclient> DROP TRIGGER IF EXISTS test_trg;
Query OK, 0 rows affected
-```
+```
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/200.alter-procedure-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/200.alter-procedure-mysql.md
index 445a1ea612..dcbb0fc8a6 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/200.alter-procedure-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/200.alter-procedure-mysql.md
@@ -44,4 +44,4 @@ Here is an example:
```sql
obclient> ALTER PROCEDURE proc_name LANGUAGE SQL READS SQL DATA SQL SECURITY INVOKER COMMENT 'Example';
Query OK, 0 rows affected
-```
+```
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/500.create-function-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/500.create-function-mysql.md
index 14e9de6e24..646a0bd857 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/500.create-function-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/500.create-function-mysql.md
@@ -36,11 +36,12 @@ routine_body:
Valid SQL routine statement
```
+
By default, stored functions are associated with the default database. To associate a stored function with a specified database, use `database_name.sp_name` to specify the database name.
To call a stored function, reference it in an expression. This function will return a value during expression evaluation.
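+
+As a minimal sketch (assuming a stored function named `my_func` that takes one numeric argument already exists), you can reference it like any built-in function:
+
+```sql
+obclient> SELECT my_func(1);
+```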
-To use the `CREATE FUNCTION` statement, you need to have the `CREATE ROUTINE` privilege. By default, OceanBase Database automatically grants the `ALTER ROUTINE` and `EXECUTE` privileges to the creator of routines (stored procedures and functions). If there is a `DEFINER` clause, the required privileges depend on the value of `user`.
+To use the `CREATE FUNCTION` statement, you must have the `CREATE ROUTINE` privilege. By default, OceanBase Database automatically grants the `ALTER ROUTINE` and `EXECUTE` privileges to the creator of a stored routine such as a stored procedure or a stored function. If the `DEFINER` clause is used, the required privileges depend on the value of `user`.
The `DEFINER` and `SQL SECURITY` clauses specify the security context used for checking the access privileges when the routine is executed.
@@ -93,4 +94,4 @@ A server processes the data types of routine parameters, local variables created
The `character_set_database` and `collation_database` system variables specify the character set and collation for the database.
-For examples of creating stored functions, see [Stored functions](../200.storage-object-mysql/400.pl-storage-function-mysql.md).
+For examples of creating stored functions, see [Stored functions](../200.storage-object-mysql/400.pl-storage-function-mysql.md).
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/600.create-procedure-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/600.create-procedure-mysql.md
index 021f0ffe72..38550a9ca0 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/600.create-procedure-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/600.create-procedure-mysql.md
@@ -7,7 +7,6 @@
# CREATE PROCEDURE
-## Purpose
You can use the `CREATE PROCEDURE` statement to create a stored procedure.
@@ -41,7 +40,7 @@ By default, stored procedures are associated with the default database. To assoc
To call a stored procedure, use the `CALL` statement. For more information, see [CALL](../500.pl-manipulation-statement-mysql/100.CALL-mysql.md).
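+
+A minimal sketch (assuming a stored procedure named `proc_name` with no parameters already exists):
+
+```sql
+obclient> CALL proc_name();
+```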
-To use the `CREATE PROCEDURE` statement, you need to have the `CREATE PROCEDURE` privilege. By default, OceanBase Database automatically grants the `ALTER ROUTINE` and `EXECUTE` privileges to the creator of routines (stored procedures and functions). If there is a `DEFINER` clause, the required privileges depend on the value of `user`.
+To use the `CREATE PROCEDURE` statement, you must have the `CREATE PROCEDURE` privilege. By default, OceanBase Database automatically grants the `ALTER ROUTINE` and `EXECUTE` privileges to the creator of a stored routine such as a stored procedure or a stored function. If the `DEFINER` clause is used, the required privileges depend on the value of `user`.
The `DEFINER` and `SQL SECURITY` clauses specify the security context used for checking the access privileges when the routine is executed.
@@ -96,6 +95,7 @@ BEGIN
END;
```
+
A server processes the data types of routine parameters, local variables created by using the `DECLARE` statement, and return values of functions as follows:
* Check for data type mismatch and overflow. Conversion and overflow will cause warnings or errors in strict SQL mode.
@@ -110,4 +110,4 @@ A server processes the data types of routine parameters, local variables created
The `character_set_database` and `collation_database` system variables specify the character set and collation for the database.
-For examples of creating stored procedures, see [Stored procedures](../200.storage-object-mysql/300.pl-stored-procedure-mysql.md).
+For examples of creating stored procedures, see [Stored procedures](../200.storage-object-mysql/300.pl-stored-procedure-mysql.md).
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/700.create-trigger-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/700.create-trigger-mysql.md
index 07245802a6..11da7f72a9 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/700.create-trigger-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/700.create-trigger-mysql.md
@@ -74,4 +74,4 @@ The `DEFINER` user is involved when trigger privileges are checked.
In the trigger body, the `CURRENT_USER()` function is used to indicate that the account used for privilege check when the trigger fires is the `DEFINER` user but not the user whose action fires the trigger.
-For examples of creating triggers, see [Triggers](../200.storage-object-mysql/500.pl-trigger-mysql.md).
+For examples of creating triggers, see [Triggers](../200.storage-object-mysql/500.pl-trigger-mysql.md).
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/800.drop-function-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/800.drop-function-mysql.md
index 8cde5fd7e6..8b6898e981 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/800.drop-function-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/800.drop-function-mysql.md
@@ -25,4 +25,4 @@ Here is an example:
```sql
obclient> DROP FUNCTION IF EXISTS my_func;
Query OK, 0 rows affected
-```
+```
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/900.drop-procedure-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/900.drop-procedure-mysql.md
index 8e6e46e68c..ddf9d2b5da 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/900.drop-procedure-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/600.sql-statements-for-pl-stored-programs-mysql/900.drop-procedure-mysql.md
@@ -25,4 +25,4 @@ Here is an example:
```sql
obclient> DROP PROCEDURE IF EXISTS proc_name;
Query OK, 0 rows affected
-```
+```
\ No newline at end of file
From 112fe11df155ced15253167e95786c41fb146fc4 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Tue, 16 Apr 2024 17:37:03 +0800
Subject: [PATCH 27/63] v430-beta-200.pl-mysql-update-2
---
.menu_en.yml | 1 +
.../100.system-package-overview-mysql.md | 3 +-
.../100.dbms-mview-stat-overview-mysql.md | 4 +-
.../200.purge-refresh-stats-mysql.md | 4 +-
.../300.set-mvref-stats-params-mysql.md | 4 +-
.../400.set-system-default-mysql.md | 4 +-
.../100.dbms-stats-overview-mysql.md | 7 ++-
.../1000.drop-stat-table-mysql.md | 2 +-
.../1100.export-column-stats-mysql.md | 2 +-
.../1200.export-index-stats-mysql.md | 2 +-
.../1300.export-table-stats-mysql.md | 2 +-
.../1400.export-schema-stats-mysql.md | 2 +-
...00.flush-database-monitoring-info-mysql.md | 2 +-
.../1600.gather-index-stats-mysql.md | 3 +-
.../1700.gather-table-stats-mysql.md | 5 +-
.../1800.gather-schema-stats-mysql.md | 4 +-
.../1850.gather-system-stats-mysql.md | 38 +++++++++++
...200.alter-stats-history-retention-mysql.md | 4 +-
.../2100.get-param-mysql.md | 6 +-
.../2200.get-prefs-mysql.md | 4 +-
.../2300.import-index-stats-mysql.md | 6 +-
.../2400.import-column-stats-mysql.md | 6 +-
.../2500.import-table-stats-mysql.md | 9 +--
.../2600.import-schema-stats-mysql.md | 5 +-
.../2700.lock-partition-stats-mysql.md | 4 +-
.../300.create-stat-table-mysql.md | 6 +-
.../3000.restore-table-stats-mysql.md | 5 +-
.../3100.restore-schema-stats-mysql.md | 6 +-
.../3200.reset-global-pref-defaults-mysql.md | 2 -
.../3400.purge-stats-mysql.md | 6 +-
.../3500.set-column-stats-mysql.md | 6 +-
.../3600.set-index-stats-mysql.md | 6 +-
.../3700.set-table-stats-mysql.md | 6 +-
.../3800.set-global-prefs-mysql.md | 9 +--
.../3900.set-param-mysql.md | 6 +-
.../400.delete-column-stats-mysql.md | 7 +--
.../4000.set-schema-prefs-mysql.md | 9 +--
.../4050.set-system-stats-mysql.md | 59 +++++++++++++++++
.../4100.set-table-prefs-mysql.md | 5 +-
.../500.delete-index-stats-mysql.md | 5 +-
.../600.delete-table-stats-mysql.md | 7 +--
.../700.delete-schema-stats-mysql.md | 4 +-
.../800.delete-schema-prefs-mysql.md | 3 +-
.../860.delete-system-stats-mysql.md | 38 +++++++++++
.../900.delete-table-prefs-mysql.md | 3 +-
...sted-certificate-manager-overview-mysql.md | 29 +++++++++
.../200.add-trusted-certificate-mysql.md | 63 +++++++++++++++++++
.../300.delete-trusted-certificat-mysql.md | 34 ++++++++++
.../400.update-trusted-certificat-mysql.md | 63 +++++++++++++++++++
.../100.dbms-mview-overview-mysql.md | 2 +-
.../200.purge-log-mysql.md | 2 +-
.../300.refresh-mysql.md | 2 +-
52 files changed, 385 insertions(+), 141 deletions(-)
create mode 100644 en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1850.gather-system-stats-mysql.md
create mode 100644 en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/4050.set-system-stats-mysql.md
create mode 100644 en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/860.delete-system-stats-mysql.md
create mode 100644 en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/16000.dbms-trusted-certificate-manager-mysql/100.dbms-trusted-certificate-manager-overview-mysql.md
create mode 100644 en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/16000.dbms-trusted-certificate-manager-mysql/200.add-trusted-certificate-mysql.md
create mode 100644 en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/16000.dbms-trusted-certificate-manager-mysql/300.delete-trusted-certificat-mysql.md
create mode 100644 en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/16000.dbms-trusted-certificate-manager-mysql/400.update-trusted-certificat-mysql.md
diff --git a/.menu_en.yml b/.menu_en.yml
index 86d6fecb5d..866c0c2f8e 100644
--- a/.menu_en.yml
+++ b/.menu_en.yml
@@ -374,6 +374,7 @@
10050.dbms-mview-stat-mysql=DBMS_MVIEW_STAT
13300.dbms-resource-manager-mysql=DBMS_RESOURCE_MANAGER
15900.dbms-stats-mysql=DBMS_STATS
+ 16000.dbms-trusted-certificate-manager-mysql=DBMS_TRUSTED_CERTIFICATE_MANAGER
17800.dbms-udr-mysql=DBMS_UDR
17900.dbms-workload-repository-mysql=DBMS_WORKLOAD_REPOSITORY
20700.dbms-xplan-mysql=DBMS_XPLAN
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/100.system-package-overview-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/100.system-package-overview-mysql.md
index 6e58f3d4dc..60503a1466 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/100.system-package-overview-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/100.system-package-overview-mysql.md
@@ -44,5 +44,6 @@ The following table describes the PL system packages supported by the current Oc
| [DBMS_UDR](17800.dbms-udr-mysql/100.dbms-udr-overview-mysql.md) | Provides the rewrite binding feature, which rewrites an SQL statement received by the database before execution based on the rewrite rule that the statement matches. |
| [DBMS_XPLAN](20700.dbms-xplan-mysql/100.dbms-xplan-overview-mysql.md) | Provides features for the management of logical plans, such as optimizing and tracing logical plans. |
| [DBMS_WORKLOAD_REPOSITORY](17900.dbms-workload-repository-mysql/100.dbms-workload-repository-overview-mysql.md) | Manages the Automatic Workload Repository (AWR). |
-| [DBMS_MVIEW](9950.dbms-mview-mysql/100.dbms-mview-overview-mysql.md) | Allows you to refresh materialized views that belong to different refresh groups and purge logs. |
+| [DBMS_MVIEW](9950.dbms-mview-mysql/100.dbms-mview-overview-mysql.md) | Provides a general overview of materialized views and their features, such as query rewrite, and allows you to refresh materialized views that belong to different refresh groups and purge materialized view logs. |
| [DBMS_MVIEW_STAT](10050.dbms-mview-stat-mysql/100.dbms-mview-stat-overview-mysql.md) | Allows you to manage the collection and retention of materialized view refresh statistics through API operations. |
+| [DBMS_TRUSTED_CERTIFICATE_MANAGER](16000.dbms-trusted-certificate-manager-mysql/100.dbms-trusted-certificate-manager-overview-mysql.md) | Allows you to add, delete, or modify a trusted root CA certificate for a cluster, which is used for remote procedure call (RPC) security authentication. |
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/10050.dbms-mview-stat-mysql/100.dbms-mview-stat-overview-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/10050.dbms-mview-stat-mysql/100.dbms-mview-stat-overview-mysql.md
index 45116f468d..9998b8c01e 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/10050.dbms-mview-stat-mysql/100.dbms-mview-stat-overview-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/10050.dbms-mview-stat-mysql/100.dbms-mview-stat-overview-mysql.md
@@ -3,7 +3,7 @@
| keywords | |
| dir-name | |
| dir-name-en | |
-| tenant-type | Oracle Mode |
+| tenant-type | MySQL Mode |
# Overview
@@ -22,4 +22,4 @@ The following table describes the `DBMS_MVIEW_STATS` subprograms supported by th
| ----------------------- | --------------------- |
| [PURGE_REFRESH_STATS](200.purge-refresh-stats-mysql.md) | Automatically purges the materialized view refresh statistics that exceed the specified retention period. |
| [SET_MVREF_STATS_PARAMS](300.set-mvref-stats-params-mysql.md) | Configures the collection level and retention period of materialized view refresh statistics. |
-| [SET_SYSTEM_DEFAULT](400.set-system-default-mysql.md) | Sets the default values of refresh statistics parameters. |
+| [SET_SYSTEM_DEFAULT](400.set-system-default-mysql.md) | Sets the default values of refresh statistics parameters. |
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/10050.dbms-mview-stat-mysql/200.purge-refresh-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/10050.dbms-mview-stat-mysql/200.purge-refresh-stats-mysql.md
index 5a52cd3e24..4167d55aff 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/10050.dbms-mview-stat-mysql/200.purge-refresh-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/10050.dbms-mview-stat-mysql/200.purge-refresh-stats-mysql.md
@@ -3,7 +3,7 @@
| keywords | |
| dir-name | |
| dir-name-en | |
-| tenant-type | Oracle Mode |
+| tenant-type | MySQL Mode |
# PURGE_REFRESH_STATS
@@ -27,4 +27,4 @@ PROCEDURE purge_refresh_stats(
| **Parameter** | **Description** |
|------------------|-----------------------------------------------------|
| mv_name | The name of the materialized view. <ul><li>When the value is `NULL`, the operation is performed on all materialized views.</li></ul> |
-| retention_period | The statistics retention period, which ranges from 1 to 365,000 days. The default value is 31 days. The system automatically purges statistics that exceed the specified retention period. <ul><li>If the value is `NULL`, the default cleanup strategy of the automatic statistics cleanup mechanism is used.</li><li>If the value is `-1`, all statistics are purged.</li></ul> |
+| retention_period | The statistics retention period, which ranges from 1 to 365,000 days. The default value is 31 days. The system automatically purges statistics that exceed the specified retention period. <ul><li>If the value is `NULL`, the default cleanup strategy of the automatic statistics cleanup mechanism is used.</li><li>If the value is `-1`, all statistics are purged.</li></ul> |
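+
+## Examples
+
+A hedged sketch (assuming a materialized view named `test_mv` exists): purge the refresh statistics of `test_mv` that are older than 31 days.
+
+```sql
+obclient> CALL DBMS_MVIEW_STATS.PURGE_REFRESH_STATS('test_mv', 31);
+```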
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/10050.dbms-mview-stat-mysql/300.set-mvref-stats-params-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/10050.dbms-mview-stat-mysql/300.set-mvref-stats-params-mysql.md
index 7c397e0a69..a7bcbfddfd 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/10050.dbms-mview-stat-mysql/300.set-mvref-stats-params-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/10050.dbms-mview-stat-mysql/300.set-mvref-stats-params-mysql.md
@@ -3,7 +3,7 @@
| keywords | |
| dir-name | |
| dir-name-en | |
-| tenant-type | Oracle Mode |
+| tenant-type | MySQL Mode |
# SET_MVREF_STATS_PARAMS
@@ -29,4 +29,4 @@ PROCEDURE set_mvref_stats_params(
|------------------|---------------------------------------------------------------------------------------------------------------------------------------------------|
| mv_name | The name of the materialized view. <ul><li>When the value is `NULL`, the operation is performed on all materialized views.</li></ul> |
| collection_level | The collection level of statistics. The default value is `TYPICAL`. Valid values: <ul><li>`NONE`: specifies not to collect statistics.</li><li>`TYPICAL`: specifies to collect basic refresh statistics.</li><li>`ADVANCED`: specifies to collect detailed refresh statistics.</li><li>`NULL`: specifies to use the default value, that is, `TYPICAL`.</li></ul> |
-| retention_period | The statistics retention period, which ranges from 1 to 365,000 days. The default value is 31 days. The system automatically purges statistics that exceed the specified retention period. <ul><li>If the value is `NULL`, the default setting is used.</li><li>If the value is `-1`, the statistics will be permanently retained.</li></ul> |
+| retention_period | The statistics retention period, which ranges from 1 to 365,000 days. The default value is 31 days. The system automatically purges statistics that exceed the specified retention period. <ul><li>If the value is `NULL`, the default setting is used.</li><li>If the value is `-1`, the statistics will be permanently retained.</li></ul> |
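+
+## Examples
+
+A hedged sketch (assuming a materialized view named `test_mv` exists): collect detailed refresh statistics for `test_mv` and retain them for 45 days.
+
+```sql
+obclient> CALL DBMS_MVIEW_STATS.SET_MVREF_STATS_PARAMS('test_mv', 'ADVANCED', 45);
+```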
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/10050.dbms-mview-stat-mysql/400.set-system-default-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/10050.dbms-mview-stat-mysql/400.set-system-default-mysql.md
index c1fd43d58d..ab8b63938c 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/10050.dbms-mview-stat-mysql/400.set-system-default-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/10050.dbms-mview-stat-mysql/400.set-system-default-mysql.md
@@ -3,7 +3,7 @@
| keywords | |
| dir-name | |
| dir-name-en | |
-| tenant-type | Oracle Mode |
+| tenant-type | MySQL Mode |
# SET_SYSTEM_DEFAULT
@@ -27,4 +27,4 @@ PROCEDURE set_system_default(
| **Parameter** | **Description** |
|------------------|--------------------------------------------------------------------------------------------------|
| parameter_name | The parameter name. Valid values: <ul><li>`COLLECTION_LEVEL`: the statistics collection level. The default value is `TYPICAL`.</li><li>`RETENTION_PERIOD`: the retention period (in days) of statistics. The default value is 31 days.</li></ul> |
-| value | When `parameter_name` is `COLLECTION_LEVEL`, the parameter value indicates the collection level. Valid values: <ul><li>`NONE`: specifies not to collect statistics.</li><li>`TYPICAL`: specifies to collect basic refresh statistics.</li><li>`ADVANCED`: specifies to collect detailed refresh statistics.</li><li>`NULL`: specifies to use the default value, that is, `TYPICAL`.</li></ul> When `parameter_name` is `RETENTION_PERIOD`, the parameter value indicates the retention period (in days) of statistics, which ranges from 1 to 365,000 days. <ul><li>`NULL`: specifies to use the default value.</li><li>`-1`: specifies to permanently retain statistics.</li></ul> |
+| value | When `parameter_name` is `COLLECTION_LEVEL`, the parameter value indicates the collection level. Valid values: <ul><li>`NONE`: specifies not to collect statistics.</li><li>`TYPICAL`: specifies to collect basic refresh statistics.</li><li>`ADVANCED`: specifies to collect detailed refresh statistics.</li><li>`NULL`: specifies to use the default value, that is, `TYPICAL`.</li></ul> When `parameter_name` is `RETENTION_PERIOD`, the parameter value indicates the retention period (in days) of statistics, which ranges from 1 to 365,000 days. <ul><li>`NULL`: specifies to use the default value.</li><li>`-1`: specifies to permanently retain statistics.</li></ul> |
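+
+## Examples
+
+A hedged sketch: set the default statistics collection level to `ADVANCED`.
+
+```sql
+obclient> CALL DBMS_MVIEW_STATS.SET_SYSTEM_DEFAULT('COLLECTION_LEVEL', 'ADVANCED');
+```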
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/100.dbms-stats-overview-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/100.dbms-stats-overview-mysql.md
index 3effa7fc7c..9919b06d8d 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/100.dbms-stats-overview-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/100.dbms-stats-overview-mysql.md
@@ -36,6 +36,7 @@ The following table describes the `DBMS_STATS` subprograms supported by the curr
| [DELETE_TABLE_STATS](../15900.dbms-stats-mysql/600.delete-table-stats-mysql.md) | Deletes table-level statistics. |
| [DELETE_SCHEMA_STATS](../15900.dbms-stats-mysql/700.delete-schema-stats-mysql.md) | Deletes the statistics on all tables in the specified schema. |
| [DELETE_SCHEMA_PREFS](../15900.dbms-stats-mysql/800.delete-schema-prefs-mysql.md) | Deletes the statistics preferences of all tables in the specified schema. |
+| [DELETE_SYSTEM_STATS](../15900.dbms-stats-mysql/860.delete-system-stats-mysql.md) | Deletes system statistics. |
| [DELETE_TABLE_PREFS](../15900.dbms-stats-mysql/900.delete-table-prefs-mysql.md) | Deletes the statistics preferences of a table owned by the specified user. |
| [DROP_STAT_TABLE](../15900.dbms-stats-mysql/1000.drop-stat-table-mysql.md) | Drops a user statistics table. |
| [EXPORT_COLUMN_STATS](../15900.dbms-stats-mysql/1100.export-column-stats-mysql.md) | Exports column-level statistics. |
@@ -46,6 +47,7 @@ The following table describes the `DBMS_STATS` subprograms supported by the curr
| [GATHER_INDEX_STATS](../15900.dbms-stats-mysql/1600.gather-index-stats-mysql.md) | Collects index statistics. |
| [GATHER_TABLE_STATS](../15900.dbms-stats-mysql/1700.gather-table-stats-mysql.md) | Collects statistics on tables and columns. |
| [GATHER_SCHEMA_STATS](../15900.dbms-stats-mysql/1800.gather-schema-stats-mysql.md) | Collects statistics on all objects in a schema. |
+| [GATHER_SYSTEM_STATS](../15900.dbms-stats-mysql/1850.gather-system-stats-mysql.md) | Collects system statistics. |
| [GET_STATS_HISTORY_AVAILABILITY](../15900.dbms-stats-mysql/1900.get-stats-history-availability-mysql.md) | Obtains the time of the earliest available historical statistics. You cannot restore historical statistics that are earlier than this time. |
| [GET_STATS_HISTORY_RETENTION](../15900.dbms-stats-mysql/2000.get-stats-history-retention-mysql.md) | Obtains the retention period of the historical statistics. |
| [GET_PARAM](../15900.dbms-stats-mysql/2100.get-param-mysql.md) | Obtains the default values of parameters of procedures in the `DBMS_STATS` package. |
@@ -68,9 +70,8 @@ The following table describes the `DBMS_STATS` subprograms supported by the curr
| [SET_GLOBAL_PREFS](../15900.dbms-stats-mysql/3800.set-global-prefs-mysql.md) | Sets a global statistics preference. |
| [SET_PARAM](../15900.dbms-stats-mysql/3900.set-param-mysql.md) | Sets the default value for a parameter of procedures in the `DBMS_STATS` package. |
| [SET_SCHEMA_PREFS](../15900.dbms-stats-mysql/4000.set-schema-prefs-mysql.md) | Sets the statistics preferences in the specified schema. |
+| [SET_SYSTEM_STATS](../15900.dbms-stats-mysql/4050.set-system-stats-mysql.md) | Sets system statistics. |
| [SET_TABLE_PREFS](../15900.dbms-stats-mysql/4100.set-table-prefs-mysql.md) | Sets a statistics preference of a table owned by the specified user. |
| [UNLOCK_PARTITION_STATS](../15900.dbms-stats-mysql/4200.unlock-partition-stats-mysql.md) | Unlocks the statistics on a partition. |
| [UNLOCK_SCHEMA_STATS](../15900.dbms-stats-mysql/4300.unlock-schema-stats-mysql.md) | Unlocks the statistics on all tables in a schema. |
-| [UNLOCK_TABLE_STATS](../15900.dbms-stats-mysql/4400.unlock-table-stats-mysql.md) | Unlocks the statistics on a table. |
-
-
+| [UNLOCK_TABLE_STATS](../15900.dbms-stats-mysql/4400.unlock-table-stats-mysql.md) | Unlocks the statistics on a table. |
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1000.drop-stat-table-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1000.drop-stat-table-mysql.md
index 717e5ea5d8..2a1db8c5c7 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1000.drop-stat-table-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1000.drop-stat-table-mysql.md
@@ -28,7 +28,7 @@ DBMS_STATS.DROP_STAT_TABLE (
## Exceptions
-The error code `ORA-20000` indicates that the table does not exist, or you do not have the required privileges.
+The error code `HY000` indicates that the table does not exist, or that you do not have the required privileges.
## Considerations
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1100.export-column-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1100.export-column-stats-mysql.md
index 22e802cc0e..89027930a1 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1100.export-column-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1100.export-column-stats-mysql.md
@@ -35,7 +35,7 @@ DBMS_STATS.EXPORT_COLUMN_STATS (
## Exceptions
-The error code `ORA-20000` indicates that the object does not exist, or you do not have the required privileges.
+The error code `HY000` indicates that the object does not exist, or that you do not have the required privileges.
## Considerations
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1200.export-index-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1200.export-index-stats-mysql.md
index 09dd2006ec..5556227aec 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1200.export-index-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1200.export-index-stats-mysql.md
@@ -36,7 +36,7 @@ tabname VARCHAR2 DEFAULT NULL;
## Exceptions
-The error code `ORA-20000` indicates that the object does not exist, or you do not have the required privileges.
+The error code `HY000` indicates that the object does not exist, or that you do not have the required privileges.
## Considerations
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1300.export-table-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1300.export-table-stats-mysql.md
index b191b00525..91791d1720 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1300.export-table-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1300.export-table-stats-mysql.md
@@ -38,7 +38,7 @@ DBMS_STATS.EXPORT_TABLE_STATS (
## Exceptions
-The error code `ORA-20000` indicates that the object does not exist, or you do not have the required privileges.
+The error code `HY000` indicates that the object does not exist, or that you do not have the required privileges.
## Considerations
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1400.export-schema-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1400.export-schema-stats-mysql.md
index 373d4405a2..e9d8f9e5bd 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1400.export-schema-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1400.export-schema-stats-mysql.md
@@ -29,7 +29,7 @@ DBMS_STATS.EXPORT_SCHEMA_STATS (
## Exceptions
-The error code `ORA-20000` indicates that the object does not exist, or you do not have the required privileges.
+The error code `HY000` indicates that the object does not exist, or that you do not have the required privileges.
## Considerations
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1500.flush-database-monitoring-info-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1500.flush-database-monitoring-info-mysql.md
index e46d8c7ddd..c38137e782 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1500.flush-database-monitoring-info-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1500.flush-database-monitoring-info-mysql.md
@@ -17,7 +17,7 @@ DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
## Exceptions
-The error code `ORA-20000` indicates that you do not have the required privileges.
+The error code `HY000` indicates that you do not have the required privileges.
## Considerations
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1600.gather-index-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1600.gather-index-stats-mysql.md
index a05ab6fdfb..62580fb0fb 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1600.gather-index-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1600.gather-index-stats-mysql.md
@@ -49,8 +49,7 @@ DBMS_STATS.GATHER_INDEX_STATS (
| Error code | Description |
|-----------|--------------|
-| ORA-20000 | The index does not exist, or you do not have the required privileges. |
-| ORA-20001 | The input value is incorrect. |
+| HY000 | <ul><li>The index does not exist, or you do not have the required privileges.</li><li>The input value is incorrect.</li></ul> |
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1700.gather-table-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1700.gather-table-stats-mysql.md
index f4f01596ec..146ed4be89 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1700.gather-table-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1700.gather-table-stats-mysql.md
@@ -50,10 +50,7 @@ DBMS_STATS.GATHER_TABLE_STATS (
| Error code | Description |
|-----------|------------|
-| ORA-20000 | The table does not exist, or you do not have the required privileges. |
-| ORA-20001 | The input value is incorrect. |
-
-
+| HY000 | <ul><li>The table does not exist, or you do not have the required privileges.</li><li>The input value is incorrect.</li></ul> |
## Considerations
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1800.gather-schema-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1800.gather-schema-stats-mysql.md
index b1ee9d2f21..7320f10bc5 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1800.gather-schema-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1800.gather-schema-stats-mysql.md
@@ -47,9 +47,7 @@ DBMS_STATS.GATHER_SCHEMA_STATS (
| Error code | Description |
|-----------|--------------------|
-| ORA-20000 | The schema does not exist, or you do not have the required privileges. |
-| ORA-20001 | The input value is incorrect. |
-
+| HY000 | <ul><li>The schema does not exist, or you do not have the required privileges.</li><li>The input value is incorrect.</li></ul> |
## Considerations
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1850.gather-system-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1850.gather-system-stats-mysql.md
new file mode 100644
index 0000000000..1a9c3de625
--- /dev/null
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1850.gather-system-stats-mysql.md
@@ -0,0 +1,38 @@
+| Description | |
+|---------------|-----------------|
+| keywords | |
+| dir-name | |
+| dir-name-en | |
+| tenant-type | MySQL Mode |
+
+# GATHER_SYSTEM_STATS
+
+The `GATHER_SYSTEM_STATS` procedure collects system statistics.
+
+## Syntax
+
+```sql
+DBMS_STATS.GATHER_SYSTEM_STATS();
+```
+
+## Exceptions
+
+| Error code | Description |
+|-----------|------------------|
+| HY000 | The specified system statistics item name is incorrect. |
+
+## Considerations
+
+* To call this procedure, you must connect to the database as the specified user or have the `SYSDBA` privilege.
+
+## Examples
+
+Call the `DBMS_STATS.GATHER_SYSTEM_STATS` procedure to collect system statistics, such as the CPU speed, disk I/O performance, and network throughput.
+
+```sql
+BEGIN
+ -- Collect system statistics.
+ DBMS_STATS.GATHER_SYSTEM_STATS();
+END;
+/
+```
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/200.alter-stats-history-retention-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/200.alter-stats-history-retention-mysql.md
index 60316cc5cc..a8a267271e 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/200.alter-stats-history-retention-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/200.alter-stats-history-retention-mysql.md
@@ -24,7 +24,7 @@ The `retention` parameter specifies the retention period of historical statistic
## Exceptions
-The error code `ORA-20000` indicates that you do not have the required privileges.
+The error code `HY000` indicates that you do not have the required privileges.
## Examples
@@ -34,4 +34,4 @@ Change the retention period of historical statistics to 15 days.
```sql
obclient> CALL DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(15);
Query OK, 0 rows affected
-```
+```
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2100.get-param-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2100.get-param-mysql.md
index dc5ee6d0ce..e88ceea1ef 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2100.get-param-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2100.get-param-mysql.md
@@ -24,7 +24,7 @@ The `pname` parameter specifies the name of the parameter whose default value is
## Exceptions
-The error code `ORA-20001` indicates that the input value is invalid.
+The error code `HY000` indicates that the input value is invalid.
## Examples
@@ -38,6 +38,4 @@ obclient> SELECT DBMS_STATS.GET_PARAM ('METHOD_OPT') FROM DUAL;
| FOR ALL COLUMNS SIZE AUTO |
+-------------------------------------+
1 row in set
-```
-
-
+```
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2200.get-prefs-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2200.get-prefs-mysql.md
index 1c3c8b1ec7..53e2734e5a 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2200.get-prefs-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2200.get-prefs-mysql.md
@@ -32,9 +32,7 @@ DBMS_STATS.GET_PREFS (
| Error code | Description |
|-----------|----------------------|
-| ORA-20000 | The resource manager is not started and statistics cannot be collected. |
-| ORA-20001 | The input value is invalid. |
-
+| HY000 | <ul><li>The resource manager is not started and statistics cannot be collected.</li><li>The input value is invalid.</li></ul> |
## Examples
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2300.import-index-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2300.import-index-stats-mysql.md
index f11518cd19..b7b3223277 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2300.import-index-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2300.import-index-stats-mysql.md
@@ -42,11 +42,7 @@ DBMS_STATS.IMPORT_INDEX_STATS (
| Error code | Description |
|-----------|-------------------|
-| ORA-20000 | The object does not exist, or you do not have the required privileges. |
-| ORA-20001 | The values in the user statistics table are invalid or inconsistent. |
-| ORA-20005 | Statistics on the object are locked. |
-
-
+| HY000 | <ul><li>The object does not exist, or you do not have the required privileges.</li><li>The values in the user statistics table are invalid or inconsistent.</li><li>Statistics on the object are locked.</li></ul> |
## Considerations
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2400.import-column-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2400.import-column-stats-mysql.md
index 9dc98ddfc3..86ee61b172 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2400.import-column-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2400.import-column-stats-mysql.md
@@ -42,10 +42,7 @@ DBMS_STATS.IMPORT_COLUMN_STATS (
| Error code | Description |
|-----------|-------------------|
-| ORA-20000 | The object does not exist, or you do not have the required privileges. |
-| ORA-20001 | The values in the user statistics table are invalid or inconsistent. |
-| ORA-20005 | Statistics on the object are locked. |
-
+| HY000 | <ul><li>The object does not exist, or you do not have the required privileges.</li><li>The values in the user statistics table are invalid or inconsistent.</li><li>Statistics on the object are locked.</li></ul> |
## Considerations
@@ -61,4 +58,3 @@ obclient> CALL DBMS_STATS.IMPORT_COLUMN_STATS ('testUser01', 'tbl1','col1',null,
Query OK, 0 rows affected
```
-
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2500.import-table-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2500.import-table-stats-mysql.md
index eee556e447..ef02fb7dfc 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2500.import-table-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2500.import-table-stats-mysql.md
@@ -43,10 +43,7 @@ DBMS_STATS.IMPORT_TABLE_STATS (
| Error code | Description |
|-----------|-------------------|
-| ORA-20000 | The object does not exist, or you do not have the required privileges. |
-| ORA-20001 | The values in the user statistics table are invalid or inconsistent. |
-
-
+| HY000 | <ul><li>The object does not exist, or you do not have the required privileges.</li><li>The values in the user statistics table are invalid or inconsistent.</li></ul> |
## Considerations
@@ -65,6 +62,4 @@ Query OK, 0 rows affected
obclient> CALL DBMS_STATS.IMPORT_TABLE_STATS('testUser01', 'tbl1', stattab=>'test_stat', statown=>'testUser02');
Query OK, 0 rows affected
-```
-
-
+```
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2600.import-schema-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2600.import-schema-stats-mysql.md
index 81ff83ea3d..ac2c37732a 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2600.import-schema-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2600.import-schema-stats-mysql.md
@@ -37,10 +37,7 @@ DBMS_STATS.IMPORT_SCHEMA_STATS (
| Error code | Description |
|-----------|-------------------|
-| ORA-20000 | The object does not exist, or you do not have the required privileges. |
-| ORA-20001 | The values in the user statistics table are invalid or inconsistent. |
-
-
+| HY000 | <ul><li>The object does not exist, or you do not have the required privileges.</li><li>The values in the user statistics table are invalid or inconsistent.</li></ul> |
## Considerations
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2700.lock-partition-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2700.lock-partition-stats-mysql.md
index 124da36b7f..8849fff685 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2700.lock-partition-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/2700.lock-partition-stats-mysql.md
@@ -37,6 +37,4 @@ Lock the statistics on the `p0` partition in the `t1` table of the `testUser01`
```sql
obclient> CALL DBMS_STATS.LOCK_PARTITION_STATS('testUser01', 't1', 'p0');
Query OK, 0 rows affected
-```
-
-
+```
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/300.create-stat-table-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/300.create-stat-table-mysql.md
index 3d0466f209..1c1153d106 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/300.create-stat-table-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/300.create-stat-table-mysql.md
@@ -36,9 +36,7 @@ DBMS_STATS.CREATE_STAT_TABLE(
| Error code | Description |
|-----------|-------------|
-| ORA-20000 | The table does not exist, or you do not have the required privileges. |
-| ORA-20001 | The tablespace does not exist. |
-
+| HY000 | <ul><li>The table does not exist, or you do not have the required privileges.</li><li>The tablespace does not exist.</li></ul> |
## Considerations
@@ -52,4 +50,4 @@ Create the user statistics table `test_stat` for the `testUser01` user.
```sql
obclient> CALL DBMS_STATS.CREATE_STAT_TABLE('testUser01', 'test_stat');
Query OK, 0 rows affected
-```
+```
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3000.restore-table-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3000.restore-table-stats-mysql.md
index 9f66c6a9de..93b0bc6c8d 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3000.restore-table-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3000.restore-table-stats-mysql.md
@@ -35,10 +35,7 @@ DBMS_STATS.RESTORE_TABLE_STATS (
| Error code | Description |
|-----------|---------------------|
-| ORA-20000 | The object does not exist, or you do not have the required privileges. |
-| ORA-20001 | Values are invalid or inconsistent. |
-| ORA-20006 | The historical statistics are unavailable and cannot be restored. |
-
+| HY000 | <ul><li>The object does not exist, or you do not have the required privileges.</li><li>Values are invalid or inconsistent.</li><li>The historical statistics are unavailable and cannot be restored.</li></ul> |
## Considerations
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3100.restore-schema-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3100.restore-schema-stats-mysql.md
index e60ec1ba72..2d64aae231 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3100.restore-schema-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3100.restore-schema-stats-mysql.md
@@ -36,11 +36,7 @@ DBMS_STATS.RESTORE_SCHEMA_STATS(
| Error code | Description |
|-----------|---------------------|
-| ORA-20000 | The object does not exist, or you do not have the required privileges. |
-| ORA-20001 | Values are invalid or inconsistent. |
-| ORA-20006 | The historical statistics are unavailable and cannot be restored. |
-
-
+| HY000 | <ul><li>The object does not exist, or you do not have the required privileges.</li><li>Values are invalid or inconsistent.</li><li>The historical statistics are unavailable and cannot be restored.</li></ul> |
## Considerations
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3200.reset-global-pref-defaults-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3200.reset-global-pref-defaults-mysql.md
index f7cd53237e..f08fa7718a 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3200.reset-global-pref-defaults-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3200.reset-global-pref-defaults-mysql.md
@@ -15,8 +15,6 @@ The `RESET_GLOBAL_PREF_DEFAULTS` procedure resets global preferences to their de
DBMS_STATS.RESET_GLOBAL_PREF_DEFAULTS;
```
-
-
## Examples
Reset all global preferences to default values.
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3400.purge-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3400.purge-stats-mysql.md
index 96c0b91eaf..283587c061 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3400.purge-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3400.purge-stats-mysql.md
@@ -24,11 +24,7 @@ Statistics saved before the timestamp specified by the `before_timestamp` parame
| Error code | Description |
|-----------|--------------|
-| ORA-20000 | The object does not exist, or you do not have the required privileges. |
-| ORA-20001 | Values are invalid or inconsistent. |
-
-
-
+| HY000 | <ul><li>The object does not exist, or you do not have the required privileges.</li><li>Values are invalid or inconsistent.</li></ul> |
## Examples
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3500.set-column-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3500.set-column-stats-mysql.md
index 298be4d293..c27cb6b1d8 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3500.set-column-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3500.set-column-stats-mysql.md
@@ -46,11 +46,7 @@ DBMS_STATS.SET_COLUMN_STATS (
| Error code | Description |
|-----------|--------------|
-| ORA-20000 | The object does not exist, or you do not have the required privileges. |
-| ORA-20001 | Input values are invalid or inconsistent. |
-| ORA-20005 | Statistics on the object are locked. |
-
-
+| HY000 | <ul><li>The object does not exist, or you do not have the required privileges.</li><li>Values are invalid or inconsistent.</li><li>Statistics on the object are locked.</li></ul> |
## Considerations
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3600.set-index-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3600.set-index-stats-mysql.md
index 8a1c157376..2e687c8ef1 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3600.set-index-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3600.set-index-stats-mysql.md
@@ -46,11 +46,7 @@ DBMS_STATS.SET_INDEX_STATS (
| Error code | Description |
|-----------|--------------|
-| ORA-20000 | The object does not exist, or you do not have the required privileges. |
-| ORA-20001 | The input value is invalid. |
-| ORA-20005 | Statistics on the object are locked. |
-
-
+| HY000 | <ul><li>The object does not exist, or you do not have the required privileges.</li><li>The input value is invalid.</li><li>Statistics on the object are locked.</li></ul> |
## Considerations
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3700.set-table-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3700.set-table-stats-mysql.md
index 24c58d84e5..87d063f4c0 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3700.set-table-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3700.set-table-stats-mysql.md
@@ -44,11 +44,7 @@ DBMS_STATS.SET_TABLE_STATS (
| Error code | Description |
|-----------|--------------|
-| ORA-20000 | The object does not exist, or you do not have the required privileges. |
-| ORA-20001 | Input values are invalid or inconsistent. |
-| ORA-20005 | Statistics on the object are locked. |
-
-
+| HY000 | <ul><li>The object does not exist, or you do not have the required privileges.</li><li>Input values are invalid or inconsistent.</li><li>Statistics on the object are locked.</li></ul> |
## Considerations
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3800.set-global-prefs-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3800.set-global-prefs-mysql.md
index 5ad495b81d..9574dc6d5e 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3800.set-global-prefs-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3800.set-global-prefs-mysql.md
@@ -33,10 +33,7 @@ DBMS_STATS.SET_GLOBAL_PREFS (
| Error code | Description |
|-----------|-------------|
-| ORA-20000 | You do not have the required privilege. |
-| ORA-20001 | Input values are invalid. |
-
-
+| HY000 | <ul><li>You do not have the required privilege.</li><li>Input values are invalid.</li></ul> |
## Considerations
@@ -52,6 +49,4 @@ Set the default value of the global-level `APPROXIMATE_NDV` preference to `FALSE
```sql
obclient> CALL DBMS_STATS.SET_GLOBAL_PREFS ('APPROXIMATE_NDV', 'FALSE');
Query OK, 0 rows affected
-```
-
-
+```
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3900.set-param-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3900.set-param-mysql.md
index 1793461027..8696247341 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3900.set-param-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/3900.set-param-mysql.md
@@ -34,9 +34,7 @@ DBMS_STATS.SET_PARAM (
| Error code | Description |
|-----------|--------------|
-| ORA-20000 | The object does not exist, or you do not have the required privileges. |
-| ORA-20001 | Input values are invalid. |
-
+| HY000 | <ul><li>The object does not exist, or you do not have the required privileges.</li><li>Input values are invalid.</li></ul> |
## Considerations
@@ -57,5 +55,3 @@ Set the default value of the `DEGREE` parameter.
obclient> CALL DBMS_STATS.SET_PARAM('DEGREE','20');
Query OK, 0 rows affected
```
-
-
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/400.delete-column-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/400.delete-column-stats-mysql.md
index 404f460679..00c0189af9 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/400.delete-column-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/400.delete-column-stats-mysql.md
@@ -43,10 +43,7 @@ DBMS_STATS.DELETE_COLUMN_STATS (
| Error code | Description |
|-----------|--------------|
-| ORA-20000 | The object does not exist, or you do not have the required privileges. |
-| ORA-20005 | Statistics on the object are locked. |
-
-
+| HY000 | <ul><li>The table does not exist, or you do not have the required privileges.</li><li>Statistics on the object are locked.</li></ul> |
## Considerations
@@ -59,4 +56,4 @@ Delete all statistics on the `col1` column in the `tbl1` table of the `testUser0
```sql
obclient> CALL DBMS_STATS.DELETE_COLUMN_STATS('testUser01', 'tbl1', 'col1',col_stat_type=>'ALL');
Query OK, 0 rows affected
-```
+```
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/4000.set-schema-prefs-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/4000.set-schema-prefs-mysql.md
index b056253a02..a4b6e78341 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/4000.set-schema-prefs-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/4000.set-schema-prefs-mysql.md
@@ -35,10 +35,7 @@ DBMS_STATS.SET_SCHEMA_PREFS (
| Error code | Description |
|-----------|--------------|
-| ORA-20000 | The object does not exist, or you do not have the required privileges. |
-| ORA-20001 | Input values are invalid. |
-
-
+| HY000 | <ul><li>The object does not exist, or you do not have the required privileges.</li><li>Input values are invalid.</li></ul> |
## Considerations
@@ -52,6 +49,4 @@ DBMS_STATS.SET_SCHEMA_PREFS (
```sql
obclient> CALL DBMS_STATS.SET_SCHEMA_PREFS('hr', 'DEGREE','10');
Query OK, 0 rows affected
-```
-
-
+```
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/4050.set-system-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/4050.set-system-stats-mysql.md
new file mode 100644
index 0000000000..79eeae19f6
--- /dev/null
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/4050.set-system-stats-mysql.md
@@ -0,0 +1,59 @@
+| Description | |
+|---------------|-----------------|
+| keywords | |
+| dir-name | |
+| dir-name-en | |
+| tenant-type | MySQL Mode |
+
+# SET_SYSTEM_STATS
+
+The `SET_SYSTEM_STATS` procedure sets system statistics.
+
+## Syntax
+
+```sql
+DBMS_STATS.SET_SYSTEM_STATS (
+ pname VARCHAR2,
+ pvalue NUMBER);
+```
+
+## Parameters
+
+| Parameter | Description |
+|---------|------------|
+| pname | The name of the parameter. Valid values: `cpu_speed`, `disk_seq_read_speed`, `disk_rnd_read_speed`, and `network_speed`. |
+| pvalue | The value of the parameter. |
+
+
+## Exceptions
+
+| Error code | Description |
+|-----------|------------------|
+| HY000 | The entered name of the system statistical item is incorrect. |
+
+## Considerations
+
+* To call this procedure, you must connect to the database as the specified user, or have the `SYSDBA` privilege.
+
+
+## Examples
+
+Call the `DBMS_STATS.SET_SYSTEM_STATS` stored procedure to set system statistics. In this example, the parameter name is set to `cpu_speed` and `pvalue` is set to `5000`. This value indicates the number of CPU cycles that can be executed per second in the current hardware environment and must be estimated or obtained based on actual measurements.
+
+```sql
+BEGIN
+ -- Set the CPU speed to 5000, which is a CPU speed in a specific measurement unit.
+ DBMS_STATS.SET_SYSTEM_STATS('cpu_speed', 5000);
+END;
+/
+```
+
+If you want to set the sequential read speed of the disk (`disk_seq_read_speed`), call the procedure as follows:
+
+```sql
+BEGIN
+ -- Set the sequential read speed of the disk to 200, which is a disk read speed in a specific measurement unit.
+ DBMS_STATS.SET_SYSTEM_STATS('disk_seq_read_speed', 200);
+END;
+/
+```
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/4100.set-table-prefs-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/4100.set-table-prefs-mysql.md
index bfd60d756e..f55dd133f0 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/4100.set-table-prefs-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/4100.set-table-prefs-mysql.md
@@ -36,10 +36,7 @@ DBMS_STATS.SET_TABLE_PREFS (
| Error code | Description |
|-----------|--------------|
-| ORA-20000 | The object does not exist, or you do not have the required privileges. |
-| ORA-20001 | Input values are invalid. |
-
-
+| HY000 | <ul><li>The object does not exist, or you do not have the required privileges.</li><li>Input values are invalid.</li></ul> |
## Considerations
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/500.delete-index-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/500.delete-index-stats-mysql.md
index a87191bf08..ffe4ef04e8 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/500.delete-index-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/500.delete-index-stats-mysql.md
@@ -42,10 +42,7 @@ DBMS_STATS.DELETE_INDEX_STATS (
| Error code | Description |
|-----------|--------------|
-| ORA-20000 | The object does not exist, or you do not have the required privileges. |
-| ORA-20005 | Statistics on the object are locked. |
-
-
+| HY000 | <ul><li>The table does not exist, or you do not have the required privileges.</li><li>Statistics on the object are locked.</li></ul> |
## Considerations
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/600.delete-table-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/600.delete-table-stats-mysql.md
index 335868f250..971bc6e7cb 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/600.delete-table-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/600.delete-table-stats-mysql.md
@@ -42,10 +42,7 @@ DBMS_STATS.DELETE_TABLE_STATS (
| Error code | Description |
|-----------|------------------|
-| ORA-20000 | The object does not exist, or you do not have the required privileges. |
-| ORA-20002 | The user statistics table is damaged and needs to be upgraded. |
-| ORA-20005 | Statistics on the object are locked. |
-
+| HY000 | <ul><li>The table does not exist, or you do not have the required privileges.</li><li>The user statistics table is damaged and needs to be upgraded.</li><li>Statistics on the object are locked.</li></ul> |
## Considerations
@@ -59,4 +56,4 @@ Delete all statistics on the `tbl1` table of the `testUser01` user.
```sql
obclient> CALL DBMS_STATS.DELETE_TABLE_STATS('testUser01', 'tbl1');
Query OK, 0 rows affected
-```
+```
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/700.delete-schema-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/700.delete-schema-stats-mysql.md
index 92651fb78f..491b8310f3 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/700.delete-schema-stats-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/700.delete-schema-stats-mysql.md
@@ -33,7 +33,7 @@ DBMS_STATS.DELETE_SCHEMA_STATS (
## Exceptions
-The error code `ORA-20000` indicates that the object does not exist, or you do not have the required privileges.
+The error code `HY000` indicates that the object does not exist, or that you do not have the required privileges.
## Considerations
@@ -46,4 +46,4 @@ Delete the statistics on all tables in the `hr` schema.
```sql
obclient> CALL DBMS_STATS.DELETE_SCHEMA_STATS('hr');
Query OK, 0 rows affected
-```
+```
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/800.delete-schema-prefs-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/800.delete-schema-prefs-mysql.md
index b23599b5dd..3c87313da3 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/800.delete-schema-prefs-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/800.delete-schema-prefs-mysql.md
@@ -32,8 +32,7 @@ DBMS_STATS.DELETE_SCHEMA_PREFS (
| Error code | Description |
|-----------|------------------|
-| ORA-20000 | The schema does not exist, or you do not have the required privileges. |
-| ORA-20001 | Input values are invalid. |
+| HY000 | The schema does not exist, you do not have the required privileges, or the input value is invalid. |
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/860.delete-system-stats-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/860.delete-system-stats-mysql.md
new file mode 100644
index 0000000000..cf72168040
--- /dev/null
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/860.delete-system-stats-mysql.md
@@ -0,0 +1,38 @@
+| Description | |
+|---------------|-----------------|
+| keywords | |
+| dir-name | |
+| dir-name-en | |
+| tenant-type | MySQL Mode |
+
+# DELETE_SYSTEM_STATS
+
+The `DELETE_SYSTEM_STATS` procedure deletes system statistics.
+
+## Syntax
+
+```sql
+DBMS_STATS.DELETE_SYSTEM_STATS();
+```
+
+## Exceptions
+
+| Error code | Description |
+|-----------|------------------|
+| HY000 | The entered name of the system statistical item is incorrect. |
+
+## Considerations
+
+* To call this procedure, you must connect to the database as the specified user, or have the `SYSDBA` privilege.
+
+## Examples
+
+Call the `DBMS_STATS.DELETE_SYSTEM_STATS` stored procedure to delete all system statistics without setting any parameters.
+
+```sql
+BEGIN
+ -- Delete all system statistics.
+ DBMS_STATS.DELETE_SYSTEM_STATS();
+END;
+/
+```
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/900.delete-table-prefs-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/900.delete-table-prefs-mysql.md
index 8afcd0736a..9e3bbb208a 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/900.delete-table-prefs-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/900.delete-table-prefs-mysql.md
@@ -32,8 +32,7 @@ DBMS_STATS.DELETE_TABLE_PREFS (
| Error code | Description |
|-----------|-------------|
-| ORA-20000 | You do not have the required privilege. |
-| ORA-20001 | Input values are invalid. |
+| HY000 | <ul><li>You do not have the required privilege.</li><li>Input values are invalid.</li></ul> |
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/16000.dbms-trusted-certificate-manager-mysql/100.dbms-trusted-certificate-manager-overview-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/16000.dbms-trusted-certificate-manager-mysql/100.dbms-trusted-certificate-manager-overview-mysql.md
new file mode 100644
index 0000000000..27c34459aa
--- /dev/null
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/16000.dbms-trusted-certificate-manager-mysql/100.dbms-trusted-certificate-manager-overview-mysql.md
@@ -0,0 +1,29 @@
+| Description | |
+|---------------|-----------------|
+| keywords | |
+| dir-name | |
+| dir-name-en | |
+| tenant-type | MySQL Mode |
+
+# Overview
+
+The `DBMS_TRUSTED_CERTIFICATE_MANAGER` system package allows you to add, delete, or modify a trusted root CA certificate for a cluster, which is used for remote procedure call (RPC) security authentication.
+
+
+<main id="notice" type='explain'>
+<h4>Applicability</h4>
+<p>This topic applies only to OceanBase Database Enterprise Edition. OceanBase Database Community Edition provides only MySQL mode.</p>
+</main>
+
+
+## Privileges
+
+You can call this system package only from the sys tenant.
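+
+For reference, a minimal sketch of connecting to the sys tenant before calling the package; the host, port, and password are illustrative placeholders that you must adapt to your deployment:
+
+```shell
+obclient -h127.0.0.1 -P2881 -uroot@sys -p
+```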
+
+## Subprograms
+
+The following table describes the `DBMS_TRUSTED_CERTIFICATE_MANAGER` subprograms supported by the current OceanBase Database version.
+
+| Subprogram | Description |
+| --------- | ----------------------------------------- |
+| [ADD_TRUSTED_CERTIFICATE](200.add-trusted-certificate-mysql.md) | Adds a trusted root CA certificate for a cluster. Components such as OceanBase clusters and OceanBase Migration Service (OMS) that use digital certificates issued by using this root certificate can pass RPC authentication and connect to the target cluster. |
+| [DELETE_TRUSTED_CERTIFICATE](300.delete-trusted-certificat-mysql.md) | Deletes a trusted root CA certificate for a cluster. After the deletion, components such as OceanBase clusters and OMS that use digital certificates issued by using this root certificate can no longer connect to the target cluster. |
+| [UPDATE_TRUSTED_CERTIFICATE](400.update-trusted-certificat-mysql.md) | Updates the trusted root CA certificate of a cluster. |
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/16000.dbms-trusted-certificate-manager-mysql/200.add-trusted-certificate-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/16000.dbms-trusted-certificate-manager-mysql/200.add-trusted-certificate-mysql.md
new file mode 100644
index 0000000000..48e7596890
--- /dev/null
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/16000.dbms-trusted-certificate-manager-mysql/200.add-trusted-certificate-mysql.md
@@ -0,0 +1,63 @@
+| Description | |
+|---------------|-----------------|
+| keywords | |
+| dir-name | |
+| dir-name-en | |
+| tenant-type | MySQL Mode |
+
+# ADD_TRUSTED_CERTIFICATE
+
+The `ADD_TRUSTED_CERTIFICATE` procedure adds a trusted root CA certificate for a cluster. Components such as OceanBase clusters and OceanBase Migration Service (OMS) that use digital certificates issued by using this root certificate can pass remote procedure call (RPC) authentication and connect to the target cluster.
+
+
+<main id="notice" type='explain'>
+<h4>Applicability</h4>
+<p>This topic applies only to OceanBase Database Enterprise Edition. OceanBase Database Community Edition provides only MySQL mode.</p>
+</main>
+
+
+## Syntax
+
+```sql
+PROCEDURE ADD_TRUSTED_CERTIFICATE(
+ common_name VARCHAR(256),
+ description VARCHAR(256),
+ content LONGTEXT);
+```
+
+## Parameters
+
+| **Parameter** | **Description** |
+|------------|----------------------------------------|
+| common_name | The unique identifier of the certificate. |
+| description | The purpose and associated cluster of the certificate. |
+| content | The details about the certificate. |
+
+## Examples
+
+```shell
+obclient> CALL DBMS_TRUSTED_CERTIFICATE_MANAGER.ADD_TRUSTED_CERTIFICATE
+('MySQL_Server_5.7.2_Auto_Generated_C_Certificate',
+'cluster B CA',
+'-----BEGIN CERTIFICATE-----
+MIIDbTCCAlWgAwIBAgIJANYnM/dk7iDWMA0GCSqGSIb3DQEBCwUAMEwxCzAJBgNV
+BAYTAkNOMRAwDgYDVQQIDAdCZWlKaW5nMQwwCgYDVQQKDANBbnQxCzAJBgNVBAsM
+Ak9CMRAwDgYDVQQDDAdyb290IGNhMCAXDTIzMTEyMjA3NDUzMFoYDzMwMjMwMzI1
+MDc0NTMwWjBMMQswCQYDVQQGEwJDTjEQMA4GA1UECAwHQmVpSmluZzEMMAoGA1UE
+CgwDQW50MQswCQYDVQQLDAJPQjEQMA4GA1UEAwwHcm9vdCBjYTCCASIwDQYJKoZI
+hvcNAQEBBQADggEPADCCAQoCggEBANWsMw/GZqv7SgZDKUFt7DhyIbfVzVapoE+/
+MLkgD5Pncu+vb87k9jj6ddG6ekTq9ccih3m01j9EmXqoKEXbqn/TT9w4IGxR1STJ
+aL87Xe7NK8kpYPjtFfGMuXvzHQtsTy+KMHDW8+BLmitud4EEW75o/mA93AmtxmNZ
+bHBVkIn5Q2MoU/TyMG6PPSbdMIDH2AlF4QAcM8Jsqn7lqs1J1M/ock0sYOi2YcM8
+ceKUtcn7Xks22dSkOWYfMRlBXupXo/WqnKWJHpkZ0JWQn7b8De+qGwUwVoXU+DGH
+sadmc4ESFwPlzar+yCfRo0rRyQ4MhYTZb5A3HKgAOh3HpMjNeykCAwEAAaNQME4w
+HQYDVR0OBBYEFPG/BWN8y9A8Ti/ogcvAErbFXPxVMB8GA1UdIwQYMBaAFPG/BWN8
+y9A8Ti/ogcvAErbFXPxVMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEB
+AKEDH+ytxMjgKVM/kPczI/iQRUj4Ql8I3/alyrH24X46KRDRP9010i12GB+uvMkC
+HDjik5TAdzjcFzUWQNoDlO4FIL+lXkxZhqU4Ixd5/4MphMNubbcvMsVuui2WiyLE
+oI5KBoRJrJntQYPBTrmVSgjb6BaIFhOXD6O8Et48Iel6jFcy38d4ZtUxXxzavWGN
+n9iRXQeSBBA3XPYxlRoeeuePKmiqMSH1xltns0OyMRTzuUGOIfjIzQ8DKXx1STBw
+LznB/zI0o8/aCMqt/jthRfC/OKSZaWWHD3wwx9BE5vHzenQ0o+lUtjZGklSnIze7
+JXaALXmRyp5+2y6Z7pT1+6g=
+-----END CERTIFICATE-----'
+);
+Query OK, 0 rows affected
+```
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/16000.dbms-trusted-certificate-manager-mysql/300.delete-trusted-certificat-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/16000.dbms-trusted-certificate-manager-mysql/300.delete-trusted-certificat-mysql.md
new file mode 100644
index 0000000000..b09d3e3d2f
--- /dev/null
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/16000.dbms-trusted-certificate-manager-mysql/300.delete-trusted-certificat-mysql.md
@@ -0,0 +1,34 @@
+| Description | |
+|---------------|-----------------|
+| keywords | |
+| dir-name | |
+| dir-name-en | |
+| tenant-type | MySQL Mode |
+
+# DELETE_TRUSTED_CERTIFICATE
+
+The `DELETE_TRUSTED_CERTIFICATE` procedure deletes the trusted root CA certificate for a cluster. After the deletion, components such as OceanBase clusters and OceanBase Migration Service (OMS) that use the digital certificates issued by using this root certificate can no longer connect to the target cluster.
+
+
+<main id="notice" type='explain'>
+<h4>Applicability</h4>
+<p>This topic applies only to OceanBase Database Enterprise Edition. OceanBase Database Community Edition provides only MySQL mode.</p>
+</main>
+
+
+## Syntax
+
+```sql
+PROCEDURE DELETE_TRUSTED_CERTIFICATE(common_name VARCHAR(256));
+```
+
+## Parameters
+
+| **Parameter** | **Description** |
+|------------|----------------------------------------|
+| common_name | The unique identifier of the certificate. |
+
+## Examples
+
+```shell
+obclient> CALL DBMS_TRUSTED_CERTIFICATE_MANAGER.DELETE_TRUSTED_CERTIFICATE('MySQL_Server_5.7.2_Auto_Generated_C_Certificate');
+Query OK, 0 rows affected
+```
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/16000.dbms-trusted-certificate-manager-mysql/400.update-trusted-certificat-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/16000.dbms-trusted-certificate-manager-mysql/400.update-trusted-certificat-mysql.md
new file mode 100644
index 0000000000..cb43dd50d2
--- /dev/null
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/16000.dbms-trusted-certificate-manager-mysql/400.update-trusted-certificat-mysql.md
@@ -0,0 +1,63 @@
+| Description | |
+|---------------|-----------------|
+| keywords | |
+| dir-name | |
+| dir-name-en | |
+| tenant-type | MySQL Mode |
+
+# UPDATE_TRUSTED_CERTIFICATE
+
+The `UPDATE_TRUSTED_CERTIFICATE` procedure updates the trusted root CA certificate for a cluster.
+
+
+<main id="notice" type='explain'>
+<h4>Applicability</h4>
+<p>This topic applies only to OceanBase Database Enterprise Edition. OceanBase Database Community Edition provides only MySQL mode.</p>
+</main>
+
+
+## Syntax
+
+```sql
+PROCEDURE UPDATE_TRUSTED_CERTIFICATE(
+ common_name VARCHAR(256),
+ description VARCHAR(256),
+ content LONGTEXT);
+```
+
+## Parameters
+
+| **Parameter** | **Description** |
+|------------|----------------------------------------|
+| common_name | The unique identifier of the certificate. |
+| description | The purpose and associated cluster of the certificate. |
+| content | The details about the certificate. |
+
+## Examples
+
+```shell
+obclient> CALL DBMS_TRUSTED_CERTIFICATE_MANAGER.UPDATE_TRUSTED_CERTIFICATE
+('MySQL_Server_5.7.2_Auto_Generated_C_Certificate',
+'cluster B CA OMS',
+'-----BEGIN CERTIFICATE-----
+MIIDbTCCAlWgAwIBAgIJANYnM/dk7iDWMA0GCSqGSIb3DQEBCwUAMEwxCzAJBgNV
+BAYTAkNOMRAwDgYDVQQIDAdCZWlKaW5nMQwwCgYDVQQKDANBbnQxCzAJBgNVBAsM
+Ak9CMRAwDgYDVQQDDAdyb290IGNhMCAXDTIzMTEyMjA3NDUzMFoYDzMwMjMwMzI1
+MDc0NTMwWjBMMQswCQYDVQQGEwJDTjEQMA4GA1UECAwHQmVpSmluZzEMMAoGA1UE
+CgwDQW50MQswCQYDVQQLDAJPQjEQMA4GA1UEAwwHcm9vdCBjYTCCASIwDQYJKoZI
+hvcNAQEBBQADggEPADCCAQoCggEBANWsMw/GZqv7SgZDKUFt7DhyIbfVzVapoE+/
+MLkgD5Pncu+vb87k9jj6ddG6ekTq9ccih3m01j9EmXqoKEXbqn/TT9w4IGxR1STJ
+aL87Xe7NK8kpYPjtFfGMuXvzHQtsTy+KMHDW8+BLmitud4EEW75o/mA93AmtxmNZ
+bHBVkIn5Q2MoU/TyMG6PPSbdMIDH2AlF4QAcM8Jsqn7lqs1J1M/ock0sYOi2YcM8
+ceKUtcn7Xks22dSkOWYfMRlBXupXo/WqnKWJHpkZ0JWQn7b8De+qGwUwVoXU+DGH
+sadmc4ESFwPlzar+yCfRo0rRyQ4MhYTZb5A3HKgAOh3HpMjNeykCAwEAAaNQME4w
+HQYDVR0OBBYEFPG/BWN8y9A8Ti/ogcvAErbFXPxVMB8GA1UdIwQYMBaAFPG/BWN8
+y9A8Ti/ogcvAErbFXPxVMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEB
+AKEDH+ytxMjgKVM/kPczI/iQRUj4Ql8I3/alyrH24X46KRDRP9010i12GB+uvMkC
+HDjik5TAdzjcFzUWQNoDlO4FIL+lXkxZhqU4Ixd5/4MphMNubbcvMsVuui2WiyLE
+oI5KBoRJrJntQYPBTrmVSgjb6BaIFhOXD6O8Et48Iel6jFcy38d4ZtUxXxzavWGN
+n9iRXQeSBBA3XPYxlRoeeuePKmiqMSH1xltns0OyMRTzuUGOIfjIzQ8DKXx1STBw
+LznB/zI0o8/aCMqt/jthRfC/OKSZaWWHD3wwx9BE5vHzenQ0o+lUtjZGklSnIze7
+JXaALXmRyp5+2y6Z7pT1+6g=
+-----END CERTIFICATE-----'
+);
+Query OK, 0 rows affected
+```
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/9950.dbms-mview-mysql/100.dbms-mview-overview-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/9950.dbms-mview-mysql/100.dbms-mview-overview-mysql.md
index b73c67b5c2..bf4b90534f 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/9950.dbms-mview-mysql/100.dbms-mview-overview-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/9950.dbms-mview-mysql/100.dbms-mview-overview-mysql.md
@@ -3,7 +3,7 @@
| keywords | |
| dir-name | |
| dir-name-en | |
-| tenant-type | Oracle Mode |
+| tenant-type | MySQL Mode |
# Overview
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/9950.dbms-mview-mysql/200.purge-log-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/9950.dbms-mview-mysql/200.purge-log-mysql.md
index de5d423f49..29f9fb13cb 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/9950.dbms-mview-mysql/200.purge-log-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/9950.dbms-mview-mysql/200.purge-log-mysql.md
@@ -3,7 +3,7 @@
| keywords | |
| dir-name | |
| dir-name-en | |
-| tenant-type | Oracle Mode |
+| tenant-type | MySQL Mode |
# PURGE_LOG
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/9950.dbms-mview-mysql/300.refresh-mysql.md b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/9950.dbms-mview-mysql/300.refresh-mysql.md
index 568b5b60b5..4e40dc6a56 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/9950.dbms-mview-mysql/300.refresh-mysql.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/9950.dbms-mview-mysql/300.refresh-mysql.md
@@ -3,7 +3,7 @@
| keywords | |
| dir-name | |
| dir-name-en | |
-| tenant-type | Oracle Mode |
+| tenant-type | MySQL Mode |
# REFRESH
From 106f6c8d7f67185a09502a73f580bc493a1a316e Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Tue, 16 Apr 2024 17:45:37 +0800
Subject: [PATCH 28/63] v430-beta-300.pl-oracle-update-1
---
.../1100.create-trigger-oracle.md | 34 +++++-----
.../100.dbms-stats-overview-oracle.md | 6 +-
.../1850.gather-system-stats-oracle.md | 43 +++++++++++++
.../4050.set-system-stats-oracle.md | 64 +++++++++++++++++++
.../850.delete-system-stats-oracle.md | 43 +++++++++++++
.../100.pl-static-sql-overview-oracle.md | 7 +-
6 files changed, 177 insertions(+), 20 deletions(-)
create mode 100644 en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/1850.gather-system-stats-oracle.md
create mode 100644 en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/4050.set-system-stats-oracle.md
create mode 100644 en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/850.delete-system-stats-oracle.md
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1100.ddl-operations-on-stored-pl-units-oracle/1100.create-trigger-oracle.md b/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1100.ddl-operations-on-stored-pl-units-oracle/1100.create-trigger-oracle.md
index 8fe7bcfed6..778a7923f8 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1100.ddl-operations-on-stored-pl-units-oracle/1100.create-trigger-oracle.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1100.ddl-operations-on-stored-pl-units-oracle/1100.create-trigger-oracle.md
@@ -34,9 +34,7 @@ CREATE [ OR REPLACE ]
TRIGGER plsql_trigger_source
```
-
-
-The following shows more details:
+Specifically:
* The syntax of `plsql_trigger_source` is as follows:
@@ -114,29 +112,29 @@ The following shows more details:
| Syntax | Keyword or syntax node | Description |
|------------------------|-------------------------|--------------------------------------------------------|
| create_trigger | OR REPLACE | Re-creates this trigger (if any) and recompiles it. Before the trigger is redefined, users granted the access privilege can still access this trigger without the need to obtain the access privilege again. |
-| create_trigger | trigger_name | The name of the trigger to be created. |
+| create_trigger | trigger_name | The name of the trigger to be created. |
| plsql_trigger_source | schema | The name of the schema where the trigger is located. The default value is your schema. |
-| plsql_trigger_source | simple_dml_trigger | Creates a simple DML trigger. |
-| plsql_trigger_source | instead_of_dml_trigger | Creates an `INSTEAD OF` DML trigger. `INSTEAD OF` triggers can read `:OLD` and `:NEW` values, but cannot modify `:OLD` and `:NEW` values. |
-| plsql_trigger_source | trigger_ordering_clause | Specifies the firing order of triggers with the same timing point. The specified triggers must exist and have been successfully compiled, but they do not need to be enabled. |
+| plsql_trigger_source | simple_dml_trigger | Creates a simple DML trigger. |
+| plsql_trigger_source | instead_of_dml_trigger | Creates an `INSTEAD OF` DML trigger. An `INSTEAD OF` trigger can read the `:OLD` and `:NEW` values, but cannot modify these values. |
+| plsql_trigger_source | trigger_ordering_clause | The firing sequence of triggers at the same timing point. The specified triggers must exist and be correctly compiled. However, you do not need to enable the specified triggers. |
| simple_dml_trigger | BEFORE | Enables the database to fire the trigger before a trigger event occurs. A row-level trigger will fire before the corresponding row is modified. In a statement-level `BEFORE` trigger, the trigger body cannot read the `:NEW` or `:OLD` field. However, in a row-level `BEFORE` trigger, the trigger body can read and write the `:OLD` and `:NEW` fields. |
| simple_dml_trigger | AFTER | Enables the database to fire the trigger after a trigger event occurs. A row-level trigger will fire each time a row is modified. In a statement-level `AFTER` trigger, the trigger body cannot read the `:NEW` or `:OLD` field. However, in a row-level `AFTER` trigger, the trigger body can read and write the `:OLD` and `:NEW` fields. |
| simple_dml_trigger | FOR EACH ROW | Creates the trigger as a row-level trigger. When the optional trigger constraint defined in the `WHEN` condition is met, the database fires the row-level trigger. If this clause is ignored, the trigger is a statement-level trigger. If the optional trigger constraint is met, the database fires the trigger as a statement-level trigger only when a trigger statement is executed. |
| simple_dml_trigger | \[ ENABLE \| DISABLE \] | Creates a trigger that is in the enabled or disabled state. By default, a trigger in the enabled state is created. Creating a trigger that is in the disabled state ensures that the trigger can be correctly compiled before it is enabled. |
| simple_dml_trigger | WHEN (condition) | An SQL condition. The database evaluates each row affected by the trigger statement. If the `condition` value of an affected row is `TRUE`, `trigger_body` runs in this row. Otherwise, `trigger_body` does not run in this row. The trigger statement will run regardless of the condition value. In `condition`, do not place a colon (:) prior to `NEW`, `OLD`, or `PARENT`. If `WHEN (condition)` is specified, `FOR EACH ROW` must also be specified. `condition` must not contain subqueries or PL expressions, such as calls to user-defined functions. |
-| simple_dml_trigger | trigger_body | The PL block or `CALL` subroutine that the database uses to fire the trigger. The `CALL` subroutine is a PL subprogram wrapped in a PL package. If the `trigger_body` is a PL block and contains errors, the `CREATE [OR REPLACE]` statement will fail. |
-| simple_dml_trigger | dml_event_clause | Defines the conditions for triggering DML events in the trigger. DML events include insert (INSERT), update (UPDATE), and delete (DELETE) operations. |
+| simple_dml_trigger | trigger_body | The PL block or `CALL` subprogram used by the database to fire the trigger. The `CALL` subprogram is a PL subprogram encapsulated in a PL package. If `trigger_body` is a PL block that contains errors, the `CREATE [OR REPLACE]` statement will fail. |
+| simple_dml_trigger | dml_event_clause | The condition for triggering a DML event in the trigger. DML events include INSERT, UPDATE, and DELETE operations. |
| instead_of_dml_trigger | view | The name of the view on which the trigger is to be created. |
| instead_of_dml_trigger | FOR EACH ROW | Creates the `INSTEAD OF` trigger as a row-level trigger. |
-| instead_of_dml_trigger | \[ ENABLE \| DISABLE \] | Enables or disables the trigger. The default value is ENABLE. Creating the trigger in a disabled state ensures that the trigger is enabled only after it compiles correctly. |
-| instead_of_dml_trigger | DELETE | If the trigger is created on a view, it fires whenever a `DELETE` statement removes a row from the table defined by the view. |
-| instead_of_dml_trigger | INSERT | If the trigger is created on a view, the database fires the trigger whenever an `INSERT` statement adds a row to the table defined by the view. |
-| instead_of_dml_trigger | UPDATE | If the trigger is created on a view, the database fires the trigger whenever an `UPDATE` statement changes the value of a column in the table defined by the view. |
-| instead_of_dml_trigger | schema | The name of the schema where the trigger resides. The default value is the current user's schema. |
-| instead_of_dml_trigger | trigger_body | The PL block or `CALL` subroutine that the database uses to fire the trigger. The `CALL` subroutine is a PL subprogram wrapped in a PL package. If the `trigger_body` is a PL block and contains errors, the `CREATE [OR REPLACE]` statement will fail. |
-| trigger_ordering_clause | / | Specifies the firing order of triggers with the same timing point. The specified triggers must exist and have been successfully compiled, but they do not need to be enabled. |
-| trigger_ordering_clause | FOLLOWS | Indicates that the trigger being created must fire after the specified trigger.|
-| trigger_ordering_clause | PRECEDES | Indicates that the trigger being created must fire before the specified trigger.|
+| instead_of_dml_trigger | \[ ENABLE \| DISABLE \] | Specifies whether to enable or disable the trigger. By default, the trigger is enabled. Creating a trigger in the disabled state can ensure that the trigger is enabled after it is correctly compiled. |
+| instead_of_dml_trigger | DELETE | Enables the database to fire the trigger created on a view each time the `DELETE` statement deletes a row from the defined table. |
+| instead_of_dml_trigger | INSERT | Enables the database to fire the trigger created on a view each time the `INSERT` statement inserts a row to the defined table. |
+| instead_of_dml_trigger | UPDATE | Enables the database to fire the trigger created on a view each time the `UPDATE` statement modifies the value of a column in the defined table. |
+| instead_of_dml_trigger | schema | The name of the schema where the trigger is located. The default value is your schema. |
+| instead_of_dml_trigger | trigger_body | The PL block or `CALL` subprogram used by the database to fire the trigger. The `CALL` subprogram is a PL subprogram encapsulated in a PL package. If `trigger_body` is a PL block that contains errors, the `CREATE [OR REPLACE]` statement will fail. |
+| trigger_ordering_clause | No keyword specified | The firing sequence of triggers at the same timing point. The specified triggers must exist and be correctly compiled. However, you do not need to enable the specified triggers. |
+| trigger_ordering_clause | FOLLOWS | Indicates that the new trigger fires after the specified trigger. |
+| trigger_ordering_clause | PRECEDES | Indicates that the new trigger fires before the specified trigger. |
## Examples
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/100.dbms-stats-overview-oracle.md b/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/100.dbms-stats-overview-oracle.md
index 10f16756fc..b9e1ebd32c 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/100.dbms-stats-overview-oracle.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/100.dbms-stats-overview-oracle.md
@@ -40,6 +40,7 @@ The following table describes the `DBMS_STATS` subprograms supported by the curr
| [DELETE_TABLE_STATS](../15900.dbms-stats-oracle/600.delete-table-stats-oracle.md) | Deletes table-level statistics. |
| [DELETE_SCHEMA_STATS](../15900.dbms-stats-oracle/700.delete-schema-stats-oracle.md) | Deletes the statistics on all tables of the specified user. |
| [DELETE_SCHEMA_PREFS](../15900.dbms-stats-oracle/800.delete-schema-prefs-oracle.md) | Deletes a statistics preference from the statistics on all tables of the specified user. |
+| [DELETE_SYSTEM_STATS](../15900.dbms-stats-oracle/850.delete-system-stats-oracle.md) | Deletes system statistics. |
| [DELETE_TABLE_PREFS](../15900.dbms-stats-oracle/900.delete-table-prefs-oracle.md) | Deletes the statistics preferences of a table owned by the specified user. |
| [DROP_STAT_TABLE](../15900.dbms-stats-oracle/1000.drop-stat-table-oracle.md) | Drops a user statistics table. |
| [EXPORT_COLUMN_STATS](../15900.dbms-stats-oracle/1100.export-column-stats-oracle.md) | Exports column-level statistics. |
@@ -50,6 +51,7 @@ The following table describes the `DBMS_STATS` subprograms supported by the curr
| [GATHER_INDEX_STATS](../15900.dbms-stats-oracle/1600.gather-index-stats-oracle.md) | Collects index statistics. |
| [GATHER_TABLE_STATS](../15900.dbms-stats-oracle/1700.gather-table-stats-oracle.md) | Collects statistics on tables and columns. |
| [GATHER_SCHEMA_STATS](../15900.dbms-stats-oracle/1800.gather-schema-stats-oracle.md) | Collects the statistics on all objects of the specified user. |
+| [GATHER_SYSTEM_STATS](../15900.dbms-stats-oracle/1850.gather-system-stats-oracle.md) | Collects system statistics. |
| [GET_STATS_HISTORY_AVAILABILITY](../15900.dbms-stats-oracle/1900.get-stats-history-availability-oracle.md) | Obtains the time of the earliest available historical statistics. You cannot restore historical statistics that are earlier than this time. |
| [GET_STATS_HISTORY_RETENTION](../15900.dbms-stats-oracle/2000.get-stats-history-retention-oracle.md) | Obtains the retention period of the historical statistics. |
| [GET_PARAM](../15900.dbms-stats-oracle/2100.get-param-oracle.md) | Obtains the default values of parameters of procedures in the `DBMS_STATS` package. |
@@ -72,7 +74,9 @@ The following table describes the `DBMS_STATS` subprograms supported by the curr
| [SET_GLOBAL_PREFS](../15900.dbms-stats-oracle/3800.set-global-prefs-oracle.md) | Sets a global statistics preference. |
| [SET_PARAM](../15900.dbms-stats-oracle/3900.set-param-oracle.md) | Sets the default value for a parameter of procedures in the `DBMS_STATS` package. |
| [SET_SCHEMA_PREFS](../15900.dbms-stats-oracle/4000.set-schema-prefs-oracle.md) | Sets a statistics preference for the specified user. |
+| [SET_SYSTEM_STATS](../15900.dbms-stats-oracle/4050.set-system-stats-oracle.md) | Sets system statistics. |
| [SET_TABLE_PREFS](../15900.dbms-stats-oracle/4100.set-table-prefs-oracle.md) | Sets a statistics preference of a table owned by the specified user. |
| [UNLOCK_PARTITION_STATS](../15900.dbms-stats-oracle/4200.unlock-partition-stats-oracle.md) | Unlocks the statistics on a partition. |
| [UNLOCK_SCHEMA_STATS](../15900.dbms-stats-oracle/4300.unlock-schema-stats-oracle.md) | Unlocks the statistics on all tables of the specified user. |
-| [UNLOCK_TABLE_STATS](../15900.dbms-stats-oracle/4400.unlock-table-stats-oracle.md) | Unlocks the statistics on a table. |
\ No newline at end of file
+| [UNLOCK_TABLE_STATS](../15900.dbms-stats-oracle/4400.unlock-table-stats-oracle.md) | Unlocks the statistics on a table. |
+| [COPY_TABLE_STATS](4500.copy-table-stat-of-oracle-mode.md) | Copies statistics of the source partition or subpartition to the target partition or subpartition. |
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/1850.gather-system-stats-oracle.md b/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/1850.gather-system-stats-oracle.md
new file mode 100644
index 0000000000..ae52d74f1d
--- /dev/null
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/1850.gather-system-stats-oracle.md
@@ -0,0 +1,43 @@
+| Description | |
+|---------------|-----------------|
+| keywords | |
+| dir-name | |
+| dir-name-en | |
+| tenant-type | Oracle Mode |
+
+# GATHER_SYSTEM_STATS
+
+The `GATHER_SYSTEM_STATS` procedure collects system statistics.
+
+
+<main id="notice" type='explain'>
+<h4>Applicability</h4>
+<p>This topic applies only to OceanBase Database Enterprise Edition. OceanBase Database Community Edition provides only the MySQL mode.</p>
+</main>
+
+
+## Syntax
+
+```sql
+DBMS_STATS.GATHER_SYSTEM_STATS();
+```
+
+## Exceptions
+
+| Error code | Description |
+|-----------|------------------|
+| ORA-20001 | The entered name of the system statistical item is incorrect. |
+
+## Considerations
+
+* To call this procedure, you must connect to the database as the specified user, or have the `SYSDBA` privilege.
+
+## Examples
+
+Call the `DBMS_STATS.GATHER_SYSTEM_STATS` procedure to collect system statistics, such as the CPU speed, disk I/O performance, and network throughput.
+
+```sql
+BEGIN
+ -- Collect system statistics.
+ DBMS_STATS.GATHER_SYSTEM_STATS();
+END;
+/
+```
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/4050.set-system-stats-oracle.md b/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/4050.set-system-stats-oracle.md
new file mode 100644
index 0000000000..0368a83f38
--- /dev/null
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/4050.set-system-stats-oracle.md
@@ -0,0 +1,64 @@
+| Description | |
+|---------------|-----------------|
+| keywords | |
+| dir-name | |
+| dir-name-en | |
+| tenant-type | Oracle Mode |
+
+# SET_SYSTEM_STATS
+
+The `SET_SYSTEM_STATS` procedure sets system statistics.
+
+
+<main id="notice" type='explain'>
+<h4>Applicability</h4>
+<p>This topic applies only to OceanBase Database Enterprise Edition. OceanBase Database Community Edition provides only the MySQL mode.</p>
+</main>
+
+
+## Syntax
+
+```sql
+DBMS_STATS.SET_SYSTEM_STATS (
+ pname VARCHAR2,
+ pvalue NUMBER);
+```
+
+## Parameters
+
+| Parameter | Description |
+|---------|------------|
+| pname | The name of the parameter. Valid values: `cpu_speed`, `disk_seq_read_speed`, `disk_rnd_read_speed`, and `network_speed`. |
+| pvalue | The value of the parameter. |
+
+
+## Exceptions
+
+| Error code | Description |
+|-----------|------------------|
+| ORA-20001 | The entered name of the system statistical item is incorrect. |
+
+## Considerations
+
+* To call this procedure, you must connect to the database as the specified user, or have the `SYSDBA` privilege.
+
+
+## Examples
+
+Call the `DBMS_STATS.SET_SYSTEM_STATS` stored procedure to set system statistics. In this example, the parameter name is set to `cpu_speed` and `pvalue` is set to `5000`. This value indicates the number of CPU cycles that can be executed per second in the current hardware environment and must be estimated or obtained based on actual testing.
+
+```sql
+BEGIN
+ -- Set the CPU speed to 5000, which is a CPU speed in a specific measurement unit.
+ DBMS_STATS.SET_SYSTEM_STATS('cpu_speed', 5000);
+END;
+/
+```
+
+If you want to set the sequential read speed of the disk (`disk_seq_read_speed`), call the procedure as follows:
+
+```sql
+BEGIN
+ -- Set the sequential read speed of the disk to 200, which is a disk read speed in a specific measurement unit.
+ DBMS_STATS.SET_SYSTEM_STATS('disk_seq_read_speed', 200);
+END;
+/
+```
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/850.delete-system-stats-oracle.md b/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/850.delete-system-stats-oracle.md
new file mode 100644
index 0000000000..4c6efc4074
--- /dev/null
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/850.delete-system-stats-oracle.md
@@ -0,0 +1,43 @@
+| Description | |
+|---------------|-----------------|
+| keywords | |
+| dir-name | |
+| dir-name-en | |
+| tenant-type | Oracle Mode |
+
+# DELETE_SYSTEM_STATS
+
+The `DELETE_SYSTEM_STATS` procedure deletes system statistics.
+
+
+<main id="notice" type='explain'>
+<h4>Applicability</h4>
+<p>This topic applies only to OceanBase Database Enterprise Edition. OceanBase Database Community Edition provides only the MySQL mode.</p>
+</main>
+
+
+## Syntax
+
+```sql
+DBMS_STATS.DELETE_SYSTEM_STATS();
+```
+
+## Exceptions
+
+| Error code | Description |
+|-----------|------------------|
+| ORA-20001 | The entered name of the system statistical item is incorrect. |
+
+## Considerations
+
+* To call this procedure, you must connect to the database as the specified user, or have the `SYSDBA` privilege.
+
+## Examples
+
+Call the `DBMS_STATS.DELETE_SYSTEM_STATS` stored procedure to delete all system statistics without setting any parameters.
+
+```sql
+BEGIN
+ -- Delete all system statistics.
+ DBMS_STATS.DELETE_SYSTEM_STATS();
+END;
+/
+```
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/500.pl-static-sql-oracle/100.pl-static-sql-overview-oracle.md b/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/500.pl-static-sql-oracle/100.pl-static-sql-overview-oracle.md
index 3ca657906b..5510a45aa2 100644
--- a/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/500.pl-static-sql-oracle/100.pl-static-sql-overview-oracle.md
+++ b/en-US/700.reference/500.sql-reference/300.pl-reference/300.pl-oracle/500.pl-static-sql-oracle/100.pl-static-sql-overview-oracle.md
@@ -38,4 +38,9 @@ In OceanBase Database, PL static SQL supports complex data types for return valu
* `SELECT INTO`/`BULK INTO`
* `RETURNING INTO`/`BULK INTO` in DML statements
-* `FETCH INTO`/`BULK INTO`
\ No newline at end of file
+* `FETCH INTO`/`BULK INTO`
+
+
+<main id="notice" type='notice'>
+<h4>Notice</h4>
+<p>OceanBase Database V4.2.0 and later allow you to import data of multiple columns into a single RECORD variable by using the <code>SELECT INTO</code> statement, or into multiple OBJECT variables by using the <code>BULK INTO</code> statement.</p>
+</main>
+
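+As an illustration, the following minimal sketch imports two columns into a single RECORD variable by using `SELECT INTO`; the `emp(id, name)` table is a hypothetical example:
+
+```sql
+DECLARE
+  TYPE emp_rec IS RECORD (id NUMBER, name VARCHAR2(20));
+  r emp_rec;
+BEGIN
+  -- Import multiple columns into one RECORD variable.
+  SELECT id, name INTO r FROM emp WHERE id = 1;
+END;
+/
+```
+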
From a308f3269c91a9d2cfded02924f030ef0e8cfa6b Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Tue, 16 Apr 2024 21:17:46 +0800
Subject: [PATCH 29/63] v430-beta-300.database-object-management-update-3
---
.../100.transaction/600.redo-logs.md | 13 +-
...-a-table-for-mysql-tenant-of-mysql-mode.md | 20 +--
...ify-skip-index-properties-of-mysql-mode.md | 70 +++++++++
.../600.change-table-of-mysql-mode.md | 44 +++---
...-table-for-oracle-tenant-of-oracle-mode.md | 14 +-
...fy-skip-index-properties-of-oracle-mode.md | 70 +++++++++
.../600.change-table-of-oracle-mode.md | 48 +++---
.../100.create-a-dblink-of-oracle-mode.md | 59 ++++----
...ote-database-by-a-dblink-of-oracle-mode.md | 61 ++++----
...ote-database-by-a-dblink-of-oracle-mode.md | 6 +-
.../500.delete-a-dblink-of-oracle-mode.md | 6 +-
.../600.install-and-configure-the-oci.md | 141 +++++++++++++-----
12 files changed, 384 insertions(+), 168 deletions(-)
create mode 100644 en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/250.identify-skip-index-properties-of-mysql-mode.md
create mode 100644 en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/250.identify-skip-index-properties-of-oracle-mode.md
diff --git a/en-US/700.reference/100.oceanbase-database-concepts/800.transaction-management/100.transaction/600.redo-logs.md b/en-US/700.reference/100.oceanbase-database-concepts/800.transaction-management/100.transaction/600.redo-logs.md
index 6051cbd82d..c11b257d0d 100644
--- a/en-US/700.reference/100.oceanbase-database-concepts/800.transaction-management/100.transaction/600.redo-logs.md
+++ b/en-US/700.reference/100.oceanbase-database-concepts/800.transaction-management/100.transaction/600.redo-logs.md
@@ -17,7 +17,7 @@ OceanBase Database uses the redo logs for the following two purposes:
* Downtime recovery
- Like most mainstream databases, OceanBase Database follows the write-ahead logging (WAL) principle. Redo logs are persisted before the transactions are committed to ensure the atomicity and durability of transactions, which conforms to the principle of atomicity, consistency, isolation, and durability (ACID). If an observer process exits or the server on which the process resides is down, you can recover data by restarting the OBServer node and scanning and replaying local redo logs. Data that is not persisted when the server is down can be recovered by replaying redo logs.
+ Like most mainstream databases, OceanBase Database follows the write-ahead logging (WAL) principle. Redo logs are persisted before the transactions are committed to ensure the atomicity and durability of transactions, which conforms to the principle of atomicity, consistency, isolation, and durability (ACID). If an observer process exits or the OBServer node on which the process resides is down, you can recover data by restarting the OBServer node and scanning and replaying local redo logs. Data that is not persisted when the server is down can be recovered by replaying redo logs.
* Multi-replica data consistency
@@ -37,7 +37,12 @@ A partition in OceanBase Database may contain three to five replicas. Only one r
## Log replay
-The replay of redo logs is the foundation of the high availability capability provided by OceanBase Database. After logs are synchronized to a follower replica, the follower replica will hash the logs based on the transaction ID and distribute them into different task queues within the same thread pool for replay. Redo logs of different transactions in OceanBase Database are replayed in parallel, while redo logs of the same transaction are replayed in sequence. This approach ensures both the correctness and the speed of log replay. During the replay on a replica, a transaction context is first created, and then the operation history is reconstructed within that transaction context. Finally, when reaching the clog, the transaction is committed. This is actually another execution of the transaction on the image of the replica.
+
+<main id="notice" type='explain'>
+<h4>Note</h4>
+<p>OceanBase Database supports parallel replay and parallel submission of redo logs at the transaction layer.</p>
+</main>
+
+
+The replay of redo logs is the foundation of the high availability capability provided by OceanBase Database. After logs are synchronized to a follower replica, the follower replica will hash the logs based on the **`transaction_id` and index in the linked list of callback operations**, and then distribute them into different task queues within the thread pool for log replay in the current tenant. Redo logs of different transactions in OceanBase Database can be replayed in parallel, and different redo logs of the same transaction can also be replayed in parallel. This approach ensures both the correctness and the speed of log replay. During the replay on a replica, a transaction context is first created, and then the operation history is reconstructed within that transaction context. Finally, when reaching the clog, the transaction is committed. This is actually another execution of the transaction on the image of the replica.
## Log-based disaster recovery
@@ -45,6 +50,10 @@ By replaying redo logs, a follower executes the transaction that has been execut
For traditional databases, the state of an active transaction is lost with the memory information when the server is down or a new leader replica is elected. Active transactions that are recovered by replaying logs can only be rolled back because their states are unknown. From the aspect of redo logs, the log recording the commit operation is not found after all the redo logs are replayed. In OceanBase Database, if a new replica is elected as the leader, an active transaction can write its data and state to logs for a certain period of time and submit the logs to the majority of replicas. In this way, the transaction can continue to be executed in the new leader.
+## Log supplement
+
+When a transaction executes an UPDATE statement, the database system generates redo logs to ensure that the operation is durable and recoverable. In the default full mode, a redo log records the complete information of each updated row, including the column values that were not modified. In minimal mode, a redo log records only the modified columns and the necessary context information, which reduces the log size and improves storage efficiency.
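+
+As a sketch of switching to minimal mode, assuming the row image recorded in redo logs follows the MySQL-compatible `binlog_row_image` variable (an assumption; verify the control point for your version):
+
+```sql
+-- Assumption: MINIMAL selects the minimal logging mode described above.
+SET GLOBAL binlog_row_image = 'MINIMAL';
+```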
+
## Log control and recycling
Logs record all changes made to data in the database. Before recycling logs, you need to make sure that data related to the logs is persisted to disks. If you recycle logs before related data is persisted, the data cannot be recovered after a fault.
diff --git a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/200.create-a-table-for-mysql-tenant-of-mysql-mode.md b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/200.create-a-table-for-mysql-tenant-of-mysql-mode.md
index b8a6f0e22a..5e44a7cee1 100644
--- a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/200.create-a-table-for-mysql-tenant-of-mysql-mode.md
+++ b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/200.create-a-table-for-mysql-tenant-of-mysql-mode.md
@@ -9,7 +9,7 @@
You can execute the `CREATE TABLE` statement to create a table.
-This topic describes how to create non-partitioned tables. For information about how to create partitioned tables, see [Create a partitioned table](../300.manage-partitions-of-mysql-mode/200.create-a-partition-table-of-mysql-mode.md).
+For information about how to create partitioned tables, see [Create a partitioned table](../300.manage-partitions-of-mysql-mode/200.create-a-partition-table-of-mysql-mode.md).
## Create a non-partitioned table
@@ -79,7 +79,7 @@ After you create a replicated table, a replica of the replicated table is create
All followers must report their status, including the replica replay progress (data synchronization progress), to the leader. Generally, the replica replay progress on a follower lags behind that on the leader. A follower is considered by the leader as healthy only if the data latency between the follower and the leader is within the specified threshold. A healthy follower can quickly synchronize data modifications from the leader. If the leader considers a follower as healthy within a period of time, it will grant a lease period to the follower. In other words, the leader believes that the follower can keep healthy and provide strong-consistency read services within the lease period. During this lease period, the leader confirms the replay progress on the follower before each replicated table transaction is committed. The leader returns the commit result of a transaction only after the follower successfully replays the modifications in the transaction. At this time, you can read the modifications in the committed transaction from the follower.
-The replicated table feature is already supported in OceanBase Database V3.x. However, the database architecture is significantly modified in OceanBase Database V4.x. Therefore, to adapt to the new architecture of standalone log streams (LSs), OceanBase Database V4.x builds the partition-based readable version verification and LS-based lease granting mechanisms to ensure the correctness in strong-consistency reads.
+The replicated table feature is already supported in OceanBase Database V3.x. However, the database architecture is significantly modified in OceanBase Database V4.x. Therefore, to adapt to the new architecture of standalone log streams, OceanBase Database V4.x builds the partition-based readable version verification and log stream-based lease granting mechanisms to ensure the correctness in strong-consistency reads.
In addition, OceanBase Database V4.x improves the capability of switching the leader without terminating transactions. In OceanBase Database V4.x, replicated table transactions that are not committed when a leader switch is initiated by you or the load balancer can continue after the leader is switched, which is not supported in OceanBase Database V3.x. Compared with a replicated table of OceanBase Database V3.x, a replicated table of OceanBase Database V4.x has higher transaction write performance and more powerful disaster recovery capabilities. In addition, a replica crash has slighter impacts on read operations.
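
For reference, a replicated table is created by specifying the `DUPLICATE_SCOPE` option in the `CREATE TABLE` statement. A minimal sketch (the table name and columns are assumptions):

```sql
-- Create a replicated table; its replicas are maintained on the tenant's servers
-- so that strong-consistency reads can be served locally.
CREATE TABLE dup_tbl1 (c1 INT PRIMARY KEY, c2 VARCHAR(32)) DUPLICATE_SCOPE = 'cluster';
```
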
@@ -117,7 +117,7 @@ A sample query result is as follows:
1 rows in set
```
-In this example, the log stream with the ID of `1003` is a broadcast log stream. All replicated tables of the current tenant are created in this log stream. For more information about broadcast log streams, see [Replicas](../../../../600.manage/300.replica-management/100.replica-introduction.md).
+In this example, the log stream with the ID of `1003` is a broadcast log stream. All replicated tables of the current tenant are created in this log stream. For more information about broadcast log streams, see [About replicas](../../../../600.manage/300.replica-management/100.replica-introduction.md).
After a replicated table is created, you can perform insert and read/write operations on the replicated table as on a normal table. When you connect to OceanBase Database by using OceanBase Database Proxy (ODP), your read requests may be routed to any OBServer node. When you directly connect to OceanBase Database, if the local replica is readable, your read requests will be executed on the OBServer node that you directly connect to. For more information about database connection methods, see [Connection methods](../../../../300.develop/100.application-development-of-mysql-mode/100.connect-to-oceanbase-database-of-mysql-mode/100.connection-methods-overview-of-mysql-mode.md).
@@ -152,7 +152,7 @@ OceanBase Database allows you to create rowstore tables and convert rowstore tab
When [default_table_store_format](../../../800.configuration-items-and-system-variables/100.system-configuration-items/400.tenant-level-configuration-items/27000.default_table_store_format.md) is set to `row`, which is the default value, a rowstore table is created by default. When [default_table_store_format](../../../800.configuration-items-and-system-variables/100.system-configuration-items/400.tenant-level-configuration-items/27000.default_table_store_format.md) is not set to `row`, you can specify the `WITH COLUMN GROUP(all columns)` option to create a rowstore table.
-For information about how to convert a rowstore table to a columnstore table, see [Modify a table](600.change-table-of-mysql-mode.md). For information about how to create a columnstore index, see [Create an index](../500.manage-indexes-of-mysql-mode/200.create-an-index-of-mysql-mode.md).
+For information about how to convert a rowstore table into a columnstore table, see [Modify a table](600.change-table-of-mysql-mode.md). For information about how to create a columnstore index, see [Create an index](../500.manage-indexes-of-mysql-mode/200.create-an-index-of-mysql-mode.md).
You can specify the `WITH COLUMN GROUP(all columns)` option to create a rowstore table.
@@ -164,14 +164,14 @@ CREATE TABLE tbl1_cg (col1 INT PRIMARY KEY, col2 VARCHAR(50)) WITH COLUMN GROUP(
Note
- If you choose to specify the WITH COLUMN GROUP(all columns) option to create a rowstore table, the table is still in the rowstore format even after you execute the DROP COLUMN GROUP(all columns) statement to drop the column group.
+ If you create a rowstore table by specifying the WITH COLUMN GROUP(all columns) option, the table remains in the rowstore format even after you execute the DROP COLUMN GROUP(all columns) command to drop this column group.
## Create a columnstore table
-OceanBase Database allows you to create a columnstore table, switch a rowstore table to a columnstore table, and create a columnstore index. When you create a table in OceanBase Database, the table is a rowstore table by default. You can use the `WITH COLUMN GROUP` option to explicitly specify to create a columnstore table or a rowstore-columnstore redundant table.
+OceanBase Database allows you to create a columnstore table, convert a rowstore table into a columnstore table, and create a columnstore index. When you create a table in OceanBase Database, the table is a rowstore table by default. You can use the `WITH COLUMN GROUP` option to explicitly specify to create a columnstore table or a rowstore-columnstore redundant table.
-For information about how to convert a rowstore table to a columnstore table, see [Modify a table](600.change-table-of-mysql-mode.md). For information about how to create a columnstore index, see [Create an index](../500.manage-indexes-of-mysql-mode/200.create-an-index-of-mysql-mode.md).
+For information about how to convert a rowstore table into a columnstore table, see [Modify a table](600.change-table-of-mysql-mode.md). For information about how to create a columnstore index, see [Create an index](../500.manage-indexes-of-mysql-mode/200.create-an-index-of-mysql-mode.md).
You can specify the `WITH COLUMN GROUP(all columns, each column)` option to create a rowstore-columnstore redundant table.
@@ -189,14 +189,14 @@ Here is an example:
CREATE TABLE tbl2_cg (col1 INT PRIMARY KEY, col2 VARCHAR(50)) WITH COLUMN GROUP(each column);
```
-If you import a large amount of data to a columnstore table, you must initiate a major compaction to improve the read performance and start statistics collection to assist execution strategy adjustment.
+If you import a large amount of data to a columnstore table, you need to initiate a **major compaction** to optimize the read performance and perform **statistics collection** so that the optimizer can adjust the execution strategy.
-- **Major compaction**: After a batch data import, we recommend that you perform a major compaction to improve the read performance. The major compaction will consolidate segmented data for continuous physical storage, thereby reducing the disk I/Os for reading data. After a data import, initiate a major compaction in the tenant to ensure that all data is compacted to the baseline layer. For more information, see [`MAJOR and MINOR`](../../../500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/1500.alter-system-freeze-of-mysql-mode.md).
+- **Major compaction**: After a batch data import, we recommend that you perform a major compaction to improve the read performance. The major compaction will consolidate segmented data for continuous physical storage, thereby reducing the disk I/Os for reading data. After a data import, initiate a major compaction in the tenant to ensure that all data is compacted to the baseline layer. For more information, see [MAJOR and MINOR](../../../500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/1500.alter-system-freeze-of-mysql-mode.md).
- **Statistics collection**: After the major compaction, we recommend that you start statistics collection to help the optimizer generate an efficient query plan and execution strategy. You can execute the [`GATHER_SCHEMA_STATS`](../../../500.sql-reference/300.pl-reference/200.pl-mysql/1000.pl-system-package-mysql/15900.dbms-stats-mysql/1800.gather-schema-stats-mysql.md) procedure to collect statistics for all tables and query the [`GV$OB_OPT_STAT_GATHER_MONITOR`](../../../700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/13800.gv_ob_opt_stat_gather_monitor-of-mysql-mode.md) view for the collection progress.
-Note that the major compaction may slow down as the amount of data in the columnstore table increases.
+Note that as the amount of data in a columnstore table increases, the major compaction takes more time.
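+
+For example, both steps can be issued from the tenant as follows (a sketch; the schema name `test` is an assumption):
+
+```sql
+-- Initiate a tenant-level major compaction so that the imported data
+-- is compacted to the baseline layer.
+ALTER SYSTEM MAJOR FREEZE;
+
+-- Collect statistics for all tables in the `test` schema, then query
+-- the progress of the collection task.
+CALL dbms_stats.gather_schema_stats('test');
+SELECT * FROM oceanbase.GV$OB_OPT_STAT_GATHER_MONITOR;
+```
+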
## References
diff --git a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/250.identify-skip-index-properties-of-mysql-mode.md b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/250.identify-skip-index-properties-of-mysql-mode.md
new file mode 100644
index 0000000000..32093787c4
--- /dev/null
+++ b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/250.identify-skip-index-properties-of-mysql-mode.md
@@ -0,0 +1,70 @@
+|description||
+|---|---|
+|keywords||
+|dir-name||
+|dir-name-en||
+|tenant-type|MySQL Mode|
+
+# Column skip index attribute
+
+Data skipping is an optimization method that calculates data at the storage layer to skip unnecessary I/O. A skip index is a sparse index structure that provides the data skipping capability by storing pre-aggregated data, aiming to enhance the query efficiency. A skip index extends the metadata stored in the index tree to add column-level metadata fields for aggregating and storing the maximum value, minimum value, null count, and sum of the specified column data in the range corresponding to the index node. The aggregated data on the index is then used to dynamically prune the data during the calculation of pushed-down expressions, thereby reducing scanning overheads.
+
+
+ Note
+ The essence of pre-aggregation is to move calculation in the query execution phase ahead to the data writing phase. The pre-calculated results are stored to improve the query efficiency. This method requires extra calculation in the compaction task, and pre-aggregated data consumes storage space. Skip indexes are stored in the baseline data. Data updates in the pre-aggregation range can invalidate the pre-aggregated data. Therefore, frequent random updates can make skip indexes invalid and undermine the optimization effect.
+
+
+Skip indexes are a column attribute. In a columnstore table, OceanBase Database creates skip indexes of the `MIN_MAX` and `SUM` types by default for columns whose type meets the skip index requirements. An explicit setting of the skip index attribute takes effect mainly on rowstore columns, and is currently invalid for columnstore columns. In addition, when you query the column attributes of a table by using the `DESC table_name` or `SHOW CREATE TABLE table_name` statement, the skip index attribute is not displayed for a columnstore table, and only the explicitly set skip index attribute is displayed.
+
+## Skip index DDL behavior
+
+* The maintenance of skip index data is completed on the baseline data during the major compaction. All DDL operations for updating aggregated data depend on the progressive major compaction. That is, a skip index can be partially effective. For example, when a skip index is created on a column, each time a major compaction is completed, the skip index takes effect on the newly written data. After a full major compaction is completed and all data is rewritten, the skip index takes effect on all data in this column.
+
+* Skip indexes are a column attribute that can be added or modified through online DDL operations, as shown in the sketch after this list.
+
+* The skip index attribute of a column is restricted by the data type and characteristics of the column. A column with a cascading relationship, such as an indexed column, can inherit the corresponding aggregation attribute.
+
+* When you add the skip index attribute to a column, if the skip index size of the table would exceed the maximum storage size, the system reports an error. Using skip indexes is an optimization strategy that trades storage space for query performance. Therefore, when you attempt to add the skip index attribute to a column, make sure that your operation can improve the query performance, so as not to waste storage resources.
+
+* By default, the system creates a skip index that stores aggregated data of the `MIN_MAX` and `SUM` types for columnstore columns whose type meets the requirements.
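+
+For example, adding or changing the attribute on an existing column is a single `ALTER TABLE` statement (a sketch; the table name `tbl1` and column `c1` are assumptions):
+
+```sql
+-- Add a MIN_MAX skip index to an existing column; the change is applied as online DDL.
+ALTER TABLE tbl1 MODIFY COLUMN c1 INT SKIP_INDEX(MIN_MAX);
+```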
+
+## Skip index limitations
+
+* You cannot create a skip index for a JSON column or a spatial column.
+
+* You cannot create a skip index of the `SUM` type for a non-numeric column. Numeric types include integer types, fixed-point types, and floating-point types. The bit value type is not supported.
+
+* You cannot create a skip index for a generated column.
+
+## Specify the skip index attribute
+
+You can use `SKIP_INDEX(skip_index_option)` to specify the skip index attribute for a column. Values of `skip_index_option` are as follows:
+
+* `MIN_MAX`: the most common skip index type. A skip index of this type stores the maximum value, minimum value, and null count of the indexed column at the index node granularity. Skip indexes of this type can accelerate the pushdown of filters and `MIN/MAX` aggregation.
+
+* `SUM`: the skip index type that is used to accelerate the pushdown of `SUM` aggregation for numeric values.
+
+* `MIN_MAX, SUM`: the skip index type that uses both `MIN_MAX` and `SUM` aggregation.
+
+For information about how to modify the skip index attribute, see [Modify a table](600.change-table-of-mysql-mode.md).
+
+## Examples
+
+Create a table and specify the skip index attribute for columns.
+
+```sql
+CREATE TABLE test_skidx(
+ col1 INT SKIP_INDEX(MIN_MAX, SUM),
+ col2 FLOAT SKIP_INDEX(MIN_MAX),
+ col3 VARCHAR(1024) SKIP_INDEX(MIN_MAX),
+ col4 CHAR(10)
+ );
+```
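+
+Because the skip index attribute is set explicitly on these columns, it is retained in the table definition and can be confirmed as follows:
+
+```sql
+SHOW CREATE TABLE test_skidx;
+```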
+
+## References
+
+* [CREATE TABLE](../../../500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2600.create-table-of-mysql-mode.md)
+
+* [ALTER TABLE](../../../500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/1600.alter-table-of-mysql-mode.md)
+
+* [Modify a table](600.change-table-of-mysql-mode.md)
diff --git a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/600.change-table-of-mysql-mode.md b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/600.change-table-of-mysql-mode.md
index ac17e9eef2..35fe283318 100644
--- a/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/600.change-table-of-mysql-mode.md
+++ b/en-US/700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/600.change-table-of-mysql-mode.md
@@ -13,7 +13,7 @@ After a table is created, you can use the `ALTER TABLE` statement to modify it.
If you do not specify the collation or character set when you create a table, the character set and collation of the database are used by default. For more information, see [Database-level character sets and collations](../../../500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/100.basic-elements-of-mysql-mode/300.character-set-and-collation-of-mysql-mode/400.specify-character-set-and-collation-of-mysql-mode.md).
-After you create a table, you can modify the collation and character set of the table. The statement is as follows:
+After you create a table, you can modify the collation and character set of the table. The syntax is as follows:
```sql
ALTER TABLE table_name [[DEFAULT] CHARACTER SET [=] charset_name] [COLLATE [=] collation_name];
@@ -48,7 +48,7 @@ Here is an example:
## Modify the schema of a table
-OceanBase Database allows you to add columns, modify a column and its attributes, and delete a column.
+OceanBase Database allows you to add columns, modify a column and its attributes, and drop a column.
### Add columns
@@ -138,7 +138,7 @@ Assume that the schema of a table named `test` is as follows:
### Modify column attributes
-You can modify the name, type, default value, and Skip Index attribute of a column.
+You can rename a column, and modify the data type, default value, and skip index attribute of a column.
#### Rename a column
@@ -262,7 +262,7 @@ Execute the `DESCRIBE test` statement to query the table schema. The query resul
#### Change the default value of a column
-The following sample code changes the default value of the `c2` column to `2`:
+The following sample code changes the default value of the `c2` column to `2`.
```sql
obclient> ALTER TABLE test CHANGE COLUMN c2 c2 varchar(50) DEFAULT 2;
@@ -286,15 +286,15 @@ You can also use the following statement to change the default value of a column
ALTER TABLE table_name ALTER [COLUMN] column_name {SET DEFAULT const_value | DROP DEFAULT}
```
-#### Modify the Skip Index attribute of a column
+#### Modify the skip index attribute of a column
-OceanBase Database allows you to use the `ALTER TABLE` statement to add, modify, and delete the Skip Index attribute for a column.
+OceanBase Database allows you to use the `ALTER TABLE` statement to add, modify, and delete the skip index attribute of a column.
-For more information about the Skip Index attribute, see [Skip Index attribute of columns](250.identify-skip-index-properties-of-mysql-mode.md).
+For more information about the skip index attribute, see [Column skip index attribute](250.identify-skip-index-properties-of-mysql-mode.md).
Here is an example:
-1. Execute the following statement to create a table named `test_skidx`:
+1. Use the following SQL statement to create a table named `test_skidx`.
```sql
CREATE TABLE test_skidx(
@@ -305,19 +305,19 @@ Here is an example:
);
```
-2. Change the Skip Index attribute of the `col2` column in the `test_skidx` table to `SUM`.
+2. Change the type of the skip index on the `col2` column in the `test_skidx` table to `SUM`.
```sql
ALTER TABLE test_skidx MODIFY COLUMN col2 FLOAT SKIP_INDEX(SUM);
```
-3. Add the `MIN_MAX` Skip Index attribute for the `col4` column in the `test_skidx` table.
+3. Add the skip index attribute for a column after the table is created. That is, add a skip index of the `MIN_MAX` type for the `col4` column in the `test_skidx` table.
```sql
ALTER TABLE test_skidx MODIFY COLUMN col4 CHAR(10) SKIP_INDEX(MIN_MAX);
```
-4. Delete the Skip Index attribute for the `col1` column in the `test_skidx` table.
+4. Delete the skip index attribute for a column after the table is created. That is, delete the skip index attribute of the `col1` column in the `test_skidx` table.
```sql
ALTER TABLE test_skidx MODIFY COLUMN col1 INT SKIP_INDEX();
@@ -339,7 +339,7 @@ Assume that a table named `tbl1` is created by using the following statement:
obclient> CREATE TABLE tbl1 (c1 int, c2 varchar(32), c3 varchar(32), PRIMARY KEY(c1), UNIQUE KEY uk1(c2));
```
-The following sample code changes the collation of the `c2` column of the `tbl1` table:
+The following sample code changes the collation of the `c2` column of the `tbl1` table.
```sql
obclient> ALTER TABLE tbl1 MODIFY COLUMN c2 varchar(32) COLLATE utf8mb4_bin;
@@ -415,9 +415,7 @@ Assume that the schema of a table named `test` is as follows:
obclient> ALTER TABLE test DROP c2;
```
- Execute the `DESCRIBE test` statement to query the table schema.
-
- The result is as follows:
+ Execute the `DESCRIBE test` statement to query the table schema. The result is as follows:
```shell
+-------+-------------+------+-----+---------+-------+
@@ -509,7 +507,7 @@ obclient> RENAME TABLE test TO t1;
When you create a table in OceanBase Database, the table is a rowstore table by default. You can use the `WITH COLUMN GROUP` option to explicitly specify to create a columnstore table or a rowstore-columnstore redundant table. You can convert a rowstore table to a columnstore table by using the `ALTER TABLE` statement.
-### Convert a rowstore table to a columnstore table
+### Convert a rowstore table into a columnstore table
Here is an example:
@@ -519,13 +517,13 @@ Here is an example:
obclient> CREATE TABLE tbl1(col1 INT, col2 VARCHAR(30), col3 DATE);
```
-2. Convert the rowstore table `tbl1` to columnar storage.
+2. Convert the rowstore table `tbl1` into a columnstore table.
```sql
obclient> ALTER TABLE tbl1 ADD COLUMN GROUP(each column);
```
-### Convert a rowstore table to a rowstore-columnstore redundant table
+### Convert a rowstore table into a rowstore-columnstore redundant table
Here is an example:
@@ -535,13 +533,13 @@ Here is an example:
obclient> CREATE TABLE tbl2(col1 INT, col2 VARCHAR(30), col3 DATE);
```
-2. Convert the rowstore table `tbl2` to a rowstore-columnstore redundant table.
+2. Convert the rowstore table `tbl2` into a rowstore-columnstore redundant table.
```sql
obclient> ALTER TABLE tbl2 ADD COLUMN GROUP(all columns, each column);
```
-### Convert a rowstore-columnstore redundant table to a columnstore table
+### Convert a rowstore-columnstore redundant table into a columnstore table
Here is an example:
@@ -551,13 +549,13 @@ Here is an example:
obclient> CREATE TABLE tbl3(col1 INT, col2 VARCHAR(30), col3 DATE) WITH COLUMN GROUP(all columns, each column);
```
-2. Convert the rowstore-columnstore redundant table `tbl3` to a columnstore table.
+2. Convert the rowstore-columnstore redundant table `tbl3` into a columnstore table.
```sql
obclient> ALTER TABLE tbl3 DROP COLUMN GROUP(all columns);
```
-### Convert a rowstore-columnstore redundant table to a rowstore table
+### Convert a rowstore-columnstore redundant table into a rowstore table
Here is an example:
@@ -567,7 +565,7 @@ Here is an example:
obclient> CREATE TABLE tbl4(col1 INT, col2 VARCHAR(30), col3 DATE) WITH COLUMN GROUP(all columns, each column);
```
-2. Convert the rowstore-columnstore redundant table `tbl4` to a rowstore table.
+2. Convert the rowstore-columnstore redundant table `tbl4` into a rowstore table.
```sql
obclient> ALTER TABLE tbl4 DROP COLUMN GROUP(each column);
diff --git a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/200.create-a-table-for-oracle-tenant-of-oracle-mode.md b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/200.create-a-table-for-oracle-tenant-of-oracle-mode.md
index 6afb52b356..65bb17bda7 100644
--- a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/200.create-a-table-for-oracle-tenant-of-oracle-mode.md
+++ b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/200.create-a-table-for-oracle-tenant-of-oracle-mode.md
@@ -76,7 +76,7 @@ After you create a replicated table, a replica of the replicated table is create
All followers must report their status, including the replica replay progress (data synchronization progress), to the leader. Generally, the replica replay progress on a follower lags behind that on the leader. A follower is considered by the leader as healthy only if the data latency between the follower and the leader is within the specified threshold. A healthy follower can quickly synchronize data modifications from the leader. If the leader considers a follower as healthy within a period of time, it will grant a lease period to the follower. In other words, the leader believes that the follower can keep healthy and provide strong-consistency read services within the lease period. During this lease period, the leader confirms the replay progress on the follower before each replicated table transaction is committed. The leader returns the commit result of a transaction only after the follower successfully replays the modifications in the transaction. At this time, you can read the modifications in the committed transaction from the follower.
-The replicated table feature is already supported in OceanBase Database V3.x. However, the database architecture is significantly modified in OceanBase Database V4.x. Therefore, to adapt to the new architecture of standalone log streams (LSs), OceanBase Database V4.x builds the partition-based readable version verification and LS-based lease granting mechanisms to ensure the correctness in strong-consistency reads.
+The replicated table feature is already supported in OceanBase Database V3.x. However, the database architecture is significantly modified in OceanBase Database V4.x. Therefore, to adapt to the new architecture of standalone log streams, OceanBase Database V4.x builds the partition-based readable version verification and log stream-based lease granting mechanisms to ensure the correctness in strong-consistency reads.
In addition, OceanBase Database V4.x improves the capability of switching the leader without terminating transactions. In OceanBase Database V4.x, replicated table transactions that are not committed when a leader switch is initiated by you or the load balancer can continue after the leader is switched, which is not supported in OceanBase Database V3.x. Compared with a replicated table of OceanBase Database V3.x, a replicated table of OceanBase Database V4.x has higher transaction write performance and more powerful disaster recovery capabilities. In addition, a replica crash has slighter impacts on read operations.
@@ -136,7 +136,7 @@ OceanBase Database allows you to create rowstore tables and convert rowstore tab
When [default_table_store_format](../../../800.configuration-items-and-system-variables/100.system-configuration-items/400.tenant-level-configuration-items/27000.default_table_store_format.md) is set to `row`, which is the default value, a rowstore table is created by default. When [default_table_store_format](../../../800.configuration-items-and-system-variables/100.system-configuration-items/400.tenant-level-configuration-items/27000.default_table_store_format.md) is not set to `row`, you can specify the `WITH COLUMN GROUP(all columns)` option to create a rowstore table.
-For information about how to convert a rowstore table to a columnstore table, see [Modify a table](600.change-table-of-oracle-mode.md). For information about how to create a columnstore index, see [Create an index](../400.manage-indexes-of-oracle-mode/200.create-an-index-of-oracle-mode.md).
+For information about how to convert a rowstore table into a columnstore table, see [Modify a table](600.change-table-of-oracle-mode.md). For information about how to create a columnstore index, see [Create an index](../400.manage-indexes-of-oracle-mode/200.create-an-index-of-oracle-mode.md).
You can specify the `WITH COLUMN GROUP(all columns)` option to create a rowstore table.
@@ -148,14 +148,14 @@ CREATE TABLE tbl1_cg (col1 INT PRIMARY KEY, col2 VARCHAR(50)) WITH COLUMN GROUP(
Note
- If you choose to specify the WITH COLUMN GROUP(all columns) option to create a rowstore table, the table is still in the rowstore format even after you execute the DROP COLUMN GROUP(all columns) statement to drop the column group.
+ If you create a rowstore table by specifying the WITH COLUMN GROUP(all columns) option, the table remains in the rowstore format even after you execute the DROP COLUMN GROUP(all columns) command to drop this column group.
## Create a columnstore table
-OceanBase Database allows you to create a columnstore table, switch a rowstore table to a columnstore table, and create a columnstore index. You can use the `WITH COLUMN GROUP` option to explicitly specify to create a columnstore table or a rowstore-columnstore redundant table. You can also set the `default_table_store_format` parameter to specify columnstore or rowstore-columnstore redundant as the default store format.
+OceanBase Database allows you to create a columnstore table, convert a rowstore table into a columnstore table, and create a columnstore index. You can use the `WITH COLUMN GROUP` option to explicitly specify to create a columnstore table or a rowstore-columnstore redundant table. You can also set the `default_table_store_format` parameter to specify columnstore or rowstore-columnstore redundancy as the default storage format.
-For information about how to convert a rowstore table to a columnstore table, see [Modify a table](600.change-table-of-oracle-mode.md). For information about how to create a columnstore index, see [Create an index](../400.manage-indexes-of-oracle-mode/200.create-an-index-of-oracle-mode.md).
+For information about how to convert a rowstore table into a columnstore table, see [Modify a table](600.change-table-of-oracle-mode.md). For information about how to create a columnstore index, see [Create an index](../400.manage-indexes-of-oracle-mode/200.create-an-index-of-oracle-mode.md).
You can specify the `WITH COLUMN GROUP(all columns, each column)` option to create a rowstore-columnstore redundant table.
@@ -173,14 +173,14 @@ Here is an example:
CREATE TABLE tbl2_cg (col1 NUMBER PRIMARY KEY, col2 VARCHAR2(50)) WITH COLUMN GROUP(each column);
```
-If you import a large amount of data to a columnstore table, you must initiate a major compaction to improve the read performance and start statistics collection to assist execution strategy adjustment.
+If you import a large amount of data to a columnstore table, you need to initiate a **major compaction** to optimize the read performance and perform **statistics collection** so that the optimizer can adjust the execution strategy.
- **Major compaction**: After a batch data import, we recommend that you perform a major compaction to improve the read performance. The major compaction will consolidate segmented data for continuous physical storage, thereby reducing the disk I/Os for reading data. After a data import, initiate a major compaction in the tenant to ensure that all data is compacted to the baseline layer. For more information, see [`MAJOR and MINOR`](../../../500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/4600.alter-system-major-freeze-of-oracle-mode.md).
- **Statistics collection**: After the major compaction, we recommend that you start statistics collection to help the optimizer generate an efficient query plan and execution strategy. You can execute the [`GATHER_SCHEMA_STATS`](../../../500.sql-reference/300.pl-reference/300.pl-oracle/1400.pl-system-package-oracle/15900.dbms-stats-oracle/1800.gather-schema-stats-oracle.md) procedure to collect statistics for all tables and query the [`GV$OB_OPT_STAT_GATHER_MONITOR`](../../../700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/12000.gv$ob_opt_stat_gather_monitor-of-oracle-mode.md) view for the collection progress.
-Note that the major compaction may slow down as the amount of data in the columnstore table increases.
+Note that as the amount of data in a columnstore table increases, the major compaction takes more time.
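+
+For example, both steps can be issued from the tenant as follows (a sketch; the schema name `OB_USER` is an assumption):
+
+```sql
+-- Initiate a tenant-level major compaction so that the imported data
+-- is compacted to the baseline layer.
+ALTER SYSTEM MAJOR FREEZE;
+
+-- Collect statistics for all tables in the OB_USER schema.
+BEGIN
+  DBMS_STATS.GATHER_SCHEMA_STATS('OB_USER');
+END;
+/
+
+-- Query the progress of the collection task.
+SELECT * FROM GV$OB_OPT_STAT_GATHER_MONITOR;
+```
+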
## References
diff --git a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/250.identify-skip-index-properties-of-oracle-mode.md b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/250.identify-skip-index-properties-of-oracle-mode.md
new file mode 100644
index 0000000000..d0c894c121
--- /dev/null
+++ b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/250.identify-skip-index-properties-of-oracle-mode.md
@@ -0,0 +1,70 @@
+|description||
+|---|---|
+|keywords||
+|dir-name||
+|dir-name-en||
+|tenant-type|Oracle Mode|
+
+# Column skip index attribute
+
+Data skipping is an optimization method that calculates data at the storage layer to skip unnecessary I/O. A skip index is a sparse index structure that provides the data skipping capability by storing pre-aggregated data, aiming to enhance the query efficiency. A skip index extends the metadata stored in the index tree to add column-level metadata fields for aggregating and storing the maximum value, minimum value, null count, and sum of the specified column data in the range corresponding to the index node. The aggregated data on the index is then used to dynamically prune the data during the calculation of pushed-down expressions, thereby reducing scanning overheads.
+
+
+ Note
+ The essence of pre-aggregation is to move calculation in the query execution phase ahead to the data writing phase. The pre-calculated results are stored to improve the query efficiency. This method requires extra calculation in the compaction task, and pre-aggregated data consumes storage space. Skip indexes are stored in the baseline data. Data updates in the pre-aggregation range can invalidate the pre-aggregated data. Therefore, frequent random updates can make skip indexes invalid and undermine the optimization effect.
+
+
+Skip indexes are a column attribute. In a columnstore table, OceanBase Database creates skip indexes of the `MIN_MAX` and `SUM` types by default for columns whose type meets the skip index requirements. An explicit setting of the skip index attribute takes effect mainly on rowstore columns, and is currently invalid for columnstore columns. In addition, when you query the column attributes of a table by using the `DESC table_name` or `SHOW CREATE TABLE table_name` statement, the skip index attribute is not displayed for a columnstore table, and only the explicitly set skip index attribute is displayed.
+
+## Skip index DDL behavior
+
+* The maintenance of skip index data is completed on the baseline data during the major compaction. All DDL operations for updating aggregated data depend on the progressive major compaction. That is, a skip index can be partially effective. For example, when a skip index is created on a column, each time a major compaction is completed, the skip index takes effect on the newly written data. After a full major compaction is completed and all data is rewritten, the skip index takes effect on all data in this column.
+
+* Skip indexes are a column attribute that can be added or modified through online DDL operations, as shown in the sketch after this list.
+
+* The skip index attribute of a column is restricted by the data type and characteristics of the column. A column with a cascading relationship, such as an indexed column, can inherit the corresponding aggregation attribute.
+
+* When you add the skip index attribute to a column, if the skip index size of the table would exceed the maximum storage size, the system reports an error. Using skip indexes is an optimization strategy that trades storage space for query performance. Therefore, when you attempt to add the skip index attribute to a column, make sure that your operation can improve the query performance, so as not to waste storage resources.
+
+* By default, the system creates a skip index that stores aggregated data of the `MIN_MAX` and `SUM` types for columnstore columns whose type meets the requirements.
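+
+For example, adding or changing the attribute on an existing column is a single `ALTER TABLE` statement (a sketch; the table name `tbl1` and column `c1` are assumptions):
+
+```sql
+-- Add a MIN_MAX skip index to an existing column; the change is applied as online DDL.
+ALTER TABLE tbl1 MODIFY c1 NUMBER SKIP_INDEX(MIN_MAX);
+```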
+
+## Skip index limitations
+
+* You cannot create a skip index for a JSON column or a spatial column.
+
+* You cannot create a skip index of the `SUM` type for a non-numeric column. Numeric types include integer types, fixed-point types, and floating-point types. The bit value type is not supported.
+
+* You cannot create a skip index for a generated column.
+
+## Specify the skip index attribute
+
+You can use `SKIP_INDEX(skip_index_option)` to specify the skip index attribute for a column. Values of `skip_index_option` are as follows:
+
+* `MIN_MAX`: the most common skip index type. A skip index of this type stores the maximum value, minimum value, and null count of the indexed column at the index node granularity. Skip indexes of this type can accelerate the pushdown of filters and `MIN/MAX` aggregation.
+
+* `SUM`: the skip index type that is used to accelerate the pushdown of `SUM` aggregation for numeric values.
+
+* `MIN_MAX, SUM`: the skip index type that uses both `MIN_MAX` and `SUM` aggregation.
+
+For information about how to modify the skip index attribute, see [Modify a table](600.change-table-of-oracle-mode.md).
+
+## Examples
+
+Create a table and specify the skip index attribute for columns.
+
+```sql
+CREATE TABLE test_skidx(
+ col1 NUMBER SKIP_INDEX(MIN_MAX, SUM),
+ col2 FLOAT SKIP_INDEX(MIN_MAX),
+ col3 VARCHAR2(1024) SKIP_INDEX(MIN_MAX),
+ col4 CHAR(10)
+ );
+```
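+
+Because the skip index attribute is set explicitly on these columns, it is retained in the table definition. Assuming the table was created in the current schema, one way to confirm it is through the `DBMS_METADATA` package:
+
+```sql
+SELECT DBMS_METADATA.GET_DDL('TABLE', 'TEST_SKIDX') FROM DUAL;
+```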
+
+## References
+
+* [CREATE TABLE](../../../500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/2400.create-table-of-oracle-mode.md)
+
+* [ALTER TABLE](../../../500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1000.alter-table-of-oracle-mode.md)
+
+* [Modify a table](600.change-table-of-oracle-mode.md)
diff --git a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/600.change-table-of-oracle-mode.md b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/600.change-table-of-oracle-mode.md
index c2763d8349..414fbeac7a 100644
--- a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/600.change-table-of-oracle-mode.md
+++ b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/600.change-table-of-oracle-mode.md
@@ -15,7 +15,7 @@ You cannot perform other DDL operations when you modify the primary key or colum
## Modify the schema of a table
-OceanBase Database allows you to add a column to a table, modify a column and its attributes, and delete a column from a table.
+OceanBase Database allows you to add a column to a table, modify a column and its attributes, and drop a column from a table.
* You can add columns except for a primary key column to a table. To add a primary key column, you can add a normal column and then add a primary key to the column. For more information, see [Define column constraints](../100.manage-tables-of-oracle-mode/400.define-the-constraint-type-for-a-column-of-oracle-mode.md).
@@ -80,9 +80,7 @@ OceanBase Database allows you to add a column to a table, modify a column and it
2 rows in set
```
-* You can rename a column.
-
- Here is an example:
+* You can rename a column. Here is an example:
```sql
obclient> ALTER TABLE test RENAME COLUMN c1 TO c;
@@ -98,9 +96,7 @@ OceanBase Database allows you to add a column to a table, modify a column and it
2 rows in set
```
-* Change the default value of a column.
-
- Here is an example:
+* You can change the default value of a column. Here is an example:
```sql
obclient> ALTER TABLE test MODIFY c DEFAULT 1;
@@ -116,9 +112,7 @@ OceanBase Database allows you to add a column to a table, modify a column and it
2 rows in set
```
-* You can modify the `NOT NULL` constraint on a column.
-
- Here is an example:
+* You can modify the `NOT NULL` constraint on a column. Here is an example:
```sql
obclient> DESCRIBE test;
@@ -143,9 +137,7 @@ OceanBase Database allows you to add a column to a table, modify a column and it
2 rows in set
```
-* You can delete columns except for the primary key column and indexed columns from a table.
-
- Here is an example:
+* You can drop columns except for the primary key column and indexed columns from a table. Here is an example:
```sql
obclient> DESCRIBE test;
@@ -211,15 +203,15 @@ obclient> ALTER TABLE test SET REPLICA_NUM=2;
Query OK, 0 rows affected
```
-## Modify the Skip Index attribute of a column
+## Modify the skip index attribute of a column
-OceanBase Database allows you to use the `ALTER TABLE` statement to add, modify, and delete the Skip Index attribute for a column.
+OceanBase Database allows you to use the `ALTER TABLE` statement to add, modify, and delete the skip index attribute of a column.
-For more information about the Skip Index attribute, see [Skip Index attribute of columns](250.identify-skip-index-properties-of-oracle-mode.md).
+For more information about the skip index attribute, see [Column skip index attribute](250.identify-skip-index-properties-of-oracle-mode.md).
Here is an example:
-1. Execute the following statement to create a table named `test_skidx`:
+1. Use the following SQL statement to create a table named `test_skidx`.
```sql
CREATE TABLE test_skidx(
@@ -230,19 +222,19 @@ Here is an example:
);
```
-2. Change the Skip Index attribute of the `col2` column in the `test_skidx` table to `SUM`.
+2. Change the type of the skip index on the `col2` column in the `test_skidx` table to `SUM`.
```sql
ALTER TABLE test_skidx MODIFY col2 FLOAT SKIP_INDEX(SUM);
```
-3. Add the `MIN_MAX` Skip Index attribute for the `col4` column in the `test_skidx` table.
+3. Add the skip index attribute for a column after the table is created. That is, add a skip index of the `MIN_MAX` type for the `col4` column in the `test_skidx` table.
```sql
ALTER TABLE test_skidx MODIFY col4 CHAR(10) SKIP_INDEX(MIN_MAX);
```
-4. Delete the Skip Index attribute for the `col1` column in the `test_skidx` table.
+4. Delete the skip index attribute for a column after the table is created. That is, delete the skip index attribute of the `col1` column in the `test_skidx` table.
```sql
ALTER TABLE test_skidx MODIFY col1 NUMBER SKIP_INDEX();
@@ -252,7 +244,7 @@ Here is an example:
When you create a table in OceanBase Database, the table is a rowstore table by default. You can use the `WITH COLUMN GROUP` option to explicitly specify to create a columnstore table or a rowstore-columnstore redundant table. You can convert a rowstore table to a columnstore table by using the `ALTER TABLE` statement.
-### Convert a rowstore table to a columnstore table
+### Convert a rowstore table into a columnstore table
Here is an example:
@@ -262,13 +254,13 @@ Here is an example:
obclient> CREATE TABLE tbl1(col1 NUMBER, col2 VARCHAR2(30));
```
-2. Convert the rowstore table `tbl1` to columnar storage.
+2. Convert the rowstore table `tbl1` into a columnstore table.
```sql
obclient> ALTER TABLE tbl1 ADD COLUMN GROUP(each column);
```
-### Convert a rowstore table to a rowstore-columnstore redundant table
+### Convert a rowstore table into a rowstore-columnstore redundant table
Here is an example:
@@ -278,13 +270,13 @@ Here is an example:
obclient> CREATE TABLE tbl2(col1 NUMBER, col2 VARCHAR2(30));
```
-2. Convert the rowstore table `tbl2` to a rowstore-columnstore redundant table.
+2. Convert the rowstore table `tbl2` into a rowstore-columnstore redundant table.
```sql
obclient> ALTER TABLE tbl2 ADD COLUMN GROUP(all columns, each column);
```
-### Convert a rowstore-columnstore redundant table to a columnstore table
+### Convert a rowstore-columnstore redundant table into a columnstore table
Here is an example:
@@ -294,13 +286,13 @@ Here is an example:
obclient> CREATE TABLE tbl3(col1 NUMBER, col2 VARCHAR2(30)) WITH COLUMN GROUP(all columns, each column);
```
-2. Convert the rowstore-columnstore redundant table `tbl3` to a columnstore table.
+2. Convert the rowstore-columnstore redundant table `tbl3` into a columnstore table.
```sql
obclient> ALTER TABLE tbl3 DROP COLUMN GROUP(all columns);
```
-### Convert a rowstore-columnstore redundant table to a rowstore table
+### Convert a rowstore-columnstore redundant table into a rowstore table
Here is an example:
@@ -310,7 +302,7 @@ Here is an example:
obclient> CREATE TABLE tbl4(col1 NUMBER, col2 VARCHAR2(30)) WITH COLUMN GROUP(all columns, each column);
```
-2. Convert the rowstore-columnstore redundant table `tbl4` to a rowstore table.
+2. Convert the rowstore-columnstore redundant table `tbl4` into a rowstore table.
```sql
obclient> ALTER TABLE tbl4 DROP COLUMN GROUP(each column);
diff --git a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/100.create-a-dblink-of-oracle-mode.md b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/100.create-a-dblink-of-oracle-mode.md
index 203a9fc982..31999feded 100644
--- a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/100.create-a-dblink-of-oracle-mode.md
+++ b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/100.create-a-dblink-of-oracle-mode.md
@@ -9,7 +9,7 @@
OceanBase Database provides DBLinks to support cross-data source access. You can use a DBLink to access a remote database from your local database.
-In the Oracle mode of OceanBase Database, you can create a DBLink between two Oracle tenants of OceanBase Database or between an Oracle tenant of OceanBase Database to an Oracle database.
+In the Oracle mode of OceanBase Database, you can create a DBLink between two Oracle tenants of OceanBase Database or between an Oracle tenant of OceanBase Database and an Oracle database.
## Prerequisites
@@ -24,7 +24,7 @@ To create a DBLink, you must specify the DBLink name and provide information, su
The SQL syntax for creating a DBLink from one Oracle tenant of OceanBase Database to another is as follows:
```sql
-obclient> CREATE DATABASE LINK dblink_name CONNECT TO user@tenant IDENTIFIED BY remote_password [OB] HOST 'ip:port' [CLUSTER "cluster_name"]
+obclient> CREATE DATABASE LINK dblink_name CONNECT TO user@tenant IDENTIFIED BY remote_password [OB] HOST 'ip:port' [CLUSTER "cluster_name"]
[MY_NAME local_user@local_tenant IDENTIFIED BY local_password HOST 'local_ip:local_port' [CLUSTER "local_cluster_name"]];
```
@@ -36,7 +36,7 @@ where
* `tenant` specifies the tenant name of the remote OceanBase database.
-* `remote_password` specifies the password for logging on to the remote OceanBase database. If the password contains a special character, such as `@`, `#`, or `!`, you must enclose the password with double quotation marks ("") to avoid syntax errors.
+* `remote_password` specifies the password for logging on to the remote OceanBase database. If the password contains a special character, such as `@`, `#`, or `!`, you must enclose the password with double quotation marks (") to avoid syntax errors.
* `OB` specifies that the type of the remote database is OceanBase. This parameter is optional.
@@ -50,30 +50,30 @@ where
If the SQL port of an OBServer node is specified, network connectivity must be ensured between the local database and the specified OBServer node.
-* `cluster_name` specifies the name of the cluster on the remote OceanBase database. You need to specify the cluster name only when the specified IP address and port number are those of the ODP of the cluster and the ODP is deployed by using ConfigUrl. The cluster name is case-sensitive and must be enclosed with double quotation marks ("").
+* `cluster_name` specifies the name of the cluster on the remote OceanBase database. You need to specify the cluster name only when the specified IP address and port number are those of the ODP of the cluster and the ODP is deployed by using ConfigUrl. The cluster name is case-sensitive and must be enclosed with double quotation marks (").
-
- Note
- You can deploy a proxy such as ODP for a cluster by using the ConfigURL or RootService list.
-
- - ConfigUrl: When ODP is started, the obproxy_config_server_url parameter specified in the command is used to query the RootServer information of the OceanBase cluster.
- - RoortService list: When ODP is started, the -r parameter specified in the command is used to query the RootServer information of the OceanBase cluster.
-
-
+
+ Note
+ You can deploy a proxy such as ODP for a cluster by using the ConfigURL or RootService list.
+
+ - ConfigUrl: When ODP is started, the obproxy_config_server_url parameter specified in the command is used to query the RootServer information of the OceanBase cluster.
+ - RootService list: When ODP is started, the -r parameter specified in the command is used to query the RootServer information of the OceanBase cluster.
+
+
-* `[MY_NAME local_user@local_tenant IDENTIFIED BY local_password HOST 'local_ip:local_port' [CLUSTER "local_cluster_name"]]` specifies the username, tenant name, password, IP address, port number, and cluster of the local database, which are required only if you want to use the reverse link feature of DBLink.
-
- * `local_user`: the username of the local database.
-
- * `local_tenant`: the name of the tenant to which the local database belongs.
-
- * `local_password`: the password used to log on to the local database. If the password contains a special character, such as `@`, `#`, or `!`, you must enclose the password with double quotation marks ("") to avoid syntax errors.
- * `local_ip`: the IP address of an OBServer node in the cluster of the local database.
-
- * `local_port`: the SQL port number of an OBServer node in the cluster of the local database. By default, the SQL port number of the OBServer node is 2881.
-
- * `local_cluster_name`: the name of the cluster of the local OceanBase database. You need to specify the cluster name only when the specified IP address and port number are those of the ODP of the cluster and the ODP is deployed by using ConfigUrl. The cluster name must be enclosed with double quotation marks ("").
+* `[MY_NAME local_user@local_tenant IDENTIFIED BY local_password HOST 'local_ip:local_port' [CLUSTER "local_cluster_name"]]` specifies the username, tenant name, password, IP address, port number, and cluster of the local database, which are required only if you want to use the reverse link feature of DBLink.
+* `local_user`: the username of the local database.
+
+* `local_tenant`: the name of the tenant to which the local database belongs.
+
+* `local_password`: the password used to log on to the local database. If the password contains a special character, such as `@`, `#`, or `!`, you must enclose the password with double quotation marks (") to avoid syntax errors.
+
+* `local_ip`: the IP address of an OBServer node in the cluster of the local database.
+
+* `local_port`: the SQL port number of an OBServer node in the cluster of the local database. By default, the SQL port number of the OBServer node is 2881.
+
+* `local_cluster_name`: the name of the cluster of the local OceanBase database. You need to specify the cluster name only when the specified IP address and port number are those of the ODP of the cluster and the ODP is deployed by using ConfigUrl. The cluster name must be enclosed with double quotation marks (").
Here is an example:
@@ -90,11 +90,12 @@ Here is an example:
obclient> CREATE DATABASE LINK ob_dblink_proxy CONNECT TO ob_user@oracle IDENTIFIED BY ****** OB HOST 'xx.xx.xx.xx:2883' CLUSTER "ob410";
Query OK, 1 row affected
```
+
+
+ Notice
+ Enclose the cluster name with double quotation marks (") to prevent the cluster name from being automatically converted to uppercase.
+
-
- Notice
- Enclose the cluster name with double quotation marks ("") to prevent the cluster name from being capitalized.
-
* Create a DBLink named `ob_dblink_reverse_link` that connects to a remote OceanBase database and provides the reverse link feature.
@@ -134,7 +135,7 @@ where
* `oracle_service_name` specifies the name of the service provided by the remote Oracle Database instance.
-Example: Create a DBLink to connect to a remote Oracle database:
+Example: Create a DBLink to connect to a remote Oracle database.
```sql
obclient> CREATE DATABASE LINK orcl_dblink CONNECT TO orcl_user@oracle IDENTIFIED BY ****** OCI HOST 'xx.xx.xx.xx:1521/ORCL';
diff --git a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/300.access-a-remote-database-by-a-dblink-of-oracle-mode.md b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/300.access-a-remote-database-by-a-dblink-of-oracle-mode.md
index 2b218072ab..4e79e0cfe5 100644
--- a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/300.access-a-remote-database-by-a-dblink-of-oracle-mode.md
+++ b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/300.access-a-remote-database-by-a-dblink-of-oracle-mode.md
@@ -7,15 +7,15 @@
# Use a DBLink to access data in a remote database
-You can use a DBLink to access objects, such as tables, views, synonyms, and sequences in a remote database.
+You can use a DBLink to access objects, such as tables, views, synonyms, and sequences, in a remote database.
## Prerequisites
* You have created a DBLink. For more information about how to create a DBLink, see [Create a DBLink](100.create-a-dblink-of-oracle-mode.md).
-* For OceanBase Database V4.2.1 and later, when you use the DBLink feature to access a remote Oracle database, you need to install and configure OCI 12.2 on all OBServer nodes in the cluster. Additionally, when you upgrade from OceanBase Database V4.2.1 or earlier to V4.2.1 or later, you will also need to reconfigure the previously configured OCI 11.2 to OCI 12.2.
-
- For more information about how to install and configure OCI 12.2, see [Install and configure OCI](600.install-and-configure-the-oci.md).
+* When you use a DBLink to access a remote Oracle database, if the local OceanBase database is V4.2.1 or later, you must install and configure Oracle Call Interface (OCI) 12.2 on all OBServer nodes in the cluster. If the local OceanBase database was upgraded from an earlier version to V4.2.1 or later, you must upgrade the originally configured OCI 11.2 to OCI 12.2.
+
+ For more information about how to install and configure OCI 12.2, see [Install and configure OCI](600.install-and-configure-the-oci.md).
## Access data of tables in a remote database
@@ -56,7 +56,7 @@ obclient [SYS]> SELECT * FROM tbl1@my_link;
## Access sequences in a remote database
-In OceanBase Database V4.2.1, you can access sequence values in a remote OceanBase database or Oracle database by using a DBLink.
+Starting from OceanBase Database V4.2.1, you can access sequence values in a remote OceanBase database or Oracle database by using a DBLink.
Here is an example:
@@ -167,13 +167,13 @@ INSERT INTO local_tbl1 SELECT remote_tbl1.NAME, seq.nextval@my_link FROM remote_
## Call a stored procedure in a remote database
-Starting from V4.2.1, OceanBase Database allows you to call stored procedures in remote Oracle databases through DBLink. However, currently, calling stored procedures in remote OceanBase databases is not supported.
+Starting from OceanBase Database V4.2.1, you can call a stored procedure in a remote Oracle database by using a DBLink. At present, you cannot call stored procedures in a remote OceanBase database.
Before you call a stored procedure in a remote database by using a DBLink, note that:
-* Currently, only calling standalone stored procedures and package stored procedures in a remote database is supported via DBLink. User-Defined Functions (UDFs) and remote calls to package functions are not supported at this time.
+* You can call only independent stored procedures and stored procedures in packages in a remote database by using a DBLink. User-defined functions (UDFs) and package functions cannot be remotely called.
-* The following basic data types of INOUT parameters are supported in the call of a standalone stored procedure:
+* The following basic data types of inout parameters are supported in the call of an independent stored procedure:
* String data types such as VARCHAR2, VARCHAR, and CHAR
@@ -215,13 +215,13 @@ Before you call a stored procedure in a remote database by using a DBLink, note
* You cannot use a DBLink to call remote packages of the constructor type.
-Here are some examples: Assume that you have created, in the local OceanBase database, a DBLink named `orcl_link` to a remote Oracle database by executing the following statement:
+Here are some examples: Assume that you have created, in the local OceanBase database, a DBLink named `orcl_dblink` to a remote Oracle database.
```sql
obclient [SYS]> CREATE DATABASE LINK orcl_dblink CONNECT TO orcl_user@oracle IDENTIFIED BY ****** OCI HOST 'xx.xx.xx.xx:1521/ORCL';
```
-* Call a simple standalone stored procedure.
+* Call a simple independent stored procedure
1. Prepare the data environment.
@@ -276,41 +276,42 @@ obclient [SYS]> CREATE DATABASE LINK orcl_dblink CONNECT TO orcl_user@oracle IDE
obclient [SYS]> CALL get_customer_id@orcl_dblink(2);
```
- However, if you call the stored procedure by using the following statement, which is not supported at present, an error is returned:
+
+However, if you call the stored procedure by using the following statement, which is not supported at present, an error is returned:
```sql
SELECT get_customer_id@orcl_dblink(2) FROM DUAL;
```
- In addition, you can create synonyms of the `get_customer_id` stored procedure in the remote Oracle database by using the following methods:
+In addition, you can create synonyms for the `get_customer_id` stored procedure by using the following methods:
- * Create a synonym named `syn_remote_customer_id` in the remote Oracle database.
+ * Create a synonym named `syn_remote_customer_id` in the remote Oracle database
- ```sql
- CREATE OR REPLACE SYNONYM syn_remote_customer_id FOR get_customer_id;
- ```
+ ```sql
+ CREATE OR REPLACE SYNONYM syn_remote_customer_id FOR get_customer_id;
+ ```
- * Create a synonym named `syn_local_customer_id` for the stored procedure `get_customer_id` in the local OceanBase database by using a DBLink.
+    * Create a synonym named `syn_local_customer_id` for the `get_customer_id` stored procedure in the local OceanBase database by using the DBLink
- ```sql
- CREATE OR REPLACE SYNONYM syn_local_customer_id FOR get_customer_id@orcl_dblink;
- ```
+ ```sql
+ CREATE OR REPLACE SYNONYM syn_local_customer_id FOR get_customer_id@orcl_dblink;
+ ```
- You can execute the following statements through a DBLink to call these synonyms.
+ You can use the DBLink to call these synonyms by executing the following statements:
- * Call the synonym created in the remote Oracle database.
+ * Call the synonym created in the remote Oracle database
- ```sql
- CALL syn_remote_customer_id@orcl_dblink(2);
- ```
+ ```sql
+ CALL syn_remote_customer_id@orcl_dblink(2);
+ ```
- * Call the synonym created in the local OceanBase database.
+ * Call the synonym created in the local OceanBase database
- ```sql
- CALL syn_local_customer_id(2);
- ```
+ ```sql
+ CALL syn_local_customer_id(2);
+ ```
-* Call a stored procedure that contains inout parameters.
+* Call a stored procedure that contains inout parameters
Assume that you have created a table named `employees` to store employee information in the remote Oracle database by using the following statement:
diff --git a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/400.update-data-in-remote-database-by-a-dblink-of-oracle-mode.md b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/400.update-data-in-remote-database-by-a-dblink-of-oracle-mode.md
index 893c323d5c..faa4210d19 100644
--- a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/400.update-data-in-remote-database-by-a-dblink-of-oracle-mode.md
+++ b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/400.update-data-in-remote-database-by-a-dblink-of-oracle-mode.md
@@ -15,7 +15,7 @@ After you create a DBLink, you can use the DBLink to modify data in the remote d
* The data write feature of DBLink supports reverse links. The remote database can access objects, such as tables, views, and synonyms, in the local database through a reverse link. To use the reverse link feature, you must provide information about the local database, such as the `ip`, `port`, `user_name`, `tenant_name`, and `pass_word`, when you create a DBLink. For more information, see [Create a DBLink](../1000.manage-dblink-of-oracle-mode/100.create-a-dblink-of-oracle-mode.md).
-* The current reverse link feature only supports access between Oracle schemas in OceanBase databases and Oracle schemas in OceanBase databases. It does not currently support access between Oracle schemas in OceanBase databases and Oracle databases.
+* The reverse link feature supports only the access between two OceanBase databases in Oracle mode.
## Prerequisites
@@ -213,7 +213,7 @@ obclient> SELECT * FROM t3@orcl_dblink;
Replacing the data of a table in a remote database by using a DBLink is similar to directly replacing the data of a table. The only difference is that you need to suffix `@dblink_name` to the name of the target table in the statement. For more information about how to replace the data of a table, see [Replace data](../../../../300.develop/200.application-development-of-oracle-mode/400.write-data-of-oracle-mode/400.replace-data-of-oracle-mode-in-develop.md).
-For more information about the `MERGE INTO` statement, see [MERGE](../../../500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/200.dml-of-oracle-mode/300.merge-of-oracle-mode.md).
+For more information about the `MERGE INTO` statement, see [MERGE INTO](../../../500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/200.dml-of-oracle-mode/300.merge-of-oracle-mode.md).
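
For illustration, the following sketch synchronizes the remote `t3` table from a local table through the DBLink by using `MERGE INTO`. The local table name `t3_local` and the `id` and `val` columns are assumptions made for this example:

```sql
MERGE INTO t3@orcl_dblink r
USING t3_local l
ON (r.id = l.id)
WHEN MATCHED THEN
  UPDATE SET val = l.val
WHEN NOT MATCHED THEN
  INSERT (id, val) VALUES (l.id, l.val);
```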
### Delete data
@@ -258,7 +258,7 @@ obclient> SELECT * FROM t5@orcl_dblink;
Deleting the data of a table in a remote database by using a DBLink is similar to directly deleting the data of a table. The only difference is that you need to suffix `@dblink_name` to the name of the target table in the statement. For more information about how to delete the data of a table, see [Delete data](../../../../300.develop/200.application-development-of-oracle-mode/400.write-data-of-oracle-mode/300.delete-data-of-oracle-mode-in-develop.md).
-For more information about the `DELETE` statement, see [DELETE](../../../500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/200.dml-of-oracle-mode/100.delete-of-oracle-mode.md)
+For more information about the `DELETE` statement, see [DELETE](../../../500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/200.dml-of-oracle-mode/100.delete-of-oracle-mode.md).
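
As a minimal sketch, assuming the remote `t5` table has an `id` column, the following statement deletes a row through the DBLink:

```sql
DELETE FROM t5@orcl_dblink WHERE id = 1;
```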
## References
diff --git a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/500.delete-a-dblink-of-oracle-mode.md b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/500.delete-a-dblink-of-oracle-mode.md
index da97dbcaf5..9c84fbc1df 100644
--- a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/500.delete-a-dblink-of-oracle-mode.md
+++ b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/500.delete-a-dblink-of-oracle-mode.md
@@ -11,12 +11,16 @@ You can drop a DBLink that is no longer required.
## Procedure
-* To execute this statement, you must have the `DROP DATABASE LINK` privilege. For information about how to grant user privileges, see [Modify user privileges](../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/300.permission-of-oracle-mode/700.modify-user-permissions-of-oracle-mode.md). The syntax for dropping a DBLink is as follows:
+The syntax for dropping a DBLink is as follows:
```sql
obclient> DROP DATABASE LINK dblink_name;
```
+Note the following items:
+
+* To execute this statement, you must have the `DROP DATABASE LINK` privilege. For information about how to grant user privileges, see [Modify user privileges](../../../../600.manage/500.security-and-permissions/300.access-control/200.user-and-permission/300.permission-of-oracle-mode/700.modify-user-permissions-of-oracle-mode.md).
+
* Here, `dblink_name` specifies the name of the DBLink to be dropped.
Here is an example:
diff --git a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/600.install-and-configure-the-oci.md b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/600.install-and-configure-the-oci.md
index d6b1f53369..226416717f 100644
--- a/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/600.install-and-configure-the-oci.md
+++ b/en-US/700.reference/300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/600.install-and-configure-the-oci.md
@@ -68,66 +68,137 @@ The following example describes how to install and configure an OCI library on a
cd /usr/lib/oracle/12.2/client64/lib/
```
- 3. Copy the following files from the OCI installation directory to the OceanBase Database installation directory: `libclntsh.so.12.1`, `libclntshcore.so.12.1`, `libipc1.so`, `libmql1.so`, `libnnz12.so`, `libociei.so`, and `libons.so`.
+ 3. Copy the following files from the OCI installation directory to the OceanBase Database installation directory: `libclntshcore.so.12.1`, `libclntsh.so.12.1`, `libipc1.so`, `libmql1.so`, `libnnz12.so`, `libocci.so.12.1`, `libociei.so`, `libocijdbc12.so`, `libons.so`, and `liboramysql12.so`.
You need to replace `$DIR` with the OceanBase Database installation directory, which is `/home/admin/oceanbase` in this example.
-
+
+      **Notice**: After you copy the `.so` files from the OCI installation directory to the OceanBase Database installation directory, you need to copy the `libclntsh.so.12.1` file in the OceanBase Database installation directory and rename the new file as `libclntsh.so`, or create a soft link named `libclntsh.so` that points to the `libclntsh.so.12.1` file in the installation directory.
+
+
+ ```shell
+ cp libclntshcore.so.12.1 $DIR/lib
+ ```
+
+ ```shell
+ cp libclntsh.so.12.1 $DIR/lib
+ ```
+
+ ```shell
+ cp libclntsh.so.12.1 $DIR/lib/libclntsh.so
+ ```
+
+ ```shell
+ cp libipc1.so $DIR/lib
+ ```
+
+ ```shell
+ cp libmql1.so $DIR/lib
+ ```
+
+ ```shell
+ cp libnnz12.so $DIR/lib
+ ```
+
+ ```shell
+ cp libocci.so.12.1 $DIR/lib
+ ```
+
+ ```shell
+ cp libociei.so $DIR/lib
+ ```
+
+ ```shell
+ cp libocijdbc12.so $DIR/lib
+ ```
+
+ ```shell
+ cp libons.so $DIR/lib
+ ```
+
+ ```shell
+ cp liboramysql12.so $DIR/lib
+ ```
+
+5. Specify the `LD_LIBRARY_PATH` variable.
+
+ To load the `libclntsh.so` file, OceanBase Database searches for dependent library files in the directory specified by the `LD_LIBRARY_PATH` variable. Therefore, you need to set the `LD_LIBRARY_PATH` variable to the directory of the OCI library.
+
+   In this example, the OCI library is located in `/home/admin/oceanbase/lib`. Here is a sample command:
+
+   ```shell
+ export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/admin/oceanbase/lib:"
+ ```
+
+
    Notice
-    You need to rename the `libclntsh.so.12.1` file to `libclntsh.so`.
+    You must append a colon (:) to `/home/admin/oceanbase/lib` when you configure environment variables.
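
   The `export` command takes effect only in the current shell. As a sketch under the assumption that the OBServer process inherits the admin user's Bash environment, you can persist the setting in `~/.bashrc`:

   ```shell
   # Assumption: OBServer is started from a Bash session of the admin user.
   echo 'export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/admin/oceanbase/lib:"' >> ~/.bashrc
   source ~/.bashrc
   ```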
- ```shell
- cp libclntsh.so.12.1 $DIR/lib/libclntsh.so
- ```
- ```shell
- cp libclntshcore.so.12.1 $DIR/lib
- ```
+6. After you configure the OCI library and the `LD_LIBRARY_PATH` environment variable, if the OBServer node fails to correctly load all the `.so` files of OCI, you need to restart the OBServer node. For more information, see [Restart a node](../../../../600.manage/100.cluster-management/300.common-cluster-operations/300.restart-a-node.md).
- ```shell
- cp libipc1.so $DIR/lib
- ```
+   If it is inconvenient to restart the OBServer node, you can run the `mv` command to move the 10 `.so` files of the OCI library from the `$DIR/lib/` directory to the `/lib64` directory and retain only the `libclntsh.so` file in the `$DIR/lib/` directory. The OBServer node can then load the OCI library without a restart.
- ```shell
- cp libmql1.so $DIR/lib
- ```
+ ```shell
+ cd /home/admin/oceanbase/lib
+ ```
- ```shell
- cp libnnz12.so $DIR/lib
- ```
+ ```shell
+ mv libclntshcore.so.12.1 /lib64
+ ```
- ```shell
- cp libociei.so $DIR/lib
- ```
+ ```shell
+ mv libclntsh.so.12.1 /lib64
+ ```
- ```shell
- cp libons.so $DIR/lib
- ```
+ ```shell
+ mv libipc1.so /lib64
+ ```
-5. Specify the `LD_LIBRARY_PATH` variable.
+ ```shell
+ mv libmql1.so /lib64
+ ```
- To load the `libclntsh.so` file, OceanBase Database searches for dependent library files in the directory specified by the `LD_LIBRARY_PATH` variable. Therefore, you need to set the `LD_LIBRARY_PATH` variable to the directory of the OCI library.
+ ```shell
+ mv libnnz12.so /lib64
+ ```
- In this example, the OCI library is located in `home/admin/oceanbase/lib`. Here is a sample command:
+ ```shell
+ mv libocci.so.12.1 /lib64
+ ```
- ```javascript
- export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/admin/oceanbase/lib:"
+ ```shell
+ mv libociei.so /lib64
```
-
-    Notice
-    You must append a colon (:) to `/home/admin/oceanbase/lib` when you configure environment variables.
-
+ ```shell
+ mv libocijdbc12.so /lib64
+ ```
+
+ ```shell
+ mv libons.so /lib64
+ ```
+
+ ```shell
+ mv liboramysql12.so /lib64
+ ```
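
   Equivalently, the 10 files above can be moved with a single `mv` invocation; this is just a convenience sketch of the same steps:

   ```shell
   cd /home/admin/oceanbase/lib
   mv libclntshcore.so.12.1 libclntsh.so.12.1 libipc1.so libmql1.so libnnz12.so \
      libocci.so.12.1 libociei.so libocijdbc12.so libons.so liboramysql12.so /lib64
   ```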
## Considerations
After you configure the OCI library on all OBServer nodes in the cluster, note that:
-* OceanBase Database requires dependent libraries to load the OCI library. After you configure the OCI library, you can run the `ldd libclntsh.so` command in the directory of the OCI library, which is `home/admin/oceanbase/lib` in this example, to query other Linux libraries on which the OCI library depends. If the required dependent libraries do not exist, contact OceanBase Technical Support for assistance.
+* OceanBase Database requires dependent libraries to load the OCI library. After you configure the OCI library, you can run the `ldd libclntsh.so` command in the directory of the OCI library, which is `/home/admin/oceanbase/lib` in this example, to query other Linux libraries on which the OCI library depends. If the required dependent libraries do not exist, contact OceanBase Technical Support for assistance.
```shell
ldd libclntsh.so
```
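
  A quick way to spot a missing dependency in the `ldd` output is to filter for the `not found` marker; the library name in the comment is a placeholder:

  ```shell
  # A missing dependency appears as, for example, "libaio.so.1 => not found".
  ldd libclntsh.so | grep "not found"
  ```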
-* In general, you do not need to restart the OBServer node after the OCI library is configured. However, if an error message is returned and indicates that OCI is not found, you need to restart the OBServer node.
+* On the x86 platform, OBServer nodes of V4.2.1 and later versions can only run OCI library files in the `oracle-instantclient12.2-basic-12.2.0.1.0-1.x86_64.rpm` package. If you use OCI library files of an earlier or later version or of multiple versions, processes may crash on the OBServer node.
+
+ On the x86 platform, OBServer nodes of a version earlier than V4.2.1 can only run library files of OCI 11.2. For more information, see [Install and configure OCI 11.2](https://www.oceanbase.com/docs/common-oceanbase-database-cn-1000000000323041).
+
+* On the ARM platform, OBServer nodes can only run OCI library files in the `oracle-instantclient19.10-basic-19.10.0.0.0-1.aarch64.rpm` package. To download the `oracle-instantclient19.10-basic-19.10.0.0.0-1.aarch64.rpm` package, click [oracle-instantclient19.10-basic-19.10.0.0.0-1.aarch64.rpm](https://yum.oracle.com/repo/OracleLinux/OL8/oracle/instantclient/aarch64/index.html).
+
+ The process for configuring the OCI library on the ARM platform is similar to that of the x86 platform. You can configure all the `.so` files in the `oracle-instantclient19.10-basic-19.10.0.0.0-1.aarch64.rpm` package by referring to this topic.
\ No newline at end of file
From 6968142da2f3ada02821ea3e30f5d9d54b8e2812 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Wed, 17 Apr 2024 09:57:21 +0800
Subject: [PATCH 30/63] v430-beta-100.sql-syntax-update-4
---
.../500.connection-id-of-mysql-mode.md | 33 +++----
.../1600.alter-table-of-mysql-mode.md | 34 ++++---
.../2600.create-table-of-mysql-mode.md | 25 ++---
.../5800.kill-of-mysql-mode.md | 71 +++++---------
.../8700.show-of-mysql-mode.md | 10 +-
...line-and-offline-ddl-list-of-mysql-mode.md | 13 +--
...0.column-type-change-rule-of-mysql-mode.md | 94 +++++++++++++++++--
...ine-and-offline-ddl-list-of-oracle-mode.md | 13 +--
.../100.lnnvl-of-oracle-mode.md | 34 +++++--
.../500.userenv-of-oracle-mode.md | 86 ++++++++++++-----
.../1000.alter-table-of-oracle-mode.md | 38 ++++----
.../2400.create-table-of-oracle-mode.md | 10 +-
.../150.alter-system-kill-session.md | 6 +-
.../1800.kill-of-oracle-mode.md | 17 ++--
.../3600.show-of-oracle-mode.md | 4 +-
15 files changed, 301 insertions(+), 187 deletions(-)
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/400.functions-of-mysql-mode/600.information-functions-of-mysql-mode/500.connection-id-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/400.functions-of-mysql-mode/600.information-functions-of-mysql-mode/500.connection-id-of-mysql-mode.md
index b4637eb986..a1f20ba8ea 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/400.functions-of-mysql-mode/600.information-functions-of-mysql-mode/500.connection-id-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/400.functions-of-mysql-mode/600.information-functions-of-mysql-mode/500.connection-id-of-mysql-mode.md
@@ -15,26 +15,27 @@ CONNECTION_ID()
## Purpose
-`CONNECTION_ID()` returns the `session_id` of the current session. This ID is the unique identifier of a session in a client.
-
-You can execute the `SHOW PROCESSLIST` statement to query the quantity and IDs of sessions in the current database.
+`CONNECTION_ID()` returns the client session ID of the current session. This ID is the unique identifier of a session in a client.
## Examples
+Obtain the client session ID of the current session.
+
```sql
-obclient> select CONNECTION_ID();
-+-----------------+
-| CONNECTION_ID() |
-+-----------------+
-| 3221638476 |
-+-----------------+
-1 row in set
+obclient [(none)]> SELECT CONNECTION_ID() AS Client_Session_ID;
+```
-obclient> SELECT session_id,trans_id FROM oceanbase.__all_virtual_trans_stat WHERE session_id=CONNECTION_ID();
-+------------+------------------------------------------------------------------------------------------+
-| session_id | trans_id |
-+------------+------------------------------------------------------------------------------------------+
-| 3221638476 | {hash:6868349667767780996, inc:95279944, addr:"xxx.xxx.xx.xxx:xxxx", t:1626333606027937} |
-+------------+------------------------------------------------------------------------------------------+
+The return result is as follows:
+
+```shell
++-------------------+
+| Client_Session_ID |
++-------------------+
+| 3221488032 |
++-------------------+
1 row in set
```
+
+## References
+
+You can execute the `SHOW PROCESSLIST` statement to query the quantity and IDs of sessions in the current database. For more information, see [View tenant sessions](../../../../../1200.database-proxy/1500.view-tenant-sessions.md).
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/1600.alter-table-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/1600.alter-table-of-mysql-mode.md
index fe03ab3027..4a3d9f4fd0 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/1600.alter-table-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/1600.alter-table-of-mysql-mode.md
@@ -9,7 +9,7 @@
## Purpose
-You can use this statement to modify the structure of an existing table. For example, you can use this statement to modify a table and its attributes, add columns to it, modify its columns and attributes, and delete its columns.
+You can use this statement to modify the structure of an existing table. For example, you can use this statement to modify a table and its attributes, add columns to it, modify its columns and attributes, and drop its columns.
## Syntax
@@ -176,14 +176,14 @@ partition_count | subpartition_count:
|------------------------------------------------------------|---------------------------------------------------------------------|
| ADD \[COLUMN\] | Adds a column. You can add a generated column. |
| \[FIRST \| BEFORE \| AFTER column_name\] | Specifies the added column as the first column of the table or to be after the `column_name` column. Currently, OceanBase Database allows you to specify the position of a column only in the `ADD COLUMN` syntax. |
-| CHANGE \[COLUMN\] | When you change the type of a column, you can only increase the column length for specific character data types, such as `VARCHAR`, `VARBINARY`, and `CHAR`. |
+| CHANGE \[COLUMN\] | Changes the column name and definition. You can increase the column length only for specific character data types, such as `VARCHAR`, `VARBINARY`, and `CHAR`. |
| MODIFY \[COLUMN\] | Modifies the column attribute. |
| ALTER \[COLUMN\] {SET DEFAULT const_value \| DROP DEFAULT} | Changes the default value of a column. |
| DROP \[COLUMN\] | Drops a column. You cannot drop a primary key column or columns that contain indexes. |
| ADD FOREIGN KEY | Adds a foreign key. If you do not specify the name of the foreign key, it will be named in the format of table name + `OBFK` + time when the foreign key was created. For example, the foreign key created for table `t1` at 00:00:00 on August 1, 2021 is named as `t1_OBFK_1627747200000000`. A foreign key enables one table (child table) to reference data from another table (parent table). When an `UPDATE` or `DELETE` operation affects a key value in the parent table that has matching rows in the child table, the result depends on the referential action specified in the `ON UPDATE` or `ON DELETE` clause. Valid referential actions:<br>- `CASCADE`: deletes or updates the affected row in the parent table and automatically deletes or updates the matching rows in the child table.<br>- `SET NULL`: deletes or updates the affected row in the parent table and sets the foreign key column in the child table to `NULL`.<br>- `RESTRICT`: rejects the delete or update operation on the parent table.<br>- `NO ACTION`: defers the check.<br>The `SET DEFAULT` action is also supported. |
| ALTER INDEX | Modifies whether an index is visible. When the status of an index is `INVISIBLE`, the SQL optimizer will not select this index. |
| key_part | Creates a normal or function-based index. |
-| index_col_name | The column name of the index. You can add `ASC` (ascending order) to the end of each column name. `DESC` (descending order) is not supported. By default, the columns are sorted in ascending order. Index-based sorting method: Data is first sorted by the values in the first column of `index_col_name` and by the values in the next column for the records with the same values in the first column. |
+| index_col_name | The column name of the index. You can add `ASC` (ascending order) to the end of each column name. `DESC` (descending order) is not supported. By default, the columns are sorted in ascending order. Index-based sorting method: Data is first sorted by the values in the first column of `index_col_name` and by the values in the next column for the records with the same values in the first column. |
| expr | A valid function-based index expression. A Boolean expression, such as `c1=c1`, is allowed. <br>**Notice**: You cannot create a function-based index on a generated column in the current version of OceanBase Database. |
| ADD \[PARTITION\] | Adds a partition to a partitioned table. |
| DROP {PARTITION \| SUBPARTITION} | Drops a partition. Valid values:<br>- `PARTITION`: drops the specified RANGE or LIST partitions and all subpartitions that exist under these partitions, including partition definitions and partition data, and maintains the indexes on the partitions.<br>- `SUBPARTITION`: drops the specified `*-RANGE` or `*-LIST` subpartitions, including the subpartition definitions and subpartition data, and maintains the indexes on the subpartitions.<br>Separate multiple partition names with commas (,).<br>**Notice**: Before you drop a partition, ensure that no active transactions or queries exist in this partition. Otherwise, SQL statement errors or exceptions may occur. |
@@ -194,13 +194,13 @@ partition_count | subpartition_count:
| DROP \[TABLEGROUP\] | Drops a table group. |
| DROP \[FOREIGN KEY\] | Drops a foreign key. |
| DROP \[PRIMARY KEY\] | Drops a primary key. <br>**Note**: In OceanBase Database, you cannot drop a primary key from a MySQL tenant in the following conditions:<br>- The table is a parent table whose primary key is referenced by a foreign key column of a child table.<br>- The table is a child table, but its primary key contains a foreign key column. |
-| \[SET\] table_option | Sets table attributes. The following parameters are supported:<br>- `REPLICA_NUM`: sets the number of replicas of the table (not supported).<br>- `tablegroup_name`: sets the group to which the table belongs.<br>- `BLOCK_SIZE`: sets the microblock size of the table. Default value: `16384`, which is 16 KB. Value range: [1024,1048576].<br>- `lob_inrow_threshold`: sets the `INROW` threshold. LOB data sized greater than this threshold is converted to `OUTROW` and stored in the LOB meta table. The default value is 4KB.<br>- `COMPRESSION`: sets the compression mode of the table. The default value is `None`, which means that data is not compressed.<br>- `AUTO_INCREMENT`: sets the next value of the auto-increment column in the table.<br>- `comment`: sets the comments for the table.<br>- `PROGRESSIVE_MERGE_NUM`: sets the number of progressive compaction steps. Value range: \[0,100\].<br>- `parallel_clause`: specifies the degree of parallelism (DOP) at the table level.<br>- `NOPARALLEL`: sets the DOP to `1`, which is the default value.<br>- `PARALLEL integer`: sets the DOP to an integer greater than or equal to `1`. |
+| \[SET\] table_option | Sets table attributes. The following parameters are supported:<br>- `REPLICA_NUM`: sets the number of replicas of the table (not supported).<br>- `tablegroup_name`: sets the group to which the table belongs.<br>- `BLOCK_SIZE`: sets the microblock size of the table. Default value: `16384`, which is 16 KB. Value range: [1024,1048576].<br>- `lob_inrow_threshold`: sets the `INROW` threshold. LOB data sized greater than this threshold is stored in `OUTROW` mode in the LOB meta table. The default value is 4KB.<br>- `COMPRESSION`: sets the compression mode of the table. The default value is `None`, which means that data is not compressed.<br>- `AUTO_INCREMENT`: sets the next value of the auto-increment column in the table.<br>- `comment`: sets the comments for the table.<br>- `PROGRESSIVE_MERGE_NUM`: sets the number of progressive compaction steps. Value range: \[0,100\].<br>- `parallel_clause`: specifies the degree of parallelism (DOP) at the table level.<br>- `NOPARALLEL`: sets the DOP to `1`, which is the default value.<br>- `PARALLEL integer`: sets the DOP to an integer greater than or equal to `1`. |
| CHECK | Modifies the `CHECK` constraint. The following operations are supported:<br>- Add a new `CHECK` constraint.<br>- Drop the current `CHECK` constraint named `constraint_name`. |
| \[NOT\] ENFORCED | Specifies whether to forcibly apply the `CHECK` constraint named `constraint_name`.<br>- If you do not specify this parameter or set it to `ENFORCED`, a `CHECK` constraint is created and forcibly applied. The default value is `ENFORCED`.<br>- If you set it to `NOT ENFORCED`, a `CHECK` constraint is created but not forcibly applied. |
-| ADD COLUMN GROUP([all columns,]each column) | Converts a table to a columnstore table. The following options are supported:<br>- `ADD COLUMN GROUP(all columns, each column)`: converts the table to a rowstore-columnstore redundant table.<br>- `ADD COLUMN GROUP(each column)`: converts the table to a columnstore table. |
-| DROP COLUMN GROUP([all columns,]each column) | Drops the store format of the table. The following options are supported:<br>- `DROP COLUMN GROUP(all columns, each column)`: drops the rowstore-columnstore redundant format of the table.<br>- `DROP COLUMN GROUP(all columns)`: drops the rowstore format of the table.<br>- `DROP COLUMN GROUP(each column)`: drops the columnstore format of the table. |
-| index_column_group_option | The index options. The following options are supported:<br>- `WITH COLUMN GROUP(all columns, each column)`: adds a rowstore-columnstore redundant index.<br>- `WITH COLUMN GROUP(all columns)`: adds a rowstore index.<br>- `WITH COLUMN GROUP(each column)`: adds a columnstore index. |
-| SKIP_INDEX | Modifies the Skip Index attribute of a column. Valid values:<br>- `MIN_MAX`: specifies to store the maximum value, minimum value, and number of null values of the indexed column on an index node. It is the most common aggregate data type in Skip Index. It can accelerate the pushdown of filters and `MIN/MAX` functions.<br>- `SUM`: used to accelerate the pushdown of the `SUM` function on data of the numeric type.<br>**Notice**:<br>- The Skip Index attribute is not supported for a column of the JSON or spatial data type.<br>- The Skip Index attribute is not supported for a generated column. |
+| ADD COLUMN GROUP([all columns,]each column) | Converts a table into a columnstore table. The following options are supported:<br>- `ADD COLUMN GROUP(all columns, each column)`: converts the table into a rowstore-columnstore redundant table.<br>- `ADD COLUMN GROUP(each column)`: converts the table into a columnstore table. |
+| DROP COLUMN GROUP([all columns,]each column) | Removes the storage format of the table. The following options are supported:<br>- `DROP COLUMN GROUP(all columns, each column)`: removes the rowstore-columnstore redundancy format for the table.<br>- `DROP COLUMN GROUP(all columns)`: removes the rowstore format for the table.<br>- `DROP COLUMN GROUP(each column)`: removes the columnstore format for the table. |
+| index_column_group_option | Specifies the index options. The following options are supported:<br>- `WITH COLUMN GROUP(all columns, each column)`: adds a rowstore-columnstore redundant index.<br>- `WITH COLUMN GROUP(all columns)`: adds a rowstore index.<br>- `WITH COLUMN GROUP(each column)`: adds a columnstore index. |
+| SKIP_INDEX | Modifies the skip index attribute of a column. Valid values:<br>- `MIN_MAX`: the most common skip index type. A skip index of this type stores the maximum value, minimum value, and null count of the indexed column at the index node granularity. This type of skip index can accelerate the pushdown of filters and `MIN/MAX` aggregation.<br>- `SUM`: the skip index type that is used to accelerate the pushdown of `SUM` aggregation for numeric values.<br>**Notice**:<br>- You cannot create a skip index for a JSON column or a spatial column.<br>- You cannot create a skip index for a generated column. |
## Examples
@@ -735,7 +735,7 @@ OceanBase Database does not support change or cascading change in the following
CREATE TABLE tbl1 (col1 INT PRIMARY KEY, col2 VARCHAR(50));
```
-2. Convert the `tbl1` table to a rowstore-columnstore redundant table, and then drop the rowstore-columnstore redundancy attribute.
+2. Convert the `tbl1` table into a rowstore-columnstore redundant table, and then drop the rowstore-columnstore redundancy attribute.
```sql
ALTER TABLE tbl1 ADD COLUMN GROUP(all columns, each column);
@@ -745,7 +745,7 @@ OceanBase Database does not support change or cascading change in the following
ALTER TABLE tbl1 DROP COLUMN GROUP(all columns, each column);
```
-3. Convert the `tbl1` table to a columnstore table, and then delete the columnstore attribute.
+3. Convert the `tbl1` table into a columnstore table, and then drop the columnstore attribute.
```sql
ALTER TABLE tbl1 ADD COLUMN GROUP(each column);
@@ -755,9 +755,9 @@ OceanBase Database does not support change or cascading change in the following
ALTER TABLE tbl1 DROP COLUMN GROUP(each column);
```
-### Modify the Skip Index attribute of a column
+### Modify the skip index attribute of a column
-Execute the following statement to create a table named `test_skidx`:
+Use the following SQL statement to create a table named `test_skidx`.
```sql
CREATE TABLE test_skidx(
@@ -768,19 +768,23 @@ CREATE TABLE test_skidx(
);
```
-* Change the Skip Index attribute of the `col2` column in the `test_skidx` table to `SUM`.
+* Change the type of the skip index on the `col2` column in the `test_skidx` table to `SUM`.
```sql
ALTER TABLE test_skidx MODIFY COLUMN col2 FLOAT SKIP_INDEX(SUM);
```
-* Add the `MIN_MAX` Skip Index attribute for the `col4` column in the `test_skidx` table.
+* Add the skip index attribute for a column after the table is created.
+
+ Add a skip index of the `MIN_MAX` type for the `col4` column in the `test_skidx` table.
```sql
ALTER TABLE test_skidx MODIFY COLUMN col4 CHAR(10) SKIP_INDEX(MIN_MAX);
```
-* Delete the Skip Index attribute for the `col1` column in the `test_skidx` table.
+* Delete the skip index attribute for a column after the table is created.
+
+ Delete the skip index attribute of the `col1` column in the `test_skidx` table.
```sql
ALTER TABLE test_skidx MODIFY COLUMN col1 INT SKIP_INDEX();
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2600.create-table-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2600.create-table-of-mysql-mode.md
index 959b4ad78a..a255a65954 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2600.create-table-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/2600.create-table-of-mysql-mode.md
@@ -34,7 +34,7 @@ table_definition:
[match_action][opt_reference_option_list]
| {INDEX | KEY} [index_name] [index_type] (key_part,...)
[index_option_list] [index_column_group_option]
- | [CONSTRAINT [constraint_name]] CHECK(expression) constranit_state
+ | [CONSTRAINT [constraint_name]] CHECK(expression) constraint_state
column_definition_list:
column_definition [, column_definition ...]
@@ -158,7 +158,7 @@ table_column_group_option/index_column_group_option:
| **Parameter** | **Description** |
|------------------------------------------------------|-----------------------------------|
-| IF NOT EXISTS | If `IF NOT EXISTS` is specified, the system does not return an error even when the table to be created already exists. If `IF NOT EXISTS` is not specified and the table to be created already exists, the system returns an error. |
+| IF NOT EXISTS | If you specify `IF NOT EXISTS` and the table to be created already exists, no error will be reported. If you do not specify this parameter and the table to be created already exists, the system reports an error. |
| PRIMARY KEY | The primary key of the created table. If this parameter is not specified, a hidden primary key is used. OceanBase Database does not allow you to modify the primary key of a table or use the `ALTER TABLE` statement to add a primary key to a table. Therefore, we recommend that you specify a primary key when you create a table. |
| FOREIGN KEY | The foreign key of the created table. If you do not specify the name of the foreign key, it will be named in the format of table name + `OBFK` + time when the foreign key was created. For example, the foreign key created for table `t1` at 00:00:00 on August 1, 2021 is named as `t1_OBFK_1627747200000000`. A foreign key enables one table (child table) to reference data from another table (parent table). When an `UPDATE` or `DELETE` operation affects a key value in the parent table that has matching rows in the child table, the result depends on the referential action specified in the `ON UPDATE` or `ON DELETE` clause. Valid referential actions:<br>- `CASCADE`: deletes or updates the affected row in the parent table and automatically deletes or updates the matching rows in the child table.<br>- `SET NULL`: deletes or updates the affected row in the parent table and sets the foreign key column in the child table to `NULL`.<br>- `RESTRICT`: rejects the delete or update operation on the parent table.<br>- `NO ACTION`: defers the check.<br>The `SET DEFAULT` action is also supported. |
| KEY \| INDEX | The key or index of the created table. If you do not specify the name of the index, the name of the first column referenced by the index is used as the index name. If duplicate index names exist, the index will be named in the format of underscore (_) + sequence number. For example, if the name of the index created based on column `c1` conflicts with an existing index name, the index will be named as `c1_2`. You can execute the `SHOW INDEX` statement to query the indexes of a table. |
@@ -168,10 +168,10 @@ table_column_group_option/index_column_group_option:
| ROW_FORMAT | Specifies whether to enable the encoding storage format.<br>- `redundant`: indicates that the encoding storage format is not enabled.<br>- `compact`: indicates that the encoding storage format is not enabled.<br>- `dynamic`: an encoding storage format.<br>- `compressed`: an encoding storage format.<br>- `default`: This value is equivalent to `dynamic`. |
| \[GENERATED ALWAYS\] AS (expr) \[VIRTUAL \| STORED\] | Creates a generated column. `expr` specifies the expression used to calculate the column value.<br>- `VIRTUAL`: indicates that column values are not stored, but are immediately calculated after any `BEFORE` trigger when a row is read. Virtual columns do not occupy storage space.<br>- `STORED`: evaluates and stores column values when you insert or update a row. Stored columns occupy storage space and can be indexed. |
| BLOCK_SIZE | The microblock size for the table. |
-| lob_inrow_threshold | Sets the `INROW` threshold. LOB data sized greater than this threshold is converted to `OUTROW` and stored in the LOB meta table. The default value is 4KB. |
+| lob_inrow_threshold | Sets the `INROW` threshold. LOB data sized greater than this threshold is stored in `OUTROW` mode in the LOB meta table. The default value is 4KB. |
| COMPRESSION | The compression algorithm for the table. Valid values:<br>- `none`: indicates that no compression algorithm is used.<br>- `lz4_1.0`: indicates that the `lz4` compression algorithm is used.<br>- `zstd_1.0`: indicates that the `zstd` compression algorithm is used.<br>- `snappy_1.0`: indicates that the `snappy` compression algorithm is used. |
| CHARSET \| CHARACTER SET | The default character set for columns in the table. For more information, see [Character sets](../100.basic-elements-of-mysql-mode/300.character-set-and-collation-of-mysql-mode/200.character-set-of-mysql-mode.md). |
-| COLLATE | The default collation for columns in the table. For more information, see [Collations](../100.basic-elements-of-mysql-mode/300.character-set-and-collation-of-mysql-mode/300.collation-of-mysql-mode.md) |
+| COLLATE | The default collation for columns in the table. For more information, see [Collations](../100.basic-elements-of-mysql-mode/300.character-set-and-collation-of-mysql-mode/300.collation-of-mysql-mode.md). |
| table_tablegroup | The table group to which the table belongs. |
| AUTO_INCREMENT | The start value of an auto-increment column in the table. OceanBase Database allows you to use auto-increment columns as the partitioning key. |
| COMMENT | The comment. The value is case-insensitive. |
@@ -182,7 +182,7 @@ table_column_group_option/index_column_group_option:
| constraint_name | The name of the constraint, which contains at most 64 characters.<br>- Spaces are allowed at the beginning, in the middle, and at the end of a constraint name. However, the beginning and end of the constraint name must be identified with a backtick (\`).<br>- A constraint name can contain the dollar sign character ($).<br>- If a constraint name is a reserved word, it must be identified with a backtick (\`). Otherwise, an error is returned.<br>- `CHECK` constraint names must be unique in the same database. |
| expression | The expression of the constraint.<br>- `expression` cannot be empty.<br>- The result of `expression` must be of the Boolean data type.<br>- `expression` cannot contain a column that does not exist. |
| table_column_group_option/index_column_group_option | The columnstore options for the table or index. The following options are supported:<br>- `WITH COLUMN GROUP(all columns, each column)`: specifies to create a rowstore-columnstore redundant table or index.<br>- `WITH COLUMN GROUP(all columns)`: specifies to create a rowstore table or index.<br>- `WITH COLUMN GROUP(each column)`: specifies to create a columnstore table or index. |
-| SKIP_INDEX | The Skip Index attribute of the column. Valid values:<br>- `MIN_MAX`: specifies to store the maximum value, minimum value, and number of null values of the indexed column on an index node. It is the most common aggregate data type in Skip Index. It can accelerate the pushdown of filters and `MIN/MAX` functions.<br>- `SUM`: used to accelerate the pushdown of the `SUM` function on data of the numeric type.<br>**Notice**:<br>- The Skip Index attribute is not supported for a column of the JSON or spatial data type.<br>- The Skip Index attribute is not supported for a generated column. |
+| SKIP_INDEX | The skip index attribute of the column. Valid values:<br>- `MIN_MAX`: the most common skip index type. A skip index of this type stores the maximum value, minimum value, and null count of the indexed column at the index node granularity. This type of skip index can accelerate the pushdown of filters and `MIN/MAX` aggregation.<br>- `SUM`: the skip index type that is used to accelerate the pushdown of `SUM` aggregation for numeric values.<br>**Notice**:<br>- You cannot create a skip index for a JSON column or a spatial column.<br>- You cannot create a skip index for a generated column. |
## Examples
@@ -351,9 +351,7 @@ table_column_group_option/index_column_group_option:
Query OK, 0 rows affected
```
- 5. (Optional) View the broadcast log stream.
-
- The replicated table is created on this log stream.
+ 5. (Optional) View the broadcast log stream. The replicated table is created on this log stream.
```shell
obclient> SELECT * FROM oceanbase.DBA_OB_LS WHERE FLAG LIKE "%DUPLICATE%";
@@ -363,11 +361,10 @@ table_column_group_option/index_column_group_option:
| 1003 | NORMAL | z1;z2 | 0 | 0 | 1683267390195713284 | NULL | 1683337744205408139 | 1683337744205408139 | DUPLICATE |
+-------+--------+--------------+---------------+-------------+---------------------+----------+---------------------+---------------------+-----------+
1 row in set
+ ```
- 6. (Optional) View the replica distribution of the replicated table in the sys tenant.
-
- The `REPLICA_TYPE` field indicates the replica type.
+ 6. (Optional) View the replica distribution of the replicated table in the sys tenant. The `REPLICA_TYPE` field indicates the replica type.
```shell
obclient> SELECT * FROM oceanbase.CDB_OB_TABLE_LOCATIONS WHERE TABLE_NAME = "dup_t1";
@@ -384,9 +381,7 @@ table_column_group_option/index_column_group_option:
6 rows in set
```
- 7. Insert data into, read data from, and write data to the replicated table.
-
- If you connect to the database by using an OceanBase Database Proxy (ODP), the read request may be routed to any OBServer node. If you directly connect to an OBServer node, the read request is executed on the connected OBServer node as long as the local replica is readable.
+ 7. Insert data into, read data from, and write data to the replicated table. If you connect to the database by using an OceanBase Database Proxy (ODP), the read request may be routed to any OBServer node. If you directly connect to an OBServer node, the read request is executed on the connected OBServer node as long as the local replica is readable.
```shell
obclient> INSERT INTO dup_t1 VALUES(1);
@@ -419,7 +414,7 @@ table_column_group_option/index_column_group_option:
CREATE TABLE tbl3_cg (col1 INT PRIMARY KEY, col2 INT, col3 INT, INDEX i1 (col2) WITH COLUMN GROUP(each column)) WITH COLUMN GROUP(each column);
```
-* Specify the Skip Index attribute for columns while creating a table.
+* Create a table and specify the skip index attribute for a column.
```sql
CREATE TABLE test_skidx(
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/5800.kill-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/5800.kill-of-mysql-mode.md
index 8d0b3af02c..88a89a4cc4 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/5800.kill-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/5800.kill-of-mysql-mode.md
@@ -25,68 +25,39 @@ KILL [CONNECTION | QUERY] 'session_id'
| **Parameter** | **Description** |
|-----------------|------------------------------------------|
-| KILL CONNECTION | Like the `KILL` statement without a modifier, this statement can terminate a thread with the specified *`thread_id`*. |
+| KILL CONNECTION | It is equivalent to the `KILL` statement with no modifier and can be used to terminate a session specified by the client session ID. |
| KILL QUERY | Terminates the ongoing statement of the connection but retains the current status of the connection. |
-| session_id | The unique ID of the session. You can execute the `SHOW PROCESSLIST` or `SHOW FULL PROCESSLIST` statement to obtain the session ID. |
+| session_id | The client session ID of the session to be terminated, which is the unique identifier of a session in a client. You can execute the `SHOW PROCESSLIST` or `SHOW FULL PROCESSLIST` statement to obtain the session ID. |
## Examples
-If business times out for a long period and an unknown long transaction in a session holds the lock, which blocks the execution of other transactions, you must find and terminate the long transaction.
+Query and then terminate a session.
-1. Based on the transaction time, find the `trans_id` of the unfinished transaction that takes the longest time to execute.
+1. Query connected sessions.
```sql
- obclient> SELECT * FROM __all_virtual_trans_lock_stat ORDER BY ctx_create_time LIMIT 5\G
- *************************** 1. row ***************************
- tenant_id: 1002
- trans_id: {hash:6605492148156030705, inc:3284929, addr:"xxx.xxx.xx.xxx:xxxx", t:1600440036535233}
- svr_ip: xxx.xxx.xx.xxx
- svr_port: xxxx
- partition: {tid:1101710651081554, partition_id:0, part_cnt:0}
- table_id: 1101710651081554
- rowkey: table_id=1101710651081554 hash=779dd9b202397d7 rowkey_object=[{"VARCHAR":"pk", collation:"utf8mb4_general_ci"}]
- session_id: 3221577520
- proxy_id: NULL
- ctx_create_time: 2020-09-18 22:41:03.583285
- expired_time: 2020-09-19 01:27:16.534919
+ obclient [test]> SHOW PROCESSLIST;
```
-2. Find all locks held by the transaction based on its `trans_id`, and identify the transaction to be terminated based on the `rowkey`.
+ The return result is as follows:
+
+ ```shell
+ +------------+------+----------------------+------+---------+-------+--------+------------------+
+ | Id | User | Host | db | Command | Time | State | Info |
+ +------------+------+----------------------+------+---------+-------+--------+------------------+
+ | 3221487617 | root | xxx.xx.xxx.xxx:54284 | NULL | Sleep | 21560 | SLEEP | NULL |
+ | 3221487619 | root | xxx.xx.xxx.xxx:21977 | test | Query | 0 | ACTIVE | SHOW PROCESSLIST |
+ | 3221487628 | root | xxx.xx.xxx.xxx:58550 | NULL | Sleep | 9 | SLEEP | NULL |
+ +------------+------+----------------------+------+---------+-------+--------+------------------+
+ 3 rows in set
+ ```
- In the following example, the `rowkey` of the first row is the same as that queried above. Therefore, the corresponding transaction is holding the lock.
+2. Terminate a session.
```sql
- obclient> SELECT * FROM __all_virtual_trans_lock_stat WHERE trans_id LIKE '%hash:6605492148156030705, inc:3284929%'\G
- *************************** 1. row ***************************
- tenant_id: 1002
- trans_id: {hash:6605492148156030705, inc:3284929, addr:"xxx.xxx.xx.xxx:xxxx", t:1600440036535233}
- svr_ip: xxx.xxx.xx.xxx
- svr_port: xxxx
- partition: {tid:1101710651081554, partition_id:0, part_cnt:0}
- table_id: 1101710651081554
- rowkey: table_id=1101710651081554 hash=779dd9b202397d7 rowkey_object=[{"VARCHAR":"pk", collation:"utf8mb4_general_ci"}]
- session_id: 3221577520
- proxy_id: NULL
- ctx_create_time: 2020-09-18 22:41:03.583285
- expired_time: 2020-09-19 01:27:16.534919
- *************************** 2. row ***************************
- tenant_id: 1002
- trans_id: {hash:6605492148156030705, inc:3284929, addr:"xxx.xxx.xx.xxx:xxxx", t:1600440036535233}
- svr_ip: xxx.xxx.xx.xxx
- svr_port: xxxx
- partition: {tid:1101710651081554, partition_id:0, part_cnt:0}
- table_id: 1101710651081554
- rowkey: table_id=1101710651081554 hash=89413aecf767cd7 rowkey_object=[{"VARCHAR":"ob", collation:"utf8mb4_general_ci"}]
- session_id: 3221577520
- proxy_id: NULL
- ctx_create_time: 2020-09-18 22:41:03.583285
- expired_time: 2020-09-19 01:27:16.534919
- 2 rows in set
+ obclient [test]> KILL 3221487617;
```
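
   As a complementary sketch, `KILL QUERY` cancels only the statement that a session is currently executing and keeps the connection itself alive. The session ID below is taken from the example output above and is assumed to be running a long statement:

   ```sql
   obclient [test]> KILL QUERY 3221487628;
   ```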
-3. Confirm the `session_id` of the transaction to be terminated, and terminate the corresponding session.
+## References
- ```sql
- obclient> KILL 3221577520;
- Query OK, 0 rows affected
- ```
+For more information about how to query the quantity and IDs of sessions in the current database, see [View tenant sessions](../../../../1200.database-proxy/1500.view-tenant-sessions.md).
\ No newline at end of file
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/8700.show-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/8700.show-of-mysql-mode.md
index eb6ae3aa94..5d13a8edeb 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/8700.show-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/600.sql-statement-of-mysql-mode/8700.show-of-mysql-mode.md
@@ -62,9 +62,9 @@ opt_for_grant_user:
| Parameter | Description |
|--------------------------------------------------|-------------------------------------------------------------------|
-| \[FULL\] TABLES \{FROM \| IN} database_name | Queries all tables in `database_name`.The `FULL` keyword specifies to query the table type. |
+| \[FULL\] TABLES \{FROM \| IN} database_name | Queries all tables in `database_name`. The `FULL` keyword specifies to query the table type. |
| {DATABASES \| SCHEMAS} \[STATUS\] | Queries all databases in the current tenant. The `STATUS` keyword specifies to query the read/write attribute of the database. |
-| \[FULL\] {COLUMNS \| FIELDS} {FROM \| IN} rel_name | Queries columns of the `rel_name` relationship. The `FULL` keyword specifies to query the collation, privileges, and comments of the column. |
+| \[FULL\] {COLUMNS \| FIELDS} {FROM \| IN} rel_name | Queries columns of the `rel_name` relationship. The `FULL` keyword specifies to query the collation, privileges, and comments of the column. |
| TABLE STATUS \[{FROM \| IN} database_name\] | Queries details of all tables in `database_name`. |
| PROCEDURE STATUS \[{FROM \| IN} database_name\] | Queries details of all stored procedures in `database_name`. |
| FUNCTION STATUS \[{FROM \| IN} database_name\] | Queries details of all functions in `database_name`. |
@@ -74,11 +74,11 @@ opt_for_grant_user:
| COLLATION | Queries supported collations. |
| PARAMETERS \[like_or_where_clause\] tenant_name | Queries system parameters. |
| {INDEX \| INDEXES \| KEYS} {FROM \| IN} rel_name \[{FROM \| IN} database_name\] | Queries indexes or keys of the `rel_name` relationship. |
-| \[FULL\] PROCESSLIST | Queries the current session and its status. The `FULL` keyword specifies the IP address and port number of the connection and the session ID of the OceanBase Database Proxy (ODP). |
+| \[FULL\] PROCESSLIST | Queries the list of processes in the current tenant. The following options are supported:<br>- `SHOW PROCESSLIST` displays the brief list of processes with the following information:<br>  - `Id`: the ID of the process, that is, the client session ID of the current session. This ID is the unique identifier of the session in the client.<br>  - `User`: the username used for database connection.<br>  - `Host`: the IP address and port number of the client. If an OceanBase Database Proxy (ODP) is used to connect to the database, the IP address and port number of the ODP are displayed here.<br>  - `db`: the name of the connected database.<br>  - `Command`: the type of the command being executed, such as `Query` and `Sleep`.<br>  - `Time`: the execution time of the current command, in seconds. If the command is retried, the time is reset and recalculated.<br>  - `State`: the status of the current session, such as `SLEEP` and `ACTIVE`.<br>  - `Info`: the content of the command being executed. A maximum of 100 characters can be displayed and the exceeding part is truncated.<br>- `SHOW FULL PROCESSLIST` displays the complete list of processes with the following information:<br>  - `Id`: the ID of the process, that is, the client session ID of the current session. This ID is the unique identifier of the session in the client.<br>  - `User`: the username used for database connection.<br>  - `Tenant`: the connected tenant.<br>  - `Host`: the IP address and port number of the client. If an ODP is used to connect to the database, the IP address and port number of the ODP are displayed here.<br>  - `db`: the name of the connected database.<br>  - `Command`: the type of the command being executed, such as `Query` and `Sleep`.<br>  - `Time`: the execution time of the current command, in seconds. If the command is retried, the time is reset and recalculated.<br>  - `State`: the status of the current session, such as `SLEEP` and `ACTIVE`.<br>  - `Info`: the content of the command being executed.<br>  - `Ip`: the IP address of the server.<br>  - `Port`: the SQL port number.<br>**Note**: You can execute the `SHOW PROCESSLIST` statement to query the quantity and IDs of sessions in the current database. For more information, see View tenant sessions. |
| TABLEGROUPS | Queries table groups. |
| {GLOBAL \| SESSION \| LOCAL} STATUS | Queries about the status of the session. |
| TENANT \[STATUS\] | Queries the name of the current tenant. The `STATUS` keyword specifies the read/write status of the tenant. |
-| CREATE TENANT tenant_name | Queries the CREATE TENANT statement. The sys tenant can query the CREATE statements of all tenants. A user tenant can query only its own CREATE statement. |
+| CREATE TENANT tenant_name | Queries the CREATE TENANT statement. The sys tenant can query the CREATE TENANT statements of all tenants. A user tenant can query only its own CREATE TENANT statement. |
| CREATE TABLEGROUP tablegroup_name | Queries the CREATE TABLEGROUP statement. |
| CREATE {DATABASE \| SCHEMA} \[IF NOT EXISTS\] database_name | Queries the CREATE DATABASE statement. The `IF NOT EXISTS` keyword is used to add `IF NOT EXISTS` to the create statement. |
| CREATE TABLE table_name | Queries the CREATE TABLE statement. |
@@ -269,4 +269,4 @@ opt_for_grant_user:
*************************** 1. row ***************************
ShowTraceJSON: [{"logs": null, "tags": [[{"sess_id": 3221487676}, {"action_name": ""}, {"module_name": ""}, {"client_info": ""}, {"receive_ts": 1686734801498147}, {"log_trace_id": "YB42AC1E87DE-0005FDE675EF77C4-0-0"}]], "elapse": 4716, "end_ts": "2023-06-14 17:26:41.502925", "parent": "0005fe13-8cac-6fd6-8035-4c299e621239", "span_id": "0005fe13-8cac-7061-6648-1148424d99fa", "start_ts": "2023-06-14 17:26:41.498209", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "com_query_process", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 4698, "end_ts": "2023-06-14 17:26:41.502914", "parent": "0005fe13-8cac-7061-6648-1148424d99fa", "span_id": "0005fe13-8cac-7068-d79c-9a1e3df2f09f", "start_ts": "2023-06-14 17:26:41.498216", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "mpquery_single_stmt", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": [[{"sql_text": "SHOW TRACE FORMAT='JSON'"}], [{"sql_id": "D2A6E68D54F4B888F9443FD4EABB490C"}, {"database_id": 500001}, {"plan_hash": 13465692160314901852}], [{"hit_plan": false}]], "elapse": 2623, "end_ts": "2023-06-14 17:26:41.500858", "parent": "0005fe13-8cac-7068-d79c-9a1e3df2f09f", "span_id": "0005fe13-8cac-707b-84fc-7259f7a5afa4", "start_ts": "2023-06-14 17:26:41.498235", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "sql_compile", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 5, "end_ts": "2023-06-14 17:26:41.498244", "parent": "0005fe13-8cac-707b-84fc-7259f7a5afa4", "span_id": "0005fe13-8cac-707f-c009-4663739d39ed", "start_ts": "2023-06-14 17:26:41.498239", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "pc_get_plan", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 2543, "end_ts": "2023-06-14 17:26:41.500841", "parent": "0005fe13-8cac-707b-84fc-7259f7a5afa4", "span_id": "0005fe13-8cac-70ba-5c5c-1dbf433776e1", "start_ts": "2023-06-14 17:26:41.498298", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "hard_parse", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 27, "end_ts": "2023-06-14 17:26:41.498327", "parent": "0005fe13-8cac-70ba-5c5c-1dbf433776e1", "span_id": "0005fe13-8cac-70bc-91c6-d074e4c11eb8", "start_ts": "2023-06-14 17:26:41.498300", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "parse", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 777, "end_ts": "2023-06-14 17:26:41.499126", "parent": "0005fe13-8cac-70ba-5c5c-1dbf433776e1", "span_id": "0005fe13-8cac-70ed-8482-917a5e0c16e7", "start_ts": "2023-06-14 17:26:41.498349", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "resolve", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 855, "end_ts": "2023-06-14 17:26:41.500028", "parent": "0005fe13-8cac-70ba-5c5c-1dbf433776e1", "span_id": "0005fe13-8cac-7425-2af1-497b7573f962", "start_ts": "2023-06-14 17:26:41.499173", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "rewrite", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 477, "end_ts": "2023-06-14 17:26:41.500522", "parent": "0005fe13-8cac-70ba-5c5c-1dbf433776e1", 
"span_id": "0005fe13-8cac-778d-5e60-51648fcad206", "start_ts": "2023-06-14 17:26:41.500045", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "optimize", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 176, "end_ts": "2023-06-14 17:26:41.500712", "parent": "0005fe13-8cac-70ba-5c5c-1dbf433776e1", "span_id": "0005fe13-8cac-7978-2ba6-2d656301eaef", "start_ts": "2023-06-14 17:26:41.500536", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "code_generate", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 1980, "end_ts": "2023-06-14 17:26:41.502845", "parent": "0005fe13-8cac-7068-d79c-9a1e3df2f09f", "span_id": "0005fe13-8cac-7ac1-2183-d9f0749c518d", "start_ts": "2023-06-14 17:26:41.500865", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "sql_execute", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 21, "end_ts": "2023-06-14 17:26:41.500887", "parent": "0005fe13-8cac-7ac1-2183-d9f0749c518d", "span_id": "0005fe13-8cac-7ac2-cc2b-a9dcea52ac30", "start_ts": "2023-06-14 17:26:41.500866", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "open", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 1874, "end_ts": "2023-06-14 17:26:41.502770", "parent": "0005fe13-8cac-7ac1-2183-d9f0749c518d", "span_id": "0005fe13-8cac-7ae0-063a-65a2b4aefdd6", "start_ts": "2023-06-14 17:26:41.500896", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "response_result", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 44, "end_ts": "2023-06-14 17:26:41.500947", "parent": "0005fe13-8cac-7ae0-063a-65a2b4aefdd6", "span_id": "0005fe13-8cac-7ae7-2357-70f9191525f2", "start_ts": "2023-06-14 17:26:41.500903", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "do_local_das_task", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": [[{"sql_id": "7F33FD22651F99E8AB2BAC5428623BCD"}, {"database_id": 201001}], [{"sql_text": "START TRANSACTION WITH CONSISTENT SNAPSHOT"}], [{"hit_plan": false}]], "elapse": 95, "end_ts": "2023-06-14 17:26:41.501466", "parent": "0005fe13-8cac-7ae0-063a-65a2b4aefdd6", "span_id": "0005fe13-8cac-7cbb-24eb-6fd68edef4b3", "start_ts": "2023-06-14 17:26:41.501371", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "sql_compile", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 5, "end_ts": "2023-06-14 17:26:41.501378", "parent": "0005fe13-8cac-7cbb-24eb-6fd68edef4b3", "span_id": "0005fe13-8cac-7cbd-d28e-243bd51c8f52", "start_ts": "2023-06-14 17:26:41.501373", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "pc_get_plan", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 53, "end_ts": "2023-06-14 17:26:41.501456", "parent": "0005fe13-8cac-7cbb-24eb-6fd68edef4b3", "span_id": "0005fe13-8cac-7cdb-6744-4e35d589f69e", "start_ts": "2023-06-14 17:26:41.501403", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "hard_parse", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 14, "end_ts": "2023-06-14 17:26:41.501418", "parent": 
"0005fe13-8cac-7cdb-6744-4e35d589f69e", "span_id": "0005fe13-8cac-7cdc-d22e-b917f1f800ac", "start_ts": "2023-06-14 17:26:41.501404", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "parse", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 10, "end_ts": "2023-06-14 17:26:41.501446", "parent": "0005fe13-8cac-7cdb-6744-4e35d589f69e", "span_id": "0005fe13-8cac-7cfc-a422-4419ea2a8f67", "start_ts": "2023-06-14 17:26:41.501436", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "resolve", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 32, "end_ts": "2023-06-14 17:26:41.501531", "parent": "0005fe13-8cac-7ae0-063a-65a2b4aefdd6", "span_id": "0005fe13-8cac-7d3b-4877-b72cd0df1355", "start_ts": "2023-06-14 17:26:41.501499", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "open", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 22, "end_ts": "2023-06-14 17:26:41.501522", "parent": "0005fe13-8cac-7d3b-4877-b72cd0df1355", "span_id": "0005fe13-8cac-7d3c-4373-3698da80bc1d", "start_ts": "2023-06-14 17:26:41.501500", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "cmd_open", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": [[{"trans_id": 0}]], "elapse": 1, "end_ts": "2023-06-14 17:26:41.501502", "parent": "0005fe13-8cac-7d3c-4373-3698da80bc1d", "span_id": "0005fe13-8cac-7d3d-8ba4-00096f64fb96", "start_ts": "2023-06-14 17:26:41.501501", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "end_transaction", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 2, "end_ts": "2023-06-14 17:26:41.501552", "parent": "0005fe13-8cac-7ae0-063a-65a2b4aefdd6", "span_id": "0005fe13-8cac-7d6e-608e-1f9e19e7615d", "start_ts": "2023-06-14 17:26:41.501550", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "close", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 536, "end_ts": "2023-06-14 17:26:41.502144", "parent": "0005fe13-8cac-7ae0-063a-65a2b4aefdd6", "span_id": "0005fe13-8cac-7da8-8fe7-73d23fcb23bf", "start_ts": "2023-06-14 17:26:41.501608", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "inner_execute_read", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": [[{"sql_text": "SELECT svr_ip, svr_port, tenant_id, trace_id, request_id, span_id, parent_span_id, span_name, ref_type, start_ts, end_ts, tags, logs FROM __all_virtual_trace_span_info WHERE tenant_id = 1002 AND trace_id = '0005fe13-8bf2-47d5-2cdd-5d819739c997'"}], [{"sql_id": "9B307250A34F95FE531FDC05F9F87300"}, {"database_id": 201001}, {"plan_hash": 13345609059733987708}, {"hit_plan": true}]], "elapse": 96, "end_ts": "2023-06-14 17:26:41.501714", "parent": "0005fe13-8cac-7da8-8fe7-73d23fcb23bf", "span_id": "0005fe13-8cac-7db2-b9c2-ef1bdb1e916b", "start_ts": "2023-06-14 17:26:41.501618", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "sql_compile", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 68, "end_ts": "2023-06-14 17:26:41.501687", "parent": "0005fe13-8cac-7db2-b9c2-ef1bdb1e916b", "span_id": "0005fe13-8cac-7db3-0fc7-6c76f9923e90", "start_ts": "2023-06-14 
17:26:41.501619", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "pc_get_plan", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 5, "end_ts": "2023-06-14 17:26:41.501730", "parent": "0005fe13-8cac-7da8-8fe7-73d23fcb23bf", "span_id": "0005fe13-8cac-7e1d-fffd-bb0c14004d0b", "start_ts": "2023-06-14 17:26:41.501725", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "open", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 35, "end_ts": "2023-06-14 17:26:41.501788", "parent": "0005fe13-8cac-7da8-8fe7-73d23fcb23bf", "span_id": "0005fe13-8cac-7e39-e47b-68b94d34b9f2", "start_ts": "2023-06-14 17:26:41.501753", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "do_local_das_task", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 32, "end_ts": "2023-06-14 17:26:41.502270", "parent": "0005fe13-8cac-7ae0-063a-65a2b4aefdd6", "span_id": "0005fe13-8cac-801e-95e9-f013f18e58ae", "start_ts": "2023-06-14 17:26:41.502238", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "close", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 12, "end_ts": "2023-06-14 17:26:41.502253", "parent": "0005fe13-8cac-801e-95e9-f013f18e58ae", "span_id": "0005fe13-8cac-8021-a2b8-efb5ea82362a", "start_ts": "2023-06-14 17:26:41.502241", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "close_das_task", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 131, "end_ts": "2023-06-14 17:26:41.502419", "parent": "0005fe13-8cac-7ae0-063a-65a2b4aefdd6", "span_id": "0005fe13-8cac-8050-4e15-2c08b254c1a5", "start_ts": "2023-06-14 17:26:41.502288", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "inner_commit", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": [[{"sql_text": "COMMIT"}], [{"sql_id": "1D0BA376E273B9D622641124D8C59264"}, {"database_id": 201001}]], "elapse": 50, "end_ts": "2023-06-14 17:26:41.502344", "parent": "0005fe13-8cac-8050-4e15-2c08b254c1a5", "span_id": "0005fe13-8cac-8056-1325-2808b1b2c771", "start_ts": "2023-06-14 17:26:41.502294", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "sql_compile", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 43, "end_ts": "2023-06-14 17:26:41.502338", "parent": "0005fe13-8cac-8056-1325-2808b1b2c771", "span_id": "0005fe13-8cac-8057-68cd-3b71b27d0efc", "start_ts": "2023-06-14 17:26:41.502295", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "hard_parse", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 6, "end_ts": "2023-06-14 17:26:41.502302", "parent": "0005fe13-8cac-8057-68cd-3b71b27d0efc", "span_id": "0005fe13-8cac-8058-f520-2ec9b2347039", "start_ts": "2023-06-14 17:26:41.502296", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "parse", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 9, "end_ts": "2023-06-14 17:26:41.502328", "parent": "0005fe13-8cac-8057-68cd-3b71b27d0efc", "span_id": "0005fe13-8cac-806f-120c-2d223c5cafed", "start_ts": "2023-06-14 17:26:41.502319", "trace_id": 
"0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "resolve", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 31, "end_ts": "2023-06-14 17:26:41.502381", "parent": "0005fe13-8cac-8050-4e15-2c08b254c1a5", "span_id": "0005fe13-8cac-808e-48e0-16311abf387d", "start_ts": "2023-06-14 17:26:41.502350", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "open", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 26, "end_ts": "2023-06-14 17:26:41.502376", "parent": "0005fe13-8cac-808e-48e0-16311abf387d", "span_id": "0005fe13-8cac-808e-d038-2dd226a767c8", "start_ts": "2023-06-14 17:26:41.502350", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "cmd_open", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": [[{"trans_id": 638124}]], "elapse": 15, "end_ts": "2023-06-14 17:26:41.502366", "parent": "0005fe13-8cac-808e-d038-2dd226a767c8", "span_id": "0005fe13-8cac-808f-c5be-0aea9943a94e", "start_ts": "2023-06-14 17:26:41.502351", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "end_transaction", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 1, "end_ts": "2023-06-14 17:26:41.502394", "parent": "0005fe13-8cac-8050-4e15-2c08b254c1a5", "span_id": "0005fe13-8cac-80b9-b0dc-197d3c4b2ffd", "start_ts": "2023-06-14 17:26:41.502393", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "close", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 55, "end_ts": "2023-06-14 17:26:41.502838", "parent": "0005fe13-8cac-7ac1-2183-d9f0749c518d", "span_id": "0005fe13-8cac-823f-9393-a4c4d9ef623d", "start_ts": "2023-06-14 17:26:41.502783", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "close", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": null, "elapse": 5, "end_ts": "2023-06-14 17:26:41.502789", "parent": "0005fe13-8cac-823f-9393-a4c4d9ef623d", "span_id": "0005fe13-8cac-8240-73a1-98f019c5455a", "start_ts": "2023-06-14 17:26:41.502784", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "close_das_task", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}, {"logs": null, "tags": [[{"trans_id": 0}]], "elapse": 3, "end_ts": "2023-06-14 17:26:41.502828", "parent": "0005fe13-8cac-823f-9393-a4c4d9ef623d", "span_id": "0005fe13-8cac-8269-cc9b-b89fe9a1cda3", "start_ts": "2023-06-14 17:26:41.502825", "trace_id": "0005fe13-8cac-6fd6-0b2a-658fb95ee88f", "span_name": "end_transaction", "tenant_id": 1002, "rec_svr_ip": "172.xx.xxx.xxx", "rec_svr_port": 2882}]
1 row in set
- ```
\ No newline at end of file
+ ```
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/150.online-and-offline-ddl-list-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/150.online-and-offline-ddl-list-of-mysql-mode.md
index 1fd1a9980b..84d33af700 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/150.online-and-offline-ddl-list-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/150.online-and-offline-ddl-list-of-mysql-mode.md
@@ -10,12 +10,13 @@
The following table describes the online DDL operations supported by the MySQL mode of OceanBase Database V4.x.
| Type | Operation | Time spent | Remarks | DDL support after creating mlog |
-|-------------|--------------------------|--------------------------|--------------------|---|
+|-------------|--------------------------|--------------------------|--------------------|--------------------|
| Index operation | Add an index | Related to the data volume, because data is reorganized (or rewritten) | This operation mainly involves global indexes, local indexes, global indexes with specified partitions, and spatial indexes (supported in OceanBase Database V4.1.0 and later). | Supported |
| Index operation | Drop an index | Related to whether active transactions exist | N/A | Supported |
| Index operation | Rename an index | Only for metadata modification | N/A | Supported |
-| Index operation | Hybrid index operations | Related to the data volume and requires completion of index data | For example, `ALTER TABLE t1 ADD INDEX i4(c1), DROP INDEX i2, RENAME INDEX i1 TO i1x` involves hybrid index operations. | Supported |
-| Column operation | Add a column to the end of a table | Only for metadata modification | For example, add a `LOB` (`TEXT`) column by executing a statement similar to `ALTER TABLE tbl1 ADD c3 LOB`. | Not supported |
+| Index operation | Hybrid index operations | Related to the data volume, because index data must be completed | For example, `ALTER TABLE t1 ADD INDEX i4(c1), DROP INDEX i2, RENAME INDEX i1 TO i1x` involves hybrid index operations. | Supported |
+| Column operation | Add, change, or delete the skip index type | Only for table schema modification | N/A | Not supported |
+| Column operation | Add a column to the end of a table | Only for metadata modification | For example, add a `LOB` (`TEXT`) column by executing a statement similar to `ALTER TABLE tbl1 ADD c3 LOB`. | Not supported |
| Column operation | Add a `VIRTUAL` column | Only for metadata modification | N/A | Not supported |
| Column operation | Drop a `VIRTUAL` column | Only for metadata modification | N/A | Not supported |
| Column operation | Set a `NOT NULL` constraint on a column | Related to the data volume, because data is queried | N/A | Not supported |
@@ -26,8 +27,8 @@ The following table describes the online DDL operations supported by the MySQL m
| Column operation | Rename a column | Only for metadata modification | N/A | Not supported |
| Column operation | Increase the length or precision of the data type for a column | Only for metadata modification | For example, increase the length of the `INT` type, increase the length of the `VARCHAR` type, or convert the `NUMBER` type. | Not supported |
| Column operation | Hybrid column operations | Related to the operation with the longest execution time | If an offline column operation is involved, the operation is upgraded to an offline DDL operation. | Not supported |
-| Foreign key operation | Add a foreign key or a `CHECK`/`NOT NULL` constraint | Related to the data volume, because data is queried | N/A | Adding foreign key constraints is supported. Adding `CHECK`/`NOT NULL` constraints is not supported. |
-| Foreign key operation | Drop a foreign key or a `CHECK`/`NOT NULL` constraint | Related to the data volume, because data is queried | N/A | Adding foreign key constraints is supported. Adding `CHECK`/`NOT NULL` constraints is not supported. |
+| Foreign key operation | Add a foreign key or a `CHECK`/`NOT NULL` constraint | Related to the data volume, because data is queried | N/A | You can add a foreign key constraint but not a `CHECK` or `NOT NULL` constraint. |
+| Foreign key operation | Drop a foreign key or a `CHECK`/`NOT NULL` constraint | Related to the data volume, because data is queried | N/A | You can drop a foreign key constraint but not a `CHECK` or `NOT NULL` constraint. |
| Table operation | Rename a table | Only for metadata modification | N/A | Not supported |
| Table operation | Change the row format | Only for metadata modification | N/A | Not supported |
| Table operation | Change the block size | Only for metadata modification | N/A | Not supported |
@@ -40,7 +41,7 @@ The following table describes the offline DDL operations supported by the MySQL
| Type | Operation | Time spent | Remarks | DDL support after creating mlog |
|-------------|--------------------------|--------------------------|--------------------|--------------------|
-| Column operation | Add a column (`BEFORE`/`AFTER`/`FIRST`) | Related to the data volume, because data is reorganized | This operation is not supported in Oracle mode. | Not supported |
+| Column operation | Add a column (`BEFORE`/`AFTER`/`FIRST`) | Related to the data volume, because data is reorganized (or rewritten) | This operation is not supported in Oracle mode. | Not supported |
| Column operation | Reorder columns (`BEFORE`/`AFTER`/`FIRST`) | Related to the data volume, because data is reorganized (or rewritten) | This operation is not supported in Oracle mode. | Not supported |
| Column operation | Add an auto-increment column | Related to the data volume, because data is reorganized (or rewritten) | N/A | Not supported |
| Column operation | Change a column to an auto-increment column | Related to the data volume, because data is queried | N/A | Not supported |
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/900.column-type-change-rule-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/900.column-type-change-rule-of-mysql-mode.md
index 3f8d530365..91d25228c7 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/900.column-type-change-rule-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/900.column-type-change-rule-of-mysql-mode.md
@@ -13,18 +13,19 @@ When you change the data type of a column in the MySQL mode of OceanBase Databas
### Prohibitory rules
-* FOREIGN KEY constraints
+* `FOREIGN KEY` constraints
The data type of a foreign key column cannot be changed by using online or offline DDL operations except in the following special cases. Note that only the display precision is changed and the storage structure is not changed.
+
* If the data type of the column is `FLOAT(m,n)`, you can change the precision. However, conversion between `SIGNED` and `UNSIGNED` is not supported.
* If the data type of the column is `DOUBLE(m,n)`, you can change the precision. However, conversion between `SIGNED` and `UNSIGNED` is not supported.
* If the data type of the column is `VARCHAR(m)`, you can only increase the precision. You cannot decrease the precision or change the data type to `CHAR`.
-* CHECK constraints
+* `CHECK` constraints
- For an integer column with a `CHECK` constraint, you can use an online DDL operation to change the data type to another integer data type that supports a wider value range, such as from INT to BIGINT. However, you cannot change the data type of other columns with a `CHECK` constraint.
+ You cannot change the data type of a column with a `CHECK` constraint by using online or offline DDL operations, except for a column of the `INTEGER` type.
* Trigger constraints
@@ -32,15 +33,18 @@ When you change the data type of a column in the MySQL mode of OceanBase Databas
### Change rules
-* To change the length of a column without changing its data type, if the column is just an index column, you can perform an online DDL operation. For example, you can change a `CHAR` column to a `CHAR(11)` column online. Since OceanBase Database can store `CHAR` data online, data rewriting is not required in this example. If the column is a primary key column or a partitioning key column, you need to perform an offline DDL operation. In other words, **to change the length of a primary key column or a partitioning key column without changing its data type, perform an offline DDL operation**.
-
- + To change the length of a `VARCHAR` column, you can perform an online DDL operation. However, you must also change the schema of dependent objects such as the index table. Note that an online DDL operation can be performed in this case to change the column length even though the column is used as a primary key or partitioning key. In other words, **even if the data does not change, the schema of the dependent object must still be changed.**
+* You can use an online DDL operation to increase the length of a column without changing its data type or rewriting data, for example, to change a `VARCHAR(8)` column to a `VARCHAR(11)` column.
- + To change the length of an integer column, you can perform an online DDL operation. The column can be a common column, a primary key column, a partitioning key column, an index column, a column that a generated column depends on, or a column with a `CHECK` constraint.
+ * If you perform an online DDL operation to change the length of a column that has a dependent object, such as an index table, for example, when you increase the length of a `VARCHAR` column, the schema of the dependent object must also be changed. In other words, **even though the data is not expected to change, the schema of the dependent object must still be changed**.
+ * You can perform an online DDL operation to increase the length of an `INTEGER` or `DECIMALINT` column when it is an indexed column or a primary key column.
-* In OceanBase Database, to change the data type of a numeric column, character column, or date and time column, you can perform an online DDL operation if this column is not referenced by an index table, foreign key, primary key, or partitioning key. If the column is referenced by a dependent object, you need to perform an offline DDL operation. In other words, **offline DDL operations are required for changing the data type of columns on which any objects depend**.
+* OceanBase Database supports online DDL operations, such as increasing the column length, on `LOB` columns in the following scenarios:
-* In OceanBase Database, if an integer column is a foreign key column, a column on which a foreign key depends, or a generated column, you cannot change the column type.
+ * Modify a `LOB` column that is a normal column.
+ * Modify a `LOB` column that is a stored generated column or a virtual generated column.
+ * Modify a `LOB` column on which a stored generated column or a virtual generated column depends.
+ * Add a `NOT NULL` or `CHECK` constraint to a `LOB` column.
+ * Modify the existing `NOT NULL` constraint on a `LOB` column.
* When a column type change is allowed, converting between `SIGNED` and `UNSIGNED` values of a numeric data type requires an offline DDL operation, as does changing the character set and collation.
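+
+As an illustration, here is a minimal sketch (hypothetical table `t_sign`) of a conversion that, per the rule above, runs as an offline DDL operation:
+
+```sql
+CREATE TABLE t_sign (c1 INT);
+-- Converting between SIGNED and UNSIGNED rewrites stored data,
+-- so this statement is executed as an offline DDL operation.
+ALTER TABLE t_sign MODIFY c1 INT UNSIGNED;
+```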
@@ -158,3 +162,75 @@ The following table describes the conversion between character data types and da
| `TIMESTAMP` | Supported | Supported | See rules for conversion within the category | See rules for conversion within the category | See rules for conversion within the category | See rules for conversion within the category | See rules for conversion within the category |
| `TIME` | Supported | Supported | See rules for conversion within the category | See rules for conversion within the category | See rules for conversion within the category | See rules for conversion within the category | See rules for conversion within the category |
| `YEAR` | Supported | Supported | See rules for conversion within the category | See rules for conversion within the category | See rules for conversion within the category | See rules for conversion within the category | See rules for conversion within the category |
+
+## Conversion to larger column types by using online DDL operations
+
+### Conversion between integer data types
+
+| Data type | To SMALLINT | To MEDIUMINT | To INT | To BIGINT |
+|-----------|--------------|-------------|---------|-----------|
+| TINYINT | Supported | Supported | Supported | Supported |
+| SMALLINT | N/A | Supported | Supported | Supported |
+| MEDIUMINT | Not supported | N/A | Supported | Supported |
+| INT | Not supported | Not supported | N/A | Supported |
+
+| Data type | To USMALLINT | To UMEDIUMINT | To UINT | To UBIGINT |
+|------------|--------------|---------------|---------|------------|
+| UTINYINT | Supported | Supported | Supported | Supported |
+| USMALLINT | N/A | Supported | Supported | Supported |
+| UMEDIUMINT | Not supported | N/A | Supported | Supported |
+| UINT | Not supported | Not supported | N/A | Supported |
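+
+For example, here is a minimal sketch (hypothetical table `t_int`) of a conversion to a larger integer type, which the tables above list as supported:
+
+```sql
+CREATE TABLE t_int (c1 TINYINT);
+-- TINYINT to BIGINT is a conversion to a larger integer type,
+-- so it can be performed as an online DDL operation.
+ALTER TABLE t_int MODIFY c1 BIGINT;
+```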
+
+### DECIMAL/DECIMALINT
+
+`DECIMAL(precision, scale)` is a decimal numeric data type. `precision` indicates the total number of digits, and `scale` indicates the number of digits in the fractional part. For online DDL operations, the `precision` values are grouped into the ranges [1,9], [10,18], [19,38], and [39,65].
+
+<main id="notice" type='notice'>
+<h4>Notice</h4>
+<ul>
+<li>In Oracle mode, the maximum value of <code>precision</code> is 38.</li>
+<li>In MySQL mode, the maximum value of <code>precision</code> is 65.</li>
+</ul>
+</main>
+
+You can perform an online DDL operation to convert a DECIMAL type into another DECIMAL type that provides a `precision` in the same range and has the same `scale`, because such a conversion does not change the underlying physical storage. For example, you can convert the `DECIMAL(10, 2)` type into the `DECIMAL(12, 2)` type online.
+
+* You can perform an online DDL operation to convert a DECIMAL type into another DECIMAL type with a larger `precision` in the same range. However, converting to a smaller `precision`, or to a `precision` in a different range, requires an offline DDL operation.
+
+* You can only perform an offline DDL operation to convert a DECIMAL type into another DECIMAL type with a different `scale`, regardless of whether the `scale` is increased or decreased. For example, you must perform an offline DDL operation to convert the `DECIMAL(10, 2)` type into the `DECIMAL(10, 4)` type or the `DECIMAL(10, 1)` type.
+
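+A minimal sketch (hypothetical table `t_dec`) of the online case described above, where the `scale` is unchanged and the new `precision` stays in the same range [10,18]:
+
+```sql
+CREATE TABLE t_dec (c1 DECIMAL(10, 2));
+-- Same scale, larger precision within the same range:
+-- executed as an online DDL operation.
+ALTER TABLE t_dec MODIFY c1 DECIMAL(12, 2);
+```
+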
+### Increase the length of a VARCHAR data type
+
+You can perform an online DDL operation to convert the `VARCHAR(x)` type into the `VARCHAR(y)` type when `y` is greater than or equal to `x`. For example, you can perform an online DDL operation to convert the `VARCHAR(10)` type into the `VARCHAR(20)` type.
+
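+A minimal sketch (hypothetical table `t_vc`) of such an online length increase:
+
+```sql
+CREATE TABLE t_vc (c1 VARCHAR(10));
+-- Increasing the length of a VARCHAR column is an online DDL operation.
+ALTER TABLE t_vc MODIFY c1 VARCHAR(20);
+```
+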
+### Increase the length of a VARBINARY data type
+
+You can perform an online DDL operation to convert the `VARBINARY(x)` type into the `VARBINARY(y)` type when `y` is greater than or equal to `x`. For example, you can perform an online DDL operation to convert the `VARBINARY(10)` type into the `VARBINARY(20)` type.
+
+### LOB types
+
+<main id="notice" type='notice'>
+<h4>Notice</h4>
+<p><code>LOB</code> columns cannot be used as indexed columns or primary key columns.</p>
+</main>
+
+| Data type | To MEDIUMBLOB | To LONGBLOB |
+|------------|--------------|--------------|
+| BLOB | Supported | Supported |
+| MEDIUMBLOB | N/A | Supported |
+
+| Data type | To MEDIUMTEXT | To LONGTEXT |
+|------------|--------------|--------------|
+| TEXT | Supported | Supported |
+| MEDIUMTEXT | N/A | Supported |
+
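+A minimal sketch (hypothetical table `t_lob`) of a conversion that the tables above list as supported:
+
+```sql
+CREATE TABLE t_lob (c1 TEXT);
+-- TEXT to LONGTEXT is a conversion to a larger LOB type,
+-- so it can be performed as an online DDL operation.
+ALTER TABLE t_lob MODIFY c1 LONGTEXT;
+```
+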
+### Conversion between VARCHAR and TINYTEXT
+
+| Data type | To VARCHAR(x), x >= 255 | To TINYTEXT |
+|--------------------|----------------------|-------------|
+| TINYTEXT | Supported | N/A |
+| VARCHAR(x), x <= 255 | N/A | Supported |
+
+### Conversion between VARBINARY and TINYBLOB
+
+| Data type | To VARBINARY(x), x >= 255 | To TINYBLOB |
+|----------------------|----------------------|-------------|
+| TINYBLOB | Supported | N/A |
+| VARBINARY(x), x <= 255 | N/A | Supported |
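+
+Likewise, a minimal sketch (hypothetical table `t_tt`) of the `VARCHAR`-to-`TINYTEXT` case listed in the first table above:
+
+```sql
+CREATE TABLE t_tt (c1 VARCHAR(255));
+-- VARCHAR(x) with x <= 255 can be converted to TINYTEXT online.
+ALTER TABLE t_tt MODIFY c1 TINYTEXT;
+```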
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/1000.ddl-function-of-oracle-mode/150.online-and-offline-ddl-list-of-oracle-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/1000.ddl-function-of-oracle-mode/150.online-and-offline-ddl-list-of-oracle-mode.md
index 60d5743542..7ed590ba7c 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/1000.ddl-function-of-oracle-mode/150.online-and-offline-ddl-list-of-oracle-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/1000.ddl-function-of-oracle-mode/150.online-and-offline-ddl-list-of-oracle-mode.md
@@ -10,12 +10,13 @@
The following table describes the online DDL operations supported by the Oracle mode of OceanBase Database V4.x.
| Type | Operation | Time spent | Remarks | DDL support after creating mlog |
-|-------------|--------------------------|--------------------------|--------------------|---|
+|-------------|--------------------------|--------------------------|--------------------|--------------------|
| Index operation | Add an index | Related to the data volume, because data is reorganized (or rewritten) | This operation mainly involves global indexes, local indexes, global indexes with specified partitions, and spatial indexes (supported in OceanBase Database V4.1.0 and later). | Supported |
| Index operation | Drop an index | Related to whether active transactions exist | N/A | Supported |
| Index operation | Rename an index | Only for metadata modification | N/A | Supported |
-| Index operation | Hybrid index operations | Related to the operations with the longest duration | For example, `ALTER TABLE t1 ADD INDEX i4(c1), DROP INDEX i2, RENAME INDEX i1 TO i1x`. | Not supported |
-| Column operation | Add a column to the end of a table | Only for metadata modification | For example, add a `LOB` (`TEXT`) column by executing a statement similar to `ALTER TABLE tbl1 ADD c3 LOB`. | Not supported |
+| Index operation | Hybrid index operations | Related to the operation with the longest execution time | For example, `ALTER TABLE t1 ADD INDEX i4(c1), DROP INDEX i2, RENAME INDEX i1 TO i1x` involves hybrid index operations. It is not supported in Oracle mode. | Not supported |
+| Column operation | Add, change, or delete the skip index type | Only for table schema modification | N/A | Not supported |
+| Column operation | Add a column to the end of a table | Only for metadata modification | For example, add a `LOB` (`TEXT`) column by executing a statement similar to `ALTER TABLE tbl1 ADD c3 LOB`. | Not supported |
| Column operation | Add a `VIRTUAL` column | Only for metadata modification | N/A | Not supported |
| Column operation | Drop a `VIRTUAL` column | Only for metadata modification | N/A | Not supported |
| Column operation | Set a `NOT NULL` constraint on a column | Related to the data volume, because data is queried | N/A | Not supported |
@@ -26,8 +27,8 @@ The following table describes the online DDL operations supported by the Oracle
| Column operation | Rename a column | Only for metadata modification | N/A | Not supported |
| Column operation | Increase the length or precision of the data type for a column | Only for metadata modification | For example, increase the length of the `INT` type, increase the length of the `VARCHAR` type, or convert the `NUMBER` type. | Not supported |
| Column operation | Hybrid column operations | Related to the operation with the longest execution time | If an offline column operation is involved, the operation is upgraded to an offline DDL operation. | Not supported |
-| Foreign key operation | Add a foreign key or a `CHECK`/`NOT NULL` constraint | Related to the data volume, because data is queried | N/A | Adding foreign key and `CHECK` constraints is supported. Adding the `NOT NULL` constraint is not supported. |
-| Foreign key operation | Drop a foreign key or a `CHECK`/`NOT NULL` constraint | Related to the data volume, because data is queried | N/A | Adding foreign key constraints is supported. Adding `CHECK`/`NOT NULL` constraints is not supported. |
+| Foreign key operation | Add a foreign key or a `CHECK`/`NOT NULL` constraint | Related to the data volume, because data is queried | N/A | You can add a foreign key or a `CHECK` constraint but not a `NOT NULL` constraint. |
+| Foreign key operation | Drop a foreign key or a `CHECK`/`NOT NULL` constraint | Related to the data volume, because data is queried | N/A | You can drop a foreign key constraint but not a `CHECK` or `NOT NULL` constraint. |
| Table operation | Rename a table | Only for metadata modification | N/A | Not supported |
| Table operation | Change the row format | Only for metadata modification | N/A | Not supported |
| Table operation | Change the block size | Only for metadata modification | N/A | Not supported |
@@ -39,7 +40,7 @@ The following table describes the offline DDL operations supported by the Oracle
| Type | Operation | Time spent | Remarks | DDL support after creating mlog |
|-------------|--------------------------|--------------------------|--------------------|--------------------|
-| Column operation | Add an auto-increment column | Related to the data volume, because data is reorganized | N/A | Not supported |
+| Column operation | Add an auto-increment column | Related to the data volume, because data is reorganized (or rewritten) | N/A | Not supported |
| Column operation | Set a column as a primary key | Related to the data volume, because data is reorganized (or rewritten) | N/A | Not supported |
| Column operation | Add or drop a `STORED` column | Related to the data volume, because data is reorganized (or rewritten) | N/A | Not supported |
| Column operation | Drop a column | Related to the data volume, because data is reorganized (or rewritten) | N/A | Not supported |
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/500.functions-of-oracle-mode/200.single-row-functions-of-oracle-mode/900.environment-and-identifier-functions-of-oracle-mode/100.lnnvl-of-oracle-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/500.functions-of-oracle-mode/200.single-row-functions-of-oracle-mode/900.environment-and-identifier-functions-of-oracle-mode/100.lnnvl-of-oracle-mode.md
index d04a81f1ea..7c1a39e20c 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/500.functions-of-oracle-mode/200.single-row-functions-of-oracle-mode/900.environment-and-identifier-functions-of-oracle-mode/100.lnnvl-of-oracle-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/500.functions-of-oracle-mode/200.single-row-functions-of-oracle-mode/900.environment-and-identifier-functions-of-oracle-mode/100.lnnvl-of-oracle-mode.md
@@ -40,11 +40,10 @@ SYS_CONTEXT('namespace', 'parameter' [, length ])
| INSTANCE_NAME | The name of the current instance. |
| IP_ADDRESS | The IP address of the client. |
| LANG | The abbreviated name of the language, which is shorter than `LANGUAGE`. |
-| LANGUAGE | The language, region, and database character set of the current session, which is |
-| SESSIONID | The auditing session ID. |
+| LANGUAGE | The language, region, and database character set of the current session. |
| SESSION_USER | The name of the logged-on database user, which remains unchanged during the session. |
| SESSION_USERID | The ID of the logged-on database user. |
-| SID | The session ID. |
+| SID \| SESSIONID | The client session ID of the current session, which uniquely identifies a session on a client. |
## Return type
@@ -52,14 +51,29 @@ The return type is `VARCHAR2`.
## Examples
-View the session ID in the `USERENV` namespace.
+View the client session ID in the `USERENV` namespace.
```sql
-obclient> SELECT SYS_CONTEXT ('USERENV', 'SID') FROM DUAL;
-+------------------------------+
-| SYS_CONTEXT('USERENV','SID') |
-+------------------------------+
-| 3221638765 |
-+------------------------------+
+obclient [SYS]> SELECT SYS_CONTEXT ('USERENV', 'SESSIONID') AS Client_Session_ID FROM DUAL;
+```
+
+or
+
+```sql
+obclient [SYS]> SELECT SYS_CONTEXT ('USERENV', 'SID') AS Client_Session_ID FROM DUAL;
+```
+
+The return result is as follows:
+
+```shell
++-------------------+
+| CLIENT_SESSION_ID |
++-------------------+
+| 3221488043 |
++-------------------+
1 row in set
```
+
+## References
+
+You can execute the `SHOW PROCESSLIST` statement to query the number and IDs of sessions in the current database. For more information, see [View tenant sessions](../../../../../../1200.database-proxy/1500.view-tenant-sessions.md).
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/500.functions-of-oracle-mode/200.single-row-functions-of-oracle-mode/900.environment-and-identifier-functions-of-oracle-mode/500.userenv-of-oracle-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/500.functions-of-oracle-mode/200.single-row-functions-of-oracle-mode/900.environment-and-identifier-functions-of-oracle-mode/500.userenv-of-oracle-mode.md
index 8feba44dd7..d11a5ea366 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/500.functions-of-oracle-mode/200.single-row-functions-of-oracle-mode/900.environment-and-identifier-functions-of-oracle-mode/500.userenv-of-oracle-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/500.functions-of-oracle-mode/200.single-row-functions-of-oracle-mode/900.environment-and-identifier-functions-of-oracle-mode/500.userenv-of-oracle-mode.md
@@ -22,35 +22,77 @@ USERENV('parameter')
The following describes the valid values of `parameter`.
| **Value** | **Description** |
-| --- | --- |
+| -------- | -------- |
| CLIENT_INFO | The information about the user session, which contains up to 64 bytes. Applications can store the information in the `DBMS_APPLICATION_INFO` system package. |
| INSTANCE | The ID of the current instance. |
| LANG | The abbreviated name of the language, which is shorter than LANGUAGE. |
-| LANGUAGE | The language, region, and database character set of the current session, which is in the `language_territory.characterset` format. |
+| LANGUAGE | The language, region, and database character set of the current session. The value is in the `language_territory.characterset` format. |
| SCHEMAID | The schema ID. |
-| SESSIONID | The ID of the audit session. |
-| SID | The session ID. |
+| SID \| SESSIONID | The client session ID of the current session, which uniquely identifies a session on a client. |
-## Return type
+## Return value
The return type is `NUMBER` when the `SESSIONID` or `SID` parameter is specified. Otherwise, the return type is `VARCHAR2`.
## Examples
-```sql
-obclient> SELECT USERENV('LANGUAGE') "Language" FROM DUAL;
-+---------------------------+
-| Language |
-+---------------------------+
-| AMERICAN_AMERICA.AL32UTF8 |
-+---------------------------+
-1 row in set
-
-obclient> SELECT USERENV('SCHEMAID') FROM DUAL;
-+---------------------+
-| USERENV('SCHEMAID') |
-+---------------------+
-| 201006 |
-+---------------------+
-1 row in set
-```
+* Obtain the language, region, and database character set of the current session.
+
+ ```sql
+ obclient [SYS]> SELECT USERENV('LANGUAGE') "Language" FROM DUAL;
+ ```
+
+ The return result is as follows:
+
+ ```shell
+ +---------------------------+
+ | Language |
+ +---------------------------+
+ | AMERICAN_AMERICA.AL32UTF8 |
+ +---------------------------+
+ 1 row in set
+ ```
+
+* Obtain the schema ID of the current session.
+
+ ```sql
+ obclient [SYS]> SELECT USERENV('SCHEMAID') FROM DUAL;
+ ```
+
+ The return result is as follows:
+
+ ```shell
+ +---------------------+
+ | USERENV('SCHEMAID') |
+ +---------------------+
+ | 201006 |
+ +---------------------+
+ 1 row in set
+ ```
+
+* Obtain the client session ID of the current session.
+
+ ```sql
+ obclient [SYS]> SELECT USERENV('SESSIONID') AS Client_Session_ID FROM DUAL;
+ ```
+
+ or
+
+ ```sql
+ obclient [SYS]> SELECT USERENV('SID') AS Client_Session_ID FROM DUAL;
+ ```
+
+ The return result is as follows:
+
+ ```shell
+ +-------------------+
+ | CLIENT_SESSION_ID |
+ +-------------------+
+ | 3221488033 |
+ +-------------------+
+ 1 row in set
+ ```
+
+## References
+
+You can execute the `SHOW PROCESSLIST` statement to query the number and IDs of sessions in the current database. For more information, see [View tenant sessions](../../../../../../1200.database-proxy/1500.view-tenant-sessions.md).
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1000.alter-table-of-oracle-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1000.alter-table-of-oracle-mode.md
index 88033bec7e..efcaf060cd 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1000.alter-table-of-oracle-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/1000.alter-table-of-oracle-mode.md
@@ -208,8 +208,8 @@ partition_count | subpartition_count:
| ALTER INDEX | Modifies index attributes. |
| FOREIGN KEY | Adds a foreign key. If you do not specify the name of the foreign key, it will be named in the format of table name + `OBFK` + time when the foreign key was created. For example, the foreign key created for table `t1` at 00:00:00 on August 1, 2021 is named as `t1_OBFK_1627747200000000`. A foreign key enables one table (child table) to reference data from another table (parent table). |
| ADD PARTITION | Adds a partition. |
-| DROP {PARTITION \| SUBPARTITION} | Drops a partition. Valid values:<br>- `PARTITION`: drops the specified RANGE or LIST partitions, as well as all subpartitions that exist under these partitions. The partition definitions and partition data are also deleted, but the local indexes defined on the partitions are maintained.<br>- `SUBPARTITION`: drops the specified \*-RANGE or \*-LIST subpartitions, including the subpartition definitions and data, and maintains the local indexes on the subpartitions.<br>If you specify `UPDATE GLOBAL INDEXES`, the system updates global indexes when dropping the partitions or subpartitions. If you do not specify `UPDATE GLOBAL INDEXES`, the global indexes on the partitioned or subpartitioned table must be in an unusable state. Separate multiple partition names with commas (,).<br>**Notice**: Before dropping a partition, ensure that there are no active transactions or queries in this partition. Otherwise, SQL statement errors or exceptions may occur. |
-| TRUNCATE {PARTITION \| SUBPARTITION} | Truncates a partition. Valid values:<br>- `PARTITION`: deletes all data in the specified RANGE or LIST partitions, as well as data in all subpartitions that exist under these partitions, and maintains the local indexes on the partitions.<br>- `SUBPARTITION`: deletes all data in the specified \*-RANGE or \*-LIST subpartitions, and maintains the local indexes on the subpartitions.<br>If you specify `UPDATE GLOBAL INDEXES`, the system updates global indexes when deleting the data. If you do not specify `UPDATE GLOBAL INDEXES`, the global indexes on the partitioned or subpartitioned table must be in an unusable state. Separate multiple partition names with commas (,).<br>**Notice**: Before truncating a partition, ensure that there are no active transactions or queries in this partition. Otherwise, SQL statement errors or exceptions may occur. |
+| DROP {PARTITION \| SUBPARTITION} | Drops a partition. Valid values:<br>- `PARTITION`: drops the specified RANGE or LIST partitions, as well as all subpartitions that exist under these partitions. The partition definitions and partition data are also deleted, but the local indexes defined on the partitions are maintained.<br>- `SUBPARTITION`: drops the specified \*-RANGE or \*-LIST subpartitions, including the subpartition definitions and data, and maintains the local indexes on the subpartitions.<br>If you specify `UPDATE GLOBAL INDEXES`, the system updates global indexes when dropping the partitions or subpartitions. If you do not specify `UPDATE GLOBAL INDEXES`, the global indexes on the partitioned or subpartitioned table must be in an unusable state. Separate multiple partition names with commas (,).<br>**Notice**: Before dropping a partition, ensure that there are no active transactions or queries in this partition. Otherwise, SQL statement errors or exceptions may occur. |
+| TRUNCATE {PARTITION \| SUBPARTITION} | Truncates a partition. Valid values:<br>- `PARTITION`: deletes all data in the specified RANGE or LIST partitions, as well as data in all subpartitions that exist under these partitions, and maintains the local indexes on the partitions.<br>- `SUBPARTITION`: deletes all data in the specified \*-RANGE or \*-LIST subpartitions, and maintains the local indexes on the subpartitions.<br>If you specify `UPDATE GLOBAL INDEXES`, the system updates global indexes when deleting the data. If you do not specify `UPDATE GLOBAL INDEXES`, the global indexes on the partitioned or subpartitioned table must be in an unusable state. Separate multiple partition names with commas (,).<br>**Notice**: Before truncating a partition, ensure that there are no active transactions or queries in this partition. Otherwise, SQL statement errors or exceptions may occur. |
| RENAME \[TO\] table_name | Renames a table. |
| RENAME { PARTITION \| SUBPARTITION } partition_name TO new_name | Renames a partition or subpartition. `new_name` indicates the new name, which is case insensitive. Renaming affects the partitions or subpartitions of the primary table but does not affect those of local indexes. You can query the `USER_TAB_PARTITIONS` and `USER_TAB_SUBPARTITIONS` views to check the renaming result. For more information, see [Rename a partition](../../../../../300.database-object-management/200.manage-object-of-oracle-mode/200.manage-partitions-of-oracle-mode/310.rename-a-partition-of-oracle-mode.md). If you attempt to rename a partition or subpartition locked for a DML operation, the rename operation is blocked. The partition or subpartition can be renamed only after the DML operation releases the lock. |
| DROP TABLEGROUP | Drops a table group. |
@@ -223,9 +223,9 @@ partition_count | subpartition_count:
| parallel_clause | The degree of parallelism (DOP) at the table level.<br>- `NOPARALLEL`: sets the DOP to `1`, which is the default value.<br>- `PARALLEL integer`: sets the DOP to an integer greater than or equal to `1`.<br>**Notice**: When you specify the DOP, the following priority order applies: DOP specified by using a hint \> DOP specified by executing the `ALTER SESSION` statement \> DOP at the table level. |
| {ENABLE \| DISABLE} CONSTRAINT constraint_name | Enables or disables the `FOREIGN KEY` constraint or `CHECK` constraint. |
| MODIFY PRIMARY KEY | Modifies a primary key. |
-| ADD COLUMN GROUP([all columns,]each column) | Converts a table to a columnstore table. The following options are supported:<br>- `ADD COLUMN GROUP(all columns, each column)`: converts the table to a rowstore-columnstore redundant table.<br>- `ADD COLUMN GROUP(each column)`: converts the table to a columnstore table. |
-| DROP COLUMN GROUP([all columns,]each column) | Drops the store format of the table. The following options are supported:<br>- `DROP COLUMN GROUP(all columns, each column)`: drops the rowstore-columnstore redundant format of the table.<br>- `DROP COLUMN GROUP(all columns)`: drops the rowstore format of the table.<br>- `DROP COLUMN GROUP(each column)`: drops the columnstore format of the table. |
-| SKIP_INDEX | Modifies the Skip Index attribute of a column. Valid values:<br>- `MIN_MAX`: specifies to store the maximum value, minimum value, and number of null values of the indexed column on an index node. It is the most common aggregate data type in Skip Index. It can accelerate the pushdown of filters and `MIN/MAX` functions.<br>- `SUM`: used to accelerate the pushdown of the `SUM` function on data of the numeric type.<br>**Notice**:<br>- The Skip Index attribute is not supported for a column of the JSON or spatial data type.<br>- The Skip Index attribute is not supported for a generated column. |
+| ADD COLUMN GROUP([all columns,]each column) | Converts a table into a columnstore table. The following options are supported:<br>- `ADD COLUMN GROUP(all columns, each column)`: converts the table into a rowstore-columnstore redundant table.<br>- `ADD COLUMN GROUP(each column)`: converts the table into a columnstore table. |
+| DROP COLUMN GROUP([all columns,]each column) | Removes the storage format of the table. The following options are supported:<br>- `DROP COLUMN GROUP(all columns, each column)`: removes the rowstore-columnstore redundancy format for the table.<br>- `DROP COLUMN GROUP(all columns)`: removes the rowstore format for the table.<br>- `DROP COLUMN GROUP(each column)`: removes the columnstore format for the table. |
+| SKIP_INDEX | Modifies the skip index attribute of a column. Valid values:<br>- `MIN_MAX`: the most common skip index type. A skip index of this type stores the maximum value, minimum value, and null count of the indexed column at the index node granularity. This type of skip index can accelerate the pushdown of filters and `MIN/MAX` aggregation.<br>- `SUM`: the skip index type that is used to accelerate the pushdown of `SUM` aggregation for numeric values.<br>**Notice**:<br>- You cannot create a skip index for a JSON column or a spatial column.<br>- You cannot create a skip index for a generated column. |
## Examples
@@ -360,9 +360,7 @@ partition_count | subpartition_count:
Query OK, 0 rows affected
```
-* Add the `p4` partition to a non-template-based subpartitioned table named `tbl3`.
-
- You need to specify both the partition definition and the subpartition definition.
+* Add the `p4` partition to a non-template-based subpartitioned table named `tbl3`. You need to specify both the partition definition and the subpartition definition.
```sql
obclient> ALTER TABLE tbl3 ADD PARTITION p4 VALUES LESS THAN (400)
@@ -374,9 +372,7 @@ partition_count | subpartition_count:
Query OK, 0 rows affected
```
-* Add the `p3` partition to a template-based subpartitioned table named `tbl4`.
-
- You need to specify only the partition definition. The subpartition definition is filled in automatically based on the template.
+* Add the `p3` partition to a template-based subpartitioned table named `tbl4`. You need to specify only the partition definition. The subpartition definition is filled in automatically based on the template.
```sql
obclient> CREATE TABLE tbl4(col1 INT, col2 INT, PRIMARY KEY(col1,col2))
@@ -649,7 +645,7 @@ partition_count | subpartition_count:
CREATE TABLE tbl1 (col1 INT PRIMARY KEY, col2 VARCHAR(50));
```
- 2. Convert the `tbl1` table to a rowstore-columnstore redundant table, and then drop the rowstore-columnstore redundancy attribute.
+ 2. Convert the `tbl1` table into a rowstore-columnstore redundant table, and then drop the rowstore-columnstore redundancy attribute.
```sql
ALTER TABLE tbl1 ADD COLUMN GROUP(all columns, each column);
@@ -659,7 +655,7 @@ partition_count | subpartition_count:
ALTER TABLE tbl1 DROP COLUMN GROUP(all columns, each column);
```
- 3. Convert the `tbl1` table to a columnstore table, and then delete the columnstore attribute.
+ 3. Convert the `tbl1` table into a columnstore table, and then drop the columnstore attribute.
```sql
ALTER TABLE tbl1 ADD COLUMN GROUP(each column);
@@ -669,9 +665,9 @@ partition_count | subpartition_count:
ALTER TABLE tbl1 DROP COLUMN GROUP(each column);
```
-* Modify the Skip Index attribute for columns in a table.
+* Modify the skip index attribute of a column in the table.
- 1. Execute the following statement to create a table named `test_skidx`:
+ 1. Use the following SQL statement to create a table named `test_skidx`.
```sql
CREATE TABLE test_skidx(
@@ -682,19 +678,23 @@ partition_count | subpartition_count:
);
```
- 2. Change the Skip Index attribute of the `col2` column in the `test_skidx` table to `SUM`.
+ 2. Change the type of the skip index on the `col2` column in the `test_skidx` table to `SUM`.
```sql
ALTER TABLE test_skidx MODIFY col2 FLOAT SKIP_INDEX(SUM);
```
- 3. Add the `MIN_MAX` Skip Index attribute for the `col4` column in the `test_skidx` table.
+ 3. Add the skip index attribute for a column after the table is created.
+
+ Add a skip index of the `MIN_MAX` type for the `col4` column in the `test_skidx` table.
```sql
ALTER TABLE test_skidx MODIFY col4 CHAR(10) SKIP_INDEX(MIN_MAX);
```
- 4. Delete the Skip Index attribute for the `col1` column in the `test_skidx` table.
+ 4. Delete the skip index attribute for a column after the table is created.
+
+ Delete the skip index attribute of the `col1` column in the `test_skidx` table.
```sql
ALTER TABLE test_skidx MODIFY col1 NUMBER SKIP_INDEX();
@@ -702,4 +702,4 @@ partition_count | subpartition_count:
## References
-[Modify a table](../../../../../300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/600.change-table-of-oracle-mode.md)
\ No newline at end of file
+[Modify a table](../../../../../300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/600.change-table-of-oracle-mode.md)
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/2400.create-table-of-oracle-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/2400.create-table-of-oracle-mode.md
index 2b1dda7d70..bb37048a71 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/2400.create-table-of-oracle-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/100.ddl-of-oracle-mode/2400.create-table-of-oracle-mode.md
@@ -226,9 +226,9 @@ table_column_group_option:
| ON COMMIT DELETE ROWS | Deletes data upon commit for a transaction-level temporary table. |
| ON COMMIT PRESERVE ROWS | Deletes data upon the end of the session for a session-level temporary table. |
| parallel_clause | The degree of parallelism (DOP) at the table level. Valid values: - `NOPARALLEL`: sets the DOP to 1. This is the default value. - `PARALLEL integer`: sets the DOP to an integer greater than or equal to `1`. **Notice**: When you specify the DOP, the following priority order applies: DOP specified by using a hint \> DOP specified by `ALTER SESSION` \> DOP at the table level (see the sketch following this table). |
-| DUPLICATE_SCOPE | The replicated table attribute. Valid values: - `none`: specifies that the table is a normal table. This is the default value. `cluster`: specifies that the table is a replicated table. The leader needs to replicate transactions to all full-featured replicas and read-only replicas of the current tenant. Currently, OceanBase Database supports only cluster-level replicated tables. |
+| DUPLICATE_SCOPE | The replicated table attribute. Valid values: - `none`: specifies that the table is a normal table. This is the default value. - `cluster`: specifies that the table is a replicated table. The leader needs to replicate transactions to all full-featured replicas and read-only replicas of the current tenant. Currently, OceanBase Database supports only cluster-level replicated tables. |
| table_column_group_option | The columnstore option for the table. The following options are supported: WITH COLUMN GROUP(all columns, each column): specifies to create a rowstore-columnstore redundant table. WITH COLUMN GROUP(all columns): specifies to create a rowstore table. WITH COLUMN GROUP(each column): specifies to create a columnstore table. |
-| SKIP_INDEX | The Skip Index attribute of the column. Valid values: MIN_MAX: specifies to store the maximum value, minimum value, and number of null values of the indexed column on an index node. It is the most common aggregate data type in Skip Index. It can accelerate the pushdown of filters and `MIN/MAX` functions. SUM: used to accelerate the pushdown of the `SUM` function on data of the numeric type. **Notice**: The Skip Index attribute is not supported for a column of the JSON or spatial data type, or for a generated column. |
+| SKIP_INDEX | The skip index attribute of the column. Valid values: MIN_MAX: the most common skip index type. A skip index of this type stores the maximum value, minimum value, and null count of the indexed column at the index node granularity. This type of skip index can accelerate the pushdown of filters and `MIN/MAX` aggregation. SUM: the skip index type that is used to accelerate the pushdown of `SUM` aggregation for numeric values. **Notice**: You cannot create a skip index for a JSON column, a spatial column, or a generated column. |
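+
+As an illustration of the DOP priority described in the `parallel_clause` row above, here is a minimal sketch (`t_dop` is a hypothetical table): a statement-level hint overrides the table-level DOP.
+
+```sql
+-- Table-level DOP is 4; the hint raises the DOP to 8 for this query only.
+CREATE TABLE t_dop (c1 NUMBER PRIMARY KEY, c2 VARCHAR2(50)) PARALLEL 4;
+SELECT /*+ PARALLEL(8) */ COUNT(*) FROM t_dop;
+```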
## Examples
@@ -379,7 +379,7 @@ table_column_group_option:
CREATE TABLE tbl1_cg (col1 NUMBER PRIMARY KEY, col2 VARCHAR2(50)) WITH COLUMN GROUP(each column);
```
-* Specify the Skip Index attribute for columns while creating a table.
+* Create a table and specify the skip index attribute for columns.
```sql
CREATE TABLE test_skidx(
@@ -392,7 +392,7 @@ table_column_group_option:
## Limitations on global temporary tables in Oracle mode
-* Temporary tables are used in multiple database upgrade scenarios in Oracle mode, with the basic correctness and functionality ensured.
+* Temporary tables in the Oracle mode of OceanBase Database are used in many database upgrade scenarios, with basic data correctness and functionality guaranteed.
* Generally, temporary tables are used for compatibility purposes and involve little new business development. You can use temporary tables in limited business scenarios with low performance requirements (see the sketch below). If normal tables meet your business needs, we recommend that you use normal tables.
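+
+A minimal sketch of a session-level global temporary table in Oracle mode (`tmp_orders` is a hypothetical table; per `ON COMMIT PRESERVE ROWS`, its rows are preserved until the session ends):
+
+```sql
+-- Hypothetical example table; rows are visible only to the current session.
+CREATE GLOBAL TEMPORARY TABLE tmp_orders (
+  id NUMBER PRIMARY KEY,
+  note VARCHAR2(50)
+) ON COMMIT PRESERVE ROWS;
+```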
### Performance and stability
@@ -448,9 +448,11 @@ When the database executes an `UPDATE`, `DELETE`, or `SELECT` statement that con
### Routing for temporary tables
* Transaction temporary tables (`ON COMMIT DELETE ROWS`)
+
Access to a temporary table within a transaction can only be routed to the node that starts the transaction.
* Session temporary tables (`ON COMMIT PRESERVE ROWS`)
+
After a session accesses a temporary table, the OBServer node instructs the ODP to forward subsequent requests only to the current session.
### Drop a temporary table
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/300.dcl-of-oracle-mode/150.alter-system-kill-session.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/300.dcl-of-oracle-mode/150.alter-system-kill-session.md
index ac09606e8d..2d8195c31d 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/300.dcl-of-oracle-mode/150.alter-system-kill-session.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/300.dcl-of-oracle-mode/150.alter-system-kill-session.md
@@ -23,7 +23,7 @@ ALTER SYSTEM KILL SESSION 'session_id' [IMMEDIATE];
| Parameter | Description |
|-----------------|------------------------------------------------------------|
-| session_id | The ID of the session to be terminated. **Note**: You can execute the `SHOW PROCESSLIST;` SQL statement to view `session_id`. |
+| session_id | The client session ID of the current session, which is the unique identifier of a session in a client. **Note**: You can execute the `SHOW PROCESSLIST;` SQL statement to view `session_id`. |
| serial# | This parameter is not implemented in the current version and is reserved only for syntax compatibility. |
| IMMEDIATE | Immediately switches back to the specified session to execute `KILL`. This parameter is optional. This parameter is not implemented in the current version and is reserved only for syntax compatibility. |
@@ -53,3 +53,7 @@ obclient [KILL_USER]> SHOW PROCESSLIST;
obclient [KILL_USER]> ALTER SYSTEM KILL SESSION '3221487726';
Query OK, 0 rows affected
```
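+
+The optional `IMMEDIATE` keyword follows the session ID. A sketch of the syntax, reusing the session ID above:
+
+```sql
+-- Reserved for syntax compatibility in the current version, as noted in the parameter table.
+ALTER SYSTEM KILL SESSION '3221487726' IMMEDIATE;
+```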
+
+## References
+
+For more information about how to query the quantity and IDs of sessions in the current database, see [View tenant sessions](../../../../../1200.database-proxy/1500.view-tenant-sessions.md).
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/300.dcl-of-oracle-mode/1800.kill-of-oracle-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/300.dcl-of-oracle-mode/1800.kill-of-oracle-mode.md
index acdb913e2d..4e33c2b568 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/300.dcl-of-oracle-mode/1800.kill-of-oracle-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/300.dcl-of-oracle-mode/1800.kill-of-oracle-mode.md
@@ -26,10 +26,9 @@ KILL [CONNECTION | QUERY] 'session_id'
| Parameter | Description |
|-----------------|--------------------------------------------------------------------------------------------------------------|
-| KILL | Terminates a session specified by `session_id`.
**Note**: You can execute the `SHOW PROCESSLIST;` SQL statement to view `session_id`. |
-| KILL CONNECTION | It is equivalent to the `KILL` statement with no modifier and can be used to terminate a session specified by `session_id`. |
+| KILL CONNECTION | It is equivalent to the `KILL` statement with no modifier and can be used to terminate a session specified by the client session ID. |
| KILL QUERY | Terminates the statement that is being executed by the session but leaves the session intact. |
-| session_id | The unique ID of the session. |
+| session_id | The client session ID of the current session, which is the unique identifier of a session in a client. You can execute the `SHOW PROCESSLIST` or `SHOW FULL PROCESSLIST` statement to obtain the session ID. |
## Examples
@@ -37,13 +36,13 @@ Query and then terminate a session.
```sql
obclient> SHOW PROCESSLIST;
-+------------+------+-------------------+------+---------+------+--------+------------------+
-| ID | USER | HOST | DB | COMMAND | TIME | STATE | INFO |
-+------------+------+-------------------+------+---------+------+--------+------------------+
++------------+------+----------------------+------+---------+------+--------+------------------+
+| ID | USER | HOST | DB | COMMAND | TIME | STATE | INFO |
++------------+------+----------------------+------+---------+------+--------+------------------+
| 3221849635 | SYS | 10.10.10.10:49142 | SYS | Sleep | 426 | SLEEP | NULL |
| 3221656012 | SYS | 10.10.10.10:57140 | SYS | Sleep | 426 | SLEEP | NULL |
| 3221671483 | SYS | 10.10.10.10:43154 | SYS | Query | 0 | ACTIVE | show processlist |
-+------------+------+-------------------+------+---------+------+--------+------------------+
++------------+------+----------------------+------+---------+------+--------+------------------+
3 rows in set
obclient> KILL 3221849635;
@@ -55,3 +54,7 @@ Query OK, 0 rows affected
obclient> KILL CONNECTION 3221671483;
Query OK, 0 rows affected
```
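+
+A sketch of `KILL QUERY`, which terminates only the statement a session is currently executing while keeping the session alive (reusing a session ID from the process list above):
+
+```sql
+-- Ends the running statement of session 3221656012; the connection stays open.
+KILL QUERY 3221656012;
+```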
+
+## References
+
+For more information about how to query the quantity and IDs of sessions in the current database, see [View tenant sessions](../../../../../1200.database-proxy/1500.view-tenant-sessions.md).
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/300.dcl-of-oracle-mode/3600.show-of-oracle-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/300.dcl-of-oracle-mode/3600.show-of-oracle-mode.md
index 942bc648d7..48b16f1750 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/300.dcl-of-oracle-mode/3600.show-of-oracle-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/300.common-tenant-of-oracle-mode/900.sql-statement-of-oracle-mode/300.dcl-of-oracle-mode/3600.show-of-oracle-mode.md
@@ -28,7 +28,7 @@ SHOW {
| GRANTS
| PRIVILEGES
| RECYCLEBIN
- | PROCESSLIST
+ | [FULL] PROCESSLIST
| TRACE [FORMAT='JSON']
};
```
@@ -50,7 +50,7 @@ SHOW {
| GRANTS | Queries the privileges of the current user. |
| PRIVILEGES | Queries the description of privileges. |
| RECYCLEBIN | Queries the recycle bin. |
-| PROCESSLIST | Queries the process list. |
+| \[FULL\] PROCESSLIST | Queries the list of processes in the current tenant. The following options are supported: `SHOW PROCESSLIST` displays the brief list of processes with the following fields. `ID`: the ID of the process, that is, the client session ID of the current session, which uniquely identifies the session in the client. `USER`: the username used for the database connection. `HOST`: the IP address and port number of the client; if an OceanBase Data Proxy (ODP) is used to connect to the database, the IP address and port number of the ODP are displayed. `DB`: the name of the connected database. `COMMAND`: the type of the command being executed, such as `Query` and `Sleep`. `TIME`: the execution time of the current command, in seconds; if the command is retried, the time is reset and recalculated. `STATE`: the status of the current session, such as `SLEEP` and `ACTIVE`. `INFO`: the content of the command being executed, truncated after 100 characters. `SHOW FULL PROCESSLIST` displays the complete list of processes. It contains the same fields, except that `INFO` is not truncated, plus the following: `TENANT`: the connected tenant. `IP`: the IP address of the server. `PORT`: the SQL port number. **Note**: You can execute the `SHOW PROCESSLIST` statement to query the quantity and IDs of sessions in the current database. For more information, see View tenant sessions. |
| TRACE [FORMAT='JSON'] | Queries the execution status of SQL statements. You can choose to output the results in JSON format. |
## Examples
From e7236a429ad774eb4c8cf2fb7fc50c0e916015e4 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Wed, 17 Apr 2024 16:52:44 +0800
Subject: [PATCH 31/63] v430-beta-1000.performance-tuning-guide-update-1
---
...st-based-query-rewriting_20240411114437.md | 215 ++++++++
.../1000.parallel-dml.md | 71 ++-
.../100.parallel-execution-concept.md | 522 +++++++++---------
.../200.concurrency-control-and-queuing.md | 24 +-
.../300.set-degree-of-parallelism.md | 275 ++++-----
.../400.parallel-parameter-tuning.md | 4 +-
.../700.quickstart-of-parallel-execution.md | 2 +-
.../100.query-rewrite-overview.md | 86 ++-
.../200.rule-based-query-rewriting.md | 34 +-
9 files changed, 798 insertions(+), 435 deletions(-)
create mode 100644 .history/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/300.cost-based-query-rewriting_20240411114437.md
diff --git a/.history/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/300.cost-based-query-rewriting_20240411114437.md b/.history/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/300.cost-based-query-rewriting_20240411114437.md
new file mode 100644
index 0000000000..ed4e10dac3
--- /dev/null
+++ b/.history/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/300.cost-based-query-rewriting_20240411114437.md
@@ -0,0 +1,215 @@
+# Cost-based query rewrite
+
+OceanBase Database supports only one type of cost-based query rewrite: OR-EXPANSION.
+
+Later versions will support more advanced cost-based rewrite rules, such as complex view merging and window function rewrites.
+
+## OR-EXPANSION
+
+OR-EXPANSION rewrites a query into several subqueries and combines their result sets into one result set through the `UNION` operator. This allows each subquery to be optimized separately. However, the rewrite also results in the execution of multiple subqueries, so whether to perform it must be decided based on cost analysis.
+
+Purposes of the OR-EXPANSION rewrite are as follows:
+
+* Allow subqueries to use different indexes to speed up the query.
+
+ In the following example, query Q1 is rewritten to Q2, where `LNNVL(t1.a = 1)`, the predicate in Q2, ensures that the two subqueries do not generate duplicate results. Before the rewrite, Q1 generally accesses the primary table. After the rewrite, if indexes (a) and (b) are created on table `t1`, subqueries of Q2 are allowed to use different indexes for data access.
+
+ ```javascript
+ Q1:
+ obclient> SELECT * FROM t1 WHERE t1.a = 1 OR t1.b = 1;
+ Q2:
+ obclient> SELECT * FROM t1 WHERE t1.a = 1 UNION ALL SELECT * FROM t1 WHERE t1.b = 1
+ AND LNNVL(t1.a = 1);
+ ```
+
+ Here is a complete example:
+
+ ```javascript
+ obclient> CREATE TABLE t1(a INT, b INT, c INT, d INT, e INT, INDEX IDX_a(a), INDEX IDX_b(b));
+ Query OK, 0 rows affected
+
+ /*Without OR-EXPANSION rewrite, primary access path is the only option for the query*/
+ obclient> EXPLAIN SELECT/*+NO_REWRITE()*/ * FROM t1 WHERE t1.a = 1 OR t1.b = 1;
+ +--------------------------------------------------------------+
+ | Query Plan |
+ +--------------------------------------------------------------+
+ | ===================================
+ |ID|OPERATOR |NAME|EST. ROWS|COST|
+ -----------------------------------
+ |0 |TABLE SCAN|t1 |4 |649 |
+ ===================================
+
+ Outputs & filters:
+ -------------------------------------
+ 0 - output([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), filter([t1.a = 1 OR t1.b = 1]),
+ access([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), partitions(p0)
+
+ /*After the rewrite, different index access paths are available for each subquery*/
+ obclient>EXPLAIN SELECT * FROM t1 WHERE t1.a = 1 OR t1.b = 1;
+ +------------------------------------------------------------------------+
+ | Query Plan |
+ +------------------------------------------------------------------------+
+ | =========================================
+ |ID|OPERATOR |NAME |EST. ROWS|COST|
+ -----------------------------------------
+ |0 |UNION ALL | |3 |190 |
+ |1 | TABLE SCAN|t1(idx_a)|2 |94 |
+ |2 | TABLE SCAN|t1(idx_b)|1 |95 |
+ =========================================
+
+ Outputs & filters:
+ -------------------------------------
+ 0 - output([UNION(t1.a, t1.a)], [UNION(t1.b, t1.b)], [UNION(t1.c, t1.c)], [UNION(t1.d, t1.d)], [UNION(t1.e, t1.e)]), filter(nil)
+ 1 - output([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), filter(nil),
+ access([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), partitions(p0)
+ 2 - output([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), filter([lnnvl(t1.a = 1)]),
+ access([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), partitions(p0)
+ ```
+
+* Allow subqueries to use different join algorithms to speed up the query and avoid using a Cartesian join.
+
+ In the following example, query Q1 is rewritten to Q2. For Q1, the nested loop join, which results in a Cartesian product, is the only join option available. After the rewrite, nested loop join, hash join, and merge join are available for each subquery, providing more options for optimization.
+
+ ```javascript
+ Q1:
+ obclient> SELECT * FROM t1, t2 WHERE t1.a = t2.a OR t1.b = t2.b;
+
+ Q2:
+ obclient> SELECT * FROM t1, t2 WHERE t1.a = t2.a UNION ALL
+ SELECT * FROM t1, t2 WHERE t1.b = t2.b AND LNNVL(t1.a = t2.a);
+ ```
+
+ Here is a complete example:
+
+ ```javascript
+ obclient> CREATE TABLE t1(a INT, b INT);
+ Query OK, 0 rows affected
+
+ obclient> CREATE TABLE t2(a INT, b INT);
+ Query OK, 0 rows affected
+
+ /*Without the rewrite, the nested loop join is the only available option*/
+ obclient> EXPLAIN SELECT/*+NO_REWRITE()*/ * FROM t1, t2
+ WHERE t1.a = t2.a OR t1.b = t2.b;
+ +--------------------------------------------------------------------------+
+ | Query Plan |
+ +--------------------------------------------------------------------------+
+ | ===========================================
+ |ID|OPERATOR |NAME|EST. ROWS|COST |
+ -------------------------------------------
+ |0 |NESTED-LOOP JOIN| |3957 |585457|
+ |1 | TABLE SCAN |t1 |1000 |499 |
+ |2 | TABLE SCAN |t2 |4 |583 |
+ ===========================================
+
+ Outputs & filters:
+ -------------------------------------
+ 0 - output([t1.a], [t1.b], [t2.a], [t2.b]), filter(nil),
+ conds(nil), nl_params_([t1.a], [t1.b])
+ 1 - output([t1.a], [t1.b]), filter(nil),
+ access([t1.a], [t1.b]), partitions(p0)
+ 2 - output([t2.a], [t2.b]), filter([? = t2.a OR ? = t2.b]),
+ access([t2.a], [t2.b]), partitions(p0)
+
+ /*After the rewrite, every subquery uses a hash join*/
+ obclient> EXPLAIN SELECT * FROM t1, t2 WHERE t1.a = t2.a OR t1.b = t2.b;
+ +--------------------------------------------------------------------------+
+ | Query Plan |
+ +--------------------------------------------------------------------------+
+ |ID|OPERATOR |NAME|EST. ROWS|COST|
+ -------------------------------------
+ |0 |UNION ALL | |2970 |9105|
+ |1 | HASH JOIN | |1980 |3997|
+ |2 | TABLE SCAN|t1 |1000 |499 |
+ |3 | TABLE SCAN|t2 |1000 |499 |
+ |4 | HASH JOIN | |990 |3659|
+ |5 | TABLE SCAN|t1 |1000 |499 |
+ |6 | TABLE SCAN|t2 |1000 |499 |
+ =====================================
+
+ Outputs & filters:
+ -------------------------------------
+ 0 - output([UNION(t1.a, t1.a)], [UNION(t1.b, t1.b)], [UNION(t2.a, t2.a)], [UNION(t2.b, t2.b)]), filter(nil)
+ 1 - output([t1.a], [t1.b], [t2.a], [t2.b]), filter(nil),
+ equal_conds([t1.a = t2.a]), other_conds(nil)
+ 2 - output([t1.a], [t1.b]), filter(nil),
+ access([t1.a], [t1.b]), partitions(p0)
+ 3 - output([t2.a], [t2.b]), filter(nil),
+ access([t2.a], [t2.b]), partitions(p0)
+ 4 - output([t1.a], [t1.b], [t2.a], [t2.b]), filter(nil),
+ equal_conds([t1.b = t2.b]), other_conds([lnnvl(t1.a = t2.a)])
+ 5 - output([t1.a], [t1.b]), filter(nil),
+ access([t1.a], [t1.b]), partitions(p0)
+ 6 - output([t2.a], [t2.b]), filter(nil),
+ access([t2.a], [t2.b]), partitions(p0)
+ ```
+
+* Allow each subquery to separately perform sorting elimination, which accelerates the retrieval of the TOP K results.
+
+ In the following example, query Q1 is rewritten to Q2. For Q1, the only way of execution is to find the rows that fit the condition, sort them, and then retrieve the TOP 10 results. Q2 contains two subqueries. If an index on columns (a, b) is available, each subquery of Q2 can eliminate sorting by using the index and retrieve its own TOP 10 results. Finally, the 20 rows retrieved by the two subqueries are sorted to produce the final TOP 10 rows.
+
+ ```javascript
+ Q1:
+ obclient> SELECT * FROM t1 WHERE t1.a = 1 OR t1.a = 2 ORDER BY b LIMIT 10;
+
+ Q2:
+ obclient> SELECT * FROM
+ (SELECT * FROM t1 WHERE t1.a = 1 ORDER BY b LIMIT 10 UNION ALL
+ SELECT * FROM t1 WHERE t1.a = 2 ORDER BY b LIMIT 10) AS TEMP
+ ORDER BY temp.b LIMIT 10;
+ ```
+
+ Here is a complete example:
+
+ ```javascript
+ obclient> CREATE TABLE t1(a INT, b INT, INDEX IDX_a(a, b));
+ Query OK, 0 rows affected
+
+ /*Before the rewrite, data is sorted to retrieve the TOP K results*/
+ obclient> EXPLAIN SELECT/*+NO_REWRITE()*/ * FROM t1 WHERE t1.a = 1 OR t1.a = 2 ORDER BY b LIMIT 10;
+ +-------------------------------------------------------------------------+
+ | Query Plan |
+ +-------------------------------------------------------------------------+
+ | ==========================================
+ |ID|OPERATOR |NAME |EST. ROWS|COST|
+ ------------------------------------------
+ |0 |LIMIT | |4 |77 |
+ |1 | TOP-N SORT | |4 |76 |
+ |2 | TABLE SCAN|t1(idx_a)|4 |73 |
+ ==========================================
+
+ Outputs & filters:
+ -------------------------------------
+ 0 - output([t1.a], [t1.b]), filter(nil), limit(10), offset(nil)
+ 1 - output([t1.a], [t1.b]), filter(nil), sort_keys([t1.b, ASC]), topn(10)
+ 2 - output([t1.a], [t1.b]), filter(nil),
+ access([t1.a], [t1.b]), partitions(p0)
+
+ /*After the rewrite, the subqueries remove the SORT operator and eventually retrieve the TOP K results*/
+ obclient>EXPLAIN SELECT * FROM t1 WHERE t1.a = 1 OR t1.a = 2
+ ORDER BY b LIMIT 10;
+ +-------------------------------------------------------------------------+
+ | Query Plan |
+ +-------------------------------------------------------------------------+
+ | ===========================================
+ |ID|OPERATOR |NAME |EST. ROWS|COST|
+ -------------------------------------------
+ |0 |LIMIT | |3 |76 |
+ |1 | TOP-N SORT | |3 |76 |
+ |2 | UNION ALL | |3 |74 |
+ |3 | TABLE SCAN|t1(idx_a)|2 |37 |
+ |4 | TABLE SCAN|t1(idx_a)|1 |37 |
+ ===========================================
+
+ Outputs & filters:
+ -------------------------------------
+ 0 - output([UNION(t1.a, t1.a)], [UNION(t1.b, t1.b)]), filter(nil), limit(10), offset(nil)
+ 1 - output([UNION(t1.a, t1.a)], [UNION(t1.b, t1.b)]), filter(nil), sort_keys([UNION(t1.b, t1.b), ASC]), topn(10)
+ 2 - output([UNION(t1.a, t1.a)], [UNION(t1.b, t1.b)]), filter(nil)
+ 3 - output([t1.a], [t1.b]), filter(nil),
+ access([t1.a], [t1.b]), partitions(p0),
+ limit(10), offset(nil)
+ 4 - output([t1.a], [t1.b]), filter([lnnvl(t1.a = 1)]),
+ access([t1.a], [t1.b]), partitions(p0),
+ limit(10), offset(nil)
+ ```
diff --git a/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/300.distributed-execution-plan/1000.parallel-dml.md b/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/300.distributed-execution-plan/1000.parallel-dml.md
index 4c26105499..b02cc910ca 100644
--- a/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/300.distributed-execution-plan/1000.parallel-dml.md
+++ b/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/300.distributed-execution-plan/1000.parallel-dml.md
@@ -217,6 +217,75 @@ ALTER SESSION DISABLE PARALLEL DML;
When parallel DML is disabled, parallel DML is not executed even if the `PARALLEL` hint is used in SQL statements.
When parallel DML is enabled in a session, parallel execution is enabled for all DML statements in the session. If you use the `ENABLE_PARALLEL_DML` hint to enable parallel DML for an SQL statement, parallel execution is enabled only for the specified statement. However, if no table with parallel attributes exists, or the usage limitations on parallel operations are violated, parallel DML does not take effect even if it is enabled.
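+
+A minimal sketch of enabling parallel DML for a single statement through hints (`tbl_dest` and `tbl_src` are hypothetical tables; `PARALLEL(8)` sets the DOP for this statement only):
+
+```sql
+-- Hypothetical tables; both hints apply to this statement only.
+INSERT /*+ ENABLE_PARALLEL_DML PARALLEL(8) */ INTO tbl_dest SELECT * FROM tbl_src;
+```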
+### Support for partition-wise parallel DML
+
+Partition-wise parallel DML processes each source partition and its matching destination partition within the same worker thread, without redistributing data across threads. Here is an example of this feature.
+
+1. Create a test table named `branch_sp_tbl_src`.
+
+ ```sql
+ CREATE TABLE branch_sp_tbl_src(id INT PRIMARY KEY, v INT) PARTITION BY KEY(id) PARTITIONS 4;
+ ```
+
+2. Create a test table named `branch_sp_tbl_dest`.
+
+ ```sql
+ CREATE TABLE branch_sp_tbl_dest LIKE branch_sp_tbl_src;
+ ```
+
+3. View the execution plan.
+
+ Execute the following SQL statement to show the execution plan of an INSERT operation.
+
+ ```sql
+ obclient [test]> EXPLAIN BASIC INSERT /*+enable_parallel_dml parallel(100) query_timeout(1000000000)*/ INTO branch_sp_tbl_dest SELECT id, v FROM branch_sp_tbl_src ON DUPLICATE KEY UPDATE v = v + 1;
+ ```
+
+ The return result is as follows:
+
+ ```shell
+ +-------------------------------------------------------------------------------------------------------------------------------------+
+ | Query Plan |
+ +-------------------------------------------------------------------------------------------------------------------------------------+
+ | ================================================ |
+ | |ID|OPERATOR |NAME | |
+ | ------------------------------------------------ |
+ | |0 |PX COORDINATOR | | |
+ | |1 |└─EXCHANGE OUT DISTR |:EX10000 | |
+ | |2 | └─PX PARTITION ITERATOR| | |
+ | |3 | └─INSERT_UP | | |
+ | |4 | └─SUBPLAN SCAN |ANONYMOUS_VIEW1 | |
+ | |5 | └─TABLE FULL SCAN|branch_sp_tbl_src| |
+ | ================================================ |
+ | Outputs & filters: |
+ | ------------------------------------- |
+ | 0 - output(nil), filter(nil), rowset=16 |
+ | 1 - output(nil), filter(nil), rowset=16 |
+ | dop=100 |
+ | 2 - output(nil), filter(nil), rowset=16 |
+ | partition wise, force partition granule |
+ | 3 - output(nil), filter(nil) |
+ | columns([{branch_sp_tbl_dest: ({branch_sp_tbl_dest: (branch_sp_tbl_dest.id, branch_sp_tbl_dest.v)})}]), partitions(p[0-3]), |
+ | column_values([column_conv(INT,PS:(11,0),NOT NULL,ANONYMOUS_VIEW1.id)], [column_conv(INT,PS:(11,0),NULL,ANONYMOUS_VIEW1.v)]), |
+ | update([branch_sp_tbl_dest.v=column_conv(INT,PS:(11,0),NULL,cast(branch_sp_tbl_dest.v + 1, INT(-1, 0)))]) |
+ | 4 - output([ANONYMOUS_VIEW1.id], [ANONYMOUS_VIEW1.v]), filter(nil), rowset=16 |
+ | access([ANONYMOUS_VIEW1.id], [ANONYMOUS_VIEW1.v]) |
+ | 5 - output([branch_sp_tbl_src.id], [branch_sp_tbl_src.v]), filter(nil), rowset=16 |
+ | access([branch_sp_tbl_src.id], [branch_sp_tbl_src.v]), partitions(p[0-3]) |
+ | is_index_back=false, is_global_index=false, |
+ | range_key([branch_sp_tbl_src.id]), range(MIN ; MAX)always true |
+ +-------------------------------------------------------------------------------------------------------------------------------------+
+ 27 rows in set
+ ```
+
+ The operators in the query plan are described as follows:
+
+ * Operator 0: a parallel execution coordinator for managing parallel execution processes.
+ * Operator 1: distributes data among different execution nodes.
+ * Operator 2: iterates over the partitions involved in the query. `partition wise` indicates that each worker thread processes a source partition and the matching destination partition together, without redistributing data across threads.
+ * Operator 3: an INSERT or UPDATE operation. If the primary key to be inserted does not exist in the destination table, the database performs an INSERT operation. Otherwise, the database performs an UPDATE operation.
+ * Operators 4 and 5: perform a full scan on the `branch_sp_tbl_src` table. This table is the data source from which data is selected for the INSERT operation.
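+
+ To execute the import itself rather than only display its plan, run the same statement without `EXPLAIN` (a sketch that reuses the tables created above):
+
+ ```sql
+ -- Performs the actual partition-wise parallel upsert.
+ INSERT /*+ enable_parallel_dml parallel(100) query_timeout(1000000000) */ INTO branch_sp_tbl_dest SELECT id, v FROM branch_sp_tbl_src ON DUPLICATE KEY UPDATE v = v + 1;
+ ```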
+
## Considerations
OceanBase Database supports parallel DML for the following SQL statements:
@@ -231,7 +300,7 @@ If a table has the following types of indexes, you need to enable parallel DML:
* Single-partition global indexes
* Multi-partition global indexes
-OceanBase Database does not support parallel DML for the following SQL statements:
+For the following SQL statements, OceanBase Database supports parallel DML only for partition-wise operations but not for operations of other types:
* `REPLACE`
* `INSERT INTO ON DUPLICATE KEY UPDATE`
diff --git a/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/100.parallel-execution-concept.md b/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/100.parallel-execution-concept.md
index 5523158068..9d0b3293d3 100644
--- a/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/100.parallel-execution-concept.md
+++ b/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/100.parallel-execution-concept.md
@@ -1,260 +1,262 @@
-# Overview
-
-Parallel execution is a strategy to optimize SQL query tasks. It splits a query task into multiple subtasks and allows them to run on multiple processors in parallel to improve the execution efficiency of the whole query task.
-
-In current computer systems, multi-core processors, multithreading, and high-speed network connections are widely used, which makes parallel execution an efficient query technology. This technology significantly reduces the response time of compute-intensive large queries and comes in handy in fields such as batch data import/export and quick index table creation. It is widely applied in business scenarios such as offline data warehouses, real-time reports, and online big data analytics.
-
-Parallel execution significantly improves the query performance in the following scenarios:
-
-* Scans or joins of large tables, and sorting or aggregation of large amounts of data
-* DDL operations on large tables, such as changing the primary key or column type and creating indexes
-* Table creation based on existing data, such as creating a table by using the `CREATE TABLE AS SELECT` statement
-* Batch data insertion, deletion, and updates
-
-## Application scenarios
-
-In addition to analytical systems such as offline data warehousing, real-time report, and online big data analytics systems, parallel execution can also be used to accelerate DDL operations and batch data processing in the online transaction processing (OLTP) field.
-
-Parallel execution makes full use of multiple CPU cores and I/O resources to reduce the SQL execution time. Parallel execution is superior to serial execution in the following circumstances:
-
-* Large amount of data to access
-* Low SQL query concurrency
-* Low requirements on the latency
-* Sufficient hardware resources
-
-Parallel execution uses multiple processors to concurrently handle the same task. This can improve the performance in the following system circumstances:
-
-* Symmetric multiprocessing (SMP) system and cluster
-* Sufficient I/O bandwidth
-* More memory resources than needed, which can be used for memory-intensive operations such as sorting and hash table creation
-* Appropriate system load or system load with peak-valley characteristics, such as a system load that remains below 30%
-
-If your system does not have the preceding characteristics, parallel execution cannot significantly improve the performance. It can result in poor performance in a system with a high load, small memory size, or insufficient I/O bandwidth.
-
-Parallel execution does not have special requirements on the hardware. However, the number of CPU cores, memory size, storage I/O performance, and network bandwidth can affect the parallel execution performance. If any of the factors becomes a bottleneck, the overall performance can be affected.
-
-## Technical mechanism
-
-Parallel execution splits an SQL query task into multiple subtasks and schedules these subtasks on multiple processors to improve the execution efficiency.
-
-When a parallel execution plan is generated for an SQL query, the query is executed in the following steps:
-
-1. The main thread, which is responsible for receiving and parsing SQL queries, allocates the worker threads required for parallel execution in advance. These worker threads may involve clusters on multiple servers.
-2. The main thread enables the parallel execution (PX) coordinator.
-3. The PX coordinator parses the execution plan into multiple steps and schedules the steps from bottom up. Each operation is designed to be suitable for parallel execution.
-4. After all operations are executed in parallel, the PX coordinator receives the calculation results and transfers the results to the upper-layer operator (such as the Aggregate operator) for serial execution of operations unsuitable for parallel execution, such as the final SUM operation.
-
-### Granules of parallel execution
-
-The basic working unit for parallel data scan is called a granule.
-
-OceanBase Database divides a table scan task into multiple granules. Each granule describes a part of the table scan task. A granule is a partition scan task for scanning a specific partition and cannot span across table partitions. Therefore, a partition scan task must be contained within a single partition.
-Two types of granules are supported:
-
-* Partition granule
-
- A partition granule describes a whole partition. Therefore, the number of partition granules of a table scan task is equal to the number of partitions involved in the task. Here, a partition can be one in a primary table or an index table. Partition granules are commonly used in partition-wise joins to ensure that the corresponding partitions of the two tables are processed based on partition granules.
-
-* Block granule
-
- A block granule describes a segment of continuous data in a partition. Generally, block granules are used to divide data in a data scan scenario. Each partition is divided into multiple blocks. The blocks are concatenated based on specific rules to form a task queue, which will be consumed by PX worker threads.
-
- The following figure shows the technical mechanism of block granules.
-
-
-
-Given a degree of parallelism (DOP), the optimizer automatically chooses to divide data into partition granules or block granules to balance scan tasks. If the optimizer chooses block granules, the parallel execution framework dynamically divides data into blocks at runtime and ensures that each block is of an appropriate size, neither too large nor too small. An excessively large block size can lead to data skew, where some threads are not fully utilized. An excessively small block size can lead to frequent task switching, which increases the overhead.
-
-After partition granules are divided, each granule corresponds to a partition scan task. The TABLE SCAN operator handles the partition scan tasks one by one.
-
-### Parallel execution model
-
-#### Producer-consumer pipeline model
-
-The producer-consumer model is used for pipelined execution.
-
-The PX coordinator parses the execution plan into multiple steps. Each step is called a data flow operation (DFO).
-
-Generally, the PX coordinator starts two DFOs at the same time. The two DFOs are connected in producer-consumer mode for parallel execution. This is called inter-DFO parallel execution. Each DFO is executed by a group of threads. This is called intra-DFO parallel execution. The number of threads used for a DFO is called DOP.
-
-A consumer DFO in a phase will become a producer DFO in the next phase. Under the coordination by the PX coordinator, the consumer DFO and producer DFO are started at the same time.
-
-
-* The data generated by DFO A is transmitted in real time to DFO B for calculation.
-* After calculation, DFO B stores the data in the current thread and waits for the upper-layer DFO C to start.
-* When DFO B is notified that DFO C has been started, it switches to the producer role and starts to transmit data to DFO C. After DFO C receives the data, it starts calculation.
-
-
-
-In the following example, the execution plan of the `SELECT` statement first performs a full-table scan on the `game` table, aggregates the scores by the `team` column, and finally returns the total score of each team.
-
-```sql
-CREATE TABLE game (round INT PRIMARY KEY, team VARCHAR(10), score INT)
- PARTITION BY HASH(round) PARTITIONS 3;
-INSERT INTO game VALUES (1, "CN", 4), (2, "CN", 5), (3, "JP", 3);
-INSERT INTO game VALUES (4, "CN", 4), (5, "US", 4), (6, "JP", 4);
-SELECT /*+ PARALLEL(3) */ team, SUM(score) TOTAL FROM game GROUP BY team;
-
-obclient> EXPLAIN SELECT /*+ PARALLEL(3) */ team, SUM(score) TOTAL FROM game GROUP BY team;
-+---------------------------------------------------------------------------------------------------------+
-| Query Plan |
-+---------------------------------------------------------------------------------------------------------+
-| ================================================================= |
-| |ID|OPERATOR |NAME |EST.ROWS|EST.TIME(us)| |
-| ----------------------------------------------------------------- |
-| |0 |PX COORDINATOR | |1 |4 | |
-| |1 | EXCHANGE OUT DISTR |:EX10001|1 |4 | |
-| |2 | HASH GROUP BY | |1 |4 | |
-| |3 | EXCHANGE IN DISTR | |3 |3 | |
-| |4 | EXCHANGE OUT DISTR (HASH)|:EX10000|3 |3 | |
-| |5 | HASH GROUP BY | |3 |2 | |
-| |6 | PX BLOCK ITERATOR | |1 |2 | |
-| |7 | TABLE SCAN |game |1 |2 | |
-| ================================================================= |
-| Outputs & filters: |
-| ------------------------------------- |
-| 0 - output([INTERNAL_FUNCTION(game.team, T_FUN_SUM(T_FUN_SUM(game.score)))]), filter(nil), rowset=256 |
-| 1 - output([INTERNAL_FUNCTION(game.team, T_FUN_SUM(T_FUN_SUM(game.score)))]), filter(nil), rowset=256 |
-| dop=3 |
-| 2 - output([game.team], [T_FUN_SUM(T_FUN_SUM(game.score))]), filter(nil), rowset=256 |
-| group([game.team]), agg_func([T_FUN_SUM(T_FUN_SUM(game.score))]) |
-| 3 - output([game.team], [T_FUN_SUM(game.score)]), filter(nil), rowset=256 |
-| 4 - output([game.team], [T_FUN_SUM(game.score)]), filter(nil), rowset=256 |
-| (#keys=1, [game.team]), dop=3 |
-| 5 - output([game.team], [T_FUN_SUM(game.score)]), filter(nil), rowset=256 |
-| group([game.team]), agg_func([T_FUN_SUM(game.score)]) |
-| 6 - output([game.team], [game.score]), filter(nil), rowset=256 |
-| 7 - output([game.team], [game.score]), filter(nil), rowset=256 |
-| access([game.team], [game.score]), partitions(p[0-2]) |
-| is_index_back=false, is_global_index=false, |
-| range_key([game.round]), range(MIN ; MAX)always true |
-+---------------------------------------------------------------------------------------------------------+
-29 rows in set
-```
-
-The following figure shows the execution process of the previous example.
-
-
-
-The figure shows that six threads are used for the query. The execution process is described as follows:
-
-* Step 1: The first three threads are responsible for scanning the `game` table. They separately aggregate the `game.team` data.
-* Step 2: The other three threads are responsible for the final aggregation of the data aggregated by the first three threads.
-* Step 3: The PX coordinator returns the final aggregation results to the client.
-
-The data sent from Step 1 to Step 2 is hashed by using the `game.team` field to determine the thread to which the aggregated data is to be sent.
-
-#### Data distribution methods between the producer and the consumer
-
-A data distribution method is the method used by a group of PX worker threads (producers) to send data to another group of worker threads (consumers). The optimizer selects the optimal data redistribution method based on a series of optimization strategies to achieve the optimal performance.
-
-General data distribution methods in parallel execution are described as follows:
-
-* Hash distribution
-
- In the hash distribution method, the producer hashes the data based on the distribution key and obtains the modulus to determine the consumer worker thread to which the data is to be sent. Hash distribution can evenly distribute data to multiple consumer worker threads.
-
-* Pkey distribution
-
- In pkey distribution, the producer determines through calculation the partition in the target table to which a data row belongs and sends the row data to a consumer thread responsible for this partition. The pkey distribution method is commonly used in partial partition-wise joins. The data at the consumer end can be directly used for partition-wise joins with that at the producer end without redistribution, thereby reducing network communication and improving the performance.
-
-* Pkey hash distribution
-
- In pkey hash distribution, the producer first calculates the partition in the target table to which a data row belongs and hashes the data based on the distribution key to determine the consumer thread to which the data is to be sent.
-
- Pkey hash distribution is commonly used in parallel DML scenarios where a partition can be concurrently updated by multiple threads. Pkey hash distribution can ensure that rows with identical values are processed by the same thread and that rows with different values are evenly distributed to multiple threads.
-
-* Broadcast distribution
-
- In broadcast distribution, the producer sends all data rows to each consumer thread so that each consumer thread has the full data of the producer. In broadcast distribution, data is copied from small tables to all nodes involved in a join. Then, joins are executed locally to reduce network communication.
-
-* Broadcast to host distribution
-
- In broadcast to host distribution, the producer sends all rows to each consumer node so that each consumer node has the full data of the producer. Then, the consumer threads on each node process the data in a collaborative manner.
-
- Broadcast to host distribution is commonly used in `NESTED LOOP JOIN` and `SHARED HASH JOIN` scenarios. In a `NESTED LOOP JOIN` scenario, each consumer thread obtains a part of the shared data as the driver data for the join operation on the target table. In a `SHARED HASH JOIN` scenario, the consumer threads jointly build a hash table based on the shared data. This avoids the situation where each consumer thread independently builds a hash table identical to that of others, thereby reducing the overhead.
-
-* Range distribution
-
- In range distribution, the producer divides data into ranges and different consumer threads process data of different ranges. Range distribution is commonly used in sorting scenarios. Each consumer thread only needs to sort the data allocated to it. This ensures that the data is globally ordered.
-
-* Random distribution
-
- In random distribution, the producer randomly scatters the data and sends the data to the consumer threads so that each consumer thread processes an almost equal amount of data, thereby achieving load balancing. Random distribution is commonly used in multithreaded parallel `UNION ALL` scenarios, where data is scattered only for load balancing and the scattered data is not associated.
-
-* Hybrid hash distribution
-
- Hybrid hash distribution is an adaptive distribution method used in join operations. Based on collected statistics, OceanBase Database provides a group of parameters to define regular values and frequent values. In hybrid hash distribution, hash distribution is used for regular values on both sides of a join, broadcast distribution is used for frequent values on the left side, and random distribution is used for frequent values on the right side.
-
-
-
-#### Data transmission mechanism between the producer and the consumer
-
-The two DFOs concurrently started by the PX coordinator are connected in producer-consumer mode for parallel execution. A transmission network is required for transmitting data between the producer and the consumer.
-
-For example, if the producer DFO uses two threads (DOP = 2) for data scan and the consumer DFO uses three threads (DOP = 3) for data aggregation, each producer thread creates three virtual links to the consumer threads. Totally six virtual links are created. The following figure shows the transmission mechanism.
-
-![DTL](https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/observer/V4.2.0/px-dtl.png)
-
-The created virtual transmission network is called the data transfer layer (DTL). In the parallel execution framework of OceanBase Database, all control messages and data rows are transmitted over the DTL. Each worker thread can establish thousands of virtual links, providing high scalability. The DTL also provides features such as data buffering, batch data sending, and automatic throttling.
-
-If the two ends of the DTL are on the same node, the DTL transfers messages through memory copy. If the two ends of the DTL are on different nodes, the DTL transfers messages through network communication.
-
-## Worker threads
-
-Two types of threads are used in parallel queries: one main thread and multiple PX worker threads. The main thread uses the same thread pool as that of a normal transaction processing (TP) query. PX worker threads come from a dedicated thread pool.
-
-OceanBase Database uses a dedicated thread pool model to allocate PX worker threads. Each tenant has a dedicated PX thread pool on each of its nodes. All PX worker threads are allocated from this thread pool.
-
-Before the PX coordinator schedules each DFO, it requests threads from the thread pool. After a DFO is executed, the threads for the DFO are immediately released.
-
-The initial size of the thread pool is 0. It can be dynamically scaled out without an upper limit. To avoid excessive idle threads, the thread pool introduces the automatic reclamation mechanism. For any thread:
-
-* If the thread is left idle for more than 10 minutes and the number of remaining threads in the thread pool exceeds 8, the thread will be reclaimed and destroyed.
-* If the thread is left idle for more than 60 minutes, it will be destroyed unconditionally.
-
-Theoretically, the thread pool has no upper limit in size. However, the following mechanisms actually constitute an upper limit:
-
-* Threads must be requested from the Admission module before parallel execution. Parallel execution can start only after threads are successfully requested. This mechanism can limit the number of parallel queries.
-* At most N threads can be requested from the resource pool at a time, where N = Value of `MIN_CPU` of the unit config for the resource units of the tenant × Value of `px_workers_per_cpu_quota`. At most N threads are allocated even if more than N threads are requested. `px_workers_per_cpu_quota` is a tenant-level parameter and is `10` by default. For example, if the DOP of a DFO is 100, the DFO needs to request 30 threads from node A and 70 threads from node B. Assuming that the value of `MIN_CPU` of the unit config is `4` and the value of `px_workers_per_cpu_quota` is `10`, N = 4 × 10 = 40. The DFO can actually request 30 threads from node A and 40 threads from node B. Its actual DOP is 70.
-
-## Performance optimization through load balancing
-
-To achieve the optimal performance, allocate the same number of tasks to each worker thread as far as possible.
-
-If data is divided based on block granules, the tasks are dynamically allocated to worker threads. This can minimize the imbalance in workloads. In other words, the workload of each worker thread does not obviously exceed those of others.
-
-If data is divided based on partition granules, you can optimize the performance by ensuring that the number of tasks is an integral multiple of the number of worker threads. This is very useful for partition-wise joins and parallel DML.
-
-Assume that a table has 16 partitions and that the amount of data in each partition is almost the same. You can use 16 worker threads (DOP = 16) to finish the job with 1/16 of the time otherwise required, 5 worker threads to finish the job with 1/5 of the time otherwise required, or 2 threads to finish the job with half the time otherwise required. However, if you use 15 worker threads to process the data of 16 partitions, the first thread will start to process the data of the 16th partition after it finishes processing the data of the first partition. Other threads will become idle after they finish processing the data of their respective allocated partition. If the amount of data in each partition is close, this configuration will result in poor performance. If the amount of data in each partition varies, the actual performance depends on the actual situation.
-
-Similarly, assume that you use six threads to process the data of 16 partitions and that each partition has a close amount of data.
-
-Each thread will start to process the data of a second partition after it finishes processing the data of the first partition. However, only four threads will process the data of a third partition while the other two threads will become idle.
-
-Given N partitions and P worker threads, you cannot simply calculate the time required for parallel execution by dividing N by P. You need to consider the situation where some threads may need to wait for other threads to complete data processing for the last partition. You can specify an appropriate DOP to minimize the imbalance in workloads and optimize the performance.
-
-## Inapplicable scenarios
-
-Parallel execution is inapplicable in the following scenarios:
-
-* Typical SQL queries in the system are executed within milliseconds.
-
- A parallel query has a millisecond-level scheduling overhead. For a short query, the benefit of parallel execution may be neutralized by the scheduling overhead.
-
-* The system load is high.
-
- Parallel execution is designed to make full use of idle system resources. For a system with a high load, parallel execution may fail to bring extra benefits but compromise the overall system performance.
-
-In serial execution, a single thread is used to execute database operations. Serial execution is preferred to parallel execution in the following circumstances:
-
-* A small amount of data to access
-* High concurrency
-* A query execution time less than 100 ms
-
-Parallel execution is partially inapplicable in the following circumstances:
-
-* The top-layer DFO does not require parallel execution. It interacts with the client and executes top-layer operations that do not require parallel execution, such as `LIMIT` and `PX COORDINATOR` operations.
-* A DFO that contains the user-defined function (UDF) `TABLE` can only be executed in serial. Other DFOs can still be executed in parallel.
-* Parallel execution is inapplicable to general `SELECT` and DML statements in an OLTP system.
+# Introduction to parallel execution
+
+Parallel execution is a strategy to optimize SQL query tasks. It splits a query task into multiple subtasks and allows them to run on multiple processors in parallel to improve the execution efficiency of the whole query task.
+
+In current computer systems, multi-core processors, multithreading, and high-speed network connections are widely used, which makes parallel execution an efficient query technology. This technology significantly reduces the response time of compute-intensive large queries, and comes in handy in fields such as batch data import/export and quick index table creation. It is widely applied in business scenarios such as offline data warehouses, real-time reports, and online big data analytics.
+
+Parallel execution significantly improves the query performance in the following scenarios:
+
+* Scans or joins of large tables, and sorting or aggregation of large amounts of data
+* DDL operations on large tables, such as changing the primary key or column type and creating indexes
+* Table creation based on existing data, such as creating a table by using the `CREATE TABLE AS SELECT` statement
+* Batch data insertion, deletion, and updates (see the example after this list)
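+
+For instance, the following is a minimal sketch of enabling these capabilities with hints, assuming a hypothetical large table named `orders` with `status` and `gmt_create` columns:
+
+```sql
+-- Parallel scan and aggregation of a large table (DOP = 8).
+SELECT /*+ PARALLEL(8) */ status, COUNT(*) FROM orders GROUP BY status;
+
+-- Parallel batch update (PDML). The ENABLE_PARALLEL_DML hint is needed
+-- in addition to PARALLEL for DML statements.
+UPDATE /*+ PARALLEL(8) ENABLE_PARALLEL_DML */ orders
+SET status = 'archived' WHERE gmt_create < '2023-01-01';
+```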
+
+## Application scenarios
+
+In addition to analytical systems such as offline data warehousing, real-time reporting, and online big data analytics systems, parallel execution can also be used to accelerate DDL operations and batch data processing in the online transaction processing (OLTP) field.
+
+Parallel execution makes full use of multiple CPU cores and I/O resources to reduce the SQL execution time. Parallel execution is superior to serial execution in the following circumstances:
+
+* A large amount of data to access
+* Low SQL query concurrency
+* Relaxed latency requirements
+* Sufficient hardware resources
+
+Parallel execution uses multiple processors to concurrently handle the same task. This can improve performance in systems with the following characteristics:
+
+* Symmetric multiprocessing (SMP) systems and clusters
+* Sufficient I/O bandwidth
+* Spare memory resources, which can be used for memory-intensive operations such as sorting and hash table creation
+* A moderate system load, or a load with peak-valley characteristics, for example, a load that remains below 30%
+
+If your system does not have the preceding characteristics, parallel execution cannot significantly improve the performance. It can result in poor performance in a system with a high load, small memory size, or insufficient I/O bandwidth.
+
+Parallel execution does not have special requirements on the hardware. However, the number of CPU cores, memory size, storage I/O performance, and network bandwidth can affect the parallel execution performance. If any of the factors becomes a bottleneck, the overall performance can be affected.
+
+## Technical mechanism
+
+Parallel execution splits an SQL query task into multiple subtasks and schedules these subtasks on multiple processors to improve the execution efficiency.
+
+When a parallel execution plan is generated for an SQL query, the query is executed in the following steps:
+
+1. The main thread, which is responsible for receiving and parsing SQL queries, allocates the worker threads required for parallel execution in advance. These worker threads may be distributed across multiple servers in the cluster.
+2. The main thread enables the parallel execution (PX) coordinator.
+3. The PX coordinator parses the execution plan into multiple steps and schedules the steps from the bottom up. Each operation is designed to be eligible for parallel execution.
+4. After all operations are executed in parallel, the PX coordinator receives the calculation results and transfers the results to the upper-layer operator (such as the Aggregate operator) for serial execution of operations ineligible for parallel execution, such as the final SUM operation.
+
+### Granules of parallel execution
+
+The basic working unit for parallel data scan is called a granule.
+
+
+
+OceanBase Database divides a table scan task into multiple granules, each of which describes a part of the scan task. A granule never spans partitions: the data covered by one granule always belongs to a single partition, so each granule corresponds to a scan task within one partition.
+
+Two types of granules are supported:
+
+* Partition granule
+
+ A partition granule describes a whole partition. Therefore, the number of partition granules of a scan task is equal to the number of partitions involved in the scan task. Here, a partition can be one in a primary table or an index table. Partition granules are commonly used in partition-wise joins to ensure that the corresponding partitions of the two tables are processed based on partition granules.
+
+* Block granule
+
+ A block granule describes a segment of continuous data in a partition. Generally, block granules are used to divide data in a data scan scenario. Each partition is divided into multiple blocks. The blocks are concatenated based on specific rules to form a task queue, which will be consumed by PX worker threads.
+
+ The following figure shows the technical mechanism of block granules.
+
+ ![PXBlockGranule](https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/observer/V4.2.0/px-block-granule.png)
+
+Given a degree of parallelism (DOP), the optimizer automatically chooses between partition granules and block granules to balance the scan tasks. If the optimizer chooses block granules, the parallel execution framework dynamically divides the data into blocks at runtime and ensures that each block is of an appropriate size, neither too large nor too small. An excessively large block can lead to data skew, where some threads are not fully utilized. An excessively small block can lead to frequent task switching, which increases the scheduling overhead.
+
+After the data is divided into granules, each granule corresponds to a scan task, and the TABLE SCAN operator processes these tasks one by one.
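+
+As a hedged illustration (the table and column names are hypothetical), two tables partitioned in the same way can be joined partition-wise, which is a typical case for partition granules, whereas a plain parallel scan of a single table is usually divided into block granules, as shown by the `PX BLOCK ITERATOR` operator in the example later in this topic:
+
+```sql
+-- Two tables hash-partitioned identically on the join key.
+CREATE TABLE t_order  (id INT PRIMARY KEY, amt INT) PARTITION BY HASH(id) PARTITIONS 8;
+CREATE TABLE t_refund (id INT PRIMARY KEY, amt INT) PARTITION BY HASH(id) PARTITIONS 8;
+
+-- Joining on the partitioning key allows a partition-wise join:
+-- each of the 8 partition pairs can be processed as one partition granule.
+EXPLAIN SELECT /*+ PARALLEL(8) */ o.id, o.amt - r.amt
+FROM t_order o JOIN t_refund r ON o.id = r.id;
+```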
+
+### Parallel execution model
+
+#### Producer-consumer pipeline model
+
+The producer-consumer model is used for pipelined execution.
+
+The PX coordinator parses the execution plan into multiple steps. Each step is called a data flow operation (DFO).
+
+Generally, the PX coordinator starts two DFOs at the same time. The two DFOs are connected in producer-consumer mode for parallel execution. This is called inter-DFO parallel execution. Each DFO is executed by a group of threads. This is called intra-DFO parallel execution. The number of threads used for a DFO is called DOP.
+
+A consumer DFO in one phase becomes the producer DFO in the next phase. Under the coordination of the PX coordinator, the consumer DFO and the producer DFO are started at the same time. The following figure shows the process in the producer-consumer model.
+
+* The data generated by DFO A is transmitted in real time to DFO B for calculation.
+* After calculation, DFO B stores the data in the current thread and waits for the upper-layer DFO C to start.
+* When DFO B is notified that DFO C has been started, it switches to the producer role and starts to transmit data to DFO C. After DFO C receives the data, it starts calculation.
+
+
+
+In the following example, the execution plan of the `SELECT` statement first performs a full-table scan on the `game` table, aggregates the scores by the `team` column, and finally returns the total score of each team.
+
+```sql
+CREATE TABLE game (round INT PRIMARY KEY, team VARCHAR(10), score INT)
+ PARTITION BY HASH(round) PARTITIONS 3;
+INSERT INTO game VALUES (1, 'CN', 4), (2, 'CN', 5), (3, 'JP', 3);
+INSERT INTO game VALUES (4, 'CN', 4), (5, 'US', 4), (6, 'JP', 4);
+SELECT /*+ PARALLEL(3) */ team, SUM(score) TOTAL FROM game GROUP BY team;
+
+obclient> EXPLAIN SELECT /*+ PARALLEL(3) */ team, SUM(score) TOTAL FROM game GROUP BY team;
++---------------------------------------------------------------------------------------------------------+
+| Query Plan |
++---------------------------------------------------------------------------------------------------------+
+| ================================================================= |
+| |ID|OPERATOR |NAME |EST.ROWS|EST.TIME(us)| |
+| ----------------------------------------------------------------- |
+| |0 |PX COORDINATOR | |1 |4 | |
+| |1 | EXCHANGE OUT DISTR |:EX10001|1 |4 | |
+| |2 | HASH GROUP BY | |1 |4 | |
+| |3 | EXCHANGE IN DISTR | |3 |3 | |
+| |4 | EXCHANGE OUT DISTR (HASH)|:EX10000|3 |3 | |
+| |5 | HASH GROUP BY | |3 |2 | |
+| |6 | PX BLOCK ITERATOR | |1 |2 | |
+| |7 | TABLE SCAN |game |1 |2 | |
+| ================================================================= |
+| Outputs & filters: |
+| ------------------------------------- |
+| 0 - output([INTERNAL_FUNCTION(game.team, T_FUN_SUM(T_FUN_SUM(game.score)))]), filter(nil), rowset=256 |
+| 1 - output([INTERNAL_FUNCTION(game.team, T_FUN_SUM(T_FUN_SUM(game.score)))]), filter(nil), rowset=256 |
+| dop=3 |
+| 2 - output([game.team], [T_FUN_SUM(T_FUN_SUM(game.score))]), filter(nil), rowset=256 |
+| group([game.team]), agg_func([T_FUN_SUM(T_FUN_SUM(game.score))]) |
+| 3 - output([game.team], [T_FUN_SUM(game.score)]), filter(nil), rowset=256 |
+| 4 - output([game.team], [T_FUN_SUM(game.score)]), filter(nil), rowset=256 |
+| (#keys=1, [game.team]), dop=3 |
+| 5 - output([game.team], [T_FUN_SUM(game.score)]), filter(nil), rowset=256 |
+| group([game.team]), agg_func([T_FUN_SUM(game.score)]) |
+| 6 - output([game.team], [game.score]), filter(nil), rowset=256 |
+| 7 - output([game.team], [game.score]), filter(nil), rowset=256 |
+| access([game.team], [game.score]), partitions(p[0-2]) |
+| is_index_back=false, is_global_index=false, |
+| range_key([game.round]), range(MIN ; MAX)always true |
++---------------------------------------------------------------------------------------------------------+
+29 rows in set
+```
+
+The following figure shows the execution process of the previous example.
+
+
+
+The figure shows that six threads are used for the query. The execution process is described as follows:
+
+* Step 1: The first three threads scan the `game` table and separately pre-aggregate the scores by `game.team`.
+* Step 2: The other three threads perform the final aggregation of the data pre-aggregated by the first three threads.
+* Step 3: The PX coordinator returns the final aggregation results to the client.
+
+The data sent from Step 1 to Step 2 is hashed on the `game.team` field to determine the thread to which each pre-aggregated row is sent.
+
+#### Data distribution methods between the producer and the consumer
+
+A data distribution method is the method used by a group of PX worker threads (producers) to send data to another group of worker threads (consumers). The optimizer selects the optimal data redistribution method based on a series of optimization strategies to achieve the optimal performance.
+
+General data distribution methods in parallel execution are described as follows (a hint-based sketch follows the list):
+
+* Hash distribution
+
+ In the hash distribution method, the producer hashes the data based on the distribution key and obtains the modulus to determine the consumer worker thread to which the data is to be sent. In hash distribution, data can be evenly distributed to multiple consumer worker threads.
+
+* Pkey distribution
+
+ In pkey distribution, the producer determines through calculation the partition in the target table to which a data row belongs and sends the row data to a consumer thread responsible for this partition. The pkey distribution method is commonly used in partial partition-wise joins. The data at the consumer end can be directly used for partition-wise joins with that at the producer end without redistribution, thereby reducing network communication and improving the performance.
+
+* Pkey hash distribution
+
+ In pkey hash distribution, the producer first calculates the partition in the target table to which a data row belongs and hashes the data based on the distribution key to determine the consumer thread to which the data is to be sent.
+
+ Pkey hash distribution is commonly used in parallel DML scenarios where a partition can be concurrently updated by multiple threads. Pkey hash distribution can ensure that rows with identical values are processed by the same thread and that rows with different values are evenly distributed to multiple threads.
+
+* Broadcast distribution
+
+ In broadcast distribution, the producer sends all data rows to each consumer thread so that each consumer thread has the full data of the producer. In broadcast distribution, data is copied from small tables to all nodes involved in a join. Then, joins are executed locally to reduce network communication.
+
+* Broadcast to host distribution
+
+ In broadcast to host distribution, the producer sends all rows to each consumer node so that each consumer node has the full data of the producer. Then, the consumer threads on each node process the data in a collaborative manner.
+
+ Broadcast to host distribution is commonly used in `NESTED LOOP JOIN` and `SHARED HASH JOIN` scenarios. In a `NESTED LOOP JOIN` scenario, each consumer thread obtains a part of the shared data as the driver data for the join operation on the target table. In a `SHARED HASH JOIN` scenario, the consumer threads jointly build a hash table based on the shared data. This avoids the situation where each consumer thread independently builds a hash table identical to that of others, thereby reducing the overhead.
+
+* Range distribution
+
+ In range distribution, the producer divides data into ranges and different consumer threads process data of different ranges. Range distribution is commonly used in sorting scenarios. Each consumer thread only needs to sort the data allocated to it. This ensures that the data is globally ordered.
+
+* Random distribution
+
+ In random distribution, the producer randomly scatters the data and sends the data to the consumer threads so that each consumer thread processes an almost equal amount of data, thereby achieving load balancing. Random distribution is commonly used in multithreaded parallel `UNION ALL` scenarios, where data is scattered only for load balancing and the scattered data is not associated.
+
+* Hybrid hash distribution
+
+ Hybrid hash distribution is an adaptive distribution method used in join operations. Based on collected statistics, OceanBase Database provides a group of parameters to define regular values and frequent values. In hybrid hash distribution, hash distribution is used for regular values on both sides of a join, broadcast distribution is used for frequent values on the left side, and random distribution is used for frequent values on the right side.
+
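+The following hint-based sketch reuses the hypothetical `t_order` and `t_refund` tables from the granule example above. The `PQ_DISTRIBUTE` hint can suggest a distribution method to the optimizer; verify the exact parameter semantics against the hint reference for your version:
+
+```sql
+-- Suggest hash redistribution on both sides of the join (DOP = 8).
+SELECT /*+ PARALLEL(8) PQ_DISTRIBUTE(r HASH HASH) */ o.id, r.amt
+FROM t_order o JOIN t_refund r ON o.id = r.id;
+
+-- Suggest a broadcast-based distribution for the same join.
+SELECT /*+ PARALLEL(8) PQ_DISTRIBUTE(r BROADCAST NONE) */ o.id, r.amt
+FROM t_order o JOIN t_refund r ON o.id = r.id;
+```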
+
+
+#### Data transmission mechanism between the producer and the consumer
+
+The two DFOs concurrently started by the PX coordinator are connected in producer-consumer mode for parallel execution. A transmission network is required for transmitting data between the producer and the consumer.
+
+For example, if the producer DFO uses two threads (DOP = 2) for data scan and the consumer DFO uses three threads (DOP = 3) for data aggregation, each producer thread creates three virtual links to the consumer threads, so six virtual links are created in total. The following figure shows the transmission mechanism.
+
+![DTL](https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/observer/V4.2.0/px-dtl.png)
+
+The created virtual transmission network is called the data transfer layer (DTL). In the parallel execution framework of OceanBase Database, all control messages and data rows are transmitted over the DTL. Each worker thread can establish thousands of virtual links, providing high scalability. The DTL also provides features such as data buffering, batch data sending, and automatic throttling.
+
+If the two ends of the DTL are on the same node, the DTL transfers messages through memory copy. If the two ends of the DTL are on different nodes, the DTL transfers messages through network communication.
+
+## Worker threads
+
+Two types of threads are used in parallel queries: one main thread and multiple PX worker threads. The main thread uses the same thread pool as normal transaction processing (TP) queries. PX worker threads come from a dedicated thread pool.
+
+OceanBase Database uses a dedicated thread pool model to allocate PX worker threads. Each tenant has a dedicated PX thread pool on each of its nodes. All PX worker threads are allocated from this thread pool.
+
+Before the PX coordinator schedules each DFO, it requests threads from the thread pool. After a DFO is executed, the threads for the DFO are immediately released.
+
+The initial size of the thread pool is 0, and the pool can be dynamically scaled out without an upper limit. To avoid excessive idle threads, the thread pool uses an automatic reclamation mechanism. For any thread:
+
+* If the thread is left idle for more than 10 minutes and the number of remaining threads in the thread pool exceeds 8, the thread will be reclaimed and destroyed.
+* If the thread is left idle for more than 60 minutes, it will be destroyed unconditionally.
+
+Theoretically, the thread pool has no upper limit in size. However, the following mechanisms effectively impose an upper limit:
+
+* Threads must be requested from the Admission module before parallel execution. Parallel execution can start only after threads are successfully requested. This mechanism can limit the number of parallel queries. For more information about the Admission module, see [Concurrency control and queuing](300.deploy-parallel-execution/200.concurrency-control-and-queuing.md).
+* At most N threads can be requested from the resource pool at a time, where N = Value of `MIN_CPU` of the unit config for the resource units of the tenant × Value of `px_workers_per_cpu_quota`. At most N threads are allocated even if more than N threads are requested. `px_workers_per_cpu_quota` is a tenant-level parameter that defaults to `10` (see the sketch after this list for how to inspect and adjust it). For example, if the DOP of a DFO is 100 and the DFO needs to request 30 threads from node A and 70 threads from node B, then with `MIN_CPU` set to `4` and `px_workers_per_cpu_quota` set to `10`, N = 4 × 10 = 40. The DFO can actually request 30 threads from node A but only 40 threads from node B, so its actual DOP is 70.
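+
+The following sketch shows how to inspect and adjust the parameter (assuming you have the required privileges, for example, in the `sys` tenant):
+
+```sql
+-- Check the current value of the tenant-level parameter.
+SHOW PARAMETERS LIKE 'px_workers_per_cpu_quota';
+
+-- Raise the per-CPU quota of PX worker threads.
+ALTER SYSTEM SET px_workers_per_cpu_quota = 20;
+```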
+
+## Performance optimization through load balancing
+
+To achieve the optimal performance, allocate the same number of tasks to each worker thread as far as possible.
+
+If data is divided based on block granules, the tasks are dynamically allocated to worker threads. This minimizes the imbalance in workloads: the workload of any worker thread does not obviously exceed that of the others.
+
+If data is divided based on partition granules, you can optimize the performance by ensuring that the number of tasks is an integral multiple of the number of worker threads. This is very useful for partition-wise joins and parallel DML.
+
+Assume that a table has 16 partitions and that the amount of data in each partition is almost the same. You can use 16 worker threads (DOP = 16) to finish the job in 1/16 of the time otherwise required, eight worker threads to finish it in 1/8 of the time, or two threads to finish it in half the time. However, if you use 15 worker threads to process the data of 16 partitions, the first thread to finish its partition will move on to the 16th partition, while the other threads become idle after finishing their respective partitions. If the amount of data in each partition is similar, this configuration results in poor performance. If the amount of data varies across partitions, the actual performance depends on the data distribution.
+
+Similarly, assume that you use six threads to process the data of 16 partitions and that each partition contains a similar amount of data.
+
+Each thread starts to process a second partition after it finishes its first one. However, only four threads get a third partition to process, while the other two become idle.
+
+Given N partitions and P worker threads, you cannot simply calculate the time required for parallel execution by dividing N by P. You need to consider the situation where some threads may need to wait for other threads to complete data processing for the last partition. You can specify an appropriate DOP to minimize the imbalance in workloads and optimize the performance.
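+
+As a back-of-the-envelope check, with equally sized partitions the elapsed time is roughly proportional to the number of processing rounds, that is, the ceiling of N/P:
+
+```sql
+-- Rounds needed for 16 equally sized partitions:
+-- 16 threads -> 1 round; 15 threads -> 2 rounds (no faster than 8 threads);
+-- 6 threads -> 3 rounds.
+SELECT CEIL(16/16) AS dop_16, CEIL(16/15) AS dop_15, CEIL(16/6) AS dop_6;
+```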
+
+## Inapplicable scenarios
+
+Parallel execution is inapplicable in the following scenarios:
+
+* Typical SQL queries in the system are executed within milliseconds.
+
+ A parallel query has a millisecond-level scheduling overhead. For a short query, the benefit of parallel execution may be neutralized by the scheduling overhead.
+
+* The system load is high.
+
+ Parallel execution is designed to make full use of idle system resources. For a system with a high load, parallel execution may fail to bring extra benefits but compromise the overall system performance.
+
+In serial execution, a single thread is used to execute database operations. Serial execution is preferred to parallel execution in the following circumstances:
+
+* A small amount of data to access
+* High concurrency
+* A query execution time less than 100 ms
+
+Parallel execution is partially inapplicable in the following circumstances:
+
+* The top-layer DFO is always executed in serial. It interacts with the client and executes top-layer operations that do not need to be parallelized, such as `LIMIT` and `PX COORDINATOR` operations.
+* A DFO that contains the user-defined function (UDF) `TABLE` can only be executed in serial. Other DFOs can still be executed in parallel.
+* Parallel execution is inapplicable to general `SELECT` and DML statements in an OLTP system.
diff --git a/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/300.deploy-parallel-execution/200.concurrency-control-and-queuing.md b/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/300.deploy-parallel-execution/200.concurrency-control-and-queuing.md
index abf331f20a..4dc97d6b45 100644
--- a/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/300.deploy-parallel-execution/200.concurrency-control-and-queuing.md
+++ b/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/300.deploy-parallel-execution/200.concurrency-control-and-queuing.md
@@ -34,18 +34,18 @@ Global control is responsible for resource acquisition in distributed scenarios.
The PX resource manager can query the `GV$OB_PX_TARGET_MONITOR` view for the thread usage information on each OBServer node of a tenant. For more information about fields in the view, see [V$OB_PX_TARGET_MONITOR](../../../../700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/5000.v-ob_px_target_monitor-of-mysql-mode.md).
```sql
-obclient> SELECT * FROM GV$OB_PX_TARGET_MONITOR;
-+--------------+----------+-----------+-----------+-----------------+--------------+-----------+-------------+------------------+-------------------+------------------------------+
-| SVR_IP | SVR_PORT | TENANT_ID | IS_LEADER | VERSION | PEER_IP | PEER_PORT | PEER_TARGET | PEER_TARGET_USED | LOCAL_TARGET_USED | LOCAL_PARALLEL_SESSION_COUNT |
-+--------------+----------+-----------+-----------+-----------------+--------------+-----------+-------------+------------------+-------------------+------------------------------+
-| 192.xx.xx.xx | 19512 | 1004 | N | 555393108309134 | 192.xx.xx.xx | 19510 | 10 | 6 | 0 | 0 |
-| 192.xx.xx.xx | 19512 | 1004 | N | 555393108309134 | 192.xx.xx.xx | 19512 | 10 | 0 | 0 | 0 |
-| 192.xx.xx.xx | 19510 | 1004 | Y | 555393108309134 | 192.xx.xx.xx | 19510 | 10 |
-
- 6 | 6 | 1 |
-| 192.xx.xx.xx | 19510 | 1004 | Y | 555393108309134 | 192.xx.xx.xx | 19512 | 10 | 0 | 0 | 1 |
-+--------------+----------+-----------+-----------+-----------------+--------------+-----------+-------------+------------------+-------------------+------------------------------+
-4 rows in set
+obclient> SELECT * FROM GV$OB_PX_TARGET_MONITOR;
++--------------+----------+-----------+-----------+-----------------+--------------+-----------+-------------+------------------+-------------------+------------------------------+
+| SVR_IP | SVR_PORT | TENANT_ID | IS_LEADER | VERSION | PEER_IP | PEER_PORT | PEER_TARGET | PEER_TARGET_USED | LOCAL_TARGET_USED | LOCAL_PARALLEL_SESSION_COUNT |
++--------------+----------+-----------+-----------+-----------------+--------------+-----------+-------------+------------------+-------------------+------------------------------+
+| 192.xx.xx.xx | 19512 | 1004 | N | 555393108309134 | 192.xx.xx.xx | 19510 | 10 | 6 | 0 | 0 |
+| 192.xx.xx.xx | 19512 | 1004 | N | 555393108309134 | 192.xx.xx.xx | 19512 | 10 | 0 | 0 | 0 |
+| 192.xx.xx.xx | 19510 | 1004 | Y | 555393108309134 | 192.xx.xx.xx | 19510 | 10 | 6 | 6 | 1 |
+| 192.xx.xx.xx | 19510 | 1004 | Y | 555393108309134 | 192.xx.xx.xx | 19512 | 10 | 0 | 0 | 1 |
++--------------+----------+-----------+-----------+-----------------+--------------+-----------+-------------+------------------+-------------------+------------------------------+
+4 rows in set
```
The global resource usage status queried at a specific moment may be inconsistent on different OBServer nodes. However, the global status is synchronized every 500 ms at the background. Generally, the global resource usage status queried on the OBServer nodes is basically consistent without obvious deviations.
diff --git a/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/300.deploy-parallel-execution/300.set-degree-of-parallelism.md b/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/300.deploy-parallel-execution/300.set-degree-of-parallelism.md
index eaec768369..2fa62525ce 100644
--- a/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/300.deploy-parallel-execution/300.set-degree-of-parallelism.md
+++ b/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/300.deploy-parallel-execution/300.set-degree-of-parallelism.md
@@ -75,7 +75,7 @@ ALTER TABLE table_name PARALLEL 4;
ALTER TABLE table_name ALTER INDEX index_name PARALLEL 2;
```
-Assume that an SQL statement involves only one table. If the primary table is queried in the statement, not only the DFOs of the primary table but also other DFOs are executed based on a DOP of 4. If the index table is queried in the SQL statement, not only the DFOs of the index table but also other DFOs are executed based on a DOP of 2.
+If an SQL statement involves only one table and the primary table is queried in the statement, not only the DFOs of the primary table but also other DFOs are executed based on a DOP of 4. If the index table is queried in the SQL statement, not only the DFOs of the index table but also other DFOs are executed based on a DOP of 2.
If an SQL statement involves multiple tables, the maximum `PARALLEL` value is used as the DOP for the whole execution plan of the statement.
@@ -345,116 +345,117 @@ For more information about auto DOP, see [Auto DOP](../../300.distributed-execut
## DOP priorities
-The priorities of DOPs specified in different ways are sorted in descending order as follows: DOP specified by a global hint > DOP specified by a table-level PARALLEL hint > DOP specified for a session > DOP specified for a table.
+The priorities of DOPs specified in different ways are sorted in descending order as follows: DOP specified by a table-level PARALLEL hint > DOP specified by a global hint > DOP specified for a session > DOP specified for a table.
-The following example shows that when a global hint is specified, a table-level hint does not take effect.
+The following example shows that when a table-level PARALLEL hint is specified, a global hint does not take effect.
```sql
-obclient> EXPLAIN SELECT /*+ parallel(2) parallel(t1 3) */ * FROM t1;
-+-------------------------------------------------------------------+
-| Query Plan |
-+-------------------------------------------------------------------+
-| ======================================================= |
-| |ID|OPERATOR |NAME |EST.ROWS|EST.TIME(us)| |
-| ------------------------------------------------------- |
-| |0 |PX COORDINATOR | |1 |2 | |
-| |1 | EXCHANGE OUT DISTR|:EX10000|1 |2 | |
-| |2 | PX BLOCK ITERATOR| |1 |1 | |
-| |3 | TABLE SCAN |T1 |1 |1 | |
-| ======================================================= |
-| Outputs & filters: |
-| ------------------------------------- |
-| 0 - output([INTERNAL_FUNCTION(T1.C1)]), filter(nil), rowset=256 |
-| 1 - output([INTERNAL_FUNCTION(T1.C1)]), filter(nil), rowset=256 |
-| dop=2 |
-| 2 - output([T1.C1]), filter(nil), rowset=256 |
-| 3 - output([T1.C1]), filter(nil), rowset=256 |
-| access([T1.C1]), partitions(p0) |
-| is_index_back=false, is_global_index=false, |
-| range_key([T1.__pk_increment]), range(MIN ; MAX)always true |
-+-------------------------------------------------------------------+
+obclient> CREATE TABLE t1 (c1 int primary key, c2 int);
+obclient> EXPLAIN SELECT /*+ parallel(3) parallel(t1 5) */ * FROM t1;
++-------------------------------------------------------------------------+
+| Query Plan |
++-------------------------------------------------------------------------+
+| ======================================================= |
+| |ID|OPERATOR |NAME |EST.ROWS|EST.TIME(us)| |
+| ------------------------------------------------------- |
+| |0 |PX COORDINATOR | |1 |1 | |
+| |1 | EXCHANGE OUT DISTR|:EX10000|1 |21 | |
+| |2 | PX BLOCK ITERATOR| |1 |1 | |
+| |3 | TABLE SCAN |T1 |1 |1 | |
+| ======================================================= |
+| Outputs & filters: |
+| ------------------------------------- |
+| 0 - output([INTERNAL_FUNCTION(t1.c1, t1.c2)]), filter(nil), rowset=16 |
+| 1 - output([INTERNAL_FUNCTION(t1.c1, t1.c2)]), filter(nil), rowset=16 |
+| dop=5 |
+| 2 - output([t1.c1], [t1.c2]), filter(nil), rowset=16 |
+| 3 - output([t1.c1], [t1.c2]), filter(nil), rowset=16 |
+| access([t1.c1], [t1.c2]), partitions(p0) |
+| is_index_back=false, is_global_index=false, |
+| range_key([t1.c1]), range(MIN ; MAX)always true |
++-------------------------------------------------------------------------+
18 rows in set
obclient> EXPLAIN SELECT /*+ parallel(t1 3) */ * FROM t1;
-+-------------------------------------------------------------------+
-| Query Plan |
-+-------------------------------------------------------------------+
-| ======================================================= |
-| |ID|OPERATOR |NAME |EST.ROWS|EST.TIME(us)| |
-| ------------------------------------------------------- |
-| |0 |PX COORDINATOR | |1 |2 | |
-| |1 | EXCHANGE OUT DISTR|:EX10000|1 |1 | |
-| |2 | PX BLOCK ITERATOR| |1 |1 | |
-| |3 | TABLE SCAN |T1 |1 |1 | |
-| ======================================================= |
-| Outputs & filters: |
-| ------------------------------------- |
-| 0 - output([INTERNAL_FUNCTION(T1.C1)]), filter(nil), rowset=256 |
-| 1 - output([INTERNAL_FUNCTION(T1.C1)]), filter(nil), rowset=256 |
-| dop=3 |
-| 2 - output([T1.C1]), filter(nil), rowset=256 |
-| 3 - output([T1.C1]), filter(nil), rowset=256 |
-| access([T1.C1]), partitions(p0) |
-| is_index_back=false, is_global_index=false, |
-| range_key([T1.__pk_increment]), range(MIN ; MAX)always true |
-+-------------------------------------------------------------------+
++-------------------------------------------------------------------------+
+| Query Plan |
++-------------------------------------------------------------------------+
+| ======================================================= |
+| |ID|OPERATOR |NAME |EST.ROWS|EST.TIME(us)| |
+| ------------------------------------------------------- |
+| |0 |PX COORDINATOR | |1 |2 | |
+| |1 | EXCHANGE OUT DISTR|:EX10000|1 |1 | |
+| |2 | PX BLOCK ITERATOR| |1 |1 | |
+| |3 | TABLE SCAN |T1 |1 |1 | |
+| ======================================================= |
+| Outputs & filters: |
+| ------------------------------------- |
+| 0 - output([INTERNAL_FUNCTION(t1.c1, t1.c2)]), filter(nil), rowset=16 |
+| 1 - output([INTERNAL_FUNCTION(t1.c1, t1.c2)]), filter(nil), rowset=16 |
+| dop=3 |
+| 2 - output([t1.c1], [t1.c2]), filter(nil), rowset=16 |
+| 3 - output([t1.c1], [t1.c2]), filter(nil), rowset=16 |
+| access([t1.c1], [t1.c2]), partitions(p0) |
+| is_index_back=false, is_global_index=false, |
+| range_key([t1.c1]), range(MIN ; MAX)always true |
++-------------------------------------------------------------------------+
18 rows in set
```
-The following example shows that when a table-level hint is specified, the DOP specified for a session does not take effect.
+The following example shows that when a table-level PARALLEL hint is specified, the DOP specified for a session does not take effect.
```sql
obclient> ALTER SESSION FORCE PARALLEL QUERY PARALLEL 4;
Query OK, 0 rows affected (0.001 sec)
obclient> EXPLAIN SELECT /*+ parallel(t1 3) */ * FROM t1;
-+-------------------------------------------------------------------+
-| Query Plan |
-+-------------------------------------------------------------------+
-| ======================================================= |
-| |ID|OPERATOR |NAME |EST.ROWS|EST.TIME(us)| |
-| ------------------------------------------------------- |
-| |0 |PX COORDINATOR | |1 |2 | |
-| |1 | EXCHANGE OUT DISTR|:EX10000|1 |1 | |
-| |2 | PX BLOCK ITERATOR| |1 |1 | |
-| |3 | TABLE SCAN |T1 |1 |1 | |
-| ======================================================= |
-| Outputs & filters: |
-| ------------------------------------- |
-| 0 - output([INTERNAL_FUNCTION(T1.C1)]), filter(nil), rowset=256 |
-| 1 - output([INTERNAL_FUNCTION(T1.C1)]), filter(nil), rowset=256 |
-| dop=3 |
-| 2 - output([T1.C1]), filter(nil), rowset=256 |
-| 3 - output([T1.C1]), filter(nil), rowset=256 |
-| access([T1.C1]), partitions(p0) |
-| is_index_back=false, is_global_index=false, |
-| range_key([T1.__pk_increment]), range(MIN ; MAX)always true |
-+-------------------------------------------------------------------+
++-------------------------------------------------------------------------+
+| Query Plan |
++-------------------------------------------------------------------------+
+| ========================================================= |
+| |ID|OPERATOR |NAME |EST.ROWS|EST.TIME(us)| |
+| --------------------------------------------------------- |
+| |0 |PX COORDINATOR | |1 |3 | |
+| |1 |└─EXCHANGE OUT DISTR |:EX10000|1 |2 | |
+| |2 | └─PX BLOCK ITERATOR| |1 |2 | |
+| |3 | └─TABLE FULL SCAN|T1 |1 |2 | |
+| ========================================================= |
+| Outputs & filters: |
+| ------------------------------------- |
+| 0 - output([INTERNAL_FUNCTION(T1.C1, T1.C2)]), filter(nil), rowset=16 |
+| 1 - output([INTERNAL_FUNCTION(T1.C1, T1.C2)]), filter(nil), rowset=16 |
+| dop=3 |
+| 2 - output([T1.C1], [T1.C2]), filter(nil), rowset=16 |
+| 3 - output([T1.C1], [T1.C2]), filter(nil), rowset=16 |
+| access([T1.C1], [T1.C2]), partitions(p0) |
+| is_index_back=false, is_global_index=false, |
+| range_key([T1.C1]), range(MIN ; MAX)always true |
++-------------------------------------------------------------------------+
18 rows in set
obclient> EXPLAIN SELECT * FROM t1;
-+-------------------------------------------------------------------+
-| Query Plan |
-+-------------------------------------------------------------------+
-| ======================================================= |
-| |ID|OPERATOR |NAME |EST.ROWS|EST.TIME(us)| |
-| ------------------------------------------------------- |
-| |0 |PX COORDINATOR | |1 |1 | |
-| |1 | EXCHANGE OUT DISTR|:EX10000|1 |1 | |
-| |2 | PX BLOCK ITERATOR| |1 |1 | |
-| |3 | TABLE SCAN |T1 |1 |1 | |
-| ======================================================= |
-| Outputs & filters: |
-| ------------------------------------- |
-| 0 - output([INTERNAL_FUNCTION(T1.C1)]), filter(nil), rowset=256 |
-| 1 - output([INTERNAL_FUNCTION(T1.C1)]), filter(nil), rowset=256 |
-| dop=4 |
-| 2 - output([T1.C1]), filter(nil), rowset=256 |
-| 3 - output([T1.C1]), filter(nil), rowset=256 |
-| access([T1.C1]), partitions(p0) |
-| is_index_back=false, is_global_index=false, |
-| range_key([T1.__pk_increment]), range(MIN ; MAX)always true |
-+-------------------------------------------------------------------+
++-------------------------------------------------------------------------+
+| Query Plan |
++-------------------------------------------------------------------------+
+| ========================================================= |
+| |ID|OPERATOR |NAME |EST.ROWS|EST.TIME(us)| |
+| --------------------------------------------------------- |
+| |0 |PX COORDINATOR | |1 |2 | |
+| |1 |└─EXCHANGE OUT DISTR |:EX10000|1 |2 | |
+| |2 | └─PX BLOCK ITERATOR| |1 |1 | |
+| |3 | └─TABLE FULL SCAN|T1 |1 |1 | |
+| ========================================================= |
+| Outputs & filters: |
+| ------------------------------------- |
+| 0 - output([INTERNAL_FUNCTION(T1.C1, T1.C2)]), filter(nil), rowset=16 |
+| 1 - output([INTERNAL_FUNCTION(T1.C1, T1.C2)]), filter(nil), rowset=16 |
+| dop=4 |
+| 2 - output([T1.C1], [T1.C2]), filter(nil), rowset=16 |
+| 3 - output([T1.C1], [T1.C2]), filter(nil), rowset=16 |
+| access([T1.C1], [T1.C2]), partitions(p0) |
+| is_index_back=false, is_global_index=false, |
+| range_key([T1.C1]), range(MIN ; MAX)always true |
++-------------------------------------------------------------------------+
18 rows in set
```
@@ -465,55 +466,55 @@ obclient> ALTER TABLE t1 PARALLEL 5;
Query OK, 0 rows affected
obclient> EXPLAIN SELECT * FROM t1;
-+-------------------------------------------------------------------+
-| Query Plan |
-+-------------------------------------------------------------------+
-| ======================================================= |
-| |ID|OPERATOR |NAME |EST.ROWS|EST.TIME(us)| |
-| ------------------------------------------------------- |
-| |0 |PX COORDINATOR | |1 |1 | |
-| |1 | EXCHANGE OUT DISTR|:EX10000|1 |1 | |
-| |2 | PX BLOCK ITERATOR| |1 |1 | |
-| |3 | TABLE SCAN |T1 |1 |1 | |
-| ======================================================= |
-| Outputs & filters: |
-| ------------------------------------- |
-| 0 - output([INTERNAL_FUNCTION(T1.C1)]), filter(nil), rowset=256 |
-| 1 - output([INTERNAL_FUNCTION(T1.C1)]), filter(nil), rowset=256 |
-| dop=5 |
-| 2 - output([T1.C1]), filter(nil), rowset=256 |
-| 3 - output([T1.C1]), filter(nil), rowset=256 |
-| access([T1.C1]), partitions(p0) |
-| is_index_back=false, is_global_index=false, |
-| range_key([T1.__pk_increment]), range(MIN ; MAX)always true |
-+-------------------------------------------------------------------+
++-------------------------------------------------------------------------+
+| Query Plan |
++-------------------------------------------------------------------------+
+| ========================================================= |
+| |ID|OPERATOR |NAME |EST.ROWS|EST.TIME(us)| |
+| --------------------------------------------------------- |
+| |0 |PX COORDINATOR | |1 |2 | |
+| |1 |└─EXCHANGE OUT DISTR |:EX10000|1 |2 | |
+| |2 | └─PX BLOCK ITERATOR| |1 |1 | |
+| |3 | └─TABLE FULL SCAN|T1 |1 |1 | |
+| ========================================================= |
+| Outputs & filters: |
+| ------------------------------------- |
+| 0 - output([INTERNAL_FUNCTION(T1.C1, T1.C2)]), filter(nil), rowset=16 |
+| 1 - output([INTERNAL_FUNCTION(T1.C1, T1.C2)]), filter(nil), rowset=16 |
+| dop=4 |
+| 2 - output([T1.C1], [T1.C2]), filter(nil), rowset=16 |
+| 3 - output([T1.C1], [T1.C2]), filter(nil), rowset=16 |
+| access([T1.C1], [T1.C2]), partitions(p0) |
+| is_index_back=false, is_global_index=false, |
+| range_key([T1.C1]), range(MIN ; MAX)always true |
++-------------------------------------------------------------------------+
18 rows in set
obclient> ALTER SESSION FORCE PARALLEL QUERY PARALLEL 4;
Query OK, 0 rows affected
obclient> EXPLAIN SELECT * FROM t1;
-+-------------------------------------------------------------------+
-| Query Plan |
-+-------------------------------------------------------------------+
-| ======================================================= |
-| |ID|OPERATOR |NAME |EST.ROWS|EST.TIME(us)| |
-| ------------------------------------------------------- |
-| |0 |PX COORDINATOR | |1 |1 | |
-| |1 | EXCHANGE OUT DISTR|:EX10000|1 |1 | |
-| |2 | PX BLOCK ITERATOR| |1 |1 | |
-| |3 | TABLE SCAN |T1 |1 |1 | |
-| ======================================================= |
-| Outputs & filters: |
-| ------------------------------------- |
-| 0 - output([INTERNAL_FUNCTION(T1.C1)]), filter(nil), rowset=256 |
-| 1 - output([INTERNAL_FUNCTION(T1.C1)]), filter(nil), rowset=256 |
-| dop=4 |
-| 2 - output([T1.C1]), filter(nil), rowset=256 |
-| 3 - output([T1.C1]), filter(nil), rowset=256 |
-| access([T1.C1]), partitions(p0) |
-| is_index_back=false, is_global_index=false, |
-| range_key([T1.__pk_increment]), range(MIN ; MAX)always true |
-+-------------------------------------------------------------------+
++-------------------------------------------------------------------------+
+| Query Plan |
++-------------------------------------------------------------------------+
+| ========================================================= |
+| |ID|OPERATOR |NAME |EST.ROWS|EST.TIME(us)| |
+| --------------------------------------------------------- |
+| |0 |PX COORDINATOR | |1 |2 | |
+| |1 |└─EXCHANGE OUT DISTR |:EX10000|1 |2 | |
+| |2 | └─PX BLOCK ITERATOR| |1 |1 | |
+| |3 | └─TABLE FULL SCAN|T1 |1 |1 | |
+| ========================================================= |
+| Outputs & filters: |
+| ------------------------------------- |
+| 0 - output([INTERNAL_FUNCTION(T1.C1, T1.C2)]), filter(nil), rowset=16 |
+| 1 - output([INTERNAL_FUNCTION(T1.C1, T1.C2)]), filter(nil), rowset=16 |
+| dop=4 |
+| 2 - output([T1.C1], [T1.C2]), filter(nil), rowset=16 |
+| 3 - output([T1.C1], [T1.C2]), filter(nil), rowset=16 |
+| access([T1.C1], [T1.C2]), partitions(p0) |
+| is_index_back=false, is_global_index=false, |
+| range_key([T1.C1]), range(MIN ; MAX)always true |
++-------------------------------------------------------------------------+
18 rows in set
```
diff --git a/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/300.deploy-parallel-execution/400.parallel-parameter-tuning.md b/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/300.deploy-parallel-execution/400.parallel-parameter-tuning.md
index 94a5bc2db8..146f3cd0d4 100644
--- a/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/300.deploy-parallel-execution/400.parallel-parameter-tuning.md
+++ b/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/300.deploy-parallel-execution/400.parallel-parameter-tuning.md
@@ -33,7 +33,7 @@ Generally, you do not need to change the default value of `px_workers_per_cpu_qu
This parameter specifies the number of PX threads that can be requested from each node of the tenant. When thread resources are used up, subsequent PX requests need to wait in a queue. For the concept of queuing, see [Concurrency control and queuing](200.concurrency-control-and-queuing.md).
-In parallel execution, the CPU utilization can be very low due to factors such as an excessively small value for `parallel_servers_target`, which downgrades the DOP for the SQL statement, resulting in less threads allocated than expected.
+In parallel execution, the CPU utilization can be very low due to factors such as an excessively small value for `parallel_servers_target`, which downgrades the DOP for the SQL statement, resulting in fewer threads allocated than expected.
In OceanBase Database of a version earlier than V3.2.3, the default value of `parallel_servers_target` is very small. You can increase the value of `parallel_servers_target` to resolve the issue. We recommend that you set `parallel_servers_target` to the value of `MIN_CPU` × 10.
@@ -66,7 +66,7 @@ OceanBase Database of a version earlier than V4.2 does not support the `parallel
#### ob_sql_work_area_percentage
-This is a tenant-level variable that specifies the maximum memory space available for the SQL workarea. The value is in percentage that indicates the percentage of the memory space available for the SQL module to the total memory space of the tenant. The default value is `5`, which indicates 5%. When the memory space occupied by the SQL module exceeds the specified value, data in the memory is flushed to the disk.
+This is a tenant-level variable that specifies the maximum memory space available for the SQL workarea. The value is in percentage that indicates the percentage of the memory space available for the SQL workarea to the total memory space of the tenant. The default value is `5`, which indicates 5%. When the memory space occupied by the SQL workarea exceeds the specified value, data in the memory is flushed to the disk.
To view the actual memory usage of the SQL workarea, you can search for `WORK_AREA` in the `observer.log` file. Here is an example.
diff --git a/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/700.quickstart-of-parallel-execution.md b/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/700.quickstart-of-parallel-execution.md
index 505cb5cf97..ae83adc5f5 100644
--- a/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/700.quickstart-of-parallel-execution.md
+++ b/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/350.parallel-execution/700.quickstart-of-parallel-execution.md
@@ -87,7 +87,7 @@ Generally, if multiple SQL statements will not be executed in parallel, you can
Set the `undo_retention` parameter to a value that is not less than the maximum execution time of a PDML statement. The default value of `undo_retention` is 30 minutes. If the execution time of a PDML statement exceeds 30 minutes, this error may be returned and the statement will be aborted and retried until it times out.
- This issue never occurs in OceanBase Database V4.1 and later. Therefore, you do not need to set the `undo_retention` parameter in OceanBase Database V4.1.
+   This issue does not occur in OceanBase Database V4.1 and later. Therefore, you do not need to set the `undo_retention` parameter in these versions.
4. How do I enable parallel execution for business SQL statements without making any modifications to the business?
diff --git a/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/100.query-rewrite-overview.md b/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/100.query-rewrite-overview.md
index 0b4004e77f..3994a1b8f3 100644
--- a/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/100.query-rewrite-overview.md
+++ b/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/100.query-rewrite-overview.md
@@ -2,12 +2,88 @@
Query rewrite refers to rewriting an SQL query into another one that is easier to optimize.
-OceanBase Database supports two query rewrite rules: rule-based rewrite and cost-based rewrite.
+OceanBase Database supports rule-based query rewrite and cost-based query rewrite.
-Rule-based query rewrite always makes an SQL query better, making it possible to keep optimizing the SQL query. A typical rule-based rewrite is to join result sets of subqueries into the same result set. Before the rewrite, nested-loop join is the only option to execute subqueries. After the rewrite, the optimizer has two more options: hash join and merge join.
+**Rule-based query rewrite** always aims to make an SQL query "better", making it possible to keep optimizing the SQL query. For example, converting a subquery into a join is a typical rule-based query rewrite, which provides the optimizer with more execution plan options, such as hash join and merge join in addition to nested loop join.
-Cost-based query rewrite does not necessarily make an SQL query better. A cost model is required for evaluation. A typical cost-based rewrite is OR-EXPANSION.
+**Cost-based query rewrite** does not necessarily make an SQL query "better". It determines whether to rewrite an SQL statement based on the cost model. For example, OR-EXPANSION is a cost-driven rewrite strategy.
-Usually, a rewrite rule of a database can be implemented only when it meets specific conditions. Many rewrite rules are interactive, meaning that the rewrite of a rule triggers that of another. In OceanBase Database, rule-based query rewrites are classified into multiple sets of rules. Each set of rules in OceanBase Database is iteratively rewritten until the SQL query cannot be rewritten anymore, or the number of iterations reaches the preset threshold. The same applies to cost-based rewrites.
+Specific conditions must be met before a rewrite rule can be applied. Rewrite rules may trigger one another. OceanBase Database splits rule-based rewrites into multiple rule sets and iteratively processes each set until the query can no longer be rewritten or the number of iterations reaches the specified threshold. Cost-based rewrites follow a similar iterative mechanism.
-Note that a cost-based rewrite may in turn trigger a rule-based rewrite. In other words, the cost-based rewrite and the rule-based rewrite are iteratively executed.
+Note that a cost-based rewrite may trigger a rule-based rewrite. Therefore, OceanBase Database actually rewrites a query by alternately using cost-based rewrites and rule-based rewrites.
+
+## Query rewrite models
+
+OceanBase Database supports two rewrite models for queries: rule-based rewrite and cost-based rewrite.
+
+During the optimization of queries in a database, the rule model and cost model are two key factors that determine how to rewrite a query and select an execution plan. Both models aim to improve the SQL query performance. The following sections describe some common rule models and cost models.
+
+### Rule models (rule-based query rewrite)
+
+Generally, a rule model rewrites queries based on fixed heuristic rules. Here are some typical rule-based rewrite strategies:
+
+1. **Subquery-related rewrite**:
+
+ - **View merge**: A nested query in a view is merged with the main query to reduce query layers and improve the execution efficiency.
+    - **Subquery unnesting**: The subquery in a `WHERE` condition is pulled up into the parent query and converted into a join condition, which turns the outer query into a multi-table join (see the sketch after this list).
+
+2. **Rewrite `ANY`/`ALL` by using `MAX`/`MIN`**:
+
+ - A subquery that contains an `ANY` or `ALL` operator is rewritten to an equivalent that uses the `MAX` or `MIN` aggregate function so as to apply indexes or a more efficient execution strategy.
+
+3. **Outer join elimination**:
+
+ - An outer join whose result does not affect the final result set is eliminated to convert the query into an inner join or a simple `SELECT` query.
+
+4. **Condition simplification**
+
+ - **`HAVING` condition elimination**: If no aggregate or `GROUP BY` operation exists in the query, the `HAVING` condition can be merged into the `WHERE` conditions, with the `HAVING` condition eliminated. This way, the `HAVING` condition can be managed with other conditions in `WHERE` for further optimization.
+ - **Equivalence relation deduction**: In the process of equivalence relation deduction, new conditional expressions are deduced based on the transitivity of comparison operators to reduce the number of rows to be processed or to select a more efficient index.
+ - **Identically true/false elimination**: Identically true and false conditions are eliminated from the query conditions to simplify the query logic.
+
+5. **Non-select project join (non-SPJ) rewrite**:
+
+   - **Redundant ORDER BY elimination**: If the query results do not need to be ordered or the ordering is ineffective, the `ORDER BY` operation is eliminated.
+ - **LIMIT pushdown to subquery**: If `LIMIT` can be used in a subquery without affecting the final result set, `LIMIT` is pushed down to the subquery to reduce the number of rows to be processed.
+ - **LIMIT pushdown to outer join or cross join**: `LIMIT` is pushed down to an appropriate part of an outer join or cross join to reduce the number of rows to be processed.
+ - **DISTINCT elimination**: If it can be guaranteed that rows in the result set are unique, the `DISTINCT` operation is eliminated.
+ - **MIN/MAX rewrite**: Where it is deemed appropriate, a query is rewritten to a more efficient equivalent to calculate the `MIN` or `MAX` values. For example, a query can be rewritten to use indexes for access.
+
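+The following sketch illustrates rule 2 above. It assumes a hypothetical table `t2` that is non-empty and whose column `c1` contains no `NULL` values, conditions under which the two forms are equivalent:
+
+```sql
+-- Before the rewrite: the ALL subquery must be checked for every row of t1.
+SELECT * FROM t1 WHERE t1.c1 > ALL (SELECT t2.c1 FROM t2);
+
+-- After the rewrite: a single aggregate that can be answered efficiently,
+-- for example from an index on t2.c1.
+SELECT * FROM t1 WHERE t1.c1 > (SELECT MAX(t2.c1) FROM t2);
+```
+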
+### Cost models (cost-based query rewrite)
+
+A cost model selects the optimal execution plan for a query based on cost estimation. Here are some cost-based strategies supported in OceanBase Database:
+
+- **OR-EXPANSION**: splits a predicate that contains an OR condition into multiple independent queries and merges the result sets of the queries. This can sometimes improve the query performance by using indexes or parallel processing.
+
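+As a sketch, assume a hypothetical table `t1` with separate indexes on `c1` and `c2`. OR-EXPANSION conceptually splits the query into a `UNION ALL` of index-friendly branches; the extra predicate in the second branch prevents duplicate rows:
+
+```sql
+-- Before the rewrite: a single OR predicate may rule out index access.
+SELECT * FROM t1 WHERE c1 = 1 OR c2 = 2;
+
+-- After the rewrite: each branch can use its own index.
+SELECT * FROM t1 WHERE c1 = 1
+UNION ALL
+SELECT * FROM t1 WHERE c2 = 2 AND (c1 != 1 OR c1 IS NULL);
+```
+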
+OceanBase Database estimates the resource consumption of a query based on a cost model, which evaluates the cost for executing an operator based on a set of formulas and constant parameters. For example, the cost for executing the `EXCHANGE IN` operator can be evaluated by using the following formula:
+
+```sql
+cost = rows * row_width * NETWORK_TRANS_PER_BYTE_COST +
+ rows * row_width * NETWORK_DESER_PER_BYTE_COST
+```
+
+The preceding formula involves two variables, `rows` (the number of output rows) and `row_width` (the row width in bytes), and calculates the overall overhead based on two per-byte cost constants: `NETWORK_TRANS_PER_BYTE_COST` (the cost of network transmission) and `NETWORK_DESER_PER_BYTE_COST` (the cost of data deserialization).
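+
+For instance, with hypothetical values of 1,000 output rows, a row width of 100 bytes, `NETWORK_TRANS_PER_BYTE_COST = 0.01`, and `NETWORK_DESER_PER_BYTE_COST = 0.02` (illustrative numbers, not the actual constants), the formula evaluates as follows:
+
+```sql
+cost = 1000 * 100 * 0.01 + 1000 * 100 * 0.02
+     = 1000 + 2000
+     = 3000
+```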
+
+An adaptive cost model adjusts the cost coefficients to match a specific hardware environment. In principle, you can use an automatic script to calculate new cost coefficients when you install the database. However, the optimizer uses so many cost coefficients that the calculation process becomes extremely complex, error-prone, and unfriendly to users.
+
+To simplify adaptive coefficient adjustment, the coefficients involved in cost models are normalized into functions of the hardware environment. Specifically, each required cost coefficient can be expressed as a combination of the following four basic coefficients:
+
+- CPU frequency (CPU_SPEED)
+- Sequential disk read rate
+- Random disk read rate
+- Network bandwidth
+
+For example, the cost coefficient for a hash table probe can be converted into a number of CPU instructions (CPU_CYCLES). Therefore, the cost of probing a hash table in the current environment can be calculated based on the CPU frequency by using the following formula:
+
+```sql
+PROBE_HASH_PER_ROW_COST = CPU_CYCLES * CPU_SPEED;
+```
+
+This way, coefficients of a cost model can be decoupled from the current hardware. You can use different system information to calculate cost coefficients for different hardware environments.
+
+OceanBase Database V4.3.0 and later support adaptive cost models, which evaluate the overhead of an execution plan based on the actual hardware performance. Moreover, an API is provided for you to manually adjust cost coefficients and customize the optimization strategy when necessary.
+
+## References
+
+- [Rule-based query rewrite](200.rule-based-query-rewriting.md)
+- [Cost-based query rewrite](300.cost-based-query-rewriting.md)
diff --git a/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/200.rule-based-query-rewriting.md b/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/200.rule-based-query-rewriting.md
index df8d545196..4295b126cb 100644
--- a/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/200.rule-based-query-rewriting.md
+++ b/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/200.rule-based-query-rewriting.md
@@ -12,7 +12,7 @@ The optimizer usually executes subqueries in a nested way, which means that each
* After the join and filter conditions of the subquery are rewritten as the conditions of the parent query, more optimization options are available to the optimizer, such as conditional pushdown.
-Frequently applied methods to rewrite a subquery include view merging, subquery expansion, and rewriting `ANY` or `ALL` by using `MAX` or `MIN`.
+Frequently applied methods to rewrite a subquery include view merge, subquery expansion, and rewriting `ANY` or `ALL` by using `MAX` or `MIN`.
### View merge
@@ -71,7 +71,7 @@ This indicates that view merge increases options for the join order. For complex
### Subquery unnesting
-Subquery unnesting promotes the subquery in a `WHERE` condition to the parent query and unnests it as a join condition in parallel with the parent query. The rewrite deconstructs the subquery and changes the outer parent query to a multi-table join.
+Subquery unnesting promotes the subquery in a `WHERE` condition to the parent query and unnests it to produce a join condition in parallel with the parent query. The rewrite deconstructs the subquery and changes the outer parent query to a multi-table join.
The benefit is that the optimizer takes into account tables in the subquery when it selects access paths, join methods, and sorting methods, thus obtaining a better execution plan. Relevant subquery unnesting expressions include `NOT IN`, `IN`, `NOT EXISTS`, `EXISTS`, `ANY`, and `ALL`.
@@ -79,7 +79,7 @@ You can unnest a subquery by using the following methods:
* Rewrite conditions so that the generated join statement is enabled to return the same rows as the original statement.
-* Unnest the subquery as a semi join or anti join.
+* Unnest a subquery to produce a semi join or anti join.
In the following example, `t2.c2` is not unique and the statement is rewritten to a semi join. The new execution plan is as follows:
@@ -135,7 +135,7 @@ You can unnest a subquery by using the following methods:
access([t2.c2]), partitions(p0)
```
-* Unnest a subquery as an inner join.
+* Unnest a subquery to produce an inner join.
For query Q1 in the preceding example, if `t2.c2` is changed to `t2.c1`, which is the primary key, the output of the subquery is unique, and you can rewrite it to an inner join, as shown in the following example:
@@ -239,7 +239,7 @@ Here is an example:
obclient>SELECT t1.c1, t2.c2 FROM t1 LEFT JOIN t2 ON t1.c2 = t2.c2;
```
-This is an outer join, whose output row `t2.c2` may be `NULL`. If you add the condition `t2.c2 > 5` and filter by this condition, the output of `t2.c1` cannot be `NULL`, so that the outer join can be converted to an inner join.
+This is an outer join whose output column `t2.c2` may be `NULL`. If you add the condition `t2.c2 > 5`, data is filtered by this condition, and the output of `t2.c2` can no longer be `NULL`, so the outer join can be converted into an inner join.
```javascript
obclient> SELECT t1.c1, t2.c2 FROM t1 LEFT JOIN t2 ON t1.c2 = t2.c2 WHERE t2.c2 > 5;
@@ -252,7 +252,7 @@ obclient> SELECT t1.c1, t2.c2 FROM t1 INNER JOIN t2 ON t1.c2 = t2.c2
### HAVING condition elimination
-The `HAVING` condition can be merged into the `WHERE` conditions, with the `HAVING` condition eliminated. If no aggregate or `GROUP BY` operation exists in the query, the `HAVING` condition can be managed collectively with other conditions in `WHERE` for further optimization.
+If no aggregate or `GROUP BY` operation exists in the query, the `HAVING` condition can be merged into the `WHERE` conditions, with the `HAVING` condition eliminated. This way, the `HAVING` condition can be managed with other conditions in `WHERE` for further optimization.
```javascript
obclient>SELECT * FROM t1, t2 WHERE t1.c1 = t2.c1 HAVING t1.c2 > 1;
@@ -286,9 +286,9 @@ Outputs & filters:
### Equivalence relation deduction
-In the process of equivalence relation deduction, new conditional expressions are deduced from the transitivity of comparison operators to reduce the number of rows to be processed or to select a more efficient index.
+In the process of equivalence relation deduction, new conditional expressions are deduced based on the transitivity of comparison operators to reduce the number of rows to be processed or to select a more efficient index.
-OceanBase Database supports the deduction of equality join conditions. For example, `a = b AND a > 1 AND b > 1` can be deduced from `a = b AND a > 1`. In this case, if column `b` is indexed and selectivity for the condition `b > 1` is low in the index, it is possible to significantly improve the performance of accessing the table that contains column `b` by using the deducted condition.
+OceanBase Database supports the deduction of equi-join conditions. For example, assume that a table contains columns `a` and `b`. `a = b AND a > 1 AND b > 1` can be deduced from `a = b AND a > 1`. In this case, if column `b` is indexed and selectivity of the index is low for the condition `b > 1`, the performance of accessing the table that contains column `b` can be significantly improved by using the deduced condition.
The following example shows that the condition `t1.c1 = t2.c2 AND t1.c1 > 2` is equivalently deduced to `t1.c1 = t2.c2 AND t1.c1 > 2 AND t2.c2 > 2`. You can learn from the plan that `t2.c2` is pushed down to `TABLE SCAN` and the corresponding index of `t2.c2` is applied.
@@ -335,7 +335,7 @@ The following identically true and false conditions can be eliminated:
* `true or expr` = Identically true
-In the following example, for the condition `WHERE 0 > 1 AND c1 = 3`, `AND` is identically false because of the condition `0 > 1`. So, the SQL query is not executed and directly returns the result, making the execution process faster.
+In the following example, for the condition `WHERE 0 > 1 AND c1 = 3`, `AND` is identically false because of the condition `0 > 1`. Therefore, the result can be directly returned without the need to execute the SQL query, thereby accelerating the execution process.
```javascript
obclient> EXPLAIN EXTENDED_NOADDR SELECT * FROM t1 WHERE 0 > 1 AND c1 = 3;
@@ -386,7 +386,7 @@ Redundancy elimination for sorting removes redundant ordered items to reduce res
obclient> (SELECT c1,c2 FROM t1) UNION (SELECT c3,c4 FROM t2);
```
-### LIMIT pushdown subquery
+### LIMIT pushdown to subquery
`LIMIT` pushdown rewrite means to push the `LIMIT` down to the subquery. OceanBase Database supports pushing down `LIMIT` to a view (Example 1) or to a subquery that contains the `UNION` operator (Example 2), without changing the semantics.
@@ -406,9 +406,9 @@ obclient> (SELECT c1,c2 FROM t1) UNION ALL (SELECT c3,c4 FROM t2) LIMIT 5;
obclient> (SELECT c1,c2 FROM t1 LIMIT 5) UNION ALL (SELECT c3,c4 FROM t2 limit 5) LIMIT 5;
```
-### LIMIT pushdown outer join or cross join
+### LIMIT pushdown to outer join or cross join
-If an SQL statement does not contain the `WINDOW FUNCTION`, `DISTINCT`, `GROUP BY`, or `HAVING` clause and does not contain the `WHERE` or `ORDER BY` condition or the `WHERE` or `ORDER BY` condition is related only to one side of the join, you can push down the `LIMIT` statement to one side (outer join) or multiple sides (cross join) of the joined table. This rewrite method is called `LIMIT` pushdown outer join or cross join. Pushdown of the `LIMIT` statement can effectively reduce the number of rows that are joined, thus reducing the overheads of queries.
+If an SQL statement does not contain the `WINDOW FUNCTION`, `DISTINCT`, `GROUP BY`, or `HAVING` clause and does not contain the `WHERE` or `ORDER BY` condition or the `WHERE` or `ORDER BY` condition is related only to one side of the join, you can push down the `LIMIT` statement to one side (outer join) or multiple sides (cross join) of the joined table. This rewrite method is called `LIMIT` pushdown to outer join or cross join. Pushdown of the `LIMIT` statement can effectively reduce the number of rows that are joined, thus reducing the overheads of queries.
When the `LIMIT` statement is pushed down for an outer join, a view is encapsulated on the table pushed down. The following example shows query Q1 for a left outer join:
@@ -463,7 +463,7 @@ V1: SELECT 1 FROM t1 WHERE t1.c1 > 0 ORDER BY t1.c1 LIMIT 1;
V2: SELECT 1 FROM t2 LIMIT 1;
```
-Query Q3 does not contain the conditions in the `LIMIT` pushdown outer join and contains only the `WHERE` and `ORDER BY` conditions applied to table `t1`. In this case, you can create views `V1` and `V2` on tables `t1` and `t2`, respectively. Then, you can push down the `LIMIT` statement to the views, thus rewriting query Q3 into query Q4. The execution plan is rewritten as follows:
+Query Q3 does not contain the conditions in the `LIMIT` pushdown to outer join and contains only the `WHERE` and `ORDER BY` conditions applied to table `t1`. In this case, you can create views `V1` and `V2` on tables `t1` and `t2`, respectively. Then, you can push down the `LIMIT` statement to the views, thus rewriting query Q3 into query Q4. The execution plan is rewritten as follows:
```sql
=====================================================
@@ -496,12 +496,12 @@ Outputs & filters:
limit(1), offset(nil)
```
-An SQL query for a multi-table join that meets the preceding conditions can execute `LIMIT` pushdown outer joins and cross joins multiple times to increase the room for query rewrite and achieve better rewrite results.
+An SQL query for a multi-table join that meets the preceding conditions can execute `LIMIT` pushdown to outer joins and cross joins multiple times to increase the room for query rewrite and achieve better rewrite results.
### DISTINCT elimination
-* If the SELECT statement contains only constants, `DISTINCT` can be eliminated, with `LIMIT 1` added.
+* If the `SELECT` statement contains only constants, `DISTINCT` can be eliminated, with `LIMIT 1` added.
```sql
obclient> SELECT DISTINCT 1,2 FROM t1 ;
@@ -529,7 +529,7 @@ An SQL query for a multi-table join that meets the preceding conditions can exec
range_key([t1.c1]), range(MIN ; MAX)always true
```
-* `DISTINCT` can be eliminated if the SELECT statement contains a column that ensures uniqueness. In the following example, `(c1, c2)` is the primary key. It ensures the uniqueness of `c1`, `c2`, and `c3`. Therefore, `DISTINCT` can be eliminated.
+* `DISTINCT` can be eliminated if the `SELECT` statement contains a column that ensures uniqueness. In the following example, `(c1, c2)` is the primary key. It ensures the uniqueness of `c1`, `c2`, and `c3`. Therefore, `DISTINCT` can be eliminated.
```javascript
obclient> CREATE TABLE t2(c1 INT, c2 INT, c3 INT, PRIMARY KEY(c1, c2));
@@ -612,7 +612,7 @@ An SQL query for a multi-table join that meets the preceding conditions can exec
range_key([t1.c1]), range(MIN ; MAX)always true
```
-* If all parameters of `SELECT MIN` or `SELECT MAX` are constants and `GROUP BY` is not contained, you can rewrite the query to scan one row only by using the index, as shown in the following example:
+* If all parameters of `SELECT MIN` or `SELECT MAX` are constants and `GROUP BY` is not contained, you can rewrite the query to scan only one row by using the index, as shown in the following example:
```javascript
obclient> SELECT MAX(1) FROM t1;
From 7f9dfeef7168f83b9e2001e8609948c6ae610d55 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Wed, 17 Apr 2024 17:25:56 +0800
Subject: [PATCH 32/63] v430-beta-400.physical-connection-update
---
.../400.physical-connection.md | 99 ++++++++++++++-----
1 file changed, 73 insertions(+), 26 deletions(-)
diff --git a/en-US/700.reference/1200.database-proxy/400.physical-connection.md b/en-US/700.reference/1200.database-proxy/400.physical-connection.md
index 406bfe424c..81a8b6d44e 100644
--- a/en-US/700.reference/1200.database-proxy/400.physical-connection.md
+++ b/en-US/700.reference/1200.database-proxy/400.physical-connection.md
@@ -6,46 +6,93 @@
# Physical connections
-This topic describes the two methods that you can use to query network connections on an OceanBase Database Proxy (ODP).
+This topic describes how to query the number of sessions in the current tenant and view the internal attributes of all network connections on an OceanBase Database Proxy (ODP).
## Query the number of sessions in the current tenant and the session IDs
-Execute the `SHOW PROCESSLIST` statement to query the number of sessions of the current tenant and the session IDs.
+
+ Note
+ This topic takes OceanBase Database V4.3.0 and ODP V4.2.3 as an example to describe the query methods and output information.
+
-```sql
-obclient> SHOW PROCESSLIST;
-+------------+--------+------+-----------------------+-----------+-------------+-------------------+-------------------+-------+-------+
-| Id | Tenant | User | Host | db | trans_count | svr_session_count | state | tid | pid |
-+------------+--------+------+-----------------------+-----------+-------------+-------------------+-------------------+-------+-------+
-| 2147549229 | sys | root | XXX.XXX.XXX.XXX:48292 | oceanbase | 97 | 1 | MCS_ACTIVE_READER | 14531 | 14531 |
-+------------+--------+------+-----------------------+-----------+-------------+-------------------+-------------------+-------+-------+
-1 row in set
-```
+1. Connect to OceanBase Database by using ODP. In this example, the `root@sys` user is used.
-The following table describes the fields in the returned result.
+ ```shell
+ obclient -h10.10.10.1 -P2883 -uroot@sys -p -Doceanbase -A
+ ```
-| Field | Description |
-|-------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Id | The ID of the client session on ODP. This parameter is equivalent to the `cs_id` parameter. |
-| Tenant | The tenant. |
-| User | The user. |
-| Host | The IP address and port number of the user. |
-| db | The database. |
-| trans_count | The number of transactions transmitted by the ODP. |
-| svr_session_count | The number of sessions. |
-| state             | The status of the client session. Valid values: <ul><li>`MCS_INIT`: being initialized</li><li>`MCS_ACTIVE_READER`: active</li><li>`MCS_KEEP_ALIVE`: alive</li><li>`MCS_HALF_CLOSE`: half-closed</li><li>`MCS_CLOSED`: closed</li></ul> |
-| tid | The thread ID. |
-| pid | The process ID. |
+2. Execute the `SHOW PROCESSLIST` statement to query the number of sessions of the current tenant and the session IDs.
+
+ ```shell
+ obclient> SHOW PROCESSLIST;
+ ```
+
+ The result of the statement varies with the value of `client_session_id_version`.
+
+
+ Note
+
+      The `client_session_id_version` parameter specifies the computing logic for generating client session IDs. We recommend that you set the parameter to `2`. This value indicates using the new computing logic, which ensures that the generated client session IDs are globally unique.
+
+
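+    For example, you can set this parameter on the ODP by using the `ALTER proxyconfig` statement. The following statement is a sketch and assumes that you are connected to the ODP with a user that has the privilege to modify the ODP configuration:
+
+    ```shell
+    obclient> ALTER proxyconfig SET client_session_id_version = 2;
+    ```
+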
+ * When the `client_session_id_version` parameter is set to `2` on the ODP, the ODP uses the new computing logic to generate globally unique client session IDs. In this case, the result of the statement shows the information about the sessions on the OBServer node of the corresponding tenant. A sample query result is as follows:
+
+ ```shell
+ +------------+---------+---------------------+-----------+---------+------+--------+------------------+
+ | Id | User | Host | db | Command | Time | State | Info |
+ +------------+---------+---------------------+-----------+---------+------+--------+------------------+
+ | 1 | root | 10.10.10.1:39512 | oceanbase | Query | 0 | ACTIVE | SHOW PROCESSLIST |
+ | 3221501386 | proxyro | 10.10.10.1:37728 | oceanbase | Sleep | 13 | SLEEP | NULL |
+ +------------+---------+---------------------+-----------+---------+------+--------+------------------+
+ ```
+
+ The following table describes the fields in the returned result.
+
+ | Field | Description |
+ |-------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+    | Id                | The ID of the session, which is equivalent to the `cs_id` field. |
+ | User | The user to which the session belongs. |
+ | Host | The IP address and port number of the client initiating the session. |
+ | db | The database to which the session is currently connected. In Oracle mode, it is the schema name that is the same as the username. |
+ | Command | The type of command that the session is executing. |
+ | Time | The execution duration of the current command, in seconds. If the command is retried, the system resets and recalculates the execution duration. |
+ | State | The current status of the session. |
+ | Info | The statement that the session is executing. |
+
+ * When the `client_session_id_version` parameter is set to `1` on the ODP, the ODP uses the original computing logic to generate client session IDs. The information about the corresponding ODP is displayed in the query result. A sample query result is as follows:
+
+ ```shell
+ +------+--------+------+---------------------+-----------+-------------+-------------------+-------------------+-------+-------+
+ | Id | Tenant | User | Host | db | trans_count | svr_session_count | state | tid | pid |
+ +------+--------+------+---------------------+-----------+-------------+-------------------+-------------------+-------+-------+
+ | 9 | sys | root | 10.10.10.1:17890 | oceanbase | 0 | 1 | MCS_ACTIVE_READER | 48243 | 48243 |
+ +------+--------+------+---------------------+-----------+-------------+-------------------+-------------------+-------+-------+
+ ```
+
+ The following table describes the fields in the returned result.
+
+ | Field | Description |
+ |-------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+ | Id | The ID of the client session on the ODP. This parameter is equivalent to the `cs_id` parameter. |
+ | Tenant | The tenant to which the session belongs. |
+ | User | The user to which the session belongs. |
+ | Host | The IP address and port number of the client initiating the session. |
+ | db | The database to which the session is currently connected. In Oracle mode, it is the schema name that is the same as the username. |
+ | trans_count | The number of transactions transmitted by the ODP in the session. |
+ | svr_session_count | The number of sessions. |
+    | state             | The status of the session. Valid values: <ul><li>`MCS_INIT`: being initialized</li><li>`MCS_ACTIVE_READER`: active</li><li>`MCS_KEEP_ALIVE`: alive</li><li>`MCS_HALF_CLOSE`: half-closed</li><li>`MCS_CLOSED`: closed</li></ul> |
+ | tid | The thread ID. |
+ | pid | The process ID. |
## View the internal attributes of all network connections on the ODP
-Execute the `SHOW PROXYNET CONNECTION` statement to query the detailed internal attributes of all network connections on the ODP.
+Execute the `SHOW PROXYNET CONNECTION` statement to query the detailed internal attributes of network connections on the ODP.
```sql
SHOW PROXYNET CONNECTION [thread_id [LIMIT xx]]
```
-Take note of the following considerations:
+where
* If `thread_id` is not specified, the detailed internal attributes of all network connections on ODP are returned.
From d9e7d956d1261e71761c821c477ec6b9e845b95b Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Wed, 17 Apr 2024 17:36:10 +0800
Subject: [PATCH 33/63] v430-beta-400.column-operations-of-mysql-mode-update
---
.../400.column-operations-of-mysql-mode.md | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/400.column-operations-of-mysql-mode.md b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/400.column-operations-of-mysql-mode.md
index 78a8e9df4c..b101dd81ed 100644
--- a/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/400.column-operations-of-mysql-mode.md
+++ b/en-US/700.reference/500.sql-reference/100.sql-syntax/200.common-tenant-of-mysql-mode/700.ddl-function-of-mysql-mode/400.column-operations-of-mysql-mode.md
@@ -211,7 +211,8 @@ The two syntaxes for renaming a column are as follows:
* `new_col_name` specifies the new name of the column.
- * `data_type` specifies the new data type of the column to be renamed. You can specify the current data type or another data type. For more information, see [Change the data type of a column](#Change_the_data_type_of_a_column).
+ * `data_type` specifies the new data type of the column to be renamed. You can specify the current data type or another data type. For more information, see **Change the data type of a column** below.
+
* Rename a column only
@@ -283,7 +284,8 @@ where
* `column_name` specifies the name of the column to be relocated.
-* `data_type` specifies the new data type of the column to be relocated. You can specify the current data type or another data type. For more information, see [Change the data type of a column](#Change_the_data_type_of_a_column).
+* `data_type` specifies the new data type of the column to be relocated. You can specify the current data type or another data type. For more information, see **Change the data type of a column** below.
+
* `FIRST | BEFORE | AFTER` specifies the position to which the column is to be relocated. `FIRST` indicates the beginning of the table, and `BEFORE` or `AFTER` indicates the position before or after the specified column.
@@ -625,7 +627,9 @@ where
* `column_name` specifies the name of the column to which the constraint is to be added.
-* `data_type` specifies the new data type of the column. You can specify the current data type or another data type. For more information, see [Change the data type of a column](#Change_the_data_type_of_a_column).
+* `data_type` specifies the new data type of the column. You can specify the current data type or another data type. For more information, see **Change the data type of a column**.
+
* `AUTO_INCREMENT` specifies to set the column as an auto-increment column.
From 1986696fc3dd46ecaac1af7b36e2fa38ef886fac Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Wed, 17 Apr 2024 20:41:01 +0800
Subject: [PATCH 34/63] v430-beta-700.system-views-update-2
---
...400.cdb_ob_aux_statistics-of-sys-tenant.md | 57 +++++++++++++++++
...500.dba_ob_aux_statistics-of-sys-tenant.md | 56 +++++++++++++++++
..._trusted_root_certificate-of-sys-tenant.md | 35 +++++++++++
...tablet_compaction_history-of-sys-tenant.md | 31 +++++++--
.../17400.gv-ob_session-of-sys-tenant.md | 63 +++++++++++++++++++
.../17500.v-ob_session-of-sys-tenant.md | 63 +++++++++++++++++++
...tablet_compaction_history-of-sys-tenant.md | 31 +++++++--
...500.dba_ob_aux_statistics-of-mysql-mode.md | 56 +++++++++++++++++
...tablet_compaction_history-of-mysql-mode.md | 45 +++++++++----
.../17400.gv-ob_session-of-mysql-mode.md | 63 +++++++++++++++++++
.../17500.v-ob_session-of-mysql-mode.md | 63 +++++++++++++++++++
...tablet_compaction_history-of-mysql-mode.md | 33 ++++++++--
...00.dba_ob_aux_statistics-of-oracle-mode.md | 56 +++++++++++++++++
.../17400.gv-ob_session-of-oracle-mode.md | 63 +++++++++++++++++++
.../17500.v-ob_session-of-oracle-mode.md | 63 +++++++++++++++++++
...ablet_compaction_history-of-oracle-mode.md | 31 +++++++--
...ablet_compaction_history-of-oracle-mode.md | 31 +++++++--
17 files changed, 802 insertions(+), 38 deletions(-)
create mode 100644 en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/32400.cdb_ob_aux_statistics-of-sys-tenant.md
create mode 100644 en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/32500.dba_ob_aux_statistics-of-sys-tenant.md
create mode 100644 en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/32600.dba_ob_trusted_root_certificate-of-sys-tenant.md
create mode 100644 en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/17400.gv-ob_session-of-sys-tenant.md
create mode 100644 en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/17500.v-ob_session-of-sys-tenant.md
create mode 100644 en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/200.dictionary-view-of-mysql-mode/28500.dba_ob_aux_statistics-of-mysql-mode.md
create mode 100644 en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/17400.gv-ob_session-of-mysql-mode.md
create mode 100644 en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/17500.v-ob_session-of-mysql-mode.md
create mode 100644 en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/35600.dba_ob_aux_statistics-of-oracle-mode.md
create mode 100644 en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/17400.gv-ob_session-of-oracle-mode.md
create mode 100644 en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/17500.v-ob_session-of-oracle-mode.md
diff --git a/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/32400.cdb_ob_aux_statistics-of-sys-tenant.md b/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/32400.cdb_ob_aux_statistics-of-sys-tenant.md
new file mode 100644
index 0000000000..e26d8d2ae2
--- /dev/null
+++ b/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/32400.cdb_ob_aux_statistics-of-sys-tenant.md
@@ -0,0 +1,57 @@
+| description ||
+|---|---|
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
+
+# CDB_OB_AUX_STATISTICS
+
+
+Note
+This view was introduced in OceanBase Database V4.3.0.
+
+
+## Purpose
+
+The `CDB_OB_AUX_STATISTICS` view displays the auxiliary statistics of all tenants.
+
+## Columns
+
+| **Column** | **Type** | **Nullable?** | **Description** |
+| --- | --- | --- | --- |
+| TENANT_ID | bigint(20) | NO | The tenant ID. Valid values: <ul><li>`1`: the sys tenant.</li><li>Other values: a user tenant or a meta tenant.</li></ul> |
+| LAST_ANALYZED | timestamp(6) | NO | The timestamp of the last statistics collection. |
+| CPU_SPEED(MHZ) | bigint(20) | YES | The CPU speed of the current environment, in MHz. |
+| DISK_SEQ_READ_SPEED(MB/S) | bigint(20) | YES | The sequential read speed of the disk, in MB/s. |
+| DISK_RND_READ_SPEED(MB/S) | bigint(20) | YES | The random read speed of the disk, in MB/s. |
+| NETWORK_SPEED(MB/S) | bigint(20) | YES | The network transmission speed, in MB/s. |
+
+## Sample query
+
+1. Manually trigger auxiliary statistics collection.
+
+ ```shell
+ obclient [oceanbase]> CALL dbms_stats.gather_system_stats();
+ ```
+
+2. Query the auxiliary statistics of tenants.
+
+ ```shell
+ obclient [oceanbase]> SELECT * FROM oceanbase.CDB_OB_AUX_STATISTICS;
+ ```
+
+3. The query result is as follows:
+
+ ```shell
+    +-----------+----------------------------+----------------+---------------------------+---------------------------+---------------------+
+ | TENANT_ID | LAST_ANALYZED | CPU_SPEED(MHZ) | DISK_SEQ_READ_SPEED(MB/S) | DISK_RND_READ_SPEED(MB/S) | NETWORK_SPEED(MB/S) |
+ +-----------+----------------------------+----------------+---------------------------+---------------------------+---------------------+
+ | 1 | 2024-03-14 11:36:55.196214 | 2500 | 3257 | 407 | 1250 |
+ +-----------+----------------------------+----------------+---------------------------+---------------------------+---------------------+
+ 1 row in set (0.028 sec)
+ ```
+
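+You can also filter on `TENANT_ID` to inspect a single tenant. The tenant ID `1002` in the following sketch is a hypothetical example:
+
+```shell
+obclient [oceanbase]> SELECT * FROM oceanbase.CDB_OB_AUX_STATISTICS WHERE TENANT_ID = 1002;
+```
+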
+## References
+
+[DBA_OB_AUX_STATISTICS](32500.dba_ob_aux_statistics-of-sys-tenant.md)
\ No newline at end of file
diff --git a/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/32500.dba_ob_aux_statistics-of-sys-tenant.md b/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/32500.dba_ob_aux_statistics-of-sys-tenant.md
new file mode 100644
index 0000000000..c14bb41c0a
--- /dev/null
+++ b/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/32500.dba_ob_aux_statistics-of-sys-tenant.md
@@ -0,0 +1,56 @@
+| description ||
+|---|---|
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
+
+# DBA_OB_AUX_STATISTICS
+
+
+Note
+This view was introduced in OceanBase Database V4.3.0.
+
+
+## Purpose
+
+The `DBA_OB_AUX_STATISTICS` view displays the auxiliary statistics of each tenant.
+
+## Columns
+
+| **Column** | **Type** | **Nullable?** | **Description** |
+| --- | --- | --- | --- |
+| LAST_ANALYZED | timestamp(6) | NO | The timestamp of the last statistics collection. |
+| CPU_SPEED(MHZ) | bigint(20) | YES | The CPU speed of the current environment, in MHz. |
+| DISK_SEQ_READ_SPEED(MB/S) | bigint(20) | YES | The sequential read speed of the disk, in MB/s. |
+| DISK_RND_READ_SPEED(MB/S) | bigint(20) | YES | The random read speed of the disk, in MB/s. |
+| NETWORK_SPEED(MB/S) | bigint(20) | YES | The network transmission speed, in MB/s. |
+
+## Sample query
+
+1. Manually trigger auxiliary statistics collection.
+
+ ```shell
+ obclient [oceanbase]> CALL dbms_stats.gather_system_stats();
+ ```
+
+2. Query the auxiliary statistics of tenants.
+
+ ```shell
+ obclient [oceanbase]> SELECT * FROM oceanbase.DBA_OB_AUX_STATISTICS;
+ ```
+
+3. The query result is as follows:
+
+ ```shell
+    +----------------------------+----------------+---------------------------+---------------------------+---------------------+
+ | LAST_ANALYZED | CPU_SPEED(MHZ) | DISK_SEQ_READ_SPEED(MB/S) | DISK_RND_READ_SPEED(MB/S) | NETWORK_SPEED(MB/S) |
+ +----------------------------+----------------+---------------------------+---------------------------+---------------------+
+ | 2024-03-14 11:36:55.196214 | 2500 | 3257 | 407 | 1250 |
+ +----------------------------+----------------+---------------------------+---------------------------+---------------------+
+ 1 row in set (0.002 sec)
+ ```
+
+## References
+
+[CDB_OB_AUX_STATISTICS](32400.cdb_ob_aux_statistics-of-sys-tenant.md)
\ No newline at end of file
diff --git a/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/32600.dba_ob_trusted_root_certificate-of-sys-tenant.md b/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/32600.dba_ob_trusted_root_certificate-of-sys-tenant.md
new file mode 100644
index 0000000000..045c77fc0a
--- /dev/null
+++ b/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/32600.dba_ob_trusted_root_certificate-of-sys-tenant.md
@@ -0,0 +1,35 @@
+| description ||
+|---|---|
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
+
+# DBA_OB_TRUSTED_ROOT_CERTIFICATE
+
+
+ Note
+ This view was introduced in OceanBase Database V4.3.0.
+
+
+## Purpose
+
+The `DBA_OB_TRUSTED_ROOT_CERTIFICATE` view displays the list of trusted root CA certificates of the current OceanBase cluster, as well as the expiration time of the certificates.
+
+## Columns
+
+| **Column** | **Type** | **Nullable?** | **Description** |
+| --- | --- | --- | --- |
+| COMMON_NAME | varchar(256) | NO | The common name of the certificate. |
+| DESCRIPTION | varchar(256) | NO | The source and purpose of the certificate. |
+| CERT_EXPIRED_TIME | timestamp(6) | NO | The expiration time of the certificate. |
+
+## Sample query
+
+```shell
+obclient [oceanbase]> SELECT * FROM oceanbase.DBA_OB_TRUSTED_ROOT_CERTIFICATE;
+```
+
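+For example, the following sketch queries the trusted root certificates that will expire within the next 30 days. It assumes that the view is queried in the sys tenant, which runs in MySQL mode and therefore supports `DATE_ADD` and `NOW()`:
+
+```shell
+obclient [oceanbase]> SELECT COMMON_NAME, CERT_EXPIRED_TIME FROM oceanbase.DBA_OB_TRUSTED_ROOT_CERTIFICATE WHERE CERT_EXPIRED_TIME < DATE_ADD(NOW(), INTERVAL 30 DAY);
+```
+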
+## References
+
+[RPC connection authentication](../../../../600.manage/500.security-and-permissions/300.access-control/400.1rpc-connection-authentication.md)
\ No newline at end of file
diff --git a/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/1000.gv-ob_tablet_compaction_history-of-sys-tenant.md b/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/1000.gv-ob_tablet_compaction_history-of-sys-tenant.md
index 4ed2eec8a6..1f34eb402e 100644
--- a/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/1000.gv-ob_tablet_compaction_history-of-sys-tenant.md
+++ b/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/1000.gv-ob_tablet_compaction_history-of-sys-tenant.md
@@ -9,7 +9,7 @@
## Purpose
-The `GV$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction history.
+The `GV$OB_TABLET_COMPACTION_HISTORY` view displays information about historical tablet-level compactions.
Note
@@ -20,15 +20,15 @@ The `GV$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction
| Column | Type | **Nullable?** | Description |
|--------------------------------------|--------------|----------------|--------|
-| SVR_IP | varchar(46) | NO | The IP address of the server. |
-| SVR_PORT | bigint(20) | NO | The port number of the server. |
+| SVR_IP | varchar(46) | NO | The IP address of the OBServer node. |
+| SVR_PORT | bigint(20) | NO | The port number of the OBServer node. |
| TENANT_ID | bigint(20) | NO | The ID of the tenant. |
| LS_ID | bigint(20) | NO | The ID of the log stream. |
| TABLET_ID | bigint(20) | NO | The ID of the tablet. |
| TYPE | varchar(64) | NO | The compaction type. Valid values: `MINI`: minor or L0 compaction that converts MemTables into SSTables. `MAJOR`: major compaction. `MINI MINOR`: L1 compaction that combines multiple mini SSTables into one. `BUF MINOR`: buffer minor compaction that generates special buffer minor SSTables. |
| COMPACTION_SCN | bigint(20) unsigned | NO | The major compaction version. |
-| START_TIME | timestamp(6) | NO | The start time. |
-| FINISH_TIME | timestamp(6) | NO | The end time. |
+| START_TIME | timestamp(6) | NO | The start time of the compaction. |
+| FINISH_TIME | timestamp(6) | NO | The end time of the compaction. |
| TASK_ID | varchar(64) | NO | The task execution trace. |
| OCCUPY_SIZE | bigint(20) | NO | The data amount. |
| MACRO_BLOCK_COUNT | bigint(20) | NO | The number of macroblocks. |
@@ -46,3 +46,24 @@ The `GV$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction
| PARTICIPANT_TABLE | varchar(512) | NO | The table that participates in the compaction. |
| MACRO_ID_LIST | varchar(256) | NO | The list of output macroblocks. If the macroblock list is too long, it is not displayed. |
| COMMENTS | varchar(256) | NO | The history of failed compactions and the collection duration of the current compaction. |
+| START_CG_ID | bigint(20) | NO | The ID of the first column in the column group. **Note**: This column was introduced in OceanBase Database V4.3.0. |
+| END_CG_ID | bigint(20) | NO | The ID of the last column in the column group. **Note**: This column was introduced in OceanBase Database V4.3.0. |
+| KEPT_SNAPSHOT | varchar(128) | NO | The multi-version snapshot for the compaction. **Note**: This column was introduced in OceanBase Database V4.3.0. |
+| MERGE_LEVEL | varchar(64) | NO | Indicates whether a major compaction was performed or microblocks were reused. **Note**: This column was introduced in OceanBase Database V4.3.0. |
+
+## Sample query
+
+```shell
+obclient [oceanbase]> SELECT * FROM oceanbase.GV$OB_TABLET_COMPACTION_HISTORY LIMIT 1;
+```
+
+The query result is as follows:
+
+```shell
++----------------+----------+-----------+-------+-----------+-------------+---------------------+----------------------------+----------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+----------------------------------------+---------------+-------------------------------------------------------------------------------------+-------------+-----------+-----------------------------------------------------------------+-------------------+
+| SVR_IP | SVR_PORT | TENANT_ID | LS_ID | TABLET_ID | TYPE | COMPACTION_SCN | START_TIME | FINISH_TIME | TASK_ID | OCCUPY_SIZE | MACRO_BLOCK_COUNT | MULTIPLEXED_MACRO_BLOCK_COUNT | NEW_MICRO_COUNT_IN_NEW_MACRO | MULTIPLEXED_MICRO_COUNT_IN_NEW_MACRO | TOTAL_ROW_COUNT | INCREMENTAL_ROW_COUNT | COMPRESSION_RATIO | NEW_FLUSH_DATA_RATE | PROGRESSIVE_COMPACTION_ROUND | PROGRESSIVE_COMPACTION_NUM | PARALLEL_DEGREE | PARALLEL_INFO | PARTICIPANT_TABLE | MACRO_ID_LIST | COMMENTS | START_CG_ID | END_CG_ID | KEPT_SNAPSHOT | MERGE_LEVEL |
++----------------+----------+-----------+-------+-----------+-------------+---------------------+----------------------------+----------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+----------------------------------------+---------------+-------------------------------------------------------------------------------------+-------------+-----------+-----------------------------------------------------------------+-------------------+
+| xx.xx.xx.xx | 2882 | 1 | 1 | 60476 | MAJOR_MERGE | 1710352801420115046 | 2024-03-14 02:00:47.829176 | 2024-03-14 02:00:47.838409 | YB42AC1E87E2-0006138731B1172C-0-0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | - | table_cnt=1,[MAJOR]snapshot_version=1; | | comment="merge_reason="TENANT_MAJOR";time=add_time:1710352847829041|total=9.23ms;"; | 0 | 0 | {type:"SNAPSHOT_FOR_LS_RESERVED", snapshot:1710351031448088252} | MICRO_BLOCK_LEVEL |
++----------------+----------+-----------+-------+-----------+-------------+---------------------+----------------------------+----------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+----------------------------------------+---------------+-------------------------------------------------------------------------------------+-------------+-----------+-----------------------------------------------------------------+-------------------+
+1 row in set (0.004 sec)
+```
\ No newline at end of file
diff --git a/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/17400.gv-ob_session-of-sys-tenant.md b/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/17400.gv-ob_session-of-sys-tenant.md
new file mode 100644
index 0000000000..bea546645f
--- /dev/null
+++ b/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/17400.gv-ob_session-of-sys-tenant.md
@@ -0,0 +1,63 @@
+| description ||
+|---|---|
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
+
+# GV$OB_SESSION
+
+
+Note
+This view was introduced in OceanBase Database V4.3.0.
+
+
+## Purpose
+
+The `GV$OB_SESSION` view displays information about sessions created on all OBServer nodes.
+
+## Columns
+
+| **Column** | **Type** | **Nullable?** | **Description** |
+| --- | --- | --- | --- |
+| ID | bigint(20) unsigned | NO | The ID of the session. |
+| USER | varchar(32) | NO | The user to which the session belongs. |
+| TENANT | varchar(128) | NO | The name of the tenant accessed. |
+| HOST | varchar(128) | NO | The IP address and port number of the client that initiated the session. If OceanBase Database Proxy (ODP) was used to connect to the database, the value indicates the host IP address and port number of ODP. |
+| DB | varchar(128) | YES | The name of the database to which the session connects. |
+| COMMAND | varchar(4096) | NO | The type of the statement being executed in the session. |
+| SQL_ID | varchar(32) | NO | The ID of the SQL statement. |
+| TIME | BIGINT(21) | NO | The execution time of the current statement, in seconds. If a retry occurs, the value is cleared and recalculated. |
+| STATE | varchar(128) | YES | The current status of the session. |
+| INFO | varchar(262143) | YES | The statement being executed in the session. |
+| SVR_IP | varchar(46) | NO | The IP address of the OBServer node. |
+| SVR_PORT | bigint(20) | NO | The remote procedure call (RPC) port number of the OBServer node. |
+| SQL_PORT | bigint(20) | NO | The SQL port number of the OBServer node. |
+| PROXY_SESSID | bigint(20) unsigned | YES | The session ID of ODP, if ODP is used for connection. |
+| USER_CLIENT_IP | varchar(46) | YES | The IP address of the user client. |
+| USER_HOST | varchar(128) | YES | The hostname of the user client. |
+| TRANS_ID | bigint(20) unsigned | NO | The transaction ID. |
+| THREAD_ID | bigint(20) unsigned | NO | The thread ID. |
+| TRACE_ID | varchar(64) | YES | The trace ID. |
+| REF_COUNT | bigint(20) | NO | The reference count of the connection. |
+| BACKTRACE | varchar(16384) | YES | The call stack for connection references. |
+| TRANS_STATE | varchar(32) | YES | The transaction status. |
+| TOTAL_CPU_TIME | BIGINT(21) | NO | The CPU time spent on executing the current statement, in seconds. |
+
+## Sample query
+
+```shell
+obclient [oceanbase]> SELECT * FROM oceanbase.GV$OB_SESSION LIMIT 2;
+```
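+
+You can also filter on the columns described above to narrow down the output. For example, the following sketch lists the sessions that belong to the `root` user:
+
+```shell
+obclient [oceanbase]> SELECT ID, USER, HOST, DB, STATE FROM oceanbase.GV$OB_SESSION WHERE USER = 'root';
+```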
+
+
+## References
+
+[V$OB_SESSION](17500.v-ob_session-of-sys-tenant.md)
diff --git a/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/17500.v-ob_session-of-sys-tenant.md b/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/17500.v-ob_session-of-sys-tenant.md
new file mode 100644
index 0000000000..754ad4ae7e
--- /dev/null
+++ b/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/17500.v-ob_session-of-sys-tenant.md
@@ -0,0 +1,63 @@
+| description ||
+|---|---|
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
+
+# V$OB_SESSION
+
+
+Note
+This view was introduced in OceanBase Database V4.3.0.
+
+
+## Purpose
+
+The `V$OB_SESSION` view displays information about sessions created on the current OBServer node.
+
+## Columns
+
+| **Column** | **Type** | **Nullable?** | **Description** |
+| --- | --- | --- | --- |
+| ID | bigint(20) unsigned | NO | The ID of the session. |
+| USER | varchar(32) | NO | The user to which the session belongs. |
+| TENANT | varchar(128) | NO | The name of the tenant accessed. |
+| HOST | varchar(128) | NO | The IP address and port number of the client that initiated the session. If OceanBase Database Proxy (ODP) was used to connect to the database, the value indicates the host IP address and port number of ODP. |
+| DB | varchar(128) | NO | The name of the database to which the session connects. |
+| COMMAND | varchar(4096) | NO | The type of the statement being executed in the session. |
+| SQL_ID | varchar(32) | NO | The ID of the SQL statement. |
+| TIME | BIGINT(21) | NO | The execution time of the current statement, in seconds. If a retry occurs, the value is cleared and recalculated. |
+| STATE | varchar(128) | NO | The current status of the session. |
+| INFO | varchar(262143) | NO | The statement being executed in the session. |
+| SVR_IP | varchar(46) | NO | The IP address of the OBServer node. |
+| SVR_PORT | bigint(20) | NO | The remote procedure call (RPC) port number of the OBServer node. |
+| SQL_PORT | bigint(20) | NO | The SQL port number of the OBServer node. |
+| PROXY_SESSID | bigint(20) unsigned | NO | The session ID of ODP, if ODP is used for connection. |
+| USER_CLIENT_IP | varchar(46) | NO | The IP address of the user client. |
+| USER_HOST | varchar(128) | NO | The hostname of the user client. |
+| TRANS_ID | bigint(20) unsigned | NO | The transaction ID. |
+| THREAD_ID | bigint(20) unsigned | NO | The thread ID. |
+| TRACE_ID | varchar(64) | NO | The trace ID. |
+| REF_COUNT | bigint(20) | NO | The reference count of the connection. |
+| BACKTRACE | varchar(16384) | NO | The call stack for connection references. |
+| TRANS_STATE | varchar(32) | NO | The transaction status. |
+| TOTAL_CPU_TIME | BIGINT(21) | NO | The CPU time spent on executing the current statement, in seconds. |
+
+## Sample query
+
+```shell
+obclient [oceanbase]> SELECT * FROM oceanbase.V$OB_SESSION LIMIT 2;
+```
+
+
+## References
+
+[GV$OB_SESSION](17400.gv-ob_session-of-sys-tenant.md)
diff --git a/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/5200.v-ob_tablet_compaction_history-of-sys-tenant.md b/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/5200.v-ob_tablet_compaction_history-of-sys-tenant.md
index 2e5f2bbe18..8f6f1be515 100644
--- a/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/5200.v-ob_tablet_compaction_history-of-sys-tenant.md
+++ b/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/5200.v-ob_tablet_compaction_history-of-sys-tenant.md
@@ -9,7 +9,7 @@
## Purpose
-The `V$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction history.
+The `V$OB_TABLET_COMPACTION_HISTORY` view displays information about historical tablet-level compactions.
Note
@@ -20,15 +20,15 @@ The `V$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction h
| Column | Type | **Nullable?** | Description |
|--------------------------------------|--------------|----------------|--------|
-| SVR_IP | varchar(46) | NO | The IP address of the server. |
-| SVR_PORT | bigint(20) | NO | The port number of the server. |
+| SVR_IP | varchar(46) | NO | The IP address of the OBServer node. |
+| SVR_PORT | bigint(20) | NO | The port number of the OBServer node. |
| TENANT_ID | bigint(20) | NO | The ID of the tenant. |
| LS_ID | bigint(20) | NO | The ID of the log stream. |
| TABLET_ID | bigint(20) | NO | The ID of the tablet. |
| TYPE | varchar(64) | NO | The compaction type. Valid values: `MINI`: minor or L0 compaction that converts MemTables into SSTables. `MAJOR`: major compaction. `MINI MINOR`: L1 compaction that combines multiple mini SSTables into one. `BUF MINOR`: buffer minor compaction that generates special buffer minor SSTables. |
| COMPACTION_SCN | bigint(20) unsigned | NO | The major compaction version. |
-| START_TIME | timestamp(6) | NO | The start time. |
-| FINISH_TIME | timestamp(6) | NO | The end time. |
+| START_TIME | timestamp(6) | NO | The start time of the compaction. |
+| FINISH_TIME | timestamp(6) | NO | The end time of the compaction. |
| TASK_ID | varchar(64) | NO | The task execution trace. |
| OCCUPY_SIZE | bigint(20) | NO | The data amount. |
| MACRO_BLOCK_COUNT | bigint(20) | NO | The number of macroblocks. |
@@ -46,3 +46,24 @@ The `V$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction h
| PARTICIPANT_TABLE | varchar(512) | NO | The table that participates in the compaction. |
| MACRO_ID_LIST | varchar(256) | NO | The list of output macroblocks. If the macroblock list is too long, it is not displayed. |
| COMMENTS | varchar(256) | NO | The history of failed compactions and the collection duration of the current compaction. |
+| START_CG_ID | bigint(20) | NO | The ID of the first column in the column group. **Note**: This column was introduced in OceanBase Database V4.3.0. |
+| END_CG_ID | bigint(20) | NO | The ID of the last column in the column group. **Note**: This column was introduced in OceanBase Database V4.3.0. |
+| KEPT_SNAPSHOT | varchar(128) | NO | The multi-version snapshot for the compaction. **Note**: This column was introduced in OceanBase Database V4.3.0. |
+| MERGE_LEVEL | varchar(64) | NO | Indicates whether a major compaction was performed or microblocks were reused. **Note**: This column was introduced in OceanBase Database V4.3.0. |
+
+## Sample query
+
+```shell
+obclient [oceanbase]> SELECT * FROM oceanbase.V$OB_TABLET_COMPACTION_HISTORY LIMIT 1;
+```
+
+The query result is as follows:
+
+```shell
++----------------+----------+-----------+-------+-----------+-------------+---------------------+----------------------------+----------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+----------------------------------------+---------------+-------------------------------------------------------------------------------------+-------------+-----------+-----------------------------------------------------------------+-------------------+
+| SVR_IP | SVR_PORT | TENANT_ID | LS_ID | TABLET_ID | TYPE | COMPACTION_SCN | START_TIME | FINISH_TIME | TASK_ID | OCCUPY_SIZE | MACRO_BLOCK_COUNT | MULTIPLEXED_MACRO_BLOCK_COUNT | NEW_MICRO_COUNT_IN_NEW_MACRO | MULTIPLEXED_MICRO_COUNT_IN_NEW_MACRO | TOTAL_ROW_COUNT | INCREMENTAL_ROW_COUNT | COMPRESSION_RATIO | NEW_FLUSH_DATA_RATE | PROGRESSIVE_COMPACTION_ROUND | PROGRESSIVE_COMPACTION_NUM | PARALLEL_DEGREE | PARALLEL_INFO | PARTICIPANT_TABLE | MACRO_ID_LIST | COMMENTS | START_CG_ID | END_CG_ID | KEPT_SNAPSHOT | MERGE_LEVEL |
++----------------+----------+-----------+-------+-----------+-------------+---------------------+----------------------------+----------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+----------------------------------------+---------------+-------------------------------------------------------------------------------------+-------------+-----------+-----------------------------------------------------------------+-------------------+
+| xx.xx.xx.xx | 2882 | 1 | 1 | 60476 | MAJOR_MERGE | 1710352801420115046 | 2024-03-14 02:00:47.829176 | 2024-03-14 02:00:47.838409 | YB42AC1E87E2-0006138731B1172C-0-0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | - | table_cnt=1,[MAJOR]snapshot_version=1; | | comment="merge_reason="TENANT_MAJOR";time=add_time:1710352847829041|total=9.23ms;"; | 0 | 0 | {type:"SNAPSHOT_FOR_LS_RESERVED", snapshot:1710351031448088252} | MICRO_BLOCK_LEVEL |
++----------------+----------+-----------+-------+-----------+-------------+---------------------+----------------------------+----------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+----------------------------------------+---------------+-------------------------------------------------------------------------------------+-------------+-----------+-----------------------------------------------------------------+-------------------+
+1 row in set (0.005 sec)
+```
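+
+In practice, you usually filter the history rather than scan it all. A minimal sketch, assuming the `MAJOR_MERGE` value shown in the sample output above; all referenced columns come from the table in this topic:
+
+```shell
+obclient [oceanbase]> SELECT TABLET_ID, START_TIME, FINISH_TIME, OCCUPY_SIZE, COMPRESSION_RATIO FROM oceanbase.V$OB_TABLET_COMPACTION_HISTORY WHERE TYPE = 'MAJOR_MERGE' ORDER BY FINISH_TIME DESC LIMIT 10;
+```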
diff --git a/en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/200.dictionary-view-of-mysql-mode/28500.dba_ob_aux_statistics-of-mysql-mode.md b/en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/200.dictionary-view-of-mysql-mode/28500.dba_ob_aux_statistics-of-mysql-mode.md
new file mode 100644
index 0000000000..1134c63c45
--- /dev/null
+++ b/en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/200.dictionary-view-of-mysql-mode/28500.dba_ob_aux_statistics-of-mysql-mode.md
@@ -0,0 +1,56 @@
+| description ||
+|---|---|
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
+
+# DBA_OB_AUX_STATISTICS
+
+
+**Note**: This view is introduced since OceanBase Database V4.3.0.
+
+
+## Purpose
+
+The `DBA_OB_AUX_STATISTICS` view displays the auxiliary statistics of each tenant.
+
+## Columns
+
+| **Column** | **Type** | **Nullable?** | **Description** |
+| --- | --- | --- | --- |
+| LAST_ANALYZED | timestamp(6) | NO | The timestamp of the last statistics collection. |
+| CPU_SPEED(MHZ) | bigint(20) | YES | The CPU speed of the current environment, in MHz. |
+| DISK_SEQ_READ_SPEED(MB/S) | bigint(20) | YES | The sequential read speed of the disk, in MB/s. |
+| DISK_RND_READ_SPEED(MB/S) | bigint(20) | YES | The random read speed of the disk, in MB/s. |
+| NETWORK_SPEED(MB/S) | bigint(20) | YES | The network transmission speed, in MB/s. |
+
+## Sample query
+
+1. Manually enable auxiliary statistics collection.
+
+ ```shell
+ obclient [oceanbase]> CALL dbms_stats.gather_system_stats();
+ ```
+
+2. Query the auxiliary statistics of tenants.
+
+ ```shell
+ obclient [oceanbase]> SELECT * FROM oceanbase.DBA_OB_AUX_STATISTICS;
+ ```
+
+3. The query result is as follows:
+
+ ```shell
+    +----------------------------+----------------+---------------------------+---------------------------+---------------------+
+ | LAST_ANALYZED | CPU_SPEED(MHZ) | DISK_SEQ_READ_SPEED(MB/S) | DISK_RND_READ_SPEED(MB/S) | NETWORK_SPEED(MB/S) |
+ +----------------------------+----------------+---------------------------+---------------------------+---------------------+
+ | 2024-03-14 11:36:55.196214 | 2500 | 3257 | 407 | 1250 |
+ +----------------------------+----------------+---------------------------+---------------------------+---------------------+
+ 1 row in set (0.002 sec)
+ ```
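+
+Column names in this view contain parentheses and slashes, so quote them with backticks when you select them individually. A minimal sketch using the column names from the table above:
+
+```shell
+obclient [oceanbase]> SELECT LAST_ANALYZED, `CPU_SPEED(MHZ)`, `DISK_SEQ_READ_SPEED(MB/S)` FROM oceanbase.DBA_OB_AUX_STATISTICS;
+```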
+
+## References
+
+[CDB_OB_AUX_STATISTICS](../../300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/32400.cdb_ob_aux_statistics-of-sys-tenant.md)
\ No newline at end of file
diff --git a/en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/1000.gv-ob_tablet_compaction_history-of-mysql-mode.md b/en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/1000.gv-ob_tablet_compaction_history-of-mysql-mode.md
index 3e24875bb8..afbbb557cb 100644
--- a/en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/1000.gv-ob_tablet_compaction_history-of-mysql-mode.md
+++ b/en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/1000.gv-ob_tablet_compaction_history-of-mysql-mode.md
@@ -7,28 +7,28 @@
# GV$OB_TABLET_COMPACTION_HISTORY
-## Purpose
-
-The `GV$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction history.
-
Note
This view is introduced since OceanBase Database V4.0.0.
+## Purpose
+
+The `GV$OB_TABLET_COMPACTION_HISTORY` view displays information about historical tablet-level compactions.
+
## Columns
-| Column | Type | **Nullable?** | Description |
-|--------------------------------------|--------------|----------------|--------|
-| SVR_IP | varchar(46) | NO | The IP address of the server. |
-| SVR_PORT | bigint(20) | NO | The port number of the server. |
+| Column | Type | Nullable? | Description |
+|--------------|--------------|----------------|--------|
+| SVR_IP | varchar(46) | NO | The IP address of the OBServer node. |
+| SVR_PORT | bigint(20) | NO | The port number of the OBServer node. |
| TENANT_ID | bigint(20) | NO | The ID of the tenant. |
| LS_ID | bigint(20) | NO | The ID of the log stream. |
| TABLET_ID | bigint(20) | NO | The ID of the tablet. |
| TYPE | varchar(64) | NO | The compaction type. Valid values: `MINI`: minor or L0 compaction that converts MemTables into SSTables. `MAJOR`: major compaction. `MINI MINOR`: L1 compaction that combines multiple mini SSTables into one. `BUF MINOR`: buffer minor compaction that generates special buffer minor SSTables. |
| COMPACTION_SCN | bigint(20) unsigned | NO | The major compaction version. |
-| START_TIME | timestamp(6) | NO | The start time. |
-| FINISH_TIME | timestamp(6) | NO | The end time. |
+| START_TIME | timestamp(6) | NO | The start time of the compaction. |
+| FINISH_TIME | timestamp(6) | NO | The end time of the compaction. |
| TASK_ID | varchar(64) | NO | The task execution trace. |
| OCCUPY_SIZE | bigint(20) | NO | The data amount. |
| MACRO_BLOCK_COUNT | bigint(20) | NO | The number of macroblocks. |
@@ -38,11 +38,32 @@ The `GV$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction
| TOTAL_ROW_COUNT | bigint(20) | NO | The total number of rows. |
| INCREMENTAL_ROW_COUNT | bigint(20) | NO | The number of new rows. |
| COMPRESSION_RATIO | double | NO | The compression ratio of new data, which is the ratio of the size of compressed new macroblock data to the data size before compression. |
-| NEW_FLUSH_DATA_RATE | bigint(20) | NO | The output speed of new data. Unit: KB/s. |
+| NEW_FLUSH_DATA_RATE | bigint(20) | NO | The output speed of new data, in KB/s. |
| PROGRESSIVE_COMPACTION_ROUND | bigint(20) | NO | The current compaction round during a progressive compaction. For a full compaction, the value is -1. |
| PROGRESSIVE_COMPACTION_NUM | bigint(20) | NO | The total number of compaction rounds in a progressive compaction. |
| PARALLEL_DEGREE | bigint(20) | NO | The degree of parallelism. |
| PARALLEL_INFO | varchar(512) | NO | The parallel task information, including the scanned data amount, operating time, and output data amount of the parallel task. |
| PARTICIPANT_TABLE | varchar(512) | NO | The table that participates in the compaction. |
| MACRO_ID_LIST | varchar(256) | NO | The list of output macroblocks. If the macroblock list is too long, it is not displayed. |
-| COMMENTS | varchar(256) | NO | The history of failed compactions and the collection duration of the current compaction. |
+| COMMENTS | varchar(1024) | NO | The comments that describe the problems encountered during scheduling or execution. |
+| START_CG_ID | bigint(20) | NO | The ID of the first column in the column group. **Note**: This column is introduced since OceanBase Database V4.3.0. |
+| END_CG_ID | bigint(20) | NO | The ID of the last column in the column group. **Note**: This column is introduced since OceanBase Database V4.3.0. |
+| KEPT_SNAPSHOT | varchar(128) | NO | The multi-version snapshot for the compaction. **Note**: This column is introduced since OceanBase Database V4.3.0. |
+| MERGE_LEVEL | varchar(64) | NO | Indicates whether a major compaction was performed or microblocks were reused. **Note**: This column is introduced since OceanBase Database V4.3.0. |
+
+## Sample query
+
+```shell
+obclient [oceanbase]> SELECT * FROM oceanbase.GV$OB_TABLET_COMPACTION_HISTORY LIMIT 1;
+```
+
+The query result is as follows:
+
+```shell
++----------------+----------+-----------+-------+-----------+-------------+---------------------+----------------------------+----------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+----------------------------------------+---------------+-------------------------------------------------------------------------------------+-------------+-----------+-----------------------------------------------------------------+-------------------+
+| SVR_IP | SVR_PORT | TENANT_ID | LS_ID | TABLET_ID | TYPE | COMPACTION_SCN | START_TIME | FINISH_TIME | TASK_ID | OCCUPY_SIZE | MACRO_BLOCK_COUNT | MULTIPLEXED_MACRO_BLOCK_COUNT | NEW_MICRO_COUNT_IN_NEW_MACRO | MULTIPLEXED_MICRO_COUNT_IN_NEW_MACRO | TOTAL_ROW_COUNT | INCREMENTAL_ROW_COUNT | COMPRESSION_RATIO | NEW_FLUSH_DATA_RATE | PROGRESSIVE_COMPACTION_ROUND | PROGRESSIVE_COMPACTION_NUM | PARALLEL_DEGREE | PARALLEL_INFO | PARTICIPANT_TABLE | MACRO_ID_LIST | COMMENTS | START_CG_ID | END_CG_ID | KEPT_SNAPSHOT | MERGE_LEVEL |
++----------------+----------+-----------+-------+-----------+-------------+---------------------+----------------------------+----------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+----------------------------------------+---------------+-------------------------------------------------------------------------------------+-------------+-----------+-----------------------------------------------------------------+-------------------+
+| xx.xx.xx.xx | 2882 | 1 | 1 | 60476 | MAJOR_MERGE | 1710352801420115046 | 2024-03-14 02:00:47.829176 | 2024-03-14 02:00:47.838409 | YB42AC1E87E2-0006138731B1172C-0-0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | - | table_cnt=1,[MAJOR]snapshot_version=1; | | comment="merge_reason="TENANT_MAJOR";time=add_time:1710352847829041|total=9.23ms;"; | 0 | 0 | {type:"SNAPSHOT_FOR_LS_RESERVED", snapshot:1710351031448088252} | MICRO_BLOCK_LEVEL |
++----------------+----------+-----------+-------+-----------+-------------+---------------------+----------------------------+----------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+----------------------------------------+---------------+-------------------------------------------------------------------------------------+-------------+-----------+-----------------------------------------------------------------+-------------------+
+1 row in set (0.004 sec)
+```
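+
+To spot slow compactions across all nodes, you can compute each run's duration. A minimal sketch, assuming the MySQL-compatible `TIMESTAMPDIFF` function is available in your version:
+
+```shell
+obclient [oceanbase]> SELECT SVR_IP, TABLET_ID, TYPE, TIMESTAMPDIFF(MICROSECOND, START_TIME, FINISH_TIME) AS DURATION_US FROM oceanbase.GV$OB_TABLET_COMPACTION_HISTORY ORDER BY DURATION_US DESC LIMIT 5;
+```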
diff --git a/en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/17400.gv-ob_session-of-mysql-mode.md b/en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/17400.gv-ob_session-of-mysql-mode.md
new file mode 100644
index 0000000000..fd00096aea
--- /dev/null
+++ b/en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/17400.gv-ob_session-of-mysql-mode.md
@@ -0,0 +1,63 @@
+| description ||
+|---|---|
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
+
+# GV$OB_SESSION
+
+
+**Note**: This view is introduced since OceanBase Database V4.3.0.
+
+
+## Purpose
+
+The `GV$OB_SESSION` view displays information about sessions created on all OBServer nodes.
+
+## Columns
+
+| **Column** | **Type** | **Nullable?** | **Description** |
+| --- | --- | --- | --- |
+| ID | bigint(20) unsigned | NO | The ID of the session. |
+| USER | varchar(32) | NO | The user to which the session belongs. |
+| TENANT | varchar(128) | NO | The name of the tenant accessed. |
+| HOST | varchar(128) | NO | The IP address and port number of the client that initiated the session. If OceanBase Database Proxy (ODP) was used to connect to the database, the value indicates the host IP address and port number of ODP. |
+| DB | varchar(128) | YES | The name of the database to which the session connects. |
+| COMMAND | varchar(4096) | NO | The type of the statement being executed in the session. |
+| SQL_ID | varchar(32) | NO | The ID of the SQL statement. |
+| TIME | BIGINT(21) | NO | The execution time of the current statement, in seconds. If a retry occurs, the value is cleared and recalculated. |
+| STATE | varchar(128) | YES | The current status of the session. |
+| INFO | varchar(262143) | YES | The statement being executed in the session. |
+| SVR_IP | varchar(46) | NO | The IP address of the OBServer node. |
+| SVR_PORT | bigint(20) | NO | The remote procedure call (RPC) port number of the OBServer node. |
+| SQL_PORT | bigint(20) | NO | The SQL port number of the OBServer node. |
+| PROXY_SESSID | bigint(20) unsigned | YES | The session ID of ODP, if ODP is used for connection. |
+| USER_CLIENT_IP | varchar(46) | YES | The IP address of the user client. |
+| USER_HOST | varchar(128) | YES | The hostname of the user client. |
+| TRANS_ID | bigint(20) unsigned | NO | The transaction ID. |
+| THREAD_ID | bigint(20) unsigned | NO | The thread ID. |
+| TRACE_ID | varchar(64) | YES | The trace ID. |
+| REF_COUNT | bigint(20) | NO | The reference count of the connection. |
+| BACKTRACE | varchar(16384) | YES | The call stack for connection references. |
+| TRANS_STATE | varchar(32) | YES | The transaction status. |
+| TOTAL_CPU_TIME | BIGINT(21) | NO | The CPU time spent on executing the current statement, in seconds. |
+
+## Sample query
+
+```shell
+obclient [oceanbase]> SELECT * FROM oceanbase.GV$OB_SESSION LIMIT 2;
+```
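+
+To focus on active sessions, filter on the columns above. A minimal sketch; the `Sleep` literal follows MySQL processlist conventions and is an assumption, so verify the `COMMAND` values in your deployment (`USER` and `TIME` are quoted defensively because they are also keywords):
+
+```shell
+obclient [oceanbase]> SELECT ID, `USER`, DB, COMMAND, `TIME`, INFO FROM oceanbase.GV$OB_SESSION WHERE COMMAND != 'Sleep' AND `TIME` > 10;
+```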
+
+
+## References
+
+[V$OB_SESSION](17500.v-ob_session-of-mysql-mode.md)
diff --git a/en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/17500.v-ob_session-of-mysql-mode.md b/en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/17500.v-ob_session-of-mysql-mode.md
new file mode 100644
index 0000000000..58d344da81
--- /dev/null
+++ b/en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/17500.v-ob_session-of-mysql-mode.md
@@ -0,0 +1,63 @@
+| description ||
+|---|---|
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
+
+# V$OB_SESSION
+
+
+**Note**: This view is introduced since OceanBase Database V4.3.0.
+
+
+## Purpose
+
+The `V$OB_SESSION` view displays information about sessions created on the current OBServer node.
+
+## Columns
+
+| **Column** | **Type** | **Nullable?** | **Description** |
+| --- | --- | --- | --- |
+| ID | bigint(20) unsigned | NO | The ID of the session. |
+| USER | varchar(32) | NO | The user to which the session belongs. |
+| TENANT | varchar(128) | NO | The name of the tenant accessed. |
+| HOST | varchar(128) | NO | The IP address and port number of the client that initiated the session. If OceanBase Database Proxy (ODP) was used to connect to the database, the value indicates the host IP address and port number of ODP. |
+| DB | varchar(128) | NO | The name of the database to which the session connects. |
+| COMMAND | varchar(4096) | NO | The type of the statement being executed in the session. |
+| SQL_ID | varchar(32) | NO | The ID of the SQL statement. |
+| TIME | BIGINT(21) | NO | The execution time of the current statement, in seconds. If a retry occurs, the value is cleared and recalculated. |
+| STATE | varchar(128) | NO | The current status of the session. |
+| INFO | varchar(262143) | NO | The statement being executed in the session. |
+| SVR_IP | varchar(46) | NO | The IP address of the OBServer node. |
+| SVR_PORT | bigint(20) | NO | The remote procedure call (RPC) port number of the OBServer node. |
+| SQL_PORT | bigint(20) | NO | The SQL port number of the OBServer node. |
+| PROXY_SESSID | bigint(20) unsigned | NO | The session ID of ODP, if ODP is used for connection. |
+| USER_CLIENT_IP | varchar(46) | NO | The IP address of the user client. |
+| USER_HOST | varchar(128) | NO | The hostname of the user client. |
+| TRANS_ID | bigint(20) unsigned | NO | The transaction ID. |
+| THREAD_ID | bigint(20) unsigned | NO | The thread ID. |
+| TRACE_ID | varchar(64) | NO | The trace ID. |
+| REF_COUNT | bigint(20) | NO | The reference count of the connection. |
+| BACKTRACE | varchar(16384) | NO | The call stack for connection references. |
+| TRANS_STATE | varchar(32) | NO | The transaction status. |
+| TOTAL_CPU_TIME | BIGINT(21) | NO | The CPU time spent on executing the current statement, in seconds. |
+
+## Sample query
+
+```shell
+obclient [oceanbase]> SELECT * FROM oceanbase.V$OB_SESSION LIMIT 2;
+```
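+
+A quick way to see how connections on the current node are distributed is to aggregate over the columns above. A minimal sketch using only columns listed in this topic:
+
+```shell
+obclient [oceanbase]> SELECT `USER`, STATE, COUNT(*) AS SESSION_COUNT FROM oceanbase.V$OB_SESSION GROUP BY `USER`, STATE;
+```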
+
+
+## References
+
+[GV$OB_SESSION](17400.gv-ob_session-of-mysql-mode.md)
diff --git a/en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/5200.v-ob_tablet_compaction_history-of-mysql-mode.md b/en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/5200.v-ob_tablet_compaction_history-of-mysql-mode.md
index 155137e1ce..9860f51b05 100644
--- a/en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/5200.v-ob_tablet_compaction_history-of-mysql-mode.md
+++ b/en-US/700.reference/700.system-views/400.system-view-of-mysql-mode/300.performance-view-of-mysql-mode/5200.v-ob_tablet_compaction_history-of-mysql-mode.md
@@ -9,7 +9,7 @@
## Purpose
-The `V$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction history.
+The `V$OB_TABLET_COMPACTION_HISTORY` view displays information about historical tablet-level compactions.
Note
@@ -20,15 +20,15 @@ The `V$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction h
| Column | Type | **Nullable?** | Description |
|--------------------------------------|--------------|----------------|--------|
-| SVR_IP | varchar(46) | NO | The IP address of the server. |
-| SVR_PORT | bigint(20) | NO | The port number of the server. |
+| SVR_IP | varchar(46) | NO | The IP address of the OBServer node. |
+| SVR_PORT | bigint(20) | NO | The port number of the OBServer node. |
| TENANT_ID | bigint(20) | NO | The ID of the tenant. |
| LS_ID | bigint(20) | NO | The ID of the log stream. |
| TABLET_ID | bigint(20) | NO | The ID of the tablet. |
| TYPE | varchar(64) | NO | The compaction type. Valid values: `MINI`: minor or L0 compaction that converts MemTables into SSTables. `MAJOR`: major compaction. `MINI MINOR`: L1 compaction that combines multiple mini SSTables into one. `BUF MINOR`: buffer minor compaction that generates special buffer minor SSTables. |
| COMPACTION_SCN | bigint(20) unsigned | NO | The major compaction version. |
-| START_TIME | timestamp(6) | NO | The start time. |
-| FINISH_TIME | timestamp(6) | NO | The end time. |
+| START_TIME | timestamp(6) | NO | The start time of the compaction. |
+| FINISH_TIME | timestamp(6) | NO | The end time of the compaction. |
| TASK_ID | varchar(64) | NO | The task execution trace. |
| OCCUPY_SIZE | bigint(20) | NO | The data amount. |
| MACRO_BLOCK_COUNT | bigint(20) | NO | The number of macroblocks. |
@@ -38,7 +38,7 @@ The `V$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction h
| TOTAL_ROW_COUNT | bigint(20) | NO | The total number of rows. |
| INCREMENTAL_ROW_COUNT | bigint(20) | NO | The number of new rows. |
| COMPRESSION_RATIO | double | NO | The compression ratio of new data, which is the ratio of the size of compressed new macroblock data to the data size before compression. |
-| NEW_FLUSH_DATA_RATE | bigint(20) | NO | The output speed of new data. Unit: KB/s. |
+| NEW_FLUSH_DATA_RATE | bigint(20) | NO | The output speed of new data, in KB/s. |
| PROGRESSIVE_COMPACTION_ROUND | bigint(20) | NO | The current compaction round during a progressive compaction. For a full compaction, the value is -1. |
| PROGRESSIVE_COMPACTION_NUM | bigint(20) | NO | The total number of compaction rounds in a progressive compaction. |
| PARALLEL_DEGREE | bigint(20) | NO | The degree of parallelism. |
@@ -46,3 +46,24 @@ The `V$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction h
| PARTICIPANT_TABLE | varchar(512) | NO | The table that participates in the compaction. |
| MACRO_ID_LIST | varchar(256) | NO | The list of output macroblocks. If the macroblock list is too long, it is not displayed. |
| COMMENTS | varchar(256) | NO | The history of failed compactions and the collection duration of the current compaction. |
+| START_CG_ID | bigint(20) | NO | The ID of the first column in the column group. **Note**: This column is introduced since OceanBase Database V4.3.0. |
+| END_CG_ID | bigint(20) | NO | The ID of the last column in the column group. **Note**: This column is introduced since OceanBase Database V4.3.0. |
+| KEPT_SNAPSHOT | varchar(128) | NO | The multi-version snapshot for the compaction. **Note**: This column is introduced since OceanBase Database V4.3.0. |
+| MERGE_LEVEL | varchar(64) | NO | Indicates whether a major compaction was performed or microblocks were reused. **Note**: This column is introduced since OceanBase Database V4.3.0. |
+
+## Sample query
+
+```shell
+obclient [oceanbase]> SELECT * FROM oceanbase.V$OB_TABLET_COMPACTION_HISTORY LIMIT 1;
+```
+
+The query result is as follows:
+
+```shell
++----------------+----------+-----------+-------+-----------+-------------+---------------------+----------------------------+----------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+----------------------------------------+---------------+-------------------------------------------------------------------------------------+-------------+-----------+-----------------------------------------------------------------+-------------------+
+| SVR_IP | SVR_PORT | TENANT_ID | LS_ID | TABLET_ID | TYPE | COMPACTION_SCN | START_TIME | FINISH_TIME | TASK_ID | OCCUPY_SIZE | MACRO_BLOCK_COUNT | MULTIPLEXED_MACRO_BLOCK_COUNT | NEW_MICRO_COUNT_IN_NEW_MACRO | MULTIPLEXED_MICRO_COUNT_IN_NEW_MACRO | TOTAL_ROW_COUNT | INCREMENTAL_ROW_COUNT | COMPRESSION_RATIO | NEW_FLUSH_DATA_RATE | PROGRESSIVE_COMPACTION_ROUND | PROGRESSIVE_COMPACTION_NUM | PARALLEL_DEGREE | PARALLEL_INFO | PARTICIPANT_TABLE | MACRO_ID_LIST | COMMENTS | START_CG_ID | END_CG_ID | KEPT_SNAPSHOT | MERGE_LEVEL |
++----------------+----------+-----------+-------+-----------+-------------+---------------------+----------------------------+----------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+----------------------------------------+---------------+-------------------------------------------------------------------------------------+-------------+-----------+-----------------------------------------------------------------+-------------------+
+| xx.xx.xx.xx | 2882 | 1 | 1 | 60476 | MAJOR_MERGE | 1710352801420115046 | 2024-03-14 02:00:47.829176 | 2024-03-14 02:00:47.838409 | YB42AC1E87E2-0006138731B1172C-0-0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | - | table_cnt=1,[MAJOR]snapshot_version=1; | | comment="merge_reason="TENANT_MAJOR";time=add_time:1710352847829041|total=9.23ms;"; | 0 | 0 | {type:"SNAPSHOT_FOR_LS_RESERVED", snapshot:1710351031448088252} | MICRO_BLOCK_LEVEL |
++----------------+----------+-----------+-------+-----------+-------------+---------------------+----------------------------+----------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+----------------------------------------+---------------+-------------------------------------------------------------------------------------+-------------+-----------+-----------------------------------------------------------------+-------------------+
+1 row in set (0.005 sec)
+```
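+
+To check how well newly written data compresses, you can focus on the size and ratio columns. A minimal sketch using columns from the table above:
+
+```shell
+obclient [oceanbase]> SELECT TABLET_ID, OCCUPY_SIZE, COMPRESSION_RATIO, NEW_FLUSH_DATA_RATE FROM oceanbase.V$OB_TABLET_COMPACTION_HISTORY ORDER BY OCCUPY_SIZE DESC LIMIT 5;
+```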
diff --git a/en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/35600.dba_ob_aux_statistics-of-oracle-mode.md b/en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/35600.dba_ob_aux_statistics-of-oracle-mode.md
new file mode 100644
index 0000000000..fa8c080c60
--- /dev/null
+++ b/en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/200.dictionary-view-of-oracle-mode/35600.dba_ob_aux_statistics-of-oracle-mode.md
@@ -0,0 +1,56 @@
+| description ||
+|---|---|
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
+
+# DBA_OB_AUX_STATISTICS
+
+
+**Note**: This view is introduced since OceanBase Database V4.3.0.
+
+
+## Purpose
+
+The `DBA_OB_AUX_STATISTICS` view displays the auxiliary statistics of each tenant.
+
+## Columns
+
+| **Column** | **Type** | **Nullable?** | **Description** |
+| --- | --- | --- | --- |
+| LAST_ANALYZED | TIMESTAMP(6) WITH LOCAL TIME ZONE | NO | The timestamp for the last statistics collection. |
+| CPU_SPEED(MHZ) | NUMBER(38) | YES | The CPU speed of the current environment, in MHz. |
+| DISK_SEQ_READ_SPEED(MB/S) | NUMBER(38) | YES | The sequential read speed of the disk, in MB/s. |
+| DISK_RND_READ_SPEED(MB/S) | NUMBER(38) | YES | The random read speed of the disk, in MB/s. |
+| NETWORK_SPEED(MB/S) | NUMBER(38) | YES | The network transmission speed, in MB/s. |
+
+## Sample query
+
+1. Manually enable auxiliary statistics collection.
+
+ ```shell
+ obclient [SYS]> CALL dbms_stats.gather_system_stats();
+ ```
+
+2. Query the auxiliary statistics of tenants.
+
+ ```shell
+ obclient [SYS]> SELECT * FROM SYS.DBA_OB_AUX_STATISTICS;
+ ```
+
+3. The query result is as follows:
+
+ ```shell
+ +------------------------------+----------------+---------------------------+---------------------------+---------------------+
+ | LAST_ANALYZED | CPU_SPEED(MHZ) | DISK_SEQ_READ_SPEED(MB/S) | DISK_RND_READ_SPEED(MB/S) | NETWORK_SPEED(MB/S) |
+ +------------------------------+----------------+---------------------------+---------------------------+---------------------+
+ | 14-MAR-24 02.40.36.904572 PM | 2499 | 2675 | 370 | 1250 |
+ +------------------------------+----------------+---------------------------+---------------------------+---------------------+
+ 1 row in set (0.017 sec)
+ ```
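+
+In Oracle mode, quote these column names with double quotation marks because they contain parentheses and slashes. A minimal sketch using the column names from the table above:
+
+```shell
+obclient [SYS]> SELECT LAST_ANALYZED, "CPU_SPEED(MHZ)", "NETWORK_SPEED(MB/S)" FROM SYS.DBA_OB_AUX_STATISTICS;
+```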
+
+## References
+
+[CDB_OB_AUX_STATISTICS](../../300.system-view-of-sys-tenant/200.dictionary-view-of-sys-tenant/32400.cdb_ob_aux_statistics-of-sys-tenant.md)
\ No newline at end of file
diff --git a/en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/17400.gv-ob_session-of-oracle-mode.md b/en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/17400.gv-ob_session-of-oracle-mode.md
new file mode 100644
index 0000000000..484ac8b488
--- /dev/null
+++ b/en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/17400.gv-ob_session-of-oracle-mode.md
@@ -0,0 +1,63 @@
+| description ||
+|---|---|
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
+
+# GV$OB_SESSION
+
+
+**Note**: This view is introduced since OceanBase Database V4.3.0.
+
+
+## Purpose
+
+The `GV$OB_SESSION` view displays information about sessions created on all OBServer nodes.
+
+## Columns
+
+| **Column** | **Type** | **Nullable?** | **Description** |
+| --- | --- | --- | --- |
+| ID | NUMBER(38) | NO | The ID of the session. |
+| USER | CHAR(193) | NO | The user to which the session belongs. |
+| TENANT | VARCHAR2(128) | NO | The name of the tenant accessed. |
+| HOST | VARCHAR2(128) | NO | The IP address and port number of the client that initiated the session. If OceanBase Database Proxy (ODP) was used to connect to the database, the value indicates the host IP address and port number of ODP. |
+| DB | VARCHAR2(128) | YES | The name of the database to which the session connects. |
+| COMMAND | VARCHAR2(4096) | NO | The type of the statement being executed in the session. |
+| SQL_ID | VARCHAR2(32) | NO | The ID of the SQL statement. |
+| TIME | NUMBER(38) | NO | The execution time of the current statement, in seconds. If a retry occurs, the value is cleared and recalculated. |
+| STATE | VARCHAR2(128) | YES | The current status of the session. |
+| INFO | VARCHAR2(262143) | YES | The statement being executed in the session. |
+| SVR_IP | VARCHAR2(46) | NO | The IP address of the OBServer node. |
+| SVR_PORT | NUMBER(38) | NO | The remote procedure call (RPC) port number of the OBServer node. |
+| SQL_PORT | NUMBER(38) | NO | The SQL port number of the OBServer node. |
+| PROXY_SESSID | NUMBER(38) | YES | The session ID of ODP, if ODP is used for connection. |
+| USER_CLIENT_IP | VARCHAR2(46) | YES | The IP address of the user client. |
+| USER_HOST | VARCHAR2(128) | YES | The hostname of the user client. |
+| TRANS_ID | NUMBER(38) | NO | The transaction ID. |
+| THREAD_ID | NUMBER(38) | NO | The thread ID. |
+| TRACE_ID | VARCHAR2(64) | YES | The trace ID. |
+| REF_COUNT | NUMBER(38) | NO | The reference count of the connection. |
+| BACKTRACE | VARCHAR2(16384) | YES | The call stack for connection references. |
+| TRANS_STATE | VARCHAR2(32) | YES | The transaction status. |
+| TOTAL_CPU_TIME | NUMBER(38) | NO | The CPU time spent on executing the current statement, in seconds. |
+
+## Sample query
+
+```shell
+obclient [SYS]> SELECT * FROM SYS.GV$OB_SESSION WHERE ROWNUM <= 2;
+```
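+
+`USER` is a reserved word in Oracle mode, so quote it when selecting it explicitly. A minimal sketch that lists sessions running longer than 10 seconds, using only columns from the table above:
+
+```shell
+obclient [SYS]> SELECT ID, "USER", DB, SQL_ID, TIME FROM SYS.GV$OB_SESSION WHERE TIME > 10;
+```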
+
+
+## References
+
+[V$OB_SESSION](17500.v-ob_session-of-oracle-mode.md)
diff --git a/en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/17500.v-ob_session-of-oracle-mode.md b/en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/17500.v-ob_session-of-oracle-mode.md
new file mode 100644
index 0000000000..524d084d37
--- /dev/null
+++ b/en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/17500.v-ob_session-of-oracle-mode.md
@@ -0,0 +1,63 @@
+| description ||
+|---|---|
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
+
+# V$OB_SESSION
+
+
+**Note**: This view is introduced since OceanBase Database V4.3.0.
+
+
+## Purpose
+
+The `V$OB_SESSION` view displays information about sessions created on the current OBServer node.
+
+## Columns
+
+| **Column** | **Type** | **Nullable?** | **Description** |
+| --- | --- | --- | --- |
+| ID | NUMBER(38) | NO | The ID of the session. |
+| USER | CHAR(193) | NO | The user to which the session belongs. |
+| TENANT | VARCHAR2(128) | NO | The name of the tenant accessed. |
+| HOST | VARCHAR2(128) | NO | The IP address and port number of the client that initiated the session. If OceanBase Database Proxy (ODP) was used to connect to the database, the value indicates the host IP address and port number of ODP. |
+| DB | VARCHAR2(128) | YES | The name of the database to which the session connects. |
+| COMMAND | VARCHAR2(4096) | NO | The type of the statement being executed in the session. |
+| SQL_ID | VARCHAR2(32) | NO | The ID of the SQL statement. |
+| TIME | NUMBER(38) | NO | The execution time of the current statement, in seconds. If a retry occurs, the value is cleared and recalculated. |
+| STATE | VARCHAR2(128) | YES | The current status of the session. |
+| INFO | VARCHAR2(262143) | YES | The statement being executed in the session. |
+| SVR_IP | VARCHAR2(46) | NO | The IP address of the OBServer node. |
+| SVR_PORT | NUMBER(38) | NO | The remote procedure call (RPC) port number of the OBServer node. |
+| SQL_PORT | NUMBER(38) | NO | The SQL port number of the OBServer node. |
+| PROXY_SESSID | NUMBER(38) | YES | The session ID of ODP, if ODP is used for connection. |
+| USER_CLIENT_IP | VARCHAR2(46) | YES | The IP address of the user client. |
+| USER_HOST | VARCHAR2(128) | YES | The hostname of the user client. |
+| TRANS_ID | NUMBER(38) | NO | The transaction ID. |
+| THREAD_ID | NUMBER(38) | NO | The thread ID. |
+| TRACE_ID | VARCHAR2(64) | YES | The trace ID. |
+| REF_COUNT | NUMBER(38) | NO | The reference count of the connection. |
+| BACKTRACE | VARCHAR2(16384) | YES | The call stack for connection references. |
+| TRANS_STATE | VARCHAR2(32) | YES | The transaction status. |
+| TOTAL_CPU_TIME | NUMBER(38) | NO | The CPU time spent on executing the current statement, in seconds. |
+
+## Sample query
+
+```shell
+obclient [SYS]> SELECT * FROM SYS.V$OB_SESSION WHERE ROWNUM <= 2;
+```
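+
+To get an overview of session states on the current node, aggregate over the `STATE` column. A minimal sketch:
+
+```shell
+obclient [SYS]> SELECT STATE, COUNT(*) AS SESSION_COUNT FROM SYS.V$OB_SESSION GROUP BY STATE;
+```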
+
+
+## References
+
+[GV$OB_SESSION](17400.gv-ob_session-of-oracle-mode.md)
diff --git a/en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/4700.v-ob_tablet_compaction_history-of-oracle-mode.md b/en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/4700.v-ob_tablet_compaction_history-of-oracle-mode.md
index df0ec9461a..6dd0e21a2c 100644
--- a/en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/4700.v-ob_tablet_compaction_history-of-oracle-mode.md
+++ b/en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/4700.v-ob_tablet_compaction_history-of-oracle-mode.md
@@ -9,7 +9,7 @@
## Purpose
-The `V$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction history.
+The `V$OB_TABLET_COMPACTION_HISTORY` view displays information about historical tablet-level compactions.
Note
@@ -20,15 +20,15 @@ The `V$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction h
| Column | Type | Nullable? | Description |
|--------------------------------------|---------------|------------|--------|
-| SVR_IP | VARCHAR2(46) | NO | The IP address of the server. |
-| SVR_PORT | NUMBER(38) | NO | The port number of the server. |
+| SVR_IP | VARCHAR2(46) | NO | The IP address of the OBServer node. |
+| SVR_PORT | NUMBER(38) | NO | The port number of the OBServer node. |
| TENANT_ID | NUMBER(38) | NO | The ID of the tenant. |
| LS_ID | NUMBER(38) | NO | The ID of the log stream. |
| TABLET_ID | NUMBER(38) | NO | The ID of the tablet. |
| TYPE | VARCHAR2(64) | NO | The compaction type. Valid values: `MINI`: minor or L0 compaction that converts MemTables into SSTables. `MAJOR`: major compaction. `MINI MINOR`: L1 compaction that combines multiple mini SSTables into one. `BUF MINOR`: buffer minor compaction that generates special buffer minor SSTables. |
| COMPACTION_SCN | NUMBER(38) | NO | The compaction version. The minor version is a snapshot version. |
-| START_TIME | TIMESTAMP(6) WITH LOCAL TIME ZONE | NO | The start time. |
-| FINISH_TIME | TIMESTAMP(6) WITH LOCAL TIME ZONE | NO | The end time. |
+| START_TIME | TIMESTAMP(6) WITH LOCAL TIME ZONE | NO | The start time of the compaction. |
+| FINISH_TIME | TIMESTAMP(6) WITH LOCAL TIME ZONE | NO | The end time of the compaction. |
| OCCUPY_SIZE | NUMBER(38) | NO | The data amount. |
| MACRO_BLOCK_COUNT | NUMBER(38) | NO | The number of macroblocks. |
| MULTIPLEXED_MACRO_BLOCK_COUNT | NUMBER(38) | NO | The number of reused macroblocks. |
@@ -43,3 +43,24 @@ The `V$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction h
| PARALLEL_INFO | VARCHAR2(512) | NO | The parallel task information, including the scanned data amount, operating time, and output data amount of the parallel task. |
| MACRO_ID_LIST | varchar2(256) | NO | The list of output macroblocks. If the macroblock list is too long, it is not displayed. |
| COMMENTS | varchar2(256) | NO | The history of failed compactions and the collection duration of the current compaction. |
+| START_CG_ID | NUMBER(38) | NO | The ID of the first column in the column group. **Note**: This column is introduced since OceanBase Database V4.3.0. |
+| END_CG_ID | NUMBER(38) | NO | The ID of the last column in the column group. **Note**: This column is introduced since OceanBase Database V4.3.0. |
+| KEPT_SNAPSHOT | VARCHAR2(128) | NO | The multi-version snapshot for the compaction. **Note**: This column is introduced since OceanBase Database V4.3.0. |
+| MERGE_LEVEL | VARCHAR2(64) | NO | Indicates whether a major compaction was performed or microblocks were reused. **Note**: This column is introduced since OceanBase Database V4.3.0. |
+
+## Sample query
+
+```shell
+obclient [SYS]> SELECT * FROM SYS.V$OB_TABLET_COMPACTION_HISTORY WHERE ROWNUM = 1;
+```
+
+The query result is as follows:
+
+```shell
++----------------+----------+-----------+-------+-----------+-------------+---------------------+------------------------------+------------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+--------------------------------------------------------------------------------------+---------------+------------------------------------------------------------------------------------------------+-------------+-----------+
+| SVR_IP | SVR_PORT | TENANT_ID | LS_ID | TABLET_ID | TYPE | COMPACTION_SCN | START_TIME | FINISH_TIME | TASK_ID | OCCUPY_SIZE | MACRO_BLOCK_COUNT | MULTIPLEXED_MACRO_BLOCK_COUNT | NEW_MICRO_COUNT_IN_NEW_MACRO | MULTIPLEXED_MICRO_COUNT_IN_NEW_MACRO | TOTAL_ROW_COUNT | INCREMENTAL_ROW_COUNT | COMPRESSION_RATIO | NEW_FLUSH_DATA_RATE | PROGRESSIVE_COMPACTION_ROUND | PROGRESSIVE_COMPACTION_NUM | PARALLEL_DEGREE | PARALLEL_INFO | PARTICIPANT_TABLE | MACRO_ID_LIST | COMMENTS | START_CG_ID | END_CG_ID |
++----------------+----------+-----------+-------+-----------+-------------+---------------------+------------------------------+------------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+--------------------------------------------------------------------------------------+---------------+------------------------------------------------------------------------------------------------+-------------+-----------+
+| 172.30.135.217 | 2882 | 1004 | 1 | 1 | MAJOR_MERGE | 1708624802677337192 | 23-FEB-24 02.01.02.432048 AM | 23-FEB-24 02.01.02.468250 AM | YB42AC1E87D9-000611F47C98B260-0-0 | 24582 | 1 | 0 | 3 | 0 | 7375 | 7375 | 1 | 844 | 1 | 0 | 1 | - | table_cnt=5,[MAJOR]snapshot_version=1;[MINI]start_scn=1,end_scn=1708624852712804233; | 761 | comment="merge_reason="TENANT_MAJOR";cost_mb=2;time=add_time:1708624862431925|total=36.20ms;"; | 0 | 0 |
++----------------+----------+-----------+-------+-----------+-------------+---------------------+------------------------------+------------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+--------------------------------------------------------------------------------------+---------------+------------------------------------------------------------------------------------------------+-------------+-----------+
+1 row in set (0.007 sec)
+```
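+
+In Oracle mode, subtracting two timestamps yields an interval, which gives each compaction's duration directly. A minimal sketch using columns from the table above:
+
+```shell
+obclient [SYS]> SELECT TABLET_ID, TYPE, FINISH_TIME - START_TIME AS DURATION FROM SYS.V$OB_TABLET_COMPACTION_HISTORY WHERE ROWNUM <= 5;
+```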
diff --git a/en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/900.gv-ob_tablet_compaction_history-of-oracle-mode.md b/en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/900.gv-ob_tablet_compaction_history-of-oracle-mode.md
index 8e72b782cd..e0a4448523 100644
--- a/en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/900.gv-ob_tablet_compaction_history-of-oracle-mode.md
+++ b/en-US/700.reference/700.system-views/500.system-view-of-oracle-mode/300.performance-view-of-oracle-mode/900.gv-ob_tablet_compaction_history-of-oracle-mode.md
@@ -9,7 +9,7 @@
## Purpose
-The `GV$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction history.
+The `GV$OB_TABLET_COMPACTION_HISTORY` view displays information about historical tablet-level compactions.
Note
@@ -20,15 +20,15 @@ The `GV$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction
| Column | Type | Nullable? | Description |
|--------------------------------------|---------------|------------|--------|
-| SVR_IP | VARCHAR2(46) | NO | The IP address of the server. |
-| SVR_PORT | NUMBER(38) | NO | The port number of the server. |
+| SVR_IP | VARCHAR2(46) | NO | The IP address of the OBServer node. |
+| SVR_PORT | NUMBER(38) | NO | The port number of the OBServer node. |
| TENANT_ID | NUMBER(38) | NO | The ID of the tenant. |
| LS_ID | NUMBER(38) | NO | The ID of the log stream. |
| TABLET_ID | NUMBER(38) | NO | The ID of the tablet. |
| TYPE | VARCHAR2(64) | NO | The compaction type. Valid values: `MINI`: minor or L0 compaction that converts MemTables into SSTables. `MAJOR`: major compaction. `MINI MINOR`: L1 compaction that combines multiple mini SSTables into one. `BUF MINOR`: buffer minor compaction that generates special buffer minor SSTables. |
| COMPACTION_SCN | NUMBER(38) | NO | The compaction version. The minor version is a snapshot version. |
-| START_TIME | TIMESTAMP(6) WITH LOCAL TIME ZONE | NO | The start time. |
-| FINISH_TIME | TIMESTAMP(6) WITH LOCAL TIME ZONE | NO | The end time. |
+| START_TIME | TIMESTAMP(6) WITH LOCAL TIME ZONE | NO | The start time of the compaction. |
+| FINISH_TIME | TIMESTAMP(6) WITH LOCAL TIME ZONE | NO | The end time of the compaction. |
| OCCUPY_SIZE | NUMBER(38) | NO | The data amount. |
| MACRO_BLOCK_COUNT | NUMBER(38) | NO | The number of macroblocks. |
| MULTIPLEXED_MACRO_BLOCK_COUNT | NUMBER(38) | NO | The number of reused macroblocks. |
@@ -43,3 +43,24 @@ The `GV$OB_TABLET_COMPACTION_HISTORY` view displays the tablet-level compaction
| PARALLEL_INFO | VARCHAR2(512) | NO | The parallel task information, including the scanned data amount, operating time, and output data amount of the parallel task. |
| MACRO_ID_LIST | varchar2(256) | NO | The list of output macroblocks. If the macroblock list is too long, it is not displayed. |
| COMMENTS | varchar2(256) | NO | The history of failed compactions and the collection duration of the current compaction. |
+| START_CG_ID | NUMBER(38) | NO | The ID of the first column in the column group. **Note**: This column is introduced since OceanBase Database V4.3.0. |
+| END_CG_ID | NUMBER(38) | NO | The ID of the last column in the column group. **Note**: This column is introduced since OceanBase Database V4.3.0. |
+| KEPT_SNAPSHOT | VARCHAR2(128) | NO | The multi-version snapshot for the compaction. **Note**: This column is introduced since OceanBase Database V4.3.0. |
+| MERGE_LEVEL | VARCHAR2(64) | NO | Indicates whether a major compaction was performed or microblocks were reused. **Note**: This column is introduced since OceanBase Database V4.3.0. |
+
+## Sample query
+
+```shell
+obclient [SYS]> SELECT * FROM SYS.GV$OB_TABLET_COMPACTION_HISTORY WHERE ROWNUM = 1;
+```
+
+The query result is as follows:
+
+```shell
++----------------+----------+-----------+-------+-----------+-------------+---------------------+------------------------------+------------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+--------------------------------------------------------------------------------------+---------------+------------------------------------------------------------------------------------------------+-------------+-----------+
+| SVR_IP | SVR_PORT | TENANT_ID | LS_ID | TABLET_ID | TYPE | COMPACTION_SCN | START_TIME | FINISH_TIME | TASK_ID | OCCUPY_SIZE | MACRO_BLOCK_COUNT | MULTIPLEXED_MACRO_BLOCK_COUNT | NEW_MICRO_COUNT_IN_NEW_MACRO | MULTIPLEXED_MICRO_COUNT_IN_NEW_MACRO | TOTAL_ROW_COUNT | INCREMENTAL_ROW_COUNT | COMPRESSION_RATIO | NEW_FLUSH_DATA_RATE | PROGRESSIVE_COMPACTION_ROUND | PROGRESSIVE_COMPACTION_NUM | PARALLEL_DEGREE | PARALLEL_INFO | PARTICIPANT_TABLE | MACRO_ID_LIST | COMMENTS | START_CG_ID | END_CG_ID |
++----------------+----------+-----------+-------+-----------+-------------+---------------------+------------------------------+------------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+--------------------------------------------------------------------------------------+---------------+------------------------------------------------------------------------------------------------+-------------+-----------+
+| xx.xx.xx.xx | 2882 | 1004 | 1 | 1 | MAJOR_MERGE | 1708624802677337192 | 23-FEB-24 02.01.02.432048 AM | 23-FEB-24 02.01.02.468250 AM | YB42AC1E87D9-000611F47C98B260-0-0 | 24582 | 1 | 0 | 3 | 0 | 7375 | 7375 | 1 | 844 | 1 | 0 | 1 | - | table_cnt=5,[MAJOR]snapshot_version=1;[MINI]start_scn=1,end_scn=1708624852712804233; | 761 | comment="merge_reason="TENANT_MAJOR";cost_mb=2;time=add_time:1708624862431925|total=36.20ms;"; | 0 | 0 |
++----------------+----------+-----------+-------+-----------+-------------+---------------------+------------------------------+------------------------------+-----------------------------------+-------------+-------------------+-------------------------------+------------------------------+--------------------------------------+-----------------+-----------------------+-------------------+---------------------+------------------------------+----------------------------+-----------------+---------------+--------------------------------------------------------------------------------------+---------------+------------------------------------------------------------------------------------------------+-------------+-----------+
+1 row in set (0.028 sec)
+```
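+
+To see how compaction work is spread across the cluster, aggregate per node. A minimal sketch:
+
+```shell
+obclient [SYS]> SELECT SVR_IP, TYPE, COUNT(*) AS COMPACTION_COUNT FROM SYS.GV$OB_TABLET_COMPACTION_HISTORY GROUP BY SVR_IP, TYPE;
+```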
From 236d9472573d27083fd8dc0a5cda28621af3b93a Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Wed, 17 Apr 2024 23:18:57 +0800
Subject: [PATCH 35/63] v430-beta-error-code-mysql-update
---
.../1000.9500-9999-of-mysql-mode.md | 52 ++++++++++++++++
.../1050.10000-12000-of-mysql-mode.md | 15 ++++-
.../200.0001-3999-of-mysql-mode.md | 30 +++++++++
.../300.4000-4499-of-mysql-mode.md | 28 +++++++++
.../400.4500-4999-of-mysql-mode.md | 2 +-
.../500.5000-5999-of-mysql-mode.md | 12 ++--
.../600.6000-6999-of-mysql-mode.md | 25 ++++++++
.../900.9000-9499-of-mysql-mode.md | 62 ++++++++++++++++++-
8 files changed, 219 insertions(+), 7 deletions(-)
diff --git a/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/1000.9500-9999-of-mysql-mode.md b/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/1000.9500-9999-of-mysql-mode.md
index 6622eb1fb0..35e1366849 100644
--- a/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/1000.9500-9999-of-mysql-mode.md
+++ b/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/1000.9500-9999-of-mysql-mode.md
@@ -434,6 +434,58 @@ ERROR 9695 (02000): Unhandled user-defined not found condition
This error code is introduced since OceanBase Database V4.2.2.
+## ERROR 9762 (42000): Loading local data is disabled; this must be enabled on both the client and server sides
+
+* Error code in OceanBase Database: 9762
+
+* Error code in MySQL: 3948
+
+* Cause: Local data import failed.
+* Solution: To use the local data import feature, make sure that:
+
+ * The version of OceanBase Client (OBClient) is V2.2.4 or later.
+ * The version of OceanBase Database Proxy (ODP) is V3.2.4 or later. If you directly connect to an OBServer node, ignore this requirement.
+ * The version of OceanBase Connector/J is V2.4.8 or later, if Java and OceanBase Connector/J are used.
+
+ You can directly use a MySQL client or a native MariaDB client of any version.
+
+
+  **Note**: When you use a MySQL or MariaDB client to connect to your database, the command-line option `--local-infile` is required.
+
+
+ If the version requirements are met, you need to enable the `local_infile` variable.
+
+ * Enable the variable.
+
+ ```shell
+ set @@local_infile=1;
+ ```
+
+ * Check the variable.
+
+ ```shell
+ show variables like 'local_infile';
+ ```
+
+
+**Note**: This error code is introduced since OceanBase Database V4.3.0.
+
+
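+With the client versions verified and `local_infile` enabled, a minimal import sketch follows; the file path, table name, and delimiter are placeholders, not values from this topic:
+
+```shell
+obclient [test]> LOAD DATA LOCAL INFILE '/tmp/data.csv' INTO TABLE t1 FIELDS TERMINATED BY ',';
+```
+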
+## ERROR 9765 (HY000): object '%.*s' must be of type function or array to be used this way
+
+* Error code in OceanBase Database: 9765
+
+* Cause: The object '%.*s' must be a function or an array.
+
+* Solution: Check the object type and adjust it as needed.
+
+
+**Note**: This error code is introduced since OceanBase Database V4.3.0.
+
+
ERROR 20000 (HY000): The stored procedure 'raise_application_error' was called which causes this error to be generated","-%05ld: %.\*s
------------------------------------------------------------------------------------------------------------------------------------------------------------
diff --git a/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/1050.10000-12000-of-mysql-mode.md b/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/1050.10000-12000-of-mysql-mode.md
index 882f38cbc7..760647e4f6 100644
--- a/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/1050.10000-12000-of-mysql-mode.md
+++ b/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/1050.10000-12000-of-mysql-mode.md
@@ -7,7 +7,20 @@
# 10000 to 12000
-These error codes indicate OceanBase Database Proxy (ODP), OB Sharding, OBKV, and client errors.
+These error codes indicate OceanBase Database Proxy (ODP), OB Sharding, OBKV, and client errors.
+
+## ERROR 10500 (HY000): incorrect route for obkv global index, client router should refresh.
+
+* Error code in OceanBase Database: 10500
+
+* Cause: The route for the OBKV global index is incorrect, and the client router needs to be refreshed.
+
+* Solution: Check your client connection and routing information.
+
+
+**Note**: This error code is introduced since OceanBase Database V4.3.0.
+
## ERROR 10501 (HY000): TTL feature is not enabled
diff --git a/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/200.0001-3999-of-mysql-mode.md b/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/200.0001-3999-of-mysql-mode.md
index 1ed3b6b57c..701a24ec94 100644
--- a/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/200.0001-3999-of-mysql-mode.md
+++ b/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/200.0001-3999-of-mysql-mode.md
@@ -2628,6 +2628,21 @@ These error codes are compatible with specific MySQL error codes.
This error code is introduced since OceanBase Database V3.2.4.
+## ERROR 3061 (42000): User variable name %.*s is illegal
+
+* Error code in OceanBase Database: 11013
+
+* Error code in MySQL: 3061
+
+* Cause: The user variable name exceeds the maximum length of 1024 characters.
+
+* Solution: Change the name of the user variable.
+
+
+**Note**: This error code is introduced since OceanBase Database V4.3.0.
+
+
## ERROR 3064 (HY000): invalid type given for an argument
* Error code in OceanBase Database: 5351
@@ -2805,6 +2820,21 @@ These error codes are compatible with specific MySQL error codes.
* Error code in MySQL: 3158
+## ERROR 3162 (HY000): User %.*s does not exist
+
+* Error code in OceanBase Database: 11012
+
+* Error code in MySQL: 3162
+
+* Cause: The user %.*s does not exist.
+
+* Solution: Make sure that the target user %.*s exists, or use another valid user.
+
+
+**Note**: This error code is introduced since OceanBase Database V4.3.0.
+
+
## ERROR 3165 (42000): A path expression is not a path to a cell in an array
* Error code in OceanBase Database: 5431
diff --git a/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/300.4000-4499-of-mysql-mode.md b/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/300.4000-4499-of-mysql-mode.md
index cb33b289e0..8ea98682bd 100644
--- a/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/300.4000-4499-of-mysql-mode.md
+++ b/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/300.4000-4499-of-mysql-mode.md
@@ -1154,6 +1154,19 @@ These error codes indicate common errors.
* [An error occurred during backup for an OceanBase cluster. Error code: 4184 (exit)](https://www.oceanbase.com/knowledge-base/oceanbase-database-1000000000208052)
* [An error indicating insufficient disk space is returned for an SQL query. Error code: 4184](https://www.oceanbase.com/knowledge-base/oceanbase-database-1000000000209971) -->
+## ERROR 4185 (HY000): Column group \'%.*s\' not found
+
+* Error code in OceanBase Database: 4185
+
+* Cause: The specified column group was not found.
+
+* Solution: Verify that the specified column group name is correct and that the column group exists.
+
+
+**Note**: This error code is introduced since OceanBase Database V4.3.0.
+
+
## ERROR 4186 (HY000): ChunkServer failed to get compress library
* Error code in OceanBase Database: 4186
@@ -2536,6 +2549,21 @@ These error codes indicate common errors.
This error code is introduced since OceanBase Database V4.2.0.
+## ERROR 4401 (HY000): Client Session need be killed
+
+* Error code in OceanBase Database: 4401
+
+* Error code in MySQL: 4401
+
+* Cause: The client session needs to be terminated but has not been.
+
+* Solution: Verify whether the client session needs to be terminated. If yes, terminate it, for example, by executing the `KILL` statement (see the sketch below).
+
+
+**Note**: This error code is introduced since OceanBase Database V4.3.0.
+
+
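+A minimal sketch of terminating a session; the session ID is a hypothetical example taken from the output of `SHOW PROCESSLIST` or the `GV$OB_SESSION` view:
+
+```shell
+obclient [oceanbase]> SHOW PROCESSLIST;
+obclient [oceanbase]> KILL 3221487638; -- hypothetical session ID from the Id column
+```
+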
## ERROR 4402 (HY000): Kill Client Session failed
* Error code in OceanBase Database: 4402
diff --git a/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/400.4500-4999-of-mysql-mode.md b/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/400.4500-4999-of-mysql-mode.md
index d223f94258..6619fa2b85 100644
--- a/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/400.4500-4999-of-mysql-mode.md
+++ b/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/400.4500-4999-of-mysql-mode.md
@@ -863,7 +863,7 @@ These error codes indicate RootService errors.
* Error code in OceanBase Database: 4725
-* Cause: An internal error occurred. The tablet does not exist.
+* Cause: An internal error occurred. An attempt was made to access a tablet that does not exist. For example, if you truncate or drop a table or partition that is undergoing a minor compaction while an SQL query is being executed on it, an error indicating that the tablet does not exist is returned.
* Solution: Contact OceanBase Technical Support for troubleshooting.
diff --git a/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/500.5000-5999-of-mysql-mode.md b/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/500.5000-5999-of-mysql-mode.md
index 1a816d0538..b3b28dd92a 100644
--- a/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/500.5000-5999-of-mysql-mode.md
+++ b/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/500.5000-5999-of-mysql-mode.md
@@ -3925,11 +3925,15 @@ ERROR 5959 (HY000): invalid SIZE specified
* Solution: Locate the cause based on the error code in the error message.
-ERROR 5976 (HY000): oci lib not founded
--------------------------------------------------------------
+## ERROR 5976 (HY000): can not find the expected version of OCI LIB: %.*s
* Error code in OceanBase Database: 5976
-* Cause: The Oracle Call Interface (OCI) library is not installed in the correct path.
+* Cause: The Oracle Call Interface (OCI) library of the expected version was not found.
+
+* Solution: See [Install and configure OCI](../../300.database-object-management/200.manage-object-of-oracle-mode/1000.manage-dblink-of-oracle-mode/600.install-and-configure-the-oci.md).
-* Solution: Contact OceanBase Technical Support for troubleshooting.
\ No newline at end of file
+
+ Note
+  The error message for this error code is changed from `oci lib not founded` to `can not find the expected version of OCI LIB: %.*s` since OceanBase Database V4.3.0.
+
\ No newline at end of file
diff --git a/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/600.6000-6999-of-mysql-mode.md b/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/600.6000-6999-of-mysql-mode.md
index 940ef15e73..57a258babd 100644
--- a/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/600.6000-6999-of-mysql-mode.md
+++ b/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/600.6000-6999-of-mysql-mode.md
@@ -750,6 +750,31 @@ ERROR 6322 (HY000): ob clog slide timeout
This error code is introduced since OceanBase Database V4.2.2.
+## ERROR 6329 (HY000): pdml sql need retry under sequence number reorder
+
+* Error code in OceanBase Database: 6329
+
+* Cause: The execution sequence of the PDML statement was reordered during execution, so the statement needs to be retried.
+
+* Solution: Re-execute the affected PDML statements.
+
+
+ Note
+ This error code is introduced since OceanBase Database V4.3.0.
+
+
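+A minimal sketch (table and column names are hypothetical): PDML is typically enabled with hints, and a statement that fails with this error is expected to succeed when re-executed:
+
+```sql
+-- Re-execute the same PDML statement after the error is returned.
+UPDATE /*+ ENABLE_PARALLEL_DML PARALLEL(8) */ t1 SET c1 = c1 + 1;
+```
+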
+## ERROR 6330 (HY000): user data disk is almost full
+
+* Error code in OceanBase Database: 6330
+
+* Cause: The user data disk is almost full.
+
+* Solution: Free up space on the data disk or scale it out to ensure that sufficient space is available for storing user data.
+
+
+ Note
+ This error code is introduced since OceanBase Database V4.3.0.
+
+
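+A minimal sketch for checking data disk usage before cleaning up or scaling out (the column names follow the V4.x `GV$OB_SERVERS` view, and the new size is an example value):
+
+```sql
+-- Check the capacity and usage of the data disk on each server.
+SELECT svr_ip, data_disk_capacity, data_disk_in_use FROM oceanbase.GV$OB_SERVERS;
+
+-- Scale out the data file if more disk space is available.
+ALTER SYSTEM SET datafile_size = '200G';
+```
+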
## ERROR 6400(HY000): tablet_freeze timeout
* Error code in OceanBase Database: 6400
diff --git a/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/900.9000-9499-of-mysql-mode.md b/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/900.9000-9499-of-mysql-mode.md
index 52c1a5a17c..d11ea58e06 100644
--- a/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/900.9000-9499-of-mysql-mode.md
+++ b/en-US/700.reference/900.error-code/600.error-code-of-mysql-mode/900.9000-9499-of-mysql-mode.md
@@ -830,7 +830,7 @@ ERROR 9086 (HY000): backup advance checkpoint by flush timeout
This error code is introduced since OceanBase Database V4.2.0.
-## ERROR 9089 (HY000): log restore source ls state not match, switchover to primary not allowed
+## ERROR 9089 (HY000): log restore source LS state not match, switchover to primary not allowed
* Error code in OceanBase Database: 9089
* Cause: The status of the source tenant for log restoration or the log stream does not match that of the destination tenant. The current tenant cannot be switched to the primary tenant.
@@ -947,6 +947,66 @@ ERROR 9101 (HY000): file or directory already exist
* Solution: This is an internal error code. Contact OceanBase Technical Support for troubleshooting.
+## ERROR 9114 (HY000): storage destination is not valid
+
+* Error code in OceanBase Database: 9114
+
+* Cause: The object storage destination is invalid.
+
+* Solution: Verify that the object storage destination is valid. If it is not, change it to a valid destination.
+
+
+ Note
+ This error code is introduced since OceanBase Database V4.3.0.
+
+
+## ERROR 9115 (HY000): can not connect to storage destination
+
+* Error code in OceanBase Database: 9115
+
+* Cause: The object storage destination cannot be connected.
+
+* Solution: Check the connection configurations for the object storage destination, including the access key, and make sure that the network connection is normal. Fix any configuration or network issues found.
+
+
+ Note
+ This error code is introduced since OceanBase Database V4.3.0.
+
+
+## ERROR 9116 (HY000): no I/O operation permission of the object storage
+
+* Error code in OceanBase Database: 9116
+
+* Cause: You do not have the I/O operation privileges on the object storage.
+
+* Solution: Check the privilege settings of the object storage and make sure that you have the I/O operation privileges. If necessary, adjust the settings or contact the administrator to request the required privileges.
+
+
+ Note
+ This error code is introduced since OceanBase Database V4.3.0.
+
+
+## ERROR 9117 (HY000): the specified s3_region does not match the endpoint
+
+* Error code in OceanBase Database: 9117
+
+* Cause: The `s3_region` option is not specified, or the specified `s3_region` value does not match the endpoint.
+
+* Solution: Make sure that `s3_region` is specified and that its value matches the corresponding endpoint.
+
+
+ Note
+ This error code is introduced since OceanBase Database V4.3.0.
+
+
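+A minimal sketch (the bucket, host, and credentials are placeholders, and the destination string format follows the V4.3 S3 backup examples) in which `s3_region` matches the region embedded in the endpoint:
+
+```sql
+-- The s3_region value must match the region of the endpoint host.
+ALTER SYSTEM SET DATA_BACKUP_DEST = 's3://mybucket/backup?host=s3.us-east-1.amazonaws.com&access_id=***&access_key=***&s3_region=us-east-1';
+```
+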
+## ERROR 9118 (HY000): object storage endpoint is invalid
+
+* Error code in OceanBase Database: 9118
+
+* Cause: The object storage endpoint is invalid.
+
+* Solution: Make sure that the endpoint is correctly configured and that the object storage service can be accessed through it.
+
+
+ Note
+ This error code is introduced since OceanBase Database V4.3.0.
+
+
ERROR 9200 (HY000): Extend ssblock file to smaller is not allowed
--------------------------------------------------------------------
From d8a857553e1f8cd71ef77d871a3262b2be6111c4 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Thu, 18 Apr 2024 14:23:19 +0800
Subject: [PATCH 36/63] v430-beta-700.error-code-of-oracle-mode-update
---
...0.ora-00000-to-ora-00999-of-oracle-mode.md | 107 +++++++++++++++++-
...0.ora-01000-to-ora-01499-of-oracle-mode.md | 8 +-
...0.ora-02000-to-ora-04999-of-oracle-mode.md | 24 +++-
...0.ora-05000-to-ora-10000-of-oracle-mode.md | 14 +++
...0.ora-20000-to-ora-29999-of-oracle-mode.md | 58 ++++++++++
...0.ora-30000-to-ora-49999-of-oracle-mode.md | 15 +++
6 files changed, 220 insertions(+), 6 deletions(-)
diff --git a/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/200.ora-00000-to-ora-00999-of-oracle-mode.md b/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/200.ora-00000-to-ora-00999-of-oracle-mode.md
index fe6cc6cfb3..f27237dd7d 100644
--- a/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/200.ora-00000-to-ora-00999-of-oracle-mode.md
+++ b/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/200.ora-00000-to-ora-00999-of-oracle-mode.md
@@ -10,7 +10,7 @@
Applicability
This topic applies only to OceanBase Database Enterprise Edition. OceanBase Database Community Edition provides only the MySQL mode.
-
+
ORA-00000: PX DOP downgrade from %ld to %ld
---------------------------------------------------------------
@@ -143,6 +143,111 @@ ORA-00060: deadlock detected while waiting for resource
* Solution: View the Trace file and the involved transactions and resources. Try again if necessary.
+## ORA-00060: internal error code, arguments: -6005, Try lock row conflict
+
+* Error code in OceanBase Database: 6005
+* SQLSTATE: HY000
+* Cause: The update operation failed to acquire a row lock. The error is returned to the upper layer, and the system retries the operation.
+* Solution: This is an internal error code. Contact OceanBase Technical Support for troubleshooting.
+
+
+ Note
+ The exception indicated by this error code will not be captured by the PL exception handling mechanism in this version.
+
+
+## ORA-00060: internal error code, arguments: -4012, Timeout, query has reached the maximum query timeout: %ld(us), maybe you can adjust the session variable ob_query_timeout or query_timeout hint, and try again
+
+* Error code in OceanBase Database: 4012
+* SQLSTATE: HY000
+* Cause: The execution timed out.
+* Solution:
+ * Reduce the amount of time required through performance tuning.
+ * Extend the timeout period.
+ * Contact OceanBase Technical Support for troubleshooting.
+
+
+ Note
+ The exception indicated by this error code will not be captured by the PL exception handling mechanism in this version.
+
+
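+A minimal sketch (the table name `t1` and the 100-second value are examples) of extending the timeout at the statement level with the `QUERY_TIMEOUT` hint mentioned in the error message:
+
+```sql
+-- The hint value is in microseconds: 100000000 us = 100 s.
+SELECT /*+ QUERY_TIMEOUT(100000000) */ COUNT(*) FROM t1;
+```
+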
+## ORA-00060: internal error code, arguments: -4013, No memory or reach tenant memory limit
+
+* Error code in OceanBase Database: 4013
+* SQLSTATE: HY000
+* Cause: Memory allocation failed.
+ * The physical memory is exhausted.
+ * The memory allocated at a time is greater than 4 GB.
+  * The memory usage of the memory context (CTX), the tenant, or the OBServer node reaches the upper limit.
+* Solution: Contact OceanBase Technical Support for troubleshooting.
+
+
+ Note
+ The exception indicated by this error code will not be captured by the PL exception handling mechanism in this version.
+
+
+## ORA-00060: internal error code, arguments: -4016, Internal error
+
+* Error code in OceanBase Database: 4016
+* SQLSTATE: HY000
+* Cause: An internal error occurred.
+* Solution: This is an internal error code. Contact OceanBase Technical Support for troubleshooting.
+
+
+ Note
+ The exception indicated by this error code will not be captured by the PL exception handling mechanism in this version.
+
+
+## ORA-00060: internal error code, arguments: -4377, fatal internal error
+
+* Error code in OceanBase Database: 4377
+* SQLSTATE: HY000
+* Cause: OceanBase Database encountered an unexpected internal error, such as a program execution failure.
+* Solution: Contact OceanBase Technical Support for troubleshooting.
+
+
+ Note
+ The exception indicated by this error code will not be captured by the PL exception handling mechanism in this version.
+
+
+## ORA-00060: internal error code, arguments: -5065, Query execution was interrupted
+
+* Error code in OceanBase Database: 5065
+* SQLSTATE: HY000
+* Cause: The query was terminated.
+* Solution: Contact the system administrator or database administrator for troubleshooting.
+
+
+ Note
+ The exception indicated by this error code will not be captured by the PL exception handling mechanism in this version.
+
+
+## ORA-00060: internal error code, arguments: -6220, SQL sequence illegal
+
+* Error code in OceanBase Database: 6220
+* SQLSTATE: HY000
+* Cause: The sequence of SQL statements is invalid.
+* Solution: This is an internal error code. Contact OceanBase Technical Support for troubleshooting.
+
+
+ Note
+ The exception indicated by this error code will not be captured by the PL exception handling mechanism in this version.
+
+
+## ORA-00224: object '%.*s' must be of type function or array to be used this way
+
+* Error code in OceanBase Database: 9765
+
+* SQLSTATE: HY000
+
+* Cause: The object '%.*s' is used in a way that requires it to be a function or an array, but it is not of either type.
+
+* Solution: Check the type of the object and adjust its usage as needed.
+
+
+ Note
+ This error code is introduced since OceanBase Database V4.3.0.
+
+
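+A minimal sketch (the object name `MY_OBJ` is hypothetical) for checking the actual type of the object before using it as a function or array:
+
+```sql
+-- Confirm the object's type in the current schema.
+SELECT object_name, object_type FROM user_objects WHERE object_name = 'MY_OBJ';
+```
+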
## ORA-00600: arbitration service does not exist
* Error code in OceanBase Database: 4747
diff --git a/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/300.ora-01000-to-ora-01499-of-oracle-mode.md b/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/300.ora-01000-to-ora-01499-of-oracle-mode.md
index a5e0a41534..9844904b8e 100644
--- a/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/300.ora-01000-to-ora-01499-of-oracle-mode.md
+++ b/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/300.ora-01000-to-ora-01499-of-oracle-mode.md
@@ -165,8 +165,7 @@ ORA-01086: savepoint does not exist
* Solution: Roll back to a savepoint from the session where this savepoint is created.
-ORA-01092: OceanBase instance terminated. Disconnection forced
-----------------------------------------------------------------------------------
+## ORA-01092: OceanBase instance terminated. Disconnection forced
* Error code in OceanBase Database: 5066
@@ -176,6 +175,11 @@ ORA-01092: OceanBase instance terminated. Disconnection forced
* Solution: Check the alert logs for details. Then, restart the session.
+
+ Note
+ The exception indicated by this error code will not be captured by the PL exception handling mechanism in this version.
+
+
ORA-01400: cannot insert NULL into '(%.\*s)'
----------------------------------------------------------------
diff --git a/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/500.ora-02000-to-ora-04999-of-oracle-mode.md b/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/500.ora-02000-to-ora-04999-of-oracle-mode.md
index 4d6e3e4c39..7b8612053f 100644
--- a/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/500.ora-02000-to-ora-04999-of-oracle-mode.md
+++ b/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/500.ora-02000-to-ora-04999-of-oracle-mode.md
@@ -86,8 +86,7 @@ ORA-02024: database link not found
* Solution: Query the `[G]V$OB_TRANSACTION_PARTICIPANTS` view for the transaction that is being committed. View the logs on the corresponding server based on the transaction ID to determine why the commit has not been completed.
-ORA-02051: another session or branch in same transaction failed or finalized
-------------------------------------------------------------------------------------------------
+## ORA-02051: another session or branch in same transaction failed or finalized
* Error code in OceanBase Database: 6264
@@ -101,6 +100,11 @@ ORA-02051: another session or branch in same transaction failed or finalized
* [Error message "ORA-02051: another session or branch in same transaction failed or finalized" is returned in an Oracle tenant](https://www.oceanbase.com/knowledge-base/oceanbase-database-1000000000284436) -->
+
+ Note
+ The exception indicated by this error code will not be captured by the PL exception handling mechanism in this version.
+
+
ORA-02089: COMMIT is not allowed in a subordinate session
-----------------------------------------------------------------------------
@@ -1004,4 +1008,18 @@ ORA-04095: trigger '%.\*s' already exists on another table, cannot replace it
* Cause: The trigger cannot be replaced because it exists in another table.
-* Solution: Delete the triggers with the same name and re-create the trigger.
\ No newline at end of file
+* Solution: Delete the triggers with the same name and re-create the trigger.
+
+## ORA-04401: Client Session need be killed
+
+* Error code in OceanBase Database: 4401
+
+* SQLSTATE: HY000
+
+* Cause: The client session needs to be terminated.
+
+* Solution: Verify whether the client session needs to be terminated. If so, terminate it by using an appropriate command or tool.
+
+
+ Note
+ This error code is introduced since OceanBase Database V4.3.0.
+
\ No newline at end of file
diff --git a/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/600.ora-05000-to-ora-10000-of-oracle-mode.md b/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/600.ora-05000-to-ora-10000-of-oracle-mode.md
index 65c0313e98..d6d62876b9 100644
--- a/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/600.ora-05000-to-ora-10000-of-oracle-mode.md
+++ b/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/600.ora-05000-to-ora-10000-of-oracle-mode.md
@@ -278,6 +278,20 @@
This error code is introduced since OceanBase Database V4.0.0.
+## ORA-06577: output parameter not a bind variable
+
+* Error code in OceanBase Database: 9763
+
+* SQLSTATE: HY000
+
+* Cause: The output parameter is not a bind variable.
+* Solution: Check how the output parameter is used and make sure that it is a bind variable.
+
+
+ Note
+ This error code is introduced since OceanBase Database V4.3.0.
+
+
## ORA-07452: specified resource manager plan does not exist in the data dictionary
* Error code in OceanBase Database: 4718
diff --git a/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/800.ora-20000-to-ora-29999-of-oracle-mode.md b/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/800.ora-20000-to-ora-29999-of-oracle-mode.md
index 7bbb248a64..747bfe7d7f 100644
--- a/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/800.ora-20000-to-ora-29999-of-oracle-mode.md
+++ b/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/800.ora-20000-to-ora-29999-of-oracle-mode.md
@@ -12,6 +12,20 @@
This topic applies only to OceanBase Database Enterprise Edition. OceanBase Database Community Edition provides only MySQL mode.
+## ORA-20000: '%.*s' invalid partition name
+
+* Error code in OceanBase Database: 11002
+
+* SQLSTATE: HY000
+
+* Cause: The partition name '%.*s' is incorrect.
+* Solution: Make sure that the partition name '%.*s' is valid and complies with the naming rules.
+
+
+ Note
+ This error code is introduced since OceanBase Database V4.3.0.
+
+
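+A minimal sketch (the table name `T1` is hypothetical) for listing the valid partition names of a table before referencing one:
+
+```sql
+-- List the partitions defined on the table.
+SELECT partition_name FROM user_tab_partitions WHERE table_name = 'T1';
+```
+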
ORA-20000: The stored procedure 'raise_application_error' was called which causes this error to be generated", "ORA%06ld: %.*s
--------------------------------------------------------------------------------------------------------------------------------
@@ -229,6 +243,20 @@ ORA-22816: unsupported feature with RETURNING clause
This error code is introduced since OceanBase Database V4.2.2.
+## ORA-22902: CURSOR expression not allowed
+
+* Error code in OceanBase Database: 9766
+
+* SQLSTATE: HY000
+
+* Cause: A CURSOR expression is used in a context where it is not allowed.
+* Solution: Remove the CURSOR expression, or rewrite the statement so that it does not rely on a CURSOR expression in this context.
+
+
+ Note
+ This error code is introduced since OceanBase Database V4.3.0.
+
+
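+A minimal sketch of a CURSOR expression; whether a particular context accepts it is version-specific, so treat this only as an illustration:
+
+```sql
+-- Contexts that do not support CURSOR expressions return ORA-22902.
+SELECT CURSOR(SELECT 1 FROM DUAL) FROM DUAL;
+```
+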
## ORA-22903: MULTISET expression not allowed
* Error code in OceanBase Database: 9716
@@ -362,6 +390,36 @@ ORA-22998: CLOB or NCLOB in multibyte character set not supported
This error code is introduced since OceanBase Database V4.2.2.
+## ORA-23538: cannot explicitly refresh a NEVER REFRESH materialized view (%s)
+
+* Error code in OceanBase Database: 9761
+
+* SQLSTATE: HY000
+
+* Cause: You attempted to explicitly refresh a materialized view that is defined as NEVER REFRESH.
+
+* Solution: Do not perform this refresh operation, or remove the materialized view from the refresh list.
+
+
+  Note
+  This error code is introduced since OceanBase Database V4.2.2.
+
+
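+A minimal sketch (the materialized view name `MV1` is hypothetical, assuming the `DBMS_MVIEW` package is available in your version): explicitly refreshing a `NEVER REFRESH` materialized view raises this error.
+
+```sql
+-- Raises ORA-23538 if MV1 is defined as NEVER REFRESH.
+BEGIN
+  DBMS_MVIEW.REFRESH('MV1');
+END;
+/
+```
+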
+## ORA-24323: value not allowed
+
+* Error code in OceanBase Database: 7297
+
+* SQLSTATE: HY000
+
+* Cause: An empty or invalid value is passed to a parameter.
+
+* Solution: Make sure that valid values are correctly passed to all required parameters.
+
+
+  Note
+  This error code is introduced since OceanBase Database V4.2.2.
+
+
## ORA-24234: unable to get source of string \\'%.\*s\\'.\\'%.\*s\\', insufficient privileges or does not exist
* Error code in OceanBase Database: 5962
diff --git a/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/900.ora-30000-to-ora-49999-of-oracle-mode.md b/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/900.ora-30000-to-ora-49999-of-oracle-mode.md
index beb3e0826d..401270e465 100644
--- a/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/900.ora-30000-to-ora-49999-of-oracle-mode.md
+++ b/en-US/700.reference/900.error-code/700.error-code-of-oracle-mode/900.ora-30000-to-ora-49999-of-oracle-mode.md
@@ -269,6 +269,21 @@ ORA-30496: Argument should be a constant
* Cause: The argument value is not a constant.
+## ORA-30497: Argument should be a constant or a function of expressions in GROUP BY.
+
+* Error code in OceanBase Database: 11010
+
+* SQLSTATE: HY000
+
+* Cause: The delimiter for the `LISTAGG` aggregate function can only be a constant or a `GROUP BY` expression.
+
+* Solution: Check whether the delimiter for the `LISTAGG` aggregate function is correct.
+
+
+ Note
+ This error code is introduced since OceanBase Database V4.3.0.
+
+
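+A minimal sketch (the table and column names are hypothetical) contrasting a valid constant delimiter with an invalid one:
+
+```sql
+-- Valid: the delimiter is a constant.
+SELECT LISTAGG(c1, ',') WITHIN GROUP (ORDER BY c1) FROM t GROUP BY c3;
+
+-- Invalid: the delimiter c2 is neither a constant nor a GROUP BY expression,
+-- which raises ORA-30497.
+SELECT LISTAGG(c1, c2) WITHIN GROUP (ORDER BY c1) FROM t GROUP BY c3;
+```
+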
ORA-30553: The function is not deterministic
----------------------------------------------------------------
From 21008992512c8317f94b9a3ebc108077350ed6fa Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Thu, 18 Apr 2024 15:08:12 +0800
Subject: [PATCH 37/63] v430-beta-faq-view-update
---
...ystem-configuration-items-overview-list.md | 586 ++++++++++--------
.../13900.memstore_limit_percentage.md | 24 +-
.../29900.enable_rpc_authentication_bypass.md | 37 ++
.../30000.strict_check_os_params.md | 41 ++
.../30100.enable_dblink.md | 38 ++
.../27000.default_table_store_format.md | 55 ++
.../16900.ob_enable_pl_cache-global.md | 45 ++
en-US/800.FAQ/800.column-storage-faq.md | 108 ++++
8 files changed, 664 insertions(+), 270 deletions(-)
create mode 100644 en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/29900.enable_rpc_authentication_bypass.md
create mode 100644 en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/30000.strict_check_os_params.md
create mode 100644 en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/30100.enable_dblink.md
create mode 100644 en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/400.tenant-level-configuration-items/27000.default_table_store_format.md
create mode 100644 en-US/700.reference/800.configuration-items-and-system-variables/200.system-variable/300.global-system-variable/16900.ob_enable_pl_cache-global.md
create mode 100644 en-US/800.FAQ/800.column-storage-faq.md
diff --git a/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/200.system-configuration-items-overview-list.md b/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/200.system-configuration-items-overview-list.md
index 38289e124b..f8f5d09fe6 100644
--- a/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/200.system-configuration-items-overview-list.md
+++ b/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/200.system-configuration-items-overview-list.md
@@ -15,62 +15,61 @@ This topic describes cluster- and tenant-level parameters in OceanBase Database.
| Parameter | Description |
|---------|---------|
-| [mysql_port](300.cluster-level-configuration-items/15100.mysql_port.md) | Specifies the port number for the SQL service protocol. |
-| [rpc_port](300.cluster-level-configuration-items/17800.rpc_port.md) | Specifies the protocol port number for remote access. |
-| [sql_protocol_min_tls_version](300.cluster-level-configuration-items/29800.sql_protocol_min_tls_version.md) | Specifies the minimum version of the SSL/TLS protocol used by SSL connections for SQL statements. |
-| [ssl_client_authentication](300.cluster-level-configuration-items/19400.ssl_client_authentication.md) | Specifies whether to enable SSL authentication. |
-| [ssl_client_authentication](300.cluster-level-configuration-items/19400.ssl_client_authentication.md) | Specifies whether to enable SSL authentication. |
-| [ssl_external_kms_info](300.cluster-level-configuration-items/19500.ssl_external_kms_info.md) | Records information regarding the dependencies of the SSL functionality in OceanBase Database. It stores the relevant configurations required for different SSL scenarios in the form of a JSON string. The JSON should include at least the `ssl_mode` field. |
-| [enable_sql_audit](300.cluster-level-configuration-items/8700.enable_sql_audit.md) | Specifies whether to enable SQL audit. |
+| [mysql_port](300.cluster-level-configuration-items/15100.mysql_port.md) | The port number for the SQL service protocol. |
+| [rpc_port](300.cluster-level-configuration-items/17800.rpc_port.md) | The protocol port number for remote access. |
+| [sql_protocol_min_tls_version](300.cluster-level-configuration-items/29800.sql_protocol_min_tls_version.md) | The minimum version of the SSL/TLS protocol used by SSL connections for SQL statements. |
+| [ssl_client_authentication](300.cluster-level-configuration-items/19400.ssl_client_authentication.md) | Specifies whether to enable SSL authentication. |
+| [ssl_external_kms_info](300.cluster-level-configuration-items/19500.ssl_external_kms_info.md) | The information that the SSL feature of OceanBase Database relies on, which is recorded in JSON strings for different SSL modes. Such a JSON string contains at least the `ssl_mode` field. |
+| [enable_sql_audit](300.cluster-level-configuration-items/8700.enable_sql_audit.md) | Specifies whether to enable SQL audit. |
### Backup and restore parameters
-| Parameter | Description |
+| Parameter | Description |
|----------|---------------|
-| [backup_backup_dest](300.cluster-level-configuration-items/900.backup_backup_dest.md) | Specifies the destination of data backup. |
-| [backup_backup_dest_option](300.cluster-level-configuration-items/1100.backup_backup_dest_option.md) | Specifies parameters related to secondary backup. |
-| [backup_dest_option](300.cluster-level-configuration-items/1200.backup_dest_option.md) | Specifies parameters related to backup. |
-| [backup_concurrency](300.cluster-level-configuration-items/1300.backup_concurrency.md) | Specifies the number of concurrent tasks for writing data to the file system during backup. |
-| [backup_dest](300.cluster-level-configuration-items/1400.backup_dest.md) | Specifies the path for archiving baseline backup and log files. |
-| [backup_log_archive_option](300.cluster-level-configuration-items/1600.backup_log_archive_option.md) | Specifies the archiving options for log backup. |
-| [backup_net_limit](300.cluster-level-configuration-items/1700.backup_net_limit.md) | Specifies the total bandwidth for cluster backup. |
-| [backup_recovery_window](300.cluster-level-configuration-items/1800.backup_recovery_window.md) | Specifies the time at which backup data can be restored. |
-| [backup_region](300.cluster-level-configuration-items/1900.backup_region.md) | Specifies the region where backup tasks are executed. |
-| [backup_zone](300.cluster-level-configuration-items/2000.backup_zone.md) | Specifies the zone where the backup tasks are executed. |
-| [log_archive_batch_buffer_limit](300.cluster-level-configuration-items/12700.log_archive_batch_buffer_limit.md) | Specifies the maximum memory available for log archiving on a single machine. |
-| [log_archive_checkpoint_interval](300.cluster-level-configuration-items/12800.log_archive_checkpoint_interval.md) | Specifies the interval at which log archive checkpoints are generated for cold data. |
-| [restore_concurrency](300.cluster-level-configuration-items/27500.restore_concurrency.md) | Specifies the maximum number of concurrent tasks of restoring tenant data from backup files. |
+| [backup_backup_dest](300.cluster-level-configuration-items/900.backup_backup_dest.md) | The destination for second backup. |
+| [backup_backup_dest_option](300.cluster-level-configuration-items/1100.backup_backup_dest_option.md) | The parameters for second backup. |
+| [backup_dest_option](300.cluster-level-configuration-items/1200.backup_dest_option.md) | The parameters for backup. |
+| [backup_concurrency](300.cluster-level-configuration-items/1300.backup_concurrency.md) | The concurrency of data writes to the file system during backup. |
+| [backup_dest](300.cluster-level-configuration-items/1400.backup_dest.md) | The path for baseline data backup and log archiving. |
+| [backup_log_archive_option](300.cluster-level-configuration-items/1600.backup_log_archive_option.md) | The options for log archiving during backup. |
+| [backup_net_limit](300.cluster-level-configuration-items/1700.backup_net_limit.md) | The total bandwidth for cluster backup. |
+| [backup_recovery_window](300.cluster-level-configuration-items/1800.backup_recovery_window.md) | The time window in which backups can be recovered. |
+| [backup_region](300.cluster-level-configuration-items/1900.backup_region.md) | The region where backup tasks are executed. |
+| [backup_zone](300.cluster-level-configuration-items/2000.backup_zone.md) | The zone where backup tasks are executed. |
+| [log_archive_batch_buffer_limit](300.cluster-level-configuration-items/12700.log_archive_batch_buffer_limit.md) | The maximum memory available for log archiving on a single server. |
+| [log_archive_checkpoint_interval](300.cluster-level-configuration-items/12800.log_archive_checkpoint_interval.md) | The interval between log archiving checkpoints of cold data. |
+| [restore_concurrency](300.cluster-level-configuration-items/27500.restore_concurrency.md) | The maximum concurrency of tenant data recovery from backups. |
### Cgroup parameters
| Parameter | Description |
|---------|----------|
-| [enable_cgroup](300.cluster-level-configuration-items/24400.enable_cgroup.md) | Specifies whether to enable the control group feature on the OBServer node. |
+| [enable_cgroup](300.cluster-level-configuration-items/24400.enable_cgroup.md) | Specifies whether to enable the control group (cgroup) feature for the OBServer node. |
### CPU parameters
-| Parameter | Description |
-|------------------------------------|-----------------------------------------------|
-| [cpu_count](300.cluster-level-configuration-items/4500.cpu_count.md) | Specifies the total number of system CPUs. If the parameter is set to 0, the system will automatically detect the number of CPU cores. |
-| [server_balance_cpu_mem_tolerance_percent](300.cluster-level-configuration-items/18200.server_balance_cpu_mem_tolerance_percent.md) | Specifies the tolerance of CPU and memory imbalance in the node load balancing strategy. |
-| [server_cpu_quota_max](300.cluster-level-configuration-items/18600.server_cpu_quota_max.md) | Specifies the maximum CPU quota available for the system. |
-| [server_cpu_quota_min](300.cluster-level-configuration-items/18700.server_cpu_quota_min.md) | Specifies the minimum CPU quota for the system. The quota is automatically reserved. |
-| [token_reserved_percentage](300.cluster-level-configuration-items/21700.token_reserved_percentage.md) | Specifies the percentage of idle tokens reserved for tenants in the scheduling of CPUs for the tenants. |
-| [workers_per_cpu_quota](300.cluster-level-configuration-items/23200.workers_per_cpu_quota.md) | Specifies the number of worker threads allocated to each CPU quota. |
-| [cpu_reserved](300.cluster-level-configuration-items/4100.cpu_reserved.md) | Specifies the number of CPU cores reserved in the system. The remaining CPU cores are exclusively occupied by OceanBase Database.|
-| [sys_cpu_limit_trigger](300.cluster-level-configuration-items/20600.sys_cpu_limit_trigger.md) | Specifies the CPU utilization threshold for suspending backend tasks in the system. |
-| [system_cpu_quota](300.cluster-level-configuration-items/21200.system_cpu_quota.md) | Specifies the CPU quota available to the system tenant. |
-| [tenant_cpu_variation_per_server](300.cluster-level-configuration-items/21400.tenant_cpu_variation_per_server.md) | Specifies the allowed offset of CPU quota scheduling among multiple units of a tenant.|
+| Parameter | Description |
+|-----------------------------------------------------------------------------------------|-----------------------------------------------|
+| [cpu_count](300.cluster-level-configuration-items/4500.cpu_count.md) | The total number of CPU cores in the system. If the parameter is set to `0`, the system will automatically detect the number of CPU cores. |
+| [server_balance_cpu_mem_tolerance_percent](300.cluster-level-configuration-items/18200.server_balance_cpu_mem_tolerance_percent.md) | The tolerance of CPU and memory resource imbalance in the node load balancing strategy. |
+| [server_cpu_quota_max](300.cluster-level-configuration-items/18600.server_cpu_quota_max.md) | The maximum CPU quota for the system. |
+| [server_cpu_quota_min](300.cluster-level-configuration-items/18700.server_cpu_quota_min.md) | The minimum CPU quota for the system. The system automatically reserves the quota. |
+| [token_reserved_percentage](300.cluster-level-configuration-items/21700.token_reserved_percentage.md) | The percentage of idle tokens reserved for tenants in the scheduling of CPU resources for the tenants. |
+| [workers_per_cpu_quota](300.cluster-level-configuration-items/23200.workers_per_cpu_quota.md) | The number of worker threads allocated to each CPU quota. |
+| [cpu_reserved](300.cluster-level-configuration-items/4100.cpu_reserved.md) | The number of CPUs reserved in the system. The remaining CPUs are exclusively occupied by OceanBase Database. |
+| [sys_cpu_limit_trigger](300.cluster-level-configuration-items/20600.sys_cpu_limit_trigger.md) | The CPU utilization threshold. When CPU utilization reaches the threshold, backend tasks in the system will be suspended. |
+| [system_cpu_quota](300.cluster-level-configuration-items/21200.system_cpu_quota.md) | The CPU quota for the sys tenant. |
+| [tenant_cpu_variation_per_server](300.cluster-level-configuration-items/21400.tenant_cpu_variation_per_server.md) | The variation allowed for CPU quota scheduling among multiple units of a tenant. |
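+
+As a minimal usage sketch (the value `32` is an example), cluster-level parameters such as `cpu_count` can be inspected and changed as follows:
+
+```sql
+-- View the current value and attributes of the parameter.
+SHOW PARAMETERS LIKE 'cpu_count';
+
+-- Change the parameter at the cluster level.
+ALTER SYSTEM SET cpu_count = 32;
+```
+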
### Read/Write and query parameters
-| Parameter | Description |
+| Parameter | Description |
|-----------------------------------------------------------------------------------|-----------------------------------------------------------|
-| [weak_read_version_refresh_interval](300.cluster-level-configuration-items/23100.weak_read_version_refresh_interval.md) | Specifies the version number refresh interval during reads under the weak consistency strategy. This parameter affects the latency of data reads under the weak consistency strategy. |
-| [large_query_worker_percentage](300.cluster-level-configuration-items/11600.large_query_worker_percentage.md) | Specifies the percentage of worker threads reserved for large queries. |
-| [large_query_threshold](300.cluster-level-configuration-items/11500.large_query_threshold.md) | Specifies the threshold of the query execution time. |
-| [trace_log_slow_query_watermark](300.cluster-level-configuration-items/21900.trace_log_slow_query_watermark.md) | Specifies the execution time threshold to identify a slow query. Trace logs of slow queries are written to system logs. |
-
+| [weak_read_version_refresh_interval](300.cluster-level-configuration-items/23100.weak_read_version_refresh_interval.md) | The version refresh interval for weak consistency reads. This parameter affects the latency of weak consistency reads. |
+| [large_query_worker_percentage](300.cluster-level-configuration-items/11600.large_query_worker_percentage.md) | The percentage of worker threads reserved for large queries. |
+| [large_query_threshold](300.cluster-level-configuration-items/11500.large_query_threshold.md) | The execution time threshold to identify a large query. |
+| [trace_log_slow_query_watermark](300.cluster-level-configuration-items/21900.trace_log_slow_query_watermark.md) | The execution time threshold to identify a slow query. Trace logs of slow queries are written to system logs. |
### Load balancing parameters
@@ -81,14 +80,14 @@ This topic describes cluster- and tenant-level parameters in OceanBase Database.
| [balancer_tolerance_percentage](300.cluster-level-configuration-items/3000.balancer_tolerance_percentage.md) | The tolerance for disk imbalance among multiple units of a tenant, which is set in the load balancing strategy. If the imbalance is within the range that starts at the average value minus the tolerance and ends at the average value plus the tolerance, no balancing action is triggered. |
| [server_balance_critical_disk_waterlevel](300.cluster-level-configuration-items/18300.server_balance_critical_disk_waterlevel.md) | The disk usage threshold that triggers disk load balancing. |
| [server_balance_disk_tolerance_percent](300.cluster-level-configuration-items/18400.server_balance_disk_tolerance_percent.md) | The tolerance for disk load imbalance among nodes, which is set in the load balancing strategy. |
-| [resource_hard_limit](300.cluster-level-configuration-items/16800.resource_hard_limit.md) | Defines the over-allocation percentage of CPU resources. |
+| [resource_hard_limit](300.cluster-level-configuration-items/16800.resource_hard_limit.md) | The over-allocation percentage of CPU resources. |
| [enable_sys_unit_standalone](300.cluster-level-configuration-items/9000.enable_sys_unit_standalone.md) | Specifies whether the unit of the sys tenant exclusively occupies a node. |
-| [balancer_emergency_percentage](300.cluster-level-configuration-items/2300.balancer_emergency_percentage.md) | Specifies the load threshold of each unit. When the load of a unit exceeds the specified threshold, you can enable replica migration to migrate data to an external system for load balancing even during a major compaction. |
-| [balancer_timeout_check_interval](300.cluster-level-configuration-items/2900.balancer_timeout_check_interval.md) | Specifies the time interval for checking the timeout of background tasks such as load balancing tasks. |
-| [data_copy_concurrency](300.cluster-level-configuration-items/4600.data_copy_concurrency.md) | Specifies the maximum number of concurrent data migration and replication tasks allowed in the system. |
-| [tenant_groups](300.cluster-level-configuration-items/22100.tenant_groups.md) | Specifies the tenant groups used in the load balancing strategy. |
-| [unit_balance_resource_weight](300.cluster-level-configuration-items/22900.unit_balance_resource_weight.md) | Specifies the weights of resources in unit load balancing strategies. Usually, you do not need to set this parameter. |
-| [resource_soft_limit](300.cluster-level-configuration-items/27400.resource_soft_limit.md) | Specifies whether to enable unit balancing. |
+| [balancer_emergency_percentage](300.cluster-level-configuration-items/2300.balancer_emergency_percentage.md) | The load threshold of each unit. When the load of a unit exceeds this threshold, replica migration for load balancing is allowed even during a major compaction. |
+| [balancer_timeout_check_interval](300.cluster-level-configuration-items/2900.balancer_timeout_check_interval.md) | The interval for checking whether backend tasks such as load balancing time out. |
+| [data_copy_concurrency](300.cluster-level-configuration-items/4600.data_copy_concurrency.md) | The maximum concurrency for data migration and replication in the system. |
+| [tenant_groups](300.cluster-level-configuration-items/22100.tenant_groups.md) | The tenant groups used in the load balancing strategy. |
+| [unit_balance_resource_weight](300.cluster-level-configuration-items/22900.unit_balance_resource_weight.md) | The resource weight used in the unit balancing strategy. Generally, you do not need to set this parameter. |
+| [resource_soft_limit](300.cluster-level-configuration-items/27400.resource_soft_limit.md) | Specifies whether to enable unit balancing. |
### Replica parameters
@@ -98,8 +97,19 @@ This topic describes cluster- and tenant-level parameters in OceanBase Database.
| [ls_meta_table_check_interval](300.cluster-level-configuration-items/16700.ls_meta_table_check_interval.md) | The interval at which the background inspection threads inspect the `DBA_OB_LS_LOCATIONS` or `CDB_OB_LS_LOCATIONS` view. |
| [sys_bkgd_migration_change_member_list_timeout](300.cluster-level-configuration-items/20000.sys_bkgd_migration_change_member_list_timeout.md) | The timeout period for modifying the Paxos member list during replica migration. |
| [sys_bkgd_migration_retry_num](300.cluster-level-configuration-items/20100.sys_bkgd_migration_retry_num.md) | The maximum number of retries after a replica migration task fails. |
-| [balance_blacklist_failure_threshold](300.cluster-level-configuration-items/2100.balance_blacklist_failure_threshold.md) | Specifies the threshold for the number of consecutive failures in background tasks such as replica migration. When the number of consecutive failures in these background tasks exceeds this threshold, they will be added to the blocklist. |
-| [balance_blacklist_retry_interval](300.cluster-level-configuration-items/2200.balance_blacklist_retry_interval.md) | Specifies the time interval for retrying background tasks such as replica migration after they are added to the blocklist. |
+| [balance_blacklist_failure_threshold](300.cluster-level-configuration-items/2100.balance_blacklist_failure_threshold.md) | The maximum number of consecutive failures of a backend task such as replica migration, upon which the task is added to the blocklist. |
+| [balance_blacklist_retry_interval](300.cluster-level-configuration-items/2200.balance_blacklist_retry_interval.md) | The retry interval for a backend task such as replica migration after it is added to the blocklist. |
+| [election_cpu_quota](300.cluster-level-configuration-items/6300.election_cpu_quota.md) | The CPU quota allocated to backend tasks related to replica election. |
+| [election_blacklist_interval](300.cluster-level-configuration-items/6400.election_blacklist_interval.md) | The interval during which the dismissed leader cannot be re-elected as the leader. |
+| [enable_auto_leader_switch](300.cluster-level-configuration-items/6700.enable_auto_leader_switch.md) | Specifies whether to enable automatic leader switchover. |
+| [enable_smooth_leader_switch](300.cluster-level-configuration-items/8500.enable_smooth_leader_switch.md) | Specifies whether to enable smooth switchover to the leader. |
+| [global_index_build_single_replica_timeout](300.cluster-level-configuration-items/10200.global_index_build_single_replica_timeout.md) | The timeout period for creating a replica during global index creation. |
+| [get_leader_candidate_rpc_timeout](300.cluster-level-configuration-items/10400.get_leader_candidate_rpc_timeout.md) | The timeout period of an internal request for obtaining the leader candidate in the automatic leader switchover strategy. |
+| [migrate_concurrency](300.cluster-level-configuration-items/14600.migrate_concurrency.md) | The maximum concurrency for internal data migration. |
+| [rebuild_replica_data_lag_threshold](300.cluster-level-configuration-items/17000.rebuild_replica_data_lag_threshold.md) | The threshold of the size difference of transaction logs between the leader and a follower. When the difference reaches the threshold, replica reconstruction is triggered. |
+| [server_data_copy_out_concurrency](300.cluster-level-configuration-items/19200.server_data_copy_out_concurrency.md) | The maximum concurrency for data migration from a single node. |
+| [server_data_copy_in_concurrency](300.cluster-level-configuration-items/19300.server_data_copy_in_concurrency.md) | The maximum concurrency for data migration to a single node. |
+| [replica_safe_remove_time](300.cluster-level-configuration-items/27200.replica_safe_remove_time.md) | The retention period of a deleted replica before it can be cleared. |
### Cache parameters
@@ -116,7 +126,11 @@ This topic describes cluster- and tenant-level parameters in OceanBase Database.
| [opt_tab_stat_cache_priority](300.cluster-level-configuration-items/24000.opt_tab_stat_cache_priority.md) | The priority of the statistics cache. |
| [tablet_ls_cache_priority](300.cluster-level-configuration-items/23900.tablet_ls_cache_priority.md) | The priority of the tablet mapping cache. |
| [user_block_cache_priority](300.cluster-level-configuration-items/22500.user_block_cache_priority.md) | The priority of the data block cache in the cache system. |
-
+| [index_info_block_cache_priority](300.cluster-level-configuration-items/11100.index_info_block_cache_priority.md) | The priority of the block index in the cache system. |
+| [index_cache_priority](300.cluster-level-configuration-items/11200.index_cache_priority.md) | The priority of the index cache in the cache system. |
+| [user_tab_col_stat_cache_priority](300.cluster-level-configuration-items/23400.user_tab_col_stat_cache_priority.md) | The priority of the statistics cache in the cache system. |
+| [plan_cache_high_watermark](300.cluster-level-configuration-items/16000.plan_cache_high_watermark.md) | The memory threshold to trigger plan cache eviction. Automatic eviction is triggered when the memory occupied by the plan cache reaches the specified threshold. |
+| [plan_cache_low_watermark](300.cluster-level-configuration-items/16100.plan_cache_low_watermark.md) | The memory threshold to stop plan cache eviction. The eviction is stopped when the memory occupied by the plan cache decreases to the specified threshold. |
### Partition parameters
@@ -125,13 +139,19 @@ This topic describes cluster- and tenant-level parameters in OceanBase Database.
| [tablet_meta_table_check_interval](300.cluster-level-configuration-items/15700.tablet_meta_table_check_interval.md) | The interval at which the background inspection threads inspect the `DBA_OB_TABLET_REPLICAS` or `CDB_OB_TABLET_REPLICAS` view. |
| [tablet_meta_table_scan_batch_count](300.cluster-level-configuration-items/15800.tablet_meta_table_scan_batch_count.md) | The number of tablets cached in memory when the tablet meta table iterator is working. |
| [tablet_size](300.cluster-level-configuration-items/21000.tablet_size.md) | The size of each shard during intra-partition parallel processing such as parallel compactions and queries. |
-| [auto_broadcast_location_cache_rate_limit](300.cluster-level-configuration-items/200.auto_broadcast_location_cache_rate_limit.md) | Specifies the maximum number of partitions for which each OBServer can broadcast location information changes per second. |
+| [auto_broadcast_location_cache_rate_limit](300.cluster-level-configuration-items/200.auto_broadcast_location_cache_rate_limit.md) | The maximum number of partitions whose location changes can be broadcast on each OBServer per second. |
+| [auto_refresh_location_cache_rate_limit](300.cluster-level-configuration-items/500.auto_refresh_location_cache_rate_limit.md) | The maximum number of partitions that can be automatically refreshed on each OBServer at a time. |
+| [enable_pg](300.cluster-level-configuration-items/7800.enable_pg.md) | Specifies whether to enable the partition group feature. |
+| [gc_wait_archive](300.cluster-level-configuration-items/10000.gc_wait_archive.md) | Specifies whether to defer garbage collection for a partition until all the logs in the partition have been archived. |
+| [partition_table_check_interval](300.cluster-level-configuration-items/26300.partition_table_check_interval.md) | The interval at which the OBServer node deletes replicas that no longer exist from the internal partition table. |
### Background execution thread parameters
| Parameter | Description |
|-------------|-----------|
| [sql_net_thread_count](300.cluster-level-configuration-items/25100.sql_net_thread_count.md) | The number of I/O threads for the MySQL cluster, that is, the number of `global_sql_nio_server` threads. The default value 0 indicates that the value of the parameter is the same as that of `net_thread_count`. |
+| [auto_leader_switch_interval](300.cluster-level-configuration-items/400.auto_leader_switch_interval.md) | The working interval of the backend thread for automatic leader switchover. |
+| [switchover_process_thread_count](300.cluster-level-configuration-items/20300.switchover_process_thread_count.md) | The size of the thread pool for primary/standby cluster switchover. |
### I/O parameters
@@ -141,6 +161,18 @@ This topic describes cluster- and tenant-level parameters in OceanBase Database.
| [syslog_io_bandwidth_limit](300.cluster-level-configuration-items/20400.syslog_io_bandwidth_limit.md) | The maximum I/O bandwidth available for system logs. If this value is reached, the remaining system logs are discarded. |
| [disk_io_thread_count](300.cluster-level-configuration-items/6100.disk_io_thread_count.md) | The number of disk I/O threads. The value must be an even number. |
| [net_thread_count](300.cluster-level-configuration-items/15200.net_thread_count.md) | The number of network I/O threads. |
+| [data_storage_error_tolerance_time](300.cluster-level-configuration-items/5100.data_storage_error_tolerance_time.md) | The tolerance period after which the status of an abnormal data disk is set to `ERROR`. |
+| [sys_bkgd_io_high_percentage](300.cluster-level-configuration-items/20400.sys_bkgd_io_high_percentage.md) | The highest percentage of disk bandwidth that can be used by backend I/O operations. |
+| [sys_bkgd_io_low_percentage](300.cluster-level-configuration-items/20500.sys_bkgd_io_low_percentage.md) | The minimum percentage of disk bandwidth reserved for backend I/O operations. |
+| [user_iort_up_percentage](300.cluster-level-configuration-items/23200.user_iort_up_percentage.md) | The I/O latency threshold for the user disk. When the I/O latency of the user disk reaches this threshold, the traffic of backend I/O operations will be throttled. |
+| [ob_esi_rpc_port](300.cluster-level-configuration-items/24300.ob_esi_rpc_port.md) | The communication port between the obesi process and the observer process. |
+| [enable_ob_esi_process](300.cluster-level-configuration-items/24400.enable_ob_esi_process.md) | Specifies whether to enable the obesi process (external storage API). |
+| [ob_esi_session_timeout](300.cluster-level-configuration-items/24500.ob_esi_session_timeout.md) | The timeout period of the active session resources for the obesi process. |
+| [ob_esi_io_concurrency](300.cluster-level-configuration-items/24600.ob_esi_io_concurrency.md) | The I/O concurrency for the obesi process. |
+| [ob_esi_syslog_level](300.cluster-level-configuration-items/24800.ob_esi_syslog_level.md) | The current log level for the obesi process. |
+| [ob_esi_max_syslog_file_count](300.cluster-level-configuration-items/24900.ob_esi_max_syslog_file_count.md) | The maximum number of log files that can be retained for the obesi process. |
+| [multiblock_read_gap_size](300.cluster-level-configuration-items/25500.multiblock_read_gap_size.md) | The size of multiple block caches from which data can be read in one I/O operation. |
+| [multiblock_read_size](300.cluster-level-configuration-items/25600.multiblock_read_size.md) | The amount of data read in one aggregated I/O operation during data access. |
### Cluster parameters
@@ -150,6 +182,11 @@ This topic describes cluster- and tenant-level parameters in OceanBase Database.
| [cluster](300.cluster-level-configuration-items/4200.cluster.md) | The name of the current OceanBase cluster. |
| [cluster_id](300.cluster-level-configuration-items/4300.cluster_id.md) | The ID of the current OceanBase cluster. |
| [rpc_timeout](300.cluster-level-configuration-items/17900.rpc_timeout.md) | The timeout period of an internal request in the cluster. |
+| [all_cluster_list](300.cluster-level-configuration-items/100.all_cluster_list.md) | The list of servers that access the same URL specified by `config_url`. |
+| [enable_election_group](300.cluster-level-configuration-items/7100.enable_election_group.md) | Specifies whether to enable the election group strategy. |
+| [local_ip](300.cluster-level-configuration-items/29100.local_ip.md) | The IP address of the server where OceanBase Database is installed. |
+| [observer_id](300.cluster-level-configuration-items/29200.observer_id.md) | The unique identifier that RootService assigns to the OBServer node in the cluster. |
+| [min_observer_version](300.cluster-level-configuration-items/14800.min_observer_version.md) | The earliest OBServer node version in the cluster. |
### Bandwidth parameters
@@ -169,6 +206,7 @@ This topic describes cluster- and tenant-level parameters in OceanBase Database.
| [location_cache_cpu_quota](300.cluster-level-configuration-items/11900.location_cache_cpu_quota.md) | The CPU quota for the location cache module. |
| [location_fetch_concurrency](300.cluster-level-configuration-items/12500.location_fetch_concurrency.md) | The maximum number of concurrent requests for refreshing the location cache on a single server. |
| [location_refresh_thread_count](300.cluster-level-configuration-items/12600.location_refresh_thread_count.md) | The number of threads used by the OBServer node to obtain partition location information from RootService. |
+| [enable_auto_refresh_location_cache](300.cluster-level-configuration-items/6600.enable_auto_refresh_location_cache.md) | Specifies whether to enable automatic refresh of the location cache. |
### Directory parameters
@@ -177,23 +215,6 @@ This topic describes cluster- and tenant-level parameters in OceanBase Database.
| [config_additional_dir](300.cluster-level-configuration-items/4400.config_additional_dir.md) | The local directories for storing multiple copies of configuration files for redundancy. |
| [data_dir](300.cluster-level-configuration-items/4900.data_dir.md) | The directory for storing SSTables and other data. |
-### SQL request parameters
-
-| Parameter | Description |
-|------------------------------------------------------------|------------------------|
-| [sql_login_thread_count](300.cluster-level-configuration-items/24200.sql_login_thread_count.md) | The number of threads for processing SQL logon requests. |
-
-### CPU parameters
-
-| Parameter | Description |
-|-----------------------------------------------------------------------------------------|-----------------------------------------------|
-| [cpu_count](300.cluster-level-configuration-items/4500.cpu_count.md) | The total number of CPU cores in the system. If the parameter is set to 0, the system will automatically detect the number of CPU cores. |
-| [server_balance_cpu_mem_tolerance_percent](300.cluster-level-configuration-items/18200.server_balance_cpu_mem_tolerance_percent.md) | The tolerance of CPU and memory resource imbalance in the node load balancing strategy. |
-| [server_cpu_quota_max](300.cluster-level-configuration-items/18600.server_cpu_quota_max.md) | The maximum CPU quota for the system. |
-| [server_cpu_quota_min](300.cluster-level-configuration-items/18700.server_cpu_quota_min.md) | The minimum CPU quota for the system. The system automatically reserves the quota. |
-| [token_reserved_percentage](300.cluster-level-configuration-items/21700.token_reserved_percentage.md) | The percentage of idle tokens reserved for tenants in the scheduling of CPU resources for the tenants. |
-| [workers_per_cpu_quota](300.cluster-level-configuration-items/23200.workers_per_cpu_quota.md) | The number of worker threads allocated to each CPU quota. |
-
### Memory parameters
| Parameter | Description |
@@ -212,53 +233,127 @@ This topic describes cluster- and tenant-level parameters in OceanBase Database.
| [use_large_pages](300.cluster-level-configuration-items/22400.use_large_pages.md) | Manages the use of large memory pages by the database. |
| [datafile_maxsize](300.cluster-level-configuration-items/28400.datafile_maxsize.md) | The maximum space allowed in automatic scaling for disk files. |
| [datafile_next](300.cluster-level-configuration-items/28500.datafile_next.md) | The step size of automatic scaling for disk files. |
+| [storage_meta_cache_priority](300.cluster-level-configuration-items/29300.storage_meta_cache_priority.md) | The priority of the storage meta cache in KVCache. |
-### Debugging parameters
+### PX parameters
| Parameter | Description |
-|----------------------------------------------------------------------|--------------------------------------|
-| [debug_sync_timeout](300.cluster-level-configuration-items/5600.debug_sync_timeout.md) | The timeout period for a Debug Sync operation. When the value is set to 0, Debug Sync is disabled. |
-| [enable_rich_error_msg](300.cluster-level-configuration-items/8300.enable_rich_error_msg.md) | Specifies whether to add debugging information, such as the server address, error time, and trace ID, to the client message. |
+|-------------------------------------------------------------------------|------------------------------|
+| [px_workers_per_cpu_quota](300.cluster-level-configuration-items/16300.px_workers_per_cpu_quota.md) | The proportion of Parallel eXecution (PX) worker threads. |
+| [px_task_size](300.cluster-level-configuration-items/16200.px_task_size.md) | The amount of data processed by the SQL parallel query engine in each task. |
+| [max_px_worker_count](300.cluster-level-configuration-items/13200.max_px_worker_count.md) | The maximum number of threads for the SQL parallel query engine. |
-### Compression algorithm parameters
+### Other parameters
| Parameter | Description |
-|---------------------------------------------------------------------------------|-----------------------------------|
-| [default_compress_func](300.cluster-level-configuration-items/5800.default_compress_func.md) | The default algorithm for compressing table data. You can also specify another compression algorithm when creating a table. |
-| [default_compress](300.cluster-level-configuration-items/5700.default_compress.md) | The default compression strategy used during table creation in the Oracle mode. |
+|------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------|
+| [builtin_db_data_verify_cycle](300.cluster-level-configuration-items/3300.builtin_db_data_verify_cycle.md) | The cycle of a bad block check in the unit of days. When the value is set to 0, bad block checks are not performed. |
+| [data_storage_warning_tolerance_time](300.cluster-level-configuration-items/5200.data_storage_warning_tolerance_time.md) | The tolerance period after which the data disk is set to the WARNING state. |
+| [dead_socket_detection_timeout](300.cluster-level-configuration-items/5500.dead_socket_detection_timeout.md) | The interval for detecting invalid sockets. |
+| [migration_disable_time](300.cluster-level-configuration-items/14700.migration_disable_time.md) | The period during which data migration is suspended for a node after data migration to the node fails due to reasons such as full disk usage. |
+| [schema_history_expire_time](300.cluster-level-configuration-items/18000.schema_history_expire_time.md) | The validity period of history metadata. |
+| [datafile_size](300.cluster-level-configuration-items/5400.datafile_size.md) | The size of a data file. This parameter is generally left unspecified. |
+| [devname](300.cluster-level-configuration-items/6000.devname.md) | The name of the network interface card (NIC) to which the service process is bound. |
+| [enable_perf_event](300.cluster-level-configuration-items/7700.enable_perf_event.md) | Specifies whether to enable the information collection feature for performance events. |
+| [enable_record_trace_id](300.cluster-level-configuration-items/8000.enable_record_trace_id.md) | Specifies whether to record the trace ID configured by the application. |
+| [enable_upgrade_mode](300.cluster-level-configuration-items/9500.enable_upgrade_mode.md) | Specifies whether to enable the upgrade mode. In upgrade mode, some backend system features are suspended. |
+| [enable_ddl](300.cluster-level-configuration-items/6800.enable_ddl.md) | Specifies whether to allow the execution of DDL statements. |
+| [high_priority_net_thread_count](300.cluster-level-configuration-items/10700.high_priority_net_thread_count.md) | The number of network threads with a high priority. When this parameter is set to `0`, this feature is disabled. |
+| [obconfig_url](300.cluster-level-configuration-items/15500.obconfig_url.md) | The URL of the OBConfig service. |
+| [rpc_port](300.cluster-level-configuration-items/17800.rpc_port.md) | The RPC port. |
+| [stack_size](300.cluster-level-configuration-items/19600.stack_size.md) | The size of the function call stack for programs. |
+| [tenant_task_queue_size](300.cluster-level-configuration-items/21600.tenant_task_queue_size.md) | The request queue size of each tenant. |
+| [zone](300.cluster-level-configuration-items/23300.zone-cluster.md) | The name of the zone where the node is located. This parameter is generally left unspecified. |
+| [recyclebin_object_expire_time](300.cluster-level-configuration-items/16600.recyclebin_object_expire_time.md) | The period during which a schema object can be retained in the recycle bin. After the period elapses, the object will be purged from the recycle bin. |
+| [default_row_format](300.cluster-level-configuration-items/5900.default_row_format.md) | The default row format used in table creation in MySQL mode. |
+| [sys_bkgd_net_percentage](300.cluster-level-configuration-items/20200.sys_bkgd_net_percentage.md) | The maximum percentage of network bandwidth for backend system tasks. |
+| [schema_history_recycle_interval](300.cluster-level-configuration-items/18100.schema_history_recycle_interval.md) | The interval for recycling `schema` multi-version history files. |
+| [enable_asan_for_memory_context](300.cluster-level-configuration-items/24300.enable_asan_for_memory_context.md) | Specifies whether to enable ObAsanAllocator when ob_asan is working. By default, ObAllocator is the allocator of MemoryContext. |
+| [ofs_list](300.cluster-level-configuration-items/25700.ofs_list.md) | The list of URLs for connecting to the OceanBase File System (OFS) of all the zones so that RootService can access files across zones in OFS deployment mode. OFS is a distributed storage system independently designed for OceanBase Database. |
+
+
+### RootService parameters
+
+| Parameter | Description |
+|------------------------------------------------------------------------------------|----------------------------------------------------------|
+| [rootservice_async_task_queue_size](300.cluster-level-configuration-items/17100.rootservice_async_task_queue_size.md) | The size of the internal asynchronous task queue for RootService. |
+| [rootservice_async_task_thread_count](300.cluster-level-configuration-items/17200.rootservice_async_task_thread_count.md) | The size of the thread pool for internal asynchronous tasks of RootService. |
+| [rootservice_list](300.cluster-level-configuration-items/17300.rootservice_list.md) | The list of servers where the RootService and its replicas are deployed. |
+| [rootservice_ready_check_interval](300.cluster-level-configuration-items/17500.rootservice_ready_check_interval.md) | The wait time after the RootService is started, during which the cluster status is checked. |
+| [rootservice_memory_limit](300.cluster-level-configuration-items/17400.rootservice_memory_limit.md) | The maximum memory available to RootService. |
+| [lease_time](300.cluster-level-configuration-items/11800.lease_time.md) | The heartbeat lease period. |
+| [server_check_interval](300.cluster-level-configuration-items/18500.server_check_interval.md) | The interval at which the server checks the table consistency. |
+| [server_permanent_offline_time](300.cluster-level-configuration-items/19000.server_permanent_offline_time.md) | The time threshold for heartbeat missing at which a server is considered permanently offline. Data replicas on a permanently offline server must be automatically supplemented. |
+| [ob_event_history_recycle_interval](300.cluster-level-configuration-items/15300.ob_event_history_recycle_interval.md) | The interval at which historical events are recycled. |
+| [enable_rootservice_standalone](300.cluster-level-configuration-items/8600.enable_rootservice_standalone.md) | Specifies whether to allow the sys tenant and RootService to exclusively occupy an OBServer node. |
+| [fast_recovery_concurrency](300.cluster-level-configuration-items/9400.fast_recovery_concurrency.md) | The maximum concurrency for executing fast recovery tasks scheduled by RootService on an OBServer. |
+| [wait_leader_batch_count](300.cluster-level-configuration-items/23600.wait_leader_batch_count.md) | The maximum number of partitions to which RootService can send a command for leader switchover. |
+
+### RPC authentication parameters
+
+| Parameter | Description |
+|---------|---------|
+| [rpc_client_authentication_method](300.cluster-level-configuration-items/28100.rpc_client_authentication_method.md) | The security authentication method of the RPC client. |
+| [rpc_server_authentication_method](300.cluster-level-configuration-items/28200.rpc_server_authentication_method.md) | The security authentication method of the RPC server. |
+
+### SQL request parameters
+
+| Parameter | Description |
+|------------------------------------------------------------|------------------------|
+| [sql_login_thread_count](300.cluster-level-configuration-items/24200.sql_login_thread_count.md) | The number of threads for processing SQL logon requests. |
+| [sql_audit_memory_limit](300.cluster-level-configuration-items/19800.sql_audit_memory_limit.md) | The maximum memory available for SQL audit data. |
+| [enable_sys_table_ddl](300.cluster-level-configuration-items/8900.enable_sys_table_ddl.md) | Specifies whether to enable manual creation of system tables. |
+| [internal_sql_execute_timeout](300.cluster-level-configuration-items/11400.internal_sql_execute_timeout.md) | The timeout period of internal DML requests in the system. |
### Transaction and transaction log parameters
| Parameter | Description |
|---------------------------------------------------------------------------------|--------------------------------------------------|
-| [log_disk_size](300.cluster-level-configuration-items/23700.log_disk_size.md) | The size of the log disk where the redo logs are stored. |
-| [log_disk_percentage](300.cluster-level-configuration-items/23800.log_disk_percentage.md) | The percentage of the total disk space occupied by redo logs. |
-| [clog_sync_time_warn_threshold](300.cluster-level-configuration-items/3900.clog_sync_time_warn_threshold.md) | The warning threshold of time consumed for synchronizing transaction logs. When the consumed time reaches the threshold, a WARN-level log is generated. |
+| [log_disk_size](300.cluster-level-configuration-items/23700.log_disk_size.md) | The size of the log disk where the REDO logs are stored. |
+| [log_disk_percentage](300.cluster-level-configuration-items/23800.log_disk_percentage.md) | The percentage of the total disk space occupied by REDO logs. |
| [dtl_buffer_size](300.cluster-level-configuration-items/6200.dtl_buffer_size.md) | The size of the cache allocated to the SQL data transmission module. |
| [ignore_replay_checksum_error](300.cluster-level-configuration-items/10800.ignore_replay_checksum_error.md) | Specifies whether to ignore checksum errors that occur during transaction log playback. |
| [trx_2pc_retry_interval](300.cluster-level-configuration-items/22000.trx_2pc_retry_interval.md) | The interval for retrying a failed two-phase commit task. |
| [standby_fetch_log_bandwidth_limit](300.cluster-level-configuration-items/28600.standby_fetch_log_bandwidth_limit.md) | The maximum bandwidth per second available for the total traffic of synchronizing logs from the primary tenant by all servers in the cluster where the standby tenant resides. |
| [log_storage_warning_tolerance_time](300.cluster-level-configuration-items/24800.log_storage_warning_tolerance_time.md) | The maximum duration of I/O failures tolerable on the log disk before the log disk is considered damaged and follower-to-leader switchover is triggered. |
+| [clog_disk_utilization_threshold](300.cluster-level-configuration-items/3500.clog_disk_utilization_threshold.md) | The usage of the clog or ilog disk space that triggers log file reuse. |
+| [clog_expire_days](300.cluster-level-configuration-items/3600.clog_expire_days.md) | The retention period of clog files. When the retention period of a clog file expires, it will be deleted. |
+| [clog_cache_priority](300.cluster-level-configuration-items/3700.clog_cache_priority.md) | The caching priority of transaction logs. |
+| [clog_disk_usage_limit_percentage](300.cluster-level-configuration-items/3800.clog_disk_usage_limit_percentage.md) | The maximum percentage of disk space available for transaction logs. |
+| [clog_sync_time_warn_threshold](300.cluster-level-configuration-items/3900.clog_sync_time_warn_threshold.md) | The warning threshold of time consumed for synchronizing transaction logs. When the consumed time reaches the threshold, a WARN-level log is generated. |
+| [clog_transport_compress_func](300.cluster-level-configuration-items/) | The algorithm for compressing transaction logs for internal transmission. |
+| [enable_one_phase_commit](300.cluster-level-configuration-items/7600.enable_one_phase_commit.md) | Specifies whether to enable one-phase commit. |
+| [enable_separate_sys_clog](300.cluster-level-configuration-items/8400.enable_separate_sys_clog.md) | Specifies whether to separately store system transaction logs and user transaction logs. |
+| [flush_log_at_trx_commit](300.cluster-level-configuration-items/9600.flush_log_at_trx_commit.md) | The transaction log write strategy adopted when transactions are committed. |
+| [index_clog_cache_priority](300.cluster-level-configuration-items/11000.index_clog_cache_priority.md) | The priority of the transaction log index in the cache system. |
+| [ilog_index_expire_time](300.cluster-level-configuration-items/11300.ilog_index_expire_time.md) | The validity period of ilog files on OBServers. Expired files can no longer be read. |
+| [trx_force_kill_threshold](300.cluster-level-configuration-items/22700.trx_force_kill_threshold.md) | The maximum amount of time that the system waits before killing transactions for a freeze or leader switchover. |
-### Minor and major compaction parameters
+### Lock parameters
| Parameter | Description |
-|------------------------------------------------------------------------------------|-------------------------------------|
-| [enable_major_freeze](300.cluster-level-configuration-items/7200.enable_major_freeze.md) | Specifies whether to enable automatic global freezing. |
-| [micro_block_merge_verify_level](300.cluster-level-configuration-items/14500.micro_block_merge_verify_level.md) | The verification level of macroblocks in a major compaction. |
-| [row_compaction_update_limit](300.cluster-level-configuration-items/17600.row_compaction_update_limit.md) | The number of data updates that triggers a major compaction of rows in the memory. |
+|---------|----------|
+| [trx_try_wait_lock_timeout](300.cluster-level-configuration-items/22800.trx_try_wait_lock_timeout.md) | The maximum amount of time that a statement waits for a locked row to be unlocked. |
-### PX parameters
+### Debugging parameters
| Parameter | Description |
-|-------------------------------------------------------------------------|------------------------------|
-| [px_workers_per_cpu_quota](300.cluster-level-configuration-items/16300.px_workers_per_cpu_quota.md) | The proportion of Parallel eXecution (PX) worker threads. |
-| [px_task_size](300.cluster-level-configuration-items/16200.px_task_size.md) | The amount of data processed by the SQL parallel query engine in each task. |
-| [max_px_worker_count](300.cluster-level-configuration-items/13200.max_px_worker_count.md) | The maximum number of threads for the SQL parallel query engine. |
+|----------------------------------------------------------------------|--------------------------------------|
+| [debug_sync_timeout](300.cluster-level-configuration-items/5600.debug_sync_timeout.md) | The timeout period for a Debug Sync operation. When the value is set to 0, Debug Sync is disabled. |
+| [enable_rich_error_msg](300.cluster-level-configuration-items/8300.enable_rich_error_msg.md) | Specifies whether to add debugging information, such as the server address, error time, and trace ID, to the client message. |
+### TCP parameters
+
+| Parameter | Description |
+|---------------------------------------------------------------------|-----------------------------------------------------|
+| [enable_tcp_keepalive](300.cluster-level-configuration-items/9300.enable_tcp_keepalive.md) | Specifies whether to enable the keepalive mechanism for client connections. |
+| [tcp_keepidle](300.cluster-level-configuration-items/21200.tcp_keepidle.md) | The interval in seconds before sending a keepalive probe packet when no data is sent on a client connection. |
+| [tcp_keepintvl](300.cluster-level-configuration-items/21300.tcp_keepintvl.md) | The interval between two probes in seconds when you enable the keepalive mechanism for client connections. |
+| [tcp_keepcnt](300.cluster-level-configuration-items/21100.tcp_keepcnt.md) | The maximum number of retries before terminating a non-active connection. |
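+
+The four keepalive parameters work together: after a connection has been idle for `tcp_keepidle`, probes are sent every `tcp_keepintvl`, and the connection is closed after `tcp_keepcnt` unanswered probes. The following is a hedged example with purely illustrative values, executed from the `sys` tenant:
+
+```shell
+obclient [oceanbase]> ALTER SYSTEM SET enable_tcp_keepalive = True;
+obclient [oceanbase]> ALTER SYSTEM SET tcp_keepidle = '7200s';
+obclient [oceanbase]> ALTER SYSTEM SET tcp_keepintvl = '6s';
+obclient [oceanbase]> ALTER SYSTEM SET tcp_keepcnt = 10;
+```
+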
-### Log parameters
+### System log parameters
| Parameter | Description |
|-------------------------------------------------------------------------------------|------------------------------------------------------|
@@ -271,47 +366,37 @@ This topic describes cluster- and tenant-level parameters in OceanBase Database.
| [syslog_level](300.cluster-level-configuration-items/20500.syslog_level.md) | The level of syslogs. |
| [trace_log_sampling_interval](300.cluster-level-configuration-items/21800.trace_log_sampling_interval.md) | The interval at which trace logs are printed. |
| [diag_syslog_per_error_limit](300.cluster-level-configuration-items/24500.diag_syslog_per_error_limit.md) | The number of DIAG system logs allowed for each error code per second. When this threshold is reached, no more logs will be printed. |
+| [enable_log_archive](300.cluster-level-configuration-items/7000.enable_log_archive.md) | Specifies whether to enable log archiving. |
+| [system_trace_level](300.cluster-level-configuration-items/20800.system_trace_level.md) | The level of system trace logs to be printed. |
-
-### Read/write and query parameters
-
-| Parameter | Description |
-|-----------------------------------------------------------------------------------|-----------------------------------------------------------|
-| [weak_read_version_refresh_interval](300.cluster-level-configuration-items/23100.weak_read_version_refresh_interval.md) | The version refresh interval for weak consistency reads. This parameter affects the latency of weak consistency reads. |
-| [large_query_worker_percentage](300.cluster-level-configuration-items/11600.large_query_worker_percentage.md) | The percentage of worker threads reserved for large queries. |
-| [large_query_threshold](300.cluster-level-configuration-items/11500.large_query_threshold.md) | The execution time threshold to identify a large query. |
-| [trace_log_slow_query_watermark](300.cluster-level-configuration-items/21900.trace_log_slow_query_watermark.md) | The execution time threshold to identify a slow query. Trace logs of slow queries are written to system logs. |
-
-### RootService parameters
-
-| Parameter | Description |
-|------------------------------------------------------------------------------------|----------------------------------------------------------|
-| [rootservice_async_task_queue_size](300.cluster-level-configuration-items/17100.rootservice_async_task_queue_size.md) | The size of the internal asynchronous task queue for RootService. |
-| [rootservice_async_task_thread_count](300.cluster-level-configuration-items/17200.rootservice_async_task_thread_count.md) | The size of the thread pool for internal asynchronous tasks of RootService. |
-| [rootservice_list](300.cluster-level-configuration-items/17300.rootservice_list.md) | The list of servers where the RootService and its replicas are deployed. |
-| [rootservice_ready_check_interval](300.cluster-level-configuration-items/17500.rootservice_ready_check_interval.md) | The wait time after the RootService is started, during which the cluster status is checked. |
-| [rootservice_memory_limit](300.cluster-level-configuration-items/17400.rootservice_memory_limit.md) | The maximum memory available to RootService. |
-| [lease_time](300.cluster-level-configuration-items/11800.lease_time.md) | The heartbeat lease period. |
-| [server_check_interval](300.cluster-level-configuration-items/18500.server_check_interval.md) | The interval at which the server checks the table consistency. |
-| [server_permanent_offline_time](300.cluster-level-configuration-items/19000.server_permanent_offline_time.md) | The time threshold for heartbeat missing at which a server is considered permanently offline. Data replicas on a permanently offline server must be automatically supplemented. |
-| [ob_event_history_recycle_interval](300.cluster-level-configuration-items/15300.ob_event_history_recycle_interval.md) | The interval at which historical events are recycled. |
-
-
-### TCP parameters
+### Compression algorithm parameters
| Parameter | Description |
-|---------------------------------------------------------------------|-----------------------------------------------------|
-| [enable_tcp_keepalive](300.cluster-level-configuration-items/9300.enable_tcp_keepalive.md) | Specifies whether to enable the keepalive mechanism for client connections. |
-| [tcp_keepidle](300.cluster-level-configuration-items/21200.tcp_keepidle.md) | The interval in seconds before sending a keepalive probe packet when no data is sent on a client connection. |
-| [tcp_keepintvl](300.cluster-level-configuration-items/21300.tcp_keepintvl.md) | The interval between two probes in seconds when you enable the keepalive mechanism for client connections. |
-| [tcp_keepcnt](300.cluster-level-configuration-items/21100.tcp_keepcnt.md) | The maximum number of retries before terminating a non-active connection. |
+|---------------------------------------------------------------------------------|-----------------------------------|
+| [default_compress_func](300.cluster-level-configuration-items/5800.default_compress_func.md) | The default algorithm for compressing table data. You can also specify another compression algorithm when creating a table. |
+| [default_compress](300.cluster-level-configuration-items/5700.default_compress.md) | The default compression strategy used during table creation in Oracle mode. |
+| [tableapi_transport_compress_func](300.cluster-level-configuration-items/20900.tableapi_transport_compress_func.md) | The algorithm for compressing TableAPI query results for transmission. |
+| [default_transport_compress_func](300.cluster-level-configuration-items/5800.default_transport_compress_func.md) | The RPC compression algorithm used for the entire cluster. |
-### RPC authentication parameters
+### Minor and major compaction parameters
| Parameter | Description |
-|---------|---------|
-| [rpc_client_authentication_method](300.cluster-level-configuration-items/28100.rpc_client_authentication_method.md) | The security authentication method of the RPC client. |
-| [rpc_server_authentication_method](300.cluster-level-configuration-items/28200.rpc_server_authentication_method.md) | The security authentication method of the RPC server. |
+|------------------------------------------------------------------------------------|-------------------------------------|
+| [enable_major_freeze](300.cluster-level-configuration-items/7200.enable_major_freeze.md) | Specifies whether to enable automatic global freezing. |
+| [micro_block_merge_verify_level](300.cluster-level-configuration-items/14500.micro_block_merge_verify_level.md) | The verification level of macroblocks in a major compaction. |
+| [row_compaction_update_limit](300.cluster-level-configuration-items/17600.row_compaction_update_limit.md) | The number of data updates that triggers a major compaction of rows in memory. |
+| [enable_global_freeze_trigger](300.cluster-level-configuration-items/6900.enable_global_freeze_trigger.md) | Specifies whether to enable automatic triggering of a global freeze. |
+| [enable_merge_by_turn](300.cluster-level-configuration-items/7300.enable_merge_by_turn.md) | Specifies whether to enable the rotating major compaction strategy. |
+| [enable_manual_merge](300.cluster-level-configuration-items/7400.enable_manual_merge.md) | Specifies whether to enable manual major compaction. |
+| [global_major_freeze_residual_memory](300.cluster-level-configuration-items/10300.global_major_freeze_residual_memory.md) | The threshold of remaining memory for triggering a global freeze. When the available memory is less than this threshold, a global freeze is triggered. |
+| [minor_deferred_gc_time](300.cluster-level-configuration-items/15000.minor_deferred_gc_time.md) | The delay between the end of a major compaction and the start of garbage collection for the SSTables involved. |
+| [zone_merge_concurrency](300.cluster-level-configuration-items/24000.zone_merge_concurrency.md) | The number of zones supported in a major compaction. If this parameter is set to 0, the system determines the best level of concurrency based on the actual deployment status. |
+| [zone_merge_order](300.cluster-level-configuration-items/24100.zone_merge_order.md) | The order of zones in a rotating major compaction. If you do not specify this parameter, the system will determine its value. |
+| [zone_merge_timeout](300.cluster-level-configuration-items/24200.zone_merge_timeout.md) | The timeout period for the major compaction of a zone. |
+| [minor_freeze_times](300.cluster-level-configuration-items/25200.minor_freeze_times.md) | The number of minor freezes that triggers a global major freeze. |
+| [minor_merge_concurrency](300.cluster-level-configuration-items/25300.minor_merge_concurrency.md) | The number of concurrent threads in a minor compaction. |
+| [minor_warm_up_duration_time](300.cluster-level-configuration-items/25400.minor_warm_up_duration_time.md) | The preload time of the new MemTable generated after a minor compaction. |
+| [row_purge_thread_count](300.cluster-level-configuration-items/28300.row_purge_thread_count.md) | The maximum number of worker threads for a major compaction of rows in memory. |
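+
+As a hedged sketch, the following statements show how two of the compaction parameters above might be adjusted from the `sys` tenant; the values are illustrative assumptions rather than tuning advice:
+
+```shell
+obclient [oceanbase]> ALTER SYSTEM SET enable_major_freeze = True;
+obclient [oceanbase]> ALTER SYSTEM SET minor_merge_concurrency = 4;
+```
+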
### Arbitration service parameters
@@ -319,64 +404,32 @@ This topic describes cluster- and tenant-level parameters in OceanBase Database.
|---------|---------|
| [ob_startup_mode](300.cluster-level-configuration-items/24600.ob_startup_mode.md) | The startup mode of the OBServer node. This parameter can be modified only when the OBServer node is started for the first time. |
-### Other parameters
-| Parameter | Description |
-|------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------|
-| [builtin_db_data_verify_cycle](300.cluster-level-configuration-items/3300.builtin_db_data_verify_cycle.md) | The cycle of a bad block check in the unit of days. When the value is set to 0, bad block checks are not performed. |
-| [data_storage_warning_tolerance_time](300.cluster-level-configuration-items/5200.data_storage_warning_tolerance_time.md) | The tolerance period after which the data disk is set to the WARNING state. |
-| [dead_socket_detection_timeout](300.cluster-level-configuration-items/5500.dead_socket_detection_timeout.md) | The interval for detecting invalid sockets. |
-| [enable_sys_table_ddl](300.cluster-level-configuration-items/8900.enable_sys_table_ddl.md) | Specifies whether to enable manual creation of system tables. |
-| [internal_sql_execute_timeout](300.cluster-level-configuration-items/11400.internal_sql_execute_timeout.md) | The interval of DML requests in the system. |
-| [migration_disable_time](300.cluster-level-configuration-items/14700.migration_disable_time.md) | The period during which data migration is suspended for a node after data migration to the node fails due to reasons such as full disk usage. |
-| [schema_history_expire_time](300.cluster-level-configuration-items/18000.schema_history_expire_time.md) | The validity period of history metadata. |
-| [datafile_size](300.cluster-level-configuration-items/5400.datafile_size.md) | The size of a data file. This parameter is generally left unspecified. |
-| [devname](300.cluster-level-configuration-items/6000.devname.md) | The name of the network interface card (NIC) to which the service process is bound. |
-| [enable_perf_event](300.cluster-level-configuration-items/7700.enable_perf_event.md) | Specifies whether to enable the information collection feature for performance events. |
-| [enable_record_trace_id](300.cluster-level-configuration-items/8000.enable_record_trace_id.md) | Specifies whether to record the trace ID configured by the application. |
-| [enable_upgrade_mode](300.cluster-level-configuration-items/9500.enable_upgrade_mode.md) | Specifies whether to enable the upgrade mode. In upgrade mode, some backend system features are suspended. |
-| [enable_ddl](300.cluster-level-configuration-items/6800.enable_ddl.md) | Specifies whether to allow the execution of DDL statements. |
-| [high_priority_net_thread_count](300.cluster-level-configuration-items/10700.high_priority_net_thread_count.md) | The number of network threads with a high priority. When this parameter is set to `0`, this feature is disabled. |
-| [mysql_port](300.cluster-level-configuration-items/15100.mysql_port.md) | The port number for the SQL service protocol. |
-| [obconfig_url](300.cluster-level-configuration-items/15500.obconfig_url.md) | The URL of the OBConfig service. |
-| [rpc_port](300.cluster-level-configuration-items/17800.rpc_port.md) | The RPC port. |
-| [sql_protocol_min_tls_version](300.cluster-level-configuration-items/29800.sql_protocol_min_tls_version.md) | The minimum version of the SSL/TLS protocol used by SSL connections for SQL statements. |
-| [ssl_client_authentication](300.cluster-level-configuration-items/19400.ssl_client_authentication.md) | Specifies whether to enable SSL authentication. |
-| [stack_size](300.cluster-level-configuration-items/19600.stack_size.md) | The size of the function call stack for programs. |
-| [tenant_task_queue_size](300.cluster-level-configuration-items/21600.tenant_task_queue_size.md) | The request queue size of each tenant. |
-| [zone](300.cluster-level-configuration-items/23300.zone-cluster.md) | The name of the zone where the node is located. This parameter is generally left unspecified. |
-| [ssl_external_kms_info](300.cluster-level-configuration-items/19500.ssl_external_kms_info.md) | Records information regarding the dependencies of the SSL functionality in OceanBase Database. It stores the relevant configurations required for different SSL scenarios in the form of a JSON string. The JSON should include at least the `ssl_mode` field. |
-| [recyclebin_object_expire_time](300.cluster-level-configuration-items/16600.recyclebin_object_expire_time.md) | The period during which a schema object can be retained in the recycle bin. After the period elapses, the object will be purged from the recycle bin. |
-| [default_row_format](300.cluster-level-configuration-items/5900.default_row_format.md) | The default row format used in table creation in the MySQL mode. |
-| [enable_sql_audit](300.cluster-level-configuration-items/8700.enable_sql_audit.md) | Specifies whether to enable SQL audit. |
-| [min_observer_version](300.cluster-level-configuration-items/14800.min_observer_version.md) | The earliest OBServer node version in the cluster. |
-| [sys_bkgd_net_percentage](300.cluster-level-configuration-items/20200.sys_bkgd_net_percentage.md) | The maximum percentage of network bandwidth for backend system tasks. |
-| [schema_history_recycle_interval](300.cluster-level-configuration-items/18100.schema_history_recycle_interval.md) | The interval for recycling `schema` multi-version history files. |
-| [enable_asan_for_memory_context](300.cluster-level-configuration-items/24300.enable_asan_for_memory_context.md) | Specifies whether to enable ObAsanAllocator when ob_asan is working. By default, ObAllocator is the allocator of MemoryContext. |
+## Tenant-level parameters
-### Unsupported parameters
+### Security parameters
| Parameter | Description |
-|------------------------------------------------------------------------------------|-------------------------------------|
-| [plan_cache_high_watermark](300.cluster-level-configuration-items/16000.plan_cache_high_watermark.md) | The memory threshold to trigger plan cache eviction. Automatic eviction is triggered when the memory occupied by the plan cache reaches the specified threshold. |
-| [plan_cache_low_watermark](300.cluster-level-configuration-items/16100.plan_cache_low_watermark.md) | The memory threshold to stop plan cache eviction. The eviction is stopped when the memory occupied by the plan cache decreases to the specified threshold. |
-| [tenant_cpu_variation_per_server](300.cluster-level-configuration-items/21400.tenant_cpu_variation_per_server.md) | The variation allowed for CPU quota scheduling among multiple units of a tenant. |
-| [system_trace_level](300.cluster-level-configuration-items/20800.system_trace_level.md) | The level of system trace logs to be printed. |
+|------------------------------------------------------------------|-----------------|
+| [external_kms_info](400.tenant-level-configuration-items/1100.external_kms_info.md) | The key management information. |
+| [tde_method](400.tenant-level-configuration-items/3400.tde_method.md) | The encryption method for a transparent tablespace. |
+| [audit_sys_operations](400.tenant-level-configuration-items/100.audit_sys_operations.md) | Specifies whether to track the operations of the SYS user. |
+| [audit_trail](400.tenant-level-configuration-items/200.audit_trail.md) | Specifies whether to enable database audit. |
-## Tenant-level parameters
+### CPU parameters
-### User logon parameters
+| Parameter | Description |
+|-----------------------------------------------------------------------------------------|-----------------------------------------------|
+| [cpu_quota_concurrency](400.tenant-level-configuration-items/5500.cpu_quota_concurrency.md) | The maximum concurrency allowed for each CPU quota of a tenant. |
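+
+Tenant-level parameters such as `cpu_quota_concurrency` can be set either inside the tenant itself or from the `sys` tenant with a `TENANT` clause. The following is a minimal sketch, assuming a tenant named `test` and an illustrative value of `4`:
+
+```shell
+obclient [test]> ALTER SYSTEM SET cpu_quota_concurrency = 4;
+
+obclient [oceanbase]> ALTER SYSTEM SET cpu_quota_concurrency = 4 TENANT = 'test';
+```
+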
-
-<main id="notice" type='explain'>
-<h4>Note</h4>
-<p>The parameters described in the following table take effect only in the MySQL mode.</p>
-</main>
-
+### Read/Write and query parameters
| Parameter | Description |
-|------------------------------------------------------------------------------------------------|------------------------------------|
-| [connection_control_failed_connections_threshold](400.tenant-level-configuration-items/500.connection_control_failed_connections_threshold.md) | The threshold of failed logon attempts. |
-| [connection_control_min_connection_delay](400.tenant-level-configuration-items/600.connection_control_min_connection_delay.md) | The minimum lock period for an account whose number of failed logon attempts reaches the specified threshold. |
-| [connection_control_max_connection_delay](400.tenant-level-configuration-items/700.connection_control_max_connection_delay.md) | The maximum lock period for an account whose number of failed logon attempts reaches the specified threshold. |
+|---------------------------------------------------------------------------|--------------|
+| [enable_monotonic_weak_read](400.tenant-level-configuration-items/1000.enable_monotonic_weak_read.md) | Specifies whether to enable monotonic reads. |
+| [query_response_time_stats](400.tenant-level-configuration-items/5000.query_response_time_stats.md) | Specifies whether to collect the statistics of the `information_schema.QUERY_RESPONSE_TIME` view. |
+| [query_response_time_flush](400.tenant-level-configuration-items/5100.query_response_time_flush.md) | Specifies whether to refresh the `information_schema.QUERY_RESPONSE_TIME` view and re-read `query_response_time_range_base`. |
+| [query_response_time_range_base](400.tenant-level-configuration-items/5200.query_response_time_range_base.md) | The time interval at which the time parameters of the `information_schema.QUERY_RESPONSE_TIME` view are collected. |
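+
+As a hedged example, the statements below enable response time statistics in a MySQL-mode user tenant (here assumed to be `test`) and then read the resulting view:
+
+```shell
+obclient [test]> ALTER SYSTEM SET query_response_time_stats = True;
+
+obclient [test]> SELECT * FROM information_schema.QUERY_RESPONSE_TIME;
+```
+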
### Load balancing parameters
@@ -393,58 +446,63 @@ This topic describes cluster- and tenant-level parameters in OceanBase Database.
|----------------------------------------------------------------------------------------|----------------------------------|
| [log_restore_concurrency](400.tenant-level-configuration-items/24900.log_restore_concurrency.md) | The concurrency of log restoration. |
| [log_archive_concurrency](400.tenant-level-configuration-items/25000.log_archive_concurrency.md) | The concurrency of log archiving. |
+| [backup_data_file_size](400.tenant-level-configuration-items/1500.backup_data_file_size.md) | The size of the backup data files. |
-### Audit parameters
+### Background execution thread parameters
| Parameter | Description |
-|---------------------------------------------------------------------|--------------------|
-| [audit_sys_operations](400.tenant-level-configuration-items/100.audit_sys_operations.md) | Specifies whether to track the operations of the SYS user. |
-| [audit_trail](400.tenant-level-configuration-items/200.audit_trail.md) | Specifies whether to enable database audit. |
+|---------------------------------------------------------------------------|-------------------------------------|
+| [compaction_low_thread_score](400.tenant-level-configuration-items/1900.compaction_low_thread_score.md) | The weight of the CPU time slice occupied by the worker threads for low-priority compaction tasks. |
+| [compaction_high_thread_score](400.tenant-level-configuration-items/2100.compaction_high_thread_score.md) | The weight of the CPU time slice occupied by the worker threads for high-priority compaction tasks. |
+| [compaction_mid_thread_score](400.tenant-level-configuration-items/2300.compaction_mid_thread_score.md) | The weight of the CPU time slice occupied by the worker threads for medium-priority compaction tasks. |
+| [ha_high_thread_score](400.tenant-level-configuration-items/4100.ha_high_thread_score.md) | The current number of high-availability high-priority worker threads. |
+| [ha_mid_thread_score](400.tenant-level-configuration-items/4200.ha_mid_thread_score.md) | The current number of high-availability medium-priority worker threads. |
+| [ha_low_thread_score](400.tenant-level-configuration-items/4300.ha_low_thread_score.md) | The current number of high-availability low-priority worker threads. |
+| [ob_compaction_schedule_interval](400.tenant-level-configuration-items/4400.ob_compaction_schedule_interval.md) | The time interval for compaction scheduling. |
+| [compaction_dag_cnt_limit](400.tenant-level-configuration-items/26500.compaction_dag_cnt_limit.md) | The maximum number of directed acyclic graphs (DAGs) allowed in a compaction DAG queue. |
+| [compaction_schedule_tablet_batch_cnt](400.tenant-level-configuration-items/26600.compaction_schedule_tablet_batch_cnt.md) | The maximum number of partitions that can be scheduled per batch during batch scheduling for compactions. |
+| [tenant_sql_login_thread_count](400.tenant-level-configuration-items/6100.tenant_sql_login_thread_count.md) | The number of logon threads of a MySQL tenant, that is, the number of `mysql_queue` threads. The default value 0 indicates that the value of the parameter is the same as that of `unit_min_cpu`. |
+| [tenant_sql_net_thread_count](400.tenant-level-configuration-items/6200.tenant_sql_net_thread_count.md) | The number of I/O threads of a MySQL tenant, that is, the number of `sql_nio_server` threads. The default value 0 indicates that the value of the parameter is the same as that of `unit_min_cpu`. |
-### Memory parameters
+### I/O parameters
| Parameter | Description |
-|--------------|-------------|
-| [rpc_memory_limit_percentage](400.tenant-level-configuration-items/6000.rpc_memory_limit_percentage.md) | The percentage of the maximum RPC memory in the tenant to the total tenant memory. |
+|-------------------------------------------------------------------|---------------------|
+| [io_category_config](400.tenant-level-configuration-items/1500.io_category_config.md) | The percentages of all types of I/O requests. |
-### Transaction and transaction log parameters
+### Compatibility parameters
-| Parameter | Description |
-|--------------------------------------------------------------------------------------|-------------------------------------------------------|
-| [log_disk_utilization_limit_threshold](400.tenant-level-configuration-items/1600.log_disk_utilization_limit_threshold.md) | The maximum usage of the tenant log disk. When the occupied space of the tenant log disk exceeds its total space multiplied by the specified value, log write is not allowed. |
-| [log_disk_utilization_threshold](400.tenant-level-configuration-items/1700.log_disk_utilization_threshold.md) | The usage threshold of the tenant log disk. When the occupied space of the tenant log disk exceeds its total space multiplied by the specified value, log files are reused. |
-| [writing_throttling_maximum_duration](400.tenant-level-configuration-items/3500.writing_throttling_maximum_duration.md) | The time required for allocating the remaining memory of the MemStore after the write speed is limited. This parameter controls the write speed by controlling the memory allocation progress. |
-| [writing_throttling_trigger_percentage](400.tenant-level-configuration-items/3600.writing_throttling_trigger_percentage.md) | The upper limit of the write speed. |
-| [standby_db_fetch_log_rpc_timeout](400.tenant-level-configuration-items/25100.standby_db_fetch_log_rpc_timeout.md) | The timeout period of RPC requests sent by a standby cluster for pulling logs. If the specified timeout period is elapsed, the log transfer service of the standby cluster determines that the requested server in the primary cluster is unavailable and switches to another server. |
-| [log_disk_throttling_percentage](400.tenant-level-configuration-items/6600.log_disk_throttling_percentage.md) | The percentage of unrecyclable disk space that triggers log write throttling. |
-| [log_transport_compress_all](400.tenant-level-configuration-items/5800.log_transport_compress_all.md) | Specifies whether to compress logs for transmission. |
-| [log_transport_compress_func](400.tenant-level-configuration-items/5900.log_transport_compress_func.md) | The algorithm for compressing logs for transmission. |
+
+<main id="notice" type='explain'>
+<h4>Note</h4>
+<p>The parameters described in the following table take effect only in MySQL mode.</p>
+</main>
+
+| Parameter | Description |
+|---------------------------------------------------------------------|-----------------------|
+| [enable_sql_extension](400.tenant-level-configuration-items/1200.enable_sql_extension.md) | Specifies whether to enable SQL extension for tenants. |
+| [compatible](400.tenant-level-configuration-items/5700.compatible.md) | Controls the compatibility of related features in a tenant. This parameter cannot be set. |
-### Minor and major compaction parameters
+### Routing parameters
| Parameter | Description |
-|------------------------------------------------------------------------------|----------------------|
-| [default_progressive_merge_num](400.tenant-level-configuration-items/1400.default_progressive_merge_num.md) | The default number of progressive major compactions during table creation. |
-| [major_freeze_duty_time](400.tenant-level-configuration-items/4800.major_freeze_duty_time.md) | The time to trigger a freeze and a major compaction every day. |
-| [major_compact_trigger](400.tenant-level-configuration-items/1800.major_compact_trigger.md) | The number of minor compactions for triggering a global major compaction. |
-| [minor_compact_trigger](400.tenant-level-configuration-items/2200.minor_compact_trigger.md) | The threshold for triggering the next-level compaction in hierarchical minor compactions. |
-| [undo_retention](400.tenant-level-configuration-items/4900.undo_retention.md) | The time range in seconds of data versions to be retained by the system. This variable is used to control the collection of data of multiple versions in minor compactions. |
-| [merger_check_interval](400.tenant-level-configuration-items/5300.merger_check_interval.md) | The interval for scheduling the thread for checking the major compaction status. |
-| [freeze_trigger_percentage](400.tenant-level-configuration-items/5400.freeze_trigger_percentage.md) | The threshold of memory used by tenants for triggering a global freeze. |
+|---------------------------------------------------------------------------------------------|-------------------------------|
+| [ob_proxy_readonly_transaction_routing_policy](400.tenant-level-configuration-items/2500.ob_proxy_readonly_transaction_routing_policy.md) | Specifies whether OceanBase Database Proxy (ODP) routes a transaction based on read-only statements. |
-### CPU parameters
+### Memory parameters
| Parameter | Description |
-|-----------------------------------------------------------------------------------------|-----------------------------------------------|
-| [cpu_quota_concurrency](400.tenant-level-configuration-items/5500.cpu_quota_concurrency.md) | The maximum concurrency allowed for each CPU quota of a tenant. |
+|--------------|-------------|
+| [rpc_memory_limit_percentage](400.tenant-level-configuration-items/6000.rpc_memory_limit_percentage.md) | The percentage of the maximum RPC memory in the tenant to the total tenant memory. |
+| [range_optimizer_max_mem_size](400.tenant-level-configuration-items/7300.range_optimizer_max_mem_size.md) | The maximum memory space available for the Query Range module. |
-### Encryption parameters
+### OBKV parameters
| Parameter | Description |
-|------------------------------------------------------------------|-----------------|
-| [external_kms_info](400.tenant-level-configuration-items/1100.external_kms_info.md) | The key management information. |
-| [tde_method](400.tenant-level-configuration-items/3400.tde_method.md) | The encryption method for a transparent tablespace. |
+|---------|---------|
+| [kv_ttl_duty_duration](400.tenant-level-configuration-items/26100.kv_ttl_duty_duration.md) | The time period during which scheduled daily time-to-live (TTL) tasks are to be triggered. |
+| [kv_ttl_history_recycle_interval](400.tenant-level-configuration-items/26200.kv_ttl_history_recycle_interval.md) | The retention period of historical TTL tasks. |
+| [enable_kv_ttl](400.tenant-level-configuration-items/26300.enable_kv_ttl.md) | Specifies whether to enable background TTL tasks. This parameter is applicable to periodic TTL tasks. User management commands are not limited by this parameter. |
+| [ttl_thread_score](400.tenant-level-configuration-items/26400.ttl_thread_score.md) | The weight of the CPU time slice occupied by the threads of TTL tasks. |
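+
+The following is a minimal sketch of enabling background TTL tasks for an OBKV tenant from the `sys` tenant; the tenant name `test` and the thread score `2` are illustrative assumptions:
+
+```shell
+obclient [oceanbase]> ALTER SYSTEM SET enable_kv_ttl = True TENANT = 'test';
+obclient [oceanbase]> ALTER SYSTEM SET ttl_thread_score = 2 TENANT = 'test';
+```
+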
### PL parameters
@@ -453,58 +511,82 @@ This topic describes cluster- and tenant-level parameters in OceanBase Database.
| [plsql_code_type](400.tenant-level-configuration-items/2900.plsql_code_type.md) | The compilation mode of PL/SQL code. |
| [plsql_debug](400.tenant-level-configuration-items/3000.plsql_debug.md) | Specifies whether to compile code for debugging. |
| [plsql_optimize_level](400.tenant-level-configuration-items/3100.plsql_optimize_level.md) | The compilation optimization level. |
-| [plsql_v2_compatibility](400.tenant-level-configuration-items/3800.plsql_v2_compatibility.md) | Specifies whether compatibility with Oracle 8 is supported.<br>**Note**<br>This parameter applies only to the Oracle mode and does not take effect now. |
+| [plsql_v2_compatibility](400.tenant-level-configuration-items/3800.plsql_v2_compatibility.md) | Specifies whether compatibility with Oracle 8 is supported. |
| [plsql_warnings](400.tenant-level-configuration-items/3200.plsql_warnings.md) | Controls the error reporting behavior of the PL/SQL compiler. You can use this parameter to specify the status of a `warning` code or a warning code type to `enable`, `disable`, or `error`. |
-### Compatibility parameters
-
-<main id="notice" type='explain'>
-<h4>Note</h4>
-<p>The parameters described in the following table take effect only in the MySQL mode.</p>
-</main>
-
+### Others
| Parameter | Description |
-|---------------------------------------------------------------------|-----------------------|
-| [enable_sql_extension](400.tenant-level-configuration-items/1200.enable_sql_extension.md) | Specifies whether to enable SQL extension for tenants. |
-| [compatible](400.tenant-level-configuration-items/5700.compatible.md) | The compatible version for related features of a tenant. This parameter cannot be set. |
+|------------|-------------|
+| [enable_early_lock_release](400.tenant-level-configuration-items/900.enable_early_lock_release.md) | Specifies whether to enable the early lock release (ELR) feature. |
+| [workarea_size_policy](400.tenant-level-configuration-items/3700.workarea_size_policy.md) | Specifies whether the size of an SQL workarea is manually or automatically adjusted. |
+| [open_cursors](400.tenant-level-configuration-items/2700.open_cursors.md) | The maximum number of cursors that can be concurrently opened in a single session. |
+| [ob_ssl_invited_common_names](400.tenant-level-configuration-items/2600.ob_ssl_invited_common_names.md) | The list of identities of applications running under the current tenant. The identity of an application comes from the `cn` (common name) field of the `subject` of the client certificate in two-way SSL authentication. |
+| [ob_enable_batched_multi_statement](400.tenant-level-configuration-items/2400.ob_enable_batched_multi_statement.md) | Specifies whether to enable group-based execution optimization for the batch processing feature. |
+| [job_queue_processes](400.tenant-level-configuration-items/3900.job_queue_processes.md) | The maximum number of concurrent tasks that can be run under each tenant. You can set this parameter to prevent tenant resources from being excessively occupied by tasks. |
+| [default_auto_increment_mode](400.tenant-level-configuration-items/4500.default_auto_increment_mode.md) | The default auto-increment mode of auto-increment columns. |
+| [ob_query_switch_leader_retry_timeout](400.tenant-level-configuration-items/4600.ob_query_switch_leader_retry_timeout.md) | The maximum retry period for failed queries, in microseconds. |
+| [default_enable_extended_rowid](400.tenant-level-configuration-items/4700.default_enable_extended_rowid.md) | Specifies whether to create tables in extended ROWID mode. |
+| [dump_data_dictionary_to_log_interval](400.tenant-level-configuration-items/6300.dump_data_dictionary_to_log_interval.md) | The interval of data dictionary persistence for the tenant. |
+| [enable_user_defined_rewrite_rules](400.tenant-level-configuration-items/6400.enable_user_defined_rewrite_rules.md) | Specifies whether to enable user-defined rules. |
+
-### Read/write and query parameters
+### Transaction and transaction log parameters
| Parameter | Description |
-|---------------------------------------------------------------------------|--------------|
-| [enable_monotonic_weak_read](400.tenant-level-configuration-items/1000.enable_monotonic_weak_read.md) | Specifies whether to enable monotonic reads. |
-| [query_response_time_stats](400.tenant-level-configuration-items/5000.query_response_time_stats.md) | Specifies whether to collect the statistics of the `information_schema.QUERY_RESPONSE_TIME` view. |
-| [query_response_time_flush](400.tenant-level-configuration-items/5100.query_response_time_flush.md) | Specifies whether to refresh the `information_schema.QUERY_RESPONSE_TIME` view and re-read `query_response_time_range_base`. |
-| [query_response_time_range_base](400.tenant-level-configuration-items/5200.query_response_time_range_base.md) | The time interval at which the time parameters of the `information_schema.QUERY_RESPONSE_TIME` view are collected. |
+|--------------------------------------------------------------------------------------|-------------------------------------------------------|
+| [log_disk_utilization_limit_threshold](400.tenant-level-configuration-items/1600.log_disk_utilization_limit_threshold.md) | The maximum usage of the tenant log disk. When the occupied space of the tenant log disk exceeds its total space multiplied by the specified value, log write is not allowed. |
+| [log_disk_utilization_threshold](400.tenant-level-configuration-items/1700.log_disk_utilization_threshold.md) | The usage threshold of the tenant log disk. When the occupied space of the tenant log disk exceeds its total space multiplied by the specified value, log files are reused. |
+| [writing_throttling_maximum_duration](400.tenant-level-configuration-items/3500.writing_throttling_maximum_duration.md) | The time required for allocating the remaining memory of the MemStore after the write speed is limited. This parameter controls the write speed by controlling the memory allocation progress. |
+| [writing_throttling_trigger_percentage](400.tenant-level-configuration-items/3600.writing_throttling_trigger_percentage.md) | The upper limit of the write speed. |
+| [standby_db_fetch_log_rpc_timeout](400.tenant-level-configuration-items/25100.standby_db_fetch_log_rpc_timeout.md) | The timeout period of RPC requests sent by a standby cluster for pulling logs. If the specified timeout period has elapsed, the log transfer service of the standby cluster determines that the requested server in the primary cluster is unavailable and switches to another server. |
+| [log_disk_throttling_percentage](400.tenant-level-configuration-items/6600.log_disk_throttling_percentage.md) | The percentage of unrecyclable disk space that triggers log write throttling. |
+| [log_transport_compress_all](400.tenant-level-configuration-items/5800.log_transport_compress_all.md) | Specifies whether to compress logs for transmission. |
+| [log_transport_compress_func](400.tenant-level-configuration-items/5900.log_transport_compress_func.md) | The algorithm for compressing logs for transmission. |
+| [clog_max_unconfirmed_log_count](400.tenant-level-configuration-items/300.clog_max_unconfirmed_log_count.md) | The maximum number of unconfirmed logs allowed in the transaction module. |
+| [clog_persistence_compress_func](400.tenant-level-configuration-items/400.clog_persistence_compress_func.md) | The algorithm for compressing transaction logs for storage. |
+| [enable_clog_persistence_compress](400.tenant-level-configuration-items/900.enable_clog_persistence_compress.md) | Specifies whether to compress transaction logs for storage. |
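+
+For example, write throttling can be tuned per tenant with the two `writing_throttling_*` parameters. The following sketch assumes a tenant named `test` and uses illustrative values only:
+
+```shell
+obclient [oceanbase]> ALTER SYSTEM SET writing_throttling_trigger_percentage = 80 TENANT = 'test';
+obclient [oceanbase]> ALTER SYSTEM SET writing_throttling_maximum_duration = '2h' TENANT = 'test';
+```
+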
-### Routing parameters
+### System log parameters
| Parameter | Description |
-|---------------------------------------------------------------------------------------------|-------------------------------|
-| [ob_proxy_readonly_transaction_routing_policy](400.tenant-level-configuration-items/2500.ob_proxy_readonly_transaction_routing_policy.md) | Specifies whether OceanBase Database Proxy (ODP) routes a transaction based on read-only statements. |
+|---------|----------|
+| [log_disk_throttling_maximum_duration](400.tenant-level-configuration-items/6900.log_disk_throttling_maximum_duration.md) | The maximum available duration of the log disk after log throttling is triggered. |
+| [ls_gc_delay_time](400.tenant-level-configuration-items/7000.ls_gc_delay_time.md) | The delay time before the log stream of a tenant is deleted. |
+| [standby_db_preferred_upstream_log_region](400.tenant-level-configuration-items/7100.standby_db_preferred_upstream_log_region.md) | The preferred region for the standby tenant to synchronize upstream logs in a Physical Standby Database scenario. |
+| [archive_lag_target](400.tenant-level-configuration-items/7200.archive_lag_target.md) | The delay time of log archiving in a tenant. |
-### I/O parameters
+### User logon parameters
+
+<main id="notice" type='explain'>
+<h4>Note</h4>
+<p>The parameters described in the following table take effect only in MySQL mode.</p>
+</main>
+
| Parameter | Description |
-|-------------------------------------------------------------------|---------------------|
-| [io_category_config](400.tenant-level-configuration-items/1500.io_category_config.md) | The percentages of all types of I/O requests. |
+|------------------------------------------------------------------------------------------------|------------------------------------|
+| [connection_control_failed_connections_threshold](400.tenant-level-configuration-items/500.connection_control_failed_connections_threshold.md) | The threshold of failed logon attempts. |
+| [connection_control_min_connection_delay](400.tenant-level-configuration-items/600.connection_control_min_connection_delay.md) | The minimum lock period for an account whose number of failed logon attempts reaches the specified threshold. |
+| [connection_control_max_connection_delay](400.tenant-level-configuration-items/700.connection_control_max_connection_delay.md) | The maximum lock period for an account whose number of failed logon attempts reaches the specified threshold. |
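+
+A minimal sketch of a logon lockout policy, assuming the statements are executed in a MySQL tenant and that the delay values are in milliseconds: after 5 failed logon attempts, an account is locked for at least 1 second and at most 10 seconds.
+
+```shell
+obclient> ALTER SYSTEM SET connection_control_failed_connections_threshold = 5;
+obclient> ALTER SYSTEM SET connection_control_min_connection_delay = 1000;
+obclient> ALTER SYSTEM SET connection_control_max_connection_delay = 10000;
+```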
-### Background execution thread parameters
-| Parameter | Description |
-|---------------------------------------------------------------------------|-------------------------------------|
-| [compaction_low_thread_score](400.tenant-level-configuration-items/1900.compaction_low_thread_score.md) | The weight of the CPU time slice occupied by the worker threads for low-priority compaction tasks. |
-| [compaction_high_thread_score](400.tenant-level-configuration-items/2100.compaction_high_thread_score.md) | The weight of the CPU time slice occupied by the worker threads for high-priority compaction tasks. |
-| [compaction_mid_thread_score](400.tenant-level-configuration-items/2300.compaction_mid_thread_score.md) | The weight of the CPU time slice occupied by the worker threads for medium-priority compaction tasks. |
-| [ha_high_thread_score](400.tenant-level-configuration-items/4100.ha_high_thread_score.md) | The current number of high-availability high-priority worker threads. |
-| [ha_mid_thread_score](400.tenant-level-configuration-items/4200.ha_mid_thread_score.md) | The current number of high-availability medium-priority worker threads. |
-| [ha_low_thread_score](400.tenant-level-configuration-items/4300.ha_low_thread_score.md) | The current number of high-availability low-priority worker threads. |
-| [ob_compaction_schedule_interval](400.tenant-level-configuration-items/4400.ob_compaction_schedule_interval.md) | The time interval for compaction scheduling. |
-| [compaction_dag_cnt_limit](400.tenant-level-configuration-items/26500.compaction_dag_cnt_limit.md) | The maximum number of directed acyclic graphs (DAGs) allowed in a compaction DAG queue. |
-| [compaction_schedule_tablet_batch_cnt](400.tenant-level-configuration-items/26600.compaction_schedule_tablet_batch_cnt.md) | The maximum number of partitions that can be scheduled per batch during batch scheduling for compactions. |
-| [tenant_sql_login_thread_count](400.tenant-level-configuration-items/6100.tenant_sql_login_thread_count.md) | The number of logon threads of a MySQL tenant, that is, the number of `mysql_queue` threads. The default value 0 indicates that the value of the parameter is the same as that of `unit_min_cpu`. |
-| [tenant_sql_net_thread_count](400.tenant-level-configuration-items/6200.tenant_sql_net_thread_count.md) | The number of I/O threads of a MySQL tenant, that is, the number of `sql_nio_server` threads. The default value 0 indicates that the value of the parameter is the same as that of `unit_min_cpu`. |
+### Minor and major compaction parameters
+| Parameter | Description |
+|------------------------------------------------------------------------------|----------------------|
+| [default_progressive_merge_num](400.tenant-level-configuration-items/1400.default_progressive_merge_num.md) | The default number of progressive major compactions during table creation. |
+| [major_freeze_duty_time](400.tenant-level-configuration-items/4800.major_freeze_duty_time.md) | The time to trigger a freeze and a major compaction every day. |
+| [major_compact_trigger](400.tenant-level-configuration-items/1800.major_compact_trigger.md) | The number of minor compactions for triggering a global major compaction. |
+| [minor_compact_trigger](400.tenant-level-configuration-items/2200.minor_compact_trigger.md) | The threshold for triggering the next-level compaction in hierarchical minor compactions. |
+| [undo_retention](400.tenant-level-configuration-items/4900.undo_retention.md) | The time range in seconds of data versions to be retained by the system. This variable is used to control the collection of data of multiple versions in minor compactions. |
+| [merger_check_interval](400.tenant-level-configuration-items/5300.merger_check_interval.md) | The interval for scheduling the thread for checking the major compaction status. |
+| [freeze_trigger_percentage](400.tenant-level-configuration-items/5400.freeze_trigger_percentage.md) | The threshold of memory used by tenants for triggering a global freeze. |
+| [max_kept_major_version_number](300.cluster-level-configuration-items/13100.max_kept_major_version_number.md) | The number of frozen data versions to be retained. |
+| [merge_stat_sampling_ratio](300.cluster-level-configuration-items/14000.merge_stat_sampling_ratio.md) | The sampling rate of data column statistics in a major compaction. |
+| [merge_thread_count](300.cluster-level-configuration-items/14100.merge_thread_count.md) | The number of worker threads for daily major compactions. |
+| [merger_completion_percentage](300.cluster-level-configuration-items/14300.merger_completion_percentage.md) | The percentage of compacted replicas at which the major compaction task is considered completed. |
+| [merger_switch_leader_duration_time](300.cluster-level-configuration-items/14400.merger_switch_leader_duration_time.md) | The interval for a batch leader switchover in a daily major compaction. |
+| [merger_warm_up_duration_time](300.cluster-level-configuration-items/14500.merger_warm_up_duration_time.md) | The preload time of new baseline data in a major compaction. |
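+
+For example, a sketch that schedules the daily major compaction for 2:00 AM and triggers a major compaction after two minor compactions, assuming the statements are executed in the target tenant:
+
+```shell
+obclient> ALTER SYSTEM SET major_freeze_duty_time = '02:00';
+obclient> ALTER SYSTEM SET major_compact_trigger = 2;
+```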
### Arbitration service parameters
@@ -512,22 +594,6 @@ This topic describes cluster- and tenant-level parameters in OceanBase Database.
|---------|---------|
| [arbitration_timeout](400.tenant-level-configuration-items/5600.arbitration_timeout.md) | The timeout period for triggering an automatic downgrade. |
-### Other parameters
-
-| Parameter | Description |
-|----------------------------------------------------------------------------------|---------------------------------------------------------------------------------|
-| [enable_early_lock_release](400.tenant-level-configuration-items/900.enable_early_lock_release.md) | Specifies whether to enable the early lock release (ELR) feature. |
-| [workarea_size_policy](400.tenant-level-configuration-items/3700.workarea_size_policy.md) | Specifies whether the size of an SQL workarea is manually or automatically adjusted. |
-| [open_cursors](400.tenant-level-configuration-items/2700.open_cursors.md) | The maximum number of cursors that can be concurrently opened in a single session. |
-| [ob_ssl_invited_common_names](400.tenant-level-configuration-items/2600.ob_ssl_invited_common_names.md) | The list of identities of applications running under the current tenant. The identity of an application comes from the `cn` (common name) field of the `subject` of the client certificate in two-way SSL authentication. |
-| [ob_enable_batched_multi_statement](400.tenant-level-configuration-items/2400.ob_enable_batched_multi_statement.md) | Specifies whether to enable group-based execution optimization for the batch processing feature. |
-| [job_queue_processes](400.tenant-level-configuration-items/3900.job_queue_processes.md) | The maximum number of concurrent tasks that can be run under each tenant. You can set this parameter to prevent tenant resources from being excessively occupied by tasks. **Note**: This parameter takes effect only in the Oracle mode. |
-| [default_auto_increment_mode](400.tenant-level-configuration-items/4500.default_auto_increment_mode.md) | The default auto-increment mode of auto-increment columns. |
-| [ob_query_switch_leader_retry_timeout](400.tenant-level-configuration-items/4600.ob_query_switch_leader_retry_timeout.md) | The maximum retry period for failed queries, in us. |
-| [default_enable_extended_rowid](400.tenant-level-configuration-items/4700.default_enable_extended_rowid.md) | Specifies whether to create the table in Extended ROWID mode. |
-| [dump_data_dictionary_to_log_interval](400.tenant-level-configuration-items/6300.dump_data_dictionary_to_log_interval.md) | The interval of data dictionary persistence for the tenant. |
-| [enable_user_defined_rewrite_rules](400.tenant-level-configuration-items/6400.enable_user_defined_rewrite_rules.md) | Specifies whether to enable user-defined rules. |
-
### Unsupported parameters
| Parameter | Description |
diff --git a/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/13900.memstore_limit_percentage.md b/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/13900.memstore_limit_percentage.md
index 9ff5dd63fa..11a7220ddc 100644
--- a/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/13900.memstore_limit_percentage.md
+++ b/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/13900.memstore_limit_percentage.md
@@ -9,30 +9,34 @@
Note
- - This parameter is introduced since OceanBase Database V1.4.
- Starting from OceanBase Database V4.2.1 BP1, this parameter is changed from a cluster-level parameter to a tenant-level parameter.
+ This parameter was introduced in OceanBase Database V1.4.
-## Description
+## Purpose
-`memstore_limit_percentage` specifies the percentage of the memory that can be occupied by the MemStore to the total available memory of a tenant.
+`memstore_limit_percentage` specifies the percentage of the memory that can be occupied by MemStores to the total available memory of a tenant.
## Attributes
| **Attribute** | **Description** |
|------------------|-----------|
-| Type | Integer |
-| Default value | 50 |
-| Value range | \[1, 99\] |
-| Modifiable | Yes. It can be modified using the `ALTER SYSTEM SET` statement.|
+| Type | INT |
+| Default value | `0`, which specifies to automatically adapt the value. **Note**: The default value of this parameter is changed from `50` to `0` since OceanBase Database V4.3.0. |
+| Value range | [0, 100). **Note**: The value range of this parameter is changed from (0, 100) to [0, 100) since OceanBase Database V4.3.0. |
+| Modifiable | Yes. You can use the `ALTER SYSTEM SET` statement to modify the parameter. |
| Effective upon OBServer node restart | No |
## Considerations
-`memstore_limit_percentage` is used to calculate the value of `memstore_limit`:
+* The `memstore_limit_percentage` parameter is used to calculate the value of `memstore_limit` by using the following formula:
-`memstore_limit_percentage` = `memstore_limit`/`memory_size`
+ `memstore_limit_percentage` = `memstore_limit`/`memory_size`
-Here, the value of `memory_size` is specified when you create a tenant, which indicates the memory available for the tenant. For more information about `memstore_limit`, see [memstore_limit](13600.memory_limit.md).
+ In the formula, `memory_size` indicates the total available memory of the tenant and is specified during tenant creation. For more information about `memstore_limit`, see [memstore_limit](13600.memory_limit.md).
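+
+  For example, assuming a tenant created with `memory_size` = 10 GB and `memstore_limit_percentage` = 50, the MemStores of the tenant can occupy at most 10 GB × 50% = 5 GB of memory.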
+
+* OceanBase Database V4.3.0 and later allow you to use the tenant-level hidden parameter `_memstore_limit_percentage` to specify the percentage of the memory that can be occupied by MemStores to the total available memory of a tenant. Except for the effective scope, its feature and default value are the same as those of the cluster-level parameter `memstore_limit_percentage`. When you specify the two parameters, take note of the following considerations:
+
+  - If you specify a value other than the default value only for `_memstore_limit_percentage` or `memstore_limit_percentage`, the specified value prevails.
+  - If you specify values other than the default values for both the tenant-level hidden parameter `_memstore_limit_percentage` and the cluster-level parameter `memstore_limit_percentage`, the value of `_memstore_limit_percentage` prevails.
+  - If neither parameter is specified or the default values are used for both parameters, the system complies with the following rules:
+    - For a tenant with 8 GB of memory or less, the percentage of memory that can be occupied by MemStores is 40%.
+    - For a tenant with more than 8 GB of memory, the percentage of memory that can be occupied by MemStores is 50%.
## Examples
diff --git a/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/29900.enable_rpc_authentication_bypass.md b/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/29900.enable_rpc_authentication_bypass.md
new file mode 100644
index 0000000000..dabf34abd3
--- /dev/null
+++ b/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/29900.enable_rpc_authentication_bypass.md
@@ -0,0 +1,37 @@
+| description ||
+|---|---|
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
+
+# enable_rpc_authentication_bypass
+
+**Note**
+
+This parameter was introduced in OceanBase Database V4.3.0.
+
+## Purpose
+
+The `enable_rpc_authentication_bypass` parameter specifies whether to allow OceanBase Migration Service (OMS) and OBKV to bypass remote procedure call (RPC) security authentication when connecting to an OceanBase cluster for which RPC security authentication is enabled.
+
+## Attributes
+
+| **Attribute** | **Description** |
+| --- | --- |
+| Parameter type | BOOL |
+| Default value | True |
+| Value range | `True`: enable; `False`: disable |
+| Modifiable | Yes. You can use the `ALTER SYSTEM SET` statement to modify the parameter. |
+| Effective upon OBServer node restart | No |
+
+## Examples
+
+```shell
+obclient> ALTER SYSTEM SET enable_rpc_authentication_bypass = True;
+```
+
+## References
+
+[RPC connection authentication](../../../../600.manage/500.security-and-permissions/300.access-control/400.1rpc-connection-authentication.md)
\ No newline at end of file
diff --git a/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/30000.strict_check_os_params.md b/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/30000.strict_check_os_params.md
new file mode 100644
index 0000000000..199044e716
--- /dev/null
+++ b/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/30000.strict_check_os_params.md
@@ -0,0 +1,41 @@
+| description ||
+|---|---|
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
+
+# strict_check_os_params
+
+**Note**
+
+This parameter was introduced in OceanBase Database V4.2.2.
+
+## Purpose
+
+`strict_check_os_params` specifies whether to enable the strict check on operating system parameters.
+
+**Notice**
+
+You can configure this parameter only in the sys tenant.
+
+## Attributes
+
+| **Attribute** | **Description** |
+| --- | --- |
+| Parameter type | Boolean |
+| Default value | false |
+| Value range | [true, false]. If the value is `true`, the system returns an error when the value of an operating system parameter is not within the specified value range, and the OBServer node cannot be started normally. If the value is `false`, the system returns a warning when the value of an operating system parameter is not within the specified value range, but the OBServer node can be started normally. |
+| Modifiable | Yes. You can use the `ALTER SYSTEM SET` statement to modify the parameter. |
+| Effective upon OBServer node restart | No |
+
+## Examples
+
+```shell
+obclient> ALTER SYSTEM SET strict_check_os_params=false;
+```
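+
+Conversely, a sketch of enabling the strict check so that an OBServer node fails fast when operating system parameters are misconfigured:
+
+```shell
+obclient> ALTER SYSTEM SET strict_check_os_params = true;
+```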
diff --git a/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/30100.enable_dblink.md b/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/30100.enable_dblink.md
new file mode 100644
index 0000000000..516805e36b
--- /dev/null
+++ b/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/300.cluster-level-configuration-items/30100.enable_dblink.md
@@ -0,0 +1,38 @@
+| description ||
+|---|---|
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
+
+# enable_dblink
+
+**Note**
+
+This parameter was introduced in OceanBase Database V4.2.1 BP4.
+
+## Purpose
+
+The `enable_dblink` parameter specifies whether to enable the DBLink feature.
+
+## Attributes
+
+| **Attribute** | **Description** |
+| --- | --- |
+| Parameter type | Boolean |
+| Default value | true |
+| Value range | [true, false]. `true`: enables the DBLink feature; `false`: disables the DBLink feature, with the `OB_OP_NOT_ALLOW` error returned. |
+| Modifiable | Yes. You can use the `ALTER SYSTEM SET` statement to modify the parameter. |
+| Effective upon OBServer node restart | No |
+
+## Examples
+
+```shell
+obclient> ALTER SYSTEM SET enable_dblink=true;
+```
+
+## References
+
+* [Create a DBLink](../../../../700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/100.create-a-dblink-of-mysql-mode.md)
+* [Drop a DBLink](../../../../700.reference/300.database-object-management/100.manage-object-of-mysql-mode/900.manage-dblink-of-mysql-mode/500.delete-a-dblink-of-mysql-mode.md)
diff --git a/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/400.tenant-level-configuration-items/27000.default_table_store_format.md b/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/400.tenant-level-configuration-items/27000.default_table_store_format.md
new file mode 100644
index 0000000000..e827867de6
--- /dev/null
+++ b/en-US/700.reference/800.configuration-items-and-system-variables/100.system-configuration-items/400.tenant-level-configuration-items/27000.default_table_store_format.md
@@ -0,0 +1,55 @@
+| description ||
+|---|---|
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
+
+# default_table_store_format
+
+**Note**
+
+This parameter was introduced in OceanBase Database V4.3.0.
+
+## Purpose
+
+The `default_table_store_format` parameter specifies the default format for a table created in a user tenant, which can be a rowstore table, columnstore table, or rowstore-columnstore redundant table.
+
+## Attributes
+
+| **Attribute** | **Description** |
+| -------- | -------- |
+| Parameter type | String |
+| Default value | row |
+| Value range | ("row", "column", "compound")Note
- `row`: specifies to create a rowstore table.
- `column`: specifies to create a pure columnstore table. If the
with column group
clause is not appended to the table creation statement, the with column group(each column)
clause is automatically appended to the statement. - `compound`: specifies to create a rowstore-columnstore redundant table. If the
with column group
clause is not appended to the table creation statement, the with column group(all columns, each column)
clause is automatically appended to the statement.
|
+| Modifiable | Yes. You can use the `ALTER SYSTEM SET` statement to modify the parameter. |
+| Effective upon OBServer node restart | No |
+
+## Considerations
+
+* This parameter is valid only for user tenants and does not affect meta tenants or the sys tenant.
+
+* When you create a table in a user tenant, you can directly specify the default format of the table as rowstore, columnstore, or rowstore-columnstore redundant without modifying the original table creation statement.
+
+* This parameter takes effect only for table creation statements without a `with column group` clause, and is invalid for index tables.
+
+
+## Examples
+
+* Set the default table format to `column` for a user tenant. The `with column group(each column)` clause is automatically appended to the table creation statement.
+
+ ```shell
+ obclient> ALTER SYSTEM SET default_table_store_format = "column";
+ ```
+
+* Set the default table format to `compound` for a user tenant. The `with column group(all columns, each column)` clause is automatically appended to the table creation statement.
+
+ ```shell
+ obclient> ALTER SYSTEM SET default_table_store_format = "compound";
+ ```
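+
+* With `default_table_store_format` set to `column`, a plain table creation statement behaves as if the columnstore clause had been written explicitly. A hypothetical sketch:
+
+  ```shell
+  obclient> CREATE TABLE t1 (c1 INT, c2 INT);
+  -- Equivalent to: CREATE TABLE t1 (c1 INT, c2 INT) WITH COLUMN GROUP(each column);
+  ```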
+
+## References
+
+* [Columnstore](../../../../700.reference/100.oceanbase-database-concepts/900.storage-architecture/200.data-storage/320.columnstore-engine.md)
+* [Create a columnstore table](../../../../700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/200.create-a-table-for-mysql-tenant-of-mysql-mode.md)
\ No newline at end of file
diff --git a/en-US/700.reference/800.configuration-items-and-system-variables/200.system-variable/300.global-system-variable/16900.ob_enable_pl_cache-global.md b/en-US/700.reference/800.configuration-items-and-system-variables/200.system-variable/300.global-system-variable/16900.ob_enable_pl_cache-global.md
new file mode 100644
index 0000000000..486d7965c6
--- /dev/null
+++ b/en-US/700.reference/800.configuration-items-and-system-variables/200.system-variable/300.global-system-variable/16900.ob_enable_pl_cache-global.md
@@ -0,0 +1,45 @@
+| description ||
+|---|---|
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
+
+# ob_enable_pl_cache
+
+**Note**
+
+This variable was introduced in OceanBase Database V4.2.2.
+
+## Purpose
+
+`ob_enable_pl_cache` specifies whether to enable the PL cache module.
+
+## Attributes
+
+| **Attribute** | **Description** |
+|---------|---------------|
+| Parameter type | Boolean |
+| Default value | 1 |
+| Value range | `true`: enable; `false`: disable |
+| Effective scope | Global and session |
+| Modifiable | Yes. You can use the `SET` statement to modify the variable. |
+
+## Considerations
+
+We recommend that you enable the PL cache module. If you disable the module, stored procedure recompilation will be triggered each time a stored procedure is executed.
+
+## Examples
+
+* Set the variable at the session level as follows:
+
+ ```shell
+ obclient> SET ob_enable_pl_cache=true;
+ ```
+
+* Set the variable at the global level as follows:
+
+ ```shell
+ obclient> SET GLOBAL ob_enable_pl_cache=true;
+ ```
diff --git a/en-US/800.FAQ/800.column-storage-faq.md b/en-US/800.FAQ/800.column-storage-faq.md
new file mode 100644
index 0000000000..bfef578954
--- /dev/null
+++ b/en-US/800.FAQ/800.column-storage-faq.md
@@ -0,0 +1,108 @@
+| description ||
+|---|---|
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||
+
+# FAQ about columnstores
+
+## Can I add or delete columns in a columnstore table?
+
+* You can add or delete columns in a columnstore table.
+
+* You can increase or decrease the value length of columns of the VARCHAR data type.
+
+* Like rowstore tables, columnstore tables also support multiple types of offline DDL operations.
+
+For more information about the conversion between rowstore and columnstore tables, see [Convert a rowstore table to a columnstore table (MySQL mode)](../700.reference/300.database-object-management/100.manage-object-of-mysql-mode/200.manage-tables-of-mysql-mode/600.change-table-of-mysql-mode.md) or [Convert a rowstore table to a columnstore table (Oracle mode)](../700.reference/300.database-object-management/200.manage-object-of-oracle-mode/100.manage-tables-of-oracle-mode/600.change-table-of-oracle-mode.md).
+
+## What are the characteristics of columnstore table queries?
+
+* In a rowstore-columnstore redundant table, by default, the columnstore mode is used for range scans and the rowstore mode is used for point get queries. The sketch after this list shows how to check and force the access mode.
+
+* In a pure columnstore table, the columnstore mode is used for all queries.
+
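+For a rowstore-columnstore redundant table, you can check which access mode the optimizer picks by running `EXPLAIN` on the query. A sketch, assuming a redundant table `t1` with primary key `pk1` and column `c2`, and the `USE_COLUMN_TABLE` and `NO_USE_COLUMN_TABLE` hints for forcing a mode:
+
+```shell
+obclient> EXPLAIN SELECT /*+ USE_COLUMN_TABLE */ * FROM t1 WHERE c2 > 10;
+obclient> EXPLAIN SELECT /*+ NO_USE_COLUMN_TABLE */ * FROM t1 WHERE pk1 = 1;
+```
+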
+## Do columnstore tables support transactions? Do they have any limit on the transaction size?
+
+Like rowstore tables, columnstore tables also support transactions and do not have any limit on the transaction size. Columnstore tables also support strong consistency.
+
+## Do log synchronization and backup and restore for columnstore tables have any special characteristics compared to those for rowstore tables?
+
+No. Log synchronization and backup and restore for columnstore tables are consistent with those for rowstore tables. Synchronized logs are stored in the rowstore format.
+
+## Can I convert a rowstore table into a columnstore table by using DDL statements?
+
+Yes. You can execute DDL statements that add or drop column groups to convert a rowstore table into a columnstore table and back. The syntax is as follows:
+
+```shell
+create table t1( pk1 int, c2 int, primary key (pk1));
+
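+-- Convert between the rowstore and rowstore-columnstore redundant formats: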
+alter table t1 add column group(all columns, each column);
+alter table t1 drop column group(all columns, each column);
+
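+-- Convert between the rowstore and pure columnstore formats: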
+alter table t1 add column group(each column);
+alter table t1 drop column group(each column);
+```
+
+**Note**
+
+After `alter table t1 drop column group(all columns, each column);` is executed, all columns will be put in the default group named `DEFAULT COLUMN GROUP` for storing data.
+
+
+## Can I store multiple columns as a whole in a columnstore table?
+
+In OceanBase Database V4.3.0, you can **store each column separately or store all columns as a whole in the rowstore format**. At present, you cannot store specific columns together.
+
+## Can I update a columnstore table? What is the data structure in MemTables?
+
+In OceanBase Database, the addition, deletion, and modification of data are performed in the memory. Data is stored in the rowstore format in MemTables. Baseline data is read-only and stored in the columnstore format on the disk. When you read data from a column, the data returned is a combination of the rowstore data in the MemTable and the columnstore data on the disk. This means that **OceanBase Database supports strong-consistency columnstore data read without a latency**. Data written to MemTables supports minor compactions. Compacted data is still stored in the rowstore format. After a major compaction, rowstore data and baseline columnstore data are integrated to form new baseline columnstore data.
+
+**Notice**
+
+If you perform a large number of update operations on a columnstore table without performing a major compaction in a timely manner, the query performance will be compromised. Therefore, we recommend that you initiate a major compaction after batch data import to achieve the optimal query performance. A small number of update operations will not affect the query performance.
+
+
+## Can I create an index on a specific column in a columnstore table?
+
+Yes. You can create indexes of the same index structure on columnstore tables and rowstore tables.
+
+You can create an index on one or more columns of a columnstore table to form a covering index to improve the query performance, or to sort specific columns to improve the sorting performance.
+
+## What is a columnstore index?
+
+OceanBase Database also supports columnstore indexes. A columnstore index is different from an index on a columnstore table. For a columnstore index, the index table is in the columnstore format.
+
+For example, if you want to calculate the sum of values in the `c3` column of the rowstore table `t6` while ensuring the optimal performance, you can create a columnstore index on the `c3` column.
+
+```shell
+create table t6(
+ c1 TINYINT,
+ c2 SMALLINT,
+ c3 MEDIUMINT
+);
+
+create /*+ parallel(2) */ index idx1 on t6(c3) with column group (each column);
+```
+
+You can also create indexes in other ways. Here are some examples:
+
+* Redundant rowstore in an index
+
+ ```shell
+ create index idx1 on t1(c2) storing(c1) with column group(all columns, each column);
+ alter table t1 add index idx1 (c2) storing(c1) with column group(all columns, each column);
+ ```
+
+* Pure columnstore in an index
+
+ ```shell
+ create index idx1 on t1(c2) storing(c1) with column group(each column);
+ alter table t1 add index idx1 (c2) storing(c1) with column group(each column);
+ ```
+
+You can use the `STORING` clause to store data of non-index columns in an index. This can improve the performance of specific queries by avoiding table access and reducing the index sorting cost. The performance can be significantly improved for a query that only needs to access columns stored in an index, without the need to query the original rows of the table.
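+
+For example, assuming the `idx1` definition above, which indexes `c2` and stores `c1`, a query like the following can be answered entirely from the index without accessing the base table:
+
+```shell
+obclient> SELECT c1 FROM t1 WHERE c2 = 10;
+```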
\ No newline at end of file
From c462c82ee3638ee910c2349739285046df701043 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Thu, 18 Apr 2024 20:25:27 +0800
Subject: [PATCH 38/63] v430-beta-update-1
---
...st-based-query-rewriting_20240411114437.md | 215 ------------------
...st-based-query-rewriting_20240417164139.md | 215 ------------------
2 files changed, 430 deletions(-)
delete mode 100644 .history/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/300.cost-based-query-rewriting_20240411114437.md
delete mode 100644 .history/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/300.cost-based-query-rewriting_20240417164139.md
diff --git a/.history/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/300.cost-based-query-rewriting_20240411114437.md b/.history/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/300.cost-based-query-rewriting_20240411114437.md
deleted file mode 100644
index ed4e10dac3..0000000000
--- a/.history/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/300.cost-based-query-rewriting_20240411114437.md
+++ /dev/null
@@ -1,215 +0,0 @@
-# Cost-based query rewrite
-
-OceanBase Database supports only one type of cost-based query rewrite: OR-EXPANSION.
-
-Its later versions will support advanced cost-based rewriting rules for database administration such as complex view merge and window function rewrite.
-
-## OR-EXPANSION
-
-OR-EXPANSION rewrites a query to several subqueries, and the result sets of these subqueries are combined into the same result set through the `UNION` operator. This allows you to optimize each subquery. However, the rewrite also results in the execution of multiple subqueries. You must determine whether to perform the rewrite based on the cost analysis.
-
-Purposes of the OR-EXPANSION rewrite are as follows:
-
-* Allow subqueries to use different indexes to speed up the query.
-
- In the following example, query Q1 is rewritten to Q2, where `LNNVL(t1.a = 1)`, the predicate in Q2, ensures that the two subqueries do not generate duplicate results. Before the rewrite, Q1 generally accesses the primary table. After the rewrite, if indexes (a) and (b) are created on table `t1`, subqueries of Q2 are allowed to use different indexes for data access.
-
- ```javascript
- Q1:
- obclient> SELECT * FROM t1 WHERE t1.a = 1 OR t1.b = 1;
- Q2:
- obclient> SELECT * FROM t1 WHERE t1.a = 1 UNION ALL SELECT * FROM t1.b = 1
- AND LNNVL(t1.a = 1);
- ```
-
- Here is a complete example:
-
- ```javascript
- obclient> CREATE TABLE t1(a INT, b INT, c INT, d INT, e INT, INDEX IDX_a(a), INDEX IDX_b(b));
- Query OK, 0 rows affected
-
- /*Without OR-EXPANSION rewrite, primary access path is the only option for the query*/
- obclient> EXPLAIN SELECT/*+NO_REWRITE()*/ * FROM t1 WHERE t1.a = 1 OR t1.b = 1;
- +--------------------------------------------------------------+
- | Query Plan |
- +--------------------------------------------------------------+
- | ===================================
- |ID|OPERATOR |NAME|EST. ROWS|COST|
- -----------------------------------
- |0 |TABLE SCAN|t1 |4 |649 |
- ===================================
-
- Outputs & filters:
- -------------------------------------
- 0 - output([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), filter([t1.a = 1 OR t1.b = 1]),
- access([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), partitions(p0)
-
- /*After the rewrite, different index access paths are available for each subquery*/
- obclient>EXPLAIN SELECT * FROM t1 WHERE t1.a = 1 OR t1.b = 1;
- +------------------------------------------------------------------------+
- | Query Plan |
- +------------------------------------------------------------------------+
- | =========================================
- |ID|OPERATOR |NAME |EST. ROWS|COST|
- -----------------------------------------
- |0 |UNION ALL | |3 |190 |
- |1 | TABLE SCAN|t1(idx_a)|2 |94 |
- |2 | TABLE SCAN|t1(idx_b)|1 |95 |
- =========================================
-
- Outputs & filters:
- -------------------------------------
- 0 - output([UNION(t1.a, t1.a)], [UNION(t1.b, t1.b)], [UNION(t1.c, t1.c)], [UNION(t1.d, t1.d)], [UNION(t1.e, t1.e)]), filter(nil)
- 1 - output([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), filter(nil),
- access([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), partitions(p0)
- 2 - output([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), filter([lnnvl(t1.a = 1)]),
- access([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), partitions(p02
- ```
-
-* Allow subqueries to use different join algorithms to speed up the query and avoid using a Cartesian join.
-
- In the following example, query Q1 is rewritten to Q2. For Q1, the nested loop join, which results in a Cartesian product, is the only join option available. After the rewrite, nested loop join, hash join, and merge join are available for each subquery, providing more options for optimization.
-
- ```javascript
- Q1:
- obclient> SELECT * FROM t1, t2 WHERE t1.a = t2.a OR t1.b = t2.b;
-
- Q2:
- obclient> SELECT * FROM t1, t2 WHERE t1.a = t2.a UNION ALL
- SELECT * FROM t1, t2 WHERE t1.b = t2.b AND LNNVL(t1.a = t2.a);
- ```
-
- Here is a complete example:
-
- ```javascript
- obclient> CREATE TABLE t1(a INT, b INT);
- Query OK, 0 rows affected
-
- obclient> CREATE TABLE t2(a INT, b INT);
- Query OK, 0 rows affected
-
- /*Without the rewrite, the nested loop join is the only available option*/
- obclient> EXPLAIN SELECT/*+NO_REWRITE()*/ * FROM t1, t2
- WHERE t1.a = t2.a OR t1.b = t2.b;
- +--------------------------------------------------------------------------+
- | Query Plan |
- +--------------------------------------------------------------------------+
- | ===========================================
- |ID|OPERATOR |NAME|EST. ROWS|COST |
- -------------------------------------------
- |0 |NESTED-LOOP JOIN| |3957 |585457|
- |1 | TABLE SCAN |t1 |1000 |499 |
- |2 | TABLE SCAN |t2 |4 |583 |
- ===========================================
-
- Outputs & filters:
- -------------------------------------
- 0 - output([t1.a], [t1.b], [t2.a], [t2.b]), filter(nil),
- conds(nil), nl_params_([t1.a], [t1.b])
- 1 - output([t1.a], [t1.b]), filter(nil),
- access([t1.a], [t1.b]), partitions(p0)
- 2 - output([t2.a], [t2.b]), filter([? = t2.a OR ? = t2.b]),
- access([t2.a], [t2.b]), partitions(p0)
-
- /*After the rewrite, every subquery uses a hash join*/
- obclient> EXPLAIN SELECT * FROM t1, t2 WHERE t1.a = t2.a OR t1.b = t2.b;
- +--------------------------------------------------------------------------+
- | Query Plan |
- +--------------------------------------------------------------------------+
- |ID|OPERATOR |NAME|EST. ROWS|COST|
- -------------------------------------
- |0 |UNION ALL | |2970 |9105|
- |1 | HASH JOIN | |1980 |3997|
- |2 | TABLE SCAN|t1 |1000 |499 |
- |3 | TABLE SCAN|t2 |1000 |499 |
- |4 | HASH JOIN | |990 |3659|
- |5 | TABLE SCAN|t1 |1000 |499 |
- |6 | TABLE SCAN|t2 |1000 |499 |
- =====================================
-
- Outputs & filters:
- -------------------------------------
- 0 - output([UNION(t1.a, t1.a)], [UNION(t1.b, t1.b)], [UNION(t2.a, t2.a)], [UNION(t2.b, t2.b)]), filter(nil)
- 1 - output([t1.a], [t1.b], [t2.a], [t2.b]), filter(nil),
- equal_conds([t1.a = t2.a]), other_conds(nil)
- 2 - output([t1.a], [t1.b]), filter(nil),
- access([t1.a], [t1.b]), partitions(p0)
- 3 - output([t2.a], [t2.b]), filter(nil),
- access([t2.a], [t2.b]), partitions(p0)
- 4 - output([t1.a], [t1.b], [t2.a], [t2.b]), filter(nil),
- equal_conds([t1.b = t2.b]), other_conds([lnnvl(t1.a = t2.a)])
- 5 - output([t1.a], [t1.b]), filter(nil),
- access([t1.a], [t1.b]), partitions(p0)
- 6 - output([t2.a], [t2.b]), filter(nil),
- access([t2.a], [t2.b]), partitions(p0)
- ```
-
-* Allow each subquery to separately perform sorting elimination, which accelerates the retrieval of the TOP K results.
-
- In the following example, query Q1 is rewritten to Q2. For Q1, the only way of execution is to find the rows that fit the condition, sort them, and then retrieve the TOP 10 results. Assume that Q2 has two subqueries. If indexes a and b are available, each subquery of Q2 can eliminate redundancy by using an index and retrieve the TOP 10 results. Finally, sort the 20 rows retrieved by these subqueries to retrieve the final TOP 10 rows.
-
- ```javascript
- Q1:
- obclient> SELECT * FROM t1 WHERE t1.a = 1 OR t1.a = 2 ORDER BY b LIMIT 10;
-
- Q2:
- obclient> SELECT * FROM
- (SELECT * FROM t1 WHERE t1.a = 1 ORDER BY b LIMIT 10 UNION ALL
- SELECT * FROM t1 WHERE t1.a = 2 ORDER BY b LIMIT 10) AS TEMP
- ORDER BY temp.b LIMIT 10;
- ```
-
- Here is a complete example:
-
- ```javascript
- obclient> CREATE TABLE t1(a INT, b INT, INDEX IDX_a(a, b));
- Query OK, 0 rows affected
-
- /*Before the rewrite, data is sorted to retrieve the TOP K results*/
- obclient> EXPLAIN SELECT/*+NO_REWRITE()*/ * FROM t1 WHERE t1.a = 1 OR t1.a = 2 ORDER BY b LIMIT 10;
- +-------------------------------------------------------------------------+
- | Query Plan |
- +-------------------------------------------------------------------------+
- | ==========================================
- |ID|OPERATOR |NAME |EST. ROWS|COST|
- ------------------------------------------
- |0 |LIMIT | |4 |77 |
- |1 | TOP-N SORT | |4 |76 |
- |2 | TABLE SCAN|t1(idx_a)|4 |73 |
- ==========================================
-
- Outputs & filters:
- -------------------------------------
- 0 - output([t1.a], [t1.b]), filter(nil), limit(10), offset(nil)
- 1 - output([t1.a], [t1.b]), filter(nil), sort_keys([t1.b, ASC]), topn(10)
- 2 - output([t1.a], [t1.b]), filter(nil),
- access([t1.a], [t1.b]), partitions(p0)
-
- /*After the rewrite, the subqueries remove the SORT operator and eventually retrieve the TOP K results*/
- obclient>EXPLAIN SELECT * FROM t1 WHERE t1.a = 1 OR t1.a = 2
- ORDER BY b LIMIT 10;
- +-------------------------------------------------------------------------+
- | Query Plan |
- +-------------------------------------------------------------------------+
- | ===========================================
- |ID|OPERATOR |NAME |EST. ROWS|COST|
- -------------------------------------------
- |0 |LIMIT | |3 |76 |
- |1 | TOP-N SORT | |3 |76 |
- |2 | UNION ALL | |3 |74 |
- |3 | TABLE SCAN|t1(idx_a)|2 |37 |
- |4 | TABLE SCAN|t1(idx_a)|1 |37 |
- ===========================================
-
- Outputs & filters:
- -------------------------------------
- 0 - output([UNION(t1.a, t1.a)], [UNION(t1.b, t1.b)]), filter(nil), limit(10), offset(nil)
- 1 - output([UNION(t1.a, t1.a)], [UNION(t1.b, t1.b)]), filter(nil), sort_keys([UNION(t1.b, t1.b), ASC]), topn(10)
- 2 - output([UNION(t1.a, t1.a)], [UNION(t1.b, t1.b)]), filter(nil)
- 3 - output([t1.a], [t1.b]), filter(nil),
- access([t1.a], [t1.b]), partitions(p0),
- limit(10), offset(nil)
- 4 - output([t1.a], [t1.b]), filter([lnnvl(t1.a = 1)]),
- access([t1.a], [t1.b]), partitions(p0),
- limit(10), offset(nil)
- ```
diff --git a/.history/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/300.cost-based-query-rewriting_20240417164139.md b/.history/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/300.cost-based-query-rewriting_20240417164139.md
deleted file mode 100644
index 3d96770fcc..0000000000
--- a/.history/en-US/700.reference/1000.performance-tuning-guide/500.sql-optimization/400.sql-optimization/500.query-rewrite/300.cost-based-query-rewriting_20240417164139.md
+++ /dev/null
@@ -1,215 +0,0 @@
-# Cost-based query rewrite
-
-OceanBase Database supports only one type of cost-based query rewrite: OR-EXPANSION.
-
-Its later versions will support advanced cost-based rewriting rules for database administration such as complex view merge and window function rewrite.
-
-## OR-EXPANSION
-
-OR-EXPANSION rewrites a query to several subqueries, and the result sets of these subqueries are combined into the same result set through the `UNION` operator. This allows you to optimize each subquery. However, the rewrite also results in the execution of multiple subqueries. You must determine whether to perform the rewrite based on the cost analysis.
-
-Purposes of the OR-EXPANSION rewrite are as follows:
-
-* Allow subqueries to use different indexes to speed up the query.
-
- In the following example, query Q1 is rewritten to Q2, where `LNNVL(t1.a = 1)`, the predicate in Q2, ensures that the two subqueries do not generate duplicate results. Before the rewrite, Q1 generally accesses the primary table. After the rewrite, if indexes (a) and (b) are created on table `t1`, subqueries of Q2 are allowed to use different indexes for data access.
-
- ```javascript
- Q1:
- obclient> SELECT * FROM t1 WHERE t1.a = 1 OR t1.b = 1;
- Q2:
- obclient> SELECT * FROM t1 WHERE t1.a = 1 UNION ALL SELECT * FROM t1.b = 1
- AND LNNVL(t1.a = 1);
- ```
-
- Here is a complete example:
-
- ```javascript
- obclient> CREATE TABLE t1(a INT, b INT, c INT, d INT, e INT, INDEX IDX_a(a), INDEX IDX_b(b));
- Query OK, 0 rows affected
-
- /*Without OR-EXPANSION rewrite, primary access path is the only option for the query*/
- obclient> EXPLAIN SELECT/*+NO_REWRITE()*/ * FROM t1 WHERE t1.a = 1 OR t1.b = 1;
- +--------------------------------------------------------------+
- | Query Plan |
- +--------------------------------------------------------------+
- | ===================================
- |ID|OPERATOR |NAME|EST. ROWS|COST|
- -----------------------------------
- |0 |TABLE SCAN|t1 |4 |649 |
- ===================================
-
- Outputs & filters:
- -------------------------------------
- 0 - output([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), filter([t1.a = 1 OR t1.b = 1]),
- access([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), partitions(p0)
-
- /*After the rewrite, different index access paths are available for each subquery*/
- obclient>EXPLAIN SELECT * FROM t1 WHERE t1.a = 1 OR t1.b = 1;
- +------------------------------------------------------------------------+
- | Query Plan |
- +------------------------------------------------------------------------+
- | =========================================
- |ID|OPERATOR |NAME |EST. ROWS|COST|
- -----------------------------------------
- |0 |UNION ALL | |3 |190 |
- |1 | TABLE SCAN|t1(idx_a)|2 |94 |
- |2 | TABLE SCAN|t1(idx_b)|1 |95 |
- =========================================
-
- Outputs & filters:
- -------------------------------------
- 0 - output([UNION(t1.a, t1.a)], [UNION(t1.b, t1.b)], [UNION(t1.c, t1.c)], [UNION(t1.d, t1.d)], [UNION(t1.e, t1.e)]), filter(nil)
- 1 - output([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), filter(nil),
- access([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), partitions(p0)
- 2 - output([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), filter([lnnvl(t1.a = 1)]),
- access([t1.a], [t1.b], [t1.c], [t1.d], [t1.e]), partitions(p02
- ```
-
-* Allow subqueries to use different join algorithms to speed up the query and avoid using a Cartesian join.
-
- In the following example, query Q1 is rewritten to Q2. For Q1, the nested loop join, which results in a Cartesian product, is the only join option available. After the rewrite, nested loop join, hash join, and merge join are available for each subquery, providing more options for optimization.
-
- ```javascript
- Q1:
- obclient> SELECT * FROM t1, t2 WHERE t1.a = t2.a OR t1.b = t2.b;
-
- Q2:
- obclient> SELECT * FROM t1, t2 WHERE t1.a = t2.a UNION ALL
- SELECT * FROM t1, t2 WHERE t1.b = t2.b AND LNNVL(t1.a = t2.a);
- ```
-
- Here is a complete example:
-
- ```javascript
- obclient> CREATE TABLE t1(a INT, b INT);
- Query OK, 0 rows affected
-
- obclient> CREATE TABLE t2(a INT, b INT);
- Query OK, 0 rows affected
-
- /*Without the rewrite, the nested loop join is the only available option*/
- obclient> EXPLAIN SELECT/*+NO_REWRITE()*/ * FROM t1, t2
- WHERE t1.a = t2.a OR t1.b = t2.b;
- +--------------------------------------------------------------------------+
- | Query Plan |
- +--------------------------------------------------------------------------+
- | ===========================================
- |ID|OPERATOR |NAME|EST. ROWS|COST |
- -------------------------------------------
- |0 |NESTED-LOOP JOIN| |3957 |585457|
- |1 | TABLE SCAN |t1 |1000 |499 |
- |2 | TABLE SCAN |t2 |4 |583 |
- ===========================================
-
- Outputs & filters:
- -------------------------------------
- 0 - output([t1.a], [t1.b], [t2.a], [t2.b]), filter(nil),
- conds(nil), nl_params_([t1.a], [t1.b])
- 1 - output([t1.a], [t1.b]), filter(nil),
- access([t1.a], [t1.b]), partitions(p0)
- 2 - output([t2.a], [t2.b]), filter([? = t2.a OR ? = t2.b]),
- access([t2.a], [t2.b]), partitions(p0)
-
- /*After the rewrite, every subquery uses a hash join*/
- obclient> EXPLAIN SELECT * FROM t1, t2 WHERE t1.a = t2.a OR t1.b = t2.b;
- +--------------------------------------------------------------------------+
- | Query Plan |
- +--------------------------------------------------------------------------+
- |ID|OPERATOR |NAME|EST. ROWS|COST|
- -------------------------------------
- |0 |UNION ALL | |2970 |9105|
- |1 | HASH JOIN | |1980 |3997|
- |2 | TABLE SCAN|t1 |1000 |499 |
- |3 | TABLE SCAN|t2 |1000 |499 |
- |4 | HASH JOIN | |990 |3659|
- |5 | TABLE SCAN|t1 |1000 |499 |
- |6 | TABLE SCAN|t2 |1000 |499 |
- =====================================
-
- Outputs & filters:
- -------------------------------------
- 0 - output([UNION(t1.a, t1.a)], [UNION(t1.b, t1.b)], [UNION(t2.a, t2.a)], [UNION(t2.b, t2.b)]), filter(nil)
- 1 - output([t1.a], [t1.b], [t2.a], [t2.b]), filter(nil),
- equal_conds([t1.a = t2.a]), other_conds(nil)
- 2 - output([t1.a], [t1.b]), filter(nil),
- access([t1.a], [t1.b]), partitions(p0)
- 3 - output([t2.a], [t2.b]), filter(nil),
- access([t2.a], [t2.b]), partitions(p0)
- 4 - output([t1.a], [t1.b], [t2.a], [t2.b]), filter(nil),
- equal_conds([t1.b = t2.b]), other_conds([lnnvl(t1.a = t2.a)])
- 5 - output([t1.a], [t1.b]), filter(nil),
- access([t1.a], [t1.b]), partitions(p0)
- 6 - output([t2.a], [t2.b]), filter(nil),
- access([t2.a], [t2.b]), partitions(p0)
- ```
-
-* Allow each subquery to separately perform sorting elimination, which accelerates the retrieval of the TOP K results.
-
- In the following example, query Q1 is rewritten to Q2. For Q1, the only way of execution is to find the rows that fit the condition, sort them, and then retrieve the TOP 10 results. Assume that Q2 has two subqueries. If indexes a and b are available, each subquery of Q2 can eliminate redundancy by using an index and retrieve the TOP 10 results. Finally, sort the 20 rows retrieved by these subqueries to retrieve the final TOP 10 rows.
-
- ```javascript
- Q1:
- obclient> SELECT * FROM t1 WHERE t1.a = 1 OR t1.a = 2 ORDER BY b LIMIT 10;
-
- Q2:
- obclient> SELECT * FROM
- (SELECT * FROM t1 WHERE t1.a = 1 ORDER BY b LIMIT 10 UNION ALL
- SELECT * FROM t1 WHERE t1.a = 2 ORDER BY b LIMIT 10) AS TEMP
- ORDER BY temp.b LIMIT 10;
- ```
-
- Here is a complete example:
-
- ```javascript
- obclient> CREATE TABLE t1(a INT, b INT, INDEX IDX_a(a, b));
- Query OK, 0 rows affected
-
- /*Before the rewrite, data is sorted to retrieve the TOP K results*/
- obclient> EXPLAIN SELECT/*+NO_REWRITE()*/ * FROM t1 WHERE t1.a = 1 OR t1.a = 2 ORDER BY b LIMIT 10;
- +-------------------------------------------------------------------------+
- | Query Plan |
- +-------------------------------------------------------------------------+
- | ==========================================
- |ID|OPERATOR |NAME |EST. ROWS|COST|
- ------------------------------------------
- |0 |LIMIT | |4 |77 |
- |1 | TOP-N SORT | |4 |76 |
- |2 | TABLE SCAN|t1(idx_a)|4 |73 |
- ==========================================
-
- Outputs & filters:
- -------------------------------------
- 0 - output([t1.a], [t1.b]), filter(nil), limit(10), offset(nil)
- 1 - output([t1.a], [t1.b]), filter(nil), sort_keys([t1.b, ASC]), topn(10)
- 2 - output([t1.a], [t1.b]), filter(nil),
- access([t1.a], [t1.b]), partitions(p0)
-
- /*After the rewrite, the subqueries remove the SORT operator and eventually retrieve the TOP K results*/
- obclient>EXPLAIN SELECT * FROM t1 WHERE t1.a = 1 OR t1.a = 2
- ORDER BY b LIMIT 10;
- +-------------------------------------------------------------------------+
- | Query Plan |
- +-------------------------------------------------------------------------+
- | ===========================================
- |ID|OPERATOR |NAME |EST. ROWS|COST|
- -------------------------------------------
- |0 |LIMIT | |3 |76 |
- |1 | TOP-N SORT | |3 |76 |
- |2 | UNION ALL | |3 |74 |
- |3 | TABLE SCAN|t1(idx_a)|2 |37 |
- |4 | TABLE SCAN|t1(idx_a)|1 |37 |
- ===========================================
-
- Outputs & filters:
- -------------------------------------
- 0 - output([UNION(t1.a, t1.a)], [UNION(t1.b, t1.b)]), filter(nil), limit(10), offset(nil)
- 1 - output([UNION(t1.a, t1.a)], [UNION(t1.b, t1.b)]), filter(nil), sort_keys([UNION(t1.b, t1.b), ASC]), topn(10)
- 2 - output([UNION(t1.a, t1.a)], [UNION(t1.b, t1.b)]), filter(nil)
- 3 - output([t1.a], [t1.b]), filter(nil),
- access([t1.a], [t1.b]), partitions(p0),
- limit(10), offset(nil)
- 4 - output([t1.a], [t1.b]), filter([lnnvl(t1.a = 1)]),
- access([t1.a], [t1.b]), partitions(p0),
- limit(10), offset(nil)
- ```
From 5db9c3736ad22de6aa4daecd347ec06e59873fe1 Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Thu, 18 Apr 2024 23:27:53 +0800
Subject: [PATCH 39/63] v430-beta-update-0418
---
...-parallelly-importing-and-data-compression.md | 16 ++++++++--------
.../100.add-server.md | 5 -----
2 files changed, 8 insertions(+), 13 deletions(-)
diff --git a/en-US/200.quickstart/500.experience-advanced-features-of-oceanbase/300.experience-parallelly-importing-and-data-compression.md b/en-US/200.quickstart/500.experience-advanced-features-of-oceanbase/300.experience-parallelly-importing-and-data-compression.md
index f65148c881..80f6d180ca 100644
--- a/en-US/200.quickstart/500.experience-advanced-features-of-oceanbase/300.experience-parallelly-importing-and-data-compression.md
+++ b/en-US/200.quickstart/500.experience-advanced-features-of-oceanbase/300.experience-parallelly-importing-and-data-compression.md
@@ -230,13 +230,13 @@ After the data is imported, check the number of records in the table and the spa
4. Execute the following SQL statement in the `sys` tenant to query the space occupied by the imported data:
- ```shell
- obclient [oceanbase]> select b.table_name,a.svr_ip,data_size/1024/1024/1024 from CDB_OB_TABLET_REPLICAS a,CDB_OB_TABLE_LOCATIONS b where a.tablet_id=b.tablet_id and b.table_name='T_F1';
- +------------+---------------+----------------------------+
- | table_name | svr_ip | a.data_size/1024/1024/1024 |
- +------------+---------------+----------------------------+
- | t_f1 | xxx.xx.xxx.xx | 6.144531250000 |
- +------------+---------------+----------------------------+
- ```
+ ```shell
+ obclient [oceanbase]> select b.table_name,a.svr_ip,data_size/1024/1024/1024 from CDB_OB_TABLET_REPLICAS a,CDB_OB_TABLE_LOCATIONS b where a.tablet_id=b.tablet_id and b.table_name='T_F1';
+ +------------+---------------+----------------------------+
+ | table_name | svr_ip | a.data_size/1024/1024/1024 |
+ +------------+---------------+----------------------------+
+ | t_f1 | xxx.xx.xxx.xx | 6.144531250000 |
+ +------------+---------------+----------------------------+
+ ```
The compressed table is about 6.145 GB in size, and the compression ratio (uncompressed size divided by compressed size) is 10/6.145, or about 1.63.
diff --git a/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/100.configuring-the-deploy-environment-through-oat/100.add-server.md b/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/100.configuring-the-deploy-environment-through-oat/100.add-server.md
index 30c50c888e..909e21f2fe 100644
--- a/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/100.configuring-the-deploy-environment-through-oat/100.add-server.md
+++ b/en-US/400.deploy/300.deploy-oceanbase-enterprise-edition/300.deploy-through-a-graphical-interface/100.configuring-the-deploy-environment-through-oat/100.add-server.md
@@ -73,11 +73,6 @@ You have installed OAT. For more information, see [Deploy OAT](../../200.prepara
* **Automatically Synchronize to OCP**: This parameter is optional and is disabled by default. After you enable this option, you must select an installed OCP. If no OCP is available, this option is disabled. If you enable **Automatically Synchronize to OCP**, OAT automatically calls an API of OCP to synchronize server information to the OCP that you specified during initialization for future use.
-
- Note
- The installed OCPs refer to those available on the page that appears after you choose Product Services > Products > OCP in the OAT console. For more information about how to install OCP, see Deploy OCP.
-
-
Notice
- The installed OCPs refer to those available on the page that appears after you choose Product Services > Products > OCP in the OAT console. For more information about how to install OCP, see Deploy OCP.
- Before deploying an OceanBase cluster using OCP, it is necessary to add the nodes where the OceanBase cluster will be deployed to the OCP resource pool.
From e370d02984eeb662cf8bdbcbe0995dceab919ddf Mon Sep 17 00:00:00 2001
From: Jackie Qu
Date: Thu, 18 Apr 2024 23:35:08 +0800
Subject: [PATCH 40/63] v430-beta-chinese-character-fix
---
.../17400.gv-ob_session-of-sys-tenant.md | 2 +-
.../17500.v-ob_session-of-sys-tenant.md | 2 +-
.../17400.gv-ob_session-of-mysql-mode.md | 2 +-
.../17500.v-ob_session-of-mysql-mode.md | 2 +-
.../17400.gv-ob_session-of-oracle-mode.md | 2 +-
5 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/17400.gv-ob_session-of-sys-tenant.md b/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/17400.gv-ob_session-of-sys-tenant.md
index bea546645f..4f4ac5a2f9 100644
--- a/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/17400.gv-ob_session-of-sys-tenant.md
+++ b/en-US/700.reference/700.system-views/300.system-view-of-sys-tenant/300.performance-view-of-sys-tenant/17400.gv-ob_session-of-sys-tenant.md
@@ -52,7 +52,7 @@ obclient [oceanbase]> SELECT * FROM oceanbase.GV$OB_SESSION limit 2;