PullRequest: 4904 Update OceanBase version information, including unsupported features and compatibility issues
Merge branch 'V4.3.0-qa-0412' of [email protected]:oceanbase-docs/oceanbase-database-enterprise-doc.git into V4.3.0

https://code.alipay.com/oceanbase-docs/oceanbase-database-enterprise-doc/pull_requests/4904


Signed-off-by: 张素娟 <[email protected]>
子韶 authored and 蚂蚁代码服务 committed Apr 19, 2024
2 parents 3d008c1 + c0acefd commit b664f7b
Showing 256 changed files with 7,963 additions and 3,073 deletions.
2 changes: 2 additions & 0 deletions .menu_en.yml
@@ -182,6 +182,7 @@
500.distributed-database-objects=Distributed database objects
300.data-partitions-and-replicas=Data partitions and replicas
300.partition-replica-type=Partition replica type
500.data-balancing=Data balancing
400.dynamic-scaling=Dynamic scaling
200.scale-in-and-scale-out-of-tenant-resources=Tenant resource scaling
600.data-link=Data link
@@ -374,6 +375,7 @@
10050.dbms-mview-stat-mysql=DBMS_MVIEW_STAT
13300.dbms-resource-manager-mysql=DBMS_RESOURCE_MANAGER
15900.dbms-stats-mysql=DBMS_STATS
16000.dbms-trusted-certificate-manager-mysql=DBMS_TRUSTED_CERTIFICATE_MANAGER
17800.dbms-udr-mysql=DBMS_UDR
17900.dbms-workload-repository-mysql=DBMS_WORKLOAD_REPOSITORY
20700.dbms-xplan-mysql=DBMS_XPLAN
@@ -27,6 +27,7 @@ The following table describes the features supported by these two editions.
| High availability | Five IDCs across three regions | Supported | Supported |
| High availability | Transparent horizontal scaling | Supported | Supported |
| High availability | Multi-tenant management | Supported | Supported |
| High availability | Tenant cloning | Supported | Supported |
| High availability | Data backup and restore | Supported | Supported |
| High availability | Resource isolation | Supported | Supported |
| High availability | Physical standby database | Supported | Supported |
@@ -38,22 +39,24 @@ The following table describes the features supported by these two editions.
| Compatibility | Compatibility with Oracle syntax | Supported | Not supported |
| Compatibility | XA transactions | Supported | Not supported |
| Compatibility | Table locks | Supported | Not supported |
-| Compatibility | Function indexes | Supported | Supported |
+| Compatibility | Function-based indexes | Supported | Supported |
| High performance | Cost-based optimizer | Supported | Supported |
| High performance | Optimization and rewriting of complex queries | Supported | Supported |
| High performance | Parallel execution engine | Supported | Supported |
| High performance | Vectorized engine | Supported | Supported |
-| High performance | Advanced SQL execution plan management (SPM) | Supported | Not supported |
+| High performance | Columnar engine | Supported | Supported |
+| High performance | Advanced SQL plan management (SPM) | Supported | Not supported |
| High performance | Minimum specifications | Supported | Supported |
| High performance | Paxos-based log transmission | Supported | Supported |
| High performance | Distributed strong-consistency transactions, complete atomicity, consistency, isolation, and durability (ACID), and multi-version support | Supported | Supported |
-| High performance | Data partitioning (RANGE, HASH, or LIST) | Supported | Supported |
+| High performance | Data partitioning (RANGE, HASH, and LIST) | Supported | Supported |
| High performance | Global indexes | Supported | Supported |
| High performance | Advanced compression | Supported | Supported |
| High performance | Dynamic sampling | Supported | Supported |
-| High performance | Auto DOP | Supported | Supported |
-| Cross-data source access | Read-only foreign tables (CSV format) | Supported | Supported |
-| Cross-data source access | DBLink | Supported | Not supported |
+| High performance | Auto degree of parallelism (DOP) | Supported | Supported |
+| High performance | Materialized views | Supported | Supported |
+| Cross-data source access | Read-only external tables (in the CSV format) | Supported | Supported |
+| Cross-data source access | DBLink | Supported | Supported |
| Multimodel | TableAPI | Supported | Supported |
| Multimodel | HbaseAPI | Supported | Supported |
| Multimodel | JSON | Supported | Supported |
@@ -63,11 +66,11 @@ The following table describes the features supported by these two editions.
| Security | Privilege management | Supported | Supported |
| Security | Communication encryption | Supported | Supported |
| Security | Advanced security scaling | Supported | Not supported.<br>OceanBase Database Community Edition does not support transparent data encryption (TDE) for row-level labels, data, and logs. |
-| O&M management | Full-link diagnostics | Supported | Supported |
+| O&M management | End-to-end diagnostics | Supported | Supported |
| O&M management | O&M components (liboblog and ob_admin) | Supported | Supported |
| O&M management | OBLOADER & OBDUMPER | Supported | Supported |
-| O&M management | GUI-based development and management tools | Supported | Supported.<br>OceanBase Database Community Edition supports GUI-based development and management tools such as OceanBase Cloud Platform (OCP), OceanBase Migration Service (OMS), and OceanBase Developer Center (ODC). You can download these tools for free. However, OceanBase Migration Assessment (OMA) is charged. |
+| O&M management | GUI-based development and management tools | Supported | Supported.<br>OceanBase Database Community Edition supports GUI-based development and management tools such as OceanBase Cloud Platform (OCP), OceanBase Migration Service (OMS), and OceanBase Developer Center (ODC). You can download these tools for free. However, OceanBase Migration Assessment (OMA) is a paid service. |
| Support and services | Technical consultation (on products) | Supported | OceanBase Database Community Edition provides only community-based technical consultation on products. No commercial expert team is provided for technical consultation. |
-| Support and services | Service acquisition (channels for obtaining technical support) | Professional commercial support team | OceanBase Database Community Edition provides online service consultation only on its official website or in its official community and does not provide commercial expert teams. |
+| Support and services | Service acquisition (channels for obtaining technical support) | Commercial expert team | OceanBase Database Community Edition provides online service consultation only on its official website or in its official community and does not provide commercial expert teams. |
| Support and services | Expert services (planning, implementation, inspection, fault recovery, and production assurance) | On-site services by commercial experts | OceanBase Database Community Edition does not provide expert assurance services. |
| Support and services | Response to faults | 24/7 services | OceanBase Database Community Edition does not provide emergency troubleshooting services. |
@@ -1,14 +1,22 @@
-|description||
+| description ||
|---|---|
-|keywords||
-|dir-name||
-|dir-name-en||
-|tenant-type||
+| keywords ||
+| dir-name ||
+| dir-name-en ||
+| tenant-type ||

# Try out parallel import and data compression

This topic describes how to try out parallel import and data compression.

Common application scenarios are described as follows:

- **Data migration**: When you migrate a large amount of data from one system to another, you can use parallel import to significantly accelerate the data migration and use data compression to reduce the transmission bandwidth and storage resources required.

- **Backup and restore**: When you back up data, you can use data compression to minimize the size of backup files to reduce the occupied storage space. When you restore data, you can use parallel import to accelerate the restore process.

- **Columnstore table**: When you query data in a columnstore table, only the involved columns are accessed without the need to access entire rows. Compressed columns can still be scanned and processed. This can reduce I/O operations, accelerate queries, and improve the overall query performance. After you import data to a columnstore table in batches, we recommend that you initiate a data compression task to improve the read performance. Note that the major compaction for a columnstore table is slow.

## Parallel import

In addition to analytical queries, another important part of operational online analytical processing (OLAP) lies in the parallel import of massive amounts of data, namely the batch data processing capability. The parallel execution framework of OceanBase Database allows you to execute data manipulation language (DML) statements in parallel, which is known as parallel DML (PDML) operations. It supports concurrent data writes to multiple servers in a multi-node database without compromising data consistency for large transactions. The asynchronous minor compaction mechanism enhances the support of the LSM-tree-based storage engine for large transactions when memory is tight.
@@ -18,7 +26,7 @@ This topic describes how to try out the PDML feature of OceanBase Database.
First, copy the schema of the `lineitem` table and create an empty table named `lineitem2`. Table partitioning is used to manage large tables in OceanBase Database. In this example, 16 partitions are used. Therefore, the `lineitem2` table must also be split into 16 partitions:

```shell
-obclient [test]> SHOW CREATE TABLE lineitem\G;
+obclient [test]> SHOW CREATE TABLE lineitem\G
*************************** 1. row ***************************
Table: lineitem

@@ -91,7 +99,7 @@ obclient [test]> TRUNCATE TABLE lineitem2;
obclient [test]> INSERT /*+ parallel(16) enable_parallel_dml */ INTO lineitem2 SELECT * FROM lineitem;
```
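The idea behind the `parallel(16) enable_parallel_dml` hint is that the rows to be written are split across many workers that insert concurrently. The following toy sketch illustrates that division of work only; it is not how OceanBase Database implements PDML internally, and all names in it are made up for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_insert(rows, target, dop=16):
    """Toy model of PDML: split rows into ~dop chunks and let a pool of
    workers 'write' them concurrently. Returns the number of rows written."""
    chunk = max(1, len(rows) // dop)
    chunks = [rows[i:i + chunk] for i in range(0, len(rows), chunk)]

    def write(batch):
        target.extend(batch)  # stands in for one worker's INSERTs
        return len(batch)

    with ThreadPoolExecutor(max_workers=dop) as pool:
        return sum(pool.map(write, chunks))

source = list(range(100_000))  # pretend this is SELECT * FROM lineitem
target = []                    # pretend this is lineitem2
written = parallel_insert(source, target)
print(written, len(target))    # 100000 100000
```

As in real PDML, the per-chunk write order is not guaranteed, but every source row ends up in the target exactly once.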

-Execution result:
+The execution result is as follows:

```shell
obclient> TRUNCATE TABLE lineitem2;
@@ -142,7 +150,9 @@ OceanBase Database allows you to import data from CSV files in multiple ways.
...
```
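To follow along locally you need a CSV file to load. The snippet below writes a small sample file; the file name and the three-column `(id, name, value)` layout are hypothetical, since the schema of the original file is elided above:

```python
import csv
import os
import tempfile

# Generate a small sample CSV to practice LOAD DATA with.
# The path and column layout are made up -- substitute your own.
path = os.path.join(tempfile.gettempdir(), "t_f1_sample.csv")
with open(path, "w", newline="") as f:
    writer = csv.writer(f)
    for i in range(1, 1001):
        writer.writerow([i, f"name_{i}", i * 0.5])

# Verify the row count of the generated file.
with open(path) as f:
    n = sum(1 for _ in f)
print(n)  # 1000
```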

-3. Create a table named `t_f1` in the `test` database of the `test` tenant. For more information about how to create a tenant, see "Manage tenants."
+3. Create a table named `t_f1` in the `test` database of the `test` tenant.
+
+   For more information about how to create a tenant, see "Manage tenants."

```shell
# obclient -hxxx.xxx.xxx.xxx -P2881 -uroot@test -Dtest -A -p -c
@@ -161,7 +171,7 @@ obclient [test]> GRANT FILE ON *.* to username;

<main id="notice" type='notice'>
<h4>Notice</h4>
-<p>For security reasons, the SQL statement for modifying <code>secure_file_priv</code> can only be executed locally and cannot be executed on a remote OBClient. In other words, you must log on to an OBClient (or a MySQL client) on the server where the observer process is located to execute it. For more information, see <a href="../../700.reference/800.configuration-items-and-system-variables/200.system-variable/300.global-system-variable/11500.secure_file_priv-global.md">secure_file_priv</a>.</p>
+<p>For security reasons, the SQL statement for modifying <code>secure_file_priv</code> can only be executed locally and cannot be executed on a remote OBClient. For more information, see <a href="../../700.reference/800.configuration-items-and-system-variables/200.system-variable/300.global-system-variable/11500.secure_file_priv-global.md">secure_file_priv</a>.</p>
</main>

After you complete the settings, reconnect to the session to make the settings take effect. Then, set the transaction timeout period for the session to ensure that the execution does not exit due to timeout.
@@ -181,7 +191,9 @@ You can see that OceanBase Database takes about 4 minutes to import 10 GB of data.
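As a quick sanity check on that figure, 10 GB in roughly 4 minutes implies a sustained import throughput of about 43 MB/s:

```python
# Implied throughput for the run described above: 10 GB in ~4 minutes.
size_mb = 10 * 1024        # 10 GB expressed in MB
seconds = 4 * 60           # ~4 minutes
throughput = size_mb / seconds
print(round(throughput, 1))  # 42.7 (MB/s)
```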

After the data is imported, check the number of records in the table and the space occupied by the table.

-1. The table contains 50 million records.
+1. Check the number of records in the table.
+
+   The table contains 50 million records.

```shell
obclient [test]> SELECT COUNT(*) FROM t_f1;
@@ -192,16 +204,18 @@ After the data is imported, check the number of records in the table and the space occupied by the table.
+----------+
```

-2. Trigger a major compaction.
+2. Initiate a major compaction.

-To check the compression result of baseline data, log on as the `root` user to the `sys` tenant of the cluster and initiate a major compaction to compact and compress the incremental data and baseline data. You can execute the following SQL statement to initiate a major compaction:
+To check the compression results of baseline data, log on as the `root` user to the sys tenant of the cluster and initiate a major compaction to compact and compress the incremental data and baseline data. You can execute the following SQL statement to initiate a major compaction.

```shell
# obclient -h127.0.0.1 -P2881 -uroot@sys -Doceanbase -A -p -c
obclient[oceanbase]> ALTER SYSTEM MAJOR FREEZE;
```

-3. Execute the following SQL statement to query the state of the major compaction. If the statement returns `IDLE`, the major compaction is completed.
+3. Run the following command to query the state of the major compaction.
+
+   If the result returns `IDLE`, the major compaction is completed.

```shell
obclient [oceanbase]> SELECT * FROM oceanbase.CDB_OB_MAJOR_COMPACTION;
@@ -216,13 +230,13 @@ After the data is imported, check the number of records in the table and the space occupied by the table.

4. Execute the following SQL statement in the `sys` tenant to query the space occupied by the imported data:

-```shell
-obclient [oceanbase]> select b.table_name,a.svr_ip,data_size/1024/1024/1024 from CDB_OB_TABLET_REPLICAS a,DBA_OB_TABLE_LOCATIONS b where a.tablet_id=b.tablet_id and b.table_name='T_F1';
-+------------+---------------+----------------------------+
-| table_name | svr_ip | a.data_size/1024/1024/1024 |
-+------------+---------------+----------------------------+
-| t_f1 | xxx.xx.xxx.xx | 6.144531250000 |
-+------------+---------------+----------------------------+
-```
+```shell
+obclient [oceanbase]> select b.table_name,a.svr_ip,data_size/1024/1024/1024 from CDB_OB_TABLET_REPLICAS a,CDB_OB_TABLE_LOCATIONS b where a.tablet_id=b.tablet_id and b.table_name='T_F1';
++------------+---------------+----------------------------+
+| table_name | svr_ip | a.data_size/1024/1024/1024 |
++------------+---------------+----------------------------+
+| t_f1 | xxx.xx.xxx.xx | 6.144531250000 |
++------------+---------------+----------------------------+
+```
The compressed table is about 6.145 GB in size, and the compression ratio (uncompressed size divided by compressed size) is 10/6.145, or about 1.63.
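The compression-ratio arithmetic can be checked directly (10/6.145 rounds to 1.63):

```python
# Compression ratio = uncompressed size / compressed size.
raw_gb = 10.0           # size of the source data
compressed_gb = 6.145   # size reported after the major compaction
ratio = raw_gb / compressed_gb
print(round(ratio, 2))  # 1.63
```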
