---
title: LOAD DATA | TiDB SQL Statement Reference
summary: An overview of the usage of LOAD DATA for the TiDB database.
---
The `LOAD DATA` statement batch loads data into a TiDB table.

In TiDB v7.0.0, the `LOAD DATA` SQL statement supports the following features:

- Support for importing data from S3 and GCS
- A new parameter `FIELDS DEFINED NULL BY`
> **Warning:**
>
> The new parameter `FIELDS DEFINED NULL BY` and support for importing data from S3 and GCS in v7.0.0 are experimental. It is not recommended that you use them in production environments. These features might be changed or removed without prior notice. If you find a bug, you can report an issue on GitHub.
> **Note:**
>
> This feature is only available on TiDB Serverless clusters.
```ebnf
LoadDataStmt ::=
    'LOAD' 'DATA' LocalOpt 'INFILE' stringLit DuplicateOpt 'INTO' 'TABLE' TableName CharsetOpt Fields Lines IgnoreLines ColumnNameOrUserVarListOptWithBrackets LoadDataSetSpecOpt

LocalOpt ::= ('LOCAL')?

Fields ::=
    ('TERMINATED' 'BY' stringLit
    | ('OPTIONALLY')? 'ENCLOSED' 'BY' stringLit
    | 'ESCAPED' 'BY' stringLit
    | 'DEFINED' 'NULL' 'BY' stringLit ('OPTIONALLY' 'ENCLOSED')?)?
```
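As a minimal sketch of this grammar, the following statement exercises the required clauses. The table name `t1` and the file path are placeholders, not part of the reference:

```sql
-- Minimal LOAD DATA statement following the grammar above.
-- '/tmp/data.csv' and table t1 are placeholder names.
LOAD DATA LOCAL INFILE '/tmp/data.csv'
INTO TABLE t1
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
```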
You can use `LOCAL` to specify data files on the client to be imported, where the file parameter must be the file system path on the client.

If you do not specify `LOCAL`, the file parameter must be a valid S3 or GCS path, as detailed in external storage.
When the data files are stored on S3 or GCS, you can import individual files or use the wildcard character `*` to match multiple files to be imported. Note that wildcards do not recursively process files in subdirectories. The following are some examples:

- Import a single file: `s3://<bucket-name>/path/to/data/foo.csv`
- Import all files in the specified path: `s3://<bucket-name>/path/to/data/*`
- Import all files ending with `.csv` under the specified path: `s3://<bucket-name>/path/to/data/*.csv`
- Import all files prefixed with `foo` under the specified path: `s3://<bucket-name>/path/to/data/foo*`
- Import all files prefixed with `foo` and ending with `.csv` under the specified path: `s3://<bucket-name>/path/to/data/foo*.csv`
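For example, a statement importing every `.csv` file under a prefix might look like the following sketch. The bucket name, path, and table `t1` are placeholders, and S3 authentication options are omitted for brevity:

```sql
-- Import all CSV files under the given S3 prefix into t1.
-- <bucket-name>, the path, and t1 are placeholders.
LOAD DATA INFILE 's3://<bucket-name>/path/to/data/*.csv'
INTO TABLE t1
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';
```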
You can use the `Fields` and `Lines` parameters to specify how to handle the data format.

- `FIELDS TERMINATED BY`: specifies the data delimiter.
- `FIELDS ENCLOSED BY`: specifies the enclosing character of the data.
- `LINES TERMINATED BY`: specifies the line terminator, if you want to end a line with a certain character.

You can use `DEFINED NULL BY` to specify how NULL values are represented in the data file.
- Consistent with MySQL behavior, if `ESCAPED BY` is not null, for example, if the default value `\` is used, then `\N` is considered a NULL value.
- If you use `DEFINED NULL BY`, such as `DEFINED NULL BY 'my-null'`, `my-null` is considered a NULL value.
- If you use `DEFINED NULL BY ... OPTIONALLY ENCLOSED`, such as `DEFINED NULL BY 'my-null' OPTIONALLY ENCLOSED`, `my-null` and `"my-null"` (assuming `ENCLOSED BY '"'`) are considered NULL values.
- If you do not use `DEFINED NULL BY` or `DEFINED NULL BY ... OPTIONALLY ENCLOSED`, but use `ENCLOSED BY`, such as `ENCLOSED BY '"'`, then `NULL` is considered a NULL value. This behavior is consistent with MySQL.
- In other cases, it is not considered a NULL value.
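Putting these options together, a clause matching the third rule above might be written as follows. The file path, table `t1`, and the `my-null` marker are illustrative placeholders:

```sql
-- Treat both my-null and "my-null" in the input as NULL,
-- per the DEFINED NULL BY ... OPTIONALLY ENCLOSED rule above.
LOAD DATA INFILE 's3://<bucket-name>/path/to/data/foo.csv'
INTO TABLE t1
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
       DEFINED NULL BY 'my-null' OPTIONALLY ENCLOSED
LINES TERMINATED BY '\n';
```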
Take the following data format as an example:

```
"bob","20","street 1"\r\n
"alice","33","street 1"\r\n
```

If you want to extract `bob`, `20`, and `street 1`, specify the field delimiter as `','` and the enclosing character as `'\"'`:

```sql
FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\r\n'
```
If you do not specify the preceding parameters, the imported data is processed in the following way by default:

```sql
FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\'
LINES TERMINATED BY '\n' STARTING BY ''
```
You can ignore the first `number` lines of a file by configuring the `IGNORE <number> LINES` parameter. For example, if you configure `IGNORE 1 LINES`, the first line of a file is ignored.
The following example imports data using `LOAD DATA`. Comma is specified as the field delimiter. The double quotation marks that enclose the data are ignored. The first line of the file is ignored.

If you see `ERROR 1148 (42000): the used command is not allowed with this TiDB version`, refer to ERROR 1148 (42000): the used command is not allowed with this TiDB version for troubleshooting.
```sql
LOAD DATA LOCAL INFILE '/mnt/evo970/data-sets/bikeshare-data/2017Q4-capitalbikeshare-tripdata.csv'
INTO TABLE trips
FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES
(duration, start_date, end_date, start_station_number, start_station, end_station_number, end_station, bike_number, member_type);
```

```
Query OK, 815264 rows affected (39.63 sec)
Records: 815264  Deleted: 0  Skipped: 0  Warnings: 0
```
`LOAD DATA` also supports using hexadecimal ASCII character expressions or binary ASCII character expressions as the parameters for `FIELDS ENCLOSED BY` and `FIELDS TERMINATED BY`. See the following example:
```sql
LOAD DATA LOCAL INFILE '/mnt/evo970/data-sets/bikeshare-data/2017Q4-capitalbikeshare-tripdata.csv'
INTO TABLE trips
FIELDS TERMINATED BY x'2c' ENCLOSED BY b'100010'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES
(duration, start_date, end_date, start_station_number, start_station, end_station_number, end_station, bike_number, member_type);
```
In the above example, `x'2c'` is the hexadecimal representation of the `,` character, and `b'100010'` is the binary representation of the `"` character.
The syntax of the `LOAD DATA` statement is compatible with that of MySQL, except for character set options, which are parsed but ignored. If you find any syntax compatibility difference, you can report it via an issue on GitHub.
> **Note:**
>
> - For versions earlier than TiDB v4.0.0, `LOAD DATA` commits every 20000 rows.
> - For versions from TiDB v4.0.0 to v6.6.0, TiDB commits all rows in one transaction by default.
> - After upgrading from TiDB v4.0.0 or earlier versions, `ERROR 8004 (HY000) at line 1: Transaction is too large, size: 100000058` might occur. The recommended way to resolve this error is to increase the `txn-total-size-limit` value in your `tidb.toml` file. If you are unable to increase this limit, you can also restore the behavior before the upgrade by setting `tidb_dml_batch_size` to `20000`. Note that starting from v7.0.0, `tidb_dml_batch_size` no longer takes effect on the `LOAD DATA` statement.
> - No matter how many rows are committed in a transaction, `LOAD DATA` is not rolled back by the `ROLLBACK` statement in an explicit transaction.
> - The `LOAD DATA` statement is always executed in optimistic transaction mode, regardless of the TiDB transaction mode configuration.
> **Note:**
>
> - For versions earlier than TiDB v4.0.0, `LOAD DATA` commits every 20000 rows.
> - For versions from TiDB v4.0.0 to v6.6.0, TiDB commits all rows in one transaction by default.
> - Starting from TiDB v7.0.0, the number of rows to be committed in a batch is controlled by the `WITH batch_size=<number>` parameter of the `LOAD DATA` statement, which defaults to 1000 rows per commit.
> - After upgrading from TiDB v4.0.0 or earlier versions, `ERROR 8004 (HY000) at line 1: Transaction is too large, size: 100000058` might occur. To resolve this error, you can restore the behavior before the upgrade by setting `tidb_dml_batch_size` to `20000`.
> - No matter how many rows are committed in a transaction, `LOAD DATA` is not rolled back by the `ROLLBACK` statement in an explicit transaction.
> - The `LOAD DATA` statement is always executed in optimistic transaction mode, regardless of the TiDB transaction mode configuration.
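As a sketch of the `WITH batch_size=<number>` parameter, the following statement raises the batch size from the default 1000 rows to 20000 rows per commit. The file path and table `t1` are placeholders, and this assumes the `WITH` clause is appended at the end of the statement:

```sql
-- Commit every 20000 rows instead of the default 1000.
-- '/tmp/data.csv' and t1 are placeholder names.
LOAD DATA LOCAL INFILE '/tmp/data.csv'
INTO TABLE t1
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
WITH batch_size=20000;
```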