IMPORT

Purpose

Use the IMPORT command to transfer data from external data sources into Exasol.

You can specify a table that the imported data should be inserted into. If no table is specified, the data is returned as a result set.

Prerequisites

  • In Exasol, you must have the system privileges IMPORT and INSERT.
  • In the source system, you must have privileges to read the table contents or the files.
  • When using a connection, you must either have the system privilege USE ANY CONNECTION, or the connection must be granted to the user or to one of the user's roles using the GRANT statement. For more information, see CREATE CONNECTION.
  • When using an error table, you must have the necessary privileges to write or insert data.

Syntax

import::=

IMPORT syntax diagram 1

IMPORT syntax diagram 2

import_columns::=

Import columns syntax diagram

dbms_src::=

Database source syntax diagram

Database source syntax diagram 2

file_src::=

File source syntax diagram 1

File source syntax diagram 2

connection_def::=

Connection definition syntax diagram

cloud_connection_def::=

Cloud connection syntax diagram

csv_cols::=

CSV columns syntax diagram

fbv_cols::=

FBV columns syntax diagram

file_opts::=

File options syntax diagram

script_src::=

Script source syntax diagram

error_clause::=

Error clause syntax diagram

reject_clause::=

Reject clause syntax diagram

error_dst::=

Error destination syntax diagram

Usage notes

  • Importing from a CSV file is the fastest way to import data into Exasol.

  • When importing from JDBC or CSV sources, decimal values are truncated if the target data type has less precision than the source data type.

  • Lines starting with # (hash) in the imported file are ignored. For more information about formatting rules for data records, see File Format and Details.

  • The IMPORT statement runs in parallel on multiple nodes, which means that row packets arrive in no particular order. Ordering is therefore not guaranteed when using IMPORT, even if the source query contains an ORDER BY clause.

  • Import statements can also be used within SELECT queries. For more information, see SELECT.

  • You can monitor the progress of the data transfer in the column ACTIVITY of the system table EXA_USER_SESSIONS by querying the table from a second connection.

  • For more information about ETL processes, see ETL in Exasol.

Explanation of elements and parameters in IMPORT

import_columns

Instead of importing data into a table by specifying a table name, you can specify a list of columns to perform a temporary import. The data imported into Exasol is then not persistent, but is returned as a result set.

To specify the output columns to be the same as the columns of an existing table in the Exasol database, use the LIKE clause.
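
For example, mirroring the result-set examples later in this article (the connection my_exasol and the table t are illustrative):

```sql
-- Explicit column list: the rows are returned as a result set, not persisted
IMPORT INTO (i INT, v VARCHAR(200)) FROM EXA
    AT my_exasol
    TABLE MY_SCHEMA.MY_TABLE;

-- LIKE clause: the output columns match those of the existing table t
IMPORT INTO (LIKE t) FROM EXA
    AT my_exasol
    TABLE MY_SCHEMA.MY_TABLE;
```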

dbms_src

Defines the database source whose connection data is specified in connection_def.

You can choose among an Exasol connection (EXA), a native connection to an Oracle database (ORA), or a JDBC connection to any database (JDBC).

Some JDBC drivers are included by default (visible in EXAoperation) and can be used within the connection string. You can additionally configure JDBC drivers in EXAoperation and choose them using the DRIVER option if the prefix is ambiguous.

Only the pre-installed JDBC drivers (marked gray in EXAoperation) are tested and officially supported. If you need help with other drivers, contact support.

The source data can either be a database table as identifier (for example, MY_SCHEMA.MY_TABLE) or a database statement as a string (for example, 'SELECT * FROM DUAL'). When using a database statement as a string, the expression is executed on the source database, for example, a SQL query or a procedure call.

When using the TABLE syntax (as opposed to STATEMENT), the table name identifier is treated like an Exasol table name. If the remote system expects case-sensitive syntax, you must delimit the table names with double quotation marks.
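
For example (the connection and identifiers are illustrative):

```sql
-- Unquoted identifier: normalized like an Exasol identifier
IMPORT INTO table_1 FROM JDBC
    AT my_jdbc_conn
    TABLE my_schema.my_table;

-- Quoted identifiers: case is preserved for a case-sensitive source system
IMPORT INTO table_1 FROM JDBC
    AT my_jdbc_conn
    TABLE "MySchema"."MyTable";
```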

Parallel EXA to EXA connections

To achieve optimal parallelization: 

  • If you import from an Exasol database using FROM EXA, loading a table directly (TABLE option) is always parallelized. For Exasol sources, loading tables directly is therefore significantly faster than using the STATEMENT option.

  • If you import data from Oracle sources using FROM ORA with the TABLE option, partitioned tables will be loaded in parallel.

  • Specifying multiple STATEMENT clauses will allow them to be executed in parallel, which may improve performance compared to loading all data as a single STATEMENT. You can only specify multiple statements for JDBC and Oracle sources.
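
For example, a hypothetical per-year split of a large source table into two statements that can be executed in parallel (connection, table, and column names are illustrative):

```sql
IMPORT INTO table_orders FROM JDBC
    AT my_jdbc_conn
    STATEMENT 'SELECT * FROM orders WHERE order_year = 2022'
    STATEMENT 'SELECT * FROM orders WHERE order_year = 2023';
```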

file_src

Specifies the data file source.

  • The source file can be in either CSV or FBV format, and must comply with the format specifications described in CSV Data Format or Fixblock Data Format (FBV).

  • File names may only consist of ASCII characters. Using a BOM (byte order mark) is not supported.

  • Compressed files are recognized by their file extension. Supported extensions are .zip, .gz (gzip) and .bz2 (bzip2).

  • When System.in is specified as filename, data is read from the standard input stream.
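
For example, a gzip-compressed file is recognized by its extension and decompressed automatically during the import (the server address and file name are illustrative):

```sql
IMPORT INTO table_1 FROM CSV
    AT 'http://192.168.1.1:8080/'
    FILE 'tab1_part1.csv.gz';
```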

Remote files

FTP, FTPS, SFTP, HTTP, and HTTPS servers are supported if connection data is defined using connection_def. The following considerations apply when using a remote file source:

  • Certificates are not verified for encrypted connections.
  • If you specify a folder, the result contains one row for each file in the given folder, with one column containing the filename.
  • If the URL begins with ftps://, implicit encryption is used.
  • If the URL begins with ftp://, Exasol encrypts the user name and password (explicit encryption) if the server supports this. If the server requires encryption, the whole data transfer is done encrypted.
  • For FTP and FTPS servers, only passive mode is supported.
  • For HTTP and HTTPS servers, only basic authentication is supported.
  • For HTTP and HTTPS connections, HTTP query parameters can be specified by appending them to the file name.

    For example: FILE 'file.csv?op=OPEN&user.name=user'

Local files

You can import local files from your client system. When importing local files, the JDBC driver opens an internal connection to the cluster and provides an HTTP or HTTPS server, depending on whether the SECURE option is specified. If the SECURE keyword is used, the server uses the HTTPS protocol and the data is transferred encrypted. If the SECURE keyword is omitted, the server uses HTTP and the connection is unencrypted.

The default option for the internal connection is determined by the main connection:

  • If the main connection is encrypted, the local import operation is also encrypted by default.

  • If the main connection is not encrypted, the local import operation is also not encrypted by default. In this case, you can override the driver behavior by using the SECURE keyword to enforce a secure connection.

When using the SECURE option, data transfer performance will be reduced.

This functionality is only supported when using EXAplus and the JDBC driver. It cannot be used in prepared statements or within database scripts. If you want to process a local file by an explicit program, you can use the tool EXAjload, which is included in the JDBC driver package. For more information about how to use EXAjload, execute the program without parameters.
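
For example, the following forces an encrypted transfer for a local import even if the main connection is unencrypted (the file path is illustrative):

```sql
IMPORT INTO table_1
    FROM LOCAL SECURE CSV FILE '~/my_table.csv'
    COLUMN SEPARATOR = ';';
```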

Cloud storage

You can import files from cloud storage services such as Amazon S3, Azure Blob Storage, and Google Cloud Storage. The connection to the cloud storage service is then defined in cloud_connection_def. For more information, see also Load Data from CSV/FBV Files.

connection_def

Defines the connection to an external database or file server. The connection can be specified in a connection string, which may include authentication information.

For regular ETL jobs you can also make use of connections, where connection information such as username and password can easily be encapsulated. For more information, see CREATE CONNECTION.

The declarations of USER and IDENTIFIED BY within the command are optional. If they are omitted, the authentication information in the connection string or the connection object is used.
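
For example (connection name and credentials are illustrative), credentials can either be stored in the connection object or supplied in the statement itself:

```sql
-- Credentials are taken from the connection object
CREATE CONNECTION my_ftp
    TO 'ftp://192.168.1.2/' USER 'my_user' IDENTIFIED BY 'my_secret';

IMPORT INTO table_1 FROM CSV
    AT my_ftp
    FILE 'tab1.csv';

-- Credentials specified in the statement are used instead of those
-- stored in the connection object
IMPORT INTO table_1 FROM CSV
    AT my_ftp
    USER 'other_user' IDENTIFIED BY 'other_secret'
    FILE 'tab1.csv';
```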

Kerberos SSO

For JDBC connections, it is possible to use Kerberos authentication by specifying specific data in the IDENTIFIED BY field. This data consists of a key that indicates that Kerberos authentication should be used (ExaAuthType=Kerberos), a base64 encoded configuration file, and a base64 encoded keytab file containing the credentials for the principal.

Example
IMPORT INTO table1 
    FROM JDBC AT '<JDBC_URL>' 
    USER '<kerberos_principal>' 
    IDENTIFIED BY 'ExaAuthType=Kerberos;<base64_krb_conf>;<base64_keytab>' TABLE table2; 

cloud_connection_def

Defines a connection to a cloud storage service.

The connection can be specified in a connection string, which may include authentication information. The format of the connection string can be different depending on the cloud service.

For more information, see Load Data from CSV/FBV Files.

Examples
CLOUD AZURE BLOBSTORAGE 'DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net' 
    USER '<accountName>' IDENTIFIED BY '<key>';

csv_cols

Defines which columns are read from a CSV file and how they are interpreted. For more information, see CSV Data Format.

Option Description

col_nr

Defines the column number. The first column is 1.

You can also define a column range, for example, 5..8 for columns 5, 6, 7, 8. Column numbers must be in ascending order.

FORMAT

Optional format definition for numbers or datetime values. The default is the session format.

For more information, see Numeric format models and Date/time format models.

Example

In this example, four columns are loaded from the CSV file, with the fourth column using a specified date format.

(1..3,4 FORMAT='DD-MM-YYYY')

fbv_cols

Defines which columns are read from an FBV file, and how they are interpreted. For more information, see Fixblock Data Format.

The following elements can be specified for the columns of an FBV file:

Element Description
SIZE

Defines the number of bytes of the column. The size must always be specified.

START Start byte of the column. Counting begins at zero. The START values must be in ascending order.
FORMAT

Optional format definition for numbers or datetime values. The default is the session format.

For more information, see Numeric format models and Date/time format models.

ALIGN

Alignment of the column.

Allowed values are LEFT or RIGHT. The default value is LEFT.

PADDING

Padding characters for columns.

By default, a space character is used. You can also specify an ASCII character, either in plain text ('+'), as a hexadecimal value ('0x09'), or as an abbreviation ('NUL', 'TAB', 'LF', 'CR', 'ESC').

Example

In this example, four columns are imported from an FBV file. The first column is padded with 'x' characters, there is a gap after the first 12 bytes, and a date format is specified for the fourth column.

(SIZE=8 PADDING='x' ALIGN=RIGHT,
 SIZE=4,
 START=17 SIZE=8,
 SIZE=32 FORMAT='DD-MM-YYYY')

file_opts

Option Description
COLUMN DELIMITER

Defines the field delimiter for CSV files.

By default, the double quote character " is used. You can specify any string as a delimiter, either as plain text ('"'), as a hexadecimal value ('0x09'), or as an ASCII abbreviation ('NUL', 'TAB', 'LF', 'CR', 'ESC').

A plain text value is limited to 10 characters, which are automatically converted to the encoding set for the file with the ENCODING option.

A hexadecimal value is limited to 10 bytes (not characters) and will not be converted.

If you do not want to use any field delimiter, define an empty string ('').

COLUMN SEPARATOR

Defines the field separator for CSV files.

By default, comma is used. You can specify any string as a separator, either as plain text (','), as a hexadecimal value ('0x09'), or as an ASCII abbreviation ('NUL', 'TAB', 'LF', 'CR', 'ESC').

A plain text value is limited to 10 characters, which are automatically converted to the encoding set for the file with the ENCODING option.

A hexadecimal value is limited to 10 bytes (not characters) and will not be converted.

ENCODING

Encoding of the CSV or FBV files. Default is UTF-8.

For information about supported encodings, see Supported Encodings for ETL Processes.

NULL

Additional representation of NULL values. This option is only supported for CSV files and only applies to fields not enclosed in field delimiters. Regardless of this option, an empty string in the input data always represents a NULL value.

ROW SEPARATOR

Defines the line break character. 

'LF' (default) corresponds to the ASCII character 0x0a (Unix/Linux).

'CR' corresponds to the ASCII character 0x0d (macOS).

'CRLF' corresponds to the ASCII characters 0x0d and 0x0a (Windows).

'NONE' means no line break. This is only allowed in FBV files.

SKIP

Number of rows to omit at the start of the import. This can be useful if the data files contain header information. SKIP counts rows by line breaks (defined by ROW SEPARATOR), even if they occur inside data.

TRIM, LTRIM, RTRIM

Defines whether spaces are removed at the border of CSV columns.

LTRIM: from the left, RTRIM: from the right, TRIM: from both sides.

By default, no spaces are removed.
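
A sketch combining several of the options above in a single statement (the server address and option values are illustrative):

```sql
IMPORT INTO table_1 FROM CSV
    AT 'http://192.168.1.1:8080/'
    FILE 'data.csv'
    ENCODING = 'ISO-8859-1'
    ROW SEPARATOR = 'CRLF'
    COLUMN SEPARATOR = ';'
    COLUMN DELIMITER = ''     -- no field delimiter
    NULL = 'NA'               -- 'NA' in undelimited fields is read as NULL
    SKIP = 1                  -- skip a header row
    TRIM;                     -- remove spaces on both sides of each field
```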

Boolean values

The following boolean value pairs are automatically accepted when inserted into a boolean column:

'1/0', 'TRUE/FALSE', 'true/false', 'True/False', 'T/F', 't/f', 'y/n', 'Y/N', 'yes/no', 'Yes/No', 'YES/NO'

For more information, see EXPORT.

script_src

Specifies the UDF script to be used for a user-defined import. Optionally, you can define a connection or properties that will be forwarded to the script. The specified script will generate a SELECT statement internally that does the actual import. The script implements a special callback function that receives the import specification (for example, parameters and connection information) and returns a SELECT statement. For more information, see User-defined IMPORT/EXPORT Using UDFs.

connection_def

Optional connection definition for being able to encapsulate connection information such as password. For more information, see connection_def.

WITH parameter=value...

Optional parameters to be passed to the script. Each script can define the mandatory and optional parameters it supports. Parameters are simple key-value pairs, with the value being a string.

Example
... WITH PARAM_1='val1' PARAM_2 = 'val2';

error_clause

Defines how many invalid rows of the source are allowed.

For example, with REJECT LIMIT 5 the statement completes successfully if there are at most five invalid rows, and throws an exception when a sixth invalid row is encountered. Which row triggers the exception is non-deterministic and may vary.

You can write the faulty rows into a CSV file (not FBV) or to a local table within Exasol to process or analyze them later.

An optional expression can be specified for identification purposes if you use the same error table or file multiple times. You can also use CURRENT_TIMESTAMP for this.

Table

For every faulty row, the following columns are created: row number, error message, [expression], and a truncated flag, followed by the actual data. The truncated flag indicates whether the data was truncated to the maximum string length.

CSV file

For every faulty row, a comment row is created with the row number, error message, [expression], followed by the actual data row.

Constraint violation errors will throw an exception even if the REJECT LIMIT has not been reached.
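
For example, a sketch (assuming the CSV error destination described by error_dst; addresses and file names are illustrative) that writes faulty rows to a CSV file on the same file server, tags them with the current timestamp, and tolerates up to 10 invalid rows:

```sql
IMPORT INTO table_1 FROM CSV
    AT 'http://192.168.1.1:8080/' FILE 'tab1.csv'
    ERRORS INTO CSV AT 'http://192.168.1.1:8080/' FILE 'errors.csv'
    (CURRENT_TIMESTAMP)
    REJECT LIMIT 10;
```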

Examples

IMPORT INTO table_1 FROM CSV
    AT 'http://192.168.1.1:8080/' USER 'my_user' IDENTIFIED BY 'my_secret'
    FILE 'tab1_part1.csv' FILE 'tab1_part2.csv'
    COLUMN SEPARATOR = ';'
    SKIP = 5;
CREATE CONNECTION my_fileserver
    TO 'ftp://192.168.1.2/' USER 'my_user' IDENTIFIED BY 'my_secret';

IMPORT INTO table_2 FROM FBV
    AT my_fileserver
    FILE 'tab2_part1.fbv'
    (SIZE=8 PADDING='+' ALIGN=RIGHT,
    SIZE=4,
    SIZE=8,
    SIZE=32 FORMAT='DD-MM-YYYY' );
CREATE CONNECTION my_oracle
    TO '(DESCRIPTION =
            (ADDRESS_LIST = 
                (ADDRESS =
                    (PROTOCOL = TCP)
                    (HOST = 192.168.0.25)
                    (PORT = 1521)
                )
            )
            (CONNECT_DATA = (SERVICE_NAME = orautf8))
        )';
        
IMPORT INTO table_3 (col1, col2, col4) FROM ORA
    AT my_oracle
    USER 'my_user' IDENTIFIED BY 'my_secret'
    STATEMENT ' SELECT * FROM orders WHERE order_state=''OK'' '
    ERRORS INTO error_table (CURRENT_TIMESTAMP) REJECT LIMIT 10;
IMPORT INTO table_4 FROM JDBC DRIVER='MSSQL'
    AT 'jdbc:sqlserver://dbserver;databaseName=testdb'
    USER 'my_user' IDENTIFIED BY 'my_secret'
    STATEMENT ' SELECT * FROM orders WHERE order_state=''OK'' ';
IMPORT INTO table_5 FROM CSV
    AT 'http://HadoopNode:50070/webhdfs/v1/tmp'
    FILE 'file.csv?op=OPEN&user.name=user';
IMPORT INTO table_6 FROM EXA
    AT my_exasol
    TABLE MY_SCHEMA.MY_TABLE;
IMPORT INTO table_7 FROM SCRIPT etl.import_hcat_table
    WITH HCAT_DB = 'default'
         HCAT_TABLE = 'my_hcat_table'
         HCAT_ADDRESS = 'hcatalog-server:50111'
         HDFS_USER = 'hdfs';
IMPORT INTO table_8  
    FROM LOCAL CSV FILE '~/my_table.csv'
    COLUMN SEPARATOR = ';' SKIP = 5;
-- getting a result set using IMPORT (which can also be used as a sub-select):
SELECT * FROM (
IMPORT INTO (i INT, v VARCHAR(200)) FROM EXA
    AT my_exasol
    TABLE MY_SCHEMA.MY_TABLE
);
-- result set IMPORT without INTO clause:
IMPORT FROM JDBC
    AT my_jdbc_conn
    STATEMENT ' SELECT * FROM orders WHERE order_state=''OK'' ';
-- result set IMPORT with INTO and LIKE clause:
IMPORT INTO (LIKE CAT) FROM JDBC
    AT my_exa_conn
    STATEMENT ' SELECT OBJECT_NAME, OBJECT_TYPE FROM EXA_USER_OBJECTS WHERE OBJECT_TYPE IN (''TABLE'', ''VIEW'') ';

Import from an Amazon S3 bucket:

IMPORT INTO table_1 FROM CSV
    AT 'https://<bucketname>.s3.amazonaws.com'
    USER '<AccessKeyID>' IDENTIFIED BY '<SecretAccessKey>'
    FILE 'file.csv';

Import from Amazon S3 bucket using a fully qualified bucket URL

A fully qualified S3 URL in the format <bucket-name>.s3.<region-code>.amazonaws.com becomes available immediately after you create the bucket. A URL in the legacy global endpoint format <bucket-name>.s3.amazonaws.com may need up to 24 hours to become available.

IMPORT INTO table_1 FROM CSV
       AT 'https://<bucketname>.s3.<region-code>.amazonaws.com/'
       USER '<AccessKeyID>' IDENTIFIED BY '<SecretAccessKey>'
       FILE 'file.csv';

Import from Azure Blob Storage

IMPORT INTO table1 FROM CSV 
    AT CLOUD AZURE BLOBSTORAGE 'DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net' 
    USER '<AccountName>' IDENTIFIED BY '<AccountKey>' 
    FILE '<container>/<blob>';

Import from Google Cloud Storage

IMPORT INTO table1 FROM CSV
    AT 'https://testbucket.storage.googleapis.com' 
    USER '<Access ID>' IDENTIFIED BY '<secret>'
    FILE 'my_path/file.csv';