Python 3

This section provides details about using Python 3. For additional information, see the official Python documentation.

run() and cleanup() Methods

The method run() is called for each input tuple (SCALAR) or each group (SET). Its parameter is an execution context object, which provides access to the data and, in the case of a SET script, to the iterator.

To initialize expensive steps (such as opening external connections), you can write code outside the run() method, since this code is executed once at the beginning by each virtual machine. For deinitialization purposes, the method cleanup() is available. It is called once for each virtual machine, at the end of the execution.
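This lifecycle can be sketched outside the database; the MockContext class and the driving loop below only imitate what the Exasol engine does and are not part of a real script:

```python
# Illustrative sketch of the UDF lifecycle; in Exasol, the engine
# (not your code) constructs the context and drives these calls.

# Module-level code runs once per virtual machine: put expensive
# initialization (e.g. opening external connections) here.
connection = {"state": "open"}   # stand-in for an expensive resource

def run(ctx):
    # Called once per input tuple (SCALAR) or once per group (SET).
    return ctx.my_input * 2      # input columns are attributes of ctx

def cleanup():
    # Called once per virtual machine at the end of the execution.
    connection["state"] = "closed"

# --- local simulation of what the engine does (illustration only) ---
class MockContext:
    def __init__(self, my_input):
        self.my_input = my_input

results = [run(MockContext(v)) for v in (1, 2, 3)]
cleanup()
print(results)                   # [2, 4, 6]
print(connection["state"])       # closed
```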

Parameters

The internal Python data types and the database SQL types are not identical. Therefore, casts must be done for the input and output data:

SQL Data Type Python 3 Data Type
DECIMAL(p,0) int
DECIMAL(p,s) decimal.Decimal
DOUBLE float
DATE datetime.date
TIMESTAMP datetime.datetime
BOOLEAN bool
VARCHAR and CHAR str

The value None is the equivalent of the SQL NULL.

For better performance, you should prefer DOUBLE to DECIMAL for the parameter types.
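The practical difference can be seen in plain Python: decimal.Decimal (the type used for DECIMAL(p,s)) is exact but slower, while float (the type used for DOUBLE) is fast binary floating point:

```python
from decimal import Decimal
import datetime

# SQL DECIMAL(p,s) values arrive as decimal.Decimal (exact, but slower);
# SQL DOUBLE values arrive as float; DATE/TIMESTAMP map to datetime types.
d = Decimal("0.1") + Decimal("0.2")     # exact decimal arithmetic
f = 0.1 + 0.2                           # fast binary floating point

print(d)                                # 0.3
print(f == 0.3)                         # False (binary rounding)
print(type(datetime.date(2024, 1, 1)))  # <class 'datetime.date'>
```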

The input parameters can be addressed by their names, for example, ctx.my_input. ctx refers to the name of the context parameter that is passed to the run() method.

You can also use a dynamic number of parameters via the notation (...), for example, CREATE PYTHON3 SCALAR SCRIPT my_script (...). The parameters can then be accessed through an index (ctx[0] for the first parameter). The number of parameters and their data types (both determined during the call of the script) are part of the metadata.
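As a sketch, index-based access behaves like indexing a sequence; the plain tuple below merely stands in for the engine-provided context:

```python
# With CREATE PYTHON3 SCALAR SCRIPT my_script (...), the parameters are
# accessed by position: ctx[0], ctx[1], ... In a real script the number
# of parameters comes from the metadata (exa.meta.input_column_count);
# here len() is used because the stand-in is a plain tuple.
def run(ctx):
    # sum all parameters passed in this call
    return sum(ctx[i] for i in range(len(ctx)))

# local illustration: a tuple already supports ctx[i]
print(run((1, 2, 3)))   # 6
```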

Metadata

You can access the following metadata through global variables:

Metadata Description

exa.meta.database_name

Database name

exa.meta.database_version

Database version

exa.meta.script_language

Name and version of the script language

exa.meta.script_name

Name of the script

exa.meta.script_schema

Schema in which the script is stored

exa.meta.current_schema

Schema which is currently opened

exa.meta.script_code

Code of the script

exa.meta.session_id

Session ID

exa.meta.statement_id

Statement ID within the session

exa.meta.current_user

Current user

exa.meta.scope_user

Scope user (current_user or the owner of a view if the udf script is called within a view)

exa.meta.node_count

Number of cluster nodes

exa.meta.node_id

Local node ID starting with 0

exa.meta.vm_id

Unique ID for the local machine (the IDs of the virtual machines have no relation to each other)

exa.meta.input_type

Type of the input data (SCALAR or SET)

exa.meta.input_column_count

Number of input columns

exa.meta.input_columns[]

Array including the following information: {name, type, sql_type, precision, scale, length}

exa.meta.output_type

Type of the output data (RETURNS or EMITS)

exa.meta.output_column_count

Number of output columns

exa.meta.output_columns[]

Array including the following information: {name, type, sql_type, precision, scale, length}
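For illustration, the access pattern looks like this; the SimpleNamespace stand-in below only mimics the injected exa object, with invented values:

```python
from types import SimpleNamespace

# Stand-in for the global `exa` object that Exasol injects into the
# script (illustration only; the real values come from the database).
exa = SimpleNamespace(meta=SimpleNamespace(
    script_name="MY_SCRIPT",
    input_type="SET",
    input_columns=[SimpleNamespace(name="x", sql_type="DOUBLE")],
))

def describe():
    # build a readable signature from the script metadata
    cols = ", ".join(c.name + " " + c.sql_type for c in exa.meta.input_columns)
    return f"{exa.meta.script_name} ({exa.meta.input_type}): {cols}"

print(describe())   # MY_SCRIPT (SET): x DOUBLE
```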

Data Iterator

For scripts that process multiple input tuples per call (keyword SET), you can iterate through the data using the method next(), which is accessible through the context. Initially, the iterator points to the first input row. To iterate, you can use a while True loop that is exited with break as soon as ctx.next() returns False.

If the input data is empty, the run() method is not called and, as with aggregate functions, the NULL value is returned as the result (for example, SELECT MAX(x) FROM t WHERE false).

Additionally, there is a method reset() that resets the iterator to the first input element. Therefore, you can do multiple iterations through the data, if this is required for your algorithm.

The method size() returns the number of input values.

emit()

You can return multiple output tuples per call (keyword EMITS) using the method emit(). The method expects as many parameters as output columns were defined. In the case of dynamic output parameters, it is handy in Python to use a list object, which can be unpacked using *, for example, ctx.emit(*currentRow).
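The iterator methods and emit() can be sketched together with a stand-in SET context (MockSetContext is purely illustrative; the real context is supplied by Exasol):

```python
# Stand-in SET context implementing next()/reset()/size() and emit(),
# mirroring the interface described above (illustration only).
class MockSetContext:
    def __init__(self, rows):
        self._rows, self._pos, self.emitted = rows, 0, []
        self._load()

    def _load(self):
        # expose the current row's column as an attribute, like ctx.val
        self.val = self._rows[self._pos][0] if self._pos < len(self._rows) else None

    def next(self):
        self._pos += 1
        self._load()
        return self._pos < len(self._rows)

    def reset(self):
        self._pos = 0
        self._load()

    def size(self):
        return len(self._rows)

    def emit(self, *cols):
        self.emitted.append(cols)

def run(ctx):
    # typical SET/EMITS loop: process the current row, then advance
    while True:
        row = [ctx.val, ctx.val * 2]
        ctx.emit(*row)            # a list unpacked with * suits dynamic outputs
        if not ctx.next():
            break

ctx = MockSetContext([(1,), (2,), (3,)])
run(ctx)
print(ctx.size())     # 3
print(ctx.emitted)    # [(1, 2), (2, 4), (3, 6)]
```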

Import of Other Scripts

Other scripts can be imported through the method exa.import_script(). The return value of this method must be assigned to a variable, representing the imported module.

Syntax
<alias> = exa.import_script('<schema>.<script>')
Example
--/
CREATE OR REPLACE PYTHON3 SCALAR SCRIPT LIB.MYLIB() RETURNS INT AS
def helloWorld():
  return "Hello Python3 World!"
/

--/
CREATE OR REPLACE PYTHON3 SCALAR SCRIPT TEST.MYHELLOWORLD() RETURNS VARCHAR(2000) AS
l = exa.import_script('LIB.MYLIB')
def run(ctx):
  return l.helloWorld()
/

select TEST.MYHELLOWORLD();

Accessing Connection Definitions

The data that was specified when defining connections with CREATE CONNECTION is available in Python UDF scripts through the method exa.get_connection("<connection_name>"). The result is a Python object with the following fields:

Fields Description

type

The type of the connection definition. ConnectionType is an enumeration which currently only contains the entry PASSWORD

address

The part of the connection definition that followed the TO keyword in the CREATE CONNECTION command

user

The part of the connection definition that followed the USER keyword in the CREATE CONNECTION command

password

The part of the connection definition that followed the IDENTIFIED BY keyword in the CREATE CONNECTION command
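As an illustration of the access pattern, the stand-in get_connection() below returns an object with the four fields; the values are invented, and in a real script the function is provided on the injected exa object:

```python
from types import SimpleNamespace

# Stand-in for exa.get_connection(); in the database this returns the
# definition created with CREATE CONNECTION (invented values below).
def get_connection(name):
    return SimpleNamespace(
        type="PASSWORD",                # currently the only connection type
        address="ftp://myserver/a/b",   # the part after the TO keyword
        user="my_user",                 # the part after the USER keyword
        password="secret",              # the part after IDENTIFIED BY
    )

conn = get_connection("MY_CONNECTION")
print(conn.type, conn.address)   # PASSWORD ftp://myserver/a/b
```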

Auxiliary Libraries

The following libraries are provided in addition to the Python standard library:

Libraries Description

lxml

XML processing. For more details, see http://pypi.python.org/pypi/lxml.

NumPy

Numeric calculations. For details, see http://www.numpy.org.

PyTables

Hierarchical database package. For details, see http://www.pytables.org.
For this library, the required build package HDF5 is available at http://www.h5py.org.

pytz

Time zone functions. For details, see http://pytz.sourceforge.net.

redis

Interface for Redis. For details, see http://pypi.python.org/pypi/redis/.

scikit-learn

Machine Learning. For details, see http://scikit-learn.org.

SciPy

Scientific tools. For details, see http://www.scipy.org. For this library, the required build tool atlas is available at http://pypi.python.org/pypi/atlas.

ujson

UltraJSON is an ultra fast JSON encoder and decoder written in pure C with bindings for Python 2.5+ and 3. For more details, see https://pypi.org/project/ujson/.

pyexasol

Official Python driver for Exasol. For more details, see https://github.com/exasol/pyexasol.

requests

Standard for making HTTP requests in Python. For more details, see https://pypi.org/project/requests/.

pycurl

Can be used to fetch objects identified by a URL from a Python program. For more details, see https://pypi.org/project/pycurl/.

boto

Boto is the (deprecated) Amazon Web Services (AWS) SDK for Python. For more details, see https://pypi.org/project/boto/.

ldap

Python-ldap provides an object-oriented API to access LDAP directory servers from Python programs. For more details, see https://www.python-ldap.org/en/latest/.

roman

Converts an integer to a Roman numeral. For more details, see https://pypi.org/project/roman/.

OpenSSL

A Python wrapper module around the OpenSSL library. For more details, see https://www.pyopenssl.org/en/stable/.

smbc

Binding for the Samba client library libsmbclient. For more details, see https://pypi.org/project/pysmbc/.

leveldb

Binding for the key-value database LevelDB. For more details, see https://code.google.com/archive/p/py-leveldb/.

pyodbc

Database API module for ODBC. For more details, see https://github.com/mkleehammer/pyodbc/wiki.

pandas

Data structures and data-analysis tools for working with structured and time-series data. For more details, see https://pandas.pydata.org.

pycparser

Parser for the C language. For more details, see https://github.com/eliben/pycparser.

cffi

C Foreign Function Interface to interact with C code from Python. For more details, see https://cffi.readthedocs.io/en/latest/.

protobuf

Google's platform-neutral mechanism for serializing structured data. For more details, see https://developers.google.com/protocol-buffers/.

pykickstart

Library for reading and writing kickstart files. For more details, see https://pykickstart.readthedocs.io/en/latest/.

martian

Library for embedding configuration information in Python code. For more details, see https://pypi.org/project/martian/.

Dynamic Output Parameters Callback

If the UDF script is defined with dynamic output parameters and the output parameters cannot be determined (by specifying EMITS in the query or by INSERT INTO SELECT), the database calls the method default_output_columns(), which you can implement. The expected return value is a string with the names and types of the output columns, for example, "a int, b varchar(100)". For more details about when this method is called, see Dynamic Input and Output Parameters.

You can access the metadata through exa.meta within this method, for example, to find out the number and types of the input columns.

The method will be executed only once on a single node.
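A minimal sketch of such a callback, assuming the output columns should simply mirror the input columns; the exa.meta stand-in below uses invented values, while in a real script exa is injected by the database:

```python
from types import SimpleNamespace

# Stand-in exa.meta with invented values (illustration only).
exa = SimpleNamespace(meta=SimpleNamespace(
    input_column_count=2,
    input_columns=[SimpleNamespace(name="a", sql_type="INT"),
                   SimpleNamespace(name="b", sql_type="VARCHAR(100)")],
))

def default_output_columns():
    # mirror the input columns, producing e.g. "a INT, b VARCHAR(100)"
    return ", ".join(c.name + " " + c.sql_type for c in exa.meta.input_columns)

print(default_output_columns())   # a INT, b VARCHAR(100)
```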

User Defined Import Callback Function

To support a user-defined import, you can implement the callback method generate_sql_for_import_spec(import_spec). For syntax, see Dynamic Input and Output Parameters and IMPORT statement. The parameter import_spec contains all information about the executed IMPORT FROM SCRIPT statement. The function has to generate and return a SELECT statement that retrieves the data to be imported. import_spec has the following fields:

Fields Description

parameters[]

Parameters specified in the IMPORT statement. For example, parameters['FOO'] returns either the value of parameter FOO, or None, if FOO was not specified.

is_subselect

This is true, if the IMPORT is used inside a SELECT statement and not inside an IMPORT INTO table statement.

subselect_column_names[]

If is_subselect is true and the user specified the target column names and types, this returns the names of all specified columns.

subselect_column_types[]

If is_subselect is true and the user specified the target column names and types, this returns the types of all specified columns. The types are returned in SQL format ("VARCHAR(100)").

connection_name

This returns the name of the connection, if it was specified. Otherwise it returns None. The UDF script can then obtain the connection information through exa.get_connection(name).

connection

This is only defined if the user provided connection information. It returns an object similar to the return value of exa.get_connection(name). Returns None if no connection information is specified.

The password is transferred in plaintext and could be visible in the logs. Therefore, it is recommended to create a CONNECTION beforehand and to specify only the connection name (which can be obtained from the connection_name field). The actual connection information can then be obtained through exa.get_connection(name).
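A minimal sketch of such a callback, assuming a parameter URL and a hypothetical helper script MY_SCHEMA.FETCH_DATA; the ImportSpec class below only models the object the database passes in:

```python
# Hedged sketch of an import callback; ImportSpec models the object
# passed by the database, and MY_SCHEMA.FETCH_DATA is a hypothetical
# helper UDF that would emit the rows to import.
class ImportSpec:
    def __init__(self, parameters, connection_name):
        self.parameters = parameters            # dict of IMPORT parameters
        self.connection_name = connection_name  # None if not specified

def generate_sql_for_import_spec(import_spec):
    url = import_spec.parameters.get('URL')     # None if URL was not given
    conn = import_spec.connection_name
    # return a SELECT statement that produces the rows to be imported
    return "SELECT MY_SCHEMA.FETCH_DATA('" + url + "', '" + conn + "')"

spec = ImportSpec({'URL': 'http://example.com/data.csv'}, 'MY_CONN')
sql = generate_sql_for_import_spec(spec)
print(sql)   # SELECT MY_SCHEMA.FETCH_DATA('http://example.com/data.csv', 'MY_CONN')
```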

User Defined Export Callback Function

To support a user-defined export, you can implement the callback method generate_sql_for_export_spec(export_spec). For syntax, see Dynamic Input and Output Parameters and EXPORT statement. The parameter export_spec contains all information about the executed EXPORT INTO SCRIPT statement. The function has to generate and return a SELECT statement that produces the data to be exported. The FROM part of that string can be a dummy table (DUAL), since the export command already knows which table is exported, but it must be specified so that the SQL string compiles successfully.

export_spec has the following fields:

Fields Description

parameters[]

Parameters specified in the EXPORT statement. For example, parameters['FOO'] returns either the value of parameter FOO, or None, if FOO was not specified.

source_column_names[]

List of column names of the resulting table that should be exported.

has_truncate

Boolean value from the EXPORT command option that defines whether the content of the target table should be truncated before the data transfer.

has_replace

Boolean value from the EXPORT command option that defines whether the target table should be deleted before the data transfer.

created_by

String value from the EXPORT command option that defines the creation text executed in the target system before the data transfer.

connection_name

This returns the name of the connection, if it was specified. Otherwise it returns None. The UDF script can then obtain the connection information through exa.get_connection(name).

connection

This is only defined if the user provided connection information. It returns an object similar to the return value of exa.get_connection(name). Returns None if no connection information is specified.

The password is transferred in plaintext and could be visible in the logs. Therefore, it is recommended to create a CONNECTION beforehand and to specify only the connection name (which can be obtained from the connection_name field). The actual connection information can then be obtained through exa.get_connection(name).
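A minimal sketch of such a callback, assuming a hypothetical helper script MY_SCHEMA.WRITE_DATA; the ExportSpec class below only models the object the database passes in:

```python
# Hedged sketch of an export callback; ExportSpec models the object
# passed by the database, and MY_SCHEMA.WRITE_DATA is a hypothetical
# helper UDF that would transfer the data to the target system.
class ExportSpec:
    def __init__(self, parameters, source_column_names, connection_name):
        self.parameters = parameters
        self.source_column_names = source_column_names
        self.connection_name = connection_name

def generate_sql_for_export_spec(export_spec):
    cols = ", ".join("'" + c + "'" for c in export_spec.source_column_names)
    # FROM DUAL is a dummy: the EXPORT command itself knows which table
    # is exported, but the SQL string must still compile.
    return ("SELECT MY_SCHEMA.WRITE_DATA('" +
            export_spec.connection_name + "', " + cols + ") FROM DUAL")

spec = ExportSpec({}, ['A', 'B'], 'MY_CONN')
sql = generate_sql_for_export_spec(spec)
print(sql)   # SELECT MY_SCHEMA.WRITE_DATA('MY_CONN', 'A', 'B') FROM DUAL
```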

Example
/*
  This example loads from a webserver
  and processes the following file goalies.xml:

  <?xml version='1.0' encoding='UTF-8'?>
  <users>
    <user active="1">
      <first_name>Manuel</first_name>
      <last_name>Neuer</last_name>    
    </user>
    <user active="1">
      <first_name>Joe</first_name>
      <last_name>Hart</last_name>
    </user>
    <user active="0">
      <first_name>Oliver</first_name>
      <last_name>Kahn</last_name>
    </user>
  </users>
*/

--/
CREATE PYTHON3 SCALAR SCRIPT process_users(url VARCHAR(500))
EMITS (firstname VARCHAR(20), lastname VARCHAR(20)) AS
import urllib.request
import lxml.etree as etree

def run(ctx):
    # fetch the XML document from the URL passed as input parameter
    data = urllib.request.urlopen(ctx.url).read()
    tree = etree.XML(data)
    # emit first and last name of every active user
    for user in tree.findall('user[@active="1"]'):
        fn = user.findtext('first_name')
        ln = user.findtext('last_name')
        ctx.emit(fn, ln)
/