CREATE SCRIPT

Purpose

Use this statement to create a script. A script can be a user-defined function (UDF), a scripting program, or an adapter script.

Prerequisites

  • You need the system privilege CREATE SCRIPT if the script is created in your own schema or in that of an assigned user role; otherwise, you need the system privilege CREATE ANY SCRIPT.
  • If the option OR REPLACE is specified and the script already exists, you also need the rights required for DROP SCRIPT.
  • To use a script, you need the system privilege EXECUTE ANY FUNCTION, the object privilege EXECUTE on the script or its schema, or you must be the current owner of the script.
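
For example, granting the EXECUTE object privilege on a single script could look like the following sketch (the schema, script, and user names are hypothetical and must exist in your system):

GRANT EXECUTE ON my_schema.function_lib TO user_x;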

Syntax

create_scripting_script::=

(syntax diagram not reproduced here)

scripting_metadata::=

(syntax diagram not reproduced here)

create_udf_script::=

(syntax diagram not reproduced here)

udf_metadata::=

(syntax diagram not reproduced here)

udf_order_by_clause::=

(syntax diagram not reproduced here)

create_adapter_script::=

(syntax diagram not reproduced here)

Usage Notes

  • The option OR REPLACE can be used to replace an existing script without having to explicitly delete it with DROP SCRIPT.
  • The content of a script is stored in the corresponding system tables (for example, EXA_ALL_SCRIPTS). It begins at the first line after the AS keyword that is not a blank line.
  • If you want to add further script languages for user-defined functions and adapter scripts, refer to the Expanding Script Languages Using BucketFS section.
  • Usage notes for scripting programs: 
    • Scripting programs provide a way to control the execution of several SQL commands (for example, for ETL jobs) and are executed with the EXECUTE SCRIPT command. For more information, refer to the Scripting section.
    • Lua is the supported programming language for scripting programs.
    • If neither RETURNS TABLE nor RETURNS ROWCOUNT is specified, RETURNS ROWCOUNT is used implicitly. For more information, refer to the Return Value of a Script section.
  • Usage notes for UDF scripts: 
    • UDF scripts can be used directly within SELECT statements and can process large data volumes. For more information, refer to the UDF Scripts section.
    • In addition to scalar functions, you can also create aggregate and analytic functions, and even implement MapReduce algorithms. For more information, refer to the UDF Scripts section.
    • If you define the ORDER BY clause in the parameter list, the groups of the SET input data are processed in sorted order. Alternatively, you can specify this clause in the function call within a SELECT statement.
  • Usage notes for adapter scripts: 
    • Adapter scripts can only be implemented in Java, Lua, and Python.
    • For more information about adapter scripts and virtual schemas, refer to the Virtual Schemas section.
    • The existing open source adapters provided by Exasol can be accessed from the Exasol GitHub Repository.
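
As an illustration of the udf_order_by_clause, the following sketch defines a SET UDF whose input rows arrive sorted within each group. The script name and the table t are hypothetical; verify the exact clause placement against the syntax section above.

-- hedged sketch: a SET UDF with an ORDER BY clause in the parameter list
--/
CREATE LUA SET SCRIPT smallest (v DOUBLE ORDER BY v)
           RETURNS DOUBLE AS
function run(ctx)
    -- with ORDER BY v, the context starts at the smallest value of each group
    return ctx.v
end
/

SELECT smallest(x) FROM t;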

Examples

-- define a reusable function definition
--/
CREATE SCRIPT function_lib AS
  function min_max(a,b,c)
    local min,max
    if a>b then max,min=a,b
           else max,min=b,a
    end
    if c>max then max=c
    elseif c<min then min=c
    end
    return min,max
  end
/
-- scripting program example for data insert
--/
CREATE SCRIPT insert_low_high (param1, param2, param3) AS
  import('function_lib') -- accessing external function
  lowest, highest = function_lib.min_max(param1, param2, param3)
  query([[INSERT INTO t VALUES (:x, :y)]], {x=lowest, y=highest})
/

EXECUTE SCRIPT insert_low_high(1, 3, 4);
EXECUTE SCRIPT insert_low_high(2, 6, 4);
EXECUTE SCRIPT insert_low_high(3, 3, 3);
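
-- hedged sketch: a scripting program with RETURNS TABLE instead of the
-- implicit RETURNS ROWCOUNT (reuses table t from above; verify the exact
-- return mechanics against the Return Value of a Script section)
--/
CREATE SCRIPT return_extremes RETURNS TABLE AS
  exit(query([[SELECT min(x) AS low, max(y) AS high FROM t]]))
/

EXECUTE SCRIPT return_extremes;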
-- UDF example
--/
CREATE LUA SCALAR SCRIPT my_average (a DOUBLE, b DOUBLE)
           RETURNS DOUBLE AS
function run(ctx)
    if ctx.a == nil or ctx.b==nil
        then return NULL
    end
    return (ctx.a+ctx.b)/2
end
/

SELECT x,y,my_average(x,y) FROM t;
-- Adapter script example
--/
CREATE JAVA ADAPTER SCRIPT my_script AS
  %jar hive_jdbc_adapter.jar;
/