Appropriate Data Types
- Choosing appropriate data types is a key factor in data modeling. The right types provide good compression and avoid unnecessary type conversions or expression indexes.
- Keep data types short; do not use VARCHAR(2000000) or DECIMAL(32,X) if a smaller range is sufficient.
- Prefer exact data types: use DECIMAL instead of DOUBLE where possible. DOUBLE has a wider range but is less efficient.
- Prefer CHAR(x) over VARCHAR(x) if the length of the strings is a known constant (CHAR is fixed size).
- Use identical types for same content: Avoid joining columns of different data types. For example, do not join an INT column with a VARCHAR column containing digits.
- See the Exasol GitHub repository to learn more about appropriate data types.
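As a sketch of these rules in practice (table and column names are hypothetical):

```sql
-- Hypothetical example: compact, exact types instead of oversized ones
CREATE TABLE orders (
    order_id  DECIMAL(18,0),  -- exact numeric type, compresses well
    country   CHAR(2),        -- fixed, known length: CHAR instead of VARCHAR
    status    VARCHAR(20),    -- short limit instead of VARCHAR(2000000)
    amount    DECIMAL(12,2)   -- DECIMAL instead of DOUBLE for exact values
);
```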
Replication Border
An Exasol cluster uses a shared-nothing architecture in which data is distributed across the cluster nodes. However, if tables are small, their data is not distributed. Instead, these small tables are replicated on every node, which guarantees that local joins with these tables are possible. The replication border defines what “small” means. The default value is 100000 rows.
Consider increasing the replication border so that all your dimension tables are replicated, provided this does not exhaust your nodes’ memory.
You can use the database parameters soft_replicationborder_in_numrows and soft_replicationborder_in_kb to configure the replication border. A table is considered small and is replicated across all active nodes as long as neither of the two thresholds is reached.
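To see whether a given table stays below the default row threshold, a simple count is enough (the table name is a placeholder):

```sql
-- Placeholder table name; 100000 rows is the default replication border
SELECT COUNT(*) AS row_cnt,
       CASE WHEN COUNT(*) < 100000 THEN 'replicated'
            ELSE 'distributed' END AS expected_storage
FROM dim_customer;
```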
Distribution Key Scenarios
Tables that cross the replication border (large tables) are distributed across all active nodes. Without a distribution key, this distribution is random. As an example, let’s look at two tables:
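The two example tables could be created like this (the column types are assumptions; only the columns relevant to the discussion are shown):

```sql
-- Sketch of the example tables: joined on join_col, filtered on where_col
CREATE TABLE t1 (join_col DECIMAL(18,0), where_col CHAR(1));
CREATE TABLE t2 (join_col DECIMAL(18,0), where_col CHAR(1));
```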
The tables are joined on join_col and filtered on where_col. Other columns are not displayed to keep the sketch small, and we ignore the fact that small tables like these would normally be replicated. A random distribution on a three-node cluster may look like this:
If the tables are now joined on the join_col with a statement like:
SELECT <something> FROM t1 JOIN t2 ON t1.join_col = t2.join_col;
We realize that this random distribution is not optimal. The query is processed internally as a global join and requires network communication between the nodes, because the highlighted rows do not find local join partners on the same node.
If the two tables were distributed on their join columns with statements similar to the following:
ALTER TABLE t1 DISTRIBUTE BY join_col;
ALTER TABLE t2 DISTRIBUTE BY join_col;
Then the same query can be processed internally as a local join:
Every row finds a local join partner on the same node, so no network communication between the nodes is required for the join. The local join performs better than the global join, although it is the same SELECT statement.
It’s a good idea to distribute on JOIN columns. It is, however, a bad idea to distribute on columns that are used for filtering with WHERE conditions. If you distribute both tables on the where_col columns, the result looks like this:
This distribution is actually worse than the initial random distribution. Not only does it cause global joins between the two tables, as explained above, but statements like <Any DQL or DML> WHERE t2.where_col='A'; will utilize only one node (the first node, for this WHERE condition). That disables one of Exasol’s greatest strengths, its Massively Parallel Processing (MPP) functionality, and leads to poor performance because all other nodes in the cluster stand by idle while one node does all the work.
- Do: Distribution on JOIN columns leads to local joins which perform better than global joins.
- Don't: Distribution on WHERE columns leads to global joins and disables the MPP functionality, both causing poor performance.
Check how a new distribution key distributes rows
Let's consider a scenario where you have a new distribution key candidate, the column aa_00, and you want to see how it would distribute the rows. The following check lets you see the distribution before you implement the distribution key.
Run the following statement to check the distribution of the new distribution key.
SELECT VALUE2PROC(aa_00) AS node_number,
       ROUND(COUNT(*) / SUM(COUNT(*)) OVER () * 100, 2) AS percentage
FROM <table name>
GROUP BY 1;
If the resulting distribution is very uneven (skewed), implementing the new distribution key is not recommended.
Avoid ORDER BY
Views are usually treated like tables by end users, who are not aware of an ORDER BY clause being part of the view’s query. Such a clause is superfluous and slows down the view’s performance. Similarly, ORDER BY clauses in subqueries slow down query performance and cause unnecessary materialization.
There should be only one ORDER BY clause (if any) at the end of an SQL statement.
No (hidden) ORDER BY in views or subqueries.
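A minimal sketch of the rule (table and column names are hypothetical):

```sql
-- Avoid: the ORDER BY inside the subquery forces materialization
SELECT id, name
FROM (SELECT id, name FROM customers ORDER BY name) sub
WHERE id < 100;

-- Prefer: sort only once, at the end of the outermost query
SELECT id, name
FROM customers
WHERE id < 100
ORDER BY name;
```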
Manual Index Creation
Exasol creates and maintains required indexes automatically. It is not recommended to interfere with this mechanism.
In rare cases, additional indexes on filter columns may improve query performance, but indexes also slow down DML operations like INSERT.
Existing indexes can be seen in EXA_DBA_INDICES and created with ENFORCE GLOBAL|LOCAL INDEX ON <table>(<columns>);
Exasol recommends creating local indexes if possible.
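For the rare case where a manual index is warranted, a concrete instance of the syntax above might look like this (table and column names are assumptions):

```sql
-- Hypothetical example: enforce a local index on a frequently filtered column
ENFORCE LOCAL INDEX ON sales(customer_id);
```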
Use Surrogate Keys
Joins on VARCHAR and DATE/TIMESTAMP columns are expensive compared to joins on DECIMAL columns.
Joins on multiple columns generate multi-column indexes which require more space and effort to maintain internally compared to single column indexes.
Use surrogate keys instead to avoid the problems above.
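One way to introduce a surrogate key is an IDENTITY column; the table and column names below are assumptions:

```sql
-- Sketch: replace joins on a wide natural key (e.g. VARCHAR + DATE)
-- with joins on a single DECIMAL surrogate key
ALTER TABLE customers ADD COLUMN customer_sk DECIMAL(18,0) IDENTITY;
```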
Favor UNION ALL
The difference between UNION and UNION ALL is that UNION eliminates duplicates, thereby sorting the two sets. This means UNION is more expensive than UNION ALL, so in cases where it is known that no duplicates can occur or when duplicates are acceptable, UNION ALL should be used instead of UNION.
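A sketch of the rule (table names are hypothetical):

```sql
-- The two sets cannot overlap (disjoint years), so skip duplicate
-- elimination and use UNION ALL instead of UNION
SELECT order_id FROM sales_2023
UNION ALL
SELECT order_id FROM sales_2024;
```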
Partitioning
If tables are too large, or there are too many of them, to fit completely into the nodes’ memory, partitioning the large tables can help improve performance. Take these two tables as an example:
Say t2 is too large to fit in memory and is therefore partitioned.
In contrast to distribution, partitioning should be done on columns that are used for filtering:
ALTER TABLE t2 PARTITION BY where_col;
Now without taking distribution into account (on a one-node cluster), the table t2 looks like this:
Partitioning changes the way the table is physically stored on disk. With large tables, such an ALTER TABLE (distribution/partitioning) command may take a long time to complete.
A statement like SELECT * FROM t2 WHERE where_col='A'; would only have to load the red part of the table into memory. If the two tables reside on a three-node cluster, distributed on the join_col columns and with table t2 partitioned on the where_col column, they look like this:
Again, this means that each node has to load a smaller portion of the table into memory if statements are executed that filter on the where_col column while joins on the join_col column are still local joins.
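Distribution and partitioning can also be set in a single statement, which matches the layout just described (column names taken from the example above):

```sql
-- Distribute on the join column and partition on the filter column
ALTER TABLE t2 DISTRIBUTE BY join_col, PARTITION BY where_col;
```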
EXA_(USER|ALL|DBA)_TABLES (see Metadata System Tables) shows both the distribution key and the partition key if any.
Exasol will automatically create an appropriate number of partitions.
Use Jumbo Frames
Exasol data blocks have a minimum size of 4 KB, which does not fit into the default MTU size of 1500 bytes. For best performance, we recommend using jumbo frames (MTU size 9000 bytes). All network components must be configured to use MTU 9000. In addition, MTU 9000 should be enabled on the license server.
Monitoring
The tables in the Statistical System Tables schema can be used to monitor the health of the database. They come with detailed values for the last 24 hours and aggregated values on an hourly, daily, and monthly basis: EXA_DB_SIZE*, EXA_MONITOR*, EXA_SQL*, and EXA_USAGE*. Other important tables to monitor are EXA_DBA_TRANSACTION_CONFLICTS and EXA_SYSTEM_EVENTS. The meta view EXA_SYSCAT lists all these tables together with a description.
High CPU utilization is normal for a healthy system; low CPU utilization on a busy system indicates a bottleneck elsewhere. CPU_MAX from EXA_MONITOR_DAILY should therefore normally be above 85%.
Analytical queries often consume a lot of memory for sort operations. The EXA_MONITOR* tables should be monitored to confirm that the total TEMP memory consumption is below 20% of the total available database RAM, while the EXA_SQL* and EXA_*_SESSIONS tables should be monitored to confirm that a single query doesn’t consume more than 10% of available database memory.
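A simple health check along these lines might query the daily statistics; the column names below (CPU_MAX, TEMP_DB_RAM_MAX) are assumed from the EXA_MONITOR_DAILY table:

```sql
-- Daily peaks of CPU utilization and temporary DB RAM usage
SELECT INTERVAL_START, CPU_MAX, TEMP_DB_RAM_MAX
FROM EXA_MONITOR_DAILY
ORDER BY INTERVAL_START DESC;
```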
Exasol performs well with parallel, in-memory processing of analytical queries, which requires sufficient memory. The EXA_DB_SIZE* tables contain the column RECOMMENDED_DB_RAM_SIZE, in which the system reports whether it would be beneficial to add more RAM to the data nodes.
Avoid the USING Syntax
The use of the USING syntax in queries containing multiple joins leads to inefficient execution and a high amount of heap memory usage by the SQL processes.
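The kind of query being discussed looks roughly like this (the original example is not shown here; table and column names are assumptions):

```sql
-- Sketch of a multi-join query written with USING
SELECT id, name
FROM T1
JOIN T2 USING (id)
JOIN T3 USING (id);
```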
In the above example, USING(id) is not just syntactic sugar for ON T1.id = T2.id. The database has to decide on the values of the columns that appear in both tables: for example, when T1.name is NULL and T2.name is not NULL, the value of the output column name will be T2.name. This effort increases exponentially with the number of joins.
Instead, use explicit ON join conditions for better performance.
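The same join rewritten with explicit ON conditions might look like this (table and column names are assumptions):

```sql
-- Explicit ON conditions avoid the column-merging overhead of USING
SELECT T1.id, T1.name
FROM T1
JOIN T2 ON T1.id = T2.id
JOIN T3 ON T2.id = T3.id;
```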