Redshift supports only the SERIALIZABLE transaction isolation level, which differs from relational databases such as MySQL, where the default is REPEATABLE READ.
Isolation levels determine how the database behaves when concurrent transactions access the same tables.
When loading data into Redshift from multiple appliers, this isolation level can cause locking issues that manifest as errors in the Replicator log similar to the following:
Detail: Serializable isolation violation on table - 150379, transactions forming the cycle are: 2356786, 2356787 » (pid:17914) (../../tungsten-replicator//appliers/batch/redshift.js#219)
In some cases the replicator will simply retry and carry on successfully, but on very busy systems this can sometimes cause the replicator to fall back into an OFFLINE:ERROR state, requiring manual intervention.
To overcome this problem, the first step is to ensure that each applier has its own set of staging tables into which the CSV files are loaded. By default, all staging tables are named with the prefix stage_xxx_; for example, a base table named employees would by default be loaded through a staging table named stage_xxx_employees.
To generate the staging tables, you would typically use a ddlscan command that looks something like the following:
shell> ddlscan -user tungsten -pass secret -url jdbc:mysql:thin://db01:3306/
» -db hr -template ddl-mysql-redshift-staging.vm > staging.sql
To change the default prefix of the staging tables, for example to stage_nyc_, you can provide the tablePrefix option to the ddlscan command as follows:
shell> ddlscan -user tungsten -pass secret -url jdbc:mysql:thin://db01:3306/
» -db hr -template ddl-mysql-redshift-staging.vm -opt tablePrefix stage_nyc_ > staging.sql
You would need to execute this for each applier, changing the prefix accordingly. Once this has been executed and the tables have been built in Redshift, you will then need to add an additional property to each applier to specify which staging tables to use. The property should be added to the tungsten.ini file and a tpm update issued:
property=replicator.applier.dbms.stageTablePrefix=stage_nyc_
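For example, assuming a replication service named nyc, the relevant fragment of tungsten.ini might look like the following (existing settings omitted), with the change then applied via tpm:

[nyc]
...
property=replicator.applier.dbms.stageTablePrefix=stage_nyc_

shell> tpm update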
The first and easiest step in trying to overcome the isolation errors is to increase the batch commit size and the batch commit interval. Each system works differently, so there is no simple calculation to find the right level; these values should be adjusted in small increments to find the right balance for your system.
Within your configuration, adjust the following two parameters:
svc-block-commit-size
svc-block-commit-interval
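For example, the two parameters could be set in tungsten.ini as follows; the values shown here are illustrative starting points only, not recommendations, and should be tuned for your workload:

svc-block-commit-size=250000
svc-block-commit-interval=30s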
Within the Redshift applier, it is possible to introduce table locking. This enables multiple appliers to process their own THL and load transactions without impacting, or being impacted by, other appliers.
This configuration should only be used when multiple appliers are in use. However, be aware that the addition of table locking could introduce latency when applying to Redshift on extremely busy systems, and it could also prevent client applications from reading the tables due to Redshift's isolation level. To mitigate this, table locking should be combined with an increase in the block commit size and block commit interval properties mentioned above.
There are two table locking approaches; which one is better for you depends upon your environment.
Single Lock Table: This approach should be used for appliers in extremely busy systems where a block-commit-size of 500000 or greater does not eliminate isolation errors and where multiple tables are updated within each transaction.
One Lock Table per Base Table: This approach should be used for appliers in less busy systems, or where parallel apply has been enabled within the applier, regardless of system activity levels.
To enable the single lock table approach:
The following option should be added to the s3-config-servicename.json file:
"multiServiceTarget": "true"
Connect to Redshift with the same account used by the applier, and using the DDL below, create the lock table:
CREATE TABLE public.tungsten_lock_table ( ID INT );
To enable the lock table per base table approach:
The following options should be added to the s3-config-servicename.json file:
"multiServiceTarget": "true", "singleLockTable": "false"
Create a lock table for each of the base tables within Redshift. A ddlscan template can be used to generate the DDL. In the following example, the ddlscan command generates lock table DDL for all tables within the hr schema:
shell> ddlscan -user tungsten -pass secret -url jdbc:mysql:thin://db01:3306/
» -db hr -template ddl-mysql-redshift-lock.vm > outfile.sql
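The resulting table names are determined by the template; purely as an illustrative sketch, a lock table generated for a base table such as hr.employees would take the same single-column form as the lock table shown earlier, for example:

CREATE TABLE public.lock_employees ( ID INT );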
Execute the output from ddlscan against Redshift.
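For example, this could be done using psql, which Redshift supports; the hostname, port, database, and user below are placeholders for your own cluster:

shell> psql -h redshift.example.com -p 5439 -U tungsten -d dev -f outfile.sql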
After enabling either of the above methods, if replication has already been installed, you will simply need to restart the replicator by issuing the following:
shell> replicator restart