Tungsten Replicator keeps track of replication for a dataservice within a set of tables inside a schema in both the source and target databases. This schema is referred to as the tracking schema and should never be updated directly or altered in any way.
By default, the schema is named tungsten_SERVICE, where
SERVICE is the service name designated during installation.
Within Composite Active/Passive clusters, the schema name is based on the name of the
composite dataservice, for example tungsten_global.
The following tables are created automatically upon startup of Tungsten Replicator:
The consistency table.
This table is written to as a result of issuing trepctl check, which triggers a consistency check on the source and target databases.
Structure of the consistency table:
+------------+-----------+------+-----+---------+-------+
| Field      | Type      | Null | Key | Default | Extra |
+------------+-----------+------+-----+---------+-------+
| db         | char(64)  | NO   | PRI | NULL    |       |
| tbl        | char(64)  | NO   | PRI | NULL    |       |
| id         | int       | NO   | PRI | NULL    |       |
| row_offset | int       | NO   |     | NULL    |       |
| row_limit  | int       | NO   |     | NULL    |       |
| this_crc   | char(40)  | YES  |     | NULL    |       |
| this_cnt   | int       | YES  |     | NULL    |       |
| master_crc | char(40)  | YES  |     | NULL    |       |
| master_cnt | int       | YES  |     | NULL    |       |
| ts         | timestamp | YES  |     | NULL    |       |
| method     | char(32)  | YES  |     | NULL    |       |
+------------+-----------+------+-----+---------+-------+
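The results of recent consistency checks can be reviewed with a read-only query against this table; matching this_crc/this_cnt and master_crc/master_cnt values for a given check indicate that the checked table is consistent between source and target. The following is a minimal sketch only, assuming an example service named alpha (so the schema name tungsten_alpha is illustrative, not fixed):

mysql> SELECT db, tbl, id, this_crc, this_cnt, master_crc, master_cnt, ts, method
    ->   FROM tungsten_alpha.consistency
    ->  ORDER BY ts DESC;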
The heartbeat table.
The heartbeat table is used to track heartbeats
and confirm that replication is operational. Additionally, it can be used to check latency
by comparing the difference between the source_tstamp and
target_tstamp columns, as shown in the example query after the table structure below.
This table is written to when trepctl heartbeat is issued from the primary node, or when a cluster heartbeat is issued from within cctrl.
The table should typically only ever contain one row; the row is replaced each time a heartbeat is initiated.
Structure of the heartbeat table:
+---------------+--------------+------+-----+---------+-------+
| Field         | Type         | Null | Key | Default | Extra |
+---------------+--------------+------+-----+---------+-------+
| id            | bigint       | NO   | PRI | NULL    |       |
| seqno         | bigint       | YES  |     | NULL    |       |
| eventid       | varchar(128) | YES  |     | NULL    |       |
| source_tstamp | timestamp    | YES  |     | NULL    |       |
| target_tstamp | timestamp    | YES  |     | NULL    |       |
| lag_millis    | bigint       | YES  |     | NULL    |       |
| salt          | bigint       | YES  |     | NULL    |       |
| name          | varchar(128) | YES  |     | NULL    |       |
+---------------+--------------+------+-----+---------+-------+
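As an illustration of the latency check described above, the difference between source_tstamp and target_tstamp can be computed with a read-only query. This is a sketch only; tungsten_alpha is an assumed example schema name for a service called alpha, and lag_millis is included for comparison with the value recorded by the replicator:

mysql> SELECT name, seqno, source_tstamp, target_tstamp,
    ->        TIMESTAMPDIFF(MICROSECOND, source_tstamp, target_tstamp) / 1000
    ->          AS computed_lag_ms,
    ->        lag_millis
    ->   FROM tungsten_alpha.heartbeat;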
The trep_commit_seqno table.
The trep_commit_seqno table is used by the
replicator to track the position of replication.
On a primary node, there will only ever be one row, continually updated with the
last position of the extractor process for every extracted event. It records the
specific binlog position and the seqno associated
with it.
This enables the replicator to restart from the correct position should the replicators be stopped for any reason.
On a replica, this table will typically only have one row, unless parallel apply has been enabled, in which case there will be one row for each parallel channel.
On a replica, the frequency with which this table is updated is controlled by the
svc-applier-block-commit-size option. The setting of this option
(default: 10) controls the number of events that will be committed to the target prior to
the update of trep_commit_seqno. There may
be conditions where this update happens more frequently, depending on the size of transactions
and whether forced commits are taking place.
Structure of the trep_commit_seqno table:
+-------------------+--------------+------+-----+---------+-------+
| Field             | Type         | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+-------+
| task_id           | int          | NO   | PRI | NULL    |       |
| seqno             | bigint       | YES  |     | NULL    |       |
| fragno            | smallint     | YES  |     | NULL    |       |
| last_frag         | char(1)      | YES  |     | NULL    |       |
| source_id         | varchar(128) | YES  |     | NULL    |       |
| epoch_number      | bigint       | YES  |     | NULL    |       |
| eventid           | varchar(128) | YES  |     | NULL    |       |
| applied_latency   | int          | YES  |     | NULL    |       |
| update_timestamp  | timestamp    | YES  |     | NULL    |       |
| shard_id          | varchar(128) | YES  |     | NULL    |       |
| extract_timestamp | timestamp    | YES  |     | NULL    |       |
| connection_id     | bigint       | YES  |     | NULL    |       |
+-------------------+--------------+------+-----+---------+-------+
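The current restart position (one row per applier task, and therefore one row per channel when parallel apply is enabled) can be inspected with a read-only query such as the sketch below, which again assumes an example service named alpha:

mysql> SELECT task_id, seqno, epoch_number, eventid, source_id, applied_latency
    ->   FROM tungsten_alpha.trep_commit_seqno
    ->  ORDER BY task_id;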
The trep_shard table.
Unless parallel apply is enabled, this table will not be used and will be empty.
When parallel apply is enabled, this table will contain a list of the channels and any specific custom sharding that has been configured.
For more information, see Section 4.1.7, “Controlling Assignment of Shards to Channels”.
Structure of the trep_shard table:
+----------+--------------+------+-----+---------+-------+
| Field    | Type         | Null | Key | Default | Extra |
+----------+--------------+------+-----+---------+-------+
| shard_id | varchar(128) | NO   | PRI | NULL    |       |
| master   | varchar(128) | YES  |     | NULL    |       |
| critical | tinyint      | YES  |     | NULL    |       |
+----------+--------------+------+-----+---------+-------+
The trep_shard_channel table.
Unless parallel apply is enabled, this table will not be used and will be empty.
When parallel apply is enabled, this table will contain a list of the channels and any specific custom sharding that has been configured.
For more information, see Section 4.1.7, “Controlling Assignment of Shards to Channels”.
Structure of the trep_shard_channel table:
+----------+--------------+------+-----+---------+-------+
| Field    | Type         | Null | Key | Default | Extra |
+----------+--------------+------+-----+---------+-------+
| shard_id | varchar(128) | NO   | PRI | NULL    |       |
| channel  | int          | YES  |     | NULL    |       |
+----------+--------------+------+-----+---------+-------+
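When parallel apply is enabled, the current shard-to-channel assignments can be listed with a read-only query; the sketch below assumes an example service named alpha:

mysql> SELECT shard_id, channel
    ->   FROM tungsten_alpha.trep_shard_channel
    ->  ORDER BY channel, shard_id;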