To check the replication status you can use the trepctl command. This accepts a number of command-specific verbs that provide status and control information for your configured cluster. The basic format of the command is:
shell> trepctl [-host hostname] command
The -host option is not required; it enables you to check the status of a host other than the current node.
To get basic information about the currently configured services on a node and their current status, use the services command:
shell> trepctl services
Processing services command...
NAME VALUE
---- -----
appliedLastSeqno: 211
appliedLatency : 17.66
role : slave
serviceName : firstrep
serviceType : local
started : true
state : ONLINE
Finished services command...
In the above example, the output shows the last applied sequence number and the latency of the host, in this case an Applier, relative to the Extractor from which it is processing information. Here, the last applied sequence number is 211, and the latency between that sequence being processed on the Extractor and applied to the Target is 17.66 seconds. You can compare this information with that provided by the Extractor, either by logging into the Extractor and running the same command, or by using the -host command-line option:
shell> trepctl -host host1 services
Processing services command...
NAME VALUE
---- -----
appliedLastSeqno: 365
appliedLatency : 0.614
role : master
serviceName : firstrep
serviceType : local
started : true
state : ONLINE
Finished services command...
By comparing the appliedLastSeqno on the Extractor (365) against the value on the Applier (211), it is possible to determine that the Applier and the Extractor are not yet synchronized.
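As a quick check, a short shell loop can collect the appliedLastSeqno value from each host in one step. This is a minimal sketch, assuming the host names host1 and host2 used in the examples above:
shell> for host in host1 host2; do echo -n "$host "; trepctl -host $host services | grep appliedLastSeqno; done
If the numbers differ, the Applier is still catching up with the Extractor.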
For more detailed output of the current replication status, use the status command:
shell> trepctl status
Processing status command...
NAME VALUE
---- -----
appliedLastEventId : mysql-bin.000064:0000000002757461;0
appliedLastSeqno : 212
appliedLatency : 263.43
channels : 1
clusterName : default
currentEventId : NONE
currentTimeMillis : 1365082088916
dataServerHost : host2
extensions :
latestEpochNumber : 0
masterConnectUri : thl://host1:2112/
masterListenUri : thl://host2:2112/
maximumStoredSeqNo : 724
minimumStoredSeqNo : 0
offlineRequests : NONE
pendingError : NONE
pendingErrorCode : NONE
pendingErrorEventId : NONE
pendingErrorSeqno : -1
pendingExceptionMessage: NONE
pipelineSource : thl://host1:2112/
relativeLatency : 655.915
resourcePrecedence : 99
rmiPort : 10000
role : slave
seqnoType : java.lang.Long
serviceName : firstrep
serviceType : local
simpleServiceName : firstrep
siteName : default
sourceId : host2
state : ONLINE
timeInStateSeconds : 893.32
uptimeSeconds : 9370.031
version : Tungsten Clustering (for MySQL) 8.0.0 build 10
Finished status command...
As with the host specification, trepctl reports on the default service. If you have installed multiple services, you must specify the service explicitly:
shell> trepctl -service servicename status
If the service has been configured to operate on an alternative management port, this can be specified using the -port option. The default is to use port 10000.
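For example, a hypothetical second service listening on port 10001 could be queried as follows (both the port number and the service name here are placeholders):
shell> trepctl -port 10001 -service servicename status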
The above command was executed on the Target host, host2. Some key parameter values from the generated output:
appliedLastEventId
This shows the last event from the source event stream that was applied to the database. In this case, the output shows that the source of the data was a MySQL binary log. The portion before the colon, mysql-bin.000064, is the filename of the binary log on the Source. The portion after the colon is the physical location, in bytes, within the binary log file.
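As a cross-check, you can compare this value against the current binary log position on the Source. A minimal sketch, assuming the Source is MySQL on host1 and you have client credentials (on MySQL 8.4 and later, use SHOW BINARY LOG STATUS instead):
shell> mysql -h host1 -u root -p -e 'SHOW MASTER STATUS\G'
The File and Position values should broadly track the filename and offset reported in appliedLastEventId, allowing for replication lag.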
appliedLastSeqno
The last sequence number for the transaction from the Tungsten stage that has been applied to the database. This indicates the last actual transaction information written into the Target database.
When using parallel replication, this parameter returns the minimum applied sequence number among all the channels applying data.
appliedLatency
The appliedLatency is the latency between the commit time of the source event and the time the last committed transaction reached the end of the corresponding pipeline within the replicator.
In replicators that are operating with parallel apply, appliedLatency indicates the latency of the trailing channel. Because the parallel apply mechanism does not update all channels simultaneously, the figure shown may trail significantly from the actual latency.
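To see how far the individual channels have progressed, trepctl can report additional status views; the view names below are an assumption about the installed release, so check trepctl help on your version:
shell> trepctl status -name tasks
shell> trepctl status -name shards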
masterConnectUri
On an Extractor, the value will be empty.
On an Applier, the URI of the Extractor replicator from which the transaction data is being read. The value supports multiple URIs (separated by commas) for topologies with multiple Sources.
maximumStoredSeqNo
The maximum transaction ID that has been stored locally on the machine in the THL. Because Tungsten Clustering (for MySQL) operates in stages, it is sometimes important to compare the sequence and latency between information being read from the source into the THL, and then from the THL into the database. You can compare this value to the appliedLastSeqno, which indicates the last sequence committed to the database.
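A quick way to compare the two values on a node, using the field names shown in the status output above:
shell> trepctl status | grep -E 'appliedLastSeqno|maximumStoredSeqNo'
A large gap between the two indicates that THL is arriving faster than it is being applied.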
pipelineSource
Indicates the source of the information that is written into the THL. For an Extractor, pipelineSource is the MySQL binary log. For an Applier, pipelineSource is the THL of the Extractor.
relativeLatency
The relativeLatency is the latency between now and the timestamp of the last event written into the local THL. An increasing relativeLatency indicates that the replicator may have stalled and stopped applying changes to the dataserver.
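To spot a stalled replicator, sample these values repeatedly; a minimal sketch (the five-second interval is arbitrary):
shell> watch -n 5 "trepctl status | grep -E 'relativeLatency|appliedLatency|state'"
A steadily climbing relativeLatency alongside an unchanging appliedLastSeqno suggests the replicator has stopped applying changes.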
state
Shows the current status for this node. In the event of a failure, the status will indicate that the node is in a state other than ONLINE. The timeInStateSeconds will indicate how long the node has been in that state, and therefore how long the node may have been down or unavailable.
The easiest method to check the health of your replication is to compare the current sequence numbers and latencies on each Applier with those on the Extractor. For example:
shell> trepctl -host host2 status | grep applied
appliedLastEventId     : mysql-bin.000076:0000000087725114;0
appliedLastSeqno       : 2445
appliedLatency         : 252.0
...
shell> trepctl -host host1 status | grep applied
appliedLastEventId     : mysql-bin.000076:0000000087725114;0
appliedLastSeqno       : 2445
appliedLatency         : 2.515
The above indicates that the two hosts are up to date, but that there is a significant latency on the Applier for performing updates.
For parallel replication and complex multi-service replication structures, there are additional parameters and information to consider when checking and confirming the health of the cluster.
Each node within the cluster will have a specific state that indicates whether the node is up and running and servicing requests, or whether there is a fault or problem. Understanding these states will enable you to clearly identify the current operational status of your nodes and cluster as a whole.
A list of the possible states for the replicator includes:
START
The replicator service is starting up and reading the replicator properties configuration file.
OFFLINE:NORMAL
The node has been deliberately placed into the offline mode by an administrator. No replication events are processed, and reading or writing to the underlying database does not take place.
OFFLINE:ERROR
The node has entered the offline state because of an error. No replication events are processed, and reading or writing to the underlying database does not take place.
This replicator state is only seen within cctrl. The underlying replicator status should be checked to see the full error state by reviewing the output of trepctl status.
GOING-ONLINE:RESTORING
The replicator is preparing to go online and is currently restoring data from a backup.
GOING-ONLINE:SYNCHRONIZING
The replicator is preparing to go online and is currently processing any outstanding events from the incoming event stream. This mode occurs when an Applier has been switched online after maintenance, or in the event of a temporary network error where the Applier has reconnected to the Extractor.
ONLINE
The node is currently online and processing events, reading incoming data and applying those changes to the database as required. In this mode the current status and position within the replication stream is recorded and can be monitored. Replication will continue until an error or administrative condition switches the node into the OFFLINE state.
GOING-OFFLINE
The replicator is processing any outstanding events or transactions that were in progress when the node was switched offline. When these transactions are complete, and the resources in use (memory, network connections) have been closed down, the replicator will switch to the OFFLINE:NORMAL state. This state may also be seen in a node where auto-enable is disabled after a start or restart operation.
ONLINE:DEGRADED
This status will be seen on an Extractor replicator and indicates that the replicator has lost connectivity to the Source database from which it is extracting. The replicator will continue to extract entries from the binary log that have not yet been processed. After extracting all log entries, the replicator will proceed to the ONLINE:DEGRADED-BINLOG-FULLY-READ state.
ONLINE:DEGRADED-BINLOG-FULLY-READ
This status will be seen on an Extractor replicator following the ONLINE:DEGRADED state and indicates that the replicator has completed reading all binlog entries. In a clustering environment, it indicates to the cluster that failover can now proceed.
In general, the state of a node will follow a natural progression in certain situations. In normal operation, assuming no failures or problems and no administrator-requested offline, a node will remain in the ONLINE state indefinitely.
Maintenance on Tungsten Replicator or the dataserver must be performed while in the OFFLINE state. In the OFFLINE state, write locks on the THL and other files are released, and reads or writes from the dataserver are stopped until the replicator is ONLINE again.
During a maintenance operation, a node will typically go through the following states at different points of the operation:
| Operation | State |
|---|---|
| Node operating normally | ONLINE |
| Administrator puts node into offline state | GOING-OFFLINE |
| Node is offline | OFFLINE:NORMAL |
| Administrator puts node into online state | GOING-ONLINE:SYNCHRONIZING |
| Node catches up with Extractor | ONLINE |
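A sketch of the same sequence from the command line, checking the state between each step (the maintenance work itself is whatever your task requires):
shell> trepctl offline
shell> trepctl status | grep state
... perform maintenance ...
shell> trepctl online
shell> trepctl status | grep state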
In the event of a failure, the sequence moves the node into the error state and then, after recovery, back into the online state:
| Operation | State |
|---|---|
| Node operating normally | ONLINE |
| Failure causes the node to go offline | OFFLINE:ERROR |
| Administrator fixes error and puts node into online state | GOING-ONLINE:SYNCHRONIZING |
| Node catches up with Extractor | ONLINE |
During an error state where a backup of the data is restored to a node in preparation for bringing the node back into operation:
| Operation | State |
|---|---|
| Node operating normally | ONLINE |
| Failure causes the node to go offline | OFFLINE:ERROR |
| Administrator restores node from backup data | GOING-ONLINE:RESTORING |
| Once restore is complete, node synchronizes with the Extractor | GOING-ONLINE:SYNCHRONIZING |
| Node catches up with Extractor | ONLINE |
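Where a backup agent has been configured for the replicator, the restore itself can usually be driven through trepctl; this is a rough sketch only, since the exact verbs and options depend on your installed version and backup configuration:
shell> trepctl offline
shell> trepctl restore
shell> trepctl online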
You can manually change the replicator states on any node by using the trepctl command.
To switch to the OFFLINE state if you are currently ONLINE:
shell> trepctl offline
Unless there is an error, no information is reported. The current state can be verified using trepctl status:
shell> trepctl status
Processing status command...
...
state : OFFLINE:NORMAL
timeInStateSeconds : 21.409
uptimeSeconds : 935.072
To switch back to the ONLINE state:
shell> trepctl online
When using replicator states in this manner, the replication between hosts is effectively paused. Any outstanding events from the Extractor will be replicated to the Applier, with replication continuing from the point where the node was switched to the OFFLINE state. The sequence number and latency will be reported accordingly, as seen in the example below, where the node is significantly behind the Primary:
shell> trepctl status
Processing status command...
NAME VALUE
---- -----
appliedLastEventId : mysql-bin.000004:0000000005162941;0
appliedLastSeqno : 21
appliedLatency : 179.366