6.6.1. Recover a failed Replica

As long as the cluster is operating in the AUTOMATIC policy mode, the managers will, in most cases, attempt to recover a failed Replica automatically.
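
The current policy mode is shown in the COORDINATOR line of the ls output in cctrl, and can be changed with the set policy command. As a minimal check, assuming the alpha service and hosts used throughout this section:

[LOGICAL:EXPERT] /alpha > ls
COORDINATOR[db3:AUTOMATIC:ONLINE]
...
[LOGICAL:EXPERT] /alpha > set policy automatic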

If automatic recovery fails, further investigation will be required to establish the cause of the failure. Once the fault has been resolved and the host is viable (i.e. no data corruption) and available again, it can be recovered back into Replica mode using either the recover command, or the single-datasource form of the command, datasource db1 recover:

[LOGICAL:EXPERT] /alpha > recover
RECOVERING DATASERVICE 'alpha'
SET POLICY: AUTOMATIC => MAINTENANCE
FOUND PHYSICAL DATASOURCE TO RECOVER: 'db1@alpha'
RECOVERING DATASOURCE 'db1@alpha'
VERIFYING THAT WE CAN CONNECT TO DATA SERVER 'db1'
Verified that DB server notification 'db1' is in state 'ONLINE'
DATA SERVER 'db1' IS NOW AVAILABLE FOR CONNECTIONS
RECOVERING 'db1@alpha' TO A SLAVE USING 'db3@alpha' AS THE MASTER
SETTING THE ROLE OF DATASOURCE 'db1@alpha' FROM 'master' TO 'slave'
RECOVERY OF DATA SERVICE 'alpha' SUCCEEDED
REVERT POLICY: MAINTENANCE => AUTOMATIC
RECOVERED 1 DATA SOURCES IN SERVICE 'alpha'

or:

[LOGICAL:EXPERT] /alpha > datasource db1 recover
RECOVERING DATASOURCE 'db1@alpha'
STARTING SERVICE 'db1/mysql'
VERIFYING THAT WE CAN CONNECT TO DATA SERVER 'db1'
Verified that DB server notification 'db1' is in state 'ONLINE'
DATA SERVER 'db1' IS NOW AVAILABLE FOR CONNECTIONS
RECOVERING 'db1@alpha' TO A SLAVE USING 'db3@alpha' AS THE MASTER
RECOVERY OF 'db1@alpha' WAS SUCCESSFUL

Used on its own, the recover command will attempt to recover all the Replica resources in the cluster, bringing them all online and back into service. The command operates on all shunned or failed Replicas, and only works if there is an active Primary available.

In some cases the datasource may already show as ONLINE, and the recover command will fail to bring the datasource back into service, reporting the following error:

The datasource 'db1' is not FAILED or SHUNNED and cannot be recovered.

Checking the datasource status in cctrl shows that the replicator service has failed, but the datasource itself still shows as ONLINE:

+---------------------------------------------------------------------------------+
|db1(slave:ONLINE, progress=-1, latency=-1.000)                                   |
|STATUS [OK] [2025/01/27 01:59:33 PM UTC]                                         |
+---------------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                          |
|  REPLICATOR(role=slave, master=db3, state=SUSPECT)                               |
|  DATASERVER(state=ONLINE)                                                       |
|  CONNECTIONS(created=4, active=0)                                               |
+---------------------------------------------------------------------------------+

In this case, the datasource can be manually shunned, after which the recover command will operate as expected and bring the node back into service.
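
For example, assuming db1 is the affected host as in the output above, the sequence would be similar to the following sketch (output omitted):

[LOGICAL:EXPERT] /alpha > datasource db1 shun
[LOGICAL:EXPERT] /alpha > datasource db1 recover

Recovery from the manually shunned state is covered in the next section.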

6.6.1.1. Recover a Replica from manually shunned state

A Replica that has been manually shunned can be added back to the dataservice using the datasource recover command:

[LOGICAL:EXPERT] /alpha > ls

...
DATASOURCES:
+---------------------------------------------------------------------------------+
|db1(slave:SHUNNED(MANUALLY-SHUNNED), progress=41, latency=59020.057)             |
|STATUS [SHUNNED] [2025/01/28 08:35:36 AM UTC]                                    |
+---------------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                          |
|  REPLICATOR(role=slave, master=db3, state=ONLINE)                               |
|  DATASERVER(state=ONLINE)                                                       |
|  CONNECTIONS(created=0, active=0)                                               |
+---------------------------------------------------------------------------------+
...

[LOGICAL:EXPERT] /alpha > datasource db1 recover
RECOVERING DATASOURCE 'db1@alpha'
VERIFYING THAT WE CAN CONNECT TO DATA SERVER 'db1'
Verified that DB server notification 'db1' is in state 'ONLINE'
DATA SERVER 'db1' IS NOW AVAILABLE FOR CONNECTIONS
RECOVERING 'db1@alpha' TO A SLAVE USING 'db3@alpha' AS THE MASTER
RECOVERY OF 'db1@alpha' WAS SUCCESSFUL

[LOGICAL:EXPERT] /alpha > ls

...
DATASOURCES:
+---------------------------------------------------------------------------------+
|db1(slave:ONLINE, progress=41, latency=59020.000)                                |
|STATUS [OK] [2025/01/28 08:36:37 AM UTC]                                         |
+---------------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                          |
|  REPLICATOR(role=slave, master=db3, state=ONLINE)                               |
|  DATASERVER(state=ONLINE)                                                       |
|  CONNECTIONS(created=0, active=0)                                               |
+---------------------------------------------------------------------------------+
...

6.6.1.2. Provision or Reprovision a Replica

In the event that you cannot get the Replica to recover using the datasource recover command, you can re-provision the Replica from another Replica within your dataservice using the tprovision command.

The command performs three operations automatically:

  1. Performs a backup of a remote Replica

  2. Copies the backup to the current host

  3. Restores the backup

Warning

When using tprovision you must be logged in to the Replica that has failed or that you want to reprovision. You cannot reprovision a Replica remotely.

To use tprovision:

  1. Log in to the failed Replica.

  2. Select the active Replica within the dataservice that you want to use to reprovision the failed Replica. You may use the Primary, but this will impact performance on that host. If you use MyISAM tables, the operation will require some locking in order to obtain a consistent snapshot.

  3. Run tprovision specifying the source you have selected:

    shell> tprovision -s db2 -m xtrabackup
    
    2025/01/27 15:59:15 | INFO Started script tprovision
    2025/01/27 15:59:20 | INFO xtrabackup version is 8.0.35-31
    2025/01/27 15:59:20 | INFO MySQL version is 8.0.40
    2025/01/27 15:59:20 | INFO
    source = db2
    method = xtrabackup
    parallel threads = 4
    port = 22
    topology = CLUSTER
    ...
    ...
    Verified that DB server notification 'db1' is in state 'ONLINE'
    DATA SERVER 'db1' IS NOW AVAILABLE FOR CONNECTIONS
    RECOVERING 'db1@alpha' TO A SLAVE USING 'db3@alpha' AS THE MASTER
    RECOVERY OF 'db1@alpha' WAS SUCCESSFUL
    [LOGICAL] /alpha >
    Exiting...
    
    2025/01/27 15:59:48 | INFO Script ended

    tprovision handles the cluster status, backup, restore, and repositioning of the replication stream so that the restored Replica is ready to start operating again (a quick verification in cctrl is shown after this procedure).

    Note

    For a full explanation of using tprovision, see The tprovision Script.
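
Once tprovision reports that recovery was successful, the recovered host should appear as an ONLINE Replica again. A quick check with ls in cctrl (assuming the alpha service and db1 host used above) confirms this; the db1 entry should once again show slave:ONLINE, as in the earlier examples:

[LOGICAL:EXPERT] /alpha > ls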

Important

When using a Multi-Site/Active-Active topology, the additional replicator must be put offline before restoring data and brought back online after completion.

shell> mm_trepctl offline
shell> tprovision -s db2 -m xtrabackup
shell> mm_trepctl online
shell> mm_trepctl status

6.6.1.3. Replica Datasource Extended Recovery

If the current Replica will not recover, but the replicator state and sequence number are valid, and the Replica is either pointing to the wrong Primary or still mistakenly has the Primary role when it should be a Replica, then the Replica can be forced back into the Replica state.

For example, in the output from ls in cctrl below, the replicator on db2 is mistakenly identified as the Primary, even though db1 is correctly operating as the Primary.

Important

In the scenario shown below, because the db2 host is in the SHUNNED state and its replicator is OFFLINE, it is isolated from applications and will not operate in the incorrectly labelled role; db1 is the only functional Primary node in the cluster.

COORDINATOR[db3:AUTOMATIC:ONLINE]

ROUTERS:
+----------------------------------------------------------------------------+
|connector@db1[2096](ONLINE, created=2, active=0)                            |
|connector@db2[2092](ONLINE, created=2, active=0)                            |
|connector@db3[2107](ONLINE, created=2, active=0)                            |
+----------------------------------------------------------------------------+

DATASOURCES:
+----------------------------------------------------------------------------+
|db1(master:ONLINE, progress=43, THL latency=0.073)                          |
|STATUS [OK] [2025/01/28 08:39:09 AM UTC]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=master, state=ONLINE)                                     |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|db2(slave:SHUNNED(MANUALLY-SHUNNED), progress=-1, latency=-1.000)           |
|STATUS [SHUNNED] [2025/01/28 08:42:11 AM UTC]                               |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=master, state=OFFLINE)                                    |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|db3(slave:ONLINE, progress=43, latency=0.319)                               |
|STATUS [OK] [2025/01/28 08:42:14 AM UTC]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=slave, master=db1, state=ONLINE)                          |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+

The datasource db2 can be brought back online using this sequence:

  1. Enable force mode:

    [LOGICAL:EXPERT] /alpha > set force true
    FORCE: true
  2. Switch the replicator offline:

    [LOGICAL:EXPERT] /alpha > replicator db2 offline
    Replicator 'db2' is now OFFLINE
  3. Set the replicator to Replica operation:

    [LOGICAL:EXPERT] /alpha > replicator db2 slave
    Replicator 'db2' is now a slave of replicator 'db1'

    In some instances you may need to explicitly specify which node is your Primary when you configure the Replica; appending the Primary hostname to the command specifies the Primary host to use:

    [LOGICAL:EXPERT] /alpha > replicator db2 slave db1
    Replicator 'db2' is now a slave of replicator 'db1'
  4. Switch the replicator service online:

    [LOGICAL:EXPERT] /alpha > replicator db2 online
    Replicator 'db2' is now ONLINE
  5. Ensure the datasource is correctly configured as a Replica:

    [LOGICAL:EXPERT] /alpha > datasource db2 slave
    Datasource 'db2' now has role 'slave'
  6. Recover the Replica back to the dataservice:

    [LOGICAL:EXPERT] /alpha > datasource db2 recover
    DataSource 'db2' is now OFFLINE

Datasource db2 should now be back in the dataservice as a working datasource.

Similar processes can be used to force a datasource back into the Primary role if a switch or recover operation failed to set the role properly.
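
As a sketch only, assuming db2 is genuinely the node that should hold the Primary role, and assuming your release provides the master counterparts of the replicator db2 slave and datasource db2 slave commands used above, the role could be corrected with:

[LOGICAL:EXPERT] /alpha > set force true
[LOGICAL:EXPERT] /alpha > replicator db2 offline
[LOGICAL:EXPERT] /alpha > replicator db2 master
[LOGICAL:EXPERT] /alpha > replicator db2 online
[LOGICAL:EXPERT] /alpha > datasource db2 master

The remaining steps then mirror the Replica procedure above.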

If the recover command fails, there are a number of solutions that may bring the dataservice back to a normal operational state. The exact method will depend on whether there are other active Replicas (from which a backup can be taken) or recent backups of the Replica available, and on the reasons for the original failure. Some potential solutions include: