KB Article #178032
Cassandra node not seen as a replica in the ring
Problem
-- Occasionally, a Cassandra node that joins the ring is not seen as a replica. In this situation, running nodetool ring in a 3-node configuration will list all 3 nodes, but the number of replicas is shown as 2:
$ ./nodetool -h node1 ring kps
Datacenter: datacenter1
==========
Replicas: 2
Address Rack Status State Load Owns Token
333333
192.168.56.12 rack1 Up Normal 336,93 KB 100,00% -8922677634030955674
192.168.56.11 rack1 Up Normal 359,33 KB 100,00% -6217483617434545138
192.168.56.13 rack1 Up Normal 358,26 KB 100,00% 333333
-- In this situation, if you are running an HA configuration with full consistency and one of the nodes that is seen as a replica is stopped, Cassandra errors will appear in the API Gateway instance logs when requests are made against KPS:
me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be enough replicas present to handle consistency level.
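As a first check, the replication settings of the KPS keyspace can be inspected from any node. This is a hedged example: the keyspace name kps is taken from the nodetool ring command above, and the cqlsh location is an assumption (it ships in the same bin directory as nodetool in a standard Cassandra distribution):

```shell
$ ./cqlsh node1 -e "DESCRIBE KEYSPACE kps;"
```

The CREATE KEYSPACE statement in the output shows the replication strategy and replication factor. In a 3-node full-consistency setup the replication factor would normally be 3; if nodetool ring nonetheless reports Replicas: 2, one node is not being counted as a replica and the resolution below applies.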
Resolution
- To solve this issue, remove the node that is not seen as a replica from the ring, then make it join the ring again:
1. First, decommission the node that is not seen as a replica; in this example, node3:
$ ./nodetool -h node3 decommission
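Decommissioning can take some time, because the node streams its data to the remaining nodes before leaving the ring. Progress can be followed with the standard netstats subcommand (shown here against node3 as an illustration):

```shell
$ ./nodetool -h node3 netstats
```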
2. Once the node has been decommissioned from the ring, nodetool ring should show the following:
Datacenter: datacenter1
==========
Replicas: 2
Address Rack Status State Load Owns Token
333333
192.168.56.12 rack1 Up Normal 336,93 KB 100,00% -8922677634030955674
192.168.56.11 rack1 Up Normal 359,33 KB 100,00% -6217483617434545138
Then stop the API Gateway instance on this node.
3. Remove the contents of (or move to another folder) the data/, saved_caches/ and commitlog/ Cassandra folders, located by default under [apigateway_home]/groups/group-x/instance-y/conf/kps/cassandra/
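Moving the folders aside rather than deleting them keeps a copy in case the rejoin fails. A minimal sketch of step 3 as a shell function; the function name, the backup folder name and the way the path is passed in are all assumptions for illustration:

```shell
# Hypothetical helper: move the Cassandra KPS data aside and recreate
# empty folders so the node starts with a clean state when it rejoins.
clean_kps_cassandra() {
  cass_dir="$1"                                # Cassandra folder from step 3
  backup_dir="$cass_dir/pre-rejoin-backup"     # keep a copy instead of deleting
  mkdir -p "$backup_dir"
  for d in data saved_caches commitlog; do
    [ -d "$cass_dir/$d" ] || continue
    mv "$cass_dir/$d" "$backup_dir/"           # move old content aside
    mkdir "$cass_dir/$d"                       # recreate the empty folder
  done
}

# Example call, using the default path from step 3:
# clean_kps_cassandra "[apigateway_home]/groups/group-x/instance-y/conf/kps/cassandra"
```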
4. Restart the instance; the node will automatically join the ring again.
5. Run nodetool repair on all 3 nodes.
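The repair in step 5 can be limited to the KPS keyspace so that only its data is synchronized. The keyspace name kps is taken from the nodetool ring command earlier in this article:

```shell
$ ./nodetool -h node1 repair kps
$ ./nodetool -h node2 repair kps
$ ./nodetool -h node3 repair kps
```

After the repairs complete, nodetool ring kps should report Replicas: 3, and stopping any single node should no longer cause HUnavailableException errors in the API Gateway logs.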