Replacing a Scylla node

Replacing a dead node

In the case of a host failure, it may not be possible to bring the node back to life.

The replace dead node operation causes the other nodes in the cluster to stream data to the replacement node. This operation can take some time, depending on the data size and network bandwidth.

This procedure is for replacing one dead node. To replace more than one dead node, run the full procedure to completion one node at a time.

Procedure

  1. Verify the status of the node using the nodetool status command. The node with status DN is down and needs to be replaced:

    kubectl -n scylla exec -ti simple-cluster-us-east-1-us-east-1a-0 -c scylla -- nodetool status
    Datacenter: us-east-1
    =====================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address        Load       Tokens       Owns    Host ID                               Rack
    UN  10.43.125.110  74.63 KB   256          ?       8ebd6114-969c-44af-a978-87a4a6c65c3e  us-east-1a
    UN  10.43.231.189  91.03 KB   256          ?       35d0cb19-35ef-482b-92a4-b63eee4527e5  us-east-1a
    DN  10.43.43.51    74.77 KB   256          ?       1ffa7a82-c41c-4706-8f5f-4d45a39c7003  us-east-1a
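
    To pull out just the address and Host ID of the down node, you can filter the output. A minimal sketch, assuming the default column layout shown above (Load spans two fields, so Host ID is field 7):

    kubectl -n scylla exec simple-cluster-us-east-1-us-east-1a-0 -c scylla -- nodetool status | awk '/^DN/ {print $2, $7}'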
    
  2. Identify the service bound to the down node by matching its IP address:

    kubectl -n scylla get svc
    NAME                                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                           AGE
    simple-cluster-client                   ClusterIP   None            <none>        9180/TCP                                                          3h12m
    simple-cluster-us-east-1-us-east-1a-0   ClusterIP   10.43.231.189   <none>        7000/TCP,7001/TCP,7199/TCP,10001/TCP,9042/TCP,9142/TCP,9160/TCP   3h12m
    simple-cluster-us-east-1-us-east-1a-1   ClusterIP   10.43.125.110   <none>        7000/TCP,7001/TCP,7199/TCP,10001/TCP,9042/TCP,9142/TCP,9160/TCP   3h11m
    simple-cluster-us-east-1-us-east-1a-2   ClusterIP   10.43.43.51     <none>        7000/TCP,7001/TCP,7199/TCP,10001/TCP,9042/TCP,9142/TCP,9160/TCP   3h5m
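
    Rather than scanning the table by eye, you can filter the services by the dead node's IP address, for example:

    kubectl -n scylla get svc | grep 10.43.43.51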
    
  3. Drain the Kubernetes node hosting the Pod you want to replace using kubectl drain. Warning: this command may delete your data from local disks attached to the given node!

    kubectl drain gke-scylla-demo-default-pool-b4b390a1-6j12 --ignore-daemonsets --delete-local-data
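
    On kubectl 1.20 and later, the --delete-local-data flag is deprecated in favor of --delete-emptydir-data, so the equivalent command there is:

    kubectl drain gke-scylla-demo-default-pool-b4b390a1-6j12 --ignore-daemonsets --delete-emptydir-data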
    

    The Pod being replaced should enter the Pending state:

    kubectl -n scylla get pods
    NAME                                    READY   STATUS    RESTARTS   AGE
    simple-cluster-us-east-1-us-east-1a-0   2/2     Running   0          3h21m
    simple-cluster-us-east-1-us-east-1a-1   2/2     Running   0          3h19m
    simple-cluster-us-east-1-us-east-1a-2   0/2     Pending   0          8m14s
    
  4. To begin the node replacement, add the scylla/replace="" label to the service bound to the Pod being replaced:

    kubectl -n scylla label svc simple-cluster-us-east-1-us-east-1a-2 scylla/replace=""
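
    To confirm the label landed on the right service, you can list it together with its labels:

    kubectl -n scylla get svc simple-cluster-us-east-1-us-east-1a-2 --show-labels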
    

    Your failed Pod should be recreated on an available Kubernetes node:

    kubectl -n scylla get pods
    NAME                                    READY   STATUS    RESTARTS   AGE
    simple-cluster-us-east-1-us-east-1a-0   2/2     Running   0          3h27m
    simple-cluster-us-east-1-us-east-1a-1   2/2     Running   0          3h25m
    simple-cluster-us-east-1-us-east-1a-2   1/2     Running   0          9s
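
    The new Pod reports 1/2 ready while the Scylla node inside it is still bootstrapping. To watch the streaming progress, you can, for example, run nodetool netstats from one of the live nodes:

    kubectl -n scylla exec -ti simple-cluster-us-east-1-us-east-1a-0 -c scylla -- nodetool netstats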
    

    Because the other nodes in the cluster must stream data to the new node, this operation might take some time, depending on how much data your cluster stores. After bootstrapping is over, your new Pod should be ready to go, and the old node should no longer be visible in nodetool status:

    kubectl -n scylla exec -ti simple-cluster-us-east-1-us-east-1a-0 -c scylla -- nodetool status
    Datacenter: us-east-1
    =====================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address        Load       Tokens       Owns    Host ID                               Rack
    UN  10.43.125.110  74.62 KB   256          ?       8ebd6114-969c-44af-a978-87a4a6c65c3e  us-east-1a
    UN  10.43.231.189  91.03 KB   256          ?       35d0cb19-35ef-482b-92a4-b63eee4527e5  us-east-1a
    UN  10.43.191.172  74.77 KB   256          ?       1ffa7a82-c41c-4706-8f5f-4d45a39c7003  us-east-1a
    
  5. Run a repair on the cluster to make sure that the data is synced with the other nodes. You can use Scylla Manager to run the repair.
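
    For example, assuming Scylla Manager is deployed in the scylla-manager namespace (as in the Scylla Operator deployment guide) and has your cluster registered, a repair could be started with sctool; the cluster name below is a placeholder, and the exact sctool syntax depends on your Scylla Manager version:

    kubectl -n scylla-manager exec -ti deployment/scylla-manager -- sctool repair -c <cluster-name-or-id>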
