
Restore from backup

This procedure describes how to restore a backup taken with Scylla Manager into a fresh, empty cluster of any size.

Warning

Restoring schema is not supported on clusters running ScyllaDB Open Source 5.4.X or ScyllaDB Enterprise 2024.1.X with consistent_cluster_management enabled.

When creating the target ScyllaDB cluster, configure it with consistent_cluster_management: false. Refer to API Reference to learn how to customize ScyllaDB configuration files.

When following the steps for schema restore, ensure you follow the additional steps dedicated to affected ScyllaDB versions.
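
As a minimal sketch of such an override: ScyllaDB configuration files can be customized by supplying a scylla.yaml fragment in a ConfigMap referenced from the rack-level scyllaConfig field. The ConfigMap name and namespace below are illustrative assumptions; see the API Reference for the authoritative mechanism.

apiVersion: v1
kind: ConfigMap
metadata:
  # Assumed name; each rack that should pick this up references it,
  # e.g. scyllaConfig: scylla-config under spec.datacenter.racks[].
  name: scylla-config
  namespace: scylla
data:
  scylla.yaml: |
    consistent_cluster_management: false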

In the following example, the ScyllaCluster from which the backup was taken is called source. The backup will be restored into the ScyllaCluster named target.

apiVersion: scylla.scylladb.com/v1
kind: ScyllaCluster
metadata:
  name: source
spec:
  agentVersion: 3.4.0
  version: 6.2.0
  developerMode: true
  backups:
  - name: foo
    location:
    - s3:source-backup
    keyspace:
    - '*'
  datacenter:
    name: us-east-1
    racks:
    - name: us-east-1a
      members: 1
      storage:
        capacity: 1Gi
      resources:
        limits:
          cpu: 1
          memory: 1Gi
---
apiVersion: scylla.scylladb.com/v1
kind: ScyllaCluster
metadata:
  name: target
spec:
  agentVersion: 3.4.0
  version: 6.2.0
  developerMode: true
  datacenter:
    name: us-east-1
    racks:
    - name: us-east-1a
      members: 1
      storage:
        capacity: 1Gi
      resources:
        limits:
          cpu: 1
          memory: 1Gi
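
Assuming you save the two manifests above together in a single file, for example clusters.yaml, and that the clusters live in the scylla namespace (both names are assumptions), you can create them with:

kubectl -n scylla apply -f clusters.yaml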

Make sure your target cluster is already registered in Scylla Manager. To get a list of all registered clusters, execute the following command:

$ kubectl -n scylla-manager exec -ti deployment.apps/scylla-manager -- sctool cluster list
+--------------------------------------+---------------------------------------+---------+-----------------+
| ID                                   | Name                                  | Port    | CQL credentials |
+--------------------------------------+---------------------------------------+---------+-----------------+
| af1dd5cd-0406-4974-949f-dc9842980080 | scylla/target                         | default | set             |
| ebd82268-efb7-407e-a540-3619ae053778 | scylla/source                         | default | set             |
+--------------------------------------+---------------------------------------+---------+-----------------+

Identify the tag of the snapshot you want to restore. To get a list of all available snapshots, execute the following command:

kubectl -n scylla-manager exec -ti deployment.apps/scylla-manager -- sctool backup list -c <CLUSTER_ID> --all-clusters -L <BACKUP_LOCATION>

Where:

  • CLUSTER_ID - the name or ID of a registered cluster with access to BACKUP_LOCATION.

  • BACKUP_LOCATION - the location in which the backup is stored.

In this example, BACKUP_LOCATION is s3:source-backup. For CLUSTER_ID, use the name of a cluster that has access to the backup location; in this example, it’s scylla/target.

$ kubectl -n scylla-manager exec -ti deployment.apps/scylla-manager -- sctool backup list -c scylla/target --all-clusters -L s3:source-backup
backup/ff36d7e0-af2e-458c-afe6-868e0f3396b2
Snapshots:
  - sm_20240105115931UTC (409MiB, 1 nodes)
Keyspaces:
  - system_schema (15 tables)
  - users (9 tables)

Restore schema

Restoring consists of two steps: first you restore the schema, and then the data. The commands below restore the sm_20240105115931UTC snapshot; replace it with the tag of the snapshot you want to restore. To restore the schema, create a restore task manually on the target ScyllaCluster by executing the following command:

kubectl -n scylla-manager exec -ti deployment.apps/scylla-manager -- sctool restore -c <CLUSTER_ID> -L <BACKUP_LOCATION> -T <SNAPSHOT_TAG> --restore-schema

Where:

  • CLUSTER_ID - the name or ID of the cluster you want to restore into.

  • BACKUP_LOCATION - the location in which the backup is stored.

  • SNAPSHOT_TAG - a tag of a snapshot that you want to restore.

When the task is created, the command outputs the ID of the restore task.

$ kubectl -n scylla-manager exec -ti deployment.apps/scylla-manager -- sctool restore -c scylla/target -L s3:source-backup -T sm_20240105115931UTC --restore-schema
restore/57228c52-7cf6-4271-8c8d-d446ff160747

Use the following command to check the progress of the restore task:

$ kubectl -n scylla-manager exec -ti deployment.apps/scylla-manager -- sctool progress -c scylla/target restore/57228c52-7cf6-4271-8c8d-d446ff160747
Restore progress
Run:            0dd20cdf-abc4-11ee-951c-6e7993cf42ed
Status:         DONE - restart required (see restore docs)
Start time:     05 Jan 24 12:15:02 UTC
End time:       05 Jan 24 12:15:09 UTC
Duration:       6s
Progress:       100% | 100%
Snapshot Tag:   sm_20240105115931UTC

+---------------+-------------+----------+----------+------------+--------+
| Keyspace      |    Progress |     Size |  Success | Downloaded | Failed |
+---------------+-------------+----------+----------+------------+--------+
| system_schema | 100% | 100% | 214.150k | 214.150k |   214.150k |      0 |
+---------------+-------------+----------+----------+------------+--------+

As suggested in the progress output, you will need to execute a rolling restart of the ScyllaCluster.

kubectl patch scyllacluster/target --type=merge -p='{"spec": {"forceRedeploymentReason": "schema restored"}}'

Use the following commands to wait until the restart has finished:

$ kubectl wait --for='condition=Progressing=False' scyllaclusters.scylla.scylladb.com/target
scyllacluster.scylla.scylladb.com/target condition met

$ kubectl wait --for='condition=Degraded=False' scyllaclusters.scylla.scylladb.com/target
scyllacluster.scylla.scylladb.com/target condition met

$ kubectl wait --for='condition=Available=True' scyllaclusters.scylla.scylladb.com/target
scyllacluster.scylla.scylladb.com/target condition met
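
Optionally, you can verify that the schema was restored before moving on. A sketch, assuming the operator’s default Pod naming (<cluster>-<datacenter>-<rack>-<ordinal>) and that the cluster runs in the scylla namespace:

$ kubectl -n scylla exec -ti pod/target-us-east-1-us-east-1a-0 -c scylla -- cqlsh -e 'DESCRIBE KEYSPACES'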

Caution

Restoring schema with ScyllaDB OS 5.4.X or ScyllaDB Enterprise 2024.1.X and consistent_cluster_management

After you’ve completed the above steps on a target cluster with consistent_cluster_management disabled, you’ll need to enable Raft by configuring the target cluster with consistent_cluster_management: true. Refer to API Reference to learn how to customize ScyllaDB configuration files.
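
For example, if you used a ConfigMap like the sketch above, flipping the setting could look like this; note that a merge patch replaces the whole scylla.yaml key, so include any other settings you rely on (the namespace and ConfigMap name are the same assumptions as before):

kubectl -n scylla patch configmap/scylla-config --type=merge -p='{"data":{"scylla.yaml":"consistent_cluster_management: true\n"}}'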

You will then need to execute a rolling restart of the ScyllaCluster for the change to take effect.

kubectl patch scyllacluster/target --type=merge -p='{"spec": {"forceRedeploymentReason": "raft enabled"}}'

Use the following commands to wait until the restart has finished:

$ kubectl wait --for='condition=Progressing=False' scyllaclusters.scylla.scylladb.com/target
scyllacluster.scylla.scylladb.com/target condition met

$ kubectl wait --for='condition=Degraded=False' scyllaclusters.scylla.scylladb.com/target
scyllacluster.scylla.scylladb.com/target condition met

$ kubectl wait --for='condition=Available=True' scyllaclusters.scylla.scylladb.com/target
scyllacluster.scylla.scylladb.com/target condition met

Restore tables

To restore the content of the tables, create a restore task manually on the target ScyllaCluster by executing the following command:

kubectl -n scylla-manager exec -ti deployment.apps/scylla-manager -- sctool restore -c <CLUSTER_ID> -L <BACKUP_LOCATION> -T <SNAPSHOT_TAG> --restore-tables

Where:

  • CLUSTER_ID - the name or ID of the cluster you want to restore into.

  • BACKUP_LOCATION - the location in which the backup is stored.

  • SNAPSHOT_TAG - a tag of a snapshot that you want to restore.

When the task is created, the command outputs the ID of the restore task.

$ kubectl -n scylla-manager exec -ti deployment.apps/scylla-manager -- sctool restore -c scylla/target -L s3:source-backup -T sm_20240105115931UTC --restore-tables
restore/63642069-bed5-4def-ba0f-68c49e47ace1

Use the following command to check the progress of the restore task:

$ kubectl -n scylla-manager exec -ti deployment.apps/scylla-manager -- sctool progress -c scylla/target restore/63642069-bed5-4def-ba0f-68c49e47ace1
Restore progress
Run:            ab015cef-abc8-11ee-9521-6e7993cf42ed
Status:         DONE
Start time:     05 Jan 24 12:48:04 UTC
End time:       05 Jan 24 12:48:15 UTC
Duration:       11s
Progress:       100% | 100%
Snapshot Tag:   sm_20240105115931UTC

+-------------+-------------+--------+---------+------------+--------+
| Keyspace    |    Progress |   Size | Success | Downloaded | Failed |
+-------------+-------------+--------+---------+------------+--------+
| users       | 100% | 100% | 409MiB |  409MiB |     409MiB |      0 |
+-------------+-------------+--------+---------+------------+--------+

Post-restore repair progress
Run:            ab015cef-abc8-11ee-9521-6e7993cf42ed
Status:         DONE
Start time:     05 Jan 24 12:48:04 UTC
End time:       05 Jan 24 12:48:15 UTC
Duration:       11s
Progress:       100%
Intensity:      1
Parallel:       0
Datacenters:
  - us-east-1

+-------------+--------------+----------+----------+
| Keyspace    |        Table | Progress | Duration |
+-------------+--------------+----------+----------+
| users       | users        | 100%     | 0s       |
+-------------+--------------+----------+----------+
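
Optionally, verify that the restored data is readable. A sketch using the users.users table from the example output above (the Pod name and namespace are the same assumptions as in the schema verification step):

$ kubectl -n scylla exec -ti pod/target-us-east-1-us-east-1a-0 -c scylla -- cqlsh -e 'SELECT * FROM users.users LIMIT 1;'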
