

Upgrading ScyllaDB Operator¶

ScyllaDB Operator supports N+1 upgrades only. That means you can only upgrade by one minor version at a time: wait for the new version to successfully roll out, and then update all ScyllaClusters that also run the image being upgraded. (ScyllaDB Operator injects it as a sidecar to help run and manage ScyllaDB.)
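As a rough illustration of the N+1 rule, a local sanity check on minor versions might look like the following sketch (the helper is hypothetical and not part of any ScyllaDB tooling):

```shell
# Hypothetical helper illustrating the N+1 rule: the target version must be
# exactly one minor version ahead of the currently deployed one.
is_n_plus_one() {
  current_minor="$( echo "$1" | cut -d. -f2 )"
  target_minor="$( echo "$2" | cut -d. -f2 )"
  [ "$(( target_minor - current_minor ))" -eq 1 ]
}

is_n_plus_one "1.20.0" "1.21.1" && echo "1.20.0 -> 1.21.1: supported"
is_n_plus_one "1.19.0" "1.21.0" || echo "1.19.0 -> 1.21.0: not supported, go through 1.20 first"
```

Patch-level updates within the same minor version are not covered by this sketch; it only expresses the one-minor-version-at-a-time constraint described above.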

We value the stability of our APIs and all API changes are backwards compatible.

Caution

If any additional steps are required for a specific version upgrade, they will be documented in the Upgrade steps for specific versions section below.

Upgrade via GitOps (kubectl)¶

A typical upgrade flow using GitOps (kubectl) consists of re-applying the manifests from the release you want to upgrade to. Note that ScyllaDB Operator’s dependencies also need to be updated to versions compatible with the target ScyllaDB Operator version.

Please refer to the GitOps installation instructions for details.

Upgrade via Helm¶

Prerequisites¶

  • Update ScyllaDB Operator dependencies to the versions compatible with the target ScyllaDB Operator version. Refer to the Helm installation instructions for details.

  • Make sure the Helm chart repository is up to date:

    helm repo add scylla https://scylla-operator-charts.storage.googleapis.com/stable
    helm repo update
    

Upgrade ScyllaDB Manager¶

Replace <release_name> with the name of your Helm release for ScyllaDB Manager and replace <version> with the version number you want to install:

helm upgrade --version <version> <release_name> scylla/scylla-manager

Upgrade ScyllaDB Operator¶

Replace <release_name> with the name of your Helm release for ScyllaDB Operator and replace <version> with the version number you want to install:

  1. Update CRD resources. We recommend using the --server-side flag with kubectl apply, if your version supports it.

    tmpdir=$( mktemp -d ) \
      && helm pull scylla/scylla-operator --version <version> --untar --untardir "${tmpdir}" \
      && find "${tmpdir}"/scylla-operator/crds/ -name '*.yaml' -printf '-f=%p ' \
      | xargs kubectl apply
    
  2. Update ScyllaDB Operator.

    helm upgrade --version <version> <release_name> scylla/scylla-operator
    

Upgrade steps for specific versions¶

Caution

The below instructions are supplementary to the standard upgrade procedure and don’t fully replace it. Make sure to familiarize yourself with both the standard upgrade procedure and the additional steps for the specific version you are upgrading to, if applicable, and follow them accordingly.

1.20 to 1.21¶

Ensure ScyllaCluster repair and backup task names are RFC 1123 compliant¶

Before upgrading, ensure that all ScyllaCluster repair (.spec.repairs[].name) and backup (.spec.backups[].name) task names conform to RFC 1123 subdomain requirements:

  • contain no more than 253 characters,

  • contain only lowercase alphanumeric characters, ‘-’ or ‘.’,

  • start with an alphanumeric character,

  • end with an alphanumeric character.
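As a quick local illustration, a single task name can be checked against the rules above from a shell (the helper function is hypothetical, not part of the operator):

```shell
# Check one task name against the RFC 1123 subdomain rules listed above:
# lowercase alphanumerics, '-' or '.', alphanumeric at both ends, <= 253 chars.
is_rfc1123_subdomain() {
  [ "${#1}" -le 253 ] && echo "$1" | grep -Eq '^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$'
}

is_rfc1123_subdomain "weekly-repair" && echo "weekly-repair: valid"
is_rfc1123_subdomain "invalid_repair" || echo "invalid_repair: invalid"
```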

You can run the following snippet to check for such ScyllaClusters:

output=$(kubectl get scyllaclusters --all-namespaces -o json | jq -r '
  def is_rfc1123_subdomain_invalid:
    (length <= 253) and test("^[a-z0-9]([a-z0-9.-]{0,251}[a-z0-9])?$") | not;
  .items[] |
  {
    namespace: .metadata.namespace,
    name: .metadata.name,
    invalid_repairs: [(.spec.repairs // [] | .[].name | select(is_rfc1123_subdomain_invalid))],
    invalid_backups: [(.spec.backups // [] | .[].name | select(is_rfc1123_subdomain_invalid))]
  } |
  select((.invalid_repairs | length > 0) or (.invalid_backups | length > 0)) |
  "\(.namespace)/\(.name)\n  Invalid repairs: \(if (.invalid_repairs | length) > 0 then (.invalid_repairs | join(", ")) else "(none)" end)\n  Invalid backups: \(if (.invalid_backups | length) > 0 then (.invalid_backups | join(", ")) else "(none)" end)\n"
') && if [ -z "$output" ]; then echo "All ScyllaCluster repair and backup task names are RFC 1123 compliant."; else echo "$output"; fi

You should get an output similar to the following if there are any ScyllaClusters with invalid repair or backup task names:

scylla/example
  Invalid repairs: invalid_repair
  Invalid backups: invalid_backup

or the following if all ScyllaCluster repair and backup task names are compliant:

All ScyllaCluster repair and backup task names are RFC 1123 compliant.

Note

Why is this necessary? Starting with v1.20.1, ScyllaDB Operator emitted warnings for ScyllaCluster repair and backup task names not conforming to RFC 1123 subdomain requirements. In v1.21, these warnings have been replaced with hard validation errors, and the operator will refuse to start if any existing ScyllaClusters have non-conforming task names.

Ensure ScyllaCluster spec.version is not empty¶

ScyllaCluster spec.version is now a required field. Any create or update request with an empty value will be rejected by the admission webhook. You can run the following snippet to check whether any of your existing ScyllaClusters have an empty spec.version:

kubectl get scyllaclusters --all-namespaces -o json | jq -r '
  .items[] | select((.spec.version // "") == "") |
  "\(.metadata.namespace)/\(.metadata.name)"
'

If the command returns no output, all your ScyllaClusters are unaffected. If any are listed, set their spec.version to a valid ScyllaDB image tag before upgrading.
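To illustrate what the filter matches, here is the same jq expression run against a made-up List document (namespaces, names, and the version tag are hypothetical):

```shell
# Illustration only: the same jq selection applied to a made-up List document.
flagged="$( jq -r '
  .items[] | select((.spec.version // "") == "") |
  "\(.metadata.namespace)/\(.metadata.name)"
' <<'EOF'
{
  "items": [
    {"metadata": {"namespace": "scylla", "name": "ok"}, "spec": {"version": "6.2.0"}},
    {"metadata": {"namespace": "scylla", "name": "broken"}, "spec": {}}
  ]
}
EOF
)"
echo "$flagged"   # scylla/broken
```

Only the object with no spec.version set is flagged; the one with a version tag passes.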

Note

Why is this not a breaking change? A ScyllaCluster with an empty spec.version was never functional. The migration controller would fail to reconcile it, leaving the cluster in a permanently degraded state. Making the field required only prevents new broken clusters from being created.

Review ScyllaDBMonitoring spec.type default change¶

The default value of ScyllaDBMonitoring spec.type has changed from SaaS to Platform. Any existing ScyllaDBMonitoring object that omits spec.type will render Platform dashboards after the upgrade instead of SaaS. The SaaS value is also now deprecated and will be removed in a future release; the admission webhook will emit a warning when it is set explicitly.

You can run the following snippet to list ScyllaDBMonitoring objects that will be affected by the default change or that still use the deprecated SaaS value:

output=$(kubectl get scylladbmonitorings --all-namespaces -o json | jq -r '
  .items[] |
  select((.spec.type // "SaaS") == "SaaS") |
  "\(.metadata.namespace)/\(.metadata.name)\t(spec.type=\(.spec.type // "<unset, defaults to SaaS>"))"
') && if [ -z "$output" ]; then echo "No ScyllaDBMonitoring objects rely on the SaaS type."; else echo "$output"; fi

If the command returns no output, none of your ScyllaDBMonitoring objects are affected. Otherwise, for each listed object, decide whether you want to:

  • keep the previous behavior by setting spec.type: SaaS explicitly before upgrading (the admission webhook will emit a deprecation warning, and you will need to migrate to Platform before a future release removes SaaS), or

  • adopt the new default by either setting spec.type: Platform explicitly or leaving spec.type unset.

Note

Why is this not a breaking change? The Platform dashboards are a superset of the SaaS dashboards, so switching from SaaS to Platform does not remove functionality.
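To make the default change concrete, the following sketch shows how the `.spec.type // "SaaS"` expression used in the check above resolves for made-up ScyllaDBMonitoring objects, treating an unset field as the pre-upgrade SaaS default:

```shell
# Illustration only: resolve the effective spec.type for made-up objects,
# treating an unset field as the pre-upgrade SaaS default.
resolved="$( jq -r '
  .items[] | "\(.metadata.name): \(.spec.type // "SaaS")"
' <<'EOF'
{
  "items": [
    {"metadata": {"name": "unset-type"}, "spec": {}},
    {"metadata": {"name": "explicit-platform"}, "spec": {"type": "Platform"}}
  ]
}
EOF
)"
echo "$resolved"
```

After the upgrade, the object with spec.type unset would resolve to Platform instead, which is exactly the behavior change the check is meant to surface.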

1.17 to 1.18¶

Upgrading from v1.17.x requires extra actions due to the removal of the standalone ScyllaDB Manager controller, which is now an integral part of ScyllaDB Operator. As a result, the standalone ScyllaDB Manager controller deployment and its related resources must be removed before upgrading ScyllaDB Operator.

Upgrade via GitOps (kubectl)¶

kubectl delete -n scylla-manager \
    clusterrole/scylladb:controller:aggregate-to-manager-controller \
    clusterrole/scylladb:controller:manager-controller \
    poddisruptionbudgets.policy/scylla-manager-controller \
    serviceaccounts/scylla-manager-controller \
    clusterrolebindings.rbac.authorization.k8s.io/scylladb:controller:manager-controller \
    deployments.apps/scylla-manager-controller

Upgrade via Helm¶

The ScyllaDB Manager Helm installation has to be upgraded before the ScyllaDB Operator Helm installation, following the standard Helm upgrade procedure. This ensures that the ScyllaDB Manager Controller is removed before ScyllaDB Operator is upgraded.

© 2026, ScyllaDB. All rights reserved. ScyllaDB and ScyllaDB Cloud are registered trademarks of ScyllaDB, Inc.
Last updated on 04 May 2026.