Migrate clusters to IPv6

This guide shows you how to migrate existing ScyllaDB clusters to IPv6 networking.

Before you begin

Prerequisites

  • Existing ScyllaDB cluster running on IPv4

  • Kubernetes cluster with IPv6 support enabled

  • Administrative access to the cluster

Important considerations

  • Downtime: Migration requires a rolling restart of all pods

  • Data safety: Data is preserved during migration

  • Client updates: Client applications may need updated connection strings after migration

  • Testing: Test the migration process in a non-production environment first

Warning

Migration involves a rolling restart of your cluster. Plan the migration during a maintenance window.

Choose your migration path

Select the migration approach that fits your needs:

  1. Migrate from IPv4 to dual-stack: Add IPv6 support while keeping IPv4 (recommended)

  2. Migrate from IPv4 to IPv6-only: Completely migrate to IPv6 (experimental)

Migrate from IPv4 to dual-stack

This is the recommended migration path because it:

  • Minimizes disruption to existing clients

  • Allows gradual client migration

  • Provides a fallback to IPv4 if issues arise

Step 1: Back up your data

Before making any changes, ensure you have recent backups. See Configuring backup tasks for details on setting up backups for your ScyllaCluster.
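
The linked guide covers the details; as a minimal sketch, a backup task can also be declared directly in the ScyllaCluster spec. The task name and bucket below are placeholders, and this assumes ScyllaDB Manager is deployed with access to the bucket:

apiVersion: scylla.scylladb.com/v1
kind: ScyllaCluster
metadata:
  name: your-cluster-name
  namespace: scylla
spec:
  # ... existing configuration ...
  backups:
    - name: pre-ipv6-migration      # placeholder task name
      location:
        - s3:your-backup-bucket     # placeholder bucket registered with ScyllaDB Manager
      retention: 3                  # keep the three most recent snapshots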

Step 2: Update cluster configuration

Edit your ScyllaCluster manifest to add dual-stack support:

apiVersion: scylla.scylladb.com/v1
kind: ScyllaCluster
metadata:
  name: your-cluster-name
  namespace: scylla
spec:
  # ... existing configuration ...
  
  # Add network configuration
  network:
    ipFamilyPolicy: PreferDualStack
    ipFamilies:
      - IPv4  # Keep IPv4 as primary
      - IPv6  # Add IPv6 support
    dnsPolicy: ClusterFirst

Step 3: Apply the configuration

Apply the updated configuration:

kubectl apply -f scylla-cluster.yaml

The operator will perform a rolling update of all pods.
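
You can watch the pods being restarted one at a time:

# Watch pods while the operator restarts them
kubectl get pods -n scylla -w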

Step 4: Monitor the migration

Watch the rolling update progress:

# Check cluster status
kubectl exec -it <pod-name> -n scylla -c scylla -- nodetool status

Expected output:

Datacenter: dc
===================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address           Load      Tokens  Owns  Host ID                              Rack
UN  10.244.2.7        501.79 KB 256     ?     4583fff5-2aa6-4041-9be8-c74bcabaff8c rack
UN  10.244.2.8        494.49 KB 256     ?     b1f889b4-80e7-4685-a3c5-1b81797c2ce4 rack
UN  10.244.2.9        494.96 KB 256     ?     7a4bb6da-415e-4fc3-a6ca-0369c0e76bf0 rack

Wait for all nodes to show UN (Up/Normal) status.
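
If you prefer not to re-run the command by hand, a simple watch loop works as well (the pod name is a placeholder):

# Re-check node status every 10 seconds until every node reports UN
watch -n 10 "kubectl exec <pod-name> -n scylla -c scylla -- nodetool status"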

Step 5: Verify dual-stack configuration

Confirm services have both IP families:

kubectl get svc -n scylla -o custom-columns=NAME:.metadata.name,IP-FAMILIES:.spec.ipFamilies,POLICY:.spec.ipFamilyPolicy

Expected output:

NAME                            IP-FAMILIES        POLICY
scylla-dual-stack-client        [IPv4 IPv6]        PreferDualStack
scylla-dual-stack-us-east-1a-0  [IPv4 IPv6]        PreferDualStack

Step 6: Test connectivity

Test that clients can connect via both protocols:

# Get service IPs
kubectl get svc your-cluster-name-client -n scylla -o jsonpath='{.spec.clusterIPs}'

Example output:

[
  "10.96.136.229",
  "fd00:10:96::6277"
]

Test the connection using the service name:

kubectl run -it --rm cqlsh --image=scylladb/scylla-cqlsh:latest --restart=Never -n scylla-test -- \
  your-cluster-name-client.scylla.svc.cluster.local 9042 \
  -e "SELECT cluster_name,broadcast_address FROM system.local;"

Example output:

-------------------+---------------------
 cluster_name      | scylla-cluster
 broadcast_address | 10.244.2.42

(1 rows)
pod "cqlsh" deleted

Step 7: Update client applications

Update your client applications to use the dual-stack service. Most clients will automatically work with dual-stack services without changes.
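
If you want to confirm that the service name resolves to both address families before pointing clients at it, you can run a throwaway pod with DNS tooling (the image choice is an assumption):

# Look up the client service; expect both an A and an AAAA record
kubectl run -it --rm dns-check --image=busybox:1.36 --restart=Never -n scylla -- \
  nslookup your-cluster-name-client.scylla.svc.cluster.local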

Step 8: Verify cluster health

After migration completes:

# Verify all nodes are up
kubectl exec -it <pod-name> -n scylla -c scylla -- nodetool status

Migrate from IPv4 to IPv6-only

Warning

Experimental Feature: IPv6-only configurations are experimental. See Production readiness for details. This migration path requires careful planning and testing.

Step 1: Verify IPv6-only readiness

Ensure your environment supports IPv6-only:

# Check that all nodes have IPv6 addresses
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' | tr ' ' '\n' | grep ':'

Example output:

fc00:f853:ccd:e793::2
fc00:f853:ccd:e793::4
fc00:f853:ccd:e793::3
fc00:f853:ccd:e793::5
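
You can also check that the Kubernetes nodes have IPv6 pod CIDRs assigned. This assumes your CNI derives pod addressing from the node podCIDRs field; some CNIs manage pod ranges independently:

# List pod CIDRs per node; expect an IPv6 range on each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDRs}{"\n"}{end}'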

Step 2: Update client applications

Update all client applications to support IPv6 before migrating the cluster.
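
For example, clients that use hard-coded IPv4 contact points should switch to the cluster's client service DNS name, which will resolve to the IPv6 cluster IP after the migration (the variable name below is only for illustration):

# Before: a literal IPv4 contact point
# CONTACT_POINT=10.96.136.229
# After: the service DNS name, which resolves for whichever IP family the service exposes
CONTACT_POINT=your-cluster-name-client.scylla.svc.cluster.local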

Step 3: Back up your data

Create a backup before proceeding. See Configuring backup tasks for details on setting up backups for your ScyllaCluster.

Step 4: Update cluster configuration

Edit your ScyllaCluster manifest for IPv6-only:

apiVersion: scylla.scylladb.com/v1
kind: ScyllaCluster
metadata:
  name: your-cluster-name
  namespace: scylla
spec:
  # ... existing configuration ...
  
  # Update network configuration
  network:
    ipFamilyPolicy: SingleStack
    ipFamilies:
      - IPv6  # IPv6 only
    dnsPolicy: ClusterFirst

Step 5: Apply and monitor

Apply the configuration and monitor the migration:

kubectl apply -f scylla-cluster.yaml

# Monitor the rolling update
kubectl get pods -n scylla -w

Step 6: Verify IPv6-only operation

Check that the cluster is using IPv6:

# Verify pod IPv6 addresses
kubectl get pods -n scylla -o wide

# Check cluster status
kubectl exec -it <pod-name> -n scylla -c scylla -- nodetool status

Expected output with IPv6 addresses:

Datacenter: datacenter
===================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address           Load      Tokens  Owns  Host ID                              Rack
UN  fd00:10:244:1::7f 501.79 KB 256     ?     4583fff5-2aa6-4041-9be8-c74bcabaff8c rack
UN  fd00:10:244:2::6d 494.49 KB 256     ?     b1f889b4-80e7-4685-a3c5-1b81797c2ce4 rack
UN  fd00:10:244:3::6c 494.96 KB 256     ?     7a4bb6da-415e-4fc3-a6ca-0369c0e76bf0 rack

Step 7: Test client connectivity

Test that clients can connect:

kubectl run -it --rm cqlsh --image=scylladb/scylla-cqlsh:latest --restart=Never -n scylla-test -- \
  your-cluster-name-client.scylla.svc.cluster.local 9042 \
  -e "SELECT cluster_name,broadcast_address FROM system.local;"

Expected output:

-------------------+---------------------
 cluster_name      | scylla-cluster
 broadcast_address | fd00:10:244:2::25

(1 rows)
pod "cqlsh" deleted

Step 8: Verify cluster health

Check cluster health:

# Verify all nodes are up
kubectl exec -it <pod-name> -n scylla -c scylla -- nodetool status

Roll back to IPv4

If you encounter issues during migration, you can roll back to the previous IPv4-only configuration.

Update your ScyllaCluster manifest:

network:
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4

Apply the rollback:

kubectl apply -f scylla-cluster.yaml

The operator will perform a rolling update back to IPv4.
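
Once the rollback finishes, re-run the service check from the dual-stack migration to confirm that only IPv4 is configured:

kubectl get svc -n scylla -o custom-columns=NAME:.metadata.name,IP-FAMILIES:.spec.ipFamilies,POLICY:.spec.ipFamilyPolicy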

Troubleshooting

If you encounter problems during migration:

Nodes not joining cluster

Symptom: Pods are running but nodes show as down

Solution:

  1. Check DNS resolution:

    kubectl exec -it <pod-name> -n scylla -- nslookup <service-name>
    
  2. Verify network configuration:

    kubectl get svc -n scylla -o yaml | grep -A 5 -i family
    
  3. Review pod logs:

    kubectl logs <pod-name> -n scylla -c scylla
    

Connection failures

Symptom: Clients cannot connect after migration

Solution:

  1. Verify service IPs:

    kubectl get svc -n scylla -o wide
    
  2. Check client configuration for IPv6 support
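
As a quick check for the second point, confirm that the client service actually exposes an IP of the family your clients expect:

kubectl get svc your-cluster-name-client -n scylla -o jsonpath='{.spec.clusterIPs}'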

For more troubleshooting steps, see Troubleshoot IPv6 issues.

Migration best practices

  1. Test first: Always test migration in a non-production environment

  2. Backup: Create backups before starting migration

  3. Monitoring: Set up alerts for cluster health during migration

  4. Gradual approach: Use dual-stack first, then migrate to IPv6-only if needed

  5. Client coordination: Coordinate with application teams before migration

  6. Documentation: Document your specific migration steps and any customizations

Next steps

  • Troubleshoot IPv6 issues

  • Configure IPv6 networking

  • Understand IPv6 networking concepts

Related documentation

  • How to configure IPv6 networking

  • Troubleshoot IPv6 networking

  • IPv6 networking concepts

  • IPv6 configuration reference
