Build multiple GKE clusters with inter-Kubernetes networking

This document describes the process of creating multiple GKE clusters in a shared VPC and explains the steps necessary for configuring inter-Kubernetes networking between clusters in different regions. The interconnected clusters can serve as a platform for deploying a multi-datacenter ScyllaDB cluster.

This guide will walk you through the process of creating and configuring GKE clusters in two distinct regions. Although it is only an example setup, it can easily be built upon to create infrastructure tailored to your specific needs. For simplicity, several predefined values are used throughout the document. The values are only exemplary and can be adjusted to your preference.

Prerequisites

To follow this guide, you first need to install and configure the following tools for creating and managing GCP and Kubernetes resources:

  • gcloud CLI - Google Cloud Command Line Interface, a command line tool for working with Google Cloud resources and services directly.

  • kubectl – A command line tool for working with Kubernetes clusters.

See Install the Google Cloud CLI in GCP documentation and Install Tools in Kubernetes documentation for reference.
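
As a quick sanity check, you can confirm that both tools are available and that gcloud is authenticated against the GCP project you intend to use. The project ID below is only a placeholder:

gcloud version
kubectl version --client
gcloud auth login
gcloud config set project my-gcp-project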

Create and configure a VPC network

For the clusters to have inter-Kubernetes networking, you will create a virtual network shared between all the instances, with dedicated subnets for each of the clusters. To create the subnets manually, create the network in custom subnet mode.

Create the VPC network

Run the below command to create the network:

gcloud compute networks create scylladb --subnet-mode=custom
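
Optionally, verify that the network exists and was created in custom subnet mode:

gcloud compute networks list --filter='name=scylladb'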

With the VPC network created, create a dedicated subnet in each region in which the clusters will reside, with secondary CIDR ranges for their Pod and Service pools.

Create VPC network subnets

To create a subnet for the first cluster in region us-east1, run the below command:

gcloud compute networks subnets create scylladb-us-east1 \
    --region=us-east1 \
    --network=scylladb \
    --range=10.0.0.0/20 \
    --secondary-range='cluster=10.1.0.0/16,services=10.2.0.0/20'

To create a subnet for the second cluster in region us-west1, run the below command:

gcloud compute networks subnets create scylladb-us-west1 \
    --region=us-west1 \
    --network=scylladb \
    --range=172.16.0.0/20 \
    --secondary-range='cluster=172.17.0.0/16,services=172.18.0.0/20'

Caution

It is required that the IPv4 address ranges of the subnets allocated for the GKE clusters do not overlap.
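
You can list the subnets attached to the network to double-check their primary and secondary ranges, for example to confirm that they do not overlap. The --format projection below is optional:

gcloud compute networks subnets list --network=scylladb
gcloud compute networks subnets describe scylladb-us-east1 \
    --region=us-east1 \
    --format='yaml(ipCidrRange,secondaryIpRanges)'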

Refer to Create a VPC-native cluster and Alias IP ranges in GKE documentation for more information about VPC native clusters and alias IP ranges.

Create GKE clusters

With the VPC network created, you will now create two VPC native GKE clusters in dedicated regions.

Create the first GKE cluster

Run the following command to create the first GKE cluster in the us-east1 region:

gcloud container clusters create scylladb-us-east1 \
    --location=us-east1-b \
    --node-locations='us-east1-b,us-east1-c' \
    --machine-type=n1-standard-8 \
    --num-nodes=1 \
    --disk-type=pd-ssd \
    --disk-size=20 \
    --image-type=UBUNTU_CONTAINERD \
    --no-enable-autoupgrade \
    --no-enable-autorepair \
    --enable-ip-alias \
    --network=scylladb \
    --subnetwork=scylladb-us-east1 \
    --cluster-secondary-range-name=cluster \
    --services-secondary-range-name=services
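
Creating the cluster with gcloud normally adds a matching entry to your kubeconfig. If the entry is missing, for example because you are working from a different machine, you can fetch the credentials explicitly (on older gcloud versions, use --zone instead of --location):

gcloud container clusters get-credentials scylladb-us-east1 --location=us-east1-b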

Refer to the Creating a GKE cluster section of the Scylla Operator documentation for more information regarding the configuration and deployment of additional node pools, including the one dedicated to ScyllaDB nodes.
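
For illustration only, a dedicated node pool for ScyllaDB nodes could be added along the following lines. The machine type, taint, and label shown here are assumptions made for this example; refer to the linked section for the recommended configuration:

gcloud container node-pools create scylladb-pool \
    --cluster=scylladb-us-east1 \
    --location=us-east1-b \
    --node-locations='us-east1-b,us-east1-c' \
    --machine-type=n1-standard-8 \
    --num-nodes=1 \
    --disk-type=pd-ssd \
    --disk-size=20 \
    --image-type=UBUNTU_CONTAINERD \
    --node-taints='role=scylla-clusters:NoSchedule' \
    --node-labels='pool=scylladb-pool'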

You will need to get the cluster’s context for future operations. To do so, use the below command:

kubectl config current-context

For any kubectl command that you want to run against this cluster, use the --context flag with the value returned by the above command.
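
For example, you can save the returned context in a shell variable (the variable name is arbitrary) and use it in subsequent commands:

CONTEXT_US_EAST1="$( kubectl config current-context )"
kubectl --context="${CONTEXT_US_EAST1}" get nodes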

Deploy Scylla Operator

To deploy Scylla Operator, follow the installation guide.
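
After installing the operator, you can wait for its deployment to become available. The command below assumes the default scylla-operator namespace and deployment name used by the installation guide:

kubectl --context="${CONTEXT_US_EAST1}" -n scylla-operator rollout status deployment.apps/scylla-operator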

Create the second GKE cluster

Run the following command to create the second GKE cluster in the us-west1 region:

gcloud container clusters create scylladb-us-west1 \
    --location=us-west1-b \
    --node-locations='us-west1-b,us-west1-c' \
    --machine-type=n1-standard-8 \
    --num-nodes=1 \
    --disk-type=pd-ssd \
    --disk-size=20 \
    --image-type=UBUNTU_CONTAINERD \
    --no-enable-autoupgrade \
    --no-enable-autorepair \
    --enable-ip-alias \
    --network=scylladb \
    --subnetwork=scylladb-us-west1 \
    --cluster-secondary-range-name=cluster \
    --services-secondary-range-name=services

Follow steps analogous to those described for the first cluster to prepare the second GKE cluster for running ScyllaDB.
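
In particular, retrieve and record the second cluster's context right after creating it, and deploy Scylla Operator there as well. For example, using an arbitrary variable name:

CONTEXT_US_WEST1="$( kubectl config current-context )"
kubectl --context="${CONTEXT_US_WEST1}" get nodes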

Configure the firewall rules

When creating a cluster, GKE creates several ingress firewall rules that enable the instances to communicate with each other. To establish interconnectivity between the two Kubernetes clusters, you will now add the Pod IPv4 address ranges allocated to both clusters to the source ranges of each cluster's corresponding firewall rule.

First, retrieve the name of the firewall rule associated with the first cluster, which permits traffic between all Pods on a cluster, as required by the Kubernetes networking model. The rule name is in the following format: gke-[cluster-name]-[cluster-hash]-all.

To retrieve it, run the below command:

gcloud compute firewall-rules list --filter='name~gke-scylladb-us-east1-.*-all'

The output should resemble the following:

NAME                                NETWORK   DIRECTION  PRIORITY  ALLOW                     DENY  DISABLED
gke-scylladb-us-east1-f17db261-all  scylladb  INGRESS    1000      udp,icmp,esp,ah,sctp,tcp        False

Modify the rule by updating the rule’s source ranges with the allocated Pod IPv4 address ranges of both clusters:

gcloud compute firewall-rules update gke-scylladb-us-east1-f17db261-all --source-ranges='10.1.0.0/16,172.17.0.0/16'
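
You can confirm the rule's updated source ranges with:

gcloud compute firewall-rules describe gke-scylladb-us-east1-f17db261-all --format='value(sourceRanges)'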

Follow the analogous steps for the other cluster. In this example, its corresponding firewall rule name is gke-scylladb-us-west1-0bb60902-all. To update it, you would run:

gcloud compute firewall-rules update gke-scylladb-us-west1-0bb60902-all --source-ranges='10.1.0.0/16,172.17.0.0/16'

Refer to Automatically created firewall rules in GKE documentation for more information.
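
Optionally, you can smoke-test Pod-to-Pod connectivity across the clusters. The snippet below is only a sketch: it assumes the context variables from the previous steps and uses a temporary busybox Pod, relying on the fact that the rules shown above allow ICMP:

kubectl --context="${CONTEXT_US_EAST1}" run netcheck --image=busybox --restart=Never -- sleep 3600
kubectl --context="${CONTEXT_US_EAST1}" wait --for=condition=Ready pod/netcheck
POD_IP="$( kubectl --context="${CONTEXT_US_EAST1}" get pod netcheck -o jsonpath='{.status.podIP}' )"
kubectl --context="${CONTEXT_US_WEST1}" run netcheck --rm -it --image=busybox --restart=Never -- ping -c 3 "${POD_IP}"
kubectl --context="${CONTEXT_US_EAST1}" delete pod netcheck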


Having followed the above steps, you should now have a platform prepared for deploying a multi-datacenter ScyllaDB cluster. Refer to Deploy a multi-datacenter ScyllaDB cluster in multiple interconnected Kubernetes clusters in Scylla Operator documentation for guidance.
