
Build multiple Amazon EKS clusters with inter-Kubernetes networking¶

This document describes the process of creating multiple Amazon EKS clusters in different regions, using separate VPCs, and explains the steps necessary for configuring inter-Kubernetes networking between the clusters. The interconnected clusters can serve as a platform for deploying a multi-datacenter ScyllaDB cluster.

This guide will walk you through the process of creating and configuring EKS clusters in two distinct regions. Although it is only an example setup, it can easily be built upon to create infrastructure tailored to your specific needs. For simplicity, several predefined values are used throughout the document. The values are only exemplary and can be adjusted to your preference.

Prerequisites¶

To follow this guide, you first need to install and configure the tools required to create and manage AWS and Kubernetes resources:

  • eksctl – A command line tool for working with EKS clusters.

  • kubectl – A command line tool for working with Kubernetes clusters.

For more information, see Getting started with Amazon EKS – eksctl in the AWS documentation.

Create EKS clusters¶

Create the first EKS cluster¶

Below is the required specification for the first cluster.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: scylladb-us-east-1
  region: us-east-1

availabilityZones:
- us-east-1a
- us-east-1b
- us-east-1c

vpc:
  cidr: 10.0.0.0/16

nodeGroups:
  ...

Complete the first cluster's configuration file and save it as cluster-us-east-1.yaml. Refer to the Creating an EKS cluster section of the ScyllaDB Operator documentation for the reference configuration of node groups.
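For illustration, a node group entry might look like the following sketch. The group name and instance parameters are exemplary, not prescriptive; consult the reference above for values suited to ScyllaDB workloads.

```yaml
nodeGroups:
- name: scylladb-pool          # hypothetical node group name
  instanceType: i4i.2xlarge    # exemplary NVMe-backed instance type
  desiredCapacity: 3
  availabilityZones:
  - us-east-1a
```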

To deploy the first cluster, use the below command:

eksctl create cluster -f=cluster-us-east-1.yaml

Run the following command to learn the status and VPC ID of the cluster:

eksctl get cluster --name=scylladb-us-east-1 --region=us-east-1

You will need the cluster's kubeconfig context for future operations. To retrieve the current context, run:

kubectl config current-context

For any kubectl commands that you run against this cluster, pass the --context flag with the value returned by the above command.

Deploy Scylla Operator¶

To deploy Scylla Operator, follow the installation guide.

Create the second EKS cluster¶

Below is the required specification for the second cluster. As was the case with the first cluster, the provided values are only exemplary and can be adjusted according to your needs.

Caution

It is required that the VPCs of the two EKS clusters have non-overlapping IPv4 network ranges.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: scylladb-us-east-2
  region: us-east-2

availabilityZones:
- us-east-2a
- us-east-2b
- us-east-2c

vpc:
  cidr: 172.16.0.0/16

nodeGroups:
  ...

Follow analogous steps to create the second EKS cluster and prepare it for running ScyllaDB.
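As a quick sanity check of the non-overlap requirement, the two VPC CIDRs can be compared with Python's standard ipaddress module (values taken from this guide):

```python
# Verify that the two VPC CIDRs used in this guide do not overlap.
# Overlapping ranges would make traffic over the VPC peering unroutable.
import ipaddress

vpc_us_east_1 = ipaddress.ip_network("10.0.0.0/16")    # first cluster's VPC
vpc_us_east_2 = ipaddress.ip_network("172.16.0.0/16")  # second cluster's VPC

assert not vpc_us_east_1.overlaps(vpc_us_east_2), "VPC CIDRs must not overlap"
print("CIDRs are disjoint; safe to peer")
```

The same check applies to any CIDRs you substitute for the exemplary ones.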

Configure the network¶

Each of the prepared Kubernetes clusters has a dedicated VPC network. To route traffic between the two VPC networks, you need to create a networking connection between them, known as VPC peering.

Create VPC peering¶

Refer to Create a VPC peering connection in the AWS documentation for instructions on creating a VPC peering connection between the two previously created VPCs.

In this example, the ID of the created VPC peering connection is pcx-08077dcc008fbbab6.

Update route tables¶

To enable private IPv4 traffic between instances across the peered VPCs, add a route to every route table associated with the instances' subnets in both VPCs. In each route table, the destination of the new route is the CIDR of the other cluster's VPC, and the target is the ID of the VPC peering connection.

The following is an example of the route tables that enable communication of instances in two peered VPCs. Each table has a local route and the added route which sends traffic targeted at the other VPC to the peered network connection. The other preconfigured routes are omitted for readability.

Route table                                         Destination     Target
eksctl-scylladb-us-east-1-cluster/PublicRouteTable  10.0.0.0/16     local
                                                    172.16.0.0/16   pcx-08077dcc008fbbab6
eksctl-scylladb-us-east-2-cluster/PublicRouteTable  172.16.0.0/16   local
                                                    10.0.0.0/16     pcx-08077dcc008fbbab6

Refer to Update your route tables for a VPC peering connection in AWS documentation for more information.
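The rule above (destination is the peer VPC's CIDR, target is the peering connection ID) can be sketched as a small helper. The helper and the VPC name keys are illustrative only, not part of any AWS API:

```python
# Sketch: derive the route each VPC's route tables need for full-mesh peering.
# The function and the name keys are hypothetical; only the CIDRs and peering
# connection ID come from this guide.
PEERING_ID = "pcx-08077dcc008fbbab6"

VPC_CIDRS = {
    "scylladb-us-east-1": "10.0.0.0/16",
    "scylladb-us-east-2": "172.16.0.0/16",
}

def routes_to_add(vpc_cidrs, peering_id):
    """Map each VPC to the (destination CIDR, target) routes it must add."""
    return {
        name: [(cidr, peering_id) for other, cidr in vpc_cidrs.items() if other != name]
        for name in vpc_cidrs
    }

for vpc, routes in routes_to_add(VPC_CIDRS, PEERING_ID).items():
    print(vpc, "->", routes)
```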

Update security groups¶

To allow traffic to flow to and from instances associated with security groups in the peered VPC, you need to update the inbound rules of the VPCs’ shared security groups.

Below is an example of the inbound rules to be added to the corresponding security groups of the two VPCs.

Security group name                                                             Type         Protocol   Port range   Source
eksctl-scylladb-us-east-1-cluster-ClusterSharedNodeSecurityGroup-TD05V9EVU3B8   All traffic  All        All          Custom 172.16.0.0/16
eksctl-scylladb-us-east-2-cluster-ClusterSharedNodeSecurityGroup-1FR9YDLU0VE7M  All traffic  All        All          Custom 10.0.0.0/16

The names of the shared security groups of your VPCs should be similar to the ones presented in the example.


Having followed the above steps, you should now have a platform prepared for deploying a multi-datacenter ScyllaDB cluster. Refer to Deploy a multi-datacenter ScyllaDB cluster in multiple interconnected Kubernetes clusters in ScyllaDB Operator documentation for guidance.

Last updated on 04 June 2025.