This guide is focused on deploying Scylla on EKS with improved performance. The performance tuning applied by the script is specific to the machine tiers used here and won't carry over to different instance types. It configures the kubelets on the EKS nodes with the static CPU manager policy and assembles the local SSD disks into a RAID0 array for maximum performance.
Most of the commands used to set up the Scylla cluster are the same for all environments. As such, we have tried to keep them separate in the general guide.
If you don’t want to run the commands step-by-step, you can just run a script that will set everything up for you:
# Edit according to your preference
EKS_REGION=us-east-1
EKS_ZONES=us-east-1a,us-east-1b,us-east-1c
# From inside the examples/eks folder
cd examples/eks
./eks.sh -z "$EKS_ZONES" -r "$EKS_REGION"
After you deploy, see how you can benchmark your cluster with cassandra-stress.
First of all, we export all the configuration options as environment variables. Edit according to your own environment.
EKS_REGION=us-east-1
EKS_ZONES=us-east-1a,us-east-1b,us-east-1c
CLUSTER_NAME=scylla-demo
For this guide, we’ll create an EKS cluster with the following:
A NodeGroup of 3 i3.2xlarge nodes, where the Scylla Pods will be deployed. These nodes will only accept pods that have the scylla-clusters toleration (a sketch of the matching toleration follows the snippet below).
- name: scylla-pool
  instanceType: i3.2xlarge
  desiredCapacity: 3
  labels:
    scylla.scylladb.com/node-type: scylla
  taints:
    role: "scylla-clusters:NoSchedule"
  ssh:
    allow: true
  kubeletExtraConfig:
    cpuManagerPolicy: static
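For reference, pods meant to land on this pool need a toleration matching the taint above. A minimal sketch of the placement fragment you would put under a rack in the ScyllaCluster manifest (the generic guide contains the full manifest):
placement:
  tolerations:
  - key: role
    operator: Equal
    value: scylla-clusters
    effect: NoSchedule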
A NodeGroup of 4 c4.2xlarge nodes, used to deploy cassandra-stress later on. These nodes will only accept pods that have the cassandra-stress toleration.
- name: cassandra-stress-pool
  instanceType: c4.2xlarge
  desiredCapacity: 4
  labels:
    pool: "cassandra-stress-pool"
  taints:
    role: "cassandra-stress:NoSchedule"
  ssh:
    allow: true
A NodeGroup of 1 i3.large node, where the monitoring stack and operator will be deployed.
- name: monitoring-pool
  instanceType: i3.large
  desiredCapacity: 1
  labels:
    pool: "monitoring-pool"
  ssh:
    allow: true
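These node groups are fragments of a single eksctl ClusterConfig; the eks.sh script assembles the full config and creates the cluster for you. As a rough sketch, assuming you saved the complete config as cluster.yaml, the surrounding file looks like this:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: scylla-demo
  region: us-east-1

availabilityZones:
- us-east-1a
- us-east-1b
- us-east-1c

nodeGroups:
- name: scylla-pool
  # ... the three node groups shown above ...
You could then create the cluster manually with eksctl create cluster -f cluster.yaml.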
The script requires several dependencies:
eksctl - See: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html
kubectl - See: https://kubernetes.io/docs/tasks/tools/install-kubectl/
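You can verify both tools are installed and on your PATH before running the script:
eksctl version
kubectl version --client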
Refer to Deploying Scylla on a Kubernetes Cluster in the ScyllaDB Operator documentation to deploy the ScyllaDB Operator and its prerequisites.
ScyllaDB, except when in developer mode, requires storage with XFS filesystem. The local NVMes from the cloud provider usually come as individual devices. To use their full capacity together, you’ll first need to form a RAID array from those disks.
NodeConfig performs the necessary RAID configuration and XFS filesystem creation, and optimizes the nodes. You can read more about it in the Performance tuning section of ScyllaDB Operator's documentation.
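The manifest used below, examples/eks/nodeconfig-alpha.yaml, selects the Scylla nodes and describes the disk setup to perform on them. The following is only a hedged sketch of its general shape, with illustrative names and paths that may differ between operator versions; refer to the file in the repository for the authoritative content:
apiVersion: scylla.scylladb.com/v1alpha1
kind: NodeConfig
metadata:
  name: cluster
spec:
  placement:
    nodeSelector:
      scylla.scylladb.com/node-type: scylla   # label set on the scylla-pool node group
    tolerations:
    - key: role
      operator: Equal
      value: scylla-clusters
      effect: NoSchedule
  localDiskSetup:
    raids:
    - name: nvmes                              # illustrative name
      type: RAID0
      RAID0:
        devices:
          nameRegex: ^/dev/nvme\d+n\d+$        # assumption: match the local NVMe devices
    filesystems:
    - device: /dev/md/nvmes
      type: xfs
    mounts:
    - device: /dev/md/nvmes
      mountPoint: /mnt/persistent-volumes      # illustrative mount point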
Deploy NodeConfig to let it take care of the above operations:
kubectl apply --server-side -f examples/eks/nodeconfig-alpha.yaml
Afterwards, deploy ScyllaDB's Local Volume Provisioner, which dynamically provisions PersistentVolumes for your ScyllaDB clusters on the mounted XFS filesystems created earlier on top of the configured RAID0 arrays.
kubectl -n local-csi-driver apply --server-side -f examples/common/local-volume-provisioner/local-csi-driver/
kubectl apply --server-side -f examples/common/local-volume-provisioner/storageclass_xfs.yaml
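The StorageClass applied above is what you reference from each rack's storage definition in your ScyllaCluster manifest. A minimal sketch, assuming the class is named scylladb-local-xfs and using an illustrative capacity (check storageclass_xfs.yaml and the generic guide for the actual values):
storage:
  storageClassName: scylladb-local-xfs
  capacity: 1800G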
Now you can follow the steps described in Deploying Scylla on a Kubernetes Cluster to launch your ScyllaDB cluster in a highly performant environment.
Instructions on how to access the database can also be found in the generic guide.
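For example, once everything is up you can check the ScyllaCluster status and open a CQL shell; the scylla namespace, cluster name, and pod name below are assumptions for illustration:
kubectl -n scylla get scyllaclusters.scylla.scylladb.com
# Pod name is illustrative - list the actual pods with: kubectl -n scylla get pods
kubectl -n scylla exec -it scylla-cluster-us-east-1-us-east-1a-0 -c scylla -- cqlsh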
Once you are done with your experiments, delete your cluster using the following command:
eksctl delete cluster "${CLUSTER_NAME}"