This guide walks you through setting up a production-ready ClickHouse cluster on Kubernetes using the Altinity ClickHouse Operator.
It also includes steps to test data persistence and connectivity to ensure your deployment is reliable.

Prerequisites

Before starting, ensure the following tools and components are installed:
| Requirement | Description |
| --- | --- |
| Kubernetes cluster | Any K8s distribution (GKE, EKS, DigitalOcean, Minikube, etc.) |
| kubectl | CLI tool to interact with your cluster |
| Helm 3+ | To install the ClickHouse Operator |
| StorageClass | Persistent volume provisioner (e.g., do-block-storage on DigitalOcean or gp3-encrypted on AWS) |
| Namespace | Recommended: infra or clickhouse |

Step 1: Install ClickHouse Operator

The Altinity ClickHouse Operator manages ClickHouse clusters declaratively via CRDs (Custom Resource Definitions).

Install via Helm

helm repo add altinity https://helm.altinity.com
helm repo update
helm install clickhouse-operator altinity/clickhouse-operator --namespace clickhouse --create-namespace
To verify installation:
kubectl get pods -n clickhouse
You should see something like:
NAME                                    READY   STATUS    RESTARTS   AGE
clickhouse-operator-7f999c9c4b-xyz12    1/1     Running   0          1m

Step 2: Create ClickHouse Cluster

1. Generate a SHA256 hash of your user's password

docker run --rm alpine sh -c "echo -n 'YOUR_PASSWORD' | sha256sum | awk '{print \$1}'"
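If you prefer not to use Docker, the same hash can be computed with local tools (this assumes `sha256sum` and `awk` are available, as on most Linux systems):

```shell
# Compute the SHA256 hex digest of the password locally.
# 'YOUR_PASSWORD' is a placeholder; substitute your real password.
# printf '%s' avoids the trailing newline that plain echo would add.
printf '%s' 'YOUR_PASSWORD' | sha256sum | awk '{print $1}'
```

Either way, the result is a 64-character hex string that goes into the password_sha256_hex field in the manifest below.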
2. Create the ClickHouse cluster manifest

Create a file named clickhouse-cluster.yaml with the following content:
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: suprsend-ch
spec:
  templates:
    podTemplates:
      - name: clickhouse-pod-template
        spec:
          containers:
            - name: clickhouse
              image: clickhouse/clickhouse-server:25.9
              volumeMounts:
                - name: clickhouse-storage
                  mountPath: /var/lib/clickhouse
    volumeClaimTemplates:
      - name: clickhouse-storage
        reclaimPolicy: Retain
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 200Gi # Adjust to your storage requirements
          storageClassName: do-block-storage # Must match a StorageClass available in your cluster
  configuration:
    users:
      # create appuser with a SHA256-hashed password
      "appuser/profile": "default"
      "appuser/password_sha256_hex": "GENERATED_SHA256_HEX_OF_YOUR_PASSWORD"
      # Allow only private RFC1918 ranges (edit to your exact pod CIDR if you know it)
      "appuser/networks/ip": "10.0.0.0/8"
      "appuser/networks/ip1": "172.16.0.0/12"
      "appuser/networks/ip2": "192.168.0.0/16"
    clusters:
      - name: suprsend-ch
        layout:
          shardsCount: 1
          replicasCount: 1
        templates:
          podTemplate: clickhouse-pod-template
3. Apply the manifest

kubectl apply -f clickhouse-cluster.yaml -n clickhouse
4. Verify the cluster is running

Wait until the pods are running:
kubectl get pods -n clickhouse
Expected output:
NAME                                      READY   STATUS    RESTARTS   AGE
chi-suprsend-ch-suprsend-ch-0-0-0         1/1     Running   0          1m
The chi prefix stands for ClickHouseInstallation, the CRD managed by the Altinity Operator.
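A quick sketch of how the pod name in the output above is composed (the pattern shown is inferred from this deployment's names):

```shell
# Pod names follow: chi-<installation>-<cluster>-<shard>-<replica>-<ordinal>
CHI_NAME="suprsend-ch"      # metadata.name of the ClickHouseInstallation
CLUSTER_NAME="suprsend-ch"  # spec.configuration.clusters[0].name
SHARD=0
REPLICA=0
echo "chi-${CHI_NAME}-${CLUSTER_NAME}-${SHARD}-${REPLICA}-0"
# Prints: chi-suprsend-ch-suprsend-ch-0-0-0
```

Because the installation and cluster share the name suprsend-ch, it appears twice in the pod name.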

Step 3: Verify ClickHouse Cluster Health

Run the following command:
kubectl logs chi-suprsend-ch-suprsend-ch-0-0-0 -n clickhouse | grep "Ready for connections"
If you see “Ready for connections”, your ClickHouse node is healthy.

Step 4: Connect to ClickHouse

kubectl port-forward pod/chi-suprsend-ch-suprsend-ch-0-0-0 8123:8123 -n clickhouse
Then connect via the HTTP interface:
curl 'http://localhost:8123/?query=SELECT%201'
Expected response:
1
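Queries containing spaces or special characters must be URL-encoded before being placed in the query parameter. A small sketch of the encoding step (assuming python3 is available locally):

```shell
# URL-encode an arbitrary SQL statement for ClickHouse's HTTP interface.
QUERY='SELECT 1 + 1'
ENCODED=$(python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))' "$QUERY")
echo "http://localhost:8123/?query=${ENCODED}"
```

The resulting URL can then be passed to curl exactly like the SELECT 1 example.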

Step 5: Test Persistence

To verify that your data persists across pod restarts:
1. Create a test table and insert some data

The test.data table does not exist yet, so create it first:
kubectl exec -it chi-suprsend-ch-suprsend-ch-0-0-0 -n clickhouse -- clickhouse-client -q "CREATE DATABASE IF NOT EXISTS test"
kubectl exec -it chi-suprsend-ch-suprsend-ch-0-0-0 -n clickhouse -- clickhouse-client -q "CREATE TABLE IF NOT EXISTS test.data (id UInt32, message String) ENGINE = MergeTree ORDER BY id"
kubectl exec -it chi-suprsend-ch-suprsend-ch-0-0-0 -n clickhouse -- clickhouse-client -q "INSERT INTO test.data VALUES (3, 'Persistence Test')"
2. Delete the pod

kubectl delete pod chi-suprsend-ch-suprsend-ch-0-0-0 -n clickhouse
3. Wait for it to restart

kubectl get pods -n clickhouse -w
4. Reconnect and check the data

kubectl exec -it chi-suprsend-ch-suprsend-ch-0-0-0 -n clickhouse -- clickhouse-client -q "SELECT * FROM test.data;"
✅ If you still see all inserted rows — persistence is working properly.

Step 6: Cleanup or Reinstall

If you need to reinstall or reset everything:
kubectl delete chi suprsend-ch -n clickhouse
kubectl delete pvc -n clickhouse --all
kubectl delete pod -n clickhouse --all
Warning: deleting PVCs erases all stored data unless the underlying PersistentVolumes' reclaim policy is set to Retain.

Step 7: Common Testing Commands

| Purpose | Command |
| --- | --- |
| List databases | kubectl exec -it chi-suprsend-ch-suprsend-ch-0-0-0 -n clickhouse -- clickhouse-client -q "SHOW DATABASES" |
| Check system tables | kubectl exec -it chi-suprsend-ch-suprsend-ch-0-0-0 -n clickhouse -- clickhouse-client -q "SELECT name, engine FROM system.tables WHERE database='test'" |
| View pod logs | kubectl logs chi-suprsend-ch-suprsend-ch-0-0-0 -n clickhouse |
| Describe volumes | kubectl describe pvc -n clickhouse |
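To avoid retyping the long exec command, you can wrap it in a small shell function (the name chq is an arbitrary choice; the pod name assumes the single-node layout from this guide):

```shell
# chq: run a ClickHouse query on the single-node cluster via kubectl exec.
chq() {
  kubectl exec -it chi-suprsend-ch-suprsend-ch-0-0-0 -n clickhouse -- \
    clickhouse-client -q "$1"
}

# Example usage (requires an active cluster connection):
# chq "SHOW DATABASES"
```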

Step 8: SuprSend Helm Configuration

Once your ClickHouse cluster is set up and running, configure SuprSend to connect to it.
This section shows only the ClickHouse-specific configuration. You must also configure other required secrets and values for SuprSend to work properly. See the complete configuration guide: SuprSend Installation Guide

Kubernetes Secret Configuration

First, add the ClickHouse-specific secrets to your suprsend-secrets.yaml:
# ============================================
# ClickHouse Configuration (this guide)
# ============================================
# username of the ClickHouse database user (e.g., the appuser created in the manifest above)
clickhouseConnUrlUserKey: "appuser"
# plaintext password of that ClickHouse user (the password you hashed earlier)
clickhouseConnUrlPassKey: "YOUR_PASSWORD"

Helm Values Configuration

Then add the following to your suprsend-values.yaml (along with other required configuration):
# hostname of the ClickHouse DB instance
clickhouseConnUrlHost: ""
# port for the ClickHouse connection (e.g: 9440 for TLS, 9000 for non-TLS)
clickhouseConnUrlPort: "9440"
# target database for the ClickHouse connection
clickhouseConnUrlDb: "suprsend"
# defines the ClickHouse connection scheme: 'clickhouse' for plain TCP, 'clickhouses' for secure TCP with TLS/SSL
clickhouseConnUrlScheme: "clickhouse"
# Enable or disable TLS/SSL encryption for the ClickHouse connection (e.g: true/false)
clickhouseConnUrlSecure: "true"
# ClickHouse deployment type: 'cloud' (managed) or 'oss_single' (self-hosted single node, as in this guide)
clickhouseDeploymentType: "oss_single"
The above configuration goes under global.config section in your suprsend-values.yaml.

Summary

You now have a fully functional ClickHouse cluster running inside Kubernetes with:
  • Persistent 200Gi storage per pod
  • 1 shard × 1 replica layout (easy to scale later)
  • Verified persistence across restarts
  • Easy connectivity via clickhouse-client or HTTP API
  • SuprSend Helm configuration ready
For more advanced setups (multi-shard clusters, backups, monitoring, etc.), refer to:
👉 Altinity ClickHouse Operator Documentation

FAQ

Pod stuck in Pending?
Possible cause: StorageClass not found.
Fix: Check that storageClassName matches your cluster's available storage classes.

Pod crash-looping or failing to start?
Possible cause: Wrong image or insufficient resources.
Fix: Check logs with kubectl logs and verify resource limits are adequate.

Connection refused when connecting?
Possible cause: ClickHouse not yet ready.
Fix: Wait until the log shows "Ready for connections" before attempting to connect.

Data missing after a pod restart?
Possible cause: PVC deleted or reclaim policy not set.
Fix: Ensure reclaimPolicy: Retain is set in your volumeClaimTemplates, as in the manifest above.