Temporal is a distributed, scalable, durable, and highly available orchestration engine designed to execute asynchronous, long-running business logic in a resilient way. This guide walks you through setting up Temporal for SuprSend's Self-Hosted deployment. It covers:
  • Setting up Temporal with PostgreSQL for standard workloads
  • Configuring Temporal with Cassandra for high-volume workloads (>1M workflows/day)
  • Enabling SSO authentication
  • Configuring required search attributes for SuprSend

Prerequisites

Before installing Temporal, ensure you have:
System Requirements
  • Kubernetes cluster (AWS EKS, GKE, AKS, kind, or minikube)
  • kubectl configured to access your cluster
  • Helm v3 (v3.8.0 or later recommended)
  • Git for cloning the Temporal repository
Database Requirements
  • PostgreSQL 12+ instance accessible from your Kubernetes cluster. For smaller deployments you can use the same PostgreSQL database that is set up for SuprSend. For higher volumes, set up a separate PostgreSQL instance or move to a Cassandra-backed deployment.
  • Database credentials with sufficient privileges to create databases and schemas
Resource Requirements
  • Minimum: 4 CPU cores, 8GB RAM
  • Recommended: 8 CPU cores, 16GB RAM
  • Storage: At least 100GB for logs and data
Verify Prerequisites
# Check kubectl access
kubectl cluster-info

# Check Helm version
helm version

# Verify cluster resources
kubectl top nodes

Installation Options

Temporal can be deployed with different storage backends depending on your workload requirements:
| Workload Size | Primary Storage | Visibility Store | Use Case |
| --- | --- | --- | --- |
| < 1M workflows/day | PostgreSQL | PostgreSQL | Standard deployments |
| > 1M workflows/day | Cassandra | OpenSearch/PostgreSQL | High-volume deployments |

Database Setup

Temporal requires two PostgreSQL databases:
  • temporal: Stores workflow execution data
  • temporal_visibility: Stores workflow visibility/search data

Step 1: Build the Temporal SQL Tool

The temporal-sql-tool is required to initialize the database schemas. Build it from the official Temporal repository:
# Clone the Temporal repository
git clone https://github.com/temporalio/temporal
cd temporal

# Build the temporal-sql-tool
make temporal-sql-tool

Step 2: Initialize PostgreSQL Databases

Set up your PostgreSQL connection details and create the required databases:
# Set PostgreSQL connection details
export SQL_PLUGIN=postgres12
export SQL_HOST=your_postgresql_host
export SQL_PORT=5432
export SQL_USER=your_postgresql_user
export SQL_PASSWORD=your_postgresql_password

# Create and initialize the main temporal database
./temporal-sql-tool --database temporal create-database
SQL_DATABASE=temporal ./temporal-sql-tool setup-schema -v 0.0
SQL_DATABASE=temporal ./temporal-sql-tool update -schema-dir schema/postgresql/v12/temporal/versioned

# Create and initialize the visibility database
./temporal-sql-tool --database temporal_visibility create-database
SQL_DATABASE=temporal_visibility ./temporal-sql-tool setup-schema -v 0.0
SQL_DATABASE=temporal_visibility ./temporal-sql-tool update -schema-dir schema/postgresql/v12/visibility/versioned

Verify that both databases were created successfully:
# Connect to PostgreSQL and list databases
psql -h $SQL_HOST -U $SQL_USER -d postgres -c "\l" | grep temporal
You should see both temporal and temporal_visibility databases listed.

Step 3: Install Temporal

Check out the Temporal Helm chart repository from https://github.com/temporalio/helm-charts:
git clone https://github.com/temporalio/helm-charts

cd helm-charts/charts/temporal
Now modify values/values.postgresql.yaml with your PostgreSQL connection details, then install Temporal using Helm:
# Add the Temporal Helm repository
helm repo add temporal https://go.temporal.io/helm-charts
helm repo update

# Install Temporal
helm install temporal temporal/temporal \
  -f values/values.postgresql.yaml \
  --timeout 900s \
  --namespace temporal \
  --create-namespace
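After the install command returns, you can wait for each component to finish rolling out before proceeding. A minimal sketch, assuming the chart's default deployment names for a release named temporal (temporal-frontend, temporal-history, temporal-matching, temporal-worker, temporal-web):

```shell
# Wait for each Temporal component rollout to complete; adjust the
# deployment names if you used a different Helm release name.
for component in frontend history matching worker web; do
  kubectl rollout status "deployment/temporal-${component}" \
    -n temporal --timeout=600s
done

# Show the overall release status
helm status temporal -n temporal
```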

Alternative: Command Line Configuration

Instead of modifying the values file, you can pass configuration directly via command line:
helm install temporal temporal/temporal \
  --set elasticsearch.enabled=false \
  --set server.config.persistence.default.sql.host=your_postgresql_host \
  --set server.config.persistence.default.sql.user=your_postgresql_user \
  --set server.config.persistence.default.sql.password=your_postgresql_password \
  --set server.config.persistence.visibility.sql.host=your_postgresql_host \
  --set server.config.persistence.visibility.sql.user=your_postgresql_user \
  --set server.config.persistence.visibility.sql.password=your_postgresql_password \
  --timeout 900s \
  --namespace temporal \
  --create-namespace

Step 4: Verify Installation

Check that all Temporal components are running:
# Check pod status
kubectl get pods -n temporal

# Check services
kubectl get svc -n temporal

# Check if Temporal UI is accessible (the Web UI is served by temporal-web;
# temporal-frontend is the gRPC endpoint on port 7233)
kubectl port-forward -n temporal svc/temporal-web 8080:8080
Open http://localhost:8080 in your browser to access the Temporal Web UI.
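You can also verify that the frontend gRPC service itself is healthy from inside the admin-tools pod. A quick check, assuming the default in-cluster service address:

```shell
# Run a cluster health check against the frontend gRPC service (port 7233)
kubectl exec -it deployment/temporal-admintools -n temporal -- \
  temporal operator cluster health \
  --address temporal-frontend.temporal.svc.cluster.local:7233
```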

Step 5: Set up the default namespace for Temporal

SuprSend needs a namespace in Temporal to store workflow state. Create one using the following commands:
# Get shell access inside temporal admintools
kubectl exec -it deployment/temporal-admintools -n temporal -- /bin/bash

# create the default namespace. Important: *DO NOT* change the namespace name
# adjust the retention period based on your needs
temporal operator namespace create \
  --retention 7d \
  --namespace default
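To confirm the namespace and its retention period, you can describe it from the same admin-tools shell:

```shell
# Inspect the namespace you just created; the output should show the
# name "default" and the retention period you configured (7d = 168h)
temporal operator namespace describe --namespace default
```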

Step 6: Configure SuprSend Search Attributes (Required)

1. Verify that the namespaces were created properly

# check the namespaces; you should see two: temporal-system and default
$ temporal operator namespace list --output jsonl | jq ".namespaceInfo.name"
"temporal-system"
"default"

2. Add the search attributes needed for SuprSend

temporal operator search-attribute create --name SS_WorkspaceId --type Int
temporal operator search-attribute create --name SS_TenantId --type Keyword
temporal operator search-attribute create --name SS_SubscriberId --type Keyword
temporal operator search-attribute create --name SS_WorkflowSlug --type Keyword
temporal operator search-attribute create --name SS_BatchDigestGroupingKey --type Keyword
temporal operator search-attribute create --name SS_BroadcastId --type Keyword
temporal operator search-attribute create --name SS_HasBatchDigestNode --type Bool
temporal operator search-attribute create --name SS_IsBatchDigestInProgress --type Bool
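The eight create commands above can also be run as a single loop, which is convenient if you script your cluster setup. A sketch with a hypothetical DRY_RUN guard (defaults to printing the commands; set DRY_RUN=0 inside the admin-tools pod to actually create the attributes):

```shell
# Map of SuprSend search attributes to their Temporal types (bash 4+)
declare -A SS_ATTRS=(
  [SS_WorkspaceId]=Int
  [SS_TenantId]=Keyword
  [SS_SubscriberId]=Keyword
  [SS_WorkflowSlug]=Keyword
  [SS_BatchDigestGroupingKey]=Keyword
  [SS_BroadcastId]=Keyword
  [SS_HasBatchDigestNode]=Bool
  [SS_IsBatchDigestInProgress]=Bool
)

# DRY_RUN=1 (default) prints the commands; DRY_RUN=0 executes them
for name in "${!SS_ATTRS[@]}"; do
  cmd="temporal operator search-attribute create --name ${name} --type ${SS_ATTRS[$name]}"
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "${cmd}"
  else
    ${cmd}
  fi
done
```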
3. Check that the attributes were created properly

You can confirm that the above attributes have been set up properly by executing:
temporal operator search-attribute list

Step 7: SuprSend Helm Configuration

Once your Temporal cluster is set up and running with the required search attributes, configure SuprSend to connect to it.
This section shows only the Temporal-specific configuration. You must also configure other required secrets and values for SuprSend to work properly. See the complete configuration guide: SuprSend Installation Guide
Helm Values Configuration
Add the following to your suprsend-values.yaml (along with other required configuration):
# Temporal server host and port (e.g: temporal-frontend.temporal.svc.cluster.local:7233)
temporalServerHostPort: "temporal-frontend.temporal.svc.cluster.local:7233"
The above configuration goes under the global.config section of your suprsend-values.yaml.
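For clarity, here is the key in context; a minimal fragment of suprsend-values.yaml (the surrounding keys are illustrative, see the SuprSend Installation Guide for the full set of required values):

```yaml
global:
  config:
    # Temporal frontend gRPC endpoint (service DNS name + port 7233)
    temporalServerHostPort: "temporal-frontend.temporal.svc.cluster.local:7233"
```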

Step 8: Scale Temporal based on your needs

As workflow volume grows, you can scale individual Temporal services (frontend, history, matching, worker) by increasing their replica counts and resource requests in the Helm values, or move to the Cassandra-backed deployment described below.


High-Volume Deployment with Cassandra - Optional

For workloads exceeding 1 million workflows per day, use Cassandra as the primary datastore with OpenSearch or PostgreSQL for visibility.

Prerequisites for High-Volume Setup

  • Cassandra cluster (3+ nodes recommended)
  • OpenSearch cluster or PostgreSQL instance for visibility store
  • Sufficient resources: 16+ CPU cores, 32+ GB RAM

Step 1: Initialize Cassandra Keyspaces

If you have an existing Cassandra cluster, initialize the required keyspaces:
# Navigate to the temporal repository directory
cd temporal

# Set Cassandra connection details
export CASSANDRA_HOST=your_cassandra_host
export CASSANDRA_PORT=9042
export CASSANDRA_USER=your_cassandra_user
export CASSANDRA_PASSWORD=your_cassandra_password

# Build the Cassandra tool
make temporal-cassandra-tool

# Create and initialize the temporal keyspace
./temporal-cassandra-tool create-Keyspace -k temporal
CASSANDRA_KEYSPACE=temporal ./temporal-cassandra-tool setup-schema -v 0.0
CASSANDRA_KEYSPACE=temporal ./temporal-cassandra-tool update -schema-dir schema/cassandra/temporal/versioned
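Before installing, you can verify that the keyspace was created. A quick check, assuming cqlsh is available on a host that can reach the cluster:

```shell
# List keyspaces; "temporal" should appear in the output
cqlsh "$CASSANDRA_HOST" "$CASSANDRA_PORT" \
  -u "$CASSANDRA_USER" -p "$CASSANDRA_PASSWORD" \
  -e "DESCRIBE KEYSPACES;"
```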

Step 2: Configure High-Volume Deployment

Download and configure the Cassandra values file:
# Download the Cassandra values file
curl -o values.cassandra.yaml https://raw.githubusercontent.com/temporalio/helm-charts/main/charts/temporal/values/values.cassandra.yaml
Edit values.cassandra.yaml to configure:
values.cassandra.yaml
# Cassandra configuration
server:
  config:
    persistence:
      default:
        cassandra:
          hosts: ["your_cassandra_host"]
          port: 9042
          user: your_cassandra_user
          password: your_cassandra_password
          keyspace: temporal
      visibility:
        # Use OpenSearch for visibility (recommended for high volume)
        opensearch:
          version: "v1"
          indices:
            visibility: temporal_visibility_v1_dev
          url: "http://your_opensearch_host:9200"
          username: your_opensearch_user
          password: your_opensearch_password
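Note that with an external OpenSearch cluster, the visibility index may not be created for you automatically. The Temporal repository ships index template files under schema/elasticsearch/; a hedged sketch of applying one and creating the index (the template path, template name, and index name are assumptions here, verify them against your Temporal checkout and the index configured above):

```shell
# From the cloned temporal repository: upload the visibility index template
curl -u "your_opensearch_user:your_opensearch_password" \
  -X PUT "http://your_opensearch_host:9200/_template/temporal_visibility_v1_template" \
  -H "Content-Type: application/json" \
  --data-binary @schema/elasticsearch/visibility/index_template_v7.json

# Create the visibility index itself (must match the name in the values file)
curl -u "your_opensearch_user:your_opensearch_password" \
  -X PUT "http://your_opensearch_host:9200/temporal_visibility_v1_dev"
```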

Step 3: Install High-Volume Temporal

# Install Temporal with Cassandra configuration
helm install temporal temporal/temporal \
  -f values.cassandra.yaml \
  --timeout 900s \
  --namespace temporal \
  --create-namespace

Use PostgreSQL for Visibility

If you prefer PostgreSQL for visibility store instead of OpenSearch:
values.cassandra.yaml
# In values.cassandra.yaml, replace the visibility section with:
server:
  config:
    persistence:
      visibility:
        sql:
          host: your_postgresql_host
          port: 5432
          user: your_postgresql_user
          password: your_postgresql_password
          databaseName: temporal_visibility

Configure SSO Authentication (Optional)

Enable Single Sign-On (SSO) authentication for the Temporal Web UI to integrate with your identity provider.

Step 1: Create Kubernetes Secret

Create a secret containing your SSO client secret:
kubectl create secret generic temporal-auth-secret \
  --from-literal=TEMPORAL_AUTH_CLIENT_SECRET=your_client_secret \
  --namespace temporal

Step 2: Configure SSO Environment Variables

Add SSO configuration to your Helm values file:
values.yaml
# Add to your values.yaml file
web:
  additionalEnv:
    - name: TEMPORAL_AUTH_ENABLED
      value: "true"
    - name: TEMPORAL_AUTH_PROVIDER_URL
      value: "https://accounts.google.com"  # Replace with your provider
    - name: TEMPORAL_AUTH_CLIENT_ID
      value: "your_client_id"
    - name: TEMPORAL_AUTH_CALLBACK_URL
      value: "https://your-domain.com:8080/auth/sso/callback"
  additionalEnvSecretName: temporal-auth-secret

Step 3: Update Temporal Installation

Upgrade your Temporal installation with SSO configuration:
helm upgrade temporal temporal/temporal \
  -f values.yaml \
  --namespace temporal
For more SSO configuration options, refer to the Temporal Web UI Server Environment Variables.

Monitoring Your Deployment

Check Cluster Status

Monitor your Temporal deployment using standard Kubernetes tools:
# Check all Temporal services
kubectl get svc -n temporal

# Check pod status
kubectl get pods -n temporal

# Check resource usage
kubectl top pods -n temporal

Access Temporal Web UI

# Port forward to access the Web UI
kubectl port-forward -n temporal svc/temporal-web 8080:8080
Open http://localhost:8080 in your browser to access the Temporal Web UI.

Verify Search Attributes

# Access admin tools and verify search attributes
kubectl exec -it deployment/temporal-admintools -n temporal -- tctl admin cluster get-search-attributes
You should see all the SuprSend-specific search attributes listed in the output.

FAQ

What if the search attributes already exist?
Check existing search attributes with tctl admin cluster get-search-attributes. If they already exist, you can skip adding them, or remove and re-add with tctl admin cluster remove-search-attributes --name SS_WorkspaceId.
Pods are not starting or keep restarting?
Check pod status, events, resource availability, and logs:
kubectl describe pod -n temporal -l app.kubernetes.io/name=temporal
kubectl top nodes
kubectl top pods -n temporal
kubectl logs -n temporal -l app.kubernetes.io/name=temporal
The Temporal Web UI is not accessible?
Check service status and port-forward to access the UI:
kubectl get svc -n temporal
kubectl port-forward -n temporal svc/temporal-web 8080:8080
kubectl get pods -n temporal -l app.kubernetes.io/component=frontend
Pods are hitting memory or CPU limits?
Add resource limits to your values.yaml:
server:
  resources:
    limits:
      memory: "2Gi"
      cpu: "1000m"
    requests:
      memory: "1Gi"
      cpu: "500m"