

Temporal is a distributed, scalable, durable, and highly available orchestration engine designed to execute asynchronous, long-running business logic in a resilient way. This guide walks you through setting up Temporal for SuprSend’s Self-Hosted deployment and covers:
  • Setting up Temporal with PostgreSQL for standard workloads
  • Configuring Temporal with Cassandra for high-volume workloads (>0.5M workflows/day)
  • Enabling SSO authentication
  • Configuring required search attributes for SuprSend

Prerequisites

Before installing Temporal, ensure you have the following.
System Requirements
  • Kubernetes cluster (AWS EKS, GKE, AKS, kind, or minikube)
  • kubectl configured to access your cluster
  • Helm v3 (v3.8.0 or later recommended)
  • Git for cloning the Temporal repository
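The v3.8.0 Helm floor above can be checked mechanically. A small sketch using `sort -V` for version comparison (the fallback value is illustrative only, used when `helm` is not on the PATH):

```shell
# Sketch: verify helm meets the recommended v3.8.0 floor via sort -V
required="3.8.0"
installed="$(helm version --template '{{.Version}}' 2>/dev/null | sed 's/^v//')"
installed="${installed:-3.12.0}"   # illustrative fallback when helm is absent
# sort -V orders versions numerically; if the floor sorts first, we are at or above it
lowest="$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)"
if [ "$lowest" = "$required" ]; then
  echo "helm $installed OK (>= $required)"
else
  echo "helm $installed is below the recommended $required" >&2
fi
```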
Database Requirements
  • PostgreSQL 12+ instance accessible from your Kubernetes cluster. For smaller deployments, you can reuse the PostgreSQL database already set up for SuprSend; for higher volumes, set up a separate PostgreSQL instance or move to a Cassandra-backed deployment.
  • Database credentials with sufficient privileges to create databases and schemas
Resource Requirements
  • Minimum: 4 CPU cores, 8GB RAM
  • Recommended: 8 CPU cores, 16GB RAM
  • Storage: At least 100GB for logs and data
Verify Prerequisites
# Check kubectl access
kubectl cluster-info

# Check Helm version
helm version

# Verify cluster resources
kubectl top nodes

Installation Options

Temporal can be deployed with different storage backends depending on your workload requirements:
| Workload Size | Primary Storage | Visibility Store | Use Case |
| --- | --- | --- | --- |
| < 1M workflows/day | PostgreSQL | PostgreSQL | Standard deployments |
| > 1M workflows/day | Cassandra | OpenSearch/PostgreSQL | High-volume deployments |

Database Setup

Temporal requires two PostgreSQL databases:
  • temporal: Stores workflow execution data
  • temporal_visibility: Stores workflow visibility/search data

Step 1: Build the Temporal SQL Tool

The temporal-sql-tool is required to initialize the database schemas. Build it from the official Temporal repository:
# Clone the Temporal repository
git clone https://github.com/temporalio/temporal
cd temporal

# Build the temporal-sql-tool
make temporal-sql-tool

Step 2: Initialize PostgreSQL Databases

Set up your PostgreSQL connection details and create the required databases:
# Set PostgreSQL connection details
export SQL_PLUGIN=postgres12
export SQL_HOST=your_postgresql_host
export SQL_PORT=5432
export SQL_USER=your_postgresql_user
export SQL_PASSWORD=your_postgresql_password

# Create and initialize the main temporal database
./temporal-sql-tool --database temporal create-database
SQL_DATABASE=temporal ./temporal-sql-tool setup-schema -v 0.0
SQL_DATABASE=temporal ./temporal-sql-tool update -schema-dir schema/postgresql/v12/temporal/versioned

# Create and initialize the visibility database
./temporal-sql-tool --database temporal_visibility create-database
SQL_DATABASE=temporal_visibility ./temporal-sql-tool setup-schema -v 0.0
SQL_DATABASE=temporal_visibility ./temporal-sql-tool update -schema-dir schema/postgresql/v12/visibility/versioned

Step 3: Verify Database Setup

Verify that both databases were created successfully:
# Connect to PostgreSQL and list databases
psql -h $SQL_HOST -U $SQL_USER -d postgres -c "\l" | grep temporal
You should see both temporal and temporal_visibility databases listed.

Step 4: Install Temporal

Check out the Temporal Helm chart from temporalio/helm-charts and work from the chart directory:
git clone https://github.com/temporalio/helm-charts
cd helm-charts/charts/temporal
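Before running the install, you will author a values file for PostgreSQL. Purely as an illustrative sketch (every key name below is an assumption; chart versions differ, so verify against `helm show values temporal/temporal`), such a file might look like:

```yaml
# values-temporal-postgres.yaml -- illustrative sketch only; verify key names
# against `helm show values temporal/temporal` for your chart version
server:
  config:
    persistence:
      default:
        driver: "sql"
        sql:
          driver: "postgres12"
          host: "your_postgresql_host"
          port: 5432
          database: "temporal"
          user: "your_postgresql_user"
          password: "your_postgresql_password"
      visibility:
        driver: "sql"
        sql:
          driver: "postgres12"
          host: "your_postgresql_host"
          port: 5432
          database: "temporal_visibility"
          user: "your_postgresql_user"
          password: "your_postgresql_password"
web:
  enabled: true
admintools:
  enabled: true
cassandra:
  enabled: false
elasticsearch:
  enabled: false
```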
Create a values file (for example values-temporal-postgres.yaml) that points both the default and visibility stores at your PostgreSQL instance, enables the Web UI and admintools, and disables the bundled databases you do not need. Follow the chart’s server.config.persistence + datastores layout; field names vary between chart versions, so align them with the output of helm show values temporal/temporal. Then install Temporal with Helm (run this from helm-charts/charts/temporal, or pass the full path to -f):
# Add the Temporal Helm repository
helm repo add temporal https://go.temporal.io/helm-charts
helm repo update

# Install Temporal (adjust the values filename to match your file)
helm install temporal temporal/temporal \
  -f values-temporal-postgres.yaml \
  --timeout 900s \
  --namespace temporal \
  --create-namespace

Alternative: command-line --set flags

Some chart versions still expose Postgres settings under server.config.persistence.default.sql.*. If that matches your helm show values output, you can pass overrides directly (for simpler setups only):
helm install temporal temporal/temporal \
  --set elasticsearch.enabled=false \
  --set server.config.persistence.default.sql.host=your_postgresql_host \
  --set server.config.persistence.default.sql.user=your_postgresql_user \
  --set server.config.persistence.default.sql.password=your_postgresql_password \
  --set server.config.persistence.visibility.sql.host=your_postgresql_host \
  --set server.config.persistence.visibility.sql.user=your_postgresql_user \
  --set server.config.persistence.visibility.sql.password=your_postgresql_password \
  --timeout 900s \
  --namespace temporal \
  --create-namespace
Prefer the values file (datastores / connectAddr layout) above when it matches your chart; the --set keys vary by chart version.

Step 5: Verify installation

Check that all Temporal components are running:
# Check pod status
kubectl get pods -n temporal

# Check services
kubectl get svc -n temporal

# Temporal Web UI (HTTP). Service name may vary by chart version — confirm with: kubectl get svc -n temporal
kubectl port-forward -n temporal svc/temporal-web 8080:8080
Open http://localhost:8080 in your browser to access the Temporal Web UI.

Step 6: Set up the default namespace for Temporal

SuprSend expects a Temporal namespace named default. The admintools image does not include bash; use sh.
kubectl exec -it deployment/temporal-admintools -n temporal -- /bin/sh
Inside that shell, create the namespace (do not change the name default; adjust retention if needed):
temporal operator namespace create \
  --retention 7d \
  --namespace default
Exit with exit when finished.

Step 7: Configure SuprSend search attributes (required)

The temporal CLI runs inside the admintools pod. jq is not installed in that image — run jq on your workstation by piping kubectl exec output to your local shell.
1. Verify namespaces

From your machine (pipes remote JSONL to local jq):
kubectl exec deployment/temporal-admintools -n temporal -- \
  temporal operator namespace list --output jsonl | jq -r ".namespaceInfo.name"
You should see temporal-system and default.
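If you want to sanity-check the jq filter itself without a cluster, you can run it against sample JSONL on your workstation (the JSON shape here is an assumption mirroring the `namespace list --output jsonl` output):

```shell
# Local sanity check of the jq filter used above, against sample JSONL
# (one JSON object per line, as with --output jsonl)
names="$(printf '%s\n' \
  '{"namespaceInfo":{"name":"temporal-system"}}' \
  '{"namespaceInfo":{"name":"default"}}' \
  | jq -r '.namespaceInfo.name')"
printf '%s\n' "$names"
```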
2. Open an admintools shell (for the next step)

Search-attribute commands must run where the temporal CLI is available — inside the admintools pod. Open a shell first:
kubectl exec -it deployment/temporal-admintools -n temporal -- /bin/sh
Then run the temporal operator search-attribute create commands below in that shell.
3. Add search attributes needed for SuprSend

temporal operator search-attribute create --name SS_WorkspaceId --type Int
temporal operator search-attribute create --name SS_TenantId --type Keyword
temporal operator search-attribute create --name SS_SubscriberId --type Keyword
temporal operator search-attribute create --name SS_WorkflowSlug --type Keyword
temporal operator search-attribute create --name SS_BatchDigestGroupingKey --type Keyword
temporal operator search-attribute create --name SS_BroadcastId --type Keyword
temporal operator search-attribute create --name SS_HasBatchDigestNode --type Bool
temporal operator search-attribute create --name SS_IsBatchDigestInProgress --type Bool
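Equivalently, the eight commands above can be generated from a small loop. This sketch only prints them for review; inside the admintools shell you could pipe the output to sh to execute:

```shell
# Dry-run sketch: build the eight create commands from name:type pairs
# (pairs taken verbatim from the list above)
cmds=""
for pair in \
  SS_WorkspaceId:Int SS_TenantId:Keyword SS_SubscriberId:Keyword \
  SS_WorkflowSlug:Keyword SS_BatchDigestGroupingKey:Keyword \
  SS_BroadcastId:Keyword SS_HasBatchDigestNode:Bool SS_IsBatchDigestInProgress:Bool
do
  name="${pair%%:*}"   # text before the colon
  type="${pair#*:}"    # text after the colon
  cmds="${cmds}temporal operator search-attribute create --name ${name} --type ${type}
"
done
printf '%s' "$cmds"
```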
4. Verify search attributes

Still inside the admintools shell:
temporal operator search-attribute list

Step 8: SuprSend Helm configuration

Once your Temporal cluster is set up and running with the required search attributes, configure SuprSend to connect to it.
This section shows only the Temporal-specific configuration. You must also configure other required secrets and values for SuprSend to work properly. See the complete configuration guide: SuprSend Installation Guide
Helm Values Configuration

Add the following to your suprsend-values.yaml (along with other required configuration):
# Temporal server host and port (e.g: temporal-frontend.temporal.svc.cluster.local:7233)
temporalServerHostPort: "temporal-frontend.temporal.svc.cluster.local:7233"
The above configuration goes under global.config section in your suprsend-values.yaml.
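Concretely, the placement under global.config would look like this fragment (merge it into your existing suprsend-values.yaml):

```yaml
# suprsend-values.yaml (fragment)
global:
  config:
    temporalServerHostPort: "temporal-frontend.temporal.svc.cluster.local:7233"
```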

Step 9: Scale Temporal based on your needs

As your workflow volume grows, scale the Temporal services accordingly; for sustained high volumes, move to the Cassandra-backed deployment described below.

High-Volume Deployment with Cassandra - Optional

For workloads exceeding 1 million workflows per day, use Cassandra as the primary datastore with OpenSearch or PostgreSQL for visibility.

Prerequisites for High-Volume Setup

  • Cassandra cluster (3+ nodes recommended)
  • OpenSearch cluster or PostgreSQL instance for visibility store
  • Sufficient resources: 16+ CPU cores, 32+ GB RAM

Step 1: Initialize Cassandra Keyspaces

If you have an existing Cassandra cluster, initialize the required keyspaces:
# Navigate to the temporal repository directory
cd temporal

# Set Cassandra connection details
export CASSANDRA_HOST=your_cassandra_host
export CASSANDRA_PORT=9042
export CASSANDRA_USER=your_cassandra_user
export CASSANDRA_PASSWORD=your_cassandra_password

# Build the Cassandra tool
make temporal-cassandra-tool

# Create and initialize the temporal keyspace
./temporal-cassandra-tool create-Keyspace -k temporal
CASSANDRA_KEYSPACE=temporal ./temporal-cassandra-tool setup-schema -v 0.0
CASSANDRA_KEYSPACE=temporal ./temporal-cassandra-tool update -schema-dir schema/cassandra/temporal/versioned

Step 2: Configure High-Volume Deployment

Download and configure the Cassandra values file:
# Download the Cassandra values file
curl -o values.cassandra.yaml https://raw.githubusercontent.com/temporalio/helm-charts/main/charts/temporal/values/values.cassandra.yaml
Edit values.cassandra.yaml to configure:
values.cassandra.yaml
# Cassandra configuration
server:
  config:
    persistence:
      default:
        cassandra:
          hosts: ["your_cassandra_host"]
          port: 9042
          user: your_cassandra_user
          password: your_cassandra_password
          keyspace: temporal
      visibility:
        # Use OpenSearch for visibility (recommended for high volume)
        opensearch:
          version: "v1"
          indices:
            visibility: temporal_visibility_v1_dev
          url: "http://your_opensearch_host:9200"
          username: your_opensearch_user
          password: your_opensearch_password

Step 3: Install High-Volume Temporal

# Install Temporal with Cassandra configuration
helm install temporal temporal/temporal \
  -f values.cassandra.yaml \
  --timeout 900s \
  --namespace temporal \
  --create-namespace

Use PostgreSQL for Visibility

If you prefer PostgreSQL for the visibility store instead of OpenSearch:
values.cassandra.yaml
# In values.cassandra.yaml, replace the visibility section with:
server:
  config:
    persistence:
      visibility:
        sql:
          host: your_postgresql_host
          port: 5432
          user: your_postgresql_user
          password: your_postgresql_password
          databaseName: temporal_visibility

Configure SSO Authentication (Optional)

Enable Single Sign-On (SSO) authentication for the Temporal Web UI to integrate with your identity provider.

Step 1: Create Kubernetes Secret

Create a secret containing your SSO client secret:
kubectl create secret generic temporal-auth-secret \
  --from-literal=TEMPORAL_AUTH_CLIENT_SECRET=your_client_secret \
  --namespace temporal

Step 2: Configure SSO Environment Variables

Add SSO configuration to your Helm values file:
values.yaml
# Add to your values.yaml file
web:
  additionalEnv:
    - name: TEMPORAL_AUTH_ENABLED
      value: "true"
    - name: TEMPORAL_AUTH_PROVIDER_URL
      value: "https://accounts.google.com"  # Replace with your provider
    - name: TEMPORAL_AUTH_CLIENT_ID
      value: "your_client_id"
    - name: TEMPORAL_AUTH_CALLBACK_URL
      value: "https://your-domain.com:8080/auth/sso/callback"
  additionalEnvSecretName: temporal-auth-secret

Step 3: Update Temporal Installation

Upgrade your Temporal installation with SSO configuration:
helm upgrade temporal temporal/temporal \
  -f values.yaml \
  --namespace temporal
For more SSO configuration options, refer to the Temporal Web UI Server Environment Variables.

Monitoring Your Deployment

Check Cluster Status

Monitor your Temporal deployment using standard Kubernetes tools:
# Check all Temporal services
kubectl get svc -n temporal

# Check pod status
kubectl get pods -n temporal

# Check resource usage
kubectl top pods -n temporal

Access Temporal Web UI

kubectl port-forward -n temporal svc/temporal-web 8080:8080
Open http://localhost:8080 in your browser to access the Temporal Web UI.

Verify Search Attributes

# Access admin tools and verify search attributes
kubectl exec -it deployment/temporal-admintools -n temporal -- tctl admin cluster get-search-attributes
You should see all the SuprSend-specific search attributes listed in the output.

FAQ

What if a search attribute already exists?
Check existing search attributes with tctl admin cluster get-search-attributes. If they already exist, you can skip adding them, or remove and re-add with tctl admin cluster remove-search-attributes --name SS_WorkspaceId.
Why are pods not starting?
Check pod status, events, resource headroom, and logs:
kubectl describe pod -n temporal -l app.kubernetes.io/name=temporal
kubectl top nodes
kubectl top pods -n temporal
kubectl logs -n temporal -l app.kubernetes.io/name=temporal
Why can’t I reach the Web UI?
Check service status and port-forward to the Web UI service (not the gRPC frontend):
kubectl get svc -n temporal
kubectl port-forward -n temporal svc/temporal-web 8080:8080
kubectl get pods -n temporal -l app.kubernetes.io/component=web
What if Temporal pods are running out of resources?
Add resource requests and limits to your values.yaml:
server:
  resources:
    limits:
      memory: "2Gi"
      cpu: "1000m"
    requests:
      memory: "1Gi"
      cpu: "500m"