The old S3 connector (v1.0) will be deprecated over time. If you’re using it, migrate to v2.0 to get the latest logging and analytics updates: the v2.0 connector exports all log types and notification analytics, and fixes gaps in error logging present in v1.0.
Export your SuprSend notification data directly to your S3 bucket. Build custom analytics dashboards, debug delivery issues, surface errors to your customers, or maintain compliance audit trails—all with data you fully own and control.

How it works

  • Every 5 minutes, SuprSend syncs your notification data to S3 in Parquet format. Data lands in hourly partitions across three data points, depending on your connector settings:
    your-bucket/
    ├── year=2025/month=01/day=15/hour=14/messages.parquet
    ├── year=2025/month=01/day=15/hour=14/workflow_executions.parquet
    └── year=2025/month=01/day=15/hour=14/requests.parquet
    
  • On every sync, we add new hourly Parquet files or replace existing ones whose data has changed.
    • For query engines like AWS Athena, changes in hourly partitions are automatically detected and reflected in the final tables.
    • For data warehouses that don’t support automatic row overwrites (e.g., BigQuery), use the updated_at column to select the latest state of each row (see the sketch after this list).
  • Data is encrypted with TLS 1.2+ in transit and SSE-S3/SSE-KMS at rest.
  • The Parquet format and partition structure work natively with query engines like Athena, Spark, and Presto.
  • If you pause sync, data from the pause period backfills automatically when you resume.
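
If your warehouse doesn’t overwrite rows automatically, you can apply the updated_at rule yourself after reading a partition. Here’s a minimal sketch using pandas (with pyarrow and s3fs); the bucket name, partition path, and the message_id column used for deduplication are illustrative assumptions rather than part of the documented schema.

import pandas as pd

BUCKET = "suprsend-logs-production"           # placeholder bucket name
PREFIX = "year=2025/month=01/day=15/hour=14"  # one hourly partition

# Requires pyarrow and s3fs; AWS credentials come from your environment.
df = pd.read_parquet(f"s3://{BUCKET}/{PREFIX}/messages.parquet")

# Keep only the latest state of each row, using updated_at as the tiebreaker.
# "message_id" is an assumed unique key -- replace it with the id column in
# your Messages schema.
latest = (
    df.sort_values("updated_at")
      .drop_duplicates(subset=["message_id"], keep="last")
)
print(latest.head())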

What you can export

You can choose which data points to sync based on your use case. Each data point should be synced to a separate table in your data warehouse. Most users sync Messages for analytics and delivery troubleshooting; for internal logging, error analysis, and audit trails, sync all three data points.

Data points

| Data point | What’s in it | Use it for |
| --- | --- | --- |
| Messages | Delivery status, engagement, vendor info, failures | Analytics, delivery troubleshooting |
| Workflow Executions | Step-by-step workflow logs | Debugging workflow-level errors or computations like user preferences |
| Requests | API payloads and their responses | API debugging, audit trails, workflow trigger-level errors |
(Screenshot: Data Points parameter on the SuprSend platform)
Errors logged in each table:
| Table | Errors |
| --- | --- |
| Requests | API-level errors, workflow trigger-level errors (condition mismatch, user not found, etc.) |
| Workflow Executions | Workflow-level errors (dynamic variables in the workflow could not be resolved, template rendering failed, webhook returned a 404 response, etc.) |
| Messages | Delivery failures |

Table Schema

| Column name | Description | Datatype |
| --- | --- | --- |
| api_type | Entity type for the API call | string |
| api_name | Workflow, event, or broadcast name passed in the API call | string |
| distinct_id_list | List of user distinct_id values or object type/id the request was sent for | array |
| actor | Actor passed in the event or workflow API request | string |
| tenant_id | Tenant ID for which the API request was sent | string |
| payload | Input payload passed in the trigger, including API call details | json |
| response | HTTP API response | json |
| metadata | SDK, machine, and location information for the request | json |
| errors | Request-level errors with message and severity | array(json) |
| executions | Workflow or broadcast execution IDs and slugs triggered by this API call. Execution IDs can be used to link with the workflow executions table | array(json) |
| idempotency_key | Idempotency key passed in the API request. A UUID is generated if not provided | string |
| created_at | Time when the request was received by SuprSend (UTC) | datetime |
| updated_at | Time when this entry was last updated | datetime |
| status | Status of the API request | string |
status can have these values:
  • completed: the request was processed successfully.
  • failure: the request failed to process due to an error.
  • partial_failure: the request was partially processed with some failures, or carries an acceptable warning (for example, workflow conditions evaluated to false).
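
As an example of how this schema can be used, here is a minimal sketch that pulls failed or partially failed requests from one hourly partition and prints their request-level errors. The bucket and partition path are placeholders; the column names come from the schema above.

import pandas as pd

PARTITION = (
    "s3://suprsend-logs-production/"        # placeholder bucket
    "year=2025/month=01/day=15/hour=14/requests.parquet"
)

df = pd.read_parquet(PARTITION)

# status is one of: completed, failure, partial_failure
failed = df[df["status"].isin(["failure", "partial_failure"])]

for _, row in failed.iterrows():
    # errors is an array of JSON objects carrying message and severity
    print(row["idempotency_key"], row["status"], row["errors"])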

Linking different data points

These data points are linked, allowing you to trace a notification from the initial API request to final delivery. The idempotency key is the common identifier across all tables and can be used to follow a single request end to end. Since the idempotency key is provided in the API request, you can also store it in your system to correlate SuprSend processing with your internal logs.
| From → To | Join on |
| --- | --- |
| Requests → Workflow Executions | execution_id |
| Workflow Executions → Messages | wf_execution_id |
| Requests → Messages (shortcut) | idempotency_key |
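
For example, the shortcut join can be expressed in pandas as a one-to-many merge on idempotency_key. This is only a sketch: the partition paths are placeholders, and the Messages columns you inspect afterwards depend on that table’s schema.

import pandas as pd

PREFIX = "s3://suprsend-logs-production/year=2025/month=01/day=15/hour=14"

requests = pd.read_parquet(f"{PREFIX}/requests.parquet")
messages = pd.read_parquet(f"{PREFIX}/messages.parquet")

# One request can fan out to many messages, so expect a one-to-many join.
traced = requests.merge(
    messages,
    on="idempotency_key",
    how="left",
    suffixes=("_request", "_message"),
)
print(traced.head())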

Setup

Step 1: Create your S3 bucket

Open the AWS S3 Console and create a bucket with these settings (a boto3 sketch follows the list if you prefer to script it):
  • Bucket name: Something like suprsend-logs-production (save this—you’ll need it)
  • Region: Pick one close to you
  • Block all public access: Yes
  • Encryption: SSE-S3 (or SSE-KMS for compliance)
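
If you’d rather script the bucket setup than click through the console, here is a minimal boto3 sketch covering the same settings. The bucket name and region are placeholders, and it assumes your AWS credentials are already configured.

import boto3

BUCKET = "suprsend-logs-production"  # placeholder bucket name
REGION = "us-east-1"                 # placeholder region

s3 = boto3.client("s3", region_name=REGION)

# us-east-1 is the one region that must not pass a LocationConstraint.
if REGION == "us-east-1":
    s3.create_bucket(Bucket=BUCKET)
else:
    s3.create_bucket(
        Bucket=BUCKET,
        CreateBucketConfiguration={"LocationConstraint": REGION},
    )

# Block all public access.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Default encryption with SSE-S3 (use "aws:kms" here for SSE-KMS).
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)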

Step 2: Create an IAM policy

This gives SuprSend permission to write to your bucket. In IAM Console, create a policy with this JSON:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "SuprSendS3ExportAccess",
    "Effect": "Allow",
    "Action": ["s3:PutObject", "s3:ListBucket", "s3:GetObject"],
    "Resource": [
      "arn:aws:s3:::YOUR_BUCKET_NAME/*",
      "arn:aws:s3:::YOUR_BUCKET_NAME"
    ]
  }]
}
Replace YOUR_BUCKET_NAME with your actual bucket name. Save it as something like suprsend_s3_policy.
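
The same policy can also be created with boto3 if you prefer scripting over the console. This is only a sketch; the bucket and policy names are placeholders.

import json
import boto3

BUCKET = "YOUR_BUCKET_NAME"  # placeholder -- your actual bucket name

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "SuprSendS3ExportAccess",
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:ListBucket", "s3:GetObject"],
        "Resource": [
            f"arn:aws:s3:::{BUCKET}/*",
            f"arn:aws:s3:::{BUCKET}",
        ],
    }],
}

iam = boto3.client("iam")
resp = iam.create_policy(
    PolicyName="suprsend_s3_policy",
    PolicyDocument=json.dumps(policy_document),
)
print(resp["Policy"]["Arn"])  # keep this ARN handy for the role in Step 3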

Step 3: Set up authentication

Two authentication methods are available:
| Method | When to use | We recommend |
| --- | --- | --- |
| IAM Role | Production, enterprise, multi-account setups | ✅ Yes: credentials rotate automatically, no secrets to manage |
| IAM User | Development, testing, quick POCs | Only if an IAM Role isn’t feasible. Requires manual key rotation every 90 days. |
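
If you go the IAM Role route, the role needs a trust policy that lets SuprSend assume it, scoped with an External ID to prevent confused-deputy access. The sketch below shows the shape of that setup with boto3; the account ID, External ID, and policy ARN are placeholders, not real values. Use the actual values shown during connector setup and the policy ARN from Step 2.

import json
import boto3

SUPRSEND_ACCOUNT_ID = "111122223333"  # placeholder -- not a real account ID
EXTERNAL_ID = "your-external-id"      # placeholder External ID
POLICY_ARN = "arn:aws:iam::123456789012:policy/suprsend_s3_policy"  # from Step 2

# Trust policy: only the stated account, presenting the External ID, may assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{SUPRSEND_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}

iam = boto3.client("iam")
role = iam.create_role(
    RoleName="suprsend-s3-export-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="suprsend-s3-export-role",
    PolicyArn=POLICY_ARN,
)
print(role["Role"]["Arn"])  # the Role ARN you enter in the SuprSend connector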

Step 4: Connect in SuprSend

Go to Settings → Connectors → Amazon S3 v2.0 → Add Connector
(Screenshot: S3 Connector IAM Role configuration)
Enter your AWS credentials, select which data points to export, then Save and toggle Enable sync.

Step 5: Verify it’s working

Give it about 10 minutes, then check your S3 bucket. You should see folders like year=2025/month=01/... appearing.
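
You can also check programmatically. Here’s a minimal boto3 sketch; the bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(
    Bucket="suprsend-logs-production",  # placeholder bucket name
    Prefix="year=",                     # exported partitions start with year=
    MaxKeys=20,
)
for obj in resp.get("Contents", []):
    print(obj["LastModified"], obj["Key"])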
Nothing showing up? Jump to FAQs for troubleshooting steps.

Best practices

  • Use an IAM Role in production: credentials rotate automatically and there are no keys to manage
  • Never commit credentials to git
  • Keep your bucket private with encryption enabled (SSE-S3 or SSE-KMS)
  • Block all public access to your bucket
  • Rotate IAM User keys every 90 days—required for security compliance
  • Use External ID for IAM Roles—prevents “confused deputy” attacks
  • Assign policies to groups, not users—simplifies permission management
  • Sync only what you need—for analytics, Messages is often enough. Add Workflow Executions and Requests for debugging.
  • Use updated_at for incremental queries: updated_at tells you when a given row was last updated. Filter on this column instead of scanning all data when you only need recently updated rows, especially while fetching logs.
  • Clean up old data regularly: syncs only append new data, so files accumulate over time. Schedule regular cleanup to avoid unnecessary storage costs (see the lifecycle-rule sketch below).
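
For the cleanup point above, an S3 lifecycle rule is usually enough. Here’s a minimal boto3 sketch; the 180-day retention and bucket name are assumptions, so pick values that match your own retention and compliance needs.

import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="suprsend-logs-production",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-old-suprsend-exports",
            "Status": "Enabled",
            "Filter": {"Prefix": "year="},  # only the exported partitions
            "Expiration": {"Days": 180},    # assumed retention window
        }]
    },
)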

FAQs

Not seeing any data in your bucket? Work through this checklist:
  1. Wait 10 minutes—first sync takes time
  2. Check bucket name—must match exactly, case-sensitive
  3. Check region—must match in both AWS and SuprSend
  4. Check IAM: confirm the Role ARN, Access Key, and External ID are correct.
  5. Check the policy: confirm it includes "Action": ["s3:PutObject", "s3:ListBucket", "s3:GetObject"] and is scoped to the right bucket.
It should work if the AWS setup is correct and, in SuprSend → Settings → Connectors → Amazon S3 v2.0, you see:
  • Sync toggle is ON
  • Status shows “Active”
  • At least one dataset selected
Seeing gaps in the data? Check the following:
  • A data point might not be selected; check your export settings
  • When the connector is first set up, data points only export going forward (no historical backfill)
  • Confirm notifications were actually sent during that period
  • If sync was paused, data backfills when you resume
If you still see gaps in the data, please contact support.
How does backfill behave when you pause or change sync settings?
| What you did | What happens |
| --- | --- |
| Paused, then resumed | Backfills everything from the pause |
| Added a new dataset | Starts fresh; historical data will not be backfilled |
| Re-enabled a dataset | Backfills from when it was disabled |
Can I add or remove datasets later? Absolutely:
  • Add one: Starts exporting going forward
  • Remove one: Stops syncing, data stays in S3
  • Re-enable: Backfills automatically