The old S3 connector (v1.0) will be deprecated over time. If you’re using it, migrate to v2.0 to get the latest logging and analytics updates: the v2.0 connector exports all log types and notification analytics and fixes gaps in error logging present in v1.0.
How it works
Every 5 minutes, SuprSend syncs your notification data to S3 in Parquet format. Data lands in hourly partitions across three data points, depending on your connector settings:
- At every sync, we add new hourly Parquet files or replace existing ones where data has changed.
- For query engines like AWS Athena, changes in hourly partitions are automatically detected and reflected in the final tables.
- For data warehouses that don’t support automatic row overwrites (e.g., BigQuery), use the updated_at column to select the latest state of the data (see the example query after this list).
- Data is encrypted with TLS 1.2+ in transit and SSE-S3/SSE-KMS at rest.
- The Parquet format and partition structure work natively with query engines like Athena, Spark, and Presto.
- If you pause sync, data from the pause period backfills automatically when you resume.
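As a concrete illustration of the updated_at approach, here is a minimal query sketch for BigQuery. It assumes the Requests data point is loaded into a table named requests (an illustrative name) and uses idempotency_key, which is unique per request per the schema further down, to pick each row’s latest state.

```sql
-- Latest state of each request in a warehouse that appends rather than
-- overwrites rows (e.g., BigQuery). Table name `requests` is illustrative.
SELECT * EXCEPT (rn)
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (
      PARTITION BY idempotency_key   -- unique per request (see table schema below)
      ORDER BY updated_at DESC       -- keep the most recently updated version
    ) AS rn
  FROM requests
) AS t
WHERE rn = 1;
```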
What you can export
You can choose which data points to sync based on your use case. Each data point should be synced to a separate table in your data warehouse. Most users sync Messages for analytics and delivery troubleshooting. For internal logging, error analysis, and audit trails, you can sync all data points.

Data points

| Data point | What’s in it | Use it for |
|---|---|---|
| Messages | Delivery status, engagement, vendor info, failures | Analytics, delivery troubleshooting |
| Workflow Executions | Step-by-step workflow logs | Debugging workflow-level errors or computations such as user preference evaluation |
| Requests | API payloads and their responses | API debugging, audit trails, workflow-trigger-level errors |

Errors are logged in each data point as follows:

| Table | Errors |
|---|---|
| Requests | API-level errors and workflow-trigger-level errors (condition mismatch, user not found, etc.) |
| Workflow Executions | Workflow-level errors (dynamic variables in the workflow could not be resolved, template rendering failed, webhook returned a 404 response, etc.) |
| Messages | Delivery failures |
Table Schema
Each data point (Requests, Workflow Executions, Messages) has its own schema. The schema below is for the Requests data point.
| Column name | Description | Datatype |
|---|---|---|
| api_type | Entity type for the API call | string |
| api_name | Workflow, event, or broadcast name passed in the API call | string |
| distinct_id_list | List of user distinct_id values or object type/id the request was sent for | array |
| actor | Actor passed in the event or workflow API request | string |
| tenant_id | Tenant ID for which the API request was sent | string |
| payload | Input payload passed in the trigger, including API call details | json |
| response | HTTP API response | json |
| metadata | SDK, machine, and location information for the request | json |
| errors | Request-level errors with message and severity | array(json) |
| executions | Workflow or broadcast execution IDs and slugs triggered by this API call. Execution IDs can be used to link with the workflow executions table | array(json) |
| idempotency_key | Idempotency key passed in the API request. A UUID is generated if not provided | string |
| created_at | Time when the request was received by SuprSend (UTC) | datetime |
| updated_at | Time when this entry was last updated | datetime |
| status | Status of the API request | string |
Status values:
- completed: the request was processed successfully
- failure: the request failed to process due to an error
- partial_failure: the request was partially processed with some failures, or has an acceptable warning (e.g., workflow conditions evaluated to false)
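For example, to pull recent failed or partially failed API requests for an audit trail, a query along these lines should work. This is a sketch: the table name requests is illustrative, and the interval syntax is Athena/Presto-flavored, so adjust it for your engine.

```sql
-- Failed or partially failed requests from the last 24 hours,
-- with their error details. Table name `requests` is illustrative.
SELECT
  created_at,
  api_type,
  api_name,
  idempotency_key,
  status,
  errors
FROM requests
WHERE status IN ('failure', 'partial_failure')
  AND created_at >= current_timestamp - INTERVAL '1' DAY
ORDER BY created_at DESC;
```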
Linking different data points
These data points are linked, allowing you to trace a notification from the initial API request to final delivery. The idempotency key is the common identifier across all tables and can be used to follow a single request end to end. Since the idempotency key is provided in the API request, you can also store it in your system to correlate SuprSend processing with your internal logs.

| From → To | Join on |
|---|---|
| Requests → Workflow Executions | execution_id |
| Workflow Executions → Messages | wf_execution_id |
| Requests → Messages (shortcut) | idempotency_key |
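As an example, the shortcut join below follows one request straight to its resulting messages using the shared idempotency key. This is a sketch: the table names requests and messages are illustrative, and only the join keys come from the mapping above.

```sql
-- Trace a single API request to its resulting messages via the shared
-- idempotency_key. Table names `requests` and `messages` are illustrative.
SELECT
  r.idempotency_key,
  r.status      AS request_status,
  r.created_at  AS requested_at,
  m.*           -- delivery status, engagement, vendor info, failures
FROM requests AS r
JOIN messages AS m
  ON m.idempotency_key = r.idempotency_key
WHERE r.idempotency_key = 'YOUR_IDEMPOTENCY_KEY';  -- placeholder value
```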
Setup
Step 1: Create your S3 bucket
Open AWS S3 Console and create a bucket with these settings:
- Bucket name: something like suprsend-logs-production (save this; you’ll need it later)
- Region: pick one close to you
- Block all public access: Yes
- Encryption: SSE-S3 (or SSE-KMS for compliance)

Step 2: Create an IAM policy
This policy gives SuprSend permission to write to your bucket. In the IAM Console, create a policy with the JSON below, replacing YOUR_BUCKET_NAME with your actual bucket name. Save it as something like suprsend_s3_policy.
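A minimal policy sketch, based on the actions listed in the troubleshooting checklist further down (s3:PutObject, s3:GetObject, s3:ListBucket), looks roughly like this; the Sid names are arbitrary labels:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SuprSendObjectReadWrite",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
    },
    {
      "Sid": "SuprSendListBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME"
    }
  ]
}
```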
Step 3: Set up authentication
Two authentication methods are available:

| Method | When to use | We recommend |
|---|---|---|
| IAM Role | Production, enterprise, multi-account setups | ✅ Yes—credentials rotate automatically, no secrets to manage |
| IAM User | Development, testing, quick POCs | Only if IAM Role isn’t feasible. Requires manual key rotation every 90 days. |
- IAM Role (Recommended)
- IAM User
Use an IAM Role when:
- Running in production environments
- Security compliance requires no long-lived credentials
- You have multi-account AWS setups
- You want zero credential management overhead
- In the IAM Console, go to Roles → Create Role
- Select Another AWS Account and enter SuprSend’s account ID: 924219879248
- Attach the policy you just created
- Name it (e.g., suprsend_s3_role)

Save these for the next step: Role ARN + External ID (case-sensitive, no extra spaces)
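For reference, the trust policy generated for this role should look roughly like the sketch below: it lets SuprSend’s AWS account (924219879248) assume the role only when it presents the matching external ID. YOUR_EXTERNAL_ID is a placeholder for the external ID used in your setup.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::924219879248:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "YOUR_EXTERNAL_ID" }
      }
    }
  ]
}
```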
Step 4: Connect in SuprSend
Go to Settings → Connectors → Amazon S3 v2.0 → Add Connector. Enter your bucket details, then authenticate with either your Role ARN and External ID (IAM Role) or your Access Key (IAM User).

Step 5: Verify it’s working
Give it about 10 minutes, then check your S3 bucket. You should see folders like year=2025/month=01/... appearing.
Best practices
Security
- Use IAM Role in production—no keys to manage
- Never commit credentials to git
- Keep your bucket private with encryption enabled (SSE-S3 or SSE-KMS)
- Block all public access to your bucket
Managing roles & permissions
- Use IAM Role for production—credentials rotate automatically, no secrets to manage
- Rotate IAM User keys every 90 days—required for security compliance
- Use External ID for IAM Roles—prevents “confused deputy” attacks
- Assign policies to groups, not users—simplifies permission management
Data syncing & querying
- Sync only what you need—for analytics, Messages is often enough. Add Workflow Executions and Requests for debugging.
- Use updated_at for incremental queries: updated_at tells you when a given row was last updated. Filter on it instead of scanning all data when you only need recently updated rows, especially when fetching logs.
- Clean up old data regularly: new data is always appended, so files accumulate over time. Clean up old data periodically to avoid unnecessary storage costs (see the lifecycle rule sketch below).
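One way to automate the cleanup is an S3 lifecycle rule that expires old objects. The sketch below (usable with aws s3api put-bucket-lifecycle-configuration) assumes a 180-day retention window and applies to the whole bucket; adjust the prefix and retention to your needs.

```json
{
  "Rules": [
    {
      "ID": "expire-old-suprsend-exports",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Expiration": { "Days": 180 }
    }
  ]
}
```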
FAQs
Files not appearing?
Work through this checklist:
- Wait 10 minutes—first sync takes time
- Check bucket name—must match exactly, case-sensitive
- Check region—must match in both AWS and SuprSend
- Check IAM: verify that the Role ARN and External ID (IAM Role) or the Access Key (IAM User) are correct
- Check policy: confirm the policy includes these actions, assigned to the right bucket: "Action": ["s3:PutObject", "s3:ListBucket", "s3:GetObject"]
How do I know it's working?
The sync should be working if your AWS setup is correct and, in SuprSend → Settings → Connectors → Amazon S3 v2.0, you see:
- Sync toggle is ON
- Status shows “Active”
- At least one dataset selected
Missing data?
- Data point might not be selected—check your export settings
- When you first set up the connector, data points export only going forward (no historical backfill)
- Were notifications actually sent during that time?
- If sync was paused, data backfills when you resume
If you still see gaps in your data, contact support.
How does backfilling work?
| What you did | What happens |
|---|---|
| Paused then resumed | Backfills everything from the pause |
| Added a new dataset | Starts fresh, historical data will not be backfilled |
| Re-enabled a dataset | Backfills from when it was disabled |
Can I change data points later?
Absolutely:
- Add one: Starts exporting going forward
- Remove one: Stops syncing, data stays in S3
- Re-enable: Backfills automatically

