Music digital asset management

TrackCrate

Centralize music. Release fearlessly.

TrackCrate is a lightweight music asset hub for indie artists and small labels collaborating across time zones. It versions stems, artwork, and press with rights metadata, creates trackable shortlinks, and generates one-click AutoKit press pages with a private stem player. Centralize files, kill version chaos, and ship releases faster with expiring, watermarked downloads.

Product Details

Explore this AI-generated product idea in detail. Each aspect has been thoughtfully created to inspire your next venture.

Vision & Mission

Vision
Empower independent artists and small labels to release fearlessly worldwide, with every project effortlessly organized, shareable, and industry-ready.
Long Term Goal
By 2029, power 50,000 indie releases annually, cut release prep time 50%, and slash version errors 80%, becoming the default asset backbone for 25,000 independent artists and labels.
Impact
For indie artists and small labels, TrackCrate cuts release prep time by 40%, reduces file-sharing emails by 60%, and delivers 25% faster promo turnaround, reducing version errors and missed deliverables to drive cleaner, on-time rollouts across remote teams.

Problem & Solution

Problem Statement
Indie artists and small label managers lose stems, artwork, and press assets across cloud drives, emails, and links, causing version conflicts and delayed releases; current tools are bloated or generic storage lacking stem-aware versioning, rights metadata, and link tracking.
Solution Overview
TrackCrate centralizes stems, artwork, and press in a versioned vault with rights metadata and trackable shortlinks, ending lost files and version chaos. AutoKit creates a branded press page with a private stem player and expiring, watermarked downloads so collaborators access the right files instantly.

Details & Audience

Description
TrackCrate is a lightweight music asset hub that stores, versions, and shares stems, artwork, and press in one place. Built for indie artists and small labels collaborating remotely, it ends lost files, version chaos, and messy links by centralizing assets with rights metadata and trackable shortlinks. Its one-click AutoKit instantly generates a branded press page with a private stem player.
Target Audience
Indie music artists and small label managers (18–40) collaborating remotely, juggling scattered assets across time zones.
Inspiration
At 1:13 a.m. after a Friday session, a rapper dug through Gmail, the producer pinged three Drive folders, and the designer swore at an expired WeTransfer. The hi-res cover was missing, the release date slipping. In that pit-in-stomach silence, it clicked: music needs storage that knows stems, credits, and press. I hacked a folder script that auto-built a branded press page and private stem player—TrackCrate’s first spark.

User Personas

Detailed profiles of the target users who would benefit most from this product.

Sync-Savvy Sofia

- Age 32–45; freelance or agency music supervisor
- Based in LA/NYC/London; remote-friendly schedule
- Film/media or music-business background; clearance fluent
- Project-based income: fees, buyouts, and rush premiums

Background

Cut her teeth clearing songs at an ad agency, where a missed split once killed a national spot. Moved freelance, now demands instant rights clarity and organized stems to keep edits moving.

Needs & Pain Points

Needs

1. Immediate rights metadata with contacts
2. Watermarked stems for picture testing
3. Expiring links for clients and legal

Pain Points

1. Missing splits torpedo last-minute placements
2. Scattered stems stall edit sessions
3. Non-trackable links muddy approvals

Psychographics

- Worships clarity under crushing deadlines
- Prioritizes legal certainty over sonic novelty
- Values vendors who anticipate clearance hurdles
- Prefers tidy folders and unambiguous filenames

Channels

1. Gmail - primary
2. Slack - team threads
3. LinkedIn - sourcing
4. DISCO - music inbox
5. Frame.io - video reviews

Metadata-Minded Malik

- Age 26–40; indie label or publisher role
- Remote-first; coordinates across time zones
- Music business and info-systems background; DDEX fluent
- Manages PRO/MLC/SoundExchange registrations and deliveries

Background

Recovered six figures after auditing mis-registered works early in his career. Now builds rigorous schemas and demands version truth across assets and metadata.

Needs & Pain Points

Needs

1. Exportable credits and splits in DDEX/CSV
2. Version-lock metadata to asset revisions
3. Approval history with timestamps

Pain Points

1. Inconsistent filenames break ingest automations
2. Missing ISRC/ISWC delays releases
3. Conflicting splits trigger disputes

Psychographics

- Precision over speed, every field matters
- Trusts systems, not tribal knowledge
- Hates ambiguity; loves standardized schemas
- Documentation is care, not bureaucracy

Channels

1. Google Sheets - master data
2. Slack - cross-team updates
3. Gmail - external coordination
4. Notion - documentation
5. Dropbox - legacy archives

Milestone-Driven Manager Mia

- Age 27–42; manages 2–5 active artists
- Travels frequently; mobile-first workflows
- Background in tour or project management
- Revenue tied to timely releases and campaigns

Background

Learned coordination running merch and advancing shows, then scaled into management. Repeated delays from version chaos taught her to centralize assets and tighten approvals.

Needs & Pain Points

Needs

1. Instant EPKs from approved assets
2. Trackable shortlinks for partner outreach
3. Clear approval checkpoints per deliverable

Pain Points

1. Chasing latest files across threads
2. Conflicting versions derail timelines
3. No visibility into partner follow-up

Psychographics

- Calendar-centric, thrives on visible progress
- Pragmatic, outcomes over process purity
- Communicates constantly, prefers async updates
- Data-guided, trusts engagement signals

Channels

1. Gmail - hub inbox
2. WhatsApp - quick nudges
3. Slack - team coordination
4. Google Drive - legacy files
5. Airtable - release tracker

Snippet-Savvy Sienna

- Age 22–35; edits across TikTok, Reels, Shorts
- Remote contractor for indie acts and labels
- Proficient in CapCut and Adobe Premiere
- Works nights to ride trend windows

Background

Started editing fan cams, then cut tour recaps for emerging acts. Burned by outdated files and mismatched art, she now insists on a single source of truth.

Needs & Pain Points

Needs

1. Approved snippets and artwork per aspect ratio
2. Watermarked previews for stakeholder reviews
3. One link with latest assets

Pain Points

1. Waiting on clearances kills trends
2. Wrong dimensions waste editing time
3. Old files accidentally hit publish

Psychographics

- Speed demon, ships before trends cool
- Brand guardian, consistent visuals matter
- Mobile-first, hates clunky workflows
- Prefers reusable templates, minimal rework

Channels

1. TikTok - daily posting
2. Instagram - reels focus
3. Gmail - asset delivery
4. CapCut - quick edits
5. Notion - content calendar

Catalog-Caretaker Carmen

- Age 35–55; steward of 2,000–20,000 tracks
- Works for legacy indie, estate, or reissue imprint
- Hybrid office/remote; hardware archive access
- KPI: recoverability and rights certainty

Background

Inherited messy drives and unlabeled tapes from a defunct label. After losing a sync due to missing credits, she prioritized systematic migration and proofed metadata.

Needs & Pain Points

Needs

1. Bulk ingest with duplicate detection
2. Reconcile versions with authoritative metadata
3. Controlled, watermarked researcher access

Pain Points

1. Duplicate files inflate storage and confusion
2. Unknown ownership halts monetization
3. Artwork sources lost or corrupted

Psychographics

- Preservation-first, futureproof everything
- Structure brings calm; chaos wastes time
- Risk-averse, validates before sharing
- Patient, methodical progress over sprints

Channels

1. Dropbox - current vault
2. Google Drive - team share
3. Discogs - reference checks
4. Gmail - external queries
5. Trello - migration board

Grant-Ready Grace

- Age 24–38; solo artist or duo
- Based in grant-heavy regions (CA, EU, AUS)
- Juggles gigs, part-time work, applications
- Moderate tech fluency; minimal admin support

Background

Missed a major grant because a link expired and credits were incomplete. Now systematizes submissions, tailoring press pages and checking analytics before hitting send.

Needs & Pain Points

Needs

1. AutoKit pages matching application specs
2. Shortlinks with per-recipient tracking
3. Expiring, watermarked downloads for juries

Pain Points

1. Each grant demands different materials
2. EPK versions multiply and confuse
3. No feedback visibility post-submission

Psychographics

- Opportunity hunter, deadlines drive focus
- Quality over quantity in applications
- Seeks clarity; hates vague requirements
- Confidence grows from organized materials

Channels

1. Submittable - application hub
2. Gmail - correspondence
3. Instagram - presence
4. LinkedIn - professional touch
5. Google Docs - statements

Product Features

Key capabilities that make this product valuable to its target users.

QuotaGuard Limits

Set per-recipient play and download caps with automatic expiry once limits are reached. Customize budgets by asset (e.g., stems vs. masters) and get threshold alerts before auto‑revoke. This curbs overexposure, keeps promos controlled, and removes the manual hassle of policing usage.

Requirements

Per-Recipient Usage Caps
"As a label manager, I want to set per-recipient caps on plays and downloads so that promo recipients can sample but not exceed the usage I intend."
Description

Enable creators to set hard caps on plays and downloads for each identified recipient across shared links, AutoKit press pages, and the private stem player. Quotas can be defined at the share, asset, or bundle level and applied per recipient identity (email-verified user or invited collaborator). The system increments counters on qualified play/download events with debounce and deduplication windows to prevent accidental double counts. When a cap is reached, further access to the restricted action is blocked for that recipient and a friendly, branded message explains the limit with an optional request-more flow. Remaining allowances are surfaced to senders in the dashboard and can be optionally shown to recipients. Works alongside expiring and watermarked downloads, honoring existing rights metadata and share settings without breaking current flows. For anonymous/open links, a link-level quota fallback is enforced when recipient identity cannot be verified.
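The counting rules above (hard caps, qualified events, a deduplication window) can be sketched as follows; the `RecipientQuota` class, its field names, and the window constant are illustrative assumptions, not TrackCrate's actual implementation. The 5-second play/pause debounce would live in the player session layer and is omitted here.

```python
import time
from dataclasses import dataclass, field

DEDUP_SECONDS = 120  # identical events from the same device/IP count once


@dataclass
class RecipientQuota:
    """Per-recipient counter for one asset and one action (play or download)."""
    cap: int
    used: int = 0
    last_counted: dict = field(default_factory=dict)  # (device, ip) -> timestamp

    def record_play(self, device, ip, now=None):
        """Return True if playback is allowed; count only qualified events."""
        now = time.time() if now is None else now
        if self.used >= self.cap:
            return False  # cap reached: caller shows the branded limit message
        last = self.last_counted.get((device, ip))
        if last is not None and now - last < DEDUP_SECONDS:
            return True  # duplicate inside the dedup window: play, don't count
        self.last_counted[(device, ip)] = now
        self.used += 1
        return True
```

A real system would persist the counters and event log; the shape of the check (block, dedup, then count) stays the same.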

Acceptance Criteria
Email-Verified Recipient Play Cap on AutoKit Page
Given an AutoKit press page is shared to a recipient whose email is verified
And an asset-level play cap of 5 is set for Track A for that recipient
And the debounce window is configured to 5 seconds and the deduplication window to 2 minutes
When the recipient initiates 5 distinct play events for Track A separated by more than 5 seconds each
Then the recipient’s play counter for Track A increments to 5
And a 6th play attempt is blocked with a friendly message and HTTP 403
And repeated play/pause within the 5-second debounce window does not increment the counter
And identical play events within the 2-minute dedup window from the same device/IP are not double-counted
And the event log records timestamp, recipient ID, asset ID, and action type
Download Cap Enforcement with Watermarks and Expiry
Given a recipient has a download cap of 2 for Master File A and the share enforces watermarking and a 24-hour link expiry
When the recipient downloads Master File A twice within the link validity window
Then both downloads succeed and are watermarked per rights metadata
And a 3rd download attempt is blocked with a branded message explaining the cap
And attempts made after link expiry are denied due to expiry and do not increment counters
And share-level authentication requirements (e.g., login) remain enforced throughout
Quota Scope Resolution and Counter Decrement Rules
Given applicable quotas are configured simultaneously for a recipient: share-level plays = 10, bundle-level plays = 6, asset-level plays = 3 for Track B
When the recipient plays Track B once
Then all applicable counters (share, bundle, asset) each decrement by 1
And the effective remaining allowance displayed equals the minimum remaining across applicable scopes
And access is blocked when any one applicable scope reaches 0 remaining
And the block message cites the scope that hit 0 (e.g., “Asset-level play limit reached”)
And the dashboard displays remaining counts per scope for that recipient and asset
Anonymous Link Fallback Quota Enforcement
Given an open link where recipient identity cannot be verified
And a link-level download quota of 50 applies to the bundle
When anonymous users collectively complete 50 qualified downloads across assets in that link
Then the 51st download attempt from any anonymous user is blocked
And the block message explains the link-level quota was reached
And no per-recipient counters are created or updated
And events within configured debounce/dedup windows are not double-counted
Cap-Reached Messaging and Request-More Flow
Given a recipient has reached their play or download cap for an asset/share
When the recipient attempts the restricted action again
Then a branded, customizable message is shown with the sender’s name/logo and copy
And a “Request More” action is available to the recipient
And submitting the request sends a notification to the sender including recipient ID, context (asset/share), and requested units
And the recipient sees a confirmation state without regaining access
And upon the sender increasing the cap, the recipient can retry successfully without needing a new link
Dashboard Visibility of Remaining Allowances
Given a sender opens the dashboard for a specific share or bundle
When viewing the Recipients or Usage tab
Then for each recipient the UI shows used and remaining plays/downloads per scope (share, bundle, asset) and the effective remaining allowance
And values update within 5 seconds of a new qualified event
And data can be exported to CSV including recipient identifier, asset ID, scope, used, remaining, and last-activity timestamp
Recipient-Facing Allowance Display Toggle
Given the sender enables “Show remaining allowances to recipients” on a share
When a verified recipient views the AutoKit page or private stem player
Then the UI displays remaining plays/downloads relevant to the recipient and asset/bundle
And when the toggle is disabled, no allowance counts are displayed to recipients
And no recipient can see other recipients’ allowances or identities
Asset-Type Budget Profiles
"As an indie artist, I want default budgets for each asset type so that I don’t have to micromanage limits every time I share a release."
Description

Provide configurable quota templates by asset type (e.g., masters, stems, artwork, press materials) and apply them at catalog, release, or share scope. Creators can define different play/download budgets and behaviors per type, such as stricter limits on masters while allowing more stem previews. A clear rules hierarchy determines precedence: share-specific overrides release-level, which overrides catalog defaults. Budgets can be copied, versioned, and previewed before activation, with validation to catch conflicting rules. Integrates with rights metadata to enforce restrictions based on ownership and licensing constraints. Ensures consistent quota policy across assets while allowing granular exceptions when needed.
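The stated precedence (share-specific overrides release-level, which overrides catalog defaults) amounts to a first-defined-wins lookup over the scopes. A minimal sketch, assuming `None` means "no cap defined at that scope"; the function name is illustrative:

```python
def effective_cap(share=None, release=None, catalog=None):
    """Resolve the cap for one asset type and action: the most specific
    scope that defines a value wins (share > release > catalog)."""
    for cap in (share, release, catalog):
        if cap is not None:
            return cap
    return None  # no scope defines a cap: treated as unlimited
```

This mirrors the acceptance criteria below: with catalog Play 6, release Play 4, and share Play 2, the effective play cap is 2.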

Acceptance Criteria
Create and Save Asset-Type Budget Profile Template
Given I am a Catalog Admin
When I create a template for asset type "Masters" with Play Cap=10 per recipient, Download Cap=2 per recipient, Auto-Expire on cap reached=true, and Threshold Alert=80%
Then the template saves as Version 1 with status "Draft" and appears in the Template Library within 2 seconds
And invalid numeric input (negative, non-integer, blank) blocks save with inline error messages naming the field
And duplicate template names within the same scope are rejected with a clear error
Apply Profiles at Catalog, Release, and Share Scopes
Given a Catalog Default profile (Masters: Play 6, Download 1), a Release Override (Masters: Play 4, Download 1), and a Share Override (Masters: Play 2, Download 0)
When a recipient opens a Master via that Share
Then the effective caps are Play 2 and Download 0 for that recipient
And the enforcement decision is logged with a precedence record indicating Share > Release > Catalog and the applied profile IDs
Detect Conflicts on Profile Activation
Given two active Share-level profiles exist for asset type "Masters" on the same Share
When I attempt to activate the second profile
Then activation is blocked and a validation message lists the conflicting profiles by name/ID and scope with links to edit or disable
And no partial changes are applied; the previously active profile remains effective
Copy, Version, and Preview Budget Profiles
Given an existing template "Masters Tight v1"
When I copy it to "Masters Tight v2" and change Play Cap from 4 to 3
Then the new version is created with incremented version number, retains audit history, and is saved as "Draft"
And when I click Preview for Release "R-123" and recipient "dj@domain.com"
Then I see the computed effective caps per asset type before activation, including the scope that will apply to each cap
Rights Metadata Integration Enforcement
Given an asset has rights metadata prohibiting downloads
When a profile would otherwise allow downloads > 0
Then the system enforces downloads=0 for that asset and displays "Downloads restricted by rights" on the share
And attempts to enable downloads in the profile while rights prohibit are blocked with a validation error
Auto-Expiry on Cap Reached per Recipient
Given a per-recipient Play Cap=3 for asset type "Masters" at the effective scope
When a recipient completes the 3rd play of a Master
Then further play attempts for Masters for that recipient are blocked within 5 seconds and the share shows "Play quota reached; access expired"
And an audit log entry is recorded with recipient ID, asset type, cap reached, timestamp, and enforcing scope
Auto-Expiry and Access Revocation
"As a promotions coordinator, I want access to auto-expire when limits are reached so that I don’t have to manually police overuse."
Description

Automatically expire access or revoke specific actions when a quota limit is met or a time window ends, whichever comes first based on configured policy. Support flexible actions on breach, including blocking downloads, disabling playback, hiding assets, or revoking recipient tokens without deleting the share. Provide customizable post-expiry messaging and a self-serve request-extension flow that notifies the sender. All revocation events are recorded with timestamps, actor, and policy reason for auditability. Behaves predictably with existing expiring links and watermarking, ensuring that revocation does not orphan files or break unrelated shares.
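One way to model "quota met or time window ended, whichever comes first" is a single state function evaluated on every access attempt. This is a sketch under assumed names (`AccessState`, `access_state`), not the product's actual API:

```python
from datetime import datetime, timezone
from enum import Enum


class AccessState(Enum):
    ACTIVE = "active"
    TIME_EXPIRED = "time_expired"
    QUOTA_EXCEEDED = "quota_exceeded"


def access_state(used, cap, window_end, now):
    """Whichever condition trips first revokes. Time is checked before
    quota, so a simultaneous breach yields a single 'time_expired'
    reason rather than two revocation events."""
    if now >= window_end:
        return AccessState.TIME_EXPIRED
    if used >= cap:
        return AccessState.QUOTA_EXCEEDED
    return AccessState.ACTIVE
```

Because state is derived on each check rather than stored, raising the cap or extending the window re-enables access without issuing a new link.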

Acceptance Criteria
Quota Breach Auto-Revokes Downloads (Per-Recipient, Per-Asset-Type)
Given a share with recipient R and assets including stems and masters, and a download quota of 5 for stems only
When R completes the 5th successful stem download within the share
Then the 5th download succeeds and is watermarked per configuration
And when R attempts a 6th stem download in the share
Then the download is blocked with HTTP 403 and reason "quota_exceeded"
And the configured post-expiry message for downloads is displayed to R
And playback of stems remains allowed if not restricted by policy
And master downloads remain allowed if not restricted by policy
And a revocation event is recorded with timestamp, actor "system", policy reason "quota:stems:5", and scope {share_id, recipient_id}
Time Window Expiry Enforces Earliest Policy Action
Given a share with a time window end T and a quota that has not been reached
When the current time reaches or exceeds T
Then the configured actions "disable playback" and "hide assets" are applied immediately
And any attempt by the recipient to play returns HTTP 403 with reason "time_expired"
And asset listings are hidden for that recipient within the share
And a single revocation event is logged with reason "time_expired" at time T
And no quota-based revocation event is generated for the same moment
Flexible Actions on Breach—Token Revocation Without Deleting Share
Given a share configured to "revoke recipient token" on breach (quota or time)
When a breach condition occurs for recipient R
Then R's token becomes invalid for download, playback, and listing on that share
And the share remains accessible to other recipients
And files are not deleted or moved in storage
And unrelated shares for R remain unaffected and functional
And the sender can later re-enable access via an extension without creating a new share
Custom Post-Expiry Messaging and Self-Serve Extension Request
Given a share with custom post-expiry message M and extension requests enabled
When the recipient encounters a blocked action due to expiry or revocation
Then M is displayed along with a "Request Extension" control
And submitting the request captures recipient_id, share_id, requested notes/reason, and the blocked action type
And a notification is sent to the sender via configured channels (email and in-app) within 1 minute
And the request appears in the sender’s queue with status "Pending"
And access remains blocked until the sender approves an extension
Revocation Event Audit Trail Completeness
Given any expiry or revocation action is enforced by policy
Then an immutable audit record is written containing event_id, share_id, recipient_id, asset_scope, action_enforced, policy_type (quota|time), policy_value, reason_code, actor (system|admin), and timestamp in UTC ISO 8601
And the record is retrievable via API and admin UI within 5 seconds of the event
And audit records are append-only and cannot be edited or deleted
And any attempt to alter an audit record is logged as a separate security event
Compatibility with Expiring Links and Watermarking; No Orphaned Files
Given a share uses expiring shortlinks and watermarked downloads
When access is revoked for recipient R due to a breach
Then shortlinks for other recipients and unrelated shares remain functional
And underlying files and watermark settings remain intact and unchanged
And previously generated watermarked files are only accessible via unrelated valid shares; otherwise access is blocked with 403
And no storage objects are left without references (no orphaned files)
And unrelated requests do not return 5xx errors due to the revocation
Threshold Alerts and Notifications
"As a small label owner, I want proactive alerts before caps are hit so that I can adjust limits or follow up without surprises."
Description

Allow owners to configure threshold alerts at defined percentages of quota consumption (e.g., 50%, 75%, 90%) and on limit-reached events. Deliver alerts via in-app notifications, email, and optional Slack/webhook channels with recipient, asset, and remaining allowance context. Implement rate limiting and digesting to avoid notification fatigue for high-volume campaigns. Provide per-share notification preferences and team-wide defaults. Notifications link back to the relevant share or recipient view for one-click actioning (extend, revoke, or message). All alerts are logged for compliance and team transparency.
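Firing each configured threshold exactly once as consumption crosses it (and never re-firing after a drop and re-cross) reduces to comparing the previous and new usage against each boundary. A sketch with illustrative names; the default percentages match the examples above:

```python
def crossed_thresholds(prev_used, new_used, quota,
                       thresholds=(50, 75, 90), already_sent=frozenset()):
    """Return the threshold percentages newly crossed by this usage
    increase. Thresholds in `already_sent` never fire again, so a
    drop-and-re-cross does not re-alert."""
    fired = []
    for pct in sorted(thresholds):
        boundary = quota * pct / 100.0
        if pct not in already_sent and prev_used < boundary <= new_used:
            fired.append(pct)
    return fired
```

The caller would persist the fired set per recipient/share/metric and hand each result to the rate-limiting and digest layer before delivery.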

Acceptance Criteria
Configuring Threshold Percentages for a Share
Given an owner opens Share Settings > QuotaGuard > Alerts for Share S
And team-wide defaults exist at 50%, 75%, 90%
When the owner customizes thresholds to 60% and 85%
Then each threshold value must be an integer between 1 and 99 and not duplicated
And the thresholds are saved at the share level overriding team defaults
And the active thresholds are displayed in ascending order on the share summary
And the configuration is persisted and available via API within 1 second of save
Triggering Threshold Alerts on Consumption Crossings
Given Share S defines thresholds 50%, 75%, 90% and a per-recipient quota Q for asset type T
And recipient R has current usage U for metric M (plays or downloads) < 50%
When R's usage for T,M increases to at or above 50%
Then send exactly one 50% threshold alert for R,S,T,M and mark it as sent
And do not re-send the same threshold alert for R,S,T,M on subsequent events
And include remaining allowance (Q - used), total used, threshold crossed, and timestamp in the alert
And if usage later drops below a sent threshold and re-crosses it, do not alert again for that threshold
Multi-Channel Delivery with Preferences and Fallbacks
Given a share has channel preferences set (in-app, email, Slack, webhook) and subscriber list defined (owner and any subscribed team members)
When an alert is emitted for Share S
Then deliver the alert to each enabled channel for all subscribers
And if Slack/webhook is not configured or returns non-2xx, retry up to 3 times with exponential backoff and log the failure
And if a channel ultimately fails, fall back to in-app and email where enabled
And the in-app inbox shows an unread item and the notification badge increments within 5 seconds
And emails are addressed to the correct subscribers and include a subject with [Threshold Crossed] or [Limit Reached] tags
Limit-Reached Auto-Revoke and Alert
Given QuotaGuard auto-expiry is enabled for Share S and asset type T
And recipient R has a quota Q for T
When R reaches 100% of Q for metric M (plays or downloads)
Then immediately revoke further access to T for R on S for metric M
And send a limit-reached alert via all enabled channels including used, remaining=0, and event timestamp
And include deep links to Extend, Revoke All for T, and Message R in the alert
And block subsequent access attempts by R for T,M and log each blocked attempt with reason=limit_reached
Rate Limiting and Digesting for High-Volume Campaigns
Given team defaults are set to max 1 real-time alert per recipient per share per channel per 5 minutes
And a digest window of 15 minutes with a digest threshold of 10 alerts per owner
When more than 10 alerts would be delivered to the same owner within a 15-minute window
Then suppress additional real-time alerts after the first and queue them for a digest
And send a digest at the end of the window summarizing counts by share, asset type, and metric with links to views
And annotate suppressed alerts in the log with suppression_reason=rate_limited and digest_id
And ensure no more than 1 real-time alert per recipient per share per channel is sent within any 5-minute window
Per-Share Preferences and Team-Wide Defaults Inheritance
Given a team-wide default configuration exists for channels, thresholds, and digest settings
When a new share is created
Then the share inherits the team defaults
And when an owner updates that share's notification preferences
Then changes take effect for future alerts without altering the team defaults
And disabling a channel at the share level prevents delivery on that channel even if enabled by team default
And if a share has no explicit thresholds, team defaults apply
Alert Payload, Deep Links, and Compliance Logging
Given any alert (threshold or limit-reached) is delivered
Then the payload includes: share name/ID, recipient name/ID/email, asset type/ID/version, metric (plays/downloads), event type, threshold%, used, remaining, ISO 8601 timestamp, team ID
And the alert contains a deep link to the Share or Recipient view enabling one-click Extend, Revoke, or Message actions subject to permissions
And every delivery attempt is logged with event_id, channel, delivery_status (delivered/bounced/retried/failed), retry_count, and correlation_id
And the alert log is visible to team Admins/Owners under Compliance > Notifications with filters by date range, share, recipient, channel, and event type
Real-Time Usage Tracking and Audit Log
"As a product manager, I want a trustworthy audit log of usage so that we can enforce limits confidently and answer disputes with evidence."
Description

Instrument reliable, low-latency tracking for play and download events with debouncing, bot filtering, and cross-session deduplication. Maintain a per-recipient, per-asset ledger that supports time-series views, filters, and CSV export for compliance and reporting. Counters are consistent across time zones and update dashboards in near real time to reflect remaining allowances. Provide integrity checks and reconciliation jobs to correct drift between counters and logs. Expose a read-optimized view for analytics without impacting playback performance. Data retention policies respect privacy requirements while preserving essential audit trails.
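The deduplication-plus-reconciliation idea can be illustrated with an idempotency-keyed ledger whose counters can always be recomputed from the log. `UsageLedger` below is a toy in-memory stand-in for whatever durable store the real system would use:

```python
class UsageLedger:
    """Append-only event log keyed by idempotency key, with a derived
    counter that a reconciliation job can recompute from the log."""

    def __init__(self):
        self.events = {}    # idempotency_key -> event dict
        self.counters = {}  # (recipient_id, asset_id, action) -> int

    def record(self, idempotency_key, recipient_id, asset_id, action, ts):
        """Record one play/download event; duplicates are ignored."""
        if idempotency_key in self.events:
            return False  # duplicate delivery: exactly one ledger entry
        self.events[idempotency_key] = {
            "recipient": recipient_id, "asset": asset_id,
            "action": action, "ts": ts,
        }
        key = (recipient_id, asset_id, action)
        self.counters[key] = self.counters.get(key, 0) + 1
        return True

    def reconcile(self):
        """Recompute counters from the log; return {key: (before, after)}
        for any drift that was corrected."""
        truth = {}
        for e in self.events.values():
            k = (e["recipient"], e["asset"], e["action"])
            truth[k] = truth.get(k, 0) + 1
        corrections = {k: (self.counters.get(k), v)
                       for k, v in truth.items() if self.counters.get(k) != v}
        self.counters = truth
        return corrections
```

Because the log is the source of truth, drifted counters are always repairable, which is what makes the audit trail defensible in disputes.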

Acceptance Criteria
Debounced Play and Download Event Tracking with Bot Filtering
Given a recipient R and asset A, When R initiates 3 play events for A within a 5-second window from the same session, Then exactly 1 play is recorded in the ledger and the play counter increments by 1.
Given known-bot user agents or requests lacking human interaction signals, When a play or download request is made, Then the event is rejected and not logged and a bot_filtered metric increments.
Given duplicate download retries for the same signed URL of asset A by recipient R within 10 minutes, When the client retries due to network errors, Then exactly 1 download is recorded and the counter increments by 1.
Given the same event arrives multiple times with the same idempotency key for recipient R and asset A within a 15-minute window, When events are processed, Then only 1 ledger entry exists for that idempotency key.
Near-Real-Time Counter Updates and Timezone-Consistent Displays
Given an accepted play or download event for recipient R and asset A, When the event is processed, Then dashboard counters update within 2 seconds p95 and 5 seconds p99 to reflect remaining allowances.
Given recipient R reaches the configured allowance for asset A, When the final allowed event is recorded, Then remaining allowance shows 0 and access state becomes Expired within 2 seconds.
Given viewers in different time zones, When they view the same date range, Then totals and remaining allowances are identical while timestamps display in the viewer's local time with storage in UTC.
Time-Series Audit Log with Filters and CSV Export
Given a selected date range, recipients, asset types, and event types, When the audit log is queried, Then only matching ledger entries are returned and aggregates equal the filtered result set.
Given a paginated result set of N rows, When CSV export is triggered, Then the CSV contains exactly N rows in the same sort order with columns [timestamp_utc, recipient_id, asset_id, asset_type, event_type, session_id, ip_hash, user_agent_hash, country_code].
Given an export up to 100,000 rows, When CSV export is requested, Then the file is generated within 60 seconds p95 and a downloadable link is issued that expires after 24 hours.
Given a chosen display timezone, When CSV is exported, Then timestamps include both UTC and the selected timezone offset.
Integrity Checks and Reconciliation of Counters vs Logs
Given daily reconciliation, When the job runs, Then aggregates recomputed from the audit log equal stored counters for each recipient-asset and any discrepancy is corrected atomically with a "correction" entry appended.
Given tamper-evident hashing, When a day's ledger is closed, Then a Merkle root of that day's entries is computed and stored and later verification detects any alteration.
Given manual reconciliation is triggered for recipient R and asset A, When the job completes, Then dashboard counts reflect corrected values and an audit event "reconciled" is logged with before/after values.
Given reconciliation is running, When playback traffic occurs, Then p95 playback start latency degradation is < 5 ms and no 5xx errors are introduced.
Read-Optimized Analytics View Isolation
Given heavy analytics queries scanning 7 days of events, When executed, Then they are served from a read-optimized store and do not increase playback start latency by more than 5 ms p99. Given sustained event ingestion of 100 events per second, When analytics queries run concurrently, Then event acceptance latency remains <= 200 ms p95 and <= 500 ms p99. Given the analytics replica lag exceeds 30 seconds, When dashboards are viewed, Then a "data delayed" indicator is shown and ingestion/playback remain unaffected.

Data Retention and Privacy Preservation
Given retention policies, When 180 days have elapsed since an event, Then raw event payloads are purged while derived aggregates and minimal audit fields are retained. Given a GDPR deletion request for recipient R, When processed, Then direct identifiers (e.g., email, IP, user agent) are removed or pseudonymized within 30 days and aggregates remain without PII. Given privacy minimization mode, When events are logged, Then IP and user agent are stored as salted hashes and country code is retained for geo reporting. Given a compliance export, When generated, Then redacted fields are labeled and integrity hashes validate the exported records.
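Privacy minimization mode above calls for salted hashes of IP and user agent. A minimal sketch, assuming the salt is a server-side secret rotated on a defined cadence (the salt value here is illustrative):

```python
import hashlib
import hmac

def pseudonymize(value: str, salt: bytes) -> str:
    """Store only a keyed (salted) hash of IP / user agent.
    HMAC avoids naive concatenation pitfalls; raw values never persist."""
    return hmac.new(salt, value.encode(), hashlib.sha256).hexdigest()

salt = b"server-salt-2024-06"  # hypothetical rotating server secret
ip_hash = pseudonymize("203.0.113.42", salt)
assert pseudonymize("203.0.113.42", salt) == ip_hash        # stable per salt
assert pseudonymize("203.0.113.42", b"next-salt") != ip_hash  # rotation unlinks
```

Rotating the salt also serves the GDPR path: once an old salt is destroyed, its hashes can no longer be linked back to an identifier.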
Threshold Alerts and Auto-Expiry Signaling
Given thresholds at 80% and 100% of allowance, When recipient R crosses a threshold for asset A, Then an alert event is emitted within 2 seconds and recorded in the audit log. Given 100% allowance is reached, When the final allowed event is accepted, Then an auto-expiry signal is published and subsequent access checks for recipient R and asset A return "revoked" within 2 seconds. Given counters are corrected downward after reconciliation, When remaining allowance increases, Then a "quota_restored" audit event is logged and access is re-enabled immediately.
Admin Overrides and Grace Controls
"As an A&R coordinator, I want to quickly grant extra plays to a key partner so that I can keep momentum without rebuilding links."
Description

Offer fine-grained controls for team members to extend, reset, pause, or remove quotas for specific recipients or entire shares. Support whitelisting recipients to be exempt from quotas and defining short grace windows that permit limited additional usage post-expiry. Include bulk actions for campaign management and confirmation flows to prevent accidental changes. Every override action is captured in the audit log with actor, timestamp, scope, and reason. Changes propagate immediately to enforcement services without requiring link regeneration.
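The audit requirement above (actor, timestamp, scope, reason on every override) can be modeled as a small record type. A sketch under the assumption that entries are appended to a write-only log; field names are illustrative:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class OverrideAuditEntry:
    actor: str
    action: str            # extend | reset | pause | resume | remove_quotas | whitelist
    scope: dict            # e.g. {"share": "S1", "asset": "A1", "recipient": "R1"}
    reason: str
    previous_values: dict
    new_values: dict
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record_override(entry: OverrideAuditEntry, log: list) -> None:
    """Append an override to the audit log, enforcing the non-empty reason
    required by the confirmation flows."""
    if not entry.reason.strip():
        raise ValueError("override requires a non-empty reason")
    log.append(asdict(entry))

log: list = []
record_override(OverrideAuditEntry(
    actor="coordinator@label.test", action="extend",
    scope={"share": "S1", "asset": "A1", "recipient": "R1"},
    reason="key partner campaign",
    previous_values={"play_cap": 10}, new_values={"play_cap": 15}), log)
assert log[0]["action"] == "extend"
```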

Acceptance Criteria
Extend Quota for Specific Recipient and Asset
Given an admin with Manage Quotas permission selects recipient R and asset A within share S When the admin enters an extension of X plays/downloads and optionally sets a new expiry datetime D and confirms with a non-empty reason Then the system increases the remaining quota by X and updates expiry to D (if provided) without regenerating the share link And enforcement reflects the new limits within 5 seconds And an audit log entry is recorded with actor, timestamp, action=extend, scope={share:S, asset:A, recipient:R}, previous_values, new_values, reason
Reset Usage Counters for Recipient on Share
Given an admin selects recipient R within share S to reset usage counters When the admin confirms the reset with a non-empty reason Then play and download counters for R on S are set to 0 and remaining quotas are recalculated from the original caps without changing the current expiry And enforcement reflects the reset within 5 seconds without link regeneration And an audit log entry is recorded with actor, timestamp, action=reset, scope={share:S, recipient:R}, previous_values, new_values, reason
Pause and Resume Quota Enforcement for Recipient
Given an admin chooses to pause quota enforcement for recipient R on share S When the admin confirms the pause with a non-empty reason Then all plays/downloads for R on S are allowed regardless of caps, and no quota counters decrement while paused And enforcement enters paused state within 5 seconds without link regeneration And an audit log entry is recorded with actor, timestamp, action=pause, scope={share:S, recipient:R}, reason When the admin resumes enforcement Then caps and counters resume from their pre-pause values and an audit log entry is recorded with action=resume
Remove Quotas for Entire Share
Given an admin selects share S and chooses Remove Quotas When the admin confirms by typing REMOVE and provides a non-empty reason Then all quota enforcement for share S is disabled for all recipients without regenerating links And enforcement reflects the removal within 5 seconds And an audit log entry is recorded with actor, timestamp, action=remove_quotas, scope={share:S}, previous_values, new_values, reason
Whitelist Recipient Exempt from Quotas
Given an admin selects recipient R on share S and chooses Whitelist from quotas When the admin optionally sets a whitelist expiry datetime D_w and confirms with a non-empty reason Then quota enforcement is bypassed for R on S (no blocking, no decrementing) until D_w if provided, otherwise indefinitely, without link regeneration And enforcement reflects the whitelist within 5 seconds And an audit log entry is recorded with actor, timestamp, action=whitelist, scope={share:S, recipient:R}, whitelist_expiry=D_w, reason
Configure and Enforce Post-Expiry Grace Window
Given an admin configures a grace window for share S or asset A with duration G minutes and budget B plays/downloads When a recipient R reaches quota expiry on the configured scope Then R is allowed up to B additional plays/downloads within G minutes post-expiry without regenerating links And after B is exhausted or G elapses (whichever first), access is blocked by enforcement And start and end of grace usage are recorded and an audit log entry is created with actor, timestamp, action=grace_applied, scope, budget=B, duration=G
Bulk Overrides with Confirmation and Per-Target Results
Given an admin selects a set of N targets (recipients and/or shares/assets) and chooses a bulk action (extend X, reset, pause, remove quotas, whitelist) When the system presents a preflight summary of affected targets and the admin confirms with a non-empty reason Then the action is executed for each target independently, with no link regeneration, and enforcement updates within 5 seconds per target And the UI reports per-target success/failure and no unaffected targets are modified And an audit log entry is recorded for each target with actor, timestamp, action, scope, previous_values, new_values (or state), reason
QuotaGuard API and Webhooks
"As a developer at a label, I want APIs and webhooks for quotas so that I can integrate limits with our CRM and reporting pipelines."
Description

Provide REST endpoints to configure quotas, retrieve recipient allowance states, and perform overrides programmatically, with OAuth or API key authentication and idempotency. Emit webhooks for key lifecycle events such as threshold crossed, limit reached, action revoked, and override applied, enabling external workflows and BI integrations. Ensure backward compatibility with existing TrackCrate shortlink and AutoKit APIs, including consistent resource identifiers. Include pagination, filtering, and ETag-based caching for efficiency, plus sandbox keys for testing without affecting production counters. Comprehensive documentation and examples accelerate partner adoption.
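The idempotency behavior described above (same Idempotency-Key replays the original response with no duplicate side effects) can be sketched server-side. This is illustrative only; a real store would bound entries to a 24-hour TTL and compare payload hashes before replaying:

```python
import uuid

class IdempotencyStore:
    """Replay the original response when an Idempotency-Key repeats."""
    def __init__(self):
        self._seen: dict[str, dict] = {}

    def execute(self, key: str, payload: dict, handler) -> tuple[dict, bool]:
        if key in self._seen:
            return self._seen[key], True   # replayed; handler is NOT re-run
        response = handler(payload)
        self._seen[key] = response
        return response, False

store = IdempotencyStore()
create_quota = lambda p: {"quota_id": str(uuid.uuid4()), **p}
first, replayed1 = store.execute("k-1", {"play_cap": 10}, create_quota)
second, replayed2 = store.execute("k-1", {"play_cap": 10}, create_quota)
assert first["quota_id"] == second["quota_id"]
assert not replayed1 and replayed2
```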

Acceptance Criteria
Create Quota via REST with Idempotency-Key
Given a valid OAuth2 bearer token or API key and a unique Idempotency-Key header, When POST /v1/quotas with payload {recipient_id, shortlink_id or asset_id, asset_type, play_cap, download_cap, expires_at}, Then the response is 201 with body containing quota_id, caps, status=active, created_at, and identifiers matching existing shortlink/asset formats. Given the same Idempotency-Key and identical payload within 24 hours, When POST is retried, Then the response returns the original quota_id and an Idempotency-Replayed indicator with no duplicate side effects. Given missing/invalid authentication, When POST /v1/quotas, Then the response is 401 with error.code=auth_required. Given invalid fields (e.g., negative caps, unknown asset_type, past expires_at), When POST /v1/quotas, Then the response is 422 with field-level errors. Given rate limits exceeded, When POST /v1/quotas, Then the response is 429 with a Retry-After header.
Retrieve Recipient Allowance State with ETag Caching
Given a valid token, When GET /v1/recipients/{recipient_id}/allowances?asset_id={id}, Then the response is 200 with plays_used, downloads_used, remaining, caps, status (active|revoked|expired), and an ETag header. Given no state change since the last GET, When calling the same endpoint with If-None-Match set to the prior ETag, Then the response is 304 with no body. Given allowance counters change, When GET with the old ETag, Then the response is 200 with updated values and a new ETag. Given a request for a non-existent recipient or asset, When GET, Then the response is 404 with error.code=not_found.
Override Quota Programmatically with Audit Trail
Given a token with scope=quotas.write and a unique Idempotency-Key, When POST /v1/quotas/{quota_id}/overrides with {action: grant|reset|revoke, amount?, reason}, Then the response is 200 with override_id, actor, occurred_at, and updated allowance state. Given action=grant and amount=5, When applied, Then remaining increases by 5 without exceeding integer limits and status remains active. Given action=reset, When applied, Then plays_used and downloads_used reset to 0 and remaining equals caps. Given action=revoke, When applied, Then status becomes revoked and remaining becomes 0 immediately. Given insufficient scope, When POST /overrides, Then the response is 403 with error.code=forbidden. Given a duplicate Idempotency-Key, When retried, Then the override is not double-applied and the same override_id is returned.
Webhook: Threshold Crossed Delivery and Validation
Given a configured webhook endpoint and secret, When an allowance crosses a configured threshold (e.g., >=80% of cap), Then TrackCrate sends POST to the callback within 10 seconds with event=quota.threshold_crossed and payload {event_id, delivery_id, quota_id, recipient_id, asset_id, threshold, previous_percent, current_percent, occurred_at}. Given the delivery, When verifying headers, Then X-TrackCrate-Signature (HMAC-SHA256 of the raw body) and X-TrackCrate-Event-Id are present and valid. Given the receiver returns non-2xx, When retrying, Then TrackCrate retries up to 3 times with exponential backoff and preserves the same delivery_id for deduplication. Given the receiver returns 2xx, When delivery completes, Then no further retries occur and the delivery status is retrievable via GET /v1/webhooks/deliveries/{delivery_id}.
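A receiver validating X-TrackCrate-Signature would recompute the HMAC-SHA256 over the raw request body (before any JSON parsing or re-serialization, which can reorder keys) and compare in constant time. A sketch with an illustrative secret:

```python
import hashlib
import hmac
import json

def sign(secret: bytes, raw_body: bytes) -> str:
    return hmac.new(secret, raw_body, hashlib.sha256).hexdigest()

def verify(secret: bytes, raw_body: bytes, signature_header: str) -> bool:
    # compare_digest avoids timing side channels on the signature check
    return hmac.compare_digest(sign(secret, raw_body), signature_header)

secret = b"whsec_demo"  # hypothetical endpoint secret
payload = json.dumps({"event": "quota.threshold_crossed",
                      "threshold": 80}).encode()
header = sign(secret, payload)
assert verify(secret, payload, header)
assert not verify(secret, payload + b" ", header)  # any body change fails
```

Deduplicate on X-TrackCrate-Event-Id / delivery_id, since retries preserve the same identifier.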
Webhook: Limit Reached Auto-Revocation Event
Given a recipient hits the play_cap or download_cap, When the cap is reached, Then access is auto-revoked for that recipient-asset and a webhook with event=quota.limit_reached is delivered containing {quota_id, recipient_id, asset_id, cap_type, cap_value, used, occurred_at}. Given auto-revocation, When subsequent download or play is attempted via shortlink or AutoKit, Then the API responds 403 with error.code=quota_limit_reached and no asset is served. Given delivery failures, When the receiver responds non-2xx, Then retries occur up to 3 times as per global webhook policy. Given successful revocation, When querying the allowance state, Then status=revoked and remaining=0 are returned.
Pagination and Filtering for Allowance Lists
Given many allowance records, When GET /v1/allowances?asset_id={id}&page[size]=100, Then the response is 200 with up to 100 items and a links.next cursor when more results exist. Given links.next is present, When GET using page[cursor]=<token>, Then the next page of results is returned and the cursor advances until exhaustion. Given filters, When specifying recipient_id, asset_type (stems|masters|artwork), status (active|revoked|expired), updated_since, or threshold_gte, Then results are filtered accordingly and filters are combinable. Given no sort parameter, When GET, Then results are sorted by updated_at desc by default; When sort=plays_used or sort=-downloads_used, Then sorting reflects the requested field and order. Given load of 10k records, When requesting any single page, Then p95 response time is <= 500ms in staging performance tests.
Sandbox Keys Do Not Affect Production Counters
Given a sandbox API key, When invoking POST/PUT quota endpoints, Then responses include environment=sandbox and no production counters or states are changed. Given a sandbox GET request, When querying allowances, Then values reflect sandbox-only state and are isolated from production identifiers and counters. Given sandbox webhooks are enabled, When sandbox events occur, Then webhooks are delivered with environment=sandbox and are never sent to production endpoints. Given idempotency and ETag, When used in sandbox, Then semantics are identical to production but scoped to the sandbox environment.

DeviceLock Binding

Bind each shortlink to the first verified device or allow a fixed number of devices with one-tap approvals. Suspicious device changes prompt re-verification and notify owners, discouraging link forwarding without blocking legitimate use across personal devices.

Requirements

Privacy-Preserving Device Fingerprint
"As a link recipient, I want my access to bind to my device automatically so that I can use links without repeated logins while discouraging unauthorized forwarding."
Description

Implement a stable, privacy-preserving device identifier used to bind shortlinks to a specific device or a limited set of devices. Generate a device ID from low-entropy, non-invasive signals (e.g., user agent, platform, time zone, screen metrics) combined with a first-party, signed cookie and a WebCrypto-derived key stored in IndexedDB/keychain. Hash and salt all identifiers server-side to avoid storing raw attributes, and rotate salts on a defined cadence. Provide resilience across app updates and normal browser upgrades while detecting resets (incognito, cookie clears) and handling them gracefully. Ensure cross-subdomain consistency for TrackCrate assets (shortlinks, AutoKit press pages, private stem player) and CDN enforcement. Expose a deterministic “same-device” check API for enforcement and analytics without exposing the raw fingerprint. Display a clear consent banner when required and maintain regional toggles to comply with privacy regulations.
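The derivation above can be sketched server-side: low-entropy signals plus the per-device key are canonicalized and hashed under a rotating server salt, so no raw attributes are stored. Signal names and the salt value here are illustrative assumptions:

```python
import hashlib
import hmac
import json

def device_id(signals: dict, device_key: str, server_salt: bytes) -> str:
    """Derive a salted device ID; only this hash is ever persisted."""
    material = json.dumps({"s": signals, "k": device_key},
                          sort_keys=True).encode()  # canonical ordering
    return hmac.new(server_salt, material, hashlib.sha256).hexdigest()

signals = {"platform": "macOS", "tz": "Europe/Berlin", "screen": "2560x1440"}
salt = b"salt-2024-06"  # hypothetical; rotated on a defined cadence
a = device_id(signals, "webcrypto-key-abc", salt)
assert device_id(signals, "webcrypto-key-abc", salt) == a  # stable same device
assert device_id(signals, "other-key", salt) != a          # key reset => new ID
```

A storage reset (incognito, cookie clear) loses the WebCrypto key, which is exactly what surfaces `reset_detected=true` to enforcement.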

Acceptance Criteria
Stable Device ID on Same Device
Given a user returns on the same physical device and browser profile within 90 days When they open any TrackCrate surface (shortlink, AutoKit page, private stem player) in a logged-out state Then the computed device_id is identical to the previously issued device_id And the device_id persists across reloads and restarts without re-verification And minor browser or app updates do not change the device_id
Graceful Handling of Private Mode and Storage Resets
Given a user opens content in private/incognito mode or after clearing cookies/storage When TrackCrate computes a device identifier Then an ephemeral device_id is issued that is not persisted beyond the session And reset_detected=true is surfaced to enforcement logic And the user is prompted for re-verification only when accessing protected assets And purely public assets remain accessible without prompts
Cross-Subdomain Consistency for TrackCrate Surfaces
Given a user on the same device and browser profile When they visit trackcrate.com, links.trackcrate.com, autokit.trackcrate.com, and cdn.trackcrate.com Then the device_id value is identical across these subdomains And cross-context iframes and redirects preserve the device_id within 1 second of initial generation And no third‑party cookies are required
Server-Side Hashing, Salting, and Salt Rotation
Given device signals and local keys are received from the client When the server stores identifiers Then only salted hashes are stored; no raw attributes or stable identifiers are persisted in databases or logs And salts rotate automatically every 30 days And same-device checks remain accurate across rotations using a rolling window of current and previous salts for at least 45 days And salts are never exposed in client-visible responses or headers
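The rolling-window requirement above can be sketched by checking a token against both the current and the previous salt, so a match made before a 30-day rotation still resolves within the 45-day window. Salt names are illustrative:

```python
import hashlib
import hmac

def hash_with(salt: bytes, token: str) -> str:
    return hmac.new(salt, token.encode(), hashlib.sha256).hexdigest()

def same_device(stored_hashes: set[str], token: str,
                salts: list[bytes]) -> bool:
    """True if the token hashes to a stored value under any salt
    in the rolling window (current first, then previous)."""
    return any(hash_with(s, token) in stored_hashes for s in salts)

old_salt, new_salt = b"salt-may", b"salt-june"
stored = {hash_with(old_salt, "device-token-1")}  # written before rotation
assert same_device(stored, "device-token-1", [new_salt, old_salt])
assert not same_device(stored, "device-token-2", [new_salt, old_salt])
```

Once the previous salt ages out of the window, its hashes stop matching, which is the intended forgetting behavior.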
Deterministic Same-Device Check API
Given an authorized service calls POST /api/device/same with {link_id, device_token} When the API processes the request Then it returns 200 with {same_device: boolean, reason: one of [same,new_device,reset_detected,limit_exceeded], confidence: 0.0–1.0} And no raw device attributes or unhashed identifiers are included in the response And p95 latency is ≤100 ms with error rate <0.1% over 24 hours And requests are rate-limited per link_id and IP to prevent enumeration
Regional Consent and Privacy Controls
Given a user located in a region requiring consent (e.g., EEA/UK) When they first visit a TrackCrate surface that would set cookies or generate a WebCrypto key Then a consent banner is displayed before any non-essential storage occurs And on Accept, consent is recorded with timestamp, region, and version; on Decline, only strictly necessary storage is used and device binding is disabled And users can change consent in Settings and the change propagates across subdomains within one refresh And CDN enforcement honors consent state when issuing/validating tokens
CDN Enforcement Using Device Token
Given an asset request to the CDN with a signed device token header for a bound shortlink When the device token validates as same-device within the allowed device count Then the CDN returns 200 And if validation fails, the CDN returns 403 with a non-identifying error code And tokens have a TTL ≤15 minutes with clock skew tolerance of ±2 minutes And no PII or raw fingerprint components appear in request or response headers
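The token check above (TTL <= 15 minutes, ±2 minutes skew, no PII in the token) can be sketched as a compact signed claims blob. The encoding and shared key are assumptions for illustration; a real deployment would likely use a standard format such as a signed JWT or CDN-native signed URLs:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"cdn-signing-key"  # hypothetical key shared with the CDN edge

def issue_token(device_id: str, link_id: str, ttl_s: int = 900) -> str:
    claims = {"d": device_id, "l": link_id, "exp": int(time.time()) + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def validate(token: str, device_id: str, skew_s: int = 120) -> bool:
    """Edge-side check: signature, device binding, expiry with skew tolerance."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["d"] == device_id and claims["exp"] + skew_s >= time.time()

t = issue_token("dev-1", "link-9")
assert validate(t, "dev-1")
assert not validate(t, "dev-2")  # wrong device: CDN returns 403
```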
One-Tap Binding & Approval Flow
"As a recipient, I want to bind a link to my current device in one tap so that I can access content quickly without complex authentication steps."
Description

Create an inline, low-friction flow that binds a shortlink to the first verified device on initial visit and supports one-tap approvals for additional devices up to the allowed limit. Present a minimal banner or interstitial on AutoKit pages, stem player, and download gates indicating current device usage and remaining device slots. On tap/click, confirm binding, provision a device-bound access token, and continue seamlessly to content. For additional devices, allow recipients to request approval with a single tap; route the approval request to the owner/collaborator with context and allow immediate approval/denial. Persist device nicknames for user clarity and show a quick “This isn’t my device” escape to avoid accidental binding. Localize UI and support both desktop and mobile UX patterns.

Acceptance Criteria
First-Visit One-Tap Device Binding on AutoKit
Given an unbound shortlink with a device limit N And a visitor who has not yet bound any device When the visitor opens the shortlink on a device and taps Confirm on the binding banner Then the device is bound to the shortlink And a device-bound access token is provisioned and stored securely on that device And the visitor is routed to the requested content without an extra page load And subsequent visits on the same device auto-access content while the token is valid And attempts to use the token from a different device are rejected with HTTP 401 And repeated taps within 5 seconds do not create duplicate device records
Inline Device Usage Banner Across Surfaces
Given a shortlink accessed on AutoKit pages, the stem player, or a download gate When the page renders Then a minimal banner/interstitial displays the current device nickname (if bound), devices used M, device limit N, and remaining slots N−M And the component renders consistently across all surfaces and viewports And on mobile the component height is <= 15% of viewport; on desktop height is <= 80px And the banner provides a single primary action to Confirm/Bind and a secondary link to "This isn’t my device"
One-Tap Approval for Additional Devices (Within Limit)
Given a shortlink with device limit N And M devices already bound where M < N When a new device visits and taps Request Approval Then an approval request is sent to the owner/collaborators with device fingerprint, proposed nickname, geo/time, and link context And the owner receives an actionable notification in-app and via email within 60 seconds And upon the owner tapping Approve, the requester device is bound and granted access within 3 seconds without requiring a page reload And upon Deny, the requester sees a denial message and no device is bound
Device Limit Reached Handling
Given a shortlink with device limit N And M devices already bound where M >= N When a new device attempts access Then the UI clearly indicates 0 remaining slots and disables immediate binding And the user may send an approval request to the owner And the owner is prompted to free a slot before approval can take effect And access is not granted to the new device until a slot is freed and approval is confirmed
"This Isn’t My Device" Escape Flow
Given the binding banner is shown on an unbound device When the user taps "This isn’t my device" Then no binding occurs and no device token is created And the UI shows guidance to open the link on the intended device and an option to sign in And the binding banner does not auto-reappear on this device for 24 hours unless the user explicitly re-initiates binding
Device Nicknames Capture and Persistence
Given a device is binding or requesting approval When the user reviews or edits the suggested device nickname Then the nickname is saved with the device record And the nickname is displayed in owner approval requests and device management views And the nickname persists across sessions and can be edited by the owner later And if left blank, a generated nickname in the format "<OS>-<Model>-<Last4>" is applied
Localization and Cross-Device UX Parity
Given the application locale is set to any supported locale (e.g., en, es, fr, ar) When rendering the binding banner/interstitial on mobile and desktop Then all strings are localized via i18n keys with no unintended English fallbacks And RTL locales render correctly with mirrored layout without text truncation on common viewports (360x640, 390x844, 1440x900) And primary actions are tap/click accessible, keyboard focusable, and meet minimum target sizes (>= 44x44 dp mobile, >= 32x32 px desktop) And screen readers announce action labels and remaining device slots; color contrast meets WCAG 2.1 AA
Per-Link Device Policy Controls
"As a label owner, I want to set how many devices can use a link so that sharing is limited without blocking legitimate use across my team’s personal devices."
Description

Add configuration options on shortlink creation/edit and via API to define device binding behavior: single device only, up to N devices, or disabled. Allow owners to set default policies at the workspace/label level and override them per link or release. Include options to auto-approve the first device, require owner approval for subsequent devices, enforce cooldown windows for new device bindings, and auto-expire bindings when links expire. Surface policy indicators in the shortlink detail view and analytics (e.g., bound devices, remaining slots, denied requests). Ensure policies propagate consistently to AutoKit pages, the private stem player, and expiring/watermarked download endpoints. Provide validation and sensible defaults to minimize misconfiguration.
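The validation rules in the criteria below can be sketched as a pure function returning structured error codes, the shape a 400 response body might carry. Error code strings are illustrative assumptions:

```python
def validate_policy(p: dict) -> list[str]:
    """Validate a devicePolicy payload; empty list means valid."""
    errors = []
    mode = p.get("mode")
    if mode not in ("single", "multi", "disabled"):
        errors.append("invalid_mode")
    # maxDevices is required (>= 1) only for multi; ignored otherwise
    if mode == "multi" and not (isinstance(p.get("maxDevices"), int)
                                and p["maxDevices"] >= 1):
        errors.append("maxDevices_required_gte_1")
    cooldown = p.get("cooldownHours", 0)
    if not (isinstance(cooldown, int) and cooldown >= 0):
        errors.append("cooldownHours_non_negative_integer")
    return errors

assert validate_policy({"mode": "multi", "maxDevices": 3,
                        "cooldownHours": 24}) == []
assert "maxDevices_required_gte_1" in validate_policy({"mode": "multi"})
assert "invalid_mode" in validate_policy({"mode": "open"})
```

Keeping validation as a side-effect-free function lets the form UI and the REST API share one implementation, so the two surfaces cannot drift.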

Acceptance Criteria
Set Device Policy on Shortlink Creation/Edit
Given I am a workspace or link owner creating or editing a shortlink When I select a device policy mode of "Single device", "Up to N devices", or "Disabled" and configure N, autoApproveFirst, requireOwnerApprovalForAdditional, cooldownHours, and expireWithLink as applicable Then the form enforces validation rules (e.g., N is required and >=1 when mode="Up to N devices"; N is ignored when mode is not multi; cooldownHours is a non-negative integer; options incompatible with mode are disabled) And default values are pre-populated from workspace/label defaults or system defaults when none are set And on Save the policy persists and is immediately reflected in the shortlink detail view policy indicator And reopening the edit form shows the previously saved values
Workspace/Label Default Policy with Per-Link/Release Override
Given I have Manage Settings permission at the workspace or label scope When I set or update the default device policy at the workspace or label Then newly created shortlinks under that scope inherit those default values And when a release-level default is set, shortlinks associated to that release inherit the release default overriding workspace defaults And when I override the policy on an individual shortlink, only that link changes and no defaults are modified And an audit log records changes to defaults and per-link overrides with actor, previous value, new value, and timestamp
API Configuration and Validation for Device Policy
Given I have an API token with write:shortlinks When I POST/PUT /shortlinks or PATCH /shortlinks/{id} with devicePolicy { mode: "single"|"multi"|"disabled", maxDevices, autoApproveFirst, requireOwnerApprovalForAdditional, cooldownHours, expireWithLink } Then the API validates combinations (mode="multi" requires maxDevices>=1; mode in [single,multi,disabled]; cooldownHours>=0 integer; booleans only accepted when applicable) and returns 400 with structured error codes on invalid input And on success returns 200/201 with the persisted policy, ETag, and lastUpdated And GET /shortlinks/{id} returns the effective devicePolicy with indicators of whether each value is inherited (workspace/label/release/system) or overridden And GET/PUT /workspaces/{id}/defaults and /labels/{id}/defaults support configuring and retrieving default devicePolicy settings
Device Binding and Approval Flow
Given a shortlink with policy mode="multi", maxDevices=3, autoApproveFirst=true, requireOwnerApprovalForAdditional=true and no devices bound When the first unique, verified device opens the shortlink Then the device is auto-bound and boundDevices=1 and remainingSlots=2 And when a second unique device opens the shortlink Then an approval request is created and the owner is notified via in-app and email/push, and the device status is pending until approved And when the owner approves via one-tap or in-app, the device becomes bound, access is granted within 5 seconds, and boundDevices increments And when maxDevices is reached, additional unique device attempts receive HTTP 403 with reason "device_limit_reached" and the attempt is logged as denied And Given mode="single", the first device auto-binds if autoApproveFirst=true and subsequent unique device attempts follow requireOwnerApprovalForAdditional (approved -> bind; not approved -> 403 device_limit_reached) And Given mode="disabled", no device is bound and all verified devices can access without approval
Cooldown Window Enforcement
Given a shortlink policy with cooldownHours=24 and requireOwnerApprovalForAdditional=true When a new device was bound less than 24 hours ago Then subsequent unique device attempts during that window are not auto-approved and are placed into pending_approval with reason "cooldown_active" and a retryAfter value equal to time remaining And when the owner approves during the cooldown, the device binds immediately and the cooldown restarts from the approval timestamp And when the cooldown elapses, the next qualifying attempt proceeds according to policy (auto-approve if slots remain; otherwise pending/denied per settings)
Auto-Expiration of Bindings When Links Expire Across Endpoints
Given a shortlink with expireAt set or that is manually deactivated When the link expires or is deactivated Then all device bindings for that link are marked expired and access to AutoKit pages, the private stem player, and expiring/watermarked downloads returns HTTP 410 with reason "link_expired" And policy indicators update to show the link is expired and no devices are active And when the link is reactivated, prior bindings remain expired and devices must re-verify and re-bind under the current policy
Policy Indicators and Analytics Reporting
Given a shortlink has an active device policy and binding activity When I open the shortlink detail view Then I see mode, maxDevices (if applicable), autoApproveFirst, requireOwnerApprovalForAdditional, cooldownHours, expireWithLink, boundDevices, remainingSlots, pendingApprovals, deniedRequests, lastBindingAt, and nextCooldownEndsAt And GET /analytics/shortlinks/{id}/devicePolicy returns counts for boundDevices, remainingSlots, pendingApprovals, deniedRequests, approvals, denials, and access attempts by endpoint (AutoKit, player, downloads) over a specified date range And the UI metrics match the analytics API values for the same time window
Suspicious Activity Detection & Re-Verification
"As a content owner, I want suspicious device changes to trigger re-verification so that unauthorized forwarding is discouraged without blocking genuine collaborators."
Description

Implement heuristics to detect anomalous device changes and high-risk access patterns and trigger step-up re-verification. Consider signals such as rapid device churn within a short window, large geo-IP deltas, data center/VPN indicators, repeated access from unbound devices, and failed token validations. When thresholds are met, pause high-risk access paths and prompt the recipient for lightweight re-verification (email magic link or one-time code) and/or require owner approval. Log all events with reasons, outcomes, and metadata for auditability. Tune thresholds to balance security against friction, and allow owners to opt into stricter modes per link. Provide reporting to highlight links with elevated risk for proactive management.
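The geo-IP delta heuristic used in the criteria below reduces to a great-circle distance check against the configured thresholds. A sketch using the haversine formula; the coordinates are illustrative:

```python
import math

def km(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle (haversine) distance in km between (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2)
         * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def geo_anomaly(prev: tuple[float, float], curr: tuple[float, float],
                minutes_apart: float, delta_km: float = 800,
                window_min: float = 60) -> bool:
    """Flag an implausible jump: > delta_km within window_min minutes."""
    return minutes_apart <= window_min and km(prev, curr) > delta_km

berlin, lisbon = (52.52, 13.405), (38.72, -9.14)
assert geo_anomaly(berlin, lisbon, 30)       # ~2300 km in 30 min: high risk
assert not geo_anomaly(berlin, lisbon, 300)  # outside the 60-minute window
```

In practice the distance check would be combined with the IP-reputation and device-churn signals rather than used alone, since geo-IP resolution itself is approximate.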

Acceptance Criteria
Rapid Device Churn Triggers Step-Up Verification
Given a shortlink with default risk policy (ChurnDevices=3 within 120m) And the recipient has ≤ ChurnDevices bound devices When ≥ 3 distinct unbound device fingerprints access the shortlink within 120 minutes Then classify the session as high-risk And pause access for unbound devices and show a re-verification prompt And send a single consolidated owner notification within 5 minutes (deduped per 30 minutes) And allow already bound devices to continue without interruption And upon successful re-verification, grant access and optionally bind the device if within allowed device limit And write an audit log entry with reason="rapid_device_churn", device_ids, window_minutes, outcome, actor_id, ip, user_agent, timestamp
Large Geo-IP Delta Requires Re-Verification
Given a shortlink with geo-anomaly thresholds (GeoDeltaKm=800, GeoWindow=60m) And a last successful access from location A at T0 When a new access occurs from location B at T1 with distance(A,B) > 800 km and (T1−T0) ≤ 60 minutes Then flag the session as high-risk and require step-up re-verification And upon successful re-verification via magic link, resume access and optionally bind the device if allowed And if a second geo-anomaly occurs within 24 hours for the same recipient+link, require owner approval before binding any new device And write an audit log entry with reason="geo_ip_delta", delta_km, window_minutes, outcome, geo_provider, ip, timestamp
Data Center/VPN IP Elevates Risk
Given an IP reputation service indicating ip_type in {data_center, vpn, proxy} with confidence ≥ 0.8 When access originates from an unbound device on such an IP Then require step-up re-verification before any content is served And do not bind the device until re-verification succeeds on a non-flagged IP or owner approval is granted When access originates from a bound device on such an IP and coincides with a geo anomaly or device fingerprint change Then require step-up re-verification And write an audit log entry with reason="dc_vpn_indicator", reputation_confidence, ip_asn, outcome, timestamp
Repeated Unbound Device Attempts Trigger Temporary Lock and Owner Approval
Given thresholds (UnboundAttempts=5 within 24h) per shortlink When more than 5 access attempts from distinct unbound devices occur within 24 hours Then place the shortlink into elevated protection for 24 hours where new device binding requires owner approval And notify the owner with a one-tap approval link (notification rate-limited to 2 per 24h) And if the owner approves within 24 hours, allow the new device to bind and reset the unbound-attempt counter Else keep the elevated protection active until the window expires And write an audit log entry with reason="unbound_attempts_exceeded", attempts_count, window_hours, owner_action, outcome, timestamp
Failed Token Validations Initiate Step-Up and Throttling
Given thresholds (InvalidTokenAttempts=3 within 15m) per recipient+link When ≥ 3 invalid or expired token validations occur within 15 minutes Then invalidate outstanding tokens for unbound devices and require step-up re-verification on next attempt And rate-limit subsequent token validation attempts to 1 per minute for 15 minutes And upon successful re-verification, issue fresh tokens and resume normal rate limits And write an audit log entry with reason="failed_token_validations", attempts_count, window_minutes, rate_limit_applied, outcome, timestamp
Step-Up Re-Verification Flow (Magic Link and One-Time Code)
Given re-verification methods {email_magic_link, one_time_code} And policy (CodeTTL=10m, CodeResendInterval=30s, MaxCodeAttempts=5, Lockout=15m) When a session is flagged high-risk Then present a step-up screen stating the reason category without exposing sensitive details And allow the user to choose a method And if email_magic_link is chosen, deliver the link within 60s and accept a single-click confirmation to resume access And if one_time_code is chosen, accept the code within 10 minutes, with at most 5 attempts before a 15-minute lockout And upon success, return the user to the originally requested resource and, if eligible, bind the device And upon failure/lockout, keep high-risk access paused and display a help contact option And write an audit log entry with reason_category, method, delivery_latency_ms, attempts_used, outcome, timestamp
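The one-time-code branch of the step-up flow can be sketched as a small state machine over the policy constants above (CodeTTL=10m, MaxCodeAttempts=5, Lockout=15m). Code delivery and storage are out of scope; the class and return strings are hypothetical:

```python
CODE_TTL_S = 10 * 60       # CodeTTL=10m
MAX_CODE_ATTEMPTS = 5      # MaxCodeAttempts=5
LOCKOUT_S = 15 * 60        # Lockout=15m

class OneTimeCodeSession:
    """Tracks one-time-code verification attempts for a flagged session."""

    def __init__(self, code: str, issued_at: float):
        self.code = code
        self.issued_at = issued_at
        self.attempts = 0
        self.locked_until = 0.0

    def verify(self, submitted: str, now: float) -> str:
        if now < self.locked_until:
            return "locked_out"
        if now - self.issued_at > CODE_TTL_S:
            return "expired"
        self.attempts += 1
        if submitted == self.code:
            return "verified"
        if self.attempts >= MAX_CODE_ATTEMPTS:
            # Fifth failed attempt starts the 15-minute lockout.
            self.locked_until = now + LOCKOUT_S
            return "locked_out"
        return "invalid"
```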
Owner Controls and Risk Reporting
Given per-link modes {Standard, Strict} When mode=Strict Then any new device requires owner approval, and geo-anomaly or dc/vpn indicators always trigger step-up re-verification When the owner updates thresholds or mode Then changes take effect within 5 minutes and are reflected in policy metadata for the link And a Risk Report lists links with ≥ 1 high-risk event in the past 7 days or risk_score ≥ threshold, with filters by reason, date, and link owner And the report supports CSV export and includes columns {link_id, reason, count, last_seen_ts, mode} And an API endpoint /v1/audit/events returns paginated audit entries with fields {event_id, link_id, reason, metadata, outcome, ts} And all timestamps are UTC and entries are retained for ≥ 180 days
Owner Notifications & Device Management Console
"As a project manager, I want a clear console and notifications to approve or revoke devices so that I can control access quickly without leaving my workflow."
Description

Deliver real-time notifications and a management console for bound devices per shortlink. Send email/in-app/webhook alerts for new device bindings, approval requests, denials, and suspicious activity triggers with actionable controls (approve, deny, revoke). Provide a dashboard listing each link’s bound devices with metadata (last seen, rough country, browser/OS family) and the ability to rename, revoke, increase the device cap, or reset bindings. Maintain a full audit log exportable via CSV and accessible via API. Integrate with Slack/webhooks for teams and respect workspace roles/permissions that govern who can approve or manage devices. Reflect changes immediately across AutoKit, the stem player, and CDN token enforcement.

Acceptance Criteria
Real-time Multi-Channel Alerts for Device Events
Given a shortlink with DeviceLock enabled and an owner with notification preferences configured When a new device binds, an approval is requested, an approval/denial is issued, or a suspicious activity trigger fires Then an email, in-app, and signed webhook notification are dispatched within p95 ≤ 5 seconds of the event And the notification payload includes event type, shortlink identifier, actor (user or system), device fingerprint summary, last seen timestamp (UTC), rough country, browser/OS family, and correlation ID And webhook requests are HMAC-SHA256 signed with a rotating secret and include a nonce with 5-minute TTL to prevent replay And webhook delivery retries with exponential backoff for up to 3 attempts and marks the attempt status in the audit log And email and in-app notifications include actionable Approve/Deny/Revoke controls that deep-link to the secure action endpoint
Approve/Deny From Notifications Enforces Access Immediately
Given an owner or approver with sufficient permissions receives a device approval request notification When they click Approve or Deny in email, Slack, or in-app and successfully authenticate Then the device binding state is updated accordingly and an audit log entry is recorded with actor, outcome, and timestamp (UTC) And denied devices have their CDN tokens invalidated and session access blocked within p95 ≤ 2 seconds And approved devices receive valid tokens and can access AutoKit, stem player, and downloads immediately And the decision propagates consistently across AutoKit, the private stem player, and CDN token enforcement within p95 ≤ 2 seconds
Device Management Console Listing and Controls
Given an owner or approver views the Device Management Console for a specific shortlink When the console loads Then it lists all bound devices with: nickname, device ID suffix, last seen timestamp (UTC), rough country, and browser/OS family And the user can rename a device (1–50 chars, no control characters) with inline validation and see the change reflected immediately And the user can revoke a device and the device loses access (CDN tokens invalidated, player access blocked) within p95 ≤ 2 seconds And the user can increase the device cap up to the workspace-defined maximum, with confirmation and audit logging And the user can reset all bindings (requires typed confirmation and role check) and all existing device sessions are blocked within p95 ≤ 2 seconds And the device list updates in real time as changes occur without a full page refresh
Suspicious Activity Detection and Re-Verification Flow
Given DeviceLock analytics monitor device and location patterns for a shortlink When thresholds are met (e.g., >3 device change attempts in 24h, >2 countries in 1h, or geo-distance >2000 km in 1h) Then the affected session or device is marked as requiring re-verification and download access is blocked until resolved And the owner receives notifications (email, in-app, webhook) describing the trigger with device and location summary And the owner can approve the session from the notification or console, which restores access within p95 ≤ 2 seconds And events are deduplicated to avoid alert storms (grouped to one notification window per 30 minutes per shortlink) and fully logged in the audit trail
Audit Log Export and API Access
Given a user with Audit permission opens the audit section for a workspace or shortlink When they request an export for a specified time range Then a CSV is generated within ≤ 30 seconds (or queued with progress) and a signed download link with 24-hour expiry is provided And the CSV contains rows for events: new binding, approval request, approve, deny, revoke, cap change, reset bindings, suspicious trigger, re-verification And each row includes: event_id, timestamp (UTC ISO-8601), actor user_id/email, actor role, shortlink_id/slug, device_id (hashed), IP hash, rough country, browser/OS family, outcome/status, correlation ID And the Audit API endpoint returns the same fields with filters for shortlink_id, event_type, actor, and time range, supporting cursor pagination up to 1000 items per page And access to exports and API is enforced by workspace roles and all export/API accesses are themselves logged
Slack and Webhook Team Integrations with Role Enforcement
Given a workspace has Slack and generic webhooks configured with secrets and channel mappings When device-related events occur Then the configured Slack channel receives a message with event details and interactive Approve/Deny/Revoke buttons And only users with Owner or Approver roles can execute these actions; others see read-only messages and blocked action attempts are rejected with a clear error And successful or failed Slack actions write audit entries and reflect state changes across AutoKit, the stem player, and CDN enforcement within p95 ≤ 2 seconds And webhook deliveries respect IP allowlists, include signature headers, and expose a 2xx acknowledgment contract; failures retry with exponential backoff and surface in an integration health view
Device-Bound Tokenization & CDN Enforcement
"As an engineer, I want device-bound tokens enforced at the CDN so that link binding cannot be bypassed by direct asset URLs."
Description

Bind access to device-specific, signed tokens enforced at the edge for streams and downloads. On successful binding, mint short-lived JWTs or macaroons that embed the device hash, link ID, policy, and watermark parameters. Validate tokens at CDN/edge and origin for every asset request (stems, artwork, press assets), denying or degrading access when the device does not match. Tie watermark payloads to the device binding for forensic tracing of leaked downloads. Support token rotation, replay protection, and clock skew handling. Ensure graceful degradation when offline by caching limited-scope tokens where permitted and honoring expiry. Provide observability for token denials and reasons to aid support and tuning.

Acceptance Criteria
Token Minting on Device Binding
- Given a verified device and an active shortlink policy, when the client requests a token, then the system mints a signed token (JWT RS256/ES256 or macaroon HMAC-SHA256) with claims: jti, iat, exp, link_id, device_hash, policy_id, watermark_params, aud, iss.
- Given a policy default TTL of 15 minutes and an override of 10 minutes, when the token is minted with override, then exp-iat equals 600 seconds.
- Given a token mint request missing any required claim, when validation runs, then the response is 400 with reason_code="missing_claim" and no token is issued.
- Given an invalid signature algorithm request, when the token is minted, then the token is rejected and 400 reason_code="unsupported_alg" is returned.
- Given a token mint request for a non-verified device, when processing occurs, then the response is 403 reason_code="device_not_verified".
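The minting rules above can be illustrated with a minimal sketch. For brevity it signs with an HS256-style HMAC built from the standard library (the criteria also allow RS256/ES256 JWTs or macaroons); `mint_token` is a hypothetical helper, not the product API:

```python
import base64, hashlib, hmac, json, time, uuid

REQUIRED_CLAIMS = {"jti", "iat", "exp", "link_id", "device_hash",
                   "policy_id", "watermark_params", "aud", "iss"}

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_token(secret: bytes, claims: dict, ttl_s: int = 900) -> str:
    """Mint a compact signed token carrying the claims listed above.
    Default TTL mirrors the 15-minute policy; an override shortens it."""
    now = int(time.time())
    claims = {**claims, "jti": str(uuid.uuid4()), "iat": now, "exp": now + ttl_s}
    missing = REQUIRED_CLAIMS - claims.keys()
    if missing:
        # Maps to the 400 reason_code="missing_claim" path.
        raise ValueError(f"missing_claim: {sorted(missing)}")
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(claims, sort_keys=True).encode())
    sig = _b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"
```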
Edge and Origin Enforcement for Asset Requests
- Given a request for stems, artwork, or press assets without a token, when it hits the CDN, then the CDN returns 401 with WWW-Authenticate error="invalid_token" reason="missing" and no origin fetch occurs.
- Given a request attempts to bypass the CDN and hit origin directly without a valid token, when the origin validates, then the origin returns 401/403 with reason_code="invalid_or_missing_token".
- Given a valid token whose device_hash matches the bound device, when requesting any asset, then the CDN returns 200/206 and includes X-Token-Validated: true.
- Given HLS/DASH streaming, when requesting playlists and media segments, then each request requires a valid token and is independently validated at the CDN.
- Given a token with an exp in the past, when validation runs, then the CDN returns 401 with reason_code="token_expired" and does not proxy to origin.
Watermark Tied to Device Binding
- Given a valid download token, when a downloadable asset is served, then the file is embedded with a forensic watermark encoding device_hash, link_id, and timestamp derived from token claims.
- Given the downloaded file, when the internal watermark decoder runs, then the decoded payload exactly matches the token’s device_hash and link_id with 100% verification.
- Given a valid stream token, when streaming is requested, then the stream is tagged with a session watermark parameter tied to device_hash for forensic tracing.
- Given a token whose device_hash does not match the requesting device, when a download is requested, then the CDN/origin returns 403 with reason_code="device_mismatch" and no file body is transmitted.
- Given watermark embedding fails, when the system detects the failure, then the request is denied with 500 reason_code="watermark_embed_failed" and the event is logged with correlation_id.
Replay Protection and Range Request Handling
- Given a single-use download token, when the first GET for the asset completes with 200, then any subsequent GET with the same token within its TTL returns 409 with reason_code="replay_detected" and zero bytes are served.
- Given a single-use download token and HTTP byte-range requests from the same device for the same asset, when multiple Range requests occur during the initial transfer, then they are permitted without triggering replay and the transfer completes.
- Given a streaming session, when media segment requests reuse a token, then the token includes a per-segment nonce (or jti+path binding) and repeated identical requests within a 60s window return 409 reason_code="segment_replay".
- Given an IP/device change mid-transfer, when a second client attempts to reuse the token, then the request is denied with 403 reason_code="token_bound_to_device".
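The single-use/replay logic above can be sketched as a jti-keyed guard that distinguishes Range continuations from genuine reuse. An in-memory sketch only; an edge deployment would back this with a shared store (e.g., Redis), and the class name is hypothetical:

```python
class ReplayGuard:
    """Tracks consumption of single-use download tokens by jti."""

    def __init__(self, ttl_s: int = 900):
        self.ttl_s = ttl_s
        self.consumed: dict[str, tuple[float, str]] = {}  # jti -> (first_ts, device)

    def check(self, jti: str, device: str, is_range: bool, now: float) -> str:
        entry = self.consumed.get(jti)
        if entry is None:
            self.consumed[jti] = (now, device)
            return "allow"                   # first use binds the token
        ts, bound_device = entry
        if now - ts > self.ttl_s:
            return "token_expired"
        if device != bound_device:
            return "token_bound_to_device"   # 403: reuse from another device
        if is_range:
            return "allow"                   # mid-transfer Range continuation
        return "replay_detected"             # 409: full re-download attempt
```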
Token Rotation and Clock Skew Tolerance
- Given signing key rotation, when new tokens are minted, then they are signed by the new key and the CDN accepts both old and new keys for a 10-minute overlap window.
- Given an active playback session during rotation, when the client refreshes its token, then playback continues with no stall exceeding 2 seconds and no user action required.
- Given client and edge clocks differ by up to ±300 seconds, when validating iat/nbf/exp, then the token is accepted; outside this window the token is rejected with reason_code="clock_skew_exceeded".
- Given a revoked jti, when a request arrives with that token, then the edge rejects it with 403 reason_code="token_revoked" within 60 seconds of revocation propagation.
Offline Access with Limited-Scope Tokens
- Given policy offline_allowed=true and pre-authorized assets cached while online, when the client is offline, then requests using cached limited-scope tokens within their TTL succeed; after TTL they return 401 reason_code="token_expired_offline".
- Given offline mode, when the client requests an uncached or non-authorized asset, then the request fails locally without network calls and is logged with reason_code="offline_asset_not_authorized".
- Given reconnection, when the client syncs, then expired offline tokens are purged and fresh tokens are minted for authorized assets without user intervention.
- Given device replacement while offline, when the client attempts access, then access is denied until re-verification occurs after reconnect, preserving device binding integrity.
Denial Observability and Reason Codes
- Given any token validation failure at CDN or origin, when the response is generated, then a structured log is emitted containing correlation_id, link_id, device_hash (hashed), reason_code, http_status, edge_pop, and timestamp.
- Given denials occur, when metrics are scraped, then counters are exposed per reason_code and asset_type, and a dashboard displays top reasons over the last 24 hours with p95 latency for validations.
- Given support receives a user report with a shortlink, when they search by correlation_id or link_id, then the most recent 100 denial events are retrievable within 30 seconds.
- Given privacy constraints, when logs are emitted, then no PII is recorded; device identifiers are hashed and token bodies are never logged.

ForwardTrace Links

Let trusted recipients forward access safely: each forward generates a child shortlink with its own inaudible watermark, quotas, and expiry. You keep a clear lineage of who shared with whom, so collaboration spreads while accountability remains intact.

Requirements

Hierarchical Child Link Generation
"As a label project manager, I want each forward to create a unique child link tied to the new recipient so that I can track share lineage and manage access at any level."
Description

Generate unique child shortlinks whenever a recipient forwards access. Each child link maintains a parent-child relationship for full lineage tracking, inherits the asset scope and baseline permissions from the parent, and supports overrides for expiry, quotas, and passcodes. The system should attach recipient identity (email/name or organization) to each child at creation, optionally requiring verification before activation. Links must be compatible with AutoKit press pages and the private stem player, preserving deep-link targets. All link objects include immutable IDs, parent IDs, creation metadata, and current status, enabling precise control and reporting across a branching share tree.

Acceptance Criteria
Auto Child Link Creation with Parent Lineage
Given an active parent shortlink L with asset scope S and baseline permissions P When a trusted recipient forwards access to identity I Then the system creates a child shortlink C with a unique immutable id and parentId = L.id And C records recipientIdentity = I and createdAt, createdBy, createdIp And an inaudible watermark token unique to C is generated and bound to C And an audit event "child_link_created" is recorded with parentId and childId And if L.status != "Active", no child is created and the request is rejected with HTTP 403 "parent_inactive"
Permission Inheritance with Scoped Overrides
Given parent link L with asset scope S, baseline permissions P, expiry E_p, quota Q_p, and optional passcode Pc_p When creating child link C with overrides expiry E_c, quota Q_c, and passcode Pc_c Then C inherits scope S and permissions P And E_c, if provided, must be <= E_p; otherwise E_c = E_p And Q_c, if provided, must be > 0 and <= min(Q_p, L.remainingQuota); otherwise Q_c = min(Q_p, L.remainingQuota) And Pc_c, if provided, is required for access to C; otherwise Pc_c = Pc_p And attempts to expand scope beyond S or escalate permissions P are rejected with HTTP 400 with field-level validation errors
Recipient Verification Before Activation
Given org policy verificationRequired = true and recipient identity I (email/name or organization) When child link C is created for I Then C.status = "PendingVerification" and activation is blocked until I verifies via a one-time link sent to I.email And upon successful verification before E_c, C.status transitions to "Active" with activationAt recorded And access attempts before verification return HTTP 401 "verification_required" and are audit logged And if verification is not completed before E_c or after maxAttempts, C is not activated and any further verification links are invalidated
Deep-Link Preservation with AutoKit and Stem Player
Given parent link L targets deep-link T on an AutoKit press page or private stem player When child link C is used to access L's target Then C resolves to T exactly, preserving anchors, query params, and timecodes And the rendered page/player respects C's scope S and permissions P And all media streamed/downloaded via C is watermarked with C's watermark token And navigation or API calls via C cannot access assets outside S
Immutable Identifiers and Creation Metadata
Given an existing link C with fields id, parentId, createdAt, createdBy, createdIp When a client attempts to modify any immutable field (id, parentId, createdAt) Then the request is rejected with HTTP 409 "immutable_field" and no changes are persisted And retrieving C via API returns the original immutable values unchanged And updates to allowed mutable fields (expiry, quota, passcode, status) are accepted and audit logged
Per-Child Quota and Expiry Enforcement
Given child link C with quota Q_c and expiry E_c When N download or stream events occur via C Then C.remainingQuota decreases by N atomically and cannot go below zero And if an event would exceed remainingQuota, it is rejected with HTTP 429 "quota_exceeded" and does not decrement the counter And after currentTime > E_c, all access via C returns HTTP 410 "expired" and is audit logged
Lineage Reporting and Share Tree Traversal
Given a root link R with multiple generations of descendants When requesting GET /links/{R.id}/lineage Then the response returns the full share tree (or requested depth) with nodes containing id, parentId, createdAt, recipientIdentity, status for each descendant And each node includes a path-to-root that traces back to R without cycles And the API supports pagination (cursor) and depth filters and orders nodes by createdAt ascending within siblings And the lineage export endpoint can produce a CSV with columns id, parentId, level, recipient, status, createdAt
Per-Link Inaudible Watermarking
"As an A&R, I want audio delivered through forwarded links to carry a unique watermark so that any leak can be traced back to the exact recipient who shared it."
Description

Embed a unique, inaudible watermark payload into audio delivered via each child link to identify the exact lineage node on leaks or misuse. The watermark should encode at minimum: LinkID, ParentLinkID, Recipient fingerprint, and timestamp. Apply watermarks on on-demand renders for downloads and streamed previews in the private stem player with minimal latency and no audible degradation. For non-audio assets (e.g., artwork/press), apply complementary steganographic or metadata tagging where feasible. Watermark settings inherit from the parent link but can be toggled or strengthened per child. Ensure robustness across common transcodes and DAW imports and provide verification tooling to read back the watermark for enforcement and audits.

Acceptance Criteria
On‑Demand Watermark for Child‑Link Downloads
Given a child shortlink for an audio asset with watermarking enabled When the recipient requests a download Then the system embeds a unique watermark payload containing LinkID, ParentLinkID, RecipientFingerprint, and Timestamp and returns the file with HTTP 200
Given a 5-minute, 44.1 kHz stereo WAV source When the download is requested Then p95 additional processing time introduced by watermarking is <= 2.5 seconds
Given the original and watermarked files When analyzed for loudness and clipping Then integrated LUFS difference is within ±0.1 LU and no new samples exceed 0 dBFS
Given a double-blind ABX test over 10 diverse tracks with 20 trials per track When comparing original vs watermarked Then detection rate is not statistically above chance (p >= 0.05)
Low‑Latency Stream Watermarking in Private Player
Given the private stem player with watermarking enabled When a recipient presses Play on a preview Then first audio is rendered within 300 ms and the stream is watermarked on the fly
Given baseline playback without watermarking When comparing startup latency metrics Then p95 startup latency increase due to watermarking is <= 150 ms and rebuffer ratio delta is <= 5 percentage points
Given a double-blind ABX test on streamed previews When comparing original stream vs watermarked stream captures Then detection rate is not statistically above chance (p >= 0.05)
Payload Fields Accuracy (LinkID, ParentLinkID, Recipient, Timestamp)
Given a watermarked file generated from a known child link When the verification tool extracts the payload Then LinkID, ParentLinkID, and RecipientFingerprint exactly match the database record for that child link
Given the extracted Timestamp When compared to the server-side generation time Then the difference is within ±5 seconds
Given 100 generated child links across multiple parents When extracting payloads from 100 corresponding files Then 100% of extracted payloads are unique per child link
Robustness Across Transcodes and DAW Imports
Given a set of watermarked audio files When transcoded to MP3 320 kbps, MP3 128 kbps, AAC 256 kbps, and Ogg Vorbis 192 kbps Then the watermark is extractable with >= 99% success and the payload matches exactly
Given a watermarked file When imported to a DAW and exported to 24-bit WAV with normalization Then the watermark remains extractable and the payload matches exactly
Given a watermarked file When a gain change of up to ±1 dB or dithering is applied Then the watermark remains extractable and the payload matches exactly
Verification Tooling (CLI and API) Extraction and Reporting
Given the CLI tool When running trackcrate verify <file> Then it outputs JSON including LinkID, ParentLinkID, RecipientFingerprint, Timestamp, confidence, and source format within 2 seconds for a 10-minute track
Given the verification API When POSTing a file or URL to /api/watermark/verify Then it returns HTTP 200 with the same JSON schema on success or HTTP 422 with a specific error code when no watermark is found
Given a valid extracted payload When resolving lineage from LinkID and ParentLinkID Then the response includes the resolved lineage path for audit if available
Non‑Audio Asset Tagging (Artwork and Press)
Given image assets (PNG, JPEG) downloaded via a child shortlink When tags are applied Then each image contains an invisible tag or metadata with LinkID, ParentLinkID, RecipientFingerprint, and Timestamp and extraction succeeds on >= 99% of test images
Given a PDF press kit downloaded via a child shortlink When tags are applied Then the PDF XMP or document metadata contains the payload fields and the verification tool reads them back successfully
Given an asset type where steganography is not feasible When the download occurs Then a metadata-only tag is applied where possible; if neither is feasible, the system records tagging_strategy=not_feasible with reason in the audit log
Inheritance and Per‑Child Override of Watermark Settings
Given a new child link created from a parent with watermarking enabled and a defined robustness profile When the child link is created Then the child inherits watermarking enabled state and robustness profile from the parent
Given a child link settings page When a user toggles watermarking or selects a stronger robustness profile Then subsequent downloads and streams from that child link use the updated settings and the change is audit-logged with user, timestamp, and previous value
Given a parent link with multiple children When a single child has overridden settings Then other children continue to inherit and are unaffected by that override
Per-Link Quota and Expiry Controls
"As a content owner, I want to set specific download and time limits on each forwarded link so that access remains controlled and time-bound as sharing propagates."
Description

Allow senders to define and override per-link quotas (download count, stream count, device limit) and absolute or relative expiry windows for each child link. Support inheritance from the parent with configurable policies (e.g., stricter of parent/child). Provide automatic enforcement, pre-expiry reminders, and post-expiry deactivation with optional grace periods. Quota and expiry settings must be visible in the lineage view and accessible via API. Include safeguards to prevent bypass (e.g., IP/device fingerprinting, session binding) while respecting privacy and regional compliance requirements.

Acceptance Criteria
Child Link Overrides Under Stricter-of Inheritance
Given a parent link with quotas {downloads:10, streams:50, devices:3} and absolute expiry 2025-10-01T00:00:00Z and inheritancePolicy="stricter-of" When a sender creates a child link proposing {downloads:12, streams:60, devices:5} and expiry 2025-12-01T00:00:00Z Then the child link is created with effective quotas {downloads:10, streams:50, devices:3} and expiry 2025-10-01T00:00:00Z And the lineage view flags each field as inherited via stricter-of, showing proposed vs applied values And GET /links/{childId}/limits returns appliedLimits matching the effective quotas and expiry, including inheritedFrom={parentId} and policy="stricter-of"
Relative Expiry and Grace Period Enforcement
Given a child link with relative expiry=14 days from creation and gracePeriod=72h When the link is accessed at T+13d23h Then the response is 200 and headers include X-Expiry-At and X-Grace-Starts with correct timestamps When the link is accessed at T+14d+1h (within grace) Then the response is 200 and headers include X-Grace-Remaining≈71h and banner "Link in grace period" When the link is accessed after T+17d (expiry + 72h) Then the response is 410 Gone with errorCode=EXPIRED and the link is deactivated in UI and API
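The expiry-plus-grace decision above can be sketched as a pure function over timestamps (here in seconds, mirroring the 14-day expiry and 72-hour grace example). A sketch of the enforcement decision only; the function name and state strings are illustrative:

```python
def access_status(now: int, created: int, relative_expiry_s: int,
                  grace_s: int) -> tuple[int, str]:
    """Classify an access against relative expiry plus grace period.
    Returns (http_status, state)."""
    expiry_at = created + relative_expiry_s
    if now <= expiry_at:
        return 200, "active"
    if now <= expiry_at + grace_s:
        return 200, "grace"        # served with the "Link in grace period" banner
    return 410, "expired"          # errorCode=EXPIRED; link deactivated
```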
Pre-Expiry Reminder Notifications
Given a link with absolute expiry 2025-10-01T00:00:00Z and notifyBefore=[72h,24h] with email and in-app channels enabled for the owner When time reaches 2025-09-28T00:00:00Z and 2025-09-30T00:00:00Z Then the owner receives a reminder per window including {linkId, expiryAt, remainingDownloads, remainingStreams, remainingDevices, renewUrl} And in-app shows a reminder card for the link until dismissed or renewed And duplicate reminders within the same window are suppressed; reminders are cancelled if the link is renewed or deleted before send time
Quota Enforcement and Remaining Counters
Given a child link with quotas {downloads:3, streams:5, devices:2} When a recipient performs 3 completed downloads Then the 3rd succeeds and responses include X-Remaining-Downloads=0; a 4th attempt returns 429 Too Many Requests with errorCode=DOWNLOAD_QUOTA_EXCEEDED When the recipient initiates 6 streams Then the 6th attempt returns 429 Too Many Requests with errorCode=STREAM_QUOTA_EXCEEDED And successful responses include X-Remaining-Downloads and X-Remaining-Streams headers And retries of the same download within a 10-minute idempotency window do not double-count if the prior attempt completed
Device Limit Enforcement with Privacy-Safe Binding
Given a child link with device limit=2 and consent=granted in a region not requiring prior consent When three unique devices attempt access in 24h Then the first two are permitted and bound; the third returns 403 Forbidden with errorCode=DEVICE_LIMIT_EXCEEDED When cookies are cleared on an allowed device Then access remains permitted via fingerprint binding without prompting for re-authorization
Given region=EU and consent=denied When devices attempt access up to the limit Then enforcement uses session binding + IP with a rolling 24h TTL, storing only hashed, non-PII identifiers, and still blocks the (limit+1)th device with 403 DEVICE_LIMIT_EXCEEDED And GET /links/{id}/security reports bindingMethod in {fingerprint,sessionBinding} and region without exposing raw identifiers
Lineage Visibility of Limits, Usage, Expiry, and Policies
Given a lineage Parent -> Child A -> Grandchild A1 where each has distinct quotas, expiry, and inheritancePolicy When the owner opens the lineage view Then each node displays {downloads: used/limit, streams: used/limit, devices: bound/limit, expiryAt or relative expiry, grace status, inheritancePolicy} And overridden vs inherited fields are visually indicated per node And selecting a node reveals an audit log of limit/expiry changes with actor, timestamp, oldValue, newValue And exporting lineage to CSV includes these fields for each node
API Read/Update With Validation and Concurrency Control
Given a token with scope=links.write for workspace W and link {id} owned by W When PATCH /links/{id}/limits is called with If-Match: <ETag> and body {downloads:4, streams:10, devices:1, expiry:"2025-10-15T00:00:00Z", policy:"stricter-of"} Then response is 200 with updated effective values, version increment, and ETag changed When a PATCH attempts to set downloads above the parent while policy=stricter-of Then response is 409 Conflict with errorCode=POLICY_VIOLATION and a fields array listing rejected properties When GET /links/{id}/limits is called by a token without scope Then response is 403 Forbidden with errorCode=FORBIDDEN And all timestamps are ISO 8601 UTC; counters are non-negative integers; unknown fields are rejected with 400 Bad Request
Controlled Forwarding Permissions
"As a sender, I want to restrict who can forward my link and how far it can spread so that collaboration can grow without losing control."
Description

Introduce granular controls that determine who may forward access and to what extent. Options include toggleable forward permission, maximum forward depth, per-link allowlists/denylists (emails, domains), and optional approval workflows for new forwards. Provide templated invite messages and capture of recipient identity prior to activation. All forwarding actions generate auditable events, and child links inherit the strictest applicable constraints from their ancestry unless explicitly relaxed by an authorized owner.

Acceptance Criteria
Forward Permission Toggle Behavior
Given a parent link with Forward Permission = Off When a recipient attempts to create a forward Then no child link is created And the UI shows "Forwarding disabled by owner" And an auditable event type="forward_blocked" reason="forward_disabled" is recorded with link_id, actor_id, timestamp Given a parent link with Forward Permission = On When a recipient creates a forward to a valid identity Then a child shortlink is created And the child is watermarked uniquely and has its own quotas and expiry And an auditable event type="forward_created" is recorded with parent_link_id, child_link_id, from_actor_id, to_identity, timestamp
Maximum Forward Depth Enforcement
Given a root link with max_forward_depth = N and depth(root)=0 When a child at depth d initiates a forward Then the forward succeeds only if d < N And the resulting child has depth d+1 And if d >= N the forward is blocked with message "Forward depth limit reached" And an event type="forward_blocked" reason="max_depth_reached" is recorded with link_id, actor_id, depth, max_depth, timestamp And the UI displays remaining_forward_depth = max(0, N - d)
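The depth rule above reduces to a small pure function; a sketch, with an illustrative return shape:

```python
def try_forward(depth: int, max_forward_depth: int) -> dict:
    """A forward from depth d succeeds only while d < N; the child lands
    at depth d + 1 and the UI shows remaining = max(0, N - d)."""
    remaining = max(0, max_forward_depth - depth)
    if depth >= max_forward_depth:
        return {"allowed": False, "reason": "max_depth_reached",
                "remaining_forward_depth": remaining}
    return {"allowed": True, "child_depth": depth + 1,
            "remaining_forward_depth": remaining}
```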
Per-Link Allowlist and Denylist Resolution
Given a link with allowlist and denylist configured (emails and/or domains) When a forward is requested to identity X Then the forward is allowed only if (allowlist is empty or X matches allowlist) and X does not match denylist And matching is case-insensitive and domain matches apply to all emails at that domain And if rejected, no child link is created and the UI shows "Recipient not allowed" And an event type="forward_blocked" reason="identity_not_allowed" includes matched_rule and identity X
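The resolution rule can be sketched as follows (assuming a rule without an `@` is a domain rule; function names are illustrative):

```python
def _matches(identity: str, rules: list) -> bool:
    """Case-insensitive; a rule without '@' is a domain rule that matches
    every address at that domain."""
    ident = identity.lower()
    domain = ident.rsplit("@", 1)[-1]
    for rule in (r.lower() for r in rules):
        if rule == ident or ("@" not in rule and rule == domain):
            return True
    return False

def forward_allowed(identity: str, allowlist: list, denylist: list) -> bool:
    # Allowed iff (allowlist empty OR identity matches it) AND no denylist hit.
    return (not allowlist or _matches(identity, allowlist)) \
        and not _matches(identity, denylist)
```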
Approval Workflow for New Forwards
Given approval_required = true and approver(s) configured for the parent link When a recipient requests a forward Then a pending request is created with request_id, parent_link_id, requested_to_identity, requested_by, depth, timestamp And approvers receive notification within 1 minute And approvers can approve or deny the request And upon approval a child link is created and event type="forward_request_approved" records approver_id and child_link_id And upon denial no link is created and event type="forward_request_denied" records approver_id and reason And if no action within the configured SLA (default 72h) the request auto-expires and event type="forward_request_expired" is recorded
Templated Invite Messages Rendering and Send
Given system-provided templates with variables {artist_name}, {release_title}, {link_url}, {expiry}, {watermark_notice} When a sender selects a template and initiates a forward Then a preview renders with all placeholders resolved And sending is blocked if any placeholder is unresolved or link_url is missing And the final message includes exactly one clickable link_url and the expiry date And the selected template_id and any custom edits are stored with event type="forward_message_sent"
Recipient Identity Capture and Activation
Given identity_capture = true on the parent or child link When a new recipient opens the child link Then they must submit name and email and verify ownership of the email via a magic link And until verification completes, downloads and streaming are blocked and status="Pending Verification" And upon successful verification, activation_timestamp and verified_email are recorded and the lineage shows the verified identity as the child link owner And the watermark identity binds to the verified email for any downloads
Inheritance and Relaxation of Constraints Across Lineage
Given a parent-child lineage with constraints set at any ancestor When a child link is created Then effective constraints on the child are computed as:
- forward_permission = false if any ancestor is false
- max_forward_depth = min(remaining_depth across ancestors)
- expiry = earliest expiry across ancestors
- quotas = most restrictive quota across ancestors
- allowlist = intersection of allowlists (if any); denylist = union of denylists
- approval_required = true if any ancestor requires approval
And any attempt to relax constraints requires an authorized owner and is validated against non-relaxable ancestor settings And all constraint computations and relaxations are logged with event types "constraints_inherited" and "constraint_relaxed" including sources and rationale And the UI surfaces the effective constraints and their source ancestor
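The effective-constraint computation is a fold over the ancestor chain; a sketch under the assumption that each ancestor record carries its own settings plus a precomputed `remaining_depth` (its max depth minus distance), with illustrative field names:

```python
def effective_constraints(ancestors: list) -> dict:
    """Fold the ancestor chain (root first) into a child's effective
    constraints. Field names are illustrative; `remaining_depth` is
    assumed to be precomputed per node."""
    nonempty_allowlists = [set(a["allowlist"]) for a in ancestors if a["allowlist"]]
    return {
        "forward_permission": all(a["forward_permission"] for a in ancestors),
        "max_forward_depth": min(a["remaining_depth"] for a in ancestors),
        "expiry": min(a["expiry"] for a in ancestors),  # earliest ISO-8601 wins
        "quota_downloads": min(a["quota_downloads"] for a in ancestors),
        # Empty allowlist = unrestricted, so only non-empty lists intersect.
        "allowlist": (set.intersection(*nonempty_allowlists)
                      if nonempty_allowlists else set()),
        "denylist": set().union(*(set(a["denylist"]) for a in ancestors)),
        "approval_required": any(a["approval_required"] for a in ancestors),
    }
```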
Lineage Graph and Audit Trail
"As a project coordinator, I want a clear visual lineage and complete activity log so that I can understand who shared with whom and take targeted actions quickly."
Description

Display a real-time, navigable graph of parent-child relationships for each shared item or collection, showing recipients, statuses (active, expired, revoked), quotas consumed, and key events (created, forwarded, downloaded, streamed). Provide filters, search, and CSV/JSON export for reporting. Each node should surface quick actions (revoke, extend, change quota) and link to detailed activity logs. The audit trail must be immutable, time-stamped, and scoped to roles/permissions within TrackCrate.

Acceptance Criteria
Lineage Graph Overview Rendering
Given a shared item with a parent link and at least two child forwards When the owner opens the Lineage view for that item Then a directed graph renders with nodes for the parent and each child, connected by edges indicating parent→child Then each node displays recipient identifier (name or email), shortlink code, status badge (Active | Expired | Revoked), and quota used/limit (e.g., 3/10) Then hovering a node shows a tooltip with Created At and Last Activity timestamps Then selecting a node opens a side panel with counts of Created, Forwarded, Downloaded, and Streamed events
Real-time Event Updates on Graph
Given the Lineage view is open and connected When a collaborator forwards a link creating a child shortlink Then the graph displays the new child node and edge within 2 seconds and logs a 'created' and 'forwarded' event for the parent and child respectively When a recipient downloads or streams from any node Then the corresponding node's counters increment within 2 seconds and a 'downloaded' or 'streamed' event appears in the activity feed When a node's link is revoked or reaches expiry Then its status badge updates to Revoked or Expired within 2 seconds and the node becomes visually de-emphasized
Filtering and Searching the Lineage Graph
Given a Lineage view with 50+ nodes spanning multiple statuses and dates When the user applies filters for Status=Active, Date Range=Last 30 days, and Quota Used >= 1 Then only nodes matching all filters remain visible and the node count reflects the filtered set When the user searches by recipient email or shortlink code Then matching nodes are highlighted and non-matching nodes are de-emphasized When filters are cleared Then the full, unfiltered graph is restored
CSV and JSON Export of Filtered Audit Data
Given a filtered Lineage view and the user has Export permission When the user exports to CSV Then a file downloads within 5 seconds containing one row per node and nested events count columns (Created, Forwarded, Downloaded, Streamed), with timestamps in ISO 8601 UTC and fields: nodeId, parentId, shortCode, recipientId/email, status, quotaUsed, quotaLimit, createdAt, lastActivityAt When the user exports to JSON Then a file downloads within 5 seconds containing the same fields plus a chronological events array per node (type, actorId, timestamp, ip, userAgent) When the user lacks Export permission Then export options are disabled in the UI and API requests return 403 Forbidden
Node Quick Actions and Activity Link
Given a selected node and the user has Manage Links permission When the user chooses Revoke Then a confirmation is required and, upon confirm, the node status changes to Revoked immediately, access via the shortlink is blocked, and the action is recorded in the audit trail When the user chooses Extend Expiry and sets a future date/time Then the expiry updates, the new date is shown on the node, and the change is recorded in the audit trail When the user chooses Change Quota and inputs a value within allowed limits Then the quota limit updates, displayed used/limit adjusts, and the change is recorded in the audit trail When the user chooses View Activity Then the detailed activity log opens for that node When the user lacks Manage Links permission Then quick actions are disabled with explanatory tooltips and API attempts return 403
Immutable, Time-Stamped, Role-Scoped Audit Trail
Given the detailed activity log for any node Then each event entry includes: eventId, eventType (created|forwarded|downloaded|streamed|revoked|quota_changed|expiry_changed), actorId, actorRole, timestamp (ISO 8601 UTC), sourceIp, userAgent Then events are append-only and ordered by timestamp; no UI affordance exists to edit or delete events When an authorized user attempts to modify or delete an event via API Then the request is rejected with 403 and no changes occur in the log When exporting and re-importing the audit data for verification Then integrity checks (e.g., per-event checksum or hash chain) validate without error When a user without sufficient role/permissions views the log Then only events within their scope are visible and sensitive fields are redacted as configured
Graph Navigation and Accessibility
Given a Lineage view with deep branching (depth >= 5, nodes >= 200) When the user pans, zooms, and expands/collapses branches Then interactions remain responsive (p95 < 200ms per action) and viewport preserves context with a minimap or breadcrumb to root When the user uses keyboard navigation (Tab/Arrow/Enter) Then focus moves between nodes and actions, and all actions are operable via keyboard When a screen reader is used Then nodes expose accessible names (recipient, status, quota) and edges expose parent→child relationships via ARIA labels Then color is not the sole indicator of status; badges include text labels (Active/Expired/Revoked)
Branch Revocation and Anomaly Alerts
"As a rights manager, I want to detect suspicious activity and revoke an entire branch instantly so that potential leaks are contained without disrupting legitimate collaborators."
Description

Enable one-click revocation of any node and its downstream branch, with options for soft lock (temporary suspend) or hard kill (permanent revoke), plus rekeying of upstream links if needed. Implement anomaly detection for suspicious behavior (e.g., rapid multi-geo downloads, quota bursts, unrecognized devices) and send configurable alerts to owners. Offer automated responses such as auto-suspend on threshold breach and require owner approval to restore. All actions should be recorded in the audit trail and reflected instantly in the lineage graph.

Acceptance Criteria
Soft Lock of Node and Downstream Branch
Given I am the owner viewing the lineage graph, When I click "Soft Lock" on node N and confirm, Then node N and all descendants transition to state Suspended within 5 seconds and their shortlinks are inaccessible for downloads. Given any active download session belongs to a suspended node, When the suspension is applied, Then the session is terminated within 10 seconds and no additional bytes are served. Given a suspended shortlink is visited, When a recipient loads the page, Then the page displays a "temporarily suspended" message, asset requests return HTTP 403, and no quota is consumed. Given a node is suspended by manual action, When I choose "Restore" as the owner, Then the node and descendants return to Active and downloads resume, unless an auto-suspend approval is pending.
Hard Kill of Node and Downstream Branch
Given I am the owner, When I click "Hard Kill" on node N and confirm with a second irreversible prompt, Then node N and all descendants are permanently revoked within 5 seconds. Given a revoked shortlink is accessed, When a recipient attempts to view or download, Then the page displays a "permanently revoked" message, asset requests return HTTP 410, and no forwarding or new child creation is possible. Given a node is hard killed, When anyone attempts to restore it or modify quotas/expiry, Then the action is rejected with a "cannot restore revoked link" error. Given a node is hard killed, When the lineage graph is displayed, Then node N and descendants are visually marked as Revoked.
Upstream Rekeying After Revocation
Given a branch has been revoked or flagged, When I select "Rekey Upstream" for ancestor nodes A..B, Then new signing keys are generated for those nodes within 30 seconds without changing their public shortlink URLs. Given rekeying completed, When a new child shortlink is generated from a rekeyed ancestor, Then it is tagged with the new key version and cannot be authorized using pre-rekey credentials. Given other unaffected branches exist under the rekeyed ancestors, When recipients use their existing links, Then those links continue to function without interruption. Given a revoked branch retains old credentials, When it attempts to create children or access upstream-protected endpoints, Then the request is rejected with HTTP 401/403.
Anomaly Detection and Owner Alerts
Given anomaly rules are enabled for a link, When download activity shows >=3 distinct countries within 10 minutes, Then an anomaly event of type Multi-Geo Spike is raised. Given anomaly rules are enabled for a link, When downloads consume >=80% of quota within 5 minutes, Then an anomaly event of type Quota Burst is raised. Given anomaly rules are enabled for a link, When a new device fingerprint not seen in the last 30 days downloads, Then an anomaly event of type Unrecognized Device is raised. Given an anomaly event is raised, When notifications are sent, Then the owner receives an alert via all enabled channels within 60 seconds containing link ID, anomaly type, observed metrics, and timestamp. Given multiple identical anomalies occur within 15 minutes, When alerts are evaluated, Then only one alert is sent (debounced) and subsequent events are summarized in the audit trail.
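The Multi-Geo Spike rule above is a sliding-window count of distinct countries. A minimal sketch — class and method names are hypothetical, and the Quota Burst and Unrecognized Device rules would follow the same windowed pattern:

```python
from collections import deque

class MultiGeoDetector:
    """Raise Multi-Geo Spike when downloads arrive from >= 3 distinct
    countries within a rolling 10-minute window (per the criteria)."""

    def __init__(self, min_countries: int = 3, window_s: int = 600):
        self.min_countries = min_countries
        self.window_s = window_s
        self.events = deque()  # (timestamp, country_code)

    def observe(self, ts: float, country: str) -> bool:
        """Record one download; return True when the rule fires."""
        self.events.append((ts, country))
        # Evict everything older than the window.
        while self.events and ts - self.events[0][0] > self.window_s:
            self.events.popleft()
        return len({c for _, c in self.events}) >= self.min_countries
```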
Auto-Suspend on Threshold Breach with Approval to Restore
Given Auto-Suspend is enabled for severity High anomalies, When an anomaly meets the configured auto-suspend criteria, Then the affected node and all descendants are soft locked within 5 seconds. Given an auto-suspend occurred, When the owner attempts to restore, Then the UI requires an explicit "Approve Restore" step and restoration is blocked until approved. Given auto-suspend is active, When a recipient accesses the link, Then suspended behavior applies (HTTP 403 for assets, suspension message shown) and no quota is consumed. Given the owner approves restoration, When the restore is executed, Then the node state returns to Active and an alert is sent confirming restoration. Given an auto-suspend occurred, When the owner chooses "Also Rekey Upstream" during restore, Then rekeying executes before the node returns to Active.
Audit Trail and Lineage Graph Real-Time Update
Given any action occurs (Soft Lock, Restore, Hard Kill, Rekey, Auto-Suspend, Alert Sent), When the audit trail is queried, Then a corresponding immutable entry exists with: UTC timestamp, actor (user/system), node IDs, action type, reason/anomaly ID (if applicable), previous state, new state, and request origin (IP/device). Given an action is performed, When I view the audit log UI, Then the new entry appears within 2 seconds of completion and is correctly ordered by timestamp. Given an action changes a node’s state, When the lineage graph is viewed, Then node N and descendants visually reflect the new state within 2 seconds. Given audit integrity is required, When an entry is corrected, Then the original entry remains immutable and a separate corrective entry is appended referencing the original.
API and Webhooks for ForwardTrace
"As a developer integrating TrackCrate, I want APIs and webhooks for ForwardTrace so that I can automate link creation, monitoring, and enforcement in our existing workflows."
Description

Provide REST/GraphQL endpoints to create, manage, and query parent/child links; set quotas and expiries; control forwarding permissions; and retrieve lineage and audit data. Expose webhooks for events (link created, forwarded, downloaded, streamed, expired, revoked, anomaly_detected). Include OAuth-scoped access, idempotency keys, and rate limits. Supply SDK examples and OpenAPI/GraphQL schema docs to ease integration with label tooling and CRM systems.

Acceptance Criteria
OAuth-Scoped Access, Idempotency, and Rate Limits
Given a request without an OAuth access token, When calling any ForwardTrace API endpoint, Then the response is 401 Unauthorized with a WWW-Authenticate header indicating Bearer. Given a token missing required scopes for the operation (e.g., forwardtrace.read, forwardtrace.write, forwardtrace.webhooks), When invoking the endpoint, Then the response is 403 Forbidden with error "insufficient_scope" and a "required_scopes" list. Given a valid token with the correct scopes, When invoking the endpoint, Then the response is 2xx and includes no scope-related errors. Given a POST or mutation request including an Idempotency-Key header and body X, When processed the first time, Then the server creates the resource and returns 201 Created (REST) or data payload (GraphQL) with a stable resource id. Given the same request re-sent within the idempotency window with the same Idempotency-Key and identical body X, When processed, Then the server returns 200 OK with the same resource id and an Idempotency-Replayed: true header (or equivalent flag), without creating a duplicate. Given the same Idempotency-Key but a different request body, When processed, Then the server returns 409 Conflict with error code "idempotency_key_body_mismatch" and creates no resource. Given a client that exceeds the published per-token rate limit, When further requests are made, Then the server returns 429 Too Many Requests with Retry-After and standard rate limit headers (X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset), and idempotent retries after Retry-After succeed. Given requests that receive 429 responses, When counting toward the limit window, Then 429 responses do not further decrement the remaining quota beyond the request that triggered the limit.
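The idempotency semantics above — replay on an identical body, 409 on a mismatched body — can be sketched with a keyed store of body hashes. This is an in-memory sketch; a real service would persist records and expire them after the documented idempotency window:

```python
import hashlib

class IdempotencyStore:
    """Replay-safe creates: same key + same body replays the stored
    result; same key + different body is rejected with 409."""

    def __init__(self):
        self._records = {}  # idempotency key -> (body sha256, resource id)

    def execute(self, key: str, body: bytes, create):
        digest = hashlib.sha256(body).hexdigest()
        if key in self._records:
            stored_digest, resource_id = self._records[key]
            if stored_digest != digest:
                return {"status": 409,
                        "error": "idempotency_key_body_mismatch"}
            return {"status": 200, "id": resource_id,
                    "Idempotency-Replayed": True}
        resource_id = create()  # runs only on first sight of the key
        self._records[key] = (digest, resource_id)
        return {"status": 201, "id": resource_id}
```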
Create and Manage Parent Links via REST and GraphQL
Given a valid OAuth token with forwardtrace.write, When POST /v1/links is called with asset_ids, quotas, expiry_at (ISO-8601), and forwarding permissions (allow_forward, max_forward_depth), Then the server returns 201 Created with a link object including id, short_url, status=active, quotas, expiry_at, allow_forward, max_forward_depth, created_at, created_by. Given invalid inputs (expiry in the past, non-numeric or negative quotas, unknown asset_ids), When POST /v1/links is called, Then the server returns 422 Unprocessable Entity with field-level error details and creates no link. Given a created link id, When GET /v1/links/{id} is called, Then the server returns 200 OK with the full current representation including download_count and stream_count initialized to 0. Given an existing active link, When PATCH /v1/links/{id} updates quotas, expiry_at, or forwarding permissions (but not id or parent/child relationships), Then the server returns 200 OK and persisted changes are reflected in subsequent GETs and GraphQL queries. Given an active link, When revokeLink is invoked (REST: POST /v1/links/{id}:revoke or GraphQL mutation), Then the link status becomes revoked, subsequent access attempts are blocked with 403 Forbidden, and a "revoked" event is emitted once.
Forwarding to Child Links with Unique Watermarks and Permissions
Given a parent link with allow_forward=true and remaining forward depth (>0), When POST /v1/links/{parent_id}/forward (or GraphQL forwardLink) is called with optional quotas and expiry_at, Then the server returns 201 Created with a child link that has a new id, short_url, parent_id set, status=active, its own quotas and expiry_at, and a unique watermark_id. Given allow_forward=false or no remaining forward depth, When a forward is attempted, Then the server returns 403 Forbidden with error "forwarding_not_allowed" and creates no child link. Given a forward request without quotas/expiry, When processed, Then the child inherits default policies from the parent according to spec (not exceeding any parent-enforced maximums) and records forwarded_by (actor) and created_at. Given multiple forwards from the same parent, When each is created, Then each child receives a distinct short_url and watermark_id, and the lineage records all children correctly. Given a child link, When checking its metadata, Then allowed forward depth does not exceed the remaining depth policy and forwarding permissions are enforced consistently for further descendants.
Lineage and Audit Retrieval via API and GraphQL
Given a link id, When GET /v1/links/{id}/lineage is called, Then the server returns 200 OK with a lineage structure containing nodes (id, parent_id, status, created_at, created_by/forwarded_by, expiry_at, quotas.used/remaining, watermark_id) and supports depth and pagination parameters. Given a GraphQL query for link(id) { lineage(depth: N, after: C) { nodes { id parent_id status expiry_at watermark_id download_count stream_count } pageInfo { hasNextPage endCursor } } }, When executed with valid authorization, Then the response includes the requested depth and pagination info. Given a link id, When GET /v1/audit-events?link_id={id} is called, Then the server returns 200 OK with a chronologically ordered, cursor-paginated list of events including types: link_created, link_forwarded, downloaded, streamed, expired, revoked, anomaly_detected; each event contains event_id, occurred_at, type, link_id, actor_id (if available), and relevant data. Given a link with recorded downloads/streams, When comparing lineage download/stream counters to aggregated audit events at the time of query, Then the counts are consistent (no negative or decreasing totals, and totals match event aggregates for completed events).
Webhook Delivery, Security, and Retry Semantics
Given a registered webhook endpoint with a shared secret and selected event types, When a link is created/forwarded/downloaded/streamed/expired/revoked or anomaly_detected occurs, Then the system sends an HTTP POST to the endpoint with a JSON payload containing event_id, type, occurred_at, link_id, and data, and includes X-TrackCrate-Signature (HMAC-SHA256 with timestamp) and X-TrackCrate-Timestamp headers. Given the receiver returns any 2xx status, When processing delivery, Then the event is marked delivered and no retry is attempted. Given the receiver returns non-2xx or times out, When processing delivery, Then the system retries with exponential backoff up to the documented max attempts, preserving at-least-once delivery guarantees. Given multiple events for the same link, When delivered, Then order is preserved per link_id, and consumers can deduplicate using event_id, which is unique and stable. Given an invalid signature or a stale timestamp beyond the allowed window, When the receiver validates the delivery using the documented verification examples, Then verification fails, and the provider continues retrying per policy until success or max attempts are reached.
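Receiver-side signature checking might look like the following sketch. The exact string-to-sign (timestamp, a dot, then the raw body) and the 300-second freshness window are assumptions — the spec only states HMAC-SHA256 with a timestamp:

```python
import hashlib
import hmac
import time

TOLERANCE_S = 300  # assumed freshness window; the spec leaves this to policy

def verify_webhook(secret: bytes, payload: bytes,
                   signature: str, timestamp: str, now: float = None) -> bool:
    """Check X-TrackCrate-Timestamp freshness, then compare the expected
    HMAC-SHA256 of "<timestamp>.<body>" against X-TrackCrate-Signature
    in constant time."""
    now = time.time() if now is None else now
    if abs(now - int(timestamp)) > TOLERANCE_S:
        return False  # stale timestamp: treat as invalid
    expected = hmac.new(secret, f"{timestamp}.".encode() + payload,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Constant-time comparison (`hmac.compare_digest`) avoids timing side-channels when rejecting forged signatures.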
Quotas, Expiry, Access Responses, and Event Emission
Given an active link with remaining download quota before expiry, When a recipient downloads an asset via the link, Then the response is 200, the download_count increments atomically by 1, and a "downloaded" event is emitted. Given an active link with remaining stream quota before expiry, When a recipient streams via the stem player, Then the stream_count increments atomically by 1 and a "streamed" event is emitted. Given a link that has reached its download or stream quota, When another download/stream is attempted, Then the response is 429 Too Many Requests with an error indicating quota_exceeded; no counts increment and no duplicate "downloaded/streamed" events are emitted. Given a link past its expiry_at, When a download/stream is attempted, Then the response is 410 Gone, an "expired" event is emitted once on first detection, and subsequent attempts continue returning 410 without emitting duplicate "expired" events. Given a link that was revoked, When access is attempted, Then the response is 403 Forbidden and no further "downloaded/streamed" events are emitted after revocation.
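The access-response matrix above (403 revoked, 410 expired, 429 quota exhausted, 200 otherwise) can be sketched as one decision function. Field names are illustrative, and the in-place counter bump stands in for the atomic increment the criteria require:

```python
def access_decision(link: dict, now_iso: str, kind: str) -> dict:
    """kind is "download" or "stream". ISO-8601 UTC strings compare
    lexically, so a plain string comparison works for expiry."""
    if link["status"] == "revoked":
        return {"status": 403}                    # revoked beats everything
    if now_iso >= link["expiry_at"]:
        return {"status": 410}                    # expired -> 410 Gone
    used, limit = link[f"{kind}_count"], link[f"{kind}_quota"]
    if used >= limit:
        return {"status": 429, "error": "quota_exceeded"}
    link[f"{kind}_count"] = used + 1              # atomic increment in production
    return {"status": 200, "event": f"{kind}ed"}  # "downloaded" / "streamed"
```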
Developer Docs, OpenAPI/GraphQL Schemas, and SDK Examples
Given the published OpenAPI 3.1 document at /docs/openapi.json and human-readable docs, When validated with an OpenAPI linter, Then there are no errors and all ForwardTrace endpoints (links CRUD, forward, lineage, audits, webhooks) are documented with request/response schemas and error models. Given the GraphQL endpoint, When fetching the SDL/introspection, Then the schema exposes types and mutations/queries covering creation, forwarding, lineage, audits, and webhook subscription management, with field descriptions. Given the official SDK examples for JavaScript/TypeScript and Python, When running the provided scripts with test credentials, Then they successfully: obtain an OAuth token, create a parent link, forward a child link, handle idempotency on retry, subscribe to webhooks and verify signatures, receive at least one webhook event, and query lineage and audit events. Given code samples in the docs, When copy-pasted and run with valid configuration, Then they execute without modification and without errors, producing the documented outputs. Given versioned docs, When checking the changelog, Then breaking changes and deprecations for the ForwardTrace API/webhooks are clearly listed with migration guidance.

Watermark Map

Visualize a chain-of-custody map that ties every recipient to a unique watermark ID. Drop in a suspect clip to identify the original link in seconds and see the propagation path across forwards, accelerating leak source discovery and response.

Requirements

Per-Recipient Watermark ID Generation
"As a label manager, I want each recipient to receive a uniquely identified asset so that any leaked clip can be traced back to the exact source."
Description

Generate a cryptographically unique, non-guessable watermark identifier for every TrackCrate shortlink, download, and private player stream. Persist a canonical mapping between asset version, recipient identity, issuance timestamp, and watermark ID in a tamper-evident registry. Integrate with existing expiring, watermarked downloads so each retrieval receives a distinct ID while preserving the master asset. Expose an internal service and API to mint, validate, and look up IDs, ensuring collision resistance, rate limiting, and idempotent issuance for retried requests. Provide admin tooling to search by asset, recipient, or ID and return lineage context needed by Watermark Map.

Acceptance Criteria
Mint API: Unique, Non-Guessable Watermark ID
Given a POST to /watermarks/mint with valid auth and payload {assetVersionId, recipientId, context} When the request is processed Then the response status is 201 and the body includes id, assetVersionId, recipientId, issuedAt And id matches regex ^[0-9A-Za-z]{22}$ and is generated via a CSPRNG with ≥128 bits of entropy And the system records no collisions when minting 1,000,000 IDs in a test harness And IDs are non-sequential and contain no deterministic prefixes across 10,000 sequential mints And p95 latency for the endpoint is ≤150ms under nominal load (≤50 RPS)
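A 22-character draw from a base62 alphabet via a CSPRNG satisfies both the format and the entropy requirements, since log2(62^22) ≈ 131 bits clears the ≥128-bit floor. A sketch:

```python
import re
import secrets
import string

ALPHABET = string.digits + string.ascii_uppercase + string.ascii_lowercase  # base62
WATERMARK_RE = re.compile(r"^[0-9A-Za-z]{22}$")

def mint_watermark_id() -> str:
    """22 base62 chars from the OS CSPRNG (secrets module):
    log2(62**22) ~= 131 bits of entropy, non-sequential by construction."""
    return "".join(secrets.choice(ALPHABET) for _ in range(22))
```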
Idempotent Issuance via Idempotency-Key
Given two POSTs to /watermarks/mint with identical payload and the same Idempotency-Key header within 24 hours When both requests are processed Then both return the same id and the second returns 200 with Idempotent-Replay: true And if the payload differs, the second request returns 409 with an error describing the mismatch And if the Idempotency-Key is absent, each successful request returns a distinct id And idempotency records expire after 24 hours and are purged without affecting the registry
Rate Limiting and Abuse Controls on Mint/Validate/Lookup APIs
Given an API key making requests to mint/validate/lookup endpoints When requests exceed 100 requests per minute per API key or 1000 requests per minute per organization Then subsequent requests receive HTTP 429 with a Retry-After header set appropriately And rate-limit counters reset per rolling 60-second window and allow a burst of 2x the per-key limit And malformed or unauthenticated requests receive 400/401/403 without consuming the rate limit And all 429 responses are logged with key, org, endpoint, and quota bucket for audit
Tamper-Evident Registry Persistence
Given a successful mint When the record is persisted Then the registry stores {watermarkId, assetVersionId, recipientId, issuedAt (UTC ISO-8601 ms), linkId/sessionId, parentId?, actor, method} immutably And each record includes prevHash and recordHash using SHA-256 to form a hash chain And attempts to update or delete a record are rejected with 405 and produce no state change And running POST /watermarks/audit/verify returns status=ok when the chain is intact And introducing a synthetic mismatch in a test environment causes /watermarks/audit/verify to return status=fail identifying the earliest broken link
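The prevHash/recordHash chain can be sketched as follows. Hashing over canonical JSON is an assumed serialization; the criteria only mandate SHA-256 chaining and earliest-break reporting:

```python
import hashlib
import json

GENESIS = "0" * 64  # prevHash of the first record

def _record_hash(fields: dict, prev_hash: str) -> str:
    # Canonical JSON (sorted keys) so the hash is serialization-stable.
    payload = json.dumps(fields, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

def append_record(chain: list, fields: dict) -> None:
    prev = chain[-1]["recordHash"] if chain else GENESIS
    chain.append({**fields, "prevHash": prev,
                  "recordHash": _record_hash(fields, prev)})

def verify_chain(chain: list):
    """Return (True, None) when intact, else (False, index) identifying
    the earliest broken link, as /watermarks/audit/verify requires."""
    prev = GENESIS
    for i, entry in enumerate(chain):
        fields = {k: v for k, v in entry.items()
                  if k not in ("prevHash", "recordHash")}
        if entry["prevHash"] != prev or \
           entry["recordHash"] != _record_hash(fields, prev):
            return False, i
        prev = entry["recordHash"]
    return True, None
```

Because each recordHash feeds the next record's prevHash, editing any field of any record invalidates every hash from that point forward.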
Integration: Expiring Watermarked Downloads and Private Player Streams
Given a valid shortlink retrieval for a downloadable asset before expiry When the download is generated Then a new watermarkId is minted and embedded in the derivative file without modifying the master asset checksum And the mapping links watermarkId to linkId and recipientId and is visible in lookup And after link expiry, attempts to mint for that link return 403 and no registry entry is created And for private player streams, a new watermarkId is minted per play session and applied consistently across all segments of that session And session records include sessionId, ipHash, userAgent, and issuedAt for lineage
Validate and Lookup Endpoints Return Canonical Mapping
Given a GET /watermarks/{id} When the id exists and caller is authorized Then response status is 200 with {watermarkId, assetVersionId, recipientId, issuedAt, linkId/sessionId, parentId?, revoked:false} And p95 latency is ≤200ms under nominal load And GET /watermarks/validate?id={id} returns 200 valid:true for existing, non-revoked IDs and 404 for unknown IDs and 410 for revoked IDs And access is restricted to org members with scope watermark:read, otherwise 403
Admin Tooling: Search, Lineage Context, and Export
Given an org admin accesses the Watermarks admin tool When they search by assetVersionId, recipient (email or ID), watermarkId, or issuedAt range Then results are returned with pagination (default 50/page), sortable by issuedAt and recipient And selecting a record shows lineage context: parentId, link/forward chain, session/download details suitable for Watermark Map And admins can export current results to CSV with a maximum of 100,000 rows per export And all actions are logged with admin user, timestamp, and filters applied
Robust Watermark Embedding for Audio Stems
"As an audio engineer, I want the watermark to survive typical sharing and re-encoding so that leak detection remains reliable without compromising sound quality."
Description

Embed the unique watermark ID into audio assets (WAV/AIFF/FLAC/MP3) using an inaudible watermarking algorithm resilient to common transformations (transcoding, re-encoding, gain changes, trimming, and moderate time-stretching). Implement a server-side pipeline that injects the watermark on-the-fly for downloads and during playback in the private stem player without degrading audio fidelity. Ensure low-latency processing, batch support for album/stem packs, deterministic quality checks, and confidence benchmarking across codecs and bitrates. Maintain original lossless masters; store derived fingerprints and embedding metadata for later verification by Watermark Map.

Acceptance Criteria
On-the-Fly Watermarking During Private Stem Player Playback
- Given a logged-in recipient with a unique link and watermark ID, When they stream any supported audio asset (WAV/AIFF/FLAC/MP3) in the private stem player, Then the delivered audio includes an embedded watermark matching the recipient’s ID.
- Given the same asset is played across multiple sessions by the same recipient, When streamed again, Then the embedded watermark ID remains consistent and decodable across the entire duration played.
- Given player seek, pause, resume, and buffer events, When the user scrubs within the track, Then the watermark remains present and decodable in all streamed segments.
- Then median added playback start latency <= 300 ms and p95 <= 600 ms under 50 concurrent streams.
- Then watermark decode confidence on a 30-second sampled segment from the stream >= 0.95 at 320 kbps AAC and >= 0.9 at 128 kbps AAC.
Server-Side Watermark Injection for Download Requests
- Given a recipient requests a download via their shortlink, When generation begins, Then a derived file is produced on-the-fly with the recipient’s watermark ID and delivered, while the original lossless master remains unchanged in storage.
- Then output format equals the requested format (WAV/AIFF/FLAC/MP3) unless policy requires transcode; in all cases, the embedded watermark remains decodable.
- Then throughput SLA for a 5-minute 44.1kHz stereo WAV: generation time <= 8 s p50, <= 15 s p95.
- Then repeated downloads by the same recipient for the same asset produce byte-identical outputs or a stable cryptographic signature with identical decode results and confidence.
Resilience to Common Audio Transformations
- Given a watermarked file, When gain is adjusted by ±6 dB, Then watermark decode confidence >= 0.9.
- When re-encoded to MP3 128 kbps CBR, MP3 192 kbps VBR, AAC 128 kbps, and Opus 96 kbps, Then decode confidence >= 0.9 for each codec/bitrate.
- When trimmed by up to 5 s from start or end, Then decode succeeds and the ID matches the source.
- When time-stretched between 0.97x and 1.03x without pitch correction, Then decode confidence >= 0.85.
- When downmixed to mono and resampled between 44.1 kHz and 48 kHz, Then decode confidence >= 0.9.
Batch Watermarking for Album and Stem Packs
- Given a batch download (1 <= N <= 200 assets), When a recipient requests the archive, Then every asset in the archive is watermarked with that recipient’s unique ID.
- Then total processing time <= 5 s base + 4 s per minute of total audio duration (p95 <= 1.5x this SLA).
- Then failures are isolated: any failed asset is retried up to 2 times; succeeded items are delivered with a manifest enumerating per-asset watermark IDs and checksums; failures are listed.
- Then the delivered ZIP passes integrity verification and includes per-file sidecar JSON containing watermark ID, algorithm version, embedding parameters, and checksum.
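As a quick illustration of the batch SLA arithmetic above (the helper name is ours, not part of the spec):

```python
def batch_sla_seconds(total_audio_minutes: float, base=5.0,
                      per_minute=4.0, p95_factor=1.5):
    """Target and p95 processing budget per the SLA: base + per-minute cost."""
    target = base + per_minute * total_audio_minutes
    return target, target * p95_factor

# A stem pack totalling 10 minutes of audio:
target, p95_budget = batch_sla_seconds(10)  # 45.0 s target, 67.5 s p95 budget
```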
Audio Fidelity Preservation and Deterministic Quality Checks
- For lossless outputs, Then ViSQOL score delta vs. master <= 0.02 and segmental SNR >= 35 dB over the full track; for lossy outputs, Then objective quality degradation vs. the baseline encode does not exceed 0.5 on the PEAQ ODG scale.
- In a 10-listener ABX double-blind test on 60 s excerpts (44.1 kHz/16-bit WAV), Then detection rate of watermark presence <= 20% (p < 0.05).
- On embed completion, Then the system computes and stores an audio fingerprint (e.g., Chromaprint), watermark decode confidence, and a deterministic QC verdict; assets failing thresholds are blocked from delivery and surfaced with error codes.
Embedding Metadata and Fingerprint Storage for Watermark Map
- For each delivered or streamed watermarked asset, Then the system persists: asset_id, recipient_id, watermark_id, algorithm version, embedding timestamp, parent link ID, output codec/bitrate, derived file checksum, audio fingerprint, and embed-time decode confidence.
- Then records are immutable with an append-only audit trail; timestamp skew across services <= 100 ms.
- Then an authorized API can retrieve records by watermark_id or recipient_id with p99 read latency <= 200 ms; all access is permissioned and logged.
Verification From Suspect Clip
- Given a suspect audio clip (10–30 s) in a common format, When verification is requested, Then the system decodes and returns the watermark ID and originating link within 5 s p50 and 10 s p95.
- Then the decoder tolerates up to 1 dBFS clipping, background noise down to 20 dB SNR, and pitch shift ±0.5 semitones with time-stretch 0.98x–1.02x, achieving >= 95% success on the benchmark set.
- If decode confidence < 0.7 or multiple candidates exist, Then return top-3 candidate IDs with confidence scores and flag for manual review; all attempts are logged for the Watermark Map.
Fast Watermark Extraction from Suspect Clips
"As a promotions lead, I want to drop a suspect clip and instantly see who it was issued to so that I can act quickly to contain the leak."
Description

Provide a drag-and-drop UI and API endpoint to ingest short audio clips (≥5 seconds) and rapidly extract the embedded watermark ID with a confidence score. Support common formats, partial segments, and noisy recordings to enable practical field investigations. Return results in seconds under normal load, including the matched ID, original shortlink, recipient, asset version, and first-seen timestamp. Implement a scalable worker queue, concurrency controls, GPU/CPU acceleration where applicable, and rate limiting to protect the service. Log all analyses for audit, and surface fallback guidance when confidence is below threshold.

Acceptance Criteria
Drag-and-Drop UI Extraction Happy Path
- Given an authenticated user with access to TrackCrate, When they drag-and-drop a 5–30 second MP3 or WAV clip that contains a valid watermark into the extractor UI, Then within 3 seconds (p95) the UI displays watermark_id, original_shortlink, recipient, asset_version, first_seen_at (ISO-8601 UTC), confidence in [0,1] with two-decimal precision, and extraction_status=success.
- Given a clip shorter than 5 seconds, When dropped onto the UI, Then the upload is blocked client-side with an error "Clip must be at least 5 seconds" and no API call is made.
- Given a supported clip 5–60 seconds, When upload starts, Then a progress indicator is shown and a cancel action is available that stops processing; And a corresponding audit record is written with outcome=canceled if canceled.
- Given a successful extraction, When results are shown, Then a "View on Watermark Map" link navigates to the recipient node for the matched watermark_id.
- Given an unsupported format file, When dropped, Then the UI displays "Unsupported format" and prevents upload.
API Extraction Endpoint and Response Schema
- Given a valid API token, When the client POSTs a clip of 5–60 seconds to /v1/watermarks/extract as multipart/form-data or application/octet-stream, Then the response is 200 with JSON containing watermark_id (string), confidence (number), original_shortlink (URL string), recipient (object with id and name), asset_version (string), first_seen_at (ISO-8601 UTC string), request_id (UUID string), and processing_ms (integer).
- Given a valid request under normal load for files ≤ 20 MB, When processed, Then p95 processing_ms ≤ 3000 and p99 ≤ 5000.
- Given a clip shorter than 5 seconds or a corrupt file, When submitted, Then the API returns 422 with error.code and error.message describing the validation failure.
- Given an unsupported media type, When submitted, Then the API returns 415 Unsupported Media Type.
- Given a missing or invalid token, When submitted, Then the API returns 401 Unauthorized.
- Given Accept: application/json, When the API returns, Then the response includes Cache-Control: no-store and Content-Type: application/json; charset=utf-8.
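A hedged sketch of how a client could check the documented response shape. The field list comes from the criterion above; the helper function and its error strings are invented for illustration.

```python
import uuid
from datetime import datetime

# Fields the extract endpoint promises, per the acceptance criterion.
REQUIRED = {
    "watermark_id": str, "confidence": (int, float), "original_shortlink": str,
    "recipient": dict, "asset_version": str, "first_seen_at": str,
    "request_id": str, "processing_ms": int,
}

def validate_extract_response(body: dict) -> list:
    """Return a list of schema violations for a /v1/watermarks/extract response."""
    errors = []
    for field, typ in REQUIRED.items():
        if field not in body:
            errors.append(f"missing {field}")
        elif not isinstance(body[field], typ):
            errors.append(f"{field} has wrong type")
    if not errors:
        if not 0 <= body["confidence"] <= 1:
            errors.append("confidence out of [0,1]")
        try:
            uuid.UUID(body["request_id"])
        except ValueError:
            errors.append("request_id is not a UUID")
        try:
            datetime.fromisoformat(body["first_seen_at"].replace("Z", "+00:00"))
        except ValueError:
            errors.append("first_seen_at is not ISO-8601")
    return errors
```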
Robustness to Partial Clips, Formats, and Noise
- Given a 5-second segment starting at a random offset within a watermarked track, When extracted, Then the correct watermark_id is returned with confidence ≥ 0.80.
- Given MP3 (64–320 kbps), WAV (16-bit), AAC/M4A (≥ 64 kbps), FLAC, and OGG Vorbis at sample rates 22.05–48 kHz, When extracted, Then success rate ≥ 99% with the correct watermark_id on a test set of at least 100 clips per format.
- Given a smartphone-style re-recording with background noise at SNR ≥ 0 dB and mild room reverb (T60 ≤ 0.5 s), When extracted, Then the correct watermark_id is returned with confidence ≥ 0.60.
- Given an audio clip with no TrackCrate watermark present, When extracted, Then extraction_status=no_match, confidence ≤ 0.30, and no watermark_id is returned.
- Given an audio clip with a partial or degraded watermark such that confidence falls below threshold, When extracted, Then the result is status=indeterminate and no recipient is attributed.
Performance and Scalability Under Normal and Bursty Load
- Given normal load of ≤ 20 requests per second system-wide, When extraction requests are processed, Then p95 end-to-end latency ≤ 3 seconds and error rate (5xx) ≤ 0.1%.
- Given a 10x burst to 200 requests per second sustained for 60 seconds, When extraction requests are processed, Then no data loss occurs, queueing is applied, p95 latency ≤ 6 seconds during the burst, and the backlog drains within 2 minutes after the burst ends.
- Given GPU resources are unavailable, When the system processes requests on CPU-only, Then p95 latency ≤ 5 seconds at 5 requests per second and correctness is unchanged within ±0.02 confidence.
- Given a worker crash mid-job, When the job is retried, Then it is retried up to 2 times with exponential backoff and is marked failed with a clear error code if still unsuccessful.
- Given a queued job exceeds a 30-second wait time, When monitored, Then backpressure signals cause new UI submissions to display "System busy, please retry" and new API requests to receive 503 with a Retry-After header.
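The retry criterion above (up to 2 retries, exponential backoff, clear error code on exhaustion) might look roughly like this. The error-code string, jitter, and base delay are assumptions; the injectable `sleep` exists only to make the sketch testable.

```python
import random
import time

def run_with_retries(job, max_retries=2, base_delay=0.5, sleep=time.sleep):
    """Run a job, retrying up to max_retries times with exponential backoff.

    Raises a clearly-coded error once retries are exhausted, matching the
    worker-crash criterion above.
    """
    attempt = 0
    while True:
        try:
            return job()
        except Exception as exc:
            if attempt >= max_retries:
                raise RuntimeError(
                    f"EXTRACTION_FAILED after {attempt + 1} attempts") from exc
            # Backoff doubles each attempt (0.5 s, 1 s, ...) plus small jitter.
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
            attempt += 1
```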
Rate Limiting and Abuse Protection
- Given an API key, When more than 60 extraction requests are received within 60 seconds or more than 10 concurrent jobs are active for that key, Then subsequent requests receive 429 Too Many Requests with X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset headers.
- Given a client continues sending requests after receiving 429s and exceeds the limit for 3 consecutive minutes, When requests arrive, Then the key is temporarily blocked for 15 minutes and requests receive 403 with error.code=rate_limit_block.
- Given a UI user exceeds 10 extractions per minute, When they attempt another extraction, Then the UI shows "Rate limit reached, please try again in X seconds" and no API call is made.
- Given a single IP without authentication exceeds 100 requests per minute, When requests arrive, Then a 401 is returned and the IP is flagged for monitoring.
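The 60-requests-per-60-seconds rule could be enforced with a sliding-window counter along these lines. This is a single-process sketch (a real deployment would need shared state across workers, e.g. in Redis), and the class name is ours.

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per API key."""

    def __init__(self, limit=60, window=60.0):
        self.limit, self.window = limit, window
        self.hits = {}  # api_key -> deque of request timestamps

    def allow(self, api_key: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits.setdefault(api_key, deque())
        while q and now - q[0] >= self.window:
            q.popleft()  # discard hits that slid out of the window
        if len(q) >= self.limit:
            return False  # caller responds 429 with X-RateLimit-* headers
        q.append(now)
        return True
```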
Audit Logging and Traceability
- Given any extraction request (success, no_match, indeterminate, failed, or canceled), When processing completes or is canceled, Then an immutable audit record is persisted within 2 seconds containing: request_id, user_id/org_id, timestamp (UTC), client_ip, mime_type, duration_sec, file_size_bytes, input_hash_sha256, model_version, compute_backend (GPU|CPU), outcome, watermark_id (if any), confidence (if any), latency_ms, and rate_limit_applied (bool).
- Given audit records are stored, When an admin queries by time range, watermark_id, or user_id, Then results return within 2 seconds and support pagination and CSV export.
- Given audit records, When a non-admin user queries, Then only their organization’s records are returned and sensitive fields (client_ip, input_hash_sha256) are redacted.
- Given audit integrity requirements, When records are appended, Then a tamper-evident hash chain is updated and nightly verification passes for 100% of records.
Low-Confidence Fallback Guidance
- Given a completed extraction with confidence < threshold (default 0.75), When results are displayed in the UI, Then a guidance banner appears with the steps: "Try a 10–20 s clip", "Use a cleaner sample with less background noise", and "Try a different segment", including a link to documentation.
- Given a completed extraction with confidence < threshold via the API, When the response is returned, Then status=indeterminate, watermark_id=null, and guidance is an array of at least three actionable strings; HTTP 200 is returned.
- Given confidence ≥ threshold, When results are produced, Then no fallback guidance is shown and the Watermark Map updates to highlight the matched recipient.
Chain-of-Custody Graph Visualization
"As an A&R director, I want an interactive map of how a file spread so that I can understand the path and address weak points in the sharing chain."
Description

Render an interactive Watermark Map that visualizes the chain of custody from the original issuer through all forwards and reshares, using nodes for recipients and edges for handoffs. Display key context (timestamps, channel/link type, asset version, territory, device hints) with filtering, search, and time zoom. Highlight the originating link when a suspect clip is identified and animate the shortest path from source to current node. Provide export options (PNG/SVG and JSON) and deep links back to the associated shortlinks, AutoKit pages, and release records. Ensure performance on projects with thousands of recipients and accessibility for global teams across time zones.

Acceptance Criteria
Render Graph at Scale with Smooth Interactions
- Given a project containing 5,000 recipient nodes and associated edges, When the Watermark Map loads, Then the initial render completes within 3 seconds on a standard laptop (4-core CPU, integrated GPU) and displays a legible layout without UI freeze.
- Given the rendered graph, When the user pans or zooms, Then p95 interaction frame time is <= 50 ms and no input is dropped for more than 300 ms.
- Given the same dataset and layout seed, When the map is reloaded, Then node positions are deterministic within ±2 px and edge routes are consistent.
- Given a node is clicked, When selection occurs, Then the node is highlighted, its immediate neighbors and incident edges are emphasized, and non-neighbor elements are visually de-emphasized.
- Given multi-select mode is active, When the user lasso-selects up to 500 nodes, Then selection completes within 500 ms and the UI remains responsive.
Suspect Clip Identification and Path Trace
- Given a suspected clip with an embedded watermark ID is pasted or dropped, When the system parses the watermark, Then it resolves to a unique recipient node or presents a disambiguation dialog if multiple matches exist.
- Given a recognized watermark ID, When the user clicks "Trace", Then the originating link node is highlighted and the shortest path from source to the matched node is animated within 1 second and is pausable and restartable.
- Given no match is found, When parsing completes, Then the user sees a non-destructive "No match found" message with troubleshooting steps and a link to support, and no changes are applied to the graph.
- Given a trace is completed, When the operation ends, Then an audit log entry is recorded with user, timestamp (UTC), watermark ID, path length, and filters active at time of trace.
Context Metadata Display and Filters
- Given a node or edge is hovered or focused, When a tooltip or detail panel opens, Then it shows timestamp (local with UTC toggle), channel/link type, asset version, territory, and device hints; missing fields are labelled "Unknown".
- Given one or more filters (channel, asset version, territory, device, date range) are applied, When filters are confirmed, Then only matching nodes/edges remain visible and counts update; p95 filter application time <= 500 ms on 5,000 nodes.
- Given filters are active, When the user copies the share URL, Then the URL encodes the filter state and pasting it in a new session restores the same view.
- Given a user in any locale, When timestamps render, Then the display follows the user's locale preferences with an option to switch to ISO 8601 UTC.
Search and Time Zoom Controls
- Given the user types in the search bar, When a query (name/email/watermark ID) of >= 2 characters is entered, Then results are returned and highlighted within 200 ms p95 and navigation jumps to the first match on Enter.
- Given the timeline control is used, When the user adjusts the time range, Then the graph updates to include only edges within the range and the histogram reflects the selection with at least 1-hour granularity.
- Given multiple matches exist, When the user presses Arrow Down/Up in the search input, Then focus cycles through result items and the corresponding node is centered.
Export Visual and Data Outputs
- Given the current viewport shows a filtered graph, When the user exports PNG or SVG, Then the file includes the visible graph, legend, applied filter summary, and a timestamp watermark; exported at selectable scales (1x, 2x) and matching on-screen colors.
- Given export to JSON is chosen, When the export starts, Then the file contains only visible nodes/edges plus global metadata (schemaVersion, generatedAt UTC, filters, layout seed); it validates against the defined schema without errors.
- Given a project with 5,000 nodes, When any export is performed, Then the operation completes within 5 seconds p95 or displays a progress indicator and does not block interactions.
- Given the export completes, When the file downloads, Then the filename pattern is TrackCrate_<projectName>_WatermarkMap_<YYYYMMDDTHHmmssZ>.<ext>.
Deep Linking to Related Records
- Given a node or edge detail panel is open, When the user clicks the shortlink/AutoKit/release link, Then the target opens in a new tab with a returnTo parameter that restores the map view state on return.
- Given an invalid or unauthorized deep link is followed, When the target cannot be opened, Then the user sees a friendly error with options to request access or view limited metadata; no sensitive data is exposed.
- Given a user returns from a deep-linked page, When the map loads with a valid returnTo parameter, Then filters, zoom, camera position, and selection are restored.
Accessibility and Keyboard Navigation for Global Teams
- Given only a keyboard is used, When navigating the map UI, Then all controls and nodes are reachable in a logical tab order; there is a "Skip to map" and "Skip to filters" link; and all actions have keyboard shortcuts (e.g., +/- for zoom, arrows for pan, Enter to select).
- Given a screen reader is active, When the map is focused, Then it exposes an appropriate ARIA role with an accessible name; nodes announce label, role, degree, and key metadata; and tooltips are read via ARIA relationships.
- Given color-vision deficiencies, When the map renders, Then the color palette is colorblind-safe and meets WCAG 2.1 AA contrast (>= 4.5:1 for text/legends, >= 3:1 for graphical states).
- Given global teams across time zones, When timestamps are shown, Then a timezone toggle is available and hovering any timestamp shows ISO 8601 UTC.
- Given localization settings, When the UI language is switched, Then all visible strings in the map area, filters, export dialog, and errors are localized for supported locales without truncation or overlap.
Derived Link Lineage and Forward Tracking
"As a campaign manager, I want reshares to create traceable child links so that I can see who forwarded to whom and revoke access downstream if needed."
Description

Enable recipients to reshare assets via a TrackCrate "Forward" action that issues child shortlinks with derived watermark IDs tied to the parent, preserving permissions and expirations. Record lineage (parent → child) to build a verifiable propagation tree even when files are forwarded beyond the original recipient. Where reshares occur off-platform, infer likely handoffs using access signals (IP/UA clusters, geotime proximity, referers) and mark them as probabilistic edges. Surface lineage in the Watermark Map, allow revocation cascades from any node, and ensure expiration and access policy changes propagate to descendants.

Acceptance Criteria
Forward Generates Child Shortlink with Derived Watermark ID
Given an authenticated recipient has an active (not expired/revoked) parent shortlink
When the recipient uses the TrackCrate "Forward" action to share the asset
Then a child shortlink is created within 2 seconds with:
- a unique short code
- a derived watermark ID deterministically tied to the parent watermark ID
- a stored parent→child linkage in lineage records (including timestamps, actor, and source parent ID)
And an audit log entry is written capturing requester, parent link ID, child link ID, and resulting permissions/expiration
And the child link is immediately usable and resolves to the same asset set as the parent
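One plausible way to derive a child watermark ID "deterministically tied to the parent" is an HMAC over the parent ID and the child's short code. The HMAC-SHA256 scheme, the server-side key, and the 32-character truncation are assumptions for illustration, not the spec's algorithm.

```python
import hashlib
import hmac

def derive_child_watermark_id(parent_watermark_id: str, child_link_code: str,
                              secret: bytes) -> str:
    """Deterministically derive a child watermark ID from its parent.

    Keying the HMAC with a server-side secret ties the child to both the
    parent ID and the child's short code, so the same forward always yields
    the same ID and lineage can be re-verified later.
    """
    msg = f"{parent_watermark_id}:{child_link_code}".encode("utf-8")
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()[:32]
```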
Propagation of Permissions and Expiration to Descendant Links
Given a parent shortlink with defined permissions (e.g., stream-only vs. download-allowed) and an absolute expiration timestamp T When any child link is created via "Forward" Then the child inherits the parent’s permissions exactly and cannot broaden access And the child’s expiration equals T and cannot exceed T And if the parent is already expired at creation time, child creation is blocked with a clear error And if the parent’s expiration is changed post-creation, all descendants’ expirations update to match within 60 seconds
Lineage Recording and Propagation Tree Integrity
Given a chain of forwards across multiple levels (parent → child → grandchild, etc.) When lineage records are queried for the root link Then the system returns a complete, acyclic propagation tree including all nodes and edges with creation timestamps and actors And every child node references exactly one parent (no orphans, no cycles) And deletion or revocation of any node preserves historical lineage for audit/export And lineage export (JSON/CSV) contains node IDs, parent IDs, watermark IDs, permissions, and expiration for each node
Off-Platform Reshare Inference via Access Signals
Given access logs contain events with IP/UA clusters, geotime proximity (≤30 minutes, ≤50 km), and HTTP referers indicative of a forward When signals meet or exceed the configured confidence threshold (e.g., ≥0.7) Then the system creates a probabilistic edge between the likely source node and the inferred recipient node within 15 minutes of signal ingestion And the edge is labeled as probabilistic with its confidence score and top contributing signals And if signals fall below threshold, no edge is created and a rationale is recorded And probabilistic edges are visually distinct and filterable in the Watermark Map
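A minimal sketch of turning access signals into an edge confidence score. Only the geotime window (≤30 minutes, ≤50 km) and the ≥0.7 threshold come from the criterion above; the individual weights are invented for illustration.

```python
def handoff_confidence(same_ip_cluster: bool, same_ua_cluster: bool,
                       minutes_apart: float, km_apart: float,
                       referer_match: bool) -> float:
    """Combine access signals into a confidence score for an inferred handoff.

    Weights are illustrative placeholders; a real model would be tuned
    against labeled forwards.
    """
    score = 0.0
    if same_ip_cluster:
        score += 0.35
    if same_ua_cluster:
        score += 0.15
    if minutes_apart <= 30 and km_apart <= 50:
        score += 0.30  # geotime proximity window per the criterion above
    if referer_match:
        score += 0.20
    return round(score, 2)

# A probabilistic edge is created only when the score meets the
# configured threshold (e.g., >= 0.7).
```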
Revocation Cascade from Any Node
Given any node in the propagation tree is selected for revocation When the user confirms revocation Then the selected node becomes inaccessible immediately and returns a 403 Revoked on access And all descendant nodes become inaccessible within 60 seconds And the Watermark Map updates the node and descendants’ states to Revoked within 60 seconds And an audit log records the cascade scope (count of descendants) and completion time
Policy Change Propagation to Descendants
Given a parent node’s access policy (e.g., disable downloads, add viewer whitelist, rate-limit) is modified When the policy change is saved Then the updated policy is enforced on the parent immediately and on all descendants within 60 seconds And no descendant can retain permissions broader than the parent after propagation And a policy propagation report lists affected nodes and completion status And access attempts violating the updated policy are blocked and logged with the effective policy source
Watermark Map Visualization and Clip Backtrace
Given a user uploads or pastes a suspect media clip to the Watermark Map backtrace input When the system decodes the embedded watermark ID Then it resolves the watermark to a specific node within 5 seconds and highlights the path from that node back to the root And deterministic edges and probabilistic edges are visually differentiated with legends and confidence indicators And if multiple matches are possible, the UI lists candidates with scores and allows selection, logging the operator’s choice And the map view supports exporting the highlighted path and node metadata for incident response
Leak Response Actions and Alerts
"As a product manager, I want instant alerts and one-click containment actions so that I can stop leaks and coordinate a swift response."
Description

Provide real-time notifications (email/Slack/webhook) when a suspect clip matches a watermark or when anomalous access patterns suggest a leak. From the Watermark Map, allow one-click actions: expire a link (and optionally all descendants), rotate watermark IDs for future deliveries, lock the asset, and generate a takedown bundle containing recipient details, timestamps, and evidence. Track incident timelines, owners, and resolution status, and measure MTTR to inform process improvements. Integrate with existing TrackCrate shortlink controls and expiring downloads for immediate enforcement.

Acceptance Criteria
Real-Time Suspect Clip Match Alerts
Given alert destinations (email, Slack, webhook) are configured and a suspect clip matches a watermark_id with confidence >= 0.95 When the match is confirmed by the fingerprinting service Then a notification is sent to each configured destination within 30 seconds including fields: incident_id, asset_id, watermark_id, source_link_id, confidence, matched_timestamp, map_url And the webhook is a signed HMAC-SHA256 POST with a unique event_id and retries up to 5 times with exponential backoff on non-2xx responses And duplicate alerts for the same incident_id are suppressed for 10 minutes, with state changes summarized in a single update message
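The signed HMAC-SHA256 webhook above can be sketched as follows. Header names, key distribution, and the canonical-JSON serialization of the body are assumptions; the essential point is that the receiver recomputes the digest over the raw body and compares with a constant-time check.

```python
import hashlib
import hmac
import json

def sign_webhook(payload: dict, secret: bytes):
    """Serialize an alert payload and compute its HMAC-SHA256 signature."""
    body = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return body, signature

def verify_webhook(body: bytes, signature: str, secret: bytes) -> bool:
    """Receiver side: recompute and compare in constant time."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```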
Anomalous Access Pattern Alerts
Given anomaly detection thresholds are set to defaults (>=5 unique IPs in 15 minutes for a single link OR geo-velocity > 1000 km within 10 minutes) When access logs for a link meet any anomaly rule Then an anomaly incident is created within 60 seconds and notifications are sent with fields: incident_id, asset_id, source_link_id, anomaly_type, metrics_snapshot, map_url And alert suppression prevents more than one anomaly alert per link per 10 minutes while continuing to append metrics to the incident timeline And the incident is auto-tagged "anomaly" and linked to the Watermark Map node
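The geo-velocity rule (>1000 km within 10 minutes) needs a great-circle distance; a haversine-based sketch, assuming coordinates and Unix timestamps are available from the access logs:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geo_velocity_anomaly(lat1, lon1, t1, lat2, lon2, t2,
                         km_threshold=1000.0, window_minutes=10.0) -> bool:
    """True when two accesses imply > km_threshold of travel within the window."""
    minutes = abs(t2 - t1) / 60.0
    return minutes <= window_minutes and haversine_km(lat1, lon1, lat2, lon2) > km_threshold
```

For example, two accesses from London and New York five minutes apart imply roughly 5,600 km of travel and would trip the rule.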
One-Click Expire Link and Descendants from Watermark Map
Given a user with Manage Links permission selects a link node on the Watermark Map When they click "Expire link" and confirm with the optional "Expire descendants" toggle enabled or disabled Then the selected link is invalidated in the shortlink service within 10 seconds and subsequent requests receive HTTP 410 Gone with an explanatory message And if "Expire descendants" is enabled, all descendant links are recursively invalidated within 20 seconds And the UI updates the affected node(s) to an "Expired" state and writes an audit log entry with actor_id, link_id(s), and timestamp
Watermark ID Rotation for Future Deliveries
Given a user initiates "Rotate Watermark IDs" on an asset When they confirm the rotation Then a new watermark seed/range is generated and marked active within 60 seconds while existing watermark_ids remain resolvable for past deliveries And all new shortlinks and exports created after rotation use the new watermark_ids, verifiable via a test delivery that embeds and reports the new id And the action is recorded with actor_id, asset_id, old_seed, new_seed, and timestamp in the audit log
Asset Lock for Immediate Access Freeze
Given a user initiates "Lock asset" from the Watermark Map or asset page When they confirm the lock Then new downloads, streams, and token issuances for the asset are blocked within 10 seconds and return HTTP 423 Locked with an "Asset locked by owner" message And existing signed URLs cannot be refreshed or reissued after the lock time (T0), and attempts are logged as denied events And the asset and associated links display a "Locked" badge in UI and the action is recorded in the audit log
Takedown Bundle Generation with Evidence
Given an open incident is selected When the user clicks "Generate takedown bundle" Then a ZIP is produced within 30 seconds containing: a PDF summary (incident_id, owner, timeline, actions), CSV of recipients (recipient_id, email, link_id, watermark_id), access logs (timestamps, IPs, user agents), and match evidence (clip hash, confidence) And the bundle includes a prefilled DMCA/notice template and a cryptographic checksum file (SHA-256) for integrity And a secure, expiring share URL (default 7 days) is created and audit-logged
Incident Timeline, Ownership, and MTTR Tracking
Given an alert creates an incident When an owner is assigned and the incident progresses through states Then the timeline records all events (alerts, actions, communications) with actor_id, type, timestamp (UTC), and metadata, and supports manual notes And valid states include Open, Contained, Monitoring, Resolved, and MTTR is computed as Resolved_at minus Created_at and displayed on the incident and in the MTTR report And incidents can be filtered and exported (CSV) by date range, owner, state, and MTTR on the Incidents dashboard
Role-Based Access Control and Audit Trail
"As a label legal counsel, I want tightly controlled access and a complete audit trail so that investigations are compliant and defensible."
Description

Restrict Watermark Map visibility and leak forensics to authorized roles (e.g., Owner, Label Admin, Legal) with project-scoped permissions. Mask or minimize exposure of personal data by default, with just-in-time unmasking for privileged users. Record immutable audit logs of all views, searches, extractions, revocations, and exports, including actor, time, and context. Provide exportable audit reports to support legal processes and compliance, and enforce retention policies aligned with TrackCrate’s data governance and regional privacy requirements.

Acceptance Criteria
Owner/Label Admin/Legal-Only Watermark Map Access
- Given a user is authenticated and has role Owner, Label Admin, or Legal on the project, when they open the Watermark Map, then the map loads with HTTP 200 and renders within 2 seconds for the 95th percentile.
- Given a user is authenticated but lacks an authorized role for the project, when they attempt to open the Watermark Map via UI or API, then access is denied with HTTP 403 and no record count or metadata is revealed.
- Given a user’s role is revoked, when they refresh the page or call the API, then access is denied within 60 seconds of revocation and the session token cannot bypass enforcement.
- Given a request references a watermark ID from another project/tenant, when an unauthorized user sends it, then the system returns HTTP 403 or 404 without confirming the resource’s existence, and an audit event is recorded.
Project-Scoped Permission Isolation
- Given a Label Admin has access to Project A only, when they attempt to view the Watermark Map for Project B, then the request is denied with HTTP 403 and the denial is audited.
- Given a link generated in Project A is forwarded to a user with access only to Project B, when the user follows it, then the Watermark Map remains inaccessible unless the user is explicitly granted access to Project A.
- Given the user is granted project access, when they retry, then the access succeeds within 60 seconds of the permission change and is audited.
- Given API requests include a projectId, when there is a mismatch between token scope and projectId, then the request is rejected with HTTP 403 and contains no data body.
Default PII Masking with JIT Unmask
- Given any authorized user opens the Watermark Map, when recipient fields are displayed, then emails, phone numbers, IPs, and names are masked by default (e.g., j***@d***.com, +1-***-***-1234, 203.0.113.0/24).
- Given a privileged user (Owner or Legal) requests unmasking, when they provide a reason (minimum 10 characters) and complete 2FA, then PII fields are unmasked for that user’s session only and auto-revert to masked after 15 minutes of inactivity.
- Given a non-privileged user attempts to unmask, when they click Unmask, then the action is blocked (UI-disabled or HTTP 403) and no unmasked data is revealed.
- Given any unmask action occurs, when the audit log is reviewed, then it contains the actor, reason, scope (fields unmasked), timestamp, and project context.
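The masked formats in the criteria above could be produced by small helpers along these lines (a minimal sketch; the function names and exact masking rules are illustrative, not the product's implementation):

```python
import re

def mask_email(email: str) -> str:
    """Mask an email as j***@d***.com, keeping the first character
    of the local part and of the domain name."""
    local, _, domain = email.partition("@")
    dom_name, _, tld = domain.rpartition(".")
    return f"{local[:1]}***@{dom_name[:1]}***.{tld}"

def mask_phone(phone: str) -> str:
    """Keep only the country code and last four digits: +1-***-***-1234."""
    digits = re.sub(r"\D", "", phone)  # strip everything but digits
    return f"+{digits[0]}-***-***-{digits[-4:]}"
```

Centralizing masking in shared helpers like these makes "masked by default" enforceable at one choke point, with JIT unmasking implemented as a privileged bypass around them.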
Immutable Audit Logging of Sensitive Actions
- Given any user performs a view, search, extraction, revocation, or export related to the Watermark Map, when the action completes, then an audit event is recorded with actor ID, role, project ID, action type, target IDs, outcome (success/failure), ISO 8601 UTC timestamp, client IP, user agent, and correlation ID.
- Given audit events are persisted, when integrity is verified, then the log is append-only and tamper-evident (hash-chain verification passes); attempts to update or delete events are rejected and produce no change.
- Given audit logging is unavailable, when a sensitive action is initiated, then the operation is aborted with a 503 error and no partial state change occurs.
- Given an auditor queries by time range and filters, when retrieving up to 10,000 events, then results return within 5 seconds at the 95th percentile.
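The hash-chain verification mentioned above can be sketched as follows — each entry's hash covers the previous entry's hash, so editing any past event breaks every subsequent link. This is an in-memory illustration only; a production store would persist entries durably:

```python
import hashlib
import json

class AuditLog:
    """Append-only, tamper-evident log: each entry chains to the previous hash."""

    GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = json.dumps(event, sort_keys=True)  # canonical serialization
        entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev_hash, "hash": entry_hash})

    def verify(self) -> bool:
        """Recompute the chain; any mutation of a past event fails verification."""
        prev = self.GENESIS
        for e in self.entries:
            body = json.dumps(e["event"], sort_keys=True)
            expected = hashlib.sha256((prev + body).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```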
Exportable Audit Reports for Legal/Compliance
- Given a privileged user (Legal or Owner) requests an audit report for a project and time range, when the export is generated, then CSV and JSON files are produced with the defined audit schema and a SHA-256 checksum for each file.
- Given privacy-by-default, when an export is generated, then PII fields remain masked unless the requester provides a reason and completes 2FA to include unmasked PII; the export is watermarked with requester ID, timestamp, and stated purpose.
- Given an export contains up to 100,000 events, when generated, then it completes within 30 seconds at the 95th percentile and is available via a signed URL for 7 days before automatic expiry.
- Given any export is initiated, when it completes or fails, then a corresponding audit event is recorded including whether PII was unmasked.
Retention Policy Enforcement and Legal Hold
- Given regional retention settings are configured, when audit events exceed their retention period, then they are purged or irreversibly anonymized by a daily job and become non-recoverable via UI or API.
- Given a legal hold is applied to a project/case, when retention processing runs, then events under hold are preserved until the hold is lifted, after which purging resumes per policy.
- Given a purge cycle completes, when a purge report is requested, then the system provides a signed report with counts by action type, time window, and result status; post-purge queries for older data return no records.
- Given data residency constraints, when an export destination violates regional restrictions, then the request is blocked with HTTP 403 and the reason is communicated to the requester.

Tripwire Tamper

Detect scraping and automation patterns (e.g., abnormal chunking or headless requests) and automatically downgrade the stream or switch to a decoy preview. Instant tamper alerts include session details to help you act fast, while legitimate reviewers remain uninterrupted.

Requirements

Headless & Automation Signature Detection
"As a label admin, I want the system to detect headless or automated access patterns so that we can protect pre-release assets without blocking real reviewers."
Description

Implements server- and edge-side detection of headless browsers and scripted automation by fingerprinting request behavior and environment signals (e.g., navigator.webdriver hints, GPU/AudioContext anomalies, missing media capabilities, atypical TLS/cipher suites, cookie and storage behavior, and known headless user-agent patterns). Produces a per-session risk score in near real time without adding perceptible playback latency. Integrates with TrackCrate shortlinks, AutoKit press pages, and the private stem player so that detection occurs consistently across delivery surfaces. All processing minimizes PII, avoids persistent device fingerprinting, and adheres to privacy guidelines while enabling high-confidence bot/scraper identification.
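The per-session risk score might be computed by aggregating weighted signals, capped at 100. A minimal sketch — the signal names and weights below are illustrative placeholders that mirror the minimum point values in the acceptance criteria, not a confirmed scoring table:

```python
# Hypothetical signal weights; real deployments would tune these.
SIGNAL_WEIGHTS = {
    "headless_ua": 30,      # user-agent matches a known headless pattern
    "tls_anomaly": 25,      # TLS/HTTP signature of an automation library
    "webdriver_flag": 20,   # navigator.webdriver true / empty plugins
    "webaudio_anomaly": 15, # AudioContext missing despite capable UA
    "storage_anomaly": 15,  # cookie/localStorage inaccessible
    "header_anomaly": 10,   # missing or inconsistent request headers
}

def risk_score(signals: set) -> int:
    """Aggregate the session's observed signals into a 0-100 score."""
    return min(100, sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals))
```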

Acceptance Criteria
Edge Headless Signals Across Surfaces
- Given any request to a TrackCrate shortlink, AutoKit press page, or private stem player, When navigator.webdriver is true or plugins/mimeTypes length is 0 while UA claims a modern browser, Then add ≥20 risk points and persist the signal to the session within 100ms P95.
- Given WebAudio is supported by UA but AudioContext cannot be constructed or reports zero output devices, When media capability checks run, Then add ≥15 risk points and record signal=webaudio_anomaly.
- Given the user-agent matches a maintained headless pattern list, When the request is evaluated at the edge, Then record signal=headless_ua and add ≥30 risk points.
TLS/HTTP Signature Anomaly Detection at Server/Edge
- Given a new session, When TLS fingerprint or HTTP/2 pseudo-header ordering matches a known automation library signature, Then record signal=tls_anomaly and add ≥25 risk points without requiring client JS.
- Given missing Accept-Language or inconsistent Accept/Connection headers typical of scripted clients, When the first asset request is received, Then record signal=header_anomaly and add ≥10 risk points.
Per-Session Risk Scoring Latency Targets
- Given the first media or page request in a session, When all collected signals are aggregated, Then compute a 0–100 risk_score and return it via X-TC-Risk-Score response header within 150ms P95 and 300ms P99.
- Given risk evaluation on playback start, When detection runs, Then added time-to-first-byte attributable to detection is <10ms P95 and <20ms P99.
Cross-Surface Risk Consistency (Shortlinks, AutoKit, Stem Player)
- Given a session created on a TrackCrate shortlink, When the user opens the AutoKit press page or stem player within 30 minutes using the same session token, Then the risk_score and signals are shared and consistent across surfaces with ≤1 point variance.
- Given any surface updates the session with a new signal, When the session is accessed from another surface, Then the updated risk_score is reflected within 1 second.
Privacy Guardrails and Non-Persistent Fingerprinting
- Given detection processing, When storing signal data, Then mask IP (drop last octet), hash user-agent, store country code only, and retain session data ≤24 hours; do not write stable device identifiers to client storage.
- Given a user returns after session expiry, When a new session is created, Then no prior identifiers are used to link sessions and prior signals are not associated.
High-Risk Session Alert Payload and SLA
- Given risk_score ≥ configured_threshold (default 70), When this threshold is crossed, Then emit a webhook and dashboard alert within 2 seconds containing session_id, surfaces, top 5 signals with weights, risk_score, and decision context, excluding masked PII per privacy rules.
- Given an allowlist of reviewer tokens/origins, When a request matches the allowlist, Then compute risk but suppress alerts for that session.
Cookie and Storage Behavior Anomaly Signals
- Given the app sets a first-party cookie and a localStorage key on initial load, When subsequent requests lack the cookie or storage is inaccessible while other resources load normally, Then record signal=storage_anomaly and add ≥15 risk points.
- Given StorageManager.estimate reports quota=0 or throws in a context where it should be available, When capability checks run, Then add ≥10 risk points and record signal=storage_capability_anomaly.
Abnormal Chunking Pattern Heuristics
"As a security-conscious admin, I want the player’s network behavior monitored for scraping-like chunking so that automated rippers are flagged before full-length assets are exfiltrated."
Description

Analyzes media delivery requests (HLS/DASH segments and HTTP range reads) to detect scraping signatures such as sequential full-file range scans, superhuman chunk cadence, excessive parallel segment fan-out, retry storms, and cross-asset correlation from the same origin. Maintains sliding-window counters per session, asset, and shortlink, with configurable thresholds by content type (stems vs. press previews). Runs as a lightweight middleware at the CDN/edge and emits feature vectors to the risk-scoring engine. Adds <10ms overhead per request on P95 and degrades gracefully under load.
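The sequential full-file range-scan heuristic described here could be sketched with a per-session sliding window like the one below. The class, thresholds, and data layout are illustrative assumptions, not the edge middleware's actual design:

```python
import time
from collections import deque

class RangeScanDetector:
    """Flags sequential HTTP Range reads that cover most of an asset
    within one sliding window (sketch of a range_scan heuristic)."""

    def __init__(self, asset_size, window_s=120.0, coverage=0.95):
        self.asset_size = asset_size
        self.window_s = window_s
        self.coverage = coverage
        self.reads = deque()  # (timestamp, start_byte, end_byte)

    def observe(self, start, end, now=None):
        """Record one Range read; return True if the scan pattern fires."""
        now = time.monotonic() if now is None else now
        self.reads.append((now, start, end))
        # Evict reads that fell out of the sliding window.
        while self.reads and now - self.reads[0][0] > self.window_s:
            self.reads.popleft()
        ordered = sorted(self.reads, key=lambda r: r[1])
        # Sequential: each read begins at previous_end + 1.
        sequential = all(b[1] == a[2] + 1 for a, b in zip(ordered, ordered[1:]))
        covered = sum(e - s + 1 for _, s, e in self.reads)
        return sequential and covered >= self.coverage * self.asset_size
```

In practice a detector like this would only emit a feature vector to the risk engine rather than block the request, matching the "degrade gracefully" requirement.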

Acceptance Criteria
Sequential Full-File Range Scan Detection
Given an HTTP media asset served via Range requests with sliding-window tracking per session_id, asset_id, and shortlink_id When the same client issues sequential, increasing Range reads that cover >=95% of the asset within a 120s window and each next range begins at previous_end+1 with chunk sizes within configured bounds Then heuristics.range_scan is set to true for that window, the detection is deduplicated once per window, and a feature vector labeled pattern=range_scan with window metrics is emitted to the risk engine within 50ms P95
Superhuman Chunk/Segment Cadence Detection
Given media delivery via HLS/DASH segments or HTTP Range reads with sliding-window tracking enabled When the median inter-request interval for chunks in the current window drops below the configured cadence threshold for at least N consecutive chunks or sustained RPS exceeds the configured limit Then heuristics.superhuman_cadence is set to true, the detection is deduplicated per window, and a feature vector with cadence metrics is emitted to the risk engine within 50ms P95
Excessive Parallel Segment Fan-Out Detection
Given a session fetching media chunks When concurrent in-flight chunk requests for a single asset exceed the configured fanout_threshold for >=1s or the session exceeds per-asset concurrency across >=K assets within the window Then heuristics.parallel_fanout is set to true, concurrency stats are recorded, and a feature vector is emitted to the risk engine within 50ms P95
Retry Storm Detection and Backoff Violations
Given a session making media requests with retry handling observed at the edge When retries per asset exceed the configured retry_rate_threshold within the sliding window, including identical or overlapping ranges, or exponential backoff is violated for >=M consecutive attempts Then heuristics.retry_storm is set to true, retry/backoff metrics are captured, and a feature vector is emitted to the risk engine within 50ms P95
Cross-Asset Origin Correlation Detection
Given requests sharing the same origin fingerprint (ip_hash + ua_hash + tls_fingerprint) within a sliding window When the origin fetches >=3 distinct assets across the same shortlink or account within T seconds and their chunk patterns (sequence, cadence, or fan-out) exceed the configured correlation threshold Then heuristics.cross_asset_correlation is set to true and a correlation feature vector linking involved asset_ids and session_ids is emitted to the risk engine within 50ms P95
Sliding-Window Counters and Threshold Configurability
Given content types stems and press_previews each with configured heuristic thresholds When requests are processed at the edge Then counters are maintained independently per (session_id, asset_id, shortlink_id) with window length W and eviction after inactivity E, thresholds are applied by content_type, and configuration changes take effect within 60s without service restart
Edge Overhead and Graceful Degradation
Given normal load of >=1000 requests/sec per POP with heuristics enabled When added processing time is measured at the edge Then the middleware adds <=10ms latency at P95 per request and does not block media responses And when system load exceeds configured limits or the risk engine is unavailable Then the middleware degrades gracefully by bypassing non-critical heuristics, prioritizing content delivery, dropping telemetry before media, marking degraded=true on metrics, and maintaining 5xx error rate increase <=0.1%
Adaptive Mitigation: Downgrade or Decoy Switch
"As a promo manager, I want risky sessions automatically served a decoy preview so that leaks are deterred without manual intervention."
Description

When a session’s risk score exceeds a policy threshold, automatically apply mitigation without breaking playback: (a) downgrade stream quality/bitrate and increase watermark intensity, or (b) transparently switch the media source to a decoy preview asset. Policies are configurable per release, asset, or shortlink and support test/simulation mode before enforcement. Mitigation actions are reversible and stateful, ensuring the session can return to normal when risk drops. All actions are logged with correlation IDs and surfaced in asset and session views.
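The reversible, stateful behavior with a dwell window on apply and hysteresis on reversal can be modeled as a small state machine. A minimal sketch — thresholds and field names are illustrative, and the separate cooldown from the criteria below is omitted for brevity:

```python
class MitigationState:
    """Per-session mitigation: apply after a dwell above the threshold,
    reverse only after the score stays below a hysteresis floor."""

    def __init__(self, apply_at=70, revert_below=60, dwell_s=3.0, hysteresis_s=5.0):
        self.apply_at = apply_at
        self.revert_below = revert_below
        self.dwell_s = dwell_s
        self.hysteresis_s = hysteresis_s
        self.state = "normal"
        self._since = None  # when the current trigger condition started

    def update(self, risk, now):
        if self.state == "normal" and risk >= self.apply_at:
            if self._since is None:
                self._since = now
            if now - self._since >= self.dwell_s:
                self.state, self._since = "mitigated", None
        elif self.state == "mitigated" and risk < self.revert_below:
            if self._since is None:
                self._since = now
            if now - self._since >= self.hysteresis_s:
                self.state, self._since = "normal", None
        else:
            self._since = None  # condition broken; restart the timer
        return self.state
```

Using a lower reversal threshold than the apply threshold (60 vs. 70 here) prevents a session hovering near the boundary from flapping between mitigated and normal.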

Acceptance Criteria
Seamless Auto-Downgrade on Risk Threshold
Given a player session streaming an asset with a policy configured to downgrade at riskScore >= 70 (dwellWindow = 3s, targetBitrate = 96 kbps, watermarkIncrease = +2 levels) And the session’s riskScore remains >= 70 continuously for at least 3 seconds When the risk condition is met Then within 2 seconds the stream bitrate is reduced to 96 kbps (±5%) or lower as configured without a fatal playback error and with no stall exceeding 500 ms And subsequent media segments include the increased watermark intensity within 2 seconds And the session mitigationState is recorded as "downgraded" for the duration of the condition
Seamless Decoy Switch on Elevated Risk
Given a player session with a policy configured to switch to a decoy preview asset when riskScore >= 80 (dwellWindow = 3s) And a valid, pre-associated decoy asset is available And the session’s riskScore remains >= 80 continuously for at least 3 seconds When the risk condition is met Then within 2 seconds the media source switches to the decoy asset and subsequent segment requests reference the decoy asset identifier And playback continues without a fatal error and with no stall exceeding 750 ms And no further segments of the original full asset are requested after the switch
Policy Scope and Precedence Configuration
Given three mitigation policies exist for the same content: release-level P1 (threshold 70, downgrade), asset-level P2 (threshold 75, decoy), and shortlink-level P3 (threshold 90, downgrade) When a session is initiated via the shortlink to that asset Then the most specific applicable policy is selected in the order shortlink > asset > release (P3 applies) And disabling the shortlink policy causes the asset-level policy (P2) to apply for new sessions within 60 seconds And disabling the asset-level policy causes the release-level policy (P1) to apply for new sessions within 60 seconds And policy changes do not affect already-mitigated sessions until a new decision point is reached
Simulation Mode (Test-Only) Does Not Enforce
Given a mitigation policy is set to Simulation mode with threshold 70 and action "decoy" And a player session’s riskScore reaches 72 for 5 seconds When the decision engine evaluates the session Then no enforcement occurs (no bitrate change, no source switch, no watermark change) And a simulated decision is logged including policyId, actionType, riskScore, and would-be parameters with a correlationId And the session and asset views display a "Simulation" label with the would-be action within 5 seconds And the API for mitigation metrics reports the simulated action in the simulation counts
Stateful Reversal When Risk Drops
Given a session is under active mitigation (state = downgraded or decoy) with hysteresisThreshold = 60, hysteresisDuration = 5s, and cooldown = 10s And the session’s riskScore drops below 60 and remains below for at least 5 seconds And the last mitigation transition occurred more than 10 seconds ago When the reversal condition is met Then within 2 seconds the original stream quality/source and watermark settings are restored And playback continues without a fatal error and with no stall exceeding 500 ms And the mitigationState is updated to "normal" and no additional transitions occur within the 10-second cooldown
Mitigation Action Logging and Correlation IDs in Views
Given logging is enabled for mitigation decisions When a mitigation action is applied or reversed for a session Then a log entry is created with a correlationId that ties the decision and all related sub-events together And the entry includes: timestamp (ISO8601 UTC), sessionId, assetId, releaseId (if applicable), shortlinkId (if applicable), policyId, policyScope, actionType (downgrade|decoy), decision (apply|reverse), riskScoreAtDecision, before/after parameters (bitrate, watermarkLevel, source), clientIp, userAgent And the log entry is queryable via the Logs API within 2 seconds of the event And the entry appears in both the asset view and the session view timelines within 5 seconds And fetching by correlationId returns a consistent set of related records
Instant Tamper Alerts with Session Context
"As a label ops lead, I want instant tamper alerts with actionable detail so that I can quickly decide to revoke access or adjust policy."
Description

Delivers real-time alerts via email, Slack, and webhooks when tamper rules trigger, including actionable context: link ID, asset ID, reviewer identity (if authenticated), IP/ASN, user agent, referrer, risk score timeline, and the exact heuristics that fired. Includes deep links to the session detail view and one-click actions (revoke link, block IP range, raise threshold). Supports alert suppression windows, rate limits, and routing rules per team to prevent noise.

Acceptance Criteria
Slack Alert on Tamper Rule Trigger
Given a protected TrackCrate link is accessed and Tripwire Tamper detects abnormal chunking or a headless user agent When a tamper rule triggers for session_id S Then a Slack message is delivered to the configured team channel within 5 seconds And the payload displays link_id, asset_id, session_id S, reviewer_identity (authenticated ID or "anonymous"), ip, asn, user_agent, referrer, risk_timeline (timestamp,score pairs), and heuristics_fired And the message includes deep links to the session detail view for S And the message renders one-click actions: Revoke Link, Block IP Range, Raise Threshold And clicking any action from Slack completes in ≤2 seconds and writes an audit log entry with actor, action, target_id, and timestamp
Email Alert with Suppression Window
Given a suppression window of 10 minutes is configured for alerts by session When multiple tamper rules fire for the same session within the window Then only the first email alert is sent and subsequent ones are suppressed And the email is delivered within 10 seconds of the first trigger And the email subject includes [Tripwire Tamper], link_id, primary_heuristic, and current risk_score And the email body contains link_id, asset_id, session_id, reviewer_identity, ip, asn, user_agent, referrer, risk_timeline, heuristics_fired, deep link to session detail, and signed one-click action URLs for Revoke Link, Block IP Range, Raise Threshold And the suppression event is recorded in the audit log with reason "suppression_window"
Webhook Delivery with Retry and Signature
Given a webhook destination is configured with secret SECRET and a 5s timeout When a tamper rule triggers for session_id S Then a POST request is sent within 3 seconds with JSON payload including link_id, asset_id, session_id S, reviewer_identity, ip, asn, user_agent, referrer, risk_timeline, heuristics_fired, event_id, occurred_at And headers include X-TrackCrate-Signature: HMAC-SHA256(body, SECRET) and X-TrackCrate-Event-ID: event_id And if the endpoint times out or returns a non-2xx, retries use exponential backoff for up to 15 minutes with a maximum of 6 attempts And duplicate deliveries are prevented at the receiver via idempotency using X-TrackCrate-Event-ID And a failed final delivery is visible in TrackCrate with status "Failed" and last_response details
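The signature scheme above amounts to HMAC-SHA256 over the raw request body. A minimal sketch of both sides (function names are illustrative; the header names come from the criteria above):

```python
import hashlib
import hmac

def sign_webhook(body: bytes, secret: bytes) -> str:
    """Compute the X-TrackCrate-Signature value for an outgoing payload."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_webhook(body: bytes, secret: bytes, signature: str) -> bool:
    """Receiver-side check; compare in constant time to avoid timing leaks."""
    return hmac.compare_digest(sign_webhook(body, secret), signature)
```

Receivers should verify the signature over the raw bytes before parsing the JSON, and use X-TrackCrate-Event-ID as an idempotency key so retried deliveries are processed once.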
Team-Based Routing Rules and Escalation
Given team routing rules exist mapping conditions to destinations (e.g., asset.tag=PR, risk_score>=70, reviewer_role=External) When an alert matches a rule Then alerts are sent only to the destinations specified by the rule (selected Slack channels, webhook endpoints, email groups) And no alert is sent to destinations not matched by any rule And rule evaluation results (matched rule id and conditions) are attached to the alert context and logged And an escalation rule dispatches to the on-call destination when risk_score >= 80 or a critical heuristic (e.g., credential_stuffing) fires
Per-Team and Per-Destination Rate Limits with Digest
Given a per-team limit of 60 alerts/minute and per-destination limit of 30 alerts/minute are configured When incoming alert volume would exceed either limit Then the system throttles deliveries to respect both limits And suppressed alerts are counted and included in a digest message sent at most once every 5 minutes per destination And the digest contains counts by heuristic, top 5 IPs/ASNs, and links affected And the audit log records each suppression with reason "rate_limit" and destination_id
One-Click Actions Permissioned and Reversible
Given the recipient has the "Security Admin" role in TrackCrate When they click "Block IP Range" from an alert Then the system validates role and scopes the block to the IP /24 (IPv4) or /48 (IPv6) containing the session IP And the block takes effect within 10 seconds and is enforced on subsequent requests And the alert UI and session detail reflect the active block with a reference ID And an "Undo" link valid for 15 minutes reverses the block and restores prior state And all actions (do/undo) produce audit log entries with actor, action, scope, and outcome
Deep Link to Session Detail View
Given an alert contains a deep link with session_id S When a user with access opens the link Then the session detail view loads within 2 seconds and is pre-filtered to S And the view shows risk timeline, fired heuristics, and all context fields matching those in the alert payload And if the user lacks access, the link responds with 403 and no sensitive data is rendered
Session Forensics & Evidence Retention
"As a rights manager, I want an auditable record of suspected tampering so that we can investigate incidents and share evidence with partners if needed."
Description

Captures a chronological trail of suspected tamper sessions with sampled request headers, range maps, mitigation decisions, and player-side events (where available) for audit and investigation. Stores records with configurable retention and export (CSV/JSON) while enforcing access control, encryption at rest, and PII minimization. Provides a dashboard to search by asset, shortlink, IP/ASN, or reviewer, and to compare normal vs. flagged sessions. Supports legal hold for ongoing investigations.

Acceptance Criteria
Automatic Evidence Capture on Tamper Flag
Given a playback session is flagged by Tripwire Tamper When the flag event is raised Then an evidence record is created or updated within 2 seconds with fields: session_id, asset_id, shortlink_id, reviewer_id (if present), utc_timestamps And sampled_request_headers are limited to the whitelist: Host, User-Agent, Accept, Accept-Language, Range, Referer, Origin, Accept-Encoding, Sec-CH-UA, Via, X-Forwarded-For And byte_range_map is aggregated in 1-second bins with count per unique requested range And mitigation_decision is recorded with code (ALLOW, DOWNGRADE, DECOY_PREVIEW) and rationale plus decision_timestamp And player_events timeline (play, pause, seek, stall) is captured if client signals exist; otherwise player_events="unavailable" And all timestamps are UTC with millisecond precision And evidence updates are append-only (prior entries are immutable)
PII Minimization and Data Redaction
Given evidence is persisted for a flagged session When data is stored or exported Then only whitelisted headers are retained; cookies, authorization headers, and request/response bodies are never stored And IP addresses are stored as salted SHA-256 hashes and as anonymized networks (/24 for IPv4, /48 for IPv6); full IPs never appear in exports And ASN and coarse geolocation (country, region, city if available) may be stored; emails, names, device IDs are excluded unless reviewer_id is explicitly provided And user-agent strings may be stored verbatim; all non-whitelisted headers are discarded
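The IP handling described above — a salted hash for exact matching plus an anonymized network for aggregation — might look like this in outline (helper names are illustrative):

```python
import hashlib
import ipaddress

def hash_ip(ip: str, salt: bytes) -> str:
    """Salted SHA-256 of the full address: supports lookup without storing it."""
    return hashlib.sha256(salt + ip.encode()).hexdigest()

def anonymized_network(ip: str) -> str:
    """Truncate to /24 for IPv4 or /48 for IPv6, per the redaction rules."""
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    return str(ipaddress.ip_network(f"{ip}/{prefix}", strict=False))
```

The same salted-hash approach is what lets the dashboard accept a plain IP as a search input and match it against stored `hashed_ip` values without ever persisting the full address.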
Role-Based Access Control, Encryption, and Audit
Given a user attempts to view or export session evidence When the user has role Admin or Security Analyst Then access is granted; otherwise a 403 Forbidden is returned And Security Analyst views show redacted network identifiers (hashed_ip, anonymized_network), while Admins may view any additional sensitive fields if present And every view/export action writes an immutable audit log entry with user_id, action, scope, timestamp, and result And all evidence data at rest is encrypted with KMS-managed AES-256 keys; encryption state is reported in system health and visible to Admins
Configurable Retention and Legal Hold
Given a global retention policy of N days is configured by an Admin When the nightly retention job runs Then evidence older than N days with legal_hold=false is permanently deleted and the deletions are logged with counts And records with legal_hold=true are preserved indefinitely until an Admin removes the hold And any change to the retention value or legal_hold status is restricted to Admins and creates an audit log entry with before/after values and actor
Evidence Export in CSV and JSON
Given an Admin or Security Analyst requests an export for a time range or selected session_ids When the export size is up to 10,000 sessions or 30 days of data (whichever is smaller) Then a downloadable CSV and a JSON file are generated within 30 seconds And the schema includes: session_id, asset_id, shortlink_id, reviewer_id (if present), utc_timestamps, mitigation_decisions, byte_range_map, sampled_request_headers, player_events, hashed_ip, anonymized_network, asn, geodata, legal_hold, evidence_checksum And file checksums and record counts match the number of exported records And export URLs are signed and expire within 10 minutes; access is logged
Dashboard Search and Filter
Given a user with permission opens the Session Forensics dashboard When they search by asset_id, shortlink_id, IP, ASN, reviewer_id, or date range and apply filters (flagged_only, decision_code) Then results return within 2 seconds for up to 50,000 sessions with pagination (default 50 per page) And IP search accepts a plain IPv4/IPv6 input and internally hashes it to match stored hashed_ip values And selecting a result opens a detail view showing the full event timeline, range map, headers (redacted per role), and mitigation decisions
Normal vs Flagged Session Comparison
Given a flagged session is selected in the dashboard When the user chooses Compare to Normal and selects a baseline (e.g., last 5 unflagged sessions for the same asset) Then a side-by-side view renders charts for request_rate, unique_ranges_per_minute, header_anomaly_score, and player_interactions And differences are quantified as absolute deltas and z-scores and can be exported as CSV/JSON And the comparison view loads within 3 seconds and honors all access controls and redactions
Reviewer Allowlist & False-Positive Recovery
"As a promo coordinator, I want trusted reviewers to be exempt or quickly restored after a false alarm so that their experience remains smooth and relationships are protected."
Description

Enables allowlists for trusted reviewers, IP ranges, and referrer domains with adjustable policy thresholds and exemptions. Provides a rapid recovery mechanism for false positives: admins can issue a signed override link or lift mitigation for a session in real time, restoring full-quality playback without requiring the reviewer to retry. Includes a simulation mode to test new rules against historical traffic and a changelog of policy edits. Fully integrated with AutoKit press pages and the private stem player.
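A signed override link can be implemented as an expiring HMAC token bound to one session. A minimal sketch under those assumptions — single-use enforcement and device binding would additionally require server-side state not shown here:

```python
import hashlib
import hmac
import time

def make_override_token(session_id: str, ttl_s: int, secret: bytes, now=None) -> str:
    """Mint a signed, expiring token scoped to a single session."""
    expires = int((time.time() if now is None else now) + ttl_s)
    msg = f"{session_id}:{expires}".encode()
    sig = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return f"{session_id}:{expires}:{sig}"

def check_override_token(token: str, secret: bytes, now=None) -> bool:
    """Valid only if the signature matches and the TTL has not elapsed."""
    session_id, expires, sig = token.rsplit(":", 2)
    msg = f"{session_id}:{expires}".encode()
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    now = time.time() if now is None else now
    return hmac.compare_digest(expected, sig) and now < int(expires)
```

Because the expiry is inside the signed message, a reviewer cannot extend the TTL by editing the link, and the server needs no lookup to reject expired or forged tokens.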

Acceptance Criteria
Allowlisted Reviewer Uninterrupted Playback
Given a reviewer account ID, IP CIDR, or referrer domain is in the allowlist When the reviewer loads an AutoKit press page or the private stem player and begins playback Then Tripwire mitigation is bypassed for that session (no downgrade, no decoy) And initial playback starts within 2 seconds at p95 And the session is tagged "exempt-allowlist" with the matched key in telemetry/logs And no tamper alert is generated for the exempted requests
Adjustable Policy Thresholds and Exemptions Apply Within 60 Seconds
Given an admin updates bot/tamper thresholds and assigns exemptions by reviewer ID, IP CIDR, or referrer domain When the admin saves the policy Then the new thresholds propagate to all enforcement nodes within 60 seconds And subsequent requests reflect the updated thresholds and exemptions And the policy shows a new version ID and timestamp with the saving admin’s identity And validation prevents saving thresholds with invalid ranges or overlapping CIDRs
Signed Override Link Restores Playback Without Retry
Given a reviewer session is mitigated and an admin generates a signed override link with a TTL When the reviewer opens the link within its TTL Then the session is upgraded to full-quality within 10 seconds without page refresh or retry by the reviewer And the link is single-use and expires immediately after use or at TTL, whichever comes first And the override applies only to the targeted session and originating device/browser And an audit entry records admin ID, session ID, TTL, and outcome
Real-Time Mitigation Lift For Active Session
Given an active session is currently downgraded or served a decoy preview When an admin clicks "Lift Mitigation" for that session in the dashboard Then mitigation is disabled for that session within 3 seconds And playback ramps to the highest allowed rendition within 10 seconds without user action And the lift persists for 30 minutes unless revoked earlier And all actions are logged and visible in the session timeline
Simulation Mode Validates New Rules Against Historical Traffic
Given an admin selects a proposed rule-set and a time window up to the last 30 days When the admin runs a simulation Then results return within 15 minutes for up to 5M historical requests And results include estimated mitigation rate, estimated false-positive rate using allowlist ground truth, top triggering rules, and impacted reviewer segments And no live traffic is affected and no mitigations are enacted during simulation And a downloadable CSV/JSON report and shareable link to the run are available
Policy Changelog and Audit Trail
Given any policy change or action (allowlist edits, threshold updates, exemptions, overrides, mitigation lifts) When the change is saved or the action is executed Then an immutable changelog entry is recorded with ISO8601 UTC timestamp, actor, action type, before/after diff, and optional reason And entries are filterable by date range, actor, and entity type, and exportable to CSV and JSON And reverting to a prior policy creates a new entry without altering the original record
End-to-End Integration on AutoKit and Private Stem Player
Given allowlists and recovery mechanisms are configured When a reviewer accesses an AutoKit press page or the private stem player via a trackable shortlink Then exemptions and overrides apply uniformly across both experiences And playback recovery produces no visible interruption (no restart) to the user And watermarked download rules and link tracking remain unaffected And cross-browser tests on latest Chrome, Safari, and Firefox desktop/mobile pass at p95 success rate ≥ 99%

Access Pledge

Gate links with lightweight clickwrap terms (embargo, no reuploads, intended use) and capture recipient name, role, and consent. Exportable receipts create a clean audit trail that reassures rights holders and reduces back-and-forth on compliance.

Requirements

Clickwrap Access Gate
"As a promo recipient, I want to quickly review and accept clear terms before accessing assets so that I can proceed confidently and stay compliant."
Description

Provide a pre-access interstitial for shortlinks and AutoKit press pages that displays lightweight terms (embargo date/time with timezone, no reuploads, intended use selection) and requires explicit consent via a checkbox and an Accept button before granting access, blocking downloads, the private stem player, and press assets until terms are accepted. Persist consent status per recipient and link to avoid re-prompting unless terms change or a session expires. Ensure mobile-responsive performance, deep-link back to the originally requested asset after acceptance, support anonymous recipients via tokenized links or email entry, and record accept/decline outcomes.

Acceptance Criteria
Gate Enforcement on Shortlinks and AutoKit
Given a recipient opens a protected shortlink or AutoKit press page When the page loads Then an interstitial clickwrap gate is shown before any downloads, press assets, or the private stem player are accessible And the gate displays embargo date and time with timezone, a no‑reuploads clause, and an intended‑use selector And the Accept button is disabled until the consent checkbox is checked and an intended‑use value is selected And clicking Decline prevents access, shows a denial message, and no assets or audio are loaded
Consent Form Inputs and Validation
Given the clickwrap form is shown When the recipient attempts to accept without completing required fields Then inline validation indicates missing or invalid entries and Accept remains disabled And required fields include: email, full name, role, intended use, and the consent checkbox And email input enforces valid format and role is selected from a provided list (with Other allowing free text) And all validation messages are accessible and clear
Deep-Link Continuation After Acceptance
Given a recipient requested a specific asset or action behind the gate (e.g., a file download, a stem player track, or a press image) When the recipient accepts the terms Then the system returns the recipient to the originally requested URL/asset And if the request was a file download, the download initiates automatically within 1 second And if the request was to play audio, playback begins without requiring another tap (subject to browser autoplay policies)
Consent Persistence and Re-Prompt Rules
- Given a recipient has accepted the terms for a specific link, When the same recipient returns to the same link within an active session, Then the gate is bypassed and access is granted directly.
- When the link owner updates the terms (text, embargo values, or intended‑use options) to a new version, Then previously consenting recipients are re‑prompted and must accept the new version before access resumes.
- When the recipient’s session expires or consent storage is cleared, Then the recipient is re‑prompted and a fresh acceptance is required.
Identity Handling for Anonymous and Tokenized Links
- Given a tokenized link contains recipient identity (e.g., email and name), When the gate renders, Then email and name are prefilled, email is read‑only, and role/intended use must still be selected.
- Given an anonymous link without embedded identity, When the gate renders, Then email, full name, role, intended use, and the consent checkbox are required before Accept is enabled, And all captured identity fields are associated with the acceptance record for that specific link.
Outcome Recording and Receipt Export
Given a recipient accepts or declines the gate When the outcome is submitted Then the system records an audit entry containing: outcome (accept/decline), timestamp with timezone, link ID, terms version, embargo values, intended use, recipient email/name/role (if provided), IP, and user agent And link owners can view acceptance logs per link and export them as CSV filtered by date range And each acceptance has a unique receipt ID retrievable via API
Mobile Responsiveness and Performance
Given the gate is accessed on mobile devices (320–414px width) When the page loads Then content fits the viewport without horizontal scrolling, text is legible, and tap targets are at least 44px And keyboard/focus order follows the form logically, labels are associated with inputs, and screen readers announce field errors And Largest Contentful Paint is ≤ 2.5s and Time to Interactive is ≤ 3.5s on a simulated 4G connection with a mid‑tier device And UI responds within 100ms to input on modern devices
Recipient Identity Capture
"As a label publicist, I want to know who accessed the assets and in what capacity so that I can maintain a verifiable audit trail and follow up appropriately."
Description

Collect recipient name, role, organization, and email prior to granting access, with optional prefill from secure link parameters and inline validation. Automatically capture IP address, user-agent, timestamp, and locale; support a role taxonomy (press, radio, playlist, internal) with custom entry; store identity per access record tied to project/release; require identity for each unique access token to prevent bypass; offer a low-friction, single-screen UX that honors privacy preferences and regional consent notices.

Acceptance Criteria
Gate Blocks Access Until Identity Submitted
Given a recipient opens an access link with a valid token Then an identity form is displayed requesting name, role, organization, and email on a single screen And any controls to view or download protected assets are disabled or hidden When the recipient submits all required fields with valid input Then the gate unlocks and the recipient is granted access to the protected content in the same session And an identity access record is created and associated with the token and project/release
Secure Prefill From Signed Link Parameters With Inline Validation
Given the access link includes signed prefill parameters for name, email, role, organization, and locale Then the corresponding fields are pre-populated on load And if the signature is missing or invalid, prefill is ignored and an integrity event is logged And prefilled values remain editable by the recipient When the recipient edits fields, inline validation runs on blur and before submit Then an invalid email shows an inline error and blocks submission until corrected And required fields are enforced based on the selected role
Automatic Capture of IP, User-Agent, Timestamp, and Locale
Given the identity form is submitted successfully Then the system records and stores the submitter’s IP address (IPv4/IPv6), user-agent string, server-received timestamp (UTC), client timezone offset, and locale (from Accept-Language) And these fields are persisted with the access record without additional user input And the captured values are viewable in admin/audit views for that access record
Role Taxonomy With Custom Role Entry
Given the role selector is displayed Then the options include Press, Radio, Playlist, Internal, and Other When Other is selected Then a free-text custom role input becomes required and is validated to be 2–50 characters And on submit, both the normalized role value (taxonomy or "custom") and the entered custom role label are stored with the access record
Token-Level Identity Enforcement and Bypass Prevention
Given a request attempts to access protected content using a valid token with no associated identity record Then the system redirects to the identity gate and denies direct content access with HTTP 403 until identity is captured When identity is submitted for that token Then subsequent content requests with the same token in the same flow return HTTP 200 and the gate is not shown again in that flow And attempts to reuse the token from a different device or browser without an identity record for that token are gated until identity is captured
Identity Stored and Linked to Project/Release
Given identity submission succeeds Then an access record is created that includes captured identity and metadata, the access token ID, and the project/release ID And the record is unique per token per project/release And the record can be queried and exported by project/release and date range in admin tools
Single-Screen UX With Regional Privacy and Consent Notices
Given the recipient’s locale/country is inferred from IP geo and/or Accept-Language on first load Then the identity fields, role selector, and required notices render on a single page with no multi-step navigation And for EEA/UK locales, a GDPR consent notice with an explicit checkbox is shown and must be accepted to proceed And for California (US-CA), a CCPA/CPRA privacy notice with a “Do Not Sell/Share” link is shown and the user’s preference is captured if expressed And the consent/notice version, choice (if any), and timestamp are stored with the access record
Consent Receipt Export
"As a rights manager, I want downloadable receipts of who agreed to what and when so that I can prove compliance to rights holders and partners."
Description

Generate immutable consent receipts that capture recipient identity, terms body and version hash, acceptance timestamp with timezone, IP, user-agent, link ID, asset list, and embargo details. Provide per-recipient PDF receipts, batch CSV exports by release or date range, and shareable receipt links with access control. Make receipts tamper-evident via checksum, include TrackCrate branding with optional label logo, and integrate receipt references into existing link analytics and project views.

Acceptance Criteria
Generate Per-Recipient Immutable PDF Consent Receipt
- Given a recipient completes clickwrap for a gated link, When the system records consent, Then a PDF receipt is generated within 5 seconds containing recipient_full_name, recipient_role, recipient_email (if collected), ip, user_agent, link_id, asset_list (ids and filenames), terms_body, terms_version, terms_hash (SHA-256), accepted_at (ISO 8601 with timezone), embargo_start/end (if any), receipt_id, and checksum.
- Given a generated receipt, When the PDF is opened, Then it is read-only (non-editable/flattened), text-selectable, includes receipt_id and checksum in metadata, and file size is ≤1 MB for receipts with ≤50 assets.
- Given branding settings with an optional label logo configured (PNG/SVG ≤1 MB), When the PDF is generated, Then TrackCrate branding is visible and the label logo is rendered without distortion; if no logo is set, the layout remains consistent.
- Given the receipt exists, When a project user views the recipient row, Then actions "View Receipt" and "Download PDF" are available and deliver the identical, checksummed file.
Batch CSV Export by Release and Date Range
- Given a release is selected, When exporting consent receipts to CSV, Then the CSV contains one row per receipt with columns: receipt_id, recipient_name, recipient_role, recipient_email, link_id, link_title, release_id, terms_version, terms_hash, accepted_at_utc, accepted_timezone, ip, user_agent, embargo_start_utc, embargo_end_utc, asset_ids, asset_filenames, checksum, file_url, and includes a header row (UTF-8, RFC 4180).
- Given a date range filter and an optional release filter, When exporting, Then only receipts with accepted_at within the inclusive range (converted to UTC) and matching the release are included.
- Given asset lists with multiple values, When exporting, Then asset_ids and asset_filenames are semicolon-delimited within a single cell.
- Given an export exceeds 50,000 rows, When generating the CSV, Then the system streams the file to complete within 60 seconds or initiates an asynchronous export that completes within 5 minutes and notifies with a download link.
- Given a CSV export is completed, When downloaded, Then a companion .sha256 checksum file is available and its checksum matches the CSV.
Shareable Receipt Links with Access Control and Revocation
- Given a consent receipt exists, When a project user clicks "Copy Shareable Link", Then a unique URL with a signed token is created with a default expiry of 7 days and a configurable range of 1–30 days.
- Given a shareable receipt link, When it is opened by a visitor, Then the receipt view is displayed only if the token is valid, unexpired, and not revoked; otherwise an access-denied page is shown without exposing PII.
- Given a shareable receipt link is viewed, When the page loads, Then the system logs the view timestamp, ip, and user_agent to the receipt access log.
- Given a project user revokes a shareable link, When the revoked URL is opened, Then access is denied and the attempt is recorded in the access log.
- Given tenant policy requires sign-in for shareable links, When the link is opened, Then the viewer must authenticate to the correct tenant before the receipt is displayed.
Tamper-Evident Checksum Generation and Verification
- Given a receipt PDF or CSV export is generated, When the file is finalized, Then a SHA-256 checksum is computed, embedded on page 1 for PDFs and included as a column/companion file for CSVs, and stored with the receipt metadata.
- Given a downloaded receipt PDF, When it is submitted to the public verification endpoint with its receipt_id, Then the service recomputes the hash and returns status "valid" if it matches or "invalid" with HTTP 409 if not.
- Given verification reports a mismatch, When a project user views the receipt in-app, Then a "Tamper Suspected" badge is displayed with a link to re-download the canonical file.
- Given branding settings change after a receipt is generated, When viewing the existing receipt, Then the original PDF remains unchanged and its checksum is identical to the stored value.
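The verification flow above reduces to computing a SHA-256 digest when the file is finalized, then recomputing and comparing on demand. A minimal sketch in Python (function names are illustrative, not TrackCrate's actual endpoint contract):

```python
import hashlib


def compute_checksum(data: bytes) -> str:
    """SHA-256 hex digest of the finalized receipt bytes."""
    return hashlib.sha256(data).hexdigest()


def verify_receipt(data: bytes, stored_checksum: str) -> str:
    """Recompute the hash and compare it to the stored value,
    mirroring the verification endpoint's valid/invalid result."""
    return "valid" if compute_checksum(data) == stored_checksum else "invalid"
```

In practice the stored checksum would live in the receipt metadata record, and the verification endpoint would look it up by receipt_id before comparing.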
Receipt References in Link Analytics and Project Views
- Given consent receipts exist for a link, When viewing the link analytics page, Then metrics show total_recipients, total_accepts, acceptance_rate_percent, and last_accepted_at, and filters by terms_version and date range are available.
- Given the project recipients list is open, When viewing a recipient who accepted, Then the row shows receipt_id, status "Accepted", the accepted_at timestamp, and a link to open the receipt PDF.
- Given analytics are exported, When exporting to CSV, Then each row includes receipt_id and terms_version to align analytics with receipts.
- Given a recipient has not accepted, When viewing analytics or exporting receipts, Then they are excluded from accepted counts and not included in the receipts export.
Embargo Details Captured and Displayed with Timezone Accuracy
- Given a link has embargo_start and/or embargo_end configured, When a recipient accepts the terms, Then the receipt captures embargo_start and embargo_end in UTC and the link's IANA timezone and displays both alongside accepted_at.
- Given a receipt is rendered, When times are displayed, Then accepted_at and embargo times show the correct local time with offset and the UTC equivalent.
- Given acceptance occurs around a daylight saving transition, When stored and displayed, Then UTC timestamps are correct and the local display reflects the accurate DST offset.
- Given a link has no embargo configured, When generating a receipt, Then embargo fields are empty and the PDF omits the embargo section.
Terms Versioning and Hashing Integrity
- Given terms text is updated, When the terms version increments for a link, Then the system computes a SHA-256 hash of the exact terms body and stores both the version and the hash.
- Given a recipient accepts the terms, When the receipt is generated, Then it includes the exact terms body presented at acceptance, the corresponding terms_version, and terms_hash; later edits to the terms do not alter existing receipts.
- Given two receipts have identical terms bodies, When comparing their hashes, Then terms_hash values are identical; any character difference yields different hashes.
- Given a CSV export of receipts, When opened, Then terms_version and terms_hash columns are populated for every row.
Embargo & Access Controls
"As an artist manager, I want embargoes and download permissions enforced automatically so that I can safely share pre-release materials without leaks."
Description

Enforce embargo start/end timestamps on gated links, blocking access until the start time and disabling downloads after expiry. Respect per-asset flags for preview-only versus downloadable items, integrate with expiring, watermarked downloads, allow per-role exceptions via signed tokens, and present informative countdowns and messages. Implement consistent UTC storage with localized display, secure direct asset URLs with short-lived signatures and referrer checks, and apply rate limiting for repeated attempts.

Acceptance Criteria
Embargo Start Blocks Access Until Start Time
Given a gated link with startAt (UTC) in the future and endAt after startAt When a recipient opens the link before startAt Then the landing page returns HTTP 200 with an embargo banner and a live countdown to startAt in the recipient’s local timezone And no asset list, previews, or download controls are rendered And any API calls to list assets or request downloads for this link return HTTP 403 with error code embargo_not_started And the displayed local start date/time equals startAt converted from UTC, with countdown accuracy within ±1 second
Embargo End Disables Downloads After Expiry
Given current time >= endAt for a gated link When a recipient opens the link Then the landing page returns HTTP 200 with an “access expired” message including the local end time And all download controls are hidden/disabled for every asset And preview playback remains available only where the asset’s preview is enabled And any attempt to initiate a download returns HTTP 410 Gone with error code embargo_expired And any previously issued download URLs for this link are invalidated and return HTTP 410 And an audit event is recorded for each expired download attempt
Per-Asset Preview-Only vs Downloadable Enforcement
Given a kit where Asset A has previewOnly=true and Asset B has downloadable=true When the recipient opens the link during the active window (startAt <= now < endAt) Then Asset A renders a player with no download button and any download API request for Asset A returns HTTP 403 with error code preview_only And Asset B renders a download control that succeeds only if role permissions allow and the link is within the window And server-side validation enforces asset flags on every request regardless of client UI state
Expiring, Watermarked Download Delivery
Given a downloadable asset and a recipient permitted to download within the active window When the recipient clicks Download Then the system issues a short-lived, single-use, signed URL that expires in <= 15 minutes and is bound to linkId, assetId, and recipient identifier And the delivered file contains an embedded watermark including linkId and a timestamp And reusing the same URL or downloading after expiry returns HTTP 403 with error code token_expired And an audit record captures recipient name, role, assetId, timestamp, IP, and watermark id
Per-Role Exception via Signed Token
Given a recipient has a valid, unexpired signed token granting scope=embargo_bypass for linkId X When the recipient opens the link before startAt Then the landing page and APIs behave as if within the active window for that recipient only, subject to per-asset flags And the token signature, scope, audience (linkId), and expiry are verified; tampered or wrong-audience tokens return HTTP 401 And access without the token continues to respect the embargo for that recipient
UTC Storage and Localized Countdown/Times
Given a gated link stored with startAt and endAt in UTC When recipients in different time zones (including over a DST change) view the link Then the page displays local start/end times correctly converted from UTC and a countdown accurate to within ±1 second And server-side allow/deny decisions use only UTC timestamps And automated tests validate at least three time zones and a DST transition case
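Storing UTC and converting only at display time can be sketched with the standard IANA zone database; the helper below is illustrative, not the actual rendering code:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # IANA time zone database, handles DST offsets


def localize(utc_iso: str, tz_name: str) -> str:
    """Convert a stored UTC timestamp to the viewer's local time for display.
    Allow/deny decisions keep using the UTC value; only presentation changes."""
    dt = datetime.fromisoformat(utc_iso).replace(tzinfo=timezone.utc)
    return dt.astimezone(ZoneInfo(tz_name)).isoformat()
```

A DST transition case like the one the tests call for: in Europe/London, clocks move forward at 01:00 UTC on 2025-03-30, so 00:30 UTC renders with a +00:00 offset and 01:30 UTC renders as 02:30 with +01:00.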
Signed URL, Referrer Enforcement, and Rate Limiting
- Given a direct asset request without a valid short-lived signature, Then the server returns HTTP 403 with error code invalid_signature and no file bytes are streamed.
- Given a direct asset request with a valid signature but a Referer whose origin does not match the issued link’s origin, Then the server returns HTTP 403 with error code bad_referer.
- Given >= 20 denied requests (invalid signature or pre-start API calls) from the same IP and link within 60 seconds, Then subsequent requests are throttled with HTTP 429 and exponential backoff up to 5 minutes, And all denials and throttles are logged with correlationId, IP, linkId, and reason; valid requests are unaffected once the window expires.
Terms Management & Re-consent
"As a label admin, I want to update pledge terms and automatically collect re-consent so that compliance stays current without manual follow-ups."
Description

Provide an admin editor to create reusable pledge templates with variables (e.g., project name, embargo date), multi-language content, and versioning. Allow per-link template selection and light customization, pin the accepted version to each receipt, and when terms change, require re-consent for future access with optional invalidation of prior tokens. Offer recipient notifications with re-consent links, display version history in the dashboard, and ensure backward-compatible rendering of historic terms.

Acceptance Criteria
Create and Save Reusable Pledge Template with Variables
- Given I am an admin in the Terms Editor, When I author a template using variables {{project_name}} (string) and {{embargo_date}} (date), Then Save is enabled only when variable names are unique, syntactically valid, and typed.
- When I click Save with valid inputs, Then the template is persisted with a unique Template ID and semantic version v1.0.0 and appears in the template picker within 5 seconds.
- When I click Preview with sample data, Then all placeholders resolve; any missing values are flagged inline and block Save with a clear message.
Multi-language Terms Rendering and Selection
- Given a template has translations for en, es, and fr with identical placeholders, When I save, Then validation fails if any translation omits a placeholder or mismatches its type.
- When a recipient opens a pledge link with ?lang=es or Accept-Language=es, Then Spanish renders; if es is unavailable, Then the default language renders and a language selector is displayed.
- When rendering a right-to-left language (e.g., ar), Then text direction is RTL and layout remains legible on mobile and desktop breakpoints.
Template Versioning and Pinning on Acceptance Receipt
- Given a published template v1.0.0, When I publish text-only changes, Then the version increments to v1.0.1; adding a new required variable increments to v1.1.0; removing or renaming variables increments to v2.0.0.
- When a recipient accepts terms, Then the receipt stores template_id, semantic_version, rendered_terms_sha256, variable values, recipient name, role, timestamp, and IP/user-agent.
- When the template is later updated, Then the stored receipt remains immutable and continues to reference the originally accepted version and hash.
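The version-bump policy above is mechanical enough to express directly. A sketch with assumed change-type labels (the labels themselves are illustrative; only the bump rules come from the criteria):

```python
def bump_version(version: str, change: str) -> str:
    """Apply the template versioning policy: text-only edits bump patch,
    an added required variable bumps minor, removed/renamed variables bump major."""
    major, minor, patch = (int(p) for p in version.lstrip("v").split("."))
    if change == "text_only":
        patch += 1
    elif change == "variable_added":
        minor, patch = minor + 1, 0
    elif change in ("variable_removed", "variable_renamed"):
        major, minor, patch = major + 1, 0, 0
    else:
        raise ValueError(f"unknown change type: {change}")
    return f"v{major}.{minor}.{patch}"
```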
Per-Link Template Selection with Light Customization
- Given I am creating a share link, When I select a template and choose Version Policy = Latest or Lock to vX.Y.Z, Then the selection is saved with the link.
- When I set variable values, Then only declared variables are accepted; values are validated by type; per-link free-text customizations are limited to 500 characters and HTML is sanitized/stripped.
- When I preview the link, Then the rendered terms exactly match what recipients will see for the selected version policy and variable values.
Re-consent on Terms Update with Optional Token Invalidation
- Given links using Version Policy = Latest, When I publish v1.1.0 and check "Require re-consent," Then any future access to affected links redirects to a consent page until the user re-consents.
- When I also check "Invalidate prior tokens," Then previously issued tokens cannot fetch assets until re-consent is completed; attempts return 401 with a re-consent link.
- When a recipient re-consents, Then a new receipt is created and associated with the recipient and link; prior receipts remain viewable and unaltered.
Recipient Notification for Re-consent
- When re-consent is required and a recipient email exists, Then the system sends a notification within 15 minutes containing the template name, new version, change summary, and a unique re-consent link.
- When an email bounces, Then the event is logged and surfaced in the dashboard with the recipient record.
- When the recipient completes re-consent via the email link, Then the dashboard reflects completion within 2 minutes and the link cannot be reused (single-use).
Dashboard Version History and Backward-Compatible Rendering
- When viewing a template, Then the dashboard lists all versions with timestamps, editors, change notes, and a diff view between adjacent versions.
- When opening a historic version, Then the terms render identically to the original using the stored content and renderer; the output hash matches the receipt's stored hash.
- When exporting a historic version or related receipts to PDF/CSV, Then the file downloads successfully with a checksum and includes pinned version identifiers.
Compliance Logs & Webhooks
"As a label operations lead, I want real-time events and a complete audit log so that our internal systems stay in sync and we can respond quickly to compliance issues."
Description

Maintain an append-only audit log of identity submissions, consent decisions, access grants/denials, and terms changes. Expose signed webhook events (consent.accepted, consent.declined, embargo.reached, terms.updated) and APIs to retrieve receipts and logs by link, release, or recipient. Implement role-based access controls, redact sensitive fields as configured, provide retry/backoff on webhook delivery, and surface admin email summaries and alerts for unusual activity patterns.

Acceptance Criteria
Append-Only Audit Log Integrity
Given an identity submission, consent decision, access grant/denial, or terms change occurs, When the event is recorded, Then the system appends a new immutable audit entry containing event_type, event_id (UUIDv4), occurred_at (ISO8601 UTC), actor_id, resource_id(s), and checksum. Given any attempt to update or delete an audit entry via API or internal service, When the request is made, Then the system rejects it and returns 405 for update and 403 for delete and the entry remains unchanged; an attempted_tamper audit event is recorded. Given multiple events for the same resource, When logs are retrieved, Then entries are returned in chronological order with stable cursor-based pagination and no gaps or duplicates. Given the audit store, When integrity verification runs hourly, Then no violations of the append-only constraint are found; if a violation is detected, an admin alert is generated within 5 minutes.
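One way to make an append-only log self-verifying is to chain each entry's checksum to its predecessor's, so any edit or deletion breaks every later entry. The entry fields below mirror the criteria above; the hash-chaining scheme itself is an assumption, not a stated requirement:

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone


class AuditLog:
    """Append-only log: each entry's checksum covers the previous entry's
    checksum, so tampering anywhere invalidates the rest of the chain."""

    GENESIS = "0" * 64  # sentinel prev_checksum for the first entry

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, event_type: str, actor_id: str, resource_id: str) -> dict:
        prev = self.entries[-1]["checksum"] if self.entries else self.GENESIS
        entry = {
            "event_id": str(uuid.uuid4()),
            "event_type": event_type,
            "occurred_at": datetime.now(timezone.utc).isoformat(),
            "actor_id": actor_id,
            "resource_id": resource_id,
            "prev_checksum": prev,
        }
        body = json.dumps(entry, sort_keys=True).encode()
        entry["checksum"] = hashlib.sha256(body).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every checksum and confirm the chain is unbroken."""
        prev = self.GENESIS
        for entry in self.entries:
            if entry["prev_checksum"] != prev:
                return False
            body = {k: v for k, v in entry.items() if k != "checksum"}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != entry["checksum"]:
                return False
            prev = entry["checksum"]
        return True
```

The hourly integrity verification described above would call something like `verify()` over the stored chain and alert on failure.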
Signed Webhook Delivery: Consent and Terms Events
Given a webhook subscription with a shared secret exists, When consent.accepted, consent.declined, terms.updated, or embargo.reached occurs, Then the system POSTs within 10 seconds to the subscriber endpoint with JSON payload including event_type, event_id, occurred_at, resource identifiers, and headers X-TrackCrate-Timestamp and X-TrackCrate-Signature (HMAC-SHA256 over timestamp + payload). Given the subscriber validates the signature with the configured secret, When the payload is unaltered and the timestamp is within 5 minutes, Then validation succeeds; otherwise validation fails and the event is ignored by the subscriber. Given event delivery, When the subscriber returns a 2xx response, Then the event is marked delivered and no further retries occur. Given resilience requirements, When the same event_id is delivered more than once, Then the payload is identical (except delivery headers) enabling idempotent processing by subscribers.
Webhook Retry, Backoff, and Dead-Letter Handling
Given a webhook delivery attempt fails with a network error or non-2xx (except 410), When the system retries, Then it retries up to 12 attempts over ~24 hours using exponential backoff with jitter starting at 1 minute. Given a 410 Gone response from the subscriber, When received, Then the subscription is immediately disabled, no further retries are scheduled, and an admin notification is sent. Given maximum retries are exhausted, When the event remains undelivered, Then it is placed in a dead-letter queue visible via admin UI/API and an alert is sent to OrgAdmins. Given an admin selects replay for a dead-lettered event, When triggered, Then the event is re-enqueued to the subscription with a new delivery attempt while preserving the original event_id and updated signature timestamp.
Receipts and Compliance Logs Retrieval APIs
Given an authenticated caller with scope compliance:read requests GET /compliance/logs with filters by link_id, release_id, recipient_id, and occurred_at range, When parameters are valid, Then the API returns paginated results (limit, next_cursor) sorted ascending by occurred_at, stable across pages, with P95 latency <= 500ms on a dataset of 10k records. Given GET /compliance/receipts/{receipt_id}, When Accept is application/pdf, Then the API returns a PDF receipt; when Accept is application/json, Then it returns JSON; receipts include event_id, recipient identity (subject to redaction), terms snapshot, consent decision, and a verification checksum and are immutable. Given invalid filters or unauthorized access, When requested, Then 400 is returned for invalid parameters and 403/404 for unauthorized/not found without leaking resource existence. Given an export request POST /compliance/logs/export with filter set, When accepted, Then a signed download URL is returned and expires within 24 hours; the export includes only records permitted by the caller’s scope.
Role-Based Access Control for Compliance Data
Given user roles OrgAdmin, LabelManager, Contributor, and ExternalViewer, When accessing compliance logs or receipts, Then OrgAdmin and LabelManager with scope compliance:read can view all org records; Contributors can view records they initiated; ExternalViewer is denied with 403. Given an API token, When the token lacks scope compliance:read or violates resource constraints, Then access is denied with 403 and an access_denied audit event is recorded including actor, scope, and resource. Given a request for unmasked PII, When the requester lacks scope compliance:pii:unmask, Then PII fields are redacted in responses and webhooks. Given any access decision is made, When granted or denied, Then the decision is itself logged with actor_id, evaluated scopes, and rationale code.
Configurable Redaction of Sensitive Fields
Given organization redaction settings specify fields (e.g., ip_address, email, user_agent, note), When enabled, Then those fields are masked in API responses, exports, receipts (if external), and webhook payloads; storage retains encrypted originals. Given a user with scope compliance:pii:unmask, When requesting data, Then masked values are returned in full only to that user and an unmask_access audit event is recorded. Given redaction settings are updated, When saved, Then changes are logged with who/when/what, applied to subsequent reads within 5 minutes, and do not mutate stored historical records. Given a link configured to send external receipts, When receipts are generated, Then sensitive values are redacted per the organization’s configuration.
Admin Summaries and Anomaly Alerts
Given an organization with compliance activity, When the scheduled job runs daily at 08:00 in the org’s primary timezone, Then an email summary is sent to OrgAdmins including counts by event_type, unique recipients, declines, embargo.reached, webhook failure rate, and links to dashboards. Given anomalies occur (decline rate > 10% over the last hour with N >= 50; >= 5 failed webhook attempts to the same endpoint within 30 minutes; >= 3 access denials from the same IP within 10 minutes), When detected, Then an alert is sent via configured channel (email at minimum) with context and an investigation link and is rate-limited with a 30-minute cooldown per pattern. Given admins update notification preferences, When saved, Then selected channels and thresholds persist and are respected within 10 minutes. Given a summary or alert is sent, When delivery succeeds or fails, Then delivery status is tracked; failures generate a backup email attempt and are logged for review.

Recall & Replace

Revoke or swap assets across all active Guest Guard Links with a single click, no new outreach needed. Recipients see a friendly update message, while you preserve analytics and watermark history, minimizing disruption when mixes change or campaigns pivot.

Requirements

One-click Global Asset Swap
"As a label manager, I want to replace an asset across all active links with one click so that I don’t have to resend links and my campaign keeps running smoothly."
Description

Provide a single-action control to replace a selected asset (stems, masters, artwork, press docs) across all active Guest Guard Links, AutoKit press pages, and trackable shortlinks without generating new URLs. The system updates the asset pointer while preserving link IDs, download permissions, expirations, and existing audience rules. Upon swap, previews and stem player sources are re-indexed, and recipient-specific watermarks are regenerated on next download. Includes a confirmation step showing impact counts, a background job queue for large libraries, and retry logic for transient failures. Integrates with storage, watermarking, analytics, and link services to ensure uninterrupted campaigns and consistent user experience.

Acceptance Criteria
Global Swap Preserves URLs and Identifiers
Given an asset A associated with active Guest Guard Links, AutoKit press pages, and trackable shortlinks When the user triggers One-click Global Asset Swap to replace A with asset B of the same type Then all affected destinations continue resolving via their original URLs and IDs with no new links generated And then link IDs, shortlink slugs, and page IDs remain unchanged And then the asset pointer for each destination updates to B within 60 seconds
Pre-swap Impact Confirmation and Safe Cancel
Given a user selects asset A and a replacement asset B When the user initiates the swap Then a confirmation modal displays counts of affected items by type (Guest Guard Links, AutoKit pages, shortlinks) and total recipients And then the modal displays an estimated processing time window And then if A and B are different asset classes (e.g., stem vs. artwork), the modal blocks confirmation with a clear error message When the user cancels Then no changes are made and no jobs are created When the user confirms Then the swap is enqueued and a job ID is displayed to the user
Background Processing, Chunking, and Retry
Given the swap job contains N affected pointers (N >= 1) When processing begins Then the system processes items in batches of up to 200 pointers per batch And then transient failures (e.g., 5xx from storage, watermarking, or link services) are retried with exponential backoff up to 5 attempts per item And then permanently failed items after max retries are recorded with error codes and a human-readable reason And then the overall job status reflects Completed, Completed with Failures, or Failed, with counts for succeeded and failed items And then a Re-run Failed Items action is available and only retries the failed subset
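The batching-and-retry policy above can be sketched as follows; `update_pointer` and `TransientError` are hypothetical stand-ins for the storage, watermarking, and link services:

```python
import time

class TransientError(Exception):
    """Stands in for a 5xx from storage, watermarking, or link services."""

def process_swap_job(pointers, update_pointer, batch_size=200, max_attempts=5):
    # Batches of up to 200 pointers; transient failures retried with
    # exponential backoff up to 5 attempts; permanent failures recorded.
    succeeded, failed = [], []
    for start in range(0, len(pointers), batch_size):
        for ptr in pointers[start:start + batch_size]:
            for attempt in range(1, max_attempts + 1):
                try:
                    update_pointer(ptr)
                    succeeded.append(ptr)
                    break
                except TransientError as exc:
                    if attempt == max_attempts:
                        failed.append((ptr, f"max retries exceeded: {exc}"))
                    else:
                        time.sleep(min(2 ** attempt, 30))  # exponential backoff
    if not failed:
        status = "Completed"
    elif succeeded:
        status = "Completed with Failures"
    else:
        status = "Failed"
    return {"status": status, "succeeded": succeeded, "failed": failed}
```

The returned `failed` subset is what a "Re-run Failed Items" action would feed back into the same function.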
Preview Re-indexing and Stem Player Source Refresh
Given the swap is confirmed When processing completes for a destination Then previews (waveforms, thumbnails) for asset B are regenerated and indexed And then CDN caches for preview and stream URLs are invalidated within 5 minutes And then AutoKit stem players load and stream B on the next page load without manual refreshes required And then no stale previews or streams of A are served after the cache invalidation window
Watermark Regeneration and History Preservation
Given recipients previously downloaded asset A with recipient-specific watermarks When the swap completes Then historical watermark records for A remain available in audit logs without modification When any recipient downloads or streams asset B via an affected destination Then a new recipient-specific watermark for B is generated at request time and embedded/applied to the delivered media And then average watermark generation latency is <= 2 seconds per file measured over a batch of at least 50 requests
Permissions, Expirations, and Audience Rules Unchanged
Given destinations have specific download permissions, expiration dates, and audience rules configured When the swap occurs Then those settings remain unchanged and continue to govern access to asset B And then attempts after an already-passed expiration still deny access as before And then existing whitelists, passcodes, and rate limits are enforced without requiring reconfiguration
Recipient Update Message and Campaign Continuity
Given a recipient opens an affected link after the swap When the page or player loads Then the recipient sees a non-blocking message indicating the asset has been updated, including a timestamp And then no 404/500 errors or unintended redirects occur during or after the swap And then analytics events (views, plays, downloads) continue to accrue under the original link ID without reset or duplication
Preserve Analytics & Watermark Lineage
"As a rights administrator, I want analytics and watermark lineage preserved across replacements so that reporting and leak tracing remain accurate and defensible."
Description

Maintain uninterrupted analytics continuity and complete watermark lineage when assets are recalled or replaced. All engagement metrics (clicks, plays, downloads, geos, referrers) remain attributed to the original link, while a new asset version record is associated under the same link identity. Store immutable mappings of recipient → watermark → asset version to support leak tracing and compliance. Expose lineage in reporting, allow CSV/JSON export for audits, and ensure historical previews and checksums remain queryable. Data model includes version identifiers, change reason, actor, timestamp, and linkage to original campaigns.
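The immutable recipient → watermark → asset version mapping could be modeled as frozen records; this is a minimal sketch with assumed field names drawn from the data model listed above:

```python
from dataclasses import dataclass, FrozenInstanceError
from datetime import datetime

@dataclass(frozen=True)
class WatermarkLineage:
    # Illustrative field names; frozen=True makes records immutable,
    # mirroring the 409 Conflict the API returns on modification attempts.
    recipient_id: str
    watermark_id: str
    version_id: str
    link_id: str          # analytics stay attributed to this original link
    change_reason: str
    actor: str
    changed_at: datetime  # recorded in UTC

lineage = [
    WatermarkLineage("rcpt_1", "wm_a1", "v1", "link_9",
                     "initial release", "user_42", datetime(2025, 3, 1)),
    WatermarkLineage("rcpt_1", "wm_b7", "v2", "link_9",
                     "mix revision", "user_42", datetime(2025, 3, 8)),
]
```

Both versions share the same `link_id`, which is what keeps engagement metrics continuous across a replacement.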

Acceptance Criteria
Recall Preserves Analytics Under Original Link Identity
Given an active Guest Guard Link with existing analytics (clicks, plays, downloads, geos, referrers) When the linked asset is recalled or replaced with a new version Then the link_id remains unchanged And pre-replacement metrics remain attributed to the same link_id And post-replacement engagements are appended to the same link_id timeline And reports show a continuous time series without gaps at the replacement timestamp
Immutable Recipient–Watermark–Version Mapping
Given recipients have unique watermarks for version V1 And the asset is replaced with version V2 When new watermarks are issued for V2 Then V1 mappings (recipient_id, watermark_id, version_id=V1) remain immutable and queryable And V2 mappings (recipient_id, watermark_id, version_id=V2) are created and immutable And any attempt to modify an existing mapping via UI/API is rejected with 409 Conflict and no data change
Lineage Visible in Reporting UI and API
Given a link with multiple asset versions When viewing the link analytics lineage panel or calling GET /links/{id}/lineage Then each version entry includes: version_id, version_index, change_reason, actor_id, actor_name, changed_at (ISO-8601 UTC), campaign_id And each entry links to checksum_sha256 and preview_url And entries are ordered by changed_at ascending And the API responds within ≤1s P95 for up to 50 versions
CSV and JSON Export for Lineage Audits
Given a link with lineage and analytics When exporting via UI or GET /links/{id}/export?format=csv|json&scope=lineage_analytics Then the file includes columns/keys: link_id, version_id, version_index, changed_at, change_reason, actor_id, campaign_id, recipient_id, watermark_id, asset_checksum_sha256, geo, referrer, date, clicks, plays, downloads And timestamps are UTC ISO-8601; headers/keys are consistent; values are escaped per RFC 4180 for CSV And exported aggregates match UI totals exactly (tolerance = 0) And asset checksums in export equal stored checksums for each version
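A dual-format export satisfying the column list and RFC 4180 escaping above could look like this sketch; the column set follows the criterion, while the row data is fabricated for illustration:

```python
import csv
import io
import json

COLUMNS = ["link_id", "version_id", "version_index", "changed_at",
           "change_reason", "actor_id", "campaign_id", "recipient_id",
           "watermark_id", "asset_checksum_sha256", "geo", "referrer",
           "date", "clicks", "plays", "downloads"]

def export_lineage(rows, fmt="csv"):
    if fmt == "json":
        return json.dumps(rows)
    buf = io.StringIO()
    # The csv module quotes and doubles embedded quotes per RFC 4180.
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Fabricated example row with a value that needs RFC 4180 escaping.
row = dict.fromkeys(COLUMNS, "")
row.update(link_id="lnk_1", change_reason='mix fix, "final"',
           changed_at="2025-03-08T12:00:00Z", clicks=14)
```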
Historical Preview and Checksum Queryability
Given an asset version has been replaced When requesting GET /versions/{version_id}/preview and GET /versions/{version_id}/checksum Then preview returns 200 and streams the correct historical asset And checksum endpoint returns stored SHA-256 hash And both endpoints remain available for retired versions And requests for non-existent version_id return 404
Event Attribution Across Replacement Boundary
Given engagements continue arriving during and after a replacement When processing events with timestamps before and after the replace_time Then events are attributed to the same link_id And each event is stamped with the version_id active at event.timestamp And no events are dropped; ingestion lag P95 ≤ 2 minutes And daily aggregates reconcile to raw events with zero-count discrepancy
Recipient Update Banner & Contextual Messaging
"As a recipient, I want clear notice when an asset was updated so that I understand what changed and can quickly get the latest file without confusion."
Description

Display a friendly, non-blocking update banner to recipients when an asset they access has been recalled or replaced. The banner appears on Guest Guard Link pages, embedded stem players, and AutoKit press pages, indicating update date/time and optional release notes from the sender. Provide quick actions to download the latest file, view what changed (filename, duration, checksum), and dismiss the notice. Ensure accessibility (screen reader labels, focus order), localization support, and responsive layout. Include a sender-side composer for short update messages and a preview mode before publishing.

Acceptance Criteria
Banner Appears on Guest Guard Link After Recall/Replace
Given an active Guest Guard Link that references an asset which has been recalled or replaced When a recipient loads the link Then a non-blocking update banner is displayed at the top of the page within 500 ms of initial content render And the banner states the asset was updated and shows the update date/time in the recipient’s locale And if release notes exist they are displayed (truncated to two lines) with a “View more” control Given the link references no updated assets When a recipient loads the link Then the banner does not appear
Banner Appears in Embedded Stem Player and AutoKit Pages
Given an embedded stem player or AutoKit press page that references an updated asset When rendered at viewport widths 320px, 768px, and 1280px Then the update banner is visible, does not overlap primary media or download controls, and adapts layout (stacked at ≤480px, inline above controls at >480px) And in embedded contexts the collapsed banner height does not exceed 96px and remains within the player/page container Given the host page is scrollable When the user scrolls Then the banner scrolls with the player/page container and does not fix to the viewport
Quick Actions: Download Latest, View Changes, Dismiss
Given an asset has been replaced When the banner renders Then it includes actions: “Download latest”, “What changed”, and “Dismiss”, each keyboard-focusable with accessible labels Given the user clicks “Download latest” and the link policy allows downloads When the action is triggered Then the latest file download starts with existing watermarking and expiry rules enforced Given the user clicks “What changed” When the action is triggered Then a modal opens with change details Given the user clicks “Dismiss” When the action is triggered Then the banner hides immediately without a full page reload and remains hidden for the current page view Given an asset has been recalled without replacement When the banner renders Then “Download latest” is not shown
What Changed Modal Details
Given a replaced asset When “What changed” is opened Then the modal shows previous and current values for filename, duration (mm:ss.mmm), and SHA-256 checksum And it displays the update date/time and the full release notes (if provided) And it provides a “Copy” control for each checksum value Given any field has not changed When displayed Then it is explicitly indicated as “Unchanged” Given a recalled asset with no replacement When “What changed” is opened Then the modal indicates the asset was recalled and no replacement is available, and hides previous/current comparison rows
Accessibility Compliance (WCAG 2.2 AA)
Given the banner appears dynamically When a screen reader is active Then the banner is announced via aria-live="polite" with a descriptive label, and does not automatically steal focus Given keyboard-only navigation When interacting with the banner and modal Then all controls are reachable via Tab/Shift+Tab, actionable via Enter/Space, the modal traps focus while open, Esc closes the modal, and focus returns to the triggering control on close Given standard contrast checks When the banner and modal are rendered Then text and interactive elements in the banner and modal meet a minimum 4.5:1 contrast ratio and have visible focus indicators
Localization and Timezone Behavior
Given the recipient’s locale is detected from Accept-Language or an overriding link parameter When the banner and modal render Then all system strings are localized to that locale, and any missing translations fall back to English Given the update timestamp When displayed Then it is shown in the recipient’s local timezone and locale format, with a machine-readable ISO 8601 datetime attribute and an absolute-time tooltip including timezone Given an RTL locale When rendered Then layout, text alignment, and iconography mirror appropriately for right-to-left reading order
Sender Update Message Composer and Preview
Given a sender composes an update message When typing in the composer Then input is limited to 500 characters, disallowed HTML is stripped, line breaks are preserved, and a live character count is shown Given the sender clicks Preview When preview mode opens Then the banner and modal are rendered as recipients will see them across contexts (Guest Guard Link, embedded player, AutoKit), using the composed message Given the sender publishes the update When recipients next load or refresh eligible pages Then the new message appears without requiring new links, and the sender can edit or clear the message in a subsequent update
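The composer's input rules (strip disallowed HTML, preserve line breaks, cap at 500 characters) could be sketched as below; for simplicity this illustrative version strips all tags rather than applying an allowlist:

```python
import re

MAX_LEN = 500  # character cap from the criterion above

def sanitize_update_message(raw: str) -> dict:
    # Hypothetical sketch, not TrackCrate's actual sanitizer.
    text = re.sub(r"<[^>]+>", "", raw)  # strip HTML tags
    text = text.replace("\r\n", "\n")   # normalize but preserve line breaks
    text = text[:MAX_LEN]               # hard cap at 500 characters
    return {"text": text, "remaining": MAX_LEN - len(text)}
```

The `remaining` field is what a live character counter in the composer would display.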
Scoped Recall & Replace Controls
"As a campaign manager, I want to recall or replace assets for specific recipients or links so that I can control rollout scope and minimize disruption."
Description

Enable targeting of recall/replace operations to specific audiences and links. Provide filters by list/segment (PR, DSP, press), tag, link creation date range, geography, access role, and manual selection of link IDs. Support action types: Recall (disable access and optionally expire immediately) and Replace (swap to a new version). Include scheduling (execute now or at a future time), preflight impact summary (affected links, recipients, storage delta), and soft rollout (percentage or cohort-based). Respect existing expiration rules and access controls, and log all actions for auditability.

Acceptance Criteria
Targeted Link Selection via Filters and IDs
Given I select one or more segments (e.g., PR, DSP, Press) And I select one or more tags And I set a link creation date range And I select one or more geographies And I select one or more access roles When I apply the filters Then the target set equals links that match the intersection across filter categories and the union within each category value set And the UI displays the exact count of matched links and lists their IDs Given I manually input specific link IDs When I apply both filters and manual IDs Then the target set is the de-duplicated union of matched filters and entered IDs And any invalid or non-existent IDs are listed with a validation error and excluded Given the resulting target set is empty When I attempt to proceed Then the action is blocked and I am prompted to adjust selection
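The targeting rule above (union within each filter category, intersection across categories, then a de-duplicated union with manually entered IDs) can be sketched as follows; the link records and field names are fabricated:

```python
def resolve_targets(links, filters, manual_ids):
    matched = set(links)  # start from all link IDs
    for category, selected in filters.items():
        if selected:  # union within a category: any selected value matches
            matched &= {lid for lid, attrs in links.items()
                        if attrs.get(category) in selected}
    valid_manual = {i for i in manual_ids if i in links}
    invalid = sorted(set(manual_ids) - valid_manual)  # surfaced as errors
    return sorted(matched | valid_manual), invalid

# Fabricated example data.
links = {
    "lnk_1": {"segment": "PR", "geo": "US"},
    "lnk_2": {"segment": "DSP", "geo": "US"},
    "lnk_3": {"segment": "PR", "geo": "DE"},
}
```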
Recall Action: Disable Access and Optional Immediate Expiration
Given I have a non-empty targeted set of active Guest Guard Links When I choose Action = Recall and confirm Then targeted links are marked disabled within 60 seconds And recipients clicking those links see a friendly recall message instead of the original asset And shortlink URLs remain unchanged for analytics continuity Given I toggle "Expire immediately" = On When the recall executes Then the expiration timestamp on each targeted link is set to now And existing expiration rules for non-targeted links remain unchanged Given some targeted links are already expired When the recall executes Then those links remain expired and are also marked disabled for audit clarity And historical analytics and watermark records remain intact and queryable
Replace Action: Swap Asset Version While Preserving History
Given I have a non-empty targeted set of active Guest Guard Links And I have uploaded or selected a new asset version compatible with the target slot/type When I choose Action = Replace and confirm Then future accesses via targeted links serve the new version within 60 seconds And the shortlink remains the same And recipients see a friendly update message on first access after the swap And existing analytics counters continue without reset And watermark lineage records associate previous and new versions for each link Given some targeted links are expired or geo/access-restricted When the replace executes Then those constraints remain enforced and are not relaxed by the replace Given the new asset fails validation (type/codec/DRM mismatch) When I attempt to confirm Then the replace is blocked with a specific validation error message
Preflight Impact Summary and Validation
Given I have configured filters, manual IDs, and selected an action (Recall or Replace) When I open the preflight summary Then I see the exact counts of affected links and unique recipients And I see the projected storage delta for the action (increase/decrease in GB) And I can preview the list of affected link IDs and export it as CSV Given the preflight shows zero affected links When I attempt to start the action Then the system blocks execution and prompts me to revise targeting Given I schedule a Replace with a new asset When I view preflight Then I see the current asset version, target version, and estimated propagation time window
Scheduling: Execute Now or Future with Timezone Support
Given I select Execute = Now When I confirm an action Then the job is queued immediately and begins within 60 seconds And I can view a running status until completion Given I select Execute = Later And I set a future date/time in the workspace timezone When I schedule the action Then the job is stored with both workspace timezone and UTC timestamps And I can edit or cancel it any time before it starts Given the scheduled time is in the past at confirmation When I save the schedule Then I am prompted to run immediately or pick a new time, and the job does not schedule silently Given a transient failure occurs during execution When the job retries Then it retries with backoff up to a defined limit without duplicating effects (idempotent)
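Storing a future job with both workspace-timezone and UTC timestamps, and rejecting past times, might look like this sketch (names are illustrative, and `zoneinfo` is assumed available):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def schedule_job(local_dt_str: str, workspace_tz: str, now: datetime) -> dict:
    # Hypothetical sketch of the scheduling criterion above.
    local = datetime.fromisoformat(local_dt_str).replace(
        tzinfo=ZoneInfo(workspace_tz))
    utc = local.astimezone(ZoneInfo("UTC"))
    if utc <= now:
        # The UI would prompt to run immediately or pick a new time.
        raise ValueError("scheduled time is in the past")
    return {"workspace_tz": workspace_tz,
            "local": local.isoformat(),
            "utc": utc.isoformat()}
```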
Soft Rollout by Percentage or Cohort
Given I choose Soft Rollout = Percentage and set 10% When I start a Replace Then exactly 10% of targeted recipients (deterministically sampled per recipient ID) receive the new version And the remaining recipients continue to receive the old version until the percentage is increased or rollout completes Given I increase the percentage from 10% to 50% When I update the rollout Then previously included recipients remain included and additional recipients are added to reach 50% Given I choose Soft Rollout = Cohort and select specific segments/tags When I start a Recall or Replace Then only recipients in the selected cohorts are affected And recipients outside those cohorts are unaffected until included And rollout progress is visible with counts per cohort/percentage
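Deterministic, monotonic percentage sampling can be sketched by ranking recipients on a stable hash of their ID and taking the top slice: raising the percentage only ever adds recipients, and the cohort size is exact. Function names are illustrative:

```python
import hashlib

def _rank(recipient_id: str) -> str:
    # Stable, deterministic ordering key per recipient.
    return hashlib.sha256(recipient_id.encode()).hexdigest()

def rollout_cohort(recipient_ids, percentage: int) -> set:
    ordered = sorted(recipient_ids, key=_rank)
    take = len(ordered) * percentage // 100  # exact cohort size
    return set(ordered[:take])
```

Because the ordering never changes, the 10% cohort is always a subset of the 50% cohort.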
Comprehensive Audit Logging and Traceability
Given I execute or schedule a Recall or Replace When the action is saved Then an immutable audit record is created capturing actor, timestamp, filters, manual IDs, action type, soft rollout config, schedule, and preflight summary And upon execution completion, the record includes affected link IDs, success/failure counts, error reasons, storage delta, and version hashes Given I have Audit Log view permissions When I open the audit log Then I can filter by date range, actor, action type, and link ID And I can export the record set as CSV/JSON And no existing audit records can be edited or deleted by any user
Atomic Swap & CDN Invalidation
"As a platform operator, I want asset swaps to be atomic with proper CDN invalidation so that recipients never encounter mixed versions or broken downloads."
Description

Perform zero-downtime swaps using staged uploads and atomic pointer flipping. Validate the replacement asset (checksums, duration, channel count, codec) and block the flip if incompatible with current delivery profiles. In the same transaction, trigger CDN cache invalidation, regenerate player manifests, and update shortlink redirects to avoid stale content. Provide retry with exponential backoff for purge calls, region-aware propagation checks, and metrics to verify cache freshness. Roll back automatically if any post-flip health checks fail to prevent broken links or mixed versions.
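The staged-upload-then-flip approach can be sketched with a guarded pointer; `AssetPointer` and the `validate` callback are hypothetical stand-ins for the real storage and delivery-profile services:

```python
import threading

class AssetPointer:
    """Readers see either the old or the new asset, never a mix."""

    def __init__(self, asset_id: str):
        self._lock = threading.Lock()
        self._current = asset_id

    def read(self) -> str:
        with self._lock:
            return self._current

    def flip(self, staged_asset_id: str, validate) -> str:
        # Block the flip if the staged asset fails profile validation.
        if not validate(staged_asset_id):
            raise ValueError("replacement failed delivery-profile validation")
        with self._lock:  # atomic swap of the live pointer
            previous, self._current = self._current, staged_asset_id
        return previous   # retained to allow automatic rollback
```

Returning the previous asset ID is what makes the automatic rollback on failed post-flip health checks possible.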

Acceptance Criteria
Atomic Swap Success with Compatible Replacement
Given a staged replacement asset whose codec, channel_count, sample_rate, duration (±100ms), and checksum are validated against the active delivery profile When the user initiates the swap Then the system flips the live asset pointer atomically within 1 second without returning 4xx/5xx to clients And in-flight streams/downloads continue without interruption (no connection resets; segment continuity preserved) And shortlink analytics aggregation keys and watermark history remain unchanged And the release revision is incremented and recorded in the audit log
Swap Blocked on Incompatibility Validation
Given a staged replacement asset that is incompatible (e.g., codec not in profile, channel_count mismatch, duration deviates >100ms, or checksum missing) When the user attempts the swap Then the system blocks the flip and returns a validation error enumerating each failed check And no changes are made to live pointers, manifests, or shortlink redirects And no CDN invalidations are issued And an audit log entry records the failed swap with reasons
CDN Invalidation and Player Manifest Regeneration
Given a successful atomic flip When post-flip tasks execute Then CDN purge requests for all affected paths (asset, manifests, artwork, shortlink targets) are issued within 1 second And HLS/DASH manifests are regenerated and published within 2 seconds And shortlink redirects target the new asset URL within 1 second And a freshness probe fetch returns the new ETag/version, with no stale content served to new sessions after 2 minutes
Exponential Backoff and Retry on CDN Purge Failures
Given CDN purge calls return 429 or 5xx When the system retries invalidations Then retries follow exponential backoff with jitter (1s, 2s, 4s, 8s, 16s; max 5 attempts; max interval 30s) And all attempts and outcomes are logged with status codes and latency And on success within the retry budget, the workflow proceeds to propagation checks And on final failure, the swap is marked Degraded, a user-facing alert is raised, and post-flip health checks determine whether to rollback
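The retry schedule above (1s, 2s, 4s, 8s, 16s with jitter, capped at 30s) can be computed as follows; the jitter fraction is an assumption, and the purge call itself is omitted:

```python
import random

def backoff_delays(max_attempts=5, base=1.0, cap=30.0, rng=random.random):
    # Illustrative sketch of the purge retry schedule in this criterion.
    delays = []
    for attempt in range(max_attempts):
        raw = min(base * (2 ** attempt), cap)  # 1, 2, 4, 8, 16 (capped)
        delays.append(raw + rng() * raw * 0.1)  # up to 10% jitter (assumed)
    return delays
```

Injecting `rng` keeps the schedule testable while production uses `random.random`.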
Region-Aware Propagation and Cache Freshness Metrics
Given a swap with completed purge requests When synthetic probes from at least 10 representative regions (NA/EU/APAC/SA/AF) fetch the asset and manifests Then ≥95% of regions reflect the new version within 2 minutes and 100% within 5 minutes And metrics emit versioned cache-hit/miss, ETag seen, and last-updated timestamps per region And a dashboard segments freshness by region and path, with exportable time-series for verification
Automatic Rollback on Post-Flip Health Check Failure
Given post-flip health checks detect any failure (non-2xx asset/manifests, signature mismatch, codec/profile drift, or mixed-version playback) within 5 minutes of flip When a failure threshold is met Then the system reverts the live pointer to the previous asset within 30 seconds And CDN invalidations and manifest regeneration target the restored asset And shortlink redirects are restored to the prior target And users do not encounter broken links (≤0.1% 5xx during rollback window) And the failed revision is marked Failed with a complete incident log
Zero-Downtime Under Concurrent Access During Swap
Given 1,000 concurrent clients streaming/downloading during a swap When the atomic flip occurs Then the error rate remains ≤0.1% 5xx and P95 latency increases by ≤20% during the 60s window around the flip And sessions started before the flip receive only the pre-flip version; sessions started after receive only the post-flip version And no single session receives mixed segments from different versions And cumulative analytics and watermark attribution remain contiguous across the version boundary
Version History, Diff & Rollback
"As an artist manager, I want a clear version history with diffs and the ability to roll back so that I can verify changes and quickly revert mistakes."
Description

Maintain a per-asset version timeline with metadata diffs and safe rollback. Store and display version attributes (ISRC/UPC, BPM, loudness, duration, file size, checksum, artwork hash, notes), who changed it, and why. Provide visual diffs for audio duration and artwork changes, and quick compare of file checksums. Allow a controlled rollback that restores a previous version across selected links, respecting current access rules and recording a new version entry. Include guardrails (confirmation modals, impact preview, dependency warnings for scheduled releases) and exportable change logs for stakeholders.

Acceptance Criteria
Version Timeline Rendering and Metadata Completeness
Given an asset has multiple historical versions with stored attributes When a user opens the Version History panel for the asset Then versions are listed in reverse chronological order and each version displays: Version ID, ISRC/UPC, BPM, loudness (LUFS), duration (mm:ss), file size (MB), checksum (SHA-256), artwork hash, notes, editor name, editor timestamp, and change reason And timestamps are shown in the viewer’s timezone with ISO-8601 tooltips And the list supports pagination of 50 versions per page and search by Version ID and editor name And missing or null attributes render as "—" without breaking layout
Visual Diff for Audio Duration and Artwork
Given two versions of an asset are selected in Version History When the user clicks View Diff Then the diff shows audio duration change in seconds and percentage, with increases highlighted in red and decreases in green And artwork thumbnails are shown side-by-side with dimensions and file sizes; changes are highlighted; identical artwork shows "No artwork changes" And if either version lacks artwork, the slot displays "No artwork" And the diff view renders within 1 second using pre-indexed metadata
Quick Checksum Compare
Given any version row in Version History When the user triggers Quick Compare between the current version and a selected version Then the UI displays "Match" if SHA-256 values are identical, otherwise "Mismatch" And both checksum strings can be copied via copy-to-clipboard controls And the comparison result appears within 300 ms of click And a "View full diff" link is shown when mismatch = true
Controlled Rollback to Prior Version Across Selected Links
Given a user with Edit permission selects a prior version and chooses specific Guest Guard Links to update When the user confirms the rollback Then the system creates a new version entry that references the selected prior content and metadata and sets it as current for the chosen links only And the rollback respects each link’s existing access rules (expiry, password, watermark settings) without altering them And an audit record is written with actor, timestamp, previous current version ID, new version ID, affected link IDs, and the entered reason And no other links are changed
Guardrails: Confirmation Modal, Impact Preview, and Dependency Warnings
Given a user initiates a rollback When the confirmation modal opens Then an impact preview lists all selected Guest Guard Links, their audiences, and any scheduled releases that reference the asset And if any scheduled release is within 72 hours, a dependency warning is displayed and requires explicit acknowledgment via a checkbox to proceed And if a release is locked, the rollback is blocked and the Confirm button is disabled with an explanatory message And the modal requires a non-empty reason (minimum 5 characters) before enabling Confirm
Exportable Change Logs for Stakeholders
Given Version History is visible for an asset When the user exports the change log as CSV or PDF with a date range filter Then the exported file contains one row per version change with fields: Asset ID, Version ID, Change type (create/update/rollback), ISRC/UPC, BPM, loudness, duration, file size, checksum, artwork hash, notes, editor name, editor timestamp, change reason, affected link IDs And the export respects the selected date range and returns the expected record count And the file generates within 5 seconds for up to 1,000 rows And only users with Viewer role or higher can export
Analytics and Watermark History Preservation on Rollback
Given selected Guest Guard Links have existing analytics and watermark event history When a rollback creates a new version and updates those links Then existing analytics and watermark history remain intact and are not deleted or reset And new analytics events are attributed to the new version ID from the rollback timestamp forward And recipient-facing pages display an "Asset updated" banner with the update timestamp without requiring new outreach
API & Webhook Support for Recalls
"As a developer integrating our internal tools, I want API endpoints and webhooks for recall events so that systems stay in sync without manual intervention."
Description

Expose REST endpoints to initiate recalls and replacements, list affected links, and query version lineage. Provide idempotent operations with request keys, OAuth scopes limiting recall privileges, and fine-grained filters matching the UI. Emit webhooks for asset_recalled, asset_replaced, link_updated, and watermark_regenerated with payloads that include link IDs, old/new asset IDs, and version metadata. Include rate limits, audit logging, a test sandbox, and comprehensive documentation with examples to integrate TrackCrate with label workflows and CI/CD pipelines.

Acceptance Criteria
Idempotent Recall/Replace Endpoints
Given a valid OAuth2 access token with scope recall:write When the client POSTs /v1/recalls with body {asset_id, reason} and header Idempotency-Key: K Then the API responds 202 Accepted with body containing recall_id and idempotency_key K and starts processing without creating duplicate operations And a subsequent POST with the same Idempotency-Key K and identical body within 24 hours returns the same recall_id and no additional side effects And a subsequent POST with the same Idempotency-Key K but a different body returns 409 Conflict with error_code "IDEMPOTENCY_KEY_MISMATCH" And POST /v1/replacements follows the same idempotency behavior And all responses include a unique Request-Id header
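The idempotency contract above (same key + same body replays the original response; same key + different body is a 409) can be sketched like this; the in-memory dict stands in for a persistent idempotency table:

```python
import hashlib
import json

_store = {}  # idempotency_key -> (body_hash, saved_response)

def post_recall(idempotency_key: str, body: dict):
    # Hypothetical sketch of POST /v1/recalls idempotency handling.
    body_hash = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    if idempotency_key in _store:
        saved_hash, response = _store[idempotency_key]
        if saved_hash != body_hash:
            return 409, {"error_code": "IDEMPOTENCY_KEY_MISMATCH"}
        return 202, response  # replay: no additional side effects
    response = {"recall_id": f"rec_{len(_store) + 1}",
                "idempotency_key": idempotency_key}
    _store[idempotency_key] = (body_hash, response)
    return 202, response
```

Hashing a canonical (key-sorted) JSON body is one way to detect that the same key was reused with a different payload.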
OAuth Scopes & Permission Enforcement
Given an OAuth2 token lacking recall:write When the client calls POST /v1/recalls or POST /v1/replacements Then the API responds 403 Forbidden with error_code "INSUFFICIENT_SCOPE" and a WWW-Authenticate header indicating required scope recall:write Given no Authorization header When calling any recall or replacement endpoint Then the API responds 401 Unauthorized with WWW-Authenticate: Bearer Given a token with recall:read When the client calls GET /v1/recalls/{id} and GET /v1/recalls/{id}/links Then the API responds 200 OK with permitted fields only Given a token with version:read and/or link:read When the client calls GET /v1/version-lineage or GET /v1/links Then the API responds 200 OK and denies access if required scopes are missing
Filter Parity & Affected Links Listing
Given active Guest Guard links exist for asset A with varying recipient_email, status, tags, and created_at values When the client calls GET /v1/links?affected_by_asset_id=A&status=active&recipient_email=user@example.com&created_at[gte]=2025-01-01&tag=press Then the API returns only links matching all filters, sorted by created_at desc, with pagination via next_cursor and limit parameters And the results match the UI’s filter logic for the same criteria And GET /v1/recalls/{recall_id}/links returns exactly the link_ids that were impacted by that recall And if include_total=true is provided, the response includes total_count; otherwise totals are omitted for performance
Version Lineage Query API
Given assets have been recalled and replaced forming a lineage When the client calls GET /v1/version-lineage?asset_id=A Then the API returns 200 OK with a lineage object containing fields: lineage_id, asset_id, version, parent_asset_id, replaces_asset_id, replaced_by_asset_id, recalled_at, metadata.version_notes And the lineage includes the complete chain from root through current in correct order And the endpoint supports include=children,parents and pagination for large trees And if the asset_id is unknown the API returns 404 Not Found with error_code "ASSET_NOT_FOUND"
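Assembling the root-through-current chain from the per-asset fields above is a two-pass walk; the traversal below is an illustrative sketch using the `replaces_asset_id`/`replaced_by_asset_id` fields named in the response shape:

```python
def lineage_chain(records, asset_id):
    """Return asset IDs ordered from the root version to the current one.

    `records` maps asset_id -> {"replaces_asset_id": ...,
    "replaced_by_asset_id": ...} (None marks the ends of the chain).
    """
    if asset_id not in records:
        raise KeyError("ASSET_NOT_FOUND")  # maps to 404 in the API
    # Walk backwards to the root of the lineage...
    root = asset_id
    while records[root]["replaces_asset_id"] is not None:
        root = records[root]["replaces_asset_id"]
    # ...then forwards through each replacement to the newest version.
    chain = [root]
    while records[chain[-1]]["replaced_by_asset_id"] is not None:
        chain.append(records[chain[-1]]["replaced_by_asset_id"])
    return chain
```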
Webhook Events Emission, Ordering & Retries
Given a recall or replacement is initiated and processed When the operation completes per affected link Then the system emits webhooks: asset_recalled (once per recall), asset_replaced (once per replacement), link_updated (per affected link), watermark_regenerated (per affected link requiring new watermark) And events for the same link_id are delivered in order and include an increasing sequence number And delivery uses at-least-once semantics; non-2xx responses are retried with exponential backoff for up to 24 hours before moving to a dead-letter queue And failed deliveries are visible at GET /v1/webhooks/dlq and can be retried via POST /v1/webhooks/dlq/{event_id}/retry
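The retry policy above (exponential backoff bounded by a 24-hour window before dead-lettering) could be scheduled roughly as follows; the base delay and doubling factor are assumptions, since the spec fixes only the 24-hour total:

```python
def backoff_schedule(base_seconds=30.0, factor=2.0,
                     max_total_seconds=24 * 3600):
    """Delays between webhook delivery attempts, growing geometrically,
    stopping once the cumulative wait would exceed the retry window."""
    delays, total = [], 0.0
    delay = base_seconds
    while total + delay <= max_total_seconds:
        delays.append(delay)
        total += delay
        delay *= factor
    return delays
```

After the schedule is exhausted the event moves to the dead-letter queue, where it remains visible via `GET /v1/webhooks/dlq` and retryable on demand.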
Webhook Payload Schema & Signing
Given a subscriber has configured a webhook endpoint with a shared signing secret When TrackCrate sends any of the four webhook events Then the JSON payload includes: event_type, event_id (UUID), created_at (ISO-8601), schema_version, request_id, idempotency_key, link_ids (array), old_asset_id, new_asset_id (nullable), lineage {lineage_id, version, parent_asset_id} And HTTP headers include TC-Timestamp and TC-Signature where the signature is HMAC-SHA256 over "{timestamp}.{raw_body}" with scheme "v1=" And signatures within a 5 minute tolerance validate successfully; invalid signatures lead to 400/401 and TrackCrate retries per policy And payload size does not exceed 256 KB; if larger, payload includes resources_url to fetch full details
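Subscriber-side verification follows directly from the scheme above: recompute HMAC-SHA256 over `"{timestamp}.{raw_body}"`, prefix with `v1=`, and compare in constant time after checking the timestamp tolerance. A minimal sketch (header extraction and error handling are left to the subscriber's own framework):

```python
import hashlib
import hmac
import time

def verify_webhook(secret, tc_timestamp, tc_signature, raw_body,
                   now=None, tolerance_seconds=300):
    """Validate TC-Signature: HMAC-SHA256 over "{timestamp}.{raw_body}"
    with a "v1=" prefix, rejecting stale timestamps (replay protection)."""
    now = time.time() if now is None else now
    if abs(now - int(tc_timestamp)) > tolerance_seconds:
        return False  # outside the 5-minute tolerance window
    signed_payload = tc_timestamp.encode() + b"." + raw_body
    expected = "v1=" + hmac.new(
        secret.encode(), signed_payload, hashlib.sha256
    ).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, tc_signature)
```

Rejected deliveries (400/401 from the subscriber) are then retried by TrackCrate per the backoff policy.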
Operational Controls: Rate Limits, Audit Logs, Sandbox, and Docs
Given a client makes API calls When request volume exceeds the configured per-client rate limit Then the API responds 429 Too Many Requests with X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset, and Retry-After headers and resumes service when within limits Given a recall or replacement API call is made When the operation is processed Then an audit log entry is recorded with timestamp, user_id or client_id, scopes, source_ip, request_id, idempotency_key, asset_id(s), affected_link_count, outcome, and http_status and is retrievable via GET /v1/audit/logs with filters and pagination Given a client targets the sandbox When requests are sent to https://sandbox.api.trackcrate.com or with header X-TC-Sandbox: true Then operations execute in an isolated environment, do not affect production, and emit webhooks with header TC-Sandbox: true to the configured test endpoint Given developer documentation is accessed When viewing the API reference Then an OpenAPI 3.1 spec is available and examples are provided for curl, Node.js, and Python for each endpoint, including webhook signature verification, error codes, pagination, filtering, and idempotency usage
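The 429 behavior with `X-RateLimit-*` and `Retry-After` headers can be modeled with a per-client fixed-window limiter; the window length, limit, and in-memory bookkeeping here are illustrative assumptions (the spec leaves limits configurable per client):

```python
import math

class FixedWindowLimiter:
    """Per-client fixed-window rate limiting with X-RateLimit-* headers."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counts = {}  # client_id -> (window_start, count)

    def check(self, client_id, now):
        window_start = math.floor(now / self.window) * self.window
        start, count = self.counts.get(client_id, (window_start, 0))
        if start != window_start:
            start, count = window_start, 0  # new window: counter resets
        reset_at = start + self.window
        headers = {
            "X-RateLimit-Limit": str(self.limit),
            "X-RateLimit-Remaining": str(max(self.limit - count - 1, 0)),
            "X-RateLimit-Reset": str(int(reset_at)),
        }
        if count >= self.limit:
            headers["X-RateLimit-Remaining"] = "0"
            headers["Retry-After"] = str(int(reset_at - now))
            return 429, headers
        self.counts[client_id] = (start, count + 1)
        return 200, headers
```

Once the window rolls over, service resumes without operator intervention, matching the "resumes service when within limits" criterion.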

Template Composer

Design custom Role Rings with granular scopes (projects, tracks, stems, artwork, press) and actions (view, comment, upload, replace, publish). Bundle default policies like QuotaGuard limits, DeviceLock, watermarks, and Access Pledge terms, then save as reusable presets (e.g., Artist, Mixer, PR, A&R). Standardize onboarding in seconds while ensuring consistent, least‑privilege access.

Requirements

Role Ring Builder
"As a label admin, I want to compose a custom role by selecting scopes and actions so that collaborators get exactly the access they need and nothing more."
Description

Provide an interactive builder to compose Role Rings with granular scopes (workspace, project, release, track, stem, artwork, press assets) and actions (view, comment, upload, replace, publish). Support hierarchical inheritance, explicit allow/deny, conditional constraints (e.g., time‑bounded publish), and conflict detection with inline guidance. The builder should output a normalized, machine-readable policy used by TrackCrate’s authorization layer across web, API, shortlinks, and AutoKit stem player. Ensure real‑time preview of effective permissions against sample assets to reduce misconfiguration, and enforce least‑privilege through deny‑by‑default templates.

Acceptance Criteria
Compose Role Ring and emit normalized policy
Given a new Role Ring draft in the builder When the user assigns one or more actions [view, comment, upload, replace, publish] to at least one scope [workspace, project, release, track, stem, artwork, press] Then the Save button becomes enabled only when at least one (scope, action) pair is configured Given the user saves the Role Ring When the backend generates the policy document Then the output validates against JSON Schema id "roleRingPolicy/1.0" And includes fields: policyId (UUIDv4), policyVersion ("1.0"), ringName, rules[], constraints[], createdAt (ISO-8601), createdBy (userId) And rules are canonicalized (sorted by scope specificity then action name) And duplicate or overlapping rules are de-duplicated in the normalized output
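The canonicalization step above (sort by scope specificity, then action name; drop duplicates) can be sketched as follows. The specificity ranking and rule shape are illustrative assumptions; the real ordering lives in the `roleRingPolicy/1.0` schema:

```python
# Illustrative specificity ranking: broader scope types sort first.
_SPECIFICITY = {"workspace": 0, "project": 1, "release": 2,
                "track": 3, "stem": 4, "artwork": 4, "press": 4}

def canonicalize(rules):
    """Normalize a rule list: de-duplicate exact repeats, then sort by
    scope specificity, scope identifier, and action name."""
    seen, unique = set(), []
    for rule in rules:
        key = (rule["scope"], rule["action"], rule["effect"])
        if key not in seen:
            seen.add(key)
            unique.append(rule)
    return sorted(
        unique,
        key=lambda r: (_SPECIFICITY[r["scope"].split(":")[0]],
                       r["scope"], r["action"]),
    )
```

Canonical ordering makes two policies comparable byte-for-byte, which is what lets the backend de-duplicate and diff policy versions reliably.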
Hierarchical inheritance and explicit deny precedence
Given a parent scope rule project:* allow view And a child scope rule track:123 deny view When previewing effective permissions for track:123 Then view is denied because explicit deny overrides inherited allow Given a parent scope rule project:ABC allow upload And no child rule exists for its tracks When previewing effective permissions for track under project:ABC Then upload is allowed via inheritance Given parent scope allow publish and child scope allow publish with a stricter constraint When previewing effective permissions on the child asset Then the more specific child rule governs (most-specific-wins)
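The precedence rules above (inheritance flows down, explicit deny overrides an inherited allow, and the most specific rule governs) fit in a small resolver. The scope-path representation below is an illustrative assumption about how assets are addressed:

```python
def effective_decision(rules, asset_path, action):
    """Resolve allow/deny for an asset addressed by a scope path,
    e.g. ("project:ABC", "track:123").

    Each rule is {"path": tuple prefix, "action": str,
    "effect": "allow" | "deny"}. The longest matching prefix wins
    (most-specific-wins); at equal specificity an explicit deny
    overrides allow; no matching rule means deny by default.
    """
    matches = [
        r for r in rules
        if r["action"] == action
        and asset_path[:len(r["path"])] == r["path"]
    ]
    if not matches:
        return "deny"
    depth = max(len(r["path"]) for r in matches)
    closest = [r for r in matches if len(r["path"]) == depth]
    return "deny" if any(r["effect"] == "deny" for r in closest) else "allow"
```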
Time-bounded publish constraint enforcement
Given a rule release:789 allow publish with constraint window {start: 2025-09-10T00:00:00Z, end: 2025-09-20T00:00:00Z} When current time is before start Then publish is denied in preview and authorization decisions Given the same rule When current time is within [start, end) Then publish is allowed Given the same rule When current time is on or after end Then publish is denied Given the rule is evaluated in different client timezones When decisions are computed Then UTC timestamps are used consistently and decisions are identical
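The half-open `[start, end)` window evaluated in UTC can be sketched as below; the ISO-8601 parsing shim for the trailing `Z` is an implementation detail of this example:

```python
from datetime import datetime, timedelta, timezone

def publish_allowed(start_iso, end_iso, now):
    """Allow publish only within [start, end), evaluated in UTC so the
    decision is identical regardless of the client's local timezone."""
    start = datetime.fromisoformat(start_iso.replace("Z", "+00:00"))
    end = datetime.fromisoformat(end_iso.replace("Z", "+00:00"))
    now_utc = now.astimezone(timezone.utc)  # normalize before comparing
    return start <= now_utc < end
```

Because both the constraint and the clock are normalized to UTC, a request at 12:00 in UTC-5 and one at 17:00 in UTC produce the same decision.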
Conflict detection with inline guidance
Given a configuration that introduces both allow and deny for the same scope/action (e.g., track:123 allow view and track:123 deny view) When the configuration is present in the builder Then an inline conflict banner appears listing the conflicting rules and affected scope/actions count And each conflicting rule is highlighted with a badge and jump-to anchor And the Save action is disabled until the conflict is resolved or an explicit precedence choice is applied (default deny-wins) And the guidance panel displays recommended fixes (e.g., remove redundant allow, narrow scope)
Real-time effective-permissions preview
Given the user selects sample assets (one per scope) in the preview panel When the user adds, edits, or removes a rule in the builder Then the Effective Permissions matrix updates without page reload and reflects the change immediately And each cell shows Allow or Deny with a tooltip explaining the decision path (rule id, scope, inheritance, constraints) And clearing all rules results in Deny for all actions across all sample assets
Cross-surface authorization parity (web, API, shortlinks, AutoKit)
Given a saved Role Ring policy assigned to a test identity When requesting access to the same sample assets and actions via the web application UI, the REST API (bearer token), the shortlink access flow, and the AutoKit stem player Then the authorization decisions (Allow/Deny) are identical across all surfaces for each (scope, action) And discrepancies (if any) are logged with correlation IDs for investigation
Least-privilege defaults and templates
Given the builder is opened to create a new Role Ring When no template is applied Then the default state grants no actions for any scope (deny-by-default) Given the user applies the "Artist" template When previewing effective permissions Then only the template-defined minimal actions are allowed, and all others remain denied Given a user attempts to grant workspace-wide publish When the rule is added Then the builder displays a least-privilege warning and requires explicit confirmation before enabling Save
Policy Bundling Engine
"As an admin, I want to attach usage and security policies to a role template so that our standards are consistently enforced without manual setup."
Description

Enable attachment of default operational policies to a Role Ring, including QuotaGuard limits (bandwidth, download count, link creation), DeviceLock (max devices, device reset workflow), dynamic watermarks on previews/downloads, and Access Pledge acceptance (terms gating with version pinning). Policies must be enforceable at download/stream time for shortlinks and AutoKit pages, and at upload/replace for assets. Provide configurable presets, numeric thresholds, expirations, and exceptions, with policy evaluation integrated into request pipelines and surfaced in the UI with clear enforcement messages.

Acceptance Criteria
Preset Creation and Application to Role Rings
Given I have Org Admin permissions and open Template Composer When I create a Role Ring and configure policies (QuotaGuard, DeviceLock, Watermark, Access Pledge) with numeric thresholds, expirations, and exceptions Then I can save the configuration as a named preset and it appears in the Presets list within 2 seconds Given an existing or new Role Ring When I apply the saved preset Then all included policies and values are attached to the Role Ring and are visible in the Role Ring’s Policies summary And an audit log entry records who applied which preset and the resulting policy values And only Org Admins can create/update/delete presets; non-admins cannot see the Preset management UI
QuotaGuard Limits Enforced on Shortlinks and AutoKit
Given a Role Ring with QuotaGuard: bandwidth=5 GB/week, download_count=100/week, link_creation=10/day When content is streamed or downloaded via a shortlink or AutoKit page by a member of that Role Ring Then the appropriate counters decrement in real time and are visible in the UI within 5 seconds When any counter reaches its limit Then subsequent requests are blocked with HTTP 429 and error_code=QUOTA_EXCEEDED and a human-readable message naming the exact limit reached And the UI displays remaining quota and next reset time; counters reset on schedule without manual intervention And designated exceptions (whitelisted users/roles) bypass the limit and are logged as bypassed
DeviceLock Maximum Devices with Reset Workflow
Given DeviceLock is set to max_devices=3 for a Role Ring When a user associated with that Role Ring authenticates on devices A, B, and C Then access is allowed and devices are registered with fingerprint IDs When the same user attempts access from device D Then access is denied with HTTP 403 and error_code=DEVICE_LIMIT and the UI offers a device reset request When an owner/admin approves the reset in the UI Then the oldest device is revoked, device D is registered, and an audit log entry is created with approver, old device, new device, timestamp And device resets are rate-limited to 1 per 24 hours per user; private/incognito sessions do not create additional device slots
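The device-cap and reset workflow above can be modeled as a small registry; the fingerprint format, oldest-first revocation, and in-memory storage are illustrative assumptions:

```python
class DeviceLock:
    """Track registered device fingerprints per user, capped at
    max_devices; an approved reset revokes the oldest device."""

    def __init__(self, max_devices=3):
        self.max_devices = max_devices
        self.devices = {}  # user_id -> fingerprints, oldest first

    def authenticate(self, user_id, fingerprint):
        registered = self.devices.setdefault(user_id, [])
        if fingerprint in registered:
            return 200, None  # known device, no new slot consumed
        if len(registered) >= self.max_devices:
            return 403, "DEVICE_LIMIT"  # surfaces the reset-request UI
        registered.append(fingerprint)
        return 200, None

    def approve_reset(self, user_id, new_fingerprint):
        """Admin-approved reset: revoke the oldest device, register the
        new one, and return the revoked fingerprint for the audit log."""
        revoked = self.devices[user_id].pop(0)
        self.devices[user_id].append(new_fingerprint)
        return revoked
```

The rate limiting of resets (1 per 24 hours per user) and incognito-session handling would sit on top of this registry rather than inside it.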
Dynamic Watermarks on Previews and Downloads
Given a Role Ring has Watermark policy enabled with template including user_id, email hash, timestamp, and shortlink_id When an audio preview or download is initiated via shortlink or AutoKit Then the delivered media contains the configured dynamic watermark (audible overlay for previews; embedded steganographic/ID3/metadata or image overlay per asset type for downloads) and the operation adds no more than 300 ms of latency over the unwatermarked baseline And watermark is omitted for assets marked with a watermark exemption and for roles explicitly exempted And the watermark payload recorded in logs matches the user/session performing the action And attempts to request unwatermarked variants via query params are rejected with HTTP 403 and error_code=WATERMARK_ENFORCED
Access Pledge Gating with Version Pinning
Given Access Pledge terms v3 are attached to a Role Ring When a user without an acceptance record tries to view, stream, or download via shortlink or AutoKit Then access is blocked until the user accepts v3; upon acceptance, access is immediately granted and an immutable record is stored with user_id, terms_version=v3, ip, timestamp When terms update to v4 Then the same user is prompted again and access is blocked until v4 is accepted, unless an exception allows continued access on pinned v3 And API responses include error_code=TERMS_REQUIRED with a link to the acceptance flow when access is gated And admins can view/export acceptance records filtered by version
Upload/Replace Policy Enforcement for Assets
Given a Role Ring with policies: QuotaGuard upload_bandwidth=2 GB/day, allowed_actions=upload+replace, file_types=wav,flac,png,pdf, max_file_size=2 GB, watermark_on_upload=false When a member attempts to upload or replace an asset from the Projects or Tracks area Then the request passes policy evaluation in the upload pipeline before any write, and only permitted actions proceed And disallowed file types or sizes are rejected with HTTP 415/413 and specific error codes (UNSUPPORTED_TYPE, FILE_TOO_LARGE) And if daily upload bandwidth is exceeded, the request is rejected with HTTP 429 and error_code=UPLOAD_QUOTA_EXCEEDED and the UI displays remaining quota and reset time And if a policy requires pre-processing (e.g., checksum, virus scan), the operation blocks until completion and logs the results And successful uploads/replacements are logged with policy snapshot and counters updated within 5 seconds
Template Presets & Versioning
"As a producer, I want to reuse and version role templates so that onboarding remains fast and consistent across releases."
Description

Allow saving Role Rings with bundled policies as reusable presets (e.g., Artist, Mixer, PR, A&R). Support semantic versioning, cloning, diffing, changelogs, deprecation, and rollback. Ship a curated starter library and permit org‑level presets. When a preset updates, provide non‑breaking migrations and an opt‑in flow to upgrade existing assignments with a preview of changes and impact analysis. Ensure presets are discoverable via search and tagged by use case.

Acceptance Criteria
Save Preset with Bundled Policies
Given a valid Role Ring with selected scopes/actions and bundled policies (QuotaGuard, DeviceLock, Watermark, Access Pledge) When the user saves it as a preset with a unique name and at least one tag Then a new preset is created with version 1.0.0, a unique ID, created/updated timestamps, and is listed in the preset library And reopening the preset shows scopes/actions and all policy values exactly matching the saved snapshot And assigning the preset to a collaborator applies the defined permissions and policies within 5 seconds and records an audit log entry
Semantic Versioning on Publish
Given draft changes exist for an existing preset When the owner publishes and selects release type (major | minor | patch) Then the version increments according to SemVer 2.0.0 (MAJOR.MINOR.PATCH) and prior published versions remain immutable And publishing requires non-empty release notes; otherwise a validation error prevents publishing And attempts to edit a published version are blocked; only new drafts can be created And version identifiers are unique per preset
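The SemVer 2.0.0 increment rules referenced above are mechanical; a minimal sketch of the bump applied at publish time:

```python
def bump_version(version, release_type):
    """Increment MAJOR.MINOR.PATCH per SemVer 2.0.0: a major bump
    resets minor and patch, a minor bump resets patch."""
    major, minor, patch = (int(p) for p in version.split("."))
    if release_type == "major":
        return f"{major + 1}.0.0"
    if release_type == "minor":
        return f"{major}.{minor + 1}.0"
    if release_type == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError("release_type must be major, minor, or patch")
```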
Clone Preset
Given a published preset exists When the user selects Clone Then a new preset is created as a draft with a new ID, name prefilled with “(Clone)”, and all scopes/actions/policies/tags copied And edits to the clone do not alter the source preset And the clone’s first publish starts at version 1.0.0 And the clone operation completes within 2 seconds
Diff and Changelog View
Given two versions (V1 and V2) of the same preset When the user opens Compare Then the UI displays added/removed/modified items across scopes, actions, policies, quotas, tags, and visibility with old vs new values And the generated changelog for V2 includes the release notes and auto-listed diffs And exporting the diff to JSON or CSV matches the on-screen results And the comparison view loads within 2 seconds for presets up to 200 rules/policies
Deprecation and Rollback Controls
Given a published version Vx of a preset When the owner marks Vx as Deprecated Then Vx is hidden from new-assignment pickers within 1 minute and remains active for existing assignments with a Deprecated badge in views Given an assignment currently using Vy When the owner triggers Rollback to Vx Then the assignment’s effective permissions revert to Vx within 5 seconds and an audit log records who, when, and what changed And deprecated versions cannot be set as default for new projects
Opt-in Upgrade Flow with Impact Analysis and Non-breaking Migrations
Given a preset has a newer version Vn than the version assigned to N collaborators When the owner initiates Upgrade Then a preview lists all impacted assignments, per-assignment diffs, and risk flags for potential breaking changes (e.g., scope removals, tighter quotas) And the system proposes a non-breaking migration plan such that each upgraded assignment’s effective permissions are preserved or expanded; any assignment that would lose permissions is flagged and excluded by default And the owner can opt in per assignment; upon confirm, upgrades apply atomically per assignment with success/failure status and audit logs And post-upgrade, effective permissions match the preview; no assignment loses access without explicit override confirmation And the preview loads within 5 seconds for up to 500 assignments
Discoverability with Search, Tags, Starter Library, and Org-Level Scope
Given the preset library When the user searches by name, tag, use case, or policy attribute Then results filter accordingly with facets for use case tags (Artist, Mixer, PR, A&R) and visibility (Org, Curated) and respond within 1 second for up to 1,000 presets And curated starter presets are visible read-only and can be imported into the org as new presets retaining tags and descriptions And org-level presets are visible only to members with appropriate permissions (Manage Presets for edit, Assign Presets for use) And creating or editing a preset requires at least one tag; tags are validated (allowlist or org-created) and are used in search ranking (exact tag matches rank first)
Bulk Apply & Onboarding Assignments
"As a project manager, I want to apply a template to multiple collaborators and assets at once so that onboarding takes seconds and stays consistent."
Description

Provide flows to apply templates to users, groups, and invitation links across multiple projects/releases in one action. Include a preview of effective permissions and policies before finalizing, optional time windows (start/end dates), and one‑click revocation. Support bulk change propagation, drift detection when manual overrides diverge from the template, and guided remediation to re‑align or intentionally fork access. Integrate with invite emails and SSO/JIT provisioning to auto‑apply templates on first login.

Acceptance Criteria
Bulk Apply Template to Multiple Projects and Principals
Given an admin with Manage Templates permission selects Template T, Projects [P1..Pn], and targets [users, groups, invite links] When they click Apply Then the system creates or updates assignments for all selected targets across all selected projects without creating duplicates Given any selected target already has Template T on a project When the bulk apply runs Then the assignment is left unchanged and counted as "skipped (already applied)" Given the bulk apply operation completes When the confirmation modal appears Then it shows counts for total targets, successes, skips, and failures with per-item error messages downloadable as CSV Given the operation executes When propagation is triggered Then effective permissions are applied to all affected resources within 60 seconds and the API returns a job ID to track progress Given the operation executes When auditing is reviewed Then a parent audit record summarizes the bulk action with child records per target and project
Effective Permissions & Policy Preview Before Apply
Given an admin selects Template T and targets (projects and principals) When they open Preview Then the system displays per-target effective actions and policies by scope (project, track, stem, artwork, press) with inheritance sources and conflicts indicated Given Preview is open When the admin switches to Diff view Then the system shows current vs after-apply differences per scope, including additions and removals Given Preview is open and destructive changes would remove existing permissions When the admin proceeds Then the UI requires an explicit confirmation checkbox before enabling Apply Given up to 50 targets are selected When Preview loads Then it completes within 3 seconds and indicates paging for larger sets Given a preview was generated When Apply is executed after more than 10 minutes Then the system validates for intervening changes and requires the user to refresh the preview if any material changes are detected
Time-Bound Access Windows on Assignments
Given an admin sets a start and/or end datetime on a template assignment When the assignment is saved Then access becomes effective at the start time and automatically revokes at the end time across all scopes Given the start time is in the future When the user attempts to access resources before the start Then access is denied with a message indicating the scheduled start, and no downloads are allowed Given the end time has passed When the user attempts to access resources Then access is denied within 5 minutes of the end time and any active sessions or expiring links are invalidated Given an admin sets or edits the time window When saving Then the UI displays the project time zone and the UTC equivalent; the API stores ISO 8601 UTC values Given a time window is updated When saved Then scheduled grant/revoke jobs are updated accordingly and an audit record is created
One-Click Revocation with Immediate Propagation
Given an admin selects one or more template assignments When they click Revoke Access Then the system removes the template assignments across all selected projects and propagates changes within 60 seconds Given revocation occurs When invitation links related to those assignments exist Then the links are immediately invalidated and cannot be used to join Given revocation completes When the API is queried for effective permissions Then the removed permissions no longer appear and an audit entry records actor, time, and reason Given revocation encounters partial failures When the confirmation is shown Then failed items are listed with error reasons and a Retry action is available
Drift Detection on Manual Overrides
Given a template assignment exists When a manual change alters scopes, actions, or bundled policies away from the template Then the system flags the assignment as Drifted within 5 minutes and surfaces a badge in the assignments list and preview Given drift is detected When an admin opens the drift details Then a diff view shows template vs current values with specific overrides highlighted Given drift is cleared by re-applying the template or by forking access When the next detection run occurs Then the drift flag is no longer shown for that assignment Given no changes occurred since last scan When the nightly drift scan runs Then no new drift flags are created
Guided Remediation to Re-align or Fork
Given a drifted assignment is selected When the admin chooses Re-align to Template Then the system shows an impact preview and on confirm restores all settings to match the template Given a drifted assignment is selected When the admin chooses Fork Access Then the system creates a detached policy copied from the current state, labels it "Forked from Template T on <date>", and removes the template linkage Given remediation completes When the audit log is reviewed Then it includes action type (re-align or fork), actor, targets, and pre/post diffs Given remediation would reduce existing permissions When the admin attempts to proceed Then an explicit acknowledgment is required before applying changes
Auto-Apply via Invite and SSO/JIT Provisioning
Given an admin creates an invite link bound to Template T and Projects [P1..Pn] When a new user completes sign-up via the invite Then Template T is applied to that user across those projects before the first dashboard load Given SSO/JIT is enabled with mapping rules that map an IdP group attribute to Template T When a new SSO user logs in for the first time Then Template T is auto-applied across mapped projects within the authentication transaction Given auto-apply fails for any project When the user's first session starts Then the user receives minimal access, the admin is notified of failures with reasons, and a retry is queued Given a returning user logs in and already has Template T via mapping When auto-apply rules evaluate Then no duplicate assignments are created and unchanged items are skipped Given mapping rules are updated to remove a template from a user When the user next logs in Then the previously mapped assignments are revoked within 60 seconds and recorded in audit
Permission Simulator & Least‑Privilege Analyzer
"As a security‑conscious admin, I want to simulate a template’s effective permissions so that I can verify least‑privilege before inviting collaborators."
Description

Offer a simulator to test a template against real or sample assets and user contexts. Visualize accessible resources and actions, highlight over‑privileged scopes, and recommend reductions based on historic usage patterns. Generate sharable simulation reports for review/approvals and store results for audit. Integrate with the builder to allow one‑click fixes from recommendations.

Acceptance Criteria
Run Simulation Against User or Sample Context
- Given a selected template and target context (real user or sample persona) with a defined asset set, When the user runs the simulation, Then the system computes effective permissions across scopes (projects, tracks, stems, artwork, press) and actions (view, comment, upload, replace, publish) and returns results within 5 seconds for ≤5,000 assets.
- Given deterministic fixture data, When the simulation runs, Then the counts of accessible resources per scope/action match the expected fixture counts exactly.
- Given a context with no access, When the simulation runs, Then the result indicates zero accessible resources and disables export and apply actions.
Visualize Accessible Resources and Actions
- Given completed simulation results, When the visualization loads, Then it displays totals per scope, a filterable list by scope and action, and a per-resource action matrix consistent with the results.
- Given the user applies filters (e.g., scope=stems, action=replace), When the view updates, Then only matching resources are shown and the totals update accordingly.
- Given a resource is selected, When details are opened, Then the effective permission path (template rule → scope → action) is displayed.
Detect Over-Privileged Scopes and Actions
- Given a historic usage window of 60 days, When the analyzer runs, Then any granted action never used by the target context or cohort within the window is flagged as over-privileged with severity "Moderate".
- Given a granted scope that exceeds the minimal required resource set to perform used actions (e.g., project-wide vs track-level), When analyzed, Then it is flagged as over-privileged with severity "High".
- Given fixtures where all granted actions were used at least once in the window, When analyzed, Then no over-privileged flags are produced.
Recommend Least-Privilege Reductions From History
- Given over-privileged findings, When recommendations are generated, Then the system proposes removal of unused actions and/or narrowing scopes (project → track, track → stems) with rationale referencing historic usage.
- Given the user previews impact, When the preview runs, Then a before/after diff of accessible counts is shown and used actions within the historic window remain allowed (no regressions).
- Given conflicts with mandatory policies (e.g., Access Pledge requirements), When recommendations are computed, Then conflicting recommendations are omitted with an explanation.
One-Click Apply Fixes in Builder
- Given accepted recommendations, When the user clicks Apply in Builder, Then a new template version is created with the recommended changes and tagged with the simulation ID.
- Given the apply operation, When executed, Then changes are atomic, auditable, and rollback to the prior version is available.
- Given insufficient user privileges, When Apply is attempted, Then the action is blocked with an authorization error and no changes occur.
Generate Sharable Simulation Report for Review
- Given simulation results, When a report is generated, Then it includes template name and version, context details, timestamp, asset counts by scope/action, over-privileged findings, recommendations, and the before/after preview.
- Given a report is shared, When a shortlink is created, Then the link is trackable, revocable, and expires by default in 7 days (configurable), and access follows workspace share policy (authentication or token).
- Given export is requested, When exporting, Then PDF and CSV (findings and counts) are generated with identical content and a tamper-evident footer (hash and timestamp).
Store Simulation Results for Audit and Approvals
- Given a simulation is completed, When stored, Then an immutable record with unique ID, hash, inputs (template version, context, asset set), results, findings, recommendations, and the generated report is persisted and retained for at least 24 months.
- Given audit search, When querying by template, context user, date range, or approver, Then matching simulation records are returned within 2 seconds for up to 10,000 records.
- Given an approval workflow, When an approver approves or rejects, Then the decision, rationale, and timestamp are added to the record and reflected in the report, and Apply is enabled only on approved reports per workspace policy.
API & Webhook Integration
"As a developer, I want API endpoints and webhooks for templates so that I can automate onboarding and compliance from our internal tools."
Description

Expose REST endpoints and SDK helpers to CRUD templates, list presets, assign templates to identities or invites, and query effective permissions. Emit webhooks for template created/updated/deleted, assignment applied/revoked, policy violations, and drift events. Ensure idempotency, pagination, and fine‑grained API auth aligned to Role Rings. Document sample automations (e.g., auto‑assign PR template on press link creation) and provide sandbox keys for testing.

Acceptance Criteria
Create Template via REST/SDK with Idempotency
- Given a valid template payload defining Role Rings (scopes/actions) and default policies (QuotaGuard, DeviceLock, watermark, Access Pledge) and an Idempotency-Key header, When POST /v1/templates is called, Then the response is 201 with template_id and version >= 1, and the stored template matches the payload (excluding server-managed fields) and is retrievable via GET /v1/templates/{template_id}.
- Given the same POST is retried within 24 hours with the same Idempotency-Key, When the request is processed, Then the response body is identical to the original creation and exactly one template exists in the system.
- Given a payload missing required fields or containing invalid scopes/actions, When POST /v1/templates is called, Then the response is 422 with machine-readable error codes and field paths.
- Given the SDK helper Templates.create(payload, { idempotencyKey }) is used, When invoked with the same inputs as REST, Then it returns a normalized object identical in shape and values to the REST response.
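The idempotency contract above can be sketched server-side with a simple response cache keyed by the Idempotency-Key. This is a minimal in-memory illustration, not TrackCrate's actual implementation; the endpoint shape, response fields, and store are assumptions:

```python
import uuid

class TemplateStore:
    """Hypothetical stand-in for POST /v1/templates: a retry reusing the
    same Idempotency-Key must return the original response and must not
    create a second template."""

    def __init__(self):
        self.templates = {}      # template_id -> payload
        self._idempotency = {}   # idempotency_key -> cached response

    def create_template(self, payload, idempotency_key):
        # Replay: return the cached response unchanged, create nothing.
        if idempotency_key in self._idempotency:
            return self._idempotency[idempotency_key]
        template_id = str(uuid.uuid4())
        self.templates[template_id] = dict(payload)
        response = {"status": 201, "template_id": template_id, "version": 1}
        self._idempotency[idempotency_key] = response
        return response

store = TemplateStore()
first = store.create_template({"name": "Press"}, idempotency_key="key-1")
retry = store.create_template({"name": "Press"}, idempotency_key="key-1")
assert retry == first and len(store.templates) == 1
```

A production version would persist the key with a 24-hour TTL and also verify that the retried payload matches the original.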
Template Update/Delete Emits Webhooks
- Given a subscribed webhook endpoint with an HMAC secret is configured, When PATCH /v1/templates/{id} updates name, rings, or policies, Then a template.updated webhook is delivered within 5 seconds including event_id, template_id, a changed_fields diff, and a valid HMAC signature header; delivery retries with exponential backoff for up to 24 hours on non-2xx responses.
- Given a template exists, When DELETE /v1/templates/{id} is called, Then the response is 204, the template is no longer retrievable (404), a template.deleted webhook is delivered, and any active assignments are revoked with assignment.revoked webhooks emitted.
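The HMAC signature check a webhook consumer would run can be sketched with the standard library. The header name and hex encoding are assumptions (the criterion only requires "a valid HMAC signature header"); the constant-time comparison is the important part:

```python
import hashlib
import hmac

def sign_webhook(secret: bytes, body: bytes) -> str:
    """Sender side: hex HMAC-SHA256 over the raw request body."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_webhook(secret: bytes, body: bytes, signature: str) -> bool:
    """Receiver side: recompute and compare in constant time to avoid
    leaking signature prefixes through timing."""
    expected = sign_webhook(secret, body)
    return hmac.compare_digest(expected, signature)

secret = b"whsec_demo"
body = b'{"event":"template.updated","template_id":"tpl_1"}'
sig = sign_webhook(secret, body)
assert verify_webhook(secret, body, sig)
assert not verify_webhook(secret, b'{"event":"tampered"}', sig)
```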
List Presets with Pagination and Filtering
- Given more than 100 presets exist, When GET /v1/templates?kind=preset&limit=50 is called, Then exactly 50 items are returned, items are consistently ordered by created_at desc, and a next_cursor is present.
- Given a next_cursor from the previous page, When GET /v1/templates?kind=preset&cursor={next_cursor}&limit=50 is called repeatedly until the cursor is null, Then all presets are returned exactly once with no duplicates or omissions.
- Given limit is outside allowed bounds, When limit < 1 or > 100, Then the response is 400 with an explicit validation error.
- Given a caller lacks list:templates permission, When GET /v1/templates is called, Then the response is 403 with error code insufficient_scope.
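The "drain until cursor is null" loop above can be sketched as follows, with a fake backend standing in for GET /v1/templates (the page shape with items/next_cursor mirrors the criteria; everything else is illustrative):

```python
def fetch_all(list_page, limit=50):
    """Drain a cursor-paginated listing, checking for duplicates as we go."""
    items, cursor, seen = [], None, set()
    while True:
        page = list_page(cursor=cursor, limit=limit)
        for item in page["items"]:
            assert item["id"] not in seen, "duplicate across pages"
            seen.add(item["id"])
            items.append(item)
        cursor = page["next_cursor"]
        if cursor is None:
            return items

# Fake backend: 120 presets, already ordered by created_at desc.
PRESETS = [{"id": f"preset_{i}"} for i in range(120)]

def list_page(cursor, limit):
    start = int(cursor) if cursor else 0
    chunk = PRESETS[start:start + limit]
    nxt = str(start + limit) if start + limit < len(PRESETS) else None
    return {"items": chunk, "next_cursor": nxt}

all_items = fetch_all(list_page)
assert len(all_items) == 120
```

A real server would use an opaque cursor (e.g., an encoded created_at/id pair) rather than a plain offset, so inserts between pages cannot cause skips.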
Assign Template to Identity/Invite with Role-Aligned Auth
- Given an actor has manage:templates:assign for scope project:X and a valid template_id, When POST /v1/assignments with { template_id, subject_id: <user_id>, scope: project:X } is called, Then the response is 201 with assignment_id, subject_id, template_id, scope, and a computed effective-permissions reference, and an assignment.applied webhook is delivered within 5 seconds.
- Given an actor lacks sufficient permissions, When POST /v1/assignments is called, Then the response is 403 with error code insufficient_scope and no webhook is emitted.
- Given an invite is pending for invite_email, When POST /v1/assignments with { template_id, invite_email, scope } is called, Then the response is 201 and the assignment.applied webhook includes subject_type=invite and invite_email.
- Given the same assignment request is retried with an identical Idempotency-Key within 24 hours, When POST /v1/assignments is called, Then the response is idempotent and no duplicate assignments are created.
Query Effective Permissions Returns Accurate Scopes
- Given a subject has two templates assigned across scopes project:X and track:Y, When GET /v1/permissions/effective?subject_id={id}&scope=project:X is called, Then the response enumerates allowed actions per resource type within scope project:X, includes source template_ids for each permission, and applies deny-overrides-allow semantics where applicable.
- Given a pending invite subject, When querying effective permissions for the invite, Then the response includes permissions flagged as pending=true.
- Given a subject without any assignments, When GET /v1/permissions/effective is called, Then the response is an empty permission set with 200 OK.
- Given typical load conditions and up to 10 assigned templates, When querying effective permissions, Then p95 latency is <= 300 ms.
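Deny-overrides-allow merging across multiple assigned templates can be sketched like this; the assignment and grant shapes are assumptions, but the rule is the one the criterion names: any deny beats any number of allows, and each permission keeps its source template_ids:

```python
def effective_permissions(assignments):
    """Merge per-template grants with deny-overrides-allow semantics.

    assignments: list of (template_id, {action: "allow" | "deny"}).
    Returns {action: {"effect": ..., "sources": [template_id, ...]}}.
    """
    merged = {}
    for template_id, grants in assignments:
        for action, effect in grants.items():
            entry = merged.setdefault(action, {"effect": "allow", "sources": []})
            entry["sources"].append(template_id)
            if effect == "deny":
                entry["effect"] = "deny"   # a single deny wins over any allow
    return merged

perms = effective_permissions([
    ("tpl_press", {"stream": "allow", "download_original": "allow"}),
    ("tpl_lockdown", {"download_original": "deny"}),
])
assert perms["stream"]["effect"] == "allow"
assert perms["download_original"]["effect"] == "deny"
```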
Policy Violation and Drift Webhooks
- Given a template with QuotaGuard limiting uploads to 2 GB/day and a user with 100 MB remaining, When the user attempts to upload a 200 MB file, Then the API returns 429 with error code quota_exceeded and a policy.violation webhook is emitted including subject_id, template_id, policy=QuotaGuard, limit, used, and attempted.
- Given DeviceLock is enabled for a subject, When a request originates from an unregistered device, Then the API returns 401 with error code device_not_authorized and a policy.violation webhook is emitted including the device fingerprint.
- Given a subject's direct permissions are modified outside the assigned template baseline, When divergence is detected, Then a drift.detected webhook is emitted including subject_id, affected_scopes, and a minimal diff of added/removed permissions; webhook deliveries are signed and retried on failure.
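The QuotaGuard check in the first scenario reduces to simple arithmetic; this sketch returns the status code and the limit/used/attempted payload the policy.violation webhook is required to carry (the result shape is an assumption):

```python
def check_quota(limit_bytes, used_bytes, attempted_bytes):
    """Reject an upload that would push the day's usage past the limit."""
    if used_bytes + attempted_bytes > limit_bytes:
        return {
            "status": 429,
            "error": "quota_exceeded",
            "limit": limit_bytes,
            "used": used_bytes,
            "attempted": attempted_bytes,
        }
    return {"status": 200}

GB, MB = 1024 ** 3, 1024 ** 2
# 2 GB/day limit, 100 MB remaining, 200 MB attempted -> rejected.
result = check_quota(limit_bytes=2 * GB,
                     used_bytes=2 * GB - 100 * MB,
                     attempted_bytes=200 * MB)
assert result["status"] == 429 and result["error"] == "quota_exceeded"
assert check_quota(2 * GB, 0, 500 * MB)["status"] == 200
```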
Docs, Sample Automations, and Sandbox Keys Available
- Given a new developer signs up, When requesting sandbox API keys, Then keys are issued immediately with the prefix sk_sandbox_, with the rate limit and webhook signing secret visible in the dashboard.
- Given the Quickstart and API Reference, When following the "Auto-assign PR template on press link creation" guide using the SDK, Then creating a press link in sandbox triggers automatic assignment of the PR preset and an assignment.applied webhook is received; GET /v1/assignments confirms the assignment.
- Given the published Postman collection and SDK examples (Node, Python), When running the smoke tests against the sandbox, Then CRUD templates, list presets with pagination, assignments, effective permissions, and webhook verification all return expected 2xx responses with documented shapes, and idempotency/pagination/auth behaviors match the documentation.

Access Preview

Simulate exactly what a recipient in a given Role Ring will see and be able to do—before you send. Get clear warnings for assets or actions outside the ring and share a secure “view as role” link for internal QA. Prevent oversharing and catch misconfigurations early without test accounts.

Requirements

Role Ring Simulator Engine
"As a label manager, I want to preview a release exactly as a Press role would see it so that I can confirm only the intended assets and actions are exposed before sharing."
Description

Implement a deterministic simulation engine that renders the product exactly as a selected Role Ring would experience it across TrackCrate, including releases, folders, individual assets, shortlinks, and AutoKit press pages. Resolve effective permissions from combined ACLs (workspace, release, folder, asset, and link-level overrides) to determine visibility, actions (stream, download original, download watermarked, comment), expirations, geo/IP rules, and watermark state. Provide a toggle between “Owner” and “View as <Role>” with no state mutation in preview mode. Integrate with the entitlement service and caching layer to compute permission graphs and return preview data within 1.5s for up to 500 assets via batched queries. Handle edge cases like unassigned assets, inherited overrides, and link parameterization. Ensure accessibility, localization parity, and consistent UI chrome indicating simulation mode.
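The ACL resolution the engine performs follows a most-specific-wins order across the levels named above (workspace, release, folder, asset, link). A minimal sketch, assuming boolean per-action grants and that a more specific level overrides a broader one:

```python
# Broadest to most specific; later levels override earlier ones.
PRECEDENCE = ["workspace", "release", "folder", "asset", "link"]

def resolve(acls):
    """acls: {level: {action: bool}}. Returns the effective action map
    for one item after applying level precedence."""
    effective = {}
    for level in PRECEDENCE:
        effective.update(acls.get(level, {}))
    return effective

acls = {
    "workspace": {"stream": True, "download_original": False, "comment": True},
    "release":   {"download_watermarked": True},
    "asset":     {"comment": False},                # asset override beats workspace
    "link":      {"download_watermarked": False},   # link override beats release
}
eff = resolve(acls)
assert eff == {"stream": True, "download_original": False,
               "comment": False, "download_watermarked": False}
```

The real entitlement service would add deny-overrides semantics, expirations, and geo/IP rules on top, but the layering order is the core of "inherited overrides".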

Acceptance Criteria
Permission Graph Resolution Across Surfaces
- Given a workspace with mixed ACLs at workspace, release, folder, asset, and link levels with overrides and inheritance, and a Role Ring selection applied (e.g., Publicist), When the simulator renders releases, folders, individual assets, shortlinks, and AutoKit press pages, Then:
  - the visible items exactly match the effective entitlements from the entitlement service
  - permitted actions per item (stream, download original, download watermarked, comment) match computed entitlements
  - per-item watermark state, expiration timestamps, and geo/IP restrictions are reflected in the preview payload
  - the output is deterministic for identical inputs (stable snapshot identifier for the same inputs)
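A stable snapshot identifier for "identical inputs" is typically a hash over a canonicalized form of those inputs. A sketch, assuming the inputs are JSON-serializable (the field names are illustrative):

```python
import hashlib
import json

def snapshot_id(role_ring, acl_state, assets):
    """Deterministic identifier for a simulation input set: normalize key
    and list ordering, serialize canonically, then hash."""
    canonical = json.dumps(
        {"role": role_ring, "acls": acl_state, "assets": sorted(assets)},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode()).hexdigest()

# Same inputs in a different order hash identically...
a = snapshot_id("press", {"zeta": 1, "alpha": 2}, ["a2", "a1"])
b = snapshot_id("press", {"alpha": 2, "zeta": 1}, ["a1", "a2"])
assert a == b
# ...while any change to an input changes the identifier.
assert a != snapshot_id("public", {"alpha": 2, "zeta": 1}, ["a1", "a2"])
```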
Performance at Scale (500 Assets)
- Given a release containing at least 500 assets across 10+ folders with varied ACLs and link-level overrides, When the simulator computes permissions and renders the preview, Then:
  - P95 end-to-end response time is <= 1500 ms and P50 is <= 600 ms
  - no more than 4 batched calls are made to the entitlement service and 4 to the metadata cache per request
  - peak memory usage per request is <= 256 MB, with zero timeouts or retries needed to complete
No State Mutation in Preview Mode
- Given an Owner toggles to “View as <Role>” and navigates all supported surfaces, When in preview mode, Then:
  - zero writes occur to databases and object storage (0 write ops, 0 bytes written)
  - no share links, audit events, analytics counters, or watermark artifacts are created
  - exiting preview restores the exact prior UI state, filters, and navigation context
Overshare Warnings and Outside-Ring Alerts
- Given assets or actions exist that the selected Role Ring cannot access, When the simulator renders any surface, Then:
  - a consolidated warning banner displays counts by restricted action type (stream, original download, watermarked download, comment)
  - each affected item shows an inline badge and tooltip citing the restriction source (level and rule)
  - warnings update in real time as ACLs change, without a page reload
Secure Shareable “View as Role” Link
- Given an Owner generates a shareable preview link scoped to a Role Ring, When an authenticated teammate within the workspace opens the link, Then:
  - the session is restricted to that Role Ring with identical permissions and simulation UI chrome
  - the link token is opaque, single-scope, revocable, and expires within a configurable TTL (default 24h)
  - access attempts outside the workspace, after expiry, or by unauthorized users return 403 with no asset metadata leakage
Geo/IP and Expiration Rule Simulation
- Given a test location/IP and current timestamp can be set in the simulator controls, When simulating as a Role Ring, Then:
  - geo/IP-restricted items are hidden or action-disabled consistent with entitlements for that location
  - expired items display disabled actions with an explicit expiry label and timestamp
  - if no test location is set, the system defaults to the requestor’s resolved IP/country
Accessibility and Localization Parity in Simulation Mode
- Given the simulator is active on any surface, When navigating via keyboard and screen reader, Then:
  - the simulation banner, toggle, and exit control meet WCAG 2.2 AA for focus order, aria-labels, roles, and contrast (>= 4.5:1)
  - all simulator UI strings are localized to the current workspace language with Owner-mode-equivalent fallbacks
  - the simulation chrome appears consistently on releases, folders, assets, shortlinks, and AutoKit pages
Overshare/Undershare Warnings & Fix-It CTAs
"As a product assistant, I want clear warnings when a role has more access than policy allows so that I can correct misconfigurations before sending."
Description

During simulation, compute diffs between the selected Role Ring’s effective access and the workspace’s policy templates to detect oversharing (e.g., downloads enabled where only streaming is allowed) or undersharing (e.g., required assets hidden). Display severity-tagged warnings per asset/action with a consolidated summary, and provide inline “Fix” CTAs that deep-link to permission or link settings. Support configurable policy templates per Role Ring (e.g., Press: watermarked downloads, 14-day expiry; Public: stream-only) and batch-apply corrections after explicit confirmation. Maintain read-only simulation until changes are confirmed. Persist resolution status for audit and show warnings count in the preview header.
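The overshare/undershare diff described above compares effective access against the Role Ring's policy template. A minimal sketch, with illustrative severity rules taken from the examples (download where stream-only is allowed is treated as Critical):

```python
def policy_diff(template, effective, required_assets, visible_assets):
    """Diff effective access against a Role Ring template.

    template / effective: {action: bool}. Returns severity-tagged warnings
    for extra actions (overshare) and hidden required assets (undershare).
    """
    warnings = []
    for action, allowed in effective.items():
        if allowed and not template.get(action, False):
            severity = "Critical" if action == "download_original" else "High"
            warnings.append({"type": "overshare", "action": action,
                             "severity": severity})
    for asset in required_assets:
        if asset not in visible_assets:
            warnings.append({"type": "undershare", "asset": asset,
                             "severity": "Medium"})
    return warnings

w = policy_diff(
    template={"stream": True},                           # Public: stream-only
    effective={"stream": True, "download_original": True},
    required_assets={"cover.png"},
    visible_assets=set(),
)
assert {"type": "overshare", "action": "download_original",
        "severity": "Critical"} in w
assert any(x["type"] == "undershare" for x in w)
```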

Acceptance Criteria
Overshare Detection Against Role Template
- Given a workspace Role Ring template defines stream-only for audio assets, and a simulated recipient in that Role Ring has effective access allowing download on at least one audio asset, When the Access Preview computes the policy diff, Then:
  - an Overshare warning is created for each affected asset
  - the severity is set to Critical for download enabled where only streaming is allowed
  - the warning includes the policy reference and the specific permission causing the overshare
  - the consolidated summary increments the Overshare and Critical counts accordingly
Severity and Summary Display
- Given multiple warnings exist across assets and actions, When warnings are displayed in Access Preview, Then:
  - each warning shows a severity tag (Critical, High, Medium, Low)
  - each warning is tied to a specific asset/action with a clear message
  - the preview header displays the total warnings count and the highest severity present
  - the summary groups counts by severity and by Overshare vs. Undershare
  - when there are zero warnings, the header shows 0 and a Compliant state
Inline Fix CTAs and Read-only Simulation
- Given a warning has an associated fix, When the user clicks the Fix CTA, Then:
  - the app deep-links to the relevant permissions or link settings with the asset and rule pre-selected
  - no changes are persisted until the user confirms on a confirmation step
  - the Access Preview remains read-only until confirmation
  - after closing settings, the user is returned to the simulation with the originating warning focused
Batch Apply Corrections and Confirmation
- Given one or more fixable warnings are present, When the user selects Batch Apply from the summary, Then:
  - a confirmation modal lists each proposed change with asset, current value, and target value
  - the user can select or deselect individual changes before confirming
  - upon confirm, only the selected changes are applied
  - an audit entry records user, timestamp, items changed, and counts resolved
  - the simulation refreshes to reflect the new effective access, updating or removing warnings accordingly
Template Configuration and Enforcement by Role Ring
- Given distinct Role Ring templates exist (e.g., Press: watermarked downloads with 14-day expiry; Public: stream-only), and the user simulates a recipient in one Role Ring, When the policy diff runs, Then:
  - the selected Role Ring template is used as the baseline for evaluation
  - expiry longer than 14 days is flagged as Overshare, a missing watermark as Overshare, and hidden required assets as Undershare
  - link-level overrides and asset-level exceptions are included in the effective access calculation
  - expiry calculations use the workspace timezone for consistency
Resolution Persistence and Header Count
- Given warnings are resolved via inline or batch fixes, When the system updates resolution state, Then:
  - each resolved warning is marked Resolved with resolver, timestamp, and method
  - the preview header warnings count reflects only unresolved warnings
  - the audit log exposes the resolution history for the simulation session
  - rerunning the diff does not recreate warnings for items whose effective access matches the template
Secure “View as Role” QA Link
"As a project lead, I want to share a “view as Press” link with my internal team so that they can QA the experience before we send it externally."
Description

Enable generation of secure, expiring internal QA links that reproduce an Access Preview state without creating external test accounts. Links are organization-bound, require authenticated workspace membership, respect least-privilege, and never expand permissions beyond the simulated role. Support single-use and multi-use modes, TTL configuration (default 72 hours), optional password protection, revocation, and immediate invalidation on permission changes. Deep-link to specific release/folder/asset/AutoKit contexts with all query parameters preserved. Enforce that private originals remain non-downloadable unless the role allows it. Track opens and revocations in audit logs and exclude these events from external recipient analytics.
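The validation order for a QA link (revocation, TTL, organization binding, password, single-use consumption) can be sketched as a pure function. The field names and the string outcomes are assumptions; the 72-hour default TTL comes from the description:

```python
from datetime import datetime, timedelta, timezone

DEFAULT_TTL = timedelta(hours=72)

def check_qa_link(link, viewer_org, now, password=None):
    """Evaluate a QA link against the rules above. Mutates the link only
    to mark a single-use link as consumed on successful access."""
    if link.get("revoked"):
        return "denied:revoked"
    if now >= link["created_at"] + link.get("ttl", DEFAULT_TTL):
        return "denied:expired"
    if viewer_org != link["org_id"]:
        return "denied:wrong_org"
    if link.get("password") is not None and password != link["password"]:
        return "denied:password"
    if link.get("single_use") and link.get("consumed"):
        return "denied:consumed"
    if link.get("single_use"):
        link["consumed"] = True
    return "ok"

t0 = datetime(2025, 1, 1, tzinfo=timezone.utc)
link = {"org_id": "org_1", "created_at": t0, "single_use": True}
assert check_qa_link(link, "org_1", t0 + timedelta(hours=1)) == "ok"
assert check_qa_link(link, "org_1", t0 + timedelta(hours=2)) == "denied:consumed"
assert check_qa_link(link, "org_2", t0) == "denied:wrong_org"
fresh = {"org_id": "org_1", "created_at": t0}        # default 72h TTL applies
assert check_qa_link(fresh, "org_1", t0 + timedelta(hours=73)) == "denied:expired"
```

Checking revocation and expiry before the org check ensures a dead link leaks nothing, not even whether the viewer's organization was correct.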

Acceptance Criteria
Org-Bound Authenticated Access
- Given a Secure “View as Role” QA link exists, When an unauthenticated user opens it, Then they are required to authenticate and no preview data is returned before sign-in.
- Given a Secure “View as Role” QA link exists, When an authenticated user from a different organization opens it, Then access is denied (403) and no asset names, thumbnails, or metadata are returned.
- Given a Secure “View as Role” QA link exists, When an authenticated member of the owning organization opens it, Then the Access Preview state loads successfully with no elevation of privileges.
Least-Privilege Role Simulation and Protected Originals
- Given a QA link that simulates Role R, When a viewer with higher native privileges opens it, Then all actions and visibility are constrained to Role R capabilities.
- Given assets contain private originals and Role R lacks original-download permission, When the viewer uses the link, Then original file downloads are not offered and no direct original URLs are generated.
- Given Role R permits streaming only, When the viewer plays media, Then only watermarked or proxy streams are served and original files remain inaccessible.
TTL Default and Expiration Enforcement
- Given a QA link is created without a TTL override, When 72 hours elapse from creation, Then the link expires and subsequent requests return an expiration response without content.
- Given a QA link is created with a TTL of X hours, When X hours elapse, Then the link is expired and cannot be used to access any content.
- Given an expired QA link is opened, When the page is refreshed, Then access remains denied and the link cannot be revived.
Link Configuration Modes and Security Controls
- Given a single-use QA link, When it is opened by the first authorized member, Then the link is marked consumed and all later attempts are denied without revealing content.
- Given a multi-use QA link, When multiple authorized members open it within its TTL, Then each can access the Access Preview state concurrently.
- Given a QA link configured with a password, When the password is not provided or is incorrect, Then access is denied and no preview state is returned.
- Given a QA link configured with a password, When the correct password is supplied by an authorized member, Then the Access Preview state loads successfully.
Revocation and Permission Change Invalidation
- Given an active QA link, When the link is manually revoked by an owner or admin, Then new requests are denied immediately and existing sessions lose access on their next interaction.
- Given an active QA link, When underlying role permissions or asset privacy are reduced, Then the link's effective permissions are reduced immediately and newly forbidden resources/actions become unavailable.
- Given an active QA link, When underlying role permissions are expanded, Then the QA link does not gain permissions beyond the originally simulated role.
Deep-Link Context and Query Preservation
- Given a QA link generated from a specific release, folder, asset, or AutoKit page with query parameters, When the link is opened, Then the user lands on the same route and UI state with all query parameters preserved exactly.
- Given the QA link targets a specific asset context, When opened, Then the targeted asset is focused/selected as in the original Access Preview.
- Given the QA link includes arbitrary query parameters, When opened, Then those parameters are present in the resolved URL and applied to restore the same view state.
Audit Logging and Analytics Exclusion
- Given a QA link is opened by an authorized member, When audit logs are reviewed, Then an open event is recorded with the link identifier, actor, timestamp, and simulated role.
- Given a QA link is revoked, When audit logs are reviewed, Then a revocation event is recorded with the link identifier, actor, and timestamp.
- Given QA link opens and revocations occur, When external recipient analytics are viewed for related releases or AutoKit, Then these internal QA events do not increment external views, clicks, or downloads.
AutoKit & Shortlink Context Preview
"As a publicist, I want to preview the AutoKit page as a Press role so that I can verify the stem player and download rules are correct before launch."
Description

Extend Access Preview to fully render AutoKit press pages and trackable shortlinks exactly as the selected Role Ring would encounter them. Simulate component-level behavior including private stem player availability, watermarked download buttons, rights metadata panels, and localized copy. Emulate mobile/desktop breakpoints and theme variants. Respect link-specific settings such as expiry, geo/IP restrictions, and UTM/campaign parameters, showing resultant redirects or gated experiences. Provide quick navigation between AutoKit and asset hub previews while preserving the selected role and context.

Acceptance Criteria
Role-Ring Accurate AutoKit Rendering
- Given I select a Role Ring in Access Preview for an AutoKit, When I open the AutoKit preview, Then only components permitted to the selected Role Ring are visible and actionable.
- Given the private stem player is permitted for the selected Role Ring, When I preview, Then the stem player renders with playable controls; Given it is not permitted, Then the stem player is hidden.
- Given watermarked downloads are enabled for the selected Role Ring, When I preview, Then download buttons display a watermark indicator, and non-permitted download buttons are hidden or disabled with an explanatory tooltip.
- Given rights metadata fields have per-role visibility, When I preview as the selected Role Ring, Then only allowed fields render in the rights panel.
- Given preview mode adds helper chrome, When I preview, Then no preview-only controls appear within the AutoKit content area and preview chrome is confined to the preview bar.
Shortlink Settings Respect and Outcomes
- Given a shortlink is expired, When I preview as any Role Ring, Then I see the same expired experience recipients would see (expiry page or message) and the destination content is not accessible.
- Given a shortlink has geo restrictions and my preview IP resolves to an allowed region, When I preview, Then the destination loads; Given my IP resolves to a blocked region, Then I see the geo-block experience.
- Given a shortlink has IP allow/deny rules and my IP is denied, When I preview, Then I see the IP-block experience; Given my IP is allowed, Then the destination loads.
- Given UTM/campaign parameters are appended to the shortlink, When I preview, Then parameters are preserved through redirects and any campaign-variant components render accordingly.
- Given a shortlink redirects to a gated destination (e.g., email capture), When I preview, Then the same gate is shown and the final redirect target is displayed only after satisfying the gate, mirroring recipient behavior.
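The outcome a recipient (and therefore the preview) sees for a shortlink follows from its settings in a fixed order: expiry first, then geo rules, then the redirect with UTM parameters preserved. A sketch with illustrative field names:

```python
from urllib.parse import urlencode

def resolve_shortlink(link, viewer_country, now):
    """Decide the recipient-visible outcome for a shortlink:
    expiry page, geo block, or redirect with UTM params appended."""
    if link.get("expires_at") is not None and now >= link["expires_at"]:
        return {"outcome": "expired"}
    allowed = link.get("allowed_countries")
    if allowed is not None and viewer_country not in allowed:
        return {"outcome": "geo_blocked"}
    url = link["destination"]
    if link.get("utm"):
        url += "?" + urlencode(link["utm"])   # UTM params survive the redirect
    return {"outcome": "redirect", "url": url}

link = {"destination": "https://example.com/press",
        "allowed_countries": {"US", "DE"},
        "utm": {"utm_source": "newsletter", "utm_campaign": "launch"},
        "expires_at": 100}
assert resolve_shortlink(link, "FR", now=50)["outcome"] == "geo_blocked"
assert resolve_shortlink(link, "US", now=200)["outcome"] == "expired"
ok = resolve_shortlink(link, "DE", now=50)
assert ok["url"].endswith("?utm_source=newsletter&utm_campaign=launch")
```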
Breakpoint and Theme Variant Emulation
- Given I select Mobile, Tablet, or Desktop in the preview controls, When I switch breakpoints, Then the AutoKit layout, navigation, and component visibility respond to the selected breakpoint.
- Given multiple theme variants exist (e.g., Light, Dark, Label Theme A/B), When I change the theme in preview, Then the AutoKit applies the selected theme consistently across all components.
- Given a breakpoint and theme are selected, When I reload the preview, Then the chosen breakpoint and theme are restored for the current session.
Localization and Copy Fallback Simulation
- Given I set the preview locale or Accept-Language to a supported language, When I preview, Then all localized copy and labels render in that language.
- Given a translation key is missing for the selected locale, When I preview, Then the copy falls back to the default language without breaking the layout.
- Given the selected locale uses RTL scripts, When I preview, Then the layout direction switches to RTL and typography aligns appropriately.
- Given locale-specific formats are defined, When I preview, Then dates, numbers, and currencies render using the locale's formats.
Out-of-Ring Asset/Action Warnings
- Given I preview as a specific Role Ring, When assets or actions referenced by AutoKit or shortlinks fall outside that ring, Then a warnings panel lists each item with type, location, and recommended remediation.
- Given no assets or actions are outside the selected Role Ring, When I preview, Then no warnings are shown.
- Given I adjust sharing/permissions to include the selected Role Ring, When I refresh the preview, Then the warnings panel updates in real time and resolved items disappear.
- Given an asset is suppressed in AutoKit due to role constraints, When I preview, Then the warning appears prior to share and links directly to the asset settings.
Shareable Secure "View as Role" Link (AutoKit & Shortlink)
- Given I am previewing as a Role Ring with a specific breakpoint, theme, locale, and UTM context, When I click "Copy QA preview link", Then a signed link is generated that reproduces the same role and context on open.
- Given the QA preview link has an expiry set (e.g., 24 hours) or is revoked, When it is accessed after expiry/revocation, Then access is denied and an expiry message is shown.
- Given the QA preview link is opened by any user, When accessed, Then no production analytics, conversion pixels, or shortlink counters are incremented.
- Given the QA preview link targets a shortlink with restrictions (expiry, geo/IP), When accessed, Then those restrictions are enforced exactly as for a recipient, showing the corresponding blocked, gated, or redirect experience.
Context Preservation Between AutoKit and Asset Hub Previews
- Given I am previewing as a Role Ring with a selected locale, breakpoint, theme, and UTM context, When I use Quick Nav to switch between the AutoKit and Asset Hub previews, Then the same role and context are preserved across views.
- Given I modify any context parameter (role, locale, breakpoint, theme, UTM) in one preview, When I switch to the other preview, Then the updated context is applied without reset.
- Given I navigate back and forth multiple times, When I return to a prior preview, Then the last-used context for that session remains consistent.
Multi-Entry Preview Triggers & Deep Links
"As a label engineer, I want to open Access Preview from wherever I’m working so that I can validate access quickly without losing my place."
Description

Enable Access Preview from multiple entry points—release page, folder view, asset detail, and link settings—preserving selection and scroll position when switching roles. Support deep-linking directly into a specific asset or AutoKit section with the role parameter encoded in the URL. Provide keyboard shortcut and API endpoint to invoke the preview programmatically. Ensure breadcrumbs and back navigation return users to their original context. Handle permission changes mid-session by prompting to refresh the simulation.
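Encoding the role in the URL while preserving other parameters is a small round-trip exercise. A sketch with the standard library; the /preview path and the role/target parameter names are assumptions consistent with the description:

```python
from urllib.parse import parse_qs, urlencode, urlsplit

def build_preview_link(base, role, target, extra=None):
    """Build a deep link carrying the role and target, preserving any
    extra query parameters passed through."""
    params = {"role": role, "target": target, **(extra or {})}
    return f"{base}/preview?{urlencode(params)}"

def parse_preview_link(url):
    """Recover the context from a deep link; a missing role is an error,
    mirroring the 'role parameter is required' criterion below."""
    query = parse_qs(urlsplit(url).query)
    if "role" not in query:
        raise ValueError("role parameter is required for deep preview links")
    return {k: v[0] for k, v in query.items()}

url = build_preview_link("https://app.example.com", "press",
                         "asset_42", {"utm_campaign": "qa"})
parsed = parse_preview_link(url)
assert parsed == {"role": "press", "target": "asset_42", "utm_campaign": "qa"}
```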

Acceptance Criteria
Launch Preview from Multiple Entry Points
- Given the user is on the Release page with assets selected and scrolled, When the user invokes Access Preview from the page action, Then the preview opens in view-as mode for the current Role Ring and preserves the current selection and scroll position.
- Given the user is in Folder view with items selected and scrolled, When Access Preview is launched from the toolbar or context menu, Then the same selection and scroll offset are reflected in the preview.
- Given the user is on an Asset detail page, When Access Preview is launched, Then the preview opens focused on that asset.
- Given the user is on Link Settings for a shortlink or AutoKit, When Access Preview is launched, Then the preview reflects the link’s current target configuration.
Preserve Selection and Scroll When Switching Roles
- Given Access Preview is open with N items selected and a specific scroll position, When the user switches the Role Ring, Then the selection persists for items visible to the new role, previously selected but now-hidden items are indicated as unavailable, and the vertical scroll position remains unchanged within the current list or grid.
- Given some selected items are not permitted for the new role, When switching roles, Then a warning summarizes the count of excluded items and identifies them without revealing restricted content.
Deep Link to Specific Asset or AutoKit Section with Role Parameter
- Given a deep link URL containing role={roleId|slug} and target={assetId|sectionId}, When the URL is opened, Then Access Preview opens in the specified role and navigates directly to the targeted asset or AutoKit section.
- Given the role parameter is invalid or unauthorized, When the URL is opened, Then the app displays a not-authorized state without exposing hidden metadata.
- Given the role parameter is missing, When the URL is opened, Then the app responds with a clear error message indicating the role parameter is required for deep preview links.
Keyboard Shortcut Invokes Access Preview
- Given the user is on Release, Folder, Asset detail, or Link Settings, When the user presses Cmd/Ctrl+Shift+P, Then Access Preview opens from the current context and preserves selection and scroll position.
- Given Access Preview is already open, When the user presses Cmd/Ctrl+Shift+P again, Then the shortcut has no adverse effect and does not open a duplicate session.
API Endpoint to Programmatically Invoke Preview
- Given an authenticated client with the preview:invoke scope, When it POSTs to /v1/preview with {contextType, contextId, roleId, targetId?}, Then the service responds 201 with {previewUrl, expiresAt} and the URL opens Access Preview in the specified role and target.
- Given the client provides an invalid roleId or a nonexistent context, When POST /v1/preview is called, Then the service responds 400 or 404 with a machine-readable error code and no URL.
- Given the client lacks the preview:invoke scope, When POST /v1/preview is called, Then the service responds 403 Forbidden.
Breadcrumbs and Back Navigation Restore Original Context
- Given the user entered Access Preview from a source page with a breadcrumb trail, selection, and scroll position, When the user exits via Back or a breadcrumb, Then the app returns to the originating page with the prior selection and scroll restored and no loss of unsaved UI state.
- Given the user opened Access Preview via a deep link, When the user presses Back, Then navigation returns to the prior browser history entry without revealing internal-only routes.
Permission Changes Mid-Session Prompt Refresh
- Given Access Preview is open and underlying permissions change for the current role or assets, When the system detects the change (event or 403 on fetch), Then the user is prompted to refresh the simulation to apply the updated permissions.
- Given the user accepts the prompt, When the refresh occurs, Then the preview reloads and the visible assets and actions reflect the new permissions.
- Given the user dismisses the prompt, When continuing the session, Then the preview remains open and indicates that the simulation may be stale until refreshed.
Audit Logging & Compliance for Access Preview
"As a workspace admin, I want comprehensive logs of Access Preview usage so that we can demonstrate due diligence and investigate any issues."
Description

Record all Access Preview sessions and generated QA links with user, timestamp, simulated role, scope (release/folder/asset/link/AutoKit), warnings present, link creations, and revocations. Store immutable logs for 18 months, with search and export. Prevent preview activity from polluting external recipient analytics. Enforce role-based controls: only users with manage permissions may generate QA links; all users with view rights may simulate roles. Surface a lightweight activity panel in the preview UI and provide organization-level reports for security reviews.
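An append-only audit store with the event names listed above can be sketched as follows; the storage, search, and field shapes are assumptions (a real implementation would persist to a write-once store and enforce retention server-side):

```python
from datetime import datetime, timezone

class AuditLog:
    """Append-only audit sketch: records can be added and searched, and
    search results are copies, so callers cannot mutate stored entries."""

    def __init__(self):
        self._entries = []

    def record(self, event, **fields):
        entry = {"event": event,
                 "timestamp": datetime.now(timezone.utc).isoformat(),
                 **fields}
        self._entries.append(entry)
        return dict(entry)   # hand back a copy, not the stored record

    def search(self, **filters):
        return [dict(e) for e in self._entries
                if all(e.get(k) == v for k, v in filters.items())]

log = AuditLog()
log.record("preview.start", userId="u1", scopeType="release",
           simulatedRoleRing="press")
log.record("qaLink.create", userId="u1", linkId="qa_1")
hits = log.search(event="preview.start", userId="u1")
assert len(hits) == 1 and hits[0]["scopeType"] == "release"
hits[0]["scopeType"] = "tampered"   # mutating a search result...
assert log.search(event="preview.start")[0]["scopeType"] == "release"  # ...leaves the log intact
```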

Acceptance Criteria
Comprehensive Audit Event Capture
- Given a user starts an Access Preview for any supported scope (release, folder, asset, link, AutoKit), When the session begins, Then an audit log entry is created with fields: event=preview.start, orgId, userId, scopeType, scopeId, simulatedRoleRing, sessionId, timestamp (UTC ISO 8601), clientIp, userAgent, warningsPresent (boolean), warningsList.
- When the session ends, Then an audit log entry is created with fields: event=preview.end, sessionId, timestamp, durationMs.
- Given a user with manage permission creates a QA link, When the link is created, Then an audit log entry is created with fields: event=qaLink.create, orgId, userId, scopeType, scopeId, linkId, simulatedRoleRing(s), expirationTimestamp, timestamp.
- Given any QA link exists, When it is revoked, Then an audit log entry is created with fields: event=qaLink.revoke, orgId, userId, linkId, timestamp, reason (optional).
Immutable 18-Month Retention and Tamper Resistance
Given any audit log entry exists When attempting to modify or delete it within 18 months of its timestamp via any API or UI Then the system rejects the request with HTTP 403 and records the denied attempt. Then all audit log entries remain retrievable and unaltered for at least 18 months from their timestamp. When a log entry exceeds the 18-month retention period Then it becomes eligible for purge per retention policy and any purge is itself logged with event=audit.purge. Then all audit timestamps are stored and exposed in UTC with millisecond precision.
Log Search and Export
Given audit logs exist When filtering by date range, userId, orgId, scopeType, scopeId, eventType, simulatedRoleRing, warningsPresent, or linkId Then the system returns matching results with total count and pagination. Then retrieving a page of up to 200 results completes within 2 seconds for datasets up to 100k records. When exporting the current result set Then the system provides downloadable CSV and JSON files including all fields captured plus export metadata (generatedAt UTC, applied filters). When no logs match the filters Then the system returns an empty result set with total=0 and disables export.
Analytics Isolation for Internal Previews and QA Links
Given a user performs an Access Preview or visits a QA link When viewing external recipient analytics for the related scope Then no views, plays, downloads, or clicks from those internal sessions are counted in recipient analytics. Then internal preview and QA-link activity is attributed only to an internal analytics stream that is excluded from recipient-facing dashboards and APIs. When simulating 10 QA-link visits and 3 downloads in a test environment Then recipient analytics remain unchanged while internal analytics reflect the activity.
Role-Based Access Controls
Given a user with view rights on a scope When they open Access Preview Then they can simulate any available role ring for that scope. Given a user without manage permission on a scope When they attempt to generate a QA link Then the action is blocked with HTTP 403 and a clear error message is shown and an audit event event=qaLink.denied is recorded. Given a user with manage permission on a scope When they generate a QA link Then the link is created successfully constrained to the selected role ring(s) and scope and the action is logged.
Preview UI Activity Panel
Given the Access Preview UI is open for a scope When the Activity panel is expanded Then it displays the 20 most recent audit events for that scope (preview start/end, warnings surfaced, QA link create/revoke) each with timestamp, user, and event type. When more than 20 events exist Then pagination or infinite scroll loads additional events within 2 seconds per page. When the user lacks permission to view audit data Then the panel displays a permissions notice and no event data. When a QA link is revoked elsewhere Then the revoke event appears in the panel within 5 seconds.
Organization-Level Security Reports
Given an organization admin accesses Security Reports When they generate an audit report for a selected date range and optional filters (user, scope, event type) Then the system produces a downloadable package containing a summary (event counts by type, top users, top scopes) and a detailed CSV/JSON appendix of matching audit logs. Then only organization admins can access and generate these reports; non-admin attempts are denied with HTTP 403 and logged. When the report is generated Then a unique report ID, generation timestamp (UTC), and the filter parameters are included in the package metadata.

Timeboxed Roles

Attach start/expiry windows and milestone triggers to Role Rings (e.g., auto‑downgrade Mixer to Reviewer after ‘Mix Approved’). Recipients are notified of changes, and owners can extend or revoke with one tap. Keeps access tight to timelines, reducing manual cleanup and risk.

Requirements

Timeboxed Role Assignment Engine
"As a release owner, I want to set roles that automatically start and expire on defined windows so that collaborator access aligns with the production schedule without manual cleanup."
Description

Enable attaching start and expiry windows to role assignments within Role Rings at the project, release, and asset levels. Support absolute dates/times and relative offsets from milestones (e.g., “start at Mix Approved + 2 days”), with clear precedence rules when overlapping assignments exist. Handle user-local time zones and daylight saving transitions consistently, and provide defaults/fallbacks (e.g., auto-revert to Viewer or No Access on expiry). Integrate with TrackCrate’s permission model across files, stem versions, AutoKit press pages, and trackable shortlinks so access changes apply uniformly. Provide UI with calendar/time pickers, quick presets, and inline validation; expose CRUD APIs for automation and bulk apply. Ensure resilience for suspended/deleted users, and maintain an audit of assignment changes.

Acceptance Criteria
Absolute Start/Expiry with User-Local Time Zones and DST Handling
Given an assigner with time zone America/Los_Angeles sets a start time of 2025-03-09 02:30 local, When the assignment is saved, Then the engine snaps to the next valid local time (03:00), stores the unambiguous UTC instant, and displays 03:00 to all users localized to their own time zones. Given an assigner sets an expiry of 2025-11-02 01:30 America/Los_Angeles (ambiguous time), When the assignment is saved, Then the engine interprets it as the first occurrence by default, provides a UI toggle to choose the second occurrence, stores the exact UTC instant with DST offset metadata, and shows a “DST” badge in the UI. Given configured absolute start/expiry times, When the current time crosses a boundary, Then the effective role activates/deactivates within 60 seconds and an audit entry is recorded with the resolved UTC instant and viewer-local render. Given any viewer’s time zone preference changes, When they reopen the assignment, Then start/expiry render in the new local time zone without altering the stored UTC instants.
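The gap and ambiguity detection above can be sketched with Python's zoneinfo and PEP 495 fold semantics. `classify_local` and `snap_to_next_valid` are hypothetical helper names, and the minute-stepping snap is a simplification of "snap to the next valid local time":

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

LA = ZoneInfo("America/Los_Angeles")

def classify_local(dt_naive, tz):
    """Classify a wall-clock time as 'ok', 'nonexistent' (spring-forward gap),
    or 'ambiguous' (fall-back repeat)."""
    d0 = dt_naive.replace(tzinfo=tz, fold=0)
    d1 = dt_naive.replace(tzinfo=tz, fold=1)
    if d0.utcoffset() == d1.utcoffset():
        return "ok"
    # Offsets differ: the time either occurs twice (ambiguous) or never (gap).
    # A gap time does not survive a round trip through UTC.
    roundtrip = d0.astimezone(timezone.utc).astimezone(tz)
    return "nonexistent" if roundtrip.replace(tzinfo=None) != dt_naive else "ambiguous"

def snap_to_next_valid(dt_naive, tz, step=timedelta(minutes=1)):
    """Advance until the wall time exists (e.g., 02:30 on 2025-03-09 -> 03:00).
    DST gaps are bounded, so this loop terminates quickly."""
    while classify_local(dt_naive, tz) == "nonexistent":
        dt_naive += step
    return dt_naive
```

Once classified, the engine would store the resolved UTC instant (plus fold choice for ambiguous times) and render locally per viewer.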
Relative Window from Milestone Trigger (e.g., Mix Approved + 2 days)
Given a role assignment configured as Start: Mix Approved + 2 days, Expire: +7 days, On expiry: downgrade to Reviewer, When the Mix Approved milestone is first achieved at timestamp T, Then the engine schedules start at T+2d and expiry at T+7d (in UTC) and automatically changes the role to Reviewer at expiry. Given the milestone is achieved while the system is temporarily offline, When services recover, Then catch-up processing applies any missed activations/expiries immediately, emits notifications, and backfills audit entries with original intended times. Given the milestone timestamp is edited after being achieved, When viewing existing relative assignments, Then they remain anchored to the original first-achieved timestamp unless explicitly re-linked and re-saved by an owner. Given the milestone is reverted to Not Achieved, When no explicit re-link occurs, Then existing scheduled windows do not shift and continue to fire based on the original anchor.
Overlapping Assignments Precedence and Conflict Resolution
Given multiple active assignments exist for the same user and resource across scopes (project, release, asset), When evaluating effective permissions, Then the engine applies scope precedence Asset > Release > Project. Given multiple active assignments exist at the same scope for the same user and resource, When roles conflict, Then the engine selects the most permissive role per the configured role hierarchy unless an explicit No Access is present at that same scope, in which case No Access wins. Given overlapping windows transition over time, When a boundary is crossed, Then the engine recalculates the effective role, applies it across all surfaces within 60 seconds, and records a single audit event summarizing the before/after role and the precedence rule applied. Given two assignments at different scopes are both active and one is No Access at a more specific scope, When evaluated, Then the effective result is No Access even if a broader scope grants higher privileges.
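A minimal resolver for the precedence rules above — most specific scope wins; within that scope, explicit No Access beats everything, otherwise the most permissive role applies. The role hierarchy ordering is an illustrative assumption:

```python
# Hypothetical role hierarchy, most to least permissive.
HIERARCHY = ["Owner", "Mixer", "Reviewer", "Viewer", "No Access"]
SCOPE_ORDER = {"asset": 0, "release": 1, "project": 2}  # most specific first

def effective_role(assignments):
    """assignments: list of (scope, role) pairs active right now."""
    if not assignments:
        return "No Access"
    # Asset > Release > Project: keep only the most specific scope present.
    best_scope = min(SCOPE_ORDER[s] for s, _ in assignments)
    roles = [r for s, r in assignments if SCOPE_ORDER[s] == best_scope]
    if "No Access" in roles:
        return "No Access"  # explicit denial at this scope wins
    return min(roles, key=HIERARCHY.index)  # most permissive of the rest
```

For example, a project-level Mixer grant combined with an asset-level No Access resolves to No Access, matching the last criterion above.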
Default Fallback on Expiry (Viewer/No Access) with Auto-Revert
Given an assignment has no explicit on-expiry role, When the expiry time passes, Then the effective role reverts to Viewer within 60 seconds and an audit event is recorded. Given an assignment is configured with On Expiry: No Access, When the expiry time passes, Then the effective role becomes No Access within 60 seconds and any active sessions or links relying on higher access are invalidated. Given a user is suspended at the time a start boundary is reached, When evaluating access, Then the user does not gain access; upon unsuspension, the engine computes the effective role based on current time and scheduled windows without replaying missed notifications. Given a user is deleted, When processing scheduled role windows, Then all pending assignments for that user are cancelled, no notifications are sent, and a terminal audit entry is recorded for each cancelled assignment.
Uniform Permission Propagation Across Files, Stems, AutoKit, and Shortlinks
Given the effective role changes for a user on a project/release/asset, When evaluated, Then access updates consistently across file library, stem versions, AutoKit press pages (including private stem player), and trackable shortlinks within 60 seconds. Given a shortlink was created that required elevated access, When the user’s role downgrades below the required level, Then following that shortlink yields a 403/Access Revoked within 60 seconds and the shortlink analytics attribute the denial to a role change event. Given the private stem player is open in AutoKit, When the user loses access to one or more stems, Then the player removes those stems from the playlist and blocks playback without requiring a refresh, and shows a non-intrusive notice. Given caches and CDNs are in use, When role changes occur, Then cache invalidation or token rotation ensures no stale access beyond 60 seconds and all changes are traceable via audit logs.
UI Presets, Pickers, and Inline Validation
Given a user opens the timeboxing UI for a role, When selecting Absolute mode, Then date/time pickers accept local times, render the user’s time zone, and flag nonexistent/ambiguous DST times with guidance and resolution controls. Given a user selects Relative mode, When choosing a milestone anchor and offsets, Then quick presets (24 hours, 72 hours, End of Week 23:59, Until Next Milestone) are available and correctly populate offsets. Given start is after or equal to expiry, When attempting to save, Then the Save action is disabled and inline validation messages explain the conflict until corrected. Given the user changes time zone preference, When viewing configured times, Then only the display changes; stored UTC instants remain unchanged and a banner clarifies this behavior. Given keyboard-only navigation and screen readers, When interacting with controls, Then all inputs are reachable, labeled, and announce validation errors per WCAG 2.1 AA.
CRUD APIs and Bulk Apply with Idempotency and Audit
Given the public API exposes endpoints for role assignments, When creating with an Idempotency-Key header and identical payload within 24 hours, Then the same assignment resource is returned without duplication and the response includes an idempotency hit indicator. Given a bulk apply request contains up to 500 items mixing absolute and relative windows, When processed, Then successes are applied independently of failures, per-item errors are returned with precise codes (e.g., INVALID_WINDOW, UNKNOWN_MILESTONE), and the overall HTTP status reflects partial success (207 Multi-Status or equivalent envelope field). Given API writes occur, When requests succeed, Then each create/update/delete emits an audit record with who, what, when (UTC), scope, old/new role, anchors, and source (UI/API), and optionally triggers notifications unless suppress_notifications=true is set. Given malformed time zones or invalid offsets are submitted, When validated, Then the API rejects the request with specific field-level errors and does not create partial assignments.
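The Idempotency-Key behavior described above can be sketched with an in-memory store; the class name, payload shape, and the 409 on key reuse with a different payload are assumptions for illustration:

```python
import hashlib
import json

class AssignmentAPI:
    """Toy sketch of Idempotency-Key handling for assignment creation."""
    def __init__(self):
        self._seen = {}   # idempotency key -> (payload hash, stored resource)
        self._next_id = 1

    def create(self, idempotency_key, payload):
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if idempotency_key in self._seen:
            prev_digest, resource = self._seen[idempotency_key]
            if prev_digest == digest:
                # Same key + identical payload: return the same resource.
                return {"resource": resource, "idempotency_hit": True}
            # Same key, different payload: reject rather than guess.
            return {"error": "IDEMPOTENCY_KEY_REUSED", "status": 409}
        resource = {"id": f"assign_{self._next_id}", **payload}
        self._next_id += 1
        self._seen[idempotency_key] = (digest, resource)
        return {"resource": resource, "idempotency_hit": False}
```

A production store would also expire keys after the 24-hour window named above.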
Milestone-Triggered Role Transitions
"As a project manager, I want roles to change automatically when key milestones are reached so that collaborators always have the right level of access at each phase."
Description

Introduce a rules engine that maps workflow milestones to automatic role changes (e.g., “on Mix Approved: Mixer → Reviewer”). Support chainable transitions, optional delays (e.g., T+24h), and guardrails to prevent oscillation when a milestone is edited or reverted. Provide a preview/simulation mode and conflict resolution when multiple triggers target the same user. Integrate with TrackCrate’s milestone system (Mix Approved, Master Delivered, Pre-save Launch, Release Day, Takedown) and accept external events via webhooks. Allow per-project overrides and owner-approved exceptions, with full logging of evaluated rules and outcomes.

Acceptance Criteria
Chainable Role Transitions on Mix Approved and Master Delivered
Given project P has rules R1: on "Mix Approved" change Mixer→Reviewer; and R2: on "Master Delivered" change Reviewer→Viewer And user U currently has role Mixer on project P When the milestone "Mix Approved" is set to true at time T Then U's role changes from Mixer to Reviewer within 5 seconds And prior "Mixer" permissions are revoked immediately upon change And a notification is sent to U and project owners within 10 seconds And an audit log entry is recorded with correlationId, ruleId=R1, event="Mix Approved", decision="applied" When the milestone "Master Delivered" is set to true at time T2 Then U's role changes from Reviewer to Viewer within 5 seconds And an audit log entry is recorded with ruleId=R2, decision="applied"
Delayed Transition T+24h with Cancellation on Revert
Given a rule R3: on "Mix Approved" change Reviewer→Viewer with delay=24h And user U currently has role Reviewer on project P When "Mix Approved" is set at T Then a scheduled transition job is created for T+24h and visible in the UI with the exact fire time in the project's timezone And no role change occurs before T+24h And at T+24h ± 1 minute the role changes to Viewer and a notification is sent When "Mix Approved" is reverted before T+24h Then the scheduled job is cancelled within 10 seconds and no role change occurs And rescheduling ensures at most one pending job per user-rule; subsequent re-setting restarts the single schedule
Oscillation Guardrails on Milestone Edit/Revert
Given rules exist that change roles based on the milestone "Mix Approved" And user U previously transitioned from Mixer→Reviewer due to "Mix Approved" When "Mix Approved" is toggled off and on multiple times Then the same transition is not re-applied more than once for U for the same milestone instance (idempotency) And no automatic reverse transition is executed on revert unless an explicit "on revert" rule exists And any suppressed transition attempts are logged with reason in {duplicate_event, revert_without_rule} And the user's final role after the toggles equals the role dictated by the latest valid milestone state and rules
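The duplicate-suppression guardrail can be sketched as a keyed idempotency check per user, milestone instance, and rule; the key shape and log format are hypothetical:

```python
applied = set()  # (user_id, milestone_instance_id, rule_id) already applied

def maybe_apply(user_id, milestone_instance, rule_id, apply_fn, log):
    """Apply a transition at most once per milestone instance; suppress repeats."""
    key = (user_id, milestone_instance, rule_id)
    if key in applied:
        log.append(("suppressed", "duplicate_event", key))
        return False
    applied.add(key)
    apply_fn()
    log.append(("applied", None, key))
    return True
```

Reverts would be handled the same way: absent an explicit "on revert" rule, the engine logs `("suppressed", "revert_without_rule", key)` instead of reversing the role.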
Preview/Simulation of Rule Outcomes
Given preview mode is enabled for project P with rules R1..Rn And a tester selects user U and a sequence of hypothetical events including delays When the simulation is run Then the system returns the ordered list of predicted role transitions with timestamps, including delayed executions and conflict resolutions And no actual role assignments, notifications, or persistent logs are created And running the same events live produces the same decisions and schedule as the preview output
Decisioning Priority and Conflict Resolution for Concurrent Triggers
Given two or more rules (R1..Rn) target user U with different resulting roles in response to the same event set And precedence is defined as: owner-approved exception > per-project override > lower numeric rule.priority > more restrictive role > alphabetical role name When the engine evaluates the event set Then exactly one final role is applied to U following the precedence order And all non-applied candidate transitions are recorded as suppressed with the precedence reason And decisions are deterministic across repeated evaluations with identical inputs And an owner can approve, extend, or revoke an exception with one tap, and the precedence effect takes place within 5 seconds
Integration with Built-in Milestones and External Webhooks
Given built-in milestones {Mix Approved, Master Delivered, Pre-save Launch, Release Day, Takedown} exist for project P When a rule is configured for each milestone and the milestone is set Then the corresponding transitions are executed per rule within 5 seconds and logged Given an external webhook with eventType mapped to a rule for project P And the request includes a valid HMAC signature and idempotency key When the webhook is delivered one or more times Then the event is accepted once, duplicate deliveries (same idempotency key within 24h) are ignored, and only one transition is applied And invalid or missing signatures result in 401 and no transition
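The webhook signature and idempotency checks can be sketched as follows; the secret, the response strings, and the unbounded in-memory dedupe set (the criteria scope it to 24h) are placeholder assumptions:

```python
import hashlib
import hmac

SECRET = b"shared-webhook-secret"  # placeholder; provisioned per integration
processed = set()                  # idempotency keys already accepted

def handle_webhook(body: bytes, signature_hex: str, idempotency_key: str):
    # Verify the HMAC over the raw body using a constant-time comparison.
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature_hex):
        return 401, "invalid signature"   # no transition applied
    if idempotency_key in processed:
        return 200, "duplicate ignored"   # event accepted exactly once
    processed.add(idempotency_key)
    return 200, "transition applied"
```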
Full Logging and Auditability of Rule Evaluations
Given the rules engine evaluates an event for user U When a decision is made (applied, suppressed, scheduled, cancelled, failed) Then a structured audit record is produced containing timestamp (UTC), correlationId, projectId, userId, ruleId, event source (milestone/webhook), inputs, decision, previousRole, newRole (if any), delay (if any), notification outcome, and error (if any) And logs are queryable in the project's Audit view by date range, user, ruleId, event, and decision And logs are exportable to CSV and JSON by authorized owners/collaborators And preview mode runs do not create persistent audit records
Access Enforcement Across Platform
"As a label admin, I want expired or revoked roles to instantly lose access across files, links, and the stem player so that sensitive assets remain protected without gaps."
Description

Apply timeboxed role decisions at every access point, including file downloads (with watermarking), private stem player, AutoKit press pages, and trackable shortlinks. Enforce immediate revocation on expiry or manual revoke by invalidating signed URLs/tokens, expiring CDN cache, and rejecting late requests server-side. Implement a fast policy check on each request and a background sweeper to catch stragglers. Ensure low-latency evaluation, graceful degradation if policy services are unavailable, and detailed denial reasons for support. Provide configuration to define fallback behavior (read-only vs. no access) and ensure enforcement covers versioned assets and shared collections.

Acceptance Criteria
Immediate Revocation on Timeboxed Role Expiry or Manual Revoke
Given a user’s role ring has an expiry timestamp or is manually revoked When the expiry time is reached or the owner revokes access Then all new requests to file downloads, private stem player, AutoKit pages, and shortlinks are denied within 5 seconds globally with HTTP 403 and error_code=ROLE_REVOKED And any existing signed URLs or tokens are invalidated immediately and cannot be reused (subsequent requests return HTTP 403) And server-side requests arriving after expiry/revoke are rejected regardless of client cache or CDN cache state And access to all versions of assets and any shared collections is denied consistently
Watermarked File Downloads Enforced with Signed URL Invalidation
Given a user with an active role permitting downloads within the time window When they request a download for a specific asset version Then the system issues a signed URL scoped to that version and user, with expiry ≤ 15 minutes and embedded watermark unique to user+request And the downloaded file contains the watermark with 100% detection in verification tests And download attempts using an expired or revoked signed URL return HTTP 403 with error_code=URL_EXPIRED_OR_REVOKED And upon role expiry or revoke before download completion, any further range or chunk requests are denied within 5 seconds And the download audit log records user_id, asset_id, version_id, role_id, decision, and denial_reason when applicable
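Issuing and verifying user- and version-scoped expiring URLs can be sketched with an HMAC over the query parameters; the signing scheme, key, path, and parameter names are illustrative, and watermark embedding is out of scope here:

```python
import hashlib
import hmac
import time
from urllib.parse import parse_qs, urlencode, urlparse

SIGNING_KEY = b"download-signing-key"  # placeholder

def sign_download_url(asset_id, version_id, user_id, ttl_seconds=900):
    """Expiry is capped at 15 minutes per the criteria above."""
    expires = int(time.time()) + min(ttl_seconds, 900)
    params = {"asset": asset_id, "version": version_id,
              "user": user_id, "expires": expires}
    payload = urlencode(sorted(params.items())).encode()
    params["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return f"/download?{urlencode(params)}"

def verify(params, now=None):
    """Server-side check: tampered or expired URLs yield 403."""
    now = now or int(time.time())
    sig = params.pop("sig")
    payload = urlencode(sorted((k, str(v)) for k, v in params.items())).encode()
    good = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(good, sig):
        return 403, "URL_EXPIRED_OR_REVOKED"
    if now > int(params["expires"]):
        return 403, "URL_EXPIRED_OR_REVOKED"
    return 200, "ok"

url = sign_download_url("ast_1", "v3", "user_9")
params = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
```

Immediate revocation additionally requires a server-side role check per request, since a signed URL alone stays valid until its expiry.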
Private Stem Player Segment Requests Enforce Real-Time Policy Checks
Given a user is streaming audio via the private stem player (HLS/DASH) When the player requests the master playlist or media segments Then each request includes a token bound to user, role, and collection/version scope And the policy decision is evaluated per request with p95 evaluation latency ≤ 20 ms at the edge And if the role expires or is revoked mid-session, subsequent segment or playlist requests are denied within 5 seconds with HTTP 403 and error_code=ROLE_REVOKED And no more than one segment (≤ 4s) is served after revocation in synthetic tests
AutoKit Press Pages and Trackable Shortlinks Access Enforcement
Given a protected AutoKit press page and its trackable shortlink When an eligible user within the role window accesses via the shortlink Then the landing page and all embedded assets respect the same policy checks as direct access And cache keys include auth/role version so protected content is never served from a shared public cache And when the user’s role expires or is revoked, the landing page responds with HTTP 403 and error_code=ROLE_REVOKED while preserving UTM/shortlink tracking And attempts to access deep-linked assets from the page with stale cached URLs are denied server-side regardless of CDN state
Background Sweeper Catches Stragglers and Purges Edge Caches
Given periodic enforcement via a background sweeper When a role expires or is revoked Then the sweeper invalidates residual tokens and signed URLs and triggers CDN cache purge for affected paths within 60 seconds And metrics report tokens_invalidated_count and cdn_purges_triggered with p95 sweep-to-purge latency ≤ 45 seconds And any residual access using previously cached content is denied server-side with HTTP 403 and a logged denial_reason=STALE_CDN_BYPASSED
Detailed Denial Reasons and Support Observability
Given an access request is denied by policy When the response is returned to the client Then the client receives HTTP 403 with a non-sensitive message and a correlation_id And the server logs structured fields: user_id (if present), asset_id/collection_id, version_id, endpoint, error_code, denial_reason, role_id, policy_snapshot_id, correlation_id And support tooling can retrieve the full denial record by correlation_id within 1 second and display the precise cause (e.g., ROLE_EXPIRED, MANUAL_REVOKE, TOKEN_TAMPERED, URL_EXPIRED, POLICY_TIMEOUT) And no PII beyond user_id is exposed in client-facing responses
Configurable Fallback Behavior During Policy Service Unavailability
Given the policy decision service is unavailable or times out beyond 200 ms When a request is received for any protected endpoint Then the system applies the configured fallback: read_only or no_access And in read_only, metadata and thumbnails return 200 while downloads/streams return HTTP 403 with error_code=FALLBACK_READ_ONLY And in no_access, all protected resources return HTTP 403 with error_code=FALLBACK_NO_ACCESS And the chosen fallback is applied consistently across file downloads, stem player, AutoKit pages, and shortlinks, including versioned assets and shared collections And all fallback decisions are logged with policy_service_status and correlation_id
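The configurable fallback can be sketched as a wrapper around the policy call; the client interface, request kinds, and timeout plumbing are assumed for illustration:

```python
def check_access(policy_client, request_kind, fallback="read_only", timeout_ms=200):
    """Evaluate policy; on unavailability apply the configured fallback."""
    try:
        return policy_client(timeout_ms)  # returns (http_status, error_code)
    except TimeoutError:
        if fallback == "read_only":
            # Metadata and thumbnails stay readable; downloads/streams are denied.
            if request_kind in ("metadata", "thumbnail"):
                return 200, None
            return 403, "FALLBACK_READ_ONLY"
        return 403, "FALLBACK_NO_ACCESS"
```

In production the fallback decision would also be logged with `policy_service_status` and a `correlation_id`, per the observability criteria above.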
Smart Notifications & Reminders
"As a collaborator, I want timely notifications about when my access will start, change, or expire so that I can plan my work and avoid surprises."
Description

Notify recipients and owners of upcoming starts, changes, and expiries with email, in-app, and optional mobile push. Provide configurable reminders (e.g., 72/24/1 hours before expiry), batched digests to reduce noise, and deep links to the relevant project or AutoKit page. Respect user time zones and quiet hours; localize content; and include a timeline view of current and scheduled access. Track delivery, opens, and failures; retry transient errors; and allow recipients to adjust notification preferences per project. Ensure notifications reflect the final state after rule evaluations to avoid contradictory messages.

Acceptance Criteria
Pre-Expiry Reminders Respect Time Zones and Quiet Hours
Given a recipient has role access expiring at a specific timestamp and a profile time zone And project reminder offsets are configured to 72h, 24h, and 1h before expiry And the recipient has quiet hours configured When reminder jobs are scheduled and executed Then each reminder is scheduled relative to the expiry in the recipient’s local time And any reminder that would fall within quiet hours is deferred to the next allowed send time the same day And a single reminder is sent per offset with no duplicates And the notification is sent only via the recipient’s enabled channels (email, in-app, optional push) And the content includes a localized expiry timestamp and a deep link to the relevant project or AutoKit page And delivery, opens, and failures are recorded per message
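A simplified scheduler for the offset-based reminders with quiet-hour deferral. Fixed quiet hours and naive local datetimes are simplifying assumptions, and a real implementation would also skip reminders deferred past the expiry itself:

```python
from datetime import datetime, timedelta

def schedule_reminders(expiry_local, offsets_hours=(72, 24, 1),
                       quiet_start=22, quiet_end=8):
    """Schedule reminders before expiry in the recipient's local time.
    Times landing in quiet hours (quiet_start..quiet_end, wrapping midnight)
    defer to quiet_end."""
    sends = []
    for h in offsets_hours:
        t = expiry_local - timedelta(hours=h)
        if t.hour >= quiet_start:       # late evening -> next morning
            t = (t + timedelta(days=1)).replace(hour=quiet_end, minute=0)
        elif t.hour < quiet_end:        # early morning -> same morning
            t = t.replace(hour=quiet_end, minute=0)
        sends.append(t)
    return sends
```

One send per offset keeps the "no duplicates" property; the dispatcher would then fan out only over the recipient's enabled channels.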
Start Notifications Reflect Final State After Rule Evaluation
Given a role ring has a start window at time T and related rules/milestones may modify roles When time T is reached and all rule evaluations complete Then exactly one notification is sent reflecting the final access state (role, scope, window) And any pre-scheduled notifications that conflict with the final state are canceled And the content includes the effective role, localized start time, and a deep link to the project or AutoKit page And the event is added to the access timeline and delivery metrics are recorded
Milestone-Triggered Role Change Alerts to Owners and Recipients
Given a role ring auto-changes (e.g., Mixer → Reviewer) upon the milestone "Mix Approved" When the milestone is marked complete Then the affected recipient and project owners receive notifications within 60 seconds And the message includes prior role, new role, effective time, and reason ("Mix Approved") And owner notifications include one-tap actions: Extend, Revoke, Open Timeline And notifications are localized to each recipient’s locale/time zone and include deep links And the change is reflected in the timeline and audit log, with delivery outcomes captured
Noise-Reduced Daily Digest for Upcoming Access Changes
Given a user has multiple access starts/changes/expiries scheduled in the next 24 hours across projects And digest preference is enabled for that user When the daily digest window runs at 08:00 local time Then a single digest per channel is sent listing upcoming items with type, localized time, and deep links And items already notified individually are excluded unless their details changed And individual notifications suppressed by the digest are not sent separately And delivery and opens are tracked at both digest and per-item levels
Per-Project Notification Preferences and Reminders Configuration
Given a recipient opens Notification Preferences for Project X When they disable email and push for Role Changes and select only a 24-hour pre-expiry reminder Then subsequent Role Change notifications for Project X are sent in-app only And only the 24-hour pre-expiry reminder is scheduled and sent And preferences for Project X do not affect Project Y And a confirmation is shown and an audit entry is recorded And deep links from notifications open directly to Project X preferences
Delivery Tracking and Resilient Retry
Given a notification send attempt encounters a transient provider error (e.g., HTTP 429, a 5xx, or a timeout) When the system processes the send Then it retries with exponential backoff up to 3 attempts within 15 minutes And on success the message is marked Delivered; on exhaustion it is marked Failed with provider code And opens are tracked (pixel for email, SDK for push/in-app) and attributed to a unique message ID And failures and retries are visible in a delivery report filtered by project and recipient And users do not receive duplicate user-visible messages due to retries
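The retry policy can be sketched as follows; `TransientError` and the injected `sleep` are test-friendly assumptions (a real sender would call `time.sleep` and classify provider errors itself):

```python
class TransientError(Exception):
    """Retryable provider failure (e.g., timeout or HTTP 429)."""

def send_with_retry(send, max_attempts=3, base_delay=1.0, sleep=lambda s: None):
    """Retry transient failures with exponential backoff (1s, 2s, 4s, ...)."""
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            send()
            return {"status": "Delivered", "attempts": attempt}
        except TransientError as exc:
            if attempt == max_attempts:
                return {"status": "Failed", "attempts": attempt,
                        "provider_code": str(exc)}
            sleep(delay)  # injected for testability
            delay *= 2
```

Keying each send on a unique message ID (deduplicated downstream) is what prevents retries from producing duplicate user-visible messages.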
Access Timeline View Mirrors Notification Schedule
Given a project member opens the Access Timeline When role windows are added, modified, or changed by milestones Then the timeline shows current and scheduled access for each recipient with start/change/expiry markers And times are displayed in the viewer’s locale/time zone And updates appear within 15 seconds of any change And selecting an item opens the deep-linked project or AutoKit page And scheduled notifications align 1:1 with timeline entries in type and timing
One-Tap Extend/Revoke Controls
"As a project owner, I want to extend or revoke access with a single tap so that I can quickly respond to schedule changes and reduce risk."
Description

Provide owners with quick actions to extend, edit, pause, or revoke role windows from project and user views. Include presets (1 day, 3 days, 1 week) and custom durations, with an impact preview showing what access will change and when. Support bulk selection for multiple collaborators, optional reason codes, and a brief undo window. Enforce permissions, manage concurrency safely (optimistic locking), and update enforcement/notifications in real time. Optimize for mobile so changes can be made on the go during sessions or review calls.

Acceptance Criteria
One‑Tap Extend with Presets and Undo
Given I am a project owner or workspace admin viewing a collaborator’s Role Ring on project or user view (desktop or mobile) When I tap Extend and choose a preset of 1 day, 3 days, or 1 week, and confirm Then an impact preview is shown before confirmation summarizing the new expiry timestamp and affected resources, and the Confirm button is enabled only after preview is displayed And upon confirmation the new expiry is applied within 2 seconds and reflected across all active sessions within 5 seconds And an Undo control is presented for 10 seconds; if Undo is used within the window, the change is fully reverted and no notifications are sent And if the Undo window expires without Undo, the change is persisted, an audit record is written, and the recipient is notified within 30 seconds And extending an already expired window reactivates access starting now with the selected duration; extending an active window adds duration to the existing end time
Custom Duration/Edit with Validated Impact Preview
Given I choose Edit Window for a collaborator’s Role Ring When I set a custom start and/or end (or custom duration) and open the impact preview Then the preview lists the roles/resources impacted, the exact start and end timestamps in my local timezone and UTC, and a summary of what access will change and when And the Confirm button is disabled if end < start, end is in the past (unless resuming from paused with end in future), or the input format is invalid And on confirmation the change applies within 2 seconds and is reflected across sessions within 5 seconds And the updated schedule is persisted precisely to the minute with no rounding, and an audit record captures before/after timestamps
Bulk Extend/Revoke with Atomicity and Batching
Given I multi‑select 2 to 200 collaborators and one or more Role Rings When I choose Extend (preset or custom) or Revoke and confirm after viewing the impact preview Then changes are applied atomically per collaborator+role (either all intended changes for that subject succeed or none do) And the operation returns a per‑subject success/failure report with totals; successes are not rolled back due to other subjects failing And real‑time updates are pushed to all viewers within 5 seconds And recipient notifications are batched to one message per recipient per operation and delivered within 60 seconds after the undo window expires (or immediately if undo is not offered) And processing 100 subjects completes in ≤ 5 seconds median and 200 in ≤ 15 seconds median
Pause and Resume Access Windows
Given a collaborator has an active timeboxed Role Ring When I tap Pause and confirm after preview Then enforcement transitions to paused within 2 seconds and the collaborator immediately loses access to affected resources And a 10‑second Undo is offered; if not undone, an audit entry is written and a notification is sent to the collaborator within 30 seconds And when I tap Resume, the original remaining window (pre‑pause) is restored and counts down from the resume time; if the original end is in the past, I am prompted to extend before resuming And paused windows are visibly labeled Paused in project and user views
Permission Enforcement, Reason Codes, and Audit Trail
- Given role window controls require the Manage Roles permission, When a user without that permission views project or user views, Then the Extend, Edit, Pause, and Revoke controls are hidden; direct API attempts are rejected with HTTP 403 and no changes occur
- When I perform any change and optionally enter a reason (0–200 characters), Then the reason is stored; if provided it appears in the audit entry and is included in owner/admin notifications, and recipient notifications include the reason if policy allows
- And every change writes an immutable audit record with actor, timestamp, target collaborator, action, previous values, new values, reason (if any), and request ID
Optimistic Locking and Conflict Resolution
Given another user modifies the same Role Ring between my load and confirm When I attempt to Extend/Edit/Pause/Revoke Then the update is rejected with a version conflict and no changes are applied And I see the latest values and can re‑open the impact preview to retry And in bulk operations, only conflicting subjects are rejected; successful subjects remain committed, and the result report lists conflicts distinctly And repeated submissions with the same request ID are idempotent and do not duplicate audit entries or notifications
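A minimal sketch of the optimistic-locking and idempotency behavior described above, assuming a simple in-memory store (`RoleRingStore`, `update_expiry`, and all field names are illustrative, not the actual API):

```python
class VersionConflict(Exception):
    """Raised when the stored version no longer matches the caller's copy."""

class RoleRingStore:
    """Illustrative in-memory store; names and fields are assumptions."""
    def __init__(self):
        self._rings = {}          # ring_id -> {"version": int, "end_at": str}
        self._seen_requests = {}  # request_id -> cached result (idempotency)

    def put(self, ring_id, end_at):
        self._rings[ring_id] = {"version": 1, "end_at": end_at}

    def update_expiry(self, ring_id, expected_version, new_end_at, request_id):
        # Replaying the same request_id returns the cached result without
        # duplicating audit entries or notifications (idempotency).
        if request_id in self._seen_requests:
            return self._seen_requests[request_id]
        ring = self._rings[ring_id]
        # Optimistic lock: reject if someone changed the ring since our load.
        if ring["version"] != expected_version:
            raise VersionConflict(
                f"expected v{expected_version}, found v{ring['version']}")
        ring["end_at"] = new_end_at
        ring["version"] += 1
        result = {"version": ring["version"], "end_at": new_end_at}
        self._seen_requests[request_id] = result
        return result
```

A rejected update surfaces the conflict to the caller, who reloads the latest values and re-opens the impact preview before retrying.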
Mobile Quick Actions Accessibility and Performance
Given I am on a mobile device ≤ 400px width in portrait When I open a collaborator’s Role Ring actions Then Extend, Edit, Pause, and Revoke are reachable with tap targets ≥ 44×44pt and visible without horizontal scrolling And the action flow requires ≤ 2 taps to select an action and ≤ 1 additional tap to confirm after preview (presets), or ≤ 2 additional taps for custom duration And visual updates (badges, timestamps, Paused state) appear within 5 seconds of confirmation And the UI meets color contrast and focus order requirements and supports screen readers for all controls and previews
Audit Trail & Compliance Export
"As a compliance stakeholder, I want a verifiable log of who had access when and why so that I can satisfy audits and partner obligations."
Description

Records an immutable event log of role assignments, time windows, rule evaluations, transitions, notifications sent, and access denials/approvals, with actor, timestamp, and rationale. Provides filters by project, user, role, and date range, and exports to CSV/JSON for label partners. Surfaces an “Access Timeline” within the project and optionally on AutoKit press pages (owner-only) to show who had what when. Supports retention policies, tamper-evident storage, and API access for external compliance tools, tying entries to rights metadata for end-to-end traceability.

Acceptance Criteria
Immutable Event Log for Timeboxed Role Activities
- Given an owner assigns or updates a timeboxed role, When the action is saved, Then an audit entry is appended capturing event_id, project_id, subject_user_id, actor_id, actor_type, event_type (assignment|update|revoke), role_before, role_after, start_at, end_at, trigger_name (nullable), rationale (nullable), timestamp (ISO 8601 UTC), rule_evaluation_id (nullable), outcome, and integrity_hash
- Given any audit entry exists, When an API or UI attempts to modify or delete it, Then the operation is rejected with 405/Forbidden and a separate audit entry logs the denied attempt with actor_id and timestamp
- Given an automated transition occurs from a configured milestone (e.g., Mix Approved), When the transition fires, Then an audit entry is appended linking rule_id/scheduler_id and the source milestone/event_id
- Given a notification is sent due to a role change, When delivery completes or fails, Then an audit entry is appended with notification_id, channel, recipient_id, delivery_status, and timestamp
- Given an access request to a protected asset is approved or denied, When the decision is recorded, Then an audit entry includes decision (approved|denied), scope (asset|project), actor_id, subject_user_id, rationale, and timestamp
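The required audit fields and the immutability rule might be modeled as a frozen record; the `AuditEntry` class and `new_entry` helper below are illustrative assumptions, showing only a subset of the fields listed above:

```python
from dataclasses import dataclass, FrozenInstanceError
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    """Immutable audit record; field names follow the criteria above,
    but the class itself is a hypothetical sketch, not the real schema."""
    event_id: str
    project_id: str
    subject_user_id: str
    actor_id: str
    event_type: str   # assignment | update | revoke
    role_before: str
    role_after: str
    timestamp: str    # ISO 8601 UTC
    rationale: str = None
    integrity_hash: str = None

def new_entry(**fields):
    # Default the timestamp to the current UTC instant in ISO 8601 form.
    fields.setdefault(
        "timestamp",
        datetime.now(timezone.utc).isoformat(timespec="seconds"))
    return AuditEntry(**fields)
```

Freezing the dataclass makes in-process mutation raise, mirroring the criterion that modify/delete attempts are rejected; real append-only enforcement would additionally live in the storage layer.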
Audit Log Filtering by Project, User, Role, and Date Range
- Given audit entries across multiple projects and users, When I filter by project_id=P and date range [start,end], Then only entries with project_id=P and timestamp within [start,end] are returned
- Given a filter user_id=U and role=Reviewer, When I execute the query, Then only entries where U is actor or subject and the event relates to role Reviewer are returned
- Given combined filters project_id, user_id, role, and event_type, When I query, Then results satisfy all predicates and are sorted by timestamp desc by default
- Given no entries match the filters, When I query, Then a 200 response with an empty array and total=0 is returned
- Given pagination parameters limit=100 and a cursor, When I fetch successive pages, Then results contain no duplicates or gaps and the final page has has_more=false
Compliance Export to CSV and JSON
- Given filters are applied, When I export to CSV, Then the file includes UTF-8 encoded headers: event_id,project_id,subject_user_id,actor_id,actor_type,event_type,role_before,role_after,start_at,end_at,trigger_name,decision,rationale,timestamp,rights_metadata_id,integrity_hash
- Given filters are applied, When I export to JSON, Then the output is a UTF-8 encoded array of objects with the same fields as the CSV and preserves data types (timestamps as ISO 8601 strings)
- Given an export of up to 100k records, When requested, Then the export completes within 60 seconds or streams progressively with a stable cursor
- Given an export completes, When the download is prepared, Then a SHA-256 checksum is provided and matches the downloaded file
- Given I request an export with a date range exceeding retention, When generated, Then the file excludes purged entries and includes a retention_gap summary block (JSON) or footer note (CSV) with counts by event_type
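The CSV export with a matching SHA-256 checksum could look like this sketch; `export_csv` is a hypothetical helper and the field list is abbreviated from the full set named above:

```python
import csv
import hashlib
import io

# Abbreviated header set for illustration; the spec names the full list.
FIELDS = ["event_id", "project_id", "event_type", "timestamp", "integrity_hash"]

def export_csv(entries):
    """Write filtered entries to UTF-8 CSV and return (bytes, sha256_hex)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS, extrasaction="ignore")
    writer.writeheader()
    for entry in entries:
        writer.writerow(entry)
    data = buf.getvalue().encode("utf-8")
    # The checksum is computed over the exact bytes the client downloads.
    return data, hashlib.sha256(data).hexdigest()
```

Clients can recompute the SHA-256 of the downloaded bytes and compare it against the provided checksum to verify integrity.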
Access Timeline UI in Project and Owner-Only AutoKit
- Given a project owner views Project > Access Timeline, When the page loads, Then a chronological timeline of role intervals shows who had which role over which time windows, with contiguous intervals collapsed
- Given a collaborator without owner permission views AutoKit, When the page loads, Then the Access Timeline module is not rendered and direct timeline API calls return 403
- Given the owner toggles “Show Access Timeline on AutoKit”, When enabled, Then the timeline renders on AutoKit for the owner only and each entry deep-links to its audit event
- Given a project with 5k timeline entries, When the timeline is opened, Then first meaningful paint occurs within 2 seconds and scroll/paginate loads additional entries without time gaps or duplicates
- Given a timeline entry is selected, When details expand, Then rationale, actor, subject, role, and exact UTC/localized timestamps are displayed and match the underlying audit entry
Tamper-Evident Storage and Integrity Verification
- Given an audit entry is appended, When integrity verification runs, Then the entry’s integrity_hash validates against the append-only hash chain/Merkle root without discrepancy
- Given an entry is altered out-of-band to simulate tampering, When integrity verification runs, Then the system flags the entry as compromised and surfaces the failed proof with event_id and root_hash at the time of failure
- Given a client requests an integrity proof for event_id=E, When the API is called, Then the response includes the current log root, a proof path for E, and verification instructions that validate locally
- Given events are reordered in storage to simulate an attack, When verification runs, Then the order inconsistency is detected and a critical alert is emitted with the affected range
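One common way to make a log tamper-evident is a hash chain, where each entry's integrity_hash covers the previous entry's hash. This sketch (hypothetical `append_entry`/`verify_chain` helpers, assuming JSON-serializable entries) detects both altered and reordered entries:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_entry(entries, event):
    """Append an event, chaining its hash to the previous entry's hash."""
    prev_hash = entries[-1]["integrity_hash"] if entries else GENESIS
    payload = json.dumps(event, sort_keys=True)
    h = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    entries.append({**event, "integrity_hash": h})

def verify_chain(entries):
    """Recompute the chain; return (ok, index_of_first_bad_entry)."""
    prev_hash = GENESIS
    for i, entry in enumerate(entries):
        body = {k: v for k, v in entry.items() if k != "integrity_hash"}
        payload = json.dumps(body, sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["integrity_hash"] != expected:
            return False, i   # altered or out-of-order entry
        prev_hash = entry["integrity_hash"]
    return True, -1
```

A Merkle-tree layout (as the criteria also allow) adds efficient per-entry proof paths on top of the same idea; the linear chain shown here is the simplest variant.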
Retention Policies and Legal Hold
- Given a retention policy of N days is configured for audit logs, When an entry’s age exceeds N days and it is not on legal hold, Then the entry is purged and a non-PII purge marker aggregates counts by event_type and date
- Given a legal hold is applied to project_id=P, When the retention job runs, Then entries for P are retained regardless of age and the hold status is visible via API/UI
- Given a retention policy is updated, When the change is saved, Then the policy change (old_value, new_value, actor_id, timestamp) is itself recorded in the audit log
- Given an export spans a period with purged entries, When generated, Then the export excludes purged records and includes a retention_gap summary with time ranges and counts
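A sketch of the retention job honoring legal holds and emitting a non-PII purge marker; `apply_retention` and its signature are assumed names, not the real job:

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

def apply_retention(entries, retention_days, legal_hold_projects, now=None):
    """Purge entries older than the policy unless their project is on legal hold.

    Returns (kept_entries, purge_marker), where the marker aggregates counts
    by event_type only, so it carries no PII.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    kept, purged = [], Counter()
    for entry in entries:
        ts = datetime.fromisoformat(entry["timestamp"])
        if ts < cutoff and entry["project_id"] not in legal_hold_projects:
            purged[entry["event_type"]] += 1  # count, don't keep the entry
        else:
            kept.append(entry)
    return kept, dict(purged)
```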
Compliance API Access and Rights Metadata Linkage
- Given an API client with scope audit.read, When requesting GET /api/compliance/audit with filters (project_id, user_id, role, event_type, start, end), Then a 200 response returns filtered, paginated results with next_cursor
- Given a client without sufficient scope, When accessing audit endpoints, Then a 403 is returned and the denial is recorded in the audit log with actor_id and timestamp
- Given a valid request, When results are returned, Then each entry includes rights_metadata_id and an embedded snapshot of key rights fields (rightsholder, territory, usage_window) or a link to resolve them
- Given request volume beyond the rate limit, When calls exceed the threshold, Then 429 responses are returned with Retry-After and no data leakage occurs
- Given the API specification is requested, When fetching /api/spec (OpenAPI), Then the audit endpoints and schemas are documented and up to date

Drift Guard

Continuously monitors for permission creep by comparing live access against the applied Role Ring. Flags manual overrides, forwarded child links, and inherited scopes, offering one‑click “Reapply Template” remediation or documented exceptions. Maintains least‑privilege without blocking legitimate workflows.

Requirements

Role Ring Baseline Engine
"As a label admin, I want a reliable baseline of intended access saved with each asset so that Drift Guard can accurately detect and remediate permission creep over time."
Description

Persists the applied Role Ring per asset/container as a signed baseline including member principals, roles, scopes, link permissions, and inheritance rules; computes a canonical representation to compare against live ACLs. Supports versioning when templates change, with backward linkage to template IDs and timestamped application events. Baseline stored per object (release, track, stem, artwork, AutoKit page, shortlink) and updated on template reapplication, asset moves, or ownership transfer. Exposes an API to fetch current baseline and diff state, and integrates with the audit log.

Acceptance Criteria
Baseline Persistence on Role Ring Application
Given a Role Ring template is applied to an object of type release, track, stem, artwork, AutoKit page, or shortlink When the application event is committed Then a baseline record is persisted for that object containing member principals, roles, scopes, link permissions, inheritance rules, template_id, template_version, applied_by principal, and application_timestamp And the baseline record includes a stable baseline_id and object_id And the baseline stores a canonical_representation string and its cryptographic_signature And the baseline is queryable by object_id and baseline_id
Deterministic Canonical Representation and Signature
- Given two Role Ring templates and ACL states that are semantically identical but with different ordering of principals, roles, and links, When the canonical representation is computed, Then the canonical_representation strings are byte-identical, the canonical_hash values are identical, and signature verification using the platform signing key succeeds
- Given any change to a scoped permission, role, membership, inheritance rule, or link permission, When the canonical representation is recomputed, Then the canonical_hash changes and prior signatures fail verification
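Determinism here typically comes from canonicalizing the ACL before hashing: sort every unordered collection and serialize with a stable key order, so semantically identical states hash identically. A minimal sketch, where the `canonicalize` helper and the ACL shape are assumptions:

```python
import hashlib
import json

def canonicalize(acl):
    """Return (canonical_representation, canonical_hash) for an ACL state."""
    canon = {
        # Sort principals by id and sort each principal's scope list, so
        # input ordering never affects the serialized form.
        "principals": sorted(
            ({"id": p["id"], "role": p["role"], "scopes": sorted(p["scopes"])}
             for p in acl["principals"]),
            key=lambda p: p["id"]),
        "link_permissions": sorted(acl.get("link_permissions", [])),
        "inherit": acl.get("inherit", True),
    }
    # sort_keys + fixed separators give a byte-stable serialization.
    rep = json.dumps(canon, sort_keys=True, separators=(",", ":"))
    return rep, hashlib.sha256(rep.encode()).hexdigest()
```

Signing the resulting hash (not shown) then yields the cryptographic_signature the baseline stores.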
API: Fetch Current Baseline and Diff
- Given a client with read permission for object_id X, When it calls GET /acl/baseline?object_id=X, Then the API returns 200 with the current baseline, including baseline_id, object_id, template_id, template_version, canonical_hash, cryptographic_signature, applied_by, and application_timestamp, and the response includes an ETag equal to canonical_hash
- Given a client with read permission for object_id X, When it calls GET /acl/diff?object_id=X, Then the API returns 200 with an in_sync boolean and an array of deltas by category: members, roles, scopes, link_permissions, inheritance_rules, forwarded_child_links
- Given an unauthorized client, When it calls either endpoint, Then the API returns 401 or 403 accordingly
- Given a nonexistent object_id, When either endpoint is called, Then the API returns 404
- And p95 latency for both endpoints is <= 300 ms under nominal load
Diff Accuracy for Live ACL Deviations
- Given the live ACL has an extra principal not present in the baseline, When the diff is computed, Then deltas.members includes an added entry for that principal with role and scope
- Given the live ACL removes a principal present in the baseline, When the diff is computed, Then deltas.members includes a removed entry for that principal
- Given a role or scope on an existing principal is changed, When the diff is computed, Then deltas.roles or deltas.scopes includes a modified entry with before and after values
- Given a child link forwards permissions beyond the baseline scope, When the diff is computed, Then deltas.forwarded_child_links includes the offending link with target_id and forwarded_scopes
- Given inheritance is toggled differently from the baseline, When the diff is computed, Then deltas.inheritance_rules reflects the change
- Given no differences exist, When the diff is computed, Then in_sync is true and the deltas arrays are empty
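At the member level, the diff behavior above reduces to comparing two principal→grant maps. This sketch covers members only (added/removed/modified plus the in_sync flag); the map shape and `diff_acl` name are assumptions:

```python
def diff_acl(baseline, live):
    """Compare live principal -> grant maps against the baseline.

    Each grant is a dict such as {"role": ..., "scopes": [...]}.
    """
    deltas = {"added": [], "removed": [], "modified": []}
    for pid, grant in live.items():
        if pid not in baseline:
            deltas["added"].append({"principal": pid, **grant})
        elif grant != baseline[pid]:
            deltas["modified"].append(
                {"principal": pid, "before": baseline[pid], "after": grant})
    for pid in baseline:
        if pid not in live:
            deltas["removed"].append({"principal": pid, **baseline[pid]})
    # in_sync is true only when every delta list is empty.
    in_sync = not any(deltas.values())
    return {"in_sync": in_sync, "deltas": deltas}
```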
Template Versioning and Backward Linkage
- Given template T v1 is applied to object_id X, When the baseline is saved, Then baseline.template_id = T, baseline.template_version = 1, and baseline.previous_baseline_id is null
- Given template T v2 is applied to object_id X, When the baseline is updated, Then a new baseline is created with baseline_version incremented, template_version = 2, and previous_baseline_id referencing the prior baseline, and the previous baseline remains immutable and queryable
- And GET /acl/baseline/history?object_id=X returns an ordered list of baselines with timestamps and previous_baseline_id links
Automatic Baseline Updates on Object Changes
- Given object_id X has a current baseline, When the Role Ring template is reapplied to X, Then a new baseline is computed and persisted, linked via previous_baseline_id, and an application event is recorded
- Given object_id X is moved to a different container that changes inheritance, When the move event is committed, Then a new baseline is computed reflecting the new inheritance rules and persisted within 5 seconds
- Given object_id X ownership is transferred, When the transfer event is committed, Then a new baseline is computed to reflect ownership-based scopes and persisted within 5 seconds
- And all updates are idempotent and atomic per object_id
Audit Log Integration for Baseline Events
Given a baseline is created, updated, or a diff is fetched When the operation completes Then an audit log entry is written with fields: object_id, actor_id, action (baseline_create, baseline_update, baseline_diff_fetch), baseline_id, template_id, previous_baseline_id (if any), canonical_hash, result (success/failure), and timestamp And audit entries are retrievable via the audit log API and correlate to application events via correlation_id
Live Access Diff Monitor
"As a security-conscious producer, I want real-time detection of access changes that exceed our Role Ring so that I can address risks before files leak."
Description

Continuously compares live access control lists and link scopes against the stored Role Ring baseline to detect drift, including manual overrides, ad-hoc shares, and widened scopes. Runs on change events (ACL updates, link creation, membership changes) and scheduled sweeps for completeness. Produces normalized diffs with severity levels and remediation options, and writes findings to the security event log.

Acceptance Criteria
Detect Manual Permission Override on Asset
Given an asset has a stored Role Ring baseline for principals and permissions And a manual ACL change grants a principal a permission not present in the baseline When the Live Access Diff Monitor processes the ACL update event Then a finding is created with type="acl_override" and a unique finding_id And the normalized_diff includes change_type="add", entity="principal", subject_id, permission_before=null, permission_after, baseline_ref And severity is set to High if permission_after grants write or higher; Medium if read-only And remediation options include "Reapply Template" and "Create Exception" And the finding is written to the security event log with required fields
Flag Widened Scope on Child Share Link
Given a child asset resides under a parent whose baseline restricts link scope And a new shortlink is created for the child with a scope broader than the baseline (e.g., Anyone with link) When the Live Access Diff Monitor processes the link creation event Then a finding is created with type="scope_widened" and a unique finding_id And the normalized_diff includes scope_before=baseline_scope and scope_after=link_scope And severity is High if scope_after is Public/Anyone-with-link; Medium if broader-than-baseline but authenticated And remediation options include "Reapply Template" (reset to baseline scope) and "Create Exception" And the finding is written to the security event log with required fields
Scheduled Sweep Discovers Unreported Drift
Given the scheduled sweep interval elapses When the Live Access Diff Monitor performs a full scan of assets, ACLs, and links Then it compares live access against the stored baselines and computes diffs And it creates findings only for drifts that do not have an open finding (no duplicates) And it updates last_seen timestamps on existing open findings that still reproduce And it writes each new finding to the security event log with required fields And it records sweep metadata (start_time, end_time, resources_scanned, new_findings_count) to the event log
One-Click Reapply Template Remediates Drift
Given an open drift finding supports auto-remediation via "Reapply Template" When a user with remediation privileges triggers "Reapply Template" Then the system reverts the affected ACLs and/or link scopes to match the baseline And immediately re-evaluates the resource to verify the drift is resolved And if resolved, the finding status is set to Remediated with remediation_timestamp and actor recorded And if not resolved, the finding status is set to Remediation Failed with error details And a remediation event is written to the security event log including pre/post states
Exception Suppresses Expected Drift Until Expiry
Given a user with exception privileges records a documented exception with subject, scope, reason, approver, and expiry When the exception is saved Then future monitor runs suppress matching drift findings while the exception is active And any matching open findings transition to Exception Active with a link to the exception record And upon exception expiry or revocation, suppression is removed and drift becomes detectable on next event or sweep And all exception create/update/expire actions are written to the security event log with required fields
Security Event Log Captures Drift and Remediation
Given any drift finding is created, updated, remediated, or exception state changes When the system writes to the security event log Then each event includes at minimum: event_type, finding_id, timestamp (UTC ISO-8601), asset_id, link_id (nullable), subject_id (nullable), baseline_version, normalized_diff, severity, action (create|update|remediate|exception), actor (system|user), correlation_id And events are immutable append-only and retrievable by finding_id and correlation_id And log write success is confirmed and surfaced to the calling workflow
Forwarded Link & Inheritance Analyzer
"As a campaign manager, I want visibility into when forwarded or inherited links broaden access so that I can fix risky shares without breaking legitimate distribution."
Description

Identifies forwarded child links and inherited permissions that expand access beyond the baseline. Traverses object graph for releases, tracks, stems, artwork, and AutoKit press pages to analyze propagation of scopes, including shortlink redirects and private stem player embeds. Flags cases where downstream entities have broader read/download privileges or lack watermarking/expiry constraints, and associates them back to the originating asset.

Acceptance Criteria
Detect Forwarded Shortlink Redirect Expanding Access
Given an originating asset with a Role Ring baseline of {read: team-only, download: none} And a child shortlink L1 redirects to L2 which redirects to the asset And any hop in the chain grants broader scopes than the baseline (e.g., read: public or download: granted) When the analyzer runs Then it flags the chain as a forwarded link expansion And includes in the finding: originAssetId, chain=[L1->L2->...->asset], expandedScopes, baselineScopes, reason="forwarded_redirect_expands_access" And de-duplicates identical chains across runs And produces no finding when every hop is equal to or more restrictive than the baseline
Traverse Object Graph Across Releases, Tracks, Stems, Artwork, and Press Pages
Given relationships: release->track, track->stem, release->artwork, release->pressPage, pressPage->embed(private stem player), shortlink->target And potential cycles via backlinks or multiple shortlinks When the analyzer runs on a release Then it traverses all supported edges breadth-first up to depth 6 and follows up to 8 shortlink redirects per path And it detects and breaks cycles so no node is visited more than once per path And it completes analysis for a release with <= 500 nodes and <= 1,500 edges in <= 5 seconds in the baseline environment And it emits a trace for each flagged path including pathNodes, pathEdges, and scopeDiff
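The traversal above is essentially a bounded breadth-first search with a cycle guard. A sketch under stated assumptions: `graph` is an adjacency map, and cycle detection is simplified to a global visited set rather than the per-path visit tracking the criterion describes:

```python
from collections import deque

def traverse(graph, root, max_depth=6):
    """Breadth-first walk of the object graph with a depth cap and cycle guard.

    `graph` maps node -> list of child nodes (release -> track, track -> stem,
    shortlink -> target, and so on). Returns nodes in visit order.
    """
    visited = {root}
    order = [root]
    queue = deque([(root, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue  # depth cap: don't expand children beyond the limit
        for child in graph.get(node, []):
            if child not in visited:  # cycle guard: visit each node once
                visited.add(child)
                order.append(child)
                queue.append((child, depth + 1))
    return order
```

Shortlink redirect chains would be followed the same way, with a separate per-path hop counter capped at 8 as the criterion specifies.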
Flag Missing Watermark or Expiry on Downstream Access
Given an originating asset requiring watermark=true and expiry<=14 days for downloads And a downstream link or embed provides read or download access When the analyzer compares constraint inheritance Then any downstream access lacking watermark or exceeding expiry is flagged And findings include constraintDiff={watermark: missing|inherited, expiryDeltaDays} And private stem player embeds inherited via AutoKit are verified to stream watermarked audio; non-watermarked streams are flagged And no finding is created if downstream constraints are equal to or more restrictive than the baseline
Identify Inherited Scope Expansion Beyond Role Ring
Given a Role Ring baseline for the origin (e.g., collaborators: read, engineers: read+download, public: none) And scopes may be inherited via object relationships and shortlink settings When live access is computed per downstream entity Then any subject that gains scopes beyond the baseline (e.g., public gains read, or collaborators gain download) is flagged with subject, scopeAdded, sourceOfInheritance And manual overrides on any node are distinguished from inherited grants and labeled override=true|false And non-expanding inheritance produces no flag
Associate Findings Back to Origin and Provide Reportable Output
Given any flagged case is detected When the analyzer records the finding Then the finding is associated to originAssetId and appears in the Drift Guard panel for that asset And the API returns a JSON record including: originAssetId, downstreamEntityId, path, scopeDiff, constraintDiff, reasonCode, firstDetectedAt, lastSeenAt, occurrenceCount, severity And findings are grouped by reasonCode and path for display with pagination and counts
Respect Documented Exceptions and TTL
Given a documented exception exists for originAssetId with matching downstreamEntityId or pathHash and reasonCode, with an expiry timestamp When the analyzer runs Then it suppresses creating new findings for matching cases until expiry And it logs suppressed candidates with suppressedBy=exceptionId for audit And upon expiry, the next run re-creates the finding And updating or revoking an exception takes effect within one analyzer run cycle
One-click Reapply Template
"As an artist manager, I want a one-click way to restore intended permissions so that I can quickly correct mistakes without combing through complex ACLs."
Description

Provides a guided action to reapply the current Role Ring template to the selected asset, reversing unauthorized changes. Offers a preflight preview of changes, optional dry-run, and the ability to exclude documented exceptions. Executes atomically with rollback on failure and records an audit trail entry with actor, scope impacted, and remediation results.

Acceptance Criteria
Preflight Preview Excludes Documented Exceptions
Given asset A has deviations from its applied Role Ring template including: a manual override M, a forwarded child link F, an inherited scope drift S, and a documented exception E And the current user has permission to manage access on asset A When the user opens Reapply Template preflight and checks "Exclude documented exceptions" Then the preview lists M, F, and S as pending changes and does not include E And the preview displays total counts by change type (revokes, grants, link removals, scope corrections) And no changes are applied until the user confirms Apply
Dry-Run Reports Impact Without Applying Changes
Given asset A has drift from its Role Ring template And the user selects "Dry-run" in the Reapply Template dialog When the user executes the dry-run Then no permissions, links, or scopes on asset A are changed And the system returns a report identical to the preflight preview And an audit entry is recorded marked as Dry-Run with the projected changes and no side effects
Atomic Apply With Full Rollback on Failure
Given asset A has N pending changes from preflight And apply begins When a failure occurs after at least one change has been attempted Then all changes are rolled back to the exact pre-apply state And the operation status is Failure with an error code and correlation ID And the audit entry records rollback performed, the failed step, and zero net changes
Audit Trail Captures Actor, Scope, and Results
Given any Reapply Template action (dry-run or apply) completes Then a single audit entry exists containing: actor ID, timestamp, asset ID, Role Ring template ID/version, action type (dry-run/apply), exceptions honored, list of changes (before -> after), scopes impacted, outcome (success/failure), duration, and correlation ID And the entry is immutable and retrievable via the audit API within 2 seconds
Idempotent No-Op When No Drift
Given asset A matches its Role Ring template with no drift When the user runs preflight Then the preview shows zero pending changes When the user proceeds to apply Then no modifications are made and the operation completes successfully within 3 seconds And the audit entry records "No changes" outcome
Reapply Reverses Unauthorized Changes and Cleans Links
Given asset A has unauthorized manual grants, unauthorized forwarded child links, and incorrect inherited scopes relative to the Role Ring template When the user applies Reapply Template (excluding documented exceptions) Then unauthorized manual grants are revoked, unauthorized forwarded child links are removed, and inherited scopes are corrected to match the template And resulting access for all principals exactly matches the Role Ring template plus documented exceptions
Only Authorized Users Can Reapply Template
Given user U lacks Manage Access permission on asset A When U attempts to open or execute Reapply Template Then the action is blocked with HTTP 403 (or equivalent) and no changes are made And an access-denied event is logged; no remediation audit entry is created
Exception Workflow with Expiry
"As a project lead, I want to grant temporary, well-documented access beyond the template so that collaborators can work without permanently weakening our security."
Description

Enables authorized users to document and approve exceptions to the baseline with explicit scope, justification, approver, and expiration. Exceptions are enforced as allow rules that the monitor respects, automatically expiring and reverting at end-of-life. Supports time-boxed guest access, watermarking requirements, and download caps to preserve least-privilege while enabling collaboration.

Acceptance Criteria
Time-Boxed Guest Access Exception Creation and Activation
- Given an authorized user submits a guest access exception with identity, resource scope, allowed actions, start/end timestamps, and justification, When the request is saved, Then the exception is recorded with a unique ID, status "Pending Approval", and no access is granted before approval. - Given a pending exception, When an approver approves it, Then access is applied within 60 seconds, limited strictly to the defined scope, and the exception status updates to "Active". - Given an active exception with a future start time, When current time is before start, Then no access is granted; When current time reaches start, Then access begins within 60 seconds. - Then all access outside the specified scope remains denied and logged.
Approval, Rejection, and Audit Trail Integrity
- Given a pending exception, When an approver with the Exception Approver role approves or rejects it, Then the system records approver ID, timestamp, decision, and comment, and the record becomes immutable except via a formal amendment workflow. - Given an exception is rejected, Then no access is granted and the requester is notified via configured channels. - Given an approved exception, Then notifications are sent to requester and project owners including scope, expiry, and conditions. - Given an attempt by a non-approver to approve, Then the action is blocked and logged with a security event.
Automatic Expiry, Revocation, and Reversion to Role Ring
- Given an active exception with expiration T, When current time >= T, Then all access granted by the exception is revoked within 2 minutes and any links/tokens created under the exception are invalidated. - Then base Role Ring permissions are reapplied and any exception-specific overrides are removed without affecting baseline roles. - Then the audit log records the revocation with exception ID, affected identities, and resources, and notifications are sent to requester and approver. - Given an access attempt after expiry, Then the request is denied with 403 and error code EXC-EXPIRED.
Drift Guard Monitor Respect and Flagging Logic
- Given an active approved exception, When Drift Guard runs, Then deviations covered by the exception are not flagged as permission creep. - Given an access change outside any active exception, When Drift Guard runs, Then it is flagged as a manual override and “Reapply Template” preserves any active exceptions while restoring baseline. - Given forwarded child links and inherited scopes created under an exception, Then they are attributed to the exception and not flagged until the exception expires or is revoked. - Given “Reapply Template” is executed, Then base permissions are restored and the active exception remains intact and visible in reports.
Watermarking Enforcement Under Exception Conditions
- Given an exception requires watermarking, When a user downloads assets within scope, Then files are delivered with visible and embedded watermarks containing user ID and exception ID.
- Then watermarking cannot be disabled by the requester or recipient for covered assets during the exception; attempts are blocked and logged.
- Given stem player streaming under the exception, Then streams apply session-bound watermarking where supported, or downloads are blocked if watermarking cannot be applied.
Download Cap and Rate Limits Within Exception Scope
- Given an exception defines a download cap N and time window W, When downloads occur within scope, Then the system counts unique file downloads and blocks further downloads once N is reached within W with error EXC-CAP.
- Then download counts aggregate across shortlinks and forwarded child links associated with the exception and reset only upon approved amendment.
- Given the cap is extended by an approver, Then the new limits take effect within 60 seconds and the audit trail records the amendment; historical counts remain intact.
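One reading of the cap rule above — counting unique files across all links under the exception, and blocking with EXC-CAP once N is reached — can be sketched as follows; the class and method names are hypothetical, and the time window W is omitted for brevity:

```python
class DownloadCap:
    """Tracks unique file downloads under one exception, across all its links."""

    def __init__(self, cap: int):
        self.cap = cap
        self.files_seen = set()  # unique file IDs, aggregated across links

    def try_download(self, file_id: str) -> str:
        if file_id in self.files_seen:
            return "OK"       # re-download of an already-counted file
        if len(self.files_seen) >= self.cap:
            return "EXC-CAP"  # cap reached: block new unique files
        self.files_seen.add(file_id)
        return "OK"

cap = DownloadCap(cap=2)
print(cap.try_download("stems.zip"))    # OK
print(cap.try_download("artwork.png"))  # OK
print(cap.try_download("stems.zip"))    # OK (already counted, not a new unique file)
print(cap.try_download("press.pdf"))    # EXC-CAP
```

Whether a re-download of an already-counted file should also be blocked is a policy question the criterion leaves open; this sketch only blocks *new* unique files past N.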
Exception Extension, Amendment, and Early Revocation
- Given an active exception, When an approver extends expiry or narrows scope, Then the change requires justification, is recorded with approver ID and timestamp, and takes effect within 60 seconds.
- Given a requested increase in scope (broader resources or actions), Then it requires approval before becoming active; until approved, original scope remains in force.
- Given early revocation by an approver, Then access granted by the exception is revoked within 2 minutes, tokens/links are invalidated, and audit/notifications are issued.
- Given a policy-defined maximum exception duration (e.g., 30 days), When an extension exceeds the limit, Then the action is blocked with error EXC-POLICY and no changes are applied.
Drift Alerts & Digest Notifications
"As a small label owner, I want clear, actionable alerts about permission drift so that I can prioritize remediation without being overwhelmed."
Description

Sends in-app, email, and Slack alerts when drift is detected, grouped by asset and severity to reduce noise. Provides configurable thresholds, per-label routing, and a daily or weekly digest. Alerts include actionable context, diff summary, and quick actions for Reapply Template or create Exception. Integrates with Notification Preferences and the audit log.

Acceptance Criteria
Real-time multi-channel drift alert delivery
Given a permission drift is detected for an asset and real-time alerts are enabled in Notification Preferences When the drift event is persisted Then an in-app alert is created within 5 seconds containing asset name, label, severity, drift type, diff summary, actor (if known), timestamp, and deep links to the asset, diff, and audit log And if email alerts are enabled for the label or recipients, an email is sent within 60 seconds with subject prefix "[Drift Guard]", severity, asset, diff summary, and CTAs for Reapply Template and Create Exception And if a Slack route is configured for the label and severity, a Slack message is posted within 30 seconds to the mapped channel with identical context and CTAs And duplicate alerts for the same asset and drift signature within 15 minutes are suppressed and a suppression counter is displayed on the in-app alert
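The 15-minute suppression behavior above can be sketched roughly as follows; the class name and the seconds-based clock are illustrative:

```python
SUPPRESS_WINDOW = 15 * 60  # seconds

class AlertDeduper:
    """Suppress repeat alerts for the same (asset, drift signature) within the window."""

    def __init__(self):
        self.last_sent = {}   # (asset, signature) -> timestamp of the delivered alert
        self.suppressed = {}  # (asset, signature) -> suppression counter shown in-app

    def handle(self, asset: str, signature: str, now: float) -> str:
        key = (asset, signature)
        first = self.last_sent.get(key)
        if first is not None and now - first < SUPPRESS_WINDOW:
            self.suppressed[key] = self.suppressed.get(key, 0) + 1
            return "suppressed"
        self.last_sent[key] = now
        return "delivered"

d = AlertDeduper()
print(d.handle("master.wav", "perm-widened", 0))    # delivered
print(d.handle("master.wav", "perm-widened", 300))  # suppressed (within 15 min)
print(d.handle("master.wav", "perm-widened", 901))  # delivered (window elapsed)
```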
Noise reduction via asset- and severity-based grouping
Given multiple drift changes occur for the same asset within a 10-minute window When alerts are generated Then they are grouped into a single alert thread with severity set to the highest among grouped events And the alert shows a count of grouped changes and a collapsed list of diffs by timestamp And recipients receive at most one email and one Slack message per grouping window per asset And grouping does not delay delivery beyond the first event’s channel SLA
Configurable thresholds and policies per label
Given a label admin configures drift alert thresholds (minimum severity for real-time, drift types to include, and scope filters) When a drift is detected that is below the minimum severity Then no real-time channel messages are sent and the event is queued for the next digest And when a drift type is excluded by policy, no alerts are produced but the event is still logged in the audit log And threshold and policy changes take effect within 2 minutes and are versioned with who/when in the audit log
Per-label routing rules for channels and recipients
Given routing rules map severity and asset type to specific Slack channels, email distributions, and in-app audiences for a label When a High severity drift is detected on a Master asset Then alerts are sent to the configured High severity Slack channel and email list, and shown in-app to users with access to the asset And when a Medium severity drift is detected on an Artwork asset with route = in-app only Then only in-app alerts are delivered and no email or Slack is sent And routing rule evaluation is logged with the selected routes per event in the audit log
Daily and weekly drift digest generation and delivery
Given a label sets a digest frequency (daily or weekly) and a send time in the label’s time zone When the digest window elapses Then a digest is generated including all drift events in the period that were not acknowledged, resolved by Reapply Template, or covered by an approved Exception And the digest groups items by asset and severity, includes counts, top changes, and links to full diff and audit log And the digest is delivered via the label’s configured channels at the scheduled time, with delivery results recorded in the audit log And events already sent as real-time alerts remain included only as summaries without re-notifying recipients
Actionable alerts with quick actions and state transitions
Given an alert contains CTAs for Reapply Template and Create Exception When a user clicks Reapply Template with sufficient permissions Then the Role Ring is reapplied, the drift is resolved, an audit log entry is written with before/after, and the alert state becomes Resolved with a success confirmation And when a user creates an Exception from the alert Then an exception record is created with scope, duration, rationale, approver, and the alert state becomes Acknowledged; the event is excluded from future digests during the exception period And failed actions return an error message and do not change alert state; failures are logged
Respect Notification Preferences and failure fallbacks
Given users and labels have Notification Preferences for Drift Guard channels and quiet hours When a real-time alert is triggered during a user’s quiet hours or for a disabled channel Then that user does not receive that channel notification while team-level routes still deliver as configured And all alert deliveries record per-recipient/channel suppression reasons in the audit log And if a Slack delivery fails (HTTP 4xx/5xx or timeout), a retry is attempted up to 3 times with exponential backoff; on final failure, an email fallback is sent to the route if enabled and the failure is logged

Smart Assign

Auto‑apply Role Rings based on rules tied to metadata, tags, status changes, or email domains (e.g., when status switches to PR, invite the Press list with PR Ring and ForwardTrace quotas). Eliminates repetitive setup and ensures the right people get the right access at the right moment.

Requirements

Rule Engine for Role Rings
"As a label admin, I want to define rules that automatically assign Role Rings and quotas based on status, tags, and domains so that the right collaborators get correct access without manual work."
Description

Introduce a flexible, deterministic rules engine that evaluates TrackCrate entities (releases, assets, collections, users) and context (status transitions, metadata fields, tags, email domains, time windows) to auto-apply Role Rings and related controls. Conditions support field/value matches, regex on email domains, tag add/remove events, and state changes (e.g., Draft→PR). Actions include assigning one or more Role Rings, setting ForwardTrace download quotas, applying expiration dates and watermark policies to downloads, and auto-inviting mapped contact lists (e.g., Press). Rules are idempotent, re-entrant, and re-evaluated on subsequent changes to prevent drift. Scopes include workspace/label-level rules with inheritance and overrides. Provide versioning for rules, enable/disable toggles, and API endpoints for CRUD and evaluation. Ensures the right access is granted at the right time with zero manual steps, reducing setup time and errors across time zones.
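As a rough illustration of the condition/action shape described above — the field names, action strings, and regex are invented for the sketch, not TrackCrate's actual rule schema:

```python
import re

# A rule pairs conditions (status transition, email-domain regex) with actions.
rule = {
    "when": {"status_change": ("Draft", "PR"), "email_regex": r"@press\.example$"},
    "then": ["assign_ring:PR", "set_quota:3"],
}

def matches(rule: dict, event: dict) -> bool:
    """Deterministic condition check: every present condition must hold."""
    cond = rule["when"]
    if "status_change" in cond and cond["status_change"] != (event.get("from"), event.get("to")):
        return False
    if "email_regex" in cond and not re.search(cond["email_regex"], event.get("email", "")):
        return False
    return True

event = {"from": "Draft", "to": "PR", "email": "ana@press.example"}
actions = rule["then"] if matches(rule, event) else []
print(actions)  # ['assign_ring:PR', 'set_quota:3']
```

Because `matches` is a pure function of the rule and event, re-running it with the same inputs always yields the same actions, which is the determinism property the description calls for; idempotency then falls to the action executors.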

Acceptance Criteria
Draft→PR Auto-Apply PR Ring and Controls
Given an enabled workspace-level rule with condition "status changes from Draft to PR" and actions "assign Role Ring 'PR', invite contact list 'Press', set ForwardTrace quota = 3, set download expiration = 7 days, enable watermark" And a release in Draft with a mapped 'Press' contact list When the release status is changed to PR and rules are evaluated Then the Role Ring 'PR' is assigned to the release And all contacts in the 'Press' list are invited with the 'PR' ring And each invited contact has ForwardTrace download quota set to 3 And download links generated for this release expire in 7 days And watermarking is enabled for all downloads created under this rule And re-running evaluation without further changes does not create duplicate invites or modify quotas (idempotent)
Regex Email Domain Assigns Collaborator Ring
Given an enabled rule with condition "user email matches regex /@.*(label|studio)\.co$/i" and action "assign Role Ring 'Collaborator'" And a release where a new user is added without any ring When a user with email alice@coollabel.co is added to the release and rules are evaluated Then the user is granted the 'Collaborator' ring on that release When a user with email bob@gmail.com is added and rules are evaluated Then no ring is assigned by this rule
Tag Add/Remove Triggers Re-evaluation and State Reconciliation
Given an enabled rule with condition "tag 'Embargoed' is present" and actions "assign Role Ring 'Embargo', set download expiration = release_date, enable watermark" And a release initially without the 'Embargoed' tag When the 'Embargoed' tag is added to the release and rules are evaluated Then the 'Embargo' ring is assigned and download expiration and watermark policies are applied When the 'Embargoed' tag is removed and rules are re-evaluated Then the 'Embargo' ring and the expiration/watermark settings applied by this rule are removed (state reconciled)
Idempotent and Re-entrant Evaluation
Given an enabled rule that has already been applied to an entity When the same triggering event is received multiple times or evaluation is invoked repeatedly without relevant state changes Then no duplicate rings or invites are created, quotas are not incremented, and policy values remain unchanged And the evaluation result is identical across runs for the same inputs (deterministic)
Workspace/Label Rule Inheritance and Override
Given a label-level rule R1 and a workspace-level rule R1' that overrides R1 for the same condition and target within the same hierarchy And only R1' is enabled When rules are evaluated for a matching entity Then the actions of R1' are applied and the actions of R1 are not When R1' is disabled and R1 remains enabled Then the actions of R1 are applied
Rule Versioning and Enable/Disable via API
Given a rule with version v1 enabled When a new version v2 of the same rule is created via POST /api/rules/{id}/versions and enabled via PATCH with {"enabled": true} Then v2 becomes the active version for evaluation and v1 is not used And GET /api/rules/{id} returns v2 as current with "enabled": true When v2 is disabled via PATCH with {"enabled": false} Then no versions of the rule are active for evaluation When DELETE /api/rules/{id} is called Then subsequent evaluations do not include this rule and the API returns 204
Auto-Invite Mapped Contact Lists with Deduplication
Given an enabled rule with action "invite contact list 'Press' with Role Ring 'PR'" And the 'Press' list contains contacts A and B, where A is already invited to the target release with 'PR' When the rule is triggered and evaluated Then contact B is invited with the 'PR' ring And contact A is not re-invited or duplicated And the total distinct invite count increases only by 1
Rule Builder & Simulation UI
"As an admin, I want to configure rules visually and preview their impact before enabling them so that I can avoid mis-sharing and tune scope confidently."
Description

Deliver an admin-facing UI to compose rules with condition groups (AND/OR), field pickers for metadata, tag selectors, status transition pickers, and email domain matchers. Provide reusable templates (e.g., “On PR status: invite Press with PR Ring + ForwardTrace quotas”), cloning, and draft mode. Include a simulation/dry-run feature to preview affected users/assets, resulting Role Rings, quotas, and invitations before enabling a rule. Validate for conflicting or incomplete configurations and surface helpful guidance. Support accessibility, localization, and role-based permissions so only authorized admins can create/modify rules.

Acceptance Criteria
Compose Rule with Condition Groups and Field/Tag Pickers
Given an authorized admin opens the Rule Builder When they create a new rule Then they can add a top-level condition group with operator AND or OR And they can add more than one condition to a group And they can nest at least one sub-group inside a group with its own AND/OR operator And the metadata field picker lists available fields from the workspace schema with type-appropriate operators (e.g., equals/contains for text, greater/less for numbers/dates) And the tag selector supports multi-select with autocomplete of existing tags and displays selected tags as removable chips And the rule editor prevents saving if a condition is missing a field, operator, or value
Configure Status Transition Trigger and Email Domain Matching
Given an authorized admin is editing a rule When they add a trigger for status transitions Then they can choose a transition of Any -> PR or a specific From and To status from the defined status list And they can add an email domain matcher that accepts exact domains (e.g., example.com) and wildcard subdomains (e.g., *.news.com) And entering an invalid domain pattern shows an inline error and blocks save And the UI displays a real-time count of current users matching the domain criteria based on directory data
Apply Templates and Clone Existing Rules
Given an authorized admin opens the Templates panel in Rule Builder When they select the template "On PR status: invite Press with PR Ring + ForwardTrace quotas" Then the rule form is auto-populated with matching conditions, actions, and quotas And they can review a read-only preview of the template before applying And they can clone any existing rule; the clone contains identical conditions and actions And the cloned rule is assigned a unique name by appending "(Copy)" and requires confirmation before activation And the system prevents duplicate active rule names
Draft Mode, Activation Gate, and Rule Summary
Given a rule is saved as Draft When the admin views the rule header Then the rule status shows Draft and the rule does not affect access or send invitations When the admin clicks Activate Then the system runs validation and blocks activation if errors exist And on success a confirmation modal displays a summary of triggers, conditions, actions, target rings, and quotas And after confirmation the rule status changes to Active and the Last Updated timestamp and user are recorded And deactivating an active rule immediately stops further evaluations by that rule
Simulation/Dry-Run of Affected Entities and Outcomes
Given a draft or inactive rule with valid configuration When the admin clicks Simulate Then the system evaluates the rule against current workspace data without making changes And it displays counts of affected assets and users, and a tabular preview of sample matches (e.g., top 50) with filters And it lists resulting Role Rings, assigned quotas, and pending invitations that would be generated And it flags potential issues (e.g., quota over-allocation or users already having equivalent access) as warnings And the Simulate action produces no side effects (no invitations sent, no roles changed), which is verified by unchanged audit logs
Conflict Detection, Incomplete Config Validation, and Guidance
Given an admin attempts to save or activate a rule When the configuration is incomplete (e.g., missing action, empty condition value, invalid operator for field type) Then inline errors are displayed next to the offending inputs with specific guidance and links to documentation When the configuration conflicts with existing active rules (e.g., duplicate invitations to the same list, contradictory ring assignments, or mutually exclusive quotas) Then the system shows a conflict panel summarizing the conflicts and identifies the impacted rules by name and link And the Save Draft action remains available, but Activate is disabled until conflicts are resolved or acknowledged where allowed
Permissions, Accessibility, and Localization Support
Given a non-admin user tries to access the Rule Builder or modify rules Then access is denied with a 403-style message and no rule data is exposed Given an authorized admin creates/edits/activates a rule Then an audit log entry records actor, timestamp, action, and before/after diffs And the Rule Builder is fully operable with keyboard only, provides visible focus states, ARIA roles/labels for interactive components, and passes WCAG 2.1 AA color contrast checks And when the locale is switched between en-US and es-ES, all static UI strings are localized, date/time formats adapt, and layouts do not truncate or overlap text
Real-time Event Triggers & Processing
"As a project manager, I want rule-driven access to update within seconds of changes so that collaborators aren’t blocked or overexposed during fast-moving release cycles."
Description

Implement an event-driven pipeline that listens to key TrackCrate events (status changes, metadata updates, tag add/remove, user invites/acceptances, new email domains) and evaluates relevant rules within a target SLA (<=15 seconds). Provide debouncing and batching to handle rapid change bursts, at-least-once processing with idempotency keys, retry/backoff, and a dead-letter queue. Expose health checks, metrics, and alerting for throughput, latency, error rates, and rule evaluation counts. Ensure horizontal scalability to support spikes during coordinated release pushes across time zones.

Acceptance Criteria
SLA Processing for Key Events
Given a key TrackCrate event (status change, metadata update, tag add/remove, user invite/acceptance, or new email domain) is emitted with timestamp T0 When the event enters the pipeline Then the end-to-end time from T0 to completion of all matching Smart Assign rule actions is <= 15 seconds at p99 and <= 5 seconds at p50 for each event type And a completion record is written with eventId, ruleIds applied, start/end timestamps, and outcome=success|noop|error for every processed event
Debounce and Batch Rapid Changes
Given N changes to the same entity within a 5-second window When the pipeline processes these changes Then only one rule evaluation is executed using the latest state as of the last change, and it completes within 15 seconds of the last change timestamp And unrelated entities are batched up to 100 events per batch or flushed every 2 seconds, whichever comes first And no qualifying rule is skipped due to batching or debouncing
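The per-entity debounce described above might look like the following sketch, which collapses a burst of changes into one evaluation of the latest state. A burst here is any chain of changes each arriving within the window of the previous one — one possible interpretation of the criterion:

```python
DEBOUNCE_WINDOW = 5.0  # seconds

def debounce(events, window=DEBOUNCE_WINDOW):
    """events: list of (entity_id, timestamp, state).
    Returns the events that would actually trigger rule evaluation:
    only the latest state of each per-entity burst survives."""
    out = []
    pending = {}  # entity_id -> latest event in the current burst
    for entity, ts, state in sorted(events, key=lambda e: e[1]):
        prev = pending.get(entity)
        if prev is not None and ts - prev[1] < window:
            pending[entity] = (entity, ts, state)  # supersede within the burst
        else:
            if prev is not None:
                out.append(prev)  # previous burst ended: emit its latest state
            pending[entity] = (entity, ts, state)
    out.extend(pending.values())  # flush open bursts
    return out

events = [("rel-1", 0.0, "v1"), ("rel-1", 2.0, "v2"), ("rel-1", 4.0, "v3"), ("rel-2", 1.0, "a")]
print(debounce(events))  # one evaluation per entity: rel-1 at v3, rel-2 at a
```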
At-Least-Once with Idempotency
Given the same eventId is delivered K times within 24 hours When processing occurs Then rule evaluation is performed at most once and all duplicate deliveries are acknowledged without re-triggering side effects And idempotency keys are computed as hash(tenantId, entityId, eventType, changeFingerprint) and stored for at least 24 hours And metrics report deduplicated_count increments by K-1
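The stated key formula, hash(tenantId, entityId, eventType, changeFingerprint), can be sketched as below; SHA-256 and the `|` delimiter are illustrative choices the spec does not mandate:

```python
import hashlib

def idempotency_key(tenant_id: str, entity_id: str, event_type: str, change_fingerprint: str) -> str:
    raw = "|".join([tenant_id, entity_id, event_type, change_fingerprint])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

seen = set()  # stands in for a store with a >= 24h retention window

def process_once(key: str) -> str:
    """At-least-once delivery: duplicates are acknowledged but not re-processed."""
    if key in seen:
        return "deduplicated"
    seen.add(key)
    return "processed"

k = idempotency_key("label-7", "rel-42", "status_change", "Draft->PR")
print(process_once(k))  # processed
print(process_once(k))  # deduplicated
```

In production the `seen` set would be a shared store with at least the 24-hour retention the criterion requires, and the `deduplicated` branch would increment the `deduplicated_count` metric.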
Retry, Backoff, and Dead-Letter Handling
Given a transient downstream error (HTTP 5xx or timeout) during rule action execution When processing fails Then the system retries with exponential backoff and jitter: initial delay 1s, max delay 30s, up to 5 attempts within 2 minutes And upon final failure, the message is moved to the dead-letter queue with reason, sanitized payload reference, and correlationId, and an alert is emitted within 60 seconds And manual requeue from DLQ preserves original eventId and idempotency behavior
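The retry schedule above (initial delay 1 s, capped at 30 s, up to 5 attempts, with jitter) can be sketched with full jitter; whether jitter is full or partial is an implementation choice the criterion leaves open:

```python
import random

def backoff_delays(attempts=5, base=1.0, cap=30.0, rng=random.random):
    """Exponential backoff with full jitter: each delay is uniform in
    [0, min(cap, base * 2**attempt))."""
    delays = []
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))  # 1, 2, 4, 8, 16 (later capped at 30)
        delays.append(rng() * ceiling)
    return delays

# Deterministic rng shows the upper bound of each delay for illustration.
delays = backoff_delays(rng=lambda: 1.0)
print(delays)  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

Worst case, the five uncapped delays sum to 31 s, comfortably inside the 2-minute budget the criterion sets before a message is moved to the dead-letter queue.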
Health Checks, Metrics, and Alerting
Given the service is healthy When GET /health/events is called Then it returns HTTP 200 with JSON fields: queueLagSeconds, workersAvailable, dependenciesOk=true, version And metrics are exposed: event_throughput_per_sec, processing_latency_ms (p50/p95/p99), error_rate, rule_evaluation_count, queue_lag_seconds, dlq_depth labeled by tenantId and eventType And alerts fire when any condition holds for 5 minutes (unless noted): processing_latency_ms_p99 > 15000; error_rate > 2%; queue_lag_seconds > 60; dlq_depth > 0 for 1 minute
Horizontal Scalability Under Release Spike
Given a coordinated release push generates 10,000 events over 5 minutes across 50 tenants When autoscaling is enabled Then the system scales to sustain at least 200 processed events per second within 3 minutes and maintains processing_latency_ms_p99 <= 15000 after scale-out And no messages are lost, and per-entity ordering is preserved throughout the spike and recovery
Tenant Isolation and Rule Scope Enforcement
Given two tenants with overlapping email domains and tag names When tenant A emits qualifying events Then only tenant A’s Smart Assign rules are evaluated and actions apply exclusively to tenant A’s users and assets And cross-tenant triggers are blocked and logged with correlationId and outcome=blocked, with zero side effects in other tenants And every rule action writes an audit log containing tenantId, eventId, ruleId, actor, and outcome
Access Orchestration & Notifications
"As a label owner, I want invites, permissions, quotas, and press-page access to be orchestrated automatically when rules fire so that rollouts are consistent and require no manual coordination."
Description

Coordinate the downstream actions triggered by rules across TrackCrate surfaces: apply Role Rings to releases/assets/collections, set or update ForwardTrace download quotas, enforce expiring and watermarked download policies, update AutoKit press page access, and generate trackable shortlinks where needed. De-duplicate invites, maintain consistent permissions across objects, and respect existing manual overrides where configured. Send customizable notifications (email and in-app) to invitees and owners, with templates, tokens (release name, embargo date), and localization. Log failures and provide recovery workflows for partial successes.

Acceptance Criteria
PR Status Change Orchestration
Given a release has a Smart Assign rule that triggers on status change to "PR" And the rule specifies Role Rings, ForwardTrace quotas, AutoKit access, download policy, and shortlink generation When the release status changes from any non-PR value to PR Then within 60 seconds the specified Role Ring is applied to the release, all child assets, and associated collections And ForwardTrace download quotas are set or updated to the values defined in the rule And expiring and watermarked download policies are enforced for all newly granted access And the corresponding AutoKit press page is granted access per the rule And required trackable shortlinks are generated if none exist, or reused if valid And an orchestration audit log entry is created with a correlation ID, start/end timestamps, and counts of targets acted upon
Invite De-duplication and Permission Merge
Given a contact already has active or pending access to any object in scope via a prior manual invite or rule When a new rule execution would invite the same contact for overlapping scope Then no duplicate invite is sent (exactly one email and one in-app notification max per run per contact) And the contact's Role Rings are merged as the union of privileges across the overlapping scope And the resulting permissions are consistent across releases, assets, and collections for that scope And a single notification is delivered summarizing the consolidated access and scope And the audit log records the de-duplication with references to superseded or merged invites
Manual Overrides Respected
Given an object has manual permission overrides flagged as locked When a Smart Assign rule would change permissions that conflict with locked overrides Then the rule does not downgrade or alter locked permissions And non-conflicting changes are applied as specified And a skip entry is recorded in the audit log listing objects and fields skipped with reason "manual_override" And owners receive a summary notification if any overrides prevented changes, including counts and impacted objects
Expiring and Watermarked Downloads Enforcement
Given a rule requires watermarking and an expiry duration for downloads When an invitee accesses a shortlink or AutoKit page to download assets Then downloads initiated before expiry are served with watermarks embedded per configuration And ForwardTrace records a unique fingerprint per download including invitee ID (if known), IP hash, timestamp, and asset ID And any download attempt after expiry returns an error and no file is served And regenerating a link resets expiry only if allowed by rule; otherwise preserves original expiry And behavior is validated across at least two file types (e.g., audio stems and artwork)
Customizable Notifications with Templates, Tokens, and Localization
Given email and in-app templates exist with tokens such as {{release_name}} and {{embargo_date}}, and locales en, es, and fr are configured When notifications are generated for invitees with a preferred locale Then tokens are replaced with correct values and locale-appropriate date/time formats And fallback to the rule's default locale occurs if the invitee's locale is unsupported And owners can preview the rendered notification before sending And each notification includes at minimum the access summary, primary shortlink, and support contact And delivery outcomes (sent, bounced, opened for email; delivered, seen for in-app) are captured in the log
Failure Logging and Recovery for Partial Success
Given an orchestration run affects multiple targets and at least one action fails (e.g., AutoKit update timeout) When the run completes Then a failure report is recorded with per-target status (success/fail), error codes, and retry eligibility And the owner is notified with a link to the recovery workflow And invoking "Retry failed only" re-executes idempotently on failed targets without duplicating invites, links, or changing already successful permissions And a second failure on the same target increases the attempt count and preserves the original correlation ID
Trackable Shortlink Generation and Reuse
Given a rule specifies that shortlinks must be created for a release and its assets When orchestration runs Then a unique shortlink is created per target object if one does not exist, using the configured namespace and UTM parameters And existing valid shortlinks are reused and not regenerated And each shortlink click is attributed to the invitee where possible; otherwise tagged as "unknown" And shortlinks inherit and enforce the same expiry and watermark policies as the underlying access And shortlink counts and last-click timestamps are visible in the audit log
Conflict Resolution & Safeguards
"As a workspace admin, I want clear precedence and guardrails when multiple rules overlap so that access remains predictable and secure."
Description

Define precedence and merge strategies when multiple rules target the same user/resource (e.g., most-restrictive vs union of Role Rings, last-write-wins by priority index). Prevent invite storms via rate limiting and recipient de-duplication. Provide guardrails such as approval steps for high-impact actions (e.g., external domain blasts), domain verification for organization-scoped rules, and caps on default quotas. Surface conflict warnings in the UI and simulation with clear resolution outcomes before activation.

Acceptance Criteria
Conflict Resolution: Precedence Strategy Selection
- Given multiple Smart Assign rules target the same user/resource with differing Role Rings and quotas And the project’s conflict strategy is set to Most-Restrictive When the rules are evaluated Then the resulting Role Rings equal the intersection of permissions across all targeted rings And the resulting quotas for overlapping metrics equal the minimum value across rules And no permission or quota exceeds the most restrictive input
- Given the same inputs And the project’s conflict strategy is set to Union When the rules are evaluated Then the resulting Role Rings equal the union of permissions across all targeted rings And the resulting quotas for overlapping metrics equal the maximum value across rules, subject to global caps And non-overlapping permissions are included
- Given a change in conflict strategy from Union to Most-Restrictive When the simulation is re-run Then the deltas in permissions and quotas are surfaced per affected user/resource before activation
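The two merge strategies can be sketched as set operations plus a quota fold; this is a simplification — real Role Rings presumably carry more than a flat permission set:

```python
def merge(rules, strategy, quota_cap=None):
    """Merge overlapping rules. Most-Restrictive: intersect permissions, min quota.
    Union: union permissions, max quota, clamped to a global cap if one applies."""
    perms = [set(r["perms"]) for r in rules]
    quotas = [r["quota"] for r in rules]
    if strategy == "most_restrictive":
        merged_perms = set.intersection(*perms)
        merged_quota = min(quotas)
    else:  # "union"
        merged_perms = set.union(*perms)
        merged_quota = max(quotas)
        if quota_cap is not None:
            merged_quota = min(merged_quota, quota_cap)  # cap overrides rule outcome
    return sorted(merged_perms), merged_quota

rules = [{"perms": {"stream", "download"}, "quota": 3},
         {"perms": {"stream"}, "quota": 10}]
print(merge(rules, "most_restrictive"))    # (['stream'], 3)
print(merge(rules, "union", quota_cap=5))  # (['download', 'stream'], 5)
```

The clamp in the union branch is the same safeguard the quota-caps criterion later in this section describes: the cap wins even when the merged quota would be higher.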
Priority Index: Last-Write-Wins
- Given two or more rules assign conflicting values to the same permission or quota for the same user/resource And a priority index is configured for each rule When the conflict cannot be resolved by the selected precedence strategy alone Then the rule with the highest priority index determines the final value
- Given two conflicting rules share the same priority index When the conflict is evaluated Then the rule with the most recent updated_at timestamp determines the final value (last-write-wins) And an audit log entry is recorded with rule IDs, priority indices, timestamps, and chosen winner
- Given the winning rule would violate a global cap or domain safeguard When the resolution is applied Then the cap/safeguard overrides the rule outcome And a warning is displayed in simulation and rule execution logs
Anti-Spam Safeguards: De-duplication and Rate Limiting
- Given multiple rules would invite the same recipient to the same resource within a 24-hour window When invitations are generated Then the recipient receives a single consolidated invite with the resolved Role Rings and quotas And duplicate notifications are suppressed
- Given organization-level and project-level invite rate limits are configured (defaults present) When the number of outgoing invites in a 60-minute window would exceed a limit Then additional invites are queued and not sent immediately And admins receive a rate-limit alert with counts, affected rules, and estimated time to drain And the invites-per-hour actually sent never exceeds the configured limit
- Given queued invites exist due to rate limiting When the next evaluation window opens Then queued invites are sent in FIFO order subject to current limits And a single summary email is sent to admins per window, not one per invite
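The queue-and-drain behavior above might be sketched as follows, with a fixed-window counter standing in for whatever rate-limit algorithm is actually used; class and method names are hypothetical:

```python
from collections import deque

class InviteLimiter:
    """Send up to `limit_per_window` invites per window; queue the rest FIFO."""

    def __init__(self, limit_per_window: int):
        self.limit = limit_per_window
        self.sent_this_window = 0
        self.queue = deque()

    def submit(self, invite: str) -> str:
        if self.sent_this_window < self.limit:
            self.sent_this_window += 1
            return f"sent:{invite}"
        self.queue.append(invite)  # over the limit: queue, don't drop
        return f"queued:{invite}"

    def next_window(self):
        """Window rolls over: drain queued invites in FIFO order, up to the limit."""
        self.sent_this_window = 0
        drained = []
        while self.queue and self.sent_this_window < self.limit:
            self.sent_this_window += 1
            drained.append(self.queue.popleft())
        return drained

lim = InviteLimiter(limit_per_window=2)
print([lim.submit(i) for i in ["a", "b", "c", "d"]])  # a, b sent; c, d queued
print(lim.next_window())                              # ['c', 'd']
```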
Approval Workflow for External Domain Blasts
- Given a rule targets recipients on external (non-verified, non-internal) domains And the resolved recipient count exceeds the configured high-impact threshold When the rule is activated or scheduled to run Then the system requires an explicit approver with approval permission to approve the blast And invitations are blocked until approval is granted
- Given an approver grants approval When the blast executes Then the approval record logs approver ID, timestamp, rule ID, recipient count, and pre-send diff
- Given approval is not granted within the configured SLA When the approval window expires Then the execution is cancelled and an expiration notice is sent to rule owners and approvers
Domain Verification for Org-Scoped Rules
Given an org-scoped rule filters recipients by email domain And the domain is not verified for the organization When attempting to activate the rule Then activation is blocked with a clear error stating the domain must be verified And the UI presents verification steps (DNS or email proof) Given the domain becomes verified When the rule is re-validated Then activation is allowed And simulation reflects the newly eligible recipients without errors Given a domain loses verification status (e.g., DNS check fails) When a scheduled rule run occurs Then execution for that domain is skipped, logged as a safeguard action, and owners are notified
Quota Caps Enforcement for Role Rings
Given default quota caps are configured at org/project levels for ForwardTrace and related metrics When a rule (alone or via merge) would assign quotas exceeding the applicable cap Then the resulting quota is clamped to the cap value And a warning is surfaced in simulation and execution logs indicating the cap applied and the pre-cap value Given a user with Admin permission attempts to raise a default quota above the cap via rule configuration When saving the rule Then the save is rejected with a validation error specifying the cap and how to request a cap increase Given multiple quotas from different rings are merged under Union strategy When the total would exceed cap Then the post-merge quota is limited to cap and the contributing rules are identified in the UI
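The Union-merge-then-clamp behavior above can be sketched as a pure function returning both the final quota and the pre-cap total so the UI can surface the warning and identify contributing rules. The shape of the inputs and the return dict are assumptions:

```python
def merge_quotas_union(contributions, cap):
    """Union-merge quotas contributed by multiple Role Rings, then clamp
    the total to the applicable cap.

    `contributions` maps rule_id -> quota. Returns the final quota plus
    the pre-cap value and the contributing rule IDs for display.
    """
    pre_cap = sum(contributions.values())
    return {
        "quota": min(pre_cap, cap),
        "pre_cap": pre_cap,
        "cap_applied": pre_cap > cap,
        "contributing_rules": sorted(contributions),
    }
```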
Simulation & UI Conflict Warnings Before Activation
Given one or more rules match a test dataset in Simulation mode When the simulation is run Then the UI lists for each affected user/resource: applicable rules, chosen precedence strategy, priority resolution details, merged Role Rings, final quotas, de-dup outcome, rate-limit impact, approval requirements, and any domain/cap safeguards applied And conflicts are highlighted with their resolution rationale Given unresolved high-risk warnings exist (e.g., pending approvals, unverified domains, projected rate-limit breach) When attempting to activate the rule set Then activation is blocked and the UI enumerates the blocking items with direct links to remediate Given all blocking warnings are resolved When activation is retried Then activation succeeds and a simulation summary snapshot is stored for audit
Auditing, Rollback & Reporting
"As a compliance-minded producer, I want a clear history and easy rollback of automated access changes so that I can audit and correct mistakes quickly."
Description

Record a complete audit trail of rule evaluations and resulting actions, including triggering events, matched conditions, applied Role Rings, quotas set, invites sent, and notifications delivered. Present per-release and per-user timelines with filters and export (CSV/JSON) options. Support one-click rollback to revert Role Ring assignments and invitations to a prior state, with safety checks and impact summaries. Expose a reporting view to track adoption, time saved, and common rule outcomes, and provide webhooks/API for external compliance systems. Apply retention policies aligned with privacy requirements.

Acceptance Criteria
Audit Trail Recording for Smart Assign Evaluations and Actions
Given a triggering event (status change, metadata edit, tag change, or email-domain match) When Smart Assign evaluates rules and applies any actions Then an audit entry is written per rule evaluation and per action with fields: eventId (UUIDv4), correlationId, occurredAt (UTC ms), workspaceId, releaseId (optional), subjectUserId (optional), triggerType, ruleId, ruleVersion, matched (true|false), matchedConditions[], actionsApplied[], outcome, actorType (system|user), error (optional) And entries are persisted no later than 500 ms after the action commit (p95) And entries are append-only; any correction is a new event referencing the prior eventId And duplicate writes are prevented via idempotency keys And failures to log are surfaced as system alerts and retried up to 3 times
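The append-only and idempotency requirements above can be sketched with an in-memory store (a real implementation would persist to durable storage and handle the 500 ms p95 and retry/alerting requirements). Field names follow the criteria; the class itself is illustrative:

```python
import uuid
from datetime import datetime, timezone

class AuditLog:
    """Append-only audit store keyed by idempotency key: duplicate writes
    are dropped, and a correction is a new event referencing the prior
    eventId rather than a mutation of it."""

    def __init__(self):
        self._events = []
        self._keys = set()

    def append(self, idempotency_key, **fields):
        if idempotency_key in self._keys:
            return None                      # duplicate write prevented
        event = {
            "eventId": str(uuid.uuid4()),
            "occurredAt": datetime.now(timezone.utc).isoformat(),
            **fields,
        }
        self._keys.add(idempotency_key)
        self._events.append(event)           # append-only; never updated
        return event

    def correct(self, prior_event_id, idempotency_key, **fields):
        """Corrections are new events pointing at the prior eventId."""
        return self.append(
            idempotency_key, correctedEventId=prior_event_id, **fields
        )
```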
Per-Release and Per-User Timelines with Filters and Export
Given audit entries exist for a release or user When the timeline is viewed Then events are shown in reverse chronological order with a page size of 50 and cursor pagination And filters are available for date range, eventType ∈ {evaluation, assignment, inviteSent, notificationDelivered, rollback}, ruleId, actorType, outcome, and free-text search And timestamps display in the viewer’s timezone, with UTC on hover And exporting to CSV or JSON includes only events matching current filters and columns: schemaVersion, eventId, occurredAt, triggerType, ruleId, outcome, actionsApplied, actorType, releaseId, subjectUserId And exports are delivered via a permission-checked, signed URL valid for 24 hours and limited to 250,000 rows per file And export generation streams results so that exports of 250,000 rows complete within 60 seconds (p95)
One-Click Rollback to Prior State with Safety Checks
Given a release has Smart Assign-applied role rings, quotas, invites, and notifications recorded in the audit timeline When a user with ManageRoles permission selects a checkpoint event and initiates Rollback Then the system presents an impact summary listing: roleRing adds/removes, quota resets, invites to cancel/resend, notifications to withdraw, and affected user counts And safety checks block rollback if it would overwrite newer manual changes unless the user explicitly enables Force Rollback And on confirmation, the rollback executes atomically per release; individual item failures are retried up to 3 times and reported And the resulting state matches the selected checkpoint for Smart Assign-managed artifacts only, leaving unrelated changes untouched And a rollback event is recorded linking to the checkpoint and detailing the delta applied
Webhooks and API for External Compliance Systems
Given a workspace has a webhook configured and enabled When audit events are created Then the system POSTs events within 5 seconds in JSON with fields including eventId, occurredAt, schemaVersion, and HMAC-SHA256 signature in the header using the shared secret And deliveries are idempotent with an Idempotency-Key header; non-2xx responses trigger exponential backoff retries for up to 24 hours with jitter And delivery logs display last status, attempt count, next retry, and response code And the API exposes GET /api/audit-events with cursor pagination, filters (date range, eventType, ruleId, outcome, releaseId, subjectUserId), and 200 responses within 300 ms (p95) for pages ≤ 200 items And API access is authenticated, versioned, and rate limited to 120 requests/min per token with 429 responses including Retry-After
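The signing and retry behavior above can be sketched as follows: the sender signs the canonical JSON body with HMAC-SHA256 using the shared secret, the receiver verifies with a constant-time comparison, and failed deliveries back off exponentially with full jitter. The `base`/`cap` delay values are illustrative, not specified by the criteria:

```python
import hashlib
import hmac
import json
import random

def sign_webhook(secret: bytes, payload: dict):
    """Serialize a webhook payload canonically and sign it with
    HMAC-SHA256; the hex digest goes in the signature header."""
    body = json.dumps(payload, separators=(",", ":"), sort_keys=True).encode()
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return body, signature

def verify_webhook(secret: bytes, body: bytes, signature: str) -> bool:
    """Receiver side: recompute over the raw body and compare in
    constant time to avoid timing leaks."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def backoff_schedule(attempt: int, base=5.0, cap=3600.0, rng=random.random):
    """Exponential backoff with full jitter for non-2xx deliveries:
    delay drawn uniformly from [0, min(cap, base * 2**attempt)]."""
    return rng() * min(cap, base * (2 ** attempt))
```

The `Idempotency-Key` header would carry the event's `eventId` so receivers can discard at-least-once duplicates.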
Reporting View for Adoption, Time Saved, and Common Outcomes
Given Smart Assign is in use across releases When an admin opens the Reporting view and selects a timeframe Then metrics display: adoption rate (% releases with ≥1 active rule), auto-actions applied, invites sent, notifications delivered, rollbacks executed, and estimated time saved (auto-actions × configured seconds-per-action, default 45s) And the metrics reconcile to audit counts within 1% for the selected timeframe And charts support grouping by day, week, or month and respect filters for workspace, release status, and ruleId And all widgets support CSV/JSON export with the same filters and signed URLs valid for 24 hours
Retention Policies and Legal Holds for Audit Data
Given a workspace retention period N days is configured When an audit event exceeds N days and is not under legal hold Then PII fields (emails, names, IPs) are irreversibly anonymized and events older than N+7 days are purged from hot storage And admins can apply legal holds per release or subjectUserId to suspend deletion until cleared And retention jobs run daily, produce a purge summary audit event, and never include purged content in API or exports thereafter And retention configuration changes apply prospectively and are permission-gated
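A daily retention pass matching the criteria above — anonymize PII on events older than N days, purge past N+7 days, and skip anything under legal hold — could look like this. The PII field names and event shape are assumptions:

```python
from datetime import datetime, timedelta, timezone

PII_FIELDS = ("email", "name", "ip")   # assumed PII field names

def apply_retention(events, holds, retention_days, now):
    """One retention pass: anonymize PII on events older than N days that
    are not under legal hold, purge events older than N+7 days.

    `holds` is a set of releaseId/subjectUserId values under legal hold.
    Returns (surviving_events, purged_count) for the purge summary event.
    """
    anonymize_before = now - timedelta(days=retention_days)
    purge_before = now - timedelta(days=retention_days + 7)
    survivors, purged = [], 0
    for e in events:
        held = e.get("releaseId") in holds or e.get("subjectUserId") in holds
        ts = e["occurredAt"]
        if not held and ts < purge_before:
            purged += 1                      # dropped from hot storage
            continue
        if not held and ts < anonymize_before:
            e = {k: ("<redacted>" if k in PII_FIELDS else v)
                 for k, v in e.items()}
        survivors.append(e)
    return survivors, purged
```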
Access Control and Secure Delivery for Audit Data and Exports
Given workspace permissions are enforced When a user attempts to view timelines or generate exports Then only users with Owner, Admin, or Compliance roles can view full audit scopes; Contributors see only their own actions; others receive 403 And all views and downloads are themselves audited with viewerUserId, occurredAt, and resourceId And export downloads require a signed, single-use URL bound to the requesting user and expire within 24 hours; URLs can be revoked by admins

Handoff Switch

Move collaborators between Role Rings with one click (e.g., Contributor → Approver → Publicist). Migrates link policies, quotas, and pledges automatically, and uses Recall & Replace to update active links without churn. Smooth, auditable transitions as projects progress.

Requirements

One-click Role Transition UI
"As a project owner, I want to move collaborators between role rings with one click so that I can progress releases without manual policy updates or errors."
Description

Provide a compact control on collaborator cards and the project roster to switch a user’s Role Ring (e.g., Contributor → Approver → Publicist) with a single action. The UI surfaces eligible target roles, required approvals (if any), and a clear preview of changes to link policies, quotas, pledges, and asset access before confirmation. It supports per-user and multi-select batch operations, includes a dry-run summary, and presents blocking validations (e.g., unmet pledges or quota constraints). The control integrates with project permissions, respects tenant-level role templates, and localizes timestamps for distributed teams. The outcome is a fast, low-friction experience that minimizes errors and ensures users understand the impact prior to committing a handoff.

Acceptance Criteria
Single-User One-Click Role Transition Control
Given a collaborator is visible in a project and the acting user has Manage Roles permission, When the Handoff Switch is opened on the collaborator card or roster row, Then only eligible target Role Rings are listed and ineligible roles are hidden or annotated with reason codes. Given the collaborator’s current role is displayed, When the target role menu opens, Then the current role is indicated and the Confirm action is disabled until a different target is selected. Given a valid target role is selected, When the user clicks Confirm, Then the role updates within 2 seconds without page reload and the role badge and permissions reflect the new role immediately.
Impact Preview of Policies, Quotas, Pledges, and Asset Access
Given a target role is selected, When the preview panel opens, Then it displays link policy changes (added/removed/modified count), quota deltas (before/after values and remaining), pledges migration status (migrated/retained/blocked count), and asset access deltas (added/removed asset counts). Given the preview panel is shown, When no changes exist for a category, Then that category is labeled No change. Given the preview panel is shown, When the user cancels or closes the panel, Then no state changes are persisted and the original role remains intact.
Blocking Validations on Unmet Pledges and Quota Constraints
Given unmet pledges or quota limits would be violated by the transition, When the preview is computed, Then blocking validations are displayed with reason codes and affected item counts. Given blocking validations are present, When the user attempts to Confirm, Then the Confirm action is disabled and the UI provides a link or action to view and resolve the issues. Given all blocking issues are resolved, When the preview is recomputed, Then the Confirm action becomes enabled without requiring a page reload.
Required Approvals Visibility and Gating
Given the tenant role template requires approvals for the target role, When a target is selected, Then the UI shows the required approver list and minimum quorum. Given approvals are required and not yet granted, When the user attempts to Confirm, Then the UI prevents confirmation and offers a single action to send approval requests. Given the required approvals are granted, When the target role is reselected or the panel refreshes, Then the UI labels approval state as Satisfied and enables Confirm.
Multi-Select Batch Role Transition with Dry-Run
Given multiple collaborators are selected, When Dry Run is initiated, Then a per-user summary is displayed including target role, policy/quota/pledge deltas, required approvals status, and predicted link updates, without persisting any changes. Given the dry-run results are shown, When the user clicks Confirm, Then the system processes transitions in batch and returns per-user outcomes (Success/Blocked/Skipped) with counts, and partial failures do not roll back successes. Given batch processing completes, When reviewing results, Then per-user messages include reason codes for failures and a selector to retry only failed items.
Recall & Replace Updates Active Links Without Churn
Given the collaborator has active shortlinks and AutoKit press pages, When the role transition is confirmed, Then active links are updated in place (IDs unchanged) within 60 seconds, preserving analytics continuity and UTM parameters. Given links are updated, When a recipient opens a previously shared link, Then policy and watermarking changes are applied and no new link is created. Given link updates are propagated, When inspecting analytics before and after the transition, Then cumulative metrics remain continuous with no data reset.
Permissions, Templates, Localization, and Audit Trail
Given the acting user lacks Manage Roles permission for the project, When viewing collaborator cards, Then the Handoff Switch control is hidden or disabled with an explanatory tooltip. Given tenant-level role templates restrict certain transitions, When opening the target list, Then only transitions allowed by the template are presented. Given a transition is confirmed, When the audit log is queried, Then an entry exists with actor, timestamp localized to the viewer’s locale/time zone, from→to role, approvals reference, link update counts, and validation outcomes, and it is retrievable within 2 seconds.
Policy, Quota & Pledge Migration Engine
"As an admin, I want role handoffs to migrate policies, quotas, and pledges automatically so that governance remains consistent without manual reconfiguration."
Description

Implement a backend service that atomically migrates all role-bound attributes when a handoff occurs, including link access policies, download quotas, publishing permissions, and pledge obligations. The engine maps source→target role templates, preserves explicit overrides, and recalculates effective policies across assets (stems, artwork, press kits). It performs transactional updates with rollback-on-failure, is idempotent for retried events, and emits structured events for observability. The service enforces guardrails (e.g., cannot downgrade below active pledge requirements) and merges tenant defaults with project-specific rules, ensuring consistent outcomes across projects and time zones.

Acceptance Criteria
Atomic Handoff: Contributor → Approver (All Assets)
Given a project with stems, artwork, and press kits and a user in Contributor role with explicit overrides and active trackable links When a Handoff Switch to Approver is executed for the user Then the engine migrates access policies, download quotas, publishing permissions, and pledge obligations atomically in a single transaction And Recall & Replace updates all active links to reflect the new effective policies without changing URLs And explicit overrides are preserved and re-applied on top of the Approver template And effective policies across all assets are recalculated and persisted with a new policy version And an audit record is written referencing all updated objects and the actor And the migration completes within 5s for ≤500 assets and ≤2,000 active links And the response includes correlation_id, counts_migrated, links_updated, overrides_preserved
Transactional Rollback on Partial Failure
Given a handoff in progress and a simulated failure on writing effective policy for any asset When the failure occurs Then the transaction is rolled back and no partial changes persist in databases or caches And end-user links continue to resolve with pre-handoff behavior (0 policy-related 4xx/5xx during the window) And an audit failure record is written with error_code and rolled_back=true And the API returns a non-2xx with a retryability flag consistent with error type
Idempotent Retry for Duplicate Handoff Events
Given a handoff request with idempotency_key K and correlation_id C that is processed successfully When the same request (same key and payload) is retried 1..N times within 24h Then all side effects are applied exactly once And subsequent retries return 200 with idempotent=true and no additional mutations And only one success audit event exists for correlation_id C
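The exactly-once contract above can be sketched with a result cache keyed by idempotency key: the first request runs the migration, and any retry returns the stored result flagged `idempotent=true` without re-running side effects. This uses an in-memory cache for illustration; a real service would persist keys with the 24 h TTL:

```python
class HandoffProcessor:
    """Exactly-once handoff execution keyed by idempotency key."""

    def __init__(self, migrate):
        self.migrate = migrate       # callable performing the side effects
        self._results = {}           # idempotency_key -> first result

    def handle(self, idempotency_key, correlation_id, payload):
        if idempotency_key in self._results:
            cached = dict(self._results[idempotency_key])
            cached["idempotent"] = True   # retry: no additional mutations
            return cached
        result = {
            "status": 200,
            "correlation_id": correlation_id,
            "outcome": self.migrate(payload),
            "idempotent": False,
        }
        self._results[idempotency_key] = result
        return result
```

A production version would also reject a retry whose payload differs from the original for the same key, since that indicates a client bug rather than a retry.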
Guardrails: Prevent Downgrade Below Active Pledges
Given the user has active pledges requiring Approver-level publishing permission until Milestone M When attempting to handoff the user to Contributor (a downgrade below requirements) Then the migration is blocked And no policy, quota, or link changes are applied And the API returns 409 CONFLICT with error_code=PLEDGE_GUARDRAIL and details of violating pledges And an alert event is emitted to project admins with suggested allowed target roles
Merge and Precedence of Rules and Overrides
Given tenant defaults T, project-specific rules P, target role template R, and user explicit overrides O When computing the effective policy during migration Then precedence is O > P > T > R for each policy attribute And the computed effective policy is applied uniformly across stems, artwork, and press kits And each policy attribute persists metadata source_of_truth in {override, project, tenant, template} And the emitted event includes a policy_diff summary with counts for added, removed, and changed attributes
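The O > P > T > R precedence above amounts to layering the policy maps from lowest to highest precedence, tagging each attribute with its `source_of_truth` as required. Input shapes are assumptions (attribute → value maps):

```python
def effective_policy(template, tenant, project, overrides):
    """Merge policy layers with precedence O > P > T > R per attribute.

    Missing attributes fall through to the next layer; each resolved
    attribute records which layer supplied its value.
    """
    layers = [
        ("override", overrides),   # O: user explicit overrides
        ("project", project),      # P: project-specific rules
        ("tenant", tenant),        # T: tenant defaults
        ("template", template),    # R: target role template
    ]
    merged = {}
    for source, policy in reversed(layers):   # apply lowest precedence first
        for attr, value in policy.items():
            merged[attr] = {"value": value, "source_of_truth": source}
    return merged
```

Diffing the merged result against the pre-handoff policy yields the added/removed/changed counts for the `policy_diff` summary in the emitted event.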
Observability: Structured Events and Metrics
Given any handoff outcome (success or failure) When the engine completes processing Then a structured event is emitted with schema including correlation_id, project_id, actor_id, source_role, target_role, counts, duration_ms, idempotency_key, outcome, error_code (nullable), rolled_back (bool) And PII and secrets are redacted according to policy And audit trail stores before/after policy snapshots with hashes And metrics expose success_count, failure_count, p95_duration_ms, and recall_replace_updates labeled by tenant and role_pair
Time-Zone Consistency for Expirations and Quota Windows
Given assets and links with expiration policies and quota windows, and collaborators in multiple time zones When a handoff occurs at any wall-clock time Then all calculations use UTC-normalized timestamps And link expiration instants remain unchanged (absolute time) while local displays differ only by timezone formatting And quota windows maintain their original boundaries (no unintended reset/shift) And users in JST and PST observe identical absolute expiration instants and remaining quota for the same link
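The invariant above — one absolute expiration instant, rendered differently per viewer — follows directly from storing UTC and converting only at display time, as this small sketch (illustrative helper name) shows:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def expiry_views(expires_at_utc: datetime, zones):
    """Render one absolute expiration instant per viewer time zone.

    Local displays differ only by formatting; the underlying instant is
    identical, so a handoff never shifts an expiry or quota window.
    """
    assert expires_at_utc.tzinfo is timezone.utc   # store UTC only
    return {z: expires_at_utc.astimezone(ZoneInfo(z)) for z in zones}
```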
Recall & Replace Propagation
"As a publicist, I want active links to reflect updated policies after a handoff so that I don’t have to resend or recreate distribution links."
Description

Extend the existing Recall & Replace mechanism to propagate role changes to all active assets and shortlinks without breaking URLs. When a handoff occurs, the system reissues access tokens, updates AutoKit press page visibility, stem player permissions, and watermark/expiry parameters according to the new role. Recipients accessing old links are transparently subject to updated policies, with optional grace periods and per-project messaging. The process runs asynchronously with progress reporting, deduplicates overlapping updates, and guarantees eventual consistency while maintaining uninterrupted link availability.

Acceptance Criteria
URL Continuity During Handoff
Given a project with active shortlinks and asset links under role "Contributor" When a Handoff Switch moves the collaborator to role "Approver" Then all existing URLs resolve without 404/410 for the duration of propagation And the 5xx error rate remains below 0.1% during the first 10 minutes post-handoff And p95 response latency stays within 1.5x the pre-handoff baseline during the propagation window
Access Token Reissue and Policy Enforcement
Given N active links and shortlinks associated to the collaborator's current role When the Handoff Switch is executed Then new access tokens are issued for all affected links and old tokens are invalidated within 60 seconds And requests using old URLs are transparently served under the new role policy without requiring the recipient to obtain a new URL And quotas and pledges reflect the new role immediately after token reissue And an audit log entry records the token reissue and role change with link IDs and timestamps
AutoKit Press Page Visibility Propagation
Given one or more AutoKit press pages tied to the project When the collaborator role changes via Handoff Then press page modules (e.g., assets, bios, contacts) reflect the new role’s visibility within 120 seconds And modules not permitted by the new role return 403 to unauthenticated or disallowed viewers And page cache is invalidated so that the first subsequent view after propagation shows the updated visibility
Stem Player Permissions, Watermark, and Expiry Update
Given a private stem player and downloadable assets governed by the collaborator’s role When the collaborator is moved to a new role via Handoff Then stream/play permissions, download allowances, and expiry windows are enforced per the new role within 120 seconds And newly downloaded assets are watermarked per the new role’s watermark template on first download after propagation And attempts exceeding the new role’s limits are blocked with HTTP 429/403 and an explanatory message
Grace Period and Per-Project Messaging on Old Links
Given a project-level grace period G is configured and custom messaging is enabled When a Handoff occurs Then recipients accessing existing URLs see a banner/message reflecting the pending policy change within 5 minutes And old role quotas and permissions remain in effect until the grace period expires And at grace expiry, subsequent requests are enforced under the new role without requiring new URLs, with the banner updated to reflect the change
Asynchronous Propagation with Progress Reporting and SLA
Given propagation runs asynchronously across assets, links, and pages When a Handoff is initiated Then a progress API and UI surface total, processed, succeeded, failed counts and percent complete, updating at least every 5 seconds And 99% of affected artifacts reflect the new role policy within 2 minutes and 100% within 10 minutes barring retriable failures And failures are reported per item with retry status, without blocking unaffected items
Deduplication and Idempotent Concurrency Handling
Given overlapping or rapid successive Handoffs for the same collaborator and project When propagation jobs are queued and executed Then updates are deduplicated so each affected link/asset is processed once per effective role change And the final state reflects the last-in-time Handoff event by timestamp And retries are idempotent, producing no duplicate tokens, watermarks, notifications, or audit entries
Access Revalidation & Permission Sync
"As a security-conscious label manager, I want permissions to sync instantly after role changes so that former access isn’t lingering and new access works immediately."
Description

On successful handoff, trigger immediate revalidation of collaborator permissions across TrackCrate: asset library access (stems, artwork, press), folder scopes, private stem player, and download endpoints. Invalidate and reissue tokens where required, update watermarking rules, and adjust expiry windows. The process supports large catalogs via batched jobs, includes rate-limiting to avoid churn, and provides a reconciliation report of granted/revoked entitlements. Integration points include CDN cache invalidation for protected content and consistent updates to mobile and web clients.

Acceptance Criteria
Immediate Revalidation on Handoff
Given a collaborator’s role is changed via Handoff Switch and the operation returns success When the revalidation process starts Then their access rights for asset library (stems, artwork, press), folder scopes, private stem player, and download endpoints are updated within 5 seconds for catalogs ≤ 500 assets And for catalogs > 500 assets, the first batch is applied within 15 seconds and all batches complete within 15 minutes And all subsequent API permission checks reflect the new role And the operation is idempotent if retried within 10 minutes (no duplicate work, consistent result) And no resource retains permissions from the previous role after completion
Token Invalidation and Reissue
Given the collaborator has active access tokens, refresh tokens, and signed download URLs/shortlinks When the handoff revalidation runs Then all tokens tied to the old role are invalidated immediately And new tokens are issued and bound to the new role policies And existing active shortlinks are updated in place via Recall & Replace without changing public URLs And affected sessions receive a 401 on the next request and succeed upon re-auth within 2 requests And no more than 1 failed request per client session occurs during the transition window
Watermarking and Expiry Policy Sync
Given watermarking rules or expiry windows differ between roles When handoff completes Then new downloads apply the new watermark template and expiry TTL immediately And any previously issued signed URLs with longer TTLs are revoked and reissued respecting the new expiry within 10 seconds And cached watermarked assets tied to the previous policy become inaccessible within 60 seconds And audit logs record old and new policy values for the collaborator
Batched Revalidation for Large Catalogs
Given a collaborator has access to more than 5,000 assets When revalidation runs Then assets are processed in batches of up to 500 items with a maximum concurrency of 5 workers (configurable) And downstream rate limiting keeps 429/5xx error rate under 1% per minute And transient failures are retried with exponential backoff up to 3 times per asset And progress metrics publish every 30 seconds with processed/succeeded/failed/pending counts And the job can resume after interruption without duplicating side-effects
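The batching and retry shape above can be sketched sequentially as follows; worker-pool concurrency (max 5 workers) is elided, and `sleep` is injectable so the backoff is testable. Names and the failure model are assumptions:

```python
import time

def revalidate_batched(asset_ids, revalidate, batch_size=500,
                       max_retries=3, sleep=time.sleep):
    """Revalidate assets in bounded batches with per-asset retries.

    `revalidate` raises on transient failure; each asset is retried up to
    max_retries times with exponential backoff before counting as failed.
    Returns the counts a progress publisher would emit every 30 seconds.
    """
    counts = {"succeeded": 0, "failed": 0}
    for start in range(0, len(asset_ids), batch_size):
        for asset_id in asset_ids[start:start + batch_size]:
            for attempt in range(max_retries + 1):
                try:
                    revalidate(asset_id)
                    counts["succeeded"] += 1
                    break
                except Exception:
                    if attempt == max_retries:
                        counts["failed"] += 1
                    else:
                        sleep(2 ** attempt)   # exponential backoff
    return counts
```

Resumability after interruption would come from persisting a cursor over `asset_ids` plus idempotent per-asset revalidation, so reprocessing an asset has no duplicate side effects.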
CDN Cache Invalidation for Protected Content
Given protected content is cached at the CDN When permissions or watermark/expiry policies change Then targeted cache invalidations are issued for affected paths within 5 seconds And updated policy is enforced globally within 60 seconds for new requests And no full-site purge occurs And invalidation request IDs are captured for reporting
Reconciliation Report and Audit Trail
Given a handoff-triggered revalidation has executed When the job completes or times out Then a reconciliation report is produced within 60 seconds including: collaborator ID, old→new role, start/end timestamps, counts of entitlements granted/revoked/unchanged, tokens invalidated/reissued, assets processed/succeeded/failed, cache invalidations issued, client notifications sent And the report is available to admins in the project activity log and downloadable as JSON and CSV And failed items include error codes and retry status And the job outcome is marked Pass if failure rate < 0.5% and tokens/permission updates/cache invalidations succeeded; otherwise Fail with reasons
Client Consistency Across Web and Mobile
Given the collaborator is signed in on TrackCrate web and mobile clients When handoff revalidation occurs Then clients receive a push or long-poll signal to refresh permissions within 10 seconds And UI access checks reflect the new role within 15 seconds without requiring app restart And offline clients reconcile on next sync and do not regain access to revoked resources And download attempts with revoked URLs return a standardized error code and retry guidance
Audit Trail & Notifications
"As a compliance officer, I want detailed, exportable logs and notifications for handoffs so that I can audit changes and inform stakeholders appropriately."
Description

Record an immutable audit trail for each handoff with actor, target user, before/after roles, affected policies/quotas/pledges, timestamps, and propagation outcomes. Provide project-level and tenant-level views with filters and export. Notify impacted users and owners through in-app notifications and email/webhooks, including a concise summary of changes and any required follow-ups (e.g., approvals or pledge updates). Ensure time zone–aware presentation and permission-based visibility so sensitive details are limited to authorized roles.

Acceptance Criteria
Immutable audit record on role handoff
Given an authorized actor initiates a Handoff Switch for a target user in a project When the handoff is confirmed Then a single audit record is created containing actor_id, actor_role, target_user_id, project_id, tenant_id, before_role, after_role, affected_policies, affected_quotas, affected_pledges, handoff_id, correlation_id, timestamp_utc And the audit record is immutable (no update or delete allowed via UI or API) and any mutation attempt returns 403 and is separately logged And the audit record is visible in project and tenant audit views within 2 seconds of handoff completion And creating the same handoff with the same correlation_id does not create a duplicate audit record
Propagation outcomes captured for Recall & Replace
Given active shortlinks are impacted by the handoff When Recall & Replace propagation runs Then the audit record captures per-propagation metrics: total_count, updated_count, skipped_count, failed_count, duration_ms, started_at_utc, finished_at_utc And each failure includes error_code and error_message with retriable true/false and next_retry_at_utc when applicable And a propagation_status of success, partial, or failed is stored and displayed in audit views
Project-level audit view with filters and export
Given a project owner opens the project audit trail When they filter by date range, actor, target user, role transition, outcome status, and policy/quota/pledge changes and paginate to page size 50 Then the system returns the correct filtered results within 1 second and sorts by timestamp descending by default And selecting Export CSV or Export JSON produces a file containing the filtered set with UTC timestamps and the viewer's timezone column And empty filter results render a no-events state without errors And column totals and counts reflect only the filtered results
Tenant-level visibility and field redaction
Given a tenant admin opens the tenant audit trail When they view entries Then only events from projects within the tenant and within their permission scope are listed And sensitive fields (e.g., pledge amounts and quota limits) are fully visible only to authorized roles; other roles see these fields redacted as "Restricted" And attempting to fetch an audit record outside scope returns 404 And exports respect the same visibility and redaction rules
Time zone–aware timestamp presentation
Given a user has a preferred timezone set When they view audit entries Then all timestamps display in the preferred timezone with offset and an option to toggle to UTC And changing the preferred or session timezone immediately updates displayed timestamps and converts date-range filters correctly And sorting is performed by UTC time to ensure correct order across DST transitions And exports include both timestamp_utc and timestamp_local with the IANA timezone identifier
In-app and email notifications to impacted users and owners
Given a handoff changes a user’s role or affects policies, quotas, or pledges When the handoff completes Then each impacted user and the project owner receive an in-app notification within 10 seconds containing actor, before_role, after_role, a concise summary of policy/quota/pledge changes, propagation outcome summary, and required follow-up actions with deep links And if email notifications are enabled for a recipient, an email with the same summary is delivered within 2 minutes; if disabled, no email is sent And notifications are deduplicated per recipient per handoff_id and retried up to 3 times on transient failures with exponential backoff And notification delivery outcomes (success/failure) are recorded against the audit record
Webhook event emission for external systems
Given webhooks are configured for the tenant When a handoff completes Then a POST request is sent to each active endpoint within 10 seconds with a JSON payload containing event_id, idempotency_key, handoff_id, actor, target_user, before_role, after_role, policy/quota/pledge diffs, propagation outcomes, project_id, tenant_id, timestamp_utc And the request is signed with a shared secret and includes signature and key_id headers And delivery is at-least-once with 3 retries on 5xx or timeout using exponential backoff (30s, 2m, 10m) And webhook delivery results (status code, attempts, last_attempt_at_utc, error_code) are visible in the audit record
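The signing requirement above can be sketched as follows; the header names, the JSON canonicalization, and the helper name are illustrative assumptions, since the criterion only mandates that a signature and key_id accompany the request:

```python
import hashlib
import hmac
import json

# Retry schedule from the criterion: 30s, 2m, 10m (exponential backoff).
RETRY_DELAYS_S = [30, 120, 600]

def sign_webhook(payload: dict, secret: bytes, key_id: str) -> dict:
    """Build headers for a signed handoff webhook delivery.

    Canonicalizes the body (sorted keys, no whitespace) so sender and
    receiver hash identical bytes. Header names are hypothetical.
    """
    body = json.dumps(payload, separators=(",", ":"), sort_keys=True)
    signature = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    return {
        "Content-Type": "application/json",
        "X-Signature": f"sha256={signature}",
        "X-Key-Id": key_id,
    }
```

A receiver would recompute the HMAC over the raw request body and compare it with `hmac.compare_digest` before trusting the payload; the `idempotency_key` in the payload lets it discard at-least-once redeliveries.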
Rollback & Safety Net
"As a producer, I want the ability to undo a handoff safely so that I can recover quickly from mistakes without breaking links or access."
Description

Enable a reversible handoff window with a pre-change snapshot of policies, quotas, pledges, and link states. Provide a one-click rollback that re-applies previous settings and triggers Recall & Replace to restore prior link behavior. Include conflict detection (e.g., new pledges created post-handoff) with guided resolution, and maintain a change journal for partial restores when full rollback is not possible. Offer a dry-run validator that flags risks before committing a handoff in complex projects.

Acceptance Criteria
Snapshot on Handoff Commit
Given a user confirms a Handoff Switch for one or more collaborators, When the handoff is committed, Then the system captures an immutable pre-change snapshot including policies, quotas, pledges, link states, role assignments, initiator, timestamp, and scope before any modifications apply; And assigns a unique snapshot ID; And records the snapshot in the change journal; And the snapshot is retrievable via UI and API; And capture completes within 2 seconds for projects with up to 200 assets.
Rollback Window Enforcement
Given a snapshot exists for a completed handoff, When the elapsed time since the handoff is within the configured rollback window (default 72 hours), Then the One-Click Rollback action is enabled and indicates remaining time; When the elapsed time exceeds the window, Then full rollback is disabled with an explanatory message; And Partial Restore remains available; And an admin override (if configured) requires Handoff:Admin permission and is logged in the change journal.
One-Click Rollback Restores Prior State
Given a valid snapshot within the rollback window and no unresolved conflicts, When the user triggers One-Click Rollback, Then pre-change policies, quotas, pledges, and role assignments are re-applied atomically; And Recall & Replace executes to restore prior link behavior without generating new shortlink URLs; And all affected links revert access rules, expirations, and watermarking to snapshot values; And the system reports counts of restored items and any skips; And the operation completes without data loss and creates a new journal entry referencing the snapshot ID.
Conflict Detection and Guided Resolution
Given post-handoff changes exist (e.g., new pledges, modified quotas, new or edited links), When a rollback is initiated, Then the system detects and lists each conflict before applying changes; And provides per-item choices to Keep Post-Handoff, Revert to Snapshot, or Duplicate as Draft; And blocks rollback until all conflicts are resolved or the user switches to Partial Restore; And presents a final resolution summary requiring explicit confirmation; And records chosen resolutions and outcomes in the change journal.
Partial Restore via Change Journal
Given full rollback is blocked or the rollback window has expired, When the user selects specific entities to restore from a snapshot (policies, quotas, pledges, designated links), Then only the selected entities are restored with dependency validation; And items with unresolved conflicts are skipped with reasons; And the system guarantees consistent state (no orphaned references); And a detailed journal entry lists restored items, skipped items, and reasons; And no duplicate entities are created unless the user selected Duplicate as Draft.
Dry-Run Validator Pre-Handoff
Given a pending Handoff Switch on a project, When the user runs Dry Run, Then the system simulates the handoff and potential rollback, enumerates all intended changes, and flags risks including quota overages, pledge conflicts, permission gaps, and non-revertible external effects; And outputs a report with per-item severity and a rollback readiness score; And makes no data changes; And returns a success indicator when no blocking risks are found and a non-success indicator when blocking risks exist.
Audit Trail and Permissions
Given role-based access controls are enforced, When any snapshot creation, rollback, partial restore, or dry-run occurs, Then an immutable audit record is written with actor, timestamp, scope, snapshot ID, items affected, and outcomes; And only users with Handoff:Manage (or higher) can execute rollback and partial restores; And unauthorized users see disabled actions with explanatory messaging; And audit records are queryable and exportable via UI and API.

Ring Insights

Track engagement and control health by Role Ring: invites accepted, time-to-first-play, approval velocity, and leak/tamper incidents. Compare rings across projects to find bottlenecks and refine templates. Data-driven tuning speeds releases while preserving security.

Requirements

Unified Ring Event Tracking & Schema
"As a label operations manager, I want accurate, ring-level engagement and security metrics aggregated from all relevant events so that I can trust the insights to drive decisions across projects."
Description

Implement a normalized analytics data model and ingestion pipeline that captures and aggregates per-project, per-Role Ring events: invites sent/accepted, time-to-first-play, approval actions (approve/request changes) with timestamps, and leak/tamper signals originating from watermark beacons and shortlink anomalies. Provide derived metrics (acceptance rate, median time-to-first-play, approval velocity distributions, incident rate) and ring-level rollups with support for cross-project joins and time-series queries. Include idempotent event ingestion, late-arrival handling, clock-skew tolerance, schema versioning, backfill jobs for legacy logs, data retention policies, and PII minimization with compliant identifiers. Expose queryable materialized views or APIs optimized for the Insights Dashboard and benchmarking features, with freshness SLAs and monitoring.

Acceptance Criteria
Idempotent Event Ingestion for Role Ring Events
- Given an event with event_id E and identical payload is received N>=2 times within 24h, When the pipeline ingests the events, Then exactly one record exists in analytics.events and a dedup_count metric increments by N-1.
- Given an event with the same event_id E but a different payload checksum, When ingested, Then the event is quarantined with reason="event_id_collision" and an alert is emitted within 5 minutes.
- Given two events with distinct event_id values and identical payload, When ingested, Then both are stored and no dedup occurs.
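A minimal in-memory sketch of the dedup/quarantine behavior described above; plain dicts stand in for the analytics.events and quarantine tables, and all names are illustrative:

```python
import hashlib
import json

def ingest(store: dict, quarantine: dict, event: dict, stats: dict) -> None:
    """Idempotent ingestion: dedup exact replays on event_id, quarantine
    same-id events whose payload checksum differs."""
    eid = event["event_id"]
    checksum = hashlib.sha256(
        json.dumps(event["payload"], sort_keys=True).encode()
    ).hexdigest()
    if eid in store:
        if store[eid]["checksum"] == checksum:
            # Exact replay: keep the single stored record, count the dup.
            stats["dedup_count"] = stats.get("dedup_count", 0) + 1
        else:
            # Same id, different payload: do not overwrite; quarantine.
            quarantine[eid] = {"reason": "event_id_collision",
                               "checksum": checksum}
        return
    store[eid] = {"payload": event["payload"], "checksum": checksum}
```

Distinct event_ids with identical payloads fall through to the final branch and are both stored, matching the third criterion.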
Late-Arrival Handling and Aggregation Corrections
- Given a valid event with event_time up to 14 days older than ingest_time, When ingested, Then downstream materialized rollups are corrected and visible in Insights within 10 minutes of ingestion.
- Given a valid event older than the 14-day lateness threshold, When ingested, Then it is rejected from default ingestion with reason="beyond_lateness_threshold" and logged; backfill jobs may override via an allowlist.
- Given late-arrival corrections, When a consumer queries the affected day/ring, Then acceptance_rate, median_time_to_first_play, approval_velocity, and incident_rate reflect the new totals without double-counting.
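The lateness gate above reduces to a pure function; the allowlist flag models the backfill override, and the function name is illustrative:

```python
LATENESS_THRESHOLD_S = 14 * 24 * 3600  # 14-day lateness threshold

def accept_late_event(event_time: float, ingest_time: float,
                      allowlisted_backfill: bool = False) -> tuple:
    """Accept events up to 14 days late (rollups are corrected
    downstream); reject older ones unless a backfill allowlist applies."""
    lateness = ingest_time - event_time
    if lateness <= LATENESS_THRESHOLD_S or allowlisted_backfill:
        return ("accepted", None)
    return ("rejected", "beyond_lateness_threshold")
```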
Clock-Skew Tolerance and Canonical Timestamping
- Given events for the same actor may have event_time differing from ingest_time by up to ±10 minutes, When computing metrics, Then event_time is used as the canonical timestamp and ingest_time is preserved for lineage.
- Given a computed duration (e.g., time_to_first_play) would be negative due to ordering anomalies, When calculating, Then the value is clamped to 0 and the contributing events are tagged clock_skew_outlier=true.
- Given skew exceeds ±10 minutes, When ingested, Then the event is accepted, tagged clock_skew_outlier=true, and included in aggregates; outlier counts are exposed in a monitoring view.
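A sketch of the canonical-timestamp and clamping rules; the tolerance comes from the criteria, while the function names and record shape are illustrative:

```python
SKEW_TOLERANCE_S = 600  # ±10 minutes

def canonicalize(event_time: float, ingest_time: float) -> dict:
    """event_time is canonical; ingest_time is kept for lineage; events
    whose skew exceeds the tolerance are tagged but still accepted."""
    return {
        "canonical_ts": event_time,
        "ingest_ts": ingest_time,
        "clock_skew_outlier": abs(ingest_time - event_time) > SKEW_TOLERANCE_S,
    }

def clamped_duration(start_ts: float, end_ts: float) -> tuple:
    """Clamp negative durations (ordering anomalies) to 0; the second
    element flags the contributing events as clock-skew outliers."""
    d = end_ts - start_ts
    return (max(d, 0.0), d < 0)
```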
Schema Versioning and Backward-Compatible Upcasting
- Given producers send events with schema_version v1 or v2 where v2 is additive, When ingested, Then both versions are accepted and normalized to the unified model via an upcaster.
- Given an event with an unknown major version (e.g., v3.x with breaking changes), When ingested, Then it is quarantined with reason="unsupported_schema_version" and surfaced in the schema registry alert feed within 5 minutes.
- Given a schema change is deployed, When querying the data dictionary, Then JSON Schemas and field lineage for each version are available and the migration status is recorded.
Legacy Log Backfill with Exactly-Once Semantics
- Given historical logs in S3 matching the legacy format for a date range, When executing the backfill job, Then events are parsed and mapped to the normalized schema at ≥50k events/min sustained throughput.
- Given overlap with previously ingested data, When backfill runs, Then duplicates are avoided using deterministic dedup keys (source_id + event_id + checksum) and analytics counts match source log counts within ±1% for each day/ring.
- Given backfill completes, When integrity checks run, Then row-level spot checks (≥200 samples/day) show 100% field mapping accuracy for required fields and the job emits a completion report.
PII Minimization and Compliant Identifiers
- Given events may contain emails and IPs at the edge, When persisted to analytics tables, Then only salted HMAC-SHA256 identifiers are stored (no raw email/IP), with IPs zeroed at last octet prior to hashing.
- Given key management policies, When rotating the HMAC salt/key (≥ every 90 days), Then new events use the new key and historical joins remain possible via a versioned key_id without re-identifying subjects.
- Given raw staging data, When retention policies run, Then raw PII is purged within 7 days, analytics tables contain no direct PII, and an access audit shows no roles can query raw PII outside the retention window.
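The identifier scheme can be sketched as below. Prefixing each digest with a key_id is one way to keep historical joins possible across rotations, as the criteria require; the IPv4-only handling and the normalization choices (lowercasing emails) are assumptions:

```python
import hashlib
import hmac
import ipaddress

def pseudonymize_email(email: str, salt: bytes, key_id: str) -> str:
    """Store a salted HMAC-SHA256 identifier instead of the raw email.
    Normalization (strip + lowercase) is an illustrative assumption."""
    digest = hmac.new(salt, email.strip().lower().encode(),
                      hashlib.sha256).hexdigest()
    return f"{key_id}:{digest}"

def pseudonymize_ip(ip: str, salt: bytes, key_id: str) -> str:
    """Zero the last octet before hashing, per the minimization rule.
    IPv4 sketch only; IPv6 would need a different prefix length."""
    net = ipaddress.ip_network(f"{ip}/24", strict=False)
    digest = hmac.new(salt, str(net.network_address).encode(),
                      hashlib.sha256).hexdigest()
    return f"{key_id}:{digest}"
```

Because the last octet is zeroed before hashing, two addresses in the same /24 map to the same identifier, which is what makes the stored value non-reversible to a specific host.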
Materialized Views/APIs with Freshness and Cross-Project Support
- Given continuous ingestion, When querying ring_daily_metrics, Then acceptance_rate, median_time_to_first_play, approval_velocity_p50/p95, and incident_rate are available per project_id, ring_id, and day with p95 data freshness ≤5 minutes and p99 ≤15 minutes.
- Given multiple projects and rings, When calling GET /insights/rings with filters (project_ids[], ring_names[], date_range) and group_by in {project, ring, project_ring}, Then responses return within p95 ≤500ms for cached windows and p95 ≤2s for cold queries up to 90 days.
- Given watermark beacon and shortlink anomaly events, When they are ingested, Then incident_rate updates within 1 minute and the incidents are attributable to ring_id and project_id in both views and API responses.
- Given per-project metrics are summed, When cross-project aggregates are queried, Then totals match the sum of constituent projects within 0.1% for counts and within 0.5% for medians (using standard aggregation rules).
Ring Insights Dashboard
"As a project lead, I want a clear dashboard that shows where each ring is slowing us down so that I can target follow-ups and unblock the release."
Description

Deliver an interactive analytics UI that surfaces Role Ring KPIs and trends per project and portfolio: invites accepted, time-to-first-play, approval velocity, incident counts, and an overall health score. Provide ring filters, date ranges, cohorting by invite date, asset type filters (stems/artwork/press), and drill-down from ring to member event timelines. Include visualizations (sparklines, histograms, percentile bands), bottleneck identification highlights, and CSV export. Ensure responsive design, fast-loading cached queries, empty-state guidance, accessible color contrast/ARIA (WCAG 2.1 AA), and adherence to TrackCrate RBAC so only authorized users view insights.

Acceptance Criteria
Access and Role Ring/Date Range Filtering
Given a user with TrackCrate Insights permission opens the Ring Insights Dashboard for Project P When they set Role Ring = "Mix Engineer" and Date Range = "Last 30 days" Then all KPIs (invites accepted, time-to-first-play, approval velocity, incident counts, health score), charts, and tables recalculate to include only events for that ring within that date range And an on-screen filter summary shows Ring: Mix Engineer; Range: Last 30 days And the URL query string reflects the filters and restoring the URL reconstructs the same state after refresh or share And if no data matches the filters, an empty-state panel appears with guidance and a CTA to invite collaborators, with no errors in console And filter changes complete recalculation and re-render with p95 under 1200 ms on cached data And a user without Insights permission who attempts to access the dashboard receives HTTP 403 and no insights data or counts are rendered or leaked via network responses
Cohorting by Invite Date and Asset Type Filters
Given a project with members invited across multiple weeks and asset types (stems, artwork, press) When the user enables Cohort = Invite Date and selects Cohort granularity = Weekly Then charts and tables segment metrics by invite week and display per-cohort counts (N) that sum to the filtered total And selecting Asset Types = [Stems, Press] recalculates KPIs and visualizations to include only those asset types And tooltips for each cohort show cohort label, sample size (N), p50/p90 time-to-first-play, and p50/p90 approval velocity with units And switching cohort granularity (Weekly -> Monthly) updates buckets without changing the filtered total (within +/-1 due to timezone boundaries) And clearing asset type filters restores the All assets baseline with matching totals
Drill-down from Ring to Member Event Timeline
Given the user is viewing Ring-level KPIs with active filters When they click a ring row or chart element for a specific ring Then the app navigates to a Member Event Timeline scoped to that ring and the active filters And the timeline lists members with chronological events (invite_sent, invite_accepted, first_play, approval, download, incident) with ISO 8601 timestamps in the project's timezone And per-member derived metrics (time_to_first_play, approval_velocity) are displayed and match back-end calculations within 0.5% relative error And a Back to Ring Insights control returns to the previous view with filters intact And with cached data, p95 time to first contentful render of the first 50 members is <= 2.5 s
Visualizations, Percentile Bands, and Bottleneck Highlights
Given filtered data with sufficient samples (N >= 5 per ring) When the dashboard renders visualizations Then KPI trend sparklines display the last 12 weeks on a consistent time axis with labeled units And histograms for time-to-first-play and approval velocity use adaptive binning (10–30 bins when N >= 100) and show counts per bin And shaded percentile bands display p50 and p90; tooltips include min, p10, p50, p90, max, and sample size And percentile values in the UI match query results within 0.5% relative error And any ring whose p50 time-to-first-play or approval velocity exceeds the portfolio median by >= 30% (N >= 5) is flagged as a bottleneck with a visible badge And clicking a bottleneck badge opens an explainer showing metric name, delta %, sample size, and suggested next actions; removing the condition removes the flag on refresh
CSV Export of Insights (Filters Preserved)
Given an authorized user with Export permission per RBAC is viewing filtered insights When they click Export CSV Then the generated CSV includes only rows matching active filters (ring, date range, cohort buckets, asset types) And columns include: project_id, project_name, ring, member_id, member_name, invite_date, accepted_at, first_play_at, approval_at, incidents_count, time_to_first_play_hours, approval_velocity_hours, health_score And datetimes are ISO 8601 UTC; encoding is UTF-8; header row present; comma as separator And for datasets <= 100,000 rows, file generation completes within 10 seconds; larger exports are queued and the user is notified when ready And CSV row count matches the UI total for the same filters (±0 discrepancy) And users without Export permission do not see the export control and receive HTTP 403 on direct export API calls
Query Performance and Caching
Given analytics queries are cacheable When a user loads the dashboard with previously computed results Then p95 time to interactive is <= 1.8 s on cached queries and <= 4.0 s on uncached queries (10 Mbps connection, mid-tier laptop) And query results are cached for 15 minutes; cache invalidation occurs within 60 seconds of new relevant events (invites, plays, approvals, incidents) And a skeleton UI appears within 200 ms of navigation and is replaced by data without cumulative layout shift > 0.1 And long-running exports or queries show a non-blocking progress indicator and do not freeze the UI thread
Responsive and Accessible UI (WCAG 2.1 AA)
Given users access the dashboard on various devices and assistive technologies When the viewport width ranges from 320 px to 1920 px Then core KPIs, filters, charts, and export controls remain usable; charts switch to compact variants below 600 px without losing labels And all interactive elements are keyboard navigable with a visible focus indicator; tab order follows visual order; a Skip to content link is available And color contrast meets AA: text and icons >= 4.5:1; large text/UI components >= 3:1; information is not conveyed by color alone And charts expose ARIA roles/names and provide text summaries for screen readers; KPI cards announce label and value; export button has an accessible name And no critical action requires hover; all hover content is accessible via focus
Cross-Project Ring Benchmarking & Cohort Comparison
"As a label head, I want to benchmark PR and Legal rings across all releases this quarter so that I can identify systemic delays and refine our standard templates."
Description

Enable comparison of Role Ring performance across projects, templates, time periods, and cohorts. Provide configurable benchmarks (e.g., portfolio median/75th percentile), outlier detection, and template-level aggregations to reveal systemic bottlenecks. Allow saved comparisons, shareable links with permission checks, scheduled report snapshots, and cached results for repeat queries. Support breakdowns by ring type (A&R, PR, Legal), geography, and release size while preserving privacy via aggregation thresholds.

Acceptance Criteria
Cross-Project Multi-Filter Comparison
Given the user has Ring Insights access and selects two or more projects, one or more templates, a date range, and cohort filters And selects breakdowns by ring type, geography, and release size When the user runs the comparison Then the system returns a table and chart with, per segment, invites-accepted rate, median time-to-first-play, median approval velocity, and leak/tamper incident rate And all applied filters are visible and editable And any segment with fewer than 5 entities is suppressed as "Insufficient data" with counts hidden And the uncached query completes within 8 seconds at p95
Configurable Benchmarks and Deviations
Given the user selects a benchmark type of portfolio median, 75th percentile, or a custom percentile between 50 and 95 When the comparison is rendered Then each metric column displays the benchmark value and the deviation percentage for each segment And segments more than 10% worse than the benchmark are highlighted red; more than 10% better are highlighted green And the selected benchmark is persisted per user for the workspace
Outlier Detection Toggle
Given Outlier Detection is enabled with sensitivity set to a z-score threshold of 2.0 by default When results are computed Then any segment whose absolute metric z-score is greater than or equal to the threshold, or that falls within the top/bottom 5th percentile, is flagged as an outlier And the flag includes the metric(s) that triggered it and the observed versus expected values And segments below the privacy threshold are excluded from outlier evaluation
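The z-score part of the rule can be sketched as below; using the population standard deviation is an assumption, and the percentile-based fallback and privacy-threshold exclusion are omitted for brevity:

```python
import statistics

def flag_outliers(values_by_segment: dict, threshold: float = 2.0) -> dict:
    """Flag segments whose metric z-score magnitude meets the threshold.
    Returns {segment: bool}; a zero-variance metric flags nothing."""
    vals = list(values_by_segment.values())
    mean = statistics.fmean(vals)
    stdev = statistics.pstdev(vals)
    if stdev == 0:
        return {seg: False for seg in values_by_segment}
    return {seg: abs((v - mean) / stdev) >= threshold
            for seg, v in values_by_segment.items()}
```

Note that with very few segments a single extreme value cannot reach a z-score of 2.0 (the maximum is (n-1)/sqrt(n) under the population standard deviation), which is one reason the criterion pairs the z-score rule with a top/bottom-percentile rule.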
Template-Level Aggregations and Ranking
Given multiple release templates are present across the selected projects When the user aggregates by template Then the system displays each template with counts (projects, rings, members) and aggregate metrics for the selected period And supports sorting and top/bottom N ranking by any metric And CSV export includes only data the user is authorized to see and excludes suppressed segments
Saved Comparisons and Shareable Links with Permission Checks
Given a comparison with filters and settings applied When the user saves it with a unique name Then it is stored with the exact filter set, benchmark, and outlier settings and appears in Saved Comparisons And when the user generates a shareable link Then only recipients with access to all underlying projects can view full results; others receive a 403 or a redacted view with suppressed projects And link access and views are audit-logged with user, timestamp, and comparison ID
Scheduled Report Snapshots
Given a saved comparison When the user schedules a snapshot (daily, weekly, or monthly) with recipients Then at the scheduled time a report is generated using the current data and the saved filters, and delivered via email and in-app And each report includes timestamp, data range, filters, benchmark type, and notes any suppressed segments due to privacy thresholds And report generation completes within 10 minutes and failures are retried up to 3 times with alerts to the owner
Query Result Caching and Invalidation
Given an identical comparison query is executed within a 24-hour cache TTL When the user runs the query Then the cached result is returned within 2 seconds and labeled with the cache timestamp And the cache is invalidated within 5 minutes of underlying Ring event data changes or permission changes And users can manually Refresh to bypass cache and force recompute
Thresholds, Baselines, and Anomaly Alerts
"As a release manager, I want to be alerted when a ring’s approval velocity drops below its baseline so that I can intervene before the schedule slips."
Description

Provide configurable thresholds and adaptive baselines per metric and Role Ring (e.g., approval velocity below 24 hours baseline, invite acceptance below 80%, spike in leak incidents). Continuously evaluate new data against these rules to trigger alerts via in-app notifications, email, Slack, or webhooks. Include alert deduplication, cool-down periods, time zone awareness, on-call routing, and an alert audit log. Allow per-project/ring overrides, test-mode previews, and one-click navigation from alert to the relevant dashboard view.

Acceptance Criteria
Per-Ring Metric Threshold and Baseline Configuration
Given a project P with Role Ring "PR" and admin user U, When U sets a threshold "Invite Acceptance Rate >= 80%" for ring PR at project scope and saves, Then the config persists, displays as Active for P/PR, and is versioned in the audit log with user, timestamp, and scope. Given global defaults exist and no project/ring override is present, When evaluation runs, Then the global default threshold is used. Given both project-level and ring-level overrides exist, When evaluating ring PR in project P, Then the ring-level override takes precedence over the project-level, which takes precedence over the global default. Given metric "Approval Velocity (hours)" baseline rule "Alert if baseline +20% exceeded" is entered with window "last 14 days," When saved, Then units, window, and deviation are validated and stored. Given invalid values (negative rate, unsupported unit, empty window), When saving, Then the save is blocked and field-level errors are shown.
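The precedence rule above (ring-level override over project-level override over global default) reduces to a small lookup; the data shapes and function name here are illustrative:

```python
def resolve_threshold(global_default, project_overrides: dict,
                      ring_overrides: dict, project_id: str, ring_id: str):
    """Return the effective threshold for (project, ring):
    ring-level override > project-level override > global default."""
    if (project_id, ring_id) in ring_overrides:
        return ring_overrides[(project_id, ring_id)]
    if project_id in project_overrides:
        return project_overrides[project_id]
    return global_default
```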
Adaptive Baseline Computation and Drift Detection
Given metric Approval Velocity for ring A&R has at least 14 days of data, When baseline window is set to 14 days with top/bottom 5% outliers excluded, Then the system computes a baseline value and confidence score and stores both with a timestamp. Given fewer than 5 data points exist, When baseline is requested, Then the system uses the seeded baseline (admin-provided or template default) and marks confidence as Low. Given the observed mean deviates more than 25% from baseline for 7 consecutive days, When drift detection is enabled, Then the baseline is recomputed and a "baseline updated" info event is written to the audit log.
Real-Time Evaluation and Multi-Channel Alerting
Given a threshold breach occurs for metric "Invite Acceptance Rate" on ring PR, When evaluation detects the breach, Then an alert is created within 5 minutes and delivered via in-app notification, email to the configured list, Slack to the configured channel, and an HTTPS webhook with a signed payload. Given a delivery failure on any channel, When the first attempt fails, Then up to 3 retries are attempted with exponential backoff and failures are logged per channel. Given an alert is delivered, When the recipient views it, Then the alert contains the project, ring, metric, rule id, timestamps, and a one-click deep link to the relevant dashboard view.
Alert Deduplication and Cooldown
Given an alert for rule R (project P, ring PR, metric M) was fired at T0, When the same condition persists and a deduplication window of 30 minutes and cooldown of 60 minutes are configured, Then no duplicate alerts are emitted within 30 minutes and no new alert is emitted again until at least 60 minutes have elapsed. Given the condition clears, When a recovery rule is enabled, Then a single recovery notification is sent and the deduplication state for R is reset. Given the same condition reoccurs after cooldown expiry, When evaluation detects it, Then a new alert with a new correlation id is emitted.
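One way to sketch the dedup/cooldown state machine above; because the default cooldown (60 min) is longer than the dedup window (30 min), the cooldown alone governs re-emission in this simplified version, and the state keys and return values are illustrative:

```python
DEDUP_WINDOW_S = 30 * 60  # no duplicate alerts within 30 min
COOLDOWN_S = 60 * 60      # no re-fire until 60 min after last alert

def evaluate(state: dict, key: str, now: float, breached: bool) -> str:
    """Per-rule alert decision for key = (project, ring, metric).
    Returns 'fire', 'suppress', 'recover', or 'idle'."""
    last_fired = state.get(key)
    if breached:
        if last_fired is None:
            state[key] = now
            return "fire"
        if now - last_fired < COOLDOWN_S:
            return "suppress"  # covers the dedup window too
        state[key] = now
        return "fire"          # new alert, new correlation id upstream
    if last_fired is not None:
        del state[key]         # recovery resets the dedup state
        return "recover"       # single recovery notification
    return "idle"
```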
Time Zone Awareness and Quiet Hours
Given project P timezone is America/Los_Angeles and quiet hours are 22:00–07:00 local, When a non-critical alert is triggered at 23:00 local, Then it is queued and delivered at or after 07:05 local, while critical alerts bypass quiet hours. Given recipients in different time zones, When alerts are delivered, Then timestamps display in the recipient’s local time with the project timezone indicated. Given a rule schedule is set for weekdays 09:00–18:00 project time, When outside this window, Then the rule does not evaluate and no alerts are generated.
On-Call Routing and Escalation
Given an on-call schedule with primary and secondary contacts is configured for project P, When an alert fires, Then the primary contact is notified first on all configured channels. Given the alert is not acknowledged within 10 minutes, When escalation is configured, Then the secondary contact is notified and the escalation event is recorded in the audit log. Given an alert is acknowledged via in-app button, Slack action, email action link, or webhook ACK, When received, Then further notifications and escalations stop and the alert status is updated to Acknowledged.
Alert Audit Log, Test-Mode Preview, and Deep Links
Given a user edits a rule, When the change is saved, Then an audit log entry is created recording before/after values, user, timestamp, and scope. Given test mode is enabled for a rule, When the user triggers a preview, Then a simulated alert is generated with sample payloads visible in-app only (or to the designated preview recipient) without notifying normal recipients, and delivery/rendering is shown. Given a user clicks the deep link in an alert, When the dashboard opens, Then it loads filtered to the specific project, ring, metric, and a time window around the event (e.g., ±24 hours).
Leak/Tamper Incident Correlation & Response
"As a security owner, I want incidents tied back to the most probable ring so that I can take targeted remediation without disrupting the entire project."
Description

Correlate suspected leak/tamper signals (e.g., watermark beacon matches, unusual shortlink referrers/geos, expired link access attempts) to Role Rings and member access histories to estimate likely source rings. Create incident records with severity, evidence, and timelines that feed Ring Insights metrics. Provide quick actions (revoke downloads, rotate links, tighten ring permissions), maintain an audit trail, and avoid unnecessary PII exposure. Integrate with existing watermarking/shortlink services and respect privacy and legal constraints.

Acceptance Criteria
Multi-Signal Correlation to Likely Source Role Ring
- Given leak/tamper signals (watermark_beacon, anomalous_referrer, anomalous_geo, expired_link_access) for the same asset within a 24-hour window, When correlation runs, Then an incident is created linking each signal to one or more Role Rings and member access events used as evidence.
- Given multiple rings match evidence, When correlation runs, Then each candidate ring is assigned a confidence score 0–100 and rationale, and the highest score is marked as "Likely Source".
- Given a watermark_beacon maps to a unique distribution variant, When correlation runs, Then the ring owning the corresponding download is scored ≥ 80 unless conflicting signals lower the score per rules.
- Given no evidence maps to any ring, When correlation runs, Then an "Unattributed" incident is created with confidence ≤ 20 and rationale.
- Then the correlation job completes within 2 minutes of the first qualifying signal under normal load.
Incident Record with Severity, Evidence, and Privacy Controls
- Given an incident is created, Then the record includes: incident_id, asset_id(s), project_id, candidate_rings, confidence scores, severity ∈ {Low, Medium, High, Critical}, status ∈ {Open, In Review, Mitigated, Closed}, first_signal_ts, last_signal_ts, detection_ts, and narrative rationale.
- Given evidence items, Then each evidence entry includes type, source_system, signal_id, UTC timestamp, and redacted fields (no raw IPs, no full user agents, referrer as domain only, geo at country level).
- Given evidence rules, When a watermark_beacon is present, Then default severity is at least High; When only expired_link_access attempts are present, Then default severity is Low.
- Given retention policy is configured, Then incident evidence is retained per configuration (default 180 days) and auto-purged thereafter with audit entries.
- Then incident records are readable only by roles: Security Admin, Project Owner, and designated Compliance roles.
Quick Response Actions: Revoke, Rotate, Tighten
- Given an Open or In Review incident, When an authorized user triggers "Revoke downloads" for selected rings/assets, Then all active download tokens for those rings/assets are invalidated within 60 seconds, and subsequent requests return HTTP 403 with error code DOWNLOAD_REVOKED.
- Given an incident, When "Rotate links" is triggered for selected shortlinks, Then new shortlinks are generated and propagated within 60 seconds, and old links return HTTP 410 with error code LINK_RETIRED while preserving analytics mapping.
- Given an incident, When "Tighten ring permissions" is triggered, Then download permissions for the selected Role Ring are set to "No Download" and watermarked preview-only within 30 seconds.
- Then each action writes an audit entry including actor_id (hashed), role, action, scope, before/after states, and correlation_id within 5 seconds of completion.
- Then unauthorized roles cannot see or invoke quick actions (controls disabled with tooltip indicating required role).
Comprehensive, Tamper-Evident Audit Trail
- Given any incident lifecycle change or quick action, Then an append-only audit event is recorded with ISO 8601 UTC timestamp, actor_id (hashed), actor_role, incident_id, affected assets/links, and reason.
- Then audit events are chained with a cryptographic hash of the previous record, and any modification attempt results in chain verification failure and a security alert.
- Then audit logs are exportable as JSON and CSV filtered by date range, project, and ring.
- Then audit retention is configurable with a default of 365 days, and exports include a verification manifest for integrity checks.
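The hash-chained, append-only log these criteria describe can be sketched as follows. This is a minimal illustration, not TrackCrate's implementation: the function names are hypothetical, and SHA-256 over the canonical JSON of each record plus the previous record's hash is one reasonable choice of chaining scheme.

```python
import hashlib
import json

def append_event(chain, event):
    """Append an audit event, linking it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record

def verify_chain(chain):
    """Recompute every link; any modification breaks verification."""
    prev_hash = "0" * 64
    for record in chain:
        if record["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(
            {"event": record["event"], "prev_hash": record["prev_hash"]},
            sort_keys=True,
        ).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

chain = []
append_event(chain, {"action": "revoke_downloads", "actor": "a1f3"})
append_event(chain, {"action": "rotate_links", "actor": "a1f3"})
assert verify_chain(chain)
chain[0]["event"]["action"] = "tampered"   # any modification attempt...
assert not verify_chain(chain)             # ...fails chain verification
```

A real system would also sign or externally anchor the chain head so an attacker cannot simply recompute every hash after tampering.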
External Service Integration and Privacy Safeguards
- Given watermarking and shortlink services emit signals via webhook, When a webhook is received, Then the system validates the signature, enqueues the event, and acknowledges within 2 seconds.
- Given transient failures calling external APIs for link rotation, Then the system retries with exponential backoff (up to 5 attempts) and moves failures to a dead-letter queue with alerting.
- Then no PII (names, emails, IP addresses) is sent to external services during mitigation; only asset_id, link_id, and ring_id are transmitted.
- Then ingestion normalizes and stores geo at country level only and strips IP addresses and full user agents from persistent storage.
- Then privacy configuration supports regional data residency tags, and events are stored in the region of the project when configured.
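Two of these criteria (signature validation and exponential backoff) can be sketched with the standard library. HMAC-SHA256 over the raw body is an assumption about the signature scheme, and `verify_webhook`/`backoff_delays` are illustrative names, not a documented TrackCrate API.

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Validate the raw webhook body against its HMAC-SHA256 signature.

    compare_digest gives a constant-time comparison, avoiding timing leaks.
    """
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

def backoff_delays(base=1.0, attempts=5):
    """Exponential backoff schedule for up to 5 retry attempts."""
    return [base * (2 ** i) for i in range(attempts)]

secret = b"shared-webhook-secret"
body = b'{"event":"watermark_beacon","asset_id":"ast_42"}'
sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
assert verify_webhook(secret, body, sig)
assert not verify_webhook(secret, b'{"event":"forged"}', sig)
assert backoff_delays() == [1.0, 2.0, 4.0, 8.0, 16.0]
```

After the fifth failed attempt, the event would be routed to the dead-letter queue rather than retried again.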
Ring Insights Metrics from Incidents
- Given incidents are created/updated/closed, Then Ring Insights aggregates per Role Ring: incident_count by severity, time_to_detection median, time_to_first_response median, and incidents per 100 downloads, updated within 5 minutes.
- Given an incident is re-attributed to a different ring, Then metrics decrement from the old ring and increment for the new ring within 5 minutes, and the change is recorded in audit.
- Given projects are compared, Then Ring Insights displays incident rates and response metrics side-by-side across selected projects using the same time window and normalization.
- Then closing an incident as "Unattributed" excludes it from ring-specific rates but includes it in project-level totals.
Insight-Driven Template Recommendations
"As a product manager, I want actionable, explainable suggestions to tune our Role Ring templates so that we can speed approvals without increasing leak risk."
Description

Generate data-backed recommendations to refine project templates based on ring performance (e.g., adjust ring composition, permission scopes, download expiry, watermark strength, auto-reminder cadence). Provide explanations, impact estimates, and risk considerations, with one-click application to future projects or as a proposed change request to current ones. Support A/B rollouts, opt-outs, and a feedback loop to improve recommendation quality over time, with full audit logging.

Acceptance Criteria
Generate Data-Backed Recommendations by Ring Performance
- Given a project with ≥3 completed releases and ≥100 ring events in the last 90 days, When the user opens Ring Insights > Recommendations, Then the system generates 1–10 recommendations within 5 seconds using the most recent 24h data snapshot.
- Given rings with tracked KPIs (invite acceptance rate, time-to-first-play, approval velocity, leak/tamper incidents), When generating recommendations, Then each recommendation cites at least one KPI deviation from the org baseline or template target with p≤0.05 and the comparison window used.
- Given insufficient data (<100 events or <2 distinct rings), When generating recommendations, Then the system displays a “Not enough data” state showing the required thresholds and disables one-click actions.
- Given filters (project, ring, timeframe 30/60/90 days), When filters are applied, Then the recommendation set and all linked metrics recalculate to match the filters and display the active filters.
- Given prior recommendations for the same ring and parameter within 30 days, When generating new recommendations, Then duplicates are suppressed so that ≤20% of the list repeats prior items.
Recommendation Explanations, Impact Estimates, and Risks
- Given a surfaced recommendation, When it is rendered, Then it includes an Explanation (≥100 chars), an Impact Estimate (absolute and relative change to a named KPI with 95% CI), and a Risk rating (Low/Med/High) with the top 2 risk drivers.
- Given a recommendation’s Impact Estimate, When viewed, Then it shows the comparison baseline (org/template/ring), the measurement window, and a confidence level, and links to the supporting dataset.
- Given any recommendation, When accessed via API, Then a machine-readable payload includes: rationale features used, metrics referenced, predicted KPI delta, confidence, assumptions, and affected scope (rings/projects).
- Given missing inputs for an estimate, When a field cannot be computed, Then the UI/API marks the field as “Unavailable” with a reason code and the item is not eligible for one-click apply.
Apply or Propose Template Changes (One-Click)
- Given sufficient permission (Template Admin or above), When the user clicks “Apply to future projects”, Then a diff preview shows affected template fields and values, and upon confirmation the change is applied within 2 seconds and emits an audit entry.
- Given sufficient permission (Project Admin or above), When the user clicks “Propose change to current project”, Then a change request is created with the recommendation text as its description, owners are notified, and activation requires at least one approver from the designated approver group.
- Given the same recommendation is applied twice, When the user retries, Then the operation is idempotent and no duplicate changes are created.
- Given an applied change within 30 days, When “Rollback” is clicked, Then the prior template values are restored within 2 seconds and an audit entry records before/after.
- Given insufficient permission, When viewing the recommendation, Then one-click action buttons are hidden or disabled with a tooltip explaining the required roles.
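The idempotent-apply criterion can be sketched with an idempotency-key store: a retried apply with the same key returns the stored result instead of mutating the template again. The names here (`apply_recommendation`, the in-memory `applied` dict) are illustrative; a real service would persist keys with a TTL.

```python
applied = {}  # idempotency key -> result of the first successful application

def apply_recommendation(idempotency_key, template, change):
    """Apply a template change at most once per idempotency key."""
    if idempotency_key in applied:
        return applied[idempotency_key]      # replay: no duplicate change
    before = dict(template)
    template.update(change)
    result = {"before": before, "after": dict(template)}
    applied[idempotency_key] = result
    return result

template = {"download_expiry_days": 30}
first = apply_recommendation("rec-77", template, {"download_expiry_days": 14})
second = apply_recommendation("rec-77", template, {"download_expiry_days": 14})
assert template == {"download_expiry_days": 14}
assert first is second  # retry returned the stored result; nothing re-applied
```

The stored `before` snapshot is also exactly what the 30-day rollback criterion needs for restoring prior values.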
A/B Rollouts: Assignment, Measurement, and Significance
- Given A/B is enabled for a recommendation parameter, When starting a test, Then assignment is randomized and stable at the chosen unit (ring or project) with a default 50/50 split and configurable ratios.
- Given an active test, When traffic accrues, Then the system enforces a minimum per-variant sample size of ≥50 events and a minimum duration of ≥7 days before computing significance.
- Given sufficient data, When results are computed, Then the dashboard shows lift with 95% CI, p-value, and a winner/indeterminate label at α=0.05, and excludes opt-outs and security-incident events.
- Given a running test, When the user pauses or stops it, Then assignments freeze immediately, further data is not counted, and the action is recorded to audit logs.
- Given a test is edited (ratio/unit), When changes are saved, Then analysis restarts and the UI flags the reset with a new test version id.
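The significance computation these criteria call for (lift, 95% CI, p-value, winner/indeterminate at α=0.05) can be sketched as a two-sided two-proportion z-test. This is one standard choice, not necessarily the product's exact method; the function name and payload shape are illustrative.

```python
import math

def two_proportion_test(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-sided z-test on the difference of conversion rates (B minus A),
    with a 95% confidence interval on the absolute lift."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se_pooled = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se_pooled
    # Normal-CDF tail via erf: p = 2 * (1 - Phi(|z|))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    se_diff = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    ci95 = (p_b - p_a - 1.96 * se_diff, p_b - p_a + 1.96 * se_diff)
    label = "winner" if p_value < alpha else "indeterminate"
    return {"lift": p_b - p_a, "ci95": ci95, "p_value": p_value, "label": label}

# 10% vs 17.5% approval rate across two variants of a ring template.
result = two_proportion_test(conv_a=40, n_a=400, conv_b=70, n_b=400)
assert result["label"] == "winner" and result["p_value"] < 0.05
```

The ≥50-events-per-variant and ≥7-day gates would be checked before this function is ever called, so underpowered tests never display a verdict.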
Opt-Out Controls and Data Exclusion
- Given an Owner or Admin, When they set an opt-out at org/project/ring/member scope with a reason, Then the opt-out becomes effective within 60 seconds and propagates to recommendation generation and A/B assignment.
- Given an opted-out entity, When recommendations are generated, Then that entity receives no recommendations and its data is excluded from training and outcome analysis.
- Given an opt-out list, When viewed, Then it displays scope, subject, reason, actor, and timestamp, and supports search and export to CSV/JSON.
- Given an opt-out is revoked, When saved, Then eligibility is restored within 60 seconds and the revocation is recorded with full history retained.
- Given a non-privileged user, When attempting to change opt-outs, Then the action is denied and logged.
Feedback Loop to Improve Recommendation Quality
- Given a visible recommendation, When a user submits thumbs up/down and an optional comment, Then the feedback is stored with user id, ring/project context, and timestamp, and the UI reflects the submission without a page reload.
- Given accumulated feedback, When the weekly model/rules update runs, Then negatively-rated patterns are down-weighted and positively-rated patterns are up-weighted, and impacted recommendations display an “Updated from feedback” badge within 14 days.
- Given adoption tracking, When a recommendation is applied, Then the system records adoption and post-change KPI deltas to compute a quality score per recommendation pattern, available to admins.
- Given a user marks a recommendation as “Not Applicable”, When similar recommendations are generated for the same ring, Then they are suppressed for 60 days unless the data materially changes (≥20% KPI shift).
Full Audit Logging and Export
- Given any recommendation lifecycle event (generation, view, filter change, apply, rollback, change request create/approve/reject, A/B start/stop/pause, opt-out add/remove, feedback submit), When it occurs, Then an append-only audit entry is written.
- Given an audit entry, When stored, Then it contains event type, actor (user/system), project id, ring id, recommendation id, timestamp (ISO 8601 UTC), request id, before/after values (for changes), and a tamper-evident hash chain pointer.
- Given an authorized auditor, When querying the audit API, Then results are paginated via cursor, filterable by event type/actor/date/ring/project, return within 1 second at P95 for up to 1M records, and are exportable to CSV/JSON.
- Given the data retention policy, Then audit entries are retained for at least 1 year by default, and deletions (if any, per policy) are themselves logged with reason and approver.
Insights Access Controls & Privacy Safeguards
"As an organization admin, I want granular control over who can access ring-level insights and exports so that we protect privacy and comply with policy while enabling data-driven work."
Description

Extend TrackCrate RBAC to govern who can view Ring Insights at organization, project, and ring scopes. Enforce least privilege, aggregation thresholds to prevent singling out individuals, and anonymization for cross-project views when user counts are low. Provide consent and data processing disclosures, export governance for CSV downloads, audit logs of insights access, configurable data retention windows, and support for legal holds. Ensure compliance with relevant regulations (e.g., GDPR/CCPA) without degrading analytical utility.

Acceptance Criteria
RBAC-Governed Visibility by Scope (Org/Project/Ring)
- Given a user with the Org Admin role, When they open Ring Insights at organization scope, Then they can view insights for all projects and rings within the org.
- Given a user with the Project Collaborator role on Project A only, When they open Ring Insights, Then they can view insights only for Project A and its rings, and are denied access to other projects/rings with HTTP 403 and no metadata leakage.
- Given a user with the Ring Reviewer role on Ring X, When they open insights, Then they can only view metrics for Ring X, and higher-scope aggregates are hidden.
- Given a user without any role for a project, When they call the insights API for that project, Then the response is HTTP 403 and the event is audited.
Least-Privilege Defaults and Denied-by-Default Enforcement
- Given a newly invited user with no assigned role, When they navigate to Ring Insights, Then insights UI elements are hidden and all insights API requests return HTTP 403.
- Given an admin downgrades a user's role to a narrower scope, When the user next requests insights, Then the effective permissions reflect the downgrade within 60 seconds and access outside the new scope is blocked.
- Given a user attempts to expand scope via client-side filters or query parameters, When the server evaluates the request, Then server-side authorization enforces the granted scopes and rejects over-broad filters with HTTP 403.
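The denied-by-default rule reduces to a simple server-side check: no grant means no access, regardless of what the client requests. The grant table shape and function name below are assumptions for illustration only.

```python
GRANTS = {  # user -> set of (scope_type, scope_id) insight grants
    "u_org_admin": {("org", "*")},
    "u_collab":    {("project", "A")},
}

def can_view_insights(user: str, project_id: str) -> bool:
    """Deny by default: access requires an explicit org- or project-level
    grant; users absent from the grant table get an empty set."""
    grants = GRANTS.get(user, set())
    return ("org", "*") in grants or ("project", project_id) in grants

assert can_view_insights("u_org_admin", "B")        # org scope covers all projects
assert can_view_insights("u_collab", "A")
assert not can_view_insights("u_collab", "B")       # -> HTTP 403 in the API layer
assert not can_view_insights("u_new_invitee", "A")  # no role: denied by default
```

Because the check runs server-side against the grant table, widening a client-side filter can never widen the effective scope.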
Thresholded Aggregation and Cross-Project Anonymization Enforcement
- Given the k-anonymity threshold is set to 5, When a metric slice has a distinct user count less than 5, Then the slice is suppressed and labeled "insufficient data" with no counts or percentages displayed.
- Given a time-series bucket falls below k, When rendering charts or tables, Then the bucket is suppressed or merged per policy and totals exclude suppressed buckets.
- Given an admin updates k to a value between 3 and 20, When the change is saved, Then it takes effect within 15 minutes and is recorded in the audit log.
- Given cross-project comparison is enabled, When a multi-project user views insights, Then user identifiers are replaced with pseudonymous group IDs and no per-user identifiers appear anywhere in the UI or exports.
- Given any project slice in a cross-project view has a distinct user count less than k, When rendering, Then project names are hidden for that slice and grouped under "Other".
- Given suppressed/anonymized slices exist, When exporting CSV, Then suppression and anonymization are applied identically with no additional detail leakage.
- Given privacy safeguards are active, When validating key metrics against a non-privacy baseline on test data, Then relative error is ≤ 2% for counts and ≤ 0.5 days for time-based medians.
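The core suppression rule can be sketched as a filter applied identically to UI rendering and CSV export: any slice whose distinct-user count falls below k loses all counts and percentages. The record shapes and function name are illustrative assumptions.

```python
K_THRESHOLD = 5  # admin-configurable between 3 and 20

def apply_k_anonymity(slices, k=K_THRESHOLD):
    """Suppress any metric slice whose distinct-user count is below k,
    replacing its metrics with an "insufficient data" label."""
    out = []
    for s in slices:
        if s["distinct_users"] < k:
            out.append({"slice": s["slice"], "status": "insufficient data"})
        else:
            out.append({"slice": s["slice"], "status": "ok",
                        "downloads": s["downloads"],
                        "distinct_users": s["distinct_users"]})
    return out

slices = [
    {"slice": "Ring A", "distinct_users": 12, "downloads": 340},
    {"slice": "Ring B", "distinct_users": 3, "downloads": 41},
]
report = apply_k_anonymity(slices)
assert report[0]["status"] == "ok"
assert report[1] == {"slice": "Ring B", "status": "insufficient data"}
```

Running the same function on the export path is what guarantees the CSV never carries more granularity than the dashboard.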
Consent and Data Processing Disclosures
- Given an organization enables Ring Insights, When the first admin accesses insights, Then a just-in-time disclosure shows purpose, lawful basis, retention, and links to policy, and requires explicit acknowledgment before enabling collection.
- Given a data subject opts out of analytics tracking, When their events are processed, Then their identifiers are excluded from Ring Insights metrics within 24 hours of the opt-out being recorded.
- Given an organization requires consent for EU users, When consent has not been recorded, Then insights collection for EU data remains disabled and the UI indicates consent is required.
- Given a GDPR/CCPA erasure request is received, When processed, Then the subject's identifiers are deleted or irreversibly pseudonymized from raw events and derived aggregates within 30 days, and completion is auditable.
CSV Export Governance and Controls
- Given a user with the Export Insights permission initiates an export, When a CSV is generated, Then it is watermarked with requester, scope, timestamp, and SHA-256 hash, and the filename includes an expiry timestamp.
- Given a CSV export is ready, When a download link is issued, Then it is a single-use pre-signed URL that expires within 15 minutes or immediately after one successful download, whichever comes first.
- Given suppression/anonymization is in effect in the UI, When exporting, Then the CSV enforces the same suppression/anonymization rules and contains no additional granularity.
- Given an export completes, When writing audits, Then the audit log includes actor, scope, filters, row count, hash, and justification text, and sends an alert if the row count exceeds the configured threshold.
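The hash, expiry-stamped filename, and audit payload can be sketched with the standard library. The filename pattern, field names, and `prepare_export` helper are assumptions for illustration; the pre-signed single-use URL itself would come from the storage provider.

```python
import hashlib
from datetime import datetime, timedelta, timezone

def prepare_export(csv_bytes: bytes, requester: str, scope: str):
    """Hash the export, build a filename carrying the 15-minute expiry,
    and assemble the audit record the criteria require."""
    digest = hashlib.sha256(csv_bytes).hexdigest()
    expires = datetime.now(timezone.utc) + timedelta(minutes=15)
    filename = f"insights_{scope}_expires-{expires:%Y%m%dT%H%M%SZ}.csv"
    audit = {"actor": requester, "scope": scope,
             "row_count": csv_bytes.count(b"\n"), "sha256": digest}
    return filename, audit

csv_bytes = b"ring,downloads\nRing A,340\n"
filename, audit = prepare_export(csv_bytes, requester="u_19", scope="projA")
assert filename.startswith("insights_projA_expires-") and filename.endswith(".csv")
assert len(audit["sha256"]) == 64 and audit["row_count"] == 2
```

The recorded SHA-256 lets an auditor later confirm that a circulating CSV matches (or does not match) a logged export.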
Audit Logging, Retention Windows, and Legal Holds
- Given any insights access or export occurs, When the request is processed, Then an immutable audit entry is written within 5 seconds with actor, scope, action, outcome, IP, user-agent, timestamp, and request ID.
- Given the organization retention window is set to 180 days, When events exceed 180 days, Then event-level data and derived aggregates are purged on a rolling basis, with purge jobs logged and verifiable.
- Given a legal hold is placed on Project A, When retention windows elapse, Then data for Project A is retained until the hold is lifted, and purge jobs skip it with the hold reason recorded.
- Given an auditor queries access logs for a date range, When results are returned, Then they are complete, tamper-evident, and filterable by actor, project, ring, and action.

PhaseLock Align

One-click, transient- and tempo-aware alignment that corrects timing and phase offsets between versions and stems. Eliminates DAW bounce drift so A/Bs are truly apples-to-apples, making spectral heatmaps accurate and review decisions fast—no manual nudging required.

Requirements

Transient & Tempo Analysis Engine
"As a mix engineer collaborating remotely, I want TrackCrate to auto-detect timing and phase misalignment between versions so that I can compare bounces without manual nudging or guesswork."
Description

A module that detects transients, estimates tempo/BPM and bar/beat grid, and computes cross-correlation between reference and target signals to determine optimal time offset, polarity, and phase rotation. Supports multi-sample-rate audio, variable drift estimation (linear offset plus micro-warping), latency compensation, and noisy or sparse material. Exposes confidence scores and alignment error metrics for downstream UI and logging. Implemented as a streaming, low-latency service callable from web UI and AutoKit pipelines with deterministic outputs for reproducibility.
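The offset-and-polarity search at the heart of this module can be sketched as a brute-force cross-correlation over candidate lags; a production engine would use FFT-based correlation for speed, but the logic is the same. `best_offset` and its payload fields are illustrative names mirroring the spec's vocabulary, not a confirmed API.

```python
def best_offset(reference, target, max_lag):
    """Find the integer-sample lag (and polarity) that maximizes the
    absolute cross-correlation of target against reference. A negative
    peak means the target is polarity-inverted relative to the reference."""
    best = (0, 0.0, False)  # (lag, correlation, inverted)
    for lag in range(-max_lag, max_lag + 1):
        acc = 0.0
        for i, r in enumerate(reference):
            j = i + lag
            if 0 <= j < len(target):
                acc += r * target[j]
        if abs(acc) > abs(best[1]):
            best = (lag, acc, acc < 0)
    lag, _, inverted = best
    return {"offset_samples": lag, "polarity_inverted": inverted}

# Target is the reference delayed by 3 samples and polarity-inverted.
ref = [0.0, 1.0, 0.0, -0.5, 0.25, 0.0, 0.0, 0.0]
tgt = [0.0] * 3 + [-x for x in ref[:5]]
result = best_offset(ref, tgt, max_lag=4)
assert result == {"offset_samples": 3, "polarity_inverted": True}
```

Confidence scoring would then compare the winning peak against the next-best lag: a flat correlation surface means an ambiguous lock and a low confidence score.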

Acceptance Criteria
Accurate tempo and transient detection across diverse material
- Given a test set of percussive tracks (60–180 BPM) and non-percussive tracks with annotated BPM and transient times
- When the engine processes 30 s excerpts from each track
- Then estimated BPM error is <= ±0.5 BPM for >= 95% of excerpts and <= ±1.0 BPM for 100%
- And median beat-phase error is <= 10 ms and the 95th percentile <= 25 ms versus annotations
- And transient detection F1 is >= 0.90 on the percussive set and >= 0.80 on the non-percussive set
Cross-sample-rate alignment with deterministic outputs
- Given reference-target pairs of identical content at sample rates {44.1, 48, 96, 192} kHz with known initial offsets (e.g., 300 ms)
- When the engine aligns pairs with mismatched sample rates and identical parameters
- Then reported offset_ms error is <= 1.0 ms versus ground truth for all sample-rate combinations
- And output alignment parameters (offset_ms, drift_ppm, warp_map, polarity_inverted, phase_rotation_deg, confidence_score) are byte-identical across 5 repeated runs
- And offset_ms and warp_map remain unchanged under input gain scaling within ±6 dB
Drift correction on long renders with micro-warping
- Given a 10-minute target created from a reference by applying +20 ppm linear drift, sinusoidal micro-warp of ±3 ms at 0.1 Hz, and a 100 ms initial offset
- When processed in batch alignment mode
- Then estimated drift_ppm is within ±10% of +20 ppm
- And the produced warp_map reduces residual alignment error to RMS <= 2 ms and P95 <= 5 ms over the full duration
- And the cumulative time mapping is strictly monotonic (no time reversals)
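Linear drift in ppm is just the least-squares slope of the measured offset over time, rescaled: a +20 ppm drift accumulates 0.02 ms of extra offset per second of audio. The helper below is an illustrative sketch of that estimation, not the engine's actual algorithm.

```python
def estimate_drift_ppm(times_s, offsets_ms):
    """Least-squares slope of offset (ms) versus time (s), expressed in
    parts per million; 1 ppm of drift equals 0.001 ms of offset per second."""
    n = len(times_s)
    mean_t = sum(times_s) / n
    mean_o = sum(offsets_ms) / n
    num = sum((t - mean_t) * (o - mean_o) for t, o in zip(times_s, offsets_ms))
    den = sum((t - mean_t) ** 2 for t in times_s)
    slope_ms_per_s = num / den
    return slope_ms_per_s * 1000.0  # ms/s -> ppm

# Offsets measured from a render with +20 ppm drift on a 100 ms base offset.
times = [0, 100, 200, 300, 400, 500, 600]   # seconds into the render
offsets = [100 + 0.02 * t for t in times]   # ms; 0.02 ms/s == 20 ppm
assert abs(estimate_drift_ppm(times, offsets) - 20.0) < 1e-9
```

The residual after subtracting this linear fit is what the micro-warp map has to absorb, which is why the warp map (not the drift term) is held to the RMS <= 2 ms bound.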
Latency and polarity/phase compensation validation
- Given a target derived from a reference by delaying 1024 samples at 48 kHz, inverting polarity, and applying an all-pass filter producing +30° phase rotation at 1 kHz
- When the engine aligns the target to the reference
- Then reported latency error is <= ±1 sample, polarity_inverted == true, and phase_rotation_deg at 1 kHz is within ±5° of +30°
- And the post-alignment wideband correlation coefficient r is >= 0.98 for the synthetic pair
Streaming lock and align within low-latency budget
- Given live reference and target streams delivered in 1024-sample frames at 48 kHz with start-time uncertainty of ±500 ms
- When the engine operates in streaming mode
- Then initial lock (non-ambiguous offset and tempo) occurs within <= 1.5 s or <= 2 bars, whichever is sooner
- And parameter update latency from frame receipt to emitted update is P95 <= 50 ms and P99 <= 100 ms over a 5-minute session
- And the missed-update rate is <= 1 per 10,000 frames
Robustness on sparse/noisy signals with confidence gating
- Given sparse vocal and ambient excerpts augmented with pink noise at SNR levels {0, 3, 6, 10} dB and a set of non-matching negative-control pairs
- When the engine processes these inputs
- Then confidence_score increases monotonically with SNR, with Pearson r >= 0.7 across the set
- And for any case where confidence_score < 0.6, the engine applies offset-only alignment (no micro-warp) and returns reason == 'low_confidence'
- And the false-lock rate on negative-control pairs is <= 1% (no offset reported with confidence >= 0.6)
Web UI and AutoKit API integration with metrics exposure
- Given authenticated /align/stream and /align/batch endpoints accepting reference_id, target_id, and parameter payloads
- When clients invoke the endpoints with valid inputs
- Then responses include the fields: offset_ms, drift_ppm, warp_map, polarity_inverted, phase_rotation_deg, confidence_score, alignment_error_ms_rms, request_id
- And requests are idempotent when an Idempotency-Key is reused (identical outputs and 200 status)
- And batch jobs for 60 s of audio complete in <= 120 s wall-clock at P95 with deterministic (byte-identical) JSON outputs across reruns
- And structured logs are emitted per request with timing and metrics
Reference Selection & Alignment Modes
"As a producer, I want to choose which version is the reference and how strictly to align others so that the results match my intent across an entire project."
Description

UI and API to choose a reference track, select alignment scope (global linear offset, linear+phase, or elastic micro-warp for slow drift), and set tolerance/sensitivity. Supports per-stem override, polarity flip, frequency-band-limited alignment focus (e.g., drums), and dry-run analysis. Persists chosen mode in the release workspace and AutoKit presets for consistent batch behavior across sessions.

Acceptance Criteria
Reference Track Selection via UI and API
- Given a release workspace containing multiple tracks, when a user selects a track as the reference in the UI and clicks Save, then the workspace metadata stores the reference track ID and the UI restores the same selection after reload.
- Given a valid track_id in the same workspace, when the API endpoint to set the reference is called, then it returns HTTP 200 and a subsequent GET returns the same reference track_id.
- Given an existing reference, when a different track is set as reference, then the previous reference is replaced, an audit entry is created, and only one reference is active.
- Given a track outside the workspace, when it is attempted as a reference via UI or API, then the UI blocks the selection and the API returns HTTP 400 with error code REF_OUT_OF_SCOPE.
- Given a track being viewed, when it is the same as the reference, then self-reference is disabled and clearly indicated in the UI.
Alignment Mode and Sensitivity Configuration
- Given the alignment panel, when the user chooses a mode (Global Linear Offset, Linear+Phase, Elastic Micro-Warp), then only one mode can be active and the selected mode is applied to the next alignment job.
- Given no prior selection, when the panel loads, then Linear+Phase is preselected by default.
- Given user-entered sensitivity values, when configuring the offset search window (0–500 ms, default 100, step 1), phase sensitivity (0–100, default 50, step 1), and max drift for elastic mode (0.0–2.0% per minute, default 0.5, step 0.1), then out-of-range inputs are prevented in the UI and rejected by the API with HTTP 422.
- Given a selected mode and sensitivities, when the user clicks Align, then the job config snapshot recorded in logs includes the mode and numeric parameters exactly as set.
- Given a mode switch, when preview estimates are recalculated, then the UI updates the latency/warp preview within 500 ms on a standard dataset.
Per-Stem Override of Alignment Settings
- Given a stem row, when the user enables Override, then that stem displays its own mode and sensitivity controls and uses them for analysis instead of the global settings.
- Given per-stem overrides are set, when the workspace is saved and reloaded, then all overridden settings persist and are reapplied.
- Given a stem with an override, when Reset to Global is clicked, then the stem immediately reverts to global settings and the override indicator is cleared.
- Given the API receives a PATCH for a stem with override parameters, then it returns HTTP 200 and a subsequent GET reflects the override.
- Given multiple stems have overrides, when the user selects Reset All Overrides, then all stems inherit the global settings and the override count displays 0.
Per-Stem Polarity Flip (Phase Invert)
- Given a stem, when the Polarity Flip toggle is enabled, then monitoring and export paths invert the stem’s polarity and the state is reflected by an icon change.
- Given Linear+Phase or Elastic Micro-Warp mode, when polarity flip is enabled for a stem, then the correlation stage uses the inverted signal for that stem (logged as invert=true in the job snapshot).
- Given a 1 kHz test tone on a stem, when polarity is flipped, then the rendered file shows a 180° phase inversion relative to the reference.
- Given the workspace is saved and reopened, when checking the stem controls, then the polarity state persists and is included in AutoKit presets created from this workspace.
Frequency-Band-Limited Alignment Focus
- Given the alignment panel, when the user selects an alignment focus of Full-Band (default), Drums, Vocals, or Custom, then only the selected band is used for correlation.
- Given Drums is selected, when alignment runs, then the job log includes the band ranges 20–250 Hz and 2–6 kHz; given Vocals, then 150 Hz–8 kHz is logged; given Custom, then the user-defined min/max Hz are logged.
- Given Custom band values outside 20–20000 Hz or with min >= max, when submitted, then the UI blocks the save and the API returns HTTP 422 with error code BAND_INVALID.
- Given a band change, when the preview recalculates, then suggested offsets update within 1 second and the band chip label appears on the preview graph.
Dry-Run Analysis Without Applying Changes
- Given Dry-Run is enabled, when the user starts analysis, then no timing or phase changes are applied to audio buffers or workspace settings, and only an analysis report is produced.
- Given a 5-minute stereo pair on a standard instance, when the dry-run runs, then it completes within 10 seconds and returns: estimated global offset (ms), per-stem phase rotation (degrees), optional warp curve points (time, delta), and confidence (0.0–1.0).
- Given the API is called with dry_run=true, when it completes, then it returns HTTP 200 with the results JSON and does not mutate stored settings (verified by an unchanged GET of the workspace config).
- Given a completed dry-run, when the user clicks Apply, then the last analysis result is committed to workspace settings and a new audit entry is created.
Persistence in Workspace and AutoKit Presets
- Given a workspace with reference, mode, sensitivities, band focus, per-stem overrides, and polarity states configured, when the user clicks Save, then all settings persist and are restored exactly upon reload.
- Given the user creates an AutoKit preset from the current workspace, when the preset is applied to another release, then the saved alignment settings transfer as defaults and do not overwrite existing explicit per-stem overrides unless Force is checked.
- Given a batch export triggered via AutoKit with the preset, when jobs run, then each job log includes the preset ID and a settings snapshot matching the preset at trigger time.
- Given a persistence failure (e.g., storage error), when Save is attempted, then the user sees an error banner and no partial settings are silently lost (previous settings remain intact).
Phase-Coherent Grouping for Multi-Stem Align
"As a mastering engineer, I want grouped stems to stay phase-coherent when aligned so that summing remains accurate and mono compatibility is preserved."
Description

Ability to define stem groups (e.g., multi-mic drums, vocal doubles) that must be time-shifted identically to preserve inter-stem phase relationships. The system computes alignment against the group’s anchor but applies a single transform across the group, with safeguards against comb filtering. Supports nested groups and prevents re-rendering conflicts during batch operations.

Acceptance Criteria
Group-level alignment uses single computed transform against anchor
- Given a defined stem group G with stems S1..Sn and one stem marked as anchor A
- And a reference target R selected for alignment
- When PhaseLock Align is executed for group G
- Then the system computes a single alignment transform T using A vs R only
- And applies exactly the same transform T (identical delay/warp parameters) to all stems S1..Sn
- And no per-stem deviations > 0 samples are applied within the group
- And the resulting residual offset between A and R is ≤ ±1 sample at the project sample rate
- And an immutable log entry records the T parameters (delay in samples or ms, tempo mapping if any) and the affected stem IDs
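The invariant behind this scenario is easy to demonstrate: applying one identical delay to every stem in a group cannot change any inter-stem relationship. The sketch below uses hypothetical names (`shift`, `align_group`) and integer-sample delays for simplicity.

```python
def shift(samples, n):
    """Delay a stem by n samples (zero-padded at the head)."""
    return [0.0] * n + samples

def align_group(stems, transform_samples):
    """Apply the single group transform T identically to every stem."""
    return {name: shift(s, transform_samples) for name, s in stems.items()}

def peak(samples):
    """Index of the stem's peak sample (a crude transient marker)."""
    return samples.index(max(samples))

# Multi-mic drum group: the snare mic lags the kick mic by 2 samples.
group = {"kick": [1.0, 0.0, 0.0, 0.0], "snare": [0.0, 0.0, 1.0, 0.0]}
aligned = align_group(group, transform_samples=3)

# Every stem moved by exactly T, so the 2-sample internal relationship holds.
assert peak(group["snare"]) - peak(group["kick"]) == 2
assert peak(aligned["snare"]) - peak(aligned["kick"]) == 2
assert peak(aligned["kick"]) - peak(group["kick"]) == 3
```

Per-stem micro-adjustments would break exactly this invariant, which is why the comb-filtering safeguard defaults to the single group transform.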
Preserve inter-stem phase relationships within a group
- Given a stem group G (e.g., multi-mic drums or vocal doubles) with known internal phase relationships
- When a single transform T is applied to align the group to a reference
- Then pairwise relative delays among stems in G remain unchanged within ±0.1 sample
- And average phase deviation across 100 Hz–5 kHz among stem pairs changes by ≤ 5°
- And magnitude-squared coherence between 100 Hz–5 kHz for each stem pair changes by no more than ±0.05 from pre-align values
- And the null-test residual between pre- and post-align for any stem after reversing T is ≤ -60 dBFS (indicating non-destructive application)
Nested groups inherit composite transform and resolve anchors deterministically
- Given a parent group P with anchor Ap that contains a child group C with anchor Ac
- And P is selected for alignment against reference R
- When PhaseLock Align is executed at the parent group level
- Then a single composite transform Tp is computed using Ap vs R
- And Tp is propagated identically to all stems in P, including stems inside C
- And no additional transform is computed or applied at C during the same operation
- And the audit trail shows one transform ID applied across all affected stems
- And the UI/API prevents concurrent alignment runs on C while P is processing (the operation is disabled, or queued and deduplicated)
Comb-filtering risk detection and operator choice
- Given a stem group G where suggested per-stem offsets (if computed independently) would differ by > 0.3 ms
- When the user requests PhaseLock Align on G
- Then the system presents a comb-filtering safeguard warning before processing
- And the default action is "Apply single group transform only" with no per-stem micro-adjustments
- And if the user proceeds, a single transform is applied; if the user cancels, no changes are made
- And the dialog provides a short pre/post preview (≥ 3 s) and a spectral heatmap toggle
- And the event is logged with the maximum suggested inter-stem variance and the user’s choice
Batch processing prevents re-rendering conflicts and duplicate work
- Given multiple queued or concurrent alignment jobs that include overlapping stems or nested groups
- When the jobs execute
- Then a per-asset lock prevents more than one render on the same stem at a time
- And overlapping transforms are coalesced so each (stem, composite-transform) pair is rendered at most once
- And no job errors occur due to write conflicts
- And the final count of renders equals the number of unique (stem, composite-transform) pairs across all jobs
- And job logs show deduplication entries and consistent terminal states for all jobs
Non-destructive application with versioning and export consistency
- Given a group alignment operation completes successfully
- When users inspect version history and perform playback or exports
- Then the transform is stored as non-destructive metadata (group ID, anchor ID, delay/warp params, sample rate, timestamp, user)
- And in-app playback and expiring, watermarked downloads apply the identical transform across all grouped stems
- And undo/redo toggles the transform without altering the underlying audio files (source file checksums unchanged)
- And re-running alignment creates a new version entry without mutating prior versions
Non-Destructive Preview & A/B Compare
"As an artist reviewing mixes, I want to hear aligned comparisons instantly so that I can approve changes quickly without waiting for re-exports."
Description

In-browser player that applies alignment transforms at playback-time for instant A/B switching between original and aligned states, with synced transport, gain-matched switching, and per-stem mute/solo. Displays measurable offsets (ms/samples/frames) and phase correlation meters. No source files are overwritten; users can commit alignment to a new version only on export.

Acceptance Criteria
A/B Toggle With Synced Transport and Gain Matching
Given a session with Original and Aligned states available and playback running When the user toggles between Original and Aligned Then the transport position remains synchronized within ±1 ms And audio continues without dropouts, clicks, or gaps And short‑term LUFS difference over a 400 ms window is ≤ 0.3 LUFS between states And action-to-audible-change latency is ≤ 50 ms
Non-Destructive Playback and Export Commit
Given a project with uploaded source files When the user previews with alignment enabled Then no source file on storage is modified (file checksums unchanged) And the alignment is applied only to the playback stream When the user exports with "Commit Alignment" selected Then a new version asset is created that embeds the alignment And the original version remains unchanged and available And version history records the export event with timestamp and user identity
Per-Stem Mute/Solo With Alignment Applied
Given multiple stems are loaded and alignment is enabled When the user mutes or solos any stem during playback Then transport remains continuous and in sync And alignment continues to be applied to all audible stems And mute/solo state changes take effect within ≤ 100 ms And gain matching during A/B switching remains within ≤ 0.3 LUFS
Offset Readouts in ms/samples/frames
Given a loaded pair of versions with detected offsets When alignment is computed Then the UI displays per-track offset values in milliseconds, samples, and frames And unit switches correctly convert values based on session sample rate and selected frame rate And displayed values match the applied transform within ±1 sample (or ±0.02 ms at 48 kHz) And readouts update within 200 ms after re-analysis or A/B toggling
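The three readout units are pure conversions from the detected sample offset, so they stay mutually consistent by construction. A minimal sketch (function name and defaults are illustrative):

```python
def offset_readout(offset_samples, sample_rate=48000, fps=24):
    """Convert a detected offset into ms / samples / frames for display.
    All three values derive from the same sample count, so unit switches
    cannot drift apart."""
    ms = offset_samples / sample_rate * 1000.0
    frames = ms / 1000.0 * fps   # frame count follows the selected frame rate
    return {"ms": ms, "samples": offset_samples, "frames": frames}

readout = offset_readout(48, sample_rate=48000, fps=24)   # 48 samples at 48 kHz
```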
Phase Correlation Meter Behavior
Given two stems are playing while the user toggles alignment When the user views the phase correlation meter Then the meter updates at ≥ 10 Hz And values range from -1.00 to +1.00 with resolution ≤ 0.01 And enabling alignment updates the meter within 200 ms And for an in-phase calibration signal the meter displays ≥ +0.98; for an inverted-phase signal displays ≤ -0.98
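The meter's −1.00 to +1.00 range corresponds to a normalized zero-lag correlation between the two signals. One way to compute it, as a sketch (the meter's actual windowing and update rate are not shown):

```python
import math

def phase_correlation(a, b):
    """Normalized zero-lag correlation of two equal-length signals, in [-1, +1].
    +1 for in-phase identical signals, -1 for polarity-inverted copies."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0   # silent input reads neutral

sine = [math.sin(i / 10.0) for i in range(1000)]
in_phase = phase_correlation(sine, sine)                    # calibration: >= +0.98
inverted = phase_correlation(sine, [-x for x in sine])      # calibration: <= -0.98
```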
Tempo/Transient-Aware Alignment Accuracy
Given two bounces of the same mix with DAW drift up to ±40 ms over 5 minutes at 44.1–96 kHz When alignment is enabled during playback Then the residual timing error between corresponding transients measured over the program is ≤ 2 ms RMS and ≤ 5 ms peak And transient shape is preserved (no pre-ringing above -60 dBFS relative to original) And moderate tempo changes (±5% within sections) remain phase-coherent across stems
Seamless Switching Performance Under Load
Given a session with up to 32 stereo stems at 48 kHz/24-bit and alignment enabled When the user performs rapid A/B toggles (≥ 5 toggles in 2 seconds) and mute/solo operations Then no audio buffer underruns are recorded (XRuns = 0) during the test window And the audio output contains no clicks/pops above -80 dBFS relative to program level And action-to-effect latency remains ≤ 50 ms
Spectral Heatmap Alignment Integration
"As a label manager, I want heatmap comparisons to reflect true differences after alignment so that review decisions are based on accurate visuals."
Description

Pipeline step that feeds alignment transforms into the existing spectral heatmap so overlays are grid-synced. Heatmaps recompute or reproject using the alignment matrix, ensuring apples-to-apples comparisons, correct delta views, and accurate review comments pinned to beat positions. Includes cache invalidation keyed by alignment hash for fast redraws.

Acceptance Criteria
One-Click Alignment Applied to Heatmap Overlays
Given two versions or stems of the same track with measurable drift or phase offset When the user clicks PhaseLock Align in the A/B heatmap view Then the spectral heatmap overlays are grid-synced using the computed alignment matrix And the residual median temporal misalignment across detected beat onsets is < 5 ms and the 95th percentile is < 12 ms And the alignment supports differing sample rates and existing tempo maps without resampling artifacts And an Aligned indicator is shown within 200 ms of user action
Reprojection of Existing Heatmaps Using Alignment Matrix
Given cached heatmaps for both items being compared When an alignment matrix is produced for the pair Then the system reprojects existing heatmaps using the matrix without full recomputation where applicable And for a 3-minute track at 1024x256 resolution on reference hardware, reprojection completes in ≤ 400 ms And the reprojected result matches a full recompute with SSIM ≥ 0.98 and pixel-wise mean absolute error ≤ 2%
Beat-Pinned Comments Remain Correct After Alignment Toggle
Given review comments pinned to beat positions on an unaligned comparison When alignment is enabled or disabled Then each comment marker remains attached to its musical beat with positional error < 10 ms or < 1/64 note at current tempo (whichever is larger) And existing deep links (bar:beat:tick or timecode) resolve to the same musical event under the aligned mapping And the UI updates marker positions without flicker within 150 ms
Delta View Reflects Aligned Differences Only
Given Delta mode is enabled for two versions of the same material When alignment is applied Then the delta heatmap is computed on aligned time bases so that drift-only differences are removed And against a ground-truth manually aligned pair, the delta heatmap's pixel-wise mean absolute difference is ≤ 2% and correlation ≥ 0.98 And spectral energy outside a ±10 ms alignment window is reduced by ≥ 80% compared to the unaligned delta
Cache Invalidation and Fast Redraw on Alignment Change
Given the heatmap cache is keyed by content hash, render parameters, and alignment hash When the alignment matrix changes for a given pair Then only cache entries with a mismatched alignment hash are invalidated And a redraw begins within 100 ms and completes within 300 ms for a 3-minute track at 1024x256 on reference hardware And toggling back to a previously used alignment hash yields a cache hit with zero recomputation of heatmap cells
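Because the key incorporates the alignment hash, an alignment change simply produces a different key: entries for other alignments are never touched, and toggling back to a previously used alignment hash hits the old entry with zero recomputation. A minimal sketch of the keying (field names are illustrative):

```python
import hashlib, json

def heatmap_cache_key(content_hash, render_params, alignment_hash):
    """Deterministic cache key over content, render parameters, and alignment."""
    payload = json.dumps(
        {"content": content_hash, "params": render_params, "align": alignment_hash},
        sort_keys=True,   # stable serialization so equal inputs give equal keys
    )
    return hashlib.sha256(payload.encode()).hexdigest()

cache = {}
params = {"resolution": "1024x256"}
cache[heatmap_cache_key("trackhash", params, "alignA")] = "tiles-A"

# A new alignment matrix changes only the alignment hash, so the lookup
# misses and triggers recomputation for that alignment alone...
miss = cache.get(heatmap_cache_key("trackhash", params, "alignB"))
# ...while toggling back to the previous alignment is a pure cache hit.
hit = cache.get(heatmap_cache_key("trackhash", params, "alignA"))
```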
Fallback Behavior When No Alignment Is Available
Given a comparison where PhaseLock Align fails or is not run When the heatmap view is rendered Then the system displays the unaligned heatmaps with an Unaligned indicator And the API returns alignment: null for the pair without throwing errors And comments remain pinned using their original time/beat references And enabling alignment later transitions without a full page reload
Batch Alignment for Multi-Stem Overlays Against a Reference
Given a session with N stems selected and one designated reference When the user clicks PhaseLock Align Then an alignment matrix is computed per stem against the reference and applied to each stem's heatmap overlay And all overlays are grid-synced with per-stem residual median misalignment < 6 ms and 95th percentile < 15 ms And total reprojection time is ≤ N × 150 ms on reference hardware with operations executed safely in parallel
Export, Metadata & Audit Trail
"As an A&R coordinator, I want aligned exports with traceable metadata and reversible changes so that distribution and feedback are orderly and compliant."
Description

Export service that renders aligned files on demand with embedded alignment metadata (offset, warp map, phase adjustment, reference ID), updates version lineage in TrackCrate, and generates expiring, watermarked download links. Shortlink analytics attribute plays/downloads to aligned versions distinctly. Full audit trail and rollback to pre-alignment state.

Acceptance Criteria
On-demand aligned export embeds alignment metadata
Given a release with PhaseLock Align completed and a set of stems/versions selected for export When the user triggers "Export Aligned" Then each rendered file is time-aligned to its reference such that cross-correlation over aligned sections is ≥ 0.98 and the residual offset is ≤ 1 sample at the project sample rate And the following metadata fields are embedded and readable via TrackCrate API: alignment.offsetMs, alignment.warpMapHash, alignment.phaseAdjustDeg, alignment.referenceId, alignment.algorithmVersion, alignment.confidence And the embedded metadata values match the alignment session within tolerance: offset ±0.1 ms, phase ±1°, referenceId exact match And the file naming convention includes the new versionId and the suffix "-aligned" And cryptographic checksums (SHA-256) are generated, stored, and match the rendered files And the export completes without error and is retriable idempotently (same inputs produce same checksum and versionId)
Version lineage updates on aligned export
Given an aligned export completes successfully When TrackCrate writes the new version record Then the version graph contains a child node for the aligned export with edge type "alignedExport" linking to the source versionId and alignment.referenceId And the lineage record captures alignment metadata hash, createdAt (ISO 8601 UTC), createdBy (userId/service), and checksum And the UI lineage view displays the new node within 5 seconds of completion And the Versions API returns the new versionId and lineage path including the alignment edge And repeated exports with identical inputs within 24 hours do not create duplicate nodes (idempotency)
Expiring, watermarked download links for aligned versions
Given an aligned version exists and a user requests a download link with expiry, maxDownloads, and recipientId When the link is generated Then the asset is watermarked uniquely such that the watermark decoder returns recipientId, versionId, and linkId with confidence ≥ 0.99 And the link is a signed HTTPS URL with ≥128 bits of entropy and cannot be used after expiresAt (returns HTTP 403 within 1 minute of expiry, allowing ±5 minutes clock skew) And maxDownloads is enforced per link (returns HTTP 429 after the limit) And the watermark is imperceptible (perceptual SNR ≥ 60 dB measured by internal QA tool) and does not alter file length or sample rate And revoking the link invalidates access within 60 seconds and is recorded in the audit log And downloading through the link records a download event associated to versionId and linkId
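A signed, expiring link of this shape is typically an HMAC over the link parameters plus a high-entropy link ID. The sketch below illustrates the mechanism only; the secret, host, parameter names, and URL layout are placeholders, not TrackCrate's actual scheme (and real deployments would also enforce maxDownloads and revocation server-side).

```python
import hmac, hashlib, secrets, time
from urllib.parse import urlencode

SECRET = b"server-side-signing-key"   # hypothetical signing key

def make_link(version_id, link_id=None, ttl_s=3600):
    link_id = link_id or secrets.token_urlsafe(32)   # 256 bits of entropy (>= 128 required)
    expires_at = int(time.time()) + ttl_s
    msg = f"{version_id}:{link_id}:{expires_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    qs = urlencode({"v": version_id, "l": link_id, "exp": expires_at, "sig": sig})
    return f"https://dl.example.com/asset?{qs}"

def verify(version_id, link_id, expires_at, sig, now=None):
    """Return an HTTP-style status: 200 for a valid link, 403 otherwise."""
    now = time.time() if now is None else now
    if now > int(expires_at):
        return 403   # expired links are refused
    msg = f"{version_id}:{link_id}:{expires_at}".encode()
    good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return 200 if hmac.compare_digest(good, sig) else 403   # constant-time compare
```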
Shortlink analytics attribute plays/downloads to aligned versions distinctly
Given a shortlink points to an AutoKit press page or direct file for an aligned version When a user plays the private stem player or downloads via the shortlink Then analytics attribute the event to the specific aligned versionId (not the source version) and to the shortlinkId And events are de-duplicated per IP+UserAgent within a 30-minute window And metrics (plays, unique plays, downloads) appear in analytics UI and API within 5 minutes (p95 latency) And campaign parameters (e.g., utm_source, reviewerId) are captured and queryable with the events And A/B views segment metrics by versionId so comparisons between aligned and non-aligned versions are possible
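The 30-minute de-duplication rule amounts to counting an event only when no event for the same IP+UserAgent (and version) was counted inside the window. A minimal in-memory sketch, assuming the window restarts from the last counted event (a production system would use a shared store, not a dict):

```python
import time

class ShortlinkDeduper:
    """Count a play/download once per IP+UserAgent per version per 30-minute window."""
    WINDOW_S = 30 * 60

    def __init__(self):
        self._counted_at = {}   # (ip, ua, version_id) -> time of last counted event

    def should_count(self, ip, ua, version_id, now=None):
        now = time.time() if now is None else now
        key = (ip, ua, version_id)
        last = self._counted_at.get(key)
        if last is None or now - last >= self.WINDOW_S:
            self._counted_at[key] = now   # window restarts at the counted event
            return True
        return False                      # duplicate inside the window: dropped
```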
Full audit trail for export, link lifecycle, and analytics events
Given any of the following actions occur: aligned export created, metadata embedded, link created, link downloaded, link revoked/expired, lineage updated, rollback executed When the action completes Then an append-only audit record is written with: eventType, actor (userId/service), timestamp (ISO 8601 UTC), entityIds (versionId, linkId, referenceId), requestId, previousValues, newValues, and checksum And audit records are tamper-evident via a hash chain (each record stores prevHash) and verification passes for the session And authorized users can query the audit log filtered by versionId and date range and export results as JSON And audit records are retained for at least 2 years And clock skew between services is ≤ 500 ms (based on NTP) so event ordering is consistent within a session
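The tamper-evidence requirement (each record stores prevHash; verification walks the chain) can be sketched as below. Field names beyond prevHash/hash/timestamp are illustrative, and a real log would persist records durably rather than in memory.

```python
import hashlib, json, datetime

class AuditLog:
    """Append-only, tamper-evident log: each record stores the previous
    record's hash, so any in-place edit breaks the chain."""
    def __init__(self):
        self.records = []

    @staticmethod
    def _digest(record):
        body = {k: record[k] for k in ("event", "prevHash", "timestamp")}
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def append(self, event):
        prev = self.records[-1]["hash"] if self.records else "0" * 64
        record = {
            "event": event,
            "prevHash": prev,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        record["hash"] = self._digest(record)
        self.records.append(record)

    def verify(self):
        """Recompute every hash and link; False on any tampering."""
        prev = "0" * 64
        for r in self.records:
            if r["prevHash"] != prev or r["hash"] != self._digest(r):
                return False
            prev = r["hash"]
        return True
```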
Rollback to pre-alignment state restores lineage and invalidates links
Given an aligned export exists with active download links and recorded analytics When a user with rollback permission executes "Rollback to Pre-Alignment" Then the version lineage reverts to the prior state such that the aligned export node is archived and no longer the active child of the source And all active download links for the aligned version are invalidated within 60 seconds and return HTTP 410 And previously recorded analytics remain preserved and are re-associated to the archived aligned versionId And audit log records the rollback with a diff of lineage changes and the list of invalidated linkIds And the system allows re-running alignment after rollback without orphaning nodes (graph integrity check passes)

LevelMatch A/B

Automatic LUFS and stereo balance matching across versions to remove “louder sounds better” bias. Hear mix changes for what they are, not as level jumps. Optionally lock to a target LUFS and preserve dynamics for fair, repeatable evaluations.

Requirements

Auto LUFS Match Engine
"As a mix engineer, I want all versions to be automatically loudness-matched so that I can evaluate tonal and balance differences without louder-sounds-better bias."
Description

Implements automatic loudness normalization based on ITU-R BS.1770-4/EBU R128 to remove loudness bias when comparing versions. For each uploaded master, mix, or stem package, compute integrated LUFS, short-term LUFS, loudness range (LRA), true peak, and apply a non-destructive playback gain offset to match either a selected reference version or a user-selected target LUFS. Ensure true-peak-aware gain staging to avoid clipping, with headroom safety margins across sample rates and bit depths. Precompute loudness metrics on upload and cache offsets for instant playback in the TrackCrate player and AutoKit pages. Provide per-version overrides and fallbacks for incomplete analysis, and handle batch comparisons within a release.
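The core gain-staging rule described above reduces to a single clamp: the match gain is (target − measured) LUFS, limited so the measured true peak plus the gain never exceeds the ceiling. A minimal sketch (function name and the −1.0 dBTP default are taken from the criteria below; they are not a committed API):

```python
def levelmatch_gain_db(measured_lufs, target_lufs, measured_tp_dbtp, ceiling_dbtp=-1.0):
    """Static playback gain (dB) to reach the target loudness, clamped so the
    resulting true peak never exceeds the ceiling. No compression or limiting;
    returns (gain_db, limited_by_true_peak)."""
    wanted = target_lufs - measured_lufs          # gain that would hit the target exactly
    headroom = ceiling_dbtp - measured_tp_dbtp    # max boost before TP crosses the ceiling
    gain = min(wanted, headroom)
    return gain, wanted > headroom

# A quiet master (-18 LUFS, -3 dBTP) matched to -14 LUFS wants +4 dB,
# but only +2 dB is safe against a -1.0 dBTP ceiling.
gain, limited = levelmatch_gain_db(-18.0, -14.0, -3.0)
```

Attenuation is never clamped, since reducing gain can only lower the true peak.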

Acceptance Criteria
Upload Loudness Analysis and Cache Population
Given a supported master/mix/stem file (WAV/AIFF/FLAC; 44.1–192 kHz; 16/24/32-bit) is uploaded and the upload completes When the analysis job runs Then integrated LUFS, short‑term LUFS, LRA, and true peak are computed per ITU‑R BS.1770‑4 with EBU R128 gating and true‑peak oversampling ≥4x, and persisted with the asset And a non‑destructive playback gain offset record is created and cached for that version And the original file bytes remain unchanged (checksum before/after identical) And for files ≤10 minutes, metrics and offset are available within 45 seconds of upload completion; for 10–30 minutes, within 180 seconds And cached offset retrieval time is <50 ms p95 as measured server‑side
Reference Version LUFS Match During A/B
Given a release has a selected reference version and LevelMatch is enabled When I play any other version in that release Then the heard output is gain‑adjusted non‑destructively to match the reference’s integrated loudness within ±0.2 LU And the resulting true peak of the monitored output never exceeds −1.0 dBTP And the applied gain offset (in dB) is displayed in the player within 200 ms of playback start And toggling LevelMatch off restores native level within 200 ms without clicks, pops, or level jumps
Target LUFS Lock With True‑Peak Safety
Given a user‑selected target loudness (e.g., −14 LUFS) is active When any version is played Then only a constant gain is applied (no compression/limiting), aiming to reach the target within ±0.2 LU And if reaching the target would push true peak above −1.0 dBTP, gain is limited to keep true peak ≤−1.0 dBTP and the achieved LUFS is displayed with a “limited by true peak” indicator And true‑peak estimation uses ≥4x oversampling consistently across 44.1–192 kHz sources And no digital clipping indicators are raised during playback
Instant Matched Playback in TrackCrate Player and AutoKit
Given loudness metrics and offsets are cached for a version When the user presses Play in the TrackCrate player or on an AutoKit page Then the correct gain offset is applied before the first audible frame and no audible level jump occurs after start And LevelMatch adds <100 ms p95 to start latency end‑to‑end And cache lookup for offsets is <50 ms p95 server‑side And matched loudness on output is consistent across latest Chrome, Safari, and Firefox on desktop and mobile within ±0.2 LU
Per‑Version Override and Analysis Fallbacks
Given analysis is pending or failed for a version When the user attempts LevelMatch playback Then playback proceeds unadjusted with a visible “Analysis pending/failed” badge and no guessed offset is applied And when a per‑version manual offset or “disable LevelMatch for this version” is set Then that setting overrides auto matching immediately and persists across sessions And when analysis later completes successfully Then the auto offset is updated in cache without interrupting current playback; the next playback uses the new offset
Batch Comparison Consistency Across a Release
Given a release with up to 20 analyzed versions When I cycle A/B between any two versions with LevelMatch enabled Then the integrated loudness of the monitored output between versions differs by ≤0.2 LU And no clicks or pops are introduced during switching; optional crossfade, if enabled, is ≤50 ms And A/B switching latency from toggle to new audio is ≤100 ms p95 And all gain offsets used during the session are served from cache (no recomputation), verified via instrumentation logs
Stereo Balance Match
"As a producer, I want left-right balance and width normalized during A/B so that perceived image shifts don’t mislead my evaluation of a mix revision."
Description

Analyzes stereo image characteristics (L/R RMS, mid/side energy, correlation) and applies a non-destructive channel balance/width offset during playback so versions share comparable stereo centering and perceived width. Provides a user-toggle to enable/disable stereo matching, safeguards mono compatibility, and avoids phase-altering processing by using gain-domain adjustments only. Stores stereo metrics alongside loudness metadata and reuses them across sessions and shared links. Integrates with the TrackCrate player UI to display the applied balance/width offsets for transparency.
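Because the correction is gain-domain only, matching channel balance amounts to measuring the L/R RMS ratio and applying equal-and-opposite channel gains. A simplified sketch on raw sample lists, assuming plain L/R balance (the M/S width path is analogous); function names are illustrative:

```python
import math

def lr_balance_db(left, right):
    """Inter-channel level difference from RMS energy, in dB (positive = left-heavy)."""
    rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
    return 20.0 * math.log10(rms(left) / rms(right))

def match_balance(left, right, ref_balance_db):
    """Pure gain-domain balance correction toward a reference: the needed dB
    shift is split symmetrically across the channels. No delays, filters,
    or phase rotation are involved."""
    delta = ref_balance_db - lr_balance_db(left, right)   # dB shift toward reference
    gain_l = 10.0 ** (+delta / 40.0)   # +delta/2 dB on the left channel
    gain_r = 10.0 ** (-delta / 40.0)   # -delta/2 dB on the right channel
    return [x * gain_l for x in left], [x * gain_r for x in right]

left, right = [1.0] * 64, [0.5] * 64          # ~6 dB left-heavy test signal
ml, mr = match_balance(left, right, 0.0)      # match to a centered reference
```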

Acceptance Criteria
Toggleable Stereo Match During Playback
Given two versions are loaded in the TrackCrate player and a reference version is selected When the user enables Stereo Match during playback Then channel balance and stereo width offsets are applied in real time without writing to source files And enabling or disabling Stereo Match produces no audible artifacts (no clicks/pops) and transition settles within 30 ms And perceived center balance difference between versions is ≤ 0.2 dB ILD relative to the reference And mid/side energy ratio difference between versions is ≤ 0.5 dB And the Stereo Match toggle state persists per user per project across sessions
Metrics Extraction and Persistence
Given a new audio version is uploaded or first played When stereo metrics are not present for the file checksum Then the system computes and stores L/R RMS (integrated), mid/side energy ratio, and inter-channel correlation in the version metadata And subsequent plays reuse the stored metrics without re-analysis unless the file checksum changes And stored metrics are available to the player, API, and included in shared link payloads And metric computation completes before the end of the first playthrough or within 10 seconds, whichever comes first
Mono Compatibility Safeguard
Given Stereo Match is enabled on a stereo file When a computed width or balance offset would reduce the mono correlation below 0.0 Then the offset is clamped to maintain mono correlation ≥ 0.0 And summing the processed playback to mono changes integrated loudness by ≤ 0.5 LU relative to the same content with Stereo Match off And for mono or dual-mono sources, no width change is applied and balance offset is 0 dB, with the UI indicating "No width change applied"
Gain-Domain Only Processing Guarantee
Given Stereo Match processing is active When inspecting the processing chain Then only gain-domain adjustments (L/R and/or M/S gains) are used with no delay lines, filters, or phase rotation And inter-channel delay introduced is 0 samples and measured group delay is 0 across the band And a null test after compensating applied gains achieves ≥ 100 dB cancellation, confirming no phase-altering processing
Cross-Version Consistency With Loudness Match
Given two versions with different loudness and stereo balance are A/B switched When Loudness Match and Stereo Match are both enabled Then Loudness Match is applied before Stereo Match deterministically for every switch And the integrated loudness of versions matches within ±0.1 LU and stereo center/width criteria are met (≤ 0.2 dB ILD difference, ≤ 0.5 dB M/S ratio difference) And no dynamics processing (compression/limiting) is introduced by Stereo Match
Shared Link Reuse and UI Transparency
Given a recipient opens a TrackCrate shared link with Stereo Match available When the player loads versions with stored stereo metrics Then the player uses the stored metrics without re-analysis and Stereo Match behavior matches the owner’s project settings by default And the UI displays applied L/R gain offsets in dB and width offset in % or M/S dB, updating live on A/B switches And an indicator shows Mono Safe clamping when engaged And all applied offsets and toggle states are exposed via the player API for auditability
Instant A/B Comparator
"As a collaborator, I want instantaneous switching between versions at the same timeline position so that I can accurately judge changes on the exact section I’m reviewing."
Description

Delivers a gapless A/B switcher in the TrackCrate web player and AutoKit press pages with synchronized playhead, hotkeys, and click targets for rapid comparison of versions. Displays real-time meters for LUFS, true peak, and shows the applied gain/balance offsets. Includes a "Lock to Target LUFS" toggle, previous/next version navigation, and sticky labeling for versions. Ensures low-latency switching with pre-buffering, mobile-responsive controls, and accessible interactions (ARIA, keyboard navigation). Persists user choices per session and per project for repeatable evaluations.

Acceptance Criteria
Gapless A/B Switching with Synchronized Playhead and Pre-buffering
Given I am on the TrackCrate web player or an AutoKit press page with two or more versions loaded When I trigger an A/B switch between any two versions Then audio continues without an audible gap, click, or phase glitch and the switch latency is <= 50 ms at 44.1 kHz Given both versions share the same sample rate When I switch versions during playback Then the playhead position is preserved within ±10 ms Given versions have different sample rates When I switch during playback Then resampling preserves playhead within ±20 ms and no pitch change is perceived Given the page has loaded and playback has begun When versions are listed for comparison Then each version pre-buffers at least the next 5 seconds of audio within 1 s on broadband (>=10 Mbps) and within 3 s on 3G (>=1 Mbps) Given network throughput drops below 1 Mbps mid-session When I switch versions Then the player falls back to buffered audio, shows a non-blocking “Buffering” state, and resumes with synchronized playhead on recovery
Real-Time Metering and Applied Offset Display
Given a version is playing When audio is active Then LUFS-S and true-peak meters update at >= 10 Hz with accuracy of ±0.5 LUFS and ±0.5 dBTP respectively Given LevelMatch A/B is engaged When comparing versions Then the UI displays the applied gain offset in dB (0.1 dB precision) and stereo balance adjustment in dB L/R (0.1 dB precision) for the active version Given I switch versions When the new version becomes active Then meter scales and offset readouts update within 100 ms Given playback is paused When no samples are flowing Then meters decay smoothly and freeze within 1 s and offsets remain visible
Lock to Target LUFS Toggle Behavior
Given the "Lock to Target LUFS" toggle is enabled When I play any version Then its integrated loudness is adjusted via static gain to meet the selected target within ±0.2 LUFS with no dynamic processing (compression/limiting) applied Given the target LUFS control is available When I set a value between -30.0 and -5.0 LUFS in 0.1 increments Then all versions conform to the new target within ±0.2 LUFS Given the toggle is disabled When I compare versions Then prior per-version LevelMatch offsets (if any) are restored and no target normalization is applied Given achieving the selected target would cause clipping When normalization is computed Then the system applies the maximum safe static gain to keep true peak < 0 dBTP and displays a "Headroom limited" notice without applying dynamic processing
Hotkeys, Keyboard Navigation, and Click Targets
Given the player has focus When I press the A/B hotkey (default: Q) or click the A/B toggle button Then it switches between the last two selected versions within 50 ms Given multiple versions exist When I press number keys 1–9 Then the corresponding version by order is selected and becomes active if playing; 0 opens the full version list Given I press Arrow Left/Right during playback When versions are available Then the active version moves to previous/next in order and audio stays time-aligned within ±10 ms Given pointer or touch input When I tap/click version controls Then each control has a minimum 44x44 px hit area and provides visual pressed/active feedback within 100 ms
Version Navigation with Sticky Labels
Given versions have custom labels When I navigate prev/next or select a version Then the current version label remains visible and pinned near the controls on desktop and mobile Given a label exceeds 20 characters When displayed in the sticky area Then it truncates with ellipsis and reveals full text on hover/focus via tooltip Given I switch versions When the active version changes Then the previous and current labels are both shown (e.g., "A: Mix v3" vs "B: Master v1") for at least 2 seconds to reinforce context Given more than 9 versions exist When navigating via prev/next Then ordering is stable and wraps from last to first and first to last
Accessibility (ARIA) and Mobile Responsiveness
Given keyboard-only usage When interacting with the player Then all functions (play/pause, version select, A/B, lock toggle, target input, prev/next) are reachable via Tab/Shift+Tab with visible focus indicators and operable via Enter/Space Given a screen reader (NVDA, JAWS, VoiceOver) is active When the active version changes Then the new version label and applied offsets are announced via aria-live="polite" within 500 ms; meter values are not continuously announced but are available on demand via labelled controls Given WCAG 2.1 AA requirements When rendering controls Then color contrast meets AA, and touch targets are >= 44x44 px on mobile; orientation changes do not break layout or hide critical controls Given mobile devices (iOS Safari 16+, Android Chrome 114+) When performing A/B switching Then all performance criteria from the gapless switching scenario are met and tap interactions remain operable; unavailable desktop hotkeys degrade gracefully
Persistence of User Choices per Session and Project
Given I adjust A/B settings (selected A and B versions, last active version, Lock to Target LUFS state, target LUFS value) When I navigate away and return within the same session Then the settings are restored automatically for that project Given I revisit the same project on the same device and account within 30 days When I open the player Then my last A/B settings for that project are restored from persisted storage Given I open a different project When the player loads Then only that project's last settings are applied and preferences do not leak across projects Given two tabs of the same project are open When I change A/B settings in one tab Then the other tab reflects the change within 2 seconds or after the next user interaction
Dynamics-Safe Level Lock
"As a mastering engineer, I want level matching that respects dynamics and avoids added limiting so that my assessments remain faithful to the source material."
Description

Provides a dynamics-preserving mode when locking to a target LUFS by prioritizing gain offsets that respect true-peak ceilings and avoid unintended compression/limiting. When necessary, offers an optional transparent true-peak limiter to prevent inter-sample clipping, clearly indicating when limiting is engaged. Surfaces warnings if the requested target would materially alter dynamics, and allows per-version opt-out. Ensures that loudness normalization does not change the creative intent during critical A/B evaluations.

Acceptance Criteria
True-Peak Ceiling Enforcement During Level Lock
Given a version with measured integrated loudness (LUFS-I) and true-peak (dBTP) per ITU-R BS.1770-4 And a configured true-peak ceiling (default -1.0 dBTP) When Dynamics-Safe Level Lock is enabled and the requested target LUFS would push true-peak above the ceiling if applied directly Then the system applies a reduced static gain so the resulting true-peak <= the ceiling within ±0.1 dBTP And the final integrated loudness is the closest achievable toward the target without exceeding the ceiling And the UI shows a "Ceiling Hit" indicator with numeric delta-to-target (±0.1 LU) and applied gain (±0.1 dB) And no limiter/compression is instantiated
Optional Transparent True-Peak Limiter Engagement & Indication
Given the optional true-peak limiter is enabled for Dynamics-Safe Level Lock And the requested target LUFS would otherwise exceed the true-peak ceiling When Level Lock is applied Then the output inter-sample true-peak does not exceed the configured ceiling within ±0.1 dBTP (>=4x oversampling measurement) And a gain-reduction meter displays real-time and maximum GR; the max GR is stored per version And a visible "Limiter Engaged" indicator appears whenever instantaneous GR ≥ 0.1 dB for ≥ 50 ms And total processing latency is reported and A/B compensated; stereo image is preserved (no mid/side imbalance introduced) And when the limiter is disabled, GR remains 0.0 dB and no limiter indicator is shown
Dynamics Alteration Warning Thresholds & Actions
Given a user requests a target LUFS under Dynamics-Safe Level Lock When predicted dynamics impact exceeds thresholds (LRA change > 0.5 LU or required limiting max GR > 1.0 dB or crest-factor reduction > 1.0 dB) Then a non-blocking warning "Target may alter dynamics" is shown on the affected version(s) And the warning offers actions: Proceed (enable limiter for this version), Lower Target (apply suggested safe target), Opt Out (exclude this version from Level Lock) And selecting an action immediately applies and re-analyzes; the warning state clears when impact falls below thresholds And the warning event is logged per version with timestamp and chosen action
Per-Version Opt-Out from Dynamics-Safe Level Lock
Given multiple versions are available in LevelMatch A/B When the user toggles Opt-Out for a specific version Then that version bypasses all normalization and limiting (no gain applied; original peaks/loudness preserved) And all other versions remain locked to the target per configuration And A/B switching visibly marks the opted-out version as "Not Normalized" And the opt-out state persists per version across sessions and shared links And re-enabling Level Lock for that version restores its last computed settings without re-upload
Static Gain-Only Behavior with Limiter Disabled
Given Dynamics-Safe Level Lock is enabled and the limiter is disabled When Level Lock is applied Then the processor applies only a constant gain offset (no time-varying gain) And measured gain variance over time is ≤ 0.1 dB (excluding transport fades) And the gain-reduction meter remains at 0.0 dB for the entire playback And A/B switching is click/pop free and maintains alignment within ±1 sample
Accurate LUFS Targeting and Result Reporting
Given a version analyzed with ITU-R BS.1770-4 (gated) and limiter disabled And no true-peak ceiling condition is triggered When Level Lock is applied to a target LUFS Then the resulting integrated loudness equals the target within ±0.1 LU And the applied gain equals (target - measured) within ±0.1 dB And if a ceiling hit prevented exact targeting, the UI displays achieved LUFS and delta-to-target to ±0.1 LU alongside a "Ceiling Hit" indicator
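The gain arithmetic behind these criteria can be sketched in a few lines. This is a simplified model, assuming a static gain shifts integrated loudness one-for-one; `level_lock_gain` and the ceiling default are illustrative, not product API:

```python
def level_lock_gain(measured_lufs, target_lufs, true_peak_dbtp,
                    ceiling_dbtp=-1.0):
    """Static Level Lock gain with the limiter disabled.

    Ideal gain is (target - measured); if boosting that far would push
    the true peak past the ceiling, the gain is clamped and the
    remaining delta-to-target is returned so the UI can surface a
    "Ceiling Hit" indicator.
    """
    ideal = target_lufs - measured_lufs
    headroom = ceiling_dbtp - true_peak_dbtp   # safe boost before the ceiling
    gain = min(ideal, headroom) if ideal > 0 else ideal  # attenuation is always safe
    achieved = measured_lufs + gain            # first-order estimate
    return gain, achieved, target_lufs - achieved
```

A version at -18 LUFS with -6 dBTP peaks reaches a -14 LUFS target exactly (gain +4 dB, delta 0), while a -10 LUFS target would need +8 dB but only +5 dB of headroom exists, so the achieved loudness is -13 LUFS with a reported 3 LU delta.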
Loudness & Image Metadata Pipeline
"As a label manager, I want loudness and stereo metrics stored and reused so that A/B comparisons are fast and consistent across sessions and shared links."
Description

Builds a background analysis pipeline that computes and stores loudness (integrated/short-term LUFS, LRA, true peak) and stereo image metrics for every asset version on upload or replacement. Caches precomputed A/B offsets per version set and invalidates intelligently when versions change. Exposes metrics and offsets via internal APIs to the TrackCrate player, AutoKit press pages, and shortlinks, enabling instant, consistent level-matched playback across devices. Scales via worker queues with retry logic and supports stems and multi-file releases.
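The gating logic this pipeline relies on (ITU-R BS.1770-4) can be sketched compactly. This is a simplified model: K-weighting and the 400 ms, 75%-overlap blocking are assumed to happen upstream, and `integrated_lufs` is an illustrative name, not a TrackCrate API:

```python
import math

def integrated_lufs(block_powers):
    """Two-stage gated integrated loudness (BS.1770-4 style).

    `block_powers` holds the K-weighted, channel-summed mean-square
    power of each 400 ms block.
    """
    def loudness(p):
        return -0.691 + 10.0 * math.log10(p) if p > 0 else float("-inf")

    # Stage 1: absolute gate at -70 LUFS.
    survivors = [p for p in block_powers if loudness(p) > -70.0]
    if not survivors:
        return float("-inf")
    # Stage 2: relative gate 10 LU below the stage-1 mean.
    relative = loudness(sum(survivors) / len(survivors)) - 10.0
    kept = [p for p in survivors if loudness(p) > relative]
    if not kept:
        return float("-inf")
    return loudness(sum(kept) / len(kept))
```

The stored per-version metric then makes A/B gain offsets a subtraction rather than a re-analysis, which is what lets cached offsets serve playback instantly.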

Acceptance Criteria
Auto-Analysis on Asset Upload
Given a supported audio asset (wav, aiff, flac, mp3, m4a) up to 2 GB is uploaded and ingestion completes, When the upload is finalized, Then an analysis job is enqueued with the assetVersionId within 5 seconds. Given an analysis job is enqueued, When workers are healthy, Then a worker starts processing within 30 seconds (P95) of enqueue time. Given processing starts, When analysis completes, Then integrated LUFS, short-term LUFS (3s window), LRA, true peak (dBTP), stereo balance (L–R RMS dB), stereo width (mid/side ratio), and correlation coefficient are computed to a resolution of 0.1 LU/0.1 dB or finer and stored with timestamps keyed by assetVersionId. Given analysis succeeds, When querying the metadata store by assetVersionId, Then all metrics are retrievable within 100 ms (P95). Given an unsupported format or decode error, When analysis fails, Then the job status is set to failed with an error code, and no partial metrics are persisted.
Intelligent Reanalysis on Version Replacement
Given an existing version has stored metrics and cached A/B offsets, When the underlying file is replaced, Then a new analysis is triggered automatically and the prior metrics are retained in audit history. Given a version replacement is committed, When caches are evaluated, Then all A/B offsets involving the replaced version are invalidated within 5 seconds and scheduled for regeneration. Given the replacement file is byte-identical to the prior file (same content hash), When replacement is requested, Then the system reuses existing metrics and offsets without reprocessing and records an idempotent no-op. Given new analysis finishes after replacement, When offsets are regenerated, Then the dataVersion is incremented and all dependent caches reflect the new data within 60 seconds.
Precomputed A/B Offsets Caching and Invalidation
Given a version set with two or more analyzed versions, When offsets are generated, Then for every ordered pair the system stores a gain offset (dB) and channel trim/balance correction (dB) that align integrated LUFS within ±0.1 LU and channel balance within ±0.5 dB. Given a target loudness is specified (e.g., −14.0 LUFS) with lock=true, When offsets are requested, Then offsets are computed relative to the target and preserve LRA within ±0.1 LU and true peak ≤ −1.0 dBTP post-gain. Given no target loudness is specified, When offsets are requested, Then offsets are computed relative to the quieter version to avoid clipping and ensure post-gain true peak ≤ −1.0 dBTP. Given any version in a set is added, removed, or replaced, When the change is committed, Then all offsets for the affected set are regenerated and cached within 60 seconds. Given cached offsets exist, When the same offsets are requested repeatedly, Then cache hit rate is ≥ 95% over a rolling 24-hour window for active version sets.
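The no-target rule above (align to the quieter version, keep post-gain true peak under the ceiling) can be sketched as follows. Field names and the pairwise shape are illustrative assumptions, not the stored schema:

```python
def ab_offsets(a, b, ceiling_dbtp=-1.0):
    """Gain offsets (dB) matching two versions' integrated loudness.

    `a` and `b` are dicts with 'lufs' and 'true_peak_dbtp'. With no
    target specified, both versions are aligned to the quieter one;
    if either post-gain true peak would still exceed the ceiling, an
    equal trim is applied to both so the loudness match is preserved.
    """
    ref = min(a["lufs"], b["lufs"])                  # quieter version wins
    gains = {"a": ref - a["lufs"], "b": ref - b["lufs"]}
    worst_peak = max(a["true_peak_dbtp"] + gains["a"],
                     b["true_peak_dbtp"] + gains["b"])
    trim = min(0.0, ceiling_dbtp - worst_peak)       # 0 when already safe
    return {k: g + trim for k, g in gains.items()}
```

Because the trim is applied equally, invalidation on version replacement only needs to recompute pairs involving the changed version, which is what makes the 60-second regeneration budget realistic.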
Internal API for Metrics and Offsets
Given a valid internal service token, When GET /internal/metrics?versionId={id} is called, Then the API returns 200 with JSON fields: integratedLufs, shortTermLufs, lra, truePeakDbtp, stereoBalanceDb, stereoWidthMs, correlation, createdAt, with P95 latency ≤ 200 ms. Given a valid internal service token, When GET /internal/ab-offsets?setId={id} is called, Then the API returns 200 with JSON containing pairwise offsets and optional targetLocked offsets, with P95 latency ≤ 150 ms on cache hit and ≤ 500 ms on cache miss. Given an invalid or expired token, When either endpoint is called, Then the API returns 401 without revealing the existence of the resource. Given metrics are not yet available, When GET /internal/metrics is called, Then the API returns 202 Accepted with a Retry-After header indicating next poll time. Given a client supplies If-None-Match with a current ETag, When the resource has not changed, Then the API returns 304 Not Modified.
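The status-code semantics above reduce to a small decision order. A minimal sketch, not the real service handler (the 5-second Retry-After value is an assumed poll interval):

```python
def metrics_response(token_valid, metrics, etag, if_none_match):
    """Selects the HTTP status for the internal metrics endpoint.

    Order matters: auth is checked first so a 401 never reveals
    whether the resource exists; then pending analysis (202); then
    the conditional GET (304); otherwise the full payload (200).
    """
    if not token_valid:
        return 401, {}, None
    if metrics is None:
        return 202, {"Retry-After": "5"}, None   # analysis still running
    if if_none_match == etag:
        return 304, {"ETag": etag}, None
    return 200, {"ETag": etag}, metrics
```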
Low-Latency Offset Retrieval for Instant Playback
Given the TrackCrate player requests A/B offsets for two analyzed versions prior to playback, When the cache is warm, Then offsets are delivered within 100 ms (P95) from the internal API. Given A/B toggling during evaluation, When the player applies provided offsets, Then measured loudness difference over a 15-second reference section is ≤ 0.2 LU and post-gain true peak remains ≤ −1.0 dBTP for both versions. Given stereo balance matching, When offsets are applied, Then left–right RMS energy difference over the same section is ≤ 0.5 dB. Given clients on mobile and desktop, When requesting the same offsets, Then numeric values are identical within ±0.01 across devices for the same version IDs and parameters.
Stems and Multi-File Release Support
Given a release with multiple assets tagged by role (mix, instrumental, acapella, stem type), When ingestion completes, Then each asset is analyzed independently and metrics are stored with trackId, role, and assetVersionId. Given a stem group under a track (e.g., drums, bass, vocals), When A/B offsets are requested between stem versions within the same group, Then offsets are available and constrained to the group membership. Given a batch upload of up to 20 files each ≤ 500 MB, When processed, Then 95% of analyses complete within 15 minutes from batch finalize time. Given assets with sample rates up to 192 kHz and 16–32-bit integer/float depth, When analyzed, Then metrics compute without failure and are normalized per ITU-R BS.1770-4 to LUFS units.
Resilient Worker Queue with Retries and Idempotency
Given a transient processing failure, When a job fails, Then it is retried up to 5 times with exponential backoff starting at 30 seconds and capping at 10 minutes, using an idempotency key of {assetVersionId, contentHash} to prevent duplicate writes. Given retries are exhausted, When the job still fails, Then the job is moved to a dead-letter queue with error context and an alert is emitted to on-call within 1 minute. Given a worker crashes mid-job, When its lease (10 minutes) expires, Then the job lock is released and the job is safely re-queued without duplicating persisted metrics. Given a system restart, When workers resume, Then all in-flight or queued jobs are recovered without loss and resume processing within 2 minutes. Given queue backlog exceeds 1000 jobs for more than 5 minutes, When autoscaling is enabled, Then worker pool scales to restore median queue wait time to ≤ 60 seconds within 10 minutes.
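The retry schedule and idempotency key named above can be sketched directly from the numbers in the criteria (function names are illustrative):

```python
import hashlib

def backoff_schedule(base_s=30.0, cap_s=600.0, max_retries=5):
    """Exponential backoff: 30 s base, doubling, capped at 10 minutes."""
    delays, d = [], base_s
    for _ in range(max_retries):
        delays.append(min(d, cap_s))
        d *= 2
    return delays

def idempotency_key(asset_version_id, content_hash):
    """Stable key for {assetVersionId, contentHash} so a re-queued or
    lease-expired job can never double-write persisted metrics."""
    return hashlib.sha256(
        f"{asset_version_id}:{content_hash}".encode()).hexdigest()
```

With the default five retries the delays are 30, 60, 120, 240, and 480 seconds; the 10-minute cap only engages if the retry budget is raised.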
Share-Consistent Level Match
"As an artist sharing mixes externally, I want reviewers to hear the same level-matched A/B I hear in TrackCrate without altering the source files so that feedback is fair and secure."
Description

Ensures that LevelMatch A/B behavior is preserved on shared AutoKit press pages and private stem players accessed via shortlinks, honoring permissions, expiring tokens, and watermarking policies. Carries level-match configuration in shareable URLs without exposing private version details, and guarantees that downloads remain untouched (no destructive level changes). Provides cross-browser compatibility and consistent behavior for reviewers, with analytics capturing A/B usage events for insights without leaking audio content.

Acceptance Criteria
Share URL Initializes LevelMatch Settings
Given a project has multiple mix versions and LevelMatch parameters are configured And an AutoKit or stem player share link is generated with valid levelMatch parameters (e.g., levelMatch=on, targetLUFS=-14, balance=auto) When a reviewer opens the link Then the player initializes with LevelMatch enabled per URL parameters And integrated LUFS normalization is applied in playback to within ±0.2 LUFS of target for each version And stereo balance matching is applied to within ±0.5 dB L/R average between compared versions And A/B switching preserves relative loudness and balance with switch latency ≤100 ms And invalid or missing parameters fall back to project defaults without error
Non-Destructive Playback and Untouched Downloads
Given LevelMatch processing is active during playback When the reviewer initiates a download (original or watermarked) Then the downloaded file bytes match the stored master (for originals) or pre-rendered watermark asset (for watermarked), verified by checksum equality And no LevelMatch gain or processing is applied to the downloaded asset And watermarking, filenames, and embedded metadata match their configured download profiles without alteration
Permission, Token Expiry, and Access Enforcement
Given a share link includes an expiring, permission-scoped token When the token is valid and unexpired Then the reviewer can stream audio with LevelMatch per URL config but cannot modify project-level settings And local changes to LevelMatch (e.g., toggling, target LUFS) affect only the current session and are not persisted to the project When the token is expired, revoked, or lacks playback scope Then playback is blocked, LevelMatch controls are hidden/disabled, and a non-identifying error state is shown without exposing asset names or counts
Cross-Browser and Device Consistency
Given the share link is opened on latest stable Chrome, Firefox, Safari, and Edge (desktop), iOS Safari, and Android Chrome When playing and A/B switching between versions with LevelMatch enabled Then measured integrated loudness per version matches target within ±0.2 LUFS across all tested browsers/devices And stereo balance matching remains within ±0.5 dB across all tested browsers/devices And A/B switch latency is ≤100 ms on desktop and ≤150 ms on mobile And no audible clipping occurs (true peak ≤ -1.0 dBTP) during normalization
Analytics Capture Without Content Leakage
Given analytics collection is enabled for the share link When the reviewer performs A/B switches, toggles LevelMatch, or changes target LUFS Then events are emitted with anonymized session ID, link ID, timestamp, event type, and non-audio numeric parameters only And no audio content, waveforms, fingerprints, checksums, filenames, or private version identifiers are transmitted And if the browser has Do Not Track enabled, analytics are disabled for that session And events are buffered offline and retried up to 3 times with exponential backoff before being dropped
Private Metadata and Version Privacy in Shared Context
Given a share link is generated for a project with multiple private versions When the reviewer opens the link Then the URL and page UI reveal only public labels configured for sharing (e.g., "Version A", "Version B") and never internal filenames, storage paths, user IDs, or database IDs And levelMatch-related URL parameters do not contain or imply private version identifiers And tokens are opaque and signed such that they cannot be decoded client-side to reveal private metadata
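One way to satisfy the token requirements above is an HMAC-signed payload that carries only public claims. This is an illustrative scheme, not TrackCrate's actual token format; the payload is readable client-side, which is acceptable precisely because it never contains private version identifiers:

```python
import base64, hashlib, hmac, json, time

def make_share_token(secret, link_id, scopes, ttl_s):
    """Signed share token: public link id, scopes, expiry — nothing else."""
    body = json.dumps({"lid": link_id, "scp": sorted(scopes),
                       "exp": int(time.time()) + ttl_s},
                      separators=(",", ":")).encode()
    sig = hmac.new(secret, body, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(body).decode() + "." +
            base64.urlsafe_b64encode(sig).decode())

def verify_share_token(secret, token):
    """Returns the claims if the signature is valid and unexpired, else None."""
    b64_body, b64_sig = token.split(".")
    body = base64.urlsafe_b64decode(b64_body)
    sig = base64.urlsafe_b64decode(b64_sig)
    expected = hmac.new(secret, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sig):
        return None   # tampered or wrong key
    claims = json.loads(body)
    return claims if claims["exp"] > time.time() else None
```

Verification fails closed: a tampered or expired token yields `None`, which maps to the non-identifying error state required above.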
Stereo Balance Auto-Match Option Behavior
Given the Auto Balance option in LevelMatch is enabled When switching between versions with differing stereo balance Then the system applies compensating gain to achieve L/R balance within ±0.5 dB without altering mid/side content or introducing clipping (true peak ≤ -1.0 dBTP) And disabling Auto Balance immediately removes compensation and restores original stereo presentation

Delta Solo

Instantly solo only what changed between takes—per stem or full mix. Scrub and loop the difference signal to pinpoint edits, automation moves, or processing tweaks. Export short delta clips with timestamped notes to accelerate feedback cycles.

Requirements

Sample-Accurate Delta Engine
"As a mix engineer, I want to hear only the difference between two takes so that I can instantly identify what changed without re-listening to entire passes."
Description

Compute a phase- and gain-aligned difference signal between any two takes at the stem or full-mix level. Support automatic polarity check, sample-rate/bit-depth normalization, time offset correction, and optional transient-aware time-warping to maximize null accuracy. Stream the delta signal to the player for real-time audition, and expose summary metrics (RMS/peak change, spectral variance, % content changed) for quick assessment. Handle mono/stereo stems and multichannel bounces, with safe output limiting to prevent audition overload. Persist lightweight delta manifests tied to asset hashes to avoid recomputation and ensure determinism across sessions within TrackCrate’s version graph.
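The core of the engine, polarity, gain, and offset alignment followed by subtraction, can be sketched as below. This is a minimal model assuming integer-sample offsets only; a real engine would add resampling, bit-depth normalization, and transient-aware warping as described above:

```python
import math

def delta_signal(take_a, take_b, max_lag=16):
    """Difference signal after polarity, gain, and offset alignment."""
    def dot(x, y):
        return sum(p * q for p, q in zip(x, y))

    # 1) Offset + polarity: the lag with the largest |cross-correlation|.
    best_lag, best_r = 0, 0.0
    for lag in range(-max_lag, max_lag + 1):
        x = take_a[lag:] if lag >= 0 else take_a
        y = take_b if lag >= 0 else take_b[-lag:]
        r = dot(x, y)
        if abs(r) > abs(best_r):
            best_lag, best_r = lag, r
    polarity = -1.0 if best_r < 0 else 1.0

    # 2) Gain: least-squares scale of the aligned reference onto take B.
    x = take_a[best_lag:] if best_lag >= 0 else take_a
    y = take_b if best_lag >= 0 else take_b[-best_lag:]
    n = min(len(x), len(y))
    x = [polarity * v for v in x[:n]]
    g = dot(x, y[:n]) / (dot(x, x) or 1.0)

    # 3) Residual: only what actually changed survives.
    return [yi - g * xi for xi, yi in zip(x, y[:n])]
```

For two takes that differ only by delay, polarity, and gain, the residual nulls to the noise floor, which is exactly the high-null behavior the acceptance criteria below quantify.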

Acceptance Criteria
High-Null Alignment for Identical Content with Polarity/Gain/Offset Differences
Given two takes of the same stem where Take B is polarity-inverted, +3.0 dB gain, and delayed by 1 sample, with original formats 44.1 kHz/16-bit and 48 kHz/24-bit When the delta is computed with automatic polarity check, gain alignment, sample-rate/bit-depth normalization, and time-offset correction (transient warping disabled) Then the engine detects and corrects polarity, aligns gain to within ±0.1 dB, and reports the detected offset = 1 sample And the residual delta integrated RMS is ≤ -60 dB relative to Take A integrated RMS over the common program region And the residual delta true-peak is ≤ -40 dB relative to Take A true-peak And the alignment parameters are written to the delta manifest
Transient-Aware Time-Warping Improves Null on Drifted Takes
Given two 5-minute full mixes with cumulative timing drift of 10 ms and identifiable transient anchors in both When the delta is computed once with only global time-offset correction and once with transient-aware time-warping enabled Then enabling transient-aware warping reduces the residual delta integrated RMS by at least 15 dB compared to the non-warped result And the maximum local warp does not exceed 20 ms and the average warp is reported And no overlap-add artifacts exceed -50 dBFS in the residual (measured as spurious energy outside anchor-adjacent windows) And warping summary metrics (max warp, average warp, anchor count) are included in the delta metrics
Real-Time Delta Streaming, Scrub, and Loop
Given a computed delta for an 8-channel bounce at 48 kHz/24-bit When the user presses Play in the TrackCrate player Then audible delta output begins within 100 ms and plays 5 minutes without buffer underruns or dropouts And scrubbing updates the audible delta within 50 ms and loop points snap to sample boundaries with gapless looping And switching between per-stem delta and full-mix delta occurs within 150 ms without audible glitch or transport restart
Summary Metrics Accuracy and Availability
Given any computed delta between two takes When metrics are requested via the API and displayed in the UI Then the following metrics are returned: RMS change (dB), peak change (dBFS), spectral variance (0–1), and percent content changed (0–100%) And values match an offline reference within tolerances: RMS ±0.2 dB, peak ±0.1 dBFS, spectral variance ±0.02, percent changed ±1% And metrics are available within 200 ms of delta computation completion and are persisted in the delta manifest
Safe Output Limiting During Audition
Given a delta whose predicted playback true-peak would exceed -1.0 dBTP When audition is enabled with safe limiting Then the output true-peak does not exceed -1.0 dBTP (measured with ≥4× oversampling), and the sample-peak clipping count is 0 And the limiter exposes gain reduction telemetry per processing block And disabling the limiter restores the original (unlimited) delta level within 1 block without a pop/click
Multichannel and Mono/Stereo Handling
Given stems or bounces in mono (1 ch), stereo (2 ch), and surround (≥6 ch) When deltas are computed for each case Then channel counts and ordering are preserved, per-channel alignment is applied consistently, and interchannel phase coherence change is ≤ 0.01 And attempts to compute deltas between takes with mismatched channel counts are rejected with a clear, localized error message and no audio output
Delta Manifest Persistence and Determinism
Given a delta computed between two assets and saved in a TrackCrate project When the project is reopened locally or by a collaborator Then the delta manifest, keyed by both input content hashes and engine version, is reused without recomputation, producing a byte-identical delta buffer and identical metrics And any change to either input’s content hash or to the engine version invalidates the manifest and triggers recomputation And the manifest size is ≤ 32 KB and stores alignment parameters, metrics, and a digest of the delta buffer
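The manifest's cache key follows directly from the invalidation rule above: both input content hashes plus the engine version. A sketch with illustrative names:

```python
import hashlib, json

def manifest_key(content_hash_a, content_hash_b, engine_version):
    """Deterministic delta-manifest key.

    Any change to either input's content hash or to the engine version
    produces a different key, so stale manifests can never be reused.
    """
    payload = json.dumps({"a": content_hash_a, "b": content_hash_b,
                          "engine": engine_version},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()
```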
Automatic Take Alignment & Region Mapping
"As a producer, I want takes to auto-align before diffing so that the delta accurately reflects creative changes rather than timing mismatches."
Description

Automatically align candidate takes prior to delta computation using cross-correlation, transient/beat markers, and silence detection to handle offsets, drift, or edited regions. Build a region map that pairs comparable sections and marks unmatched inserts/deletions. Provide manual nudge and per-stem alignment overrides for edge cases. Store alignment parameters in the version context so subsequent comparisons reuse the same mapping and remain consistent across collaborators and devices.
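Once takes are reduced to coarse per-section fingerprints, building the region map of matches, inserts, and deletions is a sequence-alignment problem. A sketch using the standard library's `difflib` as a stand-in aligner (the fingerprinting itself, e.g. quantized chroma or energy hashes, is assumed upstream):

```python
from difflib import SequenceMatcher

def region_map(sections_a, sections_b):
    """Pairs comparable sections and labels unmatched ones.

    `sections_a`/`sections_b` are hashable per-section fingerprints.
    Returns (label, a_range, b_range) tuples, with None for the side
    that has no counterpart.
    """
    matcher = SequenceMatcher(a=sections_a, b=sections_b, autojunk=False)
    out = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            out.append(("match", (i1, i2), (j1, j2)))
        elif tag == "delete":
            out.append(("deletion", (i1, i2), None))   # only in take A
        elif tag == "insert":
            out.append(("insert", None, (j1, j2)))     # only in take B
        else:  # "replace": content present in both takes but changed
            out.append(("changed", (i1, i2), (j1, j2)))
    return out
```

Because the aligner is deterministic for identical inputs, the resulting map can be checksummed and reused across collaborators and devices, as the criteria below require.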

Acceptance Criteria
Auto-Align Offsets and Drift (Full Mix)
Given two takes of the same song with up to 1.5 s start offset, up to 40 ms cumulative drift over 4 minutes, and leading/trailing silences, When automatic alignment runs using cross-correlation, transient/beat markers, and silence gating, Then the resulting alignment reduces absolute timing error across matched regions to median <= 5 ms and 95th percentile <= 12 ms. Given identical inputs, When automatic alignment is run multiple times, Then the produced region map and offsets are deterministic and identical (checksum match).
Region Map With Inserts and Deletions
Given Take A and Take B where A contains an 8-bar extra chorus absent in B and B contains a 500 ms trimmed intro, When region mapping is generated, Then the map pairs all comparable sections and marks the extra chorus as an insert and the trimmed intro as a deletion with start/end timestamps accurate to within ±20 ms. Given non-silent audio content, When region mapping is generated, Then >= 98% of non-silent duration is either matched or explicitly labeled as insert/deletion; residual unmatched but non-silent time <= 2%. Given regions shorter than 250 ms, When mapping is generated, Then they are merged into adjacent regions or ignored to avoid micro-gaps, and the decision is recorded in the map metadata.
Manual Nudge Fine-Tuning
Given a matched region is selected, When the user applies a manual nudge of ±1–500 ms in 1, 5, or 10 ms steps, Then the waveform preview updates immediately and the delta recomputes within 1 second of inactivity. Given the user applies a manual nudge, When the user performs undo/redo, Then the alignment parameters revert/advance accordingly and the region map reflects the change. Given manual nudge adjustments are saved, When the session is re-opened, Then the same nudges are present and applied before delta computation.
Per-Stem Alignment Overrides
Given a project with multiple stems and a global alignment, When the user enables a per-stem override for a stem, Then that stem’s alignment offset/map can be adjusted independently without altering other stems’ mappings. Given a per-stem override is applied to correct misalignment, When evaluated against the global mapping, Then the corrected stem’s timing error across its matched regions is reduced by at least 50% or to median <= 6 ms, whichever is met first. Given per-stem overrides exist, When exporting delta clips per stem, Then the export uses the per-stem mapping for that stem.
Persistent Alignment Mapping Across Collaborators and Devices
Given alignment and mapping are saved to the version context, When another collaborator opens the same project on a different device with the same audio file hashes, Then the identical region map and parameters auto-load and produce identical alignment (checksum match). Given any source audio file has changed (hash mismatch or duration difference > 100 ms), When loading the saved mapping, Then the system warns that re-alignment is required and provides a one-click re-align action; the previous mapping is preserved as a prior version. Given saved alignment parameters, When a new comparison is initiated between the same takes, Then the system reuses the mapping without recomputing, completing initialization in < 1 second.
Performance and Determinism
Given a session of up to 5 minutes with up to 16 stereo stems, When automatic alignment and region mapping run on a mid-tier device, Then initial computation completes in <= 12 seconds and UI remains responsive (progress indicator visible and cancel available). Given alignment is canceled mid-process, When the user retries, Then the operation restarts cleanly and produces the same result as an uninterrupted run. Given identical inputs and settings, When alignment runs on different devices (same OS family) or at different times, Then the region map checksum is identical, ensuring deterministic results.
Delta Solo Controls & Visualization
"As a collaborator reviewing edits, I want intuitive controls to solo, scrub, and loop only the changes so that I can quickly evaluate specific tweaks and give precise feedback."
Description

Add per-stem and full-mix Delta Solo toggles that route the player to audition the computed difference signal. Provide scrub and loop controls that operate on the delta timeline, plus keyboard shortcuts (e.g., D to toggle delta, L to loop selection). Render synchronized delta waveforms and an optional spectrogram highlighting frequency bands with the greatest change. Include A/B/C modes (A=Take A, B=Take B, C=Delta) with safe gain normalization and peak warning. Integrate with TrackCrate’s existing player, respecting current selection, markers, and playback speed.

Acceptance Criteria
Per-Stem and Full-Mix Delta Solo Toggle
- Per-stem Delta Solo toggling routes only the selected stem(s) to the computed difference (C) signal; all other stems remain in their current A/B/C modes.
- Full-Mix Delta Solo toggle routes the entire mix to the difference (C) signal and overrides any per-stem Delta Solo while active.
- Switching Delta Solo on/off is click/pop-free via a minimum 5 ms crossfade and engages within 50 ms of user action.
- If Take A and Take B are sample-identical for a routed stem or mix segment, the delta output is digital silence (≤ -120 dBFS).
- Channel count and ordering are preserved (e.g., mono stays mono, stereo stays stereo).
- UI state clearly indicates which scope is active (per-stem vs full-mix) and which stems are in Delta Solo.
Delta Scrub and Loop Controls on Delta Timeline
- Scrubbing on the delta waveform updates audible position in real time with latency ≤ 30 ms between cursor movement and audio.
- Drag-select on the delta timeline creates a loopable range with minimum length 50 ms; loop playback is gapless.
- Loop respects selection boundaries and snaps to existing markers when snap is enabled (1, 5, or 10 ms grid).
- Changing loop range during playback takes effect on the next loop boundary with drift ≤ 10 ms.
- Scrub and loop operate on the delta signal when Delta Solo is active; otherwise they operate on the currently selected A/B mode.
Keyboard Shortcuts for Delta and Loop
- Pressing D toggles Delta Solo for the current scope (per-stem if a stem header is focused; otherwise full-mix) and is ignored when focus is in a text-input field.
- Pressing L toggles loop on the current selection; if no selection exists, a default 2 s loop centered at the playhead is created.
- Shift+Left/Right adjusts selection edges by ±100 ms; Ctrl/Cmd+L clears the loop.
- Shortcuts are displayed in the player help overlay and can be discovered via the ? shortcut.
Synchronized Delta Waveform and Spectrogram Visualization
- Delta waveform renders aligned to the main timeline with playhead sync error ≤ 1 frame at 60 FPS.
- Optional spectrogram toggle displays a 20 Hz–20 kHz log-frequency spectrogram of the delta signal with ≥ 43 Hz time resolution and ≥ 48 bins per octave.
- Spectrogram highlights frequency bands with the greatest changes (top 10% energy delta) using a distinct overlay and legend.
- Zooming and panning apply equally to delta and main waveforms so their views remain synchronized.
- Rendering a 5-minute stereo delta at 44.1 kHz initializes within 400 ms and updates progressively without UI frame drops > 16 ms.
A/B/C Modes with Gain Normalization and Peak Warning
- A/B/C mode buttons switch within 50 ms using a 5 ms crossfade; C corresponds to the computed difference between B and A.
- Safe gain normalization targets −16 LUFS integrated for A, B, and C within ±1 LU; normalization can be toggled off.
- If any mode’s true peak exceeds −1 dBTP, a peak warning indicator appears; enabling auto-attenuation reduces gain to keep peaks ≤ −1 dBTP.
- Mode changes preserve relative phase and stereo image; null tests remain valid when normalization is disabled.
- Selected mode persists per track during the session.
Integration with Selection, Markers, and Playback Speed
- Delta mode respects current time selection and markers; jumping to markers retains delta state and selection.
- Playback speed control (0.5×–2.0×) applies equally to A, B, and C; when global pitch correction is enabled, delta follows the same setting.
- Transport controls (play/pause, rewind, fast-forward) behave identically across modes; playhead position is preserved when switching modes.
- Marker context menus and edits remain functional when delta mode is active.
- Undo/redo includes delta-related actions (toggle, loop set/clear, selection adjust).
Latency and Phase-Accurate Delta Computation
- Delta is computed with sample-accurate alignment using stored take offsets and plugin delay compensation; alignment error ≤ 1 sample.
- For stems with missing regions in either take, silence is substituted so delta reflects only true differences.
- For multi-channel stems, channel mapping is preserved and delta is computed per channel; mid/side or surround layouts maintain crosstalk ≤ −90 dB.
- When playback speed time-stretches A and/or B, delta computation uses the same algorithm to maintain phase coherence.
- Formal null test: when B is summed with inverted A at equal gain, C outputs digital silence (≤ −120 dBFS).
Delta Clip Export with Timestamped Notes
"As an artist, I want to export small delta snippets with notes so that my team can hear exactly what changed at specific moments and respond faster."
Description

Enable selection of time ranges on the delta timeline and export short clips (e.g., WAV/MP3/OGG) with embedded or attached timestamped notes. Auto-create a TrackCrate shortlink for each export, apply watermarking and expiry per workspace policy, and attach the clip and notes to the associated version thread for context. Include an optional lightweight web preview player for recipients without account access, with view/download analytics routed to the existing link tracking system.

Acceptance Criteria
Accurate Delta Range Selection and Rendering
Given a project with at least two takes and Delta Solo active, When the user drags on the delta timeline to select a range of 0.5–60.0 seconds, Then the selection snaps to the project grid (bar/beat or time) and displays start/end and duration with millisecond precision (±1 ms). Given a selected stem or full mix, When Export is initiated, Then the rendered clip contains only the difference signal between the chosen takes within the selection and null-tests to at least -60 dBFS against the original differences. Given a selected loop, When the user enables Loop Preview, Then playback loops seamlessly with ≤5 ms crossfade at the boundaries.
Export Formats, Channels, and File Naming
Given a valid selection, When the user selects WAV 24-bit 48 kHz, MP3 320 kbps, or OGG 192 kbps, Then the exported file matches the chosen format and preserves the channel count of the source (mono/stereo) without downmix. Given a valid selection, When the file is exported, Then its filename follows {project}-{versionA}_vs_{versionB}-delta-{stem|mix}-{start}-{end}.{ext} using ISO-8601 timestamps and seconds to 3 decimals. Given DC offset is present in the delta, When exporting, Then DC is removed and peak is normalized to -1.0 dBFS unless the user disables normalization in export options.
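The filename pattern above can be sketched as a small builder. This is illustrative: the spec's ISO-8601 timestamps are simplified here to plain seconds rendered to 3 decimals, and the function name is hypothetical:

```python
def delta_clip_name(project, version_a, version_b, scope, start_s, end_s, ext):
    """Builds {project}-{versionA}_vs_{versionB}-delta-{stem|mix}-{start}-{end}.{ext}.

    `scope` is either a stem name or "mix"; start/end are offsets in
    seconds, formatted to 3 decimal places per the criteria.
    """
    return (f"{project}-{version_a}_vs_{version_b}-delta-{scope}"
            f"-{start_s:.3f}-{end_s:.3f}.{ext}")
```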
Workspace Watermarking and Link Expiry Enforcement
Given a workspace with watermarking enabled, When a delta clip is exported, Then an audible watermark pattern defined by the workspace is applied at the configured interval and level within ±0.5 dB. Given a workspace expiry policy (e.g., duration, views, passcode), When the shortlink is created, Then the link enforces the policy; after expiry, preview and download return HTTP 410 and are inaccessible. Given watermarking is disabled in policy, When exporting, Then no watermark is applied and this state is recorded in the export metadata.
Timestamped Notes Creation, Embedding, and Attachment
Given the user adds note entries during selection (min 0, max 50), When exporting, Then each note stores a timestamp relative to clip start (mm:ss.mmm) and an optional absolute project timecode. Given export format WAV, When exporting with "Embed notes" enabled, Then notes are embedded as RIFF LIST/INFO and iXML/bext chunks; for MP3/OGG, notes are embedded as ID3 TXXX frames or Vorbis comments. Given "Attach notes file" is enabled, When exporting, Then JSON and CSV sidecars containing notes, timestamps, and version IDs are generated and attached to the export record.
Shortlink Creation, Web Preview, and Analytics
Given an export completes, When the system creates a TrackCrate shortlink, Then the slug is unique, HTTPS, and resolves in <300 ms p95 to a lightweight preview page that streams the clip with play/pause and loop controls. Given a recipient without a TrackCrate account, When they open the shortlink before expiry, Then they can preview in-browser and download if allowed by policy, without access to other project assets. Given a preview view or download occurs, When analytics events fire, Then view and download counters increment in the existing tracking system with timestamp, IP (hashed per policy), user agent, and referrer.
Attachment to Version Thread and Notifications
Given an export was initiated from Version Thread X, When the export finishes, Then the clip and notes are attached to Thread X as a single message with the shortlink, filename, duration, and note count. Given project watchers are enabled, When the export is attached, Then watchers receive a notification (in-app and email) including the shortlink and summary, adhering to user notification preferences. Given the thread is viewed later, When opening the export message, Then the embedded web preview loads inline and displays analytics counters (views/downloads) current within the last 5 minutes.
Version, Metadata, and Permissions Integration
"As a label admin, I want deltas and their shares to inherit our rights and access controls so that sensitive materials remain protected while still enabling efficient review."
Description

Bind delta computations and exports to TrackCrate’s versioning model so that each delta is traceable to specific asset versions and stems. Enforce workspace permissions and rights metadata, marking delta artifacts as derived, non-distributable assets by default. Ensure private stem player restrictions apply to delta audition and shares. Record an audit trail (who compared what and when) and surface change summaries in the release timeline for holistic visibility without exposing protected source content.

Acceptance Criteria
Delta Artifact Traceability to Source Versions and Stems
Given a workspace member with permission to view two specific versions of a stem or full mix And version A and version B exist with unique IDs and checksums When the member runs Delta Solo between version A and version B for the selected stem or full mix Then the system creates a delta artifact with metadata fields: delta_id, source_asset_type (stem|mix), stem_id (nullable), from_version_id, to_version_id, from_checksum, to_checksum, created_by, created_at And the delta artifact appears in the release's asset tree under "Derived > Deltas" And the API delta detail endpoint returns those fields And deleting or renaming the source assets does not break the stored references (immutable IDs preserved)
Derived and Non-Distributable Rights Enforcement on Deltas
Given rights metadata on the source assets (rights_owner, territory, usage, PII flags) When a delta artifact is created Then the delta is marked is_derived = true and distributable = false by default And rights metadata is inherited from both sources; if any source is restricted, the delta is at least as restrictive And download and public share actions for the delta are disabled for roles without "Override Distribution Lock" And attempting to enable distribution without the privilege returns 403 and is logged
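One way to read the "at least as restrictive" rule is a merge that ORs restrictive flags and intersects permissive fields. A sketch, with field semantics beyond those named in the criterion treated as assumptions:

```python
def derive_delta_rights(source_a: dict, source_b: dict) -> dict:
    """Merge rights metadata so the delta is at least as restrictive as
    either source (sketch; exact field semantics are assumptions)."""
    return {
        "is_derived": True,
        "distributable": False,  # locked by default per the criterion
        "pii": source_a["pii"] or source_b["pii"],  # any PII flag propagates
        "territory": sorted(set(source_a["territory"]) & set(source_b["territory"])),
        "rights_owner": sorted({source_a["rights_owner"], source_b["rights_owner"]}),
    }
```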
Private Stem Player Restrictions Apply to Delta Audition and Shares
Given the workspace stem player is set to Private (members only) and external preview is disabled When a member generates a delta and creates a shortlink Then the shortlink requires authentication and role checks equivalent to the source stem's settings And the delta stream is watermarked with the workspace watermark profile And non-members opening the link see an access denied screen without revealing filenames, durations, or waveforms
Audit Trail for Delta Computation and Exports
Given any delta compute or export action occurs When the action completes (success or failure) Then an immutable audit entry is written with: actor_user_id, action_type (compute|export), target (stem_id or mix_id), from_version_id, to_version_id, delta_id (if created), timestamp, client_ip, user_agent, export_recipients (if any), notes And administrators can query and filter audit entries by release, actor, date range, and action_type And audit entries cannot be edited or deleted via the API or UI
Release Timeline Change Summaries Without Exposing Protected Audio
Given a delta is created within a release When the release timeline is viewed by a user without permission to access the source audio Then the timeline shows a non-audio summary card containing: delta label, redacted source asset labels, from_version_id, to_version_id, duration, created_by, created_at, and note snippets And no audio playback controls, waveforms, or download buttons are rendered And authorized users see full labels and can audition per their permissions
Delta Export Policy: Expiry, Watermark, and Recipient Logging
Given the workspace has default link expiry (e.g., 7 days) and watermark settings When a user exports a delta clip with timestamped notes Then the generated asset is stored as derived and non-distributable, and the shortlink inherits the workspace default expiry unless explicitly overridden by a privileged role And the exported media is watermarked per workspace policy And all recipients, expiry, and access events are recorded and visible in the delta’s detail view
Permission Gate on Delta Generation Requires Access to Both Source Versions
Given a user lacks access to one of the two selected versions or lacks export permission in the target release When the user attempts to run Delta Solo Then the operation is blocked with 403 Forbidden and a generic message that does not disclose unauthorized asset details And no delta artifact is created and no partial metadata is persisted And the attempt is written to the audit log
Performance, Streaming, and Caching Pipeline
"As a remote collaborator on limited hardware, I want responsive delta audition without long waits so that I can review changes during a meeting or on the go."
Description

Provide a hybrid client/server processing path that performs chunked, streamable delta computation with WASM-accelerated DSP on the client when feasible, and falls back to a server worker for large mixes. Cache results by asset/version hash and alignment manifest to deliver near-instant replays. Precompute deltas for frequently compared pairs in the background. Ensure low-latency scrubbing through buffered windows and prioritize interactive playback over background exports. Expose health metrics and graceful degradation when resources are constrained.

Acceptance Criteria
WASM Client Streaming With Automatic Server Fallback
Given a device with WASM SIMD and Threads enabled, 4+ logical cores, and a supported browser When the user opens Delta Solo on a pair with ≤ 24 stems and ≤ 10 minutes each Then delta computation runs client-side in 512 ms chunks with per-chunk compute time ≤ 25 ms and first audible output ≤ 800 ms from user action And the output nulls against a reference offline render to below −50 dB across non-edit regions Given a device lacking WASM SIMD/Threads support or an estimated client CPU > 60% for the session When the user opens Delta Solo Then the system selects server-worker mode within 300 ms and begins streaming audio within 1,500 ms with no more than one buffer underrun in the first 10 seconds Given an active session in client or server mode When network connectivity drops for ≤ 2 seconds Then playback resumes automatically within 500 ms using buffered chunks without losing timeline position
Content-Addressed Caching by Asset/Version and Alignment Manifest
Given two assets with version hashes A and B and an alignment manifest hash M When Delta Solo is executed Then a cache key K = hash(A,B,M,processing-params) is generated and the delta segments are stored under K with metadata (duration, sample rate, chunk size) Given the same A, B, M, and processing params within TTL (7 days client, 30 days server) When the user reopens Delta Solo Then the system returns a cache hit and first audible output ≤ 300 ms without recomputation Given any change to A, B, M, or processing params When Delta Solo is executed Then the previous cache entries are not reused and a new key K' is created Given the cache under sustained use When 10,000 repeated comparisons of the same pairs are performed in test Then observed cache hit rate ≥ 90% and no key collisions are detected in 1,000,000 key-generation fuzz trials Given storage pressure exceeding the configured limit When eviction runs Then least-recently-used entries are evicted first without impacting currently buffered playback
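The key derivation K = hash(A, B, M, processing-params) can be sketched as a digest over a canonical encoding, so that any change to either version hash, the alignment manifest, or the parameters necessarily produces a different key:

```python
import hashlib
import json

def delta_cache_key(hash_a: str, hash_b: str, manifest_hash: str, params: dict) -> str:
    """Content-addressed cache key over both version hashes, the alignment
    manifest hash, and the processing parameters (a sketch; the canonical
    JSON encoding is an assumption)."""
    canonical = json.dumps(
        {"a": hash_a, "b": hash_b, "m": manifest_hash, "p": params},
        sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Keeping A and B in order makes the key direction-sensitive, which matches a delta that compares "from" against "to" rather than an unordered pair.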
Background Precomputation of Frequently Compared Pairs
Given a pair is opened ≥ 3 times within 24 hours or explicitly starred by the user When the app is idle (no playback/scrub) and system CPU < 40% Then the pair is queued for background precompute within 2 minutes and processed at ≤ 1 job per core (max 2 concurrent) Given a precomputed pair exists in cache When the user opens Delta Solo for that pair Then first audible output ≤ 250 ms and no DSP recomputation occurs on the critical path Given playback begins while a background precompute is running When resource contention occurs Then the precompute job yields within 200 ms and resumes automatically after playback stops Given the device is on battery below 20% When background precompute is scheduled Then jobs are deferred until power is connected or battery ≥ 40% unless the user overrides in settings
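The precompute gating above boils down to a small eligibility check. A sketch under stated assumptions; note the 20% → 40% battery recovery is a hysteresis that needs caller-side state and is not modeled here:

```python
def should_precompute(opens_24h: int, starred: bool, idle: bool, cpu_pct: float,
                      on_battery: bool, battery_pct: float,
                      user_override: bool = False) -> bool:
    """Gate a background precompute job per the criteria above (sketch).

    Eligibility: the pair was opened >= 3 times in 24 h or is starred; the
    app is idle with system CPU < 40%; on battery below the 20% floor, jobs
    defer unless the user overrides in settings.
    """
    if not (starred or opens_24h >= 3):
        return False
    if not idle or cpu_pct >= 40:
        return False
    if on_battery and battery_pct < 20 and not user_override:
        return False
    return True
```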
Low-Latency Scrubbing with Buffered Windows
Given delta data is available (live or cached) When the user scrubs or hovers to a new playhead position Then audible delta starts ≤ 100 ms from pointer release and at least 2.0 s of audio is prebuffered on both sides of the playhead Given continuous shuttle or loop over a region When crossing chunk boundaries Then crossfades eliminate clicks/pops and zero underruns occur over 10 minutes of stress scrubbing in test Given bandwidth or CPU constraints are detected (buffer < 25% for > 1 s) When scrubbing continues Then the system increases chunk size up to 2048 samples and reduces analyzer/visualization frequency to maintain glitch-free audio, while displaying a “Degraded for real-time” indicator
Interactive Playback Priority Over Background Exports
Given a background export of delta clips is in progress When the user presses Play or initiates scrubbing Then playback achieves ≤ 100 ms start latency and 0 glitches during the interaction window, and the export throughput may reduce but never stalls for > 5 s Given multiple exports are queued When interactive playback is active Then export worker priority is lowered and queue order is preserved; no export fails due to starvation and each export makes forward progress at least every 10 s Given playback ends or the app is idle for ≥ 3 s When background tasks are pending Then export workers automatically ramp back to full throughput within 1 s
Health Metrics and Graceful Degradation
Given the system is running in client or server mode When observing the metrics overlay or /metrics endpoint Then the following are reported at 1 s intervals: mode (client/server), cache hit rate, buffer fill %, underrun count, average chunk compute time, fallback reason, precompute queue depth, and export worker state Given CPU > 70% for > 3 s or underruns ≥ 2 in 30 s When degradation policy triggers Then precompute concurrency is set to 0, chunk size increases one step (up to 2048), analyzers are throttled to ≤ 5 Hz, and a user-visible banner explains the active degradation Given resources recover (CPU < 50% and 0 underruns for 30 s) When the system reassesses Then prior quality levels are restored stepwise without causing audible artifacts and the banner is dismissed

Band Focus

Filter the diff by frequency band or instrument range to zero in on issues (e.g., low-end, vocal sibilance, air). Use smart presets or draw custom bands to evaluate targeted fixes without distraction from the rest of the mix.

Requirements

Interactive Band Selector
"As a mixing engineer collaborating remotely, I want to draw and adjust a frequency band on the diff so that I can isolate and assess changes without the rest of the spectrum masking issues."
Description

Provide an EQ-style interface in the Diff player that lets users define one or more focus bands by clicking, dragging, or typing values (center frequency, bandwidth/Q, and slope). Include snap-to musical notes and standard bands, draggable handles, keyboard nudging, and visual overlays that clearly indicate active ranges. Persist selections per user and project with undo/redo and tooltips. Integrate directly with TrackCrate’s version compare and stem player, supporting stereo/mono files and multiple sample rates without reloading the track.

Acceptance Criteria
Create and Adjust Focus Band via Drag Handles
Given the Diff player is loaded and playing audio, when the user click-drags a band’s center handle horizontally, then the center frequency changes continuously within the 20 Hz–20 kHz range and the audio focus updates in under 100 ms. Given a band’s edge handles are drag-resized, when the user drags inward or outward, then the Q (bandwidth) adjusts accordingly and the overlay redraws smoothly at 60 fps on a 1080p display. Given the slope dropdown is opened, when the user selects 6, 12, 18, or 24 dB/oct, then the band’s slope is applied immediately to the audio focus and displayed in the tooltip. Given the user presses Esc during a drag, when the drag is canceled, then the band reverts to its pre-drag values. Given the user drags a handle beyond limits, when the boundary is reached, then the handle snaps to the min/max and does not exceed constraints.
Type-to-Set with Snap-to Notes and Standard Bands
Given a band is selected, when the user types a center frequency value (e.g., "440", "440 Hz", "1k", "1000 Hz"), then the band centers at that frequency within ±1 Hz accuracy. Given snap-to-notes is enabled, when the user types a musical note (e.g., "A4", "C#3") or drags near a note, then the center frequency snaps to the nearest equal-tempered note within 10 cents. Given standard band snapping is enabled, when the user toggles ISO 1/3-octave centers, then drag or typed values snap to {31.5, 40, 50, 63, 80, 100, 125, 160, 200, 250, 315, 400, 500, 630, 800, 1k, 1.25k, 1.6k, 2k, 2.5k, 3.15k, 4k, 5k, 6.3k, 8k, 10k, 12.5k, 16k} Hz. Given a band is selected, when the user types Q or slope values (e.g., "Q 1.2", "12 dB/oct"), then the band updates accordingly and invalid entries are rejected with an inline message.
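The typed-input parsing and both snap modes above can be sketched directly; the helper names are hypothetical, and note-to-frequency conversion assumes standard A4 = 440 Hz equal temperament:

```python
import math
import re

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
ISO_THIRD_OCTAVE = [31.5, 40, 50, 63, 80, 100, 125, 160, 200, 250, 315, 400,
                    500, 630, 800, 1000, 1250, 1600, 2000, 2500, 3150, 4000,
                    5000, 6300, 8000, 10000, 12500, 16000]

def parse_freq(text: str) -> float:
    """Parse typed input like '440', '440 Hz', '1k', '1000 Hz'."""
    m = re.fullmatch(r"\s*([\d.]+)\s*(k)?\s*(hz)?\s*", text, re.IGNORECASE)
    if not m:
        raise ValueError(f"unrecognized frequency: {text!r}")
    return float(m.group(1)) * (1000.0 if m.group(2) else 1.0)

def note_to_freq(name: str, a4: float = 440.0) -> float:
    """Equal-tempered frequency for a typed note name like 'A4' or 'C#3'."""
    m = re.fullmatch(r"([A-G]#?)(-?\d+)", name)
    if not m:
        raise ValueError(f"unrecognized note: {name!r}")
    midi = NOTE_NAMES.index(m.group(1)) + 12 * (int(m.group(2)) + 1)
    return a4 * 2.0 ** ((midi - 69) / 12.0)

def snap_to_note(freq: float, a4: float = 440.0) -> float:
    """Snap to the nearest equal-tempered note (the 10-cent window in the
    criterion is a UI threshold on top of this)."""
    semis = round(12.0 * math.log2(freq / a4))
    return a4 * 2.0 ** (semis / 12.0)

def snap_to_iso_center(freq: float) -> float:
    """Snap to the nearest ISO 1/3-octave center, compared on a log scale."""
    return min(ISO_THIRD_OCTAVE, key=lambda c: abs(math.log2(freq / c)))
```

Comparing candidates on a log scale matters for the ISO snap: linear distance would bias snapping toward the lower neighbor at high frequencies.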
Multi-Band Creation, Activation, and Solo/Mute
Given no bands exist, when the user clicks Add Band or double-clicks the graph, then a new band is created at the clicked frequency with default Q=1.0 and slope=12 dB/oct. Given multiple bands exist, when the user adds bands, then up to 8 concurrent bands can be active; attempts to add a ninth display a non-blocking notice. Given multiple bands are toggled active, when playback continues, then only the union of active bands is audible in the focus output with no audible artifacts and <1 dB ripple outside passbands. Given a band’s Solo is toggled, when Solo is on for one band, then only that band is audible and others are muted without stopping playback. Given a band is deleted, when the user presses Delete or clicks the trash icon, then the band and its overlay are removed and the audio focus updates in under 100 ms.
Keyboard Nudging and Full Undo/Redo
Given a band handle is selected, when the user presses Left or Right, then the center frequency nudges by 1 Hz (Alt=10 Hz, Shift=100 Hz) without interrupting playback. Given a band handle is selected, when the user presses Up or Down, then Q changes by 0.1 per keypress (Alt=0.01, Shift=0.5) within the 0.1–18 range. Given the slope selector has focus, when the user presses Up or Down, then slope cycles through 6, 12, 18, and 24 dB/oct. Given the user performs a sequence of edits, when Ctrl/Cmd+Z is pressed, then the last edit is undone; when Ctrl/Cmd+Shift+Z is pressed, then the last undone edit is redone. Given the editor remains open, when up to 50 band-related actions are performed, then the undo history retains all 50 actions in order for the current session.
Per-User, Per-Project Persistence
Given a signed-in user with an open project, when the user creates or edits focus bands, then the configuration auto-saves within 2 seconds of the last change. Given the same user reopens the project on any device, when the Diff player loads, then the previously saved bands, their active/mute state, and snap settings are restored. Given a different user opens the same project, when they load the Diff player, then they see their own last-saved configuration, not another user’s configuration. Given the user switches between projects, when they return to the original project, then the band configuration for each project remains isolated and unchanged.
Seamless Integration with Version Compare and Stem Player
Given version A/B are available, when the user switches between versions during playback, then focus bands remain applied consistently to both versions without an audible dropout or a media reload. Given the stem player is active, when the user solos or mutes stems, then the active focus bands continue to filter the stem output with no added latency greater than 10 ms. Given files at 44.1, 48, and 96 kHz are compared, when bands are applied, then a 1 kHz center measures within ±1 Hz regardless of sample rate. Given mono and stereo sources are loaded, when bands are applied, then behavior is identical across channels with no channel imbalance greater than 0.2 dB introduced by the focus processing.
Visual Overlays, Handles, and Tooltips
Given one or more bands are active, when the graph renders, then active ranges are shaded with 30–50% opacity and a distinct color per band; inactive bands use 15% opacity. Given the user hovers over a handle or band, when the tooltip appears, then it shows center frequency (Hz and musical note), Q, and slope within 150 ms and with a contrast ratio of at least 4.5:1. Given bands overlap, when overlays render, then the band under the pointer is highlighted with a 2 px outline, and non-target bands are dimmed to avoid ambiguity. Given the user resizes the player, when the viewport changes, then overlays and handles remain aligned to within 2 px of their corresponding frequencies and Q bandwidths.
Smart Presets Library
"As a producer on deadline, I want one-click presets for common problem areas so that I can quickly focus my listening without manual setup."
Description

Offer a curated set of one-click focus presets (e.g., Low-End 20–120 Hz, Vocal Presence 2–5 kHz, Sibilance 5–9 kHz, Air 10–16 kHz, Kick, Snare, Bass, Guitar) accessible from the Band Focus UI. Each preset defines center frequency, bandwidth, and slope, adapts to the file’s sample rate, and can be versioned and managed centrally. Allow quick switching between presets for comparative listening and show an inline preview of the covered range. Integrate with project defaults and appear contextually when relevant stems are active.

Acceptance Criteria
Preset Library Visible and Selectable in Band Focus
Given the Band Focus UI is open When the user opens the Presets menu Then the following curated presets are displayed in the list: "Low-End (20–120 Hz)", "Vocal Presence (2–5 kHz)", "Sibilance (5–9 kHz)", "Air (10–16 kHz)", "Kick", "Snare", "Bass", "Guitar" And when a preset is selected, the focus band is applied within 100 ms without audio dropout or click And the active preset is visually indicated and persists across play, pause, and seek within the current session
Preset Parameter Application and Sample Rate Adaptation
Given audio files at sample rates 44.1, 48, 88.2, and 96 kHz When any preset is applied Then the preset’s center frequency, bandwidth, and slope map to target values within ±1% frequency tolerance And the realized slope matches the preset value (12/24/48 dB per octave) within ±2 dB at the specified corner frequencies And when the session sample rate changes, the preset retunes within 200 ms without an audible artifact (no transient > -40 dBFS)
Centralized Preset Versioning and Rollback
Given a central library preset at version X exists When an admin publishes version X+1 with changes Then projects see an update notification with version number and summary And a project can opt-in to update, after which the preset version X+1 is stored in the project state And the project can roll back to version X within the same UI And existing sessions pinned to version X remain unchanged until update is confirmed And an audit entry is recorded with user, timestamp, from-version, and to-version
Fast A/B Switching Between Presets
Given two presets have been selected sequentially When the user toggles between the last two presets via keyboard shortcut or UI Then the switch occurs within 100 ms with a 5–20 ms crossfade And output level is gain-matched within ±0.5 dB across switches And the A/B state persists during transport play, pause, and loop
Inline Frequency Range Preview Accuracy
Given the presets menu is open When a preset is hovered or selected Then an inline frequency overlay displays the covered range with labeled bounds matching the preset parameters And overlay boundaries align to the frequency grid within ±1 px at 100% UI scale (±2 px at ≥200% scale) And the preview updates within 50 ms on selection change and hides when Band Focus is disabled
Contextual Preset Surfacing by Active Stem Type
Given the active stem has instrument metadata (e.g., vocal, drums, bass, guitar) When the presets menu opens Then presets relevant to the active instrument are pinned in the top section (minimum 3 suggestions when available) And a "Why suggested" tooltip is available on pinned presets And the full preset list remains accessible without additional clicks And when no metadata exists, the default curated ordering is shown
Project Default Preset Integration
Given a project default preset has been set When Band Focus is opened on any track within the project Then the project default preset is auto-selected And users with edit permission can set or clear the project default via "Set as Project Default" And the default persists across sessions and devices And if the default preset has a newer central version, the project remains on the pinned version until explicitly updated by a user
Real-time Bandpass Diff Processing Engine
"As a mastering engineer, I want accurate, low-latency band isolation during A/B so that I can make confident decisions about targeted fixes."
Description

Implement a low-latency DSP engine that applies band-pass (and optional multiband) filtering to both versions during diff playback while maintaining sample-accurate sync. Provide minimum-phase (zero-latency) and optional linear-phase modes, automatic gain compensation to avoid loudness bias, and CPU-efficient processing via WebAssembly in web and native libraries in desktop apps. Support 44.1–192 kHz, float pipelines, loop playback, and seamless bypass. Include overload detection with graceful fallback and bit-depth/sample-rate conversion paths that preserve timing.

Acceptance Criteria
Min-Phase Zero-Latency Sync in Diff Playback
Given two aligned versions (A and B) are playing in diff mode with minimum-phase band-pass enabled on both signals at any supported sample rate When the band-pass is engaged, disengaged, or its parameters (center frequency, bandwidth/Q) are adjusted during playback Then added algorithmic latency = 0 samples (<= 1-sample total path tolerance), and inter-version alignment error measured by cross-correlation remains <= 1 sample over 10 minutes of continuous playback And no samples are inserted or dropped; glitch/underrun count = 0; no discontinuity spike exceeds -80 dBFS within 10 ms of a parameter change And with global bypass enabled, output matches input within floating-point precision (peak absolute error < 1e-7), confirming transparent bypass path
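A minimum-phase band-pass of this kind is typically a biquad, which adds no algorithmic latency because it is a direct recursive filter. A sketch using the RBJ Audio-EQ-Cookbook constant-0 dB-peak-gain band-pass design (a standard reference formula, not necessarily the engine's actual filter), plus a response probe of the kind the ±0.25 dB check elsewhere in these criteria would use:

```python
import cmath
import math

def bandpass_biquad(fc: float, q: float, fs: float):
    """RBJ cookbook band-pass (constant 0 dB peak gain), normalized to a0 = 1."""
    w0 = 2.0 * math.pi * fc / fs
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha
    b = [alpha / a0, 0.0, -alpha / a0]
    a = [1.0, -2.0 * math.cos(w0) / a0, (1.0 - alpha) / a0]
    return b, a

def biquad_gain(b, a, f: float, fs: float) -> float:
    """Magnitude response at frequency f, evaluated on the unit circle."""
    z1 = cmath.exp(-2j * math.pi * f / fs)
    return abs((b[0] + b[1] * z1 + b[2] * z1 * z1) /
               (a[0] + a[1] * z1 + a[2] * z1 * z1))
```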
Linear-Phase Mode with Accurate Latency Reporting & Compensation
Given linear-phase mode is enabled for the active band-pass filters during diff playback When playback starts or filter parameters change Then the engine reports exact processing latency in samples via API before audio for that block is rendered And with host latency compensation applied, inter-version alignment error remains <= 1 sample over 10 minutes And the magnitude response of each band matches its design within ±0.25 dB across the passband, with stopband attenuation meeting or exceeding the specified target in the preset/design
Multiband Operation and Band Definition Accuracy (Presets & Custom)
Given up to 5 concurrent bands are configured via a preset or user-drawn custom bands When those bands are applied to both versions during diff playback Then each band's center frequency error <= ±1% and bandwidth/Q error <= ±5% of requested values across all supported sample rates And combined passband ripple from multiple bands is <= 0.5 dB; enabling/disabling an individual band is click-free (no spikes > -80 dBFS) and takes effect within 1 audio buffer And preset selections map to documented frequency ranges within the above tolerances; custom band configurations serialize and deserialize with round-trip parameter error <= 1 LSB of float32 representation
Automatic Gain Compensation Loudness Neutrality
Given automatic gain compensation (AGC) is enabled When the band-pass is engaged/disengaged or the mode switches between minimum-phase and linear-phase during playback Then integrated loudness (EBU R128 LUFS) measured over a 10 s window differs by <= ±0.5 LU from the bypassed signal And true-peak is constrained to <= -1 dBTP without digital clipping; left/right channels remain matched within 0.1 dB And the applied AGC gain (dB) is exposed via API with 0.1 dB resolution and updates within 50 ms of step changes
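The compensation gain itself is a level ratio quantized to the 0.1 dB API resolution named above. A sketch; the spec measures EBU R128 loudness, and plain RMS stands in here purely as a simplification:

```python
import math

def agc_gain_db(band_rms: float, bypass_rms: float, resolution_db: float = 0.1) -> float:
    """Gain (dB) that levels the filtered band against the bypassed signal,
    quantized to the 0.1 dB resolution the criterion exposes via API.
    (Sketch: RMS in place of the R128 integrated loudness the spec requires.)"""
    gain = 20.0 * math.log10(bypass_rms / band_rms)
    return round(gain / resolution_db) * resolution_db
```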
Loop Playback and Seamless Bypass Without Artifacts
Given loop playback with arbitrary loop points is active When the playhead crosses loop boundaries with filtering engaged and when global bypass is toggled during playback Then no additional samples are inserted or dropped at loop boundaries; inter-boundary timing error = 0 samples And no audible artifacts at boundaries: transient energy < -80 dBFS relative to program RMS; DC offset step < 1e-4 FS And bypass toggling applies within 1 audio buffer without clicks; when bypassed, output equals input within -120 dBFS peak error across the full pipeline
Sample-Rate/Bit-Depth Conversion with Timing Preservation
Given the two versions use any combination of sample rates in {44.1, 48, 88.2, 96, 176.4, 192} kHz and bit depths {16, 24, 32f}, and the host runs at any supported rate When diff playback begins with band-pass engaged Then resampling and bit-depth conversion preserve timing: cumulative drift between versions <= 1 sample over 10 minutes, with no rebuffering-induced discontinuities And conversion quality meets: noise floor <= -100 dBFS and no resampler alias components above -90 dBFS in the stopband; dithering is applied when reducing to integer bit depths And loop and bypass behavior under converted formats meet the artifact and timing tolerances specified elsewhere in these criteria
Performance Efficiency and Overload Management (WASM/Native)
Given real-time playback in web (WebAssembly with SIMD when available) and desktop native builds at buffer sizes 128–512 samples, stereo, sample rates 44.1–192 kHz, with up to 5 concurrent bands When running for 30 minutes under typical user interaction (parameter tweaks, preset changes, looped playback) Then audio callback deadline misses = 0; average DSP time per callback <= 40% of the buffer interval; p99 <= 70% And peak engine memory usage <= 64 MB with leak rate < 1 MB/hour; initialization time <= 200 ms for first playback And if p99 exceeds 85% or any underrun is detected, overload detection triggers within 100 ms to apply graceful fallback (e.g., switch to min-phase, reduce band count) without audible dropouts; automatic recovery to full quality occurs when headroom > 30% for 5 s; all overload/fallback events are emitted via API for telemetry
Instrument Range Mapping from Stems Metadata
"As a collaborator reviewing stems, I want Band Focus to suggest ranges based on the stem I’m auditioning so that I don’t have to remember typical frequency ranges for each instrument."
Description

Leverage TrackCrate stem names and tags (e.g., Lead Vox, Kick, Bass, Guitar) to suggest relevant Band Focus presets and auto-highlight typical instrument ranges. Provide a "Follow selected stem" option that updates the focus band when the user switches stems in the player. Use a configurable mapping table (no ML required) and handle incomplete metadata gracefully. Ensure recommendations are non-blocking, dismissible, and logged for analytics to refine mappings over time.

Acceptance Criteria
Preset Suggestions from Stem Tags
- Given a track in TrackCrate with stems containing names/tags that match the mapping table (e.g., "Lead Vox", "Kick", "Bass", "Guitar"), when the user opens the Band Focus panel for a selected stem, then the system suggests 1–3 Band Focus presets relevant to that stem’s mapped instrument within 200 ms.
- Suggestions are ordered by the mapping table's priority weight for that instrument.
- Each suggested preset is labeled "Suggested" and is visually distinct without stealing focus; no modal or blocking overlay is shown.
- The currently active focus band is not changed by suggestions unless the user explicitly clicks a suggested preset.
Auto-highlight Typical Instrument Range
- Given a stem is selected whose instrument is found in the mapping table, when the Band Focus panel is opened or the stem selection changes with Follow selected stem off, then an outline/overlay highlights the instrument’s typical frequency range (min/max from mapping table) within 150 ms.
- The highlight does not modify the effective filter until the user applies it (clicks "Apply Range" or a suggested preset).
- The highlighted range displays numeric bounds in Hz (rounded to nearest 5 Hz below 1 kHz and 10 Hz above) and is fully visible within the spectrum UI.
- If multiple mapped ranges exist for one instrument (e.g., "Kick body", "Kick click"), the primary range per mapping priority is highlighted; alternates are available via a dropdown.
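The display-rounding rule for the numeric bounds is small enough to state exactly (the helper name is hypothetical):

```python
def display_hz(freq: float) -> int:
    """Round a range bound for display: nearest 5 Hz below 1 kHz,
    nearest 10 Hz at and above 1 kHz."""
    step = 5 if freq < 1000 else 10
    return int(round(freq / step) * step)
```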
Follow Selected Stem Toggle Behavior
- Given the "Follow selected stem" toggle is on, when the user switches the selected stem in the player, then the focus band switches to the new stem's mapped range or last-applied preset for that instrument within 150 ms.
- When the toggle is off, switching stems does not change the current focus band.
- If a user manually adjusts the band while Follow is on, the manual band persists until the next stem switch, at which point it updates to the new stem’s mapping.
- The Follow toggle state is persisted per user per project and restored on reload.
Configurable Mapping Table Resolution and Precedence
- Mapping resolution uses case-insensitive exact tag match first, then normalized stem name contains match (stripped of punctuation/whitespace), then fallback instrument group; no machine learning or fuzzy scoring is used.
- Project-level mapping overrides workspace-level, which overrides system default; first match wins and is logged with its source level.
- Updating the mapping table takes effect immediately for new panel openings and stem switches; existing sessions update on next interaction without requiring a page reload.
- A mapping entry consists of: instrument_key, tag_aliases[], preset_ids[], primary_range[min,max] Hz, alternate_ranges[], priority (integer). Validation rejects overlapping keys and invalid ranges.
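The deterministic resolution order can be sketched as below. One reading of the criteria is applied here: both match strategies are tried within each level before falling through to the next level, and fallback-group handling is left to the caller:

```python
import string

_STRIP = str.maketrans("", "", string.punctuation + string.whitespace)

def _norm(s: str) -> str:
    """Lowercase and strip punctuation/whitespace, per the criterion."""
    return s.lower().translate(_STRIP)

def resolve_mapping(stem_name: str, tags: list, tables: list):
    """Resolve a stem to a mapping entry; first match wins.

    `tables` is [(level, entries)] in precedence order (project, workspace,
    default). Returns (entry, level, strategy) or (None, None, 'miss').
    """
    lowered_tags = {t.lower() for t in tags}
    name = _norm(stem_name)
    for level, entries in tables:
        for entry in entries:  # 1) case-insensitive exact tag match
            if lowered_tags & {a.lower() for a in entry["tag_aliases"]}:
                return entry, level, "tag"
        for entry in entries:  # 2) normalized name-contains match
            if any(_norm(a) in name for a in entry["tag_aliases"]):
                return entry, level, "name"
    return None, None, "miss"
```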
Graceful Handling of Incomplete or Unknown Metadata
- Given a stem without recognizable tags or name matches, when the Band Focus panel is opened, then no instrument-specific suggestions are shown; instead, generic presets ("Low End", "Presence", "Air") are offered.
- The UI shows a non-blocking note "No instrument match" with an action "Map tag…" that opens mapping configuration (permissions-gated).
- No errors or console exceptions are produced; the panel remains fully interactive.
- An analytics event "mapping_miss" is logged with anonymized stem metadata.
Non-blocking, Dismissible Recommendations
- Suggested presets and range highlights never prevent other panel interactions; keyboard and pointer inputs remain responsive with a 60 FPS target during rendering.
- Each suggestion chip has a dismiss (×); dismissing hides that suggestion for the current stem for the rest of the session (until reload).
- A "Dismiss all suggestions" action hides all suggestions for the current stem; a "Show suggestions" control restores them within the session.
- Dismissals do not change the active focus band and are logged for analytics.
Analytics Events for Mapping and Interaction
- The system emits the following events: suggestion_shown, suggestion_clicked, suggestion_dismissed, follow_toggle_on, follow_toggle_off, focus_updated_from_follow, mapping_resolved, mapping_miss.
- Each event includes: tenant_id, project_id, track_id, stem_id, user_id (hashed), instrument_key, preset_id (where applicable), mapping_source_level, timestamp (ISO 8601), client_version, and session_id.
- Events are batched and sent within 5 seconds or on panel close, with retry (exponential backoff, up to 3 attempts), and are dropped if offline beyond 60 seconds; the UI remains unaffected.
- Opt-out respects user analytics/privacy settings; when disabled, no events are emitted and functionality remains unchanged.
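The retry behavior above can be sketched as a small delivery helper; `send` and the injectable `sleep` are illustrative names, and the base delay is an assumed value (the spec fixes only the attempt count):

```python
import time

def send_with_retry(send, batch, max_attempts=3, base_delay=0.5, sleep=time.sleep):
    """Try to deliver a batch of analytics events; retry with exponential
    backoff (base, 2x base, ...) up to max_attempts total attempts.
    `send` is expected to raise on failure. Returns True on delivery,
    False if the batch is ultimately dropped."""
    for attempt in range(max_attempts):
        try:
            send(batch)
            return True
        except Exception:
            if attempt < max_attempts - 1:
                sleep(base_delay * (2 ** attempt))
    return False
```

Injecting `sleep` keeps the helper testable and keeps delivery off the UI thread's critical path, matching the "UI remains unaffected" requirement.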
Spectral Diff Visualization
"As an artist giving notes, I want to see where differences spike in the sibilance band so that I can leave precise, timestamped feedback."
Description

Add an optional time–frequency overlay that visualizes differential energy within the selected band as a heatmap above the waveform. Provide adjustable time/frequency resolution, a colorblind-safe palette, transient/sibilance markers, and synced zoom/pan with the transport. Enable click-to-create timestamped comments that capture the active band context. Optimize rendering for smooth playback with minimal GPU/CPU overhead and allow exporting snapshots (PNG) for external review notes.

Acceptance Criteria
Visualize differential energy in a selected low‑end band above the waveform
Given Band Focus is enabled and a frequency band of 20–120 Hz is selected And a reference and current mix are loaded for diff When the user toggles Spectral Diff Visualization on Then a heatmap overlay renders above the waveform constrained to the selected band extents And the overlay encodes differential energy magnitude using the active palette with a visible legend And the overlay updates continuously during playback with ≤16 ms latency to the transport And enabling/disabling the overlay does not shift audio playback timing And the overlay can be toggled without visual artifacts (no flicker, tearing, or blank frames)
Adjust time and frequency resolution to analyze transients vs tonal changes
Given the Spectral Diff heatmap is visible When the user adjusts time resolution within 1–100 ms per bin via the resolution control Then the heatmap re-renders within 150 ms reflecting the new time resolution And axis ticks/tooltips update to show the active time resolution When the user adjusts frequency resolution between 12–96 bins per octave or equivalent Hz steps within the selected band Then the heatmap re-renders within 150 ms reflecting the new frequency resolution And the chosen resolutions persist for the current project/session
Use colorblind‑safe palettes that remain legible in light and dark themes
Given the palette selector is opened When the user switches among at least two colorblind‑safe palettes Then differential energy levels remain distinguishable under simulated deuteranopia, protanopia, and tritanopia And the mid vs max energy colors meet a ≥3:1 contrast ratio in both light and dark UI themes And the selected palette is applied instantly (≤100 ms) and persisted with the project And the palette name is included in snapshot metadata and comment context
Detect and display transient and sibilance markers within the selected band
Given marker overlays are enabled and the Spectral Diff heatmap is visible When transient detection is active Then transient markers appear at attack onsets within ±5 ms of the corresponding waveform peaks where differential energy exceeds a configurable threshold When sibilance detection is active Then sibilance markers render only if the active band overlaps 4–10 kHz and detected bursts exceed threshold over 20–80 ms durations And transient and sibilance markers can be toggled independently without affecting heatmap rendering And markers are keyboard focusable and expose accessible labels describing timecode and type
Maintain synced zoom and pan with the transport and global timeline
Given the user pans or zooms the waveform or scrubs the transport When the timeline view changes Then the heatmap overlay scrolls/zooms in lockstep with the waveform with no perceptible drift (≤1 pixel over 5 minutes of continuous playback) And loop ranges, playhead position, and band extents remain aligned between waveform and heatmap When the user changes the Band Focus range Then the heatmap updates its vertical band bounds within 100 ms without breaking horizontal sync
Create timestamped comments from overlay clicks that capture band context
Given the Spectral Diff heatmap is visible and Band Focus is active When the user clicks on the heatmap or presses the comment shortcut at time t Then a comment is created at timestamp t (±16 ms), anchored to the clicked location And the comment automatically captures active band bounds, time and frequency resolutions, palette, and marker toggles And reopening the comment restores the overlay with the captured context And the comment is saved to the project and visible to collaborators with appropriate permissions
Export a PNG snapshot of the heatmap overlay for external review
Given the Spectral Diff heatmap is visible When the user selects Export Snapshot (PNG) Then a PNG is saved containing the current viewport of waveform + heatmap with legend and timestamps And the PNG uses sRGB color space and at least the viewport pixel resolution (1.0× scale) with an option for 2.0× scale And embedded or sidecar metadata includes UTC timestamp, track/revision IDs, band bounds, time/frequency resolution, palette, and app version And the export completes within 2 seconds for ≤4K viewport on a supported machine and does not interrupt audio playback
Preset Save, Share, and Persistence
"As a label project manager, I want to share a band-focused diff via an expiring link so that reviewers hear exactly what I’m flagging without exposing the entire mix."
Description

Allow users to create and name custom Band Focus presets, choose a scope (private, project, or team), and manage them in a lightweight library. Persist presets server-side, support import/export (JSON), and provide permission-aware sharing via TrackCrate shortlinks with expiring tokens. When opened, the link loads the exact Band Focus state in the diff player. Expose presets internally in AutoKit review views, but exclude them from public press pages by default to prevent unintended exposure.

Acceptance Criteria
Create and Save Custom Band Focus Preset (Private, Project, Team)
Given an authenticated user has configured Band Focus parameters in the diff player When the user saves a preset with a name and selects a scope of Private, Project, or Team Then the preset is stored server-side with all Band Focus parameters (bands, frequency ranges, Q, gain, solo/mute, bypass, order) And the preset appears immediately in the preset selector for the chosen scope And names must be 2-64 characters and contain only letters, numbers, spaces, hyphens, underscores, or periods And saving with a missing or invalid name shows a validation error and does not create the preset And attempting to save a duplicate name within the same scope is rejected with a clear error
Preset Library Management (List, Rename, Duplicate, Delete)
Given a user opens the Preset Library When the library loads Then presets are listed grouped by scope (Private, Project, Team) showing name, owner, and last updated timestamp When the user renames a preset Then the new name is validated against naming rules and saved server-side and the change is reflected in the current session preset selector When the user duplicates a preset Then a new preset is created in the same scope with identical parameters and a distinct name When the user deletes a preset and confirms Then the preset is removed server-side and no longer appears in the library or preset selector on next fetch
Server-Side Persistence and Cross-Device Availability
Given a user has saved presets in any scope they have access to When the user signs in on a different device or browser Then the same presets are available according to scope and permissions without manual import And edits to a preset from one session are reflected in another session after a refresh or next fetch And presets are not lost on local cache clear because they persist server-side
Import and Export Presets (JSON)
Given a user selects one or more presets to export When the user initiates Export Then a JSON file is provided that conforms to the Band Focus Preset schema (name, scope, parameters, schemaVersion) and excludes any access tokens Given a user provides a JSON file that conforms to the schema When the user initiates Import and selects a target scope Then valid presets are created server-side in the chosen scope and appear in the library and selector And any name conflict within the target scope results in a non-destructive imported name that is made unique And invalid entries are skipped with a per-entry error report shown to the user
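The non-destructive rename-on-conflict behavior for imports might look like the following; the " (n)" suffix convention is an assumption, since the spec only requires that the imported name be made unique within the target scope:

```python
def unique_import_name(name: str, existing) -> str:
    """Return `name` unchanged if free in the target scope; otherwise
    append the lowest numeric suffix that makes it unique.
    The "(n)" suffix style is an assumed convention."""
    existing = set(existing)
    if name not in existing:
        return name
    n = 2
    while f"{name} ({n})" in existing:
        n += 1
    return f"{name} ({n})"
```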
Permission-Aware Sharing via Expiring Shortlinks
Given a user has a preset and chooses Share When the user generates a TrackCrate shortlink with an expiration TTL T Then the shortlink includes an expiring token that authorizes read-only access to the preset state And opening the shortlink before T with a valid token succeeds even if the recipient is not a project or team member And opening the shortlink after T returns an expired response and does not load the preset And project or team members with normal access can open the shortlink without exposing additional private presets or assets
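One plausible shape for the expiring read-only token is an HMAC-signed payload carrying the preset id and expiry time; the secret handling and token layout here are assumptions for illustration, not TrackCrate's actual scheme:

```python
import base64
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical; real key lives in a secret store

def make_token(preset_id: str, ttl_s: int, now=None) -> str:
    """Issue a token that authorizes read-only preset access until now + TTL."""
    exp = int(now if now is not None else time.time()) + ttl_s
    payload = f"{preset_id}.{exp}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token: str, now=None):
    """Return the preset_id if the token is authentic and unexpired, else None."""
    try:
        b64, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(b64.encode())
    except Exception:
        return None
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return None  # tampered or foreign token
    preset_id, exp = payload.decode().rsplit(".", 1)
    if (now if now is not None else time.time()) >= int(exp):
        return None  # expired: spec requires an expired response, no preset load
    return preset_id
```

Because the token is self-authenticating and read-only, a non-member recipient can open the shortlink before expiry without being granted any broader project access.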
Open Shared Link Loads Exact Band Focus State in Diff Player
Given a recipient opens a valid preset shortlink When the diff player loads Then the Band Focus preset is applied exactly as saved (bands, frequency ranges, Q, gain, solo/mute, bypass, order) And the preset name and scope are displayed as read-only context And the recipient cannot edit or save over the original preset unless they have appropriate permissions
AutoKit Review Views Include Presets; Public Press Pages Exclude by Default
Given an authenticated team member opens an AutoKit review view for a project When the view loads Then Band Focus presets for the relevant project and team scopes are available in the preset selector And Private presets of the viewer are available only to that viewer Given a public press page is opened When the page loads Then Band Focus presets and preset selectors are not displayed by default and preset state is not exposed

Change Navigator

Auto-generated hotspot markers ranked by change magnitude and type (level, EQ, dynamics, stereo). Jump through them with arrow keys, filter by stem or band, and convert hotspots into to-dos with one click to speed approvals.

Requirements

Change Detection Engine
"As a mixing engineer, I want TrackCrate to automatically detect and score audible changes between versions so that I can jump directly to significant differences without scrubbing the entire track."
Description

Automated analysis service that computes change magnitude and type between any two versions of a mix or stem, tagging time-coded hotspots across level, EQ, dynamics, and stereo width. Processes uploaded stems/mixes on version commit, compares against a selected baseline using windowed feature extraction (LUFS, RMS, crest factor, spectral bands, correlation/width), then normalizes and scores changes. Results are stored as indexed markers linked to assets and stems, enabling downstream UI, filtering, and to-do creation. Runs asynchronously with progress updates, supports multiple sample rates, and respects project permissions. Provides approximate real-time analysis for short files and caching to avoid reprocessing identical pairs. Benefits include eliminating manual A/B scrubbing, standardizing review criteria, and accelerating approvals.

Acceptance Criteria
Auto Analysis on Version Commit with Baseline
Given a project with an accessible baseline version and a new mix or stem is committed, When the commit completes, Then an analysis job is enqueued and associated with the version pair. Given an enqueued analysis job, When processing begins, Then the job status transitions from queued to processing to completed or failed, with timestamps recorded. Given a running analysis job, When the progress endpoint is queried, Then it returns a numeric progress from 0 to 100 and the current phase name until completion. Given analysis completion without errors, Then the results dataset is persisted and retrievable by project id and version pair id.
Windowed Feature Extraction and Multi-Rate Support
Given two audio files with differing sample rates, When analysis runs, Then features are computed on a common timebase without failure and the sample rate difference does not block processing. Given analysis runs, Then per-window features include LUFS, RMS, crest factor, spectral band energies, and stereo correlation/width across the full duration. Given the computed features, Then windows are aligned across the two versions with a consistent hop size and timestamps.
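A sketch of per-window feature extraction on a fixed hop, as described above. Only RMS and crest factor are shown (LUFS and spectral band energies are omitted), and the 400 ms window / 100 ms hop are illustrative values; inputs at different sample rates would first be resampled to a common rate so the two versions share a timebase:

```python
import math

def windowed_features(samples, sr, win_ms=400, hop_ms=100):
    """Compute per-window RMS (dBFS) and crest factor over mono samples
    in [-1, 1]. Window/hop sizes are illustrative, not the product's."""
    win = int(sr * win_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    features = []
    for start in range(0, max(len(samples) - win + 1, 1), hop):
        frame = samples[start:start + win]
        rms = math.sqrt(sum(x * x for x in frame) / len(frame))
        peak = max(abs(x) for x in frame)
        features.append({
            "t_ms": round(start / sr * 1000),   # window start on the common timebase
            "rms_db": 20 * math.log10(rms) if rms > 0 else float("-inf"),
            "crest": peak / rms if rms > 0 else 0.0,
        })
    return features
```

Aligning both versions to the same `t_ms` grid (same hop) is what lets the diff stage subtract features window-by-window.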
Hotspot Tagging, Typing, and Ranking
Given two versions with audible differences, When analysis completes, Then hotspots are created with start_ms, end_ms, type in {level, EQ, dynamics, stereo}, and magnitude_score normalized between 0 and 1. Given a set of hotspots, When retrieved without an explicit sort parameter, Then they are ordered by magnitude_score descending. Given two hotspots with equal magnitude_score, When ranked, Then they are ordered by start_ms ascending. Given two identical files, When analysis completes, Then zero hotspots are produced.
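The default ordering above (magnitude descending, ties broken by start time ascending) reduces to a single composite sort key:

```python
def rank_hotspots(hotspots):
    """Default hotspot order: magnitude_score descending,
    ties broken by start_ms ascending."""
    return sorted(hotspots, key=lambda h: (-h["magnitude_score"], h["start_ms"]))
```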
Indexed Storage and Queryability of Markers
Given stored results, When querying markers by asset id and stem id, Then only markers for those identifiers are returned. Given a time range filter, When querying markers, Then only markers whose intervals overlap the range are returned. Given a band filter (e.g., low, mid, high), When querying markers, Then only EQ-type markers matching the requested bands are returned. Given a sort parameter of time or rank, When querying markers, Then the order of results matches the parameter.
Caching of Identical Version Pairs
Given an existing analysis result for a specific version pair and parameters, When the same request is submitted again, Then the service returns the cached result without starting a new job and records a cache hit. Given any change to either file content or analysis parameters, When a request is submitted, Then a new analysis job is started and the cache is not used.
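Cache lookup can key on the content of both files plus the canonicalized analysis parameters, so that any change to either file or any parameter produces a new key; the exact hashing scheme is an assumption:

```python
import hashlib
import json

def analysis_cache_key(file_a_bytes: bytes, file_b_bytes: bytes, params: dict) -> str:
    """Derive a cache key from both file contents and the analysis
    parameters. sort_keys canonicalizes the params so logically equal
    dicts hash identically."""
    h = hashlib.sha256()
    h.update(hashlib.sha256(file_a_bytes).digest())
    h.update(hashlib.sha256(file_b_bytes).digest())
    h.update(json.dumps(params, sort_keys=True).encode())
    return h.hexdigest()
```

Hashing file content rather than version ids means a re-uploaded byte-identical file still hits the cache, while any edited render misses it, which matches the criteria above.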
Permission Enforcement for Analysis and Results
Given a user without project access, When they attempt to trigger analysis or fetch results, Then the request is denied with 403 and no job is created. Given a user with project read permission, When they fetch results, Then access is granted to markers for that project only. Given a user selecting a baseline for comparison, When listing baselines, Then only baselines within the same project that the user can access are returned.
Approximate Real-Time Analysis for Short Files
Given an audio file whose duration is 30 seconds or less, When analysis runs on the standard processing tier, Then total processing time is less than 1.5 times the audio duration. Given a short-file analysis in progress, When the progress endpoint is polled every second, Then the reported progress increases monotonically and reaches 100% at completion without regressing.
Hotspot Timeline Markers & Ranking
"As a producer, I want a ranked list of change hotspots I can click to audition so that I can quickly assess whether revisions addressed my notes."
Description

Generate and render time-coded hotspot markers on the waveform/timeline for each detected change, with icon/color by type (level, EQ, dynamics, stereo) and a severity score. Provide a synchronized ranked list view so users can click an item to seek to that timestamp. Hover reveals metrics; click opens a detail panel with before/after snapshots and stem context. Integrates with the private stem player, supports zoom, and persists markers per version pair. Data is consumed from the analysis API and updates live as processing completes. Ensures keyboard and mouse parity and includes loading/empty states.

Acceptance Criteria
Waveform Hotspot Rendering by Type and Severity
Given the analysis API returns hotspots for a version pair with types {level, EQ, dynamics, stereo} When the waveform/timeline view is rendered Then exactly the returned number of markers are displayed at their timestamps within ±10 ms And each marker shows the correct icon and color per type mapping And each marker displays a severity score from 0–100 rounded to the nearest integer And markers occurring within 50 ms of each other are visually bundled with a count badge and expand on hover or click
Synchronized Ranked List and Timeline Seeking
Given hotspots are loaded for the current version pair When the ranked list panel is opened Then items are sorted by severity score descending by default And each row displays timestamp (mm:ss.mmm), type, severity score, and stem context And clicking a list item seeks the player and timeline to the item’s timestamp within ±10 ms and highlights the marker And clicking a timeline marker highlights the corresponding list row and scrolls it into view And sorting can be toggled between Severity, Time Asc, and Time Desc without data loss
Hover Metrics and Detail Panel with Before/After
Given a hotspot marker or ranked list item is visible When the user hovers it Then a tooltip appears within 200 ms showing change metrics appropriate to the type (e.g., Level: ΔdB; EQ: affected bands and ΔdB; Dynamics: ΔLUFS/crest; Stereo: Δwidth) And when the user clicks it Then a detail panel opens showing before/after snapshots (waveform or spectrogram) aligned to a ±200 ms window around the timestamp, stem context, and exact metrics And the detail panel loads cached data within 300 ms or shows a non-blocking skeleton until data arrives And the panel can be closed via mouse click on Close or by pressing Esc
Keyboard and Mouse Parity
Given a user relies on keyboard-only input When navigating hotspots Then Left/Right Arrow keys move to previous/next hotspot and update both timeline and list selection And Enter or Space opens the detail panel for the focused hotspot; Esc closes it And Tab order includes all interactive timeline markers and ranked list items with visible focus outlines And all interactions available via mouse are achievable via keyboard, meeting WCAG 2.1 AA for focus visibility and ARIA labeling
Zoom Behavior and Persistence per Version Pair
Given the user adjusts the timeline zoom level When zooming in or out Then markers remain anchored to their timestamps, reposition smoothly, and cluster/split appropriately with accurate counts And marker labels and hit areas scale to remain usable without overlapping beyond the defined threshold Given the user switches between version pairs or stems When returning to a previously viewed pair/stem Then the same set of markers and rankings reload from persisted storage matching the last known analysis data
Live Updates, Loading, and Empty States
Given analysis is still processing and streaming results When new hotspots are emitted by the API Then they appear in the timeline and ranked list within 1 second without requiring a page reload And the ranked list re-sorts as needed while preserving current selection and scroll position And if the API errors, a non-blocking error banner appears with retry logic (up to 3 attempts with exponential backoff) And while awaiting first results, skeleton loaders and an "Analyzing" status are shown; if no hotspots are found, an empty state message "No significant changes detected" is displayed
Integration with Private Stem Player
Given the private stem player is loaded and a project is open When the user clicks a timeline marker or ranked list item Then playback seeks to the hotspot timestamp within ±10 ms and the playhead stays synchronized with the waveform And scrubbing over the waveform near markers shows preview tooltips without audio glitches or desync And when a hotspot is associated with a specific stem, the detail view reflects that stem context and solo/mute settings do not break playback continuity
Stem and Frequency Band Filtering
"As an A&R reviewer, I want to filter hotspot markers to only vocal changes and high-mid EQ shifts so that I can focus on the areas that matter for radio mixes."
Description

Allow users to filter visible hotspots by stem (e.g., vocals, drums, bass) and by frequency band groups (low, low-mid, high-mid, high) to focus review on relevant content. Filters apply to both the timeline and ranked list, updating in real time without page reloads. Integrates with existing stem metadata and tagging; uses precomputed band energy deltas from analysis to drive band filters. Selections are preserved per user/session and included in shareable review links. Includes clear-all and saved presets per project.

Acceptance Criteria
Real-Time Stem Filter Updates Timeline and Ranked List
Given a project with hotspots across multiple stems and the stem filter menu open When the user selects one or more stems (multi-select allowed) Then only hotspots whose stem tag is in the selected set are visible on the timeline and in the ranked list And the ranked list count equals the number of visible timeline hotspots And the UI updates within 300ms of selection without a full page reload And left/right arrow navigation skips hidden hotspots and moves only through visible ones And the stem filter options list exactly the stems present in project metadata (no missing/extra values) sorted alphabetically And clearing the last selected stem reverts to the default All Stems state showing all hotspots
Frequency Band Filter Using Precomputed Energy Deltas
Given hotspots have precomputed band energy deltas mapped to band groups {Low, Low‑Mid, High‑Mid, High} When the user selects one or more band groups in the band filter Then only hotspots whose dominant band group is in the selected set are visible on the timeline and in the ranked list And the UI updates within 300ms without a full page reload And band filter options are exactly {Low, Low‑Mid, High‑Mid, High} and are multi-select And with no band selected, the system defaults to All Bands (no band-based exclusion)
Combined Stem and Band Filters Narrow Visible Hotspots
Given the user has selected one or more stems and one or more band groups When both filters are active Then the visible hotspots are those whose stem is in the selected stems AND whose dominant band group is in the selected bands And the ranked list and timeline display identical hotspot sets and counts And if zero hotspots match, a clear zero-state message is shown with a one-click control to Clear All Filters And the UI updates within 300ms without a full page reload
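The combined predicate above, where an empty selection on either axis means "All" (no exclusion), can be sketched as:

```python
def visible_hotspots(hotspots, stems=None, bands=None):
    """Filter hotspots by selected stems AND dominant band groups.
    None or an empty set on either axis means 'All' for that axis."""
    return [
        h for h in hotspots
        if (not stems or h["stem"] in stems)
        and (not bands or h["dominant_band"] in bands)
    ]
```

Applying the same function to both the timeline and the ranked list guarantees the two surfaces show identical sets and counts, as the criteria require.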
Persist Filters Across Session for the Same User
Given an authenticated user adjusts stem and/or band filters in a project When the user refreshes the page or navigates away and returns within the same session Then the previously selected stem and band filters are restored And reopening the project in the same browser session shows the same filters without requiring re-selection And signing out clears the persisted selection for the next session unless a shareable link with explicit filters is used
Shareable Review Links Encode Current Filter Selections
Given a user has an active stem and/or band filter selection in a project When the user generates or copies a shareable review link Then the link encodes the current stem and band selections And a recipient opening the link sees the timeline and ranked list pre-filtered exactly as encoded, regardless of their own defaults And removing the filter parameters from the URL reverts the view to All Stems and All Bands And the link opens without a full page reload and applies filters within 300ms after load
Clear-All Resets Filters to Project Defaults
Given any combination of stem and band filters are active When the user clicks Clear All Filters Then stem and band selections reset to All Stems and All Bands And the timeline and ranked list immediately show all hotspots And any visual indicators of active filters are cleared And the UI updates within 300ms without a full page reload
Project-Level Filter Presets: Save, Apply, Update, Delete
Given a user with edit permissions has an active filter selection When the user saves the selection as a named preset Then the preset appears in the project's preset list for collaborators with view access And applying a preset sets stem and band filters to the preset values and updates the timeline and ranked list within 300ms And updating a preset overwrites its values for all collaborators and is reflected on next load without page reload And deleting a preset removes it from the preset list for all collaborators And saving a preset requires a unique name within the project; attempting to reuse a name shows a validation error and does not create a duplicate
Keyboard Navigation Controls
"As a mastering engineer, I want to use arrow keys to step through hotspots so that I can review changes efficiently without using the mouse."
Description

Enable arrow key navigation to jump to the previous/next hotspot, with optional modifiers to jump by type or stem. Provide accessible focus handling, tooltips for shortcuts, and seamless integration with player transport (spacebar play/pause unaffected). Works across browsers and operating systems, with collision handling for screen readers. Maintains playhead state, respects loop regions, and updates the active item in the ranked list.

Acceptance Criteria
Arrow Key Next/Previous Hotspot Navigation
Given the Change Navigator is visible with at least two hotspots and the navigator container has keyboard focus When the user presses Right Arrow Then the playhead moves to the start time of the next hotspot within the currently visible (filtered) set, the waveform cursor updates within 150 ms of keyup, and the corresponding hotspot becomes the active item in the ranked list Given the Change Navigator is visible with at least two hotspots and the navigator container has keyboard focus When the user presses Left Arrow Then the playhead moves to the start time of the previous hotspot within the currently visible (filtered) set, the waveform cursor updates within 150 ms of keyup, and the corresponding hotspot becomes the active item in the ranked list Rule: No wrap-around — if the current hotspot is the last (or first) in the visible set, the navigation does not move and a non-blocking status message reads "No more hotspots" for 2 seconds
Modifier Navigation by Hotspot Type
Given a hotspot is active and its type is known (e.g., Level, EQ, Dynamics, Stereo) and the navigator has keyboard focus When the user presses Alt/Option+Right Arrow Then the playhead jumps to the next hotspot of the same type as the active hotspot within the visible set, updates within 150 ms, and the matching hotspot becomes active in the ranked list Given a hotspot is active and its type is known and the navigator has keyboard focus When the user presses Alt/Option+Left Arrow Then the playhead jumps to the previous hotspot of the same type within the visible set, updates within 150 ms, and the matching hotspot becomes active in the ranked list Rule: If no further hotspot of that type exists in the chosen direction, no movement occurs and a status message reads "No more [Type] hotspots" for 2 seconds
Modifier Navigation by Stem
Given a hotspot is active and associated to a specific stem and the navigator has keyboard focus When the user presses Shift+Right Arrow Then the playhead jumps to the next hotspot belonging to the same stem within the visible set, updates within 150 ms, and the matching hotspot becomes active in the ranked list Given a hotspot is active and associated to a specific stem and the navigator has keyboard focus When the user presses Shift+Left Arrow Then the playhead jumps to the previous hotspot belonging to the same stem within the visible set, updates within 150 ms, and the matching hotspot becomes active in the ranked list Rule: If no further hotspot for that stem exists in the chosen direction, no movement occurs and a status message reads "No more hotspots in [Stem]" for 2 seconds
Maintain Playhead State and Respect Loop Region
Given the transport is playing and a hotspot navigation key (any supported Arrow or modified Arrow) is pressed When the playhead moves to the target hotspot Then playback continues seamlessly from the new hotspot position with no unintended pause or restart Given the transport is paused and a hotspot navigation key is pressed When the playhead moves to the target hotspot Then the transport remains paused after the move Given a loop region is active When the user navigates by hotspot with any supported key Then navigation is constrained to hotspots whose time ranges intersect the loop region; if none exist in the chosen direction, no movement occurs and a status message reads "No hotspots in loop range" for 2 seconds Rule: Spacebar play/pause behavior remains unchanged and is not intercepted by hotspot navigation bindings
Accessible Focus and Screen Reader Collision Handling
Given the navigator container has keyboard focus When hotspot navigation occurs via any supported key Then keyboard focus remains on the navigator container, the new active list item receives aria-selected=true, and non-active items have aria-selected=false Given a screen reader virtual cursor is detected When the user presses simple Left/Right Arrow keys Then the Change Navigator does not intercept the keys to avoid conflicts, and alternative bindings Alt+Shift+Right/Left Arrow perform next/previous navigation; tooltips and help text reflect the alternative bindings Given hotspot navigation occurs while a screen reader is running When the active hotspot changes Then an aria-live polite announcement conveys rank position, type, stem, and timestamp (e.g., "Hotspot 3 of 12, Dynamics, Vocals, 01:23.450")
Tooltips and Shortcut Hints Across Platforms
Given the user hovers or focuses the Next/Previous hotspot UI controls When the tooltip appears Then it lists the correct platform-specific shortcuts (Windows/Linux: Alt+→/← for Type, Shift+→/← for Stem; macOS: Option+→/← for Type, Shift+→/← for Stem) and basic arrows for Next/Previous; it shows within 150 ms and hides on blur without stealing focus Given a screen reader is detected When tooltips and keyboard cheat-sheet are shown Then they display the collision-safe alternatives (Alt+Shift+→/← for Next/Previous) and include accessible descriptions via aria-describedby Rule: Shortcut hints are consistent across UI surfaces (tooltips, help modal, command palette) and reflect the current OS
Ranked List Synchronization and Scroll Into View
Given hotspot navigation occurs via keyboard When the active hotspot changes Then the corresponding item in the ranked list becomes selected, is scrolled into view with minimal motion (no horizontal scroll), and a selection-changed event is emitted for integrations Rule: List virtualization loads offscreen items within 200 ms so that the new active item’s metadata (type, stem, timestamp) is visible without manual scrolling
One-click Hotspot to To-do
"As an indie label manager, I want to turn a hotspot into an actionable to-do with one click so that I can delegate fixes and track completion during approvals."
Description

Convert any hotspot into a TrackCrate to-do with one click, pre-filling timestamp, stem, change type, severity, and a deep link to the version pair. Support assignee selection, due date, and optional note. Created tasks appear in the release’s to-do board and in collaborator notifications. Permissions ensure only authorized users can create/assign tasks; changes are auditable. To-dos retain a back-reference to the hotspot so status badges appear in the marker UI.

Acceptance Criteria
One-Click Conversion Prefills Hotspot Metadata
Given I am viewing a hotspot marker within the Change Navigator for a release and I have permission to create tasks When I click "Convert to To-do" Then a to-do draft opens prefilled with the hotspot’s timestamp (mm:ss.mmm), stem, change type (level/EQ/dynamics/stereo), severity, and a deep link to the exact version pair at that timestamp And the to-do title is auto-generated as "{Stem} • {ChangeType} at {mm:ss.mmm}" and is editable before saving And saving is blocked if the timestamp, deep link, or title is missing
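The title template and mm:ss.mmm timestamp format above can be sketched in a few lines of Python; the function names and the millisecond-based input are illustrative assumptions, not TrackCrate's actual schema:

```python
def format_timestamp(ms: int) -> str:
    """Render a millisecond position as mm:ss.mmm, the format used in to-do titles."""
    minutes, rem = divmod(ms, 60_000)
    seconds, millis = divmod(rem, 1000)
    return f"{minutes:02d}:{seconds:02d}.{millis:03d}"

def todo_title(stem: str, change_type: str, timestamp_ms: int) -> str:
    """Auto-generate the documented "{Stem} • {ChangeType} at {mm:ss.mmm}" title."""
    return f"{stem} • {change_type} at {format_timestamp(timestamp_ms)}"
```

For the example hotspot announced earlier ("Vocals, 01:23.450"), `todo_title("Vocals", "Dynamics", 83450)` yields "Vocals • Dynamics at 01:23.450".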
Assignee, Due Date, and Optional Note Capture
Given I have opened the to-do draft from a hotspot When I select an assignee from eligible collaborators on the release Then the assignee is stored on save When I set a due date and time (optional) Then the due date/time is stored on save and displayed in the task When I add an optional note (free text) Then the note is persisted and visible in the task detail
To-do Surfaces on Release Board and Sends Notifications
Given I save a to-do created from a hotspot When the save succeeds Then the task appears immediately on the release’s to-do board in the default view And the task is visible in the assignee’s personal task list And collaborator notifications are sent according to project notification settings including assignee, with payload containing title, stem, change type, timestamp, severity, and deep link
Permission Enforcement for Create and Assign
Given I do not have permission to create tasks on the release When I view hotspot actions Then the "Convert to To-do" action is hidden or disabled And any API attempt to create a task from a hotspot returns an authorization error without creating a task Given I can create tasks but lack permission to assign others When I open the assignee control Then I can only assign the task to myself and attempts to assign others are blocked server-side
Auditability of Task Creation and Updates
Given a task is created from a hotspot When I view the task’s audit log Then it records creator, timestamp, hotspot ID, prefilled fields (stem, change type, severity, timestamp), and deep link When the task’s assignee, due date, status, or note is changed Then an immutable audit entry with who, when, and field diffs is appended And audit entries are retrievable via UI and API
Hotspot Back-Reference and Status Badges
Given a to-do was created from a hotspot When I view that hotspot in the Change Navigator Then a to-do badge is displayed with the current task status and assignee avatar/initials And clicking the badge opens the linked to-do in the task drawer When the to-do status changes (e.g., Open to Done) Then the badge state updates in the marker UI upon change or on refresh And if multiple to-dos are linked to the same hotspot, the badge shows a count and opens a list
Deep Link Resolves Correct Version Pair and Timestamp
Given a to-do created from a hotspot includes a deep link When I follow the deep link from a notification or task detail Then the player opens with the exact version pair referenced and the playhead positioned at the hotspot timestamp And the associated stem is focused/soloed if supported by the player UI And if the user lacks access to the versions or release, an authorization message is shown instead of loading the player
Version Pair Selection & Reanalysis
"As a collaborator, I want to pick which two versions to compare so that I can validate specific revisions made since my last feedback."
Description

Provide UI controls to choose the baseline and target versions for comparison at project, track, or stem level. When the selection changes, trigger reanalysis if no cached result exists and show progress while preserving current filters. Store and display comparison context in the UI header and in shareable links. Support pinning a default baseline and safeguard against comparing incompatible sample rates or lengths, with clear error messaging and auto-alignment when offsets are detected.

Acceptance Criteria
Track-Level Version Pair Selection UI
Given I am viewing a track in Change Navigator with multiple versions and stems When I open the Baseline and Target selectors at the Track or Stem context Then only versions valid for the current context are listed (e.g., only stems for the selected track at Stem context) And I can select distinct Baseline and Target versions And the selection is visually confirmed in the selectors And the header updates to display Baseline name/version, Target name/version, and context (Project/Track/Stem + entity name) And the Apply/selection action is persisted until changed by the user
Automatic Reanalysis on Selection Change (No Cache)
Given I change either the Baseline or Target to a pair with no cached analysis When the selection is applied Then analysis starts within 1 second of selection And a determinate or indeterminate progress indicator appears within 300 ms and remains visible until completion And current hotspot filters (e.g., stem filter, band, change type) remain active throughout reanalysis And on completion, hotspots and metrics reflect the new pair and respect the active filters And no full page reload occurs
Cached Analysis Reuse and Fast Load
Given cached analysis exists for the selected Baseline/Target pair in the current context and the source versions are unchanged When I apply that selection Then results load within 500 ms And no reanalysis progress indicator is shown And active hotspot filters remain unchanged and are applied to the loaded results And a cache badge or tooltip indicates results are cached
Comparison Context in Header and Shareable Link
Given a Baseline/Target pair is selected with active filters and context (Project/Track/Stem) When I copy a shareable link from the UI and open it in a new session Then the app opens to the same project and context And the Baseline and Target selections are restored And the header displays the restored comparison context exactly And previously active filters are restored and applied And if any ID in the link is invalid or missing, a clear, non-blocking error is shown and defaults are applied
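One way to make the shareable link restorable is to encode the baseline, target, context, and active filters as query parameters. A minimal sketch, assuming hypothetical parameter names (`baseline`, `target`, `ctx`, `filters`) rather than TrackCrate's actual URL scheme:

```python
from urllib.parse import urlencode, parse_qs, urlparse

def build_share_link(base_url, baseline_id, target_id, context, filters):
    """Encode the comparison context into a shareable link (illustrative params)."""
    params = {"baseline": baseline_id, "target": target_id, "ctx": context}
    if filters:
        params["filters"] = ",".join(sorted(filters))  # stable ordering
    return f"{base_url}?{urlencode(params)}"

def parse_share_link(url):
    """Restore the comparison context from a link; missing IDs come back as None
    so the caller can fall back to defaults with a non-blocking error."""
    q = parse_qs(urlparse(url).query)
    return {
        "baseline": q.get("baseline", [None])[0],
        "target": q.get("target", [None])[0],
        "ctx": q.get("ctx", [None])[0],
        "filters": q["filters"][0].split(",") if "filters" in q else [],
    }
```

Round-tripping through `parse_share_link(build_share_link(...))` restores the selection and filters exactly, which is the behavior the criterion requires.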
Pinned Default Baseline Behavior
Given I pin a Baseline version as the default for the project When I navigate between tracks and stems or reload the application Then the pinned Baseline auto-populates as the Baseline selection by default And a visible pin indicator appears next to the Baseline in the selector and header And unpinning reverts to the last explicitly chosen Baseline for that context And the pin state persists across sessions for my user within the same project
Incompatibility Safeguards and Auto-Alignment
Given I select a Baseline/Target pair with incompatible sample rates When I attempt to compare Then analysis is prevented and a clear error message explains the incompatibility and remediation options And no partial results are shown Given I select a pair with differing lengths or detectable temporal offset When analysis runs Then the system auto-aligns using detected offset and proceeds And a banner or notice indicates auto-alignment was applied with the measured offset value And the user can review or disable the alignment for this comparison

DAW Marker Sync

Round‑trip comments and change hotspots with your DAW. Import tempo maps; export markers in AAF/CSV/Pro Tools/Logic/Reaper formats so producers and mixers see exact bars and regions to address—no retyping or timecode drift.

Requirements

Tempo Map & Meter Import Alignment
"As a producer, I want to import tempo maps and time signature changes so that markers align to the music's bar/beat grid across DAWs."
Description

Enable import of tempo maps and time signature changes from common sources (e.g., AAF, MIDI tempo maps, and DAW‑exported marker/tempo files) and align them to a selected TrackCrate asset version. Support variable tempos, tempo ramps, multiple meters, and session start offsets (e.g., Bar 1 at SMPTE 01:00:00:00). Normalize for sample rate and frame rate, persist the grid with the asset version, and expose a UI for verifying/adjusting the bar‑beat grid. Provide API endpoints and background jobs for parsing and validation to ensure consistent bar/beat positioning across DAWs.

Acceptance Criteria
Import MIDI Tempo Map with Variable Tempos, Ramps, and Meters
Given an asset version with defined sample rate and frame rate and no existing tempo grid When a user uploads a Standard MIDI file (.mid/.midi) containing tempo changes (including linear ramps) and multiple time signature changes Then TrackCrate parses all tempo and meter events and persists a bar-beat grid for the asset version And imported tempo values match source values within ±0.001 BPM And linear tempo ramp segments are preserved such that the instantaneous tempo at the ramp midpoint matches the expected value within ±0.001 BPM And all time signature change locations and values match the source exactly And the absolute audio time of each imported change point differs from the computed expected by ≤1 sample at the asset’s sample rate And the grid is marked as “Verified: Pending” until user confirmation in the UI
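Two pieces of tempo math referenced above can be sketched directly. Standard MIDI "Set Tempo" events store microseconds per quarter note (60,000,000 / BPM); and if a ramp is assumed linear in wall-clock time, the instantaneous tempo at any point is a simple interpolation. Both assumptions are stated here because the spec does not fix the ramp's interpolation domain:

```python
def midi_tempo_to_bpm(us_per_quarter: int) -> float:
    """Standard MIDI Set Tempo events carry microseconds per quarter note."""
    return 60_000_000 / us_per_quarter

def ramp_bpm(bpm_start: float, bpm_end: float, t0: float, t1: float, t: float) -> float:
    """Instantaneous tempo under a ramp assumed linear in time between t0 and t1."""
    if not t0 <= t <= t1:
        raise ValueError("t outside ramp segment")
    frac = (t - t0) / (t1 - t0)
    return bpm_start + frac * (bpm_end - bpm_start)
```

Under this model, the midpoint of a 120→140 BPM ramp sits at exactly 130 BPM, which is the reference value the ±0.001 BPM tolerance would be checked against.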
AAF Import with Session Start Offset at SMPTE 01:00:00:00
Given an asset version configured to a specified frame rate (including support for 23.976/24/25/29.97 DF/NDF/30) and sample rate And an AAF file whose session defines Bar 1 | Beat 1 at SMPTE 01:00:00:00 with documented tempo and meter changes When the user imports the AAF Then the persisted grid sets Bar 1 | Beat 1 to 01:00:00:00 within ≤0.5 frame and ≤1 audio sample at the asset’s rates And all tempo and meter changes from the AAF are imported with counts matching the source (±0 difference) And negative bar indices are supported for audio prior to Bar 1 (e.g., count-in) without shifting the defined Bar 1 location And the UI displays the offset and allows toggling “Bar 1 anchored to SMPTE start” on/off without altering imported event timing unless explicitly saved
Frame and Sample Rate Normalization Without Drift
Given an asset version at 44.1 kHz and 25 fps And a tempo map file (MIDI or CSV) authored at 48 kHz and 30 fps When the tempo map is imported Then TrackCrate normalizes all timing to the asset’s sample and frame rates And bar/beat boundaries at 10 minutes absolute time show ≤2-sample cumulative drift relative to expected positions computed from the source map And positions of meter changes after normalization match expected bar indices exactly And a validation report indicates the detected source rates and the applied normalization And the API returns a normalization flag and drift metrics (max/mean) in the import result payload
Persist Grid to Asset Version with Revision History
Given an asset version with no tempo grid When a tempo map is imported Then a persisted grid (revision r1) is created and associated with that asset version and includes a deterministic grid hash When a different tempo map is imported to the same asset version Then a new grid revision (r2) is created without mutating r1, and r2 becomes the active grid And GET /api/asset/{id}/tempo-grid returns the active grid, revision id, and hash; GET with ?revision=r1 returns the prior grid And DELETE of the asset version removes all associated grids; DELETE of a non-active revision is allowed and does not affect the active grid And the system logs who imported each revision and timestamps the change
UI Grid Verification and Adjustment Workflow
Given an asset version with an imported tempo grid When the user opens the Grid Verification UI Then the waveform with bar/beat overlay, tempo ramps, and meter change markers render within 1 second for a 10-minute asset on a standard machine When the user adjusts Bar 1 alignment to a selected transient or SMPTE timecode and saves Then the recomputed grid preserves relative distances of imported change points and updates absolute positions with ≤1-sample error And meter edits (add/edit/delete) apply immediately in the preview and persist upon save to a new grid revision And cancel discards unsaved edits and reverts the overlay to the last saved revision And the UI displays a pass/fail validation banner if any change would introduce >2-sample drift at 10 minutes
API Upload and Background Parsing with Validation
Given the API endpoint POST /api/tempo-grids/import with multipart upload or URL reference When a supported file (MIDI, AAF, CSV) ≤100 MB is submitted for an authorized user Then the API responds 202 Accepted with a job id and checksum, and creates a background job in status=queued And the job transitions queued→processing→completed (or failed) with progress updates; clients can GET /api/tempo-grids/jobs/{id} And on completion the result contains: detected source rates, event counts (tempo/meter), normalization details, drift metrics, grid hash, and active revision id And on validation failure the result includes machine-readable error codes (e.g., UNSUPPORTED_FRAME_RATE, INVALID_TEMPO_EVENT, MISSING_OFFSET) and 422 status And repeated submissions with identical checksum for the same asset are idempotent (no duplicate revisions; returns existing result)
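The idempotency rule above (identical checksum for the same asset returns the existing result, no duplicate revision) can be sketched as a checksum-keyed job registry. This is an in-memory illustration of the contract, not the actual service implementation:

```python
import hashlib

class TempoGridImporter:
    """Sketch of the documented idempotency: repeated submissions with an
    identical checksum for the same asset reuse the existing job."""

    def __init__(self):
        self._jobs = {}      # (asset_id, checksum) -> job record
        self._next_id = 1

    def submit(self, asset_id: str, file_bytes: bytes) -> dict:
        checksum = hashlib.sha256(file_bytes).hexdigest()
        key = (asset_id, checksum)
        if key in self._jobs:
            return self._jobs[key]   # idempotent: return the existing result
        job = {"id": self._next_id, "status": "queued", "checksum": checksum}
        self._next_id += 1
        self._jobs[key] = job
        return job
```

In the real API the same comparison would happen server-side before the 202 Accepted response, so clients polling `GET /api/tempo-grids/jobs/{id}` converge on one job per unique upload.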
Cross-DAW Consistency Check via Exported Markers Using Imported Grid
Given an asset version with an active imported grid When the user exports markers using the grid to CSV/AAF formats for Pro Tools, Logic, and Reaper Then a test marker placed at Bar 33 Beat 1 in TrackCrate appears at Bar 33 Beat 1 in each DAW within ≤1 tick or ≤1 sample at the asset’s rates And a marker at the midpoint of a tempo ramp lands within ≤0.001 BPM equivalent timing error when inspected in the DAW And exported files include the correct session start offset so Bar 1 aligns to the expected SMPTE time within ≤0.5 frame And the export summary reports per-format placement tolerances and any rounding applied
Cross‑DAW Marker and Region Export
"As a mixing engineer, I want to export markers and regions to my DAW's native format so that I can see exact bars and sections to address without retyping."
Description

Export TrackCrate comments and change hotspots as DAW‑readable markers/regions with both absolute timecode and bar:beat positions. Generate AAF, CSV, and DAW‑specific marker formats (e.g., Pro Tools TXT, Logic XML, Reaper CSV) while preserving region lengths, names, categories/colors, and notes. Embed stable IDs and TrackCrate linkbacks in marker comments for traceability. Handle sample‑rate/frame‑rate conversions, character limits, and per‑DAW naming conventions. Package exports with sidecar files where needed and attach them to the relevant asset version with version‑aware filenames.

Acceptance Criteria
Pro Tools TXT Export Fidelity
Given a TrackCrate project with markers and regions (names, categories/colors, notes) and a tempo map When the user exports in Pro Tools TXT format Then the TXT parses as valid Memory Locations and contains both absolute timecode and bar:beat for every marker/region (native fields or encoded in comments per DAW profile) And importing the TXT into Pro Tools places all markers/regions within ±0.5 frame (absolute) and ±1 tick (bar:beat) of the TrackCrate source And exported region lengths match source within ±1 ms or ±1 tick, whichever is greater And colors are mapped to the nearest available Pro Tools palette value when exact match is unavailable And names are sanitized/truncated per the configured Pro Tools naming rules without producing duplicates
Logic Pro XML Export with Tempo Map Integrity
Given a TrackCrate project with tempo and time‑signature changes and marker/region metadata When the user exports in Logic XML format Then the XML is well‑formed and validates against the expected Logic marker schema And each marker/region includes absolute timecode and bar:beat positions in native fields And importing the XML into Logic places all markers/regions within ±0.5 frame (absolute) and ±1 tick (bar:beat) of the TrackCrate source across the full timeline And tempo and time‑signature events are present so bar:beat alignment is preserved after import And unsupported characters in names/notes are sanitized per Logic rules without data loss to essential content
Reaper CSV Export Color and Notes Preservation
Given a TrackCrate project with colored regions, markers, and multiline notes When the user exports in Reaper CSV format Then the CSV includes required columns to distinguish markers vs regions with start, end, absolute timecode, and bar:beat for each row And importing the CSV into Reaper places all markers/regions within ±0.5 frame (absolute) and ±1 tick (bar:beat) of the TrackCrate source And region lengths match within ±1 ms or ±1 tick, whichever is greater And colors are converted to Reaper‑compatible values with nearest‑color mapping when exact value is unsupported And notes preserve line breaks via escaped sequences and render as expected in Reaper
AAF Export with Sample/Frame Rate Conversion
Given a TrackCrate project at source sample rate and frame rate and a chosen target sample rate/frame rate When the user exports an AAF with markers/regions Then all marker/region absolute positions and lengths are converted to the target rates with no accumulated drift > 0.5 frame over a 60‑minute timeline And bar:beat positions remain consistent with the exported tempo map or sidecar such that reconstituted bar:beat is within ±1 tick after import And the AAF passes validation by an AAF parser and imports without errors into at least one supported DAW And unsupported AAF fields for a given DAW are gracefully downgraded without data loss to required fields (name, start, end, notes)
Stable IDs and TrackCrate Linkbacks in Exports
Given markers/regions have TrackCrate stable UUIDs and asset version URLs When the user exports in any supported format (AAF, Pro Tools TXT, Logic XML, Reaper CSV, generic CSV) Then each exported marker/region includes the stable UUID and HTTPS TrackCrate linkback in a comment/notes field per format conventions And the UUIDs can be parsed from the export to deterministically map back to the original items (100% match rate) And the linkback responds with HTTP 200 and deep‑links to the correct asset version in TrackCrate And if note length limits are reached, the UUID and URL are preserved and other note text is truncated with an ellipsis without breaking the URL/UUID
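The parse-back and truncation rules above can be sketched as follows, borrowing the `[TCID:<uuid>]` bracket convention documented in the round-trip sync section; the exact note layout is an assumption:

```python
import re

TCID_RE = re.compile(r"\[TCID:([0-9a-fA-F-]{36})\]")

def extract_tcid(note: str):
    """Parse the embedded stable UUID back out of an exported marker note."""
    m = TCID_RE.search(note)
    return m.group(1) if m else None

def truncate_note(text: str, tail: str, limit: int) -> str:
    """Truncate free text with a single ellipsis while keeping the
    UUID/linkback tail intact, per the note-length rule above."""
    if len(text) + len(tail) <= limit:
        return text + tail
    keep = limit - len(tail) - 1   # reserve 1 character for the ellipsis
    return text[:keep] + "…" + tail
```

Because the tail is appended verbatim after truncation, the UUID and URL always survive intact, which is what makes the 100% map-back rate achievable.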
Packaging and Attachment with Version‑Aware Filenames
Given an asset version in TrackCrate and selected export formats When the user triggers an export Then the system generates one package per DAW format containing the primary marker file plus required sidecars (e.g., tempo map) when applicable And each artifact is attached to the originating asset version with version‑aware filenames following the pattern <assetSlug>_v<version>_<format>_markers[+sidecars].<ext or .zip> And SHA‑256 checksums are computed and stored for each artifact and exposed via API/UI And downloads of the package reproduce the exact bytes originally attached (checksum match)
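The filename pattern and checksum requirement above are mechanical enough to sketch directly; only the pattern itself comes from the spec, the function names are illustrative:

```python
import hashlib

def version_filename(asset_slug: str, version: int, fmt: str,
                     has_sidecars: bool, ext: str) -> str:
    """Follow the documented pattern
    <assetSlug>_v<version>_<format>_markers[+sidecars].<ext or .zip>."""
    sidecar = "+sidecars" if has_sidecars else ""
    return f"{asset_slug}_v{version}_{fmt}_markers{sidecar}.{ext}"

def artifact_checksum(data: bytes) -> str:
    """SHA-256 hex digest stored per artifact and exposed via API/UI."""
    return hashlib.sha256(data).hexdigest()
```

Verifying a download then reduces to comparing `artifact_checksum(downloaded_bytes)` against the stored digest.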
Cross‑DAW Naming and Character Compliance
Given marker/region names exceeding DAW limits or containing unsupported characters When exporting to each DAW format Then names are sanitized per the DAW profile: prohibited characters replaced, whitespace normalized, and length truncated with a single ellipsis And uniqueness is preserved after truncation via numeric suffixes (e.g., #2, #3) without exceeding the limit And all exported files import without name rejections or auto‑renaming by the DAW And a per‑item mapping of original→exported name is recorded in the export log for auditability
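A minimal sketch of the sanitize/truncate/dedupe pipeline above, assuming a hypothetical DAW profile (the prohibited-character set and limit would come from the per-DAW configuration, not this hard-coded default):

```python
def sanitize_names(names, limit, prohibited='/\\:*?"<>|'):
    """Strip prohibited characters, normalize whitespace, truncate with a
    single ellipsis, then dedupe with #n suffixes that still fit the limit."""
    seen, out = {}, []
    for raw in names:
        name = "".join(c for c in raw if c not in prohibited)
        name = " ".join(name.split())            # normalize whitespace
        if len(name) > limit:
            name = name[: limit - 1] + "…"       # single-ellipsis truncation
        base = name
        n = seen.get(base, 0)
        if n:                                    # duplicate after truncation
            suffix = f" #{n + 1}"
            name = base[: limit - len(suffix)] + suffix
        seen[base] = n + 1
        out.append(name)
    return out
```

Keeping the original→exported mapping (the `raw`/`name` pairs) alongside this output would satisfy the audit-log requirement in the same criterion.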
Comment ↔ DAW Marker Round‑Trip Sync
"As a project manager, I want comments and status changes to sync as DAW markers and back so that the team works from a single source of truth."
Description

Provide bidirectional synchronization between TrackCrate comment threads/hotspots and DAW markers. On export, map comments (with assignee, status, and priority) to markers/regions; on import, match markers back to existing comments via embedded IDs, updating statuses and positions or creating new comments when needed. Include conflict resolution rules (e.g., newest wins with a change log), per‑user attribution, and an audit history. Support offline workflows via file upload and group batch updates per asset version, keeping TrackCrate the canonical source of truth.

Acceptance Criteria
Export Comments to DAW Markers Across Formats
Given an asset version with comments/hotspots including assignee, status, priority, and a tempo map When the user exports markers selecting one or more formats (AAF, CSV, Pro Tools, Logic, Reaper) Then a file per selected format is generated with a marker/region for each eligible comment/hotspot with correct start/end positions And each marker/region embeds the immutable TrackCrate comment identifier as [TCID:<uuid>] in a machine-readable field And assignee, status, and priority are serialized into supported metadata fields per target format mapping And Unicode/special characters are preserved or safely escaped per spec without altering the embedded ID And the export produces downloadable artifacts and a checksum for each file
Import DAW Markers to Update/Create Comments
Given an uploaded DAW marker file referencing an asset version When the system processes the file Then markers containing [TCID] match existing comments and update position (start/end), status, priority, and title where changed And markers without [TCID] create new TrackCrate comments with imported text and positions using default status/priority mapping And a summary report lists counts: updated, created, unchanged, and rejected with reasons And no existing comment is deleted as part of import
Conflict Resolution: Newest Wins with Change Log
Given a TrackCrate comment and a corresponding DAW marker with the same [TCID] were both modified since last sync When the import runs Then the system compares last-modified timestamps and applies the most recent change as the source of truth And the prior and new values, winning source (DAW or TrackCrate), and timestamps are recorded in the change log And the audit history includes who made each change and when And the import summary reports the number of conflicts resolved
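The newest-wins rule with a change-log entry can be sketched as below; the field names (`modified_at`, `status`, `position`) are illustrative, not TrackCrate's actual schema:

```python
from datetime import datetime, timezone

def resolve_conflict(local: dict, remote: dict, change_log: list) -> dict:
    """Newest-wins: compare last-modified timestamps, apply the more recent
    side, and append prior/new values plus the winning source to the log."""
    winner, loser, source = (
        (remote, local, "DAW") if remote["modified_at"] > local["modified_at"]
        else (local, remote, "TrackCrate")
    )
    change_log.append({
        "tcid": local["tcid"],
        "winner": source,
        "prior": {k: loser[k] for k in ("status", "position")},
        "applied": {k: winner[k] for k in ("status", "position")},
        "resolved_at": datetime.now(timezone.utc).isoformat(),
    })
    return winner
```

Ties (equal timestamps) fall to the TrackCrate side here, which matches its role as canonical source of truth; the spec does not state the tiebreak, so treat that as an assumption.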
Per-User Attribution on Import/Export
Given an authenticated user initiates an import and optional DAW-to-TrackCrate user mappings exist When comments are created or updated via import Then attribution is assigned to the mapped TrackCrate user when a DAW username matches, else to the importing user And exports embed exporter identity (user and timestamp) in supported metadata/header fields And the audit history shows the user and channel (DAW Import/Export) for each change
Offline Sync via File Upload
Given no live DAW integration is available When a user uploads a supported marker export (AAF, CSV, or recognized session export) for an asset version Then the file type and schema are validated before any data changes occur And if validation passes, processing runs asynchronously and notifies the user upon completion with a link to results and audit entry And if validation fails, no changes are applied and an error is returned detailing file/row/field issues
Batch Update Tracking per Asset Version
Given an import results in multiple comment updates/creations When the import completes Then all changes are grouped under a single batch ID associated with the asset version And the batch lists each item, action (create/update), and source file reference And the asset state is updated atomically for the batch; partial failures are isolated or rolled back and reported without inconsistency
Tempo Map Sync and Position Accuracy
Given the asset version has a tempo map and the import file contains tempo and marker positions When markers are imported Then positions are mapped to the current timeline using the tempo map, aligning bars:beats and absolute time within ±10 ms tolerance And if the imported tempo map differs, the newer map (by timestamp) is applied per newest-wins and logged; otherwise the existing map is retained And the import summary reports maximum and average drift detected
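Mapping musical positions to absolute time across tempo changes is the core of the ±10 ms accuracy check. A minimal sketch for piecewise-constant tempo segments (ramps would add an integration step):

```python
def beats_to_seconds(beat: float, tempo_map) -> float:
    """Convert a beat position to absolute seconds across tempo changes.

    tempo_map: list of (start_beat, bpm) tuples, sorted by start_beat,
    with the first entry at beat 0. Each beat lasts 60/bpm seconds.
    """
    seconds, prev_beat, prev_bpm = 0.0, 0.0, tempo_map[0][1]
    for start_beat, bpm in tempo_map[1:]:
        if beat <= start_beat:
            break
        # accumulate the full duration of the segment we are passing through
        seconds += (start_beat - prev_beat) * 60.0 / prev_bpm
        prev_beat, prev_bpm = start_beat, bpm
    return seconds + (beat - prev_beat) * 60.0 / prev_bpm
```

For example, with a 120 BPM opening that drops to 60 BPM at beat 4, beat 6 lands at 4.0 s (4 beats at 0.5 s plus 2 beats at 1.0 s).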
Drift and Session Offset Correction
"As a mastering engineer, I want TrackCrate to correct timecode drift and session offsets so that markers stay locked to transients regardless of sample rate or tempo changes."
Description

Detect and correct timecode drift and offset mismatches between uploaded audio and DAW sessions. Identify sample‑rate mismatches, embedded timecode offsets, and tempo‑grid misalignment; offer automatic resync using detected transients/guide cues and manual nudge controls (ms/frames and bar:beat). Persist correction parameters per asset version and reflect adjustments in exported files. Provide visual indicators and logs of applied corrections to ensure reproducible, drift‑free round‑trips.

Acceptance Criteria
Constant Offset Correction from Embedded Timecode
Given an uploaded WAV/AIFF with valid BWF start timecode and a project frame rate When the user selects "Auto from Embedded Timecode" Then the system computes and applies a constant offset so the audio aligns to the intended session start within ≤ 1 ms or ≤ 0.05 frames (whichever is larger) And a correction log entry is created containing detected start TC, session start TC, applied offset (ms and frames), user, and timestamp And the UI shows an "Offset corrected" indicator on the timeline
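The constant-offset computation above reduces to converting both SMPTE timecodes to seconds and subtracting. A sketch for non-drop-frame timecode (drop-frame counting is omitted for brevity and would need separate handling):

```python
def smpte_to_seconds(tc: str, fps: float) -> float:
    """Convert non-drop SMPTE hh:mm:ss:ff to absolute seconds."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return hh * 3600 + mm * 60 + ss + ff / fps

def constant_offset_ms(embedded_tc: str, session_tc: str, fps: float) -> float:
    """Offset (ms) to apply so the embedded start aligns to the session start."""
    return (smpte_to_seconds(session_tc, fps)
            - smpte_to_seconds(embedded_tc, fps)) * 1000.0
```

An asset stamped 00:59:58:00 aligned to a session starting at 01:00:00:00 yields a +2000 ms offset, which would then be logged in both ms and frames per the criterion.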
Linear Drift Correction for Sample-Rate Mismatch
Given an asset whose sample rate differs from the project sample rate When the user selects "Correct linear drift" Then the system resamples/time-warps using a factor equal to project_rate/asset_rate without changing pitch And the residual drift measured against a 10-minute click/guide track is ≤ 2 ms end-to-end And the log records asset_rate, project_rate, correction factor, and residual drift error And the "Drift" status indicator turns green
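The correction factor named in the criterion (project_rate/asset_rate) and the drift it removes can be sketched directly; the drift formula assumes the asset's samples are being played back at the project rate without correction:

```python
def drift_correction(asset_rate: int, project_rate: int, duration_s: float):
    """Return the linear time-warp factor from the criterion and the
    uncorrected end-to-end drift (ms) it removes over the given duration."""
    factor = project_rate / asset_rate
    drift_ms = duration_s * abs(1 - asset_rate / project_rate) * 1000.0
    return factor, drift_ms
```

For a 0.1% pull-up mismatch (48000 Hz asset in a 48048 Hz session), a 10-minute track drifts roughly 600 ms end-to-end without correction, well outside the ≤2 ms acceptance bound.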
Tempo-Grid Alignment via Tempo Map Import
Given an imported tempo map and detected transient downbeats When the user runs "Align to tempo grid" Then the system applies a musical offset and optional warp markers so that ≥ 90% of detected downbeats fall within ±1/96 note of the grid over 5 minutes And exported markers land on identical bar:beat positions in the target DAW And the log records bar:beat offset, warp points applied, and pre/post alignment error metrics
Manual Nudge Controls (ms/frames and Bar:Beat) with Preview
Given a loaded asset When the user adjusts manual nudge Then the UI accepts inputs in ms (precision 0.1 ms), frames (respecting current frame rate), and bar:beat:ticks (precision ≥ 1/960 note) with keyboard step shortcuts and undo/redo And nudge changes are non-destructive and previewed in real time with A/B toggle and Reset And applied values are clamped to media bounds and displayed numerically on the timeline
Persisted Correction Parameters per Asset Version
Given an asset version When the user saves or navigates away and returns Then previously applied correction parameters (offset ms/frames, musical offset, drift factor, selected tempo map, cue alignment choices) are persisted and restored exactly And creating a new asset version preserves the prior version unchanged and offers "Copy corrections from previous" as an explicit action And audit history shows who changed what and when
Exported Marker Files Reflect Corrections and Remain Drift-Free
Given corrected alignment When the user exports markers to AAF, CSV, Pro Tools (.ptx/.txt), Logic (.xml), and Reaper (.rpp/.csv) Then all exported positions include the applied offset/drift/tempo corrections And importing each file into its respective DAW with matching project sample rate and frame rate results in marker placements that deviate ≤ 1 frame and ≤ 2 ms from TrackCrate positions across the full timeline And the export bundle includes a human-readable summary of corrections
Visual Indicators and Correction Audit Log
Given any detection or correction When the correction panel is open Then the UI shows status badges for Drift, Offset, and Tempo Grid (states: Detected, Corrected, Not Detected) And the user can download a JSON/CSV audit log containing: asset_version_id, detection method(s), sample rates, frame rate, tempo map hash, offsets before/after, drift ppm, algorithm version, user, timestamp And reapplying the log to the same media reproduces alignment within ≤ 0.5 ms
Timeline Preview and Validation
"As a QA reviewer, I want a timeline preview with the tempo grid and markers so that I can verify alignment before sending files to collaborators."
Description

Offer an in‑app waveform timeline with metered bar/beat grid, imported tempo/meter changes, and all markers/regions for quick verification before export. Enable scrubbing, zoom, loop, and jump‑to‑marker; allow inline edits (rename, color, category) with bulk operations. Provide validation checks (e.g., markers outside the media range, unintentionally overlapping regions) and a dry‑run export report summarizing how items will map per DAW format.

Acceptance Criteria
Waveform and Bar/Beat Grid Accuracy with Tempo/Meter Changes
Given an audio asset up to 10 minutes with an imported tempo/meter map (≥1 tempo change, ≥1 meter change) When the timeline preview loads Then the waveform renders within 2 seconds And bar/beat gridlines align to musical downbeats within ±2 ms across the duration And tempo and meter changes display at the correct bar boundaries with labels And playhead time (hh:mm:ss:ff), bars/beats, and samples remain synchronized within ±1 frame during playback and seek
Marker and Region Import and Visualization
Given markers and regions are imported from a supported format (AAF, CSV, Pro Tools, Logic, Reaper) When the timeline renders Then all items display with correct name, start, end, duration, bar/beat position, and color (if provided) And the displayed item count equals the imported count And clicking a marker (in list or timeline) moves the playhead to within ±1 frame of its start And double-clicking a region frames it in view with 10% visual padding
Scrub, Zoom, Loop, and Jump-to-Marker Controls
Given the timeline is visible When the user scrubs by dragging the playhead Then time readouts update at ≥30 fps and the playhead moves without stutter When the user zooms via wheel/pinch/controls Then zoom executes within 100 ms and maintains cursor-centered focus When the user sets a loop in/out by drag or selection Then looping playback repeats seamlessly with <20 ms gap and the loop range is visually indicated When the user invokes Next/Previous Marker Then the playhead lands within ±1 frame of the target marker
Inline Edit of Name, Color, and Category
Given a marker or region is selected When the user renames it inline Then the new name persists immediately, supports up to 128 UTF‑8 characters, and updates everywhere in <100 ms When the user changes its color from the palette Then the swatch updates immediately and the color is retained for export where supported When the user assigns a category from the predefined list Then the category badge updates and is retained for export where supported
Bulk Edit of Name, Color, and Category
Given multiple markers/regions are selected When the user applies a bulk rename pattern using tokens {index} and/or {basename} Then a preview shows the result and applying updates all names correctly and uniquely When the user applies a bulk color Then all selected items adopt the chosen color When the user applies a bulk category Then all selected items adopt the chosen category And all bulk edits complete within 1 second for up to 500 items
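The token expansion above can be sketched as follows; the 1-based `{index}` and the collision suffix format are assumptions where the spec only requires "correctly and uniquely":

```python
def bulk_rename(names, pattern):
    """Expand {index} (assumed 1-based) and {basename} tokens, then guard
    uniqueness by appending a counter when the pattern collapses names."""
    renamed = [
        pattern.replace("{index}", str(i)).replace("{basename}", base)
        for i, base in enumerate(names, start=1)
    ]
    seen, out = {}, []
    for name in renamed:
        n = seen.get(name, 0)
        out.append(name if n == 0 else f"{name} ({n + 1})")
        seen[name] = n + 1
    return out
```

The preview required by the criterion would simply display this list before the user confirms the apply step.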
Validation Checks and Issue Highlighting
Given a project is open When validation runs Then markers before 00:00:00:00 or after media end are listed as Errors And regions with negative or zero duration are listed as Errors And overlapping regions (time intersection > 0) are listed as Warnings with overlap duration And markers exactly on region boundaries are not flagged And a summary shows total Errors and Warnings And clicking an issue scrolls and highlights the related item on the timeline
Dry-Run Export Mapping Summary per DAW Format
Given validation has zero Errors When the user runs Dry‑Run Export for AAF, CSV, Pro Tools, Logic, and Reaper formats Then a report is displayed within 2 seconds for up to 500 markers and 200 regions, listing per item: mapped name (with any truncation noted), start/end in absolute time and bars/beats, sample/frame rounding applied, color/category mapping or omissions, and any unsupported attributes flagged And item counts per format equal the number eligible for export And the report indicates zero timecode drift by showing identical absolute times across formats for each item
Permissions, Metadata Propagation, and Audit
"As a label admin, I want exports to honor permissions and carry rights metadata so that sensitive assets remain protected and compliance is maintained."
Description

Enforce role‑based access for tempo/marker import/export operations, ensuring only authorized collaborators can generate or apply sync files. Respect TrackCrate’s expiring links and watermark policies (no unintended audio embedding in exports) and propagate rights/credits metadata into supported export fields. Maintain audit logs of who exported/imported what and when, with version linkage and download telemetry to support compliance and troubleshooting.

Acceptance Criteria
Export Markers Restricted by Role
Given a project with defined roles and a user without "DAW Marker Export" permission When the user attempts to export markers/tempo to AAF, CSV, Pro Tools, Logic, or Reaper via UI or API Then the request is rejected with HTTP 403 and error code PERMISSION_DENIED_EXPORT_MARKERS And no export artifact is created and no download link is issued And an audit entry is recorded with outcome=denied, actor, project_id, version_id, attempted_format, timestamp Given a user with "DAW Marker Export" permission When they export markers/tempo in any supported format Then the export succeeds with HTTP 201 and an expiring download link is created And an audit entry is recorded with outcome=success, actor, project_id, version_id, format, artifact_checksum, artifact_size, timestamp
Import Tempo Map Permission Gate
Given a user without "Tempo/Marker Import" permission When the user uploads/imports a tempo map or marker file Then the operation is blocked with HTTP 403 and error code PERMISSION_DENIED_IMPORT_TEMPO_MARKERS And the project timeline/version remains unchanged And a denied audit entry is recorded Given a user with "Tempo/Marker Import" permission and a valid import file aligned to project sample rate When the user imports the tempo/marker data Then the data is applied to the selected version and a new version is created with incremented version number and linkage to source file And conflicts or timecode drift > 2 frames are flagged and reported to the user with a warning, but import still completes And an audit entry is recorded with outcome=success, actor, project_id, new_version_id, source_file_checksum, import_format, timestamp
No Audio Embedded in Marker Exports
Given a project containing audio assets and watermarked download policies enabled When any export of markers/tempo is generated in AAF, CSV, Pro Tools, Logic, or Reaper formats Then the export contains no embedded audio media (media essence count=0 for AAF; no binary audio payload for text-based formats) And exporter options that could embed or reference audio are disabled or set to "do not embed" And watermarking pipelines are not invoked for these exports And a validation step parses the artifact and confirms zero audio streams before the link is issued; otherwise the export fails with HTTP 409 and error code AUDIO_EMBED_NOT_ALLOWED
Rights/Credits Metadata Propagation in Exports
Given project rights/credits metadata populated (title, ISRC, artist, composers, producers, rights holder, copyright, contact, license) When exporting markers/tempo Then metadata is mapped into supported fields of the target format (e.g., file-level comments, title fields, CSV columns, marker notes) And all Unicode characters are preserved; newline and control characters are sanitized And fields exceeding destination limits are truncated without breaking encoding, with each truncation noted in the audit entry And the parsed export reflects exact values for all supported fields and leaves unsupported fields empty; zero unexpected metadata keys are present
Audit Trail for Import/Export with Version Linkage
Given audit logging is enabled When any import or export attempt occurs (success or failure) Then an immutable audit record is created containing: operation_id, operation_type (import|export|denied), actor_user_id, actor_role, project_id, version_id (or null), source_file_id (for import), export_format (for export), artifact_checksum (if created), artifact_size, http_status, timestamp (UTC ISO8601), client_ip, user_agent, outcome And the record is visible to project admins within 10 seconds of the operation And any attempt to modify or delete an audit record via UI or API returns HTTP 403 and is itself logged as a denied action And each audit record links to the relevant project/version detail view for traceability
Expiring Export Links and Download Telemetry
Given an export is created with a time-to-live policy (e.g., 48 hours) When the download is requested before expiry Then the link resolves (HTTP 302/200) and telemetry is stored with event_type=download, link_id, actor (if authenticated), timestamp, client_ip, user_agent, bytes_transferred, outcome=completed When the same link is requested after expiry Then the request is rejected with HTTP 410 and error code LINK_EXPIRED And no new artifact is generated And telemetry records event_type=download_attempt, outcome=expired When the project or artifact is revoked or deleted Then all associated links are immediately invalidated and return HTTP 404 And telemetry records event_type=revocation for the link_ids
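The link-resolution policy above can be sketched as a small decision function. Status codes and the error code are taken from the criterion; the function signature is an assumption.

```python
from datetime import datetime, timedelta, timezone

def resolve_download_link(now: datetime, created_at: datetime,
                          ttl_hours: int, revoked: bool) -> tuple[int, str]:
    """Return the HTTP status a download request should receive.

    Sketch of the policy in the criterion: 404 after revocation,
    410 LINK_EXPIRED after the TTL, otherwise a redirect to the artifact.
    """
    if revoked:
        return 404, "NOT_FOUND"
    if now >= created_at + timedelta(hours=ttl_hours):
        return 410, "LINK_EXPIRED"
    return 302, "REDIRECT"
```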

Version Matrix

Compare A/B/C (and more) with pairwise diffs and quick-reference switching. Audition per-stem ‘best of’ choices across versions to guide comp decisions and capture a clear verdict without juggling multiple players or bounces.

Requirements

Synchronized Version Switcher
"As a producer, I want to switch between versions instantly at the same playhead position so that I can make fair, fast A/B/C comparisons without losing context."
Description

Provide a single transport that keeps the playhead locked across multiple versions (A/B/C/…) with instant, gapless toggling between versions via UI buttons and keyboard shortcuts. Perform automatic time alignment at load, respect latency/offsets, and apply optional loudness trimming per version to ensure fair comparisons. Support up to 12 versions per session with waveform thumbnails, version labels, and quick-jump markers. Integrate with TrackCrate’s private stem player, reuse existing media caching, and persist user preferences per project.

Acceptance Criteria
Gapless Toggle With Synchronized Playhead
Given a project with 3 or more versions (A, B, C...) loaded and the transport playing When the user switches the active version via UI buttons or keyboard shortcuts Then the active version changes within 20 ms and playback is continuous with no buffer underrun events logged And the playhead position remains synchronized within ±2 ms across the switch And performing 100 rapid switches back-and-forth does not accumulate drift beyond ±5 ms relative to the original playhead time And when a text input field is focused, version-switching keyboard shortcuts are disabled, while UI buttons continue to work
Automatic Time Alignment and Latency Compensation
Given multiple versions with differing render offsets/latencies are added to a session When the session loads or new versions are added Then the system computes and applies per-version offsets so that transient alignment error is ≤ 2 ms at three test markers across the timeline And the computed per-version offset value (ms) is displayed to the user and saved with the project And the user can manually nudge each version’s offset in 1 ms increments within ±100 ms and the new value is applied immediately and persisted
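One simplified way to compute a per-version offset is brute-force cross-correlation within the ±100 ms window the criterion allows. This is an illustration only; a production aligner would correlate transient envelopes rather than raw samples.

```python
def estimate_offset_ms(reference: list[float], candidate: list[float],
                       sample_rate_hz: int, max_lag_ms: int = 100) -> float:
    """Estimate how far the candidate lags the reference, in ms.

    Brute-force cross-correlation over lags within ±max_lag_ms.
    Positive result = candidate is late relative to the reference.
    """
    max_lag = int(max_lag_ms * sample_rate_hz / 1000)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i, r in enumerate(reference):
            j = i + lag
            if 0 <= j < len(candidate):
                score += r * candidate[j]
        if score > best_score:
            best_lag, best_score = lag, score
    return 1000 * best_lag / sample_rate_hz
```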
Per-Version Loudness Matching for Fair A/B/C
Given Loudness Match is toggled ON for the project When versions A..N are analyzed Then each version’s gain trim is applied to match the project target loudness within ±0.5 LU integrated (default target −14 LUFS-I, configurable) And switching versions while playing results in level differences ≤ 0.5 dB And toggling Loudness Match OFF restores original per-version levels And the per-version gain trims and target loudness setting persist with the project
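Because LUFS is a logarithmic scale, the per-version trim reduces to a simple difference against the target. A minimal sketch, with the −14 LUFS default from the criterion:

```python
def loudness_trim_db(measured_lufs: float, target_lufs: float = -14.0) -> float:
    """Gain trim (dB) that brings a version to the target loudness.
    LUFS differences are expressed directly in dB/LU."""
    return target_lufs - measured_lufs

def db_to_linear(db: float) -> float:
    """Convert a dB trim to a linear gain multiplier for the audio path."""
    return 10 ** (db / 20)
```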
Support for Up to 12 Versions with Labels and Waveform Thumbnails
Given the user adds versions to the session When up to 12 versions are present Then all versions are selectable and visible with editable labels and waveform thumbnails And attempting to add a 13th version is blocked with a clear message indicating the 12-version limit And waveform thumbnails for all loaded versions render within 3 seconds after each version finishes decoding And each version is mapped to a distinct selector (UI and shortcut) corresponding to its index (1–12)
Quick-Jump Markers with Synchronized Navigation
Given at least three quick-jump markers exist in the timeline When the user triggers a jump to any marker while playing Then the playhead jumps to the marker position within 20 ms without audio glitches or underruns And the active version remains time-aligned so that the jump lands at the same absolute timeline position for all versions (≤ 2 ms deviation) And creating, renaming, reordering, and deleting markers updates navigation immediately and persists with the project
Integration with Private Stem Player and Media Cache Reuse
Given the project assets are already present in TrackCrate’s media cache and playable in the private stem player When the Version Switcher is used to toggle among versions during playback Then no additional network requests for already cached media are issued (verified via network inspector) And switching versions does not trigger re-decoding of unchanged segments when available from cache And if the network is offline but assets are cached, version switching remains functional without interruption
Per-Project Preference Persistence for Version Switching
Given the user sets preferences for version switching (e.g., Loudness Match state and target, last active version, keyboard mapping, marker visibility) When the user closes and later reopens the same project Then all previously set preferences are restored exactly as saved And restoring preferences does not alter the project audio state beyond the saved settings (e.g., trims, offsets, active version) And changing preferences takes effect immediately and persists on subsequent saves
Stem Alignment & Mapping
"As a mix engineer, I want stems auto-mapped and time-aligned across versions so that I can compare parts accurately without manual relabeling or nudge work."
Description

Automatically discover and map corresponding stems across versions using filename heuristics, channel metadata, tempo/BPM, and transient anchors. Handle missing/extra stems gracefully with a reconciliation UI to manually map or exclude tracks, and surface warnings for mismatches. Precompute and cache alignment warp maps to keep stems phase-coherent during switching and per-stem audition. Integrate with TrackCrate’s asset model, versioning, and metadata to persist mappings and reuse them across sessions.

Acceptance Criteria
Auto Stem Mapping Across Versions
Given a release with 2–5 mix versions each containing 10–60 stems When automatic mapping is triggered Then stems are matched across versions by normalized filename tokens (case/whitespace/punctuation-insensitive, common suffixes removed), role tags, and channel tags (L/R/M/S) And a candidate match is accepted only if (name similarity ≥ 0.85 OR role+channel match) AND (tempo delta ≤ 1 BPM OR ≤ 0.5%) AND duration delta ≤ 1.5% AND transient-anchor correlation ≥ 0.80 And ambiguous matches (multiple candidates above threshold) are auto-selected by highest confidence and flagged as "Low Confidence" for review And stems with no candidate above threshold are marked "Unmapped" And mapping 100 stems across 3 versions completes in ≤ 15 seconds on a standard processing node
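The acceptance thresholds above compose into a single predicate. This sketch takes the threshold values verbatim from the criterion and assumes the similarity and correlation inputs are precomputed elsewhere.

```python
def accept_stem_match(name_similarity: float, role_channel_match: bool,
                      bpm_a: float, bpm_b: float,
                      duration_a: float, duration_b: float,
                      anchor_correlation: float) -> bool:
    """Apply the auto-mapping acceptance rule from the spec:
    (name sim >= 0.85 OR role+channel match)
    AND (tempo delta <= 1 BPM OR <= 0.5%)
    AND duration delta <= 1.5%
    AND transient-anchor correlation >= 0.80."""
    tempo_ok = abs(bpm_a - bpm_b) <= 1.0 or abs(bpm_a - bpm_b) / bpm_a <= 0.005
    duration_ok = abs(duration_a - duration_b) / duration_a <= 0.015
    identity_ok = name_similarity >= 0.85 or role_channel_match
    return identity_ok and tempo_ok and duration_ok and anchor_correlation >= 0.80
```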
Reconciliation UI for Missing or Extra Stems
Given automatic mapping is complete and at least one stem is Unmapped or Low Confidence When the user opens the Reconciliation UI Then all unmapped stems and extras are listed with suggested candidates, confidence score, and reason codes And the user can manually map via drag/drop or select, exclude stems from comparison, or create a new stem group And the UI enforces one-to-one mapping per stem role per version and prevents duplicate assignments with a clear error message And Save applies changes, revalidates constraints, and updates mapping in ≤ 1 second for up to 50 edits And Cancel reverts to the last saved state with no side effects
Precompute and Cache Alignment Warp Maps
Given a stem group is mapped across versions and a reference version is designated When alignment processing runs Then per-stem, per-version warp maps are computed using tempo, transient anchors, and channel metadata And alignment error at detected anchors is ≤ 2 ms for the 95th percentile and ≤ 5 ms max And average compute time per stem is ≤ 2 seconds (95th percentile ≤ 5 seconds) And warp maps are cached with strong keys (asset hash, version ID, stem ID, algorithm version) And cached warp maps are reused on subsequent sessions and only invalidated when any key input changes
Persist and Reuse Stem Mappings in Asset Model
Given mappings and warp maps are created or edited When the project is saved Then mappings are persisted to TrackCrate’s asset model including version IDs, stem IDs, channel, role, confidence, user overrides, and timestamps And reopening the Version Matrix loads the persisted mapping and ready-to-use warp maps in ≤ 2 seconds for projects up to 300 stems And uploading a new version with matching metadata auto-applies existing mappings when compatibility checks pass; otherwise it is queued for reconciliation And all mapping changes are audit-logged with actor, action, and time
Mismatch Warnings and Confidence Surfacing
Given mapping results contain discrepancies When discrepancies exceed thresholds (tempo drift > 0.5%, channel mismatch, duration variance > 1.5%, or low confidence < 0.85) Then non-blocking warnings are shown with per-stem badges and a summary count And clicking a warning reveals cause, impacted versions, and suggested fixes And resolving the underlying issue (manual map, exclude, or re-analyze) removes the warning within the session And warnings and confidence scores are available via API and exportable to session notes
Phase‑Coherent Per‑Stem Audition and Version Switching
Given warp maps are available for a mapped stem group When a user solos a stem and switches between versions (A/B/C) in the Version Matrix Then playback switches within ≤ 50 ms with no audible clicks/pops And inter-version phase offset at switch points is ≤ 3 ms (95th percentile) and ≤ 5 ms max And composing a best-of selection using stems from different versions maintains time and phase coherence throughout playback And switching or comping does not alter overall loudness by more than ±0.3 LU relative to the reference version for the same stem
Pairwise Audio Diff Indicators
"As a label rep, I want clear visual indicators of how versions differ so that I can identify meaningful changes quickly without listening to entire tracks end-to-end."
Description

Analyze versions to compute objective differences and visualize them as quick-reference deltas: integrated/short-term LUFS, spectral tilt, dynamic range, stereo width, and phase correlation per stem and master. Display color-coded badges and mini-overlays in the matrix and timeline, with markers where changes exceed thresholds (e.g., +2 dB vocal at chorus). Run analysis on upload or first open, store results in metadata, and expose summaries in shareable views to accelerate decision-making.

Acceptance Criteria
Upload Triggers Analysis and Metadata Storage
Given a user uploads one or more audio files (stems and/or master) to a version, When the upload completes and the files are finalized, Then the system enqueues an analysis job for each unique file checksum within 5 seconds. Given an analysis job runs, When computing metrics, Then the system produces per-file values for: integrated LUFS (EBU R128), short-term LUFS (3 s window), spectral tilt (dB/decade 50 Hz–10 kHz), dynamic range (dB), stereo width (% from mid/side energy), and phase correlation (−1 to +1), plus short-term time series at 100 ms resolution for LUFS and phase. Given metrics are computed, When persisting results, Then the system stores them in version metadata keyed by file_id, stem_name, checksum, sample_rate, duration, and analysis_version and makes them queryable via API and UI. Given a track of up to 10 minutes, When analysis runs on standard workers, Then analysis completes in ≤ track_duration and reports progress updates at least every 10%. Given a re-upload of an identical file (same checksum), When the version is saved, Then no new analysis is performed and cached results are reused; else, prior results are invalidated and recomputed.
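Two of the metrics above, stereo width from mid/side energy and phase correlation, can be sketched directly from matched L/R sample lists. Windowing, gating, and K-weighting are omitted; this illustrates only the underlying formulas.

```python
import math

def stereo_metrics(left: list[float], right: list[float]) -> tuple[float, float]:
    """Stereo width (% of side energy over total mid/side energy) and
    phase correlation (-1..+1) from matched L/R samples."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    e_mid = sum(m * m for m in mid)
    e_side = sum(s * s for s in side)
    width_pct = 100 * e_side / (e_mid + e_side) if (e_mid + e_side) else 0.0
    # Normalized cross-correlation at zero lag.
    num = sum(l * r for l, r in zip(left, right))
    den = math.sqrt(sum(l * l for l in left) * sum(r * r for r in right))
    correlation = num / den if den else 0.0
    return width_pct, correlation
```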
First Open Triggers Deferred Analysis
Given a version with files lacking analysis, When a user opens the Version Matrix view, Then the system starts background analysis within 3 seconds without blocking playback or navigation. Given background analysis is running, When partial results become available, Then badges and overlays progressively render without page reload and display a "processing" state for pending items. Given the user navigates away, When jobs are still running, Then analysis continues and the UI reflects completion on return. Given an analysis error occurs, When the user views the page, Then an error banner identifies the file(s) and offers a "Retry analysis" action that successfully requeues the job.
Pairwise Diff Badges in Matrix
Given versions A, B, C have stored metrics, When the user selects a pair (e.g., A vs B), Then each stem and the master row display delta badges for integrated LUFS, short-term LUFS (avg), spectral tilt, dynamic range, stereo width, and phase correlation with signed values and units. Given a pair selection changes, When the user clicks a different pair, Then all badges and overlays update within 200 ms. Given a badge is hovered or focused, When the user interacts, Then a mini-overlay shows sparkline(s) of short-term LUFS delta and phase correlation over time with a 0.1 s cursor readout. Given deltas are displayed, When formatting values, Then rounding is: LUFS 0.1 LU, spectral tilt 0.1 dB/decade, dynamic range 0.1 dB, stereo width 1%, phase correlation 0.01, and N/A for missing stems. Given stems differ across versions (missing or renamed), When computing diffs, Then badges show N/A and those stems are excluded from any aggregate summaries with a tooltip "Stem missing in one version."
Threshold-Based Timeline Markers
Given short-term time series exist per stem and master for two versions, When delta thresholds are exceeded for ≥ 1.0 s, Then timeline markers are created for those ranges with labels indicating metric and magnitude (e.g., "+2.0 LU Vocal"). Given default thresholds, When evaluating deltas, Then markers are created for: short-term LUFS |Δ| ≥ 2.0 LU, spectral tilt |Δ| ≥ 1.0 dB/decade, stereo width |Δ| ≥ 10%, phase correlation ≤ 0.20, dynamic range |Δ| ≥ 1.0 dB. Given section markers exist (e.g., Chorus), When markers are rendered, Then they anchor to the section label when overlap exists and snap playback to the marked region on click. Given dense events, When markers overlap within 500 ms, Then they are clustered into a single marker with a count badge and expandable list.
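Marker generation from the 100 ms delta series reduces to run detection: find contiguous stretches at or above threshold that last at least 1.0 s. A sketch under those defaults; the 500 ms clustering step is left out.

```python
def marker_ranges(deltas: list[float], threshold: float,
                  frame_ms: int = 100, min_duration_ms: int = 1000) -> list[tuple[int, int]]:
    """Return (start_ms, end_ms) ranges where |delta| stays at or above
    the threshold for at least min_duration_ms, per the defaults above."""
    min_frames = min_duration_ms // frame_ms
    ranges, start = [], None
    for i, d in enumerate(deltas + [0.0]):  # sentinel flushes a trailing run
        if abs(d) >= threshold:
            if start is None:
                start = i
        elif start is not None:
            if i - start >= min_frames:
                ranges.append((start * frame_ms, i * frame_ms))
            start = None
    return ranges
```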
Per-Stem Metrics and API Exposure
Given stems have been analyzed in both versions of a pair, When requesting metrics via API GET /versions/{id}/metrics?pair=A,B, Then the response includes per-stem and master metrics and deltas with timestamps, units, and analysis_version. Given access control settings, When a user without project access calls the API or opens the UI, Then metrics are not disclosed and a 403 message is returned; collaborators with read access can view. Given master metrics exist, When displaying summaries, Then master values come from the master file analysis and are not inferred from stem aggregation. Given a stem exists in only one version, When building the response, Then the delta for that stem is null and the status flag is "missing_in_other_version".
Shareable Summary in External Views
Given an external share link with permission "Diff Summary," When a recipient opens it, Then they see per-stem and master delta badges and key timeline markers without access to original files or full metrics time series. Given link expiry and access settings, When the share link is expired or revoked, Then the summary page is inaccessible (HTTP 410/403). Given performance constraints, When loading the summary page on a typical 4G connection (10 Mbps), Then initial content is visible within 2.5 s and total metrics payload is ≤ 300 KB. Given the summary is rendered, When content is displayed, Then it includes the compared pair (e.g., "A vs B"), analysis timestamp, project name, and legend explaining badge colors and units.
Color Coding and Accessibility
Given delta magnitudes are categorized, When rendering badges, Then magnitude bands are: subtle (< threshold) neutral gray, moderate (threshold to 2×) amber, strong (> 2×) red, with directional arrows ↑/↓ and numeric values. Given accessibility requirements, When rendering text in badges and overlays, Then contrast ratios meet WCAG 2.1 AA (text ≥ 4.5:1; non-text indicators ≥ 3:1) and tooltips provide the metric name, units, and exact value. Given keyboard navigation, When using Tab/Arrow keys, Then the user can move focus across pair selector, stems, and badges, open overlays with Enter/Space, and dismiss with Esc. Given screen readers, When badges receive focus, Then ARIA labels announce "Delta integrated LUFS for Vocal: plus 1.2 LU, moderate" or equivalent per metric.
Per‑Stem Audition Matrix & Best‑Of Comping
"As a producer, I want to audition stems across versions and mark best-of choices by section so that I can direct the final comp without creating new bounces."
Description

Provide a grid UI that lets users audition any stem from any version in real time, with solo/mute, per-section switching, and crossfades to prevent clicks. Allow users to mark "best-of" selections per stem and song section, generating a non-destructive comp plan with timecodes. Save, comment, and iterate on comp plans; export decisions as structured metadata attached to the release for handoff to mixers/producers. Leverage TrackCrate’s player engine and permissions to ensure secure playback.

Acceptance Criteria
Real-Time Per-Stem Audition Grid
Given a project with at least two versions sharing common stem names When the user clicks any grid cell to audition a stem-version Then playback continues without transport restart, the newly selected stem is heard in sync within ±5 ms alignment to the current playhead, and the active cell state visibly updates immediately And switching stems does not affect the playback position or global tempo/transport state And the previously active cell is deselected and its state is persisted in session history for undo/redo
Solo/Mute Controls & Per-Section Audition
Given the audition matrix is playing When the user toggles Solo on one or more stems Then only the soloed stems are audible and their solo state persists across version switches until cleared When the user toggles Mute on a stem Then that stem is silenced without affecting other stems or transport Given song sections are defined (markers or imported arrangement) When the playhead enters a new section or the user jumps to a section Then per-section selected version choices for each stem automatically take effect without transport restart
Seamless Crossfades on Stem/Section Switch
Given crossfades are enabled (default 20 ms, equal-power) When the user switches a stem version or a section boundary applies a new selection Then a crossfade is applied between outgoing and incoming audio of the same stem with the configured duration and curve And no audible clicks or discontinuities occur and peak level deviation at switch is within ±1 dB relative to steady-state And crossfade duration is user-configurable between 10–100 ms and persists per project
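The default equal-power curve named above keeps the summed power of outgoing and incoming audio constant through the switch, which is what prevents a perceived dip. A minimal sketch of the gain law:

```python
import math

def equal_power_gains(t: float) -> tuple[float, float]:
    """Equal-power crossfade gains at position t in [0, 1]:
    (outgoing, incoming). Their squared sum is always 1, so total
    power stays constant across the fade."""
    theta = t * math.pi / 2
    return math.cos(theta), math.sin(theta)
```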
Best-Of Marking & Non-Destructive Comp Plan
Given the user is auditioning stems across versions When the user marks a stem-version as Best-Of for a specific song section Then the system records a non-destructive comp entry including stem name, source version ID, section start/end timecodes, and crossfade settings And the comp plan can be previewed end-to-end with the player using the recorded selections And removing or changing a Best-Of entry updates the plan without altering any source audio files
Save/Comment/Iterate Comp Plans
Given the user has created a comp plan When the user saves the plan with a title Then the plan is versioned with an incrementing revision number and timestamp, and the latest becomes the active plan When a collaborator adds a comment on a plan entry Then the comment is stored with author, timestamp, and is visible to users with access to the release And users can duplicate a plan to a new revision, compare two revisions (diff of stem/section selections), and restore any prior revision
Export Comp Plan as Structured Metadata
Given an active comp plan exists When the user exports the plan Then a structured metadata file (JSON) is produced and attached to the release containing at minimum: schemaVersion, trackID, section boundaries (timecodes), per-stem selections with source version IDs, file IDs/hashes, crossfade settings, and notes And the export is available via UI download and API, and passes JSON schema validation And re-importing the file recreates the same active comp plan selections
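A hypothetical instance of the export payload described above. The top-level keys come from the criterion; nested field names (`sections`, `selections`, `crossfade`, etc.) are illustrative, and the round-trip check mirrors the re-import requirement.

```python
import json

# Illustrative comp-plan export; field names beyond those listed in the
# criterion (schemaVersion, trackID, sections, per-stem selections,
# crossfade settings, notes) are assumptions.
comp_plan = {
    "schemaVersion": "1.0",
    "trackID": "trk_123",
    "sections": [
        {"name": "Chorus 1", "start": "00:00:45.000", "end": "00:01:15.000"},
    ],
    "selections": [
        {
            "stem": "Lead Vocal",
            "section": "Chorus 1",
            "sourceVersionID": "ver_B",
            "fileID": "file_789",
            "fileHash": "sha256:<hash>",
            "crossfade": {"durationMs": 20, "curve": "equal-power"},
        }
    ],
    "notes": "Vocal from B is tighter in the chorus.",
}

serialized = json.dumps(comp_plan, indent=2)
```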
Permissioned Secure Playback & Audit
Given project permissions are enforced When a user without audition permission attempts to open the matrix Then access is denied with a clear message and no audio is streamed When an authorized user auditions stems via the matrix or a shared AutoKit link Then playback is watermarked per TrackCrate policy, download is disabled unless explicitly granted, and link expirations are honored And an audit log entry is recorded for each audition session capturing user, timestamp, plan ID (if any), and stems/versions accessed
Verdict Capture & Secure Review Share
"As an A&R, I want to share a secure, time-limited Version Matrix to collect approvals so that I can lock a decision with an auditable trail."
Description

Enable capturing a final verdict (selected version or comp plan) with rationale and inline comments, then generate an expiring, watermarked review link that previews the Version Matrix with restricted controls. Inherit TrackCrate permissions, enforce download restrictions, and track recipient activity (opens, playtime, selections). Store verdicts as immutable artifacts linked to the release to create a clear decision trail and reduce back-and-forth.

Acceptance Criteria
Final Verdict Selection from Version Matrix
Given I am an Owner or Editor on a release with a populated Version Matrix When I select a single winning version OR define a per-stem comp plan (mapping each stem to a source version) and click "Finalize Verdict" Then the system saves a verdict record with: unique verdict ID, author ID, ISO-8601 timestamp, and the exact version/stem mapping Given a verdict is saved Then the matrix state (version IDs, stem list, diff settings) is snapshotted and associated to the verdict Given a verdict exists When any user attempts to modify it Then the action is blocked and the UI requires "Create New Verdict"; the original verdict remains read-only Given a release with multiple verdicts Then the latest is flagged as "Current" and all prior verdicts are retained for audit
Rationale and Inline Comments Capture
Given I am finalizing a verdict When I enter rationale text Then the system enforces a non-empty rationale of at least 10 and at most 2000 characters Given inline comments exist on the Version Matrix (anchored to stems and/or timecodes) When I finalize the verdict Then all included comments (author, anchor, timestamp, text) are captured into the verdict artifact and become read-only Given I view a verdict Then I can see the rationale and associated inline comments in context for each stem
Immutable Verdict Artifact Linked to Release
Given a verdict is finalized Then an immutable artifact is created containing: release ID, verdict ID, author ID, ISO-8601 timestamp, stem-to-version map, rationale, included comment IDs, and a SHA-256 content hash Given the artifact exists When anyone attempts update or delete via API or UI Then the operation is rejected (HTTP 403) and only append-only addenda can be added as new records linked to the artifact Given I open the release timeline/audit log Then the verdict artifact appears with a link to view the snapshot and its hash for verification
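The SHA-256 content hash above only supports verification if it is computed over a canonical serialization, so that semantically identical artifacts always hash the same. A sketch of that, assuming JSON with sorted keys as the canonical form:

```python
import hashlib
import json

def verdict_hash(artifact: dict) -> str:
    """SHA-256 content hash of a verdict artifact. Canonical JSON
    (sorted keys, fixed separators) makes the hash independent of
    key order and whitespace."""
    canonical = json.dumps(artifact, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```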
Secure Expiring Review Link with Restricted Controls
Given I am an Owner or Editor When I generate a review link Then I must set an expiry (absolute datetime or duration up to 30 days) and may optionally set a recipient email label Given a review link is generated Then recipients can only: play/pause, A/B/C switch, solo/mute stems, toggle diffs, and leave review comments; they cannot: upload, rename, delete, edit versions/stems, or finalize a verdict Given a review link is expired When a recipient attempts to access it Then the application returns HTTP 410 Gone and shows an expiry message Given a review link is manually revoked Then subsequent access returns HTTP 403 Forbidden Given a review session Then any selections a recipient makes are sandboxed to the session and are not persisted to the release
Watermarked, Expiring Download Enforcement
Given a review link Then downloads are disabled by default; any download attempt returns HTTP 403 with message "Downloads disabled for this review" Given the link creator explicitly enables downloads When a recipient downloads an asset Then the file is watermarked with the review link ID and recipient label (if present) and the watermark is verifiable post-download Given a link with enabled downloads is expired Then any direct file URL or signed URL associated to it is invalidated within 60 seconds and returns HTTP 410 Gone
Permission Inheritance and Recipient Access
Given TrackCrate release permissions restrict who can share When a Viewer attempts to generate a review link Then the action is denied; only Owner/Editor may create links Given a review link is created from a release with restricted assets Then only the assets visible to the link creator are exposed via the link Given a collaborator is removed from the release or their role changes Then all review links they created are re-evaluated within 5 minutes; links from removed users are auto-revoked Given optional recipient whitelisting is enabled for a link Then access requires email verification matching the whitelist
Recipient Activity Tracking and Reporting
- Given a recipient opens a review link, Then an Open event (timestamp, link ID, user agent, approximate country) is recorded within 10 seconds
- Given a recipient plays audio, Then cumulative playtime per version and per stem, number of A/B/C switches, and time-on-page are recorded with ≤5-second accuracy
- Given a recipient makes selections or leaves comments, Then those interactions are logged to analytics but not persisted to the release
- Given I am an Owner or Editor, When I view Review Analytics, Then I can see per-link metrics (opens, unique recipients, total playtime, top-selected version) and export them as CSV
Auto Gain Matching & Bias Guardrails
"As a mastering engineer, I want automatic gain matching during comparisons so that my judgments aren’t biased by loudness differences."
Description

Implement automatic gain matching at both master and per-stem levels against a target LUFS to minimize loudness bias during switching and audition. Provide an A/B bias check (randomized level ±0.5 dB) and a bypass toggle for critical listening. Persist trims with the session, expose indicators when normalization is active, and integrate with analysis results for fast startup. Ensure negligible latency and no clipping via true-peak aware processing.

Acceptance Criteria
Master-Level Auto Gain Match to Target LUFS
- User can set a target LUFS (default -14 LUFS); setting persists per session.
- With normalization ON, integrated loudness over the last 10 s is within ±0.3 LU of the target for any version played.
- A true-peak ceiling of -1.0 dBTP is enforced; no samples exceed -1.0 dBTP under any playback condition.
- UI displays a Normalized badge with target LUFS and applied offset in dB; offset updates within 500 ms when material changes.
- Toggling normalization changes level by the displayed offset ±0.2 dB.
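The master-level match above boils down to one gain computation: the offset that brings measured integrated loudness to the target, capped so the true peak stays under the ceiling. A minimal sketch, assuming the measured LUFS and dBTP values come from prior analysis (function and parameter names are illustrative, not from the spec):

```python
def normalization_offset_db(measured_lufs: float,
                            measured_true_peak_dbtp: float,
                            target_lufs: float = -14.0,
                            tp_ceiling_dbtp: float = -1.0) -> float:
    """Gain (dB) that brings material to the target loudness without
    letting the true peak exceed the ceiling."""
    offset = target_lufs - measured_lufs
    # Available headroom before the true peak hits the ceiling.
    headroom = tp_ceiling_dbtp - measured_true_peak_dbtp
    # Never boost past the ceiling; attenuation is always allowed.
    return min(offset, headroom)
```

For example, material measured at -18 LUFS with a -6 dBTP peak gets +4 dB of makeup gain, while material at -16 LUFS with only -2 dBTP of peak is limited to +1 dB by the ceiling; a true-peak limiter would then handle transient safety during playback.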
Per-Stem Normalization and Trim Persistence
- When soloing a stem or in 'best-of' audition, stem loudness (3 s short-term LUFS) is within ±0.5 LU of the target.
- Per-stem gain trims auto-saved to the session; on reload, the same trims and normalization state are restored.
- Per-stem UI shows normalization active state and numeric offset (±0.1 dB precision).
- When multiple stems are active, master output true-peak remains ≤ -1.0 dBTP; if exceeded, system auto-applies safe makeup gain and shows a warning.
- Manual user trim edits persist and coexist with normalization without exceeding the TP ceiling.
A/B Bias Check with ±0.5 dB Randomization
- Enabling Bias Check applies a concealed random gain of ±0.5 dB to the currently auditioned version; sign and value are hidden from the user.
- During Bias Check, switching between A/B/C keeps normalization active and adds the randomized offset only to the active comparison target.
- The randomized offset magnitude is within ±0.5 dB with measurement tolerance ±0.1 dB.
- Disabling Bias Check reverts levels to exact normalized values within 100 ms without audible artifacts.
- A session event is recorded that a bias check occurred (timestamp, track/version ID) without revealing the offset value.
Bypass Toggle for Critical Listening
- Global Bypass instantly disables all normalization and per-stem trims for playback.
- Bypass state is per-user per-session and persists until changed; default is OFF.
- Engaging or disengaging Bypass produces no clicks/pops (click-free ramp ≤ 5 ms or zero-crossing switching).
- While Bypass is ON, UI indicates Bypass and hides normalization badges; turning Bypass OFF restores previous normalization state exactly.
- Metering reflects raw levels when Bypass is ON and normalized levels when OFF.
Fast Startup via Prior Analysis Integration
- If loudness/TP analysis exists, normalization is applied before first audible buffer; level-matched audio is heard within 200 ms of pressing Play.
- If analysis is missing, a background analysis starts within 100 ms; an Analyzing indicator appears, and provisional peak-based safety (TP ≤ -1.0 dBTP) is applied until final LUFS normalization is ready.
- Transition from provisional to LUFS-based normalization is smoothed with a gain fade ≤ 200 ms and no audible step.
- Completed analysis is cached and reused; reopening the session does not re-analyze unchanged files.
- Analysis failures surface a non-blocking error and gracefully disable normalization for the affected item.
Seamless Version Switching Level Parity in Version Matrix
- While switching versions in the Version Matrix, loudness difference between any two normalized versions measured over a 5 s window is ≤ 0.3 LU.
- Cross-switch latency ≤ 50 ms with no dropouts or clicks.
- In per-stem 'best-of' audition, switching the source of a stem changes short-term loudness by ≤ 0.5 LU and maintains TP ≤ -1.0 dBTP on the master.
- Keyboard shortcuts and UI interactions produce identical level-parity behavior.
- The UI displays the per-version applied gain offset immediately upon selection (≤ 100 ms).
True-Peak Safety and Low-Latency Processing
- True-peak aware processing (oversampled limiter or equivalent) ensures output TP ≤ -1.0 dBTP under all scenarios (solo, multi-stem, version switches).
- Additional end-to-end latency introduced by normalization path ≤ 5 ms on desktop and ≤ 10 ms on mobile/web at 48 kHz.
- Normalization CPU overhead ≤ 5% on a mid-tier desktop and ≤ 10% on a mid-tier mobile device during 16-stem playback at 48 kHz/128 buffer.
- Behavior is consistent across sample rates 44.1, 48, 88.2, and 96 kHz.
- Stress test with 32 concurrent stems maintains TP limit and latency bounds without audible artifacts.

Readiness Score

An at-a-glance clearance health grade that audits your bundle for missing codes (ISRC/ISWC/IPI), split inconsistencies, uncleared samples, and contact gaps. Get a prioritized fix list with one-click jumps to resolve issues via Metadata Sentry so your Capsule ships clean and avoids last‑minute rejections.

Requirements

Metadata Completeness Audit
"As a label manager, I want automatic detection of missing and invalid metadata so that I can fix issues early and avoid distributor and PRO rejections."
Description

Automates a comprehensive scan of each Capsule to detect missing or malformed industry identifiers and key fields, including ISRC, ISWC, IPI/CAE, UPC/EAN (release-level), role assignments, recording year, language, parental advisory, and publisher/PRO affiliations. Validates identifier formats and checksums, enforces required-per-role fields, and differentiates track-, stem-, and release-level metadata. Emits machine-readable issues with severity, category, and precise location, runs on upload/save and on-demand, and exposes an internal API so other TrackCrate services can query audit results without reprocessing.

Acceptance Criteria
Auto Audit on Upload or Save
- Given an existing Capsule with at least one track or stem, When a user uploads a new file to the Capsule or saves metadata on any entity, Then the Metadata Completeness Audit starts within 5 seconds of the triggering action
- And the audit completes within 30 seconds for Capsules with up to 100 assets and within 120 seconds for up to 500 assets
- And newly detected issues are persisted and queryable via the internal API within 10 seconds of audit completion
- And previously resolved issues are marked resolved and excluded from the open issues list
On-Demand Audit Trigger
- Given a user with permission to manage the Capsule, When the user clicks Run Audit Now or calls POST /internal/audits:run with the Capsule ID, Then a new audit run is enqueued within 3 seconds and begins processing within 10 seconds
- And the UI reflects Audit running status until completion and refreshes results within 5 seconds of completion
- And concurrent manual runs for the same Capsule are de-duplicated so only one run executes at a time
Identifier Format and Checksum Validation
- Given track-level identifiers (ISRC, ISWC, IPI/CAE) and release-level identifiers (UPC/EAN), When the audit validates identifiers, Then ISRC values must match ^[A-Z]{2}[A-Z0-9]{3}\d{2}\d{5}$ (case-insensitive, stored uppercase) or an issue code identifier.format_invalid with severity High is emitted
- And ISWC values must match CISAC format (e.g., T-XXXXXXXXX-X) and pass the check digit or issue code identifier.checksum_invalid is emitted with severity High
- And IPI/CAE values must be 9–11 numeric digits or identifier.format_invalid is emitted with severity Medium
- And UPC (12 digits) and EAN-13 (13 digits) must pass GS1 check digit validation or identifier.checksum_invalid is emitted with severity High
- And each issue records category Identifier and a precise field path to the failing value
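The ISRC pattern and the GS1 check-digit rule above can be implemented directly. A sketch of both validators, using the regex from the criteria and the standard GS1 mod-10 algorithm (weights 3/1 alternating from the rightmost data digit); function names are illustrative:

```python
import re

# Pattern from the acceptance criteria; hyphens are stripped and case folded first.
ISRC_RE = re.compile(r"^[A-Z]{2}[A-Z0-9]{3}\d{2}\d{5}$")

def valid_isrc(code: str) -> bool:
    return bool(ISRC_RE.match(code.replace("-", "").upper()))

def valid_gs1(code: str) -> bool:
    """GS1 check-digit validation for UPC-A (12 digits) or EAN-13 (13 digits)."""
    if not code.isdigit() or len(code) not in (12, 13):
        return False
    digits = [int(c) for c in code]
    check = digits.pop()  # last digit is the check digit
    # Weight 3 on the rightmost data digit, then alternate 1, 3, 1, ...
    total = sum(d * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(digits)))
    return (10 - total % 10) % 10 == check
```

Numbering the weights from the right makes one function cover both UPC-A and EAN-13, since the GS1 algorithm is defined relative to the check digit.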
Role-Based Required Fields and Splits Validation
- Given contributors with roles and splits for Master and Publishing rights, When the audit verifies role-based requirements, Then for each rightType, contributor split percentages must sum to 100.00 ± 0.01; otherwise issue code split.sum_mismatch severity High is emitted
- And no contributor may appear more than once per rightType and role; duplicates emit split.duplicate_contributor severity Medium
- And Composer and Lyricist roles require non-empty IPI/CAE and PRO affiliation; missing fields emit role.required_field_missing severity High
- And Producer and Primary Artist roles require role assignment and non-empty legal name; missing emits role.required_field_missing severity Medium
- And each issue includes the specific rightType, role, contributorId, and field path
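The sum and duplicate checks above are small enough to sketch. A minimal illustration, assuming splits for a single rightType arrive as (contributorId, role, percentage) tuples; the issue codes are the ones named in the criteria, everything else is hypothetical:

```python
def split_issues(splits, tolerance=0.01):
    """splits: list of (contributor_id, role, pct) for one rightType.
    Returns the audit issue codes triggered by sum or duplicate violations."""
    issues = []
    total = sum(pct for _, _, pct in splits)
    if abs(total - 100.0) > tolerance:
        issues.append("split.sum_mismatch")
    seen = set()
    for contributor_id, role, _ in splits:
        if (contributor_id, role) in seen:
            issues.append("split.duplicate_contributor")
        seen.add((contributor_id, role))
    return issues
```

A real implementation would also attach severity, rightType, contributorId, and field path to each issue, per the last criterion.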
Track vs. Stem vs. Release-Level Rules
- Given a Capsule containing tracks, stems linked to tracks, and release metadata, When the audit applies level-specific rules, Then track entities require ISRC, language, recordingYear, and parentalAdvisory to be present; missing fields emit severity High for ISRC and Medium for others
- And stem entities must not require ISRC; instead, they require parentTrackId and channel/instrument metadata; missing parentTrackId emits stem.parent_missing severity High
- And release entity requires UPC or EAN (at least one), releaseYear, and releaseLanguage; missing UPC/EAN emits High, other fields Medium
- And stems inherit track language by default; if explicitly set, it must be a valid ISO 639-1/2 code or metadata.invalid_language_code severity Medium is emitted
- And no track-level requirement is erroneously applied to stems, verified by zero missing ISRC issues on stems in audit results
Internal Audit Results API (Cached, Filterable)
- Given cached audit results exist for a Capsule, When a client calls GET /internal/audits?capsuleId={id}&status=open&severity=High, Then the response returns within 500 ms for cached results and does not trigger a new audit run
- And the response includes ETag and Last-Modified headers; subsequent requests with If-None-Match return 304 when unchanged
- And the API supports filters by severity, category, entityType, role, and pagination (limit/offset) with stable ordering by createdAt desc
- And each issue payload includes id (deterministic per issue fingerprint), severity, category, code, message, location.path (JSON Pointer), entityType, entityId, field, createdAt, updatedAt, status
Issue Identity, Location Precision, and Idempotency
- Given the same underlying metadata state is audited multiple times, When two audits run without changes to the audited fields, Then the same issue ids and fingerprints are returned, and issue count remains unchanged
- And location.path resolves to an existing field in the current Capsule JSON for 100% of issues
- And when the underlying field is corrected, the next audit marks the prior issue resolved and does not emit a new issue for that field
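One common way to satisfy the idempotency criteria above is to derive the issue id from a hash of its identifying fields, so re-audits of unchanged metadata reproduce the same ids. A sketch under that assumption (the field choice and truncation length are illustrative, not specified):

```python
import hashlib
import json

def issue_fingerprint(entity_type: str, entity_id: str,
                      code: str, field_path: str) -> str:
    """Deterministic issue id: the same (entity, rule, field) always hashes
    to the same fingerprint across audit runs."""
    key = json.dumps([entity_type, entity_id, code, field_path],
                     separators=(",", ":"))  # canonical serialization
    return hashlib.sha256(key.encode()).hexdigest()[:16]
```

Because the fingerprint excludes mutable data like timestamps and status, correcting a field simply means the next audit no longer emits that fingerprint, letting the system mark the prior issue resolved.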
Split Consistency Validator
"As a producer, I want TrackCrate to flag and help resolve split inconsistencies so that payouts and registrations are accurate and undisputed."
Description

Verifies contributor splits across tracks and stems to ensure totals equal 100%, roles are consistent, and the same parties are represented uniformly across a Capsule. Detects duplicate contributors, conflicting roles per track, territory-specific deviations, and math rounding drift. Suggests reconciliation options (e.g., normalize aliases, merge duplicate profiles) and flags conflicts when splits differ between stems and final track. Provides per-issue context and links back to the exact asset and field to accelerate correction.

Acceptance Criteria
Split Totals Equal 100% Within Tolerance
- Given a Capsule containing at least one track or stem with contributor splits, When the Split Consistency Validator runs, Then the sum of all contributor percentages for each asset equals 100.00% ± 0.01%
- And assets outside tolerance are flagged with error code SPLIT_TOTAL_MISMATCH including assetId, assetType, computedTotal, expectedTotal, and tolerance
- And a normalization suggestion is provided when absolute drift ≤ 0.10%, distributing the remainder to the highest-percentage contributor deterministically
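The normalization suggestion above (distribute small rounding drift to the highest-percentage contributor, deterministically) can be sketched as follows; the tie-break rule is an assumption, since the spec only requires determinism:

```python
def normalize_suggestion(splits: dict, max_drift: float = 0.10):
    """splits: {contributorId: pct}. If the total drifts from 100 by at most
    max_drift, return a corrected dict assigning the remainder to the
    highest-percentage contributor; otherwise return None (flag instead)."""
    total = sum(splits.values())
    drift = 100.0 - total
    if abs(drift) > max_drift:
        return None  # too far off to auto-fix; emit SPLIT_TOTAL_MISMATCH
    if drift == 0:
        return dict(splits)
    # Highest percentage wins; ties broken by contributorId for determinism.
    top = max(splits, key=lambda c: (splits[c], c))
    fixed = dict(splits)
    fixed[top] = round(fixed[top] + drift, 2)
    return fixed
```

For a three-way 33.33/33.33/33.33 split the 0.01% remainder lands on one contributor chosen the same way every run, which is what makes the suggestion safe to apply with one click.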
Uniform Contributor Identity Across Capsule
- Given two or more contributor profiles that share IPI or ISNI, or identical email, or a name-similarity score ≥ 0.92, When the Split Consistency Validator runs, Then potential duplicates are flagged with code DUPLICATE_CONTRIBUTOR including contributorIds, matchedFields, and similarityScore
- And a merge suggestion is generated that selects a canonical profile (prefer profile with IPI; else the profile linked to the most assets)
- And alias normalization suggestions are created mapping all aliases to the canonical contributor
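The spec fixes the 0.92 threshold but not the similarity metric. A sketch using Python's stdlib sequence matcher as one plausible choice, with whitespace/case normalization before comparison:

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Similarity in [0, 1] between two contributor names.
    The metric here (SequenceMatcher ratio) is an assumption; the spec
    only fixes the 0.92 flagging threshold."""
    def norm(s: str) -> str:
        return " ".join(s.lower().split())  # case-fold, collapse whitespace
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

def likely_duplicate(a: str, b: str, threshold: float = 0.92) -> bool:
    return name_similarity(a, b) >= threshold
```

In practice a production validator would combine this fuzzy score with the exact matches on IPI/ISNI and email, and record which fields matched in the DUPLICATE_CONTRIBUTOR payload.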
Role Consistency Per Asset and Across Variants
- Given a single track or stem, When the validator inspects roles assigned to the same contributor on that asset, Then no contributor appears with conflicting primary roles for the same domain (e.g., both Primary Artist and Featuring on the same asset)
- And conflicts are flagged with code ROLE_CONFLICT_PER_ASSET including contributorId, conflictingRoles, and assetId
- And across track variants and edits within the Capsule, if the same contributor is credited as Primary Artist on the main track but omitted or demoted on its stems or alternates without an explicit variantException tag, the issue is flagged with code ROLE_INCONSISTENT_VARIANTS
Territory-specific Split Deviations Are Valid and Scoped
- Given a track has territory-specific split deviations configured, When the validator evaluates each territory group, Then each territory group's splits sum to 100.00% ± 0.01%
- And each deviation defines explicit ISO territory codes or region groups and an effective date range
- And deviations missing territories or dates are flagged with code TERRITORY_DEVIATION_INVALID
- And when both global and territory splits exist, lookup precedence (territory over global for matching territories) is verified
Stems and Parent Track Split Alignment
- Given stems are linked to a parent track via trackId, When the validator compares contributor splits between each stem and its parent track, Then all contributors on the parent track are present on each stem unless stemOverride=true on that stem
- And contributor percentages on stems match the parent track within ± 0.01% unless stemOverride=true
- And discrepancies are flagged with code STEM_TRACK_SPLIT_CONFLICT including parentTrackId, stemId, contributorId, parentPct, and stemPct
Per-issue Context, Severity, and Deep Links to Fix
- Given the validator produces any warning or error, When results are returned, Then each issue includes issueCode, severity (info|warning|error), assetId, assetType, contributorId when applicable, fieldPath, and a human-readable message
- And each issue includes deepLinkUrl that opens Metadata Sentry scrolled to fieldPath with the corresponding input focused
- And when a one-click fix is available, the issue includes fix.actionId and fix.payloadPreview enabling immediate invocation from the results list
Sample Clearance Tracker
"As an artist, I want a clear view of sample clearance status with required documentation so that I don’t risk takedowns or last‑minute release delays."
Description

Tracks declared samples and interpolations for each track, ensuring that clearance status, documentation, license terms, and rights windows are present before shipping. Requires minimum sample metadata (sampled work identifiers, rights holder contacts, usage duration) and supports statuses such as pending, cleared, cleared-with-conditions, or denied. Flags uncleared samples as blocking or high severity and attaches evidence (agreements, emails) for auditability. Integrates with the Readiness Score to weight unresolved samples heavily and surfaces direct resolution actions.

Acceptance Criteria
Minimum Sample Metadata Validation on Track Save
- Given a track with at least one declared sample, When the user clicks Save on the track metadata, Then the system validates that each sample has: sampled work identifier(s) (ISRC and/or ISWC if available), rights holder contact name and email, holder role, sample type (sample or interpolation), usage start and end timestamps (or duration), and usage description
- Given the sample metadata contains invalid values, When validation runs, Then email fields must match standard format, timestamps must be within the track length with start < end, and durations must be greater than 0
- Given multiple samples are declared on a track, When the user clicks Save, Then validation is applied independently to each sample and any failing sample blocks the save
- Given all required sample fields are valid, When the user clicks Save, Then the track metadata is saved successfully
Clearance Status Management and Transitions
- Given a newly added sample, When it is created, Then its status defaults to Pending
- Given a sample in Pending, When a user updates status to Cleared, Then the system requires at least one attached evidence item and records timestamp and user
- Given a sample in Pending, When a user updates status to Cleared-with-conditions, Then the system requires license terms and condition notes and at least one evidence item
- Given a sample in any status, When a user updates status to Denied, Then the sample is flagged High Severity and a reason is required
- Given a sample status change, When saved, Then a non-editable status history entry is appended with previous status, new status, user, timestamp, and optional comment
Evidence Attachment and Audit Trail
- Given a sample with status Cleared or Cleared-with-conditions, When the user attempts to save status, Then at least one evidence file (PDF, image, email .eml/.msg, or link with snapshot) must be attached
- Given an evidence file upload, When completed, Then the file is stored with checksum, uploader, timestamp, filename, size, and file type metadata and cannot be edited, only superseded or withdrawn with reason
- Given an attached evidence item, When viewed, Then users with permission can preview or download it, and all access is logged with user and timestamp
License Terms and Rights Window Enforcement
- Given a sample with status Cleared or Cleared-with-conditions, When license terms are entered, Then the system captures grant type, territories (include/exclude), media/platforms, attribution requirements, start date, end date, and any royalty or MFN obligations
- Given a release with a planned ship date and territories, When calculating ship readiness, Then any sample whose rights window does not cover the ship date and territories is flagged Blocking
- Given a sample with Cleared-with-conditions, When a required attribution or notice is missing in metadata, Then the sample is flagged High Severity until resolved
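The rights-window check above is a date-range plus territory-set containment test. A minimal sketch under the assumption that territories are stored as sets of codes and the license window as start/end dates (names are illustrative):

```python
from datetime import date

def blocks_ship(license_start: date, license_end: date,
                licensed_territories: set, ship_date: date,
                release_territories: set) -> bool:
    """True if the clearance does NOT cover the planned ship date and all
    release territories, i.e. the sample should be flagged Blocking."""
    covers_date = license_start <= ship_date <= license_end
    covers_territory = release_territories <= licensed_territories  # subset
    return not (covers_date and covers_territory)
```

A real implementation would also handle open-ended grants (no end date), territory exclusion lists, and region groups expanded to country codes before the subset test.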
Readiness Score Weighting and Resolution Links
- Given a Capsule with at least one track containing a sample in Pending or Denied, When the Readiness Score is computed, Then the score is reduced according to the configured unresolved-sample weight and the Capsule is labeled Blocked if any blocking sample exists
- Given a sample-related issue is shown in the Readiness Score panel, When the user clicks Resolve via Metadata Sentry, Then the system deep-links to the sample’s edit view with the offending fields in focus
- Given all sample issues are resolved and statuses are Cleared or Cleared-with-conditions with satisfied conditions, When the Readiness Score is recomputed, Then all sample-related deductions are removed
Pre-Ship Blocking Check at Capsule Shipment
- Given a user attempts to ship a Capsule, When any track has a sample with status Pending or Denied, Then the ship action is blocked and a modal lists each blocking sample with direct actions (Edit, Attach Evidence, Update Status)
- Given a user attempts to ship a Capsule, When any sample has Cleared-with-conditions and required fields (e.g., attribution, territory constraints) are unmet for this release, Then the ship action is blocked until conditions are satisfied or status is updated
- Given all samples on all tracks are Cleared or Cleared-with-conditions with all conditions satisfied for the release window and territories, When the user retries ship, Then the ship proceeds without sample-related errors
Contact Coverage Check
"As a project coordinator, I want gaps in required rights-holder contact details highlighted so that clearances and registrations can be completed without bottlenecks."
Description

Ensures all required rights and business contacts are present and current for writers, performers, publishers, labels, and artwork owners, including preferred contact method and region/time zone. Validates contact reachability (email format, deduplication), flags missing PRO/publisher contacts for each writer, and confirms an emergency release contact. Suggests contacts from prior Capsules when appropriate and indicates GDPR/consent flags where applicable. Contributes to the Readiness Score and provides targeted fixes for each missing contact.

Acceptance Criteria
Capsule Creation: Required Role Contacts Completeness
- Given a Capsule contains at least one each of the writer, performer, publisher, label, and artwork owner entities, When the user saves the Contacts section or runs Readiness Score, Then the system validates that each entity has at least one associated contact record with fields: name/organization, role, email or phone, preferred contact method, region, and time zone
- And entities missing any required contact field are flagged with inline errors and added to the Contact Coverage fix list
- And the Readiness Score displays a contact coverage deduction with the number of affected entities
- And the fix list provides one-click "Add contact" actions pre-populated with entity and role
Contact Deduplication and Email Reachability Validation
- Given contacts are added or imported into a Capsule, When two or more contacts share the same normalized email (case-insensitive, trimmed) or the same normalized phone (E.164), Then the system flags duplicates and prompts the user to merge or map a single contact to multiple roles
- And duplicate contacts do not inflate coverage counts
- And emails failing basic RFC-compliant format validation are rejected with a clear error before save
- And disposable or example domains trigger a warning and are allowed only with explicit user override
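The dedup rule above hinges on normalization before comparison. A sketch of email normalization and duplicate grouping (the regex is a deliberately basic shape check, not full RFC 5322 validation, matching the "basic RFC-compliant format" wording; names are illustrative):

```python
import re

# Basic structural check only: one "@", no whitespace, dotted domain.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def normalize_email(email: str) -> str:
    """Case-insensitive, trimmed normalization per the criteria."""
    return email.strip().lower()

def find_duplicates(contacts):
    """contacts: list of (contact_id, email).
    Returns groups of contact ids sharing a normalized email."""
    by_email = {}
    for contact_id, email in contacts:
        by_email.setdefault(normalize_email(email), []).append(contact_id)
    return [ids for ids in by_email.values() if len(ids) > 1]
```

Phone numbers would go through the same pattern after E.164 normalization, so that coverage counts are computed over the deduplicated set.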
Writer PRO/Publisher Coverage Enforcement
- Given writers are present in the Capsule with IPI/CAE and/or PRO affiliations when available, When the audit runs, Then each writer must have at least one contact for a PRO representative or publisher; writers missing both are flagged
- And if a writer has ISWC or IPI but no linked PRO/publisher contact, a high-priority fix is created
- And the fix list links directly to the writer’s contact panel with suggested contacts (if available)
- And the Readiness Score shows a distinct "Writer PRO/publisher missing" item per writer
Emergency Release Contact Confirmation
- Given a Capsule is set to Ready or a release is queued, When the user attempts to mark the Capsule as Ready or generate a Release Kit, Then the system verifies an Emergency Release contact exists with fields: name, 24/7 method (phone or SMS), region, and time zone
- And if missing or incomplete, the action is blocked with a modal that lists required fields and provides an "Add emergency contact" shortcut
- And once added and saved, the action proceeds and the Readiness Score updates accordingly without a full page refresh
Cross-Capsule Contact Suggestions with Consent Carryover
- Given an entity (writer, performer, publisher, label, artwork owner) matches prior Capsules in the same workspace by exact legal name or identifier (IPI/ISNI/LabelCode), When the current Capsule is missing a contact for that entity, Then the system suggests up to three contacts from prior Capsules with last-verified date and source Capsule ID
- And accepting a suggestion creates a contact link in the current Capsule, preserving consent flags and preferred method
- And suggestions are filtered out if consent is withdrawn or restricted for the current region
GDPR/Regional Consent Enforcement and Indicators
- Given a contact’s region is within GDPR or another listed privacy jurisdiction, When the contact is created or edited, Then the system requires a lawful basis selection (Contract, Consent, Legitimate Interest) and records consent timestamp when applicable
- And contacts marked "Do Not Email" are excluded from email-based coverage and still allow phone coverage
- And the Contacts table shows a visible GDPR badge and hover text with consent status per contact
- And Readiness Score explains if coverage is unmet due to consent restrictions
Readiness Score Impact and Targeted Fixes with One-Click Resolve
- Given Contact Coverage Check findings exist, When the user opens the Readiness Score panel, Then each contact-related issue appears as a discrete fix item with severity, count, and a "Resolve" deep link to Metadata Sentry filtered to the entity and role
- And upon completing the fix and saving in Metadata Sentry, the Contact Coverage Check re-audits within 5 seconds and removes resolved items
- And the Readiness Score delta recalculates and displays without a full page reload
Prioritized Fix List with One‑Click Resolve
"As a label ops lead, I want a prioritized, actionable fix list with one‑click jumps so that I can clear issues quickly and keep release timelines on track."
Description

Aggregates all audit findings into a single, ranked queue based on severity, impact on distribution, and dependency order. Provides one‑click deep links into Metadata Sentry to the exact field or form needed to resolve each issue, supports bulk edits for repetitive fixes, and updates issue status in real time as changes are saved. Displays clear remediation guidance, auto-fix suggestions where deterministic, and indicates whether an issue is blocking ship or advisory. Offers API and UI endpoints for exporting the fix queue to external tools.

Acceptance Criteria
Ranked Queue by Severity, Impact, and Dependency
- Given a Capsule with issues varying in severity, impact, and dependencies, When the Fix List loads or is refreshed, Then items are ordered by severity (Critical, High, Medium, Low), then by impact (Blocking before Advisory), then with prerequisites listed before dependents, and ties broken by created_at ascending
- And each item displays its rank index and badges for Severity, Impact, and Dependency
- And changing any issue’s severity/impact/dependency data reorders the list within 1 second
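The ranking rule above maps naturally onto a composite sort key. A simplified sketch: severity and blocking flag translate directly, while "prerequisites before dependents" is approximated here by unresolved-dependency count (the spec implies a topological order, which a full implementation would compute instead); field names follow the export schema later in this section:

```python
SEVERITY_RANK = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def queue_sort_key(issue: dict):
    """Composite key: severity, then Blocking before Advisory, then fewer
    unmet dependencies first (topological-order approximation), then
    created_at ascending as the tie-break."""
    return (SEVERITY_RANK[issue["severity"]],
            0 if issue["blocking"] else 1,
            len(issue.get("dependency_ids", [])),
            issue["created_at"])

def rank_queue(issues):
    return sorted(issues, key=queue_sort_key)
```

Because Python's sort is stable and the key is a plain tuple, re-ranking after a severity or dependency change is just one `sorted` call, which keeps the 1-second reorder budget easy to meet.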
One‑Click Deep Link to Exact Field in Metadata Sentry
- Given an issue referencing a specific entity and field (e.g., Track: T123, Field: ISRC), When the user clicks Resolve on that issue, Then Metadata Sentry opens focused on that exact entity and field with the issue context pre-loaded
- And on successful Save in Sentry, the Fix List marks the issue Resolved within 2 seconds and navigates back to the queue position
- And if the deep link cannot be resolved, an actionable error is shown and the user remains in the Fix List
Bulk Edit for Repetitive Fixes
- Given multiple selected issues target the same field across different records (e.g., Writer IPI), When the user applies a bulk value or operation, Then all selected issues are updated in one action with a single confirmation
- And successes and failures are reported per item; successful updates persist; failed items are retriable with reasons
- And bulk edit is blocked when selected issues have incompatible fields or unmet dependencies, with an explanation shown
Real‑Time Issue Status Sync
- Given the Fix List is open, When an issue is resolved via Metadata Sentry or another session, Then the item’s status updates within 2 seconds without a full page refresh
- And the item disappears from Blocking filters if resolved or moves according to the current sort
- And a non-intrusive toast indicates the update time
Remediation Guidance and Auto‑Fix Suggestions
- Given an issue is opened in detail view, When guidance is displayed, Then it includes a problem summary, impact statement, and 3–7 step fix instructions with links to relevant policy/docs
- And if a deterministic fix is available, an Auto‑Fix button is shown with a preview of the exact changes
- When Auto‑Fix is confirmed, Then the changes are applied, the issue resolves, and an audit log entry with actor, timestamp, and rule ID is recorded
Blocking vs Advisory Indicators and Ship Guard
- Given the queue contains both blocking and advisory issues, When viewing the list, Then each item shows a clearly labeled badge: Blocking or Advisory with a tooltip definition
- And filters allow All, Blocking, or Advisory, defaulting to Blocking
- And initiating Ship is prevented while any Blocking issue remains, with a list of blockers displayed
Export Fix Queue via API and UI
- Given a user with Capsule read access, When they export from the UI as CSV or JSON, Then the file downloads within 5 seconds and includes fields: issue_id, capsule_id, severity, blocking_flag, dependency_ids, field_path, description, status, created_at, updated_at, deep_link_url
- When a client calls GET /v1/capsules/{id}/fix-queue with format=json|csv and valid auth, Then a 200 is returned with the same fields, filter params (status, blocking) honored, pagination provided, and timestamps in ISO 8601 UTC
- And unauthorized requests are rejected with 401/403
Readiness Scoring Model and Grade Presentation
"As a small label owner, I want a transparent readiness grade with clear breakdowns so that I can decide when a release is truly ship‑ready and enforce quality gates."
Description

Computes an overall readiness grade (0–100 and letter) using weighted categories such as identifiers, splits, samples, and contact coverage, with transparent scoring rules and per‑category breakdowns. Shows a history of scores across Capsule versions, highlights changes that improved or degraded readiness, and allows label-level threshold configuration for warnings and ship gating. Surfaces the grade and badges in the Capsule and AutoKit, triggers soft warnings below the advisory threshold, and blocks shipping below the hard threshold unless overridden by authorized roles. Exposes webhooks and events for external workflow orchestration.

Acceptance Criteria
Score Calculation and Breakdown Presentation
- Given a Capsule with metadata across identifiers, splits, samples, and contact coverage, When the readiness score is computed, Then a numeric score between 0 and 100 (integer) and a letter grade are produced.
- Given the numeric score, When mapping to a letter, Then A=90–100, B=80–89, C=70–79, D=60–69, F=0–59.
- Given the computed score, When viewing the breakdown, Then each category displays its weight (sum of weights = 100%), subscore (0–100), and itemized deductions with rule names and point impacts.
- Given the scoring rules, When a user opens “Scoring Rules” from the breakdown, Then the rules and weights used for the computation are visible and match the calculation shown.
- Given an update to Capsule metadata that affects only one category, When the score is recomputed, Then only that category’s subscore and the overall score change accordingly and are reflected in the breakdown.
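The letter-band mapping in the criteria is small enough to pin down in code; a direct sketch of exactly those bands (function name is illustrative):

```python
def letter_grade(score: int) -> str:
    """Maps the 0-100 readiness score to the letter bands in the spec:
    A=90-100, B=80-89, C=70-79, D=60-69, F=0-59."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    for floor, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= floor:
            return letter
    return "F"
```

Keeping the mapping in one pure function makes the "rules visible and match the calculation shown" criterion easy to satisfy, since the UI and the scorer can share it.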
Versioned Score History and Change Highlights
- Given a Capsule with multiple versions, When viewing Readiness History, Then a chronological list shows version, timestamp, numeric score, and letter grade for each version.
- Given two adjacent versions, When differences affect readiness, Then change highlights display per rule the delta (+/- points) and category impacted (e.g., “Added ISRC: +10”).
- Given a version with no readiness‑affecting metadata changes, When compared to the prior version, Then the score remains the same and the UI labels it “No change”.
- Given the history view, When filtering by category (identifiers/splits/samples/contacts), Then only changes impacting the selected category are shown.
- Given a user clicks a change highlight, When navigated, Then the UI focuses the related field section within the Capsule metadata view.
Label Threshold Configuration and Enforcement Settings
Given a Label Admin, When configuring thresholds, Then they can set an Advisory Threshold (warning) and a Hard Threshold (block) as integers 0–100 where Advisory Threshold >= Hard Threshold, so soft warnings begin before shipping is blocked. Given no custom thresholds for a label, When computing readiness, Then system defaults apply: Advisory=90 and Hard=80. Given thresholds are updated, When saved, Then changes are audit‑logged with actor, timestamp, and before/after values and take effect immediately for subsequent computations and gating checks. Given a non‑admin user, When attempting to edit thresholds, Then the action is denied with an authorization error. Given API access, When calling GET/PUT /labels/{id}/readiness-thresholds, Then validation rules match the UI and responses include the active thresholds and audit reference.
Grade and Badges Display in Capsule and AutoKit
Given a Capsule view, When the score is available, Then a grade badge (letter + color) and numeric score are shown in the header and the category badges display pass/warn/fail states. Given the breakdown icon in Capsule, When clicked, Then a panel reveals per‑category weights, subscores, and deductions. Given an AutoKit page is generated, When viewed by recipients, Then the overall grade badge is displayed consistently with Capsule styling while withholding internal rule details; no PII or internal deduction list is exposed. Given UI accessibility requirements, When rendering badges, Then color contrast meets WCAG 2.1 AA and badges include text labels for non‑color recognition. Given responsive layouts, When viewed on mobile (<375px width), Then badges and grade remain readable without overlap or truncation.
Advisory Soft Warnings Below Threshold
Given a label’s Advisory Threshold, When a Capsule’s score is below this threshold, Then a non‑blocking warning banner appears in the Capsule and in the ship flow with the current score and required threshold. Given a warning banner is dismissed, When the page is reloaded, Then the banner reappears until the score meets or exceeds the advisory threshold. Given a score rises above the Advisory Threshold, When recomputed, Then the warning automatically clears without manual action. Given a warning condition, When it is first detected, Then an internal event is recorded with capsule_id, version_id, score, threshold, and timestamp.
Hard Ship Gating and Authorized Override
Given a label’s Hard Threshold, When a shipping action is initiated for a Capsule below this threshold, Then the ship action is blocked with an explanatory message showing current score and required threshold. Given an authorized role (Label Admin or Release Manager), When choosing to override, Then a mandatory reason must be entered and the override applies to the specific Capsule version only. Given an override is saved, When viewing the ship flow, Then shipping is permitted and the UI shows an “Override active” indicator with actor and timestamp. Given an unauthorized role, When attempting to override, Then the action is denied and the block remains in effect. Given an override, When a new Capsule version is created, Then the prior override does not carry over and gating reevaluates against the new version’s score.
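A minimal sketch of the combined gating order, assuming the advisory threshold sits at or above the hard threshold so that warnings appear before shipping is blocked; the function name and return values are illustrative:

```python
def gate_shipping(score: int, advisory: int, hard: int, override: bool = False) -> str:
    """Evaluate ship gating for one Capsule version.

    Assumption: advisory >= hard, i.e. the soft-warning band starts above the
    hard-block band. An authorized override bypasses only the hard block.
    """
    if score < hard and not override:
        return "blocked"   # ship action denied; explanatory message shown
    if score < advisory:
        return "warn"      # non-blocking banner in Capsule and ship flow
    return "ok"
```

Note that an override only lifts the block; a score below the advisory threshold still surfaces the warning banner, and a new Capsule version re-evaluates gating without the prior override.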
Webhooks and Events for Readiness Orchestration
Given webhook subscriptions for a label, When a Capsule’s readiness score is created or updated, Then a readiness.score.updated event is delivered within 30 seconds with capsule_id, version_id, score, letter, category_breakdown, thresholds, and event_id. Given a score crosses below the Advisory Threshold, When computed, Then a readiness.threshold.warning event is emitted; if below the Hard Threshold, Then a readiness.threshold.blocked event is emitted. Given an override is created or revoked, When saved, Then corresponding readiness.override.created or readiness.override.revoked events are emitted with actor and reason. Given webhook delivery, When the receiver returns non‑2xx, Then retries use exponential backoff for up to 24 hours; events are signed with HMAC‑SHA256 and include an idempotency key (event_id) to prevent duplicate processing. Given the Webhooks API, When calling GET /webhooks/deliveries?capsule_id=… , Then recent delivery attempts and statuses are retrievable for troubleshooting.
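On the receiving side, the HMAC‑SHA256 signing and event_id idempotency requirements above might be handled as in this sketch. Where the signature travels (e.g. a header) and the hex encoding are assumptions; the spec mandates only the algorithm and the idempotency key:

```python
import hashlib
import hmac
import json

def verify_signature(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Constant-time check of an HMAC-SHA256 signature over the raw body."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

_seen_event_ids = set()  # in practice this lives in durable storage

def process_once(body: bytes) -> bool:
    """Use event_id as an idempotency key; return False for duplicates."""
    event_id = json.loads(body)["event_id"]
    if event_id in _seen_event_ids:
        return False
    _seen_event_ids.add(event_id)
    return True
```

Idempotent processing matters because the sender retries non‑2xx deliveries with exponential backoff for up to 24 hours, so the same event_id may arrive more than once.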

Scope Builder

A guided wizard to define usage scope—media, term, territory, exclusivity, MFN, and carve‑outs. It validates against each contributor’s constraints and outputs a one‑page Rights Summary inside the Capsule, so all parties align quickly and negotiations stay on the rails.

Requirements

Constraint Validation Engine
"As a release manager, I want the Scope Builder to automatically validate my proposed terms against all contributors’ constraints so that I avoid unapprovable scopes and keep negotiations efficient."
Description

Implements a rules-driven engine that validates selected usage scope elements (media, term, territory, exclusivity, MFN, and carve‑outs) against each contributor’s stored constraints in real time. Loads constraints from contributor profiles and linked agreements, normalizes them to a shared taxonomy, and evaluates conflicts with clear blocking errors, warnings, and rationale. Supports complex conditions such as overlapping territories, term caps, exclusivity conflicts, MFN parity checks across contributors, and carve‑out precedence. Exposes validation results to the wizard UI via a lightweight API and emits machine-readable codes for analytics. Designed for low-latency feedback, extensible rule definitions, and full audit logging of evaluations.

Acceptance Criteria
Real-Time Validation Feedback in Scope Builder
Given a Scope Builder session with ≤10 contributors and ≤50 linked agreements loaded, When the user changes any of media, term, territory, exclusivity, MFN, or carve-outs, Then the engine returns a validation response within 250 ms p95 and 500 ms p99 measured server-side over a 5-minute rolling window. Given concurrent requests from the same session, When evaluations complete out of order, Then each response includes evaluationId and inputDigest so the client can discard stale results, and the latest response matches the latest inputDigest. Given a successful evaluation, Then the response includes severityCounts for ERROR/WARN/INFO that exactly match the counts in results by severity. Given identical inputs evaluated multiple times, Then the engine produces deterministic results with identical codes and ordering.
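The inputDigest-based discarding of out-of-order responses can be sketched client-side like this; canonical JSON (sorted keys, fixed separators) makes the digest deterministic for identical inputs. Field names in the example scope are illustrative:

```python
import hashlib
import json

def input_digest(scope: dict) -> str:
    """Stable sha256 hex digest of a proposed scope via canonical JSON."""
    canonical = json.dumps(scope, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

class StaleResultFilter:
    """Keep only the validation response matching the most recent input."""
    def __init__(self):
        self.latest_digest = None

    def on_input(self, scope: dict) -> str:
        self.latest_digest = input_digest(scope)
        return self.latest_digest

    def accept(self, response: dict) -> bool:
        return response.get("inputDigest") == self.latest_digest
```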
Scope Conflict Resolution: Territories, Terms, Exclusivity, Carve-Out Precedence
Given the selected territory includes regions not permitted by any contributor and no carve-out grants those regions, Then a blocking error TERRITORY_CONFLICT is returned with contributors and regions listed, and effectiveScope.territory equals the intersection across all contributors. Given a contributor has a carve-out that narrows a general grant, When territory or media intersects both, Then the carve-out takes precedence and the response includes CARVEOUT_APPLIED info with rationale and affectedFields. Given the selected term exceeds a contributor's maximum term cap, Then a blocking error TERM_CAP_EXCEEDED is returned with contributorId and maxAllowedTerm, and summary.minAllowedTerm equals the lowest cap across contributors. Given exclusivity=exclusive and any contributor allows only non-exclusive for the selected media/territory, Then a blocking error EXCLUSIVITY_CONFLICT is returned listing contributors and dimensions. Given exclusivity=exclusive with carve-outs permitting specific third-party uses, Then a warning EXCLUSIVITY_WITH_CARVEOUT is returned listing the carve-outs that weaken exclusivity.
MFN Parity Check Across Contributors
Given any contributor has MFN=true, When any other contributor receives a more favorable dimension (wider media, longer term, larger territory, higher exclusivity), Then a blocking error MFN_VIOLATION is returned with dimension subcodes (MFN_MEDIA, MFN_TERM, MFN_TERRITORY, MFN_EXCLUSIVITY) and advantaged contributorIds. Given all contributors with MFN=true are on parity across media, term, territory, and exclusivity, Then an info code MFN_OK is returned and no MFN_* errors are present. Given carve-outs apply, When evaluating MFN, Then parity is computed on the effectiveScope after applying carve-outs.
Constraint Loading and Taxonomy Normalization
Given a Capsule with contributors linked to profiles and agreements, When evaluating, Then the engine loads constraints where agreement-level terms override profile defaults, selects Active agreements effective as of now or the most recent unexpired by effectiveDate, and merges multiple agreements per contributor without contradiction. Given inputs or stored constraints contain synonyms or variants (e.g., "UK", "United Kingdom", "GB"), Then normalization maps them to shared taxonomy codes and the response echoes normalizedScope with codes. Given any constraint value cannot be mapped to the taxonomy, Then a warning NORMALIZATION_WARNING is returned including the original value and suggested code if available, and the evaluation proceeds using a safe-narrow interpretation. Given a contributor has no accessible constraints, Then a blocking error CONSTRAINTS_MISSING is returned listing missing sources and contributorId.
API Response Contract and Machine-Readable Codes
Given a valid request, When POST /validate is called with proposed scope and contributorIds, Then the API responds 200 with JSON containing validationId (UUIDv4), ruleSetVersion (semver), inputDigest (sha256 hex), normalizedScope, severityCounts, and results[] ordered by severity (ERROR > WARN > INFO) then code ascending. Given results[], Then each item contains code (UPPER_SNAKE_CASE), severity (ERROR|WARN|INFO), message (<=160 chars), rationale (<=500 chars), affectedFields[], dimensions[], and contributorIds[], and codes are stable across ruleSetVersion minor updates. Given an empty issue set, Then results is an empty array, severityCounts are zeros, and status is OK with no errors present. Given invalid input, Then the API returns 400 with code INVALID_REQUEST and details; Given transient backend failure, Then the API returns 503 with code SERVICE_UNAVAILABLE and retryAfter. Given typical payloads (≤10 contributors, ≤50 agreements), Then response body size is ≤64 KB p95.
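The ordering and count invariants in the contract above amount to a two-key sort and a tally, sketched here with illustrative result items:

```python
_SEVERITY_RANK = {"ERROR": 0, "WARN": 1, "INFO": 2}

def order_results(results: list) -> list:
    """Order by severity (ERROR > WARN > INFO), then code ascending."""
    return sorted(results, key=lambda r: (_SEVERITY_RANK[r["severity"]], r["code"]))

def severity_counts(results: list) -> dict:
    """Counts must exactly match the items in results by severity."""
    counts = {"ERROR": 0, "WARN": 0, "INFO": 0}
    for r in results:
        counts[r["severity"]] += 1
    return counts
```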
Extensible Rule Definitions and Governance
Given a new validation rule is added to the rule registry, When it is published, Then the engine hot-reloads the rule without redeploy within 60 seconds, increments ruleSetVersion minor, and subsequent evaluations include the new rule. Given a rule is disabled via configuration, Then subsequent evaluations omit that rule and results include RULES_CHANGED info for 1 hour after the change. Given rules have numeric priority, When multiple rules apply to the same dimension, Then evaluation order follows priority (lower number first), and conflicts resolve deterministically with precedence: carve-outs > grants > MFN > informational. Given the API is called with ruleSetVersion set to a previous version, Then the engine evaluates using that version and returns ruleSetVersion matching the request. Given GET /rules is called by an authorized admin, Then the API lists active rules with id, name, version, priority, severity, and code.
Full Audit Logging with Rule Versioning
Given any validation is executed, Then an immutable audit record is persisted containing evaluationId, timestamp (UTC), actorId, capsuleId, contributorIds, ruleSetVersion, inputDigest, normalizedScope, results.codes, executionTimeMs, and storeWriteId. Given PII in inputs (names, emails), Then audit records store only IDs; human-readable PII is redacted or omitted. Given normal operation, Then 99.9% of evaluations have audit records durably written within 2 seconds; If the write fails, Then the API response includes warning AUDIT_LOG_DEFERRED and the system retries until success or 5 minutes max. Given an admin with AUDIT_VIEW permission calls GET /audit with capsuleId and date range filters, Then the system returns matching audit entries with pagination, and NDJSON export is available.
Guided Scope Wizard UI
"As a producer assembling a Capsule, I want a clear step-by-step interface to define usage scope with immediate feedback so that I can complete setup quickly without rights mistakes."
Description

Delivers a multi-step, responsive wizard that collects scope inputs (media, term, territory, exclusivity, MFN, carve‑outs) with contextual help, presets, and inline validation. Dynamically reveals fields based on prior selections, provides standardized pickers (date ranges, territory selector with regions/countries, media type taxonomy), and shows real-time validation messages from the engine. Includes progress indicators, autosave to the Capsule draft, accessibility compliance (WCAG AA), keyboard navigation, and mobile-optimized layouts. Supports collaboration via presence indicators and soft locks to prevent overwrite during concurrent editing.

Acceptance Criteria
Dynamic Step Flow, Branching, and Progress Indicator
Given I select Exclusivity = Exclusive, When I am on the wizard, Then MFN and Carve‑outs sections become visible and marked required. Given I change Exclusivity from Exclusive to Non‑exclusive, When I revisit the step, Then MFN and Carve‑outs are hidden and any previously entered values are not persisted in the payload. Given I choose Term Type = Fixed, When I open the date range picker, Then Start Date and End Date inputs are shown, required, and Next is disabled until both are valid. Given I choose Term Type = Perpetual, When I view the term controls, Then End Date input is hidden and excluded from the serialized draft. Given I select Territory = Global, When I proceed to the Territory step, Then the Region/Country selector is hidden and any previously selected regions/countries are cleared from the draft. Given I select Territory = Regional, When I proceed, Then the Region/Country selector is shown with multi‑select enabled. Given I switch Media from Advertising to Streaming, When I proceed, Then any Advertising‑specific subfields are removed from the draft and no validation errors remain for removed fields. Given all required fields on the current step are valid, When I press Enter or click Next, Then the wizard advances to the next step. Given I complete a step, When the step changes, Then the progress indicator updates to reflect the number of completed steps out of total and the current step name. Given a required field is invalid, When I click Next, Then focus moves to the first invalid field and an inline error is announced and displayed.
Standardized Pickers, Presets, and Contextual Help
Given the Term picker, When a user selects a date range, Then Start Date must be on or before End Date and the value is stored as ISO‑8601 dates in UTC. Given the Territory selector, When a user selects a Region (e.g., Europe), Then all contained countries are selectable and counts reflect the number of countries selected. Given the Territory selector search, When a user types a country name or ISO code, Then matching regions/countries appear within 200 ms and can be selected via keyboard and pointer. Given the Media taxonomy tree, When a user checks a parent node, Then all allowed child media types are selected and displayed as chips; unchecking the parent clears all children. Given a preset (e.g., “Standard Promo”) is chosen, When applied, Then the wizard pre‑populates media, term, territory, exclusivity, MFN, and carve‑outs per preset and shows a non‑blocking “Preset applied” toast; users can override any field afterward. Given a contextual help icon is present for a field, When clicked or focused and activated with Enter/Space, Then a popover opens with definition and examples; pressing Esc or focusing away closes it; external “Learn more” opens in a new tab.
Real‑time Validation Messaging from Rights Engine
Given a user edits a scope field, When input stops for 300 ms, Then a debounced call is made to the validation engine and a spinner is shown inline until a response is received. Given the engine returns an error tied to a field, When the message is received, Then the field shows an inline error state with the message, Next is disabled, and the error is listed in the step summary. Given the engine returns a warning (non‑blocking), When displayed, Then the field shows a warning style, Next remains enabled, and the warning appears in the step summary. Given a field with an existing engine message is corrected, When the next validation response indicates no issue, Then the inline message clears and the field returns to normal state. Given the validation request times out or fails, When this occurs, Then a non‑blocking banner appears with “Couldn’t validate, retrying…” and the client retries with exponential backoff up to 3 times; a Retry action is available. Given multiple contributors’ constraints are violated, When the engine returns per‑contributor messages, Then messages are grouped by contributor and rendered under the relevant fields with contributor names.
Draft Autosave and Recovery
Given I change any field, When I blur the field or 1 s elapses after the last keystroke, Then the draft autosaves to the Capsule with a visible saving indicator that resolves to “Saved”. Given I change steps, When the step transition completes, Then an autosave is triggered. Given the network is slow or offline, When an autosave is attempted, Then saves are queued with retry backoff, the UI shows “Offline – changes queued”, and no data is lost on refresh. Given I hard refresh or return later, When I reopen the Capsule’s Scope Builder, Then my last draft state and the last active step are restored. Given I revert a field to its previously saved value, When autosave runs, Then no new save is sent for that field (no‑op), and the save payload contains only changed keys. Given a save fails after all retries, When this occurs, Then an error banner appears with details and a “Try again” control; attempting again uses the latest local draft state.
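The "payload contains only changed keys" requirement above reduces to a diff against the last saved state; this sketch assumes a flat key/value draft for illustration:

```python
def changed_keys(saved: dict, draft: dict) -> dict:
    """Minimal autosave payload: only keys whose values differ from the last
    saved draft. Reverting a field to its saved value yields a no-op."""
    return {k: v for k, v in draft.items() if saved.get(k) != v}
```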
Accessibility and Keyboard Navigation (WCAG AA)
Given the wizard UI, When navigated with a keyboard, Then all interactive elements are reachable via Tab/Shift+Tab in a logical order and show a visible focus indicator. Given a tree or multi‑select control (media/territory), When focused, Then Arrow keys navigate items, Space/Enter toggles selection, and Esc closes any open panel. Given inline validation appears, When using a screen reader, Then the error/warning text is announced via aria‑live polite and the invalid field has aria‑invalid=true with an accessible description. Given any textual content, When rendered, Then color contrast meets WCAG 2.1 AA (≥4.5:1 for text; ≥3:1 for large text/icons) and state is not conveyed by color alone. Given the progress indicator, When the step changes, Then the change is announced to screen readers and the indicator is labeled with the current and total steps. Given contextual help popovers, When opened, Then they trap focus until closed, are dismissible via Esc, and restore focus to the invoking control on close. Given the page loads, When a user prefers reduced motion, Then animations are reduced or removed respecting prefers‑reduced‑motion.
Mobile‑Optimized Responsive Layouts
Given a viewport between 320px and 480px, When the wizard is rendered, Then form controls stack in a single column and primary actions are sticky at the bottom without covering content. Given touch interaction, When tapping controls, Then all touch targets are at least 44x44 CSS pixels and have 8px minimum spacing. Given the virtual keyboard opens, When focusing inputs, Then the view scrolls to keep the focused field and helper text visible without being occluded. Given a device with a notch or safe areas, When in portrait, Then content respects safe‑area insets with no clipped controls. Given a mid‑tier mobile device on simulated Slow 4G, When loading the wizard, Then Time to Interactive is ≤3.0s and input latency (FID/INP) remains under 200ms during step transitions. Given images or illustrations in help, When on mobile, Then assets are responsive and do not exceed container width or cause horizontal scrolling.
Collaboration Presence Indicators and Soft Locks
Given two users open the same Capsule’s Scope Builder, When both are on the wizard, Then presence avatars appear within 1s and field‑level focus indicators show who is editing which field. Given a user begins editing a field, When another user attempts to edit the same field, Then the second user sees a soft‑lock notice and the field becomes read‑only until the lock is released. Given a soft lock is active, When the locking user is idle for 30s or navigates away from the field, Then the lock releases and others can edit. Given two users edit different fields, When both save via autosave, Then both changes persist without conflict and are visible to all within 2s. Given a conflicting save occurs on the same field, When the second user attempts to save, Then the UI shows “Update blocked by {User}” with an option to request access; no remote overwrite occurs. Given a user disconnects, When presence pings are missed for 10s, Then their presence indicator clears and any held locks are released.
Contributor Constraint Import & Mapping
"As a label ops user, I want to import and normalize contributors’ rights constraints so that Scope Builder can validate against accurate, structured data without manual re-entry."
Description

Enables ingestion and normalization of contributor-specific rights constraints from profiles, prior Capsules, and CSV/JSON uploads. Maps free-form entries to TrackCrate’s normalized rights taxonomy for media, territory, term, exclusivity, MFN, and carve‑outs. Provides a review screen to resolve ambiguities, deduplicate entries, and set precedence rules. Synchronizes mapped constraints to the validation engine and flags stale data. Includes permissions to restrict who can edit contributor constraints and a change history for compliance.

Acceptance Criteria
Import from Profile and Prior Capsules
Given a Capsule with Contributor A added and an authenticated user with Edit Contributor Constraints permission When the user opens Scope Builder > Constraint Import and selects sources Profile and Prior Capsules Then Contributor A’s latest profile constraints are staged with source=Profile and ISO8601 lastUpdated timestamp And the user can search prior Capsules by name/ID and select one or more And only constraints for Contributor A from the selected Capsules are staged with source=Capsule and capsuleId/version And items already present with identical normalized values are not duplicated And the user can include/exclude each staged item via checkbox, default Include=true And results render within 2 seconds for up to 200 constraints
CSV/JSON Upload Parsing & Validation
Given an upload file in CSV or JSON format up to 5MB and 5,000 rows When the user uploads via Constraint Import Then the system validates presence of fields [contributorId, media, territory, term, exclusivity, mfn, carveOuts] And rows missing mandatory fields are rejected with inline row numbers and reasons And parsing completes within 5 seconds for files meeting the size/row limits And no data is staged if a fatal schema error occurs; otherwise valid rows are staged and invalid rows are listed in an errors panel And users can download an error CSV containing the rejected rows
Free‑Form Mapping to Rights Taxonomy
Given staged constraints containing free‑form values for media, territory, term, exclusivity, MFN, or carve‑outs When the user proceeds to Mapping Then the system proposes normalized values from TrackCrate’s taxonomy with a confidence score 0–100 per field And confidence >= 90 auto‑maps; 60–89 requires user confirmation; < 60 remains unresolved And the user can override any mapping and search the taxonomy for alternatives And all selected mappings are validated against allowed enumerations and formats (e.g., ISO 3166 territory codes, ISO 8601 date ranges)
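The confidence banding above maps directly to a three-way triage; the return labels are illustrative:

```python
def mapping_action(confidence: int) -> str:
    """Band a taxonomy-mapping confidence score (0-100) per the import rules:
    >= 90 auto-maps, 60-89 requires user confirmation, < 60 stays unresolved."""
    if confidence >= 90:
        return "auto-map"
    if confidence >= 60:
        return "confirm"
    return "unresolved"
```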
Ambiguity Resolution Review
Given unresolved or low‑confidence mappings exist When the user opens the Review screen Then the UI lists each unresolved item with suggested options, definitions/tooltips, and source provenance And the Finalize Import action is disabled until unresolved count equals 0 And selecting a resolution updates the unresolved count in real time And Cancel discards staged changes; Save Draft preserves staged state for the Capsule only
Deduplication and Precedence Rules
Given staged constraints from multiple sources for the same contributor When Deduplicate is executed Then exact duplicates (same normalized media, territory, term, exclusivity, MFN, carve‑outs) are merged into a single entry with aggregated sources And conflicts are highlighted in a diff view by field And the user can set precedence order among sources (default: Profile > Prior Capsules > Upload) And applying precedence resolves conflicts deterministically and updates the Effective Constraints preview And the user can save the precedence template for reuse on future imports
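Deterministic precedence resolution can be sketched as a stable sort by source rank, keeping the first (highest-precedence) value per contributor/field pair. The entry shape is an assumption for illustration; the default order comes from the spec:

```python
# Default source precedence from the spec: Profile > Prior Capsules > Upload.
PRECEDENCE = ["Profile", "Prior Capsules", "Upload"]

def effective_constraints(entries: list, precedence: list = PRECEDENCE) -> dict:
    """Resolve conflicts deterministically: for each (contributor, field),
    keep the value from the highest-precedence source."""
    resolved = {}
    for entry in sorted(entries, key=lambda e: precedence.index(e["source"])):
        resolved.setdefault((entry["contributor"], entry["field"]), entry["value"])
    return resolved
```

Because Python's sort is stable, ties within the same source preserve input order, so repeated runs over the same staged set produce the same Effective Constraints preview.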
Sync to Validation Engine and Staleness Detection
Given the user finalizes the import When synchronization runs Then all effective constraints are persisted to the Capsule and made available to the Scope Builder validation engine within 3 seconds And the system records a syncId and timestamp And if a linked source (e.g., contributor profile) changes after sync, the Capsule shows a Stale badge within 60 seconds and provides a Re‑sync action that opens the import review with a change diff
Permissions and Change History
Given role‑based permissions are configured When a user without Edit Contributor Constraints permission attempts to import, map, or finalize Then the actions are blocked with a 403 message and a request‑access link And users with permission can perform edits and must enter an optional change note And every create/update/delete is logged with actor, timestamp, source, before/after values, and Capsule ID in an immutable audit trail And auditors with View Audit permission can export the change history to CSV for a specified date range
Conflict Resolution Suggestions
"As an artist manager, I want the system to suggest compliant scope options when my proposal conflicts with constraints so that I can quickly find a solution acceptable to all parties."
Description

Generates actionable alternatives when a proposed scope fails validation, offering scope adjustments that would satisfy all contributors. Presents a side-by-side matrix showing which contributors block which elements, suggests permissible term ranges, territory subsets, media exclusions, or exclusivity downgrades, and highlights MFN implications. Allows one-click application of a suggested alternative and re-runs validation immediately. Includes rationale tooltips and estimated impact on affected contributors to streamline negotiations.

Acceptance Criteria
Failed Validation: Contributor Block Matrix Display
Given a proposed scope fails validation across multiple contributors When the user opens the Conflict Resolution panel Then a side-by-side matrix renders within 2 seconds for up to 25 contributors and 6 scope dimensions (media, term, territory, exclusivity, MFN, carve‑outs) And each matrix cell that is blocked shows the blocking contributor constraint and a one-line reason And the matrix provides filters to show only blocking contributors and/or only blocking elements And per-contributor and per-element blocker counts are displayed And exporting the matrix to CSV includes contributor, element, reason, and constraint ID
Suggestion Engine: Cross-Dimension Alternatives Generation
Given a failed validation with at least one conflicted element When the user requests suggestions Then the system generates between 3 and 10 distinct alternatives within 3 seconds, if a feasible solution space exists And each alternative satisfies all contributors’ constraints at time of generation And alternatives span at least two dimensions among: term range adjustments, territory subsets (ISO-3166 regions), media exclusions, and exclusivity downgrade (e.g., Exclusive → Non-Exclusive) And each alternative lists the exact field deltas from the current proposal And alternatives are ranked by lowest cumulative impact score (defined as normalized sum of term reduction %, territory coverage reduction %, media exclusions count weight, and exclusivity penalty)
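The cumulative impact ranking above might look like this sketch. The spec defines the inputs (term/territory reduction, media exclusions, exclusivity penalty) but not the exact weighting, so the weights here are hypothetical:

```python
def impact_score(alt: dict, w_term=1.0, w_terr=1.0, w_media=0.5, w_excl=2.0) -> float:
    """Normalized cumulative impact of one alternative; lower = less disruptive.
    Weights are assumptions, not spec-defined values."""
    return (w_term * alt["term_reduction_pct"] / 100
            + w_terr * alt["territory_reduction_pct"] / 100
            + w_media * alt["media_exclusions"]
            + w_excl * (1.0 if alt["exclusivity_downgrade"] else 0.0))

def rank_alternatives(alternatives: list) -> list:
    """Lowest cumulative impact first, as the spec requires."""
    return sorted(alternatives, key=impact_score)
```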
Apply Suggestion: One-Click Update and Immediate Re-Validation
Given a list of generated alternatives is visible When the user clicks Apply on a specific alternative Then the proposal is updated atomically with the alternative’s deltas And validation re-runs automatically and displays results within 2 seconds And if validation passes, the proposal state shows Valid with a green status and the applied alternative is logged with timestamp and user ID And if validation fails due to concurrent constraint changes, a warning is shown and the matrix highlights the new blockers without persisting the failed alternative
MFN Implications: Disclosure and Constraint Adjustment
Given one or more contributors have MFN clauses relevant to the scope When an alternative would create a more favorable scope for any contributor Then the alternative is flagged with an MFN badge before application And a details panel lists all MFN-affected contributors, the MFN rule triggered, and the exact required adjustments And the estimated MFN cascade impact (number of contributors affected and fields to be equalized) is displayed And applying the alternative also applies the necessary MFN-aligned adjustments or blocks application with a clear explanation if alignment is impossible
Rationale Tooltips and Per-Contributor Impact Estimates
Given suggestions are displayed When the user hovers or taps the info icon on any suggestion Then a tooltip shows the rationale including: conflicted constraints addressed, rules used to derive the alternative, and data sources (constraint IDs) And the suggestion shows per-contributor impact estimates including: term delta (months), territory coverage change (% of regions), media types removed (count), and exclusivity change (boolean) And impact severity is summarized as Low/Medium/High using defined thresholds (e.g., territory reduction <10% = Low, 10–25% = Medium, >25% = High) And tooltips are accessible (keyboard focusable, screen-reader labeled)
No Feasible Alternative: Minimal Conflict Set Explanation
Given contributor constraints are mutually exclusive such that no valid scope exists When the user requests suggestions Then the system returns zero alternatives within 2 seconds And a message explains that no feasible solution exists and lists a minimal conflict set (smallest subset of constraints that cannot be satisfied together) And the message identifies negotiation levers (e.g., specific media, regions, term) tied to the blocking contributors And the user can export the conflict report (PDF/CSV) for negotiation
Rights Summary Generator
"As a project lead, I want an authoritative one-page Rights Summary I can share with stakeholders so that everyone aligns on the agreed scope without misinterpretation."
Description

Produces a concise, one-page Rights Summary inside the Capsule reflecting the selected scope, contributor constraints considered, validation status, and approvals. Renders a clean printable layout with branding, timestamps, version ID, and per-contributor acknowledgments. Supports secure sharing via shortlink, PDF export with watermark, and JSON export for contract systems. Locks the summary to a specific scope version with checksum to ensure tamper-evidence and references linked assets (stems, artwork) by immutable IDs.

Acceptance Criteria
Summary mirrors selected scope choices
Given a Capsule with a saved scope selection including media, term, territory, exclusivity, MFN, and carve-outs as version V When the Rights Summary is generated Then the Summary lists all six scope dimensions with their exact values from version V And the Summary header displays the scope version ID V And every displayed scope value equals the persisted value in the scope data store for version V
Constraint validation results per contributor
Given contributors C1..Cn each with stored usage constraints And a selected scope saved as version V When the Rights Summary is generated Then each contributor row displays a validation status of Pass, Flag, or Block based on evaluating V against that contributor's constraints And flagged or blocked contributors list the conflicting scope dimensions And the overall validation status is Passed if all Pass, Issues Found if any Flag and none Block, or Blocked if any Block
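The overall-status rollup described above (Passed / Issues Found / Blocked) reduces to a simple precedence rule. A minimal sketch, with hypothetical status strings matching the criterion:

```python
def overall_status(statuses: list) -> str:
    """Roll per-contributor validation results up to an overall status:
    any Block wins, then any Flag, otherwise all Pass."""
    if "Block" in statuses:
        return "Blocked"
    if "Flag" in statuses:
        return "Issues Found"
    return "Passed"

assert overall_status(["Pass", "Pass"]) == "Passed"
assert overall_status(["Pass", "Flag"]) == "Issues Found"
assert overall_status(["Flag", "Block"]) == "Blocked"
```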
Per-contributor approvals captured and surfaced
Given contributors are invited to review the Rights Summary for version V When a contributor submits an approval decision Then the Summary records their decision (Approved/Rejected), display name, and an ISO-8601 timestamp And the contributor row reflects the current decision state And the overall Approval status is Approved only when all contributors are Approved; otherwise Pending And an immutable audit log entry is written for each decision change
One-page printable layout with branding and metadata
Given the Rights Summary for version V is opened in print preview with default browser margins on Letter and A4 When printed to PDF Then the output consists of exactly one page without clipping or truncated content And the page includes brand logo, Capsule name, scope version ID, and an ISO-8601 generation timestamp And the shortlink to the Summary is printed on the page And body text is at least 11pt and matches the approved typography spec
Secure shortlink sharing and access control
Given a user creates a share shortlink for the Rights Summary of scope version V When any party accesses the shortlink over HTTPS Then the Summary for version V is displayed read-only And the URL contains an unguessable token with at least 128 bits of entropy And if the owner revokes the shortlink, subsequent access returns 410 Gone
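The "at least 128 bits of entropy" requirement can be met with a CSPRNG token. A minimal sketch (the function name is illustrative, not a TrackCrate API):

```python
import secrets

def make_share_token() -> str:
    """Generate an unguessable shortlink token.

    secrets.token_urlsafe(16) draws 16 random bytes = 128 bits of
    entropy from the OS CSPRNG, the minimum the criteria require.
    """
    return secrets.token_urlsafe(16)

token = make_share_token()
assert len(token) >= 22  # 16 bytes, base64url-encoded without padding
```

Using `secrets` rather than `random` matters here: the token is the only access control on the shortlink, so it must be cryptographically unpredictable.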
Watermarked PDF and schema-valid JSON exports
Given the Rights Summary for scope version V When the user exports as PDF Then the PDF includes a visible watermark containing the Capsule name and scope version ID on all pages And the PDF metadata contains a creation timestamp and document ID matching the Summary When the user exports as JSON Then the JSON conforms to the published RightsSummary schema version S And includes scope data, contributor validation statuses, approvals, linked asset IDs, and the checksum And repeating the same export for version V with unchanged inputs produces byte-identical files
Checksum lock and immutable asset references
Given scope version V and linked assets identified by immutable IDs When the Rights Summary is generated Then a SHA-256 checksum is computed over the normalized summary payload and displayed on the Summary And any change to scope, approvals, or linked asset set produces a different checksum And the Summary references each linked asset by its immutable ID and links resolve to those IDs And if the stored checksum does not match the recomputed value on load, the Summary shows a tamper warning and disables export and sharing actions
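The checksum-over-a-normalized-payload idea above hinges on canonicalization: logically identical payloads must serialize byte-identically before hashing. A minimal sketch, assuming JSON with sorted keys and fixed separators as the normalization:

```python
import hashlib
import json

def summary_checksum(payload: dict) -> str:
    """SHA-256 over a normalized summary payload.

    Normalization here = JSON with sorted keys and no insignificant
    whitespace, so key order and formatting cannot change the hash.
    """
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

a = summary_checksum({"scope": "v3", "assets": ["stem-01"]})
b = summary_checksum({"assets": ["stem-01"], "scope": "v3"})
assert a == b                                              # order-insensitive
assert a != summary_checksum({"scope": "v4", "assets": ["stem-01"]})  # tamper-evident
```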
Scope Versioning & Audit Trail
"As a legal coordinator, I want full version history and diffs for scope changes so that I can audit negotiations and demonstrate compliance."
Description

Tracks every change to scope inputs and validations with immutable version IDs, diffs, who/when metadata, and reason notes. Supports branching for alternative proposals, rollback to prior versions, and comparison views that highlight differences in media, term, territory, exclusivity, MFN, and carve‑outs. Integrates with approvals to prevent edits on locked, approved versions and emits events for activity feeds and compliance reporting.

Acceptance Criteria
Immutable Version Creation on Scope Edit
Given a Capsule’s Scope Builder is open on version Vn and the user has edit permissions When the user changes any scope input (media, term, territory, exclusivity, MFN, carve‑outs) and clicks Save, providing a non‑empty reason note (min 5 characters) Then the system creates a new immutable version V(n+1) with a UUIDv4 ID, sequential index, actor (user ID), and ISO‑8601 UTC timestamp And a field‑level diff from Vn is recorded and stored And the validation snapshot (per‑contributor constraint pass/fail) is captured for V(n+1) And all prior versions remain unchanged and read‑only And the new version appears in GET /capsules/{capsuleId}/scope/versions and in the UI version list within 2 seconds of save
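The field-level diff recorded between Vn and V(n+1) can be computed by comparing the two snapshots key by key. A minimal sketch under the assumption that a scope version is a flat dict of the six inputs:

```python
def field_diff(old: dict, new: dict) -> dict:
    """Field-level diff between two scope versions:
    {field: (previous, current)} for every changed input."""
    return {
        k: (old.get(k), new.get(k))
        for k in set(old) | set(new)
        if old.get(k) != new.get(k)
    }

v1 = {"media": ["streaming"], "term": 24, "territory": "EU"}
v2 = {"media": ["streaming"], "term": 36, "territory": "EU"}
assert field_diff(v1, v2) == {"term": (24, 36)}
```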
Comparison View Highlights Scope Differences
Given two scope versions are selected in the Compare view When the versions differ in any of media, term, territory, exclusivity, MFN, or carve‑outs Then the UI highlights only the changed fields and shows previous vs current values side‑by‑side And unchanged fields appear unhighlighted with identical values And the compare header displays both version IDs, branch names, actors, and timestamps And exporting the comparison to PDF and JSON reproduces the same highlighted differences and metadata
Branching Alternative Proposal from Any Version
Given a user views version Vx with view or edit access When the user creates a new branch with a unique name within the Capsule Then a branch record is created with a unique branchId and parentVersionId = Vx And subsequent saves on this branch create versions labeled with that branchId without altering Vx or its branch And attempting to reuse an existing branch name within the Capsule results in a validation error and no branch is created And GET /capsules/{capsuleId}/scope/branches lists the new branch within 2 seconds
Rollback Creates New Version While Preserving History
Given a user is on branch B with current version Vy and has permission to modify branch B When the user selects Rollback to prior version Vx and provides a reason note (min 5 characters) Then the system creates a new version Vz on branch B that exactly copies the snapshot of Vx (no history is deleted) And validations are re‑evaluated for Vz and stored And an audit trail entry records operation=rollback, from=Vy, to=Vz, origin=Vx, actor, timestamp, and reason And Vz appears in the versions list and can be compared to both Vy and Vx
Approval Lock Prevents Edits but Allows Branching
Given version Vx is marked Approved and Locked by the approvals subsystem When any user attempts to edit scope inputs on Vx via UI or API Then the action is blocked: UI edit controls are disabled and API mutations return 403 with error code scope_version_locked And allowed actions on Vx are limited to view, compare, export, and branch‑from And rollback targeting Vx on the same branch is blocked; branching from Vx to propose changes is permitted
Event Emission for Activity Feed and Compliance
Given any versioning operation completes (create, edit, branch, approve, rollback) When the operation succeeds Then an event is emitted to the Activity Feed and Compliance stream within 5 seconds with an idempotency key = {versionId}:{operation} And the payload includes capsuleId, versionId, branchId, operation, actorId, timestamp, reason (if provided), impactedFields[], and validation status summary And the Activity Feed displays a human‑readable entry referencing the version ID and operation And Compliance API can query events by date range and operation type and returns the emitted event
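The event payload and its `{versionId}:{operation}` idempotency key can be assembled as below. A sketch only; field names follow the criterion, but the builder itself is hypothetical:

```python
import datetime

def build_version_event(capsule_id, version_id, branch_id, operation,
                        actor_id, impacted_fields, reason=None):
    """Build a versioning event whose idempotency key lets downstream
    consumers deduplicate retries of the same operation on the same version."""
    return {
        "idempotencyKey": f"{version_id}:{operation}",
        "capsuleId": capsule_id,
        "versionId": version_id,
        "branchId": branch_id,
        "operation": operation,
        "actorId": actor_id,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "reason": reason,
        "impactedFields": impacted_fields,
    }

evt = build_version_event("cap-1", "v-42", "main", "rollback", "user-7", ["term"])
assert evt["idempotencyKey"] == "v-42:rollback"
```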
Approval Workflow & Notifications
"As a contributor, I want a clear approval workflow with timely notifications so that I can review and sign off on the scope without delays or confusion."
Description

Implements role-based permissions for proposing, reviewing, and approving scopes, with multi-party approval gates and per-contributor consent tracking. Sends in-app and email notifications for review requests, changes, and pending approvals with configurable reminders and deadlines. Displays a consolidated approval status in the Capsule, blocks finalization until required approvals are complete, and records signed-off scope version IDs for the Rights Summary.

Acceptance Criteria
Role-Based Proposal Submission Permissions
Given a user with Proposer permission on the Capsule, When they submit a new scope draft from Scope Builder, Then the system saves the draft, sets status "Pending Review", and records the proposer and timestamp. Given a user without Proposer permission, When they attempt to submit a scope draft, Then the submission is blocked with 403 and a descriptive permission error is shown in-app. Given a user with Reviewer or Approver permission, When they open the draft, Then they can view but cannot edit fields unless they also hold Proposer permission.
Multi-Party Approval Gates Enforcement
Given an approval gate requiring approvals from all listed contributors, When fewer than all required contributors have approved, Then "Finalize Scope" is disabled and an explanatory tooltip lists missing approvals. Given an approval gate configured as sequential (A then B), When A has not approved, Then B cannot approve and sees a waiting status. Given an approval gate configured as parallel, When any contributor approves, Then others can still approve independently without order constraints. Given all required approvals are captured, When the last approval is submitted, Then the gate status becomes "Complete" and finalization is enabled.
Per-Contributor Consent Tracking
Given an approver submits Approve or Decline, When the action is saved, Then the system records contributor ID, decision (Approve/Decline), scope version ID, approver identity, timestamp, and optional comment. Given a contributor has declined, When the proposer edits the scope and creates a new version, Then prior approvals for all contributors are invalidated for the new version and statuses reset to "Pending". Given a contributor has approved a specific version, When a different version is under review, Then the prior approval is displayed as "Approved vX" and not counted toward the new version's completion.
Review Request, Change, and Decision Notifications
Given a scope draft is submitted for review, When recipients have Reviewer or Approver permission, Then each recipient receives an in-app notification and an email with the Capsule link, scope version ID, and stated deadline. Given the scope content changes while under review, When a new version is published, Then all pending approvers receive a change notification including a change summary and the new version ID. Given an approver takes action, When they approve or decline, Then the proposer receives in-app and email notifications reflecting the decision and any comments.
Configurable Reminders and Deadlines
Given a deadline is set for approvals, When the deadline is created, Then reminders are scheduled at configured intervals (e.g., 72h, 24h, 2h) to all pending approvers. Given an approver completes their action, When their decision is saved, Then future reminders for that approver are canceled. Given the deadline passes with pending approvals, When the deadline expires, Then overdue notifications are sent to the proposer and pending approvers, and the Capsule shows the gate as "Overdue".
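Scheduling reminders at configured offsets before a deadline is a small computation; reminders whose fire time has already passed should simply be skipped. A minimal sketch with the example intervals from the criterion:

```python
from datetime import datetime, timedelta, timezone

def schedule_reminders(deadline, intervals_hours=(72, 24, 2), now=None):
    """Return the UTC send times for reminders at the configured offsets
    before the deadline, dropping any that would already be in the past."""
    now = now or datetime.now(timezone.utc)
    times = [deadline - timedelta(hours=h) for h in intervals_hours]
    return sorted(t for t in times if t > now)

deadline = datetime(2025, 6, 10, 12, 0, tzinfo=timezone.utc)
now = datetime(2025, 6, 8, 12, 0, tzinfo=timezone.utc)
# The 72h reminder would fire before "now", so only two remain.
assert len(schedule_reminders(deadline, now=now)) == 2
```

Cancelling reminders once an approver acts (per the criterion) would then amount to deleting that approver's scheduled entries.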
Consolidated Approval Status in Capsule
Given a scope version is under review, When viewing the Capsule, Then a consolidated status panel displays counts of Approved/Pending/Declined, each contributor’s status, and the overall gate status. Given any approval status changes, When the panel is open, Then the panel updates within 5 seconds to reflect the latest state. Given a user lacks permission to view contributor identities, When viewing the panel, Then contributor names are masked while aggregate counts remain visible.
Finalization Block and Rights Summary Version Recording
Given required approvals are incomplete, When a user attempts to finalize the scope, Then finalization is blocked with a message listing missing approvers. Given all required approvals are complete, When the user finalizes, Then the system records the signed-off scope version ID and associates it with the Rights Summary in the Capsule. Given a new scope version is created after finalization, When viewing the Rights Summary, Then the previously recorded version ID remains immutable, and a banner indicates that a newer draft exists.

Signoff Ledger

Collect per‑split approvals with lightweight e‑consent tied to file hashes and watermark IDs. The Ledger timestamps each decision, records signer roles, and captures exceptions, exporting an audit‑ready PDF into the Capsule. You get a defensible chain of title without email archaeology or version confusion.

Requirements

Hash‑Bound E‑Consent Capture
"As a label admin, I want collaborators to e‑consent to specific file hashes so that approvals are tied to the exact assets being released."
Description

Provide a lightweight consent flow that binds each approval to immutable asset fingerprints. The consent page enumerates all assets in a Capsule (stems, mixes, artwork, press) with their SHA‑256 hashes and, where applicable, watermark IDs, so signers explicitly approve the exact binaries being released. Approvals are captured per‑split and per‑asset, persisted to the Signoff Ledger with signer metadata, consent text version, and linkage to the Capsule version. Tokenized, expiring links enable one‑click access on any device; the flow is mobile‑first, accessible, and localizes dates/timezones. Ledger entries lock to the approved file hashes, eliminating version drift and email archaeology, and expose status back to TrackCrate’s Capsule view.

Acceptance Criteria
Tokenized Expiring Consent Link Delivery
Given a Capsule with Signoff Ledger enabled and an intended signer When the requester sends a consent invitation Then the system generates a single-use, tokenized URL bound to the Capsule version ID and signer identity And the token expires 7 calendar days after issuance or immediately upon successful submission, whichever comes first And accessing the URL from any device opens the consent page without requiring account sign-in And accessing an expired or consumed token shows an "expired link" message and prevents consent submission And if the Capsule version changes before submission, the token becomes invalid and directs the signer to request a new link
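The token checks above (single-use, 7-day TTL, bound to a Capsule version) compose into one validation routine. A sketch over an assumed stored token shape; the field names are illustrative:

```python
import time

def validate_consent_token(record, presented_capsule_version, now=None):
    """Validate a single-use consent token.

    record is an assumed stored shape: issued_at (epoch seconds),
    consumed (bool), capsule_version (str). Returns (valid, reason).
    """
    now = now or time.time()
    if record["consumed"]:
        return False, "expired link"  # consumed tokens behave as expired
    if now > record["issued_at"] + 7 * 86400:  # 7-calendar-day TTL
        return False, "expired link"
    if record["capsule_version"] != presented_capsule_version:
        return False, "capsule version changed; request a new link"
    return True, "ok"

rec = {"issued_at": 0, "consumed": False, "capsule_version": "v3"}
assert validate_consent_token(rec, "v3", now=3600) == (True, "ok")
assert validate_consent_token(rec, "v4", now=3600)[0] is False   # version changed
assert validate_consent_token(rec, "v3", now=8 * 86400)[0] is False  # past TTL
```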
Capsule Asset Enumeration With Hashes and Watermark IDs
Given a Capsule version containing stems, mixes, artwork, and press assets with stored checksums When a signer opens the consent page Then 100% of assets in the Capsule version are listed with filename, file size, MIME type, SHA-256 hash, and watermark ID where available And displayed SHA-256 hashes are verified server-side against stored checksums at render time And assets lacking watermark IDs display "N/A" for that field And the list order matches the Capsule asset order and is stable across refreshes And any checksum mismatch blocks submission and displays an error referencing the affected asset(s)
Per-Asset, Per-Split Approval Capture
Given a signer associated to a role and split percentage for the Capsule When they review the asset list Then they must record a decision per asset as Approve or Exception before submission And an "Approve All" action, if used, records an Approve decision for each individual asset explicitly And submission is blocked until all assets have decisions And the ledger stores per record: asset ID, split ID, decision, UTC timestamp, and optional exception note up to 500 characters And partial approvals are accepted and reflected as "Partial" status for the split until all assets are approved And any change to the Capsule version or an asset hash after a draft decision clears unsent decisions and requires re-consent
Signer Metadata and Consent Text Version Recording
Given a signer submits consent When the system records the submission Then the ledger stores signer name, email, claimed role, split percentage, IP address, user agent, locale, timezone, and UTC timestamp And the ledger stores the consent text version ID and checksum used at time of consent And the entry links to the Capsule version ID and the exact set of approved asset hashes and watermark IDs And all user-facing timestamps display localized to the signer’s timezone while UTC is preserved in storage
Ledger Immutability and Hash Locking
Given an approved consent entry exists in the Signoff Ledger When any asset file in the Capsule is replaced or modified Then the existing consent entry becomes read-only and is labeled "Superseded by version <id>" And a new consent request is required; prior consent cannot be edited or migrated And the PDF export includes the original asset list with SHA-256 hashes and watermark IDs plus a cryptographic digest of the export And any attempt to modify a stored consent entry is rejected and logged with actor, UTC timestamp, and reason
Capsule View Signoff Status Exposure
Given a Capsule with one or more invited signers When a user views the Capsule in TrackCrate Then per-split, per-asset status displays as Approved, Pending, or Exception with counts and a rollup (e.g., 8/10 assets approved) And statuses update within 5 seconds of a consent submission And hovering or tapping a status reveals signer name, UTC timestamp, and consent text version And a filter allows showing only assets with outstanding approvals or exceptions
Mobile-First Accessible Consent Flow
Given a signer opens the consent page on a mobile device over typical 3G When the page loads Then first meaningful paint occurs within 2 seconds and total page weight is under 1.0 MB excluding asset files And all controls meet a minimum 44px touch target and are fully keyboard-navigable And the page meets WCAG 2.1 AA for color contrast and visible focus indicators And dates/times are localized to the device locale while UTC is preserved in storage And any preview or download link presented is expiring and watermarked, and is disabled when the token is expired or consumed
Role & Split Mapping
"As a project manager, I want to map each required signer to their role and split so that approvals reflect contractual entitlements and completeness is tracked."
Description

Define and validate required signers by role (e.g., artist, producer, writer, label, photographer) and percentage split, with optional thresholds (e.g., 100% of writers or majority of producers). The system maps each splitter to their contact identity and assigns which assets require their approval. It surfaces completeness gates in the Capsule, blocks release until requisite approvals are met, and supports per‑split exceptions (e.g., artwork not required for writers). Split data synchronizes with existing TrackCrate metadata and shortlinks so that approvals reflect contractual entitlements and downstream export uses consistent roles/splits.

Acceptance Criteria
Threshold enforcement for role groups (100% writers, majority producers)
- Given a Capsule with Writers A:50% and B:50% and a threshold of "100% of writers" for Masters, When only A approves Masters, Then writers coverage for Masters = 50% and requirement status = "Pending". - Given the same Capsule, When A and B approve Masters, Then writers coverage = 100% and requirement status = "Satisfied". - Given Producers P1:40%, P2:30%, P3:30% and a threshold of "Majority of producers" defined as >50% cumulative producer split for Masters, When P1 and P2 approve Masters, Then producer coverage = 70% and requirement status = "Satisfied". - Given a role member with a 0% split, When thresholds are computed, Then the member is not listed as required and does not affect coverage. - Given coverage is rounded to two decimals, When cumulative split >= 99.995%, Then coverage displays as 100.00% and status = "Satisfied".
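The coverage arithmetic above (cumulative approved split per role group, zero-split members excluded, two-decimal rounding) can be sketched as:

```python
def coverage(approvals: set, splits: dict) -> float:
    """Cumulative approved split for a role group, rounded to two decimals.

    Members with a 0% split are excluded from the requirement entirely,
    per the criterion.
    """
    total = sum(pct for member, pct in splits.items()
                if pct > 0 and member in approvals)
    return round(total, 2)

producers = {"P1": 40.0, "P2": 30.0, "P3": 30.0}
cov = coverage({"P1", "P2"}, producers)
assert cov == 70.0
assert cov > 50.0  # "majority of producers" (>50% cumulative) satisfied
```

A "100% of writers" threshold would then compare `coverage(...)` against 100.00 instead of 50.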
Asset-specific approval matrix by role
- Given Capsule assets categorized as Masters, Stems, Artwork, and Press, When role rules are configured as Writers: Masters+Stems; Producers: Masters+Stems; Label: Masters+Artwork+Press; Photographers: Artwork only, Then the Ledger shows required signers per asset category matching these mappings. - Given the above rules, When generating required signers for Artwork, Then Writers and Producers are excluded and Photographers and Label are included. - Given the above rules, When generating required signers for Stems, Then Photographers are excluded and Writers and Producers are included. - Given a new asset of category Press is added, When the asset is saved, Then required signers for Press recalculate immediately and the current coverage is displayed.
Capsule completeness gates and release blocking
- Given Capsule completeness gates are enabled, When any required role-group coverage for any required asset category is below its configured threshold, Then the Capsule status = "Incomplete", Publish/Export actions are disabled, and a blocker banner lists unmet roles and coverage percentages. - Given unmet requirements exist, When a user attempts to create a shortlink or publish an AutoKit page, Then the action is blocked with an error listing the specific missing approvals and a link to open the Ledger. - Given all thresholds are satisfied across required asset categories, When a user attempts Publish/Export/Shortlink actions, Then the actions succeed with no blocker message. - Given new assets are uploaded that introduce new required approvals, When the upload completes, Then the Capsule status flips to "Incomplete" and actions are blocked until coverage is restored.
Contact identity mapping and invite flow for splitters
- Given a splitter without a mapped TrackCrate identity, When the Ledger is opened, Then the splitter is flagged "Unmapped" and cannot be requested to approve until mapped. - Given an email invite is sent to an unmapped splitter, When the invitee accepts and verifies their email, Then the splitter is mapped to that TrackCrate ID and becomes eligible to approve without altering existing coverage calculations for others. - Given an attempt to map two splitters in the same role group to the same TrackCrate identity, When the mapping is saved, Then the system prevents the duplicate and prompts to merge or adjust splits first. - Given an existing verified TrackCrate contact matches by email, When mapping is attempted, Then the system suggests the match and links without sending a new invite.
Real-time sync of roles/splits to metadata and shortlinks
- Given roles/splits are edited in the Ledger, When the changes are saved, Then TrackCrate release metadata, shortlink entitlements, and AutoKit variables reflect the new roles/splits within 60 seconds and a new metadata version is recorded. - Given roles/splits are updated via the external Metadata API, When the Capsule is next opened or a sync is triggered, Then the Ledger reconciles, recomputes required signers, and logs the sync event in activity history. - Given approvals have been captured, When role/split changes alter coverage, Then completeness gates recompute immediately and downstream publish attempts respect the updated status.
Per-split exceptions overriding role defaults
- Given role defaults require Writers for Stems, When an exception excluding Writer B from Stems is added, Then the coverage base for Stems becomes the sum of eligible writer splits excluding B, and coverage status recomputes accordingly. - Given an exception with an expiry date of 2025-12-31, When the current date is 2026-01-01, Then the exception is no longer applied and required signers revert to role defaults. - Given an exception is added or removed, When the change is saved, Then an audit entry is recorded with actor, timestamp, scope (asset categories), and reason, and the change appears in the export PDF and Capsule activity.
Split validation and normalization rules
- Given a role group Writers, When splits entered sum to 100% within a tolerance of ±0.01, Then the system accepts the values and normalizes display to two decimals. - Given splits sum < 99.99% or > 100.01% for a role group that must total 100%, When saving, Then the system blocks save and highlights offending entries with the error "Role splits must total 100%". - Given any split is negative or greater than 100%, When saving, Then the system blocks save with a specific validation error per field. - Given a role group configured as "Open" (no 100% total required), When saving splits, Then any total is permitted and coverage is computed using the configured threshold definition for that group.
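The validation rules above (per-field bounds, 100% total within ±0.01 for closed groups) can be sketched as one validator returning field-level errors:

```python
def validate_splits(splits: dict, must_total_100: bool = True,
                    tolerance: float = 0.01) -> list:
    """Validate a role group's splits; returns a list of error strings
    (empty list = valid). Mirrors the stated rules: each split in [0, 100],
    and closed groups must total 100% within the tolerance."""
    errors = []
    for name, pct in splits.items():
        if pct < 0 or pct > 100:
            errors.append(f"{name}: split must be between 0 and 100")
    total = sum(splits.values())
    if must_total_100 and abs(total - 100.0) > tolerance:
        errors.append("Role splits must total 100%")
    return errors

assert validate_splits({"A": 50.0, "B": 50.005}) == []  # within ±0.01 tolerance
assert validate_splits({"A": 50.0, "B": 49.0}) == ["Role splits must total 100%"]
assert validate_splits({"A": 50.0, "B": 49.0}, must_total_100=False) == []  # "Open" group
```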
Identity & Timestamp Capture
"As a collaborator, I want to verify with a one‑time code and see a timestamped log of my consent so that my approval is legally attributable and auditable."
Description

Attribute each consent to a verified individual and a precise moment in time. Collect signer identity using email verification with one‑time code, optional SSO, and device/IP fingerprinting, recording timezone‑aware timestamps and consent locale. Store consent text version, signer agent (browser/OS), and integrity checks to strengthen evidentiary value. All entries are tamper‑evident within the Ledger and are linked to the Capsule snapshot. Display a permissioned receipt to the signer immediately upon completion.

Acceptance Criteria
Email OTP Identity Verification
Given a signer enters an email address and requests verification When a 6-digit OTP is generated Then the OTP is emailed to the address, stored as a salted hash, single-use, with a 10-minute TTL Given a signer submits the correct OTP within the TTL When verification is processed Then the email is marked verified for this consent session, method saved as "OTP", and a verification timestamp is recorded Given a signer submits incorrect OTP codes 5 times within 60 minutes When a 6th attempt occurs Then the system locks verification for that email for 15 minutes and records a rate-limit event in the Ledger Given an OTP has already been used When a reuse attempt is made Then the attempt is rejected and logged without creating or updating consent
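The OTP properties above (6 digits, salted hash at rest, 10-minute TTL, single-use) can be sketched as follows. This is an illustrative in-memory version; a real service would persist the record and add the rate limiting the criterion describes:

```python
import hashlib
import hmac
import secrets
import time

def issue_otp():
    """Issue a 6-digit OTP; store only a salted hash plus expiry, never the code."""
    code = f"{secrets.randbelow(10**6):06d}"
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + code.encode()).hexdigest()
    record = {"salt": salt, "hash": digest,
              "expires": time.time() + 600,  # 10-minute TTL
              "used": False}                 # single-use
    return code, record

def verify_otp(code, record):
    """Constant-time check; consume the record on success so reuse is rejected."""
    if record["used"] or time.time() > record["expires"]:
        return False
    ok = hmac.compare_digest(
        hashlib.sha256(record["salt"] + code.encode()).hexdigest(),
        record["hash"])
    if ok:
        record["used"] = True
    return ok

code, rec = issue_otp()
assert verify_otp(code, rec) is True
assert verify_otp(code, rec) is False  # single-use: replay is rejected
```

`hmac.compare_digest` avoids leaking information through comparison timing, which matters even for short-lived codes.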
SSO Identity Verification Capture
Given a signer selects an approved SSO provider (Google, Apple, Microsoft) When SSO authentication completes successfully Then the system stores provider, subject_id, verified email (if provided), and method as "SSO:<Provider>", with a verification timestamp Given SSO returns an unverified email When the signer proceeds to consent Then the signer must complete email OTP verification before consent can be recorded Given SSO fails or is canceled When the signer returns to the consent screen Then no partial consent record is created and an auth-failure event is logged
Device and Network Fingerprinting
Given a signer lands on the consent screen When the consent is completed Then the system captures and stores IP address (IPv4/IPv6), full User-Agent string, and a device fingerprint hash derived from non-PII signals, bound to the consent_id Given JS-based signals are unavailable When fingerprinting is attempted Then a minimal_fingerprint is recorded and the record is flagged with fingerprint_confidence="low" Given an audit export is generated When the consent record is included Then the IP, User-Agent, and fingerprint hash are present in the export
Timezone-Aware Timestamp and Locale Recording
Given a signer completes consent When the event is recorded Then the system stores server_time_utc (ISO 8601 with milliseconds), client_timezone (IANA), client_utc_offset_at_event, and consent_locale (BCP 47) Given client and server timezones differ When the receipt is displayed Then both server_time_utc and client local time (derived from timezone/offset) are shown with clear labels Given the audit PDF is generated When the consent entry is rendered Then all timestamp and locale fields are included and match the Ledger entry
Consent Text Versioning and Integrity Hash
Given a consent text is presented to the signer When consent is recorded Then consent_text_version_id, consent_text_sha256, and consent_text_locale are stored with the consent record Given the consent text changes before submission When the signer attempts to submit Then the signer is prompted to re-acknowledge the updated text, and the new version_id and sha256 are stored upon acceptance Given the audit PDF is generated When the consent text is included Then the SHA-256 of the embedded text matches consent_text_sha256
File Hash and Watermark Linkage
Given consent pertains to one or more files and watermarked assets When consent is recorded Then the Ledger stores file_sha256 list, watermark_id list, and the shortlink_id used in the consent flow Given a file is replaced with a new version before consent is submitted When the signer resumes and submits consent Then the file_sha256 and file_version_id bound to the consent session reflect the latest presented files at time of acceptance Given an audit export is requested When the consent record is included Then file hashes and watermark IDs are present for independent verification
Tamper-Evident Ledger Entry and Permissioned Receipt
Given a consent is successfully recorded When the Ledger writes the entry Then the entry includes consent_id, capsule_snapshot_id, previous_hash, and current_hash computed over all persisted fields, and is appended to an immutable, hash-chained log Given any persisted field is altered post-write When an integrity check runs Then the hash chain validation fails and the record is flagged with integrity_status="tamper_detected" Given consent completion When the confirmation step renders Then the signer is shown a permissioned receipt within 2 seconds with consent_id, identity method, timestamps, and evidence summary, accessible via a signed URL for at least 90 days
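The hash-chained log above (each entry's `current_hash` covers its fields plus the prior entry's hash) is what makes post-write edits detectable. A minimal sketch of the chain and its integrity check, assuming flat JSON-serializable fields:

```python
import hashlib
import json

GENESIS = "0" * 64  # previous_hash for the first entry

def append_entry(chain: list, fields: dict) -> dict:
    """Append a tamper-evident entry: hash the entry's fields together
    with the previous entry's hash, forming a verifiable chain."""
    previous_hash = chain[-1]["current_hash"] if chain else GENESIS
    body = json.dumps(fields, sort_keys=True) + previous_hash
    entry = {**fields, "previous_hash": previous_hash,
             "current_hash": hashlib.sha256(body.encode()).hexdigest()}
    chain.append(entry)
    return entry

def verify_chain(chain: list) -> bool:
    """Recompute every link; any altered persisted field breaks validation."""
    prev = GENESIS
    for e in chain:
        fields = {k: v for k, v in e.items()
                  if k not in ("previous_hash", "current_hash")}
        body = json.dumps(fields, sort_keys=True) + prev
        if (e["previous_hash"] != prev or
                e["current_hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev = e["current_hash"]
    return True

ledger = []
append_entry(ledger, {"consent_id": "c-1", "decision": "Approved"})
append_entry(ledger, {"consent_id": "c-2", "decision": "Approved"})
assert verify_chain(ledger) is True
ledger[0]["decision"] = "Rejected"    # tamper with a persisted field
assert verify_chain(ledger) is False  # integrity_status would flip to tamper_detected
```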
Exception Logging & Revision Control
"As a producer, I want to record exceptions and request revisions so that carve‑outs and pending issues are captured without blocking unrelated approvals."
Description

Allow signers to decline, annotate, or carve out specific assets or terms without blocking unrelated approvals. Capture structured exception reasons, free‑form notes, and requested changes. When assets or terms change, auto‑generate a new Ledger revision tied to the updated hashes and require re‑consent only for affected parties. Maintain a clear diff between revisions within the Capsule and preserve full historical trail for auditability.

Acceptance Criteria
Decline Specific Asset Without Blocking Others
Given a signing request contains multiple assets and term sections When a signer declines approval for one or more selected items and submits an exception Then only the selected items are marked Declined with the recorded exception And all unselected items retain their existing approval states (Approved/Pending) And the signoff flow remains actionable for unblocked items And the activity feed records the decline event with signer ID, role, timestamp (UTC), item IDs, and exception code
Capture Structured Exception With Validation
Given a signer opens the exception form for an item When they attempt to submit without selecting at least one reason code from the configured list Then submission is blocked and an inline error "Reason required" is shown Given a signer selects one or more reason codes and adds optional notes and requested changes When notes exceed 2000 characters or any requested change entry exceeds 256 characters or more than 10 requested change entries are provided Then submission is blocked with specific field-level errors When valid data is submitted Then the exception is saved with fields: exception_id, item_id(s), reason_code(s), notes, requested_changes[], signer ID, role, timestamp (UTC) And a content checksum of the exception payload is stored for integrity And the data is visible in the Capsule and included in exports
Carve-Out Specific Stems or Terms Sections
Given an asset bundle contains multiple stems and a terms document with labeled sections When a signer carves out selected stems and/or specific terms sections Then the Ledger marks only those selections as Exception-Pending And all other items remain approvable and can be finalized independently And the Capsule UI clearly badges carved-out items and filters by exception state And exports annotate carved-out items with the associated exception IDs
Auto-Generate Revision on Asset/Term Change and Targeted Re-Consent
Given Ledger revision R0 exists with recorded approvals and exceptions When any asset file or terms content changes resulting in a different content hash or watermark ID Then the system creates revision R1 with a new revision ID and records updated hashes/IDs And produces a machine- and human-readable diff from R0 to R1 And only signers who had approved or interacted with changed items are set to Re-Consent Required for those items And approvals for unchanged items persist without re-consent And the Capsule retains R0 intact and links R0 and R1 via a revision chain
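The change-detection and targeted re-consent logic above can be sketched as follows. This is a minimal illustration, not TrackCrate's implementation; the function names (`changed_items`, `reconsent_required`) and the flat hash-map revision shape are assumptions for the example.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """SHA-256 hex digest used as an item's content hash."""
    return hashlib.sha256(data).hexdigest()

def changed_items(prev: dict, curr: dict) -> set:
    """Item IDs whose hash differs (or that were added/removed) between revisions."""
    ids = prev.keys() | curr.keys()
    return {i for i in ids if prev.get(i) != curr.get(i)}

def reconsent_required(changed: set, interactions: dict) -> dict:
    """Map each signer to the changed items they previously approved or excepted;
    signers who only touched unchanged items need no action."""
    return {
        signer: sorted(changed & set(items))
        for signer, items in interactions.items()
        if changed & set(items)
    }

# Revision R0 -> R1: only stem-2's content changed
r0 = {"stem-1": content_hash(b"mix v1"), "stem-2": content_hash(b"vox v1")}
r1 = {"stem-1": content_hash(b"mix v1"), "stem-2": content_hash(b"vox v2")}
delta = changed_items(r0, r1)
signers = {"alice": ["stem-1", "stem-2"], "bob": ["stem-1"]}
print(reconsent_required(delta, signers))  # -> {'alice': ['stem-2']}
```

Note how bob, who only interacted with the unchanged stem-1, is excluded from the re-consent set, satisfying "approvals for unchanged items persist without re-consent."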
Capsule Displays Human-Readable Diff Between Revisions
Given a user opens the Capsule diff view between revisions Rn and Rn+1 Then the view lists added, removed, and modified assets with before/after filenames, sizes, and content hashes/watermark IDs And terms changes are shown with line- or clause-level highlights and section identifiers And each diff entry links to the underlying approval/exception records And the diff can be exported as PDF and JSON including revision IDs and generation timestamp (UTC)
Audit Trail Preservation and Export
Given any revision history exists for a Ledger When a user exports the audit PDF from the Capsule Then the PDF includes: all revision IDs, event timestamps (UTC), signer identities and roles, consent states, exception details (reason codes, notes, requested changes), file hashes and watermark IDs, IP/country at consent, and the revision diff summary And the PDF includes a document checksum or digital signature for tamper-evidence And the underlying event log is append-only; attempts to alter historical records are blocked and logged as separate events
Targeted Re-Consent for Resolved Exceptions
Given a signer previously declined or carved out specific items in revision R0 with recorded exception IDs When a new revision R1 is created that updates only those specific items Then the signer is required to re-consent only for the affected items And the re-consent view shows the prior exception notes and requested changes side-by-side with the updated content And upon approval, the Ledger links the new approval to the prior exception IDs and marks them Resolved And no action is required for items the signer previously approved and that remain unchanged
Audit‑Ready PDF Export & Capsule Embed
"As label counsel, I want a signed PDF of the Ledger so that I can furnish a defensible chain of title to distributors and partners."
Description

Generate a sealed PDF certificate containing the Ledger summary: Capsule identifier, asset list with hashes and watermark IDs, roles and splits, signer identities, timestamps, exceptions, and completeness status. Apply a TrackCrate digital signature and checksum to the PDF, and embed it in the Capsule for download and external sharing. Include machine‑readable attachments (JSON) for ingestion by distributors, PROs, and legal systems. Ensure exports reflect the exact Ledger state and are reproducible at any time.

Acceptance Criteria
Generate Sealed PDF Certificate from Completed Ledger
Given a Capsule with a completed Signoff Ledger and all required approvals recorded When an authorized user initiates “Export Audit-Ready PDF” Then a single PDF is generated that includes: Capsule identifier, Ledger identifier/version, asset list with file hashes and watermark IDs, roles and splits, signer identities, signatures/approvals, timestamps, exceptions (if any), and overall completeness status And the PDF is digitally signed with the TrackCrate signature and the signature validates against the current TrackCrate public certificate And a SHA-256 checksum of the exact PDF bytes is computed and displayed within the document (metadata and visible summary) And the PDF is saved as read-only so that any post-generation modification invalidates the digital signature And the exported filename follows the pattern: CapsuleID_LedgerID_v{n}_certificate.pdf
Embed Certificate in Capsule with Permissions and External Sharing
Given the PDF certificate has been successfully generated for a Capsule When the export completes Then the certificate is embedded in the Capsule’s Signoff Ledger section and is available for download to users with view access to the Capsule And only users with generate/export permission can create a new certificate; viewers can download but cannot regenerate And an external share link can be created by authorized users; the link provides access to the exact PDF and can be configured with expiration and optional password And downloads via Capsule and via external link return application/pdf with the original filename and byte-for-byte identical content And each download is recorded in the Capsule activity log with user/link identity and timestamp
Include and Validate Machine-Readable JSON Attachments
Given a PDF certificate is generated When the file is finalized Then the PDF contains embedded JSON attachment(s) accessible as standard PDF attachments And at minimum a ledger.json attachment exists containing: capsule_id, ledger_id/version, completeness_status, generated_at (UTC ISO 8601), assets[{id, filename, file_hash_sha256, watermark_id}], roles[{role, split_pct}], signers[{party_id, legal_name, email_or_wallet, signed_at_utc, signature_id}], exceptions[{type, description, affected_entities}], and checksum_sha256 of the PDF And the JSON validates against the defined schema and is well-formed UTF-8 without BOM And the attachment filename(s) follow a stable convention: ledger.json (required) and assets.json (optional, if split out) And a test parser can programmatically extract and parse the attachment(s) from the PDF without manual steps
Deterministic, Reproducible Export for Immutable Ledger State
Given a specific immutable Ledger state at timestamp T When the Audit-Ready PDF is generated multiple times across environments Then each generated PDF is byte-for-byte identical and has the same SHA-256 checksum And timestamps in the document and JSON are normalized to UTC ISO 8601 And lists (assets, signers, roles) are sorted by stable keys (e.g., asset_id ascending, signer identity ascending) to ensure deterministic ordering And regenerating the certificate at any later time for the same Ledger state yields the same checksum And if the Ledger changes after T, a new versioned certificate is produced with an incremented version and a different checksum while previous versions remain accessible
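The byte-for-byte reproducibility requirement above hinges on canonical serialization: sorted object keys, stable list ordering, and normalized timestamps. A minimal sketch (the field names and `canonical_ledger_json` helper are illustrative assumptions):

```python
import hashlib
import json

def canonical_ledger_json(ledger: dict) -> bytes:
    """Serialize with sorted keys and stable list ordering so repeated
    exports of the same ledger state are byte-for-byte identical."""
    doc = dict(ledger)
    doc["assets"] = sorted(ledger["assets"], key=lambda a: a["id"])
    doc["signers"] = sorted(ledger["signers"], key=lambda s: s["party_id"])
    return json.dumps(doc, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

def export_checksum(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

ledger = {
    "ledger_id": "L-42", "generated_at": "2025-01-01T00:00:00Z",
    "assets": [{"id": "a2"}, {"id": "a1"}],
    "signers": [{"party_id": "p9"}, {"party_id": "p1"}],
}
# Same state with shuffled input order must yield an identical checksum
a = export_checksum(canonical_ledger_json(ledger))
ledger["assets"].reverse()
b = export_checksum(canonical_ledger_json(ledger))
assert a == b
```

The same canonicalization would apply before rendering the PDF body, so that regeneration at any later time for the same Ledger state yields the same checksum.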
Accurate Ledger State, Timestamps, and Exceptions in Certificate
Given a Ledger with a mix of approved, pending, and declined signers and one recorded exception When the certificate is generated Then completeness_status is “Complete” only if all required signers have approved; otherwise “Incomplete” (or “With Exceptions” if exceptions exist) And each signer entry includes identity, role (if applicable), and exact signed_at_utc timestamp to second-level precision or better And all asset entries include the exact stored file hash and watermark ID that match the Capsule’s current asset records And roles and splits sum to 100% (±0.01) and any rounding rules are documented in the JSON fields And the exceptions section lists each exception with type and description and references affected assets/signers where applicable
Robust Failure Handling and Audit Logging for Export
Given an export attempt encounters a failure (e.g., signing failure, checksum mismatch, JSON attachment write error, or permission denial) When the system handles the error Then no partial or unsigned certificate is embedded in the Capsule And the user is shown a clear, actionable error message with a retry option and reference code And the attempt is recorded in the Capsule activity log with error category and timestamp, without exposing sensitive keys And any temporary files are securely deleted and no invalid external links are created And a subsequent successful retry produces a valid, signed PDF with correct checksum and attachments
Reminder & Escalation Workflow
"As a release coordinator, I want automated reminders and escalation rules so that signoff completes on time without manual chasing across time zones."
Description

Provide automated, timezone‑aware reminders to pending signers with smart pacing and gentle nudges, including in‑message asset highlights and deadline context. Support escalation rules (e.g., cc manager after 7 days), link expiry/refresh, and a coordinator dashboard to resend or reassign signers where permitted. Update Capsule status in real time and surface blockers, due dates, and predicted completion based on historical response patterns.

Acceptance Criteria
Timezone-Aware Reminder Delivery
Given a pending signer with a known or inferred timezone and defined quiet hours When the system schedules the next reminder Then the send time is between 09:00 and 18:00 in the signer’s local timezone on a business day, adjusted for DST And if the computed time falls outside quiet hours, schedule at the next 09:00 local And no more than one reminder is sent per signer per ledger within any rolling 24-hour window And the coordinator UI displays the scheduled time in the signer’s timezone
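The scheduling rule above (09:00-18:00 local, business days, DST-aware) can be expressed with Python's zoneinfo, which handles DST transitions from the IANA database. A sketch under the stated window; `next_send_time` is a hypothetical name:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

SEND_START, SEND_END = 9, 18  # local send window, per the criteria above

def next_send_time(proposed_utc: datetime, tz_name: str) -> datetime:
    """Shift a proposed UTC send time into the signer's 09:00-18:00 local
    window on a business day (Mon-Fri); DST is handled by zoneinfo."""
    local = proposed_utc.astimezone(ZoneInfo(tz_name))
    if local.hour >= SEND_END:
        # Past today's window: roll to 09:00 tomorrow
        local = (local + timedelta(days=1)).replace(hour=SEND_START, minute=0,
                                                    second=0, microsecond=0)
    elif local.hour < SEND_START:
        # Before today's window: wait until 09:00 today
        local = local.replace(hour=SEND_START, minute=0, second=0, microsecond=0)
    while local.weekday() >= 5:  # Sat=5, Sun=6 -> roll forward to Monday 09:00
        local = (local + timedelta(days=1)).replace(hour=SEND_START, minute=0,
                                                    second=0, microsecond=0)
    return local.astimezone(timezone.utc)
```

The rolling 24-hour per-signer cooldown would be enforced separately, by comparing against the last-sent timestamp before scheduling.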
Smart Pacing and Nudge Cadence
Given a signer has not completed e-consent after the initial invite When the reminder cadence runs Then default reminders are sent on Day 2, Day 5, and Day 9 post-invite, respecting the 24-hour cooldown and quiet hours And if the signer opened the last message but did not act, delay the next reminder by +24 hours And if the signer replies or the thread receives any inbound message, pause reminders until a coordinator resumes them And total pre-escalation reminders do not exceed 3 unless an admin override is configured
In-Message Context & Asset Highlights
Given a pending signoff with attached assets and a due date When composing a reminder message Then the message includes the top 3 assets by recent activity (names and thumbnails), the due date with the signer’s localized timezone, and a relative deadline (e.g., “due in 3 days”) And the message contains the latest secure shortlink embedding watermark ID and file-hash fingerprint And a single primary “Review & Sign” CTA is present, accessible by keyboard and screen readers And all dynamic fields render correctly from the current ledger state within 300ms server-side
Escalation Rule: CC Manager After 7 Days
Given a signer is unresponsive for 7 calendar days since the first invite When the escalation rule is enabled Then an escalation email is sent to the configured manager and CC’d to the coordinator within 1 hour of the 7-day mark And the signer continues to receive weekly reminders unless the coordinator opts out And the ledger records an “Escalated: Manager CC” event with timestamp, recipients, and message IDs And only one automatic manager escalation is sent per signer unless manually re-triggered
Link Expiry and Refresh Handling
Given a reminder contains a shortlink with an access token that expires after 14 days or upon manual revoke When a recipient clicks an expired link Then the system presents a secure refresh flow, verifies identity, and issues a new shortlink within 10 seconds And all prior tokens for that signer/ledger are invalidated immediately upon refresh And expired links return HTTP 410 with noindex headers and human-friendly guidance to request a new link And the ledger audit log records token rotation with previous and new token IDs and timestamps
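The rotate-and-invalidate behavior above can be sketched with an in-memory token store. This is illustrative only (a real system would persist tokens and verify identity before refresh); the class and method names are assumptions:

```python
import secrets
import time

class LinkTokens:
    """Minimal token store: one active token per (signer, ledger);
    issuing a new token rotates out and revokes the previous one."""
    TTL = 14 * 24 * 3600  # 14-day expiry, per the criteria above

    def __init__(self):
        self._active = {}    # (signer, ledger) -> (token, issued_at)
        self._revoked = set()

    def issue(self, signer: str, ledger: str) -> str:
        old = self._active.get((signer, ledger))
        if old:
            self._revoked.add(old[0])  # rotation invalidates prior tokens
        token = secrets.token_urlsafe(32)
        self._active[(signer, ledger)] = (token, time.time())
        return token

    def check(self, signer: str, ledger: str, token: str) -> int:
        """Return an HTTP-style status: 200 valid, 410 expired/revoked."""
        if token in self._revoked:
            return 410
        current = self._active.get((signer, ledger))
        if not current or current[0] != token:
            return 410
        if time.time() - current[1] > self.TTL:
            return 410
        return 200
```

Returning 410 Gone (rather than 404) for expired links signals permanence to crawlers, which pairs with the noindex headers required above.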
Coordinator Dashboard: Resend & Reassign Controls
Given a coordinator with “Manage Signers” permission opens a pending ledger When the coordinator selects Resend for a signer Then the latest reminder is dispatched immediately and an audit entry logs actor, recipient, and message ID When the coordinator selects Reassign and provides a role-equivalent new signer and reason Then the original invite is revoked, the new signer receives an initial invite, and the ledger shows a “Reassigned” event with before/after values and timestamps And visible blockers (e.g., bounced email, missing manager) surface as actionable badges with one-click fixes where available
Real-Time Capsule Status, Blockers, and Prediction
Given the Capsule is open in a coordinator’s browser When ledger events occur (reminder sent, open, click, sign, bounce, escalation, token refresh) Then the Capsule status updates within 5 seconds without page reload And blockers and due dates are visible and accurate to the latest event And a predicted completion date/time with an 80% confidence interval is displayed and recalculated no more than every 15 minutes And backtests on the last 60 days achieve MAPE ≤ 20%; otherwise the prediction badge shows “Low Confidence”

CueSheet AutoFill

Automatically populates cue sheets with composer/publisher credits, PRO affiliations, IPI/CAE, ISWC, and timing. Import scene notes or timecodes, then export broadcaster‑ready formats (ASCAP, BMI, PRS, SOCAN, CSV/PDF). Reduces admin back‑and‑forth and speeds post‑air reporting.

Requirements

Metadata Autofill Engine
"As a music supervisor, I want cue sheets to auto-populate from the tracks’ existing rights metadata so that I don’t have to re-enter credits and can deliver accurate paperwork faster."
Description

Implements automated population of cue sheet fields by extracting rights metadata from TrackCrate assets and project-level defaults. Maps track participants to cues, including composer and publisher names, PRO affiliations, IPI/CAE numbers, ISWC, and writer/publisher split percentages. Supports multiple writers and publishers per cue, canonical name formatting, and per-project or per-broadcaster mapping preferences. Handles real-time refresh and on-demand recalculation when underlying asset metadata changes, while preserving manual overrides with clear precedence rules. Ensures deterministic output, conflict resolution for duplicate entities, and localized character handling for international names.

Acceptance Criteria
Autofill From Asset Metadata and Project Defaults
Given a cue linked to an asset containing composer, publisher, PRO, IPI/CAE, ISWC, and splits When the autofill engine runs Then all corresponding cue fields are populated from the asset metadata with exact values and field-level source=asset Given a cue where any required field is missing on the asset but present in project defaults When the autofill engine runs Then the missing fields are populated from project defaults with field-level source=project-default Given both asset and project default values exist for the same field When the autofill engine runs Then the asset value prevails and the default is ignored Given a cue has unresolved fields across both asset and project defaults When the autofill engine runs Then those fields remain empty with field-level status=unresolved Given 100 cues are processed When the autofill engine runs in batch Then the average processing time per cue is <= 300 ms and p95 <= 800 ms
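The source-precedence rules above (manual override beats asset metadata, which beats project defaults, with unresolved fields left empty) reduce to a small resolver. A sketch with assumed names (`resolve_field`, the `MISSING` sentinel):

```python
MISSING = object()  # sentinel distinguishing "no value" from legitimate None

def resolve_field(manual, asset, project_default):
    """Apply the precedence chain described above: manual override >
    asset metadata > project default; otherwise flag as unresolved."""
    for value, source in ((manual, "manual-override"),
                          (asset, "asset"),
                          (project_default, "project-default")):
        if value is not MISSING:
            return {"value": value, "source": source, "status": "resolved"}
    return {"value": None, "source": None, "status": "unresolved"}

print(resolve_field(MISSING, "BMI", "ASCAP"))
# -> {'value': 'BMI', 'source': 'asset', 'status': 'resolved'}
```

Recording the winning source per field is what makes the `source=asset` / `source=project-default` labels in the criteria auditable.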
Multiple Writers/Publishers with Splits Mapping
Given an asset with multiple writers and publishers with defined split percentages totaling 100.0% When autofill runs Then the cue lists all writers and publishers with their exact splits and the writer and publisher totals each sum to 100.0% (±0.1 rounding tolerance in exports) Given a writer has multiple publishers When autofill runs Then each publisher entry is associated to the correct writer relationship as provided in metadata Given splits expressed with more than two decimals When autofill runs and values are prepared for export Then values are rounded to two decimals and across all entries still sum to 100.0% Given splits total outside 100.0% by more than 0.5% When autofill runs Then the cue is flagged status=split-inconsistency and no normalization is applied
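One common way to satisfy "rounded to two decimals and across all entries still sum to 100.0" is to round each share, then assign the residual to the largest share. A sketch using Decimal to avoid float rounding surprises (the correction rule is one of several valid choices, not necessarily TrackCrate's):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_splits(splits: list) -> list:
    """Round shares to two decimals, then add the residual to the largest
    share so the total is exactly 100.00."""
    rounded = [Decimal(str(s)).quantize(Decimal("0.01"), ROUND_HALF_UP)
               for s in splits]
    residual = Decimal("100.00") - sum(rounded)
    i = max(range(len(rounded)), key=lambda k: rounded[k])
    rounded[i] += residual
    return [float(r) for r in rounded]

print(round_splits([33.333, 33.333, 33.334]))  # -> [33.34, 33.33, 33.33]
```

Per the split-inconsistency rule above, this normalization would only run when the raw total is already within tolerance; totals off by more than 0.5% are flagged, not silently corrected.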
Broadcaster-Specific Field Mapping Preferences
Given a project with an active broadcaster profile (e.g., ASCAP, BMI, PRS, SOCAN) When autofill populates cue fields Then field names, required fields, and ordering conform to the selected profile without altering underlying stored metadata Given per-project mapping overrides are configured When autofill runs Then overrides are applied consistently across all cues in the project Given the broadcaster profile is changed When the user triggers recalc Then cue fields re-map to the new profile consistently and the values remain identical aside from formatting and field placement
Manual Override Precedence and Persistence
Given a user manually edits any populated field on a cue When autofill is triggered by any source Then the manually overridden field value remains unchanged and is labeled source=manual-override with timestamp and user id Given a user clears a manual override for a field When recalc is triggered Then the field repopulates from the current highest-precedence source per rules (asset over project default) Given a user selects Reset to Autofill for a cue When executed Then all fields revert to current engine values and source labels update accordingly Given an overridden field and the underlying metadata changes When real-time refresh occurs Then the override remains intact and a non-blocking notification indicates a newer suggested value is available
Real-Time Refresh and On-Demand Recalculation
Given an asset’s metadata changes When the change is saved Then all linked cues update non-overridden fields within 5 seconds and record a recalculation event Given a user clicks Recalculate on a cue or project When executed Then all non-overridden fields recompute using the current metadata and mapping preferences and produce the same values as a fresh calculation given identical inputs Given multiple assets linked across cues change concurrently When refresh events are processed Then final cue values are identical regardless of processing order
Duplicate Entity Conflict Resolution and Deduplication
Given multiple records refer to the same person or publisher (matching IPI/CAE and PRO) with variant names When autofill runs Then a single canonical party is used in output and duplicate entries are merged with splits combined Given two parties share a name string but have different IPI/CAE or PRO When autofill runs Then they are treated as distinct entities and not merged Given conflicting metadata for the same party (same name but different IPI/CAE) When autofill runs Then the record with a valid IPI/CAE matching the asset’s participant link is used; otherwise the cue is flagged status=identity-conflict for manual resolution Given deduplication is applied When the same input set is recalculated Then the output parties and ordering are identical (deterministic)
Canonical Name and Localization Handling
Given names with diacritics or non-Latin scripts When autofill runs Then stored output preserves the original Unicode name and also generates a canonical ASCII representation per style guide for profiles requiring ASCII-only Given names in varying case and punctuation When autofill runs Then names are formatted as Last, First Middle with single spacing and normalized punctuation according to configured rules Given PRO-specific formatting requirements in the selected broadcaster profile When autofill runs Then names and identifiers are formatted to that profile’s specification without changing the underlying canonical store
Scene Notes and Timecode Import
"As a post coordinator, I want to import scene notes and timecodes so that cue entries are created automatically without manual typing."
Description

Enables importing scene notes and timecodes to generate cue entries. Accepts CSV and plain text with configurable column mapping, plus common EDL formats. Parses multiple timecode formats (HH:MM:SS:FF and HH:MM:SS.mmm) with selectable frame rates, validates ranges, and normalizes to a common internal timeline. Matches notes to assets via filename, ISRC, or TrackCrate asset ID, with interactive fallback matching for ambiguities. Deduplicates overlapping or duplicate entries, preserves original note text, and supports batch uploads with per-file parsing profiles.

Acceptance Criteria
CSV Import With Column Mapping and Profile Save
Given a valid CSV file containing scene notes, start_time, end_time, and asset_identifier columns When the user maps CSV columns to internal fields and saves the mapping as a parsing profile named "Show S1" Then the system imports all rows using the mapping and creates cue entries And the saved parsing profile is available for reuse and auto-selected on subsequent imports of files with the same header signature And required fields (start_time, end_time, asset_identifier) must be mapped before import; otherwise the Import button remains disabled with inline errors And the import summary reports total rows, rows imported, rows skipped with reasons, and time taken
EDL Import With Frame Rate Normalization
Given a CMX 3600 EDL file with event in/out codes at 23.976 fps When the user selects 23.976 fps in the import dialog Then all timecodes are parsed and normalized to the internal millisecond timeline with frame-accurate rounding And each cue entry's start is strictly less than its end and within project duration bounds And if the user selects a mismatched frame rate, the system warns and provides a per-file override before proceeding And drop-frame and non-drop-frame 29.97 fps EDLs are both accepted and normalized when selected accordingly
Mixed Timecode Format Parsing and Validation
Given a text file with timecodes expressed as HH:MM:SS:FF and HH:MM:SS.mmm When the user selects a frame rate of 25 fps Then HH:MM:SS:FF values validate frame component < 25; invalid rows are rejected with line numbers and reasons And HH:MM:SS.mmm values validate milliseconds in [0,999]; invalid rows are rejected with line numbers and reasons And all accepted timecodes are converted to a single internal timeline and displayed consistently as HH:MM:SS.mmm in the preview And overlapping rows with identical identifiers are flagged for deduplication review prior to final import
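Parsing both timecode shapes into one millisecond timeline, with the range validation described above, can be sketched as follows (`parse_timecode_ms` is an assumed name; drop-frame handling is omitted for brevity):

```python
import re

def parse_timecode_ms(tc: str, fps: float) -> int:
    """Parse HH:MM:SS:FF (frame-based) or HH:MM:SS.mmm (millisecond-based)
    to milliseconds; raises ValueError for out-of-range components."""
    m = re.fullmatch(r"(\d{2}):(\d{2}):(\d{2})[:.](\d{2,3})", tc)
    if not m:
        raise ValueError(f"unrecognized timecode: {tc!r}")
    h, mi, s = int(m[1]), int(m[2]), int(m[3])
    base_ms = (h * 3600 + mi * 60 + s) * 1000
    if "." in tc:                       # HH:MM:SS.mmm form
        return base_ms + int(m[4].ljust(3, "0"))
    frames = int(m[4])                  # HH:MM:SS:FF form
    if frames >= fps:
        raise ValueError(f"frame {frames} invalid at {fps} fps")
    return base_ms + round(frames * 1000 / fps)
```

For example, at 25 fps both "00:00:01:12" and "00:00:01.480" normalize to 1480 ms, while "00:00:01:25" is rejected because the frame component must be below the frame rate.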
Asset Matching With Fallback Resolution
Given imported notes reference assets by filename, ISRC, or TrackCrate asset ID When automatic matching runs Then exact asset ID matches are linked with 100% confidence And filename and ISRC matches are linked when unique; conflicts (multiple candidates) or no matches open an interactive resolver listing candidates with confidence scores And the user can approve, change, or skip each ambiguous match; decisions are applied to all rows with the same identifier in the batch And all unresolved items are skipped with reasons, and the summary lists counts by match method
Deduplication of Duplicate and Overlapping Cues
Given imported cue entries for the same asset When two or more entries are exact duplicates (same start, end, and note text) Then only one entry is created and duplicates are reported in the summary with source row indices And when entries overlap by any amount and share the same asset and note text, they are merged into a single [min(start), max(end)] cue And when entries overlap but have different note text, both are kept and flagged for manual review And deduplication operates deterministically so repeated imports of the same file do not create additional cues
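The merge-or-flag rules above amount to interval merging keyed on (asset, note text). A simplified sketch — assumed function name, assumed cue shape with `id`/`asset`/`start`/`end`/`note` keys:

```python
def dedupe_cues(cues: list) -> tuple:
    """Merge overlapping entries sharing asset and note text into
    [min(start), max(end)]; exact duplicates collapse into one entry.
    Overlaps with differing note text keep both cues and flag the pair."""
    merged, flagged = [], []
    for cue in sorted(cues, key=lambda c: (c["asset"], c["start"], c["end"])):
        target = None
        for prev in merged:
            overlap = (prev["asset"] == cue["asset"]
                       and cue["start"] <= prev["end"]
                       and cue["end"] >= prev["start"])
            if overlap and prev["note"] == cue["note"]:
                target = prev
                break
            if overlap:
                flagged.append((prev["id"], cue["id"]))  # keep both for review
        if target is not None:
            target["start"] = min(target["start"], cue["start"])
            target["end"] = max(target["end"], cue["end"])
        else:
            merged.append(dict(cue))
    return merged, flagged
```

Sorting the input before processing is what makes the result deterministic, so repeated imports of the same file cannot create additional cues.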
Batch Upload With Per-File Profiles and Reporting
Given a batch upload containing multiple files of different types (CSV, TXT, EDL) When the user assigns or auto-applies a parsing profile per file and starts import Then each file is parsed using its assigned profile without affecting other files And the system processes files in parallel up to a configurable concurrency limit and shows per-file progress And the final report includes per-file metrics (rows parsed, imported, skipped, warnings, duration) and a downloadable CSV of all errors And a partial failure in one file does not block successful files from completing
Preservation of Original Note Text
Given imported rows contain free-form note text including Unicode characters, quotes, and line breaks When the system creates cue entries Then the original note text is stored verbatim and is retrievable via API and UI detail views And any normalization for display is non-destructive and a stored raw_text field preserves the exact imported value And the export pipeline uses the preserved text unless a user-specified transform is applied at export time And the preview view renders special characters correctly and flags lines exceeding a defined length threshold without truncation
Cue Timing Calculator and Alignment
"As a music editor, I want cue timings to be calculated and normalized so that the cue sheet accurately reflects on-air use without manual math."
Description

Calculates start time, end time, and duration for each cue and aligns them with referenced assets. Supports overlaps, reprises, stingers, and partial uses. Applies configurable fade-in/fade-out offsets, rounds durations per broadcaster requirements, and flags zero or negative durations. Integrates optional waveform markers or in/out points from TrackCrate to refine timings and reconcile discrepancies against imported notes. Produces a normalized, conflict-free timeline ready for validation and export.

Acceptance Criteria
Compute Start, End, and Duration From Imported Timecodes
Given a cue with start_timecode and end_timecode in HH:MM:SS.mmm When the timing calculator runs Then duration_ms equals end_timecode minus start_timecode in milliseconds And end_timecode is greater than or equal to start_timecode And cues with end_timecode earlier than start_timecode are flagged as "negative duration" And cues with start_timecode equal to end_timecode are flagged as "zero duration" And if only start_timecode and duration_ms are provided, end_timecode is computed as start_timecode plus duration_ms
Apply Configurable Fade-In and Fade-Out Offsets
Given fade_in_ms and fade_out_ms are configured And a cue has raw_start_ms and raw_end_ms When offsets are applied Then effective_start_ms equals max(0, raw_start_ms + fade_in_ms) And effective_end_ms equals max(effective_start_ms, raw_end_ms - fade_out_ms) And effective_duration_ms equals effective_end_ms minus effective_start_ms And if effective_duration_ms is less than or equal to 0 the cue is flagged "invalid after offsets"
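The offset formulas above translate directly to code. A sketch (`effective_timing` is an assumed name; the return shape is illustrative):

```python
def effective_timing(raw_start_ms: int, raw_end_ms: int,
                     fade_in_ms: int = 0, fade_out_ms: int = 0) -> dict:
    """Apply fade offsets exactly as specified above and flag cues
    that become zero- or negative-length after trimming."""
    start = max(0, raw_start_ms + fade_in_ms)
    end = max(start, raw_end_ms - fade_out_ms)  # clamp so end >= start
    duration = end - start
    return {"start_ms": start, "end_ms": end, "duration_ms": duration,
            "flag": "invalid after offsets" if duration <= 0 else None}

print(effective_timing(1000, 5000, fade_in_ms=250, fade_out_ms=500))
# effective span 1250-4500 ms, duration 3250 ms, no flag
```

Because the end time is clamped to the start, an over-aggressive fade-out cannot produce a negative duration; it collapses to zero and raises the "invalid after offsets" flag instead.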
Broadcaster Profile Duration Rounding
Given a selected broadcaster profile with a duration_rounding rule (nearest, up, down, or frame-accurate at a specified frame rate) And a cue with effective_start_ms, effective_end_ms, and raw_duration_ms When preparing values for export Then exported_duration is rounded according to the profile and raw_duration_ms remains unchanged And if the profile requires rounded endpoints, exported_end equals exported_start plus exported_duration And rounding is applied after offsets and base duration calculation And rounding never produces exported_end earlier than exported_start And profile minimum duration constraints are enforced (e.g., not rounded below the profile's minimum)
Support Overlaps, Reprises, Stingers, and Partial Uses
Given multiple cues that may overlap and may reference the same asset When the timeline is calculated Then overlapping cues are preserved without error and each cue retains a unique identifier And cues shorter than or equal to a configured stinger_threshold_ms may be marked as stingers without validation failure And cues referencing a subsection of an asset are stored with asset-relative in_ms and out_ms corresponding to the use And reprises of the same asset/theme in distinct time ranges are treated as distinct cues and not merged
Reconcile Waveform Markers With Imported Notes
Given TrackCrate asset markers with in_ms and out_ms and imported scene notes with timecodes for the same cue And a discrepancy_threshold_ms is configured (default 250ms) When the absolute difference between notes and nearest marker exceeds discrepancy_threshold_ms Then the cue is flagged "requires reconciliation" and displays both values And when "Snap to marker" is chosen, cue timings update to marker values and the flag clears And when "Keep notes" is chosen, cue timings remain as noted and the flag clears And the reconciliation decision and delta_ms are logged for audit
Validation Blocks Export on Zero or Negative Durations
Given one or more cues have calculated or rounded durations less than or equal to 0 When an export is initiated Then the export is blocked and a validation report lists each offending cue with reason "zero or negative duration" And once all offending cues are corrected, the export proceeds without this validation error
Normalized, Conflict-Free Timeline for Export
Given a set of calculated cues aligned to valid TrackCrate asset IDs When the normalization step runs Then all cues are sorted by start time and assigned stable, unique cue IDs And no duplicate cue IDs exist and no overlapping intervals share the same cue ID And no cue end time exceeds program_end_ms when program_end_ms is provided And the final validation summary reports zero unresolved discrepancies and zero invalid durations And the export preview renders successfully with all cues in chronological order
Validation and Compliance Rules
"As a label operations manager, I want built-in compliance checks so that exported cue sheets are accepted by PROs and broadcasters the first time."
Description

Runs automated validations to ensure cue sheets meet PRO and broadcaster requirements before export. Checks mandatory fields by template (ASCAP, BMI, PRS, SOCAN), verifies ISWC and IPI/CAE formats, ensures writer and publisher splits total 100%, confirms PRO affiliations are present and valid, and validates timing consistency. Provides actionable error messages, inline fixes, and warnings for non-blocking issues. Blocks export on critical failures and logs validation results for audit purposes.

Acceptance Criteria
Template-Specific Mandatory Fields Validation Blocks Export
Given a user selects an export template (ASCAP, BMI, PRS, or SOCAN) When Validate is run Then the system evaluates only the mandatory fields configured for that template Given one or more mandatory fields for the selected template are empty or invalid When Validate is run Then each field is flagged with an inline error and listed in the validation summary, and the Export action is disabled Given all mandatory fields for the selected template are present and valid When Validate is run Then no missing-field errors are returned and the Export action is enabled Given the user switches the export template When Validate is run Then the mandatory field set updates to the new template and validation re-runs against it
Writer and Publisher Splits Must Equal 100%
Given a cue with one or more writers When Validate is run Then the sum of writer share percentages must equal 100.00% ± 0.01; otherwise a blocking error is raised on the cue and export is disabled Given a cue with one or more publishers When Validate is run Then the sum of publisher share percentages must equal 100.00% ± 0.01; otherwise a blocking error is raised and export is disabled Given writer/publisher splits contain more than two decimal places When the user clicks Normalize Splits Then the system rounds to two decimals and adjusts the largest share to total 100.00% and re-validates the cue Given a cue is marked Writer-only (no publishers) When Validate is run Then only the writer split total check is enforced
ISWC and IPI/CAE Identifier Format Enforcement
Given a contributor record includes an IPI/CAE value When Validate is run Then the value must be numeric and 9–11 digits; otherwise a blocking error is shown inline and export is disabled for templates that require IPI/CAE Given a work includes an ISWC value When Validate is run Then it must match T-XXX.XXX.XXX-X or TXXXXXXXXXX with a valid check character; otherwise a warning is shown with a suggested corrected format if determinable Given the selected template requires IPI/CAE or ISWC and the value is missing When Validate is run Then a blocking error is raised; if not required by the template, a non-blocking warning is raised Given the user clicks Auto-format IDs When possible Then the system inserts/removes separators to match canonical formats without altering digits and re-validates
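The identifier rules above can be checked with simple patterns. A sketch that validates format only — verifying the ISWC check character (ISO 15707) is deliberately out of scope here, and the function names are assumptions:

```python
import re

ISWC_RE = re.compile(r"T-\d{3}\.\d{3}\.\d{3}-\d|T\d{10}")
IPI_RE = re.compile(r"\d{9,11}")

def validate_iswc(value: str) -> bool:
    """Format check only: T-XXX.XXX.XXX-X or TXXXXXXXXXX.
    Check-character verification is omitted in this sketch."""
    return ISWC_RE.fullmatch(value) is not None

def validate_ipi(value: str) -> bool:
    """IPI/CAE: numeric, 9-11 digits, per the rule above."""
    return IPI_RE.fullmatch(value) is not None

def autoformat_iswc(value: str) -> str:
    """Insert canonical separators without altering digits (Auto-format IDs)."""
    digits = re.sub(r"\D", "", value)
    if len(digits) != 10:
        raise ValueError("ISWC needs 10 digits after the T prefix")
    return f"T-{digits[0:3]}.{digits[3:6]}.{digits[6:9]}-{digits[9]}"
```

`autoformat_iswc` implements the "insert/remove separators without altering digits" behavior: "T1234567893" becomes "T-123.456.789-3", after which validation is re-run.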
PRO Affiliation Presence and Validity per Contributor
Given each writer and publisher listed on a cue When Validate is run Then a PRO affiliation must be selected from the supported canonical list; otherwise a blocking error is raised for templates that require PRO Given a contributor is unaffiliated When the user selects Unaffiliated/None Then Validate passes with a non-blocking warning and exports include the unaffiliated designation Given a non-canonical PRO alias is entered (e.g., PRS for Music) When Validate is run Then the system normalizes it to the canonical name or presents an inline error if no match can be made
Cue Timing Consistency and Overlap Rules
Given a cue has Start and End timecodes When Validate is run Then End must be greater than Start and Duration must equal End minus Start within 10 ms; otherwise a blocking error is raised with an Auto-calc Duration fix available Given a program length is set When Validate is run Then no cue End may exceed the program length; violations raise blocking errors Given multiple cues exist When Validate is run Then overlapping cues are flagged as non-blocking warnings and list overlapping cue IDs/time ranges Given imported timecodes include a frame rate When Validate is run Then all cues are checked against the project frame rate; mismatches are warned and conversions are applied consistently
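The timing rules above can be sketched as one pass over the cue list. This Python illustration uses a hypothetical list-of-dicts cue shape (times in milliseconds), not TrackCrate's actual model:

```python
def validate_cue_timing(cues, program_len_ms, tol_ms=10):
    """Sketch of the rules above: End > Start, Duration within 10 ms of
    End - Start, no cue past program length (blocking errors), and
    overlapping cues reported as non-blocking warnings."""
    errors, warnings = [], []
    for c in cues:
        if c["end"] <= c["start"]:
            errors.append((c["id"], "End must be greater than Start"))
        elif abs(c["duration"] - (c["end"] - c["start"])) > tol_ms:
            errors.append((c["id"], "Duration does not equal End minus Start"))
        if c["end"] > program_len_ms:
            errors.append((c["id"], "Cue extends past program length"))
    # Overlap check: any cue starting before the previous one ends.
    ordered = sorted(cues, key=lambda c: c["start"])
    for a, b in zip(ordered, ordered[1:]):
        if b["start"] < a["end"]:
            warnings.append((a["id"], b["id"]))
    return errors, warnings
```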
Actionable Inline Fixes for Validation Failures
Given validation errors exist When the user opens the validation panel Then each error displays a concise message, impacted field(s), and a one-click inline fix or a deep link to the exact field Given the user applies an inline fix (e.g., Normalize Splits, Auto-format IDs, Auto-calc Duration) When the fix completes Then the system updates the data, re-runs affected validations incrementally, and updates the status without a full page refresh Given a validation has no safe automated fix When displayed Then the error includes prescriptive next steps and a stable rule reference code
Validation Logging, Severity Classification, and Export Gating
Given a user runs Validate When the run completes Then the system writes an immutable validation log with timestamp, user ID, cue sheet ID/version, selected template, and rule outcomes {pass|warn|fail} with messages Given a user attempts Export and any fail-severity validations exist When the export is initiated Then export is blocked, the summary shows counts of fails and warnings, and the blocked attempt is recorded in the log Given only warnings remain When the user exports Then export proceeds and the associated log records the warning list for audit Given an admin requests validation logs via UI or API When a date range and filters (template, outcome) are provided Then logs for at least the last 24 months are retrievable with exact entries and associated export events
Broadcaster-Ready Export Generator
"As a rights administrator, I want to export broadcaster-ready cue sheets in the correct formats so that I can submit them without reformatting."
Description

Generates export files in the required society formats (ASCAP, BMI, PRS, SOCAN), plus generic CSV and printer-friendly PDF. Implements per-format field mapping, headers, encoding, line endings, and rounding rules. Supports batch exports, file naming conventions with project identifiers and air dates, and optional PDF watermarking. Allows template preview, sample exports with dummy data, and regeneration from any historical version with consistent checksums for traceability.

Acceptance Criteria
Per-Format Compliance: ASCAP/BMI/PRS/SOCAN Exports
Given a completed cue sheet with required credits, affiliations, identifiers, and timings When the user exports to a single selected format (ASCAP or BMI or PRS or SOCAN) Then the system generates a file that matches the selected format’s template version and field mapping exactly And the file contains the required headers in the defined order with no extra columns And all field values are transformed and mapped per the template rules (including codes and value formats) And text encoding and line endings match the template configuration And timing fields are rounded per the template precision using round-half-up And any missing required field blocks export with a descriptive error listing each missing field And on success the download link and the template version used are shown and recorded in history
Generic CSV and Printer-Friendly PDF Outputs
Given a completed cue sheet When the user exports Generic CSV Then the CSV is UTF-8 with BOM, uses CRLF line endings, applies RFC 4180 quoting, and includes mapped columns in defined order And non-ASCII characters are preserved and time/numeric fields use the system default formatting rules When the user exports Printer-Friendly PDF Then the PDF reflects the same records and ordering as the CSV, uses the configured page size and margins, and embeds fonts for special characters And the PDF page count, headers, and totals align with the data set and are readable at 100% zoom
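The Generic CSV requirements above (UTF-8 with BOM, CRLF line endings, RFC 4180 quoting) map directly onto Python's standard `csv` module — a minimal sketch, with the function name as an assumption:

```python
import csv
import io

def write_generic_csv(header, rows):
    """Emit CSV bytes per the rules above: UTF-8 with BOM, CRLF line
    endings, RFC 4180 quoting."""
    buf = io.StringIO()
    # QUOTE_MINIMAL quotes only fields containing delimiters, quotes,
    # or line breaks, which matches RFC 4180.
    writer = csv.writer(buf, lineterminator="\r\n", quoting=csv.QUOTE_MINIMAL)
    writer.writerow(header)
    writer.writerows(rows)
    # The 'utf-8-sig' codec prepends the byte-order mark on encode.
    return buf.getvalue().encode("utf-8-sig")
```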
Batch Export Across Projects and Formats
Given the user selects multiple cue sheets across projects and multiple export formats When the user starts a batch export Then a background job is created and a progress indicator shows percentage complete and per-item status And the system produces a single ZIP containing all requested files organized by project and format subfolders plus a manifest.csv And partial failures are reported per item without aborting the batch And the batch can be retried to regenerate failed items only without duplicating successful ones And upon completion the ZIP is available via immediate download and emailed link with expiration per policy
File Naming Conventions with Project and Air Date
Given project identifier, cue sheet name, air date, format code, and version metadata are available When an export file is generated Then its filename follows {ProjectID}_{AirDateYYYYMMDD}_{FormatCode}_{CueSheetName}_{vN}.{ext} And characters are ASCII-safe with spaces replaced by underscores and filename length <= 128 And AirDate is taken from cue sheet metadata or 'TBD' if absent And filenames are unique within a batch; on conflict an incrementing suffix (_2, _3, ...) is appended And all filenames appear exactly as generated in the batch manifest
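The naming rules above can be sketched as a small builder. This Python illustration is simplified (e.g. the 128-character cap is applied naively, which could in principle clip the extension) and the function signature is an assumption:

```python
import re

def export_filename(project_id, air_date, format_code, cue_name, version,
                    ext, taken=()):
    """Builds {ProjectID}_{AirDateYYYYMMDD}_{FormatCode}_{CueSheetName}_{vN}.{ext}
    per the rules above: ASCII-safe names, spaces replaced by underscores,
    'TBD' when the air date is absent, and an incrementing suffix on
    conflict with names in `taken`."""
    def safe(s):
        return re.sub(r"[^A-Za-z0-9_\-]", "", s.replace(" ", "_"))
    stem = "_".join([safe(project_id), air_date or "TBD", format_code,
                     safe(cue_name), f"v{version}"])
    name, n = f"{stem}.{ext}"[:128], 2
    while name in taken:
        name, n = f"{stem}_{n}.{ext}"[:128], n + 1
    return name
```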
Optional Watermarking for PDFs
Given watermarking is enabled for PDF exports When a PDF is generated Then each page contains a diagonal background watermark "CONFIDENTIAL • {ProjectID} • {UTC ISO8601}" at 20–30% opacity And the watermark appears on all pages behind content layers and does not obscure tabular data And when watermarking is disabled, no watermark artifacts are present And the export record logs whether watermarking was applied
Template Preview and Sample Export with Dummy Data
Given a user opens Template Preview for a selected format When the preview loads Then it displays header row, field order, required vs optional fields, encoding, line ending style, and rounding precision And the user can download a sample export populated only with generated dummy data clearly marked SAMPLE And generating a sample does not create an export history entry and does not modify any project data And the sample file validates against the same template rules as real exports
Deterministic Regeneration with Checksums and Audit Trail
Given an export history entry is selected for regeneration and the same template and cue sheet versions are available When the user regenerates the export Then the resulting file matches byte-for-byte and the SHA-256 checksum equals the original And if any input (template version or cue sheet data) differs, the checksum changes and the differences are recorded And if a required historical template or cue sheet version is missing, regeneration is blocked with a descriptive error And the export record stores checksum, template version, cue sheet version, user, timestamp, and environment and exposes a copyable checksum in the UI
Versioning, Audit Trail, and Post-Air Lock
"As a producer, I want a versioned record of cue sheet changes with the ability to lock after air so that compliance and audits are straightforward."
Description

Maintains version history for each cue sheet with timestamped changes, field-level diffs of edits, and attribution to the user and source action (autofill, import, manual edit). Supports comments and revision notes, and generates immutable, checksummed export artifacts per version. Provides a post-air locking mechanism to freeze a cue sheet, with controlled unlock requiring a reason and retaining full auditability.

Acceptance Criteria
Version Snapshot on Any Edit Source
Given a cue sheet exists with version number V When AutoFill updates one or more fields and the user saves Then a new version V+1 is created within 2 seconds of save confirmation And the version record includes ISO 8601 UTC timestamp, editor userId, and sourceAction = "AutoFill" And the stored snapshot includes the full cue sheet payload and per-field changes vs. V Given a cue sheet exists with version number V When fields are updated by Import and the user confirms save Then a new version V+1 is created And the version record includes ISO 8601 UTC timestamp, editor userId, sourceAction = "Import", and import source identifier (filename or job id) Given a cue sheet exists with version number V When a user manually edits any field and saves Then a new version V+1 is created with sourceAction = "Manual" Rule: Version numbers increment sequentially starting at 1 with no gaps for successful saves Rule: If validation fails or the user cancels, no new version is created
Field-Level Diff Accuracy and Attribution
Given two consecutive versions V and V+1 of the same cue sheet When viewing the Diff for V → V+1 Then each changed field appears exactly once with fieldKey, previousValue, and newValue And unchanged fields are omitted from the Diff And array/list changes are annotated as added/removed/updated with index or path And the Diff header displays editor userId, sourceAction, and timestamp for V+1 And rendering completes in under 500 ms for up to 500 changed fields Given a version is compared to itself When viewing the Diff for V → V Then the Diff reports 0 changed fields
Comments and Revision Notes Persistence
Given a user is saving a new version When the user optionally enters a revision note up to 500 characters Then the note is stored immutably with that version and displayed in version history And the note cannot be edited after save Given a version exists When a user adds a comment on that version Then the comment is stored with author userId, ISO 8601 UTC timestamp, and message body And adding, editing, and deleting comments do not create a new version And comment edits and deletions are logged in the audit trail with before/after content and actor Given a cue sheet is in a Locked state When users attempt to add comments on existing versions Then commenting is allowed and fully audited And revision notes can only be added when creating a new version (i.e., not while locked)
Immutable, Checksummed Export Artifacts
Given a specific version V of a cue sheet When exporting to ASCAP, BMI, PRS, SOCAN, CSV, or PDF Then the artifact content exactly reflects the snapshot of V and excludes any unsaved changes And a SHA-256 checksum is computed and stored with artifactId, format, createdAt (ISO 8601 UTC), and createdBy userId And subsequent exports of the same format for version V return a byte-for-byte identical file with the same checksum And attempts to overwrite or delete an existing artifact are blocked and logged Given an export artifact exists When downloaded by any authorized user Then the system verifies stored checksum against the artifact bytes before serving And a download event with userId and timestamp is appended to the audit trail
Post-Air Lock Enforcement and Controlled Unlock
Given an editable cue sheet When a user initiates Post-Air Lock and provides a reason between 10 and 500 characters Then the cue sheet state changes to Locked with locker userId, timestamp, and reason recorded And all create/edit/delete operations on cue sheet fields, imports, AutoFill, and manual saves are blocked while Locked And viewing versions, viewing diffs, exporting artifacts, and adding comments remain allowed And prohibited write attempts return a 423 Locked response and are logged with actor and attempted action Given a cue sheet is Locked When a user initiates Unlock and provides a reason between 10 and 500 characters Then the cue sheet state changes to Unlocked with unlocker userId, timestamp, and reason recorded And the next successful save creates a new version V+1; the Locked snapshot remains unchanged And the lock and unlock events appear in the audit trail and in version history context
Audit Trail Querying and Export
Given a cue sheet with versions, exports, comments, and lock/unlock events When requesting the audit trail with filters (date range, userId, actionType, versionNumber, sourceAction, fieldKey) Then the response contains only entries matching the filters, sorted by timestamp descending by default And each entry includes timestamp (ISO 8601 UTC), actor userId, actionType, target (field/artifact/lock), sourceAction if applicable, versionNumber if applicable, and reason if provided And the query returns within 800 ms for up to 5,000 entries Given the audit trail results are visible When the user exports the audit log to CSV Then the CSV contains the same rows and columns as the on-screen results And the export action itself is appended to the audit trail with actor and timestamp
Roles, Permissions, and Approval Workflow
"As a small label team member, I want controlled editing and an approval flow so that the cue sheet is vetted by the right people before submission."
Description

Introduces role-based access control for cue sheet creation, editing, validation, and export. Allows assigning reviewers and approvers, requesting review, and capturing approvals with timestamps. Restricts editing of sensitive fields (e.g., splits, PRO affiliations) to authorized roles, while providing view-only links for external stakeholders. Sends notifications for review requests, validation failures, and completed exports to keep cross-timezone teams aligned.

Acceptance Criteria
Cue Sheet Role Assignment and Governance
Given I am a Project Admin on a cue sheet When I assign roles to users (Editor, Reviewer, Approver, Viewer, External Viewer) Then the assignments persist, appear in the Access panel within 1s of save, and an audit log entry is created with admin id and UTC timestamp Given I am not a Project Admin When I attempt to change user roles on the cue sheet Then the UI controls are disabled and any API call returns 403 with errorCode=RBAC_FORBIDDEN and no changes are saved Given at least one Reviewer and one Approver are required to start review When the set is incomplete Then the Request Review control is disabled and the API returns 400 with message "Reviewer and Approver required"
Sensitive Field Edit Restrictions
Given a user without Edit Sensitive Fields permission When they attempt to modify splits, PRO affiliations, IPI/CAE, or ISWC fields Then inputs are read-only and any PATCH is rejected with 403 RBAC_FORBIDDEN and no data change Given a user with Edit Sensitive Fields permission When they modify those fields Then the save succeeds, the cue sheet version increments by 1, and an audit entry records before/after values, editor id, and UTC timestamp Given a cue sheet is Approved When any user attempts to edit any sensitive field Then 409 STATE_LOCKED is returned and the field remains unchanged until an Admin Reopens the sheet with a recorded reason
Review Request and Notifications
Given a Creator or Editor with permission selects assigned Reviewers and Approvers When they click Request Review and confirm Then status changes to In Review, reviewRequestedAt (UTC) and requester id are recorded Then notifications are sent to all assigned Reviewers within 60 seconds via in‑app and email, containing cue sheet title, version, link, requester, and due date (if set); delivery is retried up to 3 times on failure Given duplicate Request Review actions occur within 5 minutes When processing notifications Then duplicates are suppressed and the state remains a single In Review instance
Validation Failure Handling and Notifications
Given a Reviewer runs validation on a cue sheet with missing or inconsistent data When validation fails Then a blocking error list with field paths is displayed, approval is blocked, and status becomes Needs Fix Then notifications are sent to the Creator and Editors within 60 seconds including up to the first 10 errors and a link to the full validation report Given all blocking errors are resolved and validation passes When validation is re-run Then status can be set back to In Review without creating a new cue sheet version
Approval Capture, Timestamps, and Locking
Given I have the Approver role When I approve a cue sheet that has passed validation Then the system records approver id, role, UTC timestamp, version id/hash, and optional comment; status becomes Approved Then sensitive fields become read-only for all roles, and further edits require an Admin to Reopen with a required reason; Reopen writes an audit entry with UTC timestamp and admin id Given a user without Approver role attempts approval When calling the Approve action Then the Approve control is disabled and the API returns 403 RBAC_FORBIDDEN
Export Gated by Approval and Export Audit
Given a cue sheet is Approved When a user with Export permission exports to ASCAP, BMI, PRS, SOCAN, CSV, or PDF Then the export succeeds, files include correct composer/publisher/pro metadata and timings, and an export log records user id, format, UTC timestamp, and file checksum Given a cue sheet is not Approved When any user attempts an export Then the action is blocked; UI shows "Approval required" and API returns 409 PRECONDITION_FAILED Then on successful export, notifications are sent to the requester and watchers within 60 seconds including format and a link to the artifact
External Stakeholder View-Only Link
Given a Project Admin generates a view-only link When an external stakeholder accesses it before expiry Then the cue sheet and comments are viewable, but edit/approve/export actions are unavailable; any POST/PATCH returns 401 or 403 and no changes occur Then the link supports configurable expiry (1–30 days) and optional password; after expiry or revocation, access returns 410 GONE with no data leakage Then each access creates an audit entry with UTC access time, IP, and user agent; only provided email (if collected) is stored, no additional PII

Jurisdiction Pack

Assembles territory‑specific addenda, society mappings, and contact references based on your Scope Builder choices. An optional localized Rights Summary (EN/FR/DE/ES) and field name remapping ensure the Capsule lands correctly with global teams and reduce clearance friction abroad.

Requirements

Scope-Driven Territory Assembly Engine
"As a release manager, I want the system to auto-assemble jurisdiction packs from my Scope Builder choices so that I can ship correct territory documentation without manual collation."
Description

Build an engine that consumes Scope Builder selections (territories, rights, terms, assets) and automatically compiles a Jurisdiction Pack per territory. The engine resolves applicable clauses, data fields, and attachments, outputs a structured bundle (JSON + PDFs + CSVs) with machine-readable metadata, and flags missing prerequisites. It integrates with TrackCrate’s project model, versioning, and permissioning so updates to scope or assets trigger deterministic re-builds and versioned diffs. Expected outcomes: consistent, reproducible packs that reduce manual assembly time and eliminate territory omissions.

Acceptance Criteria
Deterministic Rebuilds on Scope Update
Given a project with existing Jurisdiction Packs and a recorded scope fingerprint When a user saves a change in Scope Builder that affects only certain territories Then only those affected territory packs are rebuilt and others retain their prior version and hashes And the rebuilt packs increment version numbers and record the reason for rebuild in the manifest And a territory-level diff JSON is produced listing added/removed/changed clauses, data fields, and attachments And two builds with identical inputs (scope, assets, clause library versions) produce byte-identical bundle hashes And a rebuild event with job id, start/end timestamps, and actor is appended to the project timeline
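The determinism criterion above hinges on fingerprinting the build inputs. One common approach — sketched here in Python with assumed input shapes, not TrackCrate's actual schema — is to hash a canonical JSON serialization, so identical inputs always produce identical hashes:

```python
import hashlib
import json

def scope_fingerprint(scope, assets, clause_library_version):
    """Deterministic fingerprint over the build inputs named above
    (scope, assets, clause library version). Canonical JSON with sorted
    keys and fixed separators removes serialization variance, which is
    what byte-identical rebuilds rely on."""
    payload = json.dumps(
        {"scope": scope, "assets": assets, "clauses": clause_library_version},
        sort_keys=True, separators=(",", ":"),
    ).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()
```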
Territory Clause Resolution and Precedence
Given a clause library with territory/rule mappings and precedence definitions When generating a pack for a selected territory with chosen rights and terms Then all required clauses for that territory-rights-terms combination are included and no disallowed clauses are present And any rule conflicts are resolved per precedence and the chosen resolution is recorded in the manifest with rule ids And each included clause entry carries machine-readable metadata (clauseId, libraryVersion, locale, effectiveDates) And validation produces zero unresolved references or duplicate clauses
Missing Prerequisites Detection and Reporting
Given a project scope that references required assets and metadata When the engine attempts to build a territory pack with missing prerequisites (e.g., ISRC, asset licenses, society codes) Then the build completes in Draft state with status "Incomplete" and does not publish the pack externally And a JSON and CSV report of missing/invalid items is attached listing item path, requirement id, severity, and suggested remediation And the manifest flags incomplete=true and enumerates blockers per territory And resolving the missing items and rebuilding clears the blockers and produces a publishable pack
Structured Bundle Output and Naming Conventions
Given a successful pack generation for a territory When the bundle is produced Then it contains at minimum: manifest.json, clauses.pdf, assets.csv, contacts.csv, society_mapping.csv, and (if selected) rights_summary_[locale].pdf And every attachment declared in manifest.attachments exists on disk and no extra files are present And file names follow the convention {projectSlug}_{territoryCode}_{packVersion}_{artifact}.{ext} And manifest.json validates 100% against schema version v1.x and includes a SHA-256 for each artifact And bundle total size and per-file MIME types are recorded in the manifest
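The bundle integrity checks above — every declared attachment exists, its SHA-256 matches, and no undeclared files are present — can be sketched as a verifier. The manifest field names (`attachments`, `file`, `sha256`) are assumptions about the schema:

```python
import hashlib
import json
import pathlib

def verify_bundle(bundle_dir):
    """Return a list of problems for a bundle directory: missing
    attachments, checksum mismatches, and unexpected extra files."""
    root = pathlib.Path(bundle_dir)
    manifest = json.loads((root / "manifest.json").read_text())
    declared = {a["file"]: a["sha256"] for a in manifest["attachments"]}
    problems = []
    for name, expected in declared.items():
        path = root / name
        if not path.exists():
            problems.append(f"missing: {name}")
        elif hashlib.sha256(path.read_bytes()).hexdigest() != expected:
            problems.append(f"checksum mismatch: {name}")
    extras = {p.name for p in root.iterdir()} - set(declared) - {"manifest.json"}
    problems.extend(f"unexpected file: {n}" for n in sorted(extras))
    return problems
```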
Localized Rights Summary Generation
Given the user selects specific locales (EN, FR, DE, ES) for Rights Summary in Scope Builder When generating a pack for a territory Then rights summary PDFs are produced only for the selected locales and none for unselected locales And each summary renders with resolved translation keys (no placeholders) and correct language tags (IETF BCP 47) And the manifest records locales generated and the translation bundle version used And text content for rights and terms matches the territory-resolved clauses for that locale
Field Name Remapping per Territory
Given field remapping configurations for target territories When generating CSV exports for different territories (e.g., US, JP) Then CSV headers reflect the territory-specific mapping exactly and values are preserved without loss or transformation errors And the manifest documents the mappingId and version applied per file And a mapping audit log lists each remapped field from source -> target name And schema validation passes for each target territory format (0 errors)
Permissions and Audit Integration
Given project roles Owner, Editor, and Viewer and existing project ACLs When a Viewer attempts to trigger a pack build or access restricted artifacts Then the system denies the action with HTTP 403 and logs the attempt without creating a build When an Owner or Editor triggers a build Then the build runs, inherits project permissions on all resulting artifacts, and an audit entry is created with actor, action, territory, and version And downloading artifacts respects ACLs and private links; unauthorized users cannot access bundle URLs
Rights Society Mapping Registry & Contacts
"As a rights administrator, I want authoritative society mappings with up-to-date contacts so that I can route clearances correctly and avoid rejections."
Description

Create a centralized, versioned registry of territory-specific performance/mechanical/neighboring rights societies, codes (e.g., IPI/CAE, ISWC, ISRC requirements), intake formats, and verified contact channels. Provide API-backed lookup and auto-fill during pack assembly, with fallbacks per sub-territory and society mergers/aliases. Include validation to ensure required identifiers are present per territory and attach society-specific cover letters or forms as needed.

Acceptance Criteria
API Lookup: Territory + Rights Type -> Society Record
Given the registry contains society mappings for territories and rights types When a client requests GET /api/registry/societies?territory=GB&rightsType=performance Then the API responds 200 within 500 ms And returns JSON records with fields: id, name, territoryCode, rightsType, aliases, mergedInto, requiredIdentifiers, intakeFormats, contactChannels, forms And the primary record's territoryCode = GB and rightsType = performance And aliases returns known alternative names for the society And mergedInto is populated when the society has been merged and points to the canonical id And requiredIdentifiers lists the identifiers mandated by the registry for that territory and rights type
Pack Assembly Auto-Fill and Validation of Required Identifiers
Given a user assembles a Jurisdiction Pack for territories GB and US And the contributor has an IPI/CAE stored in TrackCrate When the pack editor loads the Rights Society section Then required identifier fields auto-populate per territory and rights type from the registry And missing required fields are highlighted inline with messages specifying the field name and territory And Continue/Generate is disabled until all required fields are satisfied or marked Not Applicable where allowed by registry metadata And on save, values persist and reload accurately across sessions
Sub-territory Fallbacks and Society Alias Resolution
Given a lookup for a sub-territory without a direct mapping When requesting a mapping for GB-WLS performance rights Then the registry resolves using the GB parent mapping And if the queried society name matches an alias, the API returns the canonical society record And the response includes resolvedFrom indicating the fallback path used And generated packs use canonical society names and codes, not aliases
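The fallback behavior above can be sketched as a resolver that walks from sub-territory to parent and follows mergers to the canonical record. Alias-to-canonical name matching is omitted, and the registry shapes are assumptions:

```python
def resolve_society(registry, by_id, territory, rights_type):
    """Resolve a society per the rules above: try the exact territory,
    then strip sub-territory suffixes (GB-WLS -> GB); merged societies
    resolve to their canonical record. Returns (record, resolvedFrom),
    where resolvedFrom lists the fallback path taken."""
    probe, resolved_from = territory, []
    while True:
        record = registry.get((probe, rights_type))
        if record:
            # Follow mergers to the canonical society.
            while record.get("mergedInto"):
                record = by_id[record["mergedInto"]]
            return record, resolved_from
        if "-" not in probe:
            return None, resolved_from
        resolved_from.append(probe)
        probe = probe.rsplit("-", 1)[0]
```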
Blocking Submission When Mandatory Identifiers Are Missing
Given territory FR requires IPI/CAE and ISWC for composition clearance And those identifiers are missing for at least one work When the user clicks Generate Pack Then generation is blocked with error code RS-REQ-IDs And the error lists missing fields grouped by territory and rights type And each listed field links to the screen to supply the value And after providing all required values, clicking Generate Pack succeeds and the prior error is not shown
Attach Society-Specific Forms and Cover Letters
Given a society requires a specific intake form or cover letter template When generating a pack that targets that society and rights type Then the pack includes the latest effective form/template with placeholders prefilled from available metadata And the attachment version satisfies effectiveFrom <= assemblyDate < effectiveTo (or open-ended) And file names follow {territoryCode}_{societyCode}_{docType}_{version}.pdf And if a required form is missing, generation is blocked with error RS-FORM-MISSING; if optional, a warning RS-FORM-OPTIONAL shown and generation proceeds
Registry Versioning and Effective-Dated History
Given a society record has multiple versions with distinct effective date ranges When the registry is queried with asOf equal to the pack assembly timestamp Then the returned record matches the version effective on that date And the response includes versionHash, effectiveFrom, and effectiveTo And the pack metadata stores registryVersionHash and registryAsOf equal to the assembly timestamp And GET /api/registry/societies/{id}/history returns an ordered audit of versions with author, timestamp, and change summary
Verified Contact Channels and Auto-Routing Eligibility
Given each society record includes contactChannels with type, value, and verificationStatus When the nightly verification job runs Then email channels pass if MX records resolve and a TLS handshake succeeds on SMTP submission And web channels pass if the HTTPS endpoint returns 200–399 with valid TLS certificates And phone channels pass if numbers are valid E.164 and carrier lookup succeeds And failing channels are marked unverified and excluded from auto-routing in pack outputs And a verification report is stored with counts of verified/unverified per territory and society
Localized Rights Summary Generator (EN/FR/DE/ES)
"As a global PR lead, I want localized rights summaries so that regional teams immediately understand usage permissions and restrictions without misinterpretation."
Description

Implement a templated generator that produces concise, locale-specific Rights Summaries in EN/FR/DE/ES derived from the underlying rights model. Legal phrasing is curated per locale with glossary and translation memory to ensure consistency. The output is available as PDF and embeddable HTML, supports pluralization and variable insertion, and is linked to the pack’s version. Falls back to English when a locale is unavailable and flags untranslated strings for review.

Acceptance Criteria
EN Summary Generation from Rights Model
Given a valid rights model with grants, restrictions, territories, effective/expiry dates, and rights holder When the generator is invoked for locale "en" Then the summary content reflects the model values exactly (grants, restrictions, territories, dates, rights holder) And only phrases from the curated EN phrasebank are used And no unresolved variables or placeholders remain in the output And the PDF and HTML outputs are textually equivalent (ignoring formatting)
FR/DE/ES Phrase Consistency via Glossary & Translation Memory
Given approved glossary terms for fr, de, es and a populated translation memory When generating summaries for locales "fr", "de", and "es" Then glossary terms appear exactly as approved in each locale And identical source segments across documents are translated identically per TM suggestions And no locale contains mixed-language segments (e.g., EN fragments) unless explicitly marked as untranslatable
Locale Unavailable Fallback to English
Given a request to generate a summary for an unsupported locale (e.g., "it") When the generator runs Then the content falls back to English phrases while still inserting model data correctly And the output metadata records locale_requested="it" and locale_used="en" And a fallback event is logged for review
Untranslated Strings Flagging for Review
Given locale "es" where some phrase keys lack ES translations When generating the summary Then missing phrase keys are rendered using EN fallback And each missing key is captured in an "Untranslated Strings" report with key ID, locale, and count And the generation completes successfully with a review flag attached to the artifact
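The EN-fallback-plus-flagging behavior above can be sketched as follows; the phrase keys and data shapes here are purely illustrative, not the product's phrasebank:

```python
PHRASES = {  # hypothetical phrasebank, for illustration only
    "en": {"grant": "Grant of rights", "restrictions": "Restrictions"},
    "es": {"grant": "Concesión de derechos"},  # 'restrictions' untranslated
}

def render_phrases(locale, keys):
    """Render phrase keys with EN fallback and collect an
    Untranslated Strings report, per the criteria above."""
    rendered, missing = [], []
    for key in keys:
        if key in PHRASES.get(locale, {}):
            rendered.append(PHRASES[locale][key])
        else:
            rendered.append(PHRASES["en"][key])  # fall back to English
            missing.append({"key": key, "locale": locale})
    return rendered, missing
```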
Pluralization and Variable Insertion Across Locales
Given templates with variables {territory_count}, {expiry_date}, {rights_holder} and pluralizable segments for "territory/territories" When generating summaries for locales "en", "fr", "de", "es" with territory_count values of 0, 1, and 3 Then each locale applies correct plural rules for 0/1/other as per CLDR And variables are inserted with correct locale formatting for dates and numbers And no raw variable tokens remain in any output
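The CLDR plural behavior the criterion above relies on differs between these locales: French treats 0 as singular, while English, German, and Spanish do not. A hand-coded subset is sketched below; a real implementation would use a CLDR-backed library such as Babel, and the phrase tables are illustrative assumptions:

```python
def plural_category(locale, n):
    """CLDR cardinal category for integer n in en/fr/de/es: en/de/es
    use 'one' only for exactly 1, while fr uses 'one' for 0 and 1."""
    if locale == "fr":
        return "one" if n in (0, 1) else "other"
    return "one" if n == 1 else "other"

TERRITORY_FORMS = {  # illustrative phrase forms
    "en": {"one": "territory", "other": "territories"},
    "fr": {"one": "territoire", "other": "territoires"},
}

def territories_phrase(locale, n):
    return f"{n} {TERRITORY_FORMS[locale][plural_category(locale, n)]}"
```

So `territories_phrase("fr", 0)` yields `0 territoire`, while the English rendering of the same count is `0 territories`.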
PDF and Embeddable HTML Output Fidelity
Given a generated summary in each supported locale (en, fr, de, es) When exporting to PDF and HTML Then the human-readable text content is identical between formats (ignoring layout) And special characters (e.g., é, ü, ñ, ß) render correctly in both outputs And the PDF contains selectable text (not rasterized) with embedded fonts supporting required glyphs
Version Linking to Jurisdiction Pack
Given a Jurisdiction Pack at version V and a subsequent update to V+1 When generating summaries for both versions Then each summary is tagged with the corresponding packVersionId (V or V+1) And regenerating for the same pack version produces a new artifact with the same packVersionId and a unique documentId And previously generated summaries remain immutable and retrievable by packVersionId
Field Name Remapping Profiles by Territory
"As a metadata manager, I want field names remapped to local schemas so that receiving teams can import data without manual fixes or reformatting."
Description

Provide configurable remapping profiles that translate TrackCrate’s canonical metadata fields into territory- and organization-specific field names and schemas (e.g., GVL, SCPP, JASRAC). Support CSV/XLSX/JSON export schemas, type coercion, enum mapping, and required/optional field rules. Profiles are versioned, testable against sample payloads, and selectable per territory during pack assembly to minimize ingestion errors downstream.

Acceptance Criteria
Create and Apply Territory-Specific Remapping Profile
Given a user with admin permissions creates a remapping profile for territory DE and org GVL with explicit field-name mappings, field order, and inclusion rules When the user exports a pack using this profile Then the exported dataset uses the target field names exactly as configured, in the specified order, and excludes fields marked as excluded And any unmapped canonical fields follow the profile’s configured fallback (drop or pass-through) And the export log reports the percentage of fields mapped (target 100%) and lists any unmapped fields by name
Profile Versioning and Backward Compatibility
Given a remapping profile v1.2.0 is published and pinned to Pack A When a new revision v1.3.0 is created and published Then Pack A continues to resolve to v1.2.0 until explicitly repinned by a user And previously published versions are immutable (no edits permitted) and cannot be deleted while referenced by any pack And the export artifact embeds profile name, version, and checksum metadata And a changelog entry is generated capturing diff of mappings and rules between v1.2.0 and v1.3.0
Territory Selection Drives Profile Assignment in Pack Assembly
Given a user assembles a Jurisdiction Pack for territory FR When the user opens Profile selection Then system suggests available profiles tagged for FR (e.g., SCPP, SACEM) in priority order And the user can select exactly one default export profile for the pack And the selected profile persists with the pack and is used for validations and exports by default And the UI displays the selected profile name and version, with ability to override at export time
Data Transformation: Type Coercion and Enum Mapping
Given a profile defines type coercion rules (e.g., date format YYYY-MM-DD, int, decimal) and enum mappings with fallbacks When validating or exporting records through this profile Then values that can be safely coerced are transformed without data loss and logged as coerced And values that cannot be coerced are flagged as errors with record ID, field, original value, and reason And enum values map to target codes per table; unknown values follow the configured policy (reject with error or map to OTHER) And the run fails if any hard errors are present; warning-only runs produce a Pass with warnings count And the report summarizes counts: total, coerced, warnings, errors
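A sketch of how profile-driven coercion and enum mapping could report coerced values, errors, and the unknown-enum fallback. Field names, the accepted date formats, the enum table, and the "map to OTHER" policy are assumptions for illustration:

```python
from datetime import datetime

# Illustrative enum mapping table (canonical value -> target code).
ENUM_MAP = {"Sound Recording": "SR", "Music Video": "MV"}

def apply_profile(record):
    """Coerce and map one record; returns (output, coerced fields, errors)."""
    out, coerced, errors = {}, [], []
    # Date coercion toward the profile's YYYY-MM-DD target format.
    raw = record.get("release_date", "")
    for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
        try:
            out["release_date"] = datetime.strptime(raw, fmt).strftime("%Y-%m-%d")
            if fmt != "%Y-%m-%d":
                coerced.append("release_date")  # transformed, log as coerced
            break
        except ValueError:
            continue
    else:  # no format matched: hard error with field, value, reason
        errors.append({"field": "release_date", "value": raw,
                       "reason": "unparseable date"})
    # Enum mapping; policy here: unknown values map to OTHER.
    out["work_type"] = ENUM_MAP.get(record.get("work_type"), "OTHER")
    return out, coerced, errors
```

A run would then fail if any record returned a non-empty `errors` list, and the summary report would tally the `coerced` and `errors` counts across records.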
Conditional Required and Optional Field Rules Enforcement
Given a profile defines required/optional rules including conditional requirements (e.g., ISRC required when WorkType = SoundRecording; IPI required when Role IN [Composer, Author]) When running validation against a dataset Then rows missing required fields per rule set are marked as errors with a specific error code and rule reference And optional fields may be blank without error And conditional rules evaluate per-record based on canonical values after mapping/coercion And the summary report includes counts of failed rules by rule ID and the first 50 affected record IDs
Sample Payload Validation Run and Report Persistence
Given a user selects a profile version and uploads or selects a sample canonical payload When the user clicks Test Profile Then the system validates the payload against mappings, types, enums, and rules and produces a report with totals (records, passes, warnings, errors) and execution time And the report is persisted with a timestamp, profile version, initiator, and a reproducible test ID And the UI shows Pass only if errors = 0; otherwise Fail, with ability to download a CSV/JSON of detailed findings And the last successful test indicator is displayed on the profile
Multi-Format Export Schema Compliance and File Naming Conventions
Given a user exports using a selected profile and chooses format CSV, XLSX, or JSON When the export completes Then CSV uses the configured delimiter, quote, line ending, header row, UTF-8 encoding (BOM on/off per profile), and field order And XLSX uses the configured sheet name and data types, with header row matching mapped field names And JSON keys match mapped field names and validate against the profile’s JSON Schema; root node name follows profile settings And the output filename matches the pattern TrackCrate_{territory}_{org}_{profileVersion}_{packId}_{YYYYMMDD}.{ext} And the export log records row count, file size, checksum, and validation outcome for the chosen format
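The filename pattern from the criterion can be generated and validated as below. The character classes in the regex (two-letter territory, semver-style profile version, hyphenated pack ID) are assumptions about the token shapes, and the sample values are placeholders:

```python
import re
from datetime import date

# Assumed token shapes for TrackCrate_{territory}_{org}_{profileVersion}_{packId}_{YYYYMMDD}.{ext}
PATTERN = re.compile(
    r"^TrackCrate_[A-Z]{2}_[A-Za-z0-9]+_v\d+\.\d+\.\d+_[A-Za-z0-9-]+_\d{8}\.(csv|xlsx|json)$"
)

def export_filename(territory, org, profile_version, pack_id, ext, on=None):
    """Build the export filename per the documented pattern."""
    stamp = (on or date.today()).strftime("%Y%m%d")
    return f"TrackCrate_{territory}_{org}_{profile_version}_{pack_id}_{stamp}.{ext}"

name = export_filename("DE", "GVL", "v1.2.0", "pack-42", "csv", date(2025, 1, 31))
# "TrackCrate_DE_GVL_v1.2.0_pack-42_20250131.csv"
```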
Jurisdiction Addenda Template Manager & Merge
"As label operations, I want correctly rendered territory addenda from approved templates so that legal documents are consistent and compliant without manual editing."
Description

Store and version legal addenda templates per territory with merge fields bound to the rights model (terms, exclusivity, carve-outs, parties, identifiers). Support conditional clauses, territory-specific disclaimers (e.g., moral rights), and dynamic schedules for assets and contributors. On build, render locked PDFs with optional e-sign placeholders, embed pack metadata, and attach to the territory bundle. Include preview, diff on template updates, and rollback.

Acceptance Criteria
Territory Template Creation and Versioning
- Given a user with Template Manager permissions selects territory "DE", When they create a new addenda template with name, language, and tags and save v1.0.0, Then the template is stored with territory=DE, version=1.0.0 (semver), author and timestamp recorded, and content checksum persisted.
- Given an existing template v1.0.0, When the user edits and saves as a new version, Then version auto-increments (minor by default), the previous version remains immutable, and both versions are retrievable via the version list.
- Given a locked version, When a user attempts to edit it, Then the system prevents edits and prompts to create a new version.
Merge Fields Bound to Rights Model and Field Remapping
- Given a template contains merge tokens mapped to rights model fields (terms, exclusivity, carveOuts, parties, identifiers), When a build runs against a Capsule, Then all tokens resolve with data from the rights model snapshot at build time.
- Given a token without a binding, When validation runs pre-build, Then the build is blocked with an error listing the missing tokens and suggested bindings.
- Given field name remapping for localized output (EN/FR/DE/ES), When rendering a localized Rights Summary, Then field labels and enumerations are localized per selected locale and values remain unchanged.
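Pre-build token validation amounts to diffing the tokens found in the template against the binding map. The `{{token}}` syntax and binding shape below are hypothetical; the criterion does not specify the addenda engine's template syntax:

```python
import re

TOKEN_RE = re.compile(r"\{\{(\w+)\}\}")  # assumed {{token}} syntax

def validate_bindings(template, bindings):
    """Return tokens with no binding; a non-empty result blocks the build."""
    return [t for t in TOKEN_RE.findall(template) if t not in bindings]

def render(template, bindings, model):
    """Resolve every token against the rights-model snapshot, or fail loudly."""
    missing = validate_bindings(template, bindings)
    if missing:
        raise ValueError(f"unbound tokens: {missing}")
    return TOKEN_RE.sub(lambda m: str(model[bindings[m.group(1)]]), template)
```

Validating before rendering (rather than rendering with placeholders) is what guarantees no unresolved variables ever reach a locked PDF.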
Conditional Clauses and Territory Disclaimers
- Given a clause with condition "exclusivity==Exclusive", When exclusivity is Exclusive, Then the clause renders; When Non-Exclusive, Then the clause is omitted without leaving orphan punctuation or numbering.
- Given a territory-specific disclaimer for moral rights in FR, When building for FR, Then the disclaimer inserts in the correct section; When building for non-FR territories, Then it is excluded.
- Given nested conditions, When evaluation occurs, Then conditions are processed in defined order and the evaluation result is logged for audit.
Dynamic Schedules for Assets and Contributors
- Given a Capsule with assets and contributors, When building schedules, Then Schedule A lists assets with identifiers (ISRC/UPC where available), durations, and filenames; Schedule B lists contributors with roles and splits summing to 100%.
- Given pagination rules (max 50 lines per page), When schedules exceed limits, Then pages are added with continued headers and table numbering maintained.
- Given an asset or contributor flagged "exclude from schedule", When building, Then they are omitted and the decision is logged.
PDF Rendering with E-Sign Placeholders and Metadata Embedding
- Given a built addenda document, When rendering to PDF with "locked" option, Then the PDF is flattened, copy/paste disabled, form editing disabled, and a SHA-256 hash is generated and stored.
- Given e-sign placeholders are enabled, When rendering, Then signature/date/initial fields are embedded at configured anchors with unique IDs and are detectable by the e-sign service.
- Given pack metadata (territory code, template version, build ID), When rendering, Then metadata is embedded into the PDF properties and an XMP packet, and is retrievable via API.
Attachment to Territory Bundle and Distribution
- Given a Jurisdiction Pack build for territories DE and FR, When the addenda PDFs render, Then each PDF is attached to its respective territory bundle with canonical filenames and exposed via the bundle manifest.
- Given retention policies, When files are stored, Then they are persisted in the pack’s storage with access controls matching the Capsule and inherit expiring, watermarked download settings.
- Given a user downloads the bundle, When inspecting contents, Then the addenda are present and checksums match the build manifest.
Template Preview, Diff, and Rollback
- Given a template v1.1.0, When preview is requested with a sample Capsule, Then a draft render is generated within 10 seconds and watermarked "Preview - Not for Signing".
- Given a change from v1.0.0 to v1.1.0, When viewing diff, Then textual changes and clause inclusion/exclusion differences are highlighted with line-by-line and condition evaluation summaries.
- Given rollback is invoked to v1.0.0, When confirmed, Then v1.2.0 is created as a copy of v1.0.0 (preserving immutability), the audit log records the event, and bindings/conditions are retained.
Capsule Export & Access Controls for Jurisdiction Pack
"As a project owner, I want to include jurisdiction packs in Capsules with expiring, trackable access so that external teams receive the right materials securely."
Description

Integrate the assembled Jurisdiction Pack into Capsule export flows with fine-grained access controls. Allow per-recipient scoping of territories, expiring links, watermarking on downloadable artifacts, and audit logging of views/downloads. Ensure pack assets are included alongside AutoKit press materials and private stem player, with clear separation of legal vs. promo content. Provide shortlinks per territory and track engagement analytics for follow-up.

Acceptance Criteria
Territory-Scoped Capsule Export per Recipient
Given a Capsule containing an assembled Jurisdiction Pack and three recipients with distinct territory scopes, When the Capsule is exported, Then the system generates one unique share URL per recipient limited to the recipient’s assigned territories only, And attempts to access any unassigned territory content via direct URL return 403 Forbidden and create an audit log entry, And AutoKit press materials and the private stem player are included in each share and inherit the recipient’s access controls, And localized Rights Summary language and field-name remapping are preserved per territory in the recipient’s view.
Per-Territory Shortlinks Generation and Routing
Given a recipient share that includes multiple territories, When the export is finalized, Then a unique, trackable shortlink is created for each included territory and for the overall share, And each territory shortlink resolves to the correct territory section within the share with HTTP 200, And clicks on each shortlink are attributed with recipient ID and ISO territory code, And requests to unknown or revoked shortlinks return HTTP 404 and are not attributed.
Time-Bound Access Links Enforcement
Given a recipient share configured with an expiration date/time in UTC, When the share or any shortlink is accessed prior to expiration, Then the page and all permitted downloads respond with HTTP 200, And when accessed after the expiration time (±60 seconds), Then the page responds with HTTP 410 Gone and downloads are blocked, And extending the expiration updates enforcement within 60 seconds without changing the share URL, And shortening the expiration to a past time immediately invalidates access and records an audit event.
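The status mapping in this criterion (200 before expiry, 410 Gone after) reduces to a single UTC comparison. This sketch checks the exact boundary; the ±60-second window in the criterion is an allowance for enforcement propagation, not part of the check itself, and the function shape is illustrative:

```python
from datetime import datetime, timedelta, timezone

def access_status(expires_at_utc, now=None):
    """Return the HTTP status for a share access: 200 before expiry, 410 after."""
    now = now or datetime.now(timezone.utc)
    return 200 if now < expires_at_utc else 410

expiry = datetime(2025, 6, 1, 12, 0, tzinfo=timezone.utc)
access_status(expiry, expiry - timedelta(minutes=5))   # before expiry -> 200
access_status(expiry, expiry + timedelta(seconds=61))  # after expiry  -> 410
```

Because the comparison reads the expiry from the share record on every request, extending or shortening the expiration takes effect without changing the share URL.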
Watermarked Downloads for Pack Artifacts and Stems
Given watermarking is enabled for a recipient share, When any downloadable artifact (documents, images, audio stems) is requested, Then the generated file contains a recipient-specific watermark including recipient identifier, share ID, and timestamp, And visual assets display a visible watermark overlay; audio files include recipient metadata tags in ID3/metadata, And the watermarked file checksum is deterministic per recipient (repeated downloads produce identical checksums), And if watermark generation fails, the download is blocked and an error is shown and logged, And streaming in the private stem player is allowed (no file download) and is not watermarked in playback.
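The "deterministic checksum per recipient" requirement implies the watermark generator must avoid fresh randomness: every recipient-specific value can be derived from an HMAC of stable inputs. The secret, field layout, and key derivation below are assumptions sketching that idea:

```python
import hashlib
import hmac

SECRET = b"server-side-watermark-key"  # placeholder; held server-side in practice

def watermark_seed(recipient_id, share_id):
    """Derive a stable, recipient-specific seed for watermark generation.

    Same inputs always yield the same seed, so the generated artifact
    (and therefore its checksum) is identical on repeated downloads.
    """
    msg = f"{recipient_id}|{share_id}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
```

One consequence worth noting: for the checksum to stay deterministic, the timestamp embedded in the watermark should be a stable value (e.g., the share's creation time) rather than the time of each download.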
Audit Logging of Views and Downloads
Given a recipient views the share and downloads files, When events occur (page view, territory section view, file download), Then an audit log entry is written with ISO8601 UTC timestamp, recipient ID/email, share ID, territory code (if applicable), artifact ID, action type, IP hash, and user agent, And audit entries are queryable by Capsule ID and exportable to CSV from the admin UI, And audit records are immutable to non-admin users and reflect within 60 seconds of the event.
Separation of Legal vs Promo Content in Export
Given a Capsule export that includes AutoKit materials, the private stem player, and the Jurisdiction Pack, When a recipient opens the share page or downloads the ZIP bundle, Then content is organized with a clear separation: a “Promo” section/folder (AutoKit, press, stem player) and a “Legal” section/folder (Jurisdiction Pack addenda, society mappings, contacts, rights summaries), And no legal assets appear in Promo and no promo assets appear in Legal, And toggling a promo-only share hides all Legal content; toggling a legal-only share hides all Promo content.
Engagement Analytics per Territory and Recipient
Given engagement activity on a shared Capsule with territory shortlinks, When the analytics view is opened for that Capsule, Then the system reports per-territory and per-shortlink metrics: unique visitors, total views, unique downloaders, and total downloads per artifact, And metrics can be filtered by date range and recipient, and exported to CSV, And new events appear in analytics within 5 minutes of occurrence.

Alternates Kit

Bundles sync‑friendly alternates—instrumental, TV mix, clean/explicit, and 15/30/60 cutdowns—pulled from your versioned assets. Standardizes filenames, embeds usage metadata into ID3/BWF, loudness‑matches outputs, and includes a quick‑audition page, so supervisors can test‑in‑picture immediately.

Requirements

Auto-Alternate Orchestration
"As a label manager, I want alternates to be generated automatically from approved mixes so that I can deliver a complete sync-ready kit without manual editing."
Description

Automates assembly of sync-friendly alternates from versioned assets, producing instrumental, TV mix, clean/explicit variants, and 15/30/60-second cutdowns. Uses project metadata (BPM, key, section markers) to select intelligent edit points for cutdowns and ensures deterministic, repeatable outputs. Rebuilds alternates when a source mix or stem is updated, with queued background processing, retry/error handling, and notifications on success/failure. Integrates with TrackCrate’s asset store and permissions, writing outputs back to the release’s asset tree under a standardized "Alternates" bundle for downstream delivery and press-kit inclusion.

Acceptance Criteria
Deterministic Alternate Generation
Given a release with versioned source mix, labeled stems, and project metadata (BPM, key, section markers) When Auto-Alternate Orchestration runs with those inputs Then alternates are produced for the set: instrumental, tv, clean (if clean vocals exist), explicit (if explicit vocals exist), 15s, 30s, 60s And Then running the orchestration again with identical inputs and toolchain version yields byte-identical files (matching SHA-256) and identical embedded metadata for each generated alternate And Then any variant lacking required inputs is marked "Skipped - Missing Input", other variants still succeed, and a structured warning is recorded
Intelligent Cutdown Edit Points
Given BPM, grid-aligned downbeats, and section markers (e.g., INTRO, VERSE, CHORUS/HOOK) When generating 15s/30s/60s cutdowns Then each cutdown starts at the nearest CHORUS/HOOK marker; if absent, the first grid downbeat after bar 1 And Then internal edit points occur only at section boundaries or downbeats; crossfades of 10–30 ms are applied at all edit seams And Then each cutdown duration is 15.00/30.00/60.00 seconds within ±0.05 s, with a clean tail (no clicks) and silence trimmed to <100 ms
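The start-point rule above can be sketched as follows. The marker and downbeat representations are illustrative, and "nearest CHORUS/HOOK marker" is interpreted here as the first such marker in the arrangement:

```python
def pick_cutdown_start(markers, downbeats):
    """Select a cutdown start time.

    markers:   list of (label, seconds) section markers.
    downbeats: bar-start times in seconds (downbeats[0] is bar 1).
    Prefer the first CHORUS/HOOK marker; otherwise fall back to the
    first grid downbeat after bar 1.
    """
    hooks = [t for label, t in markers if label in ("CHORUS", "HOOK")]
    if hooks:
        return hooks[0]
    return downbeats[1]
```

The real orchestrator would then lay internal edits only on section boundaries or downbeats and apply 10–30 ms crossfades at each seam, per the criterion.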
Loudness Matching and Peak Control
Given a project loudness target (LUFS-I) and true-peak limit (dBTP) When alternates are rendered Then each alternate measures within ±0.2 LU of the target (per ITU-R BS.1770-4) and true-peak does not exceed the limit And Then a diagnostic report is attached per file (measured LUFS-I, LRA, TP); out-of-range results fail the job for that file with the report included
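The pass/fail logic of the diagnostic report is a pair of threshold checks. Measurement itself (ITU-R BS.1770-4 integrated loudness and true-peak detection) is assumed to come from an external meter; only the tolerance comparison is sketched here:

```python
def loudness_report(measured_lufs, measured_tp_dbtp, target_lufs, tp_limit_dbtp):
    """Per-file diagnostic: LUFS-I within +/-0.2 LU and true-peak under the limit."""
    return {
        "lufs_i": measured_lufs,
        "tp_dbtp": measured_tp_dbtp,
        "lufs_pass": abs(measured_lufs - target_lufs) <= 0.2,
        "tp_pass": measured_tp_dbtp <= tp_limit_dbtp,
    }
```

A file whose report contains any `False` flag fails the job for that file, with the report attached.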
Standardized Filenames and Bundle Placement
Given a releaseId, trackId, and trackSlug When alternates are written to the asset store Then files are saved under /releases/{releaseId}/{trackId}/Alternates/{variant}/ And Then filenames match regex ^[a-z0-9\-]+_(instrumental|tv|clean|explicit|15s|30s|60s)(_v[0-9]+)?\.(wav|flac|mp3)$ using trackSlug as prefix and no spaces And Then the "Alternates" bundle is visible on the release, inherits permissions from the release, and only authorized users can list/download the files
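The filename regex from the criterion can be exercised directly; the sample names below are made up:

```python
import re

# Regex copied verbatim from the criterion.
ALT_NAME = re.compile(
    r"^[a-z0-9\-]+_(instrumental|tv|clean|explicit|15s|30s|60s)(_v[0-9]+)?\.(wav|flac|mp3)$"
)

assert ALT_NAME.match("midnight-run_instrumental.wav")
assert ALT_NAME.match("midnight-run_30s_v2.mp3")
assert not ALT_NAME.match("Midnight Run_tv.wav")  # spaces/uppercase rejected
```

Because `_` is excluded from the slug character class, the underscore doubles as the token separator, which keeps the names machine-parseable downstream.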
Embedded Rights and Usage Metadata
Given rights metadata (ISRC, writers, publishers and splits, contact, explicit flag) and musical metadata (BPM, key) When alternates are rendered to WAV and MP3 Then WAV files include BWF bext and iXML chunks populated with Title, Artist, ISRC, Variant, BPM, Key, ExplicitFlag, RightsContact; MP3 files include ID3v2.3+ frames for the same fields And Then reading tags via the metadata API returns exact values matching the project data and variant; missing source fields are logged per file and do not block other metadata fields
Change Detection and Targeted Rebuild
Given a source mix or any stem for a track is updated or replaced When the update is committed Then only affected alternates for that track are enqueued for rebuild (e.g., vocal-only changes do not rebuild instrumental) And Then the new outputs supersede prior versions with incremented version numbers; prior outputs remain accessible via version history And Then an audit log entry links the source change to rebuild jobs with a correlation id
Queue Processing, Retry, and Notifications
Given an alternate build job is enqueued When processed by the background worker Then job status transitions Queued -> Running -> Succeeded or Failed; status is retrievable via API and UI in under 1 s latency per request And Then transient failures retry up to 3 times with exponential backoff (>=30 s initial) and idempotency key {releaseId}-{trackId}-{mixVersion}-{variant} prevents duplicate work And Then persistent failures are dead-lettered with error stack, and project members with access receive success/failure notifications (in-app and email) containing links to outputs/logs; non-members receive nothing
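The retry schedule and idempotency key from this criterion are simple to pin down concretely (three retries, exponential backoff from 30 s):

```python
def backoff_delays(initial_seconds=30, retries=3):
    """Exponential backoff per the criterion: 30s, 60s, 120s."""
    return [initial_seconds * (2 ** i) for i in range(retries)]

def idempotency_key(release_id, track_id, mix_version, variant):
    """Key format from the criterion: {releaseId}-{trackId}-{mixVersion}-{variant}."""
    return f"{release_id}-{track_id}-{mix_version}-{variant}"
```

A worker that checks the idempotency key before starting can safely deduplicate jobs re-enqueued by retries or repeated source-change events.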
Versioned Source Mapping & Approval Gate
"As an audio lead, I want to control which asset revisions feed each alternate so that the kit reflects our approved mixes and avoids accidental use of drafts."
Description

Provides rules and UI to map alternates to specific approved source versions (e.g., latest "Final Mix" or a pinned revision) and to define vocal/no-vocal sources for TV/instrumental outputs. Enforces an approval gate (e.g., "Approved for Sync") before generation, validates presence of required assets (lead vocal stem, full mix), and surfaces actionable warnings for missing components. Supports per-track overrides, global defaults at project/label level, and a dry-run checker that reports what will be generated and from which versions before execution.

Acceptance Criteria
Map Alternates to Latest 'Final Mix' or Pinned Revision
Given a track has multiple source versions labeled "Final Mix" with distinct revision timestamps, And the mapping rule is set to "Latest 'Final Mix'", When a dry-run or generation is initiated for the Alternates Kit, Then the system selects the most recent version labeled "Final Mix" as the source for all alternates mapped to "Full Mix". Given a pinned revision is set for the track, When a dry-run or generation is initiated, Then the pinned revision is selected as the source regardless of newer versions, And the dry-run report indicates the selection reason as "Pinned". Given no versions are labeled "Final Mix", When the mapping rule "Latest 'Final Mix'" is applied, Then the system produces a blocking validation error instructing the user to pin a revision or to choose a different label rule.
Approval Gate Blocks Alternate Generation Until Approved
Given the selected source version or track is not marked "Approved for Sync", When the user attempts to execute generation, Then generation is blocked, And the user sees a consolidated error listing each unmet approval with direct links to approve. Given the selected source version and track are marked "Approved for Sync", When the user executes generation, Then the approval gate passes, And generation begins without approval-related errors.
Define Vocal/No‑Vocal Sources for TV/Instrumental Outputs
Given the mapping UI allows assignment of roles "Vocal Source" (lead vocal stem) and "No‑Vocal Source" (instrumental/full mix without lead), When the user saves role assignments for a track, Then the assignments persist and are reflected in dry‑run. Given "No‑Vocal Source" is assigned, When generating Instrumental outputs, Then the system uses the assigned "No‑Vocal Source" as the input, And if it is missing at generation time, the system warns and skips only Instrumental outputs. Given "Vocal Source" and "Full Mix" are present, When generating TV Mix outputs, Then the system verifies both assets exist before proceeding, And if "Vocal Source" is missing, the system warns and skips only TV Mix outputs.
Required Asset Validation and Actionable Warnings
Given the requirement for "Full Mix" and "Lead Vocal Stem" for relevant outputs, When a dry-run is executed, Then the report validates presence of each required asset per output type, And lists missing components with explicit asset names and locations to fix. Given at least one required asset is missing for a requested output, When generation is attempted, Then generation proceeds only for outputs with all required assets present, And skipped outputs are reported with actionable messages and links to upload or remap. Given all required assets are present, When generation is attempted, Then no validation warnings block execution.
Per‑Track Overrides Supersede Project/Label Defaults
Given label‑level defaults and project‑level defaults are configured, And a per‑track override is set for mapping rules, When dry‑run or generation is executed for that track, Then the per‑track override is applied over project and label defaults. Given a per‑track override is cleared, When dry‑run or generation is executed, Then the effective rule falls back to project default, or to label default if project default is absent. Given the effective rule is computed, When viewing the mapping UI or dry‑run report, Then the source of the rule is shown as "Track Override", "Project Default", or "Label Default".
Dry‑Run Checker Reports Sources and Outputs
Given a user initiates a dry‑run for Alternates Kit generation, When the dry‑run completes, Then the report lists each intended output (Instrumental, TV Mix, Clean/Explicit, 15/30/60s) with:
- the selected source version identifier,
- version label and revision timestamp,
- approval status,
- required asset checklist,
- and any warnings or blocks.
Given no changes are made to mappings, versions, approvals, or assets after the dry‑run, When the user immediately executes generation, Then the outputs and selected source versions match the dry‑run report 1:1. Given a dry‑run is executed, When observing system behavior, Then no files are generated or modified, and no usage metrics are recorded for outputs.
Filename Template Standardizer
"As a music supervisor, I want consistent filenames across all alternates so that I can quickly identify and sort the right version for my cut without confusion."
Description

Implements a tokenized naming system to enforce consistent, sync-friendly filenames across all alternates and formats (e.g., {Artist}_{Title}_{AltType}_{BPM}_{Key}_{ISRC}_{YYYYMMDD}). Includes template presets, preview, collision detection, Unicode normalization, and filesystem-safe sanitation. Applies consistently to on-disk assets, ZIP bundles, and download filenames, ensuring downstream supervisors and MAM systems receive predictable, sortable names.

Acceptance Criteria
Batch Apply Template to Alternates
Given a release with at least 10 alternates across WAV and MP3 When the user selects the "{Artist}_{Title}_{AltType}_{BPM}_{Key}_{ISRC}_{YYYYMMDD}" preset and clicks Apply Then 100% of target files are renamed on disk using the resolved tokens, preserving file extensions And the renamed filenames exactly match the previewed names And all rename operations complete atomically; if any rename fails, no files are renamed and the user sees a single error message
Unknown and Missing Tokens Validation
Given a template string containing at least one unsupported token When the user attempts to save or apply the template Then the action is blocked and the UI lists each unknown token by name. Given a template with supported tokens but one or more assets lack required metadata to resolve a token (e.g., BPM) When preview is generated Then those rows are flagged and Apply is disabled until values are provided or the token is removed from the template
Collision Detection
Given multiple assets resolve to the same target filename in the current scope When preview is generated Then collisions are indicated per row and as a summary count And Apply is disabled until collisions are resolved by editing source metadata or the template. Given an existing file on disk already has a conflicting name When preview is generated Then the conflict is flagged as "existing on disk" and Apply is disabled
Unicode Normalization and Filesystem-Safe Sanitation
Given source metadata includes diacritics, emoji, combining marks, or mixed-width characters When filenames are generated Then each output is normalized to Unicode NFC. Given generated names contain filesystem-prohibited characters or patterns When filenames are generated Then prohibited characters \/:*?"<>| are replaced with an underscore, control characters are removed, leading/trailing spaces and periods are trimmed, and names CON, PRN, AUX, NUL, COM1–COM9, LPT1–LPT9 are suffixed with an underscore And no output filename exceeds 240 bytes in UTF-8; names exceeding the limit are truncated at the end of the base name before the extension, while preserving token order
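The sanitation pipeline above maps directly onto Python's standard library. This is a sketch under the rules stated in the criterion; the order of operations (normalize, replace, strip, reserve-check, truncate) is an implementation assumption:

```python
import re
import unicodedata

# Windows reserved device names per the criterion.
WINDOWS_RESERVED = {"CON", "PRN", "AUX", "NUL",
                    *{f"COM{i}" for i in range(1, 10)},
                    *{f"LPT{i}" for i in range(1, 10)}}

def sanitize_filename(name, max_bytes=240):
    base, dot, ext = name.rpartition(".")
    if not dot:                                   # no extension present
        base, ext = name, ""
    base = unicodedata.normalize("NFC", base)     # Unicode NFC
    base = re.sub(r'[\\/:*?"<>|]', "_", base)     # prohibited characters
    base = "".join(c for c in base if ord(c) >= 32)  # strip control chars
    base = base.strip(" .")                       # leading/trailing spaces, periods
    if base.upper() in WINDOWS_RESERVED:
        base += "_"                               # suffix reserved names
    # Truncate the base so base + "." + ext fits in max_bytes of UTF-8.
    budget = max_bytes - len(ext.encode()) - (1 if ext else 0)
    base = base.encode("utf-8")[:budget].decode("utf-8", errors="ignore")
    return f"{base}.{ext}" if ext else base
```

Truncating on the encoded bytes and decoding with `errors="ignore"` avoids splitting a multi-byte code point at the limit.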
ZIP and Download Name Consistency
Given a batch of alternates is exported as a ZIP When the ZIP is created Then the ZIP filename is the common template prefix plus ".zip" And all entries inside the archive use the same generated filenames as on disk And the HTTP Content-Disposition download filename matches the ZIP filename And individual direct downloads use the same filenames as on disk
Template Presets Management
Given the user is on the template editor When creating a new preset Then they can name it, define the token string, and save it to the workspace Given an existing preset When the user edits, duplicates, or deletes it Then changes are persisted and reflected in the Presets list Given a project is open When the user selects a default preset for the project Then subsequent Alternates Kit operations default to that preset
Live Preview and Dry-Run
Given a batch of at least 500 files When the template is edited Then the preview updates within 500 ms and shows before/after names, validation badges, and collision counts Given the user clicks "Export Preview" When the system generates the dry-run report Then a CSV is downloaded listing source path, target filename, collision flag, sanitation changes, and truncation flag
Embedded Rights & Usage Metadata
"As a sync librarian, I want rights and contact metadata embedded in each file so that anyone receiving the audio can clear usage without asking for separate documents."
Description

Embeds standardized rights and usage metadata into audio files at export: ID3v2.4 for MP3/AIFF and iXML/BWF for WAV. Fields include ISRC/ISWC, composer/publisher splits and PROs, contact/licensing email, P-line/C-line, territories, usage notes (clean/explicit), moods, BPM, key, and TrackCrate shortlink. Pulls authoritative values from TrackCrate’s catalog, validates required fields, preserves Unicode, and writes sidecar CSV/JSON manifests for MAM ingestion. Ensures parity between embedded tags and press/AutoKit pages.

Acceptance Criteria
Format-Specific Metadata Embedding Compliance
Given Alternates Kit exports MP3, AIFF, and WAV files for a release When export is initiated Then MP3 and AIFF files embed metadata using ID3v2.4 (not 2.3 or earlier) And WAV files embed metadata into BWF (bext) and iXML chunks per spec And AIFF files store ID3 tags in the AIFF ID3 chunk per spec And all specified fields (ISRC/ISWC, composer/publisher splits, PROs, contact/licensing email, P-line/C-line, territories, usage notes, moods, BPM, key, TrackCrate shortlink) are present in their standard frames/elements and readable by a spec-compliant parser And audio content is unchanged by tagging (duration identical and audio checksum matches pre-tag version)
Validation of Required Fields from Authoritative Catalog
Given authoritative metadata exists in TrackCrate’s catalog for selected tracks And required fields are defined by the system schema for embedded tags and manifests When a user attempts to export alternates Then values are pulled read-only from the catalog with no inline override in the export flow And export is blocked if any required field is missing or invalid, with a per-track error list identifying the exact field(s) And validation rules include: ISRC format valid; email RFC-compliant; territories are ISO 3166-1 alpha-2 or “Worldwide”; usage notes explicitly indicate Clean/Explicit; BPM is numeric; musical key in recognized format (e.g., C#m, F minor); PRO codes from controlled list; Unicode characters allowed And export proceeds only when all tracks pass validation
Parity With AutoKit/Press Pages at Time of Export
Given an AutoKit/press page exists for the release When alternates are exported Then embedded metadata values exactly match the values displayed on the AutoKit/press page at the time of export for all common fields And a content revision/hash of the page data used for export is recorded in the manifest And a parity check re-reading the embedded tags against the captured page data returns a 100% match for all applicable fields
Unicode and Special Characters Preservation
Given metadata contains non-ASCII characters (e.g., diacritics, CJK, RTL scripts, emoji) When metadata is embedded and then re-read by a compliant parser Then the re-read values are byte-for-byte equivalent to the source (same Unicode code points) And tags are written using encodings permitted by the respective specs (e.g., UTF-16/UTF-8 for ID3v2.4, UTF-8 for iXML) And normalization is preserved (NFC by default) without lossy transliteration in tags and manifests
Sidecar CSV/JSON Manifest Generation for MAM Ingestion
Given an export job completes When sidecar manifests are generated Then both a CSV and a JSON manifest are written alongside the files And each manifest includes one record per output with at least: file path, filename, checksum (SHA-256), duration, sample rate, bit depth, channels, format, ISRC/ISWC, composer/publisher splits with PROs, contact/licensing email, P-line/C-line, territories, usage notes, moods, BPM, key, TrackCrate shortlink, export timestamp, AutoKit revision/hash And manifests validate against the TrackCrate Manifest Schema v1 without errors And CSV uses UTF-8 encoding (no BOM), RFC 4180 compliant quoting, and JSON is UTF-8 with stable field ordering And embedded tag values and manifest values are identical for all fields
Composer/Publisher Splits and PRO Mapping Integrity
Given multiple composers and publishers with splits and optional identifiers (IPI/CAE, ISWC) When metadata is embedded and manifests are generated Then writer and publisher split percentages each sum to exactly 100.000% with up to three decimal places preserved And PRO names/codes are taken from a controlled vocabulary; unknown values are rejected at validation And IDs (IPI/CAE, ISWC) are included when present and placed in the appropriate frames/elements and manifest fields And ordering of contributors in tags and manifests is deterministic (primary credit order)
TrackCrate Shortlink Embedding and Reachability
Given a TrackCrate shortlink exists for the track When exporting metadata Then the shortlink is embedded in standard URL fields (e.g., ID3 WXXX:TrackCrate Shortlink, iXML <NOTE> or dedicated metadata field) and included in manifests And the shortlink uses HTTPS and resolves with HTTP 200/3xx during export validation (no 4xx/5xx) And the embedded shortlink exactly matches the catalog value and the AutoKit/press page link
Loudness Match & Peak Control
"As a trailer editor, I want all alternates to play back at a consistent level so that I can swap versions in my timeline without riding gain."
Description

Normalizes all alternates to a configurable loudness target using ITU-R BS.1770-4 measurements, with per-profile presets (e.g., Sync Review: -16 LUFS-I, -1.0 dBTP). Preserves inter-channel phase, avoids clipping via true-peak limiting, and skips processing when within tolerance. Generates a QC report (LUFS-I, LRA, TP, gain applied) stored with the asset and displayed in the UI. Provides per-track overrides and batch reprocessing when targets change.
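The per-file decision logic described above (skip within tolerance, otherwise apply gain, and engage the true-peak limiter only when the predicted peak would exceed the ceiling) can be sketched as pure arithmetic. The BS.1770-4 measurements themselves are assumed to come from an upstream meter; only the decision math is shown.

```python
def plan_loudness(measured_lufs: float, measured_tp_dbtp: float,
                  target_lufs: float = -16.0, ceiling_dbtp: float = -1.0,
                  tol_lu: float = 0.5) -> dict:
    """Decide gain and limiting for one file under a profile like Sync Review."""
    # No-op path: already within tolerance and under the true-peak ceiling.
    if abs(measured_lufs - target_lufs) <= tol_lu and measured_tp_dbtp <= ceiling_dbtp:
        return {"gain_db": 0.0, "limiter": False,
                "status": "Skipped (within tolerance)"}
    gain_db = round(target_lufs - measured_lufs, 1)  # 0.1 dB precision for the QC report
    predicted_tp = measured_tp_dbtp + gain_db        # true peak scales with gain
    return {"gain_db": gain_db,
            "limiter": predicted_tp > ceiling_dbtp,  # limit only when needed
            "status": "Processed"}
```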

Acceptance Criteria
Auto Loudness Normalization to Profile Target (Sync Review -16 LUFS-I)
Given a processing profile "Sync Review" with target loudness -16.0 LUFS-I ±0.5 LU per ITU-R BS.1770-4 When alternates are processed for loudness match Then each output file's integrated loudness measures between -16.5 and -15.5 LUFS-I using BS.1770-4 measurement And the gain applied for each file is recorded with precision 0.1 dB in the QC report
True-Peak Limiting Prevents Clipping at Ceiling
Given a true-peak ceiling of -1.0 dBTP in the selected profile When any file's predicted true peak after gain would exceed -1.0 dBTP Then a true-peak limiter is applied with inter-sample peak detection And the measured output true peak per channel is ≤ -1.0 dBTP (±0.1 dB tolerance) using BS.1770-4 true-peak measurement And no output sample exceeds 0 dBFS
Inter-Channel Phase Preservation and Linked Processing
Given a stereo or multi-channel input file When loudness normalization and limiting are applied Then all gain and limiting are applied with linked channels (single control path) And inter-channel delay equals 0 samples and relative channel gain difference ≤ 0.1 dB And the phase correlation between corresponding channels changes by < 0.02 And channel count and ordering are unchanged in the output
Tolerance-Based No-Op Processing
Given an input whose integrated loudness is within ±0.5 LU of the profile target and true peak ≤ the ceiling When processing is executed Then no gain or limiting is applied (gain applied = 0.0 dB; limiter inactive) And the QC report status is "Skipped (within tolerance)"
QC Report Generation, Storage, and UI Display
Given any processing run completes (processed or skipped) When the system generates the QC report Then the report includes fields: standard="ITU-R BS.1770-4", LUFS-I (0.1 LU), LRA (0.1 LU), true peak dBTP (0.1 dB), gain applied dB (0.1 dB), profile id/name, processing_status, timestamp, tool version And the report is stored with the asset version and accessible via API And the report is displayed in the Alternates Kit UI within 5 seconds of job completion
Per-Track Target Override
Given a user defines a per-track override target of -20.0 LUFS-I and -2.0 dBTP for a specific alternate When that track is processed Then the override values are applied instead of profile defaults And the measured output integrated loudness is between -20.5 and -19.5 LUFS-I and the true peak ≤ -2.0 dBTP (±0.1 dB) And the QC report records override=true and the override targets used
Batch Reprocessing on Profile Target Change
Given a profile's loudness and/or true-peak targets are updated When the user triggers batch reprocessing for an alternates bundle Then only assets whose existing QC reports show values outside the new tolerances are reprocessed And new versions are created for reprocessed assets with lineage to the prior version; skipped assets retain their versions And QC reports are regenerated for reprocessed items and prior reports are marked superseded And a batch summary is produced with counts of processed, skipped, and failed items, along with error reasons for failures
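Selecting which assets to reprocess after a target change is a filter over existing QC reports. A naive sketch, with hypothetical QC field names:

```python
def needs_reprocess(qc: dict, target_lufs: float, ceiling_dbtp: float,
                    tol_lu: float = 0.5) -> bool:
    """True when a stored QC report falls outside the new targets."""
    return (abs(qc["lufs_i"] - target_lufs) > tol_lu
            or qc["true_peak_dbtp"] > ceiling_dbtp)

def plan_batch(reports: list[dict], target_lufs: float, ceiling_dbtp: float) -> dict:
    """Partition a bundle into assets to reprocess vs. skip."""
    processed = [r["asset_id"] for r in reports
                 if needs_reprocess(r, target_lufs, ceiling_dbtp)]
    skipped = [r["asset_id"] for r in reports if r["asset_id"] not in processed]
    return {"processed": processed, "skipped": skipped}
```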
Quick Audition with Video Sync
"As a music supervisor, I want to audition alternates against my cut in the browser so that I can choose the best fit without opening a DAW."
Description

Delivers a tokenized audition page that loads the Alternates Kit and supports drag-and-drop of a reference video for immediate in-browser sync testing. Provides offset controls, loop regions, hotkeys to A/B alternates, waveform/markers display, and low-latency preloading. Respects permissions, streams watermark previews for external recipients, and integrates with AutoKit press pages for seamless inclusion in pitches.

Acceptance Criteria
Tokenized audition page loads Alternates Kit
Given a valid, unexpired tokenized link with access to Kit X When the page is opened by an authenticated internal user or a permitted external recipient Then the Alternates Kit list (instrumental, TV, clean, explicit, 15/30/60) is fetched and displayed within 1500 ms on a 10 Mbps connection And the first two alternates begin preloading audio buffers within 500 ms after list render And an invalid or expired token returns a 403 page with "Link expired or invalid" and no asset requests are made
Drag-and-drop reference video for in-browser sync
Given the audition page is loaded and the user has permission to preview When a user drags and drops a .mp4 or .mov file up to 500 MB Then the video is decoded client-side only (no upload) and ready to play within 2000 ms for a 1080p H.264 file on a 10 Mbps connection And unsupported file types show an inline "Unsupported video format" error and the player remains stable And the video timeline is aligned to 00:00:00:00 by default and is controlled by the shared transport (play/pause/seek)
Offset controls and loop regions
Given an audio alternate and a video are loaded When the user adjusts the sync offset via controls or hotkeys Then the audio-video offset range is +/- 10.000 s with 10 ms resolution and the numeric offset display updates live And pressing Left/Right nudges offset by 50 ms; Shift+Left/Right by 10 ms And setting loop in/out creates a loop region; loop playback repeats seamlessly with gap < 20 ms And offset and loop region persist when switching alternates within the session
Hotkeys for A/B alternates during playback
Given multiple alternates exist in the kit When the user presses numeric keys 1..9 mapped to alternates during playback Then playback switches to the selected alternate with continuity gap <= 50 ms and preserves playhead position, offset, and loop region And the selected alternate is visually highlighted and announced via ARIA live region And cached alternates switch immediately; uncached alternates begin playing within 300 ms due to prefetching
Waveform and markers display
Given an alternate is selected When the waveform renders Then a full-length waveform appears with zoom levels 1x–16x and renders within 1200 ms for a 4-minute track And embedded markers (cue points/sections) display at correct timestamps within ±10 ms tolerance And when a video is loaded, a timecode ruler (HH:MM:SS:FF at 24/25/30 fps selectable) is shown aligned with audio time 0
Permissions and watermark preview for externals
Given an external recipient opens the tokenized audition link When previewing any alternate Then audio is streamed with an audible watermark every 8 seconds at approximately -18 LUFS, and downloads are disabled And users without download permission do not see download controls; authorized internal users can access clean previews per role And upon token expiry or revocation, further API requests return 403 and the UI shows "Access revoked" without exposing assets
AutoKit press page integration
Given an AutoKit press page includes an Alternates Kit When the recipient clicks "Quick Audition" Then the audition page opens with the correct kit preselected and shares the same token/session without re-authentication And UTM parameters from the press page are preserved on the audition URL for analytics And closing the audition returns focus to the press page without losing current scroll or media play state
Secure Kit Packaging & Trackable Delivery
"As a label marketer, I want to send a single secure link to the full Alternates Kit so that supervisors can access everything they need while we retain control and visibility."
Description

Bundles alternates into a signed, expiring ZIP/folder with checksum manifest, README, and cue sheet CSV/PDF. Supports per-recipient, revocable, uniquely watermarked links with download limits, branded shortlinks, and analytics (opens, plays, downloads). Integrates with TrackCrate’s existing expiring/watermarked delivery to ensure safe distribution to supervisors and editors, and exposes audit logs for compliance.

Acceptance Criteria
Generate Signed, Expiring Alternate Kit Package
Given an Alternates Kit with selected assets and metadata is ready, and the user selects ZIP packaging and sets an expiration datetime When the user confirms creation Then the system packages all selected alternate files into a single ZIP with a root folder named {ReleaseSlug}_{YYYYMMDD}, and includes README.txt, cue_sheet.csv, cue_sheet.pdf, manifest.json, and manifest.sha256 And the manifest is signed with the TrackCrate packaging key and exposes signing_key_id and signature And the package is generated within 90 seconds for kits up to 2 GB total size (95th percentile) And every file listed in manifest.json has a SHA-256 checksum and byte size that matches the packaged file And attempting to download after expires_at returns HTTP 410 with a branded expiry page
Per-Recipient Expiring, Revocable, Watermarked Link
Given a recipient record and a generated package with expires_at and download_limit configured When the sender creates a per-recipient delivery link Then the system issues a unique branded shortlink (HTTPS) with at least 128 bits of entropy in the token tied to that recipient and delivery_id And each downloaded file embeds recipient watermark metadata (ID3/BWF) fields: tc_recipient_id, tc_delivery_id, tc_request_id, and timestamp (UTC) And download limits are enforced atomically across devices and IPs; exceeding the limit returns HTTP 429 with guidance And revoking the link disables access within 60 seconds globally; subsequent requests return HTTP 403 And HSTS is enabled and tokens can be configured as single-use, invalidating after the first completed download
Branded Shortlink Creation with Fallback
Given the organization has a verified custom short domain configured and passing health checks When a new delivery link is created Then the shortlink uses the custom domain with a 6–10 character path and resolves within 2 seconds globally (95th percentile) And landing pages include noindex and security headers (no-referrer, X-Content-Type-Options, CSP) And if the custom domain fails health checks, the system automatically falls back to trackcrate.io links and records an audit entry And redirects preserve UTM parameters and do not leak raw tokens via referrers
Track Opens, Plays, Downloads per Recipient
Given a per-recipient delivery is active When the recipient opens the landing or audition page Then an open event is recorded with delivery_id, recipient_id, request_id, timestamp (UTC), approximate geo (country/region), user agent, and referrer, and appears in the dashboard within 60 seconds When the recipient plays an audio preview Then a play event is recorded with track_id, play_duration_ms, and quartile markers (25/50/75/100) When the recipient downloads the package or allowed individual files Then a download event is recorded with file list, bytes transferred, and completion flag; retries do not create duplicate events And analytics can be filtered by recipient, delivery, and event type, and exported as CSV for a date range
README and Cue Sheet Inclusion and Accuracy
Given packaging completes successfully When the sender opens README.txt Then README.txt contains package_id, delivery_id, created_at (UTC), expires_at (UTC), verification steps for manifest and signature, support contact, and licensing summary And cue_sheet.csv and cue_sheet.pdf include one row per alternate with fields: Track Title, Alternate Type, Duration (mm:ss), ISRC, Writers, Publishers, PRO, Contact, Rights/Usage, BPM, Key And durations in cue sheets match the packaged audio durations within ±0.5 seconds And filenames in cue sheets exactly match packaged filenames And cue_sheet.pdf is under 1 MB and contains selectable text
Immutable Audit Log Exposure
Given the user has org admin permissions When viewing the Audit Log for a delivery Then entries exist for creation, edits, link creation, revocation, opens, plays, downloads, exports, and domain fallback events And each entry includes actor (user or system), action, subject_id, timestamp (UTC), request_id, and source IP (truncated) And logs are tamper-evident via hash chaining, are immutable to all users, and deletions are prohibited And logs are retained for at least 24 months and are exportable as CSV/JSON for a date range And access to logs is itself logged
Compatibility with Existing Expiring/Watermarked Delivery
Given the organization already uses TrackCrate expiring and watermarked deliveries When creating a Secure Kit delivery Then the same watermark fields and formats (ID3/BWF keys tc_recipient_id and tc_delivery_id) are applied to all files And the same expiration policy engine is used; overrides require the Delivery Policy Override permission and are recorded in the audit log And legacy deliveries created before this feature remain accessible and unaffected And CDN and storage paths follow existing access controls; no public buckets are exposed And automated tests validate parity by comparing watermark fields and expiration behavior across both delivery types

Milestone Builder

Define clear, review-ready payment checkpoints tied to deliverables. Attach assets, due dates, and approvers to each milestone so everyone knows what unlocks escrow. Visual timelines, reminders, and status badges keep collaborators aligned and prevent “what’s next?” confusion.

Requirements

Milestone Composer (Core CRUD)
"As a project lead, I want to define structured milestones with clear acceptance criteria so that everyone understands what unlocks payment and what happens next."
Description

Provide creation, editing, duplication, reordering, and deletion of milestones within a TrackCrate release or project. Each milestone includes title, overview, due date with time zone, assignees, approvers, payout amount/currency, acceptance criteria checklist, dependency links, and tags. Enforce field validation, autosave drafts, and role-based permissions (creator, editor, viewer). Expose CRUD APIs and webhook events (milestone.created, milestone.updated, milestone.deleted) for integration with the broader TrackCrate workflow. Milestones appear in project views, feed the visual timeline, and surface status badges across the app to create a single source of truth for payment checkpoints.

Acceptance Criteria
Create Milestone with Required Fields and Validation
Given I have the Creator role in a project When I open the Milestone Composer and provide Title (1–120 chars), Due Date and Time (ISO 8601) with a valid IANA Time Zone, Payout Amount ≥ 0.01 with a valid ISO 4217 Currency, and at least one Approver, plus any optional fields (Overview ≤ 2000 chars, Assignees, Acceptance Criteria checklist items, Dependency links to existing milestones in the same project, Tags), and click Save Then the milestone is created and appears in the project milestone list and visual timeline within 3 seconds And the API responds 201 Created with milestone.id and the system emits a milestone.created webhook containing projectId and milestoneId within 5 seconds And if any required field is missing/invalid (empty Title, invalid Time Zone, invalid Currency, non-numeric or < 0.01 Payout), the Save action is disabled and field-level errors are shown in the UI, no record is created, and no webhook is emitted And API attempts with invalid data return 400 with field-specific error codes
Autosave Draft During Composition
Given I am creating or editing a milestone and have not clicked Save When I modify any editable field Then a draft autosaves to the server within 2 seconds and an “All changes saved” indicator is shown And reloading the composer restores the last autosaved content exactly And autosave does not update project views/timeline and does not emit webhook events
Edit Milestone Fields and Persist Updates
Given I have the Creator or Editor role and open an existing milestone When I update any editable field (Title, Overview, Due Date, Time Zone, Assignees, Approvers, Payout Amount/Currency, Acceptance Criteria items, Dependency links, Tags) and click Save Then the changes persist and are reflected in the project milestone list and visual timeline within 3 seconds And the API responds 200 OK and a milestone.updated webhook is emitted including the changed fields And attempts to set invalid values are blocked with inline errors in the UI and API returns 400 with field-specific error codes
Duplicate Milestone
Given I have the Creator or Editor role on the project When I choose Duplicate on a milestone Then a new milestone is created with a new unique ID, and with the same fields as the source (Title appended with “(Copy)”, Overview, Due Date/Time Zone, Assignees, Approvers, Payout Amount/Currency, Acceptance Criteria items, Dependency links, Tags) And the duplicate appears in project views and the visual timeline within 3 seconds And the system emits a milestone.created webhook for the duplicate including metadata.sourceMilestoneId set to the original milestone ID
Reorder Milestones Within a Project
Given I have the Creator or Editor role on the project When I reorder milestones via drag-and-drop in the UI or by updating their position via API Then the new order persists and is reflected consistently across project views and the visual timeline within 2 seconds And affected milestones have their position/index updated and a milestone.updated webhook is emitted for each affected milestone with the new position/index And the API responds 200 OK for position updates
Delete Milestone with Dependency Protection
Given I have the Creator role on the project When I request deletion of an existing milestone and confirm the action Then the milestone is removed from project views and the visual timeline and a milestone.deleted webhook is emitted within 5 seconds And the API responds 204 No Content And if the milestone is referenced by other milestones via dependency links, deletion is blocked with a clear error listing the blocking milestone IDs, no record is removed, no webhook is emitted, and the API responds 409 Conflict
Enforce Role-Based Permissions Across UI and API
Given I am a Viewer on the project Then I can read milestones but cannot create, edit, duplicate, reorder, or delete; related UI controls are hidden/disabled and API write attempts return 403 Forbidden
Given I am an Editor on the project Then I can create, edit, duplicate, and reorder milestones but cannot delete; delete controls are hidden/disabled and delete API attempts return 403 Forbidden
Given I am a Creator on the project Then I have full CRUD permissions and all related UI controls are available; API write operations succeed when inputs are valid
Asset Attachment & Version Pinning
"As a mixing engineer, I want milestones to reference pinned asset versions so that reviewers approve exactly the files I delivered."
Description

Enable attaching deliverables to a milestone directly from the TrackCrate library (stems, mixes, artwork, press assets, agreements) or via upload. Pin attachments to specific file versions to prevent drift, with optional automatic update prompts when new versions exist. Provide secure, expiring review links and watermarked previews for non-downloadable evaluation. Preserve attachment history and checksums, and restrict access by role to protect unreleased material. Surface version labels and basic diff cues (e.g., mix v3 vs v2) to reduce review confusion.

Acceptance Criteria
Attach Assets from Library or Upload
Given a milestone exists and the user has Editor permissions When the user attaches assets from the TrackCrate library or uploads new files Then the milestone displays the attachments list with asset type (stems/mixes/artwork/press/agreement), name, size, and version label And uploads are ingested into the TrackCrate library with metadata and associated to the milestone And multiple attachments can be added, removed, and reordered without page refresh And invalid file types are rejected with a clear error message
Pin Attachment to Specific File Version
Given the user attaches an asset from the library When the user selects a specific version to pin Then the attachment is locked to that exact version ID and checksum and is read-only relative to future library changes And a visible indicator shows the pinned version label (e.g., Mix v3) And a manual action "Update to Newer Version" is available without auto-switching the file
Optional Update Prompt on New Versions
Given an attachment is pinned and the "Auto-Notify on New Versions" toggle is enabled When a newer version of the same asset is added to the library Then approvers and editors on the milestone receive a non-blocking prompt to review and optionally update the pin And the attachment remains on the original pinned version until explicitly updated And if the toggle is disabled, no prompt is sent
Secure Expiring Review Links with Watermarked Previews
Given an attachment is marked Preview Only When a review link is generated with a specified expiry duration Then the link uses a signed token and expires at the configured time, after which access is denied And the preview streams a watermarked rendition (visual for images/video, audible/visual overlay for audio) containing asset name and recipient identity And download controls are disabled and direct-download attempts are blocked And all link accesses are logged with timestamp and requester identity (if authenticated)
Attachment History, Audit Trail, and Checksums
Given an attachment has one or more pin updates When a user opens the attachment history Then the system shows an immutable audit trail including actor, timestamp, action (pin/update/remove), version ID/label, and checksum (SHA-256) And the history can be exported as CSV And a basic diff view highlights changes such as file size and duration deltas between versions
Role-Based Access Controls for Unreleased Material
Given project role permissions are configured When a Viewer without download rights accesses the milestone Then they can see attachment metadata and watermarked previews but cannot download source files And only Approvers and Editors can change pins or download where permitted And users without any access cannot view attachments and receive a permission error if following a link
Version Labels and Diff Cues in Milestone UI
Given an attachment has at least two versions in the library When viewing the milestone card or attachment detail Then the UI displays the current pinned version label and the previously pinned version label And basic diff cues are shown (e.g., Mix v3 vs v2, file size change, duration change) And hovering or expanding reveals any version notes/changelog provided in the library
Approval Rules & Escrow Unlock
"As a label finance admin, I want escrow to auto-release when the defined approvers sign off so that payouts are timely and auditable."
Description

Allow configurable approval logic per milestone, including All Approvers, Any One, Quorum, or Role-Based requirements. Support internal and external approvers via secure magic links with limited-scope access. Capture approve/reject decisions with comments, timestamps, and optional required checkboxes for acceptance criteria. Rejections reopen the milestone and notify owners with requested changes. When approval conditions are met, trigger escrow unlock rules via TrackCrate’s payment module or external providers using webhooks, and record payout events with references for finance reconciliation.
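The idempotent unlock behavior that several criteria below depend on (one escrow event per milestone, no matter how many triggers fire) can be sketched as a keyed de-duplication step. Class and field names are illustrative.

```python
import uuid

class EscrowUnlocker:
    """Sketch: enqueue at most one unlock event per milestone."""

    def __init__(self):
        self.events = []
        self._seen = set()

    def unlock(self, milestone_id: str, amount: str,
               currency: str, beneficiary: str):
        key = f"unlock:{milestone_id}"   # stable idempotency key
        if key in self._seen:
            return None                  # duplicate trigger: no new event
        self._seen.add(key)
        event = {"id": str(uuid.uuid4()), "idempotency_key": key,
                 "milestone_id": milestone_id, "amount": amount,
                 "currency": currency, "beneficiary": beneficiary}
        self.events.append(event)
        return event
```

The same idempotency key would be reused on retries after provider failures, so a retry can never double-pay.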

Acceptance Criteria
All-Approvers Rule: Full Sign-off Required
- Given a milestone with approval rule set to "All Approvers" and N approvers assigned, When each assigned approver submits an "Approve" decision, Then the milestone status changes to Approved, an escrow unlock event is enqueued, and a success notification is sent to owners.
- Given a milestone with approval rule "All Approvers", When fewer than N approvers have approved, Then the milestone remains In Review and no escrow unlock is triggered.
- Given a milestone in "All Approvers" mode, When any approver submits a "Reject" with a comment, Then the milestone transitions to Reopened, all pending approvals are invalidated, owners are notified with the comment, and no escrow unlock occurs.
- Given any approval or rejection is submitted, Then an audit record is stored capturing approver identity (user or email), decision, timestamp (UTC), and acceptance checklist snapshot.
Any-One Approver: First Approval Unlocks
- Given a milestone with approval rule "Any One", When the first assigned approver submits "Approve", Then the milestone status changes to Approved, an escrow unlock event is enqueued once (idempotent), and remaining approvers are marked Informational.
- Given a milestone with approval rule "Any One", When any approver submits "Reject" with a comment before an approval occurs, Then the milestone transitions to Reopened and no escrow unlock is triggered.
- Given a milestone already Approved via "Any One", When any additional approver attempts to approve or reject, Then the decision is recorded for audit but does not change state or trigger additional unlocks.
Quorum-Based Approval: Threshold Met
- Given a milestone with approval rule "Quorum" and a required threshold of Q approvals from M assigned approvers, When approval_count >= Q and no "Reject" has been submitted, Then the milestone changes to Approved and an escrow unlock event is enqueued.
- Given "Quorum" rule, When approval_count < Q and no "Reject" has been submitted, Then the milestone remains In Review and no escrow unlock is triggered.
- Given "Quorum" rule, When any approver submits "Reject" with a comment, Then the milestone transitions to Reopened and no escrow unlock occurs.
- Rule: Quorum may be configured as an absolute number or percentage; percentage thresholds are rounded up to the nearest whole number of approvals.
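The quorum rule, including the round-up for percentage thresholds, is a one-liner worth pinning down. A sketch; the string "60%" convention for percentage thresholds is an assumption of this example:

```python
import math

def quorum_met(approvals: int, assigned: int, threshold,
               rejected: bool = False) -> bool:
    """threshold is an absolute int or a percentage string like "60%";
    percentages round up to a whole number of approvals. Any rejection
    blocks approval regardless of the count."""
    if rejected:
        return False
    if isinstance(threshold, str) and threshold.endswith("%"):
        required = math.ceil(assigned * float(threshold[:-1]) / 100)
    else:
        required = int(threshold)
    return approvals >= required
```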
Role-Based Approval: Required Roles Sign-off
- Given a milestone with approval rule "Role-Based" and required roles R (e.g., Legal, A&R, Label), When at least one assigned approver per required role submits "Approve" and no one has rejected, Then the milestone changes to Approved and escrow unlock is enqueued.
- Given "Role-Based" rule, When any required role lacks an approval, Then the milestone remains In Review and no escrow unlock is triggered.
- Given "Role-Based" rule, When any approver (internal or external) submits "Reject" with a comment, Then the milestone transitions to Reopened and no escrow unlock occurs.
- Rule: If a required role has multiple assignees, any one assignee's approval satisfies that role; optional roles do not block approval.
Magic Link Access: Secure External Approvals
- Given an external approver is invited, When they access their magic link, Then they can view only the assigned milestone review page with attached assets, acceptance checklist, and approve/reject controls, and cannot navigate to other project content.
- Rule: Magic links are limited in scope to a single milestone, expire at the configured time-to-live or upon decision submission, and become invalid if the milestone state changes to Reopened or Approved.
- Rule: Only the designated email identity can use the link; reuse or access after expiration returns an access denied message and logs the attempt.
- Then: All actions taken via magic links are audit logged with token ID, email, timestamp (UTC), IP, and user agent.
Acceptance Checklist and Decision Capture
- Given a milestone with required acceptance checklist items, When an approver attempts to submit "Approve" with any required checkbox unchecked, Then the action is blocked with a validation error listing the unmet items.
- Given a milestone with acceptance checklist items, When "Approve" is submitted with all required checkboxes checked, Then the decision is accepted and the checklist state is stored with the audit record.
- Rule: "Reject" requires a non-empty comment; "Approve" allows an optional comment; both decisions store timestamps (UTC) and approver identity.
Escrow Unlock Trigger and Payout Recording
- Given a milestone reaches Approved state per its rule, When escrow unlock is triggered, Then the system calls TrackCrate Payments or emits an external webhook including the milestone ID, amount, currency, beneficiary, and an idempotency key.
- Given an external provider returns success, Then a payout event is recorded with provider reference ID, amount, currency, beneficiary, timestamp (UTC), and associated milestone and project IDs, and the milestone shows Escrow Unlocked status badge.
- Given the provider returns failure or times out, Then no payout record is created, the milestone shows Unlock Failed with error details, owners are notified, and the system supports retry with the same idempotency key.
- Rule: All webhook callbacks are authenticated and logged; duplicate callbacks do not create duplicate payout events.
Visual Timeline & Dependencies
"As a producer working across time zones, I want a visual timeline with dependencies so that I can see schedule impact and plan work realistically."
Description

Provide a project-level visual timeline that renders milestones with status coloring and due dates. Support drag-and-drop rescheduling with optional auto-cascade to dependent milestones (finish-to-start, start-to-start). Enforce date constraints and warn on conflicts or critical path slippage. Offer calendar export (ICS) and read-only share links for stakeholders. Ensure full time zone awareness and accessibility (keyboard navigation and high-contrast). Timeline changes sync bi-directionally with milestone records and generate change notifications.

Acceptance Criteria
Timeline Rendering with Status, Dates, Time Zones, and Accessibility
Given a project with milestones having statuses (Not Started, In Progress, Blocked, Completed) and due dates, When the timeline loads, Then each milestone bar renders with the correct status color per design tokens and displays start/end dates in the viewer’s local time zone. Given the user changes their time zone preference or the browser time zone differs, When the page is refreshed or the preference is saved, Then all milestone dates/times re-render in the selected time zone within 1 second and relative ordering remains correct. Given keyboard-only navigation, When the user uses Tab/Shift+Tab and Arrow keys, Then focus moves logically across timeline elements, Enter opens the selected milestone details, and ESC closes dialogs, with visible focus indicators. Given a screen reader user, When navigating the timeline, Then each milestone exposes an accessible name (title + dates + status), role, and position (index of total) and all critical buttons have labels, meeting WCAG 2.1 AA. Given high-contrast mode is enabled, When viewing the timeline, Then all text and interactive elements meet a minimum 4.5:1 contrast ratio and status colors include non-color indicators (patterns or icons).
Drag-and-Drop Rescheduling with Optional Auto-Cascade
Given milestone A with dependent milestones B (finish-to-start with 2-day lag) and C (start-to-start with 0 lag), When the user drags A to a new start/end on the timeline with Auto-Cascade enabled, Then B and C reschedule automatically preserving dependency types and lags, and all affected dates update in one atomic operation. Given Auto-Cascade is disabled, When the user drags A, Then only A changes, and any impacted dependents display a dependency warning indicator until conflicts are resolved. Given the user releases a dragged milestone, When the drop target is valid, Then a confirmation tooltip shows the old/new dates and the number of milestones changed, and an Undo action is available for 10 seconds. Given the user attempts to drop a milestone outside the allowed timeline range, When the drop occurs, Then the drop is rejected, the item snaps back, and an error toast explains the reason.
Date Constraints Enforcement and Conflict Warnings
Given a milestone with a hard constraint (Must Start On/No Earlier Than/No Later Than), When a user action or cascade would violate the constraint, Then the change is blocked, the UI explains the violated rule, and the original dates are preserved. Given a dependency would be violated (e.g., moving a successor before its predecessor in finish-to-start), When the user attempts the move with Auto-Cascade off, Then the system allows the move only after explicit user confirmation and shows a persistent warning badge on the violating milestones until resolved. Given soft constraints (target dates) are exceeded, When a change causes the due date to slip beyond the target, Then a non-blocking warning is displayed with the delta in days/hours. Given overlapping milestones assigned to the same owner are not prohibited by policy, When overlaps occur, Then the system surfaces a non-blocking resource conflict warning in the sidebar summary.
Critical Path Calculation and Slippage Alerts
Given the project critical path is computed from milestone durations and dependencies, When the user shifts any critical-path milestone resulting in a later project completion date, Then the system highlights all affected critical-path milestones and shows a banner indicating the projected slip duration (e.g., +3d 4h). Given the user reverts the change (via Undo or moving back), When the project completion date returns to its previous value, Then the slippage banner and highlights are removed. Given non-critical milestones are shifted, When the project completion date does not change, Then no slippage banner is shown and critical path highlighting remains accurate.
Calendar Export (ICS)
Given the user selects Export Calendar, When choosing between Download .ics and Subscribe (webcal), Then the system provides a valid ICS file and a webcal URL limited to the current project filter scope. Given an event is created for each visible milestone, When the ICS is generated, Then each VEVENT includes UID, SUMMARY (project − milestone title), DTSTART/DTEND with TZID, DESCRIPTION (status and link to milestone), and URL to the read-only timeline. Given a milestone date changes, When the user or an external calendar client refreshes the webcal feed, Then the feed publishes the updated event immediately with incremented SEQUENCE. Given access is revoked, When the export is disabled, Then the webcal URL returns HTTP 410 and previously downloaded static .ics files are not updated (documented on export screen).
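A minimal sketch of the VEVENT layout described above, assuming a hypothetical `milestone_vevent` helper. A production exporter would use a full iCalendar library and handle text escaping and SEQUENCE per RFC 5545; this only shows the required fields.

```python
from datetime import datetime

def milestone_vevent(uid, project, title, start, end, tzid, status, url):
    """Build one VEVENT with the fields the export spec requires (names illustrative)."""
    fmt = "%Y%m%dT%H%M%S"
    return "\r\n".join([
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"SUMMARY:{project} \u2212 {title}",                # project − milestone title
        f"DTSTART;TZID={tzid}:{start.strftime(fmt)}",
        f"DTEND;TZID={tzid}:{end.strftime(fmt)}",
        f"DESCRIPTION:Status: {status}\\n{url}",            # status and milestone link
        f"URL:{url}",
        "END:VEVENT",
    ])
```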
Read-Only Share Links for Stakeholders
Given a user generates a read-only share link, When an unauthenticated stakeholder opens it, Then the timeline renders without edit affordances (no drag handles, no context menus) and all write endpoints are blocked (returns 403) if called. Given optional link settings (expiry date and password) are configured, When the link is accessed after expiry or with an incorrect password, Then access is denied with a non-revealing error (410 Gone after expiry, 401 Unauthorized for bad password). Given a viewer in any time zone opens the link, When the timeline loads, Then dates render in the viewer’s local time zone and a time zone indicator is visible. Given the share link is revoked, When it is accessed, Then the page shows that the link is no longer available and no project metadata is leaked.
Bi-Directional Sync with Milestone Records and Notifications
Given a milestone is rescheduled on the timeline, When the change is committed, Then the corresponding milestone record updates immediately (<=1s) with new start/end dates and any adjusted dependencies. Given a milestone is edited in the milestone detail view or via API, When the record changes, Then the timeline reflects the change within 3 seconds without a full page reload. Given watchers and approvers are configured for the project, When a timeline change affects dates or dependencies, Then a change notification is generated including author, timestamp, old vs. new values, and a link to the item, and batched if multiple changes occur within 10 seconds. Given notifications are disabled for the project, When changes occur, Then no notifications are sent and an audit log entry is still recorded for each change.
Smart Reminders & Notifications
"As an artist, I want timely reminders and clear notifications so that I don’t miss reviews or approvals that hold up payment."
Description

Deliver configurable reminders to assignees and approvers relative to due dates (e.g., 7d/3d/1d, at due, overdue) via email, in-app, and Slack. Provide digest mode, quiet hours by time zone, snooze, and escalation to project owners on persistent overdue states. Trigger notifications on key events (assets attached, approval requested, approval granted/rejected, escrow released). Implement rate limiting and grouping to prevent notification fatigue, with per-user preferences and project-level defaults.

Acceptance Criteria
Configurable Due-Date Reminder Cadence
- Given a milestone with due date T and an assignee or approver with reminder cadence 7d, 3d, 1d, at due, and daily overdue x3, When the scheduler runs, Then reminders are queued at T-7d, T-3d, T-1d, T, and T+1d..T+3d in the user's local time at their configured notification hour
- Given a milestone due date changes from T to T', When the scheduler updates jobs, Then future reminders for T are canceled and new reminders for T' are created
- Given a milestone is marked completed before T, When pending reminders exist, Then those reminders are canceled and no overdue reminders are sent
- Given a user has disabled a specific cadence (e.g., 7d) in preferences, When the schedule is generated, Then that specific reminder is not queued
- Given both an assignee and approver exist, When reminders are queued, Then each recipient receives only their role-appropriate reminder content
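The cadence above can be sketched as a pure scheduling function; `reminder_times` and its parameters are illustrative, not the product's actual API. It converts the UTC due date into the user's local zone and emits one slot per offset at the configured notification hour.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def reminder_times(due_utc: datetime, user_tz: str, notify_hour: int,
                   overdue_days: int = 3) -> list[datetime]:
    """Slots at T-7d, T-3d, T-1d, T, and T+1d..T+overdue_days,
    each at notify_hour in the user's local time zone."""
    tz = ZoneInfo(user_tz)
    due_local = due_utc.astimezone(tz)
    offsets = [-7, -3, -1, 0] + list(range(1, overdue_days + 1))
    return [
        (due_local + timedelta(days=d)).replace(
            hour=notify_hour, minute=0, second=0, microsecond=0)
        for d in offsets
    ]
```

Rescheduling (T to T') would simply cancel the old list and regenerate it from the new due date.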
Event-Triggered Multi-Channel Delivery
- Given an event Assets Attached occurs on a milestone, When the event is emitted, Then notifications are sent to designated approvers via all enabled channels for each recipient (email, in-app, Slack)
- Given events Approval Requested, Approval Granted, Approval Rejected, and Escrow Released, When each event occurs, Then notifications are dispatched to the correct recipients with required fields: project name, milestone name, event type, actor, timestamp, and a CTA link
- Given a recipient has Slack connected and the Slack channel is enabled, When a notification is sent, Then a Slack message is delivered within 60 seconds; if Slack fails, then an email fallback is sent and the failure is logged
- Given in-app notifications are enabled, When a notification is sent, Then the recipient's in-app bell shows an unread badge increment and the item appears in the feed with accurate read/unread state
- Given an email notification is sent, When received, Then the subject follows the pattern [TrackCrate] {Project}: {Milestone} — {Event}, and the body includes a deep link that opens the milestone in TrackCrate
- Rule: All event-triggered notifications respect user preferences, quiet hours, grouping, and rate limits
Digest Mode, Grouping, and Rate Limiting
- Given a user enables Daily Digest at 08:00 local, When multiple notifications accrue in the prior 24 hours, Then the user receives a single digest at 08:00 summarizing items grouped by project and milestone with counts
- Given a user enables Weekly Digest (Mon 09:00 local), When notifications accrue during the week, Then only critical escalations bypass the digest per user setting, and all other items appear in the weekly digest
- Given multiple notifications for the same milestone and recipient occur within a 15-minute window, When sending, Then they are grouped into one message per channel with a combined summary
- Given per-user per-channel rate limits (defaults: email 5/hour, Slack 10/30min), When the limit is exceeded, Then additional notifications are deferred into the next digest window or the next available rate window, whichever is sooner
- Given grouping or rate limiting suppresses an individual notification, When the digest is sent, Then the digest shows that items were consolidated and provides one-click expand in-app
Quiet Hours by Time Zone with Deferred Delivery
- Given a user sets quiet hours 22:00–08:00 in their time zone, When a notification would fire during quiet hours, Then it is deferred to the next allowable time (08:00)
- Given the user's time zone observes DST changes, When a DST transition occurs, Then quiet hours honor the new local time without duplicate or missed notifications
- Given project-level quiet hours are set and the user has not overridden them, When scheduling notifications, Then project defaults apply; if the user overrides, then user settings take precedence
- Given an escalation is scheduled during quiet hours and the project setting "Escalations bypass quiet hours" is disabled, When the time arrives, Then the escalation is deferred; if enabled, then the escalation is delivered immediately
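The quiet-hours deferral can be sketched in a few lines by delegating DST handling to `zoneinfo`, which applies the correct UTC offset on either side of a transition. The function name and the fixed 22:00–08:00 window are illustrative.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def defer_for_quiet_hours(fire_utc: datetime, tz_name: str,
                          quiet_start: int = 22, quiet_end: int = 8) -> datetime:
    """Return the UTC time at which a notification may actually be sent.
    Quiet hours span quiet_start..quiet_end in the user's local zone."""
    tz = ZoneInfo(tz_name)
    local = fire_utc.astimezone(tz)
    if quiet_end <= local.hour < quiet_start:
        return fire_utc  # outside quiet hours: deliver immediately
    # Inside quiet hours: defer to quiet_end local, today or tomorrow.
    day = local if local.hour < quiet_end else local + timedelta(days=1)
    release = day.replace(hour=quiet_end, minute=0, second=0, microsecond=0)
    return release.astimezone(ZoneInfo("UTC"))
```

Because the comparison happens in local wall-clock time, a DST shift changes the UTC moment of 08:00 automatically without double-sending.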
Snooze and Reschedule Reminders
- Given a recipient views a notification, When they click Snooze for 1h, 4h, or Until tomorrow 09:00, Then the next notification for that item across all channels is deferred accordingly and marked Snoozed in-app
- Given a snoozed notification reaches its snooze end time, When the scheduler runs, Then the notification is delivered unless the underlying milestone is resolved
- Given a user snoozes a Due reminder, When the overdue cadence would start, Then overdue reminders do not start until the snooze expires
- Given a user snoozes a grouped notification, When the snooze expires, Then delivery resumes as a single grouped notification if grouping criteria still apply
Escalation on Persistent Overdue States
- Given a milestone remains in Awaiting Approval or Not Delivered N days past due (default 2 days), When the escalation job runs, Then an escalation notification is sent to project owners and producers with a summary of blockers and latest activity
- Given an escalation frequency of once every 48 hours with a maximum of 3 escalations, When the milestone remains unresolved, Then subsequent escalations follow the schedule until resolved or the cap is reached
- Given the milestone is resolved (approved or delivered), When pending escalations exist, Then all future escalation notifications are canceled
- Given per-project escalation recipients are customized, When an escalation is sent, Then only configured recipients receive it, respecting their channel preferences and quiet hours settings unless bypass is enabled
Per-User Preferences and Project-Level Defaults
- Given project-level notification defaults are defined for each event type and channel, When a new collaborator joins the project, Then those defaults apply to the user until they set personal overrides
- Given a user updates their preferences to disable a channel for a specific event type, When that event occurs, Then the user does not receive that channel while other enabled channels still deliver
- Given a user resets preferences to project defaults, When scheduling future notifications, Then the project defaults take effect immediately
- Given a user toggles Digest Mode for email only, When notifications are sent, Then email follows digest rules while Slack and in-app continue real-time delivery
- Given preferences are updated, When saving, Then changes are persisted and auditable, and future schedules are recalculated within 60 seconds
Status Badges & Audit Trail
"As a collaborator, I want clear status badges and a reliable audit trail so that I can see what’s blocking progress and who took the last action."
Description

Display milestone status badges (Planned, In Review, Changes Requested, Approved, Blocked, Paid) consistently across project lists, milestone detail, and timeline. Provide progress indicators at project and release levels, plus filters and sorting by status, due date, and approver. Maintain an immutable audit log of all milestone events (edits, attachments, approvals, rejections, escrow triggers) with actor, timestamp, and IP, exportable as PDF/CSV for accounting and compliance. Ensure badges and logs update in real time and are permission-aware.

Acceptance Criteria
Consistent Status Badge Rendering Across Views
- Given a milestone in each status (Planned, In Review, Changes Requested, Approved, Blocked, Paid), When viewed on project list, milestone detail, and timeline, Then the badge label text, color, and icon match across all views and match the design spec
- Given a milestone with a single status, When any view is rendered, Then exactly one status badge is displayed per milestone and it matches the backend status value
- Given a screen reader user, When focus lands on a status badge, Then an accessible name announces "Status: <Status>" and the badge meets WCAG AA contrast (>= 4.5:1 for text/icons)
- Given cached application state, When a milestone status changes server-side, Then no stale badge is shown after the next render cycle
Real-Time Status and Progress Updates
- Given two clients viewing the same project, When a milestone transitions from In Review to Approved, Then both clients reflect the new badge and updated project/release progress within 2 seconds without manual refresh
- Given intermittent connectivity, When the client reconnects, Then all badges and progress indicators reconcile to the latest server state within 5 seconds
- Given a status change that affects progress (e.g., Approved→Paid), When it is committed server-side, Then the project and release progress percentages update consistently in all locations
- Given a user without access to a project, When a real-time event occurs for that project, Then no badge/progress updates are pushed to that user
Permission-Aware Badge and Audit Visibility
- Given a user with view_milestones permission, When they open the project list or milestone detail, Then they can see status badges; otherwise badges are hidden
- Given a user with view_audit_log permission, When they open the audit log, Then actor, timestamp (UTC ISO 8601), action, target, and IP are visible; otherwise IP is redacted and actor is limited to display name
- Given a user without export_audit_log permission, When they attempt to export the audit log, Then the export action is blocked and logged with a permission error event
- Given a private milestone restricted to approvers, When a non-approver visits the milestone URL, Then no status badges or audit entries for that milestone are rendered
Filtering and Sorting by Status, Due Date, and Approver
- Given milestones with varied statuses, due dates, and approvers, When filtering by Status=In Review AND Approver=<Name> AND Due Date within <Range>, Then only milestones matching all selected filters are shown and the result count matches the number of items displayed
- Given a sort selection of Due Date ascending, When two milestones have the same due date, Then the sort is stable and the secondary order is by milestone name ascending
- Given milestones without a due date, When sorting by Due Date, Then items with no due date appear after items with dates in both ascending and descending modes
- Given an applied filter and sort, When the page is reloaded or the URL is shared, Then the same filter and sort are restored via query parameters
- Given a cleared filter action, When the user resets filters, Then all milestones become visible and the result count returns to the full set
Accurate Project and Release Progress Indicators
- Given a project with N total milestones and K milestones in Approved or Paid, When progress is computed, Then Progress % = round_down((K/N)*100) and "K of N complete" is displayed
- Given milestones in statuses other than Approved or Paid (Planned, In Review, Changes Requested, Blocked), When progress is computed, Then they do not count toward completion
- Given a project with zero milestones, When progress is displayed, Then show 0% and "0 of 0 complete" without errors or NaN
- Given any milestone status change that alters K, When the change is saved, Then project and release progress indicators update consistently across header, list, and timeline views
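The progress rule above, including the zero-milestone edge case, fits in a few lines; the function name is illustrative.

```python
import math

COMPLETE = {"Approved", "Paid"}  # only these statuses count toward completion

def progress(statuses: list[str]) -> tuple[int, str]:
    """Progress % = round_down((K/N)*100), safe for N == 0 (no NaN)."""
    n = len(statuses)
    k = sum(1 for s in statuses if s in COMPLETE)
    pct = 0 if n == 0 else math.floor(k / n * 100)
    return pct, f"{k} of {n} complete"
```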
Immutable Audit Log of Milestone Events
- Given an event occurs (edit, attachment add/remove, approval, rejection, status change, escrow trigger), When recorded to the audit log, Then an append-only entry is created with fields: event_id (UUID), actor_id, actor_display_name, action, target_type, target_id, timestamp (UTC ISO 8601), source_ip, metadata
- Given any role attempts to edit or delete an audit entry, When the request is made via UI or API, Then the operation is rejected with 403 and a tamper_attempt event is appended
- Given system-initiated actions (e.g., automated escrow release), When logged, Then the actor is "system" and source_ip is null, not an arbitrary placeholder
- Given multiple events in quick succession, When listed, Then entries are ordered by timestamp then event_id and no gaps or duplicates are present for a single action
Audit Log Export as PDF and CSV (Permission-Respecting)
- Given a user with export_audit_log permission and applied filters, When they export as CSV, Then the file is UTF-8 with a header row, RFC 4180 compliant, and includes all filtered rows and columns (event_id, timestamp UTC, actor_display_name, actor_id, action, target_type, target_id, source_ip per permission, metadata JSON)
- Given a user with export_audit_log permission and applied filters, When they export as PDF, Then pagination, column headers, and row counts match the on-screen filtered set and timestamps are rendered in UTC with a timezone label
- Given a user without view of IP addresses (no view_audit_log), When they receive an export link, Then exported files redact IP consistently with on-screen redaction
- Given an export is generated, When the file is ready, Then a corresponding audit entry "audit_export_created" is appended with the export format, row count, and SHA-256 checksum of the file
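A sketch of the CSV export with permission-aware IP redaction and the SHA-256 checksum recorded in the audit trail. The stdlib `csv` module quotes per RFC 4180 conventions; the function name and `"[redacted]"` placeholder are assumptions, not the product's actual API.

```python
import csv, hashlib, io

COLUMNS = ["event_id", "timestamp", "actor_display_name", "actor_id",
           "action", "target_type", "target_id", "source_ip", "metadata"]

def export_audit_csv(rows: list[dict], can_view_ip: bool) -> tuple[str, str]:
    """Return (csv_text, sha256_hex); IPs redacted when the viewer lacks permission."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS, lineterminator="\r\n")
    writer.writeheader()
    for row in rows:
        out = dict(row)
        if not can_view_ip:
            out["source_ip"] = "[redacted]"   # mirror on-screen redaction
        writer.writerow(out)
    text = buf.getvalue()
    return text, hashlib.sha256(text.encode("utf-8")).hexdigest()
```

The returned checksum would then be written into the "audit_export_created" entry alongside format and row count.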

Recoup Tracker

Log advances and expenses (mixing, artwork, ads) and set recoup order before splits are paid. Escrow auto-deducts approved costs at release, showing each collaborator a transparent, per-party breakdown. This eliminates surprise shortfalls and builds trust around payout math.

Requirements

Unified Recoup Ledger
"As a label admin, I want to log and manage all advances and expenses with receipts so that recoupable costs are tracked accurately before payouts."
Description

Centralized ledger to log and manage all advances and expenses per release and track. Supports cost categories (e.g., mixing, mastering, artwork, ads, PR), multi-currency amounts with FX conversion at booking date, receipt uploads, tags, and cost centers. Enables draft/submitted/approved/rejected states with audit trail, comments, and role-based permissions. Provides bulk CSV import and validation, prevents edits after pre-release lock, and links each cost to related assets and contracts within TrackCrate for contextual traceability.

Acceptance Criteria
Log Expense with FX Conversion at Booking Date
Given a release has a base currency configured And a user has permission to create costs When the user logs a cost with amount, ISO-4217 currency different from the base, and a booking date Then the system fetches the FX rate for the booking date from the configured provider and stores: original amount, original currency, booking date, FX rate, rate source, conversion timestamp, and base-currency amount rounded to 2 decimals And the base-currency amount is displayed alongside the original amount And if the user edits the amount, currency, or booking date while the cost is in Draft Then the base-currency amount is recalculated using the FX rate for the new booking date And once the cost is Submitted or beyond Then the stored FX rate and base-currency amount are immutable
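The booking-date FX rule above can be sketched as a single record-building function; `rate_lookup` is a hypothetical stand-in for the configured FX provider, and `Decimal` keeps the 2-decimal rounding exact.

```python
from datetime import date
from decimal import Decimal, ROUND_HALF_UP

def book_cost(amount: Decimal, currency: str, booking_date: date,
              base_currency: str, rate_lookup) -> dict:
    """Store the ledger fields: original amount/currency, booking date,
    FX rate, and the base-currency amount rounded to 2 decimals."""
    rate = rate_lookup(currency, base_currency, booking_date)
    base_amount = (amount * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    return {
        "original_amount": amount, "original_currency": currency,
        "booking_date": booking_date, "fx_rate": rate,
        "base_currency": base_currency, "base_amount": base_amount,
    }
```

Editing the amount or booking date while in Draft would simply re-run this function; once Submitted, the stored rate and base amount are frozen.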
Workflow: Draft to Approved with Role Permissions and Audit Trail
Given roles are configured for Creator, Submitter, Approver, and Viewer When a Creator saves a new cost Then its state is Draft and only Creators or Submitters can edit fields When a Submitter submits a Draft cost Then the state changes to Submitted and only Approvers can change state or fields When an Approver approves a Submitted cost Then the state becomes Approved and the record becomes read-only except for comments When an Approver rejects a Submitted cost Then the state becomes Rejected and a rejection reason is required And every create, edit, submission, approval, or rejection writes an audit entry with actor, UTC timestamp, action, and before/after values for changed fields
Receipt Upload and Validation
Given a cost is in Draft When a user uploads a receipt Then only PDF, JPEG, or PNG files up to 25 MB are accepted And the system stores file name, size, checksum, uploader, and upload timestamp And the cost cannot be submitted without at least one receipt attached unless explicitly marked as 'No Receipt' with a required justification And if the file fails validation Then the upload is rejected with an error message and no file is stored
Bulk CSV Import with Row-Level Validation
Given a CSV file with headers: release_id, category, amount, currency, booking_date, description (optional), cost_center (optional), tags (optional), track_ids (optional), contract_ids (optional), receipt_url (optional) When the user imports the file Then each row is validated for required fields, data types, ISO-4217 currency codes, and existing references And rows with validation errors are skipped and reported with line numbers and messages; valid rows are created as Draft costs And the import result returns counts of processed, created, and error rows and a downloadable error report And no costs are created for releases that are in pre-release lock
Pre-Release Lock Prevents Edits
Given a release has been pre-release locked When any user attempts to create, update, submit, approve, reject, or delete costs for that release Then the action is blocked with an explicit 'Release is locked' error And comments on existing costs remain allowed And CSV imports targeting the locked release are blocked with the same error
Link Costs to Assets and Contracts for Traceability
Given a cost is in Draft When the user links one or more assets (e.g., tracks, artwork) and/or contracts from the same release Then the cost stores references to those entities and displays them in the cost detail And links must reference entities that exist and belong to the same release; invalid references are rejected And a cost must have at least one related asset or contract linked before it can be Submitted
Categorization with Cost Categories, Tags, and Cost Centers
Given system-supported categories include at least: mixing, mastering, artwork, ads, PR When a user creates or edits a cost Then category is required and must be one of the allowed categories And users may assign zero or more tags (max 10; each 1–32 characters; alphanumeric plus spaces and dashes) And if a cost center is provided Then it must match an existing configured cost center And category, tags, and cost center are all immutable once the cost is Approved
Recoup Waterfall Builder
"As a project manager, I want to define and lock the order in which costs are recouped so that payouts follow agreed contract terms automatically."
Description

Configurable engine to define the order and rules by which costs recoup before revenue splits. Offers drag-and-drop step sequencing, templates (e.g., advance first, then marketing), per-item recoupable/non-recoupable flags, caps and floors, optional interest, and per-party inclusion/exclusion. Validates rule completeness, simulates outcomes with sample revenues, and supports pre-release lock with version history for auditability.

Acceptance Criteria
Rule Sequence Creation and Validation
- Given a project with no waterfall configured, When the user creates steps via drag-and-drop, Then the sequence reflects the user-defined order and persists on save
- Given a valid step is moved via mouse or keyboard, When the user reorders steps, Then the new order is saved and an undo action reverts the last move
- Given at least one revenue source and at least one recoup step, When validation runs, Then no cycles exist, each step has a defined source and target, and 100% of inflows are allocated or explicitly left as residual
- Given an invalid configuration (cycle, missing target, orphan step, or unallocated inflow), When the user attempts to save, Then the save is blocked and inline errors identify each offending step with guidance
- Given a valid configuration, When the user saves, Then a success confirmation appears and last-modified metadata (user, timestamp) is recorded
Apply and Customize Recoup Templates
- Given a library of templates exists (e.g., Advance First, Marketing Then Net), When a user applies a template to a project, Then the waterfall pre-populates with the template’s steps and defaults
- Given a template-applied configuration, When the user edits any step or parameter, Then the state is marked "Modified from template" and the specific changes are tracked
- Given a template-applied configuration with edits, When the user selects Reapply Template and confirms, Then the edits are discarded and the template defaults are restored
- Given a customized configuration, When the user selects Save as Custom Template, Then a new template is created with a name and version number and becomes selectable for other projects
Configure Per-Item Recoup Parameters
- Given a cost item, When the Recoupable toggle is off, Then the item is excluded from all recoup steps and simulations
- Given a cost item with Recoupable on, When the user sets Cap and Floor, Then validation ensures Floor ≤ Cap and both are non-negative currency values
- Given a cost item with interest enabled, When the user selects Simple APR and a monthly period, Then accrued interest in simulation equals Principal × (APR/12) × Months
- Given a cost item with interest enabled, When the user selects Compound APR (monthly), Then accrued interest equals Principal × (1 + APR/12)^Months − Principal
- Given caps and floors on a recoupable item, When simulated recoup exceeds the Cap, Then additional revenue bypasses the item and flows to the next step
- Given an effective start date for interest, When the release date precedes the start date, Then interest accrues starting on the effective date, not before
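The two interest formulas above translate directly into code; the function names are illustrative.

```python
def simple_interest(principal: float, apr: float, months: int) -> float:
    """Simple APR, monthly period: Principal × (APR/12) × Months."""
    return principal * (apr / 12) * months

def compound_interest(principal: float, apr: float, months: int) -> float:
    """Compound APR, monthly: Principal × (1 + APR/12)^Months − Principal."""
    return principal * (1 + apr / 12) ** months - principal
```

For example, $1,200 at 12% APR over 3 months accrues $36 simple interest, while compounding yields slightly more as each month's interest itself earns interest.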
Set Party-Specific Inclusion/Exclusion
- Given parties are defined on the project, When a user toggles inclusion for a party at a step, Then simulation excludes that party from recoup at that step and reallocates shares per defined rules
- Given a step with at least one excluded party, When the user views the breakdown, Then an explanation badge lists excluded parties and the rationale
- Given all parties are excluded at any recoup step, When validation runs, Then the save is blocked with an error requiring at least one included party
- Given inclusion/exclusion changes, When the configuration is saved, Then an audit entry records who changed what, where, and when
Simulate Outcomes With Sample Revenues
- Given sample revenue inputs (single amount or time series), When the user runs a simulation, Then the system shows step-by-step allocations, remaining balances, and per-party outcomes
- Given a simulation with up to 100 cost items and 50 parties, When executed, Then the results render within 2 seconds for 95% of runs
- Given zero or negative net revenue in a period, When simulated, Then no recoup occurs for that period and balances carry forward without error
- Given a completed simulation, When the user exports, Then a CSV and PDF are generated including timestamp, project ID, and configuration version ID
Pre-Release Lock and Version History
- Given a valid configuration, When a user with permission locks the waterfall pre-release, Then the configuration becomes read-only and is labeled Locked with timestamp and user
- Given a locked configuration, When an unauthorized user attempts an edit, Then the action is blocked with a permission error
- Given the configuration evolves over time, When saved, Then a new version number is created with author, timestamp, and a field-level diff from the prior version
- Given two versions in history, When the user selects Compare, Then a human-readable diff of steps and parameters is displayed
- Given a need to branch from a prior version, When the user clones a version, Then a new draft is created referencing the parent version ID and is unlocked for editing
- Given a request for audit export, When executed, Then a JSON export is produced containing the full configuration, version chain, and a SHA-256 hash per version
Escrow Auto-Deduction Engine
"As a finance owner, I want approved costs auto-deducted at payout so that collaborators are paid only after recoup is satisfied without manual math."
Description

Automated application of approved costs against incoming revenues in escrow at payout time. Applies the configured waterfall sequence, supports partial recoup across multiple periods, multi-source revenue mapping (DSPs, Bandcamp, direct), pro-rata distribution and rounding rules, and handles adjustments such as refunds or chargebacks. Generates an immutable deduction journal, ensures idempotent processing on retries, and reconciles against provider statements.

Acceptance Criteria
Waterfall Application and Carry-Forward Recoup
- Given a release with approved recoupable costs and a configured waterfall order, When a payout is triggered with an escrow balance of N, Then the engine deducts costs in the configured order until costs are fully recouped or escrow reaches 0.
- And only costs with status "Approved" and an effective date on or before the payout date are applied.
- And per-item and per-category recoup caps are not exceeded.
- And any unrecouped balance is carried forward and reported for the next payout.
- And any remaining escrow after recoup is marked as distributable.
- Given previous unrecouped balances exist, When a subsequent payout runs, Then the engine continues recoup from the remaining balances before any distributions occur.
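The ordered deduction with carry-forward described above can be sketched as follows. This is a deliberately simplified model — it assumes the cost list is pre-filtered (approved, in-window) and pre-sorted in waterfall order, and the `outstanding` field and item shape are illustrative, not TrackCrate's actual schema:

```python
from decimal import Decimal

def apply_waterfall(escrow, costs):
    """Deduct outstanding costs from escrow in configured order.

    Returns the per-item recoup ledger and the distributable remainder;
    anything not recouped stays on the item as carry_forward for the
    next payout run.
    """
    ledger = []
    remaining = escrow
    for item in costs:  # costs are assumed pre-sorted in waterfall order
        take = min(remaining, item["outstanding"])
        ledger.append({
            "id": item["id"],
            "recouped": take,
            "carry_forward": item["outstanding"] - take,
        })
        remaining -= take
    return ledger, remaining  # remaining escrow is distributable
```

Running it with an escrow of 100 against costs of 60 and 70 recoups 60 + 40, carries 30 forward, and leaves nothing distributable — matching the carry-forward behavior the criterion demands.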
Multi-Source Revenue Mapping and Netting
- Given revenue line items from DSPs, Bandcamp, and Direct with identifiers (ISRC/UPC/Asset ID), When the engine ingests revenue for a payout window, Then each line is mapped to the correct release/track using the configured identifier rules.
- And source-specific fees and taxes are applied to derive net revenue per line.
- And amounts in foreign currencies are converted using the configured FX rate as of the settlement date, with the rate source recorded.
- And all mapped net amounts are aggregated into a unified escrow balance for the release.
- And unmapped or invalid lines are quarantined with an error code and excluded from payout.
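A minimal sketch of the mapping, netting, and quarantine flow. All field names (`isrc`, `gross`, `fees`, `taxes`) and the flat `fx_rates` lookup are assumptions for illustration; a real implementation would record the rate source and settlement date per line:

```python
from decimal import Decimal

def ingest_revenue(lines, fx_rates, catalog):
    """Net out fees/taxes per line, convert to the escrow currency, and
    quarantine lines whose identifier doesn't map to a known release."""
    escrow = Decimal("0.00")
    quarantine = []
    for line in lines:
        if line["isrc"] not in catalog:
            # Invalid lines are excluded from payout with an error code.
            quarantine.append({**line, "error": "UNMAPPED_IDENTIFIER"})
            continue
        net = line["gross"] - line["fees"] - line["taxes"]
        converted = (net * fx_rates[line["currency"]]).quantize(Decimal("0.01"))
        escrow += converted
    return escrow, quarantine
```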
Pro-Rata Split Distribution and Rounding Rules
- Given a distributable balance D and configured party splits that sum to 100%, When allocating post-recoup amounts, Then each party allocation equals round_half_up(D * split_i, 2).
- And the sum of rounded allocations equals D, with residual cent(s) assigned to the parties with the largest fractional remainders, tie-broken by lowest party UUID.
- And each allocation record stores the inputs (D, split_i), the pre-round amount, the rounded amount, and the remainder used for residual assignment.
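A common way to guarantee the sum invariant above is the largest-remainder method: floor each raw share to cents, then hand the leftover cents to the parties with the biggest fractional remainders, ties broken by lowest UUID. This sketch uses the floor-then-distribute variant rather than the round-half-up-then-correct wording of the criterion, but it satisfies the same invariant (allocations sum exactly to D):

```python
from decimal import Decimal, ROUND_DOWN

CENT = Decimal("0.01")

def allocate(total, splits):
    """Largest-remainder allocation at cent precision.

    `splits` maps party UUID -> Decimal fraction (summing to 1).
    Floor each raw share to cents, then assign the leftover cents to
    the parties with the largest fractional remainders, tie-broken by
    lowest party UUID, so allocations always sum exactly to `total`.
    """
    raw = {p: total * s for p, s in splits.items()}
    alloc = {p: v.quantize(CENT, rounding=ROUND_DOWN) for p, v in raw.items()}
    leftover_cents = int((total - sum(alloc.values())) / CENT)
    # Sort key: largest remainder first (negated via alloc - raw), then UUID.
    by_remainder = sorted(splits, key=lambda p: (alloc[p] - raw[p], p))
    for p in by_remainder[:leftover_cents]:
        alloc[p] += CENT
    return alloc
```

Splitting 100.00 three ways yields 33.34 / 33.33 / 33.33, with the residual cent going to the lowest UUID on the three-way remainder tie.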
Adjustments for Refunds and Chargebacks
- Given previously recognized revenue contributed to recoup or distributions, When a refund or chargeback is received for specific source lines, Then the engine creates negative revenue adjustments linked to the original lines (or an aggregated reference if unavailable).
- And reverses recoup progress and party distributions up to, but not exceeding, the amounts previously applied.
- And does not reduce escrow below 0; any deficit becomes a carry-forward negative balance.
- And emits compensating journal entries referencing the originals.
- And updates outstanding balances so the next payout applies the negative balance before new distributions.
Immutable Deduction Journal and Auditability
- Given any deduction, distribution, adjustment, or reconciliation event, When the engine records the event, Then an immutable journal entry is appended including timestamp, actor/process, idempotency key, source references, before/after balances, and a hash linking to the prior entry in the release ledger.
- And attempts to edit or delete existing entries are rejected; only append-only compensating entries are allowed.
- And journal export (CSV/JSON) reproduces entries with stable IDs whose totals reconcile to the payout summary.
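The hash-linked, append-only journal can be sketched as below. Field names are illustrative; the key property is that each entry's hash covers its body plus the previous entry's hash, so any in-place edit invalidates every later entry:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel prev_hash for the first entry

def append_entry(journal, entry):
    """Append an entry chained to the previous one via SHA-256."""
    body = {**entry, "prev_hash": journal[-1]["hash"] if journal else GENESIS}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True, default=str).encode()).hexdigest()
    journal.append({**body, "hash": digest})
    return journal

def verify_chain(journal):
    """Recompute every hash; returns False if any entry was altered."""
    prev = GENESIS
    for entry in journal:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True, default=str).encode()).hexdigest()
        if digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

In production the journal would live in append-only storage (the dict-of-lists here is only for illustration), but the verification logic is the same.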
Idempotent Retry Processing
- Given a payout run identified by a unique batch ID and idempotency key, When the run is retried due to a timeout or worker restart, Then no additional deductions, allocations, or journal entries are created beyond the original run.
- And the engine returns the original run result with an idempotent-replay indicator.
- And concurrent retries are serialized via a lock at the release/payout scope.
- And metrics/logs show one processed run, with any additional retries recorded as replays.
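A minimal sketch of the replay behavior, assuming an in-memory result store and a process-local lock (a real engine would use a database unique constraint on the idempotency key and a lock scoped to the release/payout, not a dict and `threading.Lock`):

```python
import threading

class PayoutRunner:
    """Replay-safe runner: the first call with a key stores its result;
    retries get the stored result back flagged as a replay."""

    def __init__(self):
        self._results = {}
        self._lock = threading.Lock()  # serializes concurrent retries

    def run(self, idempotency_key, execute):
        with self._lock:
            if idempotency_key in self._results:
                return {**self._results[idempotency_key], "replay": True}
            result = execute()  # side effects happen exactly once
            self._results[idempotency_key] = {**result, "replay": False}
            return self._results[idempotency_key]
```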
Provider Statement Reconciliation
- Given imported provider statements for the payout window, When reconciliation executes, Then each revenue line included in the payout is marked reconciled with a matched provider reference and a checksum/amount match.
- And variances beyond the configured tolerance (e.g., > 0.01 in currency or FX delta > 0.1%) flag the payout as Needs Review and block finalization.
- And unreconciled or quarantined lines are listed with reason codes and required actions.
- And a reconciliation report summarizing totals, matched/unmatched counts, and variances per provider is attached to the payout record.
Collaborator Statements & Dashboards
"As a collaborator, I want a transparent view of deductions and my net split so that I can trust the payout and understand how it was calculated."
Description

Per-party, real-time statements showing gross revenue, itemized deductions, recouped-to-date, remaining balance, and net payable. Provides filters by date range, release, and category; drilldowns to track-level and receipt detail; and export to PDF/CSV. Supports private share links with expiry, localized currency display with source-currency drilldown, watermarked documents, and access logs to ensure transparency and trust.

Acceptance Criteria
Per-Party Real-Time Statement Calculations
Given a collaborator with defined revenue splits and recorded transactions for a selected period When the statement loads Then it displays gross revenue, itemized deductions by category, recouped-to-date, remaining recoup balance, and net payable for that collaborator. Given new revenue or an expense is recorded/approved When it is saved Then the collaborator’s statement recalculates and updates within 5 seconds. Given the collaborator has a positive remaining recoup balance When net payable is computed Then net payable equals 0.00 until remaining recoup balance is 0.00, and remaining recoup balance decreases by the recoup-eligible amounts. Given amounts are displayed Then rounding is to 2 decimal places and totals equal the sum of line items within ±0.01 in the display currency.
Statement Filtering by Date, Release, and Category
Given the user selects a date range, one or more releases, and one or more categories When filters are applied Then the statement updates to show only matching transactions and recalculated totals. Given multiple filters are applied simultaneously Then results reflect the logical AND of the filters. Given a filter is cleared or the user taps Reset Then the results revert and totals recalculate accordingly. Given datasets up to 10,000 transactions When filters are applied or cleared Then the view updates within 2 seconds. Given the user navigates to a drilldown and returns Then previously applied filters persist.
Drilldown to Track-Level and Receipt Detail
Given a statement with aggregated totals When the user clicks a release or track link Then a drilldown shows track-level revenues and deductions honoring the current filters. Given a deduction line item with an attached receipt When the user opens it Then a detail view shows receipt ID, vendor, date, category, amount in source currency, FX rate and rate date, notes, and attached file(s). Given the user has permission to view attachments When the receipt file is opened Then a watermarked preview is displayed; otherwise access is denied. Given the user closes the drilldown Then control returns to the statement with filters unchanged.
Export to PDF and CSV with Watermarking
Given a filtered statement view When the user exports to PDF or CSV Then the export contains only filtered data and totals match the on-screen totals within ±0.01. Given a PDF export Then each page includes collaborator name, project/release identifiers (if filtered), period, generation timestamp (UTC), page X of Y, and a semi-transparent watermark including org name and user ID or share link ID. Given a CSV export Then a UTF-8 CSV with headers is generated where numeric fields are plain numbers using a dot decimal separator and text is properly quoted/escaped. Given an export is generated Then the filename follows {project}-{collaborator}-{period}-{exportType}-{YYYYMMDD-HHMMSS}.pdf/csv and is available for download. Given datasets up to 10,000 rows When exporting Then the file is generated within 15 seconds.
Private Share Links with Expiry and Access Controls
Given an owner on a statement view When a share link is created with expiry date/time and optional password and permissions (view-only; export allowed/disabled) Then a unique URL is generated and the selected permissions are enforced. Given a recipient opens the link before expiry and passes authentication if enabled Then they can view the statement with the owner-saved filters, and all exports are watermarked to the share link ID. Given the link is expired by time or manually revoked When a recipient attempts access Then access is denied with HTTP 410 (expired) or 403 (revoked) and the event is logged. Given a share link with view-only scope and locked filters Then recipients cannot modify locked filters or access releases/categories outside the shared scope.
Localized Currency Display with Source-Currency Drilldown
Given a user has set a locale and preferred display currency When viewing a statement Then all amounts render in that currency with locale-appropriate symbols, grouping, and decimals. Given converted amounts are displayed Then the FX rate and rate timestamp used per transaction are stored and viewable via hover or drilldown. Given the user toggles to source-currency view Then each line shows original currency amounts and subtotals per source currency are provided. Given rounding and conversion rules Then the sum of converted line items matches displayed totals within ±0.01 and uses bankers’ rounding.
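The criterion above calls for bankers' rounding on converted amounts, which Python's `decimal` module implements as `ROUND_HALF_EVEN` — ties round to the even cent, so repeated conversions don't accumulate an upward bias. The function name is an assumption:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def to_display(amount, rate):
    """Convert a source-currency amount at the stored FX rate,
    rounding ties to the even cent (bankers' rounding)."""
    return (amount * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
```

With a unit rate, 2.675 rounds up to 2.68 (8 is even) while 2.665 rounds down to 2.66 (6 is even) — the tie-to-even behavior that distinguishes bankers' rounding from round-half-up.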
Access Logs for Shared Statements
Given share link creation, view, export, and revoke events When these events occur Then an access log entry is recorded with UTC timestamp, action, status, link ID, actor (user ID or anonymous), IP address, and user agent. Given the statement owner opens access logs Then they can filter by date range, action, and link ID, sort by timestamp, and export the filtered log to CSV. Given an event is recorded Then it appears in the access log UI within 10 seconds. Given log integrity requirements Then entries are append-only and cannot be modified or deleted via the UI; any administrative redactions are logged as separate events.
Contract Rules Binding
"As a label owner, I want recoup settings tied to contract terms so that payouts consistently reflect the agreements without manual setup each time."
Description

Binding of recoup rules to rights and contract metadata per party. Captures participation flags, carve-outs by territory and format, caps/thresholds, interest rates, and effective dates. Supports multiple contracts per release with inheritance from label templates, versioning and change history, and validation against ledger entries. Blocks payout if required contractual parameters are missing or inconsistent.

Acceptance Criteria
Binding Recoup Rules to Party Rights
- Given a release with parties and rights metadata, When a contract with recoup participation and a rule set is bound to a party, Then the system persists a binding linking releaseId, partyId, contractId, and rights scope, and the binding is retrievable via API/UI.
- Given a bound contract, When escrow calculation is executed for the release, Then the party's payout plan uses the bound contract's recoup rules and the calculation log references the bindingId.
- Given a bound contract is unbound or replaced, When escrow calculation is executed, Then only the current binding is applied, and the change is timestamped and visible in the calculation log.
Territory and Format Carve-Outs Enforcement
- Given a contract with carve-outs for specific territories and formats, When calculating recoup for ledger lines that match a carve-out, Then those lines contribute $0 to recoup for the affected party and the exclusion is logged per line item.
- Given ledger lines outside the carve-outs, When calculating recoup, Then those lines are eligible for recoup per the contract rules.
- Given overlapping carve-outs (e.g., territory and format), When both could apply, Then the most specific carve-out takes precedence and the applied rule ID is recorded.
Caps, Thresholds, Interest, and Effective Dates
- Given a contract with a recoup cap amount, When cumulative eligible deductions reach the cap, Then further deductions stop and a cap-reached event is recorded.
- Given a contract with a revenue threshold before recoup begins, When net revenue to date is below the threshold, Then no recoup deductions are taken and the threshold shortfall is displayed.
- Given a contract with an interest rate and an effective date, When interest accrual runs, Then interest is applied only to outstanding balances after the effective date, using the defined compounding schedule, and a per-period breakdown is available.
- Given ledger entries dated before the contract effective date, When calculating recoup, Then those entries are excluded from recoup for that party.
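The per-period interest breakdown can be sketched as a simple compounding loop. Monthly compounding and per-period rounding to cents are assumptions — the contract's actual compounding schedule and rounding convention would govern:

```python
from decimal import Decimal

def accrue_interest(balance, annual_rate, periods, compounding=12):
    """Per-period interest breakdown on an outstanding recoup balance.

    Interest for each period is computed on the running balance at the
    period rate (annual_rate / compounding) and rounded to cents.
    """
    schedule = []
    period_rate = annual_rate / compounding
    for period in range(1, periods + 1):
        interest = (balance * period_rate).quantize(Decimal("0.01"))
        balance += interest
        schedule.append(
            {"period": period, "interest": interest, "balance": balance})
    return schedule
```

For a 1,000.00 balance at 12% annual with monthly compounding, period 1 accrues 10.00 and period 2 accrues 10.10 on the grown balance.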
Multiple Contracts with Template Inheritance
- Given a release with multiple contracts applicable to a party, When calculating recoup for a date range, Then the system resolves precedence by latest effective date, then most specific scope (territory/format), and flags conflicts for review.
- Given a label template with default recoup settings, When creating a new party contract from the template, Then default values are pre-filled and any overridden fields are explicitly stored as overrides.
- Given the template is updated before the child contract is activated, When reapplying inheritance, Then non-overridden fields update to match the template, while overridden fields remain unchanged with an audit note.
Versioning and Change History Auditability
- Given a user edits a contract's parameters, When saving changes, Then a new immutable version is created with an incremented version number, the author, and a timestamp, and the prior version remains readable.
- Given multiple contract versions with different effective dates, When running a historical payout recalculation for a past date, Then the system uses the version active on that date and records the versionId used.
- Given a request to view change history, When opening the audit view, Then a field-by-field diff between consecutive versions is displayed, including who changed what and when.
Ledger Validation Against Contract Rules
- Given ledger entries for expenses and revenues, When validating against the bound contract, Then entries outside permitted territory/format windows, outside effective dates, or in disallowed categories are rejected with specific error codes.
- Given valid ledger entries, When validation passes, Then each entry is linked to the ruleId that justified its inclusion and is available in the calculation trace.
- Given a rejected entry is corrected, When re-validating, Then the entry status updates to valid and it becomes eligible for recoup in the next calculation run.
Payout Block on Missing or Inconsistent Parameters
- Given required contractual parameters (e.g., participation flag, recoup order, effective date, cap/threshold units) are missing or inconsistent, When initiating payout for a release, Then the payout is blocked, a Blocked status is set on the job, and a machine-readable list of missing/inconsistent fields is returned.
- Given all required contractual parameters are present and internally consistent, When initiating payout, Then the payout job proceeds past contract validation without errors.
- Given a previously blocked payout due to missing parameters, When the gaps are remediated and validation is rerun, Then the block is lifted and the job can continue.
Notifications and Dispute Workflow
"As a collaborator, I want to dispute questionable deductions and track resolution so that incorrect charges are not taken from my payout."
Description

End-to-end workflow for notifications and dispute handling. Sends email and in-app alerts for submissions, approvals, locks, deductions, and balance changes. Allows collaborators to dispute specific items with evidence uploads; introduces Disputed/Under Review/Resolved statuses; pauses or earmarks deductions during review; includes comment threads, SLA timers, and role-based visibility for timely and fair resolution.

Acceptance Criteria
Notify on Expense Submission
- Given a collaborator submits an expense for a release they have access to, When the expense is saved and passes validation, Then an in-app notification is created for Owner/Admin/Finance roles within 5 seconds. - And an email is sent within 60 seconds including expense ID, amount, currency, category, submitter, and a deep link to the expense. - And duplicate notifications are not sent if the client retries within 30 seconds (idempotency key enforcement). - And the submitter sees a success confirmation and the expense enters "Submitted" status. - And users without access to the release receive no notification.
Notify on Approval, Lock, and Scheduled Deduction
- Given an expense is approved, locked for payout, or scheduled for deduction, When the state changes, Then recipients with roles Owner, Admin, Finance, and the submitter receive in-app and email notifications within 60 seconds. - And each notification includes previous status, new status, effective date, and per-party balance deltas with new totals. - And batch changes within 10 minutes are aggregated to a single digest per recipient per action type. - And the in-app notification badge increments exactly once per event per recipient. - And recipients can navigate via deep link to the affected item.
Open a Dispute with Evidence
- Given a collaborator has visibility to an expense or advance, When they click Dispute, Then they must select a reason from a configured list and enter a comment (min 10 chars, max 1,000). - And they may upload up to 5 files (PDF/JPG/PNG/MP3/WAV), each ≤ 25 MB; uploads are virus-scanned and rejected if unsafe. - And upon submission, the item status becomes Disputed, a dispute record is created with timestamp and actor, and Owner/Admin/Finance roles are notified. - And evidence visibility is limited to Owner/Admin/Finance and the disputing party. - And an immutable audit entry records user ID, IP, and request ID.
Pause or Earmark Deductions During Review
- Given an item is Disputed or Under Review when a payout run executes, When escrow calculates deductions, Then the disputed amount is not deducted; it is earmarked as Reserved against escrow. - And statements and balance views show a Pending Deduction line with amount, reference, and dispute ID. - And split calculations exclude reserved amounts from deductions until the status becomes Resolved. - And clearing or resolving the dispute re-queues the deduction for the next payout run automatically.
SLA Timers and Escalations
- Given a dispute is created, When the timer starts, Then the initial reviewer response SLA is 3 business days and overall resolution SLA is 7 calendar days. - And countdown timers are visible to disputing party and reviewers, showing target timestamps in the project timezone. - And if the initial response SLA is missed, the dispute auto-escalates to Owner role, marked Overdue, and sends notifications immediately. - And if the resolution SLA is missed, payouts involving the disputed item remain paused and daily reminders are sent until resolution.
Threaded Comments and Mentions
- Given a dispute exists, When participants post comments, Then comments are threaded by parent, sortable by newest/oldest, and time-stamped with editor/edited flags. - And @mentions notify mentioned users (with visibility) via in-app within 5 seconds and email within 60 seconds, linking directly to the comment. - And commenters can edit their own comments within 10 minutes; edits are versioned; deletions are soft and auditable. - And comment attachments inherit dispute visibility rules and are downloadable only by authorized users.
Resolution Outcomes and Balance Change Notifications
- Given a dispute is marked Resolved with outcome Upheld, Partially Upheld, or Rejected, When saved, Then the system recalculates affected deductions and per-party balances within 30 seconds. - And all affected collaborators receive a notification showing outcome, adjusted amounts per party, before/after balances, and the effective payout run. - And the dispute record stores resolver, rationale (min 10 chars), and outcome; the dispute becomes read-only. - And an exportable audit trail (CSV/JSON) includes timestamps, actors, state transitions, and monetary deltas.
Roles & Compliance Controls
"As an admin, I want fine-grained permissions and compliance controls so that sensitive financial data remains secure and we meet regulatory obligations."
Description

Granular role-based access controls for creating, approving, and viewing financial data. Supports two-person approval for high-value items, PII redaction on receipts, secure storage with watermarking, and immutable audit logs. Provides GDPR/CCPA export and deletion workflows, configurable data retention, and field-level permissions for external accountants to meet security and regulatory requirements.

Acceptance Criteria
Role-Based Access to Financial Records
- Given a project with roles Owner, Manager, Contributor, External Accountant, and Viewer, and resources Expenses, Advances, Splits, and Receipts with fields including Amount, Vendor, Notes, and PII fields (name, address, email, phone, account identifiers), When a user attempts Create, Read, Update, Approve, or Export on a resource, Then access is granted only if the user's role has explicit permission for that action on that resource.
- And External Accountant can Read approved Expenses and non-PII fields only; they cannot Create, Update, Approve, or Export.
- And Contributor can Create Expenses and upload Receipts, but cannot Approve or view others' PII.
- And Viewer has read-only access to non-PII financial summaries only, with no resource-level data access.
- And deny-by-default is enforced for unspecified permissions, returning HTTP 403 with a permission code.
- And every access decision is logged with user, role, resource, action, outcome, and timestamp.
Two-Person Approval for High-Value Items
- Given an approval threshold of 1000 USD configured for the organization, and an expense of 1500 USD submitted by a Contributor or Manager, When approvals are requested, Then the expense status is Pending Dual Approval.
- And two distinct approvers with Approver permission must approve before the status becomes Approved.
- And the submitter cannot approve their own expense.
- And if a single approver attempts to approve twice, the second attempt is rejected with HTTP 409.
- And all approvals record approver identity, timestamp, and an immutable signature in the audit log.
- And the expense is excluded from payout calculations until Approved.
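The dual-approval rules above reduce to a small state check at approval time. This is a sketch with an assumed expense shape (`submitter`, `approvals`, `status`); the exceptions stand in for the HTTP 403/409 responses a real API would return:

```python
def approve(expense, approver):
    """Enforce two-person approval on a high-value expense.

    The submitter can't approve their own expense, and one approver
    can't count twice; the status flips to Approved only at the second
    distinct approval.
    """
    if approver == expense["submitter"]:
        raise PermissionError("submitter cannot approve own expense")
    if approver in expense["approvals"]:
        raise ValueError("duplicate approval")  # surfaced as HTTP 409
    expense["approvals"].append(approver)
    if len(expense["approvals"]) >= 2:
        expense["status"] = "Approved"
    return expense
```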
PII Redaction on Receipts
- Given a PDF or image receipt is uploaded with potential PII, When the receipt is processed, Then the system flags detected PII fields and requires manual confirmation before approval.
- And a redacted rendition is generated that masks the detected PII fields.
- And non-privileged roles can view only the redacted rendition.
- And the original unredacted file is restricted to Privacy Admins.
- And an audit log entry records redaction actions and access to the unredacted file.
Watermarked, Encrypted Receipt Storage and Access
- Given a user with permission to view a receipt, When the user downloads or previews the receipt, Then the file is delivered from encrypted storage over TLS.
- And a visible watermark includes the recipient email, project name, and timestamp on previews and downloads.
- And access links are single-use and expire after 24 hours by default.
- And attempts to access after expiry or without permission return HTTP 403.
- And each access is logged with user, IP, user agent, and timestamp.
Immutable Audit Trail for Finance Actions
- Given audit logging is enabled, When a user creates, updates, deletes, approves, exports, or views financial data, Then an append-only audit record is written containing actor ID, role, action, resource ID, before-and-after hashes, request ID, IP, and timestamp.
- And audit records are chained with cryptographic hashes to detect tampering.
- And no API or UI exists to edit or delete audit records.
- And authorized auditors can export audit records for a date range as CSV or JSON.
- And any audit export itself generates an audit record.
GDPR/CCPA Data Export (DSAR)
- Given a verified data subject access request for user U, When an admin with the Privacy role initiates an export for U, Then the system produces a machine-readable JSON package and a human-readable PDF summary containing U's personal data within the configured SLA (default 7 days).
- And the package includes financial submissions by U, associated receipt metadata, and audit references where lawful.
- And data for other users is excluded or anonymized.
- And the export is made available via an expiring link visible only to the requester and Privacy admins.
- And all steps are recorded in the audit log.
Configurable Data Retention and Deletion with Legal Hold
- Given retention policies are configured for Receipts (7 years), Expenses (7 years), and Audit Logs (10 years), When a record reaches its retention end date and is not under legal hold, Then the system queues the record for deletion and removes it from active indexes within 24 hours.
- And the record is purged from primary storage within 7 days and from backups within 30 days.
- And a tombstone with record ID, type, and deletion timestamp is retained for auditing.
- And if a legal hold is active, deletion is deferred and the hold reason is displayed to authorized users.
- And right-to-erasure requests delete or anonymize personal fields unless retention or a legal hold prevents it, in which case an exception with rationale is logged and shown.

AutoRelease

Set smart release rules—funds clear automatically on milestone approval, after a grace window, or when a fallback date hits. One-tap ‘Pause’ prevents premature payout, while alerts flag overdue decisions. Approvals sync with TrackCrate’s Signoff Ledger for hands‑off, on‑time payments.

Requirements

Milestone-Based Payout Rules
"As a label manager, I want to configure payout rules per release milestone so that payments are triggered automatically when the right approvals happen."
Description

Enable creation of payout rules tied to release milestones (e.g., mix approved, artwork final, delivery to DSPs). Admins can define rule types—approval-triggered, time-triggered, or hybrid—with per-party splits, currencies, and minimum thresholds. Support sequential or parallel evaluation with deterministic ordering, idempotent execution, and dry-run simulation. Expose configuration via UI and API, persist in a versioned schema linked to releases/projects, and store times in UTC with project-level time zone display. Outcome: automatic, predictable payouts when conditions are met, eliminating manual coordination.

Acceptance Criteria
Approval-Triggered Payout on Milestone Signoff
- Given a release with a milestone "Mix Approved" linked to the Signoff Ledger, and an active approval-triggered payout rule with party splits totaling 100%, ISO 4217 currencies, and per-party minimum thresholds, When the milestone status changes to Approved in the Signoff Ledger, Then a payout is scheduled for each party whose computed share meets or exceeds its minimum threshold.
- And no payout record is created for any party whose computed share is below its threshold; a "BelowThreshold" status is logged instead.
- And the payout uses the configured currency per party and stores timestamps in UTC.
- And execution is idempotent: reprocessing the same Approved event does not create duplicate payouts.
- And the payout event is recorded with a deterministic rule identifier and evaluation order.
Time-Triggered Payout After Grace Window and Fallback Date
- Given a time-triggered payout rule with a 72-hour grace window and a fallback date set for the milestone, and the milestone remains in Pending state with no approval during the grace window, When the grace window elapses, Then the system emits an OverdueDecision alert to the project's configured notification channel.
- And no payout executes while the rule is Paused; otherwise a payout is scheduled per the configured splits.
- When the fallback date is reached without approval, Then the payout is executed automatically, with timestamps stored in UTC and displayed in the project time zone.
- And execution is idempotent across retries and scheduler restarts.
Hybrid Rule Prioritizing Approval with Fallback Timeout
- Given a hybrid payout rule configured to trigger on approval, or on the fallback date after a 48-hour grace window, When the milestone is approved before the fallback date, Then the payout is executed immediately upon approval and the fallback trigger is canceled.
- When approval does not occur before the fallback date, Then the payout executes at fallback without requiring approval.
- And in all cases only one payout occurs per rule, enforced by idempotency.
- And the audit log records which trigger path fired (Approval or Fallback), with timestamps in UTC.
Deterministic Sequential and Parallel Rule Evaluation
- Given a project with multiple payout rules assigned explicit sequence numbers and an evaluation mode per rule (Sequential or Parallel), When evaluation runs, Then rules marked Sequential execute in ascending sequence order, and a subsequent rule does not start until the prior rule's payouts have status Paid or Skipped.
- And rules marked Parallel may execute concurrently but resolve using their sequence numbers for deterministic logging and audit.
- And re-running evaluation produces the same ordered audit trail and no duplicate payouts (idempotent).
- And race conditions are prevented: concurrent executions use an idempotency key per rule+milestone+version.
One‑Tap Pause Prevents Premature Payout
- Given an active payout rule with upcoming triggers, and an operator presses Pause in the UI (or calls the Pause API), When a triggering event occurs (approval, grace-window lapse, or fallback date), Then no payout is executed while the rule status is Paused.
- And a queue entry is recorded with reason "Paused" and the next re-evaluation time.
- When the operator presses Resume, Then the system immediately re-evaluates the rule and executes any now-eligible payouts exactly once.
- And all pause/resume actions are captured in the audit log with actor identity and UTC timestamps.
Dry‑Run Simulation Produces Predictive Payout Report
- Given a payout rule (any type) and the current project/release state, When the user runs a Dry‑Run from the UI or API, Then the system returns a read-only report including the trigger path that would fire, per-party amounts, currencies, whether thresholds would block any party, the estimated execution timestamp (UTC), and the evaluation order.
- And no payouts, holds, or balance changes are created.
- And the dry-run result is version-stamped with the rule version and data snapshot time, and is reproducible for that snapshot.
Versioned Persistence and API/UI Configuration for Payout Rules
- Given an admin creates a payout rule via the API with a valid schema, When POST /rules is called, Then the API responds 201 with the stored rule, ruleId, version v1, and an ETag, and links the rule to the target release/project.
- And all datetime fields are stored in UTC; the UI displays them in the project's time zone.
- When the rule is updated via PUT/PATCH, Then a new immutable version (v2, v3, …) is created; prior versions remain readable, the ETag changes, and the audit log captures the diff.
- And GET /rules?releaseId=… returns the latest version by default, with an option to request a specific version.
- And the UI reflects the latest version and warns on concurrent edits using ETag preconditions.
Grace Window and Fallback Date Logic
"As a project lead, I want a grace window and fallback date so that payouts happen on time without me babysitting every decision."
Description

Provide a configurable grace window after milestone approval during which payouts are queued but not released, allowing last-minute changes or pauses. Allow a fallback date to auto-clear funds if no explicit approval or rejection occurs by that date. The engine must evaluate grace and fallback concurrently with other rules, respect pause states, and update expected payout timestamps. Include countdown indicators in UI and persist all timers with durable, recoverable schedulers to survive restarts. Outcome: timely, hands-off payments with predictable safeguards against premature release or indefinite delay.

Acceptance Criteria
Grace Window Holds Payout After Approval
Given a milestone is approved at T0 and a grace window GW is configured When the approval event is recorded by the system Then the system sets expected_payout_at = T0 + GW And the payout is queued but not released before expected_payout_at And the UI shows a real-time countdown from GW to 0 with accuracy ±1s And an audit entry is written with approval_id, GW, expected_payout_at
Fallback Date Auto-Clears Without Decision
Given a milestone has a fallback_date FD and no approval or rejection exists by FD When current_time reaches FD Then the system sets expected_payout_at = FD And the payout is released at FD unless Pause is ON And an audit entry is written with trigger = "fallback" and expected_payout_at = FD
Pause Prevents Release Across All Triggers
Given a milestone payout is queued by grace or pending by fallback And Pause is turned ON before funds are released When current_time reaches expected_payout_at or FD Then no payout is released while paused And the UI displays status = "Paused" with countdown halted And upon turning Pause OFF at time Tr, the system:
- If approval exists and remaining_grace > 0: sets expected_payout_at = Tr + remaining_grace
- If approval exists and remaining_grace <= 0: releases payout immediately
- If no decision and Tr >= FD: releases payout immediately; otherwise sets expected_payout_at = FD
And all changes are audited
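The resume branching in this criterion can be sketched as a pure function. Times are epoch seconds here for simplicity; the function and field names are hypothetical, not the engine's actual API.

```python
def on_resume(resumed_at, approval_at, grace_window, fallback_date, paused_at):
    """Decide the next action when Pause is turned OFF (illustrative sketch).

    Returns ("release", None) for an immediate release, or
    ("schedule", expected_payout_at) to re-arm the timer.
    remaining_grace is measured at the moment the pause began.
    """
    if approval_at is not None:
        remaining_grace = (approval_at + grace_window) - paused_at
        if remaining_grace > 0:
            return ("schedule", resumed_at + remaining_grace)
        return ("release", None)
    # No approval or rejection recorded: fall back to the fallback date.
    if resumed_at >= fallback_date:
        return ("release", None)
    return ("schedule", fallback_date)
```

Keeping this logic deterministic and side-effect free makes the "recomputed deterministically" requirement in the audit criterion straightforward to test.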
Concurrent Rule Evaluation and Recalculation
Given a milestone has both a grace window GW and a fallback date FD configured When an approval is recorded at Ta where Ta < FD Then the fallback trigger is canceled and payout schedule follows expected_payout_at = Ta + GW When a rejection is recorded at any time before release Then all timers are canceled and no payout is released When GW or FD is changed by an authorized user Then expected_payout_at is recalculated and updated in UI and API within 5 seconds
Expected Payout Timestamp Updates and Audit Trail
Given a milestone with AutoRelease enabled When any of the following occurs: approval, rejection, Pause ON, Pause OFF, GW change, FD change Then expected_payout_at is recomputed deterministically, persisted, and exposed via API/UI within 5 seconds And each change writes an immutable audit entry capturing actor, old_value, new_value, timestamp, and reason
UI Countdown Indicators and State Display
Given a milestone with an upcoming expected_payout_at When the user views the release panel Then the UI displays a countdown to expected_payout_at with 1-second resolution and accuracy ±1s And shows state badges for Queued by Grace, Pending Fallback, Paused, Released, or Canceled as applicable And on page refresh, the countdown resumes correctly based on server time with drift ≤1s And if expected_payout_at changes, the countdown and label update within 5 seconds
Durable Scheduler and Recovery Guarantees
Given scheduled payouts exist for one or more milestones When the AutoRelease service restarts, crashes, or a node failover occurs Then all timers are restored from durable storage within 10 seconds And no payout is duplicated (idempotency keys enforce at-most-once release) And if expected_payout_at elapsed during downtime and the item is not paused or rejected, the payout executes immediately upon recovery and is audited And countdown indicators resynchronize to the recovered schedules within 5 seconds
One-Tap Pause/Resume Control
"As a producer, I want a single pause switch I can hit if something looks wrong so that no money goes out until we resolve the issue."
Description

Implement a permissioned, one-tap control to pause or resume all pending payouts for a release, a milestone, or a specific collaborator. Pausing must immediately halt new disbursements, cancel scheduled jobs, and mark queued transactions as paused with a required reason. Resuming should recalculate schedules and re-queue eligible payouts. Display a prominent pause banner and history, and expose pause state via API and webhooks. Outcome: prevent premature payouts while providing clear visibility and fast recovery.

Acceptance Criteria
Release-Level One-Tap Pause Halts Disbursements
Given a release with at least one pending payout and at least one scheduled payout job And the actor has pause:release permission When the actor taps Pause at release scope and provides a non-empty reason (3–250 chars) Then the system records a pause with scopeType=release, scopeId={releaseId}, reason, actorId, pausedAt=serverTime And no new disbursement records are created for that release with createdAt > pausedAt And all scheduled payout jobs for that release are cancelled within 5 seconds And all queued payouts for that release transition to state=paused with pauseReason populated and pausedAt set And the API GET /releases/{id}/pause-state returns isPaused=true with matching metadata within 2 seconds
Milestone-Level Pause Scopes Correctly
Given a release with multiple milestones (A paused target, B unpaused) each with pending payouts and scheduled jobs And the actor has pause:milestone permission When the actor taps Pause on Milestone A and provides a valid reason Then only payouts and jobs associated to Milestone A are cancelled or marked paused within 5 seconds And Milestone B payouts and jobs remain unaffected And the API GET /milestones/{id}/pause-state for A returns isPaused=true and for B returns isPaused=false
Collaborator-Level Pause Isolated to Collaborator
Given a release with multiple collaborators having pending payouts on the same milestone And the actor has pause:collaborator permission When the actor taps Pause for Collaborator X and provides a valid reason Then only payouts and jobs for Collaborator X are cancelled or marked paused within 5 seconds And payouts for other collaborators on the same milestone remain eligible and continue per schedule And the collaborator’s queued payout records show state=paused and pauseReason for X only
Resume Recalculates Schedules and Re-queues Eligible Payouts
Given a scope (release, milestone, or collaborator) previously paused with queued payouts in state=paused And applicable grace windows and fallback dates are configured When an authorized actor taps Resume for that scope Then the system sets isPaused=false and records a resume event with resumedAt=serverTime And recalculates payout schedules using current time, milestone approvals, grace windows, and fallback dates And re-queues only payouts that are eligible as of resumedAt within 10 seconds, marking jobs with requeueReason=resume And does not create duplicate disbursement records for payouts that were previously created And if a higher-scope pause (e.g., release-level) remains active, payouts remain blocked and the scope shows isPaused=false with effectivePauseSource=parent
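The parent-scope rule above (a resumed child stays blocked while a release-level pause is active) amounts to resolving the scope chain outermost-first. A minimal sketch, with illustrative names:

```python
def effective_pause(scopes):
    """Resolve the effective pause for a scope chain ordered parent-first,
    e.g. [("release", ...), ("milestone", ...), ("collaborator", ...)].
    Each entry is (scope_type, is_paused). Returns (blocked, source)
    where source plays the role of effectivePauseSource (sketch)."""
    for scope_type, is_paused in scopes:  # outermost (most restrictive) wins
        if is_paused:
            return (True, scope_type)
    return (False, None)
```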
Pause Banner and History Visibility
Given a user views any page within a paused scope (release/milestone/collaborator) When the page loads after a pause has been recorded Then a prominent banner is displayed above primary content within 1 second indicating scope, reason, actor, and pausedSince timestamp And the banner provides a single Resume action if the viewer has permission, otherwise hides the action And a Pause History panel lists entries with timestamp, actor, scope, action (pause/resume), and reason, most recent first, retaining at least 180 days
API and Webhook Exposure of Pause State
Given a system integrator queries pause state When GET /releases/{id}/pause-state (and equivalent for milestones/collaborators) is called Then the response is 200 with isPaused, scopeType, scopeId, reason, pausedAt, pausedBy, effectivePauseSource, and history[] fields And when a pause or resume occurs, webhooks autoRelease.pause.created and autoRelease.pause.resumed are delivered within 10 seconds with a signed payload including scopeType, scopeId, releaseId, actorId, reason, occurredAt, and correlationId And webhook delivery is retried at least 3 times with exponential backoff on non-2xx responses and is idempotent via eventId
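Signed, idempotent webhook delivery as described above can be sketched with HMAC-SHA256. The envelope fields and header name are assumptions for illustration, not TrackCrate's documented schema:

```python
import hashlib
import hmac
import json
import time
import uuid

def sign_webhook(secret: bytes, event: dict) -> dict:
    """Build a signed webhook delivery (illustrative sketch)."""
    envelope = {
        "eventId": str(uuid.uuid4()),   # consumers deduplicate on this
        "occurredAt": int(time.time()),
        "payload": event,
    }
    body = json.dumps(envelope, sort_keys=True, separators=(",", ":")).encode()
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return {"body": body, "headers": {"X-Signature": f"sha256={signature}"}}

def verify_webhook(secret: bytes, body: bytes, header: str) -> bool:
    """Receiver-side check; compare_digest avoids timing leaks."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, header)
```

Because the signature covers the exact serialized body, any retry carries an identical eventId and signature, which is what lets subscribers treat redeliveries as no-ops.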
Permission, Validation, and Audit for One-Tap Pause/Resume
Given a user without the relevant permission attempts to pause or resume any scope When they invoke the control Then the system returns 403 Forbidden, shows no state change, and records a security audit entry with userId, scope, action, clientIp, and attemptedAt And given a permitted user invokes Pause/Resume When the confirmation prompt is submitted with a non-empty reason (3–250 chars) Then the state change completes with a single primary action, and the audit log records the event with oldState and newState And the control is accessible (keyboard operable, ARIA-labelled) and responds within 2 seconds with success or error
Overdue Decision Alerts & Reminders
"As an artist manager, I want timely reminders when approvals are late so that I can chase the right people and avoid delays."
Description

Generate alerts when approvals are overdue or grace windows are nearing expiry. Support email, in-app notifications, Slack/webhook channels, and digest modes to reduce noise. Provide configurable thresholds, escalation paths, and time-zone aware delivery windows. Include snooze and unsubscribe controls per user and per release. Log all notifications for audit and measure response times. Outcome: stakeholders are nudged to act, keeping releases on schedule.

Acceptance Criteria
Trigger Overdue Approval Alert on Missed Milestone
Given an approval milestone with due_at and assigned approvers When current_time exceeds due_at by 2 minutes and no decision is recorded in the Signoff Ledger Then create an "Overdue Approval" alert event And deliver the alert to each assigned approver via their enabled channels (email, in-app, Slack/webhook) within 5 minutes of event creation And include release name, milestone name, due_at with timezone, time overdue, and CTAs to open approval, Pause payout, Snooze, and Unsubscribe And do not send duplicate overdue alerts to the same user for the same milestone within a 12-hour cooldown unless the milestone state changes (approved/rejected/paused or due_at updated)
Warn Approvers Before Grace Window Expires
Given a milestone with a configured grace_window_end and a reminder_threshold (e.g., 24h) for grace expiry warnings When current_time equals grace_window_end minus reminder_threshold and payout has not executed and Pause is not active Then send a "Grace Window Expiry" warning to approvers and watchers via their enabled channels And include remaining time, the impact (auto-payout on expiry), and CTAs to Approve/Reject or Pause payout And if grace_window_end is reached with no action, mark the prior warning as expired and trigger the overdue/escalation flow
Deliver Alerts Across Channels with Optional Digest
Given each user has notification preferences defining enabled channels (email, in-app, Slack/webhook) and digest mode (none, daily, weekly) with a local send_time When alert events occur Then deliver immediately via enabled channels unless digest mode is active And if digest mode is daily or weekly, aggregate events into a single digest sent at the user's configured local send_time, grouped by release with counts and highest priority per group And send webhooks with a documented JSON payload; consider delivery successful on 2xx; retry up to 3 times with exponential backoff on non-2xx; on final failure, send a fallback email and log the failure And post Slack messages to the configured destination with deep links; show in-app notifications in the Notification Center within 1 minute of event creation
Respect User Time-Zone and Quiet Hours
Given a user has a stored IANA time_zone and configured delivery window (e.g., 09:00–18:00 local) or quiet hours When an alert is scheduled outside the user's delivery window Then queue the alert and deliver at the next opening of the window in that user's local time And if a grace-window warning has less than 1 hour remaining at the start of quiet hours, override quiet hours and deliver immediately And handle daylight saving transitions by using the user's IANA time zone rules so that digests are never skipped; if the exact send_time does not exist, send at the next valid local time
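The "send at the next valid local time" rule for a DST spring-forward can be sketched with the standard-library zoneinfo module. This is one possible approach, assuming the user's send time is a whole hour:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def next_digest_utc(after_utc: datetime, tz_name: str, send_hour: int) -> datetime:
    """Next digest send time in UTC for a user's local send hour, skipping
    local times erased by a spring-forward (illustrative sketch)."""
    tz = ZoneInfo(tz_name)
    local = after_utc.astimezone(tz)
    candidate = local.replace(hour=send_hour, minute=0, second=0, microsecond=0)
    if candidate <= local:
        candidate += timedelta(days=1)  # wall-clock arithmetic: same local hour
    # A spring-forward can erase the send time; the UTC round-trip then lands
    # on the next valid wall-clock time (e.g. 02:00 -> 03:00), which we keep.
    roundtrip = candidate.astimezone(timezone.utc).astimezone(tz)
    if roundtrip.hour != candidate.hour:
        candidate = roundtrip
    return candidate.astimezone(timezone.utc)
```

Using the IANA zone rules directly (rather than a fixed UTC offset) is what keeps digests from being skipped or doubled across DST transitions.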
Per-User, Per-Release Snooze and Unsubscribe Controls
Given a user receives an alert for a specific release and milestone When the user selects Snooze for 1h, 4h, until next delivery window, or until due_at/grace_window_end Then suppress further alerts for that user–release–milestone across all channels for the chosen duration When the user selects Unsubscribe for a channel at the release or milestone scope Then stop non-digest alerts on that channel for that user at the selected scope and confirm the change; include a one-click resubscribe link in confirmation And allow the user to manage and reverse Snooze/Unsubscribe in Notification Settings; changes take effect within 60 seconds across services
Escalate Non-Responses per Policy
Given a workspace or release-level escalation policy defines steps (e.g., step1 after 24h to manager, step2 after 48h to label owner) When no decision is recorded after an overdue alert for the configured step duration Then send an escalation alert to the designated recipients including prior notification history and current time overdue And halt further escalation immediately when a decision is recorded or Pause is activated And allow per-release overrides to escalation steps and recipients; overrides take precedence over workspace defaults
Notification Logging and Response Time Measurement
Given any alert or digest is sent or attempted Then create an immutable audit log entry with event_id, release_id, milestone_id, recipient user_id, channel, payload hash, created_at, queued_at, sent_at, delivery_status (queued/sent/bounced/failed/acknowledged), and retry_count And record interaction metrics where available (email open/click timestamps, in-app view timestamp, Slack acknowledgement) linked to the event_id And compute response_time as the duration between the first alert to a user for a milestone and that user's subsequent Approve/Reject/Pause; expose per-release aggregates (count, median, p90) and per-user metrics; provide CSV export filtered by date range and release
Signoff Ledger Sync & Reconciliation
"As a finance admin, I want AutoRelease to sync with the Signoff Ledger so that approvals accurately and automatically trigger payouts without manual entry."
Description

Integrate AutoRelease with TrackCrate’s Signoff Ledger to consume approval events and update payout rule evaluation in near real-time. Ensure two-way links between signoffs, milestones, and payout transactions. Implement reconciliation jobs to detect and correct drift (e.g., missing events, reverted approvals) with conflict resolution policies and human-readable discrepancy reports. Provide webhooks/events for downstream systems. Outcome: a single source of truth where approvals reliably drive payments.

Acceptance Criteria
Near Real-Time Ledger Event Sync
Given a valid approval event from Signoff Ledger with idempotency key and correlation id, When AutoRelease consumes the event, Then payout rule evaluation is triggered and persisted within 10 seconds of event receipt. Given multiple events for the same signoff arrive out-of-order, When they are processed, Then the final milestone state reflects the event with the highest ledger sequence/timestamp and earlier events do not overwrite newer state. Given a duplicate event (same idempotency key) is received, When processed, Then no duplicate payout transaction or state mutation occurs and the handler logs an idempotent no-op. Given a transient processing failure occurs, When retries are attempted, Then the event is retried up to 5 times with exponential backoff and moved to a dead-letter queue with alerting if still failing.
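The duplicate and out-of-order handling above reduces to two checks: an idempotency-key set and last-writer-wins by ledger sequence. A sketch, with assumed field names:

```python
def apply_ledger_event(state, event, seen_keys):
    """Apply a Signoff Ledger event idempotently (illustrative sketch).

    state: {"status": str, "seq": int} — current milestone state
    event: {"idempotency_key": str, "seq": int, "status": str}
    seen_keys: set of already-processed idempotency keys
    Returns (new_state, applied).
    """
    if event["idempotency_key"] in seen_keys:
        return (state, False)  # duplicate: idempotent no-op, just log it
    seen_keys.add(event["idempotency_key"])
    if event["seq"] <= state["seq"]:
        return (state, False)  # stale out-of-order event never overwrites newer state
    return ({"status": event["status"], "seq": event["seq"]}, True)
```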
Two-Way Linking Between Signoffs, Milestones, and Payouts
Given a milestone is associated to a signoff and a payout is created, When records are persisted, Then each entity stores immutable foreign keys linking signoff_id, milestone_id, and payout_id and these links are retrievable via API. Given links are established, When data integrity checks run, Then referential integrity is enforced: a milestone may have at most one payout per active release rule and dangling links are rejected. Given a payout is updated (voided, paused, executed), When the change is saved, Then the related signoff and milestone references remain intact and the audit trail records the linkage before and after the change.
Handling Reverted or Withdrawn Approvals
Given a previously approved signoff is changed to changes_requested or withdrawn, When the event is received or detected by reconciliation, Then any scheduled payout for the linked milestone transitions to Paused within 10 seconds and pending disbursement jobs are canceled if not yet executed. Given the payout was already executed, When the revert is processed, Then a reversal task is created (or manual intervention required is flagged) and the discrepancy is recorded with current resolution state. Given a revert is later re-approved, When the new approval event is received, Then payout evaluation re-runs and scheduled payout status returns to Ready according to the active rules.
Scheduled Reconciliation and Drift Correction
Given the reconciliation job runs every 30 minutes with a 90-day lookback, When ledger events are compared to internal payout and milestone state, Then missing, extra, or mismatched approvals are detected with explicit error codes (missing_event, duplicate_event, state_mismatch). Given discrepancies are found, When auto-correction policies apply, Then the system replays or compensates events idempotently and updates state to match the ledger within the same run. Given reconciliation completes, When reporting is generated, Then a human-readable discrepancy report (CSV and HTML) is produced with counts, affected IDs, actions taken, and unresolved items, and delivered to configured channels within 5 minutes of job completion.
Conflict Resolution Policies and Deterministic Outcomes
Given conflicting inputs exist (ledger approval, AutoRelease pause, manual hold), When state is evaluated, Then precedence is deterministic: Pause > Manual Hold > Ledger Approval, and evaluation outcome is logged with the precedence reason. Given two ledger events with conflicting statuses arrive close together, When processed, Then the event with the higher ledger version/sequence wins and the losing event is recorded as superseded without altering final state. Given a payout has been executed, When a later conflict would otherwise reverse it automatically, Then automatic reversal does not occur and a manual resolution task is opened with clear instructions and links to the related entities.
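The deterministic precedence (Pause > Manual Hold > Ledger Approval) and the supersede-by-sequence rule can be combined into one comparison. A sketch with illustrative names:

```python
# Lower rank wins: Pause > Manual Hold > Ledger Approval.
PRECEDENCE = {"pause": 0, "manual_hold": 1, "ledger_approval": 2}

def resolve(inputs):
    """Pick the winning input deterministically and explain why (sketch).

    inputs: list of {"kind": str, "seq": int}. Among inputs of the same
    kind, the higher ledger sequence wins, matching the superseded-event
    rule in this criterion.
    """
    winner = min(inputs, key=lambda i: (PRECEDENCE[i["kind"]], -i["seq"]))
    reason = f"{winner['kind']} wins at precedence rank {PRECEDENCE[winner['kind']]}"
    return winner, reason
```

Logging the returned reason alongside the outcome satisfies the "precedence reason" requirement without any extra bookkeeping.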
Outbound Webhooks and Events for Downstream Systems
Given a payout-relevant state change occurs (evaluation result, scheduled, executed, paused, voided, reconciled), When publishing outbound events, Then an event is sent within 10 seconds with a documented schema, HMAC signature, idempotency key, and correlation id. Given a subscriber endpoint responds with non-2xx, When retries are attempted, Then delivery is retried with exponential backoff for up to 24 hours and then moved to a dead-letter queue with alerts. Given the same event is redelivered, When the subscriber uses the idempotency key, Then duplicate processing is avoided on the subscriber side and TrackCrate records the deduplicated delivery in metrics.
End-to-End Auditability and Traceability
Given any payout transaction ID is provided, When an audit trace is requested, Then the system returns a complete timeline including source ledger event IDs, processing attempts, rule evaluations, reconciliation actions, user actions (pause/hold), and outbound webhook deliveries with timestamps. Given an audit trace is exported, When requesting export, Then JSON and CSV formats are available and produced within 60 seconds for traces containing up to 10,000 events. Given retention policies, When querying historical traces, Then audit data remains accessible for at least 24 months and access is permissioned and logged.
Escrow Hold & Payment Orchestration
"As a label owner, I want funds held and released automatically based on approvals so that everyone is paid correctly and on time."
Description

Orchestrate funds capture, holding, and release through supported payment providers with split payouts to collaborators. Create payment intents at milestone creation or contract signature, hold funds in escrow, and release on rule satisfaction. Handle multi-currency, tax withholding, fees, and minimum payout amounts; support retries with exponential backoff and provider webhooks for finality. Store only tokens to remain out of PCI scope and encrypt sensitive metadata. Provide payout statements and exportable remittance reports. Outcome: reliable, compliant disbursements aligned with approval states.

Acceptance Criteria
Idempotent Payment Intent Creation at Milestone/Contract
Given a milestone is created with payable amount and currency and a connected supported provider account When the milestone is saved Then a payment intent/authorization is created with capture_later semantics, stored as a provider token/ID only, and the local escrow_state is set to 'pending_hold' Given a contract is countersigned and milestones require upfront escrow When the contract signature event is processed Then payment intents are created per payable milestone according to the schedule and linked to each milestone Given the same milestone creation request is retried within 24 hours When processed with the idempotency key derived from agreement_id+milestone_id+amount+currency Then the existing intent is returned and no duplicate holds are created Given the provider returns a non-retryable error for intent creation When the attempt completes Then escrow_state is set to 'failed', the error is logged with code, and the user is notified with remediation steps
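The idempotency key "derived from agreement_id+milestone_id+amount+currency" can be sketched as a hash over a canonical string; the exact key format here is an assumption:

```python
import hashlib

def intent_idempotency_key(agreement_id, milestone_id, amount_cents, currency):
    """Stable idempotency key for payment-intent creation (sketch).

    The same inputs always yield the same key, so a retried creation
    request within the window maps to the existing intent rather than
    creating a duplicate hold.
    """
    raw = f"{agreement_id}:{milestone_id}:{amount_cents}:{currency.upper()}"
    return hashlib.sha256(raw.encode()).hexdigest()
```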
Escrow Hold Finality via Provider Webhooks
Given a payment authorization/intent is created When the provider sends a hold/authorization-confirmed webhook Then the event signature is verified, replay is rejected (unique event_id window 7 days), the event is durably persisted, and escrow_state transitions to 'held' Given an out-of-order or duplicate webhook arrives When the local state is at an equal or later terminal state Then the event is acknowledged without changing state and an idempotent log entry is recorded Given webhook processing succeeds When responding to the provider Then a 2xx response is returned only after transactionally committing state and enqueueing follow-up jobs
Rule-Based Release with Grace Window, Pause, and Ledger Sync
Given the Signoff Ledger marks milestone as Approved at t0 When the configured grace window elapses and the milestone is not Paused Then the system triggers funds release at t0 + grace_window within ±1 minute and records a release job Given the milestone is Paused before release When rules would otherwise release funds Then no capture/payout occurs and an alert is sent to the project owner and collaborators Given no decision is made by the fallback date When the fallback date is reached Then funds are released automatically unless Paused and an overdue decision alert is recorded Given an approval is revoked before the grace window expires When the revocation event is processed Then the scheduled release is canceled and escrow_state remains 'held' Given a release completes successfully When syncing to the Signoff Ledger Then payment state is updated to Paid with provider transaction IDs and timestamps
Split Payouts with Multi-Currency, Fees, Taxes, and Minimums
Given a release amount in source currency with collaborator splits (percent or fixed) When executing payouts Then each collaborator's net is computed as gross_share minus platform fees, provider fees, and tax withholding, with rounding to 2 decimals and the sum of nets + all fees + all tax equals the released amount per currency within 0.01 Given collaborators have different payout currencies When executing payouts Then FX rates from the provider at capture time are applied and each statement records the rate and timestamp used Given a collaborator's computed net is below their minimum payout threshold When executing payouts Then no payout is created for that collaborator; the amount is accrued to their pending balance with a reason code Given a collaborator lacks a valid payout method or required tax form When executing payouts Then their share is withheld to pending with a reason code and other payouts proceed
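The reconciliation constraint above (allocations sum exactly to the released amount despite 2-decimal rounding) is typically met with largest-remainder rounding. A sketch over integer cents; fee and tax deductions per party would happen before this step:

```python
def split_amount(total_cents, percents):
    """Split an integer amount by percentage shares so the parts sum
    exactly to the total, using largest-remainder rounding (sketch)."""
    exact = [total_cents * p / 100 for p in percents]
    floors = [int(x) for x in exact]
    shortfall = total_cents - sum(floors)
    # Hand the leftover cents to the parties with the largest remainders.
    order = sorted(range(len(percents)),
                   key=lambda i: exact[i] - floors[i], reverse=True)
    for i in order[:shortfall]:
        floors[i] += 1
    return floors
```

Naively rounding each share independently can gain or lose a cent per party; this scheme guarantees the per-currency sums reconcile to the released amount exactly.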
Resilient Retries with Exponential Backoff and Idempotency
Given a transient provider error (HTTP 5xx, timeout, rate limit) occurs during intent creation, capture, or payout When retrying the operation Then the system retries up to 5 times with exponential backoff starting at 1 minute and doubling up to 16 minutes with ±20% jitter Given the final retry attempt fails When the operation exceeds max attempts Then the job is moved to a dead-letter queue, the milestone is flagged 'action_required', and alerts are sent to ops and the project owner Given an operation is retried When using the same idempotency key Then provider-side and system-side deduplication ensure no duplicate charges or payouts are created
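The retry schedule above (1 minute doubling to a 16-minute cap, ±20% jitter, 5 attempts) can be sketched as follows; the injectable RNG is just for deterministic testing:

```python
import random

def backoff_schedule(max_attempts=5, base_minutes=1.0, cap_minutes=16.0,
                     jitter=0.2, rng=None):
    """Delays (in minutes) before each retry attempt (illustrative sketch):
    exponential from base_minutes, capped at cap_minutes, with ±jitter."""
    rng = rng or random.Random()
    delays = []
    for attempt in range(max_attempts):
        nominal = min(base_minutes * (2 ** attempt), cap_minutes)
        factor = 1 + rng.uniform(-jitter, jitter)  # spread retries to avoid thundering herd
        delays.append(nominal * factor)
    return delays
```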
PCI-Safe Tokenization and Encrypted Metadata
Given payment details are collected When persisting payment information Then only provider tokens/IDs are stored; no PAN, CVV, or raw bank account numbers are persisted and requests use TLS 1.2+ Given sensitive metadata (e.g., tax IDs, bank last4, provider secrets) is stored When writing to the database Then values are encrypted at rest using AES-256-GCM via managed KMS with key rotation every 90 days and access is enforced via least-privilege roles with audit logs Given a token or secret is rotated or revoked When the system attempts to use it Then stale tokens are detected, the operation fails closed with a non-retryable error, and the user is prompted to reconnect
Payout Statements and Exportable Remittance Reports
Given a release event completes (success or partial) When generating statements Then a payout statement is produced within 60 seconds including milestone ID, approvals, gross, fees, taxes, FX rates, collaborator allocations, net amounts, and provider transaction IDs Given a date range and project filter When exporting remittance reports Then CSV and PDF files are generated with per-currency subtotals that reconcile to provider payouts within ±0.01 and include pending/withheld items with reason codes Given the project time zone is configured When rendering timestamps in statements and exports Then all times are displayed in the project time zone with UTC offset and ISO-8601 formatting
Audit Trail & Dispute Freeze
"As a label counsel, I want a complete audit trail and a way to freeze payouts during disputes so that we can investigate and resolve issues without losing control."
Description

Maintain an immutable audit log of rule evaluations, approvals, pauses, notifications, and payment events with actor, timestamp, and before/after state. Support exporting logs and linking them to releases and collaborators. Provide a dispute freeze that temporarily locks payouts for a scope (release, milestone, or collaborator) while preserving evidence and timers, with clear UI and API to lift the freeze. Outcome: transparency for stakeholders and defensible records for resolving disputes.

Acceptance Criteria
Append-only audit logging for AutoRelease events
Given AutoRelease processes a rule evaluation, When the evaluation completes, Then an audit entry is appended containing event_type="rule_evaluation", actor (user/service), correlation_id, entity_scope (release_id/milestone_id/collaborator_id), before_state, after_state, outcome, and timestamp (UTC ISO 8601 with millisecond precision). Given an approval, pause, notification, or payment event occurs, When it is handled, Then an audit entry is appended with the required common fields and type-specific fields (e.g., ledger_approval_id, notification_recipient_checksum, payment_provider_reference). Given any attempt to update or delete an existing audit entry, When the write is attempted via UI, API, or internal service, Then the system rejects it (HTTP 403 or 409) and appends a security_event audit entry. Given the audit log store, When integrity is verified, Then each entry includes previous_hash and entry_hash and the hash chain validates end-to-end for the requested scope.
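The previous_hash/entry_hash chain described above can be sketched in a few lines; field handling here is illustrative:

```python
import hashlib
import json

def append_entry(chain, entry):
    """Append an audit entry linked to its predecessor by hash (sketch)."""
    previous_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    record = dict(entry, previous_hash=previous_hash)
    payload = json.dumps(record, sort_keys=True).encode()
    record["entry_hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record

def verify_chain(chain):
    """Re-derive every hash; a tampered, removed, or reordered entry
    breaks the chain from that point on."""
    previous_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "entry_hash"}
        if body["previous_hash"] != previous_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["entry_hash"]:
            return False
        previous_hash = record["entry_hash"]
    return True
```

Because each entry commits to its predecessor, end-to-end validation over a scope is a single linear pass, which is what the verification requirement relies on.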
Export audit logs by scope/time with linkage
Given a user with Export Audit Logs permission, When they request an export with scope=release|milestone|collaborator and a time_window and format=CSV|JSON, Then the system returns a downloadable file containing all matching entries with linkage fields (release_id, milestone_id, collaborator_id, ledger_approval_id) and export_metadata (generated_at, filter, row_count, checksum). Given an export of up to 100,000 entries, When requested, Then it completes within 30 seconds at the 95th percentile and streams results to avoid timeouts. Given a user without PII clearance, When they export logs, Then sensitive fields are redacted and a redaction_notice is included in export_metadata. Given results exceed a single page, When the API is used, Then cursor-based pagination is supported with stable ordering by timestamp and entry_id.
Initiate dispute freeze (UI) and block payouts while preserving timers
Given a user with Manage Disputes role selects a scope (release|milestone|collaborator), When they initiate a freeze with reason and optional expires_at, Then freeze_status=active is set for that scope, a freeze_started audit event is appended, and a UI banner shows freeze details on affected entities. Given a freeze is active for a scope, When payout execution jobs run, Then they are blocked for that scope (HTTP 423 Locked with freeze_id) while rule evaluations and timers continue and log payout_action="frozen" without transferring funds. Given overlapping freezes exist, When eligibility is computed, Then the most restrictive scope applies (release > milestone > collaborator) and is recorded in the audit entry.
Lift dispute freeze via API with re-evaluation
Given an active freeze exists, When an authorized client POSTs /v1/disputes/{freeze_id}/unfreeze with idempotency_key and rationale, Then permission and rationale are validated, freeze_status is set to cleared, and a freeze_cleared audit event is appended with actor and rationale. Given a freeze is cleared, When the unfreeze operation completes, Then pending payouts for the affected scope are immediately re-evaluated, due payouts are triggered, and payment_attempt audit events are recorded with correlation_id linking to the unfreeze. Given duplicate unfreeze requests share the same idempotency_key, When processed, Then only one state transition occurs and subsequent requests return the prior result without side effects.
Overdue decision alerts respect freeze state
Given a decision becomes overdue for a frozen scope, When the alerting job runs, Then an overdue_decision notification is sent annotated with freeze_id and scope and is recorded as notification_sent with recipient list, delivery status, and content_checksum. Given a scope is frozen, When an item is overdue, Then payout triggers are suppressed while alerting continues, and an alert_generated audit event is appended. Given alert cooldown is configured (e.g., 24h), When additional overdue checks run within the window, Then duplicate notifications are suppressed and suppression is logged.
Verify audit log integrity and external anchoring
Given an auditor calls GET /v1/audit/verify with scope and time_window, When verification completes, Then the response includes verification_pass=true|false, entries_hashed, chain_start_hash, chain_end_hash, and an anchored_at proof from a timestamp authority or blockchain. Given the daily anchoring job runs, When it completes successfully, Then a merkle_anchor_created audit event is appended and includes merkle_root and anchor_reference; if verification fails, a critical Ops alert is created within 5 minutes and logged.
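The merkle_root used for daily anchoring can be computed over the day's entry hashes. This sketch uses one common convention (duplicating the last node on odd levels); the production scheme may differ:

```python
import hashlib

def merkle_root(leaf_hashes):
    """Merkle root over hex-encoded entry hashes (illustrative sketch)."""
    if not leaf_hashes:
        return hashlib.sha256(b"").hexdigest()  # assumed empty-day sentinel
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # pair the odd node with itself
        level = [
            hashlib.sha256((level[i] + level[i + 1]).encode()).hexdigest()
            for i in range(0, len(level), 2)
        ]
    return level[0]
```

Anchoring only the root externally keeps the proof small while still committing to every entry appended that day.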

KYC FastPass

Streamlined Stripe Connect onboarding for every collaborator. Collect payout details and tax forms up front with localized guidance and status tracking. Reduce last‑mile friction so funds can move the moment a milestone is approved—no chasing bank info at midnight.

Requirements

Express Onboarding Wizard
"As a release manager, I want to invite collaborators to complete payouts setup in one guided flow so that funds can move immediately when milestones close."
Description

A guided, self-serve flow to invite collaborators and provision Stripe Connect Express accounts with minimal friction. The wizard handles account type selection (individual/company), localized onboarding link generation, return/callback URLs, and persistence of Stripe account IDs. It embeds Stripe-hosted components where possible and falls back to hosted onboarding when required. The flow tracks progress and errors, integrates with TrackCrate’s collaborator and project models, and ensures collaborators complete payout setup early to enable instant fund movement upon milestone approval.

Acceptance Criteria
Invite Collaborator and Send Localized Onboarding Link
Given a project owner invites a collaborator with a country and preferred language set When the owner confirms the invitation in the wizard Then a Stripe Connect Express account is created or reused for the collaborator with the account country matching the collaborator country And a localized onboarding link is generated with valid return_url and refresh_url pointing to TrackCrate And the collaborator receives an email with the onboarding link within 60 seconds and an in-app notification is shown And the onboarding link expiry timestamp is stored and displayed to the inviter And the invitation and onboarding link creation are recorded in the audit log with collaborator, project, and account_id
Account Type Selection and Validation
Given the collaborator selects an account type (Individual or Company) in the wizard When they proceed to onboarding Then the Stripe account is created/updated with business_type matching the selection And TrackCrate stores only the Stripe account_id and non-PII references (no SSN/TIN or bank details) And if Stripe indicates a mismatch or unsupported type for the selected country, a clear validation error is shown and the user cannot continue And switching the account type before submission updates the parameters without creating duplicate accounts
Embed Express Onboarding with Fallback to Hosted
Given the collaborator’s region and browser environment supports embedding Stripe-hosted onboarding components When the onboarding step loads Then the embedded component renders within the wizard without third-party cookie or CSP errors And no PII is stored by TrackCrate beyond the Stripe account_id And the step progresses only after the embedded component reports completion Given embedding is not supported or fails to initialize When the step loads or an initialization error occurs Then the wizard falls back to a hosted onboarding link opened in a new tab with preserved state and CSRF protection And the user can resume the wizard upon return_url without loss of progress
Persist and Reuse Stripe Account IDs
Given a collaborator is invited to multiple projects or re-invited to the same project When the onboarding flow starts Then TrackCrate reuses the existing Stripe account_id associated with the collaborator profile And the project-collaborator record references the same account_id And idempotency keys are used on Stripe API calls to prevent duplicate account creation under retries And an audit entry records create vs reuse events with timestamps
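The create-vs-reuse logic above can be sketched as below. The wiring is hypothetical (the real flow would call Stripe's account-creation API with an idempotency key header); the Stripe call is injected so the logic is shown without the SDK.

```python
# Sketch of create-vs-reuse for Stripe account IDs (hypothetical names).
# A deterministic idempotency key per collaborator means retries of the
# same invite can never create a duplicate account.

def ensure_account(collaborator, create_account, accounts, audit_log):
    """accounts maps collaborator_id -> stripe account_id."""
    existing = accounts.get(collaborator["id"])
    if existing:
        audit_log.append({"event": "account_reused", "account_id": existing})
        return existing
    idem_key = f"acct-create-{collaborator['id']}"
    account_id = create_account(collaborator, idempotency_key=idem_key)
    accounts[collaborator["id"]] = account_id
    audit_log.append({"event": "account_created", "account_id": account_id})
    return account_id
```

Re-inviting the same collaborator to another project reuses the stored account ID and records a reuse event rather than a creation.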
Onboarding Callback Handling and Progress Tracking
Given a collaborator completes, abandons, or partially completes onboarding When Stripe sends an account.updated webhook or the collaborator returns via return_url Then TrackCrate fetches the latest account state and capabilities and updates the collaborator’s onboarding_status (Not Started, In Progress, Needs More Info, Completed) And the wizard step UI reflects the latest status within 5 seconds of webhook receipt or page return And both the collaborator profile and the project view show consistent status and last-updated timestamps And if requirements.currently_due is non-empty, the wizard highlights outstanding items with localized guidance
Error Handling and Expired Link Regeneration
Given an onboarding link has expired or an API error occurs during onboarding When the inviter or collaborator clicks Regenerate Link Then a new onboarding link is generated, the old link is invalidated, and the new expiry is stored and displayed And link regeneration is rate-limited to 3 attempts per hour per collaborator with clear feedback on remaining attempts And user-facing errors include a human-readable message and a correlation ID And errors are logged server-side with Stripe request IDs while avoiding storage of sensitive PII
Tax Form Collection and Payout Gating
Given the collaborator’s country or payout configuration requires tax information When onboarding reaches the tax step Then the appropriate Stripe tax form flow (e.g., W-9/W-8) is presented via Stripe-hosted components And TrackCrate reflects Tax Info Required until Stripe marks tax requirements as satisfied And milestone payouts are blocked for the collaborator until onboarding_status is Completed and tax requirements are satisfied And once satisfied, the project finance UI shows Tax Verified and allows instant payouts on milestone approval
Localized Tax Form Guidance
"As an international collaborator, I want clear localized instructions for my tax and identity info so that I can complete onboarding without confusion."
Description

Contextual, locale-aware guidance and dynamic tax form collection for collaborators based on country, entity type, and residency. The UI surfaces the correct forms (e.g., W‑9/W‑8 series), explains required fields in plain language, and links to authoritative resources. Content is automatically localized (language, currency, date formats) and adapts to Stripe’s capability requirements, capturing completion status and errors. This reduces support burden and increases successful first-pass completion rates for global collaborators.

Acceptance Criteria
Correct Form Routing by Country and Entity Type
Given a collaborator with country = United States, entity type = Individual, and US tax residency = Yes When the tax form step loads Then only the W-9 form is displayed and pre-selected And no W-8 forms are shown. Given a collaborator with country ≠ United States and entity type = Individual When the tax form step loads Then only the W-8BEN form is displayed And W-9 is not shown. Given a collaborator with country ≠ United States and entity type ∈ {Company, Organization, Partnership, Nonprofit} When the tax form step loads Then only the W-8BEN-E form is displayed And W-9 and W-8BEN are not shown. Given a collaborator indicates that they claim treaty benefits and a W-8 form applies When the form is rendered Then treaty-related fields (e.g., Article, Paragraph, Rate) become required only for countries with an applicable treaty per IRS rules. Given Stripe capabilities/requirements indicate 1099 reporting is required When the form selection is evaluated Then US persons must complete W-9 and non-US persons must complete the appropriate W-8 series form. Given a regression test dataset of at least 30 personas across ≥10 countries When routed through form selection Then 100% are presented with the correct form per IRS/Stripe requirements.
End-to-End Localization of UI, Formats, and Content
Given the collaborator’s browser language is supported (en, es, fr, de, pt-BR, ja) When the tax form step loads Then UI labels, helper text, and error messages are shown in that language And the user can manually change language, which persists across sessions. Given a selected locale When dates, numbers, and currency examples are displayed Then formats follow the locale conventions (e.g., MM/DD/YYYY vs DD/MM/YYYY, decimal/thousand separators, currency symbol placement). Given localized guidance content exists When the step loads Then 100% of helper text, field descriptions, and CTA labels are localized for supported locales And any missing translation falls back to English with a visible “EN” indicator. Given external authoritative resources are linked When a user clicks “Learn more” Then the link opens in a new tab with localized link text and correct resource (e.g., IRS W-9, W-8BEN/E instructions) and uses rel=noreferrer noopener.
Field-Level Guidance and Authoritative Resources
Given a tax form is selected (W-9, W-8BEN, or W-8BEN-E) When required fields are rendered Then each required field displays concise, plain-language guidance and at least one example value appropriate to the form and locale. Given fields with domain-specific terminology (e.g., FTIN, disregarded entity, chapter 4 status) When the field is focused Then a help tooltip or inline explainer appears with a definition in plain language and a link to the authoritative resource section. Given the “Learn more” link is available When clicked Then it navigates to the correct official instructions section for the active form (e.g., IRS Instructions for Form W-9/W-8BEN/W-8BEN-E) in a new tab. Given entity type or residency toggles change When a selection is updated Then the visible guidance updates immediately without page reload to reflect the new context.
Validation, Error Messaging, and Submission Blocking
Given client-side validation rules When a user enters a US SSN/EIN/TIN Then only 9 numeric digits are accepted (hyphens optional), and formatting is normalized; invalid lengths or non-digits trigger inline errors. Given address and identity fields are required by the selected form When fields are left empty or in an invalid format for the locale Then real-time inline errors are shown in the selected language and the Submit button remains disabled until resolved. Given server-side validation and Stripe API responses When Stripe returns a requirements/error code (e.g., verification failure, missing fields) Then the UI maps it to a human-readable, localized message within 1 second and highlights the precise fields to fix; no raw codes are shown to end users. Given a network failure during submission When the user retries Then no previously entered data is lost, and submission can be retried without re-entry. Given all required fields pass validation When the user submits the form Then the submission succeeds, an electronic certification checkbox is required and captured with timestamp, and the user is advanced to the next step.
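The TIN normalization rule above (exactly 9 digits, hyphens optional) can be sketched directly; the function name is illustrative.

```python
import re

# Sketch of the US SSN/EIN/TIN rule described above: exactly 9 numeric
# digits, with hyphens and spaces tolerated and stripped on normalization.

def normalize_tin(raw: str):
    """Return the normalized 9-digit TIN, or None if the input is invalid."""
    cleaned = re.sub(r"[-\s]", "", raw)
    if re.fullmatch(r"\d{9}", cleaned):
        return cleaned
    return None
```

Both SSN-style (`123-45-6789`) and EIN-style (`12-3456789`) inputs normalize to the same 9-digit form, while wrong lengths or non-digits are rejected for an inline error.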
Stripe Capability-Driven Requirements Adaptation
Given a collaborator account exists in Stripe Connect When the KYC FastPass step initializes Then the app retrieves capabilities and requirements (including currently_due and eventually_due) and determines required tax forms and fields before rendering. Given Stripe updates required information (e.g., due to risk review or capability changes) When requirements change Then the UI reflects new required fields within 60 seconds via webhook/polling and marks the step as “Needs attention.” Given certain capabilities are not requested (e.g., US 1099 reporting not applicable) When determining the required tax form Then the UI does not require forms beyond Stripe’s current requirements for that collaborator. Given the collaborator completes newly required items When Stripe marks requirements as satisfied Then the UI updates the status to “Verified” without manual refresh.
Completion Status Tracking, Autosave, and Resume
Given a collaborator begins the tax form step When they complete fields Then progress is autosaved on each field blur within 500 ms and survives page reloads/sign-in on another device for at least 30 days. Given the collaborator exits mid-flow When they return via the same onboarding link Then they resume at the exact step and scroll position, with all prior valid entries preserved. Given key milestones (Started, In Progress, Submitted, Verified, Needs Attention) When state changes occur (local validation pass, submission, Stripe verification) Then the status badge in the collaborator list updates within 60 seconds and is queryable via API. Given analytics are enabled When a collaborator completes the tax form without errors Then a “tax_form_first_pass_success” event is recorded with locale, form type, and duration metrics for funnel analysis.
Real-Time KYC Status Sync & Dashboard
"As a label admin, I want to see each collaborator’s KYC and payout readiness status so that I can resolve blockers before release dates."
Description

A consolidated dashboard that displays each collaborator’s KYC and payout readiness across projects and releases. It consumes Stripe webhooks (account.updated, person.updated, capability.updated) and polls when necessary to maintain an accurate state machine (e.g., requirements_due, past_due, verified). The dashboard highlights blockers, required actions, and deadlines, with timestamps and deep links to resume onboarding. This provides proactive visibility and reduces last‑minute delays.

Acceptance Criteria
Real-Time Webhook Sync Updates KYC State
Given TrackCrate receives a Stripe webhook of type account.updated, person.updated, or capability.updated for a known connected account And the webhook signature is valid And the event.id has not been processed before When the event is processed Then the collaborator’s KYC state is updated according to the mapping rules (e.g., verified, requirements_due, past_due) And the dashboard reflects the new state within 5 seconds of receipt And the event is stored idempotently with event.id and event.created timestamps And older/out-of-order events are ignored if a newer event for the same account has already been applied
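The idempotency and ordering rules above can be sketched as a small state store (names hypothetical, stdlib only): each `event.id` is processed once, and an event is ignored if a newer event for the same account has already been applied.

```python
# Sketch of idempotent, order-aware webhook application. "created" is the
# event's creation timestamp (epoch seconds), as delivered by the provider.

class KycStateStore:
    def __init__(self):
        self.seen_event_ids = set()
        self.state = {}   # account_id -> (event_created, kyc_state)

    def apply(self, event):
        if event["id"] in self.seen_event_ids:
            return False                  # duplicate delivery: no-op
        self.seen_event_ids.add(event["id"])
        acct = event["account"]
        prev = self.state.get(acct)
        if prev and event["created"] <= prev[0]:
            return False                  # out-of-order: newer state already applied
        self.state[acct] = (event["created"], event["kyc_state"])
        return True
```

In production the seen-set and per-account watermark would live in the database inside the same transaction as the state update, so a crash between steps cannot double-apply an event.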
Resilient Polling Ensures Data Freshness
Given a connected account is in a non-final state (requirements_due, past_due, or pending) and no webhook has been received for 15 minutes When the scheduled poller runs Then TrackCrate calls Stripe to fetch current account/person requirements and capabilities And updates the internal KYC state to match Stripe And records last_polled_at and source=poll And respects Stripe rate limits (backs off and retries up to 3 times on 429, then schedules next attempt) And stale accounts (>24h since last sync) are refreshed within the next poll cycle And the dashboard shows an updated last_synced_at timestamp within 1 minute of poll completion
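The retry policy above (back off and retry up to 3 times on 429, then defer to the next cycle) can be sketched as follows; `fetch` and `sleep` are injected so the schedule is visible without real network calls.

```python
# Sketch of the poller's 429 handling: exponential backoff, capped retries,
# then give up until the next scheduled poll. Delays shown are illustrative.

def poll_with_backoff(fetch, sleep, base_delay=1.0, max_retries=3):
    for attempt in range(max_retries + 1):
        status, body = fetch()
        if status != 429:
            return status, body
        if attempt < max_retries:
            sleep(base_delay * (2 ** attempt))   # 1s, 2s, 4s
    return 429, None   # exhausted; next poll cycle will retry
```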
Consolidated Cross-Project KYC Dashboard View
Given an Org Admin or Label Manager opens the KYC dashboard When collaborators exist across multiple projects and releases Then the dashboard displays one row per collaborator with columns: name, masked account ID, current_state, blockers_count, next_deadline_at (if any), last_synced_at, and action_link And rows are sortable by next_deadline_at and current_state severity (past_due > requirements_due > pending > verified) And rows are filterable by state (verified, requirements_due, past_due, pending) And the table paginates at 50 rows per page with accurate counts
Blockers, Required Actions, and Deep Links
Given a collaborator’s account has requirements_due or past_due in Stripe When the user expands the collaborator row Then the dashboard lists each outstanding requirement from Stripe (currently_due and past_due) with human-readable labels And shows due_by timestamps when provided by Stripe And renders a Resolve action that opens a valid Stripe onboarding/deep link in a new tab And generates a fresh onboarding link if the previous link is expired or absent And records an analytics event for the click without storing sensitive PII
Deadlines Highlighting and SLA Timestamps
Given a collaborator has a due_by deadline within 7 days or is past_due When the dashboard renders the row Then the row is visually highlighted by severity (past_due in red, due within 7 days in amber) And the deadline is displayed as both absolute (localized) and relative time And the row includes last_synced_at with source (webhook or poll) And sorting by deadline orders by earliest due date first
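The severity ordering (past_due > requirements_due > pending > verified) and highlighting thresholds above can be sketched as a sort key plus a classifier; the function names are illustrative.

```python
from datetime import datetime, timezone

# Sketch of dashboard row ordering and highlighting. Severity rank follows
# the spec; rows without a deadline sort last within their severity band.

SEVERITY = {"past_due": 0, "requirements_due": 1, "pending": 2, "verified": 3}

def sort_key(row):
    deadline = row.get("next_deadline_at") or datetime.max.replace(tzinfo=timezone.utc)
    return (SEVERITY[row["current_state"]], deadline)

def highlight(row, now):
    if row["current_state"] == "past_due":
        return "red"
    d = row.get("next_deadline_at")
    if d and (d - now).days <= 7:
        return "amber"
    return None
```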
Audit Trail for KYC State Changes
Given any KYC state transition occurs for a collaborator When the new state is persisted Then an immutable audit record is created capturing: previous_state, new_state, source (webhook|poll|manual), stripe_event_id (if applicable), occurred_at (UTC), and request_id And audit records can be queried by collaborator and by date range And the dashboard displays the most recent state change timestamp and source
Role-Based Access and Tenant Isolation
Given a user attempts to access the KYC dashboard When the user has Org Admin or Label Manager role within the tenant Then access is granted and only collaborators within that tenant are visible When the user has Collaborator or Viewer role without KYC permissions Then access is denied with a 403 message and no KYC details are leaked And direct deep links to collaborator detail views enforce the same access controls
Pre-Release KYC Gate & Milestone Rules
"As a project owner, I want payouts to be automatically gated on KYC completion so that we avoid delays and compliance issues at release time."
Description

Enforcement rules that block milestone approval and payout execution until all designated payees have completed required KYC and Stripe capabilities are enabled. Includes preflight checks at project setup, early invitations when collaborators are added, and clear warnings in the milestone workflow. Admins can configure strictness, exemptions, and lead-time thresholds. This ensures compliance and eliminates late-stage payout friction.

Acceptance Criteria
Project Setup Preflight KYC Readiness Check
Given a new project is created with milestones and designated payees and an admin-defined lead-time threshold L and strictness S When the KYC preflight runs at project creation or first milestone creation Then the system evaluates each payee’s KYC status and required Stripe capabilities and computes readiness And displays a preflight summary with counts for total (N), ready (R), pending (P), and blocked (B) And generates onboarding invites for any payee lacking an active link within 60 seconds And if the nearest milestone due date is within L days and any payee is not ready, a “KYC At Risk” banner is shown on the project dashboard And an audit event "kyc_preflight_run" is recorded with snapshot details
Automatic KYC Invites on Collaborator Add
Given a collaborator with a non-zero payout share is added to a project When the collaborator is saved Then a Stripe Connect onboarding link is created or refreshed And an email and in-app notification are sent within 60 seconds, localized to the collaborator’s locale And the collaborator’s KYC status appears as Pending in the project/milestone collaborators list And re-invite is available on demand with a rate limit of one per 24 hours And an audit event "kyc_invite_sent" is logged with recipient and initiator
Milestone Approval Gate (Strict Mode)
Given the project KYC strictness is set to Strict And at least one designated payee is not KYC complete or lacks required Stripe capabilities When an approver attempts to approve the milestone via UI or API Then the approval is blocked and the UI shows a disabled control with a list of blocking payees and missing items And the API returns 409 Conflict with error code KYC_GATE_BLOCKED and details per payee And the milestone state does not change and an audit event "kyc_gate_blocked" is recorded And if all designated payees are ready, the approval proceeds successfully
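The strict-mode gate above can be sketched as a pure check. The spec fixes only the 409 status and the KYC_GATE_BLOCKED code; the payload shape here is an assumption.

```python
# Sketch of the strict-mode approval gate: block with per-payee details if
# any designated payee has outstanding KYC items, else allow approval.

def check_kyc_gate(payees):
    blockers = [
        {"payee_id": p["id"], "missing": p["missing_items"]}
        for p in payees
        if p["missing_items"]
    ]
    if blockers:
        return 409, {"error": "KYC_GATE_BLOCKED", "blockers": blockers}
    return 200, {"approved": True}
```

The same check can back both the API response and the disabled UI control, so the two surfaces never disagree about who is blocking.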
Milestone Approval Gate (Soft Mode)
Given the project KYC strictness is set to Soft And at least one designated payee is not KYC complete or lacks required capabilities When an approver approves the milestone Then the approval succeeds but the milestone displays "Payouts Blocked: KYC Pending" And the payout job is scheduled in On Hold state with reasons per payee And the API response includes payout_blocked=true with reasons When all designated payees become ready before payout date Then the On Hold state is automatically lifted and payout becomes eligible without further user action
Payout Execution Gate
Given a payout run (manual or scheduled) includes payees with incomplete KYC or missing required capabilities When the payout run starts Then no funds are transferred and the run aborts before any partial transfers unless exemptions explicitly allow partial payouts And per-payee failures are recorded with code KYC_NOT_READY and specific missing items And project admins receive a notification within 2 minutes summarizing blockers And a retry becomes eligible automatically after the next status sync or manual re-run
Exemptions Management & Role-Based Overrides
Given a user with permission Manage KYC Exemptions When the user creates an exemption for a payee with a reason and expiration date Then the payee is excluded from KYC gating calculations until the exemption expires And the exemption is displayed in the milestone gating panel and recorded in audit logs And only users with the permission can create, edit, or delete exemptions; others have read-only visibility And when the exemption expires, gating recalculates within 5 minutes
Lead-Time Warnings & Localized Guidance
Given a lead-time threshold L days is configured for KYC readiness And a milestone due date falls within L days while any designated payee is not ready When the threshold window is entered Then a banner on the milestone page shows a countdown and lists pending payees with missing items And daily reminders are sent at 10:00 in each payee’s local time via email and in-app until completion or due date, up to N reminders And reminder and invite content is localized to the recipient’s locale and includes region-appropriate tax/KYC guidance And when all payees complete KYC, the banner and reminders stop within 10 minutes
Secure Payout Details via Hosted Components
"As a collaborator, I want to enter my bank details securely through a trusted interface so that my payouts are safe and compliant."
Description

Collection of bank account details and payout preferences using Stripe-hosted onboarding and Financial Connections to avoid handling sensitive data directly. TrackCrate stores only Stripe account IDs, capabilities, and non-sensitive metadata, supporting multiple countries, currencies, and payout schedules. Includes graceful fallbacks for unsupported regions and clear user messaging. This approach ensures PCI/privacy compliance and builds user trust.

Acceptance Criteria
Stripe Connect Hosted Onboarding Redirect & Return
- Given an authenticated collaborator clicks "Set up payouts" in TrackCrate When an onboarding session is requested Then a Stripe Connect account link is created with correct return_url and refresh_url and the user is redirected within 2 seconds.
- Given the user completes onboarding on Stripe When Stripe redirects back Then TrackCrate stores only the Stripe account ID, country, and business type, and displays a success state without persisting any bank numbers or tokens.
- Given the onboarding link expires or is abandoned When the user retries Then a fresh account link is generated and the session can be resumed without data loss.
- Given an error occurs creating the account link When the user attempts onboarding Then an actionable error message is shown and no partial records are saved.
Bank Account Linking via Financial Connections (No Sensitive Data Stored)
- Given a connected Stripe account without an external payout account When the user selects "Link bank" Then the Stripe Financial Connections UI opens in a hosted modal.
- Given the user successfully links a bank via Stripe When the flow completes Then the external account is attached to the Stripe account; TrackCrate stores no account/routing/IBAN numbers; the UI shows bank name and last4 from Stripe only.
- Given the user cancels or the connection fails When the flow exits Then no partial bank data is stored and the UI shows a retriable state.
- Given the linked bank currency is incompatible with the account country When verification occurs Then the user is prompted to select a supported currency/bank before enabling payouts.
Capabilities & Payout Eligibility Status Tracking
- Given onboarding progresses on Stripe When capabilities change (e.g., transfers, payouts) Then TrackCrate updates and displays capability statuses (active, pending, disabled) within 60 seconds of a webhook or poll.
- Given required capabilities are not active When a user attempts to approve a milestone payout Then the payout action is blocked with a clear, localized reason and a link to continue onboarding.
- Given capabilities move from pending to active When the system receives the event Then the UI automatically enables payout actions without requiring a page refresh.
- Given an error fetching capability status When the dashboard loads Then a retry occurs up to 3 times with exponential backoff and a non-blocking warning is shown.
Localization: Country, Currency, and Payout Schedule Support
- Given TrackCrate knows the collaborator’s country and locale When launching Stripe onboarding Then the hosted pages render in the correct language and country-specific requirements are applied.
- Given Stripe returns available payout currencies and schedules for the account When viewing payout preferences Then only supported options are displayed; selected preferences are saved as non-sensitive metadata or via Stripe API where applicable.
- Given a user selects an unsupported currency or schedule When they attempt to save Then validation prevents the change and explains supported alternatives.
- Given a locale is unsupported When rendering guidance Then English is used as a safe fallback.
Unsupported Region Fallbacks & Clear Messaging
- Given Stripe Connect is not available for a collaborator’s country When they attempt onboarding Then TrackCrate explains the limitation, disables payout setup, and labels the account as "External payouts required" without collecting sensitive data.
- Given the collaborator later changes to a supported country When eligibility is rechecked Then onboarding becomes available and the fallback label is removed.
- Given unsupported status persists When viewing milestones Then payout actions remain disabled with context-specific guidance and a link to documentation.
Data Minimization, Privacy, and Audit Controls
- Given onboarding and bank linking flows complete When data is persisted Then only Stripe account IDs, capability status, country, and non-sensitive metadata are stored; no bank account numbers, routing numbers, or tokens exist in databases, caches, or logs.
- Given engineers or admins view collaborator records When accessing via the admin UI or APIs Then sensitive fields are not present; access to Stripe account IDs is role-restricted and every view/change is audit logged with timestamp and actor.
- Given application logs are generated When onboarding events occur Then logs contain no raw PII or secrets; secrets live only in the secret manager and are never returned via APIs.
- Given a security scan runs nightly When searching data stores and logs for bank-number patterns Then zero matches are reported; any match fails the build.
Webhook Verification, Idempotency, and Reconciliation
- Given Stripe sends account.updated, capabilities, or financial_connections.* events When received Then webhook signatures are verified, events are processed idempotently, and responses return 2xx within 2 seconds.
- Given duplicate or out-of-order events arrive When processing Then the latest state is applied exactly once without regression.
- Given webhooks are delayed or missed When the hourly reconciliation job runs Then statuses are synchronized by querying Stripe for accounts updated since the last checkpoint and the UI is corrected.
- Given a webhook processing error occurs When retries exceed Stripe’s limit Then an alert is created and the account is queued for reconciliation.
Notification & Reminder Engine
"As a release manager, I want automated reminders for collaborators who haven't finished KYC so that I don't have to manually chase them."
Description

Automated email and in-app notifications to drive KYC completion. Triggers include initial invite, periodic reminders based on status (requirements_due, past_due), and release timeline proximity. Messages are localized, rate-limited, and include secure deep links back to onboarding. Owners can CC themselves and view delivery/completion metrics. This reduces manual chasing and keeps projects on schedule.

Acceptance Criteria
Initial Invite: Email + In-App with Secure Deep Link
Given a collaborator is added to a project with KYC required and has a verified email When the owner sends the KYC invite Then the system dispatches a localized email and creates an in-app notification within 60 seconds And the email subject includes the project name and "KYC required" And the body contains a secure deep link to onboarding And the deep link is HTTPS, signed, contains no PII, and expires in 7 days When an expired link is opened Then the user is routed to sign-in and then to the onboarding flow with preserved context And if CC is enabled for the project Then the owner receives a copy within 60 seconds and the CC address is not visible in the collaborator's recipient list And a delivery event is logged with message_id, recipient_id, and timestamp When the email provider signals a hard bounce Then message status is recorded as bounced and no further emails are attempted until the address is updated
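A signed, expiring, PII-free deep-link token of the kind required above can be sketched with the standard library. The token format here is an assumption; the spec fixes only the properties (signed, HTTPS, no PII, 7-day expiry).

```python
import base64
import hashlib
import hmac
import json

# Sketch of a signed deep-link token: an opaque recipient ID plus expiry,
# HMAC-signed so it cannot be forged or tampered with. The secret below is
# illustrative; a real deployment keeps it in a secret manager.

SECRET = b"server-side-secret"

def make_token(recipient_id: str, now: float) -> str:
    payload = {"rid": recipient_id, "exp": int(now + 7 * 86400)}  # no PII
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_token(token: str, now: float):
    body, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                       # tampered or forged
    payload = json.loads(base64.urlsafe_b64decode(body))
    if now > payload["exp"]:
        return None                       # expired: route to sign-in flow
    return payload["rid"]
```

On an expired or invalid token the handler returns None, matching the criterion that expired links route through sign-in with preserved context rather than failing silently.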
Status-Based Reminder: requirements_due Cadence and Rate Limit
Given a collaborator has KYC status "requirements_due" for a project When 72 hours have elapsed since the last KYC message for that project-recipient Then send a localized reminder email and an in-app notification And reminders are capped at 1 per 24 hours per recipient and 3 per 7-day window per project When the cap is reached Then additional reminders are suppressed and a suppression event is logged When the collaborator completes KYC Then all pending or scheduled reminders for that project-recipient are canceled within 5 minutes Then reminder content includes the count of remaining required items and a secure deep link When the deep link is clicked Then the click event is tracked and attributed to the specific reminder
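The cadence and caps above (72h since the last message, at most 1 per 24h, at most 3 per rolling 7 days) can be sketched as a single check over the send history; storage and naming are illustrative.

```python
# Sketch of the reminder rate limiter. history holds epoch-second send
# timestamps for one project-recipient pair, oldest first.

HOUR = 3600

def may_send(history, now):
    if history and now - history[-1] < 72 * HOUR:
        return False, "too_soon"                       # 72h minimum spacing
    if any(now - t < 24 * HOUR for t in history):
        return False, "daily_cap"                      # 1 per 24h
    if len([t for t in history if now - t < 7 * 24 * HOUR]) >= 3:
        return False, "weekly_cap"                     # 3 per 7-day window
    return True, None
```

A suppressed send returns the reason, which can feed the suppression-event logging the criterion requires.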
Escalation: past_due Reminders
Given a collaborator's KYC status changes to "past_due" When 24 hours elapse since the last KYC message Then send an escalation reminder with an in-app high-priority badge and subject tag "[Action Required]" And escalation reminders repeat every 24 hours up to 3 times or until KYC completion, whichever comes first And the project owner is CC'd on escalation reminders when CC is enabled When the owner disables CC for the project Then subsequent escalation CCs are not sent Then metrics record escalation sends separately from standard reminders
Release Timeline Proximity Trigger
Given a project has a planned release date When days_to_release is less than or equal to 10 and the collaborator's KYC is incomplete Then send a timeline proximity reminder distinct from the standard cadence And proximity reminders are sent at most once every 48 hours per recipient and respect global rate limits If multiple triggers coincide within the same hour Then only one message is sent with merged context Then the reminder includes the release date rendered in the collaborator's local timezone and a secure deep link When KYC completes Then any future proximity reminders for that project-recipient are canceled
Localization and Content Personalization
Given a recipient locale is known Then email subject, body, date/time formats, and tax guidance links are rendered in that locale When the locale is unknown Then default to English (en-US) And placeholders for recipient name, project name, and required tasks are fully resolved with no missing tokens If a translation key is missing at runtime Then the system falls back to English for that key and logs the missing key with severity warning And right-to-left locales render correctly for in-app notifications
Owner Metrics Dashboard for Notifications
Given a project owner opens the Notifications dashboard Then they can filter by project, collaborator, KYC status, message type (invite, reminder, escalation, proximity), and date range And the view displays counts for sent, delivered, bounced, opened, clicked, and KYC completed within 24h and overall And metrics update within 5 minutes of new events And the owner can export the current filtered view to CSV When exported Then the file contains one row per message with message_id, collaborator_id, message type, send timestamp, last status event, open_count, click_count, and attributed_completion flag Then all recipient metrics exclude owner CC copies, while owner_send_count includes them
In-App Notification Center Behavior
Given a collaborator logs into TrackCrate Then an unread badge displays the count of pending KYC notifications When a KYC notification is opened Then it is marked as read and navigates to the onboarding flow via a secure deep link And in-app notifications follow the same cadence and rate limits as email When a notification is suppressed by rate limiting Then a single digest card is shown instead of multiple items And notifications older than 30 days auto-archive unless KYC is still incomplete
Role-Based Access & Privacy Controls
"As a compliance-conscious admin, I want strict access controls around KYC data so that we protect collaborators' privacy and meet regulations."
Description

Granular permissions that restrict who can invite collaborators, view KYC status, and access sensitive details. Most users see only high-level readiness (e.g., Ready/Action Needed) while privileged roles can review detailed statuses and logs. The system captures consent, maintains audit trails for invites and changes, applies data minimization/retention policies, and supports export/delete to meet GDPR/CCPA obligations. This safeguards collaborator privacy and ensures regulatory compliance.

Acceptance Criteria
Enforce Invite Permissions by Role and Scope
Given a user without the InviteCollaborator permission, When they POST /invites, Then the API returns 403, no invitation record is created, no email is sent, and an audit event invite.create is recorded with outcome=denied. Given a user with InviteCollaborator permission scoped to Project A, When they attempt to invite a collaborator to Project B, Then the API returns 403 and an audit event invite.create is recorded with outcome=denied, reason=scope_violation. Given a user with InviteCollaborator permission scoped to Project A, When they invite a collaborator to Project A with valid inputs, Then the API returns 201, the invite is linked to Project A, and an audit record includes actorId, targetEmailHash, projectId, timestamp(ISO8601 UTC), ip, userAgent, requestId. Given any user, When they list invites, Then only invites within their permission scopes are returned; attempts to access out-of-scope invites return 403 and are audited.
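A minimal sketch of the scoped permission check: an invite requires the InviteCollaborator permission for that specific project, and denials are audited with `outcome=denied, reason=scope_violation`. The data shapes are illustrative:

```python
def create_invite(user, project_id, audit):
    """Return an HTTP-style status code; append an audit event either way."""
    scopes = user.get("permissions", {}).get("InviteCollaborator", set())
    if project_id not in scopes:
        audit.append({"action": "invite.create", "outcome": "denied",
                      "reason": "scope_violation", "actorId": user["id"]})
        return 403
    audit.append({"action": "invite.create", "outcome": "allowed",
                  "actorId": user["id"], "projectId": project_id})
    return 201

audit = []
alice = {"id": "u1", "permissions": {"InviteCollaborator": {"projA"}}}
print(create_invite(alice, "projB", audit))  # 403 (out of scope)
print(create_invite(alice, "projA", audit))  # 201
```

A real implementation would also record `targetEmailHash`, timestamp, ip, userAgent, and requestId on each audit event, as the criterion requires.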
Tiered KYC Status Visibility (High-level vs Detailed)
Given a non-privileged user, When viewing a collaborator’s KYC panel, Then only readiness in {Ready, Action Needed, Pending, Restricted} and lastUpdated are shown; no PII is displayed; API response excludes fields [address, dob, taxId, bankAccount, documentImages]. Given a user without ViewKycDetails permission, When calling /kyc/details, Then the API returns 403 and logs an access_denied audit event with actorId and targetId. Given a privileged user with ViewKycDetails, When viewing /kyc/details, Then detailed status codes and event history are visible; sensitive numbers are masked (last4 only) and redaction is consistently applied in both UI and API responses.
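The tiered visibility rule reduces to two transformations: drop sensitive fields entirely for non-privileged callers, and mask identifiers to last4 even for privileged ones. A sketch with assumed field names:

```python
SENSITIVE = {"address", "dob", "taxId", "bankAccount", "documentImages"}

def redact_kyc(record, can_view_details):
    """Apply the same redaction in API responses and UI payloads."""
    if not can_view_details:
        # High-level view only: strip every sensitive field.
        return {k: v for k, v in record.items() if k not in SENSITIVE}
    masked = dict(record)
    for field in ("taxId", "bankAccount"):  # mask numbers to last4
        if field in masked:
            masked[field] = "****" + str(masked[field])[-4:]
    return masked

record = {"readiness": "Action Needed", "lastUpdated": "2025-06-01",
          "taxId": "123456789", "bankAccount": "000111222333"}
print(redact_kyc(record, can_view_details=False))
print(redact_kyc(record, can_view_details=True)["bankAccount"])  # ****2333
```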
Audit Trails for Invites, Permission Changes, and Sensitive Access
Given any of {invite.create, invite.update, invite.delete, role.assign, role.revoke, permission.grant, permission.revoke, kyc.viewDetails} occurs, When the action completes (success or denial), Then an immutable audit record is appended with {eventId, action, actorId, targetId, timestamp(UTC ISO8601), ip, userAgent, requestId, outcome} and before/after value hashes where applicable; no raw PII is persisted in audit values. Given a Compliance Admin, When querying audit logs by date range and action, Then results are filterable, paginated, exportable (CSV/JSON), and include a verifiable integrity hash chain for each page. Given an integrity check runs, When a checksum mismatch is detected, Then the system flags an integrity_alert audit event and prevents further writes to the affected segment until remediated.
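One common way to provide the "verifiable integrity hash chain" named above is to store, with each audit record, the hash of the previous record, so tampering anywhere invalidates every later link. A sketch under that assumption:

```python
import hashlib
import json

def append_audit(chain, record):
    """Append an audit record whose hash covers the previous record's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev, "hash": digest})

def verify(chain):
    """Recompute the chain; any edit to any record breaks verification."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain = []
append_audit(chain, {"action": "invite.create", "actorId": "u1"})
append_audit(chain, {"action": "role.assign", "actorId": "u2"})
print(verify(chain))  # True
chain[0]["record"]["actorId"] = "tampered"
print(verify(chain))  # False
```

A checksum mismatch here would correspond to the `integrity_alert` event in the criterion.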
Consent Capture, Versioning, and Revocation Controls
Given a collaborator begins KYC onboarding, When the consent checkbox is unchecked, Then the Continue/Submit action is disabled. Given the collaborator checks consent and submits, Then a consent record is stored with {dataSubjectId, consentType="KYC Processing", policyVersion, textHash, locale, timestamp, ip, userAgent, onboardingSessionId} and is linked to the subject. Given the data subject revokes consent, When revocation is confirmed, Then non-mandatory processing stops, readiness shows "Consent Withdrawn", non-privileged access to KYC details is revoked, and a consent.revoked audit event is recorded. Given a data export is generated, When it includes legal basis, Then the consent record and policyVersion are present in the export package.
Data Minimization and Retention Policy Enforcement
Rule: TrackCrate never stores full bank account numbers, full tax identifiers, or document images; only provider tokens/references and masked last4 where necessary. Given a non-privileged user views a collaborator profile, When sensitive fields are present, Then values are redacted/masked and omitted from API responses lacking the required permission. Given a collaborator is removed or a workspace requests deletion, When the deletion job executes, Then personal KYC data held by TrackCrate is deleted or irreversibly anonymized within 30 calendar days; a deletion audit record includes jobId, subjectId, timestamp, and affectedRecordCount; backups older than 35 days no longer contain the data. Given a legal hold is applied, When a deletion request exists, Then data is retained, status is set to "Deletion Deferred - Legal Hold", the legal basis is recorded, and this exception is included in the deletion report.
GDPR/CCPA Data Export and Deletion Workflow
Given a verified Data Subject Request (export), When submitted, Then an acknowledgment is sent within 72 hours and a machine-readable export (JSON/CSV in ZIP with schema manifest) is delivered within 30 calendar days including personal data from KYC, invites, permissions, and audit entries linked to the subject, excluding secrets and third-party tokens. Given a verified Data Subject Request (delete), When processed, Then eligible personal data is deleted or irreversibly anonymized within 30 calendar days; exceptions are documented with legal basis; a confirmation report is provided and a suppression record is retained to prevent re-collection. Given a Compliance Admin views DSR tracking, When accessing the dashboard, Then each request shows status in {Received, In Review, Waiting on Third-Party, Completed, Deferred - Legal Hold, Rejected - Unverified} with timestamps, SLA timers, and audit links.

Invoice Sync

Auto‑generate branded invoices per milestone and contributor from agreed splits. Attach them to the escrow, export to QuickBooks/Xero, and mark paid on release. Clear, consistent paperwork cuts admin loops and keeps accountants, managers, and artists on the same page.

Requirements

Milestone‑Split Invoice Auto‑Generation
"As a label project manager, I want invoices to be auto‑generated per contributor at each milestone based on agreed splits so that I eliminate manual paperwork and reduce errors."
Description

Automatically generate per‑contributor invoices when a project milestone (e.g., mix delivery, master approval, release) is reached and splits are finalized. Use the project’s agreed rates and percentage splits to create line‑itemized invoices in the selected brand template, with currency, tax fields, and sequential numbering per organization. Support draft and regenerate workflows when splits or milestones change, with versioning and validation to prevent generation when required data is missing. Link invoices to their release, assets, and contributors for context, store exchange rates at generation time, and provide preview, consolidation options, and error messaging. Save invoices to the project workspace for collaboration and downstream syncing.

Acceptance Criteria
Auto-generation on Milestone with Finalized Splits
Given a project milestone is marked "Reached" and all contributor splits for that milestone are finalized and valid And a brand template and numbering sequence are configured for the organization When auto-generation is triggered for the milestone Then the system creates exactly one draft invoice per eligible contributor And each invoice includes line items that reflect the milestone name, asset identifiers, the contributor's agreed percentage or rate, quantity, unit price, subtotal, tax lines (if applicable), and grand total And the invoice is stored in the project workspace and linked to the milestone, release, and contributor
Blocking Validation on Missing or Inconsistent Data
Given a user requests invoice auto-generation for a milestone When any required data is missing or inconsistent (e.g., splits not finalized, contributor payout or tax profile incomplete, project currency not set, or numbering sequence unavailable) Then no invoices are created And the user sees a blocking error summary with field-level messages identifying each missing or invalid item And an audit log entry is recorded with specific reason codes for the failure And the action is retryable after data is corrected without leaving partial artifacts
Brand Template, Currency, Tax, and Sequential Numbering Applied
Given the organization's brand template, numbering sequence, currency, and tax configuration are set When invoices are generated for a milestone Then each invoice renders with the selected brand template (logo, address, footer) And uses the configured currency with correct symbol and decimal precision And applies tax based on the contributor's tax profile and project tax rules And receives the next sequential invoice number in the organization's sequence And numbering is unique across the organization and not consumed by failed generations
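One way to guarantee that "numbering is... not consumed by failed generations" is to allocate the next sequential number only after rendering succeeds. This is a sketch of that ordering, not TrackCrate's actual generation pipeline:

```python
class Sequence:
    """Per-organization sequential invoice numbering (illustrative)."""
    def __init__(self, prefix="INV-", start=1):
        self.prefix, self.next = prefix, start

    def allocate(self):
        number = f"{self.prefix}{self.next:05d}"
        self.next += 1
        return number

def generate_invoice(seq, render):
    body = render()  # may raise; nothing is allocated on failure
    return {"number": seq.allocate(), "body": body}

def failing_render():
    raise ValueError("splits not finalized")

seq = Sequence()
try:
    generate_invoice(seq, failing_render)  # fails before numbering
except ValueError:
    pass
inv = generate_invoice(seq, lambda: "pdf-bytes")
print(inv["number"])  # INV-00001, the failure consumed nothing
```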
Draft, Regenerate, and Versioning on Changes
Given draft invoices exist for a milestone And the splits or milestone amounts change When the user selects Regenerate Then the system creates a new draft invoice version for each affected contributor with an incremented version identifier (e.g., v2) And the prior version becomes read-only with status "Superseded" And an immutable audit trail records the differences (line items, totals, tax, exchange rate) between versions And default links in the project point to the latest version while preserving access to prior versions
Exchange Rate Capture and Cross-Currency Calculation
Given the invoice currency and contributor payout currency differ per project finance settings When invoices are generated Then the system records the FX source, timestamp, base/quote currencies, and rate used at generation time And calculates converted line totals and grand totals using that rate with the configured rounding rules And stores these values on the invoice version for auditability And upon regeneration, current rates are fetched and stored on the new version while prior versions retain their original rates
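Storing the FX context on the invoice version might look like the following; the HALF_UP rounding is an assumed default standing in for "the configured rounding rules":

```python
from decimal import Decimal, ROUND_HALF_UP

def convert(amount, rate, places="0.01"):
    """Convert with the rate captured at generation time (HALF_UP is an
    assumed rounding rule, not TrackCrate's spec)."""
    return (Decimal(amount) * Decimal(rate)).quantize(
        Decimal(places), rounding=ROUND_HALF_UP)

# The invoice version retains source, timestamp, pair, and rate so prior
# versions keep their original rates after regeneration.
fx = {"source": "ECB", "timestamp": "2025-06-01T12:00:00Z",
      "base": "EUR", "quote": "USD", "rate": "1.0834"}
invoice_version = {"fx": fx, "total_eur": "100.00",
                   "total_usd": str(convert("100.00", fx["rate"]))}
print(invoice_version["total_usd"])  # 108.34
```

Using `Decimal` on string inputs avoids binary-float drift in the audited totals.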
Preview, Consolidation, and Context Linking
Given multiple milestones and contributors are eligible for invoicing When the user selects Preview Then the system renders read-only previews showing exact totals with numbering placeholders And when the user selects Consolidate by Contributor across selected milestones Then exactly one invoice per contributor is generated with separate line items per included milestone referencing milestone IDs, dates, and asset IDs And consolidated invoice totals equal the sum of included line items And all generated invoices are saved to the project workspace and linked to their release, assets, and contributors for context
Escrow Attachment & Payment Sync
"As an artist, I want my invoice to update to paid automatically when escrow releases funds so that my records stay accurate without manual entry."
Description

Attach generated invoices to TrackCrate escrow records and mirror state transitions (funded, in review, partially released, released). Mark invoices fully or partially paid automatically when escrow funds are released, recording payment dates, methods, fees, and balances. Lock paid invoices against edits, while allowing authorized users to perform controlled adjustments. Maintain a transaction ledger and detailed audit of state changes, and surface payment status within the release view to keep all collaborators aligned.

Acceptance Criteria
Auto‑Attach Invoices to Escrow Record
Given an escrow record exists for a release milestone and contributor splits have been agreed When TrackCrate auto‑generates invoices for the milestone Then each generated invoice is attached to the corresponding escrow record with a persistent escrow_id reference And the escrow detail view lists the attached invoices with invoice_number, contributor, amount, and currency And the invoice count on the escrow increases by the number of attachments And only users with finance or release permissions can view/download attached invoices; others cannot And an audit entry is written capturing invoice_id, escrow_id, actor_id, and timestamp
Mirror Escrow State to Attached Invoices
Given invoices are attached to an escrow record When the escrow state changes to Funded, In Review, Partially Released, or Released Then each attached invoice stores escrow_state equal to the escrow state within 5 seconds And the change is recorded in the audit log with previous_state, new_state, actor_id, and timestamp And the invoice list badges in the escrow view reflect the new state without a manual refresh
Partial Release Auto‑Payment Allocation
Given an escrow with attached unpaid invoices and a partial release of funds occurs When TrackCrate processes the partial release amount Then payment records are allocated to invoices according to each invoice’s remaining balance and agreed splits And each payment record stores payment_date (UTC), method (enumerated), gross_amount, fee_amount, net_amount, and reference_id And each affected invoice’s paid_amount and remaining_balance are updated accurately And the sum of net_amount across created payments plus total fee_amount equals the partial release gross amount And an audit entry is appended for each invoice payment with before/after balances
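Allocating a partial release "according to each invoice's remaining balance" can be read as pro-rata allocation, with the last invoice absorbing any rounding remainder so the allocations sum exactly to the released amount. A sketch under that reading:

```python
from decimal import Decimal

def allocate(release_amount, balances):
    """Pro-rata allocation by remaining balance; exact-sum invariant."""
    total = sum(balances)
    assert release_amount <= total, "cannot release more than is owed"
    allocated, out = Decimal("0"), []
    for bal in balances[:-1]:
        share = (release_amount * bal / total).quantize(Decimal("0.01"))
        out.append(share)
        allocated += share
    out.append(release_amount - allocated)  # remainder absorbs rounding
    return out

shares = allocate(Decimal("1.00"),
                  [Decimal("1.00"), Decimal("1.00"), Decimal("1.00")])
print(shares)       # [Decimal('0.33'), Decimal('0.33'), Decimal('0.34')]
print(sum(shares))  # 1.00
```

Each share would then be split into `net_amount` and `fee_amount` so that net plus fees equals the gross release, per the criterion.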
Full Release Auto‑Payment and Invoice Settlement
Given an escrow with attached invoices has its remaining funds released When TrackCrate applies the full release Then all invoices with remaining_balance > 0 are marked Paid with paid_at (UTC) timestamps And each invoice’s remaining_balance is set to 0 and paid_amount equals invoice total And a final payment record is created per invoice capturing method, fees, and references And the invoices become read‑only (locked) for direct edits And audit and ledger entries are created summarizing the settlement
Locking and Controlled Adjustments on Paid Invoices
Given an invoice status is Paid or Partially Paid When a non‑authorized user attempts to edit any monetary field on the invoice Then the action is blocked with a permissions error and no changes are persisted When an authorized finance user initiates an adjustment Then adjustments are recorded as separate immutable ledger entries (credit/debit) requiring a reason and note And original invoice line items remain uneditable; only adjustments affect balance And adjustments cannot reduce remaining_balance below 0 or change paid_at retroactively And all adjustments update the audit log with actor_id, timestamp, and before/after amounts
Transaction Ledger and Audit Trail Completeness
Given TrackCrate processes invoice attachment, escrow state change, payment, or adjustment events When any such event occurs Then a ledger entry is appended with a unique id, event_type, entity ids (escrow_id, invoice_id, payment_id), actor_id, timestamp (UTC), and before/after snapshots of affected fields And ledger entries are immutable (no update/delete), queryable by date range and entity id And the count of ledger entries increases by exactly one per event processed
Surface Payment Status in Release View
Given a release has contributors with invoices linked to an escrow When an authorized user opens the release view Then each contributor row displays payment status (Unpaid, Partially Paid, Paid), paid_amount/total_amount, and last_payment_date And links to the related escrow and invoice(s) are available And the view reflects escrow/payment changes within 30 seconds of an update And users without finance permission see status labels but not monetary amounts
QuickBooks/Xero Export & Sync
"As a label accountant, I want to export invoices to QuickBooks or Xero with correct mappings so that our accounting stays synchronized without re‑keying data."
Description

Provide OAuth connections to QuickBooks Online and Xero to export invoices with correct mappings to contacts, items, accounts, tax codes, and tracking categories. Support multi‑org selection, idempotent exports, sandbox environments, and scheduled batch exports. Store external IDs, handle retries and rate limits, and surface per‑invoice export logs and errors. Ingest webhook callbacks to pull back payment status and reconcile within TrackCrate, ensuring amounts, currency, taxes, and numbering remain consistent across systems.

Acceptance Criteria
OAuth Connection Setup for QuickBooks Online and Xero
Given I am a TrackCrate admin, When I connect to QuickBooks Online via OAuth 2.0, Then I am redirected to consent, I can authorize required scopes, and upon success TrackCrate stores encrypted access/refresh tokens and displays the connected company name and ID with status "Connected". Given I am a TrackCrate admin, When I connect to Xero via OAuth 2.0 and select a tenant, Then TrackCrate stores encrypted tokens, records the selected tenant ID, and displays the tenant name with status "Connected". Given an OAuth access token will expire within 5 minutes, When background refresh runs, Then the token is refreshed using the stored refresh token without user interaction and the new expiry is persisted. Given a provider connection is active, When I click Disconnect and confirm, Then tokens are revoked/removed and the connection status changes to "Disconnected" and no further exports are allowed until reconnected. Given an OAuth attempt fails (user denies consent or provider error), When TrackCrate receives the callback, Then a clear error with provider code and description is shown and no tokens are stored.
Multi‑Org Selection and Persistence
Given my TrackCrate workspace has connections to multiple QuickBooks companies and/or Xero tenants, When I open Export Settings, Then I can select the default company/tenant per provider and save the selection. Given I have saved a default organization, When I start an export, Then the export targets the saved organization and the target organization is shown in the confirmation UI. Given no organization is selected for a connected provider, When I attempt to export, Then the action is blocked and I am prompted to select an organization. Given I switch the selected organization, When I refresh or sign back in, Then the selection persists for the workspace and is recorded in the audit log with user, old value, new value, and timestamp.
Invoice Export Mapping and Idempotency
Given a TrackCrate invoice with contact, line items, accounts, tax codes, tracking categories, currency, and numbering configured, When I export to QuickBooks Online, Then a QBO Invoice is created in Draft/Open with Customer mapped by external ID (or created if missing), line Items mapped by ItemRef and IncomeAccountRef, TaxCodeRef set per configuration, Class/Department set for tracking, amounts and tax totals equal TrackCrate values within 0.01, currency matches, invoice number respects the configured numbering rule, and the external Invoice ID is stored on the TrackCrate invoice. Given the same TrackCrate invoice, When I export to Xero, Then a Xero Invoice is created in Draft/Awaiting Approval with Contact mapped or created, line Items mapped to Items and AccountCodes, TaxType set per configuration, Tracking Categories set, amounts/taxes equal within 0.01, currency matches, InvoiceNumber respects the configured numbering rule, and the Xero InvoiceID is stored on the TrackCrate invoice. Given an invoice has been exported and an idempotency key is present, When I re‑export without changes within the idempotency window, Then no duplicate invoice is created and the same external Invoice ID is returned and confirmed in logs. Given an invoice has been exported and only allowed updatable fields change (e.g., memo), When I re‑export, Then the existing external invoice is updated in place (subject to provider state rules), the version/revision is stored, and no duplicate is created; if the provider blocks updates, the export fails with a clear, non‑duplicating error.
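The idempotency requirement reduces to: a repeated export with the same key returns the stored external ID instead of creating a duplicate. The store and external-ID format below are illustrative stubs, not the QuickBooks/Xero APIs:

```python
class Exporter:
    def __init__(self):
        self.seen = {}    # idempotency key -> external invoice id
        self.created = 0  # how many provider invoices were actually created

    def export(self, invoice_id, idempotency_key):
        """Returns (external_id, created_new). Re-export with the same key
        is a no-op that returns the original external ID."""
        if idempotency_key in self.seen:
            return self.seen[idempotency_key], False
        self.created += 1
        external_id = f"QBO-{self.created}"  # fake provider ID
        self.seen[idempotency_key] = external_id
        return external_id, True

ex = Exporter()
print(ex.export("inv-1", "key-1"))  # ('QBO-1', True)
print(ex.export("inv-1", "key-1"))  # ('QBO-1', False) -- no duplicate
```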
Sandbox Environment Support and Isolation
Given I connect to a provider sandbox organization, When I view the connection, Then a visible "Sandbox" badge is shown and the organization is labeled as sandbox in settings and export dialogs. Given a sandbox organization is selected, When I export an invoice, Then the invoice is created only in the sandbox environment and is never sent to a production organization. Given both sandbox and production organizations are connected, When I choose a target for export, Then the UI clearly distinguishes them and prevents selecting both at once for a single export. Given a scheduled export is configured, When it runs against a sandbox organization, Then all created records remain in sandbox and are tagged as sandbox in logs and summaries.
Scheduled Batch Exports with Rate Limits and Retries
Given there are pending invoices eligible for export, When the daily scheduled job runs at the configured time, Then invoices are exported in deterministic batches, progress is tracked, and a summary with counts (exported, skipped, failed, retried) is recorded. Given the provider returns a rate‑limit response, When the batch job detects limit headers, Then it applies exponential backoff with jitter, respects reset windows, and resumes without data loss. Given a transient provider error occurs (5xx/timeout), When exporting a batch, Then TrackCrate retries up to 3 times per invoice with exponential backoff and preserves idempotency keys across retries. Given the maximum retry attempts are exhausted, When an invoice still fails to export, Then the invoice status is set to "Failed", the error is logged with provider code and correlation ID, and the invoice is excluded from further automatic retries until manually retriggered.
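The retry policy above, "exponential backoff with jitter" capped at 3 attempts, is commonly implemented as full jitter: attempt i waits a random amount up to `base * 2**i`, capped. A sketch (the seeded RNG is only for reproducibility here):

```python
import random

def backoff_delays(base=1.0, retries=3, cap=60.0, rng=random.Random(42)):
    """Full-jitter exponential backoff: attempt i sleeps a uniform random
    duration in [0, min(cap, base * 2**i)] seconds."""
    return [rng.uniform(0, min(cap, base * 2 ** i)) for i in range(retries)]

print(backoff_delays())
```

In production the idempotency key is held constant across these retries, and a provider `Retry-After`/reset header, when present, would override the computed delay.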
Per‑Invoice Export Logs and Error Visibility
Given an invoice export is attempted, When I open the invoice Export tab, Then I can see a chronological log of export attempts with timestamp, target organization, operation (create/update), idempotency key, request/response status, and provider correlation/request IDs. Given an export succeeds, When I view the log, Then it shows the external Invoice ID, version/revision (if applicable), and a link to view in the provider (respecting permissions). Given an export fails, When I view the log, Then it shows the provider error code, human‑readable message, HTTP status, and the attempt number, and provides a "Retry" action when eligible. Given compliance requirements, When I download logs, Then only metadata is exported by default, with raw payloads redacted unless I have explicit permission to view sensitive data.
Webhook Payment Status Sync and Reconciliation
Given provider webhooks are configured, When QuickBooks or Xero sends a valid signed webhook for a payment created or updated, Then TrackCrate verifies the signature, locates the related external Invoice ID, and updates the TrackCrate invoice to Paid/Partially Paid/Void with payment date, amount, and currency. Given a payment is partial, When the webhook is processed, Then the remaining balance is updated in TrackCrate and the invoice status reflects Partially Paid. Given the amounts, currency, taxes, or invoice number received in the webhook disagree with TrackCrate by more than 0.01 or involve a different currency, When reconciliation runs, Then TrackCrate flags a "Mismatch" state, does not auto‑reconcile, and surfaces a clear alert with the differing fields. Given a payment or invoice is voided/deleted in the provider, When the webhook arrives, Then TrackCrate updates the local invoice status accordingly and logs the event with provider correlation ID.
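Verifying the webhook signature before trusting the payload typically means recomputing an HMAC over the raw request body (Xero, for instance, sends a base64-encoded HMAC-SHA256 digest in a signature header; exact header names and algorithms vary by provider, so treat this as a sketch):

```python
import base64
import hashlib
import hmac

def verify_webhook(raw_body: bytes, signature_header: str, key: bytes) -> bool:
    """Recompute the HMAC over the *raw* body and compare in constant time."""
    digest = hmac.new(key, raw_body, hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode()
    return hmac.compare_digest(expected, signature_header)

key = b"webhook-signing-key"  # hypothetical shared secret
body = b'{"events":[{"resourceId":"inv-42","eventType":"PAYMENT.CREATED"}]}'
sig = base64.b64encode(hmac.new(key, body, hashlib.sha256).digest()).decode()
print(verify_webhook(body, sig, key))         # True
print(verify_webhook(body + b" ", sig, key))  # False -- any byte change fails
```

Only after verification would TrackCrate look up the external Invoice ID and apply the payment status.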
Branded Templates & Localization
"As a small label owner, I want branded and localized invoices so that my paperwork is professional and fits local requirements."
Description

Enable customizable invoice templates per organization with logo, color palette, address, and tax identifiers. Provide a template editor with placeholders for milestone, release, contributor, and split data, generating PDF and email‑ready HTML outputs. Support localization for language, date formats, currency symbols, and jurisdictional tax fields (e.g., VAT/GST). Allow multiple templates per brand and role‑based access to manage templates, with preview and test send capabilities.

Acceptance Criteria
Create Organization Branded Invoice Template
Given I am an Organization Admin in Template Settings When I create a new invoice template and upload a logo (PNG/JPG/SVG up to 5 MB, minimum 256x256 px) And I set primary and secondary colors as valid HEX codes (#RRGGBB or #RRGGBBAA) And I enter organization name, postal address, and select a tax jurisdiction And I enter at least one tax identifier required for the chosen jurisdiction And I save the template Then the template saves successfully with logo, colors, address, and tax identifiers persisted And invalid files, color codes, or missing required fields prevent save with specific inline error messages
Placeholder Mapping and Validation
Given I open the Template Editor When I open the placeholder picker Then the editor offers these placeholders: {{organization.logo}}, {{organization.name}}, {{organization.address}}, {{tax.jurisdiction}}, {{tax.id}}, {{tax.label}}, {{invoice.number}}, {{invoice.date}}, {{due.date}}, {{milestone.name}}, {{milestone.id}}, {{release.title}}, {{release.catalog_number}}, {{contributor.name}}, {{contributor.role}}, {{split.percent}}, {{amount.subtotal}}, {{amount.tax}}, {{amount.total}}, {{currency.symbol}}, {{currency.code}} And required placeholders for invoice generation are validated: {{invoice.number}}, {{invoice.date}}, {{amount.total}}, {{currency.code}} When I click Preview with sample data Then any unresolved placeholders are highlighted and listed, and Save is disabled until all required placeholders are resolved
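The validation step above, highlighting unresolved tokens and missing required placeholders, can be sketched with a simple token scan (the template syntax is taken from the criterion; the function is illustrative):

```python
import re

REQUIRED = {"invoice.number", "invoice.date", "amount.total", "currency.code"}
TOKEN = re.compile(r"\{\{([a-z_.]+)\}\}")

def validate_template(body, sample_data):
    """Return (required placeholders absent from the template,
    placeholders used but unresolved by the sample data)."""
    used = set(TOKEN.findall(body))
    missing_required = REQUIRED - used
    unresolved = {t for t in used if t not in sample_data}
    return missing_required, unresolved

body = ("Invoice {{invoice.number}} dated {{invoice.date}}: "
        "{{amount.total}} {{currency.code}} for {{release.title}}")
sample = {"invoice.number": "INV-1", "invoice.date": "2025-01-02",
          "amount.total": "10.00", "currency.code": "EUR"}
print(validate_template(body, sample))  # (set(), {'release.title'})
```

Save would stay disabled while either set is non-empty for the required placeholders.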
Localization by Template Locale
Given a template locale is set to fr-FR When I preview or generate an invoice totaling 12345.6 EUR dated 2025-01-02 Then the date displays as 02/01/2025 And the currency displays as 12 345,60 € and EUR appears where configured Given a template locale is set to en-US When I preview the same invoice totaling 12345.6 USD dated 2025-01-02 Then the date displays as 01/02/2025 And the currency displays as $12,345.60 And negative values display with a leading minus sign (e.g., -$10.00) Given a template locale is set to ar-EG Then HTML output sets dir="rtl" and uses RTL-safe fonts while numbers remain LTR
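The fr-FR and en-US examples above can be reproduced with a small hand-rolled formatter; a real implementation would draw grouping separators, symbols, and sign placement from CLDR data via a library such as Babel rather than hard-coding two locales:

```python
def group(digits, sep):
    """Insert a thousands separator every three digits from the right."""
    parts = []
    while len(digits) > 3:
        parts.insert(0, digits[-3:])
        digits = digits[:-3]
    parts.insert(0, digits)
    return sep.join(parts)

def format_currency(amount, locale):
    whole, frac = f"{abs(amount):.2f}".split(".")
    sign = "-" if amount < 0 else ""
    if locale == "fr-FR":   # space grouping, comma decimal, trailing symbol
        return f"{sign}{group(whole, ' ')},{frac} €"
    if locale == "en-US":   # comma grouping, period decimal, leading symbol
        return f"{sign}${group(whole, ',')}.{frac}"
    raise ValueError(f"unsupported locale: {locale}")

print(format_currency(12345.6, "fr-FR"))  # 12 345,60 €
print(format_currency(12345.6, "en-US"))  # $12,345.60
print(format_currency(-10, "en-US"))      # -$10.00
```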
Jurisdictional Tax Fields (VAT/GST)
Given the template jurisdiction is set to EU VAT When seller and buyer VAT numbers are provided and tax rate is 0% Then the invoice displays both VAT numbers and includes the note 'Reverse charge — Article 196, Council Directive 2006/112/EC' And VAT numbers validate against the pattern ^[A-Z]{2}[A-Z0-9]{2,12}$ Given the template jurisdiction is set to AU GST When a valid ABN (11 digits) is entered Then the invoice displays the label 'ABN' and calculates GST at the configured rate, with GST line item and totals shown And if GST is 0%, the invoice shows 'GST-free' next to relevant line items
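The VAT pattern named in the criterion is directly testable (note it is a format check only, not a registry lookup against VIES):

```python
import re

# Pattern taken verbatim from the acceptance criterion above.
VAT_RE = re.compile(r"^[A-Z]{2}[A-Z0-9]{2,12}$")

def is_valid_vat(number: str) -> bool:
    return VAT_RE.fullmatch(number) is not None

print(is_valid_vat("DE123456789"))    # True
print(is_valid_vat("FRXX999999999"))  # True
print(is_valid_vat("123456789"))      # False -- missing country prefix
```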
Multiple Templates per Brand and Default Selection
Given an organization with two brands (Brand A, Brand B) When I create three templates for Brand A and two for Brand B Then all five templates are listed grouped by brand And I can mark exactly one template per brand as Default And when generating an invoice for Brand B, the Brand B Default template is preselected And non-default templates remain selectable And archiving a template removes it from selection without altering existing issued invoices
Role-Based Access to Manage Templates
Given roles exist: Organization Admin, Template Manager, Member, Viewer When a Template Manager creates, edits, sets default, archives, or restores a template Then the action succeeds When a Member attempts to create, edit, set default, archive, or restore a template Then the action is blocked with a 'permission denied' message; viewing and preview remain allowed When a Viewer accesses templates Then they cannot access the editor and only see rendered invoices And all create/edit/archive/restore actions are recorded in the audit log with user, timestamp, and action
PDF and Email-Ready HTML Output with Preview and Test Send
Given a saved template When I click Preview Then the system renders both HTML and PDF previews for a 1-page invoice within 5 seconds And the PDF uses the locale’s paper size (A4 for fr-FR, Letter for en-US), embeds fonts, preserves brand colors, and keeps layout fidelity within a 5 mm tolerance And hyperlinks and email addresses are clickable in both HTML and PDF When I click Test Send and enter a valid recipient email Then an email is sent with the HTML body and the PDF attached (<1.5 MB) using the brand’s From name and logo And the system records a Test Sent event with recipient, template ID, and timestamp, and shows status 'Accepted by SMTP' or 'Delivered'
Roles, Approvals & Notifications
"As a producer, I want to be notified when my invoice is generated and paid so that I can track my income without constantly checking the app."
Description

Implement role‑based permissions for creating, editing, approving, exporting, and viewing invoices. Provide an approval workflow gate before escrow attachment or external export, with change requests and comments. Send notifications to contributors, managers, and accountants on key events such as invoice generated, needs approval, exported, payment received, or export errors. Offer in‑app inbox, email notifications with deep links, reminders, and user‑level notification preferences.

Acceptance Criteria
Role-Based Permissions for Invoice Actions
- Given a Manager, when they create an invoice for a project they manage, then the invoice is created.
- Given an Admin, when they create or edit any invoice in the organization, then the action succeeds.
- Given a Contributor, when they attempt to create or edit an invoice, then the system returns 403 and records an audit entry.
- Given a Manager or Admin, when they edit an invoice in Draft or Changes Requested, then edits are saved; when the invoice is Approved, then edit is blocked unless the user selects Revert to Draft with a required reason, which clears approvals and logs the change.
- Given an Approver (Manager, Accountant, or Admin), when they approve an invoice, then approval is recorded; given a Contributor, when they attempt to approve, then 403 is returned.
- Given an Accountant or Admin, when they export to QuickBooks/Xero, then the action is allowed only if the invoice is Approved; otherwise blocked with validation.
- Given any user, when they view invoices, then visibility is scoped: Contributors see their own invoices; Managers see invoices for their managed projects; Accountants see all invoices in the organization; Admins see all.
- Given any unauthorized action, then return 403, make no data changes, and record user, action, resource, timestamp, and IP in the audit log.
Approval Gate Before Escrow Attachment and Export
- Given an invoice in Draft or Changes Requested, when a user attempts to attach it to escrow or export it, then the action is blocked with Approval required.
- Given an Approver, when they confirm Approve, then the invoice status becomes Approved and approver, timestamp, and a checksum are recorded.
- Given an Approved invoice, when any editable field changes, then status resets to Draft, all approvals are cleared, and re-approval is required before escrow or export.
- Given an Approver, when they select Request Changes with a required comment, then status becomes Changes Requested and the invoice owner is notified.
- Given an Approved invoice, when a user attaches it to escrow, then the attachment succeeds and is logged; when status is not Approved, then the action is disabled.
- Given an Approved invoice, when a user exports it, then the export proceeds and external reference IDs and outcomes are stored.
Change Requests with Threaded Comments on Invoices
- Given a user with Approver or Manager role, when they submit a change request, then a comment with required category and text is added and the invoice moves to Changes Requested.
- Given any participant with access, when they view the invoice, then comments display in chronological threads with author, role, and timestamp and are immutable; any edits create a visible revision history.
- Given a comment with @mention of an organization user, when posted, then the mentioned user is notified according to preferences.
- Given an attachment upload to a comment (max 25 MB; pdf, png, jpg, docx), when uploaded, then it is virus-scanned and stored; on detection of malware, the upload is rejected with an error.
- Given a change request thread, when the requester marks it Resolved, then a required resolution note is captured and the thread is marked Resolved.
- Given any comment activity, then an audit log entry is created.
Notifications on Invoice Lifecycle Events
- Given event Invoice Generated, when an invoice is created, then send notifications to the owner, listed contributors, and project managers via in-app and email with a deep link.
- Given event Needs Approval, when an approval request is sent, then notify assigned approvers.
- Given event Approval Granted, when an invoice is approved, then notify the owner, contributors, and accountants.
- Given event Changes Requested, when an approver requests changes, then notify the owner and tagged users.
- Given event Exported, when export to QuickBooks/Xero succeeds, then notify owner and accountants including the external reference.
- Given event Export Error, when export fails, then notify owner and accountants with the error code and a retry link.
- Given event Payment Received, when escrow marks paid, then notify owner, contributors, managers, and accountants.
- Given notifications dispatch, then in-app messages are delivered within 5 seconds and email queued within 60 seconds; duplicate events within 15 minutes are deduplicated; failures are retried up to 3 times with exponential backoff and logged.
- Given a user opted out of an event or channel, then no notification is sent to that user for that event.
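The dedup and retry behavior above can be sketched as follows. The 15-minute window and three retries come from the criteria; the in-memory store, key shape, and 1-second backoff base are assumptions for illustration.

```python
DEDUP_WINDOW_S = 15 * 60  # duplicate events within 15 minutes are suppressed

class NotificationDeduper:
    """Minimal sketch; production would use a shared store, not a dict."""

    def __init__(self):
        self._last_sent = {}  # (recipient, event, resource) -> last-send time

    def should_send(self, recipient, event, resource, now_s):
        key = (recipient, event, resource)
        last = self._last_sent.get(key)
        if last is not None and now_s - last < DEDUP_WINDOW_S:
            return False  # duplicate within the dedup window: suppress
        self._last_sent[key] = now_s
        return True

def retry_delays(attempts=3, base_s=1.0):
    """Exponential backoff schedule for failed deliveries.
    Up to 3 retries per the criteria; the base delay is an assumption."""
    return [base_s * (2 ** i) for i in range(attempts)]
```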
User Notification Preferences and Reminders
- Given a user, when they open Notification Preferences, then they can enable or disable each event type per channel (In-App, Email), defaulting to enabled.
- Given preferences are changed, when saved, then the changes take effect immediately and persist across sessions.
- Given a pending approval older than 24 hours, when reminders are enabled, then send a reminder to assigned approvers; if still pending after 72 hours, then send a second reminder.
- Given reminder scheduling, then reminders are sent in the recipient’s timezone between 09:00 and 18:00 on weekdays and are suppressed on weekends.
- Given a reminder notification, when the recipient selects Snooze 24h, then the next reminder is delayed by 24 hours.
- Given a user has opted out of reminders, then no reminders are sent to that user.
Secure Deep Links in Email Notifications
- Given an email notification with a deep link, when the recipient clicks within 48 hours, then they are routed to the invoice detail; if not authenticated, then they must log in before being redirected back.
- Given a deep link token, when it is expired or revoked, then redirect to a safe landing page without revealing invoice metadata.
- Given a user without permission clicks a deep link, then show 403 and reveal no invoice content or title.
- Given deep link usage, then each click is logged with user, timestamp, and IP for audit.
- Given token security, then tokens are signed, time-limited to 48 hours, scoped to the organization, and revocable by Admin.
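A signed, time-limited, org-scoped token like the one described can be sketched with an HMAC. This is a minimal illustration, not a production token format: the secret would come from a KMS with per-org keys, and revocation (also required above) would need a server-side denylist that this sketch omits.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"org-scoped-signing-key"  # assumption: per-org key held in a KMS
TTL_S = 48 * 3600                   # tokens are time-limited to 48 hours

def make_token(invoice_id, org_id, now_s):
    payload = json.dumps({"inv": invoice_id, "org": org_id,
                          "exp": now_s + TTL_S}).encode()
    # hex signature contains no "." so the separator below is unambiguous
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return base64.urlsafe_b64encode(payload + b"." + sig).decode()

def verify_token(token, org_id, now_s):
    raw = base64.urlsafe_b64decode(token.encode())
    payload, sig = raw.rsplit(b".", 1)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered: route to the safe landing page
    data = json.loads(payload)
    if data["exp"] < now_s or data["org"] != org_id:
        return None  # expired or wrong organization scope
    return data["inv"]
```

In practice a standard signed-token format (e.g. JWT with short expiry) would serve the same purpose; the point here is only that signature, expiry, and org scope are all checked before any invoice metadata is revealed.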
Compliance, Numbering & Audit Trail
"As a label administrator, I want compliant numbering and a full audit trail so that we meet regulatory and audit requirements without manual spreadsheets."
Description

Support configurable invoice numbering schemes per organization with prefixes, sequences, and reset rules. Capture legal names and tax identifiers, and allow attaching compliance documents (e.g., W‑9, W‑8BEN, VAT certificates). Make invoices immutable after payment, with governed correction flows. Maintain a complete audit trail of edits, approvals, exports, and payments with timestamps and actor identity, provide exportable logs, and apply data protection controls including field‑level encryption and access logging.

Acceptance Criteria
Org-level configurable invoice numbering scheme
- Given an organization has a numbering template "ACME-{YYYY}-{SEQ:6}" with annual reset on Jan 1 UTC and prefix "ACME" When three invoices are generated on 2025-03-14 UTC Then their numbers are ACME-2025-000001, ACME-2025-000002, ACME-2025-000003
- When the next invoice is generated on 2026-01-01 UTC Then its number is ACME-2026-000001
- When 100 invoices are generated concurrently for that org Then all 100 invoice numbers are unique and the highest SEQ equals the count generated
- If a user without "Billing Admin" role attempts to manually edit an assigned invoice number Then the change is rejected with HTTP 403 and an audit entry is recorded
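The template expansion in the example above can be sketched as below. The `{YYYY}`/`{SEQ:n}` grammar is inferred from the single example in the criteria and is an assumption; sequence allocation and the annual reset (restarting `seq` at 1 each Jan 1 UTC) would live in the database to stay safe under concurrency.

```python
import re
from datetime import date

def render_invoice_number(template, seq, on):
    """Expand a numbering template like 'ACME-{YYYY}-{SEQ:6}'.
    `seq` is the already-allocated sequence number for the current
    reset period; {SEQ:n} renders it zero-padded to width n."""
    out = template.replace("{YYYY}", f"{on.year:04d}")
    m = re.search(r"\{SEQ:(\d+)\}", out)
    if m:
        out = out[: m.start()] + str(seq).zfill(int(m.group(1))) + out[m.end():]
    return out
```

Uniqueness under 100 concurrent generations is not a formatting concern: it requires allocating `seq` atomically (e.g. a database sequence or a row lock per org and year), which this sketch deliberately leaves out.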
Capture and validate legal names and tax identifiers
- Given a payee profile with country = US and entity_type = Company When a tax_id is entered that does not match EIN format "NN-NNNNNNN" Then the profile cannot be saved and a validation error is shown
- Given a payee profile with country = US and entity_type = Individual When a tax_id is entered that does not match SSN format "NNN-NN-NNNN" or ITIN format Then the profile cannot be saved and a validation error is shown
- Given a payee profile with country in EU and a VAT ID is provided When the VAT ID fails checksum/format validation Then the invoice cannot be approved for export and an actionable error is displayed
- Given a payee profile missing required legal_name or applicable tax_id When an invoice is generated or exported for that payee Then the operation is blocked with a specific error referencing the missing fields And the legal_name and tax_id appear on the rendered invoice/PDF according to jurisdictional requirements
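The US format checks quoted above reduce to simple patterns. This sketch covers only the shapes named in the criteria ("NN-NNNNNNN" and "NNN-NN-NNNN"); real validation would add ITIN-specific digit rules and the EU VAT checksums, which vary by member state.

```python
import re

EIN = re.compile(r"^\d{2}-\d{7}$")        # EIN shape "NN-NNNNNNN"
SSN = re.compile(r"^\d{3}-\d{2}-\d{4}$")  # SSN/ITIN shape "NNN-NN-NNNN"

def valid_us_tax_id(entity_type, tax_id):
    """Format-only check for US payees, per the criteria above.
    ITINs share the SSN shape; their extra digit constraints are omitted."""
    if entity_type == "Company":
        return bool(EIN.match(tax_id))
    return bool(SSN.match(tax_id))
```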
Attach and manage compliance documents to payees
- Given a payee opens Compliance Documents When they upload a document of type W-9, W-8BEN, or VAT Certificate in PDF/PNG/JPG up to 10 MB Then the upload succeeds, the file is virus-scanned, stored with SHA-256 checksum, and linked to the payee and upcoming invoices
- Given a document with an expiration date has expired When an invoice is approved for export Then the export is blocked until a current document is attached
- Given a new version of a document is uploaded When viewing the document history Then prior versions remain visible with version, uploader, timestamp, and checksum And deleting a document is prevented if it is referenced by a paid invoice
Immutability after payment with governed correction flow
- Given an invoice is marked Paid When any user attempts to edit header fields, line items, tax, totals, or numbering via UI or API Then the request is rejected with HTTP 409 and a message indicating the invoice is immutable after payment
- Given a paid invoice requires a correction When a user with "Billing Admin" role initiates "Issue Credit Note" referencing the original invoice Then a credit note is created with its own unique number, the linkage to the original is recorded, and an audit entry is created
- Given a paid invoice requires a replacement When a user with "Billing Admin" and "Approver" roles initiates "Issue Replacement Invoice" Then a new invoice is created referencing the original, the original remains immutable, and downstream exports are marked as corrections And original invoice numbers are never reused in any flow
End-to-end audit trail with timestamps and actor identity
- Given any action occurs on an invoice or payee profile (create, edit, approve, export, payment status change, document upload/delete attempt) When the action completes or fails Then an immutable audit record is created containing ISO-8601 UTC timestamp, actor id, actor role, action type, object id, source IP/user-agent, correlation id, and before/after values (with sensitive fields redacted)
- Audit records are append-only; delete/update operations on audit records are rejected with HTTP 405
- Audit records are hash-chained so that altering any prior entry invalidates a stored chain checksum
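The hash-chaining property can be sketched as follows: each entry's hash covers both its own serialized record and the previous entry's hash, so editing any earlier record breaks every later link. The JSON serialization and field names here are assumptions for illustration.

```python
import hashlib
import json

GENESIS = "0" * 64  # hash "parent" for the first entry

def append_audit(chain, record):
    """Append a record to a hash-chained, append-only audit log."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps(record, sort_keys=True)  # canonical serialization
    entry = {"record": record, "prev": prev,
             "hash": hashlib.sha256((prev + body).encode()).hexdigest()}
    chain.append(entry)
    return entry

def chain_valid(chain):
    """Recompute every link; any tampered entry invalidates the chain."""
    prev = GENESIS
    for e in chain:
        body = json.dumps(e["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True
```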
Exportable audit logs with filters and redaction
- Given a user with "Compliance Admin" or "Accountant" role opens Audit Export When they request an export with filters (date range, actor, action type, object id) and format CSV or JSON Then the system generates a file within 60 seconds containing only matching records with headers, includes a SHA-256 checksum and HMAC signature, and streams it for download
- Sensitive fields are redacted unless the user has explicit "View Sensitive" permission and provides a purpose-of-use note
- The export action itself is logged in the audit trail with the same filters and signature metadata
Field-level encryption and access logging for sensitive fields
- Given sensitive fields (tax_id, SSN/ITIN, VAT ID) are stored When inspecting the database at rest outside the application context Then those fields are not readable in plaintext and are encrypted using field-level encryption with keys managed by KMS
- Given a user without "View Sensitive" permission views a payee profile When the profile is rendered Then sensitive fields are masked (e.g., last 4 only) and no decryption occurs And an access log entry is created for every read/write attempt to sensitive fields including actor, timestamp, field, purpose (if provided), success/failure, and key version used
- Given a user with "View Sensitive" permission and active MFA attempts to view full tax_id When they provide a purpose-of-use note Then the full value is shown and the decrypt event is logged; without MFA or purpose the request is denied with HTTP 403
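The "last 4 only" masking rule might look like the sketch below. Preserving the separators (so an SSN still reads as ***-**-6789) is a presentation choice assumed here, not something the criteria mandate.

```python
def mask_tax_id(value, keep=4):
    """Mask all but the last `keep` alphanumeric characters of a
    sensitive identifier, keeping separators so the shape is recognizable."""
    kept = 0
    out = []
    for ch in reversed(value):
        if ch.isalnum() and kept < keep:
            out.append(ch)  # keep the trailing characters
            kept += 1
        elif ch.isalnum():
            out.append("*")  # mask everything else
        else:
            out.append(ch)  # preserve separators like "-"
    return "".join(reversed(out))
```

Note that masking happens on render only: per the criteria, the stored value stays encrypted and no decryption occurs for users without "View Sensitive".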
Split Revisions & Credit Notes
"As a project manager, I want a safe way to revise invoices when splits change so that financial records remain accurate across TrackCrate and our accounting system."
Description

Provide a controlled revision process when contributor splits change after invoice generation. Generate credit notes or adjustments that reverse superseded amounts and issue revised invoices, maintaining links between original and replacement documents. Update exports in QuickBooks/Xero to reflect credit memos and replacements, reconcile balances, and notify impacted parties. Support period locks to prevent changes after financial close while allowing admin overrides with justification.

Acceptance Criteria
Split Revision Triggers Credit Note and Revised Invoice
- Given a finalized invoice exists for a milestone with split version V1, and a user with edit permissions submits split version V2 that changes contributor allocations, When the revision is confirmed Then the system generates a credit note reversing the net difference per contributor and line item from V1
- And issues a new invoice for V2 amounts
- And links the credit note and revised invoice to the original invoice and to each other
- And records revision ID, timestamps, and actor
- And prevents duplicate generation for the same revision event
Accurate Delta, Tax, Discount, and Rounding Calculations
- Given currency, tax settings (inclusive/exclusive), discounts, and rounding rules are configured, and original (V1) and revised (V2) line totals per contributor are known, When generating the credit note and revised invoice Then each contributor delta equals V2 line total minus V1 line total, respecting discounts and tax mode
- And tax is recalculated per jurisdiction and matches within 0.01 of the currency minor unit
- And totals of credit note plus revised invoice equal the revised allocation totals
- And if a contributor's net delta is zero, no credit or revised line is created for that contributor
- And document totals and PDF previews exactly match stored ledger values
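The per-contributor delta rule can be sketched with exact decimal arithmetic (floats would violate the 0.01 tolerance). This sketch assumes tax-inclusive line totals and omits discounts and jurisdictional tax recalculation for brevity.

```python
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")  # minor unit for 2-decimal currencies

def revision_deltas(v1, v2):
    """Per-contributor delta between split versions: V2 minus V1.
    Contributors with a zero net delta get no credit or revised line,
    per the criteria. Amounts are strings to keep Decimal exactness."""
    deltas = {}
    for who in set(v1) | set(v2):
        d = Decimal(v2.get(who, "0")) - Decimal(v1.get(who, "0"))
        d = d.quantize(CENT, ROUND_HALF_UP)
        if d != 0:
            deltas[who] = d  # negative -> credit note line, positive -> new charge
    return deltas
```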
Document Linkage, Statuses, and Audit Trail
- Given a revision occurs after invoice issuance, When the documents are generated Then the original invoice is marked Superseded and becomes immutable
- And the credit note and revised invoice store Supersedes/Replaces references to the original and each other
- And a human-readable change log shows old vs new splits by contributor and milestone
- And the audit log records who made the change, when, what changed, and the justification comment
- And versioned PDFs are retained for all related documents
QuickBooks/Xero Export with Credit Memos and Replacements
- Given the workspace is connected to QuickBooks and/or Xero, and the original invoice may or may not have been exported, When the revision is processed Then a credit memo is created in the accounting system referencing the original external document and applied to it if the original was exported
- And a replacement invoice is exported with a reference to the TrackCrate revision ID
- And if the original was not exported, only the replacement invoice is exported and no external credit memo is created
- And exports are idempotent via stable keys, preventing duplicates on retry
- And export status updates to Exported or Failed with error details and retry capability
Balance Reconciliation Across Escrow and Payments
- Given the original invoice may be partially or fully paid, When the credit note and replacement invoice are posted Then the credit note is applied to the original invoice within TrackCrate and the connected accounting system
- And any excess credit becomes an unapplied credit that can be applied to the replacement invoice
- And project escrow/payable summaries update to reflect revised totals
- And the net outstanding across original, credit note, and replacement equals the revised allocation within 0.01
- And payment allocations and histories remain intact and traceable
Impacted Party Notifications and Access Control
- Given contributors, managers, and accountants are subscribed to billing updates, When a split revision is finalized Then impacted parties receive an in-app and email notification with change summary (old vs new amounts, taxes, reason, affected milestones)
- And notifications are suppressed for parties with no amount change
- And links to the original, credit note, and replacement invoice are included with permissions enforced
- And notification delivery status is tracked for audit
Period Locks with Admin Override and Justification
- Given financial period P is locked in TrackCrate (and/or in the connected accounting system), When a user attempts to revise an invoice dated in period P Then the system blocks the action and shows a clear error without generating documents
- And an admin with override permission can proceed only after entering a justification and selecting an effective date in the next open period
- And the override action, user, timestamp, justification, and effective dates are logged
- And exported credit memos and replacement invoices respect period locks by using the next open period dates

Multi-Currency Payouts

Let each collaborator choose their payout currency with transparent FX quotes and fee estimates before approval. Lock rates at release, batch smaller payouts to reduce fees, and surface net amounts up front—fewer surprises for global teams working across time zones.

Requirements

Collaborator Currency Profiles
"As a collaborator, I want to set my preferred payout currency and method so that I receive funds in my local currency with minimal friction and fees."
Description

Store per-collaborator payout currency, payout method, and beneficiary details with per-release overrides. Validate currency–country compatibility and required fields for each rail (e.g., IBAN for EUR, routing/account for USD). Integrate with TrackCrate team roles and permissions so owners/managers can request changes while collaborators manage their own profiles. Provide secure UI and API endpoints for create/read/update with encryption at rest and tokenized access to third-party payout providers. Support minimum payout thresholds, base-currency holding, and automatic prompts to complete missing details before approval. Include migration tools and audit history for changes.

Acceptance Criteria
UI & API Profile CRUD with Security
- Given an authenticated collaborator, When they create or update their payout profile via UI or API, Then the profile is persisted and sensitive fields are encrypted at rest
- Given a saved profile, When it is returned via API/UI, Then sensitive fields (e.g., IBAN, account number) are masked with only the last 4 characters visible
- Given API access, When calling POST/PUT/GET endpoints, Then responses use standard codes (201 on create, 200 on update/read, 400/422 on validation errors) and ETags for caching where applicable
- Given integration with a third-party payout provider, When storing provider credentials, Then only provider-issued tokens are stored; raw secrets are never persisted
Currency–Country Compatibility and Rail Field Validation
- Given a selected payout country, currency, and rail, When the user submits profile details, Then incompatible combinations are rejected with a clear error (e.g., USD + SEPA is invalid)
- Given EUR via SEPA, When details are submitted, Then IBAN and BIC are required and format-validated
- Given USD via ACH, When details are submitted, Then routing number and account number are required and checksum-validated
- Given validation fails, When using the API, Then a 422 response is returned with field-level error codes and messages; the UI shows inline errors
Per-Release Payout Overrides
- Given a collaborator has a default payout profile, When a release manager sets a per-release override for currency and/or method, Then the override is stored scoped to that release without altering the default profile
- Given an override exists, When removed, Then the release reverts to the collaborator’s default payout profile
- Given an override change, When saved, Then an audit entry records who changed what and when
- Given a release approval, When overrides exist, Then the system uses override settings to determine payout rail validations
Role-Based Change Requests and Self-Management
- Given TrackCrate roles, When a collaborator accesses their own payout profile, Then they can create and edit it
- Given TrackCrate roles, When an owner/manager views a collaborator’s payout profile, Then they can request changes but cannot directly edit protected financial fields
- Given a change request, When it is sent, Then the collaborator receives an in-app notification and email with a secure link to review and apply changes
- Given insufficient permissions, When a user attempts to access or modify another user’s payout profile, Then access is denied (403) and logged
Minimum Payout Thresholds and Base-Currency Holding
- Given a collaborator sets a minimum payout threshold T in their payout currency, When their accrued balance is < T at payout cycle, Then no payout is initiated and the UI/API marks the status as Below Threshold
- Given balances accrue over time, When the balance reaches >= T, Then the payout becomes Eligible and is included in the next cycle
- Given system constraints, When setting T, Then the value must be >= the system minimum and respect currency precision (e.g., 2 decimals)
- Given base-currency holding is enabled, When payouts are deferred, Then funds remain in base currency until eligibility, at which time FX and fees are applied
Missing Details Prompts During Approval
- Given a release is pending approval, When any collaborator tied to the release lacks required payout details for their selected rail, Then the approval flow flags the collaborator and blocks payout for that collaborator until completion
- Given a flagged collaborator, When the approver proceeds, Then the system sends the collaborator a prompt via in-app notification and email with a checklist of missing fields
- Given the collaborator completes the missing fields and passes validation, When the approver refreshes the approval view, Then the block is cleared without re-entering previously provided data
- Given these events, When audited, Then the approval attempt and subsequent profile completion are linked in the audit trail
Migration Tooling and Audit History
- Given legacy payout data in CSV or JSON, When an admin runs the migration tool, Then valid rows are imported, invalid rows are skipped with a downloadable error report, and no secrets are logged
- Given migrated records, When stored, Then sensitive fields are encrypted at rest and masked in UI/API
- Given any profile change (create/update/delete/override), When it occurs, Then an immutable audit record is written including actor, timestamp (UTC), fields changed, and source (UI/API/migration)
- Given an audit viewer, When an owner/manager searches by collaborator or release, Then results can be filtered by date range and action type
Transparent FX Quote & Fee Engine
"As a release manager, I want a transparent FX rate and fee breakdown before approving payouts so that I can predict net amounts and avoid surprises."
Description

Provide real-time indicative FX quotes and fee breakdowns prior to approval. Fetch rates from primary and fallback providers, apply configurable markup, calculate provider and payout-rail fees, and show estimated delivery date/time. Quotes include timestamp, expiry TTL, and slippage tolerance; cache short-lived quotes per collaborator to reduce API calls. Round to currency precision, localize display by user locale/time zone, and show gross, fees, rate, and net side-by-side. Surface quotes in the approval flow, AutoKit release console, and via API, and persist each quote to an immutable audit log.

Acceptance Criteria
Primary/Fallback FX Quote Sourcing With Markup & Slippage
- Given a configurable markup of 50 bps and slippageTolerance of 30 bps, and the primary provider returns EUR→USD = 1.100000 at time t0 When a collaborator requests a quote to convert 1,000.00 EUR to USD Then the engine uses the primary provider, applies markup, and returns indicativeRate = 1.105500 (rounded to 6 dp), slippageToleranceBps = 30, sourceProvider = <name>, sourceType = "primary", createdAt = t0, expiresAt = t0 + 60s, and the response latency is ≤ 600 ms p95
- Given the primary provider times out (≥ 700 ms) or returns 5xx When a quote is requested Then the engine falls back to the secondary provider, applies the same markup, returns a structurally identical quote with sourceType = "fallback", and total response latency is ≤ 1,200 ms p95
- Given both providers are unavailable When a quote is requested Then the API returns HTTP 503 with a Retry-After header ≥ 5s and the UI displays a non-destructive error message with a retry option
Fee Breakdown & Net Amount Display
- Given grossAmount = 1,000.00 USD, indicativeRate (USD→EUR) = 0.900000, providerFee = 0.20% of converted amount + €1.00, payoutRailFee = €3.00, markup is applied in the rate (not listed as a fee) When the quote is generated Then convertedAmount = €900.00, providerFee = €2.80, payoutRailFee = €3.00, totalFees = €5.80, and netAmount = €894.20, all rounded to EUR precision (2 decimals) and displayed side-by-side with fields: grossAmount, indicativeRate, providerFee, payoutRailFee, totalFees, netAmount
- Given sourceCurrency equals payoutCurrency When the quote is generated Then indicativeRate = 1.000000, no provider FX fee is applied, payoutRailFee is still applied if configured, and netAmount = grossAmount − payoutRailFee
- Given the UI displays the quote When reviewed before approval Then gross, rate, each fee component, and net are visible in a single view with labels and tooltips explaining each component
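The worked example above can be reproduced with exact decimal arithmetic. The function signature and field names are illustrative; the fee parameters are inputs and the markup is assumed to be already baked into `rate`, exactly as the criteria state.

```python
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")  # EUR precision (2 decimals)

def quote_breakdown(gross, rate, provider_pct, provider_flat, rail_fee):
    """Compute converted amount, fee components, and net for a quote.
    All arguments are Decimal; rounding is to the payout currency's
    minor unit after each fee computation."""
    converted = (gross * rate).quantize(CENT, ROUND_HALF_UP)
    provider_fee = (converted * provider_pct + provider_flat).quantize(CENT, ROUND_HALF_UP)
    total_fees = provider_fee + rail_fee
    return {"convertedAmount": converted,
            "providerFee": provider_fee,
            "payoutRailFee": rail_fee,
            "totalFees": total_fees,
            "netAmount": converted - total_fees}
```

Running it with the numbers from the first criterion (gross 1,000.00, rate 0.900000, 0.20% + €1.00 provider fee, €3.00 rail fee) yields €900.00 converted, €2.80 provider fee, €5.80 total fees, and €894.20 net.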
Quote TTL, Expiry, and Delivery ETA Behavior
- Given a quote created at 2025-09-02T12:00:00Z with TTL = 60s When the current time is 2025-09-02T12:00:30Z Then the quote status is "valid" and approval is allowed
- Given the same quote When the current time is 2025-09-02T12:01:01Z (beyond TTL) Then the quote status is "expired", the UI shows a "Refresh quote" CTA, API approve calls return HTTP 410 Gone with code = QUOTE_EXPIRED, and refreshing yields a new quoteId
- Given payout rail = SEPA and business rule = T+1 business day by 17:00 Europe/Berlin When createdAt = 2025-09-02T14:00:00+02:00 and user timezone = America/Los_Angeles Then estimatedDeliveryAt is 2025-09-03T08:00:00-07:00 in the UI and API
- Given slippageToleranceBps = 30 When market mid-rate moves by > 30 bps between quote creation and approval attempt Then approval is blocked with HTTP 409 Conflict code = QUOTE_OUT_OF_TOLERANCE and the UI prompts to refresh
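The TTL and slippage checks above are small pure functions; a sketch, with the timestamps reduced to epoch seconds for clarity:

```python
def quote_status(created_at_s, ttl_s, now_s):
    """'valid' within the TTL, 'expired' once it has elapsed
    (an approve call on an expired quote returns 410 QUOTE_EXPIRED)."""
    return "valid" if now_s < created_at_s + ttl_s else "expired"

def within_tolerance(quoted_mid, current_mid, tolerance_bps):
    """True if the mid-rate move since quote creation is within the
    slippage tolerance; otherwise approval is blocked with
    409 QUOTE_OUT_OF_TOLERANCE."""
    move_bps = abs(current_mid - quoted_mid) / quoted_mid * 10_000
    return move_bps <= tolerance_bps
```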
Per-Collaborator Quote Caching & Invalidation
- Given collaboratorId = A, pair = EUR→USD, amount = 1,000.00, payoutRail = SWIFT, TTL = 60s When 5 quote requests are made within 60s Then the provider API is called at most once, all responses share the same quoteId, rate, fees, createdAt, expiresAt, and the cache hit rate for these requests is ≥ 80%
- Given the cached quote for collaboratorId = A When amount changes to 1,100.00 or pair/payoutRail changes or TTL expires or markupBps changes Then a new provider call is made and a distinct quoteId is returned
- Given collaboratorId = B requests the same parameters within A’s TTL When the request is made Then B does not receive A’s cached quote (quotes are isolated per collaborator) and a separate cache entry is created
- Given 10 concurrent identical requests for the same collaborator and params When they occur within a 50 ms window Then the system deduplicates and performs at most one upstream provider call (single-flight), returning the same quoteId to all callers
Rounding and Locale/Timezone Localization
- Given user locale = fr-FR and payout currency = EUR When displaying €1234.5 Then the UI shows "1 234,50 €" and API returns value = 1234.50 with currency = "EUR"
- Given user locale = ja-JP and payout currency = JPY When displaying ¥1234.49 Then the UI shows "¥1,234" (0 decimals) and API returns value = 1234 with currency = "JPY"
- Given user locale = en-GB and payout currency = KWD (3 decimals) When displaying 12.3457 KWD Then the UI shows "KWD 12.346" and API returns value = 12.346 with currency = "KWD"
- Given createdAt = 2025-09-02T15:00:00Z and user timezone = America/New_York When showing timestamps Then the UI displays in the user’s timezone with locale-appropriate format (e.g., "09/02/2025, 11:00 AM EDT" for en-US) and the API provides both ISO-8601 UTC and the user-localized string
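The minor-unit rounding behind the three currency examples can be sketched as a lookup of the ISO 4217 exponent per currency (2 for EUR, 0 for JPY, 3 for KWD). The table below covers only the currencies in the examples; a real implementation would load the full ISO 4217 list.

```python
from decimal import Decimal, ROUND_HALF_UP

# ISO 4217 minor-unit exponents for the currencies used in the examples.
MINOR_UNITS = {"EUR": 2, "USD": 2, "JPY": 0, "KWD": 3}

def round_currency(amount, currency):
    """Round to the currency's minor unit, half-up.
    `amount` may be a string or number; Decimal avoids float error."""
    exp = Decimal(1).scaleb(-MINOR_UNITS[currency])  # e.g. 0.01 for EUR
    return Decimal(str(amount)).quantize(exp, ROUND_HALF_UP)
```

Locale-specific display (grouping, decimal comma, symbol placement) is a separate formatting step applied on top of the already-rounded value.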
Multi-Surface Consistency: Approval Flow, Release Console, API
- Given a quote is generated for collaboratorId = A When viewed in the approval flow UI and the AutoKit release console Then grossAmount, indicativeRate, fees (provider, payoutRail, total), netAmount, createdAt, expiresAt, slippageToleranceBps, sourceProvider, sourceType, and estimatedDeliveryAt are identical on both surfaces
- Given the same quoteId When fetched via GET /v1/payouts/quotes/{quoteId} Then the JSON includes: quoteId, collaboratorId, sourceCurrency, payoutCurrency, amount, indicativeRate (6 dp), markupBps, slippageToleranceBps, fees {provider, payoutRail, total}, grossAmount, netAmount, createdAt, expiresAt, estimatedDeliveryAt, sourceProvider, sourceType, locale, timezone, and all numeric fields have explicit currency codes
- Given comparisons across UI and API for the same quoteId When values are compared Then all numeric fields match exactly to the minor unit precision of the payout currency and textual metadata matches case-sensitively; API p95 latency ≤ 500 ms
Immutable Audit Log Persistence for Quotes
- Given any quote is created When persisting the record Then an audit entry is appended with fields: quoteId, userId, collaboratorId, requestParams (currencies, amount, payoutRail), providerName, providerType, rawProviderRate, markupBps, indicativeRate, fees breakdown, createdAt, expiresAt, slippageToleranceBps, estimatedDeliveryAt, sourceType, and a SHA-256 contentHash; no existing audit entries are updated in-place
- Given an attempt is made to modify an audit entry When using admin or public APIs Then the operation is rejected with HTTP 405 Method Not Allowed and a new correction entry must be appended instead (if needed)
- Given audit retrieval by quoteId When the entry is fetched Then the stored contentHash matches a recomputed hash of the persisted payload (integrity passes) and the data matches the original quote response exactly
Release-Time Rate Locking
"As a label owner, I want to lock FX rates at release so that collaborators know exactly what they will receive."
Description

Lock FX rates at release approval to guarantee net amounts. Convert indicative quotes into locked rates per collaborator with defined lock duration (e.g., 24–72 hours) and automatic re-quote if the lock expires before execution. Store locked rate, fees, and expected net in the payout statement; prevent changes without re-approval. Support partial re-locks for amended splits, schedule-aware locks by collaborator time zone, and notifications for impending expirations.

Acceptance Criteria
Rate Lock at Release Approval
- Given a release with collaborators requiring FX conversion is pending approval, When an authorized approver approves the release, Then the system creates a locked FX rate per collaborator-currency pair using the latest available indicative quote at approval time, records a unique lock ID, and sets lock start and expiry timestamps according to the configured lock duration (e.g., 24–72 hours).
- Given locked rates are created, Then the payout statement displays for each collaborator within 1 second of approval: currency pair, locked rate, itemized fees, and expected net amount in the collaborator’s currency.
- Given a locked rate exists, Then the Execute Payout action is permitted only if current time is strictly before the lock expiry; otherwise it is disabled with reason "Lock expired".
Payout Statement Data Persistence and Immutability
- Given a release has been approved and rates locked, Then for each collaborator the payout statement stores: lock ID, quote provider, currency pair, locked rate (6-decimal precision), fee components (FX spread, provider fee, platform fee, network fee) with currencies, expected gross, expected net, approver ID, lock start, lock expiry, and statement version.
- When any client (UI or API) attempts to modify stored locked fields without re-approval, Then the API returns 403 Forbidden with error code LOCK_IMMUTABLE and the UI presents the fields as read-only with a "Re-approval required" badge.
- Then all lock lifecycle events (create, expire, re-lock, execute, cancel) are written to the audit log with timestamp, actor, and before/after values.
Automatic Re-Quote on Lock Expiry
- Given a locked rate reaches its expiry before payout execution, When current time >= lock expiry, Then the system sets the lock status to Expired, blocks payout execution for impacted collaborators, and fetches a fresh indicative quote per collaborator within 5 seconds.
- When new indicative quotes are obtained, Then the system updates the draft payout statement with new provisional net amounts and sends re-approval requests to approvers and notifications to impacted collaborators.
- Given an expired lock with new approval granted, When the approver re-approves, Then a new locked rate is created using the latest indicative quote, a new lock window is set, the statement version increments by 1, and the prior version remains immutable and viewable.
Change Control and Re-Approval Enforcement
- Given a locked rate exists, When any of the following change for a collaborator: payout currency, split percentage, fees configuration, or beneficiary details that affect fees, Then the system invalidates that collaborator’s lock, sets status to Requires Re-Approval, and prevents payout execution for that collaborator until re-approved.
- Then unaffected collaborators’ locks remain valid and executable if their data is unchanged and unexpired.
- Then an audit entry captures changed fields with old and new values, user, timestamp, and associated lock IDs.
Partial Re-Lock for Amended Splits
- Given multiple collaborators have active locks, When a split update impacts only a subset of collaborators, Then the system re-locks rates only for impacted collaborators, preserving existing locks and expiries for unaffected collaborators.
- Then the payout statement clearly labels lock state per collaborator (Locked, Re-locked, Expired, Requires Re-Approval) and recalculates expected net amounts only for impacted collaborators.
- Then payout execution is allowed only for collaborators with valid (non-expired) locks; collaborators pending re-approval are blocked until their re-lock completes.
Schedule-Aware Lock Windows by Time Zone
- Given each collaborator has a configured time zone, When a release is approved and locks are created, Then for each collaborator the system selects a lock duration such that at least one contiguous 6-hour window of the lock falls between 08:00 and 20:00 in the collaborator’s local time; if necessary, extend the duration up to the configured maximum to satisfy this rule.
- Then all UI and notifications display lock expiry in both UTC and the collaborator’s local time zone.
- Then if the rule cannot be satisfied within the maximum duration, the system flags the lock as "Out-of-hours" and requires explicit approver acknowledgment before proceeding.
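The daytime-window rule above can be implemented by scanning each local day the lock touches and extending the duration until a qualifying overlap appears. A minimal sketch, assuming `tz` is a `tzinfo` (e.g. a `zoneinfo.ZoneInfo` built from the collaborator's configured zone) and hourly extension steps:

```python
from datetime import datetime, timedelta

DAYTIME_START, DAYTIME_END = 8, 20       # 08:00-20:00 local
REQUIRED_WINDOW = timedelta(hours=6)

def has_daytime_window(start_utc, duration, tz):
    """True if at least one contiguous 6-hour span of [start, start+duration]
    falls between 08:00 and 20:00 in the collaborator's local zone."""
    local_start = start_utc.astimezone(tz)
    local_end = (start_utc + duration).astimezone(tz)
    day = local_start.date()
    while day <= local_end.date():
        win_open = datetime.combine(day, datetime.min.time(), tz).replace(hour=DAYTIME_START)
        win_close = win_open.replace(hour=DAYTIME_END)
        # Overlap of the lock interval with this day's 08:00-20:00 window.
        if min(local_end, win_close) - max(local_start, win_open) >= REQUIRED_WINDOW:
            return True
        day += timedelta(days=1)
    return False

def pick_lock_duration(start_utc, base, maximum, tz, step=timedelta(hours=1)):
    """Extend the base duration up to the configured maximum until the rule
    holds; None means flag the lock "Out-of-hours" for acknowledgment."""
    d = base
    while d <= maximum:
        if has_daytime_window(start_utc, d, tz):
            return d
        d += step
    return None
```

For example, a lock starting at 21:00 local time needs roughly 17 hours to cover 08:00–14:00 of the next day, so a 5-hour base would be extended.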
Expiration Notifications and Alerts
- Given a locked rate with a future expiry exists, Then the system schedules notifications at T-12h, T-2h, and T-15m before expiry to approvers and impacted collaborators via email and in-app, including collaborator name, currency pair, locked rate, expected net, and expiry times in local and UTC, with links to execute or re-approve.
- Given a lock is executed or re-locked before expiry, Then pending expiry notifications for that lock are canceled within 60 seconds.
- Given a notification delivery failure, Then the system retries up to 3 times with exponential backoff (minimum 1 min, maximum 10 min) and records success/failure in the audit log.
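The notification timing above reduces to two small schedules. A sketch; doubling between retries is an assumption, since the criterion only fixes the retry count and the 1-minute/10-minute bounds:

```python
from datetime import datetime, timedelta, timezone

REMINDER_OFFSETS = (timedelta(hours=12), timedelta(hours=2), timedelta(minutes=15))

def reminder_times(expiry):
    """Expiry reminders at T-12h, T-2h, and T-15m."""
    return [expiry - off for off in REMINDER_OFFSETS]

def retry_delays(attempts=3, base_s=60, cap_s=600):
    """Exponential backoff for failed deliveries: up to 3 retries,
    starting at the 1-minute minimum and capped at 10 minutes."""
    return [min(base_s * 2**i, cap_s) for i in range(attempts)]
```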
Smart Batch Payouts
"As an accountant, I want smaller payouts to be batched intelligently so that we minimize fees without delaying releases."
Description

Aggregate eligible payouts by collaborator, currency, and provider to reduce transaction fees while honoring due dates. Define batching windows and minimum thresholds, with simulations that show expected fee savings versus immediate settlement. Allow per-collaborator opt-out and manual force-send. Handle partial fulfillment when thresholds aren’t met, ensure idempotent execution, and log batch composition, savings, and outcomes. Respect provider cut-off times and regional holidays.

Acceptance Criteria
Batch Formation by Collaborator, Currency, and Provider
Given eligible payouts exist across multiple collaborators, currencies, and providers When the batch job runs within a defined batching window Then payouts are grouped into distinct batches where each batch contains payouts with the same collaborator, the same payout currency, and the same payout provider And no batch includes payouts past their due date unless force-send is invoked And payouts with mismatched attributes are not co-mingled And each batch is assigned a unique batch_id and computed totals (gross, fee estimate, net)
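The grouping rule above is essentially a group-by on the three batch attributes. A minimal sketch; the field names are illustrative, not a real TrackCrate schema, and fee/net computation is omitted:

```python
import uuid
from collections import defaultdict

def form_batches(payouts):
    """Group eligible payouts by (collaborator, currency, provider),
    assigning each batch a unique batch_id and a gross total so that
    payouts with mismatched attributes are never co-mingled."""
    groups = defaultdict(list)
    for p in payouts:
        groups[(p["collaborator"], p["currency"], p["provider"])].append(p)
    return [
        {
            "batch_id": str(uuid.uuid4()),
            "key": key,
            "payouts": items,
            "gross": sum(i["amount"] for i in items),
        }
        for key, items in groups.items()
    ]
```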
Batching Windows, Minimum Thresholds, and Partial Fulfillment
Given system configuration defines batching_window duration and min_batch_amount per currency/provider When the batching window closes for a candidate batch Then if batch sum >= min_batch_amount, all eligible payouts in that batch are marked Ready for Settlement And if batch sum < min_batch_amount by any payout’s due date, the system force-sends only those due payouts and leaves non-due payouts queued And if due dates are in the future and threshold not met, payouts remain queued until the earlier of threshold met or due date And the applied configuration version and evaluation timestamp are recorded with the decision
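The window-close decision above can be sketched as a single evaluation per candidate batch. Field names (`amount`, `due`) are illustrative, and due dates are assumed to be `datetime.date` values:

```python
from datetime import date  # due dates assumed to be datetime.date values

def evaluate_batch(batch, min_batch_amount, today):
    """Release the whole batch when it meets the per-currency/provider
    minimum; otherwise force-send only payouts at or past their due date
    and keep the rest queued until threshold or due date."""
    total = sum(p["amount"] for p in batch)
    if total >= min_batch_amount:
        return {"ready": list(batch), "forced": [], "queued": []}
    forced = [p for p in batch if p["due"] <= today]
    queued = [p for p in batch if p["due"] > today]
    return {"ready": [], "forced": forced, "queued": queued}
```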
Simulation of Fee Savings vs Immediate Settlement
Given a user requests a simulation for current eligible payouts When the system evaluates immediate vs batched settlement Then the response includes for each batch: immediate_settlement_fee, batched_settlement_fee, expected_savings_amount, expected_savings_percent, projected_net_per_payout, and assumed execution date/time And the simulation includes the FX rate snapshot timestamp, provider fee schedule version, and batching window end And sum(projected_net_per_payout) + batched_settlement_fee + applicable taxes equals sum(gross) within rounding rules And the simulation completes within 3 seconds for up to 1,000 payouts
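The per-batch savings figures in the simulation response can be derived as below. A deliberately simplified sketch assuming a flat per-transaction fee; real provider fee schedules are tiered and versioned, and the FX snapshot fields are omitted:

```python
def simulate_savings(n_payouts, per_tx_fee, batch_fee):
    """Compare immediate settlement (one fee per payout) against a single
    batched settlement fee, returning the fields the simulation reports."""
    immediate = per_tx_fee * n_payouts
    savings = immediate - batch_fee
    return {
        "immediate_settlement_fee": immediate,
        "batched_settlement_fee": batch_fee,
        "expected_savings_amount": savings,
        "expected_savings_percent": round(100 * savings / immediate, 2),
    }
```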
Per-Collaborator Opt-Out and Manual Force-Send
- Given a collaborator has opted out of batching, When their payouts are evaluated, Then their eligible payouts bypass batching and are scheduled for immediate settlement if due within 24 hours.
- Given an authorized user triggers force-send for specified payouts, When the action is confirmed, Then the selected payouts are removed from any pending batches and sent immediately, with an audit log capturing user, reason, timestamp, and affected payout IDs.
Idempotent Batch Execution
Given a batch is created with a deterministic batch_id and idempotency_key When the settlement job is retried or receives duplicate execution requests within the idempotency window Then no payout is disbursed more than once And the system returns the existing batch execution result without side effects And partially executed batches resume from the last confirmed payout only And idempotency is verified across service restarts
Provider Cut-Off Times and Regional Holidays
Given provider cut-off times and regional holiday calendars are configured per currency/region When a batch would settle after a cut-off time or on a holiday in any relevant region Then settlement is scheduled for the next eligible business day at the earliest available cut-off And simulations and UI reflect the adjusted execution date and any updated fee/FX assumptions And items that would miss their due date due to cut-offs/holidays are force-sent to meet due dates
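Rolling a settlement date past cut-offs, weekends, and holidays reduces to a forward scan over the calendar. A sketch, assuming Saturday/Sunday weekends and a per-region holiday set supplied by configuration:

```python
from datetime import date, timedelta

def next_settlement_date(requested, cutoff_missed, holidays, weekend=(5, 6)):
    """A missed cutoff pushes settlement to the next day, then weekends
    and configured regional holidays are skipped until the next eligible
    business day (weekday() 5/6 are Saturday/Sunday)."""
    d = requested + timedelta(days=1) if cutoff_missed else requested
    while d.weekday() in weekend or d in holidays:
        d += timedelta(days=1)
    return d
```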
Batch Logging, Auditability, and Outcomes
Given any batch is simulated, created, executed, partially executed, or failed When the event occurs Then the system logs batch_id, payout IDs, collaborator IDs, currency, provider, totals (gross, estimated/actual fees, net), estimated/actual execution timestamps, FX quote snapshot IDs, configuration version, initiator (user/system), and outcome status per payout And logs are immutable, queryable by batch_id and payout_id, and retained for at least 24 months And outcome statuses are synchronized to payout records within 60 seconds of provider callbacks And a downloadable audit report (CSV/JSON) can be generated per batch
Net Amount Transparency & Approvals
"As a collaborator, I want to see my guaranteed net amount before I approve a release so that I can make informed decisions."
Description

Present a clear, itemized view of gross amounts, fees, FX rate, and final net for each collaborator before approval. Require explicit consent from stakeholders, capturing digital signatures, timestamps, IP, and versioned terms. Provide downloadable statements and API exports, notify collaborators with a preview of their net in their currency, and reflect updates when splits or currencies change. Enforce access controls and show change history to maintain trust and accountability.

Acceptance Criteria
Pre-Approval Net Summary Modal
- Given a payout draft includes collaborator X with currency preference Y, When the review screen loads, Then display gross amount in source currency, itemized fees (platform, network, FX), quoted FX rate, and computed net in Y with currency symbol and ISO code.
- Given any fee or FX quote is unavailable, When the review screen loads, Then show "quote unavailable" with retry and disable the Approve action until a valid quote is retrieved.
- Given the user edits split percentages or currency preference, When values are changed, Then recompute and refresh all displayed amounts within 1 second and show a "Last updated" timestamp in UTC.
- Given rounding rules for currency Y, When net amounts are displayed and exported, Then on-screen net equals exported net within ±0.01 Y and the sum of collaborator nets equals gross minus fees within ±0.01 of source currency.
Digital Approval and Consent Capture
- Given a collaborator clicks Approve on their payout summary, When they confirm, Then capture digital signature (typed full name), UTC timestamp to milliseconds, originating IP, user agent, and terms/version hash, and persist as an immutable approval record linked to the payout run and collaborator.
- Given an approval record exists, When viewing the audit trail, Then display approver, values approved (gross, fees, FX rate, net), terms/version, timestamp, and IP.
- Given splits or currency preferences change after approval and before release, When the system detects the change, Then mark the prior approval as stale, notify the collaborator, and require re-approval before release.
- Given the terms version changes before release, When the collaborator attempts to approve, Then require consent to the new terms and store the new version hash on approval.
Notifications with Net Preview
- Given a payout draft is ready for collaborator review, When notifications are triggered, Then send an email and in-app notification to each collaborator containing their net amount in their currency, itemized fees, FX rate, and a secure link to the approval screen.
- Given the FX quote includes an expiration, When composing the notification, Then include the expiration timestamp (UTC) and a notice that the rate may change after expiry.
- Given splits or currency preferences are updated prior to release, When recalculation completes, Then send an updated notification marked as a revision that supersedes prior notifications and include the new net amount and revision number.
Change History and Access Controls
- Given a user with Project Admin role opens payout change history, When the log is displayed, Then show chronological entries with actor, action, before/after values (splits, currencies, fees, FX rate), timestamp (UTC), and originating IP.
- Given a user without financial permissions attempts to view another collaborator's payout details, When they request the page or API, Then return 403 Forbidden and do not reveal amounts or rates.
- Given a collaborator views their own payout, When accessing details, Then show only their figures, approval status, and download statement option; do not show other collaborators' data.
- Given any change occurs after a collaborator has approved, When viewing the history, Then display that approvals were invalidated, including who made the change and when, and which collaborators must re-approve.
Rate Lock at Release
- Given a payout run is ready for release, When an authorized user performs Rate Lock, Then snapshot FX rates and fee schedules, compute final nets, persist the lock with UTC timestamp and locker identity, and prevent further edits that affect amounts.
- Given rates are locked, When a user attempts to modify splits or currency preferences, Then block the change or require creating a new payout run version, clearing prior approvals and requiring re-approval.
- Given statements are generated after rate lock, When comparing to disbursed amounts, Then statements show the locked FX rate and match disbursed totals exactly.
Statements and API Exports
- Given a collaborator has an approval (current or stale), When they click Download Statement, Then generate PDF and CSV containing gross, itemized fees by type, FX rate, net amount, currency codes, approval metadata (signature, timestamp, IP, terms/version), and rate-lock info.
- Given the API client requests /payouts/{id}/statements?format=csv, When authorized, Then return a pre-signed URL valid for 15 minutes and log the request in the audit trail.
- Given a statement is regenerated after changes, When a new version is produced, Then increment the statement version, mark the prior version as superseded (if approvals invalidated), and keep all versions accessible via audit history.
Batch Small Payouts Fee Optimization
- Given multiple collaborators have small net payouts below threshold T, When batch optimization is enabled for the payout run, Then aggregate eligible payouts by currency and payment rail, recompute fees, and display expected fee savings and the updated net per collaborator prior to approval.
- Given batching alters the settlement date, When presenting the approval screen, Then show the new estimated settlement date and require collaborator acknowledgment before approval.
- Given a collaborator opts out of batching, When saving preferences, Then exclude them from aggregation and update their fee estimate and net accordingly.
Compliance & KYC/AML Controls
"As a compliance manager, I want automated KYC and sanctions checks on payout recipients so that we meet legal obligations and reduce risk."
Description

Enforce jurisdiction-specific KYC requirements and sanctions screening for beneficiaries based on payout currency, country, and method. Collect and securely store identity artifacts where required, handle repeat verification via provider tokens, and block payouts until checks pass. Apply transaction monitoring rules (limits, velocity, risk scoring), surface clear remediation steps, and retain auditable logs for regulators. Provide data retention schedules, PII masking, user consent capture, and deletion workflows aligned with privacy laws.

Acceptance Criteria
Jurisdiction-Specific KYC Enforcement on First Payout
Given a beneficiary selects payout currency, country, and method, When they attempt to add a payout method, Then the system determines the KYC tier from the rules table for that jurisdiction and method and displays required fields and documents. Given required KYC fields or document uploads are incomplete or invalid, When the beneficiary submits, Then the submission is rejected with field-level errors and a checklist of missing items. Given all required fields and documents are complete, When submitted, Then the system creates a verification case with the provider, sets KYC status=Pending, and blocks payout initiation. Given the provider returns Approved, When status updates, Then KYC status=Approved and payout initiation is enabled for that beneficiary. Given the provider returns Rejected or Needs More Info, Then payout remains blocked and the system surfaces specific remediation steps and allowed document types.
Sanctions and Watchlist Screening at Onboarding and Payout
Given a new beneficiary record is created or updated, When saved, Then the system screens beneficiary and UBO names/identifiers against OFAC, EU, UN, HMT, and local lists via the screening provider with similarity threshold set to 85%. Given a payout is queued, When the last screening cache age exceeds 24 hours, Then the system re-screens before execution. Given screening returns a potential match at or above threshold, Then payout status=On Hold, the beneficiary is flagged, and a case is created for manual review with matched list, score, and watchlist entry details. Given a case is resolved as False Positive by a reviewer with justification, When saved, Then the beneficiary is unblocked and an allowlist exemption with expiry date is stored. Given a case is confirmed as True Match, Then all payouts are blocked, the account is frozen, and an alert is logged for compliance reporting.
Reuse of Provider Verification Tokens
Given the provider returns a reusable verification token after KYC, When the same beneficiary initiates additional payouts, Then the system uses the token to satisfy KYC without re-collecting documents. Given the token is expired, revoked, or scope-mismatched, When detected, Then the system prompts for re-verification and lists required documents based on current rules. Given token use succeeds, Then no PII documents are stored again and the KYC status remains Approved; an audit log records token reference and provider transaction ID.
Transaction Monitoring: Limits, Velocity, and Risk Scoring
Given transaction monitoring rules are configured (per-transaction/daily/weekly/monthly amount limits, count-per-window, country/method risk weights), When a payout is created, Then the system computes a risk score using current FX rates and evaluates rules. Given any rule threshold is exceeded, Then the payout is moved to On Hold, the reason code and breached rule are attached, and the beneficiary is notified in-app and by email. Given the risk score is greater than or equal to 80 or the amount exceeds USD 10,000 equivalent, Then enhanced due diligence is required and the system requests additional information before release. Given an authorized compliance role applies an override with justification, Then the payout may proceed and the override is recorded with user, timestamp, rule version, and justification.
Payout Blocking and User-Facing Remediation
Given any KYC/AML check is incomplete, pending, failed, or stale beyond policy (e.g., older than 365 days), When a payout is attempted, Then the payout is blocked and the UI displays specific remediation steps with an estimated review SLA. Given required remediation items are submitted, Then the system updates the case and re-runs checks automatically; the payout remains On Hold until checks pass. Given all checks pass, Then the payout status automatically transitions from On Hold to Ready for Disbursement without manual intervention.
Immutable Audit Logs and Regulator Exports
Given any compliance-relevant event occurs (KYC submission, screening result, rule evaluation, block/unblock, override, consent capture, deletion), Then an immutable audit record is written with timestamp (UTC), actor, subject IDs, rule versions, provider response IDs, and before/after states, and cannot be edited. Given a compliance officer requests an export for a time range and beneficiary, When requested, Then a CSV/JSON export is generated within 60 seconds containing all audit records with PII masked per policy. Given an access attempt to audit logs by a non-authorized role, Then access is denied and an alert is logged.
Privacy: Consent, Masking, Retention, and Deletion
Given a beneficiary starts KYC, When they submit, Then explicit consent is captured with policy version, timestamp, and jurisdiction and is required to proceed. Given a user without Compliance Admin role views identity artifacts, Then only masked values are displayed (e.g., SSN last 4, document IDs partially redacted); full artifacts require time-bound privileged access with justification. Given data retention policies are configured (e.g., 5 years post last payout or per jurisdiction), Then PII is automatically scheduled for deletion or anonymization when retention expires, with legal hold exceptions honored. Given a verified data subject requests deletion and there is no legal hold, When approved, Then PII is deleted or irreversibly anonymized within 30 days, minimal legally required records are retained, and a deletion certificate is made available to the requester.
Payout Rails Integration & Reconciliation
"As an operations manager, I want reliable multi-rail payout execution with clear reconciliation so that accounting stays accurate across currencies."
Description

Integrate with multiple payout rails and providers (e.g., ACH, SEPA, Faster Payments, SWIFT via providers like Wise/Stripe) to execute payouts in the chosen currency. Implement webhook-based status updates, retries with exponential backoff, idempotent request keys, and robust error handling for rejections and returns. Model provider-specific cutoffs and delivery estimates, handle reference metadata for statement matching, and reconcile settlements back to TrackCrate statements. Provide admin dashboards for monitoring, exception queues, and manual remediation.

Acceptance Criteria
Idempotent Payout Requests Across Providers
- Given a payout creation request with idempotency_key K and an identical payload, When the request is submitted multiple times within 24 hours, Then exactly one provider API call is executed and a single payout record is created, And subsequent responses return HTTP 200 with the same payout_id and provider_transfer_id, And an audit trail logs the initial create and all idempotent replays.
- Given a payout creation request with the same idempotency_key K but a different payload, When the request is submitted, Then the API returns HTTP 409 Conflict with error code IDEMPOTENCY_PAYLOAD_MISMATCH and no new payout is created.
- Given idempotency keys are tenant-scoped, When two tenants use the same key, Then their requests are treated independently and do not collide.
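The replay-vs-conflict behaviour above can be sketched by hashing the canonicalized payload alongside the stored result. An in-memory illustration; a real implementation would persist the store, scope keys per tenant, and expire entries after the 24-hour window:

```python
import hashlib
import json

class PayoutIdempotency:
    """Same key + identical payload: replay the stored result with no
    provider call. Same key + different payload: 409 conflict."""

    def __init__(self):
        self._store = {}  # idempotency_key -> (payload_digest, result)

    @staticmethod
    def _digest(payload):
        # Canonical JSON so semantically identical payloads hash equally.
        return hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()

    def submit(self, key, payload, create):
        digest = self._digest(payload)
        if key in self._store:
            stored_digest, result = self._store[key]
            if stored_digest != digest:
                return 409, {"error": "IDEMPOTENCY_PAYLOAD_MISMATCH"}
            return 200, result  # idempotent replay, no side effects
        result = create(payload)  # the single provider API call
        self._store[key] = (digest, result)
        return 200, result
```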
Secure Webhook Processing and State Transitions
- Given a webhook event with a valid HMAC-SHA256 signature and known event_id, When the event is received, Then it is acknowledged within 2 seconds (p99) and processed exactly once using event_id de-duplication retained for 7 days, And the payout state transitions according to the provider→internal state map (e.g., created→processing→sent→delivered/failed/returned).
- Given a webhook event with an invalid signature, When the event is received, Then the system responds 401, ignores the payload, and records a security alert without changing payout state.
- Given two out-of-order events for the same payout, When a later-state event arrives before the earlier-state event, Then the later event is queued for up to 5 minutes or until predecessor states are processed, whichever comes first.
- Given processing errors occur while handling a webhook, When 5 processing attempts with exponential backoff fail, Then the event is moved to a dead-letter queue and the payout is surfaced in the Exceptions list.
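The two building blocks of the criterion above, signature verification and event_id de-duplication, can be sketched as follows. Header names and payload formats vary by provider and are assumed here:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """HMAC-SHA256 signature check over the raw body; compare_digest is a
    constant-time comparison, so verification doesn't leak timing."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

class WebhookDeduper:
    """event_id de-duplication so each event is processed exactly once.
    In-memory for illustration; the spec implies a persistent store
    with a 7-day retention window."""

    def __init__(self):
        self._seen = set()

    def should_process(self, event_id: str) -> bool:
        if event_id in self._seen:
            return False
        self._seen.add(event_id)
        return True
```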
Exponential Backoff and Retry Policy for Transient Failures
- Given a provider API call fails with a transient error (HTTP 5xx or network timeout), When submitting a payout or polling status, Then the call is retried up to 5 attempts with backoff schedule 5s, 15s, 45s, 135s, 405s and ±20% jitter, And retries stop upon the first 2xx response or provider duplicate/exists error code.
- Given a provider API call returns HTTP 429 with a Retry-After header, When handling the response, Then the system waits the Retry-After duration (max 60s) and retries up to 3 times before failing.
- Given a provider API call returns a non-retriable 4xx error (excluding 429), When handling the response, Then no retry is attempted and the payout is marked Failed with the provider error code/message and added to the Exceptions queue.
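The 5s/15s/45s/135s/405s schedule above is a tripling progression, which can be generated with jitter applied per attempt:

```python
import random

def backoff_schedule(attempts=5, base_s=5.0, factor=3.0, jitter=0.2, rng=random):
    """Delays of 5s, 15s, 45s, 135s, 405s, each with ±20% uniform jitter.
    Pass jitter=0 for the deterministic nominal schedule."""
    return [
        base_s * factor**i * (1 + rng.uniform(-jitter, jitter))
        for i in range(attempts)
    ]
```

The retry loop itself would sleep through these delays and stop at the first 2xx (or a provider duplicate/exists code), per the criterion.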
Provider Cutoffs and Delivery ETA Calculation
- Given a user creates a payout after the provider cutoff for the source→destination corridor, When the system computes the delivery estimate, Then the ETA reflects the next business day in the destination time zone and displays an “After cutoff” indicator with the cutoff timestamp used.
- Given a user creates a payout before cutoff on a business day, When the system computes the delivery estimate, Then the ETA falls within the provider’s advertised delivery window for the rail and corridor, with ≥99% of integration test cases within ±1 business day.
- Given a holiday or weekend per provider calendars for source or destination, When computing the ETA, Then the date skips non-business days according to corridor rules and shows the time zone used in the estimate.
Reference Metadata for Statement Matching
- Given a payout includes reference fields (purpose_code, end_to_end_id, invoice_number, release_id), When submitting to ACH/SEPA/SWIFT/Faster Payments, Then fields map to provider-specific locations (e.g., ACH addenda, SEPA RemittanceInformation, SWIFT MT103 :70/:72, FPS reference) and enforce per-rail length/character constraints, And values exceeding limits are truncated with an ellipsis suffix while full values are stored internally.
- Given a sandbox payout completes, When the beneficiary statement simulation is retrieved, Then the reference value appears where supported by the rail and is stored as beneficiary_statement_reference on the payout record.
- Given a rail does not support custom references, When the payout is sent, Then the reference metadata is retained internally and visible in admin/recipient views without being transmitted.
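The truncate-with-ellipsis rule above can be sketched per rail. The limit values here are illustrative defaults, not authoritative rail specifications, and character-set constraints are omitted:

```python
# Per-rail reference length limits; illustrative, not authoritative.
RAIL_REF_LIMITS = {"ACH": 80, "SEPA": 140, "SWIFT": 140, "FPS": 18}

def format_reference(rail, reference):
    """Return (stored_full, transmitted): the full value is always kept
    internally, while the transmitted value is truncated to the rail's
    limit with an ellipsis suffix when it would exceed it."""
    limit = RAIL_REF_LIMITS[rail]
    if len(reference) <= limit:
        return reference, reference
    return reference, reference[: limit - 1] + "…"
```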
Settlement Ingestion and Reconciliation to TrackCrate Statements
- Given provider settlement reports are available via webhook or SFTP, When the ingestion job runs hourly, Then files/events are fetched securely, parsed, and matched to payouts by end_to_end_id or composite (amount, currency, beneficiary, value_date) with ±0.01 currency tolerance; unmatched items are added to Exceptions.
- Given a settlement matches a payout, When reconciliation is applied, Then the payout status updates to Settled and TrackCrate statements record gross_amount, fees, net_amount, fx_rate_applied (if any), settlement_date, and reconciled_at timestamp.
- Given settlement data indicates a return or rejection, When reconciliation runs, Then the payout transitions to Returned or Rejected with provider reason code, funds movements are reversed, and notifications are sent to relevant collaborators.
- Given provider settlements are delayed, When 24 hours elapse after provider “Sent” status with no settlement, Then the payout is flagged Awaiting Settlement and surfaced in the Exceptions queue.
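The matching order above, exact end_to_end_id first, then the composite key with tolerance, can be sketched as follows. Items and payouts are plain dicts here for illustration:

```python
def match_settlement(item, payouts, tolerance=0.01):
    """Prefer an exact end_to_end_id match; otherwise fall back to the
    composite (amount, currency, beneficiary, value_date) within ±0.01
    currency tolerance. None means the item goes to Exceptions."""
    for p in payouts:
        if item.get("end_to_end_id") and item["end_to_end_id"] == p["end_to_end_id"]:
            return p
    for p in payouts:
        if (
            item["currency"] == p["currency"]
            and item["beneficiary"] == p["beneficiary"]
            and item["value_date"] == p["value_date"]
            and abs(item["amount"] - p["amount"]) <= tolerance
        ):
            return p
    return None
```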
Admin Monitoring Dashboard and Exception Remediation
- Given an Admin or Finance user opens the Payouts dashboard, When the list loads, Then they can filter by status, provider, rail, corridor, currency, date range, user, and exception flag; default sort is newest first; P95 load time < 2 seconds for 10k payouts.
- Given an exceptioned payout is opened, When the detail view is displayed, Then it shows audit trail, redacted provider request/response, webhook history, current state, ETA, and cutoff rationale.
- Given the user has the required role (Admin or Finance), When they perform remediation (retry submission, retry webhook processing, cancel if pending, resync with provider, force reconcile with settlement id, update reference before submission), Then the action executes with authorization, requires a reason note, and writes an immutable audit log with actor, timestamp, and changes.
- Given the dashboard remains open, When new events occur, Then the list auto-refreshes every 30 seconds without losing filters or scroll position and updates totals and exception counts.

Dispute FastTrack

If something’s off, place an instant hold while the system packages evidence—approvals, comments, file hashes, and change history. Propose partial releases or holdbacks, set response timers, and keep a clean audit trail to resolve disagreements quickly without derailing the timeline.

Requirements

Instant Hold & Scoped Freeze
"As a label admin, I want to instantly place a scoped hold on disputed assets so that no contested files or pages are shared or released while we investigate."
Description

Enable authorized users to place an immediate hold on a release, track, asset set, shortlink, or AutoKit press page, with clear scoping options (release-wide, per-asset, or per-link). Upon hold activation, downloads (including expiring/watermarked links) are suspended, public access to AutoKit pages and private stem players is gated, and external sync or distribution tasks are paused. Disputed items become read-only to prevent destructive edits while allowing comments and proposal workflows. The UI surfaces a persistent dispute banner, reason code, and dispute ID across impacted objects. Holds are reversible and auditable, with full logging of who initiated, when, and the chosen scope, ensuring minimal disruption to unaffected collaboration while preventing unintended release of contested materials.

Acceptance Criteria
Immediate Hold Propagation & Scope Enforcement
- Given an authorized user initiates a hold at scope = Release, When the hold is confirmed, Then within 15 seconds all child assets, shortlinks, and the AutoKit page under that release are placed On Hold and gated, and the hold is assigned a unique dispute ID.
- Given an authorized user initiates a hold at scope = Asset on a specific stem, When the hold is confirmed, Then within 15 seconds only that asset and its derivatives (previews, transcodes) are placed On Hold and gated, and unrelated assets in the same release remain unaffected.
- Given an authorized user initiates a hold at scope = Shortlink on a specific link, When the hold is confirmed, Then within 15 seconds that shortlink is gated and all other shortlinks and assets not in scope remain accessible.
Access Control for Hold Initiation and Release
- Given a user without hold permissions attempts to place or release a hold, When they submit the action, Then the operation is blocked with an insufficient permissions message and no hold state changes occur.
- Given a user with hold permissions places a hold with a reason code and optional note, When the hold is created, Then the system records the initiator, timestamp (UTC), scope, affected object IDs, reason code, and dispute ID in the audit log.
- Given a user without hold permissions attempts to lift an existing hold, When they submit the action, Then the operation is blocked with an insufficient permissions message and the hold remains active.
Suspension of Downloads and Gating of AutoKit/Stem Player
- Given active expiring or watermarked download links exist for an item placed On Hold, When a recipient attempts to download via any tracked shortlink or direct link, Then the download is blocked, no file bytes are served, and a gate page displays the dispute ID and reason code.
- Given a public visitor opens an AutoKit press page whose parent release is On Hold, When the page loads, Then the page renders a gated state with no asset access and displays the dispute ID and reason code.
- Given a collaborator opens the private stem player for an On Hold asset, When they attempt playback, Then playback is disabled while commenting remains available.
Read-Only Mode with Allowed Collaboration
- Given an object is On Hold, When a user attempts any destructive or state-changing action (upload, replace, delete, rename, move, edit metadata/rights, publish/unpublish, regenerate links), Then the system blocks the action, shows a read-only message referencing the dispute ID, and persists no changes.
- Given the same object is On Hold, When a user posts a comment or submits a proposal in the dispute workflow, Then the action succeeds and is recorded with linkage to the dispute ID in the audit trail.
- Given a bulk operation includes a mix of On Hold and not-on-hold items, When the bulk operation runs, Then On Hold items are skipped with explicit per-item errors and all non-held items complete successfully.
Pause and Resume of External Sync/Distribution
- Given outbound sync or distribution jobs are pending for a release, When a hold is placed at Release scope, Then no new outbound jobs are enqueued and all pending jobs transition to Paused within 60 seconds with reason = Dispute Hold.
- Given jobs were paused due to a hold, When the hold is lifted, Then paused jobs resume within 60 seconds in original order without creating duplicates.
- Given event webhooks or partner callbacks are configured, When an object is On Hold, Then outbound notifications related to gated actions are suppressed; when the hold is lifted, only the current state is emitted (no replay of suppressed events).
Persistent Dispute Banner and Cross-Object Visibility
- Given an On Hold object is viewed in the internal UI (release, track, asset, shortlink, AutoKit editor), When the page loads, Then a persistent banner is displayed containing the dispute ID, reason code, initiator, timestamp (UTC), and scope, with a link to view the dispute record.
- Given a collaborator lacks permission to view dispute details, When they view an affected page, Then the banner displays only dispute ID and reason code without restricted details.
- Given a public visitor views a gated public resource, When the page loads, Then only public-safe gating messaging is shown; no internal dispute banner or sensitive metadata appears.
Hold Reversibility, Auditability, and Overlapping Holds
Given multiple holds exist with overlapping scopes (e.g., Release-level and Asset-level) When evaluating hold state for an object Then the object remains On Hold if any applicable hold exists; lifting one hold only removes its own scope effects and others persist
Given a hold is created or lifted When the action completes Then an immutable audit entry is recorded with dispute ID, actor, action (create or lift), scope, affected object IDs, timestamp (UTC), and reason code, and the entry is available via UI export and API
Given a hold is lifted on an item with previously issued, unexpired download links When a recipient accesses those links Then access is restored within 15 seconds, AutoKit pages and private stem players are re-enabled, and paused external jobs resume as per policy
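The overlapping-hold rule above (On Hold while any applicable hold exists; lifting one hold removes only its own scope effects) can be sketched as follows. This is a minimal illustration with hypothetical `Hold` and `is_on_hold` names, not the actual TrackCrate data model:

```python
from dataclasses import dataclass, field

@dataclass
class Hold:
    dispute_id: str
    scope: set = field(default_factory=set)  # object IDs covered (release, track, asset, ...)
    lifted: bool = False

def is_on_hold(object_id: str, holds: list) -> bool:
    """An object stays On Hold while ANY unlifted hold covers it."""
    return any(object_id in h.scope and not h.lifted for h in holds)

release_hold = Hold("DSP-1", {"release-9", "asset-1", "asset-2"})
asset_hold = Hold("DSP-2", {"asset-2"})
holds = [release_hold, asset_hold]

release_hold.lifted = True  # lifting DSP-1 removes only its own scope effects
assert is_on_hold("asset-1", holds) is False  # only DSP-1 covered it
assert is_on_hold("asset-2", holds) is True   # still covered by DSP-2
```

Evaluating hold state by scanning all active holds (rather than storing a single flag per object) is what makes lifts reversible and composable across overlapping scopes.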
Evidence Packaging & Chain-of-Custody
"As a project owner, I want the system to auto-package evidence with hashes and history so that all parties can trust the record without manual assembly."
Description

Automatically assemble a tamper-evident evidence bundle at hold creation and as the dispute evolves, including approvals/sign-offs, comment threads, file/version hashes, change history, rights metadata, contributor identities, and timestamps. Generate a structured manifest (JSON) with a human-readable summary (PDF), embed cryptographic digests for each referenced artifact, and store the bundle in immutable storage with versioned updates. Provide authorized download sharing and deep links back to versioned assets within TrackCrate. Clearly capture provenance and chain-of-custody for each artifact to streamline resolution and support legal-grade verification if needed.

Acceptance Criteria
Auto-bundle creation at hold initiation and ongoing updates
Given a dispute hold is created and submitted When the hold is saved Then the system generates an evidence bundle within 30 seconds that includes approvals/sign-offs, full comment threads with authors and timestamps, file and version hashes for all referenced assets, change history, rights metadata, contributor identities, and hold metadata And the bundle contains manifest.json (schema version evidenceManifest.v1) and summary.pdf that mirrors manifest content And manifest.json and summary.pdf are stored as artifacts of the bundle version
Given a new relevant event occurs (approval, comment, file/version change, rights edit) on the disputed release When the event is saved Then a new bundle version is created within 60 seconds with an incremented version number and previousBundleDigest linking to the prior version
Immutable storage and tamper evidence for bundles
Given a bundle version is finalized When it is written to storage Then it is stored in immutable (WORM or equivalent) storage where delete and in-place update operations are blocked for all roles And the bundle version is addressed by a content digest equal to the SHA-256 of the canonicalized manifest.json And both manifest.json and summary.pdf have detached JWS signatures verifiable with the published TrackCrate public key
When any artifact in a stored bundle version is altered outside the expected versioning flow Then signature verification fails and the retrieval endpoint returns 409 with code EVIDENCE_TAMPER_DETECTED and the UI displays a tamper-evident warning
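Content-addressing a bundle by "the SHA-256 of the canonicalized manifest.json" requires a pinned canonical form so the same logical manifest always hashes identically. A minimal sketch, assuming sorted-key compact JSON as the canonicalization (the real service would pin an exact canonicalization spec such as JCS):

```python
import hashlib
import json

def manifest_digest(manifest: dict) -> str:
    # Canonicalize: sorted keys, compact separators, UTF-8 bytes —
    # one possible canonical form, assumed for illustration.
    canonical = json.dumps(
        manifest, sort_keys=True, separators=(",", ":"), ensure_ascii=False
    ).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Same logical manifest, different key order: same content address.
m1 = {"schema": "evidenceManifest.v1", "artifacts": [{"id": "a1", "digestHex": "ab"}]}
m2 = {"artifacts": [{"digestHex": "ab", "id": "a1"}], "schema": "evidenceManifest.v1"}
assert manifest_digest(m1) == manifest_digest(m2)
```

Any byte-level change to the canonicalized manifest changes the digest, which is what makes tampering detectable before signatures are even checked.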
Per-artifact digests and download integrity
Given an artifact is referenced in the manifest When the manifest is generated Then each artifact entry includes canonical ID/path, byteLength, mimeType, algorithm=SHA-256, digestHex, and trackCrateAssetVersionId
When a client downloads an artifact via deep link Then the server returns Digest and Content-Length headers matching the manifest values and client-side recomputation matches
If the recomputed digest does not match Then the download is aborted and the API returns 409 with code ARTIFACT_HASH_MISMATCH and the bundle is flagged as suspect
Authorized sharing and deep links to versioned assets
Given an authorized user requests a shareable download for a specific bundle version When the request is made Then the system issues a scoped, expiring URL with configurable TTL (1 hour to 30 days, default 7 days), optional usage limit, and permission scope limited to that bundle version And access to the URL requires token validation; expired or revoked tokens return 403; successful access returns manifest.json and summary.pdf with correct MIME types And all access events (timestamp UTC, IP, user/token id, outcome) are logged and appear in the dispute audit trail And summary.pdf and manifest.json contain deep links to the exact versioned assets in TrackCrate; users without permission see 403 without data leakage
Provenance and chain-of-custody timeline
Given any artifact is included in the bundle When provenance is recorded Then entries capture actorId, actorRole, actionType, method(UI/API), timestampUTC (RFC3339), artifactId, fromVersion->toVersion, and signature/fingerprint where applicable And each approval/sign-off includes actorId, decision, scope, timestampUTC, and a non-repudiable signature or cryptographic fingerprint And summary.pdf renders a chronological chain-of-custody timeline with per-step hashes that map 1:1 to manifest entries
Schema validation and external verification
Given a bundle version is generated When JSON Schema validation runs against manifest.json Then it conforms to evidenceManifest.v1 with zero errors; non-conforming manifests block sharing and surface validation errors to the user And the bundle includes a verification kit (manifest.json, summary.pdf, detached signatures, public key fingerprint, and CLI instructions) enabling offline verification
When the verification script is executed against the downloaded bundle Then it outputs PASS for intact bundles and FAIL with explicit error codes (e.g., SIG_INVALID, HASH_MISMATCH, SCHEMA_ERROR) for issues
Partial Release & Financial Holdbacks
"As an artist manager, I want to propose partial releases with holdbacks so that the project can keep moving while we isolate the disputed components."
Description

Offer structured proposals to release non-disputed assets while keeping specific items on hold, and to set financial holdbacks by amount or percentage for affected parties until resolution. Apply holdback logic to royalty splits and payouts, and reflect the state across shortlinks, AutoKit pages, and download permissions at an asset level. Validate proposals against rights metadata and existing approvals to prevent conflicts. Once a proposal is accepted, automatically enact the configuration and update project timelines to avoid derailing the overall release.

Acceptance Criteria
Validate Partial Release Proposal Excludes Disputed Assets
Given a project with at least one disputed asset and at least one non-disputed asset And an active dispute hold exists When a user with Partial Release permission creates a proposal and selects assets including both disputed and non-disputed items Then the system prevents adding disputed assets to the proposal and displays "Asset is under dispute and cannot be released" And the proposal can only contain non-disputed assets And the proposal summary displays the count of assets to release, affected surfaces (shortlinks, AutoKit, downloads), and projected visibility changes And the proposal is saved with a unique ID and status "Pending Decision" And an audit trail entry is recorded with proposer, timestamp, selected asset IDs, and validation outcome
Configure Financial Holdbacks by Percentage and Amount per Party
Given a project with defined royalty splits and payable parties When a user adds a holdback of 15% for Party A and $500 for Party B on affected assets Then the system applies holdbacks only to parties marked as affected by the dispute And validates that per-party holdbacks do not exceed that party's net payable for a payout period And validates that cumulative holdbacks for any party do not exceed 100% of their payable And calculates and displays projected withheld amounts per party per upcoming payout cycle And updates ledger previews and exports to include holdback lines with reason "Dispute Holdback" and the proposal ID And removing or editing a holdback immediately updates projections and validations
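The two holdback validations above (per-party holdback must not exceed that party's net payable; holdbacks can be set by percentage or fixed amount) can be sketched as below. The `validate_holdbacks` function and its record shapes are hypothetical, assumed for illustration only:

```python
def validate_holdbacks(net_payable: dict, holdbacks: list) -> list:
    """Reject holdbacks that exceed a party's net payable for the payout period.

    net_payable: party -> net payable amount for the period
    holdbacks: dicts with "party" plus either "amount" (fixed) or "percent"
    """
    errors = []
    withheld = {}
    for hb in holdbacks:
        party = hb["party"]
        # Fixed amount takes precedence; otherwise derive from percentage.
        amount = hb["amount"] if hb["amount"] is not None else net_payable[party] * hb["percent"] / 100
        withheld[party] = withheld.get(party, 0.0) + amount
    for party, total in withheld.items():
        if total > net_payable[party]:
            errors.append(f"{party}: holdback {total:.2f} exceeds net payable {net_payable[party]:.2f}")
    return errors

# 15% for Party A is fine; a $500 fixed holdback exceeds Party B's $400 payable.
payable = {"Party A": 2000.0, "Party B": 400.0}
hbs = [{"party": "Party A", "percent": 15, "amount": None},
       {"party": "Party B", "percent": None, "amount": 500.0}]
errs = validate_holdbacks(payable, hbs)
assert errs == ["Party B: holdback 500.00 exceeds net payable 400.00"]
```

Summing per-party withholdings before checking also covers the cumulative case where several holdbacks on the same party together exceed their payable.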
Rights Metadata and Approval Conflict Validation
Given rights metadata exists for each asset (ownership shares, roles, territories) and approval records When a user submits a proposal that changes asset availability or payout distributions Then the system verifies the submitter has the required role (Rights Admin or Project Owner) for the affected assets And blocks submission if any selected asset is missing mandatory metadata (e.g., ISRC, writers) and lists the missing fields And ensures ownership shares and royalty splits remain balanced (sum to 100%) after holdbacks And prevents proposals that broaden territorial availability beyond existing approvals And displays a consolidated, actionable error list identifying the conflicting assets and rules violated And records the validation result in the audit trail with references to rights and approval records checked
Auto-Enact Accepted Proposal with Timelines and Payout Updates
Given a saved proposal with defined partial releases and financial holdbacks When the proposal decision changes to Accepted Then within 60 seconds the system updates asset availability flags across shortlinks, AutoKit, stem player, and download permissions And applies the configured holdbacks to the payout engine effective next payout cycle And regenerates or invalidates caches so changes are visible to end users within 5 minutes And updates the project timeline by shifting only impacted tasks and preserving the global release date unless the critical path is affected And sends notifications to affected parties summarizing enacted changes And writes a signed audit entry including a hash of the enacted configuration and affected entity IDs And if any step fails, the system rolls back all changes and marks the proposal "Failed Enactment" with an error reason
Response Timers with Default Outcomes
Given a proposal includes a response window of N calendar days and a default outcome (Auto-Apply or Auto-Reject) When the response window elapses without required approvals collected Then the system executes the default outcome automatically And sends reminder notifications at 48 hours and 4 hours before expiry to required approvers And locks the proposal from further edits after expiry, requiring a new proposal for changes And records timestamps for creation, reminders, expiry, and auto-action in the audit log
Surface Reflection of Partial Release State Across Links and Pages
Given assets are marked Released or On Hold after enactment When a user or recipient accesses a track shortlink, an AutoKit page, the private stem player, or a download endpoint Then Released assets are visible/playable/downloadable according to permissions and watermark/expiry settings And On Hold assets are hidden or clearly labeled "On Hold (Dispute)" and are not playable or downloadable And API responses include accurate asset-level availability flags for clients And analytics events record blocked access attempts to held assets And UI and API reflect state changes within 5 minutes due to cache invalidation
Response Timers, Reminders & Escalations
"As a collaborator, I want clear response timers and automated reminders so that disputes resolve quickly without constant manual follow-up."
Description

Allow initiators to set response deadlines for counterparties, with default SLAs configurable at workspace or project level. Provide time zone–aware countdowns, scheduled reminders (in-app and email), and automated escalation to designated admins when deadlines lapse. Support policy-driven auto-outcomes (e.g., maintain hold, auto-accept holdback) when no response is received. Enable snooze/reschedule with audit logging and present a consolidated timeline of all dispute-related due dates alongside the release calendar.

Acceptance Criteria
Workspace & Project Default SLAs with Per-Dispute Overrides
Given a workspace default SLA in hours is configured And a project-level SLA override is configured When an initiator starts a Dispute FastTrack under that project Then the initial response deadline is calculated from the project SLA When the initiator edits the deadline before sending Then the edited deadline overrides defaults and is persisted on the dispute And the due timestamp is computed from initiation time and stored in UTC And the audit log records default source, override value, actor, and timestamp
Time Zone–Aware Due Dates and Countdowns
Given a dispute with a stored due timestamp in UTC When any participant views the dispute Then the due date/time renders in the viewer’s local time zone And a countdown displays days/hours/minutes remaining and updates at least every minute And daylight saving transitions are handled correctly And hovering or details reveal the UTC timestamp And due time renders consistently across web UI and email notifications
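Storing the deadline in UTC and rendering it per viewer, as specified above, can be sketched with the standard library. The function names are hypothetical; the DST-crossing example uses the 2025 US spring-forward (March 9) to show that UTC arithmetic stays correct across the transition:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def due_from_sla(initiated_utc: datetime, sla_hours: int) -> datetime:
    # The due timestamp is computed and stored in UTC; never in local time.
    return initiated_utc + timedelta(hours=sla_hours)

def render_local(due_utc: datetime, tz_name: str) -> str:
    # Convert to the viewer's zone only at display time.
    return due_utc.astimezone(ZoneInfo(tz_name)).strftime("%Y-%m-%d %H:%M %Z")

initiated = datetime(2025, 3, 8, 18, 0, tzinfo=timezone.utc)
due = due_from_sla(initiated, 48)  # 48h SLA crossing the US DST spring-forward
assert due == datetime(2025, 3, 10, 18, 0, tzinfo=timezone.utc)
assert render_local(due, "America/New_York") == "2025-03-10 14:00 EDT"
assert render_local(due, "UTC") == "2025-03-10 18:00 UTC"
```

Because the stored instant is UTC, the 48-hour window is exactly 48 elapsed hours for everyone; only the rendered wall-clock label shifts (EST before the transition, EDT after).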
Scheduled Reminders (In-App and Email)
Given reminder offsets are configured (e.g., 24h, 4h, 1h before due) When a dispute is pending response Then in-app reminders are posted to assigned counterparties at each offset And email reminders are sent at the same offsets with a deep link to the dispute And reminders are not sent to users not assigned to the dispute And duplicate reminders are suppressed within a 10-minute window And reminder deliveries and failures are logged with timestamp, channel, and recipient
Lapse Escalation to Designated Admins
Given designated escalation admins are set at workspace or project level And a dispute reaches its due time without a counterparty response When the deadline lapses Then an escalation event is created within 5 minutes And in-app and email notifications are sent to the designated admins And the dispute status reflects "Escalated" with the lapse timestamp And duplicate escalation notifications for the same lapse are suppressed And the escalation is recorded in the audit log with actor=system
Policy-Driven Auto-Outcome on No Response
Given a policy for no-response is configured (e.g., Maintain Hold or Auto-Accept Holdback with percentage) When the response deadline lapses without required responses Then the configured auto-outcome is applied to the dispute And the dispute state and any holdback value are updated accordingly And all involved parties are notified of the auto-outcome via in-app and email And the auto-outcome execution is logged with policy details, timestamp, and actor=system And auto-outcome does not execute if a response was received before the due timestamp
Snooze/Reschedule with Audit Logging
Given an authorized actor (initiator or designated admin) opens the dispute When they snooze or reschedule the response deadline Then a reason must be provided And the new due time must be in the future and within policy maximum extension limits And reminder and escalation schedules are recalculated to align with the new due time And all viewers see the updated due time and countdown within 1 minute And the audit log records old due, new due, actor, reason, and timestamp And previously scheduled reminders/escalations after the old due are canceled
Consolidated Timeline with Release Calendar
Given the release calendar is open When the user enables the Dispute Timeline overlay Then all dispute-related due dates, scheduled reminders, escalations, and auto-outcomes are displayed alongside release items And items can be filtered by workspace, project, release, counterparty, and dispute status And items are sortable by due date/time and show status badges (upcoming, due, lapsed, escalated, auto-outcome) And clicking any item deep-links to the corresponding dispute And all timestamps display in the viewer’s local time with an option to toggle UTC
Structured Negotiation & E‑Sign Off
"As a rights holder, I want a structured proposal and e-sign flow so that agreements are clear, enforceable, and immediately actionable."
Description

Provide a dedicated dispute workspace with structured proposals (terms, affected assets, amounts/percentages, timelines) and versioned counters. Each proposal supports accept/decline/counter actions with typed rationales and attachment of supporting artifacts from the evidence bundle. When parties reach agreement, capture legally binding acceptance via e-sign, lock the terms, and auto-apply changes (e.g., lift hold on specified assets, set holdbacks). Store the signed agreement with the dispute record for future reference.

Acceptance Criteria
Initial Proposal Creation & Validation
- Given an authenticated dispute participant with edit permissions is in a dispute with an active hold, When they select "New Proposal", Then the system displays a structured form with fields: Terms (rich text), Affected Assets (multi-select), Amounts/Percentages per party and asset (currency with 2 decimals, percent with 4 decimals), Timelines (dates and milestones), and Holdbacks toggle and values.
- Given the form is filled, When the user clicks Submit, Then validation enforces: at least one asset selected; for each asset, percentage allocations per royalty bucket total <= 100%; currency fields use the dispute currency; required fields are not empty; timeline dates are valid and not in the past (except start date = today allowed); and invalid fields are inline-highlighted with messages and submission blocked.
- Given a valid submission, When submitted, Then the system saves the proposal as version 1 with a unique ID, timestamps, author, and status "Proposed", associates it to the dispute record, and does not change current holds.
- Given a valid submission, When saved, Then notifications are sent to all designated counterparties and a response timer can be set (optional, 1–14 days, default 7) and stored.
Counterproposal Versioning & Change History
- Given a proposal v1 exists, When a counterparty clicks "Counter", Then the system creates v2 linked to v1 and pre-populates all fields for editing.
- Given edits are made, When v2 is saved, Then the system records a change log capturing field-level diffs with old and new values, author, timestamp, and rationale (required), and marks v2 status "Proposed".
- Given multiple versions exist, When viewing the proposal history, Then users can compare any two versions side-by-side with changed fields highlighted and previous versions read-only.
- Given concurrent counters are attempted by multiple parties, When a counter is submitted, Then optimistic locking prevents overwriting and prompts the user to review the latest version if the base version changed.
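The optimistic-locking rule above (a counter is rejected if it was drafted against a version that is no longer the latest) can be sketched as below. `ProposalStore` and its method names are hypothetical, used only to illustrate the technique:

```python
class ConflictError(Exception):
    pass

class ProposalStore:
    """Optimistic locking: every counter names the version it was based on."""

    def __init__(self):
        self.versions = []  # index + 1 == version number

    def submit(self, base_version: int, fields: dict) -> int:
        latest = len(self.versions)
        if base_version != latest:
            # Stale base: force the author to review the latest version first.
            raise ConflictError(f"based on v{base_version}, but latest is v{latest}")
        self.versions.append(fields)
        return latest + 1

store = ProposalStore()
assert store.submit(0, {"terms": "initial"}) == 1    # v1
assert store.submit(1, {"terms": "counter A"}) == 2  # v2, based on v1
try:
    store.submit(1, {"terms": "counter B"})  # second concurrent counter on v1
    conflict = None
except ConflictError as e:
    conflict = str(e)
assert conflict == "based on v1, but latest is v2"
```

A database implementation would typically enforce the same check with a compare-and-set on a version column rather than an in-memory list.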
Rationale and Evidence Attachment
- Given a user is creating or countering a proposal, When they enter a decision rationale, Then a minimum of 20 characters is required and a maximum of 2000, otherwise submission is blocked with inline error.
- Given attachments are added, When selecting supporting artifacts, Then only files from the dispute's evidence bundle can be attached (no external uploads), up to 10 files per proposal, total size <= 200 MB.
- Given artifacts are attached, When the proposal is viewed, Then each attachment shows filename, source evidence ID, file hash, uploader, and a quick preview or download link.
- Given attachments are removed before submission, When saving, Then the proposal saves without them and the audit trail records the removal action.
Accept, Decline, or Counter with Timers
- Given a user is a designated counterparty on a proposed version, When they open it, Then actions "Accept", "Decline", and "Counter" are enabled based on permissions and current status.
- Given the user selects Decline, When submitting, Then a typed rationale (min 20 chars) is required and the proposal status becomes "Declined" and the response timestamp is recorded and notifications sent.
- Given the user selects Counter, When submitting, Then a new version is created per versioning rules and the previous version is marked "Superseded".
- Given the user selects Accept, When confirming, Then the system launches the e-sign workflow and temporarily marks the proposal "Pending Signature" without altering asset holds yet.
- Given a response timer is set, When the timer expires with no action, Then the proposal status becomes "Expired", actions are disabled except "Reissue", and notifications are sent to the proposer.
E‑Sign Agreement Capture and Locking
- Given a proposal is Pending Signature, When all required signers are invited, Then each signer receives a secure link with signer identity bindings (name, email, role) and must consent to e-sign.
- Given a signer completes the signature, When the final required signature is collected, Then the system generates a signed agreement PDF including the terms snapshot, signer names, timestamps, IP addresses, and an SHA-256 document hash.
- Given the agreement is signed, When stored, Then the dispute record is updated with status "Agreed", the signed file and hash are stored immutably, prior proposal versions are locked read-only, and further edits are disabled.
- Given the agreement is signed, When users view the dispute, Then they can download the signed agreement and verify its hash matches the stored value.
Auto‑Apply Agreed Terms and Audit Trail
- Given an agreement is signed, When auto-apply runs, Then holds are lifted on only the specified assets, holdbacks are created with the agreed percentages/amounts and timelines, and partial releases are scheduled according to the terms.
- Given rights metadata changes are required, When auto-apply completes, Then the system updates affected assets' rights/royalty splits and effective dates without altering unrelated assets.
- Given any auto-apply step fails, When an error occurs, Then the system rolls back all changes, leaves holds unchanged, logs the failure with error codes, and notifies stakeholders.
- Given auto-apply succeeds, When complete, Then an audit event records who/what/when, the applied deltas, and references the signed agreement ID, and the dispute activity log reflects the transition to "Resolved".
Immutable Audit Trail & Reporting
"As a compliance lead, I want an immutable audit trail with exportable reports so that we can demonstrate exactly what happened and when."
Description

Record an append-only, immutable audit trail for all dispute actions, including user identity, role, timestamp, IP/device fingerprint, object references, and before/after states. Provide filters and exports (CSV/PDF) for legal review, and summary dashboards showing dispute age, timer status, proposal history, and affected releases. Link audit entries to TrackCrate’s version history to correlate asset changes with dispute events, ensuring a clean, defensible timeline that supports compliance and postmortems.

Acceptance Criteria
Append-Only Dispute Action Logging
Given a dispute action occurs (create_hold, propose_release, accept_proposal, reject_proposal, adjust_timer, add_comment, attach_evidence) When the action is committed Then an audit entry is appended capturing: action_type, dispute_id, related_object_ids, user_id, user_role, timestamp_utc (ISO 8601 Z, ms precision), requester_ip, device_fingerprint, before_state, after_state, entry_id And the write is atomic and cannot overwrite or delete prior entries And the API returns 201 with the new entry_id And subsequent reads show the entry in chronological order by timestamp_utc
Tamper-Evident Hash Chain
Given any new audit entry for a dispute When it is written Then content_hash = SHA-256 of the normalized payload and chain_prev_hash equals the content_hash of the immediately prior entry in that dispute (or null for the first) And attempts to update or delete any audit entry return 405 and are not applied And an integrity-check endpoint returns Pass when recomputing the chain over all entries and identifies the first mismatched entry when tampering is simulated
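The hash-chain and integrity-check behavior above can be sketched as follows. Function names are hypothetical; the key property is that each entry commits to both its own payload and the previous entry's digest, so the checker can report the first broken link:

```python
import hashlib
import json

def entry_hash(payload: dict) -> str:
    # Normalized payload: sorted-key JSON (one possible normalization).
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append(chain: list, payload: dict) -> None:
    prev = chain[-1]["content_hash"] if chain else None  # null for the first entry
    chain.append({"payload": payload,
                  "chain_prev_hash": prev,
                  "content_hash": entry_hash(payload)})

def first_mismatch(chain: list):
    """Return the index of the first tampered/broken entry, or None on Pass."""
    prev = None
    for i, entry in enumerate(chain):
        if entry["chain_prev_hash"] != prev or entry["content_hash"] != entry_hash(entry["payload"]):
            return i
        prev = entry["content_hash"]
    return None

chain = []
append(chain, {"action": "create_hold", "user_id": "u1"})
append(chain, {"action": "add_comment", "user_id": "u2"})
assert first_mismatch(chain) is None   # intact chain passes

chain[0]["payload"]["user_id"] = "mallory"  # simulate tampering
assert first_mismatch(chain) == 0           # checker pinpoints the first bad entry
```

Because entry 1 stores entry 0's original digest, rewriting entry 0 without re-signing the entire suffix is detectable, which is what makes the log tamper-evident rather than merely append-only by convention.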
Filterable Audit Reporting UI/API
Given audit data exists for multiple disputes When a reviewer applies filters (date_range, dispute_id, release_id, asset_id, user_id, role, action_type, timer_status, ip_cidr) Then the result set only includes entries matching all filters And results are pageable and sortable by timestamp_utc ascending/descending And the API responds within 2 seconds for up to 100k matching entries with indexed filters And counts (total, page_count) reflect the filtered set exactly
CSV Export for Legal Review
Given a reviewer requests CSV export for a filtered result set up to 100k entries When export is generated Then the CSV follows RFC 4180 with UTF-8 encoding, a header row, and quoted fields where needed And includes columns: entry_id, dispute_id, action_type, related_object_ids, user_id, user_role, timestamp_utc, requester_ip, device_fingerprint, before_state_json, after_state_json, content_hash, chain_prev_hash And the file name follows pattern trackcrate_audit_{scope}_{YYYYMMDDTHHMMSSZ}.csv And a SHA-256 checksum is provided alongside the file for verification
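A minimal sketch of the export shape above, using the standard library `csv` module (which handles RFC 4180 quoting) and pairing the file with a SHA-256 checksum. Column list is truncated to a few fields for brevity; the function name is hypothetical:

```python
import csv
import hashlib
import io

FIELDS = ["entry_id", "dispute_id", "action_type", "timestamp_utc"]

def export_csv(entries: list) -> tuple:
    """Build an RFC 4180-style CSV (CRLF line endings, header row, quoting
    where needed) and return it with a SHA-256 checksum for verification."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS,
                            quoting=csv.QUOTE_MINIMAL, lineterminator="\r\n")
    writer.writeheader()
    writer.writerows(entries)
    data = buf.getvalue().encode("utf-8")
    return data, hashlib.sha256(data).hexdigest()

rows = [{"entry_id": "e1", "dispute_id": "d1", "action_type": "create_hold",
         "timestamp_utc": "2025-01-01T00:00:00.000Z"}]
data, checksum = export_csv(rows)
assert data.startswith(b"entry_id,dispute_id,action_type,timestamp_utc\r\n")
assert len(checksum) == 64  # hex-encoded SHA-256
```

The reviewer re-hashes the downloaded bytes and compares against the published checksum; any match failure means the file was altered or truncated in transit.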
PDF Evidence Dossier Export
Given a reviewer requests PDF export for a dispute When export is generated Then the PDF includes a summary (dispute_id, created_at, age, latest_timer_status, affected_releases, proposal_history_count) And a chronological table of audit entries with the same core fields plus readable diffs for before/after where applicable And each page includes page X of Y and export timestamp in UTC And the last page includes a digest block with the CSV checksum and document hash And download completes for disputes with up to 5k entries under 60 seconds
Correlation with Version History
Given an audit entry references an asset or release affected by a dispute When the entry is viewed via UI or fetched via API Then it includes version_history_ref IDs linking to TrackCrate version records And the UI provides a one-click diff view between the referenced versions and the before/after states recorded in the audit entry And if the referenced version is missing, the entry displays a Missing Reference flag and the integrity-check endpoint reports the missing linkage
Dispute Dashboard Metrics
Given active and resolved disputes exist When a user opens the Dispute FastTrack dashboard Then the dashboard shows per-dispute: age in days/hours, current timer status (running, paused, expired), count of proposals with last action timestamp, and list of affected releases And global widgets display counts by timer status and average time-to-resolution And clicking any metric drills through to the filtered audit log for that dispute or cohort And metrics refresh at least every 60 seconds or on manual refresh
Dispute Roles & Access Control
"As a workspace owner, I want fine-grained dispute permissions so that only the right people can see evidence, place holds, and approve resolutions."
Description

Introduce role-based permissions and visibility rules specific to disputes. Define who can open disputes, place holds, submit proposals, view evidence bundles, or sign agreements. Limit participant access to only affected releases/assets and mask sensitive information for non-impacted collaborators. Enforce download restrictions for disputed items and ensure invitations/notifications only reach authorized parties. Integrate with existing workspace roles and per-release permissions to maintain least-privilege access throughout the dispute lifecycle.

Acceptance Criteria
Open Dispute Permission Enforcement
Given a user with Workspace Admin, Release Owner, or Rights Manager role has write permission on Release R When the user attempts to create a new dispute on Release R via UI or API Then the dispute is created, assigned an ID, and an audit entry records user, role, release ID, and timestamp And users without these roles or without write permission receive HTTP 403 and no dispute record is created
Place Hold on Assets — Role-Limited
Given an active dispute D on Release R and a user with Hold privilege (Workspace Admin, Release Owner, or Rights Manager) When the user places a hold on assets A1…An within Release R Then download endpoints for A1…An return HTTP 403 to non-privileged users within 5 seconds of hold submission And existing public or short links to A1…An are invalidated within 5 seconds and display a dispute hold message And an audit entry records who placed the hold, scope (A1…An), and timestamps
Evidence Bundle Visibility and Masking
Given a user is a participant on dispute D for impacted assets {Ai} When the user opens the evidence bundle for D Then the user can view approvals, comments, file hashes, and change history only for {Ai} And sensitive fields (royalty splits, private emails, internal legal notes) are masked unless the user has Workspace Admin, Rights Manager, or Finance role And exports of the evidence bundle exclude masked fields for non-privileged roles
Download Restrictions on Disputed Items
Given asset A is under an active dispute hold D When any non-privileged user attempts to download A directly or via a shortlink Then the download is blocked with HTTP 403 and a dispute hold message is displayed And if the user has Legal Review Download override (Admin or Legal) the system serves a watermarked file with a 24-hour expiring token and logs the event
Invitations and Notifications Restricted to Authorized Parties
Given a dispute D is created for Release R with participant list P When the system sends invitations or notifications for D (creation, updates, proposals) Then only users in P and users with workspace-level Legal Oversight receive the messages And collaborators not associated with Release R or D receive no messages And the audit log records the exact recipient list and delivery statuses
Role/Permission Change Propagates to Dispute Access
Given a user’s workspace role or Release R permission is changed or revoked When the change is saved Then the user’s dispute permissions and visibility related to R update within 60 seconds across UI and API And existing sessions are re-evaluated on next request; if access is revoked, dispute-scoped tokens are invalidated and the user is removed from D’s participant list with an admin notification
Proposal Submission and Agreement Signing Access Control
Given dispute D is in Negotiation state and a user is a Rights Manager, Release Owner, or a designated Signer for D When the user submits a partial release/holdback proposal or attempts to sign an agreement Then the action is permitted and recorded with user, role, and timestamp And users without these roles receive HTTP 403 and no proposal or signature is recorded And e-sign invitations are only sent to designated Signers, and finalization requires one signer per side when multiple parties exist

CodeSense

Smart code validation and autofill for ISRC/ISWC/IPI/UPC/GRID. Detects duplicates and format errors, cross‑checks catalog consistency, and proposes missing codes with clear confidence hints. One‑click writes codes to files and forms so releases don’t get rejected or delayed at the last mile.

Requirements

Real-time Code Format Validation
"As a label manager, I want instant validation of industry codes as I enter them so that I avoid format errors and fix issues before delivery."
Description

Provide immediate client- and server-side validation for ISRC, ISWC, IPI, UPC, and GRID during manual entry and bulk import. Enforce canonical formats, character sets, and known structural rules (length, country/registrant/prefix patterns, checksums where applicable), with inline messages and suggested fixes. Support paste-in normalization (strip spaces/dashes), input masks, and batch validation for CSV/JSON uploads. Expose a reusable validation service for TrackCrate forms (release, track, composition, contributor) and AutoKit setups. Emit machine-readable error codes and severity levels to enable gating (blocker/warning) and preflight checks before delivery.

Acceptance Criteria
Client-Side Real-Time Input Masks and Inline Validation
Given any TrackCrate form field for ISRC, ISWC, IPI, UPC, or GRID is focused When the user types or edits a value Then the input mask restricts entry to the allowed character set for that code type and uppercases letters in real time And invalid characters are rejected or immediately removed without moving the caret And an inline validation message appears within 150 ms when the value is invalid, naming the code type and issue And the message clears within 150 ms after the value becomes valid And the stored value is the canonical form (no spaces, no dashes) even if the UI displays mask separators for readability
Paste-in Normalization and Suggested Fixes
Given a user pastes a code containing spaces, dashes, dots, or lowercase (e.g., "us-qm5-18-00001", "T-123.456.789-0", " 012345678905 ") into a code field When the paste event occurs Then non-essential separators and whitespace are stripped, Unicode digits/letters are normalized to ASCII, and letters are uppercased And the field is revalidated immediately And if the normalized value matches a valid pattern, the UI shows a non-blocking tip with the normalized form And if a fix is possible (e.g., adding leading zeros or removing extra characters), a one-click "Apply fix" updates the field to the suggested canonical value
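The normalization step above fits in a few lines; the separator set (whitespace, hyphen, dot) and the use of NFKC folding for Unicode digits/letters are assumptions about the implementation.

```python
import re
import unicodedata

def normalize_code(raw: str) -> str:
    """Strip whitespace and separators, fold Unicode to ASCII, uppercase.

    A sketch of the paste-in normalization step described above.
    """
    folded = unicodedata.normalize("NFKC", raw)  # folds e.g. full-width digits to ASCII
    return re.sub(r"[\s\-.]", "", folded).upper()
```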
Server-Side Validation API with Machine-Readable Errors
Given a POST to /api/validation/codes with one or more code objects { codeType, value } When the request is processed Then the response returns 200 with an array mapping each input to { codeType, inputValue, canonicalValue, valid, issues[] } And each issue includes { errorCode, severity in ["blocker","warning"], message, field, suggestedFix? } And checksum validations are applied where applicable (e.g., UPC, ISWC) And structural segment rules (lengths, prefixes, country/registrant/year segments) are enforced per authoritative specifications for the codeType And the API responds within 300 ms for up to 50 codes and within 1000 ms for up to 1000 codes under nominal load And results are deterministic and idempotent for identical inputs
Bulk CSV/JSON Upload Validation with Row-Level Reporting
Given a CSV or JSON file with up to 50,000 records containing code fields is uploaded When validation runs Then each row receives a status in ["valid","valid-with-warnings","invalid"] And row errors include { rowNumber, field, value, errorCode, severity, message } And a file-level summary reports counts of valid, warnings, invalid, and duplicates And a downloadable error report is available in the same format as the upload (CSV/JSON) And validation completes within 2 minutes for 50,000 records under nominal load And rows with any "blocker" issues are gated from import; rows with only "warning" issues may be imported with explicit user confirmation
Cross-Catalog Duplicate and Consistency Checks
Given a user enters or uploads a code that already exists in the same catalog scope When the existing usage conflicts with the current entity (e.g., ISRC mapped to a different recording, UPC mapped to another release) Then a "blocker" duplicate error is raised with deep links to existing record(s) And if reuse is permitted by policy (e.g., same ISWC across multiple recordings), a "warning" is shown instead And duplicate detection is case-insensitive and ignores formatting separators And near-duplicate variants differing only by extraneous characters are flagged with a suggested canonical match
Reusable Validation Service Integration Across Forms and AutoKit
Given a TrackCrate form (release, track, composition, contributor) or an AutoKit setup renders a code field When the field initializes Then it binds to the shared validation service via a common interface and displays consistent messages, severities, and suggested fixes And feature flags can enable/disable specific code types without impacting others And if offline, client-side rules run immediately and server-side checks replay when connectivity is restored, reconciling any differences without losing user input
Preflight Delivery Gating for Codes
Given a user initiates a delivery/preflight for a release or bundle When the preflight runs Then all codes in scope are validated via the service and a checklist shows counts per code type by severity And deliveries with any "blocker" issues are prevented until resolved And deliveries with only "warning" issues can proceed after explicit acknowledgement And the preflight result, including canonical values and issues, is persisted and exportable as JSON for delivery logs
Global Duplicate & Collision Detection
"As a catalog admin, I want the system to flag duplicate or colliding codes so that I can resolve conflicts before releases are rejected by distributors."
Description

Detect exact and likely duplicate codes across the entire workspace catalog with configurable scope (per imprint, per label, global). Flag collisions where the same code is assigned to multiple conflicting assets (e.g., one ISRC on two different recordings) and surface likely duplicates using fuzzy matching and pattern heuristics. Provide a resolution workflow to reassign, merge, or override with justification, and maintain a history of collisions and resolutions. Integrate with search and batch tools to prevent re-use at entry time and during imports.
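A minimal sketch of the scoped lookup behind entry-time blocking follows. Real storage would be a unique database index per (scope, code type, canonical value); the class and method names here are illustrative.

```python
from collections import defaultdict

class CodeIndex:
    """In-memory sketch of scoped duplicate/collision lookup."""

    def __init__(self):
        # (scope, code_type, canonical) -> set of asset ids holding that code
        self._index = defaultdict(set)

    def check(self, scope, code_type, canonical, asset_id):
        holders = self._index[(scope, code_type, canonical)]
        if not holders or holders == {asset_id}:
            return "ok"          # unused, or re-applied to the same asset
        return "collision"       # same code assigned to a different asset

    def assign(self, scope, code_type, canonical, asset_id):
        self._index[(scope, code_type, canonical)].add(asset_id)
```

Note the scope key: a code that exists outside the current scope checks out as "ok" here, matching the "Code exists outside scope" informational case above.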

Acceptance Criteria
Real-time Duplicate Detection by Configurable Scope
Given a user enters an ISRC/ISWC/IPI/UPC/GRID in a code field and the duplicate-detection scope is set to Imprint, When the normalized code already exists on any asset within the same imprint, Then the Save action is blocked and an inline error lists up to 5 linked matches with imprint labels. Given the scope is Label, When the normalized code already exists on any asset under the same label (across imprints), Then the Save action is blocked and matches are shown with the label context. Given the scope is Global, When the normalized code exists anywhere in the workspace, Then the Save action is blocked and matches are shown with owning label/imprint badges. Given the code exists outside the current scope but not within it, When the user attempts to save, Then the Save action is allowed and an info hint warns "Code exists outside scope" with a link to view the external match(es). Given duplicates differ only by case, hyphens, spaces, or standard punctuation, When compared, Then they are treated as identical for duplicate detection. Performance: p95 duplicate check latency per entry action <= 300 ms; p99 <= 600 ms.
Collision Detection on Conflicting Asset Assignment
Given a normalized code is already assigned to Asset A and a user assigns the same code to Asset B (Asset B != Asset A), When the user attempts to save, Then a collision is created and the save is blocked pending resolution options. Given the same code is re-applied to Asset A, When saved, Then no collision is created and no duplicate warning is shown. Given a collision exists for a code, When viewing any involved asset, Then a persistent Collision banner displays with links to the resolution workflow and a count of involved assets. Given multiple assets share the same exact code, When displayed, Then all involved assets are listed in the collision detail with asset IDs and owners. Given scope is configured, When evaluating collisions, Then the system uses the configured scope to determine whether assignments conflict.
Fuzzy Likely Duplicate Suggestions with Confidence
Given a user enters a code that differs by a single character substitution, insertion, deletion, transposition, or missing separators from an existing code within the configured scope, When focus leaves the field, Then the system shows a "Likely duplicate" suggestion with confidence score >= 0.90 and a short explanation of the difference. Given the computed confidence score < 0.90, When evaluated, Then no suggestion is shown. Given the user accepts the suggestion, When confirmed, Then the field value is replaced with the suggested code and an audit event "autocorrected_from" is recorded with the original and new values. Given the user dismisses the suggestion, When dismissed, Then the suggestion is not re-shown for that exact field value during the current session. Given an exact duplicate is detected, When present, Then the blocking duplicate error takes precedence over fuzzy suggestions.
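The single-edit trigger for "Likely duplicate" suggestions can be sketched as a distance-at-most-one check covering substitution, insertion, deletion, and adjacent transposition; the confidence score itself is a separate computation not modeled here.

```python
def within_one_edit(a: str, b: str) -> bool:
    """True if a and b differ by at most one substitution, insertion,
    deletion, or adjacent transposition -- the likely-duplicate trigger."""
    if a == b:
        return True
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:
        # one substitution, or one adjacent transposition
        diffs = [i for i in range(la) if a[i] != b[i]]
        if len(diffs) == 1:
            return True
        return (len(diffs) == 2 and diffs[1] == diffs[0] + 1
                and a[diffs[0]] == b[diffs[1]] and a[diffs[1]] == b[diffs[0]])
    # one insertion/deletion: make a the shorter string
    if la > lb:
        a, b = b, a
    for i in range(len(a)):
        if a[i] != b[i]:
            return a[i:] == b[i + 1:]
    return True  # extra trailing character on the longer string
```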
Search and Batch Import Reuse Prevention
Given a user selects assets via Search to perform a batch code write for ISRC/ISWC/IPI/UPC/GRID, When they stage the changes, Then a preflight runs duplicate/collision checks within the configured scope and blocks commit for offending rows, listing conflicts with links to existing assets. Given a CSV import of up to 50,000 rows contains codes, When preflight runs, Then each row is labeled Pass, Duplicate, or Collision with conflicting asset IDs, and rows labeled Duplicate/Collision are not imported by default. Given the "Skip duplicates" option is enabled, When the import executes, Then only Pass rows are applied and a summary is shown with counts by status and a downloadable error report. Given the REST API import endpoint is used, When a row would create a duplicate or collision and no override is provided, Then the API responds 409 for that row with a machine-readable error including code type, normalized code, scope, and conflicting asset IDs. Performance: preflight for 50k rows completes within 5 minutes p95 and 8 minutes p99.
Resolution Workflow: Reassign, Merge, Override with Audit
Given a collision exists between two or more assets, When a user with Collision:Resolve permission opens the resolution modal, Then they can choose one of: Reassign (move the code to a selected asset and remove from others), Merge (merge assets into one canonical record with a single code), Override (keep all assignments and mark as intentional). Given any resolution is applied, When confirmed, Then justification text (minimum 15 characters) is required and the action is blocked until provided. Given Reassign is chosen, When applied, Then the code is removed from all non-selected assets and reassigned to the selected asset; all affected assets update immediately and the collision status becomes Resolved. Given Merge is chosen, When applied, Then a canonical asset is chosen/created, secondary assets are marked merged-into the canonical, and the code remains only on the canonical; references and shortlinks are updated. Given Override is chosen, When applied, Then all assets retain the code and the collision status is Resolved (Overridden) with the stored justification. Post-action: an immutable audit record includes action, actor, timestamp, scope, justification, before/after assignments, and impacted asset IDs.
Collision and Resolution History Log & Reporting
Given collisions occur, When viewing the Collision History page, Then users can filter by code type (ISRC/ISWC/IPI/UPC/GRID), status (Open/Resolved), date range, scope, and actor, and export the filtered view to CSV. Given a specific code is searched, When results load, Then the history shows all collision and resolution events for that code with deep links to the involved assets and actions. Given an override was used, When viewing its audit detail, Then the justification text, actor, timestamp, and resolution type are displayed and cannot be edited. Given audit retention is set to indefinite, When attempting to edit or delete audit entries via UI or API, Then the system denies the action and logs the attempt. Given there are open collisions, When loading the dashboard, Then a widget shows counts of Open Collisions and average time-to-resolution for the past 30 days.
Normalization & Comparison Rules for Dedupe
Given users enter codes in any case with or without separators, When normalized, Then comparison uses canonical form: uppercase, trimmed, separators removed, leading zeros preserved. Given codes include optional punctuation or whitespace, When compared for equality, Then these characters are ignored for exact duplicate detection. Given different code types have distinct canonicalization, When stored, Then the system stores both raw input and canonical form and uses canonical form for duplicate/collision detection across ISRC/ISWC/IPI/UPC/GRID. Given two canonical forms are identical within the configured scope, When compared, Then they are treated as exact duplicates.
Catalog Consistency Rules Engine
"As a metadata specialist, I want automated cross-checks between codes and entities so that catalog integrity is maintained without manual auditing."
Description

Implement a rule engine that cross-checks relationships among ISRC/ISWC/IPI/UPC/GRID and associated entities (tracks, compositions, contributors, releases). Provide out-of-the-box rules (e.g., each track must have one ISRC; a composition referenced by multiple tracks should share ISWC; contributor IPIs must match attached writers/publishers; a UPC can bundle multiple ISRCs) with configurable severity and overrides. Run rules on save, bulk import, and preflight, presenting a consolidated issues panel with filters and quick-fix actions. Allow teams to configure label-specific prefixes, reserved ranges, and exceptions.
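A rules engine of this shape can be sketched as a list of (name, severity, predicate) entries evaluated against an entity, with overrides skipping specific instances. The entity model and rule set here are deliberately reduced to two of the out-of-the-box rules; names and severities are illustrative.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Track:
    isrcs: list = field(default_factory=list)
    iswc: Optional[str] = None  # ISWC carried from the linked composition

# Each rule: (name, default severity, predicate over the entity).
RULES = [
    ("track_has_exactly_one_isrc", "Error", lambda t: len(t.isrcs) == 1),
    ("composition_has_iswc", "Warning", lambda t: t.iswc is not None),
]

def evaluate(track, overrides=frozenset()):
    """Return violations, skipping any overridden rule instances."""
    return [{"rule": name, "severity": sev}
            for name, sev, ok in RULES
            if name not in overrides and not ok(track)]
```

Overrides here are per-instance skips, which is the behavior the override workflow below relies on: one overridden instance becomes non-blocking while other instances of the same rule keep firing.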

Acceptance Criteria
Single Save Validation & Issues Panel
Given I edit an existing track's metadata and click Save When the rules engine validates the track and its linked composition and contributors Then validation completes within 500 ms for the single entity And any violations are displayed in the consolidated issues panel scoped to the saved track And each issue shows rule name, severity (Error/Warning/Info), entity, field, and suggested quick-fix And at least the following rules are evaluated: track has exactly one ISRC; linked composition has an ISWC; contributor with writer/publisher role has valid IPI format
Bulk Import Validation with Filters and Batch Quick-Fix
Given I import a CSV containing 500 mixed entities (tracks, compositions, contributors, releases) When the rules engine runs post-import Then a consolidated issues panel lists all violations across entities with total counts And filters are available for Severity, Rule, Entity Type, Label, and Issue Status (New/Overridden/Resolved) And I can multi-select at least 50 issues and apply a batch quick-fix (e.g., propagate an ISWC to all referencing tracks) in one action And validation completes within 3 minutes for 500 records And a CSV export of the current filtered issues is downloadable
Preflight Release Gate with Severity Enforcement
Given I initiate preflight for a release with a UPC When rules are evaluated with configured severities Then any Error-severity violations block preflight and show a blocking summary count and list And Warning-severity violations allow continuation only after explicit user confirmation to proceed with warnings And Info-severity items are logged but do not block And when all Error violations are resolved, preflight passes and marks the release as ready
Out-of-the-Box Rule Set Availability & Correctness
Given a fresh workspace with default CodeSense settings When I open the rule catalog Then the following rules are enabled with default severities: Track must have exactly one ISRC (Error); Composition referenced by multiple tracks must share one ISWC (Warning); Contributor with writer/publisher role must have valid IPI and match attached party (Error); A UPC can bundle multiple ISRCs but cannot be assigned directly to a track (Error); GRID if present must match valid format (Warning) And when I create data that violates each rule, the engine flags the correct rule with the specified default severity And when data complies with a rule, no issue is emitted for that rule
Label-Specific Prefixes, Reserved Ranges, and Exceptions
Given Label A is configured with ISRC prefix US-ABC, a reserved ISRC range US-ABC-24-00001..US-ABC-24-00100, and an exception list for specified legacy releases When ISRCs are assigned or validated for entities under Label A Then codes outside the configured prefix are flagged according to the rule's severity And codes within the reserved range are blocked from new assignment and flagged if detected on new entities And entities on the exception list bypass the specified rules while other rules still apply And these configurations affect only Label A and do not impact Label B
Override Workflow and Auditability
Given a rule violation exists for a contributor missing IPI at Error severity When an Editor adds an override with reason "Legacy contributor without IPI" and expiry 2026-12-31 Then the issue is marked Overridden with user, timestamp, reason, and expiry captured And subsequent validations treat this instance as non-blocking while other instances of the rule remain blocking And after the expiry date the override lapses and the issue returns to active Error state And an audit log entry is recorded and filterable by entity, rule, user, and date
Code Autofill with Confidence Scoring
"As a producer preparing a release, I want the system to propose missing codes with clear confidence so that I can complete metadata faster while trusting what’s auto-filled."
Description

Suggest missing codes using deterministic generators (label prefixes, registrant codes, next-in-sequence from reserved pools) and contextual lookups (prior releases, templates, linked compositions). Display confidence labels with reasons (e.g., high: next unused in reserved range; medium: inferred from similar track/version) and allow one-click acceptance or manual override. Support bulk suggestion for an entire release and ensure suggestions never claim codes already in use. Maintain transparent suggestion provenance for auditing and rollback.
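The deterministic next-in-sequence generator can be sketched as a scan over the reserved pool that skips codes already in the catalog. The parameter names and result shape are assumptions; note that nothing is marked used here, since only acceptance claims the code.

```python
def next_in_reserved_range(prefix: str, year: str, start: int, end: int, used: set):
    """Propose the next unused ISRC in a reserved pool, or None if exhausted.

    `prefix` is the country+registrant part (e.g. "USQM5"); `used` holds
    every code in the catalog, in canonical form, including drafts.
    """
    for n in range(start, end + 1):
        candidate = "%s%s%05d" % (prefix, year, n)
        if candidate not in used:
            return {"value": candidate, "confidence": "High",
                    "reason": "Next unused in reserved range"}
    return None
```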

Acceptance Criteria
High-Confidence Next-in-Sequence ISRC Suggestion from Reserved Pool
Given a track without an ISRC and a label configured with registrant code and a reserved sequence range for the current year, When the user opens CodeSense suggestions for the track, Then the system proposes the next unused ISRC within the reserved range with confidence "High" and reason "Next unused in reserved range". And Then the proposed ISRC conforms to ISRC format rules. And Then if the next sequential code is already used anywhere in the catalog (including drafts and archived items), the system skips it and proposes the next available unused code. And Then the suggestion is not marked as reserved/used until the user accepts it.
Medium-Confidence Contextual ISWC Suggestion from Linked Composition
Given a recording linked to a composition that has an ISWC on a prior release/version, When the recording is missing an ISWC, Then the system suggests that ISWC with confidence "Medium" and reason "Inferred from linked composition/prior version". And Then if multiple candidate ISWCs exist, the top-ranked candidate is shown and the candidate count is indicated. And Then users may accept in one click or select Manual Override to enter a different ISWC. And Then manual overrides are validated for format and uniqueness; duplicates are blocked with a conflict message referencing the existing record.
Bulk Suggest for Entire Release with Selective Acceptance
Given a release with up to 100 tracks and missing codes across ISRC/ISWC/IPI/UPC/GRID, When the user clicks "Suggest for All", Then the system generates suggestions for all missing codes in a single batch and displays per-item statuses (Suggested, Conflict, No Match) within 5 seconds. And Then items with user-locked fields are skipped and labeled Skipped (Locked). And Then the user can choose "Accept All High Confidence" which applies only to High-labeled suggestions and skips Medium/Low. And Then the UI shows a summary with counts of accepted, skipped, conflicts, and no-matches.
Duplicate and Collision Prevention at Acceptance
Given any suggested or manually entered code value, When the user attempts to accept or save the code, Then the system revalidates uniqueness across the entire catalog, including drafts and pending releases, and blocks acceptance if a duplicate exists. And Then the error message includes the conflicting code, code type, and a link to the owning asset. And Then in bulk acceptance, items with collisions are isolated and reported without failing the entire batch. And Then a "Re-suggest" action is offered to fetch an alternative valid code where a deterministic generator is available.
One-Click Acceptance Writes to Forms and File Metadata with Rollback
Given a visible suggestion for a code on a track or release, When the user clicks Accept, Then the code is written to the corresponding form fields and, where assets are stored in TrackCrate, to the file metadata tags (ID3/RIFF/MP4 as applicable) in the same operation. And Then the operation returns success and marks the suggestion as Applied within 2 seconds for single-item acceptance. And Then an Undo option is available for 15 minutes that restores the previous value in both the form and file tags and releases the code from the suggestion ledger. And Then the audit log records user, timestamp, code type, code value, action (accept/undo), affected assets, and outcome.
Suggestion Provenance Transparency and Export
Given any displayed suggestion, When the user opens Provenance details, Then the system shows generator type (Deterministic/Contextual), input parameters (e.g., label prefix, registrant code, sequence index or matched source IDs), data sources consulted, and factors contributing to the confidence score. And Then each suggestion exposes a stable Suggestion ID used to correlate audit and rollback entries. And Then provenance details can be exported as JSON for a selected item or the entire batch. And Then provenance records are retained for at least 1 year.
Confidence Labeling, Reason Display, and Gated Actions
Given a generated suggestion with a computed confidence score, When confidence thresholds are applied, Then the label maps as follows: High ≥ 0.85, Medium ≥ 0.60 and < 0.85, Low < 0.60, and the numeric score is displayed alongside the label. And Then a short human-readable reason is shown that matches the generator (e.g., "Next unused in reserved range" for High; "Inferred from similar track/version" for Medium). And Then bulk action "Accept All High Confidence" applies only to items labeled High and excludes Medium/Low. And Then the confidence label, score, and reason are included in the audit record upon acceptance.
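The threshold mapping in this criterion translates directly to code; the thresholds below are exactly the ones stated above.

```python
def confidence_label(score: float) -> str:
    """Map a numeric confidence score to its display label."""
    if score >= 0.85:
        return "High"
    if score >= 0.60:
        return "Medium"
    return "Low"
```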
One-Click Metadata Writeback
"As an engineer finalizing delivery assets, I want to write approved codes into files and forms with one click so that my exports are compliant and ready for distribution."
Description

Write approved codes back to TrackCrate forms and to asset files in common formats (MP3/ID3, FLAC/Vorbis, WAV/BWF/AXML, MP4, image/document XMP where applicable) using standards-compliant fields or configurable mappings. Support per-asset and bulk writeback with progress, conflict detection (e.g., existing tags), dry-run preview, and atomic rollback on failure. Preserve file integrity via non-destructive writes and checksums, and update version history so downstream collaborators receive the latest tagged assets. Expose CLI/API endpoints for automation.

Acceptance Criteria
Single-Asset One-Click Writeback (UI)
Given a user views an asset with approved codes in CodeSense and selects mapping profile "Standard" When the user clicks "Write back" for that asset Then TrackCrate updates the asset’s form fields with the approved codes And writes the codes to the file’s metadata using the active mapping profile And completes within 5 seconds for files ≤200 MB on a stable connection And preserves original media data (no change to audio/video frames or image pixels) And records a new asset version with a changelog listing fields written and mapping profile used And displays a success confirmation with a link to tag details and version diff
Bulk Writeback with Progress & Atomic Rollback (UI)
Given a selection of N ≥ 2 assets with approved codes and a chosen mapping profile When the user initiates bulk writeback Then a job starts showing overall percent, ETA, and per-file statuses (queued, writing, verifying, done, failed) And writes execute with a concurrency limit configurable in settings And if any file fails verification, all changes from the batch are rolled back and no asset remains partially updated And the summary reports counts of succeeded, rolled back, and failed with error codes and actionable messages And on full success, all N assets have updated forms and tags and each has a batch changelog entry referencing the job ID And the job supports pause/resume and retry for failed items only without duplicating successful writes
Dry-Run Preview and Change Diff
Given one or more assets and a selected mapping profile When the user runs Dry Run Then the system performs a read-only evaluation and produces a per-file diff of planned metadata changes And for each field it shows current value, new value, tag location (e.g., ID3, Vorbis, AXML, MP4, XMP), and mapping rule applied And highlights conflicts, missing required fields, and invalid values with specific reasons And estimates file size deltas (bytes) and confirms media-data expected unchanged And allows exporting the preview report as CSV and JSON And makes no modifications to files or forms
Conflict Detection and Resolution Policy
Given targeted files contain existing values that differ from approved codes and a conflict policy is selected (Overwrite, Merge, Skip) When writeback is executed Then conflicts are detected per field before writing and presented in the preview and final report And Overwrite replaces existing values; Merge appends non-duplicates while preserving canonical formatting; Skip leaves existing values unchanged And per-field user overrides in the UI are honored and logged And the final summary reports counts of overwritten, merged, skipped, and unchanged per field type And no unrelated metadata fields are modified
Standards-Compliant Field Mapping per Format with Configurable Overrides
Given mapping profile "Standard" is active When writing identifiers to supported formats (MP3/ID3, FLAC/Vorbis, WAV/BWF/AXML, MP4, XMP-capable images/docs) Then identifiers are written to standards-compliant fields when defined and to configured custom fields otherwise, per the active profile And a per-format conformance check passes (e.g., ID3v2 uses TSRC for ISRC; Vorbis uses key ISRC; WAV/BWF uses AXML/iXML per profile; MP4 uses standard/iTunes-compatible atoms per profile; XMP uses appropriate namespaces) And custom mapping profiles override defaults and are recorded in the changelog And a round-trip read immediately after write returns identical values and encodings to what was written And invalid values (e.g., ISRC length/charset violations) are rejected with 422 errors prior to any writes
File Integrity: Non-Destructive Writes and Checksums
Given each target file has a recorded file checksum and a media-data checksum that excludes metadata/tag regions When writeback completes Then files are updated using a safe-write process (temp file, fsync, atomic swap) without re-encoding media And the media-data checksum matches the pre-write value for every file And modified byte ranges are limited to tag regions and are logged per file And simulated interruption (power loss) during write does not corrupt the original or result file And pre- and post-write checksums are stored in job logs and version history
Automation via CLI/API Endpoints
Given an authenticated user with permissions and an API token When they POST /api/writeback/dry-run with asset IDs and mappingProfile Then a dry-run job is created and returns 202 with jobId and a polling URL; results match the UI dry-run When they POST /api/writeback with asset IDs, policy, mappingProfile, idempotencyKey, and options to update forms and files Then a writeback job is created; progress is available at GET /api/jobs/{jobId}; structured logs and webhooks are emitted on state changes And the CLI command `trackcrate writeback --assets <ids> --mapping <profile> --policy <mode> [--dry-run] [--idempotency-key <key>]` executes equivalently, returning non-zero on failure And idempotency ensures repeat submissions with the same key within 24 hours do not duplicate writes And errors return standard status codes (401/403/422/429/500) with machine-readable details
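The 24-hour idempotency guarantee above can be sketched as a keyed cache consulted before job creation; an in-memory dict stands in for whatever durable store the service would actually use.

```python
import time

class IdempotencyStore:
    """Sketch: dedupe repeat submissions with the same key within a TTL window."""

    def __init__(self, ttl_seconds: float = 24 * 3600):
        self.ttl = ttl_seconds
        self._seen = {}  # key -> (job_id, created_at)

    def submit(self, key: str, create_job) -> str:
        now = time.time()
        hit = self._seen.get(key)
        if hit and now - hit[1] < self.ttl:
            return hit[0]            # same key within window: reuse the job
        job_id = create_job()        # only called when no fresh entry exists
        self._seen[key] = (job_id, now)
        return job_id
```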
External Registry Verification & Sync
"As a rights manager, I want to verify our codes against external sources so that discrepancies are caught before distribution and royalty processing."
Description

Integrate optional verification against supported third-party endpoints (e.g., distributor portals, PRO/rights databases, DDEX/CSV interchange files) to confirm code validity and detect conflicts. Provide importers for partner exports (CSV/DDEX) that reconcile codes into TrackCrate, highlight discrepancies, and suggest updates. Cache verification results with timestamps, handle rate limits and failures gracefully, and allow users to whitelist trusted sources. All external interactions should be configurable per workspace with credentials stored securely.
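The rate-limit handling described above (honor Retry-After when the server sends it, otherwise exponential backoff with jitter) reduces to a small delay function; the base and cap values are assumptions.

```python
import random
from typing import Optional

def backoff_delay(attempt: int, retry_after: Optional[float] = None,
                  base: float = 1.0, cap: float = 60.0) -> float:
    """Seconds to wait before retry `attempt` (0-based).

    A server-supplied Retry-After always wins; otherwise use exponential
    backoff with full jitter, capped.
    """
    if retry_after is not None:
        return retry_after
    return random.uniform(0, min(cap, base * 2 ** attempt))
```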

Acceptance Criteria
Workspace Credential Configuration & Source Enablement
Given I am a workspace admin with Manage Integrations permission When I add credentials for a supported endpoint and click Test & Save Then the system validates connectivity and scopes without storing plaintext secrets and returns a success indicator And credentials are stored encrypted at rest and masked on subsequent views And only this workspace can use these credentials; other workspaces cannot access them And a configuration audit log entry records user, timestamp, source, scopes, and test result
On-Demand Code Verification with Caching
Given a track with ISRC/ISWC/IPI/UPC/GRID codes and at least one enabled source When I click Verify Now Then for each code, the system either returns a cached result less than 24 hours old or queries the external source And verification results include: status (Valid/Invalid/Not Found/Conflict), source, last_verified_at timestamp, and confidence score if provided And I can click Refresh to bypass cache for selected codes And results are persisted per code per source for auditability
Background Verification Job with Rate Limit Handling
Given a batch verify job of up to 5,000 codes is scheduled When a source returns HTTP 429 or rate-limit headers Then the system respects Retry-After, applies exponential backoff with jitter, and pauses requests to that source And no source receives requests exceeding documented limits And transient failures are retried up to 3 times; persistent failures are logged and surfaced with actionable messages And the job continues other sources/codes in parallel without blocking and produces a completion summary
CSV/DDEX Import and Reconciliation
Given I upload a supported partner export (CSV with header or DDEX ERN/XML) and select a mapping template When I run validation Then the system parses rows, maps fields to internal schema, and shows a preview with counts: valid, duplicate, invalid And on Continue, matches rows to existing assets via identifiers (ISRC, UPC, TrackCrate ID) with deterministic tie-break rules And suggested updates do not alter data until I click Apply And on Apply, updated codes/metadata are written, and a reconciliation report (CSV) is generated for successes and failures
Discrepancy Detection and Guided Resolution
Given external data conflicts with TrackCrate metadata for the same asset When I open the Review Discrepancies view Then conflicting fields are highlighted with current vs proposed values and source And I can Accept or Reject per field or Apply All And on Accept, changes are saved and, if enabled, written to embedded metadata of linked files in one click with a success or error per file And all actions are audit-logged with user, timestamp, source, and before/after values
Trusted Source Whitelisting & Auto-Apply Rules
Given a source is whitelisted with Auto-Apply rules for specific fields (e.g., ISRC, UPC) When new verification or import results arrive from that source Then matching suggestions for whitelisted fields are auto-applied without manual review And non-whitelisted fields remain pending review And auto-applied changes generate notifications and can be rolled back within 7 days And disabling the whitelist halts further auto-apply but preserves history
Audit Trail & Preflight Reporting
"As a project lead, I want a clear audit and preflight report of code decisions so that I can sign off confidently and trace any issues later."
Description

Record all code-related actions (entry, edit, suggestion generation, approval, writeback, overrides) with user, timestamp, source, and rationale. Provide exportable preflight reports (PDF/CSV) summarizing validation results, duplicates, rule violations, and final codes per release for internal sign-off and distributor submissions. Surface a change history per asset and per release with diff views and restore points to support compliance and investigations.

Acceptance Criteria
Audit Trail Completeness for Code Actions
Given a user performs any of [entry, edit, suggestion_generation, approval, writeback, override] on a code (ISRC/ISWC/IPI/UPC/GRID), When the action completes, Then an immutable audit record is created with fields: action_type, code_type, user_id, user_name, timestamp_utc (ISO 8601), source (UI/API/Auto), entity_type (asset/release), entity_id, fields_changed, previous_value, new_value, rationale (required for overrides), and request_id. Given an audit query is run by entity_id and time range, When executed, Then results are returned in reverse chronological order with pagination (page_size configurable up to 200) and the total count equals the number of actions performed in that range. Given a transient failure occurs during audit write, When the operation is retried with the same request_id, Then exactly one audit record exists for the action (idempotent) and no duplicates are present. Given an audit record is viewed, When accessed, Then it cannot be edited or deleted; any redaction is logged as a separate audit event recording redaction_actor, reason, and timestamp.
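The idempotent, immutable audit-write requirement above can be sketched like this (an in-memory stand-in; a real system would enforce uniqueness with a database index on `request_id`, and the class and field names here are assumptions):

```python
from datetime import datetime, timezone

class AuditLog:
    """Append-only audit store; retries with the same request_id are no-ops.

    Illustrative sketch of the idempotency rule: exactly one record exists
    per action, and there is deliberately no update or delete method.
    """
    def __init__(self):
        self._by_request_id = {}

    def record(self, request_id, action_type, entity_id, **fields):
        if request_id in self._by_request_id:        # retried write: return the
            return self._by_request_id[request_id]   # existing record unchanged
        entry = {
            "request_id": request_id,
            "action_type": action_type,
            "entity_id": entity_id,
            "timestamp_utc": datetime.now(timezone.utc).isoformat(),
            **fields,
        }
        self._by_request_id[request_id] = entry
        return entry
```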
Preflight Report Generation and Export
Given a release with up to 500 tracks, When Preflight is run, Then a report is produced within 30 seconds containing: release_id, release_title, run_timestamp_utc, operator, ruleset_version, code coverage summary (ISRC/ISWC/IPI/UPC/GRID), validation results, duplicates, rule violations, and final codes per asset with overall status (Pass/Fail). Given the report exists, When exported, Then both PDF and CSV are downloadable in UTF-8, have matching row counts and totals, and are named releaseID_preflight_YYYYMMDD_HHMMUTC.(pdf|csv). Given the PDF is generated, When opened, Then it contains page numbers and section headings; the CSV contains one row per asset-code pair with headers: asset_id, asset_title, code_type, code_value, status, notes. Given validation rules change, When Preflight is run, Then the report records the ruleset_version used and a link to the rules changelog.
Change History and Diff View per Asset
Given an asset exists, When viewing Change History, Then all code-related events display with timestamp_utc, actor, action_type, and a field-level diff (previous_value vs new_value) highlighting adds, deletes, and modifications. Given filters are applied by action_type, actor, or date range, When the filter is submitted, Then the list updates within 2 seconds for up to 1,000 events and the total/filtered counts are accurate. Given a specific history event is opened, When viewing the diff, Then values are shown side-by-side with whitespace-insensitive comparison, original case preserved, and masked sensitive IDs can be revealed only by users with the appropriate permission. Given pagination is used, When moving between pages, Then the scroll position and applied filters persist.
Restore Point Creation and Reversion
Given a writeback or bulk edit is initiated on a release or asset, When the operation starts, Then the system auto-creates a restore point capturing pre-change state and metadata (creator, timestamp_utc, scope, checksum) before applying changes. Given a user with Restore permission, When a manual restore point is created, Then it is recorded in the audit log and available in the restore point list for that entity. Given a restore point exists, When Revert is executed, Then the entity is restored atomically to the captured state, dependent denormalized fields are recalculated, and a restore audit event with rationale is logged. Given a revert encounters partial failure, When the process completes, Then changes are rolled back (no partial state), the user is notified with error details, and no restore point data is lost.
Distributor-Ready Preflight Sign-off
Given a preflight report with overall status Pass, When a reviewer with Sign-off permission approves, Then a digital sign-off is recorded (reviewer_id, reviewer_name, timestamp_utc, signature_hash) and the report is locked from edits. Given any code change occurs after sign-off, When Preflight is rerun, Then the prior sign-off is automatically invalidated, the invalidation is logged, and a new sign-off is required before export. Given a signed report is exported as PDF, When opened, Then it includes a signature summary page with a tamper-evident hash and a verification URL/QR that validates the report contents and signature. Given a report status is Fail, When a sign-off attempt is made, Then the action is blocked with an explanatory message and no sign-off record is created.
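One way the tamper-evident `signature_hash` above could be built is a digest binding the reviewer and timestamp to the exact report bytes. The spec names the field but not its construction, so this SHA-256 scheme is purely an assumption:

```python
import hashlib

def signature_hash(report_bytes: bytes, reviewer_id: str, timestamp_utc: str) -> str:
    """Tamper-evident hash over (report || reviewer || timestamp).

    Hypothetical construction: any change to the report contents, the
    reviewer, or the sign-off time produces a different hash.
    """
    h = hashlib.sha256()
    for part in (report_bytes, reviewer_id.encode(), timestamp_utc.encode()):
        h.update(part)
        h.update(b"\x00")  # domain separator so fields cannot bleed together
    return h.hexdigest()
```

The verification URL/QR would recompute this hash from the stored report and compare it to the recorded value.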
Duplicate and Rule Violation Summary Accuracy
Given a test release with X known duplicates and Y known rule/format violations, When Preflight is run, Then the report lists exactly X duplicates and Y violations and each item links to the implicated asset(s) or code(s). Given a summary item is clicked in-app, When navigated, Then the asset or release detail opens with the specific validation details scrolled into view and highlighted. Given warning suppressions are configured, When Preflight is run, Then suppressed items are excluded from headline totals but included in an appendix with suppression_reason and rule_id.

Fingerprint Merge

Acoustic fingerprinting clusters near‑duplicate takes and bounces, flags the best keeper, and safely merges duplicates. Consolidates comments, approvals, and shortlinks to the chosen master so teams avoid version confusion and catalog bloat.

Requirements

Acoustic Fingerprint Generation & Indexing
"As a project engineer, I want TrackCrate to fingerprint every audio file I add so that potential duplicates can be found reliably across formats and exports."
Description

Generate robust acoustic fingerprints for all uploaded and existing audio assets (stems, mixes, bounces) on ingest and via backfill jobs. Normalize inputs to handle channel count, sample rate, time offsets, and lossy encoding so fingerprints remain comparable across exports. Store fingerprints in a deduplicated, queryable index scoped to workspace and project, with metadata (duration, codec, checksum) to aid matching. Run processing as idempotent, retryable background tasks with resource throttling to protect upload performance. Expose health metrics and failure reporting for observability.

Acceptance Criteria
Ingest Fingerprinting on New Uploads
Given an authenticated user uploads an audio asset (<=200 MB) to a project, When the upload completes, Then a fingerprinting job is enqueued within 5 seconds and the job ID is associated with the asset. Given the job runs, When processing completes, Then a fingerprint record is created in the index with fields: fingerprintId, assetId, workspaceId, projectId, durationSec, codec, sampleRate, channels, byteChecksum, processingVersion, createdAt. Given the asset has been fingerprinted, When the system is queried by assetId, Then exactly one fingerprint record is returned for that asset.
Backfill Job for Existing Assets
Given a workspace with existing audio assets, When a backfill job is started scoped to a workspace or project, Then 100% of assets without fingerprints are enqueued within 2 minutes for up to 10,000 assets. Given the backfill runs, When it encounters an asset that already has a fingerprint, Then it skips processing without altering the existing record. Given the backfill is interrupted, When it is resumed, Then it continues from the last processed offset without duplicating fingerprint records. Given the backfill runs under normal capacity, When one worker processes assets, Then it achieves a sustained throughput of at least 300 assets/hour for files <=100 MB.
Normalization Across Formats and Offsets
Given a reference WAV file, When variants are created by converting sample rate (44.1/48/96 kHz), bit depth (16/24-bit), channel layout (mono/stereo), and lossy encoding (MP3 96 kbps, AAC 128 kbps, Ogg 160 kbps), Then each variant’s top match against the workspace index is the reference asset with a match score >= 0.98. Given the same content with leading or trailing silence added or up to 10 seconds trimmed at start or end, When fingerprinting is performed, Then the variant matches the reference with a match score >= 0.98 and an aligned offset reported within ±0.5 seconds. Given unrelated audio from the same workspace, When matched against the index, Then the match score is <= 0.90 to avoid false positives.
Scoped, Deduplicated Fingerprint Index
Given two assets in the same project with acoustically identical content, When fingerprinting completes, Then both reference the same fingerprintId and the index stores one canonical fingerprint record with a reference count of 2. Given two assets with identical content in different workspaces, When querying the index, Then records are isolated per workspace and no cross-workspace matches are returned. Given a fingerprint record exists, When queried by project scope, Then only fingerprints within that project are returned. Given a fingerprint is stored, When retrieving it, Then metadata fields (durationSec within ±0.1 s of the decoded media, codec, sampleRate, channels, byteChecksum) are present and non-null.
Idempotent and Retryable Processing
Given the same asset triggers fingerprinting multiple times due to duplicate events, When the jobs run concurrently or sequentially, Then exactly one fingerprint record exists for the asset and extra jobs complete as no-ops. Given a transient storage or decoding error occurs during processing, When the job retries, Then it uses exponential backoff and succeeds within 5 attempts without creating duplicate records. Given a non-retryable error (e.g., unsupported codec) occurs, When max attempts are reached, Then the job is marked failed with errorCode and errorMessage, and the asset state is updated to fingerprint_failed.
Resource Throttling Protects Upload Performance
Given 10 concurrent uploads of 100 MB assets, When fingerprinting is enabled, Then p95 upload completion time increases by no more than 5% compared to fingerprinting disabled. Given system CPU exceeds 70% or the upload queue backlog exceeds 50 requests, When background workers are active, Then worker concurrency is reduced within 5 seconds to maintain CPU <=70% and backlog <=50 for at least 95% of the next 5-minute window. Given throttling engages, When new uploads arrive, Then the upload API responds with HTTP 2xx and no 429/5xx errors caused by fingerprint workers.
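A control rule for the throttling thresholds above might look like this (a hypothetical policy sketch; the halve-on-breach / creep-back-up behavior and the `max_workers` default are assumptions):

```python
def adjusted_concurrency(current, cpu_pct, backlog, max_workers=8,
                         cpu_limit=70.0, backlog_limit=50):
    """Scale fingerprint-worker concurrency to protect upload performance.

    Illustrative rule: halve workers as soon as either limit (CPU > 70% or
    backlog > 50) is breached, then recover by one worker at a time while
    both signals are healthy.
    """
    if cpu_pct > cpu_limit or backlog > backlog_limit:
        return max(1, current // 2)           # back off quickly, keep >= 1 worker
    return min(max_workers, current + 1)      # recover gradually
```

Backing off multiplicatively but recovering additively keeps the system from oscillating around the limits.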
Health Metrics and Failure Reporting
Given the service is running, When the metrics endpoint is requested, Then it returns counters and gauges for jobs.enqueued, jobs.in_progress, jobs.succeeded, jobs.failed, processing.duration_ms p50/p95/p99, queue.depth, and backfill.progress with non-negative values. Given a fingerprint job fails, When querying the asset via API or UI, Then the last failure timestamp, errorCode, and errorMessage are visible to users with project access. Given the rolling 5-minute failure rate exceeds 2% for fingerprint jobs, When monitoring evaluates, Then an alert is emitted to the configured channel within 1 minute.
Near-duplicate Clustering & Thresholds
"As a label manager, I want suspected duplicates to be clustered automatically so that I can review and act on them in one place."
Description

Compute similarity scores between fingerprints to group near-duplicate takes and bounces into clusters, resilient to small edits like trimmed intros/outros or gain changes. Provide configurable similarity thresholds at workspace and project levels with sensible defaults, and produce a confidence score per pair and per cluster. Support segment alignment to account for leading silence or time shifts. Persist cluster membership and update incrementally as new assets arrive.

Acceptance Criteria
Similarity Resilience to Minor Edits
Given two assets derived from the same source with up to 2.0 seconds of leading or trailing silence added/removed and a uniform gain change within ±3 dB, When fingerprints are computed and compared with default settings, Then the pairwise similarity confidence is ≥ the active near-duplicate threshold and they are assigned to the same candidate cluster. Given two assets where the computed pairwise similarity confidence is < the active near-duplicate threshold, When clustering runs, Then they are not assigned to the same cluster. Given identical assets, When compared, Then the pairwise similarity confidence equals 1.00 ± 0.01.
Threshold Configuration Defaults and Overrides
Given a new workspace, When no custom configuration is provided, Then the workspace near-duplicate similarity threshold defaults to 0.92. Given a new project in that workspace, When no project override is set, Then the project inherits the workspace threshold. Given a project owner updates the project threshold to a value between 0.80 and 0.99, When saved, Then the new value is stored and used for all subsequent clustering and a re-evaluation of existing clusters is triggered. Given both workspace and project thresholds exist, When evaluating pairs in that project, Then the project threshold is used. Given an invalid threshold outside [0.80, 0.99], When saving, Then validation fails and the previous value remains unchanged.
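The default/inheritance/override rules above can be captured in a few lines (a minimal sketch; function names are illustrative):

```python
DEFAULT_THRESHOLD = 0.92
MIN_THRESHOLD, MAX_THRESHOLD = 0.80, 0.99

def validate_threshold(value: float) -> float:
    """Reject values outside the allowed [0.80, 0.99] range."""
    if not (MIN_THRESHOLD <= value <= MAX_THRESHOLD):
        raise ValueError(f"threshold must be in [{MIN_THRESHOLD}, {MAX_THRESHOLD}]")
    return value

def effective_threshold(workspace_threshold=None, project_threshold=None) -> float:
    """Project override wins; otherwise inherit the workspace; otherwise 0.92."""
    if project_threshold is not None:
        return project_threshold
    if workspace_threshold is not None:
        return workspace_threshold
    return DEFAULT_THRESHOLD
```

On a failed `validate_threshold`, the caller keeps the previous stored value, matching the "validation fails and the previous value remains unchanged" criterion.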
Segment Alignment for Time Shifts
Given two assets where one has up to ±5.0 seconds of leading silence or time offset relative to the other, When fingerprints are aligned before comparison, Then the computed pairwise similarity confidence differs by no more than 0.02 from the confidence of the unshifted pair and they are assigned to the same cluster under the default threshold. Given two assets that are identical except for 3.0 seconds of inserted silence at the beginning, When compared with alignment enabled, Then they meet or exceed the active near-duplicate threshold.
Cluster Formation and Confidence Calculation
Given a set of assets within a project and their pairwise similarity confidences, When clustering runs, Then any two assets are placed in the same cluster if there exists a path connecting them where every edge confidence ≥ the project's active threshold (single-linkage). Given a cluster with N members and M qualifying pairwise confidences (edges) ≥ threshold, When computing cluster_confidence, Then cluster_confidence equals the median of those M confidences and is stored/displayed rounded to two decimal places in [0,1]. Given a pair of assets in the same cluster, When retrieving cluster details, Then their pairwise confidence used to justify membership is available.
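The single-linkage rule and median cluster confidence above can be sketched with a union-find over qualifying edges (illustrative; data shapes are assumptions):

```python
from statistics import median

def single_linkage_clusters(asset_ids, pair_confidences, threshold):
    """Cluster assets whose pairwise confidence path meets the threshold.

    pair_confidences maps (asset_a, asset_b) -> confidence in [0, 1]. Two
    assets share a cluster iff a path of edges with confidence >= threshold
    connects them (single-linkage). Returns (clusters, confidences) where
    each cluster's confidence is the median of its qualifying edges,
    rounded to two decimal places.
    """
    parent = {a: a for a in asset_ids}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    for (a, b), conf in pair_confidences.items():
        if conf >= threshold:
            parent[find(a)] = find(b)       # union the two components

    clusters = {}
    for a in asset_ids:
        clusters.setdefault(find(a), set()).add(a)

    confidences = {}
    for root, members in clusters.items():
        edges = [c for (a, b), c in pair_confidences.items()
                 if c >= threshold and a in members and b in members]
        confidences[root] = round(median(edges), 2) if edges else None
    return list(clusters.values()), confidences
```

For example, with edges a–b at 0.95, b–c at 0.93, and c–d at 0.50 under a 0.92 threshold, {a, b, c} form one cluster (confidence 0.94) and d stands alone.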
Persistence and Incremental Cluster Updates
Given existing clusters and unchanged thresholds, When the system reprocesses fingerprints without file changes, Then cluster membership and cluster_ids remain unchanged. Given a new asset is added to a project, When clustering updates run, Then the system reuses stored fingerprints for existing assets, computes scores only for the new asset against candidates in the same project, and updates cluster membership accordingly. Given a new asset connects two existing clusters via edges with confidence ≥ threshold, When clusters merge, Then the resulting cluster retains the lowest existing cluster_id and the deprecated id is recorded as an alias to preserve references.
Confidence Score Storage and Access
Given any two assets compared, When requesting their relationship, Then a pairwise_confidence in [0,1] with two-decimal precision and a last_computed_at timestamp are available. Given any cluster, When retrieving it, Then cluster_confidence in [0,1] with two-decimal precision is included, along with member asset_ids and each member's pairwise_confidence to the cluster representative. Given a recalculation occurs due to a threshold change or new asset arrival, When retrieving confidences, Then last_computed_at reflects the most recent computation.
Keeper Recommendation & Confidence Explanation
"As an A&R lead, I want the system to suggest the best master take with reasons so that I can confirm quickly without second-guessing."
Description

Recommend a single “keeper” asset within each cluster using deterministic rules and weighted signals such as approval count, metadata completeness, audio quality (bitrate/sample rate), recency, and existing shortlink/AutoKit usage. Generate an overall confidence score with human-readable reasons to support decision-making. Allow users to override the keeper selection before merge and persist overrides for future clusters in the same project.

Acceptance Criteria
Weighted Keeper Auto-Selection
Given a fingerprint cluster within a project containing two or more candidate assets with signals (approval count, metadata completeness, audio bitrate and sample rate, recency, shortlink/AutoKit usage) When the recommendation engine runs on the cluster Then exactly one asset is marked as Keeper - Recommended And each candidate is assigned a numeric score between 0 and 100 with two-decimal precision And the selected keeper is the asset with the highest computed score And the recommendation result includes per-asset score and rank in the cluster And rerunning on identical inputs returns the same keeper and the same scores
Deterministic Tie-Breaking for Equal Scores
Given two or more assets in a cluster have equal scores within 0.0001 When the recommendation engine selects a keeper Then the tie is broken deterministically in this order: higher bitrate, higher sample rate, higher approval count, more recent lastUpdated timestamp, lexicographically smaller assetId And identical inputs always produce the same selected keeper
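The tie-break order above maps naturally onto a composite sort key (a sketch; field names are assumed, and rounding the score to four decimals approximates the 0.0001 equality band):

```python
from datetime import datetime

def select_keeper(candidates):
    """Pick the keeper deterministically per the spec's tie-break order.

    Each candidate is assumed to be a dict with assetId, score, bitrate,
    sampleRate, approvals, and lastUpdated (ISO 8601 string). Negated
    values sort 'higher wins' first; assetId sorts ascending last.
    """
    def key(c):
        return (
            -round(c["score"], 4),                                  # higher score
            -c["bitrate"],                                          # then higher bitrate
            -c["sampleRate"],                                       # then higher sample rate
            -c["approvals"],                                        # then more approvals
            -datetime.fromisoformat(c["lastUpdated"]).timestamp(),  # then most recent
            c["assetId"],                                           # then smallest assetId
        )
    return min(candidates, key=key)
```

Because the key is a pure function of the inputs, identical inputs always produce the same keeper, satisfying the determinism criterion.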
Confidence Score and Human-Readable Reasons
Given a recommended keeper has been computed When the confidence score and reasons are generated Then confidence is a value between 0 and 100 with two-decimal precision And at least three reasons are returned, each citing a specific signal and whether it raised or lowered the score And reasons include the top positive and top negative contributors And reasons contain no secret URLs or access tokens And the output includes selectedAssetId, confidence, reasons[], and perAssetScores[]
User Override Before Merge and Persistence in Cluster
Given a user with edit permission views a cluster with a recommended keeper When the user selects a different asset as the keeper and confirms Then the newly selected asset is marked as Keeper - Overridden And the previous recommendation is recorded with timestamp and userId And the override persists on subsequent views of the same cluster And any subsequent merge uses the overridden keeper
Persisted Override Applied to Future Clusters in Same Project
Given an override was saved for asset A as the keeper in project P And a new fingerprint cluster in project P includes asset A When the recommendation runs for the new cluster Then asset A is preselected as Keeper - Overridden (persisted) And reasons include a line indicating user override persistence And if asset A is not present in the new cluster, no override is applied
Resilience to Missing or Incomplete Signals
Given some assets in a cluster are missing signals (approvals, bitrate, sample rate, metadata completeness, usage) When the recommendation engine computes scores Then missing numeric signals are treated as 0, missing boolean signals as false, and missing categorical signals as lowest rank And the engine completes without error And reasons note which signals were missing and how they affected confidence And the computed confidence is reduced when key signals are missing
API Exposure and Performance of Recommendation
Given an authorized API client requests the keeper recommendation When it calls GET /projects/{projectId}/clusters/{clusterId}/keeper-recommendation Then the response is 200 with JSON containing selectedAssetId, confidence, reasons[], perAssetScores[] And the endpoint responds within 500 ms for clusters up to 50 assets And unauthorized requests receive 403 and non-existent clusters return 404
Safe Merge & Consolidation with Undo
"As a producer, I want duplicates to merge safely into one master so that history and context are preserved and I can revert if needed."
Description

Merge all duplicate assets in a cluster into the chosen keeper by consolidating comments, approvals, rights metadata, tasks, and analytics while preserving referential integrity across TrackCrate. Archive source files, update all internal references, and maintain a complete audit log of changes. Provide a reversible “undo merge” within a retention window that restores original assets and relationships without data loss.

Acceptance Criteria
Keeper-Centric Consolidation of Duplicates
Given a fingerprinted duplicate cluster with a selected keeper asset When the user executes Merge Then all comments from source assets are moved to the keeper with original author, timestamps, and threading preserved And duplicate comments (identical body+author+timestamp within ±2s) are deduplicated And approvals from sources are merged; conflicting approvals by the same user resolve to the latest decision by timestamp And rights metadata is consolidated using rules: for scalar fields, keep keeper's non-null value; if keeper is null, take the most recently updated non-null source value; for multi-select fields, union all unique values; for controlled vocabulary conflicts, prefer keeper And tasks associated with sources are reassigned to the keeper, preserving status, assignees, and due dates And analytics counters (plays, downloads, shortlink clicks) are summed onto the keeper and preserved for Undo in the audit payload And the merge reports a summary delta including counts of comments moved, approvals merged, tasks reassigned, analytics aggregated
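The rights-metadata consolidation rules above (keeper's non-null scalar wins, newest non-null source fills gaps, multi-select fields union) can be sketched as follows; the dict shapes and field names are assumptions:

```python
def merge_rights_metadata(keeper, sources, multi_select_fields):
    """Consolidate rights metadata onto the keeper per the merge rules.

    keeper: dict of field -> value. sources: list of dicts, each with
    'fields' (field -> value) and a sortable 'updated_at' timestamp.
    """
    merged = dict(keeper)
    recent_first = sorted(sources, key=lambda s: s["updated_at"], reverse=True)
    all_fields = set(keeper) | {f for s in sources for f in s["fields"]}
    for field in all_fields:
        if field in multi_select_fields:
            values = set(keeper.get(field) or [])       # union all unique values
            for s in sources:
                values |= set(s["fields"].get(field) or [])
            merged[field] = sorted(values)
        elif keeper.get(field) is None:
            for s in recent_first:                      # newest non-null source wins
                v = s["fields"].get(field)
                if v is not None:
                    merged[field] = v
                    break
    return merged
```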
System-Wide Referential Integrity Update
Given internal references to any source asset (AutoKit press pages, stem player playlists, projects, tasks, collections, shortlinks, API consumers) When Merge completes Then all internal foreign keys are updated to the keeper asset ID atomically And embed codes and public shortlinks that targeted a source respond with HTTP 301 to the keeper's canonical URL within 1 minute And no 404/500 responses occur for previously valid shortlinks during or after merge And the number of updated references equals the pre-merge count of references to sources, as recorded in the audit summary And old source asset API GET requests for authorized users return 410 Gone with a link to the merge audit event
Source File Archival and Access Controls
Given Merge completes When inspecting storage and access Then binary files of source assets are moved to archival storage with checksum parity verified And archived sources are hidden from all end-user lists and search results And any existing expiring or watermarked download links tied to sources are invalidated within 60 seconds And only org Admins can access archived binaries via the audit log for the purpose of Undo And storage accounting reflects only the keeper's active files post-merge
Audit Log Completeness and Integrity
Given a Merge is performed When viewing the audit log entry Then the event records: cluster_id, keeper_asset_id, source_asset_ids, actor_id, timestamp, fingerprint_version, and a field-level before/after diff for rights metadata And it includes lists of moved comment IDs, merged approval IDs, reassigned task IDs, counts of reference updates by object type, and analytics aggregation totals per metric And the audit entry is immutable (read-only, non-deletable) and time-synchronized And a cryptographic checksum of the event payload is stored to detect tampering And performing Undo creates a linked UndoMerge event referencing the original Merge event_id
Undo Merge Within Retention Window
Given a Merge completed and the current time is within the configured retention window When an authorized user triggers Undo Merge from the audit entry and confirms Then the original source assets and their IDs are restored to active status with their binaries, metadata, comments, approvals, tasks, analytics, and shortlinks exactly as they were pre-merge And references that were automatically redirected by the merge are reverted to their pre-merge targets; references created after the merge remain unchanged And comments, approvals, and tasks created on the keeper after the merge remain on the keeper and are not duplicated to sources And the keeper's aggregated analytics are de-aggregated back to their original assets with no loss And the system returns a success status and records a complete UndoMerge audit entry
Authorization and Confirmation for Merge/Undo
Given a user attempts a Merge or Undo, When the user lacks the required role or permission, Then the action is blocked with HTTP 403 and no side effects occur. When the user has permission, Then the UI requires explicit confirmation showing a summary of affected items (counts by type) and the retention window for Undo. And both actions require an idempotency key; repeated submissions within 24 hours do not create duplicate events. And each successful action emits audit/security notifications to configured channels.
Transactional Safety and Failure Handling
Given any step of the merge pipeline (metadata consolidation, reference update, archival, analytics aggregation) fails or times out When the operation cannot complete within the transaction boundary Then all changes are rolled back, leaving pre-merge state intact And the user receives a clear error with a correlation ID and suggested retry And a MergeFailed audit entry is recorded including the failing stage and partial counts And a retry using the same idempotency key completes successfully without duplicating work
Shortlink and AutoKit Rebinding
"As a marketing manager, I want all links and press pages to follow the keeper automatically so that campaigns continue uninterrupted after a merge."
Description

Rebind existing shortlinks and AutoKit press pages to the keeper after a merge, preserving click analytics, UTM parameters, and expiration behavior. Create redirects from deprecated asset URLs and ensure private stem player and watermarked download endpoints resolve to the keeper without breaking access controls or watermark policies. Validate rebinding in a post-merge check and surface any failures for remediation.

Acceptance Criteria
Rebind Shortlinks to Keeper With Analytics and UTM Preservation
Given a project with at least two shortlinks pointing to deprecated assets and a keeper selected post-merge When rebinding executes Then each shortlink resolves to the keeper resource without changing its shortlink code And historical click analytics for each shortlink remain unchanged And subsequent clicks after rebinding are attributed to the same shortlink IDs And inbound UTM parameters are preserved verbatim in the redirect chain and recorded in analytics And each shortlink’s original expiration timestamp and policy remain in effect
AutoKit Press Page Rebinding and Asset Consistency
Given an AutoKit press page bound to a deprecated asset When rebinding runs post-merge Then the press page URL remains unchanged And page metadata (title, ISRC, artwork, credits) reflects the keeper And all media embeds and share links point to keeper assets And the page renders without 4xx/5xx errors and without missing assets And the canonical tag remains unchanged And existing expiration behavior for the page is preserved
Private Stem Player and Watermarked Downloads Maintain Access Controls
Given a valid signed stem-player session and watermark policy tied to a pre-merge asset When accessing via a pre-merge shortlink after rebinding Then the private stem player streams the keeper stems And access control enforcement remains unchanged (authorized users succeed; unauthorized users receive 403) And the audio watermark user/session identifier is preserved across rebinding And expiring signed URLs continue to honor their original expiry times And download rate limiting and audit logging remain active
Redirects From Deprecated Asset URLs to Keeper
Given direct requests to deprecated asset file URLs, artwork URLs, and API asset endpoints When those URLs are requested after rebinding Then the response redirects to the corresponding keeper endpoints And HTTP status is 301 for public, non-signed endpoints and 307 for signed download endpoints And the original query string (including UTM and signature parameters) is preserved And no redirect loops occur And CORS and content-type headers are appropriate for the target resource
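The redirect rules above (301 for public endpoints, 307 for signed downloads, query string preserved verbatim) can be sketched like this; the function name and URL shapes are illustrative:

```python
from urllib.parse import urlsplit, urlunsplit

def redirect_for(deprecated_url, keeper_path, signed):
    """Build the redirect for a deprecated asset URL.

    301 for public, non-signed endpoints; 307 for signed download endpoints
    (307 preserves the request method and body). The original query string,
    including UTM and signature parameters, is carried over unchanged.
    """
    parts = urlsplit(deprecated_url)
    status = 307 if signed else 301
    location = urlunsplit((parts.scheme, parts.netloc, keeper_path,
                           parts.query, parts.fragment))
    return status, location
```

Using 307 (rather than 301/302) for signed endpoints matters because clients must not downgrade a POST to a GET or drop signature semantics mid-redirect.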
Post-Merge Rebinding Validation and Failure Surfacing
Given a merge that triggers rebinding When the post-merge validation job completes Then it verifies: shortlink target mapping, analytics continuity, AutoKit asset references, redirect responses, and access-control checks And if any check fails, the merge is flagged "Rebind Failed" with a list of failing checks and impacted links And a remediation task is created with actionable error messages And a notification is sent to project owners And no irreversible changes are committed until validation passes or an explicit override is recorded
Idempotency, Rollback, and Performance SLAs
Given a rebinding operation is executed more than once for the same merge When the operation is retried Then the result is idempotent with no duplicate redirects or analytics fragmentation And a single-click rollback restores all bindings and endpoints to their pre-merge state within 30 seconds And 95th percentile rebinding completion time is <= 30 seconds for up to 500 shortlinks and one AutoKit page And global propagation results in >= 99.9% successful resolutions within 60 seconds of completion And link resolution error rate (4xx/5xx excluding 403 due to access control) remains <= 0.1% for 10 minutes post-rebind
Duplicate Review UI & Bulk Actions
"As a project coordinator, I want an efficient review screen for duplicates so that I can resolve version chaos quickly across releases."
Description

Provide a review interface that lists clusters with waveforms, metadata diffs, quality badges, and activity summaries, enabling users to confirm the keeper, adjust thresholds, or exclude assets. Support bulk merge, defer, or ignore actions with preview of impacted shortlinks and pages. Include accessibility, keyboard shortcuts, and responsive design to streamline high-volume cleanup.

Acceptance Criteria
Cluster List: Waveforms, Metadata Diffs, Quality Badges, Activity Summary
- Given a workspace contains 25 duplicate clusters (2–6 assets each), when the Duplicate Review UI is opened, then each cluster displays for every asset: an inline waveform thumbnail, highlighted metadata diffs (title, version, ISRC, BPM, key, bitrate), a quality badge, and an activity summary showing counts of comments, approvals, and shortlinks.
- Given clusters are sorted by fingerprint confidence, when the list renders, then clusters with a flagged keeper appear above others and the proposed keeper is visually marked and positioned first within its cluster.
- Given a standard network (150 ms RTT, 10 Mbps), when loading the initial viewport (up to 10 clusters), then time to interactive is ≤ 2.5 seconds and all waveform placeholders are visible with progressive loading; no broken media or missing badges.
Keeper Confirmation with Merge Impact Preview
- Given a cluster has a proposed keeper and at least one duplicate, when the user selects "Preview merge," then a panel displays: number of assets to be merged, list of assets to be archived, total comments/approvals to be consolidated (with deduplication rules), number of shortlinks to be redirected, and AutoKit pages to be updated.
- Given the merge preview is visible, when the user expands "Shortlinks and Pages," then each affected shortlink shows current target and new target (keeper), and each affected AutoKit page is listed with a link to preview; counts match the summary.
- Given the user confirms "Merge," when the operation completes, then all selected duplicates are merged into the keeper with no loss of comments or approvals, all listed shortlinks redirect to the keeper, AutoKit pages embed the keeper, and the cluster’s status updates to "Merged" within 5 seconds for clusters of ≤ 10 assets.
- Given a merge fails for any reason, when the operation errors, then no partial merge is committed, the user sees an error with a retry option, and the cluster remains unchanged.
Bulk Actions: Merge, Defer, Ignore with Progress and Error Handling
- Given the user selects 1–100 clusters via checkboxes, when "Bulk merge" is initiated, then a confirmation modal shows aggregate counts of assets to merge, comments/approvals to consolidate, shortlinks and pages impacted, and requires explicit confirmation.
- Given bulk merge begins, when processing, then a progress panel shows per-cluster status (queued, running, succeeded, failed) and an overall completion percentage; successes are summarized and failures display actionable error messages.
- Given the user selects clusters, when "Bulk defer" is confirmed, then selected clusters move to "Deferred" state and are hidden from the "Open" view by default.
- Given the user selects clusters, when "Bulk ignore" is confirmed, then selected clusters move to "Ignored" state and are excluded from future bulk actions unless explicitly re-included.
- Given bulk actions are previewable, when the user opens "Preview impact" for the current selection, then an aggregated list of shortlinks and AutoKit pages impacted is displayed with totals matching the selection.
- Given 50 clusters (≤ 5 assets each), when bulk merge runs, then 95% complete within 60 seconds and the UI remains responsive.
Per-Cluster Similarity Threshold Adjustment
- Given a cluster is open in detail view, when the user adjusts the similarity threshold between 0.50 and 0.99, then membership of the cluster recalculates and updates in the UI within 500 ms, reflecting assets that enter/leave based on the new threshold.
- Given the threshold is adjusted, when the user clicks "Reset to default," then the threshold returns to the workspace default and the original cluster members are restored.
- Given a threshold was adjusted for a cluster, when the user navigates away and back within the same session, then the chosen threshold persists until an action (merge/defer/ignore) is completed or the user resets it.
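The threshold recalculation above can be sketched as a simple filter over per-asset fingerprint scores. This is an illustrative sketch, not TrackCrate's actual implementation; the `Asset` shape and `similarity_to_keeper` field are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    asset_id: str
    similarity_to_keeper: float  # fingerprint confidence vs. the proposed keeper

def recompute_membership(assets: list[Asset], threshold: float) -> list[str]:
    """Return the asset ids that remain in the cluster at the new threshold."""
    if not 0.50 <= threshold <= 0.99:
        raise ValueError("threshold must be between 0.50 and 0.99")
    return [a.asset_id for a in assets if a.similarity_to_keeper >= threshold]

cluster = [Asset("a1", 0.97), Asset("a2", 0.81), Asset("a3", 0.62)]
print(recompute_membership(cluster, 0.75))  # ['a1', 'a2']
```

Raising the threshold shrinks the cluster; lowering it pulls borderline assets back in, which is why the UI must re-render membership on every adjustment.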
Exclude Assets from Merge
- Given a cluster contains assets not to be merged, when the user marks an asset as "Exclude," then the asset is visually separated and excluded from keeper merge calculations and bulk actions.
- Given excluded assets exist, when the user opens merge preview, then counts and lists reflect the exclusion and no operations will be performed on excluded assets.
- Given an asset is excluded, when the user selects "Include" before confirmation, then the asset returns to the merge set and preview updates accordingly.
Accessibility: WCAG AA and Keyboard Shortcuts
- Given the Duplicate Review UI is used with a keyboard only, when navigating, then all interactive elements (cluster selection, keeper toggle, threshold slider, preview, merge, defer, ignore) are reachable in a logical order with a visible focus indicator; no keyboard traps exist.
- Given a screen reader is active, when interacting with clusters, then controls have descriptive labels and roles, the proposed keeper state is announced, the merge preview opens as an ARIA modal with focus trapped, and success/error toasts are announced via live regions.
- Given standard color contrast testing, when checking UI elements, then text has ≥ 4.5:1 contrast and essential non-text graphics (badges, waveform overlays) have ≥ 3:1 contrast or provide an equivalent text cue.
- Given shortcuts are enabled, when the user presses K (confirm keeper), P (preview), M (merge), D (defer), I (ignore), A (select all), and ? (show shortcuts), then the corresponding action occurs or a help overlay is shown; shortcuts do not conflict with OS/screen reader defaults and can be toggled off in preferences.
Responsive Layout and Performance Budgets
- Given a device width of 320–479 px, when the UI loads, then the cluster list renders as a single-column layout with stacked asset rows, tap targets ≥ 44 px, and horizontal scroll is avoided; all actions remain accessible.
- Given device widths of 768–1023 px and ≥1024 px, when loading, then the UI adapts to two-column and multi-pane layouts respectively, showing waveforms side-by-side where space allows without causing horizontal scroll.
- Given the user scrolls through 100 clusters, when interacting, then scrolling remains ≥ 55 FPS on modern devices and waveforms/metadata diffs are lazy-loaded just-in-time, keeping peak browser memory usage under 300 MB.
- Given a viewport or orientation change, when it occurs, then layout reflows without content overlap or loss of focus, and any open modal/panel retains state.
Permissions, Policy Gates & Notifications
"As a workspace admin, I want controlled merging and clear notifications so that changes are authorized and the team stays informed."
Description

Enforce role-based permissions for reviewing and merging duplicates, with optional policy gates requiring a second approval at low confidence. Send in-app notifications and email summaries to watchers when clusters are detected, recommendations change, or merges complete, including links back to the review screen and audit entries. Respect workspace notification preferences and quiet hours.

Acceptance Criteria
RBAC: Merge Action Visibility and Enforcement
- Given a user lacks the "Merge Duplicates" permission, when they open a duplicate cluster, then the Merge action is not rendered in UI and attempts to call POST /clusters/{id}/merge return 403 and an audit event "merge_denied_permission" is recorded.
- Given a user has only "Review Duplicates" permission, when they open a cluster, then they can comment and flag but cannot approve or execute a merge and related endpoints return 403.
- Given a user has the "Merge Duplicates" permission, when they open a cluster, then Approve and Merge actions are available and a successful merge returns 200 with merge result payload and an audit event "merge_completed".
Policy Gate: Second Approval Required at Low Confidence
- Given the workspace setting "Require second approval below <= THRESHOLD%" is enabled and a cluster's confidence <= THRESHOLD, when the first approver approves, then the system records approval_1_of_2 and prevents merge execution until a distinct second approver approves.
- Given the same user attempts the second approval on that cluster, when they submit, then the request is rejected with error code second_approval_distinct_required and is audited.
- Given a different user with Merge permission submits the second approval, when both approvals exist, then the merge executes and the audit entry stores both approver IDs and timestamps.
- Given the policy is disabled or the cluster confidence > THRESHOLD, when a user with Merge permission approves, then the merge executes with a single approval.
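The policy gate reduces to a small predicate over distinct approver IDs. A minimal sketch, assuming a workspace-configured threshold (the 0.80 default below is illustrative, not a TrackCrate value):

```python
def can_execute_merge(confidence: float, approver_ids: list[str],
                      policy_enabled: bool, threshold: float = 0.80) -> bool:
    """Return True when the merge may execute under the second-approval policy."""
    distinct = set(approver_ids)  # the same user approving twice counts once
    if policy_enabled and confidence <= threshold:
        return len(distinct) >= 2  # low confidence: two distinct approvers required
    return len(distinct) >= 1      # normal path: one approval suffices

print(can_execute_merge(0.70, ["u1", "u1"], policy_enabled=True))  # False
print(can_execute_merge(0.70, ["u1", "u2"], policy_enabled=True))  # True
```

Deduplicating by user ID is what enforces the second_approval_distinct_required rule: a repeated approval from the same account never unlocks the merge.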
Notifications: Cluster Detected to Watchers
- Given a new duplicate cluster is created, when watchers exist per workspace/project or asset, then an in‑app notification is created for each watcher within 60 seconds and an email is sent within 5 minutes to watchers with email enabled. Then the notification payload includes cluster_id, confidence_score, recommended_keeper_id, and an "Open Review" deep link.
- Given no watchers are configured, when a cluster is created, then no notifications are generated.
Notifications: Recommended Keeper Change
- Given an existing cluster's recommended keeper changes, when the update is persisted, then watchers receive an in‑app notification and, based on preference, either immediate email or inclusion in the next digest, including previous_keeper_id, new_keeper_id, and deep links to Review and Audit.
- Given a watcher has disabled notifications for Fingerprint Merge events, when the recommendation changes, then that watcher receives no notification.
Notifications: Merge Completion Summary
- Given a cluster merge completes successfully, when the operation finishes, then watchers receive an in‑app notification and an email per their channel preference (respecting quiet hours) that includes merged_count, keeper_asset_id, cluster_id, an "Open Review" link to the master, and a "View Audit" link to the merge entry.
- Given a merge fails, when the error is returned, then only the acting user receives an in‑app failure notification with error_summary and retry link, and watchers are not emailed.
Preferences: Channel, Digest, and Quiet Hours
- Given a watcher has email disabled, when any Fingerprint Merge event occurs, then no email is sent to that watcher and only in‑app notifications are created (unless in‑app is also disabled).
- Given a watcher has quiet hours configured in their local timezone, when an event occurs during quiet hours, then the email is queued and delivered at the end of quiet hours while the in‑app notification is created immediately.
- Given a watcher is set to Daily Digest at 09:00 local, when multiple events occur, then a single digest email is sent at the configured time summarizing events with deep links and no immediate emails are sent.
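Quiet-hours deferral is mostly a scheduling calculation in the watcher's local time; the tricky case is a window that spans midnight. A hedged sketch (function name and signature are illustrative):

```python
from datetime import datetime, time, timedelta

def email_delivery_time(event_at: datetime, quiet_start: time, quiet_end: time) -> datetime:
    """Return when to send the email: immediately, or at the end of quiet hours."""
    t = event_at.time()
    if quiet_start <= quiet_end:
        # Window within a single day, e.g. 13:00-14:00.
        in_quiet = quiet_start <= t < quiet_end
        end = datetime.combine(event_at.date(), quiet_end)
    else:
        # Window spans midnight, e.g. 22:00-07:00.
        in_quiet = t >= quiet_start or t < quiet_end
        end = datetime.combine(event_at.date(), quiet_end)
        if t >= quiet_start:
            end += timedelta(days=1)  # quiet hours end tomorrow morning
    return end if in_quiet else event_at

# Event at 23:00 with quiet hours 22:00-07:00 is held until 07:00 the next day.
print(email_delivery_time(datetime(2024, 1, 1, 23, 0), time(22, 0), time(7, 0)))
```

The in-app notification is not deferred; only the email channel passes through this scheduler, matching the criteria above.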
Audit Trail and Deep Links
- Given any approval, rejection, or merge action occurs, when the event is processed, then an immutable audit entry is stored with actor_id, actor_roles, action_type, cluster_id, timestamp_utc, prior_state, new_state, and approver_ids as applicable, queryable by cluster_id. Then every notification created for these events contains two working links: Open Review (navigates to the specific cluster review) and View Audit (filters audit log to the event) which both require and respect auth.
- Given a user without required permissions clicks a deep link, when navigation occurs, then they see an access‑denied view and an "access_denied" audit event is recorded.

FormatFix

Automatic conformance for sample rate, bit depth, channel count, and loudness against project or distributor specs. Batch converts with high‑quality resampling, preserves originals, and updates checksums so everything plays correctly, uploads cleanly, and meets platform requirements.

Requirements

Spec Profiles & Validation Rules
"As a project owner, I want to set format spec profiles per destination so that assets are automatically validated and prepared to meet each distributor’s requirements."
Description

Provide a reusable library of format specification profiles that can be attached at the workspace, project, release, or destination (distributor) level. Each profile defines allowed sample rates, bit depths, channel configurations, loudness targets (integrated LUFS, LRA), true‑peak limits, container/codec constraints, dithering policy, and channel mapping rules. Expose profile selection in upload flows and asset detail views, and validate incoming or existing assets against the active profile to surface pass/fail results. Offer REST/GraphQL endpoints for managing profiles and querying compliance so other TrackCrate features (AutoKit, shortlinks, stem player) can resolve the correct rendition automatically.
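The profile fields enumerated above could serialize to a payload along these lines. This is a hypothetical JSON shape assembled from the field names in this requirement, not a documented TrackCrate schema:

```python
# Hypothetical spec-profile payload; key names mirror the fields listed above.
profile = {
    "name": "Distributor X — Masters",
    "scope": "Reusable",
    "allowedSampleRates": [44100, 48000],          # Hz
    "allowedBitDepths": [16, 24],                  # bits
    "allowedChannelConfigs": ["stereo"],
    "loudnessTarget": {"integratedLUFS": -14.0, "LRA": 9.0},
    "truePeakLimit": -1.0,                         # dBTP
    "allowedContainers": ["wav", "flac"],
    "allowedCodecs": ["pcm_s24le", "flac"],
    "ditheringPolicy": "requireOnBitDepthReduction",
    "channelMappingRules": {"order": ["L", "R"], "allowExtra": False},
}
print(sorted(profile)[:3])
```

A POST of this body to the profiles endpoint would return a profileId and version per the criteria below; attachments at workspace/project/release/destination level then reference that id.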

Acceptance Criteria
Create and Manage Spec Profiles via API
Given I am an authenticated user with manage:profiles permission When I POST /v1/profiles with a valid spec defining allowedSampleRates, allowedBitDepths, allowedChannelConfigs, loudnessTarget.integratedLUFS, loudnessTarget.LRA, truePeakLimit, allowedContainers, allowedCodecs, ditheringPolicy, channelMappingRules, name, and scope=Reusable Then the API returns 201 Created with profileId, version, createdAt, updatedAt, and persists all fields And When I GET /v1/profiles/{profileId} or query GraphQL profile(id) Then the stored spec is returned exactly And When I PATCH /v1/profiles/{profileId} with valid changes Then 200 OK and updatedAt changes and version increments And When I DELETE /v1/profiles/{profileId} Then 204 No Content and subsequent fetch by id returns 404 Not Found
Attach Profiles at Workspace/Project/Release/Destination Levels
Given a profile exists and entities Workspace W, Project P, Release R, Destination D in a hierarchy where P∈W and R∈P When I attach the profile to W Then the resolved active profile for P and R is the workspace-attached profile And When I attach a different profile to P Then P and its releases resolve to the project profile And When I attach another profile to R Then the release resolves to the release profile And When I attach another profile to D for that release Then precedence is Destination > Release > Project > Workspace and the destination-specific profile is resolved And When I GET /v1/profiles/resolve?releaseId=...&destinationId=... Then the API returns the resolved profileId and sourceLevel
Validate Uploads Against Active Profile During Upload
Given an active profile is resolved for the upload context When I upload an audio file via UI or POST /v1/assets with content-type audio/* Then the system extracts technical metadata (sampleRate Hz, bitDepth bits, channelConfig, container, codec) and computes loudness metrics per EBU R128 (integrated LUFS, LRA) and true-peak using ≥4x oversampling And Then the asset is marked Compliance=Pass if and only if all profile constraints are satisfied; otherwise Compliance=Fail with reason codes for each violated rule And The validation result includes measured values and thresholds and is stored on the asset record and retrievable at GET /v1/assets/{id}/compliance and via GraphQL asset { compliance { pass reasons metrics } } And The upload UI displays a pass/fail badge within 5 seconds of upload completion and lists any violations
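The pass/fail decision is a conjunction over the profile's rules, each failure contributing a reason code. A sketch of the shape (the loudness tolerance default and most reason-code strings are assumptions; `channel.config.invalid` is named later in this document):

```python
def check_compliance(metrics: dict, profile: dict) -> dict:
    """Pass only if every profile constraint is satisfied; otherwise list reasons."""
    reasons = []
    if metrics["sampleRate"] not in profile["allowedSampleRates"]:
        reasons.append("sampleRate.invalid")
    if metrics["bitDepth"] not in profile["allowedBitDepths"]:
        reasons.append("bitDepth.invalid")
    if metrics["channelConfig"] not in profile["allowedChannelConfigs"]:
        reasons.append("channel.config.invalid")
    if metrics["truePeak"] > profile["truePeakLimit"]:
        reasons.append("truePeak.exceeded")
    tolerance = profile.get("loudnessTolerance", 1.0)  # assumed default, in LU
    if abs(metrics["integratedLUFS"] - profile["loudnessTarget"]["integratedLUFS"]) > tolerance:
        reasons.append("loudness.offTarget")
    return {"pass": not reasons, "reasons": reasons, "metrics": metrics}
```

Storing measured values alongside thresholds, as the criteria require, means the result dict above would also carry the profile's limits in a real implementation.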
Revalidate Existing Assets on Profile Change
Given assets exist under a context with Profile A When Profile B is attached and becomes the active profile for that context Then all affected assets are enqueued for revalidation within 30 seconds and their previous compliance results are superseded by new results computed against Profile B And A progress endpoint GET /v1/compliance/revalidate?contextId=... returns counts { total, pending, processing, succeeded, failed } And Upon completion, each asset's compliance endpoint reflects the new pass/fail and reasons, and an event compliance.updated is emitted per asset
UI Exposure of Profile Selection and Compliance
Given I open the upload flow for Project P When the page loads Then a Profile selector shows the resolved active profile name and its source level (Workspace/Project/Release/Destination) and allows users with manage:profiles to change it And On an asset detail page, the Compliance panel shows the active profile, overall pass/fail, per-rule status, measured metrics (sampleRate, bitDepth, channels, LUFS-I, LRA, true-peak, container/codec), timestamp, and a Revalidate action And Changing the selected profile triggers immediate revalidation and updates the panel state within 10 seconds
Compliance Query Endpoints for Feature Integrations
Given an asset with multiple renditions and an active profile for a destination When a client calls GET /v1/compliance/resolve-rendition?assetId=...&destinationId=... Then the API returns the highest-fidelity rendition that passes the active profile, with fields { renditionId, pass=true, profileId, metrics, reasons=[] } And If no rendition passes, the API returns pass=false, reasons including all violated rules, and null renditionId with HTTP 200 And Equivalent GraphQL query returns the same resolution
Channel Mapping and Dithering Rule Enforcement
Given a profile with channelMappingRules requiring Stereo L/R and disallowing additional channels When an asset has channel configuration not matching the rule or declared channel order not L/R Then validation fails with reason codes channel.config.invalid or channel.order.invalid And Given a profile with ditheringPolicy=requireOnBitDepthReduction When a rendition's bit depth is lower than its source and the processing metadata lacks ditheringApplied=true Then validation fails with reason code dithering.policy.violation
Preflight Validation & One‑Click Auto‑Fix
"As an artist, I want to quickly see which files fail spec and fix them in one click so that I don’t waste time troubleshooting formats manually."
Description

Add a preflight screen that scans selected assets against the chosen spec profile, highlights mismatches (e.g., 48 kHz vs required 44.1 kHz, 24‑bit vs 16‑bit, stereo vs mono, LUFS off‑target), and previews the exact changes that will be applied. Provide a one‑click "Fix All" action that enqueues conversions with the appropriate resampling, bit‑depth reduction with dithering, channel remapping, and loudness normalization. Include a dry‑run mode, delta metrics (before/after), per‑file overrides, and a summary of estimated processing time and storage impact before execution.

Acceptance Criteria
Preflight scan highlights spec mismatches
Given a selected set of audio assets and an active spec profile When the preflight scan is initiated Then each asset is validated for sample rate, bit depth, channel count, and integrated loudness against the profile targets And any parameter not meeting the profile is flagged per-asset with the current value and the required value And compliant assets are labeled "Compliant" with zero flags And the preflight performs no writes or modifications to any files
One‑click Fix All enqueues correct conversions
Given one or more assets with preflight flags and Dry Run is disabled When the user clicks "Fix All" Then a single batch job is created that enqueues only the operations needed per asset And resampling is applied to nonconforming sample rates to match the profile rate And bit‑depth reduction uses dithering when reducing to the profile bit depth And channel remapping conforms to the profile channel count and mapping And loudness normalization adjusts integrated loudness to the profile target within ±0.1 LU unless the profile defines a different tolerance And after job completion, all previously flagged parameters pass preflight for the converted assets
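"Enqueues only the operations needed per asset" can be read as a planning step that diffs measured metrics against the spec. A hedged sketch with hypothetical op names (the ±0.1 LU tolerance comes from the criterion above):

```python
def plan_fixes(metrics: dict, spec: dict, lufs_tolerance: float = 0.1) -> list[dict]:
    """Emit only the conversion operations this asset actually needs."""
    ops = []
    if metrics["sampleRate"] != spec["sampleRate"]:
        ops.append({"op": "resample", "to": spec["sampleRate"]})
    if metrics["bitDepth"] > spec["bitDepth"]:
        # Reduction requires dither; increasing bit depth would not.
        ops.append({"op": "reduceBitDepth", "to": spec["bitDepth"], "dither": True})
    if metrics["channels"] != spec["channels"]:
        ops.append({"op": "remapChannels", "to": spec["channels"]})
    if abs(metrics["integratedLUFS"] - spec["integratedLUFS"]) > lufs_tolerance:
        ops.append({"op": "normalizeLoudness", "target": spec["integratedLUFS"]})
    return ops

# 48 kHz/24-bit stereo at -18 LUFS against a 44.1 kHz/16-bit/-14 LUFS stereo spec:
plan = plan_fixes(
    {"sampleRate": 48000, "bitDepth": 24, "channels": 2, "integratedLUFS": -18.0},
    {"sampleRate": 44100, "bitDepth": 16, "channels": 2, "integratedLUFS": -14.0},
)
print([o["op"] for o in plan])  # ['resample', 'reduceBitDepth', 'normalizeLoudness']
```

A compliant asset yields an empty plan, which is how dry-run and "Fix All" can share the same planner: dry-run renders the plan, execution enqueues it.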
Dry‑run preview with delta metrics and no file writes
Given Dry Run mode is enabled on the preflight screen When the user clicks "Fix All" Then no new audio files are written and no existing files are modified And a per‑asset preview displays the operations that would run and before/after metrics (sample rate, bit depth, channel count, integrated loudness) And a summary displays estimated total processing time and net storage impact for the batch And disabling Dry Run and re‑running yields identical planned operations and metrics
Per‑file override of auto‑fix actions
Given an asset has one or more preflight flags When the user opens per‑file overrides and adjusts the planned operations Then the preview updates immediately to reflect the overrides and resulting after‑metrics And "Fix All" respects the per‑file overrides for that execution And the job summary records which operations were overridden per asset
Processing summary with estimated time and storage impact
Given preflight results are available When the user opens the processing summary prior to execution Then the summary shows: number of assets to fix, operations count by type, estimated total duration, and estimated additional storage required (converted outputs plus preserved originals), with per‑asset rollups available And the summary is displayed before the user confirms execution
Checksum updates and original preservation post‑fix
Given "Fix All" has completed successfully for one or more assets When the system finalizes outputs Then the original files are preserved unmodified and linked to the new versions in version history And new files receive updated checksums and retain existing rights metadata And the system stores delta metrics (before/after) with each new version
Error handling and partial failures reporting
Given a batch contains assets that encounter conversion errors When the job completes Then failed assets are reported with per‑asset error messages And successful assets are retained and validated as compliant And no partial or corrupted outputs replace originals for failed assets And the job summary lists failed assets flagged as "Needs Fix"
High‑Fidelity Resample, Dither & Channel Mapper
"As a collaborator, I want batch conversions that sound transparent and meet specs so that everything plays back correctly and uploads cleanly."
Description

Implement a conversion engine that performs high‑quality sample‑rate conversion, precision bit‑depth reduction with selectable dither/noise shaping, and safe channel operations (mono↔stereo with proper summing/duplication and headroom). Ensure deterministic, gapless output with preserved phase and minimal aliasing. Support large files via streaming I/O, chunked processing, and hardware‑accelerated paths where available. Expose tunable quality presets and per‑job parameters via API and UI. Integrate tightly with TrackCrate’s media pipeline to regenerate previews and ensure the stem player and shortlinks serve the correct conformed renditions.

Acceptance Criteria
SRC Fidelity and Aliasing Control
Given a 96 kHz/24-bit stereo PCM test file with broadband content up to 40 kHz and known reference metrics When it is converted to 44.1 kHz/24-bit using qualityPreset=High Then passband ripple (0–20 kHz) is <= ±0.05 dB relative to a SoX VHQ reference And stopband attenuation above 22.05 kHz is >= 100 dB And inter-sample true-peak error vs reference is <= 0.1 dBTP And output duration differs from ideal resampled length by <= 1 sample And L/R phase difference at 1 kHz is <= 0.1° And the output file checksum is identical across two separate runs with the same parameters
Bit‑Depth Reduction with Selectable Dither/Noise Shaping
Given a 24-bit stereo PCM file and a -3 dBFS 1 kHz tone fixture When converting to 16-bit with dither=TPDF and noiseShaping=Moderate Then measured wideband noise floor (20 Hz–20 kHz) is between -95 dBFS and -91 dBFS And THD+N at -3 dBFS is <= -88 dB And no idle tones or spurs exceed -100 dBFS during digital silence segments And true peak at any sample does not exceed -0.1 dBTP (no clipping) And output metadata reflects bitDepth=16, ditherType=TPDF, noiseShaping=Moderate And selecting dither=None yields output with bitDepth=16 and no HF tilt characteristic of shaped dither
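TPDF dither adds triangular-distributed noise of ±1 LSB before quantization, decorrelating the quantization error from the signal. A minimal per-sample sketch (a real engine works on buffers and supports noise shaping; this shows only the dither-and-quantize core):

```python
import random

def tpdf_dither_quantize(sample: float, bits: int = 16) -> int:
    """Quantize a float sample in [-1.0, 1.0) with triangular-PDF dither of ±1 LSB."""
    full_scale = 2 ** (bits - 1)                  # 32768 for 16-bit
    dither = random.random() - random.random()    # triangular PDF over (-1, 1) LSB
    q = round(sample * full_scale + dither)
    return max(-full_scale, min(full_scale - 1, q))  # clamp to the integer range
```

The sum of two independent uniform variables has a triangular PDF, which is why the two `random.random()` calls are subtracted; the dither's mean is zero, so levels are preserved on average while the error spectrum flattens.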
Mono↔Stereo Channel Mapping with Safe Headroom
- Given a stereo file with identical in-phase L and R signals peaking at -1 dBFS When downmixing to mono with headroom protection enabled Then the mono output true peak is <= -1.0 dBTP and contains no hard clipping And the downmix uses mathematically correct summing/averaging with deterministic gain structure
- Given a stereo file with L=+1.0 and R=-1.0 antiphase 1 kHz tones When downmixing to mono Then the output magnitude is <= -90 dBFS (near-total cancellation) and contains no added DC offset > -100 dBFS
- Given a mono file When upmixing to stereo Then both channels are sample-identical within 1 LSB and channel count metadata=2
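Both criteria fall out of an equal-weight average downmix: identical L/R content keeps its original peak, and antiphase content cancels. A sketch of the two channel operations (list-of-floats stand-ins for real sample buffers):

```python
def downmix_to_mono(left: list[float], right: list[float]) -> list[float]:
    """Equal-weight average: correlated L/R keeps its peak; antiphase cancels."""
    return [(l + r) / 2.0 for l, r in zip(left, right)]

def upmix_to_stereo(mono: list[float]) -> tuple[list[float], list[float]]:
    """Duplicate the mono channel; both sides are sample-identical."""
    return list(mono), list(mono)

print(downmix_to_mono([0.89, -0.5], [0.89, -0.5]))  # [0.89, -0.5] -- peak unchanged
print(downmix_to_mono([1.0], [-1.0]))               # [0.0] -- total cancellation
```

Averaging (rather than summing) is what provides the headroom guarantee: the mono result can never exceed the louder of the two inputs, so a -1 dBFS correlated source stays at -1 dBFS.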
Streaming and Chunked Conversion of Large Files
Given a 12 GB, 6-hour 192 kHz/32-bit float stereo file When converting to 48 kHz/24-bit with streaming chunked processing (chunkSize <= 16 MB) Then peak resident memory stays <= 400 MB And temporary disk usage does not exceed output size by > 15% And the produced output is bitwise identical to a reference non-streamed conversion with the same parameters And output length matches expected resampled length within 1 sample with no inserted gaps or dropped samples And if hardware acceleration is unavailable at runtime, the job falls back to CPU-only and records the execution path in job metadata And average throughput on the reference CI environment is >= 1.0× realtime
API/UI Parameterization and Preset Overrides
Given the POST /formatfix/jobs endpoint with OpenAPI-documented schema When a request includes sampleRate, bitDepth, channels, ditherType, noiseShaping, qualityPreset, and headroomDb within allowed ranges Then the API responds 202 with a jobId and persists the exact parameters in job metadata And invalid combinations (e.g., noiseShaping without dither, unsupported bitDepth) yield 400 with field-specific error codes And GET /formatfix/jobs/{jobId} returns the persisted parameters and current status Given the FormatFix UI When a user selects presets (Fast, Standard, High, Max) and overrides per-job parameters Then the UI reflects effective values, validates ranges inline, and saves overrides to the job metadata And project/distributor spec defaults pre-populate fields when present
Media Pipeline Integration and Correct Asset Serving
Given a conformance job completes successfully When the pipeline finalizes artifacts Then previews (e.g., 128 kbps AAC, 320 kbps MP3, waveform images) are regenerated from the conformed master And the stem player streams the conformed stems at the target sample rate with no pitch or tempo drift beyond ±0.1% And shortlinks serve the correct conformed rendition for downloads and streams And original source files are preserved unmodified in the originals store And new artifact checksums are computed, stored, and exposed via API And CDN caches are invalidated or cache-busted so clients fetch the new renditions And expiring, watermarked downloads are generated from the conformed assets
Deterministic Output Across Nodes and Paths
Given the same input file and identical parameters When the job is executed twice on the same node and once on a different node using a hardware-accelerated path Then all three outputs are byte-identical, including container headers and metadata ordering And internal timestamps/fields that would vary by run are normalized so as not to affect file hashes And no added leading or trailing silence exceeds 1 sample at either end And recorded job metadata includes the exact code version, preset, and execution path used
Non‑Destructive Versioning & Checksum Lineage
"As a label admin, I want originals kept intact with verifiable derived versions so that I can audit, share compliant files, and revert if needed."
Description

Preserve original uploads and write conformed renditions as derived versions linked to their source. Compute and store cryptographic checksums for both originals and outputs, capturing the full transformation recipe (profile, engine settings, timestamps) for auditability and repeatability. Update asset manifests so AutoKit, shortlinks, and expiring downloads default to conformed outputs while allowing users to access or restore originals. Display version lineage in the UI with quick diff of technical metadata and allow per‑asset rollback.

Acceptance Criteria
Derived Version Creation Preserves Original and Links Lineage
Given an uploaded audio asset that fails project or distributor specs And the user initiates FormatFix conformance When processing completes Then the system creates a new derived version in the same asset group And the original binary remains unchanged and downloadable And the derived version stores references to source asset-id and version-id And the lineage graph reflects a parent→child relationship And the asset manifest marks the derived version as the default playable/downloadable version for the asset
Cryptographic Checksums for Originals and Derivatives
Given any asset upload or derived output When the binary is stored Then a SHA-256 checksum is computed and persisted immutably with the version record And the checksum is exposed via API and visible in the UI And re-downloading the file and re-hashing yields a value identical to the stored checksum And if an upload matches an existing version's checksum, the system records the match and does not duplicate the binary
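Hashing a multi-gigabyte stem should stream in chunks rather than load the whole file. A straightforward SHA-256 sketch using Python's standard library:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so large stems never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Deduplication then becomes a lookup: before persisting a new binary, compare its digest against the checksums already stored on existing versions and record a match instead of writing a duplicate.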
Transformation Recipe Captured for Audit and Repeatability
Given a conformance job runs for an asset When a derived version is produced Then the system captures and persists the full transformation recipe including: profile name, engine name and version, input checksum, output checksum, all conversion parameters (sample rate, bit depth, channels, dither, resampler, loudness target), timestamps, initiator (user or system), and host/environment identifiers And the recipe is retrievable via API and UI And re-running the same recipe against the same source with the same engine version produces an identical binary (checksum equality with stored output)
Manifests Default to Conformed Outputs in Integrations
Given an asset has a conformed derived version marked as default When an AutoKit page renders, a shortlink is created or resolved, or an expiring download is generated Then the integration serves the conformed version by default And existing AutoKit pages and shortlinks reflect the manifest default without manual edits And users can override per link or page to serve the original or another version
UI Version Lineage with Technical Metadata Diff
Given an asset with at least one derived version When the user opens the Version Lineage view and selects any two versions Then a visual diff displays technical metadata side-by-side And changed fields are highlighted and include at minimum: sample rate, bit depth, channel count, loudness (LUFS), duration, codec/format, file size, checksum, creation timestamp, profile, and engine version And the lineage tree renders parent→child relationships with timestamps and processing status And the view loads within 1 second for assets with up to 20 versions
Per-Asset Rollback Updates Defaults Without Data Loss
Given an asset with multiple versions including a conformed default When the user selects a different version and confirms "Set as Default" Then the asset manifest updates to point to the selected version And AutoKit pages, shortlinks, and expiring downloads start serving the new default within 60 seconds And no versions are deleted; all binaries remain accessible based on permissions And the change is audit-logged with user id, timestamp, and previous default
Batch Conformance Is Idempotent With Per-Item Results
Given a batch conformance job for N assets When the job runs Then each asset that has no matching derived version for the specified recipe gets a new derived version linked to its source And each asset that already has an output with an identical recipe is skipped without creating a duplicate, and the manifest remains unchanged And failures for individual assets are recorded with error codes and do not stop other items from processing And the job summary reports counts for processed, created, skipped, and failed And re-running the same batch with the same recipe is idempotent (no new versions created for previously successful items)
Loudness Analysis & Normalization to Target
"As a mastering engineer, I want reliable loudness and peak control to specified targets so that my deliveries are accepted and playback is consistent across platforms."
Description

Analyze integrated loudness (LUFS‑I), loudness range (LRA), and true‑peak for each asset, then apply normalization to the selected profile’s target with true‑peak limiting and optional oversampling to avoid intersample clipping. Support analyze‑only mode and normalization of stems, mixes, and masters with appropriate headroom policies. Write measured and post‑process metrics into asset technical metadata, surface before/after comparisons, and ensure previews and stem player reflect the normalized output.

Acceptance Criteria
Analyze-only loudness metrics written to metadata
Given a supported audio asset (e.g., 24-bit WAV, 44.1–192 kHz) and Analyze-Only mode selected When loudness analysis is run using ITU-R BS.1770-4 with EBU R128 relative gating and true-peak measured at >= 4x oversampling Then the system produces Integrated Loudness (LUFS-I), Loudness Range (LRA), and True-Peak (dBTP) within +/-0.1 LU and +/-0.1 dB of a calibrated reference meter for provided validation files And writes the measured values, analysis standard, oversampling factor, analysis timestamp, and analyzer version into the asset technical metadata And leaves the original asset file and checksum unchanged And does not create a new derivative file
Normalize to profile targets with true-peak limiting
Given a normalization profile with targets Integrated Loudness = -14.0 LUFS and True-Peak Ceiling = -1.0 dBTP and Normalization mode selected When an asset is normalized Then the output file's Integrated Loudness equals -14.0 +/-0.2 LU and True-Peak <= -1.0 dBTP when measured at >= 4x oversampling And no samples clip (sample peak < 0.0 dBFS) And a brickwall limiter is applied only as needed, with max gain reduction logged and stored in technical metadata And the normalized output is stored as a new derivative linked to the source; the original file remains unchanged And technical metadata includes both pre-process and post-process loudness metrics and the applied gain (dB)
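The applied gain and the limiter's job split cleanly: a static gain closes the LUFS gap, and limiting engages only when that gain would push true-peak over the ceiling. A sketch of that arithmetic (illustrative only; real normalization measures loudness per BS.1770 and limits with lookahead):

```python
def normalization_plan(lufs_i, true_peak_dbtp, target_lufs=-14.0, ceiling_dbtp=-1.0):
    """Compute the static gain needed to hit the LUFS target and how much
    limiting (if any) the true-peak ceiling then requires."""
    gain_db = target_lufs - lufs_i                 # applied gain, in dB
    peak_after_gain = true_peak_dbtp + gain_db     # dBTP moves 1:1 with static gain
    limiter_reduction = max(0.0, peak_after_gain - ceiling_dbtp)
    return {"gain_db": round(gain_db, 1),
            "limiter_reduction_db": round(limiter_reduction, 1)}
```

Logging both numbers, as the criterion requires, makes it auditable whether a delivery was pure gain or needed brickwall limiting.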
Role-aware normalization policies for stems, mixes, and masters
Given the "Indie DSP Default" profile defines role-specific targets: - Stem: peak-only to -3.0 dBTP (no LUFS normalization; no limiting) - Mix: -16.0 LUFS-I target, True-Peak Ceiling -1.0 dBTP - Master: -14.0 LUFS-I target, True-Peak Ceiling -1.0 dBTP When a batch containing labeled stems, mixes, and masters is normalized Then each stem output has True-Peak <= -3.0 dBTP and uses gain only (0.0 dB limiter gain reduction) And each mix output is -16.0 +/-0.2 LUFS-I with True-Peak <= -1.0 dBTP And each master output is -14.0 +/-0.2 LUFS-I with True-Peak <= -1.0 dBTP And all outputs have their role and applied policy recorded in technical metadata
Oversampling option prevents intersample clipping
Given limiter oversampling is configurable with options Off, 2x, 4x, 8x and a test file containing intersample peaks When normalization runs with oversampling set to 8x Then reconstructed True-Peak measured at 8x is <= the profile ceiling and no ISP-induced clipping occurs And when normalization runs with oversampling Off, True-Peak measured at 1x is <= the profile ceiling, and a warning flag "oversampling disabled" is recorded in metadata
Before/after loudness metrics surfaced in UI and API
Given an asset has been analyzed and then normalized When viewing the asset in the UI and via API Then pre-process and post-process values for LUFS-I, LRA, True-Peak, applied gain, and max limiter reduction are displayed side-by-side and returned by API fields And numeric deltas (post - pre) are shown with at least 0.1 LU and 0.1 dB precision And a downloadable normalized derivative is clearly labeled with its target profile and policy
Previews and stem player reflect normalized output
Given a normalized derivative exists for an asset and preview playback is initiated When the user plays the preview Then the audio source used is the normalized derivative by default, matching its LUFS-I within +/-0.2 LU and True-Peak ceiling within +/-0.1 dB when re-measured from the preview stream And the user can toggle A/B between original and normalized; the correct version is audible and labeled, with seamless switching (<100 ms gap) And the stem player, when present, loads the normalized stem derivatives and not the originals
Batch normalization with checksum updates and idempotency
Given a selection of 25 assets across a project and a normalization profile When batch normalization completes Then each normalized derivative receives a new checksum (e.g., SHA-256) and content-addressed URI; originals' checksums remain unchanged And rerunning normalization with the same profile and unchanged source does not create duplicate derivatives (idempotent) And a processing report lists per-asset status, pre/post metrics, applied gain, limiter usage, oversampling setting, and any warnings
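One way to satisfy both the checksum and idempotency requirements at once is to make the checksum the address, as in this sketch (the `cas://` scheme and in-memory store are stand-ins for real object storage):

```python
import hashlib

def store_derivative(store: dict, payload: bytes) -> str:
    """Content-addressed write: the SHA-256 digest serves as both checksum
    and URI, so re-storing identical bytes is a no-op (idempotent)."""
    digest = hashlib.sha256(payload).hexdigest()
    uri = f"cas://{digest}"
    store.setdefault(uri, payload)   # identical content never duplicates
    return uri
```

A rerun with an unchanged source and profile produces byte-identical output, hashes to the same URI, and therefore creates no duplicate derivative.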
Batch Queue, Progress & Notifications
"As a remote team member, I want to queue large batches and be notified when they finish so that I can keep working without monitoring conversions."
Description

Introduce an asynchronous job queue for batch conversions with per‑workspace concurrency limits, resumable jobs, automatic retries with backoff, and granular progress indicators at job and file levels. Provide real‑time status updates in the UI, activity feed entries, and optional email or webhook notifications on completion or failure. Allow users to prioritize or pause jobs and to download partial results as files complete, supporting teams working across time zones.

Acceptance Criteria
Enqueue and Process with Per-Workspace Concurrency Limits
Given workspace W has a concurrency limit of 2 and jobs J1..J5 are queued, When the scheduler dispatches jobs, Then no more than 2 jobs in W are in Running state at any time and the remainder are Pending. Given a Running job in W completes, When a slot becomes available, Then the oldest Pending job at the highest pending priority transitions to Running within 5 seconds. Given a higher-priority Pending job exists behind lower-priority Pending jobs, When a slot becomes available, Then the higher-priority job is dispatched before lower-priority jobs. Given workspaces W1 (limit 1) and W2 (limit 3) each have queued jobs, When the scheduler dispatches, Then each workspace adheres to its own limit independently and does not block the other. Given any job is created, When it is persisted, Then it has a unique job_id and state lifecycle timestamps (Created, Pending, Running, Paused, Succeeded, Failed) recorded.
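A dispatch loop meeting the per-workspace limit and priority rules might look like this sketch (jobs as dicts with a numeric priority, larger = higher; all names hypothetical):

```python
def dispatch(pending, running, limits):
    """Move Pending jobs to Running while each workspace stays under its own
    concurrency limit; higher priority first, FIFO within a priority level."""
    started = []
    for ws, queue in pending.items():
        queue.sort(key=lambda j: -j["priority"])  # stable sort keeps FIFO per priority
        while queue and len(running.get(ws, [])) < limits[ws]:
            job = queue.pop(0)
            running.setdefault(ws, []).append(job)
            started.append(job["id"])
    return started
```

Each workspace is drained against its own limit, so a saturated workspace never blocks another, and Running jobs are never preempted because only Pending queues are touched.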
Resume Interrupted Batch Jobs
Given a job with multiple files is in Running state, When the worker process is terminated unexpectedly during processing, Then upon worker restart the job returns to Running within 30 seconds and resumes from the next unprocessed file without duplicating already completed outputs. Given files F1..Fk completed before interruption, When the job resumes, Then F1..Fk remain marked Succeeded and are not reprocessed, and their output checksums match the last successful attempt. Given a file was partially written at the time of interruption, When processing resumes, Then the partial artifact is cleaned up or replaced to prevent corruption. Given the job resumes after interruption, When the UI is refreshed, Then job-level progress and per-file statuses reflect the resumed state accurately.
Automatic Retries with Exponential Backoff and Error Classification
Given a transient error occurs while converting file Fx and max_retries=3 with exponential backoff (base=2, initial_delay=10s), When retries are scheduled, Then retries occur at approximately 10s, 20s, and 40s after the failure with jitter up to ±20%. Given Fx succeeds on a retry before max_retries is exhausted, When the retry completes, Then Fx is marked Succeeded and no additional retries are attempted. Given a non-retryable error (e.g., unsupported format) occurs for Fx, When error classification runs, Then Fx is marked Failed immediately without retry and an error_code and message are recorded. Given some files fail after all retry attempts, When the job completes, Then the job summary includes counts of succeeded, failed, and skipped files and exposes failure details for each failed file.
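The 10s/20s/40s schedule with up to ±20% jitter reduces to one line of arithmetic per attempt; an injectable RNG makes it deterministic under test (a sketch, not the production scheduler):

```python
import random

def retry_delays(max_retries=3, initial_delay=10.0, base=2.0, jitter=0.2,
                 rng=random.random):
    """Exponential backoff: 10s, 20s, 40s..., each nudged by up to +/-20% jitter."""
    delays = []
    for attempt in range(max_retries):
        nominal = initial_delay * base ** attempt
        delays.append(nominal * (1 + jitter * (2 * rng() - 1)))
    return delays
```

Jitter matters here because many files in a batch often fail at once (e.g. a storage blip); spreading the retries avoids a synchronized thundering herd.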
Granular Job and File Progress Indicators
Given a job has N files, When k files complete, Then the job-level progress percentage reflects k/N to the nearest 1% and updates within 2 seconds of change. Given a file Fi is processing, When conversion steps advance (e.g., resample, bit depth, loudness normalization), Then Fi displays a progress percentage from 0–100% and current step label, updating at least every 2 seconds while active. Given the user reloads the UI during processing, When the page loads, Then job-level and file-level progress restore from server state without regression. Given estimated time remaining (ETR) is available for a file or job, When processing has been running for at least 10 seconds, Then ETR is displayed and refreshed at least every 5 seconds.
Real-Time UI Updates, Activity Feed, and Notifications
Given a server-side status change for a job or file occurs, When the user has the TrackCrate UI open, Then the UI reflects the change within 2 seconds via real-time updates without requiring a manual refresh. Given a job transitions through key states (Queued, Started, Paused, Resumed, Completed, Failed), When each transition occurs, Then an activity feed entry is created with timestamp, actor (if applicable), job_id, and a short description. Given a workspace has email or webhook notifications enabled for job completion, When a job reaches a terminal state (Completed with or without failures), Then exactly one notification is sent containing job_id, status, counts (succeeded/failed/skipped), duration, and links to results. Given a webhook is configured with a signing secret, When the webhook is delivered, Then the request includes an HMAC signature and timestamp header that validates against the secret.
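Signing the timestamp together with the body lets receivers reject both tampered payloads and replays. A sketch using HMAC-SHA256 (the header names are invented for illustration, not a documented TrackCrate API):

```python
import hashlib
import hmac
import time

def sign_webhook(secret: bytes, body: bytes, timestamp: int) -> dict:
    """HMAC over 'timestamp.body' so the timestamp cannot be swapped later.
    Header names are hypothetical."""
    mac = hmac.new(secret, f"{timestamp}.".encode() + body, hashlib.sha256)
    return {"X-TrackCrate-Timestamp": str(timestamp),
            "X-TrackCrate-Signature": mac.hexdigest()}

def verify_webhook(secret: bytes, body: bytes, headers: dict, max_age: int = 300) -> bool:
    ts = int(headers["X-TrackCrate-Timestamp"])
    if abs(time.time() - ts) > max_age:   # stale: likely a replayed delivery
        return False
    expected = sign_webhook(secret, body, ts)["X-TrackCrate-Signature"]
    return hmac.compare_digest(expected, headers["X-TrackCrate-Signature"])
```

`hmac.compare_digest` is the constant-time comparison; a plain `==` would leak timing information about the expected signature.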
Job Prioritization and Pause/Resume Control
Given multiple Pending jobs exist in a workspace, When a user changes job Jp priority to High, Then Jp is placed ahead of lower-priority Pending jobs and is dispatched next when a slot becomes available (without preempting Running jobs). Given a job Jr is in Running state, When a user clicks Pause, Then Jr stops within 10 seconds at a safe checkpoint (end of current file or step), transitions to Paused, and frees a concurrency slot. Given a job is Paused, When a user clicks Resume, Then the job transitions to Pending or Running (if a slot is free) and continues without reprocessing completed files. Given priority or pause/resume actions occur, When the activity feed updates, Then each action is recorded with actor, timestamp, and previous/new state or priority.
Partial Results Download Availability
Given a batch job is Running and files complete incrementally, When a file Fi finishes successfully, Then Fi's converted output is available for individual download via UI and API within 5 seconds, with a pre-signed link that expires per workspace policy. Given multiple files have completed in a job, When the user requests a partial bundle download, Then a bundle is generated containing only completed files at the time of request without pausing the job in progress. Given downloads are generated, When the user verifies integrity, Then checksums for downloaded files match the recorded output checksums. Given watermarking or access controls are enabled for downloads, When links are generated, Then the policies are enforced consistently for partial and final results.
Compliance Report & Audit Trail Export
"As a label, I want a shareable compliance report so that I can prove conformance to partners and streamline deliveries."
Description

Generate a detailed report per batch or release listing each asset, the applied profile, pre/post technical attributes, pass/fail outcomes, changes applied, checksums, timestamps, and operator identity. Allow export as CSV/PDF and shareable shortlinks with expiration. Link entries to the global audit log and API for external compliance workflows or distributor submissions.

Acceptance Criteria
Batch Compliance Report: Required Fields and Accuracy
Given a completed FormatFix batch with at least 3 processed assets and an applied conformance profile When a user with Report:Generate permission requests a compliance report for that batch Then the report contains one row per asset and includes: asset identifier, file name, applied profile name and version, pre-conversion attributes (sample rate in Hz, bit depth, channel count, integrated loudness in LUFS), post-conversion attributes (same fields), per-check pass/fail outcome, list of changes applied, original file checksum (SHA-256), output file checksum (SHA-256), queued/start/complete timestamps in ISO 8601 UTC, and operator identity (user email or API client ID) And the row count equals the number of assets in the batch And all timestamps are in UTC and use ISO 8601 format (e.g., 2025-09-02T14:03:00Z)
Release-Level Report: Correct Scoping and Completeness
Given a release containing assets processed across multiple batches within the last 90 days When a user generates a compliance report scoped to that release Then the report includes all and only assets linked to the release And each asset row includes the release ID and release title And totals for assets, passes, and fails are shown at the end of the report and match the sum of rows
Export: CSV and PDF Fidelity
Given a generated compliance report with at least 10 rows When the user exports as CSV Then the file downloads within 10 seconds, is UTF-8 encoded with a single header row, comma delimiter, double-quote escaping, LF line endings, and uses filename pattern <scopeId>-YYYYMMDDTHHMMSSZ.csv And numeric fields preserve precision and use dot as decimal separator When the user exports as PDF Then the PDF renders all rows and totals, includes the report title, generation timestamp (UTC), page numbers, and uses filename pattern <scopeId>-YYYYMMDDTHHMMSSZ.pdf
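The CSV contract (UTF-8, single header row, comma delimiter, double-quote escaping, LF endings, dot decimals, timestamped filename) maps directly onto Python's csv module. A sketch with a hypothetical, reduced row shape:

```python
import csv
import io
from datetime import datetime, timezone

def export_report_csv(scope_id: str, rows: list, generated_at: datetime):
    """Serialize report rows per the export contract; returns (filename, bytes)."""
    fieldnames = ["asset_id", "outcome", "loudness_lufs"]   # illustrative subset
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames,
                            quoting=csv.QUOTE_MINIMAL, lineterminator="\n")
    writer.writeheader()
    writer.writerows(rows)
    stamp = generated_at.astimezone(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return f"{scope_id}-{stamp}.csv", buf.getvalue().encode("utf-8")
```

The default `doublequote=True` gives the required `""` escaping, and writing floats through `str()` keeps dot decimal separators regardless of locale.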
Shareable Shortlink with Expiration and Revocation
Given a generated compliance report When the user creates a shareable link with an expiration time of 72 hours Then a shortlink is generated and returns HTTP 200 with the URL and expiry timestamp (UTC) And unauthenticated access to the link displays the report in view-only mode And the link returns HTTP 410 Gone within 1 minute after expiry And the owner can revoke the link, after which it returns HTTP 403 Forbidden within 1 minute And link creation, access, expiry, and revocation events are recorded in the audit log
Audit Log and API Linking
Given any row in a compliance report When the user clicks its Audit ID Then the global audit log entry opens and shows the same operator identity, checksums, and timestamps as in the report When a client calls GET /api/compliance-reports/{reportId} Then the API responds 200 with JSON that includes the same rows and fields as the on-screen report And unauthenticated requests receive 401, unauthorized roles receive 403, and non-existent IDs return 404 And API responses include ETag and are cacheable for 60 seconds
Checksum Verification and Mismatch Handling
Given the originals are preserved and outputs exist in storage When a compliance report is generated Then the system verifies the stored SHA-256 checksums for both original and output artifacts And if any recalculated checksum does not match the stored value, the asset row is flagged checksum_mismatch = true and the overall outcome is Fail And a remediation hint is included in the row notes field with code CHECKSUM_MISMATCH And the mismatch event is appended to the global audit log

TagForge

Unified tagging and naming: standardizes filenames and embeds clean credits, codes, splits, and contact info into ID3/BWF. Ensures every file is portable and ingest‑ready, eliminating repetitive manual tagging and mismatched metadata across stems and versions.

Requirements

Metadata Schema Templates & Frame Mapping
"As a label manager, I want to define a standard metadata schema and map it to ID3/BWF frames so that every file we export is ingest-ready across platforms."
Description

Provide configurable metadata templates for releases, tracks, and stems that map TrackCrate fields (credits, codes, splits, contacts) to ID3v2.3/v2.4 frames and BWF bext/iXML/RIFF chunks. Support industry-standard frames (e.g., TSRC for ISRC, TPE1/TPE2, TIT2, TALB, TXXX for custom keys, TMCL/TIPL for roles) and BWF fields (Description, Originator, OriginatorReference) with optional iXML nodes for detailed credits and contact info. Ensure correct character encoding (UTF-8/UTF-16 for ID3), field length limits, and safe fallbacks when a target format lacks a native field. Integrate with release records and asset library so exports and downloads inherit the mapped schema, producing ingest-ready files for distributors, PROs, and collaborators across systems.

Acceptance Criteria
Release Template → MP3 (ID3v2.4) mapping
Given a release and track with populated fields (Title, Album, Primary Artists, Album Artist, ISRC, UPC, Date, Roles, Splits, Contacts) When TagForge exports an MP3 using the "Release v2.4" metadata template Then the file SHALL contain ID3v2.4 frames:
- TIT2 = Track Title (UTF-8)
- TALB = Album Title (UTF-8)
- TPE1 = Primary Artist(s) as multi-value entries
- TPE2 = Album Artist
- TSRC = ISRC (exactly 12 characters; fails with error META-ISRC-INVALID if not valid)
- TXXX:UPC = UPC/EAN
- TDRC = Release date in ISO 8601 (YYYY-MM-DD)
- TMCL/TIPL = contributor role-person pairs for all credits
- TXXX:Splits = JSON-serialized splits
- TXXX:Contact = primary contact email and/or URL
And all written frames SHALL be verified by a TagForge read-back with exact value round-trip.
Stem Template → WAV (BWF bext + iXML) mapping
Given a stem WAV asset with associated release/track metadata When TagForge exports a WAV using the "Stem BWF+iXML" template Then:
- bext:Description = "<Track Title> — <Stem Name>" and <= 256 bytes
- bext:Originator = "TrackCrate"
- bext:OriginatorReference = TrackCrate Asset ID (<= 32 ASCII chars)
- RIFF INFO: INAM = Track Title; IART = Primary Artist(s); IPRD = Album Title; ICRD = YYYY-MM-DD
- iXML contains <PROJECT> = Release Title; <NOTE> includes contact info; and a <TRACKCRATE><CREDITS> section listing each contributor with role and share
- All mapped values use UTF-8 where applicable; overflow beyond bext limits is mirrored fully into iXML
And TagForge read-back confirms 100% parity between intended mappings and written chunks.
Encoding and length constraints enforcement
Given metadata including non-ASCII characters and overlength fields When writing ID3 and BWF/iXML tags Then:
- ID3 v2.4 text frames are UTF-8; ID3 v2.3 text frames are UTF-16 with BOM if non-ASCII, otherwise ISO-8859-1
- BWF bext fields respect max lengths (Description 256 bytes; Originator 32; OriginatorReference 32)
- On overflow, TagForge truncates bext without breaking UTF-8 code points and writes the full value to a fallback (ID3 TXXX:TrackCrate.Full.<FieldKey> or iXML <TRACKCRATE><FULL field="FieldKey">)
- A warning log is attached to the export report for each truncated field, including fallback location
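Truncating "without breaking UTF-8 code points" is the subtle part of the bext rule: cutting at a raw byte offset can split a multibyte character. A decode-with-ignore round trip drops only the trailing partial sequence (a sketch):

```python
def truncate_utf8(value: str, max_bytes: int) -> bytes:
    """Trim to a byte budget; a code point split by the cut is dropped whole.
    errors='ignore' is safe here because the bytes come from our own encode,
    so only the tail can be an incomplete sequence."""
    raw = value.encode("utf-8")
    return raw[:max_bytes].decode("utf-8", errors="ignore").encode("utf-8")
```

The result may be shorter than the budget by up to three bytes, never longer, and always decodes cleanly.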
Safe fallback for unmapped fields
Given a template mapping a field with no native target in the chosen format When TagForge writes tags Then:
- For ID3, write TXXX:TrackCrate.<FieldKey> with the original value
- For WAV/BWF, write iXML <TRACKCRATE><FIELD name="<FieldKey>">value</FIELD>
- Fallback keys are stable, ASCII-only, and included in the export manifest
- Round-trip read maps these fallbacks back to original TrackCrate fields with no data loss
Template inheritance on export and download
Given a release with a selected metadata template and track-level overrides When exporting or generating a download (shortlink or AutoKit) for any associated asset Then:
- Mapping precedence is Track Template > Release Template > Global Default
- The file contains tags according to the resolved template and target format (ID3 v2.3/v2.4 or BWF+iXML)
- If no template is resolved, the export is blocked with error META-TEMPLATE-NOT-FOUND
- TagForge read-back verification passes for the produced file
ID3 version-specific frame mapping
Given a user-selected ID3 version When exporting MP3/AIFF with ID3 tags Then:
- If v2.3: use TYER/TDAT/TIME for dates and IPLS for roles; do not write TDRC/TMCL/TIPL
- If v2.4: use TDRC for date/time and TMCL/TIPL for roles; do not write TYER/TDAT/TIME/IPLS
- Multi-value artists use the correct separator for the version (null separator for v2.4; single string or slash-separated per v2.3 constraints)
- Post-write read-back shows expected frames and absence of version-inappropriate frames
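The v2.3/v2.4 split is a pure mapping decision and easy to isolate from tag I/O. A sketch with a minimal field set (the function shape is hypothetical, and role credits are collapsed into one frame per version for brevity; real writing would go through a tagging library):

```python
def map_frames(meta: dict, version: int) -> dict:
    """Choose version-appropriate ID3 frames for date and role credits."""
    frames = {"TIT2": meta["title"]}
    if version == 4:
        frames["TDRC"] = meta["date"]            # v2.4: one ISO timestamp frame
        frames["TIPL"] = list(meta["roles"])     # v2.4: involved-people pairs
    elif version == 3:
        year, month, day = meta["date"].split("-")
        frames["TYER"] = year                    # v2.3 splits the date...
        frames["TDAT"] = day + month             # ...and TDAT is DDMM
        frames["IPLS"] = list(meta["roles"])     # v2.3: flat involved-people list
    else:
        raise ValueError(f"unsupported ID3 version: 2.{version}")
    return frames
```

Keeping the mapping side-effect-free makes the "absence of version-inappropriate frames" check a simple dictionary assertion in tests.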
Mapping completeness and round-trip verification
Given any export produced via a metadata template When TagForge completes tag writing Then:
- A read-back pass compares intended mappings to actual frames/chunks and fallbacks
- 100% of mapped fields are present in native targets or designated fallbacks
- Any mismatch fails the export with error META-MAP-MISMATCH and an attached diff report
Filename Standardization Engine
"As an audio engineer, I want to apply a consistent filename pattern across hundreds of stems so that collaborators and distributors can instantly recognize versions and contents."
Description

Implement a pattern-driven renaming engine that applies consistent, human-readable filenames across masters, alternates, and stems. Provide a tokenized pattern builder (e.g., {artist} - {title} ({version}) [{role}] {bpm} {key} {isrc}) with rules for casing, separators, zero-padding, track numbers, and optional suffixes (sample rate/bit depth/territory/explicit). Support transliteration and Unicode normalization, collision handling with deterministic disambiguation, preview/dry-run mode, and batch application on upload, bulk edit, and export. Enforce organization-wide patterns per label or project to eliminate mismatched names across versions and collaborators.

Acceptance Criteria
Pattern tokens resolve with optional suffixes and clean separators
Given an organization-wide filename pattern "{artist} - {title}{ (version)?} [{role}] {bpm} {key} {isrc}{ - {samplerate}?}{ - {bitdepth}?}{ - {territory}?}{ - EXPLICIT?}" And a master file with extension ".wav" and metadata: artist="Night Drive", title="Neon City", version=null, role="Master", bpm=122, key="Fm", isrc="US-ABC-24-00012", samplerate=null, bitdepth=null, territory=null, explicit=false When the renaming engine is applied Then the filename is exactly "Night Drive - Neon City [Master] 122 Fm US-ABC-24-00012.wav" And no dangling separators or extra spaces appear where optional values are empty And the original file extension is preserved in lowercase And whitespace is collapsed to single spaces and trimmed at both ends And the operation completes within 200ms per file for batches up to 1,000 files
Casing, separators, and zero-padding rules are applied consistently
Given project-level rules: casing=Title Case for {artist} and {title}, key=uppercase, isrc=uppercase; major separator=" - "; role wrapped in square brackets; track numbers zero-padded to width=2; illegal characters [\\/:*?"<>|] removed; repeated separators collapsed; max filename length (without extension) <= 200 characters And pattern "{track:00} - {artist} - {title}{ (version)?} [{role}] {bpm} {key} {isrc}" And a file with metadata: track=3, artist="the WEEKND", title="SAVE YOUR TEARS* (feat. Ariana Grande)", version="Radio Edit", role="Master", bpm=118, key="f#", isrc="us-um7-21-01234", ext=".wav" When the renaming engine is applied Then the filename is exactly "03 - The Weeknd - Save Your Tears (Radio Edit) [Master] 118 F# US-UM7-21-01234.wav" And the asterisk and illegal characters are removed, multiple spaces collapsed, and separators normalized And titles/artists are converted to Title Case using locale-insensitive rules And filenames longer than the max are truncated with a single ellipsis character before the extension without cutting inside a multibyte character
Transliteration and Unicode normalization produce portable filenames
Given normalization rules: Unicode normalized to NFC; diacritics removed for ASCII-only patterns; emojis and control characters stripped; language-agnostic transliteration enabled And pattern "{artist} - {title} [{role}]" And files:
- File A: artist="Beyoncé", title="Café Del Mar", role="Master", ext=".wav"
- File B: artist="Молчат Дома", title="Судно (Борис Рижий)", role="Instrumental", ext=".wav"
- File C: artist="Angèle ✨", title="Tout Oublier", role="Master", ext=".aiff"
When the renaming engine is applied with ASCII-only output required Then File A filename is exactly "Beyonce - Cafe Del Mar [Master].wav" And File B filename is exactly "Molchat Doma - Sudno (Boris Rizhiy) [Instrumental].wav" And File C filename is exactly "Angele - Tout Oublier [Master].aiff" And no combining marks remain; all filenames contain only [A-Za-z0-9 space - _ ( ) [ ]] plus the dot before extension
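For Latin-script inputs, diacritic stripping falls out of Unicode decomposition. A stdlib-only sketch (note: Cyrillic names like File B's need a real transliteration table, which this sketch does not attempt):

```python
import re
import unicodedata

def ascii_filename(artist: str, title: str, role: str, ext: str) -> str:
    """Decompose (NFKD), drop combining marks and any other non-ASCII,
    remove Windows-illegal characters, and collapse whitespace."""
    name = f"{artist} - {title} [{role}]"
    name = unicodedata.normalize("NFKD", name)
    name = name.encode("ascii", "ignore").decode("ascii")  # drops marks, emoji
    name = re.sub(r'[\\/:*?"<>|]', "", name)               # illegal on Windows
    name = re.sub(r"\s+", " ", name).strip()
    return f"{name}{ext.lower()}"
```

NFKD pulls "é" apart into "e" plus a combining acute, and the ASCII encode with `ignore` discards the mark, which is exactly the "no combining marks remain" requirement.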
Deterministic collision handling appends stable disambiguators
Given two assets with immutable IDs AID1="4f2a7c91-0d3e-4b9b-9c4e-12ab34cd56ef" and AID2="9c1b5e22-7a80-4a8f-8a3b-98ff12aa33bb" And both resolve to the same base filename "Night Drive - Neon City [Master] 122 Fm US-ABC-24-00012.wav" And the disambiguation rule: append "-d" plus the first 6 hex characters of the asset ID before the extension When the renaming engine processes both files in any order, with or without existing files present Then the resulting filenames are exactly "Night Drive - Neon City [Master] 122 Fm US-ABC-24-00012-d4f2a7c.wav" and "Night Drive - Neon City [Master] 122 Fm US-ABC-24-00012-d9c1b5e.wav" And re-running the engine produces the same names (idempotent), without appending additional disambiguators And if a natural suffix already ends with the same disambiguator, it is not duplicated And no collisions remain within the target folder after processing
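Deriving the suffix from the immutable asset ID makes the outcome independent of processing order and idempotent on re-runs; a sketch of the rule as stated:

```python
def disambiguate(base: str, asset_id: str, ext: str) -> str:
    """Append '-d' plus the first 6 hex characters of the asset ID, exactly once."""
    tag = "-d" + asset_id.replace("-", "")[:6].lower()
    if base.endswith(tag):            # re-run or natural suffix: do not duplicate
        return base + ext
    return base + tag + ext
```

Because the tag is a function of stable input rather than a counter, no coordination between workers is needed to avoid collisions.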
Preview (dry-run) shows exact changes without modifying files
Given a batch of 250 files with mixed metadata completeness And dry-run mode is enabled When the renaming engine is executed in dry-run Then no filenames on disk or in storage are changed And a report is produced within 5 seconds containing for each item: original name, proposed name, reason codes (e.g., casing, separators, optional omission, disambiguation), and warnings for missing tokens And the report can be exported as CSV and JSON with identical counts and item order And toggling from dry-run to live run produces the same proposed names applied as actual names
Batch application on upload, bulk edit, and export
Given organization-level enforcement is enabled When files are uploaded via web, API, or drag-and-drop Then the filename standardization is applied at ingest before link sharing, and the UI shows the final name within 2 seconds of upload completion When a user performs Bulk Edit that changes any token values (e.g., version, role, bpm) Then affected filenames are recomputed and updated atomically, with progress and a success/failure count summary When exporting a selection as a zip or to cloud storage Then export filenames reflect the latest pattern and disambiguation, independent of original source names
Organization/label/project pattern enforcement and permissions
Given patterns can be set at Organization, Label, and Project levels And precedence is Project > Label > Organization And only users with Admin or Editor role at the respective scope can modify the pattern When a project has its own pattern defined Then all renames within that project use the project pattern regardless of label/org settings When a project has no pattern but its label does Then the label pattern is used; otherwise the org pattern is used When a pattern is changed Then existing asset filenames in-scope are re-evaluated and updated in a single migration job with audit logs capturing who changed the pattern, when, and how many files were affected
Batch Tagging & Credit Propagation
"As a producer, I want to bulk apply core metadata to all assets with per-stem overrides so that I don’t have to tag each file manually while keeping credits accurate."
Description

Enable bulk metadata editing that propagates core fields from a parent track/release to selected files with per-stem overrides. Support role-based credits (performer, producer, mixer), composer/publisher splits, codes (ISRC/ISWC/UPC), and contact details, writing to appropriate ID3 frames (TMCL/TIPL, TSRC, TXXX) and BWF bext/iXML. Provide multi-select editing, CSV import/export, and rules for inheritance vs. override with conflict resolution previews. Parse and merge existing embedded tags to pre-fill fields, minimizing manual entry while ensuring each stem carries accurate, role-specific metadata.

Acceptance Criteria
Propagate Parent Metadata with Per‑Stem Overrides and Preview
- Given a parent track with populated core fields and 12 selected stems, When the user clicks Propagate Core Fields and confirms the preview, Then all 12 stems inherit parent values for non-overridden fields and retain any explicit per-stem overrides.
- Given stem-level overrides for any field, When propagation runs, Then only non-overridden fields are updated and overridden fields remain unchanged.
- Given the preview dialog, When displayed, Then it shows per-file diffs of changed fields, a conflict count, and requires explicit Confirm before write.
- Given a batch of 100 files, When propagation is confirmed, Then the write completes within 5 seconds and returns a per-file success/failure summary.
Role-Based Credits Mapped to ID3 (TMCL/TIPL) with Round‑Trip Verification
- Given role-based credits (Performer, Producer, Mixer) for a track, When writing to MP3 with ID3v2.4, Then Performer credits are serialized in TMCL and Producer/Mixer in TIPL as role–name pairs with correct multi-value formatting and UTF-8 encoding.
- Given an existing file written by TagForge, When re-opened, Then the displayed credits exactly match the previously written values (round-trip, no loss or duplication).
- Given a target of ID3v2.3, When writing credits, Then roles are serialized in IPLS and any non-mappable roles are preserved in TXXX:ROLE entries.
Write and Verify BWF bext and iXML for WAV/AIFF Stems
- Given WAV/AIFF stems with metadata, When saving, Then BWF bext fields Description, Originator, OriginatorReference, and OriginationDate/Time are populated and existing CodingHistory is preserved.
- Given role-based credits and project details, When saving, Then an iXML chunk is written containing program name, project, track title, and a structured role list; all strings are UTF-8 and chunk sizes/ordering remain valid.
- Given saved files, When validated with a BWF/iXML validator, Then the files pass validation and PCM audio checksums/duration are unchanged (bit-identical audio).
Composer/Publisher Splits and Codes Validation and Embedding
- Given composer and publisher entries with percentages, When saving, Then each work's splits must sum to 100.00% with up to two decimal places or the save is blocked with inline errors.
- Given ISRC, ISWC, and UPC values, When validating, Then ISRC matches [A-Z]{2}[A-Z0-9]{3}\d{7}, ISWC matches T\d{9}[0-9], and UPC is 12 digits; invalid codes prevent save and are highlighted.
- Given valid codes, When saving to ID3, Then ISRC is written to TSRC and ISWC/UPC are written to TXXX frames with keys ISWC and UPC; duplicates are merged and round-trip displays the same values.
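The "sum to exactly 100.00 with up to two decimal places" rule is a classic float trap (three binary floats of 33.33 do not sum cleanly); Decimal over string inputs keeps it exact. A sketch:

```python
from decimal import Decimal

def validate_splits(splits: dict):
    """Check precision (<= 2 decimal places) and an exact 100.00 total."""
    total = Decimal("0")
    for party, share in splits.items():
        d = Decimal(str(share))
        if -d.as_tuple().exponent > 2:         # e.g. 33.333: too precise
            return False, f"{party}: more than 2 decimal places ({share})"
        total += d
    if total != Decimal("100.00"):
        return False, f"splits sum to {total}, not 100.00"
    return True, "ok"
```

Returning the reason string alongside the verdict supports the "blocked with inline errors" behavior the criterion asks for.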
CSV Import/Export Round‑Trip for Batch Metadata
- Given 50 selected files, When exporting CSV, Then the file contains one row per file and columns for file_id, path, filename, inherited fields, overrides, roles (pipe-delimited), codes, and splits, with a single header row.
- Given the exported CSV, When re-imported without edits in dry-run mode, Then zero changes are reported and no writes occur.
- Given edited CSV values, When importing with commit, Then only changed fields are updated; invalid rows are rejected with line numbers and reasons; a per-row status report is produced.
Conflict Resolution Preview and Inheritance Rules
- Given selected files with differing embedded values, When opening multi-select edit, Then fields with discrepancies are labeled Mixed and proposed parent values are shown per inheritance rules.
- Given a field-level decision (Inherit Parent, Keep Existing, Custom), When applied, Then the choice is applied to all or per-file as selected and per-stem overrides are visibly marked.
- Given unresolved conflicts, When attempting to commit, Then the system blocks save and highlights required resolutions; upon commit, an audit summary lists decisions and counts.
Pre-Fill from Existing Embedded Tags across Formats
- Given a mixed selection of MP3, WAV (BWF), AIFF, and FLAC files, When loaded, Then TagForge parses ID3v2.x (MP3), BWF bext/iXML (WAV/AIFF), and Vorbis comments (FLAC) and pre-fills the corresponding editor fields.
- Given multiple potential sources, When pre-filling, Then precedence is per-stem override > CSV import > parent values > embedded tags > defaults; unmapped tags appear in an Advanced panel.
- Given 200 files, When parsing begins, Then pre-fill completes within 3 seconds; unreadable tags generate non-blocking warnings and are skipped.
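The stated precedence chain is a first-non-empty lookup across ordered source layers. A sketch (layer names are illustrative):

```python
# Highest precedence first, matching the chain stated above.
PRECEDENCE = ["stem_override", "csv_import", "parent", "embedded", "defaults"]

def resolve(field, sources):
    """Return the field value from the highest-precedence source that defines it."""
    for layer in PRECEDENCE:
        value = sources.get(layer, {}).get(field)
        if value not in (None, ""):
            return value
    return None
```

Anything left over in a source that no editor field maps to would land in the Advanced panel.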
Metadata Validation & Compliance Rules
"As a distro operations specialist, I want automated validation with a preflight report so that I can catch and fix metadata issues before delivery causes rejections."
Description

Add a validation layer that preflights files before write/export, enforcing required fields, format-specific constraints, and business rules. Validate codes (ISRC format, optional ISWC, UPC/EAN), splits sum to 100%, credit roles from a controlled dictionary, contact formats, frame length limits, ID3 version compatibility, and BWF bext field sizes. Detect duplicate codes, prohibited characters, and mismatches between header properties and declared attributes. Surface errors vs. warnings with actionable fixes, a downloadable report per batch, and blockers to prevent exporting non-compliant files.

Acceptance Criteria
Code Format Validation and Duplicate Detection
Given a batch of files with declared ISRC/ISWC and release-level UPC/EAN When preflight validation runs before write/export Then each ISRC must match ^[A-Z]{2}[A-Z0-9]{3}\d{7}$ and be exactly 12 characters (error if invalid) And each UPC must be 12 digits with a valid check digit or each EAN must be 13 digits with a valid check digit (error if invalid) And ISWC, when provided, must match ^T-\d{9}-\d$ (error if invalid; warning if missing) And codes marked as required by the selected export profile must be present (error if missing) And any duplicate ISRC/UPC/EAN within the batch or across versions of the same track are flagged as errors with references to conflicting files
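The UPC/EAN check digit referenced above is the standard GS1 mod-10 computation: weights 3 and 1 alternate starting from the digit nearest the check digit. A sketch:

```python
def gs1_check_digit_ok(code):
    """Validate the check digit of a 12-digit UPC-A or 13-digit EAN-13."""
    if not (code.isdigit() and len(code) in (12, 13)):
        return False
    digits = [int(c) for c in code]
    body, check = digits[:-1], digits[-1]
    # Weight 3, 1, 3, ... starting from the digit nearest the check digit.
    total = sum(d * (3 if i % 2 == 0 else 1) for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10 == check
```

The same routine covers both lengths because EAN-13 simply extends the weighted body by one digit.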
Splits, Roles, and Contact Format Validation
Given a track with contributor splits, roles, and contacts When preflight validation runs Then the sum of all contributor royalty splits must equal 100.00% with a tolerance of ±0.01% (error if outside tolerance) And no individual split may be negative or exceed 100% (error) And each contributor must have at least one role from the controlled dictionary (error if role is missing or not in dictionary) And duplicate identical role assignments for the same contributor are flagged (warning) And contributor email, if required, must be a valid RFC 5322 email format (error if invalid) And phone numbers, when provided, must be E.164 formatted (error if invalid)
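Because the tolerance is ±0.01%, the splits check should avoid binary floats; `Decimal` keeps the comparison exact. A sketch of the rule above:

```python
from decimal import Decimal

def splits_ok(splits):
    """Splits (as strings, e.g. "33.33") must sum to 100.00% within +/-0.01%,
    with no individual split negative or above 100%."""
    values = [Decimal(s) for s in splits]
    if any(v < 0 or v > 100 for v in values):
        return False
    return abs(sum(values) - Decimal("100")) <= Decimal("0.01")
```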
ID3 Compatibility and Frame Size Constraints
Given an export target specifying ID3 v2.3 or v2.4 When preflight validation runs Then only frames supported by the selected ID3 version are written (warning listing unmapped/unsupported frames) And text encodings must conform: ID3v2.3 uses ISO-8859-1 or UTF-16; ID3v2.4 may use UTF-8/UTF-16/ISO-8859-1 (error if encoding invalid for version) And any single frame payload exceeding the spec-defined maximum size for the selected version is an error And control characters (except TAB/CR/LF) and NUL bytes are prohibited in text frames (error) And TXXX/WXXX descriptions must not be empty when values are present (error)
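The prohibited-character rule for text frames can be expressed as a small predicate (a sketch; actual frame writing also depends on the encoding chosen for the ID3 version):

```python
# Control characters allowed in text frames per the rule above.
ALLOWED_CONTROLS = {"\t", "\r", "\n"}

def text_frame_ok(value):
    """Reject NUL bytes and control characters other than TAB/CR/LF."""
    return not any(
        ch not in ALLOWED_CONTROLS and (ord(ch) < 0x20 or ord(ch) == 0x7F)
        for ch in value
    )
```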
BWF BEXT Compliance and Header-Attribute Consistency
Given a WAV/BWF file with declared TrackCrate attributes When preflight validation runs Then bext fields must respect size limits: Description ≤ 256 bytes, Originator ≤ 32 bytes, OriginatorReference ≤ 32 bytes, UMID ≤ 64 bytes (error if exceeded) And bext string fields must be ASCII printable (0x20–0x7E); CodingHistory may include CR/LF (error if other control characters present) And TimeReference must be a non-negative integer; OriginationDate (YYYY-MM-DD) and OriginationTime (HH:MM:SS) must be valid (error if invalid) And file header properties (sample rate, bit depth, channels, duration) must match declared attributes (error on mismatch) And LIST-INFO/iXML fields that conflict with declared attributes are flagged (error)
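The bext size and character constraints above translate directly into byte-length and printable-ASCII checks. A dict-based sketch, not actual chunk I/O (CodingHistory is excluded because it additionally permits CR/LF):

```python
# Byte limits taken from the criterion above.
BEXT_LIMITS = {"Description": 256, "Originator": 32, "OriginatorReference": 32}

def bext_field_errors(fields):
    """Return names of bext string fields that exceed their byte limit
    or contain characters outside printable ASCII (0x20-0x7E)."""
    bad = []
    for name, value in fields.items():
        if not all(0x20 <= ord(ch) <= 0x7E for ch in value):
            bad.append(name)
        elif name in BEXT_LIMITS and len(value.encode("ascii")) > BEXT_LIMITS[name]:
            bad.append(name)
    return bad
```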
Preflight Error/Warning Handling and Export Blocking
Given the user initiates write/export for a selection of files When preflight completes Then errors and warnings are displayed per file with rule ID, field path, and actionable fix text And any file with one or more errors cannot be exported; its export action is disabled and batch export excludes it And files with only warnings remain exportable And correcting a flagged field and re-running preflight removes the corresponding issue And the UI displays total counts of errors and warnings at batch and file levels
Batch Validation Report Generation and Contents
Given a completed preflight on a batch of files When the user downloads the validation report Then a CSV and a JSON report are available for download And each issue entry includes: file name, track/release identifier, severity (Error/Warning), rule ID, field path, message, and suggested fix And the report includes a batch summary of total files, files with errors, files with warnings, and total issues And the report content matches the on-screen issues for the same preflight run
Prohibited Characters and Filename–Metadata Alignment
Given files to be exported with standardized filenames When preflight validation runs Then filenames are validated against the configured naming template and disallowed characters set (error if violated) And a proposed sanitized filename is provided for any violation And filename tokens (e.g., ISRC, Version, Mix) must match corresponding metadata fields (error on mismatch) And filenames must not exceed 255 bytes per path segment (error if exceeded)
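A sanitized-filename proposal like the one required above can be sketched as a substitution plus a byte-length trim (the disallowed-character set shown is illustrative; per the criteria, the real set and naming template are configurable):

```python
import re

# Illustrative disallowed set: reserved path characters and ASCII controls.
DISALLOWED = re.compile(r'[<>:"/\\|?*\x00-\x1f]')

def sanitize_segment(name, replacement="_"):
    """Propose a sanitized path segment and enforce the 255-byte limit."""
    clean = DISALLOWED.sub(replacement, name)
    while len(clean.encode("utf-8")) > 255:
        clean = clean[:-1]  # trim whole characters, never split a UTF-8 sequence
    return clean
```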
Presets, Rules, and Auto-Fill
"As a label owner, I want presets and auto-fill rules so that new releases and stems inherit clean, complete metadata by default without repetitive data entry."
Description

Provide reusable presets per label/project that auto-fill common fields and apply transformation rules. Support ISRC auto-generation from assigned prefixes, default credit role mappings, contact details from label profiles, default ID3 version selection, casing rules, and intelligent field derivations (e.g., versions, clean/explicit flags, release dates). Allow scheduling of auto-tag on upload and triggering via API/CLI, with variable tokens and conditions. Include preset governance (owner, visibility, change history) to keep outputs consistent across teams and releases.

Acceptance Criteria
Apply Label-Level Preset on Upload
Given a label has a default TagForge preset with auto-tag-on-upload enabled and scoped label-wide And the preset defines: ID3 version=v2.4, filename casing=Title Case, and default publisher contact from the label profile When a user uploads a new MP3 or WAV to any project under that label via the web uploader Then the system applies the preset automatically within 60 seconds of upload completion And embedded tags are written using ID3v2.4 for MP3 and BWF bext/iXML for WAV per preset And filenames are transformed to Title Case without changing extensions or path depth And the UI shows an "Auto-tag applied" badge and a summary of fields changed And an immutable audit log records file ID, preset ID and version, fields changed, timestamp, and actor=system
ISRC Auto-Generation from Assigned Prefix
Given the label has an assigned ISRC prefix configured in TagForge and ISRC auto-generation is enabled in the preset And the preset counter policy is set to YY-NNNNN with zero-padded sequence per calendar year When a track without an ISRC is processed by the preset Then the system generates an ISRC matching the pattern <PREFIX>-<YY>-<NNNNN> And ensures uniqueness across the label by atomically reserving the next available sequence And logs the generated ISRC and sequence in the audit trail And if a track already contains a valid ISRC, the system does not overwrite it And if a collision is detected, the system retries by incrementing the counter up to 10 attempts and surfaces a failure if none are available
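The reservation loop can be sketched in memory; a real implementation must reserve the sequence atomically in the datastore, and the prefix/year inputs here are illustrative:

```python
def next_isrc(prefix, year, used, attempts=10):
    """Generate <PREFIX>-<YY>-<NNNNN>, retrying on collision up to `attempts` times."""
    seq = len(used) + 1
    for _ in range(attempts):
        candidate = f"{prefix}-{year % 100:02d}-{seq:05d}"
        if candidate not in used:
            used.add(candidate)
            return candidate
        seq += 1  # collision: increment the counter and retry
    raise RuntimeError("no free ISRC sequence within retry budget")
```

Raising after the retry budget is exhausted corresponds to the surfaced failure in the criterion above.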
Credit Role Mapping and Contact Auto-Fill
Given a preset defines role mappings (e.g., Producer -> TIPL:producer, Mixer -> TIPL:mixer, Mastering Engineer -> TIPL:engineer) And the label profile stores default contact info (email, phone, website) and publisher/PRO identifiers When the preset is applied to a file with contributor names but non-standard role labels Then roles are normalized to the preset's mapping and embedded into standard tag frames (ID3 TIPL/TMCL; BWF iXML person elements) And missing contact fields are auto-filled from the label profile for publisher/owner where applicable And no existing non-empty contributor fields are overwritten without an explicit override flag in the preset And validation fails with a clear error if required credits are missing per preset policy (e.g., at least one primary artist and one writer)
ID3/BWF Version Selection and Casing Rules Enforcement
Given a preset specifies default tag container versions (ID3v2.3 or v2.4 for MP3; BWF bext+iXML for WAV) and filename casing rules When the preset is applied to a batch containing MP3 and WAV files Then MP3 files are written using the specified ID3 version with compatible frames (e.g., use TXXX for unsupported frames in v2.3) And WAV files include bext description/originator fields and iXML with credits per mapping And filename casing rules are applied deterministically (e.g., keep acronyms in ALL CAPS, lowercase connector words) as configured And a dry-run mode shows the exact filename/tag changes without committing, when dry_run=true And the system rejects write operations if a file is read-only, surfacing a "permission denied" error and skipping to the next item
Intelligent Derivation of Version and Explicit/Clean Flags
Given a preset defines derivation rules using variable tokens and conditions (e.g., version from parentheses, explicit flag from [Explicit]/[Clean], release_date from project) When files named "Artist - Title (Radio Edit) [Clean].wav" and "Artist - Title (Remix) [Explicit].mp3" are processed Then the system sets version=Radio Edit and explicit=false for the first, and version=Remix and explicit=true for the second And derives release_date from the linked project if not present on the asset, formatted as YYYY-MM-DD And derivations do not override fields already set unless override=true in the preset And a rules trace is stored per file listing which conditions matched and which fields were derived
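The parenthesis/bracket derivations in the example filenames reduce to two regex captures. A minimal sketch:

```python
import re

def derive_from_filename(stem):
    """Derive version from "( ... )" and the explicit flag from "[Explicit]"/"[Clean]"."""
    version = re.search(r"\(([^)]+)\)", stem)
    flag = re.search(r"\[(Explicit|Clean)\]", stem, re.IGNORECASE)
    return {
        "version": version.group(1) if version else None,
        "explicit": flag.group(1).lower() == "explicit" if flag else None,
    }
```

Returning `None` for absent tokens lets the caller apply the no-override rule and record which conditions matched in the rules trace.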
API/CLI Triggered Auto-Tag with Variable Tokens
Given an authenticated client calls POST /tagforge/apply or runs tagforge apply --preset <id> --files <glob> with variables {release_date, project_code, track_number} And the preset contains filename pattern "{artist} - {title} ({version}) [{project_code}]" When the request is submitted for 100 files with dry_run=false Then the job is accepted and a job ID is returned within 2 seconds And processing completes within 2 minutes, with per-file statuses available via GET /jobs/{id} And variable tokens are substituted in both filename and tag fields; missing required variables cause HTTP 422 with details And re-running the same job with the same inputs is idempotent and does not duplicate ISRCs or append duplicate credits
Preset Governance: Ownership, Visibility, and Change History
Given presets have owner, visibility (private, project, label), and versioned change history When a non-owner without admin rights attempts to edit a label-visible preset Then the request is rejected with HTTP 403 and no changes are recorded And owners/admins can publish a new preset version with a change note, which increments the version and snapshots all rules And any auto-tag run records the exact preset version used; reverting a preset restores the prior rules without altering past audit logs And visibility can be promoted (private -> project -> label) only by owners/admins, with promotion recorded in history
Audit Trail & Reversible Tag Writes
"As a project lead, I want a full audit trail with the ability to roll back tag changes so that I can maintain compliance and undo mistakes quickly."
Description

Record every metadata and filename change with before/after diffs, user/time attribution, and file checksums. Store write outcomes per file (success, warnings, errors) and the exact frames/chunks written. Enable one-click rollback to prior states and lock protection for approved masters. Provide exportable manifests (JSON/CSV) summarizing final tags and filenames for delivery packets and compliance audits, ensuring transparency and recoverability across collaborative workflows.

Acceptance Criteria
Per-Change Audit Log with Diffs and Attribution
Given a user modifies metadata fields or renames a file via UI or API When the change is saved Then an immutable audit record is created per affected file containing: file ID, pre-change tag/filename snapshot, post-change snapshot, field-level diff, acting user ID, action source (UI/API), request IP, UTC ISO 8601 timestamp with ms precision, pre- and post-write SHA-256 checksums, and batch ID if applicable And the record is retrievable via audit API by file ID or batch ID within 1 second And the field-level diff enumerates added/updated/removed fields with old/new values, including empty-to-value and value-to-empty transitions
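The field-level diff described above, covering added, removed, and updated fields including empty-to-value transitions, can be sketched as:

```python
def field_diff(before, after):
    """Enumerate added/updated/removed fields between two tag snapshots."""
    keys = set(before) | set(after)
    return {
        "added": {k: after[k] for k in keys if k not in before},
        "removed": {k: before[k] for k in keys if k not in after},
        "updated": {k: (before[k], after[k]) for k in keys
                    if k in before and k in after and before[k] != after[k]},
    }
```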
Write Outcomes with Exact Tag Frames/Chunks
Given TagForge writes tags to audio assets When the write operation completes per file Then the system stores the outcome as one of: success, warning, error And it stores a list of tag elements written/updated/deleted with exact identifiers (ID3 frame IDs with version e.g., TPE1 v2.4, TXXX keys; RIFF/BWF chunk names e.g., bext, iXML, INFO), operation type, and byte length written And warnings include standardized codes and messages; errors include code, message, and no file content change (pre/post checksum equal) And outcomes are visible in UI and retrievable via API for each file
One-Click Rollback to Prior Snapshot
Given a file has at least one prior audited snapshot When an authorized user selects a snapshot and confirms rollback Then the system restores tags and filename to exactly match that snapshot and writes to disk And it computes a new checksum and verifies restored fields equal the snapshot values field-for-field And it creates a new audit entry labeled rollback linking to the source snapshot and recording the outcome And the operation completes within 5 seconds per file for files ≤200 MB And if the file is locked, the rollback is blocked with a permission error and an audit entry of the blocked attempt
Master Lock Protection
Given a file is marked as Approved Master and locked When any user attempts to edit tags, rename, or rollback via UI or API Then the system prevents the write, displays "Locked by {user} on {UTC timestamp}" in UI, and returns HTTP 423 (Locked) via API And it records an audit entry of the blocked attempt including actor, attempted fields, and timestamp And only users with role Release Manager or Owner can unlock; unlocking requires a reason note, secondary confirmation, and is itself audited
Exportable Delivery Manifests (JSON/CSV)
Given a user exports a manifest for a finalized selection or release When exporting to JSON or CSV Then the manifest contains one row/object per file with: filename, file ID, path, final tag fields (Title, Artist(s), Album, ISRC, splits/percentages, rights holder, contact email), checksum (SHA-256), and last successful write batch ID and UTC timestamp And JSON validates against schema tagforge.manifest.v1; CSV contains a fixed, documented column set in UTF-8 with BOM And manifest contents match the current stored state exactly for all selected files And the export begins within 2 seconds and completes within 10 seconds for up to 500 files And the generated manifest is downloadable and stored against the release with a unique URL and an audit entry
Batch Edits with Mixed Outcomes and Recoverability
Given a batch update is executed across multiple files When some files succeed, some warn, and some fail Then each file has its own audit entry with precise outcome and element-level details, all linked by a unique batch ID And a batch summary shows counts by outcome and is available via UI/API within 1 second after completion And failed files remain unchanged (pre/post checksum equal), successful files reflect the new tags/filenames, and warned files reflect changes with warnings recorded And rollback can be executed per file or for the entire batch to the pre-batch snapshot And exporting manifests can include only successful files without blocking on failures

LinkHealer

Continuously checks for broken or moved asset links, auto‑relocates files in connected drives, and prompts owners when action is needed. Seamlessly re‑links and updates previews and shortlinks so press pages and reviewer access never break mid‑campaign.

Requirements

Continuous Link Health Monitor
"As a label manager, I want continuous monitoring of all asset links so that press pages and reviewer access don’t break during a campaign."
Description

Implements a continuous, low-latency monitor that validates the health of all asset references used by TrackCrate (stems, artwork, press kits, previews, watermarked downloads). Uses provider-specific APIs and stable file identifiers to detect moved, renamed, deleted, and permission-changed items across connected drives (e.g., Google Drive, Dropbox, S3) and 404s on external URLs. Categorizes issues by type and severity, emits events to the platform’s event bus, and maintains a normalized link state for each asset. Proactively detects problems before links are consumed by shortlinks, AutoKit pages, or private stem players, reducing campaign downtime.

Acceptance Criteria
Moved or Renamed File Detected via Provider APIs
Given an asset tracked by TrackCrate with a stored stable provider identifier When the underlying file is moved to a different folder or renamed on Google Drive, Dropbox, or S3 Then the monitor detects the change within p95 ≤ 2 minutes of provider change timestamp and p99 ≤ 5 minutes And updates the normalized link state to Moved or Renamed, capturing new path/URL and provider revision And publishes a LinkStateChanged event with asset_id, provider, change_type, old_location, new_location, timestamp, severity, correlation_id, and dedup_key And internal references (shortlinks, AutoKit pages, previews, stem player) resolve to the new location with 0 synthetic 4xx/5xx within 5 minutes post-detection And the change is recorded once (idempotent) with a history entry containing before/after values
Deleted or Missing Asset Classified and Contained
Given an asset reference used by TrackCrate When the underlying item is deleted, trashed, or returns 404/410 from the provider Then the monitor detects the condition within p95 ≤ 2 minutes and p99 ≤ 5 minutes And sets normalized link state to Deleted (provider-backed) or Unreachable (external URL) And classifies severity as Critical when the asset is attached to an active campaign or has >100 unique shortlink clicks in the last 24 hours; High when scheduled for release within 7 days; Medium otherwise; Low for archived items And publishes a LinkStateChanged event with severity and remediation_hint And shortlinks/AutoKit/stem player do not surface a broken asset; a fallback experience is served with HTTP 200 and clear messaging to authenticated users; bots receive a 404 for removed assets And the owner is notified via in-app notification and email within 2 minutes of classification
Permission Change Triggers Owner Prompt and Safe Fallback
Given an asset accessible to TrackCrate’s service identities When provider permissions change such that access results in 401/403 Then the monitor detects and sets normalized link state to PermissionChanged within p95 ≤ 2 minutes And publishes a LinkStateChanged event including required fields plus required_scopes/permissions_diff And creates an owner task with a direct provider link and step-by-step remediation, with reminders every 24 hours until resolved And end-user surfaces provide a safe fallback (no private asset leakage), returning 200 with placeholder for authenticated users and 403 for unauthenticated direct asset hits And clearing the permission issue automatically restores Healthy state and closes the task within 1 minute of successful access
External URL Health Monitoring and Backoff
Given a TrackCrate asset that references an external URL (non-provider) When the URL returns 4xx/5xx or times out (>5s) for two consecutive probes Then the monitor marks the asset Unreachable with cause_code and last_error_at And probes use exponential backoff with jitter and domain-level rate limits (≤2 RPS/domain, respect robots.txt where applicable) And a 7-day rolling uptime percentage is computed and exposed via metrics API And AutoKit hides or badges the affected tile by default unless an override is set And a LinkStateChanged event is emitted with deduplication for identical consecutive failures And recovery (two consecutive 200/206 responses) restores Healthy and emits a recovery event
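The probing policy above combines exponential backoff with jitter; one common form is "full jitter," where each delay is drawn uniformly from [0, min(cap, base * 2^n)]. A sketch with illustrative parameter values:

```python
import random

def backoff_schedule(base=1.0, cap=300.0, attempts=6, seed=None):
    """Full-jitter exponential backoff delays, in seconds."""
    rng = random.Random(seed)
    return [rng.uniform(0, min(cap, base * 2 ** n)) for n in range(attempts)]
```

Jitter spreads retries out so many failing URLs on one domain do not probe in lockstep, which also helps stay under the per-domain rate limit.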
Normalized Link State Lifecycle and API
Given any asset tracked by the monitor When state transitions occur due to detected conditions Then states are limited to {Healthy, Moved, Renamed, Deleted, PermissionChanged, Unreachable, Relinked} And illegal transitions are rejected and logged; valid transitions update updated_at, cause, provider_evidence, and version And the Link State Read API returns the canonical state in ≤200 ms p95 and includes last_checked_at and severity And state and history persist across restarts, with exactly-once history entries per dedup_key And concurrent detections resolve via provider timestamp last-write-wins with conflict telemetry
Auto-Relocate and Seamless Re-link Across Endpoints
Given a moved/renamed asset on a provider that supports stable identifiers mapping to new locations When the monitor confirms the new location Then internal references used by previews, watermarked downloads, shortlinks, and stem player are updated within 2 minutes p95 And a Relinked state is set with previous_location recorded and a backward-compatible redirect for shortlinks is in place for ≥7 days And synthetic checks across all endpoints return 200 and correct content hash/length within 5 minutes of relink And if auto-relocate is not possible (e.g., cross-provider move), an owner prompt is created within 2 minutes with required actions, and no user-facing 4xx/5xx are observed due to pre-consume detection And an audit log captures who/what performed the relink and the event is published with relink_method=auto|manual
Auto-Relocation and Safe Re-linking
"As an artist, I want files I reorganize in my drive to be automatically re-linked in TrackCrate so that I don’t have to manually fix broken references."
Description

Automatically resolves and re-links assets when files are moved or renamed inside connected drives by leveraging provider file IDs, revision metadata, and content hashes. Updates the canonical asset record and all internal references, including preview/transcode sources, watermark pipelines, and rights metadata associations. Preserves version history, verifies checksums to avoid mismatches, and executes in a transactional manner to prevent partial updates. Rolls back on failure and queues follow-up tasks for regeneration of previews and expiring download links.

Acceptance Criteria
In-Provider Move With Unchanged Content
Given an asset with a canonical record storing provider file ID FID and content hash H When the underlying file is moved to a different folder within the same connected drive without content modification Then LinkHealer detects the change and resolves the new path using FID within 60 seconds of change detection And verifies the current content hash equals H and revision lineage matches the original file And updates the canonical asset record and all internal references (preview/transcode sources, watermark source, rights metadata associations, shortlink targets) atomically in a single transaction And existing press pages and shortlinks return HTTP 200 with the correct asset before, during, and after the update with zero 4xx/5xx responses observed over a 5-minute window And the asset’s version history remains intact and unchanged And an audit log entry is recorded with old path, new path, FID, H, timestamp, and actor "LinkHealer"
In-Provider Rename With Unchanged Content
Given an asset with canonical record storing provider file ID FID and content hash H When the file name is changed in the provider without modifying file content or location Then LinkHealer updates the canonical asset record and all internal references to reflect the new name within 60 seconds of change detection And content hash remains H and version history is preserved And rights metadata associations remain unchanged And press pages and shortlinks continue to resolve with HTTP 200 and return the correct asset throughout And follow-up tasks for preview validation/regeneration and expiring link re-signing are enqueued (deduplicated) and complete successfully without altering media content
Checksum Mismatch During Relocation
Given an asset with stored content hash H and provider file ID FID When a move or rename is detected but the provider reports a content hash H' that does not match H Then LinkHealer aborts the auto re-linking attempt and rolls back any tentative changes And all internal references and the canonical asset record remain exactly as before the attempt And a "ChecksumMismatch" error is written to the audit log with FID, H, H', paths, timestamp, and correlation/transaction ID And the asset is flagged as Needs Attention and owners are notified for manual resolution And press pages and shortlinks continue to serve the last verified content with HTTP 200
Transactional Update and Rollback on Partial Failure
Given a simulated failure occurs during internal reference updating (e.g., preview source update throws an error) When LinkHealer executes an auto relocation for the asset Then the operation is wrapped in a transaction And upon failure, all changes are rolled back, leaving the canonical asset record and references identical to their pre-operation state And a retry is scheduled with exponential backoff up to 3 attempts And an audit log entry is recorded with failure reason, stack trace reference, and transaction ID And press pages and shortlinks experience no downtime (zero 4xx/5xx responses) during the attempt
Follow-up Regeneration and Link Continuity
Given an asset has been successfully re-linked with unchanged content When follow-up tasks are processed Then preview/transcode regeneration or validation jobs are enqueued with the correct asset ID and new path and begin within 2 minutes And all active expiring download links are re-signed to the new path while remaining valid until their original expiry; new tokens deliver the identical content (hash H) And watermark pipelines reference the new source and produce valid outputs; preview playback and watermarked downloads return HTTP 200 and match prior media duration, format, and bitrate And all tasks complete successfully or are retried per policy until success or explicit failure recorded; task outcomes are visible in job logs
Bulk Move/Rename and Concurrency Safety
Given 100 assets within a provider folder are moved and/or renamed within a 60-second interval When LinkHealer processes the resulting provider events Then each asset is re-linked to its correct final location using provider file IDs without any cross-linking between assets And per-asset updates are atomic and isolated; no partial updates are observed And processing completes for all 100 assets within 10 minutes from first change detection And if an asset experiences a rename followed by a move within 30 seconds, events are coalesced and applied as a single transaction yielding the provider's final path And press pages and shortlinks maintain 100% availability (no 4xx/5xx) for all affected assets during processing
Shortlink and Press Page Propagation
"As a publicist, I want shortlinks and press pages to stay accurate after assets move so that reviewers always land on a working page."
Description

Propagates asset re-link updates to all dependent resources: trackable shortlinks, AutoKit press pages, Open Graph/Twitter Card metadata, CDN caches, and embedded private stem players. Maintains stable slugs and analytics continuity while refreshing targets and signed URLs. Performs fast cache invalidation and background prefetch to minimize downtime, ensuring reviewers always land on a working page during active campaigns.

Acceptance Criteria
Stable Slugs and Analytics Continuity After Asset Re-link
Given an existing shortlink and AutoKit press page with accumulated analytics When LinkHealer re-links one or more assets due to move/rename across connected drives Then the shortlink slug and press page URL remain exactly unchanged And the resource IDs for the shortlink and press page remain unchanged And historical analytics (clicks, visits, plays, downloads) remain attributed to the existing resource IDs with no reset And new events after the re-link are appended to the same IDs and totals And no duplicate resources are created in the analytics store
Open Graph and Social Card Metadata Refresh
Given a press page whose title, description, and artwork are derived from asset metadata When an asset is re-linked or its metadata changes Then the page's Open Graph and Twitter Card tags reflect the latest title/description/image within 2 minutes And the HTML response for the press page shows updated meta tag content values And image URLs in meta tags include a cache-busting version parameter linked to the new asset digest And the page returns HTTP 200 and Cache-Control headers that allow immediate re-scrape by major social bots
CDN Cache Invalidation and Prefetch Warmup
Given dependent resources (press pages, artwork thumbnails, audio previews) have cached variants on the CDN When LinkHealer updates target URLs or regenerates derived assets Then the CDN purges affected cache keys within 60 seconds of the update event And background prefetch warms the new URLs so the next request in each primary region is served as cache HIT And time-to-first-byte for the first post-update request is under 300 ms for the 95th percentile in each primary region
Embedded Private Stem Player Session Continuity
Given a reviewer is actively playing stems on a press page using signed stream URLs When assets are re-linked and signed URLs rotate Then playback continues without manual refresh and without a fatal error And any audio interruption is under 1 second with no 4xx/5xx media segment errors logged And new download URLs are signed against the new targets and old signatures are revoked within 60 seconds
Shortlink Target Refresh With Zero-Downtime Redirects
Given a published shortlink experiencing concurrent traffic during an active campaign When its target is updated by LinkHealer Then at least 99.9% of requests over the 5 minutes following the update resolve to a working target (HTTP 200/302) with no HTTP 404/410 responses And UTM parameters and click attribution query strings are preserved through the redirect And the redirect chain length is at most 2 hops for 99% of requests
Auto-Relocate Failure Fallback and Owner Prompt
Given an asset path becomes unreachable and auto-relocate cannot determine a new location When re-linking fails Then the owner is notified within 5 minutes via in-app notification and email with a deep link to resolve And dependent shortlinks and press pages remain accessible, showing the last known good asset or a non-blocking placeholder rather than a 404 And upon manual fix by the owner, propagation to all dependents completes within 2 minutes and health status returns to green
Derived Previews and Thumbnails Refresh
Given a press page with artwork thumbnails, waveform images, and 30-second audio previews When an asset is re-linked or its binary changes Then all derived previews are revalidated and regenerated as needed, and their URLs updated to versioned variants And CDN caches for the old variants are purged and new variants are warmed And the press page displays correct previews within 2 minutes without broken images or 4xx responses
Owner Prompting and Resolution Workflow
"As a project owner, I want clear prompts with one-click fixes when LinkHealer can’t auto-heal so that I can resolve issues quickly and keep campaigns on track."
Description

Provides a guided resolution flow when automatic healing is not possible (e.g., file deleted, access revoked, ambiguous matches). Notifies owners and collaborators via in-app alerts, email, and optional Slack with clear diagnosis and recommended actions. Offers one-click options to select a replacement file, request or re-grant permissions, restore from a previous version, or mark the asset as intentionally archived. Tracks SLA timers, supports assignment and comments, and closes the incident once validation tests pass.

Acceptance Criteria
Auto-Heal Failure: Deleted Source File
Given a monitored asset link returns a not-found response (HTTP 404 or provider fileNotFound) and no relocated candidate is found When the link check runs and auto-heal fails Then create an incident with diagnosis "Source file deleted" and severity P2 And present recommended actions: "Restore from previous version", "Select replacement file", "Mark as archived" And show affected shortlinks and press pages counts in the incident And notify owners and collaborators via in-app alert within 60 seconds, email within 3 minutes, and Slack (if connected) within 60 seconds When the user selects "Restore from previous version" Then list available versions with timestamps and sizes and disable the action if no versions exist with an explanatory tooltip When a version is chosen Then re-link the asset, update previews and shortlinks, run the validation suite, and auto-close the incident only if all checks pass; otherwise keep it open with failure reasons displayed
Permission Revoked on Connected Drive
Given the asset is inaccessible due to permission errors (HTTP 403 or provider insufficientPermissions) When detected by LinkHealer Then create an incident with diagnosis "Access revoked" and recommended actions: "Request access", "Re-grant connection", "Select replacement file", "Mark as archived" And expose a one-click "Request access" that sends a provider-native permission request to the file owner and records the request event And set incident status to "Awaiting Access" while showing the last request timestamp and requester And do not expose previews or downloads until access is restored When access is granted or the connection is re-authorized Then revalidate the asset and auto-close the incident if validation passes; otherwise keep it open with the failed checks listed
Ambiguous Match Resolution Flow
Given multiple candidate files are discovered with similarity scores within the ambiguity threshold When auto-heal cannot determine a unique match Then create an incident with diagnosis "Ambiguous match" and list 3–10 candidates with path, size, duration, modified date, and similarity score And provide inline preview/metadata compare and a one-click "Use this file" action per candidate When the owner selects a candidate Then update the asset link, refresh previews, and update all dependent shortlinks and press pages within 2 minutes And record an audit log entry with the chosen candidate and the selector And run the validation suite and auto-close on pass; keep open with reasons on fail
Multi-Channel Notifications and De-duplication
Given user notification preferences enable in-app and email, and Slack is optionally configured per workspace When an incident is created Then send notifications only via enabled channels And include a deep link to the incident in all notifications And suppress duplicate notifications for the same incident to the same user within a 30-minute window And log and surface delivery failures (email bounces, Slack webhook errors) on the incident When a user unfollows the incident Then stop further notifications to that user except for SLA breach escalations
SLA Timers, Assignment, and Collaboration
Given the workspace SLA for LinkHealer incidents is configured to 8 hours When an incident is created at t0 Then display the SLA target as t0+8h with a live countdown and color states (OK, Warning at t0+7h, Breach at t0+8h) And allow assignment to a user with timestamped audit, and allow unassignment/reassignment And support comments with @mentions, basic Markdown, and file attachments And send a reminder 1 hour before SLA breach to assignee and owners, and escalate on breach to the workspace Slack channel (if configured) When the incident is resolved as "Archived intentionally" Then set resolution code to "Archived" and stop timers And make the full event timeline exportable as JSON
Incident Closure and Validation Suite
Given a resolution action is taken (restore, replacement, permission re-granted, archived) When validation runs Then verify: asset URL resolves (HTTP 200 or provider success), preview renders, shortlink returns 200 and redirects correctly, AutoKit press page loads asset, private stem player streams at least 30 seconds, and watermarked download generates and respects expiry And invalidate caches and rebuild previews on pass And auto-close the incident and post a success note to the timeline If any check fails Then keep the incident open, show failing checks with messages, and re-run validation every 15 minutes until pass or manual override with required reason
Permission-Aware Healing and Re-auth
"As an admin, I want LinkHealer to handle permissions and re-auth securely so that access remains correct while keeping links functional."
Description

Ensures LinkHealer operations respect authentication and authorization boundaries across storage providers and TrackCrate. Differentiates permission errors from missing resources, initiates secure re-auth flows with token refresh and least-privilege scopes, and validates that healed links preserve intended audience restrictions. Regenerates expiring, watermarked download tokens and signed preview URLs as needed without widening access, and logs permission changes for auditing.

Acceptance Criteria
Permission Error vs Missing Resource Classification
Given LinkHealer receives an error from a storage provider When the response indicates 401/403 or a permission_denied code Then classify the incident as PermissionError and do not mark the asset as deleted And queue a re-auth action within 1 minute and record provider error code and request ID Given LinkHealer receives a 404/not_found with valid credentials When the asset lookup completes Then classify the incident as MissingResource and start relocate search workflows And do not initiate a re-auth flow
Token Refresh Without Scope Escalation
Given a stored provider token is expired or invalid When LinkHealer performs a token refresh Then request only the previously granted scopes And verify the refreshed token scopes are equal to or narrower than the prior set And if broader scopes are returned, reject the token, revoke it, and require interactive re-auth And write an audit entry with before/after scope sets and outcome
Owner Re-auth Flow and Least-Privilege Consent
Given a permission error is detected for an asset with a known owner When LinkHealer initiates re-auth Then send a single email and in-app notification to the owner within 5 minutes And present an OAuth flow with PKCE and nonce-bound state, listing the exact scopes and justification And issue a single-use re-auth link that expires in 30 minutes And upon success, encrypt and store the refreshed token tagged to the owner and scopes And retry healing within 2 minutes with up to 3 attempts And set incident status to Resolved on success or NeedsOwnerAction on failure, and log the outcome
Preserve Audience Restrictions on Healed Links
Given a press page or shortlink enforces audience restrictions (private, allowlist, embargo) When LinkHealer updates the underlying file pointer or regenerates a signed preview URL Then preserve the shortlink access policy unchanged And ensure regenerated URLs enforce the same allowlist, expiry, and referrer constraints And verify a non-allowed user receives 403 and an allowed user receives 200 And ensure no publicly accessible direct file URLs are exposed in UI or logs
Regenerate Expiring Watermarked Downloads and Signed Previews
Given a watermarked download token or signed preview URL will expire within the policy window (e.g., 24h) When LinkHealer heals the asset or detects impending expiry Then regenerate tokens/URLs retaining the same watermark recipient ID and salt And set expiry according to campaign policy without exceeding the configured cap And revoke previous tokens/URLs within 60 seconds of regeneration And update dependent shortlinks atomically so any 404 window is under 5 seconds
Comprehensive Permission Audit Logging
Given any permission-affecting action occurs (token refresh, scope request, ACL validation, re-auth prompt) When the action completes (success or failure) Then write an immutable audit record with UTC timestamp, actor, asset ID, provider, action type, before/after scopes or ACL, outcome, and provider request IDs And ensure logs are queryable by asset and time range within 2 seconds for the last 90 days And redact PII per policy and verify integrity via hash chain or WORM storage
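The hash-chain integrity mentioned above can be sketched in a few lines: each audit entry stores the hash of its predecessor, so mutating any earlier record invalidates every later link. This is a minimal illustration of the idea, not a WORM store:

```python
import hashlib
import json

def append_audit(chain: list, record: dict) -> dict:
    """Append an audit record linked to the previous entry by hash."""
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    body = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    entry = {"record": record, "prev_hash": prev_hash, "entry_hash": entry_hash}
    chain.append(entry)
    return entry

def verify_chain(chain: list) -> bool:
    """Recompute every link; any tampered record breaks verification."""
    prev = "0" * 64
    for e in chain:
        body = json.dumps(e["record"], sort_keys=True)
        if e["prev_hash"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != e["entry_hash"]:
            return False
        prev = e["entry_hash"]
    return True
```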
Rate Limiting and Backoff for Re-auth and Healing Attempts
Given repeated permission errors occur for the same provider account When retries exceed 3 within 1 hour Then pause automated healing and re-auth prompts for 12 hours and suppress duplicate notifications And create one consolidated incident visible to the owner and workspace admins And resume only after successful re-auth or manual acknowledgement And apply exponential backoff with jitter to background retry jobs
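The exponential backoff with jitter called for above can be expressed as a small delay function; doubling per attempt with a cap, then randomizing, prevents many failing jobs for the same provider account from retrying in lockstep. Parameter values here are illustrative, not TrackCrate's actual policy:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 300.0,
                  jitter: float = 0.2) -> float:
    """Exponential backoff with +/- jitter for background retry jobs.

    Delay doubles per attempt, is capped, then scaled by a random factor
    in [1 - jitter, 1 + jitter] to spread retries out.
    """
    delay = min(cap, base * (2 ** attempt))
    return delay * random.uniform(1 - jitter, 1 + jitter)
```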
Link Health Dashboard and Alerts
"As a label ops lead, I want a health dashboard and alerts so that I can prioritize and track link issues across campaigns."
Description

Delivers a real-time dashboard showing link health status, incident counts by severity and campaign, mean time to detect/repair, and outstanding owner actions. Provides filters for release, asset type, assignee, and storage provider. Supports threshold-based alerts, daily/weekly summaries, CSV exports, and webhook notifications for integration with issue trackers and chat tools.

Acceptance Criteria
Real-Time Link Health Dashboard
- Given the dashboard is open, when a link’s health state changes in the backend, then the visible status and severity badge update within 5 seconds without a full page reload.
- Given summary tiles are visible, when data updates, then tiles for Incident Count by Severity and by Campaign refresh at least every 60 seconds and display a Last Updated timestamp.
- Then severity badges are color-coded (Critical=red, Major=orange, Minor=yellow, Healthy=green) and meet WCAG AA contrast; badges include aria-labels and are keyboard focusable.
- Then each incident row displays: link_id, asset_type, campaign, storage_provider, severity, status, assignee, last_check_at (UTC), detected_at, repaired_at (if any).
- Then the Outstanding Owner Actions widget shows total pending actions and a list sorted by time-to-SLA-breach ascending; clicking an item navigates to its detail drawer.
- Then the detail drawer shows the incident timeline (detected, acknowledged, reassigned, rechecked, repaired) and provides actions: Recheck, Assign/Reassign, Mark Ignored (with reason), and Add Note.
Filtering and Saved Views
- Given filters for Release, Asset Type, Assignee, and Storage Provider, when the user applies selections (including multi-select), then both the incident list and aggregates reflect only matching records within 1 second for result sets up to 10,000 rows.
- Then the current filter state is encoded in the URL query string and is restored on page reload and when the URL is shared with team members who have access.
- Then a Clear Filters control resets all filters to defaults in one click and refreshes the results accordingly.
- Given a user saves the current filters as a named view, then it appears in the Views menu, can be renamed or deleted by the owner, and can be shared with the team (shared views are read-only for non-owners).
- Then filter components support keyboard navigation and announce changes to assistive technologies.
Threshold-Based Alerts
- Given an alert rule "Critical incidents > 0 for 5 minutes" scoped to a campaign, when the condition is met, then a notification is delivered to the selected channels (email, in-app, Slack, webhook) within 2 minutes of breach.
- Then alerts are deduplicated per rule and incident state: no additional notifications are sent until the state clears or a configurable cooldown elapses (default 60 minutes; range 15–120 minutes).
- Then a Snooze option is available per rule and per incident (15m, 1h, 1d) and prevents notifications during the snooze window.
- Then a Test Alert action sends a sample notification to all configured channels and records the result in the audit log.
- Then all alert rule create/update/delete actions and last fire times are recorded in an immutable audit log with actor, timestamp, and before/after values.
Daily and Weekly Email Summaries
- Given a user subscribes to daily and/or weekly summaries with a selected local time and timezone, then the email is generated at the scheduled time and delivered within 10 minutes.
- Then the summary includes counts for incidents opened and resolved, MTTR and MTTD for the period, incidents by severity and campaign, and total outstanding owner actions at send time.
- Then the email includes both HTML (dark-mode friendly) and plain-text parts, with links deep-linking to the dashboard with the corresponding filters applied.
- Then the recipient can unsubscribe or change preferences via links in the email without contacting support, and changes take effect before the next scheduled send.
CSV Export of Incidents and Metrics
- Given the user clicks Export CSV, then the file contains only records matching the current filters and preserves the current sort order.
- Then exports up to 100,000 rows are generated within 60 seconds; larger exports are queued, and the user is notified in-app and by email when ready.
- Then the CSV is UTF-8 with BOM, RFC 4180 compliant (comma delimiter, quoted fields for values containing commas/newlines), and includes a header row.
- Then date/times are ISO 8601 UTC; durations are in whole seconds; booleans are true/false.
- Then the column set includes: link_id, campaign, asset_type, storage_provider, status, severity, detected_at, repaired_at, time_to_detect_seconds, time_to_repair_seconds, assignee, owner_action_required, last_check_at.
- Then the download remains available for at least 7 days and is access-controlled to authorized users only.
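The CSV shape described above can be sketched with the standard library: `csv` quotes fields containing commas or newlines and uses CRLF line endings by default, matching RFC 4180, and the prepended BOM helps spreadsheet tools detect UTF-8. A minimal sketch, assuming incidents arrive as dicts keyed by the listed columns:

```python
import csv
import io

COLUMNS = ["link_id", "campaign", "asset_type", "storage_provider", "status",
           "severity", "detected_at", "repaired_at", "time_to_detect_seconds",
           "time_to_repair_seconds", "assignee", "owner_action_required",
           "last_check_at"]

def incidents_to_csv(rows: list) -> bytes:
    """Serialize incident dicts to RFC 4180 CSV, UTF-8 with BOM.

    Missing fields are emitted as empty strings; extra keys are ignored.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS, extrasaction="ignore")
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
    return b"\xef\xbb\xbf" + buf.getvalue().encode("utf-8")
```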
Webhook Notifications for Incidents and Alerts
- Given a webhook endpoint is configured with URL, name, and shared secret, when an event occurs (incident.created, incident.updated, incident.repaired, alert.fired), then a POST is delivered within 1 minute with a versioned JSON payload including event_id, event_type, occurred_at, link/asset metadata, campaign, severity, status, metrics deltas, and a dashboard URL.
- Then each delivery includes an HMAC-SHA256 signature over the raw body and a timestamp header; receivers can verify within ±5 minutes clock skew.
- Then deliveries are retried with exponential backoff up to 6 attempts over 30 minutes on network errors or 5xx responses; retries use the same event_id for idempotency.
- Then persistent 410 Gone or 404 Not Found responses for 3 consecutive attempts disable the endpoint and notify owners.
- Then users can send a Test Delivery and view delivery logs (timestamp, status code, latency, truncated response body) for the last 100 attempts per endpoint.
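Receiver-side verification of the signature and skew window above could look like the following. Signing the timestamp together with the raw body prevents replay of a captured delivery outside the window; the `timestamp.body` signing scheme and parameter names are assumptions for this sketch, not a documented TrackCrate contract:

```python
import hashlib
import hmac
import time

def verify_webhook(secret: bytes, raw_body: bytes, signature_hex: str,
                   timestamp: int, max_skew: int = 300) -> bool:
    """Verify an HMAC-SHA256 webhook signature with a clock-skew window.

    Returns False if the timestamp is outside +/- max_skew seconds or the
    signature does not match. compare_digest avoids timing side channels.
    """
    if abs(time.time() - timestamp) > max_skew:
        return False
    signed = str(timestamp).encode() + b"." + raw_body
    expected = hmac.new(secret, signed, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```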
Audit Trail and Reporting
"As a product manager, I want a complete audit trail of link healing activity so that we can prove stability and improve our processes."
Description

Captures a complete audit trail for link health events, automated fixes, and user-driven resolutions, including timestamps, actors, before/after URIs, file IDs, checksums, and affected dependents (shortlinks, AutoKit pages). Exposes this history on the asset timeline, supports immutable export for compliance, and provides aggregate reports for post-mortems and campaign wrap-ups.

Acceptance Criteria
Automated Relocation Event Logged with Before/After URIs
Given a tracked asset’s source file is moved in a connected drive And LinkHealer auto-relocates the file and updates internal references When the relocation completes Then an audit event is appended with fields: event_type='auto_relocate', event_id (UUIDv4), occurred_at (ISO 8601 UTC), actor='system:linkhealer', asset_id, before_uri, after_uri, before_file_id, after_file_id, before_checksum, after_checksum, dependents (array with type and id) And after_checksum equals before_checksum And the event is immutable (no update/delete operation permitted) And the event appears on the asset timeline within 15 seconds of completion
Manual Relink Action Captures Actor and Checksums
Given a broken link is resolved by a user via manual relink When the user confirms a new source file Then an audit event is appended with fields: event_type='user_relink', event_id (UUIDv4), occurred_at (ISO 8601 UTC), actor=user_id, asset_id, before_uri, after_uri, before_file_id, after_file_id, before_checksum, after_checksum, note (optional) And content_changed is set to true if after_checksum != before_checksum else false And the event appears on the asset timeline within 5 seconds and is immutable
Dependents Enumeration and Update Confirmation
Given an asset has N active shortlinks and M AutoKit pages at the time of a link-health event When the event is recorded Then the audit event includes dependents_count_shortlinks=N, dependents_count_autokit=M, and lists each dependent as {type, id} And each listed dependent has update_status in {'updated','skipped'} and, if 'skipped', includes skip_reason And the total dependents listed equals N+M
Asset Timeline Display and Filtering
Given an asset with at least 50 audit events When a user opens the asset timeline view Then events are sorted by occurred_at descending and display: event_type, actor, before_uri→after_uri diff, checksums, and dependents counts And the user can filter by event_type, actor, and date range and only matching events are shown And exporting from the timeline yields exactly the currently filtered events
Immutable Audit Export with Cryptographic Integrity
Given a user requests an audit export (JSON and CSV) for an asset or campaign over a date range When the export is generated Then the system produces a ZIP containing data files and a manifest.json with: export_id (UUIDv4), created_at (ISO 8601 UTC), scope, and SHA-256 checksum for each file And a detached signature file (export.sig) for the manifest is included And signature verification succeeds on the unmodified ZIP and fails if any exported file is altered And the export contains all and only events within the requested scope
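The manifest with per-file SHA-256 checksums can be sketched as below; the detached signature (export.sig) would be layered on top of the manifest bytes, e.g. with an Ed25519 key, and is omitted here. Field names mirror the criteria above:

```python
import hashlib
from datetime import datetime, timezone
from uuid import uuid4

def build_manifest(files: dict) -> dict:
    """Build manifest.json for an audit export.

    `files` maps filename -> bytes; each file gets a SHA-256 checksum.
    """
    return {
        "export_id": str(uuid4()),
        "created_at": datetime.now(timezone.utc).isoformat(),
        "files": {name: hashlib.sha256(data).hexdigest()
                  for name, data in files.items()},
    }

def verify_manifest(manifest: dict, files: dict) -> bool:
    """Fail if any exported file was altered after the manifest was built."""
    return all(hashlib.sha256(files[name]).hexdigest() == digest
               for name, digest in manifest["files"].items())
```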
Aggregate Report for Post-Mortem and Wrap-Up
Given a campaign with audit events and a time window selected When the aggregate report is generated Then it includes: total events, counts by event_type, daily trend for the window, mean time to repair (MTTR) computed as mean(resolved.occurred_at − first_broken_detected.occurred_at) per incident, top 10 assets by event count, and total affected dependents And the sum of counts across event_type equals the total events for the window And the CSV export of the report matches on-screen totals and breakdowns
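The MTTR definition above (mean of resolved minus first-broken-detected per incident) reduces to a short aggregation; open incidents without a resolution timestamp are excluded from the mean. Field names are illustrative:

```python
from datetime import datetime, timezone

def mttr_seconds(incidents: list) -> float:
    """Mean time to repair: mean(resolved_at - detected_at) in seconds.

    Incidents missing either timestamp (e.g., still open) are skipped.
    """
    durations = [
        (i["resolved_at"] - i["detected_at"]).total_seconds()
        for i in incidents
        if i.get("resolved_at") and i.get("detected_at")
    ]
    return sum(durations) / len(durations) if durations else 0.0
```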

CreditMatch

Contributor disambiguation and split validation that normalizes aliases/diacritics to canonical profiles, verifies PRO/CAE/IPI identifiers, and enforces 100% totals. Highlights conflicts and requests missing data in-context, reducing disputes and failed ingests.

Requirements

Canonical Contributor Matching
"As a label manager, I want ambiguous contributor names to resolve to canonical profiles so that credits are consistent across releases and ingest errors are minimized."
Description

Provide a locale-aware matching service that normalizes names (diacritics, casing, spacing, punctuation) and consolidates aliases into a single canonical profile. The system should generate confidence scores, surface potential duplicates, and route low-confidence matches to a review queue with manual override controls. Each contributor selected is stored as a canonical profile ID referenced across projects, ensuring consistent credits, deduplication within and across workspaces, and faster ingest to downstream systems. The service must support bulk operations for large rosters, operate within strict performance SLAs for batch imports, and maintain an audit of match decisions.

Acceptance Criteria
Locale-Aware Name Normalization
Given a set of contributor name inputs containing diacritics, casing, spacing, and punctuation variants across supported locales (e.g., "Jose Alvarez", "José Álvarez", "JOSE-ALVAREZ", "José Alvarez") When the matching service is invoked via API or UI during credit entry or batch import Then the service normalizes the inputs and returns the same canonical profile ID for all true variants with a match confidence >= 90 And the normalized form and canonical ID are persisted and available for reuse in subsequent matches And the false-positive rate on the standardized evaluation dataset is <= 1% at the auto-accept threshold
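The normalization layer in the criterion above (diacritics, casing, spacing, punctuation) can be sketched with Unicode decomposition: strip combining marks, lowercase, and collapse punctuation and whitespace into single spaces. This is only the folding step; the matcher described above would add locale rules and confidence scoring on top:

```python
import re
import unicodedata

def normalize_name(name: str) -> str:
    """Fold a contributor name into a comparison key."""
    # Decompose accented characters, then drop the combining marks
    decomposed = unicodedata.normalize("NFKD", name)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    # Treat hyphens/punctuation as separators, collapse runs of whitespace
    cleaned = re.sub(r"[^\w\s]", " ", stripped.lower())
    return re.sub(r"\s+", " ", cleaned).strip()
```

All four example variants from the criterion fold to the same key, which is what lets them resolve to one canonical profile ID.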
Alias Consolidation into Canonical Profile
Given a user with appropriate permissions links an alias to a canonical profile or merges two duplicate profiles When the operation is confirmed Then the alias becomes a searchable synonym, the secondary profile is archived, and all existing references across projects and workspaces point to the primary canonical profile ID within 60 seconds And no data loss occurs (attachments, rights metadata, identifiers, and activity history retained) And the system records an audit entry with before/after IDs, actor, timestamp, and reason
Confidence Scoring and Review Queue Routing
Given the service outputs a confidence score from 0–100 for each candidate match And configurable thresholds are set to Auto-Accept >= 90, Auto-Review 41–89, and Auto-Create/New <= 40 When a match is evaluated Then candidates >= Auto-Accept are automatically selected; candidates in Auto-Review are placed into the review queue within 2 minutes with the top 5 candidates and key features contributing to the score; candidates <= Auto-Create/New prompt creation of a new canonical profile And reviewers can manually override to select a candidate or create new, with the decision applied immediately and recorded in the audit trail And every queued item reaches a terminal state (accepted/rejected/created) within 24 hours or is escalated per policy
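The threshold routing above is a simple three-way branch; the default values mirror the configured thresholds (>=90 auto-accept, 41–89 review, <=40 create new), though exact boundary handling here is an assumption:

```python
def route_match(confidence: float, auto_accept: float = 90,
                create_new: float = 40) -> str:
    """Route a candidate match by its 0-100 confidence score."""
    if confidence >= auto_accept:
        return "auto_accept"
    if confidence <= create_new:
        return "create_new"
    return "review_queue"
```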
Potential Duplicate Surfacing and Merge
Given a workspace contains profiles with similarity scores >= 80 to another profile When a deduplication scan runs or a profile detail page is opened Then potential duplicates are surfaced with scores and conflicting fields highlighted (names, identifiers, locales) And a merge action consolidates into a chosen primary canonical profile, converts secondaries to aliases, migrates all references, and prevents future duplication by marking merged IDs as aliases And reference migration completes in <= 60 seconds and subsequent searches return a single canonical profile
Bulk Import Performance SLA for Large Rosters
Given a CSV bulk import of 20,000 contributor rows with names and optional identifiers When submitted to the bulk matching API Then processing starts within 30 seconds and completes in <= 12 minutes end-to-end And p95 per-record match evaluation latency is <= 800 ms and overall failure rate is <= 0.5% with automatic retries on transient errors And the job exposes progress (percent complete, ETA) and produces a results file mapping each input row to canonical profile ID, confidence score, decision (auto/manual/new), and any error codes And the operation is idempotent using a client-supplied idempotency key such that replays do not create duplicates
Match Decision Audit Trail
Given any match decision (auto-accepted, auto-created, manual override, merge) When the decision is committed Then an immutable audit record is written within 500 ms containing: actor/service, timestamp (UTC), request/batch ID, input tokens, candidate list with scores, thresholds and algorithm version, final outcome (selected ID/new), and rationale/notes if manual And auditors can query by profile ID, batch ID, date range, or actor and export results to CSV/JSON And audit retrieval p95 latency is <= 2 seconds for queries returning <= 1,000 rows and retention is >= 24 months with access controls enforced
Cross-Workspace Canonical ID Referencing and Downstream Consistency
Given the same contributor appears in multiple workspaces or projects When credits are assigned or profile updates occur Then the system uses a single canonical profile ID across all workspaces while respecting access controls; cross-workspace duplicates are auto-deduplicated when confidence >= 90 or queued for review otherwise And all project credit references use the canonical profile ID; downstream export/ingest payloads include that ID and remain stable after merges And a change event is emitted when merges or alias additions occur so downstream systems can update within 5 minutes
PRO/IPI Identifier Verification
"As a rights administrator, I want contributor identifiers verified so that royalty splits link to the correct PRO accounts and do not get rejected by distributors."
Description

Validate contributor identifiers (e.g., IPI, CAE, PRO member numbers) for format and integrity, and verify them against authoritative sources or partner datasets. Support multiple IDs per contributor, country- and society-specific rules, and clear statuses (verified, mismatch, pending). Provide actionable suggestions for likely corrections, handle rate limits with resilient retry/caching, and log verification provenance for compliance. Surface verification badges in the UI and expose verification state via API for downstream delivery and reporting.

Acceptance Criteria
Society-Specific Identifier Format Validation
Given a contributor enters an identifier and selects a society, When the identifier is saved, Then it is normalized (trimmed whitespace, removed separators) and validated against the configured pattern and checksum rules for that society and identifier type. Given an identifier fails format or checksum validation, When the user attempts to save, Then the system blocks the save, returns error code invalid_format, and highlights the failing rule. Given configurable rules per country/society (e.g., IPI Name Number length, IPI Base/CAE length), When rules are updated, Then validation behavior reflects the new rules without code changes and is applied to all new edits. Given a normalized identifier matches the valid pattern, When saved, Then it is stored in canonical form and flagged as ready_for_verification.
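The configurable-rule validation above might be structured as a rule table keyed by society and identifier type, with normalization applied before the pattern check. The 11-digit IPI Name Number pattern below is an illustrative placeholder, not an authoritative rule; real rules would live in configuration as the criterion requires:

```python
import re

# Illustrative rule table; real society rules belong in configuration
RULES = {
    ("IPI", "name_number"): {"pattern": r"^\d{11}$"},  # assumed format
}

def normalize_identifier(raw: str) -> str:
    """Trim whitespace and remove common separators (dots, dashes, spaces)."""
    return re.sub(r"[.\-\s]", "", raw.strip())

def validate_identifier(society: str, id_type: str, raw: str):
    """Return (canonical_value, status) per the flow described above."""
    canonical = normalize_identifier(raw)
    rule = RULES.get((society, id_type))
    if rule is None:
        return canonical, "unknown_rule"
    if not re.match(rule["pattern"], canonical):
        return canonical, "invalid_format"
    return canonical, "ready_for_verification"
```

Because validation consults the table at call time, updating a society's rule changes behavior without code changes, as the criterion specifies.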
Multiple IDs per Contributor with Normalization and Deduplication
Given a contributor profile, When multiple identifiers (IPI Name, IPI Base/CAE, PRO member numbers) across different societies are added, Then the system accepts them up to the configured per-profile limit and stores type, society, and canonical value for each. Given an identifier equal to an existing one after normalization, When added again, Then the system prevents duplication and returns error code duplicate_identifier. Given multiple identifiers for the same society and type, When saved, Then the system allows multiple but requires exactly one marked as primary; otherwise returns error code missing_primary. Given a merge of two contributor profiles, When identifiers overlap, Then duplicates are deduplicated and primaries are preserved according to merge policy, logging any reassignment.
Authoritative Source Verification and Status Mapping
Given an identifier with valid format, When verification runs, Then the system queries configured authoritative sources/partner datasets for the associated society and attempts a match on identifier, name (diacritic-insensitive), and society. Given a positive match on identifier and a name match within configured normalization rules, When verification completes, Then status=verified, lastVerifiedAt is set, and source metadata is stored. Given identifier found but associated name/society does not match the contributor profile beyond allowed normalization, When verification completes, Then status=mismatch with reason=name_or_society_mismatch and details include authoritative name/society. Given all sources return not found, When verification completes, Then status=mismatch with reason=not_found. Given sources are unreachable or rate-limited after all retry attempts, When verification halts, Then status=pending and nextRetryAt is scheduled per policy.
Actionable Correction Suggestions for Invalid or Mismatched IDs
Given an identifier fails format validation, When the system detects likely fixes (e.g., removed illegal characters, corrected length, transposed digit), Then it presents up to 3 suggestions with confidence scores >= 0.6 and an Apply action per suggestion. Given an identifier verified to a different society or name, When mismatch occurs, Then the system suggests linking the authoritative identity, updating the contributor’s society, or marking as secondary, each as a one-click action with preview of changes. Given a user applies a suggestion, When saved, Then the identifier is updated, re-verified automatically, and the previous value is retained in history with reason=suggestion_applied. Given no suggestion meets the confidence threshold, When mismatch/invalid occurs, Then the system displays guidance text and a link to manual entry without suggestions.
Rate Limit Resilience, Retry Policy, and Caching
Given an external verification call returns HTTP 429 or 503, When retry policy is applied (maxAttempts=3, baseBackoff=1s, maxBackoff=30s, jitter=±20%), Then retries are scheduled and the identifier remains in status=pending until attempts are exhausted. Given retries are exhausted without a definitive result, When the job completes, Then status=pending with reason=deferred and nextRetryAt is set using exponential backoff with cap. Given a successful verification result, When stored, Then it is cached for 30 days (configurable) keyed by {society, idType, identifier} and reused to avoid duplicate external calls. Given a mismatch or not_found result, When stored, Then it is cached for 24 hours (configurable) and rechecked thereafter. Given multiple verification requests for the same key within a short window, When processing, Then the system deduplicates in-flight requests and serves all callers from a single result.
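The outcome-dependent caching above (30 days for verified, 24 hours for negative results) can be sketched as a TTL cache keyed by {society, idType, identifier}. This sketch omits the in-flight request deduplication the criterion also requires:

```python
import time

# TTLs per outcome, in seconds (configurable per the criteria above)
TTL = {"verified": 30 * 24 * 3600, "mismatch": 24 * 3600, "not_found": 24 * 3600}

class VerificationCache:
    """Cache verification results to avoid duplicate external calls."""

    def __init__(self):
        self._store = {}

    def get(self, society: str, id_type: str, identifier: str):
        """Return a cached result, or None if absent or expired."""
        hit = self._store.get((society, id_type, identifier))
        if hit and time.time() < hit["expires_at"]:
            return hit["result"]
        return None

    def put(self, society: str, id_type: str, identifier: str, result: dict):
        """Store a result with a TTL chosen by its verification status."""
        ttl = TTL.get(result["status"], 24 * 3600)
        self._store[(society, id_type, identifier)] = {
            "result": result,
            "expires_at": time.time() + ttl,
        }
```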
Verification Provenance Logging and Auditability
Given any verification attempt occurs, When processing completes (success, mismatch, or pending), Then an append-only provenance record is written including: timestamp, actor (system/user), contributorId, society, idType, identifier (canonical), source endpoints invoked, request/response hashes, outcome, confidence, retryCount, and correlationId. Given a provenance record exists, When an auditor requests the history for an identifier, Then the API returns a chronological list with immutable entries and redacts PII per policy while retaining decision-critical fields. Given data retention policy is configured (default 24 months), When records exceed retention, Then they are archived or purged according to policy with an audit entry of the action. Given a suggestion is applied, When verification reruns, Then the provenance chain links the suggestion event and subsequent verification outcome.
UI Badges and API Exposure of Verification State
Given identifiers are present on a contributor profile, When the page loads, Then each identifier displays a badge with status one of {verified, mismatch, pending}, color-coded and with a tooltip showing lastVerifiedAt and source. Given accessibility requirements, When badges render, Then they meet WCAG AA contrast, include aria-labels with status and date, and are navigable via keyboard. Given a user clicks a badge, When the details panel opens, Then it shows verification outcome, authoritative data, and available correction actions. Given an API client fetches identifiers, When calling GET /contributors/{id}/identifiers, Then each entry includes fields: idType, society, identifier, status, lastVerifiedAt, source, nextRetryAt, and suggestions (if any). Given webhooks are enabled, When a verification status changes, Then a verification.updated event is emitted with the new status and provenanceId.
Split Total Enforcement
"As a producer, I want the system to ensure all splits add up correctly per rights category so that we can finalize releases without last-minute corrections."
Description

Enforce that splits for defined rights categories (e.g., Composition, Publishing, Master) sum to 100% within each category. Support configurable rounding rules, fractional precision, and territory/version variants when required. Prevent over- or under-allocation, flag inconsistent role assignments, and block approval until totals are valid. Provide clear, inline guidance and auto-calculation helpers to resolve minor discrepancies. Ensure validations are compatible with downstream delivery specifications to reduce failed ingests.

Acceptance Criteria
Block Approval Until 100% per Category
Given a release has splits entered for one or more categories (Composition, Publishing, Master) When a user attempts to approve the splits Then the Approve action is disabled unless every category total equals exactly 100.00% under the active precision and rounding rules And if any category total != 100.00%, an inline error is shown per category with the computed total and deficit/excess And API submissions with invalid totals return HTTP 422 with per-category totals and error codes And drafts can be saved but are marked Invalid until corrected
Rounding Rules and Precision Enforcement
Given organization settings define precision = 2 decimal places and rounding = bankers (half-even) When contributor shares of 33.33%, 33.33%, and 33.34% are entered for a category Then the displayed category total equals 100.00% and no validation error is shown Given contributor shares of 33.333%, 33.333%, and 33.334% with precision = 2 When totals are calculated Then internal calculations use full precision; final display and exports use configured precision/rounding; the category total displays as 100.00% And if configured precision and rounding would produce a total of 99.99% or 100.01%, an inline warning appears with an Auto-Fix suggestion
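The full-precision-then-round behavior above could be sketched with Python's decimal module standing in for the internal arithmetic (function names are illustrative):

```python
from decimal import Decimal, ROUND_HALF_EVEN

def display_total(shares: list[str], precision: str = "0.01") -> Decimal:
    """Sum at full precision, then round only the total for display (bankers/half-even)."""
    total = sum(Decimal(s) for s in shares)
    return total.quantize(Decimal(precision), rounding=ROUND_HALF_EVEN)

def is_valid_category(shares: list[str]) -> bool:
    """A category is valid when its displayed total is exactly 100.00%."""
    return display_total(shares) == Decimal("100.00")
```

Note that shares are parsed from strings: constructing `Decimal` from binary floats would reintroduce exactly the representation error this rule is meant to avoid.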
Territory Variant Split Validation
Given a release defines territory variants (e.g., US, Rest of World) When splits are entered per category for each variant Then within each variant and category, totals must equal 100.00% And the UI highlights any invalid variant/category and blocks approval until all variants are valid And attempting to export a variant with invalid totals is blocked with an error naming the variant and category
Version Variant Split Validation
Given a release has versions (e.g., Original, Radio Edit) And Radio Edit inherits splits from Original When an override is applied to Radio Edit for a category Then the overridden category total must equal 100.00% And approval is blocked if any version/category total != 100.00% And if no overrides are made, inherited totals remain valid without additional edits
Inconsistent Role Assignment Detection
Given allowed roles are defined per category (e.g., Composition: Writer/Composer; Master: Performer/Producer) When a split line item uses a role not allowed for its category Then an inline validation error is shown listing allowed roles, and approval is blocked while the error exists And when the same contributor appears multiple times in the same category without distinct sub-roles, a warning prompts merge/confirmation to avoid double-counting And drafts can be saved but remain marked Invalid until all role inconsistencies are resolved
Inline Guidance and Auto-Calc Helpers
Given a category total differs from 100.00% by a delta within the auto-fix threshold (e.g., ≤ 0.02%) When the user clicks Auto-Fix Then the system adjusts the largest-share, unlocked line item by the exact delta using configured rounding so the total becomes 100.00% And a tooltip and change log entry explain the adjustment And locked line items are never modified; if all are locked, Auto-Fix is disabled with an explanatory message And after Auto-Fix, validation passes with no new discrepancies
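The Auto-Fix rule above — apply the whole delta to the largest unlocked line item — might look like this sketch (illustrative names; the real system would also enforce the ≤ 0.02% threshold and write the change-log entry):

```python
from decimal import Decimal

def auto_fix(shares: dict[str, Decimal], locked: set[str],
             target: Decimal = Decimal("100.00")) -> dict[str, Decimal]:
    """Adjust the largest unlocked share by the exact delta so the total hits target.
    Raises if every line item is locked (the UI would disable Auto-Fix instead)."""
    delta = target - sum(shares.values())
    unlocked = [k for k in shares if k not in locked]
    if not unlocked:
        raise ValueError("all line items are locked; Auto-Fix unavailable")
    biggest = max(unlocked, key=lambda k: shares[k])
    fixed = dict(shares)
    fixed[biggest] = shares[biggest] + delta
    return fixed
```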
Downstream Delivery Compatibility Checks
Given an export target requires 2 decimal places and totals of exactly 100.00% per category When the user runs Preflight Check Then conversion of internal shares to the target precision/rounding yields 100.00% totals per category across all variants and versions And if any category would not equal 100.00% after conversion, Preflight fails with a report naming categories and computed totals, and Export is disabled And when Preflight passes, generated export payloads pass schema validation and per-category total assertions
Conflict Detection & Resolution
"As a project lead, I want conflicts clearly flagged with suggested fixes so that we can resolve credit issues quickly and avoid release delays."
Description

Continuously detect and highlight conflicts such as duplicate contributors, identifier mismatches, and split overages/underages. Present conflicts inline with contextual explanations and suggested fixes, allowing users to accept, override (with justification), or assign to a reviewer. Notify stakeholders of blocking conflicts, track resolution status, and prevent release finalization until all critical conflicts are cleared. Maintain a comment thread per conflict for collaboration and traceability.

Acceptance Criteria
Inline Duplicate Contributor Detection
Given two or more contributor entries in the split editor normalize to the same canonical profile via CreditMatch When validation runs on field blur or save Then a "Duplicate contributor" conflict is displayed inline on the affected entries within 2 seconds And the entries are grouped under the canonical profile with a visible tag And a "Merge to canonical" suggested fix is available And selecting "Accept" merges the entries into one under the canonical profile and sums their split percentages without data loss And the conflict status changes to Resolved And an audit log entry is recorded with user, timestamp, and before/after values
Identifier Mismatch Validation
Given a contributor with a non-zero split has a CAE/IPI that does not match the verified identifier on the canonical profile or is missing When validation runs on save or via background sync with PRO lookup Then an "Identifier mismatch" conflict is created and displayed inline within 2 seconds And the UI shows the provided and canonical identifiers side by side with an explanation tooltip And suggested fixes include "Replace with canonical" (if mismatch) and "Request data" (if missing) And choosing "Replace with canonical" updates the contributor record and resolves the conflict And choosing "Request data" sends an in-app notification immediately and an email within 5 minutes to the designated contact and logs the request And the conflict remains Open until a verified identifier is present and matches the canonical profile
Split Total Enforcement & Auto-Fix Suggestion
Given the sum of all contributor splits is not exactly 100.00% When the user attempts to save or finalize the release Then a "Split total must equal 100%" conflict is displayed within 1 second And finalize actions are disabled And a "Fix splits" action is presented that adjusts only unlocked splits (rounding to 2 decimals) to reach exactly 100.00% And applying the fix updates values and resolves the conflict when the total equals 100.00% And if no unlocked splits are available, the fix action is disabled with a tooltip explaining why
Override With Mandatory Justification
Given a non-critical conflict is Open When a user selects "Override" Then a justification text field is required with a minimum of 10 characters before confirmation is enabled And upon confirmation the conflict state becomes Overridden and is excluded from release blocking checks And the system records user, timestamp, justification, and before/after values in the audit log And attempts to override Critical conflicts require Admin role and explicit confirmation; otherwise the action is blocked with an error
Assign Conflict to Reviewer & Track Status
Given a conflict is Open When it is assigned to a reviewer or team with a due date Then the conflict status changes to In Review and displays assignee and due date And the assignee receives an in-app notification immediately and an email within 5 minutes with a deep link to the conflict And assignment is logged in the activity feed And when the reviewer marks it Resolved or applies a fix/override, the status updates accordingly and the assigner is notified
Conflict Comment Thread & Audit Trail
Given a conflict exists When users add comments to the conflict thread Then comments display in chronological order with author, timestamp, and edit history And users can @mention participants to trigger an in-app notification immediately and an email within 5 minutes And attachments up to 25 MB can be added and are virus-scanned before availability And comment deletion is a soft delete that remains visible to admins in the audit trail And exporting the conflict includes the full comment history and attachment metadata
Blocking Conflicts: Notifications & Finalization Gate
Given one or more Critical conflicts exist on a release When 10 minutes pass without any activity on those conflicts Then stakeholders configured for the release (Owner, Project Manager, Finance) receive a consolidated notification via in-app immediately and email within 5 minutes with counts, types, and deep links And repeated notifications for the same release are throttled to no more than once every 2 hours until all Critical conflicts are cleared And when a user attempts to finalize the release while any Critical conflicts are Open, the action is blocked with a modal listing unresolved conflicts and deep links And the "Finalize" button remains disabled until all Critical conflicts are Resolved or Overridden (if policy allows) And upon successful finalization a snapshot of conflict states and resolutions is stored for audit
In-Context Data Collection & Consent
"As an artist, I want to receive a simple, secure link to confirm my credits and splits so that my information is accurate without needing a full account."
Description

Enable secure, shareable requests for missing data directly to contributors and managers via shortlinks tied to a specific track or release context. Prefill known details and allow recipients to confirm identity, provide/confirm identifiers, and acknowledge/approve their splits. Support link expiry, reminders, basic identity verification, and watermarking of any shared materials. Store explicit consent artifacts (timestamps, IP/device, before/after values) to reduce disputes and provide a defensible audit record.

Acceptance Criteria
Send Missing Data Request via Contextual Shortlink
Given I am an authorized project member viewing a track or release with missing contributor data When I select "Request Missing Data," choose one or more contributors, and click "Send" Then the system creates a unique, non-guessable shortlink per recipient bound to that specific track/release and contributor And the token has at least 128 bits of entropy And the shortlink landing portal displays only the selected context and targeted fields And the request is logged with requester ID, recipient ID, context ID, and UTC timestamp
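A non-guessable token with at least 128 bits of entropy can come straight from the standard library's CSPRNG; the function name here is illustrative:

```python
import secrets

def new_shortlink_token(bits: int = 128) -> str:
    """URL-safe token drawn from a CSPRNG; 16 random bytes give 128 bits of entropy."""
    return secrets.token_urlsafe(bits // 8)
```

`token_urlsafe(16)` yields 22 base64url characters with no padding, which keeps shortlinks compact while meeting the entropy floor.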
Recipient Identity Verification and Access
Given a recipient opens their shortlink When they attempt to proceed beyond the landing page Then they must complete identity verification via one of: email magic link to an address on file, SMS OTP to a phone on file, or OAuth with a previously linked account And on success a session scoped to that context and recipient is issued that expires after at most 30 minutes of inactivity And after 5 consecutive failures the link locks for 15 minutes and a lock event is logged And all subsequent API calls require the verified session token bound to the shortlink token
Prefill and Edit Contributor Details
Given a verified recipient views the data form When the form loads Then known values (e.g., legal name, aliases, role, existing PRO, CAE/IPI) are prefilled and visibly labeled as "From TrackCrate" And verified identifiers are read-only unless the recipient initiates "Propose Correction" with a justification note And required fields are clearly marked and cannot be blank And field validations enforce allowed formats (e.g., IPI/CAE matches library patterns, PRO is from supported list) And on submit, invalid fields show inline errors and submission is blocked until resolved And the system captures before/after values for every changed field
Split Approval and 100% Validation
Given the splits table is displayed for the recipient's role When the recipient acknowledges their share or proposes a change Then the system enforces that total splits equal exactly 100.00% using two-decimal rounding rules documented in-app And the recipient cannot directly edit other parties' shares (only propose adjustments) And if the recipient's proposed share differs from the current proposal, a conflict is flagged and routed to admins And submission requires an explicit "I acknowledge and approve my split" checkbox And a versioned snapshot of the splits is stored at submission
Link Expiry and Reminder Workflow
Given a data request is created When no completion occurs before expiry Then the shortlink is invalidated at the configured TTL (default 7 days, range 1–30 days) and returns an Expired page with no data exposure And an admin may reissue a new shortlink with a new token; the old token remains invalid And up to 3 automated reminders are sent only if the request is incomplete at T+2 days, T+5 days, and 24 hours before expiry And all reminders and expiry events are logged with UTC timestamps and delivery statuses
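The reminder schedule above (T+2 days, T+5 days, and 24 hours before expiry, only while the request is incomplete) reduces to a small calculation. This sketch deduplicates overlapping reminders and drops any that would fire at or after expiry, which is how the "up to 3" bound falls out of short TTLs:

```python
from datetime import datetime, timedelta

def reminder_times(created_at: datetime, ttl_days: int = 7) -> list[datetime]:
    """Scheduled reminders: T+2d, T+5d, and 24h before expiry, deduplicated
    and filtered so none fires once the shortlink has been invalidated."""
    expiry = created_at + timedelta(days=ttl_days)
    candidates = {created_at + timedelta(days=2),
                  created_at + timedelta(days=5),
                  expiry - timedelta(hours=24)}
    return sorted(t for t in candidates if t < expiry)
```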
Watermarked Material Delivery
Given the request includes preview/downloadable materials (e.g., stems, artwork, press assets) When a verified recipient streams or downloads an asset from the portal Then the asset is delivered with a per-recipient watermark (audible for audio previews, visible overlay for images/PDFs) and a forensic identifier embedded And access is denied to unverified users and after link expiry And each stream/download is logged with UTC timestamp, IP, and recipient ID And watermarked files are not shareable via direct URLs (signed URLs expire within 10 minutes)
Consent Artifact Logging and Audit Export
Given a recipient submits their data and split acknowledgement When the submission is accepted Then the system stores an immutable consent artifact containing: UTC timestamp, IP address, User-Agent, device fingerprint, verified identity method, before/after field values, split snapshot, and a SHA-256 hash of the submitted payload And the artifact is linked to the contributor ID and context ID and is tamper-evident (hash chain with prior version) And an admin can export an audit bundle (machine-readable JSON plus human-readable PDF) within 2 seconds (p95) from the request detail view And the artifact is retained for at least 7 years per policy
Versioned Credits & Audit Trail
"As a compliance officer, I want a complete history of credit changes so that I can resolve disputes and demonstrate due diligence to partners."
Description

Version credits as they evolve, capturing who changed what and when, with diff views and rollback capability. Persist an immutable audit log of matches, verifications, overrides, and approvals to support disputes, compliance, and partner audits. Expose version metadata in exports and APIs so downstream systems can reconcile changes. Ensure audit data is retained per workspace policies and is searchable by contributor, track, or release.

Acceptance Criteria
Auto-Versioning on Credit Edit
Given a user with edit permission updates any credit field on a track or release When they save the changes Then a new version is created with a unique versionId, incremented versionNumber, createdAt (UTC ms), and editorUserId And the version stores a full, normalized snapshot of credits (contributors, roles, shares, PRO/CAE/IPI, canonical profile ids) And the splitTotal must equal 100.00%; otherwise the save is blocked with a validation error And prior versions remain immutable and readable And concurrent edits to the same base version cause the later save to fail with 409 and a prompt to reload
Human-Readable Diff Between Versions
Given two credit versions Vx and Vy are selected for comparison When the diff view is opened Then field-level changes are highlighted (additions in green, removals in red, modifications in yellow) And before/after values are shown for names, roles, percentages, and identifiers per contributor And unchanged sections are collapsed by default with an expand option And a summary shows counts of added, removed, and modified items And the diff renders within 500 ms for up to 200 contributors
Rollback to Prior Credit Version
Given a user with Editor or Admin role selects a prior version Vn When they confirm a rollback action Then a new version Vm is created that exactly matches Vn’s snapshot (Vn is not altered or deleted) And an audit event of type ROLLBACK is appended with actor, timestamp, sourceVersionId, and targetVersionId And the operation is blocked with a descriptive error if required identifiers would be missing or if a retention lock prevents rollback
Immutable Audit Log of Matches, Verifications, Overrides, and Approvals
Given any of the following occurs: alias match, identifier verification, split override, approval, or rollback When the action completes Then an audit entry is appended containing eventId, eventType, entityType, entityId, actor (userId or service id), timestamp (UTC), origin (UI/API), and requestId And the entry links to before/after version hashes or snapshot references And audit entries are write-once and cannot be edited or deleted by any role And the log is hash-chained (SHA-256); chain verification fails if any entry is altered And GET /audit/verify returns integrity status valid for an untampered log
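The SHA-256 hash chain above can be sketched as follows (illustrative structure; the production log would persist entries append-only rather than in a Python list, and canonicalization of the entry payload is an assumption):

```python
import hashlib
import json

def chain_hash(prev_hash: str, entry: dict) -> str:
    """SHA-256 over the previous link's hash plus the canonicalized entry."""
    payload = prev_hash + json.dumps(entry, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode()).hexdigest()

def append_event(log: list[dict], entry: dict) -> None:
    """Append an audit entry, linking it to the previous record's hash."""
    prev = log[-1]["hash"] if log else "0" * 64   # genesis value for the first entry
    log.append({"entry": entry, "hash": chain_hash(prev, entry)})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every link; altering any earlier entry breaks all later hashes."""
    prev = "0" * 64
    for rec in log:
        if rec["hash"] != chain_hash(prev, rec["entry"]):
            return False
        prev = rec["hash"]
    return True
```

This is what lets GET /audit/verify report integrity: tampering with one entry invalidates its own hash and every hash downstream of it.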
Expose Version Metadata in Exports and APIs
Given a user exports credits as CSV or JSON When the export is generated Then each record includes versionId, versionNumber, versionCreatedAt (UTC), editorUserId, and a changeSummary And the API exposes GET /credits/{id}/versions and GET /credits/{id}/versions/{versionId} returning the same metadata and full snapshot And ETag/If-None-Match is supported per version resource, where ETag equals versionId And downstream systems can reconcile by comparing versionId or versionNumber in exports and API responses
Retention Policy Enforcement and Searchability
Given a workspace retention policy R years is configured When audit entries exceed R years Then they are purge-eligible and removed by a scheduled job that writes a non-destructive tombstone with eventId, purgeAt, and retentionPolicyId And audit and version data newer than R years remain fully accessible And search supports filters by contributorId, trackId, releaseId, date range, eventType, and actor And queries return within 2 seconds for up to 50,000 events with pagination And access control ensures users can only query data within their workspace; unauthorized access returns 403
Credits Export & Webhooks
"As an operations manager, I want verified credits to flow automatically to press pages and delivery partners so that we reduce manual work and prevent ingest failures."
Description

Provide structured export of verified credits and splits to internal modules (e.g., AutoKit press pages) and external partners via API, CSV, and standard schemas where applicable. Offer webhooks/events for key state changes (e.g., credits verified, splits approved, conflict opened/cleared) to trigger downstream workflows. Include field-level mapping, validation reports, and retryable deliveries to ensure reliability and alignment with distributor ingestion requirements.

Acceptance Criteria
AutoKit Press Page Credits Sync
Given a release with all track credits marked "Verified" and splits marked "Approved" When AutoKit requests credits via the internal Credits Export API using the release ID Then the API responds with HTTP 200 and a JSON payload containing for each track: track_id, isrc, version_hash, last_verified_at, and an array of contributors with canonical_contributor_id, display_name (with diacritics), normalized_name, role_code, instrument_codes (optional), pro, ipi_cae, isni (if available), and split_share_percent And the sum of split_share_percent for each track equals 100.00% And the payload excludes contributors or splits not in "Verified/Approved" state And the response p95 latency is <= 500 ms for payloads <= 1 MB And AutoKit reflects the updated credits within 60 seconds of verification via webhook-triggered refresh or polling
CSV Export for Distributor Ingestion
Given a user with export permission selects CSV export for a release whose credits are Verified and splits Approved When the export is generated Then the file is UTF-8 encoded and adheres to RFC 4180 (proper quoting/escaping) And the header exactly matches the selected distributor template And each row represents one track–contributor–role with columns: release_id, track_id, isrc, upc (optional), canonical_contributor_id, contributor_name, role_code, ipi_cae, pro, split_share_percent (two decimals), rights_type And per-track split_share_percent totals exactly 100.00 after applying the defined rounding rules And diacritics are preserved in contributor_name And the row count equals the number of approved contributor-role associations And if any required field is missing or fails format validation, the export is blocked and a validation report is produced listing row number, field, and error message
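RFC 4180 quoting and escaping come for free from the stdlib csv module; a sketch with an abbreviated header (the full distributor template has more columns than shown here):

```python
import csv
import io

HEADER = ["release_id", "track_id", "isrc", "contributor_name",
          "role_code", "split_share_percent"]

def export_rows(rows: list[dict]) -> str:
    """RFC 4180 CSV: CRLF line endings, minimal quoting, commas/quotes escaped."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=HEADER, lineterminator="\r\n")
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Fields containing commas, quotes, or line breaks are quoted automatically, and writing the buffer out as UTF-8 preserves diacritics in contributor names without any extra handling.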
Webhook Events for Credit State Changes
Given a client has an active webhook subscription with a shared secret When a release's credits are marked Verified Then an event credits.verified is enqueued within 5 seconds with payload including release_id, version_hash, last_verified_at, and actor_id When a release's splits are marked Approved Then an event splits.approved is enqueued within 5 seconds with payload including release_id, per-track split totals, and version_hash When a credit conflict is opened Then an event conflict.opened is enqueued within 5 seconds with payload including conflict_id, release_id, affected_track_ids, and reason_code When a credit conflict is cleared Then an event conflict.cleared is enqueued within 5 seconds with payload including conflict_id, release_id, resolution_code, and resolver_id And for each release, events are delivered in the order they occurred And each event includes idempotency_key and occurred_at (ISO 8601 UTC)
Retryable Webhook Delivery with Signing
Given a subscribed webhook endpoint and shared secret When an event is delivered Then the HTTP request includes headers: X-TrackCrate-Event, X-TrackCrate-Delivery-Id, X-TrackCrate-Signature (HMAC-SHA256), and X-TrackCrate-Timestamp And the signature verifies against the exact request body using the shared secret And on 2xx response, the delivery is marked succeeded And on network error, timeout (>10s), or non-2xx response, the delivery is retried with exponential backoff at approximately 1m, 5m, 15m, 1h, 6h (max 5 attempts) And each retry reuses the same Delivery-Id and idempotency key and preserves the payload And a dashboard/API is available to view delivery attempts and trigger a manual retry And after max retries, the subscription is flagged and a notification is sent to the owner
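A subscriber's side of the signature check above might look like this sketch. Hex encoding of the digest is an assumption (the spec only fixes HMAC-SHA256 over the exact request body); the constant-time comparison guards against timing attacks:

```python
import hashlib
import hmac

def sign(secret: bytes, body: bytes) -> str:
    """HMAC-SHA256 over the exact request body, hex-encoded."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_signature(secret: bytes, body: bytes, header_value: str) -> bool:
    """Compare against X-TrackCrate-Signature in constant time."""
    return hmac.compare_digest(sign(secret, body), header_value)
```

Because the signature covers the exact bytes received, subscribers must verify before parsing: re-serializing the JSON and signing that would break on whitespace or key-order differences.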
Field Mapping Configuration and Validation Reports
Given an export destination/template is selected When configuring field-level mapping Then users can map internal fields (e.g., canonical_contributor_id, ipi_cae) to destination fields and save a named, versioned mapping And the saved mapping is applied to exports and emitted as mapping_manifest.json alongside each export And running an export produces a validation report with counts of errors and warnings by field, and per-record entries with record identifier and messages And exports are blocked on errors and proceed with warnings, with warnings included in the report And the validation report is downloadable as JSON and CSV and retained for at least 90 days
Standard Schema Exports (DDEX/CWR) Compliance
Given a partner template requiring a standard schema is selected When exporting a release's verified credits and approved splits Then the system generates a payload conforming to the selected standard (e.g., DDEX RIN XML, CWR flat file) and passes validation against the standard's official schema/validator with zero errors And mandatory identifiers (e.g., ISRC for tracks; IPI/CAE for contributors when available) are populated or flagged as errors in the validation report if missing And contributor names preserve diacritics and include normalized forms where allowed by the standard And the file meets the standard's encoding and line-ending requirements (e.g., UTF-8 for XML; specified encoding for CWR) And at least one sandbox ingest test with a partner using the exported file completes without rejection

PrimeTime Send

Automatically times each nudge to land in the recipient’s local high‑response window, learned from past opens, clicks, and approvals. Adapts to weekdays, daylight saving shifts, and regional holidays. You get faster decisions with fewer pings and less guesswork.

Requirements

Global Timezone & Locale Resolution
"As a project manager collaborating across time zones, I want nudges aligned to each recipient’s true local time so that messages land during their working hours without me tracking time differences."
Description

Resolve and maintain each recipient’s current IANA timezone and locale to schedule nudges in accurate local time. Ingest multiple signals (contact profile, last interaction IP on TrackCrate shortlinks/AutoKit pages, email client headers, calendar invites) to derive timezone with confidence scoring, persist per contact, and auto-update on detected travel or daylight saving transitions. Support multiple emails per contact, organization-level defaults, and manual override. Expose a normalized service for downstream schedulers, ensure DST correctness via tzdb updates, and log changes for auditability. Integrates with TrackCrate’s contact model and messaging pipeline so PrimeTime Send can reliably map planned send times to recipients’ actual local clocks.

Acceptance Criteria
Multi‑Signal Timezone Derivation with Confidence and Fallback
Given a contact has multiple timezone and locale signals with timestamps from contact profile, last shortlink/AutoKit IP, email client headers, and calendar invites And recency weighting favors signals within the last 7 days and same-channel interactions When the resolver computes a timezone and locale Then it selects a single IANA timezone and BCP 47 locale with a numeric confidence score between 0 and 1 And confidence is computed deterministically for identical inputs And if confidence >= 0.70 the selected timezone and locale are persisted on the contact and, where applicable, on the specific email identity And if confidence < 0.70 the organization default timezone and locale are returned and persisted with confidence 0 and reason "fallback" And the resolution result is exposed via the normalized service with fields timezone, locale, confidence, sources[], last_updated
Travel Detection and Auto‑Update with Change Log
Given a contact currently resolved to Australia/Sydney with confidence >= 0.70 And within 48 hours a new interaction occurs from an IP that geolocates to Europe/Berlin with an offset change of >= 3 hours When the resolver processes the new signal Then the contact’s timezone updates to Europe/Berlin with updated confidence And a change log entry is written with old_timezone, new_timezone, detected_at, reason "travel", confidence_before, confidence_after, and sources And pending scheduled nudges for this contact are recalculated to preserve the intended local hour in the new timezone And the update and reschedule occur within 5 minutes of signal ingestion
DST Correctness and tzdb Update Compliance
Given a contact in America/New_York with a recurring 09:30 local nudge And an upcoming DST transition occurs as defined by tzdb When the DST transition occurs Then the nudge is sent at 09:30 local clock time the day before, the day of, and the day after the transition without duplicates or skips And a single send within a missing or ambiguous hour is resolved using standard tzdb rules to the correct UTC time And the platform updates to the latest IANA tzdb within 48 hours of a new release and records tzdb_version in system metadata And resolution responses include the tzdb_version used
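Keeping a 09:30 wall-clock send stable across a DST transition is a matter of composing the local datetime first and converting to UTC afterwards, so the tzdb offset in force on that date applies. A sketch using zoneinfo (which resolves ambiguous/missing hours via PEP 495 fold semantics; `fold=0`, the pre-transition side, is the default):

```python
from datetime import date, datetime, time, timezone
from zoneinfo import ZoneInfo

def send_at_utc(day: date, local: time, tz: str) -> datetime:
    """UTC instant for a wall-clock time on a given day; zoneinfo applies tzdb rules."""
    local_dt = datetime.combine(day, local, tzinfo=ZoneInfo(tz))
    return local_dt.astimezone(timezone.utc)
```

The inverse approach — storing a fixed UTC time and reusing it daily — is exactly what produces the off-by-one-hour sends the criterion above rules out.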
Per‑Email Identity Resolution and Fallbacks
Given a contact has two emails e1 and e2 with differing resolved timezones and locales And a nudge is addressed to e2 When the scheduler requests resolution for this send Then the resolver returns e2’s timezone and locale And if e2 has no resolved timezone/locale, the contact-level values are returned And if neither exists, the organization defaults are returned with confidence 0 and sources ["fallback"] And the selected values are persisted on the appropriate record (email identity or contact)
Manual Override Precedence and Auditability
Given an admin with permissions sets a manual override to timezone Asia/Tokyo and locale ja-JP for a contact via UI or API When the override is saved Then the resolver returns the override for all subsequent requests regardless of auto-detected signals And automatic updates are suppressed until the override is cleared And clearing the override resumes automatic updates on the next valid signal And every change (set or clear) is logged with actor, channel (UI/API), old_value, new_value, timestamp, and reason "manual_override" And logs are queryable via the audit API and UI within 5 minutes of the change
Normalized Resolution Service API Contract and Performance
Given a GET /v1/time-resolve request with a valid contact_id or email address and organization context When the lookup succeeds Then the service responds 200 within 150 ms at p95 and 500 ms maximum latency with JSON containing timezone (IANA), locale (BCP 47), confidence (0..1), sources[], last_updated, tzdb_version, observes_dst (boolean) And when the contact/email is unknown, the service responds 200 with organization defaults, confidence 0, sources ["fallback"], and not_found=true And input validation errors return 400 for malformed identifiers and 404 for unknown organization And responses include cache-control headers permitting 60s caching when no manual override is active
Scheduler Mapping, Reschedule on Change, and Idempotency
Given the scheduler supplies a target local window (e.g., 09:00–11:00 local) for a contact’s next nudge And the resolver provides the contact’s current timezone When the scheduler requests a UTC send_at for a specific date Then the mapping to UTC yields a send time that falls within the specified local window And if the contact’s timezone changes before send, the system recalculates send_at to preserve the same local window without creating duplicate sends And an idempotency key ensures a single send per logical nudge across timezone updates And all mappings and recalculations are logged with a correlation_id for auditability
Response Window Learning Model
"As an indie artist, I want TrackCrate to learn when my collaborators tend to respond so that my nudges reach them when they’re most likely to act."
Description

Learn individualized high-response windows per recipient using historical opens, clicks on TrackCrate shortlinks, AutoKit approvals, and reply timestamps. Model weekday patterns, weekend differences, recency-weighted signals, and seasonality to predict a probability curve over 24 hours by day-of-week. Provide confidence scores, minimum-data thresholds, and progressive fallbacks (team-, label-, region-level heuristics) for cold starts. Continuously update with new events, handle outliers, and cap send frequency to avoid fatigue. Expose an inference API returning the next best send window, justification, and expected lift. Store only necessary aggregates to reduce PII risk while enabling accurate scheduling.
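One way to sketch the recency-weighted 7×24 matrix described above; the half-life decay is an assumed weighting scheme, not a mandated one, and real events would carry richer features.

```python
def build_matrix(events, half_life_days=30.0):
    """events: (weekday 0-6, hour 0-23, age_days) engagement tuples.
    Returns a 7x24 matrix; each day's row sums to 1.0 (uniform if no data)."""
    m = [[0.0] * 24 for _ in range(7)]
    for wd, hr, age in events:
        m[wd][hr] += 0.5 ** (age / half_life_days)  # recency-weighted count
    for wd in range(7):
        total = sum(m[wd])
        m[wd] = [v / total for v in m[wd]] if total else [1 / 24] * 24
    return m

# Two recent Tuesday 09:00 events outweigh an older Tuesday 14:00 event
events = [(1, 9, 2), (1, 9, 10), (1, 14, 40), (5, 20, 5)]
m = build_matrix(events)
assert abs(sum(m[1]) - 1.0) < 1e-9   # per-day bins normalize to 1.0
best_hour = max(range(24), key=lambda h: m[1][h])
print(best_hour)  # 9
```

Storing only these hourly aggregates (rather than raw timelines) is also what the data-minimization criterion below asks for.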

Acceptance Criteria
Per-Recipient 24h Probability Curve by Day-of-Week
Given a recipient has ≥ 30 qualifying engagement events (opens, TrackCrate shortlink clicks, AutoKit approvals, or email replies) across ≥ 10 distinct local-calendar days When the learning job runs Then the system stores a 7×24 probability matrix with non-negative values and each day’s bins summing to 1.0 ± 0.05 And returns a confidence score in [0,1] that is ≥ 0.70 And records model_version and last_trained_at timestamps And the matrix is computed in the recipient’s resolved timezone
Cold Start with Progressive Fallbacks
Given a recipient has < 30 qualifying events or < 10 distinct local-calendar days of history When an inference is requested Then the system selects fallbacks in priority order: team → label → region → global And returns fallback_level, sample_size_used, and confidence ≤ 0.50 And returns a valid next_send_window derived from the selected fallback And logs that a fallback was applied with correlation to request_id
Inference API: Next Best Send Window and Justification
Given a POST to /v1/prime-time/infer with recipient_id, now_utc, and optional blackout_windows and timezone override When processed under normal load Then the API responds 200 with p95 latency ≤ 300 ms And the payload includes next_send_window.start and .end as ISO 8601 with timezone, expected_lift (0–100%), confidence (0–1), justification (top contributing hours and signals), model_version, request_id, and fallback_level (if any) And schema validation passes and times are in the recipient’s local wall-clock And if no window is available due to fatigue cap or blackout, responds 409 with reason and earliest_eligible_at
Continuous Learning and Outlier Handling
Given new engagement events arrive via stream (opens, clicks, approvals, replies) When an event is ingested Then per-recipient aggregates are updated and eligible for inference within ≤ 15 minutes And events flagged as temporal outliers (≥ 3× MAD from median local-hour behavior) are down-weighted so that any single outlier contributes ≤ 10% to the next-day hour ranking And in a synthetic test with 100 midday events and 1 midnight event, the top predicted hour shifts by < 1 local hour
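The MAD-based down-weighting can be illustrated with a simplified sketch; the production model would score events on more than the bare local hour.

```python
def mad_weights(hours, cap=0.1):
    """Down-weight events whose local hour is >= 3 MAD from the median hour.
    hours: list of event local hours. Returns one weight per event."""
    s = sorted(hours)
    median = s[len(s) // 2]
    devs = sorted(abs(h - median) for h in hours)
    mad = devs[len(devs) // 2] or 1.0  # guard against zero MAD
    return [cap if abs(h - median) >= 3 * mad else 1.0 for h in hours]

# The synthetic test from the criteria: 100 midday events, 1 midnight event.
# The outlier is capped so it cannot move the top-hour ranking.
hours = [12] * 100 + [0]
w = mad_weights(hours)
print(w[0], w[-1])  # 1.0 0.1
```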
Recency Weighting, Seasonality, DST, and Regional Holidays
Given a recipient has events spanning > 90 days When the model computes weights Then events from the most recent 30 days contribute ≥ 60% of total signal weight (if ≥ 15 events exist in that window) And weekly seasonality is modeled so day-of-week effects differ when supported by data (p-value ≤ 0.05) And across a DST transition, top predicted local hour remains within ±1 wall-clock hour of the pre-shift top hour And on regional public holidays where historical response is ≥ 20% lower vs adjacent weekdays, the model reduces baseline probability for that day by ≥ 15% (else no holiday adjustment is applied)
Fatigue Cap and Send Spacing
Given a recipient’s recent nudge history When an inference is requested Then no more than 3 nudges are scheduled within any rolling 7-day window And a minimum spacing of 6 hours is enforced between nudges And if the cap/spacing prevents a send in the next 48 hours, the API returns 409 with reason=fatigue_cap and earliest_eligible_at And the decision is logged with recipient_id hash, rule_triggered, and next review time
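A minimal check implementing the cap and spacing rules above; the function name and the history representation are hypothetical.

```python
from datetime import datetime, timedelta, timezone

MAX_PER_WINDOW = 3              # nudges per rolling 7-day window
WINDOW = timedelta(days=7)
MIN_SPACING = timedelta(hours=6)

def check_fatigue(sent_at, proposed):
    """sent_at: prior send datetimes. Returns (allowed, reason)."""
    recent = [t for t in sent_at if proposed - t < WINDOW]
    if len(recent) >= MAX_PER_WINDOW:
        return False, "fatigue_cap"
    if any(abs(proposed - t) < MIN_SPACING for t in sent_at):
        return False, "min_spacing"
    return True, None

now = datetime(2025, 6, 10, 12, 0, tzinfo=timezone.utc)
history = [now - timedelta(days=1), now - timedelta(days=3), now - timedelta(days=6)]
print(check_fatigue(history, now))  # (False, 'fatigue_cap')
```

When the check fails, the API layer would translate the reason into the 409 with `reason=fatigue_cap` and compute `earliest_eligible_at` from the oldest in-window send.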
Data Minimization and PII Safeguards
Given storage of learning artifacts When inspecting persisted structures Then only aggregated hourly bins, exponential moving averages, sample sizes, and hashed recipient identifiers are stored And no raw email content, IP addresses, or full raw timestamps older than 7 days are retained And all stored artifacts are encrypted at rest and access is audited And upon a deletion request for a recipient, all related aggregates are purged within ≤ 24 hours and the inference API returns 404 for that recipient thereafter
Holiday- and DST‑Aware Scheduling Engine
"As a label coordinator, I want nudges to avoid local holidays and handle DST changes so that I don’t annoy collaborators or miss their best response times."
Description

Schedule nudges into predicted high-response windows while accounting for regional public holidays, observances, and daylight saving shifts. Integrate a holidays service keyed by recipient locale and a tzdb-backed clock to avoid sending during off-hours created by DST transitions. Provide rules for deferring to the next viable window on holidays, and allow per-project overrides (e.g., “send even on holidays”). Batch and sequence multi-recipient sends so each lands in local prime time, with idempotent job queuing and retries. Ensure deliverability-safe rate limiting and compatibility with TrackCrate’s email/notification senders and tracking pixels/links.

Acceptance Criteria
Holiday Deferral to Next Viable Window
Given a recipient in time zone America/New_York with locale US-NY And the predicted high-response window is 09:00–11:00 local on 2025-07-04 And 2025-07-04 is a public holiday per the holidays service When the nudge is scheduled without overrides Then the scheduled send time is 2025-07-07 09:00 America/New_York (next non-holiday weekday window start) And no send is queued on 2025-07-04 And job metadata contains defer_reason=holiday and holiday_code=US_FED_INDEPENDENCE_DAY And an audit log entry exists with code=nudge_deferred_holiday referencing the recipient and project
Per-Project Holiday Override
Given project settings have send_even_on_holidays=true And a recipient with locale US-CA has a holiday on 2025-11-27 And the predicted window is 13:00–15:00 local on 2025-11-27 When the nudge is scheduled Then the scheduled send time is within 13:00–15:00 local on 2025-11-27 And job metadata contains holiday_override_applied=true And no holiday deferral log entry is created
DST Transition Safety
Given time zone Europe/Berlin And the predicted window is 02:00–03:30 local on 2025-03-30 (spring-forward day) And 02:30 local does not exist due to DST start When scheduling with target 02:30 Then the scheduled send time is 03:00 local (first valid time within the window) And job metadata contains adjust_reason=dst_nonexistent And time calculations use the IANA tzdb zone for conversion
Given time zone America/New_York And the predicted window is 01:00–02:30 local on 2025-11-02 (fall-back day) And 01:30 occurs twice due to DST end When scheduling with target 01:30 Then exactly one job is queued for the selected UTC instant corresponding to 01:30 (first occurrence) And no duplicate send occurs in the repeated hour And job metadata contains adjust_reason=dst_ambiguous_resolved with the chosen UTC instant
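Both DST cases map directly onto PEP 495 fold semantics in Python's `zoneinfo`; a sketch of classifying a target local time (in the ambiguous case, `fold=0` also selects the first occurrence, as the criteria require):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def classify(naive, tz):
    """Classify a naive local time as 'unique', 'ambiguous' (fall-back overlap),
    or 'nonexistent' (spring-forward gap), using PEP 495 fold semantics."""
    d0 = naive.replace(tzinfo=tz, fold=0)
    d1 = naive.replace(tzinfo=tz, fold=1)
    if d0.utcoffset() == d1.utcoffset():
        return "unique"
    # Offsets differ for both gaps and overlaps; a UTC round-trip only
    # reproduces the wall-clock time when the local time actually exists.
    roundtrip = d0.astimezone(timezone.utc).astimezone(tz)
    return "ambiguous" if roundtrip.replace(tzinfo=None) == naive else "nonexistent"

berlin = ZoneInfo("Europe/Berlin")
ny = ZoneInfo("America/New_York")
print(classify(datetime(2025, 3, 30, 2, 30), berlin))  # nonexistent
print(classify(datetime(2025, 11, 2, 1, 30), ny))      # ambiguous
print(classify(datetime(2025, 6, 1, 9, 0), berlin))    # unique
```

For `nonexistent`, the scheduler would advance to the first valid instant in the window; for `ambiguous`, queueing the `fold=0` conversion yields exactly one job at the first occurrence.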
Multi-Recipient Sequencing with Deliverability Rate Limits
Given a batch of 1,000 recipients across multiple time zones each with an individualized predicted window And deliverability rate limits configured as 200 emails/minute per domain and 1,000 emails/minute global When the batch is scheduled at 2025-09-02T10:00:00Z Then each recipient’s send is queued to land within their local predicted window (±5 minutes) And in any 1-minute interval no more than 200 emails are sent per domain and no more than 1,000 emails globally And sequencing distributes sends within each window to respect limits And if rate limits would push a send outside its window, the job is deferred to the next viable window and marked with defer_reason=rate_limit
Idempotent Queueing and Safe Retry
Given a dedupe key computed as hash(project_id, recipient_id, campaign_id, content_signature) When two schedule requests with the same dedupe key are received within 24 hours Then only one send job exists in the queue and the second request returns status=duplicate_noop And subsequent duplicate requests do not create additional jobs or reschedule the existing one And on transient send failure (e.g., SMTP 4xx) the job retries up to 3 times with exponential backoff (5m, 15m, 45m) while preserving the dedupe key And on permanent failure (e.g., SMTP 550) no retry is attempted and the final state is failed with reason=permanent
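The dedupe key and backoff schedule above can be sketched directly; SHA-256 is an assumed hash choice, since the criteria only require a stable hash of the four inputs.

```python
import hashlib

def dedupe_key(project_id, recipient_id, campaign_id, content_signature):
    """Stable key per the criteria: the same four inputs always produce the
    same key, so duplicate schedule requests collapse to one queued job."""
    raw = "|".join([project_id, recipient_id, campaign_id, content_signature])
    return hashlib.sha256(raw.encode()).hexdigest()

def backoff_minutes(attempt):
    """Exponential backoff 5m, 15m, 45m for retry attempts 1..3."""
    return 5 * 3 ** (attempt - 1)

k1 = dedupe_key("proj1", "rcpt9", "camp3", "sig-abc")
k2 = dedupe_key("proj1", "rcpt9", "camp3", "sig-abc")
print(k1 == k2, [backoff_minutes(a) for a in (1, 2, 3)])  # True [5, 15, 45]
```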
Holiday Service Integration and Fallback Behavior
Given a recipient with locale GB-SCT and time zone Europe/London When querying the holidays service for 2025-08-04 Then the day is treated as a holiday if the service marks “Summer Bank Holiday (Scotland)” and the send is deferred per rules And if the holidays service is unavailable or errors, the engine proceeds without holiday deferral for that date and logs warning code=holiday_service_unavailable with a correlation_id And when no locale is provided, the engine derives region from time zone country mapping; if mapping fails, it defaults to no-deferral and logs code=locale_unresolved
Tracking and Sender Compatibility on Deferred Sends
Given a nudge with tracking pixel and shortlinks enabled And the send is deferred due to holiday or rate limiting When the message is ultimately sent via TrackCrate’s email/notification sender Then the outbound content includes the tracking pixel and wrapped links And open and click events are attributed to the original campaign_id and recipient_id And the tracking shortlinks resolve and record click events correctly And scheduling/defer metadata is included in send logs without altering deliverability headers (SPF/DKIM/DMARC remain valid)
Send Orchestration & Overrides UI
"As a producer managing releases, I want clear controls and visibility into when each nudge will go out so that I can trust PrimeTime but still override it when needed."
Description

Add PrimeTime controls in the Send Nudge flow: a toggle to enable PrimeTime, per-recipient predicted send times with reason codes and confidence, bulk scheduling summary, and an override to send now or pick a specific time. Support deadlines (send by X), quiet hours, and per-contact preferences. Allow editing or canceling queued sends, and show a timeline of planned deliveries across time zones. Surface fallbacks when data is insufficient and display expected impact. Integrate with TrackCrate assets by attaching trackable shortlinks and AutoKit press pages to queued nudges, preserving versioned rights metadata and watermarked download rules.

Acceptance Criteria
PrimeTime Toggle In Send Nudge Flow
Given I am in the Send Nudge flow with at least two recipients And PrimeTime is toggled Off When I toggle PrimeTime On Then per-recipient predicted local send times populate within 500 ms And the bulk scheduling summary updates to reflect counts by date and time zone And manual date/time inputs are disabled by default When I navigate to Attach Assets and return Then the PrimeTime toggle state and current predictions persist unchanged
Per-Recipient Predictions With Reasons & Confidence
Given a recipient with three or more past interactions and a known time zone When PrimeTime predictions are generated Then the UI displays the predicted local timestamp, time zone offset, a reason code (e.g., "High open rate Tue 09:00–11:00"), and a confidence score from 0–100% with a bucket label (High ≥70, Medium 40–69, Low <40) And if a regional holiday or daylight saving transition affects the window, the reason includes the adjustment (e.g., "Avoid local holiday", "DST shift +1h") And the predicted time falls within the recipient’s allowed days/hours
Bulk Scheduling Summary With Deadline "Send By X"
Given PrimeTime is On and a global deadline "Send by <timestamp UTC>" is set When scheduling is computed Then no recipient is scheduled after the deadline And recipients with predicted times beyond the deadline are pulled forward to the nearest earlier acceptable slot respecting quiet hours and preferences And the bulk summary shows totals by date and time zone, and a count of recipients adjusted due to the deadline
Override Send Now Or Specific Time
Given PrimeTime is On for selected recipients When I choose "Send Now" Then those recipients’ nudges are dispatched within 60 seconds and removed from the queue When I choose "Schedule Specific Time" and select a timestamp Then the selected recipients are scheduled at that exact timestamp in their local time zone if specified per-recipient, or in the sender’s time zone if applied globally, and each item displays an Override badge And overrides supersede PrimeTime predictions while still enforcing do-not-contact rules
Quiet Hours And Per-Contact Preferences Compliance
Given recipients have quiet hours and preference settings configured When PrimeTime schedules or I manually schedule Then no send is scheduled within each recipient’s quiet hours or on blocked days And if a "send by" deadline conflicts with quiet hours, the schedule shifts to the last allowable slot before the deadline; if none exists, the recipient is flagged "Unscheduled—deadline conflict" And the UI displays a badge per affected recipient indicating "Quiet hours shift" or "Preference conflict"
Edit/Cancel Queued Sends And Cross-Time-Zone Timeline
Given there are queued sends across multiple time zones When I open the timeline view Then I see a chronological list grouped by recipient local date with corresponding UTC times When I edit a queued send time or cancel it Then the change is saved and reflected in the timeline and bulk summary within 2 seconds, and an audit entry records user, action, timestamp, and before/after values And cancelling removes the item from the queue and prevents delivery
Fallbacks, Expected Impact, And Asset Attachment Integrity
Given some recipients lack sufficient interaction data When PrimeTime is On Then those recipients are scheduled using a default safe-window policy and labeled "Fallback" And the expected impact panel displays predicted response uplift versus baseline (e.g., +12%) with confidence bucket and population size used When I attach trackable shortlinks and an AutoKit press page to the nudge Then queued sends maintain associations to the correct versioned rights metadata, and watermarked downloads follow configured expiration rules on delivery And test-previewing a queued message resolves links correctly and records a test click without counting toward live metrics And if an attachment is missing or invalid, scheduling is blocked for affected recipients and a specific error is shown
Privacy, Consent & Data Retention Controls
"As a label admin, I want PrimeTime to honor consent and minimize personal data usage so that we stay compliant and maintain trust."
Description

Ensure PrimeTime complies with privacy regulations and recipient expectations. Respect per-contact communication consent, global opt-outs, and Do Not Track signals. Minimize stored personal data by retaining aggregated response patterns instead of raw event timelines where feasible, with configurable retention periods and secure deletion. Provide admin export of contact data, consent logs, and PrimeTime inferences. Document automated decision-making, let recipients opt out of behavioral timing, and gate features in restricted regions if required. Implement access controls, audit logs, and encryption in transit/at rest for all event data used by the learning model.

Acceptance Criteria
Enforce Per-Contact Consent and Global Opt-Outs in PrimeTime Send
- Given a contact is marked Global Opt-Out, when a user attempts to schedule a PrimeTime nudge to that contact, then the scheduling action is blocked with a visible Opted out reason and no nudge is queued.
- Given an API request to POST /nudges targets an opted-out contact, when processed, then the API responds 403 with error_code CONSENT_REQUIRED and no job is created.
- Given a contact’s channel-level consent is Email: Allowed and In-App: Denied, when scheduling via Email, then scheduling succeeds; when scheduling via In-App, then scheduling is blocked with reason Channel not consented.
- Given any blocked attempt due to consent, when the action completes, then an audit log entry is recorded with actor_id, contact_id, reason CONSENT_BLOCK, timestamp, and IP address.
Honor Do Not Track and Behavioral Timing Opt-Out
- Given a recipient’s client sends DNT=1 or tracking is disabled, when the recipient opens/clicks a message, then no open/click events are stored and the behavioral model is not updated for that contact.
- Given a contact opts out of Behavioral Timing in the preference center, when future nudges are scheduled, then PrimeTime uses the organization’s default send window and sets schedule_reason behavioral_opt_out.
- Given behavioral timing opt-out is enabled, when an admin views the consent log, then an entry BEHAVIORAL_TIMING_OPT_OUT with timestamp and source=Recipient is present.
- Given a recipient toggles behavioral timing opt-out, when checked via GET /contacts/{id}, then the state reflects the change within 10 minutes.
Configurable Data Retention and Secure Deletion for Event Data
- Given org retention for raw event data is set to 30 days and aggregated features to 365 days, when any raw event exceeds 30 days, then it is irreversibly deleted within 24 hours and is not retrievable via UI or API.
- Given a purge cycle runs, when completed, then an audit log DATA_PURGE records item_count, storage_location, job_id, started_at, finished_at.
- Given retention is shortened (e.g., 60→30 days), when saved, then an immediate catch-up deletion job is queued and completes within 24 hours for out-of-policy data.
- Given retention is lengthened, when saved, then previously deleted raw events are not restored and APIs return empty for out-of-policy time ranges.
- Given cold backups exist, when deletion of corresponding data occurs, then backup purge completes within 7 days and status is visible to Org Admins.
Admin Export of Contact Data, Consent Logs, and PrimeTime Inferences
- Given an Org Admin requests an export for a specific contact, when the export runs, then a downloadable package (JSON and CSV) is produced within 30 minutes containing: contact profile, consent history, DNT/opt-out states, aggregated response patterns, and the last 90 days of scheduling decisions without raw event timestamps.
- Given the export is ready, when the admin is notified, then a signed URL is provided over TLS, expires in 24 hours, and the file’s SHA-256 checksum matches the manifest.
- Given a non-admin user requests the same export, when processed, then the system returns 403 and records an audit event EXPORT_DENIED with actor_id and contact_id.
- Given the exported JSON, when validated, then required fields (contact_id, consents[], dnt, opt_out_behavioral_timing, inference_summary, generated_at, org_id) are present and correctly populated.
Restricted Region Gating for Behavioral Timing
- Given Restricted Region Gating is enabled at the org level, when a recipient’s region is in the restricted list (e.g., EEA), then PrimeTime disables behavioral timing, schedules using the default window, and sets schedule_reason region_gated, with a UI badge Gated by region.
- Given a recipient’s region cannot be reliably resolved, when scheduling, then the system defaults to gating off behavioral timing and uses the default window.
- Given the restricted regions configuration is updated, when saved, then scheduling services reflect the change within 15 minutes, verified via a health/config endpoint.
- Given an admin audits a send, when viewing the schedule details, then the gating rule and region source (IP/locale/org setting) are displayed.
Access Controls, Encryption, and Audit Logs for PrimeTime Event Data
- Given role-based access control is configured, when a user with role Org Admin or Data Steward accesses PrimeTime event data/inferences, then access is granted; when any other role attempts access, then a 403 is returned and an audit event ACCESS_DENIED is recorded.
- Given any inter-service or client connection handling PrimeTime data, when inspected, then TLS 1.2+ with HSTS is enforced and weak ciphers are rejected.
- Given event data at rest in analytics storage, when checked, then encryption uses AES-256 with keys managed by KMS and key rotation occurs at least every 90 days, evidenced by KMS logs.
- Given any read/export of PrimeTime event data, when completed, then an immutable audit log captures actor_id, resource, fields_accessed, purpose_code, timestamp, and is retained for 365 days; attempts to tamper are detected and alerted.
Automated Decision-Making Transparency and Recipient Opt-Out Path
- Given PrimeTime sends an email or in-app nudge, when the message is delivered, then it includes a visible link About PrimeTime scheduling pointing to documentation describing automated decision-making and data sources; the link returns HTTP 200 and the content is localized to the recipient’s locale when available.
- Given a sender uses the compose/scheduling UI, when PrimeTime timing is selected, then a disclosure tooltip is shown with a link to the same documentation.
- Given a recipient follows the notice link and opts out of behavioral timing, when redirected to the preference center, then the contact’s opt_out_behavioral_timing flag is set and takes effect within 10 minutes.
- Given compliance review, when scanning outbound messages, then at least 99% within a rolling 24h window include the required notice link, with failures logged for remediation.
Performance Analytics & Controlled Rollout
"As a product owner, I want to measure how PrimeTime affects response and approval speed so that we can iterate and roll it out confidently."
Description

Deliver dashboards and reports to quantify PrimeTime impact: open/click rates, approval rates, median time-to-decision, and send-time distributions compared to immediate sends. Enable holdout experiments and per-project A/B toggles with configurable treatment ratios and significance indicators. Provide cohort filters (timezone, role, project) and export to CSV/BI tools. Include delivery logs with planned vs actual send times, chosen window rationale, and fallback triggers. Support phased rollout flags to enable PrimeTime by workspace, project, or user segment, with safety levers to revert quickly.

Acceptance Criteria
Analytics Dashboard: PrimeTime Impact Metrics vs Baseline
Given a workspace with ≥1,000 sends containing both PrimeTime and Immediate cohorts within a selected date range, When the user opens Analytics and selects that range, Then the dashboard shows per-cohort open rate, click rate, approval rate, median time-to-decision (minutes), and a send-time distribution histogram.
Given independently computed reference queries, When metric values are compared, Then rates match within ±0.1 percentage points, medians within ±1 minute, and histogram bin counts sum to total sends.
Given the user enables "Compare to Immediate", When the toggle is on, Then absolute difference and relative uplift are displayed for each KPI.
Given events include multiple recipient interactions, When rates are computed, Then opens and clicks are unique per recipient-thread, approvals are deduped to final approval per thread, and time-to-decision is measured from first send to final approval timestamp.
Given the user adjusts the date range or timezone display, When applied, Then all metrics recompute using recipient local time and render within 2 seconds p95 for up to 50k sends.
Experiment Configuration: Holdout and A/B Toggles
Given a project settings page, When a user sets a treatment ratio (e.g., 70/30) and saves, Then subsequent eligible sends are assigned PrimeTime vs Immediate at 70/30 within ±2% over any consecutive 1,000 assignments.
Given deterministic assignment, When the same recipient-thread is evaluated repeatedly, Then it remains in the same variant unless the ratio is explicitly changed.
Given a workspace-level holdout (e.g., 10%), When enabled, Then projects inherit the holdout by default and may override at the project level.
Given the per-project PrimeTime On/Off toggle, When toggled, Then new sends honor the setting within 60 seconds and an audit log entry is recorded.
Given low traffic (fewer than 100 assignments/week), When ratio enforcement would exceed ±5%, Then the UI displays a warning that variance may be high.
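Deterministic, sticky assignment is typically implemented by hashing the unit id with an experiment salt; a sketch under that assumption (salt and ids are hypothetical):

```python
import hashlib

def assign_variant(recipient_thread_id, salt, treatment_ratio=0.7):
    """Deterministic: the same id+salt always maps to the same bucket;
    changing the salt re-randomizes. Bucket is uniform in [0, 1)."""
    digest = hashlib.sha256(f"{salt}:{recipient_thread_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return "primetime" if bucket < treatment_ratio else "immediate"

# Stability: repeated evaluation of the same thread yields the same variant
v1 = assign_variant("thread-42", "exp-2025-q3")
v2 = assign_variant("thread-42", "exp-2025-q3")
print(v1 == v2)  # True

# Over many assignments the split approaches the configured 70/30 ratio
n = 10_000
share = sum(assign_variant(f"t{i}", "exp-2025-q3") == "primetime" for i in range(n)) / n
print(round(share, 2))
```

Because assignment depends only on the id and salt, no per-recipient state needs to be stored, and changing the ratio leaves previously assigned units as stable as the new boundaries allow.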
Significance Indicators for KPIs
Given A/B data for open rate, click rate, and approval rate, When sample sizes per variant ≥ 500 or Wilson 95% CI half-width ≤ 2pp, Then the system computes a two-sided z-test at α=0.05 and displays one of: Insufficient Data, Not Significant (p≥0.05), or Significant (p<0.05), along with uplift and 95% CI.
Given median time-to-decision, When comparing variants, Then the system uses Mann–Whitney U at α=0.05 and displays effect direction and 95% CI for the median difference.
Given multiple KPIs, When indicators are shown, Then each KPI’s test and result are independently labeled and reflect the current filters/date range.
Given zero events in a variant for a KPI, When computing significance, Then the status shows Insufficient Data and no p-value is displayed.
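The two-sided z-test for rate KPIs can be sketched with the textbook pooled two-proportion statistic; the product may of course use a different implementation.

```python
import math

def two_proportion_ztest(x1, n1, x2, n2, alpha=0.05):
    """Two-sided z-test for a difference in rates (e.g. open rate A vs B).
    Returns (significant, p_value), using a normal-CDF approximation via erf."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return False, 1.0  # zero variance: report Insufficient Data upstream
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_value < alpha, p_value

# 30% vs 24% open rate on 1,000 sends per variant
sig, p = two_proportion_ztest(300, 1000, 240, 1000)
print(sig, round(p, 4))
```

The sample-size gate in the criterion (n ≥ 500 per variant, or a tight Wilson interval) would be checked before running the test at all, mapping to the "Insufficient Data" state.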
Cohort Filters: Timezone, Role, Project
Given multi-select filters for timezone(s), role(s), and project(s), When any combination is applied, Then all charts and tables update to reflect only the filtered cohort and a visible filter summary is shown.
Given up to 100k sends in range, When filters are applied or cleared, Then results return within 3 seconds p95 and 6 seconds p99.
Given the user shares the dashboard URL, When opened in a new session, Then the same filters, date range, and compare toggle state are restored.
Given conflicting filters that yield zero results, When applied, Then the UI displays a zero-state with no errors and offers to clear filters.
Data Export: CSV and BI Endpoint
Given any applied filters and date range, When the user clicks Export CSV, Then a UTF-8 CSV downloads (or a link is provided) within 60 seconds for datasets up to 500k rows and includes KPI columns, cohort attributes, variant breakdowns, and a header with generation timestamp and filter summary.
Given an authenticated request to /analytics/exports with the same parameters, When the dataset exceeds 100k rows, Then an asynchronous job is created and returns a job ID; when completed, the API provides a paginated JSON or CSV download with identical values to the UI and a 24-hour expiring link.
Given the Export History view, When the user opens it, Then each job shows status (Queued, Running, Succeeded, Failed), row count, requested filters, requester, start/end times, and a retry option for failed jobs.
Delivery Logs: Planned vs Actual with Rationale & Fallbacks
Given any nudge scheduled by PrimeTime, When its Delivery Log is opened, Then the log displays planned send time, actual send time, recipient local timezone, learned high-response window, and the difference in minutes.
Given the decision engine selected a window, When rationale is requested, Then the log lists top features (e.g., past open hours, weekday preference, holiday adjustment), their normalized contributions, the chosen window, and a reason code.
Given a fallback trigger occurs (e.g., missing timezone, SLA breach, rate limit, service error), When recorded, Then the log shows the trigger type, timestamp, fallback path (immediate or next-best window), and outcome.
Given DST or regional holiday effects, When applicable, Then the log notes the adjustment and the rule source.
Given search and filters on the Delivery Logs page, When filtering by project, recipient, date, or reason code, Then results return within 2 seconds p95 and can be exported to CSV.
Phased Rollout Flags, Kill Switch, and Auto-Revert
Given workspace, project, and user-segment flags, When PrimeTime is enabled for a targeted subset, Then only that subset receives PrimeTime scheduling while others use immediate sends, and the effective scope is visible in settings.
Given the global Kill Switch is toggled off, When services poll flags, Then all PrimeTime scheduling halts within 60 seconds and any unsent scheduled nudges revert to immediate send; an audit entry is created with actor, scope, and affected count.
Given model/service degradation (latency p95 > 2000 ms or error rate > 2% for 5 consecutive minutes), When auto-safety rules evaluate, Then PrimeTime automatically reverts to immediate sends, emits an on-call alert, and surfaces a banner in Analytics.
Given any flag change, When reviewed later, Then Audit Log shows who, what, when, before/after values, and scope, and can be exported.

Quiet Hours Shield

Respects per‑recipient do‑not‑disturb windows and locale holidays, auto‑shifting nudges to the next acceptable slot. Keeps relationships healthy and compliant while ensuring messages still arrive when they’ll be welcomed.

Requirements

Recipient Quiet Hours
"As a label project manager, I want to set quiet hours per contact so that our nudges arrive when they’re welcome and don’t strain relationships."
Description

Enable per-recipient do-not-disturb windows with day-of-week schedules, multiple time blocks, and exception dates, all evaluated in the recipient’s local time. Provide workspace defaults with contact-level overrides and channel granularity (email and in-app). Validate overlapping windows, handle daylight saving transitions, and surface a clear summary in contact profiles. Expose UI controls and API endpoints so rules can be created in-app or imported with contacts. Changes must propagate to the messaging pipeline in real time to ensure nudges for approvals, stem requests, and release milestones are deferred appropriately across TrackCrate.

Acceptance Criteria
Workspace Defaults Applied Without Overrides
Given workspace default quiet-hours schedules are configured for email and in-app, and a contact has no overrides, and the contact’s IANA timezone is set When a nudge (approval, stem request, or release milestone) is triggered during a default quiet-hours block in the contact’s local time Then the message for that channel is deferred to the first minute after the block ends in the recipient’s local time And when triggered outside quiet hours, the message sends immediately And removing all default quiet hours results in messages sending without deferral
Contact-Level Override With Channel Granularity
Given workspace defaults exist and a contact defines per-channel quiet hours with multiple time blocks and exception dates, evaluated in the contact’s local time When a nudge is triggered during the contact’s email quiet hours but outside the contact’s in-app quiet hours Then the email nudge is deferred and the in-app notification is delivered immediately When a nudge is triggered on a defined exception date Then quiet hours are bypassed for both channels on that date When a channel is set to inherit Then the workspace default applies to that channel only
Overlapping Quiet Windows Validation
Rule: Quiet-hour blocks for a given contact/channel/day must not overlap or duplicate, including cross-midnight cases
Rule: UI attempts to save overlapping blocks are blocked with an inline error identifying the conflicting ranges
Rule: API requests with overlapping blocks return HTTP 400 with error code "quiet_hours_overlap" and the conflicting ranges
Rule: Schedules that wrap midnight (e.g., 22:00–02:00) are valid if they do not overlap any other block on adjacent days for that channel
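A sketch of the overlap check, splitting cross-midnight blocks into two half-open intervals within a single day; interactions with adjacent-day blocks, which the last rule also requires, are omitted for brevity.

```python
def to_intervals(block):
    """(start_min, end_min) within a 1440-minute day; a cross-midnight block
    (e.g. 22:00-02:00) splits into two half-open intervals."""
    s, e = block
    return [(s, e)] if s < e else [(s, 1440), (0, e)]

def has_overlap(blocks):
    """blocks: list of (start_minute, end_minute) for one contact/channel/day.
    Returns True if any two blocks overlap or duplicate."""
    ivs = sorted(iv for b in blocks for iv in to_intervals(b))
    # After sorting, an overlap exists iff some interval ends after the next begins
    return any(a[1] > b[0] for a, b in zip(ivs, ivs[1:]))

hm = lambda h, m=0: h * 60 + m
print(has_overlap([(hm(9), hm(11)), (hm(13), hm(15))]))  # False
print(has_overlap([(hm(22), hm(2)), (hm(1), hm(3))]))    # True (01:00-02:00 clash)
print(has_overlap([(hm(22), hm(2)), (hm(9), hm(11))]))   # False
```

On a validation failure, the API layer would return the 400 with `quiet_hours_overlap` and echo back the conflicting intervals found by this check.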
Daylight Saving Time Transition Handling
- Given a contact in a DST-observing timezone with quiet hours 22:00–07:00 local, When the spring-forward transition occurs (skipping 02:00), Then messages between 22:00 and 07:00 wall-clock are deferred and resume at 07:00 local
- When the fall-back transition occurs (repeating 01:00), Then both 01:00–02:00 occurrences are treated as within quiet hours if the block covers that range
- And all evaluations use the contact’s IANA timezone, not server time
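These DST criteria fall out of evaluating wall-clock time in the contact's IANA zone rather than applying a fixed offset. An illustrative check (the zone and block are hypothetical; `zoneinfo` picks the correct UTC offset on either side of the transition):

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

TZ = ZoneInfo("America/New_York")      # illustrative DST-observing zone
BLOCK = (time(22, 0), time(7, 0))      # cross-midnight quiet block

def in_quiet_hours(instant_utc: datetime) -> bool:
    """Wall-clock membership test in the recipient's IANA zone; zoneinfo
    applies the pre- or post-transition offset automatically, so both
    repeated fall-back hours land inside the block if it covers them."""
    t = instant_utc.astimezone(TZ).time()
    return t >= BLOCK[0] or t < BLOCK[1]
```

On the 2024-03-10 spring-forward night in New York, instants before and after the skipped hour still compare against the same 22:00–07:00 wall-clock window.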
Real-Time Rule Propagation to Messaging Pipeline
Given a contact’s quiet hours are updated via UI or API When the change is saved Then new triggers are evaluated using updated rules within 5 seconds And queued but not yet sent messages are re-evaluated within 5 seconds and deferred or released accordingly And no message is sent during a newly introduced quiet block
Quiet Hours Rule Management via API and Import
- Rule: POST /contacts with quiet_hours creates or updates the contact and associated quiet-hour rules atomically
- Rule: PUT/PATCH /contacts/{id}/quiet-hours upserts per-channel schedules, multiple blocks, and exception dates
- Rule: Input uses ISO-8601 dates for exceptions, HH:MM 24h for times, and IANA timezone identifiers; invalid inputs yield HTTP 400 with field-specific errors
- Rule: Overlapping or duplicate blocks are rejected with error code "quiet_hours_overlap"
- Rule: Responses return the canonical normalized schedule (merged, ordered, timezone-annotated)
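The "canonical normalized schedule" in the last rule might be produced like this sketch, which orders HH:MM blocks and merges overlapping or abutting ranges; cross-midnight wrapping and the timezone annotation are omitted for brevity, and the helper names are hypothetical:

```python
def _mins(hhmm: str) -> int:
    h, m = hhmm.split(":")
    return int(h) * 60 + int(m)

def _hhmm(mins: int) -> str:
    return f"{mins // 60:02d}:{mins % 60:02d}"

def normalize_blocks(blocks: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Sort same-day HH:MM blocks and merge overlapping or abutting ranges
    into the ordered canonical form the API response returns."""
    spans = sorted((_mins(s), _mins(e)) for s, e in blocks)
    merged: list[list[int]] = []
    for s, e in spans:
        if merged and s <= merged[-1][1]:      # overlaps or abuts previous
            merged[-1][1] = max(merged[-1][1], e)
        else:
            merged.append([s, e])
    return [(_hhmm(s), _hhmm(e)) for s, e in merged]
```

Returning the merged form also makes the response idempotent: re-submitting the canonical schedule normalizes to itself.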
Contact Profile Quiet Hours Summary Accuracy
Given a contact has channel-specific quiet hours, exceptions, and timezone set When viewing the contact profile Then the Quiet Hours summary shows per-channel schedules in the contact’s local timezone and indicates whether each channel is override or inherits workspace default And cross-midnight blocks are labeled clearly (e.g., "22:00–07:00 next day") And exception dates are displayed or summarized with a count and detail on hover/click And changes to rules update the summary immediately without page reload
Automatic Time Zone Detection
"As a campaign coordinator, I want TrackCrate to detect recipient time zones automatically so that I don’t have to maintain them manually and messages send at sensible local times."
Description

Automatically infer and maintain each recipient’s time zone using declared profile data, recent shortlink clicks, AutoKit page visits, download events, and email interaction IPs, with confidence scoring and last-seen timestamps. Adjust for daylight saving changes and drift, and allow manual override at the contact level. Provide a fallback when unknown (e.g., sender’s default or campaign-level requirement) and emit events when time zone confidence changes so scheduled messages can be recalculated. Store normalized IANA identifiers to ensure consistent scheduling across TrackCrate’s notification services.

Acceptance Criteria
Multi-Signal Time Zone Inference and Confidence Scoring
Given a contact without a manual time zone override and no existing inferred time zone And the platform confident_tz_min threshold is set to 0.7 When the system receives within the last 30 days at least two agreeing signals from distinct sources indicating the same time zone (e.g., shortlink click IP and AutoKit visit both map to Europe/Berlin) Then inferred_time_zone is set to "Europe/Berlin" (IANA) And confidence is calculated and stored >= 0.7 And confidence_sources includes the contributing sources and their weights And the change is audit-logged with timestamp and correlation_id
Last-Seen Timestamp and Source Attribution
Given a qualifying time zone signal with timestamp t from source S for a contact When processed Then time_zone_last_seen_at equals t and time_zone_last_seen_source equals S And older or duplicate events do not reduce time_zone_last_seen_at (idempotent and monotonic) And processing out-of-order signals updates last_seen only if t is newer than the stored value
DST Transition and Drift Stabilization for Local-Time Scheduling
Given a contact with inferred_time_zone "America/Los_Angeles" and a nudge scheduled for 09:00 local time When a daylight saving time transition changes the UTC offset Then the next run still occurs at 09:00 local time using the updated offset without changing the stored IANA identifier And when signals indicate a different time zone, the system switches inferred_time_zone only after at least two agreeing signals are observed over a span of ≥ 6 hours within a rolling 7-day window; otherwise it retains the current time zone and records pending_evidence
Manual Time Zone Override Persistence
Given a contact with manual_time_zone_override set to "Asia/Tokyo" When new signals indicate any other time zone Then the stored time zone remains "Asia/Tokyo" with source "manual" And inferred_candidate_time_zone is updated separately with confidence And clearing the override promotes the highest-confidence candidate to inferred_time_zone and emits a contact.timezone.override_cleared event
Fallback Time Zone Resolution Order
Given a message is to be scheduled for a contact whose time zone is unknown or confidence is below the confident_tz_min threshold When determining scheduling context Then the system uses campaign_time_zone if configured; otherwise uses sender_default_time_zone And the scheduling metadata records fallback_source as "campaign" or "sender" And no contact.timezone.updated event is emitted by fallback selection alone
Event Emission on Time Zone or Confidence Change
Given a contact with previous inferred_time_zone A and confidence CA When new evidence results in inferred_time_zone changing to B or confidence crossing the confident_tz_min threshold (either entering or leaving) Then the system publishes contact.timezone.updated with payload: contact_id, old_tz=A, new_tz=B, old_confidence=CA, new_confidence=CB, change_reason, occurred_at (UTC ISO-8601), and sources And all pending scheduled messages for the contact are enqueued for recalculation within 60 seconds of the event being published
IANA Identifier Normalization and Validation
Given any provided time zone input such as an alias (e.g., "US/Pacific") or a raw UTC offset (e.g., "UTC-8") When stored on the contact Then the system resolves and persists the canonical IANA identifier "America/Los_Angeles" And rejects non-IANA labels like "PST" with a validation error including suggested canonical alternatives And the stored value validates against the current IANA tz database version configured by the platform
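Python's `zoneinfo` validates identifiers but does not canonicalize aliases by itself, so a sketch needs an explicit alias table (the two entries shown are illustrative; a real deployment would derive the table from the tzdata "backward" link file). One deliberate deviation from the example above: raw offsets such as "UTC-8" are rejected here rather than mapped to a zone, since many zones share an offset and the mapping needs extra context:

```python
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

# Illustrative alias table; real deployments load tzdata's "backward" links.
ALIASES = {"US/Pacific": "America/Los_Angeles",
           "US/Eastern": "America/New_York"}

def canonicalize(tz_input: str) -> str:
    """Resolve known aliases to canonical IANA identifiers and reject
    abbreviations like 'PST' or raw offsets that zoneinfo cannot load."""
    name = ALIASES.get(tz_input, tz_input)
    try:
        ZoneInfo(name)  # raises if the identifier is unknown to tzdata
    except (ZoneInfoNotFoundError, ValueError):
        raise ValueError(f"not an IANA time zone: {tz_input!r}")
    return name
```

The `ValueError` path is where the API would attach suggested canonical alternatives before returning its field-specific 400.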
Locale Holiday Calendars
"As a marketer, I want TrackCrate to avoid local holidays for recipients so that we respect regional norms and improve engagement."
Description

Integrate public holiday calendars per country and region to suppress non-urgent sends on observed holidays for each recipient’s locale. Allow workspace-level policy to opt in/out by channel, define custom blackouts (e.g., label shutdowns), and create per-contact exceptions. Cache calendars, support multi-year lookahead, and degrade gracefully if a provider is unavailable. Expose the applied holiday rule in message previews and logs so users understand deferrals. This layer must compose with quiet hours and time zones to determine the final permissible send windows.

Acceptance Criteria
Suppress Non‑Urgent Sends on Recipient’s Observed Holiday
Given a recipient’s locale country and region are resolved and mapped to a public holiday calendar And a non-urgent message is scheduled within that locale’s observed public holiday When the permissible send window is calculated Then the send is deferred to the first datetime after the holiday that is not within quiet hours And the recipient’s time zone is used to determine holiday observance and quiet hours And the decision stores holiday name and calendar source for auditing
Channel Policy Opt‑In/Out for Holiday Suppression
Given a workspace policy enables holiday suppression for Email and Slack but disables it for SMS And the same message is scheduled to all three channels during a recipient holiday When channel policies are applied Then Email and Slack sends are deferred per holiday rules And SMS sends are delivered at the scheduled time And policy changes take effect within 60 seconds of being saved
Workspace Custom Blackout Windows
Given an admin defines a workspace-level blackout window with start, end, time zone, and channel scope And a non-urgent message is scheduled within that blackout When suppression rules are applied Then the send is deferred to the first datetime after the blackout that also avoids holidays and quiet hours And the blackout is unioned with public holidays (suppression applies if either matches) And the decision stores blackout identifier and name for auditing
Per‑Contact Exception Overrides
Given a contact has an explicit exception to allow sending on holidays And a non-urgent message is scheduled during a holiday for that contact’s locale When suppression rules are evaluated Then the message is sent at the scheduled time for that contact And quiet hours still apply to that contact And logs record that a per-contact exception bypassed holiday suppression
Calendar Caching and Multi‑Year Lookahead
Given holiday data is fetched for a country/region When the system builds or refreshes its cache Then holidays for the current year and the next two years are stored per locale/region And cached entries have a maximum TTL of 7 days before background refresh And per-recipient holiday lookup completes in ≤50 ms on cache hit (P95)
Provider Outage Graceful Degradation and Auditability
Given the holiday data provider is unavailable during evaluation When computing permissible send windows Then the system uses last-known cached holiday data if available And if no cache exists for the locale/region, the message proceeds without holiday suppression And previews and logs display a warning that holiday data was unavailable and which fallback was used
Composed Rules: Precedence, Preview Explanation, and Next Send Calculation
Given a case where holiday, custom blackout, quiet hours, and channel policy may all apply When the system determines the send outcome Then rule precedence is: per-contact exception (bypass) > channel opt-out (no holiday suppression) > union of custom blackout and public holiday > quiet hours And the next permissible send datetime is computed accordingly in the recipient’s local time And message previews and logs show the applied rule(s), reason code(s), and next send datetime (ISO 8601 with time zone)
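The precedence chain can be expressed as one small decision function. This is a sketch under two simplifications — custom blackouts and public holidays share a single bypass, and quiet hours are never bypassed (matching the per-contact exception criterion above); the field names are hypothetical:

```python
def send_decision(ctx: dict) -> str:
    """Apply the precedence: per-contact exception > channel opt-out >
    (custom blackout ∪ public holiday) > quiet hours."""
    suppressed = ctx.get("is_holiday", False) or ctx.get("in_blackout", False)
    bypass = (ctx.get("contact_exception", False)
              or not ctx.get("channel_opt_in", True))
    if suppressed and not bypass:
        return "defer:holiday_or_blackout"
    if ctx.get("in_quiet_hours", False):   # quiet hours always apply
        return "defer:quiet_hours"
    return "send"
```

The returned string doubles as the reason code the previews and logs are required to surface.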
Smart Send Scheduler
"As a label owner, I want outgoing messages to be automatically rescheduled to the next acceptable window so that I avoid disturbing collaborators while keeping projects moving."
Description

Introduce a central scheduling service that intercepts outbound nudges and computes the next acceptable delivery slot per recipient by combining quiet hours, holiday rules, sender business hours, message urgency, and delivery-by deadlines (e.g., expiring links). Maintain a durable queue with idempotency, deduplication, and retry policies, and expose APIs to preview, schedule, cancel, or force-send with policy checks. Provide fairness across campaigns, resolve collisions, and emit webhooks for state changes. Ensure compatibility with TrackCrate triggers (approvals, stem requests, press kit shares) and respect content expirations by alerting the sender if a message cannot be delivered before expiry.

Acceptance Criteria
Deferral During Recipient Quiet Hours and Locale Holidays
- Given a recipient with quiet hours 21:00–08:00 in their local timezone and a locale holiday on the send date, When a nudge is created at a blocked local time or date, Then the scheduler MUST NOT send immediately and MUST compute the next acceptable slot after both quiet hours and the holiday end
- Given the recipient timezone is America/Los_Angeles and the nudge is created at 22:15 local on a holiday, When the next business day begins at 08:00 local, Then scheduled_at MUST be set to 08:00 local and include the timezone offset, with deferral reasons ["quiet_hours","holiday"]
- When previewing the schedule for the same inputs, Then the preview API MUST return the same computed scheduled_at and reasons, and no delivery attempt occurs before that time
- When the recipient updates quiet hours or holiday rules before scheduled_at, Then the scheduler MUST recompute scheduled_at within 1 minute and update job state and reasons accordingly
Urgency- and Deadline-Aware Slot Selection
- Given sender business hours 09:00–18:00 sender local and recipient quiet hours 21:00–08:00 recipient local, When urgency = "normal", Then the selected slot MUST satisfy both recipient allowed windows and sender business hours
- Given the same constraints, When urgency = "urgent", Then sender business hours MAY be bypassed, but recipient quiet hours and locale holidays MUST still be respected
- Given a delivery_by deadline exists, When computing the next acceptable slot, Then scheduled_at MUST be <= delivery_by; otherwise the job MUST NOT be scheduled and MUST enter an undeliverable state (handled in the expiration criteria)
- When multiple acceptable slots exist before the deadline, Then the earliest slot that satisfies all active policies MUST be chosen deterministically
Content Expiration Alert When No Acceptable Slot Exists
- Given a message contains an expiring asset with expires_at and the next acceptable slot per policy is after expires_at, When the scheduler evaluates the job, Then the job MUST NOT be scheduled and MUST transition to state="undeliverable_expired" with reason="no_acceptable_slot_before_expiry"
- When the job transitions to undeliverable_expired, Then the sender MUST be alerted via in-app notification and email within 60 seconds, including the conflict details (expires_at, next_possible_slot)
- When a preview is requested for the same inputs, Then the preview MUST indicate deliverable=false and include the earliest possible slot after expiry plus the blocking policies
- The system MUST NOT attempt any send or provider API call for undeliverable_expired jobs
Durable Queue: Idempotency, Deduplication, and Retry
- Given a schedule request is submitted with idempotency_key K and an identical payload multiple times within 24 hours, When processed, Then exactly one job MUST be created; subsequent requests MUST return 200 with the same job_id and state
- Given two distinct requests with different idempotency keys but identical message payload and recipient within a 5-minute window, When deduplication is enabled, Then only one outbound delivery MUST occur and the duplicate job MUST be marked deduplicated_against=job_id
- Given a transient failure (e.g., HTTP 5xx) at delivery time, When within the allowed delivery window and before any deadline, Then the system MUST retry with exponential backoff (e.g., 1m, 5m, 15m) up to a configurable max_attempts and record attempts[]
- Given a non-retryable failure (e.g., policy_blocked, 4xx permanent), When encountered, Then the job MUST NOT be retried and MUST move to a terminal or waiting state consistent with the policy (e.g., state="policy_blocked")
- Outbound provider calls MUST include an idempotency token derived from job_id to guarantee exactly-once delivery despite retries
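The 1m/5m/15m retry cadence and the job-derived idempotency token might look like this sketch; the parameter defaults are chosen to reproduce the example delays above, and the helper names are illustrative:

```python
import hashlib
import random

def backoff_schedule(max_attempts: int, base: float = 60.0,
                     factor: float = 5.0, cap: float = 900.0,
                     jitter: float = 0.1) -> list[float]:
    """Delays (seconds) between retry attempts: 60s, 300s, then capped at
    900s, each with up to ±10% random jitter to avoid thundering herds."""
    return [min(base * factor ** n, cap) * (1 + random.uniform(-jitter, jitter))
            for n in range(max_attempts)]

def idempotency_token(job_id: str) -> str:
    """Stable token derived from job_id, so every retry of the same job
    presents the same token to the delivery provider."""
    return hashlib.sha256(job_id.encode()).hexdigest()[:32]
```

Because the token depends only on `job_id`, a provider that honors idempotency keys will collapse duplicate delivery attempts from retries into one send.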
Policy-Gated APIs: Preview, Schedule, Cancel, and Force-Send
- Given valid auth, When POST /scheduler/preview is called with recipient_id, message metadata, urgency, and delivery_by, Then 200 MUST return the computed scheduled_at (or undeliverable), active policy reasons, the timezone used, and whether force_send is permitted
- When POST /scheduler/schedule is called with a valid payload and idempotency_key, Then 201 MUST return job_id, state in {queued|deferred|scheduled|policy_blocked}, and scheduled_at if applicable, and it MUST be idempotent on repeated calls
- When POST /scheduler/{job_id}/cancel is called before send begins, Then the job MUST transition to state="canceled" and become ineligible for delivery; cancel after send start MUST be rejected with 409
- When POST /scheduler/{job_id}/force is called, Then the system MUST re-evaluate policies; it MAY bypass sender business hours and fairness limits but MUST NOT violate recipient quiet hours or locale holidays unless the recipient has an explicit "allow urgent during DND" flag; all overrides MUST be logged with actor and reason
- All API responses MUST include audit_id and be rate-limited; invalid inputs MUST return 4xx with structured error codes
Fairness and Collision Resolution Across Campaigns
- Given multiple campaigns with pending jobs targeting overlapping time windows, When dispatching at scale, Then per-campaign concurrency limits and weighted round-robin MUST ensure no campaign receives more than +10% share over peers of equal priority across a 15-minute window
- Given two jobs for the same recipient colliding on the same minute, When ordering is required, Then the system MUST prioritize by urgency desc, then created_at asc, then apply a small jitter (<=30s) to prevent simultaneous sends
- The system MUST enforce a per-recipient rate limit of at most 1 nudge per N minutes (configurable) regardless of campaign, and emit deferral reasons when applied
- When fairness prevents immediate send, Then the job MUST be deferred with a recomputed scheduled_at that preserves delivery_by constraints
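The per-recipient rate limit could be enforced with a last-sent map keyed by recipient; a sketch with illustrative names (a distributed deployment would back this with a shared store rather than in-process state):

```python
from datetime import datetime, timedelta

class RecipientRateLimiter:
    """At most one nudge per N minutes per recipient, regardless of campaign."""

    def __init__(self, min_gap_minutes: int):
        self.gap = timedelta(minutes=min_gap_minutes)
        self.last_sent: dict[str, datetime] = {}

    def allow(self, recipient_id: str, now: datetime) -> bool:
        """True and record the send, or False to defer with
        a "recipient_rate_limit" deferral reason."""
        last = self.last_sent.get(recipient_id)
        if last is not None and now - last < self.gap:
            return False
        self.last_sent[recipient_id] = now
        return True
```

A deferred job would then be rescheduled to `last + gap`, subject to the usual quiet-hours and delivery_by checks.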
TrackCrate Trigger Integration and State Webhooks
- Given TrackCrate events (approval granted, stem request, press kit share), When they fire, Then each MUST create or update a scheduler job with the correct recipient set, template reference, urgency, and any delivery_by derived from expiring assets
- For each job lifecycle, When state changes occur, Then signed webhooks MUST be emitted for {queued, deferred, scheduled, policy_blocked, sending, sent, failed, canceled, undeliverable_expired} in order, with retries and idempotency keys; consumers MUST be able to verify signatures
- When webhook delivery fails, Then the system MUST retry with backoff for at least 24 hours and move to a dead-letter topic after max attempts while preserving an audit trail
- All state transitions and policy evaluations MUST be persisted to an immutable audit log with actor, timestamp, reasons, prior_state, and new_state
Pre-Send Local Time Preview
"As a user preparing a send, I want to preview recipients’ local times and quiet-hour conflicts so that I can adjust timing before sending."
Description

Display each recipient’s current local time, next permissible send window, and any blocking rules directly in the compose and review flows for nudges, shortlink shares, and AutoKit invitations. Surface warnings when a send would be deferred, with one-click options to schedule, adjust content, or request an allowed override where permitted. Provide a compact summary for bulk sends, including estimated delivery windows by region and percentage of recipients affected. Ensure accessibility and support for mobile and desktop layouts.

Acceptance Criteria
Compose Flow: Real-Time Local Time and Quiet Hours Preview
- Given the compose flow is open with at least one recipient, When a recipient is added, removed, or edited, Then their current local time (with timezone offset) displays within 500ms and updates within 1s on further changes
- Given a recipient has configured quiet hours and locale holidays, When the compose flow loads or recipients change, Then the next permissible send window is calculated and shown per recipient with start–end timestamps
- Given a send would currently violate a recipient’s quiet hours or holiday, When the user views that recipient’s row, Then a blocking-rule label and reason code (e.g., Quiet Hours, Holiday) are visible
- Given the message type is nudge, shortlink share, or AutoKit invitation, When the type is switched, Then all previews (local time, next window, blocking rules) remain accurate for each type
- Given a recipient’s timezone is unknown or cannot be resolved, When the compose flow renders, Then the UI shows “Time unknown” with a prompt to schedule or collect the timezone, and immediate send is disabled for that recipient
- Given network latency or a stale cache, When data is older than 15 minutes, Then a “Last updated” timestamp and a Refresh control are visible and functional
Review Flow: Deferred Send Warning and One-Click Scheduling
- Given at least one recipient would be deferred by Quiet Hours Shield, When the user enters the review flow, Then a summary banner shows the count of affected recipients and the top blocking reasons
- Given the user clicks “Schedule to next allowed window”, When the action completes, Then the send is scheduled per recipient at their earliest permissible time and a confirmation shows the earliest and latest estimated delivery timestamps
- Given policy allows overrides for some recipients, When the user clicks “Request override”, Then only eligible recipients are included and ineligible ones remain deferred with clear labels
- Given the user clicks “Adjust content”, When adjustments are saved, Then the preview recalculates blocking rules and updates affected counts accordingly
- Given the user opts to “Split send”, When confirmed, Then recipients are partitioned into Send Now and Deferred groups with exact counts and editable lists
- Given an override causes an immediate send, When the send is dispatched, Then an audit entry is created with user, timestamp, recipients, reason, and policy reference
Bulk Send Summary: Regional Windows and Impact Percentage
- Given a bulk send with recipients across multiple timezones, When the compose or review summary panel opens, Then regional groupings (by timezone offset or locale) display with estimated delivery windows
- Given recipients are distributed across regions, When the summary is shown, Then the percentage of recipients affected by deferral is calculated to within ±1% and displayed
- Given a user expands a region, When clicked, Then a list of representative time windows and recipient counts is shown without exceeding 300ms render time for up to 1,000 recipients
- Given all recipients are currently permissible, When the summary loads, Then the panel shows “All recipients can be sent now” and hides deferral controls
- Given some recipients have unknown timezones, When the summary displays, Then they are included in an “Unknown” bucket with a count and guidance to collect timezones
Override Request: Permission-Gated Flow and Audit Trail
- Given organization policies define who may override quiet hours, When a user without permission opens the override dialog, Then the controls are disabled with an explanatory message
- Given a user with permission requests an override, When submitting the request, Then a reason field is required (minimum 10 characters) and per-recipient eligibility is validated
- Given an override is submitted, When processing completes, Then only eligible recipients are sent immediately; ineligible recipients remain scheduled and are annotated as “Override not permitted”
- Given an override is executed, When logs are generated, Then an audit record includes user ID, recipients, timestamps, reason, policy references, and IP address
- Given compliance notifications are enabled, When an override is granted, Then a notification is sent to the designated channel within 1 minute
Accessibility and Responsiveness: Mobile and Desktop Support
- Given a screen reader user in the compose or review flow, When navigating the local time preview and warnings, Then all elements have programmatic names, roles, and states and are announced clearly, meeting WCAG 2.1 AA
- Given a keyboard-only user, When tabbing through the UI, Then focus order is logical and all actions (schedule, adjust content, request override, split send) are reachable and operable via keyboard
- Given a color-vision-deficient user, When viewing warnings, Then color is not the sole indicator; icons and text labels convey state and contrast ratios meet AA
- Given a mobile device (viewport 320–768px), When the compose/review UI loads, Then the local time and next-window preview condense into a compact stack without horizontal scrolling and touch targets are at least 44x44dp
- Given a desktop device (viewport ≥1024px), When the UI loads, Then per-recipient previews align in a readable table with columns for Local Time, Next Window, and Blocking Rules, without overlapping content
- Given a list of up to 50 recipients, When the preview renders, Then time and window data appear within 400ms on desktop and 700ms on mobile on a warm cache
Timezone and Calendar Accuracy: DST, Holidays, and Cross-Midnight Windows
- Given recipients across timezones with upcoming DST changes, When calculating current local time and next permissible windows, Then DST offsets are applied using the IANA TZ database with no off-by-one-hour errors
- Given locale holidays for a recipient’s region, When today is a recognized holiday, Then the system treats the full day as blocked (unless policy exceptions apply) and computes the next permissible window accordingly
- Given quiet hours span midnight (e.g., 22:00–08:00), When now is within that span, Then the next window starts at 08:00 local time on the correct day and the UI indicates the date
- Given the time is within 90 seconds of a quiet-hours boundary, When calculating the next window, Then the display rounds to the nearest minute and updates automatically at the boundary
- Given a recipient’s locale uses 12-hour or 24-hour formats, When times are displayed, Then they follow the locale or user preference consistently across compose and review flows
- Given multiple blocking rules apply simultaneously (e.g., quiet hours and holiday), When showing the reason, Then the UI displays all applicable reasons ordered by priority: Holiday, Quiet Hours, Other
Quiet-Hour Batching for Campaigns
"As a PR manager, I want bulk sends to be batched by allowable windows so that large lists go out at optimal times per region without manual segmentation."
Description

For multi-recipient campaigns, automatically segment recipients into batches by time zone and allowed windows, queueing deliveries to hit optimal local times without manual list slicing. Honor per-recipient rules while maintaining campaign cohesion, rate limits, and throughput targets. Provide progress tracking, per-batch ETAs, and automatic re-queuing on transient failures. Ensure analytics attribute opens/clicks/downloads back to the original campaign regardless of staggered send times.

Acceptance Criteria
Timezone and Quiet-Hours Segmentation
Given a campaign with recipients spanning at least three IANA time zones and each recipient has a configured allowed send window and quiet hours, When the campaign is launched without manual segmentation, Then the system partitions recipients into batches such that every recipient in a batch shares an overlapping local send window of at least 30 minutes and no recipient is scheduled outside their allowed window, And each batch is tagged with its target local start time, time zone/offset range, and recipient count, And no recipient appears in more than one batch, And the total number of batches is the minimum needed to satisfy all recipients' allowed windows.
Optimal Local-Time Scheduling
Given a configured drift threshold of 10 minutes and a policy to start at the earliest allowed local time, When batches are queued, Then each batch's first delivery begins within recipients' allowed windows and within ±10 minutes of the batch's target local start time, And recipients crossing a quiet-hours boundary are shifted to the next acceptable slot automatically, And immediate-launch campaigns begin with batches currently in-window while out-of-window batches are deferred until their windows open, And no manual list slicing or user intervention is required to achieve the timing.
Holiday and Per-Recipient Overrides
Given locale holiday calendars mapped by recipient locale and per-recipient override rules (e.g., never send on holidays; shift to next business day 09:00–17:00), When a batch includes recipients with a holiday on the planned send date, Then those recipients are excluded from that batch and automatically rescheduled to the next allowed non-holiday window per their override, And the UI/API marks these as Deferred with reason=holiday and shows the new ETA, And if no acceptable window exists before the campaign end-by date, the recipients are marked Skipped with reason=window_unavailable and are not sent, And holiday/override handling does not delay or block other recipients in the batch.
Rate Limits and Throughput Targets
Given a provider rate limit L (messages/minute) and a workspace throughput target T (messages/hour), When the campaign is sending, Then the send engine ensures rolling 60-second throughput does not exceed L and overall campaign throughput meets T within ±10% when sufficient in-window capacity exists, And rate limiting is enforced per provider credential and per workspace to prevent bursts > L for more than 1 second, And pacing across batches preserves per-recipient window compliance while smoothing to meet T, And if T conflicts with recipients' windows or L, the system computes spillover and updates batch ETAs accordingly.
Progress Tracking and Batch ETAs
Given a running campaign with multiple batches, When viewing the campaign via API or UI, Then progress is reported per batch with counts for Queued, Sending, Sent, Deferred, Failed, and Retrying, and a campaign-level roll-up, And each batch exposes an ETA to completion that updates at least every 60 seconds and accounts for quiet hours, holidays, and rate limits, And event logs include state transitions with timestamps and reasons for deferrals/retries, And completed status is emitted only when 100% of recipients in the campaign are in a terminal state (Sent, Skipped, or Failed).
Transient Failure Auto-Retry and Re-queue
Given transient failure classifications including HTTP 5xx, 429, network timeouts, and provider timeouts, and a retry policy of up to 5 attempts with exponential backoff and jitter (initial delay 30s, max 5m), When such a failure occurs for a recipient, Then the recipient is re-queued respecting quiet hours and holidays, preserving original campaign and batch context, And retries do not exceed provider rate limits or violate per-recipient windows, And after max attempts, the recipient moves to Failed with last_error_code and last_error_category=permanent, And successful retries are attributed to the original campaign without creating duplicate sends.
Analytics Attribution Across Staggered Sends
Given tracking is enabled with campaign_id and recipient_id embedded in links, pixels, and download shortlinks, When opens, clicks, and asset downloads occur within the attribution window (e.g., 30 days) for messages from any batch of the campaign, Then all events are attributed to the original campaign_id in analytics and roll up across batches without double-counting per recipient, And UTMs and shortlinks resolve to the same campaign codes across batches, And campaign reports show totals and per-batch breakdowns that sum consistently, with zero variance between per-batch totals and campaign totals.
Audit and Compliance Logging
"As an admin, I want an audit trail of why and when messages were sent or deferred so that we can demonstrate compliance and resolve disputes."
Description

Persist an immutable audit record for each message decision, including recipient local timestamp, evaluated rules (quiet hours, holidays, overrides), scheduler calculations, and final delivery outcome. Capture user identity and justification for any override, and expose searchable logs with export to CSV/JSON for policy reviews. Provide retention controls and privacy safeguards to align with data protection obligations. Surface a human-readable explanation in message detail views so users understand why a send was delayed or delivered immediately.

Acceptance Criteria
Immutable Audit Record Per Message Decision
- Given a message decision event (immediate send, delay, suppress, or override), When the decision is finalized by the Quiet Hours Shield, Then the system writes an append-only audit record containing: messageId, recipientId, recipientLocalTimestamp, evaluatedRules (quietHours, holidays, overrides), schedulerInputs (timeZone, calendars, windows), schedulerCalculations (nextAcceptableSlot), finalOutcome (sent|delayed|suppressed|overridden), correlationId, and createdAt
- Given an existing audit record, When an attempt is made to modify or delete it via UI or API, Then the operation is rejected with 403 and the original record remains unchanged
- Given daily integrity verification, When the immutable log is validated, Then 100% of records pass integrity checks and each record includes contentHash and previousHash for tamper evidence
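The contentHash/previousHash tamper-evidence can be a straightforward hash chain; a sketch using SHA-256 over canonical JSON (the field names follow the record above, the helper names are hypothetical):

```python
import hashlib
import json

GENESIS = "0" * 64  # previousHash for the first record

def _digest(prev: str, body: dict) -> str:
    raw = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev + raw).encode()).hexdigest()

def append_record(log: list[dict], record: dict) -> dict:
    """Append-only audit entry carrying contentHash and previousHash."""
    prev = log[-1]["contentHash"] if log else GENESIS
    entry = {**record, "previousHash": prev,
             "contentHash": _digest(prev, record)}
    log.append(entry)
    return entry

def verify_chain(log: list[dict]) -> bool:
    """Recompute every link; editing any earlier record breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = {k: v for k, v in entry.items()
                if k not in ("previousHash", "contentHash")}
        if entry["previousHash"] != prev or entry["contentHash"] != _digest(prev, body):
            return False
        prev = entry["contentHash"]
    return True
```

A daily integrity job would run `verify_chain` over the stored log and alert on the first broken link.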
Override Identity and Justification Capture
Given a user attempts to override quiet hours for a message When they submit the action Then a non-empty justification (minimum 10 characters) is required and the audit record captures userId, role, timestamp, and IP/device fingerprint. Given an override action is logged When viewing the audit record Then the overrideJustification and approver identity are visible to users with Audit:Read permission. Given an override is attempted via API without justification When the request is processed Then the system returns 400 with error code "justification_required" and no audit record is written.
Human-Readable Explanation in Message Detail
Given a message detail view is opened When the message was delayed by Quiet Hours Shield Then the view shows an explanation including triggering rule(s), recipient local time, next acceptable slot (with time zone), and a plain-language summary of the decision. Given the message was delivered immediately When the detail view is opened Then the view shows that no quiet hours/holiday restrictions applied or that an override was authorized, matching the audit record values. Given the explanation is displayed When compared to the corresponding audit record Then rules, timestamps, time zones, and outcome match exactly.
Searchable Audit Log UI and API
Given audit logs exist for the last 90 days When a user with Audit:Read permission filters by date range, recipient, messageId, outcome, and actor Then the UI returns the first page within 2 seconds for up to 10,000 matching records and displays total count and pagination. Given the Audit Logs API is called with the same filters When the request is valid Then it returns 200 with a JSON array of records, supports cursor-based pagination, and includes responseTimeMs <= 2000. Given a user without Audit:Read attempts to search When the request is made Then access is denied with 403 and no data is leaked.
Export Audit Logs to CSV and JSON
Given a filtered audit log result set up to 100,000 records When the user requests export as CSV or JSON Then an export job is created and completes within 5 minutes, and a secure download link valid for 24 hours is provided. Given the exported file is downloaded When opened Then it contains exactly the filtered records, correct headers/field names, UTF-8 encoding, and timestamps in ISO 8601 with time zone offsets. Given the viewer lacks permission to view personal data When exporting Then personal data fields (email, IP, deviceId) are masked in the file per policy.
Retention Controls and Privacy Safeguards
Given workspace retention is configurable When an admin sets audit log retention to N days (30–730) Then the setting is saved, versioned, and takes effect for future automatic deletion jobs. Given records older than the retention period and not on legal hold When the nightly job runs Then the records are permanently deleted or anonymized per policy, and a deletion summary report is logged. Given a legal hold is placed by an admin with a reason When retention jobs run Then records matching the hold are preserved until the hold is lifted, and all access is logged. Given a non-admin views logs When accessing fields classified as personal data Then those fields are redacted (e.g., email as a***@domain.com) unless the viewer has Privacy:Unmask permission.
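The redaction rule above (email shown as a***@domain.com unless the viewer holds Privacy:Unmask) can be sketched as a field-level mask applied at read time; the field names here are assumptions:

```python
def mask_email(email: str) -> str:
    """Redact the local part, keeping its first character (e.g. a***@domain.com)."""
    local, _, domain = email.partition("@")
    if not local or not domain:
        return "***"
    return f"{local[0]}***@{domain}"

def redact_record(record: dict, viewer_can_unmask: bool) -> dict:
    """Mask personal-data fields unless the viewer holds Privacy:Unmask."""
    if viewer_can_unmask:
        return record
    out = dict(record)
    if "email" in out:
        out["email"] = mask_email(out["email"])
    for field in ("ip", "deviceId"):
        if field in out:
            out[field] = "***"   # fully suppressed rather than partially shown
    return out
```

Applying the mask on read (not on write) keeps the stored audit record complete for holders of the unmask permission.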
Access Control and Access to Logs Audited
Given audit logs contain sensitive information When a user attempts to access the Audit Logs UI or API Then access is permitted only to roles granted Audit:Read, and changes to retention or export settings require Audit:Admin. Given any read or export operation is performed When it completes Then a meta-audit entry is written capturing actor, action (view|search|export), filters used, record counts, and timestamp. Given repeated failed access attempts (>=5 within 10 minutes) When detected Then the system throttles the source and alerts admins per security policy.

Smart Escalation

Build step‑up paths that move from email to in‑app to SMS after 48 hours unopened (fully configurable). Throttle repeats, add one‑tap Snooze/Delegate, and optionally CC a manager at the final rung, driving progress without spamming inboxes.

Requirements

Visual Escalation Rule Builder
"As a label manager, I want to visually configure escalation paths that move from email to in‑app to SMS after 48 hours unopened so that collaborators progress work without me chasing them manually."
Description

Provide a drag-and-drop rule builder to configure step-up paths that progress from email to in-app to SMS with fully configurable delays (e.g., 48 hours unopened), conditions, and stop rules. Support multiple steps per flow, per-channel templates, audience targeting by project/release/role, business-hour windows, time zone alignment, and flow versioning with save/clone. Include preview and test-send modes using real TrackCrate entities (stems, artwork, press kits) and shortlinks with expiring, watermarked downloads. Enforce role-based access to create/edit/activate flows and ensure flows can be attached to releases, tasks, or asset approvals to drive timely collaboration without manual follow-ups.
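One way to think about the builder's output is a declarative flow definition that the scheduler executes and the validator checks before Save. This sketch uses hypothetical names, not TrackCrate's real data model:

```python
from dataclasses import dataclass

@dataclass
class Step:
    channel: str                  # "email" | "in_app" | "sms"
    template_id: str              # per-channel template reference
    delay_hours: int              # wait after the previous step fires
    condition: str = "unopened"   # advance only while this holds

@dataclass
class EscalationFlow:
    name: str
    version: int
    steps: list
    stop_rules: tuple = ("reply", "approval")
    business_hours: tuple = (9, 18)     # recipient-local window
    manager_cc_on_final: bool = False

    def validate(self):
        """Return the inline errors that would block Save in the builder."""
        errors = []
        for i, step in enumerate(self.steps):
            if not step.template_id:
                errors.append(f"step {i}: missing per-channel template")
            if step.delay_hours < 0:
                errors.append(f"step {i}: delay must be non-negative")
        return errors
```

With the flow as plain data, cloning a version is a copy plus a `version` bump, and the active version stays immutable while drafts are edited.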

Acceptance Criteria
Configure Multi‑Step Channel Escalation with Drag‑and‑Drop and Delays
Given a user with Flow Editor permission is in the rule builder, When they drag Email, In‑App, and SMS steps onto the canvas, Then the steps appear in the flow with sequential numbering. Given steps are on the canvas, When the user reorders steps via drag‑and‑drop, Then the new order persists after Save and on reload. Given the Email step is configured with condition "Unopened after 48 hours", When the initial email is sent, Then the next step is scheduled exactly 48 hours after send only if the email remains unopened. Given per‑channel templates are required, When a step is missing a template or required placeholder, Then Save is blocked and inline errors identify the missing fields. Given a valid configuration, When the user clicks Save, Then the flow is saved as a draft version with all step settings, delays, and conditions intact.
Apply Stop Rules, Throttling, Snooze/Delegate, and Manager CC
Given a stop rule "Stop on reply/approval" is enabled, When a recipient replies to any step or approves the targeted asset, Then all future steps for that recipient are cancelled and marked as Stopped by Rule in the log. Given a throttle window of N hours is configured, When a recipient has received a step within the last N hours, Then subsequent steps to that recipient are not sent until the window elapses. Given Snooze options are enabled, When a recipient taps Snooze for X days, Then the flow pauses for that recipient for X days and automatically resumes in their next business window. Given Delegate is enabled, When a recipient delegates to a teammate with an allowed role, Then future steps target the delegate and the delegation appears in the audit trail. Given the final step has Manager CC enabled, When the final step is dispatched, Then the manager is CC’d on the message and CC details are visible in message metadata.
Audience Targeting by Project/Release/Role with Business Hours and Time Zone Alignment
Given a flow is attached to Release R and targets roles Producer and Designer, When the audience is built, Then only collaborators on R with those roles are included. Given business hours are set to 09:00–18:00 and time zone alignment is "Recipient Local", When a step is scheduled outside business hours, Then delivery is deferred to the next business window in the recipient’s local time. Given recipients span multiple time zones, When a step is scheduled for 10:00 local, Then each recipient receives between 10:00 and 10:15 local (if jitter is enabled) and delivery logs show localized timestamps. Given channel preferences and opt‑outs are enforced, When a recipient has opted out of SMS, Then they are excluded from SMS steps and the exclusion is logged.
Flow Versioning: Save, Clone, Activate, and Audit
Given an existing flow version v1, When the user clicks Clone, Then a new draft version v2 is created with identical steps and a new version ID. Given a draft version v2 is valid, When a user with Activator permission clicks Activate, Then v2 becomes Active and v1 is Archived without interrupting in‑flight runs on v1. Given an Active version exists, When an editor modifies the flow, Then changes are saved as a new Draft and the Active version remains immutable. Given version history is opened, When the user selects two versions, Then the system displays who changed what and when, including step/order/conditions/template diffs.
Preview and Test‑Send with Real Entities and Expiring, Watermarked Shortlinks
Given a flow has templates with entity placeholders, When the user selects a stem, artwork, or press kit for preview, Then the preview renders with real metadata and active shortlinks. Given test‑send mode is enabled and test recipients are specified, When the user triggers Test Send, Then only test recipients receive messages, messages are marked as Test, and no escalation schedules are created. Given shortlinks are configured with expiry and watermarking, When a test recipient opens a shortlink, Then access is logged, the asset is watermarked, and the link expires after the configured duration. Given a template references a missing field, When running preview or test‑send, Then the system surfaces a blocking validation error identifying the missing field.
Role‑Based Access Control for Flow Create/Edit/Activate
Given RBAC roles Viewer, Editor, Activator, and Admin, When a Viewer opens the builder, Then they can view but cannot edit, save, or activate. Given an Editor opens the builder, When they attempt to activate a draft, Then activation is denied with an explicit permission error. Given an Activator reviews a valid draft, When they click Activate, Then the flow activates and the action is recorded with user, timestamp, and version in the audit log. Given an unauthorized API client attempts to create/edit/activate, When the request is processed, Then the system returns 403 and no changes are persisted.
Attach Flows to Releases, Tasks, and Asset Approvals and Triggering
Given a flow is attached to Task T with trigger "On Assignment", When T is assigned to a user in the targeted role, Then step 1 is scheduled according to the configured delay and business hours. Given a flow is attached to Asset Approval A with stop rule "Stop on approval", When A is approved by any required approver, Then all pending steps for involved recipients are cancelled. Given a flow is attached to Release R with audience refresh policy "Include new matching collaborators", When a new collaborator with a targeted role is added to R, Then they are added to the audience for future steps. Given the flow status is Paused, When trigger events occur, Then no messages are sent and the events are logged as Skipped due to Pause.
Engagement Detection and Timers
"As a project coordinator, I want accurate detection of whether a recipient has engaged and timers that respect time zones and quiet hours so that escalations only happen when truly needed."
Description

Implement reliable engagement tracking and timers that determine when to step up a message. Define "opened/seen" across channels (email open or link click via TrackCrate shortlinks, in-app view, SMS link click) with deduplication and bounce handling. Start, pause, and resume per-recipient timers (e.g., 48 hours) that respect user time zones, weekends, and quiet hours. Stop escalation on any qualifying engagement or task activity, and handle edge cases such as email forwarding, disabled images, and soft bounces by falling back to link-click signals. Store events for analytics and auditing while respecting privacy settings.

Acceptance Criteria
Email engagement with images disabled and deduplication
Given an email is sent with per-recipient tracking pixel and shortlinks When the recipient opens with images blocked and later clicks any tracked shortlink Then record exactly one qualifying email engagement for that recipient and stop any active escalation timer within 60 seconds Given multiple open or click events occur for the same email within 7 days When events are processed Then deduplicate so only the first counts toward stopping escalation and the rest are stored as non-qualifying events Given no click occurs and images remain blocked When evaluating engagement after the timer window Then do not mark the recipient as engaged; a silent open pixel with images blocked is not an engagement signal
In-app view or task activity stops escalation
Given a Smart Escalation step is pending for a recipient When the recipient views the message in-app (web or mobile) or completes a qualifying task activity (Reply, Comment, Mark Done, Snooze, Delegate) Then stop the escalation flow for that message-recipient, cancel the active timer within 60 seconds, and record an Engagement event with source in_app or task Given the recipient later engages via another channel When processing events Then do not restart or alter the already stopped escalation and deduplicate the later event as non-qualifying
SMS link click engagement and quiet hours
Given an SMS is sent with a per-recipient shortlink and a 48-hour escalation timer is active When the recipient clicks the shortlink Then record an Engagement event with source sms_click and stop the timer within 60 seconds Given recipient quiet hours are configured as 22:00–07:00 local time and weekends are excluded When computing the 48-hour timer Then pause the timer during quiet hours and Saturdays and Sundays and resume outside those windows so that only allowed hours count toward the 48 hours Given an engagement occurs during quiet hours When processing events Then still stop the timer immediately and do not delay the stop until quiet hours end
Per-recipient timer start pause resume with time zone
Given a message is sent to a recipient with time zone set to America/Los_Angeles and a 48-hour step-up threshold When the send is acknowledged by the channel provider Then start a per-recipient timer at that timestamp using the recipient’s time zone Given quiet hours and weekend exclusions are configured When the timer reaches a quiet or weekend boundary Then pause the timer and resume after the boundary, maintaining cumulative elapsed allowed time with end-time accuracy within ±1 minute Given a system restart or worker failover occurs When services recover Then timers restore from durable state and continue with no more than ±1 minute drift Given the threshold is changed from 48h to 24h mid-flight When recomputing timers Then apply the new threshold prospectively and fire the next escalation when cumulative allowed time reaches 24h if not already escalated
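The cumulative-allowed-time behavior above can be sketched by stepping through the interval at minute resolution and counting only minutes the timer is permitted to accumulate. The quiet window (22:00–07:00) and weekend exclusion are assumptions matching the examples in this section; a production scheduler would use interval arithmetic rather than per-minute iteration:

```python
from datetime import datetime, timedelta

QUIET_START, QUIET_END = 22, 7   # recipient-local quiet hours (assumed)
WEEKEND = (5, 6)                 # Saturday, Sunday

def is_allowed(ts: datetime) -> bool:
    """True when the timer may accumulate: a weekday, outside quiet hours."""
    if ts.weekday() in WEEKEND:
        return False
    return QUIET_END <= ts.hour < QUIET_START

def allowed_elapsed(start: datetime, now: datetime) -> timedelta:
    """Cumulative allowed time between start and now (minute resolution)."""
    step = timedelta(minutes=1)
    elapsed = timedelta()
    ts = start
    while ts < now:
        if is_allowed(ts):
            elapsed += step
        ts += step
    return elapsed

def threshold_reached(start: datetime, now: datetime, hours: int = 48) -> bool:
    """Fire the escalation once cumulative allowed time hits the threshold."""
    return allowed_elapsed(start, now) >= timedelta(hours=hours)
```

Changing the threshold mid-flight (the 48h to 24h case) then only changes the `hours` argument on the next evaluation, with no rewriting of accumulated state.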
Bounce handling and escalation impact
Given an email soft bounce is received for a message-recipient When the bounce event is processed Then mark delivery_status soft_bounce, pause the timer, schedule an automatic retry per provider policy, and resume the timer upon successful delivery acknowledgement Given three consecutive soft bounces occur within 24 hours When evaluating delivery Then mark delivery_status soft_bounce_exceeded and escalate to the next channel at the next permissible window while logging the reason Given a hard bounce is received When the event is processed Then mark delivery_status hard_bounce, stop further email attempts for this message-recipient, cancel the email timer, and schedule escalation to the next channel at the next permissible window
Email forwarding and per-recipient link security
Given a recipient forwards an email and a third party clicks a tracked link without the original recipient’s token When processing the click Then do not count it as engagement for the intended recipient, log event_type forwarded_click, and continue the timer Given the intended recipient later clicks their tokenized link When processing the click Then record engagement for that recipient and stop the timer within 60 seconds Given a click arrives with a mismatched token and different IP or user-agent When processing the click Then still treat it as forwarded_click and never as engagement for the intended recipient
Event storage analytics and privacy controls
Given any engagement, timer, delivery, or bounce event occurs When persisting the event Then store fields at minimum: event_id, message_id, recipient_id, channel, event_type, source, occurred_at (UTC), recipient_tz, metadata, and ensure writes succeed at p95 <= 200 ms Given privacy setting do_not_track_opens is enabled for a recipient or workspace When processing email opens Then do not record open events or use them to stop escalation and rely only on link-click or in-app events Given privacy setting redact_ip_user_agent is enabled When persisting events Then hash or drop IP address and user-agent fields and exclude them from analytics exports Given an auditor queries the event log by message_id and recipient_id for a date range When requesting results via API Then return chronologically ordered events with pagination and counts within p95 <= 500 ms for up to 10,000 events
Multi-Channel Delivery Connectors
"As an indie artist, I want notifications to reach me on the channel I actually check and fail over if one fails so that I don’t miss time‑sensitive release tasks."
Description

Integrate and orchestrate delivery across email, in-app notifications, and SMS with provider adapters (e.g., SendGrid for email, native TrackCrate in-app, Twilio for SMS). Support per-channel templates with dynamic variables (release name, asset links, due dates), secure shortlinks, and optional access-gated stem previews. Provide failover to the next channel on delivery failure, per-channel quiet hours and rate limits, and per-user channel preferences. Enforce SMS consent and regional compliance (opt-in/opt-out, STOP/HELP keywords), handle invalid contact info gracefully, and record delivery, failure, and cost metrics.
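The failover behavior described here reduces to trying each user-enabled channel in order and stopping at the first success, while recording every decision for the escalation trace. A sketch of just the orchestration, assuming adapters are callables that wrap the real SendGrid/Twilio/in-app clients (all names here are illustrative):

```python
class DeliveryError(Exception):
    """Raised by a channel adapter on hard failure or exhausted retries."""

def deliver_with_failover(message, channel_order, preferences, adapters):
    """Try each enabled channel in order; return (winning_channel, attempts).

    `attempts` records every channel decision so all attempts for one
    notification intent can be linked in a single escalation trace.
    """
    attempts = []
    for channel in channel_order:
        if not preferences.get(channel, True):
            attempts.append((channel, "skipped_preference"))
            continue
        try:
            provider_id = adapters[channel](message)   # returns provider message ID
            attempts.append((channel, f"delivered:{provider_id}"))
            return channel, attempts                   # later channels are skipped
        except DeliveryError as exc:
            attempts.append((channel, f"failed:{exc}"))
    return None, attempts
```

Reading `preferences` at call time is what makes "updates made before send completion are honored" cheap to satisfy.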

Acceptance Criteria
Provider Adapters and Automatic Channel Failover
Given a notification with channel order [email, in-app, SMS] and valid user contact details And SendGrid, TrackCrate in-app, and Twilio connectors are configured and reachable When email delivery returns a hard failure (e.g., provider status "bounced" or "rejected") or exceeds the configured retry limit for transient failures Then the system attempts in-app delivery within 5 seconds using the in-app template for the same notification intent And, if in-app delivery returns success, SMS is not attempted And, if in-app delivery fails, the system attempts SMS delivery within 5 seconds using the SMS template And the system records a single escalation trace linking all channel attempts by the same intent ID And each attempt persists the provider message ID and final status And if any channel returns success, subsequent channels are skipped
Per-Channel Template Rendering with Dynamic Variables and Secure Shortlinks
Given per-channel templates reference variables release_name, due_date, and asset_link And a notification payload supplies values for all required variables When the system renders templates for email, in-app, and SMS Then all variables resolve without placeholder leakage in the outgoing content And a unique, signed shortlink is generated per recipient and asset_link with a configurable TTL And requests to the shortlink after TTL expiration return HTTP 410 and are logged And if a required variable is missing, the send is blocked and a validation error is logged; no channels are attempted And, if access_gated=true for the asset_link, recipients without a valid access token are prompted to authenticate; unauthorized requests return HTTP 403
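A per-recipient signed shortlink with a TTL can be built from an HMAC over the payload, with the 403/410 statuses mirroring the criteria above. The key, field names, and token layout are illustrative assumptions:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"replace-with-server-side-signing-key"   # assumption: kept server-side

def make_shortlink_token(recipient_id: str, asset_id: str, ttl_seconds: int) -> str:
    """Sign a per-recipient, per-asset payload with an expiry timestamp."""
    payload = {"r": recipient_id, "a": asset_id,
               "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def resolve_token(token: str):
    """Return (http_status, payload): 200 valid, 403 tampered, 410 expired."""
    try:
        body, sig = token.rsplit(".", 1)
    except ValueError:
        return 403, None
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return 403, None                       # forged or altered token
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["exp"] < time.time():
        return 410, None                       # TTL elapsed: link is gone
    return 200, payload
```

Because the recipient ID is inside the signed payload, the same mechanism supports the per-recipient token checks used for forwarded-click detection elsewhere in this document.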
Per-Channel Quiet Hours and Rate Limiting Enforcement
Given quiet hours are configured per channel in the recipient’s local timezone And per-user rate limits are configured per channel When a notification becomes eligible to send during a channel's quiet hours Then that channel's attempt is deferred until quiet hours end (no send occurs during quiet hours) And deferral entries are recorded with the intended send time When a notification would exceed the channel's per-user rate limit Then the attempt is queued until the rate window resets, and the queue time is logged And quiet-hour and rate-limit deferrals do not count as delivery failures
Per-User Channel Preferences Respect and Escalation Eligibility
Given a user has channel preferences set to email=enabled, in-app=enabled, SMS=disabled When attempting to deliver a multi-channel notification Then the system does not attempt SMS delivery And failover skips disabled channels and proceeds only through enabled channels while preserving the defined order And preferences are read at send time; updates made before send completion are honored And an audit entry records which channels were skipped due to preferences
SMS Consent Management and Regional Compliance
Given a recipient has not opted in to SMS and their region requires explicit consent When a notification would attempt SMS Then SMS is not sent; a compliance error is logged; no SMS cost is incurred And if the recipient subsequently opts in (double opt-in where required), future SMS become eligible Given an opted-in recipient sends the keyword "STOP" to the sending number When the STOP inbound message is received Then the system marks the recipient as opted-out immediately, sends a one-time confirmation SMS, and blocks further SMS until "START" is received And the opt-out event stores timestamp, region, and source for audit Given a recipient sends "HELP" Then the system replies with a help message including brand, support contact, and opt-out instructions Given a message is sent to regions requiring sender ID and opt-out language Then those elements are included per region policy and logged for audit
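The STOP/START/HELP keyword flow above can be sketched as an inbound-message handler plus a consent check consulted before any SMS send. This is a minimal sketch: real compliance also needs per-region double opt-in, audit timestamps, and sender-ID rules, and the support address is a placeholder:

```python
OPTED_OUT = set()   # in production this would be durable, per-region storage

def handle_inbound_sms(sender: str, body: str):
    """Process compliance keywords; return the reply SMS to send, if any."""
    keyword = body.strip().upper()
    if keyword == "STOP":
        OPTED_OUT.add(sender)   # block all further SMS immediately
        return ("You are unsubscribed from TrackCrate SMS. "
                "Reply START to resubscribe.")
    if keyword == "START":
        OPTED_OUT.discard(sender)
        return "You are resubscribed to TrackCrate SMS."
    if keyword == "HELP":
        # Placeholder contact; the real reply must include brand and support info.
        return ("TrackCrate: contact support@example.com. "
                "Reply STOP to unsubscribe.")
    return None   # not a compliance keyword

def may_send_sms(recipient: str) -> bool:
    """Consent gate checked before every SMS attempt."""
    return recipient not in OPTED_OUT
```

Checking `may_send_sms` before dispatch (rather than after queuing) is what keeps a STOP effective immediately and avoids incurring SMS cost on blocked sends.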
Invalid Contact Information Handling
Given an email address that hard-bounces or fails validation When attempting email delivery Then the email address is marked invalid, the attempt is recorded as failed, and no further retries are made for that address during this notification And the system proceeds according to failover rules to the next eligible channel Given a phone number that Twilio reports as undeliverable or invalid (e.g., error code 30003/30005) When attempting SMS delivery Then the number is marked invalid, no further SMS retries are made for that number during this notification, and failover proceeds if applicable Given the recipient has no active TrackCrate in-app inbox When attempting in-app delivery Then the attempt is recorded as failed and failover proceeds if applicable
Delivery, Failure, and Cost Metrics Recording
Given a multi-channel notification with attempts across email, in-app, and SMS When providers return status updates (queued, sent, delivered, bounced/failed) and cost data (for SMS) Then the system records for each attempt: channel, provider, provider message ID, attempt start/end timestamps, final status, failure reason code, and unit cost where applicable And aggregates are available per notification, per user, and per channel via API and UI And metrics are stored within 2 minutes of receiving provider webhooks And all records include the notification intent ID to correlate cross-channel attempts
Throttling and Frequency Capping
"As a collaborator, I want reasonable limits on how often I’m nudged so that I stay responsive without feeling spammed."
Description

Add safeguards that limit notification volume to prevent spam and fatigue. Provide global, per-user, and per-thread frequency caps; coalesce near-duplicate reminders; and suppress escalations when there is recent activity on the underlying asset/task. Support business-hour windows, label-level overrides for critical flows, rate limiting to external providers, and clear logging of throttled or suppressed events for troubleshooting. Ensure caps interact correctly with timers and step-up logic.

Acceptance Criteria
Global Daily Cap Enforcement
Given a global cap of 100 notifications in a rolling 24-hour window across all channels When the system attempts to send 105 notifications within the same 24-hour window Then only the first 100 notifications are sent and 5 are suppressed with reason code "GLOBAL_CAP" And sends never exceed the cap, even transiently And at T+24h from the first sent notification in the window, the counter rolls and new sends are permitted And the global cap applies irrespective of user, thread, or label
Per-User Rolling Window Cap
Given a per-user cap of 5 notifications per 6-hour rolling window And User A has already received 5 notifications in the last 6 hours When the system attempts to send 2 additional notifications to User A Then both notifications are suppressed with reason code "USER_CAP" for User A And a simultaneous notification to User B is sent successfully And after the oldest of User A’s 5 notifications ages past 6 hours, one new notification is allowed And cap evaluation is performed per user across all threads and channels
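The rolling-window caps above map naturally onto a per-key queue of send timestamps; the same structure serves the global, per-user, and per-thread cases by varying the key. A sketch using the per-user numbers from this criterion:

```python
from collections import deque

class RollingWindowCap:
    """Allow at most `limit` sends per `window_seconds`, per key.

    The key can be a constant (global cap), a user ID (per-user cap),
    or a thread ID (per-thread cap).
    """

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self._sent = {}   # key -> deque of send timestamps

    def try_send(self, key, now):
        q = self._sent.setdefault(key, deque())
        while q and q[0] <= now - self.window:
            q.popleft()               # drop events that aged out of the window
        if len(q) >= self.limit:
            return False              # suppress with the relevant reason code
        q.append(now)
        return True
```

Because the window is rolling, capacity returns one send at a time as the oldest events age out, matching the "oldest of User A's 5 notifications ages past 6 hours" behavior.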
Per-Thread Cap with Coalescing
Given a per-thread cap of 2 notifications per 24 hours and a coalescing window of 15 minutes And three near-duplicate reminders are queued for the same thread, recipient, and template within 15 minutes When delivery is evaluated Then the three reminders are coalesced into a single notification that summarizes all three items And the coalesced notification counts as 1 against global, per-user, and per-thread caps And if a fourth reminder is queued after the 15-minute window within the same 24-hour period Then only one additional notification is sent (reaching the per-thread cap of 2) and further ones are suppressed with reason code "THREAD_CAP" And near-duplicate detection uses a deterministic dedupe key comprising threadId+recipientId+templateVariant
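The deterministic dedupe key named in this criterion (threadId + recipientId + templateVariant) and the 15-minute window can be sketched as a grouping pass over queued reminders; the field names are assumptions:

```python
def dedupe_key(reminder):
    """Deterministic near-duplicate key: thread + recipient + template variant."""
    return (reminder["threadId"], reminder["recipientId"],
            reminder["templateVariant"])

def coalesce(queued, window_seconds=900):
    """Merge near-duplicates queued within the window into summary groups.

    Each returned group becomes one notification and counts as one send
    against the global, per-user, and per-thread caps.
    """
    groups = {}   # key -> current open group
    out = []
    for r in sorted(queued, key=lambda r: r["queuedAt"]):
        key = dedupe_key(r)
        group = groups.get(key)
        if group is not None and r["queuedAt"] - group["queuedAt"] <= window_seconds:
            group["items"].append(r)       # fold into the open group
        else:
            group = {"queuedAt": r["queuedAt"], "items": [r]}
            groups[key] = group            # start a new group after the window
            out.append(group)
    return out
```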
Recent Activity Suppression
Given a recent-activity suppression threshold of 120 minutes on the underlying asset/task And a scheduled escalation notification is due at time T And a comment or file upload or status change occurs on the asset at time T-30 minutes When delivery is evaluated at time T Then the scheduled notification is suppressed with reason code "RECENT_ACTIVITY" And the suppression applies across all channels for that thread and recipient And activity older than 120 minutes does not trigger suppression
Business Hours with Critical Overrides
Given business-hour windows configured as Monday–Friday 09:00–18:00 in the recipient’s local time zone And a notification becomes due at 22:00 local time When delivery is evaluated Then the notification is deferred to the next window opening at 09:00 with reason code "BUSINESS_HOURS" And if the notification carries a label "Critical" with an override policy enabled Then it is delivered immediately at 22:00 despite business hours And critical overrides do not bypass the global cap or provider rate limits And all deferrals and overrides are logged with the applied policy name
Provider Rate Limiting with Backoff and Unified Logging
Given an email provider limit of 10 sends per minute and an SMS provider limit of 30 sends per minute And a burst of 50 email and 50 SMS notifications is queued at the same minute When delivery is executed Then at most 10 emails/min and 30 SMS/min are dispatched without exceeding provider quotas And excess notifications are enqueued with exponential backoff and jitter, honoring any Retry-After headers on 429 responses And no channel exceeds its configured provider limit during the burst or retries And each deferred or dropped attempt is logged with reason code "PROVIDER_RATE_LIMIT" including provider, quota, attempt count, next retry time, and correlationId And all throttled/suppressed outcomes across caps use standardized reason codes {GLOBAL_CAP, USER_CAP, THREAD_CAP, RECENT_ACTIVITY, BUSINESS_HOURS, PROVIDER_RATE_LIMIT} with timestamps and counters
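The backoff policy in this criterion (exponential with jitter, honoring Retry-After on 429s) can be sketched in a few lines; the base and cap values are assumptions:

```python
import random

def next_retry_delay(attempt, base=1.0, cap=300.0, retry_after=None):
    """Seconds to wait before retry `attempt` (0-based).

    Exponential backoff with full jitter; a provider-supplied Retry-After
    header value takes precedence over the computed delay.
    """
    if retry_after is not None:
        return retry_after                 # the 429 response's hint wins
    ceiling = min(cap, base * 2 ** attempt)
    return random.uniform(0, ceiling)      # full jitter spreads out retries
```

Full jitter (uniform over the whole interval) rather than a fixed exponential delay is what prevents a burst of deferred sends from re-colliding on the same retry tick.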
Timer and Step-Up Interaction with Caps
Given a step-up path Email → In-App → SMS with a 48-hour unopened threshold between rungs And an Email rung is due at T0 but is deferred by business hours until T1 When the Email is delivered at T1 and remains unopened Then the 48-hour timer starts at T1 (actual delivery time), not T0 And if the Email rung is suppressed (not delivered) due to USER_CAP at T0 Then the system does not escalate to In-App based on time elapsed; the Email rung remains the active rung until delivered or canceled by RECENT_ACTIVITY And if RECENT_ACTIVITY cancels the Email rung before delivery Then the step-up sequence is halted and no further rungs fire And when a rung is coalesced with others, it counts as a single delivery event for timer purposes
One‑Tap Snooze and Delegate
"As a mixing engineer, I want to snooze or delegate a request in one tap so that I can manage my workload without breaking the escalation flow."
Description

Embed actionable controls in every notification: Snooze with preset durations (e.g., 2h, 24h, next business day) or custom time, and Delegate to another collaborator with optional note. Support email action links, in-app buttons, and SMS keywords for parity. On Snooze, pause the escalation timer and resume at expiry; on Delegate, reassign the underlying TrackCrate task/approval, notify the new assignee, and update access permissions. Validate permissions, surface confirmations, and write a full audit trail of actions.

Acceptance Criteria
Email One‑Tap Snooze (Preset Durations)
Given user U receives an email notification for task/approval T with action links [Snooze 2h], [Snooze 24h], and [Snooze Next Business Day] And each action link includes a signed, single‑use token bound to U, T, and the selected duration When U clicks [Snooze 2h] Then the system pauses T's escalation timer immediately And sets a snooze expiry to now + 2 hours in U's timezone And shows U a confirmation page stating the exact local expiry timestamp And writes an audit record with action=snooze, channel=email, duration=2h, actor=U, target=T, issuedAt, expiresAt, requestId And prevents token reuse by returning an "Already snoozed" message on subsequent clicks with no additional state change When U clicks [Snooze Next Business Day] Then the system sets the expiry to the next weekday at 09:00 in U's timezone (Mon–Fri), skipping weekends And updates the confirmation and audit with the calculated expiry
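The "next weekday at 09:00, skipping weekends" calculation above is small but easy to get wrong; a sketch (a real implementation would work in the recipient's zone via timezone-aware datetimes and honor configured business hours and holidays):

```python
from datetime import datetime, timedelta

def next_business_day(now, start_hour=9):
    """Next weekday at start_hour local time, skipping Saturday and Sunday."""
    candidate = (now + timedelta(days=1)).replace(
        hour=start_hour, minute=0, second=0, microsecond=0)
    while candidate.weekday() >= 5:      # 5 = Saturday, 6 = Sunday
        candidate += timedelta(days=1)
    return candidate
```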
In‑App Snooze (Custom Time and Next Business Day)
Given user U views task/approval T in‑app and clicks the Snooze button When U selects Custom time and sets 2025‑09‑03 15:30 in U's timezone Then T's escalation timer is paused and snooze expiry is set exactly to 2025‑09‑03 15:30 local And the UI displays a success toast and the scheduled resume time in local time And an audit record is written with action=snooze, channel=in‑app, explicitExpiry, actor, target, requestId When U selects Next Business Day Then the system uses workspace business hours if configured (startOfDay), otherwise 09:00 next weekday local time And the updated expiry is reflected in the UI and audit log
SMS Keyword Snooze Parity
Given user U has a verified phone number and SMS notifications enabled for task/approval T When U replies to the T notification with "SNOOZE 24h" Then the system pauses escalation and sets snooze expiry to now + 24 hours in U's timezone And replies via SMS confirming the exact local expiry timestamp And writes an audit record with action=snooze, channel=SMS, duration=24h, actor, target, messageId When U sends "SNOOZE UNTIL 2025‑09‑07 09:00" Then the system parses the timestamp (assumed in U's timezone unless Z/offset provided) and sets explicit expiry accordingly And confirms via SMS and audit When U sends an unrecognized format (e.g., "SNOOZE LATER") Then the system sends a help reply listing accepted formats: "SNOOZE 2h|24h|3d", "SNOOZE UNTIL YYYY‑MM‑DD HH:mm", "SNOOZE NEXT" And makes no state change and records a failed action in audit
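The accepted SMS formats listed in this criterion ("SNOOZE 2h|24h|3d", "SNOOZE UNTIL YYYY-MM-DD HH:mm") suggest a small parser that returns the expiry or None for the help-reply path; this sketch assumes the timestamp is in the recipient's local timezone, as the criterion states:

```python
import re
from datetime import datetime, timedelta

DURATION_RE = re.compile(r"^SNOOZE\s+(\d+)([HD])$", re.IGNORECASE)
UNTIL_RE = re.compile(r"^SNOOZE\s+UNTIL\s+(\d{4}-\d{2}-\d{2} \d{2}:\d{2})$",
                      re.IGNORECASE)

def parse_snooze(body, now):
    """Return the snooze expiry for an inbound SMS, or None if unrecognized.

    None signals the caller to send the help reply listing accepted formats
    and to record a failed action in the audit log, with no state change.
    """
    text = body.strip()
    m = DURATION_RE.match(text)
    if m:
        amount, unit = int(m.group(1)), m.group(2).upper()
        delta = timedelta(hours=amount) if unit == "H" else timedelta(days=amount)
        return now + delta
    m = UNTIL_RE.match(text)
    if m:
        # Naive parse; production code would localize to the sender's timezone.
        return datetime.strptime(m.group(1), "%Y-%m-%d %H:%M")
    return None
```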
Delegate via In‑App with Optional Note
Given user U has permission to reassign task/approval T When U clicks Delegate, searches, and selects collaborator V from the eligible list And optionally enters a note "Please finalize stems" Then T is reassigned to V And V is granted required assignee access to all related TrackCrate assets for T And any exclusive assignee‑only permissions are removed from U while retaining U as a follower/watcher And V is notified immediately via their configured channels (email/in‑app/SMS) including the note And U sees an in‑app confirmation of successful delegation And an audit record is written with action=delegate, channel=in‑app, oldAssignee=U, newAssignee=V, note, actor, target, requestId
Delegate via SMS with Permission Validation
Given user U receives an SMS about task/approval T and has a verified phone number When U replies "DELEGATE @v Please finalize stems" Then the system validates that U can reassign T and that user @v exists and is assignable on this project And if valid, reassigns T to @v, updates access permissions, notifies @v with the optional note, and confirms to U via SMS And writes an audit record with action=delegate, channel=SMS, oldAssignee, newAssignee, note, actor, target, messageId And if U lacks permission, replies with "You don’t have permission to delegate this item" and makes no changes while auditing the failed attempt And if @v is invalid or not assignable, replies with an actionable error and makes no changes while auditing the failed attempt
Escalation Timer Pause/Resume and Throttle Interaction
Given task/approval T is on an active escalation path for assignee U When U snoozes T by any channel Then the system pauses the escalation timer and cancels any scheduled notifications until snooze expiry And upon expiry, the escalation resumes at the same rung with the remaining time preserved And the system respects the configured throttle window, ensuring no duplicate notifications fire within the throttle interval when resuming And if U snoozes again before expiry, the expiry is updated (extended) atomically without firing any notifications And all state transitions are recorded in audit with previousNextFireAt and newNextFireAt
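Preserving the remaining rung time across a snooze, while respecting the throttle window on resume, might look like this (a simplified in-memory sketch; the class and field names are hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class EscalationTimer:
    next_fire_at: datetime
    remaining: Optional[timedelta] = None  # set only while snoozed

    def pause(self, now: datetime) -> None:
        """Snooze: remember how much of the current rung's wait is left."""
        self.remaining = max(self.next_fire_at - now, timedelta(0))

    def resume(self, expiry: datetime, last_sent: datetime, throttle: timedelta) -> None:
        """Resume at the same rung; never fire inside the throttle window."""
        assert self.remaining is not None
        self.next_fire_at = max(expiry + self.remaining, last_sent + throttle)
        self.remaining = None
```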
Audit Trail Completeness and Idempotency
Given any Snooze or Delegate action is performed via email, in‑app, or SMS Then the system records an immutable audit entry containing: actorId, targetId, action (snooze|delegate), channel, parameters (duration|explicitExpiry|newAssignee|note), requestId/messageId, client IP or phone number, userAgent if web, timestamp (UTC and actor local), outcome (success|failure), and errorReason if failure And audit entries are viewable to authorized users within the task history and exportable by admins to CSV and JSON And repeat submissions of the same signed email link, in‑app double‑clicks, or duplicate SMS messages are treated idempotently: only the first succeeds and subsequent attempts return a non‑fatal "Already processed" response with no additional state change And all idempotent no‑ops are captured in audit as deduplicated with a reference to the original action
Manager CC on Final Rung
"As a label owner, I want to be CC’d only when an escalation truly stalls so that I can intervene efficiently without being flooded."
Description

Provide an optional final-step rule that CCs a manager or team distribution when prior steps fail to elicit engagement. Allow configuration by project/label, with RBAC to restrict who can be CC’d. Include context (what’s overdue, prior attempts, latest activity), respect data visibility for private stems/press, and offer digest or individual CC modes. Record CC events, expose opt-out where required, and ensure messages are professional and appropriately templated.

Acceptance Criteria
Final Rung CC Trigger Conditions
- Given a project/label has Smart Escalation enabled with a final "Manager CC" rung configured, When all prior steps have executed and no engagement event occurs within the configured wait window, Then the system sends the Manager CC to the configured recipients.
- Given engagement is defined as any of: open/view of a prior notification, clickthrough, comment/reply, marking the item complete, Snooze, or Delegate, When any engagement occurs before the final rung delay elapses, Then no Manager CC is sent for that cycle.
- Given an item’s activity timestamp updates (e.g., new comment/upload), When recalculating escalation timing, Then the final rung timer resets based on the latest activity.
RBAC-Constrained CC Recipient Selection
- Given a user is configuring escalation rules, When selecting Manager CC recipients, Then only users/distribution lists permitted by project/label RBAC are available and selectable.
- Given a disallowed email/phone is entered manually, When attempting to save the rule, Then validation fails with an explicit RBAC error and the configuration is not saved.
- Given CC recipients are resolved at send time, When evaluating the rule, Then any recipients that have lost access since configuration are excluded and the exclusion is logged.
Contextual CC Content With Data Visibility Enforcement
- Given a Manager CC is sent, When the message is rendered, Then it includes: item title, assignee, due date, days overdue, prior attempts (channels and timestamps), and latest activity summary.
- Given the item references private stems/press/assets, When composing the CC, Then only metadata and authorized links are included; private files are not attached inline.
- Given links are included, When a recipient follows a link, Then authorization is required and downloads are expiring and watermarked where applicable; unauthorized recipients see a permission notice and no asset content.
Digest Versus Individual CC Modes
- Given Digest mode is enabled with a configured cadence (e.g., 24 hours), When multiple final-rung triggers target the same recipient within the window, Then a single digest is sent summarizing all still-unengaged items.
- Given Individual mode is enabled, When a final-rung trigger occurs, Then a CC is sent immediately for that single item.
- Given items in a pending digest are resolved before send time, When the digest is generated, Then resolved items are excluded from the digest.
Throttling and De-duplication
- Given throttling is set to max 1 CC per item per 48 hours and max 3 CCs per recipient per 24 hours, When triggers exceed limits, Then excess CCs are suppressed and suppression events are recorded.
- Given delivery retries or job restarts, When a CC with the same item-recipient idempotency key exists within the cooldown window, Then no duplicate CC is sent.
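The two throttle limits in the criterion above could be checked as follows (an illustrative sketch with the limits hard-coded; the function and constant names are hypothetical):

```python
from datetime import datetime, timedelta
from typing import Optional

ITEM_COOLDOWN = timedelta(hours=48)    # max 1 CC per item per 48 hours
RECIPIENT_LIMIT = 3                    # max 3 CCs per recipient per 24 hours
RECIPIENT_WINDOW = timedelta(hours=24)

def should_send_cc(item_last_cc: Optional[datetime],
                   recipient_cc_times: list,
                   now: datetime) -> bool:
    """Suppress a Manager CC when either throttle limit would be exceeded."""
    if item_last_cc is not None and now - item_last_cc < ITEM_COOLDOWN:
        return False  # item cooldown still active
    recent = [t for t in recipient_cc_times if now - t < RECIPIENT_WINDOW]
    return len(recent) < RECIPIENT_LIMIT
```

A suppressed send would additionally be recorded as a suppression event, per the criterion.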
One-Tap Snooze and Delegate From CC
- Given a Manager CC is received, When the recipient taps "Snooze 48h", Then the item is snoozed for 48 hours, all active escalations for that item pause, and an audit event is recorded.
- Given the recipient taps "Delegate" and selects an eligible assignee, When delegation is confirmed, Then the item’s assignee updates, the current escalation chain stops, and a new chain (if configured) begins for the new assignee.
- Given Snooze/Delegate is triggered via email or SMS, When processed by the system, Then a confirmation is returned to the user and the item state updates within 5 seconds.
Event Logging, Opt-Out, and Professional Templating
- Given any Manager CC (digest or individual) is generated, When persisting the event, Then the log captures item ID, project/label, trigger step, timestamp, recipients, channel, mode, template ID, and provider message IDs, and appears in audit within 1 minute.
- Given opt-out is required for a recipient type or jurisdiction, When a CC is sent to that address/number, Then an opt-out link/keyword is included; opting out suppresses future CCs and updates recipient preferences immediately.
- Given branding and tone requirements, When rendering the CC, Then the configured professional template (logo, footer, legal, locale/time zone) is applied and must pass template validation; failures block send and surface an error.
Escalation Analytics and Simulation
"As a product lead, I want reports and a simulator so that I can optimize escalation flows for effectiveness and cost before and after launch."
Description

Deliver analytics for each flow: step conversion rates, time-to-acknowledge, drop-off by channel, throttle counts, snooze/delegate usage, and SMS cost tracking. Provide per-project and portfolio views with drilldowns to releases/assets. Include a sandbox simulator that runs a "what-if" preview against a sample cohort to validate timers, quiet hours, and capping before activation. Support CSV export and an API for BI tools, and surface actionable insights to refine flows.

Acceptance Criteria
Per-Project Flow Analytics
Given a project with at least one Smart Escalation flow and events within a selected date range and timezone When the user opens the project analytics view and selects a flow Then the dashboard displays for that selection: step-by-step conversion rates, median and p90 time-to-acknowledge per step, drop-off rate by channel, throttle counts by rule, Snooze count, Delegate count, total SMS sent/delivered/failed, and total SMS cost, each with an inline definition tooltip And changing date range or timezone recomputes all metrics within 5 seconds And selecting a funnel step reveals a channel and reason breakdown (drop-off, throttle) And if no data matches filters, the view shows zeroed metrics with a "No data for filters" state
Portfolio Analytics with Drilldowns
Given a user with access to multiple projects When the user opens the portfolio analytics view Then aggregate metrics are shown across all accessible projects with the same definitions as per-project analytics And clicking a project drills down to that project's analytics And clicking a flow drills down to that flow's analytics And clicking a release or asset in any breakdown opens its detail analytics with filters applied And applying filters (date range, label, owner, channel) updates aggregates and top lists accordingly And attempting to access an unauthorized project returns 403 and hides its data from aggregates
Throttle and Reason Codes Visibility
Given throttling rules are configured and have been triggered during the selected date range When viewing the throttle section of analytics Then counts are grouped by rule_id, channel, and step with reason_code (rate_limit, quiet_hours, cap_reached) and first_seen_at/last_seen_at timestamps in the selected timezone And applying a timeframe filter updates counts to reflect only events within the window And exporting throttling data produces a CSV with columns: project_id, flow_id, step_id, channel, rule_id, reason_code, count, first_seen_at, last_seen_at
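The export column order specified above can be pinned down with a small serializer (a sketch; the function name is hypothetical):

```python
import csv
import io

THROTTLE_COLUMNS = ["project_id", "flow_id", "step_id", "channel", "rule_id",
                    "reason_code", "count", "first_seen_at", "last_seen_at"]

def export_throttle_csv(rows: list) -> str:
    """Serialize throttle events using the exact column order from the spec."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=THROTTLE_COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```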
Sandbox Simulator Preview
Given a draft or inactive flow with timers, quiet hours, and caps configured and a selectable sample cohort size (10–10,000) When the user runs the simulator Then a per-step, per-channel timeline is produced showing planned sends, suppressions due to quiet hours, and suppressions due to caps for the next 30 days (or flow duration if shorter) And conflicts are summarized with counts as warnings (e.g., messages suppressed by quiet hours, cap overages) and linked to the affected steps And predicted SMS cost is calculated using current provider rates by destination country and summarized per step and in total And after any change to timers, quiet hours, caps, or cohort size, re-running updates results and records a version-stamped snapshot of inputs and outputs And if no historical data exists, the simulator marks confidence as Low while still validating quiet hours and caps
CSV Export and Analytics API
Given an analytics view with applied filters When the user clicks Export CSV Then a CSV downloads within 10 seconds, RFC 4180 compliant (UTF-8 BOM), containing exactly the rows and columns visible for the selected view, splitting into multiple files if >1,000,000 rows And an API client with scope analytics:read can call GET /api/v1/analytics with parameters (project_id?, flow_id?, date_range, group_by, tz, cursor) Then the API returns paginated JSON with column parity to CSV, ISO 8601 timestamps honoring tz, and enforces a rate limit of 60 requests/minute And invalid parameters return HTTP 400 with error details; unauthorized returns 401; forbidden project access returns 403
Actionable Insights Generation
Given a flow with >= 500 recipients in the selected period When insights are computed on the analytics page Then up to 5 prioritized insights are shown, each including the affected metric, magnitude of impact, and a recommended action, covering at least: highest drop-off step, best-performing channel, suggested send window, throttling hotspots, and SMS cost per conversion outliers And clicking an insight deep-links to the relevant editor or report with filters pre-applied And dismissing an insight hides it for that flow for 30 days and logs the dismissal event And after applying a recommended change via the editor, the next refresh displays an impact summary with absolute and percentage change for the targeted metric

One‑Tap Approve

Embed secure, device‑bound Approve/Needs‑Changes actions directly in nudges and SMS. Actions sync to Signoff Ledger and link to a private stem player or AutoKit page, shaving days off review cycles by removing login and navigation friction.

Requirements

Device‑Bound Approval Links
"As a label approver receiving a nudge on my phone, I want to securely approve a mix from the message itself so that I don’t have to log in or navigate pages to keep the release moving."
Description

Generate cryptographically signed, one‑time approval links bound to the recipient identity (email/phone) and device fingerprint. Links are embedded in nudges and SMS, auto‑detect platform (iOS/Android/web), and deep‑link to a minimal confirmation screen with Approve/Needs‑Changes actions. Enforce token TTL, audience scoping (specific recipients only), and device binding with soft fallback to OTP if the device changes. Provide SDK/utilities to issue, validate, and rotate tokens, with centralized configuration for expiration windows and per‑project policy (e.g., require OTP on new device). Emit structured events for validation outcomes to support analytics and security monitoring.

Acceptance Criteria
Approve on Issued Device with One‑Time Link
Given a cryptographically signed, one‑time approval link is issued to recipient X bound to device fingerprint D for project P with TTL T When X taps the link on device D within TTL T Then a minimal confirmation screen opens without login and displays Approve and Needs Changes actions for the correct asset/context And selecting either action records the decision in the Signoff Ledger with actor X, device D, timestamp, action, and context And the token is immediately invalidated and cannot be reused And a structured event "approval_success" with masked identifiers is emitted
Expired Link Denied with Recovery
Given an approval link with TTL T has exceeded T at the time of tap When any user attempts to use the expired link Then the action is blocked with an explicit "Link expired" error And present CTAs to request a fresh link or complete OTP verification per project policy And emit structured event "token_expired" And no Signoff Ledger entry is created or modified
New Device Requires OTP Fallback
Given project policy require_otp_on_new_device = true and a link bound to device D1 for recipient X When X taps the link on device D2 within TTL T Then an OTP challenge is sent to X’s verified channel (email or SMS) and the action is blocked until OTP is verified And upon correct OTP (within 5 minutes, max 3 attempts), the confirmation screen is shown and the token is rotated/bound to D2; the original token is invalidated And emit events "device_mismatch" and "otp_success" (or "otp_failed" on failure) And if OTP is not successful, no Signoff Ledger change occurs
Platform Auto‑Detection and Deep‑Link
Given the link is tapped on iOS with the app installed Then open the native app via Universal Link and load the confirmation screen with preserved context in ≤1000 ms p50 and ≤2000 ms p95 on 4G Given the link is tapped on Android with the app installed Then open the native app via App Links and preserve context; else fall back to secure web Given the app is not installed or deep linking fails Then fall back to a secure web confirmation screen while preserving token, context, and analytics parameters And deep link handling preserves UTM/shortlink association for analytics
Audience Scoping Restricts Access
Given a link is scoped to recipient identity (email E and/or phone P) and project role R When a user whose verified identity does not match E/P or lacks role R attempts to use the link Then show a "Not authorized" message and block the action And emit structured event "audience_mismatch" with masked identifiers And optionally show a "Request access" CTA only if project policy allows And no Signoff Ledger entry is created or modified
SDK, Validation, Rotation, and Config
Given the SDK exposes issueToken(recipientId, deviceFingerprint, projectId, ttl), validateToken(token, deviceFingerprint), rotateToken(token), revokeToken(token) When integrators use the SDK with centralized config (default_ttl, require_otp_on_new_device, allowed_channels, max_attempts, clock_skew_sec) Then token signatures are verified (e.g., Ed25519 or HMAC‑SHA256), tampered tokens are rejected, and policy is enforced consistently across services And unit tests provide ≥90% line coverage for issuance/validation/rotation and config enforcement, with integration tests for iOS/Android/web deep linking And public docs include code samples (Node, Kotlin, Swift) and enumerated error codes/outcomes And all SDK methods are idempotent and safe for concurrent calls
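One way to realize the HMAC‑SHA256 variant of issuance and validation (a minimal sketch: the wire format, payload field names, and in-process secret are assumptions; a real SDK would use a managed signing key, persist `jti` values for single-use enforcement, and tolerate configured clock skew):

```python
import base64
import hashlib
import hmac
import json
import secrets
import time
from typing import Optional

SECRET = b"server-side signing key"  # assumption: centrally managed in practice

def issue_token(recipient_id: str, device_fingerprint: str,
                project_id: str, ttl: int) -> str:
    """Issue a signed, device-bound approval token with a TTL."""
    payload = {
        "rid": recipient_id,
        "dfp": device_fingerprint,
        "pid": project_id,
        "exp": int(time.time()) + ttl,
        "jti": secrets.token_urlsafe(8),  # unique id for single-use tracking
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def validate_token(token: str, device_fingerprint: str) -> Optional[dict]:
    """Return the payload if the signature, TTL, and device all match."""
    try:
        body, sig = token.rsplit(".", 1)
    except ValueError:
        return None
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered -> reject
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["exp"] < time.time():
        return None  # expired -> emit "token_expired"
    if payload["dfp"] != device_fingerprint:
        return None  # device mismatch -> trigger OTP fallback
    return payload
```

Rotation can be implemented as validate-then-reissue bound to the new device, with the old `jti` revoked.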
Embedded Email/SMS Quick Actions
"As an artist on tour, I want approve/change buttons right in the email or text so that I can respond instantly without opening the app."
Description

Render Approve and Needs‑Changes as actionable buttons in emails and as tappable smart links in SMS that work across major clients and carriers. Support adaptive layouts, dark mode, and accessibility labels. For SMS, provide concise, brandable shortlinks with prefetch metadata. Implement deep link routing that carries context (project, track, version, recipient) so the action can be confirmed with a single tap. Include graceful fallbacks: keyword replies (APPROVE/CHANGES) where buttons aren’t supported, and a web fallback page for legacy clients. Centralize templates with localization and per‑workspace branding.

Acceptance Criteria
One‑Tap Approve via Email Button (Modern Clients)
Given a recipient opens a TrackCrate review email in a supported client (Gmail, Apple Mail iOS/macOS, Outlook desktop/web, Yahoo Mail), When they tap the Approve button, Then the action is confirmed without requiring login and a success confirmation screen is shown. And the decision is recorded in the Signoff Ledger with project_id, track_id, version_id, recipient_id, action=approve, channel=email, message_id, timestamp. And the deep link opens the associated private stem player or AutoKit page with the approved state reflected. And repeated taps are idempotent and show an "Already approved" state without creating duplicate ledger entries. And the button renders and remains tappable in light and dark mode on mobile and desktop layouts.
One‑Tap Needs‑Changes via SMS Smart Link
Given a recipient receives an SMS containing a brandable shortlink domain configured for the workspace, When they tap the Needs‑Changes link, Then a confirmation view shows project/track/version context and a single tap confirms the Needs‑Changes action without login. And the shortlink length is ≤ 22 characters and includes prefetch metadata (Open Graph title, description, image) for link previews where supported. And carrier link scanners and OS preview fetches do not consume the action token; only a human browser tap executes the action. And the action is recorded in the Signoff Ledger with action=needs_changes, channel=sms, message_id, timestamp. And delivery and tap-to-confirm behavior is verified across major carriers (e.g., Verizon, AT&T, T‑Mobile, Vodafone) on iOS and Android default browsers.
Keyword Reply Fallback (APPROVE/CHANGES) for Email and SMS
Given a recipient replies to the review email or SMS with the keyword "APPROVE", When the system ingests the message, Then the approval is applied to the correct project/track/version/recipient context derived from original message metadata. And replies are parsed case‑insensitively with whitespace trimming; "CHANGES" maps to Needs‑Changes; unrecognized keywords trigger a help response with valid options. And a confirmation message is returned on the same channel indicating the applied action and context. And idempotency ensures duplicate replies do not create multiple ledger entries; rate limiting mitigates rapid repeat messages. And parsing supports common signatures/quoted text without misclassification.
Deep Link Context and Single‑Tap Flow
Given a quick action link is generated, Then it is cryptographically signed and encodes project_id, track_id, version_id, recipient_id, action, and expiry. When the link is opened, Then the action can be confirmed with a single tap without login, and the execution is recorded with a unique action_id to prevent replay. And first successful open binds the token to the device; subsequent opens from other devices surface a safety notice and route to the web fallback for verification. And expired, revoked, or already‑used links render the fallback web page with clear status (Expired/Already Processed) and a path to request a fresh link. And context is preserved end‑to‑end and visible on the confirmation screen (project name, track title, version label, recipient name).
Signoff Ledger Sync and Notifications
Given an action is executed (email button, SMS link, or keyword reply), Then a single atomic ledger entry is written with actor (from recipient_id), action, source channel, device fingerprint hash, anonymized IP (per policy), message_id, previous_state, new_state, and timestamp. And a signoff.updated webhook/event is emitted to the workspace with action_id and full context payload. And the associated AutoKit page and private stem player reflect the updated decision state on refresh. And retrieval via API and UI audit views shows the new ledger entry and event trail. And duplicate execution attempts reference the original action_id and do not create new ledger rows.
Centralized Templates with Branding and Localization
Given a workspace has branding (logo, colors, sender display name, link domain) and a selected locale, When a review email or SMS is generated, Then the content applies the workspace branding and localized strings. And templates are centralized with variables: project_name, track_title, version_label, recipient_name, approve_url, changes_url, fallback_url. And supported locales include at least en, es, and fr, with automatic fallback to en when a localized string is missing. And admins can preview and test‑send each template per locale and channel before enabling. And the configured shortlink domain is applied consistently across email and SMS outputs.
Accessibility, Dark Mode, and Adaptive Layout Compliance
Given review emails and the fallback web page are rendered, Then primary action buttons meet WCAG 2.1 AA color contrast (≥ 4.5:1) and have minimum touch target size of 44×44 px with descriptive aria‑labels. And layouts adapt from 320 px to 1200 px widths without horizontal scrolling and with readable font sizes. And dark mode variants maintain legible text, visible logos, and distinguishable action buttons in major email clients. And SMS content remains within one segment (≤ 160 GSM‑7 characters) for the default locale, with Unicode (UCS‑2) handling for non‑Latin locales. And screen readers announce the purpose of actions and logical focus order is maintained on the fallback web page.
Signoff Ledger Atomic Sync
"As a producer managing releases, I want every one‑tap decision recorded to a tamper‑resistant ledger so that I have auditability and can move the pipeline forward confidently."
Description

Persist each approval decision as an immutable, append‑only ledger entry containing project, asset/version IDs, decision, timestamp with timezone, recipient identity, delivery channel, device fingerprint hash, IP/country, token ID, and optional notes. Guarantee exactly‑once recording via idempotent action IDs and transactional writes. Expose read APIs, webhooks, and exports (CSV/JSON) for compliance and downstream automation. Display decision status badges in TrackCrate UI and AutoKit, and trigger post‑decision workflows (e.g., lock version, notify mastering, update shortlinks).

Acceptance Criteria
Exactly‑Once Idempotent Recording Across Channels
Given a valid approval action with action_id=A for project P, asset version V, and recipient R And identical Approve links are delivered via email and SMS referencing action_id=A And R activates the link multiple times across channels/devices, including concurrent clicks When the backend processes these activations Then exactly one ledger entry is appended with action_id=A for (P,V,R) And no duplicate ledger entries exist for action_id=A And subsequent activations return a successful, idempotent response without appending new entries And the recorded delivery_channel reflects the first successfully processed activation for action_id=A
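Exactly-once semantics keyed on `action_id` can be sketched with a guarded append (an in-memory illustration; the class name is hypothetical, and a production ledger would rely on a transactional unique constraint on `action_id` rather than a process-local lock):

```python
import threading

class SignoffLedger:
    """Append-only ledger with exactly-once semantics keyed by action_id."""

    def __init__(self):
        self._entries = []
        self._seen = {}  # action_id -> index of the original entry
        self._lock = threading.Lock()

    def append(self, action_id: str, entry: dict) -> dict:
        """First call appends; replays return the original entry unchanged."""
        with self._lock:
            if action_id in self._seen:
                return self._entries[self._seen[action_id]]  # idempotent no-op
            record = {**entry, "action_id": action_id}
            self._seen[action_id] = len(self._entries)
            self._entries.append(record)
            return record
```

Because replays return the original record, the recorded `delivery_channel` stays that of the first processed activation, as the criterion requires.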
Append‑Only, Immutable Entry With Complete Schema
Given a ledger entry E has been appended for action_id=A When a client attempts to update or delete E via any API or UI Then the operation is rejected (HTTP 403 or 409) and no new entries are created And the ledger remains unchanged And E contains non‑null values for project_id, asset_id, version_id, decision in {"approve","needs_changes"}, timestamp (ISO 8601 with timezone offset), recipient_identity, delivery_channel, device_fingerprint_hash (SHA‑256 hex), ip, country (ISO 3166‑1 alpha‑2), token_id And E may include optional notes (string) when provided
Transactional Write And Side‑Effects Ordering
Given an approval decision is received for asset version V When the system persists the decision Then the ledger append is committed atomically before any side‑effects execute And decision status badges in TrackCrate UI and AutoKit reflect the new state only after the commit And post‑decision workflows (lock version, notify mastering, update shortlinks) are enqueued only if the ledger write succeeds And if any side‑effect fails, the ledger entry remains committed and the side‑effect is retried asynchronously with backoff without creating additional ledger entries
Read API Filtering, Sorting, Pagination
Given multiple ledger entries exist across projects and assets When a client calls GET /ledger with filters (project_id, asset_id, version_id, decision, recipient_identity, token_id, from_ts, to_ts), sort by timestamp asc/desc, and page_size<=100 Then the API returns 200 with a JSON array of entries matching filters, ordered as requested, limited to page_size And a stable pagination cursor or page/total metadata is returned for subsequent pages And each entry contains the full schema fields and timestamp includes timezone offset
Webhook Delivery, Security, And Retry Semantics
Given a webhook endpoint is configured with a shared secret S for project P When a ledger entry is appended for P Then a POST is sent within 5 seconds to the endpoint with a JSON payload describing the entry And headers include X-TrackCrate-Signature (HMAC-SHA256 over the payload with S), X-TrackCrate-Timestamp (ISO 8601), and X-TrackCrate-Event-Id (entry_id) And if the endpoint responds non-2xx or times out, deliveries are retried with exponential backoff for up to 24 hours with the same X-TrackCrate-Event-Id for deduplication
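Signing and verifying a delivery per the header scheme above might look like this (a sketch; the payload shape and function names are illustrative):

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

def build_webhook_request(entry: dict, secret: bytes) -> dict:
    """Build signed headers/body for a ledger webhook delivery."""
    body = json.dumps(entry, separators=(",", ":")).encode()
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    headers = {
        "X-TrackCrate-Signature": signature,
        "X-TrackCrate-Timestamp": datetime.now(timezone.utc).isoformat(),
        "X-TrackCrate-Event-Id": entry["entry_id"],  # stable across retries
        "Content-Type": "application/json",
    }
    return {"headers": headers, "body": body}

def verify_webhook(body: bytes, signature: str, secret: bytes) -> bool:
    """Receiver side: constant-time comparison of the payload HMAC."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)
```

Retries reuse the same `X-TrackCrate-Event-Id`, so receivers can deduplicate on it.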
CSV/JSON Export Parity And Scale
Given a request to export the ledger for project P as CSV and JSON for a date range When the export job completes Then the CSV and JSON contain identical record sets and fields as the read API schema, with deterministic column order for CSV And timestamps in exports include timezone offsets and are not localized during export And datasets of at least 100,000 records complete successfully via streaming or chunked downloads without memory errors
UI Badges And Post‑Decision Workflow Triggers
Given a user views an asset version V in TrackCrate UI or an AutoKit page When an Approve or Needs-Changes ledger entry is appended for a designated reviewer Then the visible decision status badge for that reviewer updates within 3 seconds to Approved or Needs Changes And clicking the badge reveals a link to the corresponding ledger details And for Approve on a releasable version, the system locks version V, notifies mastering, and updates related shortlinks within 10 seconds of ledger commit
Needs‑Changes Inline Feedback Capture
"As a collaborator reviewing stems on my phone, I want to quickly note what’s wrong and where so that the team can fix it without back‑and‑forth."
Description

When a recipient taps Needs‑Changes, present a lightweight, frictionless feedback form optimized for mobile: free‑text notes, quick tags (mix, vocal, timing, artwork), optional voice memo upload, and timecode/file reference pickers that map comments to specific stems. Support replying directly via SMS with parsed keywords and attachments where supported, and thread the feedback back into the project’s discussion/tasks. Sync comments to the Signoff Ledger entry and notify assignees with smart summaries.

Acceptance Criteria
Mobile Needs‑Changes Form Launch From Nudge
Given a recipient opens a TrackCrate nudge or SMS containing a Needs‑Changes action on a mobile device with a valid device‑bound token When the recipient taps Needs‑Changes Then the feedback form loads within 2 seconds over 4G and pre‑associates the correct release, asset version, and recipient identity without login And the form displays fields: free‑text notes (min 1, max 5,000 chars), quick tags (mix, vocal, timing, artwork, Other), optional voice memo, timecode picker, and file/stem reference picker And the submit button remains disabled until at least one of: notes length ≥ 3 chars, a quick tag is selected, a voice memo is attached, or a timecode/file reference is added
Quick Tags Capture And Validation
Given the feedback form is open When the user selects up to 5 quick tags including Other with custom text up to 30 characters Then tags are visually selected, deduplicated, can be deselected, and persisted to a local draft if the user navigates away and returns within 24 hours on the same device And on submit, tags are stored with an ISO8601 timestamp and linked to the project comment thread And Other requires non‑empty custom text and rejects terms on the profanity blocklist; submission is blocked and an inline error is shown within 100 ms
Timecode And Stem/File Reference Mapping
Given the feedback form is open for an asset with stems and/or masters When the user picks a stem and adds a timecode using the scrubber or manual entry Then timecode validates HH:MM:SS.mmm (0 ≤ time ≤ asset duration) and snaps to the nearest 10 ms And each comment stores a tuple {assetId, stemId|null, timecode|null} and renders as a clickable seek link in the private player And multiple references per submission are supported (min 1, max 50), with list reordering and deletion prior to submit
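The HH:MM:SS.mmm validation and 10 ms snapping can be expressed compactly (a sketch; the function name is hypothetical):

```python
import re

def parse_timecode(text: str, asset_duration_ms: int) -> int:
    """Validate HH:MM:SS.mmm, snap to the nearest 10 ms; raise ValueError if invalid."""
    m = re.fullmatch(r"(\d{2}):([0-5]\d):([0-5]\d)\.(\d{3})", text)
    if not m:
        raise ValueError(f"bad timecode format: {text!r}")
    h, mi, s, ms = (int(g) for g in m.groups())
    total_ms = ((h * 60 + mi) * 60 + s) * 1000 + ms
    if total_ms > asset_duration_ms:
        raise ValueError("timecode exceeds asset duration")
    return round(total_ms / 10) * 10  # snap to nearest 10 ms
```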
Voice Memo Upload And Transcription
Given the feedback form is open on a mobile device with microphone permission granted When the user records or uploads a voice memo up to 2 minutes or 10 MB (mp3, m4a, wav) Then the upload uses resumable chunks and shows progress; failures show a retry option with exponential backoff And server‑side transcription in English is generated within 60 seconds for at least 95% of memos and attached to the comment; the user can edit the transcript before submit And if transcription fails, submission still succeeds and the memo is linked; transcript status is recorded as failed
SMS Reply Parsing With Keywords And Attachments
Given a recipient receives an SMS nudge with a Needs‑Changes reply channel in a supported country When the recipient replies via SMS using keywords and optional attachments within 72 hours Then the system parses patterns NC:, TAG:, TIME:, STEM: and maps attachments (audio/image) to the referenced asset; unsupported formats are rejected with an explanatory SMS within 10 seconds And a feedback item is created with parsed fields, the original SMS text preserved, and attachments virus‑scanned and stored; the sender is confirmed via device‑bound token or phone verification fallback
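A minimal sketch of the NC:/TAG:/TIME:/STEM: keyword grammar above, assuming each value runs until the next keyword; the parser shape and field names are illustrative assumptions, not the shipped implementation:

```python
import re

# A keyword's value is every character up to the next keyword marker.
PATTERN = re.compile(
    r"\b(NC|TAG|TIME|STEM):\s*((?:(?!\b(?:NC|TAG|TIME|STEM):).)*)",
    re.IGNORECASE | re.DOTALL,
)
KEY_MAP = {"NC": "notes", "TAG": "tags", "TIME": "times", "STEM": "stems"}

def parse_sms_reply(body: str) -> dict:
    """Split an SMS reply into notes, tags, timecodes, and stem references."""
    result = {"notes": [], "tags": [], "times": [], "stems": []}
    for key, value in PATTERN.findall(body):
        result[KEY_MAP[key.upper()]].append(value.strip())
    return result
```

Attachment mapping, virus scanning, and sender verification would happen downstream of this parse step.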
Threading To Project Discussions And Tasks
Given feedback is submitted from the form or via SMS When the submission is accepted by the API Then a new thread or an appended comment is created in the project discussion with a backlink to the Signoff Ledger entry And tasks are auto‑created or updated for each unique quick tag and stem reference, assigned to release assignees, with due date set per project SLA; duplicates are merged by key {assetId, stemId, tag} And users mentioned with @username are notified; permissions ensure only collaborators can view the thread
Signoff Ledger Sync And Smart Summary Notifications
Given a Signoff Ledger entry exists for the review When Needs‑Changes feedback is submitted Then the Ledger entry is updated atomically with status Needs‑Changes, author, device fingerprint, and a hash of the payload; version history increments And a smart summary is generated within 60 seconds including tag counts, stem/timecode highlights, sentiment of notes, and transcript excerpts; delivered to assignees via email and in‑app And rate limits prevent more than 3 summaries per thread per 10 minutes; subsequent submissions are batched into the next summary
Contextual Deep‑Link to Private Player/AutoKit
"As an A&R reviewer, I want the approve action to take me straight to the correct version’s player so that I can confirm details before committing."
Description

Allow one‑tap links to open a secure, read‑only context page: the private stem player or AutoKit press page for the exact version under review. Enforce watermarking/preview rules, optional forced preview before enabling Approve, and expiring access aligned with token TTL. Preload relevant metadata (notes, change log) and support quick toggle between versions for A/B checks while keeping the action state intact. Maintain consistent theming and branding for recipients outside the workspace.

Acceptance Criteria
Deep Link Opens Exact Read‑Only Context Page
Given a recipient taps a contextual deep link for version V within its validity window When the link is opened Then the system renders a read‑only page for version V (Private Player or AutoKit) without requiring login And all edit, upload, delete, and share controls are hidden or disabled And a visible watermark/label is applied to previews per policy And full‑resolution downloads are disabled unless explicitly enabled for the link And a 200 response is returned within 2000 ms p95
Token TTL and Expiry Handling
- Given a link token with TTL T hours, When the link is opened before T expires, Then access is granted and media is playable
- Given the same token after TTL expiry, When the link is opened, Then the page displays an expiry message and no media is playable And API responses for protected assets return 410 Gone And Approve/Needs‑Changes controls are disabled And an expired_link_open event is recorded with token id and timestamp
Device‑Bound Approval Enforcement
- Given a link token is first used on Device A, When Approve or Needs‑Changes is tapped on Device A, Then the action executes and is recorded in the Signoff Ledger with version id, actor fingerprint A, and timestamp And repeat taps are idempotent and return the same decision state
- Given the same link is opened on Device B, When Approve or Needs‑Changes is tapped, Then the action is rejected with 403 Forbidden and message "Action restricted to original device" And view‑only access remains available
Forced Preview Gate Before Approval
- Given ForcedPreview is enabled for version V with RequiredSeconds S, When playback of V occurs for at least S cumulative seconds across previews without skipping unplayed segments, Then Approve and Needs‑Changes buttons become enabled
- Given playback is less than S seconds or the user attempts to scrub past unplayed segments, When Approve or Needs‑Changes is tapped, Then the buttons remain disabled and a prompt "Complete preview to enable" is shown And preview completion state persists across reloads within the token TTL
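Counting cumulative seconds "without skipping unplayed segments" amounts to merging the reported playback intervals and summing their unique coverage, so re-watching the same passage never double-counts. A sketch, assuming playback is reported as [start, end) second ranges:

```python
def merge_intervals(intervals):
    """Merge overlapping [start, end) playback ranges (in seconds)."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # extend the open range
        else:
            merged.append([start, end])
    return merged

def preview_gate_open(played_ranges, required_seconds):
    """True once the unique covered playback reaches RequiredSeconds.
    Scrubbed-past segments simply never appear in played_ranges."""
    covered = sum(end - start for start, end in merge_intervals(played_ranges))
    return covered >= required_seconds
```

Persisting `played_ranges` with the token would give the cross-reload persistence the criterion asks for.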
Metadata and Change Log Preload
Given a valid contextual link for version V When the page loads Then release/track title, version label, submitter, submitted datetime, reviewer notes, and change log since previous version are displayed above the fold And values match the authoritative data for V in the Signoff Ledger And metadata is visible within 500 ms p95 after initial HTML response And no user interaction is required to reveal the metadata
A/B Version Toggle Preserves Action State
Given versions V and V‑1 are available for comparison When the user toggles between V and V‑1 Then playback position and preview‑completion status are preserved per version And the enabled/disabled state of Approve and Needs‑Changes persists per version And if Approve has been submitted for V, Approve on other versions in the thread is disabled with message "Decision recorded for V" And the Signoff Ledger reflects a single active decision per review thread
External Recipient Theming and Branding Consistency
Given a recipient outside the workspace opens the contextual page When the page renders (Player or AutoKit variant) Then the workspace logo, primary/secondary colors, and custom domain are applied consistently And no internal navigation or admin UI elements are visible And primary buttons meet WCAG AA contrast (>= 4.5:1 for text) And Open Graph preview shows correct title and artwork without exposing internal URLs when the link is unfurled
Link Security: Expiry, Single‑Use, and Revocation
"As a label admin, I want approval links to expire and be revocable so that only intended recipients can make binding decisions."
Description

Implement strict link lifecycle controls: single‑use tokens, short TTLs, automatic invalidation on new asset versions (configurable), and manual revocation from the project’s approvals panel. Detect replay or anomaly signals (IP drift, device mismatch) and require step‑up verification (OTP) when risk is high. Provide clear recipient messaging when a link is expired or revoked and offer a safe path to request a fresh link. Log all security events and surface them in an admin audit view.

Acceptance Criteria
Single‑Use Token Enforcement for One‑Tap Approvals
- Given a one‑tap approval link is generated with a single‑use token and device binding, When the recipient activates it the first time on the bound device within the token TTL, Then the action executes, the token state changes to consumed, and the API returns 200 with action_result=recorded.
- Given the same token is requested again after being consumed, When any client calls the link, Then the API returns 410 Gone with reason=token_consumed, no approval side‑effects occur, and the UI shows "This approval link has already been used."
- Given concurrent requests for the same token, When two or more activations occur within 1 second, Then only the first succeeds and all others receive 409 Conflict with reason=duplicate_activation and no side‑effects.
- Given a token is single‑use, When a user attempts browser back/refresh to resubmit, Then the server prevents replay and responds as token_consumed.
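The single-winner guarantee under concurrent activations is typically enforced with a conditional state transition rather than a read-then-write. A sketch using SQLite; the schema, status values, and the mapping to HTTP codes are assumptions for illustration:

```python
import sqlite3

def consume_token(conn, token_id: str) -> int:
    """Atomically flip a token from 'issued' to 'consumed'.
    The conditional UPDATE guarantees exactly one winner; losers then
    distinguish 410 (already consumed) from 404 (unknown token)."""
    cur = conn.execute(
        "UPDATE tokens SET status = 'consumed' "
        "WHERE id = ? AND status = 'issued'",
        (token_id,),
    )
    if cur.rowcount == 1:
        return 200  # first activation wins; record the approval here
    row = conn.execute(
        "SELECT status FROM tokens WHERE id = ?", (token_id,)
    ).fetchone()
    if row is None:
        return 404
    # Replays/back-button resubmits land here with no side-effects.
    return 410 if row[0] == "consumed" else 403  # e.g. revoked
```

The same pattern covers the 409-on-concurrency case if the server distinguishes "lost within 1 second" from a later replay.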
Time‑To‑Live Expiry Handling with Recipient Messaging
- Given a token with TTL set (default 24h, min 15m, max 7d), When current time > expires_at, Then the link returns 410 Gone with reason=expired and no asset content is served.
- Given an expired link is opened, When the landing page renders, Then it displays "Link expired" messaging, shows the original sender/project name, hides file previews and durations, and provides a single CTA "Request a fresh link."
- Given rate limiting, When the requester clicks "Request a fresh link," Then requests are limited to 3 per 24h per recipient per project; excess attempts return 429 with reason=rate_limited.
Automatic Invalidation on New Asset Version (Configurable)
- Given auto_invalidate_on_new_version=true at project or asset level, When a new asset version is published, Then all unconsumed tokens for prior versions become invalid within 60 seconds and return 410 with reason=version_superseded.
- Given auto_invalidate_on_new_version=false, When a new version is published, Then existing tokens for prior versions remain valid until their TTL or manual revocation; visiting them shows a non‑blocking banner "A newer version is available."
- Given version supersession occurs, When the recipient opens an invalidated link, Then the UI explains the supersession and offers a CTA to open the latest AutoKit page or request a fresh link.
Manual Revocation from Approvals Panel with Instant Propagation
- Given a project owner selects one or more approval links in the approvals panel, When they click Revoke and confirm, Then the system revokes the selected tokens within 5 seconds globally and subsequent requests return 403 Forbidden with reason=revoked_by_admin.
- Given a token is revoked, When the recipient opens the link, Then the UI shows "Link revoked by sender" messaging and provides a CTA to request a new link.
- Given revocation occurs, When viewing Signoff Ledger, Then an entry appears with actor, scope (token/recipient/all), timestamp, and affected count.
Anomaly Detection and Step‑Up Verification (OTP)
- Given device mismatch or IP drift greater than configured threshold (e.g., >2 ASNs in 5 min or device_fingerprint != bound_device), When the link is opened, Then the action is gated behind OTP verification and no side‑effects occur until OTP success.
- Given OTP verification is required, When OTP is sent, Then the code is delivered via the original channel (SMS/email), expires in 5 minutes, allows max 5 attempts, and is rate‑limited to 3 sends per hour.
- Given OTP is successfully verified, When the user proceeds, Then the original action executes and the token is consumed; failures return 401 with reason=otp_failed and lock the token for 15 minutes after 5 failed attempts.
- Given high‑risk replay (simultaneous requests from different countries), When detected, Then the system blocks the request outright with 403 reason=high_risk_replay and prompts to request a fresh link.
Audit Logging and Admin Security Events View
- Given any security event (issued, consumed, expired, revoked, superseded, otp_sent, otp_passed, otp_failed, anomaly_detected), When it occurs, Then an immutable log entry is written with timestamp, actor/recipient identifiers, IP, user‑agent/device_id, token_id, reason code, and outcome.
- Given an admin opens the audit view, When they filter by project, recipient, event type, or date range, Then the system returns correct results within 2 seconds for up to 10k events and supports CSV export.
- Given retention policy, When events exceed 365 days, Then they remain queryable via archive export but are hidden from default UI unless Retention=All is selected.
Safe Fresh‑Link Request Flow for Expired/Revoked Links
- Given a recipient lands on an expired, revoked, or superseded link, When they click "Request a fresh link," Then the system prepopulates the recipient identity from the token, collects an optional note, and sends a reissue request to project owners without exposing asset content.
- Given project is configured for auto‑reissue on valid identity, When a fresh link is requested, Then a new single‑use token is issued with a new TTL and delivered via the original channel; the UI confirms "New link sent."
- Given reissue request is sent, When project owners view requests, Then they can approve or deny, and the requester sees a neutral status page (pending/approved/denied) without leaking asset details; all actions are logged.
Engagement Analytics and Smart Nudges
"As a project manager, I want visibility and automated reminders so that I can keep approvals moving without manual follow‑ups."
Description

Track delivery, opened, clicked, previewed, and decision events across channels with per‑recipient timelines. Provide workspace views of outstanding approvals, aging items, and bottlenecks. Offer configurable nudge schedules that respect recipient time zones and quiet hours, escalating to alternate channels when appropriate. Surface insights (e.g., which versions stall, which channels convert) to optimize outreach and shorten review cycles.

Acceptance Criteria
Per-Recipient Cross-Channel Event Timeline
- Given a review request with One‑Tap Approve is sent via email and SMS to a recipient with unique device‑bound links, When delivery, open, click, preview (private stem player/AutoKit), and decision events occur across channels, Then discrete events are recorded with ISO‑8601 UTC timestamps, channel attribution, device type, and appear in the recipient’s timeline
- Given duplicate or out‑of‑order events arrive within a 5‑minute window, When processing events, Then events are deduplicated idempotently by event ID and ordered by event time while retaining first and last occurrence metadata
- Given suspected bot opens are detected by heuristic (e.g., known user‑agents, zero‑duration), When rendering metrics, Then suspected bot opens are flagged and excluded from open and conversion rates but remain visible in the raw timeline
- Given a decision occurs via One‑Tap Approve on any channel, When syncing records, Then the decision is committed to the Signoff Ledger with signer identity, channel, device fingerprint hash, timestamp, and immutable audit trail
- Given normal operating conditions, When events are generated, Then 95% of events appear in the timeline within 10 seconds and 99.9% within 60 seconds
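Idempotent deduplication by event ID, with first/last-occurrence metadata retained and the timeline ordered by event time, can be sketched as below; the field names are illustrative assumptions:

```python
def build_timeline(events):
    """Deduplicate events by ID (keeping first/last receipt times)
    and order the result by event time, not arrival order."""
    by_id = {}
    for ev in events:  # ev: {"id", "event_time", "received_at", ...}
        seen = by_id.get(ev["id"])
        if seen is None:
            # First occurrence: record both receipt timestamps.
            by_id[ev["id"]] = dict(
                ev, first_seen=ev["received_at"], last_seen=ev["received_at"]
            )
        else:
            # Duplicate delivery: only the last-seen metadata moves.
            seen["last_seen"] = max(seen["last_seen"], ev["received_at"])
    return sorted(by_id.values(), key=lambda e: e["event_time"])
```

ISO‑8601 UTC strings sort lexicographically, so plain string comparison is enough here.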
Workspace Approvals Dashboard and Aging/Bottleneck Views
- Given there are active review items in the workspace, When the Approvals dashboard is opened, Then users can filter by project, asset/version, recipient, channel, status (Pending/Approved/Needs Changes), and age buckets (0–24h, 1–3d, 3–7d, 7d+), and sort by age ascending/descending
- Given items exceed the configurable SLA threshold (default 72 hours) without a decision, When the list is rendered, Then those items are flagged as Bottleneck and grouped by recipient and version with a red indicator
- Given a dashboard row is selected, When viewing details, Then the side panel shows the per‑recipient timeline, last event type/time, and outstanding decision state
- Given an export is requested on a filtered view, When generating the file, Then a CSV is downloaded containing columns: project, asset, version, recipient, status, age_hours, last_event_type, last_event_time_utc, top_channel_by_ctr, bottleneck_flag
- Given role‑based access controls, When non‑admin collaborators access the dashboard, Then they only see items and recipient timelines for projects they are permitted to view
Time‑Zone Aware Nudge Scheduling with Quiet Hours and Escalation
- Given a recipient has not made a decision within the configured window after initial send, When the nudge cadence evaluates, Then the next nudge is scheduled in the recipient’s local time zone (derived from last known TZ offset or geo IP from last interaction), respecting quiet hours (default 21:00–08:00 local) and optional weekend skip
- Given the primary channel nudge is delivered with no interaction within M hours, When escalation rules apply, Then an escalation nudge is sent to the next configured channel (e.g., email→SMS) with a maximum of one escalation per 24 hours and an escalation event recorded
- Given a decision event is recorded on any channel, When pending nudges exist for that recipient and item, Then all scheduled nudges and escalations are canceled within 60 seconds
- Given One‑Tap Approve links are included in nudges, When a recipient taps the link after expiration, Then the link gracefully falls back to an authenticated approval flow and logs an expired‑link event
- Given an urgent override is set by an admin, When scheduling during quiet hours, Then a single “Urgent” nudge may be sent with an audit log entry capturing the user, time, and justification
Conversion and Stalling Insights
- Given a date range and filters are selected with at least 100 contacted recipients, When viewing Insights, Then conversion to decision per channel is displayed with counts, rates, and 95% confidence intervals, and filters for asset type, version, and channel are applied to all charts
- Given multiple versions of the same asset have activity, When computing time‑to‑decision, Then median and p75 time‑to‑decision per version are shown, and versions in the worst quartile are labeled “Stalling”
- Given funnel stages (delivered→opened→clicked→previewed→decision) are available, When rendering funnels per channel, Then step‑wise drop‑off percentages and absolute counts are displayed and exportable
- Given an export of insights is requested, When generating the file, Then a CSV including impressions, deliveries, opens, clicks, previews, decisions, conversion_rate, median_ttd_hours, p75_ttd_hours, and applied filters is produced
- Given data freshness SLOs, When loading the Insights page, Then a visible “Last updated” timestamp is shown and metrics reflect events no older than 5 minutes for p95 requests
Compliance, Consent, and Opt‑Out Respect
- Given a recipient has opted out of a channel or tracking, When preparing sends and analytics, Then the system suppresses sends on opted‑out channels and disables tracking pixels and link tracking for that recipient while still allowing non‑tracked transactional delivery where permitted
- Given a GDPR/CCPA deletion request is submitted for a recipient, When processing the request, Then all personally identifiable analytics and timelines for that recipient are deleted within 30 days, removed from dashboards/exports, and an audit log entry is created
- Given region‑based consent is required, When sending initial outreach, Then consent capture is included where required, and no analytics events are stored until consent is granted; attempts to nudge prior to consent are blocked
- Given a project has analytics disabled at the workspace level, When sending and reporting, Then no tracking pixels or instrumented links are used, events are not stored, and Insights excludes the project from aggregate metrics
Event Webhooks and API Access
- Given a workspace admin configures a webhook endpoint, When delivery, open, click, preview, and decision events occur, Then JSON payloads are delivered with an HMAC‑SHA256 signature header, include idempotency keys, and are retried with exponential backoff up to 24 hours on failure (excluding 410)
- Given clients query the Events API, When filtering by recipient, project, item, channel, event type, and date range, Then results are paginated, return within 2 seconds p95 for ≤10k records, and include consistent schemas with ISO‑8601 times and next/prev cursors
- Given webhook secret rotation is initiated, When delivering events during rotation, Then both old and new secrets validate for 1 hour overlap, and a test event confirms successful delivery
- Given OAuth scopes and project permissions are enforced, When a token attempts to access events outside scope, Then the API responds 403 with a standardized error code and correlation ID
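HMAC‑SHA256 signing that tolerates the one-hour secret-rotation overlap can be sketched as below; the header format and function names are assumptions for illustration:

```python
import hashlib
import hmac

def sign_payload(secret: bytes, body: bytes) -> str:
    """Hex HMAC-SHA256 signature a sender would put in the header."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_signature(secrets, body: bytes, header_value: str) -> bool:
    """Accept any currently valid secret (old + new during rotation),
    using a constant-time comparison to avoid timing leaks."""
    return any(
        hmac.compare_digest(sign_payload(s, body), header_value)
        for s in secrets
    )
```

A receiver would recompute the signature over the raw request body (before JSON parsing) and use the idempotency key to discard redelivered events.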

Copy Optimizer

A/B test subject lines and CTAs by role (Mixer, PR, A&R) and let the system auto‑select the top performer per audience. It personalizes copy with project, track, and milestone tokens to boost open rates and convert silence into approvals.

Requirements

Role-based Audience Segmentation
"As a label manager, I want to target recipients by role so that each audience sees the most relevant subject line and CTA for their responsibilities."
Description

Introduce audience segmentation by professional role (e.g., Mixer, PR, A&R) and map contacts to one or more roles at the project or label level. Expose role filters when composing campaigns so the same message can carry role-specific variants. Sync segments with existing TrackCrate contact lists and collaborator records, support CSV import, and enforce deduplication across overlapping lists. Provide timezone-aware scheduling and per-role send eligibility rules. Store segment definitions with the campaign for reproducibility and reporting.

Acceptance Criteria
Map Contacts to Roles at Project and Label Levels
- Given a contact without roles at Label L, When a user assigns roles "PR" and "A&R" at the label level, Then the contact appears in the "PR" and "A&R" role segments for all projects under Label L.
- Given a contact with label-level role "PR" on Label L and project-level role "Mixer" on Project P, When building segments for Project P, Then the contact appears in both "PR" and "Mixer" segments for Project P.
- Given a contact with label-level role "PR" on Label L, When the "PR" role is removed at Project P only, Then the contact remains in "PR" segments for other projects under Label L and is excluded from "PR" segment for Project P.
- Given an API request to fetch roles scoped to a project, When GET /contacts/{id}/roles?scope=project:P is executed, Then the response lists only the roles effective for Project P.
- Given audit logging is enabled, When any role is added or removed for a contact at label or project scope, Then an audit event is recorded with actor, scope, role, timestamp, and before/after values.
Expose Role Filters and Variant Selection in Campaign Composer
- Given a user opens the campaign composer for Project P, When they open the Audience step, Then role filters "Mixer", "PR", and "A&R" are displayed as selectable chips if available for Project P or its Label.
- Given the user selects roles "Mixer" and "PR", When the selection changes, Then the unique recipient count and per-role counts refresh within 2 seconds.
- Given the user assigns a subject/body variant to "Mixer" and "PR", When no variant is assigned to any selected role, Then the composer blocks scheduling and displays a validation message requiring a default or per-role variant.
- Given the user clicks "Preview Recipients", When the dialog opens, Then each listed recipient shows the resolved role and the variant that will be sent.
- Given the user reorders role precedence to ["A&R","PR","Mixer"], When multiple roles match a contact, Then the variant matching the highest-precedence role is used.
CSV Import with Role Assignments and Validation
- Given a CSV with headers email, first_name, last_name, roles, scope, scope_id, When uploaded to the Contacts Importer, Then valid rows are upserted and roles are assigned according to scope (label or project) and scope_id.
- Given the roles column contains unknown role values, When processing, Then those rows are rejected with line numbers and reasons in the import report; valid rows continue to import.
- Given duplicate contacts by email or external_id exist in the CSV or database, When importing, Then records are merged without creating duplicates and roles are unioned per scope.
- Given the import completes, When viewing the summary, Then totals show processed, created, updated, and rejected counts plus per-role assignments added.
- Given a row specifies project scope with scope_id=P and role="Mixer", When verified post-import, Then the contact appears in Project P's "Mixer" segment and not in other projects unless roles were assigned for them.
Deduplicate Recipients Across Overlapping Lists and Roles
- Given a campaign includes contact lists L1 and L2 with overlapping emails and selected roles ["PR","Mixer"], When the audience is built, Then each email appears at most once in the final recipient list.
- Given a contact qualifies for multiple selected roles, When role precedence is set to ["PR","Mixer","A&R"], Then the contact is assigned to "PR" and receives the "PR" variant.
- Given deduplication occurs, When reviewing the Audience step, Then the UI displays total unique recipients, duplicates removed, and a downloadable CSV of dedup decisions.
- Given a contact is explicitly excluded via an exclusion list E, When building the audience, Then the contact is excluded regardless of role membership.
- Given the audience is rebuilt after list changes, When counts are recalculated, Then deduplication and role resolution are re-applied consistently.
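Deduplication with role precedence reduces to a union keyed by normalized email plus a min-by-rank role pick. A minimal sketch, assuming the data shapes shown (real contact records would carry more fields):

```python
def build_audience(lists, roles_by_email, precedence, exclusions=frozenset()):
    """Union overlapping lists into unique emails, drop exclusions,
    and assign each contact its highest-precedence matching role."""
    rank = {role: i for i, role in enumerate(precedence)}
    audience = {}
    for contact_list in lists:
        for email in contact_list:
            email = email.lower()  # normalize so overlaps collapse
            if email in exclusions or email in audience:
                continue  # exclusion list wins; first occurrence kept
            matching = [r for r in roles_by_email.get(email, []) if r in rank]
            if matching:
                audience[email] = min(matching, key=rank.__getitem__)
    return audience
```

The dropped duplicates and exclusion hits could be collected alongside for the "dedup decisions" CSV the criterion mentions.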
Timezone-Aware Scheduling by Recipient Local Time
- Given a campaign is scheduled with "Send at 9:00 AM recipient local time" on Date D, When recipients span multiple time zones, Then the system creates per-time-zone send batches so that each recipient's send is enqueued for 09:00 ±10 minutes on Date D in their local time.
- Given a recipient has no stored timezone, When scheduling, Then the system infers timezone from city/state/country if available, else falls back to the label timezone, and marks the inference method in logs.
- Given the user opens Schedule Preview, When viewing, Then per-time-zone batch counts and estimated send times are displayed.
- Given the campaign is paused before a future batch, When paused, Then pending batches do not send until the campaign is resumed.
- Given a daylight saving time transition occurs on Date D in a recipient's region, When the send is executed, Then the local 09:00 time is respected using the correct offset for that date.
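The per-time-zone batching, including the correct DST offset for the send date, can be sketched with the standard-library zoneinfo, which resolves the offset per date automatically; the function shape and recipient tuples are illustrative assumptions:

```python
from collections import defaultdict
from datetime import date, datetime
from zoneinfo import ZoneInfo

def batch_by_local_send(recipients, send_date: date, hour: int = 9):
    """Group (email, tz_name) recipients into batches keyed by the UTC
    instant corresponding to 09:00 local on send_date in each zone."""
    batches = defaultdict(list)
    for email, tz_name in recipients:
        # zoneinfo applies the zone's offset for this specific date,
        # so DST transitions are handled without special-casing.
        local = datetime(send_date.year, send_date.month, send_date.day,
                         hour, tzinfo=ZoneInfo(tz_name))
        batches[local.astimezone(ZoneInfo("UTC"))].append(email)
    return dict(batches)
```

Recipients with no stored zone would be resolved (geo inference or label fallback) before this step.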
Per-Role Send Eligibility Rules and Safeguards
- Given eligibility rules are enabled, When building the audience, Then recipients are excluded if they are globally unsubscribed, project-unsubscribed, have no valid email, or have a hard bounce recorded in the last 12 months.
- Given a recipient is marked ineligible for a specific role (e.g., PR), When building a campaign targeting that role, Then the recipient is excluded even if they belong to other selected roles.
- Given exclusions are applied, When reviewing the Audience step, Then the UI displays counts by exclusion reason and allows export of the exclusion report.
- Given eligibility may change over time, When using timezone-based waves, Then eligibility is re-evaluated immediately before each wave sends.
- Given the user attempts to override exclusions by manually adding a recipient, When scheduling, Then the system blocks scheduling and explains the conflicting eligibility rule.
Persist Segment Definitions with Campaign for Reproducible Reporting
- Given a campaign is created, When the user finalizes the audience, Then the system stores the role filters, selected lists, exclusion lists, role precedence, and the evaluated recipient snapshot (IDs/emails) with a timestamp.
- Given a user opens the Campaign Report, When viewing, Then open, click, and reply metrics are segmented by role as defined at send time and the exact segment definition is viewable.
- Given a campaign is cloned, When cloning, Then the segment definitions are copied but the recipient snapshot is not, unless the user selects "Use original recipients".
- Given an export is requested, When exporting, Then a JSON file of segment definitions and a CSV of resolved recipients with roles are generated.
- Given a regulatory audit request, When retrieving the campaign audience, Then the stored snapshot and definitions reproduce the exact recipients and roles that were targeted at send time.
Tokenized Personalization Engine
"As a campaign creator, I want to personalize copy with project and track tokens so that messages feel relevant and drive higher engagement."
Description

Enable token insertion for project, track, and milestone metadata in subject lines and CTAs (e.g., {project_name}, {track_title}, {milestone}, {release_date}). Provide validation, default fallbacks, and safe encoding to prevent broken renders or leaking private fields. Allow per-role overrides for token values (e.g., Mixer sees mix version, PR sees press hook). Surface live previews per recipient sample and a coverage report indicating missing values. Pull data from TrackCrate’s canonical sources (projects, versions, AutoKit pages) to keep personalization accurate and current.

Acceptance Criteria
Token Rendering with Defaults in Subject and CTA
Given a template contains tokens {project_name}, {track_title}, {milestone}, and {release_date} And default values are configured for optional tokens When rendering a subject line and CTA for a recipient Then each token resolves to its current canonical value And any missing token with a configured default renders the default value And any missing token without a configured default renders as an empty string and does not output braces or placeholder text And the final rendered strings contain no unreplaced token syntax
Token Whitelist Validation and Safe Encoding
Given a user attempts to save a template with tokens When validation runs Then only tokens from the approved whitelist are permitted And any unknown or disallowed token prevents save with an error listing each offending token and its position And references to private or sensitive fields are blocked with a clear error and no value is shown in previews And at render time, inserted values are HTML-escaped in text/HTML contexts and URL-encoded in link/parameter contexts And literal curly braces can be escaped (e.g., by doubling) to render as text without triggering token parsing
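The whitelist, fallback, brace-escaping, and context-sensitive encoding rules above can be sketched in one small renderer. The token names come from the feature description; the API shape and error behavior are assumptions, not the shipped engine:

```python
import html
import re
from urllib.parse import quote

WHITELIST = {"project_name", "track_title", "milestone", "release_date"}
# Doubled braces escape to literals; single braces delimit tokens.
TOKEN_RE = re.compile(r"\{\{|\}\}|\{([a-z_]+)\}")

def render(template, values, defaults=None, url_context=False):
    """Resolve whitelisted tokens with configured fallbacks and
    context-appropriate encoding; reject disallowed tokens."""
    defaults = defaults or {}

    def repl(m):
        if m.group(0) == "{{":
            return "{"
        if m.group(0) == "}}":
            return "}"
        name = m.group(1)
        if name not in WHITELIST:
            raise ValueError(f"disallowed token: {name}")
        # Missing or empty values fall back to the configured default,
        # else an empty string (never raw braces) reaches the output.
        raw = values.get(name) or defaults.get(name, "")
        return quote(str(raw)) if url_context else html.escape(str(raw))

    return TOKEN_RE.sub(repl, template)
```

A link builder would call `render(..., url_context=True)` for query parameters and the HTML path otherwise.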
Role-Based Token Overrides
Given role-specific overrides are configured for certain tokens for Mixer, PR, and A&R And a recipient has an assigned role When rendering the template Then the token resolves using the override for the recipient's role And if no override exists for that role, the global/default token value is used And recipients with unknown or unmapped roles receive the global/default token value And overrides can be enabled/disabled per token without affecting global values
Live Preview by Recipient Sample
Given the user selects a sample recipient and role in the preview panel When the user edits the template or switches the selected recipient/role Then the preview updates within 1 second to reflect resolved tokens with safe encoding applied And the preview visually indicates where fallback values were used And the user can cycle previews for Mixer, PR, and A&R without reloading the page
Missing Value Coverage Report
Given an audience list is selected for a campaign When generating the coverage report Then the report shows, per token, the count and percentage of recipients with missing values And provides up to 50 sample recipient rows per token with missing data and links to their records And tokens with 100% coverage are marked complete And the report can be exported as CSV And report generation completes within 10 seconds for up to 10,000 recipients
Canonical Source Data Consistency
Given tokens are mapped to canonical entities (Project, Version, AutoKit) When underlying metadata is updated in TrackCrate Then subsequent renders use the latest saved values without requiring template changes And each rendered message records source entity IDs and retrieval timestamps in a debug view And if a token's source data is unavailable at render time, the engine applies the configured fallback/default and logs a retrievable error without exposing raw token text
Variant Manager for Subject & CTA
"As a marketing coordinator, I want to set up multiple subject line and CTA variations per audience role so that I can compare performance without managing separate campaigns."
Description

Provide creation and management of multiple copy variants per role for subject lines and CTAs within a single campaign. Allow configuring traffic splits, minimum sample sizes, and test duration windows. Support activating, pausing, and archiving variants, with guardrails to prevent accidental deletion of active variants. Persist variant metadata and version history, and link each variant to its target destination (AutoKit page, shortlink) with auto-applied UTM parameters. Ensure variants can be cloned across campaigns to speed iteration.

Acceptance Criteria
Manage multiple copy variants per role
- Given a campaign with roles Mixer, PR, and A&R, When the user adds variants for each role, Then the system allows at least 20 variants per role and displays a per-role variant count.
- Given a new variant, When the user saves it, Then subject (1–120 chars), CTA text (1–60 chars), and role are required and validated; invalid fields show inline errors and prevent Save.
- Given an existing variant, When the user edits and saves it, Then the list updates with the latest editor name and an updated_at timestamp accurate to the second.
- Given a variant in Draft, When the user activates it, Then its status changes to Active and it becomes eligible for traffic assignment within 60 seconds.
Configure traffic splits, minimum sample size, and test window
- Given two or more Active variants for a role, When the user sets traffic splits, Then each split accepts 0.0–100.0 and the total must equal 100.0 within 0.1% precision or Save is disabled.
- Given min sample size N and test duration D, When the test is running, Then winner evaluation is blocked until every Active variant reaches at least N unique recipients AND D has elapsed since test start.
- Given a variant is Paused, When the user saves new splits, Then the remaining Active variants' splits auto-normalize to total 100.0%, preserving relative ratios.
- Given invalid inputs (total ≠ 100%, N < 1, or D < 1 minute), When the user clicks Save, Then inline validation messages are shown and Save is prevented.
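The auto-normalization rule above (rescale the remaining Active variants so their splits total 100% while preserving relative ratios) can be sketched as follows; this is a minimal illustration, not TrackCrate's actual implementation, and the function name is hypothetical:

```python
def normalize_splits(splits: dict[str, float]) -> dict[str, float]:
    """Rescale the remaining Active variants' splits so they total 100.0
    while preserving their relative ratios (e.g., after a variant is
    Paused and its share is removed from the pool)."""
    total = sum(splits.values())
    if total == 0:
        # Degenerate case: split evenly across the remaining variants.
        return {vid: 100.0 / len(splits) for vid in splits}
    return {vid: pct * 100.0 / total for vid, pct in splits.items()}
```

For example, if variant C (20%) is paused, the remaining `{"A": 50.0, "B": 30.0}` normalizes to `{"A": 62.5, "B": 37.5}` — the 5:3 ratio is preserved. A production version would also round splits to one decimal with remainder correction so the displayed total stays within the 0.1% precision the criteria require.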
Activate, pause, archive, and deletion guardrails
- Given an Active variant, When the user opens the actions menu, Then Delete is not available and a tooltip explains "Cannot delete an active variant; pause or archive first."
- Given an Active variant, When the user clicks Pause and confirms, Then status becomes Paused and the variant receives 0% traffic within 60 seconds.
- Given a Paused or Archived variant, When the user clicks Delete, Then a double-confirmation modal requiring typing DELETE appears before permanent deletion is allowed.
- Given a Draft variant with zero sends, When the user clicks Delete and confirms, Then the variant is permanently deleted and removed from all lists within 5 seconds.
Persist variant metadata and version history
- Given any change to variant copy, destination, or settings, When the user clicks Save, Then a new immutable version record is created storing editor, timestamp (UTC), and field-level diffs.
- Given a variant with prior versions, When the user opens Version History, Then the system displays at least the last 50 versions in reverse chronological order with a compare view.
- Given a selected prior version, When the user clicks Revert and confirms, Then a new version is created that restores the prior content and settings without overwriting history.
- Given a variant, When the user clicks Export History, Then CSV and JSON downloads are available within 5 seconds.
Link destinations with auto-applied UTM parameters
- Given a variant, When the user selects an AutoKit page or shortlink as destination, Then the system appends utm_source=trackcrate, utm_medium=email, utm_campaign={campaign_id}, utm_role={role}, utm_variant={variant_id} and displays the final URL.
- Given a destination URL that already contains UTM parameters, When the system applies UTMs, Then utm_campaign, utm_role, and utm_variant are overwritten; other UTM keys are preserved.
- Given the final URL, When the user clicks Test Link, Then the request returns HTTP 200 or 3xx within 3 seconds; otherwise, Save is blocked with an error.
- Given a variant is cloned to another campaign, When saved in the target campaign, Then UTMs are recalculated to reflect the target campaign and new variant_id.
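The UTM-merge behavior described above — overwrite utm_campaign/utm_role/utm_variant, preserve everything else already on the URL — can be sketched with the standard library. This is an illustrative reading of the criteria (it assumes pre-existing utm_source/utm_medium values are preserved rather than overwritten):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def apply_utms(url: str, campaign_id: str, role: str, variant_id: str) -> str:
    """Attach TrackCrate UTMs to a destination URL.

    utm_campaign, utm_role, and utm_variant are always overwritten;
    all other query parameters (including other UTM keys) are preserved.
    """
    parts = urlsplit(url)
    params = dict(parse_qsl(parts.query, keep_blank_values=True))
    # Only set source/medium if the URL does not already carry them.
    params.setdefault("utm_source", "trackcrate")
    params.setdefault("utm_medium", "email")
    # These three are recalculated on every save (and on clone).
    params.update({
        "utm_campaign": campaign_id,
        "utm_role": role,
        "utm_variant": variant_id,
    })
    return urlunsplit(parts._replace(query=urlencode(params)))
```

On clone, calling the same function with the target campaign's IDs naturally produces the recalculated UTMs the last criterion requires.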
Clone variants across campaigns
- Given a source campaign, When the user selects one or more variants and a target campaign and clicks Clone, Then the system creates Draft variants in the target campaign with copied subject, CTA, role, and destination; traffic splits are not copied by default.
- Given the Clone dialog, When the user checks "Copy test settings," Then min sample size, test window, and traffic splits are copied and normalized to 100% among the cloned set.
- Given cloned variants, When created, Then each receives a new unique variant_id; version history is reset but origin_campaign_id and origin_variant_id are stored in metadata.
- Given clone completion, When the operation succeeds, Then a confirmation shows the count cloned and deep links to the target campaign.
Traffic redistribution on state changes
- Given a role with multiple Active variants, When one variant is Paused or Archived, Then its traffic share is redistributed proportionally among the remaining Active variants within 60 seconds and the change is logged with actor, timestamp, and before/after splits.
- Given a variant resumes from Paused to Active, When saved, Then its configured split is reintroduced and other splits auto-normalize to keep the total at 100%.
- Given repeated state changes within 5 minutes, When traffic recalculation occurs, Then the system debounces adjustments to at most one recalculation per role every 15 seconds.
Auto-Select Top Performer
"As a product marketer, I want the system to automatically pick the best-performing variant per role so that the rest of my audience receives the highest-impact copy without manual monitoring."
Description

Implement an automated winner selection mechanism that evaluates variants per role against defined success metrics (e.g., open rate for subject lines, click-to-approve for CTAs). Enforce minimum sample sizes and confidence thresholds before selection, with tie-breakers and cooldown periods. Once a winner is determined, automatically route remaining eligible sends to the top variant and optionally backfill future sends. Provide manual override, audit logging, and notifications when a winner is picked or conditions are not met.

Acceptance Criteria
Role-Scoped Winner Selection by Metric and Confidence
Given a running A/B test for subject line variants A, B, C targeted to role = PR with metric = open rate, minSampleSize = 200 deliveries per variant, and confidenceThreshold = 95% And event data shows each variant has reached at least 200 deliveries for the PR audience And variant B’s open rate is statistically higher than A and C at or above 95% confidence When the evaluation job runs for role = PR and templateType = subject line Then the system marks variant B as the Winner for role = PR and templateType = subject line And the selection timestamp and parameters (minSampleSize, confidenceThreshold, sample sizes, observed rates) are recorded And no winner is selected for roles that have not yet met minSampleSize and confidenceThreshold
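The "statistically higher at or above 95% confidence" check above is commonly implemented as a one-sided two-proportion z-test on the pooled rate. A minimal sketch, assuming that test (the spec does not name a specific statistical method, so treat this as one reasonable choice):

```python
import math

def is_significant_winner(wins_a: int, n_a: int,
                          wins_b: int, n_b: int,
                          confidence: float = 0.95) -> bool:
    """One-sided two-proportion z-test: is variant A's rate (e.g. open
    rate) significantly higher than variant B's at the given confidence?
    Illustrative sketch only; assumes min sample sizes are already met."""
    p_a, p_b = wins_a / n_a, wins_b / n_b
    # Pooled proportion under the null hypothesis (no difference).
    p_pool = (wins_a + wins_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return False
    z = (p_a - p_b) / se
    # One-sided critical z-values for the supported confidence levels.
    z_crit = {0.95: 1.645, 0.99: 2.326}[confidence]
    return z >= z_crit
```

The evaluation job would run this pairwise (B vs. A, B vs. C) and only mark B as Winner if it clears the threshold against every other Active variant in the role.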
Auto-Routing Remaining Sends and Optional Backfill
Given a winner is selected for role = Mixer and templateType = CTA within Campaign X And remainingEligibleSends = true for Mixer in Campaign X And backfillFutureSends = true for Campaign X When the system processes subsequent sends for Mixer in Campaign X Then 100% of remaining eligible sends for Mixer route to the winning CTA variant And any scheduled-but-unsent messages for Mixer in Campaign X are updated to use the winning variant before dispatch And historical sends already dispatched are not altered
Tie-Breaker Resolution on Equal Performance
Given an A/B test for role = A&R and templateType = subject line where variants A and B meet minSampleSize = 300 and confidenceThreshold = 95% And the statistical test returns no significant difference between A and B at the configured confidence level When the system must select a winner due to tie-breaker policy Then the system applies tie-breakers in order: (1) higher absolute opens, (2) earliest to reach minSampleSize, (3) lexicographically smallest variant ID And the chosen variant is marked Winner with tieBreakerUsed recorded in the audit log
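The three-level tie-breaker ordering above maps cleanly onto a single sort key. A sketch, with a hypothetical dict shape for the candidate variants:

```python
def break_tie(variants: list[dict]) -> str:
    """Pick a winner among statistically tied variants using, in order:
    (1) higher absolute opens, (2) earliest to reach min sample size,
    (3) lexicographically smallest variant ID.

    Each dict is assumed (hypothetically) to carry: id, opens, and
    reached_min_sample_at (a sortable timestamp)."""
    chosen = min(
        variants,
        # Negate opens so "higher" wins under min(); the remaining
        # fields break ties in the documented order.
        key=lambda v: (-v["opens"], v["reached_min_sample_at"], v["id"]),
    )
    return chosen["id"]
```

The audit log would then record which tuple position actually decided the outcome as `tieBreakerUsed`.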
Cooldown Enforcement After Winner Selection
Given cooldownHours = 12 is configured for Campaign Y And a winner has been selected for role = PR and templateType = CTA at 10:00 UTC When additional events arrive during the cooldown window (10:00–22:00 UTC) Then the system does not re-evaluate or change the winner for PR/CTA in Campaign Y during that window And at 22:00 UTC or later, re-evaluation is permitted only if explicitly triggered and only for scenarios where a winner has not yet been selected
Manual Override with Audit Trail and Safeguards
Given a user with permission = Campaign.Editor opens Campaign Z where no winner has been auto-selected for role = Mixer, templateType = subject line When the user manually sets variant C as Winner and confirms the override Then the system routes subsequent eligible sends for Mixer/subject line in Campaign Z to variant C immediately And an audit record is created with fields: action=manual_override, userId, timestamp, role, templateType, selectedVariant, reason, previousState, campaignId And the UI displays an override badge and offers a one-click revert to previous state
Notifications and Unmet-Conditions Handling
Given notificationRecipients are configured for Campaign X And either (a) a winner is auto-selected or (b) the campaign end time passes without meeting minSampleSize and/or confidenceThreshold When either condition occurs Then the system sends a notification within 5 minutes including: campaignId, role, templateType, selectedVariant (if any), metric, sample sizes, rates, confidence, and next steps And if unmet conditions occur, the notification states that no winner was selected and that traffic remains split per current allocation And all notifications are logged with delivery status for audit
Performance Analytics & Attribution
"As a label analyst, I want detailed performance metrics by role and variant so that I can understand what copy drives approvals and refine future campaigns."
Description

Deliver per-variant, per-role analytics including sends, opens, clicks, approval events, and conversion rate over time. Attribute clicks and conversions using TrackCrate shortlinks and AutoKit press page events, including gated stem player interactions and watermarked download requests. Provide role-level cohort breakdowns, trend charts, and exportable CSV. Include campaign and variant-level UTMs for external analytics alignment. Surface diagnostic insights (e.g., low token coverage, time-of-day effects) to inform future variant creation.

Acceptance Criteria
Variant Analytics by Role
Given a campaign with at least two variants targeted to roles Mixer, PR, and A&R and ≥100 recipients per role When a user opens the Copy Optimizer analytics view and selects a date range and conversion window Then the dashboard displays, per variant and per role: Sends, Unique Opens, Unique Clicks, Approval Events, and Conversion Rate as a time series (daily/weekly) for the selected range And filters for Role, Variant, and Date Range are available and update all widgets within 1 second on desktop and 2 seconds on mobile (p95) And Conversion Rate = Approval Events within the selected window ÷ Unique Recipients Sent within the same window, displayed to one decimal place And header totals match the sum of the displayed series for the selected filters within ±0.5%
Attribution via Shortlinks and Press Events
Given all outbound links are generated as TrackCrate shortlinks parameterized with campaign_id, variant_id, role, recipient_token, and UTMs When a recipient clicks any link in a variant message Then a Click event is recorded and attributed to the exact campaign, variant, role, and recipient, and UTMs are forwarded to the destination And when the recipient lands on the AutoKit press page, Page View, Stem Play Start, Stem Play 30%, Stem Play 100%, Gated Stem Access Attempt, Watermarked Download Request, and Approval events are tracked And events deduplicate per recipient within a 30-minute session window (max 1 unique open and 1 unique click per link per session) And events appear in analytics within 5 minutes (p95) and 15 minutes (p99) of occurrence
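The 30-minute session deduplication rule can be sketched as a rolling-window filter. This is one plausible interpretation (a session is extended by any activity on the same recipient/link/event key); the spec does not pin down fixed vs. rolling sessions, so treat the details as assumptions:

```python
def dedupe_events(events: list[dict], window_secs: int = 1800) -> list[dict]:
    """Keep at most one event per (recipient, link, type) within a
    rolling 30-minute session window. `events` must be time-ordered;
    the dict shape (recipient/link/type/ts) is hypothetical."""
    last_seen: dict[tuple, float] = {}
    unique = []
    for ev in events:
        key = (ev["recipient"], ev["link"], ev["type"])
        if key not in last_seen or ev["ts"] - last_seen[key] >= window_secs:
            unique.append(ev)  # counts toward unique opens/clicks
        # Any activity extends the session for this key.
        last_seen[key] = ev["ts"]
    return unique
```

A production pipeline would apply this per message as well, and keep the raw (non-deduplicated) events so both unique and total counts remain available.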
Role Cohorts and CSV Export
Given analytics contains events for multiple roles over at least 4 weeks When the user opens the Cohorts view and selects Group by Role and Cohort = Week of First Send Then the table shows, per role and week: Recipients, Sends, Open Rate, Click Rate, Approval Rate, and Conversion Rate, with 4-week trend sparklines And Export CSV is available for the current filters and includes columns: campaign_id, campaign_name, variant_id, variant_name, role, recipient_id_hash, send_ts_utc, first_open_ts_utc, opens_count, first_click_ts_utc, clicks_count, approval_ts_utc, converted_within_window, conversion_window_hours And the CSV row count equals the number of unique recipient–variant pairs for the selected filters, encoded UTF-8 with LF endings, and downloads in ≤5 seconds for up to 100k rows
UTM Tagging Consistency
Given campaign-level UTMs are configured and variant-level overrides exist When the system generates outbound shortlinks for the campaign Then each link includes UTMs: utm_source=trackcrate, utm_campaign=<campaign_slug>, utm_medium=<channel>, utm_content=<variant_id>, utm_term=<role>, preserving existing query parameters And after redirects, destination URLs retain these UTM parameters in the browser address bar for press pages and external links And duplicate UTM keys are not created, values are URL-encoded, and the final URL length is ≤2048 characters
Diagnostic Insights for Variant Creation
Given completed sends with at least two variants per role in the last 30 days When the nightly diagnostics job runs Then the system surfaces insights including: Low Token Coverage when <90% of tokenized fields resolve for a variant; Time-of-Day Effect when an hour-of-send yields ≥20% relative lift in open rate with p<0.05; Underperforming Subject/CTA when a variant is ≥15% below the role median click rate with p<0.10 And each insight displays impacted metric, estimated lift/drag, confidence, and sample size, with a recommended action (e.g., adjust send window, fix tokens, create new variant) And an Insight Details view deep-links to create a prefilled variant or scheduling change
Data Quality, Deduplication, and Bot Filtering
Given some mailbox providers prefetch images and scan links When events are ingested Then likely bot traffic is flagged using user-agent rules, burst patterns (>5 links in <1s), headless signatures, and known data-center IP ranges And flagged events are excluded from unique open/click metrics by default with a user toggle to include them; totals update within 2 seconds of toggling And events are deduplicated per recipient per session (30 minutes) and per message; unique vs total counts are both available And hard bounces are excluded from the Sends denominator for rate calculations; soft bounces are excluded unless delivery succeeds within 48 hours
Timezone and Conversion Window Controls
Given recipients span multiple time zones and the campaign owner has a set account timezone When the user selects Timezone = Campaign Owner and Conversion Window = 7 days and Date Range = Last 30 days Then all charts and tables render in the selected timezone, tooltips display both local offset and UTC timestamps, and conversion rate counts approvals occurring within 7 days of each send And switching Timezone to UTC updates all timestamps and bucket boundaries without changing underlying totals or rates And the selected timezone, conversion window, and date range persist per user and per campaign for future sessions
Preview, Test Send, and QA Checks
"As a campaign owner, I want to preview and test-send personalized variants with automated checks so that I catch issues before they reach my audience."
Description

Offer real-time previews of personalized subject lines and CTAs per role, with sample recipient toggling. Support test sends to seed lists and internal Slack/email notifications with tracking disabled. Run automated QA checks for broken tokens, missing fallbacks, malformed links, and inconsistent role mappings. Validate destination access (AutoKit page privacy, stem player permissions) before launch and block sending on critical failures. Provide a preflight checklist summary and one-click fixes where possible.

Acceptance Criteria
Real-time Role-Based Preview with Sample Recipient Toggle
Given a campaign with subject and CTA templates containing tokens and role variants, When the user selects a role (Mixer, PR, A&R), Then the preview renders the correct role variant for both subject and CTA. Given a list of sample recipients with role, locale, and metadata, When the user toggles between recipients, Then all tokens resolve using that recipient's data and configured fallback values. Given tokens without available data and with defined fallbacks, When preview renders, Then fallbacks are displayed and missing tokens without fallbacks are highlighted as errors. Given the user edits template text, When typing stops for 300 ms, Then the preview updates within 300 ms. Given the preview contains CTA links, When rendered in preview mode, Then links are clearly labeled as preview and tracking is disabled.
Test Sends to Seed Lists with Tracking Disabled and Notifications
Given one or more configured seed lists (emails and/or Slack channels), When the user clicks Send Test, Then a test message is delivered to all selected seed recipients. Given test sends, When delivered, Then all tracking is disabled: no open/click pixels are embedded, shortlinks resolve without logging metrics, and events are not recorded for analytics or A/B selection. Given Slack notification settings, When a test send is initiated, Then a Slack message is posted to the configured channel with campaign name, variant, role, and a visible Test label. Given email test sends, When received, Then the email subject contains a [TEST] prefix and the headers include X-TrackCrate-Test: true. Given a delivery failure to any seed recipient, When the attempt completes, Then the UI displays per-recipient status with error reasons and a retry option.
Automated QA for Tokens, Fallbacks, Links, and Role Mapping
Given a composed campaign, When QA is run, Then the system flags unknown tokens with location and severity Critical. Given tokens that lack recipient data and have no fallback, When QA is run, Then each is flagged as Warning and a one-click fix to add a fallback is offered. Given hyperlinks in subject, body, and CTAs, When QA is run, Then malformed or non-https URLs and links without resolvable destinations are flagged as Critical. Given audience segments mapped to role variants, When QA is run, Then any recipient without a corresponding role variant is flagged as Critical and the count of impacted recipients is displayed. Given the user clicks Fix All Suggested, When applicable, Then fallbacks are inserted, unknown tokens are replaced with suggested tokens, and role mappings are auto-completed using the default role (if configured), with a diff preview before apply.
Destination Access Validation for AutoKit and Stem Player
Given a CTA or body link pointing to an AutoKit page, When QA is run, Then the system verifies the page is published with the intended privacy setting and accessible via the link token for the audience role. Given links to the private stem player or expiring, watermarked downloads, When QA is run, Then the system performs a headless access check that returns HTTP 200 and verifies watermark/expiry policies are present. Given any destination fails access validation, When QA is run or before launch, Then the issue is marked Critical, details are shown, and sending is blocked until resolved. Given a fix is available (e.g., publish AutoKit, add access token, adjust permissions), When the user triggers the one-click fix, Then the system applies the change and re-validates within 3 seconds.
Launch Blocking and State Management on Critical Failures
Given one or more Critical QA findings exist, When the user attempts to launch, Then the Launch action is disabled and a tooltip lists the blocking items count. Given all Critical items are resolved, When the user attempts to launch, Then the Launch action is enabled and proceeds without regression. Given a new edit introduces a Critical issue after a prior pass, When autosave occurs, Then the launch state reverts to disabled and the preflight panel scrolls to the first blocking item.
Preflight Checklist Summary and One-Click Fixes
Given the user opens the Review & Launch step, When the preflight is displayed, Then a checklist shows sections for Personalization, Links, Roles, Destinations, and Tracking with pass/fail/warn counts. Given no Critical items and at most Warnings, When the user confirms, Then launch proceeds and a snapshot of the preflight report is stored in the audit log. Given the user clicks any checklist item, When navigated, Then the UI jumps to the relevant editor highlight and offers applicable one-click fixes.
Compliance, Safety, and Send Governance
"As a compliance manager, I want safeguards around consent, frequency, and data handling so that our A/B testing respects regulations and protects deliverability."
Description

Ensure compliant sending with per-recipient consent status, unsubscribe handling, bounce and complaint processing, and frequency caps. Rate-limit sends and throttle by domain to protect deliverability. Redact sensitive tokens from logs and previews where required, and enforce role-based access control for editing copy and viewing analytics. Provide audit trails of variant changes and winner selections. Support regional regulations (e.g., GDPR/CCPA) with data subject export/delete flows tied to contact records.

Acceptance Criteria
Per-Recipient Consent Enforcement
Given a recipient without marketing consent, When a marketing send is queued, Then the system blocks the send, logs suppression reason "no consent," increments a suppressed metric, and no email is transmitted to the ESP. Given a recipient with consent scope limited to a specific project, When a send from a different project is queued, Then the system blocks the send and logs "consent scope mismatch." Given a recipient with transactional-only consent, When a transactional message is queued, Then the message is sent; When a marketing message is queued, Then it is suppressed and logged. Given consent status is updated to granted, When a previously suppressed campaign is retried, Then eligibility is re-evaluated and the message proceeds subject to caps and rate limits.
Unsubscribe Link and Processing
Given a marketing message is generated, Then the body includes a functional single-click unsubscribe URL unique to the recipient and campaign. When the unsubscribe URL is clicked, Then the contact's marketing preference is set to unsubscribed within 60 seconds, a confirmation page is shown, and a suppression record is created with timestamp and source "link." Given a contact has unsubscribed, When any future marketing sends are queued, Then they are suppressed with reason "unsubscribed" and zero emails are sent. Given role-targeted messaging (Mixer, PR, A&R), When a contact unsubscribes from a specific role, Then sends for that role are suppressed while other roles remain eligible per consent.
Bounce and Complaint Handling
When the ESP returns a hard bounce, Then the contact is marked undeliverable, added to a permanent non-send suppression list, and future sends are blocked. When the ESP returns a soft bounce, Then the system retries up to 3 times over 24 hours with exponential backoff; after 3 failed attempts, mark the contact under temporary suppression for 7 days. When a complaint (FBL) is received, Then the contact is immediately and permanently suppressed for marketing sends and marked "complaint." Then all bounce and complaint events are appended to the contact's delivery log with timestamp, ESP response code, and campaign ID.
Frequency Caps by Recipient and Role
Given a frequency cap of 2 marketing emails per 7-day rolling window per role (Mixer, PR, A&R), When additional messages would exceed the cap, Then they are deferred until the window resets and logged as "frequency cap." Given multiple projects target the same recipient and role, When cumulative sends reach the cap, Then further sends from any project respect the cap. When a message is deferred due to caps, Then it is retried automatically at the next eligible window or expires after 14 days if still ineligible, with outcome logged.
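The rolling 7-day cap per (recipient, role) can be sketched with a per-key deque of send timestamps. A minimal in-memory illustration (a real system would persist this and account for deferral/expiry):

```python
from collections import deque

class FrequencyCap:
    """Rolling-window frequency cap: at most `limit` marketing sends per
    (recipient, role) within `window_secs`. In-memory sketch only."""

    def __init__(self, limit: int = 2, window_secs: int = 7 * 24 * 3600):
        self.limit = limit
        self.window = window_secs
        self.sent: dict[tuple, deque] = {}  # (recipient, role) -> send times

    def try_send(self, recipient: str, role: str, now: float) -> bool:
        q = self.sent.setdefault((recipient, role), deque())
        # Drop sends that have aged out of the rolling window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # defer and log reason "frequency cap"
        q.append(now)
        return True
```

Because the key is (recipient, role), sends from multiple projects targeting the same recipient and role share one counter, which is exactly what the cumulative-cap criterion requires.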
Global Rate Limiting and Domain Throttling
Given a platform-wide send rate of 600 emails per minute, When queued volume exceeds the rate, Then messages are enqueued FIFO and released at or below the configured rate. Given a recipient domain, When sends to that domain would exceed 100 emails per 5 minutes, Then throttle to 100/5min per domain and queue the excess without dropping messages. When throttling occurs, Then delivery ETAs and queue depth are exposed via operational metrics, and no burst exceeds configured ceilings. When throttling is active, Then A/B variant traffic allocation remains within ±2% of the planned split for each audience segment.
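A common way to enforce both ceilings is a token bucket: one global bucket for the 600/min platform rate, plus one bucket per recipient domain for the 100-per-5-minutes throttle. A sketch of the mechanism (the spec does not mandate token buckets, so this is one standard choice):

```python
class TokenBucket:
    """Token-bucket rate limiter. E.g., rate_per_sec=10, capacity=10
    approximates 600 emails/min; a per-domain bucket with
    rate_per_sec=100/300 enforces 100 emails per 5 minutes."""

    def __init__(self, rate_per_sec: float, capacity: float):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = capacity  # start full
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity
        # (the cap bounds burst size).
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # leave the message in the FIFO queue
```

Messages that fail `allow()` stay enqueued FIFO rather than being dropped; the queue depth and projected drain time are what the operational-metrics criterion would expose.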
Sensitive Token Redaction in Logs and Previews
Given a message contains sensitive tokens (e.g., expiring download links, private stem URLs, watermark keys), When logs and previews are generated, Then token values are replaced with "[REDACTED]" for users without Compliance Admin or Owner roles. When an authorized user views a preview, Then an unredacted view can be toggled, and the access is audited with user ID, timestamp, and message ID. When webhooks or data exports are produced, Then sensitive fields are either excluded or hashed (SHA-256 with per-tenant salt), and schemas indicate redaction.
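The export-hashing rule (SHA-256 with a per-tenant salt) can be sketched as below; the field names are hypothetical placeholders, not TrackCrate's actual schema:

```python
import hashlib

# Hypothetical names for fields the spec treats as sensitive.
SENSITIVE_FIELDS = {"download_url", "stem_url", "watermark_key"}

def redact_for_export(record: dict, tenant_salt: bytes) -> dict:
    """Hash sensitive fields with SHA-256 + per-tenant salt for webhooks
    and data exports; all other fields pass through unchanged."""
    out = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            digest = hashlib.sha256(tenant_salt + str(value).encode("utf-8"))
            out[key] = digest.hexdigest()
        else:
            out[key] = value
    return out
```

The per-tenant salt means identical values hash identically within a tenant (so exports remain joinable) but cannot be correlated across tenants or reversed via a shared rainbow table.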
RBAC, Audit Trails, and Regional Data Rights
Given role-based permissions, When a user without Editor privileges attempts to modify A/B variants or copy, Then the action is rejected with HTTP 403 and an audit record is created. When a variant is created, edited, or deleted, Then an immutable audit record stores actor, timestamp, reason, and before/after content hashes. When the system auto-selects a winning variant, Then a record captures audience, evaluation window, metrics (opens, CTR, approvals), and algorithm version, and is read-only. When a data subject export is requested for a contact, Then a machine-readable export including consent status, send history, variant exposures, unsubscribes, bounces, and complaints is produced within 24 hours. When a data subject deletion is requested, Then personally identifiable data is erased within 30 days, the contact is added to a hashed suppression list, and aggregate analytics remain non-reidentifiable.

Locale Smart

Auto‑translates nudge copy, adjusts date/time formats, and shortens for SMS length limits while preserving intent. Applies local sender ID and compliance rules so global teams understand the ask instantly and can act with confidence.

Requirements

Auto Locale & Timezone Detection
"As a collaborator receiving TrackCrate nudges, I want messages in my language and local time zone so that I immediately understand what to do and by when."
Description

Detect and persist each recipient’s preferred language, region, and time zone from profile settings, device/browser hints, phone country code, and past interactions. Provide a deterministic fallback chain (user > project > label > system default) and allow per-user overrides. Expose a service that downstream systems (nudges, AutoKit, link shortener) call to resolve locale contexts at send time. Store preferences securely, respect privacy/consent, and version changes for auditability. Ensure coverage for regional variants (e.g., pt-BR vs pt-PT) and script differences, and gracefully handle unknown locales with safe defaults.

Acceptance Criteria
Deterministic Fallback Chain Resolution at Send Time
- Given a request to resolve locale for {userId, projectId, labelId} with system defaults configured, When user-level language, region, and timeZone exist, Then the service returns those values with source=user for each field.
- Given any field is missing at user level, When resolving, Then the missing field(s) fall back to project, then label, then system default, field by field, and each field's source is returned.
- Given identical inputs across multiple resolutions, When resolving repeatedly, Then outputs are identical for all fields and sources.
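The field-by-field fallback (user > project > label > system default) is straightforward to express as a pure function, which also makes the determinism criterion trivial to satisfy. A minimal sketch, assuming each level is a partial dict of preference fields:

```python
def resolve_locale(levels: dict[str, dict]) -> dict:
    """Resolve language/region/timeZone field by field down the chain
    user > project > label > default, recording each field's source.
    `levels` maps level name -> partial preference dict (sketch shape)."""
    fields = ("language", "region", "timeZone")
    chain = ("user", "project", "label", "default")
    resolved = {}
    for field in fields:
        for level in chain:
            value = levels.get(level, {}).get(field)
            if value is not None:
                resolved[field] = {"value": value, "source": level}
                break  # first level that has the field wins
    return resolved
```

Note the fallback is per field, not per level: a user who has set only a language still inherits region and timeZone from lower levels, each with its own `source`.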
Per-User Override Persistence and Precedence
- Given a user submits an override for language, region, or timeZone via UI or API with consent=true, When the save request succeeds, Then the values persist, supersede project/label defaults on subsequent resolutions, and source=user_override is returned per field.
- Given a user removes an override, When the change is saved, Then the field reverts to the fallback chain within 60 seconds and source reflects the new highest available level.
Multi-Source Detection with Consent-Aware Persistence
- Given a first-time recipient with no stored preferences and consent=true, When signals are available (profile settings, Accept-Language, device timeZone, phone E.164 country code), Then the system derives language/region/timeZone using precedence: profile > Accept-Language > phone country (region only) and device timeZone (timeZone only), normalizes to BCP 47 + IANA TZ, and persists values with signal_sources recorded.
- Given consent=false, When signals are detected, Then values are used for the current resolution only, are not persisted, and audit notes indicate "not persisted due to no consent."
- Given conflicting signals, When deriving preferences, Then the highest-precedence signal is chosen and lower-precedence signals are ignored for persistence.
Locale Resolution Service API Contract and Performance
- Given a POST /v1/locale/resolve with userId and projectId, When authorized with scope locale:resolve, Then respond 200 with fields: languageTag (BCP 47), region (ISO 3166-1 alpha-2), script (optional), timeZone (IANA), per-field source, effectiveAt, version.
- Given sustained load of 100 RPS for 60 seconds, When handling requests, Then p95 latency ≤ 150 ms and error rate < 0.5%.
- Given missing or invalid IDs, When requested, Then respond 400 with error code INVALID_REFERENCE and no PII in the message.
- Given the caller lacks permission, When requested, Then respond 403.
Auditability and Secure Storage Controls
- Given any create, update, or delete of stored locale preferences, When the operation commits, Then an immutable audit record is written with before/after values, actorId or source, timestamp (UTC), and requestId.
- Given an authorized admin with scope locale:audit:read, When requesting a user's audit history (≤100 versions), Then records are returned in chronological order within 200 ms.
- Given data at rest, When inspected by security tooling, Then preferences are encrypted with managed keys and access is restricted to least-privilege roles (no direct store access without the locale-reader role).
Regional and Script Variant Resolution
- Given Accept-Language contains pt-BR, When resolving, Then languageTag resolves to pt-BR, not generic pt.
- Given Accept-Language contains pt-PT, When resolving, Then languageTag resolves to pt-PT.
- Given language zh and region TW, When resolving, Then languageTag resolves to zh-Hant-TW.
- Given language zh and region CN or script=Hans, When resolving, Then languageTag resolves to zh-Hans-CN (if region CN is present) or zh-Hans (if region is absent).
Unknown or Invalid Locale Graceful Defaults
- Given an invalid language tag or unrecognized timeZone, When resolving, Then the service falls back to system defaults for the affected fields, returns source=default, and increments a warning metric locale.invalid_input by 1.
- Given no signals at any level and no project/label defaults, When resolving, Then languageTag=en-US (or the configured system default) and timeZone=UTC (or the configured system default) are returned without error.
Context-Aware Translation Pipeline
"As a release manager, I want translations that preserve intent and variables so that critical asks aren’t misinterpreted across locales."
Description

Implement a translation service that preserves intent using machine translation (MT) backed by glossaries and translation memory, with support for ICU MessageFormat placeholders, pluralization, gender, and right-to-left scripts. Attach domain-specific glossaries (music terms, rights metadata) and protect dynamic variables (artist, track, deadlines, shortlinks). Provide back-translation and confidence scoring, with optional human-in-the-loop review for high-visibility strings. Version and cache strings per locale, auto-invalidating on source updates. Integrate with TrackCrate’s nudge templates and AutoKit snippets to produce consistent localized copy across channels.

Acceptance Criteria
Preserve ICU Placeholders and Variables During Translation
- Given a source string containing ICU MessageFormat placeholders (e.g., {artist}, {deadline, date, short}) and dynamic variables/shortlinks, When translated to any supported locale, Then all placeholders and variable tokens are preserved verbatim and remain syntactically valid per ICU.
- And the count and names of placeholders in the target exactly match the source.
- And a placeholder linter reports zero missing, extra, or renamed variables.
- And shortlinks remain unmodified and are isolated for bidirectional safety where applicable.
- And if validation fails, the translation is rejected and an actionable error is returned.
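The placeholder linter described above can be sketched with a regex over ICU argument names. This is a naive check that does not parse nested plural/select bodies; the function name and report shape are illustrative:

```python
import re
from collections import Counter

# Matches the argument name right after "{": {artist}, {deadline, date, short}
ICU_ARG = re.compile(r"\{\s*([A-Za-z_]\w*)")

def lint_placeholders(source: str, target: str) -> dict:
    """Report ICU placeholder names missing from or added to the target."""
    src = Counter(ICU_ARG.findall(source))
    tgt = Counter(ICU_ARG.findall(target))
    return {
        "missing": sorted((src - tgt).elements()),
        "extra": sorted((tgt - src).elements()),
        "ok": src == tgt,
    }
```

A renamed variable shows up as one `missing` plus one `extra` entry, which is exactly the signal needed to reject the translation with an actionable error.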
Apply Domain Glossary and Translation Memory
- Given domain-specific glossary entries for music and rights metadata and an existing translation memory (TM), When a source string is translated, Then all glossary terms marked must-translate are applied exactly as specified in the target unless flagged non-translatable.
- And non-translatable terms are left unchanged.
- And TM matches at or above a configurable threshold (e.g., 80% fuzzy, 100% exact) are reused; lower matches are machine-translated.
- And the output includes a match report indicating glossary applications and TM leverage percentages.
- And new approved translations are persisted to TM with context metadata (template, channel, locale).
Pluralization and Gender Handling via ICU MessageFormat
- Given a source string using ICU plural and gender selectors, When localized to any target locale, Then the generated target includes all required categories for that locale and compiles without ICU syntax errors.
- And runtime evaluation renders the correct variant for sample inputs (e.g., counts 0, 1, 2, 5 and genders female/male/other) per CLDR rules.
- And unit tests verify category coverage and sample outputs for each target locale.
Right-to-Left Script Support and Rendering
- Given Arabic and Hebrew target locales and content containing Latin shortlinks, numbers, and placeholders, When translated, Then the output applies correct bidirectional isolation so URLs, numbers, and placeholders render LTR within RTL text.
- And punctuation and spacing render correctly around isolated tokens in email, web, and SMS previews.
- And visual snapshots match approved baselines for each channel.
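One way to satisfy the isolation requirement is to wrap known-LTR tokens in Unicode first-strong isolates (UAX #9) before substitution. A sketch; in practice isolation would happen at variable-substitution time rather than by string replacement:

```python
FSI, PDI = "\u2068", "\u2069"  # FIRST STRONG ISOLATE / POP DIRECTIONAL ISOLATE

def isolate_ltr(text: str, tokens: list[str]) -> str:
    """Wrap LTR tokens (URLs, numbers, placeholders) in bidi isolates so
    they keep left-to-right ordering inside RTL copy."""
    for tok in tokens:
        text = text.replace(tok, f"{FSI}{tok}{PDI}")
    return text
```

Isolates (rather than the older LRE/PDF embeddings) also keep surrounding punctuation and spacing stable, which is what the snapshot checks above are verifying.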
Back-Translation with Confidence Threshold and Human Review
- Given a high-visibility string with a configured confidence threshold (e.g., ≥0.85), When the MT produces a translation, Then a back-translation and confidence score are generated and attached to the record.
- And if the score is below the threshold, the item is routed to a human review queue; if at/above threshold, it is auto-approved.
- And reviewer actions (approve/request changes) update the TM and audit log with user, timestamp, and diff.
- And the final approved translation retains the original placeholders and passes the linter.
Locale-Specific Formatting and SMS Length Constraint
- Given a nudge template with dates, times, numbers, and a CTA destined for Email and SMS channel profiles, When localized for en-GB, fr-FR, ar-SA, and ja-JP, Then dates/times are formatted per locale and recipient timezone using ICU skeletons.
- And numerals follow locale conventions (e.g., Arabic-Indic in ar-SA if configured).
- And for SMS, the localized message fits within 1 segment (≤160 GSM-7 or ≤70 UCS-2); if over, a shortened variant is returned that preserves meaning, variables, and shortlinks.
- And truncation never breaks ICU tokens or grapheme clusters, and the final text validates against the channel’s encoding.
Versioning, Caching, and Auto-Invalidation on Source Update
- Given cached per-locale translations with version IDs and ETags, When the source string or glossary is updated, Then all affected locale caches are invalidated immediately and a new version is created with an incremented ID.
- And subsequent requests return the updated translation within 1 minute.
- And previous versions remain retrievable for audit with timestamp, author (if human-reviewed), and content hash.
- And the TM context is updated so future matches prefer the latest approved version.
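The invalidation rule falls out naturally if the ETag is derived from a content hash of the source string plus the glossary version. A minimal sketch (field names and the 16-hex-digit truncation are illustrative assumptions):

```python
import hashlib

def translation_etag(source_text: str, glossary_version: int, locale: str) -> str:
    """Any change to the source string or glossary yields a new ETag,
    so every affected locale entry is invalidated at once."""
    payload = f"{locale}\x00{glossary_version}\x00{source_text}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:16]
```

Because the key is deterministic, stale cache entries simply stop being addressed after an update; no explicit purge fan-out is needed, only eviction of unreferenced entries.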
SMS Intent-Preserving Shorten
"As a product marketer, I want SMS nudges auto-shortened without losing meaning so that they fit segment limits and still drive action."
Description

Automatically adapt nudge copy to per-country SMS constraints by detecting GSM-7 vs UCS-2 encoding, calculating segment counts, and shortening text while preserving meaning. Apply rule-based elision (e.g., remove non-essential adjectives), per-locale abbreviation maps, and move detail to a localized shortlink. Guarantee required compliance keywords (e.g., STOP/HELP equivalents) remain intact. Provide previews showing character count, segments, and costs; fall back to multipart delivery or a channel switch, with user confirmation when needed. Integrate with TrackCrate shortlinks to ensure encoded URLs are locale-safe and counted accurately.

Acceptance Criteria
Encoding Detection and Segment Calculation
Given a message composed exclusively of GSM-7 characters, When the preview is generated, Then the encoding is GSM-7 and the single-segment limit is 160 characters.
Given a GSM-7 message of 161 characters, When the preview is generated, Then segments = 2 and the per-segment limit = 153 characters.
Given a message containing any UCS-2 character, When the preview is generated, Then the encoding is UCS-2 and the single-segment limit is 70 characters.
Given a UCS-2 message of 71 characters, When the preview is generated, Then segments = 2 and the per-segment limit = 67 characters.
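The four cases above follow from GSM 03.38 charset detection plus ceiling division over the multipart payload sizes. A sketch; the basic set shown is the standard GSM-7 default alphabet, and extension-table characters cost two septets:

```python
import math

GSM7_BASIC = set(
    "@£$¥èéùìòÇ\nØø\rÅåΔ_ΦΓΛΩΠΨΣΘΞÆæßÉ !\"#¤%&'()*+,-./0123456789:;<=>?"
    "¡ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÑÜ§¿abcdefghijklmnopqrstuvwxyzäöñüà"
)
GSM7_EXT = set("^{}\\[~]|€")  # escaped characters: two septets each

def sms_segments(text: str) -> tuple[str, int]:
    """Return (encoding, segment_count) for an SMS body."""
    if all(c in GSM7_BASIC or c in GSM7_EXT for c in text):
        encoding = "GSM-7"
        length = sum(2 if c in GSM7_EXT else 1 for c in text)
        single, multi = 160, 153
    else:
        encoding = "UCS-2"
        length, single, multi = len(text), 70, 67
    return encoding, 1 if length <= single else math.ceil(length / multi)
```

Note that a single non-GSM character (a curly apostrophe, an emoji) flips the whole message to UCS-2 and more than halves the per-segment budget, which is why the preview must recompute on every edit.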
Intent-Preserving Shorten to Segment Limit
Given a source message with a single primary CTA, asset name, and one URL, And the target segment limit is 1, When shortening is applied, Then the output length is less than or equal to the applicable single-segment limit for its encoding.
And the primary CTA phrase remains unchanged.
And the asset name remains unchanged.
And exactly one localized shortlink is present and the original URL text is elided.
And removed tokens are limited to allowed-elision categories for the locale.
Locale-Specific Abbreviation and Elision Maps
Given locale L is selected, When shortening is applied, Then only abbreviations defined in the locale-L abbreviation map are used.
And no abbreviation marked disallowed for locale L appears.
And punctuation and spacing follow locale-L rules.
And the output locale tag equals the input locale.
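Rule-based shortening with a per-locale abbreviation map might look like the following sketch. The map entries here are hypothetical examples, not a shipped map; real maps are curated per market:

```python
ABBREVIATIONS = {
    # Hypothetical per-locale maps for illustration only
    "fr-FR": {"rendez-vous": "RDV", "s'il vous plaît": "SVP"},
    "en-US": {"as soon as possible": "ASAP", "approximately": "approx."},
}

def shorten(text: str, locale: str, limit: int) -> str:
    """Apply only the selected locale's map, longest source phrases first,
    stopping as soon as the text fits within the limit."""
    entries = sorted(ABBREVIATIONS.get(locale, {}).items(),
                     key=lambda kv: -len(kv[0]))
    for phrase, abbr in entries:
        if len(text) <= limit:
            break
        text = text.replace(phrase, abbr)
    return text
```

Stopping as soon as the limit is met keeps the output as close to the source copy as possible, satisfying the "only abbreviations from the locale map" and minimal-change expectations above.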
Compliance Keywords Preservation and Placement
Given required compliance keywords for the destination country are present (e.g., STOP/HELP equivalents), When shortening is applied, Then those keywords remain present, unmodified, and not abbreviated.
And their placement matches the country rule (e.g., end-of-message if required).
And if preserving compliance keywords would cause the message to exceed the target limit, the system must not remove or alter them and must trigger the fail-safe path.
TrackCrate Shortlink Integration and Accurate Counting
Given details are moved to a shortlink, When the shortlink is generated, Then it is a TrackCrate shortlink that resolves successfully and includes locale parameters.
And any non-ASCII characters in the URL are safely encoded.
And character counting for segments uses the exact transmitted URL string.
And the number of URLs in the message matches the intended count, and each is unique per locale for tracking.
Real-Time Preview of Characters, Segments, and Cost
Given a message is edited, When the user pauses typing, Then the preview updates within 300 ms with the current character count, encoding, segment count, and estimated cost using configured per-country rates.
And both the original and shortened versions are visible.
And warnings are shown when the message exceeds the target segment limit.
Fail-Safe: Multipart or Channel Switch with Confirmation
Given the message cannot be shortened to the target segment limit without violating preservation rules, When the fail-safe is triggered, Then the user is presented with options to send as multipart SMS or switch channel, with updated cost/constraints.
And sending proceeds only after explicit user confirmation.
And an audit log entry records the decision, timestamp, user, selected option, and final message variant.
And if the user declines, no message is sent and the status remains unchanged.
Localized Time Formatting & Send Windows
"As a global team member, I want deadlines displayed and scheduled in my local format and hours so that I can act at the right time."
Description

Render dates and times in locale-appropriate formats (12/24h, day-month order, week start) and convert deadlines to recipient time zones with DST awareness. Show both source and local time when ambiguity could cause errors, and support relative expressions (e.g., “in 3 hours”) per locale rules. Enforce per-locale quiet hours and schedule sends within acceptable windows; stagger global sends to hit business hours locally. Provide APIs and UI previews to validate time rendering within nudge templates and AutoKit pages.

Acceptance Criteria
Locale-Specific Date/Time Rendering in Nudge Templates
Given a template token {{deadline_at}} and recipient locale "en-US" When rendering 2025-03-04T17:05:00-05:00 Then the output equals "Mar 4, 2025, 5:05 PM" and the calendar week starts on Sunday
Given locale "en-GB" When rendering the same timestamp Then the output equals "4 Mar 2025, 17:05" and the calendar week starts on Monday
Given locale "de-DE" When rendering the same timestamp Then the output equals "04.03.2025, 17:05"
Given locale "ja-JP" When rendering the same timestamp Then the output equals "2025/03/04 17:05"
Rule: Hour clock style follows the locale (12h vs 24h) without requiring template flags
Deadline Conversion to Recipient Time Zone with DST Awareness
Given a deadline specified as 2025-03-09T02:30:00 America/New_York (spring-forward gap) When resolving to an instant Then the system rolls forward to 2025-03-09T03:00:00 America/New_York and sets was_adjusted="gap"
Given a deadline specified as 2025-11-02T01:30:00 America/New_York (fall-back overlap) When resolving to an instant without an offset qualifier Then the system selects the earlier occurrence (EDT, UTC-4) and sets was_adjusted="ambiguous-earlier"
Given a recipient in Europe/London and a sender-stored instant 2025-01-15T17:00:00Z When converting for the recipient Then the displayed local time equals 2025-01-15 17:00 GMT and carries the correct zone abbreviation and offset
Rule: All conversions must use IANA time zones; UTC offsets are computed per date with DST; unit tests cover at least 20 locales and 12 time zones including DST boundaries
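Both the gap and overlap cases can be handled with PEP 495 fold semantics from Python's zoneinfo. A minimal sketch of the policy above (clamp forward out of a gap, prefer the earlier side of a fold); the minute-stepping clamp is a simple illustration, not an optimized implementation:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def classify(naive: datetime, tz: ZoneInfo) -> str:
    """'gap' (nonexistent), 'fold' (ambiguous), or 'ok' for a wall time."""
    d0 = naive.replace(tzinfo=tz, fold=0)
    d1 = naive.replace(tzinfo=tz, fold=1)
    if d0.utcoffset() == d1.utcoffset():
        return "ok"
    # A wall time inside a spring-forward gap does not survive a UTC round-trip.
    round_trip = d0.astimezone(timezone.utc).astimezone(tz)
    return "gap" if round_trip.replace(tzinfo=None) != naive else "fold"

def resolve_deadline(naive: datetime, tz_name: str):
    """Resolve a naive deadline, returning (aware_dt, was_adjusted)."""
    tz = ZoneInfo(tz_name)
    kind = classify(naive, tz)
    if kind == "gap":
        while classify(naive, tz) == "gap":   # clamp forward to the gap's end
            naive += timedelta(minutes=1)
        return naive.replace(tzinfo=tz), "gap"
    if kind == "fold":
        # fold=0 selects the earlier occurrence (e.g., EDT before fall-back)
        return naive.replace(tzinfo=tz, fold=0), "ambiguous-earlier"
    return naive.replace(tzinfo=tz), None
```

Running the two scenarios above: 2025-03-09 02:30 resolves to 03:00 with `was_adjusted="gap"`, and 2025-11-02 01:30 resolves to the UTC-4 occurrence with `was_adjusted="ambiguous-earlier"`.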
Dual Time Display When Ambiguity Risk Exists
Rule: If the sender/source time zone differs from the recipient time zone OR the converted time changes the recipient's calendar date, dual-time display is enabled by default
Given source 2025-02-01 17:00 EST (UTC-5) and recipient time zone America/Los_Angeles When rendering Then the output contains "Feb 1, 5:00 PM EST (UTC-5) • Feb 1, 2:00 PM PST (UTC-8)"
Given dual-time is disabled via template flag dual_time=false When rendering the same case Then only the recipient-local time is shown
Rule: Zone labels use locale-appropriate abbreviations and numeric UTC offsets; formatting respects each locale's date order
Localized Relative Time Expressions
Given now=2025-04-10T10:00:00 Europe/Paris and target=2025-04-10T13:00:00 When locale="fr-FR" and format=relative Then output equals "dans 3 heures"
Given now=2025-04-10T10:00:00 America/Chicago and target=2025-04-10T11:00:00 with locale="en-US" When format=relative Then output equals "in 1 hour" (singular)
Given locale="de-DE" and delta=25 minutes When format=relative_short Then output equals "in 25 Min."
Given locale="ja-JP" and delta=3 hours When format=relative Then output equals "3時間後"
Rule: For deltas >= 7 days, the system falls back to absolute localized date/time; rounding uses the nearest unit (>=30s rounds up to 1m)
Quiet Hours Enforcement and Send Scheduling
Given per-locale quiet hours of 21:00–08:00 local and a scheduled send at 22:15 local When queuing delivery Then the message is rescheduled to 08:00 on the next allowable day and an audit entry records reason="quiet_hours", original_time, and new_time
Given bypass_quiet_hours=true on the campaign When queuing during quiet hours Then the send proceeds and the audit entry records reason="override"
Rule: No deliveries occur within quiet hours ±1 minute; enforcement is evaluated in the recipient's local time zone; unit tests include edge cases at the boundaries (20:59, 21:00, 07:59, 08:00)
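The rescheduling rule, including the 20:59/21:00/07:59/08:00 boundary cases, is a small pure function over local wall time. A sketch with the quiet window hard-coded for illustration (a real implementation would take the window from per-locale configuration):

```python
from datetime import datetime, time, timedelta

QUIET_START, QUIET_END = time(21, 0), time(8, 0)  # 21:00–08:00 local

def next_allowed_send(local_dt: datetime) -> datetime:
    """Return local_dt unchanged if outside quiet hours, else the next 08:00."""
    t = local_dt.time()
    if QUIET_END <= t < QUIET_START:
        return local_dt                      # within the allowed window
    if t >= QUIET_START:                     # 21:00–23:59 → 08:00 tomorrow
        day = local_dt.date() + timedelta(days=1)
    else:                                    # 00:00–07:59 → 08:00 today
        day = local_dt.date()
    return datetime.combine(day, QUIET_END)
```

Because the window wraps midnight, the allowed range is expressed as `QUIET_END <= t < QUIET_START`; 08:00 itself is sendable while 21:00 is not, matching the boundary cases in the rule.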
Global Send Staggering to Local Business Hours
Given 10,000 recipients across at least 6 time zones and a business window of 09:00–17:00 local Mon–Fri When a campaign is scheduled for the next business day Then ≥95% of deliveries occur within each recipient's 09:00–17:00 window, 0% occur during defined quiet hours, and the total send time does not exceed 24 hours
Rule: Staggering distributes sends within each local window using jitter to avoid single-minute spikes (>5% of the segment in any minute)
Rule: A scheduling report is available pre-send estimating local-window coverage by time zone with an error margin of ≤±5%
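The per-minute spike cap can be enforced directly while assigning jittered slots. A sketch, with the window expressed as minutes since the local 09:00 open (the spill-to-next-minute policy is one simple choice among several):

```python
import random

def stagger_slots(n_recipients: int, window_minutes: int = 480,
                  cap_fraction: float = 0.05) -> list[int]:
    """Assign each recipient a random minute in the local send window,
    spilling to the next free minute once a minute hits the spike cap."""
    cap = max(1, int(n_recipients * cap_fraction))
    counts = [0] * window_minutes
    slots = []
    for _ in range(n_recipients):
        m = random.randrange(window_minutes)
        while counts[m] >= cap:              # minute full: walk forward
            m = (m + 1) % window_minutes
        counts[m] += 1
        slots.append(m)
    return slots
```

The cap construction guarantees no minute ever exceeds 5% of the segment, provided total capacity (cap × window) covers the recipient count, which a scheduler would validate before accepting the campaign.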
API and UI Previews for Time Rendering
Given the Preview API GET /v1/preview/time?locale=xx-YY&tz=Area/City&datetime=ISO&format=relative|absolute&dual=true|false When called with locale=de-DE, tz=Europe/Berlin, datetime=2025-03-04T17:05:00+01:00, format=absolute, dual=true Then response status=200 and the JSON includes fields: absolute, relative, dual.absolute, dual.local, zone.abbrev, zone.offset, was_adjusted (nullable)
Rule: Preview API p95 latency ≤300 ms under 50 RPS, and calls are idempotent
Given the Template Editor preview panel When the user switches locale/time zone controls Then the rendered tokens update within 200 ms and quiet-hours conflicts are flagged with an inline warning
Given an AutoKit page preview When viewing from a browser with locale=ja-JP Then date/time elements render in Japanese formats and respect the 24h clock
Country-Specific Sender ID & Compliance Engine
"As a compliance lead, I want sender IDs and required legal language applied per country so that every message is deliverable and compliant."
Description

Select appropriate sender IDs per destination (A2P 10DLC US, alphanumeric where allowed, registered templates for India DLT, etc.) and enforce per-country legal/compliance rules. Localize opt-in/opt-out keywords and mandatory disclosures; honor do-not-disturb hours and consent status. Validate content against country restrictions before send and block or route to compliant channels as needed. Maintain auditable logs of consent, templates, and rule evaluations. Expose configuration for label-level policies while keeping system defaults updated with regulatory changes.

Acceptance Criteria
US 10DLC Sender ID and Campaign Compliance
Given a queued outbound SMS to a US MSISDN and message category=marketing, When the system selects a sender ID, Then it must use a registered A2P 10DLC number linked to an approved campaign for the label and must not use an alphanumeric sender ID.
Given the campaign registry status is not Approved or the use-case classification does not match the message category, When send is attempted, Then the message is blocked pre-send with error code US-10DLC-NONCOMPLIANT and no carrier submission occurs.
Given carrier-mandated disclosures for US marketing SMS, When the message is composed, Then the text includes an opt-out instruction and a help keyword without exceeding 160 characters for the first segment; if exceeded, the system applies locale shortening before send.
Given logging is enabled, When the rule evaluation completes, Then an audit record is written with sender number, campaign ID, rule outcomes, and payload hash.
India DLT Template and Header Enforcement
Given recipient country=IN and message category=promotional, When send is prepared, Then a registered DLT template ID and header (sender ID) mapped to the label must be present; otherwise abort with error IN-DLT-MISSING.
Given the rendered content, When compared to the selected DLT template, Then it must match the approved text aside from declared variable placeholders; otherwise abort with error IN-DLT-MISMATCH.
Given India DND hours of 21:00–09:00 IST for promotional traffic, When the scheduled send time falls within the window, Then the message is deferred until the next permitted window at 09:00 IST and the defer action is logged.
Given compliance logging, When the message is sent or blocked, Then the audit record captures DLT PE ID, header, template ID, variable values, and the DND evaluation result.
Alphanumeric Sender ID Where Allowed
Given the recipient country is in the alphanumeric-allowed list and message category=transactional, When selecting a sender ID, Then the system uses the configured alphanumeric sender (<=11 chars, GSM 03.38 charset) unless a reply is required.
Given a two-way reply is required or the country disallows alphanumeric senders, When selecting a sender ID, Then a compliant numeric long code or short code is used and the alphanumeric sender is not used.
Given a configured disallowed-term list per country/MNO, When the alphanumeric sender equals a disallowed value, Then the send is blocked with error ALPHA-DISALLOWED and a numeric fallback suggestion is returned.
Given rule evaluation completes, Then an audit record includes country, chosen sender ID type, normalization result, and decision rationale.
Localized Opt-In/Opt-Out Keywords and Disclosures
Given a recipient's locale and country, When composing an outbound SMS, Then the message includes the country-mandated disclosure text and opt-out instruction using the localized keyword set configured for that country.
Given an inbound MO message matches any localized opt-in or opt-out keyword for the recipient’s country, When received, Then the recipient’s consent status is updated within 5 seconds and the change is logged with source=MO and the raw text.
Given the outbound message length exceeds the allowed limit after adding disclosures, When detected, Then the system shortens non-essential text or appends a branded shortlink to a disclosure page to keep the message within carrier limits, without removing the opt-out instruction.
Given a compliance check runs pre-send, When the message lacks a required disclosure or opt-out instruction, Then the message is blocked with error DISCLOSURE-MISSING and no carrier submission occurs.
Do-Not-Disturb Quiet Hours Enforcement by Local Time
Given a scheduled outbound marketing SMS and a recipient with a known timezone, When the local time at the destination falls within the country’s DND window, Then the message is deferred to the next allowed window and the schedule adjustment is logged.
Given the message category is transactional and the country allows transactional traffic during DND, When the local time is within DND, Then the message is allowed to send and the exemption is logged.
Given the recipient timezone is unknown, When enforcing DND, Then the system uses the country’s default timezone and flags the event in the audit record with tz_source=default.
Given label policy prohibits sends on local public holidays, When the date matches a configured holiday in the recipient’s country, Then the message is deferred unless policy whitelist overrides apply.
Content Restriction Validation and Channel Failover
Given country-specific content restrictions are configured, When the message content or metadata matches a restricted category for the recipient’s country, Then the SMS send is blocked pre-submit with error CONTENT-RESTRICTED.
Given SMS is blocked for compliance reasons and an alternative compliant channel is configured for the label, When failover is enabled, Then the system routes the message to the next eligible channel and records the route decision.
Given URL policies by country disallow public shorteners, When links are detected, Then the system enforces the use of an approved TrackCrate-branded short domain; otherwise it blocks with error URL-NOT-COMPLIANT.
Given a pre-send check runs, When all rules pass, Then the message is submitted to the carrier with a compliance checksum and the rule version used, both stored in the audit log.
Audit Trail, Label Policies, and Regulatory Updates
Given label-level policy configuration, When an admin creates or updates a policy, Then changes are versioned, require dual control, and cannot weaken mandatory system defaults; attempts to do so are rejected with error POLICY-WEAKENING.
Given a send event, When compliance checks execute, Then an immutable audit log entry is written containing message ID, recipient country, evaluated rules and versions, sender ID chosen, consent snapshot, decision, and timestamps; logs are retained at least 24 months and are exportable in CSV and JSON.
Given a regulatory ruleset update is published by the system, When activated, Then the new version is applied to all subsequent evaluations, affected queued messages are re-evaluated, and label admins receive a notification with a change summary.
Given an auditor requests proof, When a search by message ID or recipient is performed, Then the system retrieves the full compliance trail within 3 seconds and produces a signed export including a hash of the original content.
Localization Preflight & Monitoring
"As an ops manager, I want preflight checks and delivery monitoring by locale so that issues are caught before send and tracked after."
Description

Provide a pre-send validator that checks for untranslated strings, placeholder mismatches, SMS segment overages, forbidden terms, missing consent, invalid sender ID, and quiet-hour violations. Offer actionable fixes and simulations by locale and channel. Post-send, monitor delivery, latency, opt-outs, and complaints by country/locale; trigger alerts on anomalies and auto-rollbacks to safer variants. Expose dashboards and exportable reports to track localization quality and compliance over time.

Acceptance Criteria
Preflight: Untranslated Strings & Placeholder Integrity
Given a campaign message with locales en-US (source), fr-CA, and de-DE containing placeholders {first_name} and {shortlink} When Preflight validation runs across all selected locales and channels Then the result status is fail if any locale has untranslated keys or placeholder mismatches
And the report lists per-locale missing_keys (names and counts) and placeholder_diffs (missing/extra placeholders) with template paths
And send is blocked until all error-severity findings are resolved
And selecting Auto-translate missing fills untranslated keys and preserves placeholders verbatim
And re-running Preflight after auto-fix returns status pass for the same content
Preflight: SMS Segment Limit & Auto‑Shorten
Given an SMS template for es-MX with GSM encoding length 168 and a per-locale limit of 1 segment When Preflight computes encoding and segment counts Then the message is flagged with segment_overage and shows segments=2 for es-MX along with the estimated cost impact
And a suggested "Shorten to fit" variant is provided that preserves placeholders and reduces length to the single-segment limit (≤160 chars GSM-7 or ≤70 chars UCS-2)
And accepting the suggestion updates the draft content
And re-running Preflight shows segments≤1 and removes segment_overage for es-MX
Preflight: Forbidden Terms, Consent & Opt-in Proof
Given a promotional SMS targeting audiences in en-US and de-DE When Preflight validates locale compliance Then any message matching a locale-specific forbidden term or missing consent evidence is marked fail with rule_id, offending_term (if applicable), and locale
And sending is blocked until consent evidence (e.g., a list_id with a double-opt-in timestamp) is attached and forbidden items are removed or replaced
And selecting "Insert compliant footer" adds locale-appropriate opt-out language and identifiers
And re-running Preflight returns pass when all compliance checks succeed
Preflight: Sender ID Format & Quiet Hours
Given an SMS campaign to the US and UK with sender IDs "TrackCrate" and "+447700900123" scheduled for 22:30 local time When Preflight validates sender ID rules and quiet hours by locale Then the US is flagged with error alpha_sender_id_not_allowed and the UK passes sender ID validation
And locales where 22:30 falls within quiet hours are flagged, with the next permissible window suggested
And schedule/save is blocked until a valid sender ID is set per locale and the send time is moved to an allowed window or a documented override is provided
Simulation: Locale/Channel Preview & Cost Estimation
Given a campaign with Email and SMS variants for en-US, fr-CA, and ja-JP and a selected test profile set When the user runs Simulation Then previews render per locale/channel with resolved placeholders, localized date/time formats, and region-appropriate watermarked shortlinks
And SMS previews display encoding, segment counts, and estimated total cost per 1,000 recipients by locale
And a downloadable CSV report is generated containing per-locale metrics, segment counts, encoding, and Preflight findings
Monitoring: Delivery, Latency, Opt-outs & Complaint Anomalies with Auto‑Rollback
Given live sends over the last 15 minutes with 7-day per-locale baselines When any locale-channel pair meets an anomaly condition (delivery_rate drop ≥20% vs baseline, p95_latency ≥3x baseline, opt_out_rate ≥1.0%, or complaint_rate ≥0.2%) Then an alert is sent to configured channels (Slack and email) within 2 minutes including metric, locale, channel, baseline, current value, and incident_id
And traffic for that locale-channel pair is automatically rolled back within 5 minutes to the most recent safe variant until metrics recover for 30 minutes
And the incident log records timestamps, actions taken, and affected campaign IDs
Reporting: Localization Quality & Compliance Dashboard and Exports
Given a user opens the Localization dashboard and applies filters for date range, country/locale, and channel When the view loads Then charts display delivery_rate, p95_latency, opt_out_rate, complaint_rate, translation_coverage, placeholder_error_rate, and compliance_failures with per-locale breakdowns
And the user can export the current view as CSV and schedule weekly emailed CSV exports to specified recipients
And each export completes within 60 seconds for up to 12 months of data and includes metric definitions
Localization Admin Console
"As a localization admin, I want a console to manage glossaries and preview messages by locale so that quality stays high without engineering."
Description

Build an admin UI to manage locales, glossaries, abbreviation maps, and translation memory; configure tone and formality per language; and preview nudge templates across SMS, email, and AutoKit with locale toggles. Simulate GSM vs UCS-2, segment counts, quiet hours, and sender ID selection. Support back-translation spot checks, workflow approvals for high-impact copy, and role-based access with audit trails. Provide import/export for glossary/TM and APIs for CI integration.

Acceptance Criteria
Locale, Glossary, Abbreviation Map, and TM Management
Given I have the Admin role in the Localization Admin Console When I create a locale with a unique ISO 639-1 code and optional region Then the locale is saved, listed, and can be toggled Active/Inactive
Given I attempt to create a locale with a duplicate ISO code When I submit the form Then a validation error prevents the save
Given I add a glossary term for a locale (term, definition, optional notes) When the term duplicates an active entry for the same locale Then the system blocks the save or requires a version increment with a reason
Given I upload a CSV glossary of N rows for a specific locale When I run a dry-run import Then the report shows counts of creates, updates, conflicts, and errors without changing data
Given I confirm the import When conflicts exist Then the chosen strategy (skip/overwrite/version) is applied and a downloadable import report is generated
Given I create an abbreviation map entry (source phrase -> abbreviation) for a locale When the abbreviation is not shorter than the source or creates a cycle Then validation blocks the save with a clear message
Given I add a translation memory (TM) segment pair (source locale, target locale, source text, target text) When a duplicate segment pair exists (same source text, source locale, target locale) Then the system deduplicates or versions according to the chosen policy
Tone and Formality Configuration per Language
Given a language or locale is selected When I set tone (e.g., friendly, promotional, neutral) and formality (informal, neutral, formal) Then the settings save with versioning and display the current effective values
Given a locale has no explicit tone/formality When I preview or generate copy Then language-level defaults apply; if absent, global defaults apply and this is indicated in the UI
Given I enter an out-of-range or unsupported tone/formality value When I attempt to save Then validation errors are shown and no changes are persisted
Given I change tone/formality for a locale When I refresh the preview Then generated copy updates to reflect the new parameters
Cross-Channel Nudge Template Preview with Locale Toggle
Given a nudge template with variables exists When I toggle the locale in the preview Then SMS, email, and AutoKit previews update simultaneously with localized text and assets
Given I select a right-to-left locale (e.g., ar-SA) When the preview renders Then email and AutoKit use RTL layout and SMS displays RTL text correctly
Given locale-specific assets and variables are defined When I switch locales Then the preview shows the correct localized images, date/time formats, and number formats
Given unresolved placeholders exist When I preview Then placeholders are visibly marked without breaking the layout, and a list of missing variables is shown
SMS Constraints Simulation (GSM/UCS-2, Segments, Quiet Hours, Sender ID)
Given an SMS draft for a selected locale When I toggle encoding to GSM 7-bit Then non-GSM characters are highlighted and the segment count displays for GSM encoding
Given the same message is toggled to UCS-2 When I view the simulation Then the segment count recalculates for UCS-2 and the character counter updates accordingly
Given local quiet hours are configured for the target region When the scheduled send time falls within quiet hours Then the UI warns me and suggests the next allowable window based on locale rules
Given country-specific sender ID rules apply When I select a destination country Then only compliant sender ID options are available (e.g., short code, alphanumeric) and unsupported options are flagged
Given the message exceeds the desired segment limit When I apply the locale's abbreviation map in simulation Then a shortened variant is generated side by side with the new segment count
Back-Translation Spot Checks
Given a localized template segment exists for a target locale When I request a back-translation to the source language Then the system returns a back-translation and displays source, target, and back-translation side by side with differences highlighted
Given glossary terms are defined for the source/target locales When the back-translation indicates a glossary inconsistency Then the UI flags the segment with a glossary violation notice
Given I review a back-translation When I mark the spot check as Pass or Fail and add an optional note Then the decision, user, timestamp, and segment version are recorded
Approval Workflow with RBAC and Audit Trails for High-Impact Copy
Given a template is labeled High Impact When a Contributor submits changes for one or more locales Then the template enters Pending Approval and cannot be published until at least one Approver per affected locale approves
Given an Approver reviews a pending change When they Approve or Reject with an optional comment Then the decision is recorded, notifications are sent to subscribers, and the template status updates accordingly
Given a user without the Approver role attempts to approve When they click Approve Then the action is blocked with a permission error
Given any create/update/delete action occurs on locales, glossaries, abbreviation maps, TM, tone/formality, or templates When the action is saved Then an audit record is created capturing user, timestamp, entity, action, and before/after values and is queryable by filters (entity, user, date range)
Import/Export and CI API Integration
Given a glossary or translation memory dataset is selected When I export Then I can download CSV and TMX files containing all selected locales with required metadata and headers
Given I have an import file with new and updated entries When I run a dry-run Then the system reports counts of creates, updates, conflicts, and validation errors without persisting changes
Given I confirm the import When conflicts are present Then I can choose a resolution strategy (skip, overwrite, create new version), and a post-import report with row-level outcomes is available
Given a CI pipeline calls authenticated APIs When it pushes templates or pulls glossary/TM for specific locales Then the API responds with 2xx, version IDs, and ETags for caching; invalid tokens or scopes receive 401/403 with error codes

Smart Digest

Rolls multiple pending asks into a single, timed daily or weekly digest per recipient, with deep links to the highest‑priority items. De‑duplicates across projects, pauses if recent activity is detected, and offers quick triage so reviewers stay focused without notification fatigue.

Requirements

Recipient-Timed Digest Scheduling
"As a reviewer collaborating across time zones, I want a single digest at a predictable time so that I can plan my day without constant interruptions."
Description

Implement per-recipient daily or weekly digest scheduling that aggregates all pending asks (e.g., artwork approvals, stem version reviews, rights metadata confirmations, AutoKit press page checks, shortlink tasks) into a single message delivered at the recipient’s preferred local time. Respect user-defined send windows, time zones, and channel preferences while skipping empty digests. The scheduler should batch tasks across all TrackCrate projects the recipient can access and generate a single digest payload. Integrate with the notifications service, user profile/time zone store, and permissions layer to ensure only authorized items are included. Provide admin defaults and guardrails (minimum/maximum cadence), concurrency-safe job orchestration, and idempotent scheduling to prevent duplicate sends.
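The trickiest part of this requirement is computing the next local occurrence across DST transitions. A minimal sketch, assuming the field names above (`time_zone`, `preferred_send_time`, `digest_cadence`) and letting `zoneinfo` resolve DST:

```python
from datetime import datetime, time, timedelta
from zoneinfo import ZoneInfo

def next_run(now_utc, tz_name, send_time, cadence, weekday=0):
    """Next digest run in UTC. cadence: 'daily' or 'weekly'; weekday: 0=Monday."""
    tz = ZoneInfo(tz_name)
    local_now = now_utc.astimezone(tz)
    candidate = datetime.combine(local_now.date(), send_time, tzinfo=tz)
    if cadence == "weekly":
        candidate += timedelta(days=(weekday - candidate.weekday()) % 7)
    while candidate <= local_now:
        candidate += timedelta(days=7 if cadence == "weekly" else 1)
        # Re-anchor the wall-clock time so a DST shift cannot drift the send hour.
        candidate = datetime.combine(candidate.date(), send_time, tzinfo=tz)
    return candidate.astimezone(ZoneInfo("UTC"))
```

Anchoring to the wall-clock `send_time` (rather than adding a fixed 24-hour offset) is what keeps a 09:00 digest at 09:00 local when the UTC offset changes over a DST boundary.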

Acceptance Criteria
Per-Recipient Cadence Scheduling (Daily/Weekly, Local Time, DST)
Given a recipient with a valid time_zone, preferred_send_time, and digest_cadence set to daily or weekly When the scheduler computes the next run time Then it schedules the digest for the recipient’s next local occurrence of the cadence at the preferred_send_time respecting DST transitions And the digest is delivered within ±5 minutes of the scheduled local time And at most one digest is sent per recipient per cadence period (one per day for daily, one per week for weekly) And changing the preferred_send_time or cadence re-computes and replaces the next scheduled run without producing an extra send
Send Window Compliance
Given a recipient with user-defined send windows (days and local time ranges) When the preferred_send_time falls outside the active window on a given day Then the digest is deferred to the next available in-window time And if no window exists for the configured day (e.g., weekends disabled), the digest is scheduled for the next eligible day within the window And no digest is sent outside the defined windows
Skip Empty Digests
Given the scheduled send time arrives And there are zero pending asks across all accessible projects for the recipient When the digest job executes Then no notification is sent And a skip event with reason "empty_digest" is logged And the next scheduled run remains unchanged
Permissions Filtering and Cross-Project Aggregation with De-duplication
Given a recipient has access to multiple TrackCrate projects with pending asks When generating the digest payload Then only items authorized by the permissions layer for that recipient are included And identical asks (same global task_id) are de-duplicated so they appear once And the digest consolidates items across all authorized projects into a single payload And each included item contains a deep link to the underlying task
Channel Preferences and Notification Routing
Given a recipient has channel preferences configured (e.g., email, Slack, in-app) When a digest is ready to send Then it is delivered via the highest-priority enabled channel for that recipient And no message is sent via any disabled channel And the notifications service is called once with the digest payload and channel routing parameters And the service returns a success response code
Idempotent and Concurrency-Safe Job Orchestration
Given multiple workers attempt to process the same recipient’s digest for the same cadence period When jobs start concurrently Then only one digest message is created and sent, guarded by an idempotency key composed of recipient_id + cadence_period And any duplicate attempts within the lock window exit without sending and log a deduplicated outcome And retrying the same job after a worker crash results in at most one send And the final state records exactly one notification ID for that recipient and period
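The idempotency guard above reduces to an atomic "insert if absent" on the recipient_id + cadence_period key. A minimal in-memory sketch (a real deployment would use a unique database constraint or a distributed lock instead of a process-local dict):

```python
import threading

_sent = {}                 # stand-in for a unique-constrained table
_lock = threading.Lock()   # stand-in for the DB's atomicity guarantee

def try_claim(recipient_id, cadence_period):
    """Return True for exactly one worker per (recipient, period)."""
    key = f"{recipient_id}:{cadence_period}"
    with _lock:
        if key in _sent:
            return False   # duplicate attempt: exit without sending
        _sent[key] = "claimed"
        return True
```

A worker that crashes after claiming but before sending would need the claim to carry a lease/expiry so a retry can take over, which is the "at most one send after a worker crash" case in the criterion.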
Admin Defaults and Guardrails
Given a new recipient without explicit digest settings When the profile is created Then admin-configured defaults are applied (e.g., daily cadence at 09:00 local, channel=email) And if time_zone is missing, UTC is applied and flagged for update And attempts to set a cadence outside the admin min/max guardrails are rejected with a validation error And all changes to cadence, send windows, and channels are audited with actor, timestamp, old_value, new_value
Cross-Project Ask De-duplication
"As a label manager working across several releases, I want duplicate asks combined across projects so that I don’t waste time reviewing the same item multiple times."
Description

Create a de-duplication engine that collapses identical or equivalent asks across multiple projects into a single digest line item per recipient. Use stable asset and task keys (e.g., track/stem/artwork asset IDs, rights task IDs) and ask type to detect duplicates originating from different threads or projects. Merge metadata (originating projects, requestors) into a compact summary with badges, while preserving per-project audit trails. Ensure permission checks per project are enforced before surfacing the merged item. Provide deterministic tie-breaking and conflict resolution rules, and log mappings for traceability. If policy requires separation (e.g., differing confidentiality levels), gracefully split items while preventing redundant notifications.
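The stable-key grouping can be sketched as below. The ask field names and the `can_view` permission hook are illustrative assumptions, not the actual schema; confidentiality is folded into the grouping key so differing levels split naturally.

```python
from collections import defaultdict

def dedupe(asks, can_view):
    """asks: list of dicts; can_view(project_id) is a hypothetical RBAC hook."""
    groups = defaultdict(list)
    for ask in asks:
        # Stable key: asset + ask type; confidentiality forces a policy split.
        key = (ask["asset_id"], ask["ask_type"], ask.get("confidentiality"))
        groups[key].append(ask)
    items = []
    for key, group in groups.items():
        visible = [a for a in group if can_view(a["project_id"])]
        if not visible:
            continue  # recipient sees nothing from this group
        items.append({
            "key": key,
            "projects": sorted({a["project_id"] for a in visible}),
            "requestors": sorted({a["requestor"] for a in visible}),
            "hidden_sources": len(group) - len(visible),  # "+N hidden source"
            "source_ids": [a["id"] for a in visible],     # traceability mapping
        })
    return items
```

Note that hidden sources contribute only a count, never project names or metadata, which is what the permission-aware merge criterion below demands.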

Acceptance Criteria
Single Line Item for Identical Asks Across Projects
Given recipient R has access to Projects A and B And Project A has an open approval ask for asset_id ART123 version v3 And Project B has an open approval ask for asset_id ART123 version v3 When the daily Smart Digest is generated for R Then exactly 1 digest line item is created for ART123 approval for R And the line item displays project badges for A and B (count = 2) And the line item lists both requestors’ names And no additional separate notification for the same ask appears in the digest And per-project audit trails in A and B remain unchanged
Equivalent Ask Detection via Stable Keys
Given Project C has an open review ask for stem_id STEM42 with ask_type review on Thread X And Project D has an open review ask for stem_id STEM42 with ask_type review on Thread Y And the titles, comment text, or thread origins differ When the digest generator runs Then the two asks are detected as duplicates via stable key (stem_id STEM42 + ask_type review) And exactly 1 digest line item is produced for R And the line item includes origin badges for C and D And duplicate notifications for these asks do not appear elsewhere in the digest
Permission-Aware Merge Without Data Leakage
Given recipient R has access to Project E but not Project F And Project E and Project F each have an open rights_clearance ask for track_id TRK9 When the digest is generated for R Then R sees exactly 1 line item for TRK9 rights_clearance showing only Project E’s metadata And the line item does not display Project F’s name, title, or any restricted metadata And the line item shows a generic “+1 hidden source” indicator (no identifying details) And the deep link routes to Project E’s ask only and does not expose Project F And no policy or permission information about Project F is inferable from the digest content
Policy-Driven Separation by Confidentiality Level
Given two approval asks reference asset_id ART999 And Project G’s ask has confidentiality = Internal And Project H’s ask has confidentiality = External And recipient R has access to both projects When the digest is generated Then two separate digest line items are created, one per confidentiality level And within each confidentiality level, identical asks are merged to a single line item And R receives no more than 1 line item per confidentiality level for ART999 in the digest window And no cross-level metadata (e.g., project names) is shown across the split items
Deterministic Tie-Breaking and Conflict Resolution
Given three duplicate asks exist for asset_id ART777 with ask_type approval across Projects J, K, L And their attributes are:
- Priorities: High (J), Blocker (K), Normal (L)
- Due dates: J=D1, K=D2, L=D3
- Activity timestamps: J=T1, K=T2, L=T3
- Source IDs: J=ID_J, K=ID_K, L=ID_L
When the digest is generated Then the primary source is selected by deterministic order:
1) Highest priority (Blocker > High > Normal > Low)
2) Earliest due date
3) Most recent activity timestamp
4) Lowest lexicographic source_ask_id
And the deep link points to the selected primary source And the displayed priority equals the highest among the duplicates And the displayed due date equals the earliest among the duplicates And the summary lists the count of additional sources with project badges And the same inputs always yield the same primary selection and presentation
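The four-step deterministic order above maps directly onto a composite sort key. A sketch, assuming illustrative field names on each duplicate:

```python
from datetime import date, datetime

PRIORITY_RANK = {"Blocker": 3, "High": 2, "Normal": 1, "Low": 0}

def pick_primary(duplicates):
    """duplicates: dicts with 'priority', 'due' (date), 'activity' (datetime), 'id'."""
    return min(
        duplicates,
        key=lambda d: (
            -PRIORITY_RANK[d["priority"]],   # 1) highest priority first
            d["due"],                        # 2) earliest due date
            -d["activity"].timestamp(),      # 3) most recent activity
            d["id"],                         # 4) lowest lexicographic source id
        ),
    )
```

Because the key is a pure function of the inputs, the same duplicates always yield the same primary source, which is the determinism guarantee the criterion requires.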
Traceable Mapping and Idempotent Merges
Given N duplicate source asks S1..Sn are merged into a digest item M for recipient R When system logs are inspected via the observability endpoint Then a mapping record exists linking M to {S1..Sn}, project IDs, algorithm_version, decision rationale, and timestamp And the mapping is queryable by either M or any Si And the mapping is retained for at least 90 days And rerunning deduplication with the same inputs produces the same merged item ID M and identical mapping And each source project’s audit trail remains intact and unmodified
Activity-Aware Send Pause
"As a frequent contributor, I want the digest to pause when I’ve just handled my queue so that I don’t receive unnecessary summaries."
Description

Detect recent recipient activity to intelligently pause, trim, or reschedule digests. If the recipient has cleared or acted on items (approve/reject/comment/download) within a configurable lookback window, either skip the digest entirely or remove the addressed items from it. Subscribe to the event stream for actions on stems, artwork, rights metadata, AutoKit press pages, shortlinks, and expiring download links to maintain up-to-date task states. Provide thresholds and rules per recipient or org, including quiet hours. Ensure graceful rescheduling, anti-thrashing debouncing, and accurate audit logs of why a digest was paused or altered.
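The skip-or-trim decision described above can be sketched as a pure function over the pending items and the recipient's recent actions (field names are assumptions):

```python
from datetime import datetime, timedelta

def triage_digest(pending, recent_actions, now, lookback=timedelta(hours=2)):
    """pending: list of {'id': ...}; recent_actions: {item_id: last_action_time}."""
    cutoff = now - lookback
    kept, trimmed = [], []
    for item in pending:
        acted_at = recent_actions.get(item["id"])
        # Items acted on inside the lookback window are removed from the digest.
        (trimmed if acted_at and acted_at > cutoff else kept).append(item)
    if not kept:
        return ("skipped_due_to_recent_activity" if trimmed else "empty_digest", [])
    return ("trimmed_items" if trimmed else "send", kept)
```

The returned decision string doubles as the audit-log reason, matching the acceptance criteria below.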

Acceptance Criteria
Skip Digest When Recent Activity Detected Within Lookback
Given a recipient with lookback window = 2 hours and at least one action (approve, reject, comment, or download) recorded within that window And a digest is scheduled to send at 09:00 local time When the send decision runs Then the digest is not sent And an audit log entry is created with decision = "skipped_due_to_recent_activity", recipient_id, digest_id, matched_activity_ids, lookback_window = "2h", and timestamp And the next scheduled send remains at the next regular cadence with no immediate retry
Trim Addressed Items From Pending Digest
Given a recipient with 5 pending items across assets And 2 of those items were acted on within the 2-hour lookback window before the scheduled digest When the digest is generated Then the digest includes only the 3 remaining items And the removed items are listed in the audit log with decision = "trimmed_items", count = 2, and item_ids And no references or links to the removed items appear in the digest
Update Task State From Cross-Asset Event Stream
Given subscriptions exist for stems, artwork, rights metadata, AutoKit press pages, shortlinks, and expiring download links When an approve, reject, comment, or download event is received for an item Then the item's pending state is updated within 60 seconds And the update is idempotent for duplicate events And if the event arrives after digest assembly but before send, the digest is recalculated to remove addressed items or pause if none remain And failures to process events are retried with exponential backoff up to 3 times and surfaced to monitoring
Honor Quiet Hours With Timezone-Aware Rescheduling
Given recipient quiet hours are configured as 21:00–08:00 in the recipient's timezone And a digest is scheduled for 22:00 local time When the scheduler evaluates send time Then the digest is not sent during quiet hours And it is rescheduled to the next allowed window at 08:00 plus configured jitter not exceeding 10 minutes And an audit log entry records decision = "rescheduled_quiet_hours" with original_time, next_scheduled_at, and timezone
Debounce Thrashing Near Send Time
Given debounce_period = 15 minutes and max_deferral = 60 minutes And new activity is detected within 15 minutes before a scheduled send When the send decision runs Then the digest send is deferred by 15 minutes And additional activity within the debounce window extends deferral up to a maximum of 60 minutes And after reaching max_deferral, the system makes a final decision to send or skip based on latest state And only one notification is generated for that digest window
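One way to read the debounce rule above: each activity falling inside the debounce window pushes the send out by the debounce period, capped at max_deferral past the original time. A sketch under that interpretation:

```python
from datetime import datetime, timedelta

def deferred_send_time(scheduled, activity_times,
                       debounce=timedelta(minutes=15),
                       max_deferral=timedelta(minutes=60)):
    """Return the final send time after debouncing late activity."""
    send_at = scheduled
    cap = scheduled + max_deferral  # never defer past this point
    for t in sorted(activity_times):
        if send_at - debounce <= t < send_at:  # activity inside debounce window
            send_at = min(t + debounce, cap)
    return send_at
```

At the capped time the system makes its final send-or-skip decision based on the latest state, as the criterion specifies.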
Apply Recipient-over-Org Policy Rules
Given an org default lookback = 2 hours and a recipient override lookback = 30 minutes And org policy min_pending_to_send = 3 and recipient override min_pending_to_send = 5 When evaluating a scheduled digest for the recipient Then the effective lookback is 30 minutes and min_pending_to_send is 5 And if pending items < 5, the digest is skipped with decision = "skipped_below_threshold" And the applied policy_ids and policy sources (recipient or org) are recorded in the audit log
Comprehensive Audit Trail For Pause/Alter Decisions
Given any decision to skip, trim, or reschedule a digest When the decision is committed Then an audit record is written containing: correlation_id, recipient_id, org_id, digest_id, decision, rule_id(s), reason, lookback_window, thresholds, matched_activity_ids, counts, original_scheduled_at, next_scheduled_at (if applicable), actor = "system", and timestamp And the record is immutable and queryable via the admin API within 5 minutes And each sent or skipped digest links to its audit record by correlation_id
Priority Scoring & Deep Links
"As a project lead, I want the most urgent items surfaced with one-click deep links so that I can jump straight to what matters and resolve blockers quickly."
Description

Rank digest items using a transparent scoring model that considers due dates, release milestones, rights/clearance risk, requester role, item age, and prior snoozes. Display the highest-priority items first and generate secure deep links that open directly to the actionable context in TrackCrate (task detail, asset preview, AutoKit press page, private stem player) with state preserved (e.g., filters, scroll position). Sign links with short-lived tokens and attribute clicks for analytics while enforcing permission checks upon open. Provide a top section (“Top 3 to Tackle”) and a secondary section for the remaining queue, with fast-loading targets and graceful fallback if tokens expire.
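A transparent model of this shape is just a weighted sum of normalized factors, with per-factor contributions retained for the "Why prioritized" view. The weights, caps, and field names below are illustrative placeholders, not product values:

```python
from datetime import date

WEIGHTS = {"due": 5.0, "milestone": 4.0, "risk": 3.0,
           "role": 2.0, "age": 1.0, "snooze": -2.0}
AGE_CAP = 30  # days; the age contribution is capped here

def score(item, today):
    days_to_due = (item["due"] - today).days
    factors = {
        "due": max(0, 14 - days_to_due) / 14,            # nearer due date -> higher
        "milestone": 1.0 if item["near_milestone"] else 0.0,
        "risk": item["rights_risk"],                     # normalized 0..1
        "role": 1.0 if item["requester_role"] in ("label_manager", "lead") else 0.5,
        "age": min(item["age_days"], AGE_CAP) / AGE_CAP, # capped age factor
        "snooze": item["snooze_count"],                  # each snooze is a penalty
    }
    contributions = {k: WEIGHTS[k] * v for k, v in factors.items()}
    return sum(contributions.values()), contributions   # total + explanation
```

Returning the per-factor contributions alongside the total is what makes the model auditable: the UI can render them verbatim instead of re-deriving the math.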

Acceptance Criteria
Top 3 Ordering by Priority Score
Given a recipient has at least 6 pending items across projects with varied due dates, release milestones, rights/clearance risk, requester roles, item age, and snooze counts When the Smart Digest is generated for that recipient Then each item receives a numeric priority score computed from the documented weights for the six factors And the list is sorted descending by score; tie-breakers are earlier due date, higher rights/clearance risk, then newer activity timestamp And the first three items render under the "Top 3 to Tackle" section in that order And all remaining items render under a secondary "Queue" section in order And each item exposes a "Why prioritized" view showing factor contributions and final score
Deep Links Preserve Context and State
Given a digest item link targeting a task detail, asset preview, AutoKit press page, or private stem player and including filter, sort, and scroll parameters When the recipient clicks the link within the token time-to-live window Then TrackCrate opens directly to the target view with the specified filters, sort, and scroll position applied And the actionable control (approve/reject, play, download, comment) is visible and interactable without additional navigation And if the user is unauthenticated, they are prompted to sign in and then returned to the exact target with state preserved
Permission Checks on Open
Given a deep link is clicked by a recipient without sufficient permission for the target item When the link is opened Then the app returns an access denied view without exposing sensitive metadata beyond item name and project And provides a one-click request-access flow that includes the item ID and recipient identity And logs a denied_open analytics event tied to the link and recipient And if permission is granted within 24 hours, the original link redirects to the target context on the next click without issuing a new digest
Short-Lived Signed Links With Graceful Expiration
Given digest links are signed with short-lived tokens (default TTL 24 hours, configurable per digest)
When a token is valid Then the request is authenticated by token verification in ≤150 ms p95 and proceeds to the target
When a token is expired, revoked, or tampered Then the user is shown a fallback that prompts sign-in (if needed) and regenerates a fresh token server-side if the user has access, then redirects to the original target with state preserved
And if regeneration is not possible, the user is taken to the digest index for that date with an expiration notice
And all expired or invalid token attempts are logged without echoing sensitive data to the client
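A minimal sketch of short-lived signed link tokens, using an HMAC over a base64 payload. This is illustrative only (a real service might use JWTs and a key store); the payload field names are assumptions.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-key"  # placeholder; load from a key store in practice

def sign_link_token(recipient_id, item_id, ttl_s=24 * 3600, now=None):
    payload = {"r": recipient_id, "i": item_id,
               "exp": int(now if now is not None else time.time()) + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_link_token(token, now=None):
    body, _, sig = token.partition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):   # constant-time comparison
        return None, "tampered"
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["exp"] < (now if now is not None else time.time()):
        return None, "expired"
    return payload, "ok"
```

Distinguishing "tampered" from "expired" matters for the fallback flow: only an expired-but-authentic token should trigger server-side regeneration for a user who still has access.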
Click Attribution Analytics
Given a recipient clicks any digest deep link When the click is processed Then an analytics event is recorded with recipient ID, digest ID, item ID, item type, timestamp, and outcome (opened, denied, expired), deduplicated for repeated clicks within 5 minutes And analytics capture occurs regardless of auth outcome and before redirect to the target And no asset contents or comment text are captured—only identifiers and timestamps And analytics is retried up to 3 times on network failure without adding more than 100 ms latency to navigation
Snoozes and Age Influence Priority Score
Given two otherwise identical items where one has been snoozed twice and the other never snoozed When the digest is generated Then the never-snoozed item receives a higher priority score by at least the configured snooze penalty and appears before the snoozed item
Given an item has aged beyond the age threshold When the digest is generated Then the item's age factor increases its score up to the configured cap, and this contribution is visible in the "Why prioritized" view
And adjusting snooze state or due date updates the item's ranking in the next digest
Fast-Loading Targets
Given a recipient clicks a valid deep link on standard test devices (mid-tier laptop and mobile) and networks (50 Mbps broadband, 3G) When the target loads Then time to first actionable paint is ≤1.5 s p95 on broadband and ≤3.0 s p95 on 3G for task detail, asset preview, AutoKit page, and stem player views And token validation adds ≤150 ms p95 to navigation And non-critical assets (waveforms, thumbnails) lazy-load without blocking primary controls And SLO breaches are emitted to monitoring with digest ID and target type
Quick Triage Actions in Digest
"As a busy reviewer on the go, I want to take quick actions directly from the digest so that I can keep releases moving without context-switching."
Description

Enable actionable controls within the digest (email, in-app, optional Slack) to Approve, Request Changes, Comment, Assign, or Snooze items without opening the full workspace. Use signed, single-use, idempotent action tokens and confirmation flows for sensitive operations. Support inline comments and lightweight previews (e.g., artwork thumbnail, stem snippet via secure preview link) that respect watermarking and expiry policies. Enforce RBAC checks before executing actions, write complete audit logs, and provide immediate feedback states (success, unauthorized, already handled). Ensure compatibility with common email clients and provide fallback action pages when buttons are stripped.

Acceptance Criteria
Execute Quick Triage Actions from Email Digest
Given a recipient with RBAC permission to act on items receives a Smart Digest email containing pending items with quick action buttons (Approve, Request Changes, Comment, Assign, Snooze)
When the recipient clicks Approve on an item Then the item’s status updates to Approved and a confirmation page displays outcome "success" within 2 seconds (p95) without opening the full workspace And the digest reflects the updated state on next refresh
When the recipient clicks Request Changes and submits a required comment Then the item’s status updates to Changes Requested, the comment is posted to the item, and the confirmation page displays "success"
When the recipient clicks Comment and submits text Then the comment is added to the item with the recipient as author and the confirmation page displays "success"
When the recipient clicks Assign and selects a valid assignee Then the item is assigned to that user and the confirmation page displays "success"
When the recipient clicks Snooze and selects a duration (1h, 1d, 1w) Then the item is hidden from digests and in-app review queues for that recipient until the snooze expires, and the confirmation page displays "success"
Single-Use, Idempotent Action Tokens
Given an action URL contains a signed token bound to recipient, action, item, and expiry When the token is used successfully once Then the action executes exactly once, the token is marked consumed, and subsequent uses return outcome "already handled" with no side effects
When the token is reused, forwarded, or invoked from another surface (email, in-app, Slack) Then the system returns outcome "already handled" or "unauthorized" as appropriate with no side effects
When the token is modified or expired Then the system rejects the request with outcome "unauthorized" or "expired" and logs the attempt
And token signatures are verified using server-side keys and include a TTL configurable per digest cadence
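The exactly-once semantics above come down to recording each token's outcome on first consumption and replaying that recorded outcome on any later attempt. A minimal in-memory sketch (a real system would persist consumed tokens transactionally alongside the action):

```python
def make_consumer():
    consumed = {}  # token_id -> recorded outcome (stand-in for a DB table)

    def consume(token_id, action):
        """Run action() exactly once per token; replays are side-effect free."""
        if token_id in consumed:
            return "already handled", consumed[token_id]
        result = action()          # the single execution of the side effect
        consumed[token_id] = result
        return "success", result

    return consume
```

Returning the originally recorded result on replay lets a forwarded or double-clicked link show the user what already happened, rather than an opaque error.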
RBAC Enforcement and Immediate Feedback States
Given the system evaluates RBAC before executing any quick action
When the recipient lacks the required role for the action or item Then the action is not executed, outcome "unauthorized" is displayed, and no state changes occur
When the item has been updated by another actor since the digest was sent Then the action is prevented if it conflicts, outcome "already handled" is displayed with the latest status and actor, and no duplicate side effects occur
When the action succeeds Then outcome "success" is displayed and returned via API with HTTP 200; failures return non-2xx with a specific error code
Inline Comments and Lightweight Previews Respect Policies
Given the digest displays lightweight previews
When the item includes artwork Then a max-400px watermarked thumbnail is displayed with alt text and a fallback link; if the email client blocks images, the fallback link is visible
When the item includes stems Then a secure preview link opens a stream-only, watermarked 15–30s snippet in the browser; downloads are disabled; the link expires per asset policy and returns "expired" after TTL
When the recipient posts an inline comment from the digest Then the comment is persisted, attributed to the user, sanitized for HTML, and visible in the workspace thread
And previews and links are suppressed if RBAC denies access
Audit Logging of Triage Actions
Given any quick action attempt (success, unauthorized, expired, already handled) When the request is received Then an immutable audit record is written within 5 seconds containing: timestamp (UTC), actor ID/email, item ID, project ID, action type, surface (email/in-app/Slack), token ID/hash, client IP, user agent, previous state, new state (if changed), outcome, and comment/assignment metadata (IDs) And audit records are queryable by admin users and exportable in CSV/JSON And audit logging occurs even when the action is denied or token validation fails
Email Client Compatibility and Fallbacks
Given common email clients (Gmail Web/Mobile, Outlook Desktop/OWA, Apple Mail, iOS Mail, Android Gmail) When the digest is rendered Then action buttons or links are visible and actionable in each client, or a plain fallback link is provided when buttons are stripped
When the recipient views the plaintext version Then all actions are available as signed links with descriptive labels
When styles or scripts are blocked Then the digest remains readable and actions remain accessible via fallback action pages
And the fallback confirmation page is mobile-responsive and loads in under 2 seconds (p95)
Interactive Actions from Slack and In-App Digest
Given Slack notifications are enabled for a recipient When a Smart Digest is delivered to Slack Then interactive buttons for Approve, Request Changes, Comment, Assign, and Snooze are present and functional; actions acknowledge within 3 seconds and return an ephemeral success/error message
When Slack interactivity is unavailable Then each action provides a signed fallback link to the confirmation page
Given the in-app digest view When the user performs any quick action Then the same token, RBAC, idempotency, previews, and feedback rules apply and results are reflected in the UI immediately
Digest Preferences & Opt-Out Management
"As a recipient, I want to customize what and when I receive digests so that the summaries fit my workflow and reduce noise."
Description

Provide per-recipient preferences to control digest frequency (daily/weekly), delivery time, categories (stems, artwork, rights, press/AutoKit, shortlinks, downloads), channels (email, in-app, Slack), language, and quiet hours. Support org-level defaults and overrides, plus project-level inclusion/exclusion. Include compliant unsubscribe/opt-out mechanisms in email with granular controls rather than global mute, honoring regional regulations (e.g., CAN-SPAM, GDPR). Store preferences in the user profile service with versioning, expose a self-serve settings UI, and ensure the scheduler and composer respect these settings.

Acceptance Criteria
Per-Recipient Digest Frequency, Delivery Time, and Quiet Hours
Given a recipient sets frequency to Daily at 09:00 in their timezone and quiet hours 22:00–07:00 When the scheduler evaluates send times Then the next digest is scheduled for 09:00 local time on the next eligible day outside quiet hours And no digest is sent during the quiet hours window And if the configured time falls within quiet hours, the digest is rescheduled to 07:00 the same day And changes to frequency or delivery time take effect on the next scheduling cycle within 60 seconds And the system displays the computed next send time in the settings UI in the recipient’s local timezone
Category and Project-Level Inclusion Filters Applied to Digest Content
Given a recipient selects categories Artwork and Rights only and excludes Project X When a digest is composed for that recipient Then only items tagged Artwork or Rights from projects not excluded are eligible for inclusion And items from Project X are excluded regardless of category And items from unselected categories are excluded And the composer logs the applied filters and counts of included and excluded items
Multi-Channel Delivery Preferences Enforcement (Email, In‑App, Slack)
Given a recipient enables Email and Slack channels and disables In‑App When a digest is delivered Then the recipient receives the digest via Email and Slack only And no in‑app notification or badge is created And if Slack authorization is missing, Slack delivery is skipped and an actionable warning is shown in settings And disabling a channel takes effect before the next scheduled send without requiring a restart
Org Defaults and User Overrides Resolution
Given an organization sets defaults Weekly Monday 09:00, categories All, language en‑US And a recipient has no personal preferences When the scheduler composes a digest Then the org defaults are applied
Given the recipient sets personal frequency Daily and language fr‑FR When the scheduler composes a digest Then the recipient’s overrides take precedence over org defaults
And precedence order is User Project‑Level Override over User Profile over Org Default over System Default
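The precedence chain above resolves naturally as layered dictionary merges, applied from lowest to highest priority. A sketch with assumed preference keys; in practice each layer would come from the User Profile Service:

```python
SYSTEM_DEFAULTS = {"frequency": "weekly", "language": "en-US"}

def effective_prefs(system=SYSTEM_DEFAULTS, org=None, user=None, project=None):
    """Merge preference layers; later (higher-priority) layers win.

    Order: system default < org default < user profile < user project-level.
    A None value in a layer means 'not set' and does not override.
    """
    merged = dict(system)
    for layer in (org or {}, user or {}, project or {}):
        merged.update({k: v for k, v in layer.items() if v is not None})
    return merged
```

Treating `None` as "not set" (rather than as an override) keeps a half-filled project-level override from accidentally erasing a user's profile settings.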
Localization of Digests and Preference Pages
Given a recipient’s language preference is es‑ES When a digest email and linked manage‑preferences page are generated Then subject, headings, category labels, call‑to‑action text, and footer are localized to es‑ES And dates and times are formatted in the recipient’s locale and timezone And if a translation key is missing, the system falls back to the organization language, else en‑US And the language context is preserved when navigating from the email to the preferences page
Email Unsubscribe and Granular Opt‑Out Compliance
Given a recipient receives a digest email Then the footer contains the organization’s legal name, physical mailing address, and a visible one‑click unsubscribe link And a separate manage‑preferences link enables granular opt‑out by channel and category without requiring login
When the recipient clicks the one‑click unsubscribe link Then email digest delivery is suppressed immediately for that recipient And the suppression event is recorded with timestamp, recipient identifier, and source And a confirmation page is shown indicating the change and offering granular options
Preference Storage, Versioning, and Scheduler Read Consistency
Given preferences are stored in the User Profile Service with versioning When a recipient updates any preference and clicks Save Then a new version is created with incremented version number, actor identifier, and updated_at timestamp And the change is durable and queryable within 2 seconds at the 95th percentile And the API enforces optimistic concurrency using ETag or If‑Match to prevent lost updates
When the scheduler composes a digest Then it reads the latest committed version at compose start and uses a consistent snapshot for that send And updates occurring after compose start do not affect the in‑flight composition
And the settings UI exposes fields for frequency, delivery time, categories, channels, language, quiet hours, and project inclusion or exclusion with validation of required fields And quiet hours validation supports overnight ranges and disallows identical start and end times
Delivery, Rendering & Engagement Tracking
"As a project manager, I want reliable, easy-to-scan digests across my preferred channel so that I can quickly review progress and ensure nothing is missed."
Description

Build responsive, accessible digest templates with dark-mode support, readable summaries, and clear call-to-action buttons. Render per-item summaries with badges (project, asset type), counts, and micro-previews where safe. Deliver via email and in-app; provide optional Slack digest posts for connected workspaces. Instrument open/click/action analytics, per-item engagement tracking, and UTM tagging for shortlinks while respecting privacy settings. Implement deliverability best practices (SPF/DKIM/DMARC, bounce handling, retries, rate limiting) and expose metrics dashboards to monitor notification fatigue (open rates, action rates, snooze rates) and run A/B tests on timing and layout.

Acceptance Criteria
Responsive Accessible Digest Template
Given any supported viewport from 320px to 1600px, When the digest email or in‑app view is rendered, Then there is no horizontal scrolling, text is readable, and tap targets are ≥44x44px. Given a user with a screen reader, When navigating the digest, Then focus order matches visual order and all actionable elements have descriptive labels and visible focus styles. Given images are blocked by the email client, When the digest loads, Then core content (subject, item summaries, CTAs) remains readable as live text with alt text on non‑decorative images. Given the user’s prefers-reduced-motion setting is enabled, When the digest renders, Then no animated GIFs or motion effects auto‑play. Given WCAG 2.1 AA criteria, When evaluated, Then color contrast is ≥4.5:1 for text and ≥3:1 for large text and interactive components.
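The contrast thresholds above follow WCAG 2.1's relative-luminance formula; a sketch for spot-checking template colors (helper names are illustrative, the formula itself is the standard WCAG definition):

```python
def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """WCAG 2.1 relative luminance of an sRGB color (0-255 per channel)."""
    def linearize(c: int) -> float:
        s = c / 255
        return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """Contrast ratio per WCAG: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)
```

Body text passes AA when the ratio is at least 4.5; large text and interactive components need at least 3.0. Black on white yields the maximum ratio of 21:1.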
Dark Mode & High‑Contrast Support
Given the recipient’s client or OS is in dark mode, When the digest is opened, Then styles switch to a dark color scheme with no loss of legibility and brand assets use dark‑mode variants. Given high‑contrast mode is enabled, When the digest is viewed, Then all text and interactive elements meet WCAG 2.1 AA contrast requirements and remain distinguishable. Given the in‑app digest view, When prefers-color-scheme: dark is detected, Then the UI adopts the dark theme without visual regressions.
Per‑Item Summaries with Badges, Counts, and Safe Micro‑Previews
Given a digest contains multiple items, When rendered, Then each item shows a project badge, asset type badge, pending counts (e.g., comments, approvals), and a CTA button. Given an item’s asset is preview‑safe per policy, When the digest renders, Then a micro‑preview (e.g., image thumbnail or audio waveform) is shown with watermark if required; otherwise a placeholder with “Preview unavailable” is shown. Given items were de‑duplicated across projects prior to rendering, When a duplicate appears, Then it is shown only once with combined source badges. Given a CTA is clicked for an item, When the deep link opens, Then the user lands on the specific item context (project and asset) with the correct state applied. Given very long titles or labels, When displayed, Then they truncate with ellipsis and expose full text via title/aria-label without layout break. Given counts exceed 99, When displayed, Then they show as “99+”.
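The "99+" and truncation rules above can be captured in two tiny helpers (illustrative names, not TrackCrate's actual renderer; the 60-character limit is an assumed default):

```python
def badge_count(n: int) -> str:
    """Counts above 99 render as '99+' per the digest spec."""
    return "99+" if n > 99 else str(n)


def truncate_label(text: str, limit: int = 60) -> str:
    """Ellipsize long titles; the full text still belongs in title/aria-label."""
    if len(text) <= limit:
        return text
    return text[: limit - 1].rstrip() + "\u2026"  # U+2026 is the ellipsis character
```

The display string is truncated, while the untruncated title is exposed to assistive technology, matching the criterion above.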
Multi‑Channel Delivery: Email, In‑App, and Optional Slack
Given a user is eligible for today’s digest and has email and in‑app delivery enabled, When the scheduled send time occurs, Then an email is sent and an in‑app digest card is created within 1 minute containing the same items and CTAs. Given the workspace is connected to Slack and the recipient enabled Slack digests, When the digest is sent, Then a Slack message posts to the configured channel with at least the top 3 items and a “View full digest” link; otherwise no Slack message is posted. Given recent recipient activity within the pause window, When the send time occurs, Then email and Slack deliveries are skipped and only the in‑app digest is queued for the next eligible window. Given an email hard bounce occurs, When processing provider webhooks, Then the address is suppressed within 5 minutes and future digest emails are not attempted while in‑app delivery continues.
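One way to express the channel-selection rules above as a pure function (a sketch; the flag names and `plan_digest_channels` are assumptions, not the actual scheduler):

```python
def plan_digest_channels(
    prefs: dict,
    recently_active: bool,
    email_suppressed: bool,
    slack_connected: bool,
) -> list[str]:
    """Decide which channels get today's digest.

    Recent activity inside the pause window skips email and Slack but
    keeps the in-app digest; a suppressed address never receives email
    while in-app delivery continues.
    """
    channels: list[str] = []
    if not recently_active:
        if prefs.get("email") and not email_suppressed:
            channels.append("email")
        if prefs.get("slack") and slack_connected:
            channels.append("slack")
    if prefs.get("in_app", True):
        channels.append("in_app")
    return channels
```

Keeping the decision side-effect-free makes the skip/suppression rules easy to unit-test independently of actual delivery.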
Engagement Tracking & UTM Tagging with Privacy Controls
Given an email digest is opened, When privacy tracking is allowed and Apple Mail Privacy Protection (MPP) is not detected, Then a single open event is recorded per device within a 24‑hour dedupe window. Given any digest CTA or item deep link is clicked, When the link is a TrackCrate shortlink, Then per‑item click events are recorded (recipient, digest id, item id, channel) and UTM parameters are appended: utm_source=trackcrate, utm_medium=digest_email|digest_inapp|digest_slack, utm_campaign=smart_digest, utm_content=item_{id}. Given recipient privacy is set to “no tracking” or regulatory consent is missing, When rendering links, Then no tracking pixels load, no UTMs are appended, and clicks are not stored beyond aggregate counts. Given a quick‑triage action (approve, request changes, snooze) is taken from the digest, When the action completes, Then a per‑item action event is recorded with outcome and latency.
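The UTM scheme above can be sketched with the standard library. The function name and `tracking_allowed` flag are illustrative; the parameter values come from the criterion itself.

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse


def tag_digest_link(url: str, channel: str, item_id: str, tracking_allowed: bool) -> str:
    """Append the digest UTM parameters, or leave the link untouched
    for recipients who opted out of tracking."""
    if not tracking_allowed:
        return url  # "no tracking": no UTMs appended at all
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": "trackcrate",
        "utm_medium": f"digest_{channel}",  # digest_email | digest_inapp | digest_slack
        "utm_campaign": "smart_digest",
        "utm_content": f"item_{item_id}",
    })
    return urlunparse(parts._replace(query=urlencode(query)))
```

Rebuilding the query string with `parse_qsl`/`urlencode` preserves any existing parameters on the shortlink rather than blindly appending.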
Deliverability: Authentication, Bounces, Retries, and Rate Limiting
Given outbound email authentication is configured, When test messages are sent, Then SPF=pass, DKIM=pass, and DMARC=pass with alignment on the From domain. Given a soft bounce response is received, When retry logic runs, Then the system retries up to 3 times over 24 hours with exponential backoff; after final failure the attempt is marked soft‑fail without suppressing the address. Given a hard bounce or complaint is received, When processed, Then the address is immediately suppressed and flagged, and no further emails are attempted. Given domain‑level throttling requirements, When sending to large recipient sets, Then per‑domain rate limits are enforced to avoid 4xx throttling errors. Given an unsubscribe or digest snooze is set, When the next send cycle runs, Then the recipient is excluded until the preference expires.
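One way to fit three exponential retries into the 24-hour soft-bounce window is to scale a 1x/2x/4x schedule to the window. This is an illustrative schedule under that assumption, not necessarily the production one.

```python
def retry_delays_hours(attempts: int = 3, window_hours: float = 24.0) -> list[float]:
    """Exponential (1x, 2x, 4x, ...) retry delays scaled so their sum
    fills the window, e.g. three retries spread across 24 hours."""
    weights = [2 ** i for i in range(attempts)]  # 1, 2, 4, ...
    scale = window_hours / sum(weights)
    return [w * scale for w in weights]
```

With the defaults this yields delays of roughly 3.4, 6.9, and 13.7 hours; each delay doubles the previous one and the total stays inside the 24-hour window, after which the attempt is marked soft-fail without suppressing the address.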
Metrics Dashboard & A/B Testing for Notification Fatigue
Given the metrics dashboard is loaded, When a date range and segment are selected, Then open rate, click/action rate, and snooze rate are displayed per channel and per digest, updating within 15 minutes of new events. Given a privacy filter to exclude non‑trackable users is applied, When metrics are recalculated, Then denominators and rates reflect only eligible users. Given an A/B test is configured for timing or layout, When the send occurs, Then recipients are assigned evenly to variants, exposure is mutually exclusive, and variant‑level metrics are recorded. Given an A/B test reaches its configured end date or sample size, When results are viewed, Then the dashboard shows a clear winner (or no‑difference) with confidence metrics and the system stops enrolling new recipients.
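Even, mutually exclusive variant assignment is commonly done with a deterministic hash so a recipient always lands in the same variant without storing assignments up front. A sketch (function and argument names are illustrative):

```python
import hashlib


def assign_variant(recipient_id: str, test_id: str, variants: list[str]) -> str:
    """Stable A/B assignment: hashing (test, recipient) gives each
    recipient exactly one variant, and variants partition recipients
    roughly evenly (mutually exclusive exposure)."""
    digest = hashlib.sha256(f"{test_id}:{recipient_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Keying the hash on the test id as well as the recipient keeps assignments independent across concurrent experiments, so a timing test and a layout test do not correlate.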

Product Ideas

Innovative concepts that could enhance this product's value proposition.

Guest Guard Links

Create per-recipient expiring shortlinks embedding inaudible watermarks; auto-revoke after X plays or downloads. Get leak traces and tamper alerts.

Idea

Role Ring Templates

Apply one-click role presets—Artist, Mixer, PR, A&R—that cascade project and asset permissions. Prevent oversharing; onboard collaborators in seconds.

Idea

Stem Diff Player

A/B versions with phase-aligned playback and spectral change heatmaps. Comment by bar; jump to changed regions instantly.

Idea

Clearance Capsule

Export a license-ready bundle: cleared stems, ISRC/ISWC, splits, cue sheet, contacts. Share a shortlink that tracks opens for sync decisions.

Idea

SplitSafe Escrow

Hold collaborator payments in milestone-based escrow; release on approval. Supports splits, invoices, and Stripe Connect.

Idea

Metadata Sentry

Continuously flag missing codes, mismatched sample rates, duplicate takes, and broken links. Offer one-click fixes before AutoKit or distribution.

Idea

Timezone Nudgeboard

Schedule review nudges in each recipient’s local time; auto-escalate via SMS after 48 hours unopened. Converts silence into approvals.

Idea

Press Coverage

Imagined press coverage for this groundbreaking product concept.

TrackCrate Debuts Unified Hub to End Version Chaos for Indie Artists and Small Labels

Imagined Press Article

Los Angeles, CA — September 2, 2025 — TrackCrate today announced the general availability of its lightweight music asset hub built for indie artists, small labels, and distributed creative teams working across time zones. TrackCrate centralizes stems, artwork, and press assets with rights metadata; generates trackable shortlinks; and spins up one‑click AutoKit press pages with a private stem player. With expiring, watermarked downloads and built‑in version control, teams can finally kill version chaos and ship releases faster—without sacrificing security or professionalism.

At its core, TrackCrate solves a daily headache: scattered files and unclear finals. Projects keep a living, versioned record of masters, alternates, stems, and visuals, all tied to credits, splits, codes, and usage notes. Shortlinks connect collaborators, reviewers, and partners to exactly what they need, while AutoKit creates clean, on‑brand press pages in seconds. A private stem player allows A/B comparisons and critical listening without exporting new bounces, keeping feedback loops tight and context‑rich.

“Independent artists and boutique labels shouldn’t need enterprise IT to run a world‑class release,” said Alex Rivers, CEO and co‑founder of TrackCrate. “We built TrackCrate so creators can move quickly and confidently—version safely, share securely, and present professionally—using one streamlined workspace that respects their time and protects their work.”

Built‑in protections address the modern realities of pre‑release sharing. Expiring links and inaudible watermarks travel with every recipient, while QuotaGuard Limits cap plays and downloads by asset type to curb overexposure. DeviceLock Binding attaches each link to trusted devices, discouraging forwards, and ForwardTrace Links enable controlled forwarding with automatic child‑link lineages.
If anything suspicious occurs, Tripwire Tamper can downgrade or switch to a decoy preview, and Watermark Map reveals a clear chain of custody so leak sources are identified in seconds, not days.

For production teams, TrackCrate keeps sessions agile. Fingerprint Merge clusters near‑duplicate takes and merges comments and approvals to a chosen keeper, while FormatFix conforms sample rate, bit depth, channel count, and loudness to project or distributor specs. TagForge ensures filenames and embedded credits are standardized, and LinkHealer continuously monitors for moved or missing files, repairing preview and shortlink targets before a campaign breaks.

“TrackCrate eliminated the guesswork that used to slow my clients,” said a remote mixer and mastering engineer who participated in the beta. “I can post revisions with a clean version history, reviewers hear fair A/Bs in the private stem player, and approvals are captured in context. There’s no longer a mystery Google Drive folder with five ‘final’ files.”

AutoKit press pages are designed for speed and accountability. Publicists can assemble embargos, bios, artwork, and press shots alongside watermarked audio or stems, enforcing Access Pledge terms with a lightweight clickwrap that captures name, role, and consent. Each shortlink tracks opens and engagement, so PR teams see who listened and when—valuable signal for timing follow‑ups and tuning outreach.

Boutique label operators gain control and clarity. With versioned assets, standardized credits, and expiring access, they can push progress without compromising safety. A label manager who tested TrackCrate during pre‑release campaigns noted, “We finally run approvals and outreach from a single source of truth. If a mix changes, Recall & Replace updates live links and press pages without losing analytics or creating new URLs.
That’s huge for schedule integrity.”

TrackCrate was built for collaboration across roles:
- DIY Artist‑Producers centralize stems, artwork, and rights, share watermarked previews, and move collaborators to approval with fewer nudges.
- Boutique Label Operators standardize credits and codes, approve finals, and track partner engagement through shortlinks to keep releases on schedule.
- Remote Mixers/Mastering Engineers upload revisions with clear history and deliver secure previews for rapid sign‑off.
- PR/Publicity Leads spin up AutoKit pages, control access with expiring links, and monitor journalist engagement in real time.
- Visuals & Artwork Collaborators iterate with versioned files and aligned timelines.
- Feature Artists/Session Contributors and A&R/Partner Reviewers evaluate works‑in‑progress securely, offering structured feedback without handling file ownership.

Availability and pricing
TrackCrate is available today worldwide with flexible plans for individual creators and teams. Early users can start projects immediately, generate AutoKit press pages, and invite collaborators with per‑recipient protections. For plan details and a free getting‑started guide, visit trackcrate.com.

About TrackCrate
TrackCrate is the music asset hub for modern indie teams. It versions stems, artwork, and press with rights metadata; creates trackable shortlinks and AutoKit press pages with a private stem player; and enforces secure, expiring, watermarked downloads. By unifying workflows from creation to campaign, TrackCrate helps independent artists and small labels ship releases faster with less risk and chaos.

Media Contact
TrackCrate Press Office
press@trackcrate.com
+1 (323) 555‑0176
trackcrate.com/press

TrackCrate Unveils Guest Guard Suite to Share Music Securely Without Slowing Collaboration

Imagined Press Article

Los Angeles, CA — September 2, 2025 — TrackCrate today introduced Guest Guard, a suite of link‑level protections that lets creators share unreleased music with confidence—without adding friction for trusted reviewers. Guest Guard combines per‑recipient watermarks, expiring shortlinks, quota controls, device binding, forwarding lineage, tamper detection, and clickwrap terms to keep pre‑release workflows accountable and fast.

“Creative momentum should not come at the expense of control,” said Priya Nandakumar, Head of Product at TrackCrate. “Guest Guard brings enterprise‑grade safeguards to indie‑friendly tools. You can invite mixers, artists, A&R, or press in seconds, yet every share remains traceable, revocable, and aligned with your campaign policies.”

Guest Guard includes:
- QuotaGuard Limits: Set per‑recipient plays and downloads with automatic expiry once limits are reached. Customize budgets by asset type—such as stems versus final masters—and receive threshold alerts before auto‑revoke. This curbs overexposure and removes the manual policing of usage.
- DeviceLock Binding: Bind each shortlink to the first verified device or allow a limited number of devices with one‑tap approvals. Suspicious device changes trigger re‑verification and notify owners, discouraging link forwarding while accommodating legitimate multi‑device use.
- ForwardTrace Links: Allow trusted recipients to forward access safely. Each forward creates a child shortlink with its own watermark, quotas, and expiry. Owners retain a clear lineage of who shared with whom, promoting healthy collaboration without sacrificing accountability.
- Watermark Map: Visualize a chain‑of‑custody map tying every recipient to a unique watermark ID. Drop in a suspect clip to identify the originating link in seconds and see the propagation path across forwards, accelerating leak source discovery and response.
- Tripwire Tamper: Detect scraping and automation patterns—abnormal chunking, headless requests, or unusual concurrency—and automatically downgrade the stream or switch to a decoy preview. Instant alerts include session details to help teams act quickly while keeping legitimate reviewers uninterrupted.
- Access Pledge: Gate links with lightweight clickwrap terms (embargo, no reuploads, intended use). Capture recipient name, role, and consent, producing exportable receipts that reassure rights holders and reduce compliance back‑and‑forth.
- Recall & Replace: Revoke or swap assets across all active Guest Guard links with one click—no new outreach needed. Recipients see a friendly update message while analytics and watermark history remain intact.

“Guest Guard gave our publicity team breathing room,” said a PR/Publicity Lead at an indie label who participated in the private beta. “We can let a journalist forward access to a colleague with full accountability, then adjust quotas or expiry on the fly. If a mix changes, we swap the file globally and keep momentum.”

For day‑to‑day collaborators, the experience remains simple. Invitees click a shortlink, accept the Access Pledge, and listen in a private stem player or on an AutoKit press page. If they need to switch devices, one‑tap re‑verification maintains continuity without support tickets. Owners get granular analytics—opens, time‑to‑first‑play, completion rates—so outreach stays targeted and respectful.

Guest Guard is deeply integrated with TrackCrate’s broader workflow. Template Composer lets teams save Role Ring presets (Artist, Mixer, PR, A&R) with default Guest Guard policies—quotas, device limits, watermarks, and terms—so every invitation is consistent and least‑privilege by default. Access Preview allows senders to simulate exactly what a recipient will see before sharing, preventing oversharing and avoiding embarrassing misconfigurations.
“Security is most effective when it’s invisible to good actors,” noted Daniel Ko, Security Lead at TrackCrate. “The right user should glide through a focused, private experience. Only when behavior looks risky do we add friction or route to a decoy. Guest Guard is the balance of trust and verification indie teams have been asking for.”

Whether sending stems for mix notes, pitching to A&R, or sharing a time‑boxed preview for press, Guest Guard helps teams meet modern expectations for accountability without turning collaboration into compliance theater. And if the unexpected happens, Watermark Map and Recall & Replace enable swift, data‑driven responses that preserve relationships and schedules.

Availability
Guest Guard is available today in all TrackCrate plans. Existing customers can apply Guest Guard policies to current projects immediately and convert legacy links with a single click. New users can create an account and start sharing securely in minutes at trackcrate.com.

About TrackCrate
TrackCrate is the music asset hub for indie teams, unifying version control, rights metadata, trackable shortlinks, AutoKit press pages, and a private stem player. Guest Guard’s layered protections—quotas, device binding, forwarding lineage, watermarks, tamper detection, and clickwrap terms—help creators share widely with confidence.

Media Contact
TrackCrate Press Office
press@trackcrate.com
+1 (323) 555‑0176
trackcrate.com/press

Role Rings Bring One‑Click, Least‑Privilege Access and Automation to Music Releases

Imagined Press Article

Los Angeles, CA — September 2, 2025 — TrackCrate today announced Role Rings, a powerful access model and automation layer that standardizes who can see and do what across projects, tracks, stems, artwork, and press assets. With Role Rings and a new Template Composer, teams can apply one‑click presets—Artist, Mixer, PR, A&R—that bundle permissions and policies like quotas, watermarks, device binding, and terms. The result: faster onboarding, fewer errors, tighter security, and cleaner handoffs throughout a release.

“Indie teams juggle collaborators, reviewers, and vendors that come and go at each milestone,” said Alex Rivers, CEO and co‑founder of TrackCrate. “Role Rings give them a simple, repeatable way to grant just enough access, for just long enough, while keeping momentum high and cleanup low.”

Role Rings ship with a comprehensive toolset:
- Template Composer: Design granular scopes for projects, tracks, stems, artwork, and press with actions like view, comment, upload, replace, and publish. Bundle default Guest Guard policies—QuotaGuard Limits, DeviceLock, ForwardTrace Links, Access Pledge—and save as presets.
- Access Preview: Simulate exactly what a role will see and be able to do before sending an invite. Share a secure “view as role” link internally for QA and approval, preventing oversharing and last‑minute surprises.
- Timeboxed Roles: Attach start/expiry windows or milestone triggers (e.g., auto‑downgrade Mixer to Reviewer after “Mix Approved”). Recipients are notified of changes, and owners can extend or revoke with one tap.
- Drift Guard: Continuously monitor for permission creep by comparing live access against the applied role template. Flag manual overrides, forwarded child links, and inherited scopes, with one‑click “Reapply Template” or documented exceptions.
- Smart Assign: Auto‑apply roles based on metadata rules, tags, status changes, or email domains (e.g., when the status switches to PR, invite the press list with the PR Ring and ForwardTrace quotas).
- Handoff Switch: Move collaborators between roles with one click—Contributor → Approver → Publicist. Policies, quotas, and pledges migrate automatically, and Recall & Replace updates active links without churn.
- Ring Insights: Track engagement and control health by role—invites accepted, time‑to‑first‑play, approval velocity, and leak/tamper incidents—so teams can refine templates and unblock progress.

Boutique label operators gain a repeatable onboarding path that scales from singles to multi‑artist rosters. “Before Role Rings, every invite was a one‑off,” said a label manager who participated in the beta. “Now, we codify our best practices and apply them in seconds. Our PR Ring includes embargo terms and watermarked streams, while our Mixer Ring allows uploads and replacements with clear version history. It’s consistency without rigidity.”

A&R partners and reviewers experience a focused, distraction‑free view that matches their task. Shortlinks route them to AutoKit press pages or the private stem player, already level‑matched and aligned for fair A/Bs. Owners retain analytics by role, using Ring Insights to spot bottlenecks like slow approvals or excessive forwards, and PrimeTime Send to schedule nudges when recipients are most responsive.

“Too many release delays stem from small misconfigurations,” said Priya Nandakumar, Head of Product at TrackCrate. “Access Preview catches mistakes before they ship. Timeboxed Roles and Drift Guard keep access aligned to plan, and Handoff Switch makes transitions auditable. It’s the operational backbone indie teams have been missing.”

Role Rings work hand‑in‑hand with TrackCrate’s creative and compliance features.
Mix reviews benefit from PhaseLock Align and LevelMatch A/B, which eliminate timing and loudness bias so decisions are based on substance, not perception. Clearance tasks accelerate through the Clearance Capsule, where Readiness Score audits codes and splits, Signoff Ledger secures approvals with file hashes, and Alternates Kit packages TV mixes and cutdowns for sync. For global teams, Locale Smart and Quiet Hours Shield respect local norms and working hours, while Smart Escalation and One‑Tap Approve drive progress without inbox fatigue. LinkHealer and FormatFix ensure the assets behind Role Rings remain playable and compliant, protecting campaigns from broken links and mismatched specs.

Availability
Role Rings, Template Composer, Access Preview, Timeboxed Roles, Drift Guard, Smart Assign, Handoff Switch, and Ring Insights are available today to all TrackCrate customers. Starter presets ship out of the box and can be customized by admins to match house policy. Learn more at trackcrate.com/role‑rings.

About TrackCrate
TrackCrate is the music asset hub for indie teams, combining version control and rights metadata with trackable shortlinks, AutoKit press pages, a private stem player, and layered protections. Role Rings standardize access so creators can move faster with confidence and auditability.

Media Contact
TrackCrate Press Office
press@trackcrate.com
+1 (323) 555‑0176
trackcrate.com/press

New Stem Diff Player Puts Fair, Fast Mix Decisions at Your Fingertips

Imagined Press Article

Los Angeles, CA — September 2, 2025 — TrackCrate today launched its Stem Diff Player, a decision‑making toolkit that helps teams hear what changed, not just what got louder. By aligning versions with transient‑ and tempo‑aware PhaseLock Align, matching loudness and stereo with LevelMatch A/B, and exposing only the difference signal via Delta Solo, the Stem Diff Player turns vague mix notes into precise, time‑stamped decisions. Add Band Focus, Change Navigator, DAW Marker Sync, and Version Matrix, and approvals that once took days can land in hours.

“Mix reviews often derail on perception bias and timecode drift,” said Priya Nandakumar, Head of Product at TrackCrate. “The Stem Diff Player neutralizes both. Teams compare apples‑to‑apples, jump to meaningful changes, and export actionable notes back to the DAW. It’s a faster path to ‘approved’ without compromising intent.”

The Stem Diff Player includes:
- PhaseLock Align: One‑click, transient‑ and tempo‑aware alignment that corrects timing and phase offsets between versions and stems, eliminating DAW bounce drift for true A/B parity.
- LevelMatch A/B: Automatic LUFS and stereo balance matching across versions to remove “louder sounds better” bias. Optionally lock to a target LUFS to preserve dynamics for fair, repeatable evaluations.
- Delta Solo: Instantly solo only what changed between takes—per stem or full mix. Scrub and loop the difference signal to pinpoint edits, automation moves, or processing tweaks.
- Band Focus: Filter the diff by frequency band or instrument range—low‑end, vocal sibilance, air—using smart presets or custom bands to evaluate targeted fixes without distraction.
- Change Navigator: Auto‑generated hotspot markers ranked by change magnitude and type (level, EQ, dynamics, stereo). Jump with arrow keys, filter by stem or band, and convert hotspots into to‑dos with one click.
- DAW Marker Sync: Round‑trip comments and hotspots with Pro Tools, Logic, Reaper, and more via AAF/CSV exports and imports, so producers see exact bars and regions to address—no retyping.
- Version Matrix: Compare A/B/C (and more) with pairwise diffs and quick‑reference switching. Audition per‑stem “best of” choices across versions to guide comp decisions and capture a clear verdict.

For remote mixers and mastering engineers, the impact is immediate. “I spend less time convincing clients that a mix is actually tighter and more time making it better,” said a mastering engineer from Berlin who used the tool in early access. “Delta Solo exposes the real changes, LevelMatch removes loudness bias, and Marker Sync drops decisions straight on my timeline.”

DIY artist‑producers benefit from focused feedback without exporting a dozen bounces. Within TrackCrate’s private stem player, collaborators can leave bar‑level comments tied to hotspots, while One‑Tap Approve in nudges captures decisions instantly. PrimeTime Send schedules reminders in the recipient’s local high‑response window, and Quiet Hours Shield keeps relationships healthy by respecting do‑not‑disturb windows and holidays.

The Stem Diff Player is part of TrackCrate’s bigger goal: accelerate honest, secure collaboration from idea to release. Combined with Guest Guard, reviewers access only what they need, for as long as they need, with clear accountability. Role Rings ensure mixers can upload and replace, while A&R and managers see controlled previews and approval paths that move forward, not sideways.

For labels and catalog teams, the benefits extend to quality control and reissue prep. Fingerprint Merge clusters near‑duplicates and consolidates comments to the keeper, while FormatFix enforces sample rate and loudness specs. TagForge unifies filenames and embeds correct credits, codes, and contact info, keeping ingest clean for distributors and societies.
“Great records deserve great process,” said Alex Rivers, CEO and co‑founder of TrackCrate. “When teams can hear changes clearly and agree faster, everyone wins—artists, engineers, publicists, and fans. The Stem Diff Player gives indie teams tools once reserved for elite rooms, now in a browser, with the security and context TrackCrate is known for.”

Availability
The Stem Diff Player is available today across TrackCrate plans. Existing users will see the new tools in the private stem player automatically. New users can start a project, upload versions and stems, and experience PhaseLock Align, LevelMatch A/B, Delta Solo, Band Focus, Change Navigator, DAW Marker Sync, and Version Matrix within minutes at trackcrate.com.

About TrackCrate
TrackCrate is the music asset hub for indie teams, unifying version control, rights metadata, trackable shortlinks, AutoKit press pages, and a private stem player. Its mix decision tools help creators evaluate changes fairly and move to approval faster.

Media Contact
TrackCrate Press Office
press@trackcrate.com
+1 (323) 555‑0176
trackcrate.com/press

Clearance Capsule and SplitSafe Escrow Connect Rights Readiness to On‑Time Payouts

Imagined Press Article

Los Angeles, CA — September 2, 2025 — TrackCrate today introduced two connected workflows designed to keep releases compliant and collaborators paid on time: the Clearance Capsule for license‑ready, auditable bundles and SplitSafe Escrow for milestone‑based payments. Together, they bring transparency and predictability to the last mile of shipping music, from codes and splits to invoices and multi‑currency payouts.

“Indie teams deserve the same rigor and reliability enjoyed by larger organizations,” said Alex Rivers, CEO and co‑founder of TrackCrate. “Clearance Capsule and SplitSafe Escrow transform end‑of‑cycle anxiety into a clear checklist with automatic follow‑through, so releases don’t stall at the finish line.”

The Clearance Capsule assembles everything a supervisor, distributor, or partner needs to say yes:
- Readiness Score audits each bundle for missing or inconsistent codes (ISRC/ISWC/IPI), uncleared samples, and contact gaps, surfacing a prioritized fix list with one‑click jumps to resolve issues via Metadata Sentry.
- Scope Builder guides usage definitions—media, term, territory, exclusivity, MFN, carve‑outs—validates against contributor constraints, and outputs a one‑page Rights Summary inside the Capsule to align stakeholders quickly.
- Signoff Ledger collects per‑split approvals with lightweight e‑consent tied to file hashes and watermark IDs. It timestamps roles, captures exceptions, and exports an audit‑ready PDF.
- CueSheet AutoFill imports scene notes or timecodes and populates composer/publisher credits, PRO affiliations, IPI/CAE, ISWC, and timing, exporting broadcaster‑ready formats (ASCAP, BMI, PRS, SOCAN, CSV/PDF).
- Jurisdiction Pack adds territory‑specific addenda, society mappings, and contact references. Optional localized Rights Summaries (EN/FR/DE/ES) smooth global workflows.
- Alternates Kit bundles sync‑friendly alternates—instrumental, TV mix, clean/explicit, and 15/30/60 cutdowns—standardizes filenames, embeds usage metadata into ID3/BWF, and loudness‑matches outputs with quick‑audition pages.
- CodeSense, CreditMatch, TagForge, FormatFix, Fingerprint Merge, and LinkHealer ensure that identifiers are valid, contributors are disambiguated, files conform to spec, duplicates are consolidated, and links don’t break mid‑campaign.

For those tasked with approvals, the payoff is trust and speed. “I need cleared, testable assets fast,” said a music supervisor who evaluated Clearance Capsule in early access. “Readiness Score tells me if the house is in order; Alternates Kit lets me test against picture immediately; and the Rights Summary removes guesswork. I can make decisions in one sitting.”

Once rights are aligned, SplitSafe Escrow connects deliverables to money with clarity:
- Milestone Builder defines review‑ready checkpoints and due dates tied to assets and approvers, turning vague phases into concrete gates.
- Recoup Tracker logs advances and expenses—mixing, artwork, ads—and sets recoup order before splits are paid. Escrow auto‑deducts approved costs at release with transparent, per‑party breakdowns.
- AutoRelease moves funds automatically on approval, after a grace window, or at a fallback date. One‑tap Pause prevents premature payout; alerts flag overdue decisions.
- KYC FastPass streamlines onboarding for every collaborator with localized guidance and tax forms via Stripe Connect, so funds can move the moment a milestone is approved.
- Invoice Sync auto‑generates branded invoices per milestone and contributor from agreed splits, exports to QuickBooks or Xero, and marks paid on release.
- Multi‑Currency Payouts let collaborators choose payout currency with transparent FX quotes and fee estimates; rates lock at release, and smaller payouts can be batched to reduce fees.
- Dispute FastTrack provides an instant hold with packaged evidence—approvals, comments, file hashes, change history—and tools to propose partial releases or holdbacks with timers. For catalog and metadata managers, the new workflows provide confidence that what goes out matches what was agreed. “Standards compliance isn’t optional,” said a metadata manager from Toronto who has been using TrackCrate across a legacy catalog. “CreditMatch and CodeSense prevent bad data from leaking downstream, and Signoff Ledger documents chain of title. It’s the difference between smooth ingestion and costly rework.” TrackCrate ties the creative and administrative ends together. Role Rings control who can see and sign what, while Guest Guard ensures that pre‑release sharing remains traceable and revocable. Smart communications—PrimeTime Send, Quiet Hours Shield, Smart Escalation, One‑Tap Approve, Copy Optimizer, Locale Smart, and Smart Digest—move decisions forward respectfully and efficiently, with analytics that highlight where help is needed. “Money and metadata are where trust is truly tested,” said Priya Nandakumar, Head of Product at TrackCrate. “By connecting clearance health to automated payouts, we’re helping teams avoid last‑minute scrambles and strained relationships. Everyone sees the same plan, the same receipts, and the same timeline.” Availability Clearance Capsule and SplitSafe Escrow are available today to all TrackCrate customers. Teams can enable the features on existing projects and backfill historical approvals into the Signoff Ledger. Learn more and request onboarding help at trackcrate.com/clearance and trackcrate.com/escrow. About TrackCrate TrackCrate is the music asset hub for indie teams. It versions stems, artwork, and press with rights metadata; creates trackable shortlinks and AutoKit press pages with a private stem player; and enforces expiring, watermarked downloads. 
With clearance and payments connected, TrackCrate helps independent artists and labels ship confidently and get everyone paid on time. Media Contact TrackCrate Press Office press@trackcrate.com +1 (323) 555‑0176 trackcrate.com/press
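Technical note: the AutoRelease behavior described above (funds move on approval after a grace window or at a fallback date, unless a Pause is in effect) amounts to a small decision rule. The sketch below illustrates that rule only; the `Milestone` fields and `should_release` function are hypothetical names, not TrackCrate's actual API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical model of a SplitSafe milestone; field names are
# illustrative, not TrackCrate's actual data model.
@dataclass
class Milestone:
    approved_at: Optional[datetime] = None   # set when the approver signs off
    paused: bool = False                     # One-Tap Pause blocks any payout
    grace: timedelta = timedelta(hours=48)   # review window after approval
    fallback_at: Optional[datetime] = None   # release even without approval

def should_release(m: Milestone, now: datetime) -> bool:
    """Decide whether escrowed funds may move: a Pause always wins;
    otherwise release once the grace window after approval has elapsed,
    or once the fallback date has passed."""
    if m.paused:
        return False
    if m.approved_at is not None and now >= m.approved_at + m.grace:
        return True
    if m.fallback_at is not None and now >= m.fallback_at:
        return True
    return False
```

Under this rule, an approved milestone pays out only after its grace window, an unreviewed one pays out at the fallback date, and a paused one never pays out until the hold is lifted.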
