Digital asset management (DAM)

IndieVault

One vault. Zero mix-ups.

IndieVault is a lightweight digital asset manager that centralizes tracks, artwork, stems, contracts, and press kits in one secure hub. Built for self-funded indie artists and managers shipping weekly, it organizes and versions assets into release-ready folders and sends watermarkable, expiring review links with per-recipient analytics, cutting mix-ups, leaks, and missed deadlines.


Product Details


Vision & Mission

Vision
Empower indie artists and managers to own their catalogs, ship flawlessly, and build thriving, independent careers worldwide.
Long Term Goal
By 2029, equip 50,000 indie teams to cut missed deliverables by 40%, asset search time by 60%, and leaks by 90%, lifting playlist/press acceptance by 15% and powering 10% of independent releases.
Impact
IndieVault helps self-funded indie artists and managers shipping weekly reduce asset hunt time by 60% and cut missed deliverables by 40% within two months. Watermarked, expiring review links drop leaks to near-zero and accelerate feedback by 35%, while unified metadata lifts playlist and press acceptance rates by 15%.

Problem & Solution

Problem Statement
Self-funded indie artists and managers shipping weekly releases juggle tracks, artwork, stems, and contracts across scattered drives and links, causing version mix-ups and missed deadlines. Cloud folders lack versioning, metadata, expiry, and per-recipient tracking; enterprise DAMs are costly and complex.
Solution Overview
IndieVault stops version mix-ups by centralizing and versioning every track, artwork, stem, and contract into release-ready folders. Its Smart Link Locker delivers watermarkable, expiring review links with per-recipient analytics, so feedback is trackable, leaks drop to near-zero, and weekly releases ship on time.

Details & Audience

Description
IndieVault is a lightweight digital asset manager that centralizes tracks, artwork, stems, contracts, and press kits in one secure hub. Built for self-funded artists and small management teams shipping releases across platforms weekly, it ends scattered files and version mix-ups by organizing, versioning, and tagging assets into release-ready folders. Its Smart Link Locker creates watermarkable, expiring review links with per-recipient analytics.
Target Audience
Self-funded indie music artists and managers (20-45) shipping weekly, battling scattered assets and version mix-ups.
Inspiration
At 12:47 a.m., my friend’s band manager called shaking—tomorrow’s blog premiere had the wrong master. I watched them dig through four Google Drives, a Dropbox, and ten WeTransfer links, filenames like FINAL_v7_MASTER2.wav. The problem wasn’t space; it was certainty. By sunrise, I sketched IndieVault: one vault with versioned assets, press-ready bundles, and watermarkable links that expire—so the right file ships, every time.

User Personas

Detailed profiles of the target users who would benefit most from this product.

Metadata Maven Mia

- Age 27-35, indie distribution coordinator for artists and micro-labels.
- Brooklyn coworking studio; hybrid across time zones and teams.
- 5+ years with DistroKid, FUGA, Labelcamp deliveries.
- Bachelor’s in Music Business; spreadsheet power user.
- Oversees 4-8 releases monthly across genres.

Background

Began as an intern fixing rejected uploads and missing ISRCs. A viral single delay from a filename mismatch made rigor non-negotiable. Now she quarterbacks delivery days with checklists and version locks.

Needs & Pain Points

Needs

1. Metadata validation against distributor requirements.
2. Templated folder structures with version locks.
3. Audit trail linking masters to contracts.

Pain Points

1. Distributor rejections from tiny metadata inconsistencies.
2. Wrong master uploaded under old filename.
3. Chasing missing ISRCs and split sheets.

Psychographics

- Worships clarity, dreads preventable errors.
- Motivated by spotless delivery dashboards.
- Values documented processes over heroics.
- Loves automation that eliminates rework.

Channels

1. Email inbox
2. Slack workspace
3. LinkedIn groups
4. Reddit musicbiz
5. YouTube tutorials

Visual Version Vivian

- Age 24-38, motion designer and music video editor.
- Home studio setup; collaborates remotely with artists and managers.
- Juggles 6-10 active projects; milestone-based invoicing.
- Primary tools: Adobe CC, Dropbox, Google Drive, Frame.io.
- Delivers teasers, loops, cover visuals, and reels.

Background

Cut tour visuals on overnight timelines, learning speed and discipline. After a leaked teaser cost a client a brand partnership, she embraced strict link expirations and detailed change logs.

Needs & Pain Points

Needs

1. Watermarked preview links with expiration.
2. Versioned folders for cuts and assets.
3. Comment threads tied to specific timestamps.

Pain Points

1. Old edits circulating from unmanaged links.
2. Unclear feedback causing endless revisions.
3. Assets scattered across drives.

Psychographics

- Obsessed with visual polish under pressure.
- Hates file confusion derailing timelines.
- Values clear client feedback channels.
- Prefers tools clients grasp instantly.

Channels

1. Instagram DM
2. Email inbox
3. Vimeo review
4. Behance portfolio
5. Slack workspace

Festival-Ready Felix

- Age 28-40, festival and showcase applicant.
- Regional hub base; frequent travel and remote pitching.
- Handles EPKs, tech riders, stage plots, live recordings.
- 12-20 submissions per season; deadline-driven.
- Coordinates with bandmates, FOH, and publicists.

Background

Missed a major showcase after an outdated EPK confused a booker. Built a submission system using single-source links, live-updating assets, and per-recipient tracking to time nudges.

Needs & Pain Points

Needs

1. Single link EPK with live updates.
2. Per-recipient open and play analytics.
3. Expiring links for unreleased live cuts.

Pain Points

1. Bookers using outdated press materials.
2. No visibility into whether links were viewed.
3. Last-minute asset requests derailing prep.

Psychographics

- Treats organization as competitive advantage.
- Craves proof their pitch was opened.
- Values concise, skimmable EPKs.
- Deadline-driven and relentless.

Channels

1. Email outreach
2. Instagram DM
3. Website submissions
4. LinkedIn messages
5. X updates

Split-Savvy Sam

- Age 30-45, rights administrator or entertainment lawyer.
- Hybrid work across studios and firm office.
- Oversees 30-100 agreements per quarter.
- Tools: DocuSign, Google Workspace, Airtable, Sheets.
- Advises indie managers and micro-labels.

Background

Started as a paralegal untangling split sheets post-release. A costly ownership dispute prompted a workflow that links agreements, approvals, and specific masters with timestamps.

Needs & Pain Points

Needs

1. Contract storage linked to exact audio versions.
2. Split approval workflow with timestamps.
3. Permissioned access by project and role.

Pain Points

1. Mismatched files and agreements cause disputes.
2. Missing signatures stall release schedules.
3. Uncontrolled sharing risks leak liability.

Psychographics

- Risk-averse, documentation-first thinker.
- Motivated by preventing future disputes.
- Values traceable, immutable records.
- Prefers structured over ad hoc exchanges.

Channels

1. Email legal
2. DocuSign reminders
3. LinkedIn messages
4. Slack workspace
5. Google Drive shares

Community-Drop Carmen

- Age 22-36, community manager for indie artist memberships.
- Remote, mobile-first; nights and weekends heavy.
- Coordinates exclusive early listens and behind-the-scenes.
- 2-4 content drops weekly; 200-5,000 members.
- Tools: Discord, Ko-fi, Shopify, Linktree.

Background

Scaled an artist’s Discord from 300 to 4,000 members. After a surprise premiere leaked, she implemented tier-based access, expirations, and watermarks to preserve trust.

Needs & Pain Points

Needs

1. Tiered, expiring links for exclusive content.
2. Watermarks to deter reposting.
3. Drop calendar with analytics per tier.

Pain Points

1. Leaks diminish member trust instantly.
2. Manual access management becomes unmanageable.
3. Confusion over which file is final.

Psychographics

- Fan-first, fairness-minded gatekeeper.
- Thrives on hype without chaos.
- Values trust and timely rewards.
- Data-curious, action-oriented.

Channels

1. Discord server
2. Email newsletter
3. Instagram Stories
4. Patreon posts
5. TikTok lives

Catalog Curator Casey

- Age 32-50, catalog archivist for artist or small label.
- Spare-room studio with NAS; cloud hybrid.
- Migrating 5TB–20TB across decades of assets.
- Tools: Finder, Excel, Resilio, Backblaze.
- Multi-format audio, artwork, and docs chaos.

Background

Inherited messy drives after a manager change and lost a radio edit before a sync hold. Committed to de-duplication, standardized naming, and mapping versions to releases.

Needs & Pain Points

Needs

1. Bulk ingest with de-duplication and fingerprinting.
2. Batch tagging and renaming templates.
3. Release mapping across versions and formats.

Pain Points

1. Duplicate files wasting space and causing confusion.
2. Missing or mislabeled radio edits.
3. Manual renaming consuming days.

Psychographics

- Purist about order and provenance.
- Patient, methodical problem-solver.
- Allergic to ambiguous filenames.
- Loves bulk actions and rules.

Channels

1. YouTube tutorials
2. Reddit datahoarder
3. Email updates
4. Gearspace forums
5. Slack communities

Product Features

Key capabilities that make this product valuable to its target users.

Delivery Profiles

Prebuilt, always‑current compliance templates per distributor and platform. Choose a profile (e.g., Apple Music, Spotify, Beatport) and Preflight autoloads the exact spec rules, folder structure, naming schema, and metadata toggles. Exports land perfectly formatted for each outlet, cutting resubmissions and guesswork.

Requirements

Auto-Updating Compliance Template Library
"As an indie label manager, I want a library of always-current delivery profiles so that my exports meet each platform’s latest specs without me tracking changes manually."
Description

Provide a centralized, versioned library of platform-specific delivery profiles (e.g., Apple Music, Spotify, Beatport), expressed as structured schemas defining required/optional metadata fields, allowed values, audio/image specs (sample rate, bit depth, loudness, color space), folder hierarchies, file naming patterns, and packaging rules. Implement an update service that ingests vendor spec changes (docs/APIs), applies semantic versioning, deprecates superseded profiles, and publishes changelogs. Notify workspace admins of changes and flag impacted releases. Support pinning a profile version per release for reproducibility and allow safe rollback to prior versions to preserve compliance for in-flight deliveries.
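To make the profile model and semantic versioning concrete, here is a minimal Python sketch of how a delivery profile might be represented and how a spec change could be classified into a version bump. The field names, the example profile, and the classification heuristics are illustrative assumptions, not IndieVault's actual schema.

```python
# Hypothetical delivery-profile record; keys are illustrative assumptions.
APPLE_MUSIC_PROFILE = {
    "id": "apple-music",
    "version": "3.1.0",          # semantic version: MAJOR.MINOR.PATCH
    "required_fields": ["title", "artist", "isrc", "upc"],
    "optional_fields": ["composer"],
    "audio": {"sample_rate_hz": 44100, "bit_depth": 24},
    "artwork": {"min_px": 3000, "color_space": "RGB"},
    "filename_pattern": r"^\d{2}_[A-Za-z0-9_-]+\.wav$",
}

def classify_change(old: dict, new: dict) -> str:
    """Classify an ingested spec change for semantic versioning:
    breaking -> major, additive -> minor, otherwise patch."""
    if set(new["required_fields"]) - set(old["required_fields"]):
        return "major"           # a new required field breaks existing releases
    if set(new["optional_fields"]) - set(old["optional_fields"]):
        return "minor"           # additive, non-breaking
    return "patch"               # corrective / non-functional

def bump(version: str, change: str) -> str:
    """Apply the classified change to a MAJOR.MINOR.PATCH version string."""
    major, minor, patch = map(int, version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"
```

Pinning a release to a profile version then amounts to storing this record's `id` and `version` alongside the release, so later library updates never change which rule set the release validates against.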

Acceptance Criteria
Vendor Spec Ingestion & Semantic Versioning
- Given a new vendor spec source (document URL or API payload) is provided, When the update service ingests and the change is classified as breaking, Then a new MAJOR version (X+1.0.0) is published within 5 minutes and the prior version remains available.
- Given the change is classified as additive/non-breaking, When published, Then the MINOR version increments (X.Y+1.0); given a corrective/non-functional change, Then the PATCH version increments (X.Y.Z+1).
- Given a new version is created, When stored, Then it contains a machine-readable schema defining required/optional metadata fields, allowed values, audio/image specs, folder hierarchy, file naming patterns (regex), and packaging rules, and passes internal schema validation.
- Given an ingestion event completes, When audited, Then the source reference (URL or API version), classifier, author, and timestamp are recorded in the audit log.
Profile Deprecation & Changelog Publication
- Given a new profile version V+1 is published, When the previous version V is superseded, Then V is marked Deprecated with a deprecation timestamp and optional grace period in both UI and API.
- Given a version is published, When the changelog is generated, Then it lists added/removed/changed rules, severity of changes (breaking/non-breaking), and migration guidance, and is accessible via in-app view and GET /profiles/{id}/versions.
- Given a deprecated version exists, When queried, Then it remains retrievable and usable by pinned releases until the end of the grace period, and is never auto-upgraded.
Admin Notifications & Impact Flagging
- Given a workspace has admins and at least one release using profile P@V, When a new version V+1 of P is published, Then all workspace admins receive an in-app notification and an email within 10 minutes summarizing the change type and impact.
- Given a profile update occurs, When impact analysis runs, Then all releases referencing P (pinned/unpinned) are evaluated and flagged as Breaking, Non-breaking, or No impact, with counts displayed in the notification.
- Given an impacted release is flagged, When viewed in the release dashboard, Then it shows current pinned state, recommended action (upgrade/pin/rollback), and a link to the diff of rules affecting that release.
Per‑Release Version Pinning
- Given a release selects profile P version V, When the user pins V, Then all subsequent validations and exports for that release use V regardless of newer library versions.
- Given a release is pinned to V, When a newer version V+1 is published, Then the system does not auto-upgrade the release, and a non-blocking deprecation warning is shown if V becomes deprecated.
- Given pin/unpin actions occur, When auditing, Then the audit log records actor, action (pin/unpin), profile, version, timestamp, and release ID.
- Given repeated validations with the same inputs, When performed under a pinned version, Then the rule set hash is identical across runs, ensuring reproducibility.
Safe Rollback for In‑Flight Deliveries
- Given a release upgraded from P@V to P@V+1 now fails validation, When the user initiates rollback to V, Then the release re-pins to V, validations re-run automatically, and prior passing checks pass again with unchanged inputs.
- Given a rollback is requested, When the target version is yanked/removed, Then the system blocks the rollback and displays an actionable message listing allowed rollback targets.
- Given a rollback completes, When auditing, Then the audit log records actor, from/to versions, reason, and timestamp, and the release maintains a consistent state with no partial rule mixes.
Preflight Autoload & Export Compliance
- Given a user selects profile P@V, When Preflight runs, Then it auto-loads P@V rules (folder template, naming regexes, metadata toggles, allowed values, audio/image specs) and displays rule count and version ID.
- Given assets violate any Critical or Required rule, When attempting export, Then export is blocked and errors list rule ID, severity, and remediation hint; Optional rules may warn but do not block.
- Given all Required rules pass, When exporting, Then the package produced matches the profile exactly: folder hierarchy created, filenames match patterns, audio sample rate/bit depth/loudness within spec, and image color space/dimensions valid; a verification report is attached to the export record.
Preflight Rules Engine & Violations Reporting
"As a self-released artist, I want preflight validation against the chosen platform’s rules so that I can fix issues before export and avoid resubmissions."
Description

Implement a Preflight rules engine that autoloads the selected delivery profile and validates assets, metadata, and folder structure in real time. Checks include field presence and format, naming schema compliance, audio/image technical specs, identifier checks (UPC/ISRC), territory/rights constraints, and duplicate detection. Classify findings by severity (error/warn/info), provide inline guidance, suggest safe autofixes (e.g., rename files to match patterns), and enable batch corrections. Generate a comprehensive, exportable Preflight report with filterable violations and a pass/fail gate that blocks noncompliant exports.
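A minimal sketch of the validation core described above, assuming regex checks for identifiers, severity-tagged findings, and an error-blocking gate. The rule IDs, field names, and function signatures are hypothetical, not IndieVault's actual engine.

```python
import re

# Standard identifier shapes: ISRC is 12 chars (CC-XXX-YY-NNNNN), UPC 12-14 digits.
ISRC_RE = re.compile(r"^[A-Z]{2}[A-Z0-9]{3}\d{7}$")
UPC_RE = re.compile(r"^\d{12,14}$")

def preflight(track: dict, required: list) -> list:
    """Validate one track's metadata; return findings tagged with a
    stable rule ID and a severity (error/warn/info)."""
    findings = []
    for field in required:                     # field presence checks
        if not track.get(field):
            findings.append({"rule": f"REQ-{field.upper()}",
                             "severity": "error",
                             "message": f"missing required field '{field}'"})
    isrc = track.get("isrc")
    if isrc and not ISRC_RE.match(isrc):       # identifier format checks
        findings.append({"rule": "ID-ISRC-001", "severity": "error",
                         "message": f"ISRC '{isrc}' does not match pattern"})
    upc = track.get("upc")
    if upc and not UPC_RE.match(upc):
        findings.append({"rule": "ID-UPC-001", "severity": "error",
                         "message": f"UPC '{upc}' must be 12-14 digits"})
    return findings

def gate(findings) -> bool:
    """Pass/fail gate: any error-severity finding blocks export."""
    return not any(f["severity"] == "error" for f in findings)
```

A real engine would load `required` and the regexes from the selected delivery profile rather than hard-coding them, and would add the audio/image, territory, and duplicate checks listed in the description.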

Acceptance Criteria
Profile Autoload & Real‑Time Validation
- Given a user selects a delivery profile, When the selection is saved, Then the rules, folder schema, naming pattern, and metadata toggles for that profile are loaded into the Preflight engine within 1 second and the UI displays the active profile name and version.
- Given the workspace contains assets and metadata, When an asset is added/removed or a field is edited, Then the engine revalidates only impacted items within 1 second and updates the violation counts by severity in the sidebar.
- Given no delivery profile is selected, When Preflight is opened, Then the user is required to choose a profile before validation can run and no pass/fail state is shown.
Metadata Presence, Format, and Identifier Checks
- Given an active delivery profile, When validating release- and track-level metadata, Then all profile-required fields are flagged Error if missing and recommended fields are flagged Warning when blank, with field-level highlights.
- Given UPC and ISRC fields are populated, When validated, Then values must match the profile-defined patterns (e.g., UPC 12–14 digits; ISRC 12-character pattern) and be unique within the release; otherwise raise Error with the corresponding rule ID.
- Given territory and rights windows are defined, When constraints violate profile rules (e.g., invalid ISO territory code, end date precedes start date), Then an Error is emitted referencing the exact field(s) and offending value(s).
Folder Structure & Naming Schema Compliance with Autofix
- Given a profile-specified folder tree, When validating the project, Then missing required folders are flagged Error, unexpected folders are flagged Warning, and misfiled assets are flagged Error with the expected path shown.
- Given profile-defined filename patterns, When a filename deviates, Then an Autofix suggestion is generated showing before/after, normalized safe characters, and a collision-free unique suffix; applying the fix renames/moves the asset and clears the violation.
- Given the user selects multiple naming violations, When Apply All is triggered for N>100 items, Then all eligible items are corrected without overwriting existing files, a progress indicator is shown, and a rollback option is available until session end.
Audio/Image Technical Specs Validation
- Given profile-defined audio specs, When audio files are analyzed, Then sample rate, bit depth, channels, duration bounds, loudness, and codec must match rules; nonconformities are Errors, borderline advisories are Warnings, and measured values are displayed per file.
- Given profile-defined artwork specs, When image files are analyzed, Then dimensions, aspect ratio, color space, file size, and embedded color profile must meet rules; violations are classified per profile as Error or Warning with measured values shown.
- Given a batch of 100 audio tracks and 10 artworks, When validation runs on a standard environment, Then technical analysis completes within 30 seconds and results are cached to avoid reprocessing unchanged files.
Duplicate Detection across Assets and Identifiers
- Given a release contains two audio assets with identical checksums or fingerprint similarity ≥ 0.98, When validated, Then a Duplicate Error is raised with links to both items and an option to keep/ignore one.
- Given two tracks share the same ISRC within a release, When validated, Then an Error is raised; if the same ISRC exists in another release in the workspace, Then raise a Warning with a link to the other release.
- Given two tracks have identical titles, durations (±1s), and artist credits, When validated, Then a Potential Duplicate Warning is raised with options to ignore or merge; ignored pairs are remembered for that release.
Severity Classification, Inline Guidance, and Rule IDs
- Given any violation is displayed, When shown in the UI, Then it includes severity (Error/Warning/Info), a stable rule ID (e.g., APPLE-ISRC-001), a clear message, impacted asset/field, and a one-click Help link.
- Given an actionable violation supports Autofix, When the Fix button is clicked, Then changes are applied safely and an audit entry is recorded with user, timestamp, rule ID, and before/after values; if Autofix is unavailable, Then actionable next steps are shown.
- Given accessibility requirements, When viewing the violations list, Then severity indicators include accessible names, messages are screen-reader friendly, and color is not the sole means of conveying severity.
Filterable Preflight Report and Export Pass/Fail Gate
- Given validation has completed, When the user opens the Preflight Report, Then they can filter by severity, asset type, track, rule ID, and recipient, and counts update instantly as filters are applied.
- Given the user exports the report, When selecting a format, Then PDF and CSV exports are available that include profile name/version, release ID, timestamp, rule summaries, all violations, statuses, and any applied fixes; the file downloads within 5 seconds.
- Given unresolved Error violations exist, When the user attempts to export delivery assets, Then export is blocked with a message listing blocking rule IDs; When zero Errors remain, Then the release is marked Pass and export proceeds; When only Warnings/Info remain, Then export proceeds and the report notes non-blocking issues.
Profile-Based Export Packaging
"As a mastering engineer, I want exports to land perfectly formatted per outlet so that deliveries are accepted on the first try."
Description

Produce exports that exactly match the chosen profile’s packaging rules, including folder hierarchy, deterministic file ordering, naming templates, embedded tags, sidecar manifests (XML/JSON, e.g., DDEX where applicable), and archive format (e.g., ZIP). Support batch exporting to multiple outlets from a single master with per-profile transformations applied. Generate checksums and an export log capturing profile version, rule set hash, and build metadata for traceability and reproducibility. Allow configuration of export destinations (local download, cloud bucket) and resumable uploads.
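The determinism and checksum requirements can be sketched with Python's standard library: sort paths for a fixed ordering, pin every archive entry's timestamp, and record SHA-256 checksums per file and for the whole package. The function name and manifest keys are assumptions for illustration, not IndieVault's actual export pipeline.

```python
import hashlib
import io
import zipfile

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def package(files: dict) -> tuple:
    """Build a byte-reproducible ZIP from {path: bytes}: deterministic
    file ordering plus a fixed per-entry timestamp means identical
    inputs always yield an identical archive."""
    manifest = {}
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_STORED) as zf:
        for path in sorted(files):                       # deterministic ordering
            info = zipfile.ZipInfo(path, date_time=(1980, 1, 1, 0, 0, 0))
            zf.writestr(info, files[path])
            manifest[path] = sha256(files[path])         # per-file checksum
    archive = buf.getvalue()
    manifest["__archive__"] = sha256(archive)            # package checksum
    return archive, manifest
```

The manifest doubles as the core of the export log: re-running the export with the same inputs and profile version must reproduce the same checksums, which is exactly what the reproducibility criteria below test for.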

Acceptance Criteria
Single Profile Packaging Conformance
- Given a release with all required assets and a selected Delivery Profile, When the user initiates an export, Then the produced package’s folder hierarchy exactly matches the profile’s canonical structure
- And file names follow the profile’s naming template with tokens resolved and illegal characters normalized per rule
- And assets not permitted by the profile are excluded
- And audio/image/document formats (codec, bit depth, sample rate, dimensions, extensions) satisfy the profile constraints
- And the archive container type and settings match the profile (e.g., ZIP) with no extra files or unexpected attributes
Reproducible Export Determinism
- Given identical inputs, the same Delivery Profile version, and the same rule-set hash, When exporting the same release twice, Then the file ordering within all folders is deterministic and matches the profile’s ordering rules
- And the byte-for-byte contents of each produced archive are identical across runs
- And per-file checksums and package checksums (e.g., SHA-256) are identical across runs
- And the export logs record the same profile ID/version and rule-set hash
Sidecar Manifest Generation and DDEX Validation
- Given a Delivery Profile that requires a sidecar manifest (e.g., DDEX ERN XML or JSON), When exporting the release, Then a manifest is generated in the required format at the specified path and filename
- And the manifest validates against the required schema (e.g., XSD/JSON Schema) with zero validation errors
- And required metadata fields are mapped and populated per profile, and prohibited fields are omitted
- And every file reference in the manifest resolves to an exported asset path and its checksum
Batch Export with Per-Profile Transformations
- Given a single master release and multiple selected Delivery Profiles (e.g., Apple Music, Spotify, Beatport), When the user runs a batch export, Then a separate, compliant package is produced for each profile
- And per-profile transformations (naming, loudness flags, bit-depth conversion, artwork dimension changes, metadata toggles) are applied only to that profile’s output
- And a failure in one profile does not block others; per-profile statuses are reported
- And no cross-profile file leakage occurs and outputs are separated per destination configuration
Configurable Destinations and Resumable Uploads
- Given export destinations configured for local download and at least one cloud bucket, When an export is performed and the network is interrupted during upload, Then local artifacts are available for download upon export completion
- And cloud uploads automatically resume and complete within configured retry limits without data corruption
- And uploaded objects appear at the configured bucket/prefix with expected ACL/storage class
- And final remote object checksums match local checksums
Checksums and Export Log Traceability
- Given an export completes successfully, When the system writes the export log, Then the log includes profile ID/name, profile version, rule-set hash, exporter version, build ID, timestamp, and source release ID/reference
- And the log lists every exported file with path, size, MIME type, and checksum (e.g., SHA-256)
- And the log records each produced archive filename and checksum
- And the log is retrievable via UI and API for audit purposes
Validation Feedback and Failure Handling
- Given packaging validation detects any rule violation during preflight or export, When the export is attempted, Then the affected profile’s export is marked Failed and no noncompliant package is produced
- And the user is shown a structured error report referencing the violated rule(s) and specific asset(s)
- And actionable remediation hints are provided (e.g., expected naming pattern, required dimensions)
- And the batch run summary displays Pass/Fail per profile with links to detailed logs
Profile Selection & Granular Overrides
"As a project manager, I want to choose a profile and tweak allowed settings so that I can adhere to platform rules while fitting our internal workflow."
Description

Enable selection of a delivery profile at workspace, catalog, release, or per-export job level with clear precedence rules. Provide guarded, per-rule overrides where allowed by a platform (e.g., optional field inclusion, custom filename tokens), with real-time compliance checks to prevent breaking the base profile. Offer a diff view to compare overrides against the base profile, audit trails for who changed what and when, and the ability to clone/copy settings across releases.
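The precedence rule (export job > release > catalog > workspace) can be resolved with a simple narrowest-scope-wins walk; this is an illustrative sketch, and the function name and scope labels are assumptions.

```python
def effective_profile(workspace=None, catalog=None, release=None, job=None):
    """Resolve the effective delivery profile for an export job.
    Narrowest scope wins: export job > release > catalog > workspace.
    Returns (profile, scope_label) so the UI can show where it came from."""
    for scope, profile in (("export job", job), ("release", release),
                           ("catalog", catalog), ("workspace", workspace)):
        if profile is not None:
            return profile, scope
    # No profile at any scope: per the criteria below, the job is blocked.
    raise ValueError("no delivery profile selected at any scope")
```

Returning the scope label alongside the profile supports the requirement that the UI display both the effective profile and its source scope.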

Acceptance Criteria
Profile Selection Precedence Across Workspace→Catalog→Release→Export Job
- Given a workspace default profile W, a catalog-level profile C, a release-level profile R, and an export job-level profile E, When an export job is initiated, Then the effective profile is E if set, else R if set, else C if set, else W if set; otherwise the job is blocked with a prompt to select a profile
- And the UI displays the effective profile and its source scope label (Workspace, Catalog, Release, Export Job)
- And changing a higher-scope profile updates the effective profile within inherited scopes unless a lower-scope selection exists
Guarded Per-Rule Overrides with Real-Time Compliance
- Given a selected delivery profile with rules marked Overridable or Locked and constraints defined (e.g., allowed filename tokens, required metadata fields, value ranges), When the user edits an Overridable rule within allowed bounds, Then the override is saved, visually flagged as Override, and Preflight shows 0 blocking errors for that rule
- When the user attempts to edit a Locked rule or violates a constraint, Then the change is prevented, an inline error appears with rule id and reason, and Save/Export is disabled until the error is resolved
- And the validator runs on each change and updates pass/fail counts in the Preflight panel within 1 second
Diff View Comparing Overrides to Base Profile
- Given overrides exist at any scope for a selected profile, When the user opens the Diff view, Then differences are listed per rule with base vs effective values and change type (added/removed/modified)
- And filters allow scoping by category: folder structure, naming schema, metadata toggles, and other rule types
- And selecting a diff item focuses the corresponding rule editor
- And a Clear Override action is available per rule and a Clear All Overrides action per scope
Audit Trail for Profile Selection and Overrides
- Given any change to profile selection or rule overrides at any scope, When the change is saved, Then an audit entry is recorded containing actor, scope (workspace/catalog/release/export job), timestamp (UTC), action, and before/after values
- And audit entries are immutable, sorted chronologically, and filterable by scope, action, and user
- And authorized users can view the audit trail within both workspace and release contexts
Clone/Copy Profile Settings Across Releases
- Given a source release with a selected profile and overrides, When the user triggers Clone Settings and selects one or more target releases, Then a confirmation summary shows the items to be cloned and the number of targets
- And upon confirmation, the selected profile and all allowed overrides are copied to each target release and replace any existing release-level profile/overrides on the target
- And the operation is transactional per target: either all release-level settings are updated or none, with an error surfaced per failed target
- And a completion report lists successes and failures per target, and audit entries are created for each target updated
Per-Export Job Overrides Do Not Persist
- Given a release with an effective profile, When the user sets overrides within an Export Job and completes or cancels the job, Then those overrides affect only that job’s effective profile and do not modify release, catalog, or workspace settings
- And the job execution summary displays the effective profile and overrides used
- And an audit entry is recorded for the export job context only
Reset Overrides to Base Profile
- Given overrides exist at the current scope, When the user selects Reset to Base and confirms, Then all overrides at that scope are removed and the effective profile is inherited from the next higher scope
- And the UI updates to show zero overrides at that scope
- And the action is recorded in the audit trail
Profile Editor & Governance Workflow
"As a head of operations, I want a governed way to author and publish profiles so that changes are accurate, auditable, and safe to roll out."
Description

Deliver an internal admin editor for creating and updating delivery profiles using a JSON schema-backed model with validation, unit tests, and preview outputs. Enforce role-based access control, draft/staging environments, peer review, and publish approvals. Support import/export of profiles (JSON/YAML), localization of field labels and help text, and citation of vendor sources. On publish, run regression tests against representative sample releases to detect breaking changes before rollout.
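Draft validation with JSONPath-style error reporting could look roughly like this hand-rolled sketch; a real implementation would likely use a full JSON Schema library, and the required keys and message formats here are assumptions for illustration.

```python
# Hypothetical minimal schema: key name -> expected Python type.
REQUIRED_KEYS = {"id": str, "version": str, "required_fields": list}

def validate_draft(profile: dict) -> list:
    """Return schema errors as '$.path: message' strings; an empty
    list means the draft is valid and Save can be enabled."""
    errors = []
    for key, expected in REQUIRED_KEYS.items():
        if key not in profile:
            errors.append(f"$.{key}: missing required key")
        elif not isinstance(profile[key], expected):
            errors.append(f"$.{key}: expected {expected.__name__}, "
                          f"got {type(profile[key]).__name__}")
    # Version must be a well-formed MAJOR.MINOR.PATCH semantic version.
    if isinstance(profile.get("version"), str):
        parts = profile["version"].split(".")
        if len(parts) != 3 or not all(p.isdigit() for p in parts):
            errors.append("$.version: expected semantic version MAJOR.MINOR.PATCH")
    return errors
```

Surfacing the offending path with each message is what lets the editor show field-level errors and disable Save, as the acceptance criteria below require.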

Acceptance Criteria
Create Profile with JSON Schema Validation and Live Preview
- Given I have Editor or Admin role and open the Profile Editor, when I enter profile JSON that violates the schema, then the Save action is disabled and field-level errors show the JSONPath, expected constraint, and offending value.
- Given the profile JSON conforms to the schema, when I click Save, then a new Draft is created with a unique semantic version suffix "-draft" and a live preview renders folder structure, file naming patterns, and metadata toggles within 2 seconds.
- Given a Draft is open, when I modify the JSON and click Save, then the preview updates within 1 second and the Draft version is incremented.
- Given the CI pipeline runs, when the unit test suite executes, then schema validator tests pass with >=90% line coverage and 0 failures.
Role-Based Access Control for Profile Editor
Given a Viewer, when accessing Profile Editor, then they can view Published profiles and diffs but cannot create, edit, import, export, submit, approve, publish, or rollback. Given an Editor, when accessing Profile Editor, then they can create/edit Drafts and submit for review but cannot approve/publish or edit Published versions. Given an Approver, when a Draft is In Review, then they can approve, request changes, or promote to Staging but cannot edit profile content unless they also have Editor. Given an Admin, when managing profiles, then they can assign roles and perform emergency rollback with audit entries. Given any access attempt, when executed, then the event is logged with user ID, role, action, target profile ID, timestamp, and result and remains queryable for at least 90 days.
Draft, Review, Staging, and Publish Workflow Governance
Given a Draft exists, when an Editor submits for review, then status changes to In Review and a machine-readable diff versus the last Published version is stored. Given an In Review Draft, when an Approver different from the last editor approves, then the profile is deployed to Staging and becomes available only to staging environments. Given a profile is in Staging, when an Approver publishes, then it becomes the sole active Published version and all new exports reference it within 60 seconds. Given an In Review Draft, when an Approver requests changes, then it returns to Draft with mandatory reviewer comments and notifies the last editor. Given an invalid transition (e.g., Draft -> Published), when attempted, then the action is blocked with a descriptive error. Given any transition, when completed, then an immutable audit record (who, when, from->to, reason, diff hash) is stored.
Profile Import/Export with Validation and Round-Trip Equivalence
Given I import a profile file in JSON or YAML, when the content violates the schema, then the import is rejected with a list of errors including JSONPath (JSON) or line/column (YAML). Given the file is valid, when imported, then a Draft is created preserving citation data and localization keys, and the preview matches the imported structure. Given I export a profile, when I choose JSON or YAML, then the output is deterministic, includes profileId, schemaVersion, and a SHA-256 checksum, and re-importing the exported file reproduces an equivalent profile (excluding system fields like timestamps).
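The deterministic-output and round-trip requirements can be illustrated with sorted-key JSON serialization plus an embedded SHA-256 checksum. The envelope shape ({checksum, profile}) and the excluded system fields are assumptions for this sketch:

```python
import hashlib
import json

SYSTEM_FIELDS = ("createdAt", "updatedAt")  # assumed names for excluded timestamps

def export_profile(profile: dict) -> str:
    """Deterministic export: sorted keys and fixed separators make the byte
    stream stable, so the embedded checksum is reproducible across runs."""
    body = {k: v for k, v in profile.items() if k not in SYSTEM_FIELDS}
    payload = json.dumps(body, sort_keys=True, separators=(",", ":"), ensure_ascii=False)
    checksum = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return json.dumps({"checksum": checksum, "profile": body},
                      sort_keys=True, separators=(",", ":"), ensure_ascii=False)

def import_profile(blob: str) -> dict:
    """Verify the checksum, then return the profile (a Draft in the workflow above)."""
    doc = json.loads(blob)
    payload = json.dumps(doc["profile"], sort_keys=True, separators=(",", ":"),
                         ensure_ascii=False)
    if hashlib.sha256(payload.encode("utf-8")).hexdigest() != doc["checksum"]:
        raise ValueError("checksum mismatch")
    return doc["profile"]
```

Re-importing an exported blob reproduces an equivalent profile minus system fields, which is exactly the round-trip equivalence the criterion asks for.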
Localization of Field Labels and Help Text in Editor and Preview
Given I open the Localization editor, when I provide translations for required keys in en, es, and fr, then Save is allowed only when required keys are complete; missing keys are flagged per-locale. Given any localized string uses ICU MessageFormat, when syntax is invalid, then Save is blocked with an error pointing to the key and position. Given I switch the preview locale, when rendering the profile preview, then all field labels and help text display in the selected locale with fallback to default for missing keys.
Vendor Source Citations Enforcement and Visibility
Given a rule or field definition is added or modified, when submitting for review, then a vendor source citation (sourceTitle, sourceURL, sourceVersionOrDate) is required and validated; otherwise submission is blocked. Given a citation URL, when validated, then the system confirms HTTP 200 within 5 seconds and stores the URL and retrieval timestamp; failures block submission with a retry option. Given a profile is exported or reviewed, when citations exist, then they are included in the export under a citations section and are visible in the UI with clickable links.
Publish Gate with Automated Regression Tests on Sample Releases
Given a profile is in Staging, when an Approver clicks Publish, then automated regression tests run against at least 10 representative sample releases per targeted distributor/platform before publishing proceeds. Given regression execution completes, when results include any critical rule failures or any blocking warnings, then the publish is blocked and a downloadable report (per-sample pass/fail, violated rules, remediation hints) is attached to the review. Given all blocking tests pass, when Publish is confirmed, then the profile is published and the test results are archived with the release record. Given tests exceed a 10-minute timeout, when publishing, then the publish is canceled with a timeout status and can be retried.
Delivery Links Bound to Profile Exports
"As an artist manager, I want review links tied to compliant exports so that reviewers and distributors see the exact files we intend to deliver and I can track engagement."
Description

Integrate profile-based exports with IndieVault’s watermarkable, expiring review links to ensure recipients access the exact compliant package tied to a specific profile version. Each link references a single export artifact, carries immutable export metadata (profile ID/version, checksums), and records per-recipient analytics (opens, downloads). Support superseding links when a profile update triggers a new export, and clearly mark older links as out-of-date to prevent accidental use.

Acceptance Criteria
Create Review Link Bound to Profile Export
Given a Delivery Profile export artifact exists with profileId and profileVersion When a user creates a review link selecting that export Then the link persists exportArtifactId referencing exactly that artifact And the link serves files only from that artifact snapshot And subsequent changes to library assets or profile do not alter the served files And fetching link details returns exportArtifactId, profileId, and profileVersion matching the artifact
Immutable Export Metadata Attached to Link
Given an export contains metadata including profileId, profileVersion, file list with sizes, and SHA-256 checksums When a review link is created for that export Then the link stores an immutable copy of the export metadata And the UI and API expose this metadata in read-only form And attempts to modify profileId, profileVersion, or file checksums via UI or API are rejected with a validation error And an audit log entry records creation with the stored metadata hash
Per-Recipient Watermark and Analytics
Given a review link supports issuing recipient-specific access When the owner sends access to multiple recipients Then each downloaded audio file is watermarked with the recipient identifier And the system records per-recipient events: open timestamps and counts, download timestamps and counts And analytics are viewable per recipient and aggregated per link via UI and API And a watermark verification check on a sampled file resolves to the correct recipient identifier
Link Expiration and Access Control
Given a review link has an expiration datetime and optional max downloads per recipient When a recipient accesses the link before expiration and within limits Then file previews and downloads are available And when accessed after expiration or after limits are exceeded Then the API returns 410 Gone for file requests and the UI displays Link expired And no files can be downloaded after expiration or limit breach And extending the expiry updates the expiration datetime without changing the bound export artifact
Supersede Link on Profile Update
Given a Delivery Profile is updated to a newer version that requires re-export And a new export artifact is generated for the same release using the new profileVersion When the owner chooses to supersede an existing review link Then the system issues a new review link bound to the new export artifact And the original link status becomes Superseded and is no longer downloadable And the new link displays the current profileVersion and export metadata
Out-of-Date Link Labeling and Prevention
Given a review link has been superseded by a newer export for the same release and profile When a recipient visits the superseded link Then the UI shows an Out-of-date notice with the superseded timestamp and disables all downloads And the API returns 409 with code LINK_SUPERSEDED for download attempts And analytics record the visit with a superseded_view flag And the owner dashboard displays a reference to the superseding link

One‑Click Fixes

Turn red flags into green in seconds. Auto‑correct common failures with safe, reversible actions: normalize loudness to target LUFS with true‑peak guard, conform filenames to schema, standardize artwork color profile/size, pad/format ISRCs, and map roles to accepted labels. Review a change log, apply, or undo—no DAW reopen needed.

Requirements

LUFS Normalization with True‑Peak Guard
"As an indie producer, I want one-click loudness normalization with a true-peak guard so that my tracks meet platform specs without artifacts or DAW round-trips."
Description

Auto-analyzes integrated loudness and true-peak per track/stem and generates a new version normalized to a configurable LUFS target with a true-peak ceiling, preventing clipping without re-opening a DAW. Supports batch processing across release folders, preserves sample rate/bit depth, writes loudness metadata, and logs deltas. Profiles per destination (e.g., streaming, YouTube, vinyl pre-master) are selectable, with safe, reversible processing and preview of expected gain changes. Integrates with IndieVault versioning, review links, and pre-flight checks so updated versions propagate to release bundles and analytics.
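The gain math behind target-LUFS normalization with a true-peak guard is plain dB arithmetic: applying a gain shifts both integrated loudness and true peak by the same number of decibels, so the limiter must engage whenever the gain needed to reach the target exceeds the peak headroom. A minimal sketch, with no audio I/O:

```python
def normalization_gain(measured_lufs: float, true_peak_dbtp: float,
                       target_lufs: float, ceiling_dbtp: float) -> tuple[float, bool]:
    """Gain (dB) to hit the target loudness, plus whether a true-peak
    limiter is needed because pure gain would push peaks past the ceiling."""
    desired = target_lufs - measured_lufs      # gain that lands exactly on target
    headroom = ceiling_dbtp - true_peak_dbtp   # max gain before peaks breach ceiling
    if desired <= headroom:
        return desired, False                  # transparent gain alone suffices
    return desired, True                       # apply gain, limiter catches peaks

# Example from the criteria: −11.2 LUFS / −0.3 dBTP, target −14 LUFS, ceiling −1 dBTP
gain, limiter = normalization_gain(-11.2, -0.3, -14.0, -1.0)
```

In the example, −2.8 dB of attenuation reaches the target and drops the true peak to −3.1 dBTP, comfortably under the ceiling, so no limiting is needed.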

Acceptance Criteria
Normalize Single Track to Target LUFS with True‑Peak Ceiling
Given a PCM audio file (mono or stereo) at 48 kHz/24‑bit with measured integrated loudness of −11.2 LUFS and true‑peak of −0.3 dBTP And a selected destination profile with target loudness −14.0 LUFS and true‑peak ceiling −1.0 dBTP When the user selects One‑Click Fixes > LUFS Normalization and clicks Apply Then the system analyzes, applies transparent gain and true‑peak limiting if needed, and creates a new version And the new version’s integrated loudness is −14.0 LUFS ±0.2 LU unless doing so would violate the true‑peak ceiling And the new version’s true‑peak is ≤ −1.0 dBTP and does not exceed 0 dBFS at any sample And the sample rate and bit depth match the source exactly (48 kHz/24‑bit preserved) And the original file remains unchanged and is linked as the parent of the new version
Batch Normalize All Tracks in a Release Folder
Given a release folder containing at least 10 mixed tracks and stems with varying loudness and true‑peak values And a selected profile with target −14.0 LUFS and ceiling −1.0 dBTP When the user clicks Apply to Folder Then each eligible audio file is processed and output as a new version in its respective asset, preserving channel count and format And any item already within ±0.2 LU of target and with true‑peak ≤ the ceiling is marked Skipped with a reason And a per‑item result report lists: before/after LUFS, before/after dBTP, applied gain (dB), limiter engaged (yes/no), outcome (Created/Skipped/Failed) And failures (e.g., unreadable file) are retried once and then reported with actionable error codes without halting other items
Apply and Manage Destination Loudness Profiles
Given destination profiles exist (e.g., Streaming, YouTube, Vinyl Pre‑Master) each defining a LUFS target and true‑peak ceiling When the user selects a profile for a job Then the job uses that profile’s target and ceiling for all calculations And the user can override target and ceiling per job before applying, within allowed ranges (e.g., −20 to −8 LUFS; −2.0 to −0.1 dBTP) And the selected profile name and any overrides are stored in the new version’s metadata and job log And the last‑used profile becomes the default for the next session for that workspace
Preserve Audio Format and Write Loudness Metadata
Given normalization completes for a track or stem Then the output version preserves the source sample rate, bit depth, and channel count exactly And the file container and project database are updated with: integrated LUFS, true‑peak dBTP, LRA, target LUFS, ceiling dBTP, applied gain dB, limiter usage (boolean), measurement standard (ITU‑R BS.1770‑4 or later), and processing timestamp And loudness and processing metadata are embedded in appropriate tags (e.g., BWF bext/iXML or ID3/TXXX) without overwriting existing non‑loudness tags And a delta log entry records before/after values and processing parameters for auditability
Preview Expected Gain and Provide Safe Undo/Redo with Change Log
Given a user selects one or more tracks and a destination profile When the preview is requested Then the UI displays per‑item expected gain change (dB), predicted post‑process LUFS and true‑peak, and whether limiting will engage And after Apply, a change log entry is created with user, time, items processed, and per‑item deltas And clicking Undo restores the previous version as current without data loss and updates all references accordingly And clicking Redo reapplies the normalization with the same parameters, producing the same checksummed output And all undo/redo actions are non‑destructive and reversible at least 20 steps deep within the project
Propagate Normalized Versions to Bundles, Review Links, and Pre‑Flight Checks
Given a track in active release bundles and review links is normalized creating a new version When the new version is marked current in IndieVault Then all bundles referencing the track update to the new version without breaking links And existing review links automatically serve the new version while preserving per‑recipient analytics continuity and history And pre‑flight checks for the release reflect updated LUFS/true‑peak status and turn green when within target tolerances And recipients with expiring links retain original expiration and watermark settings And the system records the propagation event in the audit trail
Filename Schema Conformance
"As a label manager, I want assets auto-renamed to our schema so that deliveries are consistent and nothing breaks in downstream handoffs."
Description

Automatically detects non-conforming asset filenames and renames them to a workspace- or release-level schema using tokens (artist, title, version, role, ISRC), locale-aware slug rules, and collision-safe de-duplication. Updates internal references, release bundle manifests, and review links to the new names, preserving link continuity and maintaining a redirect map for previous filenames. Enforces reserved character rules for all OS/DSPs, supports dry-run preview, and records changes in a reversible change log.
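The token-schema rename above can be sketched with stdlib transliteration and separator collapsing. One assumption, taken from the worked example in the criteria below: identifier tokens like ISRC drop their separators entirely ("US-ABC-24-00001" becomes "usabc2400001") rather than hyphenating them:

```python
import re
import unicodedata

def slug_token(value: str, sep: str = "-") -> str:
    """ASCII-fold, replace runs of non-alphanumerics, lower-case: 'Beyoncé' -> 'beyonce'."""
    folded = unicodedata.normalize("NFKD", value).encode("ascii", "ignore").decode()
    return re.sub(r"[^A-Za-z0-9]+", sep, folded).strip("-").lower()

def render_filename(schema: str, tokens: dict, ext: str) -> str:
    """Fill a token schema like '{artist}-{title}-{isrc}', preserving the extension."""
    # Assumption: the isrc token is compacted (no separators) per the example above.
    vals = {k: slug_token(v, "" if k == "isrc" else "-") for k, v in tokens.items()}
    name = re.sub(r"-{2,}", "-", schema.format(**vals)).strip("-")
    return f"{name}{ext}"
```

Because slugging is deterministic, re-running the rename on an already-conforming file produces zero changes, satisfying the idempotence criterion.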

Acceptance Criteria
Auto‑Rename Non‑Conforming Uploads to Workspace Schema
Given a workspace filename schema "{artist}-{title}-{version}-{role}-{isrc}", locale "en-US", and strict reserved-character enforcement And an asset named "01 Track_FINAL!!.wav" with metadata artist="Sunstone", title="Echoes", version="Mix1", role="Master", isrc="US-ABC-24-00001" When I run One‑Click Fixes > Filename Schema Conformance (Apply) Then the file is renamed to "sunstone-echoes-mix1-master-usabc2400001.wav" And the original extension ".wav" is preserved And no reserved/disallowed characters remain, consecutive separators are collapsed to a single hyphen, and leading/trailing separators are removed And the operation is idempotent: re-running produces zero further changes And the change is recorded in the change log with batch ID and before/after names
Locale‑Aware Slugging and Transliteration
Given workspace locale "es-ES" and schema "{artist}-{title}" And a file named "Beyoncé – Niño?.aiff" When conformance runs (Apply) Then the result is "beyonce-nino.aiff" (accents transliterated, punctuation normalized to hyphen, question mark removed) Given workspace locale "tr-TR" and schema "{title}" And a file titled "İstanbul.wav" When conformance runs (Apply) Then the result is "istanbul.wav" (locale-specific casing applied) And Unicode is normalized to NFC; output uses only [a-z0-9._-] plus extension; And results are deterministic and stable across runs
Collision‑Safe De‑Duplication
Given two different assets resolve to the same target filename "sunstone-echoes-master.wav" When conformance runs Then the first asset keeps "sunstone-echoes-master.wav" and the second becomes "sunstone-echoes-master-2.wav" And if a "-2" already exists, the next becomes "-3", continuing numerically without gaps within the batch And no files are overwritten; the rename operation is atomic per batch And all internal references are updated to point to the de-duplicated names
Update References and Preserve Review Links
Given existing review links and a release bundle manifest referencing "01 Track_FINAL!!.wav" When the asset is renamed to "sunstone-echoes-mix1-master-usabc2400001.wav" by conformance Then the manifest and all internal references are updated to the new filename And all existing review links continue to resolve via redirect to the renamed asset without URL changes And per-recipient analytics aggregate across pre- and post-rename events And the redirect map stores old→new mappings for at least 365 days and is queryable by admins
Dry‑Run Preview and Impact Summary
Given a selection of 120 assets with mixed filename conformity When I run conformance in Dry‑Run mode Then I see a preview mapping of old→new names for each asset, including any collision suffixes and rule rationales And the summary displays counts for renamed, unchanged, conflicts, and errors And no files, manifests, or links are modified in Dry‑Run And I can export the preview to CSV or JSON And choosing Apply within the same session executes exactly the previewed changes
Reversible Change Log and Undo
Given a completed conformance batch that renamed 50 assets When I invoke Undo on that batch Then all 50 assets revert to their prior filenames And all internal references, manifests, and redirects revert atomically with no broken links And the system records the undo as a new change-log entry linked to the original batch And if any prior name is now occupied, the undo fails safely with a detailed report and no partial reverts
Cross‑Platform/DSP Reserved Character & Length Compliance
Given assets with names containing reserved characters <>:"/\|?*, control characters, trailing periods/spaces, device names [CON, PRN, AUX, NUL, COM1, LPT1], or names exceeding 255 bytes When conformance runs Then output filenames: - replace/remove reserved and control characters per rules - trim trailing periods/spaces - avoid device names by suffixing "-file" - do not exceed 255 bytes including extension; if exceeded, truncate the basename safely while preserving uniqueness and the extension And resulting filenames comply with Windows, macOS, Linux, FAT/exFAT, and major DSP ingestion rules And the file’s MIME/extension mapping is preserved and validated
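The reserved-character, device-name, and 255-byte rules above can be sketched as a single sanitizer. The replacement character and the character-by-character truncation policy are assumptions of the sketch, not a specification:

```python
import re

RESERVED = re.compile(r'[<>:"/\\|?*\x00-\x1f]')  # Windows-reserved + control chars
DEVICE_NAMES = ({"CON", "PRN", "AUX", "NUL"}
                | {f"COM{i}" for i in range(1, 10)}
                | {f"LPT{i}" for i in range(1, 10)})

def sanitize(filename: str, max_bytes: int = 255) -> str:
    base, dot, ext = filename.rpartition(".")
    if not dot:                                   # no extension at all
        base, ext = filename, ""
    base = RESERVED.sub("-", base).rstrip(". ")   # strip reserved chars, trailing dots/spaces
    if base.upper() in DEVICE_NAMES:              # "CON.wav" etc. are invalid on Windows
        base += "-file"
    suffix = f".{ext}" if ext else ""
    # Truncate the basename, never the extension, to fit the byte budget (UTF-8)
    while base and len((base + suffix).encode("utf-8")) > max_bytes:
        base = base[:-1]
    return base + suffix
```

A real implementation would also re-check uniqueness after truncation, since shortening two long names can create a fresh collision.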
Artwork Standardization
"As an artist, I want my cover art standardized automatically so that it passes DSP checks and looks correct everywhere."
Description

Validates and transforms artwork to meet target specs: converts color profile to sRGB, conforms dimensions to configured square sizes (e.g., 3000x3000) with safe crop/pad options, normalizes resolution and format (JPEG/PNG), and enforces file size and color space constraints accepted by DSPs. Generates a new, versioned asset while preserving the original, embeds the correct ICC profile, and runs pre-flight checks for prohibited borders or text. Integrates with release folders and review links, showing side-by-side before/after previews and allowing instant undo.

Acceptance Criteria
sRGB Conversion and ICC Embedding
Given an uploaded artwork asset with any color profile or no embedded profile When the user runs One-Click Fixes > Artwork Standardization with color space target set to sRGB IEC 61966-2-1 Then the output asset's color space is sRGB IEC 61966-2-1 and includes a single embedded ICC profile And no additional or conflicting color profiles remain in metadata And metadata validation reports ColorSpace = sRGB and ICCProfileName contains "sRGB IEC 61966-2-1" And a change log entry lists the color space conversion And the original asset remains unchanged And processing completes in <= 2 seconds for a <= 15 MB input on baseline hardware
Square Dimension Conformance (Crop/Pad/Upscale)
Given a configured target dimension of 3000x3000 px and a chosen strategy of Crop or Pad When Artwork Standardization is executed on an image of arbitrary size and aspect ratio Then the output image dimensions are exactly 3000x3000 px And aspect ratio is preserved prior to the chosen Crop or Pad operation (no stretching) And if Crop is chosen, a centered square crop is applied And if Pad is chosen, canvas is extended using edge-reflection padding (no solid color bars) And if the shortest side is < 3000 px, the image is upscaled using Lanczos resampling before crop/pad And all applied operations are recorded in the change log
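The centered-crop path above reduces to pure geometry, assuming the pipeline scales the short side to the target (Lanczos resampling in production, per the criterion) before cropping the long side symmetrically:

```python
def conform_square(width: int, height: int, target: int = 3000) -> dict:
    """Plan the resize + centered square crop; no image library involved.
    Scaling the SHORT side to the target also handles the upscale case."""
    scale = target / min(width, height)
    new_w, new_h = round(width * scale), round(height * scale)
    left = (new_w - target) // 2                  # symmetric crop of the long side
    top = (new_h - target) // 2
    return {"resize": (new_w, new_h),
            "crop_box": (left, top, left + target, top + target)}
```

For a 4000×3000 source the plan is no resize and a crop box of (500, 0, 3500, 3000); a 1500×1500 source is upscaled to 3000×3000 with a no-op crop.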
Format, Transparency, Resolution, and File Size Enforcement
Given allowed formats are JPEG and PNG, maxFileSizeMB = 10, and targetResolutionPPI = 300 When Artwork Standardization runs Then if source contains any transparency, the output format is PNG; otherwise the output is JPEG And JPEG quality is auto-adjusted to keep file size <= 10 MB with a lower bound quality of 75 And if size cannot be reduced below 10 MB at quality 75, the action fails with a clear error and no overwrite occurs And output metadata resolution/PPI is set to 300 And the final file size and format comply with configured limits and are listed in the change log
Prohibited Borders/Text Pre-flight Checks
Given pre-flight validators for border bars and text overlays are enabled When the standardization preview is generated Then detection of solid border areas >= 3% of the image width on any side triggers a Blocker And OCR-detected text covering > 1% of pixels or containing banned terms triggers a Blocker And the UI lists each issue with location overlays and descriptions And Apply is disabled until issues are resolved or a user with ArtworkPolicyBypass permission explicitly overrides And if no issues are detected, the pre-flight status is Pass and Apply is enabled
Versioning, Original Preservation, and Instant Undo
Given an original artwork asset is in a release folder When the user clicks Apply on the Artwork Standardization preview Then a new versioned asset is created (filename suffixed with v+1) and set as current, while the original remains unchanged And a detailed change log (operations, parameters, timestamps, actor) is attached to the new version And Undo reverts the current pointer to the previous version in <= 1 second without deleting any versions And the asset history shows both versions with correct lineage and metadata And an audit event is recorded for Apply and Undo
Preview and Release/Review Link Integration
Given a release references the current artwork and at least one active review link exists When the standardization preview is opened and the user applies the fixes Then a side-by-side before/after preview with 1:1 zoom and A/B toggle is available prior to Apply And upon Apply, the release's artwork reference updates to the new version And existing review links configured to use "current artwork" display the updated artwork within 60 seconds And analytics record an ArtworkUpdated event with old/new version IDs and link IDs impacted And Undo reverts the reference and review links reflect the previous artwork within 60 seconds
ISRC Format Correction
"As a self-releasing artist, I want my ISRCs auto-corrected so that my deliveries are accepted without manual rework."
Description

Validates and normalizes ISRCs to the canonical 12-character structure, uppercases country/registrant codes, pads numeric segments, removes illegal characters/spaces, and flags duplicates. Applies fixes across all affected tracks and embeds corrected codes into file tags and IndieVault metadata. Provides per-item previews, reversible changes, and export-ready formatting for delivery sheets and DDEX. Integrates with pre-flight checks to block releases with malformed codes.

Acceptance Criteria
Canonical ISRC Normalization on Ingest
Given a track's ISRC input contains lowercase letters, spaces, hyphens, or extra characters When the user runs One-Click Fixes for ISRC Format Correction Then the ISRC is transformed to match ^[A-Z]{2}[A-Z0-9]{3}[0-9]{2}[0-9]{5}$ (12 characters) by uppercasing letters, removing separators/illegal characters, and left-padding numeric segments as needed And if normalization cannot produce a code matching the regex, the item is marked Unrecoverable-ISRC with an explicit reason and no change is applied And a per-item before/after preview is displayed for review prior to applying
Bulk Auto-Correction Across Release
Given a release with multiple tracks containing malformed ISRCs When the user selects Apply Fixes at the release level Then normalization rules are applied consistently to all affected tracks and non-affected tracks remain unchanged And a single change log entry records each track's old and new ISRC values with timestamp and actor And the operation is atomic per track: on any embed failure for a track, its metadata and tags are rolled back and the error is surfaced
Duplicate ISRC Detection and Blocking
Given two or more tracks within the same release have identical ISRCs after normalization When validation runs (fixes or pre-flight) Then each duplicate is flagged with severity Blocker and includes references to the conflicting tracks And One-Click Fixes does not auto-generate or alter ISRCs to resolve duplicates And the release cannot proceed past pre-flight until duplicates are resolved by the user
Embed Corrected ISRC into Audio Files
Given a corrected ISRC for a track When the user applies fixes Then the ISRC is written to file tags appropriate to each format (e.g., MP3: ID3v2 TSRC; FLAC/OGG: Vorbis comment ISRC; M4A/MP4: iTunes custom atom '----:com.apple.iTunes:ISRC'); for formats without a standard ISRC field, no tag write is attempted and a warning is logged And reading the file back immediately after write returns the same ISRC value And IndieVault track metadata reflects the same ISRC value as embedded tags
Pre-Flight Gate for ISRC Validity
Given a release is submitted to pre-flight When validation runs Then any track with missing, malformed, or duplicate ISRC causes pre-flight status Fail with a list of offending tracks and reasons And a Fix with One-Click action is presented to correct eligible issues And after applying fixes, re-running pre-flight passes when all ISRCs are valid and unique
Delivery Sheet and DDEX Export Use Corrected ISRCs
Given a release with corrected ISRCs When the user exports a delivery sheet (CSV/XLSX) Then each track row contains the normalized 12-character ISRC with no separators and matches the track metadata When the user exports a DDEX package Then the generated XML validates against the configured DDEX schema and includes the ISRC for each sound recording in the expected element And exported identifiers match those embedded in file tags and stored in IndieVault metadata
Per-Item Preview and Undo of ISRC Fixes
Given a track has a proposed ISRC correction When the user opens the One-Click Fixes preview Then the UI shows the current value and the proposed normalized value side by side When the user applies the fix Then a change log entry is created with before/after values, timestamp, and actor When the user clicks Undo for that fix Then the ISRC in both IndieVault metadata and file tags reverts to the previous value and the audit trail captures the revert
Role Label Mapping
"As a project manager, I want contributor roles auto-mapped to standard labels so that credits are clean and delivery forms don’t get rejected."
Description

Maps free-form contributor roles to a curated set of canonical labels accepted by DSPs (e.g., Primary Artist, Featured Artist, Producer, Composer, Lyricist, Mixer, Mastering Engineer). Applies smart synonym and fuzzy matching, supports territory/DSP-specific label variants, and allows workspace-level custom dictionaries with audit trails. Updates credit metadata across tracks and bundles, offers preview and undo, and ensures exports and review pages display standardized roles.

Acceptance Criteria
Auto‑map role synonyms and near‑matches
Given a track or bundle contains free‑form contributor roles including synonyms, abbreviations, or misspellings When the user runs One‑Click Fixes → Role Label Mapping Then each role is mapped to a canonical label from the curated set using synonym and fuzzy matching with a confidence threshold of >= 0.85 And roles scoring < 0.85 remain unmapped and are flagged for review And a change log records original value, mapped label, confidence score, timestamp, and actor And non‑role metadata remains unchanged
Apply DSP/territory‑specific role label variants on export
Given the workspace has a default territory and the user selects a DSP and territory for export When an export is generated Then standardized role labels are transformed to that DSP’s accepted variants for the selected territory And if a DSP/territory variant is unavailable, the canonical label is used And the export validation reports 0 role‑label schema errors And the export preview displays the exact labels that will be delivered
Manage custom role dictionary with audit trail
Given a workspace admin opens Role Dictionary settings When they add, edit, or delete a custom synonym mapping to a canonical label Then the change requires a reason note and is saved with user, timestamp, old/new values, and version ID And any subsequent mapping run uses the updated dictionary And non‑admin users cannot modify the dictionary And admins can revert to any prior version and the revert is logged
Propagate standardized roles across tracks and bundles
Given a release bundle contains multiple tracks sharing contributors When the user applies approved role mappings Then all tracks and associated bundles reflect the standardized role labels consistently And duplicate contributor‑role pairs are de‑duplicated without data loss And per‑track role exceptions remain intact And the system reports a summary of items updated and items skipped, with reasons And the entire operation is recorded as a single atomic action in the audit log
Preview, apply, and undo role label changes
Given pending role mappings are available When the user opens the change log Then a side‑by‑side diff shows original vs standardized label for each item And the user can accept all, accept per item, or reject per item before applying And after apply, an Undo action restores all affected fields to their prior values And the audit log captures review, apply, and undo events with timestamps and actor
Display standardized roles in UI and review links
Given assets have been standardized or flagged as unmapped When a user views credits in the app or generates a watermarkable review link Then all mapped contributors display canonical/variant labels consistently across the UI and review pages And search and filters use canonical labels regardless of variant displayed And any unmapped roles display an “Unmapped” tag and a Fix Roles call‑to‑action
Resolve ambiguous or unmapped roles with assisted selection
Given one or more roles cannot be confidently mapped or have multiple candidates below threshold When the user runs Role Label Mapping Then the system presents candidate canonical labels with confidence scores and examples for each ambiguous role And the user’s selection resolves the role immediately and optionally saves it to the custom dictionary And after resolution, the mapping coverage percentage is shown; if coverage < 100%, remaining items are listed for manual follow‑up
Change Log with Apply/Undo
"As a busy indie team, I want a single place to review and apply fixes with guaranteed reversibility so that I can ship quickly without risk."
Description

Presents a consolidated change log of detected issues and proposed auto-fixes across audio, filenames, artwork, IDs, and credits, allowing a single Apply action or selective application per item. Executes changes as atomic transactions with full versioning, per-change diffs, and one-click undo/redo, and records all actions to an audit log with user, time, and reason. Integrates with permissions, notifications, and review links so recipients see updated assets while previous versions remain recoverable.

Acceptance Criteria
Consolidated Change Log Across Asset Types
Given a release with issues across audio, filenames, artwork, IDs, and credits, when the change log loads, then items are grouped by asset type with counts and severities visible. Given up to 500 items exist, when the change log loads on a broadband connection, then the first render completes in ≤ 2 seconds. Given filters for type, severity, and status are available, when the user applies them, then only matching items are shown and group counts update in real time. Given a rescan completes, when the log is refreshed, then new issues appear and resolved items are removed within 5 seconds. Given an item is expanded, when details are shown, then the proposed auto-fix summary and estimated impact are displayed.
Apply All Executes Selected Fixes Atomically
Given the user has permission to Apply and N items are selected (N ≥ 1), when Apply All is clicked, then a single transaction ID is created and either all changes commit or none commit. Given any sub-operation fails, when the process completes, then the system performs a full rollback to the pre-transaction state and displays an error summary including the failing items. Given the transaction succeeds, when complete, then selected items are marked Applied, affected assets receive new version numbers, and a success message shows the count applied. Given up to 200 items are selected, when Apply All is executed, then the transaction completes in ≤ 30 seconds under normal load. Given the user lacks Apply permission, when they attempt to Apply All, then the action is disabled and a tooltip explains the required role.
Selective Per-Item Apply With Dependency Checks
Given an item has unmet dependencies (e.g., filename schema depends on ISRC fix), when the item is selected, then its Apply button is disabled and a tooltip lists required predecessor fixes. Given an item has no unmet dependencies, when Apply is clicked, then a preflight validation runs and the change is applied only if it passes. Given a per-item apply succeeds, when complete, then dependent items are recalculated and their statuses update within 3 seconds. Given a per-item apply fails validation, when attempted, then no change is made and a blocking error with remediation is shown.
Per-Change Diffs and Versioning
Given an item is selected, when viewing the diff, then before/after values are shown with units (audio: LUFS and dBTP; filenames: old vs new; artwork: dimensions and color profile; IDs/credits: normalized formats/labels). Given a change is applied, when the transaction commits, then a new immutable asset version is created with a version ID and timestamp, and the prior version remains retrievable. Given the user opens a diff of an applied item, when requested, then a link to view the prior version and its diff is available.
One-Click Undo/Redo of Transactions
Given a completed transaction exists, when Undo is clicked, then all changes in that transaction are reverted in a single inverse transaction with its own ID. Given an undone transaction, when Redo is clicked, then the original changes are re-applied atomically. Given a conflicting modification has occurred after the original transaction, when Undo is requested, then the system blocks the Undo with a clear conflict message and no partial changes are made. Given Undo or Redo completes, when finished, then affected items and versions reflect the reverted/applied state and the change log updates within 5 seconds.
Audit Logging With User, Time, Reason, and Permissions
Given Apply, Undo, or Redo is initiated, when prompted, then the user must enter a reason (min 5 characters) before proceeding unless an admin has marked the reason as optional. Given a transaction completes (success or failure), when logging, then an immutable audit record is stored containing user ID, timestamp (UTC), reason, transaction ID, items affected, before/after hashes, and outcome. Given a user without Audit-View permission attempts to open the audit log, when requested, then access is denied and no record contents are leaked. Given an authorized user opens the audit log, when exporting, then records can be exported to JSON or CSV and include a cryptographic checksum.
Review Links and Notifications Reflect Updates
Given active review links exist, when a transaction applies updates, then recipients accessing those links see the latest asset versions by default with an Updated badge and can toggle to a prior version if allowed by link settings. Given link expiration and watermarking settings are configured, when updates are applied, then those settings persist unchanged for the updated versions. Given notifications are enabled for a project, when Apply, Undo, or Redo completes, then subscribers receive a notification containing the transaction summary and a link to the change log; users without permission do not receive notifications. Given review link analytics are enabled, when recipients view assets post-update, then analytics capture the asset version viewed and per-recipient events. Given a transaction completes, when review links are refreshed, then updates propagate to link viewers within 60 seconds.
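The all-or-nothing apply and one-click undo semantics above can be sketched as a snapshot-based transaction. This is a minimal in-memory illustration with hypothetical names (`ChangeLogTransaction`, `apply_all`); a real system would create immutable asset versions and compute inverse diffs rather than restoring a global snapshot.

```python
import copy
import uuid

class ChangeLogTransaction:
    """All-or-nothing apply with undo, sketched over an in-memory asset store."""
    def __init__(self, store):
        self.store = store    # dict: asset_id -> dict of asset fields
        self.history = {}     # txn_id -> pre-transaction snapshot

    def apply_all(self, fixes):
        """fixes: list of (asset_id, field, new_value). Commits all or none."""
        txn_id = str(uuid.uuid4())
        snapshot = copy.deepcopy(self.store)
        try:
            for asset_id, field, new_value in fixes:
                if asset_id not in self.store:
                    raise KeyError(f"unknown asset {asset_id}")
                self.store[asset_id][field] = new_value
        except Exception:
            # Any sub-operation failure triggers a full rollback.
            self.store.clear()
            self.store.update(snapshot)
            raise
        self.history[txn_id] = snapshot
        return txn_id

    def undo(self, txn_id):
        """Revert every change in the transaction as a single inverse step."""
        snapshot = self.history.pop(txn_id)
        self.store.clear()
        self.store.update(snapshot)
```

The snapshot restore is a simplification; with versioned assets, undo would instead promote the prior version IDs recorded in the transaction.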

Credit Guard

DDEX‑aware credit validation that catches missing roles, inconsistent spellings, and misattributed contributors. Cross‑checks ISNI/IPI where available, enforces primary/featuring consistency across tracks, and suggests merges from a reusable roster dictionary. Prevents distributor rejections and protects proper attribution.

Requirements

DDEX Role Schema Enforcement
"As a label manager, I want automatic DDEX role validation so that I can prevent missing or incorrect credits from causing distributor rejections."
Description

Validate track- and release-level credits against DDEX ERN role and cardinality rules, ensuring all required roles (e.g., MainArtist, FeaturedArtist, Composer, Producer) are present, correctly scoped, and free of contradictions. Provide real-time, inline errors and auto-fix suggestions within IndieVault’s credit editor, leveraging the existing asset model and versioning. Block export on errors, allow override with justification on warnings, and surface a consolidated preflight report per release to reduce distributor rejections and missed attributions.

Acceptance Criteria
Real-time Track-Level DDEX Role Cardinality Validation
Given I am editing a track’s credits in the credit editor, When any credit field is changed or blurred, Then the system validates against DDEX ERN role and cardinality rules within 300 ms. Given validation runs, Then required track-level roles meet their min/max as per DDEX (e.g., MainArtist ≥1, Composer ≥1, Producer ≥1); violations are Errors on the affected fields and the track header. Then each Error message includes the role name, scope=Track, DDEX rule reference, and at least one suggested fix action. Then the track’s validation state updates the sidebar counts of Errors/Warnings in real time.
Release-Level vs Track-Level Role Scope Consistency
Given a release with multiple tracks and defined release-level MainArtist/FeaturedArtist roles, When validation runs, Then each track includes all release-level MainArtists; missing instances are flagged as Errors with suggested add to tracks. Then any release-level FeaturedArtist appears as FeaturedArtist on at least one track; otherwise a Warning suggests adding to applicable tracks or removing from release-level. Then any track-level FeaturedArtist not present at release-level triggers a Warning suggesting add at release-level or justification to keep track-only. Then contradictions where the same person is both MainArtist and FeaturedArtist in the same scope are flagged as Errors with suggested promotion/demotion to resolve.
Contributor Identity and Identifier Validation
Given a contributor has ISNI or IPI entered, When the field is saved or blurred, Then the identifier format and checksum validate; invalid values become Warnings with guidance to correct. Then duplicate entries of the same person with conflicting identifiers in the same scope are flagged as Errors with a suggested merge into one contributor. Then similar names that share the same identifier across tracks produce a Warning suggesting a merge using the roster dictionary with a one-click proposal. Then any override for identifier conflicts requires a justification of at least 10 characters and is stored with the credit record.
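The identifier checksum validation above is concrete for ISNI: its check character is computed with ISO 7064 MOD 11,2 over the first 15 digits (the same scheme ORCID uses). A sketch follows; IPI numbers use a different scheme and are deliberately omitted, and the helper names are illustrative.

```python
def isni_check_digit(digits15: str) -> str:
    """ISO 7064 MOD 11,2 check character over the first 15 ISNI digits."""
    total = 0
    for d in digits15:
        total = (total + int(d)) * 2
    check = (12 - total % 11) % 11
    return "X" if check == 10 else str(check)

def is_valid_isni(isni: str) -> bool:
    """Validate format (16 chars, last may be 'X') and checksum of an ISNI."""
    compact = isni.replace(" ", "").replace("-", "").upper()
    if len(compact) != 16 or not compact[:15].isdigit():
        return False
    if not (compact[15].isdigit() or compact[15] == "X"):
        return False
    return isni_check_digit(compact[:15]) == compact[15]
```

A value failing this check would surface as a Warning with guidance to correct, per the criterion above.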
Inline Error Presentation and Accessibility
Given one or more Errors or Warnings exist, When a user focuses a field with an issue, Then an inline message appears within 100 ms with icon and text, associated to the field via aria-describedby for screen readers. Then error state color contrast is ≥ 4.5:1 and the input error outline is ≥ 2 px, meeting WCAG AA. Then the editor shows an error summary banner listing all issues with deep links; activating a link focuses the target field within 300 ms. Then keyboard-only users can navigate to, act on, and dismiss messages without focus traps.
Preflight Report and Export Blocking
Given a user initiates Export for a release, When preflight runs, Then a consolidated report is generated per release summarizing Errors and Warnings by track and role with counts and details. Then export is blocked if Errors > 0; the Export action is disabled and explains the blocking issues with a link to open the report. Then if only Warnings remain, the user may proceed only after entering a mandatory justification (≥ 10 characters); the justification is recorded with the export record and displayed in the report. Then preflight generation completes within 5 seconds for releases up to 100 tracks and is downloadable as JSON and PDF.
Auto-Fix Suggestions and Versioned Apply
Given violations such as missing required roles, role scope mismatches, or inconsistent spellings are detected, When the user opens Suggestions, Then the system proposes deterministic auto-fixes (e.g., add missing Composer from roster, normalize artist spelling to canonical, promote/demote Featured/MainArtist to correct scope) with per-change previews. Then clicking Apply All applies the selected fixes atomically, creates a new credit version, re-runs validation, and confirms success only if Errors = 0; otherwise remaining issues are listed. Then each applied auto-fix records a change summary (user, timestamp, rule references, before→after) in version history and supports one-click rollback.
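The cardinality and contradiction checks in this requirement can be sketched as a pure validation function. The `REQUIRED_MIN` table below is a hypothetical simplification; the real DDEX ERN role and cardinality rules are considerably richer.

```python
# Hypothetical minimum-cardinality table; real DDEX ERN rules are richer.
REQUIRED_MIN = {"MainArtist": 1, "Composer": 1, "Producer": 1}

def validate_track_roles(credits):
    """credits: list of (contributor, role). Returns a list of error strings."""
    errors = []
    counts = {}
    for _, role in credits:
        counts[role] = counts.get(role, 0) + 1
    # Cardinality: each required role must meet its minimum count.
    for role, minimum in REQUIRED_MIN.items():
        if counts.get(role, 0) < minimum:
            errors.append(
                f"{role}: expected >= {minimum}, found {counts.get(role, 0)}"
            )
    # Contradiction: same person as both MainArtist and FeaturedArtist.
    mains = {c for c, r in credits if r == "MainArtist"}
    feats = {c for c, r in credits if r == "FeaturedArtist"}
    for person in mains & feats:
        errors.append(f"{person}: MainArtist and FeaturedArtist in the same scope")
    return errors
```

An empty result corresponds to "Errors = 0" and an unblocked export; each string would map to an inline Error with a rule reference in the editor.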
ISNI/IPI Identity Cross-Check
"As an artist manager, I want ISNI/IPI cross-checks so that contributor identities are verified and royalties are attributed to the correct people."
Description

Cross-reference contributor records against ISNI and IPI identifiers where available to confirm identity, detect duplicates, and flag mismatches between name strings and registered identifiers. Implement pluggable connectors for third-party/partner data sources, with caching, rate limiting, and retry policies. Store verified identifier links in IndieVault’s roster dictionary, display confidence levels, and prompt users to confirm matches. Fall back to local heuristics when external lookup is unavailable, and record provenance for auditability.

Acceptance Criteria
Verified ISNI/IPI Match and Roster Linkage
Given a contributor record with a name string and a supplied ISNI or IPI When the cross-check job runs Then all enabled connectors are queried with a per-connector timeout of 5s And if any connector returns an exact identifier match (same ISNI or IPI value) Then the identifier link is saved to the roster dictionary with fields: type, value, provider, provider_record_id, confidence >= 0.99, matched_name, queried_name, timestamp, cache_status And the contributor UI displays the identifier with a Verified badge and confidence percentage And an "identity.linked" event is emitted with contributor_id and identifier value And if the identifier matches but normalized name similarity < 0.6 Then display a "Name mismatch" warning and set review_status = "Needs Review" until user confirms
Multiple External Matches Requiring User Confirmation
Given a contributor record without a supplied identifier and a name search returns 2–10 candidates When the top candidate confidence < 0.9 and >= 0.6 Then present a review list showing candidate name, identifier, source, confidence, and key roles And require the user to choose Confirm, Reject, or Defer for each candidate And on Confirm, store the selected identifier link with status Verified, user_id, timestamp, and provenance = "UserConfirmed" And on Reject, blacklist the rejected candidate for this contributor for 90 days And on Defer, create a reminder task and do not link And after user action, refresh the roster view within 1s
Fallback to Local Heuristics on Provider Outage
Given all enabled connectors either time out (>=5s) or return 5xx/NetworkError When the cross-check job runs Then the system executes local heuristics: identifier format validation, normalized name and role comparison against roster, and alias matching And produces a heuristic confidence score between 0 and 1 with a rationale string And displays a banner "External lookup unavailable" on the review UI And does not mark any identifier as Verified without user confirmation And enqueues a retry job with exponential backoff (1m, 5m, 15m) up to 3 attempts And logs an "identity.lookup_degraded" event with error codes per connector
Connector Caching, Rate Limiting, and Retry Policy
Given a query for an identifier that was successfully resolved within the last 7 days When the cross-check job runs Then the result is served from cache and no external API call is made And the cache TTL is 7d for positive matches and 24h for negative/no-match responses And cache entries are invalidated immediately if a user manually edits the linked identifier Given a connector responds with HTTP 429 and Retry-After = 2s When the job retries Then it waits at least 2s with jitter and attempts up to 3 retries with backoff (2s, 4s, 8s) And total attempts per connector per job do not exceed 3 And if all retries fail, the job records a soft failure and proceeds with other connectors And repeated retries do not create duplicate identifier links or events (idempotency key per contributor+identifier)
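The retry policy above (honor Retry-After, exponential backoff with jitter, at most 3 retries, then a soft failure) can be sketched as follows. The exception and function names are hypothetical; `sleep` is injectable so the policy can be tested without waiting.

```python
import random
import time

class RateLimited(Exception):
    """Connector returned HTTP 429; carries the Retry-After hint in seconds."""
    def __init__(self, retry_after: float):
        self.retry_after = retry_after

class ConnectorError(Exception):
    """Timeout, 5xx, or network failure from a connector."""

class SoftFailure(Exception):
    """All retries exhausted; the job should proceed with other connectors."""

def call_with_retry(call, max_retries=3, base_delay=2.0, sleep=time.sleep):
    """Retry a connector call with exponential backoff (2s, 4s, 8s) and jitter,
    waiting at least the Retry-After hint on rate limits."""
    for attempt in range(max_retries + 1):  # one initial try + up to 3 retries
        try:
            return call()
        except RateLimited as exc:
            delay = max(exc.retry_after, base_delay * (2 ** attempt))
        except ConnectorError:
            delay = base_delay * (2 ** attempt)
        if attempt == max_retries:
            raise SoftFailure("connector exhausted retries")
        sleep(delay + random.uniform(0, 0.5))  # jitter avoids synchronized retries
```

Idempotency (one key per contributor+identifier) would sit above this helper so repeated retries cannot create duplicate links or events.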
Duplicate Detection and Merge Suggestion in Roster
Given two distinct roster entries each have a Verified ISNI or IPI with the same value When cross-check completes Then the system flags a potential duplicate with severity = High And creates a merge suggestion showing differing fields and proposed canonical values And shows a non-blocking warning on credit assignment for the affected entries And on user-approved merge, the system consolidates to one roster entry, preserves all aliases, updates foreign keys, and records a redirect from the losing entry And emits "identity.merge_suggested" and "identity.merged" events
Provenance and Audit Trail for Cross-Checks
Given any external lookup or heuristic evaluation is performed When the operation completes (success, partial, or failure) Then an immutable audit record is written with: contributor_id, inputs (name, identifier), connector name, request_id, cache_hit flag, response summary (identifier, name, confidence), algorithm_version, thresholds_used, user_decisions, and timestamps And audit records are queryable via the audit API by date range, contributor_id, identifier, connector, and outcome And exporting audit records as JSONL includes all fields and redacts secrets And audit records are retained for at least 24 months
Pluggable Connector Interface Compliance
Given a new connector module implementing the IIdentityConnector v1 interface with methods getByIdentifier and searchByName When the module is registered and enabled in settings Then the system discovers it at startup and includes it in the connector pool And its results are normalized to the internal schema and merged with other providers without duplication And if the connector throws errors on 3 consecutive calls, a circuit breaker opens for 5 minutes for that connector only And disabling the connector in settings removes it from subsequent lookups without code changes
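The per-connector circuit breaker described above (open after 3 consecutive failures, stay open for 5 minutes) can be sketched as a small state holder. Names are hypothetical, and the clock is injectable for testing.

```python
import time

class CircuitBreaker:
    """Opens after N consecutive failures; stays open for `cooldown` seconds."""
    def __init__(self, threshold=3, cooldown=300, clock=time.monotonic):
        self.threshold = threshold
        self.cooldown = cooldown
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        """True if a call to this connector should be attempted now."""
        if self.opened_at is None:
            return True
        if self.clock() - self.opened_at >= self.cooldown:
            # Cooldown elapsed: close the breaker and allow traffic again.
            self.opened_at, self.failures = None, 0
            return True
        return False

    def record(self, success: bool):
        """Report the outcome of a call; trips the breaker on repeated failure."""
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = self.clock()
```

One breaker instance per connector keeps a misbehaving provider from affecting the rest of the pool.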
Name Normalization & Alias Merge
"As a project coordinator, I want name normalization and alias suggestions so that repeated contributors are merged and spelled consistently across releases."
Description

Normalize contributor name strings (case, diacritics, punctuation, common particles) and apply fuzzy/dedup heuristics to suggest merges into a canonical roster entry. Maintain per-contributor alias lists and locale-specific name formats. Present side-by-side diffs, allow one-click merge with undo, and propagate canonical names across all associated assets and releases in IndieVault. Log merges to the audit trail and prevent regression by locking canonical names unless explicitly changed.

Acceptance Criteria
Normalization Key Generation
Given contributor names that differ only by case, diacritics, punctuation, or common particles When normalization keys are generated Then all such variants produce the same normalization key And the original input string is preserved for display until a canonical name is set And when locale metadata is present, locale-specific normalization rules are applied; otherwise default rules are applied And normalization is deterministic: identical input under the same ruleset version yields identical keys across runs
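The deterministic key generation above can be sketched with Unicode decomposition for diacritics, casefolding, and a particle filter. The `PARTICLES` set and the `v1` version prefix are hypothetical defaults; locale-specific rulesets would extend both.

```python
import re
import unicodedata

# Hypothetical default particle list; locale-specific rules would extend it.
PARTICLES = {"the", "a", "an", "de", "la", "van", "von"}

def normalization_key(name: str, ruleset_version: str = "v1") -> str:
    """Deterministic key: strip diacritics/punctuation, casefold, drop particles."""
    decomposed = unicodedata.normalize("NFKD", name)
    no_marks = "".join(c for c in decomposed if not unicodedata.combining(c))
    lowered = no_marks.casefold()
    tokens = re.findall(r"[a-z0-9]+", lowered)
    kept = [t for t in tokens if t not in PARTICLES]
    # Embedding the ruleset version makes key changes across versions explicit.
    return f"{ruleset_version}:" + " ".join(kept)
```

Because the key embeds the ruleset version, identical input under the same version always yields the same key, satisfying the determinism criterion.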
Alias Suggestions and Ranking
Given a newly entered or imported contributor name When the similarity engine evaluates existing roster entries Then up to 5 merge suggestions are displayed ranked by confidence (0.00–1.00) And only candidates with confidence ≥ 0.85 and at least one corroborating signal (token/phonetic match or shared ISNI/IPI) are shown And suggestions below the threshold are not displayed And selecting a suggestion opens its merge review
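The threshold-plus-corroborating-signal rule above could be implemented as follows. This is a sketch with hypothetical record shapes (dicts with `name` and optional `isni` keys) and a simple similarity ratio; production scoring would also use phonetic matching and the normalization keys.

```python
import difflib

def suggest_merges(candidate, roster, threshold=0.85, limit=5):
    """Rank roster entries as merge suggestions for a new contributor record.

    A suggestion requires confidence >= threshold AND a corroborating signal:
    a shared name token or a shared ISNI.
    """
    suggestions = []
    cand_tokens = set(candidate["name"].casefold().split())
    for entry in roster:
        score = difflib.SequenceMatcher(
            None, candidate["name"].casefold(), entry["name"].casefold()
        ).ratio()
        shared_token = bool(cand_tokens & set(entry["name"].casefold().split()))
        shared_id = candidate.get("isni") and candidate.get("isni") == entry.get("isni")
        if score >= threshold and (shared_token or shared_id):
            suggestions.append((entry["name"], round(score, 2)))
    return sorted(suggestions, key=lambda s: s[1], reverse=True)[:limit]
```

Candidates below the threshold, or lacking any corroborating signal, are simply not shown, matching the criterion above.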
Side-by-Side Diff for Merge Review
Given a suggested merge between a canonical contributor and a candidate alias When the merge review is opened Then a side-by-side diff shows Display Name, Normalized Key, Locales, Aliases, ISNI, IPI, Credit Count, and Affected Assets count with differences highlighted And the view shows the post-merge alias list preview (candidate name added) And the view provides primary actions: Merge and Dismiss
One-Click Merge, Undo, and Audit
Given a merge review for a candidate alias When the user clicks Merge Then the system consolidates records, adds the candidate string to the canonical's alias list, and keeps the canonical display name unchanged unless explicitly selected otherwise And all references in assets, tracks, credits, contracts, press kits, review links, and exported metadata are updated to the canonical contributor ID And an audit entry is written capturing user, timestamp, before/after IDs and names, alias added, and reason (if provided) And an Undo action is available for 30 minutes post-merge that fully reverts the merge and writes a reversal audit entry And re-running a completed merge (same source and target) is idempotent and results in no additional changes
Propagation Across Assets and Releases
Given a completed contributor merge that affects related records When propagation jobs execute Then 95% of reference updates complete within 10 minutes and 100% within 30 minutes for up to 10,000 affected references And no broken references occur; each reference points to either the prior or the new canonical at all times And exports (DDEX/CSV) initiated during propagation wait until propagation is complete before emitting contributor names
Canonical Name Lock and Change Workflow
Given a contributor with a canonical name set When new variants appear via manual entry or import Then variants are added to the alias list and do not overwrite the canonical name And changing the canonical name requires an explicit action by an authorized role and a reason And upon canonical change, audit is recorded and name propagation runs as with merges And API/bulk imports cannot change the canonical name unless an explicit override flag is provided by an authorized role
Locale-Specific Display Names and Export
Given a contributor with per-locale display names defined When viewing or exporting metadata with a specified target locale/market Then the locale-specific display name is used; if none exists, the canonical display name is used And locale-specific display names do not alter normalization keys and are tracked as aliases And DDEX/CSV exports include the correct display name per locale rules and include the associated language/locale code where supported
Primary/Featuring Consistency
"As a release manager, I want primary/featuring consistency checks so that artist attributions are clean and aligned across tracks and the overall release."
Description

Enforce consistent treatment of main and featured artists across track and release levels. Validate that featured artists appear as FeaturedArtist roles rather than hardcoded in titles, ensure release artist aggregates reflect track-level credits, and flag contradictions (e.g., a featured artist listed as primary on some tracks but not others without justification). Provide guided fixes to move “feat.” strings from titles into structured credits and update display titles according to style rules.

Acceptance Criteria
Detect and fix 'feat.' in track titles
Given a track title contains a featuring indicator (feat., ft., featuring) followed by one or more artist names When validation runs Then the system flags FEAT_IN_TITLE with the parsed artist names And a one-click Fix is available When the Fix is applied Then FeaturedArtist credits are created for the parsed artists in listed order And the stored Title field is stripped of featuring text and normalized to single spaces And the Display Title is regenerated to "Base Title (feat. Artist A, Artist B)" per style rules And the change is logged with before/after values
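The title-parsing fix above can be sketched with a regular expression that recognizes trailing featuring indicators. The helper names are hypothetical, and this sketch only handles indicators at the end of the title (e.g., it would not split "Run (feat. A) [Remix]").

```python
import re

FEAT_RE = re.compile(
    r"[\s\-]*[(\[]?\s*\b(?:feat\.?|ft\.?|featuring)\s+"
    r"(?P<artists>[^)\]]+)[)\]]?\s*$",
    re.IGNORECASE,
)

def extract_featuring(title: str):
    """Split a trailing '(feat. ...)' out of a title into structured names."""
    m = FEAT_RE.search(title)
    if not m:
        return title, []
    base = title[: m.start()].rstrip(" -")
    names = re.split(r"\s*,\s*|\s+&\s+|\s+and\s+", m.group("artists").strip())
    return base, [n for n in names if n]

def display_title(base: str, featured: list) -> str:
    """Regenerate the display title per the 'Base Title (feat. A, B)' rule."""
    return f"{base} (feat. {', '.join(featured)})" if featured else base
```

The word boundary (`\b`) prevents false positives such as splitting "Swift Arrow" on the embedded "ft".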
Release artist aggregation matches track-level primaries
Given a release with one or more tracks and saved track credits When validation runs Then every unique PrimaryArtist present on any track appears in the release PrimaryArtist list And no artist appears in release PrimaryArtist solely due to being FeaturedArtist on tracks And if a mismatch is found, raise RELEASE_AGG_MISMATCH identifying missing or extraneous artists And a one-click Fix updates the release artist list to match the rule
Consistent primary vs featured roles across tracks
Given an artist appears as PrimaryArtist on at least one track and FeaturedArtist on at least one other track within the same release And no justification note is present When validation runs Then raise ROLE_INCONSISTENCY with list of affected tracks and roles And block release export until resolved When roles are reconciled or a justification note with category and free text is added Then ROLE_INCONSISTENCY clears and export is unblocked
Guided fix preview and bulk apply
Given one or more tracks are flagged with FEAT_IN_TITLE When the user opens the Fix flow Then a preview shows per-track diffs: Title before/after and created FeaturedArtist credits And the user can select/deselect tracks and confirm When confirmed Then changes apply atomically across selected tracks And an undo option is available within 10 minutes or until next export, whichever comes first
DDEX export without featuring text in titles
Given a release passes validation When exporting metadata to DDEX Then Work/Release/Resource Title elements contain no featuring strings And Contributors include FeaturedArtist roles for all featured contributors with appropriate performer role codes And export fails with DDEX_VALIDATION if any title contains featuring text or a featured contributor lacks a FeaturedArtist role
Justified exception for varying primary lineups
Given a compilation or special release where primary lineup intentionally varies by track When a justification is added at the release level selecting reason "Varying primary lineup" and referencing affected tracks Then ROLE_INCONSISTENCY and RELEASE_AGG_MISMATCH are downgraded to warnings And export is permitted while warnings remain visible and auditable
Roster-aware merge and disambiguation of featured names
Given a 'feat.' string includes a name that fuzzy-matches an existing roster contributor or shares ISNI/IPI When validation parses the featuring names Then the system suggests linking to the roster contributor with match score ≥ 0.9 And on acceptance, the FeaturedArtist credit is assigned to the roster entity and duplicates are merged And on rejection, a new contributor is created without merging
Misattribution & Conflict Detection
"As a producer, I want misattribution alerts so that I can correct wrong roles before delivery and avoid disputes later."
Description

Detect conflicting or implausible credit patterns, such as the same person assigned mutually exclusive roles on a track (e.g., MixingEngineer and MasteringEngineer when policy forbids), composer counts not matching publishing splits, or credits applied to the wrong track within a batch. Use rule packs configurable by label policy and distributor expectations. Surface conflicts early in the editing flow, provide rationale, and link directly to the offending fields for quick correction.

Acceptance Criteria
Mutually Exclusive Roles Blocked on Save
Given label policy pack "IndieVault Default" has rule CG-ROLE-001 forbidding the same contributor from holding MixingEngineer and MasteringEngineer on a single track And Track T1 credits list ContributorID=123 as MixingEngineer and MasteringEngineer When the user clicks Save on Track T1 credits Then the save is prevented And a conflict with code CG-ROLE-001 and severity Error is displayed in the editor sidebar within 300 ms And the conflict item links focus the MixingEngineer and MasteringEngineer fields for ContributorID=123 And removing either role clears the conflict immediately and allows save to succeed
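The rule-pack check above can be sketched as data-driven validation: each rule forbids one role pair per contributor per track. The `RULE_PACK` shape and function name are hypothetical; rule codes CG-ROLE-001/002 are taken from the criteria in this section.

```python
# Hypothetical rule-pack shape: each rule forbids one role pair per track.
RULE_PACK = {
    "CG-ROLE-001": ("MixingEngineer", "MasteringEngineer"),
    "CG-ROLE-002": ("Producer", "Arranger"),
}

def exclusive_role_conflicts(credits, rule_pack=RULE_PACK):
    """credits: list of (contributor_id, role).

    Returns a list of (rule_code, contributor_id) conflicts to surface inline.
    """
    roles_by_person = {}
    for cid, role in credits:
        roles_by_person.setdefault(cid, set()).add(role)
    conflicts = []
    for code, (role_a, role_b) in rule_pack.items():
        for cid, roles in roles_by_person.items():
            if role_a in roles and role_b in roles:
                conflicts.append((code, cid))
    return conflicts
```

Because the rules are plain data, switching the active policy pack (per the sensitivity criterion below) is just re-running the function with a different `rule_pack`.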
Composer Count and Split Totals Validation
Given Track T2 lists exactly 3 contributors with role=Composer And publishing split lines for Track T2 include 3 or more entries with territories=World When the user attempts to validate or save Track T2 Then the system verifies that the sum of all composer publishing splits for World equals 100.00% ± 0.01% And the number of split entries referencing unique Composer contributor IDs equals 3 And any deviation raises conflict CG-SPLIT-100 severity Error with a rationale stating expected total and actual total And the conflict links focus the split rows and the Composer role chips And save and export are blocked until the splits match the composer count and total
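The split-total check above is a good candidate for exact decimal arithmetic rather than floats. A sketch, using the CG-SPLIT-100/101 codes from this section; the function shape is hypothetical.

```python
from decimal import Decimal

def check_composer_splits(composers, splits, tolerance=Decimal("0.01")):
    """composers: set of contributor IDs with role=Composer.
    splits: list of (contributor_id, Decimal percent) for territory=World.
    Returns a list of CG-SPLIT-* findings; empty means valid."""
    findings = []
    total = sum((pct for _, pct in splits), Decimal("0"))
    if abs(total - Decimal("100.00")) > tolerance:
        findings.append(f"CG-SPLIT-100: expected 100.00%, actual {total}%")
    split_ids = {cid for cid, _ in splits}
    # Every split must reference a listed Composer, and vice versa.
    for cid in split_ids - composers:
        findings.append(f"CG-SPLIT-101: split references non-listed composer {cid}")
    for cid in composers - split_ids:
        findings.append(f"CG-SPLIT-100: composer {cid} has no split line")
    return findings
```

Using `Decimal` keeps the ± 0.01% tolerance meaningful; float rounding could make 33.33 + 33.33 + 33.34 compare unequal to 100.00.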
Bulk-Paste Misapplied Featuring Credits Across Tracks
Given the label policy pack has rule CG-FEAT-001 "Single-feature per track" enabled And the user bulk-pastes credits from Track T3 to Tracks T4–T8 And ContributorID=456 is assigned role=FeaturedArtist on T3 only per the roster dictionary When paste completes Then any track in T4–T8 that includes ContributorID=456 as FeaturedArtist without a matching "feat. {Contributor Display Name}" tag in its track title raises conflict CG-FEAT-001 severity Warning And the conflict lists the affected tracks And clicking each conflict item navigates to that track’s Featuring field And choosing "Remove from all but T3" resolves all listed conflicts in one action
Rule Pack Sensitivity and Real-Time Re-Evaluation
Given Track T5 credits pass validation under policy pack "Distributor A" When the user switches the active policy pack to "Distributor B" that forbids Producer and Arranger being the same contributor (rule CG-ROLE-002) And ContributorID=789 holds both roles on T5 Then the validation re-runs within 500 ms without page reload And conflict CG-ROLE-002 severity Error appears with rationale referencing "Distributor B" policy And reverting to "Distributor A" removes the conflict immediately
Inline Conflict Surfacing on Field Blur
Given the user is editing Track T6 credits And ContributorID=321 is assigned as Composer When the user enters publishing splits totaling 95% and tabs away from the last split field Then conflict CG-SPLIT-100 severity Error appears inline under the split table within 300 ms And the Save button becomes disabled with an error badge count incremented by 1 And correcting the total to 100% clears the conflict and re-enables Save without reloading
Split Lines Referencing Non-Listed Composers
Given Track T7 has no contributor with role=Composer matching ContributorID=654 When a split row references ContributorID=654 for a composer share Then conflict CG-SPLIT-101 severity Error appears with rationale "Split references non-listed composer" And the conflict links focus the split row and offers two fix options: "Add as Composer" and "Remove split row" And selecting either fix resolves the conflict and updates validation state
Credit Change Audit & Approval
"As a label admin, I want an approval workflow and audit trail for credit changes so that we maintain accountability and can revert mistakes if needed."
Description

Capture a full audit trail of credit edits, merges, and identifier confirmations with timestamp, actor, and rationale. Provide a lightweight approval workflow for high-impact changes (e.g., role changes, merges, removals) with assignable reviewers, comment threads, and notifications. Integrate with IndieVault’s versioning so approved changes create new credit versions that can be diffed and, if needed, rolled back. Export the audit log with the release package when required by partners.

Acceptance Criteria
Audit Trail Capture for Credit Edits
Given a user with edit permissions updates any credit field and provides a rationale of at least 5 characters, When they save the change, Then an immutable audit entry is created containing: unique change ID, UTC timestamp, actor (user ID and display name), asset scope (release/track IDs), action type (edit/merge/remove/identifier-confirmation), field name, previous value, new value, rationale text, and client IP/user agent. Given an audit entry exists, When viewed in the audit log UI or fetched via API, Then the entry is read-only, non-deletable by any role, ordered reverse-chronologically, and filterable by date range, actor, action type, and asset scope. Given a user attempts to save a credit change without a rationale, When they submit, Then the system blocks the save and displays a validation error indicating rationale is required.
Approval Workflow for High-Impact Changes
Given a proposed change is high-impact (role change, contributor merge, contributor removal, primary/featuring flag change, or ISNI/IPI overwrite), When a user submits the change, Then a Change Request is created with status "Pending" and the underlying credits are not updated. Given project-level approval policies and default reviewers exist, When a Change Request is created, Then the requestor must assign at least one reviewer (or the system auto-assigns defaults), and self-approval is blocked if policy forbids it. Given assigned reviewers, When required approvals are collected (default 1) without any active rejections, Then the Change Request transitions to "Approved"; When any reviewer rejects, Then the status becomes "Rejected" and changes are not applied. Given a Change Request has comments, When participants post, edit, or delete their own comments, Then a comment thread is maintained and each action is captured in the audit trail. Given a Change Request changes state (Pending → Approved/Rejected), When the transition occurs, Then a corresponding audit entry links the request, approvers, timestamps, and rationale for the decision.
Notifications and Reminders for Approval Actions
Given a Change Request is created, When it is submitted, Then in-app and email notifications are sent to all assigned reviewers within 60 seconds containing: request ID, summary of proposed changes, requester, and a deep link to review. Given a Change Request remains Pending, When 24 hours elapse without a decision, Then reminder notifications are sent to assigned reviewers; When 72 hours elapse, Then a second reminder is sent and the project owner is CC’d (configurable), with a maximum of 5 reminders per request. Given notification delivery occurs, When messages are sent, Then delivery status is tracked (queued/sent/failed) and failures are retried up to 3 times with exponential backoff; persistent failures are surfaced to the requester. Given a Change Request is Approved, Rejected, or commented on, When that event occurs, Then the requester and thread participants receive notifications within 60 seconds and the events are logged.
Versioning Integration and Diff on Approval
Given a Change Request is Approved, When it is applied, Then a new immutable credit version is created with a sequential version number and unique version ID, capturing a full snapshot of credits for all assets in scope. Given a new version is created, When viewing the diff, Then field-level changes (add/edit/remove) are shown for contributor names, roles, primary/featuring flags, ISNI/IPI, and track associations, including before/after values and affected asset IDs. Given a Change Request is Pending or Rejected, When viewing versions, Then no new version exists and the current published version remains unchanged. Given any version ID, When retrieving via UI or API, Then the exact snapshot associated with that version is returned consistently and is read-only.
Rollback to Prior Credit Version
Given a user with Manage Credits permission selects a prior version, When they initiate Roll Back, Then the system creates a new version identical to the selected version, records a rollback audit entry linking from and to versions, and leaves all prior versions intact. Given a rollback would revert a high-impact change, When policy requires approval for rollbacks, Then a Change Request of type "Rollback" is created and must be Approved before the rollback version is created. Given a rollback completes, When viewing the diff between the latest version and its predecessor, Then the diff accurately reflects the reverted fields and affected assets. Given pending Change Requests exist, When a rollback is initiated, Then those requests remain unaffected and continue to reference their original target versions.
Export Audit Log with Release Package
Given a user exports a release package, When the partner template requires an audit log (or the user selects Include audit log), Then the export bundle includes an audit-log.json file scoped to the release that contains all audit entries and approval events relevant to that release within the selected time range. Given the audit-log.json is generated, When validating its structure, Then it conforms to the documented schema (including change ID, timestamps, actors, action types, fields changed, before/after values, rationales, related version IDs, and change-request IDs) and passes schema validation. Given the export completes, When inspecting the bundle, Then a SHA-256 checksum for audit-log.json is included and the export action itself is recorded in the audit trail. Given a partner-specific export is requested, When a CSV format is required, Then a companion audit-log.csv is included with the required columns and value formats.

ArtCheck

Automated artwork compliance scanning for dimensions, resolution, file type, color profile, file size, and store‑specific rules (no URLs/pricing, safe border ratios, explicit badges). Auto‑generates corrected variants and anchors the approved image to the export manifest so the wrong cover never ships.

Requirements

Store-Specific Artwork Rule Engine
"As an indie label admin, I want to manage store-specific artwork rules in one place so that all releases validate against the latest policies without developer intervention."
Description

A centralized, versioned rules engine that defines and manages artwork compliance policies per distribution channel (e.g., Spotify, Apple Music, YouTube Music, Bandcamp), including dimensions, aspect ratios, minimum resolution, acceptable file types, color profiles, maximum file sizes, safe-area ratios, prohibited content (URLs, pricing, social handles), and explicit badge requirements. Rules are time-versioned, targetable by destination, and support label-level overrides without code changes. Validation services query active rules via an internal API during upload, batch scan, and pre-export checks. Integrated into IndieVault’s admin UI and applied automatically across releases, ensuring consistent, up-to-date compliance and rapid adaptation to evolving store guidelines.

Acceptance Criteria
Active Rule Retrieval API by Store and Effective Date
Given an authenticated internal client and store=spotify and at=2025-10-01T00:00:00Z (RFC3339) When GET /v1/artwork-rules?store=spotify&at=2025-10-01T00:00:00Z is called Then respond 200 with exactly one active ruleSet where effectiveFrom <= at < effectiveTo (or effectiveTo is null) And payload includes id, store, version, effectiveFrom, effectiveTo, constraints{dimensionsPx, aspectRatios, minResolutionDPI, fileTypes, colorProfiles, maxFileSizeMB, safeAreaRatio, prohibitedContent, explicitBadge} And payload conforms to rules.schema.json and contains no nulls for constraints (omit if unspecified) And p95 latency <= 200 ms at 50 RPS sustained for 5 minutes
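The active-ruleset lookup above hinges on the half-open interval `effectiveFrom <= at < effectiveTo`. A minimal sketch of that selection logic (the `RuleSet` shape and field names here are illustrative, not the documented schema):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class RuleSet:
    id: str
    store: str
    version: int
    effective_from: datetime
    effective_to: Optional[datetime]  # None means open-ended (no effectiveTo)

def active_rule_set(rule_sets: List[RuleSet], store: str, at: datetime) -> Optional[RuleSet]:
    """Return the single active ruleSet where effectiveFrom <= at < effectiveTo."""
    matches = [
        rs for rs in rule_sets
        if rs.store == store
        and rs.effective_from <= at
        and (rs.effective_to is None or at < rs.effective_to)
    ]
    # Well-formed rule data guarantees at most one active ruleSet per store.
    return matches[0] if matches else None
```

Because the interval is half-open, a validation at exactly `effectiveFrom` of v2 resolves to v2, never to both versions.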
Upload-Time Artwork Validation Against Active Rules
Given an image is uploaded for destination=apple-music and an active ruleSet exists for apple-music at upload time When the validation service evaluates the image Then all ruleSet constraints are applied And on full compliance, the result is status="pass" with {ruleSetId, version, evaluatedAt} And on any failure, the result is status="fail" with an array of errors {code, field, expected, actual, message} And an audit record is stored with {assetId, destination, ruleSetId, evaluatedAt, outcome}
Time-Versioned Rules Rollout and Non-Retroactivity
Given spotify ruleSet v1 (effectiveTo=2025-10-01T00:00:00Z, exclusive) and v2 (effectiveFrom=2025-10-01T00:00:00Z) When a validation occurs at 2025-09-29T12:00:00Z Then v1 is used When the same asset is revalidated at 2025-10-02T12:00:00Z Then v2 is used And historical validation results for v1 remain immutable and queryable And no automatic mutation/backfill of prior outcomes occurs
Label-Level Overrides Precedence and Expiry Fallback
Given labelId=L123 has an override for youtube-music setting maxFileSizeMB=8 and the global default is 10 When a user under labelId=L123 validates artwork within the override effective window Then evaluation uses override values where specified and defaults for unspecified fields And the response includes {source="override", overrideId, parentRuleSetId} When the override expires Then evaluations revert to {source="default"} automatically without deployment or cache flush beyond normal TTL
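The override precedence described here — override values win where specified, defaults fill the rest, and expiry falls back automatically — can be sketched as a simple merge (field names are illustrative):

```python
from typing import Optional

def effective_constraints(defaults: dict, override: Optional[dict]) -> dict:
    """Merge a label-level override onto the global default ruleset.

    Override values win only where specified; unspecified fields fall back
    to defaults. Passing None (e.g., after the override expires) reverts
    evaluation to the defaults with no other change.
    """
    merged = dict(defaults)
    if override:
        merged.update({k: v for k, v in override.items() if v is not None})
        merged["source"] = "override"
    else:
        merged["source"] = "default"
    return merged
```

Resolving expiry at evaluation time (rather than mutating stored rules) is what makes the fallback automatic, requiring no deployment or cache flush beyond normal TTL.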
Admin UI Draft/Publish Workflow with Preview Test and Audit
Given a rules-admin with permission creates a draft ruleSet for bandcamp When the draft is saved Then server-side validation enforces required fields and logical ranges (e.g., widthPx,heightPx > 0; maxFileSizeMB > 0; aspectRatios non-empty) And "Preview Test" allows uploading a sample image to simulate validation against the draft without persisting outcomes And publishing requires effectiveFrom >= now + 5 minutes lead time and increments semantic version And every create/update/publish is audit-logged with {actor, action, diff, timestamp, reason}
Pre-Export Rule Locking in Release Manifest
Given a release targets destinations=[spotify, apple-music] and each has a latest validation with status="pass" When pre-export checks execute Then the export manifest records per-destination {ruleSetId, version, effectiveAt, validatedAt, assetChecksum} And export proceeds only if the currently active ruleSetId for each destination equals the manifest ruleSetId; otherwise export is blocked with code=RULE_MISMATCH and a revalidation is required
Dimension, Resolution, and File Size Scanner
"As an artist manager, I want uploaded cover art automatically checked for size and resolution so that I catch issues before scheduling a release."
Description

On asset upload and selection, validate pixel dimensions against rule minimums and aspect ratios, check effective resolution (where applicable), and enforce per-store maximum file sizes. Provide clear pass/fail results and human-readable remediation messages. Support batch scanning with background processing and progress tracking. Persist scan results to asset metadata and surface outcomes in the release readiness checklist and review link summaries. Early detection reduces rework, prevents distributor rejections, and keeps release schedules on track.

Acceptance Criteria
Single Image Upload: Dimension and Aspect Ratio Validation
Given a user uploads a single image asset (JPEG/PNG/WebP) And the active ruleset defines minimum pixel dimensions and allowed aspect ratios for the selected destinations When the upload completes and the scan runs Then the scanner measures pixel width and height And verifies width >= requiredMinWidth and height >= requiredMinHeight per ruleset And verifies the aspect ratio matches an allowed ratio within an absolute tolerance of 0.5% And records Pass if all checks succeed, else Fail with observed vs expected values And displays the Pass/Fail result and details in the upload confirmation panel without requiring a page refresh
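The dimension and aspect-ratio checks above can be sketched as follows, reading "absolute tolerance of 0.5%" as ±0.005 on the ratio value (one plausible interpretation of the criterion):

```python
from typing import Iterable

def aspect_ratio_ok(width: int, height: int, allowed_ratios: Iterable[float],
                    tol: float = 0.005) -> bool:
    """True if width/height falls within the absolute tolerance of any allowed ratio."""
    observed = width / height
    return any(abs(observed - r) <= tol for r in allowed_ratios)

def dimensions_ok(width: int, height: int, min_w: int, min_h: int) -> bool:
    """True if both pixel dimensions meet the ruleset minimums."""
    return width >= min_w and height >= min_h
```

A failing result would then carry the observed vs. expected values (e.g., `observed=1.5, allowed=[1.0]`) into the remediation message.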
Effective Resolution Check for DPI-Required Destinations
Given an image asset includes metadata sufficient to infer physical dimensions (e.g., DPI or pixel dimensions plus intended print size) And at least one selected destination is flagged as requiresEffectiveResolution When the scan runs Then the system calculates effective DPI in both axes And records Pass if effective DPI >= requiredMinDPI for that destination across both axes And records Fail if effective DPI cannot be determined or is below the required threshold And includes a remediation message stating requiredMinDPI, observedDPI (or Unknown), and suggested corrective actions (e.g., resize to required pixel dimensions for intended size)
Per-Store Maximum File Size Enforcement
Given one or more destinations are selected for the asset And each destination defines a maximum file size limit in bytes When the scan runs Then the system compares the asset's file size to each selected destination's limit And records Pass only if the asset size <= every selected destination's max And records Fail if any destination's max is exceeded, listing each failing destination with observedSize and maxAllowed And provides a remediation suggestion to recompress to a target size that satisfies the strictest max
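A sketch of the multi-destination size check, including the "strictest max" remediation target the criterion calls for (result field names are illustrative):

```python
from typing import Dict, Optional

def file_size_check(size_bytes: int, dest_limits: Dict[str, int]) -> dict:
    """Pass only if the asset fits within every selected destination's limit.

    On failure, list each offending destination with its limit and suggest
    recompressing to the strictest max, which satisfies all destinations.
    """
    failing = {d: lim for d, lim in dest_limits.items() if size_bytes > lim}
    strictest: Optional[int] = min(dest_limits.values()) if dest_limits else None
    return {
        "status": "pass" if not failing else "fail",
        "observedSize": size_bytes,
        "failing": failing,  # destination -> maxAllowed for each violation
        "recompressTargetBytes": strictest if failing else None,
    }
```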
Batch Scanning with Background Processing and Progress Tracking
Given a user selects N (N ≥ 2) image assets to scan When the user starts a batch scan Then the system enqueues N independent scan jobs in a background worker And the UI shows a progress indicator displaying completed/total and percentage And progress updates after each asset completes scanning And the UI remains interactive and navigable during scanning And upon completion, a summary displays counts of Pass and Fail and a link to detailed results for each asset
Remediation Messages and Auto-Generated Corrected Variants
Given an asset fails due to dimensions or file size When the user opens the scan result details Then each failed rule displays a human-readable remediation message including expected requirement(s), observed values, and a suggested fix And if an automatic correction is available (e.g., downscale to max dimensions while preserving aspect ratio, or recompress to meet size), a Generate Corrected Variant action is shown And when the user triggers the action, the system creates a new variant named per the versioning convention, associates it with the original asset, and re-runs the scan on the variant And the variant shows Pass for the previously failed rule if the auto-correction succeeds, else shows Fail with updated details
Persisting Results and Surfacing in Readiness Checklist and Review Links
Given a scan completes for an asset When results are written Then the system persists to asset metadata: scanTimestamp, rulesetId and version, per-rule outcomes (Pass/Fail), observed values, and remediation messages And the Release Readiness checklist reflects the latest overall artwork compliance status (Pass if all required rules pass; Fail otherwise) with a link to details And review link summaries display a concise Artwork Compliance: Pass/Fail badge derived from the latest persisted results without exposing internal rule names And if the underlying asset or selected variant changes, both the checklist item and review link summary update automatically without requiring a manual refresh
Automatic Re-Scan on Ruleset or Destination Changes
Given an asset has stored scan results And the active ruleset version or the set of selected destinations changes in a way that could affect thresholds When the change is saved Then the system marks the prior results as stale and queues an automatic re-scan within 60 seconds And the UI indicates Re-scan pending until new results are persisted And an audit entry records who/what triggered the re-scan and the before/after ruleset version or destination set
Color Profile and Format Normalization
"As a content operations lead, I want artwork color profiles and formats normalized automatically so that images render consistently and meet each store’s technical requirements."
Description

Detect embedded ICC profiles and convert to sRGB IEC61966-2.1 when required by target stores while preserving color fidelity. Enforce allowed file formats (e.g., JPEG/PNG) and convert from unsupported types (e.g., HEIC/TIFF/PSD) using appropriate quality, chroma subsampling, and compression settings to meet store and file size limits. Strip non-essential metadata while retaining rights/attribution fields as configured. Transformations are non-destructive: originals are retained and normalized variants are stored with lineage metadata for auditability and rollback. Integrated into the ArtCheck pipeline post-scan and selectable during approval.

Acceptance Criteria
Store-Driven sRGB Conversion Compliance
Given an uploaded artwork with an embedded ICC profile not equal to sRGB IEC61966-2.1 and a selected target store that requires sRGB When ArtCheck normalization runs post-scan Then the output variant is converted to sRGB IEC61966-2.1 and embeds the correct ICC profile tag And the average ΔE2000 between the source rendered to sRGB and the normalized variant is ≤ 2.0 with max ≤ 5.0 measured on a 10k-point grid And no device link or additional ICC profiles remain embedded And if the target store does not require sRGB, the original ICC profile is preserved and no color space conversion is performed
Unsupported Format Auto-Conversion to Allowed Types
Given a source artwork in an unsupported format for the target store (e.g., HEIC, TIFF, PSD) When ArtCheck normalization runs Then the output format is converted to the highest-ranked allowed format defined by the store profile (e.g., JPEG or PNG) And for JPEG outputs: baseline or progressive per store profile, 8-bit/channel, YCbCr with 4:4:4 subsampling unless the store profile explicitly allows 4:2:0 And for PNG outputs: truecolor 24-bit RGB, alpha removed if the store forbids transparency (flatten to configured background color) And pixel dimensions are preserved unless the store profile explicitly requires resizing (handled by the separate dimension rule)
File Size and Compression Limits per Store
Given a target store profile with a maximum file size limit When encoding the normalized variant Then the file size is ≤ the configured limit without changing pixel dimensions And JPEG quality is adaptively tuned to meet the limit while achieving SSIM ≥ 0.98 (or PSNR ≥ 40 dB) relative to the color-managed source; quality never drops below the store’s configured floor And PNG compression level is tuned to meet the limit without introducing loss And if the limit cannot be met without violating quality floors, the job fails with error code ARTCHK_SIZE_LIMIT_UNMET and the original remains unchanged
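The adaptive-quality requirement amounts to finding the highest JPEG quality that still fits the size limit without dropping below the configured floor. One way to sketch it is a binary search over quality, with the encoder abstracted behind a caller-supplied probe (the probe and the monotonicity assumption — larger quality never shrinks output — are simplifications of real JPEG encoders):

```python
from typing import Callable, Optional

def tune_jpeg_quality(encoded_size: Callable[[int], int], max_bytes: int,
                      floor: int = 60, ceil: int = 95) -> Optional[int]:
    """Binary-search the highest quality whose encoded output fits max_bytes.

    `encoded_size(q)` returns the byte size at quality q (assumed monotonically
    non-decreasing in q). Returns None if even the quality floor overshoots the
    limit, mirroring the ARTCHK_SIZE_LIMIT_UNMET failure in the criterion above.
    """
    if encoded_size(floor) > max_bytes:
        return None  # cannot meet the limit without violating the quality floor
    lo, hi = floor, ceil
    while lo < hi:
        mid = (lo + hi + 1) // 2  # bias upward to find the highest passing quality
        if encoded_size(mid) <= max_bytes:
            lo = mid
        else:
            hi = mid - 1
    return lo
```

In practice the SSIM ≥ 0.98 floor would be checked alongside the byte limit at each probe; this sketch shows only the size dimension of the search.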
Metadata Normalization with Rights Retention
Given a configurable metadata retention policy for rights/attribution fields (e.g., IPTC Creator, Credit, Copyright Notice, Rights Usage Terms, and configured XMP namespaces) When normalization runs Then non-essential EXIF/XMP/GPS and device/camera metadata are stripped from the output variant And the configured rights/attribution fields are preserved verbatim And a metadata diff report shows removed vs retained fields And the variant contains no private maker notes or GPS tags And the original file’s metadata remains intact
Non-Destructive Variants, Lineage, and Rollback
Given normalization is executed on an artwork When the variant is created Then the original asset is retained unmodified with the same SHA-256 checksum And the variant stores lineage metadata: sourceAssetId, sourceChecksum, storeProfileId, transformSteps (ordered), createdAt, createdBy, variantChecksum And an audit log entry is recorded for the transformation And when a rollback is requested, the system can re-point the manifest to the original And when regeneration is requested with the same inputs/profile, the newly created variant’s checksum matches the previous variant
Pipeline Integration and Approval Anchoring
Given ArtCheck completes its scan When the reviewer enters approval Then a “Normalize Color/Format” step is presented post-scan with a per-store summary of planned actions (e.g., profile conversion, format conversion, metadata stripping) And normalization is pre-selected for stores where it is mandatory and optional where allowed And upon approval, the export manifest is locked to the normalized variant per store; exports to stores requiring normalization are blocked unless the normalized variant is present And the UI provides before/after previews rendered in sRGB for visual verification And disabling normalization for a store that requires it surfaces a blocking validation error
Multi-Store Variant Generation and Deduplication
Given multiple target stores with differing color/format requirements When normalization runs Then a distinct variant is generated per unique requirement set and referenced per-store in the export manifest And identical requirement sets across stores deduplicate to a single binary with multiple manifest references And variant filenames/IDs include store code and profile version to avoid collisions And processing occurs concurrently; a failure for one store does not block others, and per-store statuses are reported And all generated variants meet their respective store rules for profile, format, metadata, and size
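Deduplicating identical requirement sets to a single binary can be planned by keying each store on a canonical hash of its requirements (the key format here is illustrative):

```python
import hashlib
import json
from typing import Dict

def plan_variants(store_requirements: Dict[str, dict]) -> Dict[str, str]:
    """Map each store to a variant key; stores whose requirement sets are
    identical share one key (one generated binary, multiple manifest references)."""
    plan = {}
    for store, reqs in store_requirements.items():
        canonical = json.dumps(reqs, sort_keys=True)  # order-independent identity
        plan[store] = hashlib.sha256(canonical.encode()).hexdigest()[:12]
    return plan
```

The export manifest then references the shared variant key per store, while generation runs once per unique key.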
Prohibited Content and Badge Compliance Detection
"As a release manager, I want the system to flag URLs, pricing, and missing explicit badges so that artwork won’t be rejected by stores."
Description

Use OCR and visual heuristics to detect prohibited text elements (URLs, pricing, contact info, social handles) and machine-readable markers (e.g., QR codes) on artwork. Verify presence, size, and placement of explicit content badges when the release is marked explicit, and check safe border ratios for text and key elements per store rules. Provide confidence-scored findings, granular warnings/errors, and recommended remediation actions. Configurable thresholds per destination with results stored to compliance logs and surfaced in the UI for review.

Acceptance Criteria
OCR Detection of Prohibited URLs, Pricing, Contact Info, and Social Handles
Given an artwork image is uploaded against destination profile D that forbids URLs, pricing, contact info, and social handles When the system runs OCR using the configured language set and normalization for D Then it returns detected text tokens with normalized text, bounding boxes (x,y,w,h), and confidence scores (0.0–1.0) per token And flags tokens matching URL/email/phone/price/handle patterns as findings with category, severity (Error/Warning per D), and confidence And marks the overall asset status as Fail if any Error-class finding exists; otherwise Warn if any Warning-class finding exists; otherwise Pass And provides a recommended remediation per finding (e.g., remove text, crop/move, replace asset) linked to help documentation
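The pattern-matching stage after OCR can be sketched with a few representative regexes. These patterns are illustrative only; a production system would load the configured pattern set from the destination profile:

```python
import re
from typing import List

# Illustrative prohibited-content patterns (not the destination profile's real set).
PATTERNS = {
    "url": re.compile(r"(?:https?://|www\.)\S+", re.I),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[a-z]{2,}\b", re.I),
    "price": re.compile(r"[$€£]\s?\d+(?:[.,]\d{2})?"),
    "handle": re.compile(r"(?<![\w@])@\w{2,}"),  # social handle, not part of an email
}

def classify_tokens(tokens: List[dict]) -> List[dict]:
    """Return findings for OCR tokens matching prohibited-content patterns.

    Each token carries normalized text, a bounding box, and a confidence
    score; each finding keeps those alongside the matched category.
    """
    findings = []
    for tok in tokens:
        for category, rx in PATTERNS.items():
            if rx.search(tok["text"]):
                findings.append({"category": category, "text": tok["text"],
                                 "bbox": tok["bbox"], "confidence": tok["confidence"]})
    return findings
```

Severity (Error vs. Warning) and confidence thresholds would then be applied per destination profile D on top of these raw findings.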
Detection of QR Codes and Machine-Readable Markers
Given an artwork image is uploaded for destination profile D When the system runs machine-readable marker detection Then it identifies QR/Datamatrix/1D barcodes with bounding boxes, symbology type, and confidence (0.0–1.0) And classifies each detected code as Error or Warning per D’s rules and confidence thresholds And includes a recommended remediation (e.g., remove/obscure code or replace asset) And updates the overall asset status based on the highest-severity finding from this check
Explicit Content Badge Presence, Size, and Placement Validation
Given the release is marked explicit and destination profile D is selected When the system scans the artwork Then it verifies exactly one explicit content badge exists in an allowed corner, with size ≥ D.badge.minSizePct of the shorter edge, and inset within [D.badge.minInsetPct, D.badge.maxInsetPct] And verifies minimum contrast ≥ D.badge.minContrast and occlusion ≤ D.badge.maxOcclusionPct And if missing or out-of-spec, creates an Error finding with a recommended badge spec (size, position, contrast) for remediation And if the release is not explicit for D, verifies that no explicit badge is present; otherwise flags per D (Warning or Error)
Safe Border Ratio Enforcement for Text and Key Elements
Given destination profile D defines safe border ratio D.safeBorderPct and key-element rules When the system detects text regions and key elements (e.g., logos/faces) and computes their minimum distance to each canvas edge Then it flags any region with inset < D.safeBorderPct as a violation, returning the element type, bounding box, measured inset, and delta required to comply And assigns severity (Error/Warning) per D and includes a remediation (e.g., move/resize element, increase border padding)
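The inset measurement can be sketched as follows, reading the safe-border ratio as a fraction of the shorter canvas edge (one plausible interpretation; a profile could equally define it per edge):

```python
from typing import List

def safe_border_violations(regions: List[dict], canvas_w: int, canvas_h: int,
                           safe_border_pct: float) -> List[dict]:
    """Flag regions whose minimum distance to any canvas edge falls below
    the required safe-border inset, returning the delta needed to comply."""
    required = safe_border_pct * min(canvas_w, canvas_h)
    violations = []
    for r in regions:  # each region: {"type", "x", "y", "w", "h"}
        inset = min(r["x"], r["y"],
                    canvas_w - (r["x"] + r["w"]),
                    canvas_h - (r["y"] + r["h"]))
        if inset < required:
            violations.append({"type": r["type"], "inset": inset,
                               "delta": required - inset})
    return violations
```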
Per-Destination Configurable Thresholds and Rule Profiles
Given an administrator updates thresholds and rules for destination profile D (patterns, confidence thresholds, badge specs, safe-border ratios) and publishes version v When a new scan is run for an artwork against D Then the engine applies the published configuration v and records D.id and v in the results And scans against different destination profiles for the same asset may yield different severities per their configs And configuration changes are versioned and auditable, with effective-from timestamps
Compliance Logging and UI Surfacing with Confidence-Scored Findings
Given a scan completes for asset A against destination profile D When results are saved Then the system persists a compliance log entry with A.id, A.version hash, D.id, D.version, timestamp, engine version, findings (category, severity, confidence, bounding boxes), recommended remediations, and overall decision (Pass/Warn/Fail) And the UI surfaces these findings with overlays, filters by severity/category, sort by confidence, and export (JSON/CSV) And access controls ensure only authorized users for asset A can view results And rescans append immutable log entries without overwriting prior records
Auto-Generated Compliant Variants
"As a designer, I want the system to auto-generate compliant variants and show me a preview so that I can approve fixes quickly without manual rework."
Description

When non-compliance is detected, automatically generate corrected variants: resize/crop to compliant dimensions and aspect ratios, normalize ICC profile, convert format, adjust compression to meet file size limits, and add padding to satisfy safe-area requirements. Provide side-by-side previews and quality indicators, track variant lineage, and name variants according to store presets. Offer one-click selection of a preferred variant for approval, while preserving originals for manual editing if needed. Integrated with release assets so approved variants flow directly into export and review workflows.

Acceptance Criteria
Auto-resize/crop to store-compliant dimensions
Given a selected store preset with a required aspect ratio and exact pixel dimensions And an uploaded artwork that fails either the aspect ratio or dimensions When ArtCheck generates variants Then at least one variant exactly matches the preset’s required width and height in pixels And the variant’s aspect ratio equals the preset’s aspect ratio And any cropping is centered and does not introduce empty pixels And variant metadata records resize/crop operations and any upscaling flag
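The "centered crop with no empty pixels" behavior reduces to picking the largest centered box at the target aspect ratio, then scaling it to the exact preset dimensions. A sketch of the box computation:

```python
from typing import Tuple

def center_crop_box(src_w: int, src_h: int,
                    target_w: int, target_h: int) -> Tuple[int, int, int, int]:
    """Return (x, y, w, h) of the largest centered crop of the source at the
    target aspect ratio; scaling that crop to target_w x target_h then yields
    exact preset dimensions without introducing empty pixels."""
    target_ratio = target_w / target_h
    if src_w / src_h > target_ratio:            # source too wide: trim the sides
        crop_w, crop_h = round(src_h * target_ratio), src_h
    else:                                       # source too tall: trim top/bottom
        crop_w, crop_h = src_w, round(src_w / target_ratio)
    x = (src_w - crop_w) // 2
    y = (src_h - crop_h) // 2
    return x, y, crop_w, crop_h
```

If the crop is smaller than the preset's exact pixel dimensions, the subsequent upscale is what sets the upscaling flag recorded in the variant metadata.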
ICC profile normalization to sRGB
Given an artwork with a non-sRGB or missing ICC profile When ArtCheck generates variants Then the variant is embedded with the sRGB IEC61966-2.1 ICC profile And color conversion is applied using relative colorimetric intent with black point compensation And the measured color difference vs. a reference sRGB conversion is ΔE00 ≤ 2 averaged over a 1000‑sample grid And the UI indicates the final ICC profile
Format conversion and file size compliance
Given a store preset that specifies an output format and a maximum file size limit And a generated variant at the target dimensions When ArtCheck encodes the variant Then the file format matches the preset (e.g., JPEG or PNG) And the file size is ≤ the preset limit And objective quality vs. the pre-encode image is SSIM ≥ 0.98 (or ≥ 0.95 if the limit cannot be met at 0.98, with a warning) And the UI displays file size, limit, and quality score And a “quality/size trade‑off” badge appears if SSIM < 0.98
Safe-area padding application
Given a store preset that requires a safe-area border ratio (e.g., ≥ 7% of the shorter edge) And an artwork that would violate the safe area after resizing/cropping When ArtCheck generates variants Then padding is added so that the final safe-area margin meets or exceeds the required ratio, rounded to the nearest pixel And padding color/alpha follows the preset (e.g., edge‑average color for JPEG; transparent for PNG) And a safe-area guide overlay aligns with the computed margins in preview And variant metadata includes safeAreaCompliant = true and recorded padding pixels per edge
Side-by-side previews with quality indicators
Given at least one generated variant exists for an artwork When the user opens ArtCheck preview Then the original and the selected variant are shown side-by-side And the user can toggle 1:1 (100%) zoom and pan both views in sync And the UI displays indicators: dimensions, aspect ratio, ICC profile, format, file size vs. limit, upscaled flag, quality score And switching between variants updates indicators to match the underlying file properties
Variant lineage tracking and preset-based naming
Given multiple variants are generated from a single source artwork When variants are saved Then each variant stores lineage: sourceAssetId, storePresetId, generatedAt, transformSteps, checksum And filenames follow the preset pattern (e.g., [ReleaseCode]_[StoreCode]_[WxH]_[ICC].[ext]) without illegal characters And name collisions are resolved by appending an incremental suffix (_v2, _v3, …) And lineage records are queryable in the asset’s history/audit log
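The preset naming pattern and incremental collision suffix can be sketched as below (the character whitelist is an illustrative stand-in for the real "illegal characters" rule):

```python
import re
from typing import Set

def variant_filename(release_code: str, store_code: str, w: int, h: int,
                     icc: str, ext: str, existing: Set[str]) -> str:
    """Build [ReleaseCode]_[StoreCode]_[WxH]_[ICC].[ext], appending _v2, _v3, ...
    until the name does not collide with an existing filename."""
    clean = lambda s: re.sub(r"[^A-Za-z0-9.-]", "", str(s))  # strip illegal chars
    base = f"{clean(release_code)}_{clean(store_code)}_{w}x{h}_{clean(icc)}"
    name, n = f"{base}.{ext}", 1
    while name in existing:
        n += 1
        name = f"{base}_v{n}.{ext}"
    return name
```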
One-click approval and integration to export/review flows
Given one or more compliant variants are available When the user clicks “Approve” on a chosen variant Then the variant status becomes Approved and is locked from further auto-regeneration And it is anchored to the release export manifest for the corresponding store(s) And subsequent review links for that release use the approved variant by default And the original artwork remains preserved and editable And the user can revert approval or switch the approved variant, with all changes logged
Approval Anchoring to Export Manifest
"As a label operations lead, I want the approved cover to be locked to the export manifest so that the wrong image can’t ship by accident."
Description

Upon approval of a passing variant, compute a content hash and anchor the exact asset (ID and hash) to the release’s export manifest and downstream delivery payloads. Prevent exports if the attached artwork changes post-approval without revalidation. Reflect the anchored asset in watermarkable review links and in the release checklist. Record approvals in an immutable audit log. This guarantees that the approved cover is the one that ships, eliminating mix-ups across teams and pipelines.

Acceptance Criteria
Anchor Approved Artwork to Export Manifest
Given a release with an artwork variant that has passed ArtCheck When a user marks the variant as Approved Then the system computes a content hash of the exact binary and stores AssetID and Hash in the release's export manifest And subsequent reads of the manifest return the same AssetID and Hash And the manifest records the anchoring timestamp and approving user
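The anchoring and post-approval guard above can be sketched with a SHA-256 content hash over the exact approved binary (the manifest field names are illustrative):

```python
import hashlib
from datetime import datetime, timezone

def anchor_artwork(manifest: dict, asset_id: str, binary: bytes, approver: str) -> dict:
    """Compute the content hash of the approved binary and record it, with the
    anchoring timestamp and approving user, in the release's export manifest."""
    manifest["artwork"] = {
        "assetId": asset_id,
        "sha256": hashlib.sha256(binary).hexdigest(),
        "anchoredAt": datetime.now(timezone.utc).isoformat(),
        "approvedBy": approver,
    }
    return manifest

def export_allowed(manifest: dict, current_binary: bytes) -> bool:
    """Block export if the attached artwork no longer matches the anchored hash,
    forcing revalidation and re-approval before shipping."""
    return manifest["artwork"]["sha256"] == hashlib.sha256(current_binary).hexdigest()
```

Because the hash covers the exact bytes, any post-approval replacement or re-encode of the file fails the byte-for-byte comparison and blocks the export.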
Include Anchored Artwork in Downstream Delivery Payloads
Given a release with anchored artwork (AssetID + Hash) in the export manifest When generating any downstream delivery payload (e.g., DDEX, partner API, ZIP export) Then the payload contains the same AssetID reference and content hash And the delivered or referenced binary matches the anchored hash byte-for-byte And partner-specific artwork metadata fields reflect the anchored identifiers
Block Export on Post-Approval Artwork Change
Given a release with anchored artwork And the underlying artwork file or metadata is changed or replaced When an export or delivery is initiated without revalidation and reapproval Then the system blocks the export And displays an error that the anchored artwork has changed and revalidation is required And requires re-running ArtCheck and explicit re-approval before export can proceed
Watermarkable Review Links Reflect Anchored Artwork
Given a release with anchored artwork When generating watermarkable review links Then the cover displayed in the review link is the anchored artwork binary And the link metadata includes the anchored AssetID and Hash And if artwork changes prior to reapproval, previously issued review links continue to display the last anchored artwork
Release Checklist Shows Anchored Artwork Status
Given a release with or without anchored artwork When viewing the Release Checklist Then the checklist displays an item "Artwork Anchored" with status Not Started, Pending Approval, Anchored, or Requires Revalidation And when anchored, it shows the AssetID, Hash, approver, and timestamp And the checklist status updates automatically upon approval or change detection
Immutable Audit Log of Approvals and Changes
Given any approval or change event related to artwork anchoring When the event occurs Then an append-only audit log entry is recorded with release ID, asset ID, content hash, event type, actor, timestamp, and reason And audit entries cannot be edited or deleted via API or UI And retrieving the audit log returns a verifiable sequence where each entry includes a cryptographic link to the previous entry
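The "cryptographic link to the previous entry" can be sketched as a hash chain: each entry stores the previous entry's hash, so any in-place edit breaks verification of every subsequent link (entry field names are illustrative):

```python
import hashlib
import json
from typing import List

GENESIS = "0" * 64  # previous-hash value for the first entry

def _entry_hash(body: dict) -> str:
    payload = {k: v for k, v in body.items() if k != "entryHash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_entry(log: List[dict], entry: dict) -> List[dict]:
    """Append an audit entry that carries a hash of the previous entry."""
    body = dict(entry, prevHash=log[-1]["entryHash"] if log else GENESIS)
    body["entryHash"] = _entry_hash(body)
    log.append(body)
    return log

def verify_chain(log: List[dict]) -> bool:
    """Recompute every link; any edited or reordered entry fails verification."""
    prev = GENESIS
    for e in log:
        if e["prevHash"] != prev or e["entryHash"] != _entry_hash(e):
            return False
        prev = e["entryHash"]
    return True
```

Tamper-evidence here depends on the chain head (the latest `entryHash`) being stored or attested outside the log itself.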
Revalidation Flow on Artwork Update
Given anchored artwork exists for a release When a user attempts to replace or edit the artwork Then the system flags the release as Requires Revalidation And automatically queues ArtCheck for the new binary And upon passing, requires explicit approval to compute a new hash and update the export manifest, superseding the prior anchor while preserving audit history
Manual Override with Audit Trail
"As a product owner, I want the ability to override failed checks with audit trails so that we can proceed in edge cases while maintaining accountability."
Description

Enable authorized roles to override failed checks by submitting a justification and optional supporting documents (e.g., distributor waiver). Support destination-scoped overrides, configurable risk levels, and optional second-approver requirements for high-risk cases. Log who, what, when, why, and where in an immutable audit trail, and include override annotations in compliance reports and export manifests. Notify stakeholders of overrides and expose them in review links for transparency. Provides operational flexibility without sacrificing accountability.

Acceptance Criteria
Authorized Override with Justification and Attachments
Given an artwork asset has one or more failed ArtCheck rules And the current user holds an authorized override role When the user selects specific failed check(s) to override And enters a non-empty justification of at least 10 characters And optionally uploads up to 3 supporting files (PDF/JPG/PNG/GIF) each ≤ 25 MB Then the system validates inputs and persists the override request And if the override does not require a second approver per policy, the override is applied immediately to the selected check(s) And if the user lacks authorization, the system returns 403 and does not persist any change And if validation fails (e.g., missing justification or invalid attachment), the system returns 422 with field-level errors and no override is created
Destination-Scoped Override Application
Given a failed ArtCheck rule that blocks multiple distribution destinations And an authorized user initiates an override When the user scopes the override to one or more specific destinations (e.g., Spotify, Apple Music) Then the override applies only to the selected destinations for the specified check(s) And export attempts to selected destinations bypass the failed check(s) with an override annotation And export attempts to non-selected destinations remain blocked by the failed check(s) And compliance views display per-destination override status for the check(s)
Risk Level and Second-Approver Workflow
Given override risk levels are configured (Low, Medium, High) And a submitted override is classified as High risk per current policy When the requester submits the override Then the system requires selection of a second approver with an authorized role who is not the requester And the override status is set to Pending until the second approver acts And when the second approver approves, the override is applied; when rejected, the override is not applied and the requester is notified And all actions (request, approve, reject) are recorded with reasons where applicable
Immutable Audit Trail Capture
Given any override lifecycle event (create, approve, reject, cancel) When the event is committed Then the system writes an append-only audit record containing: action type, override ID, asset ID, asset version, project/release ID, failed check ID(s), destination(s), risk level, requester user ID and role, approver user ID and role (if any), justification text, attachment file names and content hashes, event timestamp in UTC ISO-8601, source IP, and user agent And the audit store rejects edits and deletions (returns 403) and logs such attempts as separate audit events And each audit record includes a cryptographic hash and previous-record hash to provide tamper-evidence And authorized users can retrieve audit records filtered by asset, destination, date range, and status
Annotations in Compliance Reports and Export Manifests
Given an override exists for one or more failed ArtCheck checks When generating compliance reports or export manifests Then each affected item includes an overrides array with entries containing: check ID, destinations, risk level, status (applied|pending|rejected), requestedBy (id, name), approvedBy (id, name) when applicable, justification summary (first 120 chars), event timestamps, and attachment references And the manifest remains anchored to the approved image identifier while including the override annotations And reports and manifests validate against the documented schema; missing required override fields produce a schema validation error
Stakeholder Notifications and Review Link Transparency
Given an override is created, approved, or rejected When the event occurs Then configured stakeholders (e.g., asset owner, project managers, designated reviewers) receive an in-app notification and an email within 60 seconds containing asset, check(s), destination(s), risk level, requester, approver (if applicable), and justification summary And review links display a visible override banner and per-destination badges indicating which checks were overridden and why And review link recipients (non-authenticated) can view override details without editing rights And view events for override details are captured in per-recipient analytics
Configurable Risk Policies and Permissions
Given an administrator manages override policies When the admin configures risk level mappings per check category and destination, and toggles second-approver requirement for High risk Then the new policy is validated, versioned, and applied to subsequent override requests And non-admin users cannot change policies (403) And all policy changes are recorded in the audit trail with who, what changed, and when And permission checks ensure only users with designated roles can request, approve, or reject overrides
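The tamper-evident audit trail described in the criteria above, where each record carries a cryptographic hash of its predecessor, can be sketched as a simple hash chain. Field names and the canonical-JSON hashing scheme are illustrative assumptions.

```python
import hashlib
import json

# Minimal sketch of an append-only audit log with tamper evidence: each
# record embeds the previous record's hash, so editing any entry breaks
# the chain. The serialization scheme here is an assumption.

def append_audit_record(chain: list[dict], event: dict) -> dict:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    ).hexdigest()
    record = {**body, "hash": digest}
    chain.append(record)
    return record

def verify_chain(chain: list[dict]) -> bool:
    prev = "0" * 64
    for rec in chain:
        body = {"event": rec["event"], "prev_hash": rec["prev_hash"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
        ).hexdigest()
        if rec["prev_hash"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True
```

Any in-place edit to a stored event, or removal of a record, makes `verify_chain` fail, which is what lets auditors confirm the sequence is intact.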

ProofChain

Cryptographically signs the export manifest (per-file checksums in a tamper-evident tree) and stamps the proof PDF with a time-based token and QR link to a hosted verification page. Distributors and auditors can verify integrity and provenance instantly, reducing disputes and back-and-forth.

Requirements

Merkle Manifest Generation
"As an indie label manager, I want a canonical manifest with a tamper-evident root so that any stakeholder can prove the exact files in a release haven’t been altered."
Description

Generate a canonical export manifest for each release that includes per-file SHA-256 checksums, file paths, sizes, and MIME types, organized in a deterministic order and committed to a tamper-evident Merkle tree. Produce a JSON manifest with schema versioning and the Merkle root hash that uniquely identifies the exported asset set. Integrate manifest creation into the existing release export pipeline, ensuring normalization rules (e.g., path casing, line endings, timestamp handling) are applied consistently so the same input yields the same root hash across environments.
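A minimal sketch of the deterministic manifest and Merkle root computation, assuming leaves follow the sorted file order and odd nodes are promoted by hashing against an empty sibling; the real tree construction and schema may differ.

```python
import hashlib
import json

# Sketch: files sorted by path, hashed with SHA-256, committed to a
# simple binary Merkle tree, serialized canonically (sorted keys, no
# insignificant whitespace) so identical inputs yield identical bytes.

def merkle_root(leaf_hashes: list[bytes]) -> str:
    if not leaf_hashes:
        return hashlib.sha256(b"").hexdigest()
    level = leaf_hashes
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            pair = level[i] + (level[i + 1] if i + 1 < len(level) else b"")
            nxt.append(hashlib.sha256(pair).digest())
        level = nxt
    return level[0].hex()

def build_manifest(files: dict[str, bytes]) -> str:
    entries = []
    for path in sorted(files):
        data = files[path]
        entries.append({
            "path": path,
            "size": len(data),
            "sha256": hashlib.sha256(data).hexdigest(),
        })
    root = merkle_root([bytes.fromhex(e["sha256"]) for e in entries])
    manifest = {"schemaVersion": "1.0.0", "merkleRoot": root, "files": entries}
    return json.dumps(manifest, sort_keys=True, separators=(",", ":"))
```

Because the serialization is canonical and the leaf order is derived from sorted paths, the same inputs produce byte-identical manifests regardless of insertion order or platform.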

Acceptance Criteria
Deterministic Canonical Manifest JSON
Given a fixed set of input files and metadata When the manifest is generated on any machine Then the manifest JSON bytes are identical (byte-for-byte) across runs and environments And the JSON is UTF-8 without BOM, uses LF line endings, and has lexicographically sorted keys with no insignificant whitespace And manifest.schemaVersion equals "1.0.0" (semver string), manifest.createdAt is UTC ISO 8601 with Z suffix, and manifest.merkleRoot is a 64-char lowercase hex string And manifest.files is an array sorted ascending by normalized path and each item contains: path (relative, forward slashes, no leading slash), size (integer bytes), mimeType (IANA type/subtype), sha256 (64-char lowercase hex)
SHA-256 Checksums Match File Contents
Given an exported file in the release output When its SHA-256 is computed using a standard tool (e.g., sha256sum) on the exported bytes Then the value equals the corresponding files[].sha256 in the manifest And files[].size equals the exact byte length of the exported file And files[].mimeType is determined by content sniffing with extension fallback and defaults to application/octet-stream when unknown
Merkle Root Recomputes and Detects Tampering
Given the manifest.json and the exported files When recomputing the Merkle tree using the files array order as leaf order and each leaf = hex-decoded sha256(file) Then the recomputed root equals manifest.merkleRoot When any file content is modified, a file is added/removed, or a file path in the manifest is changed Then the recomputed root does not equal manifest.merkleRoot
Normalization Rules Applied (Paths, Line Endings, Timestamps)
Given source files with mixed path separators, casing, and Unicode forms When paths are normalized for the manifest Then paths use forward slashes, collapse '.' and '..', remove duplicate separators, are Unicode NFC-normalized, are lowercased, and are relative to the export root Given text assets (mimeType starting with text/ or equal to application/json or application/xml) When exporting Then line endings are normalized to LF in the exported bytes prior to hashing; binary files are exported and hashed as-is And manifest.createdAt is in UTC (e.g., 2025-08-19T12:00:00.000Z) with millisecond precision
Export Pipeline Integration and Artifact Placement
Given a release export job completes successfully When inspecting the export artifact directory Then a file named manifest.json exists at the root alongside exported assets And the job metadata contains the same merkleRoot value as manifest.merkleRoot And the ProofChain step receives merkleRoot via the pipeline handoff payload When manifest generation fails for any reason Then the export job fails and surfaces a clear error message including an error code and cause
Collision and Error Handling for Normalized Paths
Given two source files that normalize to the same path (case-insensitive or Unicode-equivalence collision) When attempting export Then the job fails before writing manifest.json with error code EXPORT_PATH_COLLISION and lists both original source paths Given a source file is missing or unreadable When attempting export Then the job fails with error code EXPORT_FILE_MISSING and identifies the file Given mimeType detection cannot determine a type When generating manifest Then files[].mimeType is set to application/octet-stream and processing continues
Cross-Environment Reproducibility (Windows/macOS/Linux)
Given the same release content is exported on Windows, macOS, and Linux using the same IndieVault version and settings When comparing the three manifest.json files and merkleRoot values Then all three manifests are byte-identical and the merkleRoot values are equal And all paths in each manifest use forward slashes regardless of source OS
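The path normalization rules in the criteria above (forward slashes, collapsed '.' and '..' segments, Unicode NFC, lowercasing, relative to the export root) can be sketched with the standard library:

```python
import posixpath
import unicodedata

# Sketch of manifest path normalization per the criteria above.
# Collision detection (two sources normalizing to one path) would be
# layered on top of this by the export job.

def normalize_path(raw: str) -> str:
    p = raw.replace("\\", "/")            # forward slashes on all OSes
    p = posixpath.normpath(p)             # collapse '.', '..', '//'
    p = unicodedata.normalize("NFC", p)   # canonical Unicode form
    return p.lower().lstrip("/")          # lowercase, relative
```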
Secure Key Management & Signing
"As a workspace admin, I want manifests signed with securely managed keys so that proofs of integrity can be trusted and audited."
Description

Sign the Merkle root of each manifest using a workspace-scoped asymmetric key (e.g., Ed25519) stored in a managed KMS with role-based access, audit trails, and rotation policies. Embed the signature, public key identifier, and signature algorithm metadata into the manifest and proof artifacts. Enforce least-privilege permissions for sign operations, support scheduled and emergency key rotation, and preserve verification of historical proofs via key versioning and published public keys.
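Since the private key never leaves the KMS, a sketch can only mock the sign call; what it illustrates is how the signature metadata fields from the criteria below could be embedded in the manifest. `mock_kms_sign` is a placeholder, not a real Ed25519 signature, and the field names are assumptions drawn from the criteria.

```python
import base64
import datetime
import hashlib

def mock_kms_sign(key_id: str, key_version: int, message: bytes) -> bytes:
    # Stand-in for KMS.Sign over the Merkle root; a real deployment would
    # call the managed KMS, which performs Ed25519 signing internally.
    return hashlib.sha256(f"{key_id}:{key_version}".encode() + message).digest()

def attach_signature(manifest: dict, key_id: str, key_version: int) -> dict:
    merkle_root = manifest["merkleRoot"]
    signature = mock_kms_sign(key_id, key_version, bytes.fromhex(merkle_root))
    return {
        **manifest,
        "signature": base64.b64encode(signature).decode(),
        "alg": "Ed25519",
        "key_id": key_id,
        "key_version": key_version,
        "signed_at": datetime.datetime.now(datetime.timezone.utc)
            .isoformat(timespec="seconds").replace("+00:00", "Z"),
    }
```

Recording `key_id` and `key_version` alongside the signature is what allows historical proofs to verify after rotation: the verifier selects the published public key matching that version.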

Acceptance Criteria
Sign Manifest Merkle Root with Workspace-Scoped KMS Key
Given a finalized export manifest with computed Merkle root for a workspace When the system requests a signature from the managed KMS Then the KMS signs the Merkle root using the workspace-scoped key version And the manifest JSON includes fields: signature (base64), alg="Ed25519", key_id, key_version, signed_at (ISO 8601 UTC), merkle_root (hex) And the proof PDF stamp displays the same signature metadata plus a time-based token and QR link to the hosted verification URL And an audit log entry is written with actor_id, workspace_id, key_id, key_version, request_id, timestamp And 100% of completed exports contain a valid signature verified against the published public key for key_id+version
Enforce Least-Privilege Access for Sign Operations
Given a user without the Signer role or a service without kms:Sign on the specific workspace key When they attempt to invoke a sign operation Then the request is denied with 403 and reason "insufficient_permissions", and an audit event is recorded And only principals in the Signer role with kms:Sign on that key can sign And attempts to export or retrieve private key material are blocked and logged (no plaintext key leaves KMS) And all sign API calls are attributed to a principal and workspace_id
Scheduled Key Rotation with Backward-Compatible Verification
Given a rotation policy of 90 days is configured for the workspace key When the rotation date is reached Then a new key version (n+1) is created and set Active within 10 minutes And new manifests are signed with version n+1 And verification of artifacts signed with versions n and n+1 succeeds via the verification page using the published public keys And the old key version remains verify-only and is disabled for signing And a rotation audit event is recorded and notifications sent to workspace owners
Emergency Key Rotation and Revocation
Given an emergency rotation is triggered by an admin When the rotation command is executed Then the current key version is disabled for signing within 5 minutes and a new version becomes Active And in-flight signing tasks using the disabled version fail fast and auto-retry with the new version within 2 retries And verification of historical signatures produced by the disabled version continues to pass And admins receive confirmation and incident log captures the action with reason code
Detect Tampering via Merkle Root Verification
Given a proof artifact and associated manifest When any file covered by the manifest is modified, added, or removed Then recomputing the Merkle root no longer matches the signed root and verification page displays "Integrity failed" with the mismatched node path And when no changes are made, verification displays "Verified" with the signed root and key_id+version And attempts to alter embedded metadata without access to the signing key fail verification
Signing Path Reliability and Latency
Given normal operating conditions When 10,000 sign operations are executed Then KMS.Sign success rate is >= 99.9% and p95 latency <= 300 ms, p99 <= 800 ms as measured by telemetry And if KMS is unavailable, export jobs are marked "Pending Sign", withheld from distribution, and retried for up to 24 hours with exponential backoff (max 8 attempts) And an on-call alert is triggered if failure rate > 1% over 5 minutes
Complete, Immutable Audit Trails
Given the system is in operation When querying audit logs for a time range Then every sign, rotation (scheduled or emergency), and permission change event is present with fields: event_type, timestamp, actor_id, workspace_id, key_id, key_version, request_id, outcome And logs are immutable (WORM-storage) and retained >= 365 days And access to logs requires Auditor role and all reads are themselves audited And an exportable CSV report for a date range can be generated within 60 seconds
RFC3161 Time-Stamping Token
"As a distributor, I want cryptographic time evidence attached to the proof so that I can verify when the release assets were finalized."
Description

Obtain an RFC 3161-compliant time-stamp token for each signed manifest to prove existence at a specific time, recording the TSA info, serial number, and hash algorithm. Cache and bundle the token with the proof artifacts and store the token chain for later verification. Provide configuration to use an external TSA or a default trusted provider, with resilient retries, alerting on failures, and a durable queue for re-issuing tokens after outages.
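The retry-then-fallback behavior can be sketched with mocked TSA calls; real requests would be RFC 3161 TSQ/TSR exchanges over HTTP, and the attempt count and backoff parameters here are placeholders, not the configured values.

```python
import time

# Sketch: try the configured (primary) TSA with exponential backoff,
# then fall back to the default provider once attempts are exhausted.
# primary/fallback are callables standing in for TSQ/TSR round trips.

def request_token(primary, fallback, max_attempts=3, base_delay=0.01):
    """Return (token, source) where source is 'primary' or 'fallback'."""
    for attempt in range(max_attempts):
        try:
            return primary(), "primary"
        except Exception:
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    # Primary exhausted: a real system would audit-log the fallback here.
    return fallback(), "fallback"
```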

Acceptance Criteria
Token Acquisition and Validation for Signed Manifest
Given a release manifest has been signed and its Merkle root hash is computed And a TSA endpoint is configured When a time-stamp request (TSQ) is submitted with the manifest hash using the selected hash algorithm Then a time-stamp response (TSR) is received within 10 seconds And the TimeStampToken signature verifies against the TSA certificate chain in the trust store And the token includes policy OID, serial number, genTime, nonce, and messageImprint matching the manifest hash and algorithm And the token hash algorithm, TSA name/URL, and serial number are recorded in manifest metadata
Bundle and Cache Token with Proof Artifacts
Given a valid RFC3161 token is obtained for a manifest When exporting proof artifacts Then the token (binary DER .tsr) and a machine-readable metadata file are included in the proof bundle under a deterministic path And a copy of the token is cached in secure object storage keyed by manifest hash and TSA identifier And repeated exports reuse the cached token if the manifest hash and TSA configuration match and the token validates And the proof PDF footer displays the TSA name and token serial number
External TSA Configuration with Default Provider Fallback
Given workspace settings specify either an external TSA (URL/credentials) or a default trusted provider When the configured TSA fails with retriable errors for 3 consecutive attempts Then the system falls back to the default trusted TSA provider and records the fallback in audit logs And all outbound TSA calls enforce TLS 1.2+ with certificate revocation checking And TSA credentials are stored encrypted and are not logged And the selected TSA identifier is stored alongside the token
Retry, Alert, and Queue on TSA Failures
Given a token request fails due to transient errors When retries are executed with exponential backoff and jitter for up to 15 minutes total (max backoff 2 minutes) Then, if all retries fail, a high-priority alert is emitted and the export status is marked "Pending Token" And the manifest is added to a durable queue for re-issue with at-least-once processing semantics And upon recovery, queued requests are processed automatically and tokens are backfilled into existing proof bundles and verification pages
Persist Token and TSA Certificate Chain for Future Verification
Given a token is received from a TSA When persisting artifacts Then store the raw TSR, the TSA certificate chain at issuance, and OCSP/CRL responses And record the request hash algorithm and request nonce used And an internal verify() returns valid only if the messageImprint matches the manifest root, the signature verifies, genTime is trusted, and the chain validates as of issuance time And verification results are exposed via API and on the hosted verification page linked by QR
Hash Algorithm and Request Integrity Controls
Given a time-stamp request is being prepared When selecting a hash algorithm Then only SHA-256 or SHA-512 are accepted; MD5/SHA-1 are rejected with a clear error And a cryptographically strong unique nonce is included per request and must be echoed in the token And genTime is checked within ±5 minutes of authoritative NTP time; larger skew flags a warning And token size is limited to 100 KB to mitigate abuse
Proof PDF with Embedded QR
"As an artist manager, I want a single proof document with a scannable link so that auditors can quickly verify integrity without handling raw manifests."
Description

Generate a branded, human-readable PDF summarizing the release metadata, Merkle root, signature details, and time-stamp token, and embed a QR code that links to a hosted verification page. Ensure the PDF is immutable post-issuance, watermarked with the workspace identity, includes versioned schema references, and is stored alongside the release export. Support light/dark branding and accessible layout for auditors.

Acceptance Criteria
Generate Proof PDF with Required Fields and Branding
Given a finalized release with export manifest and workspace branding settings When the Proof PDF is generated Then the PDF includes release metadata (title, artist, catalog ID, release date), the Merkle root (hex), signature algorithm and signer identity, and the time-stamp token (issuer, serial, UTC time) And the PDF includes a versioned schema reference (name and semantic version) and a resolvable schema URL And the workspace identity watermark is visible on every page at 10–25% opacity without obscuring content And the PDF renders without errors in at least two common viewers (e.g., Adobe Acrobat Reader, Apple Preview)
QR Scanning Resolves to Verification Page and Shows Integrity Status
Given a generated Proof PDF with embedded QR code When the QR code is scanned Then it resolves via HTTPS to a hosted verification URL containing a non-guessable token And the page loads within 2 seconds (p95) and displays the release ID, Merkle root, and signature/timestamp verification status And if the token is valid and data matches, the status is "Verified" with green indicator; if invalid or expired, the status is "Invalid/Expired" with red indicator And the verification page provides a link to download the current manifest and a hash comparison view
PDF Immutability and Digital Signature Validation
Given a Proof PDF has been issued When the document is inspected Then it contains an embedded digital signature that validates as unmodified And attempting to edit or resave the PDF causes the signature to show as invalid in a compliant viewer And the platform prevents overwriting the issued file; any reissue creates a new version with incremented version metadata and audit trail entry And the system displays and stores the PDF SHA-256 hash alongside the release
Accessible Proof PDF for Auditors (PDF/UA & WCAG)
Given the Proof PDF is generated When evaluated for accessibility Then the PDF is tagged with a correct reading order, selectable text (no image-only pages), document title, and primary language metadata And all images/logos have alt text; headings follow a logical hierarchy; tables (if any) include headers And color contrast for text and key UI elements meets WCAG 2.1 AA (≥4.5:1) And the document passes automated PDF/UA checks with zero critical errors (e.g., PAC 2021)
Storage and Association with Release Export
Given a release export is created When the Proof PDF is generated Then it is stored in the same release export folder with a deterministic name including release ID and manifest version And it appears in the release "Files" UI labeled "Proof PDF" and is included in the downloadable export archive And it is retrievable via the API endpoint for release assets with correct content-type (application/pdf) And access is restricted to users with Export permission, and generation/download events are audit-logged
Light/Dark Theme Application and QR Scannability
Given a workspace with light or dark branding selected When the Proof PDF is generated Then the PDF applies the corresponding theme to headers, accents, and watermark while preserving readability And all text and key elements meet contrast thresholds (light and dark variants) per WCAG 2.1 AA And the QR code maintains a sufficient quiet zone and error correction level Q or higher and is scannable in both themes using at least two mainstream scanner apps without failure
Public Verification Page & API
"As a distributor’s QA engineer, I want an online verifier so that I can independently confirm integrity and provenance before ingesting a release."
Description

Provide a hosted verification page (via QR/deep link) and REST API that accept a manifest or sample files, recompute checksums, validate the Merkle tree, verify the signature against published public keys, and confirm the RFC 3161 time-stamp token. Display clear pass/fail results, issuer details, key version, issuance time, and revocation status without exposing actual asset content. Implement rate limiting, link expiration, and privacy controls; do not persist uploaded files beyond verification.
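The sample-file re-verification step might look like the following sketch: supplied files are hashed, compared against the manifest, and nothing is retained afterward. The result shape is illustrative, not the documented API schema.

```python
import hashlib

# Sketch of per-file checksum re-verification against a manifest.
# Files absent from the manifest are rejected with not_in_manifest;
# any hash mismatch fails the overall result.

def verify_samples(manifest_files: list[dict], samples: dict[str, bytes]) -> dict:
    by_path = {f["path"]: f["sha256"] for f in manifest_files}
    mismatches, rejected = [], []
    for path, data in samples.items():
        if path not in by_path:
            rejected.append({"path": path, "reason": "not_in_manifest"})
            continue
        actual = hashlib.sha256(data).hexdigest()
        if actual != by_path[path]:
            mismatches.append({"path": path, "expected": by_path[path], "actual": actual})
    return {
        "result": "fail" if mismatches else "pass",
        "mismatches": mismatches,
        "rejected": rejected,
    }
```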

Acceptance Criteria
QR/Deep Link Loads Public Verification Page
Given a valid ProofChain QR or deep link referencing a verification token When a verifier opens the link over HTTPS Then the page resolves without authentication and shows an overall Pass or Fail result And the page displays issuer name, key version, issuance time, and revocation status And the page verifies the referenced manifest and shows per-check statuses for Merkle tree, signature, and RFC 3161 time-stamp token And no asset file contents or download links are exposed
REST API Verifies Manifest Payload
Given a well-formed manifest and associated proof payload is submitted via POST /verify When the request is processed Then the API returns HTTP 200 with JSON {result: "pass"} and fields issuer, key_version, issuance_time, revocation_status, and per_check details And the service does not persist the uploaded payload beyond verification Given a malformed or unsupported payload When submitted to POST /verify Then the API returns HTTP 400 with error code invalid_manifest Given a well-formed but tampered manifest/proof When submitted to POST /verify Then the API returns HTTP 422 with {result: "fail"} and reasons including which checks failed
Sample Files Checksum Re-Verification
Given a loaded manifest on the verification page or API When one or more sample files that correspond to entries in the manifest are supplied Then the system recomputes their checksums and compares them to the manifest values And it reports per-file match or mismatch without persisting the files after the request completes And any mismatch marks the overall result as Fail and lists mismatching file paths and hashes And files not present in the manifest are rejected with reason not_in_manifest and are not retained
Signature Validation Against Published Public Keys
Given a ProofChain-signed manifest proof When verification runs Then the signature validates against published public keys and the matching key_version is displayed And issuer details are displayed for a trusted signature And if the signing key is unknown or revoked, the result is Fail with reason key_untrusted and revocation_status set to revoked
RFC 3161 Time-Stamp Token Verification
Given an RFC 3161 time-stamp token embedded in the proof When verification runs Then the token signature and certificate chain validate to a trusted TSA And the issuance_time is displayed in ISO 8601 UTC And if validation fails, the result is Fail with reason timestamp_invalid
Rate Limiting and Abuse Protection
Given repeated verification requests from the same client or token exceed configured limits When an additional request is made Then the API returns HTTP 429 with a Retry-After header and the page displays a rate limit message And requests under the limit are processed normally with appropriate HTTP status codes
Link Expiration and Privacy Controls
Given a verification link with an embedded expiration timestamp When the link is accessed after expiration Then the system returns an Expired state (HTTP 410 or 404) and does not run verification And any uploaded manifests or sample files are stored only transiently and purged immediately after verification And no temporary files are accessible after the request completes And the page never renders or serves raw asset contents, only non-sensitive verification metadata
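A minimal in-memory sketch of the rate limiting above, using a fixed window per client and returning a Retry-After hint; a production service would use a shared store and configurable limits.

```python
import time

# Sketch of a fixed-window rate limiter. check() returns
# (allowed, retry_after_seconds); retry_after maps to the
# Retry-After header on a 429 response.

class RateLimiter:
    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.counts: dict[str, tuple[float, int]] = {}

    def check(self, client_id: str, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.counts.get(client_id, (now, 0))
        if now - start >= self.window:
            start, count = now, 0  # new window
        if count >= self.limit:
            return False, self.window - (now - start)
        self.counts[client_id] = (start, count + 1)
        return True, 0.0
```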
Audit Logging & Analytics
"As a compliance lead, I want detailed audit trails and verification analytics so that I can demonstrate due diligence and investigate anomalies."
Description

Record append-only audit events for all sign, rotate, revoke, and verify actions including actor, workspace, IP, user agent, key identifiers, and outcomes. Surface per-recipient verification analytics for proof links, exportable as CSV and accessible via API. Provide retention policies, permissioned access, and webhooks for significant events (e.g., verification failure, key rotation) to support compliance and incident response.
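The CSV export requirements in the criteria below (UTF-8, LF newlines, comma delimiter, RFC 4180 quoting) map directly onto Python's csv module; the column subset here is illustrative, not the full documented column list.

```python
import csv
import io

# Sketch of analytics CSV export with RFC 4180 quoting via csv.DictWriter.
# Column names follow the criteria; this subset is for illustration.

COLUMNS = ["workspace_id", "proof_id", "link_id", "event_type", "occurred_at", "outcome"]

def export_csv(rows: list[dict]) -> str:
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=COLUMNS, lineterminator="\n", extrasaction="ignore"
    )
    writer.writeheader()  # header row first, columns in order
    writer.writerows(rows)
    return buf.getvalue()  # encode as UTF-8 when writing to storage
```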

Acceptance Criteria
Append-Only Audit Trail for Proof Actions
Given an authenticated actor performs a sign, key rotation, revoke, or verify action within workspace W, When the action completes (success or failure), Then a single audit event is appended containing: event_id (UUIDv4), action, actor_id, actor_type (user|api), workspace_id, occurred_at (ISO 8601 UTC), ip_address, user_agent, key_id (if applicable), proof_id or export_id, outcome (success|failure), error_code (if failure), request_id, hash_prev, hash_curr. Then audit events are immutable: any attempt to update or delete an event returns 403 and is itself logged as an audit event. Then the audit log forms a verifiable hash chain; calling GET /api/v1/audit-events/verify returns 200 with chain_status=valid for the specified range. Then the event is queryable via UI and API within 5 seconds of action completion (p95). Then concurrent actions from the same actor produce distinct event_ids and strictly increasing occurred_at timestamps within the same node.
Per-Recipient Verification Analytics for Proof Links
Given unique review links are generated per recipient, When a recipient opens the link or attempts a verification (via URL or QR), Then an analytics event is recorded with: link_id, recipient_id (or alias), occurred_at (ISO 8601 UTC), ip_address, user_agent, event_type (open|verify|attempt_after_expiry|attempt_after_revoke), outcome (success|failure). Then the analytics UI displays per-recipient aggregates (first_seen_at, last_seen_at, total_opens, total_verifications, unique_ip_count, last_user_agent) that update within 10 seconds (p95) of the triggering event. Then events with user_agent matching the IAB/industry bot lists are flagged bot=true and excluded from default aggregates, with a toggle to include them. Then attempts on expired or revoked links are counted as failures and surfaced with reason {expired|revoked}. Then per-recipient analytics are accessible only to authorized workspace users and never exposed to recipients.
CSV Export of Verification Analytics
Given a user with the Analytics.Export permission requests a CSV export for a specified date range and filters (proof_id, link_id, recipient_id, outcome, bot_flag), When the export is submitted, Then an asynchronous job is created and a downloadable CSV is produced within 5 minutes for up to 500,000 rows. Then the CSV includes a header and the columns in order: workspace_id, proof_id, link_id, recipient_id, event_type, occurred_at (UTC ISO 8601), ip_address, country_code, user_agent, outcome, reason, bot_flag, request_id. Then the export respects all applied filters; the CSV row count equals the count returned by the UI/API for the same query. Then the file is UTF-8 encoded with LF newlines, comma delimiter, and RFC 4180 quoting; embedded quotes are properly escaped. Then the download URL requires authentication, expires after 24 hours, and each download is recorded as an audit event.
Audit and Analytics API Endpoints
Given a workspace-scoped API token with scopes (audit:read, analytics:read), When calling GET /api/v1/audit-events and GET /api/v1/proof-links/{id}/analytics with valid filters (date_from/date_to, action, outcome, key_id, recipient_id, link_id), Then the API responds 200 with cursor-based pagination (items, next_cursor) and total_count. Then responses contain the same fields and values as shown in the UI; timestamps are UTC ISO 8601; types are correct (integers for counts, booleans true/false). Then unauthorized requests return 401 and insufficient-scope requests return 403; rate limiting is enforced at 600 requests/minute per token with Retry-After headers on 429. Then p95 latency for page_size=100 is < 300 ms and p99 < 800 ms within the same region. Then an OpenAPI 3.0 spec describing these endpoints is available at /api/openapi.json and contract tests validate conformance.
Retention Policies and Legal Hold
Given a workspace owner sets audit and analytics retention periods between 90 and 3650 days, When the setting is saved, Then the change is validated, persisted, and recorded as an audit event capturing previous_value and new_value. Then a daily retention job permanently deletes records older than the configured period, excluding any under legal_hold=true; the job produces a summary with deleted_counts per dataset. Then legal hold can be applied by proof_id or for the entire workspace; while active, purges skip matching records and the hold status is visible via UI/API. Then before purging a dataset exceeding 10,000 records, the UI offers a one-click export; if initiated, purge is deferred for that dataset until export completes (max 24 hours). Then hash-chain verification returns chain_status=pruned for ranges spanning purged segments and chain_status=valid for retained segments.
Role-Based Access and Data Minimization
Given role-based permissions are configured, When a user attempts to access audit logs or analytics, Then access is granted as follows: Owner/Admin (all), Auditor (audit only), Analyst (analytics only); others receive 403 and no data is returned. Then users can export data only if they hold the Export permission; API tokens and exports are scoped to the workspace and dataset needed. Then when the workspace setting "Mask network metadata" is enabled, ip_address is returned as a salted hash and user_agent is abbreviated to family/version; raw values are hidden in UI/API. Then all access to logs, analytics, exports, and settings changes are recorded as audit events with actor, resource, action, and outcome.
Webhooks for Significant Events
Given a webhook subscription is configured with a verified HTTPS URL and secret, When events occur (verification_failed, verification_succeeded, key_rotated, key_revoked, proof_signed), Then a POST is delivered within 10 seconds (p95) with JSON payload {event_id, event_type, occurred_at (UTC), workspace_id, resource_ids, outcome, reason} and header X-IndieVault-Signature (HMAC-SHA256 over the body). Then non-2xx responses trigger retries with exponential backoff at 1m, 5m, 15m, 30m, 60m up to 10 attempts; deliveries are idempotent via X-Idempotency-Key and after final failure the event is moved to a dead-letter queue. Then webhook delivery logs are available with last_attempt_at, last_status, attempt_count, next_attempt_at and can be filtered and exported. Then security requirements are enforced: TLS 1.2+, optional IP allowlist, and write-only secrets; a Test Delivery button sends a signed sample and displays the result.
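The webhook signature scheme above (HMAC-SHA256 over the raw request body, carried in X-IndieVault-Signature) can be sketched as follows; helper names are illustrative.

```python
import hashlib
import hmac

# Sketch of webhook body signing and receiver-side verification.
# The signature is computed over the exact raw bytes delivered.

def sign_webhook_body(secret: bytes, body: bytes) -> str:
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_webhook(secret: bytes, body: bytes, signature_header: str) -> bool:
    expected = sign_webhook_body(secret, body)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature_header)
```

A receiver must verify against the raw bytes before any JSON parsing or re-serialization, since even whitespace changes invalidate the signature.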
Proof Revocation & Supersession
"As a catalog owner, I want to revoke and replace proofs when necessary so that downstream parties don’t rely on outdated or compromised attestations."
Description

Allow admins to revoke a proof when keys are compromised or assets are superseded, publishing a signed revocation list consulted by the verification service. Support issuing a superseding proof that references the previous manifest, with the verifier clearly indicating revoked versus current status. Preserve historical artifacts for traceability while preventing revoked proofs from validating as current.

Acceptance Criteria
Admin Revokes Compromised Proof
Given an existing proof with a unique ID and an authenticated platform admin with MFA When the admin submits a revoke action with reason "key_compromised" and an optional note Then the system records the revocation with UTC timestamp, actor ID, reason, and proof ID And publishes an updated signed revocation list entry containing the proof ID, reason code, and revocation timestamp within 60 seconds And the proof’s verification status returns "revoked" via API and UI within 60 seconds And an immutable audit log entry is created and visible to organization auditors And the original proof PDF remains downloadable but the verification page displays a visible "Revoked" watermark
Verifier Shows Revoked Status for Old Proof
Given a revoked proof PDF with a QR link or an uploaded proof manifest When a distributor or auditor verifies it using the hosted verification page or API Then the verification result shows status "Revoked" with the revocation reason and date in the UI And the cryptographic signature verification details are displayed, indicating integrity of files but non-current status And the API response returns HTTP 200 with status=revoked and current=false in the payload And no "Valid/Current" badge is displayed anywhere in the UI And the page includes a link to a superseding proof if one exists
Issue Superseding Proof Referencing Prior Manifest
Given an existing proof A and updated assets requiring a new proof When an admin creates proof B and selects "Supersedes" referencing proof A Then proof B embeds proof A’s ID and manifest root hash in a "supersedes" field And proof A’s verification page displays "Superseded by" with a link to proof B And verifying proof B shows a lineage chain A -> B with both signatures valid And only proof B is marked current=true; proof A is current=false And attempts to set proof A to current are blocked with a validation error
Publish and Sign Revocation List
Given a publicly accessible revocation list endpoint When any proof is revoked or superseded Then a new list version is published with a monotonically increasing sequence number, issueTime, and nextUpdate And the list is signed with the platform’s ProofChain signing key and served over HTTPS with Cache-Control and ETag headers And the list schema includes entries: proofId, status (revoked|superseded), reason, revokedAt, supersededBy (optional) And the list’s signature verifies against the published public key using the reference verifier And any alteration of list content causes signature verification to fail
Verifier Consults Revocation List with Cache and TTL
Given the verification service maintains a cached revocation list When a proof is verified Then the service refreshes the list if the cache is older than 5 minutes or past nextUpdate And if the list cannot be fetched, the result is "Indeterminate" with reason "revocation_status_unavailable" and never "Current" And if the proof appears as revoked or superseded in the list, the status is returned accordingly in the same verification response And metrics record fetch latency, cache hits/misses, and last successful update timestamp
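The cache-and-TTL logic above might be implemented as in this sketch; the fetcher callable and the list shape are assumptions, and a failed refresh always degrades to "Indeterminate" rather than "Current", per the criteria.

```python
import time

class RevocationChecker:
    """Sketch of the verifier-side revocation cache: refresh when the
    cache is older than the TTL or past nextUpdate; if the list cannot
    be fetched, report "Indeterminate", never "Current"."""
    TTL_SECONDS = 300  # 5 minutes, per the acceptance criteria

    def __init__(self, fetch_list):
        # fetch_list: callable returning {"entries": {proofId: {...}}, "nextUpdate": ts}
        self.fetch_list = fetch_list
        self.cache = None
        self.fetched_at = 0.0

    def status(self, proof_id, now=None):
        now = now if now is not None else time.time()
        stale = (self.cache is None
                 or now - self.fetched_at > self.TTL_SECONDS
                 or now > self.cache["nextUpdate"])
        if stale:
            try:
                self.cache = self.fetch_list()
                self.fetched_at = now
            except Exception:
                return {"status": "Indeterminate",
                        "reason": "revocation_status_unavailable"}
        entry = self.cache["entries"].get(proof_id)
        if entry:
            return {"status": entry["status"].capitalize(),
                    "reason": entry.get("reason")}
        return {"status": "Current", "reason": None}
```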
Preserve History Without Validating as Current
Given a proof that is revoked or superseded When a user views or downloads the historical proof PDF or manifest Then the artifacts are retrievable and their checksums/signatures validate And the verification page and PDF show a visible "Revoked" or "Superseded" banner/watermark and indicate the successor when applicable And API operations that would treat the proof as current (e.g., generating "current" review links) are blocked with a clear error And all access and download events are audit-logged with actor, timestamp, and artifact ID

Preflight Sandbox

Share a scoped ‘dry‑run’ link with collaborators to test their uploads against a selected Delivery Profile without touching your master project. They see pass/fail results with fix tips, so assets arrive pre‑compliant and last‑minute preflight firefighting disappears.

Requirements

Profile Snapshot Selector
"As a project owner, I want to bind a sandbox to a delivery profile snapshot so that collaborators test against the exact rules required for release."
Description

Enable selection of an existing Delivery Profile and creation of an immutable snapshot that the sandbox will reference for all checks. Surface a human-readable summary of rules (required assets, file formats, sample rates/bit depths, loudness targets, naming conventions, metadata fields, artwork dimensions/color profile, stem labels/count) to collaborators. Preserve backward compatibility by binding the sandbox to the snapshot even if the underlying profile changes later. Provide localized rule labels and machine-readable rule IDs for validation and analytics. Guard against missing or invalid profiles and fail gracefully with clear owner prompts.
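Snapshot immutability can be anchored in a content hash, as in this sketch; the canonical-JSON encoding and the `snap_`-prefixed ID format are assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def create_snapshot(profile_id: str, rules: dict, schema_version: str) -> dict:
    """Sketch of immutable snapshot creation: the checksum is a SHA-256
    over a canonical (sorted-key) JSON encoding of the rules plus the
    schema version, so any later rule change yields a different hash."""
    canonical = json.dumps({"schemaVersion": schema_version, "rules": rules},
                           sort_keys=True, separators=(",", ":"))
    checksum = hashlib.sha256(canonical.encode()).hexdigest()
    return {
        "snapshotId": f"snap_{checksum[:12]}",   # ID format is an assumption
        "sourceProfileId": profile_id,
        "createdAt": datetime.now(timezone.utc).isoformat(),
        "schemaVersion": schema_version,
        "rules": rules,
        "checksum": checksum,
    }
```

Because the snapshot ID is derived from the rule content, "editing" a snapshot necessarily produces a new ID, which is exactly the immutability guarantee the acceptance criteria require.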

Acceptance Criteria
Immutable Snapshot Creation from Existing Delivery Profile
Given I have Owner or Manager permissions on a project with at least one valid Delivery Profile When I select a Delivery Profile and click "Create Snapshot" in the Preflight Sandbox Then the system creates an immutable snapshot with snapshotId, sourceProfileId, createdAt (UTC), and a checksum/hash of the captured rules and schema version And the Sandbox references snapshotId for all validations and rule summaries And the snapshot's rules cannot be edited; attempts to modify require creating a new snapshot And an audit log entry is recorded with actorId, action, profileId, snapshotId, and timestamp
Sandbox Continues to Use Snapshot After Profile Changes
Given a Sandbox is bound to snapshotId S and the underlying Delivery Profile is later updated or deleted When a collaborator uploads assets or views rules in that Sandbox Then validation uses only the rules stored in snapshot S, not the current Delivery Profile state And pass/fail outcomes for the same files are identical before and after the profile change And if the source profile was deleted or access revoked, the Sandbox remains functional and shows a non-blocking "Profile changed after snapshot" banner to the owner only
Human-Readable Rule Summary Visible to Collaborators
Given a collaborator opens a Sandbox link referencing snapshotId S When the page loads Then the page displays a human-readable summary of rules from S including: required asset types, allowed audio file formats, sample rates, bit depths, loudness targets, naming conventions, required metadata fields, artwork dimensions and color profile, and required stem labels/count And each rule includes a concise label and fix tip text And the summary renders within 2 seconds on a 3G network and is accessible per WCAG 2.1 AA (labels announced, proper headings, contrast) And no owner-only fields (internal IDs, private notes) are visible to collaborators
Localized Rule Labels and Fallback Behavior
Given the Sandbox link is opened with Accept-Language or a user-selected locale that is supported When the rule summary and validation messages render Then all labels and category headings are localized to that locale and pinned to snapshot S's translation set And unsupported locales fall back to English without mixed-language strings And each displayed label maps to a stable machine-readable ruleId exposed as data attributes for QA automation
Machine-Readable Rule IDs for Validation and Analytics
Given a collaborator submits files for validation in the Sandbox When validation executes Then for each evaluated rule the system produces a record containing snapshotId, ruleId, severity (error|warn), status (pass|fail), fileRef, and recipientId And these records are stored server-side within 2 seconds of validation completion and available for aggregation by ruleId and recipientId And exported analytics exclude file contents and PII beyond recipient alias/id
Guard Against Missing or Invalid Profiles at Snapshot Creation
Given I attempt to create a snapshot from a Delivery Profile that is missing, invalid, or I lack permission to access When I click "Create Snapshot" Then no snapshot is created And I see a clear, actionable prompt indicating the specific issue (missing/invalid/permission) with a one-click path to resolve (e.g., request access, select another profile) And an error event is logged with errorCode, attempted profileId (if available), and actorId
Fail Gracefully if Snapshot Cannot Be Loaded at Validation Time
Given a Sandbox references snapshotId S When S cannot be loaded due to corruption, schema mismatch, or missing data Then collaborators see a non-technical error page indicating the sandbox is temporarily unavailable and to contact the owner And owners receive an in-app alert and email containing errorCode, snapshotId, and remediation steps And no validation runs; the system fails closed without exposing partial results or internal system details And a health check event is recorded for S for operational visibility
Scoped Sandbox Link
"As a project owner, I want to generate an expiring, scoped sandbox link per collaborator so that I can control access and limit what they can do without exposing my master project."
Description

Generate per-recipient, access-scoped links that expose only the sandbox workspace and rules, never the master project. Support expiration (date/time), single/multi-use tokens, optional password, and revoke/extend controls. Enforce least-privilege actions: upload, view rules, view validation results, and download only their own uploads if allowed. Integrate with existing link infrastructure and branding, and produce short URLs. Apply rate limiting, size caps, and basic bot/abuse protections. All links include a unique sandbox ID and profile snapshot ID for traceability.
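One way to encode the scoped, expiring, HMAC-signed token described above; the claim names, the payload encoding, and the status-code mapping are assumptions.

```python
import base64
import hashlib
import hmac
import json
import time

def make_sandbox_token(secret: bytes, sandbox_id: str, snapshot_id: str,
                       recipient_id: str, scopes: list, expires_at: int) -> str:
    """Self-describing token embedding sandbox ID, profile snapshot ID,
    recipient, allowed actions, and expiry, signed with HMAC-SHA256."""
    payload = json.dumps({"sid": sandbox_id, "snap": snapshot_id,
                          "rcp": recipient_id, "scp": scopes,
                          "exp": expires_at}, sort_keys=True).encode()
    sig = hmac.new(secret, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode() + "." +
            base64.urlsafe_b64encode(sig).decode())

def check_sandbox_token(secret: bytes, token: str, action: str, now=None):
    """Return (allowed, http_status): 410 for expired, 403 otherwise."""
    p64, s64 = token.split(".")
    payload = base64.urlsafe_b64decode(p64)
    sig = base64.urlsafe_b64decode(s64)
    if not hmac.compare_digest(
            hmac.new(secret, payload, hashlib.sha256).digest(), sig):
        return False, 403             # tampered token or wrong secret
    claims = json.loads(payload)
    if (now if now is not None else time.time()) >= claims["exp"]:
        return False, 410             # expired link: 410 Gone
    if action not in claims["scp"]:
        return False, 403             # action outside the granted scopes
    return True, 200
```

Revocation and max-use counting need server-side state keyed by `sid`; the signed token alone only covers scope and expiry.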

Acceptance Criteria
Generate Scoped Sandbox Link with Expiration and Password
Given an authenticated owner selects a Delivery Profile and recipient When they generate a sandbox link specifying expiration date/time (UTC) and an optional password Then the system returns a URL that exposes only the sandbox workspace and profile rules, never any master project assets or metadata And Then the link denies access after the exact expiration timestamp with 410 Gone and logs the event And Then, if a password is set, access requires the password, attempts are rate-limited, and the password is stored hashed and can be updated without changing the URL And Then allowed actions are scoped in the token (upload, view rules, view validation results, download-own-uploads if enabled), and all other actions return 403 Forbidden
Single-Use and Multi-Use Token Enforcement
Given a generated link with token mode "single-use" When a recipient first redeems the token and a session is created Then the token is marked consumed and subsequent redemption attempts return 410 Gone and are logged Given a generated link with token mode "multi-use" and a max-uses value N When recipients redeem the token Then the system allows up to N successful redemptions and the (N+1)th returns 410 Gone And Then concurrent redemptions are serialized so total successful redemptions never exceed N And Then each redemption is timestamped and attributed to a recipient identifier for analytics
Least-Privilege Permissions in Sandbox Workspace
Given a recipient accesses the sandbox link When they use the UI or API Then they can only upload files, view Delivery Profile rules, and view validation results for their uploads And Then they cannot see master project folders, assets, or other recipients’ uploads; such requests return 404 Not Found And Then, if "download-own-uploads" is enabled, recipients can download only artifacts they uploaded; attempts to download others’ files return 403 Forbidden And Then all permission checks are enforced server-side and covered by automated tests for different roles
Revoke and Extend Link Controls
Given an active sandbox link When the owner clicks Revoke Then the link becomes unusable within 60 seconds globally; further access attempts return 410 Gone and are audit-logged with revocation reason And Then previously created sessions are invalidated within 60 seconds and forced to sign out on next request Given an active or expired sandbox link When the owner extends expiration to a future timestamp Then the new expiry is enforced immediately and reflected in link metadata And Then extension is allowed for expired links (reactivates link) but blocked for revoked links, returning a clear error
Traceability via Sandbox and Profile Snapshot IDs
Given sandbox link creation Then the link is assigned a unique sandbox_id and profile_snapshot_id; both are embedded in link metadata and stored in audit logs And Then every upload, validation event, download, revoke/extend action, and access attempt records both IDs plus recipient identifier And Then the owner can filter activity by sandbox_id and profile_snapshot_id in analytics and export a CSV including both fields
Rate Limiting, Size Caps, and Abuse Protection
Given sandbox endpoints When requests exceed 60 per minute per IP or 600 per hour per link Then subsequent requests return 429 Too Many Requests with Retry-After and are logged Given an upload exceeds the configured size cap When the upload is attempted Then the API rejects with 413 Payload Too Large and displays the configured cap to the user And Then basic bot protections apply: HMAC-signed tokens; >5 failed auth attempts in 10 minutes trigger CAPTCHA; known bad user-agents/IP ranges are challenged or blocked And Then only allowed MIME types per Delivery Profile are accepted; executable content and directory traversal attempts are rejected server-side
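The per-IP and per-link limits above could be enforced with a sliding-window counter like this sketch; the in-memory state is an assumption (a real deployment would likely use a shared store such as Redis).

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Sketch of the 60/min-per-IP and 600/hour-per-link limits: keep
    request timestamps per key and reject once the window is full."""
    def __init__(self, limit: int, window_seconds: int):
        self.limit = limit
        self.window = window_seconds
        self.hits = defaultdict(deque)

    def allow(self, key: str, now=None) -> bool:
        now = now if now is not None else time.time()
        q = self.hits[key]
        while q and q[0] <= now - self.window:
            q.popleft()                # drop timestamps outside the window
        if len(q) >= self.limit:
            return False               # caller answers 429 with Retry-After
        q.append(now)
        return True
```

Two limiter instances would run in series: one keyed by client IP (60/60s) and one keyed by link ID (600/3600s); a request must pass both.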
Branding Integration and Short URL Creation
Given link generation completes When the short URL is created under the existing branded domain Then it 301-redirects to the sandbox long URL and preserves UTM parameters And Then the sandbox page renders with existing tenant branding (logo, colors) and uses the existing analytics pipeline And Then the short code length is <= 10 URL-safe characters and resolves in < 300ms p95 under normal load
Isolated Upload Workspace
"As a collaborator, I want an isolated place to upload and organize candidate assets so that testing does not alter or overwrite the project’s master files."
Description

Provision a dedicated, temporary storage namespace per sandbox where collaborators can upload files and folders without affecting the master project. Support drag-and-drop, resumable/chunked uploads, ZIP upload with safe server-side unpack, deduplication, and virus/malware scanning. Enforce allowed file types per profile, per-file size ceilings, and total quota. Display folder structure and file states. Encrypt at rest and in transit, and automatically purge content and access when the link expires or is revoked, respecting configurable retention policies.
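The safe server-side ZIP unpack can be sketched as below; the 2 GiB total cap and 100:1 compression-ratio ceiling are assumed thresholds, and every entry is validated before anything is written to storage.

```python
import zipfile
from pathlib import Path

MAX_TOTAL_UNCOMPRESSED = 2 * 1024**3   # assumed 2 GiB cap
MAX_RATIO = 100                        # assumed compression-ratio ceiling

def safe_unpack(zip_path: str, dest: str) -> list:
    """Unpack an archive while rejecting path traversal, absolute paths,
    encrypted entries, and zip-bomb-like inflation."""
    dest_root = Path(dest).resolve()
    extracted = []
    with zipfile.ZipFile(zip_path) as zf:
        total = 0
        for info in zf.infolist():
            if info.flag_bits & 0x1:   # bit 0 set means the entry is encrypted
                raise ValueError("encrypted archives are not accepted")
            target = (dest_root / info.filename).resolve()
            if not target.is_relative_to(dest_root):
                raise ValueError(f"path traversal entry: {info.filename}")
            total += info.file_size
            if total > MAX_TOTAL_UNCOMPRESSED:
                raise ValueError("archive exceeds uncompressed size cap")
            if info.compress_size and info.file_size / info.compress_size > MAX_RATIO:
                raise ValueError("suspicious compression ratio (possible zip bomb)")
        # All entries validated; only now write to the sandbox namespace.
        for info in zf.infolist():
            if not info.is_dir():
                zf.extract(info, dest_root)
                extracted.append(info.filename)
    return extracted
```

Validating the full entry list before extracting anything means a rejected archive writes nothing, matching the "none of its contents are written to storage" criterion below.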

Acceptance Criteria
Sandbox Namespace Isolation
- Given a master project with existing assets, when a Preflight Sandbox is created, then a unique temporary storage namespace is provisioned for that sandbox.
- Given a collaborator uploads files to the sandbox, when the master project is opened, then no sandbox files appear in the master project and the master project is not modified.
- Given an API or UI attempt to access paths outside the sandbox namespace, when the request is made using the sandbox credentials/link, then the operation is denied with 403 and the event is logged.
- Given sandbox creation, uploads, renames, and deletes, when activity occurs, then each action is recorded in the sandbox audit log with actor, timestamp, and outcome.
Drag-and-Drop and Resumable Uploads
- Given a supported browser, when the user drags and drops files and folders into the sandbox, then uploads initiate and preserve folder hierarchy.
- Given a large file upload interrupted by a transient network loss, when connectivity is restored within the configured resume window, then the upload resumes from the last completed chunk without restarting.
- Given an upload of a file already present in the sandbox (same checksum), when re-uploaded, then no duplicate data is stored and the file entry reflects the most recent attempt with a deduplicated status.
- Given concurrent uploads, when multiple files are queued, then progress indicators show per-file progress and overall progress until completion or failure.
ZIP Upload with Safe Server-Side Unpack
- Given a user uploads a ZIP archive, when processing completes, then the server unpacks the archive server-side into the sandbox, preserving internal folder structure.
- Given a ZIP containing path traversal entries or compressed size inflation (zip bomb), when validation runs, then the upload is rejected and none of its contents are written to storage.
- Given an encrypted/password-protected ZIP, when detected, then the upload is rejected with a clear message indicating encrypted archives are not accepted.
- Given files extracted from a ZIP, when any file violates file-type, size, or malware rules, then that file is blocked or quarantined with a detailed report, while compliant files are ingested.
Type, Size, and Quota Enforcement
- Given a Delivery Profile with allowed file types, when a user attempts to upload a disallowed type, then the system blocks the file and shows a message listing the allowed types.
- Given a per-file size ceiling, when a user uploads a file exceeding the ceiling, then the system rejects the file and it does not count against the sandbox quota.
- Given a total sandbox storage quota, when cumulative accepted bytes would exceed the quota, then further uploads are prevented and the UI displays used versus total quota.
- Given blocked or rejected files, when the user reviews the file list, then each rejected item shows a clear status and reason.
Malware Scanning and Quarantine
- Given a file is uploaded or unpacked, when scanning completes, then files flagged as malicious are quarantined and made inaccessible to collaborators.
- Given a quarantined file, when the user attempts to download or share it, then the action is blocked with a message indicating quarantine status and reason.
- Given scan completion, when results are available, then the UI displays per-file scan status (pending, scanning, clean, quarantined) and timestamp.
- Given a clean file, when scanning passes, then the file becomes available for preflight checks and counting toward quota.
Folder Structure and File State Display
- Given files and folders in the sandbox, when the user opens the workspace, then a navigable folder tree is displayed reflecting the server-side structure.
- Given file lifecycle events, when a file is uploading, uploaded, scanning, quarantined, rejected, duplicate-skipped, or failed, then its state icon and label are shown in the list.
- Given the file list, when a user selects a file, then metadata is displayed including filename, path, size, type, checksum, and current state.
- Given changes occur (uploads complete, scans finish), when the page is open, then the view updates in near real time without requiring a full refresh.
Encryption and Auto‑Purge on Expiry/Revocation
- Given a sandbox link, when data is transferred, then all requests use HTTPS and reject plaintext access.
- Given files stored in the sandbox, when at-rest encryption is verified, then storage reports encryption enabled for all objects.
- Given a sandbox link expires or is revoked, when the expiry/revocation time is reached, then collaborator access is invalidated and API/UI requests return 410/403 within the configured grace period.
- Given expiry or revocation, when the retention policy elapses, then all sandbox content is permanently deleted and the deletion is logged, with no residual access via direct URLs.
Preflight Validation Engine
"As a collaborator, I want my uploads automatically validated against the selected delivery profile so that I know exactly what passes and what fails."
Description

Run server-side validations on each upload and after batches complete, mapping results to the selected profile’s rules. Perform per-file checks (codec, sample rate/bit depth, channels, LUFS/true peak, artwork dimensions/color space, metadata presence), cross-file checks (required asset completeness, duplicate detection, stem set consistency), and naming convention enforcement with tokenized patterns. Provide severity levels, error codes, and structured JSON results. Re-evaluate incrementally as new files arrive, and scale horizontally to handle concurrent sandboxes.
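The tokenized naming patterns (e.g. "{artist}-{title}-{version}") can be compiled into a regex with named groups, roughly as follows; the default per-token character class is an assumption.

```python
import re

def compile_pattern(pattern: str, token_regexes: dict) -> "re.Pattern":
    """Turn a tokenized pattern into an anchored regex with one named
    group per token; literal text between tokens is escaped."""
    out = ""
    pos = 0
    for m in re.finditer(r"\{(\w+)\}", pattern):
        out += re.escape(pattern[pos:m.start()])
        token = m.group(1)
        # Default class (no separators inside a token value) is an assumption.
        out += f"(?P<{token}>{token_regexes.get(token, '[^-_. ]+')})"
        pos = m.end()
    out += re.escape(pattern[pos:])
    return re.compile(f"^{out}$")

def validate_filename(filename: str, pattern: str, token_regexes=None) -> dict:
    """Validate the filename stem and, on success, extract token values."""
    stem = filename.rsplit(".", 1)[0]
    m = compile_pattern(pattern, token_regexes or {}).match(stem)
    if m:
        return {"passed": True, "tokens": m.groupdict()}
    return {"passed": False, "errorCode": "NAMING_PATTERN_MISMATCH"}
```

Extracting tokens on success is what allows the engine to both report `NAMING_TOKEN_INVALID:{TOKEN}` failures precisely and feed the guided fix tips with a suggested corrected filename.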

Acceptance Criteria
Per-file Audio Technical Validation Against Delivery Profile
Given a Preflight Sandbox with a selected Delivery Profile that defines audio rules (codec whitelist, sample rate, bit depth, channels, integrated LUFS, true peak) When the user uploads an audio file to the sandbox link Then the server analyzes the file server-side without persisting it to the master project And the engine measures integrated LUFS and true peak (EBU R128), and reads codec, sampleRateHz, bitDepth, channels And each measurement is evaluated against the profile ruleIds, producing pass/fail with severities defined in the profile And violations use errorCodes in {"AUDIO_CODEC_UNSUPPORTED","AUDIO_SAMPLERATE_OUT_OF_RANGE","AUDIO_BITDEPTH_INVALID","AUDIO_CHANNELS_INVALID","AUDIO_LUFS_OUT_OF_RANGE","AUDIO_TRUEPEAK_EXCEEDED"} And the JSON result for the file contains {fileId, profileId, ruleId, severity ∈ ["info","warning","error"], errorCode, passed, message, metrics:{codec,sampleRateHz,bitDepth,channels,lufsI,truePeakDbTP}, timestamp} And p95 time from upload completion to result availability is ≤ 10s for files ≤ 100MB
Artwork Compliance Validation Against Delivery Profile
Given a selected Delivery Profile that defines artwork requirements (min/max dimensions, aspect ratio tolerance, color space, format whitelist, bit depth) When an image file (JPEG/PNG) is uploaded to the sandbox Then the engine inspects pixel dimensions, aspect ratio, color space/ICC profile, format, and bit depth And evaluates each attribute against mapped ruleIds, producing pass/fail with configured severities And violations use errorCodes in {"ARTWORK_DIMENSIONS_INVALID","ARTWORK_ASPECT_RATIO_OUT_OF_RANGE","ARTWORK_COLORSPACE_UNSUPPORTED","ARTWORK_FORMAT_UNSUPPORTED","ARTWORK_BITDEPTH_INVALID","ARTWORK_MISSING_ICC_PROFILE"} And the JSON result contains {fileId, profileId, ruleId, severity, errorCode, passed, message, metrics:{widthPx,heightPx,aspectRatio,colorSpace,iccProfileName,format,bitDepth}, timestamp} And p95 time from upload completion to result availability is ≤ 5s for files ≤ 20MB
Metadata Presence and Schema Validation
Given a selected Delivery Profile that defines required metadata fields and formats at file-level and release-level (e.g., isrc, trackTitle, primaryArtist, language, explicit, upc) When an audio file with embedded tags and/or a metadata payload is uploaded/attached to the sandbox Then the engine extracts ID3/RIFF/MP4 tags and validates required presence and value formats against profile regex/picklists And release-level fields are validated across the sandbox collection for completeness And violations use errorCodes in {"METADATA_FIELD_MISSING:{FIELD}","METADATA_FORMAT_INVALID:{FIELD}","METADATA_VALUE_NOT_ALLOWED:{FIELD}"} And the JSON result contains {scope:"file"|"release", fileId?, profileId, ruleId, severity, errorCode, passed, message, details:{fieldsMissing[], fieldsInvalid:[{field,reason,actual}]}, timestamp} And p95 validation latency is ≤ 3s for metadata-only checks
Cross-file Required Asset Completeness at Batch Close
Given a selected Delivery Profile that defines required asset types and counts per release (e.g., Main Mix, Instrumental, Clean, Artwork) When the client triggers a "Complete Preflight" event for the sandbox Then the engine evaluates presence and uniqueness of all required assets across uploaded files And returns an overall status with profilePass boolean and a list of missing or extra asset types mapped to ruleIds And violations use errorCodes in {"ASSET_REQUIRED_MISSING:{TYPE}","ASSET_UNEXPECTED_EXTRA:{TYPE}","ASSET_VERSION_MISSING:{VERSION}"} And the JSON result contains {sandboxId, profileId, ruleId, severity, errorCode?, overall:{profilePass, errorsCount, warningsCount}, missingAssets[], extraAssets[], filesSummary:{byType:{}, total}, timestamp} And the event produces results within ≤ 5s for sandboxes with ≤ 50 files
Duplicate Detection and Stem Set Consistency
Given a Delivery Profile that defines duplicate policy and stem set definitions (required names/tokens and technical alignment) When files are uploaded to the sandbox Then the engine computes content hashes and audio fingerprints to detect duplicates within the sandbox and flags conflicts And for each required stem set, verifies presence, name/token match, sampleRate/bitDepth/channels match to main mix, and duration within ±10ms or ±0.5% (whichever is smaller) And violations use errorCodes in {"DUPLICATE_NAME_CONFLICT","DUPLICATE_CONTENT_DETECTED","STEM_MISSING:{STEM}","STEM_SET_COUNT_MISMATCH","STEM_TECH_PROPS_MISMATCH","STEM_DURATION_MISMATCH"} And the JSON result contains {fileId?, groupId?, profileId, ruleId, severity, errorCode, passed, message, details:{duplicates:[{fileId,reason}], stemSet:[{stemName,fileId,deltaMs,deltaPercent,sampleRateHz,bitDepth,channels,matched}]}, timestamp}
Tokenized Naming Convention Enforcement
Given a Delivery Profile that defines a tokenized filename pattern (e.g., "{artist}-{title}-{version}"), allowed separators, case-sensitivity, and illegal characters When a file is uploaded to the sandbox Then the engine parses and validates the filename against the pattern and token constraints And on success, extracts tokens and includes them in the result; on failure, reports position and token that failed And illegal characters are flagged per policy And violations use errorCodes in {"NAMING_PATTERN_MISMATCH","NAMING_ILLEGAL_CHARACTERS","NAMING_TOKEN_INVALID:{TOKEN}"} And the JSON result contains {fileId, profileId, ruleId, severity, errorCode?, passed, message, details:{pattern, tokens:{...}, normalizedFilename, failPosition?}, timestamp}
Incremental Re-evaluation, Delta Results, and Concurrent Sandbox Scaling
Given a sandbox with existing files and an active Delivery Profile When a file is added, replaced, or removed Then the engine re-evaluates only impacted rules (per-file and dependent cross-file) and emits delta results for changed outcomes And p95 time from the change to delta emission is ≤ 5s for metadata-only changes and ≤ 15s for full audio analysis (files ≤ 100MB) And under load of ≤ 200 concurrent sandboxes each uploading ≤ 10 files/min, p95 per-file validation latency meets the above thresholds, with error rate < 0.1% and zero cross-sandbox data leakage And result delivery is idempotent and ordered per file via a monotonically increasing sequence or version And each delta JSON contains {correlationId, sandboxId, fileId?, changedRules:[{ruleId, before:{passed,severity,errorCode?}, after:{passed,severity,errorCode?}}], timestamp}
Guided Fix Tips
"As a collaborator, I want clear, actionable fix tips for each failed check so that I can correct issues quickly without back-and-forth."
Description

Present real-time, per-rule pass/fail with actionable remediation guidance tailored to the failure (e.g., suggested filename based on tokens, recommended export settings, target loudness/peak, missing metadata fields, required artwork dimensions). Offer inline tooltips, copy-to-clipboard patterns, and links to help docs. Refresh results instantly after re-uploads. Ensure accessible UI (keyboard navigation, readable contrast) and mobile responsiveness. Do not require login for collaborators; all guidance is available within the sandbox link context.

Acceptance Criteria
Real-Time Per-Rule Evaluation & Instant Refresh
Given a collaborator opens a Preflight Sandbox link with an associated Delivery Profile When they upload one or more assets Then the system evaluates each asset against all profile rules within 2 seconds per asset and displays per-rule Pass/Fail states adjacent to each rule And for each Fail, a guidance panel appears with rule-specific remediation steps including failure reason, required target values, and an example corrected outcome And for each Pass, the rule displays a green state and the timestamp of last evaluation When the collaborator re-uploads a corrected asset for the same slot Then the evaluation refreshes automatically without page reload and updates the rule states within 2 seconds
Filename Suggestions from Tokens with Copy-to-Clipboard
Given a naming convention rule using tokens (e.g., {artist}_{title}_{version}) When an uploaded asset fails the filename rule Then a suggested filename is generated from available tokens and project/sandbox metadata and rendered inline And a Copy to Clipboard control copies the suggested filename exactly; after click, a confirmation label is announced to screen readers and visible for at least 2 seconds And illegal characters for Windows/macOS are removed or replaced per rule, the base name length is <= 255 bytes, and the file extension matches the allowed set When the collaborator re-uploads the asset using the suggested name Then the filename rule passes and the UI updates within 2 seconds
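Generating the suggested filename might look like this sketch; the underscore replacement character, the strip of trailing dots and spaces (invalid on Windows), and applying the 255-byte budget to the whole name including the extension are assumptions.

```python
import re

WINDOWS_ILLEGAL = '<>:"/\\|?*'   # characters not allowed in Windows filenames

def suggest_filename(pattern: str, metadata: dict, extension: str) -> str:
    """Fill pattern tokens from sandbox metadata, sanitize characters
    illegal on Windows/macOS, and keep the name within 255 bytes."""
    name = pattern
    for token, value in metadata.items():
        name = name.replace("{" + token + "}", str(value))
    name = re.sub(f"[{re.escape(WINDOWS_ILLEGAL)}]", "_", name)
    name = name.strip(" .")      # trailing dots/spaces are invalid on Windows
    budget = 255 - len(extension) - 1
    base = name.encode("utf-8")[:budget].decode("utf-8", "ignore")
    return f"{base}.{extension}"
```

Truncating on the UTF-8 byte representation (with `errors="ignore"` on decode) keeps multi-byte characters from being split mid-sequence at the length limit.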
Audio Compliance Guidance (Loudness, Peak, Format)
Given audio rule targets for LUFS integrated, true-peak, sample rate, bit depth, channel count, and codec When an uploaded audio file violates any target Then the UI displays the measured values next to each rule, the required targets, and recommended export settings (e.g., sample rate, bit depth, interleaved, dithering) for correction And a Copy to Clipboard control is available for target values (e.g., -14 LUFS, -1 dBTP) And a link to the relevant help doc opens in a new tab When the collaborator re-uploads a corrected audio file Then all audio rules are re-evaluated and updated within 2 seconds per file, with Pass/Fail states shown per rule and per file
Artwork Dimension/Format Guidance
Given artwork rules for pixel dimensions, aspect ratio tolerance, color space, format, and max file size When an uploaded image violates one or more rules Then the UI shows measured vs required dimensions, detected color space, file format and size, and provides fix tips (e.g., resize to 3000x3000 px, convert to RGB) And a link to the artwork preparation help doc is available and opens in a new tab When a corrected image is re-uploaded Then artwork rules re-evaluate and Pass/Fail statuses update within 2 seconds
Metadata Completeness & Sidecar Pattern Guidance
Given required metadata fields defined by the Delivery Profile (e.g., ISRC, Primary Artist, Language, UPC/EAN) When the submission lacks required metadata (in embedded tags or sidecar) Then the UI presents a metadata checklist enumerating missing/invalid fields with validation patterns and field-specific examples And a Copy to Clipboard control provides a sidecar CSV header template and example row for the current release And a link to metadata help docs is available When the collaborator re-uploads assets with corrected embedded tags or attaches a valid sidecar file Then metadata checks pass and the UI updates within 2 seconds
Accessible Tooltips, Help Links, and Mobile Responsiveness
Given the sandbox UI with inline info icons and guidance elements When a user navigates via keyboard Then all interactive elements are reachable in a logical tab order, have visible focus indicators, and are operable with Enter/Space And tooltips are accessible on focus and hover, dismissible via Escape, and their content is read by screen readers And live Pass/Fail updates are announced via ARIA live regions without moving keyboard focus And text and essential UI components meet WCAG 2.1 AA contrast ratios (text >= 4.5:1; non-text UI >= 3:1) And on mobile viewports from 320px to 768px width, content reflows without horizontal scroll, tap targets are at least 44x44 px, and tooltips render as accessible popovers
No-Login Scoped Sandbox Access
Given a valid, unexpired Preflight Sandbox link containing a secure token When a collaborator opens the link Then all guidance (pass/fail, fix tips, tooltips, help links, copy patterns) is available without authentication And the link is scoped to the selected Delivery Profile and the collaborator’s uploaded assets only; master project assets and settings are not accessible And if the link is expired or revoked, the page returns a 410 (expired) or 403 (revoked) with no asset data exposure and a contact-owner message
Owner Monitor & Notifications
"As a project owner, I want to monitor sandbox activity and receive notifications on completion so that I can keep releases on schedule without manual check-ins."
Description

Provide an owner-only dashboard card for each sandbox showing recipient, status, last activity, pass rate, required assets completed, and time to compliance. Enable controls to copy/revoke/extend links and adjust upload permissions. Send configurable notifications (email/in-app) on key events such as first open, first upload, all checks passed, and link expiration. Surface a consolidated timeline within the project without exposing uploaded files to the master library.

Acceptance Criteria
Owner-Only Sandbox Dashboard Card Visibility
Given I am the project owner, When I open the Preflight Sandbox monitor in a project, Then I see one dashboard card per existing sandbox for that project. Given I am not a project owner, When I attempt to access the Preflight Sandbox monitor, Then sandbox dashboard cards are not shown and protected endpoints return HTTP 403. Given a sandbox dashboard card is displayed, Then it includes: recipient name and email, link status (Active, Revoked, Expired), last activity timestamp in my selected timezone, pass rate as a percentage, required assets completed as X/Y, and time to compliance as a duration or "—" if not yet compliant. Given I change my profile timezone, When I refresh the monitor, Then last activity and time-to-compliance render in the selected timezone.
Copy, Revoke, and Extend Sandbox Link Controls
Given a sandbox card, When I click "Copy Link", Then the share URL is copied to my clipboard and a success toast appears within 1 second. Given a sandbox is Active, When I click "Revoke", Then the link status changes to Revoked, all active sessions are invalidated within 60 seconds, and subsequent link hits return HTTP 403 (revoked). Given a sandbox with an expiration date, When I click "Extend" and set a new expiration within the system-configured maximum, Then the new expiration is saved, the status remains Active, and the audit log records the change with old and new values. Given a revoked link, When I extend its expiration, Then the link remains Revoked unless I explicitly choose "Re-activate", and the UI prevents accidental reactivation.
Adjust Upload Permissions Mid-Run
Given a sandbox, When I modify upload permissions (overwrite allowed, file type restrictions per Delivery Profile, max file size), Then changes are saved and logged with timestamp and actor. Given permissions were modified, When a collaborator attempts an upload after the change, Then the new permissions are enforced and disallowed actions are blocked with a specific validation message. Given a collaborator is on the upload page at the time of change, When permissions are saved, Then the UI reflects the updated rules within 5 seconds via realtime update or on next interaction. Given existing uploaded files, When permissions are changed, Then previously uploaded files remain unaffected.
Configurable Notifications for Key Events
Given notification settings per sandbox, When I toggle subscription for an event (first open, first upload, all checks passed, link expiration) and channel (email, in-app), Then preferences persist and survive logout. Given I am subscribed to in-app notifications, When a subscribed event occurs, Then I receive an in-app notification within 5 seconds containing sandbox name, recipient, event type, and timestamp. Given I am subscribed to email notifications, When a subscribed event occurs, Then an email is sent within 2 minutes with subject including event type and sandbox name and body including recipient, timestamp, and a link to the project timeline. Given I am unsubscribed for an event, When that event occurs, Then no notification is sent via that channel.
Event Trigger Semantics and De-duplication
Given a sandbox, When the recipient first loads the link, Then exactly one "First Open" event is recorded and further page loads do not create additional "First Open" events. Given a sandbox, When the recipient successfully uploads any file for the first time, Then exactly one "First Upload" event is recorded. Given all required checks pass for the first time, When that state is detected, Then exactly one "All Checks Passed" event is emitted and subsequent regressions do not emit additional "All Checks Passed" events unless compliance is lost and later re-achieved, in which case a new event is recorded. Given a link reaches its expiration timestamp, When background jobs process expirations, Then one "Link Expired" event is recorded within 60 seconds and notifications are sent if subscribed. Given multiple identical triggers occur in a short window, Then duplicate notifications are debounced and not sent more than once per event occurrence.
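The once-only and re-achievement semantics above can be modeled as a small state machine. This is an in-memory sketch; a real implementation would persist this state per sandbox and debounce at the notification layer:

```python
class SandboxEvents:
    """Tracks once-only sandbox events.

    'First Open' and 'First Upload' fire exactly once; 'All Checks Passed'
    may recur only after compliance is lost and later re-achieved.
    """

    def __init__(self):
        self.emitted = []          # ordered event log
        self.first_open = False
        self.first_upload = False
        self.compliant = False

    def on_page_load(self):
        if not self.first_open:
            self.first_open = True
            self.emitted.append("First Open")

    def on_upload_success(self):
        if not self.first_upload:
            self.first_upload = True
            self.emitted.append("First Upload")

    def on_validation(self, all_checks_pass: bool):
        # Emit only on the False -> True transition, so regressions
        # followed by re-achievement produce a new event.
        if all_checks_pass and not self.compliant:
            self.emitted.append("All Checks Passed")
        self.compliant = all_checks_pass
```

Tracking transitions rather than states is what makes repeated identical triggers naturally idempotent.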
Consolidated Project Timeline Without File Exposure
Given a project with sandboxes, When I view the project timeline, Then sandbox-related events (link created, copied, first open, uploads attempted, validation summaries, permission changes, revoke/extend, expiration) are displayed in reverse chronological order and are filterable by event type. Given timeline entries reference uploaded files, Then only metadata is shown (filename, size, extension, checksum suffix, validation status) and no direct download or preview links are present. Given I am not a project owner, When I view the project timeline, Then sandbox events are hidden. Given I export the timeline as a project owner, When I download the CSV, Then it includes event fields and excludes file content or file URLs.
Compliance Metrics Calculation and Freshness
Given a sandbox, When pass rate is displayed, Then it is computed as (passed checks / total checks in the most recent validation run) rounded to the nearest whole percent and matches the latest validation results. Given required assets completed is displayed, Then it shows "X/Y" where X is the count of required asset slots with at least one passing upload and Y is the total required slots from the Delivery Profile. Given time to compliance is displayed, Then it is the elapsed time between the timestamp of "First Open" and the timestamp when all required checks first pass; once set, it remains fixed even if later validations fail. Given any new upload or Delivery Profile change occurs, When metrics are impacted, Then the dashboard card values update within 10 seconds and last activity timestamp updates within 5 seconds of the triggering action. Given compliance has not yet been achieved, Then time to compliance displays "—" and is excluded from aggregate calculations.
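The metric definitions above reduce to a few small calculations. A sketch, with timestamps as plain numbers for brevity:

```python
def pass_rate(passed: int, total: int) -> int:
    """Percent of passed checks in the latest validation run, rounded
    to the nearest whole percent; 0 when no checks have run."""
    return round(100 * passed / total) if total else 0


def required_assets(slots: list[list[bool]]) -> str:
    """'X/Y' where X counts required slots with at least one passing
    upload and Y is the total required slots from the Delivery Profile."""
    done = sum(1 for uploads in slots if any(uploads))
    return f"{done}/{len(slots)}"


def time_to_compliance(first_open_ts, first_compliant_ts):
    """Elapsed time from First Open to first full compliance.

    Returns '—' until compliance is first achieved; once set, callers
    should keep the stored value even if later validations fail.
    """
    if first_compliant_ts is None:
        return "—"
    return first_compliant_ts - first_open_ts
```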
Audit, Analytics & Compliance Reports
"As a project owner, I want exportable compliance reports and event logs so that I can document readiness and share proof with stakeholders."
Description

Record immutable audit events for link creation, opens, uploads, validations, passes/fails, and revocations with timestamps, IP/user agent, sandbox ID, and profile snapshot ID. Provide per-recipient analytics (opens, upload attempts, pass rate, time-to-pass) and exportable PDF/CSV compliance reports summarizing rule checks and outcomes for sharing with stakeholders. Support configurable data retention and deletion to meet privacy requirements while maintaining operational observability.

Acceptance Criteria
Immutable Audit Trail for Preflight Sandbox Link Lifecycle
Given a Preflight Sandbox link is created for a Delivery Profile snapshot When the system records link creation, recipient opens, file uploads, validation passes/fails, and link revocations Then an audit event is appended for each action including: eventType, timestamp (UTC ISO-8601), sandboxId, profileSnapshotId, recipientId (if applicable), ipAddress, userAgent, and outcome details And events are write-once and tamper-evident via a chained hash or WORM store And attempts to update or delete events are rejected with a 409 or equivalent And retention-based purges remove entire events only after their retention period and emit a purge audit event with timestamp and reason
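"Tamper-evident via a chained hash" can be illustrated with a minimal hash chain, where each record's hash covers its own body plus the previous record's hash. This is a sketch only; a production WORM store would add signing and durable, append-only storage:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel prev_hash for the first event


def append_event(chain: list, event: dict) -> dict:
    """Append a write-once audit event, hash-chained to its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    record = {"event": event, "prev_hash": prev_hash, "hash": digest}
    chain.append(record)
    return record


def verify_chain(chain: list) -> bool:
    """Recompute every link; any mutated or reordered event breaks it."""
    prev = GENESIS
    for rec in chain:
        body = json.dumps(rec["event"], sort_keys=True)
        if rec["prev_hash"] != prev:
            return False
        if rec["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True
```

Because each hash depends on all prior records, editing any event invalidates every record after it, which is what makes update/delete attempts detectable.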
Per-Recipient Analytics Dashboard for Preflight Sandbox
Given multiple recipients access a sandbox link and attempt uploads When viewing the per-recipient analytics for that sandbox Then for each recipient the UI/API returns: totalOpens, uploadAttempts, validationPasses, validationFails, passRate (0–100%), and timeToFirstPass (duration or null) And metrics reflect new activity within 60 seconds of the underlying audit event And timeToFirstPass is computed from the first upload attempt by that recipient to the first successful validation for that recipient within the sandbox And recipients with no activity display zeros or nulls without errors
Exportable Compliance Reports (PDF/CSV) for Rule Check Outcomes
Given a sandbox link and date range are selected When a PDF or CSV compliance report is generated Then the report contains header metadata: sandboxId, profileSnapshotId, generatedAt (UTC), dateRange, and reportVersion And for each recipient it summarizes: total upload attempts, passes, fails, passRate, timeToFirstPass And for each failed attempt it lists rule IDs/names and failure messages And the CSV mirrors the PDF content and column order; file names include sandboxId and generatedAt; a SHA-256 checksum is provided And regenerating the report for the same inputs yields identical results unless retention purges have removed underlying PII, in which case placeholders are shown
Audit Event Search and Filters by Sandbox, Recipient, and Time
Given audit events exist for multiple sandboxes and recipients When querying audit events by sandboxId, recipientId or email, eventType, and time range via UI or API Then results are filterable, paginated, and sortable by timestamp ascending or descending And each event in the result includes eventType, timestamp, ipAddress, userAgent, sandboxId, profileSnapshotId, and actor And exporting the filtered result set to CSV produces a file that matches the on-screen data
Configurable Data Retention and Privacy Safeguards
Given an administrator configures retention settings (e.g., auditEvents=365 days, analyticsPII=90 days) When the configured durations elapse for stored records Then PII fields (ipAddress, userAgent, recipient identifiers) are purged or irreversibly anonymized according to the setting while aggregate metrics (counts, rates, durations) remain available And purge jobs run at least daily and log start/end, items processed, and errors; purged item counts are exposed in observability metrics And honoring a data subject deletion request removes that subject’s PII and per-recipient analytics immediately, while preserving sandbox-level aggregates
Revocation Handling and Post-Revocation Analytics Integrity
Given a sandbox link is revoked When a recipient attempts to open the link or upload after revocation Then access is denied with a clear message and an audit event with eventType=revocation_blocked is appended including attempted ipAddress and userAgent And per-recipient analytics do not increase successful metrics after the revocation timestamp; blocked attempts are counted separately And compliance reports include the revocation timestamp and exclude uploads after revocation from pass-rate calculations

Duplicate Radar

Detects duplicate ISRCs, recycled audio across releases, and conflicting version labels (clean/explicit, radio edit). Uses audio fingerprinting plus catalog history to flag risks early and recommends safe renames or code reassignment before export to avoid takedowns and mismatches.

Requirements

Audio Fingerprinting & Similarity Index
"As an indie label manager, I want new uploads to be automatically fingerprinted so that reused audio is detected before I package a release."
Description

Implement an audio fingerprinting service that generates robust fingerprints on ingest and during backfills, then stores them in a scalable similarity index to detect exact and near-duplicate audio across the catalog, releases, and stems. Support tolerance for minor edits (gain changes, trimming, bitrate, channel differences) with configurable match thresholds and confidence scoring. Expose APIs/events so other IndieVault modules (ISRC checks, cross-release scans, export gate) can query matches. Run automatically on upload, on release assembly, and via scheduled batch jobs. Persist match metadata (match IDs, confidence, timestamps) for auditing and resolution workflows.

Acceptance Criteria
Auto Fingerprinting on Upload
Given a supported audio file (WAV, AIFF, FLAC, MP3) is successfully uploaded by a user When the upload completes Then a fingerprint is generated and stored in the similarity index linked to the assetId within 60 seconds p95 And the fingerprint record includes assetId, duration, sampleRate, channelCount, algorithmVersion, createdAt And duplicate uploads of the same binary produce a single fingerprint entry (idempotent) Given an unsupported or corrupted audio file is uploaded When fingerprinting is attempted Then the job is marked failed with a machine-readable error code And no index entry is created And an error event fingerprinting.failed is emitted with assetId and correlationId
Scheduled Backfill Fingerprinting
Given assets exist without fingerprints When a backfill job is scheduled by cron or triggered manually Then all eligible assets are enqueued and processed until the queue is empty And progress metrics (processed, succeeded, failed, retried) are exposed via /v1/jobs/{id} And transient failures are retried up to 3 times with exponential backoff And the job supports checkpointing and resume without reprocessing completed assets And the job status is Succeeded when at least 99% of targeted assets fingerprint successfully; otherwise Failed with a downloadable error report
Near-Duplicate Detection Tolerance & Thresholds
Given the system is configured with exactMatchThreshold >= 0.98 and nearMatchThreshold >= 0.90 When comparing fingerprints across catalog, releases, and stems Then content with only gain changes up to ±3 dB, lossy re-encodes down to 128 kbps, mono↔stereo conversion, or total trimming ≤ 2 seconds is detected as a near match with confidence ≥ nearMatchThreshold And bit-identical audio returns confidence ≥ exactMatchThreshold And unrelated audio in a validation set of at least 500 tracks yields a false positive rate < 1% at nearMatchThreshold And configuration changes to thresholds take effect within 1 minute and are audit-logged
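The threshold semantics reduce to a simple classification of a similarity score, with defaults mirroring the configured minimums above:

```python
def classify_match(confidence: float,
                   exact_threshold: float = 0.98,
                   near_threshold: float = 0.90):
    """Map a fingerprint similarity score to 'exact', 'near', or None.

    Thresholds are configurable (and, per the criteria, changes should be
    audit-logged); this sketch only covers the comparison itself.
    """
    if confidence >= exact_threshold:
        return "exact"
    if confidence >= near_threshold:
        return "near"
    return None
```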
Similarity Index Query API & Events
Given a client with scope duplicate:read When it calls GET /v1/similarity/matches?assetId={id}&minConfidence=0.90&limit=50 Then the API returns 200 with results sorted by confidence including matchId, matchedAssetId, confidence, matchType (exact|near), createdAt, updatedAt, trigger And p95 latency is ≤ 250 ms for limit ≤ 50 And results are paginated with nextPageToken when more results exist And access is RBAC-enforced so the client only sees authorized assets And the endpoint enforces a rate limit of at least 60 requests/min per API key and returns 429 when exceeded Given a new match is persisted When the transaction commits Then an event similarity.match.created is published at-least-once within 5 seconds with an idempotency key and the same fields as the API
Match Metadata Persistence & Auditability
Given a match is detected between asset A and asset B When the match is persisted Then the record includes matchId, assetAId, assetBId, confidence, thresholdsUsed, algorithmVersion, trigger (upload|assembly|backfill), createdAt, lastSeenAt, jobId, correlationId And identity fields are immutable and updates to confidence/lastSeenAt are versioned And records are queryable by assetId and by time range via /v1/similarity/matches/search And match records are retained for at least 365 days after asset deletion unless a retention policy explicitly purges them And all create/update/delete operations write an audit log entry with actor, action, and reason where applicable
Release Assembly & Export Gate Checks
Given a release assembly contains up to 20 tracks and associated stems When the assembly step completes Then the system automatically scans all included assets against the similarity index using current thresholds and persists matches with trigger=assembly And a summary (totalScanned, exactMatches, nearMatches, highestConfidence) is attached to the release context and available via GET /v1/releases/{id}/similarity-summary And p95 time to produce the summary is ≤ 60 seconds And if any match exceeds exactMatchThreshold, an event similarity.assembly.alert is published within 5 seconds
ISRC Normalization & Conflict Detection
"As an artist, I want Duplicate Radar to flag when I’m reusing an ISRC on a different mix so that I avoid distributor rejections and takedowns."
Description

Normalize and validate ISRCs on ingest and edit by enforcing format rules, casing, and delimiter standards, then cross-check against catalog history to detect duplicates, mismatches, and code reuse across differing audio fingerprints. Highlight conflicts where the same ISRC maps to materially different audio or where multiple ISRCs map to the same master when not intended. Provide bulk import validation for CSV/metadata files, surface inline errors in the metadata editor, and maintain a reference map to support auditing and rollback. Emit structured conflict records consumable by the dashboard and pre-export gate.

Acceptance Criteria
Normalize ISRC Format on Ingest and Edit
Given a track is ingested or an existing asset’s ISRC is edited When the ISRC contains lowercase letters, spaces, or delimiters Then the system normalizes it to uppercase A–Z and 0–9, removes delimiters, and enforces the 12-character structure CC(2)+RRR(3)+YY(2)+NNNNN(5) And the normalized value is persisted as the canonical ISRC for the asset And the original raw input is captured in the reference map with before/after, user, and timestamp
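The normalization and structural rules above can be expressed as a canonicalization step plus a regex for the 12-character CC+RRR+YY+NNNNN structure (country: 2 letters; registrant: 3 alphanumerics; year and designator: digits). A sketch:

```python
import re

# CC (2 letters) + RRR (3 alphanumerics) + YY (2 digits) + NNNNN (5 digits)
ISRC_RE = re.compile(r"[A-Z]{2}[A-Z0-9]{3}[0-9]{2}[0-9]{5}")


def normalize_isrc(raw: str) -> str:
    """Uppercase, strip whitespace/hyphens/dots, then validate structure.

    Returns the canonical 12-character ISRC, or raises ValueError so the
    caller can surface an inline error tied to the failing rule.
    """
    canonical = re.sub(r"[\s\-\.]", "", raw).upper()
    if not ISRC_RE.fullmatch(canonical):
        raise ValueError(f"invalid ISRC after normalization: {canonical!r}")
    return canonical
```

The raw input and the canonical result are exactly the before/after pair the reference map would record.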
Validate ISRC Structure and Reject Malformed Entries
Given a user enters or pastes an ISRC in the metadata editor When the ISRC fails any rule (length ≠ 12, invalid characters, structure not 2-3-2-5, non-numeric year/designator, invalid country code format) Then an inline error appears with a specific error code and message tied to the failing rule And the form cannot be saved until the ISRC passes all rules And the error clears immediately once the ISRC complies
Detect Duplicate ISRC with Different Audio
Given the catalog maps ISRC X to audio fingerprint F1 When a new or updated asset is saved with ISRC X and fingerprint F2 such that similarity(F1,F2) < 0.98 Then create a conflict record of type ISRC_AUDIO_MISMATCH linking both assets and fingerprints And surface the conflict inline on both assets and in Duplicate Radar And block pre-export for the conflicted items until the conflict status is Resolved
Detect Multiple ISRCs for the Same Master (Unintended)
Given the catalog contains asset A with fingerprint FA and ISRC X When another asset B with fingerprint FB is assigned ISRC Y where Y ≠ X and similarity(FA,FB) ≥ 0.98 Then create a conflict record of type MULTI_ISRC_SAME_MASTER linking A and B And do not create the conflict if an explicit intended-same-master relationship exists between A and B And block pre-export for B and related items until the conflict is Resolved
Bulk CSV Import: ISRC Validation and Cross-Check
Given a CSV or metadata file with N rows is uploaded for bulk import When validation runs Then each row is evaluated for ISRC normalization, structural validity, and cross-checked against catalog history and within-batch duplicates And a per-row report is produced with Status (Pass/Fail) and machine-readable error codes for all failures And only rows with Status Pass are eligible for import; rows with Fail are excluded without preventing import of passing rows
Inline Metadata Editor Error Surfacing and Real-time Correction
Given a user edits the ISRC field in the metadata editor When the user corrects a previously invalid ISRC to a valid one Then the inline error disappears without needing a page refresh And the field shows a valid state and Save becomes enabled And upon Save, the normalized ISRC is persisted and reflected in the UI
Conflict Record Emission and Reference Map for Audit/Rollback
Given any normalization change, validation failure, or conflict detection event occurs When the event is recorded Then emit a structured record with fields: id, type, assetIds, previousISRC, currentISRC, fingerprints, detectedAt, detectedBy, status And persist the record in the reference map to enable auditing and rollback of ISRC changes And make the record queryable by the dashboard and enforceable by the pre-export gate
Version Label Consistency Rules Engine
"As a project manager, I want warnings when a track’s version label conflicts with its actual content or history so that versions remain accurate across releases."
Description

Create a rules engine that evaluates version labels (Explicit, Clean, Radio Edit, Instrumental, Remix, Remaster) for consistency against track metadata, durations, and catalog context. Detect and flag contradictions such as a Clean label paired with audio that matches a previously marked Explicit master, or a Radio Edit lacking expected duration deltas. Allow workspace-level configuration of naming conventions and acceptable suffix patterns. Generate actionable suggestions to align titles, tags, and folder names, and publish results to the resolution workflow and export checks.

Acceptance Criteria
Flag Clean Label Matching Explicit Master Fingerprint
Given a workspace fingerprint match threshold of 98% (default, configurable) And an existing Explicit master in the catalog for the same track lineage (same title root and ISRC family) When a track labeled Clean is ingested or updated And its audio fingerprint similarity to the Explicit master is greater than or equal to the configured threshold Then the rules engine records violation code "CleanContradictsExplicit" with severity "Blocker" And attaches suggestions: "Relabel as Explicit" and "Provide true Clean edit (bleep/blank)" And publishes the violation to the Resolution Workflow with track ID, baseline track ID, similarity score, and suggested actions And fails the Export Check for any release containing the track until the violation is resolved
Enforce Radio Edit Duration Delta
Given a workspace Radio Edit delta policy: minPercent=8%, minSeconds=15 (configurable) And a canonical non-Radio version exists in the catalog for the same track lineage When a track labeled "Radio Edit" is evaluated And both (durationPercentDelta < minPercent) and (durationSecondsDelta < minSeconds) Then the rules engine records violation code "RadioEditDeltaTooSmall" with severity "Warning" And attaches suggestions: "Remove Radio Edit label" or "Provide true radio cut within policy" And publishes the violation to the Resolution Workflow with measured durations and policy values And allows export to proceed but surfaces a warning in Export Checks summary
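Note the policy fires only when both the percentage and the absolute duration deltas are under their minimums; meeting either one is sufficient. As a sketch:

```python
def radio_edit_delta_ok(radio_secs: float, canonical_secs: float,
                        min_percent: float = 8.0,
                        min_seconds: float = 15.0) -> bool:
    """True if a Radio Edit's duration delta satisfies workspace policy.

    The RadioEditDeltaTooSmall violation fires only when BOTH the percent
    delta and the absolute delta fall below policy, so passing either
    criterion is enough. Defaults mirror the example policy above.
    """
    delta = abs(canonical_secs - radio_secs)
    percent = 100 * delta / canonical_secs
    return percent >= min_percent or delta >= min_seconds
```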
Validate Instrumental Label via Vocal Presence
Given a workspace vocal presence threshold of 0.10 probability (configurable) using the system’s vocal-detection model When a track labeled "Instrumental" is evaluated And the measured max vocal probability > threshold for any segment longer than 2 seconds Then the rules engine records violation code "InstrumentalHasVocals" with severity "Blocker" And attaches suggestions: "Relabel as Original" or "Upload true instrumental bounce" And publishes the violation to the Resolution Workflow with evidence timestamps and confidence metrics And fails the Export Check for any release containing the track until resolved
Assess Remaster Label Consistency via Audio Deltas
Given a workspace Remaster criteria: minLUFSDelta=1.0 LU or spectralSimilarity<=97.5% (configurable) And a prior released non-Remaster baseline exists in the catalog When a track labeled "Remaster" is evaluated And integrated LUFS delta < minLUFSDelta AND spectral similarity > 97.5% Then the rules engine records violation code "RemasterNoDiscernibleChange" with severity "Warning" And attaches suggestions: "Relabel as Original" or "Provide true remaster with audible deltas" And publishes the violation to the Resolution Workflow with measured LUFS and similarity values And allows export with warning unless workspace policy escalates Remaster violations to Blocker
Align Naming Conventions Across Title, Tags, and Folders
Given a workspace naming map of acceptable suffix patterns for version labels (e.g., Clean: [" - Clean", " (Clean)"]) When a track with any version label is evaluated Then the rules engine verifies that the chosen suffix pattern is applied consistently to (1) Catalog Title, (2) Audio metadata TITLE/VERSION tags, and (3) Folder/file name And if any surface is missing or uses a non-compliant pattern, records violation code "NamingConventionMismatch" with severity "Warning" And generates a single normalized title and file rename suggestion per workspace rules And publishes the issue and suggestions to the Resolution Workflow and lists the proposed rename in Export Checks
Detect ISRC and Version Label Conflicts
Given a workspace catalog where ISRCs must uniquely map to a specific version label When two or more tracks share the same ISRC but have conflicting version labels (e.g., Explicit vs Clean, Original vs Radio Edit) Then the rules engine records violation code "ISRCVersionConflict" with severity "Blocker" And attaches suggestions: "Reassign ISRCs according to label policy" and "Update version labels to align with canonical ISRC mapping" And publishes the violation to the Resolution Workflow with the conflicting track IDs and labels And fails Export Checks for affected releases until the conflict is resolved
Export Gate Runs Rules Engine and Summarizes Results
Given a release enters the export pipeline When the rules engine evaluates all tracks for version label consistency Then the export process receives a machine-readable summary: totalChecks, violationsByCode, warningsByCode, blockersCount, suggestionsCount And if blockersCount > 0, the export is blocked with HTTP 409 and a link to the Resolution Workflow batch view And if blockersCount = 0, export proceeds and warnings are embedded in the delivery manifest and UI banner
Cross-Release Reuse Scan
"As a catalog admin, I want a cross-release scan that highlights reused masters so that I can decide whether to keep or reassign codes before distribution."
Description

Run scheduled and on-demand scans across the entire catalog to identify reused or highly similar audio appearing on multiple releases, EPs, singles, and compilations. Differentiate common, intentional reuse (e.g., deluxe editions) from risky duplication by leveraging fingerprint matches, release contexts, and metadata links. Present match groups with context (original release, dates, codes, labels) and route findings to the dashboard for review. Provide APIs to tag legitimate reuse cases to reduce future noise and improve model precision over time.

Acceptance Criteria
Nightly Catalog-Wide Reuse Scan Executes and Reports
Given a catalog containing existing releases and audio assets When the scheduled cross-release reuse scan runs at the configured nightly time Then the scan processes the entire catalog and completes within the configured SLA (e.g., ≤ 60 minutes for 50k tracks) And all audio matches with fingerprint score ≥ 0.98 and overlap ≥ 80% across different releases are identified and grouped And a scan job record is created with jobId, start/end times, totals (tracks scanned, groups found, by severity), and status = SUCCESS And a notification with the jobId and summary counts is surfaced on the dashboard within 1 minute of completion
On-Demand Pre-Export Scan for a Target Release
Given a user initiates an on-demand scan for Release R from the export screen When the scan starts Then only tracks in Release R are compared against the full catalog using the current fingerprint model and thresholds And the first results are returned within 5 minutes for up to 500 tracks in Release R And any High severity findings block the export action with a clear message listing blocking groups and recommended actions (e.g., ISRC reassignment, version label update) And the user can proceed to export once blocking findings are resolved or explicitly overridden with a recorded reason
Legitimate Reuse Tagging via API Suppresses Future Alerts
Given a client calls the Legitimate Reuse Tag API with a matchGroupId, scope (track pair or release pair), justification, and expiration (optional) When the tag is created successfully Then the tag is persisted with an audit trail (who, when, scope, justification) And subsequent scans suppress alerts for matches within the tagged scope by downgrading to Info and excluding from export blocks And the same matchGroupId is not re-emitted as Medium/High while the tag is active And if the tag is revoked via API, future scans restore normal severity for those matches
Dashboard Presents Contextual Match Groups with Review Actions
Given match groups are created by a scan When a reviewer opens the Duplicate Radar dashboard Then each group displays for every member: release title, release type (single/EP/album/compilation), release date, label, track title, version label, ISRC, match score, and matched segment timestamps And groups are sorted by severity and recency by default, with filters for severity, label, date range, and release And the reviewer can perform actions per group: Mark Legitimate Reuse, Request ISRC Reassignment, Flag Mismatched Version Label, or Ignore with reason And actions update group status immediately and are reflected on refresh and in subsequent scans
Classification Differentiates Legitimate Reuse from Risky Duplication
Given audio fingerprint matches exist between tracks across releases When classification runs with release context and metadata Then matches linked by same master recording or same release family (e.g., deluxe edition) are auto-classified as Info And matches with conflicting ISRCs across different release families are classified as High And matches with version label conflicts (e.g., clean vs explicit with identical audio) are classified as Medium or High per policy And on the provided validation set, High severity precision ≥ 0.90 and recall ≥ 0.85
Webhook and API Delivery of Scan Results to Integrations
Given a project has a verified webhook URL and API credentials configured When a scan completes Then the system POSTs a JSON payload containing jobId, completion status, counts by severity, and match group summaries to the webhook within 60 seconds And retries are attempted with exponential backoff for up to 24 hours on non-2xx responses And an authenticated Results API endpoint allows fetching full match groups with pagination, filtering (severity, release, date), and idempotent cursors And all payloads include an HMAC signature header that validates against the shared secret
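The HMAC signature check the integration must perform might look like the following. The header name and JSON canonicalization used here are deployment choices, not specified by the criteria:

```python
import hashlib
import hmac
import json


def sign_payload(payload: dict, secret: bytes) -> str:
    """HMAC-SHA256 over a canonical JSON body (sorted keys, no spaces).

    The sender would place this hex digest in a signature header
    (header name is an assumption of this sketch).
    """
    body = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(secret, body, hashlib.sha256).hexdigest()


def verify_signature(payload: dict, secret: bytes, signature: str) -> bool:
    """Receiver-side check; compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign_payload(payload, secret), signature)
```

Both sides must agree on the exact byte serialization; signing the raw request body as received, rather than re-serializing parsed JSON, is the safer production choice.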
Resolution Assistant: Safe Rename & ISRC Reassignment
"As an artist, I want one-click suggested fixes for naming and code issues so that I can resolve duplicate risks quickly without guessing."
Description

Offer guided fix actions when conflicts are detected, including suggested safe renames (e.g., adding version suffixes) and optional ISRC reassignment aligned to label prefix and sequencing rules. Provide one-click apply to update file names, metadata fields, and release folder structures, with previews of downstream impact. Support batch operations, undo/rollback, and change logs for compliance. Integrate with IndieVault’s versioning so that resolved assets are re-indexed and re-validated automatically.

Acceptance Criteria
Safe Rename Suggestions for Conflicting Version Labels
Given an asset is flagged for a conflicting title/version within the same release When the user opens Resolution Assistant Then at least one safe rename proposal is generated per conflicted asset that ensures unique filename and display title within the release And proposals follow naming rules: standardized version suffixes (e.g., Clean, Explicit, Radio Edit, Remix), base title preserved, filesystem-safe characters only, filename length ≤ 255 characters And the assistant validates no collisions with existing filenames or track titles in the target release folder And proposed rename updates are mapped to metadata fields Title and Version in preview
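A minimal sketch of how a safe rename proposal could satisfy these rules — strip filesystem-unsafe characters, append the standardized version suffix, enforce the length cap, and disambiguate collisions. The sanitization pattern and the numeric-suffix strategy are assumptions for illustration:

```python
import re

# Characters unsafe on common filesystems (assumed rule set).
SAFE_CHARS = re.compile(r'[\\/:*?"<>|\x00-\x1f]')

def propose_safe_name(base_title: str, version: str, existing: set, max_len: int = 255) -> str:
    # Preserve the base title, replace unsafe characters, append version suffix.
    title = SAFE_CHARS.sub("_", base_title).strip()
    candidate = f"{title} ({version})" if version else title
    candidate = candidate[:max_len]
    # Case-insensitive collision check against the target release folder.
    taken = {e.lower() for e in existing}
    unique, n = candidate, 2
    while unique.lower() in taken:
        suffix = f" ({n})"
        unique = candidate[: max_len - len(suffix)] + suffix
        n += 1
    return unique
```

In practice the same proposal would be mapped to the Title and Version metadata fields shown in the preview, not just the filename.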
ISRC Reassignment Aligned to Label Prefix and Sequencing
Given a duplicate or recycled ISRC is detected for an asset When the user selects Reassign ISRC Then a new ISRC is generated using the configured label registrant prefix and the next available sequence per label rules And the generated ISRC is verified as unused across the entire catalog And the new ISRC is previewed in asset metadata and release manifest with zero conflicts reported And optional manual sequence input is validated against prefix and sequencing rules before allowing Apply
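An ISRC is a 12-character code: a 2-letter country code, a 3-character registrant code, a 2-digit year, and a 5-digit designation code. A sketch of next-in-sequence generation under these criteria, assuming the configured label prefix already combines the country and registrant codes:

```python
import re

# 2-letter country + 3-char registrant + 2-digit year + 5-digit designation.
ISRC_RE = re.compile(r"^[A-Z]{2}[A-Z0-9]{3}\d{2}\d{5}$")

def next_isrc(prefix: str, year: int, used: set) -> str:
    # prefix e.g. "USS1Z" (assumed workspace config: country + registrant code).
    yy = f"{year % 100:02d}"
    # Find the highest designation code already issued under this prefix/year.
    existing_seqs = {int(code[7:]) for code in used if code.startswith(prefix + yy)}
    seq = max(existing_seqs, default=0) + 1
    if seq > 99999:
        raise ValueError("designation codes exhausted for this year")
    isrc = f"{prefix}{yy}{seq:05d}"
    assert ISRC_RE.match(isrc)  # verify format before returning
    return isrc
```

The "verified as unused across the entire catalog" check corresponds to passing the full catalog's codes in as `used`; a real implementation would query an index instead of a set.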
One-Click Apply with Atomic Update and Auto Re-index/Re-validate
Given the user has accepted rename and/or ISRC proposals When the user clicks Apply Then filenames, embedded metadata (Title, Version, ISRC, Explicitness), and release folder structures are updated atomically per batch And new asset versions are created per IndieVault versioning with prior versions preserved read-only And the catalog is re-indexed and Duplicate Radar re-validation is triggered automatically for affected assets And a success summary shows counts of Updated, Skipped, and Failed items with reasons
Downstream Impact Preview Prior to Applying Fixes
Given assets have downstream references (exports, delivery packages, share links) When the user opens the Preview in Resolution Assistant Then the preview lists downstream impacts by category (distribution exports, DDEX packages, release folders, share links) with counts And each affected item shows a before/after path, filename, and metadata diff And warnings identify review links that will be invalidated or regenerated, with a toggle to auto-regenerate before Apply
Batch Resolution Across Multiple Detected Conflicts
Given multiple conflicts are selected across one or more releases When the user chooses Apply to N items Then all selected items are processed in one operation with per-item progress and final status (Updated, Skipped, Failed) And failures for some items do not block others and an error report is available for download And batch selection supports filters by release and conflict type and offers Select All in current view
Undo/Rollback of Applied Resolutions
Given a resolution batch has completed When the user triggers Undo within the rollback window or selects Rollback from history Then all changes from that batch (filenames, metadata, folder moves, ISRC reassignments) are reverted to the exact prior state And a new version is created capturing the rollback while preserving the audit trail And downstream artifacts affected by the batch are reverted or regenerated in alignment with the selected rollback options
Compliance Change Log Recording and Export
Given any resolution apply or rollback operation occurs When the operation completes Then an immutable change log entry is created per asset including timestamp, user, operation type, before/after values (filename, Title, Version, ISRC, path), batch ID, and rationale And change logs are viewable with filters (date range, release, operation type) and exportable to CSV and JSON And each log entry links to associated validation results and asset/version IDs for audit
Pre-Export Compliance Gate with Overrides
"As a release coordinator, I want a pre-export gate that prevents risky duplicates unless I explicitly approve an override so that we avoid takedowns and delays."
Description

Embed Duplicate Radar checks into the export pipeline to block or warn on releases with unresolved ISRC conflicts, duplicate audio, or version label inconsistencies. Allow per-destination policy profiles (strict block vs. warn) and require justification to override with full audit logging. Generate a pre-export report summarizing checks, matched items, decisions, and responsible users. Ensure resolved metadata and file structures are used for downstream packaging, watermarking, and expiring review links.

Acceptance Criteria
Block Exports With Unresolved ISRC Conflicts (Strict Policy)
Given a release contains at least one track with an ISRC flagged by Duplicate Radar as conflicting with an existing catalog item And the selected destination profile policy for ISRC conflicts is Strict (Block) When the user initiates Export Then the export job is blocked before packaging starts And the UI displays the list of conflicting tracks, conflicting catalog references, and conflict reasons And the system records a blocked attempt event with release ID, destination, policy profile ID, user ID, timestamp And no packaging, watermarking, or link generation tasks are queued
Warn-Only Exports Proceed With Logged Warnings (Per-Destination Policy)
Given a release has Duplicate Radar findings of severity Warn (e.g., low-confidence ISRC reuse or metadata mismatch) And the selected destination profile policy is Warn (Allow with warning) When the user initiates Export to that destination Then the export proceeds And the warnings are displayed to the user before confirmation And the user must explicitly acknowledge the warnings to continue And the system logs the warning acknowledgment with user ID, timestamp, destination, and warning IDs
Duplicate Audio Fingerprint Enforcement in Gate
Given Duplicate Radar detects audio fingerprint matches at or above the configured duplicate threshold (e.g., similarity >= 0.98) between a track in the release and any prior catalog item not explicitly linked as a reissue/remaster And the destination profile policy for duplicate audio is Strict (Block) When Export is initiated Then the export is blocked And the UI shows matched track pairs with fingerprints, confidence score, and source release links And the user is presented with remediation options: mark as intentional reissue with new version label, replace audio, or request override And no files are exported until the conflict is resolved or overridden
Version Label Consistency Check (Clean/Explicit/Radio Edit)
Given a release contains tracks where the version label (e.g., Clean, Explicit, Radio Edit) conflicts with detected lyrics/profanity metadata or prior catalog labeling for the same ISRC/audio When Export is initiated Then the system flags each inconsistency with the detected vs. declared version labels And if the destination policy is Strict, the export is blocked; if Warn, export requires acknowledgment And the user can correct metadata in-line and re-run checks without leaving the export flow And the gate re-evaluates and allows export only when inconsistencies are resolved or an approved override exists
Override Flow Requires Justification and Captures Full Audit Trail
Given an export is blocked by the compliance gate And the current user has the Override permission for the workspace When the user requests an override Then the system requires a justification text of at least 20 characters And requires selection of a reason code from a configurable list (e.g., Licensed Reissue, Distributor Exception, Time-Critical Release) And captures before/after decision state, affected checks, user ID, timestamp, destination, policy profile ID, and IP address And persists an immutable audit log entry and associates it with the export job and release And Export only proceeds after the override is confirmed
Pre-Export Compliance Report is Generated, Attached, and Shareable
Given all compliance checks have been resolved, acknowledged, or overridden When the export job is approved to proceed Then the system generates a pre-export report (PDF and JSON) including: destination, policy profile, check types run, matched items, decisions (resolve/ack/override), responsible users, timestamps, and override justifications And stores the report with the release and the specific export job record And exposes a downloadable link to authorized users And embeds the report reference (ID/hash) into downstream job metadata for traceability
Downstream Packaging Uses Resolved Metadata and Final File Structure
Given compliance checks resulted in metadata edits (e.g., version label changes, ISRC reassignment) or file replacements When the export proceeds to packaging, watermarking, and expiring review link generation Then the system uses only the post-gate resolved metadata and final folder structure And previously cached pre-gate artifacts are invalidated and not used And generated review links reflect the final metadata and file versions And a checksum manifest of exported files matches the final resolved fileset recorded in the pre-export report
Duplicate Radar Dashboard & Notifications
"As a manager, I want a dashboard with alerts for duplicate risks so that I can triage and assign fixes before deadlines."
Description

Provide a centralized dashboard listing all duplicate-related findings with filters by project, release, severity, and date, including confidence scores, matched assets, and current resolution status. Enable inline actions to accept suggestions, reassign codes, tag legitimate reuse, or create tasks. Deliver real-time notifications to email and Slack on new high-severity findings and on export blocks. Support CSV/JSON export, role-based access control, and activity feeds to maintain team visibility and accountability.

Acceptance Criteria
Dashboard Findings List & Filters
Given a user with access opens the Duplicate Radar Dashboard When there are duplicate-related findings in the catalog Then the dashboard displays a tabular list with columns: Finding ID, Project, Release, Asset Type, Severity, Confidence Score (0–100%), Matched Assets (linked), Detected On (date), Resolution Status, Assignee. Given findings exist across multiple projects and releases When the user applies filters for Project, Release, Severity (All/Low/Medium/High/Critical), and Date Range Then the list updates to only show matches and visible filter chips reflect selections. Given the user clicks a column header (Severity, Confidence Score, Detected On) When sorting is toggled Then the list reorders accordingly and the sort direction is indicated. Given more than 50 results are returned When the page loads Then results are paginated with a default page size of 50 and pagination controls are visible. Given a dataset of up to 5,000 findings When the user loads the dashboard with no filters Then initial render completes within 2 seconds on a standard broadband connection.
Inline Actions & State Transitions
Given a finding with an available system suggestion When the user clicks Accept Suggestion and confirms Then the suggested change is applied, the finding status updates to Resolved - Applied Suggestion, and the change appears in the activity feed. Given a finding represents intended reuse When the user selects Tag as Legitimate Reuse and provides a required reason Then the finding status updates to Resolved - Legitimate Reuse and future exports for the same asset pair are not blocked by this finding. Given a finding with a problematic ISRC or code When the user selects Reassign Code and inputs a new valid code Then the code is updated, conflicts are cleared for that asset, and the status updates to Resolved - Code Reassigned. Given a finding needs follow-up When the user clicks Create Task and submits the form Then a task is created and linked to the finding, the finding status updates to In Review, and the assignee is set. Given a user without edit permission When they attempt any inline action Then the action is blocked and a permission error is displayed.
Real-time Notifications for High Severity
Given a new High or Critical severity finding is created When notification integrations are configured Then an email is sent to project watchers and a Slack message is posted to the designated channel within 1 minute including project, release, severity, confidence, matched assets, and a deep link. Given user notification preferences When a watcher has opted out of email or Slack Then notifications respect preferences and are not sent via opted-out channels. Given multiple findings are generated in rapid succession for the same release When deduplication is enabled Then notifications are bundled or rate-limited to no more than one per 10 minutes per release per channel with a count of bundled findings.
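The bundling/rate-limit rule in the last criterion (at most one notification per 10 minutes per release per channel, with a count of bundled findings) could be implemented along these lines; the class name and message format are assumptions:

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 600  # one notification per 10 minutes per release per channel

class NotificationBundler:
    def __init__(self, window: float = WINDOW_SECONDS, clock=time.monotonic):
        self.window = window
        self.clock = clock
        self.last_sent = {}              # (release_id, channel) -> timestamp
        self.pending = defaultdict(int)  # bundled finding count per key

    def record_finding(self, release_id: str, channel: str):
        """Return a message to dispatch now, or None if bundled for later."""
        key = (release_id, channel)
        now = self.clock()
        self.pending[key] += 1
        last = self.last_sent.get(key)
        if last is not None and now - last < self.window:
            return None  # rate-limited; the finding stays bundled
        count = self.pending.pop(key)
        self.last_sent[key] = now
        return f"{count} new high-severity finding(s) for release {release_id}"
```

A production version would also flush pending bundles on a timer rather than only on the next finding, so the last findings in a burst are not delayed indefinitely.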
Export Blocking on Unresolved High/Critical Duplicates
Given a release has unresolved High or Critical severity duplicate findings When a user attempts to export the release Then the export is blocked and a modal displays the blocking findings with links and actions to resolve. Given an Admin chooses to override a block When they provide a required rationale Then the export proceeds, a warning banner is included in the export summary, and the override is recorded in the activity feed. Given a non-Admin user attempts to override When they confirm Then the override is denied with an authorization error.
CSV and JSON Export of Findings
Given the dashboard has active filters and sorting When the user clicks Export CSV Then a CSV is generated containing only the currently filtered findings in the current sort order with headers: Finding ID, Project, Release, Asset Type, Severity, Confidence Score, Matched Assets IDs, Detected On (UTC ISO 8601), Resolution Status, Assignee. Given the user selects Export JSON When the export is generated Then the file follows the published schema v1 with correct types and field names and includes pagination metadata. Given up to 50,000 matching rows When the export is requested Then the export streams to the client without timeouts and the file name includes the project or release (if filtered) and a timestamp.
Role-Based Access Control
Given role definitions Admin, Manager, Contributor, Viewer When permissions are enforced Then Admin can view all projects, perform all actions, configure notifications, and override exports; Manager can view assigned projects, perform inline actions, and export findings; Contributor can view assigned projects, create tasks, and tag legitimate reuse but cannot reassign codes or override exports; Viewer can only view and cannot perform inline actions or exports. Given a user outside a project When they navigate directly to a finding URL for that project Then access is denied and a not-found response is returned without leaking the existence of the resource. Given API requests to protected endpoints When a token lacks required scopes Then the API responds 403 and no state change occurs.
Activity Feed & Auditability
Given any state-changing action (accept suggestion, reassign code, tag reuse, create task, override export) When the action completes Then an activity entry is recorded with actor, timestamp (UTC ISO 8601), action type, target finding ID, before and after values, and rationale if provided. Given the activity feed view When the user filters by actor, action type, date range, or project Then only matching entries are shown and are ordered newest-first. Given an audit log export is requested When the user has permission Then a CSV export of the activity feed for a date range is generated and matches the on-screen filters.

Link Kits

Bulk‑generate personalized, scoped upload links for all contributors with preset project/version, allowed file types, quotas, and deadlines. Each link carries tailored instructions, shows per‑recipient progress, sends auto‑reminders, and auto‑closes on time—so you collect exactly what you need, when you need it, without back‑and‑forth.

Requirements

Kit Template Builder & Role Presets
"As an indie label manager, I want to define reusable link kit templates with role-based rules so that I can collect the right assets quickly and consistently across releases."
Description

A builder inside IndieVault to create Link Kit templates that bundle project, release version, allowed file types (e.g., WAV/AIFF, PNG, PDF), per-asset quotas, due dates, timezone, and tailored instructions by contributor role (e.g., mixer, artwork designer, session player). Templates can be saved, cloned, and applied across projects to standardize intake. When a template is applied, IndieVault pre-populates folder targets and naming conventions aligned to the product’s versioning, ensuring uploaded files land in the correct release-ready structure. This reduces setup time, enforces consistency, and decreases errors during asset collection.

Acceptance Criteria
Create and Save a Link Kit Template With Role Presets
Given I am in the Link Kit Template Builder with permission to manage templates And I select a project and a release version And I add the roles mixer, artwork designer, and session player And for each role I set allowed file types, per-asset quotas, a due date, a timezone, and tailored instructions When I click Save and enter a unique template name Then the template is saved and visible in the Template Library with that name And the saved template retains all role-specific settings and global settings exactly as configured And the template is available for application in other projects within the workspace
Validation Rules for Required Fields and Constraints
Given I attempt to save a template When required fields are missing or invalid Then the system blocks save and shows inline, field-specific errors:
- Project: required
- Release Version: required
- Template Name: required, unique within workspace, 3–100 characters, letters/numbers/spaces/dashes/underscores only
- At least one Role: required
- Allowed File Types per Role: required; options limited to supported types (e.g., WAV, AIFF, PNG, PDF)
- Per-asset Quotas per Role: positive integer 1–999
- Due Date/Time per Role: if provided, must be in the future; Timezone required when due date is set
And the Save action is disabled until all validation errors are resolved
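These validation rules translate fairly directly into code. A sketch, with field names and the error-dict shape assumed for illustration (due-date/future checks omitted for brevity):

```python
import re

# 3-100 chars; letters, numbers, spaces, dashes, underscores only (per the rules above).
NAME_RE = re.compile(r"^[A-Za-z0-9 _-]{3,100}$")

def validate_template(data: dict, existing_names: set) -> dict:
    """Return a mapping of field -> error message; empty dict means valid."""
    errors = {}
    if not data.get("project"):
        errors["project"] = "required"
    if not data.get("release_version"):
        errors["release_version"] = "required"
    name = data.get("name", "")
    if not NAME_RE.match(name):
        errors["name"] = "3-100 chars; letters/numbers/spaces/dashes/underscores only"
    elif name.lower() in {n.lower() for n in existing_names}:
        errors["name"] = "must be unique within workspace"
    roles = data.get("roles", [])
    if not roles:
        errors["roles"] = "at least one role required"
    for i, role in enumerate(roles):
        if not role.get("allowed_types"):
            errors[f"roles[{i}].allowed_types"] = "required"
        quota = role.get("quota")
        if not isinstance(quota, int) or not 1 <= quota <= 999:
            errors[f"roles[{i}].quota"] = "positive integer 1-999"
    return errors
```

The "Save disabled until resolved" behavior maps to disabling the action whenever this function returns a non-empty dict.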
Apply Template Pre‑Populates Release‑Ready Structure
Given a saved template exists And I open a new Link Kit and choose that template for a target project and release version When I apply the template Then folder targets are auto-generated to match IndieVault’s release-ready structure for the selected project/version And file naming conventions are pre-filled to align with IndieVault versioning rules And a path/naming preview shows the exact destination for each role’s uploads And per-role allowed file types, quotas, due date, timezone, and instructions are populated exactly as defined in the template
Clone and Edit Templates Across Projects
Given a saved template exists When I select Clone on that template And I update the template name and any project/version and role settings And I save the cloned template Then a new template with a new unique ID is created while the original remains unchanged And tokens tied to project/version (e.g., folder targets and naming conventions) update to the newly selected project/version And the cloned template can be applied to any project for which I have access
Timezone‑Aware Due Dates Persist and Inherit
Given I set a due date/time and select a timezone for a role in the template When I save the template Then the due datetime is stored in UTC and displayed in the selected timezone in the UI And the template displays the timezone label next to each due date in previews and summaries And when the template is applied to a Link Kit, each generated link inherits the due datetime and timezone exactly as defined
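The store-as-UTC / display-in-selected-timezone pattern can be sketched with Python's standard `zoneinfo`; the function names and display format are assumptions:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def store_due_date(local_dt_str: str, tz_name: str) -> datetime:
    """Interpret the entered wall-clock time in the selected timezone; store UTC."""
    local = datetime.fromisoformat(local_dt_str).replace(tzinfo=ZoneInfo(tz_name))
    return local.astimezone(ZoneInfo("UTC"))

def display_due_date(utc_dt: datetime, tz_name: str) -> str:
    """Render the stored UTC value back in the selected timezone, with its label."""
    return utc_dt.astimezone(ZoneInfo(tz_name)).strftime("%Y-%m-%d %H:%M %Z")
```

Storing UTC and converting on display is what lets generated links inherit the due datetime "exactly as defined" regardless of where the recipient views it.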
Per‑Role Instructions and Allowed File Types Preview and Inheritance
Given I enter multi-line, plain-text instructions for each role and specify allowed file types per role When I preview the template Then the instructions render with preserved line breaks and no HTML/script injection And each role’s allowed file types list is visible in the preview And when the template is applied to a Link Kit, recipients for a given role see only that role’s instructions And the allowed file types per role appear in the generated link’s configuration UI and API payload
Scoped Upload Link Security & Enforcement
"As a producer, I want each contributor’s link to only allow the exact files we expect until the deadline so that nothing off-scope or late can be uploaded."
Description

Generate signed, unique upload URLs per recipient that strictly scope permissions to a project/version and target folders, enforce allowed file types, per-file size limits, per-role quotas, and expiration. Links support single-use or multi-session within the deadline, are revocable, and auto-expire on schedule. Server-side validation blocks disallowed types and over-quota uploads, while checksums, resumable uploads, and virus/malware scanning protect integrity. All traffic is encrypted in transit and at rest, with optional passphrase protection to reduce leak risk.

Acceptance Criteria
Unique Signed URL Scoped to Project/Version and Folders
Given a recipient-specific signed upload link is generated for Project A, Version 1, restricted to folders Artwork/Covers and Audio/Stems When the recipient accesses the link and attempts to list or upload content Then the server validates the signature and recipient identity, rejecting any tampered or expired signature with HTTP 401/403 And only uploads to the specified target folders are accepted; attempts to access any other project/version/folder return HTTP 403 with an error code SCOPE_VIOLATION And the link token is unique per recipient and cannot be used to access another recipient’s scope
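One way to realize a recipient-scoped, expiring signed link is to HMAC-sign the scope and expiry into the link parameters, then validate server-side on every request. The parameter names, error codes, and secret handling here are assumptions:

```python
import hashlib
import hmac
from urllib.parse import urlencode

SECRET = b"per-workspace-signing-key"  # assumption: a server-held signing secret

def _signature(params: dict) -> str:
    # Canonical ordering so client and server sign identical bytes.
    payload = urlencode(sorted(params.items())).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def issue_link_params(recipient_id: str, project: str, version: str,
                      folders: list, expires_at: int) -> dict:
    """Build signed query parameters for a recipient-scoped upload link."""
    params = {
        "recipient": recipient_id,
        "scope": f"{project}/{version}:" + ",".join(sorted(folders)),
        "exp": str(expires_at),
    }
    return {**params, "sig": _signature(params)}

def check_request(params: dict, now: float):
    """Validate the signature first (catches tampering), then expiry."""
    received = dict(params)
    sig = received.pop("sig", "")
    if not hmac.compare_digest(_signature(received), sig):
        return 403, "SCOPE_VIOLATION"
    if now > int(received["exp"]):
        return 410, "LINK_EXPIRED"
    return 200, "OK"
```

Because the folder scope is inside the signed payload, a recipient cannot widen their scope or reuse another recipient's token without invalidating the signature.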
Allowed File Types Enforcement
Given the link is configured to allow only .wav and .flac audio files When the recipient uploads files Then files with allowed extensions and matching authoritative MIME types are accepted And files with disallowed extensions or MIME types, or with mismatched extension vs MIME, are rejected server-side before storage with HTTP 415 and error code TYPE_NOT_ALLOWED And the rejection is logged with recipient ID, attempted type, and timestamp
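The extension-vs-MIME mismatch check might look like the sketch below. It assumes the MIME type passed in is the authoritative one detected server-side (e.g., from magic bytes), not the client-declared Content-Type; the allowlist and status codes mirror the criterion:

```python
# Allowed extensions mapped to the MIME types considered authoritative for them.
ALLOWED = {
    ".wav": {"audio/wav", "audio/x-wav"},
    ".flac": {"audio/flac", "audio/x-flac"},
}

def check_upload(filename: str, detected_mime: str):
    """Server-side type gate: extension must be allowed AND match the
    detected MIME type; anything else is rejected before storage."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext not in ALLOWED:
        return 415, "TYPE_NOT_ALLOWED"
    if detected_mime not in ALLOWED[ext]:
        return 415, "TYPE_NOT_ALLOWED"  # extension/MIME mismatch
    return 200, "OK"
```

Rejections would additionally be logged with recipient ID, attempted type, and timestamp, as the criterion requires.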
Per-File Size Limits
Given the link is configured with a per-file size limit of 500 MB When the recipient uploads a file Then files whose finalized size is ≤ 500 MB are accepted And files exceeding 500 MB are rejected with HTTP 413 and error code FILE_TOO_LARGE And client-side chunking or resume cannot bypass the limit; the server computes size at finalization and discards partial data on violation
Per-Role Quotas Enforcement
Given the recipient has role "Vocalist" with quotas: max 10 files and total size ≤ 2 GB When the recipient uploads files over time Then uploads are accepted until either the file-count or total-size quota is reached And once the quota is reached, further uploads are blocked with HTTP 403 and error code QUOTA_EXCEEDED, indicating remaining quota = 0 And quota calculations are enforced server-side at upload finalization and are consistent across concurrent sessions
Expiration, Single-Use vs Multi-Session, and Revocation
Given a single-use link with a deadline of 2025-09-01T23:59:00Z When the first upload finalizes successfully Then the link immediately closes and all subsequent upload attempts are rejected with HTTP 410 and error code LINK_CONSUMED, even if before the deadline Given a multi-session link with the same deadline When the recipient performs multiple uploads before the deadline Then uploads are accepted until the deadline, after which all requests return HTTP 410 and error code LINK_EXPIRED And when the link owner revokes the link prior to the deadline Then subsequent requests fail with HTTP 403 and error code LINK_REVOKED effective immediately after revocation
Integrity via Checksums and Resumable Uploads
Given resumable, chunked uploads are enabled and the client provides a SHA-256 checksum for each file When an upload is interrupted and later resumed using the server-issued upload ID Then the server resumes from the last confirmed chunk without data corruption And upon upload finalization, the server computes the checksum and compares to the client-provided value; on mismatch, the upload is marked failed, data is discarded, and HTTP 422 with error code CHECKSUM_MISMATCH is returned And a success response includes the computed checksum and byte length for verification
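The finalization step — recompute SHA-256 over the assembled chunks and compare with the client's value — can be sketched as follows; the response shapes and error code are illustrative assumptions:

```python
import hashlib

def finalize_upload(chunks, client_checksum_hex: str):
    """Recompute SHA-256 over the assembled chunks; mismatch -> 422.
    A success response echoes the computed checksum and byte length."""
    h = hashlib.sha256()
    total = 0
    for chunk in chunks:
        h.update(chunk)   # hash incrementally, so large files never load whole
        total += len(chunk)
    computed = h.hexdigest()
    if computed != client_checksum_hex:
        # Per the criterion: mark failed, discard data, return 422.
        return 422, {"error": "CHECKSUM_MISMATCH"}
    return 200, {"checksum": computed, "bytes": total}
```

Hashing incrementally per chunk is what makes this compatible with resumable uploads: the server can fold in each confirmed chunk as it arrives rather than re-reading the file at the end.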
Encryption and Optional Passphrase Protection
Given HTTPS is required for all upload endpoints When a client attempts to upload over plaintext HTTP Then the request is rejected or redirected to HTTPS, and no data is accepted over an insecure channel And uploaded objects are stored encrypted at rest; verification shows storage encryption enabled and encryption metadata recorded for the object Given passphrase protection is enabled for a link When the recipient accesses the link with the correct passphrase Then upload UI and APIs are unlocked for that session And when an incorrect passphrase is provided Then access is denied with HTTP 401 and error code INVALID_PASSPHRASE, and the passphrase is never logged or stored in plaintext
Bulk Recipient Import & Personalization
"As a project coordinator, I want to bulk-create personalized links from a spreadsheet so that every contributor gets precise instructions without manual setup."
Description

Import contributor lists via CSV, Google Sheets, or contact integrations, map each person to a role, and auto-generate personalized links with individualized instructions, deadlines, and quotas derived from the selected template. Support field variables (e.g., recipient name, track list) to personalize instructions and include locale/timezone per recipient. Provide conflict detection (duplicate emails, overlapping quotas) and preview before send. This accelerates setup for large sessions and ensures each collaborator receives clear, relevant guidance.

Acceptance Criteria
CSV Import – Field Mapping & Validation
Given a CSV file containing at least Email and Role columns and up to 10,000 rows When the user uploads the file Then the system auto-detects delimiter (comma/semicolon/tab) and encoding (UTF-8/UTF-16) and displays a mapping UI with autosuggestions for known headers And required fields Email and Role must be mapped before proceeding; unmapped required fields are highlighted with inline errors And the preview shows the first 20 rows with per-cell validation messages (invalid email format, missing role) And duplicate emails within the file are flagged with row references And clicking Continue validates all rows and produces a summary (valid count, error count, duplicate count); only valid, non-duplicate rows proceed
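Delimiter sniffing plus row validation might be sketched with the standard `csv` module as below; the email regex, column names, and error tuples are illustrative assumptions:

```python
import csv
import io
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def import_contributors(raw_text: str):
    """Return (valid_rows, errors); errors are (row_number, message) tuples."""
    # Auto-detect comma/semicolon/tab from a sample of the file.
    dialect = csv.Sniffer().sniff(raw_text[:1024], delimiters=",;\t")
    reader = csv.DictReader(io.StringIO(raw_text), dialect=dialect)
    valid, errors, seen = [], [], set()
    for i, row in enumerate(reader, start=2):  # row 1 is the header
        email = (row.get("Email") or "").strip().lower()
        role = (row.get("Role") or "").strip()
        if not EMAIL_RE.match(email):
            errors.append((i, "invalid email format"))
        elif not role:
            errors.append((i, "missing role"))
        elif email in seen:
            errors.append((i, "duplicate email"))  # flagged with row reference
        else:
            seen.add(email)
            valid.append({"email": email, "role": role})
    return valid, errors
```

Only valid, non-duplicate rows end up in `valid`, matching the "only valid, non-duplicate rows proceed" rule; the error list drives the per-cell messages and the summary counts.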
Google Sheets Import – Auth, Mapping, Refresh
Given a Google Sheet URL and a connected Google account via OAuth with read-only access to the selected sheet When the user selects a worksheet tab and header row Then the mapping UI is displayed with the same required fields and validations as CSV import And clicking Refresh pulls the latest data within 5 seconds and updates the preview diff (added/removed/changed rows) And access errors (403/404) show a clear message and block progress
Role Assignment & Template Derivation
Given a selected Link Kit template containing role-based quotas, allowed file types, and deadline rules And recipients imported with a Role field mapped When the user proceeds to generation Then each recipient inherits quotas, allowed file types, and deadlines from the template role And recipients with multiple roles are flagged for overlapping quotas with a resolution prompt before send And per-recipient deadlines are computed relative to the template rule and stored in UTC
Personalized Instruction Variable Substitution
Given an instruction template containing variables like {{recipient.firstName}}, {{project.name}}, {{version.name}}, {{role.name}}, {{trackList}}, and {{deadline.local}} When previewing any recipient Then all variables render with the recipient's and project's values, with dates/times formatted per recipient locale And any missing variable renders as a placeholder token and triggers a warning badge; send is blocked until resolved And the rendered instruction length is displayed with a 5,000-character hard limit
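The substitution behavior — resolve dotted variables, leave unresolved ones as visible placeholder tokens, and report them so send can be blocked — can be sketched as follows (function name and context shape are assumptions):

```python
import re

VAR_RE = re.compile(r"\{\{\s*([\w.]+)\s*\}\}")

def render_instructions(template: str, context: dict, max_len: int = 5000):
    """Return (rendered_text, missing_variable_names)."""
    missing = []

    def lookup(path):
        # Walk a dotted path like "recipient.firstName" through nested dicts.
        value = context
        for part in path.split("."):
            if not isinstance(value, dict) or part not in value:
                return None
            value = value[part]
        return value

    def replace(match):
        value = lookup(match.group(1))
        if value is None:
            missing.append(match.group(1))
            return match.group(0)  # keep the placeholder token visible
        return str(value)

    rendered = VAR_RE.sub(replace, template)
    if len(rendered) > max_len:
        raise ValueError("instructions exceed the 5,000-character hard limit")
    return rendered, missing
```

A non-empty `missing` list is what would trigger the warning badge and block Send until every variable resolves.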
Per-Recipient Locale & Timezone Application
Given recipients with Locale (e.g., en-US, fr-FR) and Timezone (IANA) fields provided or defaulted from project settings When generating links and emails Then all deadlines display in the recipient's local time with explicit UTC offset (e.g., 23:59 GMT+2) while storing UTC in the backend And if Locale or Timezone is missing, project defaults are applied and noted in the preview And numeric and date formats in instructions and emails follow the recipient's locale conventions
Conflict Detection and Resolution – Duplicates & Overlaps
Given the import includes duplicate emails or overlapping quotas for the same email When validation runs Then duplicates are grouped and presented with resolution options: merge roles, keep first occurrence, or remove duplicates And overlapping quotas (same file type or track requested twice from the same recipient) are listed with per-recipient details and must be resolved before send And the Send action is disabled while any conflicts remain; resolving all conflicts updates the summary counts
Pre-Send Preview, Confirmation, and Dispatch
Given all recipients validate without errors and conflicts are resolved When the user opens the pre-send preview Then the user can paginate recipients and see for each: personalized instructions, role, allowed file types, quota count, deadline (in local time and UTC), and link expiration And the summary shows totals for recipients, roles, files requested, duplicates resolved, and any warnings And the user must check a confirmation box before Send is enabled And on Send, the system generates links and dispatches emails; a delivery report with per-recipient status is available for download as CSV
Contributor Upload Portal (No-Login Guided Flow)
"As a contributor, I want a simple upload page that tells me exactly what to provide and confirms when I’m done so that I don’t need back-and-forth emails."
Description

A lightweight, branded, no-login web portal for recipients to view tailored instructions, see remaining required items as a checklist, drag-and-drop files, and monitor upload progress with client-side validation for type and size. The portal displays allowed formats, naming guidance, and remaining quota, supports pause/resume and mobile uploads, and provides confirmation receipts. Accessibility and localization are built in to accommodate diverse collaborators and reduce support requests.

Acceptance Criteria
Open no-login portal via personalized Link Kit URL
Given a valid, unexpired personalized Link Kit URL scoped to a contributor When the recipient opens the link Then the portal loads without requiring authentication and displays workspace branding, project name, version, and recipient name Given an invalid, expired, or closed link When the recipient opens it Then the portal shows an access error with the specific reason (expired, revoked, not found) and a support/contact action and prevents any upload interactions Given the portal is opened on a modern desktop or mobile browser over a 3G Fast connection When the initial view loads Then Time to Interactive is ≤ 3 seconds and all primary actions (view checklist, add files) are usable without blocking spinners
View tailored instructions and checklist of remaining required items
Given the link is configured with required items (e.g., 1 WAV master, 3 JPEG artworks) When the portal loads Then a checklist displays each requirement with count remaining, allowed types, and naming guidance Given some items were already uploaded in a previous session When the portal loads Then previously received items are marked Completed with timestamp and remaining items are clearly indicated Given an upload completes or an item is removed When the change occurs Then the checklist and counts update in real time without full page refresh
Drag-and-drop upload with client-side type and size validation
- Given allowed file types and per-file size limits are defined When a user drags in or selects a file of an invalid type or exceeding size Then the file is rejected client-side with a specific error message naming the disallowed type or size limit
- Given a valid file is selected When the upload starts Then a per-file progress bar and overall session progress are displayed and update at least every 250 ms
- Given network connectivity is lost during an upload When the connection drops Then the upload does not mark as complete and the user is prompted to retry or resume when connectivity returns
Quota and allowed formats display with real-time remaining updates
- Given per-recipient quotas are configured (file count and/or total size) When the portal loads Then the UI displays remaining file count and remaining total size alongside allowed formats
- Given a selected set of files would exceed either quota When the user attempts to add them Then the UI blocks the action and explains which limit would be exceeded and by how much
- Given the user removes a queued or completed file before final submission When the action occurs Then the remaining quota recalculates and updates immediately
Pause/resume and mobile-friendly uploads over unstable connections
- Given a file larger than 50 MB When the upload is in progress Then the user can pause and resume without restarting from 0 using chunked uploads
- Given the portal is used on iOS and Android modern browsers When selecting media Then the native file picker/camera roll is available and orientation/viewport is responsive without layout breakage
- Given an upload is in progress When the app is backgrounded or the page is refreshed within 5 minutes Then the upload session state is preserved and resumes automatically upon return if the link remains valid
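The pause/resume behavior above is typically built on chunked uploads: the client splits the file into fixed-size chunks, the server acknowledges each chunk index, and a resume re-sends only the unacknowledged ranges. A minimal sketch of the resume bookkeeping (the chunk size and function names are illustrative, not IndieVault's actual API):

```python
CHUNK_SIZE = 5 * 1024 * 1024  # 5 MiB per chunk (illustrative choice)

def chunk_count(file_size: int, chunk_size: int = CHUNK_SIZE) -> int:
    """Number of chunks needed to cover the file (last chunk may be short)."""
    return (file_size + chunk_size - 1) // chunk_size

def remaining_ranges(file_size: int, acked: set[int], chunk_size: int = CHUNK_SIZE):
    """Byte ranges still to upload, given server-acknowledged chunk indices.

    On resume (after a pause, page refresh, or dropped connection) the client
    asks the server which chunks it already has, then uploads only these.
    """
    ranges = []
    for i in range(chunk_count(file_size, chunk_size)):
        if i not in acked:
            start = i * chunk_size
            end = min(start + chunk_size, file_size)
            ranges.append((start, end))
    return ranges
```

Because progress is tracked per chunk index rather than as a single offset, a refresh within the session window only needs the acked set back from the server to continue exactly where it left off.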
Deadline-aware link auto-closes with progress saved
- Given a link has a configured deadline When within 24 hours of the deadline Then the portal displays a countdown timer and a non-intrusive banner indicating remaining time
- Given the current time is at or past the deadline (server time authoritative) When the recipient attempts to start or continue an upload Then the portal prevents uploads, shows Submission closed with the deadline timestamp, and retains previously completed items read-only
- Given client and server clocks differ When the portal loads Then the displayed countdown and enforcement are based on synchronized server time to avoid early or late closure
Confirmation receipt, accessibility, and localization
- Given files finish uploading and pass client-side validations When processing completes Then the user sees an on-screen confirmation summary with filenames, sizes, timestamps, and a receipt ID; if an email address was provided, a confirmation email is sent within 5 minutes
- Given WCAG 2.1 AA standards When the portal is navigated by keyboard and screen readers Then all interactive elements are reachable, labeled (ARIA where appropriate), have visible focus states, and meet color contrast ≥ 4.5:1
- Given the link locale or browser language is supported When the portal loads Then UI text, error messages, dates, times, and numbers are localized accordingly with a manual language switcher available and English as fallback
Progress Tracking & Checklist Dashboard
"As an artist manager, I want to see who has delivered what and what’s still missing so that I can proactively resolve gaps before the deadline."
Description

A project-side dashboard that aggregates per-recipient status in real time, showing checklist completion by asset type, last activity timestamp, files received vs. required, and blockers (e.g., rejected types, quota exceeded). Includes filters by role, due date, and track, plus quick actions to nudge, extend, revoke, or reopen links. Progress is tied to IndieVault’s versioning, marking a kit as complete when all required assets for the targeted version are received.

Acceptance Criteria
Real-time Per-Recipient Status Aggregation
Given a project with an active Link Kit and multiple recipients When a recipient uploads, deletes, or the system auto-closes a link due to deadline Then the dashboard updates that recipient’s status, file counts, and last activity timestamp within 5 seconds without page reload And aggregate counts (total, in progress, complete, blocked, closed) reflect the change consistently across all views And last activity timestamp equals the most recent event time recorded for that recipient
Checklist Completion by Asset Type with Files Received vs Required
Given a Link Kit configured with required assets per asset type for a targeted version When files of allowed types are successfully received and validated for a recipient Then the checklist displays received vs required counts and percent complete per asset type for that recipient And an asset type is marked Complete only when all required files for that type are received and valid And the recipient is marked Complete only when all required asset types for that recipient are Complete And rejected or invalid files do not count toward received totals
Blocker Detection and Surfacing
Given system-defined blockers including rejected file type, quota exceeded, and expired link When any blocking condition is triggered for a recipient Then the dashboard shows a Blocked status with a clear reason badge (e.g., Rejected Type, Quota Exceeded, Expired) And a details panel lists the specific offending files or limits hit And clearing the condition (e.g., removing rejected file, increasing quota, extending deadline) removes the blocker within 5 seconds
Filters by Role, Due Date, and Track
- Given recipients have roles, due dates, and track assignments within the targeted version When the user applies a Role filter Then only recipients matching the selected role(s) are displayed and all counts reflect the filtered set
- When the user applies a Due Date filter (range or relative) Then only recipients with due dates in the selected window are displayed and counts update accordingly
- When the user applies a Track filter Then only recipients associated with those track(s) are displayed and counts update accordingly
- And clearing filters restores the full unfiltered list and counts
Quick Actions: Nudge, Extend, Revoke, Reopen
- Given a recipient is selected in the dashboard When the user clicks Nudge Then a reminder is sent to that recipient with their personalized link and instructions, and a Nudge Sent timestamp is recorded and visible within 5 seconds
- When the user clicks Extend and sets a new deadline Then the link’s deadline updates immediately and future reminder scheduling recalculates
- When the user clicks Revoke Then the link becomes inaccessible within 5 seconds and the recipient status changes to Revoked
- When the user clicks Reopen on a closed or revoked link and sets optional new quota/deadline Then the link becomes accessible with the updated constraints and the status changes to In Progress
- And an audit entry is recorded for each action
Version-Aware Completion Status
Given the Link Kit targets a specific project version When all required assets for that targeted version are received and validated across all intended recipients Then the kit-level status is marked Complete on the dashboard And assets uploaded for other versions do not affect the completion status of the targeted version And if any required asset for the targeted version is removed or invalidated, the kit status reverts to In Progress within 5 seconds
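The version-aware completion rule above reduces to set containment: a kit is Complete exactly when the required asset set for the targeted version is a subset of the received-and-validated assets for that same version, and uploads against other versions are ignored. A sketch (the data shapes are illustrative):

```python
def kit_status(required: set[str],
               received_valid: dict[str, set[str]],
               version: str) -> str:
    """'Complete' iff every required asset for the targeted version has been
    received and validated; assets for other versions do not count, and
    removing or invalidating a required asset reverts to 'In Progress'."""
    got = received_valid.get(version, set())
    return "Complete" if required <= got else "In Progress"
```

Keeping the check as pure set logic makes the "reverts within 5 seconds" behavior a matter of re-running this function whenever the received set changes.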
Automated Reminders, Deadlines & Auto-Close
"As a self-funded artist, I want automated reminders and auto-closing links so that I don’t spend time chasing files and can freeze the release on time."
Description

Configurable reminder schedules per kit and per recipient that send email and optional SMS nudges at set intervals, with smart content that highlights missing items and the time remaining. Reminders respect recipient timezone and pause automatically when the checklist is complete. At the deadline, links auto-close, optionally triggering a thank-you or escalation to the manager and locking the intake path for the targeted version to prevent late changes.

Acceptance Criteria
Per-Kit and Per-Recipient Reminder Scheduling
- Given a Link Kit with a default reminder cadence (e.g., T-7d, T-3d, T-24h, T-2h) and a recipient override (e.g., T-24h only), When the kit is activated, Then the system schedules reminders per recipient such that overridden recipients receive only their override and all others receive the default cadence.
- Given two recipients in different time zones (e.g., UTC-5 and UTC+9), When a reminder is due, Then each reminder is sent at the equivalent local time of the defined cadence and the message displays the deadline in the recipient’s local time zone.
- Given a reminder is scheduled, When its scheduled time arrives, Then the email is sent within 5 minutes of the scheduled time and the send event is recorded with timestamp, recipient, channel, and status.
- Given SMS is enabled and the recipient has a verified mobile number and has opted in to SMS, When a reminder is sent, Then an SMS is sent containing the personalized link and time remaining; otherwise, no SMS is sent.
Smart Reminder Content Highlights Missing Items and Time Remaining
- Given a recipient has outstanding required items, When a reminder is sent, Then the message lists the count and names of missing items and shows the time remaining until the deadline in the recipient’s local time.
- Given a quota or file-type constraint exists, When the reminder lists missing items, Then it specifies the remaining quota and permitted file types for each missing slot.
- Given the recipient has partially uploaded items, When a reminder is sent, Then the progress (e.g., 3/5 items complete) is displayed.
- Given all required items are present, When a reminder would be sent, Then no reminder is sent.
Automatic Pause of Reminders on Checklist Completion
- Given a recipient completes all required items for their link kit, When the system confirms successful ingestion and validation, Then all pending future reminders for that recipient/kit are cancelled within 2 minutes and no further email/SMS reminders are sent.
- Given reminders were cancelled due to completion, When new required items are added to the same kit before the deadline, Then the reminder schedule is recalculated and resumed from the time of change using the kit default unless an override exists for that recipient.
- Given reminders were paused, When the deadline passes, Then no completion-related reminders are sent retroactively.
Deadline Enforcement and Auto-Close of Upload Links
- Given a link kit deadline is set for 2025-08-25 18:00:00 UTC, When the deadline is reached, Then each recipient’s upload link transitions to Closed within 1 minute and all upload actions return a user-facing “Link closed” message without accepting files.
- Given a link auto-closes, When the recipient was complete at or before the deadline and “send thank-you” is enabled, Then a thank-you notification is sent within 5 minutes and recorded.
- Given a link auto-closes, When the recipient was incomplete and “escalate to manager” is enabled, Then an escalation email is sent to the manager with a list of missing items and recipient details within 5 minutes.
- Given an auto-closed link, When the recipient attempts access after closure, Then the UI displays the deadline timestamp in their local time and offers contact info per kit settings.
Lock Intake Path for Targeted Version After Deadline
- Given a version is targeted by a link kit, When the deadline passes and links auto-close, Then the version’s intake endpoints (UI and API) disallow new uploads, replacements, or deletions via link kits and respond with HTTP 403 (API) or a disabled UI state.
- Given the intake path is locked, When a manager extends the deadline, Then the lock is lifted for affected recipients and their links reopen, and when the new deadline passes, the lock is reinstated.
- Given the intake path is locked or unlocked, When the state changes, Then an audit log entry is created capturing actor (system or manager), timestamp, version ID, affected recipients, and reason.
Timezone Accuracy Across Reminders and Deadlines
- Given a kit deadline defined in a specific kit timezone (e.g., America/Los_Angeles), When reminders are generated for recipients in other time zones, Then reminder send times and displayed deadline times are converted to each recipient’s local time without changing the absolute instant.
- Given daylight saving transitions occur between scheduling and send time, When reminders are dispatched, Then times respect the correct local DST offset based on the recipient’s timezone database at send time.
- Given a recipient changes their timezone preference before the next reminder, When the next reminder is sent, Then the schedule and displayed times reflect the updated timezone.
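One common way to satisfy the timezone criteria above is to store every instant in UTC and convert for display only at send time using the IANA timezone database, which applies the correct DST offset for that date automatically (Python's zoneinfo module is used here as an example). A sketch, assuming a kit deadline defined in America/Los_Angeles and a T-24h cadence:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def reminder_instant(deadline_utc: datetime, lead: timedelta) -> datetime:
    """Absolute send instant: the same moment for every recipient."""
    return deadline_utc - lead

def display_local(instant_utc: datetime, tz_name: str) -> datetime:
    """Recipient-facing time: converted at send time so the correct
    DST offset for that date and zone is applied."""
    return instant_utc.astimezone(ZoneInfo(tz_name))
```

Because the cadence is subtracted in UTC and only the display is localized, two recipients in UTC-5 and UTC+9 are reminded at the same absolute instant but each sees the deadline in their own local clock.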
Audit Trail & Per-Recipient Analytics
"As a label operations lead, I want audit logs and analytics for each link so that I can verify compliance and optimize our intake process across releases."
Description

Capture and surface detailed activity logs per link, including link creation, sends, opens, IP/country, device, uploads, rejections, deletions, and deadline events. Provide per-recipient analytics and exportable reports to verify who saw instructions, what was uploaded when, and by whom. Integrate with IndieVault’s global audit log and analytics layer to correlate intake performance across projects, helping identify bottlenecks and improve future kit templates.

Acceptance Criteria
Per-Recipient Activity Timeline Displays All Events
Given a Link Kit with at least one recipient link has been created When a manager opens the recipient’s Activity Timeline Then the timeline lists, in strict chronological order with timezone set to the project timezone, the following events with timestamps and actors: link created, link sent, link opened (unique and total), IP and country resolved, device/OS/browser parsed, upload started/completed per file (name, size, checksum), file rejected (reason), file deleted (by whom), deadline set/extended/reached, link auto-closed/reopened And each event shows the source (system, manager, or recipient) and immutable metadata And no event can be edited or removed by end users; corrections appear as new events referencing the prior event by ID
IP, Country, and Device Metadata Capture and Privacy Controls
Given a recipient opens a link When the system records an open event Then the event stores the full IPv4/IPv6, geo-resolved country (ISO-3166-1 alpha-2), and parsed user agent (device type, OS, browser) And the full IP is retained for 30 days then automatically anonymized to a /24 for IPv4 or /64 for IPv6 And only users with the Analytics.ViewPII permission can view full IP before anonymization; others see masked IP And if the workspace privacy setting “Disable IP Tracking” is enabled, IP is not stored, and geo resolution is skipped, while the event is still recorded
Upload, Rejection, and Deletion Event Integrity
Given a recipient uploads files via their link When the upload completes Then an Upload Completed event is stored per file with filename, MIME type, byte size, SHA-256 checksum, uploader ID, and storage object ID And if a manager rejects a file with a reason, a File Rejected event is stored with the rejecting user ID and reason code And if a file is deleted by any actor, a File Deleted event is stored with actor ID and justification And all three events are write-once and append-only, signed with a server-side HMAC and unique event ID, and are verifiable for tamper detection And attempting to modify or delete any of these events results in a 403 and an Audit Attempted Tamper event
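The write-once, tamper-evident requirement above is commonly implemented by computing a server-side HMAC over each event's canonical serialization; verification recomputes the MAC and compares it in constant time, so any changed field invalidates the signature. A minimal sketch (the key handling and field set are illustrative, not IndieVault's actual scheme):

```python
import hashlib
import hmac
import json

SERVER_KEY = b"demo-key-keep-real-keys-in-a-secrets-manager"  # illustrative only

def sign_event(event: dict, key: bytes = SERVER_KEY) -> str:
    """HMAC-SHA256 over a canonical (sorted-key, compact) JSON serialization."""
    payload = json.dumps(event, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_event(event: dict, signature: str, key: bytes = SERVER_KEY) -> bool:
    """Constant-time comparison; any field change invalidates the MAC."""
    return hmac.compare_digest(sign_event(event, key), signature)
```

Canonical serialization matters: without sorted keys and fixed separators, two logically identical events could produce different MACs and false tamper alarms.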
Deadline, Auto-Close, and Reminder Events Tracked
Given a link kit link with a submission deadline and auto-close enabled When the deadline is set, extended, reached, or the link auto-closes Then corresponding Deadline Set, Deadline Extended, Deadline Reached, and Link Auto-Closed events are stored with timestamps and actor/system And reminder emails/SMS generated by the system are logged as Reminder Sent with recipient ID and delivery channel And the link’s state transitions (Open -> Closing Soon -> Closed) are recorded as separate state change events
Per-Recipient Analytics Dashboard KPIs and Filters
Given a project has multiple recipient links with activity When a manager opens the Per-Recipient Analytics view Then the dashboard displays KPIs per recipient: time-to-first-open, time-to-first-upload, total files uploaded, files accepted/rejected, last activity, completion status And the manager can filter by date range, recipient status (Not Opened, Opened, In Progress, Completed, Overdue, Closed), country, device type, and link kit And the view supports sorting by any KPI, full-text search by recipient name/email, and pagination for 10/25/50/100 rows And clicking a recipient opens the detailed Activity Timeline
Exportable Activity and Analytics Reports
Given a manager selects recipients and a date range in the analytics view When they export as CSV and JSON Then the system generates files with a stable schema including event_id, recipient_id, project_id, link_id, event_type, occurred_at (UTC ISO 8601), project_timezone, actor_type, metadata (JSON), and derived KPIs And exports include a data dictionary file describing columns and value enums And exports larger than 10k rows are processed asynchronously with an email notification and download link that expires after 7 days And exported numeric totals match on-screen totals for the same filters
Global Audit Log Integration and Correlation
Given any recipient-level event is created When the event is persisted Then the same event is published to the global audit stream with a correlation_id composed of workspace_id:project_id:link_id:recipient_id:event_id And ingestion is idempotent with deduplication on event_id and at-least-once delivery with retry backoff And the global analytics layer can query cross-project KPIs (e.g., average time-to-first-open) within 5 minutes of event creation (p95), verified by synthetic tests And if publishing fails, the event is queued for retry and the local log still shows the event with status=Pending Publish, later updated to Published upon success
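The correlation and deduplication rules above can be sketched directly: the correlation_id is a colon-joined composite key, and idempotent ingestion keeps a seen-set (in production, a unique index) on event_id so at-least-once redelivery never double-counts. An illustrative sketch:

```python
def correlation_id(workspace_id: str, project_id: str, link_id: str,
                   recipient_id: str, event_id: str) -> str:
    """Composite key: workspace_id:project_id:link_id:recipient_id:event_id."""
    return ":".join([workspace_id, project_id, link_id, recipient_id, event_id])

class IdempotentIngest:
    """Deduplicates on event_id so at-least-once redelivery is harmless."""

    def __init__(self) -> None:
        self.seen: set[str] = set()
        self.events: list[dict] = []

    def publish(self, event: dict) -> bool:
        """Returns True if stored, False if a duplicate was dropped."""
        if event["event_id"] in self.seen:
            return False
        self.seen.add(event["event_id"])
        self.events.append(event)
        return True
```

With dedup keyed on event_id alone, retry-with-backoff after a failed publish can simply resend the same event; the stream converges to exactly-once effects.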

Smart Intake

A dynamic intake form that only asks for the missing, relevant metadata based on what’s being uploaded (audio, artwork, stems, contracts). It validates entries in real time (ISRC format, role labels, credit names against your roster), attaches the data to the files on arrival, and prevents incomplete deliveries—no account required.

Requirements

Adaptive Field Logic by Asset Type
"As a contributing producer, I want the intake form to only ask me for fields relevant to what I’m uploading so that I can complete submissions quickly without confusion."
Description

Render a dynamic intake form that auto-detects file types (audio, artwork, stems, contracts) and displays only the minimal, context-relevant fields required for that asset. Field visibility, requiredness, and helper text are driven by a server-configured schema and rule conditions (e.g., stems require parent mix reference, artwork requires dimensions and color profile). The form supports mixed batches, applying per-file schemas, and preserves state across drag-and-drop sessions. It integrates with IndieVault projects to prefill known data (release, artist, label) and supports dark/light themes. The outcome is a faster, less error-prone submission flow that reduces cognitive load and ensures only pertinent data is requested.

Acceptance Criteria
Auto-Detects Audio and Shows Audio Fields
Given a user drags one or more .wav or .flac files into Smart Intake When the system inspects the files Then each file is classified as Audio And only audio-relevant fields render for those files per the server schema (e.g., Title, Version/Mix, ISRC, Explicit, BPM, Key) And non-relevant fields (e.g., Artwork Dimensions, Color Profile, Contract Dates) are hidden And requiredness and helper text are applied from the fetched schema for the tenant And real-time validation enforces schema rules (e.g., ISRC pattern) with inline errors And the Submit action remains disabled until all required audio fields for each file pass validation
Artwork Requires Dimensions and Color Profile
Given a user uploads .jpg or .png files intended as artwork When the system auto-detects the asset type Then each file is classified as Artwork And Dimensions (width x height in px) and Color Profile fields are displayed and marked required per schema And where file metadata contains dimensions, the fields are auto-filled and validated against minimums in the schema (e.g., 3000x3000 px) And Color Profile options are restricted to schema-allowed values (e.g., RGB, CMYK) And the Submit action is blocked until required artwork fields are valid for each file
Stems Require Parent Mix Reference
Given a user uploads multiple .wav files identified as Stems (auto-detected or user-corrected) When the stems are listed in the intake table Then a required Parent Mix Reference field is shown per stem as dictated by the server schema And each stem must be associated to one Parent Mix (existing mix ID or designated parent file in the batch) before submission And bulk apply of Parent Mix to selected stems is available and respects validation And submission is prevented until all stems have a valid Parent Mix Reference and required stem fields are complete
Mixed Batch Applies Per-File Schemas
Given a user drags a mixed batch containing audio, artwork, and contract files When the files are displayed in the intake view Then each file renders only the fields defined by its asset-type schema And requiredness and validation states are tracked per file And hidden fields are not included in the outbound payload And the batch-level Submit is enabled only when every file meets its own required criteria And inline, per-file error summaries indicate exactly which fields block submission
State Preserved Across Drag-and-Drop Sessions
Given a user adds files and enters metadata, then later drags more files into the same intake session When the list refreshes to include the new files Then previously entered metadata for earlier files remains intact And validation states for previously edited files persist And the association between each file and its metadata remains correct after reordering or adding files And the user can explicitly clear all persisted state via a Reset action
Prefill Project Data from IndieVault
Given the intake URL is associated with an IndieVault project containing release, artist, and label data When the intake form loads Then those fields are prefilled according to the project context And fields that are locked by schema remain read-only; otherwise they can be overridden by the user And prefilled values satisfy requiredness where applicable and still undergo validation And if no project context is available, the fields remain empty without error
Theme-Aware Field Rendering (Light/Dark)
Given the user or OS selects light or dark theme When the intake form renders or the theme is toggled Then all fields, helper text, validation states, and icons adopt the current theme tokens And color contrast for text and interactive elements meets WCAG AA standards in both themes And switching themes does not clear entered metadata or validation states And asset-type badges and status indicators remain clearly readable in both themes
Real-time Metadata Validation Engine
"As an artist manager, I want the form to flag invalid metadata as I type so that I can correct issues before submission and avoid back-and-forth."
Description

Perform client- and server-side validation of metadata and files as the user types and uploads, returning immediate inline feedback. Validate ISRC/ISWC formats, UPC/EAN, audio duration/sample rate/bit depth/channel consistency, filename conventions, role labels against a controlled vocabulary, and credit names against the roster, with fuzzy matching and diacritic support. Prevent duplicate track codes and conflicting versions, enforce required relationships (e.g., composer required if writer role present), and provide actionable fixes. All rules are centrally configurable and versioned; the validator exposes a public rules endpoint used by the form to stay current.

Acceptance Criteria
Inline Code Validation & Uniqueness (ISRC/ISWC/UPC/EAN)
- Given a user enters an ISRC/ISWC/UPC/EAN in its field When the input matches the configured pattern and checksum rules (hyphens/spaces allowed) Then the field shows a Valid state within 300ms client-side and a server confirmation within 1s, and the value is normalized to the canonical format
- Given a user pauses typing for 300ms or blurs the field When the input violates pattern or checksum rules Then an inline error appears stating the specific failure (e.g., "Invalid check digit", "Incorrect length"), and a suggested correction is displayed if derivable
- Given a normalized code is submitted When the server detects that the code already exists with a different audio fingerprint/version Then a blocking error is shown linking to the conflicting asset and submission is blocked
- Given a normalized code is submitted When the server detects that the code already exists with an identical audio fingerprint Then a non-blocking warning is shown allowing reuse with explicit confirmation, and audit logs record the decision
- Given a user enters a code into a mismatched field (e.g., UPC in ISRC) When validated Then a specific error indicates the expected code type
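As a concrete reference for the pattern and checksum rules above: an ISRC is 12 characters (2-letter country, 3-character registrant, 2-digit year, 5-digit designation) with no check digit, while UPC-A and EAN-13 share the GS1 mod-10 check digit. A hedged sketch of client-side validation (the exact rules a deployment configures may differ):

```python
import re

# ISRC: country (2 letters) + registrant (3 alphanumerics) + year (2 digits)
# + designation (5 digits); hyphens/spaces are stripped before matching.
ISRC_RE = re.compile(r"^[A-Z]{2}[A-Z0-9]{3}\d{7}$")

def normalize_code(raw: str) -> str:
    """Canonical form: uppercase with hyphens and spaces removed."""
    return re.sub(r"[\s-]", "", raw).upper()

def is_valid_isrc(raw: str) -> bool:
    return bool(ISRC_RE.match(normalize_code(raw)))

def gs1_check_ok(raw: str) -> bool:
    """GS1 mod-10 checksum shared by UPC-A (12 digits) and EAN-13 (13 digits):
    weight payload digits 3,1,3,1,... from the right; the check digit brings
    the weighted total to a multiple of 10."""
    code = normalize_code(raw)
    if not code.isdigit() or len(code) not in (12, 13):
        return False
    digits = [int(c) for c in code]
    payload, check = digits[:-1], digits[-1]
    total = sum(d * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(payload)))
    return (10 - total % 10) % 10 == check
```

Normalizing first, then validating, matches the criteria above: the user may type hyphens or spaces, but the stored value is always the canonical compact form.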
Audio Technical Property Validation
- Given one or more audio files are uploaded When server-side analysis completes Then duration, sample rate, bit depth, and channel count are extracted and displayed per file within 2s per file (95th percentile)
- Given a release bundle contains multiple tracks When validation runs Then sample rate and bit depth must match the configured delivery targets; mismatches produce blocking errors naming the expected values
- Given a stems pack is uploaded for a track When validation runs Then each stem's duration must be within 500ms of the main mix duration and channel count must match the configured expectation; violations are blocking
- Given any audio file has duration < 2s or > the configured maximum When validation runs Then a blocking error is shown with the measured duration
- Given audio properties are inconsistent with the declared format (e.g., mono file marked stereo) When validation runs Then a blocking error is shown with an actionable fix message
Filename Convention Enforcement
- Given a configurable filename convention is active When a user uploads files Then each filename is validated against the configured pattern/tokens and illegal characters are flagged; violations show an inline message citing the first failing token
- Given a filename violates the convention When a canonical rename can be derived (e.g., token reorder, diacritic fold per rules) Then a proposed new filename is displayed and the user can accept a one-click rename; accepted renames are applied server-side before storage and recorded in audit logs
- Given one or more files still violate the convention When the user attempts to submit Then submission is blocked and a summary lists each offending file with anchor links to fix
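A common way to implement a configurable convention like the one above is to compile a token template (e.g., {artist}_{title}_{version}) into an anchored regex with named groups, which gives you both validation and the captured token values in one step. A sketch with an illustrative token vocabulary (a real deployment would load tokens and patterns from configuration):

```python
import re

# Illustrative token vocabulary; hypothetical, not IndieVault's actual tokens.
TOKENS = {
    "artist": r"[A-Za-z0-9]+",
    "title": r"[A-Za-z0-9]+",
    "version": r"v\d+",
}

def compile_convention(template: str) -> re.Pattern:
    """Turn '{artist}_{title}_{version}.wav' into an anchored named-group regex."""
    pattern = re.escape(template)
    for name, sub in TOKENS.items():
        pattern = pattern.replace(re.escape("{%s}" % name), f"(?P<{name}>{sub})")
    return re.compile("^" + pattern + "$")

def check_filename(template: str, filename: str):
    """Returns (True, captured_tokens) on match, else (False, None)."""
    m = compile_convention(template).match(filename)
    return (True, m.groupdict()) if m else (False, None)
```

Escaping the template first and then substituting the token placeholders keeps literal characters (underscores, the extension dot) exact, so illegal characters anywhere in the filename fail the anchored match.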
Role Labels & Required Relationship Validation
- Given a user enters or selects a contributor role When validated Then the role must match a term in the controlled vocabulary; known synonyms are auto-mapped to the canonical label and a tooltip shows the mapping
- Given the role 'Writer' is present on any track When validation runs Then at least one 'Composer' credit must be present; otherwise a blocking error identifies the track and missing relationship
- Given relationship rules change in the central configuration When the form refreshes rules Then the new requirements are enforced immediately without page reload and any now-missing relationships are surfaced as blocking errors
Roster Credit Name Matching (Fuzzy & Diacritics)
- Given a user types in a credit name field When at least 2 characters are entered Then roster suggestions appear ranked by accent-insensitive, case-insensitive fuzzy match (normalized Levenshtein) with a max distance threshold of 2 for names ≤ 15 chars and 3 for longer names
- Given the user selects a suggestion When saved Then the credit is linked to the roster entity ID and the canonical display name (with diacritics preserved) is shown
- Given the entered name is below the match threshold but similar to an existing roster entry When the user attempts to create a new person Then a warning prompts to review the closest matches to prevent duplicates; creation requires explicit confirmation
- Given diacritics are omitted or added (e.g., "Beyonce" vs "Beyoncé") When validated Then the system matches them as the same roster entity if other tokens align
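The matching rule above combines diacritic folding with a Levenshtein distance threshold. A minimal sketch using Unicode NFD decomposition for the fold and the standard dynamic-programming edit distance (the thresholds follow the criteria: 2 for names ≤ 15 characters, 3 for longer):

```python
import unicodedata

def fold(name: str) -> str:
    """Accent-insensitive, case-insensitive key: NFD-decompose,
    drop combining marks, then casefold."""
    decomposed = unicodedata.normalize("NFD", name)
    return "".join(c for c in decomposed
                   if not unicodedata.combining(c)).casefold()

def levenshtein(a: str, b: str) -> int:
    """Classic DP edit distance (insert/delete/substitute, each cost 1)."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def roster_match(query: str, candidate: str) -> bool:
    """Match within the thresholds stated in the acceptance criteria."""
    dist = levenshtein(fold(query), fold(candidate))
    return dist <= (2 if len(candidate) <= 15 else 3)
```

Folding before measuring distance is what makes "Beyonce" and "Beyoncé" distance zero, while the canonical roster record keeps its diacritics for display.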
Versioned Rules Service & Client Sync
- Given Smart Intake loads When network is available Then the client fetches /validator/rules with ETag/If-None-Match; if a new semver is returned, rules replace in-memory config without page reload and the active version is displayed in the UI
- Given a client validates with an outdated rules version When the server responds with a current rules version identifier Then the client automatically refetches and re-validates the current form state, showing a "Rules updated" badge and updating any messages
- Given the rules endpoint is unreachable When the client has a cached ruleset ≤ 7 days old Then validation proceeds with the cached rules and a non-blocking banner indicates degraded mode; server-side authoritative validation still runs on submit
- Given any rule is updated centrally When the endpoint is called Then the response includes semver, lastModified, and a ruleset checksum; changes are recorded in an audit log
Prevent Incomplete Delivery with Inline Fixes
- Given any blocking validation error exists When viewing the form Then the Submit/Share action is disabled and a summary panel lists all blocking items with deep links to the offending fields/files
- Given the user clicks "Fix" on an error item When an auto-fix is available (e.g., normalize code, accept proposed filename, map role synonym) Then the auto-fix is applied and the field re-validates within 300ms; the error clears if criteria are met
- Given only non-blocking warnings remain When validation runs Then Submit/Share is enabled and warnings are summarized without blocking
- Given validation messages are displayed When using a screen reader or keyboard navigation Then all messages are announced via ARIA live regions, are focusable, and color contrast meets WCAG AA
Roster-backed Credits Autocomplete
"As a mixing engineer, I want credit fields to autocomplete names and roles from the label’s roster so that credits are consistent and correctly attributed."
Description

Provide autocomplete for people, roles, and entities powered by the account’s roster and role taxonomy. Suggest canonical names, aliases, and preferred credit spellings; attach unique IDs and role definitions; allow creation of new entries when permitted, with moderation flags for later reconciliation. Support per-role constraints (e.g., one primary artist, multiple featured artists) and locale-aware sorting. Ensure consistency across submissions by normalizing to canonical records on save.

Acceptance Criteria
Canonical Suggestions from Roster for People and Entities
Given an intake form credit field configured for Primary Artist and a roster with canonical records and aliases When the contributor types at least 2 characters Then the autocomplete shows up to 10 suggestions ranked by match score and recent usage And each suggestion displays canonical name, matched alias (if applicable), role icon, and entity type And selecting a suggestion attaches person_id/entity_id and preferred credit spelling And on save, the stored display name equals the canonical preferred spelling, not the typed value
Per-Role Constraints Enforcement (Primary vs Featured)
Given the Primary Artist role allows max 1 and the Featured Artist role allows 0..N When the user attempts to add a second Primary Artist Then the form shows an inline error "Only one Primary Artist allowed" and disables submission And when the user adds multiple Featured Artists Then the form accepts them and displays a count And on save, constraints are revalidated server-side and violations return HTTP 422 with field errors
Create New Credit Entry with Moderation
Given the intake link is configured to allow new credit creation When the user selects Create New after searching with no roster match Then the system requires minimum fields (display name, role) and optional locale And upon confirmation, a provisional record with temp_id is created with moderation_flag=true And the submission proceeds, attaching the provisional temp_id to the credit And the new record appears in the admin moderation queue for reconciliation
Read-Only Intake Link Disables New Entry Creation
Given the intake link is configured as read-only with no new credit creation When the user searches and no roster match exists Then the UI does not show Create New and shows helper text "Contact manager to add new credits" And submission is blocked until a valid roster-backed selection is made
Locale-Aware Sorting and Matching
Given the intake locale is set to es-ES and the roster contains names with diacritics and non-Latin scripts When the user types a name with or without diacritics Then the autocomplete performs accent-insensitive matching and sorts results using es-ES collation And non-Latin names are transliterated for matching but displayed in their original script
Normalization and Deduplication on Save
Given multiple entered credits resolve to the same canonical person_id via alias mapping When the user submits the form Then the system collapses duplicates into one credit per role/person_id And the stored record contains canonical IDs and preferred spellings And an audit log records merges and alias resolutions
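The collapse-on-save behavior above can be sketched as follows; the credit and audit-entry shapes are hypothetical:

```python
def collapse_credits(credits, alias_to_canonical):
    """Collapse duplicate credits that resolve to the same canonical person_id
    per role, keeping an audit trail of merges and alias resolutions."""
    seen, merged, audit = set(), [], []
    for c in credits:
        pid = alias_to_canonical.get(c["person_id"], c["person_id"])
        key = (c["role"], pid)
        if key in seen:
            # Duplicate for this role/person: record the merge, keep one credit.
            audit.append({"merged": c["person_id"], "into": pid, "role": c["role"]})
            continue
        seen.add(key)
        merged.append({"role": c["role"], "person_id": pid})
    return merged, audit
```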
Role Taxonomy-Backed Role Selection
Given a Role field backed by the account role taxonomy When the user types prod or producer Then the autocomplete suggests canonical roles with role_id, including Producer And selecting a synonym maps to the canonical role_id and preferred label And free-text roles are rejected; submission fails with HTTP 422 if no role_id is attached
Metadata Attachment on Ingest
"As a catalog admin, I want metadata embedded in the files and stored in IndieVault upon arrival so that assets stay portable and properly labeled downstream."
Description

On successful upload, embed validated metadata into the files and save a canonical record in IndieVault. For audio, write ID3v2/BWF/iXML tags (title, ISRC, artists, version, explicit flag, credits); for artwork, write XMP/IPTC (creator, rights, dimensions, color profile); and for documents, write PDF metadata. When embedding is not supported, generate a signed JSON sidecar with checksums. Ensure deterministic mapping from intake fields to tag keys, maintain version history, and verify embeds via read-back checks. Link ingested assets to their project and place them into release-ready folders with consistent naming.
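A deterministic field-to-tag mapping plus read-back check might look like the sketch below. The frame IDs follow ID3v2 convention (TIT2 = title, TSRC = ISRC, TPE1 = lead artist; TIT3 is commonly repurposed for version/subtitle), while the function names and intake shape are assumptions:

```python
# Hypothetical intake-field -> ID3v2 frame mapping; the real mapping table
# would be documented and versioned per the criteria below.
ID3_MAP = {
    "title": "TIT2",
    "isrc": "TSRC",
    "artists": "TPE1",
    "version": "TIT3",  # subtitle frame, often used for version info
}

def to_id3_frames(intake: dict) -> dict:
    """Deterministically map intake fields to tag keys; unknown fields are
    rejected so repeated ingests always write the identical frame set."""
    frames = {}
    for field in sorted(intake):  # sorted => stable, idempotent output
        if field not in ID3_MAP:
            raise KeyError(f"no mapping for intake field {field!r}")
        frames[ID3_MAP[field]] = intake[field]
    return frames

def verify_readback(written: dict, read_back: dict) -> bool:
    """Read-back check: every written frame must round-trip unchanged."""
    return all(read_back.get(k) == v for k, v in written.items())
```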

Acceptance Criteria
Audio Upload: ID3v2/BWF/iXML Embed and Read-Back Parity
Given a validated intake for an audio file (MP3 or WAV) with title, ISRC, artists, version, explicit flag, and credits When the upload completes successfully Then the system embeds the metadata into ID3v2 frames for MP3 and BWF and/or iXML chunks for WAV according to the documented mapping And a canonical record is created with the same field values And a read-back of the written tags returns values identical to the intake for all fields And the ingest is marked Successful and timestamped And no unintended tags/frames are added and the audio stream integrity (duration, bitrate) remains unchanged
Artwork Upload: XMP/IPTC Embed and Validation
Given a validated intake for an artwork file (JPEG/PNG/TIFF) with creator, rights, dimensions, and color profile When the upload completes successfully Then the system embeds the metadata into XMP/IPTC according to the documented mapping And a canonical record is created with the same field values And a read-back of XMP/IPTC returns values identical to the intake for all fields And the file’s pixel dimensions and color profile remain unchanged after embedding
Document Upload: PDF Metadata Embed and Verification
Given a validated intake for a PDF document with title, creator, and rights When the upload completes successfully Then the system embeds the metadata into the PDF info/XMP sections according to the documented mapping And a canonical record is created with the same field values And a read-back of the PDF metadata returns values identical to the intake for all fields And the document opens without warnings in standard PDF readers
Unsupported Embed Formats: Signed JSON Sidecar with Checksums
Given an uploaded asset type for which native metadata embedding is not supported When the upload completes successfully Then the system does not modify the original file and generates a JSON sidecar containing the intake metadata and per-file checksums And the JSON sidecar is signed by the system and signature verification succeeds And recomputing the file checksum matches the value stored in the sidecar And the canonical record links to the sidecar and the sidecar is stored atomically alongside the asset with deterministic naming
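One way to produce and verify such a sidecar, sketched here with an HMAC signature over a canonical JSON body (a real deployment might use asymmetric signatures and a managed key):

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # assumption: a server-held secret, for illustration only

def make_sidecar(file_bytes: bytes, metadata: dict) -> str:
    """Build a signed JSON sidecar: intake metadata plus the file's checksum,
    with an HMAC computed over the canonicalized body."""
    body = {"metadata": metadata,
            "sha256": hashlib.sha256(file_bytes).hexdigest()}
    payload = json.dumps(body, sort_keys=True, separators=(",", ":"))
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"body": body, "signature": sig}, sort_keys=True)

def verify_sidecar(sidecar_json: str, file_bytes: bytes) -> bool:
    """Verification fails if either the signature or the recomputed
    checksum mismatches, matching the criteria above."""
    doc = json.loads(sidecar_json)
    payload = json.dumps(doc["body"], sort_keys=True, separators=(",", ":"))
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, doc["signature"]):
        return False
    return doc["body"]["sha256"] == hashlib.sha256(file_bytes).hexdigest()
```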
Deterministic Field-to-Tag Mapping and Idempotency
Given identical intake data and the same file type/format When ingest is executed multiple times on the same asset Then the set of written tags/keys and their values are identical across runs, excluding allowed system timestamps And no duplicate or conflicting tag frames/keys are written And the canonical record includes the mapping version identifier used And comparing the files byte-for-byte shows only permissible differences (e.g., metadata block offsets), with core media data unchanged
Versioning on Re-ingest and History Preservation
Given an existing ingested asset linked to a project When a revised file with the same logical identity (e.g., same ISRC or asset ID) is uploaded via Smart Intake Then a new version entry is created and the previous version remains preserved and retrievable And the canonical record is updated to reference the latest version and stores a diff of changed metadata fields And embedded metadata in the latest file matches the updated canonical record on read-back And prior versions’ files and metadata remain unaltered
Project Linkage and Release-Ready Folder Placement
Given an asset submitted via Smart Intake and associated to a project/release When ingest completes successfully Then the asset is linked to the correct project in the database And the file is placed into the project’s release-ready folder structure according to the configured template And the file/folder naming conforms to the template (deterministic, collision-free, sanitized) and includes version indicators where applicable And all related assets (embedded or sidecar) are co-located and referenced consistently
Submission Completeness Gate
"As an A&R coordinator, I want the form to block incomplete deliveries and show exactly what’s missing so that releases don’t stall later."
Description

Prevent submission until each asset meets the completeness criteria defined by its schema, displaying a per-file checklist that highlights missing or invalid fields. Offer a progress indicator, quick-jump to errors, and contextual guidance. Allow authorized users to override specific requirements with a mandatory reason that is logged for audit. Enforce the gate server-side to block API submissions that bypass the UI, and return structured error payloads for integrations. Support saving drafts and resuming later without losing validation state.
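The server-side gate with structured per-file errors and reason-gated overrides could be sketched like this; the requirements table, error code, and tuple-keyed override map are illustrative, not IndieVault's actual schema:

```python
# Hypothetical per-type required fields (the real rules come from each asset's schema).
REQUIRED = {"audio": ["title", "primary_artist", "isrc"], "artwork": ["dimensions"]}

def gate(files, overrides=None):
    """Completeness gate (sketch). Returns (ok, errors) where `errors` mirrors
    the structured 422 payload; an override only counts if it carries a reason
    of at least 10 characters, as required by the override criteria."""
    overrides = overrides or {}
    errors = []
    for f in files:
        for field in REQUIRED.get(f["type"], []):
            if f.get(field):
                continue
            ov = overrides.get((f["id"], field))
            if ov and len(ov.get("reason", "")) >= 10:
                continue  # overridden with a logged reason
            errors.append({"fileId": f["id"], "fieldPath": field,
                           "errorCode": "MISSING_REQUIRED"})
    return (not errors, errors)
```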

Acceptance Criteria
Per-file completeness checklist blocks submission for audio upload
Given the user uploads at least one audio file and one artwork file and opens the submission form When required fields in each file’s schema (e.g., Title, Primary Artist, ISRC for audio; Dimensions for artwork) are missing or invalid Then each file displays a checklist with missing/invalid items highlighted and the Submit action is disabled And when the user corrects all invalid/missing fields for a file Then that file’s checklist shows all items complete and its status is marked Complete And when all files meet completeness or have permitted overrides applied Then the Submit action becomes enabled
Progress indicator reflects overall and per-file validation status
Given a multi-file submission with mixed validation states When the user views the intake form Then a progress indicator shows X/Y files complete and the total count of outstanding errors And each file tile displays a badge for state: Not Started, In Progress, Blocked (has errors), or Complete And when the user fixes or introduces an error Then progress counts and per-file badges update in real time without page reload
Quick-jump navigation to first error across files
Given there are validation errors across multiple files When the user activates "View Errors" or presses the quick-jump shortcut Then focus moves to the first invalid field and its file panel auto-expands And when the user activates "Next Error" Then focus moves to the next invalid field across files until the last error, after which the control is disabled And quick-jump controls are keyboard accessible and announce the target field to assistive tech
Contextual guidance for invalid fields
Given a user enters a value that fails validation (e.g., ISRC format) When the field loses focus or validation runs Then inline error text displays the reason and an example of valid format, with a link to schema guidance And when the value becomes valid Then the error text clears and the field is marked valid And error and guidance messages respect the user’s language setting
Authorized override with reason and audit logging
Given a field marked overrideable by the schema is failing validation When a user with OverrideSubmissionGate permission selects Override for that field Then a modal requires a mandatory reason of at least 10 characters before confirmation And upon confirmation Then the field is marked Overridden, the error is cleared for gating, and the file checklist shows an Override badge And an audit entry records userId, timestamp, fileId, fieldPath, prior value, override flag, and reason And when a user without permission attempts to override Then the option is not available or results in 403 Forbidden And attempts to override non-overrideable fields are blocked with an explanatory message
Server-side gate rejects incomplete API submissions with structured errors
Given a client submits via API a payload containing files that do not meet schema completeness When the server validates the submission Then the request is rejected with HTTP 422 Unprocessable Entity and no partial assets are created And the JSON error payload includes correlationId and an array of per-file errors with fileId, fieldPath, errorCode, message, and isOverrideAllowed And when valid overrides with reasons are included and the authenticated principal has permission Then the server accepts the submission; otherwise, unauthorized overrides result in HTTP 403
Draft save and resume preserves validation state
Given the user has uploaded files and partially completed metadata with some validation errors When the user selects Save Draft Then the current files, field values, overrides, and per-field validation states are persisted under a draftId And when the user resumes the draft from its link and the schema version is unchanged Then the UI restores the same validation indicators (errors, overrides, progress) as at save time And if the schema version has changed since save Then the UI indicates revalidation is required and recalculates status on load while preserving entered values
No-Account Secure Intake Links
"As a project manager, I want to send secure intake links to external collaborators without accounts so that they can deliver assets safely and on time."
Description

Enable creation of secure, expiring intake links scoped to a project or release that allow external collaborators to submit without an IndieVault account. Links are signed, can be limited by asset types, file count, and size, and optionally require email verification, passcode, and reCAPTCHA. Apply rate limits and IP throttling, record consent to terms, and capture an audit trail (who, when, what) per submission. Provide per-link branding and instructions, and automatically associate delivered assets and metadata with the target project while honoring the same validation and completeness rules.
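The signed, expiring token can be sketched as an HMAC over base64-encoded claims; the secret handling, claim names, and status strings here are assumptions:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # assumption: server-held signing key

def sign_link(project_id: str, expires_at: int) -> str:
    """Produce a tamper-evident token scoping an intake link to a project."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"p": project_id, "exp": expires_at}, sort_keys=True).encode()
    ).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def check_link(token: str, now: int) -> str:
    """Return 'ok', 'invalid_signature' (tampered: reject and log), or
    'gone' (expired: respond 410 Gone per the criteria)."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return "invalid_signature"
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return "gone" if now >= claims["exp"] else "ok"
```

Manual revocation would additionally require a server-side deny-list, which a pure signature check cannot express.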

Acceptance Criteria
Create Expiring Signed Intake Link Scoped to Project/Release
Given I select a target project/release and configure expiry, allowed asset types, max file count, and size limits When I create the intake link Then the system stores those settings with the link and returns a URL containing a signed, tamper-evident token scoped to that project/release Given any URL parameter or path segment covered by the signature is altered When the link is visited Then the request is rejected with an invalid signature error and the access attempt is logged Given the link is expired by time or manually revoked When the link is visited or a submission is attempted Then the server responds 410 Gone with the link’s branding message and no further uploads are accepted while previously submitted assets remain associated to the target project
Enforce Optional Access Controls (Email Verification, Passcode, reCAPTCHA)
Given email verification is enabled for the link When a visitor submits an email address Then a single-use verification code or link is issued and expires after 15 minutes; only verified visitors may proceed Given a passcode is enabled for the link When a visitor enters the correct passcode Then access is granted; otherwise access is denied with a non-revealing error message Given reCAPTCHA is enabled for the link When the visitor fails the challenge Then access is denied; when they pass, they may proceed without requiring an IndieVault account Given multiple access controls are enabled When a visitor attempts to proceed Then all enabled controls must be satisfied before the upload interface is shown
Rate Limiting and IP Throttling on Link Access and Upload
Given link access limits are configured to 60 requests per 5 minutes per IP When a single IP exceeds the limit Then subsequent requests return 429 Too Many Requests with a Retry-After header and are recorded in rate-limit logs Given email verification send limits are configured to 3 codes per 15 minutes per email and IP When the limit is exceeded Then no additional codes are sent and the response is 429 with a generic message Given upload session limits are configured to 2 concurrent uploads per IP per link When a third upload session is initiated Then it is rejected with 429 and guidance to retry later
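The per-IP limits above imply a sliding-window counter along these lines (in-memory sketch; a real deployment would typically back this with a shared store such as Redis):

```python
from collections import deque

class SlidingWindowLimiter:
    """Sliding-window rate limiter; defaults match the 60 requests per
    5 minutes per IP figure in the criteria above."""

    def __init__(self, limit: int = 60, window: float = 300.0):
        self.limit, self.window = limit, window
        self.hits: dict[str, deque] = {}

    def allow(self, key: str, now: float) -> bool:
        q = self.hits.setdefault(key, deque())
        while q and now - q[0] >= self.window:
            q.popleft()  # drop hits that have aged out of the window
        if len(q) >= self.limit:
            return False  # caller responds 429 with a Retry-After header
        q.append(now)
        return True
```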
Dynamic Metadata Intake and Real-time Validation
Given a visitor uploads audio files When the Smart Intake form renders Then only required audio metadata fields are shown and ISRC values are validated in real time to match the ISRC format (CCXXXYYNNNNN) Given credit roles and names are entered When validation runs Then role labels must be from the allowed list and credit names must match the roster; invalid entries block submission with inline, actionable errors Given the visitor uploads any supported asset type (audio, artwork, stems, contracts) When files arrive Then entered metadata is attached to each file record upon upload and assets are automatically associated to the configured target project/release
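The ISRC check (CCXXXYYNNNNN: 2-letter country code, 3 alphanumeric registrant characters, 2-digit year, 5-digit designation) reduces to a small regular expression; the dash-stripping is a convenience assumption, since ISRCs are often written CC-XXX-YY-NNNNN:

```python
import re

ISRC_RE = re.compile(r"^[A-Z]{2}[A-Z0-9]{3}\d{2}\d{5}$")

def validate_isrc(value: str) -> bool:
    """Accept the compact CCXXXYYNNNNN form used in the criteria;
    separators are stripped and case is normalized before matching."""
    return bool(ISRC_RE.fullmatch(value.replace("-", "").upper()))
```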
Prevent Incomplete Deliveries and Enforce Asset-Type/Count/Size Limits
Given the link allows only audio and artwork with a maximum of 10 files, 500 MB per file, and 2 GB total When the visitor attempts to upload disallowed types, exceed counts, per-file size, or total size Then the system blocks the action with specific error messages and does not accept the submission Given the project’s completeness rules require at least 1 audio file and 1 artwork with valid required metadata When the visitor attempts to submit with missing files or invalid metadata Then the submit action is disabled client-side and server-side validation rejects the attempt with detailed errors until requirements are met Given the visitor completes all required files and metadata within limits When they submit Then the system accepts the delivery and marks the submission as complete for that link-session
Audit Trail and Consent Capture per Submission
Given the visitor begins a submission When they are presented with the terms Then they must explicitly agree to the terms and the system records the terms version, timestamp (UTC), IP, and user agent with the submission Given files and metadata are uploaded When the submission completes Then the audit trail captures who (verified email or provided email/name), when (start/end timestamps), and what (file checksums, names, sizes, types, and metadata snapshot) for each file, stored as an immutable append-only record Given a project owner views the link’s activity When they open the audit view Then they can see and export the submission audit trail for that link without exposing sensitive visitor data beyond what was captured
Per-Link Branding, Instructions, and Safe Presentation
Given the link is configured with a logo, accent color, and instructions (markdown) When the intake page renders on desktop and mobile Then the configured branding is applied and instructions are displayed; content is sanitized to prevent XSS Given no branding is configured When the intake page renders Then IndieVault defaults are used and the UI does not expose internal project IDs, collaborator lists, or navigation outside the intake flow Given the visitor completes a submission When the success screen is shown Then the screen displays the configured branding and the link-specific next-step instructions

Guarded Drops

Real‑time preflight at the point of upload enforces file type, duration, sample rate/bit depth, loudness windows, and artwork specs per your selected Delivery Profile. Uploaders get instant, plain‑language fix tips before the file is accepted, dramatically reducing bad assets and last‑minute firefighting.

Requirements

Profile-Aware Upload Selector
"As an indie artist uploading assets, I want to select or auto-apply the correct delivery profile so that my files are checked against the right specs without extra steps."
Description

Add an upload flow step that binds each incoming file to a selected Delivery Profile, with sensible defaults based on project/release context. The selector displays a concise summary of enforced rules (audio format, duration, sample rate/bit depth, loudness window, artwork specs) and locks the association for the life of the asset version. Persist the last-used profile per project, support auto-selection via API/URL parameters, and prevent uploads when no profile is chosen. Expose the chosen profile in asset metadata, activity logs, and downstream pipelines (release-ready folders, review links) to ensure consistent enforcement end-to-end.
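The preselection precedence described above (explicit URL/API parameter wins; then the project's last-used profile; then the release default; invalid parameters force a manual pick, and deactivated profiles are skipped) can be sketched as:

```python
def resolve_profile(url_param, last_used, release_default, available):
    """Resolve the preselected Delivery Profile, or None when manual
    selection is required. `available` is the set of active, accessible
    profile IDs (shape assumed for this sketch)."""
    if url_param is not None:
        # An invalid explicit parameter is rejected outright: per the
        # criteria, selection remains required rather than falling back.
        return url_param if url_param in available else None
    for candidate in (last_used, release_default):
        if candidate in available:
            return candidate
    return None
```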

Acceptance Criteria
Default Profile Preselection by Project Context
Given a user opens the upload flow within a project that has a stored last-used Delivery Profile P When the "Select Delivery Profile" step renders Then P is preselected and the Continue/Upload controls are enabled And an analytics event "profile_preselected" captures projectId and profileId Given the project has no stored profile but the release has a default profile R When the step renders Then R is preselected Given neither a project stored profile nor a release default exists When the step renders Then no profile is selected and the Continue/Upload controls are disabled with helper text prompting selection
Persist Last-Used Profile per Project
Given a user completes an upload binding files to Delivery Profile P within Project X When the upload is confirmed Then P is stored as the last-used profile for Project X Given a subsequent upload is started in Project X When the profile selector loads Then P is preselected unless overridden by an explicit URL/API parameter or the profile has been deactivated Given P has been deactivated or deleted When the selector loads Then no profile is preselected and a non-blocking notice explains the previous profile is unavailable
Auto-Selection via API/URL Parameters
Given the upload flow is opened with URL parameter profileId=Q (or API payload specifies profileId=Q) When the profile selector loads Then profile Q is auto-selected and recorded as the chosen profile for this upload session Given Q does not exist, is not accessible to the user/org, or is not allowed for the project When the selector loads Then the auto-selection is rejected, a clear error is shown, and selection remains required And the Continue/Upload controls stay disabled until a valid profile is chosen Given a valid Q is provided and differs from the stored last-used profile When the upload completes Then Q becomes the new last-used profile for the project
Upload Blocked When No Profile Chosen
Given no Delivery Profile is selected When the user attempts to proceed or upload Then the Continue/Upload controls are disabled and a validation message "Select a Delivery Profile to continue" is displayed Given a client attempts to upload via API without a profile association When the request is processed Then the server responds 400 with error code PROFILE_REQUIRED and no file is persisted Given a profile is selected When the user proceeds Then the validation clears and the upload step continues
Delivery Profile Rule Summary Visible
Given a Delivery Profile is selected When the selector step is visible Then a concise rule summary displays: audio format, duration window (min–max), sample rate, bit depth, loudness window (LUFS), and artwork specs (min px, aspect ratio, color space) And the summary content matches the selected profile's configuration Given the user switches to a different profile When the selection changes Then the rule summary updates within 200 ms without page reload and is announced to screen readers Given no profile is selected When the selector step is visible Then the rule summary is hidden and a prompt instructs the user to select a profile
Preflight Validation Uses Selected Profile Rules
Given a Delivery Profile P is selected When the user drops or selects a file for upload Then client-side preflight validates file type, duration, sample rate, bit depth, loudness, and (for images) artwork specs against P And nonconforming files are rejected with plain-language fix tips listing each violated rule And conforming files are accepted and advance to the next step Given a nonconforming file bypass attempt occurs (e.g., via modified client) When the server receives the upload Then server-side validation enforces the same rules from P and rejects with 422 and structured errors per violation Given a file passes preflight and server validation When the asset version is created Then the asset is marked as "profile-compliant" with profileId=P
Profile Exposure and Propagation End-to-End
Given an asset version is created with selected Delivery Profile P When the asset metadata is saved Then metadata includes profileId=P and profileName at time of binding And an activity log entry "profile_bound" records actor, timestamp, projectId, assetId, and profileId Given downstream processes run (release-ready folders, review links) When they reference the asset Then they receive profileId=P and enforce the same rules consistently Given an attempt is made to change the profile on an existing asset version When the request is made via UI or API Then the action is blocked as immutable with 409 CONFLICT and guidance to create a new asset version with a different profile
Streaming Preflight Validation Engine
"As a manager, I want instant validation feedback while files upload so that mistakes are caught before they enter the library and delay releases."
Description

Implement a low-latency server-side validation service that inspects files as data streams during upload, returning immediate pass/fail signals and granular violations without requiring full file ingestion. For audio, validate container/codec, sample rate, bit depth, channels, duration, integrated LUFS, and true peak; for artwork, validate format, dimensions, aspect ratio, color space/profile, and file size. Provide structured rule-hit output, with first feedback in under a second for headers and within a few seconds for short audio, backed by scalable concurrency and backpressure. Ensure security by discarding noncompliant payloads and storing only minimal headers for diagnostics. Support batch uploads, aggregating per-file results and producing a summary status.
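Header-level pass/fail on the first bytes of a stream is essentially magic-byte sniffing; the signature table and ruleHit shape below are simplified assumptions for illustration:

```python
# Magic-byte signatures for quick header-level checks on the first bytes
# of a stream, before any payload is ingested.
SIGNATURES = {
    b"ID3": "mp3",                   # MP3 with an ID3v2 tag
    b"RIFF": "wav",                  # RIFF container (WAV/BWF)
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
}

def sniff_header(first_bytes: bytes):
    """Return (detected_type, rule_hits). A header-only pass/fail can be
    emitted as soon as these few bytes arrive; an unrecognized container
    yields an error-severity rule hit and the stream can be terminated."""
    for magic, kind in SIGNATURES.items():
        if first_bytes.startswith(magic):
            return kind, []
    return None, [{"ruleId": "container.unknown", "severity": "error",
                   "message": "Unrecognized container/format in header bytes"}]
```

Full validation (duration, LUFS, true peak, dimensions) still requires decoding more of the stream; this sketch covers only the sub-second header stage.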

Acceptance Criteria
Sub-second Header Feedback on Stream Start
Given an uploader begins streaming an audio or artwork file under a selected Delivery Profile When the first 64 KB of the stream (including container/header bytes) are received Then an initial validation result is returned within 800 ms at p95 and 200 ms at p50, containing pass/fail for container/format and any header-level rule hits And the response includes uploadId, fileName, fileType (audio|artwork), profileId, startedAt, and an array of ruleHits And if header-level validation fails, the engine terminates the stream within 100 ms and marks the file as rejected without persisting payload bytes
Audio Stream Technical Validation and Loudness Window
Given a compliant audio stream with duration ≤ 90 seconds and an active Delivery Profile defining sample rate, bit depth, channels, duration window, integrated LUFS, and true peak thresholds When the final byte of the upload is received Then the full validation result (container/codec, sampleRate, bitDepth, channels, duration, integratedLUFS, truePeak) is returned within 3 seconds at p95 And integratedLUFS accuracy is within ±0.3 LU versus a reference analyzer and truePeak accuracy within ±0.1 dBTP And each out-of-range metric produces a ruleHit with expected range, observed value, comparator, severity=error, and a plain-language fixTip And if any severity=error ruleHit exists, the file status is fail and the payload is discarded
Artwork Spec Enforcement During Streamed Upload
Given an artwork upload under a Delivery Profile defining allowed formats, dimensions, aspect ratio tolerance, color space/profile, and max file size When sufficient bytes are received to determine format, ICC profile, and pixel dimensions, or when the stream completes Then the validation returns format, width, height, aspect ratio, color space/profile, and file size, and flags violations against the profile And aspect ratio tolerance is enforced to ±0.5% of target And noncompliant files are failed, the stream is terminated within 100 ms, and only diagnostics are retained And each violation includes a plain-language fixTip (e.g., "resize to 3000x3000 px, sRGB, ≤ 10 MB")
Structured Rule-Hit Output Contract
Given any validation run (audio or artwork) When rule evaluations are produced Then the response includes ruleHits where each item has ruleId, category (audio|artwork|security), severity (error|warning|info), metric, expected, observed, comparator, location (timecode|header|dimension), message, fixTip, and pass (boolean) And the overall result includes file-level pass (boolean), startedAt, completedAt, and processingLatencyMs And the JSON validates against schema version "preflight.v1" with no additional properties allowed
Batch Upload Aggregation and Streaming Results
Given a batch upload containing N files submitted in one session When per-file validations complete (in any order) Then per-file results are emitted as they complete with stable upload order indices And upon completion of all N files, a batch summary is emitted with counts {total, passed, failed, warnings} and overallStatus in {AllPass, PartialFail, AllFail} And the batch summary includes an array of fileResults with fileId, fileName, type, pass (boolean), and ruleHit counts
Concurrency and Backpressure Under Load
Given 1,000 concurrent upload streams across mixed audio and artwork When the system processes streams under nominal capacity Then header feedback latency remains ≤ 1,000 ms at p95 and ≤ 300 ms at p50, and short-audio (≤ 90 s) full results remain ≤ 5 s at p95 from last-byte And when capacity is exceeded, new streams receive HTTP 429 with a Retry-After between 1 and 10 seconds and no payload bytes are persisted And no active stream is dropped without an explicit error response and correlation uploadId
Secure Discard and Minimal Retention Policy
Given a noncompliant upload or user-cancelled upload When validation fails or the upload is cancelled Then all received payload bytes are discarded within 100 ms and are not retrievable from storage, caches, or logs And only minimal diagnostics are retained for up to 24 hours: fileName, mimeType, header signature hash, measured metrics, timestamps, and ruleHits And logs contain no raw file content; only IDs and hashes And attempts to fetch or reconstruct payload data after failure return 404/Not Found
Actionable Fix Tips UI
"As an uploader, I want clear, human-readable guidance on how to fix issues so that I can correct files quickly without guesswork or external tools."
Description

Surface plain-language guidance inline with upload results that explains what failed, what was expected, and how to fix it. Present the actual vs. required values (e.g., “-17.2 LUFS; target -14 ±1 LUFS; raise by ~3 dB”), severity, and step-by-step suggestions. Include links to help docs, copyable spec snippets for engineers, and quick re-upload controls that retry only failed files. Ensure accessibility (WCAG AA), localization-ready messaging, and consistent design across web and desktop uploaders. Persist validation messages with the asset attempt so collaborators can see what went wrong.
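The example message in this description ("-17.2 LUFS; target -14 ±1 LUFS; raise by ~3 dB") can be generated mechanically; the function name and formatting choices are assumptions:

```python
def loudness_fix_tip(measured_lufs: float,
                     target_lufs: float = -14.0,
                     tol: float = 1.0):
    """Build the actual-vs-required loudness tip from a measurement and the
    profile's target window; returns None when the value is in range."""
    delta = target_lufs - measured_lufs
    if abs(delta) <= tol:
        return None  # within the loudness window: no tip needed
    verb = "raise" if delta > 0 else "lower"
    return (f"{measured_lufs:.1f} LUFS; target {target_lufs:g} ±{tol:g} LUFS; "
            f"{verb} by ~{abs(round(delta))} dB")
```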

Acceptance Criteria
Inline Plain-Language Error Messaging on Upload Failure
Given a user uploads one or more assets that fail Guarded Drops validation When validation completes for a file Then an inline fix-tip message appears adjacent to the failed file row within 1 second of result availability And the message uses plain language describing what failed without exposing raw error codes or stack traces And the message includes a concise summary sentence of 20 words or fewer And the message contains at least one actionable instruction verb (e.g., raise, convert, trim) And multiple failures for the same file are listed in priority order with severity badges and collapsible details And the error presentation is visually distinct from success messages and warnings
Display of Actual vs Required Values with Severity and Fix Steps
Given a validation rule fails (e.g., loudness outside target window) When the fix tip is rendered Then it shows the actual measured value (e.g., -17.2 LUFS) and the required target/range from the selected Delivery Profile (e.g., -14 ±1 LUFS) And it shows severity as Error or Warning according to rule configuration And it provides 1–3 ordered step-by-step suggestions specific to the failure type And for level-related failures it includes an estimated adjustment magnitude (e.g., raise by ~3 dB) And units and tolerances are displayed explicitly (e.g., kHz, bit depth, pixels, LUFS) And all numeric values are rounded with consistent precision suitable to the metric (e.g., 0.1 LUFS, integer pixels)
Help Links and Copyable Spec Snippets Availability
Given any failed validation rule is shown When the user views the fix tip Then a Learn more link is present to the rule-specific help doc and opens in a new tab/window And a Copy spec for engineers control is present When the user clicks Copy spec for engineers Then the clipboard contains a plain-text snippet including Delivery Profile name, rule name, required values/tolerance, and example acceptable formats And a non-intrusive confirmation toast appears within 500 ms indicating the copy succeeded And the help link URL and copied snippet match the exact rule instance and profile version used for validation
Quick Re-Upload of Failed Files Only
Given an upload batch where some files failed and others passed When the user clicks Retry failed Then only the failed files are queued for re-upload and re-validation And previously passed files are not re-uploaded and retain their pass status When the user drags a replacement file onto a failed row or chooses Replace Then the new file is uploaded, validated against the same Delivery Profile, and replaces only that failed item And progress and final status update inline per file without refreshing the page And if all re-uploaded files pass, the batch state reflects overall pass without duplicating passed assets
WCAG AA Accessibility Compliance for Fix Tips UI
Given a keyboard-only user navigates the uploader results When focusing through file rows and fix tips Then every control and link is reachable in logical order with visible focus indicators And no interaction requires a pointer-only gesture Given a screen reader is active When a validation failure occurs Then an ARIA live region announces the failure including file name, rule, actual and expected values, and severity And all controls have accessible names/roles/states Then text and interactive element contrast ratios meet or exceed 4.5:1 And severity is conveyed by both color and text/aria labels (not color alone) And interactive targets are at least 44 by 44 CSS pixels
Localization-Ready Messaging and Cross-Platform Consistency
Given the application locale is set to es-ES with provided translations When validation failures are displayed Then all fix-tip strings render from i18n keys with ICU MessageFormat placeholders and no hard-coded English And number, unit, and date formatting follow the active locale And the UI accommodates 30% longer strings without truncation, overlap, or layout breakage Given the same scenario on Desktop and Web uploaders Then the components, typography, and spacing match the design system tokens with no functional or visual drift And all user-facing strings for the fix tips exist in resource files and pass a missing-keys check
Persistence and Collaboration Visibility of Validation Messages
Given an upload attempt produces one or more validation failures When the user refreshes the page or returns to the asset later Then the prior validation messages are persisted and visible, associated with that attempt, including timestamp, uploader identity, Delivery Profile, and per-rule details And a collaborator with view permission can see the same attempt history and messages And persisted messages remain internal-only and are never exposed through public or partner-facing review links And an API endpoint returns the attempt’s validation messages as structured data (rule id, actual, required, severity, suggestions) And a subsequent re-upload creates a new immutable attempt record without altering the historical messages
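As one possible shape for the structured data this criterion calls for, the sketch below models a persisted validation message and serializes an attempt record to JSON. All identifiers and field values here are illustrative, not IndieVault's actual API.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical shape for one persisted validation message, mirroring the fields
# named in the criterion: rule id, actual, required, severity, suggestions.
@dataclass
class ValidationMessage:
    rule_id: str
    actual: str
    required: str
    severity: str          # "error" or "warning"
    suggestions: list

attempt = {
    "attempt_id": "att_01",                      # hypothetical identifiers
    "delivery_profile": "Streaming Master v3",
    "messages": [asdict(ValidationMessage(
        rule_id="loudness.integrated",
        actual="-17.2 LUFS",
        required="-14 ±1 LUFS",
        severity="error",
        suggestions=["Raise overall level by ~3 dB, then re-export"],
    ))],
}
print(json.dumps(attempt, indent=2))
```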
Delivery Profile Rule Builder
"As a label admin, I want to create and version delivery profiles so that teams can enforce the right specs per partner or project without ambiguity."
Description

Provide an admin UI and API to create, edit, version, and clone Delivery Profiles that define enforceable rules for audio, artwork, and metadata. Support presets and templating, required/optional fields, ranges and exact targets, and per-rule descriptions shown to uploaders. Changes produce new, immutable profile versions; assets remain bound to the version active at upload. Enable import/export of profiles (JSON), permission controls (who can author/approve), and assignment to projects/releases. Validate profiles at authoring time to prevent conflicting rules.

Acceptance Criteria
Create Profile with Audio/Artwork/Metadata Rules
Given I am an admin with author permission When I create a new Delivery Profile with audio, artwork, and metadata rules including required/optional flags, exact targets and ranges, and per-rule descriptions Then the profile is saved as version 1 and is retrievable via UI and API And each rule persists with its type, constraints, and required flag And per-rule descriptions are stored and returned by the preflight tips API
Edit Produces New Immutable Version
Given a Delivery Profile exists at version 1 When I modify any rule and save Then version 2 is created and version 1 remains immutable And all existing assignments continue to reference version 1 until explicitly changed And attempts to update version 1 are rejected with an error indicating immutability
Clone and Preset Templating
Given a preset or existing profile is selected When I clone it and provide a new name Then a new profile is created with identical rules and a new unique identifier And template variables, if present, resolve or prompt for values before save And no linkage to the source profile's versions remains
Authoring-Time Conflict Validation
Given I define rules that conflict (e.g., empty allowed file types, a sample-rate exact target outside its range, or min>max thresholds) When I attempt to save the profile Then the save is blocked and I see field-level error messages describing each conflict And no new profile version is created
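The conflict checks above can be sketched as a small authoring-time validator. The rule shape and field names (`min`, `max`, `target`, `allowed_types`) are illustrative assumptions, not IndieVault's actual schema.

```python
# Minimal sketch of authoring-time conflict validation for a Delivery Profile.
def validate_profile(profile):
    errors = []
    for rule in profile.get("rules", []):
        rid = rule["id"]
        if rule.get("allowed_types") == []:
            errors.append((rid, "allowed file types must not be empty"))
        lo, hi = rule.get("min"), rule.get("max")
        if lo is not None and hi is not None and lo > hi:
            errors.append((rid, "min must not exceed max"))
        target = rule.get("target")
        if (target is not None and lo is not None and hi is not None
                and not lo <= target <= hi):
            errors.append((rid, "exact target lies outside its range"))
    return errors

profile = {"rules": [
    {"id": "audio.sample_rate", "min": 44100, "max": 48000, "target": 96000},
    {"id": "audio.format", "allowed_types": []},
]}
for rule_id, message in validate_profile(profile):
    print(f"{rule_id}: {message}")
```

Returning all conflicts at once, rather than failing on the first, supports the field-level error messages the criterion requires.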
Assign Profile Version to Project/Release and Asset Binding
Given a project or release exists When I assign Delivery Profile X version 2 to it Then the assignment persists and is visible via UI and API And when an asset is uploaded to that project/release, the asset record stores profile X v2 as its bound profile version And if the project/release is later reassigned to version 3, previously uploaded assets remain bound to version 2
Import/Export Profiles (JSON)
Given a Delivery Profile exists When I export it as JSON Then the JSON includes all rule constraints, required/optional flags, per-rule descriptions, and version metadata, and excludes credentials/secrets And when I import that JSON into another workspace Then a new profile is created with equivalent rule semantics and a new identifier And invalid or schema-incompatible JSON is rejected with descriptive errors
Permission Controls for Authoring and Approval
Given role-based permissions are configured for authors and approvers When a user with author role creates or edits a profile Then the changes require approval by a user with approver role before the profile can be assigned to projects/releases And users without author permission cannot create or edit profiles And users without approver permission cannot approve profiles
Enforced Upload Gate & Override Workflow
"As a project manager, I want noncompliant uploads blocked with an auditable override path so that quality standards are upheld without blocking urgent exceptions."
Description

Reject files that fail preflight, surfacing clear reasons for each failure, and prevent noncompliant assets from entering the library, release-ready folders, or review links. Allow role-based overrides that require a justification, capture who/when/why, and tag the asset version as "accepted with exception" for downstream visibility. Provide quarantine handling for partially uploaded or overridden assets, notifications to uploaders and project owners, and guardrails such as rate limiting and cooldowns on repeated failures. Support resumable, chunked uploads with consistent validation outcomes.

Acceptance Criteria
Gate Blocks Noncompliant Uploads
Given a user uploads an asset under a selected Delivery Profile When the asset fails any preflight rule for file type, duration, sample rate, bit depth, loudness, or artwork specs Then the system rejects the upload and creates no library record, release-ready folder entry, or review-link attachment candidate And the response shows each failed rule with measured versus required values and a plain-language fix tip And no partial or failed asset is visible in any picker or API list
Role-Based Override with Audit Trail
Given a user with the Override permission views a failed preflight result When they choose Override and submit a non-empty justification Then the system admits the asset to the intended destination and the library And records who performed the override, when, and the justification in an immutable audit log linked to the asset version And tags the asset version as "accepted with exception" visible in UI and API metadata And the override is attributed to the acting user's role and project context
Quarantine for Partial and Exception Assets
Given an upload is interrupted before completion When the session resumes or expires Then the partial file remains in Quarantine and is not accessible in the library, release-ready folders, or review links And if the upload is overridden, the resulting asset appears in the library with "accepted with exception" status and a corresponding Quarantine record is created for audit review And quarantined records are listable in an admin-only view with state, uploader, project, size, and last activity And quarantined partials can be resumed to completion or deleted by authorized users
Actionable Rejection Feedback
Given an upload fails preflight When the system returns the failure to the uploader Then each failed rule includes a rule name, the measured value, and the required value or range And each failed rule includes a plain-language fix tip specific to the failure And the error list is displayed inline in the upload UI and returned in the API response
Rate Limiting and Cooldown on Repeated Failures
Given rate limits are configured to 3 failed attempts per 5 minutes per user and 30 per 5 minutes per IP And a user triggers repeated preflight failures within the configured window When the threshold is reached Then subsequent upload attempts from that user are blocked for 15 minutes with a rate_limit_cooldown error and countdown shown in the UI And attempts from the same IP respect the IP-level limit And a successful compliant upload resets the user's failure counter
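The per-user limit and cooldown above can be sketched with a sliding-window failure counter. Storage here is in-memory for illustration; a real service would need shared state (and the parallel per-IP counter is omitted).

```python
import time
from collections import defaultdict, deque

# Sketch of the configured limits: 3 failures per 5-minute window per user,
# then a 15-minute cooldown. Names and storage are illustrative only.
WINDOW_S, LIMIT, COOLDOWN_S = 300, 3, 900

failures = defaultdict(deque)   # user_id -> timestamps of recent failures
cooldown_until = {}             # user_id -> epoch seconds when cooldown ends

def record_failure(user_id, now=None):
    now = time.time() if now is None else now
    q = failures[user_id]
    q.append(now)
    while q and q[0] <= now - WINDOW_S:   # drop failures outside the window
        q.popleft()
    if len(q) >= LIMIT:
        cooldown_until[user_id] = now + COOLDOWN_S

def is_blocked(user_id, now=None):
    now = time.time() if now is None else now
    return cooldown_until.get(user_id, 0) > now

def record_success(user_id):
    failures[user_id].clear()   # a compliant upload resets the counter

for t in (0, 60, 120):
    record_failure("u1", now=t)
print(is_blocked("u1", now=130))   # True: third failure within 5 minutes
```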
Resumable Chunked Uploads with Consistent Validation
Given a client uploads the same file once as a single stream and once via resumable chunks When both uploads complete Then preflight produces identical pass/fail outcomes and identical failure details for both methods And chunk checksums and a final file checksum verify byte-identical reassembly before validation And partially uploaded chunks remain in Quarantine and do not create any visible asset records
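The checksum verification step above can be sketched with the standard library: each chunk is checked against its declared SHA-256, and the reassembled file against the final hash, before validation runs. A real service would stream chunks to disk rather than hold them in memory.

```python
import hashlib

# Sketch of byte-identical reassembly verification for chunked uploads.
def verify_reassembly(chunks, expected_chunk_sha256s, expected_file_sha256):
    file_hash = hashlib.sha256()
    for chunk, expected in zip(chunks, expected_chunk_sha256s):
        if hashlib.sha256(chunk).hexdigest() != expected:
            return False                  # a chunk was corrupted in transit
        file_hash.update(chunk)
    return file_hash.hexdigest() == expected_file_sha256

chunks = [b"audio-part-1", b"audio-part-2"]
chunk_sums = [hashlib.sha256(c).hexdigest() for c in chunks]
file_sum = hashlib.sha256(b"".join(chunks)).hexdigest()
print(verify_reassembly(chunks, chunk_sums, file_sum))  # True
```

Because validation only ever sees the verified reassembled bytes, single-stream and chunked uploads of the same file necessarily produce identical preflight outcomes.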
Attachment Guardrails for Review Links and Release-Ready Folders
Given a user attempts to attach an asset to a review link or add it to a release-ready folder When the asset is noncompliant or in Quarantine Then the action is blocked with an explanation and the asset cannot be attached or added And when the asset is "accepted with exception", the attachment is allowed and the exception status is displayed to recipients and downstream consumers
Validation Telemetry & Audit Logs
"As a team lead, I want visibility into validation outcomes so that I can coach contributors and refine profiles to reduce rework and missed deadlines."
Description

Record structured telemetry for every validation attempt, including profile/version, rules evaluated, pass/fail outcomes, timings, uploader identity, and client type. Expose dashboards and filters by project, user, time window, rule, and asset type to highlight common failure patterns and optimization opportunities. Provide exports (CSV/JSON), webhooks for repeated failures or override events, and retention controls. Integrate with per-recipient analytics and release readiness so stakeholders can correlate validation quality with downstream link performance and deadlines.

Acceptance Criteria
Structured telemetry for every validation attempt
Given an uploader validates an asset using a selected Delivery Profile and version When validation completes with pass or fail, or is aborted by network error or user cancel Then exactly one telemetry event is persisted with fields: attempt_id, timestamp (UTC ISO-8601), project_id, uploader_user_id, client_type (web/mobile/api), asset_type, delivery_profile_id, delivery_profile_version, rules_evaluated[{rule_id, outcome, message, duration_ms}], overall_outcome (pass|fail|aborted), total_duration_ms, file_size_bytes, file_hash_sha256 And the telemetry event is queryable in dashboards and APIs within 5 seconds (p95) of validation completion And duplicate submissions for the same attempt_id are rejected idempotently and do not create extra events And validation attempts without a Delivery Profile are logged with overall_outcome=fail and a rules_evaluated entry rule_id=profile_required
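The idempotent-rejection requirement above can be sketched as ingestion keyed on `attempt_id`. The in-memory dict stands in for whatever store the service uses; this is an illustration of the dedup rule, not the telemetry pipeline itself.

```python
# Sketch of idempotent telemetry ingestion: duplicate submissions for the
# same attempt_id are rejected and create no extra events.
events = {}

def ingest(event):
    attempt_id = event["attempt_id"]
    if attempt_id in events:
        return False        # duplicate: rejected idempotently
    events[attempt_id] = event
    return True

e = {"attempt_id": "att_42", "overall_outcome": "fail",
     "rules_evaluated": [{"rule_id": "profile_required", "outcome": "fail"}]}
print(ingest(e), ingest(e))   # True False
```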
Filterable validation dashboards and aggregates
Given telemetry exists across multiple projects and users When a user with Project Viewer or higher opens the Validation dashboard Then they can filter by project_id (multi), user_id (multi), time window (relative last 1h/24h/7d/30d and absolute start/end), rule_id (multi), asset_type (audio|artwork|document), delivery_profile_id/version, client_type, and overall_outcome And applying filters updates aggregates showing: total attempts, pass rate %, median total_duration_ms, p95 total_duration_ms, and top 10 failing rules with counts And selecting a rule shows a paginated event list (100 per page) with sortable columns (time, duration, outcome) And filtered queries over up to 500k attempts return within 2 seconds (p95); over up to 5M attempts return within 8 seconds (p95)
CSV/JSON exports for filtered telemetry
Given a filtered telemetry dataset containing ≤ 1,000,000 rows When the user requests an export as CSV or JSON Then the system generates a file containing only rows matching the active filters with columns: attempt_id,timestamp,project_id,uploader_user_id,client_type,asset_type,delivery_profile_id,delivery_profile_version,overall_outcome,total_duration_ms,rule_id,rule_outcome,rule_message,rule_duration_ms (one row per rule outcome) And CSV output is UTF-8 RFC4180 with header row; JSON output is newline-delimited UTF-8 objects; all timestamps are ISO-8601 with Z (UTC) And exports larger than 1,000,000 rows require pagination via next_token; multiple parts can be generated sequentially without data duplication And the export download link expires after 24 hours and each download is recorded in the audit log
Webhooks for repeated failures and override events
Given a workspace has a webhook endpoint configured with a shared secret When ≥ 3 validation failures occur for the same project_id + user_id + rule_id within a 10-minute window, or when a rule override is created, approved, or revoked Then a webhook event is POSTed within 60 seconds with headers X-Event-Type, X-Event-Id, X-Signature (HMAC-SHA256) and a JSON body containing event_id, timestamp, event_type, project_id, user_id, rule_id (if applicable), window_start, window_end, failure_count, delivery_profile_id, delivery_profile_version, latest_attempt_id And the receiver returning 2xx stops retries; non-2xx triggers exponential backoff retries up to 12 attempts within 24 hours; HTTP 410 permanently disables the subscription And event_id ensures at-most-once processing by consumers; duplicate deliveries (due to retry) carry the same event_id
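The `X-Signature` scheme above can be sketched with the standard library: HMAC-SHA256 over the raw JSON body, verified with a constant-time comparison. The secret value is made up for the example.

```python
import hashlib
import hmac
import json

SECRET = b"whsec_demo_only"   # illustrative shared secret

def sign(body: bytes) -> str:
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str) -> bool:
    # compare_digest avoids leaking the signature via timing differences
    return hmac.compare_digest(sign(body), signature)

body = json.dumps({"event_type": "validation.repeated_failure",
                   "failure_count": 3}).encode()
sig = sign(body)
print(verify(body, sig), verify(body + b" ", sig))   # True False
```

Verifying against the raw bytes (not re-serialized JSON) matters, since any whitespace or key-order change alters the signature.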
Telemetry retention and purge controls
Given a workspace admin sets telemetry retention to a value between 30 and 1825 days (default 365) When the nightly retention job runs Then telemetry older than the configured retention window is purged, including per-rule details and file hashes And a retention audit entry records who configured the policy, when, old_value, new_value, and counts purged per project And exports and APIs exclude purged data; expired export links return HTTP 410 And changes to retention take effect within 24 hours of update
Correlation with per-recipient analytics and release readiness
Given a release has per-recipient analytics events and release readiness milestones When the user enables “Validation correlation” on the Release Analytics view Then the system correlates validation telemetry within 14 days prior to each recipient link creation with recipient open rate, time-to-approval, and missed-deadline indicator And the view displays segments (quartiles by pre-send failure rate) with metrics: recipients count, avg validation total_duration_ms, open rate %, avg time-to-approval, missed-deadline %; clicking a segment lists recipients and linked attempt_ids And joins are performed by project_id and release_id; recipients are identified by recipient_id; no plaintext email is displayed And computations for releases with up to 100k recipients complete within 5 seconds (p95)

Version Router

Intelligent routing uses audio fingerprinting plus your naming schema to place incoming files into the right project, track slot, and version (clean, instrumental, radio edit). It flags mismatches, suggests safe renames, and prevents accidental overwrites—keeping your versions spotless.

Requirements

Audio Fingerprint Ingest & Match
"As a mixing engineer uploading files, I want my uploads automatically recognized as the right track and version so that I don’t have to tag or sort them manually."
Description

Upon file ingest, compute an acoustic fingerprint and match against the workspace catalog to determine project, track, and possible version lineage. Use configurable confidence thresholds and deterministic tie-breaking to avoid false positives; fall back to schema parsing when match confidence is below threshold. Persist fingerprint IDs and match metadata to each asset, enable duplicate detection, and support batch backfill for legacy assets. Processing must be performant for large queues, resumable on failure, and operate without sending audio to third-party services to meet privacy requirements. Emits a routing decision event consumed by the Version Router.

Acceptance Criteria
Compute and Persist Fingerprint on Ingest
Given a supported audio file is uploaded to a workspace When ingest starts Then an acoustic fingerprint is computed locally without sending audio to third-party services And fingerprint_id, algorithm_name, algorithm_version, and fingerprint_duration are persisted on the asset And fingerprinting completes in <= min(audio_duration, 180 seconds) And failures are logged with error_code and are retried up to 3 times on retryable errors
Catalog Match with Thresholds and Deterministic Tie-Breaking
Given a computed fingerprint and a workspace catalog with a configured fingerprint match threshold T_fingerprint (default 0.85) When matching is executed Then the highest-scoring candidate with score >= T_fingerprint is selected And if scores tie, the deterministic tie-breaker is applied in order: same project > most recent asset_created_at > lowest asset_id And match_id, match_score, threshold_used, and tie_breaker_applied are stored on the asset match metadata And rerunning the matcher with the same inputs yields the same selected candidate
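The deterministic selection above can be expressed as a single sort key: highest score above threshold, then same project, then most recent, then lowest `asset_id`. Candidate fields here are illustrative.

```python
# Sketch of threshold filtering plus the deterministic tie-breaker order:
# same project > most recent asset_created_at > lowest asset_id.
def pick_candidate(candidates, current_project_id, threshold=0.85):
    eligible = [c for c in candidates if c["score"] >= threshold]
    if not eligible:
        return None   # below threshold: caller falls back to schema parsing
    return min(eligible, key=lambda c: (
        -c["score"],                            # highest score first
        c["project_id"] != current_project_id,  # same project preferred
        -c["asset_created_at"],                 # most recent first
        c["asset_id"],                          # lowest id breaks final ties
    ))

cands = [
    {"asset_id": 7, "score": 0.91, "project_id": "p2", "asset_created_at": 100},
    {"asset_id": 3, "score": 0.91, "project_id": "p1", "asset_created_at": 90},
]
print(pick_candidate(cands, "p1")["asset_id"])   # 3: same project wins the tie
```

Because the key is a pure function of the inputs, rerunning the matcher with the same inputs yields the same candidate, as the criterion requires.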
Fallback to Naming Schema Parsing Below Threshold
Given the highest fingerprint match score is below T_fingerprint And a naming schema is configured with a schema threshold T_schema (default 0.90) When the filename and path are parsed Then project_id, track_id or track_title, and version_type are extracted with confidence >= T_schema to produce a routing decision And if parser confidence < T_schema, the asset is marked Unmatched with reason=low_confidence and no routing is applied And decision_reason is set to schema_fallback when the parser determines the route
Duplicate Detection via Fingerprint Equality
Given a newly ingested asset has a fingerprint equal to an existing asset’s fingerprint within the same workspace When ingest completes Then the asset is flagged duplicate=true with duplicate_of set to the existing asset_id And no existing asset content or metadata is overwritten And a duplicate detection warning is recorded and included in the routing decision event
Routing Decision Event Emission to Version Router
Given a routing decision has been made by fingerprint or schema When processing completes Then an event routing_decision.v1 is published within 30 seconds of decision time And payload includes asset_id, fingerprint_id, decision_reason, matched_project_id, matched_track_id, version_type, match_score, threshold_used, tie_breaker_applied (nullable), duplicate_of (nullable), and timestamp And the event has an idempotency_key and is delivered at-least-once; duplicate deliveries do not cause duplicate downstream actions
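A minimal payload builder for this event might look like the sketch below. Deriving the idempotency key from `asset_id` plus `fingerprint_id` is a simplification: it gives retried deliveries of the same decision an identical key so consumers can deduplicate, though a production system would scope the key to the specific decision.

```python
import hashlib

# Illustrative routing_decision.v1 payload builder; field names follow the
# criterion above, values and key derivation are assumptions.
def routing_event(asset_id, fingerprint_id, **fields):
    payload = {"asset_id": asset_id, "fingerprint_id": fingerprint_id, **fields}
    payload["idempotency_key"] = hashlib.sha256(
        f"{asset_id}:{fingerprint_id}".encode()).hexdigest()[:16]
    return payload

e1 = routing_event("ast_1", "fp_9", decision_reason="fingerprint_match",
                   match_score=0.93)
e2 = routing_event("ast_1", "fp_9", decision_reason="fingerprint_match",
                   match_score=0.93)
print(e1["idempotency_key"] == e2["idempotency_key"])   # True
```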
Batch Backfill for Legacy Assets with Resumability
Given a workspace with legacy assets lacking fingerprints When a backfill job is started Then fingerprints are computed and matching is attempted for all selected assets And job progress is available via API with total, processed, succeeded, failed, and percent_complete fields And the job is resumable after failure, continuing from the last unprocessed asset without reprocessing completed ones And re-running the same job is idempotent and does not create duplicate fingerprints or events And a routing decision event is emitted for each asset that successfully routes
Performance and Privacy Under Large Queue Load
Given a queue of 10,000 assets with average duration <= 5 minutes When processed by a standard 4-worker cluster Then median time from upload to routing decision is <= 2 minutes for files <= 5 minutes And P95 time from upload to routing decision is <= 5 minutes And sustained throughput across the cluster is >= 500 routed assets per hour during the run And if any worker terminates unexpectedly, no more than 1 in-flight job is lost and all lost jobs are retried automatically up to 3 times And outbound network traffic during processing is restricted to allowlisted first-party endpoints; no audio bytes are sent to third-party services
Naming Schema Rules Engine
"As a label admin, I want to define filename patterns that map to projects and versions so that the system routes files correctly from any contributor."
Description

Provide a configurable rules engine that parses filenames using a token- and regex-based schema (e.g., {projectCode}_{trackTitle}_{versionTag}_{mixNo}). Administrators can define multiple patterns with precedence, token synonyms, and validation rules that map parsed components to canonical fields (project, track, version slot, mix number). Include a test sandbox to validate patterns on sample filenames, Unicode and locale-safe parsing, and normalization of separators, casing, and illegal characters. Output a structured parse result supplied to the Version Router and logged for auditability.
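One way to implement the token patterns described above is to translate them into named-group regexes. The token-to-regex table below is illustrative, not IndieVault's schema, and separator normalization (e.g., turning underscores in trackTitle back into spaces) is treated as a later step.

```python
import re

# Illustrative token -> regex fragment table for the sketch below.
TOKEN_RE = {
    "projectCode": r"[A-Z0-9]{2,8}",
    "trackTitle":  r".+?",        # lazy, so fixed tokens anchor the parse
    "versionTag":  r"[a-zA-Z]+",
    "mixNo":       r"\d+",
}

def compile_pattern(pattern: str) -> re.Pattern:
    """Turn '{projectCode}_{trackTitle}_{versionTag}_v{mixNo}' into a regex."""
    regex = re.escape(pattern)
    for token, body in TOKEN_RE.items():
        regex = regex.replace(re.escape("{" + token + "}"),
                              f"(?P<{token}>{body})")
    return re.compile("^" + regex + r"\.\w+$")   # allow any file extension

p = compile_pattern("{projectCode}_{trackTitle}_{versionTag}_v{mixNo}")
m = p.match("IV123_Sunrise_in_Paris_inst_v2.wav")
print(m.group("projectCode"), m.group("trackTitle"),
      m.group("versionTag"), m.group("mixNo"))
# IV123 Sunrise_in_Paris inst 2
```

Escaping the whole pattern first, then substituting the token placeholders, keeps literal separators like `_v` intact while the named groups capture the canonical fields.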

Acceptance Criteria
Pattern Precedence Resolution
Given pattern A "{projectCode}_{trackTitle}_{versionTag}_v{mixNo}" has precedence 1 and pattern B "{projectCode}-{trackTitle}-{versionTag}-m{mixNo}" has precedence 2 When filename "IV123_Sunrise_in_Paris_inst_v2.wav" is parsed Then pattern A is selected, the parse result is {project:"IV123", trackTitle:"Sunrise in Paris", versionSlot:"instrumental", mixNumber:2}, and patternId:"A" is recorded And only one match is applied and alternative matches are logged as alternatives Given no pattern matches When filename "notes.txt" is parsed Then result.status="unmatched", result.errors contains code="NO_PATTERN_MATCH", and no canonical fields are mapped
Token Synonyms Mapping
Given versionTag synonyms {inst→instrumental, instr→instrumental, instrumental→instrumental, clean→clean, radio→radio_edit} with case-insensitive matching When parsing "IV123-Sunrise-INST-m3.wav" using pattern "{projectCode}-{trackTitle}-{versionTag}-m{mixNo}" Then versionSlot="instrumental" and mixNumber=3, and appliedSynonyms includes "INST"→"instrumental" Given mix prefix synonyms {v, m, ver} are allowed via token {mixPrefix}{mixNo} When parsing "IV123_Sunrise_clean_ver12.wav" using pattern "{projectCode}_{trackTitle}_{versionTag}_{mixPrefix}{mixNo}" Then mixNumber=12, mixPrefix normalized to "v", and validation passes Given an unknown versionTag value When parsing "IV123_Sunrise_purified_v1.wav" Then result.errors includes code="UNKNOWN_VERSION_TAG" and result.valid=false
Unicode and Locale-Safe Parsing
Given Unicode filenames are supported with NFC/NFKC normalization and locale-insensitive case folding When parsing "IVÉ123_Árbol – Niño_clean_v01.aiff" using pattern "{projectCode}_{trackTitle}_{versionTag}_v{mixNo}" Then separators are normalized, project="IVÉ123", trackTitle="Árbol – Niño" preserved, versionSlot="clean", and mixNumber=1 Given Turkish casing edge cases When parsing both "IV123_İzmir_clean_v1.wav" and "IV123_IZMIR_CLEAN_V1.WAV" Then both yield identical canonical fields and no character loss occurs Given astral characters in trackTitle When parsing "IV123_夜🌙_inst_v2.wav" Then the parser succeeds, non-mapped characters remain in trackTitle, and result.valid=true
Filename Normalization & Sanitization
Given normalization profile "snake_case + ASCII-safe separators" When parsing "IV123 - Sunrise in/Paris (INST) V02.WAV" Then normalizedFilename="IV123_Sunrise_in_Paris_instrumental_v02.wav", illegal characters are removed or replaced, multiple separators collapse to a single underscore, casing is normalized per profile, and total length ≤ 255 bytes Given Windows-reserved names/characters appear When parsing "IV123_CON_aux:Sunrise*inst?v1.wav" Then normalizedFilename avoids reserved names/characters, result.flags includes "ILLEGAL_CHARS_REMOVED", and a safeRename is generated Given a normalized filename would collide with an existing asset When normalization completes Then a deterministic suffix "_dup01" is appended without altering canonical fields and overwrite is prevented
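The normalization steps above can be sketched roughly as follows. Synonym expansion (e.g., INST to instrumental) and per-profile casing are assumed to happen in later passes; the regexes and reserved-name list here are illustrative.

```python
import re
import unicodedata

# Windows-reserved device names that must not appear as a bare filename stem.
WINDOWS_RESERVED = ({"CON", "PRN", "AUX", "NUL"}
                    | {f"COM{i}" for i in range(1, 10)}
                    | {f"LPT{i}" for i in range(1, 10)})

def normalize_stem(stem: str) -> str:
    stem = unicodedata.normalize("NFC", stem)
    stem = re.sub(r'[()<>:"/\\|?*]', "_", stem)  # replace illegal characters
    stem = re.sub(r"[\s\-]+", "_", stem)         # spaces/dashes -> underscore
    stem = re.sub(r"_+", "_", stem).strip("_")   # collapse repeated separators
    if stem.upper() in WINDOWS_RESERVED:
        stem += "_file"                          # dodge reserved device names
    return stem

print(normalize_stem("IV123 - Sunrise in/Paris (INST) V02"))
# IV123_Sunrise_in_Paris_INST_V02
```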
Validation Rules & Error Reporting
Given rules require projectCode=^[A-Z0-9]{2,8}$, versionTag∈{clean, explicit, instrumental, radio_edit}, mixNo≥1 When parsing "iv123_Sunrise_clean_v0.wav" Then projectCode is normalized to "IV123", result.errors includes code="INVALID_MIX_NO" with value "0", and result.valid=false Given trackTitle must be 1–80 characters post-normalization When parsing "IV123__clean_v1.wav" Then result.errors includes code="MISSING_TRACK_TITLE" and result.valid=false Given a filename maps to a different existing track than its project/track context When parsed Then result.errors includes code="TRACK_MISMATCH", the Version Router receives status="blocked", and a safeRename suggestion is included
Test Sandbox Execution & Export
Given an admin pastes 50 sample filenames and selects pattern set version "v1.4" When the sandbox run is initiated Then 50/50 results render with per-file status in under 2 seconds, and each row shows patternId, parsed fields, validation status, normalizedFilename, and safeRename if applicable Given the admin clicks "Export JSON/CSV" When export completes Then the download contains all parse results and errors, with audit metadata {patternSetId, timestamp, userId} Given the admin toggles "Send to Version Router (dry-run)" When executed Then a dry-run payload with structured results is published to the Version Router endpoint without persisting, and a summary shows counts for matched, invalid, and unmatched
Version Slotting & Variant Normalization
"As an artist manager, I want variants like clean and instrumental automatically slotted under the right track so that my release folders stay organized and complete."
Description

Normalize incoming assets into canonical version slots (e.g., Original, Clean, Instrumental, Radio Edit, TV Mix, Acapella, Extended) using a configurable synonym map and precedence rules derived from fingerprint matches and schema parses. Enforce a single canonical asset per slot per track while supporting multiple mixes via mix numbers and timestamps. Automatically associate routed assets to the correct track entity or create a new track under the target project when none exists. Maintain a version matrix per track for release readiness and expose slotting outcomes via API and UI.

Acceptance Criteria
Route by Synonyms and Fingerprint into Canonical Version Slots
Given a target project with a configured synonym map and canonical slots And an incoming audio file whose fingerprint confidence >= 0.92 matches an existing track in the project And the filename or embedded tags contain a synonym resolvable to a canonical slot When the file is uploaded via API or UI Then the system assigns the asset to the matched track and the resolved canonical slot And sets mix_number to the next sequential integer for that slot starting at 1 And generates a normalized filename using the configured template "{TrackTitle} - {Slot} (Mix {mix_number}).{ext}" And returns a 201 outcome containing track_id, slot, mix_number, canonical (boolean), normalized_filename, and flags (array)
Precedence Resolution on Conflicting Signals (Fingerprint vs Schema)
Given a configured precedence order of sources [audio_fingerprint > synonym_map > naming_schema] And an incoming file whose fingerprint confidence >= 0.92 matches a track's Original audio And the filename contains a term mapped to Clean When the file is processed Then the system resolves the slot as Original per precedence And sets flags to include "name_slot_mismatch" And includes detected_name_slot and resolved_slot in the outcome And does not overwrite any existing assets
Enforce Single Canonical Asset per Slot with Mix Numbering
Given a track with an existing asset in slot Instrumental marked canonical=true with mix_number=1 And a new file routes to the same track and slot When the new file is processed Then the system stores it as a new asset with mix_number = (max existing mix_number + 1) And does not overwrite or delete the existing asset And ensures exactly one asset per slot has canonical=true And leaves the existing canonical asset unchanged unless promote=true is supplied And if promote=true is supplied, sets the new asset canonical=true and flips the previous canonical to false atomically
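The mix-numbering and single-canonical rules above can be sketched over a simple in-memory list per (track, slot); the asset shape is illustrative. A real implementation would make the promotion flip transactional.

```python
# Sketch of adding a mix to a slot: sequential mix numbers, exactly one
# canonical asset per slot, optional promotion of the newcomer.
def add_mix(slot_assets, new_asset, promote=False):
    new_asset["mix_number"] = max(
        (a["mix_number"] for a in slot_assets), default=0) + 1
    new_asset["canonical"] = promote
    if promote:
        for a in slot_assets:
            a["canonical"] = False      # demote the previous canonical
    elif not slot_assets:
        new_asset["canonical"] = True   # first mix in the slot is canonical
    slot_assets.append(new_asset)
    return new_asset

slot = [{"asset_id": 1, "mix_number": 1, "canonical": True}]
added = add_mix(slot, {"asset_id": 2}, promote=False)
print(added["mix_number"], sum(a["canonical"] for a in slot))   # 2 1
```

Note that nothing is ever overwritten or deleted: every incoming file becomes a new mix, and only the `canonical` flag moves.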
Auto-Associate to Existing Track or Create New Track Under Project
Given a target project context for the upload And no existing track fingerprint match at confidence >= 0.92 When the file is processed Then the system creates a new track under the target project with title parsed from filename or tags per schema And assigns the asset to the Original slot unless a valid synonym resolves a different slot And returns the new track_id in the outcome And ensures subsequent uploads with matching fingerprint associate to this track
Expose Slotting Outcome via API and UI with Version Matrix Update
Given a successfully routed asset When the client calls GET /tracks/{track_id}/version-matrix Then the response includes one row per canonical slot with fields: slot, has_asset (boolean), mix_count (integer), canonical_asset_id, latest_upload_at And the row for the resolved slot reflects the new asset and updated mix_count When the client calls GET /assets/{asset_id}/slotting-outcome Then the response includes: track_id, slot, mix_number, canonical (boolean), flags (array), normalized_filename, created_at And the UI versions panel displays the same values within 5 seconds of upload completion
Real-Time Application of Configurable Synonym Map
Given an admin updates the synonym map to add TV -> TV Mix and publishes the configuration When a new file containing TV in its name is uploaded after the change Then the system resolves the slot as TV Mix without service restart And logs the configuration version used in the outcome And prior assets remain unchanged; no retroactive re-slotting occurs unless a reprocess action is triggered
Mismatch Flagging and Safe Rename Suggestion Without Overwrite
Given an incoming file whose detected_name_slot differs from the resolved_slot When the file is processed Then the system sets flags to include "slot_conflict" And includes a safe_rename_suggestion that conforms to the normalized filename template and resolved_slot And prevents overwriting any existing filenames by appending a unique suffix when necessary And displays the suggestion in UI and includes it in the API outcome
Conflict Detection & Overwrite Guardrails
"As a project owner, I want the system to prevent accidental overwrites and highlight conflicts so that we never lose the correct master."
Description

Detect routing conflicts such as an incoming file targeting an occupied version slot or matching an existing fingerprint with a different mix number. Block destructive overwrites by default and require explicit replace actions with reason logging. Provide automated comparisons (duration, loudness, fingerprint similarity) and visual diffs to help decide between keep/replace/new mix. Preserve lineage with immutable history, soft-archive replaced assets, and emit audit events for compliance.

Acceptance Criteria
Block Overwrite When Version Slot Occupied
Given a project and track where a version slot (e.g., Clean v3) is already populated And the router receives an incoming file that resolves to that same slot When routing is attempted Then the system blocks automatic overwrite by default And displays a Conflict: Slot Occupied state with Replace and Create New Mix actions And does not persist the incoming file under the occupied key And creates a conflict record with conflictId, projectId, trackId, versionSlot, timestamp, detectedBy="slot_occupancy" And quarantines the incoming file for resolution without exposing it to downstream links And the active asset remains unchanged for all users until the conflict is explicitly resolved
Fingerprint Match With Different Mix Number
Given an existing asset A with audio fingerprint fpX and mixNumber=3 And an incoming file F whose fingerprint similarity to A is >= 0.98 and filename indicates mixNumber=4 When the router evaluates F Then the system flags Fingerprint Duplicate / Different Mix Number And prevents automatic overwrite of A And displays metrics: duration for A and F with delta (ms), integrated loudness (LUFS) with delta (LU), true peak (dBTP) with delta, fingerprintSimilarity [0..1] And offers actions: Replace A, Keep Both (create Mix 4), or Cancel And suggests a safe, non-colliding rename for the Keep Both path And logs the conflict and selected action intent prior to execution
Explicit Replace With Reason and Guardrails
Given a conflict state requiring a decision When a user selects Replace Then the UI requires a non-empty reason (minimum 10 characters) and optionally a ticket/reference ID And the Replace confirmation remains disabled until validation passes And on confirmation, the system performs an atomic swap: soft-archives the prior asset, promotes the incoming asset, and updates references And on any failure, the original asset remains active and no partial state is committed And the system records userId, timestamp, reason, previousAssetId, newAssetId in immutable history And emits an audit event asset.replaced with correlationId and the provided reason
Automated Comparisons and Visual Diffs
Given an existing asset A and an incoming file F in conflict When the comparison panel is opened Then the system computes and displays within 2 seconds for files ≤ 10 minutes: duration for A and F (ms) and delta (ms); integrated loudness (LUFS) and delta (LU); true peak (dBTP) and delta; fingerprint similarity score [0..1] And renders aligned waveform and spectrogram previews And provides A/B audition with instant switch gap < 100 ms and loop region selection And decision buttons remain disabled until metrics are available or the user explicitly chooses a Compare Later bypass
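The delta arithmetic in the comparison panel is simple once the metrics exist; decoding audio to obtain them is out of scope here. A minimal sketch, with the metric-dict shape assumed:

```python
# Sketch: build the comparison rows (existing, incoming, delta) for the
# metrics named in the criteria. Inputs are pre-computed metric dicts.
def compare_assets(a, f, similarity):
    def row(key, digits=None):
        delta = f[key] - a[key]
        if digits is not None:
            delta = round(delta, digits)  # avoid float noise in LU/dBTP deltas
        return {"existing": a[key], "incoming": f[key], "delta": delta}
    return {
        "duration_ms": row("duration_ms"),
        "loudness_lufs": row("lufs", 2),
        "true_peak_dbtp": row("dbtp", 2),
        "fingerprint_similarity": similarity,  # already in [0..1]
    }
```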
Immutable History and Soft-Archive on Replace
Given asset A is replaced by asset F via an approved Replace action When viewing A in history Then A is marked Archived, read-only, with fields: replacedBy, replacedAt (ISO8601), replacedByUserId, and reason And A remains retrievable and downloadable to authorized roles And lineage view shows A → F linkage with version metadata (e.g., mix numbers, tags) And a Restore action creates a new active asset derived from A without deleting F, preserving chronology And all operations append to an immutable event log with monotonic sequence numbers
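The "immutable event log with monotonic sequence numbers" can be sketched as an append-only structure; a production store would enforce this at the database layer rather than in memory:

```python
# Sketch: append-only lineage log. Entries are never mutated or removed,
# and sequence numbers increase strictly by one. In-memory for illustration.
class EventLog:
    def __init__(self):
        self._events = []

    def append(self, event_type, payload):
        entry = {"seq": len(self._events) + 1, "type": event_type, **payload}
        self._events.append(entry)
        return entry

    def events(self):
        return tuple(self._events)  # read-only view for callers
```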
Audit Events for Conflict Lifecycle
Given conflict detection, user review, and resolution steps occur When each step is executed Then the system emits audit events: asset.conflict.detected, asset.conflict.viewed, asset.replaced, asset.kept, asset.newMixCreated, asset.restored And each event includes: projectId, trackId, previousAssetId (nullable), newAssetId (nullable), userId (nullable for automated), action, reason (nullable), metrics (for compare), timestamp (ISO8601), correlationId, idempotencyKey And events are delivered to the audit bus within 2 seconds with at-least-once semantics And failed deliveries retry with exponential backoff up to 5 attempts and surface an operational alert on final failure
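The retry policy above (exponential backoff, capped at 5 attempts, alert on final failure) can be sketched generically; the delivery callable and base delay are assumptions:

```python
import time

# Sketch: at-least-once delivery with exponential backoff, capped at
# max_attempts. `deliver` raises on failure; `sleep` is injectable so
# tests run without real delays. Returns the successful attempt number.
def deliver_with_backoff(deliver, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    for attempt in range(1, max_attempts + 1):
        try:
            deliver()
            return attempt
        except Exception:
            if attempt == max_attempts:
                raise  # caller surfaces the operational alert
            sleep(base_delay * (2 ** (attempt - 1)))  # 0.5s, 1s, 2s, 4s
```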
Bulk Upload Conflict Resolution
Given a bulk upload of N files resulting in M conflicts When routing completes Then the system presents a batch conflict queue with per-item summaries (slot, fingerprint similarity, metrics) and actions And supports Apply to All of Same Conflict Type with an explicit confirmation And prevents destructive overwrites unless each affected item has an explicit Replace with a captured reason And generates an exportable JSON report listing for each file: conflictId, decision, reason (if any), resulting assetIds, and timestamps
Mismatch Alerting & Review Queue
"As a QA reviewer, I want a queue of routing mismatches with suggested fixes so that I can resolve issues quickly and keep releases on schedule."
Description

When fingerprint results and schema parsing disagree or confidence is low, create a review task with suggested routing, confidence scores, and rationale. Surface tasks in a dedicated queue with filters, batch actions, and one-click accept/correct. Notify designated reviewers via in-app and email alerts, track SLAs, and record resolutions for model/rules tuning. Provide analytics on mismatch rates and top error causes to continuously improve routing accuracy.

Acceptance Criteria
Create Review Task on Routing Mismatch or Low Confidence
Given an audio file is uploaded to IndieVault And the Version Router completes fingerprinting and schema parsing When the fingerprint result and naming schema disagree OR the top suggestion confidence is below the configurable threshold Then a review task is created with status "Open" And the task is linked to the file ID and uploader And the task includes at least one suggested routing target And duplicate review tasks for the same file are prevented within a 24h window via idempotency key And the file is not moved or renamed until a reviewer accepts or corrects the routing
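The 24-hour idempotency window can be sketched as a keyed first-seen check; the key format and in-memory store are assumptions (a real system would share this state, e.g. in a database or cache):

```python
from datetime import datetime, timedelta, timezone

# Sketch: suppress duplicate review tasks for the same file within 24h.
# The clock is injectable so the window is testable deterministically.
class ReviewTaskDeduper:
    WINDOW = timedelta(hours=24)

    def __init__(self):
        self._seen = {}  # idempotency key -> first-seen timestamp

    def should_create(self, file_id, now=None):
        now = now or datetime.now(timezone.utc)
        key = f"review:{file_id}"          # assumed key format
        first = self._seen.get(key)
        if first and now - first < self.WINDOW:
            return False                   # duplicate within the window
        self._seen[key] = now
        return True
```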
Review Task Details Completeness
Given an Open review task exists When a reviewer opens the task details view Then the view displays: file waveform/preview, filename, file size, checksum, uploader, createdAt And shows top 3 routing suggestions with confidence percentages and rank And shows mismatch reasons and rationale text (e.g., token conflicts, metadata variance) And shows parsed schema tokens and fingerprint match IDs And provides actions: Accept [selected suggestion], Correct Manually, Assign, Change Priority And all fields above are present and non-empty except where data is truly unavailable (explicitly labeled as N/A)
Review Queue with Filters, Sorting, and Batch Actions
Given multiple review tasks exist across projects When a reviewer opens the Review Queue Then tasks default to filter Status=Open and sort by Age (oldest first) And the reviewer can filter by Project, Confidence Range, Mismatch Reason, Uploader, Status, Priority, and Date Range And the reviewer can sort by Age, Confidence, Project, and Priority And the reviewer can multi-select tasks and apply batch Accept (top suggestion), Assign, and Change Priority And each batch operation shows a success/failure summary with per-item results And the queue supports pagination or infinite scroll to at least 1,000 tasks without UI time-to-interactive exceeding 2 seconds on a standard laptop
Notifications to Designated Reviewers
Given project-level notification rules designate reviewers When a new review task is created Then each designated reviewer with notifications enabled receives an in-app alert within 5 seconds And each designated reviewer receives an email within 60 seconds containing: file name, project, confidence, suggested routing, and task link And a reviewer is not notified more than once per task within a 10-minute deduplication window And if the task priority is escalated or the SLA breaches, an escalation notification is sent to the next-tier reviewers following the same timing guarantees
SLA Tracking and Escalation
Given SLA targets exist for priorities P1, P2, and P3 When a review task is created Then a dueAt timestamp is set based on its priority SLA And the task detail shows remaining time and SLA status (On Track, At Risk <15% remaining, Breached) And when the current time exceeds dueAt, the task is marked Breached and escalated per escalation policy And SLA outcome (Met/Breached) and resolution time are persisted for reporting
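The three SLA states map to a small fraction-remaining calculation. A sketch under the stated thresholds (At Risk below 15% of the window remaining):

```python
from datetime import datetime, timedelta, timezone

# Sketch: classify a task's SLA state from createdAt, dueAt, and now.
def sla_status(created_at, due_at, now):
    if now >= due_at:
        return "Breached"
    remaining = (due_at - now) / (due_at - created_at)  # fraction of window left
    return "At Risk" if remaining < 0.15 else "On Track"
```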
One-Click Accept/Correct Workflow
Given an Open review task with suggestions When a reviewer clicks Accept on a suggestion Then the file is routed/moved to the selected project/track/version And a safe rename is applied if needed to avoid overwrite And checksum verification confirms the move; on failure the task remains Open with an error note And the task is marked Resolved with resolution type=AcceptedSuggestion and audit log recorded When a reviewer clicks Correct Manually and saves a new target Then the file is routed accordingly, conflicts are preflight-checked, and the task is marked Resolved with resolution type=Corrected
Resolution Logging and Analytics Readiness
Given any review task is resolved When the resolution is saved Then the system records: selected route, whether suggestion was accepted or corrected, final confidence, mismatch reason, root cause (from controlled list), reviewer, timestamps, and notes (optional) And the before/after routing and filename are stored for model/rules tuning And the resolution record is available to the analytics service within 5 minutes And personal data is stored and surfaced according to the platform’s data retention and privacy policies
Safe Rename Suggestions & Bulk Apply
"As a catalog manager, I want safe, policy-compliant rename suggestions I can apply in bulk so that our asset names remain consistent across the catalog."
Description

Generate policy-compliant, deduplicated filename suggestions based on the workspace naming schema and resolved routing (including normalized version tags and mix numbers). Present a change list with dry-run validation, then apply renames atomically across storage and metadata without breaking existing links. Support bulk operations, maintain an original-name history for traceability, enforce cross-platform safe characters, and ensure idempotent operations for re-runs.

Acceptance Criteria
Generate Policy‑Compliant Rename Suggestions
Given a batch of routed assets with project, track slot, version tag, and mix number resolved And a workspace naming schema defining token order, separators, case, and mix number width And a normalization map for version tags (e.g., inst→Instrumental, clean→Clean, radio→Radio Edit) When rename suggestions are generated Then each suggested filename matches the schema’s validation regex And version tags are normalized to the schema’s canonical labels And mix numbers are left‑padded to the schema‑defined width And the original file extension is preserved And suggestion generation for up to 5,000 files completes within 60 seconds in a standard environment
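The normalization and padding rules can be sketched directly; the token order and separator below are assumptions, while the normalization map entries come from the criterion itself:

```python
import os

# Sketch: generate a schema-compliant filename suggestion. The map is the
# one from the criteria; token order "Artist - Title - Tag - Mix NN" and
# the " - " separator are illustrative assumptions.
NORMALIZE = {"inst": "Instrumental", "clean": "Clean", "radio": "Radio Edit"}

def suggest_name(artist, title, tag, mix_number, original_name,
                 width=2, sep=" - "):
    canonical = NORMALIZE.get(tag.lower(), tag)        # normalize version tag
    ext = os.path.splitext(original_name)[1].lower()   # preserve extension
    return f"{artist}{sep}{title}{sep}{canonical}{sep}Mix {mix_number:0{width}d}{ext}"
```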
Dry‑Run Validation and Change List
Given a user initiates a dry‑run for a batch of proposed renames When the system computes changes Then a change list is returned containing for each item: current path, suggested path, validation status (Valid|Collision|Invalid|TooLong|PermissionDenied), and reason codes And the list is sorted deterministically by current path And an overall readiness flag is provided and is true only if all items are Valid And no storage, metadata, links, or indexes are modified during dry‑run
Deduplication and Collision Resolution
Given suggested names conflict within the batch or with existing items in target folders When conflicts are detected Then deterministic, schema‑defined suffixing (e.g., -2, -3 or (2), (3)) is applied to produce unique names And the final set of suggested names contains no duplicates And the suffixing keeps names within length limits and remains stable across re‑runs with the same inputs And affected items are marked with reason code DEDUPLICATED in the change list
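Determinism across re-runs falls out of processing the batch in a fixed order. A sketch using the `-2, -3` suffix style from the criterion; the `(file_id, suggestion)` input shape is an assumption:

```python
import os

# Sketch: deduplicate a batch of suggested names against each other and
# against existing names. Sorted processing order makes the suffixes
# stable across re-runs with the same inputs.
def dedupe_batch(items, existing):
    """items: list of (file_id, suggested_name); returns {file_id: final_name}."""
    taken = set(existing)
    result = {}
    for file_id, suggested in sorted(items):
        base, ext = os.path.splitext(suggested)
        name, n = suggested, 1
        while name in taken:
            n += 1
            name = f"{base}-{n}{ext}"   # -2, -3, ... until unique
        taken.add(name)
        result[file_id] = name
    return result
```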
Cross‑Platform Safe Characters and Length Enforcement
Given source names may contain reserved or unsafe characters and long segments When generating suggestions Then reserved characters <>:"/\|?* and control characters are removed or replaced per schema And leading/trailing spaces and trailing dots are stripped And Windows reserved device names (CON, PRN, AUX, NUL, COM1–COM9, LPT1–LPT9) are avoided via safe transformation And Unicode is normalized to NFC; if ASCII‑safe mode is enabled, diacritics are stripped with safe fallbacks And filename length ≤ 255 bytes and full path length ≤ 240 bytes (UTF‑8) And if truncation is required, a deterministic hash suffix is applied to preserve uniqueness and the item is flagged with reason code TRUNCATED
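Most of the rules above compose into one sanitization pass. A sketch under the stated limits; the replacement character, hash length, and truncation details are assumptions:

```python
import hashlib
import re
import unicodedata

# Sketch: cross-platform filename sanitization per the criteria. Reserved
# characters become "_", Windows device names are prefixed, Unicode is
# NFC-normalized, and overlong names get a deterministic hash suffix.
WINDOWS_RESERVED = {"CON", "PRN", "AUX", "NUL",
                    *(f"COM{i}" for i in range(1, 10)),
                    *(f"LPT{i}" for i in range(1, 10))}

def sanitize(name, max_bytes=255):
    name = unicodedata.normalize("NFC", name)
    name = re.sub(r'[<>:"/\\|?*\x00-\x1f]', "_", name)  # reserved + control chars
    name = name.strip(" ").rstrip(" .")                 # trailing spaces/dots
    stem, _, ext = name.rpartition(".")
    if (stem or name).upper() in WINDOWS_RESERVED:
        name = "_" + name                               # safe transformation
    if len(name.encode("utf-8")) > max_bytes:
        digest = hashlib.sha256(name.encode("utf-8")).hexdigest()[:8]
        suffix = "-" + digest + ("." + ext if stem else "")
        budget = max_bytes - len(suffix.encode("utf-8"))
        name = name.encode("utf-8")[:budget].decode("utf-8", "ignore") + suffix
    return name
```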
Atomic Bulk Apply Without Breaking Links
Given a user confirms apply for a batch where overall readiness is true When the system applies renames Then storage keys/paths, database metadata, and search indexes are updated in a single logical transaction per batch And either all items are renamed successfully or all changes are rolled back with no partial effects And existing public/review/watermarked links and API references continue to resolve to the renamed assets within 60 seconds, with analytics preserved And no rename overwrites an existing file; unexpected target conflicts abort the batch before any changes occur And the response includes batch id, counts (renamed, skipped), and duration
Original‑Name History and Revert
Given renames have been applied When inspecting an asset’s history Then the system shows immutable entries capturing previous name, new name, timestamp, actor, batch id, and reason codes for each rename And history is queryable via UI and API and retained indefinitely And a revert action is available per item and per batch that restores the previous name atomically while respecting current collision rules And reverted items are appended to history with REVERT reason code
Idempotent Re‑Runs and No‑Op Applies
Given the same inputs (batch contents, schema, normalization map) are processed again with the same idempotency key When suggestions are regenerated Then outputs are byte‑for‑byte identical to the prior run And when apply is invoked on an already‑applied or no‑change batch Then the operation is a no‑op: returns 200 with 0 changes and does not create new history entries or modify timestamps And partial overlaps only process items whose current names differ from the suggested names

QR Drop

Generate a scannable QR for any scoped link so contributors can upload from mobile with the same gates, expirations, and metadata prompts. Perfect for studio sessions and on‑location shoots: frictionless handoffs with fewer delays and fewer lost files.

Requirements

Scoped QR Generation
"As a producer managing a fast-paced session, I want to generate a scannable QR that’s bound to the right project and intake rules so that contributors can upload from their phones without misplacing files or breaking naming/format standards."
Description

Enable creation of a scannable QR tied to a specific IndieVault Drop configuration (project/release scope, destination folder, allowed file types, metadata prompts, expiration, and usage limits). Provide downloadable QR assets in PNG/SVG/PDF with adjustable size, color, error-correction level, and optional branded logo overlay that preserves scan reliability. Each QR resolves to a short, human-readable URL as a fallback for manual entry. Support single-use and multi-use modes, regeneration, revocation, and extension of validity without breaking analytics continuity. Log creation, scans, and lifecycle events for auditability and surface the QR reference within the release’s asset intake panel.

Acceptance Criteria
Scoped QR Creation with Drop Configuration Fidelity
Given an authenticated manager selects a project/release and opens QR Drop setup When they configure destination folder, allowed file types, required metadata prompts, mode (single-use or multi-use), expiration (RFC3339), and usage limit (integer ≥ 1 for multi-use) Then the system validates all inputs and displays inline errors for any missing/invalid fields And upon save, a QR reference ID is created and status is set to Active And scanning the QR opens the upload page that enforces the configured allowed file types and displays the configured metadata prompts And the upload flow blocks submission if required prompts are incomplete or file types are disallowed And submissions after the expiration time are blocked with an Expired message
Downloadable QR Assets with Adjustable Parameters
Given a QR reference exists When the user exports the QR as PNG, SVG, or PDF with selected size (256–2048 px or 10–200 mm), foreground/background colors, and error-correction level (L/M/Q/H) Then the exported file matches the requested format and size And SVG/PDF are vector (no rasterized QR paths) And PNG renders at exact pixel dimensions with a 4-module quiet zone And the system warns and blocks export if foreground/background contrast ratio < 2.0:1 And the encoded content remains identical across all exported formats for the same QR reference
Branded Logo Overlay Preserves Scan Reliability
Given a user uploads a logo to overlay on the QR When the logo is positioned and scaled within limits (≤15% of QR area, does not cover finder or timing patterns, preserves 4-module quiet zone) Then the system allows export and embeds the logo And the QR must successfully scan on the device test matrix (iOS and Android latest two major versions) at sizes ≥ 30 mm and ≥ 256 px in 20/20 attempts per device And if limits would be violated, the system prevents export and displays a clear corrective message
Short Human-Readable URL Fallback
Given a QR reference is created Then a short, human-readable URL is generated (length ≤ 24 chars after domain, charset [a-z0-9-]) and displayed with a Copy action And the short URL resolves over HTTPS to the same gated upload flow as the QR scan And the short URL is printed under the QR preview for inclusion in exports And if the QR is expired or revoked, the short URL returns a friendly status page explaining the state and does not allow upload
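Slug generation under the stated constraints (charset `[a-z0-9-]`, ≤ 24 characters after the domain) can be sketched with a cryptographic random source to resist guessing; the `qr-` prefix and 10-character body are assumptions:

```python
import secrets
import string

# Sketch: generate a short, human-readable, hard-to-enumerate slug.
SLUG_ALPHABET = string.ascii_lowercase + string.digits  # charset [a-z0-9]

def make_slug(length=10):
    body = "".join(secrets.choice(SLUG_ALPHABET) for _ in range(length))
    return f"qr-{body}"  # total length 13, well under the 24-char cap
```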
Single-Use and Multi-Use Enforcement with Usage Limits
Given a QR is set to single-use mode When the first contributor completes a successful submission Then the QR is marked Consumed and subsequent scans/URL visits show a Consumed message and block upload Given a QR is set to multi-use with usage limit N When N successful submissions have completed Then further scans/visits are blocked with a Limit Reached message And concurrency is enforced so that simultaneous submissions beyond the limit or after expiration are rejected server-side
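Server-side enforcement under concurrency means the check and the increment must be atomic. A single-process sketch using a lock; a multi-node deployment would need the same guarantee from its datastore:

```python
import threading
from datetime import datetime, timezone

# Sketch: atomic check-and-increment so simultaneous submissions can
# never exceed the usage limit or land after expiry. Single-use mode is
# just limit=1 under this model.
class DropGate:
    def __init__(self, limit, expires_at):
        self.limit = limit
        self.expires_at = expires_at
        self.count = 0
        self._lock = threading.Lock()

    def try_accept(self, now=None):
        now = now or datetime.now(timezone.utc)
        with self._lock:
            if now >= self.expires_at:
                return "expired"
            if self.count >= self.limit:
                return "limit_reached"
            self.count += 1
            return "accepted"
```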
Regeneration, Revocation, and Validity Extension with Analytics Continuity
Given a QR reference exists When the user regenerates the QR asset (visual re-render or styling changes) Then the short URL and QR reference ID remain unchanged and previously distributed codes continue to work And analytics (scans, submissions) continue to aggregate under the same reference ID When the user extends the expiration time Then the new expiration is enforced immediately and analytics history remains intact When the user revokes the QR Then scans/visits show a Revoked message, uploads are blocked, and analytics history remains viewable with a logged Revoked event
Audit Logging and Intake Panel Surfacing
Given a QR is created for a release Then the system logs events for creation, exports, scans (timestamp, device/os where available, IP truncated), submission successes/failures, limit reached, expiration, extension, regeneration, and revocation And the release’s asset intake panel displays the QR reference, mode, status (Active/Consumed/Expired/Revoked), expiration, short URL, usage stats (# scans, # submissions, last scan time), and actions (Copy URL, Download, Extend, Regenerate, Revoke, View Logs) And the panel updates within 5 seconds of new scans/submissions without page reload
Mobile Upload Landing (PWA)
"As an on-location videographer, I want a fast, mobile-friendly page after scanning the QR so that I can upload clips immediately without creating an account or losing progress if the connection drops."
Description

Deliver a mobile-first, lightweight upload experience that opens from the QR scan with zero sign-in friction. Support camera roll/file picker and direct capture (photo/video/audio), multi-file selection, large-file resumable uploads (multipart/tus), real-time progress, and safe retry on flaky connections. Show scoped instructions, required metadata prompts, remaining time before expiry, and allowed file types/limits. Ensure accessibility compliance, responsive design across iOS/Android browsers, and fast load via PWA caching. Provide clear success confirmation with a shareable receipt code and a link to view submission guidelines. Degrade gracefully for older devices and low bandwidth.

Acceptance Criteria
QR scan opens mobile upload landing without sign-in
Given a valid, active QR Drop link scoped to an upload target And the device has a modern mobile browser with network connectivity When the QR is scanned and the link is opened Then the PWA upload landing loads its interactive shell within 2 seconds on 4G and 5 seconds on 3G with cached assets And no authentication prompt or account creation flow is shown And the scope title, instructions, allowed file types, and remaining time are displayed above the uploader And the link is bound to the scope's permissions so only upload actions are available
Expiry countdown and gated access
Given a QR Drop link with an expiry timestamp When the landing page renders Then a live countdown shows remaining time with minute-level precision and updates every 1 second And if time remaining <= 10 minutes, a visual warning state is displayed And if the link is expired or revoked, the upload UI is hidden, a "Link expired" message is shown, and a contact/help link is provided And no uploads can be initiated after expiry, including queued files
Required metadata prompts before upload
Given the scope defines required metadata fields and help text When a contributor opens the landing page Then all required prompts are displayed before the file chooser And the user cannot start uploads until all required fields validate And validation errors are shown inline and announced to screen readers And entered metadata persists locally and is restored after page reload or reconnection within 7 days And submitted metadata is attached per file and included in server payloads
Capture, picker, and multi-file selection with type/size limits
Given the device supports camera/mic access When the user chooses "Capture" for photo/video/audio Then the native capture intent opens and returns media to the uploader And when the user chooses "Choose from device", the file picker opens with multi-select enabled And only scope-allowed file types are selectable and validated; disallowed types show a clear error and are not queued And per-file size and total batch limits are enforced with visible counters And if permissions are denied or capture unsupported, the flow falls back to the file picker without blocking
Resumable large-file uploads with real-time progress and safe retry
Given one or more files up to the scope's max size with cumulative size up to the scope's batch limit When uploads begin over variable or flaky connections Then each file uploads in chunks using resumable protocol (tus or multipart) with up to 3 concurrent files And per-file progress, speed, and estimated time remaining are displayed and updated at least every 500 ms And if the network drops or the tab is backgrounded, uploads auto-pause and resume when connectivity returns without data loss And manual "Retry" retries failed files with exponential backoff and without creating duplicates (deduped by server upload ID) And partial uploads older than 24 hours are auto-cleaned up and surfaced as retriable
Successful submission confirmation with receipt and guidelines link
Given all queued files and required metadata have been uploaded successfully When the batch completes Then a success screen displays the number of files received and a unique 8+ character alphanumeric receipt code And a "Copy" and native "Share" action are available; if Web Share API is unavailable, a fallback modal with the code appears And a persistent link to view submission guidelines is visible and opens in a new tab And the receipt code and submitted summary are stored locally for 7 days for later reference
Accessibility, responsiveness, performance, and graceful degradation
Given the upload landing is accessed on common mobile devices When evaluated Then the page meets WCAG 2.1 AA for focus order, labels, roles, and color contrast (>= 4.5:1 normal text) And all interactive elements are operable via keyboard and are announced correctly by VoiceOver and TalkBack And layout adapts for viewports 320px–828px without horizontal scroll; touch targets are >= 44x44 dp And Lighthouse scores are >= 90 for Performance and Accessibility on mid-tier mobile devices And core PWA shell is precached and stays under 200 KB gzipped; offline shows a queued-state message and allows retry on reconnection And on older/low-capability browsers (no Service Worker, no MediaRecorder), the app serves a lightweight HTML uploader with file input, size/type validation, and clear instructions
Gated Access Controls
"As an artist manager, I want to enforce passcodes and expirations on QR Drops so that only invited contributors can upload within a safe window and we reduce spam, leaks, and misuse."
Description

Apply the same gates as standard IndieVault intake: time-based expiration, maximum submissions, optional passcode, hCaptcha/reCAPTCHA, rate limiting, and IP allow/deny lists. Support optional email verification or SMS magic link for identity binding, and require consent to configured terms (e.g., contributor agreement). Use short-lived, scoped tokens embedded in the QR URL to prevent guessing/enumeration, enforce TLS, and invalidate tokens on revocation. Provide admin controls to pause, extend, clone, or terminate a QR Drop. Record gate outcomes (passes/fails) for security analytics without collecting unnecessary PII.

Acceptance Criteria
Scoped Token and TLS Enforcement
Given a QR Drop URL is accessed over HTTP When the request reaches the server Then the request is rejected with HTTP 426 Upgrade Required and no redirect is issued, and no token appears in any Location header Given a QR Drop URL contains a scoped token with an embedded expiry and scope When the current time is before the token expiry and the drop is active Then the upload form is rendered and the token scope permits only the configured actions Given a QR Drop token is unknown, expired, or revoked When it is presented to any endpoint Then the response is HTTP 404 Not Found with a generic message and no indication of which condition applied Given an admin clones a QR Drop When the clone is created Then a new scoped token distinct from the source is generated and the source token(s) remain unchanged unless explicitly terminated
Time-Based Expiration Gate
Given a QR Drop with an expiration timestamp T When a contributor accesses the link after T Then uploads are blocked, the UI displays an expiration message, and the API returns HTTP 410 Gone for upload attempts Given a QR Drop with an expiration timestamp T When a contributor accesses the link before T Then the upload flow is allowed to proceed Given an admin extends the expiration to T2 > T When a contributor accesses the link after T but before T2 Then the upload flow proceeds without requiring a new link
Maximum Submission Cap Enforcement
Given a QR Drop has a maximum submissions cap of N When the Nth upload is accepted Then the system atomically increments the count and any concurrent N+1th attempt is rejected with HTTP 403 Forbidden and a cap-reached message Given a QR Drop has a maximum submissions cap of N When the submissions count is less than N Then additional uploads are accepted normally and the count is updated accurately
Identity Binding, Passcode, and Terms Consent Gates
Given passcode protection is enabled for a QR Drop When a contributor opens the link without providing the correct passcode Then access to the upload form is blocked and an incorrect passcode message is shown Given email verification is enabled for a QR Drop When a contributor verifies via code or magic link within the configured TTL Then the session is marked as verified for that identity and the upload form becomes accessible Given SMS verification is enabled for a QR Drop When a contributor completes the SMS magic link or code within the configured TTL Then the session is marked as verified for that identity and the upload form becomes accessible Given terms consent is required for a QR Drop When a contributor reaches the final step before upload Then the upload action is disabled until the contributor checks consent, and the stored submission includes a timestamp and the exact terms version identifier
Abuse Mitigation: CAPTCHA, Rate Limiting, and IP Lists
Given CAPTCHA is enabled When a contributor fails the challenge or the provider asserts invalid Then access is denied with HTTP 403 Forbidden and the upload does not start Given CAPTCHA provider is unreachable When a contributor attempts access Then the system falls back to the configured alternate provider or blocks access with a service-unavailable message according to configuration Given rate limiting is configured per IP and per token When requests exceed the configured thresholds Then subsequent requests receive HTTP 429 Too Many Requests and are not processed Given an IP deny list contains the requester’s IP When the link is accessed Then the request is blocked with HTTP 403 Forbidden regardless of other gates Given an IP allow list is configured and non-empty When the requester’s IP is not on the allow list Then the request is blocked with HTTP 403 Forbidden Given both allow and deny lists are configured When the requester’s IP appears on both Then the deny rule takes precedence and the request is blocked
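The list-precedence rules in the last three clauses reduce to a short decision function. A sketch with plain string IPs; a real gate would match CIDR ranges:

```python
# Sketch: IP allow/deny evaluation per the criteria — deny always wins,
# and a non-empty allow list blocks any IP not on it.
def ip_gate(ip, allow_list, deny_list):
    if ip in deny_list:
        return (403, "denied")          # deny takes precedence
    if allow_list and ip not in allow_list:
        return (403, "not_allowed")     # allow list is exclusive when set
    return (200, "pass")
```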
Admin Controls: Pause, Extend, Clone, Terminate
Given an admin pauses a QR Drop When a contributor accesses the link Then the UI shows a temporarily unavailable message and uploads are blocked without altering counters Given an admin extends a QR Drop expiration When the change is saved Then the new expiration applies immediately to subsequent requests Given an admin clones a QR Drop When the clone is created Then the clone copies gate configurations but resets counters, generates a new token, and does not carry over submissions Given an admin terminates a QR Drop When a contributor accesses the link thereafter Then the response is HTTP 404 Not Found with a generic message and all tokens associated with the drop are invalid
Security Analytics: Gate Outcomes With Minimal PII
Given any gate (expiration, cap, passcode, CAPTCHA, rate limit, IP list, identity, consent) is evaluated When a contributor attempts access or upload Then an analytics event is recorded with timestamp, gate name, outcome (pass|fail), and reason code, and it excludes raw email, raw phone numbers, and full IP addresses Given analytics are stored for gate outcomes When reviewing stored events Then only non-sensitive identifiers are present (e.g., token ID, truncated or hashed IP, terms version), and any PII used for verification is discarded after its configured TTL
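A minimal-PII event record can be sketched as follows; the field names, /24 truncation, and salted-hash scheme are illustrative assumptions consistent with the "truncated or hashed IP" requirement:

```python
import hashlib

# Sketch: gate-outcome analytics event with minimized PII. The full IP
# never appears: only a /24-truncated form and a salted hash are stored,
# and there are no raw email/phone fields on the record at all.
def gate_event(gate, outcome, reason, ip, token_id, salt="rotate-me"):
    truncated = ".".join(ip.split(".")[:3]) + ".0"   # IPv4 /24 truncation
    hashed = hashlib.sha256((salt + ip).encode()).hexdigest()[:16]
    return {
        "gate": gate,
        "outcome": outcome,   # "pass" | "fail"
        "reason": reason,
        "token_id": token_id,
        "ip_truncated": truncated,
        "ip_hash": hashed,
    }
```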
Metadata Prompts & Validation
"As a mix engineer, I want to prompt contributors for names, roles, and notes during upload so that assets arrive labeled and usable without follow-up."
Description

Configure per-Drop metadata prompts (e.g., contributor name, role, track reference, rights/clearances, notes) with field types, required/optional flags, character limits, and conditional logic. Prefill known collaborator data when available and provide inline help text and examples. Validate client- and server-side with clear error messaging. Persist submitted metadata alongside files as sidecar JSON and in the asset record, ensuring it’s searchable and exportable. Allow saving reusable prompt templates at the workspace or project level for repeat sessions.

Acceptance Criteria
Per‑Drop metadata schema configuration
Given a Drop exists and I have Edit permissions When I add fields of types [single-line text, multiline text, number, date, select, multi-select, checkbox, URL, email] with labels, keys, defaults, required flags, limits, and help text/examples Then the schema saves successfully and is immediately reflected on the Drop's upload form Given a field is marked Required When a contributor submits the form without a value Then submission is blocked and the field displays a clear error "This field is required" Given a field has a max length of 100 characters When a contributor enters 101 characters Then the client prevents submission and shows an inline error, and the server returns a 400 with a field-specific error if bypassed Given I save the schema When I reload the Drop settings Then all field definitions (type, label, key, required flag, limits, help text, default) persist
Conditional prompts visibility and dependency logic
Given a field B is configured to show when Field A equals "Producer" When the contributor selects "Producer" for Field A Then Field B becomes visible without a full page reload Given the same condition When the contributor selects any other value Then Field B is hidden, is not validated, and is not persisted Given nested conditions (Field C shown when B is shown and equals "Cleared") When the contributor sets values that satisfy both levels Then Field C appears and is validated accordingly Given a malicious client reveals hidden fields and submits values When the server processes the submission Then the server ignores values for fields whose conditions are not satisfied
Prefill collaborator metadata from known sources
Given a contributor opens a signed QR/link addressed to their email that matches a workspace contact When the upload form loads Then the Name and Role fields prefill from the contact profile and are editable by the contributor Given prior submissions for this Drop include a track reference for the same contributor When the contributor returns Then the track reference field pre-populates with the last used value, unless a new default was defined afterward Given the contributor is not recognized When the upload form loads Then no fields are prefilled Given prefilled values are present When the contributor edits them Then the edited values are persisted instead of the prefills
Client- and server-side validation parity and error messaging
Given validation rules (required, max length, pattern, URL/email format) are defined for fields When a contributor attempts to submit invalid inputs Then the client shows inline errors next to each field with concise messages and focuses the first error Given the same invalid submission is forced via a bypassed client When the server receives it Then the server rejects with HTTP 400 and returns a JSON body mapping field keys to error codes and human-readable messages Given required metadata is missing When a file upload is attempted Then the system does not create an asset record or store the file until all required metadata validates Given a localized upload page When validation errors occur Then error messages display in the locale of the page
Persist and index submitted metadata
Given a successful submission with files and metadata When the system writes the asset Then the metadata is saved in the asset record and as a sidecar JSON stored alongside the file(s) Given metadata is saved When I search the project for a field value (e.g., Role: "Producer") Then assets matching that value appear in results within 60 seconds of submission Given a request to export assets and metadata When I export as CSV and JSON Then each asset includes its metadata with field keys, labels, and values, and the export completes without data loss Given subsequent edits to metadata via admin UI When changes are saved Then both the asset record and sidecar JSON are updated to the latest values with a new updated_at timestamp
Template creation and application at workspace/project levels
Given a configured metadata schema on a Drop When I save it as a template with a unique name at the project level Then the template is created and visible in the project's template list Given a workspace admin When they create a workspace-level template Then the template is available to all projects in that workspace Given a new Drop When I apply a selected template Then all fields, validations, conditional logic, help texts, and defaults are copied to the Drop Given an existing template When I edit it Then Drops that previously applied the template remain unchanged until the template is re-applied
QR mobile upload enforces configured metadata prompts
Given a QR Drop link with configured metadata prompts and an active expiration When a contributor scans the QR and opens the mobile upload form Then the form renders all prompts with their types, required flags, help texts, and conditional logic, and blocks submission after expiration Given a mobile device When fields of types date, select, multi-select, checkbox, URL, and email are displayed Then the controls use mobile-appropriate inputs (date picker, native selects, toggles, email keyboard) Given client- and server-side validation rules When an invalid submission is made from mobile Then the same error messages and structured server errors are returned as on desktop
Intake Rules & Integrity Safeguards
"As a label operations lead, I want uploads checked and normalized against our standards so that bad, unsafe, or duplicate files never enter the catalog."
Description

Enforce file intake policies per Drop: allowed MIME types/extensions, max file size/count, and optional folder/ZIP acceptance with server-side extraction. Perform content-type sniffing, checksum generation, duplicate detection against project assets, and antivirus/malware scanning before finalizing ingest. Offer optional EXIF/metadata stripping for images and normalize audio sample rates/bit depths when configured. Provide descriptive rejection reasons and remediation tips to uploaders, with automatic retry and resume. Capture a tamper-evident audit log of all intake events.

Acceptance Criteria
MIME/Extension Enforcement with Server-side Sniffing
Given a QR Drop configured to allow MIME types [image/jpeg, audio/wav] and extensions [.jpg, .jpeg, .wav] When an uploader scans the QR and submits files of various types Then the server performs content-type sniffing on each file and compares both sniffed MIME and file extension against the allowed lists And files where both checks pass are accepted to staging; all others are rejected pre-ingest And for each rejected file the response includes file name, reason_code in {unsupported_type, type_mismatch}, and a list of allowed types And no rejected file bytes are persisted beyond transient scanning storage And all decisions are recorded per file in the audit log
Max Size/Count Limits and ZIP/Folder Intake with Server-side Extraction
Given a QR Drop with max_file_size=250MB, max_file_count=20, and archive_acceptance=zip_enabled When the uploader submits a mix of individual files and a .zip archive Then the server extracts the zip server-side without executing any content, enumerates contained files, and applies the Drop’s rules to each item And any single file larger than 250MB is rejected; any archive whose extracted total would exceed 20 files causes only the overflow items to be rejected And the response returns a per-file result with accepted_count and rejected_count and a summary reason for each rejection in {exceeds_size_limit, exceeds_count_limit, unsupported_archive} And unsupported archives (e.g., .rar, .7z) are rejected with reason_code=unsupported_archive And accepted files preserve the archive’s relative folder structure in the project
Checksum Generation and Duplicate Detection Against Project Assets
Given an upload reaches pre-ingest staging When checksum generation completes using SHA-256 Then the checksum is compared against all assets in the same project scope (including prior Drops) And if a match is found, the file is rejected with reason_code=duplicate_asset and includes the existing asset_id and location And if no match is found, the ingest proceeds And all checksum values and duplicate decisions are written to the audit log
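The checksum-and-compare step above can be sketched as follows; the in-memory `project_index` dict stands in for the real project-scoped asset index, and the result shape is illustrative.

```python
import hashlib

def sha256_stream(chunks) -> str:
    """Hash a staged file chunk by chunk, without loading it fully into memory."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

def duplicate_check(checksum: str, project_index: dict[str, str]) -> dict:
    """project_index maps checksum -> existing asset_id across prior Drops.
    A match rejects the file and reports the existing asset, per the criteria."""
    if checksum in project_index:
        return {"rejected": True, "reason_code": "duplicate_asset",
                "asset_id": project_index[checksum]}
    return {"rejected": False}
```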
Antivirus/Malware Scanning Prior to Finalizing Ingest
Given the file passes type and size rules When server-side antivirus scanning runs on the staged bytes Then files flagged as malicious are rejected and quarantined (not available to project), with reason_code=malware_detected and remediation_tip="Contact support" And files with scan errors or timeouts are rejected with reason_code=scan_failure or scan_timeout And only files with a clean scan result proceed to final ingest And scan engine name, version, and result are recorded in the audit log
Media Post-Processing Policies (Image EXIF stripping and Audio normalization)
Given a QR Drop with image_metadata_stripping=true When a JPEG or PNG is accepted for ingest Then all EXIF (including GPS) metadata are stripped server-side before the asset is finalized And a post-process verification reads the stored file to confirm zero EXIF/GPS tags And audit log records metadata_stripped=true per file Given a QR Drop with audio_normalization set to sample_rate=48kHz, bit_depth=24-bit When a WAV or AIFF is accepted with differing format parameters Then a normalized derivative is created to the configured targets without changing the channel count and without altering duration by more than 0.5% And the stored asset’s format equals the target parameters, and the transform details are recorded in the audit log
Descriptive Rejections with Remediation, Retry and Resume Support
Given a transient network failure occurs during an upload to a non-expired Drop When the uploader retries using the same upload session within 24 hours Then the upload resumes from the last confirmed byte without data loss And the final server response reflects a single completed file with no duplicates Given a file is rejected for any rule violation When the server responds Then the response includes file name, reason_code, human-readable remediation_tip tailored to the reason, and a link to the Drop’s rules And at least one actionable remediation is provided (e.g., compress to <250MB, convert to WAV 48kHz) Given repeated transient failures (HTTP 5xx, timeouts) When an upload attempt is made Then the client is instructed via response headers to auto-retry up to 3 times with exponential backoff before surfacing an error
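The retry policy above ("up to 3 times with exponential backoff") can be sketched as a delay schedule. The base delay, cap, and jitter factor here are illustrative defaults, not documented IndieVault values.

```python
import random

def backoff_delays(base: float = 1.0, cap: float = 30.0, attempts: int = 3,
                   jitter: float = 0.1, rng=random.random) -> list[float]:
    """Delays (seconds) for the client's auto-retry policy: exponential
    growth capped at `cap`, plus a small jitter to avoid retry stampedes.
    `rng` is injectable so the schedule is testable."""
    delays = []
    for attempt in range(attempts):
        delay = min(cap, base * (2 ** attempt))
        delays.append(delay + delay * jitter * rng())
    return delays
```

With defaults and no jitter this yields 1s, 2s, 4s before the error is surfaced to the uploader.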
Tamper-Evident Audit Logging for QR Drop Intake
Given any intake event (accept, reject, scan result, post-process) When the event is written to the audit log Then the entry includes timestamp (UTC), event_type, drop_id, uploader_session_id, file_name, file_size, checksum, decision, reason_code (if any), and prev_hash And prev_hash links to the previous entry to form a hash chain, enabling tamper-evidence And attempting to modify or delete an entry invalidates the chain verification And logs are immutable to non-admin roles and exportable by project admins as signed JSONL with a daily chain root hash
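The `prev_hash` chaining above can be sketched directly: each entry commits to the previous entry's hash, so editing or deleting any entry invalidates verification from that point on. Field names follow the criteria; the JSON canonicalization choice is an assumption.

```python
import hashlib
import json

GENESIS = "0" * 64  # prev_hash for the first entry in the chain

def append_entry(chain: list[dict], event: dict) -> dict:
    """Append an intake event, linking it to the previous entry's hash."""
    entry = {**event, "prev_hash": chain[-1]["hash"] if chain else GENESIS}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; any tampered or removed entry breaks the chain."""
    prev = GENESIS
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body.get("prev_hash") != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

The exported daily chain root hash would simply be the `hash` of the day's last entry.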
Auto-Organize & Version to Release Folders
"As an indie artist shipping weekly, I want incoming files to auto-land in the right release folders with versions applied so that I can move straight to review without sorting and renaming."
Description

Automatically route accepted files into the correct project/release folder structure (e.g., Stems, Mixes, Artwork, Contracts), apply naming templates, and increment semantic versions when a filename collision or update is detected. Attach collected metadata, tag assets with source=QR Drop, and link submissions back to their Drop for traceability. Trigger optional follow-up workflows (e.g., notify mixing engineer, start review pipeline, request missing stems) via rules. Respect workspace permissions and inheritance while ensuring contributors never gain broader access.

Acceptance Criteria
Auto-route Accepted Files to Release Folder Structure
Given a Release with folder taxonomy configured (Stems, Mixes, Artwork, Contracts) and a QR Drop scoped to that Release When a contributor uploads a file that passes all gates Then the file is stored under the Release root in the correct top-level folder based on resolved asset type And any missing subfolders in the path are created automatically And no file is stored outside the Release directory And the final stored path is recorded on the asset record
Apply Naming Templates on Ingest
Given a Release with an active naming template "{releaseCode}_{assetType}_{trackNumber}_{name}_v{semver}{ext}" When an accepted file is stored Then the canonical filename is generated from template values And unsupported characters are normalized to safe ASCII And the filename length is <= 120 characters And the filename is unique within the Release
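The template rendering and normalization rules above can be sketched as one function. The truncation strategy (preserve the extension, trim the stem) is an assumption; the criteria only require a total length of at most 120 characters.

```python
import os
import re
import unicodedata

def canonical_name(template: str, values: dict, max_len: int = 120) -> str:
    """Render a naming template such as
    "{releaseCode}_{assetType}_{trackNumber}_{name}_v{semver}{ext}",
    normalize to safe ASCII, and cap the total length."""
    raw = template.format(**values)
    root, ext = os.path.splitext(raw)
    # Strip accents, then collapse anything outside [A-Za-z0-9._-] to "-".
    root = unicodedata.normalize("NFKD", root).encode("ascii", "ignore").decode()
    root = re.sub(r"[^A-Za-z0-9._-]+", "-", root)
    return root[: max_len - len(ext)] + ext
```

Uniqueness within the Release would be enforced separately (e.g. by the version-bump rule below in the collision criteria).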
Increment Semantic Version on Filename Collision or Update
Given an existing asset with the same canonical name exists in the Release When a new upload resolves to that canonical name Then the system compares MIME type, technical media properties, and content hash And if MIME type differs, bump MAJOR (X+1.0.0) And if MIME type matches but technical properties differ, bump MINOR (X.Y+1.0) And if technical properties match but content hash differs, bump PATCH (X.Y.Z+1) And if content hash matches, reject as duplicate without creating a new version And the resulting filename reflects the new semver and version history is updated
Attach Metadata, Source Tag, and Drop Link
Given the originating QR Drop has a Drop ID and collected metadata fields When an accepted file is stored Then all submitted metadata fields are attached to the asset And asset.source equals "QR Drop" And asset.dropId equals the originating Drop ID And the metadata and Drop link are visible in the asset detail UI and returned by the API
Trigger Optional Follow-up Workflows
Given optional rules are enabled for the Release (notify mixing engineer, start review pipeline, request missing stems) When an asset is stored and categorized Then matching rules execute within 60 seconds And notifications are sent to configured recipients And the review pipeline state is created or advanced as configured And if required stems remain missing 10 minutes after ingest, a request is sent via the Drop’s contact channel
Enforce Permissions and Limited Contributor Access
Given workspace/project permissions and inheritance policies are configured and the QR Drop allows non-members to upload When a contributor uploads via QR code Then the contributor cannot browse or download any workspace, project, or release assets And the stored asset inherits the project’s ACLs without elevating contributor privileges And attempts by the contributor to access protected routes return HTTP 403 And the Drop link’s access expires per its policy and cannot be used to gain broader access
Notifications, Receipts & Analytics
"As a project manager, I want real-time notifications and Drop analytics so that I can spot issues, confirm receipt, and report progress to stakeholders."
Description

Notify project owners and designated collaborators on key events (first scan, first upload, each successful ingest, Drop expired) via email, in-app, and Slack. Present an in-app dashboard showing scans, unique uploaders, completion rate, average upload size/time, top devices/browsers, and geo-level insights with privacy safeguards. Provide per-Drop and per-source tagging via URL parameters for session-level attribution. Show the uploader an on-screen receipt with submission summary and optional email receipt if verified. Expose webhooks/API for downstream automation and CSV export for reporting.

Acceptance Criteria
First QR Scan Notifications
Given a QR Drop link with notifications enabled and stakeholders configured When the link is scanned for the first time from any device Then send notifications via email, in-app, and Slack to all configured stakeholders within 30 seconds And include: drop_id, drop_name, first_scan_at (UTC), device type, browser, source tags, and coarse geolocation (country/region) And do not send duplicate "first scan" notifications for subsequent scans And respect per-user notification preferences and quiet hours And record an auditable notification log entry with delivery status per channel
Successful Upload & Ingest Notifications
Given a contributor uploads files through a QR Drop When each file batch finishes validation and is successfully ingested Then send a consolidated notification to stakeholders within 60 seconds of ingest completion And include: file_count, total_bytes, avg_upload_time_ms, device, browser, source tags, and receipt_id (if generated) And do not notify for failed or quarantined files; instead emit a single failure notification with reason And de-duplicate events by file hash and receipt_id to prevent double notifications And update the in-app notifications center with an unread badge for the project
Drop Expiration Reminders and Lockout
Given a QR Drop has an expiration timestamp When the time is T-24 hours Then send a reminder notification to owner and designated collaborators across email, in-app, and Slack And include: drop_id, drop_name, expires_at (UTC), current upload stats, and extend_link (if permissions allow) When the expiration timestamp is reached Then block new uploads immediately and return an expired UI (HTTP 410 for API) And send an expiration notification to stakeholders within 30 seconds And log the lockout event in audit and analytics
Per-Drop Analytics Dashboard with Privacy Safeguards
Given a user with analytics permission opens a Drop's Analytics tab with a date range (default last 30 days) When analytics loads Then display: total scans, unique uploaders, completion rate ((uploaders with ≥1 ingest success)/(uploaders who started upload)), avg upload size, avg upload time, top 5 devices, top 5 browsers, and geo distribution (country and region) And apply privacy thresholds so no geo/device/browser slice is shown with fewer than 5 events And reflect new events within 2 minutes (p95) And allow filtering by source tags and channel (QR vs direct link) And CSV export downloads the filtered dataset within 10 seconds for up to 100k rows
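The privacy threshold above (suppress any slice with fewer than 5 events) composes naturally with the top-5 breakdowns. A minimal sketch, with hypothetical helper names:

```python
def apply_privacy_threshold(slices: dict[str, int], k: int = 5) -> dict[str, int]:
    """Suppress any geo/device/browser slice with fewer than k events,
    per the dashboard's privacy safeguard."""
    return {name: count for name, count in slices.items() if count >= k}

def top_slices(slices: dict[str, int], n: int = 5, k: int = 5) -> list[tuple[str, int]]:
    """Top-n breakdown (e.g. top 5 devices), computed only over slices
    that survive the threshold; ties break alphabetically."""
    kept = apply_privacy_threshold(slices, k)
    return sorted(kept.items(), key=lambda kv: (-kv[1], kv[0]))[:n]
```

Applying the threshold before ranking means a long tail of 1-2 event slices never appears in the dashboard at all.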
Session-Level Attribution via URL Parameters
Given a QR Drop link is accessed with URL parameters src and campaign (and up to 3 whitelisted custom tags) When the session starts (first page load) Then persist these tags to the session and attach them to scan, upload, ingest, notification, analytics, and webhook events And restrict tag values to ASCII, max 64 characters, and disallow email/phone patterns; truncate and sanitize on violation And ignore mid-session tag changes until a new session cookie is established And surface tags as filters in the analytics dashboard and include them in CSV and webhook payloads
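The tag rules above (ASCII only, 64-character cap, no email/phone patterns) can be sketched as a sanitizer. Dropping PII-looking values entirely is one reading of "disallow ... sanitize on violation"; the regexes are deliberately rough heuristics.

```python
import re

EMAIL_RE = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{6,}\d")

def sanitize_tag(value: str, max_len: int = 64) -> str:
    """Enforce the attribution-tag rules: non-ASCII characters are dropped,
    PII-looking values are rejected outright, and the result is truncated."""
    value = value.encode("ascii", "ignore").decode()
    if EMAIL_RE.search(value) or PHONE_RE.search(value):
        return ""  # disallow email/phone patterns entirely
    return value[:max_len]
```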
Uploader On-Screen and Email Receipt
Given an uploader completes an upload and all files are ingested successfully When the success screen is shown Then display a receipt with: drop_name, receipt_id, local_time and UTC timestamp, file list (names + sizes), total bytes, source tags, and owner contact And provide an optional email receipt: require entering an email and verifying via OTP within the session; send within 2 minutes upon verification And do not send or store an email receipt if email is not verified And provide a Copy Receipt Link that expires in 30 days and is secured via HMAC; link shows the same summary without access to the project And localize date/number formats based on browser locale
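The HMAC-secured, 30-day receipt link above can be sketched as a sign/verify pair. The URL layout, signing-string format, and secret handling are assumptions; the constant-time compare and expiry check are the essential parts.

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # illustrative; the real key lives in config

def sign_receipt_link(receipt_id: str, now: int, ttl_days: int = 30) -> str:
    """Build a shareable link whose signature binds the id to an expiry time."""
    expires = now + ttl_days * 86400
    sig = hmac.new(SECRET, f"{receipt_id}.{expires}".encode(), hashlib.sha256).hexdigest()
    return f"/receipts/{receipt_id}?exp={expires}&sig={sig}"

def verify_receipt_link(receipt_id: str, exp: int, sig: str, now: int) -> bool:
    if now > exp:
        return False  # link older than 30 days
    expected = hmac.new(SECRET, f"{receipt_id}.{exp}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)  # constant-time comparison
```

Because the expiry is inside the signed message, tampering with `exp` in the URL invalidates the signature.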
Webhooks, API, and CSV Export for Events
Given a project has a registered webhook endpoint and secret When events occur (scan.created for first scan, upload.started, upload.completed, ingest.succeeded, ingest.failed, drop.expired) Then deliver JSON payloads within 5 seconds (p95) with headers X-IndieVault-Signature (HMAC-SHA256) and X-IndieVault-Timestamp And retry non-2xx responses up to 12 times over 24 hours with exponential backoff and jitter; provide manual replay by event_id for 7 days And redact IPs (IPv4 to /24, IPv6 to /48) and exclude PII from payloads; include source tags and receipt_id where applicable And provide GET /v1/drops/{id}/analytics with the same filters as the dashboard, responding within 2 seconds (p95) for ranges ≤90 days And ensure CSV export endpoints mirror filters and complete within 15 seconds (p95) for up to 100k rows
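Two mechanics from the criteria above can be sketched concretely: IP redaction (IPv4 to /24, IPv6 to /48) and the signature/timestamp headers. The `"{timestamp}.{body}"` signing string and JSON canonicalization are assumptions, not a documented IndieVault format.

```python
import hashlib
import hmac
import ipaddress
import json

def redact_ip(ip: str) -> str:
    """Truncate IPv4 to /24 and IPv6 to /48 before an address enters a payload."""
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return f"{net.network_address}/{prefix}"

def webhook_headers(secret: bytes, payload: dict, ts: int) -> dict:
    """HMAC-SHA256 over a timestamped canonical body, per the header names
    in the criteria; including ts in the signed string resists replay."""
    body = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    sig = hmac.new(secret, f"{ts}.{body}".encode(), hashlib.sha256).hexdigest()
    return {"X-IndieVault-Signature": sig, "X-IndieVault-Timestamp": str(ts)}
```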

Email Drop

Assign a unique email address to each project/version that turns attachments into scoped uploads. Subject/body become notes, sender identity powers per‑recipient tracking, and the same file gates apply. Ideal for clients who live in email—no new tools to learn.

Requirements

Unique Inbound Address Provisioning
"As an indie artist or manager, I want a unique email address for each project/version so that collaborators can send files by email and the uploads are automatically organized in the correct place."
Description

Provision and manage a unique, scoped email address per project and version (e.g., project+token@drop.indievault.com). Automatically generate, rotate, disable, and regenerate addresses from UI and API with audit logging. Ensure multi-tenant isolation, tokenized routing, and collision-free aliases. Link each alias to its target project/version so incoming mail is deterministically scoped without manual triage.

Acceptance Criteria
UI: Generate Unique Inbound Address
Given tenant T has project P and version V and user U has Manage Inbound Addresses permission When U clicks "Generate Inbound Address" for P/V in the UI Then the system creates an alias matching pattern <local-part with "+<token>">@<inbound-domain> And the token is generated via a cryptographically secure RNG with at least 96 bits of entropy and is URL-safe And the alias is unique across all tenants, projects, and versions And the alias is persisted linked to tenant T, project P, and version V with status "Active" And an audit log entry is recorded with actor U, action "create_inbound_alias", alias, tenantId, projectId, versionId, and timestamp And the UI displays the address and status "Active"
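The token requirement above (CSPRNG, at least 96 bits of entropy, URL-safe) maps directly onto Python's `secrets` module; a minimal sketch using the alias pattern from the feature description:

```python
import secrets

def inbound_token(bits: int = 96) -> str:
    """URL-safe token from a CSPRNG with at least `bits` of entropy.
    secrets.token_urlsafe takes a byte count, so round bits up to bytes."""
    return secrets.token_urlsafe((bits + 7) // 8)

def inbound_alias(local_part: str, token: str,
                  domain: str = "drop.indievault.com") -> str:
    """Compose the scoped address, e.g. project+token@drop.indievault.com."""
    return f"{local_part}+{token}@{domain}"
```

At 96 bits, accidental collisions across tenants are negligible, though the uniqueness check in the criteria would still be enforced at persistence time.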
API: Generate Unique Inbound Address
Given a valid API client with scope addresses:write in tenant T and project P with version V exist When the client POSTs to /api/v1/inbound-addresses with { projectId: P, versionId: V } Then the API responds 201 Created with body containing { id, address, tenantId, projectId, versionId, status: "Active", createdAt } And the created alias is unique across all tenants and includes a token with >=96 bits entropy And an audit log entry is recorded with actor (client/app), action "create_inbound_alias_api", alias, tenantId, projectId, versionId, requestId, and timestamp
Deterministic Routing to Linked Project/Version
Given an Active inbound alias A linked to tenant T, project P, and version V When an email with attachments is received by SMTP for address A Then the message is accepted (250) and a new upload batch is created scoped to T/P/V And each attachment is stored as an upload under P/V And the email subject and body are saved as notes on the upload batch And the sender's email address is captured for per-recipient tracking And existing file gating rules (e.g., AV scan, size/extension limits) are applied identically to manual uploads And no other tenant or project receives any of the created uploads
Rotate Alias to New Address
Given an Active inbound alias A linked to T/P/V When user U triggers Rotate Alias via UI or API Then a new alias A2 is generated and linked to the same T/P/V with status "Active" And alias A is immediately set to status "Disabled" And emails to alias A after rotation are rejected with SMTP 550 5.1.1 and a human-readable reason indicating the address is no longer valid And audit logs are recorded for actions "rotate_inbound_alias" and "disable_inbound_alias" with correlation to A and A2 And the UI/API return and display the new address A2 and updated statuses
Disable Alias and Enforce Rejection
Given an Active inbound alias A linked to T/P/V When a user with permissions disables alias A Then alias A status becomes "Disabled" And subsequent inbound emails to A are rejected with SMTP 550 5.1.1 and a human-readable reason And the UI/API reflect status "Disabled" for alias A And an audit log entry is recorded with actor, action "disable_inbound_alias", alias, and timestamp
Regenerate Alias After Disable
Given a Disabled inbound alias A linked to T/P/V When the user triggers Regenerate Alias via UI or API Then a new alias A2 is created and set to status "Active" linked to the same T/P/V And alias A remains "Disabled" and is not reactivated And inbound emails to A2 are accepted, while emails to A are rejected with SMTP 550 5.1.1 And an audit log entry is recorded with actor, action "regenerate_inbound_alias", oldAlias A, newAlias A2, and timestamp
Multi-Tenant Isolation and Collision-Free Aliases
Given tenants T1 and T2 each provision inbound aliases concurrently for their own projects and versions When N (>=1000) alias generations occur per tenant under concurrent load Then no two generated aliases are identical across all tenants (collision-free) And an alias belonging to T1 cannot ingest files into any project/version of T2; mail to that alias always scopes to T1's linked project/version only And attempts to access or manage an alias from a different tenant via UI or API return 404/403 without revealing the alias's existence And alias tokens are not exposed in plaintext in error responses or logs and are generated with >=96 bits of entropy using a CSPRNG
Attachment Ingestion & Scoped Upload Pipeline
"As a collaborator, I want to email stems or artwork and have them appear as assets in the right project so that I don’t need to learn a new tool or upload flow."
Description

Receive inbound emails, parse MIME, and extract attachments into the correct project/version. Enforce size and file-type policies, chunk and stream to storage, compute checksums for de-duplication, and create asset records with versioning rules. Preserve original filenames and MIME metadata, support ZIP extraction (configurable), and attach source email headers for traceability. Implement idempotent processing and automatic retries for transient failures with dead-lettering for irrecoverable cases.

Acceptance Criteria
Route Email to Project/Version via Unique Address
Given an inbound email is sent to a project's unique ingestion address mapped to version V When the message is received by the email gateway Then an ingestion batch is created scoped to that exact project and version And the email Subject and plain-text body are persisted as batch notes And the sender's email address is recorded and linked to the batch for per‑recipient tracking And if the address does not map to an active project/version, the email is rejected and no attachments are processed with reason "unknown-scope" And all standard file validation gates used for UI uploads are applied to each extracted attachment
MIME Parsing and Attachment Extraction Fidelity
Given an RFC‑5322‑compliant email with nested multipart/alternative and multipart/mixed sections When the message is parsed Then every part with a filename parameter or Content‑Disposition: attachment is extracted as an attachment candidate And inline parts without a filename are not ingested And the original filename string, MIME Content‑Type, Content‑Disposition, and byte size are preserved in the candidate's metadata And non‑ASCII filenames are decoded per RFC 2231/5987 and stored losslessly
File Policy Enforcement and Batch Outcome
Given attachment candidates have been identified When size and type policies are evaluated Then any file exceeding the per‑project max size is rejected with reason "file-too-large" And any file with a MIME type or extension not on the allowed list is rejected with reason "disallowed-type" And valid files proceed to storage while invalid files do not block valid ones And the batch summary records accepted_count, rejected_count, and a per‑file outcome with reason codes
Streaming Upload, Checksums, De‑dup, and Versioning
Given a valid attachment is ready to store When the system uploads the file Then the file is streamed to storage in chunks without buffering the entire content in memory And a SHA‑256 checksum is computed over the stored bytes And if an asset with the identical checksum already exists in the same project/version, no new blob is stored and a deduplicated reference is created and marked duplicate_of the existing asset And otherwise, if an asset with the same filename exists in the same project/version, a new version record is created with version incremented by 1 And otherwise, a new asset record is created with version set to 1 And the original filename is preserved verbatim in the asset metadata regardless of versioning
Configurable ZIP Extraction and Safety
Given the project setting "Extract ZIP attachments" is enabled When a .zip attachment is ingested Then the archive is unpacked and each contained file is validated and ingested under the same policies as direct attachments And each ingested file stores its relative path within the archive in metadata And entries with path traversal or absolute paths are skipped with reason "zip-path-traversal" And if the .zip is encrypted/password‑protected, it is rejected with reason "encrypted-zip" And when the setting is disabled, .zip files are treated as single files and are not unpacked
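The "zip-path-traversal" rejection above hinges on validating each archive member's path before extraction. A minimal check, with an illustrative helper name:

```python
import posixpath

def safe_zip_member(name: str) -> bool:
    """Reject absolute paths, drive-letter paths, and any entry that would
    escape the extraction root via "..", per the zip-path-traversal rule."""
    if name.startswith(("/", "\\")) or (len(name) > 1 and name[1] == ":"):
        return False  # absolute or Windows drive-letter path
    normalized = posixpath.normpath(name)
    return not (normalized == ".." or normalized.startswith("../"))
```

Relative paths that pass this check can safely be preserved as the in-archive path stored in each file's metadata.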
Idempotency, Retries, and Dead‑Letter Handling
Given an inbound email with a specific Message‑Id and To address is processed When the same email is delivered again (duplicate or webhook retry) Then no duplicate assets are created and the second attempt links to the original batch as an idempotent no‑op And transient storage or parsing errors trigger automatic retries with exponential backoff up to the configured maximum And if retries are exhausted or the error is non‑recoverable, the message is placed on a dead‑letter queue without creating partial assets And dead‑letter entries include the dedupe key (Message‑Id + To), error class, error message, and pointers to the original payload And retries and DLQ handling do not violate idempotency (no extra asset records or blobs)
Source Email Header Traceability
Given an email is ingested When the batch is created Then the complete raw header block is persisted with the batch And a normalized subset (Message‑Id, From, To, Date, Subject, Received) is indexed and queryable via API And each asset created from this email links back to the originating batch and exposes the indexed header subset for audit purposes
Subject/Body-to-Notes Mapping
"As a producer, I want my emailed comments to show up as asset notes so that context travels with the files I send."
Description

Map the email subject and body into structured notes attached to the upload event and resulting assets. Normalize plaintext/HTML, strip signatures and quoted replies, support threading to append notes on follow-up emails, and capture timestamps plus sender context. Make notes searchable and visible in activity timelines and review links while respecting privacy and access controls.

Acceptance Criteria
Normalize Email Subject and Body into Structured Note
Given an inbound email with HTML or plaintext body When the email is processed via Email Drop Then the stored note body is normalized to UTF-8 plain text with paragraph breaks preserved and hyperlinks converted to their URL text And all HTML tags, inline styles, and scripts are removed And consecutive whitespace is collapsed and leading/trailing whitespace trimmed And original line breaks are retained And the note title equals the subject trimmed of leading/trailing whitespace and canonicalized by removing common prefixes (Re:, Fwd:, AW:, WG:) And if the subject is empty or missing, the note title is set to "No subject" and suffixed with the last 8 chars of the Message-ID
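The subject canonicalization above could be sketched as follows. Repeated prefix stripping, the extra "Fw:" variant, and the exact "No subject" suffix format are assumptions; the criteria name only the prefixes Re:, Fwd:, AW:, WG: and the last-8-chars rule:

```python
import re

# Prefixes named in the criteria: Re:, Fwd:, AW:, WG: (case-insensitive),
# possibly repeated ("Re: Fwd: ..."). "Fw:" is an extra assumption.
_PREFIX = re.compile(r"^\s*(re|fwd|fw|aw|wg)\s*:\s*", re.IGNORECASE)

def canonical_title(subject, message_id):
    subject = (subject or "").strip()
    while True:  # strip stacked prefixes until none remain
        stripped = _PREFIX.sub("", subject)
        if stripped == subject:
            break
        subject = stripped
    if subject:
        return subject
    # Exact suffix format is an assumption; the criteria only say
    # "No subject" plus the last 8 chars of the Message-ID.
    return f"No subject ({message_id.strip('<>')[-8:]})"

assert canonical_title("Re: Fwd: Final mix v3", "<x@y>") == "Final mix v3"
```

Canonicalizing before comparison also feeds the subject-based threading fallback described later.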
Strip Signatures and Quoted Replies
- Given the body contains a signature separator ("-- ") or common mobile signature patterns (e.g., "Sent from my iPhone") When the email is processed Then text at and after the detected signature is excluded from the stored note
- Given the body contains quoted reply sections introduced by patterns like "On <date>, <name> wrote:" or leading ">" quote indicators When the email is processed Then the quoted reply content is excluded from the stored note
- Given localized reply headers in supported locales (EN, ES, FR, DE) When the email is processed Then quoted content following those headers is excluded And the system preserves non-quoted, non-signature content only
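The signature and quoted-reply stripping might be approximated with line-level patterns. This sketch covers EN reply headers only (the criteria also require ES, FR, DE) and the two signature markers the criteria name:

```python
import re

SIG_MARKERS = (re.compile(r"^-- $"), re.compile(r"^Sent from my ", re.I))
# EN-only reply header; localized variants (ES/FR/DE) would be added here.
QUOTE_HEADER = re.compile(r"^On .+ wrote:$")

def strip_signature_and_quotes(body: str) -> str:
    kept = []
    for line in body.splitlines():
        if any(p.match(line) for p in SIG_MARKERS):
            break                       # drop the signature and everything after
        if QUOTE_HEADER.match(line):
            break                       # drop the quoted-reply intro and below
        if line.lstrip().startswith(">"):
            continue                    # drop individual quoted lines
        kept.append(line)
    return "\n".join(kept).rstrip()
```

Real mail bodies are messier than this (HTML replies, top-posted fragments), so a production parser would likely combine several heuristics; the sketch only shows the exclusion order implied by the criteria.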
Thread Follow-Up Emails into Existing Note Thread
- Given a follow-up email is sent to the same project/version Email Drop address And the email contains In-Reply-To or References headers pointing to a prior thread message When the email is processed Then a new note is appended to the existing thread for that upload event rather than creating a new thread
- Given the follow-up lacks threading headers but has a subject that canonicalizes to the same normalized subject as an existing thread from the same sender within 14 days When processed Then a new note is appended to that existing thread
- Given neither threading headers nor a matching canonical subject exist When processed Then a new thread is created
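The three-step thread resolution above can be sketched as follows; the two lookup callables stand in for hypothetical persistence queries, and the dict shape of a thread record is illustrative:

```python
from datetime import datetime, timedelta, timezone

def resolve_thread(in_reply_to, references, sender, canonical_subject,
                   received_at, lookup_by_message_id, lookup_by_subject):
    """Resolution order from the criteria:
    (1) In-Reply-To/References headers, (2) same canonical subject from
    the same sender within 14 days, (3) otherwise a new thread."""
    for mid in filter(None, [in_reply_to, *(references or [])]):
        thread = lookup_by_message_id(mid)
        if thread is not None:
            return thread, "appended-by-headers"
    thread = lookup_by_subject(canonical_subject, sender)
    if thread is not None and received_at - thread["last_at"] <= timedelta(days=14):
        return thread, "appended-by-subject"
    return None, "new-thread"
```

A caller would append the note to the returned thread, or create a fresh thread when `None` comes back.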
Capture Timestamps and Sender Context
Given an inbound email is received by the system When processed Then the note stores received_at as the system receipt timestamp in UTC with millisecond precision And the original Date header value is stored separately for reference And the note stores sender_display_name and sender_email from the RFC 5322 From header (first address if multiple) And the SMTP envelope sender is stored as envelope_from And, if the sender_email matches an existing contact/user, sender_contact_id is set accordingly And authentication results (SPF pass/fail, DKIM pass/fail, DMARC pass/fail) are recorded as booleans
Privacy and Access Controls for Notes
- Given a user views the project activity timeline When access checks are applied Then notes are visible only if the user has permission to the associated project/version and assets
- Given a review link is opened by Recipient A When notes are rendered Then only notes originating from or addressed to Recipient A's email (or explicitly shared with the link) are shown; other notes are hidden
- Given a note is marked private via subject tag "[private]" or originates from an internal team domain configured as internal When notes are queried by external collaborators or review link recipients Then the private note is not returned (HTTP 403 from API) and is excluded from their search index
Search Notes by Subject, Body, Sender, and Date
Given a user with access submits a search query across notes When querying for a term present in subject or body Then matching notes are returned with case- and accent-insensitive matching And results can be filtered by sender_email, date range (received_at), and thread_id And exact phrase queries in quotes return only phrase matches And p95 query latency is <= 300 ms for up to 10,000 notes in a project And notes outside the user's access scope are neither indexed for nor returned to that user
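Case- and accent-insensitive matching is commonly implemented by Unicode-decomposing the text, dropping combining marks, and casefolding; a minimal normalization sketch (a search index would apply this at both index and query time):

```python
import unicodedata

def fold(text: str) -> str:
    """Case- and accent-insensitive form: NFD-decompose, strip
    combining marks, then casefold."""
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    return stripped.casefold()

# "Mélodie" and "melodie" normalize to the same searchable form.
assert fold("Mélodie FINALE") == fold("melodie finale")
```

At the 10,000-notes-per-project scale stated above, a database index over the folded text (or a search engine with an ASCII-folding analyzer) would be the usual way to hit the 300 ms p95 target.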
Display Notes in Activity Timeline and Review Links
- Given a project activity timeline is viewed When notes exist for an upload event Then notes are displayed in chronological order with title, sender display name, received_at (UTC localized to viewer), and body preview (first 240 chars) And clickable URLs are auto-linked; HTML is sanitized to prevent XSS; inline images are not rendered
- Given a review link page is viewed by an authorized recipient When related assets have note threads Then only notes relevant to those assets and permitted for the recipient are shown with pagination And note content remains consistent with stored normalization and stripping rules
Sender Identity Resolution & Per-Recipient Analytics
"As a manager, I want to see who emailed which files and when so that I can track contributions and maintain accountability."
Description

Resolve the sender’s email address to an existing contact or create a lightweight contact when unknown. Record per-sender upload events, associate them with assets, and feed analytics (who sent what, when, and for which project/version). Optionally require sender verification for restricted projects; capture authentication signals (SPF/DKIM/DMARC results) and trust level. Expose identity data in audit trails and recipient analytics dashboards.

Acceptance Criteria
Resolve Known Sender to Existing Contact
Given an inbound email is received at a project/version Email Drop address And the From address matches an existing contact in the workspace When the message is processed Then the sender is linked to that contactId And an identity event "email_ingested" is recorded with messageId, receivedAt, projectId, versionId, contactId And each attachment is associated with the contact and the target project/version according to file gate rules And SPF, DKIM, and DMARC results are stored with the event and a trustLevel is computed
Create Lightweight Contact for Unknown Sender
Given an inbound email From address does not match any existing contact When the message is processed Then a lightweight contact is created with email, displayName (from headers), and source="email-drop" And subsequent messages from the same address reuse the same contactId And the identity event references the new contactId
Record Per-Sender Upload Events
Given one or more attachments are present on an ingested email When processing completes Then a distinct upload event is recorded per attachment with filename, size, hash, assetId (created or matched), contactId, projectId, versionId, and receivedAt And duplicate files by hash do not create new assets but still record an upload event linked to the existing asset And the analytics API returns upload records filterable by contactId, projectId, versionId, and date range, including who sent what and when
Capture Authentication Signals and Trust Level
Given an inbound message is received When SPF, DKIM, and DMARC are evaluated Then the raw results (pass|fail|none) are stored with the message record And a trustLevel is computed per policy: Trusted if DMARC=pass OR (SPF=pass AND DKIM=pass); Unverified if exactly one of SPF or DKIM passes and DMARC is none; Suspect if DMARC=fail or both SPF and DKIM fail And these fields are retrievable via audit trail and analytics APIs and visible in the UI on message and asset detail views
Enforce Sender Verification on Restricted Projects
Given a project/version is configured with requireSenderVerification=true And an inbound email is received from a sender whose trustLevel is not Trusted When the message is processed Then attachments are quarantined and not published to the project's asset library And the event is marked status=Blocked with reason=UnverifiedSender And the item appears in a review queue where an admin can approve or reject And upon approval the attachments are ingested and the event updates to status=Ingested; upon rejection they are discarded and the audit log records the action
Expose Identity Data in Audit Trails and Analytics Dashboards
Given the system has processed email uploads from multiple senders When viewing the project's Audit Trail Then each entry shows sender displayName, email, contactId, authentication results, trustLevel, projectId, versionId, assetIds, messageId, and timestamps And when viewing the Recipient Analytics dashboard for a project/version Then per-recipient metrics show who sent what and when with filters for date range, contact, and project/version And counts and identities match between Audit Trail and Analytics for identical filters
Policy Gates & Watermarking Enforcement on Email Uploads
"As a label rep, I want emailed files to follow the same security and review rules as normal uploads so that nothing leaks and approvals stay consistent."
Description

Apply the same file policies used by standard uploads to email-ingested assets: watermarking/transcoding profiles, acceptance rules, rights/role tagging, quarantine for review, and approval workflows. Generate watermarked derivatives on ingest where configured and ensure any shares created from these assets inherit expiry, download restrictions, and link analytics settings defined at the project level.

Acceptance Criteria
Watermark and Transcode on Email Ingest
Given a project/version with watermarking profile WM-Standard and transcode profile TX-Review configured to run on ingest And an external sender emails one or more supported files (each <= 500 MB) to the version's Email Drop address When the system processes the email Then a derivative is generated for each attachment per TX-Review with codec/bitrate/channels/resolution exactly matching the profile And each derivative contains watermark signature WID matching the version watermark configuration And originals are stored unaltered And processing completes within 10 minutes of receipt for each attachment And an ingest record is created linking source email message-id, sender address, applied profile IDs, start/end timestamps, and outcome=success
Policy Gate Validation and Rejection Feedback
Given project acceptance rules are configured (allowed MIME types, max size 500 MB, forbidden extensions, naming pattern, antivirus scan required) and quarantine is enabled When a sender emails attachments including: one compliant file, one oversized file, one forbidden-extension file, one naming-mismatch file, and one with simulated malware Then only the compliant file is accepted for processing and placed into quarantine And all non-compliant files are rejected and not stored And the sender receives an automated reply within 5 minutes listing each rejected filename with reason codes (size_exceeded, type_forbidden, name_invalid, malware_detected) And the audit log records a policy-evaluation entry per file with rule IDs, decision (accepted|quarantined|rejected), evaluator version, and timestamps
Scoped Ingestion and Rights/Role Tagging Applied
Given a unique Email Drop address for Project P, Version V, configured with rights tag "Review-Only" and role tag "Client" When any sender emails supported attachments to that address Then all accepted files are stored under Project P > Version V only And the applied policy set ID matches the one configured for Version V And each asset is automatically tagged with rights "Review-Only" and role "Client" And no cross-project or cross-version routing occurs regardless of email subject/body content
Quarantine and Approval Workflow for Email-Ingested Assets
Given quarantine is enabled for Version V and approver role "Project Admin" is assigned When files are ingested via Email Drop Then each accepted file is set to state=Quarantined and is hidden from share pickers and release bundles And attempts to create a share or download before approval are blocked with error code QUARANTINED (HTTP 403 via API) and disabled controls in UI When a user with role "Project Admin" approves a quarantined file Then the file transitions to state=Approved, moves to the configured destination folder, and becomes eligible for sharing And the approval event logs approver user ID, timestamp, and reviewer notes
Share Policy Inheritance from Project Defaults
Given project-level default share settings are expiry=7 days from creation, downloads=disabled, analytics=enabled And an email-ingested asset is in state=Approved When a user creates a share link for the asset via UI or API Then the share link expiry is set to exactly 7 days from creation timestamp And download is disabled on the share And link analytics is enabled on the share And the share record references the source asset ID and origin=email_ingest And attempts to create a share for a quarantined asset are rejected with error code QUARANTINED
Analytics and Auditing for Email-Ingested Assets and Shares
Given analytics is enabled at the project level When a recipient opens a share link created from an email-ingested asset Then analytics events link_open, preview_play, and download (if enabled) are recorded with recipient identifier, timestamp, IP, and user agent And events are visible via analytics UI/API within 5 minutes of occurrence And the asset's audit trail includes source email sender address and message-id, policy decisions, derivative generation outcomes, quarantine/approval actions, and share creations referencing project defaults
Inbound Reliability, Security, and Abuse Controls
"As a security-conscious admin, I want inbound email handling to be safe and reliable so that malicious or spammy messages don’t compromise our workspace."
Description

Use a robust inbound email gateway (e.g., SES/SendGrid inbound) with high availability. Validate message authenticity signals, perform antivirus/antimalware scanning, ZIP bomb and archive traversal protection, spam filtering, rate limiting, block/allow lists, and per-alias throttles. Provide comprehensive processing logs, message retention windows, and administrative tools for quarantining, releasing, or rejecting messages with reasons.

Acceptance Criteria
Authenticate Inbound Email (SPF/DKIM/DMARC)
- Given an inbound email is received by the gateway When DMARC alignment passes via DKIM or SPF Then the message is accepted for processing and the Auth-Results, SPF, and DKIM outcomes are logged with the message record
- When DMARC alignment fails and the tenant policy is reject Then the message is rejected with SMTP 550 and the reason is recorded
- When DMARC alignment fails and the tenant policy is quarantine Then the message is quarantined and the reason is recorded
- And for all accepted messages, the From, Return-Path, Message-ID, and Auth-Results headers are persisted
Antivirus and Antimalware Scanning
- Given an inbound email contains one or more attachments When the antivirus engine detects a malicious payload (e.g., EICAR) Then all infected attachments are quarantined and the email is not ingested into any project/version
- And the scan verdict, signature name, engine version, and scan timestamp are recorded in logs
- When no malware is detected Then attachments proceed to the next processing stage and the clean verdict is logged
Archive Bomb and Path Traversal Protection
- Given an inbound attachment is an archive (ZIP/RAR/7z/tar.*) When the calculated uncompressed size exceeds the configured max_uncompressed_bytes, or the compression ratio exceeds the configured max_compression_ratio, or the nesting depth exceeds the configured max_nesting_depth Then extraction is aborted, the attachment is quarantined, no partial files are stored, and the reason is recorded
- When archive entries contain path traversal patterns (../ or absolute paths) Then those entries are rejected and sanitized paths are enforced, or the attachment is quarantined per policy, and the decision is logged
Spam Filtering and Block/Allow Lists
- Given the inbound email has a computed spam score When the score is >= the configured reject threshold Then the email is rejected with SMTP 550 and the reason and score are logged
- When the score is between the quarantine and reject thresholds (inclusive of the lower bound) Then the email is quarantined and the reason and score are logged
- When the sender matches a configured allow list (email or domain) Then spam filtering is bypassed but all security checks still apply, and the allow-list match is logged
- When the sender matches a configured block list (email, domain, or IP) Then the email is rejected with SMTP 550 and the block-list rule is logged
Global and Per-Alias Rate Limiting and Throttles
- Given inbound emails are received for a specific project/version alias When the number of messages from a single sender exceeds the configured per-sender rate limit within the sliding window Then additional messages are temporarily rejected with SMTP 451 and a Retry-After value, and the throttle decision is logged with counters
- When the number of messages to the alias exceeds the configured per-alias rate limit within the window Then additional messages are temporarily rejected with SMTP 451 and a Retry-After value, and the throttle decision is logged with counters
- And burst capacity up to the configured burst limit is allowed before throttling, and counters reset per window
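The per-sender and per-alias throttles could share one sliding-window limiter, keyed by sender address or alias. A minimal in-memory sketch (limits and window illustrative; a multi-node gateway would back this with shared storage):

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Per-key sliding-window limiter; keys can be a sender address
    or an alias. Limit/window values are illustrative."""
    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.events = defaultdict(deque)

    def allow(self, key: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.events[key]
        while q and now - q[0] > self.window:
            q.popleft()                 # expire events outside the window
        if len(q) >= self.limit:
            return False                # caller responds SMTP 451 + Retry-After
        q.append(now)
        return True

    def retry_after(self, key: str, now: float) -> float:
        """Seconds until the oldest in-window event expires."""
        q = self.events[key]
        return max(0.0, self.window - (now - q[0])) if q else 0.0
```

A gateway would typically run two instances (per-sender and per-alias) and reject with SMTP 451 when either denies the message.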
High Availability, Retry Handling, and Idempotency
- Given provider A is unavailable or returns transient failures When messages are sent to project/version aliases during the outage Then the system accepts via an alternative inbound route or returns SMTP 4xx to trigger retries, and no messages are lost
- And upon recovery, all deferred messages are processed within the configured recovery SLA, and the failover event is logged
- And idempotency ensures duplicate deliveries of the same Message-ID (or identical content hash) do not create duplicate uploads or notes
Processing Logs, Retention, and Quarantine Administration
- Given inbound processing occurs Then structured logs are stored containing timestamp, tenant/project/version alias, sender identity, message-id, authentication results, spam score, AV verdicts, archive checks, throttle decisions, and final outcome
- And logs are queryable by time range, alias, sender, and outcome and are exportable in a standard format
- And quarantined messages are retained for the configured retention window and automatically expired thereafter, with expirations logged
- And an admin can list quarantined messages, view reasons, and release, reprocess, or delete them; all actions are audit-logged with actor, timestamp, and reason
- And on release or reprocess, standard file gates (size/type/extension) are enforced identically to manual uploads
Notifications & Sender Feedback Loop
"As a project owner, I want timely notifications and useful sender receipts so that I can act quickly and my collaborators know their files were received."
Description

Notify project owners and watchers when new assets arrive via Email Drop with in-app alerts, email summaries, and optional Slack/webhook events. Send automatic, branded acknowledgements to the original sender on success, and clear failure notices on rejection (e.g., oversized, blocked type) with guidance to resolve. Offer daily/weekly digests and per-alias notification preferences.

Acceptance Criteria
Notify owners and watchers on successful Email Drop ingest
Given a project/version with an active Email Drop alias, at least one owner, and at least one watcher configured, and in-app and email notifications enabled for the alias When an email with one or more attachments is received and all attachments pass file gates and are ingested Then each owner and watcher receives an in-app notification within 60 seconds that includes: project name, version label, sender email, number of files ingested, and a "Review Uploads" link to the project/version activity And each owner and watcher receives an email notification within 2 minutes containing the same details and the file names and sizes And no recipient receives more than one notification per channel for the same email (deduplicated by Message-ID)
Automatic branded sender acknowledgement on success
Given workspace branding (name, logo, colors) is configured and the Email Drop alias is active When all attachments in an incoming email are accepted and ingested Then the original sender receives an acknowledgement email within 2 minutes from the alias address with subject "Received: {Project} — {Version}" And the body lists accepted file names and sizes, received timestamp (UTC), and a unique reference ID And the email uses workspace branding (logo, brand name, footer) and includes a non-reply disclaimer and support contact And no private links requiring authentication are exposed in the message
Sender rejection notice with resolution guidance
Given file gates (max size, allowed types) are enforced for the Email Drop alias When one or more attachments in an incoming email are rejected during ingest Then the original sender receives a rejection email within 2 minutes listing each rejected file with its size and a specific reason (e.g., exceeds size limit, blocked file type) And the message provides guidance to resolve, including the current size limit, allowed types, and a link to help documentation And if some attachments were accepted, the notice clearly states that accepted files were received and lists them separately And the email includes a unique reference ID and a support contact And duplicate rejection notices are not sent for the same email event
Per-alias notification preferences and channel routing
Given a user is an owner or watcher on a project with an Email Drop alias And per-user, per-alias notification preferences exist for channels (in-app, email, Slack, webhook) and email digest frequency (immediate, daily, weekly) When the user updates their preferences for that alias Then only the enabled channels deliver notifications for that user for future events on that alias And preference changes take effect within 5 minutes and persist across sessions And workspace default preferences apply when the user has no explicit alias-specific settings
Daily and weekly digest summaries
Given a recipient has email digests enabled for an Email Drop alias with frequency set to daily or weekly When the digest window elapses (daily at 09:00 in the recipient's timezone; weekly on Monday at 09:00) Then the recipient receives a single email summarizing all Email Drop events for that alias during the period, including counts of accepted files, rejected files (with reasons), unique senders, and links to activity And events already included in a prior digest are not repeated And if there were no events in the period, no digest is sent
Slack and webhook events on new Email Drop activity
Given Slack is connected to a channel for the project alias and a webhook endpoint with a signing secret is configured When a new Email Drop ingest occurs (success or rejection) Then a Slack message is posted within 60 seconds including project, version, sender, count of accepted and rejected files, and a link to the activity And a webhook POST is sent within 60 seconds with JSON payload containing event_type (email_drop.ingest), project_id, version_id, alias_id, sender, and an attachments array with name, size, status, and reason (if rejected), and an event_id And the webhook includes a signature header and idempotency key, and retries up to 3 times with exponential backoff on non-2xx responses
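The signature header and idempotency key might be produced as below. The header names and the hex-HMAC-SHA256-over-raw-body scheme are assumptions; the criteria require only that a signature header and an idempotency key exist:

```python
import hashlib, hmac, json, uuid

def build_webhook_request(payload: dict, signing_secret: str) -> dict:
    """Sketch of a signed webhook POST. Header names and the
    sha256= HMAC scheme are illustrative conventions."""
    body = json.dumps(payload, separators=(",", ":"), sort_keys=True).encode()
    signature = hmac.new(signing_secret.encode(), body, hashlib.sha256).hexdigest()
    return {
        "headers": {
            "Content-Type": "application/json",
            "X-IndieVault-Signature": f"sha256={signature}",   # hypothetical name
            "X-Idempotency-Key": payload.get("event_id", str(uuid.uuid4())),
        },
        "body": body,
    }

def verify_webhook(body: bytes, header_value: str, signing_secret: str) -> bool:
    """Receiver side: recompute the HMAC over the raw body and compare
    in constant time."""
    expected = hmac.new(signing_secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(header_value, f"sha256={expected}")
```

Reusing `event_id` as the idempotency key lets receivers safely deduplicate the up-to-3 retries the criteria describe.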

HashLock Binding

Bind every signature to the exact cryptographic hash of the shipped files (audio, artwork, lyrics, docs). If any byte changes, the consent auto‑invalidates and signers are notified. Eliminates wrong‑version approvals and delivers audit‑proof, version‑specific consent.

Requirements

Deterministic File Hashing Engine
"As an indie artist manager, I want IndieVault to fingerprint every file and release bundle deterministically so that approvals and deliveries are tied to the exact bytes we shipped."
Description

Compute immutable, deterministic cryptographic hashes for every asset (audio, artwork, lyrics, documents) and for each release bundle at ingest, on finalize, and before distribution. Hash the exact bytes stored (no transcoding or metadata mutation) using SHA-256 as the default algorithm, with internal abstraction to support future algorithms. Produce a per-file hash list and a bundle-level root hash derived from a signed manifest (manifest.json) containing file paths, sizes, content-type, and hashes; compute a Merkle root to represent the exact version of the bundle. Support streaming, chunked hashing for large files and multi-part uploads; verify integrity post-upload and on download. Persist hashes and manifest metadata in the database and as immutable object metadata; prevent any workflows from altering bytes once a manifest is finalized. Expose fingerprints in UI and API, and ensure all downstream operations (review links, watermarking, zip packaging) reference the manifest so any byte change results in a new root hash.

Acceptance Criteria
Per-File SHA‑256 Hashing at Ingest, Finalize, and Pre‑Distribution
- Given an asset is uploaded via UI or API, When the upload completes, Then the system computes the SHA‑256 by streaming the exact stored bytes (no transcoding or metadata mutation) and records a 64‑char lowercase hex digest. - Given hashing completes, Then the digest is persisted in the database and as immutable object metadata; any attempt to update these stored values is rejected and audited. - Given finalize or pre‑distribution is triggered for an asset, When the system recomputes the hash from storage, Then it must exactly match the stored digest; otherwise the operation is blocked, the asset is flagged integrity_failed, and an audit log entry is created. - Given two assets with byte‑identical content, When hashed, Then their SHA‑256 values are identical; Given any byte differs, Then their SHA‑256 values differ.
Signed Manifest and Deterministic Merkle Root for Bundle Finalization
- Given a release bundle is finalized, When manifest.json is generated, Then it contains for each file: canonical relative path, size in bytes, content‑type, and SHA‑256; entries are sorted deterministically (lexicographic by path) and the manifest has a stable byte order. - Given a manifest is generated, When the Merkle root is computed over the ordered list of per‑file SHA‑256 values, Then the root is persisted in the database, stored as immutable object metadata, and embedded in the manifest. - Given the manifest is produced, When it is signed with the platform private key, Then signature verification with the platform public key succeeds. - Given any file is added, removed, renamed, or any byte changes, When a new manifest is generated, Then the Merkle root differs from the prior root; Given the file set and bytes are unchanged, Then the root is identical across regenerations.
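One way to derive the bundle root from the ordered per-file hashes is a binary Merkle tree. Duplicating the last node on odd-sized levels and hashing concatenated raw digests are assumptions here; the criteria fix only the deterministic leaf order (lexicographic by path):

```python
import hashlib

def merkle_root(leaf_hex_hashes) -> str:
    """Binary Merkle tree over the ordered per-file SHA-256 values."""
    level = [bytes.fromhex(h) for h in leaf_hex_hashes]
    if not level:
        # Empty-manifest root is an arbitrary convention for this sketch.
        return hashlib.sha256(b"").hexdigest()
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])     # duplicate last node on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()

def manifest_leaves(manifest_entries):
    # Deterministic order: lexicographic by canonical relative path.
    return [e["sha256"] for e in sorted(manifest_entries, key=lambda e: e["path"])]
```

Because the leaves are sorted by path before hashing, regenerating the manifest from an unchanged file set reproduces the same root, while any add, remove, rename, or byte change alters at least one leaf and therefore the root.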
Streaming and Multipart Upload Hash Consistency
- Given a large file (>2 GB) is uploaded via streaming, When hashing is performed, Then the system processes bytes in chunks without loading the full file into memory and produces the correct SHA‑256 digest. - Given a multipart upload completes, When parts are assembled, Then the SHA‑256 of the assembled object equals the SHA‑256 computed by streaming the final object end‑to‑end; if not, the upload is rejected, the object is discarded, and an audit log entry is created. - Given an interrupted multipart upload is resumed, When completed, Then the final SHA‑256 matches the hash of the original bytes as provided by the client; otherwise the upload fails with integrity_mismatch.
Integrity Verification on Post‑Upload and Before/On Download
- Given an asset upload completes, When post‑upload verification runs, Then recomputing the SHA‑256 from storage matches the stored digest; if it does not, the asset is quarantined, flagged integrity_failed, and not available for workflows. - Given a client requests a file download (UI, review link, or API), When the server streams the object, Then it verifies the streamed bytes’ SHA‑256 equals the stored digest; on mismatch, the download is aborted, a 409 integrity_mismatch error is returned, and an audit log is written. - Given a client requests a bundle ZIP, When packaging, Then each file’s hash is checked against manifest.json; any mismatch aborts packaging with error bundle_integrity_failed; on success, manifest.json and the Merkle root are included in the ZIP.
Immutability Enforcement After Manifest Finalization
- Given a bundle’s manifest is finalized, When any workflow attempts to modify underlying object bytes (e.g., in‑place transcoding, watermarking, or metadata rewriting), Then the operation is blocked; a new version must be created under a new manifest, yielding a new Merkle root. - Given downstream operations (review links, watermarking outputs, and ZIP packaging) are executed, When they access assets, Then they read from the finalized manifest’s file list and hashes; any byte difference results in a new manifest and root, and the operation only proceeds against the new version after explicit finalize. - Given object metadata fields marked immutable (per‑file hash, manifest reference, merkle_root), When an update is attempted, Then the system rejects the change and records an audit entry.
Fingerprint Exposure in UI and API
- Given a user opens an asset detail page, When it loads, Then the per‑file SHA‑256 (64‑hex) and algorithm label are displayed and copyable; for assets in a bundle, the bundle Merkle root is also displayed and matches stored values. - Given an API client requests GET /assets/{id}, When the response is returned, Then it includes hash.algorithm ("SHA-256"), hash.value (64‑hex), size, and content_type fields; Given GET /bundles/{id}, Then it includes manifest_url, merkle_root, and signature fields; all conform to the published OpenAPI schema. - Given a client requests the manifest URL, When manifest.json is downloaded, Then it is byte‑for‑byte identical to the stored manifest and signature verification succeeds.
Algorithm Abstraction with Default SHA‑256 and Extensibility
- Given no algorithm is specified, When hashes are computed and exposed, Then SHA‑256 is used by default and the algorithm field is set to "SHA-256" in DB, object metadata, manifest, UI, and API. - Given an API client requests hashing with an unsupported algorithm (e.g., SHA‑512) via an optional parameter, When processed, Then the API returns HTTP 400 with error code unsupported_hash_algorithm and performs no state changes. - Given a pluggable algorithm provider for SHA‑512 is enabled in a test environment, When computing the hash for the NIST test vector "abc", Then the digest equals 0xddaf35a193617abacc417349ae20413112e6fa4e89a97ea20a9eeee64b55d39a2192992a274fc1a836ba3c23a3feebbd454d4423643ce80e2a9ac94fa54ca49f and is persisted/exposed without any changes to public APIs or schemas.
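The algorithm abstraction could be a small provider registry with SHA-256 as the unconditional default; the names (`compute_hash`, `enable_algorithm`) are hypothetical, and the SHA-512 check below uses the same NIST "abc" test vector cited in the criteria:

```python
import hashlib

# Minimal provider registry; SHA-256 is the default, others are opt-in.
_PROVIDERS = {"SHA-256": hashlib.sha256}

def enable_algorithm(name: str, factory) -> None:
    """Register a pluggable hash provider (e.g., in a test environment)."""
    _PROVIDERS[name] = factory

def compute_hash(data: bytes, algorithm: str = "SHA-256") -> str:
    factory = _PROVIDERS.get(algorithm)
    if factory is None:
        # The API layer would map this to HTTP 400 unsupported_hash_algorithm.
        raise ValueError("unsupported_hash_algorithm")
    return factory(data).hexdigest()

# SHA-512 enabled only where configured; digest matches the NIST "abc" vector.
enable_algorithm("SHA-512", hashlib.sha512)
assert compute_hash(b"abc", "SHA-512").startswith("ddaf35a193617aba")
```

Because callers always go through `compute_hash` with an explicit algorithm label, adding a provider changes no public API or schema, as the criteria require.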
Hash-bound Consent Signature
"As a signer, I want my approval to be bound to the exact version of files I reviewed so that my consent can’t be reused for a different or modified version."
Description

Bind all e-sign consents to the exact hash manifest of the shipped assets. During signing, present signers with the manifest root hash and human-readable file list; upon acceptance, generate a consent record that embeds: manifest root hash, algorithm, file inventory hash list, signer identity and attestations, timestamps, and jurisdiction. Prevent signing if the current working set is not finalized; enforce that consent references a single immutable manifest version. Store consent records immutably with tamper-evident audit trails and cryptographic proof linking the consent to the manifest hash. Integrate with expiring review links and watermarking so recipients sign knowing exactly what is being approved. Surface consent status at release and file levels and block distribution workflows unless a valid hash-bound consent exists.

Acceptance Criteria
Signer Views and Verifies Manifest Root Hash and File Inventory
Given a finalized manifest exists for a release And a signer opens the signing screen via a review link When the signing screen loads Then the manifest root hash and hash algorithm are displayed And the displayed manifest root hash equals the stored manifest root hash for the release And a human-readable file inventory is displayed including filename, size (bytes), and type for each file And the Sign action remains disabled until the signer explicitly confirms the manifest root hash And the confirmation and displayed values are captured in the signing event log
Signing Disabled Until Working Set Is Finalized
Given a release with a working set When the working set is not finalized and a user attempts to initiate a signing session Then signing is prevented and the UI displays "Finalize assets to generate a signable manifest" And no consent record, envelope, or draft signature is created When the working set is finalized and a user attempts to initiate a signing session Then a signing session is created bound to the finalized manifest version ID And the session cannot be rebound to a different manifest
Consent Record Generation and Required Fields
Given a signer approves the manifest and submits a signature When the signature is processed Then a consent record is created containing: manifest root hash, hash algorithm, file inventory hash list, immutable manifest version ID, signer identity (name, email, unique user ID), signer attestations, timestamps (ISO 8601 UTC), jurisdiction, signing method, and signature/envelope ID And the consent record references exactly one manifest version ID And the consent record is retrievable via API and UI within 2 seconds of submission And the consent record passes JSON schema validation and digital signature verification
Immutable Storage and Tamper-Evident Audit Trail
Given a consent record is stored When retrieving the audit trail for that consent Then the record resides in an append-only store with a verifiable chain hash linking entries And a cryptographic proof links the consent record to the exact manifest root hash And any attempted modification produces a verification failure via the audit verification endpoint And verifying the consent against the manifest returns status "valid" for unaltered records
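The "verifiable chain hash linking entries" can be realized as a simple hash chain in which every entry commits to its predecessor, so any mutation breaks every later link. A sketch with assumed field names:

```python
import hashlib
import json

def chain_append(log: list[dict], event: dict) -> dict:
    """Append an event whose chain_hash commits to the previous entry."""
    prev = log[-1]["chain_hash"] if log else "0" * 64  # genesis value
    body = json.dumps(event, sort_keys=True)
    entry = {"event": event, "prev": prev,
             "chain_hash": hashlib.sha256((prev + body).encode()).hexdigest()}
    log.append(entry)
    return entry

def chain_verify(log: list[dict]) -> bool:
    """Recompute every link; returns False on any tampered or reordered entry."""
    prev = "0" * 64
    for e in log:
        body = json.dumps(e["event"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if e["prev"] != prev or e["chain_hash"] != expected:
            return False
        prev = e["chain_hash"]
    return True
```

An audit verification endpoint would run the equivalent of `chain_verify` and report the first failing entry.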
Auto-Invalidation and Notifications on Asset Change
Given a valid consent exists for manifest version V of a release And any asset byte changes producing manifest version V+1 When the system recalculates the manifest root hash Then the prior consent status is set to "Invalidated" And all signers are notified within 5 minutes with the new manifest root hash and a link to review/sign the updated manifest And distribution actions for the release remain blocked until a new valid consent exists for version V+1
Review Link and Watermark Integration
Given a reviewer opens an expiring review link bound to manifest version V When previewing assets Then watermarks are applied per configuration to all previewable media And the review link and signing screen both display the same manifest root hash V And the review link expires at the configured time and cannot be used to initiate or complete signing after expiry And any attempt to sign against a manifest different from V is rejected with error "Manifest mismatch" and logged
Consent Status Surfacing and Distribution Gate
Given a release contains multiple files with varying consent states When viewing the release dashboard and file details Then consent status is displayed at the release level (Valid, Invalidated, Missing) and at each file level And selecting a status reveals signer identities, timestamps, jurisdiction, and manifest root hash And calling GET /releases/{id}/consent returns the same statuses and metadata And initiating distribution (publish/export) is blocked unless the release-level status is "Valid" with a non-expired hash-bound consent
Automatic Consent Invalidation & Re-sign Flow
"As a project owner, I want consents to auto-invalidate when anything changes so that we never ship using approvals from the wrong version."
Description

Automatically invalidate existing consents when any file or metadata affecting the manifest changes. On asset replacement, addition, removal, or byte-level change, recompute the manifest root hash, mark prior consents as invalid, record the cause and superseding manifest, and require re-approval. Provide a guided re-sign flow that highlights differences between manifests (added/removed/changed files, size and hash deltas) and preserves traceability by linking invalidated consents to new ones. Enforce release gating so that publishing, distribution, and link sharing are blocked until a valid consent exists for the current manifest. Log all state transitions in an immutable audit timeline.

Acceptance Criteria
Asset Byte-Level Change Triggers Consent Invalidation
Given a release has a manifest M1 with rootHash H1 and at least one Valid consent bound to H1 And asset "mix_v3.wav" is part of M1 When the file bytes of "mix_v3.wav" are replaced or changed Then the system computes a new manifest M2 with rootHash H2 where H2 != H1 And all consents bound to H1 are set to status=Invalidated with cause="Asset changed: mix_v3.wav" and supersededByManifest=H2 And a re-approval request for M2 is created with status=Pending and targets the previous signers And publish, distribute, and share actions for M2 are blocked until a Valid consent exists for H2
Manifest-Affecting Metadata Change Triggers Consent Invalidation
Given a release has manifest M1 with rootHash H1 and manifest-affecting metadata (e.g., track order, ISRC, credits.json) When any manifest-affecting metadata value is updated Then the system computes a new manifest M2 with rootHash H2 where H2 != H1 And all consents bound to H1 are set to status=Invalidated with cause listing the changed field names And the invalidation record stores previousValues and newValues for changed fields And a re-approval request for M2 is created with status=Pending and prior signers pre-selected And publish, distribute, and share actions are blocked for M2 until a Valid consent exists for H2
Guided Re-sign Flow Shows Manifest Diff and Hash Deltas
Given a signer opens the re-sign link for M2 created by invalidation of M1 Then the UI displays Added, Removed, and Changed file sections with filenames, sizes, and SHA-256 hashes And Changed files show before/after sizes and hashes (H1->H2) And previous rootHash (H1) and new rootHash (H2) are shown prominently And the Sign action remains disabled until the signer acknowledges the diff When the signer completes the signature Then a new consent C2 is recorded as Valid bound to H2 with supersedes=C1 and signer identity, timestamp, and signature artifact stored And per-recipient analytics record viewedAt and signedAt timestamps
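The Added/Removed/Changed sections of the re-sign diff reduce to a set comparison keyed by path. A sketch, assuming each manifest maps path to its hash and size:

```python
def manifest_diff(old: dict, new: dict) -> dict:
    """Compare two manifests of shape {path: {"hash": ..., "size": ...}}.
    Returns the added/removed paths and before/after entries for changes."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = [{"path": p, "before": old[p], "after": new[p]}
               for p in sorted(set(old) & set(new)) if old[p] != new[p]]
    return {"added": added, "removed": removed, "changed": changed}
```

The UI would render `before`/`after` as the H1→H2 hash and size deltas each changed row expands into.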
Notifications Sent on Invalidation with Re-sign CTA
Given consents bound to H1 are auto-invalidated due to a manifest change to H2 When invalidation occurs Then all prior signers receive email and in-app notifications within 60 seconds containing: release name, reason, top 10 changed files/fields with link to full diff, H1 and H2, and a unique re-sign link And each re-sign link is recipient-bound and expires after 7 days by default And notification delivery status (queued, sent, opened) is recorded per recipient And accessing an expired or tampered link returns 401/403 with guidance to request a new link
Release Gating Prevents Publish/Distribute/Share Without Valid Consent
Given the current manifest Mx has no Valid consent When a user attempts to Publish, Distribute, or Create Review Link for Mx Then the UI disables the actions with tooltip "Consent required for manifest Mx" and the API responds 423 Locked with errorCode=CONSENT_REQUIRED and manifestHash=Hx And existing share/review link creation endpoints reject with the same error until a Valid consent exists And once a Valid consent for Hx is recorded, the previously blocked actions become enabled within 5 seconds without page reload
Immutable Audit Timeline Records State Transitions and Traceability
Given a manifest changes from H1 to H2 and consents for H1 are invalidated and re-signed for H2 When these events occur Then the audit timeline appends events: ManifestUpdated, ConsentInvalidated, ReapprovalRequested, NotificationSent, ConsentSigned And each event stores: timestamp (UTC ISO-8601), actor/service id, resource ids, previousHash, currentHash, reason, changedFields, and correlationId And events are append-only; modification or deletion attempts are rejected and logged And viewing an invalidated consent shows supersededBy=consentId(H2); viewing the new consent shows supersedes=consentId(H1)
Change Notifications & Alerting
"As a signer, I want immediate, clear notifications when my consent is no longer valid so that I can review changes and re-approve without delays."
Description

Notify all relevant stakeholders when consents invalidate or re-sign is required. Deliver real-time, secure notifications to signers and team members via email, in-app alerts, and optional webhooks/Slack, including context (release, affected files, previous vs. new manifest hash) and a one-click path to review and re-sign. Implement batching, rate limiting, retries, internationalized templates, and per-recipient analytics (delivered, opened, clicked). Maintain a preferences model to respect user notification settings and ensure deliverability monitoring with bounce and spam handling.

Acceptance Criteria
Auto‑invalidation notifications on manifest hash change
- Given a consent bound to manifest hash H_prev, and a shipment update produces manifest hash H_new where H_new != H_prev
- When the system detects the mismatch
- Then the consent status changes to Invalid and notifications are generated for all signers and collaborators with access to the release
- And email and in‑app notifications are dispatched; webhook/Slack are dispatched only if configured
- And each notification includes: release title and ID, consent ID, affected files with change type (added/removed/modified), previous and new manifest hashes (H_prev, H_new), triggering user, and timestamp
- And each notification contains a one‑click "Review & Re‑Sign" link deep‑linking to the exact consent/version
- And if no additional changes occur, the first notification is sent within 90 seconds of detection; if changes continue within a batching window, a single aggregated notification is sent within 30 seconds after a 120‑second window closes
- And duplicate notifications for the same consent and event are suppressed within a 5‑minute deduplication window
- And per‑recipient events for queued/sent are recorded for analytics
One‑click re‑sign deep link security and flow
- Given a recipient clicks the "Review & Re‑Sign" link
- When the recipient is not authenticated
- Then a signed, single‑use token embedded in the link is validated and the recipient completes a passwordless verification flow
- And after verification the recipient lands on the consent review screen for the specific release and H_new
- And if the token is expired (older than 72 hours) or already used, the user is shown an expiration prompt and offered a secure reissue flow; no consent details are displayed
- And when the recipient is already authenticated, the link opens directly to the consent review with the re‑sign action available
- And all link clicks are attributed to the intended recipient and recorded as analytics events with timestamp and channel
Notification batching and rate limiting behavior
- Given multiple file changes occur for the same release within 120 seconds
- When invalidation events are raised
- Then the system emits a single aggregated notification per recipient summarizing all detected changes in that window
- And if more than 6 notifications would be sent to the same recipient for the same release within 60 minutes, subsequent notifications are consolidated into a digest sent at most every 30 minutes
- And consolidation notifications state the number of events summarized and include the latest H_prev/H_new pair and cumulative affected files list
- And system metrics expose counts of immediate vs batched notifications per release for observability
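The 120-second aggregation window amounts to bucketing timestamped change events so each bucket yields one notification. A simplified sketch (the real pipeline would also apply the per-recipient digest throttle):

```python
def batch_events(events: list[tuple[int, str]], window_s: int = 120) -> list[list[str]]:
    """Group (timestamp, change) events into aggregation windows.
    Each returned batch would become one aggregated notification."""
    batches, current, window_end = [], [], None
    for ts, change in sorted(events):
        if window_end is None or ts > window_end:
            if current:
                batches.append(current)
            current, window_end = [], ts + window_s  # open a new window
        current.append(change)
    if current:
        batches.append(current)
    return batches
```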
Internationalized notification templates and rendering
- Given a recipient has a preferred locale configured
- When a notification is generated
- Then the localized template for that locale is used; if unavailable, fallback to en‑US
- And variables render correctly (release title/ID, consent ID, H_prev, H_new, affected files) with no raw placeholders showing
- And right‑to‑left locales render with correct direction; all content is UTF‑8 encoded
- And links and buttons have accessible labels; in‑app alerts are screen‑reader navigable
- And template snapshots pass localization tests for at least en‑US, en‑GB, fr‑FR, es‑ES, de‑DE
Preferences model enforcement per recipient and channel
- Given a recipient has channel preferences set for event type "Consent invalidated / Re‑sign required"
- When notifications are generated
- Then only the channels the recipient has enabled are used for delivery; disabled channels are not used
- And if a recipient has all optional channels disabled, the event is still recorded and an in‑app alert is shown on next session
- And preference changes take effect within 60 seconds of update time for new notifications
- And each notification log records the preference snapshot applied (per channel) and the reason for any suppression
Secure webhook and Slack delivery with retries and idempotency
- Given a workspace has a webhook URL and/or Slack destination configured
- When a consent is invalidated or re‑sign is requested
- Then a JSON payload including release, consent ID, H_prev, H_new, affected files, event type, and timestamp is delivered
- And webhook requests include HMAC‑SHA256 signature and an Idempotency‑Key header unique per event and recipient
- And 5xx/timeouts are retried up to 5 times with exponential backoff starting at 30s; 2xx marks delivered; 4xx errors do not retry
- And Slack messages post to the configured channel; invalid channel or auth errors are marked failed and surfaced to admins
- And duplicate deliveries with the same Idempotency‑Key within 24 hours are not processed by receivers (verified via echo test)
Deliverability monitoring, bounces, spam handling, and analytics
- Given email delivery attempts are made
- When the provider reports a hard bounce or spam complaint
- Then the recipient is added to a suppression list for 7 days and the email channel is disabled for that recipient; an admin alert is created
- And soft bounces are retried up to 3 times over 12 hours before marking failed
- And per‑recipient analytics track queued, sent, delivered, opened, clicked, failed with provider message IDs and UTC timestamps
- And analytics are queryable by release, consent ID, recipient, and channel with data latency under 2 minutes
Audit-proof Evidence Package & Verification
"As label counsel, I want an exportable evidence pack proving signatures are tied to specific file hashes so that I can satisfy audits and resolve disputes quickly."
Description

Generate a portable, verifiable evidence package for each signed consent, containing the manifest.json, per-file hashes, manifest root hash, signed consent document, signer metadata (IP, user agent), timestamps, and an event timeline of state changes. Produce a sealed PDF summary and a machine-readable JSON bundle with a top-level checksum and optional Merkle proof, plus a verification URL and offline verification instructions. Store evidence packages in immutable storage with retention policies and allow on-demand export and third-party verification without requiring an IndieVault account. Ensure the package proves that the consent is bound to the exact shipped bytes and is audit-ready.

Acceptance Criteria
Evidence Package Generated on Consent Signature
Given a consent is successfully signed and finalized for a release, When the system processes the consent, Then it generates an evidence package containing: manifest.json; per-file SHA-256 hashes; manifest root hash; signed consent document; signer metadata (IP address, user agent); UTC timestamps; and a chronological event timeline of state changes. And the package includes a sealed PDF summary that opens with a valid digital signature indicator in standard PDF viewers. And the package includes a machine-readable JSON bundle with a top-level checksum and, when enabled, a Merkle proof. And the package includes a verification URL and offline verification instructions. And package generation completes within 60 seconds of signature finalization.
Binding Verification to Exact Shipped Bytes
Given the original shipped asset files, When their SHA-256 hashes are computed using the offline instructions, Then each hash exactly matches the corresponding per-file hash in manifest.json. And the computed manifest root hash matches the root hash recorded in the package. And when any file byte is altered, Then verification fails and reports the mismatched file(s) and root hash.
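Binding verification as described — recompute each file's SHA-256 and compare it to manifest.json — could look like the following; the manifest field names (`files`, `path`, `hash`) are assumptions about the bundle layout:

```python
import hashlib
import json
import pathlib

def verify_against_manifest(manifest_path: str) -> list[str]:
    """Return paths whose on-disk bytes no longer match the manifest hashes.
    An empty list means every shipped file matches exactly."""
    manifest = json.loads(pathlib.Path(manifest_path).read_text())
    mismatches = []
    for entry in manifest["files"]:  # assumed shape: [{"path": ..., "hash": ...}]
        actual = hashlib.sha256(
            pathlib.Path(entry["path"]).read_bytes()).hexdigest()
        if actual != entry["hash"]:
            mismatches.append(entry["path"])
    return mismatches
```

A single altered byte changes the file's digest, so the tampered path is reported and verification fails, which is the acceptance criterion above.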
Immutable Storage and Retention Enforcement
Given an evidence package is generated, When it is persisted, Then it is stored in immutable (WORM) storage with an active retention policy. And any attempt to modify or delete the package before retention expiry is rejected with HTTP 403 and an audit log entry is created. And the storage immutability state (e.g., object lock status or version ID) is retrievable via API for the package object.
On-Demand Export and Third-Party Access
Given a signed consent, When a user requests export, Then a single downloadable archive is produced containing the sealed PDF summary and JSON bundle. And the export can be shared via a verification URL that is publicly accessible without login and protected by TLS 1.2+. And the share link has a configurable expiry and displays the archive size and checksum prior to download. And downloading and verifying the archive does not require an IndieVault account.
Online Verification Workflow
Given a verifier opens the verification URL, When they upload the JSON bundle or input the package checksum, Then the verifier returns a Pass or Fail result with explicit reasons. And the verifier displays the manifest root hash, per-file hashes, signer metadata, timestamps, and event timeline as read-only. And any tampering (checksum mismatch, invalid Merkle proof, or invalid sealed PDF signature) is flagged with specific error messages.
Offline Verification Workflow
Given a fully offline environment, When the verifier follows the included offline instructions, Then they can compute per-file hashes and the manifest root hash and reproduce a Pass result without network access. And the offline instructions provide step-by-step commands for macOS, Windows, and Linux using built-in or open-source tools. And the JSON bundle integrity can be validated offline via a provided detached signature or checksum included in the package.
Event Timeline Integrity and Completeness
Given a standard consent lifecycle, When the evidence package is generated, Then the event timeline includes at minimum: consent created, invitation sent, invitation viewed, consent signed (per signer), evidence package generated, notifications sent, and any invalidation events. And each event contains an ISO 8601 UTC timestamp, actor (or system), IP address when available, and event type. And events are strictly ordered by timestamp and are read-only within the package.
HashLock API & Webhooks
"As a developer integrating IndieVault, I want APIs and webhooks for manifests and consents so that my systems can react to invalidations and verify approvals automatically."
Description

Expose secure, versioned API endpoints to retrieve current manifest, per-file hashes, consent status, and evidence packages, and to subscribe to lifecycle events. Provide webhooks for consent.signed, consent.invalidated, manifest.finalized, and manifest.updated with HMAC signatures, idempotency, retries, and delivery logs. Enforce OAuth2/API key auth, fine-grained scopes, and rate limits. Include pagination and filtering for releases and consents. Ensure API responses include algorithm identifiers and stable IDs for manifests so integrators can programmatically verify hash-bound consent before triggering distribution or marketing automations.

Acceptance Criteria
OAuth2 and API Key Authentication, Scopes, and Rate Limits
Given a client presents a valid OAuth2 access token with scopes read:manifests and read:consents, when it requests GET /v1/releases/{releaseId}/manifests/current, then the response is 200 and contains only fields authorized by those scopes. Given a token missing the read:consents scope, when it requests any /v1/consents endpoint, then the response is 403 with error.code=insufficient_scope and a WWW-Authenticate header listing required scopes. Given an expired or invalid token or API key, when any protected endpoint is called, then the response is 401 with error.code=invalid_token and a WWW-Authenticate header. Given an API key restricted to release {releaseId}, when it requests another release, then the response is 403 with error.code=forbidden. Given a client exceeds the configured rate limit, when it continues to call endpoints, then responses include 429 with Retry-After and X-RateLimit-Limit/Remaining/Reset headers; under the limit, headers reflect current usage.
Retrieve Versioned Manifest with Per-File Hashes and Algorithm Identifiers
Given a release with a current manifest, when GET /v1/releases/{releaseId}/manifests/current is called, then the response includes manifest_id (stable UUID), manifest_version, is_finalized, created_at, hash_algorithm, and a files array with file_id, path, byte_size, hash_algorithm, and hash for each file. Given v1 versioning via URL or Accept header, when the endpoint is called, then the response includes X-API-Version: v1 and the schema matches the v1 contract. Given the manifest has not changed, when the endpoint is called repeatedly, then manifest_id remains stable and ETag remains identical. Given the release has a finalized manifest, when the current manifest is retrieved, then is_finalized=true and the same manifest_id is returned across calls.
Webhook Subscription Management and Secret Rotation
Given a user with manage:webhooks scope, when POST /v1/webhooks is called with a target URL and event types [consent.signed, consent.invalidated, manifest.finalized, manifest.updated], then a subscription is created with webhook_id, secret, and status=active. Given a created subscription, when POST /v1/webhooks/{id}/rotate-secret is called, then a new secret is generated and both old and new secrets validate signatures for deliveries for 24 hours, after which only the new secret is valid. Given a subscription is disabled via PATCH /v1/webhooks/{id} with status=disabled, when events occur, then no deliveries are attempted and the subscription reflects status=disabled. Given a subscription exists, when POST /v1/webhooks/{id}/test is called, then a test event is delivered and the delivery is recorded with response_status and response_time_ms.
Consent Auto-Invalidation Reflected via API and Webhooks on File Change
Given a consent is signed for manifest_id A and file hashes X, when any file changes producing manifest_id B, then GET /v1/consents/{consentId} returns status=invalidated with reason=hash_mismatch and invalidated_at within 60 seconds of the change. Given the same consent, when the change occurs, then a consent.invalidated webhook is sent within 60 seconds containing consent_id, previous_manifest_id=A, current_manifest_id=B, and signer_ids. Given release-level queries, when GET /v1/consents?releaseId={id}&status=current is called, then only consents valid for the current manifest are returned and the invalidated consent is excluded. Given the manifest reverts to the original bytes (manifest_id A), when querying consents, then the previously invalidated consent remains invalid and a new consent (distinct consent_id) is required.
Evidence Package Retrieval for Auditable, Hash-Bound Consent
Given a valid consent_id, when GET /v1/consents/{id}/evidence is called, then the API returns 200 with a downloadable package (content-type=application/zip) containing a JSON evidence manifest, signer artifacts, timestamps, the signed file hashes with hash_algorithm, and manifest_id. Given the evidence package is generated, when the response is returned, then headers include Content-Length and X-Content-SHA256 of the ZIP for verification. Given a caller lacks read:evidence scope, when the endpoint is called, then the response is 403 with error.code=insufficient_scope. Given an invalid or nonexistent consent_id, when the endpoint is called, then the response is 404 with error.code=not_found.
Webhook Delivery Guarantees: HMAC, Idempotency, Retries, and Delivery Logs
Given a webhook event is emitted, when it is delivered, then headers include X-Webhook-Event-Id, X-Webhook-Timestamp, and X-Webhook-Signature (HMAC-SHA256 of the raw body using the current secret) and the payload includes event_type, event_id, created_at, and resource identifiers. Given the receiver returns any 5xx or times out after 10 seconds, when delivery is attempted, then retries occur with exponential backoff over at least 8 attempts within 24 hours; each attempt keeps the same X-Webhook-Event-Id. Given deliveries occur, when GET /v1/webhooks/{id}/deliveries is called, then delivery logs list each attempt with attempt_number, status, response_status, response_time_ms, and last_attempt_at. Given the receiver returns 2xx, when delivery succeeds, then no further retries occur and the delivery is marked delivered=true.
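A receiver can validate X-Webhook-Signature exactly as specified — HMAC-SHA256 of the raw body under the current secret — using a constant-time comparison, and can honor X-Webhook-Event-Id idempotency with a simple seen-set (a durable store in practice):

```python
import hashlib
import hmac

def verify_webhook(raw_body: bytes, signature_header: str, secret: bytes) -> bool:
    """Check X-Webhook-Signature: hex HMAC-SHA256 of the raw request body."""
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

seen_event_ids: set[str] = set()  # illustrative; use a persistent store in practice

def accept_once(event_id: str) -> bool:
    """Process each X-Webhook-Event-Id at most once, even across retries."""
    if event_id in seen_event_ids:
        return False
    seen_event_ids.add(event_id)
    return True
```

Because retries keep the same event ID, `accept_once` makes redelivered attempts harmless on the receiver side.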
Pagination and Filtering for Releases and Consents Endpoints
Given there are more than 25 items, when GET endpoints are called without a limit, then results are limited to 25 items with a next_cursor token; when a valid cursor is provided, the next page of results is returned. Given a client provides limit=100, when the endpoint is called, then at most 100 items are returned; any limit above 100 is clamped to 100. Given filters releaseId, manifestId, status, and date range (created_at_since/until), when provided, then only matching items are returned; invalid filters yield 400 with error.code=invalid_filter and details for each rejected parameter. Given no sort parameter is provided, when results are returned, then items are ordered by created_at descending consistently across pages.
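A client can drain the cursor-paginated listings above with a simple loop; the page shape (`items`, `next_cursor`) follows the criteria, while the transport is stubbed out as a callable:

```python
from typing import Callable, Optional

Page = dict  # {"items": [...], "next_cursor": Optional[str]}

def list_all(fetch_page: Callable[[Optional[str], int], Page],
             limit: int = 100) -> list:
    """Follow next_cursor until exhausted. The server clamps limit to 100,
    so requesting more than 100 per page is harmless."""
    items, cursor = [], None
    while True:
        page = fetch_page(cursor, min(limit, 100))
        items.extend(page["items"])
        cursor = page.get("next_cursor")
        if not cursor:
            return items
```

Since results are consistently ordered by created_at descending, pages concatenate without duplicates or gaps as long as the cursor token is passed back unchanged.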
UI Binding Indicators & Release Gates
"As an indie artist, I want clear visual cues and guardrails around consent status so that I don’t accidentally ship the wrong version."
Description

Provide clear UI indicators across asset, release, and link-sharing surfaces that show HashLock status: HashLock Bound badge with copyable fingerprints, consent validity state, and last-updated timestamps. Display warnings and diffs when the working set diverges from the last signed manifest. Gate critical actions (publish, distribute, share review links) behind valid hash-bound consent; explain blockers with actionable guidance. Offer quick actions to finalize manifests, request signatures, or trigger re-sign, and maintain a version history timeline for transparency.

Acceptance Criteria
HashLock Badge & Fingerprints Visible on All Surfaces
Given an asset, release, or review link is bound to a valid hash-based manifest When the page is rendered Then a visible "HashLock Bound" badge appears adjacent to the item title on asset, release, and link-sharing views And the algorithm label (e.g., SHA-256) and truncated fingerprint are displayed with a control to reveal the full hash And a copy-to-clipboard control copies the exact full manifest hash to the clipboard And a last-updated timestamp is displayed in absolute and relative formats and reflects the item’s current timezone setting And the displayed fingerprint and copied value exactly match the stored manifest hash for the item
Consent Validity State Updates on Content Change
Given a signed manifest exists for a release When any file within the release’s working set is modified, added, or removed such that its hash differs from the signed manifest Then the consent state indicator switches to "Invalid" with a red status chip across asset, release, and link-sharing views And a contextual banner appears explaining that the working set diverged from the last signed manifest and provides a link to view the diff And the invalid state persists until a new manifest is finalized and fully re-signed
Divergence Warning with File-level Diff and CTAs
Given the working set diverges from the most recent signed manifest When the user opens the release or asset detail view Then a prominent warning banner is displayed above primary actions And a diff panel lists added, modified, and removed items with old/new hashes and file sizes for each changed file And per-file rows can be expanded to reveal hash changes and path changes And quick actions "Finalize Manifest" and "Trigger Re-sign" are visible and enabled for authorized users
Gated Publish/Distribute/Share Behind Valid Consent
Given the current release lacks valid consent bound to the exact manifest hash When the user attempts to Publish, Distribute, or Create a Review Link Then the action is blocked and a gate modal explains the specific blockers (e.g., missing signatures, invalidated manifest) And the modal enumerates required signers and pending steps And an actionable button "Request Signatures" is available to proceed And once valid consent exists for the exact manifest hash, the blocked actions become enabled without changing the manifest
Finalize Manifest and Request Signatures Quick Actions
Given there are uncommitted changes in the working set When the user clicks "Finalize Manifest" Then a new manifest is created with deterministic file ordering and a computed SHA-256 hash displayed to the user And the previous manifest remains accessible in history When the user clicks "Request Signatures" Then a signer selection dialog appears prefilled with required roles/participants And sending requests logs an auditable event and updates consent status to "Pending Signatures"
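"Deterministic file ordering and a computed SHA-256 hash" implies a canonical serialization; one common approach (an assumption here, not IndieVault's documented algorithm) is sorted paths plus compact, key-sorted JSON:

```python
import hashlib
import json

def finalize_manifest(files: dict[str, str]) -> dict:
    """files maps path -> per-file SHA-256 hex digest.
    Sorted paths + canonical JSON make the root hash reproducible:
    the same working set always finalizes to the same hash."""
    ordered = [{"path": p, "hash": files[p]} for p in sorted(files)]
    canonical = json.dumps(ordered, sort_keys=True, separators=(",", ":"))
    return {"files": ordered,
            "root_hash": hashlib.sha256(canonical.encode()).hexdigest()}
```

Reproducibility is what lets consent be bound to a hash rather than to a mutable folder: any byte change in any file shifts its per-file digest and therefore the root.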
Version History Timeline Transparency
Given a release has at least two manifest versions When the user opens the Version History timeline Then each entry shows version identifier, created timestamp, author, manifest hash, and consent state (Valid, Pending, Invalid) And selecting two entries displays a summarized diff (added/modified/removed counts) with option to drill into file-level changes And invalidations caused by content changes appear as timeline events with reasons linked to the affected manifest And the timeline supports filtering by event type (Finalized, Request Sent, Signed, Invalidated)

Quorum Rules

Set flexible approval thresholds—unanimous, role‑based, percentage by split, or minimum signer count—per track or release. IndieVault blocks export until the rule is met, with a live progress bar and clear blockers, so releases don’t slip or go out under‑approved.

Requirements

Quorum Rule Builder (Per Asset)
"As an indie manager, I want to configure approval thresholds per track or release so that the right people must sign off before anything can be exported."
Description

Provide UI and API to define approval thresholds per track or release: unanimous, role‑based, percentage by split, or minimum signer count. Allow selecting a rule type, assigning target roles or split sources, previewing the rule’s effect based on current collaborators, and saving a rule snapshot with versioning. Support templates (e.g., “Unanimous for Singles”, “51% by Master Split”) and inheritance from release to tracks with the ability to override at the track level. Validate configurations to prevent conflicts (e.g., missing roles, splits not totaling 100%, or contradictory thresholds) and surface errors inline. Persist rules in a way that downstream services can evaluate deterministically across environments.

Acceptance Criteria
Create Unanimous Approval Rule (Per Asset)
Given a track or release with N collaborators and the Quorum Rule Builder open, When the user selects the rule type "Unanimous" and clicks Save, Then the API responds 201 and persists a rule with type=UNANIMOUS, version incremented by 1, snapshot of collaboratorIds/roles at save time, and a contentHash. And Then GET /assets/{assetId}/quorum-rule returns the saved rule with {type: UNANIMOUS, version, requiredApprovals: N, snapshot, contentHash, createdAt}. And Then the UI preview panel displays "0 of N approvals required" and indicates that export will be blocked until all N approvals are recorded. Given the user applies the "Unanimous for Singles" template, When they save without edits, Then the persisted rule includes source=template with templateId set and matches the template’s parameters.
Configure Role-Based Rule with Multiple Roles
Given the roles Artist and Manager exist and have at least one assigned collaborator each, When the user selects rule type "Role-Based", chooses roles [Artist, Manager], sets perRoleMin=1, and saves, Then the API responds 201 and persists {type: ROLE_BASED, roles:[Artist,Manager], perRoleMin:1, snapshot of role memberships, version, contentHash}. And Then the UI preview shows requiredByRole: {Artist: ">=1", Manager: ">=1"} and "0 of 2 roles satisfied". Given any selected role has zero assigned collaborators, When attempting to save, Then Save is disabled and an inline error reads "Selected role has no collaborators"; If submitted via API, Then a 422 is returned with fieldErrors.roles[]. Given a role has fewer members than perRoleMin, When saving, Then Save is disabled and an inline error reads "Unsatisfiable threshold for role"; API returns 422 with code UNSATISFIABLE_THRESHOLD.
Define Percentage-by-Split Rule Using Master Splits
Given the asset has a Master split sheet that sums to 100%, When the user selects rule type "Percentage by Split", chooses source=MASTER, sets threshold=51%, and saves, Then the API responds 201 and persists {type: PERCENT_BY_SPLIT, source: MASTER, threshold: 0.51, snapshot of participantIds and percentages, version, contentHash}. And Then the UI preview shows "0.00% of 51.00% approved" and lists participants with their split weights. Given the selected split source totals != 100%, When attempting to save, Then Save is disabled and an inline error reads "Splits must total 100%"; API returns 422 with fieldErrors.splitSource.total. Given threshold is <=0 or >100, When attempting to save, Then Save is disabled and an inline error reads "Threshold must be between 0 and 100"; API returns 422 with fieldErrors.threshold.
Set Minimum Signer Count with Live Collaborator Preview
Given the asset has M collaborators, When the user selects rule type "Minimum Signer Count", enters K (1 ≤ K ≤ M), and saves, Then the API responds 201 and persists {type: MIN_SIGNER_COUNT, minSigners: K, snapshot of collaboratorIds, version, contentHash}. And Then the UI preview displays "0 of K approvals required" and lists the current collaborators counted toward K. Given K < 1 or K > M at the moment of configuration, When attempting to save, Then Save is disabled and an inline error reads "Signer count must be between 1 and M"; API returns 422 with fieldErrors.minSigners. Given collaborators change after save such that M' < K, When the builder is reopened, Then a warning banner reads "Current collaborators fewer than required signers" and the rule is marked UNSATISFIABLE until adjusted.
Inheritance from Release to Track with Override Control
Given a release has a saved quorum rule at version v3, When a new track is created under that release, Then the track displays "Inherited from Release v3" and stores a reference to releaseRuleVersion=v3 (no local snapshot). Given the release rule is updated to v4, When the track has not overridden inheritance, Then the track reference updates to releaseRuleVersion=v4 automatically. When a user enables Override on the track, edits the rule, and saves, Then the track persists a local snapshot (e.g., trackRuleVersion=t1), breaks inheritance, and the UI shows "Overridden locally". When a user selects "Revert to Inherited" on the track and saves, Then the local snapshot is removed and the track resumes referencing the current release rule version. And Then GET /assets/{trackId}/quorum-rule indicates {source: INHERITED|LOCAL, sourceVersion} accordingly.
Rule Snapshot Versioning and Deterministic Persistence
Given a rule is saved for an asset, When the same rule (identical inputs: type, thresholds, selected roles/split source, and the same collaborator/split snapshot) is saved again, Then the API is idempotent and returns 200 with the existing version (no new version created) and the same contentHash. When any input changes, Then the API creates a new immutable version with versionNumber incremented by 1, returns previousVersionId, and GET /assets/{id}/quorum-rule/versions lists all versions in descending order. Given two environments with identical asset data, When the same rule inputs are saved, Then the stored canonical payload and contentHash are identical across environments. Given collaborators are provided in different order in the UI, When saving, Then canonicalization produces the same contentHash for equivalent inputs.
Inline Validation and Error Surfacing During Rule Configuration
Given the user configures an invalid rule (e.g., no roles selected for role-based; minSigners > collaborators; percentage threshold out of range; split source missing or not totaling 100%), When attempting to save, Then the Save action is disabled and each offending field shows an inline error with a concise message and machine-readable code. And Then the error summary links focus to the first invalid field; correcting the input clears the error immediately and re-enables Save. When submitting invalid data via API, Then the service returns 422 with a fieldErrors map keyed by path (e.g., roles[1], minSigners, threshold, splitSource.total), and no version is created.
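The deterministic-persistence criteria above depend on canonicalizing the rule payload before hashing, so that equivalent inputs (for example, collaborators listed in a different order) produce the same contentHash across environments. A hedged sketch of one possible canonicalization:

```python
import hashlib
import json


def content_hash(rule: dict) -> str:
    """Canonicalize a rule payload and hash it, so equivalent inputs
    yield the same hash regardless of key or list ordering."""
    def canonical(value):
        if isinstance(value, dict):
            return {k: canonical(value[k]) for k in sorted(value)}
        if isinstance(value, list):
            return sorted((canonical(v) for v in value),
                          key=lambda v: json.dumps(v, sort_keys=True))
        return value

    payload = json.dumps(canonical(rule), sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```

With this in place, the idempotency check in the versioning criteria reduces to comparing the incoming payload's hash against the latest stored version's hash.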
Collaborator & Split Data Sync
"As an artist, I want my roles and splits kept in sync with each track so that approval rules based on roles or percentages work correctly without manual re-entry."
Description

Integrate collaborator identities, roles, and percentage splits from IndieVault’s contracts and metadata to supply accurate inputs for quorum evaluation. Normalize roles (artist, producer, mixer, label rep) per asset, map each collaborator to a contactable account, and ensure split percentages are current and sum to 100% when required. Detect and flag missing or conflicting split data, provide guided fixes, and lock the rule until prerequisites are resolved. React to updates in contracts or roster by re-evaluating affected rules and notifying owners of any impact.
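The split-total and role-normalization checks described above can be sketched as follows. The synonym examples and the ±0.01% tolerance come from the acceptance criteria; the function shapes are illustrative:

```python
ROLE_SYNONYMS = {  # example mappings taken from the spec; a real table would be admin-managed
    "vocalist": "artist",
    "engineer": "mixer",
    "a&r": "label rep",
}
ALLOWED_ROLES = {"artist", "producer", "mixer", "label rep"}


def normalize_role(raw: str):
    """Map an ingested role to the allowed set; None means 'Needs mapping'."""
    role = ROLE_SYNONYMS.get(raw.strip().lower(), raw.strip().lower())
    return role if role in ALLOWED_ROLES else None


def split_status(splits: dict, tolerance: float = 0.01) -> dict:
    """Validate that collaborator splits total 100% within tolerance."""
    total = round(sum(splits.values()), 2)
    return {
        "total": total,
        "valid": abs(total - 100.0) <= tolerance,
        "delta": round(total - 100.0, 2),  # surfaced in the Fix Splits blocker
    }
```

An invalid status would lock the quorum rule and disable export until the splits are corrected, per the criteria below.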

Acceptance Criteria
Initial Contract Sync Populates Collaborators/Roles/Splits
Given an asset has an executed contract in IndieVault with collaborator identities, roles, and percentage splits And the asset has no existing collaborator records When a sync is triggered (auto on contract save or manual refresh) Then collaborator records are created for the asset within 60 seconds And each collaborator is mapped to a unique, contactable IndieVault account via verified email or handle And roles are normalized to the allowed set: artist, producer, mixer, label rep And split percentages are stored with two-decimal precision And the last-sync timestamp and source reference are recorded on the asset
Split Total Validation to 100% When Required
Given an asset is marked as requires-total-100 for splits When the sum of collaborator splits equals 100.00% with a tolerance of ±0.01% Then the split status is Valid and the quorum rule input is enabled When the sum is outside tolerance Then the split status is Invalid And the quorum rule is locked and export actions are disabled And the UI displays a blocker with the delta from 100% and a Fix Splits action And validation re-runs immediately after any edit and unlocks upon passing
Conflicting Split Sources Detected with Guided Resolution
Given an asset has differing split values between contract data and embedded metadata When a sync is performed Then the system flags a Conflict status and does not overwrite existing values silently And a guided resolution screen shows a side-by-side diff for each collaborator and total And the user can choose Contract, Metadata, or Custom per field And upon confirmation, the chosen values are applied, conflicts are cleared, and an audit entry is recorded And affected quorum rules are re-evaluated within 60 seconds and progress/blockers updated accordingly
Unmapped Collaborator Accounts Block Quorum Until Resolved
Given one or more collaborators in splits lack a mapping to a contactable IndieVault account When a sync completes Then the quorum rule is locked and export actions are disabled with blocker reason Unmapped collaborator(s) And the system suggests candidate account matches by exact email or fuzzy name match (>=0.9 score) And the user can link to an existing account or send an invite to create one And once all collaborators are mapped and verified, the blocker clears and quorum re-evaluates automatically
Role Normalization and Role-Based Constraints per Asset
Given collaborator roles may use synonyms (e.g., vocalist->artist, engineer->mixer, A&R->label rep) When roles are ingested from any source Then roles are normalized to the allowed set: artist, producer, mixer, label rep And any unrecognized role triggers a Needs mapping blocker with an admin mapping action And for assets using role-based quorum rules, at least one collaborator exists for each required role; otherwise the rule is locked with a Missing role blocker
Auto Re-evaluation and Notifications on Contract/Roster Updates
Given a contract amendment or roster change updates collaborator identities, roles, or splits for one or more assets When the update is saved Then affected assets are identified by linkage within 10 seconds And quorum evaluations are re-computed and progress/blockers refreshed within 60 seconds And asset owners and watchers receive an in-app notification immediately and an email within 2 minutes summarizing the impact (what changed, assets affected, current quorum status) And a change log entry is recorded with before/after values and actor
Per-Track Overrides in Multi-Track Releases
Given a release has a release-level contract and one or more tracks have track-level collaborator or split overrides When a sync is executed Then release-level data applies to all tracks by default And track-level overrides supersede release-level data only for the specified fields on those tracks And per-track quorum inputs reflect the correct merged dataset without affecting other tracks And export remains blocked only on tracks with unresolved blockers, not the entire release unless configured
Real-time Quorum Evaluation & Progress Bar
"As a manager, I want to see real-time progress toward approval so that I know exactly who is blocking release and how close we are to being export-ready."
Description

Continuously evaluate approval status against the configured rule and update a live progress bar and blockers list. Compute satisfaction for each rule type, including percentage-by-split math, role coverage, count thresholds, and unanimity. Update in real time as approvals arrive, with event-driven refresh across web and mobile clients. Present blockers as specific people or roles who have not yet approved, and expose a machine-readable state for other services to query.
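One way the evaluator might compute satisfaction, progress, and blockers for each rule type. The snapshot shapes and field names here are assumptions for illustration, not a confirmed schema:

```python
def evaluate_quorum(rule: dict, approvals: set) -> dict:
    """Evaluate one rule type against the set of approver ids."""
    kind = rule["type"]
    if kind == "UNANIMOUS":
        required = set(rule["collaborators"])
        blockers = sorted(required - approvals)
        return {"satisfied": not blockers,
                "progress": f"{len(required & approvals)} of {len(required)}",
                "blockers": blockers}
    if kind == "ROLE_BASED":
        # a role is covered when at least one of its members has approved
        pending = [r for r, members in rule["roles"].items()
                   if not approvals & set(members)]
        covered = len(rule["roles"]) - len(pending)
        return {"satisfied": not pending,
                "progress": f"{covered} of {len(rule['roles'])} roles",
                "blockers": sorted(pending)}
    if kind == "PERCENT_BY_SPLIT":
        approved = sum(pct for who, pct in rule["splits"].items() if who in approvals)
        blockers = sorted(w for w in rule["splits"] if w not in approvals)
        return {"satisfied": approved >= rule["threshold"],
                "progress": f"{approved:.0f}% of {rule['threshold']:.0f}%",
                "blockers": blockers}
    if kind == "MIN_SIGNER_COUNT":
        k = rule["minSigners"]
        count = len(approvals & set(rule["collaborators"]))
        blockers = sorted(set(rule["collaborators"]) - approvals)
        return {"satisfied": count >= k,
                "progress": f"{min(count, k)} of {k}",
                "blockers": blockers if count < k else []}
    raise ValueError(f"unknown rule type: {kind}")
```

The returned dict maps directly onto the progress bar, blockers list, and the machine-readable state other services query.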

Acceptance Criteria
Unanimous Approval Progress & Blockers
Given track "Track A" has a unanimous approval rule with approvers Alice (Artist), Ben (Producer), and Carla (Manager) When Alice submits approval Then the progress bar displays 1/3 approvals (33%) with label "Unanimous: 1 of 3" And the blockers list shows Ben (Producer) and Carla (Manager) as pending And the export action remains disabled When Ben submits approval Then the progress bar displays 2/3 approvals (67%) And the blockers list shows only Carla (Manager) When Carla submits approval Then the progress bar displays 3/3 approvals (100%) and the rule state becomes Satisfied And the blockers list is empty And the export action becomes enabled within 1 second of the final approval
Role-Based Coverage Rule Evaluation
Given release "EP 1" requires role coverage from Lead Artist, Producer, and Label Rep And assigned users are Dana (Lead Artist), Eli (Producer), Fox (Label Rep), and Gale (Producer) When Dana submits approval Then the progress bar displays 1/3 roles covered (Lead Artist) And the blockers list shows Producer (Eli, Gale) and Label Rep (Fox) as pending When Fox submits approval Then the progress bar displays 2/3 roles covered (Lead Artist, Label Rep) And the blockers list shows Producer (Eli, Gale) as pending When either Eli or Gale submits approval Then the progress bar displays 3/3 roles covered (100%) and the rule state becomes Satisfied And the blockers list is empty And the export action is enabled
Percentage-by-Split Threshold Calculation
Given track "Single B" has a percentage-by-split quorum threshold of 75% And ownership splits are Hana 40%, Ivan 35%, and Jo 25% When Hana and Ivan submit approvals Then the approved percentage totals 75% and the rule state becomes Satisfied And the progress bar displays 75% with label "75% of 75% threshold met" And the blockers list shows Jo (25%) as pending When Ivan revokes approval Then the approved percentage recalculates to 40% and the rule state becomes Not Satisfied And the progress bar updates to 40% And the export action becomes disabled within 1 second When Jo then submits approval (leaving Hana and Jo approved and Ivan revoked) Then the approved percentage totals 65% and the rule remains Not Satisfied
Minimum Signer Count Threshold
Given release "Album C" requires a minimum of 3 signers from eligible members Kai, Lee, Minh, Noor, and Omar When Kai and Lee submit approvals Then the progress bar displays 2/3 approvals ("1 more required") And the blockers list shows Minh, Noor, and Omar as pending When Minh submits approval Then the progress bar displays 3/3 approvals (100%) and the rule state becomes Satisfied And the blockers list becomes empty And additional approvals do not change the Satisfied state but update auxiliary text to "4 of 5 approved" as applicable
Real-Time Event-Driven Refresh Across Web and Mobile
Given Alice views the quorum status for "Track A" on web and Ben views the same on mobile When Ben submits an approval at 10:00:00 UTC Then Alice's web client updates the progress bar, satisfied/blocked state, and blockers list within 2 seconds at p95 (5 seconds max) without manual refresh And both clients display identical rule satisfaction and blockers after the update And duplicate delivery of the approval event does not change the final state (idempotent processing)
Machine-Readable Quorum State API
Given a service queries GET /api/v1/quorum-state?entityId=TRACK-123 When the track has a percentage-by-split rule with pending approvals Then the response body includes fields: entityId, ruleType, satisfied (boolean), approvedPercent (0–100, one decimal), requiredThreshold, approvalsCount, eligibleCount, blockers (array of userIds and/or roles), lastUpdatedAt (ISO 8601), and version/etag And the values match the UI state as of lastUpdatedAt And the ETag changes when approvals are added, revoked, or edited And a 404 is returned for unknown entityId And the endpoint responds within 300 ms at p95 under normal load
Export Gatekeeper & Clear Blockers
"As a project lead, I want exports to be automatically blocked until approvals are complete so that nothing goes out under-approved or by mistake."
Description

Intercept all export and share flows (downloads, review links, delivery bundles) and block completion until the active quorum rule is satisfied. Display a clear explanation of unmet criteria and list required approvers directly in the export modal. Provide an authorized override with reason capture and two-factor confirmation; log overrides to the audit trail and notify stakeholders. Support test mode for sandbox projects where blocking can be simulated without real enforcement.
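A sketch of the gatekeeper's finalize-time check, producing the HTTP 409 with a machine-readable blockers payload that the acceptance criteria call for. The payload field names are assumptions, not a confirmed schema:

```python
def gate_export(assets: list, overridden: bool = False) -> tuple:
    """Return (http_status, body) for an export-finalize attempt.

    Each asset dict carries the already-computed quorum result for its
    active rule (track-level or release-level, per scope resolution).
    """
    blockers = [
        {"assetId": a["id"], "scope": a["scope"], "rule": a["rule"],
         "pending": a["pending"]}
        for a in assets if a["pending"]  # grouped by asset/scope
    ]
    if blockers and not overridden:
        return 409, {"error": "QUORUM_NOT_SATISFIED", "blockers": blockers}
    return 200, {"status": "export-queued", "override": overridden}
```

Running the same check again at backend commit time gives the revalidation behavior described below: if an approval is revoked between modal confirmation and commit, the second evaluation returns 409 and the export aborts with no artifacts created.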

Acceptance Criteria
Block Export on Unmet Quorum (All Flows)
Given a user initiates an export via file download, review link creation, or delivery bundle generation for selected asset(s) And the active quorum rule for the asset scope is not satisfied When the user attempts to finalize the export action Then the confirm/submit control is disabled and a gatekeeper panel is displayed in the export modal And the panel states the rule name and summarizes the unmet condition in plain language And no files are downloaded, no links are created, and no bundles are queued And API attempts to finalize export receive HTTP 409 with a machine-readable blockers payload
Real-Time Progress and Blockers Display
Given the export modal is open for asset(s) with an unmet quorum rule When an approver grants or revokes approval, or a required role assignment changes Then the progress bar and blockers list update in the modal within 1 second of the event or on manual refresh And each blocker item shows approver or role name, current status (approved/pending/rejected), and last-updated time And a Request Approval action is available for pending approvers when the current user has permission And when all blockers are cleared, the confirm/submit control becomes enabled within 1 second
Rule Types Enforcement (Unanimous, Role-Based, Split Percentage, Min Signers)
Given a quorum rule configured as Unanimous for a track with 3 approvers When fewer than 3 approvals are recorded Then export remains blocked; when all 3 approve, export is allowed Given a quorum rule configured as Role-Based requiring Producer and Manager roles When at least one approver for each required role approves Then export is allowed; if any required role lacks an approval, export is blocked Given a quorum rule configured as Percentage by Split with threshold 60% When approvers representing less than 60% ownership approve Then export remains blocked; when approvals reach or exceed 60%, export is allowed Given a quorum rule configured as Minimum Signer Count with threshold 2 When fewer than 2 distinct approvers approve Then export remains blocked; when 2 or more approve, export is allowed
Scope Resolution: Active Rule per Track or Release
Given a single track with a configured track-level quorum rule When exporting that track individually Then the track-level rule is used for gating Given a release with a configured release-level quorum rule When exporting the release bundle (or any asset via the release export flow) Then the release-level rule is used for gating Given multiple assets spanning multiple tracks and/or releases are exported together When evaluating export readiness Then each asset is checked against its own active rule (track-level for individual tracks; release-level for release bundles) And export is blocked if any asset’s active rule is unmet, with blockers grouped by asset/scope
Override with Reason, 2FA, and Notifications
Given a user with the OverrideExportGatekeeper permission opens the gatekeeper panel on an unmet quorum When the user selects Override Then the user must enter a reason of at least 10 characters and complete a 2FA challenge And on successful 2FA, the export proceeds immediately And an audit log entry is recorded capturing actor, timestamp, asset identifiers, rule snapshot, reason text, 2FA method, IP/device, and outcome "override-allow" And all approvers referenced by the rule and the project owner receive notifications containing the override details within 1 minute And if the user lacks permission or fails 2FA, the override is rejected, no export occurs, and an audit entry with outcome "override-deny" is recorded; no notifications are sent
Test Mode Simulation Without Enforcement
Given a project is flagged as Test Mode/Sandbox And a quorum rule for selected asset(s) is unmet When the export modal is opened Then the gatekeeper panel displays a Test Mode badge and the unmet criteria And the confirm/submit control remains enabled and allows the export to complete And no approval requests or stakeholder notifications are sent as a result of this export And an audit entry is recorded with marker test-mode=true And the UI continues to display simulated progress and blockers for visibility
Revalidation at Confirmation and Concurrency Handling
Given the export modal shows all quorum requirements met When the user clicks confirm and an approval is revoked or the quorum configuration changes before the backend commit Then the backend revalidates the rule at commit time And if the rule is no longer satisfied, the export is aborted atomically with no artifacts created And the modal refreshes to show updated blockers and the confirm control is disabled And an informational message explains the reason for abort with a link to view blockers
Approver Invitations & Reminders
"As a manager, I want to automatically invite and remind the required approvers so that approvals are gathered quickly without constant manual follow-up."
Description

Generate secure, expiring approval requests with watermarkable review links per recipient. Send email notifications with one-click approve/decline and optional comments, track delivery and opens, and schedule smart reminders based on due dates and inactivity. Prevent spam with reminder caps and let managers nudge specific blockers from the progress view. Record each action per recipient for analytics and evaluation.
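Per-recipient, signed, expiring links could be built on an HMAC-signed token along these lines. The claim names and key handling are assumptions; a production signing key would come from a KMS or secret store, and revocation would additionally check a server-side denylist:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-signing-key"  # placeholder; never hard-code in production


def _b64(raw: bytes) -> bytes:
    return base64.urlsafe_b64encode(raw).rstrip(b"=")


def make_token(recipient_id: str, asset_ids: list, ttl_seconds: int) -> str:
    """Create a signed token scoped to one recipient and a set of assets."""
    payload = json.dumps({
        "rcp": recipient_id,
        "assets": sorted(asset_ids),
        "exp": int(time.time()) + ttl_seconds,
    }, separators=(",", ":")).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return (_b64(payload) + b"." + _b64(sig)).decode()


def verify_token(token: str):
    """Return the claims if the token is authentic and unexpired, else None."""
    try:
        p64, s64 = token.encode().split(b".")
    except ValueError:
        return None
    pad = lambda b: b + b"=" * (-len(b) % 4)
    payload = base64.urlsafe_b64decode(pad(p64))
    sig = base64.urlsafe_b64decode(pad(s64))
    if not hmac.compare_digest(sig, hmac.new(SECRET, payload, hashlib.sha256).digest()):
        return None  # tampered
    claims = json.loads(payload)
    if claims["exp"] < time.time():
        return None  # expired
    return claims
```

Because the recipient id is inside the signed payload, every view and play can be attributed per recipient, and stripping or editing URL parameters invalidates the signature rather than the watermark.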

Acceptance Criteria
Send Secure Expiring Approval Request
Given a manager selects assets and recipients and sets an expiry within 1 hour–30 days (default 7 days) When the manager sends the approval request Then the system creates a unique, signed, per-recipient approval token scoped to the selected assets And the approval link uses HTTPS and is valid only until the configured expiry or manual revocation And each recipient’s request status is set to Pending with recorded created_at and expires_at timestamps And accessing the link after expiry or revocation shows an Expired/Revoked screen and blocks asset access And creation, expiry, and revocation events are written to the audit log with actor, timestamp, and request ID
Per-Recipient Watermarked Review Link
Given watermarking is enabled for the request When a recipient opens their review link Then previews/playback display a dynamic on-screen watermark containing recipient name/email and request ID throughout playback And any allowed downloads are watermarked or tagged per recipient; if downloads are disallowed, the UI hides download controls And removing or altering URL parameters does not remove the watermark And analytics attribute all views/plays to the intended recipient
One-Click Email Approve/Decline with Comments
Given a recipient receives the approval email with Approve and Decline CTAs When the recipient clicks a CTA Then a confirmation page loads with the corresponding action preselected and an optional comments field And upon confirm, the system records decision, optional comment, timestamp, IP, and user agent against that recipient And the recipient’s state updates to Approved or Declined and cannot be duplicated by repeated clicks And the quorum progress updates for the associated track/release and blocker lists reflect the new state
Delivery, Open, and Action Tracking with Analytics
Given approval emails are sent When delivery succeeds, fails, or is deferred Then the system logs per-recipient delivery status with provider response and timestamp And first open, last open, and open count are captured when possible; clicks on review/decision links are recorded with timestamps And decisions, comments, reminders sent, nudges sent, bounces, and revocations are appended to the recipient’s activity timeline And managers can view these metrics per recipient and export them (CSV/JSON) for the request
Smart Reminders by Due Date and Inactivity
Given a request has a due date and reminders are enabled When no open occurs within 24 hours of send Then Reminder #1 is sent When the link has been opened but no decision is recorded for 3 days Then Reminder #2 is sent When it is 9:00 AM account timezone on the due date and no decision exists Then Reminder #3 is sent And any pending reminders are canceled immediately upon decision, bounce, or revocation And all reminders respect the global reminder cap and are logged with timestamps
Reminder Caps and Throttling
Given a per-recipient per-request reminder cap of 3 and a minimum 24-hour cooldown When reminders (automatic or manual) reach the cap Then the system blocks further reminders and explains the reason in the UI And no two reminders to the same recipient are sent less than 24 hours apart And bounce status disables all future reminders for that recipient
Nudge Specific Blockers from Progress View
Given a manager views the quorum progress for a track/release When the manager selects a blocker and clicks Nudge Then the system composes a nudge email including context (release/track name, due date, current approval status) and the recipient’s active review link And the nudge is sent immediately if under the reminder cap and is recorded with actor, timestamp, and message preview And if the approval link is expired, the manager is prompted to extend expiry before sending; upon extension a fresh token is issued and old tokens are invalidated And the blockers list and activity timeline update to show the nudge event
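The cap and cooldown criteria above reduce to a single predicate per recipient. A sketch using the stated values (cap of 3, 24-hour cooldown, bounces disable all future reminders); the signature is illustrative:

```python
from datetime import datetime, timedelta

REMINDER_CAP = 3
COOLDOWN = timedelta(hours=24)


def can_send_reminder(sent_at: list, now: datetime, bounced: bool = False):
    """Return (allowed, reason). `sent_at` is the recipient's reminder history
    for this request, covering both automatic reminders and manual nudges."""
    if bounced:
        return False, "recipient bounced; reminders disabled"
    if len(sent_at) >= REMINDER_CAP:
        return False, "reminder cap (3) reached for this recipient"
    if sent_at and now - max(sent_at) < COOLDOWN:
        return False, "24-hour cooldown since last reminder not elapsed"
    return True, "ok"
```

The reason string maps onto the UI explanation the criteria require when a reminder is blocked.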
Role Delegation & Proxy Approval
"As an artist, I want to assign a trusted proxy to approve while I’m on tour so that releases don’t stall when I’m unavailable."
Description

Allow designated delegates to approve on behalf of a role or individual during absences while preserving accountability. Support time-bound proxies, role-based delegation (e.g., any Label Rep), and per-asset exceptions. Clearly indicate when a proxy approved and include both the original role and delegate identity in evaluation and audit. Ensure delegation cannot weaken the quorum beyond the configured thresholds.

Acceptance Criteria
Individual Time-Bound Proxy Approval
Given user O holds an approval role for asset A and delegates to user D from 2025-08-15T00:00Z to 2025-08-25T23:59Z When D submits an approval on A at 2025-08-19T12:00Z Then the approval is recorded for O’s role on A and attributed to D as proxy for O And the approval contributes to quorum evaluation exactly as if O approved And the UI labels the approval as "Proxy" and displays "Approved by D (proxy for O – [Role])" And the audit log captures O, D, delegation window, created-by, timestamp, and asset ID And if D attempts approval outside the window, the action is blocked with the message "Delegation expired or not yet active"
Role-Based Delegation (Any Label Rep)
Given a role-based delegation is active that allows any user with role "Label Rep" to approve on behalf of the "Label Approval" seat for release R When a user D1 with role "Label Rep" approves R Then the "Label Approval" seat is satisfied for quorum on R And additional approvals for that seat by other Label Reps are prevented and shown as redundant And the recorded approval attributes D1 as delegate for the "Label Approval" seat And D1 cannot also approve that same seat as self or via another delegation
Per-Asset Delegation Exceptions & Precedence
Given O has a global delegate Dg And a per-asset exception for track T assigns delegate De (or disables delegation) When delegation is evaluated for approvals on T Then the per-asset exception is applied (De used or delegation blocked) and the global delegate Dg is ignored And when delegation is evaluated for other assets without exceptions, Dg applies And the audit trail records the delegation scope as "asset" or "global" accordingly
Quorum Integrity With Delegation Across Rule Types
Rule: A proxy approval maps to exactly one original role seat and does not create additional seats
Rule: For minimum signer count N, the same human cannot be counted more than once across self and any proxy roles
Rule: For unanimous-by-role quorum, proxies do not reduce the set of required roles; each required role must still be satisfied
Rule: For percentage-by-split quorum, approval weight is derived from the original stakeholder’s split; delegates inherit no split and cannot alter percentage math
Rule: Attempting to approve both as self and as a delegate for the same asset counts at most once toward quorum and surfaces a warning
Proxy Approval Attribution & Audit Trail
Given a proxy approval event occurs on asset A When approval history, activity feed, API, or audit export is retrieved Then each approval entry includes: approvalId, assetId, originalRoleId/name, originalOwnerUserId, delegateUserId, delegationType (individual|role-based), delegationScope (global|asset), delegationWindowStart/End, approvedAt, createdByUserId And UI entries are labeled "Proxy" and display both original role/owner and delegate identities
Delegation Expiry & Early Revocation Handling
Given a time-bound delegation D with end 2025-08-20T00:00Z or an early revocation at 2025-08-18T10:00Z When the end or revocation time is reached Then D is immediately ineligible for new approvals and marked expired/revoked with reason And any approvals made before that time remain valid and are linked to D with the validity window captured in audit And attempted approvals after expiry/revocation are blocked with an explanatory message
UI Indicators & Notifications for Proxy Approvals
Given a proxy approval is recorded for release or track A When stakeholders view the progress bar, blockers list, or approval panel Then the proxy badge is displayed and shows "Delegate: D, Original: O, Role: R, Window: [start–end]" And notifications sent to watchers and the original owner state "Proxy approval by D on behalf of O for Role R" with a link to details And hovering or opening details reveals the full attribution without requiring navigation away
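The quorum-integrity rules for delegation (one seat per approval, each human counted at most once across self and proxy approvals) can be sketched as a deduplicating counter. The record shape is an assumption:

```python
def count_distinct_signers(approvals: list) -> int:
    """Count approvals toward a minimum-signer rule so that the same
    human is never counted twice across self and proxy approvals, and
    each original seat is satisfied at most once."""
    humans, seats = set(), set()
    counted = 0
    for a in approvals:  # {"actor": who clicked approve, "seat": original role owner}
        if a["actor"] in humans or a["seat"] in seats:
            continue  # duplicate human, or seat already satisfied
        humans.add(a["actor"])
        seats.add(a["seat"])
        counted += 1
    return counted
```

A self-approval is just the case where actor equals seat, so the same check covers both directions of the "counts at most once" rule.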
Approval Audit Trail & Reporting
"As a label rep, I want a complete approval history so that I can prove who signed off and when if there’s a dispute."
Description

Maintain an immutable, exportable log of rule definitions, snapshots at approval time, approvals and declines with timestamps, identity, device/IP, comments, and any overrides with reasons. Provide filters by project, release, track, date range, and approver; surface a concise view in the asset page and a CSV export for external compliance. Ensure logs are tamper-evident and retained according to workspace policy.
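Tamper evidence is commonly achieved by hash-chaining entries, so altering any record invalidates every subsequent hash. A minimal sketch consistent with the sequence_id, entry_hash, and previous_hash fields used elsewhere in these criteria (entry content is simplified for illustration):

```python
import hashlib
import json


def append_entry(log: list, event: dict) -> dict:
    """Append a tamper-evident entry that hashes its own content plus
    the previous entry's hash."""
    previous_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = dict(event, sequence_id=len(log) + 1, previous_hash=previous_hash)
    payload = json.dumps(body, sort_keys=True, separators=(",", ":"))
    body["entry_hash"] = hashlib.sha256(payload.encode()).hexdigest()
    log.append(body)
    return body


def verify_chain(log: list) -> bool:
    """Recompute every hash; any edited or removed record breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["previous_hash"] != prev:
            return False
        payload = json.dumps(body, sort_keys=True, separators=(",", ":"))
        if hashlib.sha256(payload.encode()).hexdigest() != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```

Exporting entry_hash and previous_hash alongside each CSV row lets an external auditor re-verify the chain without access to the live system.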

Acceptance Criteria
Log approvals, declines, and overrides with full metadata
Given an approvable asset with an active quorum rule When a member submits an approval or a decline with an optional comment Then the system appends a new immutable audit entry including workspace_id, asset_type, asset identifiers (project_id, release_id, track_id as applicable), rule_id, event_type (approval|decline), approver_id, approver_email, approver_role, comment (may be blank), ISO-8601 UTC timestamp, client device fingerprint, source IP address, and request_id And the entry is assigned a strictly increasing sequence_id within the workspace and a content_hash And the entry is visible through the audit UI and API within 2 seconds of submission And any attempt to modify or delete the entry via public API returns 405/403 with no mutation Given an authorized admin performs an override related to quorum When the override is confirmed Then the system requires a non-empty reason and appends an 'override' audit entry including actor_id, reason, scope (asset/rule), affected approval ids, and timestamp
Capture rule definition and approver roster snapshots at approval time
Given a quorum rule is created or updated When the rule is saved Then the system versions the rule and appends a 'rule_version' entry including rule_type (unanimous|role_based|percentage|min_signers), thresholds/percentages, required roles/min_signers, participant roster with roles and splits, effective_from timestamp, version_id, and version_hash Given any approval or decline event occurs When writing the audit entry Then the current rule_version and participant roster snapshot are referenced by version_id and version_hash within the event to bind it to the exact configuration at that time Given a rule is modified after prior approvals exist When quorum recalculation occurs Then prior approval events remain linked to their original snapshot and are not counted toward the new rule unless re-approved under the new version; the audit log displays both snapshot version_ids for clarity
Filter audit log by project, release, track, date range, and approver
Given a user with permission accesses the audit log UI or API When they apply any combination of filters: project_id, release_id, track_id, approver_id/email, and a date range (start and end) Then only entries matching all selected filters are returned And date range filtering is inclusive of boundary timestamps and evaluated in UTC And results are sortable by timestamp (default desc) and paginated with total count And the first page of results is returned within 1.5 seconds for up to 50,000 matching entries And the same filters are propagated to CSV export when initiated from the filtered view
Concise audit trail on asset page
Given a user with view_audit permission opens a track or release asset page When the page renders Then a concise audit component displays the current quorum rule summary (type and threshold), counts of approved/declined/pending, and a progress indicator And the last 10 audit events for the asset are shown with actor, action (approve/decline/override), relative time, and hover/click to view full details And any present overrides are flagged with a visible badge and tooltip displaying the recorded reason And a 'View full audit log' control navigates to the audit screen pre-filtered to the asset And the audit component loads within 500 ms given up to 1,000 total events for the asset
CSV export for external compliance
Given a user with export_audit permission is viewing the audit log (UI or API) When they request a CSV export (respecting any active filters) Then a CSV file is generated with a header row and the following columns per event: workspace_id, project_id, release_id, track_id, asset_type, rule_id, rule_version_id, snapshot_hash, event_type, approver_id, approver_email, approver_role, action, comment, timestamp_utc, device_fingerprint, ip_address, override_flag, override_reason, actor_id, sequence_id, entry_hash, previous_hash And field values are UTF-8 encoded and RFC 4180 compliant with proper quoting for commas and newlines And exports up to 1,000,000 rows stream to the client and complete within 5 minutes; larger exports return an async job id with webhook/email notification on completion And the export action itself is recorded as an audit event with requester identity, filter parameters, and generated file id And the download payload includes a separate SHA-256 checksum file for the CSV
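The quoting and checksum requirements above can be sketched in a few lines. This is an illustrative fragment, not the implementation: the column list is truncated to a hypothetical subset of the fields the criteria enumerate, and Python's `csv` module is assumed to provide the RFC 4180 quoting behavior (quoting fields that contain commas or newlines, CRLF line endings).

```python
import csv
import hashlib
import io

# Hypothetical subset of the export columns; the acceptance criteria list the full set.
COLUMNS = ["workspace_id", "event_type", "approver_email", "comment", "timestamp_utc"]

def export_csv(events: list[dict]) -> tuple[bytes, str]:
    """Render events as an RFC 4180 CSV and return (payload, SHA-256 checksum)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS, extrasaction="ignore")
    writer.writeheader()
    for event in events:
        # csv handles quoting of embedded commas/newlines and escapes quotes.
        writer.writerow(event)
    data = buf.getvalue().encode("utf-8")
    # The checksum would be shipped as the separate .sha256 sidecar file.
    return data, hashlib.sha256(data).hexdigest()
```

Streaming a million rows would replace the in-memory buffer with a chunked response, but the quoting and checksum logic stays the same.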
Tamper-evident hash chain and verification
Given the audit log stores events append-only When an event is written Then the system computes entry_hash = SHA-256(canonical_event_payload) and stores previous_hash referencing the prior sequence_id within the workspace to form a hash chain Given a user or auditor calls the verification endpoint with optional filters When the system recomputes the chain over the selected range Then it returns status = PASS with the latest anchor_hash when all entries validate, or status = FAIL with the first failing sequence_id and details when any entry is missing or altered And the audit UI displays an 'Integrity Verified' or 'Integrity Mismatch' indicator for the filtered view based on the verification result And CSV exports embed the latest verified anchor_hash and verification timestamp in a metadata footer row
Retention policy enforcement and legal hold
Given a workspace retention policy (duration and archive/purge mode) and optional legal hold are configured by an admin When the policy is saved Then the configuration is recorded in the audit log with effective_from timestamp Given scheduled retention processing runs When events exceed the retention period and are not under legal hold Then they are purged from the active store or archived according to policy, and a 'retention_action' entry is appended summarizing counts and ranges affected And attempts to delete events within retention or under legal hold are blocked with 409 and logged as 'retention_blocked' And after purge/archival, the remaining chain is re-anchored and verify endpoint returns PASS for the remaining range And the asset and audit views display the retention policy and next scheduled purge date to admins

Split Resolver

Auto‑discover and validate all split holders from credits, ISNI/IPI, and past projects. De‑duplicate aliases, suggest missing parties, and assign required/optional status based on split or role. Sends tailored invites with context so you chase the right people once.

Requirements

Multi-Source Credit Harvesting
"As an indie manager, I want Split Resolver to automatically harvest contributor identities from credits, ISNI/IPI, and my past projects so that I don't have to manually compile split holders for each release."
Description

Automatically ingest and normalize potential split holders from multiple sources, including embedded audio metadata (ID3/RIFF/FLAC tags), uploaded credit sheets (CSV/XLSX), DDEX/CWR imports, PRO/collecting society lookups via ISNI/IPI, and prior IndieVault projects. Parse roles, contribution types, and any known split percentages, capturing provenance for each datum. Merge obvious duplicates at the field level, map discovered parties to the current release and track-level assets, and expose a structured candidate list for downstream validation. Respect third‑party API rate limits with retry/backoff, cache responses, and surface connectivity errors without blocking the harvest. Integrates with IndieVault’s asset ingestion pipeline and project workspace so discovery runs as soon as tracks/artwork are added or credits are updated.

Acceptance Criteria
Auto‑ingest Embedded Audio Credits on Asset Upload
Given MP3/WAV/AIFF/FLAC tracks are uploaded to a project, When asset ingestion runs, Then ID3v2/RIFF INFO/VorbisComment tags for contributors (e.g., Artist, Composer, Lyricist, Producer, Engineer) and any split percentages are parsed and normalized to the IndieVault role taxonomy. Given embedded tags are missing or malformed, When harvesting executes, Then ingestion completes without error, emits per-asset warnings, and creates no placeholder parties. Given contributor data is extracted from multiple files for the same person, When candidates are generated, Then a single candidate is produced with merged field values and per-field provenance entries. Then each extracted field has provenance containing source=embedded_metadata, asset_id, tag_key, raw_value, normalized_value, and timestamp. Then each candidate is mapped to the correct track asset(s) and the release container. Then p95 metadata parsing time is ≤ 2 seconds per asset and memory use stays below 50 MB per concurrent asset.
Parse and Normalize CSV/XLSX Credit Sheets
Given a CSV/XLSX credit sheet is uploaded, When harvesting runs, Then header aliases (e.g., writer/songwriter/author → Writer; producer/prod → Producer) are mapped and roles normalized to the IndieVault taxonomy. Given the sheet contains N data rows, When processed, Then all rows with required columns (name + role) are ingested; rows missing required columns are skipped with per-row warnings including row numbers and reasons. Then split percentages are parsed from numeric or string forms (e.g., "12.5%" → 12.5), validated to be in [0,100], and stored with 2-decimal precision. Then per-cell provenance is recorded with source=credit_sheet, file_id, sheet_name, row, column, raw_value, normalized_value, and timestamp. Then Unicode names, diacritics, and common CSV quirks (quoted commas, newlines) are correctly handled. Then processing 500 rows completes in ≤ 10 seconds p95 and uses ≤ 200 MB peak memory.
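The split-percentage normalization above (numeric or string forms, range check, 2-decimal precision) can be sketched as a small parser. This is an illustrative sketch: `parse_split` is a hypothetical helper name, and `Decimal` is used to avoid binary float drift in stored percentages.

```python
from decimal import Decimal, ROUND_HALF_UP

def parse_split(raw) -> Decimal:
    """Parse a split from numeric or string form ('12.5%', ' 12.5 ', 12.5)
    into a Decimal in [0, 100] stored with 2-decimal precision."""
    text = str(raw).strip().rstrip("%").strip()
    value = Decimal(text)  # raises InvalidOperation for non-numeric input
    if not (Decimal("0") <= value <= Decimal("100")):
        raise ValueError(f"split out of range: {raw!r}")
    return value.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

A row failing this parse would be the kind of per-row warning (with row number and reason) the criteria require, rather than a hard ingestion failure.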
Import and Map DDEX/CWR Party Credits
Given a valid DDEX ERN/MEAD or CWR file is uploaded, When harvesting runs, Then contributor parties, roles, ISRC/ISWC, IPI, ISNI, and any declared splits are parsed and normalized. Then parties are mapped to release- and track-level assets using ISRC/Work identifiers; if an identifier is missing, the item is flagged unresolved with a warning and retained for later mapping. Given the same DDEX/CWR message is ingested multiple times, When processing, Then operations are idempotent and produce no duplicate candidates or duplicated provenance entries. Then each parsed field includes provenance with source=ddex|cwr, message_id, record_type, field_path, raw_value, normalized_value, and timestamp. Then malformed records are skipped with validation errors logged and surfaced without blocking the import. Then a 50 MB DDEX package processes in ≤ 60 seconds p95.
PRO/Collecting Society Lookup with Rate‑Limit Handling and Caching
Given a candidate has an IPI or ISNI (or name + territory) and external lookups are enabled, When harvesting runs, Then the system queries configured PRO/society APIs to enrich identities and roles. Given the API returns HTTP 429 or a rate-limit header, When retrying, Then exponential backoff with jitter is applied (initial 1s, factor 2, max delay 60s) up to 5 attempts and Retry-After is honored when present. Then no more than 5 concurrent outbound requests per endpoint are issued; excess requests are queued. Then successful responses are cached per person+endpoint for 24h; negative results are cached for 1h; manual refresh bypasses cache for that candidate. Given a network timeout or 5xx error, When continuing, Then the job marks the lookup as partial, surfaces a non-blocking connectivity warning listing the affected endpoint(s) and count, and proceeds with other sources. Then all outbound calls and retries are metered and logged with endpoint, latency, status, and attempt count.
Harvest from Prior IndieVault Projects for Candidate Suggestions
Given a project has new tracks or updated credits, When harvesting runs, Then prior IndieVault projects for the same primary artist/label are queried for historical contributors. Then matches are suggested using identifiers (IPI/ISNI/email) and fuzzy name matching (similarity ≥ 0.92) constrained by role overlap and genre/label context when available. Then suggestions that duplicate already-discovered candidates are suppressed; otherwise, new candidates are created with provenance source=prior_project including project_id, asset_id, and field origins. Then each suggestion carries a confidence score in [0,1]; exact identifier matches yield ≥ 0.95; fuzzy-only matches yield 0.60–0.94. Then generating suggestions for up to 50 prior projects completes in ≤ 15 seconds p95.
Cross‑Source De‑duplication, Alias Resolution, and Provenance Capture
Given multiple sources produce overlapping party data, When candidates are consolidated, Then duplicates are merged using these rules: (1) same IPI or ISNI → definite match; (2) same email/domain + role → match; (3) normalized name similarity ≥ 0.92 + at least one matching role or external id → match. Then alias names from all matched records are preserved; a single display_name is selected by precedence (explicit preferred name > most frequent > longest tokenized name) and recorded with provenance. Then per-field provenance retains all contributing sources; no provenance is lost during merges. Then each candidate includes: internal_id (nullable), display_name, aliases[], external_ids{isni,ipi,emails[]}, roles[], contribution_types[], splits[], asset_refs[], provenance[], confidence∈[0,1], required(boolean), optional_reason(text). Then required is set true when role belongs to rights-bearing categories (Writer, Composer, Lyricist, Publisher, Primary Artist) or split_percentage > 0; otherwise required=false. Then the merge is deterministic: repeated runs over the same inputs produce identical candidate objects and orderings.
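The three ordered merge rules above can be sketched as a pairwise predicate. This is a simplified sketch: the candidate dict shape mirrors the structure the criteria describe, rule 2 checks shared email (not email domain), and rule 3 checks only role overlap (the criteria also allow a matching external id); a real consolidator would then union aliases and provenance for matched pairs.

```python
def is_same_party(a: dict, b: dict, name_sim: float) -> bool:
    """Apply the merge rules in order:
    (1) same IPI or ISNI -> definite match;
    (2) same email + overlapping role -> match;
    (3) name similarity >= 0.92 + at least one shared role -> match."""
    ids_a, ids_b = a["external_ids"], b["external_ids"]
    for key in ("ipi", "isni"):
        if ids_a.get(key) and ids_a.get(key) == ids_b.get(key):
            return True  # rule 1: authoritative identifier match
    shared_email = set(ids_a.get("emails", [])) & set(ids_b.get("emails", []))
    shared_role = set(a["roles"]) & set(b["roles"])
    if shared_email and shared_role:
        return True  # rule 2 (simplified to exact email)
    return name_sim >= 0.92 and bool(shared_role)  # rule 3 (simplified)
```

Because each rule is a pure function of the two records, repeated runs over the same inputs give the same merges, which is what the determinism clause requires.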
Background Discovery Trigger and Candidate List Exposure in Workspace
Given tracks or artwork are added or a credit sheet/DDEX/CWR is uploaded or credits are edited, When changes are saved, Then harvesting starts automatically as a background job without blocking asset ingestion. Then the job lifecycle exposes statuses {queued, running, partial, succeeded, failed} with progress (% complete) and per-source counts (processed, merged, warnings, errors). Then the structured candidate list is available to the Split Resolver within 10 seconds of job completion and includes per-candidate provenance, confidence, required/optional flags, and asset mappings. Then external API issues do not block asset availability; at most a non-blocking banner and per-source warnings are shown. Then for a project with ≤ 50 tracks, one credit sheet (≤ 500 rows), and one DDEX file (≤ 50MB), end-to-end harvesting completes in ≤ 90 seconds p95.
Canonical Identity Resolution
"As a project manager, I want aliases and duplicate names to be merged into a single canonical identity with verified IDs so that invites and approvals go to the right person once."
Description

Resolve aliases and duplicate names into a single canonical person/organization entity by combining deterministic identifiers (ISNI/IPI) with fuzzy matching on names, emails, social handles, and past co‑credit networks. Generate a confidence score and present suggested merges with safe defaults; require human confirmation for low‑confidence cases. Maintain canonical profiles with verified IDs, alternate names, contact methods, and role history. Provide merge/split and override controls with full audit trail. Integrate with the account’s shared address book and enforce PII handling (encryption at rest, field-level permissions) so invites and approvals are addressed to the right entity once.

Acceptance Criteria
Auto‑Merge via Deterministic IDs (ISNI/IPI)
Given two or more person/organization records share the same verified ISNI or IPI and have no conflicting verified identifiers When the resolver job runs Then the records are auto-merged into a single canonical profile without human intervention And the canonical profile retains the matched ISNI/IPI as verified identifiers and stores all original IDs as alternates And all non-conflicting fields are unified; conflicting non-PII fields keep the most recently updated value; PII fields prefer verified sources And a confidence score >= 0.98 is recorded on the merge decision And an audit log entry is created with actor=system, action=merge, affected_record_ids, field-level changes, timestamp, and previous_values snapshot
Fuzzy Match Suggestion with Safe Defaults
Given records lack matching verified ISNI/IPI but have name similarity >= 0.85 OR at least 2 shared co-credit projects OR matching normalized email/handle When the resolver runs Then a Suggested Merge is created with a confidence score between 0.60 and 0.97 and appears in the review queue And suggestions with score < 0.98 are not auto-merged and require explicit human confirmation When a reviewer accepts a suggestion Then the records are merged, aliases captured, and an audit log entry with actor=user is written When a reviewer rejects a suggestion Then the records remain separate, a suppression rule prevents re-suggesting the same pair for 90 days, and an audit entry with reason is written
Manual Merge Override and Rollback
Given a reviewer selects two canonical profiles and chooses Merge When the reviewer resolves field conflicts and confirms Then the system merges profiles, sets one canonical ID as primary, carries over verified IDs, consolidates role history, and preserves all alternate names and contact methods And linked assets, splits, invites, and approvals are re-pointed to the new canonical profile without broken references (0 referential integrity errors) And an audit log with actor=user, action=merge, selected conflict resolutions, and a rollback token is stored When the reviewer triggers rollback within 30 days Then the system restores both original profiles and all prior links and logs a revert audit entry
Split Incorrectly Merged Profile
Given a canonical profile contains attributes from two distinct entities When a reviewer initiates Split and assigns fields, identifiers, and links to Entity A and Entity B Then the system creates two valid canonical profiles, reassigns all linked assets/splits/invites accordingly, and passes a referential integrity check (0 dangling references) And both resulting profiles retain appropriate verified IDs and aliases without duplication And an audit log with actor=user, action=split, mapping details, and timestamps is recorded
PII Encryption and Field‑Level Permissions
Given PII fields (emails, phone numbers, social handles) exist on profiles When stored at rest Then they are encrypted using AES-256 with keys managed by KMS and are unreadable in raw storage Given a user without PII:read permission requests a profile via API or UI Then PII fields are redacted and field-level API access returns 403; an access-denied audit entry is logged Given a user with PII:read permission requests the same profile over TLS 1.2+ Then PII fields are returned decrypted in response and an access audit entry is logged
Address Book Integration and De‑Duplicated Invites
Given the account address book contains multiple entries for the same person under different aliases and emails When split approval invites are generated Then exactly one invite is sent per canonical entity, deduplicated across all known contact methods And invites include role context and required/optional status derived from the split or role And per-recipient analytics aggregate under the canonical profile ID regardless of contact method used And no invite is sent to contacts marked opted-out or retired in the canonical profile
Confidence Score Computation and Visibility
Given the resolver evaluates matching signals (deterministic IDs, name similarity, email/handle normalization, past co-credit network) When computing a match Then it produces a confidence score between 0.00 and 1.00 and stores it with the decision And deterministic ID matches alone yield a score of 1.00; deterministic conflicts yield <= 0.30 and trigger a manual review flag And the review UI displays the score and the top three contributing factors with their weights for each suggestion Given test fixtures: Fixture A (same ISNI and same IPI) produces 1.00; Fixture B (0.88 name similarity, 2 shared projects, matching normalized email) produces a score between 0.80 and 0.95
Split Validation & Conflict Resolution
"As an artist, I want the system to validate that splits are complete, consistent, and sum correctly by rights type so that we catch errors before release."
Description

Validate completeness and internal consistency of splits across rights types (writer, publisher, master, neighboring), ensuring totals meet configured constraints (e.g., 100% per rights bucket), roles are compatible with rights claimed, and the same party is not counted twice. Compare values across harvested sources to detect conflicts, highlight discrepancies with source provenance, and propose resolution options (choose a source, average, manual edit). Provide guardrails such as minimum increment rules, rounding controls, and currency/points handling if applicable. Lock confirmed splits per version, record change history, and expose a clear status banner in the project to prevent release until validation passes. Integrates with IndieVault’s versioning and release‑readiness checks.

Acceptance Criteria
Per-Rights Bucket Total Validation at Save
Given a project configured with rights buckets (writer, publisher, master, neighboring) each with a target total of 100% and a tolerance of 0% And split lines exist assigning shares per party within each bucket When the user attempts to save or mark splits as confirmed Then the system calculates totals per bucket and validates that each equals the configured target within the configured tolerance And if any bucket is under or over the target, the save/confirm is blocked and an inline error identifies the offending bucket(s) and the exact delta required to reach compliance And if all buckets meet the target within tolerance, the save/confirm succeeds
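The per-bucket total check above can be sketched as a pure validation function. A minimal sketch under stated assumptions: splits arrive as (bucket, party, share) tuples, `Decimal` avoids float drift, and the returned deltas are the "exact delta required to reach compliance" the criteria call for.

```python
from decimal import Decimal

def validate_buckets(splits, target=Decimal("100"), tolerance=Decimal("0")):
    """splits: iterable of (bucket, party, share). Returns {} when every
    bucket total is within tolerance of the target; otherwise maps each
    offending bucket to the signed delta still needed to reach the target."""
    totals: dict[str, Decimal] = {}
    for bucket, _party, share in splits:
        totals[bucket] = totals.get(bucket, Decimal("0")) + Decimal(str(share))
    return {b: target - t for b, t in totals.items() if abs(t - target) > tolerance}
```

An empty result unblocks save/confirm; a non-empty result drives the inline errors and the release-readiness gate described later in this section.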
Role-to-Rights Compatibility Check on Entry
Given parties are assigned roles (e.g., writer, publisher, producer, label, featured artist) And each rights bucket is configured with allowed roles When a party claims a share in a rights bucket that does not allow their role Then the system flags the row with an error and blocks save And the error message lists the allowed roles for that bucket and suggests correction (change role or move share to a compatible bucket) And when the claim is corrected to an allowed role/bucket combination, the error clears and save is permitted
Alias De-duplication to Prevent Double Counting
Given harvested parties may appear with multiple aliases and identifiers (name variants, ISNI, IPI) When the same real-world party is detected more than once within the same rights bucket via matching authoritative identifiers or high-confidence alias linkage Then the entries are merged into a single party line for that bucket without increasing the total share And the UI indicates the merged aliases with provenance chips and a merge note And if two entries share a name but have distinct authoritative identifiers, they are not auto-merged and are flagged for manual review
Cross-Source Conflict Detection and Resolution Workflow
Given split values are harvested from multiple sources (credits, ISNI/IPI, past projects) with source provenance and timestamps When conflicting values are detected for a party or bucket (different totals, role mismatches, or divergent shares) Then a discrepancy banner is displayed and conflicted fields are highlighted with a per-source value list And the user is offered resolution options: choose a single source value, average selected source values, or enter a manual value And upon choosing a resolution, the system records a decision log including user, timestamp, method selected, sources considered, and before/after values And the resolved value updates the working draft and triggers re-validation of all affected buckets
Guardrails: Min Increment, Rounding, and Points/Currency Handling
Given configuration exists for minimum increment (e.g., 0.5%), rounding mode (e.g., bankers, up, down), and unit per bucket (percent, points out of 100, or currency share ratio) When a user edits shares or applies a conflict resolution Then values snap to the nearest allowed increment using the configured rounding mode And totals are computed using normalized units per bucket and compared to the configured target totals And master/neighboring shares entered as points (e.g., 3.5/100) display consistently alongside percentages and validate against the bucket target And any value that cannot be represented within the increment rule is rejected with a precise message indicating the nearest valid values
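The increment-snapping guardrail above can be sketched with `Decimal` rounding modes. Illustrative only: the mode names (`bankers`, `up`, `down`) and the `snap_to_increment` helper are assumptions mirroring the examples in the criteria, with `bankers` mapped to round-half-to-even.

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_CEILING, ROUND_FLOOR

# Hypothetical mode names from the configuration, mapped to Decimal modes.
ROUNDING_MODES = {"bankers": ROUND_HALF_EVEN, "up": ROUND_CEILING, "down": ROUND_FLOOR}

def snap_to_increment(value, increment="0.5", mode="bankers") -> Decimal:
    """Snap a share to the nearest allowed increment using the configured
    rounding mode, e.g. 12.3 with a 0.5% increment snaps to 12.5."""
    v, inc = Decimal(str(value)), Decimal(str(increment))
    steps = (v / inc).quantize(Decimal("1"), rounding=ROUNDING_MODES[mode])
    return steps * inc
```

Computing in whole increment steps also makes the "nearest valid values" in a rejection message trivial to report: they are `steps * inc` and `(steps ± 1) * inc`.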
Version Locking and Change History for Confirmed Splits
Given a user marks the current split set as confirmed for version vN When confirmation is executed Then all split fields for vN become read-only and cannot be edited directly And any subsequent change requires creating a new version vN+1, leaving vN immutable And a change history entry is recorded capturing user, timestamp, fields changed, previous value, new value, reason (optional), and source of change (manual or resolution action) And prior confirmed versions remain available for audit with checksum or hash to verify integrity
Release Blocker Status Banner and Readiness Integration
Given the project has a split validation status derived from all checks When any validation rule fails or splits are not confirmed Then a persistent status banner indicates the blocking issues with counts and deep links to each failing bucket/row And the release-readiness gate is set to Blocked and release actions are disabled via the gate API and UI controls And when all validations pass and splits are confirmed, the banner displays Validated and the release-readiness gate transitions to Ready, enabling release actions
Missing Party Suggestions
"As a label admin, I want the system to suggest likely missing parties based on roles and history so that I can quickly add anyone we overlooked."
Description

Use heuristics and lightweight models to suggest likely missing contributors based on track metadata, common role patterns (e.g., writer often paired with publisher), genre/team history, and co‑credit graphs from prior IndieVault projects. Surface suggestions in context with confidence levels, rationale, and quick‑add actions that prefill role and contact details from canonical profiles where available. Avoid over‑notification with thresholds and cooldowns, and never auto‑invite without user confirmation. Suggestions are logged for learning and can be dismissed or accepted, feeding back to improve future recommendations.

Acceptance Criteria
Contextual Suggestion via Co‑Credit Graph
Given project P has track T with contributor E already credited and historical data shows contributor C has ≥3 prior co‑credits with E in the same genre within the past 24 months, and C is not on T’s current split When the user opens Split Resolver for T Then a suggestion card for C is displayed within 2 seconds showing: full name, inferred role(s), confidence score (0–1) ≥ 0.70, and a rationale referencing the co‑credit evidence (e.g., "Frequent co‑credit with E: 3 projects, Genre=G") And the suggestion includes a visible Quick Add action and View Profile link And the system records a suggestion_shown event with fields: suggestion_id, subject_profile_id, project_id, track_id, heuristic='co_credit', confidence, rationale_ids, timestamp
Alias De‑duplication to Canonical Suggestion
Given contributor C has aliases A1 and A2 mapped to canonical profile ID PC, and track metadata mentions A2 When suggestions are generated Then only one suggestion for PC is shown (no duplicates for A1/A2) And the suggestion displays the canonical display name and an alias badge for A2 And Quick Add and View Profile actions reference PC And after adding C, the suggestion list contains zero entries with subject_profile_id = PC
Role Pattern Suggestion: Writer → Publisher
Given writer W is present on the split for track T, W has a canonical link to publisher P, and no publisher is currently assigned for W’s share When the user opens the Suggestions panel Then the system suggests P with role = Publisher, required/optional = Required by rule, confidence ≥ 0.80 And the rationale states "Writer without publisher for W" And Quick Add preselects the link between P and W
Quick Add Prefill and Required/Optional Assignment
Given a suggestion S for contributor C with canonical profile containing roles {R1,...}, primary contact email E, and rule engine R defining required/optional by role When the user clicks Quick Add on S Then a new split entry is created for C with roles set to {R1,...} from S and contact email set to E And required/optional is set according to rule engine R for each role at ruleset version v_current And the UI presents a confirmation state where the user can review and save; no invite is sent yet
Confidence Thresholds and Cooldown for Notifications
Given a generated suggestion S has confidence < 0.60 When rendering the Suggestions panel Then S is not displayed and no notification is produced Given a suggestion S with confidence ≥ 0.60 is dismissed at time t0 for track T in project P When the same suggestion S would be generated again within 14 days for P/T Then it is suppressed and a cooldown event is logged And after 14 days, S may reappear only if its confidence has increased by ≥ 0.10 from the last shown confidence
Invite Confirmation Gate (No Auto‑Invite)
Given the user accepts a suggestion via Quick Add or Add and saves the split When the user has not explicitly triggered "Send Invites" or enabled an invite toggle for the new party Then no outbound invites (email, link, webhook) are sent And the UI indicates the party as "Pending invite" until explicit confirmation And audit logs show no invite_sent event for the party prior to explicit confirmation
Suggestion Logging and Feedback Capture
Given any suggestion S is shown, accepted, or dismissed When the event occurs Then a log record is written with fields: suggestion_id, subject_profile_id, project_id, track_id, heuristic_sources, confidence, rationale_ids, action ∈ {shown, accepted, dismissed}, actor_user_id, timestamp And for dismissals, a dismiss_reason is captured from {Not applicable, Already added, Wrong person, Low confidence, Other:text} And when querying telemetry for the last 24 hours Then ≥ 99% of suggestion events are retrievable within 3 seconds
Role-Based Required/Optional Assignment
"As a manager, I want required vs optional approvers to be assigned automatically based on roles and split thresholds so that I only chase the people whose signoff is needed."
Description

Automatically classify each discovered party as required or optional for signoff based on configurable rules: role (e.g., composer, producer), split percentage thresholds, rights buckets, contractual flags, and project templates. Allow account‑level defaults with per‑project overrides and exceptions. Display required/optional status in the resolver UI and propagate it to the invite workflow to prioritize required approvals first. Provide policy simulation so users can preview how rule changes affect current and future projects. Persist decisions to the project’s approval matrix and expose them via API for downstream automation.

Acceptance Criteria
Role and Split Threshold Classification under Template Precedence
Given account defaults Composer=Required, Producer=Optional, SplitRequiredThreshold=10%, and a project uses template "Single Release" where Producer=Required And discovered parties: A (Composer 50%), B (Producer 5%), C (Featured Artist 20%) When Split Resolver runs classification Then A is Required with source "role-rule" And B is Required with source "template" And C is Required with source "split-threshold" And for identical inputs the classification output is consistent across repeated runs
Contractual Flags and Rights Buckets Precedence
Given account defaults Producer=Optional and SplitRequiredThreshold=10% And party D has role Producer, split 2%, contractual flag MustApprove=true And party E has role Composer, split 30%, contractual flag NoApprovalNeeded=true And party F has any role, split 0%, rights bucket "Master Owner" When classification runs Then D is Required with source "contract-flag" And E is Optional with source "contract-flag" And F is Required with source "rights-bucket" And precedence is applied in order: contract-flag > rights-bucket > template > role-rule > split-threshold
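The precedence order above (contract-flag > rights-bucket > template > role-rule > split-threshold) maps naturally onto an ordered series of checks. A sketch under stated assumptions: the `party` and `policy` dict shapes are illustrative, and a party matching no rule defaults to Optional with the lowest-precedence source, which the criteria leave unspecified.

```python
def classify(party: dict, policy: dict) -> tuple[bool, str]:
    """Return (required, source) by walking the precedence order top-down;
    the first matching rule wins."""
    flag = party.get("contract_flag")
    if flag == "MustApprove":
        return True, "contract-flag"
    if flag == "NoApprovalNeeded":
        return False, "contract-flag"
    if party.get("rights_bucket") in policy.get("required_buckets", set()):
        return True, "rights-bucket"
    if party["role"] in policy.get("template_required_roles", set()):
        return True, "template"
    if party["role"] in policy.get("required_roles", set()):
        return True, "role-rule"
    if party.get("split", 0) >= policy.get("split_threshold", 10):
        return True, "split-threshold"
    return False, "split-threshold"  # no rule matched: Optional by default
```

Replaying the worked examples from the criteria (parties A–F) against this ordering reproduces the expected sources, and the function's determinism gives the "consistent across repeated runs" guarantee for free.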
Per-Project Overrides and Exceptions with Audit
Given policy-based classification exists and user manager@example.com has Manage Approvals permission When the user changes party B from Optional to Required and enters reason "Label mandate" Then party B becomes Required with source "manual-override" for this project only And an audit record is stored with projectId, partyId, previousStatus, newStatus, reason, userId, timestamp And the UI shows an Override badge and offers Revert to Policy And account-level defaults and templates remain unchanged
Resolver UI Required/Optional Badges and Explanations
Given a project with classified parties When the Resolver UI loads Then each party row shows a Required or Optional badge, the decision source, and a tooltip describing the applied rule And the header shows total counts for Required and Optional And users can filter to Required only and sort with Required first
Invite Workflow Prioritizes Required Approvals and Enforces Blocking
Given a project has 5 Required and 3 Optional parties and invites are configured When the user sends invites from Split Resolver Then invites to Required parties are queued and dispatched before Optional parties And project approval is blocked until 100% of Required parties approve And Optional parties’ responses are recorded but do not block approval And each invite payload includes the recipient’s required/optional status
Policy Simulation Preview Without Side Effects
Given current policy SplitRequiredThreshold=10% and Producer=Optional When a user proposes SplitRequiredThreshold=5% and Producer=Required in Policy Simulation and runs preview Then the system displays counts of impacted parties on current projects and by template for future projects, including a list of parties whose status would change And no live classifications change until the user confirms Apply And upon Apply with confirmation, the policy is saved and affected current projects are reclassified with a downloadable diff report
Approval Matrix Persistence and API Exposure
Given classifications and overrides are finalized for a project When the approval matrix is persisted Then the matrix stores one entry per party with fields partyId, required(boolean), source(enum: contract-flag|rights-bucket|template|role-rule|split-threshold|manual-override), lastUpdatedAt And GET /api/v1/projects/{projectId}/approval-matrix returns 200 with the matrix data And subsequent overrides are reflected in the matrix and API within 5 seconds
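A minimal sketch of the persisted matrix entry and the shape a `GET /api/v1/projects/{projectId}/approval-matrix` response body might take. The field names follow the criterion above; the `matrix_payload` helper and the response envelope are assumptions, not a documented IndieVault contract:

```python
import json
from dataclasses import dataclass, asdict
from enum import Enum

class Source(str, Enum):
    # enum values mirror the source field in the criterion above
    CONTRACT_FLAG = "contract-flag"
    RIGHTS_BUCKET = "rights-bucket"
    TEMPLATE = "template"
    ROLE_RULE = "role-rule"
    SPLIT_THRESHOLD = "split-threshold"
    MANUAL_OVERRIDE = "manual-override"

@dataclass
class MatrixEntry:
    partyId: str
    required: bool
    source: Source
    lastUpdatedAt: str                 # ISO-8601 timestamp

def matrix_payload(project_id: str, entries: list) -> str:
    """Serialize one entry per party, as the hypothetical API response might."""
    return json.dumps({"projectId": project_id,
                       "matrix": [asdict(e) for e in entries]})
```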
Contextual Split Invite Delivery & Tracking
"As a manager, I want tailored, secure invites with project context sent to each participant and tracked through confirmation so that I have visibility and don't send generic mass emails."
Description

Generate and send tailored invitations to each party with role‑specific context: the project/release, tracks affected, proposed split details, the user who requested confirmation, and what action is required. Use secure, expiring magic links tied to the canonical identity; support email and SMS delivery with localization and templates. Integrate with IndieVault’s existing expiring link infrastructure and per‑recipient analytics to track delivered/opened/clicked/confirmed/bounced statuses, and trigger reminder cadences with quiet hours and rate limiting. Provide a single dashboard to monitor invite health, resend or revoke invites, and export activity via webhooks for external systems.

Acceptance Criteria
Expiring, Identity-Bound Magic Link Generation
Given a resolved canonical identity for a recipient, When an invite is created, Then generate a unique, single-use magic link token bound to the canonical_identity_id. And the link uses IndieVault's existing expiring link service endpoint and records an audit entry with invite_id, recipient_id, and expiry. And the link expires after the configured TTL (default 7 days) and is immediately invalid after first successful confirmation or revoke. And subsequent clicks on an expired or revoked link return a 410 response and a user-facing "Link expired" page. And the token provides at least 128 bits of entropy, is signed, and is only transmitted over HTTPS. And opening the link pre-loads the project/release, affected tracks, proposed split details, requester, and required action for that recipient.
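The token requirements above (≥128 bits of entropy, signed, identity-bound, expiring) can be sketched with standard-library primitives. This is a minimal illustration, not IndieVault's link service; the key would live in a secrets manager, and single-use/revocation state would need server-side storage on top of this:

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

SIGNING_KEY = b"server-side-secret"          # illustrative; keep in a KMS in practice
DEFAULT_TTL = 7 * 24 * 3600                  # default 7-day TTL from the criterion

def mint_token(canonical_identity_id: str, ttl: int = DEFAULT_TTL) -> str:
    """Mint a signed, expiring token bound to the canonical identity."""
    body = json.dumps({
        "cid": canonical_identity_id,
        "nonce": secrets.token_urlsafe(16),  # 128 bits of randomness
        "exp": int(time.time()) + ttl,
    }, separators=(",", ":")).encode()
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(body).decode() + "."
            + base64.urlsafe_b64encode(sig).decode())

def verify_token(token: str):
    """Return claims if the signature is valid and unexpired; else None (410 upstream)."""
    try:
        body_b64, sig_b64 = token.split(".")
        body = base64.urlsafe_b64decode(body_b64)
        sig = base64.urlsafe_b64decode(sig_b64)
    except ValueError:
        return None
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(body)
    return claims if claims["exp"] > time.time() else None
```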
Role-Targeted Invite Content and Localization via Email/SMS
Given a recipient with role(s) and preferred locale, When generating the invite, Then select the template by role and locale; if no match, fall back to en-US. And the invite content includes project/release name, affected track list (show up to 20 with "+N more" overflow), proposed split %, requesting user name/org, and the explicit required action (Confirm/Adjust/Decline) with deadline if set. And email delivery renders the full template with dynamic placeholders; SMS delivery sends a condensed summary plus the magic link. And personalization uses the recipient's display name; no other recipients are visible (no CC or group threads). And recipients marked Optional are labeled as such in content and are not blocking confirmation workflows. And template rendering fails fast with a validation error if required placeholders are missing, preventing send.
Per-Recipient Delivery and Engagement Tracking
Given an invite is sent, Then track per-recipient, per-channel events: sent, delivered, opened (email only), clicked, confirmed, bounced, each with an ISO-8601 timestamp. And append signed tracking parameters to the magic link to attribute channel and campaign without exposing PII. And for recipients contacted via multiple channels, consolidate engagement to a single invite_id timeline while preserving channel-specific metrics. And a confirmation from any channel transitions invite status to Confirmed and suppresses further sends. And analytics are queryable via dashboard/API and match raw event counts within ±1% over a 24h window. And bounced events from email (hard/soft) and SMS (carrier undeliverable) are captured and surfaced with reasons where available.
Reminder Cadence with Quiet Hours and Rate Limiting
Given a project-level reminder policy (e.g., initial + 3 reminders at 3d/7d/14d), When an invite remains unconfirmed, Then schedule reminders according to the policy. And do not send reminders to invites in Confirmed, Declined, or Revoked states. And enforce quiet hours of 21:00–08:00 in the recipient's timezone; if timezone is unknown, infer from phone country or project timezone; otherwise default to UTC. And enforce rate limits: max 1 notification per recipient per 24h and max 100 notifications per project per hour; excess is queued with FIFO order. And select reminder channel by using the last successfully delivered channel; if none exists, attempt email then SMS (configurable). And exclude recipients with recent hard bounces until contact info is updated; log suppression reasons. And create an audit log entry for each scheduled and sent reminder with correlation to the original invite_id.
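The 21:00–08:00 quiet-hours rule could be applied like this. A minimal sketch using the recipient's resolved timezone; fallback inference (phone country, project timezone, UTC) and rate limiting are assumed to happen before this function is called:

```python
from datetime import datetime, time, timedelta
from zoneinfo import ZoneInfo

QUIET_START, QUIET_END = time(21, 0), time(8, 0)   # quiet hours 21:00-08:00 local

def next_send_time(utc_now: datetime, tz_name: str) -> datetime:
    """Shift a reminder out of the recipient's quiet hours; returns a UTC instant."""
    local = utc_now.astimezone(ZoneInfo(tz_name))
    if local.time() >= QUIET_START:
        # late evening: defer to 08:00 the next morning
        local = (local + timedelta(days=1)).replace(hour=8, minute=0,
                                                    second=0, microsecond=0)
    elif local.time() < QUIET_END:
        # early morning: defer to 08:00 the same day
        local = local.replace(hour=8, minute=0, second=0, microsecond=0)
    return local.astimezone(ZoneInfo("UTC"))
```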
Invite Health Dashboard: Monitor, Resend, Revoke
Given a user with Splits:Manage permission, When viewing the Invite Health dashboard, Then display a table of recipients with current status, last activity timestamp, channel, locale, and attempt count. And provide filters for project/release, role, required/optional, status (Sent/Delivered/Opened/Clicked/Confirmed/Bounced/Revoked), channel, and date range; results update within 1s. And selecting a row opens a details panel showing timeline of events, content preview in the recipient's locale, and actions: Copy Link, Resend, Revoke, Regenerate Link. And bulk actions allow Resend and Revoke on multi-select with confirmation modal and per-recipient results summary. And real-time updates reflect new events via websocket or 30s polling fallback without full page refresh. And access controls hide action buttons for users without manage rights; read-only users can view but not act. And exporting CSV downloads the current filtered view with column headers and ISO-8601 timestamps.
Webhook Activity Export: Secure, Reliable, Idempotent
Given a subscribed webhook endpoint, When invite lifecycle events occur, Then emit events: invite.created, invite.sent, invite.delivered, invite.opened, invite.clicked, invite.bounced, invite.confirmed, invite.revoked, invite.reminded. And each payload includes event_id, invite_id, recipient_id, canonical_identity_id, project_id, channel, locale, event_type, occurred_at, and a signature header (HMAC-SHA256) with timestamp. And receivers can validate signatures and reject replays older than 5 minutes; the system drops requests failing signature or skew checks. And deliveries use exponential backoff retry (e.g., 1m, 5m, 15m, 1h, 6h) for up to 24h until a 2xx response is received. And idempotency is guaranteed: repeated delivery of the same event_id is acceptable and must not create duplicate side effects. And endpoints are automatically disabled after 5 consecutive failures and owners are notified; a Test Delivery tool verifies configuration from the UI.
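The signature and replay-skew checks above could look like the following on the receiver side. A sketch under stated assumptions: the exact string the HMAC covers (here `"{timestamp}.{payload}"`) and the header format are illustrative, not a published IndieVault webhook spec:

```python
import hashlib
import hmac
import time

WEBHOOK_SECRET = b"shared-webhook-secret"    # illustrative; exchanged at subscription time
MAX_SKEW = 300                               # reject replays older than 5 minutes

def sign_payload(payload: bytes, ts: int) -> str:
    """HMAC-SHA256 over timestamp + payload, hex-encoded."""
    return hmac.new(WEBHOOK_SECRET, f"{ts}.".encode() + payload,
                    hashlib.sha256).hexdigest()

def verify_webhook(payload: bytes, ts: int, signature: str, now=None) -> bool:
    """Constant-time signature check plus timestamp-skew rejection."""
    now = int(time.time()) if now is None else now
    if abs(now - ts) > MAX_SKEW:
        return False                         # stale or future-dated: treat as replay
    return hmac.compare_digest(sign_payload(payload, ts), signature)
```

Idempotency would sit above this layer: the receiver records processed `event_id` values and skips duplicates after a successful signature check.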

Nudge Engine

Timezone‑aware reminders, gentle escalations, and one‑tap mobile signing links keep approvals moving. Include what’s pending, what changed, and each party’s stake. Auto‑escalate to managers after X days, pause on weekends, and surface who’s blocking your deadline.

Requirements

Timezone-Aware Scheduling & Quiet Hours
"As a project manager, I want reminders sent during each recipient’s local business hours so that nudges are timely, respectful, and more likely to get a response."
Description

Schedules reminders based on each recipient’s local timezone and configurable business hours, with automatic weekend pauses and optional regional holiday calendars. The scheduler determines the next permissible send window per recipient, adjusts for daylight saving time, and respects per-contact quiet hours. Includes profile-based timezone detection with manual override, per-project defaults, and safeguards to prevent duplicate sends when windows overlap.

Acceptance Criteria
Schedule to Next Permissible Window by Recipient Timezone
Given a recipient profile timezone of America/Los_Angeles And project business hours are 09:00–17:00 Monday–Friday with weekend pause enabled And the recipient has no per-contact quiet hours When a reminder is queued at 18:30 local time on a Friday Then the next send is scheduled for 09:00 local time on the following Monday And the scheduled timestamp is stored as the UTC equivalent of 09:00 America/Los_Angeles on that Monday And no message is sent before that time
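The Friday-evening-to-Monday-morning roll-forward in this scenario could be computed as follows. A minimal sketch using `zoneinfo`; the function names and the fixed 09:00–17:00 Monday–Friday window are illustrative, with holiday calendars and per-contact quiet hours layered on separately:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

BUSINESS_START, BUSINESS_END = 9, 17     # 09:00-17:00 local
BUSINESS_DAYS = range(0, 5)              # Monday(0) through Friday(4)

def next_window(queued_utc: datetime, tz_name: str) -> datetime:
    """Roll a queued reminder forward to the next business-hours window (UTC out)."""
    local = queued_utc.astimezone(ZoneInfo(tz_name))
    while not (local.weekday() in BUSINESS_DAYS
               and BUSINESS_START <= local.hour < BUSINESS_END):
        if local.weekday() not in BUSINESS_DAYS or local.hour >= BUSINESS_END:
            local = local + timedelta(days=1)   # past close or weekend: try next day
        local = local.replace(hour=BUSINESS_START, minute=0, second=0, microsecond=0)
    return local.astimezone(ZoneInfo("UTC"))    # stored as the UTC equivalent
```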
Daylight Saving Time Transition Handling
Given a recipient timezone of America/New_York and business hours 09:00–17:00 Monday–Friday And a reminder previously scheduled for 02:30 local time on the spring-forward DST transition day (02:00–02:59 does not exist) When the DST transition occurs Then the send time is adjusted to 03:00 local time the same day without error And only one message is sent Given a reminder scheduled for 01:30 local time on the fall-back DST transition day (the 01:00 hour repeats) When the repeated hour occurs Then the message is sent only once during the first permissible 01:30 instance And no duplicate is sent during the repeated 01:30
Per-Contact Quiet Hours and Manual Timezone Override Precedence
Given project business hours are 08:00–18:00 Monday–Friday And a contact has quiet hours set to 20:00–10:00 local time When a reminder is queued at 07:45 local time on a business day Then the next permissible send is 10:00 local time the same day (quiet hours take precedence over project business hours) Given the same contact has profile-detected timezone America/New_York When a user sets a manual timezone override to Europe/Berlin Then subsequent scheduling uses Europe/Berlin for determining permissible windows And the override persists for future reminders for that contact until changed
Regional Holiday Calendar Skips
Given a project has business hours 09:00–17:00 Monday–Friday And the project has an attached regional holiday calendar (e.g., US Holidays) And the next business day for a recipient is marked as a holiday on that calendar When a reminder is queued outside permissible hours before that holiday Then the send is scheduled for 09:00 local time on the next non-holiday business day Given the regional holiday calendar is detached from the project When the same scenario occurs Then the send is scheduled for 09:00 local time on the next calendar business day regardless of the former holiday
Automatic Weekend Pause Enforcement
Given project business hours are 09:00–17:00 Monday–Friday with weekend pause enabled And a reminder is queued at 10:00 local time on a Saturday for a recipient When the scheduler determines the next permissible window Then the send is scheduled for 09:00 local time on the following Monday And no messages are sent during Saturday or Sunday for that recipient
Duplicate Send Safeguards for Overlapping Windows
Given two reminders for the same recipient, approval task, and channel are queued such that both resolve to the same permissible send window (same local date and within 09:00–17:00) When the permissible window opens Then only one message is sent And the additional reminder(s) are marked as suppressed due to duplicate window And an audit log entry records the suppression with correlation IDs And no additional retry sends occur for the suppressed reminders within that window or due to rescheduling from holidays/weekends
Escalation Rules & Routing
"As a release owner, I want overdue approvals to auto-escalate to managers after X days so that critical releases don’t stall on a single reviewer."
Description

Provides a rule builder to automatically escalate overdue approvals after X days of inactivity, with multi-step routes (e.g., notify reviewer, then manager, then project owner). Supports per-project policies, SLA targets, and conditional criteria (e.g., value at risk, release date proximity). Escalations annotate prior context, include next steps, and record all actions in the audit log. Includes safeguards: snooze, auto-pause if recipient is OOO, and resume logic.

Acceptance Criteria
Escalate overdue approval via multi-step route
Given an approval request with no activity for 48 hours and a route configured as [notify reviewer, escalate to manager after additional 24 hours, escalate to project owner after additional 24 hours] in the project policy When the inactivity threshold of 48 hours is reached Then the system sends an escalation notification to the reviewer and marks route step=1 in the approval record And when inactivity persists to 72 hours Then the system sends an escalation notification to the manager and marks route step=2 And when inactivity persists to 96 hours Then the system sends an escalation notification to the project owner and marks route step=3 And each send records delivery status and timestamp in the audit log And no step is sent more than once for the same approval instance
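The 48h/72h/96h route above is essentially a threshold walk. A sketch of that logic, assuming inactivity is measured in hours and the dispatcher tracks which steps have already fired (the `ROUTE` table and function name are hypothetical):

```python
# Route thresholds in hours of inactivity: reviewer at 48h, manager at 72h, owner at 96h.
ROUTE = [(48, "reviewer"), (72, "manager"), (96, "project owner")]

def due_steps(inactivity_hours: float, already_sent: set) -> list:
    """Return (step_number, recipient) pairs due now, skipping steps already sent.

    Skipping sent steps is what guarantees no step fires twice for one approval.
    """
    return [
        (i + 1, recipient)
        for i, (threshold, recipient) in enumerate(ROUTE)
        if inactivity_hours >= threshold and (i + 1) not in already_sent
    ]
```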
Apply per-project escalation policies
Given Project A has escalation policy P1 and Project B has escalation policy P2 When approvals are created in Project A and Project B Then each approval uses the policy defined for its project And if a project has no policy, the account default policy is applied And policy parameters (thresholds, routes, pauses) are stored with the approval at creation time and used for all subsequent calculations
Conditional escalation by release proximity and risk
Given an approval with release_date in ≤5 days and value_at_risk = High per policy thresholds When the conditional criteria are met Then the SLA target is set to 24 hours instead of the standard 48 hours And escalation step 1 triggers at 24 hours of inactivity And subsequent steps compress according to the policy’s conditional schedule And the UI labels the approval as "At Risk" and shows the adjusted SLA target and countdown
Escalation messages include context and next steps; audit trail recorded
Given an escalation is sent at any route step Then the message includes: items pending approval, changes since the last notification, the recipient’s stake/impact, the current blocker identity, and explicit next steps with one-tap action And the audit log records: step number, recipients, content summary hash, send timestamp, delivery outcome, and any subsequent open/click/action events with timestamps And the approval timeline view shows the escalation event in order with prior reminders
Auto-pause for out-of-office and resume routing
Given the next recipient in the route has an active OOO period covering the scheduled escalation time (from in-app OOO setting or connected calendar) When the escalation would otherwise send Then the system auto-pauses the escalation timers for that recipient and records the pause in the audit log And if a delegate is configured, the escalation is sent to the delegate; otherwise the route is held until the OOO period ends And upon OOO end, timers resume with the remaining duration, and the next escalation is recalculated from the resume time
Snooze defers future nudges and escalations
Given a recipient taps "Snooze" for 24 hours on an approval When snooze is applied Then no nudges or escalations are sent to that recipient during the snooze window And all pending timers for that recipient shift by the snooze duration And the snooze action and new schedule are recorded in the audit log And if the recipient acts (approve/decline/comment) during snooze, all future escalations for that approval instance are canceled
Timezone- and weekend-aware scheduling of escalations
Given a project policy that pauses on weekends and sends during business hours 09:00–18:00 local time And the next recipients each have a stored timezone When escalation timers are calculated Then inactivity durations are computed using the recipient’s local timezone And if a threshold falls on a weekend or outside business hours, the send time is deferred to the next business day at 09:00 local time And cross-timezone sequences maintain per-recipient local rules at each step
One-Tap Mobile Signing Links
"As a reviewer on mobile, I want a one-tap signing link so that I can approve or sign without hunting through emails or logging into a desktop."
Description

Generates secure, expiring, per-recipient signing links optimized for mobile. Links deep-link into the approval/signing flow with pre-filled context and minimal steps, support device detection, and enforce short-lived tokens tied to identity. Includes optional biometric/OTP verification, link revocation, and automatic status sync back to the release. Provides fallbacks to a responsive web flow if the native app is unavailable.

Acceptance Criteria
Per-Recipient Secure Expiring Link Generation
Given a release has a pending signature request for Recipient X with verified contact info When an authorized user generates a one-tap signing link for Recipient X Then the system issues a unique URL containing a short-lived token bound to Recipient X’s identity and the release ID And the token has a default TTL of 48 hours (configurable between 1 and 168 hours) And the link is accessible only to Recipient X; access by any other user/device results in HTTP 403 and an audit log entry And the link remains valid until either the signature is completed or the token expires, after which subsequent opens show a read-only state and cannot alter the signature
Deep-Link to Mobile Signing with Minimal Steps
Given Recipient X opens the link on a mobile device with the IndieVault app installed When the link is tapped Then the device resolves the universal/app link to open the IndieVault app And Recipient X lands directly on the signing screen with pre-filled context (release title, version, role) And the path from app open to submitting the signature requires no more than 2 taps And the signing screen first meaningful paint occurs within 2 seconds at p95 on a 4G connection
Responsive Web Fallback Without App
Given Recipient X opens the link on a device without the IndieVault app or on desktop When the link is tapped Then the user is routed to a responsive web signing flow preserving the same context And all critical actions are visible without horizontal scrolling at 360px viewport width And the path from page load to submitting the signature requires no more than 3 taps And the flow passes WCAG 2.1 AA for color contrast and provides accessible labels for signature controls
Step-Up Verification via Biometric or OTP
Given the signing policy for this release requires step-up verification When Recipient X’s device supports biometric authentication Then the app prompts for biometric and, upon success within 10 seconds, proceeds to the signing screen When biometric is unavailable or declined Then the system falls back to OTP delivered to X’s registered channel (SMS or email), expiring in 10 minutes And OTP entry is limited to 5 attempts before a 15-minute lockout; all attempts are logged with timestamp and IP/device fingerprint
Token Expiry Handling and Reissue
Given Recipient X opens an expired link When the link is accessed Then an “expired link” state is shown with a one-tap “Send me a new link” action And upon confirmation, a new link with a fresh token is issued and delivered via the configured channel within 30 seconds And all prior tokens for the same signing request are invalidated immediately
Immediate Link Revocation and Audit Trail
Given an owner or admin revokes Recipient X’s active signing link (or X’s role changes) When revocation is saved Then the link becomes unusable within 60 seconds across app and web And subsequent opens return a revoked state with guidance to request a new link And an immutable audit log records who revoked, when, which link, and any post-revocation access attempts
Automatic Status Sync Back to Release
Given Recipient X completes, declines, or times out the signing flow When the final state is reached Then the release’s approval status is updated within 5 seconds to reflect Signed, Declined, or Expired for X And downstream dependencies (nudges, escalations, dashboards) reflect the new status within 60 seconds And a per-recipient activity record captures open, verify, sign/decline timestamps for reporting
Multi-Channel Delivery with Fallback
"As a sender, I want notifications to reach recipients over the channel they respond to best with reliable fallback so that approvals progress even if one channel underperforms."
Description

Delivers nudges over email, push, and SMS with configurable per-recipient channel preferences and automatic fallback if delivery fails or engagement is low. Includes rate limiting, retry/backoff strategies, link tracking, and deliverability monitoring (bounces, spam complaints). Honors unsubscribe/opt-out and regional compliance requirements. Provides templates per channel to ensure consistent context across mediums.

Acceptance Criteria
Per-Recipient Channel Preference Enforcement
Given a recipient with channel preferences ordered [push, email, SMS] and all channels are available When a nudge is triggered Then the system delivers via push and does not send via email or SMS unless a fallback condition occurs Given a recipient has disabled or unsubscribed from a preferred channel When a nudge is triggered Then the system skips that channel and uses the next available preferred channel Given no preferred channels remain available When a nudge is triggered Then the system does not send the nudge and logs a 'no-route' event
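The preference-order selection above reduces to a first-match scan. A minimal sketch, assuming availability (unsubscribes, suppressions, disabled channels) has already been resolved into a set; a `None` result is where the 'no-route' event would be logged:

```python
def pick_channel(preferences: list, available: set):
    """Return the first preferred channel still available, else None (no-route)."""
    return next((channel for channel in preferences if channel in available), None)
```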
Automatic Fallback on Delivery Failure
Given delivery via the selected channel returns a permanent failure (e.g., hard bounce, invalid number, push unregistered) When the failure is recorded Then the system immediately attempts the next preferred channel within the configured fallback window without exceeding rate limits Given delivery returns transient failures When retries reach the configured max attempts for that channel Then the system attempts the next preferred channel Given a fallback send succeeds When additional fallbacks are pending for the same nudge Then the system cancels further fallback attempts
Engagement Tracking and Fallback on Low Engagement
Given a nudge is delivered via a channel When no open/tap/click is recorded within the configured engagement window Then the system attempts the next preferred channel as a low-engagement fallback Given any engagement (open/tap/click) is recorded on any channel for the nudge When a fallback is pending Then the system cancels the pending fallback Given multiple channels were attempted When analytics are aggregated Then the system attributes engagement to the channel where it occurred and prevents further fallbacks
Rate Limiting and Retry Backoff Across Channels
Given per-recipient rate limiting is configured When sends exceed the limit Then additional sends are queued or suppressed according to configuration and logged with a 'rate-limited' reason Given a channel provider throttles requests When sending bursts Then the system enforces per-channel throughput caps and smooths sends with exponential backoff and jitter Given a send attempt fails with a retryable error When retries are scheduled Then the system uses exponential backoff with jitter, respects the configured max attempts, and ensures idempotency (no duplicate deliveries)
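One common way to realize "exponential backoff with jitter" is full jitter: draw uniformly between zero and the exponentially growing cap. This is a sketch of that one strategy, not a claim about which variant the system uses:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 3600.0) -> float:
    """Full-jitter backoff: uniform in [0, min(cap, base * 2^attempt)] seconds.

    The jitter spreads retries out so throttled senders don't retry in lockstep.
    """
    return random.uniform(0, min(cap, base * (2 ** attempt)))
```

Idempotency is handled separately: each send carries a stable key so a retried delivery that already succeeded is recognized and not duplicated.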
Deliverability Monitoring and Alerts
Given sends occur across channels When bounces (hard/soft), spam complaints, or provider rejections are received Then the system records the event with timestamp, reason, and channel and updates recipient deliverability status Given a recipient registers a hard bounce or spam complaint on a channel When future sends are evaluated Then the system suppresses that channel for the recipient and notes the suppression reason Given deliverability KPIs exceed configured thresholds When monitoring runs Then the system raises an alert and surfaces the affected segment and channel
Channel-Specific Templates Preserve Context Consistency
Given a nudge is prepared for email, push, and SMS When templates are rendered Then each channel contains the same core context (release name, due date, asset summary, recipient stake, and a unique tracking link) tailored to channel constraints Given SMS length constraints apply When the message exceeds the single-SMS limit Then the system applies concise template variants and link shortening to keep within the configured segment policy Given templates include placeholders When rendering occurs Then all placeholders resolve without errors, and previews are available per channel before send
Compliance and Opt-Out Enforcement Across Channels
Given a recipient is globally unsubscribed or has opted out of a specific channel When a nudge is evaluated for send Then the system does not send via opted-out channels and logs 'opt-out' with scope and timestamp Given regional compliance rules apply based on recipient locale When composing SMS or email Then the system includes required sender identification, opt-out instructions, and honors any configured quiet hours for that region Given an SMS opt-out keyword or email unsubscribe click is received When processing inbound events Then the system updates consent records immediately and suppresses future sends accordingly
Context-Rich Reminder Content
"As an approver, I want reminders that summarize what’s pending, what changed since I last looked, and why my approval matters so that I can make a quick, informed decision."
Description

Assembles reminders that include exactly what’s pending, what changed since the last notification, and each party’s stake (e.g., deadlines, dependencies, impact). Generates concise diffs, embeds asset thumbnails where relevant, and personalizes copy by role (reviewer, manager, legal). Prevents information overload by summarizing and linking to details. Maintains a per-recipient change cursor to ensure only new changes are called out.

Acceptance Criteria
Per-Recipient Change Diff in Reminder Content
Given recipient R has previously received a reminder for release X When a new reminder is generated at time T Then include only items with changeTimestamp > R.changeCursor and <= T And label each changed item with changeType (added|updated|removed) and changeTimestamp And render an inline diff for text fields capped at 200 characters with ± markers and 20-character context windows And exclude unchanged items from the "What's new" section
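The per-recipient cursor filter above is a simple half-open window over change timestamps. A minimal sketch (field and function names mirror the criterion but are illustrative):

```python
from datetime import datetime

def whats_new(changes: list, cursor: datetime, now: datetime) -> list:
    """Items with changeTimestamp strictly after the recipient's cursor, up to now.

    The strict lower bound is what prevents re-listing changes already sent.
    """
    return [c for c in changes if cursor < c["changeTimestamp"] <= now]
```

After the reminder is persisted, the recipient's cursor is advanced to `now`, so the next run's window starts where this one ended.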
Role-Personalized Copy and Stakes
Given recipient.role in {Reviewer, Manager, Legal} or unknown When generating the reminder content Then apply the role-specific subject and opening paragraph template And include a "Your stake" section listing requiredAction, deadline (localized), impactedDependencies count, and escalation target (if any) And for Legal include contractId(s); for Manager include blockingActor(s); for Reviewer include asset name(s) awaiting action And fallback to neutral copy if role is unknown
Visual Asset Thumbnails and Fallbacks
Given a changed item is a visual asset (artwork or video) When composing the reminder Then embed a 200x200 thumbnail linked to the asset preview And apply a semi-transparent watermark overlay if watermarkable=true And include alt text "Thumbnail: {assetName}" And for non-visual assets, show a 32x32 type icon instead of a thumbnail
Information Overload Guardrails
Given the assembled reminder exceeds 8 changed items or 800 words When finalizing the message Then present a summary list of up to 8 bullets grouped by type with per-type counts And include a "View full changes" deep link to the detailed activity view And ensure subject length <= 60 characters and body length <= 800 words And achieve Flesch-Kincaid grade <= 8 by truncating non-essential copy as needed
Change Cursor Persistence and Idempotence
Given a reminder for recipient R is generated at time T When persisting the reminder Then set R.changeCursor = T and store included changeIds And subsequent reminders for R must not re-list stored changeIds in "What's new" And if no changes exist since R.changeCursor, render the "No new changes" nudge template without a "What's new" section
Pending Items Snapshot Accuracy
Given there are N pending actions for recipient R on release X When generating the reminder Then include a "Pending for you" section listing each pending action with assetName, actionType (approve/sign/review), dueDate (localized), and current owner And ensure the count displayed equals N And exclude items already completed or not assigned to R
Blocker Visibility & Deadline Health
"As a release owner, I want a clear view of who is blocking and how close we are to breaching deadlines so that I can intervene before timelines slip."
Description

Surfaces who is currently blocking a release with a real-time list of outstanding approvals, their last activity, and predicted risk to the deadline. Provides a dashboard widget, inline indicators in release folders, and quick actions to nudge or escalate. Shows countdowns to SLA breaches and suggests next-best actions based on past responsiveness.

Acceptance Criteria
Real-Time Blockers List Accuracy
Given a release with N pending approvals, When the dashboard loads, Then the blockers list displays exactly N entries with approver name, role, required action, and last activity timestamp in the approver’s local timezone. Given a pending approver completes their approval, When the system receives the event, Then the corresponding blocker is removed within 5 seconds and countdowns and risk score recalculate. Given network latency or API error occurs, When the blockers list cannot fully refresh, Then a non-blocking warning is shown, the last successful refresh time is displayed, and the system retries up to 3 times with exponential backoff. Given an approver has no activity for ≥72 hours, When the list renders, Then “last activity” shows a relative time (e.g., “3d ago”) with exact timestamp on hover.
Dashboard Widget: Deadline Health & Countdown
Given an SLA deadline exists, When the widget renders, Then it shows a countdown in D:HH:MM, color-coded as green (>72h), amber (24–72h), or red (<24h). Given at least one blocker exists, When the widget renders, Then it shows the top blocker (most overdue/highest risk) with a one-tap Nudge action. Given time elapses, When 60 seconds pass, Then the countdown decrements without page refresh and remains accurate within ±1 second. Given the countdown reaches zero, When the SLA is breached, Then the widget labels “SLA Breached,” switches to red, and an audit event with timestamp is recorded.
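The D:HH:MM countdown and color bands could be derived as follows. An illustrative sketch; the criterion leaves the exact boundary behavior at 24h and 72h open, so this version treats exactly 72h as amber and exactly 24h as amber:

```python
def countdown(seconds_remaining: int):
    """Format remaining time as D:HH:MM and pick the widget color band."""
    days, rem = divmod(max(seconds_remaining, 0), 86400)
    hours, rem = divmod(rem, 3600)
    minutes = rem // 60
    label = f"{days}:{hours:02d}:{minutes:02d}"
    hours_left = seconds_remaining / 3600
    color = "green" if hours_left > 72 else "amber" if hours_left >= 24 else "red"
    return label, color
```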
Inline Blocker Indicators in Release Folders
Given a release folder contains assets with pending approvals, When the folder view loads, Then each affected asset shows an inline “Blocked” badge, the blocker’s avatar(s), and a tooltip with required action and last activity. Given an asset becomes unblocked, When the approval is saved, Then the badge and avatars are removed within 5 seconds and the list reorders per current sort without full page reload. Given multiple blockers exist for an asset, When displayed, Then a stacked avatar with a count appears and the tooltip lists all remaining approvers and their statuses.
Quick Actions: Nudge and Escalate
Given a blocker is selected, When the user taps Nudge, Then a timezone-aware reminder is sent including pending items, changes since last request, and the recipient’s stake, and an audit log records channel, sender, recipient, and timestamp. Given Auto-escalate after X days is enabled with X=3, When a blocker remains unresponsive for 3 business days, Then the system escalates to the manager at 9:00 AM recipient local time on the next business day and marks the blocker as escalated. Given weekends are excluded, When a nudge would be sent on Saturday or Sunday, Then it is queued for Monday 9:00 AM recipient local time. Given a scheduled escalation exists, When the user cancels before send time, Then no escalation is sent and the audit log records the cancellation.
Deadline Risk Score & Explanations
Given blockers and responsiveness history exist, When the release panel renders, Then a risk score (0–100) with Low/Medium/High label and 24h trend arrow is displayed. Given the risk is Medium or High, When the user hovers the score, Then the UI shows the top 3 contributing factors with weights (e.g., “2 blockers idle >48h,” “SLA <24h,” “low response rate”). Given a blocker resolves or new activity is recorded, When the event occurs, Then the risk score recalculates and updates within 10 seconds. Given all approvals are complete, When risk recalculates, Then the score is 0 and the label reads “No Risk.”
Next-Best Action Suggestions Based on Responsiveness
Given historical contact outcomes for an approver exist, When generating a suggestion, Then the system proposes the channel and send time with the highest past response rate in the same time-of-day window, and shows a confidence score and rationale. Given no history exists for an approver, When generating a suggestion, Then the system defaults to email at 9:00 AM approver local time and marks confidence as Low. Given a suggested action is taken, When the approver responds within 24 hours, Then the outcome is recorded and incorporated into future suggestion models. Given the user dismisses a suggestion, When dismissed, Then it is hidden for 24 hours and the user can select a dismissal reason that is logged for model improvement.
Per-Recipient Analytics & Audit Trail
"As a label admin, I want per-recipient analytics and an audit trail so that I can prove compliance, improve workflows, and resolve disputes quickly."
Description

Tracks sends, opens, clicks, device type, sign/approve events, bounces, and escalations at the recipient level. Presents engagement timelines and cohort summaries to optimize nudge timing and channels. All events are recorded in an immutable audit log with export and webhook capabilities, while respecting privacy controls (do-not-track, data retention, and regional regulations).

Acceptance Criteria
Per-Recipient Event Tracking (Sends, Opens, Clicks, Sign/Approve, Bounces, Escalations)
- Given an approval request is sent with unique tracking links per recipient, When a message is sent to a recipient or the recipient opens, clicks, signs/approves, bounces, or is escalated, Then the system records exactly one event per action with fields: tenant_id, project_id, release_id, recipient_id, message_id, link_id, event_type ∈ {sent, open, click, sign, approve, bounce, escalation}, channel ∈ {email, sms, link}, device_type ∈ {desktop, mobile, tablet, unknown}, user_agent_hash, ip_prefix(/24), geo_country, occurred_at_utc, occurred_at_project_tz
- And bounce events include the SMTP/transport status code and reason
- And escalation events include escalation_level, escalated_to_user_id, and days_since_last_action
- And duplicate events for the same recipient/message/event_type within 30 seconds are de-duplicated
- And events become queryable in analytics within 60 seconds of occurrence
- And events respect do-not-track rules (no open/click logging when DNT is active)
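The 30-second de-duplication rule keyed on recipient/message/event_type can be sketched as a small in-memory guard. A production pipeline would likely back this with a shared store (e.g., Redis) rather than a process-local dict; this sketch only illustrates the keying and window logic.

```python
from datetime import datetime, timedelta

class EventDeduper:
    """Drop repeat events for the same (recipient, message, event_type)
    arriving within a 30-second window, per the criteria above."""
    WINDOW = timedelta(seconds=30)

    def __init__(self):
        self._last_seen = {}

    def accept(self, recipient_id, message_id, event_type, occurred_at: datetime) -> bool:
        key = (recipient_id, message_id, event_type)
        last = self._last_seen.get(key)
        if last is not None and occurred_at - last < self.WINDOW:
            return False  # duplicate inside the window: discard
        self._last_seen[key] = occurred_at
        return True
```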
Recipient Engagement Timeline View
- Given a user opens the Recipient Details panel for a release, When the Engagement Timeline is displayed, Then events are shown in strict chronological order with absolute timestamp in recipient local timezone and relative time since send
- And each event displays event_type, channel, device_type icon, and source (email/SMS/web)
- And the timeline supports filtering by event type (sent, open, click, sign, approve, bounce, escalation) and updates results within 300 ms for up to 1,000 events
- And the current recipient status is derived from the latest event and shown (e.g., Awaiting Approval, Bounced, Escalated, Approved)
- And a “Last activity” metric is shown and matches the most recent event timestamp
- And recipients with DNT show a visible “Tracking Limited” indicator and exclude opens/clicks from the view
Cohort Summary Analytics for Nudge Optimization
- Given a user selects a date range, project/release, and channel filters on the Analytics page, When cohort metrics are computed, Then the dashboard displays for the selected scope: total recipients, open rate, click-through rate, approve/sign conversion rate, median time-to-first-open, and median time-to-approve
- And heatmaps show conversion rate by recipient local hour-of-day and by day-of-week for each channel (email, sms)
- And device-type breakdowns (desktop, mobile, tablet) are shown with sample sizes
- And any cohort bucket with fewer than 30 recipients is suppressed or bucketed to preserve privacy
- And a best-send-hour recommendation is displayed when the top hour’s conversion rate is at least 10% higher than the median and the sample size is ≥ 30; otherwise “insufficient data” is shown
- And metrics exclude recipients/events suppressed by DNT or redaction policies and clearly display the denominators used
- And metrics reflect new events within 5 minutes
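The best-send-hour rule above (top hour must have n ≥ 30 and beat the median conversion rate of eligible hours by at least 10%) can be sketched as follows. The input shape and names are illustrative assumptions, not the product's actual schema.

```python
from statistics import median

MIN_SAMPLE = 30  # buckets below this size are suppressed for privacy

def best_send_hour(buckets: dict):
    """buckets maps hour-of-day -> (recipients, conversions).
    Returns the recommended hour, or None meaning 'insufficient data'."""
    rates = {h: conv / n for h, (n, conv) in buckets.items() if n >= MIN_SAMPLE}
    if not rates:
        return None
    top_hour = max(rates, key=rates.get)
    med = median(rates.values())
    # require the top hour to beat the median by at least 10% (relative)
    if med > 0 and rates[top_hour] >= 1.10 * med:
        return top_hour
    return None
```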
Immutable Audit Log with Export
- Given analytics events are recorded, When events are written to the audit log, Then they are append-only and linked via a tamper-evident hash chain including the event payload hash and previous_hash
- And any attempted mutation fails validation and is logged as a security event
- When a user exports the audit log (CSV or JSON) for a date range or release, Then the export includes all events in order with fields sufficient for reconstructing the chain, plus a head_hash and signed integrity manifest
- And the export finishes within 60 seconds for up to 100,000 events and streams for larger datasets
- And redacted fields are marked REDACTED and include a redaction_reason code
- And the export endpoint enforces access controls and is itself audited
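The tamper-evident chain above is the standard hash-chain construction: each entry's hash covers its payload hash plus the previous entry's hash, so mutating any historical record breaks verification from that point forward. A minimal sketch (field names illustrative):

```python
import hashlib
import json

def append_event(chain: list, payload: dict) -> dict:
    """Append an entry whose hash covers the payload and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload_hash = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    entry_hash = hashlib.sha256((prev_hash + payload_hash).encode()).hexdigest()
    entry = {"payload": payload, "previous_hash": prev_hash, "hash": entry_hash}
    chain.append(entry)
    return entry

def verify_chain(chain: list) -> bool:
    """Recompute every link; any mutation of payload or order fails."""
    prev_hash = "0" * 64
    for entry in chain:
        payload_hash = hashlib.sha256(
            json.dumps(entry["payload"], sort_keys=True).encode()
        ).hexdigest()
        if entry["previous_hash"] != prev_hash:
            return False
        expected = hashlib.sha256((prev_hash + payload_hash).encode()).hexdigest()
        if entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

The export's head_hash would simply be the last entry's hash, letting a verifier confirm the whole range.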
Real-Time Webhook Delivery of Analytics Events
- Given a tenant configures a webhook endpoint URL and HMAC secret, When an eligible analytics event is recorded (not suppressed by privacy controls), Then IndieVault sends an HTTPS POST within 10 seconds with a JSON payload containing event_id, tenant_id, project_id, release_id, recipient_id (if not redacted), event_type, timestamps (UTC and project TZ), channel, device_type, metadata, and an audit hash reference
- And the request includes X-IndieVault-Signature (HMAC-SHA256), X-IndieVault-Timestamp, and idempotency key headers
- And 2xx responses are treated as success; non-2xx responses trigger exponential backoff retries with jitter for up to 24 hours (max 10 attempts)
- And delivery order is preserved per recipient_id
- And endpoints returning 410 are immediately disabled and marked inactive
- And admins can replay deliveries by event_id or time window, with signatures recomputed, and see delivery logs with status, attempts, and latency
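The signing and retry mechanics above can be sketched with the standard library. The exact bytes covered by the signature (here, timestamp + "." + body) and the backoff base are illustrative assumptions — the spec only fixes HMAC-SHA256, jitter, the 24-hour window, and the 10-attempt cap.

```python
import hashlib
import hmac
import random

def sign_payload(secret: bytes, timestamp: str, body: bytes) -> str:
    """Hex HMAC-SHA256 over timestamp and body, for X-IndieVault-Signature.
    Receivers should verify with hmac.compare_digest to avoid timing leaks."""
    return hmac.new(secret, timestamp.encode() + b"." + body, hashlib.sha256).hexdigest()

def backoff_seconds(attempt: int, base: float = 10.0,
                    cap: float = 24 * 3600, jitter: float = 0.1) -> float:
    """Exponential backoff with +/-10% jitter, capped at 24 hours."""
    delay = min(cap, base * 2 ** attempt)
    return delay * (1 + random.uniform(-jitter, jitter))
```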
Privacy Controls: Do-Not-Track and IP Anonymization
- Given a recipient has DNT enabled via account policy or link parameter (dnt=1), When the recipient opens or clicks a review link, Then no open/click events tied to recipient_id are stored or sent via webhook; only minimally required, non-identifying delivery telemetry is kept
- And IP addresses are stored as anonymized prefixes (/24 for IPv4, /48 for IPv6) and user_agent is hashed or omitted per policy
- And the UI displays a “Tracking Limited” badge for that recipient and excludes opens/clicks from analytics aggregates
- And sign/approve events required for legal/audit purposes are recorded but exclude IP/user agent and are flagged as privacy-limited
- And an automated test verifies that enabling DNT reduces stored per-recipient open/click events to zero while maintaining send and approve counts
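The IP anonymization rule (/24 for IPv4, /48 for IPv6) maps directly onto the standard library's `ipaddress` module — truncate the address to the policy prefix and store only the network:

```python
import ipaddress

def anonymize_ip(ip: str) -> str:
    """Truncate to a /24 (IPv4) or /48 (IPv6) prefix per the policy above."""
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    # strict=False lets us pass a host address and get its containing network
    return str(ipaddress.ip_network(f"{ip}/{prefix}", strict=False))
```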
Regional Data Retention and Redaction
- Given tenant-level retention policies are configured per region (e.g., EU=180 days, US=365 days), When an event exceeds its region’s retention window, Then PII fields (recipient_id, ip_prefix, user_agent_hash, geo) are irreversibly redacted or deleted while preserving the aggregate counters and timestamps necessary for cohort metrics
- And redacted events are excluded from webhooks and per-recipient timelines, and appear in exports as REDACTED with redaction_reason=RETENTION
- And dashboards indicate the percentage of records redacted in the selected range and adjust denominators accordingly
- And data residency is enforced by storing raw events in the region specified by policy and preventing cross-region replication of PII
- And administrators can run and export a Data Deletion Report showing items redacted by date range and policy

Dispute Vault

A secure track‑level space to raise objections, propose counter‑splits, and attach evidence. Threaded comments, resolution states, and a locked history ensure clean negotiation records without email chaos—everything preserved for legal and label review.

Requirements

Track-level Dispute Creation & Access Control
"As an artist manager, I want to open a dispute tied to a specific track and control who can see it so that only the right stakeholders can participate and sensitive information stays private."
Description

Enable users to create a dispute anchored to a specific track or release, with clear metadata (track ID, release ID, dispute type, claimed percentage/credit, requested change). Pull initial contributor roster and current splits from IndieVault’s catalog to prefill parties. Provide role-based access control (Owner, Party, Observer, Admin) so only invited stakeholders can view or act within the dispute. Support inviting external stakeholders via secure, passwordless magic links with email verification, and allow fine-grained visibility for internal-only notes. Allow multiple concurrent disputes per track with deduplication hints and cross-links. Enforce least-privilege defaults, session timeouts, and privacy by design. Integrates with existing user management and asset models to ensure the dispute’s scope and participants remain aligned with the underlying track data.

Acceptance Criteria
Create dispute anchored to a specific track or release
- Given an authenticated user with edit permission on a track or release, When they select "Open Dispute" from that specific track/release, Then the creation form is pre-filled with trackId and releaseId and requires disputeType, claimedPercentage/credit, and requestedChange before enabling Submit.
- Given valid inputs, When the user submits, Then a dispute is created with immutable trackId/releaseId, unique disputeId, creatorId, timestamp, initial state "Open", and a stored snapshot of the current contributor roster and splits from the catalog.
- Given invalid or missing required fields, When the user submits, Then the API responds 400 with field-level error codes/messages and no dispute is created.
- Then an audit log entry is written capturing actorId, action, disputeId, trackId, timestamp, and origin IP/UA.
- Then dispute creation completes within 2 seconds at P95 under nominal load.
Role-based access control within dispute (Owner, Party, Observer, Admin)
- Given a participant’s role, When they access the dispute, Then permissions are enforced as: Owner/Admin = view/comment/attach/invite/edit roles/resolve; Party = view/comment/attach/propose changes; Observer = view only; External non-participants = no access.
- Given a non-participant or insufficient role, When they attempt to access the dispute or perform restricted actions, Then the API returns 404 for non-participants (to avoid enumeration) and 403 for participants exceeding role permissions, and no data is leaked.
- Given a user with Owner or Admin role, When they change another user’s role, Then the change is persisted, audited, and immediately effective; others cannot change roles.
- Then all access checks are enforced server-side on every request (UI cannot bypass) and are covered by automated tests for each role/action matrix.
Invite external stakeholders via expiring magic links with email verification
- Given an Owner or Admin provides an external email and intended role, When they send an invite, Then the system generates a single-use, signed magic-link token scoped to the dispute, expiring in 72 hours, and sends an email without internal notes or sensitive metadata.
- Given the recipient clicks the magic link, When they complete an email verification (6-digit code sent to the same email), Then they are granted access to that dispute only with the invited role (defaulting to Observer if unspecified) without creating a password.
- Given an expired or already-used link, When it is opened, Then the system returns 410 Gone and offers a path to request a fresh invite from the inviter.
- Then all invite creations, acceptances, expirations, and revocations are audited; link tokens cannot be replayed; rate limits are applied to verification attempts (e.g., 5 attempts, escalating backoff).
Internal-only notes visibility and redaction controls
- Given a user composes a note, When they set Visibility = Internal, Then only internal roles (Owner, Admin, internal Parties) can see the note; external Parties and Observers cannot see it in UI, emails, exports, or APIs.
- Given a note is Internal by default for internal users, When a user explicitly sets Visibility = Shared, Then the note becomes visible to invited external roles; visibility toggles are recorded in the audit log.
- Given an external user requests an Internal note by ID, When they call the API, Then the API returns 404 or redacts content and metadata (author, timestamps) consistently.
- Then exports and review links exclude Internal notes by default and offer an explicit "include internal" option restricted to internal roles with step-up verification.
Concurrent disputes with deduplication hints and cross-linking
- Given a user initiates a new dispute on a track, When the system detects existing open disputes on the same track with overlapping disputeType/claimed field/contributor set (similarity score ≥ 0.7), Then it presents a deduplication hint list with titles, owners, and links before submission.
- Given the user proceeds anyway, When the new dispute is created, Then bi-directional cross-links to other disputes on the same track are added and visible in the header with status chips.
- Then deduplication hints do not block creation, are non-destructive, and are logged for analytics; similarity logic and thresholds are unit-tested.
- Then search and listing APIs return all disputes per track without restriction, subject to role-based access filtering.
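The spec fixes the 0.7 threshold but not the similarity formula, so here is one plausible scoring sketch — a weighted combination of dispute-type match and Jaccard overlap of contributor sets. The weights (0.4 / 0.6) and field names are illustrative assumptions only.

```python
def dispute_similarity(a: dict, b: dict) -> float:
    """Illustrative score: 0.4 for a matching disputeType, plus up to 0.6
    from the Jaccard overlap of the two contributor sets."""
    type_score = 0.4 if a["dispute_type"] == b["dispute_type"] else 0.0
    ca, cb = set(a["contributors"]), set(b["contributors"])
    jaccard = len(ca & cb) / len(ca | cb) if (ca | cb) else 0.0
    return type_score + 0.6 * jaccard

def dedup_hints(new: dict, existing: list, threshold: float = 0.7) -> list:
    """Non-blocking hint list: candidates at or above the threshold."""
    return [d for d in existing if dispute_similarity(new, d) >= threshold]
```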
Session security, least-privilege defaults, and timeouts
- Given a user is viewing a dispute, When there is 15 minutes of inactivity, Then the session requires re-auth before any write action; unsaved form data is preserved locally and offered for restore after re-auth.
- Given a sensitive action (role change, dispute resolution, data export), When initiated and the last verification >10 minutes ago, Then step-up verification (email code or SSO re-auth) is required.
- Then new external invitees default to Observer role unless explicitly elevated; internal notes and exports default to the most restrictive visibility.
- Then all dispute-related endpoints require authentication, enforce least-privilege checks, and return security-appropriate status codes without revealing existence to unauthorized users.
Alignment with catalog data and participant mapping
- Given dispute creation, When parties are loaded, Then contributors are mapped to existing catalog entities (users/artists) by ID/email; unmapped contacts are created as externalParty placeholders and flagged for later linkage.
- Given the catalog roster or splits change after dispute creation, When a participant views the dispute, Then the immutable snapshot is shown with a visible "Catalog changed" banner and a diff view; no automatic mutation occurs without an explicit sync action by Owner/Admin.
- Given a track is merged or reissued, When the trackId is superseded, Then the dispute remains anchored via canonicalId with a redirect notice; deletion of the track in catalog does not delete the dispute.
- Then all references maintain referential integrity; exports include the original snapshot and canonical track metadata at time of export.
Resolution Workflow & States
"As a label admin, I want a clear dispute lifecycle with deadlines and sign-offs so that issues are resolved predictably and releases aren’t delayed by ambiguity."
Description

Provide a configurable lifecycle for disputes with well-defined states: Draft (private), Open, Under Review, Pending Sign-off, Resolved, Withdrawn, and Escalated. Define transition rules, required fields, assignees, and due dates per state. Surface SLAs with automated reminders and escalation triggers when deadlines approach or are breached. Capture structured resolution terms (final splits, credits, conditions, effective date) and freeze modification on resolution, allowing re-open only via explicit action that is recorded. Integrate with release pipelines to optionally block deliveries or flag releases until blocking disputes are resolved. Present state badges and progress indicators in the UI and expose state via API/webhooks for downstream systems.

Acceptance Criteria
Configure States and Transition Rules
- Given a workspace with the default workflow, When viewing the dispute state machine, Then the states Draft, Open, Under Review, Pending Sign-off, Resolved, Withdrawn, and Escalated are available
- Given a dispute in any state, When attempting a transition not defined in configuration, Then the transition is blocked and a validation error lists the allowed target states
- Given per-state required fields are configured, When transitioning into that state via UI or API, Then the system enforces completion of those fields and returns 422 with field-level errors if any are missing
- Given state transition rules are updated by an admin, When saved, Then the new transitions take effect for subsequent actions and all changes are recorded in the audit log
Assignees and Due Dates Per State with SLA Timers
- Given a state is configured to require an assignee and due date, When a dispute enters Open, Under Review, or Pending Sign-off, Then the system requires and stores the assignee and due date before completing the transition
- Given an SLA of N business days is configured for a state, When a dispute enters that state, Then an SLA due timestamp is calculated using the workspace business calendar and stored on the dispute
- Given reminders are configured at 75% and 100% of elapsed SLA, When those thresholds are reached, Then email and in-app notifications are sent to the assignee and watchers and an activity entry is logged
- Given an SLA breach occurs, When the breach time is reached, Then the dispute is auto-escalated or flagged for escalation per configuration and escalation notifications are sent; only one escalation is created per breach
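Computing the SLA due date from "N business days" requires walking a business calendar rather than adding raw days. A minimal sketch (the real workspace calendar would also carry regional holidays; the `holidays` parameter here stands in for that):

```python
from datetime import date, timedelta

def add_business_days(start: date, n: int, holidays=frozenset()) -> date:
    """Advance n business days from start, skipping weekends and any
    supplied workspace holidays. A simplified business-calendar walk."""
    current = start
    remaining = n
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5 and current not in holidays:
            remaining -= 1
    return current
```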
Capture Resolution Terms and Freeze on Resolution
- Given a dispute in Pending Sign-off, When transitioning to Resolved, Then Final Splits, Credits, Conditions (optional), and Effective Date are required and validated
- Given a dispute transitions to Resolved, Then the resolution terms and prior content become read-only and a signed snapshot with timestamp and actor is recorded in the immutable history
- Given a resolved dispute, When a user attempts to edit resolution terms, Then the system blocks the edit and prompts an explicit Re-open action with a required reason, recording a new history entry on confirm
- Given a dispute is reopened, Then a new version of the resolution terms is tracked without altering the original resolved snapshot
Withdraw and Escalate Paths
- Given a dispute in Open or Under Review, When the reporter selects Withdraw, Then a withdrawal reason is required, the state changes to Withdrawn, active SLAs are canceled, and the action is logged
- Given a dispute in Escalated, When queried via UI or API, Then the escalation owner, escalated_at timestamp, and inherited/override SLA are displayed, and allowed transitions are limited per configuration
- Given an auto-escalation rule is configured, When an SLA breach occurs, Then the system performs a single auto-escalation, deduplicates concurrent triggers, and emits an escalation event
Release Pipeline Blocking and Flags
- Given a release has one or more disputes marked blocking, When any blocking dispute is not in Resolved or Withdrawn, Then release delivery is blocked and the UI shows a banner listing the blocking dispute IDs and states
- Given a release has non-blocking disputes in Open, Under Review, Pending Sign-off, or Escalated, When delivery runs, Then delivery proceeds but the release is flagged and downstream systems receive a warning status via API/webhook
- Given all blocking disputes move to Resolved or Withdrawn, When the pipeline rechecks gate conditions, Then deliveries are unblocked automatically and a webhook is emitted indicating that release readiness changed
UI Badges, Progress, and API/Webhook State Exposure
- Given any dispute record, When viewed in the UI, Then a state badge displays the current state with a distinct color/label and a progress indicator reflects the lifecycle order and current step
- Given accessibility requirements, When using the UI with a screen reader, Then badges and progress indicators have ARIA labels and meet WCAG AA color contrast
- Given a state change occurs, When calling GET /disputes/:id, Then the response includes current_state, state_entered_at, sla_due_at (nullable), assignee_id (nullable), and immutable state_history[] entries
- Given a state change occurs, Then a disputes.state.changed webhook is delivered within 60 seconds with an HMAC-SHA256 signature, idempotency key, and payload fields matching the API
Counter-Splits & Versioned Proposals
"As a songwriter, I want to propose and counter proposed splits with clear version history so that we can converge on accurate, fair percentages without confusion."
Description

Introduce a structured proposal model for splits and credits where any party can submit a proposal or counter-proposal. Auto-populate the first proposal from current catalog metadata. Validate that splits sum correctly (e.g., 100%) across master and publishing where applicable, support territories and roles, and highlight deltas against the current baseline. Maintain full version history with timestamps, authors, rationales, and inline diffs. Allow accept/decline actions, comment linking to a specific version, and mark a proposal as the candidate for resolution. On final resolution, provide a controlled update path to IndieVault’s canonical metadata, gated by permissions and audit.

Acceptance Criteria
Auto-Populated Initial Proposal from Catalog Metadata
- Given a track with existing catalog metadata for splits, credits, roles, and territories, When a dispute is initiated for that track in Dispute Vault, Then Proposal v1 is auto-created and pre-filled with the current master and publishing splits, roles, territories, and credits from the catalog
- And the proposal author is set to the initiator and a timestamp is recorded
- And a rationale field is required before the proposal can be submitted
- And no changes are applied to IndieVault canonical metadata upon creation or submission of Proposal v1
Split Validation Across Master, Publishing, Territories, and Roles
- Given a user is editing a proposal version, When the user enters splits for master and/or publishing, Then each rights domain total must equal 100.00% with a rounding tolerance of ±0.01%
- And where territory-specific splits exist, each territory total must equal 100.00% with the same tolerance
- And each contributor must have at least one valid role from the defined role taxonomy
- And the UI highlights fields that violate validation, lists the specific errors, and disables submission until all errors are resolved
- And upon successful validation and submission, the version number increments and the proposal is saved
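Percentage splits should be validated with decimal arithmetic, not floats, so the ±0.01 tolerance behaves exactly as specified. A minimal sketch of the per-domain total check (error-message wording and input shape are illustrative):

```python
from decimal import Decimal

TOLERANCE = Decimal("0.01")  # ±0.01 percentage points, per the criteria

def validate_splits(splits: dict) -> list:
    """splits maps contributor -> percentage (string or Decimal).
    Returns a list of error messages; an empty list means valid."""
    errors = []
    total = sum(Decimal(str(v)) for v in splits.values())
    if abs(total - Decimal("100")) > TOLERANCE:
        errors.append(f"splits total {total}%, expected 100% ±{TOLERANCE}")
    for name, v in splits.items():
        if Decimal(str(v)) < 0:
            errors.append(f"{name}: negative split")
    return errors
```

The same check would run once per rights domain (master, publishing) and once per territory where territory-specific splits exist.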
Counter-Proposal Creation and Inline Diffs
- Given a baseline proposal exists, When another authorized party submits changes as a counter-proposal, Then a new proposal version is created with an incremented version number, recorded author, timestamp, and required rationale
- And the inline diff view highlights added/removed/modified contributors, roles, percentages, and territories compared to both the immediately previous version and the current baseline
- And delta badges display the percentage-point change for each affected line item
- And the diff is immutable once the version is saved
Threaded Comments Linked to Specific Proposal Version
- Given a user is viewing a specific proposal version, When the user adds a comment, Then the comment is permanently linked to that proposal version ID and may optionally reference a specific field (e.g., contributor, role, split, territory)
- And comment threads support replies and resolution states (open, resolved), with user and timestamp recorded for each state change
- And comments cannot be edited or deleted after posting; corrections require a new comment
- And comments remain visible in history even when newer proposal versions are created
Accept/Decline Decisions and Candidate for Resolution
- Given a proposal version is under review, When a party records an Accept or Decline decision, Then the decision is stored with user identity, timestamp, and optional rationale (required for Decline)
- And when a user with Resolve permission attempts to mark a version as Candidate for Resolution, Then the system verifies all required parties have recorded Accept on that exact version and blocks the action with a clear error listing the missing parties if not
- And only one proposal version per dispute can be marked Candidate for Resolution at any time
Final Resolution Updates Canonical Metadata with Audit
- Given a proposal version is marked Candidate for Resolution, When a user with Update Canonical permission clicks Finalize, Then IndieVault canonical metadata is updated atomically to match the proposal’s splits, roles, territories, and credits
- And an audit record is written capturing before/after values, actor, timestamp, proposal version ID, and rationale
- And relevant events/webhooks are emitted for downstream systems
- And if any step in the update or audit fails, the operation is rolled back and no partial changes persist
Evidence Attachments with Integrity Safeguards
"As an artist, I want to attach and securely share supporting documents with tamper-evidence so that my claims are credible and protected."
Description

Allow parties to attach evidence (contracts, emails, screenshots, audio snippets) to a dispute or to specific proposals/comments. Support common file types (PDF, DOCX, PNG/JPG, EML/MSG, WAV/MP3) with size limits, antivirus scanning, duplicate detection, and EXIF stripping for images. Generate SHA-256 checksums and trusted timestamps on upload; record source, author, and sensitivity labels. Provide inline viewers with watermarking and optional redaction to protect PII. Maintain a chain-of-custody log for every add/view/download action. Enable linking to existing IndieVault assets (e.g., contracts already stored) to avoid re-uploading and preserve a single source of truth.

Acceptance Criteria
Upload and Scan Supported Evidence Files
- Given I am an authorized party in a dispute, When I attach a file of type PDF, DOCX, PNG, JPG, EML, MSG, WAV, or MP3 within the configured size limit, Then the system validates MIME/extension and begins an antivirus scan before persisting the file.
- When the antivirus scan flags malware, Then the upload is blocked, the file is not stored or previewable, I see a clear error with the detection name, and a failed add event is logged.
- When the file passes scanning and validation, Then the evidence is saved, visible in the selected context (dispute/proposal/comment), and available for inline preview with a unique evidence ID.
Duplicate Detection and Image EXIF Stripping
- Given I attempt to upload a file whose SHA-256 matches evidence already present in the same dispute, Then the system offers to link the existing evidence instead of storing a duplicate; no new blob is created if I confirm linking.
- When a duplicate is detected across my workspace outside the dispute and I have access, Then I may choose to link that existing asset instead of uploading; otherwise I can proceed to upload as new.
- When I upload a PNG/JPG image, Then all EXIF metadata (including GPS/Camera) is stripped from the stored evidence and from any preview/download; verifying EXIF returns no sensitive fields.
Integrity Metadata and Trusted Timestamps on Upload
- Given a successful upload or link, Then the system records: SHA-256 checksum of the content, trusted timestamp token, source (local upload or asset link), uploader account, declared author, and a required sensitivity label from the allowed set.
- Then the checksum remains constant across all downloads; recomputing SHA-256 on the downloaded file matches the stored value.
- Then the trusted timestamp is verifiable via the configured TSA, stored in UTC, and displayed alongside the evidence.
Inline Viewing, Watermarking, and Redaction Controls
- Given a supported file type, When I open it in the inline viewer, Then a dynamic watermark displays my name/email, date/time, and dispute ID on the preview and any exported preview.
- When I enable redaction mode and apply redactions, Then a redacted derivative is saved; the original file remains unchanged; permissions enforce that only authorized roles can access the unredacted original.
- When viewing audio evidence, Then the player streams an audibly watermarked version; downloads of the preview include the watermark; the original audio remains intact.
Chain-of-Custody Logging and Immutability
- Given any add, view, or download action on evidence, Then an append-only log entry is created with: evidence ID, action type, actor ID and role, target context ID, timestamp (UTC), IP, user agent, outcome, and SHA-256 at event time.
- Then the chain-of-custody log is tamper-evident (hash-chained) and exportable as signed JSON/CSV; attempts to alter historical entries are rejected and flagged.
- Then each user-visible action tested produces a corresponding log entry that can be queried by dispute ID and time range.
Link Existing IndieVault Assets as Evidence
- Given a contract or asset already stored in IndieVault and I have read permission, When I link it as evidence, Then no new file is uploaded; the evidence record stores a reference to the asset and version, and a link event is logged.
- When linking, Then I can pin a specific version or choose latest; the chosen version is displayed in the evidence details; deduplication prevents linking the same asset/version twice in the same context.
- When the linked asset updates and the evidence is pinned to a version, Then the evidence continues to resolve to the pinned version; if set to latest, Then it resolves to the newest version.
Attach Evidence to Disputes, Proposals, and Comments with Permissions
- Given I am composing a dispute, proposal, or comment, When I attach evidence, Then it is associated to that exact context level and displays with a context badge (Dispute/Proposal/Comment) and counts in summaries.
- Then only authorized parties for that context can view or download the evidence; unauthorized users see no link or receive a 403 response; all access attempts are logged.
- Then a sensitivity label (e.g., Public, Internal, Restricted) is required on attach; access and watermarking behavior reflect the label policy.
Threaded Comments, Mentions, and Private Notes
"As a producer, I want organized, threaded conversations with mentions and private notes so that negotiations stay clear and I can coordinate internally without exposing sensitive strategy."
Description

Implement threaded discussions with reply, quote, and resolve actions, plus lightweight rich text and attachment support. Enable @mentions with typeahead for participants and team members, sending in-app and email notifications. Allow private internal notes visible only to the user’s organization, clearly separated from the shared thread. Support linking a comment to a specific evidence file or proposal version for context, and provide moderation controls (edit windows, redaction, rate limiting). Index comments for search and surface unread indicators to keep negotiations focused and traceable without resorting to email.

Acceptance Criteria
Post and Reply to Thread with Rich Text and Attachments
- Given a user with comment permissions on a Dispute Vault track thread, When the user posts a top-level comment using bold, italics, underline, inline code, bullet/numbered lists, and hyperlinks, Then the comment renders with the specified formatting and persists to the thread with created-at timestamp and author, And attachments of types (pdf, docx, xlsx, png, jpg, mp3, wav) up to 25 MB each and max 5 per comment upload successfully and show filename, size, and preview where supported, And disallowed file types or oversized files are blocked with a clear error message before submission.
- Given an existing comment, When the user replies to it, Then the reply is nested one level under the parent and ordered chronologically by created-at, And the thread shows the total reply count updated in real time without a page refresh.
Quote and Resolve Comment Threads
- Given an existing comment in a thread, When a user selects text within that comment and clicks Quote, Then the composer is prefilled with a quoted block referencing the original comment ID and author, And posting the quoted reply preserves the quoted snippet with a link to the source comment.
- Given a thread with at least one comment, When an authorized participant (owner, manager, or moderator role) clicks Resolve, Then the thread status changes to Resolved, collapses by default, records resolver, timestamp, and optional resolution note, And only authorized users can Unresolve, which reopens the thread and logs an audit entry without altering prior history, And a read-only audit trail displays all resolve/unresolve events with actor and time.
Mentions with Typeahead and Notifications
Given the comment composer is focused When the user types @ and at least 2 characters Then a typeahead list returns matching participants of the dispute and members of the user’s organization (active only), showing name, role, and avatar When the user selects a mention and posts the comment Then the mention renders as a clickable chip to the user’s profile and filters And the mentioned user receives an in-app notification within 5 seconds and an email within 2 minutes containing the thread link and quoted context And duplicate notifications for the same comment-mention to the same user are deduplicated And mentions added in Internal Notes do not notify external participants
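The typeahead trigger condition above (at least two characters after "@", active participants and organization members only) reduces to a simple filter. A minimal sketch, where the user-record shape (`name`, `active` keys) is an assumption for illustration:

```python
def mention_candidates(query: str, users: list[dict]) -> list[dict]:
    """Return matching users for an @-mention typeahead.

    Fires only once the user has typed '@' plus at least two characters,
    and never suggests deactivated accounts.
    """
    if len(query) < 2:
        return []
    q = query.lower()
    return [u for u in users if u["active"] and q in u["name"].lower()]
```

A real implementation would also scope candidates to the dispute's participants and the author's organization, and rank by match position.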
Private Internal Notes Visibility and Separation
Given a user belongs to an organization with access to the dispute When the user toggles the composer to Internal Note and posts Then the note appears in an Internal tab/timeline section visually distinct from the Shared thread And only users from the same organization can view, search, or be mentioned in that note And external participants and recipients cannot view internal notes via UI or API And the default composer mode remains Shared unless explicitly toggled And exporting or sharing the dispute thread excludes internal notes unless the user selects Include Internal Notes
Link Comment to Evidence File or Proposal Version
Given the dispute has evidence files and proposal versions When the user attaches a contextual link to a comment by selecting an evidence file or a proposal version Then the comment displays a contextual badge (e.g., Evidence: filename.ext v3 or Proposal: Split v2) with a deep link to open the item in preview And if the linked item is later updated, the comment continues to point to the specific version selected And if the linked item is deleted or access is revoked, the badge remains with a "No longer available" label while preserving the original reference in the audit log
Moderation Controls: Edit Window, Redaction, and Rate Limiting
- Given a user posts a comment, When the user attempts to edit within 10 minutes, Then the edit is allowed and the comment shows Edited with a hoverable edit history restricted to the user and moderators, And after 10 minutes editing is disabled for the author, but moderators can redact content or attachments; When a moderator performs a redaction, Then the redacted segment is replaced with [redacted] and an audit entry records actor, timestamp, and reason, while preserving an immutable history for compliance.
- Given rapid posting behavior, When a user exceeds 10 comments or 10 attachments within 60 seconds across the same dispute, Then subsequent attempts are blocked for 60 seconds with a clear rate-limit message and no partial saves occur.
Search Indexing and Unread Indicators
- Given new comments are posted to a dispute, When 30 seconds have elapsed, Then the comments are searchable by text, author, mention handles, attachment filenames, and linked evidence/proposal identifiers, And search results highlight matched terms and deep-link to the specific comment within the thread.
- Given a user has unread comments in a dispute, When the user opens the dispute, Then per-thread unread counts appear, and entering a thread marks comments up to the last viewport as read while retaining counts for those below the fold until viewed, And a filter allows Show Unread Only and persists per user across sessions/devices.
Immutable Audit Trail & Legal Export
"As legal counsel, I want an immutable record and exportable bundle of the dispute so that I can review or present defensible evidence of the negotiation history."
Description

Create an append-only audit trail that records all dispute events (state changes, proposals, comments, evidence uploads/downloads, participant changes) with actor IDs, timestamps, IP/device metadata, and cryptographic hashes. Use hash chaining to make tampering evident and optionally anchor periodic digests with a trusted timestamping service. Provide a one-click export (PDF/ZIP) that bundles the timeline, final resolution, all proposal versions, evidence files with checksums, and an access log manifest suitable for legal review. Support legal holds to prevent deletion and apply data retention policies aligned with compliance requirements.

Acceptance Criteria
Append-Only Hash-Chained Event Log
- Given a dispute exists, When any of the following occurs: state change, proposal created/updated, comment added, evidence upload/download, participant added/removed, Then the system appends one event containing event_id (UUIDv4), event_type, actor_id, timestamp (UTC ISO 8601), IP, user_agent, entity_ref, payload, previous_hash, event_hash (SHA-256 over canonical UTF-8 JSON payload + previous_hash), and updates chain_head.
- Given any role attempts to edit or delete an existing event, When the request is made, Then the system returns 403 Forbidden, appends a correction_attempt_denied event, and no existing event content changes.
- Given the event stream is replayed from genesis, When hashes are recomputed, Then the calculated chain_head equals the stored chain_head for every prefix of the log.
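The hash-chaining rule above — event_hash = SHA-256 over the canonical UTF-8 JSON payload concatenated with previous_hash, verifiable by replaying from genesis — can be sketched as follows. The all-zeros genesis sentinel and the in-memory list are illustrative assumptions, not IndieVault's actual storage schema:

```python
import hashlib
import json

def canonical(payload: dict) -> bytes:
    # Canonical UTF-8 JSON: sorted keys, no insignificant whitespace.
    return json.dumps(payload, sort_keys=True, separators=(",", ":")).encode("utf-8")

def append_event(chain: list, payload: dict) -> dict:
    """Append one immutable event; its hash covers the previous event's hash."""
    previous_hash = chain[-1]["event_hash"] if chain else "0" * 64  # genesis sentinel
    event_hash = hashlib.sha256(canonical(payload) + previous_hash.encode()).hexdigest()
    event = {"payload": payload, "previous_hash": previous_hash, "event_hash": event_hash}
    chain.append(event)
    return event

def verify_chain(chain: list) -> bool:
    """Replay from genesis and recompute every hash; any edit breaks the chain."""
    prev = "0" * 64
    for event in chain:
        expected = hashlib.sha256(canonical(event["payload"]) + prev.encode()).hexdigest()
        if event["previous_hash"] != prev or event["event_hash"] != expected:
            return False
        prev = event["event_hash"]
    return True
```

Because each hash covers its predecessor, altering any earlier event changes every hash after it, which is what makes tampering evident.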
Trusted Timestamp Anchoring of Event Digests
- Given events have been appended since the last anchor, When either 15 minutes elapse or 500 events have been added (whichever comes first), Then the system computes a segment digest (SHA-256), obtains a trusted timestamp token, and persists token, anchor_time, anchor_id, and digest in the audit store.
- Given the timestamping service is unavailable, When an anchor attempt is due, Then the system retries with exponential backoff for up to 24 hours, raises an alert, continues appending events, and creates missed anchors once service is restored without altering existing event hashes.
- Given an export is generated, When the audit timeline is included, Then anchor proofs corresponding to covered ranges are included and verifiable.
One-Click Legal Export Bundle (PDF/ZIP)
- Given a user with Dispute Viewer or higher role, When they click Export Legal Bundle on a dispute, Then within 120 seconds for disputes with ≤10,000 events and ≤5 GB evidence, a ZIP is provided containing: a PDF timeline (ordered events with actors, timestamps, IP/user_agent, event and chain-head hashes, anchor proofs, final resolution), all proposal versions, all evidence files (original binary), manifest.json and manifest.csv (every file with SHA-256 checksums and sizes), and access_log.csv (all views/downloads/exports with subject ID, timestamp, IP).
- Given any file in the bundle, When its SHA-256 checksum is computed externally, Then it matches the checksum listed in the manifest.
- Given a dispute under legal hold, When export is requested, Then export proceeds and the action is appended to the audit trail.
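Verifying a bundle against its manifest, as the criteria require, is a straight SHA-256 comparison. A minimal sketch that assumes an in-memory name→bytes map and a name→checksum manifest rather than the real ZIP layout:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_manifest(files: dict[str, bytes], manifest: dict[str, str]) -> list[str]:
    """Return the names of files whose checksum does not match the manifest."""
    return [
        name for name, blob in files.items()
        if sha256_hex(blob) != manifest.get(name)
    ]
```

An external reviewer can run the same check with any SHA-256 tool, which is the point of shipping checksums in manifest.json/manifest.csv.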
Legal Hold Enforcement
- Given a user with Owner or Compliance role, When they apply a legal hold to a dispute and provide a reason, Then an immutable audit event is appended and the dispute, assets, and metadata become non-deletable and non-mutable except for adding new audit events.
- Given any user attempts to delete or modify evidence, proposals, comments, participants, or the dispute while a legal hold is active, When the request is made, Then the system returns 423 Locked, appends a denied-action event, and no data changes.
- Given a legal hold exists, When a user with Compliance role removes it and provides justification, Then the hold is removed, the action is audited, and retention timers resume from the removal time.
Data Retention and Redaction Compliance
- Given workspace retention policies are configured (e.g., keep access logs for N days; anonymize IPs after M days; delete evidence after R days post-resolution unless on legal hold), When the policy thresholds are reached and no legal hold applies, Then the system performs the actions on schedule, appends corresponding audit events, and updates manifests to reflect deletions/anonymizations.
- Given a data subject erasure request is approved, When redaction is executed, Then personally identifiable fields in audit records (e.g., IP) are redacted or pseudonymized per policy via new redaction events that preserve prior hashes and add a redaction_token reference.
- Given retention actions run, When the audit chain is verified, Then the chain remains continuous and tamper-evident (tombstone/redaction events maintain hash validity).
Access Logging Completeness and Integrity
- Given any access to a dispute, evidence file, export download, or review link, When the action occurs (success or failure), Then an access event is appended capturing subject_type (user, API key, link recipient), subject_id or token_id, timestamp (UTC), IP, user_agent, action, resource_id, outcome, and origin.
- Given an anonymous review-link recipient accesses an asset, When the asset is requested, Then the access event includes the recipient token ID; if the token is expired, the outcome is "denied" and the asset is not served.
- Given an export is generated, When the access log is included, Then every access since dispute creation is present with no gaps, and spot-checks against application logs match 100%.
External Review Links with Expiry & Analytics
"As a manager, I want to share a controlled, expiring dispute packet with outside reviewers so that they can evaluate the case without gaining ongoing access to the full vault."
Description

Generate dispute-specific, view-only review links that compile selected proposals, evidence, and the audit timeline into a curated packet for external counsel or labels. Reuse IndieVault’s expiring, watermarkable link infrastructure and extend it with granular include/exclude controls (hide private notes, mask PII where needed), per-recipient access codes, and one-click revoke. Capture analytics (opens, time viewed, downloads if permitted) and store acknowledgments of receipt. Ensure links inherit dispute privacy rules and auto-expire per policy to minimize leakage risk.

Acceptance Criteria
Create External Review Link with Curated Packet
Given an open dispute with at least one proposal, one evidence file, and an audit timeline When a user with Dispute Vault Editor permission selects "Generate External Review Link" And selects specific proposals, evidence, and timeline entries to include And previews the packet Then the preview shows only the selected items in the chosen order And all rendered previews are watermarked with the dispute ID and recipient tag placeholder When the user confirms generation Then a unique, view-only link is created within 3 seconds And the link enforces read-only mode with downloads disabled by default And the packet header displays dispute title, dispute ID, and generated-on timestamp
Granular Include/Exclude Controls and PII Masking
Given the include/exclude panel shows controls for Private Notes and PII Masking When "Hide Private Notes" is ON Then private notes do not appear in the preview or the live link When "Mask PII" is ON Then fields designated by policy (emails, phone numbers, addresses, legal names) are redacted in the preview and live link And redacted regions display a standardized "Redacted" label When the user toggles a control Then the change is reflected in the preview within 1 second And the chosen settings are saved to the link configuration and recorded in the audit trail
Inheritance of Dispute Privacy Rules
Given a dispute where specific artifacts are marked Confidential-Internal or Restricted-Party When an external review link is generated Then artifacts not permitted for external sharing are excluded automatically And any files flagged "Do Not Export" cannot be selected (UI disables selection with tooltip reason) When a viewer attempts to access an excluded resource via direct URL Then the system returns HTTP 403 and logs a denied-access event with link ID and recipient tag And the link never elevates access beyond the dispute's privacy policy
Per-Recipient Access Codes and Rate Limiting
Given the user creates a link for Recipient A When the system generates the link Then an access code meeting policy (minimum 8 characters, alphanumeric) is created and stored hashed And the link requires the correct access code before any content is revealed When an incorrect code is entered 5 times within 15 minutes Then the link is locked for 15 minutes and a lockout event is logged When the user regenerates the access code Then the previous code is immediately invalidated and the new code is required And rate limiting is enforced per recipient and per IP
Policy-Based Expiration and Auto-Expire Behavior
Given an organization policy defines default expiry and maximum expiry for external links When a link is created without override Then the link expiry equals the policy default and the expiry timestamp is shown to the creator and recipients When a shorter expiry is selected Then the link adopts the shorter expiry And selecting an expiry beyond the policy maximum is blocked with validation feedback When the expiry time elapses Then subsequent requests return HTTP 410 Expired and no assets are served And open sessions are terminated within 60 seconds And an auto-expire event is written to the audit trail
One-Click Revoke with Immediate Invalidation
Given an active external review link When the dispute owner clicks "Revoke Link" Then the link becomes invalid within 10 seconds across all new and existing sessions And any in-progress downloads are aborted And the UI marks the link as Revoked and disables copy/share actions And a revoke event capturing actor, timestamp, and reason is recorded immutably
Analytics Capture and Acknowledgment of Receipt
Given a per-recipient external review link When the recipient opens the link and passes the access code Then an Open event is recorded with timestamp, recipient tag, IP, and user agent And active viewing time is accumulated, pausing after 5 minutes of inactivity When downloads are permitted and the recipient downloads a file Then a Download event is recorded with file ID, filename, and size When the recipient clicks "Acknowledge Receipt" and enters their name Then an acknowledgment record with name, timestamp, and link ID is stored immutably and visible in the Dispute Vault analytics panel And analytics do not count duplicate open events caused by page refresh within 10 seconds

Change Guard

If a mix, artwork, or term changes, IndieVault detects hash deltas and triggers targeted re‑consent only for impacted items and parties. This prevents stale sign‑offs while avoiding unnecessary re‑signing, keeping momentum without risking compliance.

Requirements

Deterministic Asset Hashing & Delta Detection
"As an indie artist-manager, I want reliable detection and classification of asset changes so that I only trigger re-consent when a meaningful change has actually occurred."
Description

Compute and store deterministic signatures for all managed assets (audio mixes, stems, artwork, contracts, press kits, and metadata). Use SHA-256 file hashes and normalization strategies to avoid false positives (e.g., ignore non-content metadata in audio files, normalize line endings in docs). On every upload, replacement, or edit event, detect deltas against the last consented version and classify them (none/minor/major) with reasons (e.g., duration change, LUFS delta, pixel variance, clause edits). Run processing asynchronously with resumable workers, support large files, and expose delta results to downstream modules. Provide a project-level “hash freeze” for release candidates to lock the consent baseline.

Acceptance Criteria
Deterministic SHA-256 Hashing Across Environments
Given identical binary content for an asset When the SHA-256 hash is computed on different workers and operating systems Then the resulting hash value is identical across runs and nodes And the stored hash is associated deterministically with the asset version And re-computing the hash at any later time yields the same value
Normalization-Based Hashing Ignores Non-Content Metadata
- Given two audio files with identical PCM content but different ID3/APEv2 tags and timestamps, When the content-normalized hash is computed ignoring non-audio metadata, Then the hashes are identical and the delta classification is None.
- Given two images with identical pixels but different EXIF/XMP metadata, When the content-normalized hash is computed from pixel data only, Then the hashes are identical and the delta classification is None.
- Given two text/PDF contract files differing only by line endings, BOM, or trailing whitespace, When the normalized hash is computed after canonicalizing line endings and trimming insignificant whitespace, Then the hashes are identical and the delta classification is None.
- Given two metadata JSON documents with the same data but different key order and spacing, When canonical JSON (sorted keys, normalized whitespace, UTF-8) is hashed, Then the hashes are identical and the delta classification is None.
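The document and metadata normalization rules above (canonical line endings, BOM and trailing-whitespace removal, sorted-key JSON) can be illustrated with a minimal sketch; audio and image normalization would require media decoders and are omitted here:

```python
import hashlib
import json

def normalized_text_hash(raw: bytes) -> str:
    """Canonicalize contract-style text, then SHA-256 the result."""
    text = raw.decode("utf-8-sig")  # strips a UTF-8 BOM if present
    # Unify CRLF/CR to LF, trim trailing whitespace per line and trailing newlines.
    lines = text.replace("\r\n", "\n").replace("\r", "\n").split("\n")
    canon = "\n".join(line.rstrip() for line in lines).rstrip("\n")
    return hashlib.sha256(canon.encode("utf-8")).hexdigest()

def canonical_json_hash(doc: dict) -> str:
    """Sorted keys + fixed separators give a stable byte stream regardless of
    the original key order or whitespace."""
    canon = json.dumps(doc, sort_keys=True, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(canon.encode("utf-8")).hexdigest()
```

Two byte streams that differ only in these non-content ways hash identically, so the delta classification is None and no re-consent is triggered.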
Delta Detection and Classification on Change Events
- Given an asset with a stored last-consented baseline hash, When a new version is uploaded, replaced, or edited, Then the system computes the new hash, compares it to the baseline, And returns a delta classification in {None, Minor, Major} with at least one reason code.
- Given an audio mix where integrated LUFS changes by 0.5–1.5 dB and the duration change is < 1 s, When the new version is processed, Then the classification is Minor and reasons include [lufs_delta, duration_delta].
- Given an audio mix where duration changes by ≥ 1 s or the sample count changes, When the new version is processed, Then the classification is Major and reasons include [duration_delta].
- Given artwork where SSIM ≥ 0.99, When the new version is processed, Then the classification is None and reasons include [pixel_variance: below_threshold].
- Given artwork where 0.95 ≤ SSIM < 0.99, When the new version is processed, Then the classification is Minor and reasons include [pixel_variance].
- Given artwork where SSIM < 0.95, When the new version is processed, Then the classification is Major and reasons include [pixel_variance].
- Given a contract document where normalized text content differs (≥ 1 char), When the new version is processed, Then the classification is Major and reasons include [clause_edits].
- Given a press kit PDF where the page count or embedded assets change, When the new version is processed, Then the classification is Major and reasons include [structure_delta].
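The audio and artwork thresholds above are pure threshold logic. A minimal sketch, assuming the absolute LUFS delta, duration delta, and SSIM score have already been measured elsewhere:

```python
def classify_audio_delta(lufs_delta: float, duration_delta_s: float) -> tuple[str, list[str]]:
    """Classify a mix change. Inputs are absolute deltas vs. the baseline.

    Per the criteria: >= 1 s duration change is Major; a 0.5-1.5 dB loudness
    shift with a sub-second duration change is Minor.
    """
    if duration_delta_s >= 1.0:
        return "Major", ["duration_delta"]
    if 0.5 <= lufs_delta <= 1.5:
        return "Minor", ["lufs_delta", "duration_delta"]
    return "None", []

def classify_artwork_delta(ssim: float) -> tuple[str, list[str]]:
    """Classify an artwork change from its SSIM score vs. the baseline image."""
    if ssim >= 0.99:
        return "None", ["pixel_variance: below_threshold"]
    if ssim >= 0.95:
        return "Minor", ["pixel_variance"]
    return "Major", ["pixel_variance"]
```

Contracts and press kits need no thresholds: any normalized-content or structural difference is Major outright.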
Delta Results Exposed via API and Event
- Given a completed delta computation for an asset version, When a downstream module requests GET /assets/{assetId}/deltas?since=baseline, Then the response includes assetId, baselineVersion, newVersion, baselineHash, newHash, classification, reasons[], metrics{}, computedAt, And the API returns HTTP 200 within 200 ms for cached results.
- Given a completed delta computation, When the event is emitted on the internal bus, Then a delta.detected message is published with the same fields, And classification=None messages are marked non-impacting, And messages are idempotent (the same computationId is not emitted twice).
Asynchronous and Resumable Processing for Large Files
- Given a 10 GB audio file upload, When hashing and analysis begin, Then processing occurs asynchronously, a jobId is returned, And progress updates are available until completion.
- Given the worker processing the job is terminated mid-run, When the worker restarts, Then the job resumes from the last completed chunk within 60 seconds, And the final hash matches the hash from uninterrupted processing, And only one completion event is recorded (exactly-once).
- Given concurrent uploads of large assets (≥ 5 parallel jobs), When processing runs, Then no job starves and average throughput does not degrade by more than 20% compared to the single-job baseline.
Project-Level Hash Freeze Baseline
- Given a project with all assets consented, When a user with the Release Manager role enables Hash Freeze, Then a baseline snapshot of hashes is recorded with freezeId and timestamp, And baseline hashes become read-only until Unfreeze is performed by an authorized role.
- Given Hash Freeze is active, When new versions of assets are uploaded, Then deltas are always computed against the frozen baseline, And the baseline hash values do not update automatically.
- Given a user attempts to change the baseline while frozen, When the action is performed, Then the system blocks the change with HTTP 409 and instructs the user to Unfreeze first.
- Given Hash Freeze is deactivated (Unfreeze confirmed), When deltas are recomputed, Then the last-consented versions become the new baseline for future comparisons.
No-Change Path and Non-Impacting Outcome
Given an upload where normalized content is identical to the baseline When delta detection completes Then classification is None And reasons[] is empty or contains only below-threshold markers And no re-consent-required flag is set in the exposed result And downstream modules receive a non-impacting delta outcome
Impacted Parties Mapping & Scope Minimization
"As a label project coordinator, I want the system to identify only the recipients affected by a change so that we don’t spam or delay uninvolved parties."
Description

Maintain a version-to-approval matrix that links recipients to the exact assets and terms they consented to. Given a detected delta, resolve the minimal impacted scope: which items changed and which recipients previously consented to those items or terms. Exclude unaffected assets and recipients, respect per-recipient permissions, handle group recipients and role-based routing, and update mappings when assets move between release folders. Output a precise impacted list for targeted re-consent while preserving existing valid sign-offs.
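The minimal-scope rule described above — changed items intersected with each recipient's prior consents, filtered by current permissions — can be sketched as a set operation. The data shapes here are illustrative assumptions, not IndieVault's schema:

```python
def resolve_impacted(changed_items: set, consents: dict, permissions: dict,
                     item_kind: dict) -> list[dict]:
    """Return the minimal (recipient, item) re-consent pairs.

    consents:    recipient -> set of item ids they previously approved
    permissions: recipient -> set of asset kinds they may currently access
    item_kind:   item id   -> asset kind (e.g. 'mix', 'artwork', 'terms')
    """
    impacted = []
    for recipient, approved in sorted(consents.items()):
        # Only items that both changed AND were previously consented to.
        for item in sorted(changed_items & approved):
            # Respect per-recipient permissions: no permission, no request.
            if item_kind[item] in permissions.get(recipient, set()):
                impacted.append({"recipient": recipient, "item": item})
    return impacted
```

Recipients whose consents cover only unchanged items fall out of the intersection automatically, which is what keeps existing sign-offs valid and uninvolved parties unbothered.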

Acceptance Criteria
Mix Hash Delta Triggers Minimal Impact For Prior Mix Approvers
Given a release with assets: Mix v1 and Artwork v1, and an approval matrix: R1=audio (consented Mix v1), R2=audio+artwork (consented Mix v1 & Artwork v1), R3=artwork (consented Artwork v1) When a hash delta is detected for Mix v1->v2 and no other items or terms change Then the impacted items list includes only the Mix asset And the impacted recipients list equals {R1, R2} And recipients without audio permission (e.g., R3) are excluded And existing sign-offs for Artwork v1 remain valid and are not re-requested And each impacted entry includes recipient_id, item_id, item_kind='mix', change_type='asset_delta', from_version='v1', to_version='v2', reason='hash_delta'
Terms Change Targets Business Roles Only
Given recipients R1 (A&R), R2 (Legal), R3 (Designer) and an approval matrix recording term consent T1 When the licensing terms change T1->T2 with no asset deltas Then the impacted items list contains only 'Terms' And the impacted recipients list equals the set of recipients who previously consented to terms and/or hold a role mapped to terms approval (e.g., {R2}) And asset-only approvers (e.g., R1, R3) are excluded And all existing asset sign-offs remain valid and are not re-requested And each impacted entry includes recipient_id, item_id='terms', change_type='term_delta', from_version='T1', to_version='T2', reason='terms_delta'
Exclude Unaffected Assets And Recipients In Multi-Asset Release
Given a release with 10 assets and an approval matrix mapping each asset to its consenting recipients When only the cover artwork asset changes hash Then the impacted items list contains exactly the changed artwork asset(s) And the impacted recipients list equals the union of recipients who previously consented to those artwork asset(s) only And zero recipients who consented exclusively to other assets are included (0 false positives) And the number of impacted entries equals (#changed_artwork_assets × #prior_consenters_per_asset) with no duplicates
Group Recipient And Role-Based Routing Resolution
Given a group recipient 'Label Team' with users U1 (A&R), U2 (Legal), U3 (Design) and role mappings: audio->A&R, terms->Legal, artwork->Design, and the matrix shows 'Label Team via U1' consented to Mix v1 When a mix delta is detected Then impacted recipients resolve to U1 (the individual who previously consented for that asset kind), not the entire group And if no prior individual consent exists, route to users mapped to the asset kind role within the group (audio->A&R) And a user appearing both individually and via group is notified once (deduped by user_id) And each impacted entry records source='group_resolved' or 'individual' accordingly
Mapping Persists When Assets Move Between Release Folders
Given asset S ('Song X - Stems v3') with existing consents moves from 'EP/Disc1' to 'Album/Disc1' without content change When a delta v3->v4 is later detected for S Then approval mapping follows S by immutable asset_id, not folder path, preserving prior consents And the impacted recipients computed after the move are identical to those computed before the move for the same asset_id And an audit log entry exists for the move with old_path, new_path, asset_id, and timestamp And no re-consent is triggered due to the move itself (only due to the later delta)
Idempotent Impact Resolution For Rapid Successive Deltas
Given asset 'Song Y - Mix' receives deltas v1->v2 at 10:00 and v2->v3 at 10:05 before any re-consents are collected When impacted scope is computed at 10:05 for v3 Then the impacted recipients list equals the set of prior consenters for the mix as of v2 and includes no unrelated recipients And repeated computations with unchanged approval state return identical impacted lists (idempotent) And recipients are unique per recipient-item pair (no duplicates) And for a dataset of up to 1,000 assets and 5,000 recipients, impact resolution completes within 1 second p95
Permission Changes Narrow Impact Without Invalidating Valid Sign-offs
Given R4 previously consented to 'Song Z - Mix v1' with audio permission, R4's audio permission is later removed, and a substitute role owner R5 is configured for audio When a mix delta v1->v2 occurs Then R4 is excluded from the impacted recipients for re-consent while their v1 sign-off remains valid in audit history And R5 is included per role-based routing rules if current approver coverage is required And each impacted entry noting substitution includes reason='permission_changed' and substitute_recipient_id=R5
Targeted Re-Consent Orchestration
"As a mixing engineer, I want to receive concise re-consent requests only for what changed so that I can approve quickly without re-reviewing unrelated items."
Description

Automatically assemble and deliver re-consent requests for only the changed items and terms. Generate expiring, watermarkable review links per recipient with device-friendly signing, change summaries, and options to approve or request revisions. Schedule smart reminders, allow one-click bulk approval for multiple items, and prevent duplicate requests by merging pending ones. Support manual override, escalation, and policy-controlled distribution holds until required re-consents are completed. Track delivery, opens, and approvals with per-recipient analytics.

Acceptance Criteria
Auto-detect changes and assemble targeted re-consent package
- Given items and terms have stored last-consented hashes and recipient consent mappings, When a new version of any item or term is saved and its hash differs from the last-consented hash, Then the system assembles a re-consent package that includes only impacted items and only recipients who previously consented to those items or are required by policy.
- Given multiple changes occur within a 10-minute coalescing window, When orchestration runs, Then impacted changes are grouped into a single package per recipient with an itemized change summary, and unaffected items are excluded.
- Given an authorized user reviews the package before send, When they add or remove items or recipients and confirm, Then the package scope updates accordingly and the changes are captured in an immutable audit log.
- Given up to 100 changed items across up to 100 recipients, When orchestration executes, Then package assembly completes within 60 seconds and enqueues deliveries successfully.
Generate expiring, watermarkable review links per recipient
- Given a re-consent package is ready for recipient R, When links are generated, Then IndieVault creates a unique, signed, per-recipient URL that expires at the configured time (default 7 days, range 1–30 days) and returns HTTP 410 after expiry.
- Given supported media types in the package (audio streams, images, PDFs), When R previews or downloads review assets, Then the assets display a visible watermark containing R’s identifier and timestamp; for audio streams, an audible watermark overlay is present during preview.
- Given revocation is triggered by an admin, When a link is revoked, Then subsequent requests to the URL are blocked with HTTP 410 and analytics record the revocation event.
- Given batch generation for up to 100 recipients, When links are created, Then average link-generation latency is under 500 ms per recipient and the URLs are cryptographically unguessable (≥128-bit entropy).
Device-friendly signing with change summaries and approve/request revisions
- Given recipient R opens the review link on a mobile (≥320 px width) or desktop device, When the review page loads on a 4G connection, Then the initial interactive content renders within 2 seconds and the page meets WCAG 2.1 AA for navigation, contrast, and keyboard access.
- Given a package contains changed items and/or terms, When R views the package, Then each item shows a change summary (previous vs. current version, timestamp, and highlights) and terms show a diff summary.
- Given R decides on the package, When R clicks Approve All, Then the system records approval for every item, captures a single signature (typed or drawn) with IP and user agent, and updates last-consented hashes.
- Given R requests revisions on any item, When submitting the package, Then a mandatory comment of at least 10 characters is required for each revision-requested item, and those items remain pending while others may be approved.
- Given intermittent connectivity, When the submission occurs offline, Then the client queues the submission and retries automatically within the link validity window or prompts R to retry manually.
Smart reminders and escalation scheduling
- Given recipients have pending re-consents, When the reminder scheduler runs, Then the system sends reminders at T+2 days and T+5 days, and escalates to the designated manager at T+7 days for non-responders.
- Given a recipient has received a reminder, When within any 24-hour period, Then the system sends no more than one reminder to that recipient.
- Given a recipient approves or requests revisions, When the state changes to non-pending, Then all future reminders and escalations for that package are canceled.
- Given reminders and escalations are sent, When events are delivered, Then each event is logged with timestamp and recipient, and delivery outcomes (sent, bounced, failed) are recorded.
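The reminder cadence (T+2 days, T+5 days, escalation at T+7 days, everything canceled on response) reduces to offset arithmetic. A minimal sketch that omits the one-reminder-per-24h throttle and delivery logging; the event labels are illustrative:

```python
from datetime import datetime, timedelta

REMINDER_OFFSETS = [timedelta(days=2), timedelta(days=5)]
ESCALATION_OFFSET = timedelta(days=7)

def pending_events(sent_at: datetime, now: datetime, responded: bool) -> list[str]:
    """Return the reminder/escalation events that have come due.

    A response (approval or revision request) cancels all future reminders
    and escalations for the package.
    """
    if responded:
        return []
    events = [f"reminder@T+{o.days}d" for o in REMINDER_OFFSETS if now >= sent_at + o]
    if now >= sent_at + ESCALATION_OFFSET:
        events.append("escalate@T+7d")
    return events
```

A scheduler would diff this list against already-sent events to decide what to deliver on each run, which also makes the scheduler idempotent across restarts.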
Prevent duplicate requests by merging pending ones
Given recipient R has an open re-consent package P for release X When additional changes for release X are detected before P is completed Then the system merges the new changes into P and updates the change summaries instead of creating a new package
Given P already contains per-item decisions When an item included in P changes again Then previous decisions for that item are invalidated and set back to pending with a note explaining the update; unrelated items retain their decision states
Given a merge occurs When R opens the existing link Then the link reflects the merged content and R receives a single consolidated notification about the update
Given all active packages for release X per recipient When validating constraints Then no recipient has more than one open re-consent package for the same release at any time
Policy-controlled distribution hold until re-consents complete
Given a release is configured with the policy “Hold distribution until required re-consents complete” When any required recipient remains pending or has requested revisions Then a distribution hold flag is applied that blocks exports and distribution jobs for that release
Given all required re-consents are approved When the last approval is recorded Then the distribution hold automatically clears, queued jobs resume, and an audit entry is created with timestamp and actor
Given an authorized role initiates a manual override When they provide a justification Then the hold is lifted for the release, the justification is stored immutably, and notifications are sent to the compliance channel
Given the release dashboard is viewed When the hold is active Then the UI displays the hold status, the list of blocking recipients, and links to their packages
Per-recipient analytics for delivery, opens, and approvals
Given re-consent deliveries are sent When tracking events occur Then the system records per-recipient metrics: delivery status (sent, bounced, failed), opens, link visits with device type, asset previews, approvals, and revision requests with timestamps
Given analytics are queried for a release or recipient When filtering by time window and status Then the dashboard returns results within 2 seconds for up to 10,000 events and supports CSV export with the same filters
Given a recipient opens a link on multiple devices When events are correlated Then the analytics attribute all events to the same recipient and maintain a chronological audit trail
Given data retention policy of 24 months When events exceed the retention period Then analytics data are purged on schedule while preserving the immutable consent records
Change Summary & Diff Visualization
"As a recipient reviewer, I want an at-a-glance summary of exactly what changed so that I can make an informed decision quickly."
Description

Provide clear, human-readable diffs tailored to asset type: audio summaries (duration, sample rate, LUFS, peak, embedded codes, cue changes), visual artwork comparisons (before/after toggle and overlay heatmap), redlined contract text with clause-level change notes, and structured metadata field deltas. Render summaries in emails and the approval page, allow download of prior and current versions, compute heavy diffs asynchronously, and ensure accessibility with alt text and keyboard navigation.

Acceptance Criteria
Audio Diff Summary for New Mix Version
Given an audio asset has version N and version N-1 with different content hashes When the user opens the approval page or views the change-notification email Then the diff summary displays, for both versions, duration (mm:ss.s), sample rate (Hz), bit depth, integrated LUFS (1 decimal), true peak (dBTP, 1 decimal), embedded codes (ISRC/UPC), and cue/marker list with timestamps And metrics that changed are visually highlighted and unchanged metrics are labeled Unchanged And LUFS delta ≥ 0.5 LU is flagged as Level change and true-peak delta ≥ 0.5 dBTP is flagged as Peak change And duration delta ≥ 0.5 s is displayed with +/- sign And added/removed/modified cues are listed with +/- indicators and time deltas And analysis values are within tolerances: duration ±0.1 s, LUFS ±0.1 LU, true peak ±0.1 dBTP
Artwork Before/After & Heatmap Comparison
Given an artwork asset has a new version differing from the prior version When the user opens the artwork diff viewer on the approval page Then the viewer provides Before/After toggle and Overlay Heatmap modes And the heatmap visualizes per-pixel differences with an adjustable sensitivity scale 1–5 (default 3) And zoom controls allow 25%–400% with panning via mouse drag and keyboard arrows And a Change coverage metric is shown as percentage of pixels changed to the nearest 0.1% And the default view fits the full artwork within the viewport without cropping
Contract Redline with Clause-Level Notes
Given a contract text asset has changed between version N-1 and N When the user opens the contract diff Then a redline view shows insertions (green underline) and deletions (red strikethrough) preserving clause numbering And a side panel lists changed clauses with per-clause notes summarizing Added/Removed/Amended and counts of words added/removed And clicking a clause note scrolls and focuses the corresponding clause in the redline And a header summary displays total clauses changed and total words added/removed And copy/download of the redlined view as PDF is available from the diff UI
Structured Metadata Delta View
Given structured metadata exists for both prior and current versions of an asset When the user selects Metadata changes Then only fields with differences are displayed, each with JSON path, previous value, and current value And added fields are labeled Added, removed fields labeled Removed, and modified fields labeled Changed And data types are preserved and compared type-safely (e.g., 12 ≠ "12") And array differences are shown at index level with insertions, deletions, and modifications And the user can copy the changes as an RFC 6902 JSON Patch representing the delta
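The "copy as RFC 6902 JSON Patch" behavior can be illustrated for the simple case of flat metadata objects. This sketch is not the product's diff engine; it only shows how field-level deltas map onto `add`/`remove`/`replace` operations, including the type-safe comparison the criteria demand (so `1` vs `true` is a change even though Python considers them equal). Nested objects and index-level array diffs would need a recursive variant.

```python
def metadata_patch(prev: dict, curr: dict) -> list:
    """Minimal RFC 6902 patch for flat metadata dicts (illustrative only)."""
    ops = []
    for key in sorted(prev.keys() | curr.keys()):
        # JSON Pointer escaping per RFC 6901: "~" -> "~0", "/" -> "~1"
        path = "/" + key.replace("~", "~0").replace("/", "~1")
        if key not in curr:
            ops.append({"op": "remove", "path": path})
        elif key not in prev:
            ops.append({"op": "add", "path": path, "value": curr[key]})
        elif prev[key] != curr[key] or type(prev[key]) is not type(curr[key]):
            # the explicit type check keeps 12 != "12" and 1 != True, per spec
            ops.append({"op": "replace", "path": path, "value": curr[key]})
    return ops
```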
Asynchronous Diff Processing & Readiness Indicators
Given a diff request is initiated for an asset requiring heavy processing (e.g., audio analysis or image heatmap) When the job is enqueued Then the UI shows a Processing state with a stable job identifier and skeleton placeholders And the client polls status at least every 10 seconds until completion, failure, or a 10-minute timeout And on completion, the diff renders in-place without a full page reload And on failure, an error banner appears with Retry (up to 3 attempts) and a link to diagnostics/log id And progress and final state are persisted so that a returning user resumes at the correct state
Email & Approval Page Summary Rendering with Downloads
Given a change triggers re-consent notifications for impacted recipients When the email is sent and the approval page is opened Then the email contains compact, human-readable summaries per asset type (audio metric deltas, artwork change coverage thumbnail, contract changed-clause count, metadata changed-field count) And the approval page renders the full interactive diff components matching the in-app experience And both email and approval page include secure links to download prior and current versions; links require authentication/recipient token, are audit-logged, and expire after 7 days And download filenames include asset name and version identifiers (e.g., vN and vN-1) And if a diff is still processing at send time, the email indicates Diff processing and the system sends a follow-up email with the rendered summary once ready
Accessibility: Alt Text & Keyboard Navigation
Given a user navigates using a keyboard and/or screen reader When interacting with audio, artwork, contract, or metadata diff components Then all non-text visuals (including artwork thumbnails and heatmaps) have descriptive alt text or ARIA labels And all interactive controls have roles, names, and states exposed via ARIA and meet contrast ratio ≥ 4.5:1 And all actions are keyboard operable: Tab/Shift+Tab order is logical, Enter/Space activate controls, Arrow keys adjust heatmap sensitivity and toggle before/after where applicable And modal diff viewers trap focus until dismissed and are closable with Esc, with visible focus indicators And the experience satisfies WCAG 2.1 AA criteria 1.1.1, 1.3.1, 1.4.3, 2.1.1, 2.4.3, and 2.4.7 for the diff flows
Consent Ledger & Compliance Audit Trail
"As a rights administrator, I want a complete audit trail of consents tied to exact content hashes so that we can prove compliance and resolve disputes."
Description

Create an immutable, versioned ledger recording who consented to which asset/term version and when, including content hashes, timestamps, IP/device fingerprints, signature artifacts, and supersession relationships. Produce exportable audit bundles (PDF/JSON) per release or recipient, support revocations and re-approvals, and provide tamper-evidence via server-side signing. Enable search and filters by asset, recipient, status, and date to satisfy compliance and dispute resolution needs.

Acceptance Criteria
Record Consent with Immutable, Versioned Ledger Entry
Given a recipient approves Asset A:Version V with content hash H1 via the consent UI When the approval is submitted Then the ledger stores a new immutable record containing: asset_id, version_id, content_hash=H1, recipient_id, ISO 8601 UTC timestamp (ms precision), IP address, device fingerprint hash, and signature artifact And the record is append-only: any API/database update attempt to this record is rejected, and changes are only represented by creating a new record referencing the prior one And the record is server-side signed at creation, and signature verification via the public key endpoint returns "valid"
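The append-only, tamper-evident property can be sketched with a hash-chained ledger: each record embeds the hash of its predecessor, so editing any stored record breaks verification from that point on. This is a simplified stand-in; the criteria above additionally require a server-side asymmetric signature verifiable via a public key endpoint, which a production build would layer on top. All field names here are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone


class ConsentLedger:
    """Append-only, hash-chained ledger sketch (not the product schema)."""

    def __init__(self):
        self.records = []

    def append(self, **fields) -> dict:
        prev_hash = self.records[-1]["record_hash"] if self.records else "0" * 64
        body = dict(fields,
                    ts=datetime.now(timezone.utc).isoformat(timespec="milliseconds"),
                    prev_hash=prev_hash)
        # deterministic serialization so the hash is reproducible
        canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
        record = dict(body, record_hash=hashlib.sha256(canonical.encode()).hexdigest())
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash and chain link; False means tampering."""
        prev = "0" * 64
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "record_hash"}
            if body["prev_hash"] != prev:
                return False
            canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
            if hashlib.sha256(canonical.encode()).hexdigest() != rec["record_hash"]:
                return False
            prev = rec["record_hash"]
        return True
```

Supersession and revocation then become new records that reference a prior `record_hash`, never mutations of existing rows.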
Supersession on Asset/Term Change and Targeted Re‑Consent
Given Asset A changes from Version V1 (hash H1) to Version V2 (hash H2) and Change Guard targets Recipient R for re-consent When Recipient R re-approves V2 Then the ledger creates a new consent record for V2 that references and supersedes the prior V1 consent for R And the status of R’s V1 consent is marked "superseded" without deleting or mutating the original record And a query for the latest effective consent for (A,R) returns the V2 record And recipients not targeted by Change Guard retain their existing consent status with no new records created
Export Audit Bundle per Release/Recipient with Tamper‑Evident Signing
Given an authorized user requests an audit export for Release X and/or Recipient R When the export is generated Then the system produces a bundle containing: JSON (all scoped ledger records, supersession/revocation links, hashes, timestamps, IP/device fingerprints, signature artifacts) and a human-readable PDF summary And the bundle includes a detached server-side signature and a SHA-256 checksum file And verifying the bundle with the published public key returns "valid" And for up to 10,000 included records the export completes within 20 seconds at p95
Revocation and Re‑Approval Lifecycle Tracking
Given Recipient R revokes consent for Asset A:Version V1 When the revocation is submitted Then the ledger appends a revocation record referencing the prior consent, capturing timestamp, reason (optional), and origin IP/device And the effective status for (A,V1,R) becomes "revoked" and the asset is excluded from sharing workflows for R When R later re-approves V1 or a newer version V2 Then a new consent record is added and the latest effective status reflects the newest non-revoked consent
Search and Filter by Asset, Recipient, Status, and Date
Given a user applies filters Asset=A, Recipient in [R1,R2], Status in [approved, revoked, superseded], Date range [D1,D2] When the search is executed Then only matching ledger records are returned, sorted by timestamp desc by default And results are paginated (default 50 per page) with total count returned And the date filter is inclusive of D2 up to 23:59:59.999 UTC And with a dataset of 50,000+ records, time-to-first-page is <= 2 seconds at p95
Tamper Detection on Stored Records and Exports
Given any stored ledger record or exported bundle is altered after signing When signature verification is performed via the verification endpoint or client tooling Then verification fails with a clear "tampered" result and identifies the failing artifact And original, untampered records verify as "valid" using the same mechanism
Policy & Threshold Configuration
"As a team lead, I want to set what counts as a material change for our catalog so that Change Guard triggers re-consent only when it truly matters."
Description

Allow workspace owners to define thresholds and rules that determine when re-consent is required. Configure per-asset-type policies such as audio loudness/duration deltas, artwork variance percentage, metadata fields that mandate re-consent, and contract change severities. Support role-based exceptions, environment presets, dry-run testing to preview impact, and retroactive application to queued requests. Persist policies per workspace and per release template.

Acceptance Criteria
Threshold-based Re‑Consent for Audio and Artwork
Given a workspace policy where audio re-consent is required when loudness delta > 1.0 LUFS or duration delta > 2s, and artwork re-consent is required when SSIM < 0.90 or pHash Hamming distance ≥ 8 And a release with an approved audio track A1 and approved artwork AR1 with prior recipient consents recorded
When a new mix for A1 is uploaded with loudness delta = 1.2 LUFS and duration delta = 0.3s Then a re-consent request is generated only for A1 and only for recipients who previously consented to A1 And no re-consent is generated for AR1
When a new artwork for AR1 is uploaded with SSIM = 0.85 Then a re-consent request is generated only for AR1 and only for recipients who previously consented to AR1
When a new mix for A1 is uploaded with loudness delta = 0.4 LUFS and duration delta = 1.0s Then no re-consent is generated for A1
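The threshold logic in this criterion reduces to two small predicates. The policy dictionary keys below are illustrative names, not the product's configuration schema; the comparison directions (strictly greater for audio deltas, strictly less for SSIM, greater-or-equal for pHash Hamming distance) follow the criterion's wording exactly.

```python
def audio_needs_reconsent(lufs_delta: float, duration_delta_s: float,
                          policy: dict) -> bool:
    # Re-consent when either delta strictly exceeds its threshold
    return (lufs_delta > policy["max_lufs_delta"]
            or duration_delta_s > policy["max_duration_delta_s"])


def artwork_needs_reconsent(ssim: float, phash_hamming: int,
                            policy: dict) -> bool:
    # Re-consent when similarity drops below the floor, or the
    # perceptual-hash distance reaches the limit (>= per the spec)
    return (ssim < policy["min_ssim"]
            or phash_hamming >= policy["max_phash_hamming"])
```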
Metadata-Driven Re‑Consent Triggers
Given a workspace policy that marks the metadata fields ISRC, Explicitness, and Release Date as re-consent-mandatory And a release with prior recipient consents When the ISRC is changed Then targeted re-consent requests are generated only for assets and recipients impacted by the ISRC change When a non-mandatory field (e.g., Track Comment) is changed Then no re-consent requests are generated And an audit log records which metadata field(s) triggered re-consent
Contract Severity Policy Enforcement
Given a workspace policy that classifies contract edits with severities: Minor (no re-consent), Major (re-consent), Critical (re-consent with legal review) And severity rules: split change ≥ 1% = Major, payment term change = Major, territory scope change = Critical, typo fix = Minor And a contract with prior consents from signatory parties
When a split is changed by 2% Then re-consent requests are generated only for contract signatories and stakeholders defined by the policy
When a typo is corrected without altering legal meaning Then no re-consent is generated
When territory scope is expanded Then re-consent is generated and the item is flagged for legal review per policy
Role-Based Exception Handling
Given a workspace policy with role-based exceptions: Internal QA and A&R Assist exempt from re-consent for audio-only loudness deltas ≤ 1.5 LUFS; Executive Admin may accept on behalf of Manager role And a release with recipients across roles
When an audio update with loudness delta = 1.2 LUFS occurs Then recipients with exempt roles receive no re-consent request while non-exempt recipients do And the acceptance-by-proxy is permitted only for Executive Admin on Manager’s behalf with an audit entry
When an audio update with loudness delta = 2.0 LUFS occurs Then all recipients, including exempt roles, receive re-consent because the exception threshold is exceeded
Environment Presets and Policy Persistence
Given environment presets defined as Lenient, Balanced, and Strict with preset threshold bundles
When the owner applies the Strict preset to the workspace and saves Then the policy values update to the Strict bundle and persist across sessions And creating a new release from Template T that references the workspace policy uses the persisted values
When Template T overrides artwork variance to SSIM < 0.88 Then releases created from Template T use 0.88 for artwork variance while the workspace default remains at the preset value And switching workspaces shows no leakage of policy values between workspaces
Dry‑Run Impact Preview without Notifications
Given a workspace policy and a set of pending changes to audio, artwork, metadata, and contracts When the owner runs a dry-run Then the system displays a deterministic preview including: impacted assets list, impacted recipients list per asset, trigger reasons per item, and counts of re-consent requests that would be created And no emails, links, or notifications are sent And the dry-run output can be exported and matches subsequent live apply results when no additional changes occur
Retroactive Policy Application to Queued Requests
Given existing queued (not yet sent) re-consent requests for a release under a prior policy And the owner updates the policy and chooses Apply Retroactively When retroactive apply is executed Then the queued set is recalculated to add newly required requests and remove requests no longer required, matching the latest dry-run preview And requests already sent or completed are not altered And an audit log records the before/after counts and the reason codes for changes
Debounce & Batch Handling
"As a producer iterating on a mix, I want the system to batch quick successive tweaks into one request so that I don’t overwhelm reviewers with multiple pings."
Description

Reduce notification fatigue by debouncing rapid successive edits into a single re-consent cycle per asset within a configurable time window. Merge pending requests when new changes arrive, recompute diffs, and update recipients without losing prior acknowledgments. Provide per-item cooldowns, conflict resolution when approvals exist for earlier deltas, and clear activity logs reflecting batched operations.

Acceptance Criteria
Debounce Rapid Successive Edits into Single Cycle
Given asset A with a configured debounce window T=10 minutes and no open re-consent cycle When three distinct edits to A occur at t0, t0+2m, and t0+9m Then exactly one re-consent cycle is created for A covering all three edits and opens no later than 60 seconds after the last edit And each intended recipient receives at most one notification for that cycle And the cycle’s diff view includes all changed files/terms spanning t0..t0+9m
Configurable Debounce Window Per Scope
Given a project-level debounce window T_project=2 minutes overriding the default and asset A belongs to that project When edits occur at t0 and t0+90s and another at t0+210s Then the first two edits are batched into one cycle and the third edit starts a new cycle And updating T_project to 5 minutes affects only cycles created after the change And the activity log records the window value used for each cycle
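The window semantics in the two criteria above can be sketched with a small batching class. Note an assumption: this sketch measures the debounce window from the first edit in the open cycle, which matches both examples (with a 2-minute window, edits at t0 and t0+90s batch together and t0+210s opens a new cycle); a production debouncer might instead reset the timer on every edit, which the spec leaves open.

```python
class DebouncedCycles:
    """Batch rapid successive edits into re-consent cycles (illustrative).

    An edit joins the open cycle if it arrives within `window_s` of that
    cycle's first edit; otherwise it opens a new cycle.
    """

    def __init__(self, window_s: float):
        self.window_s = window_s
        self.cycles: list[list[float]] = []   # each cycle = list of edit times

    def record_edit(self, t: float) -> None:
        if self.cycles and t - self.cycles[-1][0] < self.window_s:
            self.cycles[-1].append(t)         # merge into the open cycle
        else:
            self.cycles.append([t])           # open a new cycle
```

Per-item cooldowns and merge-triggered diff recomputation would hang off the same cycle objects.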
Merge New Changes into Pending Cycle and Recompute Diffs
Given asset A has an open re-consent cycle started at t0 with edits E1..En When a new change E(n+1) arrives at t0+Δ within the debounce window Then E(n+1) is merged into the same cycle and the aggregated diff is recomputed within 30 seconds And previously issued review links remain valid and display the updated diff And only recipients impacted by E(n+1) receive an update notification
Preserve Valid Acknowledgments and Targeted Re‑consent
Given recipients R1..Rk with acknowledgment statuses on items changed by E1..En When new change E(n+1) affects a subset S of items previously acknowledged Then acknowledgments unrelated to S remain valid and are not cleared And acknowledgments for items in S are marked superseded and require re‑consent And the UI shows per-item status transitions with timestamps and reasons
Per‑Item Cooldown and Queuing After Cycle Closure
Given asset A has a cooldown C=15 minutes per item and its cycle closes at tc When edits occur at tc+5m and tc+10m Then no new re‑consent cycle starts before tc+C And the edits are queued for A and included in the next cycle that opens at or after tc+C And no notifications are sent while edits are queued during cooldown And the activity log shows queued edit count and next eligible time
Conflict Resolution with Existing Approvals
Given an approval exists for mix file version v1 by recipients R and a subsequent edit creates version v2 modifying the same file When v2 is merged into an open or next cycle Then approvals on v1 for that file are marked superseded with reason "file changed" And only recipients in R are requested to re‑consent for the affected file And approvals on unrelated assets/terms remain valid
Activity Log Captures Batched Operations End‑to‑End
Given batching operations occur for asset A over a period P When viewing the activity log filtered by A and P Then entries exist for cycle start, merged edits with timestamps, recalculated recipients, preserved and superseded acknowledgments, cooldown applied, queued edits, notifications sent, and cycle closure And each cycle and edit entry has a unique ID, actor, timestamp, and scope (asset/item) And the log is immutable and exportable to CSV/JSON

Consent Ledger

Export a tamper‑evident consent pack with signer identities, timestamps, signer device/passkey info, and the file‑hash tree—plus a QR to a hosted verification page. Share with distributors, PROs, or auditors for instant provenance checks without exposing your whole vault.

Requirements

Tamper‑Evident Consent Pack Export
"As an indie label manager, I want to export a cryptographically signed consent pack so that I can prove rights and approvals to partners without exposing my entire vault."
Description

Generate a single exportable consent pack that bundles a canonicalized JSON manifest, a PDF human‑readable summary, the file‑hash Merkle root, and a detached digital signature. The manifest records signer identities, consent scopes, timestamps, included asset references, and a unique pack ID. The archive is signed with an IndieVault-managed key and includes a timestamp token for non‑repudiation. The pack is verifiable offline and designed to be shared externally without granting access to the broader vault.

Acceptance Criteria
Export Pack Includes Required Artifacts
Given a release in IndieVault has recorded consents for included assets When a user with export permission initiates Export Consent Pack Then exactly one file is produced as the export artifact And the artifact contains: a canonicalized JSON manifest, a human-readable PDF summary, a Merkle root for included assets, a detached digital signature object, and a timestamp token And manifest.merkleRoot is present and equals the bundled Merkle root value And no asset binary content is included in the artifact
Manifest Canonicalization and Schema Compliance
Given export inputs are identical across two runs When the JSON manifest is generated Then the manifest validates against the documented JSON Schema (required fields: packId, createdAt, signers[], consentScopes[], assets[], merkleRoot) And the canonicalization produces byte-identical output across the two runs And keys are sorted lexicographically and timestamps are RFC 3339 UTC strings And field types and enumerations match the schema (e.g., packId is a UUID; assets[].hash.algorithm is declared)
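The byte-identical requirement hinges on deterministic serialization. A minimal Python sketch, sorting keys lexicographically and stripping whitespace, is shown below; a production exporter would follow a full canonicalization spec such as RFC 8785 (JCS), which additionally pins down number and string escaping rules that `json.dumps` alone does not guarantee for all inputs.

```python
import json


def canonicalize(manifest: dict) -> bytes:
    """Deterministic manifest serialization: sorted keys, no whitespace, UTF-8.
    Simplified sketch; RFC 8785 JCS also constrains number/string encoding."""
    return json.dumps(manifest, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")
```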
Merkle Root Computation Integrity
Given the list of included assets and their recorded content hashes When the Merkle root is recomputed offline per the documented algorithm Then the computed root equals manifest.merkleRoot And changing any included asset or its recorded hash causes the recomputed root to differ And the manifest declares the hash algorithm used for leaves
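The offline recomputation above can be sketched in a few lines. One assumption is flagged in the code: when a level has an odd number of nodes, this sketch promotes the last node unchanged, which is only one common convention; the pack's documented algorithm might instead duplicate the last node, and verifiers must match whichever convention the manifest declares.

```python
import hashlib


def merkle_root(leaf_hashes: list) -> bytes:
    """SHA-256 Merkle root over ordered leaf hashes (illustrative).
    Odd nodes are promoted to the next level; other conventions exist."""
    assert leaf_hashes, "at least one leaf required"
    level = list(leaf_hashes)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(hashlib.sha256(level[i] + level[i + 1]).digest())
            else:
                nxt.append(level[i])   # promote the odd node unchanged
        level = nxt
    return level[0]
```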
Digital Signature and Timestamp Validity
Given the IndieVault public verification key and the exported artifact When verifying the detached digital signature offline Then the signature validates over the signed payload that binds the canonicalized manifest, the Merkle root, and the PDF summary digest And the RFC 3161 timestamp token validates and its time is within ±5 minutes of manifest.createdAt And modifying any file within the artifact results in signature verification failure
Offline Verification Without Vault Access
Given a clean machine with no network connectivity and only the exported artifact plus the IndieVault public key When the documented verification procedure is executed Then verification completes successfully without any network calls And the output reports packId, merkleRoot, signer identities, consent scopes, and overall verification status And no attempt is made to access IndieVault services or vault contents
No Vault Exposure or Secrets Leakage
Given the exported artifact is inspected When scanning file contents and metadata Then no access tokens, API keys, session cookies, or internal service endpoints are present And no absolute internal file paths or unrelated vault identifiers are present And only the signer identity attributes intended for disclosure are included
Pack ID Uniqueness and Traceability
Given multiple consent packs are exported for different consent sets When comparing their identifiers Then each manifest.packId is globally unique and matches the UUID pattern And the same packId appears in both the manifest and the PDF summary And the export filename includes the packId for traceability
WebAuthn Signer Identity & Device Capture
"As a signing artist, I want my consent tied to my passkey and device details so that my approval is provable and cannot be spoofed."
Description

Capture signer identity using passkeys/WebAuthn during consent, storing attested authenticator metadata (where available), device/OS info, and signer display name, with explicit consent and PII minimization. Persist ISO 8601 timestamps, signer public key, and a hash of the identity record. Integrate seamlessly with existing e‑signature flow and include this metadata in the export manifest and PDF summary.

Acceptance Criteria
Passkey WebAuthn Capture During Consent
Given a signer is on the consent step of the e-sign flow on a WebAuthn-capable browser When the signer completes a WebAuthn assertion using a passkey for the relying party Then the server verifies the assertion challenge, RP ID, origin, and signature; rejects on failure with a clear error and no consent record created And the system persists the signer public key (COSE key), credential ID (base64url), sign count, and an ISO 8601 UTC timestamp (e.g., 2025-08-19T14:05:12Z) linked to the consent record And the system stores the signer-provided display name (1–120 UTF-8 chars) normalized to NFC And the system computes and stores a SHA-256 hash of the RFC 8785 JCS-canonicalized identity record
Authenticator Attestation and Device/OS Metadata
Given the authenticator returns attestation data or an existing credential provides attestation metadata When attestation information is present Then the system verifies the attestation statement and trust chain; records AAGUID, attestation format, and trust result (trusted/untrusted/none) And the system records authenticator type (platform or cross-platform) and transports when provided And the system captures device/OS metadata via User-Agent Client Hints: platform, platformVersion, model (if present), and brands And when attestation is not available, the record sets attestation_status = "none" without failing the consent
Explicit Consent and Privacy Notice
Given the signer is about to capture identity and device data When the consent UI lists the data categories to be captured and links to the privacy notice Then the signer must explicitly check "I consent to identity/device capture" before WebAuthn is invoked And the system stores consent_version, consent_text_hash (SHA-256), and an ISO 8601 UTC timestamp alongside the identity record And if the signer declines consent, the flow halts with no identity or device data persisted
PII Minimization Field Whitelist
Given identity capture completes Then only the following fields are persisted: display_name, public_key (COSE), credential_id, sign_count, timestamp_utc, identity_record_hash, aaguid, attestation_format, attestation_trust_result, authenticator_type, transports, device_platform, device_platform_version, device_model, ua_brands And no IP address, email address, phone number, GPS/location, MAC address, device serial/IMEI, or full user agent string are stored And any fields not in the whitelist are dropped before persistence and verified absent in the export
Export Manifest and PDF Summary Includes Identity Metadata
Given a consent pack is exported When the export manifest JSON is generated Then it includes the signer identity record fields and identity_record_hash; timestamps are ISO 8601 UTC; public key is provided as COSE and SHA-256 fingerprint (base64url) And the PDF summary includes signer display name, timestamp, authenticator summary (AAGUID/format/trust), device/OS summary, and public key fingerprint And the manifest and PDF exclude any fields disallowed by the PII minimization rule
Seamless E‑Signature Flow Integration and Performance
Given the WebAuthn identity capture step is part of the consent flow When a signer completes consent on Chrome, Safari, Edge, or Firefox (latest two major versions) on desktop or mobile Then the flow requires no more than one additional user interaction beyond existing consent confirmation And the 95th percentile end-to-end time from WebAuthn prompt to consent recorded is <= 3 seconds on a 50th percentile network and device And the existing e-signature flow remains functional; if WebAuthn fails, the user can retry without losing prior form inputs
Hash Tree Construction & Asset Mapping
"As a distributor reviewer, I want to verify that a specific file matches the consent pack’s hash tree so that I can confirm provenance without requesting the entire archive."
Description

Compute a deterministic Merkle tree over all files within the consent scope using SHA‑256 with stable ordering and chunking for large assets. Record per‑file leaf hashes and the Merkle root in the manifest, mapping each file to its role (track, stem, artwork, contract). Ensure repeatable builds to make independent verification yield the same root. Support partial proofs for individual files without revealing unrelated assets.

Acceptance Criteria
Deterministic Merkle Root Rebuild Across Machines
Given the same consent scope with identical file bytes, normalized paths, and role assignments on two different machines and OSes When the Merkle tree is built using SHA-256 and the defined ordering Then the per-file SHA-256 leaf hashes are identical across builds And the Merkle root is identical (64-character lowercase hex) And the manifest’s algorithm="sha256" and merkle_root values match across builds And file metadata (timestamps, permissions) do not affect any hash or the Merkle root
Stable Path Normalization and Ordering Rules
Rule: All file paths are normalized to UTF-8 NFC with forward slashes and no trailing slash Rule: Sorting is case-sensitive lexicographic by normalized relative path Rule: Redundant components (./) are removed and attempts to traverse above the scope (../) cause build failure Rule: The ordered list of files is identical regardless of OS or filesystem enumeration order
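The normalization and ordering rules above can be captured directly. This sketch applies NFC normalization, forward slashes, redundant-component removal, traversal rejection, and case-sensitive lexicographic sorting; it is a simplified illustration of the stated rules, not the product's path handler.

```python
import unicodedata


def normalize_paths(paths: list) -> list:
    """Apply the consent-scope path rules and return the sorted file list."""
    out = []
    for p in paths:
        p = unicodedata.normalize("NFC", p).replace("\\", "/").rstrip("/")
        parts = [seg for seg in p.split("/") if seg not in ("", ".")]
        if ".." in parts:
            # traversal above the scope must fail the build
            raise ValueError(f"path escapes consent scope: {p}")
        out.append("/".join(parts))
    return sorted(out)   # case-sensitive lexicographic, OS-independent
```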
Manifest Contains Per-File Leaf Hashes and Roles
Given a completed build Then the manifest contains, for each included file: relative_path, role ∈ {track, stem, artwork, contract}, content_sha256 (64-char lowercase hex), and leaf_index (integer ≥ 0) And no relative_path or leaf_index appears more than once And unknown or missing roles cause build failure with an explicit error And the manifest includes merkle_root (64-char lowercase hex) derived from the recorded leaf hashes in the recorded order
Streaming SHA-256 for Large Assets is Deterministic
Given an asset larger than 4 GB When computing content_sha256 Then the implementation streams the file in fixed-size chunks (no full-file load into memory) And the resulting content_sha256 equals the reference sha256sum of the file bytes And rehashing the same file yields the same content_sha256 regardless of chunk boundaries
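The chunk-boundary independence claimed above holds because SHA-256 is a stream over bytes; a sketch of streaming hashing (the helper name and demo data are illustrative):

```python
import hashlib
import io

def stream_sha256(stream, chunk_size: int = 1024 * 1024) -> str:
    # Hash in fixed-size chunks so a multi-GB asset is never loaded whole;
    # the resulting digest is independent of chunk_size.
    h = hashlib.sha256()
    while chunk := stream.read(chunk_size):
        h.update(chunk)
    return h.hexdigest()

# Demo on an in-memory stream; in practice this would be open(path, "rb").
data = b"\x00" * (10 * 1024 * 1024)
digest_big_chunks = stream_sha256(io.BytesIO(data), chunk_size=1024 * 1024)
digest_small_chunks = stream_sha256(io.BytesIO(data), chunk_size=4096)
```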
Export and Verify Single-File Partial Merkle Proof
Given a built manifest and a file F within the consent scope When exporting a partial proof for F Then the proof includes: F.relative_path, F.role, F.content_sha256, Merkle path (ordered sibling hashes), and merkle_root And the proof excludes the bytes and filenames of unrelated files And a verifier can recompute F’s leaf and validate the Merkle path to merkle_root successfully And any modification to F’s bytes or any hash in the proof causes verification failure
Consent Scope Isolation from Unrelated Assets
Given a defined consent scope S with files {A, B} When files outside S are added, removed, or modified elsewhere in the vault Then the merkle_root for S remains unchanged And when any file within S is modified at the byte level, added, removed, or its relative path changes, the merkle_root changes
Hosted Verification Page & QR Linking
"As an auditor, I want to scan a QR and instantly see whether a consent pack is authentic and current so that I can complete checks quickly and confidently."
Description

Provide a hosted verification endpoint that accepts a pack ID, QR scan, or uploaded manifest to validate digital signatures, timestamps, and Merkle roots. Display a minimal, privacy‑preserving summary (signers, date, consent scope, verification status) and show revocation/expiry if applicable. Generate a short URL and QR image included in the pack and PDF. Implement rate limiting, link expiration options, and uptime monitoring.

Acceptance Criteria
QR & Short URL Generation and Scan Flow
Given a consent pack is exported with verification enabled, When the export completes, Then a short URL (HTTPS, path length <= 24 chars) is generated and embedded in the pack manifest and PDF, And a QR image (PNG, >= 300x300, error correction level M or higher) is included. Given the QR is scanned on a modern mobile device, When the URL is opened, Then it resolves to the hosted verification page with HTTP 200 within 2s at p95, And the page hostname matches the configured verification domain, And no query parameters contain PII. Given the short URL token is invalid or unknown, When accessed, Then respond HTTP 404 and render a non-identifying error message.
Pack ID Entry Verification
Given a visitor opens the verification page, When they input a valid pack ID and submit, Then the service retrieves the manifest and validates cryptographic signatures over the manifest and the Merkle root against the included public keys, And verification status = Passed. Given the computed Merkle root from the manifest's file-hash tree does not match the stored root, When checked, Then verification status = Failed with reason "Merkle root mismatch". Given an invalid pack ID format, When submitted, Then the UI blocks submission with inline validation and no network call.
Manifest Upload Verification
Given a user selects "Upload manifest", When a valid consent-pack manifest file is uploaded (JSON, <= 2 MB), Then the service validates cryptographic signatures, timestamps (UTC), and Merkle root, And returns verification status and summary without persisting the file beyond the request lifetime. Given the uploaded file is not a manifest or exceeds the allowed size, When submitted, Then return HTTP 400 with a friendly UI error message and no server-side processing. Given the manifest is valid but references assets not present, When verified, Then mark signatures/hashes as Passed while showing a non-blocking "asset availability not verified" notice.
Minimal Privacy-Preserving Summary Display
Given verification completes, When rendering the summary, Then display only: pack ID, verification status (Passed/Failed/Revoked/Expired), consent scope/title, signer display names, signing timestamps (UTC ISO 8601), and pack creation date. Then do not display or embed emails, IP addresses, device IDs, passkey material, or geolocation; page source contains no such values. Then the page loads without third-party trackers and sets no third-party cookies.
Revocation and Expiry Indicators
Given a pack has been revoked in the consent ledger, When verified by any method, Then status = Revoked with revocation timestamp and optional reason, And prior signatures remain visible but marked superseded; HTTP 200. Given a short URL has expired per configured TTL, When accessed, Then respond HTTP 410 Gone and render "Link expired" with instructions to verify via pack ID or manifest upload; do not reveal pack details. Given a pack has an explicit consent validity end date, When current time > end date, Then show status = Expired with end date and do not show status = Passed.
Rate Limiting and Link Expiration Enforcement
Given repeated requests from the same IP exceed 60 requests per minute to verification endpoints, When the limit is hit, Then respond HTTP 429 with a Retry-After header and show a human-friendly throttle message. Given a tokenized short URL, When more than 5 verification attempts occur within 10 seconds, Then temporarily throttle that token for 60 seconds independently of IP. Given per-pack link expiration is configured (e.g., 30/60/90 days), When generating the short URL, Then store expiry metadata and enforce it at edge/CDN and origin layers.
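The per-IP and per-token limits above can be sketched with a fixed-window counter; the class, limits, and clock injection here are illustrative (production edges typically use sliding windows or token buckets at the CDN layer):

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    # Allows at most `limit` requests per `window` seconds per key,
    # where the key is a client IP or a short-URL token.
    def __init__(self, limit: int, window: float, clock=time.monotonic):
        self.limit, self.window, self.clock = limit, window, clock
        self.counts = defaultdict(lambda: [0.0, 0])  # key -> [window_start, count]

    def allow(self, key: str):
        now = self.clock()
        start, count = self.counts[key]
        if now - start >= self.window:
            start, count = now, 0  # window elapsed: reset the counter
        count += 1
        self.counts[key] = [start, count]
        if count > self.limit:
            # Second element maps onto the Retry-After header value.
            return False, start + self.window - now
        return True, 0.0

# Demo with an injected clock so behavior is deterministic.
t = [0.0]
lim = FixedWindowLimiter(limit=3, window=60.0, clock=lambda: t[0])
```

Running two independent limiters, one keyed by IP (60/min) and one keyed by token (5 per 10 s), gives the independent throttling the criterion asks for.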
Uptime Monitoring and Alerting
Given the verification service health endpoint is polled every 60 seconds, When a check fails 3 consecutive times, Then an alert is sent to the on-call channel within 2 minutes. Given monthly operations, When calculating availability, Then verification endpoints meet or exceed 99.9% uptime excluding documented maintenance windows on the status page. Given an outage is detected, When the system is degraded, Then serve a static fallback page for verification endpoints that returns HTTP 503 with Retry-After and does not expose PII.
Scoped Disclosure & Redaction Controls
"As an artist manager, I want to tailor the consent pack to the recipient so that I share only what’s necessary while retaining cryptographic proof."
Description

Allow creators to choose exactly which assets, signers, and consent fields are included in an export, with presets for distributors, PROs, and auditors. Redact sensitive internal notes and vault paths by default, while preserving verifiability via hash proofs. Provide a preview showing the PDF summary and manifest before export, with warnings if redactions impact verifiability.

Acceptance Criteria
Apply 'Distributor' preset for scoped export
Given a project with at least 3 assets and associated consent records containing signer identities, timestamps, device/passkey info, internal notes, and vault paths When the user selects Export Consent Pack and applies the "Distributor" preset without manual changes Then the selection auto-includes all selected assets’ consent records with signer identities, timestamps, signer device/passkey info, and the file-hash tree, and auto-excludes internal notes and vault paths And the Preview PDF shows only included fields and indicates redactions for excluded fields And the manifest lists presetName="Distributor", includedFields=[signerIdentities,timestamps,deviceInfo,fileHashTree], redactedFields=[internalNotes,vaultPaths] And no verifiability warnings are displayed
Manually scope assets, signers, and fields
Given a project with multiple assets and at least 2 signers per consent When the user individually toggles asset inclusion, signer inclusion, and per-field inclusion (on/off) in the scoping UI Then the selected counts (assets, signers, fields) update immediately and match the toggles And a "Select all"/"Deselect all" control applies to the current list and updates counts accordingly And the Preview PDF and manifest diff update within 1 second of each change And if zero assets are selected, the Export button is disabled with an inline message "Select at least one asset"
Default redaction for sensitive fields with explicit unredact
Given internal notes and vault paths exist on the consent records When the user leaves default settings or selects "Include all fields" Then internal notes and vault paths remain redacted in the Preview PDF and manifest by default And redacted values are replaced by redaction placeholders in the PDF and by hash commitments in the manifest When the user attempts to unredact either field Then a confirmation modal requires explicit acknowledgement ("Type UNREDACT") and records an audit event with user, timestamp, and reason And only after confirmation do the fields render unredacted in preview and export
Preserve verifiability under redaction
Given one or more fields are redacted and/or assets are excluded from content disclosure When the user opens the Preview and Verifiability panel Then each redacted field has an associated proof object (e.g., salted hash or Merkle path) in the manifest that validates against the export’s root hash And the verification page URL/QR validates the manifest and proofs without needing access to unredacted values And if any proof is missing or inconsistent, a blocking warning is shown: "Redactions impact verifiability" and the Export button remains disabled until resolved or the user overrides with explicit acknowledgement recorded
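The salted-hash commitment mentioned above can be sketched as follows; the salt blocks dictionary attacks on low-entropy fields (like short internal notes) and is disclosed only on unredaction. Function names and the salt length are illustrative:

```python
import hashlib
import hmac
import secrets

def commit(value: str) -> tuple[str, str]:
    # Commitment = SHA-256(salt || value). The manifest carries the digest;
    # the salt is withheld until the field is deliberately unredacted.
    salt = secrets.token_hex(16)  # 16 random bytes, hex-encoded
    digest = hashlib.sha256(bytes.fromhex(salt) + value.encode()).hexdigest()
    return salt, digest

def verify_commitment(value: str, salt: str, digest: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    recomputed = hashlib.sha256(bytes.fromhex(salt) + value.encode()).hexdigest()
    return hmac.compare_digest(recomputed, digest)

salt, digest = commit("Internal note: renegotiate split before release")
```

A verifier holding only the digest can confirm it chains into the export's root hash; a party later given the salt and value can prove the redacted content was fixed at export time.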
Pre-export preview, warnings, and estimates
Given the user has scoped a consent pack for export When the Preview screen is displayed Then the system renders a PDF summary preview and a machine-readable manifest preview And any redaction or verifiability warnings are shown with counts and affected items list And the preview shows estimated file size and page count within 10% of the actual export And the Export button remains disabled while previews are stale and becomes enabled within 1 second after the latest change is processed
Switch between Distributor, PRO, and Auditor presets
Given the presets "Distributor", "PRO", and "Auditor" are available When the user switches between these presets Then the scoping selections update deterministically per preset definition without retaining manual overrides unless the user chooses "Keep overrides" And the preview and manifest reflect the new selections and presetName accordingly And by default, internal notes and vault paths remain redacted in all presets And selecting "Reset to preset defaults" removes all manual overrides and matches the preset definition exactly
Export Formats & Delivery Controls
"As a rights admin, I want flexible delivery and versioning of consent packs so that I can meet different partner requirements and control access over time."
Description

Offer export as a signed ZIP containing manifest.json, summary.pdf, qr.png, and signature files, with optional password protection and presigned link delivery. Support versioned re‑exports with change logs, configurable expiry, and download analytics. Store checksums for each export and allow revocation of links without invalidating on‑chain/offline verification of previously downloaded packs.

Acceptance Criteria
Signed ZIP Includes Required Artifacts
Given a user initiates a consent pack export When the export completes Then a ZIP archive is produced And the ZIP root contains manifest.json, summary.pdf, qr.png, and signature files And manifest.json enumerates all included files with their SHA-256 hashes and a Merkle root And the signatures validate the manifest and Merkle root with the platform’s signing key And modifying any file within the ZIP causes signature verification to fail
Password-Protected Export
Given the user enables password protection and provides a password When the export is generated Then the resulting ZIP is encrypted and requires the exact password to open in standard unzip tools And attempts with an incorrect password fail to decrypt and are recorded in audit logs without exposing the file contents And downloading the encrypted ZIP via delivered links does not bypass the password prompt
Presigned Link Delivery With Configurable Expiry
Given the user chooses presigned link delivery and sets an expiry duration When the export finalizes Then a unique presigned URL is created per designated recipient with the configured expiry And the link successfully serves the file before expiry And after the expiry time, the link denies access and no file bytes are served And an authorized user can manually revoke the link prior to expiry
Versioned Re-Exports With Change Log
Given a prior export version exists When the user performs a re-export with changes Then a new version identifier is assigned without altering the prior version And a machine-readable change log is generated listing added, removed, and modified files with old and new hashes And the new manifest references the prior version identifier And presigned links for prior and new versions function independently per their own expiry and revocation states
Per-Recipient Download Analytics
Given presigned links are issued to identifiable recipients When recipients access and download the export Then the system records per recipient: first download timestamp, last download timestamp, total downloads, user-agent, IP country/region, bytes transferred, and outcome (success/denied) And analytics are retrievable filtered by export version and recipient And denied attempts due to expiry or revocation are logged without incrementing successful download counts
Checksums Stored And Verifiable
Given an export is created When checksums are computed Then SHA-256 checksums are stored for the ZIP archive and for each file listed in manifest.json And re-computation against a freshly downloaded pack matches the stored checksums And auditors can retrieve the checksums via API or UI for verification
Link Revocation Does Not Invalidate Offline Verification
Given a recipient has already downloaded a consent pack And the corresponding presigned link is later revoked When the recipient performs offline verification using the manifest and signatures included in the pack Then verification succeeds for the previously downloaded pack And any subsequent access attempts via the revoked link are denied without affecting offline or on-chain verification of the already saved file

Escrow Vault

Authorize and securely hold funds per milestone via Stripe/PayPal until the exact, hash-bound deliverable is approved or the deadline hits. Auto-release with itemized receipts, live status, and refund rules reduces back‑and‑forth and protects both sides. Recipients see a clear countdown and what’s required to unlock payment.

Requirements

Multi-Gateway Escrow Payments (Stripe & PayPal)
"As an indie artist funding a collaborator, I want to authorize funds securely per milestone so that payment is guaranteed but only released when the work is delivered and approved."
Description

Integrate secure payment authorization and hold-per-milestone using Stripe and PayPal, supporting manual capture/authorization flows, re-authorization when holds expire, multiple currencies, taxes/fees configuration, and sandbox environments. Implement counterparty onboarding (e.g., payee payout details/connected accounts), PCI-SAQ-A compliant client flows, webhook-driven state updates with idempotency and retries, and robust error handling for declines/timeouts. Ensure clear mapping between IndieVault projects/milestones and processor objects, with secure secret management, environment separation, and comprehensive failure/retry strategies.

Acceptance Criteria
Milestone Authorization Hold Created (Stripe & PayPal)
Given a project milestone requires payment authorization, When the payer selects Stripe or PayPal and submits payment in sandbox or production, Then an authorization/intent is created for the milestone total (amount + configured taxes/fees) with manual capture enabled. Then the authorization/intent ID, gateway, environment, currency, and amount are persisted and mapped to the IndieVault projectId and milestoneId. Then the payer sees confirmation of hold amount and authorization expiry timestamp; no funds are captured at this step. Given a decline, timeout, or SCA/3DS failure, Then the milestone remains in Awaiting Authorization, no active authorization is recorded, and the payer receives a specific actionable error code/message. Then client-side payment collection uses hosted SDKs (Stripe Elements/PayPal Checkout) and no raw PAN/CVV traverses IndieVault servers, maintaining PCI SAQ-A scope. Then all create/confirm calls include idempotency keys so client retries cannot create duplicate authorizations. Then the same flow functions in gateway sandbox environments with test credentials, and sandbox objects never appear in production views or databases.
Manual Capture on Milestone Approval
Given a valid authorization exists for a milestone, When the approver marks the milestone deliverable as Approved, Then the system captures the exact authorized amount (including taxes/fees) in the same currency via the selected gateway. Then capture requests use idempotency keys so repeated approvals or retries do not double‑capture. Then capture success transitions the milestone to Paid and triggers itemized receipts to payer and payee including gateway reference IDs. Then capture is treated as confirmed only after a verified webhook event (e.g., charge.succeeded or capture.completed) is processed; interim UI shows Pending Capture. Given capture fails (e.g., insufficient funds, expired auth), Then the system attempts a single re‑authorization and, if successful, captures; otherwise the milestone moves to Payment Action Required with surfaced error details. Then all state transitions are auditable with timestamps, actor IDs, gateway, and event IDs.
Automatic Re-Authorization on Expired Holds
Given an authorization has a known expiry timestamp, When the time is T‑24h before expiry, Then the payer is notified with instructions to refresh the authorization. When a capture is attempted after the authorization has expired, Then the system initiates re‑authorization for the current milestone amount, cancels/voids the stale authorization, and ensures only one active hold exists. Then re‑authorization results are recorded idempotently and remapped to the same milestone without creating duplicate holds. Given re‑authorization fails, Then the milestone enters Payment Action Required, no funds are captured, and both parties are notified with clear next steps. Then no more than 3 automated re‑authorization attempts occur using exponential backoff; additional attempts require user action.
Webhook State Sync with Idempotency & Retries
Given Stripe and PayPal webhooks are configured, When auth, capture, refund, or dispute events arrive, Then the endpoint verifies signatures using per‑environment secrets and rejects unsigned/invalid requests with 4xx. Then event processing is idempotent via a persistent event ledger so duplicate deliveries do not alter financial state. Then transient processing failures are retried with exponential backoff for up to 24 hours; events still failing are moved to a dead‑letter queue and alerting is triggered. Then processing updates the mapped milestone state within 30 seconds of event receipt and persists gateway IDs and timestamps. Then logs exclude PAN, CVV, or payer PII and only record tokenized IDs and necessary metadata.
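The idempotency requirement above reduces to a persistent ledger of processed event IDs; a minimal sketch (the in-memory set stands in for a durable table with a unique constraint, and the handler shape is illustrative, not a gateway SDK API):

```python
def make_handler(apply_event):
    # Gateways may deliver the same event many times; recording processed
    # IDs ensures duplicates never alter financial state a second time.
    seen = set()  # in production: a durable ledger with a unique index

    def handle(event: dict) -> str:
        event_id = event["id"]
        if event_id in seen:
            return "duplicate-ignored"
        apply_event(event)   # state transition, audit record, notifications
        seen.add(event_id)   # marked processed only after a successful apply
        return "processed"

    return handle

applied = []
handle = make_handler(applied.append)
evt = {"id": "evt_123", "type": "capture.completed"}
```

Marking the ID *after* the apply succeeds means a crash mid-processing leaves the event eligible for the gateway's retry, matching the retry/dead-letter behavior described above.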
Payee Onboarding & Payout Readiness
Given a user is set as payee for a milestone, When they complete Stripe Connect onboarding or provide a valid PayPal payout destination, Then the system marks the payee as Payout Ready and allows authorization attempts. Given the payee is not Payout Ready, When a payer attempts to authorize funds, Then the system blocks hold creation and shows a blocking message indicating required onboarding steps. Then onboarding status and next actions are visible to both parties without exposing sensitive bank details. Then sandbox mode supports test onboarding accounts and prevents payouts to real destinations.
Environment Separation & Secret Management
Given the application is operating in a specific environment, Then only that environment’s API keys/endpoints are loaded from the secret store and production secrets are inaccessible from sandbox. Then secrets are stored encrypted at rest, rotated at least every 90 days, and never displayed in client logs or UI. Then cross‑environment object mapping is prevented (e.g., a production milestone cannot reference a sandbox payment intent) with validation blocking such links. Then an admin diagnostics endpoint exposes current environment, gateway readiness checks, and last‑rotated timestamps without leaking secrets.
Multi‑Currency, Taxes, and Fees Applied
Given a milestone defines currency and tax/fee rules, When an authorization is created, Then the total is calculated with correct rounding per currency minor unit and itemized taxes/fees are included. Then the gateway object uses the same currency as the milestone; processor‑side currency conversion is disabled. Then receipts and UI display standardized currency codes and properly formatted amounts per locale. Given the chosen gateway does not support the milestone currency, Then the system blocks authorization and prompts a supported currency or an alternate gateway. Then changes to tax/fee configuration do not retroactively modify existing authorizations; they apply only to new holds.
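"Correct rounding per currency minor unit" can be sketched with `Decimal`; the currency table below is an illustrative subset (ISO 4217 defines the full list of minor-unit exponents), and gateways generally expect integer minor-unit amounts:

```python
from decimal import Decimal, ROUND_HALF_UP

# Minor-unit exponents vary by currency: JPY has none, KWD has three.
MINOR_UNITS = {"USD": 2, "EUR": 2, "JPY": 0, "KWD": 3}

def to_minor_units(amount: str, currency: str) -> int:
    # Round to the currency's smallest unit, then scale to an integer.
    # Amounts arrive as strings to avoid binary-float representation error.
    exp = MINOR_UNITS[currency]
    quantum = Decimal(1).scaleb(-exp)  # e.g. 0.01 for USD, 1 for JPY
    rounded = Decimal(amount).quantize(quantum, rounding=ROUND_HALF_UP)
    return int(rounded.scaleb(exp))
```

The rounding mode is a policy choice; ROUND_HALF_UP is shown here, but banker's rounding (ROUND_HALF_EVEN) is equally common in payments code.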
Milestone Escrow Setup & Management
"As a project manager, I want to define milestones with clear amounts, deadlines, and requirements so that both sides know exactly what unlocks payment."
Description

Enable creation and management of milestone-based escrows within a release or project, defining amount, currency, deadline, success criteria, and designated recipient(s). Provide a guided setup with validation (minimum amounts, payout eligibility, timezone-aware deadlines), ability to require deposits before work starts, and visibility into funding status. Link each milestone to specific IndieVault assets/folders and enforce non-editable fields after funds are authorized while allowing safe edits via versioned amendments requiring mutual consent.

Acceptance Criteria
Guided Milestone Creation with Validation
Given I am in a Release/Project, When I click "Add Milestone Escrow", Then I must provide title, amount, currency, recipient(s), deadline (date+time+timezone), success criteria text, and at least one linked asset/folder before "Authorize" becomes enabled. And Given I enter an amount below the platform/provider minimum, When I attempt to proceed, Then I am blocked with a validation error stating the minimum allowed amount. And Given the selected currency is not supported for the recipient’s payout method, When I attempt to proceed, Then I see a validation error and cannot continue. And Given the recipient has not completed payout onboarding, When I attempt to proceed, Then I am blocked and shown a "Complete Payout Setup" action. And Given I select a deadline in the past or without a timezone, When I attempt to proceed, Then I am blocked with an error and cannot continue.
Deposit Requirement Gating and Funding Status
Given "Require deposit before work starts" is enabled with a deposit amount or percent, When the milestone is saved, Then the milestone state is "Pending Funding" and a deposit payment link is generated for the payer. Given the deposit is not fully funded, When the recipient views the milestone, Then they see "Awaiting Deposit" and cannot mark work "In Progress" or "Delivered". Given the deposit is fully funded and held in escrow, When payer and recipient view the milestone, Then status reads "Funded (Deposit)" with funded amount and remaining amount to be funded, and the work state can be set to "In Progress". Given the deposit payment fails or is canceled, When the payer returns to the milestone, Then status remains "Pending Funding" and no funds are held.
Asset/Folder Linking and Hash-Bound Deliverables
Given at least one IndieVault asset or folder is linked, When I authorize funds, Then the system snapshots selected items (IDs and content hashes) and displays a "Bound Deliverables" summary. Given no asset/folder is linked, When I attempt to authorize funds, Then I am blocked with an error to link at least one asset/folder. Given a linked asset changes after authorization, When I attempt to approve the milestone, Then the system detects a hash mismatch and blocks approval until a versioned amendment updates the snapshot. Given the bound assets match the stored hashes, When the payer approves delivery, Then the deliverable is verified and eligible for auto-release per rules.
Post-Authorization Immutability and Versioned Amendments
Given funds are authorized, When a user attempts to edit amount, currency, recipient(s), deadline, or bound assets, Then the UI requires creating an Amendment vN with change summary and proposed new values; original fields remain read-only. Given an amendment is created, When both payer and recipient accept in-app, Then the amendment becomes active, the milestone updates to the new version, and an audit log records timestamp, actors, and diffs. Given either party rejects or times out on the amendment, When the response window closes, Then the original milestone terms remain in force and fields stay read-only. Given no funds are authorized, When a user edits any field, Then edits save immediately without invoking the amendment flow.
Timezone-Aware Deadline and Countdown Visibility
Given a deadline with timezone is set, When any user views the milestone from any locale, Then the deadline is shown in their local time with the original timezone indicated, and the countdown matches the same UTC instant. Given the deadline is attempted in the past, When the user tries to save, Then the save is blocked with a clear error. Given the countdown reaches zero, When the deadline instant occurs, Then the milestone transitions to "Deadline Reached" without requiring a page refresh. Given DST transitions occur before the deadline, When viewing countdown, Then no drift or jump occurs; the countdown remains accurate to the UTC instant.
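The drift-free behavior above follows from storing one UTC instant: every locale's rendering and countdown derive from it, so DST shifts cannot move the deadline. A sketch (function names are illustrative):

```python
from datetime import datetime, timezone

def deadline_utc(local_iso: str) -> datetime:
    # A deadline entered with its UTC offset pins exactly one instant;
    # a naive (offset-less) deadline is rejected, per the validation rule.
    dt = datetime.fromisoformat(local_iso)
    if dt.tzinfo is None:
        raise ValueError("deadline must carry a timezone")
    return dt.astimezone(timezone.utc)

def seconds_remaining(deadline: datetime, now: datetime) -> float:
    # Countdown over aware datetimes: a pure UTC-instant subtraction.
    return (deadline - now).total_seconds()

d = deadline_utc("2025-06-01T18:00:00-04:00")  # 18:00 EDT == 22:00 UTC
```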
Funding Status Panel and Receipts
Given funds are authorized or deposited, When payer or recipient views the milestone, Then a Funding Status panel shows Held Amount, Currency, Funding Type (Deposit/Full), Provider (Stripe/PayPal), Authorization ID, and itemized fees with sensitive data redacted. Given a funding event completes (deposit funded, full funding, partial/full refund), When the event settles, Then the Funding Status updates within 5 seconds and an itemized PDF receipt is available for download. Given no funds are authorized, When viewing the milestone, Then Funding Status shows "Not Funded" with a call-to-action for the payer to fund.
Multi-Recipient Splits and Payout Eligibility
Given multiple recipients are designated, When configuring splits, Then the sum of percentages equals 100.00% (±0.01%) or fixed amounts sum exactly to the milestone total; otherwise a validation error is shown and authorization is disabled. Given any designated recipient is ineligible (no payout account, unsupported currency/country), When attempting to authorize funds, Then authorization is blocked and an inline list identifies recipients needing action. Given all recipients are eligible and splits are valid, When authorization completes, Then the escrow holds the total and records per-recipient allocations for later disbursement and receipts.
Hash-Bound Deliverable Verification
"As a payer, I want payment release tied to the exact hashed deliverable so that I’m protected from last-minute swaps or tampering."
Description

Bind each payable milestone to a cryptographic hash (e.g., SHA-256) of the exact deliverable package generated by IndieVault (e.g., release-ready folder/ZIP). Store the hash immutably with the milestone and verify on approval to ensure the artifact released for payment exactly matches the submitted deliverable. Provide deterministic packaging rules, hash display/clipboard, and tamper alerts if files change post-submission. Integrate with watermarkable, expiring review links to ensure the hashed package is the one distributed for review.

Acceptance Criteria
Deterministic Packaging and Stable SHA-256
Given a release-ready folder is submitted, When IndieVault generates the deliverable package as ZIP using deterministic rules (UTF-8 names, lexicographic path order, normalized UTC timestamps, fixed compression settings, excluded OS artifacts), Then two runs on different machines (Windows/macOS/Linux) produce byte-identical ZIPs and the same SHA-256 hash. Given only file modified times or filesystem ordering differ, When packaging, Then the computed SHA-256 remains unchanged. Given any file content changes by at least one byte or a file is added/removed, When packaging, Then the computed SHA-256 changes.
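The deterministic rules above (sorted paths, normalized timestamps, fixed compression) can be sketched with `zipfile` by pinning everything that would otherwise vary per machine. Note one hedge: identical bytes across platforms also assume the same zlib build; treat this as illustrative, not a portable reference implementation:

```python
import hashlib
import io
import zipfile

FIXED_TS = (1980, 1, 1, 0, 0, 0)  # normalize mtimes out of the archive

def deterministic_zip(files: dict[str, bytes]) -> bytes:
    # Sort entries lexicographically and pin per-entry metadata so the
    # archive bytes depend only on names and contents, not the build host.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(files):
            info = zipfile.ZipInfo(name, date_time=FIXED_TS)
            info.compress_type = zipfile.ZIP_DEFLATED
            info.create_system = 3  # pin, so the host OS doesn't leak in
            zf.writestr(info, files[name])
    return buf.getvalue()

def package_hash(files: dict[str, bytes]) -> str:
    return hashlib.sha256(deterministic_zip(files)).hexdigest()

files = {"b/stem.wav": b"x" * 100, "a/cover.png": b"y" * 100}
```

Enumeration order and filesystem mtimes never enter the output, which is exactly why rehashing the same content yields the same SHA-256.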
Immutable Hash Storage and Audit Trail per Milestone
Given a milestone submission completes, When the package hash is computed, Then it is persisted immutably with the milestone and visible in the audit log with timestamp and actor. Given a user or API attempts to modify the stored hash, When the request is processed, Then the system rejects the change and records the attempt in the audit log. Given a user needs to replace the deliverable, When they submit a new package, Then a new hash is computed and versioned, preserving the prior hash in history.
Hash Display and Copy-to-Clipboard
Given a submitted milestone is viewed, When the page loads, Then the UI displays the 64-character lowercase hex SHA-256 and a copy-to-clipboard control. Given the copy control is used, When the clipboard is inspected by automated tests, Then the value exactly matches the displayed hash within 1 second. Given the public API is queried for the milestone, When the response is validated, Then it includes fields: hash (lowercase hex) and hashAlgorithm = "sha256".
Approval-Time Hash Verification Gate for Escrow Release
Given an approver initiates Approve Milestone, When the system recomputes the hash of the stored package, Then approval succeeds only if it exactly matches the stored hash. Given hashes match, When approval completes, Then the escrow transitions to Approved and the itemized receipt includes the hash value. Given hashes do not match, When approval is attempted, Then the action is blocked, no funds are released, and an error displays expected vs actual hashes with guidance to resubmit.
Tamper Detection and Alerts Post-Submission
Given a package has been submitted, When any underlying deliverable file or the stored package binary changes resulting in a different recomputed hash, Then the milestone is flagged Tampered, approve actions are disabled, and alerts are sent to payer and payee. Given a Tampered state, When the user resubmits the deliverable, Then a new hash is computed, the Tampered flag clears, and normal approval flow resumes. Given no content changes occur, When periodic integrity checks run, Then no Tampered flag is raised.
Review Links Bound to Hashed Package
Given a review link is generated for a submitted milestone, When a recipient downloads the package, Then the bytes served correspond exactly to the stored hash for that milestone and the link metadata shows a short hash (first 8 chars). Given the deliverable is replaced producing a new hash, When existing review links are used, Then they are invalidated or serve only the original hashed package with a notice, and new links must be issued for the new hash. Given a review link expires, When a recipient attempts access after expiry, Then download is denied and no alternate package is served.
Approval, Deadline & Auto-Release Workflow
"As a collaborator awaiting payment, I want an approval and deadline workflow with auto-release so that I’m paid on time even if the client becomes unresponsive."
Description

Implement a milestone state machine (Funded → Delivered → Under Review → Approved/Auto-Released → Paid or Refunded) with clear transitions based on approvals, deadlines, and rule checks. If approved before the deadline, capture/release funds and issue itemized receipts; if the deadline passes with a verified deliverable and no dispute, auto-release per rules; otherwise trigger refunds or extensions. Generate receipts/invoices with line items, fees, and taxes; update live status, trigger webhooks/notifications, and maintain full audit events.
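The state machine above can be sketched as a transition table. This is an illustrative sketch only — the state names come from the description, but the guard table and function names (`State`, `transition`) are assumptions, not the shipped implementation:

```python
from enum import Enum, auto

class State(Enum):
    FUNDED = auto()
    DELIVERED = auto()
    UNDER_REVIEW = auto()
    APPROVED = auto()   # covers both manual approval and auto-release
    PAID = auto()
    REFUNDED = auto()

# Allowed transitions per the description: Funded -> Delivered -> Under Review
# -> Approved/Auto-Released -> Paid, with Refunded reachable on failure paths.
TRANSITIONS = {
    State.FUNDED: {State.DELIVERED, State.REFUNDED},
    State.DELIVERED: {State.UNDER_REVIEW},
    State.UNDER_REVIEW: {State.APPROVED, State.REFUNDED},
    State.APPROVED: {State.PAID},
    State.PAID: set(),
    State.REFUNDED: set(),
}

def transition(current: State, target: State) -> State:
    """Return the new state, or raise if the transition is not allowed."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```

Encoding the legal transitions as data makes the audit and webhook hooks easy to attach at a single choke point: every committed transition passes through `transition`.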

Acceptance Criteria
Deliverable Submission and Hash Verification
Given a milestone is Funded with a committed deliverable hash When the creator submits a deliverable file or package Then the system computes the submitted artifact hash and compares it to the committed hash And if the hashes match, the milestone state transitions to Delivered and immediately to Under Review with a review countdown started And if the hashes do not match, the submission is rejected, the milestone remains Funded, and the error reason is shown to the creator And an audit event is recorded with actor, timestamps, prior/new states, submitted hash, and result And a webhook (milestone.delivered or milestone.delivery_failed) and notifications are emitted to relevant parties
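The hash comparison gating the Funded → Delivered transition can be sketched in a few lines (the function name `verify_submission` is hypothetical; the criteria only specify lowercase-hex SHA-256 equality):

```python
import hashlib

def verify_submission(committed_hash: str, artifact: bytes) -> bool:
    """Recompute SHA-256 of the submitted artifact and compare it to the
    committed hash. Hashes are compared as lowercase hex, per the spec."""
    return hashlib.sha256(artifact).hexdigest() == committed_hash.lower()
```

On a match the milestone would proceed to Delivered/Under Review; on a mismatch the submission is rejected and the milestone stays Funded.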
Manual Approval Prior to Deadline
Given a milestone is Under Review with a verified deliverable and the deadline is in the future When the payer clicks Approve Then the milestone state transitions to Approved and then Paid And the system captures/releases funds via the configured provider (Stripe or PayPal) and stores the provider transaction ID And itemized receipt and invoice are generated and delivered to both parties And live status, webhooks (milestone.approved, milestone.paid), and notifications are updated/emitted And an immutable audit record is stored including actor, timestamps, prior/new states, deliverable hash, and provider transaction ID
Auto-Release at Deadline with Verified Deliverable and No Dispute
Given a milestone is Under Review with a verified deliverable, auto-release rules enabled, and no open dispute And the milestone deadline timestamp is reached When the deadline job executes Then the milestone transitions to Approved/Auto-Released and then Paid per rules And funds are captured/released via the configured provider and the provider transaction ID is stored And receipts/invoices are generated and delivered; webhooks (milestone.auto_released, milestone.paid) and notifications are emitted And an audit trail entry records the auto-release trigger, rule snapshot, timestamps, and resulting state
Refund on Missed Deadline or Failed Verification
Given a milestone is Funded When either (a) the deadline passes with no deliverable submitted, (b) a submitted deliverable fails hash verification, or (c) a dispute is upheld per rules Then the milestone transitions to Refunded And the system initiates a refund via the payment provider and records the provider refund transaction ID And a refund receipt/credit note with itemized amounts, fees, and taxes is generated and delivered to both parties And live status, webhooks (milestone.refunded), and notifications are updated/emitted And an audit record captures the refund reason, rule checks, timestamps, and financial references
Deadline Extension Before Expiry
Given a milestone is Funded or Under Review and the deadline is in the future And an authorized actor requests a deadline extension with a new timestamp and rationale When the extension is approved per configured rules (e.g., both parties or designated approver) Then the milestone deadline is updated, the countdown reflects the new deadline, and auto-release timing adjusts accordingly And the state remains otherwise unchanged, and an audit record of the extension (old/new deadlines, approvers, rationale) is stored And webhooks (milestone.deadline_extended) and notifications are emitted
Receipts and Invoices Generation on Payout or Refund
Given a milestone transitions to Paid or Refunded and the provider confirms the financial event When the system generates financial documents Then a receipt/invoice (for Paid) or refund receipt/credit note (for Refunded) is created with line items: milestone amount, platform fee, payment processor fee, taxes, and net amounts And documents include unique document ID, currency, timestamps, payer/payee legal names, tax IDs (if provided), milestone ID, and provider transaction IDs And documents are stored, downloadable from the milestone, and emailed to both parties And totals reconcile with provider amounts within currency rounding rules
Live Status, Webhooks, Notifications, and Audit on Transitions
Given any milestone state transition occurs within the state machine (Funded, Delivered, Under Review, Approved/Auto-Released, Paid, Refunded) When the transition is committed Then the UI status and countdown reflect the new state And idempotent webhooks for the transition event are emitted with retry on failure and unique event IDs And in-app and email notifications are sent according to user preferences And an immutable audit log entry captures actor (or system), timestamps, prior/new states, rule checks, and related financial/deliverable references
Refund & Dispute Rules Engine
"As a payer, I want predictable refund and dispute rules so that my funds are protected if the deliverable doesn’t meet the agreed requirements."
Description

Provide configurable, transparent refund and dispute policies per milestone: conditions (e.g., no delivery by deadline, hash mismatch, unmet checklist), partial refund percentages, mediation windows, and escalation paths. Support evidence submission, timeboxed decisions, partial releases, and ledger reconciliation. Integrate with Stripe/PayPal dispute APIs where applicable, and ensure outcomes propagate to payment captures/refunds, notifications, and receipts while preserving a clear audit trail.

Acceptance Criteria
Auto-Refund on Missed Delivery Deadline
Given a funded milestone with a rule "No delivery by deadline => 100% refund" And no approved deliverable is linked by the milestone deadline (project timezone) When the deadline passes Then the system initiates a full refund via the configured processor within 10 minutes And marks the milestone as Refunded with reason_code=NO_DELIVERY And sends email and in-app notifications to both parties including refund amount, reason, and receipt link And generates an itemized receipt attached to the milestone And writes an immutable audit log entry with timestamp, actor=system, idempotency_key, and payment_reference And retries the refund up to 3 times on transient errors with exponential backoff and surfaces failures in the UI
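The retry behavior above (up to 3 retries on transient errors, exponential backoff, surfacing the final failure) can be sketched as follows. `TransientError`, `refund_with_retry`, and the injectable `sleep` are illustrative names, not the product's API:

```python
import time

class TransientError(Exception):
    """Stand-in for a retryable processor error (timeout, 5xx)."""

def refund_with_retry(issue_refund, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Call issue_refund(), retrying up to max_retries times on TransientError
    with exponential backoff: base_delay, 2*base_delay, 4*base_delay, ..."""
    for attempt in range(max_retries + 1):
        try:
            return issue_refund()
        except TransientError:
            if attempt == max_retries:
                raise  # exhausted retries; surface the failure to the UI
            sleep(base_delay * (2 ** attempt))
```

Injecting `sleep` keeps the backoff testable; a production version would also record each attempt in the audit log with the idempotency_key.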
Hash Mismatch Triggers Dispute Hold
Given a milestone with an expected content hash recorded at funding time When a deliverable is submitted and its computed hash does not match the expected hash Then auto-release is blocked and the milestone status changes to Dispute.Hold with reason_code=HASH_MISMATCH And both parties are notified with mismatch details (expected vs actual hash, filename, timestamp) And the mediation window timer (configured per milestone) starts And evidence submission is enabled for both parties until the window expires And funds remain held in escrow; no partial release occurs until a decision is recorded
Partial Refund Based on Unmet Checklist
Given a milestone with a checklist where each item has a weight summing to 100% And the deadline passes with one or more items marked unmet When the rules specify partial refund based on unmet weight Then the system calculates refund_percent = sum(weight of unmet items) And issues a partial refund equal to refund_percent of the held amount within 10 minutes And releases the remaining amount to the recipient And ledger entries reflect debit=refund_amount, credit=release_amount, and net=0 against escrow liability And the receipt itemizes unmet items, weights, and calculation And an audit log entry records the calculation inputs and outputs
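The refund calculation above (refund_percent = sum of unmet item weights, applied to the held amount) can be sketched with `Decimal` arithmetic; the function name and checklist shape are assumptions for illustration:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def partial_refund(held_amount, checklist):
    """checklist maps item name -> (weight_percent, met). Weights sum to 100.
    Returns (refund_amount, release_amount); refund + release == held_amount."""
    refund_percent = sum(w for w, met in checklist.values() if not met)
    refund = (held_amount * refund_percent / Decimal(100)).quantize(
        Decimal("0.01"), ROUND_HALF_EVEN)
    return refund, held_amount - refund
```

Deriving the release amount by subtraction (rather than computing it independently) guarantees the ledger nets to zero against the escrow liability.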
Mediation Decision and Timeboxed Resolution
Given a milestone in Dispute with a configured mediation window (e.g., 5 days) And both parties can submit evidence files (pdf, jpg, png, mp3, wav) up to 25 MB each and text statements up to 2000 characters When a mediator records a decision before the window expires selecting one of: full refund, partial refund X%, or full release Then the system applies the outcome immediately to payments and ledger And closes the dispute with final status and reason_code And notifies both parties with the decision, amounts, and next steps And locks further edits to evidence while preserving read access And if no decision is recorded by expiry, the system auto-applies the configured default rule and records auto_decision=true
External Dispute Sync with Stripe/PayPal
Given a charge or capture linked to a milestone enters dispute/chargeback at the PSP (Stripe or PayPal) When a PSP webhook event is received Then the milestone dispute state is synchronized within 2 minutes with PSP dispute_id, status, amount, and reason And internal timers pause while the PSP dispute is open And evidence uploaded in IndieVault can be submitted to the PSP via API where supported And when the PSP resolves the dispute, the outcome (won/lost, amounts) is propagated to captures/refunds and ledger And idempotency is enforced on webhook processing, with safe retries on duplicates And all sync actions are logged in the audit trail with request/response IDs (sensitive data redacted)
Audit Trail, Receipts, and Notifications Integrity
Given any refund, partial release, dispute open/close, or mediation action occurs When the action is executed Then an audit record is appended with immutable fields: action_type, actor, timestamp (UTC), amounts, currency, reason_code, before_state, after_state, idempotency_key, payment_reference And a receipt PDF is generated with itemized amounts and rule references and is accessible to both parties And in-app and email notifications are sent within 2 minutes including milestone name, amount, outcome, and receipt link And the audit trail can be exported as CSV or JSON with pagination and filters by date range, action_type, and milestone_id
Recipient Countdown, Checklist & Notifications
"As a recipient, I want a clear countdown and checklist so that I know exactly what’s needed and by when to unlock my payment."
Description

Expose a recipient-facing panel showing the escrow amount, live countdown to deadline, and a checklist of requirements to unlock payment (e.g., deliverable uploaded, hash verified, metadata complete). Integrate per-recipient analytics from review links (opens/downloads) and send configurable in-app/email notifications for fund authorization, delivery received, approval needed, impending deadlines, auto-release, and refunds. Ensure accessibility, localization readiness, and mobile-friendly layouts.

Acceptance Criteria
Recipient Panel Shows Escrow Amount and Live Countdown
Given an authenticated recipient opens the milestone panel with backend amount, currency, and ISO 8601 deadline When the panel renders Then the escrow amount displays with the correct currency symbol and locale formatting And the countdown shows D:H:M:S remaining in the recipient’s timezone And the countdown ticks at least once per second without a page reload And the displayed deadline and amount match backend values (time drift ≤ 1s) And once the deadline passes, the countdown switches to an “Expired” state and shows time since expiry And a page refresh restores the correct countdown state and values
Checklist Status and Payment Unlock Conditions
Given required checklist items include Deliverable Uploaded, Hash Verified (SHA-256 equals expected), and Metadata Complete When the recipient uploads a deliverable file Then the system computes the SHA-256 hash and compares to the expected value And if the hash matches, “Hash Verified” is marked complete with timestamp; if not, an error shows and the item remains incomplete And required metadata fields (Title, ISRC, Credits) must be filled and pass validation And each checklist item displays its status (Incomplete/Complete) and last updated time And the “Unlock Payment” action remains disabled until all required items are complete And when all required items are complete, the action enables and shows a confirmation summary
Per-Recipient Review Link Analytics in Recipient Panel
Given the recipient has a unique review link token associated with the milestone When the recipient opens or downloads via their review link Then the panel displays total opens, total downloads, and last activity time for that recipient only And metrics update in-panel within 60 seconds of the action without full page reload And after the review link expires, further opens/downloads are not counted and the panel indicates “Link expired” And analytics for other recipients are not visible or aggregated into this recipient’s metrics
Configurable Notifications for Escrow Events
Given notification preferences exist per recipient for event types (Funds Authorized, Delivery Received, Approval Needed, Impending Deadline, Auto-Release, Refund) and channels (In-App, Email) When an event occurs Then notifications are sent only for enabled event types and channels And in-app notifications appear within 10 seconds; emails are sent within 5 minutes And each notification includes milestone name, amount with currency, deadline with timezone, current status, and a deep link to the panel And duplicate notifications for the same event occurrence are not sent per channel And impending deadline notifications are scheduled at 72h, 24h, and 1h before deadline by default and respect user overrides And Auto-Release and Refund notifications include a link to an itemized receipt
Accessibility Compliance for Recipient Panel
Given the recipient panel is loaded When navigating using only the keyboard Then all interactive elements are reachable in a logical order and have a visible focus indicator And all text and interactive element contrast ratios are ≥ 4.5:1 And the countdown and checklist status changes are announced via an aria-live region (polite) no more than once per second And all form controls and icons have accessible names/labels readable by screen readers And there are no keyboard traps, and Escape closes any modal dialogs And validation errors are programmatically associated with their inputs and described to screen readers
Localization and Currency/Date Formatting Readiness
Given the application locale is set to en-US, fr-FR, es-ES, and ar-SA When the recipient panel renders Then all user-visible strings come from translation keys and display in the selected language And currency amounts format correctly per locale and currency minor units without precision loss And dates/times display in the selected locale format with the correct timezone And right-to-left locales render mirrored layouts and correctly aligned text And if a translation key is missing, English text is shown and the missing key is logged
Mobile-Friendly Layout and Interaction
Given the recipient panel is viewed on devices from 320px to 768px width When the panel renders and the user interacts via touch Then no horizontal scrolling is required and core sections (amount, countdown, checklist, CTA) are visible without layout overlap And touch targets are at least 44px in height/width with 8px spacing And the primary call-to-action remains accessible (sticky or clearly visible) without obscuring content And the panel loads with LCP ≤ 2.5s on a simulated 3G Fast network and TTI ≤ 5s And orientation changes preserve state (countdown, checklist progress) without visual glitches
Escrow Audit Log & Financial Reporting
"As an artist handling my own accounting, I want downloadable itemized receipts and a full audit log so that I can reconcile payments and prove what happened if there’s a dispute."
Description

Maintain an immutable audit log of all escrow actions (who, what, when, where/IP), including funding, deliveries, approvals, auto-releases, refunds, and disputes. Provide exportable itemized receipts and financial reports per project, recipient, and timeframe with reconciliation to Stripe/PayPal IDs. Implement role-based access, data retention policies, encryption at rest/in transit, and PII minimization to support compliance needs and simplify bookkeeping for users and internal operations.

Acceptance Criteria
Append-Only Escrow Audit Log for Core Actions
- Given an escrow is funded via Stripe or PayPal, When the payment succeeds, Then an audit entry is appended with fields: eventId, escrowId, projectId, milestoneId, action='funded', actorId or 'system', externalId, amount, currency, IP, userAgent, timestamp (UTC ISO-8601), requestId, and sequence.
- Given a delivery submission, When the deliverable is uploaded, Then an audit entry is appended with action='delivered', contentHash=SHA-256 of the submitted bundle, and storageRef.
- Given an approval, auto-release, refund-initiated/refund-completed, dispute-opened/dispute-resolved, When the action occurs, Then a corresponding audit entry is appended capturing the core fields plus reason/status where applicable.
- Given any attempt to alter or delete an existing audit entry, When executed via UI/API/DB, Then the system denies mutation and instead writes a new corrective entry with previousEventId and previousEventHash, preserving an append-only chain.
- Given entries for an escrowId, When reading the log, Then sequence is strictly increasing without gaps and previousEventHash validates the chain end-to-end.
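The append-only chain described here (strictly increasing sequence, each entry hashing its predecessor) can be sketched as follows. The canonical-JSON hashing scheme and function names are illustrative assumptions; the spec only requires that previousEventHash validate the chain end-to-end:

```python
import hashlib
import json

def _entry_hash(entry):
    """SHA-256 of the entry's canonical (sorted-key) JSON serialization."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_entry(log, entry):
    """Append an audit entry, chaining it to the previous entry's hash."""
    prev = log[-1] if log else None
    entry = dict(entry,
                 sequence=len(log) + 1,
                 previousEventHash=_entry_hash(prev) if prev else None)
    log.append(entry)
    return entry

def verify_chain(log):
    """Check the sequence is gap-free and every previousEventHash re-derives."""
    for i, entry in enumerate(log):
        if entry["sequence"] != i + 1:
            return False
        expected = _entry_hash(log[i - 1]) if i else None
        if entry["previousEventHash"] != expected:
            return False
    return True
```

Any in-place mutation of an earlier entry changes its recomputed hash, so the successor's stored previousEventHash no longer matches and verification fails — which is what makes corrective entries, rather than edits, the only safe path.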
Exportable Itemized Receipts with Payment Gateway Reconciliation
- Given a milestone releases funds, When the user downloads a receipt (PDF/CSV), Then the receipt includes: payer, payee, project, milestone, line items (gross, gateway fees, IndieVault fees, taxes, net), currency, exchange rate if applied, escrowId, externalId(s) (Stripe/PayPal), timestamps (UTC and selected timezone), and a receipt number.
- Given a receipt is generated, When comparing to the payment gateway dashboard, Then externalId(s) on the receipt exactly match the Stripe charge/paymentIntent or PayPal order/capture IDs.
- Given multiple milestones are selected, When exporting a consolidated CSV, Then totals equal the sum of all line items and rounding is deterministic to currency minor units.
- Given an export is requested, When the file is generated, Then the download link is available for at least 24 hours and expires thereafter.
Financial Reports by Project, Recipient, and Timeframe
- Given filters (projectId(s), recipientId(s), date range, status), When generating a report, Then only matching records are included and counts/totals are displayed per status (funded, released, refunded, disputed, pending).
- Given a generated report, When cross-checking against the escrow ledger, Then aggregate totals (gross, gateway fees, platform fees, taxes, net) match exactly and a reconciliation checksum is provided.
- Given a report export, When downloading CSV/XLSX, Then it contains one row per escrow/milestone with reconciliation columns (escrowId, externalId, firstEventId, lastEventId) and generates within 10 seconds for up to 10,000 rows.
- Given timezone preferences, When viewing the report, Then amounts are unaffected and timestamps render in the selected timezone with UTC shown on hover or secondary column.
Role-Based Access Control for Audit Logs and Reports
- Given user roles (Owner, Finance, Manager, Contributor, Reviewer), When accessing audit logs and reports, Then permissions apply: Owner/Finance can view/export all; Manager can view/export project-scoped; Contributor can view their own escrows; Reviewer cannot view financial amounts or PII.
- Given an unauthorized user, When requesting restricted endpoints or exports, Then the API returns 403 and no sensitive fields appear in error payloads.
- Given PII fields (IP, email), When viewed by roles without PII permission, Then values are masked or omitted in UI, API, and exports consistently.
- Given permission changes, When they are updated, Then access takes effect within 1 minute and is reflected in subsequent authorization checks.
Data Retention, Purge, and Legal Hold
- Given a workspace retention policy (e.g., 7 years) is configured, When an audit entry exceeds the retention period and is not under legal hold, Then it is purged by a scheduled job and a purge-summary entry is appended to the log with counts and ranges.
- Given a legal hold is placed on a project or escrow, When the retention period elapses, Then affected entries are not purged until the hold is lifted and the hold action is itself logged.
- Given an export request after purging, When generating reports/receipts, Then only retained entries appear and a footer notes any records withheld by retention or legal hold.
- Given a purge operation fails, When retried, Then it is idempotent and does not remove records outside the intended window.
Encryption In Transit and At Rest
- Given any client or service connection, When transmitting audit or financial data, Then TLS 1.2+ with modern ciphers and PFS is enforced and HSTS is enabled on public endpoints.
- Given data persistence (primary DB, backups, object storage, exports at rest), When stored, Then AES-256 encryption at rest via managed KMS is enforced and no plaintext copies exist.
- Given key rotation policy, When a rotation occurs (at least every 90 days), Then services continue to operate without data loss and old keys are retired per policy with audit entries for rotation events.
PII Minimization and Redaction in Audit and Exports
- Given the audit schema, When recording events, Then only necessary PII is stored (userId, IP, masked email) and no full payment card data or unnecessary identifiers are persisted.
- Given default exports and receipts, When generated, Then PII fields are minimized/masked by default and an "Include PII" option is available only to Owner/Finance roles and defaults to off.
- Given a data subject access/export request, When fulfilled from this module, Then only the data held here is included and the request is logged with eventId and timestamp.

SplitSync Payouts

Automatically route payouts by roles and splits pulled from credits and Split Resolver. Supports percentages, fixed fees, and bonuses with Quorum Rules gating release. Eliminates spreadsheets while producing per-recipient breakdowns and preventing over/under-payment across versions.

Requirements

Split Import & Role Mapping
"As an indie manager, I want to import contributor splits and roles directly from existing credits so that I can eliminate manual re-entry and ensure payout data matches the source of truth."
Description

Ingest splits from IndieVault Credits and the Split Resolver, automatically mapping contributors to roles (e.g., primary artist, featured artist, producer, mixer, session musician, label) and associating them to releases, tracks, and stems. Provide templates and defaults per team to reduce setup time, with validation for missing/duplicate recipients, sum-of-splits checks, and role-specific constraints. Maintain version history and change logs, supporting territory or release-scope differences and effective dates. Surface conflicts and suggest resolutions, ensuring imported data is normalized for downstream calculation and approvals.

Acceptance Criteria
Ingest Splits from Credits and Split Resolver into a Release
Given a release with associated tracks and stems and available Credits entries and a valid Split Resolver reference When the user initiates Split Import for the release Then the system fetches the latest split data from both sources successfully And compiles a candidate import set tagged by source and timestamp And includes only items scoped to the selected release, its tracks, and stems And displays an import summary showing total recipients, roles covered, and items per scope And records any source fetch errors with actionable messages and does not corrupt existing saved data And the operation completes within 10 seconds for up to 200 split lines
Auto-Map Contributors to Standard Roles Across Release, Tracks, and Stems
Given imported lines with free-text roles and contributor identifiers (email, name, or external ID) When role mapping runs Then each line is mapped to the platform's standard role taxonomy (primary artist, featured artist, producer, mixer, session musician, label) or flagged for manual selection And each contributor is associated to the correct scope (release, track, or stem) based on source metadata or user selection And unmapped roles are presented with suggestions and confidence scores and cannot be saved until resolved And a per-line preview shows resulting contributor-role-scope linkage
Validate Sum of Splits and Role-Specific Constraints on Import
Given a candidate import set containing percentage, fixed-fee, and bonus split lines When validation runs Then for each scope-territory-effective-date group, percentage lines must sum to 100.00% within a tolerance of 0.01% And fixed-fee and bonus lines are excluded from the percentage sum check And the system enforces configured role constraints (required roles, uniqueness, and maximum counts) for the group And violations are listed with precise locations and must be resolved before save
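The sum-of-splits check above (percentage lines must total 100.00% within 0.01%; fixed-fee and bonus lines excluded) can be sketched directly. The line tuple shape is an assumption for illustration:

```python
from decimal import Decimal

def validate_percentages(lines):
    """lines: iterable of (payout_type, value) pairs for one
    scope-territory-effective-date group. Only 'percentage' lines count
    toward the 100% check; fixed fees and bonuses are excluded."""
    total = sum(Decimal(value) for ptype, value in lines if ptype == "percentage")
    return abs(total - Decimal("100.00")) <= Decimal("0.01")
```

Using `Decimal` rather than floats keeps the 0.01% tolerance exact, so a group like 33.33 / 33.33 / 33.34 validates while 60 / 39.5 does not.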
Detect Missing or Duplicate Recipients and Provide Fixes
Given imported lines with potential duplicate recipients across sources When recipient normalization runs Then duplicates are detected using configured match keys (email, external IDs, exact/normalized name) And a merge suggestion is offered to map duplicates to a single contributor record And missing required recipient fields (e.g., payout email or payee ID) are flagged with inline fixes And the dataset cannot be saved until duplicates are merged or explicitly marked as distinct with justification
Apply Team Templates and Defaults During Import
Given a team with an active split template defining default roles, scopes, and split values When importing splits without complete data Then template defaults auto-fill missing roles, scopes, and split values according to template rules And any auto-filled values are visibly marked and can be overridden by an authorized user And all overrides are captured in the change log with before/after, actor, and timestamp And saving applies the current template version only and does not alter previously saved versions
Maintain Version History with Effective Dates and Territory-Specific Variations
Given an existing saved split set for a release When a new import is confirmed Then a new version entry is created with author, timestamp, diff summary, and optional notes And each line can store effective start and end dates and territory codes (ISO 3166-1 alpha-2 or alpha-3) And querying splits with an as-of date and territory returns the correct effective version for each scope And overlapping effective-date ranges for the same scope and territory are blocked until resolved
Surface Conflicts and Normalize Data for Downstream Payout Calculation
Given conflicting inputs across sources or versions When conflict detection runs Then the system identifies conflicts including overlapping effective dates, role collisions, and percentage-sum mismatches And provides at least one machine-suggested resolution per conflict (e.g., keep newest by timestamp, redistribute percentages proportionally, choose one role) And requires the user to accept a resolution or manually resolve before approval And the approved dataset conforms to the canonical schema (contributorId, roleId, scope, payoutType, value, territory, effectiveStart, effectiveEnd) and passes schema validation without errors
Flexible Split Calculation Engine
"As an artist, I want payouts calculated from various split types and deductions so that everyone is paid fairly and consistently regardless of deal structure."
Description

Calculate payouts using multiple models: percentage-based shares, fixed fees (per track/release), and conditional bonuses (e.g., milestone or date-based). Support off-the-top deductions (platform fees, distribution costs), per-recipient recoupable advances with waterfall order, caps/floors, and rounding rules. Handle multi-currency revenues with FX conversion at payout time and configurable rates. Allow time-bounded rules, revenue-source filters, and exceptions (e.g., exclude non-recoupable items). Provide deterministic, auditable calculations that can be re-run consistently for a given period and version.

Acceptance Criteria
Percentage Splits with Off-the-Top Deductions and Rounding
Given gross revenue = 1000.00 USD for release R in period P And off-the-top deductions configured as: 10% platform fee on gross, 50.00 USD distribution cost fixed And distributable base = gross - platform fee - distribution cost And recipients A=60%, B=40% And rounding mode = Bankers (round half to even) to 2 decimals in USD When the engine calculates payouts for period P, version V1 Then platform fee = 100.00 USD and distribution cost = 50.00 USD And distributable amount = 850.00 USD And A payout = 510.00 USD and B payout = 340.00 USD And A+B = distributable amount exactly with no residual And audit log records inputs, deductions, rates, rounding mode, timestamps, and a deterministic checksum
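The worked example above can be reproduced with a short calculation sketch — banker's rounding (round half to even) to 2 decimals, with any rounding residual assigned to the largest share so payouts reconcile exactly to the distributable amount. Function name and residual policy are illustrative assumptions:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def split_payouts(gross, platform_fee_pct, fixed_costs, shares):
    """shares: recipient -> percentage (Decimal). Applies off-the-top
    deductions, then splits the base with ROUND_HALF_EVEN to 2 decimals,
    assigning any rounding residual to the largest share."""
    cents = Decimal("0.01")
    fee = (gross * platform_fee_pct / 100).quantize(cents, ROUND_HALF_EVEN)
    base = gross - fee - fixed_costs
    payouts = {r: (base * pct / 100).quantize(cents, ROUND_HALF_EVEN)
               for r, pct in shares.items()}
    residual = base - sum(payouts.values())
    if residual:
        payouts[max(shares, key=shares.get)] += residual
    return fee, base, payouts
```

With gross = 1000.00, a 10% platform fee, and 50.00 fixed costs, the base is 850.00 and a 60/40 split yields exactly 510.00 and 340.00.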
Fixed Fees Per Track with Revenue-Source Filter
Given total revenue = 500.00 USD from source = "Bandcamp" for a 3-track release in period P And a fixed fee of 25.00 USD per track payable to Engineer E applies only to source "Bandcamp" And participants A=50%, B=50% on distributable remainder When the engine calculates payouts for period P, version V1 Then fixed fees total = 75.00 USD paid to Engineer E off-the-top And distributable remainder = 425.00 USD And A payout = 212.50 USD and B payout = 212.50 USD And if Engineer E is also A or B, they receive both the fixed fee and their percentage share And audit log itemizes fixed fees by track and source
Conditional Bonus Triggered by Milestone Threshold
Given revenue from source "Spotify" for period P = 2100.00 USD And a conditional bonus rule: pay 150.00 USD to Marketer M if "Spotify" revenue in period >= 2000.00 USD When the engine calculates payouts for period P, version V1 Then bonus is triggered and M receives 150.00 USD off-the-top after deductions but before percentage splits And if "Spotify" revenue were < 2000.00 USD, bonus = 0.00 USD And audit log includes rule ID, evaluation inputs, and trigger evaluation result = true
Per-Recipient Recoupment Waterfall with Non-Recoupable Exclusions
Given Artist A has an outstanding recoupable advance balance = 1000.00 USD And non-recoupable items labeled "PromoGrant" must be excluded from recoupment And current period P distributable amount for A's share before recoupment = 600.00 USD derived from eligible sources only When the engine calculates payouts for period P, version V1 Then 600.00 USD is applied to A's advance, reducing balance to 400.00 USD And Artist A cash payout = 0.00 USD for period P And other recipients are paid their shares unaffected And audit log records pre- and post-recoup balances and excludes "PromoGrant" amounts from the recoup base
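The recoupment step above reduces to applying the recipient's eligible share against the advance before any cash is paid out. A minimal sketch (the caller is assumed to have already excluded non-recoupable items such as "PromoGrant" from the share amount):

```python
def apply_recoupment(share_amount, advance_balance):
    """Apply a recipient's eligible share against their outstanding advance.
    Returns (cash_payout, new_advance_balance). Non-recoupable items must
    already be excluded from share_amount by the caller."""
    recouped = min(share_amount, advance_balance)
    return share_amount - recouped, advance_balance - recouped
```

So a 600.00 eligible share against a 1000.00 advance yields zero cash and a 400.00 remaining balance; once the advance is cleared, the surplus pays out as cash.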
Caps and Floors with Pro-Rata Redistribution
Given distributable amount after fees for period P = 1000.00 USD And recipients: Mixer X=40%, Producer Y=60% And cap for Mixer X = 300.00 USD per period, floor for Producer Y = 500.00 USD per period And redistribution policy for over-cap amounts = pro-rata to uncapped recipients in same calculation And rounding = 2 decimals, Bankers When the engine calculates payouts for period P, version V1 Then initial calculated amounts are X=400.00 USD, Y=600.00 USD And cap reduces X to 300.00 USD, creating a 100.00 USD over-cap pool And the over-cap pool is allocated to Y (the only uncapped recipient), increasing Y to 700.00 USD And Y meets floor >= 500.00 USD And X+Y = 1000.00 USD exactly
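The cap-and-redistribute step above can be sketched as a single pass: clip each capped recipient, pool the excess, and allocate the pool pro-rata to uncapped recipients by their original amounts. This is an illustrative sketch — a production version would also quantize the pro-rata allocations and reconcile any rounding residual:

```python
from decimal import Decimal

def apply_caps(amounts, caps):
    """amounts: recipient -> calculated payout (Decimal).
    caps: recipient -> per-period cap (recipients absent from caps are uncapped).
    Over-cap amounts are pooled and redistributed pro-rata to uncapped
    recipients by their original amounts (single pass, per the example)."""
    out = dict(amounts)
    pool = Decimal(0)
    for recipient, amount in amounts.items():
        cap = caps.get(recipient)
        if cap is not None and amount > cap:
            out[recipient] = cap
            pool += amount - cap
    uncapped = {r: a for r, a in amounts.items() if caps.get(r) is None}
    total = sum(uncapped.values())
    for recipient, amount in uncapped.items():
        out[recipient] += (pool * amount / total) if total else Decimal(0)
    return out
```

For the example values (X = 400.00 capped at 300.00, Y = 600.00 uncapped), the 100.00 pool flows entirely to Y, and the totals still sum to 1000.00.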
FX Conversion at Payout Time with Configurable Rate Source
Given revenue = 1000.00 EUR in period P and payout currency = USD And FX rate source = ECB and rate timestamp = 2025-08-15T12:00:00Z with EUR->USD = 1.10 And rounding rules: USD 2 decimals, Bankers When the engine calculates payouts for period P, version V1 on 2025-08-15 Then converted distributable = 1100.00 USD before splits And audit log records source=ECB, rate=1.10, timestamp=2025-08-15T12:00:00Z And re-running the calculation for period P, version V1 with the same configuration produces identical outputs and checksum
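The determinism requirement above is commonly met by snapshotting the FX rate and checksumming a canonical serialization of inputs and outputs, so a re-run with identical configuration provably matches. A sketch under those assumptions:

```python
import hashlib
import json
from decimal import Decimal, ROUND_HALF_EVEN

def convert_and_checksum(amount, rate, source, timestamp):
    """Converts at the snapshotted rate and hashes the full record so a
    re-run with the same configuration yields an identical checksum."""
    converted = (amount * rate).quantize(Decimal("0.01"), ROUND_HALF_EVEN)
    record = {"amount": str(amount), "rate": str(rate), "source": source,
              "timestamp": timestamp, "converted": str(converted)}
    checksum = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return converted, checksum
```

Sorting the JSON keys is what makes the serialization canonical; any field affecting the output (rate source, timestamp) belongs in the hashed record.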
Time-Bounded Split Rule Change Mid-Period
Given a split rule for Artist A = 40% effective until 2025-07-31 and 35% effective from 2025-08-01 And period P spans 2025-07-15 to 2025-08-15 with transaction-level revenues dated accordingly And total gross revenue in period P = 2000.00 USD, with 800.00 USD dated on/before 2025-07-31 and 1200.00 USD dated on/after 2025-08-01 And no off-the-top deductions for simplicity When the engine calculates payouts for period P, version V1 Then A receives 40% of 800.00 USD = 320.00 USD plus 35% of 1200.00 USD = 420.00 USD, total = 740.00 USD And the remainder 1260.00 USD is allocated to other recipients per their rules And audit log shows rule versions applied with effective dates per transaction
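Transaction-dated rule selection can be sketched as picking, per transaction, the latest rule version effective on or before that transaction's date (a simplifying sketch; it assumes some rule covers every transaction date):

```python
from datetime import date
from decimal import Decimal

def split_for_period(transactions, rule_changes):
    """transactions: [(date, amount)]; rule_changes: [(effective_from, pct)]
    sorted ascending by effective date. Applies the rule version in force
    on each transaction's date and sums the recipient's share."""
    total = Decimal("0.00")
    for tx_date, amount in transactions:
        pct = next(p for eff, p in reversed(rule_changes) if tx_date >= eff)
        total += amount * pct / 100
    return total

txs = [(date(2025, 7, 20), Decimal("800.00")),
       (date(2025, 8, 5), Decimal("1200.00"))]
rules = [(date(2025, 1, 1), Decimal(40)), (date(2025, 8, 1), Decimal(35))]
```

For the scenario's dates and amounts this reproduces Artist A's 320.00 + 420.00 = 740.00; the audit log would additionally record which rule version matched each transaction.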
Quorum Rules & Approvals
"As a label rep, I want payouts to wait until required stakeholders approve so that we prevent disputes and unauthorized disbursements."
Description

Define approval quorums that gate payout initiation based on roles or named approvers (e.g., any 2 of 3 producers, all primary artists). Support thresholds, per-release or per-batch rules, expiry windows, reminders, and escalation paths. Present a clear approval timeline with comments and change context, and capture an immutable audit trail. Block disbursements until quorum conditions are met and automatically re-request approvals when material inputs (splits, amounts, recipients) change.

Acceptance Criteria
Any-2-of-3 Producer Quorum Blocks Payout
Given a payout batch for Release R with a quorum rule "Producers: any 2 of 3" applied And no approvals recorded When a user attempts to initiate disbursement Then the system sets batch state to "Awaiting Approvals" and blocks disbursement When two distinct producer approvers submit "Approve" within the approval window Then the system marks the quorum as "Met" and the batch as "Ready for Disbursement" And disbursement initiation is allowed When a third producer approves Then the approval is recorded without changing the "Ready for Disbursement" state And the audit log records approver IDs, timestamps, and decisions for each action
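The quorum check itself is small; a sketch (the surrounding state machine from "Awaiting Approvals" to "Ready for Disbursement" is omitted, and names are illustrative):

```python
def quorum_met(approvals, required, eligible):
    """approvals: IDs that approved; eligible: IDs the rule counts
    (e.g. the release's three producers); required: e.g. 2 of 3."""
    return len(set(approvals) & set(eligible)) >= required

producers = {"p1", "p2", "p3"}
```

Intersecting with the eligible set is what makes approvals from out-of-scope users inert, and a surplus approval (the third producer) is recorded without changing the outcome.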
All Primary Artists Approval per Release in Multi-Release Batch
Given a batch containing Release A and Release B And a per-release rule "All Primary Artists must approve" is configured And Release A has two primary artists (A1, A2); Release B has one (B1) When A1 and A2 approve but B1 has not Then the batch remains "Awaiting Approvals" and disbursement is blocked When B1 approves Then both releases' quorums are "Met" and the batch becomes "Ready for Disbursement" And the approval timeline shows releases segmented with their approvers
Approval Expiry with Reminders and Escalation
Given an approval rule with a 5-day expiry, daily reminders, and escalation to "Label Admin" on expiry And approvers have been invited at T0 When no approval is received by T0 + 24h Then reminder notifications are sent to all pending approvers and logged When no approval is received by T0 + 5 days Then the request is escalated to the Label Admin and escalation notifications are sent and logged And the approval request remains open until quorum is met or manually canceled
Material Change Re-requests and Resets Approvals
Given approvals were previously "Met" for a batch And a material input changes (splits, amounts, or recipients) When the change is saved Then prior approvals for impacted parties are invalidated and marked "Superseded" And new approval requests are sent to required approvers with a diff of changes And disbursement reverts to "Awaiting Approvals" and is blocked until quorum is met again
Immutable Approval Timeline with Comments and Change Context
Given users can comment on approval requests When any approver submits Approve/Reject with an optional comment Then the timeline records the action, comment, approver identity, timestamp, and version hash And all timeline entries are immutable (append-only) and exportable as an audit report And change context (who changed what fields and when) is displayed adjacent to each approval decision
Quorum Revalidation Across Versions Pauses Disbursement
Given a payout was scheduled based on approvals for Release R, Version 1 When Version 2 of Release R updates splits or recipients prior to disbursement Then the system pauses the scheduled disbursement And triggers quorum revalidation for Version 2 and sends new approval requests And disbursement remains blocked until Version 2 quorum is met
Amount Threshold Triggers Named Approver Quorum
Given a rule: if total batch amount is greater than or equal to $10,000, require Finance Controller approval in addition to base role approvals And a batch total is $12,500 When all base role approvals are met but Finance Controller has not approved Then the batch stays "Awaiting Approvals" and disbursement is blocked When Finance Controller approves within the approval window Then the batch becomes "Ready for Disbursement" When revisions reduce the batch total below $10,000 Then the Finance Controller requirement is removed, approvals are recalculated, and status updates accordingly And all threshold evaluations and notifications are logged in the audit trail
Version-Aware Payment Guardrails
"As a finance coordinator, I want safeguards that account for version changes so that we don’t double pay or miss payments when releases are updated."
Description

Bind payouts to specific asset and split versions, preventing over- or under-payment across revisions. Detect diffs between versions, calculate deltas against already-paid amounts, and enforce locks on finalized periods. Provide pre-run impact analysis showing whose payouts change, by how much, and why, with warnings for out-of-balance totals. Require re-approval when significant changes occur and keep a cross-version ledger to ensure cumulative accuracy over time.

Acceptance Criteria
Bind Payouts to Explicit Asset/Split Versions
Given a payout run R is initiated for a release, When the user selects specific Asset Version ID(s) and Split Version ID, Then all payout calculations for R must reference only those exact version IDs. Given run R is saved or finalized, When the underlying asset or split is updated to a new version, Then R’s stored inputs, preview, and outputs remain unchanged and auditable with immutable version IDs and content hashes. Given a user attempts to change versions within an existing run R, When a different version is selected, Then the system creates a new run R2 with a distinct run ID and version references, and R remains unchanged. Given a run R is finalized, When exporting artifacts (breakdowns, remittance), Then the export includes the bound Asset Version ID(s) and Split Version ID and their hashes.
Version Diff Detection and Attribution
Given a prior finalized run R_prev used Split Version Sv2.0 and a new Split Version Sv2.1 exists, When initiating a new preview R_new, Then the system displays a per-role and per-recipient diff including additions, removals, and changes to percentages, fixed fees, and bonuses. Given a diff item is shown, When inspecting details, Then the item includes a reason code and human-readable attribution (e.g., “credit role updated,” “percentage edited,” “new bonus added”), the change author, and timestamp. Given the diff is calculated, When presenting the summary, Then the system shows per-recipient net delta amount (currency) and percent, plus aggregate delta totals at run level. Given no change is detected, When generating the preview, Then recipients with zero delta are marked “no change.”
Delta Calculation Against Already-Paid Amounts
Given recipient Y has Amount_paid recorded in the ledger for Release X, When calculating a new run R_new against the selected versions for period P, Then Amount_owed = Amount_target_cumulative_to_date − Amount_paid, rounded to currency precision using half-up rounding. Given Amount_owed < 0, When generating remittance, Then the payout for recipient Y is set to $0 and an overpayment alert with magnitude is shown in the preview; no negative payments are issued. Given Amount_owed = 0, When generating remittance, Then recipient Y is excluded from payment instructions and labeled “settled this period.” Given Amount_owed > 0, When generating remittance, Then the remittance instruction for recipient Y equals Amount_owed and is included in the payment file. Given deltas are calculated, When totals are computed, Then the sum of all Amount_owed across recipients equals the run’s total proposed payout within rounding tolerance.
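The delta rule above, with its $0 clamp and overpayment flag, might look like the following (half-up rounding per the criterion; function and variable names are assumptions):

```python
from decimal import Decimal, ROUND_HALF_UP

def amount_owed(cumulative_target, already_paid):
    """Owed = cumulative entitlement to date minus paid-to-date, half-up
    rounded. Negative deltas become a 0.00 payout plus an overpayment
    flag -- a negative payment is never issued."""
    owed = (cumulative_target - already_paid).quantize(
        Decimal("0.01"), ROUND_HALF_UP)
    overpaid = owed < 0
    return (Decimal("0.00") if overpaid else owed), overpaid
```

A zero result would map to the "settled this period" label, and the overpayment flag would drive the preview alert rather than a remittance line.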
Finalized Period Lock Enforcement
Given accounting period P is marked Finalized, When a user attempts to edit credits or splits effective within P, Then the system blocks the edit or requires an effective date after P end; the attempt is logged with user and timestamp. Given period P is Finalized, When running payouts for P, Then only versions effective on or before P end and valid for P are considered; retroactive recalculation within P is disallowed without admin override. Given an authorized admin invokes override, When applying a retroactive change within P, Then a justification note is required and an automatic adjustment entry is scheduled for the next open period instead of modifying P’s finalized totals.
Pre-Run Impact Analysis and Out-of-Balance Warnings
Given a new run R_new is prepared, When generating the preview, Then the system lists each recipient with proposed payout, prior cumulative paid, delta amount (currency and percent), and reason(s) for change. Given percentage-based pools exist, When validating splits, Then the system blocks finalization if recipient percentages do not sum to 100% and shows an out-of-balance error. Given available payable balance is known for the period, When validating the preview, Then the system blocks finalization if the sum of proposed payouts exceeds the available balance and shows the shortfall. Given recipients are missing required payment details, When validating the preview, Then warnings identify affected recipients and those recipients are excluded from remittance until resolved.
Significant Change Re-Approval Gate
Given organization thresholds are configured (default: per-recipient absolute delta ≥ $50 or ≥ 5%, or any role addition/removal), When a preview R_new exceeds any threshold relative to the last approved run, Then the system requires re-approval per configured Quorum Rules before finalization. Given an approval request is generated, When approvers review, Then the request includes the diff summary, per-recipient impact, and a required justification field; all decisions are timestamped and auditable. Given quorum is not met or the request is rejected, When attempting to finalize R_new, Then finalization is blocked and the run is marked “Changes Require Rework.” Given quorum is met, When finalizing R_new, Then the approval record is linked to the run and exported with payout artifacts.
Cross-Version Ledger Integrity and Reconciliation
Given multiple runs exist across versions for a release, When generating the ledger report, Then the system shows per-recipient cumulative gross, deltas by run, breakdown of percentages/fees/bonuses, and current entitlement versus paid-to-date. Given ledger integrity checks execute, When validating per-recipient totals, Then cumulative paid must not exceed current final entitlement by more than the configured rounding tolerance; otherwise an overpayment flag is raised with run references. Given a previously finalized run is re-executed or retried, When posting to the ledger, Then idempotency prevents duplicate entries using run ID and remittance hash; duplicates are logged and not reposted. Given audits are requested, When exporting data, Then the ledger export includes run IDs, bound version IDs, diff references, approver records, and timestamps.
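The ledger idempotency requirement, keyed on run ID plus remittance hash, can be sketched with a keyed store standing in for the ledger table (a sketch only; a real implementation would use a unique database constraint):

```python
def post_to_ledger(ledger, run_id, remittance_hash, entries):
    """A (run_id, remittance_hash) key posts at most once; retries of a
    finalized run are detected and skipped rather than double-posted."""
    key = (run_id, remittance_hash)
    if key in ledger:
        return False          # duplicate: logged upstream, not reposted
    ledger[key] = entries
    return True
```

Because the key includes the remittance hash, a re-execution that produced different outputs would post as a distinct entry instead of silently overwriting the original.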
Recipient Statements & Breakdown Delivery
"As a contributor, I want a clear breakdown of how my payout was calculated so that I can verify the amounts without back-and-forth emails."
Description

Generate per-recipient statements that itemize calculations by release/track, split type, deductions, version references, FX rates, and date ranges. Provide branded PDF/CSV exports, an in-app viewer, and secure expiring links for external recipients. Support localization (currency, date, language), custom notes, and period summaries. Store statements with retention controls and allow recipients to self-serve historical statements.

Acceptance Criteria
Monthly Per-Recipient Statement Generation and Itemization
Given a payout period from 2025-07-01 to 2025-07-31 (UTC) and a recipient with splits across multiple releases and versions When statements are generated for the period Then a unique statement is created per recipient covering only transactions dated within the period boundaries inclusive of start and end dates (UTC) And each line itemizes by release and track with: split type (percentage, fixed fee, bonus), base amount, applied split value, deductions with labels, version reference (release ID, track ID, version tag), and resulting net amount And FX conversion details are included where applicable: source currency, target currency, FX rate to 6 decimal places, rate source and ISO 8601 timestamp, local and converted amounts shown And period summary totals are shown: gross, deductions by category, FX gains/losses (if any), net payable; totals reconcile within ±0.01 of the currency minor unit And rounding rules are applied per currency minor unit without inflating negative nets And cumulative percentage splits for any track version do not exceed 100% and fixed amounts do not exceed base; invalid lines are excluded and logged without blocking other valid lines And generation metadata (statement ID, recipient ID, period start/end, generator, timestamp) is recorded and the statement is stored And an optional custom notes section (max 2,000 chars) is stored and appears on viewer and exports
Branded PDF and CSV Export Delivery
Given a generated statement exists When the user exports as PDF Then the PDF includes organization branding (logo, brand name, primary color), statement ID, recipient name, period range, and all itemized lines and totals exactly matching in-app values And the PDF is produced within 5 seconds for up to 5,000 lines and is under 10 MB; otherwise an asynchronous export is queued and the user is notified And monetary formatting follows the active locale; Unicode and line breaks render correctly When the user exports as CSV Then the CSV is UTF-8 with BOM, comma-delimited, RFC 4180 compliant, includes header columns for all fields (including FX rate and source), and uses ISO currency codes in a dedicated column And exported filenames follow <org>_<recipient>_<statementId>_<periodStart>-<periodEnd>_<locale>.pdf|csv with safe characters And both exports are stored and linked to the statement for audit and re-download And any custom notes are included under a Notes section (PDF) and a Notes column (CSV)
In-App Statement Viewer and Interactions
Given a generated statement When opened in the in-app viewer Then the itemization table renders within 2 seconds for up to 5,000 lines and supports pagination or virtual scrolling beyond that And the viewer displays columns: release, track, version, split type, base amount, split value, deductions (expandable), FX rate/source, converted amount, net amount; totals appear in a fixed summary bar When the user filters by release, track, split type, or date range Then rows update and totals recalculate accordingly without mismatch When the user clicks a version reference Then a panel shows linked asset/version metadata When the user changes locale in the viewer Then currency and date formats update instantly without altering underlying numeric values Then the viewer meets WCAG 2.1 AA for contrast, keyboard navigation, and screen reader labels for columns and totals And Export PDF/CSV actions from the viewer produce identical outputs to the export feature
Secure Expiring External Statement Links with Analytics
Given a generated statement for an external recipient without an account When a secure link is created with a 7-day expiry Then a unique, unguessable URL with a token is generated and the expiration timestamp is displayed And access requires email verification via one-time passcode; only verified access reveals the statement And optional single-use enforcement, when enabled, blocks any subsequent access after the first successful view And authorized users can revoke the link at any time; revoked links return HTTP 410 with a branded message And per-recipient analytics are captured: first opened, last opened, open count, IP country, user agent, and downloads by format; these are visible in the audit log And when watermarking is enabled, PDF previews/downloads display a watermark with recipient email and access timestamp And all access and events are logged with timestamp, actor, and action
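A hedged sketch of the link mechanics above: an unguessable token (Python's `secrets`), an absolute expiry, and a revocation flag checked on every access. OTP verification, single-use enforcement, and the HTTP 410 response are omitted, and the field names are assumptions:

```python
import secrets
from datetime import datetime, timedelta, timezone

def create_link(expiry_days=7, now=None):
    """Unguessable URL token plus an absolute expiry timestamp."""
    now = now or datetime.now(timezone.utc)
    return {"token": secrets.token_urlsafe(32),   # 256 bits of randomness
            "expires_at": now + timedelta(days=expiry_days),
            "revoked": False}

def link_active(link, at):
    # Revocation wins over expiry; both are checked on every access.
    return not link["revoked"] and at < link["expires_at"]
```

Each access check would also append an analytics event (timestamp, IP country, user agent) before revealing the statement.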
Localization of Currency, Date, and Language
Given organization default locale and recipient-specific locale settings When generating and viewing a statement Then currency symbols or ISO codes, decimal and thousands separators, and date formats match the recipient locale And labels, column headers, and error messages load in the recipient language; missing translations fallback to org default, then English And currency conversion uses the statement’s FX rate snapshot; totals reconcile in the presentation currency after formatting without value changes from re-rounding When exporting PDF/CSV Then the locale is applied to formatting and filename, and language code is included in document metadata And right-to-left language layouts render correctly with mirrored table alignment and preserved numeric alignment
Statement Storage, Retention, and Self-Serve History
Given retention policy settings (e.g., 7 years) When a statement is generated Then it is stored with retention metadata (retention end date) and is immutable except for allowed metadata (notes, locale) And after the retention end date and upon policy execution, statements and exports are purged and an audit record of the purge (without content) is retained When a recipient logs into the portal Then they can view and download their historical statements filtered by period and release, but cannot see other recipients’ statements And role-based permissions enforce that org admins can view all org statements while recipients see only theirs And each download request is logged with timestamp, user, IP, and file format And bulk download of more than 20 statements creates a zip via an async job with notification upon completion
Automated Payout Routing & Reconciliation
"As an operations lead, I want approved payouts to be sent automatically and reconciled back to the ledger so that our books stay accurate with minimal manual work."
Description

Route approved payouts via integrated processors (e.g., ACH/SEPA, Stripe Connect, PayPal Payouts), honoring recipient payment preferences, KYC/AML checks, tax profile completeness, minimum thresholds, and batch scheduling. Manage retries for failed payments, partial payments, and holds. Reconcile transactions by ingesting processor reports and webhooks, updating payout statuses, fees, and reference IDs, and marking items as settled. Expose a reconciliation view with filters and exceptions queue.

Acceptance Criteria
Payout Routing Honors Preferences and Processor Constraints
Given a payout batch includes recipients with stored payment preferences and supported currencies/regions When routing is executed Then for each recipient the preferred processor is selected if available and compliant, otherwise the next configured fallback is used, otherwise the payout is marked Unroutable with a specific reason code Then each routed payout includes a unique idempotency key and processor-specific external reference Then the gross amount equals the computed amount owed and never exceeds the available payable balance Then a single transfer is created per recipient per currency per batch
KYC/AML and Tax Profile Gating Before Disbursement
Given a recipient has incomplete KYC/AML verification or an incomplete/invalid tax profile When batch preparation occurs Then the recipient’s payout is set to On Hold with reason codes (KYC_PENDING, AML_BLOCK, TAX_INCOMPLETE) and no transfer payload is created Then once verification and tax profile become compliant before the next scheduled run, the payout automatically moves to Ready in that run Then all gating decisions are recorded in an immutable audit trail with timestamp, evaluator, and rules matched
Minimum Thresholds and Batch Scheduling
Given recipient-level minimum payout thresholds and a configured batch schedule with timezone When the payable balance for a recipient is below the threshold at batch time Then no payout is created and the balance rolls over to the next batch with reason BELOW_THRESHOLD When the balance meets or exceeds the threshold at batch time Then a payout is created for the balance amount rounded per currency rules and included in that batch Then batch execution adheres to schedule windows and does not execute more than once per window
Failure, Partial Payment, and Hold Retry Management
Given a routed transfer fails with a processor error When the retry policy runs Then the system retries up to the configured limit with exponential backoff, marks final state as Failed after limit, and stores last error code/message Given a partial payment occurs (processor settles a subset of the amount) When reconciliation identifies the partial settlement Then the settled portion is marked Settled, the remainder is returned to Pending Retry with a new attempt record, and totals remain consistent Given a payout is On Hold When a user clears the hold or the hold condition expires Then the payout moves to Ready for the next batch without creating duplicate transfers
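The exponential-backoff retry policy reduces to a delay schedule capped at the attempt limit; a sketch with an assumed base delay and growth factor:

```python
def backoff_schedule(base_seconds, max_attempts, factor=2):
    """Delay before each retry attempt; after max_attempts the transfer
    is marked Failed and the last error code/message is stored."""
    return [base_seconds * factor ** i for i in range(max_attempts)]
```

For example, `backoff_schedule(30, 4)` yields delays of 30, 60, 120, and 240 seconds; production policies often also add jitter to avoid synchronized retries.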
Processor Reconciliation via Webhooks and Reports
Given processor webhooks and periodic settlement reports are enabled When a webhook or report line is received Then the system matches it to an internal payout using idempotency key or external reference with deterministic fallback keys Then payout status is updated to Settled, Failed, or Partially Settled, processor fees/net amounts are recorded, and settlement timestamps are stored Then updates are idempotent and resilient to out-of-order delivery; reprocessing the same event/report does not change the final state Then unmatched events are routed to the exceptions queue with a concrete reason (NO_MATCH, DUPLICATE, CURRENCY_MISMATCH)
Reconciliation View with Filters and Exceptions Queue
Given a finance user opens the Reconciliation view When they filter by status, processor, date range, batch ID, recipient, currency, and amount range Then the grid returns results matching all filters within 2 seconds for up to 10,000 records Then the Exceptions queue displays items categorized by reason with counts and supports assigning, commenting, and marking as resolved Then exporting the current filtered view produces a CSV with all visible columns and applied filters
Over/Under-Payment Prevention and Quorum Gating Across Versions
Given multiple release versions and split definitions exist for recipients When calculating payable amounts for a payout cycle Then per-recipient totals across versions do not exceed the owed amount and do not drop below zero after bonuses/fees; discrepancies are flagged before routing Then routing is blocked until required quorum approvals are met; payouts remain in Blocked with reason QUORUM_UNMET Then the per-recipient breakdown shows version-level allocations and adjustments used to derive the final routed amount
Payout Analytics & Export API
"As a founder, I want visibility into payout trends and anomalies so that I can make informed decisions and catch issues early."
Description

Provide dashboards and exports for payout KPIs by release, recipient, role, period, and revenue source. Offer anomaly detection (e.g., sudden split changes, outlier deductions), time-series trends, and drill-down to statements. Expose an authenticated API and CSV export for downstream accounting and BI tools, with scheduling and webhooks for completed runs.

Acceptance Criteria
KPI Dashboard Filtering by Release, Recipient, Role, Period, and Revenue Source
Given an authenticated manager with access to Release A's payouts When they apply filters Release=A, Role=Producer, Period=2025-05 to 2025-07, Revenue Source=Streaming Then KPI tiles and tables reflect only matching payouts and the total equals the sum of visible rows within ±0.01 of currency precision Given no filters are applied When the dashboard loads Then results default to the last 90 days across accessible releases and totals are clearly labeled with the active period Given the user selects Recipient=Jane Doe with multiple roles When Role filter is set to "All roles" Then role-level KPIs aggregate across all roles for the recipient and recipients with no data show a zero state without errors
Drill-Down from KPI Tiles to Source Statements
Given a KPI tile "Net Payouts" shows value X for Release A in Period P When the user clicks the tile Then a drill-down view lists underlying statements and line items that sum to X within ±0.01 tolerance, and displays the applied filters Given the drill-down view displays line items When the user opens a line item Then a panel shows source statement ID, version, revenue source, role, recipient, split %, deductions, net amount, and the calculation formula Given the user lacks permission to a statement's contract When viewing the drill-down Then restricted fields are masked while aggregate totals remain unchanged and an access notice is displayed
Anomaly Detection for Sudden Split Changes and Outlier Deductions
Given baseline splits for Release A are 25/25/50 for roles [Artist, Producer, Label] When a change to 10/40/50 occurs effective mid-period Then an anomaly "Sudden split change" is created with before/after values, timestamp, actor, affected payouts count, and a link to impacted statements Given historical deductions per release and revenue source have mean μ and standard deviation σ When a new deduction exceeds μ + 3σ within the analysis window Then an "Outlier deduction" alert is created with context (statement ID, amount, z-score) and remediation guidance link Given unresolved anomalies exist When the user toggles "Show anomalies only" Then the dashboard and exports include only affected releases/recipients and an anomalies.csv file is attached to exports with each flagged event
Time-Series Trends Visualization and Aggregation Accuracy
Given the time-series chart is set to Metric=Gross Revenue and Group by=Month When the user switches Group by=Week Then buckets use ISO week boundaries, totals across buckets equal the aggregate KPI within ±0.01, and tooltips show period start/end Given months with zero activity are present When rendering the chart Then zero-value points are included to maintain continuity without interpolating non-existent data Given account timezone is set to America/Los_Angeles When viewing period boundaries and calling the API with identical parameters Then UI and API use the same local cutoff (23:59:59 PT) and return identical bucket totals
CSV Export with Scheduling and Webhook Notification
Given filters Release=A, Period=2025-Q2, Recipient=All When the user clicks Export CSV Then a CSV is generated with headers [release_id, release_title, period_start, period_end, recipient_id, recipient_name, role, revenue_source, gross, deductions, net, currency] and row count matches the on-screen table Given the user schedules a weekly export every Monday 09:00 account timezone to S3 and configures webhook URL https://acct.example/webhooks/payout-export When the scheduled run completes Then a POST webhook is sent with payload {run_id, status:"completed", file_url, filters, row_count, file_hash} signed with HMAC, and the file is uploaded to the configured S3 path Given an export run fails When retry is enabled with max_attempts=3 Then retries use exponential backoff and a final "failed" webhook is sent including error_code and trace_id
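The "signed with HMAC" requirement above implies the receiver recomputes a digest over the exact request body; a sketch using Python's `hmac` module (key handling and payload fields are illustrative, not the spec's wire format):

```python
import hashlib
import hmac
import json

def sign_webhook(secret, payload):
    """Serialize the payload canonically and compute an HMAC-SHA256
    signature the receiver can recompute over the same bytes."""
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return body, sig

def verify_webhook(secret, body, sig):
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, sig)

body, sig = sign_webhook(b"shared-secret",
                         {"run_id": "r1", "status": "completed"})
```

Verification must run over the raw received bytes, not a re-serialized parse of them, or byte-level differences will break the match.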
Authenticated Analytics API for Downstream Accounting and BI
Given a client holds an OAuth2 client_credentials token with scope payout.analytics:read When it requests GET /api/v1/payout-analytics?release_id=A&period_start=2025-04-01&period_end=2025-06-30&group_by=recipient Then the API responds 200 with aggregates per recipient including [recipient_id, name, role, gross, deductions, net, currency] and a checksum, matching UI totals for the same filters within ±0.01 Given the result set exceeds 500 rows When the client includes page_token=XYZ Then the API returns the next page with a new page_token until no more pages remain Given invalid or expired credentials When requesting any analytics endpoint Then the API returns 401 with WWW-Authenticate header; given insufficient scope, it returns 403 with an error code without body data Given the client sets Accept: text/csv on the same endpoint When the request is processed Then a streamed CSV is returned with the same schema as the UI export and Content-Disposition filename includes the filter summary Given per-client rate limit is 120 requests per minute When the limit is exceeded Then the API responds 429 with Retry-After header and no partial data

Recoup Engine

Define recoupable costs (mixing, video, advances) and priority waterfalls per project. Deductions apply before payouts, with transparent ledgers, caps/floors, and per-milestone toggles. Everyone sees what has been recouped and what remains, reducing disputes and time spent auditing email threads.

Requirements

Recoupable Cost Categories & Rules
"As an indie label manager, I want to configure recoupable cost categories and their rules per project so that deductions are applied consistently and transparently before payouts."
Description

Enable project owners to define standardized recoupable cost categories (e.g., mixing, mastering, video, marketing, advances) with configurable rules per project. Each category supports caps, floors, percentages, interest/fees, tax flags, start/end applicability windows, milestone-based toggles, backdating controls, and contract references/attachments. Provide reusable templates at workspace level, validation to prevent conflicting rules, and versioning with effective dates. Integrate costs with IndieVault projects/releases so expenses can be attached to assets and contracts, ensuring consistent, transparent setup that reduces disputes and accelerates onboarding.

Acceptance Criteria
Workspace Template: Standard Categories & Rule Defaults
Given I am a workspace admin with permission to manage Recoup Templates When I create a template named "Indie Standard v1" with categories Mixing, Mastering, Video, Marketing, Advances And for each category I configure: cap (currency), floor (currency), deduction percentage (0–100), interest (none/simple/compound with rate and period), fees (none/flat/percent), tax flag (pre-tax or post-tax), default start date, default end date, milestone toggle (on/off + event), contract reference required (on/off) Then the template is saved successfully And category names are unique within the template And numeric fields accept up to 2 decimal places and non-negative values And the template appears in the workspace template list and is selectable for projects
Project Template Application & Overrides
Given a project exists without recoup rules When I apply the "Indie Standard v1" template Then the project receives a snapshot of the template as Project Rules v1 effective today And I can override any field per category for this project before publishing v1 And overrides are saved and auditable per field And template changes made later do not affect this project unless a new version is applied
Rule Validation: Prevent Conflicts
Given I am configuring category rules for a template or project version When I attempt to save rules with any of the following:
- floor greater than cap (when both set)
- deduction percentage outside 0–100
- start date after end date
- overlapping applicability windows for the same category
- milestone toggle enabled without selecting at least one milestone event
- both interest and fee set to compound on principal+interest (double-compounding)
Then the save is blocked And I see specific error messages per invalid field And no partial changes are committed
Versioning, Effective Dates, and Backdating Controls
Given Project Rules v1 are effective 2025-09-01 When I create v2 effective 2025-10-01 Then calculations for expense dates before 2025-10-01 use v1, and on/after 2025-10-01 use v2 And effective date ranges for versions cannot overlap When I attempt to create v3 effective 2025-08-15 Then only users with Backdate permission can proceed And an audit log entry records user, timestamp, old/new effective window, and reason And the version history shows v1, v2, v3 with their effective windows
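The version-selection rule in the criterion above amounts to picking the latest version whose effective date is on or before the expense date; a sketch (the version table and labels are hypothetical):

```python
from datetime import date

# Hypothetical version table mirroring the scenario:
# v1 effective 2025-09-01, v2 effective 2025-10-01
VERSIONS = [(date(2025, 9, 1), "v1"), (date(2025, 10, 1), "v2")]

def rules_for(expense_date, versions=VERSIONS):
    """Return the latest rules version effective on or before the expense date."""
    applicable = [label for effective, label in sorted(versions)
                  if effective <= expense_date]
    if not applicable:
        raise ValueError("no rules version effective on this date")
    return applicable[-1]
```

An expense dated 2025-09-15 resolves to v1; anything on or after 2025-10-01 resolves to v2, which is exactly the split the acceptance criterion requires.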
Attach Expenses to Assets, Releases, and Contracts
Given a user is recording an expense in a project When they select a category and link one or more assets, releases, or contracts and upload supporting attachments Then the expense cannot be saved without a category And if the category requires a contract reference, the expense cannot be saved unless a contract is linked And the expense is associated to the project and visible in the category ledger with links to the attached items And the applicable rules version is determined by the expense date
Milestone Toggles & Applicability Windows Enforcement
Given a category is configured with a milestone toggle "Activate on Release Published" and an end date of 2025-12-31 When the release is published Then the category automatically activates and begins applying deductions on the next calculation run And if the end date passes, the category automatically deactivates and no further deductions are taken And the UI shows the active/inactive state with the triggering milestone or window reason
Interest, Fees, Tax Flags, and Cap/Floor Enforcement in Ledger
Given a category with cap $10,000, floor $1,000, 20% deduction, simple interest 6% annually, fee 2% of principal monthly, and tax flag set to pre-tax When the payout calculation runs for a period with $8,000 gross revenue and $5,000 eligible expenses in this category Then deductions are computed pre-tax per rules, including principal, interest, and fees And the ledger line items show principal, interest, fees, tax label, running total, and remaining to recoup And rounding is to 2 decimals in the project currency And once the cap is reached, additional expenses in this category do not increase the recoupable balance and the ledger marks the category "Cap reached" And the category is not marked settled until the floor is met
Waterfall Editor & Scenario Preview
"As a project owner, I want to design and preview payout waterfalls so that I can verify deductions and splits before enabling live payouts."
Description

Provide a visual, drag‑and‑drop editor to define payout waterfalls and recoup priority orders per project. Allow configuring pre‑ and post‑recoup splits, bucket ordering, thresholds, and conditional gates (milestones) with clear summaries of who gets paid when. Include scenario previews that simulate recoup timelines using sample or historical revenue, showing cumulative recoup, remaining balance, and projected participant payouts. Support versioning, draft vs. active configurations, and change impact diffing to ensure safe updates without disrupting live payouts.
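A minimal scenario-preview loop might walk each revenue period through the ordered buckets and split the post-recoup remainder; bucket and participant names here are illustrative, not the product's data model:

```python
def preview(period_revenues, buckets, post_recoup_splits):
    """Sketch of a scenario preview. buckets is an ordered list of
    (name, outstanding_balance) pairs; insertion order doubles as priority.
    post_recoup_splits maps participant -> share (shares sum to 1.0)."""
    balances = dict(buckets)
    rows = []
    for revenue in period_revenues:
        available = revenue
        applied = 0.0
        for name in balances:               # recoup in priority order
            take = min(available, balances[name])
            balances[name] -= take
            available -= take
            applied += take
        rows.append({
            "recoup_applied": applied,
            "remaining_balance": round(sum(balances.values()), 2),
            "payouts": {p: round(available * s, 2)
                        for p, s in post_recoup_splits.items()},
        })
    return rows
```

With a $1,000 advance and two $600 periods, period one recoups $600 and pays nothing; period two recoups the remaining $400 and splits the $200 remainder, which is the cumulative-recoup/remaining-balance/projected-payout shape the preview table needs.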

Acceptance Criteria
Drag-and-Drop Waterfall Builder Persists and Validates Bucket Order
Given I am editing a project's payout waterfall in the visual builder And the waterfall contains three or more buckets with assigned recipients When I drag a bucket to a new position and click Save Then the new order is immediately reflected in the UI And reloading the editor shows the same saved order And validation prevents duplicate bucket names And validation prevents saving if any bucket lacks a required field (name, type, split) And inline errors identify the exact bucket and field to fix And the Save action is disabled until all validation errors are resolved
Configure Pre- and Post-Recoup Splits with Thresholds, Caps, and Floors
Given a project with defined participants When I configure pre-recoup splits that total 100% And I configure post-recoup splits that total 100% And I set bucket-level caps (absolute currency) and floors (percentage) And I add a bucket threshold that delays payments until a target (e.g., $50,000 gross) is met Then the summary panel displays effective rates and constraints per bucket And saving is blocked if any split section does not equal 100% And saving is blocked if any cap is below its floor or any value is not numeric/positive And after saving, reopening the editor shows all values exactly as entered
Milestone Gates Control Bucket Eligibility with Per-Milestone Toggles
Given a milestone gate (e.g., “Video Delivered”) exists for a bucket And the milestone is currently not met When I run a payout simulation Then the gated bucket is excluded from payouts When I mark the milestone as met with a timestamp T Then simulations include the bucket only for revenue occurring at or after T And toggling the milestone back to not met excludes the bucket in subsequent simulations And the milestone state and timestamp are shown in the configuration summary
Scenario Preview Simulates Recoup Timeline Using Historical or Sample Revenue
Given historical revenue exists for the project within a selected date range When I open Scenario Preview and select a configuration Then the preview computes per statement period: cumulative recouped amount, remaining recoup balance, and projected payouts by participant And totals reconcile such that (recoup applied + payouts + remaining recoup) equals total revenue for each period And switching the data source to a provided sample dataset updates results deterministically for the same inputs And the preview renders both a table and a chart view and allows CSV export of the table And the initial render completes within 3 seconds for 24 months of data and up to 10 participants
Draft vs Active Versioning Prevents Live Payout Disruption
Given an active waterfall configuration exists When I clone it to a draft and make changes Then live payouts continue to use the active configuration until activation And the draft is labeled as Draft with version and last-modified metadata And attempting to activate the draft shows a confirmation modal with an impact diff summary And activation requires no validation errors in the draft And upon activation, the prior active version is archived as read-only with an audit log entry And the activation timestamp ensures the new configuration only affects payouts from that time forward
Change Impact Diff Highlights Configuration and Payout Differences
Given two configurations V1 and V2 exist for the same project When I open the impact diff comparing V1 to V2 Then changes in bucket order, split percentages, caps/floors, thresholds, and milestone gates are highlighted line-by-line And the diff includes a projected payout delta by participant for the last 3 months using historical data And if historical data is insufficient, the UI states this and offers to run the diff using sample data And the diff view supports exporting the report as CSV or PDF And the diff computation completes within 10 seconds for 24 months of history and up to 20 participants
Unified Revenue & Expense Ledger
"As a finance admin, I want a single ledger for all income and expenses so that recoup calculations are accurate, auditable, and easy to reconcile."
Description

Create an immutable, auditable ledger that ingests revenue (DSP, Bandcamp, merch) and expenses from manual entry, CSV import, and API/webhooks. Support multi‑currency inputs with FX normalization at posting time, source attribution, invoice/receipt attachments, and links to assets, releases, and contracts. Enforce double‑entry style adjustments (reversals instead of destructive edits), tagging, reconciliation workflows against statements, and period locking. Expose filters and saved views for finance reviews and export to CSV/JSON for audits.
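One way to model the append-only entry with posting-time FX capture and reversal-only adjustment (field names are assumptions; a real ledger would persist these rows, not hold them in memory):

```python
from dataclasses import dataclass
from decimal import Decimal, ROUND_HALF_EVEN
from typing import Optional

@dataclass(frozen=True)  # frozen mirrors the no-destructive-edits rule
class LedgerEntry:
    entry_id: str
    entry_type: str              # "revenue" | "expense"
    original_amount: Decimal
    original_currency: str
    fx_rate: Decimal             # original -> base, captured at posting time
    reversal_of: Optional[str] = None

    @property
    def base_amount(self) -> Decimal:
        # Normalize to the base currency with banker's rounding at 2 decimals
        return (self.original_amount * self.fx_rate).quantize(
            Decimal("0.01"), rounding=ROUND_HALF_EVEN)

def reverse(entry: LedgerEntry, new_id: str) -> LedgerEntry:
    """Adjustments are reversals, never edits: post a negating twin entry."""
    return LedgerEntry(new_id, entry.entry_type, -entry.original_amount,
                       entry.original_currency, entry.fx_rate,
                       reversal_of=entry.entry_id)
```

Because the reversal reuses the original `fx_rate`, the pair nets to zero in both the original and base currencies, which is the invariant the reconciliation criteria below depend on.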

Acceptance Criteria
Post Entries from Manual, CSV, and Webhook Sources
Given a project exists and the actor is authorized to post ledger entries When the actor posts a revenue or expense via the manual UI Then a ledger entry is created with fields: entry_id, project_id, entry_type (revenue|expense), source_type (manual), source_ref, tags[], original_amount, original_currency, base_currency_amount, fx_rate, fx_source, posted_at, created_by, links (asset_id|release_id|contract_id), attachments[], audit_trail_id
Given a valid CSV file with multiple entries and unique external_ids When the file is imported Then valid rows are posted and invalid rows are rejected with row-level error codes, and an import summary reports total, succeeded, failed, and duplicate counts
Given an inbound API or webhook payload with external_id and signature When it is received and validated Then the entry is posted with source_type (api|webhook), signature verified, and duplicate payloads with the same external_id are ignored with an idempotent 200 response
Given any created entry When the entry is queried by entry_id Then the response includes source attribution, original values, normalized values, and a reference to the audit trail
FX Normalization at Posting Time
Given the entry currency differs from the project base currency When the entry is posted Then the system records fx_rate, fx_source, and fx_timestamp for the transaction date-time and computes base_currency_amount using the configured rounding rules
Given fx data is unavailable for the specified timestamp When posting is attempted Then posting fails with a retriable error and no partial entry is created
Given the entry currency equals the project base currency When the entry is posted Then fx_rate is stored as 1.0 and base_currency_amount equals original_amount
Given a user with finance_admin role provides a manual fx override When the override is within allowed policy and justification is provided Then the override is accepted, logged in the audit trail, and marked as manual_fx_override true
Immutable Ledger and Reversal-Only Adjustments
Given a posted ledger entry exists When a user attempts to edit or delete the entry Then the operation is blocked and the user is prompted to create a reversal instead
Given a user with permission initiates a reversal on an entry When the reversal is posted Then a new entry is created that negates the original amounts, links reversal_of to the original, links reversed_by on the original, and both are visible in the audit trail
Given an entry has been reversed When the ledger balance is computed for any period including both Then the net impact of the pair is zero in both original and base currencies
Given any mutation (reversal, attachment add, tag add) When it is saved Then the audit log captures actor, timestamp, action, and a content hash, and the original entry record remains immutable
Attachments and Entity Linkages
Given a user posts an expense with an invoice attachment When the file is uploaded Then only allowed types (pdf, jpg, png) up to 25 MB are accepted, virus-scanned, stored with checksum, and linked to the ledger entry with filename, size, and mime_type metadata
Given the user links the entry to an asset, release, and/or contract When the entry is saved Then referenced IDs are validated for existence and access, and required link types per project policy are enforced
Given an attachment needs to be updated post-posting When a new attachment version is added Then the new file is appended as a new attachment record with its own audit trail entry; prior versions remain referenced and downloadable to authorized roles; direct replacement or deletion is disallowed without a reversible action
Given an authorized reviewer downloads an attachment When the download occurs Then the action is logged with entry_id, user_id, timestamp, and IP
Reconciliation Against Statements
Given a DSP or bank statement is imported via CSV or API for a project and period When the reconciliation screen is opened Then the system auto-suggests matches by amount, date tolerance, and reference, and surfaces unmatched items separately
Given an auto-suggested match exceeds the variance tolerance When the reviewer attempts to reconcile Then the action is blocked until a justification note is provided or the tolerance is updated by a finance_admin
Given items are reconciled When the ledger is queried Then reconciled status, reconciliation_batch_id, reconciled_by, and reconciled_at are visible on each entry, and an unreconciled aging report can be generated for the remaining items
Given a reconciliation is finalized When a reviewer attempts to alter a reconciled entry Then the system requires an unreconcile action with audit logging; direct edits are not allowed
Period Locking and Role-Based Overrides
Given a project accounting period is open When a finance_admin locks the period with start and end dates Then all entries with posted_at within that range become non-postable for standard users, and the lock event is recorded with reason and actor
Given a period is locked When a standard user attempts to post, reverse, or change reconciliation status dated within the locked period Then the action is rejected with a locked_period error and audit logged
Given a period is locked When a finance_admin uses an override to post into the locked period Then the action requires a justification note, is highlighted as override in reports, and is fully audit logged
Given a period is unlocked When the unlock occurs Then the event requires justification, is audit logged, and triggers a notification to the finance reviewers group
Finance Reviews: Filters, Saved Views, and Audit Exports
Given ledger entries exist for a project When a reviewer applies filters Then the list can be filtered by date range, currency, source_type, tag, project, linked entity (asset, release, contract), reconciled status, amount range, and free-text in source_ref, with results returned within 2 seconds for up to 50k records
Given a reviewer configures columns, sorting, and filters When the configuration is saved as a view Then the view stores name, owner, shared_with (user or role), and default flag, and can be loaded, updated, or deleted by the owner or finance_admin
Given a filtered result set is visible When the reviewer exports Then the system produces CSV and JSON exports containing all visible columns plus required audit fields (entry_id, created_at, created_by, content_hash, schema_version), uses UTC ISO 8601 timestamps and dot decimal formatting, includes a record count and checksum, and logs the export event
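The export contract above (visible columns plus a record count and checksum) can be sketched as follows; the helper name and field choices are hypothetical:

```python
import csv
import hashlib
import io

def export_csv(rows):
    """Serialize visible rows to CSV and return (payload, record_count, checksum).
    Timestamps are expected to already be UTC ISO-8601 strings with dot decimals;
    the SHA-256 checksum lets an auditor verify the file was not altered."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    payload = buf.getvalue()
    return payload, len(rows), hashlib.sha256(payload.encode("utf-8")).hexdigest()
```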
Recoup Calculation Engine & Reconciliation
"As an artist, I want clear, up‑to‑date recoup balances with explanations so that I understand what has been deducted and why."
Description

Implement a deterministic calculation engine that applies recoup rules to the ledger in near real time and in scheduled batches. Handle caps/floors, priority ordering, interest accrual, partial and cross‑bucket recoup, milestone toggles, rounding policies, and carryover between periods. Recompute safely on rule or ledger changes with full traceability, idempotency keys, and performance safeguards for large catalogs. Provide reconciliation views that explain each deduction line‑by‑line, with drill‑downs to source transactions to minimize disputes.

Acceptance Criteria
Priority Waterfall with Caps and Artist Floor
Given a project with recoup buckets A (priority 1, cap $5,000, outstanding $10,000) and B (priority 2, cap $2,000, outstanding $2,000) and an artist per-period floor of $500, and period income of $4,000 When the engine computes recoup for the period Then it allocates $3,500 to bucket A and $0 to bucket B, preserving $500 artist payable And total deductions respect each bucket’s remaining cap and do not reduce artist payable below $500 And remaining unrecouped balances carry forward to the next period unchanged And the posting set balances (debits equal credits) and includes audit metadata (rule version, period, calculator run ID)
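The scenario above (and the cross-bucket carryover case later in this section) reduces to a greedy allocation that reserves the artist floor first and respects each bucket's cap headroom; a sketch, not the engine itself:

```python
def allocate(income, buckets, artist_floor=0.0):
    """buckets: ordered list of dicts with name, outstanding balance, and
    optional cap (remaining cap headroom). Returns (allocations, artist_payable).
    Structure and names are illustrative assumptions."""
    available = max(income - artist_floor, 0.0)
    allocations = {}
    for b in buckets:                       # priority order
        headroom = b["outstanding"]
        if b.get("cap") is not None:
            headroom = min(headroom, b["cap"])
        take = min(available, headroom)
        allocations[b["name"]] = take
        available -= take
    return allocations, income - sum(allocations.values())
```

Run against the stated inputs (income $4,000, floor $500, A outstanding $10,000 under a $5,000 cap, B outstanding $2,000), it yields $3,500 to A, $0 to B, and $500 artist payable, matching the criterion.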
Interest Accrual on Advances (Daily Compounding, Prorated)
Given an advance of $20,000 at 6% annual interest, compounding daily on negative balances using Actual/365, funded 2025-01-01 00:00:00 UTC with period cutoff 2025-01-31 23:59:59 UTC When the engine computes accrued interest for the period Then it posts interest of $102.18 (banker’s rounding to 2 decimals at posting) with internal precision >= 6 decimal places And the interest posting references the advance principal, day-count basis, start/end timestamps, and rule version in the audit trail And disabling interest in the rule for the period results in $0 interest and a trace note indicating the toggle
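A sketch of the accrual formula itself. Note that the worked figure in the criterion is sensitive to convention: whether the funding-date-through-cutoff span counts as 30 or 31 accrual days, and how the final partial day is prorated, can shift the posted amount by about a cent, so the day count below is an assumption.

```python
def accrued_interest(principal, annual_rate, days):
    """Daily compounding on an Actual/365 basis. Keep full float precision
    internally; apply banker's rounding to 2 decimals only at posting time."""
    return principal * ((1.0 + annual_rate / 365.0) ** days - 1.0)

# Counting 2025-01-01 through the 2025-01-31 cutoff inclusively as 31 days:
posted = round(accrued_interest(20_000.0, 0.06, 31), 2)
```

Python's built-in `round` uses banker's rounding, matching the criterion's posting rule.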
Cross-Bucket and Partial Recoup with Carryover
Given bucket A (priority 1, outstanding $1,200) and bucket B (priority 2, outstanding $800), no caps, no floors, and period income of $1,500 When the engine computes recoup for the period Then it allocates $1,200 to A and $300 to B, leaving $500 outstanding in B carried forward And the period close snapshot shows A remaining $0 and B remaining $500 with matching ledger postings And cumulative lifetime recovered amounts update consistently with no duplicate postings on retry
Milestone Toggle Gating Recoup Buckets
Given a bucket "Video Marketing" (priority 2, outstanding $5,000) gated by milestone "Video Released" (OFF), and bucket A (priority 1) active with period income of $3,000 during 2025-02-01..2025-02-28 When the engine computes recoup with the milestone OFF Then $0 is allocated to "Video Marketing" and all eligible funds flow by priority to other active buckets When the milestone is toggled ON effective 2025-02-15 Then only income on or after 2025-02-15 is eligible for "Video Marketing"; pre-activation income follows next eligible bucket by priority And recompute produces updated postings with an audit trace indicating the gating rule and effective timestamp
Deterministic Recompute, Idempotency, and Concurrency Safety
Given an existing computed period (version v1) and an edit to a recoup rule (e.g., change cap) with idempotency key K1 When two recompute requests for the same project/period are submitted concurrently with key K1 Then only one replacement posting set (version v2) is applied and prior postings are soft-voided; no duplicates occur And rerunning the same recompute with key K1 yields no additional changes (idempotent) And the audit trail records before/after deltas, rule versions, actor, timestamp, and recompute scope; unaffected periods remain unchanged
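Idempotent recompute is essentially at-most-once application keyed by the caller's idempotency key; a toy sketch with persistent storage and distributed locking elided:

```python
import threading

class RecomputeRegistry:
    """Toy idempotency registry: the first caller holding a key runs the
    compute; later or concurrent callers with the same key get the recorded
    result instead of triggering a second posting set."""
    def __init__(self):
        self._lock = threading.Lock()
        self._results = {}

    def run(self, key, compute):
        with self._lock:          # serializes concurrent requests per process
            if key in self._results:
                return self._results[key]
            self._results[key] = compute()
            return self._results[key]
```

Holding the lock across the compute is crude, but it demonstrates the invariant the criterion asks for: one replacement posting set per key, and reruns with the same key change nothing.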
Near Real-Time Updates, Batch Consistency, Rounding Policy, and Performance Safeguards
Given a project with <= 10,000 ledger transactions When a new income transaction posts Then the engine updates recoup state within 5s p95 and 10s p99 And a nightly batch recompute over >= 1,000,000 transactions across >= 500 projects completes within 60 minutes p95 using chunked processing with max resident memory <= 2 GB per worker And internal math uses >= 6 decimal places with banker’s rounding to 2 decimals only at posting; rounding differences never exceed $0.01 per recipient per period And results from batch and near real-time paths are byte-for-byte identical for the same inputs
Reconciliation View with Line-Level Explanations and Drill-Down
Given a computed period with deductions When a user opens the reconciliation view for a project Then each deduction line displays: bucket, rule version, input values, formula, computed amount, pre/post balances, cap/floor/priority references, and posting timestamp And each line links to the source ledger transaction IDs with a one-click drill-down showing transaction details And totals by bucket and period equal the sum of visible line items and match posted ledger entries; CSV export reproduces the same totals And filters by project, recipient, bucket, and date range work and preserve totals correctness
Payout Integration & Statements
"As a royalties coordinator, I want recouped deductions to feed directly into payouts and statements so that payments are accurate and communication is streamlined."
Description

Integrate recoup outcomes with IndieVault’s payout/splits module so deductions apply before distributions. Enforce minimum payout thresholds, holdbacks, and schedule alignment. Generate per‑recipient statements that itemize revenue, deductions by category, and net payouts, with secure share links, expiry controls, and downloadable PDFs/CSV. Provide period close utilities to snapshot statements, lock calculations, and queue payouts via connected payment rails or export files for offline payment processing.

Acceptance Criteria
Pre-Distribution Recoup Deduction Application
- Given a project with configured recoup waterfalls and a period's recognized revenue, When distributions are calculated, Then recoupable amounts are deducted in priority order before any split distributions are applied.
- Given insufficient revenue to cover recoup for the period, When calculations run, Then the remaining recoup balance is carried forward and no negative distributions are created.
- Given the calculation summary, When viewed, Then it shows gross revenue, deductions by category, remaining recoup balance, and net distributable amount for the period.
Caps, Floors, and Milestone Toggles Reflected in Deductions
- Given per-category caps and floors, When deductions are computed, Then no category exceeds its cap and floors prevent payouts until minimum recoup is met for that category.
- Given milestone toggles on categories, When a milestone is not met during the period, Then that category is not deducted and is labeled inactive; When met, Then deductions commence from that date forward within the period.
- Given statements, When generated, Then they indicate cap reached status and the exact amount withheld due to floors or inactive milestones.
Minimum Payout Thresholds and Holdbacks Enforcement
- Given a recipient with a configured minimum payout threshold, When their net payout is less than the threshold, Then no payout is queued and the amount is carried forward with a visible balance on their statement.
- Given a holdback rule (percentage or fixed), When net payout is calculated, Then the holdback is deducted and tracked as a liability separate from recoup.
- Given a subsequent period where cumulative net meets the threshold, When payouts are generated, Then prior carryover is released minus current-period holdbacks.
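Threshold and holdback handling from the criteria above might look like this (the signature and names are assumptions):

```python
def settle(net, threshold, carryover=0.0, holdback_pct=0.0):
    """Sketch of per-recipient settlement.
    Returns (amount_queued, new_carryover, holdback_withheld)."""
    holdback = round(net * holdback_pct, 2)   # liability tracked apart from recoup
    payable = net - holdback + carryover      # prior carryover counts toward threshold
    if payable < threshold:
        return 0.0, payable, holdback         # nothing queued; balance carries forward
    return payable, 0.0, holdback
```

A $30 net against a $50 threshold carries forward; the following period's $40 net plus that carryover clears the threshold and releases $70, matching the carryover-release rule above.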
Per-Recipient Itemized Statements
- Given a closed period, When statements are generated, Then each recipient receives a unique statement containing period dates, gross revenue by source, recoup deductions by category, fees, holdbacks, carryovers, and net payout.
- Given the statement totals, When validated, Then Gross - total deductions - holdbacks ± adjustments equals Net Payout to the cent using the system’s rounding rules.
- Given multi-project participation, When a recipient has assets across projects, Then the statement itemizes by project and provides a period total.
Secure Share Links, Expiry, and Downloads
- Given a generated statement, When a share link is created, Then a tokenized URL is produced with configurable expiry date/time and optional password; access is limited to the recipient.
- Given an expired or revoked link, When accessed, Then the statement is not displayed and an expiry message is shown; access is logged with timestamp and IP.
- Given an active link, When the recipient downloads, Then PDF and CSV files are available; each file’s values reconcile to the on-screen statement totals.
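Tokenized, expiring links can be sketched with an HMAC over the statement ID and expiry timestamp. The secret, token format, and signature truncation below are illustrative choices, not a production design:

```python
import base64
import hashlib
import hmac

SECRET = b"demo-secret"  # assumption: a server-side signing key

def make_token(statement_id, expires_at):
    """Pack id:expiry and sign it so the expiry cannot be tampered with."""
    msg = f"{statement_id}:{expires_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]
    return base64.urlsafe_b64encode(msg).decode() + "." + sig

def check_token(token, now):
    """Valid only if the signature matches and the expiry is in the future."""
    payload, sig = token.rsplit(".", 1)
    msg = base64.urlsafe_b64decode(payload.encode())
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]
    if not hmac.compare_digest(expected, sig):
        return False
    _, expires_at = msg.decode().rsplit(":", 1)
    return now < int(expires_at)
```

Revocation and per-recipient binding would need a server-side lookup on top of this; the token alone only proves integrity and expiry.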
Period Close Snapshot and Lock
- Given an open period, When Close Period is executed, Then the system snapshots source data and statement outputs, assigns an immutable version ID, and locks inputs from retroactive change.
- Given a locked period, When a change is attempted to any input affecting calculations, Then the system blocks the change or creates a dated adjustment scheduled for the next open period and records an audit entry.
- Given payout schedules (e.g., monthly on 1st, weekly Friday), When closing a period, Then the close date/time conforms to the configured schedule and timezone or the system prevents close with a reason.
Payout Processing via Payment Rails or Export
- Given closed statements with positive net payouts, When payouts are initiated, Then payments are queued to connected rails with idempotency keys and status tracking (queued, sent, failed, retried).
- Given recipients not enabled for online payment, When payouts are initiated, Then a downloadable export file (CSV) is generated with the required fields per recipient; the sum equals the total net payouts.
- Given any payout errors, When processing completes, Then failures are reported with reasons and remain unpaid; successful payments receive transaction IDs and timestamps.
Permissions, Transparency & Notifications
"As a collaborator with limited access, I want a transparent view of my recoup status and alerts on milestones so that I stay informed without needing manual updates."
Description

Introduce role‑based access controls defining which stakeholders can view, edit, or comment on recoup settings and ledgers. Offer participant‑specific dashboards that show current recoup status, history, and remaining balances without exposing sensitive data of others. Send notifications on key milestones (e.g., 50% recouped, fully recouped, cap reached, rule change pending) via email and in‑app, with an audit log of who changed what and when. Enable comment threads and request‑for‑clarification flows on specific ledger lines or rules to resolve issues quickly.

Acceptance Criteria
RBAC: View/Edit/Comment Controls for Recoup Settings and Ledgers
Given a project with roles Owner, Manager, Participant, and Viewer And a user is assigned one of these roles When the user accesses Recoup Settings or Ledger views via UI or API Then permissions are enforced:
- Owner, Manager: can view and edit recoup settings and submit ledger corrections
- Participant: cannot access settings; can view their own balances and comment on permitted items
- Viewer: can view read-only ledgers; cannot comment or edit
- Non-member: receives 403 and no data is returned
And unauthorized UI controls are hidden or disabled And forbidden API requests return 403 and are audit-logged And permission changes take effect within 60 seconds of update
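The role matrix above can be sketched as a simple permission lookup; the action names are assumptions, and a real system would enforce this in middleware rather than a module-level dict:

```python
# Hypothetical permission matrix mirroring the roles in the criteria above.
PERMISSIONS = {
    "owner":       {"view_settings", "edit_settings", "view_ledger", "comment"},
    "manager":     {"view_settings", "edit_settings", "view_ledger", "comment"},
    "participant": {"view_own_balances", "comment"},
    "viewer":      {"view_ledger"},
}

def authorize(role, action):
    """Non-members (role None) get nothing; callers map False to an HTTP 403."""
    return action in PERMISSIONS.get(role or "", set())
```

Because the default for an unknown or missing role is the empty set, the deny-by-default behavior for non-members falls out of the lookup itself.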
Participant Dashboard: Privacy-Preserving Recoup Overview
Given a participant has access to Project A When they open the Recoup dashboard for Project A Then they see: total recouped to date, remaining balance, applicable caps/floors, and the 10 most recent ledger entries relevant to them And they do not see other participants’ names, emails, splits, or amounts And all totals reconcile to the sum of ledger entries visible to them And the Export Statement action produces a PDF/CSV that matches on-screen data and includes only their own information And the dashboard loads within 2 seconds at P95 for projects with up to 1,000 ledger entries
Notifications: Milestones & Rule-Change Alerts (Email + In-App)
Given notification preferences default to on for Owners, Managers, and Participants When any of these events occur on a project: recouped crosses 50%, fully recouped, cap reached, rule change submitted, rule change approved, rule change declined Then an in-app notification is created within 5 seconds and an email is sent within 2 minutes And notifications include project name, event type, timestamp, actor (if applicable), and a deep link to the relevant screen And duplicate notifications for the same event are not created within a 24-hour window And user preferences (mute per project, channel on/off) are respected And delivery status (queued, sent, failed) is recorded for each channel
Audit Log: Immutable Change History for Recoup Rules & Ledger
Given audit logging is enabled by default When a user creates, updates, deletes, or approves a recoup rule, rate, cap/floor, milestone toggle, ledger correction, or notification preference Then an audit entry is recorded with: actor ID and role, action, entity type and ID, old value, new value, timestamp (UTC, ISO-8601), request ID/IP, and optional reason/comment And audit entries are append-only and cannot be edited or deleted by any role And users can filter the log by date range, actor, action, and entity type, and export CSV where exported rows exactly match the filtered view And non-admins only see audit entries for projects they can access
Comments & Clarifications on Ledger Lines and Rules
Given a user with comment permission When they open a specific ledger line or recoup rule Then they can start a thread, reply, @mention users, and attach files up to 5 MB (virus scanned) And they can mark a thread as Clarification, which sets the item status to Pending Clarification and assigns an owner And all thread participants receive in-app and email notifications respecting their preferences And the owner can mark the thread Resolved with a required resolution note, which updates the item status and notifies participants And edits and deletes of comments are versioned and visible in thread history
Rule Change Approval Workflow Before Activation
Given a user proposes a change to a recoup rule and approvals are required from an Owner or Manager When the change is submitted Then the change is marked Pending and is not applied to calculations And approvers receive notifications with a diff of old vs new values and the proposed effective date And upon approval, the change is applied at the next payout recalculation and an audit entry records the approval; affected participants are notified And upon rejection, no changes are applied and the rationale is captured in the audit log; the requester is notified And if no action is taken within 7 days, the pending change auto-expires and the requester is notified

Milestone Presets

Drag‑and‑drop templates for common workflows (mix, master, artwork, PR, release). Each preset ties deliverables to approval gates and payout triggers, auto‑building schedules you can reuse across artists or labels. Cuts setup time and enforces consistent, on‑time payments.

Requirements

Preset Library & Versioning
"As a label manager, I want to create and version milestone presets so that my team can reuse consistent workflows across releases and audit changes over time."
Description

A repository inside IndieVault to manage reusable milestone presets for common workflows (mix, master, artwork, PR, release). Users can create, edit, clone, archive, and version presets with human-readable changelogs. Each preset captures deliverables, dependencies, approval gates, payout triggers, relative date offsets, assignee roles, and required metadata. Presets can be applied to any artist or label workspace, ensuring consistent execution and reducing setup time. Versioning preserves historical behavior on active releases while allowing improved versions for new work. Permission controls restrict who can publish, update, or retire presets.

Acceptance Criteria
Create New Preset with Required Fields
Given I have the "Preset Editor" permission in a workspace And I am in the Preset Library When I create a new preset and provide deliverables, dependencies, approval gates, payout triggers, relative date offsets, assignee roles, and required metadata Then the system validates all required fields and blocks save with specific inline errors for any missing or invalid values And on successful validation, the preset is saved as version 1.0.0 with immutable version ID, creator, and timestamp recorded And relative date offsets accept positive or negative values in days or weeks and are stored consistently And the new preset appears in the library list and can be opened for review
Edit and Publish New Version with Changelog
Given a published preset exists at version v1.0.0 When I edit its configuration and enter a human‑readable changelog summary of at least 10 characters And I select "Publish changes" Then the system creates version v1.1.0, preserving v1.0.0 as read‑only And the version history shows v1.1.0 with the entered changelog and a diff of changed fields And an audit log entry records editor, timestamp, and changed sections
Clone Preset Across Workspaces
Given a published preset v1.x exists in Workspace A When I choose "Clone to" and select Workspace B where I have the required permission Then a draft preset is created in Workspace B with identical configuration, a new preset ID, and version reset to 1.0.0 And assignee roles are mapped to Workspace B; any unmapped roles are flagged and must be resolved before publishing And required metadata schemas are copied; missing schemas in Workspace B are reported with actionable prompts And the clone operation succeeds without altering the source preset
Apply Preset Version to New Release
Given a new release exists with an anchor date defined (e.g., Release Date) When I apply a preset and select a specific published version Then milestones and tasks are instantiated with dates calculated from the anchor date using the preset's relative offsets And approval gates and payout triggers are created and linked to their respective milestones And tasks are assigned to the specified assignee roles or I am prompted to assign missing roles before confirmation And the release shows a success summary listing all instantiated items and their dates
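The offset-to-date instantiation described above can be sketched in a few lines. This is a minimal illustration, assuming offsets are stored as signed day counts relative to the anchor date (negative = before, positive = after); the milestone names and values are hypothetical:

```python
from datetime import date, timedelta

def instantiate_milestones(anchor: date, offsets: dict[str, int]) -> dict[str, date]:
    """Turn a preset's relative day offsets into concrete milestone dates,
    anchored to the release's anchor date (e.g., Release Date)."""
    return {name: anchor + timedelta(days=off) for name, off in offsets.items()}

# Hypothetical preset: offsets in days relative to the release date.
preset_offsets = {"Master delivery": -28, "Artwork final": -21, "Press kit": -14, "Release": 0}
dates = instantiate_milestones(date(2025, 10, 31), preset_offsets)
```

In practice the stored offsets would also carry units (days vs. weeks) and working-day rules, which the auto-scheduler section covers separately.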
Preserve Active Release Behavior After Preset Update
Given Release R was instantiated from preset P at version v1.1.0 And preset P is later updated and published as v1.2.0 When I view Release R Then Release R continues to reflect v1.1.0 behavior with no automatic changes to milestones, approvals, or payout triggers And creating a new release with preset P defaults to v1.2.0 unless another version is explicitly selected
Permission Controls for Publish, Update, and Retire
Given role permissions restrict publishing, updating, and retiring presets to authorized users When an unauthorized user attempts to publish, update, or retire a preset Then the action is blocked with a clear 403-style error message and disabled controls, and no changes are persisted And an authorized user performing the same action succeeds and receives a success confirmation And all permission checks are recorded in the audit log
Archive and Restore Preset
Given a published preset exists When I archive the preset Then it is marked Archived, removed from default selection lists, cannot be applied to new releases, and remains visible in history And active releases that used the preset remain unaffected When I restore the preset (with required permission) Then its status returns to Published without altering its version history and it becomes selectable for new releases
Drag-and-Drop Milestone Builder
"As a producer, I want to drag and drop steps and approvals into a workflow so that I can quickly build a preset tailored to my release without writing complex rules."
Description

A visual, drag-and-drop editor to assemble milestones, tasks, deliverables, approval gates, and payout triggers into a preset. Users define dependencies, relative offsets (e.g., T-28 days from release), and required asset types (tracks, stems, artwork, contracts, press kits) mapped to IndieVault folders. The builder supports role-based assignments, checklists, acceptance criteria, and file requirements, validating that each step can be fulfilled with existing asset types. Inline previews and ghosted timelines show the impact of changes before saving.

Acceptance Criteria
Drag-and-Drop Canvas Assembly
Given I am in the Milestone Builder, When I drag a Milestone/Task/Deliverable/Approval Gate/Payout Trigger from the palette onto the canvas, Then the element appears on the canvas and is recorded in the unsaved draft. Given elements on the canvas, When I drag a Task into a Milestone, Then the Task nests under that Milestone and the hierarchy is reflected in the outline. Given elements on the canvas, When I drag to reorder within the same parent, Then the order updates and the outline and preview reflect the change. Given an action is performed, When I press Undo or Redo, Then the canvas and outline revert or reapply the last change. Given a canvas element is selected, When I press Delete and confirm, Then the element and its child elements are removed from the draft.
Dependencies and Relative Offsets
Given two steps exist, When I set a Finish-to-Start dependency from Step A to Step B, Then Step B cannot be scheduled before Step A's finish in the preview. Given a step exists, When I set an offset of T-28 days from Release, Then the preview date shows 28 days before T-0 and updates when the Release anchor changes. Given a set of dependencies, When a circular dependency is introduced, Then the builder blocks saving and displays an error identifying the cycle. Given dependent steps exist, When I change a predecessor's offset value, Then all downstream dates recalculate in the ghosted timeline and inline preview before saving.
Asset Type Mapping to IndieVault Folders
Given a step requires an asset, When I choose Tracks/Stems/Artwork/Contracts/Press Kits as the required type, Then the builder requires selection of a mapped IndieVault folder for that type. Given a required asset type lacks a mapped folder, When I attempt to save the preset, Then the save is blocked with a validation message listing the unmapped steps. Given valid folder mappings are set, When I save the preset, Then the mappings are persisted and visible on the step in the preset preview. Given an invalid or unavailable folder is selected, When I attempt to save, Then the builder flags the mapping as invalid and blocks the save until corrected.
Role-Based Assignments
Given workspace roles are available, When I assign a Responsible or Reviewer role to a step, Then the role appears on the step and in the outline tags. Given an approval gate is configured on a step, When I open the gate settings, Then I can assign an approver role from the available roles list. Given a step has an assigned role, When I duplicate the step or the containing milestone, Then the role assignment is retained in the duplicate. Given a role referenced by the preset is unavailable, When I attempt to save, Then the builder requires reassignment to a valid role before saving.
Checklists, Acceptance Criteria, and File Validation
Given a step, When I add checklist items, Then they appear with checkboxes and are stored with the step on save. Given a step has an approval gate, When required file types and counts are defined for that step, Then the builder validates each requirement against supported asset types and mapped folders before saving. Given acceptance criteria text is entered for a step, When I save the preset, Then the text is persisted and displayed in the preset preview. Given file requirements reference unmapped or unsupported asset types, When I attempt to save, Then the save is blocked and the unmet or invalid requirements are listed.
Approval Gates and Payout Triggers Linking
Given a step with deliverables, When I add an approval gate, Then I can link required deliverables and an approver role to that gate. Given a payout trigger is added, When I link it to an approval gate, Then the trigger shows its eligibility offset relative to the gate in the preview schedule. Given a payout trigger has no associated approval gate, When I attempt to save, Then the save is blocked with an instruction to link or remove the trigger. Given an approval gate is configured without required deliverables or approver role, When I attempt to save, Then the builder blocks saving and lists the missing fields.
Inline Preview and Ghosted Timeline
Given the preset editor is open, When I change a dependency or offset, Then a ghosted timeline immediately displays before/after date changes without persisting. Given multiple unsaved changes exist, When I click Cancel, Then the canvas and preview revert to the last saved state. Given multiple unsaved changes exist, When I click Save, Then the previewed changes are committed and the updated schedule is reflected in the inline preview. Given the Release anchor date is modified, When I view the inline preview, Then all step dates recompute according to defined dependencies and offsets.
Auto Schedule Generation from Target Date
"As an artist manager, I want schedules to auto-generate from a target release date so that every stakeholder knows what is due when without manual timeline math."
Description

When a preset is applied to a project, IndieVault converts relative offsets and dependencies into concrete dates based on a start date or target release date. The scheduler accounts for time zones, working days, weekends, and label holidays, and automatically recalculates when the target date shifts. Completed milestones remain locked while downstream dates reflow. The system generates release-ready folder structures and task assignments, ensuring every deliverable is due at the right time with minimal manual coordination.
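The weekend/holiday shifting rule can be sketched as follows. This is an illustrative simplification, assuming offsets are day counts relative to the target date and that the label calendar is a set of holiday dates; real scheduling would also apply the dependency and time-zone rules below:

```python
from datetime import date, timedelta

def next_working_day(d: date, holidays: set[date]) -> date:
    """Shift a date forward until it is neither a weekend nor a label holiday."""
    while d.weekday() >= 5 or d in holidays:   # 5 = Saturday, 6 = Sunday
        d += timedelta(days=1)
    return d

def build_schedule(target: date, offsets: dict[str, int],
                   holidays: set[date]) -> dict[str, date]:
    """Anchor each relative offset to the target release date, then move any
    milestone that lands on a non-working day to the next working day."""
    return {name: next_working_day(target + timedelta(days=off), holidays)
            for name, off in offsets.items()}

# Hypothetical label calendar with one holiday.
holidays = {date(2025, 12, 25)}
schedule = build_schedule(date(2026, 1, 9),
                          {"Mix due": -15, "Artwork due": -13, "Release": 0},
                          holidays)
```

Here "Mix due" would land on the Dec 25 holiday and "Artwork due" on a Saturday, so both are shifted forward while the release date itself (a Friday) stays put.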

Acceptance Criteria
Schedule Generation From Target Release Date
Given a project with a selected Milestone Preset containing relative offsets and dependencies And a target release date and project time zone are set And a business calendar with weekends and label holidays is configured When the user applies the preset to the project Then the system generates concrete due dates for all milestones by anchoring to the target release date And shifts any milestone that lands on a weekend or label holiday to the next working day And preserves all dependency constraints so no successor is dated before its prerequisite And displays the computed schedule in the project time zone And marks the schedule generation as completed without errors
Schedule Generation From Start Date Anchor
Given a project with a selected Milestone Preset containing relative offsets from a start date And a start date and project time zone are set When the user applies the preset Then the system generates concrete dates by adding each relative offset to the start date And shifts dates falling on weekends or label holidays to the next working day And preserves all dependency constraints And displays the computed schedule in the project time zone
Automatic Recalculation On Target Date Change
Given a project schedule has been generated from a target release date And one or more milestones are marked Completed When the user changes the target release date Then the system locks all Completed milestones and does not change their dates And recalculates all downstream, non-completed milestones based on the new target date and dependencies And shifts any affected milestones that land on weekends or label holidays to the next working day And preserves previously calculated dates for unaffected milestones And records a change log entry summarizing the date shifts
Time Zone and DST Correctness
Given a project time zone is set and collaborators are in various time zones When the schedule is generated or recalculated Then all milestone dates are stored in the project time zone And are rendered in each viewer’s local time zone without off-by-one day errors And dates remain on the same intended calendar day across Daylight Saving Time transitions in the project time zone
Business Calendar Observance (Weekends and Label Holidays)
Given a business calendar containing weekends and label holidays is active for the project When the schedule is generated or recalculated Then any milestone that falls on a non-working day is moved to the next working day And the UI indicates when a date was shifted due to a non-working day And dependency order is preserved after shifts (no successor before its prerequisite)
Release Folder Structure and Task Assignment Generation
Given a Milestone Preset defines deliverable folders and role-based task assignments When the schedule is generated Then the system creates the release-ready folder structure aligned to the milestone dates And assigns tasks to the mapped users or roles with due dates from the computed schedule And the operation is idempotent (re-running generation does not duplicate existing folders/tasks) And on recalculation, completed deliverables and completed tasks remain unchanged while pending items update to the new dates
Approval Gates and Payout Triggers
"As a label finance lead, I want payouts to trigger automatically when approvals are met so that artists and vendors are paid on time and in compliance with contracts."
Description

Milestones can include approval gates that require named reviewers or roles to approve before downstream work unlocks. Upon approval, configured payout triggers execute based on contract terms stored in IndieVault, supporting split percentages and role-based payouts. The gate surfaces watermarkable, expiring review links with per-recipient analytics to inform approval decisions, and records an auditable trail of approvals, rejections, and revisions. Failed gates pause dependent milestones and notify stakeholders until resolution.

Acceptance Criteria
Dependent Milestone Unlock on Approval
Given Milestone A has an approval gate and Milestone B is configured to depend on Milestone A When Milestone A is in Pending Approval Then Milestone B cannot be started or scheduled; start actions are disabled in UI and API attempts return HTTP 423 Locked And the lock state is visible on Milestone B with a reason referencing Gate ID of Milestone A Given all required approvals for Milestone A are submitted When the final approval is recorded Then Milestone B unlocks within 5 seconds, its lock state changes to Unlocked, and an unlock timestamp is recorded in the activity log And the dependency resolution event is captured in the audit trail with before/after states
Named and Role-Based Reviewer Quorum
Given a gate requires approvals from named reviewers Alice and Bob, and any 1 user with role Label When Alice approves and Bob approves and one user with role Label approves Then the gate status becomes Approved and the approval timestamp is recorded Given the gate requires 2 of 3 approvals across the role PR When two distinct users with role PR approve Then the gate counts the quorum as satisfied for the PR role requirement Given a named reviewer is reassigned from Bob to Carol before decision When Carol approves Then the gate recognizes the approval as satisfying the named reviewer requirement and logs the reassignment event And attempts by the same user to approve twice are prevented with HTTP 409 and a single approval is counted
Watermarkable, Expiring Review Links with Analytics
Given a gate has deliverables to review and reviewers Alice and PR-Team are invited When review links are generated Then each recipient receives a unique, unguessable URL token and assets are watermarked with recipient ID and Gate ID And the links honor a configured TTL (e.g., 7 days); accessing after expiry returns HTTP 410 Gone and blocks streaming/downloading And per-recipient analytics capture opens, play/stream counts, download attempts, IP/country, device, and timestamps And the gate UI displays per-recipient analytics and a summary (total opens, plays, downloads) updated within 60 seconds of events When an invite is revoked Then the corresponding link is immediately invalidated and further access attempts are denied and logged
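The per-recipient link lifecycle above (unguessable token, TTL, revocation) can be sketched as a small state check. This is a hedged illustration, not the product's implementation; the dict shape, field names, and status codes mirror the criteria above:

```python
import secrets
from datetime import datetime, timedelta, timezone

def issue_review_link(recipient_id: str, ttl_days: int = 7) -> dict:
    """Create a per-recipient review-link record with an unguessable token
    and an absolute expiry timestamp (TTL defaulting to 7 days)."""
    return {
        "recipient": recipient_id,
        "token": secrets.token_urlsafe(32),   # ~256 bits of entropy
        "expires_at": datetime.now(timezone.utc) + timedelta(days=ttl_days),
        "revoked": False,
    }

def check_access(link: dict, now: datetime) -> int:
    """Return an HTTP-style status: 200 OK, 410 Gone (expired), 403 (revoked)."""
    if link["revoked"]:
        return 403
    if now >= link["expires_at"]:
        return 410
    return 200
```

Watermarking and analytics capture would hang off the 200 path; revocation flips a flag so further access is denied and logged.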
Approval-Triggered Payout Calculation and Disbursement
Given contract terms in IndieVault define splits for Mix Approved: Artist 50%, Producer 30%, Mixer 20% with a payout base of $1,000 USD When the gate transitions to Approved Then payout instructions are generated for each payee with role, percentage, amount (Artist $500.00, Producer $300.00, Mixer $200.00), currency, due date, and contract reference ID And amounts are rounded to 2 decimal places using standard rounding and the total equals the payout base And the trigger is idempotent: re-approvals of the same gate revision do not create duplicate instructions; subsequent revisions create new instructions only if configured to do so And each instruction moves to Ready status and is visible in the Payouts queue and attached to the gate’s audit log with the calculation details
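The split calculation with cent rounding can be sketched as below. One detail the criteria imply but don't spell out is residue handling: after rounding each share to two decimals, any leftover cent must land somewhere so the total equals the payout base; assigning it to the largest share is one common convention (an assumption here, not a stated product rule):

```python
from decimal import Decimal, ROUND_HALF_UP

def split_payout(base: Decimal, splits: dict[str, Decimal]) -> dict[str, Decimal]:
    """Split `base` by percentage, rounding each share to cents with standard
    (half-up) rounding; any rounding residue is added to the largest share so
    the amounts always sum exactly to the base."""
    cents = Decimal("0.01")
    amounts = {payee: (base * pct / 100).quantize(cents, rounding=ROUND_HALF_UP)
               for payee, pct in splits.items()}
    residue = base - sum(amounts.values())
    if residue:
        top = max(amounts, key=amounts.get)
        amounts[top] += residue
    return amounts

# The Mix Approved example from the criteria above.
payouts = split_payout(Decimal("1000.00"),
                       {"Artist": Decimal("50"), "Producer": Decimal("30"),
                        "Mixer": Decimal("20")})
```

Idempotency (no duplicate instructions on re-approval) would be enforced one level up, keyed on gate revision ID.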
Rejection Handling: Pause, Notifications, and Revision Resubmission
Given a gate requires approval from Alice and one Label reviewer and Milestone B depends on this gate When a required reviewer rejects with a reason Then the gate status becomes Rejected, dependent milestones (e.g., Milestone B) are Paused, and their owners see the pause reason referencing the gate And stakeholders (milestone assignees, project owner, and required reviewers) receive in-app and email notifications within 1 minute including the reject reason and next steps When a new revision is submitted for the gate Then the gate returns to Pending Approval, new review links are issued, previous links are invalidated, and all stakeholders are notified of the resubmission And the pause on dependent milestones is lifted only upon subsequent gate approval
Comprehensive Audit Trail of Decisions and Revisions
Given approvals, rejections, reassignment, link issuance/revocation, and payouts occur on a gate When any such event happens Then an immutable audit entry is appended capturing actor identity and role, action type, target gate ID and milestone, deliverable IDs with content hashes, timestamps (UTC), revision ID, and any message/reason And audit entries are read-only, cannot be edited or deleted, and any administrative corrections are recorded as separate append-only events And the audit trail is filterable by project, milestone, gate, reviewer, action, and date range and exportable to CSV and JSON And each approval/rejection entry references the analytics snapshot (opens/plays/downloads) used at decision time
Notifications and Reminders
"As a mixing engineer, I want timely reminders and escalations for my assigned milestones so that I never miss an approval or delivery deadline."
Description

A rules-driven notification engine sends alerts and reminders tied to milestones, approvals, and payouts. Users can configure channels (email, Slack, in-app), reminder cadences, and escalation paths for overdue items. Calendar invites (ICS) reflect due dates, and quiet hours respect recipient time zones. Digest summaries provide weekly status across all active releases, reducing missed deadlines without overwhelming users.

Acceptance Criteria
User-configurable notification channels per event type
Given a release with upcoming approval and payout events and a user with notifications enabled And the user selects Email and In-App for Approvals, Slack only for Payouts, and disables Reminders When those events occur Then Approval notifications are delivered via Email and In-App and not via Slack And Payout notifications are delivered via Slack and not via Email or In-App And no Reminder notifications are sent And if Slack is disconnected at send time, Then Payout notifications fall back to Email And each delivered notification appears once in the in-app notifications center with a timestamp and a link to the event
Reminder cadence and escalation for overdue milestones
Given a milestone due on 2025-09-01 17:00 local and a default cadence of 7d, 3d, and 1d pre-due And an escalation path to the project manager 24h after the milestone becomes overdue When time reaches 7, 3, and 1 days before the due date/time Then reminders are sent on each cadence to assigned approvers via their configured channels When the milestone is overdue by 24 hours Then an escalation notification is sent to the project manager via Email and Slack And all future reminders stop once the milestone is approved or rescheduled And if the due date/time changes, future reminders are recalculated to the new schedule
ICS calendar invites mirror due dates and updates
Given a milestone with a due date/time and recipients with calendar invites enabled When the milestone is created Then an .ics invite is sent with SUMMARY including release and milestone name, DTSTART/DTEND matching the due date/time in the milestone’s time zone, and a stable UID When the due date/time or title is updated Then an updated .ics (METHOD:REQUEST, same UID, incremented SEQUENCE) is sent reflecting the changes When the milestone is canceled or removed Then a cancellation .ics (METHOD:CANCEL, same UID) is sent And recipients importing the .ics see a single calendar entry updated in place
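The stable-UID/incrementing-SEQUENCE mechanics above are what let calendar clients update one entry in place. A minimal sketch of the payload (the PRODID, time zone, and UID values are illustrative, and a production ICS would also need DTSTAMP and line folding per RFC 5545):

```python
def build_ics(uid: str, sequence: int, summary: str, dtstart: str, dtend: str,
              method: str = "REQUEST") -> str:
    """Render a minimal single-event iCalendar payload. Reusing the same UID
    with an incremented SEQUENCE signals an in-place update; METHOD:CANCEL
    with the same UID signals removal."""
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//IndieVault//Milestones//EN",
        f"METHOD:{method}",
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"SEQUENCE:{sequence}",
        f"SUMMARY:{summary}",
        f"DTSTART;TZID=America/New_York:{dtstart}",
        f"DTEND;TZID=America/New_York:{dtend}",
        "END:VEVENT",
        "END:VCALENDAR",
    ]
    return "\r\n".join(lines) + "\r\n"

invite = build_ics("milestone-42@indievault.example", 0,
                   "Midnight EP: Master delivery", "20250901T170000", "20250901T173000")
update = build_ics("milestone-42@indievault.example", 1,
                   "Midnight EP: Master delivery", "20250908T170000", "20250908T173000")
```

Because `update` reuses the UID with SEQUENCE 1, importing both leaves a single calendar entry at the new date.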
Quiet hours defer non-critical alerts by recipient time zone
Given a recipient with quiet hours 22:00–08:00 in their local time and non-critical alerts enabled When a non-critical notification would occur during quiet hours Then it is queued and delivered at 08:00 local time via configured channels When a critical notification (e.g., payout failure) occurs during quiet hours Then it bypasses quiet hours and is delivered immediately And the system determines local time using the recipient’s profile time zone, falling back to an inferred time zone from recent activity if unset
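The deferral rule above has one subtlety worth making explicit: a 22:00–08:00 window spans midnight, so the in-window test and the release time both need care. A minimal sketch (function name and signature are illustrative):

```python
from datetime import datetime, time, timedelta

def deliver_at(event_time: datetime, quiet_start: time, quiet_end: time,
               critical: bool) -> datetime:
    """Return when a notification should go out: critical alerts are
    delivered immediately; non-critical alerts inside the quiet window
    (which may span midnight) are deferred to the window's end."""
    if critical:
        return event_time
    t = event_time.time()
    if quiet_start <= quiet_end:                 # same-day window, e.g. 13:00-15:00
        in_quiet = quiet_start <= t < quiet_end
    else:                                        # spans midnight, e.g. 22:00-08:00
        in_quiet = t >= quiet_start or t < quiet_end
    if not in_quiet:
        return event_time
    release = event_time.replace(hour=quiet_end.hour, minute=quiet_end.minute,
                                 second=0, microsecond=0)
    if release <= event_time:                    # window ends tomorrow
        release += timedelta(days=1)
    return release
```

All times here are assumed to already be in the recipient's local time zone, per the fallback rule stated above.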
Weekly digest summarizes status across active releases
Given a user with digest summaries enabled across all active releases When it is Monday 09:00 in the user’s local time Then a single weekly digest is delivered summarizing counts of items due in the next 7 days, overdue items, pending approvals, and pending payouts, with the top 10 items listed and links to filtered views And no individual reminder notifications for listed items are sent within 30 minutes after the digest to avoid duplication And the user can opt out of digests without affecting real-time notifications
Rate limiting prevents alert floods and deduplicates events
Given multiple events of the same type for the same milestone occur within 5 minutes When notifications are generated Then they are coalesced into a single notification per channel with a concise summary And the system enforces a per-user limit of 20 non-critical notifications per hour per channel And once the limit is reached, additional notifications are deferred to the next hour or folded into a digest, with a log entry recorded
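The 5-minute coalescing window can be sketched as a small last-sent cache keyed by (milestone, event type, channel). This illustrates only the dedup decision; the per-user hourly cap and digest folding would sit alongside it (class and method names are illustrative):

```python
from datetime import datetime, timedelta

class Coalescer:
    """Collapse repeated events of the same (milestone, type) into one
    notification per channel per 5-minute window."""
    WINDOW = timedelta(minutes=5)

    def __init__(self) -> None:
        self._last_sent: dict[tuple[str, str, str], datetime] = {}

    def should_send(self, milestone: str, event_type: str, channel: str,
                    now: datetime) -> bool:
        key = (milestone, event_type, channel)
        last = self._last_sent.get(key)
        if last is not None and now - last < self.WINDOW:
            return False            # fold into the notification already sent
        self._last_sent[key] = now
        return True
```

Events suppressed here would still be appended to the coalesced notification's summary rather than silently dropped.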
Permissions and Sharing Across Workspaces
"As a label ops lead, I want to share standard presets across artists while allowing local tweaks so that we maintain consistency without blocking team autonomy."
Description

Role-based access controls govern who can create, edit, publish, and apply presets at the artist and label levels. Presets can be private, team-visible, or globally shared within a label, with cloning to adapt for specific artists without changing the source. Audit logs track who changed what and when. Shared presets ensure consistency across teams while allowing controlled customization per roster.

Acceptance Criteria
Enforce Role-Based Permissions for Preset Lifecycle
Given a user without Create Preset permission in the current workspace, When they attempt to create a preset, Then the request is rejected with 403 INSUFFICIENT_PERMISSION, no preset is created, and an audit entry is recorded with outcome "denied". Given a user with Edit Preset permission on a draft, When they save changes, Then the draft is updated successfully and remains a draft version. Given a user attempts to edit a published preset version, When they save changes, Then a new draft version is created and the published version remains unchanged. Given a user without Publish Preset permission at the preset's scope level, When they attempt to publish any draft, Then the request is rejected with 403 INSUFFICIENT_PERMISSION and an audit entry is recorded. Given a user without Apply Preset permission in the target workspace, When they attempt to apply a preset, Then the request is rejected with 403 INSUFFICIENT_PERMISSION and no schedule is created. Given a user with Apply Preset permission in the target workspace, When they apply a preset, Then the schedule is created successfully using allowed data and an audit entry is recorded with outcome "success".
Preset Visibility Scopes: Private, Team, Label-Global
Given a preset with scope "Private", When any user other than the creator or explicitly granted users attempts to discover it via UI search or API, Then the system returns 404 NOT_FOUND and the preset is not listed in search/browse results. Given a preset with scope "Team", When a member of the artist workspace with View Presets permission browses or searches, Then the preset is listed and retrievable; When a user outside the workspace queries, Then 404 NOT_FOUND is returned and the preset is excluded from indexes. Given a preset with scope "Label-Global", When any member of the label with View Presets permission browses or searches from any artist workspace under the label, Then the preset is listed; When a user outside the label queries, Then 404 NOT_FOUND is returned. Given a preset's scope is changed from Label-Global to Private, When a previously authorized but now unauthorized user attempts access, Then access is revoked within 60 seconds and returns 404 NOT_FOUND; audit entries reflect the scope change and subsequent denied access.
Apply Label-Global Presets Across Artist Workspaces
Given a label-global preset with at least one published version, When a permitted user applies it into Artist Workspace B within the same label, Then the system uses the latest published version, creates the schedule and deliverables, associates provenance (source preset ID and version), and records an audit entry. Given a label-global preset with no published version, When any user attempts to apply it, Then the action is blocked with 400 NO_PUBLISHED_VERSION and no schedule is created. Given a user outside the label, When they attempt to access or apply the label-global preset, Then the system returns 404 NOT_FOUND and records a denied audit entry. Given a user within the label without Apply Preset permission in the target workspace, When they attempt to apply the preset, Then the system returns 403 INSUFFICIENT_PERMISSION and no changes are made.
Clone Shared Preset Without Mutating Source
Given a label-global or team-visible preset, When a user with Clone Preset permission clones it into an artist workspace, Then a new preset with a new ID is created in the target workspace with scope defaulting to Private, retaining all tasks, approval gates, payout triggers, and due date offsets from the source. Then edits or publishes made to the clone do not change the source preset; edits or publishes made to the source after cloning do not change the clone. Then the clone stores provenance (source preset ID and version) and displays "Cloned from" metadata accessible via UI and API. Then an audit entry is recorded for both the source (action "cloned") and target (action "created_from_clone") presets with actor, timestamp, and IDs.
Audit Log for Preset Changes and Access
Given any action on a preset (create, edit, publish, scope change, clone, apply, permission change, failed permission attempt), When the action occurs, Then an immutable audit record is written containing timestamp (UTC ISO 8601), actor ID, actor role(s), action type, preset ID, version, workspace/label IDs, outcome (success/denied), and a redacted diff of changed fields. Given a user with View Audit Logs permission, When they filter by date range, actor, action, preset ID, or outcome via UI or API, Then the system returns matching records within 2 seconds for up to 10,000 records and supports CSV export. Given any attempt to alter or delete an audit record, When executed, Then it is blocked and logged; audit records are append-only with hash-chain integrity to detect tampering.
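The hash-chain integrity mentioned above works by having each record commit to its predecessor's hash, so any in-place edit invalidates every later record. A minimal sketch (field names follow the criteria; the serialization choice is an assumption):

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only log where each record carries a SHA-256 hash over its own
    fields plus the previous record's hash, making tampering detectable."""
    def __init__(self) -> None:
        self.records: list[dict] = []

    def append(self, actor: str, action: str, preset_id: str, outcome: str) -> dict:
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        body = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "actor": actor, "action": action,
            "preset_id": preset_id, "outcome": outcome,
            "prev": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.records.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash and check each record points at its predecessor."""
        prev = "0" * 64
        for rec in self.records:
            expected = dict(rec)
            stored_hash = expected.pop("hash")
            recomputed = hashlib.sha256(
                json.dumps(expected, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or recomputed != stored_hash:
                return False
            prev = stored_hash
        return True
```

Administrative corrections would be new appended records, never edits, so `verify()` stays true for an honest log.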
Publish Workflow and Versioning Constraints
Given a draft preset, When validation passes for required fields (name, scope, deliverables, approval gates, payout triggers), Then a user with Publish permission can publish it, creating a new immutable published version with incremented version number. Given validation fails, When attempting to publish, Then the operation fails with 422 VALIDATION_FAILED and field-level errors are returned. Given multiple published versions exist, When applying a preset, Then the latest published version is selected; drafts cannot be applied and attempts return 400 CANNOT_APPLY_DRAFT. Given a user attempts to change visibility scope of a preset, When they lack Manage Scope permission, Then the change is blocked with 403 INSUFFICIENT_PERMISSION; When authorized, Then the change is applied immediately and an audit entry is recorded.
Template Variables and Output Automation
"As a project coordinator, I want templates to auto-fill names and create standard folders and links so that I can set up a release in minutes with fewer mistakes."
Description

Presets support template variables (e.g., {artist}, {release_title}, {isrc}, {catalog_no}, {version}) to auto-name folders, file paths, and review link titles. Applying a preset pre-fills metadata fields, attaches checklists, tags assets, and can auto-create expiring review links at specific milestones. This reduces manual errors, accelerates setup, and ensures deliverables land in the correct, versioned locations within IndieVault.

Acceptance Criteria
Auto-name Folders and File Paths with Template Variables
Given a preset defines folder templates that include {artist}, {release_title}, {isrc}, {catalog_no}, and {version} When the preset is applied to a project with values for those variables Then all variables in folder and file path templates are replaced with the project values And unresolved tokens do not appear in any generated name And illegal path characters (\ / : * ? " < > |) are replaced with underscores And consecutive spaces are collapsed to a single space and names are trimmed And if a target path already exists, the lowest-level folder name gains a numeric suffix starting at " (2)" to avoid overwriting And the folder structure is created within 3 seconds of apply
Pre-fill Metadata Fields on Preset Application
Given a preset maps template variables to specific metadata fields (e.g., Artist, Release Title, ISRC, Catalog No, Version) When the preset is applied Then each mapped field is populated with its corresponding value And fields marked "locked" by the preset are read-only for non-admin users And unmapped fields remain unchanged And a single audit log entry is recorded listing the fields populated by the preset
Attach Checklists and Approval Gates per Milestone
Given a preset includes milestone checklists with task definitions and approvals When the preset is applied to a project Then each milestone receives its defined checklist with the exact task count and labels from the preset And all tasks are set to Not Started by default And milestone approval cannot be set to Approved until all tasks are Completed And payout triggers tied to the milestone remain blocked until approval criteria are met And task due dates are calculated from the milestone target date using the preset offsets
Auto-tag Assets Using Preset-Defined Tags
Given a preset specifies a set of tags for deliverables and asset types When the preset is applied Then the specified tags are created if they do not already exist And tags are assigned to the relevant assets or deliverable slots without duplication And tags are visible on asset detail and available as filters in search within 10 seconds And removing the preset disassociates its tags from assets without deleting the tag entities
Auto-create Expiring Review Links at Configured Milestones
Given a preset configures a milestone to auto-generate review links with a title pattern using {artist}, {release_title}, and {version}, recipients, watermark, and expiration When that milestone transitions to Ready for Review Then a review link is created for each configured recipient within 5 seconds And each link title reflects the substituted variable values with no unresolved tokens And the link expires exactly when the configured duration elapses and is inaccessible thereafter And per-recipient analytics begin recording views and downloads on first access And if an unexpired link already exists for that milestone and recipient, no duplicate is created
Missing Variable Resolution Gate Before Apply
Given a preset references variables that are not populated in the target project When the user clicks Apply Preset Then a Resolve Variables modal lists each missing variable with its display name and validation rule And variables marked Required must be filled with values that pass validation before Apply is enabled And Optional variables may be left blank and will be omitted from outputs without leaving unresolved tokens And a live preview shows the resulting folder paths and link titles updating as the user enters values And submitted values are saved to the corresponding project fields upon apply
Versioned Deliverables Route to Correct Locations
Given a preset defines output paths that include {version} When the project version value changes (new version created or incremented) Then new exports and uploads for configured deliverables are written to the folder path for the current version And assets from prior versions remain in their original versioned folders unchanged And if the version is edited retroactively, the user is prompted to relocate affected assets; choosing Move relocates them and updates internal references and breadcrumbs And review links generated after the change use the new {version} value while existing links remain unchanged

Smart Invoicing

Auto‑generate compliant invoices and receipts on approval with tax capture (W‑9/W‑8, VAT), multi‑currency display, and CSV/1099 exports. Attach ProofChain and Consent Ledger references for instant provenance. Sends docs to recipients and your accounting inbox, shortening payment cycles.

Requirements

Approval-triggered Auto Invoicing
"As an indie artist manager, I want invoices and receipts to generate automatically on approval so that I can bill immediately without manual data entry or delays."
Description

Automatically generates compliant invoices and receipts the moment a release, asset delivery, or work order is marked Approved in IndieVault. Pulls release metadata, client and payee details, line items, dates, and payment terms from the project to pre-fill documents with zero manual entry. Supports invoice and receipt generation in a single flow, attaches release/version identifiers, and stores documents alongside the release folder. Reduces turnaround time and errors by making billing a natural step of the approval workflow.

Acceptance Criteria
Approval Triggers Invoice Generation (Release, Asset, Work Order)
Given a project item (release, asset delivery, or work order) has no existing invoice for its current version And required billing configuration is present (client, payee, currency, tax region) When the item’s status is changed to Approved by an authorized user or rule Then exactly one invoice PDF is generated within 60 seconds And the invoice is linked to the approved item and project And duplicate approvals within 24 hours do not create additional invoices (idempotent by item-version) And a creation event is recorded with timestamp and actor
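The idempotent-by-item-version behavior above can be sketched with a lookup keyed on (item, version); this is a minimal illustrative sketch (the in-memory dict stands in for a database unique constraint, and `generate_invoice` is a hypothetical name):

```python
_issued: dict = {}  # (item_id, item_version) -> invoice_id

def generate_invoice(item_id: str, item_version: int) -> str:
    """Return the existing invoice for this item-version, or create one.

    A duplicate approval maps to the same key, so no second document is
    produced (idempotent by item-version).
    """
    key = (item_id, item_version)
    if key in _issued:
        return _issued[key]  # duplicate approval: reuse the existing invoice
    invoice_id = f"INV-{item_id}-v{item_version}"
    _issued[key] = invoice_id
    # ...render the PDF, link it to the approved item and project,
    # and record a creation event with timestamp and actor
    return invoice_id
```

In production the uniqueness check would live in a transactional store so concurrent approvals cannot race past it.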
Automatic Field Population from Project Metadata
Given client legal name and billing address, payee legal entity and remit-to details, tax identifiers (if applicable), line items with quantities and rates, payment terms, and approval date exist in project metadata When an invoice is generated by approval Then the invoice header shows client and payee legal details and addresses exactly as stored And line items, quantities, unit prices, subtotals, taxes, total due, and due date are populated from project metadata with zero manual entry And the document includes project name, item title, and item-version identifier And no required fields are blank; otherwise generation is blocked with a specific validation error
Invoice vs Receipt Logic in Single Flow
Rule: If payment status at approval is Not Paid, generate invoice only for full balance due
Rule: If payment status at approval is Partially Paid, generate an invoice for remaining balance and a receipt for the collected amount in the same transaction
Rule: If payment status at approval is Paid in Full, generate receipt only indicating zero balance due
Given documents are generated under any rule above Then the generated documents share a common document group ID and cross-reference numbers And receipts display payment method, payment date, and amount sourced from ledger entries
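The three rules above reduce to a small decision function; this sketch (hypothetical `documents_for` name, `Decimal` amounts) shows the mapping from payment status to the documents issued:

```python
from decimal import Decimal

def documents_for(total: Decimal, paid: Decimal) -> list:
    """Decide which billing documents to issue at approval time."""
    if paid == 0:
        return [("invoice", total)]            # Not Paid: full balance due
    if paid < total:
        return [("invoice", total - paid),     # Partially Paid: remainder
                ("receipt", paid)]             # ...plus receipt for collected
    return [("receipt", paid)]                 # Paid in Full: zero balance due
```

Both documents in the partially-paid case would then be stamped with the same document group ID per the criterion.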
Attach Release and Version Identifiers
Given an invoice or receipt is generated from an approved item When the document is rendered Then the Release ID and Version ID appear in the header and footer And the filename follows {Project}-{Item}-{Version}-[{Invoice|Receipt}]-{DocNumber}.pdf And internal metadata stores itemId, itemVersionId, and projectId for queryability
Store Documents Alongside Release Folder
Given a release folder exists for the approved item When invoice/receipt PDFs are generated Then they are saved under /Billing within the corresponding release folder And links to the PDFs appear in the project Billing tab and the release Activity log And access permissions inherit from the release folder with no privilege escalation And soft-deleting a document retains an audit record and does not break historical activity entries
Multi-Currency Display and Tax Compliance Capture
Given the project currency and tax region are set When generating an invoice or receipt Then amounts display in the project currency with ISO code and symbol And when a client display currency differs, show a secondary currency with FX rate and timestamp And include required tax elements based on region: US (W-9/W-8 status, EIN/SSN if required, state tax where applicable); EU/UK (VAT numbers for both parties and VAT breakdown per line where applicable) And if required tax data is missing or invalid, block generation and return a descriptive error without creating documents And log success/failure with a correlation ID for audit
Tax Capture & Compliance (W‑9/W‑8, VAT)
"As a payee or vendor, I want to provide my tax details once and have them applied to all invoices so that my documents are compliant without back-and-forth."
Description

Collects and validates tax profiles for domestic and international recipients (W‑9, W‑8BEN/W‑8BEN‑E, VAT/GST numbers) before invoice issuance. Enforces jurisdiction-specific fields, formats, and withholding rules, and stores validated IDs securely for reuse across releases. Flags missing or invalid tax data, blocks issuance when mandatory data is absent, and annotates invoices with required tax disclosures. Ensures downstream exports include tax identifiers and country codes for compliance and year-end reporting.

Acceptance Criteria
US Recipient W-9 Capture, Validation, and Withholding Enforcement
Given a recipient with country = US When they are added as a payee for an invoice Then the system requires a completed and signed W-9 before invoice issuance
Given a submitted W-9 When the TIN/EIN and legal name are provided Then the system validates format and returns a 'valid' result from the configured TIN-matching service or marks as 'invalid' and blocks issuance
Given a W-9 indicating backup withholding or a missing TIN When an invoice is generated Then the backup withholding rate is applied and the invoice is annotated with the appropriate withholding disclosure
Given required W-9 fields are missing or the form is unsigned When the user attempts to issue an invoice Then issuance is blocked and field-level errors identify each missing/invalid item
Given a valid W-9 is on file When issuing an invoice Then the invoice record is associated with form type=W-9, last_validated_at, and country_code=US
Foreign Individual W-8BEN Capture and Treaty/Withholding Rules
Given a recipient with country != US and type = Individual When creating an invoice Then the system requires a completed and signed W-8BEN before issuance
Given a submitted W-8BEN When validating Then required fields (name, country, foreign TIN or valid reason, signature, dated within 3 years) are present or issuance is blocked with field-level errors
Given no treaty claim or an incomplete treaty claim When calculating withholding Then the default statutory withholding rate is applied
Given a valid treaty claim with required treaty article and residency assertion When calculating withholding Then the configured treaty rate is applied and the basis is recorded And the invoice is annotated with non-US status, form type=W-8BEN, and withholding basis
Foreign Entity W-8BEN-E and VAT/GST Number Validation
Given a recipient with country != US and type = Entity When setting the tax profile Then the system requires a completed and signed W-8BEN-E before invoice issuance
Given a provided VAT/GST/ABN number When validating Then the number format is validated and verified against the appropriate registry (e.g., EU VIES) and the validation result and timestamp are stored
Given the VAT/GST number fails validation and the business requires a valid number for B2B treatment When issuing an invoice Then issuance is blocked or the recipient is treated as B2C per configuration and the invoice is annotated accordingly
Given reverse charge applies and the VAT ID is validated When generating the invoice Then the invoice includes the required reverse charge wording and no VAT is charged
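A cheap format pre-check can precede the registry lookup described above; this sketch uses deliberately simplified patterns for a few jurisdictions (illustrative only — real VAT formats have more edge cases, and authoritative verification still goes to the registry, e.g. EU VIES):

```python
import re

# Simplified, illustrative format patterns -- not exhaustive.
VAT_FORMATS = {
    "DE": r"DE\d{9}",
    "FR": r"FR[A-Z0-9]{2}\d{9}",
    "GB": r"GB\d{9}(\d{3})?",
}

def vat_format_ok(country: str, vat_id: str) -> bool:
    """Return True if the VAT ID matches the jurisdiction's basic shape."""
    pattern = VAT_FORMATS.get(country)
    if pattern is None:
        return False  # unknown jurisdiction: route to manual review
    return re.fullmatch(pattern, vat_id.replace(" ", "").upper()) is not None
```

Passing this check would trigger the registry call; failing it blocks issuance immediately with a field-level error, avoiding a wasted remote lookup.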
Blocking Invoice Issuance on Missing or Invalid Tax Data
Given any recipient missing mandatory tax form/data per jurisdiction and entity type When the user clicks 'Issue Invoice' Then issuance is prevented and specific missing/invalid items are listed with links to resolve
Given a tax form is expired (e.g., W-8 older than 3 years) or revoked When attempting issuance Then issuance is blocked and the user is prompted to refresh the form
Given external validation services are unavailable When attempting issuance Then the system uses last-known valid status within a configurable grace period or blocks issuance and surfaces a service outage message
Secure Storage, Access Controls, and Masking for Tax Identifiers
Given any stored tax identifier or form Then data is encrypted at rest and in transit per platform standards
Given a user without Tax PII permissions When viewing a recipient profile Then tax identifiers are masked except last 4 digits and form downloads are restricted
Given any access, update, or download of tax identifiers/forms When the action occurs Then an immutable audit log entry is created with user, timestamp, action, and recipient reference
Given configured data retention policies When retention expires or a deletion request is processed Then identifiers are purged/redacted while preserving required reporting aggregates
Downstream Exports Include Tax Identifiers and Country Codes
Given a CSV/1099 export is generated When the file is produced Then it contains columns: legal_name, country_code (ISO 3166-1 alpha-2), tax_form_type, tax_identifier (masked unless required for filings), validation_status, validation_date, withholding_rate, tax_year_amounts
Given a 1099 year-end export for US recipients When the export runs Then only US-taxable payees with reportable amounts are included with unmasked TIN where required and totals reconcile to the payments ledger within 0.1%
Given international recipients in a payments export When the export runs Then their VAT/GST identifiers and jurisdiction codes are included or the fields are marked 'missing'/'invalid' as applicable
Re-use and Expiry Handling of Validated Tax Profiles Across Releases
Given a recipient with a validated, non-expired tax profile When added to a new release or invoice Then the existing profile auto-associates without re-entry and shows last_validated_at
Given a tax profile within 30 days of expiry When creating invoices Then the system prompts for refresh and allows issuance only if a grace policy is enabled; otherwise blocks after the expiry date
Given a recipient’s country or entity type changes When saving the update Then the prior form is invalidated and the correct new form is required before further invoice issuance
Multi‑Currency Display & FX Lock‑In
"As a label working with international collaborators, I want invoices to display dual currencies with a locked rate so that accounting can reconcile precisely and pay the correct amount."
Description

Supports quoting and displaying invoice amounts in both the payer’s currency and the payee’s preferred settlement currency. Locks the FX rate at time of approval using a configured daily source, stores the rate and timestamp on the document, and shows dual currency totals and tax amounts. Prevents accidental currency mismatches and provides clear, auditable conversion context to shorten payment and reconciliation.

Acceptance Criteria
Dual-Currency Display on Invoice and PDF
Given a draft invoice with payer_currency and payee_settlement_currency set And line items and any applicable taxes are present When the invoice is viewed in-app or rendered to PDF/HTML Then each line item subtotal, tax line, subtotal, and grand total are displayed in both currencies with ISO 4217 codes and symbols And a summary section displays the FX context (rate, currency pair, timestamp, source) And numeric formatting respects each currency's minor units and locale-independent separators And before approval the rate is labeled Indicative and may refresh; after approval it is labeled Locked and does not change
FX Rate Lock at Approval with Stored Metadata
Given a draft invoice with payer_currency and payee_settlement_currency defined When the approver approves the invoice Then the system fetches the daily FX rate for payer->payee from the configured source at approval time And stores on the invoice: fx_rate, currency_pair, fx_timestamp_utc, fx_source_id, fx_source_name, fx_fallback_used=false And computes and persists all payee-currency amounts (per line, taxes, subtotal, total) using fx_rate And marks the FX status as Locked And all subsequent views and exports use the stored values without recalculation
Immutable Currency and Amounts Post-Approval
Given an approved invoice with a locked FX rate When a user attempts to edit payer_currency, payee_settlement_currency, or fx_rate Then the action is blocked and the user is prompted to revert to Draft to make changes
When the invoice is reverted to Draft Then the previously locked FX metadata is archived in history and removed from the active version And on re-approval a new FX rate is fetched and locked, and payee-currency amounts are recomputed And any attempt to change currencies while Approved does not mutate stored amounts
Non-Trading Day and Source Fallback Handling
Given approval occurs on a weekend/holiday or before the daily rate is published When the system requests the FX rate Then it uses the most recent prior business day closing rate And sets fx_fallback_used=true and records fx_effective_date for the rate used And writes an audit log entry noting previous_day_rate fallback
Given the configured FX source is unreachable at approval time When Approve is clicked Then the approval is aborted, the invoice remains in Draft, and the user sees an actionable error with retry guidance And an operational alert is recorded for observability
Dual-Currency Totals and Tax Calculations
Given tax rules apply to the invoice When amounts are calculated at approval time Then tax and totals are computed in payer currency first, then converted to payee currency using the locked fx_rate And both currencies are rounded to their ISO minor units using half-up rounding And payee-currency totals equal the sum of converted line items plus converted tax within one minor unit, with any remainder applied to the invoice total And zero-tax invoices display 0 in both currencies with correct minor units
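The conversion-and-rounding rules above can be sketched with Python's `decimal` half-up rounding (function names are hypothetical; a real implementation would also look up per-currency minor units from ISO 4217 rather than defaulting to 2):

```python
from decimal import Decimal, ROUND_HALF_UP

def convert(amount: Decimal, rate: Decimal, minor_units: int = 2) -> Decimal:
    """Convert with the locked rate and round half-up to the minor unit."""
    quantum = Decimal(1).scaleb(-minor_units)  # e.g. 0.01 for 2 minor units
    return (amount * rate).quantize(quantum, rounding=ROUND_HALF_UP)

def payee_totals(lines_payer, tax_payer, rate):
    """Compute in payer currency first, then convert each component.

    The converted grand total is derived from the payer-currency total, so
    any sub-minor-unit remainder between it and the summed components lives
    in the total line (per the criterion above); components stay untouched.
    """
    lines_payee = [convert(a, rate) for a in lines_payer]
    tax_payee = convert(tax_payer, rate)
    total_payer = sum(lines_payer) + tax_payer
    total_payee = convert(total_payer, rate)
    return lines_payee, tax_payee, total_payee
```

Because each component is rounded independently, the summed components and the converted total can legitimately differ by one minor unit, which is exactly the tolerance the criterion allows.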
Exports Include FX Context and Dual Amounts
Given an approved invoice with locked FX When the user exports invoices to CSV Then each row contains at minimum: invoice_id, payer_currency_code, payee_currency_code, currency_pair, fx_rate, fx_source_name, fx_timestamp_utc, fx_fallback_used, subtotal_payer, tax_payer, total_payer, subtotal_payee, tax_payee, total_payee And the CSV values exactly match the on-screen/PDF values And exported timestamps are UTC ISO-8601 And aggregated report totals sum correctly per currency without mismatch
Compliant Templates & Sequential Numbering
"As a self-funded artist, I want professionally formatted, legally complete invoices with proper numbering so that clients accept them without revisions."
Description

Provides jurisdiction-aware PDF/HTML templates with required fields (legal entity, registered address, tax IDs, line items, payment terms, remit-to) and automatic sequential numbering by legal entity and currency. Supports customizable prefixes, per-entity counters, and credit note numbering rules. Ensures every invoice/receipt is consistently formatted, legally complete, and searchable within IndieVault’s release folders.

Acceptance Criteria
US Invoice Template Compliance (PDF/HTML)
Given a US legal entity with legal name, registered address, and EIN configured And a customer with legal name and billing address When a user finalizes an invoice in USD Then the PDF and HTML outputs include: Invoice label; invoice number; issue date; due date; seller legal name; seller registered address; seller EIN (labeled "EIN"); customer legal name; customer billing address; itemized line items (description, quantity, unit price, line subtotal); subtotal; tax total; grand total; currency (USD); payment terms; remit-to details And the outputs are visually consistent (matching header/footer, column order, and totals) And required fields are validated at finalization; missing fields block finalization with field-level error messages And all money amounts are formatted with 2 decimal places and include the USD currency code or symbol consistently
EU VAT Invoice Template Compliance
Given an EU legal entity with registered address and VAT ID configured And a customer with country and optional VAT ID When a user finalizes an invoice in EUR Then the PDF and HTML display seller VAT ID and, if provided, buyer VAT ID And if line items include VAT, the template displays a VAT rate column and totals VAT by rate And if tax amount is zero and buyer VAT ID is present for a cross-border EU sale, the document includes a "Reverse charge" note And required fields (seller name, address, invoice number, dates, line items, totals, payment terms, remit-to) are present
Sequential Numbering per Legal Entity and Currency
Given an invoice is finalized for entity E in currency C Then it is assigned the next integer in the series keyed by (E, C), formatted as <prefix><counter> And draft or preview states do not reserve or increment the counter And concurrent finalizations for the same (E, C) produce unique, gapless numbers with no duplicates And voided/canceled documents retain their numbers and are not reused And the assigned number is immutable after finalization and appears identically in PDF, HTML, UI, and exports
Configurable Prefixes and Per-Entity Counters
Given an admin defines a prefix pattern for entity E and currency C (e.g., {ENTITY_CODE}-{CURRENCY}-{YYYY}-) When a new document is finalized for (E, C) Then its number begins with the evaluated prefix followed by a zero-padded counter (minimum 4 digits) And changing the prefix affects only subsequent documents; previously issued numbers remain unchanged And the system prevents saving duplicate prefix patterns that would collide within the same (E, C) And prefix length is limited to 32 characters post-evaluation; invalid tokens are rejected with a validation error And per-entity counters are independent across currencies
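The prefix-evaluation and per-(entity, currency) counter behavior above can be sketched as follows (illustrative only: the in-memory dict stands in for a transactional counter, which is what would actually guarantee gapless, duplicate-free numbers under concurrent finalizations):

```python
from datetime import date

_counters: dict = {}  # (entity_code, currency) -> last issued integer

def next_number(entity_code: str, currency: str,
                pattern: str = "{ENTITY_CODE}-{CURRENCY}-{YYYY}-",
                pad: int = 4) -> str:
    """Evaluate the prefix pattern, then append the next zero-padded counter."""
    prefix = (pattern
              .replace("{ENTITY_CODE}", entity_code)
              .replace("{CURRENCY}", currency)
              .replace("{YYYY}", str(date.today().year)))
    if len(prefix) > 32:
        raise ValueError("prefix exceeds 32 characters after evaluation")
    key = (entity_code, currency)
    _counters[key] = _counters.get(key, 0) + 1   # independent per (E, C)
    return f"{prefix}{_counters[key]:0{pad}d}"
```

Drafts would simply never call this, so previews reserve nothing; the number is assigned once, at finalization, and stored immutably.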
Credit Note Numbering and Linkage
Given a finalized invoice I for entity E and currency C When a user issues a credit note against I Then the credit note is labeled "Credit Note" and assigned the next number in a distinct (E, C) credit-note series (e.g., prefix contains "CN") And the credit note references the original invoice number I in the header and metadata And the system prevents issuance of a credit note without linking to an existing invoice And amounts on the credit note are shown as negative or presented as credits, and totals reflect the credited amount And the credit note number sequence is independent from the invoice sequence and per (E, C)
Searchability within Release Folders
Given a finalized invoice, receipt, or credit note linked to a release folder R When the document is issued Then it is indexed within 60 seconds and becomes searchable within R And searching by document number, legal entity name, customer name, currency, or date range returns the document And search results return within 500 ms for up to 10,000 documents And opening a search result navigates to the document viewer with the document number visible And filters for document type (invoice, receipt, credit note) and status (finalized, voided) function correctly
Receipt Template Completeness
Given a payment is recorded for a finalized invoice for entity E and currency C When a receipt is generated Then the PDF and HTML display: Receipt label; receipt number; issue date; payment date; seller legal name; seller registered address; customer legal name; customer billing address; original invoice number; amount paid; currency; payment terms; remit-to details And receipt numbering follows a distinct per-entity, per-currency series separate from invoices and credit notes And required fields are validated; missing fields block generation with field-level error messages And the PDF and HTML outputs are visually consistent (matching header/footer, column order, and totals)
Secure Delivery & Accounting Inbox Routing
"As an artist manager, I want invoices automatically sent to the client and our accounting inbox so that payments start immediately and records stay organized."
Description

Delivers invoices and receipts to designated recipient emails and a configurable accounting inbox with DKIM/SPF-compliant sending, link-expiring PDF access, and optional password protection. Includes delivery status tracking, automatic retries, bounce handling, and a one-click resend from the release page. Minimizes lost invoices and accelerates payment by ensuring documents land where finance teams expect them.

Acceptance Criteria
Dual Delivery to Recipients and Accounting Inbox
Given an approved invoice or receipt with at least one recipient email and a configured accounting inbox When the user triggers send (manually or via auto-send on approval) Then separate emails are dispatched to each designated recipient and to the accounting inbox within 60 seconds of trigger And each recipient receives a unique secure link to the document And a send event is recorded per recipient and for the accounting inbox with timestamp and provider message ID
Given no accounting inbox is configured When sending is triggered Then emails are sent to recipients only And a non-blocking warning "Accounting inbox not configured" is displayed and logged in the activity timeline
Given any recipient email fails format validation When attempting to send Then sending is blocked for that address with a validation error and no delivery attempt is made
Authenticated Sending with DKIM/SPF Compliance
Given the platform or tenant has a verified sending domain When an invoice/receipt email is delivered Then Authentication-Results for the message shows spf=pass and dkim=pass aligned to the sending domain And the delivery log marks the message as Authenticated = Yes
Given a tenant's custom sending domain fails DNS verification When sending is triggered Then the system falls back to the platform default verified domain And logs a "Domain misconfigured" alert in tenant settings and notes fallback in the send event
Given the provider reports an authentication failure (e.g., permerror) When attempting to send Then the message is marked Dropped with reason = Authentication failed And no retries are attempted for that message
Expiring, Password-Optional PDF Access
Given link expiry is enabled with a configured TTL (default 14 days) When a recipient opens the document link before expiry Then the PDF is accessible via a signed URL and download is permitted And the access is logged with recipient, timestamp, IP, and user agent
Given the link is accessed after expiry When the URL is requested Then the system returns 410 Gone and displays a "Link expired" page with a Resend CTA (if permitted)
Given password protection is enabled for document access When the recipient opens the link Then the system prompts for the configured password and grants access only on correct entry And after 5 consecutive incorrect attempts the link is temporarily locked for 15 minutes and an alert is logged
Given a resend is performed When new links are generated Then previous links for those recipients are invalidated if the "Invalidate previous links on resend" setting is enabled
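One common way to implement signed, expiring URLs is an HMAC over the document ID and expiry timestamp; this is a minimal sketch under that assumption (the secret, route shape, and function names are all hypothetical, not IndieVault's actual scheme):

```python
import hashlib, hmac, time

SECRET = b"demo-secret"  # assumption: per-tenant signing key in production

def make_link(doc_id: str, ttl_seconds: int = 14 * 86400, now=None) -> str:
    """Sign doc id + expiry so the URL is self-validating and expiring."""
    expires = int(now if now is not None else time.time()) + ttl_seconds
    payload = f"{doc_id}:{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"/docs/{doc_id}?exp={expires}&sig={sig}"

def check_link(doc_id: str, expires: int, sig: str, now=None) -> int:
    """Return an HTTP-style status: 200 ok, 410 expired, 403 bad signature."""
    payload = f"{doc_id}:{expires}"
    good = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(good, sig):
        return 403
    if (now if now is not None else time.time()) > expires:
        return 410  # Gone, matching the criterion above
    return 200
```

Because the expiry is inside the signed payload, tampering with `exp` invalidates the signature; invalidating old links on resend would rotate the key or track a per-link revocation list.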
Delivery Status Tracking Timeline
Given documents have been sent to one or more recipients When viewing the Delivery panel on the release page Then each recipient row displays real-time status: Queued, Sent, Delivered, Opened, Bounced, Dropped, Complaint, or Failed And each message shows a timeline with timestamps for queued, sent, delivered, opened events and any error reasons/codes And messages can be filtered by status and exported to CSV including recipient, status, timestamps, reason, and message ID
Given provider webhooks are received When a status update event arrives Then the corresponding recipient message is updated within 60 seconds and the timeline is appended without duplicating prior events
Automatic Retries and Bounce Handling
Given a transient failure (e.g., 4xx soft bounce, mailbox full, connection timeout) When sending fails Then the system retries with exponential backoff (1m, 5m, 30m, 2h, 8h) up to 24 hours total And the current attempt count and next retry time are shown in the delivery timeline
Given a hard bounce or permanent failure (e.g., 5xx user unknown, blocked) When sending fails Then retries stop immediately and status is set to Bounced with provider reason code and description And the address is flagged to prevent auto-resend until updated
Given a previously bounced address is corrected in the contact record When sending is retried Then the bounce flag is cleared and normal delivery attempts resume
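The retry policy above reduces to a fixed schedule plus a hard-bounce short-circuit; a minimal sketch (hypothetical function name; treating any 5xx code as permanent is a simplifying assumption — real providers report richer bounce classifications):

```python
# Retry schedule from the criterion above: 1m, 5m, 30m, 2h, 8h.
BACKOFF_SECONDS = [60, 300, 1800, 7200, 28800]

def next_retry_delay(attempt: int, smtp_code: int):
    """Delay in seconds before retry number `attempt` (0-based), or None to stop.

    Permanent failures (5xx hard bounces) stop retries immediately; transient
    4xx failures walk the schedule until it is exhausted within the 24-hour cap.
    """
    if smtp_code >= 500:
        return None            # hard bounce: mark Bounced, flag the address
    if attempt >= len(BACKOFF_SECONDS):
        return None            # schedule exhausted
    return BACKOFF_SECONDS[attempt]
```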
One-Click Resend from Release Page
Given prior sends include recipients with non-success statuses (Bounced, Dropped, Failed, Expired, Not Delivered) When the user clicks Resend on the release page and confirms the default selection Then the system queues resends to the selected recipients within 30 seconds and generates new expiring links And duplicate resends to the same recipient are prevented within a 10-minute window
Given the user selects "Resend to all recipients" When confirmed Then all original recipients and the accounting inbox receive new emails And the delivery timeline records a new send event per recipient with new message IDs
Given resends are queued When the emails are dispatched Then previous links are invalidated if the setting is enabled and statuses update to Sent/Delivered as events arrive
CSV & 1099-ready Exports
"As a finance admin, I want to export standardized CSVs and 1099 data so that I can complete filings and reconcile books quickly."
Description

Generates time-bounded CSV exports and 1099-ready files containing invoice totals, tax identifiers, withholdings, currencies, FX rates, and recipient details. Supports filters by date range, entity, project, and recipient, and includes stable column names for ingestion by accounting tools. Speeds year-end reporting and cross-system reconciliation without manual spreadsheet work.

Acceptance Criteria
Filtered, Time-Bounded CSV Export Generation
Given an authenticated user with Finance role on payer entity E And filters: date_range (start,end), entity E, optional project P, optional recipient R When the user requests "CSV Export" Then the export contains only invoices with payment_date within [start,end] and matching E, P (if provided), and R (if provided) And the CSV includes a header row and at least one data row when matches exist; otherwise a header-only file And generation completes in <= 10 seconds for up to 5,000 invoices; otherwise an async job is started and a notification is shown And the file is UTF-8, RFC 4180 compliant, comma-delimited, quoted as needed, with LF line endings And the filename follows: indievault_export_{entitySlug}_{YYYYMMDD}_{HHmmss}_{rangeStart}_{rangeEnd}.csv
1099-Ready Year-End Export (NEC/MISC Aggregation)
Given a US payer entity with completed payments and a selected tax_year When the user requests "1099-Ready Export" Then a CSV is generated with one row per recipient per 1099 form type needed for that year And amounts are aggregated by payment_date within the calendar year in the payer entity's timezone And the file includes columns exactly: tax_year,payer_name,payer_tin,payer_tin_type,payer_address1,payer_address2,payer_city,payer_state,payer_postal,recipient_name,recipient_tin,recipient_tin_type,recipient_address1,recipient_address2,recipient_city,recipient_state,recipient_postal,recipient_country,form_type,nec_box1_amount,misc_box3_amount,backup_withholding_amount,fatca_flag,account_number And recipients classified as non-reportable (e.g., corporations via W-9) are excluded unless backup_withholding_amount > 0 And all amounts are rounded half-up to 2 decimals and non-negative And the CSV passes import validation on at least one supported e-file provider template (e.g., Track1099 or Tax1099)
Multi-Currency Amounts with FX Details
Given invoices in various currencies and a defined home_currency for the payer entity When the user exports CSV or 1099-ready data Then each row includes both invoice_currency and home_currency amounts: amount_subtotal,amount_tax,amount_withholding,amount_gross,amount_net in both currencies And columns include: invoice_currency,home_currency,fx_rate,fx_source,fx_timestamp And home_currency amounts are computed as invoice_amount * fx_rate with rounding half-up to 2 decimals And fx_source is one of [ECB,OANDA,Manual] and fx_timestamp is ISO 8601 with timezone And the fx_rate used equals the stored rate on payment_date; variance from recomputation is <= 0.01 in home currency
Recipient Tax Identity and Withholding Completeness
Given invoices with recipients having stored W-9/W-8 data When the user exports CSV Then each row includes: recipient_id,recipient_name,recipient_email,recipient_country,recipient_tax_id,recipient_tax_id_type,recipient_tax_form_type,withholding_rate,withholding_amount And rows with missing required US tax info (US-sourced, reportable) set missing_tax_info=true; otherwise false And withholding_amount equals withholding_rate% of gross where applicable, rounded half-up to 2 decimals And recipient_tax_id is masked in CSV by default (last 4 shown) unless the user checks "Include full TIN", in which case full value is included and the export is marked contains_pii=true
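The masking rule above (last 4 visible unless full TINs are explicitly requested) can be sketched as follows (hypothetical `mask_tin` name; the hyphen-stripping is an assumption about stored TIN formats):

```python
def mask_tin(tin: str, include_full: bool = False) -> str:
    """Mask a tax identifier to its last 4 characters.

    When the exporter opts into full TINs, the value passes through
    unchanged and the export should be flagged contains_pii=true.
    """
    if include_full:
        return tin
    digits = tin.replace("-", "")
    return "*" * (len(digits) - 4) + digits[-4:] if len(digits) > 4 else digits
```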
Stable Schema: Column Names, Order, and Data Formats
Given the CSV export specification version v1 When any CSV export is generated Then the header row matches exactly and in order: export_id,exported_at_utc,schema_version,payer_entity_id,payer_entity_name,entity_tax_jurisdiction,invoice_id,invoice_number,invoice_date,payment_date,project_id,project_name,recipient_id,recipient_name,recipient_email,recipient_country,recipient_tax_id,recipient_tax_id_type,recipient_tax_form_type,missing_tax_info,currency,home_currency,fx_rate,fx_source,fx_timestamp,subtotal,tax_amount,withholding_rate,withholding_amount,total_gross,total_net,payment_method,invoice_status And nulls are emitted as empty strings, booleans as true/false lowercase, dates as YYYY-MM-DD, datetimes as ISO 8601 UTC, and currency codes as ISO 4217 And any schema change increments schema_version and preserves backward-compatible aliases; previous headers remain available behind a version toggle And automated schema tests validate header equality and types on each build
Access Control, Audit Log, and Expiring Delivery Links
Given role-based access control When a user without Finance Admin or Owner role attempts an export Then the action is denied with HTTP 403 and no file is generated When a permitted user completes an export Then an audit log entry is created with user_id,entity_id,filter_params,export_type,row_count,schema_version,started_at,completed_at,checksum And the file is retained for at most 72 hours with a signed URL that expires after 72 hours or upon revocation And the user can optionally email the file to the configured accounting inbox; delivery success/failure is recorded
Large Export Handling and Performance SLAs
Given an export yielding up to 5,000 rows When the export is requested Then it completes synchronously in <= 10 seconds Given an export yielding 5,001 to 100,000 rows When the export is requested Then an async job is created and a status page shows progress; the job completes in <= 15 minutes and emails a download link on completion Given an export exceeding 100,000 rows When the export is requested Then results are chunked into multiple files of <= 100,000 rows each and provided as a zip; each file preserves the header row And the system computes and exposes SHA-256 checksums for each file and the zip bundle
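The chunking and checksum rules can be sketched as follows (illustrative only; the 100,000-row limit, repeated header, and SHA-256 requirement are from the criterion, the function name is not):

```python
import hashlib

MAX_ROWS_PER_FILE = 100_000

def chunk_export(rows: list[str], header: str,
                 limit: int = MAX_ROWS_PER_FILE) -> list[tuple[str, str]]:
    """Split export rows into files of <= limit rows each, repeating the
    header row, and pair each file's content with its SHA-256 checksum."""
    files = []
    for i in range(0, len(rows), limit):
        content = "\n".join([header] + rows[i:i + limit])
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        files.append((content, digest))
    return files
```

The zip bundle's checksum would be computed the same way over the archive bytes.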
ProofChain & Consent Ledger References
"As a rights manager, I want invoices to reference ProofChain and Consent Ledger entries so that I can prove provenance and avoid disputes."
Description

Embeds immutable ProofChain transaction hashes and Consent Ledger entry IDs into invoices and receipts as metadata and as human-readable/QR references on the PDF. Establishes verifiable provenance from approvals and consent events to billing artifacts, enabling auditors and partners to confirm authenticity without accessing IndieVault. Strengthens trust and reduces disputes over rights and scope.

Acceptance Criteria
Embed provenance metadata into invoice/receipt PDFs
Given an invoice or receipt is approved for generation And the ProofChain transaction hash and Consent Ledger entry ID exist and are immutable When the PDF is created Then the PDF XMP metadata MUST include key "proofchain_tx_hash" with a 64-character lowercase hex value exactly matching the stored transaction hash And the PDF XMP metadata MUST include key "consent_ledger_entry_id" with a canonical UUID value exactly matching the stored consent entry ID And both metadata values MUST be non-empty and identical to the values rendered on the PDF
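The two value formats named above (64-character lowercase hex hash, canonical UUID) can be validated before embedding; a minimal sketch with a hypothetical helper name:

```python
import re
import uuid

TX_HASH_RE = re.compile(r"[0-9a-f]{64}")  # 64-char lowercase hex

def valid_provenance(tx_hash: str, consent_entry_id: str) -> bool:
    """Check both provenance values before writing them to XMP metadata."""
    if not TX_HASH_RE.fullmatch(tx_hash):
        return False
    try:
        # canonical UUID: parsing and re-serializing must round-trip exactly
        return str(uuid.UUID(consent_entry_id)) == consent_entry_id
    except ValueError:
        return False
```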
Display human-readable and QR provenance on PDFs
Given a PDF invoice or receipt is generated When viewing page 1 Then a "Provenance" section MUST display "ProofChain Tx" and "Consent Ledger ID" with their exact values And a QR code MUST be present encoding the canonical verification URL that includes both identifiers And the QR code MUST be decodable by the ZXing reference decoder and native iOS and Android camera scanners from a 300 DPI print And the displayed values MUST match the PDF metadata
Public verification without IndieVault access
Given only the ProofChain transaction hash and Consent Ledger entry ID from the document are available When a user enters them at the public verification endpoint or scans the QR Then the endpoint MUST respond 200 with status "verified" when both records exist and are linked to the document And MUST respond 404 with status "not_found" when either record does not exist And the response MUST omit PII and require no authentication
Block document generation if provenance unavailable
Given an invoice or receipt is requested And either the ProofChain transaction hash or Consent Ledger entry ID is missing or unconfirmed When document generation is attempted Then generation MUST fail with error code "provenance_unavailable" and a human-readable message And a PDF MUST NOT be stored or sent And the system MUST automatically retry generation up to 3 times over 2 minutes and succeed once both records are available
Consistency between stored and sent documents
Given a document is generated and dispatched to recipients and the accounting inbox When comparing the stored PDF and each sent attachment Then the visible provenance values and QR payload MUST be identical across all copies And the verification endpoint MUST return the same result for all copies And the email body MUST NOT include the hash or ID outside the PDF
Privacy and security of provenance payload
Given a generated PDF with provenance references When inspecting human-readable text, QR payload, and metadata Then they MUST NOT contain PII (names, emails, addresses), bank details, or monetary amounts And only the ProofChain transaction hash, Consent Ledger entry ID, and canonical verification URL MAY be present And the QR payload MUST NOT include tokens or session identifiers And metadata keys MUST be restricted to an allowlist including "proofchain_tx_hash" and "consent_ledger_entry_id"

Dispute Holdback

If a split or deliverable is contested in Dispute Vault, automatically hold only the disputed portion while releasing undisputed funds. Configurable percentages/timeouts, targeted notifications, and a locked history keep work moving without penalizing everyone. One‑tap release resumes payouts after resolution.

Requirements

Partial Funds Holdback Engine
"As a finance manager, I want the system to automatically hold only the disputed portion of a payout so that undisputed collaborators receive payments on time."
Description

Implements automated, line‑item aware holds that isolate only the disputed portion of payouts while allowing undisputed funds to be released on schedule. The engine evaluates disputes initiated in Dispute Vault against current payout instructions (splits, fixed‑fee deliverables, taxes/fees) and creates hold transactions per recipient and currency. It supports percentage‑based and fixed‑amount disputes, multi‑recipient payouts, currency conversion at payout‑run time, and prorated fees. The process is idempotent across reruns, maintains deterministic calculations, and integrates with the payout scheduler and ledger to post hold and later release entries. Concurrency controls prevent race conditions during payout runs. Expected outcome: collaborators not implicated in a dispute are paid on time, while only the contested amounts are withheld.

Acceptance Criteria
Percentage Dispute: Multi-Recipient Split With Prorated Fees
Given a payout batch with gross USD 1000.00, a platform fee of 5%, and recipient splits A 50%, B 30%, C 20% And recipient B has an active dispute for 20% of their split When the payout run executes the Partial Funds Holdback Engine Then the engine holds exactly USD 57.00 for recipient B And releases USD 475.00 to A, USD 228.00 to B, and USD 190.00 to C on schedule And posts one HOLD ledger entry for B with amount USD 57.00 and one PAYOUT ledger entry per recipient for the released amounts And the sum(held) + sum(released) + fees equals the batch gross and all amounts are rounded to currency minor units deterministically
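The arithmetic in this scenario can be reproduced step by step (an illustrative worked example; all figures are the ones given in the criterion):

```python
from decimal import Decimal

gross = Decimal("1000.00")
fee = gross * Decimal("0.05")             # platform fee: 50.00
net = gross - fee                          # distributable: 950.00

splits = {"A": Decimal("0.50"), "B": Decimal("0.30"), "C": Decimal("0.20")}
shares = {r: net * pct for r, pct in splits.items()}  # A 475, B 285, C 190

hold_b = shares["B"] * Decimal("0.20")     # 20% of B's split held: 57.00
released = {
    "A": shares["A"],
    "B": shares["B"] - hold_b,             # 228.00 released to B
    "C": shares["C"],
}
# Invariant from the criterion: sum(held) + sum(released) + fees == gross
assert hold_b + sum(released.values()) + fee == gross
```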
Fixed-Amount Dispute With Multi-Currency Conversion at Payout-Run
Given a payout batch where recipient D is paid in EUR and has eligible net funds And there is a fixed-amount dispute of USD 150.00 targeting D's deliverable And the FX rate snapshot at payout-run time is 0.9000 EUR/USD When the engine calculates holds Then it creates a hold for recipient D of EUR 135.00 using the snapshot rate and rounds to two decimals And records the FX rate, source, and timestamp in the hold metadata And ensures the hold amount does not exceed D's eligible net; if it would, the hold is capped at the eligible net and the cap is recorded And no other recipients’ payouts are reduced
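The snapshot-rate conversion and eligible-net cap can be sketched as one function (hypothetical name; the rounding, rate direction, and cap-recording behaviour follow the criterion):

```python
from decimal import Decimal, ROUND_HALF_UP

def fx_hold(dispute_usd: Decimal, rate_eur_per_usd: Decimal,
            eligible_net_eur: Decimal) -> tuple[Decimal, bool]:
    """Convert a fixed USD dispute to EUR at the payout-run snapshot rate,
    then cap the hold at the recipient's eligible net. Returns
    (hold_amount, cap_applied) so the cap decision can be recorded."""
    converted = (dispute_usd * rate_eur_per_usd).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_UP)
    if converted > eligible_net_eur:
        return eligible_net_eur, True
    return converted, False
```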
Idempotent Rerun: No Duplicate Holds Across Scheduler Retries
Given payout run R1 evaluates disputes and posts 3 hold transactions And R1 fails after posting holds but before completing the batch When the scheduler reruns the same batch as R1 with the same inputs Then the engine produces identical hold calculations and does not create duplicate holds And ledger shows exactly 3 HOLD entries with the same dispute_ids, recipients, currencies, and amounts as before And the rerun is marked idempotent and safe to commit
Concurrency Control: Simultaneous Dispute and Payout Run
Given two scheduler instances S1 and S2 start processing the same payout batch concurrently And a new dispute D is created during processing When the engine acquires locks and evaluates disputes Then only one instance proceeds to create holds for the batch; the other aborts or waits without writing holds And dispute D is included only if its creation timestamp is before the batch evaluation cutoff; otherwise it is deferred to the next run And no recipient ends up with more than one hold for the same dispute_id and batch
Auto-Release on Timeout With Targeted Notifications
Given a hold H with a configured timeout of 14 days and no extension or resolution recorded When the timeout elapses and the next payout run executes Then the engine automatically releases the held amount in the original currency per recipient And posts a RELEASE ledger entry linked to H and updates H status to Released And sends targeted notifications only to the dispute participants and impacted recipients
Dispute Resolution: One-Tap Release Resumes Payouts
Given a disputed amount is resolved in Dispute Vault and a user triggers One‑Tap Release for dispute D When the release command is issued Then the engine posts RELEASE ledger entries for all holds tied to D within the same run And marks the holds as Released and makes amounts available for the next disbursement window without affecting unrelated payouts And sends success notifications to impacted recipients
Ledger Integration: Deterministic Hold and Release Entries With Audit Trail
Given holds and releases are posted for a batch with mixed percentage and fixed disputes When an auditor queries the ledger for that batch Then each HOLD and RELEASE entry includes recipient, currency, amount, dispute_id, payout_run_id, calculation basis (percentage or fixed), and FX rate if applicable And entries are immutable (read-only) with a locked history and reference to the exact input snapshot used And recomputing amounts from the stored inputs reproduces the posted amounts exactly
Configurable Holdback Rules & Timeouts
"As an admin, I want to set default holdback percentages and auto‑release timeouts so that disputes follow consistent, auditable policies without manual intervention."
Description

Provides policy controls to define how holdbacks operate across the workspace, project, or release: default percentage or fixed‑amount holds, minimum/maximum caps, auto‑release timeouts (in days), extension limits, and escalation behaviors. Supports policy templates per client/roster, with per‑dispute overrides subject to role‑based permissions and reason capture. Includes validation to prevent over‑holding (e.g., cannot exceed payable net of fees/taxes) and calendar rules for weekends/holidays. Integrates with IndieVault’s settings service and Dispute Vault triggers so holds are created/updated according to the active policy without manual intervention. Expected outcome: consistent, auditable holdback behavior that aligns with business rules and reduces manual work.

Acceptance Criteria
Default Policy Selection & Hold Creation
Given a workspace default policy exists, a project-level policy exists, and the release has no override When a dispute is opened on that release and Dispute Vault triggers hold creation Then the system selects the most specific active policy (project-level) and creates a hold with the configured type (percentage or fixed) And the hold amount is calculated from net_payable_after_fees_taxes and respects min_cap and max_cap And the sum of all active holds for the payout does not exceed net_payable_after_fees_taxes And the hold record stores policy_scope, policy_id, policy_version, and calculation_details And an audit log entry is recorded for hold creation
Fixed-Amount Policy With Caps
Given a fixed-amount hold policy with min_cap and max_cap is active When a dispute triggers hold creation on a payout Then the system sets the hold amount to the fixed amount And if the fixed amount exceeds net_payable_after_fees_taxes, the hold is capped at net_payable_after_fees_taxes And if the fixed amount is below min_cap, the hold amount is raised to min_cap (but never above net_payable_after_fees_taxes) And calculation_details capture base_amount, min_cap_applied, max_cap_applied, and net_cap_applied And an audit log entry records the cap decisions
Auto-Release Timeout With Calendar Rules
Given a hold has an auto-release timeout in days and a workspace business calendar (weekends/holidays) is configured When the timeout elapses Then if the due date falls on a non-business day, auto-release occurs at 09:00 in the workspace time zone on the next business day And the hold status updates to Released, releasing only the held portion And notifications are sent to designated roles/participants per policy And an audit log entry records the auto-release with due_date, adjusted_release_date, and reason = timeout
Extension Limits and Escalation
Given the policy defines max_extensions and total_extension_days, requires reason capture, and configures escalation recipients When an authorized user requests an extension with a reason before the current due date Then the system validates remaining extensions and remaining days against policy limits And if valid, updates the due date, increments extension counters, and records requester, timestamp, added_days, and reason And if invalid, rejects the request with a clear error and triggers escalation notifications And all extension attempts (approved or rejected) are written to the audit log
Per-Dispute Override With RBAC and Reason Capture
Given RBAC allows Finance Admin (and denies others) to apply per-dispute overrides within allowed variance When Finance Admin submits an override for percentage or amount with a reason Then the system validates against min_cap, max_cap, net_payable_after_fees_taxes, and variance limits And if valid, applies the override and updates the hold with override_by, reason, before/after values, and timestamp And if invalid, no changes are applied and a validation error is returned And watchers receive a notification and an audit log entry captures the override attempt and outcome
Settings Integration and Triggered Updates
Given holdback policies are managed in IndieVault's settings service and change events are published When a policy is updated Then new disputes use the updated policy immediately without manual intervention And open holds update only fields marked mutable by policy (e.g., timeout, escalation) and preserve immutable fields (e.g., original amount) unless explicitly allowed And closed/released holds remain unchanged And each automatic update records an audit log and emits a hold.updated event with before/after snapshots And if a dispute event occurs during a policy update, the policy snapshot at trigger time is used
Client/Roster Policy Templates and Inheritance
Given a client/roster has an assigned holdback policy template When a new project or release is created under that client/roster Then the template is instantiated as the active policy at the project (and inherited by releases) unless an explicit override is set And inheritance follows release > project > workspace precedence And updating a template does not change already-instantiated policies unless propagate_changes is enabled; if enabled, open holds update per mutable fields and changes are audited And all template assignments and propagations produce audit entries
Dispute Vault Linkage & Immutable Audit Log
"As a label counsel, I want an immutable history of holdback decisions linked to the dispute so that I can demonstrate due diligence and resolve challenges quickly."
Description

Ensures every holdback is linked to its originating Dispute Vault case and is recorded in an append‑only audit log capturing event type (creation, change, approval, release), timestamp, actor identity, and reason codes. Ledger entries are write‑once; changes emit new events with versioned snapshots of the affected splits/deliverables at the time of the action. Tamper‑evident hashing provides integrity across the event chain. The UI exposes a locked history view within the dispute and payout detail pages. Expected outcome: end‑to‑end traceability that supports compliance, external reviews, and rapid resolution of challenges.

Acceptance Criteria
Holdback–Dispute Case Linkage
Given a Dispute Vault case exists and a holdback is initiated from it When the holdback is created Then the holdback record stores disputeCaseId equal to the originating case ID And the dispute case stores holdbackId in its links collection And both records are retrievable by ID via GET /holdbacks/{id} and GET /disputes/{id} And creating a holdback without a valid disputeCaseId returns 422 with code INVALID_DISPUTE_REFERENCE And attempts to update or remove the linkage return 409 and do not mutate data
Append-Only Audit Log with Chain Integrity
Given any holdback lifecycle action (creation, change, approval, release) When the action is committed Then a new audit event is appended with fields: eventId, eventType in {CREATED, CHANGED, APPROVED, RELEASED}, timestamp (ISO-8601 UTC), actorId, actorType, reasonCode (non-empty, from enum), previousHash, eventHash And no existing audit event can be updated or deleted (attempts return 405 and emit a TAMPER_ATTEMPT audit event) And chain verification over the event sequence returns valid=true with eventHash computed from payload + previousHash
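The chain rule above (eventHash computed from payload + previousHash) can be sketched with SHA-256; field names mirror the criterion, and the canonical-JSON choice is an illustrative assumption:

```python
import hashlib
import json

def event_hash(payload: dict, previous_hash: str) -> str:
    """Hash the canonicalized payload together with the previous event's
    hash, so any mutation or deletion breaks every downstream link."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((canonical + previous_hash).encode()).hexdigest()

def verify_chain(events: list[dict]) -> bool:
    """Recompute each eventHash from its payload and previousHash."""
    prev = "0" * 64  # genesis sentinel (assumption)
    for ev in events:
        if ev["previousHash"] != prev:
            return False
        if event_hash(ev["payload"], prev) != ev["eventHash"]:
            return False
        prev = ev["eventHash"]
    return True
```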
Versioned Snapshots on Change/Approval/Release
Given a change affects splits or deliverables When the action is committed Then the emitted audit event contains a read-only snapshot object with versionId, affected items, amounts/percentages, and file checksums And retrieving the snapshot by eventId returns the same bytes as originally stored And subsequent changes produce new snapshots without altering prior events
Locked History UI in Dispute and Payout Details
Given a user with view permissions opens a linked dispute or payout detail page When the History view is opened Then events render in chronological order with eventType, timestamp, actor display name, reason code, and integrity status And all rows are read-only (no edit/delete affordances) And the view supports filter by eventType and date range and paginates at 50 events/page And an integrity badge reflects current chain verification result And export control is visible to users with Export permission only
Audit Log Export and Verification
Given a user with Export permission requests an audit export for a holdback or dispute When the export is generated Then the system produces a JSON bundle including ordered events, snapshots, and a manifest with rootHash and SHA-256 checksum And the API returns a signed checksum and content-length And the Verify endpoint accepts the bundle and returns verified=true when hashes match
Tamper Detection Signal
Given an audit chain fails verification (missing event or hash mismatch) When verification runs (on load or scheduled job) Then the system creates a TAMPER_DETECTED audit event with timestamp and details And the UI displays a prominent warning banner in the History view And notifications are sent to users with Admin role And subsequent valid events remain viewable; no existing events are altered
Targeted Notifications & Recipient Scoping
"As an artist, I want to be notified when my payment is partially held and why so that I understand what I will receive and what is pending."
Description

Delivers event‑driven notifications scoped to only impacted parties: disputed recipients, undisputed recipients receiving adjusted payouts, admins, and optionally external collaborators. Supports email and in‑app channels with localized, role‑aware templates that redact sensitive details for unauthorized viewers. Includes batched digests, retry/backoff, and preference/quiet‑hours honoring. Messages contain deep links to the Dispute Vault case, a holdback summary, and one‑tap release (for authorized roles). Expected outcome: stakeholders stay informed without spam, reducing confusion and support load.

Acceptance Criteria
Event-Scoped Notification Routing for Disputed Parties
Given a dispute is opened or updated affecting recipients R1 and R2 on release X, and recipients R3 and R4 are unaffected And notification preferences exist for each recipient and admin users When the notification job runs for the Dispute Vault event Then only R1, R2, configured admins, and configured external collaborators receive a “Dispute Opened/Updated” message And R3 and R4 receive no message for this event And each delivered message records recipient_id, channel, template_id, and redaction_level in the immutable history
Adjusted Payout Notice to Undisputed Recipients
Given a holdback applies to release X causing a payout adjustment of 25% for undisputed recipient R3 When the payout cycle is calculated and notification dispatch starts Then only R3 receives an “Adjusted Payout Due to Dispute Holdback” message And the message includes holdback percentage, calculated adjusted amount, and effective period And disputed recipients do not receive this adjusted payout notice And the message contains no sensitive details about other recipients
Role-Aware Templates and Redaction Enforcement
Given recipient A is an admin, recipient B is a disputed party, and collaborator C is external with limited permissions When a dispute-related notification is generated Then admin A’s message includes full dispute details including counterparties, amounts, and attachments And disputed recipient B’s message includes only their own amounts and counterparties and excludes other parties’ PII And external collaborator C’s message redacts amounts, hides counterparties beyond permitted display names, and includes no attachments And deep links for unauthorized viewers resolve to an access-denied view
Localization and Template Fallback
Given recipient D has locale es-ES and recipient E has locale fr-CA where a localized template is unavailable When generating notifications for the same event Then recipient D receives the Spanish template with all placeholders correctly populated And recipient E receives the default en-US template as a fallback And monetary values and dates are formatted according to each recipient’s locale And any template rendering error blocks send and logs an error; no partially rendered message is delivered
Deep Links and One‑Tap Release Authorization
Given a notification is sent to an authorized role (Admin or Finance) with a one‑tap release action link When the recipient activates the link within 24 hours of send Then the system validates a signed, single‑use token and releases the disputed holdback per current policy And the action is audit-logged with actor_id, case_id, and message_id; subsequent link uses show “Already executed” And notifications to unauthorized roles contain no one‑tap link; attempting the deep link results in 403 Access Denied And the case deep link opens the exact Dispute Vault case with the holdback summary visible on load
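One way to realize the signed, single-use, 24-hour token (a sketch under assumptions: HMAC-SHA256 signing, a colon-delimited payload, and an in-memory stand-in for the persistent replay store — none of these details are specified above):

```python
import hashlib
import hmac

SECRET = b"server-side-signing-key"   # hypothetical; never sent to clients
_used_tokens: set[str] = set()        # stand-in for a persistent replay store

def sign_release_token(case_id: str, message_id: str, issued_at: int) -> str:
    payload = f"{case_id}:{message_id}:{issued_at}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def redeem(token: str, now: int, ttl: int = 24 * 3600) -> str:
    """Validate signature, 24h window, and single use; returns a status."""
    case_id, message_id, issued_at, sig = token.rsplit(":", 3)
    payload = f"{case_id}:{message_id}:{issued_at}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return "invalid"
    if now - int(issued_at) > ttl:
        return "expired"
    if token in _used_tokens:
        return "already_executed"
    _used_tokens.add(token)
    return "released"
```

The "Already executed" response in the criterion corresponds to the replay-store hit on a second redemption.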
Batched Digest Delivery
Given ≥3 notification events occur for recipient R within a 60‑minute digest window and R has digest preference enabled When the digest job runs at window close Then R receives a single digest summarizing all events with per-item titles and deep links And R does not receive individual event messages in addition to the digest for those events And if more than 20 events occurred, the digest truncates to 20 items and includes a “View all” link to the case activity And if digest preference is disabled, events are delivered individually as they occur
Delivery Reliability, Retry/Backoff, and Quiet Hours
Given email is enabled for recipient R and in‑app is always on, and a transient SMTP error occurs on send When dispatching the message Then the system retries email up to 3 times with exponential backoff (1m, 5m, 15m) and marks email failed after the final attempt while keeping in‑app delivered And permanent failures (hard bounce/blocked) are not retried and suppress future email sends to that address until updated by the user And if R’s quiet hours are active, email is deferred until quiet hours end (recipient timezone) while in‑app is posted immediately And no notification is sent on any channel that R has disabled in preferences
One‑Tap Release & Resume Payouts
"As a project manager, I want to release held funds with one action once a dispute is resolved so that payouts resume immediately without manual reconciliation."
Description

Adds a permissioned action to resolve a dispute and immediately release held funds. On invocation, the system verifies dispute status and policy gates, recalculates any deltas (e.g., split edits made during the dispute), and merges released amounts into the next payout run or triggers an ad‑hoc payout per policy. Includes confirmation UX, multi‑factor/double‑confirm for large releases, error handling with rollback, and full audit/log updates. Integrates with the payout scheduler, ledger, and notifications. Expected outcome: rapid return to normal payout operations with minimal manual reconciliation.

Acceptance Criteria
Permissioned One‑Tap Release On Resolved Dispute
- Given a dispute in status "Resolved" or "Release Approved" and a user with "dispute.release" permission, When the user taps "Release" and confirms, Then the system authorizes the action and proceeds to execute the release.
- Given a dispute not in a releasable status (e.g., "Open", "Pending Review", "Appealed") or the user lacks permission, When the user attempts to release, Then the action is blocked, a 403/validation error is shown, and no state changes occur.
- Given a dispute with a zero holdback balance, When the user attempts to release, Then a no‑op message is shown and no payout jobs are created.
- Rule: Only holds linked to the dispute ID are eligible for release; unrelated holds remain untouched.
Delta Recalculation On Split Changes During Dispute
- Given held earnings detail lines and current split configuration (as of execution time), When release executes, Then the releasable amount per payee equals sum(lines × current split %) minus amounts already disbursed for those lines.
- Rule: Sum of per‑payee release amounts equals total releasable amount within rounding tolerance, with residual handled per policy (e.g., residual bucket).
- Then ledger entries are created per payee with a single journal group: debit Holdback Liability, credit Payable, using a stable idempotency reference.
- Then a reconciliation artifact is attached to the audit record showing before/after balances and delta calculations.
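The per-payee delta formula can be expressed directly (a sketch; the function name and data shapes are hypothetical, the formula is the one stated in the criterion):

```python
from decimal import Decimal

def releasable_per_payee(held_lines: list[Decimal],
                         current_splits: dict[str, Decimal],
                         already_disbursed: dict[str, Decimal]) -> dict[str, Decimal]:
    """Releasable per payee = sum(held lines x current split %)
    minus amounts already disbursed to that payee for those lines."""
    total = sum(held_lines, Decimal("0"))
    return {
        payee: total * pct - already_disbursed.get(payee, Decimal("0"))
        for payee, pct in current_splits.items()
    }
```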
Payout Routing Per Policy (Next Run vs Ad‑Hoc)
- Given policy.release_mode = "immediate", When release executes, Then an ad‑hoc payout batch is created within 60 seconds containing all releasable amounts and queued with the payout scheduler.
- Given policy.release_mode = "next_cycle", When release executes, Then released amounts are merged into the next scheduled payout run and appear in the run preview with correct per‑payee lines.
- Rule: No duplicate inclusion—each payee’s release amount appears in exactly one payout batch; idempotency checks prevent duplication across retries.
- Then the resulting payout batch ID (or scheduled run ID) is linked in the audit record and visible in the confirmation UI.
Secure Confirmation And MFA For Large Releases
- Given total release amount ≥ policy.large_release_threshold, When the user taps Release, Then a summary dialog displays totals, beneficiaries, fees, and policy snapshot, requires MFA (TOTP/SMS) and a second explicit confirm before proceeding.
- Given total release amount < policy.large_release_threshold, When the user taps Release, Then a single confirm dialog suffices without MFA unless overridden by policy.
- Rule: Confirmation session times out after 5 minutes; three failed MFA attempts lock the action for 15 minutes and are rate‑limited.
- Then confirmation captures user agent, IP, timestamp, and MFA method and stores them in the audit evidence.
Atomic Execution, Rollback, And Clear Error Handling
- Given any failure in ledger write, payout batch creation, scheduler enqueue, or notification dispatch, When the release executes, Then the entire transaction is rolled back and no partial state persists (no ledger postings, no payout batches, no success notifications).
- Then the user sees a descriptive error with a retry‑safe idempotency token and correlation ID.
- Rule: Retries are safe and do not create duplicate payouts; compensating actions are triggered automatically if any external side effect was initiated.
- Then failure is recorded as an audit entry with error codes, stack trace reference, and unchanged balances.
Comprehensive Audit Trail And Targeted Notifications
- Then a success audit entry records actor ID, dispute ID, policy snapshot hash, pre/post holdback balances, per‑payee release amounts, payout batch/schedule ID, MFA evidence, and checksums of input/output artifacts.
- Given configured recipients (e.g., manager, accountant, disputing parties), When release succeeds, Then notifications are sent within 60 seconds containing amount, beneficiaries, link to audit, and expiration, honoring notification preferences.
- Rule: Notification payloads mask sensitive identifiers (e.g., bank tokens) and exclude raw account numbers.
- Then notification delivery status and opens are captured per recipient for analytics.
Idempotency And Concurrency Safety
- Given repeated identical release requests for the same dispute, When processed with the same idempotency key, Then exactly one release is executed and subsequent requests return the original result (HTTP 200 idempotent replay).
- Given concurrent release attempts without an idempotency key, When they race, Then a lock on the dispute prevents double execution; losing attempts return HTTP 409 with guidance to retry using the provided key.
- Rule: Idempotency keys expire after 24 hours and are scoped to (dispute ID, policy snapshot hash, release scope).
- Then metrics record deduplicated requests, lock contention, and average release latency.
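The key scoping and replay behaviour described above can be sketched as follows (assumptions: SHA-256 key derivation and an in-memory dict standing in for the persistent idempotency store with its 24-hour TTL):

```python
import hashlib

_completed: dict[str, dict] = {}   # stand-in for a persistent store with TTL

def idempotency_key(dispute_id: str, policy_hash: str, scope: str) -> str:
    """Derive the key from exactly the scope tuple named in the rule:
    (dispute ID, policy snapshot hash, release scope)."""
    return hashlib.sha256(f"{dispute_id}|{policy_hash}|{scope}".encode()).hexdigest()

def execute_release(key: str, run_release) -> tuple[int, dict]:
    """Execute at most once per key; replays return the original result
    with HTTP 200, per the criterion."""
    if key in _completed:
        return 200, _completed[key]          # idempotent replay
    result = run_release()                   # the actual release side effect
    _completed[key] = result
    return 201, result
```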
Per‑Recipient Holdback Analytics & Reporting
"As a manager, I want to see which collaborators have funds on hold and for how long so that I can manage expectations and cash flow."
Description

Provides visibility into held vs. released amounts per recipient and release, including reason codes, dispute age, expected release dates, and impact on cash flow. Includes filters by project, release date, recipient, and dispute status; time‑series charts; CSV export; and secure API endpoints for BI tools. Surfaces alerts for aging holds nearing timeout. Integrates with IndieVault’s analytics layer and respects data‑access permissions. Expected outcome: transparent insights that support communication with collaborators and financial planning.

Acceptance Criteria
Per-Recipient Holdback Summary by Release
Given I access Holdback Analytics for a specific release with at least one disputed split or deliverable When the page loads Then I see a per-recipient table with columns: Recipient, Release, Held Amount, Released Amount, Held %, Reason Code, Dispute Age (days), Expected Release Date, Last Updated And Held Amount + Released Amount equals Total Owed for that recipient-release row within ±0.01 And Held % = Held Amount / (Held Amount + Released Amount) rounded to 2 decimals And Reason Code reflects the latest active dispute code from Dispute Vault for that recipient-release And Dispute Age is computed as full days since dispute opened; 0 if less than 24 hours And Expected Release Date derives from dispute timeout or configured override and is today or a future date, or null if unknown And the summary Impact on Cash Flow equals the sum of Held Amount across all rows in the current filter context
Filter and Search Across Project/Date/Recipient/Status
Given multiple projects, releases, recipients, and disputes exist When I apply filters: Project (multi-select), Release Date (range), Recipient (typeahead multi-select), Dispute Status (Active, Resolved, Timed Out) Then the table, summaries, charts, and exports reflect only records matching all active filters And filter chips display active filters and allow single-click removal And clearing filters resets the date range to the last 90 days And applying or removing filters updates URL query parameters for deep-linking And filtered results render within 2 seconds for datasets up to 100k rows
Time-Series Trends and Drilldown Consistency
Given a date range is selected When I view the time-series chart Then it shows daily totals for Held Amount, Released Amount, and Net Change lines over the selected range And hovering a point shows exact totals and number of recipients contributing that day And clicking a point drills down to the table filtered to that day with totals that differ by no more than 0.1% from the chart And granularity auto-adjusts: daily for ranges ≤ 90 days, weekly for ranges ≤ 12 months, monthly for longer ranges
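The granularity auto-adjust rule reduces to a small threshold function (treating "12 months" as 365 days for the sketch):

```python
def chart_granularity(range_days: int) -> str:
    """Daily for ranges <= 90 days, weekly for <= 12 months, monthly beyond."""
    if range_days <= 90:
        return "daily"
    if range_days <= 365:
        return "weekly"
    return "monthly"
```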
CSV Export Accuracy and Fidelity
Given active filters are applied to the analytics view When I export to CSV Then the file includes headers: recipient_id, recipient_name, release_id, release_title, held_amount, released_amount, held_percent, reason_code, dispute_age_days, expected_release_date, last_updated, dispute_status And the exported row count equals the on-screen table row count And the CSV totals for held_amount and released_amount match on-screen totals within ±0.01 And timestamps are in ISO 8601 UTC format And the export completes within 10 seconds for up to 200k rows, or streams in chunks if larger
Secure Analytics API for BI Integration
Given I authenticate with a valid API token that has analytics.read scope When I call GET /api/analytics/holdbacks with query parameters project_id, release_date_from, release_date_to, recipient_id, dispute_status, page, page_size Then I receive HTTP 200 with a JSON body containing data[], totals{}, page_info{page, page_size, total_pages, total_records} And each data item includes: recipient_id, recipient_name, release_id, release_title, held_amount, released_amount, held_percent, reason_code, dispute_age_days, expected_release_date, last_updated, dispute_status And responses exclude records the token’s principal is not permitted to access And rate limits enforce 120 requests per minute per token, returning HTTP 429 with Retry-After when exceeded And requests and responses are recorded in the analytics audit log with timestamp and token identifier
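A BI client would page through the endpoint using page_info. The sketch below injects the HTTP call as a callable so it stays self-contained; a real client would wrap a request carrying the analytics.read token and honor HTTP 429 with Retry-After.

```python
def fetch_all_holdbacks(fetch_page, page_size=100, **filters):
    """Collect every row from GET /api/analytics/holdbacks.

    `fetch_page` is any callable returning the JSON body described above:
    {"data": [...], "totals": {...},
     "page_info": {"page", "page_size", "total_pages", "total_records"}}.
    """
    page, rows = 1, []
    while True:
        body = fetch_page(page=page, page_size=page_size, **filters)
        rows.extend(body["data"])
        if page >= body["page_info"]["total_pages"]:
            return rows
        page += 1
```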
Aging Hold Timeout Alerts
Given holds exist with expected release dates within the configured alert threshold (N days) When I view the dashboard or have alert subscriptions enabled Then an Aging Holds list shows items sorted by soonest expected release And each item displays recipient, release, held amount, expected release date, dispute age, and reason code And an alert triggers once when a hold first enters the threshold window and is not duplicated daily And holds resolved or released are removed from the list and suppress future alerts within 15 minutes of resolution And selecting an alert opens the analytics view filtered to that specific hold context
Permissions and Data Access Enforcement
Given role-based access controls are configured for projects and recipients When a user without access attempts to view holdback analytics for a restricted project or recipient Then no restricted data appears in UI, CSV export, or API responses And totals, charts, and alerts are computed only from data the user is permitted to see And attempts to access restricted data are logged with user identifier, scope, and timestamp in the audit log
Overlap & Edge‑Case Safeguards
"As a payments engineer, I want safeguards against double‑holds and inconsistent states so that payouts remain accurate even with overlapping disputes."
Description

Handles complex scenarios such as multiple simultaneous disputes on the same payout lines, split changes after a hold is created, partial resolutions, chargebacks/reversals, and currency rate drift. Ensures the sum of holds never exceeds the payable amount and prevents double‑holding through deterministic conflict resolution and idempotency keys. Supports force‑release with justification and role checks, and provides rollback‑safe transactions. Includes comprehensive validation and unit/integration tests. Expected outcome: accurate payouts and system stability under real‑world dispute patterns.

Acceptance Criteria
Multiple Simultaneous Disputes on Same Payout Lines
Given a payout line of amount X with no existing holds When two or more disputes referencing overlapping portions are created concurrently Then the system holds only the union of disputed amounts, sum(holds) <= X, and no double-hold occurs Given multiple disputes are received within the same millisecond When conflict resolution is applied Then ordering is deterministic by createdAt asc, then disputeId asc, and results are repeatable across replays Given a dispute creation is retried with the same idempotency key When the request is processed Then the response returns the original hold reference and no new hold is added Given holds are applied to a payout line When the ledger and audit trail are inspected Then entries exist per dispute with requestedAmount, appliedAmount, ordering inputs, idempotencyKey, actor, and timestamps
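The deterministic ordering and the sum(holds) <= X invariant can be sketched as a pure function (field names are illustrative, not the actual schema):

```python
def apply_holds(payable, disputes):
    """Apply disputed holds deterministically so sum(holds) never exceeds payable.

    `disputes` is a list of dicts with created_at, dispute_id, and
    requested_amount. Ordering is by createdAt asc, then disputeId asc, so
    replays produce identical results; each hold is capped at the remaining
    payable, which prevents double-holding on overlapping disputes.
    """
    remaining = payable
    applied = []
    for d in sorted(disputes, key=lambda d: (d["created_at"], d["dispute_id"])):
        amount = min(d["requested_amount"], remaining)
        applied.append({"dispute_id": d["dispute_id"],
                        "requested": d["requested_amount"],
                        "applied": amount})
        remaining -= amount
    return applied, remaining
```

Two disputes arriving in the same millisecond tie on created_at, so the disputeId tiebreak decides who gets the full requested amount, and the result is identical on every replay.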
Split Change After Hold Creation
Given a hold is created against a disputed portion of a payout line under Split S0 When the split is updated to S1 effective after the hold creation Then the hold remains bound to the disputed logical share, the undisputed portion is recalculated per S1, and released amounts reflect S1 Given a split update reduces the payable on the disputed share When the next payout cycle runs Then the hold amount auto-adjusts down to not exceed the new payable and never creates a negative available balance Given a split update increases the payable on the disputed share When the next payout cycle runs Then no automatic increase to the existing hold occurs; only an explicit dispute update may increase the hold Given the split changes When audit is queried Then the event log records S0 snapshot, S1 values, effectiveAt, recalculation results, and user/automation actor
Partial Resolution Releases
Given a dispute holds multiple items or portions When a subset is marked resolved for release Then the corresponding held amount is released immediately and queued for payout without affecting remaining holds Given a partial release occurs When recipient notifications are sent Then only affected recipients are notified with amounts, items, and remaining hold summary Given a partial resolution is applied When viewing the dispute history Then the timeline shows resolution type, released amount, remaining held amount, actor, justification (if provided), and timestamps Given a partial release is retried with the same idempotency key When processed Then the release is applied at most once and returns the original release reference
Chargeback or Reversal Handling
Given an undisputed payout was released previously When a processor chargeback is received for that amount Then a reversal ledger entry is created without modifying any existing holds and available balances are updated accordingly Given a chargeback exceeds current available balance When accounting is applied Then the system tracks the deficit without creating negative held amounts and schedules recovery against future payables per policy Given a chargeback event is recorded When notifications are dispatched Then only impacted recipients and designated managers are notified with chargeback details and next steps Given a chargeback and a dispute resolution occur on the same day When transactions commit Then ordering is deterministic (chargebacks applied before discretionary releases) and the final balance is consistent on replay
FX Rate Drift and Rounding Integrity
Given a hold is created on a payout line denominated in currency C When the hold is stored Then amounts are normalized and persisted in base currency B using the creation FX rate and the original currency/amount are recorded for audit Given a release occurs at a later settlement time with a different FX rate When computing the release Then the payable in C is converted using the settlement rate, rounding follows banker's rounding to 2 decimals in C, and no negative residuals are produced Given residuals under the minimal unit (e.g., < 0.01 in base currency) When settlement completes Then residuals are carried forward to the next cycle and logged as variance with references to source holds Given FX rates are unavailable at settlement time When the operation is attempted Then the release is blocked with a retriable error, no partial state is written, and an alert is raised
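Banker's rounding with residual carry-forward is the core of the rounding-integrity rule. A minimal sketch using the standard decimal module (the signed per-cycle residual here is the variance that gets logged; the non-negativity guarantee comes from the surrounding hold accounting, which this sketch does not model):

```python
from decimal import Decimal, ROUND_HALF_EVEN

CENT = Decimal("0.01")

def settle(exact_amounts, carried=Decimal("0")):
    """Settle a sequence of cycles: banker's-round each payable to 2 decimals
    and carry the sub-cent residual into the next cycle instead of dropping it."""
    paid = []
    for exact in exact_amounts:
        total = exact + carried
        payout = total.quantize(CENT, rounding=ROUND_HALF_EVEN)
        carried = total - payout  # logged as variance, carried forward
        paid.append(payout)
    return paid, carried
```

Note how two consecutive 1.005 amounts pay 1.00 then 1.01: the half-cent withheld by half-even rounding in cycle one is recovered in cycle two, so nothing is lost across the ledger.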
Force-Release Governance and Audit Trail
Given a user with insufficient role attempts a force-release When authorization is evaluated Then the operation is denied with 403 and no state changes occur Given an authorized approver initiates a force-release When the request includes a non-empty justification (>= 15 characters) and a valid 2FA challenge Then the release executes only up to the available held amount and an immutable audit record captures actor, justification, 2FA method, and impacted lines Given a force-release is executed When notifications are sent Then affected recipients and dispute participants receive targeted messages including justification and amounts Given a force-release request is retried with the same idempotency key When processed Then the system returns the original release outcome without duplicating effects
Idempotent, Rollback-Safe Operations Under Retry
Given network timeouts or worker retries occur When create-hold, adjust-hold, or release APIs are called with the same idempotency key Then each operation is applied at most once and returns the same result on subsequent retries Given any multi-step hold or release transaction When a downstream dependency fails mid-operation Then the entire transaction is rolled back with no partial ledger or audit writes and the request is marked retriable Given eventual consistency repair jobs run When drift between requested and applied holds is detected Then the job reconciles to the deterministic state and emits a reconciliation audit event Given the system test suite executes When unit and integration tests covering concurrency, FX drift, chargebacks, split changes, and force-release run Then all tests pass and code coverage for the Dispute Holdback safeguard module is >= 90% lines and 80% branches

SilentStamp

Embed per-recipient, inaudible audio identifiers at the moment each review link or export is generated. Adaptive masking keeps the mix pristine while surviving common transcodes, dithering, and light edits. Works on full tracks and stems with batch apply and an instant A/B Audibility Check. Benefit: every copy is uniquely traceable without compromising sound, so you can share confidently and act fast if it leaks.

Requirements

Per-Recipient Inaudible Watermark Embedder
"As an indie label manager, I want a unique, inaudible stamp embedded automatically for each recipient when I create review links or exports so that every shared copy is traceable without affecting sound quality."
Description

Implements the core audio-stamping engine that embeds a unique, inaudible identifier per recipient at the moment a review link or export is generated. Uses psychoacoustic modeling to adaptively mask the payload, preserving mix integrity while ensuring recoverability after common transcodes (e.g., AAC/MP3), dithering, sample-rate conversion, light edits (head/tail trims, gain changes), and loudness normalization. Supports mono/stereo and multi-channel files across common formats (WAV, AIFF, FLAC) and bit depths, works on both full tracks and stems, maintains deterministic/idempotent outputs for the same input and recipient, respects peak headroom constraints, and records embed parameters for auditability. Provides clear error states, retries, and performance targets suitable for batch workflows without blocking the UI.

Acceptance Criteria
Embed Unique Watermark at Review Link Generation
Given a source audio asset (full track or stem) and a recipient ID When a review link is generated or an export is triggered Then a per-recipient watermark payload derived from the recipient ID and asset checksum is embedded into the audio And decoding the watermarked output returns the correct recipient ID with confidence ≥ 0.999 And two different recipient IDs produce outputs that decode to different IDs and have cross-correlation of decoded payloads ≤ 0.05 And a 4-minute, stereo, 44.1 kHz/24-bit WAV completes embedding in ≤ 5 seconds on reference host (8 vCPU, 16 GB RAM, SSD)
Idempotent and Deterministic Embeds
Given the same input file checksum, recipient ID, and embed settings/version When embedding is executed multiple times Then the produced audio bytes are bit-identical across runs (SHA-256 hashes match) And changing the recipient ID or embed settings/version produces non-identical audio bytes and a distinct decoded payload
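Determinism of this kind typically comes from deriving the embed seed purely from the inputs the criterion names. A sketch of one way to do it (the function name and field separator are assumptions, not the shipped algorithm):

```python
import hashlib

def embed_seed(file_checksum: str, recipient_id: str, settings_version: str) -> int:
    """Derive a deterministic seed from (input checksum, recipient ID, embed
    settings/version): identical inputs yield the same seed, and therefore
    bit-identical embeds, while changing any component yields a distinct seed
    and a distinct decoded payload."""
    digest = hashlib.sha256(
        f"{file_checksum}|{recipient_id}|{settings_version}".encode()
    ).digest()
    return int.from_bytes(digest[:8], "big")
```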
Robust Recovery After Transcodes, Dither, SRC, Trims, Gain, and Loudness Normalization
Given a watermarked output file When it is subjected to any of: AAC 256 kbps, MP3 192 kbps, 16-bit TPDF dithering, sample-rate conversion 44.1↔48 kHz, global gain change between −6 dB and +6 dB, head/tail trims totaling ≤ 1000 ms, or loudness normalization to −14 LUFS Then the watermark decoder recovers the correct recipient ID with overall success rate ≥ 99.5% and bit error rate ≤ 1% across a corpus of ≥ 100 test files And the false-positive rate on non-watermarked files is ≤ 1e−6 And decode time per 4-minute track is ≤ 2 seconds on the reference host
Headroom and Loudness Preservation
Given an input with measured true peak ≤ −1.0 dBTP When the watermark is embedded Then the output true peak remains ≤ −1.0 dBTP (no new inter-sample clipping) And the change in integrated loudness |ΔLUFS-I| ≤ 0.1 LU and max short-term loudness change within any 3 s window ≤ 0.3 LU And if the headroom constraint would be violated, the operation fails with error code HEADROOM_INSUFFICIENT and includes the recommended gain reduction (dB) to proceed
Multi-Format and Multi-Channel Coverage Including Stems
Given input audio in WAV, AIFF, or FLAC at common bit depths (16-bit and 24-bit integer, 32-bit float) and channel layouts (mono, stereo, up to 7.1) When the watermark is embedded Then the output preserves channel count and order, sample rate, and bit depth unless explicit conversion is requested And file metadata (e.g., tags/ISRC/artwork) is preserved unchanged And batch applying to a set of stems for the same asset assigns consistent per-recipient payloads across all stems And no channel reordering or polarity inversion occurs (channel mapping verified)
Audit Trail of Embed Parameters
Given any embed operation (success or failure) When querying by asset ID, recipient ID, or job ID Then an immutable audit record exists containing: asset checksum, recipient ID, timestamp (UTC), algorithm+model versions, payload strength, deterministic seed, input/output formats, channel map, measured headroom and loudness deltas, outcome status, and error code/message if failed And the record is exportable as JSON and is retrievable in ≤ 200 ms at P95
Batch Performance, Non-Blocking UI, and Resilient Retries
Given a batch of 100 stereo 4-minute WAV files at 44.1 kHz/24-bit on a reference host (8 vCPU, 16 GB RAM, SSD) When batch embedding is started from the UI Then total processing time is ≤ 20 minutes and the UI remains responsive, emitting progress updates at least every 1 second And transient failures are retried up to 3 times with exponential backoff (1 s, 2 s, 4 s); persistent failures are marked Failed with error codes without blocking remaining items And CPU utilization averages ≤ 85% and peak RSS memory ≤ 8 GB during the batch And cancel/pause/resume operations do not corrupt outputs; partial files are cleaned up on cancel
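The retry schedule above (three retries at 1 s, 2 s, 4 s) is a plain exponential backoff. A sketch, with the sleep function injectable so batch workers and tests can control timing:

```python
import time

def with_retries(task, retries=3, base_delay=1.0, sleep=time.sleep):
    """Run `task`, retrying transient failures up to `retries` times with
    exponential backoff (1 s, 2 s, 4 s by default); re-raise after the final
    attempt so the item can be marked Failed without blocking the rest of
    the batch."""
    for attempt in range(retries + 1):
        try:
            return task()
        except Exception:
            if attempt == retries:
                raise
            sleep(base_delay * (2 ** attempt))
```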
Adaptive Masking with Instant A/B Audibility Check
"As a mixing engineer, I want to A/B the stamped audio against the original in real time so that I can confirm the watermark is inaudible before sharing."
Description

Adds a perceptual masking controller that continuously tunes embed strength per time/frequency region to remain below audibility thresholds, with guardrails for sparse passages and quiet fades. Includes an instant A/B Audibility Check in the asset preview that toggles stamped vs. original playback, shows a difference meter (e.g., loudness delta and spectrum diff), and blocks finalization if an embed exceeds audibility thresholds. Provides configurable sensitivity profiles, fast preview rendering (<1s startup), and fallback strategies that reduce payload density rather than risk artifacts.

Acceptance Criteria
A/B Toggle Sync and Seamless Playback
Given an asset is open in the preview player with both original and stamped renders prepared When the user toggles A/B via button or keyboard shortcut during continuous playback Then audio continues without interruption and time alignment between A and B remains within ±5 ms And no clicks, pops, or gain discontinuities greater than 0.1 dB occur at the toggle point And the A/B state indicator updates within 100 ms of the user action And the difference meter updates within 200 ms of the toggle
Difference Meter Accuracy and Responsiveness
Given the difference meter is visible during A/B Audibility Check When measuring a calibrated test asset with a known −0.3 dB gain change Then the loudness delta displays −0.3 ± 0.1 LU within 2 seconds of playback start And when measuring a pink-noise sweep, the spectrum‑diff displays magnitude differences within ±1 dB across 50 Hz–16 kHz compared to an offline reference And numerical values and units (LU, dB) are shown and updated at least 5 times per second And all meter elements meet WCAG 2.1 AA contrast and are keyboard‑focusable
Audibility Threshold Enforcement Blocks Finalization
Given the audibility model predicts any time–frequency bin with audibility margin < 0 dB for the selected sensitivity profile When the user clicks Finalize Stamp Then the finalize action is blocked And an error banner lists each offending segment with start/end timestamps and dominant frequency bands And switching to the Conservative profile or enabling fallback reduces all offending regions to margin ≥ 0 dB And once all regions meet the threshold, Finalize becomes enabled
Adaptive Masking in Sparse and Quiet Sections
Given an asset containing passages with RMS below −30 dBFS or isolated sources When stamping with the Balanced profile Then embed strength in those passages is reduced to keep predicted audibility margin ≥ 3 dB And if margin cannot be maintained, payload density is reduced up to full suppression in those passages rather than increasing strength And a visual timeline overlay marks reduced‑density regions and is listed in an event log And the A/B toggle in those regions produces no audible artifacts per a double‑blind listening test pass rate ≥ 90% with n ≥ 10 listeners
Sensitivity Profiles Selection and Persistence
Given profiles Conservative, Balanced (default), and Aggressive are available When a user selects a profile and restarts the A/B preview Then the profile is applied to the masking controller and pre‑check thresholds immediately And the selection persists for that asset for the current user and is restored on reload And Reset to Default sets the profile to Balanced And switching profiles triggers a re‑evaluation completed within 500 ms for 10‑minute assets
Fast Preview Startup Performance
Given a 10‑minute 96 kHz/24‑bit stereo asset not cached locally When the user presses Play in the preview player Then audible playback begins within 1000 ms in the 95th percentile over 20 trials And CPU utilization remains below 80% for more than 95% of the first 2 seconds And if startup exceeds 1000 ms, a loading indicator appears within 200 ms and disappears when playback starts And all metrics are captured to performance logs with timestamps
Batch Stamping for Tracks and Stems
"As an artist manager, I want to stamp entire releases and their stems in one operation so that I can deliver consistent, traceable assets without manual repetition."
Description

Enables batch application of SilentStamp across full tracks and stem sets within a release, preserving recipient identity consistently across all related assets. Provides a queue with parallel processing, resumable jobs, progress indicators, detailed per-file logs, and automatic stem alignment handling. Keeps metadata intact, writes outputs to release-ready folder structures, and surfaces failures with actionable remediation (e.g., unsupported format, clipping risk). Ensures reproducibility and consistent embed parameters across the batch for reliable downstream detection.

Acceptance Criteria
Batch Stamp Entire Release (Tracks + Stems)
Given a release containing at least one full track and one stem set, and a selected recipient When the user initiates Batch Stamp Then SilentStamp is applied to every input file in the selection within a single batch run And each input yields exactly one stamped output file, with no source files overwritten And outputs are created only if stamping completes without error for that file
Recipient Identity Consistency Across All Outputs
Given recipient R with recipientId = X and a new batchRunId = Y When the batch completes Then every stamped output (tracks and all stems) embeds recipientId = X and batchRunId = Y And the batch manifest lists a single recipientId for the run and per-file checksums mapping inputs to outputs And any file failing recipientId verification is marked Failed and excluded from success counts
Queue Processing With Parallelism and Bounded Concurrency
Given a queue of at least 10 files and a maximum parallelism C When processing starts Then no more than C files are in Processing state simultaneously And queue states transition only through Queued -> Processing -> Done/Failed with no deadlocks And concurrency and throughput metrics are recorded in the batch log
Resumable Jobs After Interruption
Given a batch interrupted by application crash or network loss When the user selects Resume for that batch Then previously completed files (verified by checksum) are skipped And partial outputs are discarded and regenerated And remaining files continue using the original batchRunId and embed parameters And the final manifest reflects a single run with a resumed = true flag and accurate start/end timestamps
Progress Indicators and Per-File Logs
Given an active batch When the user opens the Batch panel Then the UI displays overall percent complete, elapsed time, ETA, and counts of Done/Processing/Failed updated at least once per second And each file row shows current state and progress percent with an expandable log including start/end timestamps, input path, output path, and applied embed parameters (stampVersion, maskingMode, strength, seed) And per-file logs and the batch manifest can be exported as JSON
Automatic Stem Alignment Preservation
Given a stem set with aligned sources When stamping completes Then all stamped stems maintain original relative alignment with maximum inter-stem drift <= 1 sample And no added leading or trailing silence is introduced And if alignment cannot be guaranteed, the affected stem is marked Failed with reason = alignment-risk and remediation instructions
Reproducibility, Metadata Preservation, and Release-Ready Outputs
Given identical inputs, recipientId, and embed parameters (including fixed seed) When the batch is rerun Then bit-identical outputs with matching SHA-256 checksums are produced And all non-audio metadata (ID3, BWF, Vorbis) is preserved unchanged, with added provenance fields (stampRunId, recipientId) appended without altering existing tags And outputs are written under the release’s designated output root preserving original per-track/per-stem folder structure, with a manifest.json mapping inputs to outputs and listing checksums And failures surface with actionable messages including reason codes (unsupported-format, clipping-risk, alignment-risk, read-error) and suggested remediation steps, and Failed files do not block completion of other files
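The manifest.json described above is essentially a checksum map. A file-system-free sketch (the entry tuple shape is an assumption made so the example stays self-contained; real code would read and hash the files on disk):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_manifest(recipient_id, stamp_run_id, entries):
    """Build a manifest mapping inputs to outputs with SHA-256 checksums so a
    rerun can be verified bit-for-bit. `entries` is an iterable of
    (input_path, input_bytes, output_path, output_bytes); field names mirror
    the provenance fields above (recipientId, stampRunId)."""
    return {
        "recipientId": recipient_id,
        "stampRunId": stamp_run_id,
        "files": [
            {"input": inp, "inputSha256": sha256_hex(ib),
             "output": out, "outputSha256": sha256_hex(ob)}
            for inp, ib, out, ob in entries
        ],
    }
```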
Watermark Robustness Verification Suite
"As a product owner, I want automated robustness checks with clear pass/fail scores so that I can enforce share gates and avoid distributing fragile stamps."
Description

Introduces an automated verification pipeline that subjects stamped assets to common transformations—AAC/MP3 transcodes at typical bitrates, sample-rate conversion, 16-bit dithering, loudness normalization, small trims, and light time/pitch adjustments—and then attempts recovery to measure confidence. Produces a per-asset robustness score and pass/fail gating against workspace-defined thresholds, stores verification artifacts and logs for audit, and surfaces results in the UI and via API. Provides presets per distribution target and scheduled rechecks when algorithms or thresholds change.

Acceptance Criteria
AAC/MP3 Transcode Robustness Test
Given a stamped asset with a known recipient ID, When transcoded to AAC 256 kbps CBR and MP3 192 kbps CBR using reference encoders, Then the decoder recovers the correct recipient ID with confidence >= workspace.transcodeThreshold and zero false positives within the batch. Given the same asset, When transcoded to AAC 128 kbps and MP3 128 kbps, Then the decoder recovers the correct ID with confidence >= workspace.transcodeThreshold and the Transcode dimension score is recorded. Given any recovery confidence < workspace.transcodeThreshold, When the run completes, Then the Transcode dimension is marked Fail, artifacts (transcoded files and decoder logs) are stored, and the overall robustness score reflects the failure.
SRC and 16-bit Dither Robustness Test
Given a stamped asset at 48 kHz/24-bit, When sample-rate converted to 44.1 kHz and 96 kHz and bit-depth reduced to 16-bit with TPDF dithering, Then the decoder recovers the correct ID with confidence >= workspace.srcDitherThreshold for each transform. Given the same asset, When triangular and noise-shaped dithers are applied, Then detection meets or exceeds workspace.srcDitherThreshold and results are logged per variant. Given step completion, When the run finalizes, Then the SRC/Dither dimension score is computed, stored, and included in the overall robustness score.
Light Edits, Trims, and Loudness Normalization Robustness Test
Given a stamped asset, When leading and trailing trims up to ±2.0 seconds are applied, Then the decoder recovers the correct ID with confidence >= workspace.editsThreshold. Given the asset, When global gain adjustments of ±1.5 dB and normalization to -14.0 LUFS integrated are applied, Then detection meets or exceeds workspace.editsThreshold. Given the asset, When a 0.5% time-stretch OR a ±10 cents pitch shift is applied (individually), Then detection meets or exceeds workspace.editsThreshold. Given all edit variants processed, When artifacts are stored, Then each variant and its decoder report are present in the artifacts bundle with SHA-256 checksums.
Robustness Score Calculation and Threshold Gating
Given per-dimension outcomes, When verification completes, Then a weighted overall robustness score in the range [0,100] is computed per asset using workspace-configured weights and stored with algorithmVersion and profileVersion. Given computed scores, When overallScore >= workspace.overallThreshold AND all mandatory dimensions have status Pass, Then the verification status is Pass; otherwise it is Fail. Given gating is enabled for exports and review links, When the latest verification status is Fail, Then the action is blocked and a user-readable error lists failed dimensions and the required thresholds; the API returns HTTP 412 with a structured error payload.
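The score computation and gate reduce to a weighted average plus an all-mandatory-pass check; a sketch (dictionary shapes are illustrative):

```python
def robustness_verdict(dimensions, weights, overall_threshold, mandatory):
    """Weighted overall score in [0, 100] plus Pass/Fail gating: Pass only if
    overallScore >= threshold AND every mandatory dimension has status Pass.
    `dimensions` maps name -> {"score": 0-100, "status": "Pass"/"Fail"}."""
    total_weight = sum(weights.values())
    overall = sum(dimensions[d]["score"] * w
                  for d, w in weights.items()) / total_weight
    passed = overall >= overall_threshold and all(
        dimensions[d]["status"] == "Pass" for d in mandatory
    )
    return round(overall, 2), ("Pass" if passed else "Fail")
```

Note that a failing mandatory dimension forces Fail even when the weighted score clears the threshold, matching the gating rule above.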
Verification Artifacts, Logs, and Audit Trail Persistence
Given a verification run starts, When initialization occurs, Then the system records runId, initiator, assetVersionId, workspace snapshot (thresholds, weights, preset/profile), and algorithmVersion in the audit trail. Given each transform and decode step completes, When artifacts are persisted, Then the transformed file, decoder report (recipientId, confidence, timestamp), step parameters, and SHA-256 checksums are stored and linked to runId. Given retention policies, When cleanup runs, Then artifacts are retained for >= workspace.artifactRetentionDays and remain retrievable by runId via UI and API. Given results are available, When viewed in UI or fetched via API, Then the results include per-dimension scores/status, overall score/status, algorithmVersion, profileVersion, runId, startedAt/completedAt, and are filterable/sortable by status, score, and date; the API supports pagination and filtering by assetId and date range.
Presets per Distribution Target and Custom Profiles
Given the workspace selects the "Spotify" preset, When a verification run is triggered, Then the system applies the preset's defined transforms and thresholds and computes and reports scores accordingly. Given a custom profile with transforms, weights, and thresholds is created, When saved, Then it is versioned, validated, and can be set as the workspace default for future runs. Given a preset or profile is updated, When subsequent runs execute, Then the new version is used and the change is recorded in the audit trail; prior runs remain linked to their original profileVersion.
Scheduled Rechecks and Change-Triggered Reverification
Given a weekly schedule is configured, When the schedule elapses, Then the system queues and executes re-verification of targeted assets against current algorithms, presets, and thresholds, updating scores and statuses on completion. Given the stamp/decoder algorithmVersion or any workspace threshold/weight changes, When a change is published, Then affected assets are queued for recheck within 15 minutes and marked Pending until completion. Given a recheck completes, When results are written, Then prior runs remain accessible and the UI/API expose deltas (score change and status change) relative to the last completed run.
Recipient Identity and Key Management
"As a workspace admin, I want secure, signed recipient identifiers embedded with each share so that I can trust leak attribution and comply with privacy requirements."
Description

Creates a secure identity layer that maps each review link recipient to a cryptographically signed payload embedded in audio. Generates collision-resistant recipient IDs, signs payloads with workspace-scoped keys, embeds timestamps and link IDs for non-repudiation, and supports key rotation and revocation. Stores only minimal PII, encrypts sensitive data at rest, enforces cross-tenant isolation, and maintains an audit log of stamp events. Exposes admin controls for rotation policies and integrates with existing per-recipient analytics without exposing raw keys.

Acceptance Criteria
Recipient ID Generation and Payload Signing on Review Link Creation
Given a workspace and recipient are selected to create a review link When the link is created Then the system generates a collision-resistant recipientId of at least 128 bits with estimated collision probability < 1e-12 over 10M IDs per workspace And the payload contains {workspaceId, keyId, recipientId, linkId, iat (UTC ISO-8601), version} And the payload is signed using the active workspace-scoped private key (Ed25519 or ECDSA P-256) And the signature verifies with the corresponding workspace public key And the end-to-end operation completes within 200 ms p95 and 500 ms p99 under 100 RPS per region And no raw private keys, seed material, or unredacted PII are written to logs or analytics streams
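The payload shape and sign/verify flow can be sketched as below. HMAC-SHA256 stands in for the Ed25519/ECDSA signature named above so the example needs only the standard library; the structure of the signed blob is the same, and the function names are illustrative.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

def signed_payload(workspace_key: bytes, key_id, workspace_id,
                   recipient_id, link_id):
    """Build and sign the stamp payload {workspaceId, keyId, recipientId,
    linkId, iat, version}. Canonical JSON (sorted keys) makes the signature
    independent of field ordering."""
    payload = {
        "workspaceId": workspace_id, "keyId": key_id,
        "recipientId": recipient_id, "linkId": link_id,
        "iat": datetime.now(timezone.utc).isoformat(), "version": 1,
    }
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(workspace_key, body, hashlib.sha256).hexdigest()
    return payload, sig

def verify_payload(workspace_key: bytes, payload, sig) -> bool:
    """Re-serialize canonically and compare in constant time; any tampered
    field (e.g., a swapped recipientId) invalidates the signature."""
    body = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(workspace_key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```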
Signed Payload Embedding and Extraction for Exports and Streams
Given an audio asset and an active review link for a recipient When an export is generated or streaming is initiated Then the signed payload is embedded into the audio stamp and associated with the output file/stream When the stamped audio is later submitted for verification Then the system extracts the payload and validates the signature against the correct workspace public key And extracted fields equal the source values {workspaceId, keyId, recipientId, linkId} and iat is within ±5 minutes of the stored creation time And verification succeeds at ≥ 99.9% across supported codecs/transcodes (WAV 16/24-bit, MP3 320 kbps, AAC LC) under light edits (gain ±1 dB, trim ≤ 1 s) And per-recipient analytics receive only recipientId and linkId; no keys, signatures, or raw payload bytes are exposed to analytics sinks or clients
Workspace-Scoped Key Rotation Without Breaking Legacy Verification
Given an admin triggers key rotation for a workspace When rotation executes Then a new key pair is generated in HSM/KMS, assigned a new keyId, and marked Active, while the previous key becomes Verify-Only And all new stamps use the new Active keyId within 60 seconds of rotation with zero failed sign attempts And verification of stamps signed with any still-valid prior keyIds continues to succeed for the configured legacy window (default 365 days) at ≥ 99.99% success And rotation events are recorded in the audit log with actor, oldKeyId, newKeyId, timestamp, and policy reference
Emergency Key Revocation and System-wide Enforcement
Given a keyId is marked Revoked by an admin or automated compromise signal When revocation takes effect Then any signing attempt with that key is blocked with HTTP 403 and zero bytes stamped And verification of payloads signed by the revoked key returns status=Revoked with effectiveFrom timestamp and leak risk flag set And revocation status propagates to all signing and verification services within 60 seconds (eventual consistency ≤ 120 seconds) And the UI/API present a prompt to create or select a new Active key for continued operations, and both actions are captured in the audit log
Cross-Tenant Isolation for Keys, Payloads, and Recipient Data
Given two distinct workspaces A and B When any API call or internal process in A targets keys, recipient data, payloads, or audit entries belonging to B via direct ID or enumeration Then the system returns 404/403 and emits no cross-tenant data And signing uses only private keys scoped to the calling workspace; cross-workspace key material is never loaded into process memory And queries scoped to workspace A return 0 records from workspace B in automated isolation tests And at-rest records enforce workspaceId foreign keys with RLS/ABAC; bypass attempts fail and are logged with high-severity alerts
Minimal PII Storage and Encryption-at-Rest Controls
Given recipient creation or review link issuance When recipient details are persisted Then only minimal PII is stored: recipientId, email (salted SHA-256 hash plus encrypted canonical email), optional displayName And no phone numbers, physical addresses, or social handles are stored And sensitive columns are encrypted at rest with AES-256-GCM using KMS/HSM-managed keys; backups and replicas are encrypted with the same policy And decrypt access requires least-privilege service principals; all access attempts are logged and policy violations alert within 5 minutes And GDPR/CCPA export/delete requests for a recipient complete within 7 days, removing or irreversibly anonymizing PII while preserving aggregate analytics
Tamper-Evident Audit Logging of Stamp Lifecycle Events
Given any stamp lifecycle event occurs (sign, verify, rotation, revocation) When the event is committed Then an append-only record is written with timestamp (RFC3339 UTC), actor/service, workspaceId, keyId, recipientId, linkId, eventType, and outcome And each log record includes a SHA-256 hash of the previous record in the workspace stream; daily manifests publish a Merkle root for verification And audit write success is ≥ 99.99% with p95 write latency ≤ 100 ms; failures are retried and alert if unresolved within 60 seconds And authorized admins can query and export logs for a workspace and time range without accessing other workspaces' streams
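A minimal sketch of the per-workspace hash chain described above, assuming an in-memory list stands in for the append-only store; the daily Merkle-root manifests are omitted.

```python
import hashlib
import json

def append_record(stream: list, record: dict) -> dict:
    """Append a record, chaining it to the previous record's hash."""
    prev_hash = stream[-1]["hash"] if stream else "0" * 64
    body = dict(record, prevHash=prev_hash)
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    stream.append(body)
    return body

def verify_chain(stream: list) -> bool:
    """Recompute every hash and check each prevHash link."""
    prev = "0" * 64
    for rec in stream:
        unhashed = {k: v for k, v in rec.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(unhashed, sort_keys=True).encode()
        ).hexdigest()
        if rec["prevHash"] != prev or rec["hash"] != recomputed:
            return False
        prev = rec["hash"]
    return True

stream = []
append_record(stream, {"eventType": "sign", "keyId": "key_1", "workspaceId": "ws_1"})
append_record(stream, {"eventType": "rotation", "keyId": "key_2", "workspaceId": "ws_1"})
assert verify_chain(stream)

stream[0]["keyId"] = "key_X"  # any in-place edit breaks the chain
assert not verify_chain(stream)
```

This is what makes the log tamper-evident rather than tamper-proof: an attacker who rewrites one record must rewrite every later hash, which the published Merkle roots would expose.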
Leak Detection and Traceback Workflow
"As a label owner, I want to identify the recipient of a leaked file and generate a defensible report so that I can act quickly and confidently."
Description

Provides a detection service to ingest suspect audio (upload or URL), recover embedded identifiers, and link them to the original recipient, share event, and asset set. Offers a guided incident workflow with confidence indicators, timeline reconstruction, and an exportable evidence pack (hashes, detection logs, payload signature verification) to support enforcement. Includes role-based access controls, rate limiting to prevent abuse, and chain-of-custody logging for legal defensibility.

Acceptance Criteria
Ingest Suspect Audio (Upload or URL) and Detect Identifier
Given a signed-in user with Leak:Detect permission When they submit suspect audio via file upload (≤ 2 GB) in WAV, AIFF, FLAC, MP3, AAC, or OGG, or via a reachable HTTPS URL (direct file or pre-signed cloud link) Then the system validates reachability, media type, and size and returns specific errors (415-UNSUPPORTED_MEDIA_TYPE, 413-PAYLOAD_TOO_LARGE, 404-NOT_REACHABLE) when applicable And computes a SHA-256 of the retrieved bytes and records a chain-of-custody entry with userId, sourceType (upload|url), timestamp (UTC), and hash And runs SilentStamp detection and, on success, returns recipientId, shareEventId, assetSetId, detectionId, and confidenceScore (0.00–1.00) And returns a result state of Match|NoMatch|Error with machine-readable reason codes And completes detection within 120 seconds for files ≤ 500 MB under nominal load
Identifier Recovery After Common Transcodes and Edits
Given a standardized test corpus of stamped assets and their transformed derivatives When detection is run on derivatives including MP3 320 kbps, AAC 256 kbps, Ogg Vorbis q5, dithered 16-bit conversion, ±1 dB gain change, 0.5 s trimmed head/tail, resampled 48 kHz↔44.1 kHz, and mono fold-down Then the system recovers the correct recipientId for ≥ 95% of derivatives with confidenceScore ≥ 0.90 (High) And for remaining stamped derivatives, confidenceScore ≥ 0.70 (Medium/Low) and the result is not labeled High And the false positive rate on an unstamped corpus is ≤ 1% at the High threshold
Guided Incident Creation with Timeline Reconstruction
Given a detection result with state Match When the user selects "Open Incident" Then a new incident is created with status Open and links to recipientId, shareEventId, assetSetId, and detectionId And the incident view shows a reconstructed timeline including original share time, link views/downloads by recipient, any IndieVault re-shares, and detection submission time, each with UTC timestamps And the user can assign an owner, add notes, and update status through Open → Under Review → Confirmed|Dismissed → Closed, with each transition audit-logged And the incident supports attaching additional evidence files and maintains versioned edit history
Confidence Indicators and Thresholding
Given a completed detection When the result is presented Then the system provides confidenceScore (0.00–1.00) and a label derived from thresholds: High ≥ 0.90, Medium 0.70–0.89, Low < 0.70 And the UI displays the score, label, and a short rationale (e.g., match strength, corruption indicators) And administrators can adjust thresholds within 0.50–0.99, and changes are audit-logged and applied to subsequent detections only And API and UI present identical confidence values to 2 decimal places
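The score-to-label mapping follows directly from the stated defaults; a sketch, where the two-decimal rounding mirrors the requirement that UI and API present identical values and the function name is illustrative:

```python
# Score→label mapping from the stated defaults (High ≥ 0.90,
# Medium 0.70–0.89, Low < 0.70); thresholds are admin-adjustable.
def confidence_label(score: float, high: float = 0.90, medium: float = 0.70) -> str:
    if not 0.0 <= score <= 1.0:
        raise ValueError("confidenceScore must be within 0.00-1.00")
    score = round(score, 2)  # UI and API agree on 2-decimal presentation
    if score >= high:
        return "High"
    if score >= medium:
        return "Medium"
    return "Low"

assert confidence_label(0.93) == "High"
assert confidence_label(0.70) == "Medium"
assert confidence_label(0.69) == "Low"
```

Rounding before comparing keeps boundary scores consistent between what the API returns and what the UI displays.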
Exportable Evidence Pack with Signature Verification
Given an incident in state Under Review or Confirmed When the user selects "Export Evidence Pack" Then the system generates a ZIP within 60 seconds containing: PDF summary (case metadata, timeline, parties), JSON detection report (hashes, confidence, parameters, reason codes), original share-event metadata, chain-of-custody log excerpt, and payload signature verification report And the ZIP and PDF are digitally signed by IndieVault and include detached .sig files and the public certificate chain for verification And re-downloading the same export yields an identical SHA-256 hash And the export link is a signed URL that expires after 24 hours and is accessible only to users with Leak:Export permission
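The "re-downloading the same export yields an identical SHA-256 hash" clause implies a byte-deterministic archive. A sketch, assuming a fixed timestamp, sorted member order, and fixed compression settings (member names and contents here are placeholders):

```python
import hashlib
import io
import zipfile

def build_evidence_zip(members: dict) -> bytes:
    """Build a byte-stable ZIP: fixed order, fixed timestamps, fixed compression."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(members):  # deterministic member order
            info = zipfile.ZipInfo(name, date_time=(1980, 1, 1, 0, 0, 0))
            info.compress_type = zipfile.ZIP_DEFLATED
            zf.writestr(info, members[name])
    return buf.getvalue()

members = {
    "summary.pdf": b"%PDF-1.7 placeholder",
    "detection.json": b'{"confidence": 0.97}',
    "custody.log": b"placeholder excerpt",
}
a = hashlib.sha256(build_evidence_zip(members)).hexdigest()
b = hashlib.sha256(build_evidence_zip(members)).hexdigest()
assert a == b  # byte-identical re-exports
```

A naive `ZipFile.write` embeds the current wall-clock time in each member header, so two otherwise identical exports would hash differently; pinning `date_time` removes that nondeterminism.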
Role-Based Access Control for Detection and Incidents
Given organization-scoped permissions Leak:Detect, Leak:Investigate, and Leak:Export When a user without the required permission attempts to run detection, view/manage incidents, or export evidence Then the system returns 403 Forbidden and logs an access-denied audit entry with actor, action, and timestamp And users with Leak:Detect can submit detections; Leak:Investigate can view and manage incidents; Leak:Export can export evidence packs And permissions are enforced consistently across UI and API, and data from other organizations is not accessible (multi-tenant isolation)
Rate Limiting and Abuse Prevention on Detection Service
Given rate limits configured at 10 detections per minute per user and 200 per day per organization When requests exceed the limit Then the API responds with 429 Too Many Requests and includes a Retry-After header, and the UI displays a clear throttling message And limits reset on schedule, are visible to organization admins, and can be tuned by platform administrators And repeated 429s from a single IP trigger step-up verification (e.g., CAPTCHA) and are logged with reason and scope And rate limiting is enforced on both upload and URL ingestion endpoints
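The 10-detections-per-minute-per-user rule can be sketched as a sliding-window limiter; the in-memory dict stands in for whatever shared store production would use, and the Retry-After plumbing is assumed to live in the API layer.

```python
import time
from collections import deque

class SlidingWindowLimiter:
    def __init__(self, limit, window_s):
        self.limit = limit
        self.window_s = window_s
        self.hits = {}  # key -> deque of request timestamps

    def allow(self, key, now=None):
        """Return (allowed, retry_after_seconds)."""
        now = time.monotonic() if now is None else now
        q = self.hits.setdefault(key, deque())
        while q and now - q[0] >= self.window_s:
            q.popleft()  # drop hits that fell outside the window
        if len(q) >= self.limit:
            # Caller maps this to 429 + Retry-After header.
            return False, self.window_s - (now - q[0])
        q.append(now)
        return True, 0.0

limiter = SlidingWindowLimiter(limit=10, window_s=60.0)
results = [limiter.allow("user_1", now=float(t))[0] for t in range(11)]
assert results[:10] == [True] * 10 and results[10] is False
assert limiter.allow("user_1", now=60.0)[0] is True  # window has slid
```

A per-organization daily limit would be a second limiter instance with `limit=200, window_s=86400`, checked before the per-user one.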
Tamper-Evident Chain-of-Custody Logging
Given any action related to detection or incidents (ingest, detection result, status change, export) When the action occurs Then an append-only audit record is written containing actor, action, affected objects, previousHash, newHash, and UTC timestamp, chained via rolling hash And audit records are immutable; administrative corrections create compensating records without overwriting history And the chain integrity can be verified end-to-end by recomputing hashes and matches the value embedded in the evidence pack And querying the audit log by incidentId returns all related entries within 2 seconds at the 95th percentile
Export & Review Link Pipeline Integration and API
"As a developer integrating IndieVault, I want an API that returns stamped exports tied to recipient identities so that my app can deliver secure review links seamlessly."
Description

Integrates SilentStamp into all export and review link generation paths with clear status feedback and failure handling. Adds UI toggles and policy defaults (e.g., always stamp review links), ensures stamping is performed synchronously or via a short preflight queue with progress updates, and returns idempotent results per recipient and asset. Provides API endpoints and webhooks for third-party tools to request stamped renders, supports fallbacks for unsupported formats, and annotates analytics with stamp IDs for end-to-end traceability.

Acceptance Criteria
Synchronous stamping during manual export
Given a user initiates a manual export of a supported asset and SilentStamp is enabled and estimated processing time is below the synchronous threshold When the user clicks Export Then the export workflow includes a visible "Stamping" step in the progress UI And the file is delivered only after stamping completes successfully And the export summary lists a Stamp ID and checksum for each output And if stamping fails, the export is aborted, an actionable error with an error code is shown, and the user can retry or export without stamping per policy
Preflight queue with progress updates for long operations
Given an export or review link generation where the predicted processing exceeds the synchronous threshold or concurrency limits are reached When the job is submitted Then the job is placed in a preflight queue with a jobId and initial status "Queued" And the UI shows stage-based progress (Queued, Preparing, Stamping, Rendering, Uploading, Finalizing) with updates at least every 2 seconds And the user can cancel while status is Queued or Preparing And on completion, the user receives in-app and email notifications with result details And on timeout or failure, the job transitions to "Failed" with reason code, and a one-click retry is offered
Per-recipient stamping and idempotency for review links
Given a review link is generated for one or more recipients and the workspace policy enforces stamping for review links When the link is created Then each recipient's asset copy is stamped with a unique Stamp ID And repeated generation for the same asset version hash and recipient using the same Idempotency-Key or natural key returns the same Stamp ID and file hash And analytics events for plays and downloads are tagged with the Stamp ID and recipient id And expiring links maintain recipient-to-Stamp ID mapping until link expiry
Unsupported format fallback and clear messaging
Given a stamping request targets an asset in a format or settings not supported by SilentStamp When the request is processed Then the system either (a) converts to the nearest supported intermediate format according to policy and proceeds, or (b) skips stamping if policy forbids conversion And the UI/API response clearly indicates the action taken, target format, and reason code And original sample rate and channel count are preserved where possible; if not, a warning is shown And analytics for unstampable deliveries are flagged with "unstamped" and a reason
Stamped render API with webhooks and idempotency
Given a third-party client calls POST /v1/stamped-renders with assetId, recipientId, output options, and an Idempotency-Key header When the request is valid Then the API returns 202 with jobId and a status endpoint, and registers webhook callbacks for subscribed events And duplicate requests with the same Idempotency-Key return 200 with the original job/result without creating a new job And webhooks are signed (HMAC-SHA256) and delivered for events: queued, processing, stamped, completed, failed, with retries on 5xx and exponential backoff And the final result includes a download URL, Stamp ID, file checksum, content-length, mime type, and expiry timestamp
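The HMAC-SHA256 webhook signing named above can be sketched with the stdlib; the secret format and the header used to transport the signature are assumptions, not confirmed API surface.

```python
import hashlib
import hmac
import json

def sign_webhook(secret: bytes, body: bytes) -> str:
    """Compute the hex HMAC-SHA256 signature the sender attaches to a delivery."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_webhook(secret: bytes, body: bytes, signature: str) -> bool:
    # compare_digest avoids leaking the match position via timing.
    return hmac.compare_digest(sign_webhook(secret, body), signature)

secret = b"whsec_example"  # per-subscription secret (illustrative)
body = json.dumps({"event": "completed", "jobId": "job_123"}).encode()
sig = sign_webhook(secret, body)

assert verify_webhook(secret, body, sig)
assert not verify_webhook(secret, body + b" ", sig)  # tampered payload fails
```

Receivers should verify against the raw request bytes, not a re-serialized JSON object, since re-serialization can reorder keys and change the bytes being hashed.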
UI policy defaults and toggles reflect effective stamping policy
Given workspace or project policy defaults are set for stamping behavior When a user opens the Export or Review Link dialog Then the SilentStamp toggle reflects the effective policy (locked if enforced) and default state is applied And the dialog explains the policy via tooltip/help text and shows estimated time impact when enabled And user changes are persisted per project unless overridden by an enforced workspace policy And an audit log entry records policy changes with user, timestamp, and scope
End-to-end traceability via Stamp ID in analytics and reports
Given stamped assets are delivered via exports or review links When recipients play or download the assets Then analytics events include Stamp ID, recipient id, asset id, delivery channel, timestamp, and IP/country where permitted by policy And the analytics UI and API allow filtering and aggregation by Stamp ID to trace activity And the leak investigation view resolves a Stamp ID to recipient and delivery details in under 2 seconds at the 95th percentile

ArtTrace

Fingerprint cover art, one-sheets, and EPK PDFs per recipient using a robust, invisible mark that persists through typical resizing and recompression. Drop any suspected file back into IndieVault to pinpoint the exact recipient and link. Benefit: identify who shared visuals, not just audio, so you can protect full-campaign assets and avoid guesswork.

Requirements

Invisible Watermark Engine
"As an indie artist or manager, I want my visual assets to be invisibly fingerprinted per recipient so that I can trace leaks without degrading image quality or changing my workflow."
Description

Implement a robust, imperceptible fingerprinting engine for images (JPEG/PNG) and PDFs that embeds a unique, keyed mark per recipient without visible artifacts. The engine must operate in the frequency domain for images and target image streams within PDFs (with a fallback to per-page rasterize-and-recompose when required), preserving layout and file usability. It must meet quality constraints (e.g., SSIM ≥ 0.98 or PSNR ≥ 40 dB relative to the source) and resilience targets (recoverable after resizing 50%–200%, JPEG recompression quality ≥ 60, minor crops ≤ 10%, and format conversions between JPEG/PNG/PDF). The embedder must be deterministic given an asset ID, recipient/link fingerprint, and algorithm version; store algorithm metadata for future verification; and support adjustable strength profiles. The service should scale via worker pools, process batches, and expose an internal API usable by the review-link delivery pipeline and bulk export jobs, with average embed time ≤ 400 ms for a 2048×2048 image and ≤ 1.5 s for a 5-page PDF.

Acceptance Criteria
Per-Recipient Deterministic Embed and Accurate Decode
Given the same assetId, recipientId, linkId, algorithmVersion, and strengthProfile, When the engine embeds a watermark into the same source file multiple times, Then the resulting output file bytes are bitwise-identical and the decoder extracts the exact same payload 100% of the time. Given N=1000 unique recipient/link fingerprints for the same asset, When each is embedded, Then decoding any watermarked copy returns the correct recipientId and linkId with 100% precision and 0 collisions across the set. Given an embedded file, When decoded, Then the engine returns assetId, recipientId, linkId, algorithmVersion, and strengthProfile, and the same metadata is stored and queryable with the asset record. Given strengthProfile ∈ {low, medium, high}, When embedding pristine files, Then decoding succeeds 100% of the time for each profile and the selected profile is persisted in metadata. Given image watermarking, When inspecting DCT/DWT-domain coefficients during unit tests, Then the payload is present in designated frequency bands and spatial-domain per-pixel deviation remains within ±1 gray level for ≥ 99% of pixels.
Image Quality Preservation Thresholds
Given 2048×2048 JPEG/PNG sources, When watermarked with the default strengthProfile, Then SSIM ≥ 0.98 and PSNR ≥ 40 dB versus source for each output. Given any PNG input with alpha, When watermarked, Then the output remains PNG with identical pixel dimensions and alpha channel preserved. Given any image input, When watermarked, Then pixel dimensions and ICC color profile remain unchanged.
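The PSNR gate above is easy to make concrete. A pure-Python sketch on flat 8-bit pixel sequences (SSIM needs windowed local statistics and is omitted here):

```python
import math

def psnr(original, watermarked, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    assert len(original) == len(watermarked)
    mse = sum((a - b) ** 2 for a, b in zip(original, watermarked)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 20 * math.log10(max_value) - 10 * math.log10(mse)

source = [100, 150, 200, 250] * 256
stamped = [p + 1 for p in source]  # uniform ±1 gray-level perturbation
assert psnr(source, stamped) > 40  # ≈ 48 dB, well above the 40 dB gate
assert psnr(source, source) == float("inf")
```

A uniform ±1-level change yields MSE = 1 and therefore about 48 dB against an 8-bit peak, which is why the ±1 gray-level bound in the embed criterion comfortably satisfies the PSNR ≥ 40 dB requirement here.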
Image Resilience to Common Transformations
Given a watermarked image, When uniformly resized between 50% and 200% and decoded, Then the correct payload is recovered with ≥ 99% success across the test matrix. Given a watermarked JPEG recompressed at quality ≥ 60, When decoded, Then the correct payload is recovered with ≥ 99% success. Given a watermarked image cropped by ≤ 10% of total area from any combination of edges, When decoded, Then the correct payload is recovered with ≥ 95% success. Given format conversions between JPEG and PNG using standard tools, When decoded, Then the correct payload is recovered with ≥ 99% success. Given a corpus of 10,000 unwatermarked images, When decoded, Then the false-positive rate is ≤ 0.01%.
PDF Image-Stream Marking with Fallback Rasterization
Given a PDF containing image XObjects, When watermarked, Then marks are embedded in image streams while preserving text selectability, vector graphics, links, page count, page sizes, and document metadata. Given a PDF page where image-stream marking is unsupported, When processed, Then only the affected page(s) are rasterized and recomposed; the output PDF preserves page count and sizes and opens without errors in Acrobat, Preview, and Chrome. Given watermarked PDFs re-saved by common tools that recompress images, When decoded, Then the correct payload is recovered with ≥ 95% success. Given a pristine watermarked PDF, When decoded, Then the correct payload is recovered 100% of the time.
Suspected File Decode and Attribution
Given a user uploads a suspected image or PDF, When the decoder runs, Then it returns matched assetId, recipientId, linkId, algorithmVersion, and a confidence score; if no valid mark is present, it returns a definitive "no match" response. Given modified files within resilience bounds (resize, recompress, minor crop), When decoded, Then the correct attribution is returned with ≥ 97% success. Given a corpus of 10,000 non-IndieVault files, When decoded, Then the overall false-positive rate is ≤ 0.01% and no attribution is returned. Given a successful decode, When completed, Then an audit log entry is created containing file hash, decoded payload, timestamp, and requesting user ID.
Performance and Scalability Targets
Given a 2048×2048 image, When watermarked on a standard worker, Then average embed time ≤ 400 ms and p95 ≤ 700 ms across ≥ 500 samples. Given a 5-page PDF, When watermarked, Then average embed time ≤ 1.5 s and p95 ≤ 2.5 s across ≥ 200 samples. Given a worker pool of size 8, When embedding a batch of 500 images, Then sustained throughput is ≥ 12 images/second with zero job failures and queue wait p95 ≤ 2 seconds. Given transient failures (e.g., timeouts), When jobs are retried, Then at-least-once processing is guaranteed without producing duplicate outputs for the same idempotency key.
Internal API Integration for Delivery and Bulk Export
Given the review-link pipeline requests per-recipient embedding, When calling the internal API with assetId, recipientId, linkId, algorithmVersion, and strengthProfile, Then the API returns a jobId and upon completion provides a URL to the watermarked file plus stored algorithm metadata. Given repeated API calls with the same idempotency key and identical inputs, When processed, Then the service returns the existing artifact without re-embedding. Given a bulk export of 1,000 recipients, When submitted as a batch, Then 100% of jobs either complete successfully or return actionable per-item errors; the batch response includes per-item status codes and messages. Given invalid files or unsupported formats, When requested to embed, Then the API returns a 4xx error with a machine-readable error code and does not enqueue a job.
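The idempotency behavior above ("repeated API calls with the same idempotency key return the existing artifact without re-embedding") can be sketched as a result cache keyed by the idempotency key. The class and field names are illustrative; a production service would also reject a reused key whose inputs differ from the original request.

```python
import hashlib
import json

class EmbedService:
    def __init__(self):
        self._results = {}   # idempotency_key -> completed job result
        self.embeds_run = 0  # counts actual (expensive) embed operations

    def _embed(self, params):
        self.embeds_run += 1  # expensive watermarking would happen here
        digest = hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()
        return {"artifactUrl": f"/artifacts/{digest[:12]}", "params": params}

    def request_embed(self, idempotency_key, params):
        if idempotency_key in self._results:
            return self._results[idempotency_key]  # existing artifact, no re-embed
        result = self._embed(params)
        self._results[idempotency_key] = result
        return result

svc = EmbedService()
params = {"assetId": "a1", "recipientId": "r1", "linkId": "l1", "algorithmVersion": 3}
first = svc.request_embed("idem-123", params)
second = svc.request_embed("idem-123", params)
assert first is second and svc.embeds_run == 1
```

The same pattern gives the bulk-export path at-least-once safety: a retried item with the same key returns the cached artifact instead of producing a duplicate.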
Per-Recipient Fingerprints & Key Management
"As a manager, I want unique, secure fingerprints tied to each recipient and link so that I can definitively attribute leaks and rotate keys without re-uploading assets."
Description

Generate and manage unique, non-PII fingerprints per recipient, per link, and per asset, derived via HMAC from a rotated master key with per-asset salt. Maintain a secure mapping table (asset_id, campaign_id, link_id, recipient_id, fingerprint_id, algo_version, created_at, revoked_at) with access controls and audit trails. Support key rotation without re-uploading originals, re-issuance of links with new fingerprints, and revocation of compromised fingerprints. Ensure payload encodes only opaque identifiers; all PII remains server-side. Provide internal utilities to migrate algorithm versions and backfill fingerprints for existing campaigns.

Acceptance Criteria
Generate unique HMAC fingerprints per recipient, link, and asset
Given asset_id=A, campaign_id=C, link_id=L, recipient_id=R, master_key_version=V, and per-asset salt=S When requesting fingerprint generation for (A, C, L, R) Then the system computes fingerprint_id using an approved HMAC keyed with master key version V and including salt S And the fingerprint_id length equals the configured length for algo_version V And repeating the same request with the same inputs (A, C, L, R, V, S) returns the same fingerprint_id And changing any of A or L or R or V returns a different fingerprint_id And a mapping row is inserted with fields {asset_id=A, campaign_id=C, link_id=L, recipient_id=R, fingerprint_id, algo_version=V, created_at set, revoked_at null} And a uniqueness constraint prevents more than one active (revoked_at null) row for (asset_id, link_id, recipient_id, algo_version)
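The derivation described above (an HMAC over the asset, link, and recipient identifiers, keyed by the rotated master key, mixing in the per-asset salt) can be sketched as follows; the field separator and 128-bit truncation are assumptions, since the spec pins the length per algo_version.

```python
import hashlib
import hmac

def derive_fingerprint(master_key: bytes, asset_salt: bytes,
                       asset_id: str, link_id: str, recipient_id: str) -> str:
    """Opaque, non-reversible fingerprint_id from identifiers + salt + key version."""
    msg = b"|".join(
        [asset_salt, asset_id.encode(), link_id.encode(), recipient_id.encode()]
    )
    return hmac.new(master_key, msg, hashlib.sha256).hexdigest()[:32]  # 128 bits

key_v1 = b"master-key-v1"  # illustrative; real keys come from KMS rotation
salt = b"per-asset-salt"

f1 = derive_fingerprint(key_v1, salt, "asset_A", "link_L", "recip_R")
# Deterministic: identical inputs always yield the same fingerprint_id.
assert f1 == derive_fingerprint(key_v1, salt, "asset_A", "link_L", "recip_R")
# Changing the recipient, link, or key version changes the output.
assert f1 != derive_fingerprint(key_v1, salt, "asset_A", "link_L", "recip_S")
assert f1 != derive_fingerprint(b"master-key-v2", salt, "asset_A", "link_L", "recip_R")
```

Because HMAC is one-way, the fingerprint_id carries no PII; attribution works only through the server-side mapping table, which is exactly the isolation the requirement calls for.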
Secure mapping table with access controls and audit trails
Given an authenticated principal without fingerprint.read permission When attempting to read the mapping table Then access is denied (HTTP 403 or equivalent) and the attempt is logged in the audit trail And an authenticated principal with fingerprint.read may query by fingerprint_id and receives only {asset_id, campaign_id, link_id, recipient_id, fingerprint_id, algo_version, created_at, revoked_at} And write operations require fingerprint.write and are denied otherwise And all read and write operations record audit entries with actor_id, action, resource, before/after hashes, timestamp, and outcome And row-level security prevents cross-tenant access to rows outside the principal’s tenant scope And database encryption at rest is enabled for the tablespace or column family containing fingerprint_id and salts (verified via configuration)
Payloads expose only opaque identifiers; no PII outside server
Given an API client creates or fetches a review link for recipient R When inspecting the URL, response body, and any embedded watermark payload Then no PII (e.g., email, name) is present in the payload or URL parameters And only opaque identifiers (e.g., link_id, fingerprint_id, algo_version) are included And fingerprint_id is non-reversible to PII by design (one-way HMAC) and no server endpoint exposes a reversal And logs and analytics events omit PII in any payload fields related to fingerprinting
Key rotation with re-issuance without original re-upload
Given the active master key is rotated from version V to version V+1 and the prior key V is retained for verification When reissuing fingerprints for links in campaign C without re-uploading assets Then new mapping rows are created with algo_version=V+1 and new fingerprint_ids distinct from those created under V And existing links can be reissued to new URLs carrying the V+1 fingerprint_ids And historical analytics tied to old links remain preserved; new analytics accrue to reissued links And verification accepts both V and V+1 fingerprints until V reaches configured end-of-life And an operational job exists to bulk reissue fingerprints with progress metrics and without requiring asset re-uploads
Revocation of compromised fingerprints with forensics preserved
Given a fingerprint_id F is suspected compromised When an authorized user revokes F Then the corresponding mapping row’s revoked_at is set to the current timestamp atomically And any access via the associated link_id is blocked within latency SLO And internal forensic lookup by fingerprint_id F still resolves to its mapping data and indicates revoked=true And a replacement fingerprint_id can be generated for the same (asset_id, link_id, recipient_id) using the current master key version, creating a new active row And bulk revocation by campaign_id or link_id processes all affected rows and records audit entries per row
Algorithm version migration and backfill utilities
Given existing campaigns with missing fingerprints for target algo_version T When running the backfill utility in dry-run mode Then it reports counts of rows to create by campaign and detects any conflicts without writing data When running in execute mode Then it creates missing mapping rows idempotently (re-running produces zero additional rows) And it respects configured batch sizes, retries transient failures, and resumes from checkpoints after interruption And it completes within the defined performance budget for N rows (e.g., ≤ X rows/sec) and emits progress and error metrics And no existing active rows are modified except to populate algo_version where explicitly configured by a migration plan
Verification resolves recipient and link from provided fingerprint
Given a valid pair (fingerprint_id F, algo_version V) extracted by the ArtTrace service from a file When the verification API is called with F and V by an internal authorized principal Then the system returns exactly one mapping row with asset_id, campaign_id, link_id, recipient_id and the revoked flag And if F is revoked, the response includes revoked=true and HTTP 200; if unknown, HTTP 404 is returned And if multiple rows would match F and V, the system returns HTTP 409 and raises an alert And verification latency meets the configured SLO (e.g., p50 ≤ 50 ms, p95 ≤ 200 ms) under load QPS=Q And the endpoint is rate-limited and requires fingerprint.read scope
On-the-Fly Watermarked Delivery
"As a sender using review links, I want watermarked files generated automatically per recipient so that every visual download is traceable without extra steps."
Description

Integrate the watermark engine with IndieVault review links to generate and serve watermarked files per recipient request automatically. Ensure all downloadable and previewed visuals (cover art, one-sheets, EPK PDFs) are fingerprinted at request time or served from a short-lived cache aligned to link expiry. Support folder/package downloads (ZIP) with all contained visuals watermarked. Enforce watermarking for external recipients while allowing internal bypass for trusted team roles when configured. Meet delivery SLOs (time-to-first-byte ≤ 1.0 s for cold watermark of a 2 MB image; ≤ 3 s for a 10-page PDF), with background warming for high-traffic campaigns and CDN caching of recipient-specific artifacts respecting expiry.

Acceptance Criteria
On-Demand Per-Recipient Watermarking for Image Previews and Downloads
Given an external recipient with a valid review link to a 2 MB JPEG/PNG cover image and watermarking enabled When the recipient requests a preview or download Then a recipient-specific watermarked image is generated on-the-fly if not present in cache And the delivered file, when re-uploaded to IndieVault ArtTrace, resolves to the exact recipient and link ID And no unwatermarked variant is served to the external recipient And the asset URL is recipient-scoped and expires with the review link
Watermarked Viewing and Download of PDFs and One-Sheets
Given an external recipient with a valid review link to a 10-page PDF (EPK/one-sheet) When they open the in-app viewer Then page renders are derived from a recipient-watermarked source And on download the PDF saved is recipient-watermarked And the watermark persists after typical resizing and recompression And re-upload of the downloaded PDF or an exported page image after typical recompression resolves to the same recipient and link in ArtTrace
Watermarked ZIP Package Delivery for Folders
Given a recipient requests a ZIP of a folder/package that includes images and PDFs via a review link When the ZIP generation starts Then each contained visual is recipient-watermarked prior to or during streaming And the ZIP contains no unwatermarked visuals And filenames match the source items without exposing internal storage paths or IDs And the ZIP download URL is recipient-scoped and expires with the review link
Performance SLOs for Cold Watermarking
Given the watermark artifact is not cached and the engine is cold When an external recipient requests a 2 MB image Then time-to-first-byte at the CDN edge is ≤ 1.0 s at p95 And when an external recipient requests a 10-page PDF Then time-to-first-byte at the CDN edge is ≤ 3.0 s at p95
Cache, Expiry Alignment, CDN, and Background Warming
Given a recipient-specific watermarked artifact has been generated When the same recipient re-requests it before link expiry Then it is served from a short-lived cache or CDN and never past the link’s expiry And upon link expiry or revocation, subsequent requests return 410 Gone and cached artifacts are purged or allowed to expire without serving And CDN cache keys include link ID, recipient ID, and asset hash to prevent cross-recipient cache hits And for configured high-traffic campaigns, background warming pre-generates recipient-specific artifacts for targeted assets before the send window, achieving ≥ 70% cache hit ratio during the first 10 minutes
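The cross-recipient cache isolation above reduces to hashing all three required inputs into the key; the delimiter and hash choice here are illustrative.

```python
import hashlib

def cache_key(link_id: str, recipient_id: str, asset_hash: str) -> str:
    """CDN cache key covering link ID, recipient ID, and asset hash."""
    raw = f"{link_id}\x1f{recipient_id}\x1f{asset_hash}"  # \x1f = unit separator
    return hashlib.sha256(raw.encode()).hexdigest()

k1 = cache_key("link_L", "recip_A", "sha256:abc")
k2 = cache_key("link_L", "recip_B", "sha256:abc")
assert k1 != k2  # two recipients can never share a cached artifact
assert k1 == cache_key("link_L", "recip_A", "sha256:abc")  # stable on re-request
```

Including the asset hash also means a re-rendered asset version naturally misses the old cache entry instead of serving a stale watermark.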
Internal Bypass Controls and Enforcement for External Recipients
Given watermark bypass is enabled for trusted internal roles on a review link When a signed-in internal user with a trusted role previews or downloads a visual Then the unwatermarked original is served And access logs record the bypass event with user, link, asset, and timestamp And when an external recipient accesses the same link Then a watermarked artifact is always served And changing the bypass setting invalidates relevant cached artifacts within 60 seconds
Safe Failure Handling of Watermarking Pipeline
Given the watermark engine fails or times out during request processing for an external recipient When the recipient requests a visual Then the system does not serve an unwatermarked asset and instead returns a retriable error page or 503 with Retry-After And an alert is emitted and the event is logged with correlation ID and recipient, link, and asset references And automatic retries are attempted up to 3 times with exponential backoff without violating link expiry And once the engine recovers, a subsequent request serves the correct recipient-watermarked artifact
Drag-and-Drop Verification & Evidence Report
"As a manager, I want to drop a suspected file into IndieVault and instantly see who it was sent to so that I can address leaks quickly with defensible evidence."
Description

Provide a UI and API where users can drop or upload a suspected image or PDF to extract and verify the embedded fingerprint, returning the matched recipient, link, campaign, timestamp, and algorithm version with a confidence score. Support bulk verification and common formats (JPEG, PNG, PDF). Offer an exportable evidence report (PDF and JSON) that includes the uploaded file hash, verification results, server timestamp, and a signed attestation for legal/compliance use. Gracefully handle near-miss cases by surfacing top candidates with confidence and detected transformations. Include rate limiting, size limits, and secure temporary storage with automatic purge.
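A minimal sketch of assembling the verification response described above — field names mirror the acceptance criteria, but the function and its shape are illustrative, not the actual implementation:

```python
import hashlib
from datetime import datetime, timezone
from typing import Optional

def build_verification_result(file_bytes: bytes, match: Optional[dict],
                              algorithm_version: str = "v1") -> dict:
    """Assemble a verification response: uploaded-file hash, UTC server
    timestamp, and either the matched fields or a NO_MATCH status."""
    result = {
        "uploadedFileSha256": hashlib.sha256(file_bytes).hexdigest(),
        "serverTimestamp": datetime.now(timezone.utc).isoformat(),
        "algorithmVersion": algorithm_version,
    }
    if match is None:
        result.update({"status": "NO_MATCH", "confidence": 0.0})
    else:
        result.update({"status": "MATCHED", **match})
    return result

no_match = build_verification_result(b"suspect bytes", None)
assert no_match["status"] == "NO_MATCH" and no_match["confidence"] == 0.0
```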

Acceptance Criteria
Single-File Drag-and-Drop Verification (UI)
Given an authenticated user on the ArtTrace verification screen, When they drag-and-drop a single supported file (JPEG, PNG, or PDF) within the configured max size limit, Then the UI shows an upload/progress state and the server stores the file in encrypted temporary storage. Given the upload completes, When fingerprint extraction and verification finishes, Then the UI and API return recipientId, recipientName, linkId, campaignId, matchedAt (UTC ISO 8601), algorithmVersion, confidence (0–1 float), and uploadedFileSha256, and the UI displays these fields. Given no valid fingerprint is found, When processing completes, Then the UI shows "No match" with confidence=0 and provides an action to view near-miss candidates. Given the file type is unsupported or exceeds the configured size, When the user drops the file, Then the UI prevents upload and the API responds with HTTP 415 (unsupported type) or 413 (payload too large) including machine-readable error codes and human-readable messages.
Bulk Verification of Multiple Files
Given an authenticated user, When they drag-and-drop multiple files up to the configured bulkMaxFiles and within per-file limits, Then the system queues and processes them concurrently and displays per-file and overall progress. Given processing completes, When results are ready, Then the UI shows a per-file status (MATCHED | NO_MATCH | NEAR_MISS | ERROR) and the API returns a JSON array with one result per input including originalFilename and sha256. Given duplicate files (identical sha256) are included in the batch, When processing occurs, Then work is deduplicated and results for duplicates include deduplicated=true. Given some files fail to process, When the batch finishes, Then successful results are returned and failed items include machine-readable error codes and messages without blocking the rest.
Near-Miss Candidate Surfacing
Given a file’s highest confidence is below the configured exactMatchThreshold but above the nearMissThreshold, When verification completes, Then the UI and API return a candidates[] list of up to topK (default 5) ordered by confidence descending. Given candidates are returned, When displayed, Then each candidate includes recipientId, linkId, campaignId, algorithmVersion, confidence, and detectedTransformations (e.g., resize, recompress, format conversion, crop) with parameters where available. Given the user selects candidates for inclusion, When exporting an evidence report, Then the selected candidates are included in a "Near-Miss Candidates" section with confidences and transformations.
Evidence Report Export (PDF & JSON, Signed)
Given at least one verification result (single or bulk) exists in the session, When the user requests Export Evidence, Then two files are produced for download: PDF and JSON with filenames arttrace-evidence-YYYYMMDD-<shortSha>.pdf and .json. Given the export is generated, When inspected, Then both files include serverTimestamp (UTC ISO 8601), uploadedFileSha256, verificationResults (per file: status, recipient/link/campaign IDs if matched, algorithmVersion, confidence), and the requesting account identifier. Given the export is generated, When validated cryptographically, Then the JSON is signed as JWS (alg=EdDSA) and the PDF includes an embedded digital signature and visible "IndieVault Attestation" seal; public keys are retrievable from the keys endpoint and signatures verify successfully. Given no match occurred for a file, When exporting, Then the report clearly states "No fingerprint match found" and includes any near-miss candidates if present.
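The export filename convention above can be sketched as follows; the spec leaves <shortSha> unspecified, so this sketch assumes the first 8 hex characters of the uploaded file's SHA-256:

```python
import hashlib
from datetime import datetime, timezone

def evidence_filename(file_bytes: bytes, ext: str,
                      when: datetime = None) -> str:
    """Build arttrace-evidence-YYYYMMDD-<shortSha>.<ext>; <shortSha> is
    assumed here to be the first 8 hex chars of the file's SHA-256."""
    when = when or datetime.now(timezone.utc)
    short_sha = hashlib.sha256(file_bytes).hexdigest()[:8]
    return f"arttrace-evidence-{when:%Y%m%d}-{short_sha}.{ext}"

name = evidence_filename(b"suspected leak", "json",
                         datetime(2025, 1, 15, tzinfo=timezone.utc))
assert name.startswith("arttrace-evidence-20250115-") and name.endswith(".json")
```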
Verification API v1
Given a valid API token, When a client POSTs to /v1/arttrace/verify with multipart/form-data containing file or files[], Then the API responds 200 with a results[] where each object contains originalFilename, mimeType, sizeBytes, sha256, status (MATCHED|NO_MATCH|NEAR_MISS|ERROR), algorithmVersion, confidence, matchedRecipientId/linkId/campaignId when applicable, matchedAt (UTC ISO 8601), and candidates[] when status=NEAR_MISS. Given an Idempotency-Key header is provided and an identical request is retried within 24 hours, When the same file content is submitted, Then the API returns the original response without reprocessing and includes idempotencyKey in the response. Given authentication is missing or invalid, When the endpoint is called, Then the API returns 401 with a WWW-Authenticate header; if the token lacks required scope, return 403. Given an unsupported media type or oversized payload is submitted, When processed, Then the API returns 415 or 413 respectively with machine-readable error codes and human-readable messages.
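The Idempotency-Key replay behavior can be sketched with an in-memory cache (a production implementation would use shared storage and also key on the request's content hash, which this sketch omits):

```python
import time
from typing import Optional

class IdempotencyCache:
    """Replay a stored response when the same Idempotency-Key is retried
    within the TTL (24 h per the criteria); in-memory sketch only."""
    def __init__(self, ttl_seconds: float = 24 * 3600):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (stored_at, response)

    def get(self, key: str, now: Optional[float] = None):
        now = time.time() if now is None else now
        entry = self._store.get(key)
        if entry and now - entry[0] < self.ttl:
            return entry[1]  # original response, no reprocessing
        return None

    def put(self, key: str, response: dict, now: Optional[float] = None):
        now = time.time() if now is None else now
        self._store[key] = (now, response)

cache = IdempotencyCache()
cache.put("abc", {"status": "MATCHED", "idempotencyKey": "abc"}, now=0.0)
assert cache.get("abc", now=3600.0)["status"] == "MATCHED"  # replayed
assert cache.get("abc", now=25 * 3600.0) is None            # past 24 h TTL
```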
Rate Limiting and Abuse Protection
Given a single account, When more than the configured limit (e.g., 60 verify requests/min) are sent to /v1/arttrace/verify, Then subsequent requests receive HTTP 429 with Retry-After and X-RateLimit-* headers until the window resets and the UI shows a non-blocking "Rate limit reached" notice. Given a client honors Retry-After, When they retry after the specified delay, Then the request is accepted and processed normally. Given repeated failure patterns (e.g., >10 errors/min), When detected, Then server applies temporary backoff and records an audit log entry with accountId, reason, and window.
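The per-account limit and header behavior above can be sketched with a fixed-window counter (the criteria name the limit but not the algorithm, so the fixed-window choice is an assumption):

```python
import math

class FixedWindowLimiter:
    """60 requests per 60 s window per account (values from the criteria);
    returns an allow flag plus X-RateLimit-*/Retry-After header values."""
    def __init__(self, limit: int = 60, window: float = 60.0):
        self.limit, self.window = limit, window
        self._counts = {}  # (account, window_index) -> count

    def check(self, account: str, now: float):
        idx = int(now // self.window)
        count = self._counts.get((account, idx), 0) + 1
        self._counts[(account, idx)] = count
        headers = {
            "X-RateLimit-Limit": str(self.limit),
            "X-RateLimit-Remaining": str(max(self.limit - count, 0)),
        }
        if count > self.limit:
            # Seconds until the current window resets.
            headers["Retry-After"] = str(math.ceil((idx + 1) * self.window - now))
            return False, headers  # caller responds HTTP 429
        return True, headers

lim = FixedWindowLimiter()
results = [lim.check("acct_1", now=10.0)[0] for _ in range(61)]
assert results[:60] == [True] * 60 and results[60] is False
```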
Secure Temporary Storage and Auto-Purge
Given an uploaded file is received, When stored server-side, Then it is encrypted at rest (AES-256) and accessed only within the uploader’s account scope; all transfers occur over TLS 1.2+. Given the temporary storage TTL (24h) elapses, When the purge job runs (at least hourly), Then the original upload and derived artifacts are irreversibly deleted; attempts to access them return 404 and a purge event is logged including sha256 and timestamp. Given an evidence report was exported, When the temporary file is later purged, Then the report remains verifiable via embedded hashes and signatures without requiring the original file. Given the service restarts or recovers from failure, When it comes back online, Then orphaned temp files older than TTL are detected and purged within 1 hour.
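The hourly purge job above can be sketched as a TTL sweep; a real job would also record each file's sha256 in the purge log before deletion, which this sketch leaves out:

```python
import os
import pathlib
import tempfile
import time
from typing import Optional

def purge_expired(tmp_dir: str, ttl_seconds: float = 24 * 3600,
                  now: Optional[float] = None) -> list:
    """Delete temp uploads older than the TTL and return purge-log entries
    (a production job would hash each file before deleting it)."""
    now = time.time() if now is None else now
    purged = []
    for p in pathlib.Path(tmp_dir).iterdir():
        if p.is_file() and now - p.stat().st_mtime > ttl_seconds:
            purged.append({"path": p.name, "purged_at": now})
            p.unlink()
    return purged

# Demo: a file with an epoch-0 mtime is long past the 24 h TTL.
tmp = tempfile.mkdtemp()
stale = pathlib.Path(tmp) / "upload.bin"
stale.write_bytes(b"x")
os.utime(stale, (0, 0))
assert purge_expired(tmp)[0]["path"] == "upload.bin" and not stale.exists()
```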
Robustness & Quality Assurance Suite
"As an artist or manager, I want the watermark to survive common resizing and recompression so that I can rely on it for attribution if assets leak."
Description

Build an automated test harness that embeds fingerprints into sample assets and then subjects them to real-world transformations (resizing, recompression at multiple JPEG qualities, light crops, rotations, PDF re-save, format conversions, and basic screenshot capture) to validate recoverability and visual quality. Define acceptance thresholds for detection accuracy (e.g., ≥ 99% at target transformations) and visual metrics (SSIM/PSNR) with per-algorithm-version baselines. Integrate into CI with regression gates, produce comparative reports across algorithm versions/strength profiles, and flag assets that risk visible artifacts or sub-threshold robustness before delivery.
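PSNR is one of the visual metrics the harness enforces; a minimal version over flat pixel sequences looks like this (a real harness would operate on decoded image arrays and compute SSIM alongside it):

```python
import math

def psnr(original, transformed, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two equal-length pixel
    sequences; higher means the transformed asset is closer to the original."""
    mse = sum((a - b) ** 2 for a, b in zip(original, transformed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# Identical pixels give infinite PSNR; a 1-level perturbation stays well
# above the 40 dB threshold used in the criteria below.
assert psnr([10, 20, 30], [10, 20, 30]) == float("inf")
assert psnr([10, 20, 30], [11, 20, 30]) > 40
```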

Acceptance Criteria
JPEG Resizing & Recompression Robustness
Given a corpus of at least 120 image assets (PNG/JPEG) with unique per-recipient fingerprints embedded using algorithm version V and strength profile S And a transformation matrix of resize scales {50%, 75%, 100%} and JPEG qualities {95, 85, 75, 65, 55} When the harness applies every combination to each asset and attempts fingerprint recovery Then detection accuracy per combination and overall is >= 99.0% And the recovered recipient ID and link match the ground truth for all true positives And median recovery time per image on the CI reference runner is <= 500 ms And per-file, per-combination JSON/CSV results with detection outcome, confidence score, and recovery time are exported as a CI artifact
Light Crop & Rotation Robustness
Given at least 100 image assets with embedded fingerprints And crop operations of up to 5% from any single edge and rotations of ±1° and ±2° with auto-canvas fit When the harness applies each transformation and attempts fingerprint recovery Then overall detection accuracy across all variants is >= 99.0% And no individual variant type falls below 98.5% detection accuracy And the recovered recipient ID and link match the ground truth for all true positives And median recovery time per image on the CI reference runner is <= 600 ms
Screenshot Capture Robustness
Given at least 80 image assets with embedded fingerprints and a simulated screenshot pipeline (downscale to 0.8–1.0x with linear filtering, composited on sRGB background, PNG encode) When simulated screenshots are generated and fingerprints are recovered Then overall detection accuracy is >= 99.0% And false-positive rate against 200 negative-control images is <= 0.1% And median recovery time per image on the CI reference runner is <= 700 ms
PDF Re-save & Format Conversion Robustness
Given at least 60 PDF assets (one-sheets/EPKs) with embedded per-recipient fingerprints And re-save pipelines using Ghostscript, Quartz, and Acrobat Distiller, plus conversion to PDF/A and export of first page to JPEG at qualities {95, 85, 75} When all transformations are applied and fingerprint recovery is attempted from the resulting PDFs and JPEGs (rasterized PDFs at 300 DPI where needed) Then detection accuracy per transformation family and overall is >= 99.0% And the recovered recipient ID and link match the ground truth for all true positives And for image comparisons, SSIM >= 0.98 or PSNR >= 40 dB versus original artwork images
Visual Quality Thresholds & Baselines Enforcement
Given canonical image/PDF test sets and stored per-algorithm-version baselines And fingerprints embedded at strength profiles {low, medium, high} When metrics are computed for (a) embed-only (no transform) and (b) target transformation matrices Then for embed-only images SSIM >= 0.99 or PSNR >= 42 dB for at least 99% of assets And for post-transform images SSIM >= 0.96 or PSNR >= 36 dB for at least 95% of assets And no metric degrades vs the stored baseline by more than 0.002 SSIM or 0.5 dB PSNR per asset cohort; otherwise the run fails and records regressions
CI Regression Gates & Comparative Reporting
Given CI jobs triggered on pull requests and nightly builds with access to the baseline algorithm version V and candidate version V′ When the harness executes the full transformation matrix and visual metrics for both versions across strength profiles Then the pipeline fails if any target transformation category drops below 99.0% detection accuracy or regresses by >0.5 percentage points vs baseline And the pipeline fails if SSIM/PSNR regress by more than 0.002 SSIM or 0.5 dB vs baseline or fall below absolute thresholds And comparative HTML and JSON reports (including per-profile charts and top-N regressions) are published as CI artifacts and linked from the job summary
Pre-Delivery Artifact/Risk Flagging
Given assets queued for delivery with selected strength profile and their simulated transformation results When any asset is predicted to violate detection accuracy (threshold 99.0%) or visual quality thresholds (SSIM/PSNR) based on harness outcomes Then the system marks the asset as At Risk, blocks delivery in API/UI, and includes reason codes (threshold type, observed value, required value) And suggests the minimal strength/profile change that satisfies thresholds and provides a one-click re-test action And writes an audit log entry including asset ID, algorithm version, thresholds violated, and user override (if any)
Watermark Audit Logging & Analytics Integration
"As a campaign manager, I want watermark events visible in my analytics so that I can correlate distribution history with suspected leaks and take action."
Description

Record watermark embed and verification events with correlation to asset, campaign, recipient, and link. Surface watermark status and algorithm version in asset detail and send-history views, and tie verification events into per-recipient analytics. Provide filters and exports to analyze which recipients received which fingerprint versions and when, enabling root-cause analysis of leaks. Anonymize and aggregate non-sensitive metrics for system health and adoption dashboards without exposing recipient PII.
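The audit-record shape can be sketched as a dataclass; field names follow the acceptance criteria, while the event ID and idempotency handling are simplified placeholders:

```python
import uuid
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class WatermarkAuditEvent:
    """One embed/verification audit record correlating asset, campaign,
    recipient, and link (nullable fields from the criteria omitted)."""
    event_type: str        # "embed" | "verification"
    asset_id: str
    campaign_id: str
    recipient_id: str
    link_id: str
    algorithm_version: str
    outcome: str           # "success" | "failure"
    event_timestamp_utc: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))

evt = WatermarkAuditEvent("embed", "ast_1", "cmp_1", "rcp_1", "lnk_1",
                          "v2.3", "success")
record = asdict(evt)
assert record["event_type"] == "embed" and "event_timestamp_utc" in record
```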

Acceptance Criteria
Embed Event Audit Log with Asset/Campaign/Recipient/Link Correlation
Given a watermark embed is initiated for an asset via ArtTrace for a specific recipient and link When the embed operation completes (success or failure) Then an audit record is created with fields: event_type=embed, asset_id, campaign_id, recipient_id, link_id, watermark_id, algorithm_version, event_timestamp_utc (ISO-8601), outcome (success|failure), actor_user_id (nullable), output_checksum (nullable), error_code (nullable) And the record is queryable by any of asset_id, campaign_id, recipient_id, or link_id within 5 seconds of write And retries of the same embed request do not produce duplicate records (idempotency key honored)
Verification Event Audit Log and Attribution
Given a suspected file is submitted to IndieVault for watermark verification When the system evaluates the file Then an audit record is created with fields: event_type=verification, matched (true|false), match_confidence (0.0–1.0, nullable if failed early), detected_watermark_id (nullable), matched_asset_id (nullable), campaign_id (nullable), recipient_id (nullable), link_id (nullable), verification_algorithm_version, event_timestamp_utc, uploader_user_id, error_code (nullable) And when matched=true and confidence ≥ 0.95, the matched recipient_id and link_id correspond to the original embed event for the detected_watermark_id And the verification record is retrievable by detected_watermark_id and by matched_asset_id within 5 seconds
Surface Watermark Status & Algorithm Version in Asset Detail and Send History
Given an asset with one or more embed events exists When a user opens the asset detail view Then the UI displays a Watermark section showing: latest_status (e.g., Fingerprinted/Failed/Not Started), latest_embed_timestamp_utc, algorithm_version used for the latest successful embed, total_fingerprinted_recipients, and a link to view all embed events And failed embeds display an error indicator with error_code available in a tooltip or detail drawer Given a campaign's send-history is viewed When per-recipient rows render Then each row shows watermark_status, embed_timestamp_utc (if any), and algorithm_version for that recipient/link
Tie Verification Events into Per-Recipient Analytics
Given the analytics view for a recipient is opened When watermark verification events exist for assets sent to that recipient Then the activity timeline lists entries labeled "Watermark Verified" including: event_timestamp_utc, asset_name/thumbnail, link_id, match_confidence, verification_source (upload|automated) And the KPI summary includes counts of verifications and average match_confidence for the selected date range And timeline filters by asset and date range include/exclude verification events accordingly
Filters and CSV Export by Fingerprint Version and Time Window
Given a user opens the Watermark Audit view When the user applies filters for asset_id/campaign_id/recipient_id/link_id/algorithm_version/date_range Then the results reflect all active filters and return the first page within 2 seconds for result sets ≤ 5,000 rows And pagination correctly advances through the full result set When the user clicks Export CSV for the current filtered set Then a CSV is generated within 60 seconds for up to 200,000 rows with columns: event_type, asset_id, campaign_id, recipient_id, link_id, watermark_id, algorithm_version, event_timestamp_utc, outcome, match_confidence, error_code And the CSV accurately shows which recipients received which algorithm_version and when
Anonymized Aggregated Metrics for System Health Dashboard
Given a user with access opens the System Health & Adoption dashboard When metrics are fetched Then only aggregated, non-PII metrics are returned and displayed, including: daily_embed_count, embed_success_rate, daily_verification_count, avg_match_confidence, campaigns_with_arttrace_enabled And the dashboard payload and UI contain no recipient PII (no names, emails, phone numbers, or link URLs); recipient_id is never present And aggregation enforces k-anonymity with k ≥ 5 for any segmented breakdown; segments violating k are grouped into "Other" And metrics are updated at least every 15 minutes and reflect audit logs within that latency
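The k-anonymity grouping rule above can be sketched as follows; note that stricter deployments might additionally suppress an "Other" bucket that itself falls below k, which this sketch does not do:

```python
def k_anonymize(segment_counts: dict, k: int = 5) -> dict:
    """Group any segment with fewer than k members into "Other" so small
    cohorts cannot identify individual recipients."""
    out, other = {}, 0
    for segment, count in segment_counts.items():
        if count >= k:
            out[segment] = count
        else:
            other += count
    if other:
        out["Other"] = other
    return out

agg = k_anonymize({"playlist": 40, "press": 12, "blog": 2, "radio": 4})
assert agg == {"playlist": 40, "press": 12, "Other": 6}
```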
Admin Policy & Configuration Controls
"As an admin, I want to configure watermark defaults and strength per campaign so that I can balance quality, performance, and traceability for different release workflows."
Description

Add workspace- and campaign-level controls to enable/disable watermarking by default, choose robustness profiles (e.g., quality-biased vs resilience-biased), set file-type coverage (images, PDFs), and allow internal-role exemptions. Provide key rotation tools with safe rollout, link reissue workflows, and bulk re-fingerprinting actions. Expose per-asset overrides with clear UI indicators when watermarking is active, and validation warnings if settings may compromise robustness or performance.
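The inheritance chain implied above (per-asset overrides beat campaign settings, which beat the workspace default) can be sketched as a resolution function; names and the (value, source) return shape are illustrative:

```python
from typing import Optional, Tuple

def effective_watermarking(workspace_default: bool,
                           campaign_override: Optional[bool] = None,
                           asset_override: Optional[bool] = None
                           ) -> Tuple[bool, str]:
    """Resolve the effective watermarking state, most specific level
    winning: asset > campaign > workspace. Returns (value, source) so the
    UI can show the Inherited/Overridden badge."""
    if asset_override is not None:
        return asset_override, "asset"
    if campaign_override is not None:
        return campaign_override, "campaign"
    return workspace_default, "workspace"

assert effective_watermarking(True) == (True, "workspace")
assert effective_watermarking(True, campaign_override=False) == (False, "campaign")
```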

Acceptance Criteria
Workspace Default Watermarking Toggle
Given I am a workspace admin, When I set Watermarking Default to Enabled, Then all newly created campaigns inherit Enabled and display an Inherited: Workspace badge. Given the default is set to Disabled, When a new campaign is created, Then its watermarking setting defaults to Disabled unless explicitly overridden during campaign setup. Given existing campaigns, When I change the workspace default, Then no existing campaigns change unless I confirm Apply to X campaigns in a modal and exactly X campaigns update. Given the setting is changed, When the Settings API is queried, Then it returns the new value and an audit log entry is recorded with actor, timestamp, old_value, and new_value. Given a non-admin user views the setting, When they attempt to save, Then the control is read-only and the save is blocked with a permissions error.
Campaign-Level Override of Watermarking Default
Given a campaign that inherited the workspace default, When a campaign owner toggles watermarking, Then the campaign shows Overridden at Campaign and other campaigns remain unchanged. Given a campaign turns watermarking off mid-campaign, When new review links are created, Then assets in those new links are not watermarked; previously issued links remain unchanged. Given a campaign override exists, When exporting campaign settings, Then the export includes watermarking state, source (workspace or campaign), and timestamp. Given a campaign override is removed, When saving, Then the campaign reverts to the workspace default and the UI badge updates accordingly.
Robustness Profile Selection and Enforcement
Given the workspace default profile is set to Resilience-Biased, When content is fingerprinted, Then fingerprint metadata includes profileId=resilience and configVersion, and embed parameters match the profile specification. Given a campaign selects Quality-Biased, When previewing estimated impact, Then the UI shows delta file size and image quality estimates within configured tolerances for images and a fidelity note for PDFs. Given an asset below minimum dimensions or with excessive compression, When saving with Quality-Biased, Then a warning is displayed stating Low robustness risk with a reason code and link to remediation, and save proceeds only after acknowledgment. Given the profile is changed, When subsequent fingerprints are generated, Then they use the new profile; existing fingerprints remain verifiable and are tagged with their original profile metadata.
File-Type Coverage Configuration (Images, PDFs)
Given file-type coverage toggles are available, When an admin disables PDFs, Then attempted fingerprinting of PDFs is skipped with reason=coverage_disabled and links display Not watermarked (coverage off) for PDFs. Given images coverage is enabled, When uploading JPEG/PNG assets, Then watermarking jobs are queued and status chips show Watermarking until completion. Given an unsupported image format (e.g., TIFF) is uploaded, When processing, Then it is accepted but not watermarked and a banner suggests supported formats. Given coverage settings are changed, When creating a link, Then watermarking indicators appear only for covered types and the summary reflects per-type coverage accurately.
Internal Role Exemptions from Watermarking
Given an admin exempts roles Owner and Designer, When links are generated for recipients with those roles, Then delivered assets are unwatermarked and the link summary lists the count of exempt recipients. Given a mixed recipient list, When a link is created, Then per-recipient delivery manifests include watermarking=true/false flags matching exemption rules. Given exemption configuration changes, When new links are created, Then they honor the new rules; previously sent links remain unchanged. Given a non-exempt recipient, When assets are delivered, Then a per-recipient fingerprint is embedded and analytics event fingerprint_applied is recorded with recipientId.
Key Rotation with Safe Rollout and Link Reissue
Given I initiate key rotation, When I confirm, Then the system enters Dual Key mode for a configurable grace period, using newKey for new fingerprints and retaining oldKey for validation, and displays rotation status with start and target end timestamps. Given Dual Key mode is active, When validating fingerprints created before rotation, Then validation succeeds against oldKey and is logged with keyId. Given I trigger link reissue for selected campaigns, When the job completes, Then 100% of targeted active links are regenerated with new fingerprints and URLs, old links are revoked, and a job report lists successes and failures with error codes; notifications are sent if enabled. Given rotation completes, When I finalize, Then oldKey is archived for validation of pre-rotation artifacts and disabled for minting, and an audit entry records completion and keyIds; rollback requires elevated approval and explicit confirmation.
Per-Asset Overrides, Validation Warnings, and Bulk Re-Fingerprinting
Given an asset inherits watermarking=Enabled, When a user sets a per-asset override to Disabled and saves, Then the asset shows an Override: Off badge and subsequent link generations for that asset skip watermarking. Given watermarking is active for an asset, When viewing the asset card, Then a visible indicator Watermarking On with profile name is shown and a tooltip reveals the source (workspace/campaign/asset). Given multiple assets are selected, When Bulk Re-fingerprint is run with a chosen profile, Then all selected assets are queued; progress displays percent complete; failures list assetIds with error codes; on success, asset version numbers increment and previous fingerprints remain verifiable. Given the system detects a configuration that may degrade robustness or performance (e.g., resilience profile on very large PDFs), When saving settings, Then a non-blocking warning is shown with estimated processing time impact and an option to proceed or adjust.
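The Dual Key validation flow described in the key-rotation criterion above can be sketched with HMAC standing in for the real fingerprint scheme (key handling, IDs, and payload format are all illustrative):

```python
import hashlib
import hmac
from typing import Optional

class DualKeyValidator:
    """During the rotation grace period, mint with new_key but validate
    against both keys, returning the matching keyId for the audit log."""
    def __init__(self, new_key: bytes, old_key: Optional[bytes] = None):
        self.new_key, self.old_key = new_key, old_key

    def mint(self, payload: bytes) -> str:
        return hmac.new(self.new_key, payload, hashlib.sha256).hexdigest()

    def validate(self, payload: bytes, tag: str) -> Optional[str]:
        for key_id, key in (("new", self.new_key), ("old", self.old_key)):
            if key and hmac.compare_digest(
                    hmac.new(key, payload, hashlib.sha256).hexdigest(), tag):
                return key_id
        return None  # unknown fingerprint

pre_rotation = DualKeyValidator(b"old").mint(b"asset-1|rcp-1")
dual = DualKeyValidator(b"new", old_key=b"old")
assert dual.validate(b"asset-1|rcp-1", pre_rotation) == "old"
assert dual.validate(b"asset-1|rcp-1", dual.mint(b"asset-1|rcp-1")) == "new"
```

Finalizing rotation then amounts to dropping old_key from minting while retaining it (archived) for validating pre-rotation artifacts.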

LeakMatch

Drag‑and‑drop any leaked snippet or full file (audio or artwork) to instantly identify the source. IndieVault decodes the embedded mark and returns the recipient, link, timestamp, and confidence score, even from partial clips. Benefit: cut hours of investigation to seconds and move straight to decisive action.

Requirements

Drag-and-Drop Leak Intake
"As an indie label manager, I want to drag and drop a leaked clip or artwork into IndieVault so that I can start identification immediately without complex steps."
Description

Provide a dedicated LeakMatch intake surface that supports drag-and-drop and file picker uploads for audio (e.g., WAV, AIFF, MP3, AAC) and artwork (e.g., PNG, JPEG) files. Validate file type/size, display upload progress, and perform client-side chunking and resumable uploads to handle large files. Run uploads through sandboxed malware scanning and store inputs in temporary, access-controlled storage. Seamlessly hand off uploaded files to the decoding service, show real-time analysis status, and offer clear error states with retry. Integrate with IndieVault’s design system for consistent UX and accessibility.
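The client-side chunking with per-chunk checksums mentioned above can be sketched as follows (chunk size and manifest fields are illustrative; the spec does not fix them):

```python
import hashlib

def chunk_file(data: bytes, chunk_size: int = 8 * 1024 * 1024) -> list:
    """Split an upload into fixed-size chunks with per-chunk SHA-256
    checksums so the server can verify integrity, request re-sends of
    corrupted chunks, and resume from the last confirmed chunk."""
    chunks = []
    for i in range(0, len(data), chunk_size):
        part = data[i:i + chunk_size]
        chunks.append({
            "index": i // chunk_size,
            "size": len(part),
            "sha256": hashlib.sha256(part).hexdigest(),
        })
    return chunks

manifest = chunk_file(b"a" * 10, chunk_size=4)
assert [c["size"] for c in manifest] == [4, 4, 2]
assert sum(c["size"] for c in manifest) == 10  # totals match the file
```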

Acceptance Criteria
Drag-and-Drop and Picker Upload for Supported Media
Given I am on the LeakMatch intake page When I drag and drop a single supported file (WAV, AIFF, MP3, AAC, PNG, JPEG) onto the drop area Then the drop area indicates acceptance and the upload starts And when I use the file picker to select a supported file Then the upload starts And when I drop multiple files Then the system rejects the drop and shows "Only one file at a time" without starting an upload
File Type and Size Validation with Clear Errors
Given I attempt to upload an unsupported file type (e.g., ZIP, PDF) Then the system blocks the upload and displays "Unsupported file type. Accepted: WAV, AIFF, MP3, AAC, PNG, JPEG." And no network request is made to start the upload. Given I attempt to upload a file exceeding the maximum size (audio > 5 GB, artwork > 300 MB) Then the system blocks the upload and displays "File exceeds maximum size" And no upload is initiated And the error message includes the allowed limit for the selected media type.
Resumable Chunked Upload with Progress
Given an upload is in progress Then the UI displays a determinate progress bar showing percent complete and bytes uploaded When the network drops mid-upload Then the upload automatically resumes within 30 seconds of connectivity restoration without re-sending already uploaded chunks And total uploaded bytes after completion equal the original file size When the page is refreshed during upload and the same file is re-selected within 10 minutes Then the upload resumes from the last confirmed chunk And chunk integrity is verified via checksum; corrupted chunks are re-sent
Sandboxed Malware Scan and Blocking
Given an upload completes When the sandboxed malware scan runs Then the UI shows a "Scanning" status within 2 seconds of upload completion And analysis/decoding does not begin until the scan returns "clean". If the scan returns "malicious" or "suspicious" Then the file is quarantined, no hand-off occurs, the user sees "Potential malware detected" with guidance, and the action is logged. If scanning exceeds 60 seconds Then the UI shows a non-blocking message "Still scanning…" and continues polling And the user may dismiss the result or upload a different file; retry is disabled for the same file.
Temporary Access-Controlled Storage with Auto-Expiry
After a clean scan, the file is stored in temporary private storage. Access is restricted to the uploader account and internal services; direct URL access requires a pre-signed token valid for ≤ 15 minutes. If decoding completes, the temporary file is deleted within 15 minutes; otherwise it is auto-deleted within 24 hours of upload. Attempts by other authenticated users to fetch the file return 403 and are audited with user ID, IP, and timestamp.
Decoder Hand-off and Real-Time Analysis Status
Given the scan passes When the file is handed off to the decoding service with a correlation ID Then the UI shows real-time statuses updated at least every 2 seconds: Uploading, Scanning, Decoding, Matching, Complete or Failed. On success, the result displays recipient, link, timestamp, and confidence score, with copy-to-clipboard controls. On transient decoder errors (timeouts/5xx), the system retries up to 3 times with exponential backoff (1s, 2s, 4s). On persistent failure, the UI shows a Retry button that re-submits the decoding step without re-uploading the file. On permanent errors (unsupported watermark/invalid media), the UI presents a clear message and an option to start a new upload.
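The retry-with-backoff behavior above (up to 3 retries at 1 s, 2 s, 4 s) can be sketched as follows; transient decoder failures are modeled here as TimeoutError, and the sleep function is injectable so tests do not wait:

```python
from typing import Callable

def retry_with_backoff(operation: Callable, attempts: int = 3,
                       base_delay: float = 1.0,
                       sleep: Callable = lambda s: None):
    """Run `operation`, retrying transient failures up to `attempts` times
    with exponential backoff (1 s, 2 s, 4 s); pass sleep=time.sleep in
    production."""
    last_error = None
    for attempt in range(attempts + 1):  # initial try + retries
        try:
            return operation()
        except TimeoutError as err:
            last_error = err
            if attempt < attempts:
                sleep(base_delay * 2 ** attempt)
    raise last_error

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("decoder timeout")
    return "decoded"

delays = []
assert retry_with_backoff(flaky, sleep=delays.append) == "decoded"
assert delays == [1.0, 2.0]  # third call succeeded, no 4 s wait needed
```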
Design System and Accessibility Compliance
All UI components use IndieVault design system tokens and components for colors, spacing, typography, buttons, progress, and alerts. Text and interactive elements meet ≥ 4.5:1 contrast; focus states are visible and consistent. The drop zone and controls are fully operable via keyboard; Enter/Space on the focused drop zone opens the file picker. Progress and status changes are announced to screen readers via ARIA live regions; errors are announced and associated with their fields. All interactive elements have accessible names and roles; tab order is logical with no keyboard traps.
Embedded Mark Decoder Engine
"As a security-conscious artist, I want the system to decode marks from leaked files so that I can know exactly which recipient link was the source."
Description

Implement a scalable decoding service that recovers IndieVault’s embedded forensic marks from distributed assets to attribute leaks. Support both audio and artwork watermarks generated at share time, remaining robust to common transformations (re-encoding, trimming, downmixing, noise, pitch/tempo adjustments for audio; resizing, cropping, recompression, and screen capture for images). Expose a deterministic API that accepts normalized media, outputs recovered mark payloads, raw likelihood scores, and decoder diagnostics. Integrate with the mark-to-recipient mapping store to resolve recipient, link ID, and original share metadata. Design for horizontal scaling, timeouts, and circuit breaking to ensure reliability under load.
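The /decode response shaping implied above can be sketched as follows; the threshold value and the simplified field set are assumptions, and the full contract (diagnostics, versions, timings) appears in the acceptance criteria:

```python
from typing import Optional

def decode_response(payload: Optional[str], likelihood: float,
                    threshold: float = 0.9, mapping: Optional[dict] = None,
                    request_id: str = "req-1") -> dict:
    """Return payload + likelihood; resolve recipient/link metadata when
    the mapping store knows the payload, else emit a diagnostics reason."""
    reasons = []
    if payload is None or likelihood < threshold:
        payload, resolved = None, None
        reasons.append("no_mark_detected")
    else:
        resolved = mapping  # {recipientId, linkId, shareTimestamp} or None
        if resolved is None:
            reasons.append("no_mapping")  # unknown or expired payload
    return {"payload": payload, "likelihood": likelihood,
            "resolved": resolved, "requestId": request_id,
            "diagnostics": {"reasons": reasons}}

hit = decode_response("bWFyaw==", 0.97,
                      mapping={"recipientId": "rcp_1", "linkId": "lnk_1",
                               "shareTimestamp": "2025-01-15T00:00:00Z"})
assert hit["resolved"]["recipientId"] == "rcp_1"
miss = decode_response(None, 0.12)
assert miss["payload"] is None and "no_mark_detected" in miss["diagnostics"]["reasons"]
```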

Acceptance Criteria
Deterministic Decoder API Contract
Given a normalized media file (audio or image) and valid authentication When the client POSTs to /decode with media, mediaType, and requestId headers Then the service returns 200 with a JSON body containing fields: payload (base64), payloadVersion, likelihood (float), mediaType, diagnostics {decoderVersion, operations, segmentsAnalyzed, segmentScores[], reasons[], warnings[]}, resolved {recipientId, linkId, shareTimestamp} or null, requestId, processingMs And repeated calls with identical input return identical payload and likelihood within epsilon ≤ 1e-6 And invalid mediaType or malformed body returns 400 with diagnostics.reasons populated And unsupported media codec returns 415 And un-decodable but well-formed input returns 200 with payload = null, resolved = null, likelihood < configuredThreshold, and diagnostics.reasons includes "no_mark_detected"
Audio Mark Robustness to Common Transformations
Given an audio asset watermarked at share time When the decoder processes transformed variants: AAC 128kbps, MP3 128kbps, OGG 96kbps; trimmed start/end by up to 30s; downmixed to mono; loudness-normalized to -14 LUFS; additive noise at SNR ≥ 20 dB; pitch-shift ±3%; time-stretch ±5% (tempo change without pitch) Then the correct payload is recovered with likelihood ≥ configuredThreshold for ≥ 95% of variants in a 200-sample test set per transform And the per-variant diagnostics.segmentScores show at least 3 segments ≥ segmentThreshold And the false-positive rate on 10k negative controls (unmarked audio) at the same threshold is ≤ 1e-6
Image Mark Robustness to Common Transformations
Given an image asset watermarked at share time When the decoder processes transformed variants: JPEG recompression at qualities 30–90; resizing to 25–100% of original longest side; center and random crops retaining ≥ 50% of area; Gaussian noise σ ≤ 5/255; screen-capture rephotograph at 1080p with moderate perspective distortion Then the correct payload is recovered with likelihood ≥ configuredThreshold for ≥ 95% of variants in a 200-sample test set per transform And the false-positive rate on 10k negative controls (unmarked images) at the same threshold is ≤ 1e-6 And diagnostics.operations includes detected transformations when inferred (e.g., resize, crop)
Partial Audio Clip Decoding from Short Snippets
Given a 3–5 minute watermarked track When the decoder processes contiguous snippets of lengths {5s, 10s, 20s, 30s} sampled anywhere in the track, possibly after re-encoding to MP3 128kbps and with noise at SNR ≥ 20 dB Then for snippet lengths ≥ 10s the correct payload is recovered with likelihood ≥ configuredThreshold in ≥ 98% of cases; for 5s snippets ≥ 90% And diagnostics.segmentsAnalyzed ≥ 1 and diagnostics.segmentScores reflects the contributing snippet region
Mark-to-Recipient Mapping Resolution
Given a successfully recovered payload When the decoder queries the mapping store Then the service returns resolved.recipientId, resolved.linkId, resolved.shareTimestamp that match the stored mapping exactly And if the payload is unknown or expired, resolved is null and diagnostics.reasons includes "no_mapping" And all mapping lookups complete within 50ms p95 and 100ms p99, with retries on transient errors per backoff policy
Scaling, Timeouts, and Circuit Breaking Under Load
Given a deployment scaled to handle 200 RPS sustained and 500 RPS peak with mixed media When load tests run for 30 minutes with a 90/10 read/write mix on mapping store Then p95 end-to-end latency ≤ 1500ms and p99 ≤ 4000ms for decodable inputs ≤ 30s audio or ≤ 10MB images And any decode exceeding the configured timeout (8s) returns 504 with diagnostics.reasons includes "timeout" And the service trips circuit breakers after 20 consecutive downstream failures and half-opens after 30s, with <1% request failure leakage during breaker-open periods And the system auto-scales to maintain CPU ≤ 70% and queue wait ≤ 200ms p95
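The breaker policy above (trip after 20 consecutive downstream failures, half-open after 30 seconds) can be sketched as follows. The thresholds come from the criteria; the class shape, method names, and injectable clock are illustrative.

```python
import time

class CircuitBreaker:
    """Sketch of the breaker policy in the acceptance criteria: open after
    20 consecutive failures, allow a half-open probe after 30 seconds.
    Thresholds are from the criteria; everything else is illustrative."""

    def __init__(self, failure_threshold=20, reset_after=30.0, clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.clock = clock
        self.consecutive_failures = 0
        self.opened_at = None  # None means the breaker is closed

    def allow_request(self) -> bool:
        if self.opened_at is None:
            return True
        # Half-open: let a probe through once the cool-down has elapsed.
        return self.clock() - self.opened_at >= self.reset_after

    def record_success(self):
        self.consecutive_failures = 0
        self.opened_at = None

    def record_failure(self):
        self.consecutive_failures += 1
        if self.consecutive_failures >= self.failure_threshold:
            self.opened_at = self.clock()
```

During breaker-open periods, callers would fail fast rather than hitting the downstream, which is what keeps failure leakage under the 1% budget.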
Diagnostics Completeness and Observability
Given any decode request (success or failure) When the response is returned Then diagnostics includes decoderVersion, operations[], segmentsAnalyzed (int), segmentScores[] (float), reasons[] (strings), warnings[] (strings) And a unique requestId is echoed and correlated with structured logs and metrics (latency, errors, result type) exported to the monitoring system within 10s And PII in logs is limited to recipientId and linkId only when resolved; no raw media is persisted beyond transient processing buffers And a redaction mode can be enabled via header X-Diagnostics-Level=basic to omit segmentScores and reasons from the response
Partial Clip Matching
"As a manager investigating a 10-second snippet, I want LeakMatch to attribute short clips so that I can act even when only fragments leak."
Description

Enable detection and decoding from partial media to support real-world leaks. For audio, implement sliding-window analysis and time-synchronization recovery to attribute clips as short as a few seconds, tolerant of fades and background noise. For artwork, perform region-based detection to extract marks from cropped or partial images, including screenshots. Provide minimum viable thresholds and confidence heuristics to indicate when a fragment is too short or degraded for reliable attribution. Optimize for speed so short clips return results in seconds.
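The sliding-window analysis described above might be sketched like this; `decode_window` is a hypothetical stand-in for the real per-window mark decoder, and the window/hop lengths are assumptions, not specified values.

```python
def best_window_match(samples, sample_rate, decode_window,
                      window_s=5.0, hop_s=1.0):
    """Slide a fixed-length window over an audio clip and return the
    highest-scoring decode plus the offset (in seconds) where it was found.
    `decode_window` is a hypothetical callable: samples -> (payload, score).
    """
    win = int(window_s * sample_rate)
    hop = int(hop_s * sample_rate)
    best = (None, 0.0, 0.0)  # (payload, score, offset_seconds)
    for start in range(0, max(1, len(samples) - win + 1), hop):
        payload, score = decode_window(samples[start:start + win])
        if score > best[1]:
            best = (payload, score, start / sample_rate)
    return best
```

The returned offset corresponds to the matchOffsetSeconds reported in the criteria below; a low best score across all windows is the signal for the "fragment too short or degraded" path.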

Acceptance Criteria
Audio: 4s clip with fade and background noise
Given a 4–5 second audio clip from a watermarked track with 500 ms fade-in/out and SNR ≥ 6 dB, When the clip is uploaded to LeakMatch, Then the system attributes the source with recipientId, linkId, deliveryTimestamp, matchOffsetSeconds, and confidence ≥ 0.90 within 3 seconds.
Audio: mid-track 7s time-shifted segment
Given a 7-second clip from an unknown position within a watermarked track with up to 1.0 second of leading or trailing silence, When processed by LeakMatch, Then time-synchronization is recovered and the correct recipientId and linkId are returned with matchOffsetSeconds accurate to ±0.25 s and confidence ≥ 0.88 within 3 seconds.
Artwork: cropped screenshot with compression
Given a screenshot-derived image (JPEG/PNG) of a watermarked artwork where 35–50% of the original area is present and ≥ 25% of the marked region remains visible at JPEG quality ≥ 60, When uploaded to LeakMatch, Then the system decodes and returns recipientId, linkId, and confidence ≥ 0.85 within 2 seconds, and includes cropRegion diagnostics.
Insufficient fragment handling and thresholds
Given an audio clip shorter than 2.5 seconds or with SNR < 3 dB, or an artwork fragment with < 15% of the marked region visible, When processed, Then no attribution is made and the response includes result=InsufficientFragment, reason ∈ {"below_min_length","low_quality","insufficient_region"}, recommendedMinLengthSeconds ≥ 3.0 (audio) or recommendedMinRegionPercent ≥ 20 (artwork), and confidence ≤ 0.49 within 2 seconds.
Ambiguous match flagging
Given an input producing multiple candidate matches where top-1 and top-2 confidence scores are both ≥ 0.75 and differ by < 0.10, When processed, Then the system returns the top candidate with ambiguous=true and definitive=false, includes nextBestCandidate {recipientId, linkId, confidence} in diagnostics, and completes within 3 seconds.
Performance: short media p95 latency
Given 100 audio clips (≤ 10 s each) and 100 artwork fragments that meet minimum viable thresholds, When processed on a standard worker configuration, Then the 95th percentile end-to-end processing time is ≤ 3 s for audio and ≤ 2 s for artwork, with timeout error rate = 0%.
Attribution & Confidence Report
"As an artist, I want a clear attribution and confidence score so that I can decide whether to revoke access or escalate."
Description

Present a clear result view that attributes the leak to a recipient and share link, including recipient name, email (subject to permissions), share URL, asset name/version, share timestamp, and a confidence score with plain-language explanation. Handle ambiguous cases by ranking top candidates with scores and highlighting required next steps. Provide one-click actions to revoke the implicated link, notify the recipient, and open the related asset. Support exporting the result as a signed PDF/CSV and copying a shareable incident summary for internal teams.

Acceptance Criteria
Single-Match Attribution Result View
Given a dropped leaked file yields one unambiguous source match When the analysis completes Then the report displays the following fields: recipient name, recipient email (only if viewer has permission), share URL, asset name, asset version, share timestamp, confidence score, and a plain-language confidence explanation And the share timestamp is shown in the viewer’s timezone with ISO 8601 hover detail And the share URL is clickable and opens in a new tab with tracking parameters stripped And fields unavailable due to permissions are visibly masked with a tooltip explaining why
Ambiguous Attribution Ranking and Guidance
Given the analysis produces multiple plausible matches When the result is presented Then the top 3 candidates are listed in descending order by confidence score with ties broken by most recent share timestamp And each candidate row shows recipient name, (permissioned) email, share URL, asset name/version, share timestamp, and confidence score And an "Ambiguous attribution" banner is shown with recommended next steps And selecting a candidate expands the confidence explanation and exposes one-click actions scoped to that candidate
One-Click Remediation Actions
Given a candidate is selected in the report When the user clicks "Revoke link" Then the corresponding share link is set to Revoked, immediate access via that URL is blocked, and an audit log entry is recorded with incident ID and actor. When the user clicks "Notify recipient" Then a templated notification is sent to the recipient (or to the account’s escalation email if recipient email is hidden), and the outcome (sent/failed) is surfaced in the UI. When the user clicks "Open related asset" Then the app navigates to the asset’s detail page at the matched version in a new tab
Signed Export of Attribution Report (PDF and CSV)
Given a result is visible When the user exports as PDF Then the file includes all displayed fields (respecting permission masking), a unique incident ID, generation timestamp, and a digital signature block with verification URL And verifying the exported file via the verification URL returns "valid" for an untampered file. When the user exports as CSV Then the CSV contains the same core fields plus a signature checksum row and incident ID And exported filenames follow the pattern LeakMatch_Report_{incidentId}_{yyyy-mm-ddThhmmssZ}.{pdf|csv}
Copy Shareable Incident Summary
Given a result is visible When the user clicks "Copy incident summary" Then the clipboard contains a single-paragraph summary including incident ID, top match recipient name, (permissioned) email or redacted placeholder, share URL, asset name/version, share timestamp, and confidence score with explanation in <= 800 characters And the summary excludes PII that the viewer is not permitted to see And a "Copied" confirmation message is shown
Permission-Aware PII Handling and Propagation
Given the viewer lacks permission to view recipient email When the report renders, is exported, or a summary is copied Then the email field is masked as *** and no raw email appears in the UI, exports, notifications, or clipboard And a "Request access" control is present and does not disclose PII And all PII views and actions are recorded in the audit log with actor, timestamp, and action
Audit Trail & Evidence Export
"As a label legal representative, I want a tamper-evident report of the match so that I can use it in communications or legal proceedings."
Description

Record a tamper-evident audit trail for each LeakMatch submission, including file cryptographic hash, submitter identity, timestamps, decoder version, parameters, results, confidence scores, and any actions taken (revoked links, notifications). Store logs in append-only storage with integrity verification and retention controls. Enable exporting an evidence package (report, hashes, metadata) suitable for compliance and legal review, with optional watermark thumbnails/redactions for privacy.

Acceptance Criteria
Audit Record on LeakMatch Submission
Given an authenticated user submits a file (snippet or full) to LeakMatch When the submission is accepted for processing Then an audit record is appended within 2 seconds containing:
- submission_id (UUID)
- submitter_id and role
- submission_timestamp (UTC ISO 8601 with millisecond precision)
- file_sha256 and file_size_bytes
- decoder_version
- decoder_parameters (serialized)
- detection_results (recipient_id, link_id, embedded_timestamp)
- confidence_score (0.0–1.0)
- source_ip and user_agent
Append-Only Log Semantics
Given the audit log contains existing records When any user (including admins) attempts to update or delete an existing record via API, UI, or storage Then the operation is rejected with HTTP 403 and no records are altered And only append operations are permitted And each append updates a hash-chain where record_hash = SHA-256(previous_chain_hash || current_record_payload) And the current chain hash is persisted and exposed as read-only
Integrity Verification and Tamper Detection
Given the integrity verification job runs on-demand or hourly When verification processes the audit log Then it recomputes the chain hash from genesis to head and returns status=PASS with head_chain_hash and last_index if no discrepancy And if any record is missing, reordered, or altered, verification returns status=FAIL with first_bad_index and expected_vs_actual_hash, and emits a high-severity alert audit event
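The hash-chain semantics above (record_hash = SHA-256(previous_chain_hash || payload), with verification recomputing from genesis to head) might look like this minimal sketch; the JSON canonicalization choice and the all-zero genesis hash are assumptions.

```python
import hashlib
import json

def append_record(chain, payload: dict) -> dict:
    """Append-only hash chain per the criteria:
    record_hash = SHA-256(previous_chain_hash || record_payload)."""
    prev = chain[-1]["record_hash"] if chain else "0" * 64  # assumed genesis
    body = json.dumps(payload, sort_keys=True)  # assumed canonical form
    record_hash = hashlib.sha256((prev + body).encode()).hexdigest()
    record = {"payload": payload, "record_hash": record_hash}
    chain.append(record)
    return record

def verify_chain(chain):
    """Recompute the chain from genesis to head. Returns (True, head_hash)
    on PASS, or (False, first_bad_index) when a record was altered."""
    prev = "0" * 64
    for i, rec in enumerate(chain):
        body = json.dumps(rec["payload"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if rec["record_hash"] != expected:
            return (False, i)
        prev = expected
    return (True, prev)
```

Because each hash covers the previous chain hash, altering, reordering, or dropping any earlier record invalidates every hash after it, which is what makes the first_bad_index report possible.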
Retention Controls and Purge Proof
Given a workspace retention policy is configured (in days) When a record is younger than its retention period Then any delete request is rejected with HTTP 403 and the attempt is logged. When a scheduled purge runs for records older than their retention period Then records are purged in batches and a signed purge manifest is appended containing purged_indices, prior_chain_hash, post_purge_chain_hash, executed_by (system), executed_at (UTC) And retention policy changes are recorded with old_value, new_value, changed_by, changed_at
Standard Evidence Package Export
Given a reviewer with permission requests an evidence export for a submission_id When the export is generated Then a ZIP is produced within 10 seconds for logs <= 10,000 records containing:
- evidence_report.json and evidence_report.pdf
- audit_record.json for the submission and linked actions
- file hashes (SHA-256) and sizes of all referenced assets
- decoder_version and parameters
- chain_inclusion_proof (path, head_chain_hash)
- checksums.txt with SHA-256 for every file in the ZIP
And the ZIP is signed with the platform signing key (Ed25519) and the signature verifies with the published public key
Privacy-Preserving Export (Redactions & Thumbnails)
Given the requester enables Privacy Mode for the export When the export is generated Then PII fields (email, phone, IP, user_agent) are redacted or hashed (SHA-256 with salt) in the reports And artwork assets are replaced with watermarked thumbnails (max 800px longest edge, 72 DPI) labeled "IndieVault Evidence" and case_id And audio assets are replaced with 10-second mono 128 kbps previews with an audible watermark tone every 3 seconds And the original asset SHA-256 and size are still included to preserve evidentiary linkage And the export indicates which fields/assets were redacted or transformed
Action Logging for Revocations and Notifications
Given a LeakMatch result triggers link revocation and/or recipient notifications When the actions are executed via UI or API Then an action entry is appended within 2 seconds referencing the originating submission_id and including:
- action_id (UUID), action_type (revocation|notification), actor_id, executed_at (UTC)
- targets (link_ids, recipient_ids), outcome (success|partial|fail), error_details if any
And the action entry is included in the evidence export and in integrity verification
Role-Based Access & Privacy Controls
"As an org admin, I want LeakMatch access and results scoped by role so that sensitive recipient data is protected."
Description

Restrict LeakMatch usage and visibility of results based on roles (asset owners, project managers, org admins). Mask or pseudonymize recipient PII for users without elevated permissions, with just-in-time reveal for authorized actions. Enforce encryption in transit and at rest, isolate uploaded samples from general asset libraries, and apply configurable retention windows for submitted leak files. Log all access to sensitive fields for compliance.

Acceptance Criteria
Restrict LeakMatch Access by Role
Given a signed-in user with role in [Org Admin, Project Manager, Asset Owner] and the "Use LeakMatch" permission for the target project, When they open the LeakMatch UI or POST /leakmatch with a valid file, Then the request is accepted (HTTP 200/202) and the UI features are available. Given a user without the "Use LeakMatch" permission (e.g., Collaborator/Viewer) or with a suspended account, When they attempt to access the LeakMatch UI or API, Then the operation is blocked (HTTP 403) with no file persisted and the upload control is hidden/disabled in the UI. Given a PM or AO not assigned to the target project, When they attempt to run LeakMatch against that project, Then the request is denied (HTTP 403) and no project metadata is disclosed. Given an Org Admin toggles the "Use LeakMatch" permission for a role, When the permission is removed, Then subsequent requests by affected users are denied within 60 seconds or the next token refresh (whichever occurs first). Given an unauthenticated or expired token request, When POST /leakmatch is called, Then the API returns HTTP 401 with a WWW-Authenticate header and performs no processing.
Mask PII in LeakMatch Results for Non-Elevated Users
Given a user without the "View Full PII" permission, When they view LeakMatch results, Then recipient PII (name, email, phone, company) is masked or replaced with an alias (e.g., Recipient#AB12) and a hashed recipientId; IP/device identifiers are omitted. Given the same user exports results via UI or API, When the export completes, Then all PII fields remain masked identically in the exported file and API payloads. Given PII is masked in the UI, When the user attempts to copy or hover for tooltips, Then no unmasked values are exposed via tooltips, titles, ARIA labels, or DOM attributes. Given a masked results list, When filtering/sorting is used, Then it operates on allowed fields only (e.g., alias, confidence score, timestamp) and does not leak raw PII via query responses or error messages.
Just-in-Time PII Reveal Authorization
Given a PM or Org Admin with the "Request PII Reveal" permission, When they click Reveal on a specific LeakMatch result, provide a textual reason, and successfully complete MFA, Then the full PII for that result is revealed for that user only and auto-re-masked after 15 minutes. Given a user without the required permission, When they attempt a Reveal action, Then the control is disabled in the UI or the API returns HTTP 403 and no PII is exposed. Given a successful reveal, When the reveal occurs, Then an immutable audit event is recorded capturing userId, role, resultId, reason, MFA=true, timestamp (UTC), IP, and outcome. Given an export is requested during an active reveal, When the requester has "Export PII" permission, Then the export includes full PII for only the revealed result(s); otherwise exports remain masked. Given a reveal request with failed MFA or missing reason, When the user submits the request, Then the reveal is denied and logged with outcome=denied and no PII exposure.
Isolation of Uploaded Leak Samples
Given any file uploaded to LeakMatch, When it is stored, Then it resides in a dedicated, access-controlled storage namespace isolated from asset libraries and excluded from indexing, search, sync, and sharing features. Given a user searches the general asset library, When results are returned, Then LeakMatch sample files never appear in listings, previews, or suggestions. Given a direct URL to a stored sample, When requested without a valid signed URL and appropriate role, Then access is denied (HTTP 403); with a signed URL, Then expiry is <= 5 minutes and scope is read-only for that object. Given cross-tenant boundaries, When a user from Org A attempts to access any sample or metadata from Org B, Then the request is denied (HTTP 403) and no object existence is leaked. Given the uploaded file is detected as malware, When scanning completes, Then the file is quarantined/not persisted, the user receives HTTP 422 with a safe error message, and an audit event is recorded.
Configurable Retention for Leak Files
Given the org default retention is 30 days and configurable between 7 and 90 days by Org Admins, When an admin sets retention to 60 days, Then all new LeakMatch uploads honor a 60-day deletion schedule. Given a sample reaches its retention end, When the retention job runs, Then the binary, derived fingerprints, and temporary artifacts are permanently deleted within 24 hours, and an audit event is recorded; immutable audit logs remain intact. Given an admin places a Legal Hold on a specific result, When the retention job evaluates that result, Then deletion is paused until the hold is removed; hold changes are fully audited. Given an admin triggers a manual Purge before expiry, When confirmed, Then irreversible deletion occurs and the API returns 202 Accepted; all related artifacts are removed and the action is logged. Given a disaster-recovery restore occurs, When restored data includes items past their retention, Then those items are re-deleted automatically on restore evaluation per current retention settings.
Encryption in Transit and at Rest Enforcement
Given any LeakMatch upload, download, or API call, When a client connects, Then TLS 1.2+ is required; HTTP and TLS < 1.2 are rejected (no downgrade); HSTS is enabled with max-age >= 6 months. Given stored samples and derivatives, When inspected via storage metadata, Then at-rest encryption (AES-256 via KMS) is enforced and verifiable; no plaintext copies exist in logs, temp dirs, or caches. Given KMS key rotation, When rotation occurs, Then new writes use the rotated key immediately and existing data remains accessible; operations continue without error. Given ephemeral processing storage is used, When processing completes, Then the storage is encrypted and securely wiped within 60 minutes, leaving no recoverable fragments. Given an external TLS assessment is performed, When scored by SSL Labs, Then the LeakMatch endpoint receives a grade of A or better.
Audit Logging for Sensitive Access and Actions
Given any view of sensitive fields (masked or revealed) or any PII reveal request/approval/denial, When the action occurs, Then an immutable audit entry is recorded with userId, role, orgId, projectId, resultId, action, reason (if supplied), outcome, timestamp (UTC ISO-8601), source IP, and user agent. Given an Org Admin queries audit logs via UI or GET /audit with filters (userId, action, date range, projectId), When the query targets <= 10,000 events, Then results return within 2 seconds; larger result sets are paginated. Given a user without the "View Audit Logs" permission attempts to access audit logs, When the request is made, Then the system returns HTTP 403 and does not reveal counts or record existence. Given audit log retention is configured for at least 1 year, When export is requested, Then CSV and JSON exports are available; PII values in logs remain masked unless the requester has "View Full PII" and an active reveal scope for the referenced result. Given any attempt to modify or delete audit entries, When the attempt occurs, Then it is blocked (WORM storage) and a tamper-attempt audit event is created.
Automation & API Webhooks
"As a technical manager, I want to automate leak investigations and follow-up actions so that our team responds in minutes without manual work."
Description

Expose a secure API endpoint to submit suspected leaks programmatically and retrieve results. Provide webhooks and native integrations (email/Slack) to notify teams upon match, mismatch, or low-confidence results. Offer optional automated mitigations—auto-expire the implicated link, flag the recipient account, and create an incident ticket—behind configurable policies and rate limits. Include API keys, OAuth scopes, and per-org quotas to prevent abuse.

Acceptance Criteria
Programmatic Leak Submission and Result Retrieval API
Given a valid API credential with scopes leak:submit and leak:read When the client POSTs /api/leak-match with a supported audio or image file within documented limits or a signed URL Then the API validates input and returns either 200 with result (matchStatus, confidence, recipientId if any, linkId if any, timestamp, matchType, requestId) when completed within 10s, or 202 with jobId and statusUrl for async processing. Given a partial audio clip of at least 10 seconds or a cropped image fragment containing the embedded mark When submitted to /api/leak-match Then the system decodes the mark and returns matchStatus=match with confidence >= 0.80 in p95 end-to-end time <= 30s. Given an unsupported format, missing payload, or invalid URL When submitted Then the API returns 400 with a machine-readable error code and does not enqueue processing. Given missing or invalid authentication When calling any leak-match endpoint Then the API returns 401 or 403 without disclosing resource existence. Given a completed jobId and authorized client When GET /api/leak-match/{jobId} Then the API returns 200 with an immutable result; repeated GETs are idempotent
Webhook Event Delivery for Match Outcomes
Given an org has configured a webhook endpoint and secret When a result is produced with matchStatus in {match, low_confidence, no_match} Then IndieVault sends a signed HTTPS POST within 5 seconds including eventType, requestId, recipientId (if any), linkId (if any), timestamp, confidence, matchType, with HMAC-SHA256 signature headers X-Webhook-Signature and X-Webhook-Timestamp. Given the receiver responds non-2xx When IndieVault retries delivery Then retries use exponential backoff for up to 24 hours with a maximum of 15 attempts, preserve per-request ordering, and final state is recorded as failed after exhaustion. Given duplicate deliveries When the receiver uses X-Idempotency-Key to deduplicate Then payloads are safely processed at-least-once without side effects. Given test mode is triggered by an admin When sending a test event Then the configured webhook receives a valid, signed sample payload and the delivery is logged
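One plausible receiver-side check for the signed webhook above. The criteria name the X-Webhook-Signature and X-Webhook-Timestamp headers but do not fully specify the signing scheme, so the "HMAC over timestamp.body" layout and the 5-minute replay window here are assumptions.

```python
import hashlib
import hmac
import time

def verify_webhook(secret: bytes, body: bytes, signature: str,
                   timestamp: str, tolerance_s: int = 300,
                   now=time.time) -> bool:
    """Verify a signed webhook delivery.
    Assumed scheme: signature = hex(HMAC-SHA256(secret, timestamp + '.' + body)),
    with a replay window of `tolerance_s` seconds around the send time."""
    if abs(now() - float(timestamp)) > tolerance_s:
        return False  # stale or replayed delivery
    expected = hmac.new(secret, timestamp.encode() + b"." + body,
                        hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature)
```

Including the timestamp in the signed material is what lets the receiver reject replays without maintaining per-delivery state; deduplication of legitimate retries would still use X-Idempotency-Key.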
Configurable Automated Mitigations Policies
Given mitigation policies are configured by an org admin with conditions and actions When a result meets policy conditions (e.g., confidence >= threshold, environment=production) Then IndieVault executes selected actions: expire implicated link within 60 seconds, flag recipient account as under_review, and create an incident ticket correlated to requestId and linkId. Given a policy is set to dry-run When a qualifying event occurs Then no actions are executed and a would_have_applied notification is emitted with reasons. Given an action fails (e.g., link already expired) When the policy engine runs Then the action is retried up to 3 times, failure is recorded with error code, and surfaced in audit logs and notifications. Given mitigation rate limits are configured When multiple matching events occur rapidly Then actions are throttled or queued according to policy with explicit audit entries of decisions
Authentication, Authorization, and Key Management
Given API keys are issued per org with assignable scopes (leak:submit, leak:read, webhook:manage, policy:manage) When a request is made with insufficient scope Then the API returns 403 with error code insufficient_scope and required scopes in the response. Given OAuth 2.0 client credentials are used When exchanging for a token with requested scopes Then the token reflects allowed scopes, expires per configuration, and requests with expired or revoked tokens return 401. Given a key rotation or revocation event When the old credential is deactivated Then requests using it are rejected within 60 seconds and an audit log records actor, time, and affected credentials
Rate Limiting and Per-Org Quotas
Given per-org quotas and per-key rate limits are configured When an org exceeds burst or sustained thresholds for leak submissions Then the API responds 429 with Retry-After, does not process the payload, and emits a quota.exceeded notification to org admins. Given an org is within quota When requests are made Then the API sustains at least the documented throughput and maintains p95 auth+enqueue latency under 500 ms. Given an org exhausts its monthly quota When further submissions are attempted Then requests are rejected per configuration (402 or 429), and admins can retrieve usage via GET /api/usage with breakdown by day and endpoint
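A token-bucket sketch of the burst/sustained limiting and Retry-After behaviour described above. The criteria only specify the observable 429 behaviour; the token-bucket algorithm, and all rates here, are assumptions for illustration.

```python
class TokenBucket:
    """Per-key rate limiter sketch. `rate_per_s` models the sustained
    limit and `burst` the burst capacity; `clock` is injectable so the
    refill logic is testable. Values are illustrative, not configured limits."""

    def __init__(self, rate_per_s: float, burst: int, clock):
        self.rate = rate_per_s
        self.capacity = burst
        self.tokens = float(burst)
        self.clock = clock
        self.last = clock()

    def try_acquire(self):
        """Return (True, 0.0) if the request is allowed, else
        (False, retry_after_seconds) for the 429 Retry-After path."""
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return (True, 0.0)
        return (False, (1.0 - self.tokens) / self.rate)
```

The computed retry-after value maps directly onto the Retry-After header, so well-behaved clients back off for exactly as long as the bucket needs to refill one token.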
Native Email and Slack Notifications
Given email recipients and Slack channels are configured for leak result and mitigation events When a result is produced or a mitigation action executes Then notifications are sent containing requestId, matchStatus, confidence, recipient/link identifiers if present, action summary, and deep links; Slack messages are delivered via an installed app to mapped channels; emails use templated subjects and bodies. Given notification filters are configured (e.g., only match and low_confidence) When a no_match result occurs Then no notification is sent. Given a delivery failure occurs (e.g., Slack token revoked, email bounce) When sending a notification Then the system retries per channel policy, marks the attempt failed on exhaustion, and alerts org admins of the failure

Tripwire Alerts

Get real‑time alerts the moment a positive match occurs—via email, push, or Slack. Each alert includes a quick proof summary and one‑tap actions to revoke links, notify stakeholders, or open a case. Benefit: react in minutes instead of days, reducing spread and keeping release timelines on track.

Requirements

Real-time Positive Match Detection
"As an indie label manager, I want positive matches detected and evaluated in real time so that I can act before leaks spread and jeopardize release timelines."
Description

Build a low-latency detection pipeline that listens for watermark hits, unauthorized review-link access, and third-party leak signals, normalizes them into a unified MatchEvent, and evaluates them against tripwire criteria. The system must process events end-to-end in under 60 seconds, ensure idempotency, and attach rich context (asset IDs, release, link IDs, recipient, source URL/IP, timestamp, confidence, and severity). Events that meet criteria are published to the alerting service; non-qualifying or false-positive-flagged events are suppressed while preserving an audit record. The pipeline exposes health metrics, retries transient failures with backoff, and degrades gracefully to queue-and-drain during outages to prevent data loss.

Acceptance Criteria
End-to-End Processing Latency ≤ 60s
Given the system is in steady state and all dependencies are healthy When a watermark hit, unauthorized review-link access, or third-party leak signal is ingested at time T0 Then the pipeline produces a publish-or-suppress decision for that event no later than T0 + 60 seconds And the end-to-end latency metric e2e_latency_seconds is recorded for the event
Unified MatchEvent Normalization with Required Context
Given an incoming detection signal from any supported source When the signal is normalized Then the resulting MatchEvent contains non-null fields: matchEventId, assetId, releaseId, eventTimestamp (UTC ISO-8601), detectionSource, confidence (0.0–1.0), severity (LOW|MEDIUM|HIGH|CRITICAL), idempotencyKey, schemaVersion And the MatchEvent includes contextual fields when available: linkId (nullable), recipient (id/email), sourceUrl or sourceIp And events missing required fields are rejected to a quarantine with reason code schema_validation_failed and are not evaluated against tripwire rules
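The normalize-or-quarantine rule above can be sketched as a schema check. The required field names and the schema_validation_failed reason code come from the criteria; the function shape is illustrative.

```python
# Required MatchEvent fields, per the acceptance criteria above.
REQUIRED_FIELDS = {
    "matchEventId", "assetId", "releaseId", "eventTimestamp",
    "detectionSource", "confidence", "severity",
    "idempotencyKey", "schemaVersion",
}
SEVERITIES = {"LOW", "MEDIUM", "HIGH", "CRITICAL"}

def normalize(signal: dict):
    """Return ("ok", event) for a valid MatchEvent, or
    ("quarantine", reason) mirroring the schema_validation_failed path."""
    missing = [f for f in sorted(REQUIRED_FIELDS) if signal.get(f) is None]
    if missing:
        return ("quarantine", "schema_validation_failed")
    if not (0.0 <= signal["confidence"] <= 1.0) or signal["severity"] not in SEVERITIES:
        return ("quarantine", "schema_validation_failed")
    return ("ok", signal)
```

Optional context (linkId, recipient, sourceUrl/sourceIp) is deliberately not checked here, since the criteria allow those fields to be null or absent.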
Tripwire Rule Evaluation and Alert Publication
Given one or more active tripwire rules When a MatchEvent satisfies at least one rule Then exactly one AlertPublished message is sent to the alerting service for that MatchEvent And the alert payload includes: matchEventId, correlationId, assetId, releaseId, linkId (if any), recipient (if any), sourceUrl/IP (if any), timestamp, confidence, severity, proofSummaryRef, quickActionRefs [revokeLink, notifyStakeholders, openCase] And delivery is acknowledged by the alerting service or retried per the retry policy
Suppression and Audit Trail for Non-Qualifying/False-Positive Events
Given a MatchEvent that does not meet any active tripwire rule or is flagged as false positive When the event is evaluated Then no alert is published for that MatchEvent And an immutable audit record is stored containing matchEventId, suppressionReason (NotMatchingCriteria|FalsePositive|ConfidenceBelowThreshold), evaluator (system|userId), timestamp, and the evaluation outcome And the audit record is queryable by matchEventId via the audit API
Idempotent Processing and Duplicate Suppression
Given multiple deliveries of the same logical event with the same idempotencyKey When the deliveries are processed concurrently or sequentially Then only the first accepted instance is normalized, evaluated, and (if qualifying) published And subsequent duplicates do not produce additional alerts or duplicate audit entries, and are logged as duplicate_detected And the system remains correct under at-least-once delivery from upstream sources
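The duplicate-suppression criterion above amounts to keying all processing on the idempotencyKey. A minimal in-memory sketch (assumption: a production system would back the seen-set with durable storage, e.g. a database unique constraint, to survive restarts):

```python
class IdempotentProcessor:
    """Accept each logical event once; log duplicates without re-alerting."""
    def __init__(self):
        self._seen = set()
        self.log = []

    def process(self, event: dict) -> bool:
        key = event["idempotencyKey"]
        if key in self._seen:
            # At-least-once upstream delivery: duplicates are logged,
            # never alerted or audited a second time.
            self.log.append(("duplicate_detected", key))
            return False
        self._seen.add(key)
        self.log.append(("processed", key))
        return True
```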
Resilience: Retries with Exponential Backoff and Dead-Lettering
Given a retryable transient failure (e.g., HTTP 5xx, network timeout) during normalization, evaluation, or alert publication When the operation fails Then the system retries using exponential backoff with jitter up to the configured maximum attempts/time budget And upon a successful retry, processing resumes without emitting duplicate alerts And upon exhausting retries, the event is moved to a dead-letter queue with failure cause and original payload preserved And metrics retry_attempts_total and dlq_events_total are incremented
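The "exponential backoff with jitter" retry policy can be sketched as full jitter, where each delay is drawn uniformly below an exponentially growing cap. The base, cap, and attempt count here are illustrative, not values mandated by this spec:

```python
import random

def backoff_schedule(max_attempts: int, base: float = 1.0, cap: float = 30.0,
                     rng=random.random):
    """Full-jitter backoff: the n-th delay is uniform in [0, min(cap, base*2**n))."""
    return [rng() * min(cap, base * (2 ** n)) for n in range(max_attempts)]
```

After the schedule is exhausted, the event would be dead-lettered with its failure cause, as the criterion requires.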
Graceful Degradation: Durable Queue-and-Drain Without Data Loss
Given a detected outage of a downstream dependency When new detection signals continue to arrive Then the pipeline enqueues MatchEvents durably to persistent storage and applies backpressure to remain within resource limits And no data loss occurs; each enqueued event is eventually processed after recovery or present in the DLQ with complete context And upon recovery, the system drains the backlog automatically, preserving per-key ordering (assetId/linkId) and exposing metrics backlog_depth and backlog_oldest_age_seconds
Multi-channel Alert Delivery (Email/Push/Slack)
"As an artist manager on the go, I want alerts delivered on my preferred channels immediately so that I never miss a critical incident regardless of where I am."
Description

Implement an alerting service that composes templated messages and delivers them via email (SMTP provider), mobile push (APNs/FCM), and Slack (App/Webhooks) within seconds of a qualifying MatchEvent. Support per-recipient channel preferences, fallback paths (e.g., if push fails, send email), retries with exponential backoff, idempotent delivery keys, and rate controls per channel. Messages must include signed deep links for one-tap actions and respect organization branding and localization (timezones, 12/24h). Delivery status is captured for analytics and compliance.

Acceptance Criteria
Real-time Multi-Channel Dispatch on MatchEvent
Given a qualifying MatchEvent M is created for Org O with recipients who have at least one enabled alert channel When M is ingested and persisted Then for each recipient, the first delivery attempt on their highest-priority enabled channel starts within 5 seconds of M.createdAt And the composed message includes the proof summary and one-tap action links And a provider success response (SMTP 250, APNs 200, FCM 200, Slack 2xx with ok=true) marks the attempt as Delivered and finalizes the flow for that recipient And if a recipient has no enabled channels, no delivery is attempted and a Suppressed(NoEnabledChannels) status is recorded
Per-Recipient Channel Preferences
Given a recipient has a saved channel preference order and enablement flags (e.g., push: enabled, slack: enabled, email: enabled) When a MatchEvent M triggers alert delivery Then channels are attempted strictly in the saved order for that recipient, skipping any disabled channel And if no preference exists for the recipient, the org’s default channel order is used And org-level channel disablement prevents use of that channel regardless of recipient settings And only the first successful channel delivery per recipient is performed; lower-priority channels are not attempted after a success
Resilience: Fallback, Retry, and Rate Control per Channel
Given the primary channel attempt fails with a retriable condition (e.g., network error, 5xx, provider 429) When retrying Then exponential backoff with jitter is applied (e.g., ~1s, ~2s, ~4s, ~8s; capped at 30s), up to the configured max attempts per channel And a non-retriable failure (e.g., 4xx except 429, invalid token) triggers immediate fallback to the next preferred enabled channel within 2 seconds And if all enabled channels fail or are exhausted, the delivery outcome is Final=Failed with the last error captured Given channel rate limits are configured (e.g., per-recipient per-channel and per-org per-channel thresholds) When incoming alerts exceed the configured thresholds Then excess attempts are queued and scheduled without breaching provider limits, and any attempts exceeding the maximum queue age are Dropped(RateLimited) with audit entries And retries respect the same rate controls and never violate configured caps
Idempotent Delivery Keys Prevent Duplicates
Given idempotency keys are computed as a hash of (orgId, matchEventId, recipientId, channel, templateVersion) When duplicate delivery requests or re-queued jobs for the same tuple occur within the idempotency window (24 hours) Then at most one successful send is performed per channel per recipient, and subsequent attempts short-circuit as AlreadyProcessed without contacting the provider And idempotency is enforced across process restarts and distributed workers, with collisions logged for audit
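The key computation named above can be sketched directly; SHA-256 and the "|" separator are assumptions, the tuple itself comes from the criterion:

```python
import hashlib

def delivery_key(org_id: str, match_event_id: str, recipient_id: str,
                 channel: str, template_version: str) -> str:
    """Deterministic idempotency key over the delivery tuple."""
    raw = "|".join((org_id, match_event_id, recipient_id, channel,
                    template_version))
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()
```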
Signed Deep Links and One-Tap Actions
Given an alert is delivered with embedded one-tap action links (revokeLink, notifyStakeholders, openCase) When a recipient taps a link Then the request contains a signed token (JWT) with claims {orgId, recipientId, matchEventId, action, iat, exp} and is verified against the current key And tokens with invalid signature or exp in the past return 401 Unauthorized and perform no action And a valid token executes the intended action and responds 200 within 2 seconds And tokens are single-use; a second use returns 409 Conflict (AlreadyUsed) And mobile clients deep-link to the app scheme when installed; otherwise the web fallback is opened
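The signed-token flow can be sketched as a minimal HS256 JWT using only the standard library. This is a sketch under stated assumptions: production code should use a maintained JWT library, and the single-use/409 behavior would additionally require a server-side used-token store, which is omitted here:

```python
import base64, hashlib, hmac, json, time

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_action_token(claims: dict, secret: bytes) -> str:
    """Minimal HS256 JWT over the claim set named in the criterion."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_action_token(token: str, secret: bytes, now: float = None):
    """Return claims if signature and exp check out; else None (the 401 path)."""
    try:
        header, payload, sig = token.split(".")
    except ValueError:
        return None
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None                # tampered or wrongly signed
    padded = payload + "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(padded))
    if claims.get("exp", 0) <= (time.time() if now is None else now):
        return None                # expired
    return claims
```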
Branding and Localization (Timezone, 12/24h)
Given Org O has branding (logo, display name, brand color) and localization settings (default timezone, time format 12h/24h) When an alert is rendered for recipient R Then email, push, and Slack messages include O’s branding elements or fall back to system defaults if absent And all timestamps in the message are displayed in R’s timezone; if unset, O’s default; if unset, UTC And time formatting respects R’s 12h/24h preference; if unset, O’s default And the same event timestamp renders consistently across channels in the chosen timezone and format
Delivery Status Tracking and Analytics
Given deliveries are attempted for recipients across channels When each attempt completes (Queued, Sent, Delivered, Failed, Dropped, Suppressed) Then a DeliveryRecord is written with fields {deliveryId, orgId, recipientId, channel, attemptNumber, idempotencyKey, providerMessageId (if any), requestTs, responseTs, status, errorCode, errorMessage, final} And an AnalyticsEvent is emitted within 5 seconds of status change for aggregation and dashboards And DeliveryRecords are queryable by time range, org, recipient, channel, status, and matchEventId with results returned within SLA (p95 <= 2s for 10k-record scans)
Proof Summary in Alerts
"As a security-conscious artist, I want each alert to show compact proof of the match so that I can trust the signal and decide quickly whether to escalate."
Description

Embed a concise, verifiable proof block in every alert containing the asset and release identifiers, watermark or recipient token, matched URL or endpoint, event time, geo/IP snippet, and a confidence score. Include a lightweight evidence preview (e.g., redacted screenshot or hash excerpt) and a safe link to view full evidence in IndieVault. Ensure sensitive data is redacted by default and that proofs are signed and tamper-evident to support downstream incident handling and external sharing.

Acceptance Criteria
Proof Block Field Completeness and Formatting
Given an alert is generated for a positive match When the alert is composed for any channel (email, push, Slack) Then the proof block includes exactly these labeled fields: asset_id, asset_title, release_id, release_title, token_type, token_masked, matched_target_type, matched_url_or_endpoint, event_time_utc, geo_country, geo_region (optional), ip_masked, confidence_score_percent, evidence_preview_present, safe_evidence_link And event_time_utc is ISO 8601 UTC (e.g., 2025-08-19T14:23:11Z) And ip_masked shows IPv4 masked to /24 or IPv6 masked to /64 And confidence_score_percent is 0.0–100.0 with one decimal And no required field is null, empty, or labeled "unknown"
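The ip_masked rule (IPv4 to /24, IPv6 to /64) can be sketched with the standard library; the string format shown is an assumption:

```python
import ipaddress

def mask_ip(ip: str) -> str:
    """Mask IPv4 to /24 and IPv6 to /64, per the proof-block criteria."""
    prefix = 24 if ipaddress.ip_address(ip).version == 4 else 64
    # strict=False zeroes the host bits so only the network is kept.
    return str(ipaddress.ip_network(f"{ip}/{prefix}", strict=False))
```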
Default Redaction of Sensitive Data in Alerts
Given the proof contains any sensitive value (PII, tokens, secrets) When the alert is delivered Then sensitive values are redacted by default: emails masked as user…@domain.tld, tokens show only last 4 chars, IPs masked as defined, URL query strings stripped, file paths truncated to basename And an automated PII scan of the alert body and attachments reports 0 critical leak findings and ≤1 minor finding And the alert UI provides no control to reveal unredacted values; unredacted data is only viewable in IndieVault by authorized users
Lightweight Evidence Preview
Given the match includes visual evidence When the alert is delivered Then a redacted static image preview (PNG or JPEG) is included with size ≤150 KB and max dimensions ≤600×600 px And the preview contains no active content, scripts, or external references And if the match is non-visual (e.g., hash match), a text preview shows a 16-byte hex excerpt with start/end offsets And if safe preview generation fails, a placeholder and failure reason are shown instead of the preview
Safe Link to Full Evidence Behavior
Given a recipient clicks the safe evidence link in the alert When the link is opened Then it resolves to an HTTPS URL under *.indievault.app, requires authentication or a signed token scoped to the recipient and proof_id, and expires within 24 hours of alert creation And the URL contains no PII or long-lived secrets And after expiry or revocation the endpoint returns HTTP 410 Gone without exposing evidence metadata And each access is audit-logged with proof_id, recipient_id, and masked IP
Cryptographic Signature and Verification
Given a proof block is generated When the alert is sent via any channel Then the proof JSON payload includes proof_id, payload_sha256, signature_ed25519, and key_id And IndieVault publishes the corresponding public key set at /.well-known/indievault-keys.json And verification of the signature with the published key succeeds for unmodified alerts and fails when any byte of the payload is altered And the alert includes a Verify Proof link that confirms validity server-side and returns proof_id, status=valid|invalid, and signed_at
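The payload_sha256 half of this criterion depends on hashing a canonical serialization, so signer and verifier hash identical bytes. A sketch of that half follows; the Ed25519 signature itself is not in the Python standard library and would come from a library such as `cryptography` (an assumption, not named by this spec):

```python
import hashlib, json

def payload_sha256(proof: dict) -> str:
    """Hash a canonical JSON form (sorted keys, no whitespace)."""
    canonical = json.dumps(proof, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def is_tampered(proof: dict, recorded_sha256: str) -> bool:
    """Any altered byte changes the canonical form and flips the digest."""
    return payload_sha256(proof) != recorded_sha256
```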
Cross-Channel Rendering Consistency
Given alerts are delivered via email, push, and Slack When viewed on Gmail (web), Apple Mail (latest), iOS/Android push notifications, and Slack (desktop/mobile) Then the proof block presents the same mandatory field set and order across channels And push notifications include at minimum asset_id, release_id, confidence_score_percent, event_time_utc, and the safe link And Slack uses Block Kit with a code block containing the proof JSON; email provides both plain-text and HTML versions And no channel truncates or omits mandatory fields; links and attachments pass malware scanning and do not trigger client security warnings
One-Tap Remediation Links
"As a project lead, I want one-tap actions in the alert so that I can revoke access or start an incident without logging into the dashboard and losing precious time."
Description

Provide secure, single-action links in alerts to immediately revoke affected review links, notify predefined stakeholder groups, or open an incident case in IndieVault. Links must be short-lived, signed, and permission-aware, executing the action without additional navigation when possible and returning a clear success/failure confirmation. Include safeguards such as undo within a short window for revocations, CSRF protection for web flows, and full audit logging of who triggered what, when, and from which channel.

Acceptance Criteria
Email One-Tap Revoke of Affected Review Links
Given an email alert contains a one-tap Revoke link scoped to the incident and signed for the recipient, When an authorized recipient clicks it within the link validity window, Then all and only the affected review links referenced by the alert are revoked within 5 seconds and a success confirmation is shown without additional navigation.
- The confirmation displays the number of links revoked and the incident ID.
- If some links were already revoked, the action is idempotent and only changes active links; the result reflects partial changes.
- If the link is expired, tampered, or the user lacks permission, no links are changed and a clear failure message is shown with retry/sign-in options.
Mobile Push One-Tap Notify Stakeholder Groups
Given a mobile push alert includes a one-tap Notify Stakeholders action, When an authorized user taps it within the link validity window, Then predefined stakeholder groups receive notifications via their configured channels within 60 seconds containing the incident reference, affected assets, and proof summary.
- The initiator receives an immediate in-app confirmation indicating recipients and delivery status.
- The action is idempotent for duplicates within a 10-minute window; repeat taps do not send duplicate notifications.
- Per-recipient delivery failures are retried up to 3 times and surfaced in the confirmation with a count of failures.
Slack Alert One-Tap Open Incident Case
Given a Slack alert contains an Open Case button, When a mapped and authorized Slack user clicks it within the link validity window, Then a new incident case is created in IndieVault with incident type, affected assets, matched link IDs, and source channel prefilled, and an ephemeral Slack confirmation returns the case ID and link within 5 seconds.
- If a case already exists for the incident correlation ID, no duplicate is created; the existing case is returned in the confirmation.
- If the request is expired or unauthorized, no case is created and an ephemeral error is shown.
Short-Lived, Signed, Permission-Aware Action Links
- All one-tap action links are HMAC-signed with rotating keys and embed action, scope, expiry timestamp, nonce, channel, and actor hint; the default TTL is 15 minutes and is configurable.
- Links are single-use and idempotent: replays within the TTL do not re-execute and return the prior result.
- Expired, tampered, or scope-mismatched links return 401/403 and perform no changes.
- Authorization enforces that only permitted roles (e.g., Owner, Manager, Security) in the workspace of the affected assets can execute; forwarded links from outside the workspace are denied.
- Responses do not leak resource existence beyond authorization state.
Undo Window for Revocations
Given a revoke action succeeds, When an authorized user activates the provided Undo within 2 minutes, Then all review links revoked by that action are restored to their prior state (permissions, expiry, watermark settings) within 5 seconds.
- The Undo link is single-use, idempotent, and invalidated if any of the affected links are manually changed before undo.
- If the Undo is expired or unauthorized, no changes are made and a clear failure message is shown.
- Both revoke and undo are correlated and referenced in confirmations.
Web Flow CSRF Protection for Fallback Confirmation
- Any browser-based fallback that can mutate state requires a same-origin POST with a per-session CSRF token; GET requests never mutate state.
- CSRF tokens are unique per action, bound to session and origin, and expire with the link TTL; replayed tokens are rejected.
- SameSite cookies are enabled for session; Origin/Referer are validated for same-origin POSTs; mismatches return 403 with no side effects.
- Security tests simulate cross-site form POST and image/script GET to verify no state change occurs.
Comprehensive Audit Logging Across Channels
- Every attempted one-tap action (revoke, notify, open case) logs actor or channel identity, workspace, action type, target resource IDs and counts, incident/correlation ID, source channel, IP/device or user agent (where available), UTC timestamp, outcome, latency, and error codes.
- Logs are immutable, queryable, and visible in the Admin Audit Log within 30 seconds of the attempt.
- Confirmation messages display a correlation ID that matches the audit entry.
- Export and retention policies are enforced per workspace settings; attempts to alter or delete logs are denied and recorded.
Alert Rules, Preferences, and Escalations
"As an operations manager, I want configurable alert rules and escalations so that the right people are notified at the right time without creating alert fatigue."
Description

Expose organization- and project-level configuration for tripwire criteria, severity thresholds, quiet hours, channel preferences, recipients, and escalation policies (e.g., if unacknowledged for 10 minutes, escalate to Slack and email leadership). Support rule scoping by asset, release, or link, with override hierarchies and test mode to simulate alerts. Provide RBAC-controlled access, versioned rule history, and safe defaults for new projects to reduce misconfiguration.

Acceptance Criteria
Project Inherits Org Defaults with Override
Given the organization has configured tripwire default settings When a new project is created in that organization Then the project's tripwire settings are initialized from the organization's defaults And any setting not present at the organization level uses platform safe defaults Given a project-level override is saved for any tripwire setting When the effective settings are requested via API or UI Then the project-level value takes precedence over the organization default And a "reset to org default" action restores the inherited value and removes the override Given no user changes have been made at project level When safe defaults are updated at the organization level Then the project's effective settings reflect the updated organization defaults within 1 minute
Rule Scoping by Asset, Release, and Link
Given rules exist at organization, project, release R1, asset A1, and link L1 with conflicting settings When a tripwire event is raised for link L1 of asset A1 in release R1 Then the system applies precedence Link > Asset > Release > Project > Organization to resolve conflicts And only the highest-precedence matching rule determines severity threshold, quiet hours, channels, recipients, and escalation policy And the effective rule identifier is attached to the event record Given an asset A2 has no asset- or link-scoped rule When an event is raised for A2 within release R2 Then the release rule applies if present; otherwise the project rule; otherwise the organization rule
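The Link > Asset > Release > Project > Organization precedence above can be sketched as a first-match walk over the scopes; the `rules` shape here is a hypothetical structure for illustration:

```python
PRECEDENCE = ("link", "asset", "release", "project", "organization")

def effective_rule(rules: dict, event: dict):
    """rules maps scope -> {scope_id: rule}; the event carries the IDs.
    Returns (scope, rule) for the highest-precedence match, else None."""
    keys = {"link": event.get("linkId"), "asset": event.get("assetId"),
            "release": event.get("releaseId"), "project": event.get("projectId"),
            "organization": event.get("orgId")}
    for scope in PRECEDENCE:
        rule = rules.get(scope, {}).get(keys[scope])
        if rule is not None:
            return scope, rule   # only the highest-precedence rule applies
    return None
```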
Quiet Hours and Severity Threshold Filtering
Given the project time zone is set to America/Los_Angeles and quiet hours 22:00–07:00 are configured When a Low severity match occurs at 22:30 local time Then no notification is sent to any channel And the event is logged with suppressed_by = "quiet_hours" Given the rule severity threshold is High When a Medium severity match occurs at 14:00 local time Then no notification is sent And the event is logged with suppressed_by = "severity_threshold" Given quiet hours cross midnight When the clock transitions across DST Then suppression still applies correctly based on local time boundaries
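The midnight-crossing quiet-hours check reduces to a wall-clock comparison once the event timestamp has been converted to the project's local time (e.g., via zoneinfo, which is how the DST case is absorbed). A sketch:

```python
from datetime import time

def in_quiet_hours(local_time: time, start: time = time(22, 0),
                   end: time = time(7, 0)) -> bool:
    """True if local wall-clock time falls in quiet hours; a start later
    than the end means the window crosses midnight."""
    if start <= end:
        return start <= local_time < end
    return local_time >= start or local_time < end
```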
Multi-Channel Preferences and Recipients
Given a rule has channels email = on, push = off, slack = on and recipients include user u1, group g1, and Slack channel #security When a qualifying alert is emitted Then delivery attempts are made only to email and Slack And per-recipient delivery status is recorded as delivered|bounced|failed And Slack delivery is skipped with status = "channel_not_configured" if the workspace integration is not connected And the alert record lists the resolved recipient set with channel used per recipient
Escalation on Unacknowledged Alert
Given a rule has escalation "if unacknowledged for 10 minutes then notify Slack #leadership and email leadership@org.com" When an alert is emitted at T0 Then an escalation timer is scheduled for T0+10m When any authorized user acknowledges the alert before T0+10m Then the escalation is canceled and no escalation notifications are sent When the alert remains unacknowledged at T0+10m Then a Slack message is sent to #leadership and an email to leadership@org.com And the event record shows escalated = true with timestamp and targets And subsequent duplicate alerts for the same incident within 10 minutes do not schedule additional escalations
Test Mode Simulation Without Live Notifications
Given test mode is enabled for the project and a user with permission starts a "simulate alert" for asset A1 When the simulation runs Then a TEST alert event is generated with incident_type = "simulation" And only sandbox recipients configured for test mode receive notifications And no production notifications are sent And no automatic actions (e.g., revoke links) are executed And the simulation appears in logs and analytics marked test = true
RBAC and Versioned Rule History
Given RBAC roles where Organization Admin and Project Maintainer can create/update rules, and Project Viewer has read-only access When a Project Viewer attempts to modify a rule Then the request is denied with 403 Forbidden and no changes are written When a Project Maintainer updates a project-scoped rule with change note "raise threshold" Then a new rule version is created capturing before/after diff, editor id, timestamp, and note And the effective configuration reflects the new version immediately When an Organization Admin reverts the rule to version N Then the effective configuration matches version N And the history shows a new "revert" entry linking to the source version And prior versions remain immutable and cannot be edited or deleted by any role
Alert Analytics and Audit Trail
"As a label owner, I want analytics and an audit trail of alerts and actions so that I can measure responsiveness and meet compliance requirements."
Description

Capture delivery metrics (sent, bounced, opened, clicked), action outcomes (revoked, notified, case opened), and time-to-acknowledge/resolve for each incident. Surface dashboards and exports to correlate alerts with per-recipient analytics, measure response times, and demonstrate compliance. Maintain an immutable audit log linking MatchEvents, alerts, and actions with user identity and channel, with retention settings aligning to organizational policies.

Acceptance Criteria
Record Delivery Metrics per Recipient and Channel
Given a MatchEvent triggers alerts to multiple recipients via email, push, and Slack When delivery providers respond and recipients view or interact with the alerts Then the system records, per recipient and per channel, sentAt; bounceType and bouncedAt (if any); openedAt for email/push; and click events with firstClickAt and totalClicks And deduplicates multiple opens/clicks while retaining total counts And updates metrics within 60 seconds of receiving provider/webhook events And exposes these metrics in the incident detail UI and via API at /analytics/alerts/{incidentId}
Log Action Outcomes with User Identity and Channel
Given an authenticated user receives an alert When the user invokes one-tap actions (Revoke Link, Notify Stakeholders, Open Case) from email, push, Slack, or web Then the system creates an action record linked to the incident and matchEvent with fields: actionType, status (success|failure), timestamp, actor.userId, actor.role, invokedFromChannel, targetEntities, errorCode (if failure) And retries transient failures up to 3 times with exponential backoff and records each attempt And surfaces action outcomes in the incident timeline, dashboard rollups, and exports
Compute Time-to-Acknowledge and Time-to-Resolve
Given an incident is created at t0 When the first authenticated user views the alert or performs any one-tap action at tAck Then timeToAcknowledgeSec = tAck - t0 (seconds) And when the incident is marked Resolved by a user or all offending links are successfully revoked at tResolve Then timeToResolveSec = tResolve - t0 (seconds) And both timestamps (ackAt, resolveAt) and durations are persisted, shown in the incident UI, available via API, and included in exports And all timestamps are ISO 8601 UTC and computed server-side
Correlate Alerts with Per-Recipient Analytics in Dashboard
Given incidents exist across channels, assets, and recipients within a date range When a user opens the Alert Analytics dashboard and applies filters by date range, channel, recipient, asset, and status Then the dashboard displays totals for sent, bounced, opened, clicked; median and p95 timeToAcknowledgeSec and timeToResolveSec; and trend charts by day And clicking a metric drills down to the filtered incident list with matching counts And dashboard aggregates match underlying event data within 0.1% for the same filters
Export Incident Analytics and Audit Data
Given a user selects a date range and filters When the user exports CSV and JSON or schedules a weekly export Then the export contains one row per alert-recipient per incident with fields: incidentId, matchEventId, recipientId, channel, sentAt, bounceType, bouncedAt, openedAt, firstClickAt, totalOpens, totalClicks, actions[], actionActorIds, ackAt, resolveAt, timeToAcknowledgeSec, timeToResolveSec, recordHash, createdAt And timestamps are ISO 8601 UTC; empty values are blank; column names are stable across versions And row counts and aggregates match the dashboard for identical filters And scheduled exports are delivered via expiring link to specified emails and logged in the audit trail
Immutable Audit Log with Retention and Legal Hold
Given audit logging is enabled with a retention policy of R days and optional legal holds When any MatchEvent is created, any alert is sent/delivered/opened/clicked, or any action is taken Then an append-only audit entry is written with sequenceId, timestamp (ISO 8601 UTC), actor (system or userId), channel, eventType, payloadDigest, previousHash, entryHash And a verify endpoint returns integrity proof by validating the hash chain over a requested range And attempts to edit or delete audit entries are rejected (HTTP 403) and no data is mutated And entries older than R days are purged and replaced by a RetentionPurge tombstone unless on legal hold And fetching an incident’s audit trail returns an ordered chain linking MatchEvent -> Alerts -> Actions with identities and channels
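The previousHash/entryHash chain above makes the log tamper-evident: each entry commits to its predecessor, so editing any record breaks verification from that point on. An in-memory sketch (durable, append-only storage is assumed in production):

```python
import hashlib, json

class AuditLog:
    """Append-only hash chain over audit records."""
    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> dict:
        prev = self.entries[-1]["entryHash"] if self.entries else "0" * 64
        body = json.dumps(record, sort_keys=True, separators=(",", ":"))
        entry = {"sequenceId": len(self.entries), "record": record,
                 "previousHash": prev,
                 "entryHash": hashlib.sha256((prev + body).encode()).hexdigest()}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Integrity proof: re-derive every hash along the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = json.dumps(e["record"], sort_keys=True, separators=(",", ":"))
            if e["previousHash"] != prev:
                return False
            if e["entryHash"] != hashlib.sha256((prev + body).encode()).hexdigest():
                return False
            prev = e["entryHash"]
        return True
```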
Deduplication and Noise Suppression
"As a busy producer, I want duplicate and low-value alerts suppressed or bundled so that I can focus on truly urgent issues."
Description

Implement deduplication windows and incident grouping to avoid spamming recipients for repeated matches on the same asset/link. Provide severity-based suppression rules, per-channel rate limits, and digest bundling for low-priority events while ensuring high-severity alerts always break through. Offer user-visible indicators when an alert was suppressed or grouped, and allow per-project tuning of suppression timeframes.

Acceptance Criteria
Deduplication Window for Same Asset-Link-Recipient
Given a project with a configured deduplication window W minutes and alerts enabled for email, push, and Slack And a positive match is detected for asset A on link L for recipient R with severity Medium When additional matches for asset A on the same link L for the same recipient R occur within W minutes Then only the first match within W minutes triggers a real-time alert per channel And subsequent matches within W minutes are suppressed with reason "deduplicated" and associated to the original incident And the incident shows an incrementing suppressed count and last-seen timestamp And when W minutes elapse without an alert, the next match creates a new incident and sends a new alert
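The dedup window above keys on (asset, link, recipient) and is anchored at the last *alerted* match, so suppressed repeats do not extend it. A sketch with timestamps as plain seconds for clarity:

```python
class Deduper:
    """Suppress repeat alerts for the same (asset, link, recipient)
    within a window of window_s seconds."""
    def __init__(self, window_s: int):
        self.window_s = window_s
        self._last = {}            # key -> timestamp of last alerted match

    def should_alert(self, asset_id, link_id, recipient_id, now_s) -> bool:
        key = (asset_id, link_id, recipient_id)
        last = self._last.get(key)
        if last is not None and now_s - last < self.window_s:
            return False           # deduplicated into the open incident
        self._last[key] = now_s    # window elapsed: new incident, new alert
        return True
```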
Incident Grouping Summary Within Dedup Window
Given incident grouping is enabled and the deduplication window is W minutes And at least two distinct matches for the same asset A (e.g., different source URLs) occur within W minutes of the initial alert When the deduplication window closes Then a single grouped update alert is sent per channel summarizing the incident with total match count, first-seen and last-seen timestamps, and up to 10 distinct sources listed (with "+X more" if truncated) And the grouped alert includes the same quick proof summary as the initial alert And one-tap actions (revoke links, notify stakeholders, open a case) apply to all grouped matches And recipients do not receive more than one grouped update alert per incident per channel for that window
Severity-Based Suppression and High-Severity Breakthrough
Given severity rules are configured as Low, Medium, and High And suppression, grouping, and rate limits are active for the project When a High-severity match occurs for any asset and link Then an immediate alert is sent to email, push, and Slack within 5 seconds, bypassing deduplication, suppression rules, grouping delays, and internal rate limits And the alert is not deferred into any digest And when Low or Medium severity matches occur Then suppression, deduplication, grouping, and rate limits apply as configured
Per-Channel Rate Limits Enforcement
Given project-level per-channel rate limits are configured as: email (per recipient) = E per hour, push (per device) = P per 15 minutes, Slack (per workspace channel) = S per 5 minutes And Low/Medium severity alerts are generated that would exceed these limits When the limit for a channel is reached for the applicable scope Then subsequent Low/Medium alerts in that scope within the window are suppressed with reason "rate_limited" and counted toward the next digest And the first alert after the window resets is delivered normally And High severity alerts ignore these internal limits and are delivered immediately
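One way to realize these per-scope caps is a fixed-window counter keyed by (scope, channel); this is a sketch of the counting mechanics only (queueing excess attempts toward the digest, and the high-severity bypass, happen upstream):

```python
class WindowRateLimiter:
    """Fixed-window rate limit, e.g., E emails per recipient per hour."""
    def __init__(self, limit: int, window_s: int):
        self.limit, self.window_s = limit, window_s
        self._counts = {}          # key -> (window_start, count)

    def allow(self, key, now_s) -> bool:
        start, count = self._counts.get(key, (now_s, 0))
        if now_s - start >= self.window_s:
            start, count = now_s, 0        # window reset
        if count >= self.limit:
            self._counts[key] = (start, count)
            return False                   # suppressed: reason rate_limited
        self._counts[key] = (start, count + 1)
        return True
```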
Low-Severity Digest Bundling
Given a project has digest bundling enabled for Low severity events with a cadence of D minutes And Low severity alerts were suppressed due to deduplication, suppression rules, or rate limits during the period When the digest window D elapses Then a single digest is sent per recipient per channel containing a summary of all Low severity incidents during the window with counts, first/last seen, and links to details And no High severity events appear in the digest And any Medium severity events included in the digest comply with the project’s digest inclusion rules And one-tap actions are available per incident entry within the digest where applicable
User-Visible Indicators for Suppressed and Grouped Alerts
Given alerts and incident details are viewed in the UI or received via email, push, or Slack When an alert was suppressed or grouped Then the delivered alert or subsequent grouped update includes badges/labels indicating "Grouped" with total events and "Suppressed" count with reason (deduplicated, severity_rule, rate_limited, digest) And the incident detail view shows a timeline with suppressed events, reasons, and counts And API/webhook payloads include fields: grouped=true/false, group_size, suppressed_count, suppression_reasons[], and grouping_id And when only suppressed events occurred during a window with no real-time delivery, the next delivered alert contains a summary line indicating the number of suppressed events in the prior window
Per-Project Suppression Timeframe Tuning
Given a project admin opens the Tripwire Alerts settings When they configure deduplication window (1–120 minutes), grouping enabled/disabled, digest cadence (15–1440 minutes), and per-channel rate limits with valid values Then the inputs validate with inline errors for out-of-range or conflicting settings and cannot be saved until resolved And upon save, changes are applied within 5 minutes and logged to the project audit trail with who/when/what details And settings affect only the selected project and do not change other projects And incidents opened before the change retain their original window/cadence; new incidents use the updated settings

AutoRecall

Define containment rules that automatically revoke, expire, or replace related review links when a leak is confirmed. Choose scope (track, version, full drop), auto‑issue fresh watermarked links to clean recipients, and pause exports until safe. Benefit: stop leaks cold without derailing the campaign or punishing compliant collaborators.

Requirements

AutoRecall Rules Engine
"As an indie label admin, I want to define scoped containment rules that run automatically on leak confirmation so that we can stop leaks quickly without manual intervention."
Description

A configurable rules engine in AutoRecall that lets admins define containment policies that trigger on a confirmed leak and execute actions across a chosen scope (track, version, full drop). Policies support actions such as immediate link revocation, forced expiry, automatic replacement issuance, and export pause; conditions based on recipient segments, link channel, time windows, and asset states; rule priority and conflict resolution; reusable templates per campaign; impact preview; and safe rollback. The engine integrates with IndieVault’s asset graph, link service, watermarking, and analytics to act within seconds, ensuring consistent, low‑touch containment aligned with release workflows.

Acceptance Criteria
Confirmed Leak Triggers Scoped Revocation and Replacement
Given an active AutoRecall policy configured with a defined scope (track, version, or full drop) and actions {revoke, force-expire, auto-issue replacements} And a leak has been confirmed for assets within that scope When the policy is executed Then 100% of matching active review links are set to Revoked or Expired within 5 seconds p95 and 15 seconds p99 And replacement links are issued only to non-implicated recipients with fresh watermark IDs and policy-defined expirations And implicated recipients receive no replacements And per-recipient analytics lineage associates replacement links to the original recipient And an audit log entry records counts of {revoked, expired, replacements issued, recipients skipped} and the execution timestamp
Recipient Segments, Channels, Time Window, and Asset State Conditions
Given a policy with conditions: recipient segment = External PR, link channel ∈ {Email, Direct}, time window = last 14 days, asset state = Unreleased And there exist links that both match and do not match these conditions When the policy is previewed and executed Then only links satisfying all specified conditions are targeted And preview shows exact counts of targeted links and recipients prior to execution And post-execution metrics equal preview counts (±0) And non-matching links remain unchanged
Rule Priority and Conflict Resolution
Given two enabled policies P1 (priority 10) and P2 (priority 20) that both target the same link with conflicting actions {P1: replace, P2: revoke} When a leak is confirmed and both policies are eligible Then only the higher-priority policy (P2) determines the final action And the lower-priority policy (P1) is recorded as suppressed for that link And no duplicate actions, notifications, or replacement links are generated And the audit log for the link records the winning policy ID, suppressed policy IDs, and the resolved final state
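The priority resolution above reduces to: among eligible policies targeting the same link, the highest priority wins and the rest are recorded as suppressed. A minimal sketch, assuming policies are plain records with id, priority, and action:

```python
def resolve_conflicts(policies):
    """Pick the winning action for one link from conflicting eligible policies.

    policies: non-empty list of dicts with 'id', 'priority', 'action'.
    Returns (winning_action, winning_policy_id, suppressed_policy_ids).
    """
    eligible = sorted(policies, key=lambda p: p["priority"], reverse=True)
    winner = eligible[0]
    suppressed = [p["id"] for p in eligible[1:]]  # logged as suppressed, never executed
    return winner["action"], winner["id"], suppressed
```

Running this once per link before dispatch is also what prevents the duplicate actions and duplicate replacement links the criteria forbid.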
Impact Preview (Dry Run) Accuracy and Latency
Given a policy targeting up to 5,000 links across multiple assets and recipients When the admin runs an impact preview (dry run) Then the preview returns within 5 seconds p95 and 10 seconds p99 And it lists counts by action {to revoke, to expire, to replace, exports to pause} and a downloadable CSV of targeted link IDs and recipient IDs When the policy is executed immediately after the preview without changes to the dataset Then actual action counts exactly match the preview (±0) And discrepancies > 0 trigger an error event with a retry recommendation
Safe Rollback Restores Pre-Containment State
Given a completed containment execution that revoked links, issued replacements, and paused exports When an admin triggers Safe Rollback for that execution Then previously valid links for non-implicated recipients are reactivated with original permissions and expirations And replacement links issued by the execution are revoked and marked superseded And links for implicated recipients remain revoked And paused exports for affected assets resume And watermark uniqueness is preserved (no duplicate watermark IDs across active links) And the rollback operation is idempotent and fully audit-logged with before/after counts
Export Pause and Resume Integration
Given a policy with action = Pause Exports for the selected scope And export pipelines exist for the affected assets When the policy is executed Then new export attempts for those assets are blocked with HTTP 423 (Locked) and error code EXPORT_PAUSED, including the policy execution ID And the asset export status displays Paused with a user-facing banner referencing AutoRecall When the assets are marked Safe or the pause action is explicitly lifted Then exports automatically resume within 60 seconds And a resume event is written to the audit log
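The pause contract above can be sketched as a lock table keyed by asset: while an asset is paused, export attempts are refused with HTTP 423 and error code EXPORT_PAUSED carrying the policy execution ID. The function names and response shape here are assumptions for illustration, not IndieVault's actual API.

```python
paused_assets = {}  # asset_id -> policy execution id that paused it

def pause_exports(asset_id, execution_id):
    paused_assets[asset_id] = execution_id

def lift_pause(asset_id):
    paused_assets.pop(asset_id, None)  # marking Safe or lifting explicitly

def attempt_export(asset_id):
    """Returns (http_status, body) for an export attempt."""
    if asset_id in paused_assets:
        return 423, {"error": "EXPORT_PAUSED",
                     "execution_id": paused_assets[asset_id]}
    return 200, {"status": "export_started"}
```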
Reusable Policy Templates per Campaign with Versioning
Given an admin creates a policy template T v1 with defined conditions, actions, and priority When the template is applied to Campaign C to instantiate Policy PC1 Then PC1 inherits all values from T v1 and stores the template version reference When the admin updates the template to T v2 Then new policies created from T use v2, while existing PC1 remains on v1 unchanged And the admin can clone T v2 to another campaign and edit without altering the original template And deleting a template does not delete or disable already instantiated policies And all template create/update/delete actions are audit-logged
Targeted Link Revocation & Replacement
"As an artist, I want compromised review links revoked and clean recipients automatically reissued fresh watermarked links so that the campaign continues without punishing compliant collaborators."
Description

Mechanisms to instantly revoke or expire affected review links and issue fresh, uniquely watermarked replacements to verified clean recipients. Replacement links preserve original permissions (stream/download), expiry windows, and access scopes, while regenerating identifiers and watermark fingerprints. Propagation must be atomic and idempotent, with real‑time invalidation of old tokens, cache busting, and update of per‑recipient analytics. Integrates with the link service, watermarking pipeline, and notification system to minimize campaign disruption.

Acceptance Criteria
Leak Confirmed: Immediate Scoped Revocation
Given AutoRecall is triggered for a selected scope {track|version|drop} with a leak incident ID When the revocation executes Then all review links within the selected scope are invalidated in ≤10 seconds across API and CDN And requests to revoked links return HTTP 410 with error code LINK_REVOKED and no asset bytes served And an audit log entry records actor, scope, counts of links revoked, and timestamp
Clean Recipients Receive Preserved-Permission Replacement Links
Given revocation has completed for the selected scope When replacement link generation runs Then each recipient in scope not marked compromised receives exactly one new link And the new link preserves the original stream/download permissions, expiry window, and access scope And the new link has a regenerated identifier, token, and unique watermark fingerprint And links are activated only after the watermarking pipeline reports success And generation completes in ≤60 seconds for up to 500 recipients
Atomic, Idempotent AutoRecall Execution
Given an AutoRecall job with correlation ID C is executed When the job is retried due to transient failures using the same correlation ID Then the outcome is atomic: either all scoped links are revoked and replacements issued, or none are applied And the operation is idempotent: no duplicate replacement links or tokens are created And the final count of active links in scope equals the number of clean recipients And a single audit trail exists for correlation ID C
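One common way to get the idempotency described above is a correlation-ID registry: the first successful run records its outcome, and any retry with the same ID replays that recorded result instead of revoking or issuing anything again. This is a sketch of the pattern under that assumption; the data shapes are invented.

```python
_completed = {}  # correlation_id -> result of the first successful run

def run_autorecall(correlation_id, scoped_links, clean_recipients):
    """Revoke scoped links and issue one replacement per clean recipient.

    Retries with the same correlation_id return the original result,
    so no duplicate replacement links or tokens are ever created.
    """
    if correlation_id in _completed:
        return _completed[correlation_id]      # idempotent replay
    result = {
        "revoked": sorted(scoped_links),
        "replacements": {r: f"link-{correlation_id}-{r}" for r in clean_recipients},
    }
    _completed[correlation_id] = result        # in production: committed atomically
    return result
```

Atomicity (all-or-nothing across the revoke and reissue steps) would come from wrapping the body in a transaction; the registry alone only guarantees the idempotent-replay half of the criteria.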
Real-Time Token Invalidation and Cache Busting
Given a previously valid link URL has been revoked by AutoRecall When a client requests the asset via API or any CDN edge Then the request returns HTTP 410 within ≤10 seconds of the AutoRecall trigger And CDN caches for the revoked token are invalidated and serve no asset bytes And replacement links generate different cache keys and fresh ETags, ensuring cached stale content is not reused
Per-Recipient Analytics Migration and Continuity
Given replacement links have been issued for clean recipients When analytics are queried for a specific recipient Then historical events from the revoked link are associated with the recipient and displayed on the replacement link detail And the revoked link stops accruing events from the revocation timestamp forward And cumulative metrics (opens, streams, downloads) remain continuous with a rotation marker indicating the switch And the API returns a stable recipient analytics ID across the rotation
Recipient Notifications for Replacement Links
Given replacement links are ready and active When notifications are dispatched Then each clean recipient receives exactly one notification per configured channel containing the new link, unchanged permissions summary, and expiry And no notifications are sent to compromised recipients And delivery success rate is ≥98% within 5 minutes for email and ≥99% immediate enqueue for in-app notifications And failed deliveries are retried up to 3 times with exponential backoff and are visible in the audit log
Export Pause and Safe Resumption During AutoRecall
Given AutoRecall is active for scope S When an export or public share is requested for assets in S Then the request is blocked with HTTP 423 LOCKED and message "Recall in progress" until replacement links are issued and watermarking completes And upon AutoRecall completion, exports automatically resume without manual intervention And pause/resume events are recorded with timestamps and the incident ID
Leak Confirmation Workflow
"As a project manager, I want a clear workflow to confirm a leak and choose the containment scope so that the right actions are applied quickly and accurately."
Description

A guided UI and API to confirm suspected leaks, capture evidence (e.g., URLs, screenshots, fingerprint matches), select containment scope (track, version, drop), choose or override applicable AutoRecall rules, and set activation timing (immediate or scheduled). Supports role‑based approvals, audit notes, and a one‑click execute action with preflight checks to surface impact (links affected, recipients, exports to pause). Ensures a fast, accurate path from detection to enforcement with guardrails.

Acceptance Criteria
Immediate Leak Confirmation and Execution (Single Track Version)
Given a user with role Release Manager or Security Admin opens Leak Confirmation for a suspected link And provides at least one evidence URL, one file attachment, and one fingerprint match for the case And selects scope "Version" for the specific track version And applies the default AutoRecall rule with activation set to Immediate When the user runs Preflight Then the system displays within 3 seconds the counts of affected review links, unique recipients, and exports to pause And provides a downloadable CSV of affected links and recipients And enables Execute only if approval requirements are already satisfied or the user has approver privileges When the user clicks Execute Then within 60 seconds 100% of review links for the selected version are revoked And fresh watermarked replacement links are issued to recipients not flagged as suspected leakers And export jobs referencing the version are marked Paused And notifications are sent to impacted clean recipients And an immutable audit entry is created containing timestamp, actor, scope, rule id, evidence hashes, and impact counts And the UI displays a success state with the operation id
Scheduled Activation with Role-Based Approvals (Full Drop Scope)
Given a user initiates Leak Confirmation for a full drop And chooses scope "Drop" and schedules activation for a future timestamp And selects the applicable AutoRecall rule And the workflow requires at least one approver from the Security Admin group When the user requests approval Then designated approvers receive in-app and email notifications containing scope, evidence summary, and preflight impact When at least one required approver approves before the scheduled time Then the workflow state becomes Approved and locks the scope (the scheduled time may still be edited and the request may still be cancelled) When the scheduled time is reached Then containment executes automatically and applies the selected rule across the drop And an audit record includes the approval chain, scheduled time, executor (system), and impact counts If no approval is recorded by the scheduled time Then execution is skipped, the request is marked Expired, and an "Approval Missed" notification is sent to the requester
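Hmm, no insert belongs here.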
Override AutoRecall Rules for Exceptional Recipient Handling
Given a user selects scope "Version" and the default rule would revoke links for all recipients And the user overrides the rule to (a) block recipient L as Source of Leak and (b) replace-only for recipients R1 and R2 When Preflight runs Then the impact list categorizes L as Revoke+Block, R1 and R2 as Replace Only, and all other recipients per base rule And the downloadable CSV includes override flags and rationale notes When Execute runs Then L’s current and future review links for the scoped assets are revoked within 60 seconds and further link issuance to L is blocked And R1 and R2 receive new watermarked links and their old links expire within 60 seconds And no notifications are sent to blocked recipients; replace-only recipients receive update notifications And the audit log records the overrides, actor, and rationale
Preflight Impact Assessment and Guardrails
Given the user is on the Leak Confirmation form When required evidence fields are missing or invalid (no URL and no attachment and no fingerprint match) Then the Preflight action is disabled and inline errors identify missing fields Given valid evidence is provided and a scope is selected When Preflight runs Then the system lists impacted review links, recipients, and exports with totals and provides CSV download And if impacted totals are zero, Execute remains disabled with a "No impact" message And if required approvals are not satisfied, Execute remains disabled with an approval-required hint And if more than 5 minutes elapse or scoped assets change after Preflight, a "Preflight stale" warning appears and Execute is disabled until Preflight is re-run
Evidence Capture and Integrity Preservation
Given the user attaches evidence files (screenshots/logs) and enters URLs and fingerprint matches Then only allowed file types (png, jpg, pdf, txt, csv, zip) up to the configured size limit are accepted And the system computes and displays SHA-256 hashes for each evidence item and stores them with a case id using write-once semantics And evidence is permissioned to Security Admins and Release Managers only When the containment is executed Then the audit log entry includes evidence ids, hashes, and a redacted preview of metadata (filename, size, type); contents remain immutable and retrievable via authorized request And the API exposes evidence_ids in the leak case resource for downstream systems
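The integrity rule above (compute a SHA-256 per evidence item, store it write-once against the case) is straightforward to sketch with the standard library. The store layout is an assumption; only the hash-and-never-overwrite behavior is taken from the criteria.

```python
import hashlib

_evidence_store = {}  # (case_id, evidence_id) -> sha256 hex digest

def attach_evidence(case_id, evidence_id, content: bytes):
    """Record evidence with write-once semantics and return its SHA-256."""
    key = (case_id, evidence_id)
    if key in _evidence_store:
        raise PermissionError("write-once: evidence already recorded for this id")
    digest = hashlib.sha256(content).hexdigest()
    _evidence_store[key] = digest
    return digest
```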
API-Driven Leak Confirmation Workflow
Given a client holds a valid service token with scope leak:write When it calls POST /api/leaks with payload {scope, activation_time, ruleset_id, evidence, approvals} Then the API validates payload and returns 202 with operation_id and initial status Pending-Preflight And a subsequent GET /api/leaks/{id} reflects status transitions: Preflighted -> Approved -> Executing -> Executed (or Failed/Expired) with timestamps And a webhook autoRecall.executed is emitted containing operation_id, scope, impact counts, and audit url And idempotency is enforced via Idempotency-Key header so retried POSTs do not create duplicate executions And requests exceeding rate limit receive 429 with Retry-After
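The status transitions named above form a small state machine: Pending-Preflight → Preflighted → Approved → Executing → Executed, with Failed and Expired as terminal exits. The transition table below is inferred from the criteria (which do not enumerate every edge), so treat it as a sketch rather than the definitive graph.

```python
# Allowed next-states per current state, inferred from the acceptance criteria.
_TRANSITIONS = {
    "Pending-Preflight": {"Preflighted", "Failed"},
    "Preflighted": {"Approved", "Expired", "Failed"},
    "Approved": {"Executing", "Expired"},
    "Executing": {"Executed", "Failed"},
    # Executed / Failed / Expired are terminal: no outgoing edges.
}

def advance(case, new_status):
    """Move a leak case to new_status, rejecting illegal transitions."""
    allowed = _TRANSITIONS.get(case["status"], set())
    if new_status not in allowed:
        raise ValueError(f"illegal transition {case['status']} -> {new_status}")
    case["status"] = new_status
    return case
```

Enforcing the table server-side is also what makes the idempotent POST retries safe: a replayed request cannot push a terminal case back into Executing.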
Export Pause Enforcement and Safe Resume
Given containment execution identifies exports E1 and E2 to pause Then within 60 seconds E1 and E2 transition to Paused with reason LeakContainment and a link to the containment case And any new export jobs within the affected scope are rejected with 409 and a user-readable message until safe conditions are met And the campaign dashboard lists paused exports with Resume CTA disabled until a safe version is available or an authorized user overrides with an audit note When a safe version is marked or an authorized override is applied Then Resume becomes available, resumes are executed successfully, and audit entries record who resumed, when, and under what condition
Recipient Hygiene & Allow/Deny Lists
"As a campaign manager, I want AutoRecall to separate clean collaborators from suspected leakers so that replacements go only to trusted recipients."
Description

Automated determination of impacted recipients using per‑recipient analytics and watermark attribution, maintaining dynamic quarantine (deny) lists for suspected or confirmed leakers and clean allow lists for compliant collaborators. AutoRecall uses these lists to target revocation and to restrict replacement issuance only to clean recipients, with configurable re‑verification (e.g., email re‑auth). Provides bulk management, manual overrides, and time‑bound probation rules.

Acceptance Criteria
Auto-quarantine on watermark attribution
Given a confirmed leak event E with watermark attributing recipient R and scope S (track|version|drop) When AutoRecall ingests E Then R is added to the deny list within 10 seconds And R's deny list record includes E.id, scope S, and timestamp And repeated ingestion of the same event E is idempotent and does not create duplicate deny-list entries
Targeted revocation and clean reissue
Given a confirmed leak event E with scope S and deny list D={R1..Rn} and allow list A={C1..Cm} When AutoRecall executes containment for scope S Then all active review links for recipients in D within scope S are revoked within 15 seconds and return HTTP 410 on access And no replacement links are issued to any recipient in D And new watermarked replacement links are issued to all recipients in A who had prior access to scope S within 60 seconds And exports for scope S are paused until an admin sets Safe=true And recipients in A receive a notification containing the new link and reason code "Leak Containment"
Re-verification before re-allow
Given recipient R is on the deny list and policy requires email re-authentication on reinstatement When an admin initiates Move-to-Allow for R Then the system sends a one-time code to R's verified email And R must verify the code within 15 minutes And upon success, R moves to the allow list and a probation period P days begins And upon failure or timeout, R remains on the deny list and the attempt is logged with reason
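The re-verification flow above (one-time code, 15-minute window, failure leaves the recipient on the deny list) can be sketched as follows. The clock is injected so the window is testable; all names and the single-pending-code-per-recipient assumption are illustrative.

```python
import secrets

CODE_TTL = 15 * 60  # seconds; the 15-minute window from the criteria

_pending = {}  # recipient -> (code, issued_at); one outstanding code each

def issue_code(recipient, now):
    """Issue a fresh one-time code, replacing any outstanding one."""
    code = secrets.token_hex(3)
    _pending[recipient] = (code, now)
    return code

def verify_code(recipient, code, now):
    """True only if the code matches and arrived within the TTL.
    The pending entry is consumed either way (one attempt per code)."""
    entry = _pending.pop(recipient, None)
    if entry is None:
        return False
    expected, issued_at = entry
    return code == expected and (now - issued_at) <= CODE_TTL
```

On a True result the caller would move the recipient to the allow list and start the probation timer; on False the recipient stays on the deny list and the attempt is logged, per the criteria.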
Bulk list management with audit trail
Given an admin uploads a CSV containing N recipients with requested list changes and enters justification J When the bulk operation is processed Then each row is validated (identity exists, email format valid, target list is allowed) And changes are applied atomically per row without partial field updates And the system returns a summary with counts: processed=N, succeeded, failed, and per-row error reasons And an immutable audit log entry is created for each successful change with actor, justification J, before/after state, and timestamp And if justification J is missing, no changes are applied and the upload is rejected with HTTP 400
Time-bound probation rules
Given recipient R is on probation for X days with rule "no flagged anomalies" from per-recipient analytics When X days elapse with zero anomalies for R Then R is automatically moved to the allow list and notified And if any anomaly occurs during probation, R is immediately returned to the deny list, exports/replacements are blocked for R, and the probation window resets And all evaluations and transitions are recorded in the audit log with rule id and evidence
Impacted recipient determination from watermark and analytics
Given a leaked file with extracted watermark W mapping to recipient R1 and recipients R2 and R3 have access to the same scope S with different watermarks When the impacted-recipient determination job runs Then only R1 is marked impacted and added to the deny list for scope S And R2 and R3 remain on the allow list with no revocations And a machine-readable report is produced listing impacted recipients, evidence (watermark W, access logs), scope S, and timestamps
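The determination step above boils down to a join between the extracted watermark and per-recipient access records: exactly the recipients whose watermark matches are impacted, everyone else in scope stays clean. The record shape below is an assumption for illustration.

```python
def determine_impacted(extracted_watermark, access_records):
    """Classify recipients for a leak given the watermark pulled from the file.

    access_records: list of {'recipient', 'watermark', 'scope'} dicts.
    Returns a machine-readable report as the criteria require.
    """
    impacted = [r for r in access_records if r["watermark"] == extracted_watermark]
    clean = [r["recipient"] for r in access_records
             if r["watermark"] != extracted_watermark]
    return {
        "impacted": [r["recipient"] for r in impacted],  # -> deny list
        "clean": clean,                                  # stay on allow list
        "evidence": {"watermark": extracted_watermark,
                     "scopes": sorted({r["scope"] for r in impacted})},
    }
```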
Export Pause & Resume Controls
"As an indie artist, I want exports to be automatically paused on compromised assets so that no further copies leave the system until it’s safe."
Description

Controls to pause outbound exports and sharing actions on affected assets during containment, including download buttons, external share generation, and automated delivery jobs. Scope can be set at track, version, or drop level. Provides visible UI banners, API flags, and per‑asset locks, plus resume conditions (manual, time‑based, or rule‑driven) with safeguards to prevent premature reopening. Cancels or queues in‑flight jobs and surfaces status to owners.

Acceptance Criteria
Track Scope Pause Enforcement
Given a track-level pause is activated on Track T When any user attempts to download Track T, generate an external share link for Track T, or trigger an automated delivery job for Track T Then the action is blocked and no content is delivered And the relevant UI controls are disabled with a tooltip indicating "Exports paused (scope: track)" And API export/share endpoints for Track T respond with HTTP 423 Locked and a payload containing exportsPaused=true and pauseScope="track" And a per-asset lock indicator is visible on Track T And assets not in scope remain exportable
Version and Drop Scope Enforcement
Given a version-level pause is activated on Version V of Track T When an export/share/delivery is attempted for Version V Then the action is blocked and reported as paused (scope: version) And exports of other versions of Track T remain allowed unless explicitly included in the pause scope Given a drop-level pause is activated on Drop D When an export/share/delivery is attempted for any asset contained in Drop D (tracks, versions, artwork, stems, press kit) Then the action is blocked and reported as paused (scope: drop) And assets not belonging to Drop D remain exportable
UI Banners, API Flags, and Per-Asset Locks
Given an owner views the workspace while a pause is active Then a global banner is displayed that includes the scope (track/version/drop) and a human-readable reason for the pause And each affected asset displays a lock icon with tooltip "Exports paused" and the scope And the asset/detail API returns fields exportsPaused=true, pauseScope, pauseReason, resumeMode, and resumeAt (nullable) And an exports/status API endpoint returns counts of affected assets and current resume mode for each active pause And removing the pause immediately removes the banner, lock indicators, and sets exportsPaused=false in APIs
Manual Resume with Safeguards
Given a pause is active with resumeMode="manual" and safeguards enabled When an owner with permission attempts to resume Then the system validates that the associated containment incident status is "resolved" and all replacement review links have been issued to clean recipients And if any validation fails, the resume is blocked, the UI shows specific blocker reasons, API returns HTTP 412 Precondition Failed, and the pause remains active And if validations pass, the pause is lifted, per-asset locks are cleared, and queued jobs begin resuming within 5 minutes And an audit log entry records who resumed, when, scope, and validations passed/failed
Time-Based Auto-Resume
Given a pause is active with resumeMode="time-based" and resumeAt is set to a future ISO-8601 timestamp When the current time reaches resumeAt Then the system automatically lifts the pause within 5 minutes without manual action And queued jobs resume processing, while previously canceled jobs do not auto-restart And no resume occurs before resumeAt And an audit log entry records the automatic resume with timestamp and scope
Rule-Driven Auto-Resume Conditions
Given a pause is active with resumeMode="rule-driven" and rules require containmentResolved=true and no new leaks detected for the last 24 hours When the rules evaluate to true Then the system automatically lifts the pause within 5 minutes And if the rules revert to false before the resume occurs, the pause remains active and no resume is performed And an audit log entry records the rule evaluation and automatic resume And the API exposes the current rule evaluation state on the pause object
In-Flight Export Handling and Visibility
Given a pause is activated while export/share/delivery jobs are in-flight When jobs are in state queued or scheduled Then they are moved to a paused queue and will automatically resume when the pause is lifted When jobs are in state preparing or packaging Then they are canceled with reason="paused" and no artifacts are delivered to recipients When jobs are actively transferring Then the system attempts to abort transfer; if abort succeeds the job is canceled with reason="paused", otherwise it is marked paused and will retry after resume without partial delivery And owners can view paused/canceled job counts and reasons in the UI and via API, scoped to the affected assets
Stakeholder & Recipient Notifications
"As a label coordinator, I want clear notifications with new access links sent to the right people so that everyone stays informed and work can continue smoothly."
Description

Templated, localized notifications to internal stakeholders and external recipients that communicate revocations, replacements, and export pauses. Supports in‑app alerts, email, and webhooks with per‑recipient context (new link, expiry, reason code) and rate limiting. Integrates with allow/deny lists to avoid sending replacements to quarantined recipients. Tracks delivery, opens, and retries, and exposes settings per project for tone and timing.

Acceptance Criteria
Localized Revocation Notices on Leak Confirmation
Given a confirmed leak event for a track version and project default locale "es-ES" And recipient A has locale "fr-FR" and email and in-app channels enabled When revocation notifications are generated Then A’s email and in-app alert use the "fr-FR" template And if "fr-FR" is unavailable, they fall back to the project default "es-ES" And the rendered content includes asset_title, scope, reason_code, and effective_at values And no unresolved template tokens remain
Replacement Link Notification Includes Per-Recipient Context
Given a replacement link is issued for a clean recipient after a full-drop leak And the replacement link has an expiry and a unique previous_link_id When the notification is composed Then the message includes new_link_url, expiry (ISO 8601), reason_code, previous_link_id, and recipient_id And the replacement link is watermarked for the recipient_id And the notification is sent via each enabled channel respecting recipient preferences
Allow/Deny List Suppression for Replacements
Given recipient R is on the project's deny (quarantine) list When replacements are generated after a leak Then no replacement notifications are sent to R via email, in-app, or webhook And an audit log entry records suppression with reason "deny_list" And all recipients not on the deny list receive the replacement notifications And the total sent count equals the number of allowed recipients
Per-Channel Rate Limiting Enforcement
Given project rate_limit_per_channel_per_recipient is set to 2 per 10 minutes And 5 notifications are triggered for recipient B within 10 minutes When queuing email, in-app, and webhook notifications Then at most 2 emails, 2 in-app alerts, and 2 webhooks are dispatched within the 10-minute window And excess notifications are deferred until the window resets And a throttled_count metric is recorded for the deferred events
Webhook Delivery Retries and Status Tracking
Given a webhook endpoint responds 500 on the first two attempts and 200 on the third And retry policy is max_attempts=5 with exponential backoff starting at 30s When sending the webhook notification Then the system retries and succeeds on attempt 3 And delivery status transitions queued -> retrying -> delivered And attempt_count=3 and next_attempt_at=null after success Given a webhook endpoint responds 500 for all 5 attempts When sending the webhook notification Then status is marked failed after 5 attempts And an internal stakeholder alert is emitted for the failure
Project Tone and Timing Overrides Applied by Audience
Given the project config sets recipient_tone="firm" with send_delay=15m and stakeholder_tone="neutral" with send_delay=0 When a revocation and replacement are triggered Then recipient notifications use the "firm" templates and are scheduled 15 minutes after the event And stakeholder notifications use the "neutral" templates and are sent immediately And changing these settings affects future notifications only and leaves already scheduled sends unchanged
Export Pause Notifications and Messaging
Given AutoRecall activates an export pause for project P with reason_code "leak_detected" And stakeholders and recipients have preferred channels configured When the pause is activated Then stakeholders receive immediate in-app and email notifications including scope, reason_code, and optional review_eta And external recipients receive localized notifications that exports/downloads are temporarily disabled without revealing internal detection details And active review links display a non-dismissable banner and disable export/download actions until the pause is lifted And when the pause is lifted, resume notifications are sent within 1 minute and export/download actions are re-enabled And recipients on the deny list do not receive resume notifications
Containment Audit & Analytics
"As a product owner, I want a complete audit and metrics on containment actions so that we can prove compliance and continuously improve our leak response."
Description

An immutable audit trail of containment events (confirmation, rules executed, links revoked/replaced, exports paused/resumed, notifications sent) with timestamps, actors, scopes, and evidence attachments. Dashboards report time‑to‑containment, links affected, recipient impact, and recurrence by project. Exposes exportable logs and APIs for compliance and post‑mortems, and integrates with IndieVault’s analytics to attribute leak sources and measure prevention effectiveness.

Acceptance Criteria
Immutable Audit Trail for Containment Events
Given a leak is confirmed for a project and AutoRecall executes containment rules When each containment action occurs (confirmation, rule_executed, link_revoked, link_replaced, exports_paused, exports_resumed, notification_sent) Then an audit event is appended capturing: event_id (UUIDv4), project_id, actor_id or "system", actor_type, event_type, scope (track|version|drop), affected_resource_ids, timestamp (ISO 8601 UTC), rule_id (nullable), correlation_id, evidence_ids (nullable) Given the audit store is append-only When a client attempts to update or delete an audit event Then the write is rejected with HTTP 409 and a new tamper_attempt audit event is appended referencing the original event_id Given simultaneous containment actions by multiple actors When events are read Then results are returned in a stable total order by (timestamp, event_id) and are idempotent across repeated reads Given a project_id and time range When querying the audit log via UI or API Then the system returns all matching events within 2 seconds for up to 10,000 events, with pagination cursors provided beyond that
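The append-only behavior above (mutations rejected and themselves audited as tamper_attempt events; reads in a stable total order by (timestamp, event_id)) can be sketched with an in-memory log. Field coverage is trimmed for brevity and the class is an illustrative assumption, not the real store.

```python
import uuid
from datetime import datetime, timezone

class AuditLog:
    def __init__(self):
        self._events = []  # append-only list; never updated or deleted

    def append(self, event_type, project_id, actor_id="system", **fields):
        event = {"event_id": str(uuid.uuid4()), "project_id": project_id,
                 "actor_id": actor_id, "event_type": event_type,
                 "timestamp": datetime.now(timezone.utc).isoformat(), **fields}
        self._events.append(event)
        return event

    def mutate(self, event_id, **changes):
        # Append-only: record the tamper attempt, then refuse the write.
        self.append("tamper_attempt", "system", original_event_id=event_id)
        raise PermissionError("audit events are immutable (HTTP 409)")

    def read(self, project_id):
        """Stable total order by (timestamp, event_id), per the criteria."""
        matching = [e for e in self._events if e["project_id"] == project_id]
        return sorted(matching, key=lambda e: (e["timestamp"], e["event_id"]))
```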
Evidence Attachment Management
Given an authorized user (Owner or Auditor) attaches evidence to a containment event When the upload completes Then the attachment is virus-scanned, encrypted at rest, recorded with attachment_id, sha256 checksum, mime_type, size, and linked to the event; unsupported types return HTTP 415
Given evidence is linked to an event When a user without permission requests the file Then access is denied with HTTP 403 and the attempt is logged as attachment_access_denied
Given evidence needs correction When a new attachment supersedes an existing one with a redaction_reason Then the original remains immutable; the event reflects a new evidence_superseded entry referencing both attachment_ids
Given an evidence download occurs When the file is served via a signed URL Then the URL expires within 24 hours and the download is logged as attachment_downloaded with requester_id and timestamp
Dashboard KPIs for Containment Performance
Given an incident has leak_confirmed_at and containment_resolved_at timestamps When the dashboard loads Then time_to_containment = containment_resolved_at - leak_confirmed_at is displayed per incident and aggregated (median, p90) for the selected period
Given containment events affect links and recipients When viewing the dashboard Then totals for links_affected, recipients_impacted, assets_in_scope are displayed and filterable by project, date range, and scope; metrics refresh within 5 minutes of new events
Given a user drills into a KPI When clicking the metric Then an incident detail view shows the ordered event timeline with actors and scopes
Given insufficient permissions When a user opens the dashboard Then KPIs and details are hidden and a permission error is shown; no sensitive counts are leaked
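The KPI math above is small enough to sketch directly. This assumes ISO 8601 timestamps and the nearest-rank percentile convention for p90; the function names are illustrative, not IndieVault's API.

```python
import math
from datetime import datetime
from statistics import median


def time_to_containment(leak_confirmed_at: str, containment_resolved_at: str) -> float:
    """Seconds between leak confirmation and containment resolution."""
    t0 = datetime.fromisoformat(leak_confirmed_at)
    t1 = datetime.fromisoformat(containment_resolved_at)
    return (t1 - t0).total_seconds()


def p90(values):
    """Nearest-rank 90th percentile (one common convention)."""
    ranked = sorted(values)
    return ranked[math.ceil(0.9 * len(ranked)) - 1]


def aggregate(ttcs):
    """Period-level aggregates shown on the dashboard."""
    return {"median": median(ttcs), "p90": p90(ttcs)}
```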
Exportable Logs and Compliance API
Given a user with audit.export permission requests an export for a project and date range When the export is generated Then CSV and JSONL files are produced with the documented schema and UTC timestamps, downloadable via signed URLs that expire in 24 hours
Given large datasets When exporting or calling the audit API Then results are paginated with a next_cursor; the API supports a limit of up to 10,000; HTTP 206 indicates a partial page; a rate limit of 60 req/min is enforced with Retry-After on 429
Given API authentication When calling /api/v1/audit/events Then requests must include an OAuth2 token with scope audit.read; responses return 200 on success, or 400/401/403/429/500 with machine-readable error codes and a trace_id
Given parity with the UI When comparing dashboard counts and exported/API counts for the same filters Then totals match within the same UTC day and support deterministic re-runs via idempotency_key
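A client consuming this endpoint would loop on next_cursor and honor Retry-After on 429. A minimal sketch of that loop; `fetch_page` is a hypothetical stand-in for the HTTP call (returning a status code and parsed payload), so this illustrates the pagination contract rather than a real SDK.

```python
import time


def iter_audit_events(fetch_page, limit=1000, sleep=time.sleep):
    """Drain a cursor-paginated audit endpoint, honoring 429 Retry-After.

    fetch_page(cursor, limit) stands in for the HTTP request and returns
    (status, payload), where payload carries "events", "next_cursor",
    and on 429 a "retry_after" value in seconds.
    """
    cursor = None
    while True:
        status, payload = fetch_page(cursor, limit)
        if status == 429:
            # Back off for the server-advised interval, then retry same page.
            sleep(payload.get("retry_after", 1))
            continue
        if status != 200:
            raise RuntimeError(f"audit API error {status}")
        yield from payload["events"]
        cursor = payload.get("next_cursor")
        if cursor is None:
            return
```

Injecting `sleep` keeps the retry path testable without real waiting.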
Analytics Integration for Leak Source Attribution
Given a review link is confirmed as leaked When containment is initiated Then the system correlates watermark and access logs to attribute a likely source at recipient or cohort level with a confidence score (0–1) and displays attribution in the incident detail
Given attribution is inconclusive When confidence < 0.5 or signals conflict Then the source displays as unknown and the incident records attribution_status = "inconclusive"
Given attribution exists When viewing prevention effectiveness Then the dashboard shows pre- vs post-containment unauthorized access counts for 7 days and a percent change metric per incident and aggregated per project
Recurrence and Trend Reporting by Project
Given 90 days of audit history When viewing trends Then recurrence_rate = incidents_last_90d / releases_last_90d is displayed per project with a weekly time series and top recurring scopes and rules by count
Given an alert threshold is set for a project When recurrence_rate exceeds the threshold Then an alert notification is sent to project owners and logged as notification_sent with threshold and current rate
Given the trends view is filtered When filters change (date range, project, scope) Then charts and tables update within 5 seconds and export of the current view is available as CSV

Mark Tuner

Choose the right balance of watermark robustness and transparency with presets for Internal, Press, and Legal. Run a quick resilience report across common codec targets and listening scenarios, then lock settings to your Delivery Profile. Benefit: tailor protection to the moment—extra stealth for critics, extra bite for high‑risk drops.

Requirements

Preset Profiles with Robustness–Transparency Tuning
"As an indie label manager, I want to choose and fine‑tune a watermark preset so that I can balance leak risk and audio quality for each audience."
Description

Provide first-class presets (Internal, Press, Legal) for watermark configuration, exposing a single Mark Tuner control for quick balance plus an advanced panel (embed strength, masking depth, spread, insertion rate). Allow cloning to custom presets and saving to a release’s Delivery Profile. Surface guardrails and contextual tips to prevent over/under‑protection. Persist selections at the release level and per-asset overrides when needed. Ensures fast, consistent setup that aligns protection with audience expectations while minimizing audible impact.

Acceptance Criteria
Resilience Report Across Codecs & Listening Scenarios
"As an artist, I want a resilience report for my chosen settings so that I know the watermark will survive common sharing paths without audible degradation."
Description

Generate a quick resilience report that simulates common distribution/transcode paths (e.g., AAC LC 256, MP3 320/128, Opus 128, Ogg q5, platform streaming chains, messaging-app recompress) and playback scenarios (mono fold‑down, loudness normalization, noisy/mobile). Produce a per-track score with detection probability, false‑negative risk, and artifact deltas (LUFS, spectral error). Flag pass/fail against target thresholds per preset and recommend Mark Tuner adjustments. Integrate results into the release dashboard, allow export (PDF/JSON), and store snapshots for auditability.

Acceptance Criteria
Delivery Profile Lock & Governance Controls
"As a manager, I want to lock watermark settings for a release so that teammates can’t accidentally send unprotected or overly aggressive files."
Description

Enable locking of Mark Tuner settings within a Delivery Profile to enforce consistent watermarking across all outgoing assets. Provide RBAC (Owner/Manager/Collaborator), approval workflows for changes, and an immutable audit log (who/what/when/why) for preset edits and overrides. Display lock status in the UI, block ad‑hoc changes during link generation, and require justification when temporarily bypassing with time‑boxed exceptions. Ensures compliance, reduces misconfiguration, and aligns teams on approved protection levels.

Acceptance Criteria
Per‑Recipient Application & Analytics Tagging
"As a publicist, I want each review link to carry the right watermark and a unique tag so that I can tailor transparency and trace any leak to its source."
Description

Automatically apply the appropriate preset when creating expiring review links based on recipient type (Internal/Press/Legal) or selected custom profile. Embed a unique forensic tag per recipient and record the applied parameters. Surface preset usage and tag IDs in per‑recipient analytics (opens, streams, downloads) and leak forensics. Allow safe overrides with permission checks and capture rationale. Delivers tailored listening experiences while preserving traceability and deterrence.

Acceptance Criteria
Batch Watermark Pipeline for Release Folders
"As an artist who ships weekly, I want to watermark an entire release in one run so that I save time and avoid mistakes."
Description

Provide bulk processing to apply selected Mark Tuner settings across all assets in a release (tracks, instrumentals, stems). Implement a queued, resumable pipeline with parallel workers, integrity checks (hash before/after), metadata preservation, deterministic output naming, and non‑destructive writes to release‑ready folders. Expose progress, failure retries, and rollback to previous versions. Accelerates weekly shipping while reducing manual errors.

Acceptance Criteria
Audio Quality Safeguards & A/B Preview
"As a mixing engineer, I want to preview and validate watermark impact so that I can ensure transparency before delivering assets."
Description

Offer real‑time, level‑matched A/B preview of original vs. watermarked audio with visual diffs (spectrum/waveform delta) and objective checks (loudness shift, noise floor, crest factor). Define tolerance thresholds per preset and auto‑suggest lighter settings if artifacts exceed limits. Include headphone/speaker toggles and quick jump to problem passages. Save preview snapshots with settings for later reference. Ensures transparency targets are met before sending.

Acceptance Criteria

Forensic Dossier

One‑click evidence pack combining the watermark decode report, ProofChain manifest, relevant link analytics, and a time‑stamped chain of custody. Export a DMCA‑ready PDF and a sharable verification link for partners and platforms. Benefit: accelerate takedowns and resolve disputes with clear, verifiable proof—no manual compiling.

Requirements

One-click Dossier Assembly
"As an indie label manager, I want to generate a complete evidence dossier with one click so that I can move quickly on takedowns without manually compiling artifacts from multiple tools."
Description

Provide a single action from any asset, release, or incident to automatically compile a complete evidence pack consisting of the latest watermark decode report(s), the ProofChain manifest, relevant per-recipient link analytics, and a time-stamped chain of custody. The assembly runs as an idempotent background job with progress states, retries, and notifications, and supports scoping (single file, release, or batch), time windows, and recipient filters. The output is a versioned dossier object stored in IndieVault with immutable content-addressed artifacts and metadata, enabling quick regeneration, comparison across versions, and attachment to takedown tickets without manual document gathering.

Acceptance Criteria
Watermark Decode Integration
"As an artist, I want the dossier to include a verified watermark decode so that I can prove which recipient copy leaked and on what date."
Description

Automatically run or attach the latest watermark decode for selected audio/image assets and include payload details (recipient ID, embed time, encoder config), confidence scores, and verification steps. Support multiple encoder/decoder backends with a normalized schema and caching keyed by file hash to avoid redundant work. Handle protected or encrypted assets with safe fallbacks and clearly mark unknown or outdated decodes. Package both machine-readable JSON and a human-readable summary section for inclusion in the dossier.

Acceptance Criteria
ProofChain Manifest Builder
"As a platform compliance reviewer, I want a signed integrity manifest so that I can independently verify that the evidence has not been tampered with."
Description

Generate and attach a cryptographic manifest capturing per-file content hashes, Merkle roots for release bundles, signer identity, and trusted timestamps (RFC 3161) with optional external anchors (e.g., OpenTimestamps). Detect drift between current assets and prior manifests and record lineage across versions. Provide clear verification instructions and embed signatures so third parties can independently validate integrity without accessing IndieVault.
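The Merkle-root piece of the manifest fits in a few lines. This sketch assumes a binary tree in which an odd trailing node is promoted to the next level unchanged, which is one common convention (others duplicate it); it is not necessarily what ProofChain does.

```python
import hashlib


def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaf_hashes):
    """Binary Merkle root over per-file content hashes.

    Pairs adjacent hashes level by level; an odd trailing node is
    carried up unchanged. Any change to any leaf changes the root.
    """
    if not leaf_hashes:
        raise ValueError("empty bundle")
    level = list(leaf_hashes)
    while len(level) > 1:
        nxt = [sha256(level[i] + level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]
```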

Acceptance Criteria
Link Analytics Correlation Engine
"As a manager, I want correlated link analytics in the dossier so that I can demonstrate suspicious activity tied to a specific recipient or link."
Description

Aggregate and correlate per-recipient link analytics relevant to the dossier scope and timeframe, including link creation, opens, plays/downloads, device/geo, IP hash, and token IDs. Map events to specific assets via link tokens and file fingerprints, highlight anomalies (e.g., excessive downloads, atypical geos), and compute a concise summary with an appendix of redacted raw events. Respect privacy configurations, apply selective redaction, and document methodology for transparency.

Acceptance Criteria
Chain of Custody Timeline
"As a rights holder, I want a verifiable chain of custody so that I can show an unbroken history of how and when the asset was accessed and shared."
Description

Assemble a chronological, time-normalized timeline of asset handling events—ingest, edits, approvals, exports, link shares, accesses, watermark decodes, and dossier generation—each recorded with actor, source system, timestamp, and file hash snapshot. Store as an append-only audit trail and include tamper-evidence via hash chaining. Present both a readable timeline and machine-readable export to substantiate continuous control and transfer history.
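Hash chaining for tamper evidence works by having each entry commit to its predecessor, so editing any historical event invalidates every later hash. A minimal sketch; the field names are illustrative, and a production trail would sign entries in addition to chaining them.

```python
import hashlib
import json


def chain_event(prev_hash: str, event: dict) -> dict:
    """Append a custody entry that commits to the previous entry's hash."""
    body = json.dumps(event, sort_keys=True)  # canonical serialization
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    return {**event, "prev_hash": prev_hash, "entry_hash": entry_hash}


def verify_chain(entries, genesis="0" * 64) -> bool:
    """Recompute every link; any edited field or reordering breaks it."""
    prev = genesis
    for e in entries:
        body = {k: v for k, v in e.items()
                if k not in ("prev_hash", "entry_hash")}
        expected = hashlib.sha256(
            (prev + json.dumps(body, sort_keys=True)).encode()
        ).hexdigest()
        if e["prev_hash"] != prev or e["entry_hash"] != expected:
            return False
        prev = e["entry_hash"]
    return True
```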

Acceptance Criteria
DMCA-ready PDF Export with Digital Signing
"As a label operations lead, I want a digitally signed, DMCA-compliant PDF export so that I can file takedowns quickly with documentation that platforms accept."
Description

Render a jurisdiction-aware, DMCA-ready PDF that includes an executive summary, evidence sections (watermark report, ProofChain manifest, analytics, chain of custody), exhibits, and an annex of raw JSON. Auto-populate rights holder and contact details from organization settings, and apply a PAdES-compliant digital signature with embedded hash and trusted timestamp. Ensure accessibility tagging, localized templates (EN initial with i18n hooks), and size optimization for email submission. Allow optional redaction of PII and branding removal for neutral submissions.

Acceptance Criteria
Shareable Verification Link with Access Controls
"As a platform trust & safety analyst, I want a secure verification link so that I can validate the evidence quickly without handling large attachments or sensitive raw data."
Description

Create a tamper-evident, expiring verification URL that presents a read-only web view of the dossier with selective disclosure (hide/show PII, raw logs). Support access modes (public token, allowlist, authenticated partners), viewer watermarking, expiration and revoke, and detailed view/download audit logs. Serve an immutable snapshot referenced by a content-addressed ID so recipients can independently verify the evidence without downloading the full pack. Provide API endpoints for partner ingestion.

Acceptance Criteria

Cohort Overlays

Overlay per‑recipient heatmaps by audience cohort (press, collaborators, early fans, internal) to spot where different groups skip, replay, or drop off. Filter by link type, date range, or asset version to tailor edits and outreach. Benefit: focus revisions and follow‑ups based on who actually struggles where, not on averages.

Requirements

Cohort Directory & Auto‑Tagging
"As an indie manager, I want recipients to be accurately grouped into cohorts automatically so that overlay analytics reflect real audience segments without manual cleanup."
Description

Provide first‑class cohort management so recipients can be grouped into press, collaborators, early fans, and internal, with the ability to add custom cohorts. Support bulk import, manual assignment, and auto‑tagging rules (e.g., email domain, invite source, contact role, link label). Persist cohort membership at the contact level and allow per‑link overrides. Backfill cohorts for existing contacts and review links. Integrate with IndieVault’s contact book and link generator so cohorts are attached to watermarkable, expiring review links at creation time. Expose an API and CSV import to sync cohorts from external CRMs. Maintain change history and allow safe merges/splits without breaking historical analytics by snapshotting cohort membership at event time.

Acceptance Criteria
Playback Event Instrumentation & Heatmap Engine
"As a product user, I want accurate per‑recipient heatmaps to power cohort overlays so that I can trust where listeners skip, replay, or drop off."
Description

Capture high‑fidelity playback telemetry for review links and internal players, including play, pause, scrub, skip, replay, and complete events with wall‑clock and media timestamps. Attribute each event to recipient token, cohort, link type, asset ID, asset version, and session/device. Buffer and retry to handle offline use and clock skew; deduplicate events idempotently. Store events in an analytics store optimized for time‑series aggregation. Generate per‑recipient heatmaps with configurable time bins and smoothing, then aggregate into cohort overlays on demand or via scheduled materializations. Ensure events are bound to watermark identifiers to prevent spoofing and support leak forensics. Provide data quality checks, P95 latency targets for visualization (<2s for typical tracks), and backfill jobs for historical links.
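The core binning-and-smoothing step behind a heatmap can be sketched as below. Fixed-width bins, a simple moving-average window, and treating events as raw media timestamps in seconds are all simplifying assumptions for illustration.

```python
def heatmap(event_positions, duration_s, bin_s=1.0, smooth=1):
    """Bin playback positions into fixed-width time bins, then apply a
    moving-average smoothing window of width 2*smooth + 1."""
    n = int(duration_s // bin_s) + 1
    bins = [0.0] * n
    for pos in event_positions:  # media timestamps in seconds
        if 0 <= pos <= duration_s:
            bins[int(pos // bin_s)] += 1
    if smooth:
        out = []
        for i in range(n):
            lo, hi = max(0, i - smooth), min(n, i + smooth + 1)
            out.append(sum(bins[lo:hi]) / (hi - lo))  # window shrinks at edges
        bins = out
    return bins
```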

Acceptance Criteria
Cohort Overlay Visualization UI
"As an artist, I want to visually compare how different cohorts interact with my track so that I can spot where each group drops off or replays."
Description

Deliver an interactive visualization that overlays cohort heatmaps on a timeline for any asset or release folder. Support multiple visualization modes: stacked bands, differential (delta) view, and single‑cohort focus. Include legends, color‑blind‑safe palettes, and hover tooltips with metrics (play‑through %, skip rate, replay hotspots) per cohort and timestamp. Provide normalization options (absolute counts vs. cohort‑normalized percentages) and confidence indicators for low‑sample regions. Enable quick cohort toggles and compare up to four cohorts concurrently. Ensure responsive design, keyboard navigation, and screen‑reader labels. Persist view state and sync with global filters.

Acceptance Criteria
Filter Bar: Link Type, Date Range, and Asset Version
"As a label assistant, I want to filter overlays by link type, date range, and asset version so that my insights match the specific campaign and mix I’m evaluating."
Description

Add a global filter bar to constrain overlays by link type (review, internal, fan preview), date range (absolute and relative presets), and asset version (mix/master iterations). Support multi‑select link types, open/close date filters, and version lineage awareness when releases have version trees. Filters must apply consistently across the visualization, metrics, and exports. Persist user preferences, allow deep‑linked URLs that encode filter state, and expose preset saves (e.g., “Press last 7 days on v3”). Ensure filters integrate with release‑ready folders so users can pivot at track, asset bundle, or entire release levels.

Acceptance Criteria
Permissions & Privacy Guardrails for Per‑Recipient Analytics
"As a team admin, I want strict privacy and access controls around per‑recipient data so that insights are useful without exposing sensitive information."
Description

Enforce role‑based access so only owners and permitted collaborators can view per‑recipient analytics; others see cohort aggregates only. Mask personally identifiable information by default outside the owning team and apply k‑anonymity thresholds to prevent deanonymization in small cohorts. Respect recipient opt‑outs and legal requirements (e.g., GDPR/CCPA), including data deletion and consent tracking. Log access to sensitive views, and propagate permissions via link sharing and folder ACLs. Provide admin controls for retention windows and aggregation thresholds. Ensure snapshot shares contain only the authorized aggregate data and expire per policy.

Acceptance Criteria
Insight Annotations & Shareable Snapshots
"As a producer, I want to annotate cohort hotspots and share a snapshot with my team so that everyone aligns on what to edit and who to follow up with."
Description

Allow users to annotate hotspots on the timeline (e.g., “press drops at 0:28 intro length”) with cohort context and attach tasks or follow‑ups. Enable creation of secure, expiring snapshot links or exports (image/PDF) that preserve the current cohorts and filters without revealing recipient identities beyond permissions. Support commenting and @mentions for internal collaboration, and store annotations alongside the asset/version so they remain when mixes update. Provide change tracking and the ability to compare snapshots across versions to inform edits and outreach.

Acceptance Criteria

AB Cut Test

Run randomized A/B tests on alternate edits (e.g., longer intro vs punch‑in chorus) to the same cohort. IndieVault splits traffic fairly, compares retention, skips, and completion rates, and calls a statistically confident winner. One click promotes the winner to your active review link set. Benefit: choose the cut that keeps listeners engaged, backed by real behavior.

Requirements

Variant Management & Normalization
"As an indie artist or manager, I want to upload and manage alternate cuts of the same track as variants so that I can A/B test them without disrupting my asset organization and ensure apples-to-apples comparisons."
Description

Enable uploading and managing multiple alternate cuts for a single track under a shared canonical asset, preserving existing IndieVault versioning and release folder structure. Validate technical consistency (sample rate, channels, duration bounds) and auto-normalize loudness to a target (e.g., integrated LUFS) to ensure fair comparisons. Ingest waveforms and fingerprints for each variant, deduplicate near-identical uploads, and maintain metadata parity (ISRC placeholder, notes, mix engineer). Ensure watermarking seeds remain stable per recipient across variants, and provide audit logs for create/edit/delete actions. Integrate with existing storage, access controls, and trash/restore flows.
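The loudness-normalization step reduces to a gain offset, because LUFS is a logarithmic (dB-like) scale. A minimal sketch of the arithmetic; measuring integrated LUFS itself (per ITU-R BS.1770) is out of scope here, and the -14 LUFS default is an illustrative choice, not IndieVault's documented target.

```python
def lufs_gain(measured_lufs: float, target_lufs: float = -14.0):
    """Return (gain_db, linear_factor) to move integrated loudness to
    the target: the LU offset equals the dB gain, and the linear
    amplitude factor follows from gain_db = 20 * log10(factor)."""
    gain_db = target_lufs - measured_lufs
    return gain_db, 10 ** (gain_db / 20)
```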

Acceptance Criteria
Cohort Definition & Fair Randomization
"As a manager, I want to define my recipient cohort and split traffic fairly across variants so that test results are unbiased and replicable."
Description

Allow creators to define a test cohort from existing contacts and review link recipients using tags, lists, and inclusion/exclusion rules. Provide n-way traffic allocation (default 50/50) with persistent per-recipient assignment via a deterministic hash of recipient ID to prevent cross-variant contamination. Support stratified randomization (e.g., platform, territory, role) and caps on maximum exposures per recipient. Include pause/resume and rebalancing controls, seeding for reproducibility, and safeguards to keep excluded VIPs or internal users out of results. Log assignment decisions for auditability without storing plaintext PII.
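Persistent assignment via a deterministic hash is typically done by hashing (test_id, recipient_id) into the unit interval and mapping onto cumulative weights. A sketch under that assumption; the function and parameter names are hypothetical.

```python
import hashlib


def assign_variant(recipient_id: str, test_id: str, weights):
    """Deterministic n-way split. weights: list of (variant, weight).

    The same recipient always lands in the same variant for a given
    test (no cross-variant contamination); a different test_id
    reshuffles assignments independently.
    """
    digest = hashlib.sha256(f"{test_id}:{recipient_id}".encode()).digest()
    u = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    total = sum(w for _, w in weights)
    acc = 0.0
    for variant, w in weights:
        acc += w / total
        if u < acc:
            return variant
    return weights[-1][0]  # guard against float rounding
```

Seeding reproducibility falls out for free: the assignment is a pure function of (test_id, recipient_id, weights), so it can be re-derived for audit without storing plaintext PII (only the hash input policy).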

Acceptance Criteria
Instrumented Player Events & Data Pipeline
"As an artist, I want accurate, privacy-conscious playback analytics across variants so that I can compare real listener engagement with confidence."
Description

Extend the review player to capture standardized engagement events (play_start, first_10s/30s, seek, skip, pause, resume, completion) and per-second retention curves for each variant. Batch and transmit events reliably with retries and backoff, de-duplicate on the server, and attribute sessions to recipients while obfuscating PII (hashing, IP truncation). Record device, referrer, and campaign parameters (UTM) where available. Normalize timestamps to UTC, handle partial offline playback, and store events in a queryable analytics warehouse with schemas keyed by test, variant, and cohort. Expose a secure internal API for analytics reads.

Acceptance Criteria
Statistical Engine & Winner Calling
"As a manager, I want the system to call a statistically confident winner between cuts so that I can choose the more engaging edit without manual number crunching."
Description

Implement a statistical decisioning service that evaluates variant metrics (primary: retention/skip/completion rates; secondary: time to first skip, average listen time) and determines when a winner can be confidently called. Support minimum sample sizes, configurable confidence thresholds (e.g., 95%), and sequential monitoring with error control to avoid peeking bias. Offer both frequentist (two-proportion tests) and Bayesian modes behind a feature flag. Define stop rules (max test duration, exposure caps) and tie-breaking logic. Persist interim analyses, expose rationale for decisions, and lock the test upon winner call while archiving full results for audit.
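The frequentist path can be sketched with a pooled two-proportion z-test. The fixed threshold and naive stopping shown here are simplifications of the sequential monitoring with error control described above; names are illustrative.

```python
import math


def two_proportion_z(success_a, n_a, success_b, n_b):
    """Pooled two-proportion z-test (e.g. completion rates).
    Returns (z, two-sided p-value via the normal CDF)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p


def call_winner(success_a, n_a, success_b, n_b, alpha=0.05, min_n=100):
    """Return "A"/"B" when significant, else None (keep collecting)."""
    if min(n_a, n_b) < min_n:
        return None  # below minimum sample size
    z, p = two_proportion_z(success_a, n_a, success_b, n_b)
    if p >= alpha:
        return None
    return "A" if z > 0 else "B"
```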

Acceptance Criteria
Review Link Integration & One‑Click Promotion
"As a manager, I want to promote the winning cut to my active review links in one click so that reviewers automatically hear the best version without re-sending links."
Description

Integrate A/B testing transparently into existing watermarkable, expiring review links. Ensure recipients see a single stable variant assignment per test and that watermarking keys and permissions propagate correctly. After a winner is called, allow one click to promote the winning variant to the active review link set, updating future serves while preserving historical analytics and not breaking existing URLs. Provide rollback, confirmation, and audit trails, and respect link expirations, access lists, and rate limits. Ensure embed and mobile views behave consistently.

Acceptance Criteria
A/B Test Dashboard & Reporting
"As an artist, I want a clear dashboard to track my test and share results so that I can make quick, informed decisions and align my team."
Description

Provide a dedicated dashboard to create, monitor, and analyze A/B tests, showing variant performance side-by-side with retention curves, skips, completion rates, and confidence indicators. Include cohort filters (tags, geography, device), date ranges, and segment comparisons. Surface test health (sample size progress, imbalance alerts, event latency), annotations (release dates, promo pushes), and export options (CSV, shareable read-only link). Ensure accessibility, responsive layout, and role-based visibility, with consistent visual language across IndieVault analytics.

Acceptance Criteria
Notifications, Guardrails & Permissions
"As a project owner, I want alerts and safety checks around my tests so that I avoid bad setups and prevent over-exposing unreleased material."
Description

Send in-app and email notifications for key milestones (test launched, min sample reached, winner called, anomalies detected). Implement pre-flight checks (variant loudness mismatch, missing metadata, inadequate cohort size), automatic caps (max days live, max exposures), and anomaly detection for data integrity. Enforce role-based permissions so only project owners/managers can create or promote tests, with reviewer-level visibility restricted to assigned variants. Log all actions for compliance and provide admin tools for emergency test termination.

Acceptance Criteria

Section Map

Auto‑detects sections (intro, verse, pre, chorus, bridge, outro) and aligns heatmaps to musical structure. Surfaces hook strength, chorus entry drop‑offs, and section‑level completion deltas across cohorts. Benefit: turn raw playback traces into actionable mix/edit notes tied to recognizable song parts.

Requirements

Automatic Section Detection Engine
"As an indie artist, I want my tracks auto-labeled into musical sections so that I can quickly understand structure without manual markup."
Description

Implement an on-ingest audio analysis pipeline that automatically detects and timestamps musical sections (intro, verse, pre-chorus, chorus, bridge, outro) with confidence scores. Use beat tracking, spectral novelty, and structure segmentation models tuned for common indie genres. Persist section labels as versioned asset metadata linked to each mix/master variant, with deterministic re-processing on new uploads. Provide fallbacks when confidence is low and expose detection parameters via internal API for retraining and tuning. Ensure processing scales for bulk uploads and completes within acceptable latency for weekly release cycles, emitting events to downstream analytics once sections are available.

Acceptance Criteria
Manual Section Editing & Versioned Overrides
"As a producer, I want to fine-tune the detected section boundaries so that the analytics reflect the song as I intend it to be heard."
Description

Provide an interactive editor to review, add, split, merge, and relabel detected sections on a waveform/timeline. Support the canonical taxonomy plus custom labels, drag-to-adjust boundaries with snap-to-beat, and keyboard shortcuts. Store edits as immutable, time-stamped overrides tied to the asset version, with the ability to lock overrides so future auto-detection does not overwrite them. Maintain an audit trail and support revert/compare between auto-detected and manual maps. Propagate finalized section maps to derived assets (stems, alt mixes) and to review links to keep downstream analytics consistent.

Acceptance Criteria
Analytics Alignment to Musical Structure
"As a manager, I want our playback data aligned to song sections so that I can read heatmaps in the context of the musical structure."
Description

Align existing playback telemetry (plays, seeks, skips, replays, completions) and heatmaps to the detected section map, binning events by section and entry/exit transitions. Compute core per-section KPIs such as playthrough rate, skip-at-entry, time-in-section, and repeat density. Normalize metrics for track length and audience size, and support backfill of historical events when a section map is added later. Expose aligned metrics via API and render overlays in the analytics UI with consistent color-coding per section. Ensure idempotent, scalable data pipelines and handle reprocessing when section maps are edited or versions change.
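Binning events by section reduces to a sorted-boundary lookup. A minimal sketch assuming the section map is a sorted list of (start_seconds, label) pairs; the helper names are hypothetical.

```python
from bisect import bisect_right


def section_of(section_map, t):
    """section_map: sorted [(start_s, label), ...]; t: media timestamp.
    Returns the label of the section containing t, or None if t < 0."""
    starts = [s for s, _ in section_map]
    i = bisect_right(starts, t) - 1
    return section_map[i][1] if i >= 0 else None


def bin_by_section(section_map, event_times):
    """Count events per section; the basis for per-section KPIs."""
    counts = {label: 0 for _, label in section_map}
    for t in event_times:
        label = section_of(section_map, t)
        if label is not None:
            counts[label] += 1
    return counts
```

Because the lookup is a pure function of the section map, reprocessing after a map edit is just re-running the binning over stored raw events, which keeps the pipeline idempotent.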

Acceptance Criteria
Hook Strength & Chorus Entry Drop-off Metrics
"As a mixing engineer, I want quantified hook strength and chorus entry drop-offs so that I can make targeted edits that increase listener retention."
Description

Derive a hook strength score using signals such as replay clusters, share/save events, and dwell time around candidate hooks, with emphasis on the first chorus and repeated motifs. Detect the first chorus entry point and calculate drop-off at chorus entry versus preceding section. Provide comparative scoring across versions/mixes and flag statistically significant changes. Display scores alongside section overlays and make them queryable for A/B testing and release readiness reviews.

Acceptance Criteria
Cohort-Based Section Completion Deltas
"As a label rep, I want to compare section performance across cohorts so that I can tailor edits or marketing to specific audiences."
Description

Enable cohort definitions (e.g., reviewer groups, geography, device type, campaign, link ID) and compute section-level completion, skip-at-entry, and dwell deltas across cohorts. Provide filters and side-by-side comparisons with confidence intervals and minimum cohort thresholds to protect privacy and avoid noisy reads. Support CSV/JSON export and snapshotting for shareable reports. Integrate with existing audience segmentation and respect consent/privacy settings for per-recipient analytics.

Acceptance Criteria
Review Link Integration with Section Analytics
"As an artist manager, I want review links to capture section-level behavior per recipient so that I can see where industry contacts drop off without leaking the track."
Description

Embed section markers and names in the web player used for watermarkable, expiring review links, enabling per-recipient analytics to be captured at the section level without exposing sensitive raw telemetry. Provide a sender-controlled setting to show or hide section labels to recipients. Ensure watermarking, expiry, and access permissions function unchanged, and pass section-aligned events to the analytics pipeline with recipient attribution. Include deep links to sections for feedback and ensure links remain valid across asset re-uploads if the section map is unchanged.

Acceptance Criteria
Section Map Timeline Visualization & Interactions
"As a songwriter, I want a clear visual map of sections with aligned metrics so that I can quickly spot problem areas and share precise edit notes."
Description

Deliver a responsive timeline visualization that displays colored sections over the waveform with zoom, hover tooltips, and toggles for heatmap overlays and cohort filters. Support inline commenting pinned to sections and timestamps, quick compare between versions/mixes, and badges for hook strength and drop-off metrics. Provide accessible interactions (keyboard navigation, high-contrast mode) and ensure performance on long tracks and mobile. Allow exporting the section map as CSV/PDF and DAW-friendly markers (e.g., ID3 chapters, cue sheets) to turn insights into actionable mix/edit tasks.

Acceptance Criteria

Drop‑Off Rules

Set thresholds that trigger actions when drop‑offs cluster at a timestamp (e.g., >25% exit within 5s around 0:30 across 50 plays). Auto‑open a task at that timecode, ping the right stakeholders, or schedule a targeted retest post‑edit. Benefit: catch problem spots early and convert them into guided next steps instead of guesswork.
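The example rule above (>25% exit within 5s around 0:30 across 50 plays) could be evaluated roughly as follows; every parameter here corresponds to one of the configurable thresholds in the example:

```python
def rule_triggered(exit_times, plays, target=30.0, radius=5.0,
                   exit_pct=0.25, min_plays=50):
    """Check a drop-off rule: does the share of plays exiting within
    +/- radius seconds of the target timestamp exceed exit_pct?
    Requires a minimum sample before triggering any action."""
    if plays < min_plays:
        return False  # insufficient sample; avoid noisy triggers
    in_window = sum(1 for t in exit_times if abs(t - target) <= radius)
    return in_window / plays > exit_pct
```

A `True` result is what would auto-open the task, ping stakeholders, or schedule the retest.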

Requirements

Rule Composer & Threshold Builder
"As an indie manager, I want to create precise drop-off rules with thresholds and filters so that I can automatically detect problem spots that matter to my audience."
Description

A UI and API that let users define drop-off rules with parameters such as target timestamp (absolute or relative to start), window radius (e.g., ±5s), exit percentage threshold, minimum play count, audience/link/territory/device filters, rolling evaluation period, and scope across assets or releases. Includes validation for feasibility (e.g., sufficient sample size), presets that can be saved and shared at the org level, and permission-aware visibility. Integrates with IndieVault’s asset library, release folders, and versioning so rules can be attached to specific versions or inherited. Ensures accessibility, localization, and mobile responsiveness so rules can be created and edited on any device. The outcome is a precise, reusable configuration that expresses user intent and drives reliable detection.

Acceptance Criteria
Real-time Drop-Off Cluster Detection Engine
"As a product-minded artist, I want drop-off clusters to be detected quickly and reliably so that I can act before momentum stalls."
Description

A streaming analytics service that ingests playback events from watermarkable, expiring review links and aggregates exits by timecode to detect statistically meaningful clusters per active rule. Applies smoothing and debouncing to reduce noise, deduplicates by recipient, enforces minimum-sample constraints, and attributes events to assets, versions, links, and recipients for per-recipient analytics. Emits trigger events within five minutes of threshold breach, with observability, alerting, and horizontal scalability for peak traffic. Preserves privacy, respects link expirations, and provides an internal API for rule evaluation results used by downstream automations.
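The deduplication and smoothing steps might look like the sketch below, which keeps one exit per recipient, buckets exits by second, and applies a centered moving average so a single-bucket spike does not fire a rule on its own. Bucket size and window width are illustrative:

```python
def exit_clusters(events, bucket=1.0, smooth=3):
    """events: iterable of (recipient_id, exit_seconds) tuples.
    Dedupe to one exit per recipient (keep the last), bucket by
    second, then smooth with a centered moving average."""
    last_exit = {}
    for recipient, t in events:
        last_exit[recipient] = t  # dedupe: one exit per recipient
    counts = {}
    for t in last_exit.values():
        b = int(t // bucket)
        counts[b] = counts.get(b, 0) + 1
    if not counts:
        return {}
    half = smooth // 2
    smoothed = {}
    for b in range(max(counts) + 1):
        window = [counts.get(x, 0) for x in range(b - half, b + half + 1)]
        smoothed[b] = sum(window) / len(window)
    return smoothed
```

In the streaming service this would run incrementally per active rule rather than over a full event list.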

Acceptance Criteria
Auto Task Creation at Timecode
"As a mixing engineer, I want a task to open automatically at the problem timestamp so that I can jump straight into fixing it without hunting for details."
Description

On rule trigger, automatically create a task anchored to the exact timecode and asset/version, using a configurable template with severity, due date, and checklist (e.g., tighten intro, rebalance vocal). Auto-assign stakeholders based on ownership metadata and role mappings, attach a waveform snapshot and playback context, and link back to the source review links and evaluation details. Tasks reside in IndieVault’s task board and release folders, support comments and attachments, and update status bi-directionally via webhooks and in-app actions. This turns detected problems into actionable, trackable work.

Acceptance Criteria
Stakeholder Notifications & Routing
"As a label coordinator, I want targeted alerts with timecoded context so that the right stakeholders can respond quickly without inbox overload."
Description

Configurable notifications that route rule triggers and resulting tasks to the right people via in-app alerts, email, and Slack. Supports recipient rules by role, release, territory, or link owner; deep links to the timecoded player and task; throttling, batching, and quiet hours to reduce noise; retries and failure handling; and per-recipient open/click analytics consistent with IndieVault permissions. Administrators can define defaults and teams can override at the project level to balance speed with signal quality.

Acceptance Criteria
Targeted Retest Scheduler
"As an artist, I want the system to orchestrate a retest after changes so that I can verify the fix and move the release forward confidently."
Description

After an edit or new version is uploaded, schedule a focused retest for the flagged timestamp by generating new watermarkable review links, preselecting recipients who previously dropped off or key reviewers, setting an availability window, and collecting post-edit analytics. Automatically compare pre- and post-edit drop-off rates, notify stakeholders of results, and close or reopen the associated task based on configurable improvement thresholds. This creates a tight feedback loop that validates fixes and de-risks release deadlines.
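The close/reopen decision reduces to a simple comparison against the configurable improvement threshold. A minimal sketch, assuming an absolute drop-off-rate threshold (a relative threshold would work the same way):

```python
def task_disposition(pre_dropoff, post_dropoff, improvement_threshold=0.10):
    """Close the task if the drop-off rate improved by at least the
    configured absolute threshold; otherwise reopen it for another pass."""
    improvement = pre_dropoff - post_dropoff
    return "close" if improvement >= improvement_threshold else "reopen"
```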

Acceptance Criteria
Rule Management & Audit Trail
"As a team lead, I want clear control and history of drop-off rules and their actions so that we can govern automation and troubleshoot issues."
Description

Centralized administration to enable, disable, clone, scope, and version rules with effective dates and precedence ordering. Provides an audit trail of rule evaluations, triggers, actions taken, and notifications sent, with timestamps and actors for compliance and troubleshooting. Supports export, retention policies, and access controls aligned with IndieVault roles. Surfaced health metrics like triggers per rule and suppression due to throttling help teams tune configurations and reduce false positives over time.

Acceptance Criteria
Rule Simulator & Backtesting
"As a cautious PM, I want to backtest a rule before enabling it so that we avoid noisy alerts and busywork."
Description

A simulator that runs proposed rules against historical playback data to estimate expected trigger frequency, precision, and false-positive risk prior to activation. Offers visualizations of drop-off curves around selected timecodes, sensitivity controls for thresholds and windows, and a preview of resulting tasks and notifications. Enables safe iteration on rules so teams can ship automation with confidence.

Acceptance Criteria

Timecode Tasks

Create shareable, timestamped tasks straight from the heatmap (e.g., “tighten intro at 0:12”). Deep links jump collaborators to the exact moment in the web player; Link‑Only Reviewers can leave quick reactions without accounts. Tasks stay version‑aware and roll forward to new edits. Benefit: faster, clearer feedback loops anchored to real listener behavior.

Requirements

Heatmap-to-Task Creation
"As a producer, I want to create a task at the exact moment I hear an issue so that I can capture actionable feedback without losing context."
Description

Enable users to create timestamped (or ranged) tasks directly from the track’s heatmap and waveform by clicking or dragging on a time segment. The task composer auto-inserts the exact timecode, a playable preview starting a few seconds before the marker, and optional labels (category, severity). Support keyboard shortcuts and right‑click context actions. Tasks are stored against the specific asset version and include metadata (creator, created time, due date, assignee placeholders) to drive workflows. This integrates with IndieVault’s asset model so tasks live within release folders and inherit folder permissions.

Acceptance Criteria
Deep-Link Timestamp Sharing
"As a manager, I want to share a link that jumps to a specific moment so that collaborators can review exactly what needs attention without scrubbing."
Description

Generate tokenized deep links for each timecode task that open the web player at the precise timestamp with the task highlighted. Links respect IndieVault’s existing expiring link and watermark policies and attach per‑recipient tracking parameters to preserve analytics. Deep links render consistently across desktop and mobile, and gracefully fall back by cueing the time and showing task details when embedding is restricted. Links can be copied, shared, or included in notification emails and comments.
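One plausible shape for the tokenized link is an HMAC-signed query string binding the task, recipient, timestamp, and expiry. The query layout, signature scheme, and in-code secret are illustrative assumptions (a real key would come from a secrets manager):

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"example-signing-key"  # hypothetical; load from a KMS in production

def make_deep_link(base_url, task_id, recipient_id, t_ms, ttl_s=7 * 24 * 3600):
    """Build a tokenized deep link that opens the player at t_ms with
    the task highlighted, carrying per-recipient attribution."""
    expires = int(time.time()) + ttl_s
    payload = f"{task_id}.{recipient_id}.{t_ms}.{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()[:32]
    query = urlencode({"task": task_id, "r": recipient_id,
                       "t": t_ms, "exp": expires, "sig": sig})
    return f"{base_url}?{query}"

def verify(task_id, recipient_id, t_ms, expires, sig):
    """Reject tampered or expired links before seeking the player."""
    payload = f"{task_id}.{recipient_id}.{t_ms}.{expires}"
    good = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()[:32]
    return hmac.compare_digest(good, sig) and time.time() < expires
```

Because the recipient ID is inside the signed payload, per-recipient analytics attribution survives copying and forwarding of the URL text.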

Acceptance Criteria
Link-Only Reviewer Quick Reactions
"As a label reviewer, I want to react and comment at a timestamp without signing up so that I can give fast feedback with minimal friction."
Description

Allow non‑account reviewers, accessing via secure expiring links, to leave quick reactions (emojis, preset tags like “vocals” or “timing”) and short comments on a timecode task without creating an account. Capture lightweight identity (name and optional email) inline, enforce rate‑limiting and anti‑spam, and attribute reactions to the recipient for analytics. Preserve read‑only constraints outside the task context and prevent broader project access. Persist reactions in the task thread and include them in per‑recipient analytics.

Acceptance Criteria
Version-Aware Task Roll-Forward
"As a mixing engineer, I want my tasks to carry over to new edits so that I don’t lose track of what still needs fixing after updates."
Description

Maintain task continuity across new uploads by automatically remapping timecodes from the source version to subsequent edits. Use duration-aware proportional mapping with optional audio waveform alignment to improve accuracy. Flag tasks as “needs remap” when confidence is low or the section was significantly altered, and offer manual remap tools (nudge controls, set new time). Preserve lineage by storing source and target version references and a remap audit trail. Display “stale” badges when a task’s target cannot be located.
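The duration-aware proportional mapping, with a low-confidence "needs remap" flag, could be sketched as below. The confidence heuristic (trust falls as durations diverge) and the 0.85 floor are illustrative; waveform alignment would refine this:

```python
def remap_timecode(t_ms, src_duration_ms, dst_duration_ms,
                   confidence_floor=0.85):
    """Proportionally remap a task timecode from the source version to a
    new edit. Returns (new_timecode_ms, status), where status is
    'needs_remap' when the duration-based confidence is too low."""
    ratio = dst_duration_ms / src_duration_ms
    new_t = round(t_ms * ratio)
    confidence = min(ratio, 1 / ratio)  # 1.0 when durations match
    status = "ok" if confidence >= confidence_floor else "needs_remap"
    return new_t, status
```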

Acceptance Criteria
Task Assignment and Notifications
"As an artist manager, I want to assign timestamped tasks and notify owners so that accountability is clear and deadlines are met."
Description

Enable assignment of timecode tasks to team members with due dates and statuses (Open, In Progress, Resolved). Send notifications (email and in‑app) on creation, assignment, comments, status changes, and remap events, with daily or immediate delivery options. Respect workspace notification preferences and include deep links to jump to the task timestamp. Surface unread indicators and an activity log within the task panel to speed follow‑up.

Acceptance Criteria
Waveform Overlay and Threaded Task UI
"As a collaborator, I want to see and navigate tasks directly on the waveform so that I can review and respond in context without losing my place."
Description

Render visible pins and ranges on the waveform and heatmap, color‑coded by task status and ownership. Clicking a pin seeks the player, opens a side panel with the task thread, and supports inline replies, reactions, and attachments. Provide filters (status, assignee, tag), keyboard navigation between tasks, and accessibility (ARIA roles, focus states, contrast). Optimize for mobile by collapsing the thread and enabling sticky seek controls. Ensure performance with virtualized lists for dense feedback sessions.

Acceptance Criteria

Resume Nudges

Send personalized, context‑aware reminders to recipients who didn’t finish a listen, with a one‑tap ‘Resume at 1:07’ link. Nudges throttle by timezone and campaign priority, and report post‑nudge completion lift. Benefit: recover stalled listens without spamming, improving sample size and decision confidence.

Requirements

Per-Recipient Listen State Tracking
"As an artist manager, I want to see exactly where each recipient stopped listening so that I can send a precise resume link and understand drop-off."
Description

Capture and store per-recipient playback events from IndieVault’s review links and player, including started_at, last_position_ms, percent_completed, device type, and completed_at. Persist state across devices and sessions to accurately determine whether a recipient finished a listen and the exact resume point (e.g., 1:07). Expose this state to the nudge engine and UI, enabling context-aware messaging and the 'Resume at <timestamp>' CTA. Handle multi-track releases and playlists by tracking progress per asset and per campaign. Ensure data integrity, idempotency, and privacy using recipient-scoped records and time-bound retention.

Acceptance Criteria
Nudge Eligibility & Trigger Rules
"As a campaign owner, I want nudges to send only to recipients who haven't finished within a sensible window so that I avoid unnecessary reminders."
Description

Implement a rules engine that determines who should receive a nudge and when, based on listen state and campaign settings. Default criteria: percent_completed below a configurable threshold (e.g., <80%) with no activity for N hours since last play, within the campaign’s active window. Support exclusions for recipients who completed, declined, bounced, or opted out, and include a randomized holdout control group for measuring lift. Allow per-campaign priority levels to influence send order and concurrency. Provide admin-tunable parameters with sane defaults and guardrails.
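The default criteria and the holdout assignment could be sketched as follows; the state-dict shape and the 10% holdout rate are illustrative assumptions:

```python
import random

def nudge_eligible(state, now_h, threshold=0.80, idle_h=24,
                   holdout_rate=0.10, rng=random.random):
    """state: dict with completed, opted_out, in_window flags plus
    percent_completed and last_play_h (hours). Returns a tuple
    (eligible, holdout); holdouts are tracked for lift but not sent."""
    if state["completed"] or state["opted_out"] or not state["in_window"]:
        return (False, False)
    if state["percent_completed"] >= threshold:
        return (False, False)
    if now_h - state["last_play_h"] < idle_h:
        return (False, False)
    return (True, rng() < holdout_rate)
```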

Acceptance Criteria
Timezone-Aware Send Throttling
"As a marketer, I want nudges to send during each recipient’s local waking hours so that open and completion rates improve without spamming at odd times."
Description

Schedule and throttle outbound nudges using each recipient’s inferred timezone and preferred send window to avoid off-hour pings. Batch and queue sends so higher-priority campaigns receive earlier slots while respecting per-domain rate limits and provider APIs. Provide backoff and retry strategies, and avoid duplicate sends during window overlaps. Surface scheduling status in the campaign UI and expose overrides for urgent sends subject to compliance checks.
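The core scheduling decision, finding the earliest slot inside a recipient's local send window, can be sketched like this. The 9:00–20:00 default window and the fixed-offset timezone model are simplifying assumptions (production would use full IANA timezones):

```python
from datetime import datetime, timedelta, timezone

def next_send_slot(now_utc, tz_offset_hours, window=(9, 20)):
    """Return the earliest UTC datetime inside the recipient's local
    send window [start, end) hours; send immediately if already inside."""
    local = now_utc + timedelta(hours=tz_offset_hours)
    start, end = window
    if start <= local.hour < end:
        return now_utc  # already inside the window
    if local.hour >= end:
        local = local + timedelta(days=1)  # roll to tomorrow's window
    target_local = local.replace(hour=start, minute=0, second=0, microsecond=0)
    return target_local - timedelta(hours=tz_offset_hours)
```

Priority and rate limits then order the queue of computed slots rather than changing them.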

Acceptance Criteria
Personalized Nudge Templates
"As a sender, I want personalized nudge content with a one-tap resume CTA so that recipients know it's relevant and are more likely to continue."
Description

Provide dynamic, brandable templates for email and in-app messages that personalize subject, preview, greeting, track/release names, last heard timestamp, and include a single clear CTA to 'Resume at <time>'. Support localization, variable token fallbacks, dark-mode safe rendering, and A/B variants for subject/CTA copy. Include preview rendering with sample recipients, test sends, and template versioning to maintain consistency across campaigns.

Acceptance Criteria
Secure One-Tap Resume Deep Link
"As a recipient, I want a secure link that resumes playback at the exact timestamp on my device so that I can pick up instantly without re-scrubbing."
Description

Generate expiring, signed resume links that open the IndieVault player at the exact last_position_ms, with device-aware routing (mobile web, native app, or desktop) and graceful fallback if the position is unavailable. Maintain existing watermarking and per-recipient analytics by preserving attribution parameters. Enforce access controls (recipient-bound link, optional PIN/SAML for confidential assets) and prevent replay via short TTLs and single-use tokens where configured.

Acceptance Criteria
Post-Nudge Lift Analytics & Reporting
"As a product analyst, I want to measure completion lift attributable to nudges so that we can prove impact and optimize our strategy."
Description

Track post-nudge behavior including opens, clicks, resumed plays, and completions, and attribute outcomes to the specific nudge. Compute and display completion lift versus a holdout control with confidence intervals, broken down by campaign, asset, recipient segment, and timezone window. Provide time-series charts, cohort views, exportable CSVs, and an API endpoint for BI. Support attribution windows and de-duplication rules to avoid double-counting when multiple nudges are sent.
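One common de-duplication rule, crediting only the most recent nudge inside the attribution window, might look like the sketch below. Last-touch attribution and the 48-hour window are illustrative assumptions:

```python
def attribute_completion(nudges_sent_h, completed_h, window_h=48):
    """Attribute a completion to the most recent nudge whose attribution
    window contains it; earlier qualifying nudges are not double-counted.
    nudges_sent_h: ascending send times in hours. Returns the index of
    the credited nudge, or None if no nudge qualifies."""
    credited = None
    for i, sent in enumerate(nudges_sent_h):
        if sent <= completed_h <= sent + window_h:
            credited = i  # later qualifying nudge overwrites earlier ones
    return credited
```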

Acceptance Criteria
Frequency Capping & Compliance
"As a compliance-conscious admin, I want caps and opt-out handling enforced across campaigns so that we respect recipients and reduce spam risk."
Description

Enforce per-recipient frequency caps (e.g., max X nudges/day and Y/week across all campaigns), quiet hours, and global suppression lists. Respect unsubscribe/opt-out preferences, consent status, and regional regulations (e.g., CAN-SPAM, GDPR). Deduplicate overlapping sends, handle bounces and complaints with automatic suppression, and maintain an auditable log of nudge decisions and deliveries for compliance review. Provide admin-level overrides with just-in-time warnings and required justification.
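The cross-campaign cap check reduces to counting a recipient's prior sends in the trailing daily and weekly windows. A minimal sketch; the cap values are illustrative defaults:

```python
def within_caps(send_times_h, now_h, daily_cap=1, weekly_cap=3):
    """Return True if one more nudge to this recipient would stay under
    both the trailing-24h and trailing-7-day caps, counted across all
    campaigns. Times are in hours."""
    last_day = sum(1 for t in send_times_h if now_h - t < 24)
    last_week = sum(1 for t in send_times_h if now_h - t < 168)
    return last_day < daily_cap and last_week < weekly_cap
```

Quiet hours, suppression lists, and opt-outs would be checked before this cap, and any admin override would be logged with its justification.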

Acceptance Criteria

Device Lock

Bind each review link to approved devices via passkeys. Set a per-recipient device cap, see an at-a-glance list of authorized devices, and revoke any single device instantly—without changing the link. Benefit: kills link forwarding while keeping access effortless and analytics clean.

Requirements

Passkey-Based Device Binding
"As a reviewer receiving a link, I want to register my device with a passkey seamlessly so that I can access the assets securely without extra friction."
Description

Bind each review link access to a recipient’s approved device using WebAuthn passkeys. On first open, the recipient registers a passkey on their device; subsequent opens require a successful assertion from the same device, transparently maintaining a signed session. Store credential IDs and public keys mapped to link+recipient, enforce origin binding, and rotate nonces per assertion to prevent replay. Integrates with link creation and viewing flows, requiring no account creation for recipients. Ensures effortless, secure access while neutralizing link forwarding and aligning with IndieVault’s expiring, watermarkable links.

Acceptance Criteria
Per-Recipient Device Cap & Management
"As an artist or manager, I want to set a device cap per recipient and view authorized devices so that I can prevent link forwarding and maintain control over access."
Description

Allow senders to set a device limit per recipient at link creation (with workspace defaults and per-link overrides). Enforce caps during passkey registration, blocking additional device enrollments once the limit is reached. Provide a management UI showing authorized devices (nickname, OS/browser, last seen, first seen, location approximation) per recipient and link, with the ability to label devices and export an audit trail. Expose equivalent controls via API for automation. Keeps access constrained and transparent while simplifying administration for weekly release workflows.

Acceptance Criteria
Instant Single-Device Revocation
"As an artist or manager, I want to revoke a single device instantly without changing the link so that I can stop a suspected leak without disrupting legitimate reviewers."
Description

Enable immediate revocation of any individual authorized device without changing the review link. Trigger server-side invalidation of the device’s credential mapping and active sessions, and push real-time revoke events to open sessions (WebSocket/SSE) so playback or downloads halt instantly. Ensure revocation propagates globally within seconds and is durable across CDNs. Log all revocations to an immutable audit trail and surface status in the UI and API. Minimizes leak window and operational disruption by preserving access for other authorized devices.

Acceptance Criteria
Clean Analytics Enforcement
"As a manager, I want analytics that reflect unique authorized devices per recipient so that I can trust engagement metrics and spot suspicious activity."
Description

Tie engagement analytics to authorized devices and recipients, recording opens, plays, and downloads per device while excluding blocked or unverified attempts from core metrics. Flag and log denied access events separately for security insights. Deduplicate events using device credential IDs to keep per-recipient analytics accurate, and annotate timelines when devices are added or revoked. Integrate with existing IndieVault analytics dashboards and exports to preserve reporting continuity while improving signal quality.

Acceptance Criteria
Secure Recovery & Device Change Flow
"As a reviewer who changed devices, I want a secure way to regain access so that I can continue my review without delays or creating a new link."
Description

Provide a controlled path for recipients who lose or replace devices to regain access without weakening security. Support owner-approved device addition requests, one-time recovery links with short expiry and rate limits, and identity checks (email verification plus signed challenge) before allowing a new passkey registration within the configured cap. Notify senders of recovery events and log them in the audit trail. Offer self-serve guidance and in-product prompts to minimize support load while maintaining strict access control.

Acceptance Criteria
Cross-Platform & Browser Compatibility
"As a sender, I want the device lock flow to work across common browsers and mobile devices so that recipients can access links without unnecessary support friction."
Description

Ensure device lock works reliably across major browsers and platforms (Safari, Chrome, Firefox, Edge on macOS, Windows, iOS/iPadOS, and Android). Use modern WebAuthn features (discoverable credentials, conditional UI) where available, detect unsupported environments (e.g., in-app webviews), and guide recipients to open links in a supported browser. Provide graceful, sender-controlled fallback policies (e.g., disallow fallback or require owner approval) to avoid security regressions. Document support matrices and automate compatibility checks during link open.

Acceptance Criteria
Security & Privacy Hardening
"As a security-conscious admin, I want the device lock feature to follow strong security and privacy practices so that we protect users and the business from risk."
Description

Harden the device lock system with best-practice cryptography and privacy controls: store only public keys and hashed identifiers, encrypt at rest with KMS, enforce origin-bound challenges, apply adaptive rate limiting and bot protection, and prefer platform authenticators. Support attestation policies without collecting unnecessary device PII, comply with GDPR/CCPA (data minimization, export, and deletion), and provide admin controls for retention. Set up monitoring, anomaly detection, and audit logs to meet security and compliance requirements.

Acceptance Criteria

Smart Step-Up

Add adaptive re-auth that triggers FaceID/TouchID passkey checks only when risk spikes (new device, unusual location, sensitive asset, or long idle). Fully configurable per project. Benefit: stronger protection exactly when needed, minimal friction when it’s not.

Requirements

Adaptive Risk Scoring Engine
"As a security-conscious artist manager, I want the system to detect higher-risk situations in real time so that I’m only asked to re-auth when it truly matters."
Description

Compute a real-time risk score for every privileged action and asset access using signals such as device reputation, login history, IP/ASN risk, geolocation variance, time-of-day anomalies, session idle duration, asset sensitivity, role criticality, and project-level overrides. Execute server-side with a deterministic score and human-readable reasons, enforcing configurable thresholds per project and per action type. Provide a low-latency API (<75ms p95) and client hints to minimize prompts while preserving security, with circuit breakers and safe defaults if signals are unavailable.
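A deterministic score with human-readable reasons could be as simple as a weighted sum over boolean signals, compared against a per-project threshold. The signal names, weights, and threshold below are illustrative assumptions, not the production model:

```python
def risk_score(signals, weights=None):
    """Deterministic weighted risk score (0-100) with reasons.
    signals: dict of boolean risk indicators."""
    weights = weights or {
        "new_device": 40,
        "unusual_location": 30,
        "sensitive_asset": 20,
        "long_idle": 10,
    }
    score = sum(w for name, w in weights.items() if signals.get(name))
    reasons = [name for name in weights if signals.get(name)]
    return min(score, 100), reasons

def requires_step_up(signals, threshold=50):
    """Project-configurable threshold decides whether to challenge."""
    score, reasons = risk_score(signals)
    return score >= threshold, score, reasons
```

Because the score is a pure function of its inputs, the same evaluation can run in the low-latency API and be replayed for audit.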

Acceptance Criteria
OS-Native Step-Up Authentication Flow
"As a frequent uploader, I want a quick FaceID prompt only when needed so that my workflow isn’t slowed down by constant logins."
Description

Invoke platform-native WebAuthn/passkey biometric challenges (FaceID/TouchID) with graceful fallbacks (device passcode, security key, or TOTP) when risk thresholds are exceeded. Provide consistent UX across web and mobile, with non-blocking modals, retry/timeout handling, accessibility compliance, and localized copy. Preserve in-progress work (uploads, edits) during challenge and, upon success, grant a configurable grace window scoped to the project and asset sensitivity. Emit structured events for success/failure and user cancellations.

Acceptance Criteria
Per-Project Policy Configuration
"As a project owner, I want to tailor when step-up is required for my project so that security matches my team’s risk tolerance."
Description

Deliver an admin UI and API for project owners to configure when step-up is required: thresholds by action (view, download, share, export, contract access), triggers (new device, unusual location, idle > X minutes, off-hours), role-based rules, and exceptions (IP allow/deny lists). Include presets (Strict, Balanced, Minimal), simulation mode to preview impact without enforcement, and versioned policies with audit history and rollback. All changes propagate in near real time.

Acceptance Criteria
Sensitive Asset Classification
"As an artist, I want to mark certain files as sensitive so that extra protection is applied when they’re accessed or shared."
Description

Introduce asset sensitivity metadata with defaults by type and release state (e.g., pre-release tracks, master stems, legal contracts) and inheritance to folders/releases. Enable manual and bulk tagging via UI and API, with validation on share/export flows. Integrate sensitivity into risk scoring and enforcement scope to increase protection on high-value assets. Provide migration to backfill existing assets and guardrails to prevent accidental downgrades by non-owners.

Acceptance Criteria
Trusted Device and Location Recognition
"As a touring artist, I want the app to recognize my laptop and usual cities so that I’m not repeatedly challenged while still being protected in unfamiliar locations."
Description

Maintain privacy-preserving device identifiers and behavioral location profiles to recognize trusted environments. Trigger step-up on unknown devices, cleared cookies, major OS updates, or anomalous geolocation patterns. Allow users to trust a device for a configurable duration, bound to the WebAuthn credential and device characteristics, with automatic revocation on password reset, role change, or admin action. Provide user-visible device management to view and revoke trusted devices, adhering to regional privacy requirements.

Acceptance Criteria
Step-Up Audit Trails and Analytics
"As a label advisor, I want visibility into when and why step-up occurred so that I can tune policies and prove due diligence."
Description

Capture immutable logs of risk evaluations, step-up prompts, outcomes, reasons, latencies, and impacted assets/users, with correlation IDs for end-to-end tracing. Surface project-level dashboards showing prompt rate trends, success/failure distribution, top triggers, false-positive indicators, and grace window effectiveness. Provide CSV/JSON export and webhooks for SIEM integration, enforce retention policies, and minimize PII. Tie events to per-recipient analytics for review links to demonstrate policy effectiveness and compliance.

Acceptance Criteria

Live Revoke

Invalidate active sessions in real time while the URL stays the same. Open players lock within seconds and show a polite ‘access expired’ screen with optional re-request flow. Benefit: contain issues fast without broken links, mass resends, or analytics fragmentation.

Requirements

Real-time Session Invalidation
"As an indie label manager, I want to revoke a recipient’s access immediately without changing the link so that I can contain leaks fast and avoid resending or confusing collaborators."
Description

Server-side capability to instantly revoke active sessions while preserving the original share URL. Implements a revocation registry keyed by recipient, link, asset, and organization scopes, backed by low-latency distributed cache (e.g., Redis) and persisted for audit. On revoke, emits pub/sub events to players/CDN to halt playback within seconds, invalidates streaming tokens/keys, and rejects new segment or file requests. Guarantees idempotent, race-safe operations with <5s propagation target globally, configurable TTLs for temporary suspensions, and safeguards against accidental mass revocations (preview and confirm). Ensures no new URLs are generated, maintaining share continuity and avoiding analytics fragmentation.
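The scoped registry lookup might behave like the in-memory sketch below: a revocation at any matching scope (recipient, link, asset, or org) denies access, and temporary suspensions carry a TTL. Production would back this with a distributed cache plus pub/sub fan-out, as described above:

```python
import time

class RevocationRegistry:
    """In-memory sketch of a scoped revocation registry."""

    def __init__(self):
        # (scope_type, scope_id) -> expiry timestamp, or None for permanent
        self._revoked = {}

    def revoke(self, scope_type, scope_id, ttl_s=None):
        """Idempotent revoke; ttl_s makes it a temporary suspension."""
        expiry = time.time() + ttl_s if ttl_s is not None else None
        self._revoked[(scope_type, scope_id)] = expiry

    def is_revoked(self, recipient, link, asset, org):
        """Deny if any applicable scope is currently revoked."""
        now = time.time()
        for key in (("recipient", recipient), ("link", link),
                    ("asset", asset), ("org", org)):
            if key in self._revoked:
                expiry = self._revoked[key]
                if expiry is None or now < expiry:
                    return True
        return False
```

Because revocation is keyed by scope rather than by URL, the share link itself never changes.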

Acceptance Criteria
Player Lock & Expired Screen
"As a reviewer, I want a clear expired screen when my access is revoked so that I understand what happened and how to request access again without broken links."
Description

Client behavior for web/mobile players to detect revoked access in real time and lock the interface gracefully. On receiving revoke signal or failing a periodic status check, the player stops playback, prevents seeking/download, clears prefetch buffers, and displays a branded, localized “Access expired” view with optional reasons and a ‘Request access’ button. Supports theming, dark mode, and embeddable iframes with postMessage events to parent pages. Resilient to offline/latency scenarios by validating per-chunk HLS/DASH requests with short-lived tokens. Provides accessibility compliance (WCAG AA) and analytics events for lock, view, and re-request actions.

Acceptance Criteria
Granular Revocation Controls
"As an artist manager, I want fine-grained revoke options so that I can disable only risky recipients or assets without disrupting ongoing campaigns."
Description

UI and API to target revocations by scope: per recipient, per email domain, per link, per asset within a link, per IP/country (optional), or org-wide. Supports bulk selection with search/filters, previews the affected recipients and assets, and allows scheduled revocations or time-bounded suspensions. Includes exclusion lists (e.g., keep A&R leads active) and a short undo window for accidental actions. All operations require role-based permissions and dual confirmation for destructive scopes.

Acceptance Criteria
Re-Request Access Flow
"As a recipient who lost access, I want to request renewed access from the expired screen so that I can regain permission quickly without chasing new links."
Description

End-to-end renewal request path initiated from the expired screen. Collects recipient identity and reason, enforces rate limits and CAPTCHA to prevent spam, and creates an approval task for the owner. Owners get in-app and email notifications, can approve/deny with templates, set new expiry windows, and optionally watermark the renewed access. Approved requests re-enable the same URL for that recipient, preserving analytics continuity. All outcomes feed back to the requester with clear messaging.

Acceptance Criteria
Analytics Continuity & Revocation Events
"As a product lead, I want continuous analytics on a link even after revocations so that I can measure containment effectiveness and avoid fragmented reporting."
Description

Maintain a single canonical link identity while attributing metrics before and after revocation. Track attempted plays/downloads blocked post-revoke, time-to-containment (revoke to last blocked attempt), recipient-level timelines, and geographic/device breakdowns. Surface dashboards and exports (CSV/JSON) with event stamps for revoke, lock, re-request, approval/denial. Provide filters by scope and compare cohorts to measure impact without fragmenting data across regenerated links.

Acceptance Criteria
Audit Logging & Alerts
"As a compliance-conscious admin, I want detailed revoke audit logs and alerts so that I can prove control and respond quickly to suspicious activity."
Description

Immutable audit trail capturing who revoked what, when, scope, justification, and previous state, with tamper-evident hashes. Exposes searchable logs in the app and via export. Sends real-time alerts (email, Slack, webhooks) on critical revocations, failures to propagate within SLA, and high-volume blocked attempts that may indicate sharing. Honors RBAC/Privacy settings and data retention policies with configurable retention periods.
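Tamper-evident hashing of this kind is commonly implemented as a hash chain, where each entry commits to the previous entry's digest. A minimal sketch (the entry layout is illustrative, not IndieVault's log format):

```python
import hashlib
import json

GENESIS = "0" * 64  # hash "before" the first entry

def append_event(log: list[dict], event: dict) -> dict:
    """Append an audit event, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    entry = {**body, "hash": digest}
    log.append(entry)
    return entry

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; editing or reordering any entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = {"event": entry["event"], "prev": entry["prev"]}
        if entry["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Exported logs can ship with the chain intact, so an auditor can re-verify them offline.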

Acceptance Criteria
Developer API & Webhooks
"As a developer integrating IndieVault into our review portal, I want APIs and webhooks for revocation and status so that I can automate containment and keep our player in sync."
Description

Public endpoints to manage and query revoke state (e.g., POST /v1/revocations, GET /v1/access-status) with OAuth2/SCIM-compatible auth, idempotency keys, and per-tenant rate limits. Emits webhooks for revoke.created, revoke.propagated, player.locked, access.requested, and access.approved/denied. Provides a lightweight JS SDK for embedding status checks and lock handling in custom players, plus a sandbox environment and an OpenAPI schema for integration testing.
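Webhook consumers typically verify payload authenticity with an HMAC signature before acting on events like revoke.created. The header name and signing scheme below are assumptions for illustration, not a documented IndieVault contract:

```python
import hashlib
import hmac

def sign_payload(secret: bytes, payload: bytes) -> str:
    """Compute the hex HMAC-SHA256 signature a sender would attach."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_webhook(secret: bytes, payload: bytes, signature_header: str) -> bool:
    """Constant-time comparison guards against timing attacks on the signature."""
    return hmac.compare_digest(sign_payload(secret, payload), signature_header)
```

A consumer would reject any delivery whose signature fails this check, then deduplicate by event id to stay safe under webhook retries.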

Acceptance Criteria

Passkey Bridge

Guided, one-tap onboarding that helps first-time reviewers create a platform passkey using native prompts (iOS, Android, macOS, Windows, Chrome). Uses a verified email link once to bootstrap, then stays passwordless forever. Benefit: near-zero support burden and higher completion rates for non‑technical reviewers.

Requirements

WebAuthn Passkey Registration & Sign-in
"As a first-time reviewer, I want to create and use a passkey with my device’s native prompt so that I can open review links securely without creating or remembering a password."
Description

Implements standards-compliant WebAuthn/FIDO2 registration and authentication for reviewers across iOS, Android, macOS, Windows, and major browsers (Safari, Chrome, Edge). After email verification, the flow invokes the native passkey prompt to create a platform credential bound to IndieVault’s RP ID. Subsequent access to review links uses passkeys for one-tap sign-in, eliminating passwords. Supports resident credentials, synced passkeys (iCloud Keychain, Google Password Manager), and conditional UI where available to streamline prompts. Handles errors (user cancel, not allowed, unsupported), provides clear retry paths, and gracefully falls back to re-verification via a fresh bootstrap link without exposing passwords. Integrates with existing review-link access checks and per-recipient permissions, ensuring passkey-authenticated identities map to the correct invitee records.
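On the server side, every WebAuthn ceremony starts with an unguessable, single-use, short-lived challenge bound to the invitee. The sketch below shows only that challenge lifecycle; real registration and assertion verification should use a vetted library (e.g. py_webauthn), and the in-memory store and TTL here are illustrative assumptions:

```python
import hmac
import os
import time

CHALLENGE_TTL_S = 120  # illustrative; short TTLs limit the replay window

# In-memory stand-in for a per-ceremony challenge store.
_challenges: dict[str, tuple[bytes, float]] = {}

def issue_challenge(invitee_email: str) -> bytes:
    """Create a fresh random challenge bound to the invitee record."""
    challenge = os.urandom(32)
    _challenges[invitee_email] = (challenge, time.time() + CHALLENGE_TTL_S)
    return challenge

def consume_challenge(invitee_email: str, returned: bytes) -> bool:
    """Single-use: succeeds at most once, and only before expiry."""
    stored = _challenges.pop(invitee_email, None)
    if stored is None:
        return False
    challenge, expires_at = stored
    return time.time() < expires_at and hmac.compare_digest(challenge, returned)
```

The library then checks the authenticator's signed response against this challenge, the origin, and the RP ID before mapping the credential to the invitee record.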

Acceptance Criteria
One-Tap Email Bootstrap Link
"As an invited reviewer, I want to click one secure link that verifies me and sets up my passkey so that I can start reviewing immediately with no account setup hassles."
Description

Delivers a single-use, time-limited magic link that verifies the reviewer’s email and immediately initiates passkey registration. Links open in-app/on-device and deep-link to the WebAuthn flow, with automatic environment detection to trigger the most compatible native prompt. Enforces single consumption, short TTL, and device-agnostic behavior so recipients can start on mobile or desktop. On success, the reviewer is returned to the target review page with authenticated access. On failure or expiration, the system guides the user to request a fresh link without contacting support. All tokens are signed, auditable, and scoped to the intended review invitation.

Acceptance Criteria
Guided One-Tap Onboarding UX
"As a non-technical reviewer, I want simple guidance through creating a passkey so that I can complete onboarding quickly without confusion or errors."
Description

Provides a concise, step-by-step UI that prepares users for the native passkey prompt, explains what to expect on their device, and minimizes friction. Detects OS/browser to show tailored copy, icons, and animations, and auto-invokes the passkey registration when safe. Includes inline troubleshooting for common cases (no platform authenticator, blocked pop-ups, unsupported browser) and a single action to resend the bootstrap link if needed. Uses clear, non-technical language aimed at non-technical reviewers and preserves the look and feel of IndieVault’s review experience, ensuring a seamless transition into the passwordless flow.

Acceptance Criteria
Cross-Device Passkey Linking
"As a reviewer who works across phone and laptop, I want to add my passkey to another device so that I can review assets wherever I am without creating passwords."
Description

Enables reviewers to add access on additional devices with minimal support by offering built-in cross-device options. Supports synced passkeys where available, plus "use a passkey from another device" via QR or nearby device prompts when the browser/OS provides it. Provides a lightweight manage-devices view showing recognized devices and last-used timestamps, with the ability to revoke a device. Ensures additional device linking respects the original invitee identity and per-recipient access rules while keeping the experience passwordless.

Acceptance Criteria
Passwordless Recovery & Re-Provisioning
"As a reviewer who lost my device, I want to securely re-provision my access via email so that I can continue reviewing without contacting support or setting a password."
Description

Delivers a secure, self-serve flow for lost or replaced devices without introducing passwords. Reviewers can request a fresh email bootstrap that re-verifies identity and provisions a new passkey, while prior device credentials are auto-revoked or optionally retained based on user choice. Applies rate limiting, device change notifications, and clear UX explaining what changed. Integrates with invite lifecycle so that expired or revoked invites cannot be reactivated via recovery. Keeps support load low by automating common recovery paths.

Acceptance Criteria
Security & Anti-Abuse Hardening
"As a security-conscious sender, I want strong protections around passkey onboarding so that review links remain secure against phishing, abuse, and unauthorized access."
Description

Implements defense-in-depth controls specific to passwordless onboarding. Enforces strict RP ID/domain binding, origin checks, and replay-resistant token handling. Applies device and IP rate limits, bot/automation heuristics, and link-throttling for bootstrap requests. Stores minimal PII, hashes magic-link nonces, and logs key security events (link issued/consumed, attestation policy decisions, repeated failures) for auditability. Defines attestation policy (prefer platform, no collection of device-identifying data) and ensures privacy-preserving analytics. Coordinates with watermarkable, expiring review-link rules so that authentication strength and link policies align.

Acceptance Criteria
Onboarding Funnel Analytics & Reporting
"As a sender, I want visibility into where reviewers drop off during passkey onboarding so that I can improve completion rates and meet release deadlines."
Description

Captures and surfaces passkey onboarding metrics end-to-end to improve completion rates and reduce support. Tracks events such as email delivered/opened, link clicked, environment detected, prompt shown, passkey created, auth success/failure (with non-sensitive reason codes), and time-to-complete. Presents per-recipient status within existing IndieVault analytics, plus aggregate funnels by campaign/release. Exposes exportable, privacy-preserving reports to help senders optimize outreach timing and instructions, and flags cohorts with unusual drop-off for targeted fixes.

Acceptance Criteria

Safe Transfer

Lost or upgraded device? Reviewers can securely move access to a new device via a verified email check plus sender approval or auto-policy. Old device access is auto-revoked and identity/analytics are preserved. Benefit: continuity without duplicate recipients or link churn.

Requirements

Verified Email Challenge
"As a reviewer who changed devices, I want to verify my identity via my email so that I can move my review access to my new device without waiting for a new link."
Description

Implement a secure, expiring email verification step to initiate Safe Transfer. When a reviewer attempts to move access to a new device, the system sends a signed, single-use link or one-time code to the recipient’s verified email on record. The flow must validate ownership of the email, bind the transfer request to the target link/release and recipient identity, and prevent new recipient creation. Tokens must be short-lived, scoped, replay-resistant, and invalidated upon use. The experience should be lightweight on mobile and desktop, require no account creation, and align with IndieVault’s existing per-recipient model. All attempts and outcomes are recorded for auditing. Benefits: confirms identity before any device change, reduces support load, and preserves continuity without issuing new review links.

Acceptance Criteria
Sender Approval & Auto-Policy
"As a sender, I want to approve or automatically allow legitimate transfer requests so that reviewers can regain access quickly without exposing assets to unauthorized devices."
Description

Provide a rules-driven approval layer allowing senders to manually approve/deny transfer requests or enable auto-approval policies. Policies can be configured per workspace, project, or link and may include conditions such as recipient email domain, project sensitivity, time since last transfer, max transfers per recipient, geographic/IP reputation, and link expiry proximity. Manual approval notifications are delivered in-app and via email, with a one-click approve/deny action that includes request context (device info, IP, geolocation, link/release). Auto-policy decisions are logged with rationale. Default policies are configurable and inherited by new releases. Benefits: balances speed with control, minimizes social engineering risk, and reduces link churn for trusted reviewers.

Acceptance Criteria
Device Binding & Atomic Transfer
"As a reviewer, I want my old device to lose access as soon as my new device is authorized so that my content stays secure and I can continue reviewing seamlessly."
Description

Bind review access to a device fingerprint and execute transfers atomically. On an approved request, authorize the new device by issuing fresh, scoped tokens and keys while simultaneously revoking all active sessions and refresh tokens on the old device. Ensure rollback safety if any step fails (no partial dual access). Device fingerprinting should be privacy-conscious (hashed, non-PII where possible) and work across web, iOS, and Android. Handle offline edge cases (e.g., queued revocation), ensure single-active-device enforcement per recipient per link when configured, and preserve watermarking keys tied to the recipient identity. UX: streamlined 2–3 steps with clear status and retry guidance. Benefits: seamless continuity for reviewers and immediate risk reduction by terminating legacy access.
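The "no partial dual access" guarantee above amounts to issue-then-revoke with rollback. A minimal sketch against a hypothetical in-memory session store (the store class and its methods are assumptions for illustration):

```python
import secrets

class SessionStore:
    """In-memory stand-in for a session/token store; names are illustrative."""
    def __init__(self) -> None:
        self.tokens: dict[str, tuple[str, str]] = {}  # token -> (recipient, device)
        self.fail_revoke = False                      # simulates a backend outage

    def issue(self, recipient: str, device: str) -> str:
        token = secrets.token_urlsafe(16)
        self.tokens[token] = (recipient, device)
        return token

    def revoke_device(self, recipient: str, device: str) -> None:
        if self.fail_revoke:
            raise RuntimeError("revocation backend unavailable")
        self.tokens = {t: rd for t, rd in self.tokens.items()
                       if rd != (recipient, device)}

    def devices(self, recipient: str) -> set[str]:
        return {d for r, d in self.tokens.values() if r == recipient}

def transfer_access(store: SessionStore, recipient: str,
                    old_device: str, new_device: str) -> str:
    """Authorize the new device, then revoke the old one; roll back the new
    token if revocation fails, so an error never leaves dual access behind."""
    token = store.issue(recipient, new_device)
    try:
        store.revoke_device(recipient, old_device)
    except Exception:
        store.tokens.pop(token, None)  # rollback the freshly issued token
        raise
    return token
```

The queued-revocation offline case would replace the synchronous `revoke_device` call with a durable revocation job that must commit before the transfer is reported complete.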

Acceptance Criteria
Identity & Analytics Continuity
"As a sender, I want analytics and recipient identity to persist across a device transfer so that campaign metrics remain accurate and security watermarks still identify the same person."
Description

Maintain a persistent recipient identity across device transfers to avoid duplicate recipients and protect data integrity. Merge all historical and future analytics (opens, plays, download attempts, time-on-page, completion rates) under the same recipient profile after transfer. Annotate analytics timelines with a ‘device transfer’ event while preserving attribution for watermarking and leak forensics. Keep link-level quotas, expirations, and access scopes unchanged. Ensure dashboards, exports, and per-recipient analytics remain continuous without gaps or spikes from duplicate entities. Benefits: accurate campaign metrics, trustworthy watermark identity, and simplified reporting for senders.

Acceptance Criteria
Abuse Prevention & Rate Limiting
"As a security admin, I want automated limits and protections on transfer attempts so that attackers can’t brute-force email checks or hijack access."
Description

Introduce layered protections to deter brute force and account takeover during Safe Transfer. Implement per-email, per-IP, and global rate limits for verification requests; progressive challenges (e.g., CAPTCHA) after thresholds; velocity checks for repeated transfers; and temporary cool-offs or hard blocks for suspicious patterns. Validate IP reputation and geolocation anomalies to flag or route requests for manual approval regardless of auto-policy. Provide configurable org-level thresholds and blocklists. All mitigation events must be logged and surfaced in security reports. Benefits: reduces risk of SIM-swap/social engineering exploits and protects high-sensitivity pre-release assets.

Acceptance Criteria
Audit Trail & Notifications
"As a label manager, I want a complete audit trail and timely notifications of transfer activity so that I can trace issues and comply with security policies."
Description

Create a comprehensive, immutable audit trail of all Safe Transfer activities and deliver timely notifications. Log events including request initiation (device, IP, user agent), verification success/failure, approval decision (manual or policy with rule match), transfer execution, old device revocation, and any mitigation triggers. Expose a filterable timeline in the sender’s dashboard, with CSV export and webhook events for SIEM integration. Notify recipients and senders at key milestones (request received, approved/denied, transfer complete, revocation executed) with localized, branded templates. Benefits: operational transparency, compliance readiness, and faster incident response.

Acceptance Criteria

PressRoom Mode

Enable secure shared setups with hardware keys or shared devices. Issue time-boxed, device-scoped passes, require step-up per playback, and auto-lock on idle. Track per-seat activity without personal accounts. Benefit: newsroom-friendly security with intact per-recipient analytics.

Requirements

Device-Scoped, Time-Boxed Guest Passes
"As a PR coordinator, I want to issue time-limited access tied to a newsroom device so that press can review assets safely without creating personal accounts."
Description

Provide issuable passes tied to a specific shared device via secure device tokens or WebAuthn attestation, with clearly defined start and end windows. Passes limit access to assigned assets or folders, inherit link-level watermarks and expiration, and can be revoked instantly. Include QR-code provisioning for kiosks and ensure operation without personal accounts by mapping each pass to a PressRoom seat identifier. Support timezone awareness, clock skew tolerance, and automatic pass deactivation upon expiry.

Acceptance Criteria
Per-Playback Step-Up Verification
"As a security-minded artist manager, I want a quick extra check before each playback so that unattended devices can’t leak unreleased tracks."
Description

Require an additional low-friction verification before each audio or video playback and other sensitive actions in PressRoom Mode. Support hardware key touch, session PIN entry, or NFC tap as configurable step-up methods. Allow policy settings (every playback, every N minutes, or only for high-risk assets) and log each step-up event at the seat level to strengthen deterrence and auditability while minimizing workflow friction.

Acceptance Criteria
Hardware Key and Shared Device Authentication
"As a label publicist, I want PressRoom seats to authenticate with a hardware key so that access is both simple for staff and secure for pre-release material."
Description

Support FIDO2/WebAuthn hardware security keys and platform authenticators to enroll newsroom devices into PressRoom Mode without tying them to personal identities. Store public keys scoped to the workspace, rotate device tokens per session, and prevent key export. Provide a fallback shared-device PIN policy for environments without keys, and ensure compatibility with major browsers and kiosk modes.

Acceptance Criteria
Auto-Lock on Idle with Privacy Blur
"As a newsroom producer, I want the session to auto-lock when unattended so that sensitive content isn’t exposed on shared screens."
Description

Automatically lock the PressRoom session after configurable idle thresholds or on system lock, network change, or tab visibility loss. On lock, pause playback, blur or hide sensitive metadata and artwork, and require step-up to resume. Include on-screen countdown warnings, keyboard-only re-auth flows for kiosks, and clearing of clipboard plus revocation of pre-signed URLs to reduce exposure risks.
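The lock lifecycle reduces to a small state machine: activity resets the idle clock, crossing the threshold locks, and only step-up unlocks. A minimal sketch with an illustrative default timeout; the method names are assumptions:

```python
class IdleLock:
    """Minimal idle-lock state machine for a shared-device session."""
    def __init__(self, timeout_s: float = 120.0) -> None:
        self.timeout_s = timeout_s
        self.last_activity = 0.0
        self.locked = False

    def activity(self, now: float) -> None:
        if not self.locked:            # input on a locked kiosk must not unlock it
            self.last_activity = now

    def tick(self, now: float) -> bool:
        """Called on a timer, and eagerly on tab-blur / network change / OS lock."""
        if not self.locked and now - self.last_activity >= self.timeout_s:
            self.locked = True         # caller pauses playback, blurs metadata
        return self.locked

    def unlock_with_step_up(self, now: float, step_up_ok: bool) -> bool:
        """Resume only after a successful step-up (key touch, PIN, NFC tap)."""
        if self.locked and step_up_ok:
            self.locked = False
            self.last_activity = now
        return not self.locked
```

The countdown warning in the UI is simply `timeout_s - (now - last_activity)` while unlocked.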

Acceptance Criteria
Seat-Level Activity Tracking without Personal Accounts
"As an artist manager, I want analytics by PressRoom seat and original recipient so that I can assess interest without requiring individuals to sign up."
Description

Capture per-seat analytics for views, playbacks, downloads, and link interactions using pseudonymous seat IDs bound to device-scoped passes. Attribute activity to the original recipient for each expiring review link while preserving seat granularity in PressRoom Mode. Provide exportable audit trails and aggregate stats without collecting personal data, with configurable retention windows and region-aware consent banners for compliance.

Acceptance Criteria
PressRoom Session Admin Controls and Audit
"As a PR lead, I want centralized controls and audit logs for PressRoom sessions so that I can manage access quickly and prove chain-of-custody if needed."
Description

Offer an admin console to create, configure, and monitor PressRoom sessions: define seats, assign assets, set time windows, choose step-up policies and idle timeouts, and manage hardware key enrollment. Display real-time status for each seat and allow immediate revoke or extend actions. Provide downloadable, tamper-evident audit logs and webhooks for seat activation, step-up events, and suspicious activity alerts.

Acceptance Criteria
Watermarked, Expiring Review Links for PressRoom
"As a campaign manager, I want review links that work in shared press rooms while preserving per-recipient watermarks so that leaks can be traced and links expire on schedule."
Description

Ensure review links used in PressRoom Mode retain per-recipient watermarking and expiration while supporting multiple seats on the same link. Generate per-seat stream tokens that embed recipient and seat IDs to maintain analytics continuity and enable seat-level revocation. Disable raw downloads by default and permit controlled downloads only with configured step-up, honoring IndieVault’s release-ready folder structure.

Acceptance Criteria

KeyRoll

Schedule pass access to auto-expire at campaign milestones (e.g., mix v3, press day). Prompt reviewers to re-establish with the same passkey, keeping the URL and analytics stable while pruning stale access. Benefit: fresh, controlled access throughout the campaign lifecycle.

Requirements

Milestone-based Access Scheduling
"As an indie label manager, I want to schedule access windows tied to campaign milestones so that review links automatically expire and refresh without manual intervention."
Description

Enable campaign owners to define KeyRoll schedules tied to explicit milestones (e.g., “Mix v3 ready,” “Press day,” “Embargo lift”) with start/end times and optional trigger conditions (such as asset version reaching Approved). At each milestone, passes automatically shift access windows: access is revoked at expiry and re-enabled at the next window without changing the review URL. Includes timezone-aware scheduling, calendar-style visualization, per-recipient overrides, and audit logs. Integrates with IndieVault release folders and asset status to ensure the correct window activates in sync with campaign progress. Outcome: link lifecycle is automated, reducing manual coordination and errors.

Acceptance Criteria
Passkey Re-Establishment Flow
"As a press reviewer, I want to re-establish my access using the same passkey on the same URL so that I can continue reviewing without losing context or bookmarks."
Description

Provide an expiry-aware flow that prompts recipients to re-establish access using the same passkey on the same URL when a KeyRoll occurs. Support friction-light reactivation via passkey entry or a one-time email code/magic link, with rate limiting and lockout after repeated failures. Preserve recipient identity and maintain the URL to avoid breaking bookmarks or threads. Integrates with existing authentication, email service, and audit logging. Outcome: fresh access with minimal friction, no new links created, and continuity for recipients.

Acceptance Criteria
Recipient Analytics Continuity
"As an artist manager, I want analytics to persist across access rolls so that I can track each recipient’s engagement over the whole campaign."
Description

Ensure per-recipient analytics (opens, plays, downloads, notes/feedback) persist across KeyRoll transitions. Maintain a stable recipient ID that aggregates activity over time while segmenting metrics by roll window and asset version. Merge sessions across devices using the passkey signature and verified email when available. Update dashboards and exports to display both cumulative and by-roll views. Outcome: uninterrupted measurement and reliable attribution across the campaign lifecycle.

Acceptance Criteria
Watermarking and Version Pinning on Roll
"As a mastering engineer, I want the rolled link to point to the correct asset versions with consistent watermarking so that reviewers always hear the intended mix and leaks remain traceable."
Description

On each roll event, automatically update the asset set shown at the stable review URL to the designated milestone version (e.g., mix v3) and regenerate watermarks if required while preserving per-recipient watermark seeds for traceability. Pin stems, artwork, and documents to milestone-specific versions so reviewers always see the intended materials. Include preflight checks for missing assets and a safe fallback if the target version isn’t ready. Integrates with IndieVault’s asset versioning and watermarking services. Outcome: correct content at every phase with leak accountability maintained.

Acceptance Criteria
Access Pruning and Notifications
"As a PR coordinator, I want automated notifications around expiry and re-establishment so that recipients are nudged at the right times and deadlines aren’t missed."
Description

Automate communication and access hygiene around KeyRoll. Send configurable notifications (email/Slack) to recipients N days before expiry, at expiry, and when re-establishment is available; send owners a digest of upcoming rolls and lapsed access. Include customizable templates, per-campaign settings, unsubscribe handling, and delivery/engagement logs. Provide one-click “Re-establish access” CTAs tied to the recipient and pass, with throttling to prevent spam. Outcome: timely nudges, fewer missed deadlines, and cleaner access lists.

Acceptance Criteria
Admin Controls and Overrides
"As a campaign admin, I want manual override controls for passes so that I can quickly respond to schedule changes or access issues."
Description

Offer a dashboard for campaign admins to monitor passes, upcoming rolls, and recipient status, with the ability to extend or shorten windows, force-roll immediately, revoke access, pause schedules, and perform bulk operations (filter/search, select, export). Every action records an audit trail with actor, timestamp, and reason. Integrates with permissions/roles and campaign management. Outcome: rapid operational control when plans change without compromising traceability.

Acceptance Criteria
Security and Abuse Prevention
"As a security-conscious manager, I want safeguards against link sharing and brute-force attempts so that campaign content remains protected."
Description

Harden the KeyRoll flow against misuse. Enforce authenticator policies for passkeys (such as requiring user verification), optional 2FA/email code on re-establish, device/browser fingerprinting to limit concurrent sessions, and detection of suspicious sharing patterns. Provide IP allow/deny lists, rate limiting, and CAPTCHA after repeated failures. Align with IndieVault’s security services and GDPR-compliant data retention. Outcome: minimized risk of leaks and brute-force attempts while keeping reactivation friction appropriate.

Acceptance Criteria

Product Ideas

Innovative concepts that could enhance this product's value proposition.

Preflight Proof Pack

One-click preflight validates ISRCs, loudness, filenames, and credits. Exports a checksummed, version-locked folder plus a stamped proof PDF for distributors.

Idea

Magic Inbox Links

Generate scoped upload links that drop files into the correct project and version. Auto-rename to schema, virus-scan, and request missing metadata—no account required.

Idea

SplitLock Consents

Collect legally binding approvals tied to each file’s cryptographic hash. Auto-block release until all split holders sign; export a timestamped audit trail.

Idea

Milestone Auto-Pay

Route approvals into automatic milestone payouts via Stripe or PayPal. Hold funds in escrow until approval or deadline, then release with itemized receipts.

Idea

Leakprint Forensics

Embed inaudible, per-recipient audio watermarks and art fingerprints. If a leak surfaces, auto-match the source and revoke remaining links instantly.

Idea

Play Heatmap Analytics

See per-recipient playback heatmaps showing skips, replays, and drop-off moments. Trigger smart nudges for unopened links or unfinished listens.

Idea

Passkey Review Doors

Let reviewers use passkeys for passwordless, device-bound access. Preserve per-recipient analytics and revoke access without changing the link.

Idea

Press Coverage

Imagined press coverage for this groundbreaking product concept.
